
Does anybody else find it peculiar that the majority of these articles about AI say things like "of course I don't doubt that AI will lead to major discoveries", and then go on to explain how it isn't useful in any field whatsoever?

Where are the AI-driven breakthroughs? Or even the AI-driven incremental improvements? Do they exist anywhere? Or are we just using AI to remix existing general knowledge, while making no progress of any sort in any field using it?



There is rarely a constructive discussion around the term “AI”. You can’t say anything useful about what it might lead to or how useful it might be, because it is purely a marketing term without a specific meaning (and neither of the two words it abbreviates has one either).

Interesting discussions tend to avoid “AI” in favour of specific terms such as “ML”, “LLM”, “GAN”, “stable diffusion”, “chatbot”, “image generation”. These terms refer to specific tech and applications of that tech, and allow one to argue about specific consequences for science or society (use of ML in biotech vs. proliferation of chatbots).

However, certain sub-industries prefer “AI” precisely because it’s so vague, offers seemingly unlimited potential (please give us more investment money/stonks go up), and creates a certain vibe of a conscious being, which is useful when pretending not to be working around IP laws while creating tools based on data obtained without relevant licensing agreements (cf. the countless “humans have the freedom to read, therefore it’s unfair to restrict the uses of a software tool” fallacies, often perpetuated even by seemingly technically literate people, in pretty much every relevant forum thread).


It's not even just that certain sub-industries prefer "AI"; it's the umbrella term a company can use in marketing for virtually any automated process that produces a seemingly subjective result.

Case in point:

For a decade, implementing cameras meant the development, testing, and tuning of Auto Exposure, Auto Focus, and Auto White-Balance ("AAA") engines, as well as image post-processing.

These engines ran on an Image Signal Processor (ISP) or sometimes on the camera sensor itself; engineering teams did extensive work building these models and optimizing them to run at low latency on an ISP.
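To make "AAA engine" concrete, here is a toy sketch of the feedback-loop shape an auto-exposure engine runs each frame. It is purely illustrative: real AE engines use metering zones, scene classification, flicker avoidance, and so on, and every name and constant below is made up, not any vendor's actual algorithm.

    # Toy auto-exposure loop: nudge exposure toward a mid-grey target.
    # Not any vendor's actual algorithm; all constants are invented.
    TARGET_LUMA = 0.18  # mid-grey target, a common AE convention

    def ae_step(mean_luma, exposure, gain=0.5, min_exp=1e-4, max_exp=1/30.0):
        """One frame: scale exposure by the relative brightness error."""
        error = (TARGET_LUMA - mean_luma) / TARGET_LUMA
        new_exp = exposure * (1.0 + gain * error)
        return min(max(new_exp, min_exp), max_exp)  # clamp to sensor limits

    exposure = 1 / 120.0
    for mean_luma in (0.05, 0.09, 0.14, 0.17):  # simulated per-frame stats
        exposure = ae_step(mean_luma, exposure)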

Suddenly AI came along and all of these features became "AI features". One company started with "AI assisted Camera" to promote the process everyone had been doing all along. So everyone had to introduce AI, without any disruptive change in the process.


I remember something similar from when the term "cloud" came up. It is still someone else's server or datacenter with tooling.


Yeah, can they just stop coining terms to refer to old, pre-existing things? I still hate the term "cloud".


My favorite description is “The cloud is a computer you don’t own in Reston, VA”


> One company started with "AI assisted Camera" to promote the process everyone was doing all-along.

Before the "AI" labeling more advanced image processing was often called "computational photography". At least in the world of smartphone cameras. Because they have tiny image sensors and lenses smartphone cameras need to do a lot of work to get a decent image out of any environment that doesn't have perfect lighting. The processing is more traditional computer vision.

There are now legitimate generative AI features being peddled, like editing people out of (or into) photos. But most of the image processing pipelines haven't fundamentally changed, yet now have AI labeling to please marketers and upper management.


I agree it's completely meaningless. At this point I think marketing would label a toilet fill valve as "AI".


Well the "smart toilet" is definitely a thing you can buy today:

> The integration of Artificial Intelligence (AI) and the Internet of Things (IoT) in bathroom fixtures, particularly toilets, is shaping the future of hygiene, convenience, and sustainability.


While automated AI measurement of the chemical makeup of... human effluent could be helpful for tracking health trends, I fear it'd also come with built-in integrations for Instagram and TikTok.


Good news! The integration will be used to customize your feed by recommending foods and medicine that you might enjoy.

The future is allowing advertisers to bid on specific proteins and triglyceride chains detected by the smart toilet.


Or (and I can actually see this happening) an Amazon integration to reorder bathroom tissue and bowl cleaner.



Also, the strong predictions about AI are using a vague term because the tech often doesn't exist yet. There isn't a chatbot right now that I feel confident can out-perform me at systems design, but I'm pretty certain something that can is coming. Odds are also good that in 2-4 years there will be a new hotness to replace LLMs that is much more functional (maybe MLLMs, maybe called something else). We can start to predict and respond to its potential even though it doesn't exist yet; it just takes a little extrapolating. But it doesn't have a name yet.

Which is to agree - obviously if people are talking about "AI" they don't want to talk about something that exists right this second. If they did it'd be better to use a precise word.


Totally agree.

Also the term 'LLM' is more about the mechanics of the thing than what the user gets. LLM is the technology, but some sort of automated artificial intelligence is what people are generally buying.

As an example, when people use ChatGPT and get an image back, most don't think "oh, so the LLM called out to a diffusion API?" - they just think "oh, ChatGPT can give me an image if I give it a prompt".

Although again, the term is entirely abused to the extent that washing machines can contain 'AI'. Although just because a term is abused doesn't necessarily mean it's not useful - everything had "Cloud" in it 10 years ago but that term was still useful enough to stick around.

Perhaps there is an issue that AI can mean lots of things, but I don't know yet of another term that encapsulates the last 5 years' advancements in automated intelligence, and what that technology is likely to be moving forwards, which people will readily recognise. Perhaps we need a new word, but AI has stuck and there isn't a good alternative yet, so it is probably here to stay for a bit!


> Although again, the term is entirely abused to the extent that washing machines can contain 'AI'.

I remember when the exciting term in appliances was "fuzzy logic". As a technology it was just adding some sensors beyond simple timers and thermostats to control things like run time and temperatures of automated washers.


> As an example, when people use ChatGPT and get an image back, most don't think "oh, so the LLM called out to a diffusion API?" - they just think "oh, ChatGPT can give me an image if I give it a prompt".

Note: your first part skipped entirely the process of obtaining the data for, and the training of, both of the above, which is a crucial part at least on par with which component called which API.

I don’t think it’s unreasonable to expect people to build an intuition for it, though. It’s healthy when underlying processes are understood to at least a few layers of abstraction, especially in potentially problematic or morally grey areas.

As an analogy to your example, you could say that when people drink milk they usually don’t think “oh, so this cow was forced to reproduce 123 times, with all her children taken away and murdered, so that she makes more milk” and simply think “the cow gave this milk”.

However, as with milk, so with ML tech: it is important to realize that 1) people do indeed learn the former and build the relevant intuitions, and 2) the industry relies on information asymmetry and mass ignorance of these matters (and we all know that information asymmetry is the #1 enemy of a free market working as designed).


I like your analogy, but I don't think people even go "the cow gave this milk" - I think they tend to just go "mmmm yummy yummy milk"

People usually see the product rather than the process.


I disagree; people are not mindless consumers. People are interested in what comes from where, and given the knowledge, most of them would make the ethically good choice when they can afford it.

What breaks this (and prevents the free market from working as intended) is the lack of said knowledge, i.e., information asymmetry. In the case of the milk example, I think it mostly comes down to two factors:

1) Lack of this awareness is financially beneficial to the respective industries. (No need for conspiracy theories, but they sure as hell are not going to engage in educational campaigns on these topics, or facilitate any such efforts in any way, even by not suing them. This is further complicated by the fact that many of these industries are integral parts of many local economies, which makes it in the interest of the respective governments to follow suit.)

2) The facts can be so harsh that it can be difficult to internalise and accept reality.

Even still, many people do learn and internalise this (ever noticed the popularity of oat and almond milk in coffee shops, even at higher prices?), so I think it is not unreasonable to expect this in certain ML-based industries, either.


I think this seems to be veering off into some specific ethical viewpoint about milk supply chains and production rather than an analogy for product vs process.

But personally I can't see how the popularity of oat and almond milk in independent coffee shops tells us that much about how people perceive the inner workings of ChatGPT.


There is a difference between descriptive and prescriptive statements. I don’t disagree that the status quo is that many people may not care, but I believe it is reasonable to expect them to care, just like they do in other domains (e.g., the milk analogy). The information asymmetry can (and maybe should) be fought back against.


This article is all about PINNs being overblown. I think it’s a reasonable take. I’ve seen way too many people put all their eggs in the PINNs basket when there are plenty of options out there. Those options just don’t include a ticket to the hype train.


I think AI is a useful term which usually means a neural network architecture but without specifying the exact architecture.

I think Machine Learning doesn't mean this as a term, as it can also refer to linear regression, non-linear optimisation, decision trees, Bayesian networks, etc.

That's not saying that AI isn't abused as a term - but I do think a more general term to describe the latest 5 years advancements in neural networks to solve problems is useful. Particularly as it's not obvious which model architectures would apply to which fields without more work (or even if novel architectures will be required for frontier science applications).


This is incorrect. Machine Learning is a term that refers to numerical as opposed to symbolic AI. ML is a subset of AI as is Symbolic / Logic / Rule based AI (think expert systems). These are all well established terms in the field. Neural Networks include deep learning and LLMs. Most AI has gone the way of ML lately because of the massive numerical processing capabilities available to those techniques.

AI is not remotely limited to Neural Networks.


The field of neural network research is known as Deep Learning.


Eh, not really. All Deep Learning involves neural networks, but not all neural networks are part of deep learning. To be fair, any modern network is also effectively built by deep learning, but your statement as such is inaccurate.


> There is rarely a constructive discussion around the term “AI”.

You hit the nail on the head there. AI, in its broadest terms, exists at the epicenter of hype and emotions.


AlphaFold is real.

To the extent you care about chess and Go as human activities, progress there is real.

There are some other scientific computing problems where AI or neural-network-based methods do appear to be at least part of the actual state of the art (weather forecasting, certain single-molecule quantum chemistry simulations).

I would like the hype of the kind described in the article to be punctured, but this is hard to do if critics make strong absolute claims ("aren't useful in any field whatsoever") which are easily disproven. It hurts credibility.


I've never seen an AI critic say AI isn't "useful in any field whatsoever". Especially one that is known as an expert in and a critic of the field. There may be names that aren't coming to mind because that stance would reduce their specific credibility. Do you have some in mind?


The post that I replied to?


The post you replied to was asking for examples based on the general critical discussions they have seen.

And no offense to the GP, but they clearly aren't an expert in the field or they wouldn't be asking.

Probably should have replied directly to the post you replied to as much as yours. Was just pointing out that "not useful in any field whatsoever" is not something I've seen from anyone in the field. Even the article doesn't say that.


Even more relevant, AlphaEvolve is real.

Could easily be brick 1 of self-improvement and the start of the banana zone.


> "the start of the banana zone"

What does this mean? Is it some slang for exponential growth, or is it a reference to something like the "paperclip maximizer"?


It's slang for the J part of the exponential curve. Didn't expect that to be a problem here, sorry.


There are a bunch of new phrases people have been using around AI topics, so it can be hard to tell what exactly is being talked about: "Roko's Basilisk", the "lottery ticket hypothesis" (not quite remembering the phrasing on this one), "the bitter lesson", "paperclip maximizing", "stochastic parrot", etc. Thank you for clarifying. I was kind of hoping there was a fun blog or story with a "banana zone".


I’m with gthompson512

Sounds like a routine Bill Hicks might have come up with if he were still with us.

He hated obfuscation.


It’s why it keeps looking exactly like the NFT and crypto hype cycles to me: yes, the technology has legitimate uses, but the promises of groundbreaking use cases that will change the world are obviously not materialising, and to anyone that understands the tech, it’s clear it can’t deliver them.

It’s people making money off hype until it dies, then moving on to the next scam-with-some-use.


We already have breakthroughs. Benchmark results that were unheard of before ML.

Language translation alone got so much better, along with voice synthesis and voice transcription.

All my meetings are now searchable, and I can ask 'AI' to summarize them in a relatively accurate way that was impossible before.

Alphafold made a breakthrough in protein folding.

Image and video generation can now do unbelievable things.

Real-time voice communication with computers.

Our internal company search suddenly became useful.

I have zero use cases for NFTs and crypto. I have tons of use cases for ML.


> Alphafold made a breakthrough in protein folding.

Sort of. Alphafold is a prediction tool, or, alternatively framed, a hypothesis generation tool. Then you run an experiment to compare.

It doesn't represent a scientific theory, not in the sense that humans use them. Its hypothesis accuracy is nowhere near high enough to qualify as akin to the typical scientific testing paradigm. It's an incredibly powerful and efficient tool in certain contexts, used correctly in the discovery phase, but not in the understanding or confirmation phases.

It's also got the usual pitfalls with differentiable neural nets. E.g. you flip one amino acid and it doesn't really provide a proper measure of impact.

Ultimately, one major prediction breakthrough is not that crazy. If we compare it to e.g. Random Forest and similar models, their impact on science is vastly greater.


We already have a precise and accurate theory for protein folding. What we don’t have is the computational power to do true precise simulations at a scale and speed we’d like.

In many respects, a huge, tangled, barely documented code base of quantum shortcuts (err, perturbative methods) written by inexperienced grad students isn’t that much more or less intelligible than an AI model learning those same methods.


What "precise and accurate theory for protein folding" exists?

Nobody has been able to demonstrate convincingly that any simulation or theory method can reliably predict the folding trajectory of anything but the simplest peptides.


> What "precise and accurate theory for protein folding" exists?

It’s called Quantum Mechanics.

> Nobody has been able to demonstrate convincingly that any simulation or theory method can reliably predict the folding trajectory of anything but the simplest peptides.

No, we don’t have simplified models or specialized theories that reduce the computational complexity enough to efficiently solve the QM or even molecular-dynamics systems needed to predict protein folding for more than the simplest peptides.
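To spell out the gap between having the theory and being able to compute it, here is an illustrative math sketch: the governing equation is compact, but the object it describes grows exponentially with system size.

    % The many-body Schrodinger equation -- the "precise and accurate theory":
    i\hbar \frac{\partial}{\partial t} \Psi(\mathbf{r}_1,\dots,\mathbf{r}_N,t)
      = \hat{H}\,\Psi(\mathbf{r}_1,\dots,\mathbf{r}_N,t)
    % \Psi is a function of the joint coordinates of all N particles, so a
    % naive grid with M points per axis needs M^{3N} values -- exponential
    % in particle count, hence intractable for a protein plus solvent
    % without drastic approximations.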

Granted, it’s common to mix things up and say that not having a computationally tractable model means we don’t have a precise and accurate theory of PF. Something like [0] resulting in an accurate, precise, and fast theory of protein folding would be incredibly valuable. This, however, may not be possible outside specific cases. Though I believe AlphaFold indicates otherwise, as it appears life has evolved various building blocks which make a simpler physics of PF tractable to evolutionary processes.

Quantum computing, however, could change that [1]. If practical QC is feasible, that is, which is beginning to look more and more likely. Some say QC is already proven and just needs to be scaled up.

0: https://en.m.wikipedia.org/wiki/Folding_funnel

1: https://www.nature.com/articles/s41534-021-00368-4


I don't think anybody is 100% certain that doing a full quantum simulation of a protein (in a box of water) would recapitulate the dynamics of protein folding. It seems like a totally reasonable claim, but one that could not really be evaluated.

If you have a paper that makes a strong argument around this claim, I'd love to see it. BTW, regarding folding funnels: I learned protein folding from Ken Dill as a grad student in biophysics at UCSF, and used to run MD simulations of nucleic acids and proteins. I don't think anybody in the field wants to waste time worrying about running full quantum simulations of protein folding; it would be prohibitively expensive even with far better QM simulators than we have now (i.e., n-squared scaling or better).

Also, the article you linked: they are trying to find the optimal structure (called the fold by some in the field). That's not protein folding - it's ground-state de novo structure prediction. Protein folding is the process by which an unfolded protein adopts the structured state, and most proteins don't actually adopt some single static structure but tend to interconvert between several different substructures that are all kinetically accessible.


> I don't think anybody is 100% certain that doing a full quantum simulation of a protein (in a box of water) would recapitulate the dynamics of protein folding.

True; until it's experimentally shown, there's still some possibility QM wouldn't suffice. Though I've not read anything that'd give reason to believe QM couldn't capture the dynamic behavior of folding, unlike the uncertainty around dark matter, quantum supremacy, or quantum gravity.

Though it might be practically impossible to set up a simulation using QM that could faithfully capture true protein folding. That seems more likely.

> It seems like a totally reasonable claim, but one that could not really be evaluated.

If quantum supremacy holds, my hunch is that it would be feasible to evaluate it one day.

The paper I linked was mostly to showcase that there seem to be approaches utilizing quantum computing to speed up solving QM simulations. We're still in the early days of quantum computing algorithms, and it's unclear what's possible yet. Tackling a dynamic system like an unfolded protein folding is certainly a ways away, though!

> Also the article you linked- they are trying to find the optimal structure (called fold by some in the field). That's not protein folding- it's ground state de novo structure prediction.

Thanks! I haven't worked on quantum chemistry for many years, and only tangentially on protein folding, so it's useful to know the terminology. The metastable states and that whole possibility of folding states / pathways / etc. fascinate me as potentially being an emergent property of protein folding physics and biology as we know it.


> It’s called Quantum Mechanics.

Nobody is suggesting anything entails a possible violation of quantum mechanics, so yes, obviously any system under inquiry is assumed to abide by QM.


On one hand, maybe it's good to have better searchable records, even if it's hard to quantify the benefit.

On the other hand, now all your meetings produce reams of computer searchable records subject to discovery in civil and criminal litigation, possibly leading to far worse liability than would have been possible in a mostly email based business.


Maybe don't do crimes?

If the technology provides a major boost in productivity to ethical teams, and is useless for unethical teams, that kinda seems like a good thing.


I guarantee you that every company you have ever worked for has committed crimes and incurred various forms of potential civil liability.


I doubt that, but even if you're right it doesn't change my point: if you're not willing to stop doing that, you'll be less productive than the firms that are willing to operate legally and ethically.

And if in some industry it's really not possible to do business without committing crimes, then let's reform the criminal code to something reasonable.


That is absolutely correct.

The problem is that the hype assumes that all of this is a baseline (or even below the baseline), while there are no signs that it can go much further in the near future – and in some cases, it's actually cutting-edge research. This leads to a pushback that may be disproportionate.


I'm sure there are many people out there who could say that they hardly use AI but that crypto has made them lots of money.

At the end of the day, searching work documents and talking with computers is only desirable inasmuch as it is economically profitable. Crypto is responsible for a lot of people getting wealthy. Was a lot of this wealth obtained on sketchy grounds? Probably, but the same could be said of AI (for example, the recent sale of Windsurf for an obscene amount of money).


Crypto is not making people rich; it is about moving money from Person A to Person B.

And sure, everyone who got money from others by gambling is biased. Fine with me.

But in comparison to crypto, people around me actually use AI/ML (most of them).


Every activity that is making people rich is by definition moving money from Person A to Person B.


Let's be nitpicky :)

I said move money from A to B, which implies that nothing else is happening. Otherwise it would be an exchange.

Sooo I would say my wording was right?! :)


Crypto is not creating anything. It's a scheme based on gambling. Person A gets rich. Person B loses money. It does not really contribute to anything.


> is only desirable inasmuch as they are economically profitable.

The big difference is that they are profitable because they create value, whereas cryptocurrencies are a zero-sum game between participants. (It is in fact a negative-sum game, since some people are getting paid to make the thing work so that others can gamble on the system.)


Which AI program do you use for live video meeting translation?


MS Teams, Google Meet (whatever they use, probably Gemini), and Whisper.


You have to understand, real AI will never exist. AI is that which a machine can't do yet. Once it can do it, it's engineering.


I don’t remember NFTs and crypto ever helping me draft an email, writing my meeting minutes for me, or allowing me to easily search information previously locked in various documents.

I think there is this weird take amongst some on HN where LLMs are either completely revolutionary and making breakthroughs, or utterly useless.

The truth is that they are useful already as a productivity tool.


Having tried to use various tools - in those specific examples - I found them either pointless or actively harmful.

Writing emails - once I knew what I wanted to convey, the rest was so trivial as to not matter, and any LLM tooling just got in the way of actually expressing it as I ended up trying to tweak the junk it was producing.

Meeting minutes - I have yet to see one that didn’t miss something important while creating a lot of junk that no one ever read.

And while I’m sure someone somewhere has had luck with the document search/extract stuff, my experience has been that the hard part was understanding something, and then finding it in the doc or being reminded of it was easy. If someone didn’t understand something, the AI summary or search was useless because they didn’t know what they were seeing.

I’ve also seen a LOT of both junior and senior people end up in a haze because they couldn’t figure out what was going on - and the AI tooling just allowed them to produce more junk that didn’t make any sense, rather than engage their brain. Which causes more junk for everyone to get overwhelmed with.

IMO, a lot of the ‘productivity’ isn’t actual productivity; it’s just semi-coherent noise.


+1 for all of the above.

> Meeting minutes - I have yet to see one that didn’t miss something important while creating a lot of junk that no one ever read.

Especially that one. In the beginning it seemed OK for very structured meetings with a low number of participants, but once meetings got more crowded, included non-native speakers, and ran longer than 30 minutes (like workshops), it went bad.


> Writing emails - once I knew what I wanted to convey, the rest was so trivial as to not matter, and any LLM tooling just got in the way of actually expressing it as I ended up trying to tweak the junk it was producing.

+1. LLMs will help you produce the "filler" nobody wants to read anyway.


That's ok, the recipient can use an LLM to summarize it.

In the end, we'll all read and write tight little bullet points, with the LLM text on the wire functioning as the world's least efficient communication protocol.


> wrote my meeting minutes

Why is this such a poster child for LLMs? Everyone always leads with this.

How boring are these meetings, and do people actually review these notes? I never saw anyone reading meeting minutes or even mentioning them.

Why is this use case even mentioned in LLM ads?


I think the same thing every time. I've never had anyone read my meeting notes and they're better off in some sort of work order system anyways.

All I'm hearing is an appeal to making the workplace more isolating. Don't talk to each other, just talk to the machine that might summarize it wrong.


Because meeting minutes are hard and annoying to do, and LLMs are good at them.

To be blunt, I think most HNers are young software developers who never attend any meetings of significance and don’t have to deal with many different topics, so they fail to see the usefulness because they are not in a position to understand it.

The tells are everywhere, like people mentioning work orders, which is something extremely operational. If nothing messy and complicated is discussed in the meetings you attend, it’s no surprise you don’t get why minutes are useful or where the value is. It doesn’t mean there is no value.


OK, I'll take your word for it that people read meeting notes.


Meeting notes are not only there to be read. Their usefulness is that they are a trace of what was said and decided that everyone in the meeting agreed upon, which is extremely important as soon as things get political.


Sounds like you work at a fun place :). Watch every word you utter in case it gets used against you.


Indeed, it seems doubtful that an org whose meetings are so structureless that it struggles to write minutes is capable of having meetings for which minutes serve any purpose beyond covering ass.


I think imagination may be the reason for this. Enthusiasts have kept that first wave of amazement at what AI is able to do, and find it easier to anticipate where this could lead. The pessimists, on the other hand, weren't impressed with its capabilities in the first place - or were, and then became disillusioned by something it couldn't do for them. It's naturally easier to look ahead from the optimistic standpoint.

There's also the other category who are terrified about the consequences for their lives and jobs, and who are driven in a very human way to rubbish the tech to convince themselves it's doomed to failure.

The optimists are right of course. A nascent technology at this scale and with this kind of promise, whose development is spurring a race between nation states, isn't going to fizzle out or plateau, however much its current iterations may come short of any particular person's expectations.


> It's naturally easier to look ahead from the optimistic standpoint.

It is similarly easy to look ahead from a pessimist standpoint (e.g. how will this bubble collapse, and who will pay the bill for the hype?). The problem, rather, is that the overhyped optimistic standpoint is much more encouraged by society (and of course by the marketing).

> There's also the other category who are terrified about the consequences for their lives and jobs, and who are driven in a very human way to rubbish the tech to convince themselves it's doomed to failure.

There is also a third type who are not terrified of AI, but of the bad decisions managers (will) make because of all this AI craze.


No, I meant it's easier for optimists to look ahead at the possibilities inherent in the tech itself, which isn't true of pessimists, who - as you show - see instead the pattern of failed techs in it, whether that pattern matches AI or not.

If you can see the promise, you can see a gap to close between current capability and accomplished product. The question is then whether there's some barrier in that gap to make the accomplished product impossible forever. Pessimists tend to have given up on the tech already, so to them any talk about closing that gap is idle daydreaming or hype.


I don’t remember when they wrote half my code in a fraction of the time for my high paid SWE job.

I do have a bad memory from all the weed though, so who knows


Exactly this. What we expect from them is our speculation. In reality nobody knows the future and there's no way to know the future.


Wow, I didn't expect this to be downvoted. I guess there are people who know the future.


For now, the reasoning abilities of the best and largest models are somewhat on par with those of a human crackpot with an internet connection, that misunderstands some wild fact or theory and starts to speculate dumb and ridiculous "discoveries". So the real world application to scientific thought is low, because science does not lack imbeciles.

But of course, models always improve and they never grow tired (if enough VC money is available), and even an idiot can stumble upon low hanging fruits overlooked by the brightest minds. This tireless ability to do systematic or brute-force reasoning about non-frontier subjects is bound to produce some useful results like those you mention.

The comparison with a pure financial swindle and speculative mania like NFTs is of course an exaggeration.


I see myself in these words:

>that misunderstands some wild fact or theory and starts to speculate dumb and ridiculous "discoveries"

>even an idiot can stumble upon low hanging fruits overlooked by the brightest minds.


I want an idiot to stumble on the low hanging fruits of my meeting minutes.


The hype surrounding them is not as a personal assistant, and tbh a lot of these use cases already have existing methods that work just fine. There are already ways to find key information in files, and speedy meeting minutes are really just a template away.


Absolutely not true.

I was never before able to get meeting transcription of that quality that cheap. I have followed dictation software for over a decade, and thanks to ML the open source software is suddenly a lot better than ever before.

Our internal company search with state-of-the-art search indexes and search software was always shit. Now I ask an agent about a product standard and it just finds it.

Image generation never existed before.

Building a chatbot that actually does what you expect, and does more than answer the same 10 theoretical questions about its features, was hard and never really good; now it just works.

I'm also not aware of any earlier software rewriting or even writing documents for me, structuring them, etc.


A lot of these issues you have had are simply user error or not using the right tool for the job.


I work for one very big software company.

If this was 'a simple user error' or 'not using the right tool for the job', then it was an error by smart people, and it still got fixed by using AI/ML in an instant.

With this, my argument still stands, even if for a different reason, which I personally doubt.


Often big companies are the least efficient. And big companies can still make mistakes or have very inefficient processes. There was already a perfectly simple solution to the issue that could have been utilised prior to this, and it is overall still the most efficient solution.

Also, everyone does dumb things, even smart people do dumb things. I do research in a field that many outsiders would say you must be smart to do (not my view) and every single one of us does dumb shit daily. Anyone who thinks they don't isn't as smart as they think they are.


Well, LLMs are the right tool for the job. They just work.

I mean if you are going to deny their usefulness in the face of plenty of people telling you they actually help, it’s going to be impossible to have a discussion.


They can be useful; however, for admin tasks there are plenty of valid alternatives that really take no longer time-wise, so why bother using all that computing power?

They don't just work, though; they are not foolproof and definitely require double-checking.


> valid alternatives that really take no longer time wise

That’s not my experience.

We use them more and more at my job. It was already great for most office tasks, including brainstorming simple things, but now suppliers are starting to sell us agents which pretty much just work, and honestly there are a ton of things for which LLMs seem really suited.

CMDB queries? Annoying SAP requests for which you have to delve through dozens of menus? The stupid interface of my travel management and expense software? Please give me a chatbot for all of that which can actually decipher what I’m trying to do. These are hours of productivity unlocked.

We are also starting to deploy more and more RAG on select core business datasets, and it’s more useful than even I anticipated, and I’m already convinced. You ask, you get a brief answer and the documents back. This used to be either hours of delving through search results or emails with experts.

As imperfect as they are now, the potential value of LLMs is already tremendous.


How do you check the accuracy of these? You cited brainstorming as an example of something they are great at, but obviously experts are experts for a reason.

My issue here is that a lot of this is solved by good practice. For example, travel management and expenses have been solved: company credit card. I don't need one slightly better piece of software to manage one terrible piece of software to solve an issue that has a solution.


> How do you check accuracy of these?

Because LLMs send you back links to the tools and you still get the usual confirmation process when you do things.

The main issue never was knowing what to do but actually getting the tools to do it. LLMs are extremely good at turning messy stuff into tool manipulation, especially where there never was an API available in the first place.

It’s not a question of practices. Anyone who has ever worked for a very large company knows that systems are complicated by need and everything moves at the speed of a freighter ship if you want to make significant changes.

Of course we need one slightly better piece of software to manage terrible pieces of software. There is insane value there. This is a major issue for most companies. I have seen millions spent on getting better dashboards from SAP which paid for themselves in actual savings.


You know what they were doing and what tools they were using… how?


OK, take transcription: they were trying to use free-as-in-cost tools instead of software that works efficiently and has been effective for decades now.


I've been following transcription software for two decades.

You assume too much...


Microsoft is absolutely selling them as personal assistants, and already selling a lot. I think HNers, being mostly software developers, live in a bubble when it comes to the reality of what LLMs are actually used for.

Speedy minutes are absolutely not a template away. Anyone who has ever had to write minutes for a complicated meeting knows it’s hard and requires a lot of back and forth for everyone to agree about what was said and decided.

Now you just turn on Copilot and you get both a transcript and an adequate basis for good minutes. Bonus point: it’s made by a machine, so no one complains it has bias.

Some people here are blind to how useful that is.


There are so many tasks in the world that

1. Involve a computer

2. Do not require incredible intelligence

3. Involve the messiness of the real world enough that you can't write exact code to do it without it being insanely fragile

LLMs suddenly start to tackle these, and tackle them kind of all at once. Additionally they are "programmed" in just English and so you don't need a specialist to do something like change the tone of the summary or format, you just write what you want.

Assuming the models never get any smarter or even cheaper, and all we get is neater integrations, I still think this is all huge.


Do you really believe the outlay in terms of computing power is worth it to change the tone of an email? If it never gets better, this is a vast waste of an enormous amount of resources.


That's not what I've talked about them being for, but regardless, it depends on the impact, surely. If it can show you how someone may misunderstand your point and either help correct it or just show the problem, then yes, that can easily be worth spending a few cycles on. The additional energy cost of further back-and-forths caused by a misunderstanding could very easily be higher. At full whack, my GPU draws something like 10x what my monitor does, so fixing something quickly and automatically can easily use less power than doing it manually.

Again though, that's not at all what I've talked about.


This is a business practice issue and staff issue, not a meeting minutes issue. I have meetings daily, and have never had this issue. You make it clear what is decided during the meeting, give anyone a chance to query or question, then no one can argue.


[flagged]


You would be wrong. I am actually quite perky, I just don't suffer foolish admin tasks easily. I only have meetings with a goal (not just for the sake of it), and then I simply make sure it is clear what the solution is, no matter whose idea it was. I don't care about being right or wrong in a meeting, I care that we have a useful outcome and it isn't an hour wasted. Having a meeting whereby the outcome of a meeting is unclear is a complete waste of time, and is not solved by tech, it is solved by how your meetings are managed.


> I simply make sure it is clear what the solution is

People simply humour you to get around your personality.


You can have your opinion on that.


What I see in LLMs at this point is simplified input and output with reduced barriers to entry. So applications could become more widespread.

Now that I think of it, maybe this AI era is not electricity, but rather the GUI - like the time when Jobs (or whoever) figured out and adopted the modern GUI on computers, allowing more widespread use of computers.


Do they only have reduced barriers to entry if you aren't fussed about the accuracy of the output? If you care that everything works correctly and is factually correct, do you not need the same competency as just doing the task by hand?


It's a good analogy, because the key development does seem to have been the interface. Instead of wrapping it up as text autocomplete (a la Google search), OpenAI wrapped it up as an IM client, and we were off to the races.


> to anyone that understands the tech it can’t.

This is a ridiculous take that makes me think you might not "understand the tech" as much as you think you do.

Is AI useful today? That depends on the exact use case, but overall it seems pretty clear the hype is currently greater than the use. But sometimes I feel like everyone forgets that ChatGPT isn’t even 3 years old; 6 years ago we were stuck with GPT-2, whose most impressive feat was writing a nonsense poem about a unicorn, and AlphaGo is not even 10 years old.

If you can’t see the trend and just think that what we have today is the best we will ever achieve, and thus that the tech can’t do anything useful, you are getting blinded by contrarianism.


If there is a single objective right answer, the model should output a probability of 1 for it, and 0 for everything else. E.g., if I ask "Is a sphere a curved object?", the one and only answer is "100% yes", not "I am 99% sure it is" (and once in a while actually saying it isn't).

This is pretty much impossible to achieve with current architectures (which aren't all that different from those of old, just bigger). If they did achieve it, they'd be woefully overfitted. They can't be made reliable. Anyone who understands the tech knows this.
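A minimal numpy illustration of that point: with finite logits, a softmax output can approach 1 but never reach it, so some probability mass always leaks to wrong answers.

    # Softmax over finite logits is always strictly between 0 and 1.
    import numpy as np

    def softmax(z):
        e = np.exp(z - z.max())  # shift by the max for numerical stability
        return e / e.sum()

    p = softmax(np.array([12.0, 0.0, -3.0]))  # strongly favours answer 0
    print(p[0])           # ~0.999994: close to 1, but never exactly 1
    print((p > 0).all())  # True: every answer keeps nonzero probability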


> Anyone who understands the tech does know this

Yes, and this does not mean the technology can never be useful.

I work every day with people who have false beliefs about tech, I have a friend who until recently thought there were rivers on the moon, some believe climate change is a hoax, and I often forget things people told me and they have to tell me again.

Are humans not useful at anything?


If someone responds with "no" to that question even once, then yes, I don't consider them trustworthy for anything involving complex thought.


I think people are mostly bad at value judgements, and AI is no exception.

What they naively wished the future was like: Flying cars. What they actually got (and is way more useful but a lot less flashy): Cheap solar energy.


> What they naively wished the future was like: Flying cars.

This future is already here:

We have flying cars: they are called "helicopters" (see also https://xkcd.com/1623/).


Oh, they're not even close to cars in availability, much harder to operate, much more expensive, and they tend to fall out of the sky.

Thank you for providing an example that directly maps to the usefulness of ANNs in most research, though.


Helicopters don't just fall out of the sky, at least not any more than planes fall out of the sky; they can autorotate to the ground without engine power.


You are absolutely correct about autorotation and helicopters not falling out of the sky. There is one nuance that the rotor blades still need to be able to rotate for this, and a failed gearbox can prevent that. Anecdotally that feels like the most common cause when I read about another crash in the North Sea.


True, but even lowering collective and letting autorotation take over, you still hit the ground REALLY hard - enough to sustain injuries to your back and neck in some cases.

Glide ratios in GA (like high-wing Cessnas) are much more forgiving, assuming you can find a place to put down.


AI looks exactly like NFTs to you? I don't understand what you mean by that. AI already has tons more uses.


One is a technical advance as important as anything in human history, realizing a dream most informed thinkers thought would remain science fiction long past our lifetimes, upending our understanding of intelligence, computation, language, knowledge, evolution, prediction, psychology... before we even mention practical applications.

The other is worse than nothing.


> the promises of groundbreaking use cases that will change the world are obviously not materialising and to anyone that understands the tech it can’t.

In what world is this obviously not materializing? Plenty of people use GenAI for coding, with some claiming we're approaching the level of GenAI being able to automate vast portions of a developer's job.

Do you think this is wrong and the people saying this (some of them very experienced developers) are simply mistaken or lying?

Or do you think it's not a big deal?


LLMs have blown crypto and NFTs off the map, at least for normies like me. I wonder what will blow LLMs off the map?


Code assistants alone prove this to be false.


I'd be interested in reading some more from the people you're referring to when talking about experts who understand the field. At least to the extent I've followed the discussion, even the top experts are all over the place when it comes to the future of AI.

As a counterpoint: Geoffrey Hinton. You could say he's gone off the deep end on a tangent, but I definitely don't think his incentive is to make money off of hype. Then there's Yann LeCun saying AI "could actually save humanity from extinction". [0]

If these guys are just out-of-touch talking heads, who are the new guard people should read up on?

[0]: https://www.theguardian.com/technology/2024/dec/27/godfather...


> It’s why it keeps looking exactly like NFT’s and crypto hype cycles to me: Yes the technology has legitimate uses

AI has legitimate uses; cryptocurrency only has “regulation evasion”, and NFTs have literally no use at all, though.

But that's very true that the AI ecosystem is crowded with grifters who feed on baseless hype, and many of them actually come from cryptocurrencies.


Speaking out against the hype is frowned upon. I'm sure even this very measured article about "I tried it and it didn't work for me" will draw negative attention from people who think AI is the Second Coming.

It's also very hard to prove a negative. If you predict "AI will never do anything of value" people will point to literally any result to prove you wrong. TFA does a good job debunking some recent hype, but the author cannot possibly wade through every hyperbolic paper in every field to demonstrate the claims are overblown.


> then go on to explain how they aren't useful in any field whatsoever

> Where are the AI-driven breakthroughs

> are we just using AI to remix existing general knowledge, while making no progress of any sort in any field using it?

The obvious example of a highly significant AI-driven breakthrough is Alphafold [1]. It has already had a large impact on biotech, helping with drug discovery, computational biology, protein engineering...

[1] https://blog.google/technology/ai/google-deepmind-isomorphic...


I'm personally waiting for the other shoe to drop here. I suspect that, since nature begins with an existing protein and modifies it slightly, AlphaFold is crazy overfitted to the training data. Furthermore, the enormous success of AlphaFold means that the number of people doing protein structure solving has likely crashed.

So not only are we using an overfitted model that probably can't handle truly novel proteins, we have stopped actually doing the research to notice when this happens. Pretty bad.


> that probably can't handle truly novel proteins

AlphaFold is able to predict novel folds, see https://www.nature.com/articles/s42003-022-03357-1


Could once is not the same as can predictably, first of all.

Second, how can they possibly know this fold isn't actually in the ENTIRE PDB? I doubt very much that they can. The PDB is enormous.


> Could once is not the same as can predictably first of all

I was merely addressing your claim in the previous post.

> Second, how can they possibly know this fold isn't actually in the ENTIRE PDB? I doubt very much that the can. The PDB is enormous.

There are well-established fold classification databases (such as SCOP and CATH) where you can query newly solved structures using several structural comparison algorithms (DALI, TM-align, etc).

The Protein Data Bank might be enormous, but there is an enormous amount of structural redundancy as well, from which reduced datasets can be (and are) derived. Protein structure is, after all, much more conserved than sequence, mainly due to the physicochemical principles that govern folding and stability.


Why do you expect this, or is this just an "I need to find a reason to hate AI" thing?


It's like every time AI "can code" and then falls on its face when presented with super basic problems that are outside its training data. Why would protein folding be different?


> AlphaEvolve’s procedure found an algorithm to multiply 4x4 complex-valued matrices using 48 scalar multiplications, improving upon Strassen’s 1969 algorithm that was previously known as the best in this setting. This finding demonstrates a significant advance over our previous work, AlphaTensor, which specialized in matrix multiplication algorithms, and for 4x4 matrices, only found improvements for binary arithmetic.

> To investigate AlphaEvolve’s breadth, we applied the system to over 50 open problems in mathematical analysis, geometry, combinatorics and number theory. The system’s flexibility enabled us to set up most experiments in a matter of hours. In roughly 75% of cases, it rediscovered state-of-the-art solutions, to the best of our knowledge.

> And in 20% of cases, AlphaEvolve improved the previously best known solutions, making progress on the corresponding open problems. For example, it advanced the kissing number problem. This geometric challenge has fascinated mathematicians for over 300 years and concerns the maximum number of non-overlapping spheres that touch a common unit sphere. AlphaEvolve discovered a configuration of 593 outer spheres and established a new lower bound in 11 dimensions.

https://storage.googleapis.com/deepmind-media/DeepMind.com/B...

(this is an LLM-driven pipeline)
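For context on the 48-multiplication figure: Strassen multiplies 2x2 (block) matrices with 7 multiplications instead of 8, so applied recursively to a 4x4 matrix it costs 7 x 7 = 49 scalar multiplications; a 48-multiplication scheme beats that by one, and when used recursively (over the complex numbers, per the quote above) it slightly lowers the exponent:

    % Strassen, applied recursively:
    O(n^{\log_2 7}) \approx O(n^{2.807})
    % A 4x4 scheme with 48 scalar multiplications, applied recursively:
    O(n^{\log_4 48}) \approx O(n^{2.793})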


That's less LLM and more three projects by the DeepMind team.

And it's far from commercial availability.


Well, to me personally, it at least proves something that's long been touted as impossible: that the current architecture can in fact do better than all humans at novel tasks, even if it needs a crutch at the moment.

An LLM-based system now holds the SOTA approach on several math problems; how crazy is that? I wasn't convinced before, but now I guess it won't be many decades before we view making new scientific advances as being as viable as winning against Stockfish.


Yeah, but those rely on setting up the evolve function. And they aren't guaranteed to be better than humans. They might find you an improvement, but they aren't guaranteed to, as shown here [1] (green means better, red means worse, gray means same).

[1] https://youtu.be/sGCmu7YKgPA?t=480


Last I checked, humans weren't guaranteed to do anything either.


Sure, but they have one thing that makes them better than CPUs, they consume more ;)


> Where are the AI-driven breakthroughs? Or even the AI-driven incremental improvements?

literally last week

https://deepmind.google/discover/blog/alphaevolve-a-gemini-p...


But it only seems to be labs and companies that also have a vested interest in selling it as a product that are able to achieve these breakthroughs. Which is a little suspect, right?


Too tinfoil hat. Google is perfectly happy to spend billions dogfooding their own TPUs and not give the leading edge to the public.


I’m not saying they’re phoney - just that we need to take this stuff with a big pinch of salt.

The Microsoft paper around the quantum “breakthrough” is in a different field, but maybe a good example of why we need to be a little more cautious about research-as-marketing.


Yeah, except that we see "breakthrough" stuff like this all the time, and it almost always quickly turns out to be fraudulent in some way. How many times are we to be fooled before we catch on and stop believing press releases with massive selection bias?


If they didn’t say that the rah-rah-AI crowd would come for them with torches and pitchforks. It’s a ward against that, nothing more.


Similar to the way many Trump supporters, when daring to criticize him, feel the need to assert that they still love him and would vote for him again.

(See, eg. r/LeopardsAteMyFace for examples. It’s fascinating.)


Or any time one dares to criticize Israel for their recent contributions to peace on Earth (wink wink) -- it has to be prefaced with "Let me say that I'm the biggest defender of the Jews and fight against anti-Semitism".

It's moot.


An example of an "AI" incremental improvement would be Oxford Nanopore sequencing. They extrude DNA through a nanopore, measure the current, and decode the bases using recurrent neural networks.

They exist all over science, but they are just one method among many, and they do not really drive hypotheses or interpretations (even now).
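As a toy illustration of that decoding step (real basecallers are far larger and paired with CTC decoding; all names and sizes here are invented for the sketch):

    # Toy RNN "basecaller": raw current samples -> per-step base log-probs.
    # Illustrative only; not any real basecaller's architecture.
    import torch
    import torch.nn as nn

    class ToyBasecaller(nn.Module):
        def __init__(self, hidden=64):
            super().__init__()
            self.rnn = nn.GRU(input_size=1, hidden_size=hidden,
                              batch_first=True, bidirectional=True)
            self.head = nn.Linear(2 * hidden, 5)  # A, C, G, T + CTC blank

        def forward(self, current):  # current: (batch, time, 1)
            h, _ = self.rnn(current)
            return self.head(h).log_softmax(dim=-1)

    signal = torch.randn(1, 1000, 1)     # fake current trace
    log_probs = ToyBasecaller()(signal)  # shape (1, 1000, 5)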


Some new-ish maths has been discovered. It's up to you whether this is valid or impressive enough, but I think it's significant for things to come: https://youtu.be/sGCmu7YKgPA?si=EG9i0xGHhDu1Tb0O


Personally, I have been very pleased with the results despite the limitations.

Like many (I suspect), I have had several users comment that the AI processes I have defined have made meaningful impacts on their daily lives - often saving them double-digit hours of effort per week. Progress.


The article itself lists as successful, even breakthrough, applications of AI: protein folding, weather forecasting, and drug discovery.


> say things like "of course I don't doubt that AI will lead to major discoveries", and then go on to explain how they aren't useful in any field whatsoever?

This is "paying the toll", otherwise one will be accused of being a "luddite."


> Where are the AI-driven breakthroughs?

The only thing that seems to live up to the hype is AlphaFold, which predicts protein folding based on amino acid sequences, and of which people say that it actually makes their work significantly easier.

But, disclaimer, this is only from second-hand knowledge, I'm not working in the field.


This is another dimension of the problem - what's even considered AI? AlphaFold is a very specialized model, and I feel the AI boom is driven by the hypothesis that general models eventually outperform specialized ones given enough size/data/whatever.


While I hate the apparent renaming of everything ML to "AI", things like AlphaFold would be "narrow AI".

As to the common idea of having to wait for general AI (AGI) to bring the gains, I have been quite sure since the start of the recent AI hype cycle that narrow AI will have silently transformed much of the world before AGI even hits the town.


In my head, I just substitute "AI" with "machine learning" or "statistics".

> and I feel the AI boom is driven by hypothesis that general models eventually outperform specialized ones given enough size/data/whatever.

I think in the sciences, I'd generally put my money on the specialized models.

I hope that the hype around AI makes it easier (by providing tooling, platforms, better algorithms, educational materials etc.) to train specialized models.

Kind of a trickle-down of hype money :-)


I expect the opposite - hardware/compute is going to be locked up in AGI quests unless the bubble pops, and then it gets discounted.


Depending on your definition of AI, a pipeline for drug repurposing my team used was able to identify a therapeutic for a rare disease that was beating the state of the art in every test they threw at it, eventually being given orphan drug designation by the FDA. I doubt this would have happened without machine learning or AI or whatever you want to call it.

I'm also against "agents as scientists" as a concept for numerous reasons, but deep learning etc. has led, or is leading, to breakthroughs.


> Where are the AI-driven breakthroughs?

Define breakthrough. When is the improvement big enough to count as one?

Define AI. Are you talking about modern LLM, or is old school ML also in that question?

I mean, Google's AI company had quite an impact with AlphaFold and other projects.

> Or are we just using AI to remix existing general knowledge

Is remixing bad? Isn't much of science today "just" remixing with slight improvements? I mean, there is a reason why we have theoretical and practical scientists. Doing boring lab work and accidentally discovering something exciting is not the only way science happens. Analysing data and remixing information, building new theories, is also important.

And don't forget, we don't have AGI yet. Whatever AI is doing today is limited by what humans use it for. Another question is whether LLMs are already so normalized that we no longer see them as anything special when they are used somewhere. So we might not even notice if AI has a significant impact on a breakthrough.


There's hype, and then there's the understanding that you never use version 1.0 of a product. In this sense, AI as we know it is barely at alpha. I think the authors you're referring to understand that the marketing around AI overstates its usefulness, but they are hopeful for a future where AI can be helpful.


I suspect that people saying this are avoiding making broad conclusions based only on the AI tools that exist right now. So they leave a lot of room for the next versions to improve.

Maybe too much room, but it's hard to predict whether AI tools will overcome their limitations in the near future.


New numerical computing algorithms are being developed with AI assistance, which probably would not have been discovered otherwise. There was an article here a few days ago about one of those. It's incremental but it's not nothing.


They do mention that it has been somewhat useful in protein folding.

> Or are we just using AI to remix existing general knowledge, while making no progress of any sort in any field using it?

AIUI they are generally not talking about LLMs here.


It's the new "I love my Tesla, but here are 15 reasons why it's broken" - if you don't provide platitudes, the mob provides pitchforks.


> Or even the AI-driven incremental improvements?

You have no idea what you are talking about. Every day plenty of research is published that uses AI to help achieve scientific goals.

Now, LLMs are another matter, and we are probably a ways off from reaping benefits beyond day-to-day programming/writing productivity.


"AI is a competent specialist in all fields except in mine."


Protein structure prediction is a pretty useful tool. There are various 'foundation' models in biology now that are quite useful. I don't know if you want to count those as AI or ML.

If you're looking for breakthroughs due to AI, I don't think they're going to be obviously attributable to AI. Focusing on the biology-related foundation models: the ability to more quickly search through the space of sequences->structure, drugs, and predicted cell states (1) will certainly lead to some things being discovered/rejected/validated faster.

I heard about this Vevo company recently, so it's on my mind. Biology experiments are hard and time-consuming, and often hard to replicate exactly across labs. A lot of the data is of the form:

a) start in cell state X (usually a 'healthy' normal state) under conditions Y (environmental variables like temperature, concentrations, etc.)

b) introduce some environmental perturbation P, like a drug/chemical at some concentration, or maybe multiple perturbations at once

c) record the trajectory or steady/final state of the cell.

This data is largely hidden within published papers in non-standardized formats. Vevo is attempting to collate all of those results, with considerations for reproducibility, into a standard, easy-to-use format. The idea is that you can gradually build up a sort of virtual input-output (causal!) model that you can throw ideas for interventions against and see what it thinks would happen. Cells and biology are obviously enormously complicated, so it's certainly not going to be 100% accurate/predictive, but my experience with network models in quantitative biology plus their proclaimed results make me pretty confident it's a sound approach.
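
Just to make "standard format" concrete, here's a minimal sketch of what one collated record could look like (every field name here is my own invention, not Vevo's actual schema):

    from dataclasses import dataclass

    @dataclass
    class PerturbationRecord:
        # One collated experiment: state + conditions + perturbation -> outcome.
        # All field names are illustrative, not any company's real schema.
        cell_state: str        # e.g. "healthy hepatocyte"
        conditions: dict       # temperature, concentrations, ...
        perturbations: list    # compounds with doses
        outcome: dict          # trajectory summary or final-state readout
        source: str            # provenance, for reproducibility checks

    record = PerturbationRecord(
        cell_state="healthy hepatocyte",
        conditions={"temp_C": 37, "glucose_mM": 5.5},
        perturbations=[{"compound": "drug_X", "dose_uM": 10}],
        outcome={"final_state": "stressed", "viability": 0.71},
        source="doi:10.xxxx/placeholder",
    )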

This approach is clearly "AI"-driven (maybe I would call it ML), and if their claims are anything close to reality, this is an incredibly powerful tool for all of academia and industry. You can reduce the search space enormously and target your experiments to cover the areas where the virtual model doesn't seem so good, continuously improving it in a sort of crowd-sourced "active learning" manner. A continuously improving, experimentally backed causal (w.r.t. perturbations) model of cells has so many applications. Again, I don't think this will directly lead to a breakthrough, but it can certainly make breakthroughs more likely and come faster.

There are many other examples like this that are some combination of: 1) collating + filtering + refining existing data into an accessible, easily queryable format

2) combining data + some physically motivated modeling to yield predictions where there is no data

3) a targeted, informed feedback loop via experiments, simulations, or modeling to improve the whole system where it's known to be weak or where more accuracy is desired (see the sketch after this list).
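
Point 3 is essentially active learning. A toy sketch of the selection step, with the ensemble and its predict() method as stand-ins for whatever model is actually used:

    import numpy as np

    def pick_next_experiments(ensemble, candidates, budget=10):
        # Ensemble disagreement as a cheap uncertainty proxy: run every
        # model on every candidate condition and rank by spread.
        # `ensemble` and its .predict() are placeholders, not a real API.
        preds = np.stack([m.predict(candidates) for m in ensemble])
        uncertainty = preds.std(axis=0)
        # Spend the experimental budget where the model is weakest.
        return np.argsort(uncertainty)[-budget:]

The chosen experiments get run in the lab, the results get folded back into the training set, and the loop repeats.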

Assuming it all stays relatively open, that's undeniably a very powerful model for more effective science.

And that's just one approach. In physics, ML can be used for finding and characterizing phase transitions, as one example. In the world of soft matter/biophysics simulation, here are a few ways ML is used:

a) more efficient generation of configurations (Noé-style generative models; see the sketch after this list). This is a big one, albeit still in its early stages. Historically (simplifying), generating independent samples in the right regions of phase space meant integrating the system for long enough to hit those regions multiple times. So regions of the space separated by rare transitions took a loooong time to hit repeatedly, and the solution was simply longer simulations. Now, under some restrictions, you can leverage and augment existing data (including simulation data) to directly generate independent samples in the regions of interest. This is a really big deal.

b) more efficient, complex and accurate NN force-fields. Better incorporation of many-body and even quantum effects.

c) more complex simulation approaches via improved pipelines like automated parameterization and discovery of collective variables to more efficiently explore relevant configuration space.
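
For (a), the core trick behind Boltzmann-generator-style models is that samples from a trained generative model can be importance-reweighted to the exact Boltzmann distribution. A schematic version (numpy only; the energies and the generator's log-density are assumed to come from elsewhere):

    import numpy as np

    def boltzmann_weights(energies, log_q, kT=1.0):
        # Importance weights taking generator samples to the Boltzmann
        # distribution: p(x) ~ exp(-U(x)/kT), generator density q(x).
        # `energies` = U(x_i); `log_q` = generator log-density at x_i
        # (exact for normalizing flows). Both are plain numpy arrays.
        log_w = -energies / kT - log_q
        log_w -= log_w.max()          # numerical stability
        w = np.exp(log_w)
        return w / w.sum()            # self-normalized weights

Any observable is then estimated as a weighted average over the generated samples, with no long trajectory needed to cross rare-event barriers.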

Again, this is tooling that improves the process of discovery & investigation and thus directly contributes to science. Maybe not in the way you're picturing, but it is happening right now.

1) Vevo: https://www.tahoebio.ai/


From the article:

> Besides protein folding, the canonical example of a scientific breakthrough from AI, a few examples of scientific progress from AI include:

> Weather forecasting, where AI forecasts have had up to 20% higher accuracy (though still lower resolution) compared to traditional physics-based forecasts.

> Drug discovery, where preliminary data suggests that AI-discovered drugs have been more successful in Phase I (but not Phase II) clinical trials. If the trend holds, this would imply a nearly twofold increase in end-to-end drug approval rates.



