
It’s why it keeps looking exactly like the NFT and crypto hype cycles to me: yes, the technology has legitimate uses, but the promises of groundbreaking use cases that will change the world are obviously not materialising, and to anyone who understands the tech it’s clear they can’t.

It’s people making money off hype until it dies, then moving on to the next scam-with-some-use.




We already have breakthroughs. Benchmark results that were unheard of before ML.

Language translation alone got so much better, as did voice synthesis and voice transcription.

All my meetings are now searchable, and I can ask 'AI' to summarize them with a level of accuracy that was impossible before.

AlphaFold made a breakthrough in protein folding.

Image and Video generation can now do unbelievable things.

Real-time voice communication with a computer.

Our internal company search suddenly became useful.

I have zero use cases for NFTs and crypto. I have tons of use cases for ML.


> AlphaFold made a breakthrough in protein folding.

Sort of. Alphafold is a prediction tool, or, alternatively framed, a hypothesis generation tool. Then you run an experiment to compare.

It doesn't represent a scientific theory, not in the sense that humans use them. Its hypothesis accuracy rate is nowhere near what would qualify it as akin to the typical scientific testing paradigm. It's an incredibly powerful and efficient tool in certain contexts when used correctly in the discovery phase, but not in the understanding or confirmation phase.

It also has the usual pitfalls of differentiable neural nets. E.g. you flip one amino acid and it doesn't really provide a proper measure of the impact.

Ultimately, one major prediction breakthrough is not that crazy. Compare it to e.g. Random Forests and similar models: their impact on science is infinitely greater.


We already have a precise and accurate theory for protein folding. What we don’t have is the computational power to do true precise simulations at a scale and speed we’d like.

In many respects, a huge, tangled, barely documented code base of quantum shortcuts, err, perturbative methods, written by inexperienced grad students, isn’t that much more or less intelligible than an AI model learning those same methods.


What "precise and accurate theory for protein folding" exists?

Nobody has been able to demonstrate convincingly that any simulation or theory method can reliably predict the folding trajectory of anything but the simplest peptides.


> What "precise and accurate theory for protein folding" exists?

It’s called Quantum Mechanics.

> Nobody has been able to demonstrate convincingly that any simulation or theory method can reliably predict the folding trajectory of anything but the simplest peptides.

No, what we don’t have are simplified models or specialized theories that reduce the computational complexity enough to efficiently solve the QM (or even molecular dynamics) systems needed to predict protein folding for more than the simplest peptides.

Granted, it’s common to mix things up and say that not having computationally tractable models means we don’t have a precise and accurate theory of PF. Something like [0] resulting in an accurate, precise, and fast theory of protein folding would be incredibly valuable. This, however, may not be possible outside specific cases. Though I believe AlphaFold indicates otherwise, as it appears life has evolved various building blocks which make a simpler physics of PF tractable to evolutionary processes.

Quantum computing, however, could change that [1]. If practical QC is feasible, that is, which is beginning to look more and more likely. Some say QC is already proven and just needs to be scaled up.

[0]: https://en.m.wikipedia.org/wiki/Folding_funnel
[1]: https://www.nature.com/articles/s41534-021-00368-4


I don't think anybody is 100% certain that doing a full quantum simulation of a protein (in a box of water) would recapitulate the dynamics of protein folding. It seems like a totally reasonable claim, but one that could not really be evaluated.

If you have a paper that makes a strong argument around this claim, I'd love to see it. BTW, regarding folding funnels: I learned protein folding from Ken Dill as a grad student in biophysics at UCSF, and used to run MD simulations of nucleic acids and proteins. I don't think anybody in the field wants to waste time worrying about running full quantum simulations of protein folding; it would be prohibitively expensive even with far better QM simulators than we have now (i.e., n² or better).

Also, the article you linked: they are trying to find the optimal structure (called the fold by some in the field). That's not protein folding; it's ground-state de novo structure prediction. Protein folding is the process by which an unfolded protein adopts the structured state, and most proteins don't actually adopt a single static structure but tend to interconvert between several different substructures that are all kinetically accessible.


> I don't think anybody is 100% certain that doing a full quantum simulation of a protein (in a box of water) would recapitulate the dynamics of protein folding.

True, until it's experimentally shown there's still some possibility QM wouldn't suffice. Though I've not read anything that would give reason to believe QM couldn't capture the dynamic behavior of folding, unlike the uncertainties around dark matter, quantum supremacy, or quantum gravity.

Though it might be practically impossible to set up a simulation using QM which could faithfully capture true protein folding. That seems more likely.

> It seems like a totally reasonable claim, but one that could not really be evaluated.

If quantum supremacy holds, my hunch is that it will be feasible to evaluate it one day.

The paper I linked was mostly to showcase that there seem to be approaches utilizing quantum computing to speed up solving QM simulations. We're still in the early days of quantum computing algorithms, and it's unclear what's possible yet. Tackling a dynamic system like an unfolded protein folding is certainly a ways away, though!

> Also, the article you linked: they are trying to find the optimal structure (called the fold by some in the field). That's not protein folding; it's ground-state de novo structure prediction.

Thanks! I haven't worked on quantum chemistry for many years, and only tangentially on protein folding, so it's useful to know the terminology. The metastable states and that whole possibility of folding states / pathways / etc. fascinate me as potentially being an emergent property of protein folding physics and biology as we know it.


> It’s called Quantum Mechanics.

Nobody is suggesting anything entails a possible violation of quantum mechanics, so yes, obviously any system under inquiry is assumed to abide by QM.


On the one hand, maybe it's good to have better searchable records, even if it's hard to quantify the benefit.

On the other hand, now all your meetings produce reams of computer searchable records subject to discovery in civil and criminal litigation, possibly leading to far worse liability than would have been possible in a mostly email based business.


Maybe don't do crimes?

If the technology provides a major boost in productivity to ethical teams, and is useless for unethical teams, that kinda seems like a good thing.


I guarantee you that every company you have ever worked for has committed crimes and incurred various forms of potential civil liability.


I doubt that, but even if you're right it doesn't change my point: if you're not willing to stop doing that, you'll be less productive than the firms that are willing to operate legally and ethically.

And if in some industry it's really not possible to do business without committing crimes, then let's reform the criminal code to something reasonable.


That is absolutely correct.

The problem is that the hype assumes that all of this is a baseline (or even below the baseline), while there are no signs that it can go much further in the near future – and in some cases, it's actually cutting-edge research. This leads to a pushback that may be disproportionate.


I'm sure there's many people out there who could say that they hardly use AI but that crypto has made them lots of money.

At the end of the day, searching work documents and talking with computers are only desirable inasmuch as they are economically profitable. Crypto, at the end of the day, is responsible for a lot of people getting wealthy. Was a lot of this wealth obtained on sketchy grounds? Probably, but the same could be said of AI (for example, the recent sale of Windsurf for an obscene amount of money).


Crypto is not making people rich; it is about moving money from Person A to Person B.

And sure, everyone who got money from others by gambling is biased. Fine with me.

But in comparison to crypto, people around me actually use AI/ML (most of them).


Every activity that is making people rich is by definition moving money from Person A to Person B.


Let's be nitpicky :)

I said move money from A to B, which implies that nothing else is happening. Otherwise it would be an exchange.

Sooo I would say my wording was right?! :)


Crypto is not creating anything. It's a scheme based on gambling. Person A gets rich; Person B loses money. It does not really contribute to anything.


> are only desirable inasmuch as they are economically profitable.

The big difference is that they are profitable because they create value, whereas cryptocurrencies are a zero-sum game between participants. (It is in fact a negative-sum game, since some people are getting paid to make the thing work so that others can gamble on the system.)


Which AI program do you use for live video-meeting translation?


MS Teams, Google Meet (whatever they use, probably Gemini), and Whisper.


You have to understand, real AI will never exist. AI is that which a machine can't do yet. Once it can do it, it's engineering.


I don’t remember when NFTs and cryptos helped me draft an email, wrote my meeting minutes for me, or allowed me to easily search information previously locked in various documents.

I think there is this weird take amongst some on HN where LLMs are either completely revolutionary and making breakthroughs, or utterly useless.

The truth is that they are useful already as a productivity tool.


Having tried to use various tools for those specific examples, I found them either pointless or actively harmful.

Writing emails - once I knew what I wanted to convey, the rest was so trivial as to not matter, and any LLM tooling just got in the way of actually expressing it as I ended up trying to tweak the junk it was producing.

Meeting minutes - I have yet to see one that didn’t miss something important while creating a lot of junk that no one ever read.

And while I’m sure someone somewhere has had luck with the document search/extract stuff, my experience has been that the hard part was understanding something, and then finding it in the doc or being reminded of it was easy. If someone didn’t understand something, the AI summary or search was useless because they didn’t know what they were seeing.

I’ve also seen a LOT of both junior and senior people end up in a haze because they couldn’t figure out what was going on - and the AI tooling just allowed them to produce more junk that didn’t make any sense, rather than engaging their brain. That creates more junk for everyone to get overwhelmed with.

IMO, a lot of the ‘productivity’ isn’t actually productivity; it’s just semi-coherent noise.


+1 for all of the above.

> Meeting minutes - I have yet to see one that didn’t miss something important while creating a lot of junk that no one ever read.

Especially that one. In the beginning, for very structured meetings with a low number of participants, it seemed to be OK, but once they got more crowded, with maybe not all participants being native speakers, and ran longer than 30 minutes (like workshops), it went bad.


> Writing emails - once I knew what I wanted to convey, the rest was so trivial as to not matter, and any LLM tooling just got in the way of actually expressing it as I ended up trying to tweak the junk it was producing.

+1. LLMs will help you produce the "filler" nobody wants to read anyway.


That's ok, the recipient can use an LLM to summarize it.

In the end, we'll all read and write tight little bullet points, with the LLM text on the wire functioning as the world's least efficient communication protocol.


> wrote my meeting minutes

Why is this such a poster child for LLMs? Everyone always leads with this.

How boring are these meetings, and do people actually review these notes? I've never seen anyone reading meeting minutes or even mentioning them.

Why is this use case even mentioned in LLM ads?


I think the same thing every time. I've never had anyone read my meeting notes, and they'd be better off in some sort of work-order system anyway.

All I'm hearing is an appeal to making the workplace more isolating. Don't talk to each other; just talk to the machine that might summarize it wrong.


Because meeting minutes are hard and annoying to do, and LLMs are good at them.

To be blunt, I think most HNers are young software developers who never attend any meetings of significance and don’t have to deal with many different topics, so they fail to see the usefulness because they are not in a position to understand it.

The tells are everywhere, like people mentioning work orders, which is something extremely operational. If nothing messy and complicated is discussed in the meetings you attend, it’s no surprise you don’t get why minutes are useful or where the value is. It doesn’t mean there is no value.


OK, I'll take your word for it that people read meeting notes.


Meeting notes are not only there to be read. Their usefulness is that they are a trace of what was said and decided that everyone in the meeting agreed upon, which is extremely important as soon as things get political.


Sounds like you work at a fun place :) Watch every word you utter in case it gets used against you.


Indeed, it seems doubtful that an org whose meetings are so structureless that it struggles to write minutes is capable of having meetings for which minutes serve any purpose beyond covering ass.


I think imagination may be the reason for this. Enthusiasts have kept that first wave of amazement at what AI is able to do, and find it easier to anticipate where this could lead. The pessimists, on the other hand, weren't impressed with its capabilities in the first place, or were, and then became disillusioned by something it couldn't do for them. It's naturally easier to look ahead from the optimistic standpoint.

There's also the other category who are terrified about the consequences for their lives and jobs, and who are driven in a very human way to rubbish the tech to convince themselves it's doomed to failure.

The optimists are right of course. A nascent technology at this scale and with this kind of promise, whose development is spurring a race between nation states, isn't going to fizzle out or plateau, however much its current iterations may come short of any particular person's expectations.


> It's naturally easier to look ahead from the optimistic standpoint.

It is similarly easy to look ahead from a pessimist's standpoint (e.g. how will this bubble collapse, and who will pay the bill for the hype?). The problem, rather, is that this overhyped optimistic standpoint is much more encouraged by society (and of course by the marketing).

> There's also the other category who are terrified about the consequences for their lives and jobs, and who are driven in a very human way to rubbish the tech to convince themselves it's doomed to failure.

There is also a third type who are not terrified of AI, but of the bad decisions managers (will) make because of all this AI craze.


No, I meant it's easier for optimists to look ahead at the possibilities inherent in the tech itself, which isn't true of pessimists, who - as you show - see instead the pattern of failed techs in it, whether that pattern matches AI or not.

If you can see the promise, you can see a gap to close between current capability and accomplished product. The question is then whether there's some barrier in that gap to make the accomplished product impossible forever. Pessimists tend to have given up on the tech already, so to them any talk about closing that gap is idle daydreaming or hype.


I don’t remember when they wrote half my code in a fraction of the time for my highly paid SWE job.

I do have a bad memory from all the weed though, so who knows


Exactly this. What we expect from them is our speculation. In reality, nobody knows the future, and there's no way to know it.


Wow, I didn't expect this to be downvoted. I guess there are people who know the future.


For now, the reasoning abilities of the best and largest models are somewhat on par with those of a human crackpot with an internet connection who misunderstands some wild fact or theory and starts speculating about dumb and ridiculous "discoveries". So the real-world applicability to scientific thought is low, because science does not lack imbeciles.

But of course, models always improve and never grow tired (as long as enough VC money is available), and even an idiot can stumble upon low-hanging fruit overlooked by the brightest minds. This tireless ability to do systematic or brute-force reasoning about non-frontier subjects is bound to produce some useful results like those you mention.

The comparison with a pure financial swindle and speculative mania like NFTs is of course an exaggeration.


I see myself in these words:

> who misunderstands some wild fact or theory and starts speculating about dumb and ridiculous "discoveries"

> even an idiot can stumble upon low-hanging fruit overlooked by the brightest minds.


I want an idiot to stumble on the low-hanging fruit of my meeting minutes.


The hype surrounding them is not as a PA, and tbh a lot of these use cases already have existing methods that work just fine. There are already ways to find key information in files, and speedy meeting minutes are really just a template away.


Absolutely not true.

I was never able to get meeting transcription of that quality, that cheaply, before. I've followed dictation software for over a decade, and thanks to ML, the open-source software is suddenly a lot better than ever before.
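For a sense of how low the barrier has gotten, here is a minimal sketch using the open-source openai-whisper package (the file name is made up; quality vs. speed depends on the model size you pick):

    import whisper  # pip install openai-whisper

    # "base" is a small model that runs fine on a laptop;
    # "small"/"medium"/"large" trade speed for accuracy.
    model = whisper.load_model("base")

    # hypothetical recording; returns the full text plus timestamped segments
    result = model.transcribe("meeting_recording.mp3")
    print(result["text"])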

Our internal company search, with state-of-the-art search indexes and search software, was always shit. Now I ask an agent about a product standard and it just finds it.

Image generation at this level never existed before.

Building a chatbot that actually does what you expect, beyond answering the same 10 scripted questions it was built for, was hard and never really good; now it just works.

I'm also not aware of any prior software that rewrote or even wrote documents for me, structured them, etc.


A lot of the issues you have had are simply user error or not using the right tool for the job.


I work for a very big software company.

If this was 'a simple user error' or 'not using the right tool for the job', then it was an error made by smart people, and it still got fixed by using AI/ML in an instant.

With this, my argument still stands, even if for a different reason, which I personally doubt.


Often big companies are the least efficient, and big companies can still make mistakes or have very inefficient processes. There was already a perfectly simple solution to the issue that could have been utilised prior to this, and it is overall still the most efficient solution.

Also, everyone does dumb things; even smart people do dumb things. I do research in a field that many outsiders would say you must be smart to work in (not my view), and every single one of us does dumb shit daily. Anyone who thinks they don't isn't as smart as they think they are.


Well, LLMs are the right tool for the job. They just work.

I mean, if you are going to deny their usefulness in the face of plenty of people telling you they actually help, it’s going to be impossible to have a discussion.


They can be useful; however, for admin tasks there are plenty of valid alternatives that really take no longer time-wise, so why bother using all that computing power?

They don't just work, though; they are not foolproof and definitely require double-checking.


> valid alternatives that really take no longer time-wise

That’s not my experience.

We use them more and more at my job. They were already great for most office tasks, including brainstorming simple things, but now suppliers are starting to sell us agents which pretty much just work, and honestly there are a ton of things for which LLMs seem really suited.

CMDB queries? Annoying SAP requests for which you have to delve through dozens of menus? The stupid interface of my travel management and expense software? Please give me a chatbot for all of that which can actually decipher what I’m trying to do. These are hours of productivity unlocked.

We are also starting to deploy more and more RAG on select core business datasets, and it’s more useful than even I anticipated, and I was already convinced. You ask; you get a brief answer and the documents back. This used to be either hours of delving through search results or emails with experts.
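For anyone wondering what that looks like mechanically, here is a toy sketch of just the retrieval step (TF-IDF standing in for learned embeddings, and the documents are made up; a production setup would use a vector store and an embedding model):

    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.metrics.pairwise import cosine_similarity

    # stand-ins for the core business dataset
    docs = [
        "Product standard X-42 requires double insulation on all connectors.",
        "Travel expenses must be filed within 30 days of the trip.",
        "The CMDB lists every server and its responsible owner.",
    ]

    vectorizer = TfidfVectorizer()
    doc_vectors = vectorizer.fit_transform(docs)

    def retrieve(query, k=1):
        # embed the query in the same space, rank documents by cosine similarity
        scores = cosine_similarity(vectorizer.transform([query]), doc_vectors)[0]
        ranked = sorted(zip(scores, docs), reverse=True)
        return [doc for _, doc in ranked[:k]]

    # the retrieved snippets plus the question are then stuffed into the LLM prompt,
    # which is why you get both a brief answer and the documents back
    print(retrieve("what does the product standard say about insulation?"))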

As imperfect as they are now, the potential value of LLMs is already tremendous.


How do you check the accuracy of these? You cited brainstorming as an example of something they are great at, but obviously experts are experts for a reason.

My issue here is that a lot of this is solved by good practice. For example, travel management and expenses have been solved: a company credit card. I don't need one slightly better piece of software to manage one terrible piece of software to solve an issue that already has a solution.


> How do you check the accuracy of these?

Because LLMs send you back links to the tools, and you still go through the usual confirmation process when you do things.

The main issue was never knowing what to do but actually getting the tools to do it. LLMs are extremely good at turning messy stuff into tool manipulation, especially where there was never an API available in the first place.
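For the curious, this is roughly what that looks like with, e.g., OpenAI-style function calling; the expense tool and its fields are invented for illustration, and the key point is that the model only proposes the call while your code still validates, confirms, and executes it:

    from openai import OpenAI

    client = OpenAI()

    # describe a messy internal system as a callable tool (schema invented here)
    tools = [{
        "type": "function",
        "function": {
            "name": "file_expense",
            "description": "File a travel expense in the expense system",
            "parameters": {
                "type": "object",
                "properties": {
                    "amount_eur": {"type": "number"},
                    "category": {"type": "string"},
                },
                "required": ["amount_eur", "category"],
            },
        },
    }]

    resp = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[{"role": "user", "content": "expense 42 euros for yesterday's taxi"}],
        tools=tools,
    )

    # the model returns a structured call; your code confirms and executes it
    call = resp.choices[0].message.tool_calls[0]
    print(call.function.name, call.function.arguments)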

It’s not a question of practices. Anyone who has ever worked for a very large company knows that systems are complicated by necessity and everything moves at the speed of a freighter ship if you want to make significant changes.

Of course we need one slightly better piece of software to manage terrible pieces of software. There is insane value there. This is a major issue for most companies. I have seen millions spent on getting better dashboards out of SAP which paid for themselves in actual savings.


You know what they were doing and what tools they were using… how?


OK, take transcription: they were trying to use free-as-in-cost tools instead of using software that works efficiently and has been effective for decades now.


I've been following transcription software for two decades.

You assume too much...


Microsoft is absolutely selling them as PAs, and already selling a lot. I think HNers, being mostly software developers, live in a bubble when it comes to the reality of what LLMs are actually used for.

Speedy minutes are absolutely not a template away. Anyone who has ever had to write minutes for a complicated meeting knows it’s hard and requires a lot of back and forth for everyone to agree about what was said and decided.

Now you just turn on Copilot and you get both a transcript and an adequate basis for good minutes. Bonus point: it’s made by a machine, so no one complains it has bias.

Some people here are blind to how useful that is.


There are so many tasks in the world that

1. Involve a computer

2. Do not require incredible intelligence

3. Involve the messiness of the real world enough that you can't write exact code to do it without it being insanely fragile

LLMs suddenly start to tackle these, and tackle them kind of all at once. Additionally, they are "programmed" in plain English, so you don't need a specialist to do something like change the tone of the summary or the format; you just write what you want.
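A hedged sketch of what "programmed in English" means in practice (assuming the OpenAI Python client; the model name and prompts are placeholders):

    from openai import OpenAI

    client = OpenAI()

    def summarize(transcript: str, tone: str = "neutral") -> str:
        # the "program" is the English instruction; changing behaviour = editing prose
        resp = client.chat.completions.create(
            model="gpt-4o-mini",
            messages=[
                {"role": "system",
                 "content": f"Summarize this meeting transcript as bullet points, in a {tone} tone."},
                {"role": "user", "content": transcript},
            ],
        )
        return resp.choices[0].message.content

    # no specialist needed to change the tone or format:
    print(summarize("...raw transcript...", tone="formal"))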

Assuming the models never get any smarter or even cheaper, and all we get is neater integrations, I still think this is all huge.


Do you really believe the outlay in terms of computing power is worth it to change the tone of an email? If it never gets better, this is a vast waste of an enormous amount of resources.


That's not what I've talked about them being for, but regardless, it surely depends on the impact. If it can show you how someone might misunderstand your point and either help correct it or just surface the problem, then yes, that can easily be worth spending a few cycles on. The additional energy cost of further back-and-forths caused by a misunderstanding could very easily be higher. At full whack, my GPU draws something like 10x what my monitor does, so fixing something quickly and automatically can easily use less power than doing it manually.

Again though, that's not at all what I've talked about.


This is a business-practice issue and a staff issue, not a meeting-minutes issue. I have meetings daily and have never had this issue. You make it clear what is decided during the meeting, give everyone a chance to query or question, and then no one can argue.


[flagged]


You would be wrong. I am actually quite perky; I just don't suffer foolish admin tasks easily. I only have meetings with a goal (not just for the sake of it), and then I simply make sure it is clear what the solution is, no matter whose idea it was. I don't care about being right or wrong in a meeting; I care that we have a useful outcome and that it isn't an hour wasted. A meeting whose outcome is unclear is a complete waste of time, and that is not solved by tech; it is solved by how your meetings are managed.


> I simply make sure it is clear what the solution is

People simply humour you to get around your personality.


You can have your opinion on that.


What I see in LLMs at this point is simplified input and output with reduced barriers to entry, so applications could become more widespread.

Now that I think of it, maybe this AI era is not electricity but rather the GUI: like the time when Jobs (or whoever) figured out and adopted the modern GUI on computers, allowing more widespread use of computers.


Do they only have reduced barriers to entry if you aren't fussed about the accuracy of the output? If you care that everything works correctly and is factually correct, don't you need the same competency as doing the task by hand?


It's a good analogy, because the key development does seem to have been the interface. Instead of wrapping it up as text autocomplete (à la Google search), OpenAI wrapped it up as an IM client, and we were off to the races.


> to anyone who understands the tech it’s clear they can’t.

This is a ridiculous take that makes me think you might not "understand the tech" as much as you think you do.

Is AI useful today? That depends on the exact use case, but overall it seems pretty clear the hype is greater than the current usefulness. But sometimes I feel like everyone forgets that ChatGPT isn’t even 3 years old; 6 years ago we were stuck with GPT-2, whose most impressive feat was writing a nonsense poem about a unicorn, and AlphaGo is not even 10 years old.

If you can’t see the trend, and just think that what we have today is the best we will ever achieve and therefore that the tech can’t do anything useful, you are being blinded by contrarianism.


If there is a single objectively right answer, the model should output a probability of 1 for it and 0 for everything else. E.g., if I ask "Is a sphere a curved object?", the one and only answer is "100% yes", not "I am 99% sure it is" (with the model once in a while actually saying it isn't).

This is pretty much impossible to achieve with current architectures (which aren't all that different from those of old, just bigger). If they did achieve it, they'd be woefully overfitted. They can't be made reliable. Anyone who understands the tech does know this.
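To make that concrete: the output layer of these models is a softmax over logits, and for any finite logits every option gets strictly positive probability, so an exact 1/0 answer is unattainable by construction. A toy illustration:

    import math

    def softmax(logits):
        # exponentiate and normalize: every class gets strictly positive mass
        exps = [math.exp(x) for x in logits]
        total = sum(exps)
        return [e / total for e in exps]

    # even with an enormous margin, "yes" never reaches exactly 1.0
    print(softmax([10.0, -10.0]))  # ~[0.999999998, 0.000000002]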


> Anyone who understands the tech does know this

Yes, and this does not mean the technology can never be useful.

I work every day with people who have false beliefs about tech, I have a friend who until recently thought there were rivers on the moon, some believe climate change is a hoax, and I often forget things people told me so they have to tell me again.

Are humans not useful at anything?


If someone responds with "no" to that question even once, then yes, I don't consider them trustworthy for anything involving complex thought.


I think people are mostly bad at value judgements, and AI is no exception.

What they naively wished the future was like: Flying cars. What they actually got (and is way more useful but a lot less flashy): Cheap solar energy.


> What they naively wished the future was like: Flying cars.

This future is already there:

We have flying cars: they are called "helicopters" (see also https://xkcd.com/1623/).


Oh, they're not even close to cars in availability, they're much harder to operate, much more expensive, and they tend to fall out of the sky.

Thank you for providing an example that directly maps to the usefulness of ANNs in most research, though.


Helicopters don't just fall out of the sky, at least no more than planes do; they can autorotate to the ground without engine power.


You are absolutely correct about autorotation and helicopters not falling out of the sky. There is one nuance: the rotor blades still need to be able to rotate for this, and a failed gearbox can prevent that. Anecdotally, that feels like the most common cause when I read about another crash in the North Sea.


True, but even after lowering the collective and letting autorotation take over, you still hit the ground REALLY hard; hard enough to sustain injuries to your back and neck in some cases.

Glide ratios in GA (like high-wing Cessnas) are much more forgiving, assuming you can find a place to put down.


AI looks exactly like NFTs to you? I don't understand what you mean by that. AI already has tons more uses.


One is a technical advance as important as anything in human history, realizing a dream most informed thinkers thought would remain science fiction long past our lifetimes, upending our understanding of intelligence, computation, language, knowledge, evolution, prediction, psychology... before we even mention practical applications.

The other is worse than nothing.


> the promises of groundbreaking use cases that will change the world are obviously not materialising, and to anyone who understands the tech it’s clear they can’t.

In what world is this obviously not materializing? Plenty of people use GenAI for coding, with some claiming we're approaching the point where GenAI can automate vast portions of a developer's job.

Do you think this is wrong and the people saying this (some of them very experienced developers) are simply mistaken or lying?

Or do you think it's not a big deal?


LLMs have blown crypto and nfts off the map, at least for normies like me. I wonder what will blow LLMs off the map?


Code assistants alone prove this to be false.


I'd be interested in reading more from the people you're referring to when you talk about experts who understand the field. At least to the extent I've followed the discussion, even the top experts are all over the place when it comes to the future of AI.

As a counterpoint: Geoffrey Hinton. You could say he's gone off the deep end on a tangent, but I definitely don't think his incentive is to make money off of hype. Then there's Yann LeCun saying AI "could actually save humanity from extinction". [0]

If these guys are just out-of-touch talking heads, who are the new guard people should read up on?

[0]: https://www.theguardian.com/technology/2024/dec/27/godfather...


> It’s why it keeps looking exactly like the NFT and crypto hype cycles to me: yes, the technology has legitimate uses

AI has legitimate uses; cryptocurrency only has “regulations evasion”, and NFTs have literally no use at all, though.

But it's very true that the AI ecosystem is crowded with grifters who feed on baseless hype, and many of them actually came from cryptocurrencies.



