
They'll get better. Humans are far from perfect, and I have no doubt that LLMs will eventually outperform them for non-trivial tasks consistently.


Maybe so, but at this stage I wouldn't be betting a business model on it.


Businesses do bet on imperfect and even criminal models all the time (way before LLMs existed)... they call it cost of doing business when they get it wrong or get caught.


> Humans are far from perfect

Humans running multi-shot with a mixture of experts are close to perfect. You can't compare a multi-shot mixture-of-experts AI to a single human; humans don't work in isolation.


Machine learning models will get better, for sure. We don't know whether LLMs are the end game, though, and it isn't clear that this particular technique is what we'll need to reach the next level.


Or they might not get better. It could be that we are at a local optimum for that sort of thing, and major improvements will have to wait (perhaps for a very long time) for radical new technologies.


Maybe, but it certainly hasn’t been the arc of the past few years. I don’t know how anyone could look at this and assume that it’s likely to slow down.


They already have superhuman image classification performance.


I remember talking to a radiologist, maybe ten years ago, who said he was sure something like this was coming: instead of a radiologist looking at scans manually, a machine would go through a large number of images and flag some for manual review.

We haven't even gotten there yet, have we?


Yes, we absolutely are there: https://youtu.be/D3oRN5JNMWs?feature=shared

My professor (Sir Michael Brady) at university 14 years ago set up a company to do this very thing, and he already had reliable models back before 2010. I believe their company was called Oxford Imaging or something similar.


Yep, everyone seems to forget that ML existed before 2021. I had a conversation recently with a former colleague who had learned about a plastic packaging company that used "AI" to predict client orders and inform them about scheduling implications. When I told him that you don't need Transformers and 30 GB models for that, he was quasi-confused, because he kind of knew it, but the hype had just overtaken his knowledge.
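For something like order prediction, a small classical model over a handful of engineered features usually does the job. A minimal sketch with scikit-learn (the feature names and numbers below are made up for illustration, not from the company in question):

    import numpy as np
    from sklearn.ensemble import GradientBoostingRegressor

    # Hypothetical features per row: [week_of_year, client_id, orders_last_week]
    X = np.array([[10, 1, 120], [11, 1, 128], [12, 1, 127],
                  [10, 2,  40], [11, 2,  41], [12, 2,  44]])
    y = np.array([128, 127, 131, 41, 44, 43])  # orders placed the following week

    model = GradientBoostingRegressor().fit(X, y)
    print(model.predict([[13, 1, 131]]))  # forecast for client 1, week 13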


In ML courses, you’re taught to try simpler methods and models before turning to more complex ones. I think that’s something that hasn’t made it into the mainstream yet.

A lot of people seem to be using GPT-4 for tasks like text classification and NER, and they'd be much better off fine-tuning a BERT model instead. In vision, too, transformers are great, but a lot of the time a CNN is all you really need.
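As a rough illustration of the BERT route, here's a minimal fine-tuning sketch using the Hugging Face transformers library; the tiny inline texts and labels are made up, and real fine-tuning would of course use a proper labelled corpus:

    import torch
    from transformers import AutoModelForSequenceClassification, AutoTokenizer

    tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
    model = AutoModelForSequenceClassification.from_pretrained(
        "bert-base-uncased", num_labels=2)

    # Toy batch; labels here are hypothetical sentiment classes.
    texts = ["great product, works as described", "arrived broken, total waste"]
    labels = torch.tensor([1, 0])

    batch = tokenizer(texts, truncation=True, padding=True, return_tensors="pt")
    optimizer = torch.optim.AdamW(model.parameters(), lr=2e-5)

    model.train()
    for _ in range(3):  # a few gradient steps on the toy batch
        loss = model(**batch, labels=labels).loss
        loss.backward()
        optimizer.step()
        optimizer.zero_grad()

    model.eval()
    with torch.no_grad():
        logits = model(**tokenizer(["works great"], return_tensors="pt")).logits
    print(logits.argmax(-1))  # predicted class for a new text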


> We haven't even gotten there yet, have we?

Yes and no. Countless teams have solved exactly this problem at universities and research groups across the world. Technically, it's pretty much a solved problem. The hard part is getting the systems out of the labs, getting them certified as an actual product, and convincing hospitals and doctors to actually use them.


Maybe it's a liability issue, not a competency issue.


Until a single pixel makes a cat a dog or something like that.


Changing a single pixel is usually not enough to confuse convolutional neural networks. Even so, human supervision will probably always be quite important.
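You can check this yourself with a pretrained classifier: flip one pixel to an extreme value and see whether the predicted class changes. A minimal sketch with torchvision ("cat.jpg" is a placeholder image, and this is an arbitrary pixel change, not an optimized one-pixel attack):

    import torch
    from PIL import Image
    from torchvision import models

    weights = models.ResNet18_Weights.DEFAULT
    model = models.resnet18(weights=weights).eval()
    preprocess = weights.transforms()

    # "cat.jpg" stands in for any RGB test image.
    x = preprocess(Image.open("cat.jpg").convert("RGB")).unsqueeze(0)

    with torch.no_grad():
        before = model(x).argmax(dim=1).item()

    # Push one pixel (all channels) to the tensor's maximum value.
    x_perturbed = x.clone()
    x_perturbed[0, :, 112, 112] = x.max()

    with torch.no_grad():
        after = model(x_perturbed).argmax(dim=1).item()

    print(before, after)  # usually the same class for an arbitrary single-pixel change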



