
> No, they can't, because they make stuff up, fail to follow directions, need to be minutely supervised, all output checked, and their workflow integrated with your company's shitty, overcomplicated procedures and systems.

What’s the difference between what you describe and what’s needed for a fresh hire off the street, especially one just starting their career?



> What’s the difference between what you describe and what’s needed for a fresh hire off the street, especially one just starting their career?

The fresh hire has the potential, after training and some time on the job, to become a much more valuable and reliable senior.


> has the potential

Good choice of wording! Definitely not a given though.


Real talk? The human can be made to suffer consequences.

We don't mention this in techie circles, probably because it is gauche. However, you can hold a person responsible, and there is a chance you can figure out what they got wrong and ensure they are trained.

I can’t do squat to OpenAI if a bot gets something wrong, nor could I figure out why it got it wrong in the first place.


The bell curve is much wider for humans than for LLMs; I don't think this needs to be said.


The difference is that an LLM is like hiring a worst-case-scenario fresh hire who lied to you during the interview process, has a fake resume, and isn't actually named John Programmer.


Entirely disagree.



