> No, they can't, because they make stuff up, fail to follow directions, need to be minutely supervised, have all their output checked, and have their workflow integrated with your company's shitty, overcomplicated procedures and systems.
What’s the difference between what you describe and what’s needed for a fresh hire off the street, especially one just starting their career?
Real talk? The human can be made to suffer consequences.
We don't mention this in techie circles, probably because it's gauche. But you can hold a person responsible, and there's a chance you can figure out what they got wrong and make sure they're trained.
I can’t do squat to OpenAI if a bot gets something wrong, nor could I figure out why it got it wrong in the first place.
The difference is that an LLM is like a worst-case-scenario fresh hire: one who lied to you during the interview process, has a fake resume, and isn't actually named John Programmer.