
"Everyone" is exaggeration at best.


It kind of boggles my mind that there are people who aren't using LLMs yet.

Sure, it's not everyone, but the people who aren't using them are signaling a major red flag, IMO.

They are resistant to change, even if they don't understand the technology; what else are they resisting from their managers/leadership team? Further, when I think of the people in my life who have refused to even try it, they all seem to have a screw or two loose, even the ones who are successful and making $200k/yr.

All IMO, of course, but in tech I imagine something needs to be 'off' for someone to never try it.

EDIT: Seems I'm getting criticism from people who are using it for inappropriate use cases. I don't use a screwdriver to hammer nails.


They're wildly inappropriate for most things. For similar anecdata, see blockchain fever where everybody shoehorned it in wherever they could, even when a traditional database made more sense.

Consistent conditional logic makes more sense than a risk-laden hallucinating LLM for a lot of workflows.

"Everyone" doesn't need to hammer nails because there's more than just one career and industry. The acceptable quality of the job output varies drastically too.

"It kind of boggles my mind" that people can't see beyond their own life.


I think they mean using it as a tool at work, not using it in production or as a feature in their application.


That's how I interpreted it in the first place.


I don't think you've used an LLM then.


I can't say I was impressed with ChatGPT's help when I tried it. I figured quizzing it on reading comprehension would be a fitting task, given that it is a language-based model and that the skill seems in short supply among my coworkers and myself. After confirming that the specifications of a standard I am implementing were within its knowledge, I tried to have it explain the difference between two parts, and it failed so miserably that its understanding of the content was below even my managers', for whom this is only something they occasionally review. Any attempt to correct it only resulted in an apology and new misunderstandings.

Outside of work, I tried using it to find an old movie, probably from the '60s, about a man refusing to shave his long beard, featuring a scene where he is chased around his home half shaven, but it merely made up beard-shaving scenes for several other movies. Admittedly, I have not tried uploading any of my company's code to give it a less memory-based task.


I think reading comprehension is a notable weakness: asking it detailed questions about a long text produces lots of hallucinations in my experience.

But it's definitely good at some other things, notably writing boilerplate text of various sorts and giving instructions on how to do certain things.

It seems to mostly synthesize common knowledge rather than learn anything. But that can be very useful; a lot of people's jobs involve doing exactly that today.


It sounds like you were testing memorization, not reading comprehension.

To test reading comprehension, the source should be in the prompt, not the training set.
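To make the distinction concrete: a reading-comprehension test embeds the passage in the prompt itself, so the model has to read the text in front of it rather than recall (or confabulate) training data. A minimal sketch of how such a prompt might be assembled (the passage, question, and function name are made up for illustration):

```python
def build_comprehension_prompt(passage: str, question: str) -> str:
    """Embed the source text directly in the prompt so the model
    must answer from the provided passage, not from memorization."""
    return (
        "Answer using only the passage below. "
        "If the passage does not contain the answer, say so.\n\n"
        f"Passage:\n{passage}\n\n"
        f"Question: {question}"
    )

# Example: comparing two parts of a (hypothetical) specification.
prompt = build_comprehension_prompt(
    passage="Section 4.2 requires checksums on every frame. "
            "Section 4.3 requires checksums only on the final frame.",
    question="How do Sections 4.2 and 4.3 differ?",
)
print(prompt)
```

Asking the model about a standard it claims to "know" (as in the parent comment) instead tests whatever compressed version of that document survived training, which is a much weaker signal.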


yeah, same boat here.

It's great for generating sample code snippets or refactoring code, but I can't paste my company's intellectual property into it.

If I could train a customized version of it on all my company's Slack messages, Jira tickets, e-mails, etc., it'd be insanely useful... but I don't think any big company would actually want that, since it wouldn't be able to keep secrets from anyone with access to it.


> It kind of boggles my mind that there are people who arent using LLMs yet.

Maybe it is easier to go through actual verified information than to double-check everything an AI says.

I only use LLMs to restate information that I can half piece together myself, so I can remember the missing bits (like a math proof or derivation), or to point me to recommendations of actual resources. And even of those two things I am very wary.


>maybe it is easier to go through actual verified information than to double check everything an AI says.

Wrong tool for the job. You don't ask it information questions. Ask it brainstorming questions.


I think people have different jobs and different skill levels. For some it gives a boost; for others it slows them down. Translating exactly what you want into English is, for many, a different way of producing something. Some people are really smart and don't need a calculator. No shame in using a calculator that mostly works.



