Ironically, I find it strong at things I don't know very well (CSS), but terrible at things I know well (SQL).

This is probably just another way of saying it's better at simple tasks than complex ones. I can eventually get Copilot to write SQL that's complex and accurate, but I don't find it faster or more effective than writing it myself.



Actually, you've reinforced their point. It only seems bad at things the user is good at because the user knows enough in that domain to spot the flaws. It appears good in domains the user is weak in because the user doesn't know any better. In reality, the LLM is just bad at all domains; the difference is whether the user has the skill to discern it. Of course, I don't believe it's quite as black and white as that, but I wanted to point it out.


Yes, that is precisely what I meant. It just occurred to me, and I'll see how the idea holds up.


Yeah, my goal was to reinforce their point in a humorous way.


It’s like the Gell-Mann Amnesia effect but for LLMs instead of journalism.


