Hacker News

I have found it to be good at things I am not very strong at (SQL) but terrible at the things I know well (CSS).

Telling, isn't it?



Ironically, I find it strong at things I don't know very well (CSS), but terrible at things I know well (SQL).

This is probably just another way of saying it's better at simple tasks than complex ones. I can eventually get Copilot to write SQL that's complex and accurate, but I don't find it faster or more effective than writing it myself.


Actually, you've reinforced their point. It only seems bad at things the user is good at because the user knows enough in that domain to spot the flaws and issues. It appears good in domains the user is bad at because the user doesn't know any better. In reality, the LLM is just bad at all domains; the only question is whether the user has the skill to discern it. Of course, I don't believe it's as black and white as that, but I wanted to point it out.


Yes, that is precisely what I meant. The idea just occurred to me, and I'll see how it holds up.


Yeah, my goal was to reinforce their point in a humorous way.


It’s like the Gell-Mann Amnesia effect but for LLMs instead of journalism.


I kind of agree. They feel like a superior form of copying and pasting from Stack Overflow, where the machine has automated the searching, copying, pasting, and fiddling with variable names. It can be just as useful, or just as dangerous, as Google -> Copy -> Paste ever was, but faster.


Funny, I find it to be good at things I'm not very strong at (CSS) but terrible at the things I know well (SQL). :)

Actually, I think it's perfectly adequate at SQL too.





