
>> I mean, is there any genuine case you can cover with SO that you cannot with your favorite LLM?

Perhaps SO is better than current models at detecting and pushing back when it sounds like the person asking the question is thinking of doing something silly, dubious, or debatable.




