I like the idea that AI usage comes down to a measurement of "tolerances". With enough specificity, LLMs will 100% return what you want. The goal is to find, via prompts, the happy tolerance between "acceptable" and "I'd rather do it myself".


> With enough specificity, LLMs will 100% return what you want.

By now I'm sure it won't. Even if you provide the expected code verbatim, an LLM might go on a side quest to "improve" something.



