
>> This can probably be mitigated by refining the prompt

Sometimes it explains things as if I were a child, and sometimes it doesn't explain things well enough. I don't think a simple prompt change can fix this: it may fix one part and make another part worse. This is the problem I keep having with LLMs: you can fine-tune the prompt for a specific case, but I find it hard to write a universally-working prompt. The underlying issue seems to be that the LLM "does not understand my intent": it can't deduce what I actually need and help "proactively". It follows the requirements in the prompt, but the prompt would have to handle all situations, and it can't. I am getting tired of LLMs.
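
To make the trade-off concrete, here is a rough sketch of the usual workaround: a two-pass prompt that first infers the asker's level, then answers at that level. This is only a sketch assuming the OpenAI Python client; the model name, the prompt wording, and the answer_with_inferred_depth helper are illustrative placeholders, not anyone's actual setup. And note that even this just moves the hard-coding into the classifier prompt rather than removing it.

    # Two-pass prompting: infer explanation depth first, then answer.
    # Sketch only; assumes the OpenAI Python client and OPENAI_API_KEY
    # in the environment. Model name and prompts are placeholders.
    from openai import OpenAI

    client = OpenAI()

    def answer_with_inferred_depth(question: str) -> str:
        # Pass 1: classify how much background the question implies.
        level = client.chat.completions.create(
            model="gpt-4o-mini",
            messages=[
                {"role": "system",
                 "content": "Classify the asker's expertise as one word: "
                            "beginner, intermediate, or expert. "
                            "Reply with the word only."},
                {"role": "user", "content": question},
            ],
        ).choices[0].message.content.strip().lower()

        # Pass 2: answer, pinning the depth to the inferred level.
        reply = client.chat.completions.create(
            model="gpt-4o-mini",
            messages=[
                {"role": "system",
                 "content": f"Answer for a {level} audience: no filler "
                            "for experts, no skipped steps for beginners."},
                {"role": "user", "content": question},
            ],
        )
        return reply.choices[0].message.content

    print(answer_with_inferred_depth("How does a B-tree split a node?"))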
