Hacker News

LLMs can be very creative when pushed. To find a creative solution, like the one antirez needed, there are several tricks I use:

Increase the temperature of the LLMs.
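Temperature rescales the model's output distribution before sampling, which is why raising it yields more varied (and occasionally more creative) output. A toy sketch of the mechanism, assuming simple softmax sampling over token logits:

```python
import math
import random

def sample_with_temperature(logits, temperature, rng):
    # Divide logits by the temperature: high temperature flattens the
    # distribution (unlikely tokens get picked more), low temperature
    # sharpens it toward the single most likely token.
    scaled = [l / temperature for l in logits]
    m = max(scaled)  # subtract max for numerical stability
    exps = [math.exp(s - m) for s in scaled]
    total = sum(exps)
    probs = [e / total for e in exps]
    # Inverse-CDF sampling from the resulting distribution.
    r = rng.random()
    cum = 0.0
    for i, p in enumerate(probs):
        cum += p
        if r < cum:
            return i
    return len(probs) - 1

rng = random.Random(0)
logits = [4.0, 1.0, 0.5]  # token 0 is strongly preferred
low = [sample_with_temperature(logits, 0.2, rng) for _ in range(1000)]
high = [sample_with_temperature(logits, 2.0, rng) for _ in range(1000)]
print(low.count(0), high.count(0))
```

At temperature 0.2 the top token is chosen almost every time; at 2.0 the tail tokens show up far more often, which is the "creativity" knob the API parameter exposes.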

Ask several LLMs the same question, several times each, with tiny variations. Then collect all the answers and do a second/third round, asking each LLM to review the collected answers and improve on them.
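The collect-then-review loop can be sketched as below. `ask(model, prompt)` is a hypothetical stand-in for whatever client call reaches each model; the stub at the bottom just demonstrates the control flow:

```python
def refine_across_models(question, models, ask, rounds=2):
    """Collect one answer per model, then repeatedly ask every model
    to review the whole answer pool and produce an improved answer.
    `ask(model, prompt)` is a placeholder for a real API call."""
    answers = [ask(m, question) for m in models]
    for _ in range(rounds):
        pool = "\n---\n".join(answers)
        review = (
            f"Question: {question}\n"
            f"Candidate answers:\n{pool}\n"
            "Review all candidates and write an improved answer."
        )
        answers = [ask(m, review) for m in models]
    return answers

# Stubbed demo: each "model" just reports what it was given.
demo = refine_across_models(
    "How would you compress this structure?",
    ["model-a", "model-b"],
    lambda model, prompt: f"{model} saw {len(prompt)} chars",
)
```

In practice you would also vary the initial question slightly per call, as described above, so the first-round pool is diverse before the review rounds start.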

Add random constraints, one constraint per question. For example: can you do this with 1 bit per X? Do this in O(n). Do this using linked lists only. Do this with only 1 KB of memory. Do this while splitting the task across 1000 parallel threads, etc.
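A minimal sketch of generating one-constraint-per-question prompts; the constraint list mirrors the examples above, and the function names are my own:

```python
import random

CONSTRAINTS = [
    "use at most 1 bit per element",
    "run in O(n) time",
    "use only linked lists",
    "fit in 1 KB of memory",
    "split the work across 1000 parallel threads",
]

def constrained_prompts(question, k=3, seed=0):
    """Produce k variants of the question, each with exactly one
    randomly chosen constraint attached, with no repeats."""
    picks = random.Random(seed).sample(CONSTRAINTS, k)
    return [f"{question} Constraint: {c}." for c in picks]

prompts = constrained_prompts("Can you deduplicate this stream?")
for p in prompts:
    print(p)
```

Each variant then goes out as a separate question, so no single prompt is overloaded with multiple constraints.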

This usually kicks the LLM out of its comfort zone and into creative solutions.



Definitely a lot to be said for these ideas, even just that it helps to start a fresh chat and ask the same question in a better way a few times (using the quality of the response to gauge what "better" might be). I have found that if I do this a few times and Gemini strikes out, I've manually optimized the question enough by that point that I can drop it into Claude and get a good working solution. Conversely, having a discussion with the LLM about the potential solution, letting it hold on to the context as described in TFA, has in my experience caused the models pretty universally to end up stuck in a rut sooner or later and become counterproductive to work with. Not to mention that approach eats up a ton of API usage allotment.



