Right, so in rough outline, an LLM-based brainstorming framework might generate multiple responses to a question using several different combinations of temperature, top_p, top_k, etc., to get a mix of dull-but-baseline responses (you don't want to miss those options) and more off-the-beaten-path ones. It would then send each response back to the LLM with an evaluation prompt like "does this provide a coherent answer, however unconventional or bad an idea, to the question asked?" to filter out the cases where it was asked for summer vacation ideas and responded with something like "Banana banana banana banana". (That evaluation request would probably use default sampling settings.)
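
Something like this, as a rough sketch, assuming the OpenAI Python SDK (the model name, prompts, and the specific sampling values are just placeholders, and top_k isn't exposed in that API, so only temperature and top_p vary here):

    # Sketch of the generate-then-filter brainstorming loop described above.
    # Assumes the OpenAI Python SDK; model name and prompts are placeholders.
    from openai import OpenAI

    client = OpenAI()
    MODEL = "gpt-4o-mini"  # placeholder model name

    # A spread of sampling settings: baseline through off-the-beaten-path.
    SAMPLING_CONFIGS = [
        {"temperature": 0.2, "top_p": 0.9},   # dull but reliable baseline
        {"temperature": 0.8, "top_p": 0.95},  # moderately creative
        {"temperature": 1.3, "top_p": 1.0},   # wilder, more likely incoherent
    ]

    def generate(question: str, cfg: dict) -> str:
        # One candidate response under a given sampling configuration.
        resp = client.chat.completions.create(
            model=MODEL,
            messages=[{"role": "user", "content": question}],
            **cfg,
        )
        return resp.choices[0].message.content

    def is_coherent(question: str, answer: str) -> bool:
        # Second pass at default sampling: judge coherence, not quality.
        judge_prompt = (
            f"Question: {question}\n\nCandidate answer: {answer}\n\n"
            "Does this provide a coherent answer to the question, "
            "however unconventional or bad an idea? Reply YES or NO."
        )
        resp = client.chat.completions.create(
            model=MODEL,
            messages=[{"role": "user", "content": judge_prompt}],
        )
        return resp.choices[0].message.content.strip().upper().startswith("YES")

    def brainstorm(question: str) -> list[str]:
        # Generate one candidate per config, keep only the coherent ones.
        candidates = [generate(question, cfg) for cfg in SAMPLING_CONFIGS]
        return [c for c in candidates if is_coherent(question, c)]

    ideas = brainstorm("Give me an unusual summer vacation idea.")

In practice you'd probably generate several candidates per configuration and have the judge return a score rather than a YES/NO, but the shape of the loop is the same: cheap wide generation, then a filtering pass at conservative settings.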

