
Do LLM inference engines have a way to seed their randomness, so as to have reproducible outputs while still allowing some variance if desired?





Yes, although it's not always exposed to end users by LLM providers.
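To illustrate the idea (not any specific provider's API): the only randomness in decoding is the sampling step, so fixing the RNG seed makes the whole output deterministic, while changing the seed gives a different but still valid sample. A minimal sketch using a toy stand-in for the model's logits:

```python
import numpy as np

def sample_tokens(logits_fn, seed, n_tokens, temperature=1.0):
    """Sample a token sequence reproducibly from a seeded RNG.

    logits_fn is a hypothetical stand-in for a real LLM forward pass:
    it maps the tokens sampled so far to a logits vector.
    """
    rng = np.random.default_rng(seed)  # the seed fixes all sampling randomness
    tokens = []
    for _ in range(n_tokens):
        logits = np.asarray(logits_fn(tokens), dtype=float) / temperature
        probs = np.exp(logits - logits.max())  # stable softmax
        probs /= probs.sum()
        tokens.append(int(rng.choice(len(probs), p=probs)))
    return tokens

# Toy "model": fixed logits over a 5-token vocabulary.
toy = lambda toks: [0.5, 1.0, 0.2, 0.8, 0.1]

a = sample_tokens(toy, seed=42, n_tokens=8)
b = sample_tokens(toy, seed=42, n_tokens=8)
assert a == b  # same seed -> identical output; a different seed varies it
```

In practice the same principle applies in real engines, though exact reproducibility can additionally depend on batching and floating-point nondeterminism on GPUs.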


