The return value is not guaranteed to be correct. REASON uses function calling to force the model to return a valid JSON object; however, sometimes models hallucinate.
For OpenAI models that have JSON mode, REASON uses that instead.
That said, REASON has an internal validation step: it checks the return value and throws an error if it's not valid (though this behavior can be disabled).
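To make that concrete, here's a rough sketch of the general pattern in Python (not REASON's actual code, just the "ask for JSON, then validate and throw" idea, using the OpenAI SDK):

```python
import json
from openai import OpenAI

client = OpenAI()

# Ask for a JSON object via JSON mode (function calling works similarly:
# the model is steered toward structured output, but can still get it wrong).
response = client.chat.completions.create(
    model="gpt-4-1106-preview",
    response_format={"type": "json_object"},
    messages=[
        {"role": "system", "content": "Return a JSON object with keys 'name' (string) and 'age' (number)."},
        {"role": "user", "content": "Extract: Alice is 34 years old."},
    ],
)

raw = response.choices[0].message.content

# The validation step: parse and check the shape, throw if the model hallucinated.
try:
    data = json.loads(raw)
    if not isinstance(data.get("name"), str) or not isinstance(data.get("age"), (int, float)):
        raise ValueError(f"Model returned an object with the wrong shape: {data!r}")
except json.JSONDecodeError as err:
    raise ValueError(f"Model returned invalid JSON: {raw!r}") from err
```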
Your suggestion to use logits is super interesting, though! And it sparked this line of thinking: since REASON uses function calling, it has been somewhat difficult to integrate with OSS models. However, if we used Jsonformer under the hood, that might be the missing key to great OSS model support. What do you think?
I took a very quick glance at Jsonformer - it looks like the right tool here (rough sketch of its usage below). If you are looking for a PR, I can help, but it might take me a bit of time to get oriented.
I'm considering using RΞASON for a side project, so I'm interested in it.
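For reference, Jsonformer's usage looks roughly like this (adapted from its README; the model name is just an example). It only lets the model generate the value tokens, while the JSON structure comes from the schema, so the output always parses:

```python
from jsonformer import Jsonformer
from transformers import AutoModelForCausalLM, AutoTokenizer

# Any Hugging Face causal LM works; dolly-v2-3b here is just an example.
model = AutoModelForCausalLM.from_pretrained("databricks/dolly-v2-3b")
tokenizer = AutoTokenizer.from_pretrained("databricks/dolly-v2-3b")

json_schema = {
    "type": "object",
    "properties": {
        "name": {"type": "string"},
        "age": {"type": "number"},
        "courses": {"type": "array", "items": {"type": "string"}},
    },
}

prompt = "Generate a student's information based on the following schema:"

# Jsonformer fills in only the values; braces, keys, and quotes are emitted
# directly from the schema, so no hallucinated structure is possible.
jsonformer = Jsonformer(model, tokenizer, json_schema, prompt)
generated_data = jsonformer()
print(generated_data)
```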
Yeah, adding audio to the video would help. I'll try to record something later today and update it.
When building LLM apps, a good chunk of the workflow is testing new things and seeing what works and what doesn't, and one of the most useful features of RΞASON to me is that it's OpenTelemetry compatible. That lets you inspect your LLM calls closely with ease (see: https://docs.tryreason.dev/docs/essentials/observability#usi...).
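Concretely, "OpenTelemetry compatible" means you can route the traces to any standard OTLP backend (Jaeger, Grafana Tempo, Honeycomb, etc.) and inspect each LLM call there. This is just the generic OTel SDK setup, shown in Python purely for illustration, nothing RΞASON-specific:

```python
from opentelemetry import trace
from opentelemetry.sdk.trace import TracerProvider
from opentelemetry.sdk.trace.export import BatchSpanProcessor
from opentelemetry.exporter.otlp.proto.http.trace_exporter import OTLPSpanExporter

# Ship every span an OTel-compatible library emits to whatever backend
# speaks OTLP (Jaeger, Grafana Tempo, Honeycomb, ...).
provider = TracerProvider()
provider.add_span_processor(
    BatchSpanProcessor(OTLPSpanExporter(endpoint="http://localhost:4318/v1/traces"))
)
trace.set_tracer_provider(provider)
```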