I've been working on a hard problem recently and have been keeping my "model" setting pegged to "high".
Why in the world, if I'm paying the loss-leader price for "unlimited" usage of these models, would any of these companies actually respect my preference for unfettered access to the most expensive inference?
Especially since one of the hallmark features of GPT-5 was a fancy router system that automatically decides when to spend more or less inference compute, I'm very wary of those `/model` settings.
Because intentionally fucking over their customers would be an impossible secret to keep, and when it inevitably leaked, it would trigger severe backlash, if not fraud investigations. The game-theoretic model you're positing only really makes sense if the game has a single iteration, which isn't the case.
That is unfortunately not true. It's pretty easy to mess with your customers when your whole product is as opaque as an LLM. I mean, even the companies themselves don't understand how these models work internally.