fwiw OpenAI uses different models for ChatGPT GPT-4 and API GPT-4 (the latest one, not the pinned 0314 one). In the past I noticed the API model was newer than the ChatGPT model, but in general it seems like they're willing to make different tradeoffs between the two: https://twitter.com/kevmod/status/1643993097679020037

I wouldn't be surprised if they're trying to reduce the cost of the ChatGPT GPT-4 model, since heavy users are likely losing them money. They could also be trying to increase the number of people they can serve with the compute they have available.

In my anecdotal experience, ChatGPT GPT-4 had gotten far faster recently, which is consistent with the theory that they're cost-optimizing the model, though today it's back to its normal speed. I've also had some frustrating interactions with GPT-4 recently, similar to what people are describing, but overall I think the prior is pretty strong that we're just seeing normal statistical variation.


