1. Access to specific large open models (Qwen3 235B, DeepSeek V3.1 671B, Llama 3.1 405B, GPT-OSS 120B)
2. Having them available via the Ollama API LOCALLY
3. The ability to set up Codex to use Ollama's API, so tool runs can target different models (see the config sketch below)
I mean, really, nothing else is even close at this point, and I would rather eat a bug than use Microsoft's cloud.
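
For reference, the Codex-to-Ollama wiring looks roughly like this in `~/.codex/config.toml`. This is a minimal sketch, assuming Ollama is serving its OpenAI-compatible API on the default port (11434) and that the model tags match what you've actually pulled locally:

```toml
# ~/.codex/config.toml -- minimal sketch, not a complete config.
# Assumes Ollama's OpenAI-compatible endpoint on the default local port.

model = "gpt-oss:120b"        # swap to any local tag, e.g. qwen3:235b
model_provider = "ollama"

[model_providers.ollama]
name = "Ollama"
base_url = "http://localhost:11434/v1"
```

Switching between the models above is then just a matter of changing the `model` line; the rest of the setup stays put.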