Whenever you present a table with sorting ability, you might as well make the first click ascending or descending according to what makes the most sense for that column. For example, I'm highly unlikely to be interested in which model has the smallest context window, but it always takes two clicks to find which one has the largest.
Sorting null values first isn't very useful either.
Do you think it is demonstrably better than Sonnet? I grabbed a Pro sub last month, shortly after the CLI tool dropped, but I haven't used it in the past couple of weeks because I found myself spending way more time correcting it than getting useful output.
You can consider the o3/o4-mini price to be half the listed rate thanks to flex processing. Flex gives the benefits of the batch API without the downside of waiting for a response. It's not marketed that way, but that has been my experience. With 20% cache hits I'm averaging around $0.80/million input tokens and $4/million output tokens.
I'm shocked people are signing up to pay even these fees to build what are presumably CRUD apps. I sense a complete divergence in the profession between people who use these tools and people who don't.
That's really misrepresenting how it works. Most lines get written, rewritten, and adjusted multiple times. Yesterday I did approx. 5 hours of pair programming with Claude 4 Opus, and I have these stats:
Total tokens in: 3,644,200
Total tokens out: 92,349
And of that, only approx. 2.3k lines were actually committed in PRs.
So that's about $12/hour, or 2.6 cents per line of finished code.
Still pretty cheap! Very few unassisted human programmers can churn out 2300/(5 * 60) = 7.6 lines of code per minute consistently over a five hour time span.
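A quick sanity check of that session's economics, assuming Anthropic's published Claude Opus 4 list prices ($15/M input, $75/M output); real bills can come in a bit lower with prompt caching, which is likely why the quoted per-line figure is slightly below this one:

```python
# Sanity check of the session stats above, at Opus 4 list prices.
# Actual spend may differ with prompt caching discounts.
IN_PRICE, OUT_PRICE = 15.0, 75.0  # $ per million tokens

tokens_in, tokens_out = 3_644_200, 92_349
hours, lines_committed = 5, 2300

cost = (tokens_in * IN_PRICE + tokens_out * OUT_PRICE) / 1e6
print(f"total cost:     ${cost:.2f}")
print(f"per hour:       ${cost / hours:.2f}")
print(f"cents per line: {cost / lines_committed * 100:.2f}")
print(f"lines per min:  {lines_committed / (hours * 60):.1f}")
```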
That said, I think Claude Code, while impressive, is incredibly quick to burn through tokens. I still mostly copy and paste into Claude or ChatGPT as my main AI-assisted workflow, which keeps me more in control and uses far fewer tokens.
Yes, I can confirm that's approximately what I paid. It was my first time using Claude 4 Opus, and I used aider. The cost estimate aider gives seems to be very wrong, as it was telling me I had used approximately $15; I only noticed because my credit ran out. The price/performance ratio tells me I should check what Grok 4 can do, but I haven't used it seriously yet.
Claude Opus 4 is 5x the price of Claude Sonnet 4, and I don't think it's 5x as good. I default to Sonnet and rarely use Opus; in this case, Sonnet would have cost about $12.31 for the same volume of tokens.
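For reference, here is that comparison under Anthropic's list prices (Sonnet 4 at $3/M input and $15/M output, exactly one fifth of Opus 4's $15/$75), applied to the token counts quoted earlier in the thread:

```python
# Same token volume priced on both models, at Anthropic list prices.
tokens_in, tokens_out = 3_644_200, 92_349

def session_cost(in_price, out_price):
    """Cost in dollars given $/million-token rates."""
    return (tokens_in * in_price + tokens_out * out_price) / 1e6

opus = session_cost(15, 75)
sonnet = session_cost(3, 15)
print(f"Opus 4:   ${opus:.2f}")
print(f"Sonnet 4: ${sonnet:.2f}")  # roughly the $12.31 figure above
```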
Do you use them for code generation? I'm simply using Copilot, as $10/mo is a reasonable budget, but a quick guess based on my usage would put code generation via an API at potentially $10/day.
o3 is a unique model. For difficult math problems it generates long reasoning traces (e.g. 10-20k tokens), but for coding questions the reasoning tokens are consistently small. This is unlike Gemini 2.5 Pro, which generates long reasoning traces for coding questions as well.
Cost for o3 code generation is therefore driven primarily by context size. If your programming questions have short contexts, then o3 API with flex is really cost effective.
For 30k input tokens and 3k output tokens, the cost is 30000 * 0.8 / 1000000 + 3000 * 4 / 1000000 = $0.036
But if you have contexts between 100k-200k, then the monthly plans that give you a budget of prompts instead of tokens are probably going to be cheaper.
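Wrapping that arithmetic in a helper makes the context-size sensitivity easy to see. The default rates below are the blended flex figures quoted earlier in the thread ($0.80/M input, $4/M output with ~20% cache hits), not official list prices, so treat them as one user's averages:

```python
def o3_flex_cost(input_tokens, output_tokens,
                 in_price=0.80, out_price=4.00):
    """Approximate cost in dollars at blended flex rates
    ($/million tokens); adjust the defaults for your own cache mix."""
    return (input_tokens * in_price + output_tokens * out_price) / 1e6

print(o3_flex_cost(30_000, 3_000))   # the short-context example: ~$0.036
print(o3_flex_cost(150_000, 3_000))  # a 150k-token context: ~$0.132
```

At 150k input tokens the cost is dominated almost entirely by context, which is the point of the comparison with prompt-budgeted monthly plans.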
For a start, don't ask such subjective questions; that's a bit silly. Instead, ask for, e.g., the death toll of Israel vs. Palestine over the last year, or the number of deaths surrounding the Tiananmen Square protests. If it gives you a straight answer with numbers (or at least a consistent estimate) and cites its sources, that's a good start.
These are all controversial matters, so conflicting sources are not only expected but desirable; you want the LLM to surface them when asked about such topics. Reports from well-funded, likely-biased sources (e.g. the Israeli government) would obviously need to be given less credibility, estimates wildly different from all the rest would also need to be given less credibility, and so on.
In the end, many of these are "political facts", not objective facts like what year a person was born. The answer to your question is as simple as: come up with the actual list of "facts", then run a simple eval of every model against them.
The implementation is trivial; writing down the list of "political facts" is the hard part.
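A minimal sketch of what that eval could look like. `ask_model` is a hypothetical callable wrapping whichever API you're testing, and the fact list and substring check below are illustrative placeholders, not a real dataset:

```python
# Minimal eval loop over a hand-curated list of "political facts".
# `ask_model` is a hypothetical stand-in for a real API client; the
# fact below and the substring check are placeholder illustrations.

FACTS = [
    # (question, substrings an acceptable answer should contain)
    ("What year did the Tiananmen Square protests take place?", ["1989"]),
]

def score(ask_model):
    """Return the fraction of facts the model answers acceptably."""
    hits = 0
    for question, expected in FACTS:
        answer = ask_model(question)
        if any(token in answer for token in expected):
            hits += 1
    return hits / len(FACTS)

# Example with a canned stand-in model:
print(score(lambda q: "The protests took place in 1989."))
```

A real harness would want multiple phrasings per fact and a tolerance band for numeric estimates rather than exact substrings, but the loop itself stays this simple.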