Tokens are an implementation detail that has no business being part of product pricing.
It's deliberate obfuscation. First, there's the simple math of converting tokens to dollars. This is easy enough; people are familiar with "credits". Credits can be obfuscation, but at least they're honest. The second and more difficult obfuscation to untangle is how one converts "tokens" to "value".
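To make the contrast concrete, the tokens-to-dollars half really is just arithmetic (the per-million-token rates below are made up; real rates vary by provider and model):

    # hypothetical rates: $3 per 1M input tokens, $15 per 1M output tokens
    input_cost  = (200_000 / 1_000_000) * 3.00    # $0.60
    output_cost = ( 50_000 / 1_000_000) * 15.00   # $0.75
    total_cost  = input_cost + output_cost        # $1.35

The tokens-to-value half has no such formula.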
When the value customers receive from tokens slips, they pay the same price for the service. But generative AI companies are under no obligation to refund anything, because the customer paid for tokens, and they got tokens in return. Customers have to trust that they're being given the highest quality tokens the provider can generate. I don't have that trust.
Additionally, they have to trust that generative AI companies aren't padding results with superfluous tokens to hit revenue targets. We've all seen how much fluff is in default LLM responses.
Pinky promises don't make for healthy business relationships.
Tokens aren't that much more opaque than GB-seconds of RAM for serverless functions or whatever. You'd have to know the entire infra stack to really understand it. I don't really have a suggestion for a better unit for that kind of stuff.
Doesn’t prompt pricing obfuscate token costs by definition? I guess the alternative is everyone pays $500/mo. (And you’d still get more value than that.)
Unless you used different language for the bet, you lost it the moment it was made.
"Never" may be falsified by "at least once", but affirmed only by "never". So I'm afraid only you could have ever been on the hook for the $1M, and may still be!
Still though, bad bet. The other guy can easily keep arguing that it's still just a few more years from being repealed, until one of you dies of old age.
Unless your bet was that "it will be strengthened before it is repealed" and then his position was that "it would be repealed without ever being strengthened." Still possible for neither to happen indefinitely though, leaving the bet pending.
A mature CEO would characterize the "not good reasons" engineers had for not onboarding.
I think most people would agree that engineers outright refusing to comply with what was asked of them would be a "not good" reason for not onboarding.
But Brian Armstrong is playing Strong CEO for the podcast circuit. So he can't admit that engineers were let go for potentially justifiable reasons. He has to leave room for speculation: that maybe some engineers were let go for trivial reasons, because Brian is tough, and tough Brian demands compliant employees.
The people who didn't comply because they were on vacation and then had to go to a Saturday meeting to explain themselves think Brian is something -- but I guarantee it's not that he's tough.
We've all seen this playbook before. This is the incredibly dumb, Idiocracy-emulating world in which we now live.
I wonder how many of these intern-type tasks LLMs have taken away. The tasks I did as a newbie might not have seemed that relevant to the main responsibilities, but they helped me pick up institutional knowledge, get a feel for "how things work", and learn who to talk to (and how) to make progress. Now the intern will probably do it using LLMs instead of talking to other people. Maybe the results will be better, but that interaction is gone.
I think there is an infinite capacity for LLMs to be both beneficial and harmful. I look back at learning and think, man, how amazing would it have been if I could have had a personalized tutor guiding me and teaching me the concepts I was having trouble with in school. I think about when I was learning to program and didn’t have the words to describe the question I was trying to ask, and felt stupid, or like an inconvenience, when trying to ask more experienced devs.
Then on the flip side, I’m not just worried about an intern using an LLM. I’m worried about the unmonitored LLM performing intern, junior, and ops tasks, and then companies simply using “an LLM did it” as a scapegoat for their extreme cost cutting.
But then you have companies like Parking Revenue Recovery Services (PRRS), who have already had to settle [1] with the AG once before, and yet the AG refuses to take action on additional complaints, for years.
PRRS sent me a sham parking fee two weeks after their settlement with the AG in 2022.
The AG's response to my complaint:
> We have investigated your complaint and based on the information we have received to date, we are taking no further action at this time.
This was three years ago. And Coloradans, faced with an AG that won't do anything for them, have taken to PRRS's non-accredited BBB page to file thousands of complaints [2].
I don't think the BBB would have any effect in this situation either, because PRRS doesn't rely on reputation for its business. They simply rely on having conveniently placed parking lots throughout the city and a steady supply of people who need somewhere to park.
That was three years ago, and here we are in 2025: Denver is still dealing with this situation [3], and as far as I know the AG still hasn't done anything about it.