I'm not arguing it's a bubble. I'm arguing it's not going to be a "gigantic benefit" for society.
> Will also be good for consumers in the long-term: much faster pace of drug discovery and new tech generally.
Medicine and tech that knowledge workers won't be able to pay for without a job. I also don't think "new tech" is necessarily a good thing societally.
The calculus is easy, really: AI makes 100% of my income worth less but only decreases the cost of a fraction of my expenditures. AI is bad for anyone who works for a living. That's pretty much all of society except for the top x%.
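To make that arithmetic concrete, here's a toy sketch. Every number in it is a made-up assumption for illustration, not data:

```python
# Toy model of the asymmetry: AI wage pressure applies to 100% of
# income, but AI-driven price drops apply only to the fraction of
# spending that AI actually makes cheaper. All figures are hypothetical.

income = 100_000            # annual income (assumption)
wage_hit = 0.10             # AI suppresses wages by 10% (assumption)

spending = 100_000          # annual expenditures (assumption)
ai_exposed_share = 0.20     # only 20% of spending gets cheaper (assumption)
price_drop = 0.15           # those goods get 15% cheaper (assumption)

new_income = income * (1 - wage_hit)
new_spending = spending * (1 - ai_exposed_share * price_drop)

print(f"income:   {income:>9,.0f} -> {new_income:>9,.0f}")
print(f"spending: {spending:>9,.0f} -> {new_spending:>9,.0f}")
# Income falls by 10,000 while spending falls by only 3,000:
# a net loss of 7,000 in purchasing power under these assumptions.
```

The exact numbers don't matter; the point is that the wage effect hits the whole of income while the price effect hits only a slice of spending.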
My experience has gone the opposite way from OOP's: anecdotally, I've had VCs ask me to review AI companies and tell them what the companies actually do so they can decide whether to invest. The VC said that VCs don't really understand what they're investing in and just want to get in on anything AI due to FOMO.
The company I reviewed didn't seem like a great investment, but I don't even think that matters right now.
To be clear, when I said “finance folks” I wasn’t really referring to VCs. I’m talking more about family office types that manage big pools of money you don’t know about. The super wealthy class that literally has more money than the King but would be horrified if you knew their name. Old money types. They’re well aware of the “dumb VC” vibe of just throwing money after hype. The finance folks I’m talking about are the type that eat failed VCs for lunch.
In my experience they invariably conflate LLMs with AI and can’t/won’t have the difference explained.
This is the blind spot that will cause many to lose their shirts, and is also why people are wrong about AI being a bubble. LLMs are a bubble within an overall healthy growth market.
Machine learning is a perfectly valid and useful field, and traditional ML can produce very powerful tools. LLMs are fancy word predictors that have no concept of truth.
But LLMs look and feel like they're almost "real" AI, because they talk in words instead of probabilities, and so people who can't discern the distance between an LLM and AGI assume that AGI is right around the corner.
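Here's a minimal sketch of that distinction, using scikit-learn for the traditional-ML side and a toy bigram model standing in for the LLM side. Both are illustrative framings of my own, not any vendor's API:

```python
# Contrast: a classic ML model is explicit that its output is a
# probability; a language model just emits the most plausible next
# word, with no representation of whether the claim is true.

from collections import Counter, defaultdict

from sklearn.linear_model import LogisticRegression

# --- Traditional ML: output is an explicit probability ----------------
X = [[0.1], [0.4], [0.6], [0.9]]   # toy feature (e.g., a risk score)
y = [0, 0, 1, 1]                   # toy labels
clf = LogisticRegression().fit(X, y)
print(clf.predict_proba([[0.5]]))  # two probabilities -- uncertainty is visible

# --- Toy "LLM": a bigram next-word predictor ---------------------------
corpus = "the sky is blue the sky is falling the sky is blue".split()
nxt = defaultdict(Counter)
for a, b in zip(corpus, corpus[1:]):
    nxt[a][b] += 1

word = "the"
for _ in range(3):
    word = nxt[word].most_common(1)[0][0]  # most likely continuation
    print(word, end=" ")
# Prints "sky is blue" -- fluent, confident-sounding words, but nothing
# in the model tracks whether the statement is actually true.
```

A real LLM is vastly more sophisticated than a bigram table, but the interface is the same in kind: words out, probability and truth hidden from the user.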
If you believe AGI is right around the corner and skip over the bit where any mass application of AGI to replace workers for cheaper is just slavery but with extra steps, then of course it makes sense to pour money into any AI business.
And energy is the air in the economy at large. The ELIZA effect is not the only bias disposing people to believe AGI is right around the corner. There are deeper assumptions many cling to.
No, I see that GPT-5's answers are sometimes marginally better and sometimes arguably marginally worse than GPT-4's, which suggests to me that we have plateaued, or even regressed.