All the smartest people I know in finance are preparing for AI to be a huge bust that wipes out startups. To them, everyone in tech is just a pawn in their money-moving finance games. They’re watching Silicon Valley right now, salivating over what their plays will be to make money off the coming hype-cycle implosion. The best finance folks make the most money when the tide goes out… and they’re making moves now to be ready.
AI startups especially are at risk, because their niche projects can easily be outcompeted when Big Tech shows up with more generalized AI models.
The impact will be broad, but while the big players will take a hit, the new wave of startups stands to take the brunt of it. VCs and startups will suffer the most.
At the end of the day, though, that’s how the system is designed. It’s the needed forest fire that wipes out overgrowth and destroys all but the strongest trees.
My experience has gone the other way from OOP’s: anecdotally, I’ve had VCs ask me to review AI companies and tell them what those companies do so they can invest. One VC said VCs don’t really understand what they’re investing in and just want to get in on anything AI due to FOMO.
The company I reviewed didn't seem like a great investment, but I don't even think that matters right now.
To be clear, when I said “finance folks” I wasn’t really referring to VCs. I’m talking more about family-office types that manage big pools of money you don’t know about. The super-wealthy class that literally has more money than the King but would be horrified if you knew their names. Old-money types. They’re well aware of the “dumb VC” vibe that just throws money after hype. The finance folks I’m talking about are the type that eat failed VCs for lunch.
In my experience they invariably conflate LLMs with AI and can’t/won’t have the difference explained.
This is the blind spot that will cause many to lose their shirts, and is also why people are wrong about AI being a bubble. LLMs are a bubble within an overall healthy growth market.
Machine learning is a perfectly valid and useful field; traditional ML can produce very powerful tools. LLMs, by contrast, are fancy word predictors that have no concept of truth.
But LLMs look and feel like they're almost "real" AI, because they talk in words instead of probabilities, and so people who can't discern the distance between an LLM and AGI assume that AGI is right around the corner.
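
To make the "word predictor" point concrete, here's a minimal toy sketch (not any real model's API; all tokens and numbers below are made up for illustration). The core step of an LLM is: given the tokens so far, emit a probability distribution over the next token, then sample from it. The chat interface just decodes those samples into words.

    import math
    import random

    def softmax(logits):
        # Convert raw scores into a probability distribution.
        m = max(logits)
        exps = [math.exp(x - m) for x in logits]
        total = sum(exps)
        return [e / total for e in exps]

    # Hypothetical scores a model might assign to candidate next tokens
    # after the prompt "The capital of France is" (made-up numbers).
    candidates = ["Paris", "Lyon", "London", "purple"]
    logits = [9.1, 3.2, 2.5, -4.0]

    probs = softmax(logits)
    next_token = random.choices(candidates, weights=probs, k=1)[0]

    print(list(zip(candidates, [round(p, 4) for p in probs])))
    print("sampled next token:", next_token)

Nothing in that process consults a source of truth; "Paris" just happens to be the most probable continuation given the training data.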
If you believe AGI is right around the corner and skip over the bit where any mass application of AGI to replace workers for cheaper is just slavery but with extra steps, then of course it makes sense to pour money into any AI business.
And energy is the oxygen of the economy at large. The ELIZA effect is not the only bias disposing people to believe AGI is right around the corner; there are deeper assumptions many cling to.
Take that for what it is.