Here is my current thinking:
Creating LLMs or AGI-style models requires massive compute and data that startups are unlikely to have, so incumbents have a huge advantage in general AI. That leaves building on top of an incumbent's API (or similar) to serve a niche, but it's difficult to create a moat with such a startup, and the incumbents keep innovating and shipping their own services that often make these startups obsolete.
Therefore, an "AI startup" would do best to develop domain expertise (or have a co-founder with domain expertise), create a useful product in that domain, collect data from users, and finally use that data to build a useful domain-specific narrow AI. Many software engineers want to create developer tools with AI, since that's the domain they know best. But that's precisely the domain most likely to be oversaturated with AI tools, because people working in AI already tend to be developers who know software development.
Are there some flaws in this thinking? Do you agree/disagree? I'm curious to see what HN thinks.
In particular, I'm wondering what the best way is for a technical (CS) person to acquire this domain expertise, whether it's necessary at all, and whether it's better to learn as you go or to find a co-founder from a non-computer domain.
I had the chance to talk with one of these healthcare startups. They're trying to build a product that competes with Dragon DAX Copilot (which is incredible) by cobbling together tools from Azure, Google Cloud Platform, and AWS. I didn't believe their pitch, and their prices were insane: they'd raised some seed funding and were still building an MVP, while also trying to pull in revenue. They shared a document with me that was "confidential", presumably because anyone with technical know-how could see they had nothing unique; they were just shuttling data between a couple of cloud providers to use different pre-built services. When confronted with this they got rather defensive, and moderately offensive, and even tried to do an end-run around IT by going to providers directly and filling their heads with nonsense.
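To make the "just shuttling data between cloud providers" point concrete, here is roughly what that kind of wrapper amounts to. This is a hypothetical sketch, not their actual code; the model name, environment variables, file name, and prompt are placeholders, and a real clinical product would also need continuous transcription, speaker diarization, and PHI handling.

```python
# Hypothetical glue code: transcribe an encounter recording with Azure Speech,
# then ask an LLM to draft a note. Placeholder config throughout.
import os
import azure.cognitiveservices.speech as speechsdk
from openai import OpenAI

def transcribe(path: str) -> str:
    # recognize_once() only captures a single short utterance; a real product
    # would use continuous recognition with diarization.
    cfg = speechsdk.SpeechConfig(
        subscription=os.environ["AZURE_SPEECH_KEY"],
        region=os.environ["AZURE_SPEECH_REGION"],
    )
    audio = speechsdk.audio.AudioConfig(filename=path)
    recognizer = speechsdk.SpeechRecognizer(speech_config=cfg, audio_config=audio)
    return recognizer.recognize_once().text

def draft_note(transcript: str) -> str:
    client = OpenAI()  # reads OPENAI_API_KEY from the environment
    resp = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model
        messages=[
            {"role": "system", "content": "Summarize this clinical encounter as a SOAP note."},
            {"role": "user", "content": transcript},
        ],
    )
    return resp.choices[0].message.content

if __name__ == "__main__":
    print(draft_note(transcribe("encounter.wav")))
```

If the whole product is a thin layer like this over someone else's models, there's no moat, which was exactly the problem with their pitch.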
Right now AI can do 10% of what people promise it can, and it's so new that buyers don't have the tools to tell who is real and who is a fly-by-night operator. It's like the early days of the App Store: the space is full of incredibly low-effort apps right now, and most are garbage. People are rushing in like it's a gold rush; they want some easy money before bigger, more competent products come out.
I'd say this is the biggest issue: too many ethically challenged people are spinning up cheap minimum-viable-product apps and slick marketing materials to take some cash off the table before it all collapses. The industry is full of snake oil.