This paper is far too long and poorly written, even considering that the topic of expert systems was once "a thing."
There are three key parallels that I see applying to today's AI companies:
1. Tech vs. business mismatch. The author points out that AI companies were (and are) run by tech folks, not business folks. The emphasis on the glory of the tech doesn't always translate to effective results for their business customers.
2. Underestimating the implementation moat. The old expert systems and LLMs have one thing in common: both take a tremendous amount of work to integrate into an existing system. Putting a chat box on your app isn't AI. Real utility requires specialized RAG software and domain knowledge. Your customers have the knowledge, but can they write that software? Without it, your LLM is just a chatbot.
3. Failing to allow for compute costs. The hardware costs to run expert systems were prohibitive, but LLMs pose an entirely different problem. Every single interaction with them has a cost, on both inputs and outputs. It would be easy for a flat-rate customer to burn through far more LLM time than you're charging for. These aren't fixed costs amortized over the user base, like the ones software businesses used to have. Many companies' business models won't be able to absorb that variation.
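To make the second point concrete, here is a minimal sketch of what "specialized RAG software" means: retrieve relevant snippets from a domain knowledge base and prepend them to the question before it reaches the model. The knowledge-base entries are invented for illustration, and the scoring is naive keyword overlap; a production system would need chunking, embeddings, and a vector index, which is exactly the integration work at issue.

```python
# Hypothetical toy RAG pipeline: naive keyword-overlap retrieval plus
# prompt assembly. Illustrative only -- not any vendor's actual API.

def retrieve(query: str, docs: list[str], k: int = 2) -> list[str]:
    """Rank docs by how many words they share with the query; return top k."""
    q_words = set(query.lower().split())
    ranked = sorted(docs,
                    key=lambda d: len(q_words & set(d.lower().split())),
                    reverse=True)
    return ranked[:k]

def build_prompt(query: str, docs: list[str]) -> str:
    """Stuff the retrieved context into the prompt sent to the LLM."""
    context = "\n".join(retrieve(query, docs))
    return f"Context:\n{context}\n\nQuestion: {query}"

# Invented domain knowledge base for the example.
kb = [
    "Pump P-301 requires seal replacement every 2000 hours.",
    "The cafeteria opens at 7 a.m.",
    "Seal failures on P-301 usually show as pressure drop.",
]

print(build_prompt("Why is pump P-301 losing pressure?", kb))
```

Even this toy version shows where the moat is: the retrieval quality, the chunking of the knowledge base, and the prompt assembly all depend on domain expertise the vendor doesn't have.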
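The third point is back-of-envelope arithmetic, so here is a sketch of it. All prices and usage figures below are hypothetical assumptions chosen for illustration, not any provider's actual rates; the shape of the problem is what matters: a heavy user on the same flat plan can cost orders of magnitude more than a light one.

```python
# Hypothetical per-token pricing (dollars per million tokens) -- assumed
# figures for illustration, not a real provider's price list.
def monthly_llm_cost(interactions: int, in_tokens: int, out_tokens: int,
                     in_price_per_m: float = 3.00,
                     out_price_per_m: float = 15.00) -> float:
    """Dollar cost of one user's month of LLM usage.

    Every interaction is billed on both its input and output tokens,
    so cost scales with usage rather than amortizing over the user base.
    """
    per_interaction = (in_tokens * in_price_per_m
                       + out_tokens * out_price_per_m) / 1_000_000
    return interactions * per_interaction

# Two users paying the same flat monthly fee:
light = monthly_llm_cost(30, 500, 300)       # occasional short questions
heavy = monthly_llm_cost(2000, 4000, 1500)   # long documents, all day

print(f"light user: ${light:.2f}/month, heavy user: ${heavy:.2f}/month")
```

Under these assumed numbers the light user costs pennies while the heavy user costs tens of dollars a month, so a $20 flat-rate plan is profitable on one and underwater on the other.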