These issues are often attributed to a bad implementation of AI, but I think the problem is a little more fundamental.
The potential of AI that causes VCs and investors to swap their eyes for dollar signs is its ability to take unstructured, unpredictable inputs and convert them into structured actions or data: in this case, a drive-through conversation into a specific order. However, the ability to generalize to unseen inputs (what we call common sense) is neural networks' glaring weakness. LLMs can look amazingly capable in internal testing, but when it comes to human conversation, there is a long and ever-increasing tail of unseen interactions.
As a data scientist in the contact center industry, I've seen this play out repeatedly with neural networks over the last decade.
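To make the "unstructured in, structured out" pipeline concrete, here's a minimal Python sketch of the extraction step these drive-through systems are attempting. Everything in it is illustrative: `call_llm`, the prompt, and the order schema are assumptions for the sake of the example, not any vendor's actual implementation.

```python
import json

# Hypothetical placeholder for whatever chat-completion API is actually used;
# the post doesn't name a specific provider or library.
def call_llm(prompt: str) -> str:
    raise NotImplementedError("wire this to your model provider of choice")

PROMPT_TEMPLATE = (
    "Extract the customer's order as JSON with a single key 'items': "
    "a list of objects with 'name', 'size', 'quantity', and 'modifiers'. "
    "Respond with JSON only.\n\n"
    "Customer: {utterance}"
)

def parse_order(utterance: str) -> dict:
    """Turn a free-form drive-through utterance into a structured order."""
    raw = call_llm(PROMPT_TEMPLATE.format(utterance=utterance))
    try:
        return json.loads(raw)
    except json.JSONDecodeError:
        # The long tail lives here: a phrasing the prompt never anticipated,
        # or a model reply that isn't valid JSON. Internal test sets rarely
        # cover it; production traffic always does.
        return {"items": [], "needs_human": True}

# The kind of input that looks easy until it isn't:
# parse_order("uh gimme the number 3 but swap the fries, actually never mind the fries")
```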