It makes sense to protect investors from unwittingly investing in new "AI" tech that isn't really new AI tech, but why do consumers need to be protected? If a software product solves their problem equally well with deep learning or with a more basic form of computation, how is the consumer harmed by false claims of AI?
To put it another way: if you found out that ChatGPT was implemented without any machine learning, and was just an elaborate creation of traditional software, would the consumer of the product have been harmed by the false claims of AI?
If you buy a painting advertised as a Monet, you are similarly not harmed if it wasn’t actually painted by Monet. But people like to know what they’re buying.
Less sarcastically: information about how a thing is made helps consumers reason about what it's capable of. The whole reason marketers misuse the term is to mislead people about exactly that.
Yeah - investors need to know whether the tech will scale as the business grows, and whether it has a good chance of improving if it's trained on a larger dataset or as ML techniques improve generally.
Consumers should care about whether a product can solve an AI-like problem that normally requires domain knowledge. They shouldn't care whether it's done by ML, rules-based systems, or people. (Except that they may want assurance the product will continue to support them as the customer scales.) They should also care about how the decision-making works.