
There is rarely a constructive discussion around the term “AI”. You can’t say anything useful about what it might lead to or how useful it might be, because it is purely a marketing term with no specific meaning (and neither word in its abbreviation has one either).

Interesting discussions tend to avoid “AI” in favour of specific terms such as “ML”, “LLM”, “GAN”, “Stable Diffusion”, “chatbot”, or “image generation”. These terms refer to specific tech and applications of that tech, and allow one to argue about specific consequences for science or society (use of ML in biotech vs. proliferation of chatbots).

However, certain sub-industries prefer “AI” precisely because it is so vague: it offers seemingly unlimited potential (please give us more investment money, stonks go up) and creates a vibe of a conscious being, which is useful when pretending not to be working around IP laws while building tools on data obtained without relevant licensing agreements (cf. the countless “humans have the freedom to read, therefore it’s unfair to restrict the uses of a software tool” fallacies, perpetuated even by seemingly technically literate people in pretty much every relevant forum thread).



It's not even that certain sub-industries prefer "AI"; it's the umbrella term a company can use in marketing for virtually any automated process that produces a seemingly subjective result.

Case in point:

For a decade, the implementation of cameras went through development, testing, and tuning of Auto Exposure, Auto Focus, and Auto White-Balance ("3A") engines, as well as image post-processing.

These engines ran on an Image Signal Processor (ISP), or sometimes on the camera sensor itself. Engineering teams did extensive work building these models and optimizing them to run with low latency on an ISP.
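To give a sense of how classic this work is, a toy auto-exposure step is just a feedback loop on measured brightness, with no learning involved (a minimal sketch; the names and constants are made up for illustration):

    import numpy as np

    def auto_exposure_step(frame: np.ndarray, exposure_ms: float,
                           target_luma: float = 0.18) -> float:
        # Meter the frame (here: plain mean of normalized pixels), then
        # scale exposure toward a mid-grey target. Real AE engines add
        # metering weights, hysteresis and flicker handling, but the core
        # is a hand-tuned control loop, not machine learning.
        mean_luma = float(frame.mean())
        correction = target_luma / max(mean_luma, 1e-6)
        return min(max(exposure_ms * correction, 0.03), 33.0)  # clamp to a sane shutter range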

Suddenly AI came along and all of these features became "AI features". One company started with "AI-assisted Camera" to promote the process everyone had been doing all along. So everyone else had to introduce AI, without any disruptive change in the process.


I remember something similar when the term "cloud" came up. It is still someone else's server or datacenter, with tooling.


Yeah, can they just stop coining terms to refer to old, pre-existing things? I still hate the term "cloud".


My favorite description is “The cloud is a computer you don’t own in Reston, VA”


> One company started with "AI-assisted Camera" to promote the process everyone had been doing all along.

Before the "AI" labeling, more advanced image processing was often called "computational photography", at least in the world of smartphone cameras. Because they have tiny image sensors and lenses, smartphone cameras need to do a lot of work to get a decent image out of any environment that doesn't have perfect lighting. The processing is mostly traditional computer vision.
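A toy version of the multi-frame trick those pipelines lean on, assuming (unrealistically) that the burst frames are already aligned:

    import numpy as np

    def merge_burst(frames: list[np.ndarray]) -> np.ndarray:
        # Average N noisy short exposures; uncorrelated sensor noise drops
        # by roughly sqrt(N). Real pipelines add alignment, ghost rejection
        # and tone mapping, but the core is classic signal averaging.
        stack = np.stack(frames).astype(np.float64)
        return stack.mean(axis=0)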

There are now legitimate generative AI features being peddled, like editing people out of (or into) photos. But most of the image processing pipelines haven't fundamentally changed; they've just gained AI labeling to please marketers and upper management.


I agree it's completely meaningless. At this point I think marketing would label a toilet fill valve as "AI".


Well the "smart toilet" is definitely a thing you can buy today:

> The integration of Artificial Intelligence (AI) and the Internet of Things (IoT) in bathroom fixtures, particularly toilets, is shaping the future of hygiene, convenience, and sustainability.


While automated AI measurement of the chemical makeup of .. human effluent could be helpful for tracking health trends, I fear it'd also come with built-in integrations for Instagram and TikTok.


Good news! The integration will be used to customize your feed by recommending foods and medicine that you might enjoy.

The future is allowing advertisers to bid on specific proteins and triglyceride chains detected by the smart toilet.


or (and I can actually see this happening) an Amazon integration to reorder bathroom tissue and bowl cleaner



Also, the strong predictions about AI use a vague term because the tech often doesn't exist yet. There isn't a chatbot right now that I feel confident can out-perform me at systems design, but I'm pretty certain something that can is coming. Odds are also good that in 2-4 years there will be a new hotness to replace LLMs that is much more functional (maybe MLLMs, maybe called something else). We can start to predict and respond to its potential even though it doesn't exist yet; it just takes a little extrapolating. But it doesn't have a name yet.

Which is to say I agree - obviously, if people are talking about "AI", they don't want to talk about something that exists right this second. If they did, it'd be better to use a precise word.


Totally agree.

Also, the term 'LLM' is more about the mechanics of the thing than about what the user gets. LLM is the technology, but some sort of automated artificial intelligence is what people are generally buying.

As an example, when people use ChatGPT and get an image back, most don't think "oh, so the LLM called out to a diffusion API?" - they just think "oh, ChatGPT can give me an image if I give it a prompt".
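To make that concrete, the plumbing is roughly shaped like this (purely illustrative; every name here is made up and no real API is implied):

    def call_diffusion_backend(prompt: str) -> bytes:
        # Stand-in for a request to a separate image-generation model.
        return f"<image bytes for: {prompt}>".encode()

    def chat_turn(user_message: str):
        # The chat model decides the request is for an image and delegates
        # to a different model entirely; the user just sees "ChatGPT gave
        # me an image".
        if "draw" in user_message.lower():
            return call_diffusion_backend(user_message)
        return "a plain text completion"

    print(chat_turn("draw a cat in a hat"))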

Although again, the term is entirely abused to the extent that washing machines can contain 'AI'. Still, just because a term is abused doesn't necessarily mean it's not useful - everything had "Cloud" in it 10 years ago, yet that term was still useful enough to stick around.

Perhaps the issue is that AI can mean lots of things, but I don't yet know of another term, readily recognisable, that encapsulates the last 5 years of advancements in automated intelligence and where that technology is likely to go. Perhaps we need a new word, but AI has stuck and there isn't a good alternative yet, so it is probably here to stay for a bit!


> Although again, the term is entirely abused to the extent that washing machines can contain 'AI'.

I remember when the exciting term in appliances was "fuzzy logic". As a technology it was just adding some sensors beyond simple timers and thermostats to control things like run time and temperatures of automated washers.
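The whole "fuzzy logic" era boiled down to blending a few hand-written rules, something like this (a toy sketch; the thresholds are invented for illustration):

    def triangular(x, lo, mid, hi):
        # Triangular membership function: 0 outside [lo, hi], 1 at mid.
        if x <= lo or x >= hi:
            return 0.0
        return (x - lo) / (mid - lo) if x <= mid else (hi - x) / (hi - mid)

    def wash_minutes(load_kg, dirtiness):
        # "IF load is heavy AND clothes are dirty THEN wash longer",
        # blended by membership weights instead of a hard threshold.
        heavy = triangular(load_kg, 2.0, 6.0, 10.0)
        dirty = triangular(dirtiness, 0.3, 0.7, 1.0)
        weights = [min(heavy, dirty), 1.0 - heavy]
        times = [60.0, 25.0]  # rule outputs in minutes
        return sum(w * t for w, t in zip(weights, times)) / max(sum(weights), 1e-6)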


> As an example, when people use ChatGPT and get an image back, most don't think "oh, so the LLM called out to a diffusion API?" - they just think "oh, ChatGPT can give me an image if I give it a prompt".

Note: the first part of your example entirely skips the process of obtaining the data for, and training, both of the above, which is a crucial part at least on par with which component called which API.

I don’t think it’s unreasonable to expect people to build an intuition for it, though. It’s healthy when underlying processes are understood to at least a few layers of abstraction, especially in potentially problematic or morally grey areas.

As an analogy to your example, you could say that when people drink milk they usually don't think "oh, so this cow was forced to reproduce 123 times, with all her children taken away and murdered, so that she makes more milk" and simply think "the cow gave this milk".

However, as with milk, so with ML tech: it is important to realize that 1) people do indeed learn the former and build the relevant intuitions, and 2) the industry relies on information asymmetry and mass ignorance of these matters (and we all know that information asymmetry is the #1 enemy of a free market working as designed).


I like your analogy, but I don't think people even go "the cow gave this milk" - I think they tend to just go "mmmm yummy yummy milk"

People usually see the product rather than the process.


I disagree; people are not mindless consumers. People are interested in where things come from, and given the knowledge, most would make the ethically better choice when they can afford it.

What breaks this (and prevents the free market from working as intended) is the lack of said knowledge, i.e., information asymmetry. In the case of the milk example, I think it mostly comes down to two factors:

1) Lack of this awareness is financially beneficial to the respective industries. (No need for conspiracy theories, but they are sure as hell not going to run educational campaigns on these topics, or facilitate any such efforts in any way, up to and including not suing them. This is further complicated by the fact that many of these industries are integral parts of local economies, which makes it in the interest of the respective governments to follow suit.)

2) The facts can be so harsh that it can be difficult to internalise and accept reality.

Even still, many people do learn and internalise this (ever noticed the popularity of oat and almond milk in coffee shops, despite higher prices?), so I think it is not unreasonable to expect the same in certain ML-based industries.


I think this is veering off into a specific ethical viewpoint about milk supply chains and production, rather than an analogy for product vs. process.

But personally I can't see how the popularity of oat and almond milk in independent coffee shops tells us much about how people perceive the inner workings of ChatGPT.


There is a difference between descriptive and prescriptive statements. I don’t disagree that the status quo is that many people may not care, but I believe it is reasonable to expect them to care, just like they do in other domains (e.g., the milk analogy). The information asymmetry can (and maybe should) be fought back against.


This article is all about PINNs being overblown. I think it's a reasonable take. I've seen way too many people put all their eggs in the PINNs basket when there are plenty of options out there. Those options just don't come with a ticket to the hype train.


I think AI is a useful term which usually means a neural network architecture, without specifying the exact architecture.

I think "machine learning" doesn't mean this, as it can also refer to linear regression, non-linear optimisation, decision trees, Bayesian networks, etc.
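For instance, both of the following count as machine learning by any textbook definition, and neither involves a neural network (a minimal scikit-learn sketch on toy data):

    import numpy as np
    from sklearn.linear_model import LinearRegression
    from sklearn.tree import DecisionTreeClassifier

    X = np.array([[0.0], [1.0], [2.0], [3.0]])
    y = np.array([0.1, 0.9, 2.1, 2.9])

    reg = LinearRegression().fit(X, y)                             # ML, no neural net
    clf = DecisionTreeClassifier().fit(X, (y > 1.5).astype(int))   # also ML, also no neural net
    print(reg.predict([[2.5]]), clf.predict([[2.5]]))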

That's not to say that AI isn't abused as a term - but I do think a more general term for the last 5 years of advancements in using neural networks to solve problems is useful. Particularly as it's not obvious which model architectures would apply to which fields without more work (or even whether novel architectures will be required for frontier science applications).


This is incorrect. Machine learning is a term that refers to numerical, as opposed to symbolic, AI. ML is a subset of AI, as is symbolic/logic/rule-based AI (think expert systems). These are all well-established terms in the field. Neural networks include deep learning and LLMs. Most AI has gone the way of ML lately because of the massive numerical processing capabilities available to those techniques.

AI is not remotely limited to Neural Networks.


The field of neural network research is known as Deep Learning.


Eh, not really. All Deep Learning involves neural networks, but not all neural networks are part of deep learning. To be fair, any modern network is also effectively built by deep learning, but your statement as such is inaccurate.
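Concretely, here is a network nobody would call deep learning: a single hidden layer in plain numpy (weights are random, purely for illustration):

    import numpy as np

    rng = np.random.default_rng(0)
    W1, b1 = rng.normal(size=(4, 2)), np.zeros(4)  # one hidden layer of 4 units
    W2, b2 = rng.normal(size=(1, 4)), np.zeros(1)

    def forward(x):
        h = np.tanh(W1 @ x + b1)  # the only hidden layer: shallow, not deep
        return W2 @ h + b2

    print(forward(np.array([0.5, -0.2])))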


> There is rarely a constructive discussion around the term “AI”.

You hit the nail on the head there. AI, in its broadest terms, exists at the epicenter of hype and emotions.



