That's because people can't handle speed. With a natural delay, they could cool down or at least become more detached. Society needs natural points where people are forced to detach from what they do. That's one reason why AI and high-speed communications are so dangerous: they accelerate what we do too quickly for us to remain balanced. (And I am speaking in general here; of course there will be a minority who can handle it.)
Let's also not forget material use. According to [1], "Global AI demand is expected to consume 4.2-6.6 billion cubic meters of water by 2027, surpassing Denmark’s total annual water withdrawal of 4-6 billion cubic meters." And there's also all the mining required for the materials to build the computers, and the fossil fuels burned to manufacture them in places like Taiwan and ship them around the world.
Many things don't have to be the way they are. But as long as powerful big tech can subsidize its costs by offloading unregulated environmental damage onto the commons, it will only pay lip service to making things more efficient. Money is a much more powerful motivator to the unscrupulous than protecting the long-term health of the commons.
Not if spending a little extra money and keeping the inefficiency helps make money in other ways, such as getting the product out faster or letting their workers focus on the tech stack. Saving a little on electricity might cost them in development speed, so they're likely to use the cheap electricity and offload the cost onto the environment.
> I think AI does help people do better research faster, which is a significant uplift to humanity,
One must consider both sides. Better research also contributes (no question) to more consumerism and the furthering of technology, which in turn burns more fossil fuels. It's time we acknowledged that research isn't free, and that most of the damage to the biosphere was enabled by science in the first place.
And AI uses an enormous amount of energy. For example, according to [1], "In Ireland, [...] electricity demand from data centres represented 17% of the country’s total electricity consumption for 2022". And we also have to consider the raw materials and mining involved.
Of course you are wrong. "IEA's models project that data centres will use 945 terawatt-hours (TWh) in 2030, roughly equivalent to the current annual electricity consumption of Japan." [1] And don't forget the recent announcement by Fermi America to build a 6-gigawatt nuclear-powered datacenter in Texas (rough math on that scale below). And what are we getting in return:
(1) AI is primarily used, by and large, to accelerate consumerism.
(2) AI has very few applications that actually solve the world's problems (the world's problems are mostly nontechnical). The cited scientific or medical applications are either too abstract, or else likely to make the problem worse. And in the medical case, maybe AI will help a few hundred thousand people, but it will hurt or kill many more through its contribution to climate change. So not worth it.
I consider those who promote AI to be enemies of humanity and of biological life, since they are using, with reckless abandon, a resource we should be using less of.
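For a rough sense of that scale, here is a back-of-the-envelope sketch (the assumption of continuous full-nameplate operation is mine, not from the announcement):

```python
# Back-of-the-envelope: how does a 6 GW datacenter compare to the IEA's
# 945 TWh/year projection for all datacenters in 2030?
# Assumes continuous operation at full nameplate power (an assumption).
nameplate_gw = 6.0
hours_per_year = 8760
annual_twh = nameplate_gw * hours_per_year / 1000  # GWh -> TWh
print(f"{annual_twh:.1f} TWh/year")                          # ~52.6
print(f"{annual_twh / 945:.1%} of the IEA 2030 projection")  # ~5.6%
```

That's over 5% of the projected global datacenter total from a single site, if it ever ran flat out.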
Nuclear is good though, it's the single densest source of energy in the world (and, well, outside of it too).
I don't get your argument though: where has it "accelerated consumerism"? And you can't be serious about its medical applications; I doubt more people would die from its incremental climate change effects (compared to, say, car accidents, let alone car pollution in cities) than would be saved from death. Even outside of the scientific fields, AI sure is solving a lot of my problems; saying one is an "enemy of humanity" is the sort of hyperbole I'd only see on HN.
> Nuclear is good though, it's the single densest source of energy in the world (and, well, outside of it too).
It would be good if the energy were used in critical applications. Wasted energy is still wasted.
> I don't get your argument though: where has it "accelerated consumerism"?
Are you serious? It is making people richer, allowing them to make products faster, including software. Of course that makes consumption faster.
> And you can't be serious about its medical applications, I doubt more people would die from its incremental climate change effects (as compared to, say, car accidents, much less car pollution in cities) as compared to being saved from death.
No, I am serious. And it will get much worse in the future. Check out [1], [2], etc. And let's not forget the increased storms, flooding, and so on. We are all responsible for that. But those that use electricity at scale, like big tech, should be held especially responsible, because they encourage the bad behaviour and consume resources directly.
> Even outside of the scientific fields, AI sure is solving a lot of my problems, saying one is an "enemy of humanity" is the sort of hyperbole I'd only see on HN.
Solving your problems doesn't really mean solving the world's problems, just like making the top 10% richer doesn't make the world a better place. I absolutely consider them the enemy of humanity.
> Maybe on break but while deep-working I just want the information necessary to do the job, the communication being there to communicate the information, efficiently.
The problem is that we are slowly being pushed to become cogs who only really think this way. We shouldn't just want to be as efficient as possible. Technology already reduces our ability to connect, which is why connections at work seem weird or shallow in the first place. We simply don't need each other as much, so it makes sense that AI seems like the next logical step.
Your sentiments are just your instinctual desire to move to the next local maximum in a sequence of descending maxima that lead to the bottom.
> My argument is that using a machine to replace your thinking, your voice, or your relationships is a very bad thing. Humans have intrinsic worth—machines do not.
I agree with that, and the only logical path, if we are to preserve this principle, is to eradicate AI, not try to control it. There is no way to control it (think prisoner's dilemma, greedy individuals, etc.).
No, what will happen is that time wasted believing in magical LLMs, instead of developing technical and interpersonal skills, will prove unproductive long-term. Like most gold-rush claims, it won't pan out, and the bust will be followed by broad amorality among the newly destitute.
Unproductive for the person perhaps, but not for the development of technology. But I do agree in general that using AI is not a great strategy for human beings. Read a book indeed.
That is true. What will happen is that consumerism and capitalism will be pushed aside in favor of direct technological construction. In this world, AI, rather than the market, does the optimizing.
> I am not saying that LLMs are worthless—they are marvels of engineering and can solve some particularly thorny problems that have confounded us for decades.
Disagree with that, because firstly, they have not really solved any problems that outweigh the negatives that they have unleashed and will unleash on society.
So they make programmers more effective: is that actually a good thing, though? Fact is, most software is designed to make consumerism and corporations more effective, and that's not really a good thing for the long-term health of the planet.
Your article also posits a sort of separation between keeping intellectual tasks primarily human and allowing AI/LLMs to work in specific domains. However, those with the power don't care about principles. They just want to replace as much as they can, and to use the human instinct to get ahead quickly to do so. And no amount of principle will stop them. AI is just too powerful to be used in a way that is consistent with human beings keeping their intellectual environment healthy.
> Disagree with that...they have not really solved any problems that outweigh the negatives
And I disagree with that. They are marvels of engineering, and they have solved thorny problems. Just because the problems they've solved in the very short time they've existed don't yet outweigh the negatives doesn't mean they won't soon, and it doesn't make either statement false.
Great things take time, and great omelets are made from broken eggs. Nothing new under the sun, except AI.
You can't be serious. This is a highly specialized field, a topic that a few dozen people have any interest in. Rather useless: basically a mentally stimulating game for some professors. I have a math PhD and know very well that math passed its point of diminishing returns in solving real-world problems a long time ago.
Not going to do all your legwork for you, but there are tons of fields that are changing rapidly because of AI. In materials science, there's the thorny problem of how to accelerate materials development, or even how to perform non-destructive testing of materials.
> AI, primarily through generative AI models, has dramatically changed our approach by accelerating the design process significantly. These models can predict material properties from extensive datasets, enabling rapid prototyping and evaluation that used to take years. We can now iterate designs quickly, focusing on the most promising materials early in the development phase, enhancing both efficiency and creativity in materials science. This is a huge leap forward because it reduces the time and cost associated with traditional materials development, allowing for more experimentation and innovation.
> One notable application is using deep learning models to infer the internal properties of materials from surface data. This technology is groundbreaking, particularly for industries like aerospace and biomedical, where non-destructive testing is crucial. These models can predict internal flaws or stresses by analyzing external properties without physically altering the material. This capability is essential for maintaining the integrity of critical structures and devices, making materials safer and more reliable while saving time and resources. Other recent advances are in multimodal AI, where such models can design materials and understand and generate multiple input and output types, such as text, images, chemical formulas, microstructural designs, and much more.
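To make the surrogate-modeling idea above concrete, here is a minimal illustrative sketch (entirely synthetic data and an off-the-shelf scikit-learn regressor; a stand-in for the far richer models the quote describes, not the actual systems):

```python
# Sketch of ML-based non-destructive testing: learn to predict an internal
# material property from surface measurements, skipping the destructive test.
# All data here is synthetic; the feature/target meanings are hypothetical.
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.metrics import r2_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Pretend each sample is 8 surface measurements (e.g. hardness, wave speeds).
n_samples, n_features = 2000, 8
X = rng.normal(size=(n_samples, n_features))

# Hypothetical internal property (e.g. residual stress): a nonlinear
# function of the surface features plus measurement noise.
y = X[:, 0] ** 2 + 0.5 * X[:, 1] * X[:, 2] + rng.normal(scale=0.1, size=n_samples)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = RandomForestRegressor(n_estimators=200, random_state=0)
model.fit(X_train, y_train)

# Evaluate on unseen parts: a good fit means the slow physical test
# can often be replaced by a fast prediction.
print("R^2 on held-out samples:", round(r2_score(y_test, model.predict(X_test)), 3))
```

Real systems use far richer inputs (spectra, images, microstructure) and far more capable models, but the pattern is the same: learn a fast mapping so the slow or destructive measurement can be skipped.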
I don't consider that anything good. Design is just about making new products faster, which is a bad thing as it accelerates consumerism. And medical scans? Those might help maybe a thousand extra people, at the cost of gigawatts of power consumption polluting the entire planet.
To me, all of those positives are dwarfed by negatives.
Increasing research iteration speed is not speculation. Showing double the rate of detecting issues in scans is also not speculation.
Drawing distinctions between LLMs and other kinds of ML and AI is not particularly interesting: it's all machines using pattern recognition to automate things that previously took thought.