This is a really strained analogy. Nuclear bombs do exactly one, tremendously negative thing; their sole positive is that the outcome is so terrible that nobody uses them, for fear of suffering the same repercussions.
AI, on the other hand, has a wide range of extremely positive applications, some of which have such tremendous life-saving potential it's almost ridiculous. Many, perhaps most, of those might never be achieved without AI.
The analogy is as ridiculous to me as calling personal computers in the '80s nuclear bombs because they could be used to hack and shut down the power grid. Yes, they could. And I'm sure some people were scared of that prospect then, too.
What "extremely positive applications" does ChatGPT have, exactly? From where I'm standing all I see is an infinitely deep and vast ocean of pure spam, scams and data harvesting on a never-before seen scale where megacorps like Micro$oft hoover up any and all data that they can, meaning we do all the hard work while M$ and similar corpos get to sell our own work back to us at a premium.
ChatGPT isn't the be-all of AI advancements. However, a model that can coherently understand and explain advanced topics to people in a tailored way has huge educational benefits. Proper education sits at the core of every issue we face.
A subtler aspect of this is the potential for cheap, tailored counselling. A few iterations from now, it will no doubt be possible to improve the mental well-being of people who previously had no access, at comparatively little cost.
Those benefits again extend into every area; crime and poverty are rooted in both a lack of education and a lack of social and emotional support.
The social acceptance of chatting with an AI is important here, as it gets people over that mental hurdle. Running chats locally, so they aren't subject to privacy concerns, also advances benefits such as these.
There are positive applications to be found everywhere with AI, but they won't be found if we don't envisage, look for, and develop them. And they need to be found, for balance, because it remains true that there are many potential negatives.
Sorry, but all of this strikes me as a very naive take on where AI is headed. The only reality I can see happening is that it gets used to peddle even more ads to people, harvest every single scrap of data possible on everyone, and replace large swathes of the population with cheap labor for the ruling classes.
This utopia you envision, where we use AI for anything remotely good, sure would be nice, but given the way the world works, and the way the people pushing this AI in particular operate, there just isn't a chance in hell that's how it's gonna end up going.
This is a very naive take. Our best psychologists aren't using their expertise to solve mental health problems but to hack people's minds. What makes you think people will use LLMs for good? It's far more profitable to do bad with them.
Our best psychologists cost significant money, and there are a limited number of them (effect and cause). Whereas no profit motive is necessary to effect the changes I've described; that's the entire point and benefit here.
Any single not-for-profit social health organisation, of which there are a huge number, could use a tiny part of its funding to fine-tune an LLM on counselling resources and case transcripts. It'd cost little and would only need to be done once.
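To make that concrete, here's a minimal sketch of what such a fine-tune could look like with the open-source Hugging Face stack. The model name, file path, and hyperparameters are placeholders for illustration, not any real organisation's setup:

```python
# Minimal causal-LM fine-tuning sketch using Hugging Face Transformers.
# "gpt2" is a small stand-in model; "transcripts.txt" is a hypothetical
# file of anonymised counselling transcripts, one example per line.
from datasets import load_dataset
from transformers import (AutoModelForCausalLM, AutoTokenizer,
                          DataCollatorForLanguageModeling, Trainer,
                          TrainingArguments)

model_name = "gpt2"
tokenizer = AutoTokenizer.from_pretrained(model_name)
tokenizer.pad_token = tokenizer.eos_token  # GPT-2 has no pad token by default
model = AutoModelForCausalLM.from_pretrained(model_name)

# Load the raw transcripts and tokenize them.
dataset = load_dataset("text", data_files={"train": "transcripts.txt"})
tokenized = dataset["train"].map(
    lambda batch: tokenizer(batch["text"], truncation=True, max_length=512),
    batched=True,
    remove_columns=["text"],
)

# Standard language-modelling objective (mlm=False) over the transcripts.
trainer = Trainer(
    model=model,
    args=TrainingArguments(
        output_dir="counselling-model",
        num_train_epochs=1,
        per_device_train_batch_size=4,
    ),
    train_dataset=tokenized,
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
)
trainer.train()
```

The point isn't this exact script; it's that the whole thing plausibly fits into a few days of one engineer's time on commodity hardware.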
The major hurdle here, again, is education. Once such organisations realise how much more easily they can reach their goals using AI, they'll jump on it. The final obstacle is then social acceptance of AI assistance (growing now).
ChatGPT is a better Google. Instead of finding a page that matches your query, it can interpret the results of multiple pages and output a response tailored to your exact prompt. The only downside is that ChatGPT becomes your primary source instead of the page(s) it draws content from, so you can't verify its authenticity.
But the "extremely positive applications" to ChatGPT are, at the very least, the same positive applications of any other search engine.
I can't locate any nuance in that article. Hyperbole, panic, and plenty of unfounded assumptions serving those first two? Those I can find easily.
Good for clicks and for keeping an extremely manipulable public coming back for more, I guess.
Historically, whenever we have created new technology that is amazing and impactful but whose positives and negatives are not yet fully understood, it has worked out fine. If we want to be scientific about it, that's our observable track record here.
Restricting access to nuclear weapons is feasible because of how hard they are to make at any scale, and even then it's very difficult. They are significant projects for entire nation states to undertake.
Training an LLM, by contrast, is something random companies with some cash and servers are already doing.
So for nukes, the outcome of "pretty much only we have them" is an option. I don't think that's a realistic proposal for AI. Given that, is the outcome of "the more trustworthy people stop, the others continue" a good one?