Curious where you get the idea that regional accents are gone. If you travel around much in the US you'll hear many different regional accents. I have relatives from the west coast, mid-west, south and east coast (we're spread around) and each region has an easily recognizable accent. Some more pronounced than others, but still very much alive.
I've noticed this more in Belgium Flanders.
A region of a postage-stamp country, compared to the US, and yet its accents/dialects often weren't even fully mutually intelligible over distances that weren't so great.
This homogenization was first pushed through education and the like, but the accents survived among the more rural folk (insofar as "rural" exists in such a densely populated region). Now media and migration are rapidly killing them off.
In my experience I don't notice any difference in accent between the east coast and the west coast. The only regional accent I notice in many native English speakers is Southern. All other accents seem to be cultural (AAVE, ESL) or dying out (older generations have them, younger ones don't).
Social media has already homogenized our thoughts so much. So many facts and perspectives are presented that it's impossible to construct our own opinions on it all without taking inspiration from others, and the upvote button provides a convenient consensus.
Geography still dictates some things in the diversity of the experiences it imparts, though admittedly much of our technology exists to insulate us from that stuff.
The problem in the case of AI is who is curating that homogeneity, and to what end. Dynamic systems like IRC and messengers let folks connect and gravitate more “naturally”, while AI - being a walled garden curated by for-profit entities funded by billionaire Capitalists - naturally have a vested interest in forcing a sort of homogeneity that benefits their bottom line and minimizes risk to their business model.
Not sure about that. Billionaire Capitalists live in this world too. They might cause harm, sure, but that harm generally takes predictable form and is of finite magnitude.
AI behavior on the other hand can cause under-informed users to do crazy things that no one would ever want. The form of the harm is less predictable, the magnitude isn't limited by anything except the user's ability and skepticism.
Imagine whatever US president you think is least competent talking to ChatGPT. If their conversation ventures into discussion of a Big Red Switch That Ends The World, it's eventually going to advise on all the reasons the switch should be flipped, because that's exactly what would happen in the mountains of narrative material the LLM has been trained on.
Hopefully there is no end-the-world button, and even the worst US president isn't going to push it because ChatGPT said it was a good idea. ... But you get the idea, and there absolutely are people leaving their families and doing all manner of crazy stuff because they accidentally prompted the AI into writing fiction starring them, and the AI is advising them to live the life of a fictional character.
I think AI doomers have it all wrong: AI risk, to the extent it exists, isn't from any kind of super-intelligence, it's largely from super-insanity. The AI doesn't need any kind of super-human persuasion; it turns out vastly _sub_-human persuasion is more than enough for many.
Wealthy people abusing a new communications channel to influence the public isn't a new risk, it's a risk as old as time. It's not irrelevant, by any means, but we do have a long history of dealing with it.
> I think AI doomers have it all wrong, AI risk to the extent it exists isn't from any kind of super-intelligence, it's significantly from super-insanity. The AI doesn't need any kind of super-human persuasion, turns out vastly _sub_-human persuasion is more than enough for many.
Totally agree. We have a level of technology today that is enough to ruin the world. We don’t need to look any further for the threat to our souls.
> AI behavior on the other hand can cause under-informed users to do crazy things that no one would ever want. The form of the harm is less predictable, the magnitude isn't limited by anything except the user's ability and skepticism.
> Not sure about that. Billionaire Capitalists live in this world too.
They do, all right. Or rather: they live in their world, and we also live in their world.
>They might cause harm, sure, but that harm generally takes predictable form and is of finite magnitude.
"Harm of finite magnitude" extends up to the boundaries of infinity. No mortal can cause infinite harm, so the limitation you offer... just isn't.
> Wealthy people abusing a new communications channel to influence the public isn't a new risk, it's a risk as old as time. It's not irrelevant, by any means, but we do have a long history of dealing with it.
The "long history of dealing with" media monopolization is a long history of defeats of the common man - press, radio, tv, internet - were all subjugated by monopolistic interests. And you think that's somehow good for the subjugated?
> AI behavior on the other hand can cause under-informed users to do crazy things that no one would ever want. The form of the harm is less predictable, the magnitude isn't limited by anything except the user's ability and skepticism.
Ah, that was it, the little people are way too dangerous and should be put in shackles for their own good. The big bosses, on the other hand, can only do harm that "generally takes predictable form and is of finite magnitude". That's very soothing and all but unfortunately, both history and logic lead to the exact opposite conclusion.
I'm still amused by the bold claims you make: according to you, people of unlimited power and means can only do harm of finite magnitude, while little people of no power and no means are very scary and unpredictable indeed... That could only be the worldview of the former, imposed on the latter.
> Not sure about that. Billionaire Capitalists live in this world too. They might cause harm, sure, but that harm generally takes predictable form and is of finite magnitude.
They don't really think that way.
See Trump defunding medical research. Capitalists need medical care too, but they think quarterly.
Exactly how and which medical research gets funded is a thing that reasonable and well-informed people can debate - particularly since the US taxpayer funds medical research at greater levels than our European peers, who are so often cited as having enlightened public policy on the subject of health care.
> See Trump defunding medical research. Capitalists need medical care too, but they think quarterly.
You're right about the defunding of research - it's loudly proclaimed as a way to save money.
You're also right that the long-term effects would cost far more, but I would add: only if the current social contract is kept in place. As I see it, the chainsaw-brandishing is really aimed at the latter; the former is just for show.
Redefining the social contract appears to be the endgame here, and in that case the thinking goes much further than quarterly.
But not that deep... because if 99.999% of people can't afford a surgeon, we will simply have fewer surgeons. As it is, for some procedures there are only one or two people in the world who can perform them. Take away 99.99999% of their patients and there will be nobody left who can perform them at all.
So quality of care will decrease for rich people as well, if to a much lesser extent.
I agree with you about the ugly aspects of a polarized world. Both of us would rather live in a normal world than be among the elite in a polarized one. However, that might be the very reason we aren't currently among the elite: they think differently, and not by accident. They think long term, but with a different value system in mind, and they would accept some risks for the sake of power.
If you watch the video of Dan Bongino talking about power you'll get the vibes.
Through unlimited amusement, entertainment, and connection we are creating a sad, boring, lonely world.