Cynical view: without new regulation of AI, it will reduce the value of labor in many domains and completely eliminate the need for it in many others. Profits will go to companies like OpenAI, unemployment will rise, people will be left to fend for themselves, and that's exactly what's going to happen.
Realistic view: Unemployment is actually at an all time low despite centuries of industrialization, automation, etc.
AI technology is becoming a commodity at a rapid rate. OpenAI has a nice data moat, but their tech is being copied left, right, and center, and much of what they do has been replicated successfully by others, including some open source projects. I don't see OpenAI ending up with all the profit here.
AI is a transformative technology for sure. But just like previous introductions of transformative technology it won't play out as doom predictors predict.
Most of the goods we buy and consume are actually produced in parts of the world where workers are exploited just fine without the help of AI. The dystopia already exists; just not in our little bubble. And a lot of those places have leveled up quite a bit economically in recent decades. So things aren't that bad anymore.
Our own past is actually built on the dystopia of the industrial revolution where people had no rights and worked until they dropped dead. Most of us on this forum have jobs that most of those people would not have considered real work. Hence us procrastinating on hacker news instead of doing real work.
AI will cause more of that to happen everywhere. But we'll find ways to keep ourselves busy. And more free time means that we can do things that are valuable to us. And what are economies other than just the accumulation of things we value? It used to be that we mostly valued not starving to death. Most of our economies were basically related to food production. Now food production is only a tiny part of our economies. We found more valuable things to do. Whatever AIs do or don't do, we'll find a way to find new things that are valuable to us. AIs simply expand our economy to include more such things. That's what transformative technology does. It grows our economies.
> Unemployment is actually at an all time low despite centuries of industrialization, automation, etc.
You've got an interesting point there. But I'm wondering, isn't this mostly true for places like the US? Looking at it globally, it's a bit of a mixed bag: global unemployment is higher now than it was 30 years ago, for example.
And about the industrialisation bit – I mostly agree with you, but let's not forget the hard fought battles for fair work conditions. We got to where we are because people stood up for their rights, not just because machines started doing the heavy lifting. The original post seems to nudge towards more rules or better safeguards with AI, kind of like what happened with the rise of factories. Are you not in favour of that?
Small sidenote: calling your own view 'realistic' might put some people off. It sort of implies other opinions are not, you know?
> Small sidenote: calling your own view 'realistic' might put some people off. It sort of implies other opinions are not, you know?
I'm just countering the cynical view here, which at least puts me off. Anyway, there's always somebody who is going to disagree. To me the cynical view is historically always there and usually wrong.
I don't actually agree that AI is causing any perceived worker injustice. The US is a bit special because it generally seems to be a bit different from the rest of the world in terms of a lack of worker protections. Like getting decent health insurance, job protections, and not being forced to work crazy hours just to get slightly over the poverty line. Whether you agree with that or not, a lot of that predates the whole "AI is bad" debate and is simply the result of decades-long policy. Rolling back some of that or changing those policies is a different topic. IMHO that would be a good thing regardless of AI or any other transformative economic effects of other innovations.
And to counter that, I've mostly lived in places where things arguably are not that bad. People get decent insurance. They don't work crazy hours. And they mostly get paid fair wages for what they do. There are some exceptions to that of course. But people are doing pretty OK and I don't think that will change because of AI. I just don't see the need for a lot of pre-emptive measures here.
Globally, we have more people than ever and they are wealthier and more healthy than ever. Sure, there are some pretty grim outliers but that's mostly in places with despotic regimes and really crappy economies. That too is not caused or made much worse by AI.
The opposite of a cynical view is an optimistic one, not a realistic one. Optimism like yours has been as wrong as cynicism throughout history, and it's unrealistic to believe otherwise.
I wholeheartedly agree (at least for the near future of a decade or so). IMO the only thing HN needs to worry about is that this round it could be the programmers whose careers are obsoleted, and all the really well-paid jobs will be things that require being good with your hands. So maybe people who are paid to think keep their jobs, but they're paid more like retail work.
Which, y'know, fair enough if that does happen. There are worse problems to have. Mankind will be in a great spot. I always wanted to learn to weld.
> much of what they do has been replicated successfully by others; including some open source projects
This is true, except for GPT4 which no one has been able to even come close to in actual usage.
As long as OpenAI maintains the lead by making the very best model available via a consumer-friendly interface and via API, I'd wager they remain in the lead.
That could be a temporary thing and is mostly just a side effect of them having a lot of money and infrastructure. I don't think most of the world is ready to just defer to them and give them all their data. There's a lot of incentive to come up with alternative solutions. The more OpenAI earns, the bigger the incentive to work around them.
As for the UI and UX: ChatGPT looks a bit like a rush job. Midjourney and others figured out that Discord wasn't half bad as a UI and it kind of snowballed from there. ChatGPT basically took that and did not add very much to it. It's actually very middle of the road as a chat UI. Completely unremarkable. Without knowing what they did exactly, it looks to me like someone spun up a JavaScript project, found some chat-related libraries and components, and knocked together a prototype in a few weeks. I've been there and done that on a project a few years ago. It's not that hard, and a very sensible thing for them to do.
The value of ChatGPT is of course in the quality of the conversation, not the UX. I expect a lot of innovation around this topic during this year, and it might not be OpenAI that leads it. UX is so far not their core strength. I know they are hiring pretty aggressively to fix that, but it's not a given that throwing money at the problem will ensure an easy victory here. World + dog is focusing on outdoing them on this front. They'll have a lot of competition and investment.
> Realistic view: Unemployment is actually at an all time low despite centuries of industrialization, automation, etc.
A key word is "centuries": previously, industrialisation was not such a rapid process (it took centuries indeed), so people had time to reskill, and new generations simply didn't pursue the professions of their parents if those jobs got automated.
This time it's more plausible that someone who starts college this year graduates in 5 years, works for maybe another 5, and then finds their profession completely automated 10 years from now.
That 'unemployment is at an all-time low' statistic is disingenuous; one needs to include the sibling statistic that 'underemployment is currently the highest it has ever been.' People are working crap jobs that merely maintain them; they are not in the careers they trained for, they are in those "bullshit jobs" that abuse them.
> Realistic view: Unemployment is actually at an all time low despite centuries of industrialization, automation, etc.
As a whole, yes... the problem is something else: quality of employment. Good paying, unionized jobs - the backbone of all Western economies in the "boom age" between WW2 and the fall of the USSR - in farming, mining, manufacturing and industry that employed lots of people in the past have either been lost to technological progress (farming) or gone off to China, India, Taiwan, Vietnam and Thailand - mostly because of massively lower wages, but also (especially in silicon industry) due to massively more permissive environmental protection laws.
What's left for people to make a living is mostly either low-skill and extremely low-pay stuff that reasonably can't be automated (cleaning!), a bit of medium-skill stuff like tradespeople, and high-skill intellectual jobs (STEM). Now that a lot of the high-end jobs are being threatened by AI as well, high-skilled people will be pushed down too, intensifying the competition for the lower rungs of society even more.
And let's face it: this will be dangerous, particularly as more and more of the share of global wealth concentrates in the hands of very few people. In terms of relative wealth, Elon Musk, Jeff Bezos, Larry Ellison, Warren Buffett and Bill Gates are each richer, compared to what common people have, than actual medieval emperors were. This is not sustainable, and eventually (re)distribution fights will break out.
ETA: This just came in: in the last three years, despite a global pandemic wrecking entire economies, followed by the first land-grab war by an imperialist power since WW2 and the resulting economic consequences, the top 5 of the uber-rich actually more than doubled their wealth [1], at the expense of everyone else. Clearly, this cannot go on for much longer.
> AI will cause more of that to happen everywhere. But we'll find ways to keep ourselves busy. And more free time means that we can do things that are valuable to us.
As if. Any free time we got gets immediately usurped by the need to take up a second job just to make rent, not to mention that there hasn't been a significant reduction in hours-worked for decades (to the contrary, "expected" aka unpaid overtime has become the norm). Women didn't enter the workforce because of feminism, women entered the workforce because capitalism needed more workers to exploit - with the nasty side effect now becoming evident that young people don't become parents at all or significantly late in their careers, worsening the demographic collapse.
> Our own past is actually built on the dystopia of the industrial revolution where people had no rights and worked until they dropped dead. Most of us on this forum have jobs that most of those people would not have considered real work. Hence us procrastinating on hacker news instead of doing real work.
Part of that fight for workers' rights led directly to Karl Marx and Friedrich Engels inventing Communism. My history on that topic is a little vague, so it may be mere ignorance, but I have no reason to think either of them foresaw Stalin.
Likewise for capitalism, given what (little) I know of Adam Smith[0], I don't think he would've foreseen the Irish potato famine.
Smith and Marx both saw the world changing, the era of feudalism passing and fading, and the need for a new system to replace it. What we have now is neither what Smith nor what Marx advocated, though bits of each are still popular.
So… what's the AI version of the February Revolution? What's the AI version of the Great Depression (as in 1929–1939, I didn't mistype "Great Recession")?
I can very easily see ways that AI can bring about surveillance to make the Stasi blush. Those amateurs were drilling holes in walls and putting bugs in watering cans, today we carry trackers and bugs in our pockets voluntarily, and even when those are restricted, laser-microphones are simple enough to be high-school student projects, and WiFi can be modified to run as wall-penetrating radar that can do pose detection with enough resolution to give pulse and breathing rates.
The Paperclip Optimiser is basically the reductio ad absurdum of capitalism's disregard for environmental impact and externalities, except that software generally has bugs and pre-LLM AI generally hasn't shown the slightest sign of what people would consider "common sense", which makes it… my Latin is almost non-existent, "reductio non absurdum"? For what AI may do.
Even between those two examples, while it's certainly possible on paper for AI to give us all lives of luxury with minimal to no work required… from the point of view of the pre-industrial age, so did machine tools, so did the transition from alchemy to chemistry (despite chemical weapons), so did electricity and the internal combustion engine (despite the integrated CO2 emissions), so did atomic theory (despite the cold war)… and despite that, we still have 40 hour weeks.
So perhaps we'll all end up like aristocrats, or perhaps rents (literal and metaphorical) will go up to take the full value of whatever UBI[1] we are given.
[0] while I doubt politicians who quote him know any better than me, this cynicism may be borne from the last decade of British Prime Ministers…
[1] IMO, UBI is the only possible way for a sustainable society where AI is at the level of the smartest human, and in practice it's necessary well before AI is that capable — if a suitably embodied AI can do every task that an IQ 85 human can do, for a TCO/time less than your local minimum living wage[2], you've already got 15% of your population in a permanent economic trap.
I also think that UBI can only avoid a hyperinflation loop when the government distributing the UBI owns the means of production, because if they don't then the people who do own the AI will be tempted to raise prices to match the supply of money.
But there's always the temptation for a government to exclude some group, for one reason or another — "Oh, not them, they're foreign. Not them, they're criminals. Not them, they're too young. Not them, they're not smart enough. Not them, they're…", and it's very hard to make those exclusion lists smaller, as those on the less-money list wield less power, and also everyone else would have to lower their expenses if they undid it and shared their wealth more equally.
That is already an issue with technological progress: people protesting because machines are more efficient and removing jobs. The issue is that the profit from increased tech is not being shared fairly among humanity.
This issue has not been solved. I am glad other people are becoming aware of it with the rise of AI.
If we have slow takeoff, AI will be like any other automation. It will increase economic productivity. Capital owners will benefit the most, but everyone else will also benefit, because it's not zero-sum. People will lose their jobs but new jobs will be created and everyone will get richer.
If we get ASI, then that's a paradigm shift and all bets are off.
Then you will simply see those jobs being moved/outsourced to the third world without regulations, just like virtually all manufacturing was.
China, Russia, India and a ton of other countries won't give a shit about a 'global moratorium on AI research' or an 'assault GPU ban'. US+EU are a fraction of the world's population.
In some industries, prices will follow wealthier people, as more profit will come from gouging those on the lucky side of the now much larger wealth divide, and the middle class will vanish.
You saw an example of this in ecommerce during the pandemic's economic upheaval. Luxury goods recorded increased sales, as did bottom-of-the-barrel retailers. Services and items aimed at the comfortable middle suffered decreases.
I was not talking about scarcity; I was talking about the perversions of markets amid an increasing wealth gap. There is no scarcity of brand-name clothes, for example. They're only expensive because people with excess money choose to pay for them, but they cost cents to make, there is no mismatch between supply and demand, and they are often no better than cheaper clothing.
Why would you need a monopoly? They aren't competing on price. It's an irrational market.
It depends what business you're in. If you're a company making yachts, supercars, or private jets, then owners consume much more than workers.
As wealth shifts to fewer hands, companies making mass-market goods are forced to drop prices, squeezing their margins and forcing consolidation and further automation, as the buying power of the customer base disappears. Investment capital shifts into the luxury sector where demand is growing, and prices and production quantities increase.
More and more of the economy gets dedicated to serving the needs of the wealthy (which is essentially what it means for the rich to get richer).
The short answer is because markets are not monolithic.
You can have all kinds of price and market distortion as long as it’s a small group of people or a group of people that has a significant wealth effect over those that do not have it.
Just like there are services and products specifically for multimillionaires and billionaires, where you can get things that nobody else has and in many cases isn't even aware exist, you could see that happen for broader parts of society without a deleterious effect on the overall survival of the human species.
It's just market specialization. We haven't seen it with commodities yet, but we could in the future, and that would really change the game: commodities available only to those inside a small pool of people making all the money with artificial intelligence, while the rest of us have no access to them. That is a very real possibility.
When sufficient profit motive exists to serve commodity markets for small numbers of people instead of the broad collective, we could see an acceleration condition we haven't seen before, where more and more resources are locked up by a smaller and smaller but wealthier and wealthier part of the population: essentially a wage/price spiral inside a wealth effect. Think of it as the impact of capital, and you begin to see why wealth concentration could lead to commodity concentration in ways we've never seen before. We have some similar examples happening right now, with the consolidation of industry after industry by private equity, which is unprecedented.
AI has the possibility of accelerating these trends.
We have seen this happen many times with food: broad and cheap food groups become specialty product categories and rise in price as they become popular with smaller, wealthier elite groups (think of oxtails, bluefin tuna, certain purebred dogs, fine wines and cheeses, etc.) until they become out of reach for general society.
We haven't seen this for commodities, but there is little reason it couldn't also come into effect in broad commodity markets once automation and AI remove the need for wage classes.
AI is the automation of intellectual work, but also the ground technology for advanced robotics and automation powered by AI. The impact will be more profound than anyone here applying 20th-century economic theory and colloquialisms really understands. The potential for disruption is profound, and AGI and its ongoing improvements are very likely something like a new nuclear threat.
People like to apply the logic of the technology waves of the Internet, mobile, radio, television, etc. You can go all the way back to the weaving machine, the printing press, and the cotton gin.
The real question is whether or not this time is different. Everything is not different until the time that it actually is, and when you start thinking about building a digital brain, you're not just talking about creating another transformational technology revolution; you're talking about replicating the creature that actually creates those technology waves. That's slightly different, and it deserves a little more consideration than many of the people talking about this AI revolution tend to give it. Instead of creating something new that changes society, we're creating something to replace the thing that creates those technology tools and waves in the first place. Something like us.
Like all technology revolutions, the resulting profits and productivity can be used for the uplift of humanity and for social good, or it can be used for selfish motivations. And to be fair, it could be used for selfish motivations and still end up benefiting all of humanity, if the system allows for those selfish motivations to benefit the larger society. But that comes down to wise governance and careful shepherding by those in power. Gene Roddenberry is prescient as always.
So, in short, you either end up with amplification, like you're talking about, which creates a stratification between the classes who benefit from AI technology and the classes who are left behind, or you see a general transformation of the overall society that brings everyone into a new era of productivity and changes everything on earth as a paradigm-style transformation. Which model applies? The question is this: is it different to build a new machine, or a machine that dreams up and makes new machines?
Look up what the business model for in-app purchases is. In short, it's a few rich "whales" that sustain those economies, everyone else is just providing free labor to amuse the "whales".
Optimistic (slightly dystopian) view: governments will step in and build national artificial intelligence machines. OpenAI will be competing with open source LLMs. We'll all re-create society structured around a priesthood / ceremonial worship of a big AI that symbolizes how great your "tribe" is, and compete with the followers of the other gods.
I agree, except for the OpenAI part. These subscriptions are currently much cheaper than labor, about 100x even for poorly paid jobs, and so there is simply not that much value to capture in terms of total size of the economy. In other words, if the employee costs $2000 and OpenAI charges $20, then $1980 is captured by the company using the tech, not OpenAI. So there would be a problem of course, but it's not like the value would necessarily go just to the tech industry. Instead it might go increasingly to those who own "the means of production", to use a Marxist term for analysis purposes.
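To put that split into back-of-the-envelope numbers: the $2000 salary and $20 subscription are the figures from the comment above; the code is just illustrative arithmetic, not a claim about actual pricing.

```python
# Who captures the value when an AI subscription replaces (part of) a
# salaried role? Figures from the comment above; purely illustrative.
monthly_salary = 2000.0   # cost of the employee being replaced
subscription = 20.0       # what OpenAI charges per month

captured_by_employer = monthly_salary - subscription
share_to_openai = subscription / monthly_salary

print(captured_by_employer)      # 1980.0, kept by the company using the tech
print(f"{share_to_openai:.1%}")  # 1.0%, OpenAI's slice of the labor cost
```

On these numbers the overwhelming share of the saving lands with the employer, which is the comment's point: the tech vendor captures only a sliver.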
And if the prices they charge go up, it should be possible to compete with open products.
AI runs on electricity, and we can produce a joule of electricity directly for much less than it costs to produce a joule of food to feed a human. This is a fundamental consequence of thermodynamics. It will eventually require us to ask ourselves whether we are willing to arbitrarily prioritize our own species over the cheaper-to-run alternative.
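To put rough numbers on that claim: with illustrative price assumptions of my own (electricity at $0.15/kWh, food at $2 per 2000 kcal, neither figure from the comment), a joule of food energy does come out several times more expensive than a joule of electricity:

```python
# Rough cost-per-joule comparison: grid electricity vs. human food energy.
# The $0.15/kWh and $2-per-2000-kcal prices are illustrative assumptions.
KWH_IN_J = 3.6e6     # joules per kilowatt-hour
KCAL_IN_J = 4184.0   # joules per kilocalorie

electricity_per_j = 0.15 / KWH_IN_J       # ~4.2e-8 dollars per joule
food_per_j = 2.00 / (2000 * KCAL_IN_J)    # ~2.4e-7 dollars per joule

# Food energy is roughly 5-6x pricier per joule at these prices.
print(food_per_j / electricity_per_j)
```

The exact ratio obviously swings with local energy and food prices; the sketch only shows the direction of the gap the comment is pointing at.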
We human beings, however, have both the first-mover advantage and a natural instinct to preserve both ourselves and our ingroup. So I don't ultimately see such a law as being hard to pass. Enforcing it is another issue, but one that's mostly also solvable via financial bounty systems.
This is such a bizarre post. Tools are built to serve man. AI is just another tool. When mechanization replaced human labor there were no questions about prioritizing machines over people. This is no different. You have a distorted framing of the situation.
> Tools are built to serve man. AI is just another tool.
This logic reads as an attempted proof that no man-made machine or technology can ever possibly be net harmful. That seems like an argument easily punctured.
"This is no different" is also a leap, because humans have a large but finite number of ways to be of service to each other. Mechanization augmented one of them, which shifted more value onto the remainders (mostly intellectual and emotional work). If AI can replace the latter, it's not implausible that this really will broadly displace the majority of human labor skills without growing new value areas, similar to how mechanization replaced horses.
Ask ourselves? Who is us? Your comment doesn’t apply if everyone has access to their own local AGI.
Right now training is expensive and inference is cheap. There’s no reason to believe that inference cost will suddenly balloon when we cross the line to general intelligence.
Why are you letting your AGI run wild? Let’s make the assumption that AGI is just a GPT combined with a new training algorithm that allows it to learn rapidly and generalize from small amounts of information. The moment you turn off the training algo your AGI reverts to what’s essentially a next token predictor. It retains everything it learns but is unable to form long term memories.
I think it would be very hard for anyone to be manipulated by an entity without long term memory.
Parents recently sold their $1.5M+ USD house (2021) to the owner of an online transcription/translation company (human-staffed), an "expensive rich house" where they live.
The free Whisper.app (works on Mac, including Intel) delivers the new homeowner's entire company product [transcription/translation] about 40x faster than real-time playback, which is hundreds of times faster than any human employee could listen and translate. For FREE.
I expect to see the new owner's foreclosure notice any moment. Certainly human workers still check over the translations, but for most [potential] customers a perfect document is not necessary, particularly considering Whisper.app is FREE to use offline [1]. An absolutely incredible piece of software: runs lightning fast on my M2 Pro.
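The 40x figure quoted above implies a stark throughput gap. Here's the arithmetic as a sketch: the 40x comes from the comment, while the 8-hour shift and the assumption that a human needs roughly 4 hours per hour of audio are my own illustrative inputs.

```python
# Daily transcription throughput: one machine at 40x real time vs. one
# human transcriber. Only the 40x is from the comment; the shift length
# and the human's 4-hours-per-audio-hour rate are illustrative guesses.
speedup = 40        # Whisper throughput relative to real-time playback
shift_hours = 8     # assumed human workday
hours_per_audio_hour = 4  # assumed human transcription effort

machine_audio_hours = speedup * 24             # machine runs around the clock
human_audio_hours = shift_hours / hours_per_audio_hour

print(machine_audio_hours)                      # 960 hours of audio per day
print(machine_audio_hours / human_audio_hours)  # ~480x one transcriber's output
```

Even if the assumed human rate is off by a factor of two either way, the gap stays in the hundreds, which is why the business model above looks so fragile.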
If what I do as a seasoned software engineer becomes redundant because of "artificial intelligence", so be it. Then I'll need to learn more, which I am! I'm not in the business of stifling technology like an oil baron.
I do understand that IP, for example using an actor's likeness via CGI+AI, is a very real issue that needs to be addressed.
People who are not working in IT but in much lower-paid professions don't have enough of a war chest and savings to have the luxury of taking a year off to learn a new skill.
This might even apply to fresh IT students who started college this year: they won't be happy that 10 years from now, just 5 years after graduation and still with student debt, their salaries are no longer above average.
Seriously: it's time to get an MBA if you've ever considered it. Why? Because an MBA teaches one how organizations operate, from non-profits, to government agencies, to startups, and all the transitions up to a major conglomerate. As AI proposes to change how all this is done, those of us who understand the essentials of designing, implementing, and maintaining dynamic automated systems that incorporate AI to operate organizations are in a position to change how business is done for a very long time. It's an extreme power seat, if and only if you also have the professional communication skills to convince, build coalitions, and manage the confusion that is sure to arise as AI changes fundamental methods of running organizations.
In my albeit limited experience so far… ChatGPT4 is surprisingly good at “doing the MBA bullshit” as I call it. Vague thoughts go in from me the user, coherent business jargon comes out and I can even ask for an explanation if it tosses me a term I’ve never heard before so I can make sure I actually understand the suggested MBA bullshit that I loathe thinking/writing like and can then use it properly.
I wouldn't be betting like the GP that an MBA will somehow help more than other skillsets… the fact that ChatGPT4 already seems capable of MBA-ish things at a quality level that approximates at least basic competency in the field of business management should not really be a surprise when you think about how much gets written about management, by managers, for other people in business and management. Lots of training data.
But a senior engineer with an MBA is neither; they are both and more. A senior engineer with an MBA is far more capable than most realize. They can optimize within a business to the degree that entire departments are automated. Seriously: we all work in complex organizations, and knowing how they are composed, how they interoperate, and how all that is boiled into an accounting soup is what an MBA teaches. If we're supposed to be automating all that, having an MBA is essential to not being a mindless minion working on processes without knowledge of how your work fits into the whole during this gargantuan transition.
At that time, there will probably still be a need for an "A100/H100/successor replacement specialist", or some other variant of a data centre hardware admin.
A propos the likeness point, a question to get people's opinions. What is the difference between your skills today which lead to earning potential and an actor's likeness which gives the actor more rights to a continued income stream?
My software engineering skills are more those of a robot, and if someone could make many of me, I/we could get a lot more done, and that's a good thing. An actor has one likeness as their skill, so it should be at their discretion. There's plenty of overlap between the two situations, but really replicating my skillset to the point where it could be harnessed outside of my control is a lot more sophisticated than replicating someone's image, voice, mannerisms, etc.
One thing that's very likely to happen is that major economies (US, EU, China, India, etc.) will start to isolate. Because AI will have such a profound effect on the competitiveness of companies, and the regulatory environments between these blocs will be so different, their respective companies are all potentially at risk. Within months, say, a US-based and entirely AI-run company could quickly supplant all European companies in the same field because the respective European companies don't have the computing power or expertise to compete, or can't fire the now-redundant workers due to trade union influence. Rather than allowing all those European companies to be replaced, there will be intense lobbying to effectively shut out any non-EU companies. AI will effectively end globalization, with the exception of basic commodities trading.
The risk of exposure in emerging and low-income economies is significantly lower than in advanced economies, according to this report. The nature of work in advanced economies, by comparison, is going to change significantly. If 60% of jobs in advanced economies are at risk, I am surprised by statements from these countries that they are well equipped and ahead of the curve to deal with it. Is anyone seeing anything tangible being proposed and planned to deal with this, apart from educating people to use the technology? Or is this all a prelude to another AI winter, I wonder, as we start to make these forecasts and raise expectations.
By now I think many people have seen my takes on this forum regarding AI and know I'm an enthusiastic skeptic. I think the capability of this technology is being exaggerated to the benefit of the large corporations that have made huge investments in it. I think a lot of the so-called AI experts have put their eggs in the basket of throwing a shitload of data at a pretty limited number of relatively simple models. I think a lot of the people trying to apply AI to every field they can get their hands on are non-experts in those fields and lack the knowledge and skill to know whether their AI is any good or just dogshit. I think LLMs will never be reliable enough for applications where correctness matters, and we're never going to solve the hallucination problem. I think the people telling you we can solve those problems are trying to sell you something. I think the end result of all this bullshit is going to be a lot of wasted computational cycles and an entirely undeserved devaluation of knowledge work, until the MBAs trying to fire all their knowledge workers realize they've made a huge mistake. I think this technology is going to make the internet worse, and the biggest beneficiaries are the charlatans and liars who never cared about truth in the first place.
I too remember 2017 and headlines about millions upon millions of people being replaced by robots around 2025. I even saw videos of little bots performing the job of baristas, trucks self-driving, and all sorts of other crazy things. Not that they are not doable, but you know, they are difficult to implement by people who have no clue how things work, let alone how humans interact.
The difficulty of the technology is crucial to understanding what is happening right now. Developing AI is _hard_, unless you cheat and steal content, plagiarise people, and then promise riches to those replacing said people.
What caught my eye is that, according to the IMF, jobs in the West will be affected harder than elsewhere, meaning poor countries will benefit more. Also, as with manufacturing, jobs will be sent there, since lower-skilled workers can now perform the same tasks better and cheaper.
The only real issue with AI is that people will indeed be replaced: not fully, but where they are expensive, by those who are cheap.
Furthermore, services such as health care and education will regress significantly in the West. Instead of solving the underlying issues, your doctor will be replaced by a fast and concise Google search made by AI on your behalf. The future looks bleak in the West, to be honest.
So-called AI will in fact mean a massive transfer of knowledge from rich to poor countries, making poor countries richer while keeping people in rich countries poor and docile, as has always happened when tech was designed to "help and enhance".
> one of the things that sets AI apart is its ability to impact high-skilled jobs
Am I the only one who fundamentally disagrees with this? AI rather impacts low-skilled white-collar jobs, or the "low skill" part of a high-skill job. For a dev, one example is that AI is good at writing the boilerplate for you.
Well, the IMF should just keep tabs on the AI billionaire/millionaire underling production rate every quarter. If it's going up, then AI is definitely doing something dumb and unnecessary. To fix it, track them down and ship them off to China's Jack Ma center of reeducation. And humanity will be just fine.
This coming from the IMF blog (?!), i.e. the head of a vile institution like the IMF writing about a bubble-y subject like AI while everyone else is economically and socially struggling, is the next level of late-stage capitalism. These guys and ladies are losing it; they're losing all contact with the reality on the ground.