How Will Language Modelers Like ChatGPT Affect Occupations and Industries? (arxiv.org)
29 points by yenniejun111 on July 23, 2023 | 66 comments


One of the occupations listed in the appendix that will be affected is political scientists. (1.687)

My first degree is in political science. I spent four years trying to figure out what a political scientist actually does. I never figured it out.

How are we going to lose these jobs to automation, if we can barely figure out what they do or what their actual use is?

-----

On a more serious note, studies like this that take a whole swath of careers and make pronouncements about how they will be affected by AI seem relatively useless to me.

There is no way the researchers (or the consultants, in the case of the McKinsey study) can have a deep understanding of each of these careers and all the activities that are involved, what makes a good "worker" in that career, and all of that.

Just a bunch of academic bloviating on a hot topic, using data to draw conclusions that data alone cannot support...

Just my opinion...


Not to be rude, but the purpose of political science is easy to grasp and important: from structuring aid and refugee programs, to making policies that affect our everyday lives, to researching and working on issues related to economic competitiveness, treaties, and resources. You can read the latest articles to get some ideas. [1]

Plus, economics, law, and political science are no more strictly separated sciences than chemistry and physics are. I think figuring out what a mathematician does is more difficult, but also still understandable and important. [1]: https://www.cambridge.org/core/journals/political-science-re...


Yeah, and creating specialized "agents" in an AI "lab" discussing solutions could solve in a day most of the problems that take actual political scientists a year, only to end up at a stalemate or impasse because of no consensus.


The thing I find very fascinating about the rise of generative AI is that it's one of the first tech advancements in a long time that puts more highly paid jobs at risk, while actually raising the value of more manual jobs that are harder to automate.

This article from the Washington Post [1] says it well:

> In March, Goldman Sachs predicted that 18 percent of work worldwide could be automated by AI, with white-collar workers such as lawyers at more risk than those in trades such as construction or maintenance. “Occupations for which a significant share of workers’ time is spent outdoors or performing physical labor cannot be automated by AI,” the report said.

While I really feel for people who have lost their jobs due to AI, I also can't help but feel that the "non-automatable" jobs that they're going into actually have a lot more value to society. E.g. the article talks about a copywriter who loses his copywriting work and decides to retrain as an HVAC technician and go into plumbing. I value having working A/C and hot water a lot more than yet another creative blurb trying to sell me something.

1. https://www.washingtonpost.com/technology/2023/06/02/ai-taki...


From the paper:

> We also find a positive correlation between wages and exposure to AI language modeling.

That is only an intermediate effect. There will be a slight increase in wages for those who can work with AI, but soon it will be able to operate without humans in most domains, and then wages will drop to zero.


This is not saying that wages will increase with AI. It is saying that people with higher wages are (statistically) more likely to lose their job to AI.


This doesn't really make any logical sense if you think about it. The reason jobs exist is not because the person hiring can't do them, but because they have other things they'd rather focus their time and energy on. You don't hire a janitor because you can't clean the floors, toilets, and other areas - but because you have different priorities for your own time.

To consider this more intuitively, take a field you're probably less skilled in: art. There are some really quite amazing "AI"-based generative systems for art, far more domain-competent than e.g. ChatGPT is at coding. Now imagine you're setting out to make a video game. Do you suddenly feel like taking on the entire art load would be a good way to spend your time and energy, let alone compared to what an artist using the same tools would accomplish in the same time? If anything, "AI" will probably just be a skill multiplier.


s/ai/loom/ and the sentiment is always the same, and it doesn't work that way. Worry about concentration of wealth and power, sure, but automation has never net removed jobs from the economy; it creates more.


Here is the problem with your argument: our planet is finite, and growth will eventually plateau. It cannot go on forever.

Automation does remove jobs, but so far it's done so on a small scale compared to what AI will do. AI will remove jobs on a scale orders of magnitude larger than previous automation.

The argument by analogy with the loom is weak. As technology progresses, its power and scope increase, and computer technology is much more wide-reaching than the loom. Unfortunately, analogy only goes so far.

At some point, there may indeed be more jobs. But even then, they will be (to the human mind) quite meaningless, such as the maintenance of AI machines, and of such narrow specialization that people doing them will lose all hope in life.


We don't know at all what additional jobs will spring up from AI or will be removed - we already see the hype cycle ending on genAI. For example, I expect an explosion in bureaucracy with associated jobs from AI.


Exactly, which is why I recommend caution and ethical investigations, instead of hoping that this new stuff will just be hype and burn itself out. Caution is a lot better a strategy than guess and check.


If — and it will remain "if" right up until we have actually made it — if we get an AI which is as general as a human mind, then every job can be automated, including the new ones.

What that means depends how much the AI costs to run.

Assuming the hardware stays working long enough to amortise to a rounding error, $100,000 per year at $0.05/kWh is 228 kW (and by extension $10k/year is about 23 kW), so a "good enough" AI can "afford" to draw a lot of electricity while still being cheaper than many experts, or to draw more electricity than most households while still being cheaper than per-capita income in 15 of the G20 countries.
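Those break-even figures can be reproduced in a few lines. A toy sketch: the $0.05/kWh price and the yearly budgets come from the comment above; the hours-per-year constant and function names are mine.

```python
# Break-even continuous power draw for an AI priced against a human salary,
# assuming electricity is the dominant running cost (hardware amortised away).
HOURS_PER_YEAR = 8766   # average year, including leap days
PRICE_PER_KWH = 0.05    # USD, figure used in the comment

def breakeven_kw(annual_budget_usd: float) -> float:
    """Continuous draw (kW) that spends the entire budget on electricity."""
    kwh_per_year = annual_budget_usd / PRICE_PER_KWH
    return kwh_per_year / HOURS_PER_YEAR

print(round(breakeven_kw(100_000)))  # ~228 kW, the "expert salary" case
print(round(breakeven_kw(10_000)))   # ~23 kW, the "per-capita income" case
```

So a $100k/year budget at that electricity price buys roughly 228 kW of continuous draw, matching the numbers in the comment.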

On the other hand, if the best we can figure out how to do while giving it human-level generality is an IQ of 85, and it's a thing that runs on 1 kW, that makes 16% of humans uncompetitive even at the UN "abject poverty" threshold, while smarter people can still be gainfully employed.

Current AI is neither, as it's a combination of absurd training costs, low inference costs, and being very good at, specifically and (IMO) exclusively, the things we use to define "intelligent", but without the stuff we don't count as "intelligence" because approximately every human has it and it's not a differentiator.


You'd still need to build the physical stuff for the AI; intelligence alone is not enough for every job: the bookshelf-cleaning machine needs more than thinking.


The hard part of robotics isn't the actuators, it's the brains. We had vacuum cleaners (and pre-programmable factory equipment in the form of punch card looms) for a century before the Roomba.


That hardware costs money and making complex hardware vs using a human could have a non-trivial cost boundary for a long time.

I don't expect a cost efficient bookshelf cleaner robot any time soon (or even a robot vacuum cleaner that works in non-trivial settings).


IQ doesn't mean anything here.


Going to be too bad if no new ones pop up?


There is already a whole compliance industry springing up around AI use.


You don't know any of this.

It was commonly thought that the planet would run out of food until the Wheatbelt was developed.

The fundamental shift is changing the mindset.

- Away from deriving economic tasks from logical goals developed over historical time.

- Towards men guesstimating and taking risks in bold new directions for global innovation.

The gloomerism is getting kind of boring and unrewarding, tbh. We need better criticisms of the future; these "total-meaninglessness" jabs don't hold water.


Are you serious? Of course, when we develop more ways of using resources, we can extend them. But we are already on the brink of climate disaster, and we are nowhere near colonizing other planets. Unless our population radically decreases, we will run out of resources. It's just mathematics: resources are finite.

Also, I am not advocating TOTAL meaninglessness. I am advocating a new way of life that does not include AI, that's it. I never said everything is doomed. I think humans have a chance but that we should proceed WITHOUT AI. That's it.


Resources are only finite so long as you stop developing new ones. Resources exist on the planet that are either too expensive in the current market to extract, or do too much damage to the local environment to get approvals. They become accessible when times get tougher.

The electrification of the planet is massively reducing the cost of supporting life. We buy and sell flipped bits to each other and renew our car batteries from home solar. You could live like a monk, with automated hydroponics and electric everything, and probably not notice the majority of services collapsing except sewage and water.

Water is a weird one because it's often not 'destroyed', just pumped around the ecosystem by natural forces or power plants and dirtied in the process. Developing cheaper water cycling methods could push costs even lower.

The mathematics of finite resources depends heavily on who is doing the mathematics. The pre-Wheatbelt guys couldn't guess what was going to happen next.

AI is just a better tool. It has changed the relationship to work, like a car means we no longer ride horses. But we still use cars to do the same thing horses did. The categories have not changed as much as everyone is afraid of. Get out of trivial work, aka working in grammar, rhetoric and logic, and you'll probably feel a lot safer.


What if there were UBI, and everyone were guaranteed food, healthcare, education, and a home? Then people would basically be encouraged to just follow their passions, wherever that takes them, whether it's art, science, music, etc.

Imagine that everyone who's in replaceable fields simply grew up to be scientists, and entertainers. Those were basically the only fields left to go into, and scientists are more like project managers directing AI towards finding solutions to specific goals, and making new discoveries, etc...

We could fix a shit ton of issues with 70% of the population being scientists or science-adjacent in their "careers", a word that would then mean something more like "life's work" or a hobby than what it means today.


"Automation does remove jobs, but so far it's done so on a small scale compared to what AI will do."

But, when automation was new, didn't they say it would automate everything? And it turns out that...it didn't?

It seems that pattern is somewhat likely to repeat...


I think you need to look again. Far from being a fallacy, it's been pretty much a wipeout.

Agriculture: from 70% of people working in agriculture, to 25% in the 1970s, to 2% now.

Most other physical work, by volume, follows the same trajectory.

Everything is meaningless


Everything is more meaningful, do you mean? In developed countries, agriculture is highly automated, and if you want a hobby garden in your leisure time, you can choose how much automation you want.


By the time we exhaust Earth, we will be a space faring species.


That seems highly unlikely, especially because space travel in real life is a lot more difficult than Star Trek makes it appear.

But even so, I find the idea of exhausting Earth rather morally reprehensible. There are beautiful life forms on Earth, such as birds, bears, fish, and flowers, that don't deserve our carelessness. Should we not treat them with some basic level of kindness and compassion also?


I agree with your second paragraph but am confident other life forms will ultimately prevail.


What type of life forms do you think will prevail ?


Non-human "life forms on earth" to quote OP. Maybe not birds or bears, could be just a slime mold, but something here will still be going strong after we wipe ourselves out.


I think a more appropriate conclusion to draw is that big leaps in automation required huge, costly reorganizations of society to ensure enough people have jobs to avoid civil unrest, increasingly in ways that were dubiously productive compared to, say, directly providing for the people whose labor had suddenly become worthless. Whether the next big leap (or, for that matter, the next two or five big leaps) will finally require a retool whose cost we can't afford is very much an open question.


That's an interesting viewpoint, and I think it's one way to look at it. But, we have to go back and ask: let's suppose we CAN make sure that enough people get jobs to avoid civil unrest, to use your terminology.

What will those jobs be? They will unlikely be creative jobs, since most of those will be taken over by AI. Perhaps there will be a few creative jobs left (I sincerely doubt AI will be able to write the next Dostoyevsky novel in my lifetime, but you never know). But even IF there are a few creative jobs left, they will be rather minuscule. Thus, the only jobs left will be the kind that is already increasing today: some small cog in a massive corporate scheme that somehow has not been automated yet.

I believe there are at least some alternatives, such as: we could form a new conglomeration (perhaps a union or cartel of sorts) of people who refuse to use AI in their business, and who refuse to do business with anyone using AI. Obviously, it would take some effort, but maybe it's possible. In this union, we would operate society normally, but with a code of ethics that prevents the use of advanced AI and maybe other dangerous technologies, and we would develop a more sustainable way of living that involves sustainable farming and being more reliant on each other instead of letting massive, soulless tech companies control us.

I am formulating a more detailed proposal but that's the gist of one of my ideas.


"They will unlikely be creative jobs, since most of those will be taken over by AI. Perhaps there will be a few creative jobs left (I sincerely doubt AI will be able to write the next Dostoyevsky novel in my lifetime, but you never know). But even IF there are a few creative jobs left, they will be rather miniscule"

This doesn't make sense to me. If AI eliminates most creative jobs, yet there are still humans who can do better (modern Dostoyevskys), would it not make sense for businesses to hire creative people to out-create their competitors, who just use the AI?


Dostoyevsky (to stay with the specific example) produced literary fiction that really challenges the reader on many levels. He might have written some of the "best" novels, but even devoted literary fiction readers need something lighter and simpler sometimes.

AI can probably already write a serviceable beach novel.


"AI can probably already write a serviceable beach novel."

So far...


There are communities today who, by policy/choice, have been extremely selective in adopting modern technology, the Plain People (Amish et al).

I’m not sure if that model would scale up to “uses all tech except pledges to not use advanced AI”, because the unifying thread of religion has a somewhat unique level of power that mere commercial pledges seem to not have, but it would be worth looking into the existing practices as a potential model and trying to project the good and bad that would come from it.


That is an interesting viewpoint, and I like it. Perhaps humanity needs a new "religion", which doesn't have the cult-like effects of old ones. Perhaps something grounded in Buddhism/meditation and a profound respect for the natural world like some indigenous tribes.


> What will those jobs be?

Jobs that require human contact might be good. Babysitter, massage therapist, counselor, police officer, waiter, nurse, aesthetician, etc.

Hard to say with certainty though.


> but automation never net removed jobs from the economy it makes more.

That’s true for more limited forms of automation - e.g. the loom, heavy industrial machinery, even computers - as people just move on to other jobs and society gains a net improvement in productivity.

AI, especially AGI (which is the endgame for AI), is a different matter. It's not specialised. "General" is literal in its name. It could do nigh every job. There will be no jobs for everyone being displaced to move on to.


Maybe it hasn't yet removed a net number of jobs, but it's definitely removed jobs at the skill level that's being automated. You upgrade your weaving business from manual looms to automatic looms, maybe you create more jobs for automatic loom engineers but you certainly also remove jobs for weavers. Maybe the additional productivity means that society as a whole now has more jobs but ultimately none of those jobs are going to be easier to do than weaving.

Also, looms are a deceptive example because weaving is still skilled labour. Chances are if you can learn to weave good quality cloth you'll be capable of retraining into another skilled trade. The same can't be said for "unskilled" labour, your 50-year-old factory line workers displaced by automation aren't going to be re-training as graphic designers and web devs. Now apply that not to a specialized machine that screws lids on water bottles really fast, but to a machine that can do any desk job a moderately competent white collar worker can do, and you'll see we have a problem.


This isn't like the loom though. If you have a loom you obsolete all weavers, but only the weavers.

This is more like a magical typewriter, and it's coming for all white collar jobs.


Isn't the absolute entire point of AI to remove jobs, I mean why on earth would we create AI otherwise ? It's the other half of the industrial revolution ?


Do you feel the same about any technology that can help someone do their job better? Calculators, typewriters, card catalogs, tool use? What's the minimum level of technological sophistication that's acceptable to you as not being about removing jobs?


It's quite different, a calculator isn't supposed to work autonomously.


Well, before they were machines, they were jobs. We can no more imagine what AI will bring than people from the 1800s could imagine the changes computers would bring. That is to say: everything changed.


The only way I can imagine humans staying relevant in the job market is if we can somehow augment our intelligence to the point where we can basically keep up with AI's progress.

I don't actually think this is impossible as in some ways, mobile phones and other wearable devices are probably the first steps to get there.

Who knows?


Can we define automation then because my understanding was that it quite literally means sans human labor. Are you saying it's merely a political buzzword?


Yes, the nobility always will need their serfs doing something.


From the most recent issue of Logic magazine, talking about Charles Babbage: “Babbage’s Difference engines were designed to automate the work of the “least skilled” mathematicians…


Drive-by, low-quality comment. Please ignore.

I applaud Hollywood, including Sarah Silverman and the unions currently on strike, for trying to fight AI. I think they're doomed, but good for them.

I don't see a scenario where Hollywood exists in anything like its current form in 10-15 years. In the next 5 years, AI will be a huge help, allowing wonderful new content to be produced with much less work.

Beyond that? I fully expect that in 10-15 years' time the value of the content owned by companies like Disney will approach zero. Mickey Mouse (ok, not the biggest asset they have now, but an example) will be replaced by Ricky the Raccoon or thousands of other characters generated by AI. They will all act out stories influenced by, but nothing like, much of modern storytelling. They will weave stories out of the lives of their viewers, producing a new form of media capable of dynamic interaction. The fusion of modern gaming and cinema, but personal and streaming endlessly, in an immersive format such as VR.

This future holds no appeal to me, and I find it repulsive, but I suspect the next generation will find it beautiful and irresistible.


I think generative AI applied to the arts will be a tool of the mind like a calculator or Mathematica or R or numpy. They democratize the mechanics of language and visual arts, but they don’t erase the difference between a skilled artist and a layman. A skilled artist has a spark that sees beyond the average mind into a world that inspires. AI can make the boilerplate processes easier and more rote, and also unlock a new generation of artists not hobbled by their lack of linguistic or craft skill. The average quality of mindless garbage will improve and that’s not a bad thing. But the quality of skilled art will also improve and that’s a good thing. I think you can already see this if you watch the galleries for Midjourney and other forums - the creativity on display isn’t coming from the AI, the prompts are elaborate but their basis is very creative.

Artists bemoaned the camera for very similar reasons. Yes, it did put portrait artists out of work. But it also opened the door to abstraction, modernism, post-modernism, and a new exploration of art as not simply reproducing what can be seen.

There is a real risk to the livelihood of people who generate SEO and other garbage. On the other hand, SEO and other garbage produced by GPT4 is actually informative. The garbage will become better garbage. But the people who survive by making garbage will need to find other roles in the world.

The artists of our society, writers, visual artists, and others, so long as they adapt, learn, and innovate rather than die fighting the machine as John Henry did, will survive, thrive, and usher in a new age of art.


Well, I give you kudos for at least making a verifiable prediction. I want to look up this comment in 5-10 years time and see how it turns out.

In all honesty, I don't think it will turn out well. Primarily because you seem to be arguing that AI will reduce the value of content owners, e.g. "I fully expect that in 10-15 years' time the value of the content owned by companies like Disney will approach zero. Mickey Mouse (ok, not the biggest asset they have now, but an example) will be replaced by Ricky the Raccoon or thousands of other characters generated by AI."

This is sooooo similar to what people expected in the late 90s with the rise of the consumer Internet, and the exact opposite happened. "Modern media companies will have to contend with everyone in the world being a potential competitor when the bar to getting a website up on the Internet and publishing your own content is so low" the story went. But the lesson learned is that when production and distribution become so cheap, this results in more concentration, not less, because it leads to "winner take all" dynamics where people are quick to switch to the best product, even if it's just a teeny bit better.

Sure, there may be Ricky the Raccoon or thousands of other AI generated characters. It's just that 99% of them will suck and nobody will care about them. There were a bajillion Geocities sites back in the day, too, and yet we all ended up spending most of our time on the same top few websites.


I doubt it. AI currently can do some replacement of features, or dream up scenes, but it cannot animate a character at will.

So perhaps we'll have an even larger green screen acting industry, and even more computer graphics, but there will still be humans involved to create anything compelling, at least until a next major breakthrough in AI, which is quite hard to predict.

Also, I am already way past bored with the output from most contemporary AI. It seems to fill but a small hole in the large space of creativity.


https://www.move.ai/

What our senses find acceptable is a finite set of parameters

It’s only a matter of time

Copyright exists as an artificial constraint on humans doing exactly the same thing; I could just work a local grocery and conjure my own worlds with free time

When did freedom become “prop up the norms of yesteryear”?

In the same way an atheist can choose not to be a vassal for Christianity, future people do not have to store us in their memory

To relate it to a technical concept: society will become an LLM training on old outputs and exhaust itself. Reality is not a forever fractal; longtermism is just another attempt at secularizing religion

https://www.businessinsider.com/elon-musk-believes-it-is-imp...


AI doesn't have to do everything, or be the best at everything, it just has to do some things well enough, or augment humans enough that fewer people can do the job.

There's a lot of labor-destroying potential in animation. For example, if a human makes a small number of key frames (this already happens) and AI fills in the in-between frames (which is where a lot of the labor goes), you could cut costs significantly.
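As a toy illustration of the in-betweening idea: real AI in-betweening is a learned model, not linear interpolation, and the function names and pose representation here are purely illustrative.

```python
# Sketch of "in-betweening": given two artist-made key poses,
# generate the intermediate frames. Here a pose is just a list of
# joint angles and the blend is linear; a learned model would replace this.

def inbetween(key_a, key_b, n_frames):
    """Return n_frames poses blending key_a into key_b (endpoints excluded)."""
    frames = []
    for i in range(1, n_frames + 1):
        t = i / (n_frames + 1)  # fraction of the way from key_a to key_b
        frames.append([a + t * (b - a) for a, b in zip(key_a, key_b)])
    return frames

# Two key poses with two "joints" each; three tween frames between them.
print(inbetween([0.0, 10.0], [10.0, 20.0], 3))
# → [[2.5, 12.5], [5.0, 15.0], [7.5, 17.5]]
```

The economics follow directly: the artist authors 2 frames, the machine produces the other 3 (or 30), which is the cost-cutting claim in the comment.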


AI can't animate a character at will? After everything we have seen AI become capable of, it seems incredible to claim that animating a character is something it simply won't be able to do in the future.


Sure, it will also go to the theater in the future, and pay for a ticket to see humans act.

Note that I used the word "currently". Given enough time, AI will be able to do things way beyond my current comprehension.


But AI CAN animate a character, they've animated a bear riding a skateboard, or dolphin surfing, etc.

Albeit it's kinda shitty and imperfect, the proof of concept is already there; it's basically Stable Diffusion at the pre-prototype stage. Give it two years and it'll be able to churn out full videos.

I've already begun creating a Youtube shorts channel fully made of videos created by AI + CC licensed images and videos.

I consult companies to do similar things, etc. There's a whole world out there that is going to change overnight. I'd say AGI is 4-5 years away tops. At that point, all bets are off the table.

The reason I consult in AI? Contingency plan, in case AI becomes the last major industry. I've got a family to support (that, and I'm obsessed with the novelty and the exciting things I can do with AI, in the same way the scientist in Independence Day is "excited" when the aliens come because all the stuff turns on in their captured spaceship: it's cool, but maybe I shouldn't be so giddy, since it could spell our eventual doom).


No, this will most likely not happen in 2 years, because it's not a straightforward progression from the current state of the art.

Generating realistic video requires using 3D models, instead of the 2D models that Stable Diffusion and similar systems use now. To get beyond a crappy level, at least two problems have to be tackled:

1. More example data has to be available. There is an abundance of 2D video and still images, on which the current systems are trained. There is, however, not enough 3D footage.

2. The systems that generate 2D images now cannot easily scale up to 3D. Point clouds would require ~1000 times the amount of data, and 3D meshes are so different from images that current techniques won't work.
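The ~1000x data figure in point 2 can be sanity-checked with a back-of-envelope scaling argument. The 1024-per-axis resolution is my illustrative assumption; the point is that a dense 3D grid costs one extra factor of the per-axis resolution over a 2D image.

```python
# Rough scaling: at the same per-axis resolution, a dense 3D grid needs
# "resolution" times more samples than a single 2D image.
res = 1024               # samples per axis, illustrative choice
pixels_2d = res ** 2     # one image
voxels_3d = res ** 3     # dense voxel grid at the same resolution

print(voxels_3d // pixels_2d)  # → 1024, i.e. roughly the ~1000x in the comment
```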

So, yes, it is possible that this will happen in two years time, but it seems highly unlikely.

What I deem possible in two years time is generating animations such as cartoons, in which a perfect understanding of a 3D world is not required. And this would still require a lot of human input to be anywhere near interesting.

Also, if you consult companies about this, and say that AGI is 4-5 years away, then I wonder if you are trolling, and I should possibly have refrained from writing this comment. If you are not trolling, please consult a book on AI, or any SF novel for that matter, from the 1960s.


Mickey Mouse is not a difficult character to emulate; branding and marketing are what continue to make it a household name. Even with its original cultural trailblazing, it would have faded away if not for the IP being constantly brought to market by Disney, and the same goes for all their IPs.

It is luck when an indie creator strikes on a new IP idea and it happens to get viral traction; otherwise it is deliberate delivery to market by brands that makes an IP successful.

All to say: if it were possible to out-market Disney, it would have happened regardless of AI. Producing the creative material isn't Disney's edge at all; it's their brand position and budget.


Generated content will never have social capital.

People not only watch Star Wars because it's a great movie. People watch it because other people do. They can discuss it, they can argue about it, they can share it. A teenager watches A Serbian Film to show off how badass he is, or Primer to show off how smart he is, and plays Dark Souls to show off his skill.

With generated content, you don't have those qualities. I'm actually working on one project like this, and it's a great challenge.

Consuming media may seem like a "single player" game when you just sit and watch it, but it never really is.


This future is not new. AI might make the Holodeck from Star Trek finally exist. Our iPhones are tricorders. Alexa is the ship's computer. Automated agriculture, automated transport, and automated food-service machines are the ship's replicator. We ticked off the box from 40 years ago.

The AI gloomerism is really missing something. AI is the next tool to build off. It has structural weaknesses and humans make better products with AI than AI does alone.

I'm not going to use 2020 AI tech to push out 2010 Disney films. Get to the next technology already, AI is logically incomplete.


> Mickey Mouse(ok, not the biggest asset they have now, but an example) will be replaced by Ricky the Raccoon or thousands of other characters generated by AI.

I don't know about that. There are already thousands of characters, universes, stories (and they have been much cheaper to make than in the past, hence the proliferation), yet people still go see Star Wars movies, no matter how bad the previous one was (and the story is the same for video games).

Culture is about shared references, so owning the property over that common-ground references is going to be even more valuable as the cost of making content declines and new entrants are prevented from entering because of the competition. (we need a copyright reform to prevent a handful of corporations to own the entirety of culture, but it's as likely as a socialist revolution at this point…)


I find it funny how 'AI Ethicists' worry about what AI says, and yet they see no problem with AI possibly displacing millions of jobs and leaving people without income. That is what should be focused on, not nitpicking what the AI should say.

I know this is a low quality comment, I'm just angry at the current situation regarding AI.


Because most 'AI ethicists' are overeducated, well-off, urbanite, upper-middle-class landed-gentry-adjacents who believe they will never be replaced by AI, because the touchstones of their distributed cognitive-emotional load are too complex for even far-off SOTA models to ever grasp. The fate of the poor simple proles and the millions left destitute is a non-issue to them. AI bots saying mean things on Twitter is a far greater crisis, because it pertains to what they actually care about, not the lives of millions of people less fortunate than them.


> Because most 'AI ethicists' are overeducated well off urbanite upper-middle class landed gentry-adjacents

Every time I come back to HN after a break, I am quickly reminded of why I left.



