AI doesn't have needs or desires; humans do. And no matter how hyped one might be about AI, we're far away from creating an artificial human. As long as that's true, AI is a tool to make humans more effective.
That's fair, but the question was whether AI would destroy or create jobs.
You might speculate about a one-person megacorp where everything is done by AIs that a single person runs.
What I'm saying is that we're very far from this, because the AI is not a human that can make the CEO's needs and desires their own and execute on them independently.
Humans are good at being humans because they've learned to play a complex game, which is to pursue one's needs and desires in a partially adversarial social environment.
This is not at all what AI today is being trained for.
Maybe a different way to look at it, as a sort of intuition pump: if you were that one-person company, and you had an AGI that would correctly answer any unambiguously stated question you could ask, at what point would you need to start hiring?
You're taking your opinion to an extreme, because I don't think anyone is talking about replacing all engineers with a single AI computer doing the work for a one-person mega-corporation.
The actual question, which is much more realistic, is whether an average company of, let's say, 50 engineers will still need to hire those 50 engineers if AI turns out to be such an efficiency multiplier.
In that case, you will no longer need 10 people to complete 10 tasks in a given time unit, but perhaps only 1 engineer plus AI compute to do the same. Not all businesses can continue scaling forever, so it's pretty expected that those 9 engineers will become redundant.
You took me too literally there, that was intended as a thought experiment to explore the limits.
What I was getting at was the question: If we feel intuitively that this extreme isn't realistic, what exactly do we think is missing?
My argument is that what's missing is the human ability to play the game of being human: pursuing goals in an adversarial social context.
To your point more specifically: Yes, that 10-person team might be replaceable by a single person.
More likely than not, however, the size of the team was not constrained by a lack of ideas or ambition, but by capital and organizational effectiveness.
This is how it has played out with every technology so far that has increased human productivity: such technologies increase the demand for labor.
Put another way: Businesses in every industry will be able to hire software engineering teams that are so good that in the past, only the big names were able to afford them. The kind of team required for the digital transformation of every old-fashioned industry.
In my 10-person team example, what in your opinion would the company do with the remaining 9 people once the AI proves its value in that team?
Your hypothesis, AFAIU, is that the company will just continue to scale because there's an indefinite amount of work/ideas to be explored/done, so the focus of those 9 people will just shift to some other topic?
Let's say I'm a business owner with a popular product, a backlog of 1000 bugs, and a team of 10 engineers. The engineers are busy juggling new features and bug fixes at the same time. Now let's assume an AI model relieves 9 out of 10 engineers from working through the bug backlog, and we need only 1 or 2 engineers to review the code the model spits out for us.
What concrete type of work is left at this point for the remaining 9 engineers?
Assuming that the team, as you say, is not constrained by a lack of ideas or ambition, and that the feature backlog is somewhat indefinite in that regard, I think the real question is whether there's a market for those ideas. If there's no market for them, then there's no business value ($$$) created by those engineers.
In that case, they become a plain cost, so what is the business incentive to keep them?
> Businesses in every industry will be able to hire software engineering teams that are so good that in the past, only the big names were able to afford them
Not sure I follow this example. Companies will still hire engineers, but IMO at much lower capacity than was required up until now. Your N SQL experts are now replaced by the model. Your M Python developers are now replaced by the model. Your engineer doing PR reviews is now replaced by the model. Heck, even your SIMD expert now seems to be replaced by the model too (https://github.com/ggerganov/llama.cpp/pull/11453/files). Those companies will no longer need M + N + ... engineers to create the business value.
> Your hypothesis, AFAIU, is that the company will just continue to scale because there's an indefinite amount of work/ideas to be explored/done, so the focus of those 9 people will just shift to some other topic?
Yes, that's what I'm saying, except that this would hold over an economy as a whole rather than within every single business.
Some teams may shrink. Across industry as a whole, that is unlikely to happen.
The reason I'm confident about this is that this exact discussion has happened many times before in many different industries, but the demand for labor across the economy as a whole has only grown. (1)
"This time it's different" because the productivity tech in question is AI? That gets us back to my original point about people confusing AI with an artificial human. We don't have artificial humans, we have tools to make real humans more effective.
Hypothetically you could be right. I don't know if "this time will be different", nor am I trying to predict what will happen on the global economic scale. That's out of my reach.
My question is of a much narrower scope, and much more concrete and tangible - and yet I haven't been able to find any good answer to it, or strong counter-arguments if you will. If I had to guess, my prediction would be that many engineers will need to readjust their skills or even requalify for some other type of work.
It should be obvious that technology exists for the sake of humans, not the other way around, but I have already seen an argument for firing humans in favour of LLMs since the latter emit less pollution.
LLMs do not have desires, but their existence alters the desires of humans, including the ones in charge of businesses.
I agree the latter part is a risk to consider, but I really think getting an AI to replace human jobs on a vast scale will take much more than just training a bit more.
You need to train on a fundamentally different task, which is to be good at the adversarial game of pursuing one's needs and desires in a social environment.
And that doesn't yet take into account that the interface to our lives is largely physical; we need bodies.
I'm seeing us on track to AGI in the sense of building a universal question answering machine, a system that will be able to answer any unambiguously stated question if given enough time and energy.
Stating questions unambiguously gets pretty difficult fast even where it's possible (often it isn't possible at all), and getting those answers is just a small part of being a successful human.
PS: Needs and desires are totally orthogonal to AI/AGI. Every animal has them, but many animals don't have high intelligence. Needs and desires are a consequence of our evolutionary history, not our intelligence. AGI does not need to mean an artificial human. Whether to pursue or not pursue that research program is up to us, it's not inevitable.
To be clear, I'm not arguing humans will stop being involved in software engineering completely. What I fear is that the pool of employable humans (as code reviewers, prompt engineers and high-level "solution architects") will shrink, because fewer will be needed, and that this will cause ripples in our industry and affect employment.
We know this isn't far-fetched. We have strong evidence to suspect that during the big layoffs of a couple of years ago, FAANG companies and startups colluded to lower engineer salaries across the board, and that their excuse ("the economy is shrinking") was flimsy at best. Now AI presents them with another powerful tool to reduce salaries even further, with a side dish of shrinking the cost center that is programmers and engineers.
Honestly, I wasn't even talking about jobs with that. I worry about an intelligent IoT controlled by authoritarian governments or corporate interests. Our phones have already turned society into a panopticon, and that can get much worse when AGI lands.
But yes, the job thing is concerning as well. AI won't scrub a toilet, but it will cheaply and inexhaustibly do every job that humans find meaningful today. It seems that we're heading inexorably towards dystopia.
> AI won't scrub a toilet, but it will cheaply and inexhaustibly do every job that humans find meaningful today
That's the part I really don't believe. I'm open to being wrong about this; the risk is probably large enough to warrant considering it even if the probability of it happening is low, but I do think it's quite low.
We don't actually have to build artificial humans. Doing so is very difficult and very far away. It's a research program that is related to, but not identical to, the research program leading to tools that have intelligence as a feature.
We should be, and in fact we are, building tools. I'm convinced that the mental model many people here and elsewhere are applying is essentially "AGI = artificial human", simply because the human is the only kind of thing we know of that appears to have general intelligence.
But that mental model is flawed. We'll be putting intelligence in all sorts of places that are not similar to a human at all, without those devices competing with us at being human.
To be clear, I'm much more concerned about the rise of techno-authoritarianism than about employment.
And further ahead, regarding where I said your original take might not age well: I'm also not worried about AI making humanoid bodies. I'd be worried about a future where mines, factories, and logistics are fully automated: an AI for whom we've constructed a body which is effectively the entire planet.
And nobody needs to set out to build that. We just need to build tools. And then, one day, an AGI writes a virus and hacks the all-too-networked and all-too-insecure planet.
> I'm also not worried about AI making humanoid bodies. I'd be worried about a future where mines, factories, and logistics are fully automated: an AI for whom we've constructed a body which is effectively the entire planet.
I know scifi is not authoritative, and is no more than human fears made into fiction, but have you read Philip K. Dick's short story "Autofac"?
It's exactly what you describe. The AI he describes isn't evil, nor does it seek our extinction. It actually wants our well-being! It's just that it has taken over all of the planet's resources and insists on producing and making everything for us, so that humans have nothing left to do. And they cannot break the cycle, because the AI is programmed to only transition power back to humans "when they can replicate Autofac output", which of course they cannot, because all the raw resources are hoarded by the AI, which is vastly more efficient!
I think that science fiction plays an important role in discourse. Science fiction authors dedicate years to deeply contemplating the potential future consequences of technology and packaging them into compelling stories. This gives us a shorthand for talking about positive outcomes we want to see and negative outcomes we want to avoid. People who dismiss scifi as "just fiction" aren't participating in good faith.
On the other hand, it's important not to pay too close attention to the details of scifi. I find myself writing a novel, and I'm definitely making decisions in support of a narrative arc. Having written the comment above... that planetary factory may very well become the third faction I need for a proper space opera. I'll have to avoid that PKD story for the moment; I don't want the influence.
Though to be clear, in this case, that potentiality arose from an examination of technological progress already underway. For example, I'd be very surprised if people aren't already training LLMs on troves of viruses, metasploit, etc. today.
I think we're talking about different time scales - I'm talking about the next few decades, maybe two or three, essentially the future of our generation specifically. I don't think what you're describing is relevant on that time scale, and possibly you don't either.
I'd add though that I feel like your dystopian scenario probably reduces to a Marxist dystopia where a big monopolist controls everything.
In other words, I'm not sure whether that Earth-spanning autonomous system really needs to be an AI or requires the development of AI or fancy new technology in general.
In practice, monopolies like that haven't emerged, thanks to competition and regulation, and there isn't a good reason to assume it would be different with AI.
In other words, the enemies of that autonomous system would have very fancy tech available to fight it, too.
I'm not fussy about who's in control. Be it global or national; corporate or governmental; communist or fascist. But technology progresses more or less uniformly across the globe and systems are increasingly interconnected. An AGI, or even a poor simulacrum cobbled together from LLMs with internet access, can eventually hack anything that isn't airgapped. Even if it doesn't have "thoughts" or "wants" or "needs" in some philosophical sense, the result can still be an all-consuming paperclip maximizer (but GPUs, not paperclips). And every software tool and every networked automated system we make can be used by such a "mind."
And while I want to agree that we won't see this happen in the next three decades, networked automated cars have already been deployed on the streets of several cities, and people are eagerly integrating LLMs into seemingly any project that needs funding.
It's tempting to speculate about what might happen in the very long run. And unlike the jobs question, I don't really have strong opinions on this.
But it seems to me like you might not be sufficiently taking into account that this is an adversarial game; i.e. it's not sufficient for something just to replicate, it needs to also out-compete everything else decisively.
It's not clear at all to me why an AI controlled by humans, to the benefit of humans, would be at a disadvantage to an AI working against our benefit.
Agreed on all but one detail. Not to put too fine a point on it, but I do believe that the more immediate concern is AI controlled by a small number of humans, working against the benefit of the rest of humanity.
In the AI age, those who own the problems stand to own the AI benefits. Utility is in the application layer, not the hosting or development of AI models.