Extrapolating from my current experience with AI-assisted work: AI just makes work more meaningful. My output has increased 10x, allowing me to focus on ideas and impact rather than repetitive tasks. Now apply that to entire industries and whole divisions of labor: manual data entry, customer support triage, etc. Will people be out of those jobs? Most certainly. But it gives all of us a chance to level up—to focus on more meaningful labor.
As a father, my forward-thinking vision for my kids is that creativity will rule the day. The most successful will be those with the best ideas and most inspiring vision.
I keep seeing these "my output is 10X with LLMs" claims, but I'm not seeing any increase in quality or decrease in price for any of the very many tech products I've updated or upgraded in the last couple of years.
We're coming up on 3 years of ChatGPT and well over a year since I started seeing the proliferation of these 10X claims, and yet LLM users seem to be bearing none of the fruit one might expect from a 10X increase in productivity.
I'm beginning to think that this 10X thing is overstated.
>The most successful will be those with the best ideas and most inspiring vision.
This has never been the truth of the world, and I doubt AI will make it come to fruition. The most successful people are by and large those with powerful connections, and/or access to capital. There are millions of smart, inspired people alive right now who will never rise above the middle class. Meanwhile kids born in select zip codes will continue to skate by unburdened by the same economic turmoil most people face.
If it actually works like that, it'll be just like all labor-saving innovations, going back to the loom and printing press and the like; people will lose their job, but it'll be local / individual tragedies, the large scale economic impact will likely be positive.
It'd still suck to lose your job / vocation though, and some of those won't be able to find a new job.
Honestly, much of work under capitalism is meaningless (see: The Office). The optimistic take is that many of those same paper-pushing roles could evolve into far more meaningful work—with the right training and opportunity (also AI).
When the car was invented, entire industries tied to horses collapsed. But those who evolved leveled up: blacksmiths became auto mechanics and metalworkers, etc.
As a creatively minded person with entrepreneurial instincts, I’ll admit: my predictions are a bit self-serving. But I believe it anyway—the future of work is entrepreneurial. It’s creative.
> The optimistic take is that many of those same paper-pushing roles could evolve into far more meaningful work—with the right training and opportunity (also AI).
There already isn't enough meaningful work for everyone. We see people with the "right training" failing to find a job. AI is already making things worse by eliminating meaningful jobs — art, writing, music production are no longer viable career paths.
Only under the capitalistic model without basic income / social services. AI art is really fascinating, but there is something so meaningful about watching art being made by a talented artist. I don't think this will ever go away.
>the future of work is entrepreneurial. It’s creative.
How is this the conclusion you've come to when the sectors impacted most heavily by AI thus far have been graphic design, videography, photography, and creative writing?
First off, is there any? That's an assumption, and one that applies just as easily to human-written code. Nobody writes debt-free code; that's why you have many checks and reviews before things go to production - ideally.
Second, in theory, future generations of AI tools will be able to review previous generations and improve upon the code. If it needs to, anyway.
But yeah, tech debt isn't unique to AIs, and I haven't seen anything conclusive showing that AIs generate more tech debt than regular people - but please share if you've got sources saying otherwise.
(disclaimer: I'm very skeptical about using AI to generate code myself, but I will admit to using it for boring tasks like unit test outlines)
> Second, in theory, future generations of AI tools will be able to review previous generations and improve upon the code. If it needs to, anyway.
Is that what's going to happen? These are still LLMs. There's nothing in future generations that guarantees those changes would be improvements rather than flat-out regressions. Humans can't even agree on what good code looks like, as it's very subjective and heavily dependent on context and the skills of the team.
Likely, you ask gpt-6 to improve your code and it just makes up piddly architecture changes that don't fundamentally improve anything.