The question is... is this based on existing capability of LLMs to do these jobs? Or are companies doing this on the expectation that AI is advanced enough to pick up the slack?
I have observed a disconnect in which management is typically far more optimistic about AI being capable of performing a specific task than are the workers who currently perform that task.
And to what extent is AI-related job cutting just an excuse for what management would want to do anyway?
I do not see anything in this study that accounts for the decline in economic activity. Is it AI replacing the jobs, or is it that companies are not hiring optimistically, which disproportionately impacts entry-level jobs?
Agreed. Compared with the high cost of a full-time entry-level software hire (total comp + onboarding + mentoring), investing in AI and seeing whether that gap can be filled is a far less risky choice in the current economy.
If, 6-12 months in, the AI bet doesn't pay off, you just stop spending money on it: cancel or don't renew contracts and move some teams around.
For full-time entry-level hires, we typically don't see meaningful positive productivity (their output exceeding their cost) for 6-8 months. Entry-level hires also take time away from senior folks, reducing their productivity. And if you need to cut payroll, that's far more complicated, and worse for morale, than just cutting AI spend.
Given all that, plus an economy that seems pre-recessionary (or already is, according to some leading indicators), it seems best to wait or hire very cautiously for at least the next 6-8 months.
"I think the high cost of full time hires for entry level software jobs (total comp + onboarding + mentoring) vs investing in AI and seeing if that gap can be filled"
I think it's more to do with the outsourcing. Software is going the same way as manufacturing jobs. Automation hurts a little, but outsourcing kills.
The numbers say otherwise. The US is outsourcing about 300k jobs annually, with about 75% of those being tech. The trend has generally increased over the past decade.
Even then, why hire a junior dev instead of a mid-level developer who doesn't need mentoring? You can probably hire one for the same price as a junior dev if you hire remotely, even in the US.
Or H1B / outsourcing replacement. There are data points showing tech companies hiring thousands of foreign workers while laying off domestic employees. That should factor into these analyses of displaced junior developers.
Exactly this. 2023Q1 was when the interest-rate hikes from the previous year really kicked in with full force. It was the first hiring market I'd seen in well over a decade where employers were firmly in the driver's seat, even for seniors.
I can imagine there were a decent number of execs who tried ChatGPT, made some outlandish predictions, and based hiring decisions on those predictions, though.
This paper looks kind of trashy: it confuses correlation with causation, and it's clickbaity.
> In 2017 Trump made businesses have to amortize these [R&D] expenses over 5 years instead of deducting them, starting in 2022 (it is common for an administration to write laws that will only have a negative effect after they're gone). This move wrecked the R&D tax credit. Many US businesses stopped claiming R&D tax credits entirely as a result. Others had surprise tax bills.
Then companies bought their own stock instead of investing in labor:
To me it is just market pressure to exploit the high stress on laborers at the moment. The level of uncertainty today is only a problem if you don't have a ton of existing capital, which everyone in charge does. So they are insulated and can treat people poorly without repercussions. In a market that prefers short-term profits, they will do this to make more money right now.
Companies must do this, because if they don't, their competition will (i.e., the pressure).
Of course, we could collectively decide to value labor and profit equally, as a symbiotic relationship that incentivizes long-term prosperity. But where's the meme in that?
How about an alternative potential explanation -- AI is great at bits and pieces but not great at everything, and definitely not good at stitching together all the various pieces. So you need someone experienced to fill in the gaps and integrate all the vibe-coded bits.
This is what I see, but want to hear others' experiences.
An alternate explanation: even if AI has no chance of entirely replacing an employee, not even a junior hire, in the hands of competent seniors it substantially improves productivity and eliminates whole classes of tasks that are traditionally where juniors cut their teeth.
So companies reduce junior hiring because juniors' work is relatively less valuable, and they can meet their goals by shuffling existing resources. When they can't, they go for seniors, since the immediate bang for the buck is higher (of course, while depleting the global pipeline that actually produces seniors in the long run, in a typical tragedy of the commons).
One issue we're running into at my job: we're struggling to find entry-level candidates who aren't lying about what they know by using an LLM.
For the tech side, we've reduced behavioral questions and created an interview that allows people to use Cursor, LLMs, etc. during the interview; that way, it's impossible to cheat.
We have folks build a feature on a fake code base. Unfortunately, more junior folks now seem to struggle a lot more with this problem.
The thing about entry-level candidates is that we expect them to know relatively little anyway. When I've been delegated to help interview new candidates, a question I really like is: "What's your favorite project you've worked on lately? What was interesting about it? Did you run into any tricky problems along the way? It can be anything: for work, school, a hobby project. It doesn't even need to be software."
It slices through the bullshit fast. Either the person I'm interviewing is a passionate problem solver and will be tripping over themselves to describe whatever oddball thing they've been working on, or they're a charlatan or simply not cut out for the work. My sneaking suspicion is that we could achieve similar success in hiring for entry-level positions at my current company if we cut out literally the entire rest of the interview process, asked that one question, and hired the first person to answer it well.
We came up with some simple coding exercises (about 20 minutes total to implement, max) and asked candidates to submit their responses when applying. Turns out one of the questions regularly causes hallucinated APIs in LLM responses, so we've been able to weed out a large percentage of cheaters who didn't even bother to test the code before submitting.
The other part is that you can absolutely tell during a live interview when someone is using an LLM to answer.
This is the big question. It could be any combination of the following and it likely depends on the company/position too:
- Generative AI is genuinely capable enough to replace entry level workers at a lower cost.
- The hype around Generative AI is convincing people who make hiring decisions that it is capable enough to replace entry level workers at a lower cost.
- The hype around Generative AI is being used as an excuse to not hire during an economic downturn.
"Or are companies doing this on the expectation that AI is advanced enough to pick up the slack?"
I think it's the expectation. We have some publicized examples of companies needing to hire people back. My company isn't laying off, but they also aren't hiring much, especially at the entry level. They're also trying to force attrition with PIPs and the like. If it were a right-now thing, they'd just do layoffs.
Also, even if coding gets 30% faster (as some studies say), that doesn't translate into a 30% reduction in headcount, since much of the work is design and other non-coding tasks. This further suggests, in my estimation, that the lack of hiring is anticipatory.
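The gap between a coding speedup and a headcount reduction can be made concrete with Amdahl's-law-style arithmetic. This is a back-of-the-envelope sketch: the 40% coding share is an assumed illustrative figure, not a number from any study or from this thread.

```python
def overall_speedup(coding_fraction, coding_speedup):
    """Amdahl's-law-style estimate: only the coding share of the job
    speeds up; everything else (design, meetings, review) stays the same."""
    return 1 / ((1 - coding_fraction) + coding_fraction / coding_speedup)

# If coding is 40% of a developer's time and AI makes coding 30% faster (1.3x),
# the overall productivity gain is only about 10%, not 30%.
print(round(overall_speedup(0.4, 1.3), 3))  # -> 1.102
```

So even taking the optimistic studies at face value, the implied headcount effect is much smaller than the headline coding speedup, which supports the reading that the hiring slowdown is anticipatory rather than driven by realized gains.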
This was close to my first thought as well. I don't think we're far enough along the LLM adoption curve to actually know how it will affect the business case and thus employment long term. In the last couple of years of LLM/AI honeymoon, the changes to accommodate the technology may obscure direct and second order effects.
At least for developers, AI can absolutely do the same quality work as most fresh graduates - and the business case for hiring fresh graduates was already pretty shaky; IME they take 6+ months to become a net positive (usually more like two years).
I have always kept that latter point to myself at work. The next generation has to learn somewhere, and training them can be a pleasure (depending on the person). But even quite good folks with 4-5 years of experience need what feels like a lot of assistance to understand and follow the architecture laid out for them.
I don’t use AI to help me code because IME it’s no better than an enthusiastic junior dev who doesn’t learn, and helping them learn is the main reason I might want to work with such a dev.
I've yet to see any company show an actual revenue increase from employing AI. So I doubt they have built the internal analysis structures to measure what exactly they're using AI for.
Even if a company has somehow managed to successfully replace all human labor with AI and fire 100% of its human workforce, the revenue wouldn't necessarily spike.
The question of reasoning doesn't really matter once you show it's happening.
When you constrict the market like they have, you naturally get distortions, and the adversarial nature of the market fails to perform economic calculation, potentially leading to grave consequences. Even at this point, whipsaws would be quite destructive. I know people who have abandoned their careers due to a lack of job availability for the foreseeable future; they searched for years with no recovery.
When you destroy a pipeline of career development for short-term profit, which is possible because of the decoupled nature of money-printing and credit facilities, the decisions made are psychologically sticky. When there is no economic benefit to competency, the smart people leave for a market where there is.
The smart people I know right now are quietly preparing for socio-economic collapse as a result of runaway money-printing. When you have a runaway dangerous machine, you can either step back and let it destroy itself (isolating yourself), or you can accelerate the breakdown of its dependent cycles. Many choose the former, since the latter carries existential risk for no real short-term benefit, even though it would result in the fewest casualties.
70% of the economy covers the white-collar market, which will be gone soon; not because the jobs can be replaced by AI, but because the business leaders in consolidated industries will all decide to replace workers, becoming the instruments of their own demise through deflationary economics.
There are a lot of ways one can prepare for such a collapse. The most impactful is to develop skills and preserve knowledge of history, production, and other important information that is, to a greater or lesser degree, being suppressed or pigeonholed into industry in foreign countries.
Preserve the dependencies that promote critical thinking, rational thought, and measured reason.
Aside from that, figure out what you need for survival beyond the basics in a non-permissive environment without rule of law, and build a community of people who have those skills. The lone wolf always dies; the pack survives.
Austere Medicine, Food Preservation, Guns/Self Defense, Information Recon, etc... Map dependencies, costs, and yields (information that is largely kept confidential these days).
Learn how to make things from scratch, and learn which areas you can leapfrog given knowledge of how the technologies in those industries developed.
Most of the materials we rely on today would not be available, because they depend on knowledge of chemistry that few people have.
The actual factors that determine the Wealth of Nations are rooted in the ability of the individual citizen to be self-sufficient and to produce necessities from scratch that will last without excessive recurring cost, through a division of labor. Adam Smith's book covers these factors in great detail, and while the labor theory of value is now defunct/refuted, the cost component implicit in the making of intermediate goods under such valuation is not, and remains valid.
Debt-based money-printing, on the other hand, is all about distorting the market, allowing nationalized industry to outcompete legitimate companies through slave labor enabled by financial engineering. The debasement stolen from everyone holding the currency is, in effect, slave labor. A runaway positive-feedback system that destroys everything it touches (eventually), tainted by Ahriman.
There are a lot of people who only care about how much leveraging AI will net them personally; very little thought is being given to the efficacy of current LLMs beyond that.