
My primary worry since the start has been not that it would "replace workers", but that it can destroy the value of entire sectors. Think of resume-sending. Once both sides are automated, the practice is actually superfluous. The concept of "posting" and "applying" to jobs has to go, so any infrastructure supporting it has to go. At no point did it successfully "do a job", but the injury to the signal-to-noise ratio wipes out the economic value of a whole system.

This is what happened to Google Search. It, like cable news, does kinda plod along because some dwindling fraction of the audience still doesn't "get it", but decline is decline.



> it can destroy value of entire sectors. Think of resume-sending. Once both sides are automated, the practice is actually superfluous

"Like all ‘magic’ in Tolkien, [spiritual] power is an expression of the primacy of the Unseen over the Seen and in a sense as a result such spiritual power does not effect or perform but rather reveals: the true, Unseen nature of the world is revealed by the exertion of a supernatural being and that revelation reshapes physical reality (the Seen) which is necessarily less real and less fundamental than the Unseen" [1].

The writing and receiving of resumes has been superfluous for decades. Generative AI is just revealing that truth.

[1] https://acoup.blog/2025/04/25/collections-how-gandalf-proved...


Interesting: At first I was objecting in my mind ("Clearly, the magic - LLMs - can create effect instead of only revealing it.") but upon further reflecting on this, maybe you're right:

First, LLMs are a distillation of our cultural knowledge. As such they can only reveal our knowledge to us.

Second, they are limited even more by the user's knowledge. I've found that you can barely escape your "zone of proximal development" when interacting with an LLM.

(There's even something to be said about prompt engineering in the context of what the article is talking about: It is 'dark magic' and 'craft-magic' - some of the full potential power of the LLM is made available to the user by binding some selected fraction of that power locally through a conjuration of sorts. And that fraction is a product of the craftsmanship of the person who produced the prompt).


My view has been something of a middle ground. It's not exactly that it reveals relevant domains of activity to be merely performative, but it's a kind of "accelerationism of the almost-performative". It pushes these almost-performative systems into a death spiral of pure uselessness.

In this sense, I have rarely seen AI have negative impacts. Insofar as an LLM can generate a dozen lines of code, it forces developers to engage in less "performative copy-paste of stackoverflow/code-docs/examples/etc." and to engage the mind in what those lines should be. Even if that engagement of the mind is only a prompt.


I find most software development performative, and I believe LLMs will only further that end. I suppose this is a radical view.


Yeah man, I'm not so sure about that. My father made good money writing resumes in his college years studying for his MFA. Same for my mother. Neither of them were under the illusion that writing/receiving resumes was important or needed. Nor were the workers or managers. The only people who were confused about it were capitalists who needed some way to avoid losing their sanity under the weight of how unnecessary they were in the scheme of things.


> This is what happened to Google Search

This is completely untrue. Google Search still works, wonderfully. It works even better than other attempts at search by the same Google. For example, there are many videos that you will NEVER find on Youtube search that come up as the first results on Google Search. Same for maps: it's much easier to find businesses on Google Search than on maps. And it's even more true for non-google websites; searching Stack Overflow questions on SO itself is an exercise in frustration. Etc.


Yeah I agree. But this is a strong perception and why Google stock is quite cheap (people are afraid Search is dying). I think Search has its place for years to come (while it will evolve as well with AI) and that Google is going to be pretty much unbeatable unless it is broken up.


I can't buy it, unfortunately, because I've used Google long enough to know what it can be, and currently the primary thing it turns up for me is AI-generated SEO spam. I'll agree, though, that many other search systems are inferior.


I'm not sure this is a great example... yes, the infrastructure of posting and applying to jobs has to go, but the cost of recruitment in this world would actually be much higher... you likely need more people and more resources to recruit a single employee.

In other words, there is a lot more spam in the world. Efficiencies in hiring that implicitly existed until today may no longer exist because anyone and their mother can generate a professional-looking cover letter or personal web page or w/e.


I'm not sure that is actually a bad thing. Being a competent employee and writing a professional-looking resume are two almost entirely distinct skill sets held together only by "professional-looking" being a rather costly marker of being in the in-group for your profession.


Resume-sending is a great example: if everyone's blasting out AI-generated applications and companies are using AI to filter them, the whole "application" process collapses into meaningless busywork.


No, the whole process is revealed to be meaningless busywork. But that step has been taken for a long time, as soon as automated systems and barely qualified hacks were employed to filter applications. I mean, they're trying to solve a hard and real problem, but those solutions are just bad at it.


Doesn't this assume that a resume has no actual relation to reality?


The technical information on the cv/resume is, in my opinion, at most half of the process. And that's assuming that the person is honest, and already has the cv-only knowledge of exactly how much to overstate and brag about their ability and to get through screens.

Presenting soft skills is entirely random, anyway, so the only marker you can have on a cv is "the person is able to write whatever we deem well-written [$LANGUAGE] for our profession and knows exactly which meaningless phrases to include that we want to see".

So I guess I was a bit strong on the low information content, but you better have a very, very strong resume if you don't know the unspoken rules of phrasing, formatting and bragging that are required to get through to an actual interview. For those of us stuck in the masses, this means we get better results by adding information that we basically only get by already being part of the in-group, not by any technical or even interpersonal expertise.

Edit: If I constrain my argument to CVs only, I think my statement holds: They test an ability to send in acceptably written text, and apart from that, literally only in-group markers.


For some applications it feels like half the signal of whether you're qualified is whether the CV is set in Computer Modern, ie was produced via LaTeX.


input -> ai expand -> ai compress -> input'

Where input' is a distorted version of input. This is the new reality.

We should start to be less impressed by volume of text and instead focus on density of information.
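The round trip above can be sketched with deterministic stand-ins for the two AI steps. These toy functions are purely illustrative (real LLM calls would be nondeterministic); the point is only that expand-then-compress is lossy, so input' is a distorted version of input:

```python
# Toy simulation of: input -> ai expand -> ai compress -> input'
# ai_expand and ai_compress are hypothetical stand-ins for LLM calls:
# padding with filler on the way out, truncating on the way back.

FILLER = "it is worth noting that"

def ai_expand(points):
    """Inflate terse bullet points into padded prose (stand-in for an LLM)."""
    return " ".join(f"{FILLER} {p}" for p in points)

def ai_compress(prose, max_words=6):
    """'Summarize' by keeping only the first few words (stand-in for an LLM)."""
    return " ".join(prose.split()[:max_words])

original = ["deadline friday", "budget 10k", "needs sign-off"]
expanded = ai_expand(original)
recovered = ai_compress(expanded)

# The round trip keeps the filler and drops the later points entirely:
# recovered is mostly FILLER, and input' != input.
print(recovered)
print(recovered == " ".join(original))
```

Under these assumptions the surviving text is dominated by filler, which is the "density of information" point: the expand step added volume, not information, and the compress step had no way to know which words mattered.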


> the whole "application" process collapses into meaningless busywork

Always was.


Are you sure Google Search is in decline? The latest Google earnings call suggests it's still growing.


Google Search is distinct from Google's expansive ad network. Google Search is now garbage, but their ads are everywhere and more profitable than ever.


On Google's earnings call - within the last couple of weeks - they explicitly stated that their stronger-than-expected growth in the quarter was due to a large unexpected increase in search revenues[0]. That's a distinct line-item from their ads business.

>Google’s core search and advertising business grew almost 10 per cent to $50.7bn in the quarter, surpassing estimates for between 8 per cent and 9 per cent.[0]

The "Google's search is garbage" paradigm is starting to get outdated, and users are returning to their search product. Their results, particularly the Gemini overview box, are (usually) useful at the moment. Their key differentiator over generative chatbots is that they have reliable & sourced results instantly in their overview. Just concise information about the thing you searched for, instantly, with links to sources.

[0] https://www.ft.com/content/168e9ba3-e2ff-4c63-97a3-8d7c78802...


This is anecdotal but here's a random thing I searched for yesterday https://i.imgur.com/XBr0D17.jpeg


> The "Google's search is garbage" paradigm is starting to get outdated

Quite the opposite. It's never been more true. I'm not saying using LLMs for search is better, but as it stands right now, SEO spammers have beat Google, since whatever you search for, the majority of results are AI slop.

Their increased revenue probably comes down to the fact that they no longer show any search results in the first screenful at all for mobile and they've worked hard to make ads indistinguishable from real results at a quick glance for the average user. And it's not like there exists a better alternative. Search in general sucks due to SEO.


Can you give an example of an everyday person search that generates a majority of AI slop?

If anything my frustration with google search comes from it being much harder to find niche technical information, because it seems google has turned the knobs hard towards "Treat search queries like they are coming from the average user, so show them what they are probably looking for over what they are actually looking for."


Basically any product comparison or review for example.


Let's try "samsung fridge review". The top results are a reddit thread, consumer reports article, Best Buy listing, Quora thread and some YouTube videos by actual humans.

Where is this slop you speak of?


> Quite the opposite. It's never been more true. I'm not saying using LLMs for search is better, but as it stands right now, SEO spammers have beat Google, since whatever you search for, the majority of results are AI slop.

It's actually sadder than that. Google appear to have realised that they make more money if they serve up ad infested scrapes of Stack Overflow rather than the original site. (And they're right, at least in the short term).


Most Google ad revenue comes from Google Search; it's a misconception that Google derives most of its profits from third-party ads, which are just a minor part of Google's revenue.


You are talking past each other. They say "Google search sucks now" and you retort with "But people still use it." Both things can be true at the same time.


You misunderstand. Making organic search results shittier will drive up ad revenue as people click on sponsored links in the search results page instead.

Not a sustainable strategy in the long term though.


I've all but given up on google search and have Gemini find me the links instead.

Not because the LLM is better, but because the search is close to unusable.


We're in the phase of yanking hard on the enshittification handle. Of course that increases profits whilst sufficient users can't or won't move, but it devalues the product for users. It's in decline insomuch as it's got notably worse.


The line goes up, democracy is fine, the future will be good. Disregard reality


GenAI is like plastic surgery for people who want to look better - it looks good only if you can do it in a way that doesn't show it's plastic surgery.

Resume filtering by AI can work well on the first line (if implemented well). However, once we get to the real interview rounds and I see the CV is full of AI slop, it immediately suggests the candidate will have a loose attitude to checking the work generated by LLMs. This is a problem already.


> looks good only if you can do it in a way it doesn't show it's plastic surgery.

I think the plastic surgery users disagree here: it seems like visible plastic surgery has become a look, a status symbol.


In the specific case of résumé-sending, the decline of the entire sector is a good thing. Nothing but make-work.


> This is what happened to Google Search. It, like cable news, does kinda plod along because some dwindling fraction of the audience still doesn't "get it", but decline is decline.

Well, their Search revenue actually went up last quarter, as in all quarters. Overall traffic might be a bit down (they don't release that data so we can't be sure) but not revenue. While I do take tons of queries to LLMs now, the kind of queries Google actually makes a lot of money on (searching flights, restaurants etc) I don't go to an LLM for - either because of habit or because of fear these things are still hallucinating. If Search was starting to die I'd expect to see it in the latest quarter earnings, but it isn't happening.


I had similar thoughts, but then remembered companies still burn billions on Google Ads, sure that humans...and not bots...click them, and thinking that in 2025 most people browse without ad-blockers.


Most people do browse without ad blockers, otherwise the entire DR ads industry would have collapsed years ago.

Note also that ad blockers are much less prevalent on mobile.


People will pay for what works. I consult for a number of ecommerce companies and I assure you they get a return on their spend.


Probably the first significant hit is going to be drivers, delivery men, truckers etc., a demographic of 5 million jobs in the US and double that in the EU, with ripple effects costing millions of other jobs in industries such as roadside diners and hotels.

The general tone of this study seems to be "It's 1995, and this thing called the Internet has not made TV obsolete"; same for the Acemoglu piece linked elsewhere in the thread. Well, no, it doesn't work like that: it first comes for your Blockbuster, your local shops and newspaper and so on, and transforms those middle class jobs vulnerable to automation into minimum wages in some Amazon warehouse. Similarly, AI won't come for lawyers and programmers first, even if some fear it.

The overarching theme is that the benefits of automation flow to those who have the bleeding edge technological capital. Historically, labor has managed to close the gap, especially through public education; it remains to be seen if this process can continue, since eventually we're bound to hit the "hardware" limits of our wetware, whereas automation continues to accelerate.

So at some point, if the economic paradigm is not changed, human capital loses and the owners of the technological capital transition into feudal lords.


I think that drivers are probably pretty late in the cycle. Many environments they operate in are somewhat complicated, even if you do a lot to make automation possible. Say, with garbage, you move to containers that can simply be lifted either by crane or forks; still, the places where those containers sit might each need a lot of individual training to navigate to.

Similar thing goes for delivery. Moving a single pallet to a store, or replacing carpets, or whatever. Lots of complexity if you do not offload it to the receiver.

The more regular the environment, the easier it is to automate. Shelving in a store, to my mind, might be simpler than all the environments vehicles need to operate in.

And I think we know who's first to go: average or below-average "creative" professionals. Copywriters, artists and so on.


Generative AI has failed to automate anything at all so far.

(Racist memes and furry pornography doesn't count.)


Yeah no, I'm seeing more and more shitty ai generated ads, shop logos, interior design & graphics for instance in barber shops, fast food places etc.

The sandwich shop next to my work has a music playlist which is 100% ai generated repetitive slop.

Do you think they'll be paying graphic designers, musicians etc. from now on, when something certainly shittier than what a good artist does, but also much better than what a poor one is able to achieve, can be used in five minutes for free?


> Do you think they'll be paying graphic designers, musicians etc. from now on

People generating these things weren't ever going to be customers of those skillsets. Your examples are small business owners basically fucking around because they can, because it's free.

Most barber shops just play the radio, or "spring" for satellite radio, for example. AI generated music might actively lose them customers.


That's not automation, that's replacing a product with a cheaper and shittier version.


Given that the world is fast deglobalizing there will be a flood of factory work being reshored in the next 10 years.

There's also going to be a shrinkage in the workforce caused by demographics (not enough kids to replace existing workers).

At the same time education costs have been artificially skyrocketed.

Personally the only scenario I see mass unemployment happening is under a "Russia-in-the-90s" style collapse caused by an industrial rugpull (supply chains being cut off way before we are capable of domestically substituting them) and/or the continuation of policies designed to make wealth inequality even worse.


The world is not deglobalizing, US is.


The world is deglobalizing. The EU has been cutting itself off from Russia since the war started, and forcing medical industries to reshore since covid. At the same time it has begun a drive to remilitarize itself. This means more heavy industry, all of it local.

There is brewing conflict across continents. India and Pakistan, Red sea region, South China sea. The list goes on and on. It's time to accept it. The world has moved on.


> Global connectedness is holding steady at a record high level based on the latest data available in early 2025, highlighting the resilience of international flows in the face of geopolitical tensions and uncertainty.

https://www.dhl.com/global-en/microsites/core/global-connect...

Source for counter argument?


Source for counter argument is in the page that you just linked here. You have cherry picked one sentence.


"Nothing to see here, folks! Keep shipping your stuff internationally!"


navel gazing will be shown to be a reactionary empty step, as all current global issues require more global cooperation to solve, not less.

the individual phenomena you describe are indeed detritus of this failed reaction to an increasing awareness of all humans of our common conditions under disparate nation states.

nationalism is broken by the realization that everyone everywhere is paying roughly 1/4 to 1/3 of their income in taxes, however what you receive for that taxation varies. your nation state should have to compete with other nation states to retain you.

the nativist movement is wrongful in the usa for the reason that none of the folks crying about foreigners is actually native american,

but it's globally in error for not presenting the truth: humans are all your relatives, and they are assets, not liabilities: attracting immigration is a good thing, but hey feel free to recycle tired murdoch media talking points that have made us nothing but trouble for 40 years.


Allow me to refer you to Chesterton's Fence:

> There exists in such a case a certain institution or law; let us say, for the sake of simplicity, a fence or gate erected across a road. The more modern type of reformer goes gaily up to it and says, 'I don't see the use of this; let us clear it away.' To which the more intelligent type of reformer will do well to answer: 'If you don't see the use of it, I certainly won't let you clear it away. Go away and think. Then, when you can come back and tell me that you do see the use of it, I may allow you to destroy it.' [1]

The problem with anti-border extremism is that it ignores the huge success national borders have had since pre-recorded history in building social cohesion, community, and more generally high-trust societies. All those things are precious, they are worth making sacrifices for, they are things small town America has only recently lost, and still remembers, and wants back. Maybe you haven't experienced those things, not like these people you so casually dismiss have.


> The world is deglobalizing.

We have had thousands of years of globalising. The trend has always been towards a more connected world. I strongly suspect the current Trump movement (and to an extent brexit depending on which brexit version you chose to listen to) will be blips in that continued trend. That is because it doesn't make sense for there to be 200 countries all experts in microchip manufacturing and banana growing.


>We have had thousands of years of globalising.

It happens in cycles. Globalization has followed deglobalization before and vice versa. It's never been one straight line upward.

>That is because it doesn't make sense for there to be 200 countries all experts in microchip manufacturing and banana growing.

It'll break down into blocs, not 200 individual countries.

Ask Estonia why they buy overpriced LNG from America and Qatar rather than cheap gas from their next door neighbor.

If you think the inability to source high end microchips from anywhere apart from Taiwan is going to prevent a future conflict (the Milton Friedman(tm) golden arches theory) then I'm afraid I've got bad news.


>It's never been one straight line upward.

Agree, but I never said it was.

>If you think the inability to source high end microchips from anywhere apart from Taiwan is going to prevent a future conflict (the Milton Friedman(tm) golden arches theory) then I'm afraid I've got bad news.

Why are you saying that? Again, I didn't suggest that.


But it doesn't make sense to be dependent on your enemies either.


Much of the globalized system is dependent upon US institutions which currently don't have a substitute.

BRICS have been trying to substitute for some of them and have made some nonzero progress, but they're still far, far away from stuff like a reserve currency.


Yeah you need a global navy that can assure the safe passage of thousands of ships daily. Now, how do you ensure that said navy will protect your interests? Nothing is free.


LLMs are the least deterministic means you could possibly ever have for automation.

What you are truly seeking is high level specifications for automation systems, which is a flawed concept to the degree that the particulars of a system may require knowledgeable decisions made on a lower level.

However, CAD/CAM, and infrastructure as code are true amplifiers of human power.

LLMs destroy the notion of direct coupling, or of having any layered specifications or actual levels involved at all: you prompt a machine trained to ascertain important datapoints for a given model itself, when the correct model is built up with human specifications and intention at every level.

Wrongful roads lead to erratic destinations when it turns out that you actually have some intentions you wish to implement IRL.


If you give the same subject to two different journalists, or even the same one under different "temperature" settings - say, whether he had lunch or not, or he's in different moods - the outputs and approaches to the subject will be completely different, totally nondeterministic.

But that doesn't mean the article they wrote in each of those scenarios is not useful and economically valuable enough for them to maintain a job.


If you want to get to a destination you use google maps.

If you want to reach the actual destination because conditions changed (there is a wreck in front of you) you need a system to identify changes that occur in a chaotic world and can pick from an undefined/unbounded list of actions.


Until we solve the hallucination problem, Google Search still has a place of power as something that doesn't hallucinate.

And even if we solve this problem of hallucination, the ai agents still need a platform to do search.

If I was Google I’d simply cut off public api access to the search engine.


>google search still has a place of power as something that doesn’t hallucinate.

Google Search is fraught with its own list of problems and crappy results. Acting like it's infallible is certainly an interesting position.

>If I was Google I’d simply cut off public api access to the search engine.

The convicted monopolist Google? Yea, that will go very well for them.


LLMs are already grounding their results in Google searches, with citations. They have been doing that for a year already; it's optional with all the big models from OpenAI, Google, and xAI.


And yet they still hallucinate and offer dead links. I've gotten wrong answers to simple historical event and people questions with sources that are entirely fabricated and referencing a dead link to an irrelevant site. Google results don't do that. This is why I use LLM's to help me come up with better searches that I perform and tune myself. That's valuable, the wordsmithing they can do given their solid word and word part statistics.


Is that using the state of the art reasoning models with Google search enabled?

OpenAI o3

Gemini 2.5 Pro

Grok 3

Anything below that is obsolete or dumbed down to reduce cost

I doubt this feature is actually broken and returning hallucinated links

https://ai.google.dev/gemini-api/docs/grounding


People talk about LLM hallucinations as if they're a new problem, but content mill blog posts existed 15 years ago, and they read like LLM bullshit back then, and they still exist. Clicking through to Google search results typically results in lower-quality information than just asking Gemini 2.5 pro. (which can give you the same links formatted in a more legible fashion if you need to verify.)

What people call "AI slop" existed before AI and AI where I control the prompt is getting to be better than what you will find on those sorts of websites.


What's the alternative here? Apart from the well-known but not so useful advice to have a ton of friends who can hire you, or to be so famous as to not need an introduction.


There isn't one. However, every dumb thing in the world is a call to action. Maybe you can show how to do things going forward :)


Why is this a worry? Sounds wonderful


I'm a bit worried about the social impacts.

When a sector collapses and become irrelevant, all its workers no longer need to be employed. Some will no longer have any useful qualifications and won't be able to find another job. They will have to go back to training and find a different activity.

It's fine if it's an isolated event. Much worse when the event is repeated in many sectors almost simultaneously.


> They will have to go back to training

Why? When we've seen a sector collapse, the new jobs that rush in to fill the void are new, never seen before, and thus don't have training. You just jump in and figure things out along the way like everyone else.

The problem, though, is that people usually seek out jobs that they like. When that collapses they are left reeling and aren't apt to want to embrace something new. That mental hurdle is hard to overcome.


What if no jobs, or fewer jobs than before, rush in to fill the void this time? You only need so many prompt engineers when each one can replace hundreds of traditional workers.


> What if no jobs, or fewer jobs than before, rush in to fill the void this time?

That means either:

1. The capitalists failed to redeploy capital after the collapse.

2. We entered into some kind of post-capitalism future.

To explore further, which one are you imagining?


The capitalists are failing to redeploy capital today. That's why they have been dumping it into assets for years. They have too much capital and dwindling things they can do with it. AI will skyrocket their capital reserves. There has been no good mechanism for equalizing this since the Nixon years.


> They have too much capital and dwindling things they can do with it.

Yes, we've had full employment for a long, long time. But the idea here is that AI will free up labor that is currently occupied doing something else. If you are trying to say it will fail to do that, that may be true, but if so this discussion is moot.


As others in this thread have pointed out, this is basically what happened in the relatively short period of 1995 to 2015 with the rise of global wireless internet telecommunications & software platforms.

Many, many industries and jobs transformed or were relegated to much smaller niches.

Overall it was great.


Man 1995, what a world that was. Seemed like a lot less stress.


Good thing that we have AI tools that are tireless teachers


Making dumb processes dumber to the point of failure is actually a feature.


Funny: you call it value, I call it inefficiency.



