During the 1990s dotcom boom we massively overbuilt fiber networks. It was indiscriminate and most of the fiber was never lit.
After the dotcom crash, much of this infrastructure became distressed assets that could be picked up for peanuts. This fueled a large number of new startups in the aftermath that built business models around effectively leveraging all of this dark fiber, since they didn't have to pay the huge capital costs of building it out themselves. At the time, you could essentially assemble a nationwide fiber network for a few million dollars if you were clever, and people did.
These new data centers will find a use, even if it ends up being by some startup who picks it up for nothing after a crash. This has been a pattern in US tech for a long time. The carcass of the previous boom's whale becomes cheap fuel for the next generation of companies.
I can barely get 50Mbps up/down, and Xfinity is the only option in this area. No fiber; I would pay for it, but here we are. 2025 in the good ol' USA. In an urban area, too.
When I moved from Eastern Europe (where I'd had 1 Gbit/s to my apartment for years) to the UK a few years ago, I was surprised that "the best" internet connection I could get was a roughly 40 Mbit/s phone line. But it's a small town, and over the past few years even we have gotten fiber of up to 2 Gbit/s.
I'm surprised the US still has the issues you mentioned. Have you considered Starlink (fuck Musk, but the product is decent) or alternatives?
One is, of course, the size of the country, but that's hardly an "excuse." It does contribute though.
The other big reason is lack of competition in the ISP space, and this is compounded by a distinctly American captured system where the owners/operators of the "public" utility poles shut out new entrants and have no incentive to improve the situation.
Meanwhile, the nationwide regulatory agencies have been stripped down and the courts have de-toothed them, reducing the likelihood of top-down reform, and these sorts of problems often end up running into the local and state government vs. national government split that is far more severe in the US.
So it's one of those problems that is surprising to some degree, but when you read about things like public utility telephone poles captured by corporate interests, it's also distinctly ridiculous and American, and not surprising at all.
I think this is because the infra is already built, so there is no incentive to upgrade: you won't get more customers, aside from maybe taking them from the competition. AFAIK even the latter might be an issue, because typically you get whatever provider is in the given building, so an upgrade won't win the provider any new customers.
La ti da. My 50Mbps in an urban area doesn't even provide 10Mbps up.
> In an urban area too.
Funnily enough, my farmland has gigabit service.
But I, unfortunately, don't live there. Maybe some day I'll figure out how to afford to build a house on that land. But, until then, shitty urban internet it is, I guess.
Telcos lay fiber for free up to the basement electrical closets; getting from the closets to each unit is on the landlords, and getting from each unit to the actual wall outlets needs arrangements with tenants. Sometimes ISPs subsidize that cost, but lots of arrangements still need to be made.
For one entire rented or owned house, it's just a call and a drill away.
Re hype: Why is it that so many people are completely obsessed with replacing all developers and any other white-collar job? They seem to be totally convinced that this will happen. 100%
To me, this all sounds like an “end-of-the-world” nihilistic wet dream, and I don’t buy the hype.
I'm afraid that this might sound flippant, but the answer to your question comes through another question - why were early 19th century industrialists obsessed with replacing textile workers? Replacing workers with machines is not a new phenomenon and we have gone through countless waves of social upheaval as a result of it. The debate we're currently having about AI has been rehearsed many, many times and there are few truly novel points being made.
If you want to understand our current moment, I would urge you to study that history.
Programmers are going to be replaced by AI in the same way accountants got replaced by VisiCalc, engineers by CAD, and mathematicians by calculators and software like Mathematica.
Calculators didn't replace mathematicians; they replaced Computers (as an occupation), to the point that most people don't even know it used to be a job done by people.
I say calculators, but there is a blurry line between early electronic computers and calculators. Portable electronic calculators also replaced the slide rule around the late 1970s; it had been the instrument of choice for engineers for around 350 years!
Same reason so many people got excited in the early Internet days of how much work and effort could be saved by interconnecting everyone. So many jobs lost to history due to such an invention. Stock trading floors no longer exist, call centers drastically minimized, retail shopping completely changing, etc.
I had the same thought you did back then. If I could build a company with 3 people pulling a couple million dollars of revenue per year, what did that mean for society, when the average before that was maybe a couple dozen folks?
Technology concentrates gains to those that can deploy it - either through knowledge, skill, or pure brute force deployment of capital.
There's a lot of non-engineering people who are very happy to see someone else get unemployed by automation for a change. The people who formerly were automating others out of a job are getting a taste of their own medicine.
I am not an engineer and I expect my white collar job to be automated.
The reason to be economically excited about this is that, if it happens, it will be massively deflationary. Pretending CEOs are just going to pocket the money is economically stupid.
Being able to use a super intelligence has been a long time dream too.
What is depressing is the amount of tech workers who have no interest in technological advancement.
I'm not sure exactly what you mean by deflationary, but in general deflation in an economy is a very bad thing. The most significant period of economic deflation in the US was 1930-1933, i.e., the Great Depression, and the most recent period was the Great Recession.
And since when do business executives NOT pocket the money? Pretty much the only exception is when they reinvest the savings into the business for more growth, but that reinvestment and growth is usually only something the rest of us care about if it involves hiring.
> that would cause a tremendous drop in demand for the services the schadenfreude folks provide, hurting them as well
You're correct. But it doesn't matter. Remember the San Francisco protests against tech? People will kill a golden goose if it's shinier than their own.
> If this goose is also pricing others out of housing market it's not entirely unreasonable
It's self-defeating but predictable. (Hence why the protests were tolerated, even backed, by NIMBY interests.)
My point is the same nonsense can be applied to someone not earning a tech wage celebrating tech workers getting replaced by AI. It makes them poorer, ceteris paribus. But they may not understand that. And the few that do may not care (or may have a way to profit off it, directly or indirectly, such that it's acceptable).
I don't quite follow. What exactly have the non-tech people of San Francisco gotten from all the tech people working there? How did they become richer (OK, apart from landlords), or how would they become poorer if the tech workers lost their jobs?
CEOs run every major media outlet and public platform for communication; people who hype AI will get their content promoted and will see more success, which creates an incentive to create such content.
This doesn't even require any "conspiracy" among CEOs, just people with a vested interest in AI hype who act in that interest, shaping the type of content their organizations will produce. We saw something lesser with the "return to office" frenzy, which happened just because many CEOs realized a large chunk of their investment portfolios was in commercial real estate. That was only less hyped because, I suspect, there were larger numbers of CEOs with an interest in remaining remote.
Outside of the tech scene, AI is far less hyped and in places where CEOs tend to have little impact on the media it tends to be resisted rather than hyped.
I don’t think software developer is a white collar job. It’s essentially manufacturing. There are some white collar workers at the extremes but the overwhelming majority of programmers are doing the IT equivalent of building pickup trucks.
> Why is it that so many people are completely obsessed with replacing all developers and any other white-collar job?
For the same reason people are obsessed with replacing all blue-collar jobs. Every cent that a company doesn't have to spend on its employees is another cent that can enrich the company's owners.
The general view in my bubble was that blue-collar jobs are seen as dumb, physically demanding, and dangerous, so we're kind of replacing them for the workers' own good, so that they can do something intellectual (aka learn coding). Whereas intellectual labour is kind of what humans exist for, so making intellectual work redundant is truly the end of the world.
Maybe it's my post-communist background though and not relevant for the rest of the world
Nobody is obsessed with it. People are afraid of it. And yet, what will you do? Will you refuse to adopt a tool that can make your work or someone else's work faster, easier, better? It's a trap: once you've seen the possibilities you can't go back; and if you do, you'll have to compete with those who keep using the new tools. Even if you know perfectly well that in a few years the tools will make your own job useless.
Personally, however, I would find it possibly even more depressing to spend my day doing a job that has economic value only because some regulation prevents it being done more efficiently. At that point I'd rather get the money anyway and spend the day at the beach.
> Personally, however, I would find it possibly even more depressing to spend my day doing a job that has economic value only because some regulation prevents it being done more efficiently.
That's true for many jobs. The only reason many people have a job is because of a variety of regulations preventing that job from being outsourced.
> At that point I'd rather get the money anyway and spend the day at the beach.
You won't get the money and spend the day at the beach; you'll starve to death.
I'm not convinced that there are that many jobs that can be effectively outsourced- locality is an important factor even for jobs that can in theory be performed fully remotely. I also don't see that many barriers to outsourcing or offshoring in general.
In any case, there's also a difference between two ideas. One is that it could be me or another person doing the same job; maybe that person can be paid less because of their lower cost of living, but in the end they will put in the same effort as I do. The other is that a tool can do the job effortlessly, and the only reason I have to suffer over it is to justify a salary that has no reason to exist. Then again, just force the company to pay me while allowing them to use whatever tool they want to get the job done.
That's only possible if you as the worker are capturing the efficiencies that the automation provides (i.e. you get RSUs, or you have an equity stake in the business).
Believe it or not, most SWEs, and white-collar workers in general, don't get these perks, especially outside the US, where most firms have made sure tech workers are paid "standard wages" even if they are "good".
I mean, if the state can pass a law that forces companies to employ a person instead of an LLM, then it can also pass a law that forces them to pay that person while the LLM does their job. Companies would prefer that for sure: instead of being stuck with the worker and the bad performance, they would at least get the nice LLM performance.
Nobody wants to be unemployed, but people generally love the idea of getting what they want without having to interact with, let alone pay, other people.
Say you have a brilliant idea but unfortunately don't have any hard skills. Now you don't have to pay enormous sums of money to geeks, or suffer them, to make it come true. Truly a dream!
Ok, but who benefits from these efficiencies? Hint: not the people losing their jobs. The main people that stand to benefit from this don't even need to work.
Producing things cheaper sounds great, but just because something is produced cheaper doesn't mean it is cheaper for people to buy.
And it doesn't matter if things are cheap if a massive number of people don't have incomes at all (or even a reasonable way to find an income - what exactly are white collar professionals supposed to do when their profession is automated away, if all the other professions are also being automated away?)
Sidenote btw, but I do think it's funny that the investor class doesn't think AI will come for their role.
To me the silver lining is that I don't think most of this comes to pass, because I don't think current approaches to AGI are good enough. But it sure shows some massive structural issues we will eventually face
> I do think it's funny that the investor class doesn't think AI will come for their role.
Investors don't perform work (labour); they take capital risk. An AI does not own capital, and thus cannot "take" that role.
If you're asking about the role of a manager of investments, that's not an investor - that's just a worker, who can and will be automated eventually. Robo-advisors are already quite common. The owner of capital can use AI to "think" for them in choosing what capital risk to take.
And as for the massive number of people who don't have income - I don't think that will come to pass either (just as you don't think AGI will come to pass), mostly because the pace of this automation will decline; it's not that trivial to do. The low-hanging fruit will be picked ASAP, and the difficult cases left will take ages to automate.
Unless the company's output is pure, raw intellectual property, there's still going to be a need for some sort of input material, and investors' capital is needed for such upfront purchases, not to mention plant and equipment. Even in the pure intellectual-property case, running the AI itself would require upfront capital.
And if a single bootstrapped "investor" can support such a company, that's an even better world than today, isn't it? It means everyone has a chance at breaking out with a successful company/product.
It's allowed our lifestyle for a short period of time, and for a small number of people. It's not given to poor people, to people in places we've bombed or extracted resources from, or to people in the future, since it's destroying the planet.
We're all far closer to poor than to having enough capital to live off of efficiency increases. AI is the last thing the capitalist class requires to finally throw off the shackles of humanity, of keeping the filthy masses around for their labor.
This is the one thing I believe capitalism, at some level, works for: invest capital to build something that gives a competitive advantage over other capital, like buying a newer, bigger factory that allows producing more for cheaper to compete with others.
This is exactly it. I was talking to my wife about this just this morning. She's a sociology researcher, and a lot of the people who work in her organization are responsible for reading through interviews and doing something called coding, where you look for particular themes and then tag passages with a name associated with each theme. This is something people spend a lot of hours on, along with interview transcription, which is also done by hand.
And I was explaining that I work in tech, so I live in the future to some degree, but that ultimately, even with HIPAA and other regulations, there's too much of a gain here for this not to be deployed eventually, and those people and their time are going to be used differently when that happens. I was speculating that it could be used for conducting interviews as well, but I'm less confident there.
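To make that concrete, here is a minimal sketch of what LLM-assisted qualitative coding could look like. Everything here is hypothetical: the model name, the codebook, and the prompt are placeholders, and a real deployment would need de-identified transcripts to stay on the right side of HIPAA.

    # Hypothetical sketch: tag an interview excerpt with the best-fitting
    # theme from a predefined codebook. Codebook and model are placeholders.
    from openai import OpenAI

    client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

    CODEBOOK = ["access to care", "cost burden", "trust in providers"]

    def code_excerpt(excerpt: str) -> str:
        response = client.chat.completions.create(
            model="gpt-4o-mini",  # illustrative model name
            messages=[
                {"role": "system",
                 "content": "Tag the excerpt with the single best-fitting theme "
                            f"from this codebook: {', '.join(CODEBOOK)}. "
                            "Reply with the theme name only."},
                {"role": "user", "content": excerpt},
            ],
        )
        return response.choices[0].message.content.strip()

    print(code_excerpt("I skipped my follow-up because the copay was too high."))

A human coder would still spot-check the tags, but the hours-long first pass is exactly the kind of toil this could absorb.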
> Re hype: Why is it that so many people are completely obsessed with replacing all developers and any other white-collar job? They seem to be totally convinced that this will happen. 100%
Because the only thing that gets the executive class hornier than new iPhone-tier products is getting to lay off tons of staff. It sends the stock price through the roof.
It follows from there that an iPhone-tier product that also lets them lay off tons of staff would be like fucking catnip to them.
The job description of a developer is to replace themselves by automating their own work so they can get promoted or find a new, more relevant role. That's the point of compilers and new programming languages.
There's no such thing as taking people's jobs, nobody and nothing is going to take your job except for Jay Powell, and productivity improvements cause employment to increase not decrease.
> Why is it that so many people are completely obsessed with replacing all developers and any other white-collar job?
> They seem to be totally convinced that this will happen.
The two groups of people are not the same. I, for example, belong to the 2nd but not the 1st. If you have used the current generation of LLM coding tools, you will realize they have gotten scary good.
Because white collar salaries are extremely high, which makes the services of white collar workers unavailable to many.
If you replace lawyers with AI, poor people will be able to take big companies to court and defend themselves against frivolous lawsuits, instead of giving in and settling. If you replace doctors, the cost of medicine will go down dramatically, and so will waiting times. If you replace financial advisors, everybody will have their money managed in an optimal way, making them richer and less likely to make bad financial decisions. If you replace creative workers, everybody will have access to the exact kind of music, books, movies and video games they want, instead of having to settle for what is available. If you automate away delivery and drivers (particularly with drones), the price of prepared food will fall dramatically.
> completely obsessed with replacing all developers
I’m paid about 16x what an electronics engineer makes. Salaries in IT are completely unrelated to the person’s effort compared to other white-collar jobs. It would take some manager an entire career to reach what I made after 5 years. I may be 140 IQ, but I’m also a dumbass in social terms!
That's cool. It sounds like most of us aren't making what you make. I don't make 16x what someone paid minimum wage makes, much less an electrical engineer.
Especially outside the US, where having a 140 IQ isn't really enough to earn a high wage. Only social EQ and high capital do that in most of the world.
I’m just a founder, and that’s the dividends from a currently-lucky bootstrapped startup (and that will change). I may be an outlier, but employee salaries are quite high too; still enough to be worth replacing, and AI is nowhere near humans, really.
Imagine a world where there is 10x as much wealth and 10x as many hard problems being solved. Suppose there's even a 5% chance of that happening. It's clearly worth doing.
The dream of many business owners is running their business with no products, no employees, and no customers, where they can just collect money. AI promises to fulfil this dream. AIs selling NFTs to other AIs paying in crypto is the final boss of capitalism.
Notice how it wasn't and isn't a big deal when it's not in your own back yard (i.e. destroying blue-collar professions); our chickens have just come home to roost. It's amazing the number of gullible, naive nerds out there who can't or won't see the forest for the trees. The number of ancap-lite libertarians drops precipitously when it's their own livelihood getting its shit kicked in.
Coming from a very blue-collar background and family, I found one of the more annoying parts of being in the tech community was listening to the tone-deafness of these sorts of folks.
It's difficult to have much empathy for the "learn to code" crowd, who seemingly got an almost palpable sense of joy out of watching those jobs and lifestyles get destroyed. Almost some form of high-school revenge-fantasy stuff - the nerd finally gets one up on the prom king. Otherwise I'm not sure where the vitriol came from. There were way too many private conversations and overheard discussions in the office for me to think these were isolated opinions.
That said, it's not everyone in tech. Just a much larger percentage than I ever thought, which is depressing to think about.
It's certainly been interesting to watch some folks who a decade ago were all about "only skills matter, if you can be outcompeted by a robot you deserve to lose your job" make a 180 on the whole topic.
The rest of the world has not caught up to current LLM capabilities. If it all stopped tomorrow and we couldn't build anything more intelligent than what we have now, there would still be years of work automating away toil across various industries.
My experience using LLM-powered tools (e.g. Copilot in agent mode) has been underwhelming. Like, shockingly so: not cd-ing to the dir where a script is located and getting lost, disregarding my instructions to run ./tests.ps1 and running `dotnet test` instead, writing syntactically incorrect scripts and failing to correct them, and particularly getting overwhelmed by verbose logs. Sometimes it even fails to understand the semantic meaning of my prompts.
Whereas my experience describing my problem and actually asking the AI is much, much smoother.
I'm not convinced the "LLM+scaffolding" paradigm will work all that well. Sanity degrades with context length, and even the models with huge context windows don't seem to use them all that effectively. RAG searches often give lackluster results. The models fundamentally seem to do poorly at using commands to accomplish tasks.
I think fundamental model advances are needed to make most things more than superficially automatable: better planning and goal-directed behavior, a more organic connection to RAG context, automatic gym synthesis, and RL-based fine-tuning (that holds up to distribution shift).
I think that will come, but if LLMs plateau here, they won't have much more impact than Google Search did in the '90s.
As long as liability is clearly assigned, it doesn't have an economic impact. The ambiguity of liability is what creates negative economic impact. Once it's assigned initially through law, then it can be reassigned via contract in exchange for cash to ensure the most productive outcome.
e.g. if OpenAI is responsible for any damages caused by ChatGPT, then the service shuts down until you waive liability, and then it's back up. Similarly, if companies are responsible for the chatbots they deploy, then they can buy insurance, put up guardrails around the chatbot, or not use it.
I'm one of those people who thinks simultaneously that (a) current AI cannot replace developers, it just isn't good enough (and I don't think it's good for it to write much code), and (b) AI is simply an incredible invention and will go down as one of the top 5 or 10 in history.
I've said the same thing as you, that there is a LOT left to be done with current AI capabilities, and we've barely scratched the surface.
I'm always surprised by the number of people posting here that are dismissive of AI and the obvious unstoppable progress.
Just looking at what happened with chess, Go, strategy games, protein folding, etc., it's obvious that pretty much any field/problem that can be formalised and cheaply verified - e.g. mathematics, algorithms, etc. - will be solved, and that it's only a matter of time before we have domain-specific ASI.
I strongly encourage everyone to read about the bitter lesson [0] and verifier's law [1].
Your examples are not LLMs, though, and don't really behave like them at all. If we take the chess analogy and design an "LLM-like chess engine", it would behave like an average 1400 London spammer, not like Stockfish, because it would try to play like the average human in its database.
It isn't entirely clear what problem LLMs are solving and what they are optimizing towards... They sound humanlike and give some good solutions to stuff, but there are so many glaring holes. How are we so many years and billions of dollars in, and I still can't reliably play a coherent game of chess with ChatGPT, let alone have it be useful?
>because it would try to play like the average human in its database.
Why would it play like the average? LLMs pick tokens to try to maximize a reward function; they don't just pick the most common word from the training data set.
Maybe you didn't realise that LLMs have just wiped out an entire class of problems, maybe entire disciplines. Do you remember "natural language processing"? What, ehm, happened to it?
Sometimes I have the feeling that what happened with LLMs is so enormous that many researchers and philosophers still haven't had time to gather their thoughts and process it.
I mean, shall we have a nice discussion about the possibility of "philosophical zombies"? On whether the Chinese room understands or not? Or maybe on the feasibility of the mythical Turing test? There's half a century or more of philosophical questions and scenarios that are not theory anymore; maybe they're not even questions anymore, and almost from one day to the next.
How is NLP solved, exactly? Can LLMs reliably (that is, with high accuracy and high precision) read, say, literary style from a corpus and output tidy data? Maybe if we ask them very nicely it will improve the precision, right? I understand what we have now is a huge leap, but the problems in the field are far from solved, and honestly BERT has more use cases in actual text analysis.
"What happened with LLMs" is what exactly? From some impressive toy examples like chatbots we as a society decided to throw all our resources into these models and they still can't fit anywhere in production except for assistant stuff
> Can LLMs reliably (that is, with high accuracy and high precision) read, say, literary style from a corpus and output tidy data?
I think they have the capability to do it, yes. Maybe it's not the best tool you can use- too expensive, or too flexible to focus with high accuracy on that single task- but yes you can definitely use LLMs to understand literary style and extract data from it. Depending on the complexity of the text I'm sure they can do jobs that BERT can't.
> they still can't fit anywhere in production
Not sure what you mean by "production", but there's an enormous number of people using them for work.
People assume (rightly so) that the progress in AI should be self-evident. If the whole thing is really working that great, we should expect to see real advances in these fields. Protein-folding AI should lower the prices of drugs and create competitive new treatments at an unprecedented rate. Photo and video AI should be enabling film directors and game directors to release higher-quality content faster than ever before. Text AI should be spitting out Shakespeare-toppling opuses on a monthly basis.
So... where's the kaboom? Where's the giant, earth-shattering kaboom? There are solid applications for AI in computer vision and sentiment analysis right now, but even these are fallible and have limited effectiveness when you do deploy them. The grander ambitions, even for pared-back "ASI" definitions, are just kicking the can further down the road.
The kaboom already happened on user-generated media platforms. YouTube, Facebook, TikTok, and so on are flooded with AI-generated videos, photos, sounds, and so on. The sheer volume of this low-quality slop exists because AI lowered the barrier to entry for creating content. In this space, progress is happening not by pushing the upper bound of quality higher but by driving the cost of minimal quality down to near zero.
Another perspective for the kaboom is search and programming tasks for the average person.
For the average consumer, LLM chatbots are infinitely better than Google at search-like tasks, and in effect solve that problem. Remember when we had to roll our eyes at dad because he asked Google "what are some cool restaurants?" instead of "nice restaurants SF 2018 reddit"? Well, that is over, he can ask that to ChatGPT and it will make the most effective searches for him, aggregate and answer. Remember when a total noob had to familiarize himself with a language by figuring out hello world, then functions, etc? Now it's over, these people can just draft a toy example of what they want to build with Cursor instantly, tell it to make everything nice and simple, and then have ChatGPT guide them through what is happening.
In some industries you just don't need that much more code quality than what LLMs give you. A quick .bat script doesn't need you to know the best implementation of anything, and neither does a Python scraper using only the stdlib, but these were locked behind programming knowledge before LLMs.
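For a feel of what I mean, here's a minimal sketch of that kind of stdlib-only scraper; the URL is just a placeholder:

    # Fetch a page and print its links using only the standard library.
    from urllib.request import urlopen
    from html.parser import HTMLParser

    class LinkParser(HTMLParser):
        def handle_starttag(self, tag, attrs):
            if tag == "a":
                for name, value in attrs:
                    if name == "href":
                        print(value)

    html = urlopen("https://example.com").read().decode("utf-8", errors="replace")
    LinkParser().feed(html)

Nothing clever, no third-party packages, and exactly the sort of thing an LLM can now produce for someone who has never heard of HTMLParser.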
> I'm always surprised by the number of people posting here that are dismissive of AI and the obvious unstoppable progress
Many of us have been through previous hype cycles like the dot-com boom and have learned to be skeptical. Some of that learning has been "reinforced" by layoffs in the ensuing busts (reinforcement learning). A few claims in your note, like "it's only a matter of time before we have domain-specific ASI", are jarring, as you are "assuming the sale". LLMs are great as a tool for some use cases - nobody denies that.
The investment dollars are creating a class of people who are fed by those dollars and have an incentive to push the agenda. The skeptics, in contrast, have no ax to grind.
Well, I was hedging a bit because I try not to overstate the case, but I'm just as happy to say it outright: LLMs can't reason, because that's not what they're built to do. They predict what text is likely to appear next.
But even if they can appear to reason, if it's not reliable, it doesn't matter. You wouldn't trust a tax advisor that makes things up 1/10 times, or even 1/100 times. If you're going to replace humans, "reliable" and "reproducible" are the most important things.
Frontier models like o3 reason better than most humans. Definitely better than me. It would wipe the floor with me in a debate - on any topic, every single time.
Frontier models went from not being able to count the number of 'r's in "strawberry" to getting gold at IMO in under 2 years [0], and people keep repeating the same clichés such as "LLMs can't reason" or "they're just next token predictors".
At this point, I think it can only be explained by ignorance, bad faith, or fear of becoming irrelevant.
> At this point, I think it can only be explained by ignorance, bad faith, or fear of becoming irrelevant.
Based on the past history with FrontierMath and AIME 2025 [1][2], I would not trust announcements that can't be independently verified. I am excited to try it out, though.
Also, the performance of LLMs was not even at bronze level [3].
Finally, this article shows that LLMs were mostly just bluffing [4].
It's very different from chess etc. If we could formalise and "solve" software engineering precisely, it would be really cool, and probably indeed just lift programming to a new level of abstraction.
I don't mind if software jobs move from writing software to verifying software either if it makes the whole process more efficient and the software becomes better as a result. Again, not what is happening here.
What is happening, at least in AI optimist CEO minds is "disruption". Drop the quality while cutting costs dramatically.
I mentioned algorithms, not software engineering, precisely for that reason.
But the next step is obviously increased formalism via formal methods, deterministic simulators, etc., basically so that one could define an environment for an RL agent.
It's unlikely that LLMs are gonna get us there, though. They've ingested all relevant data at this point, and the net effect might very well kill future sources of quality data.
How is, e.g., Stack Overflow gonna stay alive if the next generation of programmers relies mainly on Copilot and vibe coding? And what will the LLMs scrape once it's gone?
I'm surprised too. You'd think tech people would understand what's going on. But of the prior replies to your comment 7 out of 8 seem dismissive. I'm in the "obvious unstoppable progress" camp but we seem to be a minority.
I guess maybe it isn't that obvious - I've read quite a lot in the area. People saying LLMs aren't very good are a bit like people long ago saying chess programs weren't very good. It was true, but there was an inevitable advance as the hardware got better, which led to enthusiasm to improve the software, and computers became better than humans in a rather predictable way. It's driven in the end by hardware improvements. Whether the software is an LLM or some other algo is kind of unimportant.
Mathematics cannot be "solved", that's a consequence of Gödel's First Incompleteness Theorem.
It can already be "cheaply verified" in the sense that if you write a proof in, say, Lean, the compiler will tell you if it's valid. The hard part is coming up with the proof.
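For instance, a minimal sketch of what that mechanical checking looks like in Lean 4, using Nat.add_comm from the core library; if the proof term didn't actually prove the stated theorem, elaboration would simply fail:

    -- Lean checks this mechanically: a wrong proof term fails to compile.
    theorem my_add_comm (a b : Nat) : a + b = b + a :=
      Nat.add_comm a b

The verifier is cheap; the creative work is entirely in producing the term on the last line.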
It may be possible that some sort of AI at some stage becomes as good, or even better than, research mathematicians in coming up with novel proofs. But so far it doesn't look like it - LLMs seem to be able to help a little bit with finding theorems (e.g. stuff like https://leansearch.net/), but to my understanding they are rather poor beyond that.
On the surface this is a great achievement - if it holds. AlphaGeometry required 1) human formalization of the question and 2) a solver for geometry.
If the questions were given as-is (without a human formalizing them), the LLM didn't need domain solvers, and the LLM was not already trained on them (as happened with FrontierMath), I would be impressed.
Based on the past history with FrontierMath [1][2], I remain skeptical. The skeptic in me says this happens right before big announcements (GPT-5) to create hype.
Finally, this article shows that LLMs were just bluffing on USAMO 2025 [3].
I live next to an abandoned building from the Spanish property boom. It's now occupied illegally. The hype's over, yet the consequence is staring at me every day. I'm sure it'll eventually be knocked down or repurposed, yet it would have been better had the misallocation never happened.
I bought a flat in the Spanish property boom. It was empty a while, with ~80% of flats in the area vacant; then I had a squatter, now kicked out. Now most of the property is occupied, and the Spanish government is bringing in huge restrictions to ease the property shortage. These things go in cycles. The boom and bust isn't very efficient, but there you go.
Very cheap game consoles and VR headsets. Unironically that could really help world peace and QOL: less news and doomscrolling, and people would have an outlet for stress, anger, and boredom.
That's a subset of gamers, specifically gamers who actively want hypersexualized characters, still haven't figured out how to ignore irrelevant products, and still choose to be angered if some product doesn't fit their requirements (despite no shortage of games that would fit the bill).
The data are easily accessible, and the target group has little political power and is seen as problematic and about the only remaining legitimate target for negative discourse.
Imagine the reception that studies of female aggression get.
That would be $75 billion combined for 2025. A drop in the bucket.
--
> From 2013 to 2020, cloud infrastructure capex rose methodically—from $32 billion to $119 billion. That's significant, but manageable. Post-2020? The curve steepens. By 2024, we hit $285 billion. And in 2025 alone, the top 11 cloud providers are forecasted to deploy a staggering $392 billion—MORE than the entire previous two years combined.
On the other hand, drug discovery sounds like a candidate for really benefiting from AI. To fuel AI model development, maybe there has to be all the garbage that comes with AI.
Drugs to cure the diseases caused by your environment. It's not as if people are suddenly going to be making perfect decisions (e.g. never getting a sunburn, not eating meat, avoiding sugary foods).
>Drugs to cure the diseases caused by your environment
Humans have so far completely failed to develop any drug with minimal side effects that cures lifestyle diseases; it's magical thinking to believe AI can definitely do it.
Everything has side effects. In this case we have three pretty good interventions, Ozempic, FMT, and telling people to drink Coke Zero. The worst "side effect" is just that the first two are expensive.
Oh, in this case GP seems to be including sunscreen as a treatment for lifestyle diseases. Pretty sure those don't have side effects, but Americans don't get the good ones.
Is your objection just over the word "cure"? Because hypertension, depression, arthritis, asthma are a few in an absurdly long list of lifestyle diseases that use drugs as a primary method of treatment.
So all these things that skyrocketed in the span of 75 years are immutable facts of life, but magic drugs are somehow in the realm of possibility?
What's easier: educating your people and feeding them well to build a strong and healthy nation, OR letting them rot and shoveling billions to pharma corps in the hope of finding a magic cure?
>skyrocketed in the span of 75 years are immutable facts of life
A number of them seem to have skyrocketed with quality of life and personal wealth. I suspect my ancestors were skinny not because they were educated on eating well but because they lacked the same access to food we have in modern society, especially super caloric ones. I don't super want to go back to an ice cream scarce world. Things like meat consumption are linked to colon cancer and most folk are unwilling to give that up or practice meat-light diets. People generally like smoking! Education campaigns got that down briefly but it was generally not because people didn't want to smoke, it's because they didn't want cancer. Vaping is clearly popular nowadays. Alcohol, too! The WHO says there is no safe amount of alcohol consumption and attributes lots of cancer to even light drinking. I suspect people would enjoy being able to regularly have a glass of wine or beer and not have it cost them their life.
Bullshit. Heart disease and cancer (and a long tail of medical problems) crop up with age and kill ~everyone inside ~100 years. If you think that environment and exercise can fix this, show me the person who is 200 years old.
We would have to 100x medical research spending before it was clearly overdone.
Obviously healthy habits prolong your life. Nobody argued otherwise. My contention is specifically with the idea that they matter to the exclusion of science and pharma. Healthy habits clearly hit a wall: if they didn't, we would have health and fitness gurus living to 200 and beyond by virtue of having a routine that could actually defeat cancer and heart disease. The absence of these 200 year old gurus indicates that no, 80% of cancer (and heart disease, everyone always forgets heart disease) cannot be avoided with diet and exercise. Hence, work on getting through the wall is valuable.
You can 1000x the research if you want; a person who is 50kg overweight, doesn't exercise, drinks alcohol, and lives next to a highway is statistically fucked no matter what. You'd need straight-up magic to undo the damage.
You're not going to fix lifestyle diseases with drugs, and lifestyle diseases are the leading cause of death.
If you have a GPU which you've used for AI training, but that's no longer valuable, you could sell that GPU; but then you'd incur taxable revenue.
If you destroy the GPU, you can write it off as a loss, which reduces your taxable income.
It's possible you could come out ahead by selling everything off, but then you'd have to pay expensive people to manage the sell-off, logistics, etc. What a mess. Easier to just destroy everything and take the write-off.
Capital equipment is depreciated over time. By the time you're selling it off, it's pennies on the dollar and a small recoupment of cost. Paying 30% (or whatever) in taxes on that small amount of income and keeping the other 70% is still better than zero dollars and zero taxes.
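Back-of-the-envelope, with entirely hypothetical numbers:

    # A GPU bought for $30,000, now fully depreciated (book value $0),
    # sold for "pennies on the dollar"; the whole sale price is taxable gain.
    sale_price = 1_500
    book_value = 0
    tax_rate = 0.30

    taxable_gain = sale_price - book_value
    after_tax = sale_price - taxable_gain * tax_rate
    print(after_tax)  # 1050.0: 70% of something still beats 100% of nothing

The exact tax treatment varies (depreciation recapture rules and all), but the direction of the comparison doesn't.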
It is a giant pain to sell off this gear if you are using in-house folks to do so. Usually it's not worth it, which is why things end up trashed, as you say. If I have a dozen 10-year-old servers to get rid of, it's usually not worth anyone's time or energy to list them for $200 on eBay and figure out shipping logistics.
However, at scale the situation and the numbers change: you can call in an equipment liquidator who can wheel out 500 racks full of gear at a time, and you get paid for the disposal on top of it. Usually a win/win, since you no longer have expensive people trying to figure out whom to call to get rid of it, how to do data destruction properly, etc. This helps the bottom line in almost all cases I've seen, on top of saving internal man-hours.
If you're in "failed startup being liquidated for asset value" territory, then the receiver/those in charge typically have a fiduciary duty to find the best reasonable outcome for the investors. It's rarely throwing gear with residual value in the trash. See: used Aeron chair market.
> that's no longer valuable, you could sell that GPU; but then you'd incur taxable revenue
Unless GPUs are like post-Covid used cars, you're going to sell them at a loss, which can be written off. Write-offs don't have to involve destroying the asset; I don't know where you got that idea.