Maybe someone can help me wrap my head around this in a different way, because here's how I see it.
If these tools are really making people so productive, shouldn't it be painfully obvious in companies' output? For example, if these AI coding tools really were an amazing productivity boost, we'd expect to see software companies shipping features and fixes faster than ever before. There would be a huge burst in innovative products and improvements to existing products. And we'd expect that to show up in ways obvious to customers and users, not just in the form of some blog post or earnings call.
For cost center work, this would lead to layoffs right away, sure. But companies that make and sell software should be capitalizing on this, and only laying people off when they get to the point of "we just don't know what to do with all this extra productivity, we're all out of ideas!". I haven't seen a single company in this situation. So that makes me think that these decisions are hype-driven, short-term thinking.
I wonder if some of this output will take a while to be visible en masse.
For example, I founded a SaaS company late last year which has been growing very quickly. We are on track to pass $1M ARR before the company's first birthday. We are fully bootstrapped, 100% founder owned. There are 2 of us. And we feel confident we could keep up this pace of growth for quite a while without hiring or taking capital. (Of course, there's an argument that we could accelerate our growth rate with more cash/human resources.)
Early in my career, at different companies, we often solved capacity problems by hiring. But my cofounder and I have been able to turn to AI to help with this, and we keep finding double digit percentage productivity improvements without investing much upfront time. I don't think this would have been remotely possible when I started my career, or even just a few years ago when AI hadn't really started to take off.
So my theory as to why it doesn't appear to be "painfully obvious": you've never heard of most of the businesses getting the most value out of this technology, because they're all too small. On average, the companies we know about are large. It's very difficult for them to reinvent themselves overnight to adapt to new technology - it takes a long time to steer a big ship - so it will take a while. But small businesses like mine can change how we work today and realize the results tomorrow.
Companies that needed to hire 10 people to grow only need to hire 9 now
In less than 5 years that’s going to be 6 or 7 people
I’m doing more with 5 engineers than I was able to do with 15 just 10 years ago
Part of that is that libraries etc. have matured too, but from a developer perspective we’ve reached the point where you don’t need to build new technologies; you just need to put what exists together in new ways
All the parts exist for any technology to be built, it’s about composition and distribution at this point
I think it's important to start with identifying your bottlenecks, and work from there to determine the solutions you need. In the case of our business, I feel that my time is best spent talking to customers and prospects. These discussions directly impact revenue, retention, product strategy, etc.
So then I start thinking ... what sort of things am I doing that take me away from talking to customers? I spend a lot of time on implementation. I spend a lot of time on administrative sales tasks (chasing people for meetings, writing proposals, negotiating contracts). I spend a lot of time on meeting prep and follow-up. And many more. So I'm always on the hunt for tools with a problem already in mind.
In terms of specific tools...
Claude is a great backbone for a lot, both the chatbot and the API. I use the chatbot to help me write proposals and review contracts. I used it to write scripts to automate our implementation process, which was once quite manual and is now a button click.
Cursor has been a game changer. In particular, it means that we spend very little time on bugfixes and small features. This keeps my CTO almost 100% focused on big picture needle-moving projects. We are now doing some research into things like Codex/Claude Code to see how we could improve this further.
Another app that I really love is called Granola. It automatically joins all of my meetings, writes notes, reminds me what promises I made, helps me write follow-up emails, and helps me prep for meetings.
Finally, we use an email client called Sedna (disclaimer: I used to work at Sedna) which is fully programmable. We've been building our own internal tooling (leveraging the Claude API) on top of Sedna to help automate different workflows. For example, my inbox is now perfectly prioritised. In many cases, when I receive emails from customers, an AI has already written a draft that I can review and send. I know there are a lot of out-of-the-box tools out there like Fyxer to help with things like this, but I've really appreciated the ability to get exactly what we want by building certain things ourselves.
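For anyone curious what the drafting piece looks like, here's a minimal sketch using the Anthropic Python SDK (the model name and prompts are placeholders, and the actual Sedna plumbing is specific to their API):

    import anthropic

    client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment

    def draft_reply(email_body: str) -> str:
        """Produce a reply draft for a human to review before sending."""
        message = client.messages.create(
            model="claude-sonnet-4-20250514",  # placeholder; pick whatever model fits
            max_tokens=600,
            system="You draft concise, friendly replies to customer emails. "
                   "Flag anything you are unsure about for human review.",
            messages=[{"role": "user", "content": email_body}],
        )
        return message.content[0].text

The draft just lands in the inbox as a suggestion; a human still reviews and hits send.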
Do existing teams (and ossified office politics) benefit from n-times-faster devs? I've seen (implied) Gantt charts shaped such that shrinking the dev activities wouldn't shrink the chart.
"shouldn't it be painfully obvious in companies' output?"
No.
The bottleneck isn't intellectual productivity. The bottleneck is a legion of other things: regulation, IP law, marketing, etc. The executive email writers and meeting attenders have a swarm of business considerations ricocheting around in their heads in eternal battle with each other. It takes a lot of supposedly brilliant thinking to safely monetize all the things, and many of the factors involved are not manifest in written form anywhere, often for legal reasons.
One place where AI is being disruptive is research: where researchers are applying models in novel ways and making legitimate advances in math, medicine and other fields. Another is art "creatives": graphic artists in particular. They're early victims and likely to be fully supplanted in the near future. A little further on and it'll be writers, actors, etc.
Maybe this means that LLMs are ultimately good for small business. If large business is constrained by being large, and LLMs are equally accessible to 5 people or 100, then surely what we will see is increased productivity in small companies?
My direct experience has been that even very small tech businesses contend with IP issues as well. And they don't have the means to either risk or deliberately instigate a fight.
> One place where AI is being disruptive is research: where researchers are applying models in novel ways and making legitimate advances in math, medicine and other fields.
Great point. The perfect example: (From Wiki):
> In 2024, Hassabis and John M. Jumper were jointly awarded the Nobel Prize in Chemistry for their AI research contributions for protein structure prediction.
AFAIK: They are talking about DeepMind AlphaFold.
Related: (Also from Wiki):
> Isomorphic Labs Limited is a London-based company which uses artificial intelligence for drug discovery. Isomorphic Labs was founded by Demis Hassabis, who is the CEO.
I think AlphaFold is where current AI terminology starts breaking down. Because in some real sense, AlphaFold is primarily a statistical model - yes, it's interesting that they developed it using ML techniques, but from the use standpoint it's little different than perturbation based black boxes that were used before that for 20 years.
Yes, it's an example of ML used in science (other examples include NN-based force fields for molecular dynamics simulations and meteorological models) - but a biologist or meteorologist usually cares little how the software package they are using works (excluding knowledge of the different limitations of numerical vs statistical models).
The whole "but look, AI in science" thing seems to me like a motte-and-bailey argument, implying the use of AGI-like MLLM agents that perform independent research - currently a much less successful approach.
I specifically didn't call LLMs a statistical model - while they technically are, it's obvious they are something more. While intelligence is a hard concept to pin down, current-gen LLMs can already do most knowledge-work tasks better than most people (they are better writers than most people, they can program better than most people, they are better at math than most people, they have better medical knowledge than most people...). If the human is the mark of intelligence, it has been achieved.
AlphaFold is something else though. I work with something similar (specifically FNOs for biophysical simulations), and the insight that data-only models perform better than physics-based models is novel - I think the Nobel prize was deservedly awarded. However, the thing is still closer to a curve fit than to LLMs regarding intelligence; or in other words, it's about as "intelligent" as the perturbation-based black boxes were.
> where researchers are applying models in novel ways and making legitimate advances in math, medicine and other fields.
Can you give an example, say in medicine, where AI made a significant advancement? That is, we're talking neural networks and up (i.e. LLMs), not some local optimization.
Even still, in theory this should free up more money to hire more lawyers, marketers, etc. The effect should still be there, presuming the market isn't saturated with new ideas.
Something else will get expensive in the meantime, e.g. it doesn't matter how much you earn, landlords will always increase rent to the limit because a living space is a basic necessity
No, landlords will increase rent as much as they can because they like money (they call it capitalism for a reason). This is true of all goods, both essential and non-essential. All businesses follow the rule of supply and demand when setting prices or quickly go out of business.
In the scenario being discussed - if a bunch of companies hired a whole bunch of lawyers, marketers, etc., that might make salaries go up due to increased demand (though probably not by a huge amount, as tech isn't the only industry in the world). That still first requires companies to be hiring more of these types of people for that effect to happen, so we should still see some of the increased output even if there is a limiting factor. We would also notice the salaries of those professions going up, which so far hasn't happened.
It's an observed effect that rent increases until everyone is just as miserable as before. Regulatory capture of the building industry might have something to do with it, but you can't just say it doesn't happen.
» landlords will always increase rent to the limit
In your own words, a business will quickly go out of business if supply and demand do not match. So unless you are confident that there will be a buyer, you cannot raise prices infinitely.
» because a living space is a basic necessity
While most people can live without a Netflix subscription (hence Netflix cannot raise prices infinitely and still expect to find buyers), most people prefer to live in housing. Housing is a basic necessity, hence as a landlord you can confidently raise prices right up to the affordability limit.
» Something else will get expensive in the meantime
Let's assume electricity prices get really cheap because humanity masters fusion. Well, guess what: now landlords will increase the rents again, because they can.
Hope I expressed myself to your liking. I mean, you just waltz in here and start lecturing people about capitalism; maybe you should change your career path and become a teacher.
>A little further on and it'll be writers, actors, etc.
The tech is going to have to be absolutely flawless, otherwise the uncanny-valley nature of AI "actors" in a movie will be as annoying as when the audio and video aren't perfectly synced in a stream. At least that's how I see it.
I get what you mean, but the last year has been a story of sudden limits and ceilings of capability. The (damned impressive) video you post is a bunch of extremely brief snippets strung together. I'm not yet sure we can move substantially beyond that to something transformative or pervasively destructive.
A couple years ago, we thought the trend was without limits - a five second video would turn into a five minute video, and keep going from there. But now I wonder if perhaps there are built in limits to how far things can go without having a data center with a billion Nvidia cards and a dozen nuclear reactors serving them power.
Again, I don't know the limits, but we've seen in the last year some sudden walls pop up that change our sense of the trajectory down to something less "the future is just ten months away."
Approximately 1 second was how long AI could hold it together. If you had a lot of free time you could extend that out a bit, but it'll mess something up. So generally people who make them will run it slow-motion. This is the first clip I've seen with it at full speed.
The quick cuts thing is a huge turnoff so if they have a 15 second clip later on, I missed it.
When I say "1 second", I mean that's what I was doing with automatic1111 a couple of years ago. And every video I've seen is the same 30-60 generated frames...
I wonder if this is going to change the ad/marketing industry. People generally put up with shitty ads, and these will be much cheaper to produce. I dread what's coming next.
I mean, it's very uncanny valley; I would not want to watch a full movie of that. It's so close! I mean, it could be next year! Or it could be 20 years.
Bullshit: Chatbots are not failing to demonstrate a tangible increase in companies' output because of regulations and IP law, they are failing because they are still not good for the job.
LLMs only exist because the companies developing them are so ridiculously powerful that they can completely ignore the rule of law, or if necessary even change it (as they are currently trying to do here in Europe).
Remember we are talking about a technology created by torrenting 82 TB of pirated books, and that's just one single example.
"Steal all the users, steal all the music" and then lawyer up, as Eric Schmidt said at Stanford a few months ago.
Maybe in some industries and for some companies and their products but not all.
Like, let's take operating systems as an example. If there are great productivity gains from LLMs, why aren't companies like Apple, Google and MS shipping operating systems with vastly fewer bugs and cleaning up backlogged user feature requests?
Don't forget that the more common goal in starting a company is to make money, not to produce a good or service that others want or need (employment included) - that is just a lesser concern. Generally, business owners are just as likely to shed quality as to shed employees, as long as the profits go up. What helps sustain this is the gradual lowering of the bar on quality, which leads to consumers settling for garbage products and sending a positive signal to the business to continue. This is exacerbated by monopolistic trends in the world, where only one company provides a good or service and buys out the competition when it arises, in order to control choice. What we end up with is, from the consumer's viewpoint, similar to late Communism in regards to choice and quality. In the end Capitalism didn't win. It just lost last.
The things you mention in that legion of other things are actually things LLMs do better than intellectual productivity: they can spew entire libraries of marketing BS, summarize decades of legal precedents, and fill out mountains of red-tape checklists.
They have trouble with debugging obvious bugs though.
> The bank [Goldman Sachs] now has 11,000 engineers among its 46,000 employees, according to [CEO David] Solomon, and is using AI to help draft public filing documents.
> The work of drafting an S1 — the initial registration prospectus for an IPO — might have taken a six-person team two weeks to complete, but it can now be 95 per cent done by AI in minutes, said Solomon.
> “The last 5 per cent now matters because the rest is now a commodity,” he said.
In my eyes, that is major. Junior ibankers are not cheap -- they make about 150K USD per year minimum (total comp).
This is certainly interesting and I don't want to readily dismiss it, but I sometimes question how reliable these CEO anecdotes are. There's a lot of pressure to show Wall Street that you're at the forefront of the AI revolution. It doesn't mean no company is achieving great results, but it's hard to separate the real anecdotes from the hype.
I mean, that's such a text-heavy area anyway. I am not an expert in filing S1s, but won't a lot of it be more or less boilerplate plus customisations specific to the offering? Any reasonably advanced model should be able to take you a good chunk of the way. Then iterate with a verifier-type model plus a few people to review; even with iterations, that should definitely shorten the overall time. It seems like such a perfect use case for an LLM - what am I missing that is hidden in the scepticism of the sibling comments?
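To make the shape of that loop concrete, here's a rough sketch of what I'm imagining (pure speculation, not anything Goldman has described; the model names are placeholders):

    import anthropic

    client = anthropic.Anthropic()
    DRAFT_MODEL = VERIFY_MODEL = "claude-sonnet-4-20250514"  # placeholders

    def ask(model: str, prompt: str) -> str:
        msg = client.messages.create(model=model, max_tokens=2000,
                                     messages=[{"role": "user", "content": prompt}])
        return msg.content[0].text

    def draft_s1_section(facts: str, rounds: int = 3) -> str:
        """One model drafts, a verifier pass critiques, the drafter revises."""
        draft = ask(DRAFT_MODEL, f"Draft an S-1 business section from these facts:\n{facts}")
        for _ in range(rounds):
            issues = ask(VERIFY_MODEL, f"List concrete problems in this S-1 draft:\n{draft}")
            if "no issues" in issues.lower():
                break
            draft = ask(DRAFT_MODEL, f"Revise the draft to fix:\n{issues}\n---\n{draft}")
        return draft  # the human "last 5 per cent" happens after this

Nothing exotic; the win is that each iteration takes minutes instead of days.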
I find that this is on point. I've seen a lot of charts on the AI-hype side of things showing exponential growth of AI agent fleets being used for software development (starting in 2026 of course). Take this article for example: https://sourcegraph.com/blog/revenge-of-the-junior-developer
Ok, so by 2027 we should be having fleets of autonomous AI agents swarming around every bug report and solving it x times faster than a human. Cool, so I guess by 2028 buggy software will be a thing of the past (for those companies that fully adopt AI of course). I'm so excited for a future where IT projects stop going overtime and overbudget and deliver more value than expected. Can you blame us for thinking this is too good to be true?
This is like asking if tariffs are so bad, why don't you notice large price swings in your local grocer right now?
In complex systems, you can't necessarily perceive the result of large internal changes, especially not with the tiny amount of vibes sampling you're basing this on.
You really don't have the pulse on how fast the average company is shipping new code changes, and I don't see why you think you would know that. Shipping new public end-use features isn't even a good signal, it's a downstream product and a small fraction of software written.
It's like thinking you are picking up a vibe related to changes in how many immigrants are coming into the country month to month when you walk around the mall.
Realistically, it's because layoffs have a high reputational cost. AI provides an excuse that lets companies do layoffs without suffering the reputation hit. In essence, AI hype makes layoffs cheaper.
It also matters a bit where the reputation cost hits. Layoffs can spook investors because they make it look like the company is doing poorly. If the reputation hit for AI is to non-investors, then it probably matters less.
It's not about consumer reputation, it's about the financial reputation. Slashing headcount can look desperate. AI makes it sound innovative, or at least that's the idea.
In big companies, this is a bit slower due to the need to migrate entrenched systems and org charts into newer workflows, but I think you are seeing more productivity there too. Where this is much more obvious is in indie games and software where small agile teams can adopt new ways of working quickly...
What if the number of game critics just hasn’t increased, and since they can only play/review a fixed number of games each year due to time constraints, the number that they acclaim each year hasn’t grown? Not saying this is necessarily the case, just suggesting the possibility.
Has the number of games released with 95%+ reviews increased, though? And how much of that is due to the pandemic? It's anecdotal, but the game-dev Discord I'm in has had a decent reduction in the number of regulars since the tail end of the pandemic ('24-'25). And ironically, I was one of them until recently. I think people actually just had more time.
It's cause there are still bottlenecks. AI is definitely boosting productivity in specific areas, but the total system output is bottlenecked. I think we will see these bottlenecks get rerouted or refactored in the coming years.
Informational complexity bottlenecks. So many things are shackled to human decision-making loops. If we were truly serious, we would unshackle everything and let it run wild. It would be chaotic, but chaos creates strange attractors.
Quality control, for one. The state of commercial software is appalling. Writing code itself is not enough to get a usable piece of software.
LLMs are also not very useful for long-term strategy or for coming up with novel features or combinations of features. They also are not great at maintaining existing code, particularly without comprehensive test suites. They are good at coming up with tests for boilerplate code, but not really for high-level features.
Considering how software is increasingly made out of separate components and services, integration testing can become pretty damn difficult. So quite often, the public release is the first serious integration test.
From my experience, this stuff is rarely introduced to save developers from typing in the code for their logic. Actual reasons I observe:
1. SaaS sales/marketing pushing their offerings on decision makers - software being a pop culture, this works pretty well. It can be hard for internal staff to push back on What Everyone Is Using (TM). Even if it makes little to no sense.
2. Outsourcing liability, maintenance, and general "having to think about it". Can be entirely valid, but often it indeed comes from an "I don't want to think of it" kind of place.
I don't see this stuff slowing GenAI adoption down (or speeding it up), mainly because it usually has little to do with saving time or money.
I feel like one of us must be in a bit of our own bubble.
The company that I work for is currently innovating very fast (not LLM-related), creating value for other companies that they have never gotten from any other business. I know this because when they switch to our company, they tell us how much better our software product is compared to anything they've ever used. It has tons of features that no other company has. That's all I can say without doxxing myself too much.
I feel like it's unimaginative to say:
> What more tech is there to sell besides LLM integrations?
I have like 7 startup ideas written down in my notes app for software products that I wish I had in my life, but don't have time to work on, and can't find anything that exists for it. There is so much left to create
I speak only from a very high-level POV. From a lower level, in the "trees": yes, I don't disagree whatsoever with your characterization that a single company can achieve that. I too know of many, many products I use (tech even!) that I could create exponentially better alternatives for.
Now, there come a few considerations I don't believe you have factored in:
- Just because your company has struck gold: does that mean that pathway is available or realistic enough for everyone else? And, more importantly, is it /significant/ enough that it can scoop up the enormous amount of tech talent on the market currently and in the future? I don't believe so.
- Segueing, "software products that I wish I had in my life." Yes, I too have many ideas, BUT: is the market (the TAM if you will) significant enough to warrant it? Ok, maybe it is -- how will you solve for distribution? Fulfillment is easy, but how are you going to not only identify prospective customers (your ICP), find them and communicate to them, and then convince them to buy your product, AND do this at scale, AND do this with low enough churn/CAC and high enough retention/CLTV, AND is this the most productive and profitable use of your time and resources?
Again, ideas are easy -- we all have them. But the execution is difficult. In the SaaS/tech space, people are burned out from software. Everyone is shilling their vibe-coded SaaS or latest app. That market is saturated, people don't care. Consumer economy is suffering right now due to the overall economy and so on. Next avenue is enterprise/B2B -- cool, still issues: buyer fatigue; economic uncertainty leading to anemic budgets and paralysis while the "fog" clears. No one is buying -- unless you can guarantee they can make money or you can "weather the storm" (see: AI, and all the top-down AI mandates every single PE co and board is shoving down exec teams throats).
I'm talking in very broad strokes on the most impactful things. Yes, there is much to create -- but who is going to create it and who is going to buy it (with what money?). This is a people problem, not a tech problem. I'm specifically talking about: "what more tech is there to sell -- that PEOPLE WILL BUY -- besides LLM integrations?" Again, I see nothing -- so I have pivoted towards finance and selling money. Money will not go out of fashion for a while (because people need it for the foreseeable future).
Ask yourself, if you were fired right now at this moment: how easy would it be for you to get another job? Quite difficult unless you find yourself lucky enough to have a network of people that work in businesses that are selling things that people are buying. Otherwise, good luck. You would have more luck consulting -- there are many many many "niche" products and projects that need to be done on small scales, that require good tech talent, but have no hope of being productized or scaled (hint!).
I do think I may struggle a bit to find something comparable to my current company, but we’re also hiring right now. And it’s a very small company in the grand scheme of things, even though we have customers much bigger.
I guess having that experience makes me think that there must be a lot of other small companies working in their own interesting niche, providing a valuable product for a subset of major companies. You just don’t usually know they exist unless you need their specific niche.
But I recognize your points too. It seems like the B-to-C space is really tricky right now, and likely fits closer with what you’re describing.
I think that the flip side is that a company doesn’t need to make it big to be successful. If you can hire 5 developers and bring in $2m/yr, there’s nothing at all wrong with that as a business. Maybe we will get lucky and the market will trend towards more of those to fill in the void that you mentioned. I think it could lead to a lot of innovation and a really healthy tech world! But maybe it’s just being overly optimistic to think that might be the path forward :)
I don't get it either. You hire someone in the hope of ROI. Some things work, some kinda don't. Now people will be n times more productive, therefore you should hire fewer people??
That would mean you have no ideas. It says nothing about the potential.
“we'd expect to see software companies shipping features and fixes faster than ever before. There would be a huge burst in innovative products and improvements to existing products.”
Shipping features faster != innovation or improvements to existing products
Because they're just pushing out stuff that nobody might even need or want to buy. Because it's not even necessarily leading to more revenue. Software companies aren't factories. More stuff doesn't mean more $$$ made.
Our jobs are full of a lot more than just writing code. In my case it seems like it's helping to accelerate a portion of the dev cycle, but that's a fairly small portion, say 20%, and even a big impact on that just gets dominated by the other phases that haven't been accelerated.
I’m not as bullish as some are on the impact of AI, but it does feel nice when you can deliver something in a fraction of the time it used to take. For me, it’s more useful as a research and idea exploration tool, less so about writing code. Part of that is that I’m in Scala land, so it just tends to not work as well as a more mainstream language.
We haven’t used it to help the product management and solution exploration side, which seems to be a big constraint on our execution.
Intuitively I agree. In the long run, we’ll know better. But for now, nobody truly knows what the new equilibrium is.
That said: it's one type of work that is getting dramatically cheaper. The debate is about the scope and quality of that labor, not whether it's cheap or fast (it is). But if anything negative (errors, faults) compounds, and the correction can NOT be done with the same tools, then you must still have humans triage errors. In my experience, bad code can already have negative value (it costs more to fix than to rewrite).
In the medium term, the actual scope and ability for different tasks will remain unknown. It takes a lot of time to gather the experience to tell if something was a bad idea – just look at the graveyard of design patterns, languages and software practices. Many of them enjoyed the spotlight for a decade before the fallout hit.
Anyway, while the abilities are unknown, AI will be used everywhere for everything - which is only wise if it's truly better at every general task - even though all the available data about it shows vastly different ability across domains/problem types. Many of those uses will be both (a) worse than humans and (b) expensive to reverse, with compounding effects.
The funny thing is, I have already seen enthusiasts basically acknowledging this but explaining that accepting those compounding issues (think tech debt) is the right choice now, because better AI will fix them in the future. To me, this feels like the early formation of a religion (not even metaphorically). And I have a feeling that the goalpost-moving from both sides will lead to an unfalsifiability deadlock in the debate.
> shipping features and fixes faster than ever before
Meanwhile, Apple duplicated my gf's contact, creating duplicate birthdays on my calendar. It couldn't find the duplicates despite matching name, nickname, phone number, and birthday, and despite both contacts being associated with her Apple account. I manually merged and ended up with 3 copies of her birthday in my calendar...
Seriously, this shit can be solved with a regex...
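(OK, slightly more than a regex, but the matching step really is this trivial. A rough sketch in Python:)

    import re

    def contact_key(contact: dict) -> tuple:
        """Normalize the fields that should identify a duplicate contact."""
        name = contact.get("name", "").strip().lower()
        digits = re.sub(r"\D", "", contact.get("phone", ""))  # keep digits only
        return (name, digits[-10:], contact.get("birthday", ""))

    def find_duplicates(contacts: list) -> dict:
        """Group contacts whose normalized keys collide."""
        seen, dupes = {}, {}
        for c in contacts:
            key = contact_key(c)
            if key in seen:
                dupes.setdefault(key, [seen[key]]).append(c)
            else:
                seen[key] = c
        return dupes

And yet here we are.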
The number of issues like these I see is growing exponentially, not decreasing. I don't think it's AI though, because it started before that. I think these companies are just overfitting whatever silly metrics they have decided are best
I don't think it would be trivial to increase demand by 10x (or even 2x) that quickly. Eventually, a publicly traded company will have a bad quarter, at which point it's much easier to just reduce the number of employees. In either scenario, there's no need for any new hires.
I think there’s always demand for more software and more features. Have you ever seen a team without a huge backlog? The demand is effectively infinite.
Right, that’s kind of the whole point. If it’s in the backlog, someone thinks it’s valuable, but you might never get to it because of other priorities. If you’re 10x more productive, that line gets pushed a lot farther out, and your product addresses more people’s needs, has fewer edge case bugs, and so on.
If the competition instead uses their productivity boost to do layoffs and increase short term profits, you are likely to outcompete them over time.
Productivity results in increased profit, not necessarily output. They don't need to innovate, make new products, or improve things. They just need to make their shit cheaper so their profit margin is higher. If you can just keep churning out more money, there is no need to improve anything.
Effort in this equation isn't measured in man-hours saved but in dollars saved. We all know this is BS and isn't going to manifest this way. It's tantamount to giving framers a nail gun instead of a hammer: we'll still be climbing the same rafters and doing the same work.
In 1987 the economist Robert Solow said "You can see the computer age everywhere but in the productivity statistics".
We should note that he said this long before the internet, web and mobile, so the remark probably needs an update.
However, I think it cuts through the salesman hype. Anytime we see these kinds of claims we should reply "show me the numbers". I'll wait until economists make these big claims; I will not trust CEOs and salesmen.
They didn't change the productivity statistics a bit.
The problem with computers not changing the productivity statistics is one of the great mysteries economists argue about. It's very clear nowadays that there are problems on both the "statistics" and "productivity" sides of it, but the internet, web, and mobile didn't change anything.
Before enterprise AI systems are allowed to spread their wings, first they need to support existing processes. Once they're able to generate the same customer-facing results relatively autonomously, then they'll have the opportunity to improve those results. So the first place to look for their impact is, I'd wager, cost-cutting. So watch those quarterly earnings reports.
Most significant technology takes almost a generation to be fully adopted. I think it is unlikely we are seeing the full effect of LLM's at the moment.
Content producers are blocking scrapers of their sites to prevent AI companies from using their content. I would not assume that AI is either inevitable or on an easy path to adoption. AI certainly isn't very useful if what it "knows" is out of date.
In 10 years, with the same amount of money and time that's been pumped into AI (still a financial black hole), we had the entire broadband internet buildout completed, and the internet was responsible for adding a trillion dollars a year to the global economy.
AI tools seem to be most useful for little things. Fixing a little bug, making a little change. But those things aren’t always very visible or really move the needle.
It may help you build a real product feature quicker, but AI is not necessarily doing the research and product design which is probably the bottleneck for seeing real impact.
Or a lot of small fixes all over the place. Yet in reality we don't see this anywhere; I'm not sure what exactly that means.
Maybe overall complexity creeping up rolls over any small gains, or devs are becoming lazier and just copy-paste LLM output without a serious look at it?
My company didn't even adopt or allow the use of LLMs in any way for anything so far (private client data security is more important than any productivity gains, which seem questionable anyway when looking around... and serious data breaches can easily end with fines in the hundreds-of-millions ballpark).
It’s also possible that all of these gains fixing bugs are simply improving infrastructure and stability rather than finding new customers and opening up new markets.
Having worked on software infrastructure, it's a thankless job. Your most heroic work has little visibility, and the result is that nothing catastrophic happened.
So maybe products will have better reliability and fewer bugs? And we all know there’s crappy software that makes tons of money, so there isn’t necessarily a strong correlation.
> And you DO see companies laying off people in large numbers fairly regularly.
Sure, but so far layoffs have happened too regularly to be AI-gains-driven (at least in software). We have some data on software job postings, and the job apocalypse, with its corresponding layoffs, coincided with the end of ultra-low interest rates. If AI had an effect this year or last, it's quite tiny in comparison.
Layoffs happen because cash is scarce. In fact, cash is so scarce for anything that’s not “AI” that it’s basically nonexistent for startup fundraising purposes.
Well, it sort of evens out. You see, the developers are pushed to use AI to generate a lot of LoC slop, but then they have to fix all the bugs, security issues and hallucinated packages that were thrown in by the magic machines. But at least some deluded MBA can BS about being "AI-first".
I mean, if a mega corp like Google or Amazon changed its headcount by plus or minus 10%, as a lay observer I don't think I'd really be able to detect the difference in output either.
That doesn't mean it isn't a real productivity gain, but it might be spread across enough domains (bugs, features, internal tools, experiments) to not be immediately or "painfully obvious".
It'll probably get more obvious if we start to see uniquely productive small teams seeing success. A sort of "vibe-code wonder".
Firstly, the capex is currently too high for all but the few.
This is a rather obvious statement, sure. But the upshot is that a lot of companies "have tried language models and they didn't work" when the capex behind the attempt was laughable.
Secondly, there's a corporate paralysis over AI.
I received a panicky policy statement written in legalese forbidding employees from using LLMs in any form. It was written out of panic about intellectual property leaking, but also panic about how to manage and control staff going forward.
I think a lot of corporates still clutch at the view that AI will push workforce costs down, and are secretly wasting a lot of money failing at this.
The waste is extraordinary, but it's other people's money (it's actually the shareholders' money), and it's seen as being all for a good cause and not something to discuss after it's gone. I can never get it discussed.
Meanwhile, at a grassroots level, I see AI being embraced and improving productivity; every second IT worker is using it. It's just that, because of this corporate panicking and mismanagement, its value is not yet measured.
By SaaS I assume you mean public LLMs. The problem is the hand-wringing over intellectual property leaking from the company; companies are actually writing policies banning their use.
In regards to private LLMs, the situation has become disappointing in the last 6 months.
I can only think of Mistral as being a genuine vendor.
But given the limitations in context window size, fine tuning is still necessary, and even that requires capex that I rarely see.
But my comment comes from the fact that I've heard smart people from several sources say "we tried language models at work and it failed".
However, in my discussions with them, they had no concept of the size of the datacentres used by the webscalers.
It's not clear to me that fine-tuning is even capex. If you fine tune new models regularly, that's opex. If you mean literally just the GPUs, you would presumably just rent them right? (Either from cloud providers for small runs or the likes of sfcompute for large runs) Or do you imagine 24/7 training?
This is a good reminder that every org is different. However some companies like Microsoft are aggressively pushing AI tools internally, to a degree that is almost cringe.
I don't want to shill for LLMs-for-devs, but I think this is excellent corporate strategy by Microsoft. They are dog-fooding LLMs-for-devs. In a sense, this is R&D using real world tests. It is a product manager's dream.
The Google web-based office productivity suite is similar. I heard a rumor that at some point Google senior mgmt said that nearly all employees (excluding accounting) must use Google Docs. I am sure that they fixed a huge number of bugs and added missing/blocking features, which made the product much more competitive vs MSFT Office. Fifteen years ago, Google Docs was a curiosity - an experiment in just how complex web apps could become. Today, Google Docs is the premier choice for new small businesses. It is cheaper than MSFT Office, and "good enough".
Google Docs has gotten a little better in that time, but it's honestly surprisingly unchanged. I think what really changed is that we all stopped wanting to lay out docs for printing and became happier with the simpler feature set (along with collaboration and distribution).
The tools are often cringe because the capex was laughable.
E.g. in one solution, the trial was done using public LLMs, and then they switched over to an internally built LLM, which is terrible.
Or, secondly, the process is often cringe because the corporate aims are laughable.
I've had an argument with a manager making a multi-million dollar investment in a zero-coding solution that we ended up throwing in the bin years later.
They argued that they were going with this bad product because "they don't want to have to manage a team of developers".
When I objected, they responded: "this product costs millions of dollars, how dare you?"
How dare me indeed...
They promptly left the company, but it took 5 years before it was finally canned, and plenty of people wasted 5 years of their careers on a dead-end product.
Companies are not accepting that their entire business will mostly go away. They are mostly frogs being slowly boiled; that's why they are just incorporating these little chat bots and LLMs into their business. But the truth of the matter is that it's all going away, and it's impossible to believe. Take something like JIRA: it's entirely laughable, because a simple LLM can handle entire project management with freaking voice and zero programming. They just don't believe that's the reality. We're talking about a Kodak moment.
Worker productivity is secondary to business destruction, which is the primary event we're really waiting for.
That's silly. You still need a way to track and prioritize tasks even if you use voice input. Jira may be replaced with something better, built around an LLM from the ground up. But the basic project management requirements will never go away.
Yes, that's quite easy. I say "Hey, reorganize the tasks like so, prioritize this, like so", and if I really need to, I can hook up some function calls, but I suspect this will be unnecessary with a few more LLM iterations (if even that). You can keep running from how powerful these LLMs are, but I'll just sit and wait for the business/startup apocalypse (which is coming). Jira will not be replaced by something better; it'll be replaced by some weekend project a high schooler makes. The very fact that it's valued at over a billion dollars in the market is just going to be a profound rug pull soon enough.
So let me keep it real: I am shorting Atlassian over the next 5 years. Asana is another; there are plenty of startup IPOs that need to be shorted to the ground, basically.
If replacing Jira is really as easy as you claim, then it would have happened by now. At the very least, we'd be getting hit by a deluge of HN posts and articles about how to spin up your very own project management application with an LLM.
I think that this sentiment, along with all of the hype around AI in general, is failing to grasp a lot of the complexity around software creation. I'm not just talking about writing the code for a new application - I'm talking about maintaining that application, ensuring that it executes reliably and correctly, thinking about the features and UX required to make it as frictionless as possible (and voice input isn't the solution there, I'm very confident of that).
You are not understanding what I am saying. I am saying it's the calm before the storm, before everyone realizes they are paying a bunch of startups for literally no comparative value given AI. First the agile people are going to get fired, then the devs are just going to go "oh yeah, I just manage everything in my LLM".
I'll be here in a year, we can have this exact discussion again.
I understand what you are saying, I just don't agree with it.
"AI" is not going to wholesale replace software development anytime soon, and certainly not within a year's time because of the reasons I mentioned. The way you worded your post made it sound like you believed that capability was already here - nevertheless, whether you think it's here now or will be here in a year, both estimates are way off IMO.
If there was only one consequence, and that consequence is Jira and Atlassian being destroyed, then I am all for it!
Realistically though, they might incorporate that high schooler's software into Jira to make it even more bloated, and they will sell it to your employer soon enough! Then team lead Chris will enter your birthday and your vacation days into it too, to enable it to also do vacation planning, without asking you. Next thing, Atlassian sells you out and you receive unsolicited AI calls about your holiday planning.
What sort of assurances can I get from that weekend project? I think we're going to build even more obscene towers of complexity as nobody knows how anything works anymore, because they choose not to.
No, not really. The people behind the LLMs don't really know why they keep getting better with more compute and data; they are literally just trying shit. Yet the world has seen just how useful the thing is. We don't have any assurances from the damn thing, yet it's the most useful thing we ever made (at least software-wise).
In smaller businesses some roles won’t need to be hired anymore.
Meanwhile in big corps, some roles may transition from being the source of presumed expertise to being one neck to choke.
I’d love it not to be true, but the truth is Jira is to projects what Slack/Teams are to messaging. When everybody is a project manager Jira gets paid more, not less.
> Take something like JIRA, it's entirely laughable because a simple LLM can handle entire project management with freaking voice with zero programming
When I used a not-so-simple LLM to make it act as a text adventure game, it could barely keep track of the items in my inventory, so TBH I am a little bit skeptical that an LLM can handle entire project management - even without voice.
Perhaps it might be able to use tools/MCP/RPC to call out to real project management software and pretend to be your accountant/manager/whoever, but I wouldn't call that the LLM itself doing the project management task - and someone would need to write that project management software.
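For reference, the tool-calling version looks roughly like this with the Anthropic API (the create_ticket tool here is hypothetical; real project management software would have to sit behind it):

    import anthropic

    client = anthropic.Anthropic()

    # Hypothetical tool; a real tracker implements the actual ticket creation.
    tools = [{
        "name": "create_ticket",
        "description": "Create a ticket in the project tracker.",
        "input_schema": {
            "type": "object",
            "properties": {
                "title": {"type": "string"},
                "priority": {"type": "string", "enum": ["low", "medium", "high"]},
            },
            "required": ["title"],
        },
    }]

    response = client.messages.create(
        model="claude-sonnet-4-20250514",  # placeholder model name
        max_tokens=500,
        tools=tools,
        messages=[{"role": "user", "content": "File a high-priority bug: login fails on Safari"}],
    )
    # The state (tickets, priorities) lives in the tracker, not in the LLM's
    # context window - which is why the text-adventure inventory problem goes away.

Which is my point: the LLM becomes the interface, not the project management.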
There are innovative ways to accomplish the consistency you seek for the example application you mentioned. They are coming a lot sooner than you think, but hey this thread is a bit of a poker game before the flop, I’m just placing my bet - you can call the bluff.
We just have to wait for the cards to flip, and that’s happening on a quadratic curve (some say exponential).
I don't think extra productivity in software development ever reflected in established companies building things faster.
The more likely scenario is that if those tools make developer so much more productive, we would see a large surge in new companies, with 1 to 3 developers creating things that were deemed too hard for them to do.
But it's still possible that we didn't give people enough time yet.
I will never understand this argument. If you have a super tool that can magically double your output, why would you suddenly double your output publicly? So that you now essentially work twice as much for the same money? You use it to work less; your output stays static or marginally improves. That's the smart play.
Note: I'm talking about your run-of-the-mill SE wagie work, not startups where your food depends on your output.
If they were hiding their "power level" and maintaining their prior output, what incentive do they have to suddenly double it if they were hiding it in the first place?