
The results are obviously predictable, but it's nice that the authors took the time to prove, with the rigor of science, a thing everyone already knows to be true.

I wonder how the participants felt writing an essay while being hooked up to an EEG.


If AI makes everyone 10x engineers, you can 2x the productive output while reducing headcount by 5x.
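(Arithmetic: keep 1/5 of the headcount at 10x productivity each, and output is 10/5 = 2x.)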

Luckily software companies are not ball bearings factories.


unluckily, too many corporate managers seem to think they are :/


> If AI makes everyone 10x engineers, you can 2x the productive output while reducing headcount by 5x.

Why wouldn't you just 10x the productive output instead?


I don't think it would be trivial to increase demand by 10x (or even 2x) that quickly. Eventually, a publicly traded company will have a bad quarter, at which point it's much easier to just reduce the number of employees. In both scenarios, there's no need for any new hires.


I think there’s always demand for more software and more features. Have you ever seen a team without a huge backlog? The demand is effectively infinite.


Isn’t a lot of stuff in the backlog because it’s not important enough to the bottom line to prioritize?


Right, that’s kind of the whole point. If it’s in the backlog, someone thinks it’s valuable, but you might never get to it because of other priorities. If you’re 10x more productive, that line gets pushed a lot farther out, and your product addresses more people’s needs, has fewer edge case bugs, and so on.

If the competition instead uses their productivity boost to do layoffs and increase short term profits, you are likely to outcompete them over time.


Or a 4-hour workweek.


Past a certain scale, complexity is mostly unavoidable.

Accidental complexity is compressible, but essential complexity is not. At some point, you cannot compress further without losing nuance.

In compiler design, there's a concept called the waterbed theory of complexity, which states that you can try to abstract complexity away, but it'll just show up elsewhere.


Reinventing the wheel is the best way to learn. But imo that's really the only context where you should.

I love my rabbit holes, but at work, it's often not viable to explore them given deadlines and other constraints. If you want your wheel to be used in production though, it better be a good wheel, better than the existing products.


99% of the people who reinvent wheels at work don't know how the wheel they don't like is even made, or why it has the compromises it has.


The other 1% does it because the original wheel constrains them to an inferior approach that doesn’t work for their employer.


I’m reminded of Chesterton’s Fence [1]. I just explained it to a coworker who was proud of indiscriminately eliminating 200+ positions.

[1] https://www.lesswrong.com/w/chesterton-s-fence


If your fencing was done by idiots, then at some point you have no alternative but to start tearing down fences and see what happens. Chesterton's Fence implies either a degree of design or a timeworn quality that most software products don't match.

Chesterton himself was using it as a religious metaphor, and I think most of us agree that software engineers are not literal gods.


It’s not that you can’t tear down the fences. It’s that you should first study why they were there.


Sometimes the reason a fence exists is because the person who put it there was some combination of a) an idiot, b) extremely confused, or c) not qualified to construct fences.

Also, in software, you may find the fence sometimes exists in higher dimensions than the usual three, can turn invisible for part of its length, and occasionally exists but is four inches tall and blocks only mice. And it may also be designed, at least in part, by a blind idiot god called ChatGPT on whose altar consciousness itself is sacrificed. At this point it's worth considering whether keeping the fence around is worse than letting the bull out.


The market-based hypothesis is that the person erecting the fence incurs costs, and if the expected value of the fence weren't higher than the cost of erecting it, it wouldn't have been erected in the first place.

Of course mismanagement happens, but the implied value of understanding why the fence was erected is to understand the expected value it was meant to bring and the problem it was trying to solve. This does not imply that it should have been erected, just that others before you were trying to solve a problem, and if they failed, it's important to know why so you don't fail in the same way.


It's funny. I had a similar analogy with some legacy systems recently. No one seemed to own or know where the data egressed to or whether it was even used anymore.

But it was also low-value data and even in the worst case we surmised the most we'd do is anger someone in marketing briefly.

I think so long as you can ascertain that the stakes are low, this is a good tactic.


>it's often not viable to explore them given deadlines and other constraints

True for life in general. We have limited lifespans. Aging and death really are the great-granddaddy of all problems.


It's not really the best way to learn, because it's the most expensive and time-consuming. Whatever needs to be learned just needs to be well documented and possible to tinker with. Clearly communicating knowledge is a problem in its own right, but you shouldn't have to build the whole thing from scratch.


> It's not really the best way to learn because it's the most expensive and time-consuming.

The expense (time or otherwise) follows from how intimately you have to get to know the subject. Which is precisely why it's the best way to learn. It's not always viable, but when you can spare the expense nothing else compares.


No, the expense follows from how many wasted options you'll have to go through. The working, intimate knowledge can be acquired directly, without that waste.


That sounds like learning something only very superficially. Rewriting from scratch is the only way to really learn any topic of non-trivial depth and complexity.


It's not necessarily superficial learning. If you want to learn math, you'd be crazy to rederive the rules of math just to learn it; you'd be better off with a good teacher and constant practice rather than building the theorems and formulas yourself, or you'll never finish learning math in your lifetime.

Same can be said of learning a large inherited codebase. If you're assigned to a large piece of functionality and you need to understand it, your first instinct shouldn't be to rewrite the whole thing from scratch just to understand it; it should be to either read the code and documentation, or if those are impossible to comprehend, ask around what it's supposed to do, maybe even write tests to get a general sense of the behavior.

I don't expect programmers to be cognitive psychologists here but the thing about learning is that you must do a "progressive overload" of complexity as you go, which is why building on top of what exists is the best way to go. You get a general idea first of what something is supposed to do, and then when you're good enough, that's when you can work backwards to building something from scratch where all the complexity at lower levels of abstraction wouldn't be too much for your brain to handle. Starting with a large amount of complexity will impair your learning and only discourage you.


Sure, you draw the line at different levels of abstraction, but you still need to rewrite from scratch the thing you're learning. If you want to learn how a web server works, you better write one. You don't need to write the OS that it runs on, however, although doing so also informs certain aspects of web server design. Reading the documentation of an existing web server alone is only going to result in a superficial understanding of it.


Different people learn differently. I learned computer science by first learning low-level languages, and then moving up the abstraction tree. Many go the opposite direction.

Personally, I’m a fan of learning by doing. Reinventing the wheel works better for me than just about anything I’ve tried.


depends on what you're optimizing for


I think the article is getting at the fact that in a post-AGI world, human skill is a depreciating asset. This is terrifying because we exchange our physical and mental labor for money. Consider this: why would a company hire me if, with enough GPUs and capital, it can copy-and-paste 1,000 AI agents that are much smarter to do the work?

With AGI, knowledge workers will be worth less until they are worthless.

While I'm genuinely excited about the scientific progress AGI will bring (e.g. curing all diseases), I really hope there's a place for me in the post-AGI world. Otherwise, like the potters and bakers who couldn't compete in the market with cold, hard industrial machines, I'll be selling my Python code base on Etsy.

No Set Gauge had an excellent blog post about this. Have a read if you want a dash of existential dread for the weekend: https://www.nosetgauge.com/p/capital-agi-and-human-ambition.


> With AGI, knowledge workers will be worth less until they are worthless.

"Knowledge workers" being in charge is a recent idea that is, perhaps, reaching end of life. Up until WWII or so, society had more smart people than it had roles for them. For most of history, being strong and healthy, with a good voice and a strong personality, counted for more than being smart. To a considerable extent, it still does.

In the 1950s, C.P. Snow's "Two Cultures" became famous for pointing out that the smart people were on the way up.[1] They hadn't won yet; that was about two decades ahead. The triumph of the nerds took until the early 1990s.[2] The ultimate victory was, perhaps, the collapse of the Soviet Union in 1991. That was the last major power run by goons. That's celebrated in The End of History and the Last Man (1992).[3] Everything was going to be run by technocrats and experts from now on.

But it didn't last. Government by goons is back. Don't need to elaborate on that.

The glut of smart people will continue to grow. Over half of Americans with college educations work in jobs that don't require a college education. AI will accelerate that process. It doesn't require AI superintelligence to return smart people to the rabble. Just AI somewhat above the human average.

[1] https://en.wikipedia.org/wiki/The_Two_Cultures

[2] https://archive.org/details/triumph_of_the_nerds

[3] https://en.wikipedia.org/wiki/The_End_of_History_and_the_Las...


I’ve thought the same. Goons powered by AI, that is.


That seems like a very narrow perspective. For one, it is not clear we will end up with AGI at all (we may have already reached, or may soon reach, a plateau in what LLM technology can do), nor whether it would work like what you’re describing; the energy requirements might not be feasible, for example, or usage might be so expensive that it’s just not worth applying it to every mundane task under the sun, like writing CRUD apps in Python. We know how to build flying cars, technically, but it’s just not economically sustainable to use them. And finally, you never know what niches are going to be freed up or created by the ominous AGI machines appearing on the stage.

I wouldn’t worry too much yet.


This is only terrifying because of how we’ve structured society. There’s a version of the trajectory we’re on that leads to a post-scarcity society. I’m not sure we can pull that off as a species, but even if we can, it’s going to be a bumpy road.


the barrier to that version of the trajectory is that "we" haven't structured society. what structure exists, exists as a result of capital extracting as much wealth from labor as labor will allow (often by dividing class interests among labor).

agreed on the bumpy road - i don't see how we'll reach a post-scarcity society unless there is an intentional restructuring (which, many people think, would require a pretty violent paradigm shift).


I think we think of it as 'extracting' because people are coerced into jobs that they hate. I think AI can help us exit the paradigm of work as extraction. Basically, a passion economy (AI handles marketing and internet distribution) that allows you to focus on what you actually like, but can actually make money this time.


to be trite, we've been promised a world where AI will help to alleviate the menial necessities so that we're free to pursue our passions. in reality, what we're getting is AI that replaces the human component of passion projects (art, music, engineering as craft), leaving the "actually-hard-to-replace" "low-class" roles (cashiering, trash collection, housekeeping, farming, etc) to humans who generally have few other economic options.

without a dramatic shift in wealth distribution (no less than the elimination of private wealth and the profit motive), we can't have a post-scarcity society. capitalism depends entirely upon scarcity, artificial or not.


> With AGI, knowledge workers will be worth less until they are worthless.

The article you've linked fundamentally relies on the assumption that "the tasks can be done better/faster/cheaper by AIs". (Plus, of course, the idea that AGI would be achieved, but without this one the whole discussion would be pointless as it would lack the subject, so I'm totally fine with this one.)

Nothing about AGI (as in "a machine that can produce intelligent thoughts on a given matter") says that human and non-human knowledge workers would have some obvious leverage over each other. Just as my coworkers' existence doesn't threaten mine, a non-human intelligence poses no inherent threat. Not by definition, anyway.

Non-intelligent industrial robotics is well-researched and generally available, yet we have plenty of sweatshops because they turn out to be cheaper than robot factories. Not fun, not great, I'm not fond of it, but I'm merely taking it as a fact, as that is how things currently are. So I really wouldn't dare to unquestionably assume that "cheaper" would be true.

And then "better" isn't obvious either. Intelligence is intelligence, it can think, it can make guesses, it can make logical conclusions, and it can make mistakes too - but we've yet to see even the tiniest hints of "higher levels" of it, something that would make humans out of the league of thinking machines if we're ranking on some "quality" of thinking.

I can only buy "faster" - and even that requires an assumption that we ignore any transhumanist ideas. But, surely, "faster" alone doesn't cut it?


- Not reporting large cash transactions (over $10,000)

- Using someone else's ID can be interpreted as identity theft (sharing student ID discount, Costco cards, epic passes)

- torrenting copyrighted content (textbooks, music, movies, TV shows, audio books). I'm sure most of my classmates in school torrented some of those $200 textbooks.


>- Not reporting large cash transactions (over $10,000)

If you're talking about Currency Transaction Report, only financial institutions have to file those.

>- Using someone else's ID can be interpreted as identity theft (sharing student ID discount, Costco cards, epic passes)

Is "identity theft" actually a distinct crime? Or is it just fraud? If it's the latter, it's not a felony unless you're getting absurdly high amounts of benefit.

>- torrenting copyrighted content (textbooks, music, movies, TV shows, audio books). I'm sure most if not all of my classmates in school torrented some of those $200 textbooks.

Copyright infringement is a civil infraction unless you're doing it commercially (e.g. burning bootleg DVDs to sell).


I was in jail with tons of people with identity theft indictments. In Illinois it is absolutely a separate crime with a whole list of ways you can easily commit it.


I was thinking of Form 8300.

Could seeding a torrent be interpreted as distribution?

You're right that most of these would result in a slap on the wrist or a fine. Perhaps 3 misdemeanors a day? But I think the overall sentiment still stands - that it's hard to be a saint.


>I was thinking of Form 8300.

It's only criminal for "willful" infractions. If you sold a car and forgot to file, that's probably not willful. Moreover, how often are people really doing >$10k cash transactions? "it's hard to be a saint" is a massive shifting of the goalposts from "3 felonies per day".

https://www.irs.gov/businesses/small-businesses-self-employe...

>Could seeding a torrent be interpreted as distribution?

From Wikipedia: United States v. LaMacchia 871 F.Supp. 535 (1994) was a case decided by the United States District Court for the District of Massachusetts which ruled that, under the copyright and cybercrime laws effective at the time, committing copyright infringement for non-commercial motives could not be prosecuted under criminal copyright law.


"Willful" is one of those things highly dependent on whoever may be prosecuting you. If you have from some disliked class then the state is going to push very hard on you doing it willfully.


> I'm sure most of my classmates in school torrented some of those $200 textbooks.

That is so 2020. Today, AI pirates the textbooks; the student just generates the homework.


I feel you are mixing something up here. Assume that the student ID discount or Costco card, etc, are being borrowed: used with the card holder's blessing. Or borrowing a Netflix password.

It is the institution giving the student ID discount, or Costco or Netflix, that is not happy about it and calls it theft.

But it is not theft of the identity.


> "The CEO was wavering until Tom found out they both owned the same obscure Italian motorcycle. Tom took him for a ride along the coast. Contract was signed the next day."

As a junior, I often wonder how many deals are signed in exclusive country clubs, on golf courses, and at the dining table with endlessly flowing Maotai.

For a successful career, is it better for one to prioritize network over skills? It seems to me that the latter can be commoditized by AI, while the former can not. Rather than learning Lisp, maybe it's time to pick up golf. I'm only half joking.


> As a junior, I often wonder how many deals are signed in exclusive country clubs, on golf courses, and at the dining table with endlessly flowing Maotai.

Virtually none in our business. (Databases.) What does get deals is listening carefully to what customers actually want and putting together offerings that get it to them at a reasonable price. Incidentally, good sales people are vastly better at this than devs. There are a number of skills that go into it but being a good listener is the most important.


prioritize your soul


I am not a frontend dev but centering a div came to mind.

I just want to center the damn content. I don't much care about the intricacies of using auto-margin, flexbox, css grid, align-content, etc.
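To be fair, the modern incantation is short. A minimal sketch (assuming .parent is whatever wraps the content):

  .parent {
    display: grid;
    place-items: center;  /* centers the child both horizontally and vertically */
  }

The frustration is that you have to know which of the half-dozen historical methods this one supersedes.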


I'm afraid CSS is so broken that even AI won't help you generalize centering content. Otoh, by the same token, you are now a proficient iOS/Android developer, where it's just "center content - BOOM!".


I know this is a meme but centering a div is really not hard.

15 years ago it was just a Google search away; I'm sure AI can handle it fine.


Why do you think this is only a meme? Flow modes, centering methods and content are still at odds with each other and don't generalize. This idiotic model cannot get it right unless you're designing for a very specific case that will shatter as soon as you bump its shoulder.

Edit: I was in the AI CSS BS loop just a few days ago; not sure how you guys miss it. I start screaming f-bombs and "are you an idiot" when it cycles through "doesn't work", "ignored prereqs", and "doesn't make sense at all".


Just do everything with flexbox. https://flexboxfroggy.com is a good example of what's possible
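For centering in particular, the flexbox version is a few lines. A rough sketch, with .parent again standing in for the wrapper:

  .parent {
    display: flex;
    justify-content: center;  /* main axis: horizontal by default */
    align-items: center;      /* cross axis: vertical by default */
  }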


What if I have text nodes in the mix? And I don't know that in advance, e.g. I'm doing <div>{content}</>? What if this div is in a same-class flexbox and now its margins or --vars clash with the defaults of its parent, which it knows nothing about by the principle of isolation? Then you may suggest using wrapper boxes for all children, but how e.g. align-baseline crosses that border is now a mystery that depends on a bunch of other properties at each side.

Your reply is correct, but it's exactly that "just do this specific configuration" sort of correct, which punctures component isolation all the way through and makes these layers leak into each other, creating a non-refactorable mess.
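Concretely, a hypothetical sketch of the leak (--gap is made up for illustration):

  .box { display: flex; align-items: baseline; }
  .box > * { margin: var(--gap, 0); }

  /* A bare text node inside <div class="box">{content}</div> becomes an
     anonymous flex item that no selector can target, and a nested .box
     resolves --gap from whatever ancestor happens to define it. */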


That doesn't seem like a #2 scenario, unless you're okay with your centered divs not being centered some of the time.


looking at most websites, regardless of how much money and human energy has been spent on them:

yes I think we're okay with divs not being centered some of the time.

many millions have been spent adjusting pixels (while failing to handle loads of more common issues), but most humans just care whether they can eventually get what they want to happen if they press the button harder next time.

(I am not an LLM-optimist, but visual layout is absolutely somewhere that people aren't all that picky about edge cases, because the success rate is abysmally low already. it's like good translations: it can definitely help, and definitely be worth the money, but it is definitely not a hard requirement - as evidence I point to the vast majority of translated software.)


Humans can extract information more quickly from proper layouts. A good layout brings faster clarity. What developers often get wrong: it's not just about doing something, it's also about how simple and fast it is to parse and understand (from a visual point of view as well; of course information architecture and UX matter a lot too). Not aligning things is a slippery slope. If you can't center a div, probably all the other, more complex things in your website / app are going to be off or even broken. Thankfully AIs can center divs by now, but a proper understanding of grid systems is at best at the frontier.


It absolutely helps, but this is about whether it's truly needed or not.

I think there's overwhelming evidence that it's not truly necessary.


I could imagine a vision-enabled transformer model being useful to create a customizable “reading mode”, that adjusts page layout based on things like user prefs, monitor/window size, ad recognition, visual detail of images, information density of the text, etc.

Maybe in an alternate universe where every user-agent enabled browser had this type of thing enabled by default, most companies would skip site design all together and just publish raw ad copy, info, and images.


Are you describing coding HTML via LLM, or actually using the LLM as a rendering engine for UI?


Neither. They're describing the philosophical similarity between:

  * "Has only been that way so far because that's how computers are" and
  * "I just want to center the damn content.
     I don't much care about the intricacies of using
     auto-margin, flexbox, css grid, align-content, etc."
Centering a div is seen as difficult because of complexities that boil down to "that's just how computers are", and they find (imo rightful) frustration in that.


> I don't much care about the intricacies of using auto-margin, flexbox, css grid, align-content, etc.

You do / did care, e.g. browser support.


This sounds like a front-end dev who understands the intricacies of all of this when, again, this person is saying "I just want the content centered".


> again, this person is saying "I just want the content centered".

You can't just want. It always backfires. It's called being ignorant. There are always consequences. I just want to cross the road without caring too. Oh the cars might just hit me. Doesn't matter?

> This sounds like a front-end dev that understands the intricacies of all of this

That's the person that's supposed to do this job? Sounds bog standard. What's the problem?


At some point this is just silly.

If you're assuming the user knows nothing then all tasks are hard. Ever try putting an image in a page if you don't know HTML? It's pretty tricky.


At some point, sure; but there is always value in comprehending why someone might find an existing flow overly obtuse and/or frustrating when they "just want to do a simple thing".

To imagine otherwise reminds me of The Infamous Dropbox Comment.

Addendum: to wit, whole companies, like SquareSpace and Wix, exist because web dev is a pain and WYSIWYG editors help a lot


> Addendum: to wit, whole companies, like SquareSpace and Wix, exist because web dev is a pain and WYSIWYG editors help a lot

But these companies DO care (or at least that's the point) and don't "just want to do a simple thing".

The point of outsourcing is to give it to a professional with expertise, like seeing a doctor. Dropbox isn't "just a simple thing" either, so no, not the same.


> "Criticism seems sophisticated, and making new things often seems awkward, especially at first; and yet it's precisely those first steps that are most rare and valuable."

This is what makes Silicon Valley so amazing. It's filled with those who want to make good new things, who aren't afraid of looking awkward. This type of culture is actually quite weird. In most other places, you'd be dissuaded by conventional wisdom, or "who-do-you-think-you-are-isms".


Maybe 25 years ago. Hardly today. Today it's Big Tech, hardly different from Big Pharma, Big Tobacco, Big Oil, Big Finance, etc.


> It's filled with those who want to make good new things

It's crazy you think this is even remotely unique to SV. Broad swaths of the country (referred to as "flyover" by coastal people) are fully employed in the production of new things that are essential to the survival of the human race.

Just, for some reason, you think "new things" is just bleep bloop and not moo oink.


All new inventions tend to overturn existing power structures (i.e. disrupt the status quo). That's probably why certain cultures disincentivize innovation and spurn entrepreneurs.

But I think creative destruction is a net good, and I'd argue that micro-dosing on revolutions is essential for dynamism and social mobility.

