Hacker News

“Better” is always task-dependent. LLMs are already far better than me (and most devs I’d imagine) at rote things like getting CSS syntax right for a desired effect, or remembering the right way to invoke a popular library (e.g. fetch)

These little side quests used to eat a lot of my time and I’m happy to have a tool that can do these almost instantly.



I've found LLMs particularly bad for anything beyond basic styling since the effects can be quite hard to describe and/or don't have a universal description.

Also, there are often multiple ways to achieve a certain style, and they all work fine until you want a particular tweak, in which case only one will work, and the LLM usually gets stuck on one of the ones that doesn't work.
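A made-up illustration of what I mean (class names invented): two common ways to center a block horizontally both work, but only one extends cleanly when the tweak "also center it vertically" arrives.

```css
/* Option A: auto margins. Fine, until the tweak. */
.card {
  width: 300px;
  margin: 0 auto; /* horizontal centering only */
}

/* Option B: flex on the parent. */
.wrap {
  display: flex;
  justify-content: center; /* horizontal */
}

/* The tweak: "also center it vertically". Option B extends naturally: */
.wrap {
  align-items: center; /* vertical */
  min-height: 100vh;
}

/* Option A has no equivalent one-liner; you'd switch techniques
   entirely (absolute positioning + transform, or flex/grid after all),
   which is where an LLM committed to Option A tends to get stuck. */
```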


Multimodal LLMs to the rescue. Throw a screenshot or mockup in there and tell the LLM "there, like this". Gemini can do the same with videos.


Still a terrible result. Multimodal ≠ actually understanding the image.


It also tends to write CSS that, if you actually have opinions about what good CSS is, is clearly an abomination. But most engineers don’t really care about that.


I have found it to be good at things I am not very strong at (SQL) but terrible at the things I know well (CSS).

Telling, isn't it?


Ironically, I find it strong at things I don't know very well (CSS), but terrible at things I know well (SQL).

This is probably really just a way of saying, it's better at simple tasks rather than complex ones. I can eventually get Copilot to write SQL that's complex and accurate, but I don't find it faster or more effective than writing it myself.
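For what it's worth, a minimal, runnable sketch of the flavor of "complex but routine" SQL I mean: "latest order per customer" needs a window function (or a correlated subquery), not a plain GROUP BY. The table and data are invented for illustration.

```python
import sqlite3

# In-memory database with a toy orders table (names are illustrative).
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE orders (customer TEXT, placed_at TEXT, total REAL);
    INSERT INTO orders VALUES
        ('alice', '2024-01-05', 10.0),
        ('alice', '2024-03-09', 25.0),
        ('bob',   '2024-02-11', 7.5);
""")

# Latest order per customer: rank each customer's orders by date
# descending, then keep only rank 1.
rows = conn.execute("""
    SELECT customer, placed_at, total
    FROM (
        SELECT *,
               ROW_NUMBER() OVER (
                   PARTITION BY customer ORDER BY placed_at DESC
               ) AS rn
        FROM orders
    )
    WHERE rn = 1
    ORDER BY customer
""").fetchall()

print(rows)
```

Getting an LLM to land exactly on this shape (and not a subtly wrong GROUP BY) usually takes me as long as writing it.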


Actually, you've reinforced their point. It's only bad at things the user is actually good at because the user actually knows enough in that domain to find the flaws and issues. It appears to be good in domains the user is bad at because the user doesn't know any better. In reality, the LLM is just bad at all domains; it's simply whether a user has the skill to discern it. Of course, I don't believe it's as black and white as that but I just wanted to point it out.


Yes, that is precisely what I meant. It just occurred to me and I will see how that idea holds up.


Yeah, my goal was to reinforce their point in a humorous way.


It’s like the Gell-Mann Amnesia effect but for LLMs instead of journalism.


I kind of agree. It feels like they're generally a superior form of copying and pasting from Stack Overflow, where the machine has automated the searching, copying, pasting, and fiddling with variable names. It can be just as useful or dangerous as Google -> Copy -> Paste ever was, but faster.


Funny, I find it to be good at things I'm not very strong at (CSS) but terrible at the things I know well (SQL). :)

Actually I think it's perfectly adequate at SQL too.


> and most devs I’d imagine

What an awful imagination. Yes, there are people who don't like CSS but are forced to use it by their job, so they don't learn it properly, and that's why they think CSS is rote memorization.

But overall I agree with you that if a company is too cheap to hire a person who is actually skilled at CSS, it is still better to foist that CSS job onto LLMs than onto an unwilling human. Because that unwilling human is not going to learn CSS well and won't enjoy writing CSS.

On the other hand, if the company is willing to hire someone who's actually good, LLMs can't compare. It's basically the old argument that LLMs can only replace less-skilled developers. In this case, you admitted that you are not good at CSS and that LLMs are better than you at it. It's not task-dependent; it's skill-dependent.


Hum... I imagine LLMs are better than every developer at getting CSS keywords right, like the GP pointed out. And I expect every LLM to be slightly worse than most classical autocompletes.


Getting CSS keywords right is not the actual point of writing CSS. And you can have a linter that helps you in that regard. The endgame of writing CSS is to style an HTML page according to the specifications of a design. Which can be as detailed as a Figma file or as flimsy as a drawing on a whiteboard.


This is like saying that LLMs are better at knowing the name of that one obscure API. It's not wrong, but it's also not the hard part about CSS


Wait until they hear how good dictionaries are at spelling.


I'm one of those weirdos who really likes handwriting CSS. I frequently find ChatGPT getting my requests wrong.


... even better with a good fountain pen ...


The LLM outputs good enough CSS, but is (way) cheaper than someone who's actually good at CSS.


I think that's great if it's for something outside of your primary language. I've used it to good effect in that way myself. However, denying yourself the reflexive memory of having learned those things is a quick way to become wholly dependent upon the tool. You could easily end up with compromised solutions because the tool recommends something you don't understand well enough to know there's a better way to do something.


So here's an analogy. (Yeah, I know, proof by analogy is fraud. But it's going to illustrate the question.)

Here's a kid out hoeing rows for corn. He sees someone planting with a tractor, and decides that's the way to go. Someone tells him, "If you get a tractor, you'll never develop the muscles that would make you really great at hoeing."

Different analogy: Here's someone trying to learn to paint. They see someone painting by numbers, and it looks a lot easier. Someone tells them, "If you paint by numbers, you'll never develop the eye that you need to really become good as a painter."

Which is the analogy that applies, and what makes it the right one?

I think the difference is how much of the job the tool can take over. The tractor can take over the job of digging the row, with far more power, far more speed, and honestly far more quality. The paint by numbers can take over the job of visualizing the painting, with some loss of quality and a total loss of creativity. (In painting, the creativity is considered a vital part; in digging corn rows, not so much.)

I think that software is more like painting, rather than row-hoeing. I think that AI (currently) is in the form of speeding things up with some loss of both quality and creativity.

Can anyone steelman this?


> Here's a kid out hoeing rows for corn. He sees someone planting with a tractor, and decides that's the way to go. Someone tells him, "If you get a tractor, you'll never develop the muscles that would make you really great at hoeing."

In this example, the idea of "losing the muscles that make you great at hoeing" seems like a silly thing to worry about.

But I think there's a second-order effect here. The kid gets a job driving the tractor instead. He spends his days seated instead of on his feet. His lifestyle is more sedentary. He works just as many hours as before, and he makes about the same as he did before, so he doesn't really see much benefit from the increased productivity of the tractor.

However now he's gaining weight from being more sedentary, losing muscle from not moving his body, developing lower back problems from being seated all day, developing hearing loss from the noisy machinery. His quality of life is now lower, right?

Edit: Yes, there are also health problems from working hard moving dirt all day. You can overwork yourself, no question. It's hard on your body, being in the sun all day is bad for you.

I would argue it's still objectively a physically healthier lifestyle than driving a tractor for hours though.

Edit 2: my point is that I think after driving a tractor for a while, the kid would really struggle to go hoe by hand like he used to, if he ever needed to


> my point is that I think after driving a tractor for a while, the kid would really struggle to go hoe by hand like he used to, if he ever needed to

That's true in the short term, but let's be real, tilling soil isn't likely to become a lost art. I mean, we use big machines right now but here we are talking about using a hoe.

If you remove the context of LLMs from the discussion, it reads like you're arguing that technological progress in general is bad because people would eventually struggle to live without it. I know you probably didn't intend that, but it's worth considering.

It's also sort of the point in an optimistic sense. I don't really know what it takes on a practical level to be a subsistence farmer. That's probably a good sign, all things considered. I go to the gym 6 times a week, try to eat pretty well, I'm probably better off compared to toiling in the fields.


> If you remove the context of LLMs from the discussion, it reads like you're arguing that technological progress in general is bad because people would eventually struggle to live without it.

I'm arguing that there are always tradeoffs and we often do not fully understand the tradeoffs we are making or the consequences of those tradeoffs 10, 50, 100 years down the road

When we moved from more physical jobs to desk jobs many of us became sedentary and overweight. Now we are in an "obesity crisis". There's multiple factors to that, it's not just being in desk jobs, but being sedentary is a big factor.

What tradeoffs are we making with AI that we won't fully understand until much further along this road?

Also, what is in it for me or other working class people? We take jobs that have us driving machines, we are "more productive" but do we get paid more? Do we have more free time? Do we get any benefit from this? Maybe a fraction. Most of the benefit is reaped by employers and shareholders

Maybe it would be better if instead of hoeing for 8 hours the farmhand could drive the tractor for 2 hours, make the same money and have 6 more free hours per day?

But what really happens is that the farm buys a tractor, fires 100 of the farmhand's coworkers, then has the remaining farmhand drive the tractor for 8 hours, replacing their productivity with very little benefit to himself.

Now the other farmhands are unemployed and broke, he's still working just as much and not gaining any extra from it

The only ones who benefit are the owners.


I do think you’re missing something, though.

In a healthy competitive market (like most of the history of the US, maybe not the last 30-40 years), if all of the farms do that, the reduction in labor necessary to produce the food drives competition and brings down the price of food.

That still doesn’t directly benefit the farmhands. But if it happens gradually throughout the entire economy, it creates abundance that benefits everybody. The farmhand doesn’t benefit from their own increase in productivity, but they benefit from everyone else’s.

And those unemployed farmhands likely don’t stay unemployed - maybe farms are able to expand and grow more, now that there is more labor available. Maybe they even go into food processing. It’s not obvious at the time, though.

In tech, we currently have like 6-10 mega companies, and a bunch of little ones. I think creating an environment that allows many more medium-sized companies and allowing them to compete heavily will ease away any risk of job loss. Same applies to a bunch of fields other than tech. The US companies are far too consolidated.


> I think creating an environment that allows many more medium-sized companies and allowing them to compete heavily will ease away any risk of job loss. Same applies to a bunch of fields other than tech. The US companies are far too consolidated

How do we achieve this environment?

It's not through AI, that is still the same problem. The AI companies will be the 6-10 mega companies and anyone relying on AI will still be small fry

Every time in my lifetime that we have had a huge jump in technological progress, all we've seen is that the rich get richer and the poor get poorer and the gap gets bigger

You even call this out explicitly: "most of the history of the US, maybe not the last 30-40 years"

Do we have any realistic reason to assume the trend of the last 30-40 years will change course at this point?


> When we moved from more physical jobs to desk jobs many of us became sedentary and overweight. Now we are in an "obesity crisis". There's multiple factors to that, it's not just being in desk jobs, but being sedentary is a big factor.

Sure, although I think our lives are generally better than they were a few hundred years ago. Besides, if you care about your health you can always take steps yourself.

> The only ones who benefit are the owners

Well yeah, the entity that benefits is the farm, and whoever owns whatever portions of the farm. The point of the farm isn't to give its workers jobs. It's to produce something to sell.

As long as we're in a market where we're selling our labor, we're only given money for being productive. If technology makes us redundant, then we find new jobs. Same as it ever was.

Think about it: why should hundreds of manual farmhands stay employed while they can be replaced by a single machine? That's not an efficient economy or society. Let those people re-skill and be useful in other roles.


> If technology makes us redundant, then we find new jobs. Same as it ever was.

Except, of course, it's not the same as it ever was because you do actually run out of jobs. And it's significantly sooner than you think, because people have limits.

I can't be Einstein, you can't be Einstein. If that becomes the standard, you and I will both starve.

We've been pushing people up and up the chain of complexity, and we can do that because we got all the low-hanging fruit. It's easy to get someone to read, then to write, then to do basic math, then to do programming. It gets a bit harder though with every step, no? Not everyone who reads has the capability of doing basic math, and not everyone who can do basic math has the capability of being a programmer.

So at each step, we lose a little bit of people. Those people don't go anywhere, we just toss them aside as a society and force them into a life of poverty. You and I are detached from that, because we've been lucky to not be those people. I know some of those people, and that's just life for them.

My parents got high-paying jobs straight out of high school. Now, high school grads are destined to flip burgers. We've pushed people up, but not everyone can graduate college. Then, we have to think about what happens when we continue to push people up.

Eventually, you and I will not be able to keep up. You're smart, I'm smart, but not that smart. We will become the burger flippers or whatever futuristic equivalent. Uh... robot flippers.


What if all work is no longer necessary? Then yes, we're going to have to rethink how our society works. Fair enough.

I'm a bit confused by your read on the people who don't make it through college. The implication is that if you don't make it into a high status/white collar job, you're destined for a life of poverty. I feel like this speaks more to the insecurity of the white collar worker, and isn't actually a good reflection of reality. Most of my friends dropped out of college and did something completely different in the service industry, it's not really a "life of poverty."

> My parents got high-paying jobs straight out of high school. Now, high school grads are destined to flip burgers.

This feels like pure luck for your parents. Take a wider look at history -- it's just a regression to the mean. We used to have _less_ complex jobs. Mathematics/science hasn't always been a job. That is to say, burger-flipping or an equivalent was more common. It was not the norm that households were held together by a single man's income, etc.


I don’t think we need to get to a point where all jobs are eliminated to start seeing cracks in the system. We already have problems. We’ve left a lot of people behind, we just don’t really care.


> Uh... robot flippers.

Prompt engineers

You are spot on with your analysis. At some point there will be nothing left for people to do except at the very top level. What happens then?

I am not optimistic enough to believe that we create a utopia for everyone. We would need to solve scarcity first, at minimum.


>I think the difference is how much of the job the tool can take over.

I think it is about how utilitarian the output is. For food, no one cares how the sausage is made. For a painting, the story behind it is more important than the picture itself. All of Picasso's paintings are famous because they were painted by Picasso. A Picasso-style painting by Bill? Suddenly it isn't museum-worthy anymore.

No one cares about the story or people behind Word; they just want to edit documents. The demoscene probably has a good shot at being on the side of art.


The analogy I would use is that coding via LLM is like learning to drive in a self-driving car that has manual controls as an option that drives overly cautiously (Leaves excessively large following distances, takes corners slower, etc.) while in self-driving mode.

You can let it self-drive, but you'd probably learn nothing, and it will actually take you longer. Put an expert driver behind the wheel, and they'll drive faster and only use automation features for the boring parts.


For me the creativity in software engineering doesn't come from coding; that's an implementation detail. It comes from architecture, from thinking about "what do I want to build, how should it behave, how should it look, what or who is it for?" and driving that forward. Bolting it together in code is hoeing, for the vast majority of us. The creative endeavor sits higher up on the abstraction ladder.


You're right, however I think we've already gone through this before. Most of us (probably) couldn't tell you exactly how an optimizing compiler picks optimizations or exactly how JavaScript maps to processor instructions, etc -- we hopefully understand enough at one level of abstraction to do our jobs. Maybe LLM driving will be another level of abstraction, when it gets better at (say) architecting projects.


> Most of us (probably) couldn't tell you exactly how an optimizing compiler picks optimizations or exactly how JavaScript maps to processor instructions,

That's because other people are making those work well. It's like how you don't care about how the bread is made because you trust your baker (or the regulations). It's a chain of trust that is easily broken when LLMs are brought in.


Depends. If regulations are the cage that a baker has to work in to produce a product of agreed-upon quality, then tests and types and LSPs etc. can be that cage for an LLM.
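To sketch what I mean (a deliberately trivial contract, with invented names): generated code is only accepted if it passes a test written before generation.

```python
def check_candidate(candidate):
    """Return True iff the candidate implementation meets the contract.

    The assertions below are the "cage": agreed-upon behavior, written
    down before any code (human- or LLM-written) exists.
    """
    try:
        assert candidate([]) == 0
        assert candidate([1, 2, 3]) == 6
        assert candidate([-1, 1]) == 0
        return True
    except Exception:
        # Wrong answers and crashes are both rejected.
        return False

# A correct candidate passes...
assert check_candidate(sum) is True
# ...and a plausible-looking but wrong one is rejected.
assert check_candidate(lambda xs: len(xs)) is False
```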


Regulations are not a cage. They don't physically constrain you from doing things; they're a threshold beyond which behavior has destructive consequences for you. So you're very much incentivized not to cross them.

So tests may be the inspections, but what is the punitive action? Canceling the subscription?


Yeah, this is what I really like about AI tools though. They're way better than me at annoying minutia like getting CSS syntax right. I used to dread that kind of thing!


And you will keep dreading it for as long as you use them, since you learn nothing from solutions served on a silver plate.


The point is that I don't dread it anymore, because now there are tools that make it a lot easier the one or two times a year I have some reason to use it.


Just wait until they're not there anymore, that's when you realize what you sacrificed.


Does this apply to other services I use? Should I avoid using Google because when it's not there anymore I'll realize what I've sacrificed?


Not the same thing.

Using Google to find an answer is convenient, and I'm sure you would miss it.

But telling a machine to think for you outsources everything, once it's gone you have nothing left.



