sponnath's comments | Hacker News

Why would these things not be relevant for humans?


They're relevant, but dumping it all into one document in the project root works better for agents than for humans. Most of that information is irrelevant to someone landing on your repo, who probably just wants to add it to their dependency manifest or install the app, then read usage instructions written for humans.


Agents are capable of semantic search and of reading an entire directory devoted to human-readable docs, so I'm not sure this is a particularly good argument. Just make it clear where to find what.
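
Something like this, to sketch it (a made-up layout, not pulled from any real repo):

    README.md    - install + usage, written for humans
    AGENTS.md    - short pointer file: build/test commands, plus
                   "conventions and architecture notes live in docs/"
    docs/        - the full human-readable documentation

The agent gets a cheap map of where everything lives, and nobody landing on the repo has to wade through agent instructions.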


Because managing an AI’s context is important and you don’t want to put stuff in there that’s not relevant.

Just because they can read it and understand it doesn’t mean there are no better alternatives.


That's also not a strong argument?

Agents often have system prompts specific to their purpose. Having a single dump of agent instructions will increase noise in the context.


I don't think it's correct to generalize front-end work like this. I've found it very underwhelming for the kind of front-end stuff I do. It makes embarrassing mistakes. I've found it quite useful for a lot of the braindead code I need to write for CRUD backends though.

It's good at stuff that most competent engineers can get right while also having the sort of knowledge breadth an average engineer would lack. You really need to be a domain expert to accurately judge its output in specific areas.


Well, I wasn't intending to generalize front-end work as "easy enough for an LLM." What I meant was that, since I have no experience with it, its output looks good enough to me. Classic Gell-Mann amnesia.


I would even argue that the hard parts of being human don't need to be automated. Why are we all in such a rush to automate everything, including what makes us human?


Something big is definitely happening but it's not the intelligence explosion utopia that the AI companies are promising.

> Who cares if AGI isn’t five minutes away. That’s not the point. The point is we’ve built the closest thing to a machine that actually gets what we’re saying. That alone is insane. You type in a paragraph about your childhood trauma and it gives you back something more coherent than your therapist. You ask it to summarize a court ruling and it doesn’t need to check Wikipedia first. It remembers context. It adjusts to tone. It knows when you’re being sarcastic. You think that’s just “autocomplete”? That’s not autocomplete, that’s comprehension

My experience with LLMs has been all over the place. They're insanely good at comprehending language. As a side effect, they're also decent at comprehending complicated concepts like math or programming, since most of human knowledge is embedded in language. That doesn't mean they have a thorough understanding of those concepts. It's very easy to trip them up, and they fail in ways that aren't obvious to people who aren't experts in whatever they're outputting.

> And the logic complaints, yeah, it screws up sometimes. So do you. So does your GPS, your doctor, your brain when you’re tired. You want flawless logic? Go build a calculator and stay out of adult conversations. This thing is learning from trillions of words and still does better than half the blowhards on HN. It doesn’t need to be perfect. It needs to be useful, and it already is.

I feel like this is handwaving away the shortcomings a bit too much. It does not screw up in the same ways humans do. Not even close. Besides, I think computers should rightfully be held to a higher standard. We already have programs that automate tasks human brains find challenging and tedious. Surely the next frontier is something with the speed and accuracy of a computer while also having the adaptability of human reasoning.

I don't feel threatened by LLMs. I definitely feel threatened by some of the absurd amount of money being put into them though. I think most of us here will be feeling some pain if a correction happens.


I find it kind of funny that in order to talk to AI people, you need to preface your paragraph with "I find current AI amazing, but...". It's like, you guessed it, pre-prompting them for better acceptance.


Oh come on, it's not some secret code. People say “AI is amazing, but...” because it is amazing... and also flawed. That’s just called having a balanced take, not pre-prompting for approval. What do you want them to do, scream “THIS SUCKS” and ignore reality? It’s not a trick, it’s just how grown-ups talk when they’re not trying to win internet points.


You say LLMs are “insanely good” at comprehending language, but then immediately pull back like it’s some kind of fluke. “Well yeah, it looks like it understands, but it doesn’t really understand.” What does that even mean? Do you think your average person walking around fully understands everything they say? Half of the people you know are just repeating crap they heard from someone else. You ask them to explain it and they fold like a cheap tent. But we still count them as sentient.

Then you say it’s easy to trip them up. Of course it is. You know what else is easy to trip up? People. Ask someone to do long division without a calculator. Ask a junior dev to write a recursive function that doesn’t melt the stack. Mistakes aren’t proof of stupidity. They’re proof of limits. And everything has limits. LLMs don’t need to be flawless. They need to be better than the tool they’re replacing. And in a lot of cases, they already are.
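
If you've never watched one melt, here's the textbook version (toy Python, nobody's real code):

    def total(n):
        # One stack frame per call: big n blows past Python's
        # default recursion limit of around 1000.
        return 0 if n == 0 else n + total(n - 1)

    def total_iter(n):
        # Same answer, constant stack space.
        acc = 0
        while n > 0:
            acc, n = acc + n, n - 1
        return acc

    total_iter(10**6)   # fine: 500000500000
    # total(10**6)      # RecursionError

People write the first version all the time. Limits, not stupidity.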

Now this part: “computers should be held to a higher standard.” Why? Says who? If your standard is perfection, then nothing makes the cut. Not the car, not your phone, not your microwave. We use tools because they’re better than doing it by hand, not because they’re infallible gods of logic. You want perfection? Go yell at the compiler, not the language model.

And then, this one really gets me, you say “surely the next frontier is a computer with the accuracy of a machine and the reasoning of a human.” No kidding. That’s the whole point. That’s literally the road we’re on. But instead of acknowledging that we’re halfway there, you’re throwing a tantrum because we didn’t teleport straight to the finish line. It’s like yelling at the Wright brothers because their plane couldn’t fly to Paris.

As for the money... of course there's a flood of it. That’s how innovation happens. Capital flows to power. If you’re worried about a correction, fine. But don’t confuse financial hype with technical stagnation. The tools are getting better. Fast. Whether the market overheats is a separate issue.

You say you're not threatened by LLMs. That’s cute. You’re writing paragraphs trying to prove why they’re not that smart while admitting they’re already better at language than most people. If you’re not threatened, you’re sure spending a lot of energy trying to make sure nobody else is impressed either.

Look, you don’t have to worship the thing. But pretending it's just a fancy parrot with a glitchy brain is getting old. It’s smart. It’s flawed. It’s changing everything. Deal with it.


You sure spend a lot of energy and time living out this psychodrama.

If it's so self evidently revolutionary, why do you feel the need to argue about it?


I’m trying to fix humanity by smoothing out the sections that are deficient in awareness and IQ.

We need humanity at its best to prepare for the upcoming onslaught when something better tries to replace us. I do it for mankind.


This is why I still come to Hacker News.

Amazing.


you come to feed and encourage troll responses to troll questions?


The human brain demonstrates that human intelligence is possible, but it does not guarantee that artificial intelligence with the same characteristics can be created.


True, but this only works well if the natural-language "processor" is reliable enough to properly translate business requirements into code. LLMs aren't there yet.
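
Toy illustration of the gap (hypothetical requirement, Python purely for the example): a spec that says "round totals to the nearest dollar" already has two defensible readings, and the "processor" has to pick one.

    from decimal import Decimal, ROUND_HALF_UP

    # Reading 1: Python's built-in round() does banker's rounding,
    # so ties go to the nearest even number.
    round(2.5)  # -> 2

    # Reading 2: what a finance stakeholder usually means, half up.
    Decimal("2.5").quantize(Decimal("1"), rounding=ROUND_HALF_UP)  # -> 3

Both run fine; only one matches what the business meant. Parsing the words isn't the same as translating them reliably.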


There's some minor content overflow on the x-axis. It wouldn't hurt to tweak the max width of the page content; it feels a bit too wide right now. Also, I'm not sure I'm a fan of the animation you have going on with the quotes. Feels tacky.


I think the only places where the entry-level coder is being killed are corps that never cared about the junior-to-senior pipeline. Some of them love off-shoring too, so I'm not sure much has changed.


“Wait… junior engineers don’t have short-term positive ROI?”

“Never did.”


Reddit and YouTube are such huge social media platforms that it really depends on which bubble (read: which subreddits/YT channels) you're looking at. There's the "AGI is here" crowd over at r/singularity and the "AI is useless" crowd at r/programming. I'm simplifying the arguments on both sides, but you get my point.


Even looking at r/programming, I felt they were less wary of AI than people here; same when comparing the comments here vs. those on YouTube for this video.


Some places are more "echo-chambery" than others; Reddit is probably an extreme example. At least the bigger subreddits are; smaller ones can be a bit more diverse and enjoyable.


Can you actually demonstrate this workflow producing good software?

