Hacker News | Arisaka1's comments

>Interesting how many people in a hacker forum

I learned to accept the fact that HN reached a critical mass point that made it fill up with people who market themselves as "product-oriented engineers", which is a way to say "I only build things when they lead to products".

People committed to the hacker ethos, which consists of, among many other things, resistance to established tools, embracing knowledge and code sharing, and exploration for its own sake, are the minority.

The fact that there are many commenters who claim that they finally built something they weren't able to build before, and that it's all thanks to LLMs, is evidence that we already sacrificed the pursuit of personal competence, softly reframing it as "LLM competence", without caring about the implications.

Because obviously, every kid that dreamt of becoming a software engineer thought about orchestrating multiple agentic models that talk to each other and was excited about reviewing their output over and over again while editing markdown files.

The hackers are dead. Long live the hackers.


> I learned to accept the fact that HN reached a critical mass point that made it fill up with people who market themselves as "product-oriented engineers", which is a way to say "I only build things when they lead to products".

This is a mentality I am working extremely hard to get rid of, and I blame HN for indoctrinating me this way.

That said, these days I don't view this place as filled with "product-oriented engineers"; it's become like any other internet forum where naysayers and criticism always rise to the top. You could solve world hunger and the top comment would be someone going "well, actually..."

It's not HN that killed the hackers, it's the Internet snark that put the final nail in the coffin.


I consider myself a hacker, as I spend many evenings and weekends writing code for no commercial purpose but to create cool stuff, and sometimes even useful stuff, all in the open. I have no idea why I should be against using an LLM. Just like I use an IDE and wouldn't want to write code without one, sometimes an LLM can quickly write some drudgery that, if I had to write it completely myself, would likely stop me from continuing. It's just another tool in the toolbox; stop regarding it as some sort of evil that replaces us! It doesn't and probably never will. We will always have more important things to do that still require a human, even if that doesn't include a whole lot of coding.

> I have no idea why I should be against using an LLM

It highly depends on your own perspective and goals, but one of the arguments I agree with is that habitually using it will effectively prevent you from building any skill or insight into the code you've produced. That in turn leads to unintended consequences as implementation details become opaque and layers of abstraction build up. It's like hyper-accelerating tech debt for an immediate result. If it's a simple project with no security requirements, there's little reason not to use the tool.


>It's the same with genetics. Getting lucky with looks is fine but working for the same goal (eg surgery) is somehow bad and people often hide it.

We also tend to hide how hard we work to make our success look natural, but we reveal how hard we work at the extremes of success. For example, if I work hard and score 17 out of 20 on a test, people will say "I barely read last evening, phew", but if you're consistently scoring 19-20/20, people may even approach you to learn your studying methods and ask for tips, because they assume there are important takeaways they can adopt.

It's my pet peeve with how society recognizes that someone is talented, which is blatantly flawed because all you can do is see what they're capable of doing. Someone may be talented yet unable (or unwilling?) to tap into their talent, but since we recognize talent by its output, you can't really detect talent unless it's at the extremes of success, like the 8-year-old who can solve mathematics a grade or more above their current one.

I see talent as a genetic predisposition that can be appropriately cultivated to attain success. It's not much different from my height: I didn't choose it, yet I can guess that there are men out there who hate the fact that I have their desirable height even though I never hit the gym, cultivate my social skills, or take advantage of the fact that I look younger than I am. I am willing to bet everything that I've met at least one person who thought all of these things the first moment they looked at me.

But at least genetic predispositions like height are visible to the naked eye, and no one can dispute the differences. When it comes to differences in the brain, that's where we ignorantly proclaim that, because things are obscure, they can violate the very facts of observable nature.

In short, not only do I fully agree with you, but I also agree about the obvious double standards in society around it. If I take ADHD medication and that helps with my focus and improves my performance in school or work, then I deserve that success as much as someone who naturally had no problems with ADHD. Why this is different for looks (like hair transplants, etc.) is beyond me.


What I find amusing about this argument is that no one ever brought up power savings when, e.g., using "let me google that for you" instead of giving someone the answer to their question, because we saw the utility of teaching others how to Google. But apparently we can't see the utility of measuring the oversold competence of current AI models, given a sufficiently large sample size.


Clippy only helped with very specific products, and was compensating for really odd UI/UX design decisions.

LLMs are products that want to collect data and get trained on a huge amount of input, with upvotes and downvotes to calibrate their quality of output, in the hope that they will eventually become good enough to replace the very people who trained them.

The best part is, we're conditioned to treat those products as if they are forces of nature. An inevitability that, like a tornado, is approaching us. As if they're not the byproduct of humans.

If we consider that, then we the users get the short end of the stick, and we only keep moving forward with it because we've been sold on the idea that whatever lies at the peak is a net positive for everyone.

That, or we just don't care about the end result. Both are bad in their own way.


One can assume that, given the goal is money (always has been), the best case scenario for money is to make it so the problem also works as the most effective treatment. Money gets printed by both sides and the company is happy.


>People exaggerate the problems of using a stable distro.

Stability isn't a problem, it's a feature. Companies trust Debian, Ubuntu LTS, etc. for their servers EXACTLY because the packages are old.

This isn't the case with desktop computers, where the latest optimizations are delivered weekly if not monthly, and may improve performance across the board.


Every time I read an LLM's response stating something like "I'm sorry for X" or "I'm happy for Y", it reminds me of the demons in Frieren, who lack any sense of emotion but emulate it in order to get humans to respond in a specific way. It's all a ploy to make people feel like they're talking to a person who doesn't exist.

And yeah, I'm aware enough of what an LLM is and I can shrug it off, but how many laypeople hear "AI", read almost human-like replies, and subconsciously interpret it as talking to a person?


Every time I read about new .NET version improvements, I always remember my attempt to get a job using this stack in my local job market (Greece), where .NET Framework is super prevalent, mostly at old-school companies that don't even give you a fair technical chance if you lack a degree, and the devs are considered to be a cost center.

I really, REALLY wish I was in another timeline where I could say in an interview "yes, I use Linux on my desktop and Rider for my IDE" without being seen as a traveler from outer space.

I enjoy working with modern C# way more than node.js but... that's it.


> don't even give you a fair technical chance if you lack a degree, and the devs are considered to be a cost center.

I've never considered how lucky I am to live in the U.S. and to work at a company that absolutely sees the dev team as a huge asset rather than another cost. The amount of time, money, and stress we've saved by not allowing bad code to enter the codebase... I wouldn't have it any other way.

Also, I've had such great success hiring people without degrees. Truly some of our best contributors came from entirely different career paths. Same applies for some designers I work with.


A bit off-topic, but hiring exceptional .NET developers is like searching for a needle in a haystack. Way more people have a ton of experience with JS and marginal experience with .NET, just writing very basic API endpoints, yet claiming serious experience.

If you came to me for an interview, your story would have been a breath of fresh air. So maybe try to mention it anyway, someone will be interested.


I've managed big .Net teams. 99% of .Net devs are very, very average: just churning out lines of code with little care for quality, performance, readability, etc. The best .Net dev I ever hired didn't know a single thing about it; I brought him in at the most junior level to tinker with some HTML, and within two years he had massively outclassed me.


> I really, REALLY wish I was in another timeline where I could say in an interview "yes, I use Linux on my desktop and Rider for my IDE" without being seen as a traveler from outer space.

Could you please elaborate? Are you referring to most .NET shops not straying from Windows-land?


It's not about what the company uses, but how informed the technical people responsible for hiring candidates are around the ecosystem they claim they work with.

Example:

Expected: "Oh, you're on Linux? I heard about Rider. We use Windows and Visual Studio here for parity. You're okay with that, right?" (me: Obviously, tools are tools)

Actual: "Does .NET run on Linux? What is Rider?"

I mean, .NET has been running on Linux since forever now (11 years according to https://news.ycombinator.com/item?id=9459513, let's say about 9 for stability because I feel generous). How do they not know about it?


There are still a lot of folks who consider themselves .Net experts yet don't know how to program with async/await, so knowing about a niche IDE (which I also exclusively use) is asking a lot of those people.
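For anyone wondering what idiom is being referred to: a minimal sketch of async/await, shown here in TypeScript rather than C# (the shape carries over; C# uses Task&lt;T&gt; where JS uses Promise&lt;T&gt;). The fetchUser helper is a made-up stand-in for any I/O-bound call.

```typescript
// Simulated I/O-bound call (hypothetical helper): resolves after a short delay.
function fetchUser(id: number): Promise<string> {
  return new Promise(resolve => setTimeout(() => resolve(`user-${id}`), 10));
}

// `await` suspends this function until the promise settles, without
// blocking the thread — the part that trips people up, e.g. blocking
// synchronously on the result instead (.Result / .Wait() in C#).
async function greet(id: number): Promise<string> {
  const name = await fetchUser(id);
  return `hello, ${name}`;
}

greet(1).then(console.log); // prints "hello, user-1"
```

The same two keywords exist in C# with near-identical semantics, which is why not knowing them is a red flag for a claimed expert.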


Rider might be a niche IDE. But ReSharper with Visual Studio was a mainstay for me from 2008-2020 when I was doing a lot of .Net


Yeah I usually introduce Rider to .Net developers by saying "it's by the folks who do ReSharper".


You should try to get a job in Azure, you’d feel right at home!

.net462 baby!

More like 4.6.2


Somehow, .NET jobs seem be tied to waterfall processes ("but we are still agile, because we release two times a year"), requirements in OneNote, and a 5 kg Windows laptop.


What an absolute load of bullshit.


When I was in high school, I would see the algebra teacher work through expressions and go "ohhh, that makes sense". But when I got back home to work with the homework, I couldn't make the pieces fit.

Isn't that the same? Just because something someone else wrote makes you go "ohh, I understand it conceptually" doesn't mean you can apply that concept yourself in a few days or weeks.

So when the person you responded to says:

>almost overnight *my abilities* and throughput were profoundly increased

I'd argue the throughput was, but his abilities really weren't, because without the tool in question you're just as good as you were before it. To truly claim that his abilities were profoundly increased, he has to be able to internalize the pattern, recognize it, and successfully reproduce it across variable contexts.

Another example would be claiming that my painting abilities and throughput were profoundly increased because I used to draw stick figures and now I can draw Yu-Gi-Oh! cards by using the tool. My throughput really was increased, but my abilities as a painter really weren't.


>and if you’re training an agent for this specific task anyway, you’re effectively locking yourself to that specific LLM in perpetuity rather than a replaceable or promotable worker.

That's ONE of the long games currently being played, and arguably their fallback strategy: the equivalent of vendor lock-in, but for LLM providers.


From my IT POV, that’s what this is all about. It’s why none of these major players produce locally-executable LLMs (Mistral, Llama, and DeepSeek being notable exceptions), it’s why their interfaces are predominantly chat-based (to reduce personal skills growth and increase dependency on the chatbot), it’s why they keep churning out new services like Skills and Agents and “Research”, etc.

If any of these outfits truly cared about making AI accessible and beneficial to everyone, then all of them would be busting hump to distill models better to run on a wider variety of hardware, create specialized niches that collaborate with rather than seek to replace humans, and promote sovereignty over the AI models rather than perpetual licensing and dependency forever.

No, not one of these companies actually gives a shit about improving humanity. They're all following the YC playbook: try everything, rent but never own, lock in customers, and hope you get that one lucrative bite that allows for an exit strategy of some sort, all while promoting the hell out of it and yourself as the panacea to a problem.


"It’s why none of these major players produce locally-executable LLMs (Mistral, Llama, and DeepSeek being notable exceptions)"

OpenAI have gpt-oss-20b and 120b. Google have the Gemma 3 models. At this point the only significant AI lab that doesn't provide a locally executable model is Anthropic!


Fair point, I’d forgotten those recent-ish releases from OpenAI and Google both - but my larger point still stands that the entire industry is maximizing potential vectors for lock-in and profit while spewing lies about “benefitting humanity” in public.

None of the present AI industry is operating in an ethical or responsible way, full stop. They know it, they admit to it when pressed, and nobody seems to give a shit if it means they can collapse the job market and make money for themselves. It’s “fuck you got mine” taken to a technological extreme.

