Hacker News: Best Comments
Most-upvoted comments of the last 48 hours. You can change the number of hours like this: bestcomments?h=24.

I'm one of the Tailscale engineers who initially built node state encryption (@awly on GitHub), and I made the call to turn it off by default in 1.92.5.

Another comment in this thread guessed right: this feature is too support-intensive. Our original thinking was that a TPM being reset or replaced is always a sign of tampering and should result in the client refusing to start or connect. But it turns out there are many situations where TPMs are unreliable for non-malicious reasons. Some examples:

* https://github.com/tailscale/tailscale/issues/17654
* https://github.com/tailscale/tailscale/issues/18288
* https://github.com/tailscale/tailscale/issues/18302
* plus a number of support tickets

TPMs are a great tool for organizations that have good control of their devices. But the highly heterogeneous fleet of devices that Tailscale users run is very difficult to support out of the box. So for now we leave the feature for security-conscious users and admins to enable, while avoiding unexpected breakage for the broader user base.

We should've provided more of this context in the changelog, apologies!


> But the reality is that 75% of the people on our engineering team lost their jobs here yesterday because of the brutal impact AI has had on our business.

Adam is simply trying to navigate this new reality, and he's being honest, so there's no need to criticize him.


Of note: the US's per capita consumption of meat has increased by more than 100 pounds over the last century[1]. We now consume an immense amount of meat per person in this country. That increase is disproportionately in poultry, but we also consume more beef[2].

A recommendation that the average American eat more meat would have to explain, as a baseline, why the already rising trend in meat consumption isn't yielding positive outcomes. There are potential explanations (you could argue increased processing offsets the purported benefits, for example), but those are left unstated by the website.

[1]: https://www.agweb.com/opinion/drivers-u-s-capita-meat-consum...

[2]: https://ers.usda.gov/data-products/chart-gallery/chart-detai...


What bothers me about posts like this is: mid-level engineers are not tasked with atomic, greenfield projects. If all an engineer did all day was build apps from scratch, with no expectation that others may come along and extend, build on top of, or depend on them, then sure, Opus 4.5 could replace them. The hard thing about engineering is not "building a thing that works"; it's building it the right way, in an easily understood way, in a way that's easily extensible.

No doubt I could give Opus 4.5 "build me a XYZ app" and it would do well. But day to day, when I ask it "build me this feature", it uses strange abstractions and often requires several attempts on my part to get it done the way I consider "right". A non-technical person might read that and go "if it works, it works", but any reasonable engineer knows that's not enough.


Tyson Foods and other meatpacking companies lobbied and funded RFK...

Here are industry reports:

https://www.nationalbeefwire.com/doctors-group-applauds-comm...

https://www.wattagnet.com/business-markets/policy-legislatio...

And straight-up lobbying groups:

https://www.nationalchickencouncil.org/new-dietary-guideline...

https://www.meatinstitute.org/press/recommend-prioritizing-p...

Lobbying groups, putting out press releases, claiming victory...

Here are some things you won't find in any of the documents, including the PDFs at the bottom: community gardens, local food, farmers markets, grass-fed, free-range... Because agribusiness doesn't make money with those.

Just because you might like the results doesn't mean they aren't corrupt as hell.


The article doesn't answer the question. The content can be summarised as "The Gmail app is 700 MB!"

Very sad to hear. I bought Tailwind UI years ago, and although it was a lot more expensive than I wanted, I've appreciated the care and precision, and I still highly recommend buying it (it's now called Tailwind Plus), maybe even especially now.

Mad props to Adam for his honesty and transparency. Adam, if you're reading, just know that the voices criticizing you are not the only voices out there. Thanks for all you've done to improve web development; I sincerely hope you can figure out a way to navigate the AI world. All the best wishes.

Btw the Tailwind newsletter/email that goes out is genuinely useful as well, so I recommend signing up for that if you use Tailwind CSS at all.


The key word here is "Wall Street". And this statement is playing off a popular misconception around corporate investors buying up American houses.

There has been a bit of a panic around "investors buying up all the property!!!", with people often citing BlackRock and Blackstone as the main culprits. But most of the "investors" buying up property are individuals purchasing investment properties.

Here's an article on the topic from 2023[0]. It's a bit old, but my understanding is that large institutional investment in residential real estate was already starting to cool down.

BlackRock isn't buying up all the housing; your neighbors are.

I suspect this statement, even if it becomes an actual ban, is largely about gaining wider popular support by addressing a largely imaginary concern people have.

0. https://www.housingwire.com/articles/no-wall-street-investor...


For all the lunacy of RFK, this somehow is actually a really good set of guidelines? Certainly better than the previous version. I didn't expect that, to be honest.

This is interesting to hear, but I don't understand how this workflow actually works.

I don't need 10 parallel agents making 50-100 PRs a week, I need 1 agent that successfully solves the most important problem.

I don't understand how you can generate requirements quickly enough to keep 10 parallel agents chewing away at meaningful work. I don't understand how you can have any meaningful supervising role over 10 things at once, given the limits of human working memory.

It's like someone is claiming they unlocked ultimate productivity by washing dishes, in parallel with doing laundry, and cleaning their house.

Likely I am missing something. This is just my gut reaction as someone who has definitely not mastered using agents. Would love to hear from anyone that has a similar workflow where there is high parallelism.


It makes sense that you wouldn't hire in such an uncertain environment. We have a President using emergency powers to effect sweeping, unpredictable, consequential changes to the economy that can dramatically alter unit economics overnight and completely tank a previously viable business. Within this calendar year, the President's ability to do this may be upended by pending court cases, an election, or both. Following those potential changes, the breach of trust created by the previous chaos may mean that trade never returns to normal. I don't envy anyone trying to make long-term business decisions, like hiring, in such an environment.

Most software engineers are seriously sleeping on how good LLM agents are right now, especially something like Claude Code.

Once you’ve got Claude Code set up, you can point it at your codebase, have it learn your conventions, pull in best practices, and refine everything until it’s basically operating like a super-powered teammate. The real unlock is building a solid set of reusable “skills” plus a few agents for the stuff you do all the time.

For example, we have a custom UI library, and Claude Code has a skill that explains exactly how to use it. Same for how we write Storybooks, how we structure APIs, and basically how we want everything done in our repo. So when it generates code, it already matches our patterns and standards out of the box.

We also had Claude Code create a bunch of ESLint automation, including custom ESLint rules and lint checks that catch and auto-handle a lot of stuff before it even hits review.
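For a flavor of what those rules look like, here's a minimal sketch of one. The rule name, component name, and message are invented for illustration (not our actual rule), and it assumes a JSX-aware parser:

    // Hypothetical custom ESLint rule: nudge people toward the shared
    // design-system <Button> instead of a raw <button>.
    export const preferDsButton = {
      meta: {
        type: "suggestion",
        docs: { description: "Prefer the design-system <Button> over a raw <button>" },
        messages: {
          preferButton: "Use the shared <Button> component instead of a raw <button>.",
        },
        schema: [],
      },
      create(context: any) {
        return {
          // Visitor key supplied by JSX-aware parsers (e.g. typescript-eslint with JSX enabled)
          JSXOpeningElement(node: any) {
            if (node.name?.type === "JSXIdentifier" && node.name.name === "button") {
              context.report({ node, messageId: "preferButton" });
            }
          },
        };
      },
    };

Rules like this get registered through a local ESLint plugin in the repo's config; once one exists as a template, having the agent stamp out more of them is cheap.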

Then we take it further: we have a deep code-review agent that Claude Code runs after changes are made. And when a PR goes up, we have another Claude Code agent that does a full PR review, following a detailed markdown checklist we've written for it.

On top of that, we’ve got like five other Claude Code GitHub workflow agents that run on a schedule. One of them reads all commits from the last month and makes sure docs are still aligned. Another checks for gaps in end-to-end coverage. Stuff like that. A ton of maintenance and quality work is just… automated. It runs ridiculously smoothly.

We even use Claude Code for ticket triage. It reads the ticket, digs into the codebase, and leaves a comment with what it thinks should be done. So when an engineer picks it up, they’re basically starting halfway through already.

There is so much low-hanging fruit here that it honestly blows my mind people aren’t all over it. 2026 is going to be a wake-up call.

(Used voice-to-text then had Claude reword it; I'm lazy and not gonna hand-write it all for y'all, sorry!)

Edit: made an example repo for ya

https://github.com/ChrisWiles/claude-code-showcase


It will make very little difference in the end.

Australia's land tax system makes it effectively impossible for large corporations to own large chunks of residential property, but our real estate is amongst the world's most expensive and landlords are still awful - it's just that the landlords are hundreds of thousands of dentists and, yes, software engineers rather than corporate entities.

If you want housing to be cheaper and renters to be better treated, increase supply. Everything else is window-dressing.


I just uninstalled a game with heavy ad usage from my phone this morning. It was interesting to note the different ad display strategies. From least to most annoying:

- display a static ad, have the "x" to close appear soon (3-10 seconds)

- display an animated ad, have the "x" to close appear soon (3-10 seconds)

- display a static ad, have the "x" to close appear after 20-30 seconds

- display an animated ad, have the "x" to close appear after 20-30 seconds

- display several ads in succession, each short, but it automatically proceeds to the next; the net time before the "x" to close appears is 20-30 seconds

- display several ads in succession, each lasts for 3-10 seconds but you have to click on an "x" to close each one before the next one appears

I live in the USA. The well-established consumer product brands (Clorox, McDonald's, etc.) almost all had short ads that were done in 3-5 seconds. The longest ads were for obscure games or websites, or for Temu, and they appeared over and over again, making me hate them with a flaming passion. The several-ads-in-succession ones were usually for British newspaper websites (WHY???? I don't live there) or celebrity-interest websites (I have no interest in these).

It seems like the monkey's-paw curse for this kind of legislation is to show several ads in a row, each allowing you to skip them after 5 seconds.


My favorite most-annoying ad tactic is the progress bar that deceptively slows down. It starts off fast, making it seem like it's going to be, say, a ten-second ad, so you decide to suffer through it… but it progressively slows, so around the 20-second mark you notice you're only 2/3 of the way through the progress bar, and therefore probably less than halfway done. Murderous rage.
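
For the curious, the mechanics are trivial: the bar just maps real elapsed time through an ease-out curve, so it races ahead early and crawls near the end. A purely illustrative sketch:

    // Illustrative only: map the real elapsed fraction t in [0, 1] to a
    // displayed fraction that runs ahead early and crawls near the end.
    function displayedProgress(t: number): number {
      return 1 - Math.pow(1 - t, 3); // ease-out cubic
    }

    console.log(displayedProgress(0.2)); // ~0.49 -- a fifth of the way in, the bar shows half done
    console.log(displayedProgress(0.5)); // ~0.88 -- halfway in, it claims you're almost finished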

No project or programmer should feel they have to justify their choice not to use Rust (or Zig), which seem to be strangely and disproportionately pushed on Hacker News and certain other social media platforms. The same goes for the pressure, though a bit less in recent years, to use OOP.

If they are getting good results with C and without OOP, and people like the product, then those outside the project shouldn't really have any say in it. It's their project.


OP's classifiers make two assumptions that I'd bet strongly influence the result:

1. Binning skepticism with negativity.

2. Not allowing for a "neutral" category.
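
To make those two assumptions concrete, here's a minimal sketch of the difference (the label names are mine, not OP's):

    // The scheme the analysis appears to use: two bins, so a skeptical but
    // constructive comment has nowhere to land except "negative".
    type BinaryLabel = "positive" | "negative";

    // Separating tone from stance (and allowing "neutral") would change the counts:
    type Tone = "positive" | "neutral" | "negative";
    type Stance = "supportive" | "skeptical" | "critical";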

The comment I'm writing right now is critical, but is it "negative?" I certainly don't mean it that way.

It's cool that OP made this thing. The data is nicely presented, and the conclusion is articulated cleanly, and that's precisely why I'm able to build a criticism of it!

And I'm now realizing that I don't normally feel the need to disclaim my criticism by complimenting the OP's quality work. Maybe I should do that more. Or, maybe my engagement with the material implies that I found it engaging. Hmm.


I've often wondered whether the world would be better without ads. The incentive to create services (especially in social media) that strive to addict their users feels toxic to society. Often, it feels uncertain whether these services are providing actual value, and I suspect that whether a user would pay for a service in lieu of watching ads is incidentally a good barometer for whether real value is present.

Don't get me wrong, I'm well aware this is impractical. But it's fun to think about sometimes.


My uncle had an issue with his balance and slurred speech. Doctors claimed dementia and sent him home. It kept getting worse and worse. Then one day I entered the symptoms into ChatGPT (or was it Gemini?) and asked it for the top 3 hypotheses. The first one was related to dementia. The second was something else (I forget the long name). I took all 3 to his primary care doc, who had kept ignoring the problem, and asked her to try the other 2 hypotheses. She hesitantly agreed to explore the second one and referred him to a specialist in that area. And guess what? It was the second one! They did some surgery and now he's fit as a fiddle.

> It's not that Dell doesn't care about AI or AI PCs anymore, it's just that over the past year or so it's come to realise that the consumer doesn't.

I wish every consumer product leader would figure this out.


You’re exactly right: This one incident did not shape the entire body of scientific research.

There is a common trick used in contrarian argumentation where a single flaw is used to “debunk” an entire side of the debate. The next step, often implied rather than explicit, is to push the reader into assuming that the opposite position must therefore be the correct one. They don’t want you to apply the same level of rigor and introspection to the opposite side, though.

In the sugar versus saturated fat debate, this incident is used as the lure to get people to blame sugar as the root cause. There is a push to have saturated fat viewed as not only neutral, but healthy and good for you. Yet if you apply the same standards of rigor and inspection of the evidence, neither excess sugar nor excess saturated fat is good for you.

There is another fallacy in play where people pushing these debates want you to think that there is only one single cause of CVD or health issues: Either sugar, carbs, fat, or something else. The game they play is to point the finger at one thing and imply that it gets the other thing off the hook. Don’t fall for this game.


Speaking from personal experience, this is consistent with multiple doctors over the years recommending high-protein, low-carb diets. (Clarification: low does not mean no carbs.)

I don't understand people freaking out over this - outside of a purely political reflex - hell hath no fury like taking away nerds' Mountain Dew and Flamin' Hot Cheetos.

Nor do I understand the negative reactions to new restrictions on SNAP - candy and sugary drinks are no longer eligible.


Nice! The author touches on the area properties, and here's the most practical life hack I've derived from the standard, which I personally use. It relies on the relationship between size and mass.

Because A0 is defined as having an area of exactly 1 square meter, the paper density (GSM, or grams per square meter) maps directly to the weight of a sheet.

>A0 = 1 square meter.

>Standard office paper = 80 gsm

>Therefore, one sheet of A0 = 80 grams.

>Since A4 is 1/16th of an A0, a single sheet of standard A4 paper weighs 5 grams.

I rarely need to use a scale for postage. If I have a standard envelope (~5g) and 3 sheets of paper (15g), I know I'm at 20g total. It turns physical shipping logistics into simple integer arithmetic. The elegance of the metric system is that it makes the properties of materials discoverable through their definitions.
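
As code, it's the same arithmetic, assuming the idealized A-series definition where each step down halves the area (so an A(n) sheet is 2^-n square meters):

    // Weight of one A-series sheet: A0 is 1 m² and each size halves the area,
    // so an A(n) sheet at a given grammage weighs gsm / 2^n grams.
    function sheetWeightGrams(aSize: number, gsm = 80): number {
      return gsm / Math.pow(2, aSize);
    }

    // Three A4 sheets plus a ~5 g envelope:
    console.log(3 * sheetWeightGrams(4) + 5); // 20 grams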


Back when Reddit allowed API access, I used a reader (rif) that allowed blocking subreddits. I did an experiment where I would browse /r/all and block any subreddit that had toxic, gruesome, NSFW, or other content playing on negative emotions (like a pseudo feel-good post based on an otherwise negative phenomenon). After a few years and hundreds of blocked subreddits, my /r/all was very wholesome, but contained only animal- or niche-hobby-related subreddits. It was quite eye-opening how negative Reddit is, and it also revealed how boring it is without that kind of algorithmic reaction-seeking content.

In other words, if 35% of HN content is positive (or neutral?), then compared to Reddit and most mainstream social media, HN is actually very positive!

Edit: I found the list of blocked subreddits if anyone is curious to see:

https://hlnet.neocities.org/RIF_filters_categorized.txt

Note that it also includes stuff I wasn't interested in at the time, like anime, and only has subreddits up until I quit, around the API ban.


It's not that simple - the problem is that those institutions are market makers. They are a tiny portion of the market, but a huge driving force in setting and manipulating prices, because their properties get leveraged, instrumentalized, and securitized, with derivative products, speculation, and all sorts of incentives that you don't normally want operating in the arena of housing.

The things that they do have massively outsized downstream impact contrasted against their relatively tiny overall participation in the market, and they can afford to behave in ways that manipulate the behavior of the majority.

If you can decouple them from the housing markets, you also decouple the interests of the donor class, and you allow for policy that doesn't maximize the cost of real estate over the interests of the majority of the population.


Mr. Beast on YouTube is guilty of that. Matt Parker, of Stand-up Maths fame, did an in-depth look at how it works. Whoever came up with that type of progress bar must hate people in general.

https://www.youtube.com/watch?v=uc0OU1yJD-c


When this news first came out it was mind-blowing, but at the same time I don't entirely get it.

So the money quote seems to be:

> The literature review heavily criticized studies linking sucrose to heart disease, while ignoring limitations of studies investigating dietary fats.

They paid 2 people a total of $50,000 (edit: in 2016 dollars).

That doesn't seem like enough to entirely shape worldwide discourse around nutrition and sugar. And the research was out there! Does everybody only read this single Harvard literature review? Does nobody read journals, or other meta studies, or anything? Did the researchers from other institutions whose research was criticized not make any fuss?

I guess the thing that I most don't get is it's now been 10 years since then, and I haven't seen any news about the link between sugar and CVD.

> There is now a considerable body of evidence linking added sugars to hypertension and cardiovascular disease

Okay, where is it? What are the conclusions? Is sugar actually contributing more than fat to CVD in most patients? Edit: Or is the truth that fat really is the most significant factor, and sugar plays some role but strictly less?


The paid products Adam mentions are the pre-made components and templates, right? It seems like the bigger issue isn't reduced traffic but just that AI largely eliminates the need for such things.

While I understand that this has been difficult for him and his company... hasn't it been obvious that this would be a major issue for years?

I do worry about what this means for the future of open source software. We've long relied on value adds in the form of managed hosting, high-quality collections, and educational content. I think the unfortunate truth is that LLMs are making all of that far less valuable. I think the even more unfortunate truth is that value adds were never a good solution to begin with. The reality is that we need everyone to agree that open source software is valuable and worth supporting monetarily, without any value add beyond the continued maintenance of the code.


> the US's per capita consumption of meat

That number seemed unreal to me, so I looked it up. I think it represents the total pre-processing weight, not actual meat consumption. From Wikipedia:

> As an example of the difference, for 2002, when the FAO figure for US per capita meat consumption was 124.48 kg (274 lb 7 oz), the USDA estimate of US per capita loss-adjusted meat consumption was 62.6 kg (138 lb)

Processing, cutting into sellable pieces, drying, and spoilage/loss mean the amount of meat actually consumed is about half that number.


I'm tired of constantly debating the same thing again and again. Where are the products? Where is some great-performing software that's all LLM/agent crafted? All I see is software bloat and decline. Where is the Discord that uses just a couple hundred megs of RAM? Where is the unbloated, faster Slack? Where is the Excel killer? Fast mobile apps? Browsers and the web platform improved? Why doesn't the Cursor team use Cursor to get rid of the VS Code base and build its own super-duper code editor? I see tons of talking and almost zero products.
