If you run across a great HN comment or subthread, please tell us at hn@ycombinator.com so we can add it here!

I currently have 20,097 tabs open in one browser profile. The oldest tab appears to be an HN post from 2.5 years ago, which must be the last time I swept tabs into bookmarks.

I used to sweep them more regularly, but Firefox + Sidebery don't even break a sweat with 20K tabs, apparently, so why bother?

The only downside is that it takes about 15 seconds for the browser to launch. I restart the browser whenever Firefox or macOS is updated, so every week or two.


I was lost, literally, hitchhiking across the Australian outback when this article was published. Going home felt scary because I was afraid to be alone with no one else sharing my interests. Travelling made life enjoyable again because just surviving felt like an achievement. But I felt so, so isolated (again, literally!) from modern society. I wanted to find out why I was so deeply interested in computers but not in “tech”. They must work somehow… why did my iPhone (sold that) feel similar to my PC (sold that too) but only one is called a computer? This article framed things in a way that shook me out of a physically dangerous, homeless, jobless rut. It was all code. And I could learn it if I had the time.

Perhaps it was the way it was written; I couldn’t believe the intrigue and passion of computing could be woven together like this. But there it was.

I did make it home eventually. Fortunately the first 2000km lift back from Western Australia to the eastern states with a crystal meth addict on the run from the police didn’t end violently. A few weeks after getting back to Sydney with family, some Linux nerds found me working as a receptionist, answering phones and scanning in paper records at a failing medical practice. They got me doing desktop Windows and Linux server support. I’m an official software engineer now. I guess I should print this article out to show to my kids!


A lot of this work was done by Walter |2| Costinak. He was an absolute legend and he's still doing a bit of design work today. I know because he did the branding for my last company and product. I worked with him a lot at Gathering of Developers back in the day. Together we rebuilt the website for Take 2 Games, and they used our work for well over a decade before doing a redesign. If you like this style, I recommend you reach out to him. Here's his website:

https://2design.org/


For some reason the article made me think about this quote from one of the 2025 MacArthur Fellowship videos, "I think there are some mathematicians who are kind of like the hiker who choose this massive peak they want to scale and they do everything they can to make it up the mountain. I'm more like the kind of hiker who wanders through the forest and stops to look at a pretty stone or flower and reflect on whether it's similar to a stone or flower that I've seen before."

> When BGP traffic is being sent from point A to point B, it can be rerouted through a point C. If you control point C, even for a few hours, you can theoretically collect vast amounts of intelligence that would be very useful for government entities. The CANTV AS8048 being prepended to the AS path 10 times means that the traffic would not prioritize this route through AS8048; perhaps that was the goal?

AS prepending is a relatively common method of traffic engineering to reduce traffic from a peer/provider. Looking at CANTV's (AS8048) announcements from outside that period shows they do this a lot.

Since this was detected as a BGP route leak, it looks like CANTV (AS8048) propagated routes from Telecom Italia Sparkle (AS6762) to GlobeNet Cabos Submarinos Colombia (AS52320). This could have simply been a misconfiguration.

Nothing nefarious immediately jumps out to me here. I don't see any obvious attempts to hijack routes to Dayco Telecom (AS21980), which was the actual destination. The prepending would have made traffic less likely to transit over CANTV assuming there was any other route available.

The prepending done by CANTV does make it slightly easier to hijack traffic destined to it (though not really to Dayco), but that appears to be something they just normally do.

This could be CANTV trying to force some users of GlobeNet to transit over them to Dayco I suppose, but leaving the prepending in would be an odd way of going about it. I suppose if you absolutely knew you were the shortest path length, there's no reason to remove the prepending, but a misconfiguration is usually the cause of these things.
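
To make the path-length mechanics concrete, here is a toy sketch (in C, purely illustrative; the names and path lengths are assumptions, not real BGP data) of the shortest-AS-path comparison: prepending AS8048 ten times inflates that path, so any alternative route wins whenever one exists.

  /* Toy model of BGP's shortest-AS-path preference (illustrative only). */
  #include <stdio.h>

  struct route { const char *via; int as_path_len; };

  /* All else being equal, BGP prefers the route with the shorter AS path. */
  static struct route best(struct route a, struct route b) {
      return (a.as_path_len <= b.as_path_len) ? a : b;
  }

  int main(void) {
      /* Hypothetical candidate paths to the same prefix. */
      struct route other     = { "some other transit",              3 };
      struct route prepended = { "via CANTV AS8048, prepended x10", 2 + 10 };

      struct route chosen = best(other, prepended);
      printf("chosen: %s (AS path length %d)\n", chosen.via, chosen.as_path_len);
      return 0;   /* prints: chosen: some other transit (AS path length 3) */
  }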


My dad grew up in the 50s & 60s. During COVID he purchased my daughters the, and I quote, "shittiest briefcase record players" he could find. Both girls listen to their music on their devices, but also buy vinyl. The other day, my eldest came down from her room complaining that her vinyl "sounded awful". I told her to bring it up with their Grampy. His response: "you can't appreciate good playback until you've heard awful playback on shitty record players like I had to." My eldest is now plotting a complete hifi system, and is learning all about how to transfer "vinyl" to "digital" without losing the parts of the vinyl she likes.

This was a 5 year play by my dad. Shout out.


So, who invented the satellite then? What about the steam engine? The helicopter?

Sometimes the inventors are so far ahead of their time that materials science first has to catch up (in some cases by only a few millennia) before anyone can realize their devices. Effectively, whoever first creates the device after the materials science part is done gets to claim the invention.

So we get Sikorsky, and not Da Vinci.

We get Arthur C. Clarke claiming the 'communications satellite', even though the Moon was there all along and Sputnik was the first working, very crude device (it was one-way only; its entire message was a single bit: 'you lost the space race').

We get Newcomen, Jerónimo de Ayanz y Beaumont (I had to look that up, I can never remember the man's full name), and Hero of Alexandria competing for the steam engine title, with all of them holding some part of the credit.

Pointing at an inventor is hard. 'Who built the first working device' is one way of doing it, but it assumes a singular effort when most things are team efforts, and it misses that the idea itself can be an instrumental step in getting your 'true' inventor to their claim, standing on the shoulders of the giants before them. In isolation, each of us would probably manage to invent the hammer in our lifetime, if that.


Don't miss how this works. It's not a server-side application: the code runs entirely in your browser using SQLite compiled to WASM, but rather than fetching the full 22GB database it uses a clever hack that retrieves just the "shards" of the SQLite database needed for the page you are viewing.

I watched it in the browser network panel and saw it fetch:

  https://hackerbook.dosaygo.com/static-shards/shard_1636.sqlite.gz
  https://hackerbook.dosaygo.com/static-shards/shard_1635.sqlite.gz
  https://hackerbook.dosaygo.com/static-shards/shard_1634.sqlite.gz
as I paginated to previous days.

It's reminiscent of that brilliant sql.js VFS trick from a few years ago: https://github.com/phiresky/sql.js-httpvfs - only that one used HTTP range headers, while this one uses sharded files instead.

The interactive SQL query interface at https://hackerbook.dosaygo.com/?view=query asks you to select which shards to run the query against; there are 1,636 in total.
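
To sketch the client-side idea in C (hedged: the shard filename scheme copies what the network panel showed; the query and the `items` table with its columns are hypothetical stand-ins, since the real schema isn't visible; and the fetch-and-gunzip step the browser does is assumed to have already produced a local file):

  /* Sketch: resolve a page to its shard, open only that shard, query it. */
  #include <stdio.h>
  #include <sqlite3.h>

  static int query_shard(int shard_id) {
      char path[64];
      /* assume shard_<id>.sqlite.gz was already fetched and gunzipped */
      snprintf(path, sizeof path, "shard_%d.sqlite", shard_id);

      sqlite3 *db;
      if (sqlite3_open_v2(path, &db, SQLITE_OPEN_READONLY, NULL) != SQLITE_OK)
          return -1;

      sqlite3_stmt *stmt;
      /* hypothetical schema; each shard is a self-contained database */
      const char *sql = "SELECT title FROM items ORDER BY score DESC LIMIT 30";
      if (sqlite3_prepare_v2(db, sql, -1, &stmt, NULL) == SQLITE_OK) {
          while (sqlite3_step(stmt) == SQLITE_ROW)
              printf("%s\n", (const char *)sqlite3_column_text(stmt, 0));
          sqlite3_finalize(stmt);
      }
      sqlite3_close(db);
      return 0;
  }

The payoff is the same as with the range-request trick: the client only ever pays for the slice of the 22GB it actually looks at.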


A few years back I patched the memory allocator used by the Cloudflare Workers runtime to overwrite all memory with a static byte pattern on free, so that uninitialized allocations contain nothing interesting.

We expected this to hurt performance, but we were unable to measure any impact in practice.

Everyone still working in memory-unsafe languages should really just do this IMO. It would have mitigated this Mongo bug.
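
For anyone wanting to try this, here is a hedged sketch of the technique in C. The actual Workers change was inside the runtime's allocator, so this is just the shape of it, and malloc_usable_size is a glibc extension:

  #include <malloc.h>   /* malloc_usable_size (glibc extension) */
  #include <string.h>
  #include <stdlib.h>

  #define SCRUB_BYTE 0xDE   /* any fixed, recognizable pattern works */

  /* Overwrite the entire usable block before releasing it, so memory
     recycled by the allocator never hands stale contents to the next
     "uninitialized" allocation. */
  static void scrubbing_free(void *p) {
      if (p == NULL)
          return;
      memset(p, SCRUB_BYTE, malloc_usable_size(p));
      free(p);
  }

A fixed pattern has a side benefit: use-after-free reads show up as an obvious 0xDEDEDEDE... in a debugger.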


That is fair, particularly compared to Janet Jackson! I will add detail.

In their younger days, two distinguished engineers, Bryan Cantrill and Brendan Gregg, made this video where they scream at a data storage server nicknamed Thumper. Screaming at it has surprising results, which are observed with a novel software technology called DTrace.

The Sun Fire X4500 was a dense storage server: 4U with 48 disks, insane IO performance, and a newish filesystem called ZFS. The video is not only funny in content, it also features technology and technologists that became very impactful, hence the classic tag.

---

I love the lore, so I'll drop more.

While our team previously used AFS (mainly for its great caching) and many storage servers, this hardware combined with its software allowed us to consolidate, manage, and access data in new ways, alleviating many of our market data analysis problems.

We switched to NFS, which previously was not performant enough for us on other hw/sw architectures. While using NFS with the Thumpers and then Thors (X4540) was fantastic, eventually the data scales became hard again and we made a distributed immutable filesystem that looked like the Hadoop HDFS and Cassandra file systems, named after our favorite Klingon Worf (Write-Once Read-Frequently).

Interestingly, in 2025 both XTX [1] and HRT [2] open-sourced their distributed file systems, which are pretty similar to it, using 2020s tech rather than 2000s. HRT's is based on Meta's Tectonic, which is a spiritual successor to Cassandra.

I wrote about our parallel HFT networking journey once upon a time on HN. [3]

[1] https://www.xtxmarkets.com/tech/2025-ternfs/

[2] https://www.hudsonrivertrading.com/hrtbeat/distributed-files...

[3] https://news.ycombinator.com/item?id=31924784


The way it works is:

A company adopts some software with a free but not copyleft license. Adopts means they declare "this is good, we will use it".

Developers help develop the software (free of charge) and the company says thank you very much for the free labour.

Company puts that software into everything it does, and pushes it into the infrastructure of everything it does.

Some machines run that software because an individual developer put it there; other machines run that software because a company put it there, sometimes by exerting some sort of power for it to end up there (for example, economic incentives to vendors, as with Android).

At some point the company says "you know what, we like this software so much that we're going to fork it, but the fork isn't going to be free or open source. It's going to be just ours, and we're not going to share the improvements we made".

But now that software is already running in a lot of machines.

Then the company says "we're going to tweak the software a bit, so that it's no longer inter-operable with the free version. You have to install our proprietary version, or you're locked out" (out of whatever we're discussing hypothetically. Could be a network, a standard, a protocol, etc).

Developers go "shit, I guess we need to run the proprietary version now. we lost control of it."

This is what happened, e.g., with Chrome. There's Chromium, which anyone can build. But that's not Chrome. And Chrome is what everybody uses, because Google has lock-in power. Then Google says "oh, I'm going to disallow you from running the extensions you like, so we can show you more ads". Then they make tweaks to Chrome so that websites only get rendered well if they use certain APIs, so now competitors to Chrome are forced to implement those APIs, but those aren't public.

And all of this was initially built by free labour, which Google took, from people who thought they were contributing to some commons in a sense.

Copyleft licenses protect against this. Part of the license says: if you use software under this license and you make changes to it, you have to share the changes as well; you can't keep them for yourself.


I'm taking a moment to recognize once more the work that user @atdrummond (Alex Thomas Drummond) did for a couple years to help others here. I did not know him, don’t think I ever interacted with him, and I did not benefit from his generosity, but I admired his kindness. Just beautiful.

Ask HN: Who needs holiday help? (Follow up thread) - https://news.ycombinator.com/item?id=38706167 - Dec 2023 (9 comments)

Ask HN: Who needs help this holidays? - https://news.ycombinator.com/item?id=38492378 - Dec 2023 (210 comments)

Tell HN: Thank You - https://news.ycombinator.com/item?id=34140096 - Dec 2022 (42 comments)

Tell HN: Everyone should have a holiday dinner this year - https://news.ycombinator.com/item?id=34122118 - Dec 2022 (58 comments)

Unfortunately, Alex died a few months after his last round of holiday giving, about 1½ years ago now.

Tell HN: In Memory of Alexander Thomas Drummond - https://news.ycombinator.com/item?id=40508725 - May 2024 (5 comments)

If you read the comments in that last thread, know that @toomuchtodo followed through last year and kept the tradition alive. Amazing and magnificent.

Ask HN: Who needs help this holidays? - https://news.ycombinator.com/item?id=42291246 - Dec 2024 (46 comments)


If this had been available in 2010, Redis scripting would have been JavaScript and not Lua. Lua was chosen based on the implementation requirements, not on the language ones (small, fast, ANSI C). I appreciate certain ideas in Lua, and people love it, but I was never able to like it, because for my taste it departs from a more Algol-like syntax and semantics without good reasons. This creates friction for newcomers. I love friction when it opens new useful ideas and abstractions that are worth it: if you learn Smalltalk or Forth and are lost for some time, that's part of how those languages are different. But I don't think this is true enough for Lua: it feels like it departs from what people know without good reasons.


I'm the Manager of the Computing group at JILA at CU, where utcnist*.colorado.edu used to be housed. Those machines were, for years, consistently the highest bandwidth usage computers on campus.

Unfortunately, the HP cesium clock that backed the utcnist systems failed a few weeks ago, so they're offline. I believe the plan is to decommission those servers anyway - NIST doesn't even list them on the NTP status page anymore, and Judah Levine has retired (though he still comes in frequently). Judah told me in the past that the typical plan in this situation is that you reference a spare HP clock with the clock at NIST, then drive it over to JILA backed by some sort of battery and put it in the rack, then send in the broken one for refurb (~$20k-$40k; new box is closer to $75k). The same is true for the WWVB station, should its clocks fail.

There is fiber that connects NIST to CU (it's part of the BRAN - Boulder Research and Administration Network). Typically that's used when comparing some of the new clocks at JILA (like Jun Ye's strontium clock) to NIST's reference. Fun fact: Some years back the group was noticing loss due to the fiber couplers in various closets between JILA & NIST... so they went to the closets and directly spliced the fibers to each other. It's now one single strand of fiber between JILA & NIST Boulder.

That fiber wasn't connected to the clock that backed utcnist though. utcnist's clock was a commercial cesium clock box from HP that was also fed by GPS. This setup was not particularly sensitive to people being in the room or anything.

Another fun fact: utcnist3 was an FPGA developed in-house to respond to NTP traffic. Super cool project, though I didn't have anything to do with it, haha.


Agreed, which is why what GP suggests is much more sensible: it's venturing into known territory, except only one party of the conversation knows it, and the other literally cannot know it. It would be a fantastic way to build fast intuition for what LLMs are and aren't capable of.

I wonder if you could query it on some of the ideas of Frege, Peano, and Russell and see if, through questioning, it could get to some of the ideas of Gödel, Church, and Turing, and then get it to "vibe code" (or more like "vibe math") some program in lambda calculus or something.

Playing with the science and technical ideas of the time would be amazing, like where you know some later physicist found an exception to a theory or something, and questioning the model's assumptions, seeing how a model of that time might defend itself, etc.


I used to teach 19th-century history, and the responses definitely sound like a Victorian-era writer. And they of course sound like writing (books and periodicals etc) rather than "chat": as other responders allude to, the fine-tuning or RL process for making them good at conversation was presumably quite different from what is used for most chatbots, and they're leaning very heavily into the pre-training texts. We don't have any living Victorians to RLHF on: we just have what they wrote.

To go a little deeper on the idea of 19th-century "chat": I did a PhD on this period and yet I would be hard-pushed to tell you what actual 19th-century conversations were like. There are plenty of literary depictions of conversation from the 19th century of presumably varying levels of accuracy, but we don't really have great direct historical sources of everyday human conversations until sound recording technology got good in the 20th century. Even good 19th-century transcripts of actual human speech tend to be from formal things like court testimony or parliamentary speeches, not everyday interactions. The vast majority of human communication in the premodern past was the spoken word, and it's almost all invisible in the historical sources.

Anyway, this is a really interesting project, and I'm looking forward to trying the models out myself!


It is not just a way of writing ring buffers. It's a way of implementing concurrent non-blocking single-reader single-writer atomic ring buffers with only atomic load and store (and memory barriers).

The author says that non-power-of-two is not possible, but I'm pretty sure it is if you use a conditional instead of integer modulus.
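
A minimal sketch of that variant (my reading of the parent's suggestion, not the article's code): a C11 SPSC ring using only atomic loads and stores, with a conditional wrap instead of a modulus so the capacity need not be a power of two. The cost is that the indices can't be free-running masked counters, so one slot is sacrificed to distinguish full from empty.

  #include <stdatomic.h>
  #include <stddef.h>

  #define CAPACITY 1000   /* not a power of two */

  typedef struct {
      int buf[CAPACITY];
      _Atomic size_t head;  /* advanced by the single consumer */
      _Atomic size_t tail;  /* advanced by the single producer */
  } spsc_ring;

  static size_t next_index(size_t i) {
      /* conditional wrap instead of (i + 1) % CAPACITY */
      return (i + 1 == CAPACITY) ? 0 : i + 1;
  }

  static int push(spsc_ring *r, int v) {
      size_t tail = atomic_load_explicit(&r->tail, memory_order_relaxed);
      size_t next = next_index(tail);
      if (next == atomic_load_explicit(&r->head, memory_order_acquire))
          return 0;  /* full (one slot deliberately left empty) */
      r->buf[tail] = v;
      atomic_store_explicit(&r->tail, next, memory_order_release);
      return 1;
  }

  static int pop(spsc_ring *r, int *out) {
      size_t head = atomic_load_explicit(&r->head, memory_order_relaxed);
      if (head == atomic_load_explicit(&r->tail, memory_order_acquire))
          return 0;  /* empty */
      *out = r->buf[head];
      atomic_store_explicit(&r->head, next_index(head), memory_order_release);
      return 1;
  }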

I first learnt of this technique from Phil Burk; we've been using it in PortAudio forever. The technique is also widely known in FPGA/hardware circles, see:

"Simulation and Synthesis Techniques for Asynchronous FIFO Design", Clifford E. Cummings, Sunburst Design, Inc.

https://twins.ee.nctu.edu.tw/courses/ip_core_04/resource_pdf...


Pretty impressive.

When I published Grisu (Google double-conversion), it was multiple times faster than the existing algorithms. I knew that there was still room for improvement, but I was at most expecting a factor of 2 or so. Six times faster is really impressive.


Having worked at Mozilla a while ago, I can say the CEO role is one I wouldn't wish on my worst enemy. Success is oddly defined: it's a non-profit (well, a for-profit owned by a non-profit) that needs to make a big profit in a short amount of time. And anything done to make that profit will annoy the community.

I hope Anthony leans into what makes Mozilla special. The past few years, Mozilla's business model has been to just meekly "us-too!" trends... IoT, Firefox OS, and more recently AI.

What Mozilla is good at, though, is taking complex things the average user doesn't really understand and making them palatable and safe. They did this with web standards... nobody cared about web standards, but Mozilla focused on usability.

(Slight aside: it's not a coincidence that the best CEO Mozilla ever had was a designer.)

I'm not an AI hater, but I don't think Mozilla can compete here. There's just too much good stuff already, and it's not the type of thing Mozilla will shine with.

Instead, if I were CEO, I'd go the opposite way: I'd focus on privacy. Not AI privacy, but privacy in general. Buy a really great email provider, and start to own "identity on the internet". As there are more bots and less privacy, identity is going to be incredibly important over the years... and right now, Google de facto owns identity. Make it free, but also give people a way to pay.

Would this work? I don't know. But like I said, it's not a job I envy.


As the first author of the salmon paper, yes, this was exactly our point. fMRI can be an amazing tool, but if you are going to trust the results you need to have proper statistical corrections along the way. Researchers were capitalizing on chance in many cases, as they failed to correct effectively for the multiple comparisons problem. We argued with the dead fish that they should.


Hi all! I’m Aleix Ramon, the music composer of the soundtrack.

Since some of you asked, here’s the soundtrack on Bandcamp: https://aleixramon.bandcamp.com/album/size-of-life-original-...

There you can download it in high quality, and it's pay-what-you-want: you can get it for free if you want, or pay what you feel like and support me. Either way, I'm happy that you enjoy it!

The music should also be on Spotify, Apple Music, and most music streaming services within the next 24h.

A bit about the process of scoring Size of Life:

I’ve worked with Neal before on a couple of his other games, including Absurd Trolley Problems, so we were used to working together (and with his producer—you’re awesome, Liz!). When Neal told me about Size of Life, we had an inspiring conversation about how the music could make the players feel.

The core idea was that it should enhance that feeling of wondrous discovery, but subtly, without taking the attention away from the beautiful illustrations.

I also thought it should reflect the organisms' increasing size—as some of you pointed out, the music grows with them. I think of it as a single instrument that builds upon itself, like the cells in an increasingly complex organism. So I composed 12 layers that loop indefinitely—as you progress, each layer is added, and as you go back, they’re subtracted. The effect is most clear if you get to the end and then return to the smaller organisms!
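
(For the technically curious, a toy model of that layering in C, under the assumption of one synced looping stem per stage that is simply faded in or out; the real implementation surely differs:)

  #define NUM_LAYERS 12

  /* stage runs from 1 (smallest organisms) to NUM_LAYERS (largest);
     moving forward adds a layer, moving back subtracts one. */
  static void update_mix(float gains[NUM_LAYERS], int stage) {
      for (int i = 0; i < NUM_LAYERS; i++)
          gains[i] = (i < stage) ? 1.0f : 0.0f;
  }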

Since the game has an encyclopedia vibe to it, I proposed going with a string instrument to give it a subtle “Enlightenment-era” and “cultural” feel. I suspected the cello could be a good instrument because of its range and expressivity.

Coincidentally, the next week I met the cellist Iratxe Ibaibarriaga at a game conference in Barcelona, where I’m based, and she immediately became the ideal person for it. She’s done a wonderful job bringing a ton of expressivity to the playing, and it’s been a delight to work with her.

I got very excited when Neal told me he was making an educational game—I come from a family of school teachers. I’ve been scoring games for over 10 years, but this is the first educational game I’ve scored.

In a way, now the circle feels complete!

(if anyone wants to reach out, feel free to do so! You can find me and all my stuff here: https://www.aleixramon.com/ )


The odd thing about all of this (well, I guess it's not odd, just ironic), is that when Google AdWords started, one of the notable things about it was that anyone could start serving or buying ads. You just needed a credit-card. I think that bought Google a lot of credibility (along with the ads being text-only) as they entered an already disreputable space: ordinary users and small businesses felt they were getting the same treatment as more faceless, distant big businesses.

I have a friend who says Google's decline came when they bought DoubleClick in 2008 and suffered a reverse takeover: their customers shifted from being Internet users to being other, similarly sized corporations.


One thing this really highlights to me is how often the "boring" takes end up being the most accurate. The provocative, high-energy threads are usually the ones that age the worst.

If an LLM were acting as a kind of historian revisiting today’s debates with future context, I’d bet it would see the same pattern again and again: the sober, incremental claims quietly hold up, while the hyperconfident ones collapse.

Something like "Lithium-ion battery pack prices fall to $108/kWh" is classic cost-curve progress. Boring, steady, and historically extremely reliable over long horizons. Probably one of the most likely headlines today to age correctly, even if it gets little attention.

On the flip side, stuff like "New benchmark shows top LLMs struggle in real mental health care" feels like high-risk framing. Benchmarks rotate constantly, and “struggle” headlines almost always age badly as models jump whole generations.

I bet there are many "boring but right" takes we overlook today, and I wonder if there's a practical way to surface them before hindsight does.


Here it is: https://sw.vtom.net/hn35/news.html

I downloaded the original article page, had Claude extract the submission info to JSON, then wrote a script (by hand ;) to feed each submission title to gemini-3-pro and ask it for an article webpage and then for a random number of comments.

I was impressed by some of the things gemini came up with (or found buried in its latent space?). Highlights:

"You’re probably reading this via your NeuralLink summary anyway, so I’ll try to keep the entropy high enough to bypass the summarizer filters."

"This submission has been flagged by the Auto-Reviewer v7.0 due to high similarity with "Running DOOM on a Mitochondria" (2034)."

"Zig v1.0 still hasn't released (ETA 2036)"

The unprompted one-shot leetcode, youtube, and github clones

Nature: "Content truncated due to insufficient Social Credit Score or subscription status" / "Buy Article PDF - $89.00 USD" / "Log in with WorldCoin ID"

"Gemini Cloud Services (formerly Bard Enterprise, formerly Duet AI, formerly Google Brain Cloud, formerly Project Magfi)"

Github Copilot attempts social engineering to pwn the `sudo` repo

It made a Win10 "emulator" that goes only as far as displaying a "Windows Defender is out of date" alert message

"dang_autonomous_agent: We detached this subthread from https://news.ycombinator.com/item?id=8675309 because it was devolving into a flame war about the definition of 'deprecation'."


To see a little extra feature, change the system time to the year 2035 and click the "comments".

I was there today. We happened to notice the smoke over Kilauea while driving to Hilo, then checked out USGS cams, and immediately drove there and spent the next 7 hours getting mesmerized.

This was my first eruption encounter, and several things surprised me: the heat, even from a long distance, enough to keep me warm in my shorts at 60F, and the loud rumble, like a giant waterfall. The flow of lava was way faster than I expected too, almost like oil.

Mind blown.


In a high-stakes, challenging environment, every possible human weakness becomes a huge, career-impeding liability. Very few people are truly all-around talented. If you are a Stanford-level scientist, it doesn't take a lot of anxiety to make it difficult to compete with other Stanford-level scientists who don't have any anxiety. Without accommodations, you could still be a very successful scientist after going to a slightly less competitive university.

Rising disability rates are not limited to the Ivy League.

A close friend of mine is faculty at a medium sized university and specializes in disability accommodations. She is also deaf. Despite being very bright and articulate, she had a tough time in university, especially lecture-heavy undergrad. In my eyes, most of the students she deals with are "young and disorganized" rather than crippled. Their experience of university is wildly different from hers. Being diagnosed doesn't immediately mean you should be accommodated.

The majority of student cases receive extra time on exams and/or attendance exemptions. But the sheer volume of these cases takes away a lot of badly needed time and funding from students who are talented but are also blind or wheelchair-bound. Accommodating them can require many months of planning to arrange appropriate lab materials, electronic equipment, or textbooks.

As the article mentions, a deeply distorted idea of normal is being advanced by the DSM (changing ADHD criteria) as well as by social media, where enjoying doodling, wearing headphones a lot, putting water on the toothbrush before toothpaste, and many other everyday things are suggested as signs of ADHD/autism/OCD/whatever. This is a huge problem of its own. Though it is closely related to over-prescribing education accommodations, it is still distinct.

Unfortunately, psychological-education assessments are not particularly sensitive. They aren't good at catching pretenders, and they cannot distinguish between a 19-year-old who genuinely cannot develop time management skills despite years of effort and support, and one who is simply still developing them, especially after moving out and into a new area with new (sub)cultures.

Occasionally, she sees documents saying "achievement is consistent with intelligence", a polite way of saying that a student isn't very smart, and poor grades are not related to any recognized learning disability. Really and truly, not everyone needs to get an undergrad degree.


Random nerd note: the history is slightly wrong. Netscape had their own "interactive script" language at the time Sun started talking about Java, which somehow got the front page of the Mercury News when they announced it in March of 1995. At the Third International World Wide Web Conference in Darmstadt, Germany, everyone was talking about it, and I was roped into giving a session on it during the lunch break (which then had to be stopped because no one was going to the keynote by SGI :-)). Everyone there was excited and saying "forget everything, this is the future." So Netscape wanted to incorporate it into Netscape Navigator (their browser), but they had a small problem: this was kind of a competitor to their own scripting language. They wanted to call it JavaScript to ride the coattails of the Java excitement, and Sun legal only agreed to let them do that if they would promise to ship Java in their browser when it hit 1.0 (which it did in September of that year).

So Netscape got visibility for their language, Sun got the #1 browser to ship their language, and Sun had leverage over Microsoft to extortionately license it for Internet Explorer. There were debates among the Java team about whether this was a "good" thing; for Sun, sure, but the confusion over what was "Java" was not. The politics won of course, and when they refused to let the standards organization use the name "JavaScript", the term ECMAScript was created.

So there's that. But how we got here isn't particularly germane to the argument that yes, we should all be able to call it the same thing.

