bgwalter's comments | Hacker News

The FSF also ignored the SaaS revolution. They put out the AGPL but did not really market it or convert FSF projects to it.

DDT has been banned, cigarettes are all but banned, leaded fuel has been banned. Nuclear energy has been banned in Germany.

The industry wanted to keep all of that and, after some time, did not get its way. You can ban "AI" and make companies respect copyright. You can do all sorts of things.

Since "AI" can only plagiarize, countries that do the above will have an edge. (I'm not talking about military applications, which can still be allowed or should be regulated by treaties, as with nuclear weapons.)


It is everywhere now. Musk censors his X responses, Grok defends billionaires, and the All-In podcast has had only positive comments, in suspiciously perfect English, for the last month or so. Previously they allowed criticism.

(And hardly anyone mentions Greenland on X.)


HN hasn't changed in this respect in a good 10 years, and no one who sees what gets posted here need fear that criticism is verboten. It isn't, and will never be. We do need to do something about shallow cynicism though (see https://news.ycombinator.com/item?id=46515507 from earlier today, if curious).

Yes, it is odd that this criticism is only allowed for gpg while worse Signal issues are not publicized here:

https://cloud.google.com/blog/topics/threat-intelligence/rus...

Some Ukrainians may regret that they followed the Signal marketing. I have never heard of a real-world exploit that has actually been used like that against gpg.


Why would anyone care if you brought phishing attacks on Signal users up?

People who do not wish to get killed may care.

Those people shouldn't be, and thankfully aren't, using PGP. Nobody is suppressing this report on phishing attacks against Signal users; it's just not as big a deal as what's wrong with PGP.

Accidentally replying in plaintext is a user error, scanning a QR code is a user error.

Yet one system (Signal) is declared secure and the other insecure, despite the fact that the QR code issue happened in a war zone, whereas I have not heard of a similar PGP failure in the real world.


First of all, accidentally replying in plaintext is hardly the only problem with PGP, just the most obvious one. Secondly, it's not user error: modern messaging cryptography is designed not to allow it to happen.

Modern cryptography should also not allow users to activate a sketchy linked device feature by scanning a QR code:

"Because linking an additional device typically requires scanning a quick-response (QR) code, threat actors have resorted to crafting malicious QR codes that, when scanned, will link a victim's account to an actor-controlled Signal instance."

This is a complete failure of the cryptosystem, worse than the issue of responding in plaintext. You can at least design an email client that simply refuses to send plaintext messages because PGP is modular.
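A minimal sketch of what such a refusing client could look like, assuming gpg is on the PATH and the recipient's key is already in the local keyring; the function name and surrounding mail transport are illustrative, not taken from any real client:

    import subprocess

    def encrypt_or_refuse(recipient_key: str, body: str) -> str:
        """Return ASCII-armored ciphertext or raise; there is no plaintext path."""
        result = subprocess.run(
            ["gpg", "--encrypt", "--armor", "--recipient", recipient_key],
            input=body.encode(), capture_output=True,
        )
        if result.returncode != 0:
            # Refuse outright instead of falling back to plaintext.
            raise RuntimeError("encryption failed, mail not sent")
        return result.stdout.decode()

Because gpg is a separate filter program, the client can be structured so that the only route to the outbox goes through a function like this.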


I'm comfortable with what this thread says about our respective arguments. Thanks!

How does this help people who are not following this issue regularly? gpg protected Snowden, and this article recommends tools by one of the cryptographers who promoted non-hybrid encryption:

https://blog.cr.yp.to/20251004-weakened.html#agreement

So what to do? PGP, by the way, never claimed to prevent traffic analysis; Mixmaster was the layer that somehow got dropped, unlike Tor.
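"Hybrid" here means deriving the session key from both a classical and a post-quantum shared secret, so an attacker has to break both components. A minimal sketch of such a combiner; the two inputs are placeholders standing in for, e.g., X25519 and ML-KEM outputs:

    import hashlib, os

    def hybrid_shared_key(classical_ss: bytes, pq_ss: bytes) -> bytes:
        # Recovering the session key requires BOTH shared secrets; a
        # non-hybrid design drops the classical term and bets everything
        # on the newer post-quantum scheme alone.
        return hashlib.sha256(b"hybrid-kdf-v1" + classical_ss + pq_ss).digest()

    # Placeholders for real key-agreement outputs:
    session_key = hybrid_shared_key(os.urandom(32), os.urandom(32))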


You could also say Cryptocat protected Snowden; he used it to communicate with reporters. So, that's how well that argument holds up.

https://en.wikipedia.org/wiki/Cryptocat#Reception_and_usage

"In June 2013, Cryptocat was used by journalist Glenn Greenwald while in Hong Kong to meet NSA whistleblower Edward Snowden for the first time, after other encryption software failed to work."

So it was used when Snowden was already on the run, after other software had failed, and the communication did not need to stay confidential for the long term.

It would also be an indictment of messaging services as opposed to gpg. gpg has the advantage that there is no money in it, so there are unlikely to be industry or deep state shills.


Huh? There's no money in anything we're talking about here.

No money in anything?

Signal was made by people who then used it to push their get-rich-quick cryptocurrency scheme on users and who threw all their promises of being open source and reproducible overboard for it. The Signal people are absolutely not trustworthy for reasons of money and greed.


> Signal was made by people who then used it to push their get-rich-quick cryptocurrency scheme on users and who threw all their promises of being open source and reproducible overboard for it.

I reviewed Signal's cryptography last year over a long weekend: https://soatok.blog/2025/02/18/reviewing-the-cryptography-us...

There's a lot to be said for the utility of reverse engineering tools and skills, but I did not need them, because it was open source. Because Signal's client software still is open source.

Whatever you think about MobileCoin, it doesn't actually intersect with the message encryption features at all. At all.

The only part of Signal that's not entirely open source is the anti-spam feature set baked into the Signal Server software.

And, frankly, the security of end-to-end encrypted messaging apps has so little to do with whatever the server software is doing that it's silly to consider it relevant to these discussions. https://soatok.blog/2025/07/09/jurisdiction-is-nearly-irrele...

And, yes, this is only a server-side feature. See spam-filter (a git submodule) in https://github.com/signalapp/Signal-Server but absent from https://github.com/signalapp/Signal-Android or https://github.com/signalapp/Signal-iOS

> The Signal people are absolutely not trustworthy for reasons of money and greed.

I don't think you've raised sufficient justification for this point.


> Because Signal's client software still is open source.

Only if you can trust that the published client source code corresponds to the distributed client binaries. The only way to establish that is reproducible builds, since building your own client is frowned upon and sometimes actively prevented by the Signal people. Signal has always been a my-way-or-the-highway centralized cathedral: no alternate implementations, no federation, nothing. Which was always suspicious. Also, "the Signal client is open source software" only holds if you don't count the proprietary Google blobs that the Signal binary does contain: FCM and Maps. Those live in the same process and can do whatever they like to the E2EE...

As for the Signal client that does the E2EE: reproducible builds are frequently broken, e.g. https://github.com/signalapp/Signal-Android/issues/11352 and https://github.com/signalapp/Signal-Android/issues/13565, and many more; just search their issue tracker. The latter was open for two years, so reproducible builds for the client were broken at least during 2024 and most of 2025. They don't keep their promise and don't prioritize fixing these issues, because they just don't care. People trust them blindly, and the Signal people rely on that blind trust. Case in point: you yourself reviewed their code and probably didn't notice that it wasn't the code for the binary they were distributing at the time.
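For anyone who hasn't done it: verifying a reproducible build comes down to building the tagged source yourself and byte-comparing the result against the published binary. A rough sketch with hypothetical file names (in practice the signing metadata has to be stripped first, since only the distributor can produce the signature):

    import hashlib

    def sha256_of(path: str) -> str:
        with open(path, "rb") as f:
            return hashlib.sha256(f.read()).hexdigest()

    # Hypothetical paths: the store-distributed APK and your own build
    # of the same tagged release, both with signatures stripped.
    if sha256_of("official-unsigned.apk") == sha256_of("selfbuilt-unsigned.apk"):
        print("reproducible: published source matches the binary")
    else:
        print("MISMATCH: published source cannot be verified")

When this comparison fails for months, as in the issues above, "just read the source" tells you nothing about the binary on your phone.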

Now you might say that the reproducible builds of the client you reviewed weren't affected by their MobileCoin cash grab, and you are right, but it shows a pattern: they don't care, and even the many professionals singing their praises don't care.

And their server code does affect your privacy, even with E2EE. The server can still maliciously correlate who talks to whom. You have to trust that their published source code correctly obfuscates that; otherwise you get metadata leaks, the same as in all other messengers. The server can also easily impersonate you, read all your contacts, and send them to evil people. "But Signal protects against this," you say? Well, it does, by some SGX magic and the assurance that the code inside the enclave does the right thing. But they clearly don't care about putting their code where their mouth is; they would rather put their code where the money was. Behind closed doors, until they could finish their MobileCoin thing.

>> The Signal people are absolutely not trustworthy for reasons of money and greed.

> I don't think you've raised sufficient justification for this point.

Trust is hard to earn and easy to squander. They squandered my trust and did nothing to earn it back. Their behavior clearly shows they don't care about trust, because they frequently break their reproducibility and are slow to fix it. They cared more about their coin thing. They are given trust, even by professionals who should know better, because their cryptography is cool. But cryptography isn't everything, and one should not trust them, because they are obviously more interested in MobileCoin than in trust. What more is there to justify? It's obvious, imho.


> They squandered my trust

Yes, fine, they squandered your trust.

You don't speak for all of us.




I don't care that it's a tangent, I care that it's incoherent and wrong.

The tangent explicitly talks about generic messaging services. WhatsApp and Signal have more money than gpg. Thinking about it more, it is not even a tangent, because TFA says:

"Use Signal. Or Wire, or WhatsApp, or some other Signal-protocol-based secure messenger."


I wrote TFA. Signal is a nonprofit. The article says to use Signal-protocol-based messengers, of which there are several. Your objection about money doesn't make sense.

Looking at the revenues and the salaries, nonprofit doesn't mean much these days.

cannot downvote you enough, or at all, annoyingly

Maybe they should hire Mario Nawfal for their announcements:

""" BREAKING: AI FOUND VULNERABILITY IN FFMPEG!

After decades of human struggle, humans no longer call the shots.

Pwno decided to take the leap. They did not just find a vulnerability---they found a BOMBSHELL! What took developers weeks to write, AI analyzed in SECONDS! """


This is another drawback of security research, but one that already existed before "AI", with ossfuzz.

You basically cannot commit to a public main branch and then audit and test everything three months before a release, because any error can be picked up, will be publicized, and will go into the official statistics.


> ... go into the official statistics.

There are no "official" statistics. None of this matters. If we judged projects by the number of security holes they had, then no one would be using ffmpeg, which had hundreds of serious vulns.

Vulnerability research is useful insofar that the bad guys are using the same techniques (e.g., the same fuzzing tools), so any bugs you squash make it harder for others to attack you. If your enemy is a nation state, they might still pack your laptop / phone / pager with explosives, but the bar for that is higher than popping your phone with a 0-day.

Vulnerability research is demonstrably not useful for improving the security of the ecosystem in the long haul. That's where sandboxing, hardening, and good engineering hygiene come into play. If you're writing a browser or a video decoder in C/C++, you're going to have exploitable bugs.


> Vulnerability research is demonstrably not useful for improving the security of the ecosystem in the long haul. That's where sandboxing, hardening, and good engineering hygiene come into play. If you're writing a browser or a video decoder in C/C++, you're going to have exploitable bugs.

IMHO, vulnerability research is the stick that drives the ecosystem towards all those things. Reports of vulnerabilities in the codec for Rebel Assault videos (or whatever) lead one to disable codecs other than those one needs. Reports of vulnerabilities in playlist support lead one to disable playlist support where it's unnecessary and to run transcodes in a chroot sandbox with no network access. Reports of buffer overflows lead one to prefer implementations in memory-safe languages, where available with sufficient performance, and also to sandbox when possible.
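As a concrete example of the playlist point, here is a hedged sketch of a transcode invocation that refuses every protocol except local files, so a hostile playlist entry cannot trigger a network fetch (file names are hypothetical; a chroot or namespace sandbox would wrap this from the outside):

    import subprocess

    # -protocol_whitelist makes ffmpeg reject any playlist entry that is
    # not a plain local file, so http:// or tcp:// references fail closed
    # instead of reaching out to the network.
    subprocess.run(
        ["ffmpeg", "-nostdin",
         "-protocol_whitelist", "file,pipe",
         "-i", "untrusted.m3u8",
         "-c:v", "libx264", "-c:a", "aac",
         "output.mp4"],
        check=True,
    )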


I mostly agree, and further would say that this doesn't really conflict with the preceding comment.

It’s the projects without CVEs that scare me.

Because nobody’s even looking…


Did you prefer this bug to go unnoticed until it's released to everyone, and only then fixed in a hurry, requiring another release? Why?

The list is pretty short for 8 months, though. ossfuzz has found a lot more, even with the fuzzers often not covering much of the code base.

Paying people to write fuzzers by hand would yield a lot more and cost less than data centers and burning money, but who wants to pay people in 2026?
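For scale, a hand-written fuzz target is often tiny; the paid work is in choosing targets and triaging results. A minimal example using Google's atheris, the Python engine ossfuzz uses (target_lib is a hypothetical stand-in for the code under test):

    import sys
    import atheris

    with atheris.instrument_imports():
        import target_lib  # hypothetical parser under test

    def TestOneInput(data: bytes):
        try:
            target_lib.parse(data)
        except ValueError:
            pass  # rejecting malformed input is correct behavior

    atheris.Setup(sys.argv, TestOneInput)
    atheris.Fuzz()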


Bugs are not equivalently findable and different techniques surface different bugs. The direct comparison you're trying to draw here doesn't hold.

It does not matter what purported categories buffer overflows are in when manual fuzzing finds 100 and "AI" finds 5.

If Google gave open source projects $100,000 per year for a competent QA person, it would cost less than this "AI" money bonfire and produce better results. Maybe the QA person would also find the 5 "AI"-detected bugs.


This would make sense if every memory corruption vulnerability was equivalently exploitable, which is of course not true. I think you'll find Google does in fact fuzz ffmpeg, though.

Google gives a pittance even for full ossfuzz integration. Which is why many projects just have the bare minimum fuzz tests. My original point was that even with these bare minimum tests ossfuzz has found way more than "AI" has.

Another weird assumption you've got here is that fuzzing outcomes scale linearly with funding, which, no. Further, the field of factory-scale fuzzing and triage is one Google security engineers basically invented, so it's especially odd to hold Google out as a bad actor here.

At any rate, Google didn't employ "AI" to find this vulnerability, and Google fuzzing probably wouldn't have outcompeted these researchers for this particular bug (totally different methods of bugfinding), so it's really hard to find a coherent point you'd be making about "fuzzers", "AI", and "Google" here.


My guess is the main "AI" contribution here is to automate some of the work around the actual fuzzing: setting up the test environment and harness, reading the code + commit history + published vulns for similar projects, identifying likely trouble spots, gathering seed data, writing scripts to generate more seed data reaching the identified trouble spots, adding instrumentation to the target to detect conditions ASan etc. don't catch, writing PoC code, writing draft patches... That's a lot of labor, and the coding agents can do a mediocre job of all of it for the cost of compute.
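The seed-scripting step, for instance, is mechanical once a trouble spot has been identified. A sketch, with the seed file and offset range as hypothetical stand-ins:

    import os, random

    os.makedirs("corpus", exist_ok=True)
    base = open("seed.mp4", "rb").read()  # hypothetical known-good sample

    for i in range(100):
        mutated = bytearray(base)
        # Flip a few bytes inside a hypothetical header region that code
        # reading flagged as a likely trouble spot.
        for _ in range(4):
            pos = random.randrange(0x20, min(0x80, len(mutated)))
            mutated[pos] ^= random.randrange(1, 256)
        with open(f"corpus/seed_{i:03d}.mp4", "wb") as f:
            f.write(bytes(mutated))

An agent can churn out a hundred such scripts cheaply, which is exactly the "mediocre job for the cost of compute" tradeoff.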

If it's finding exploitable bugs prior factory-scale fuzzing of ffmpeg hasn't, seems like a pretty big win to me.

For sure, and I think it expands the scope of what factory scale efforts can find. The big question of course being how to handle remediation because more bugs without more maintainer capacity is a recipe for tears.



I am a professional software developer and have been since the 1990s.

I can't speak to what exactly this team is doing, but I haven't seen any evidence that with-robot finds fewer bugs than without-robot. I do have some experience in this area.

It is clear that Nadella has no clue what he is doing in 2025 and just wants to make another big splash with "AI".

If you force employees to dedicate 100% of their thinking power to agents, prompts, "AI" meetings, working on their necessarily fake "AI" success stories and "impacts", no one has time to do real work. Or have any real new ideas about anything else.

But Nadella doubles down and goes into "startup mode":

https://www.ft.com/content/255dbecc-5c57-4928-824f-b3f2d764f...

Not only did Windows 11 get worse; GitHub got worse, too. So did the free Copilot.


Another AI entrepreneur who writes a long article about inevitability, lists some downsides in order to remain credible, but all in all just uses neurolinguistic programming on the reader so that the reader, too, will think the "AI" revolution is inevitable.

Tl;dr: initially I thought we might be onto something, but now I don't see much of a revolution.

I won't ascribe intention to the text, because I did not check any other posts from the same guy.

That said, I think this revolution is not revolutionary yet. Not sure if it will be, but maybe?

What is happening is that companies are going back to a "normal" number of people in software development. Before, it was because of the adoption of custom software, later because of a labour shortage; then we had a boom because people caught on to it as a viable career, but then it started scaling down again because one developer can (technically) do more with AI.

There are huge red flags with "fully automated" software development that are not being fixed, but to those outside the area of expertise they don't seem relevant. With newer restrictions related to cost and hardware, AI will be an even worse option unless there is some sort of magic that fixes everything about how it writes code.

The economy (all around the world) is bonkers right now. Honestly, I have seen some junior devs earning six-figure salaries (in USD) and doing less than what my friends and I did when we were juniors. There is inflation and all, but the numbers do not seem to add up.

Part of it all is a re-normalisation, but part of it is certainly a lack of understanding of software and/or engineering.

Current tools, and I include even Kiro, Antigravity, and whatever else, do not solve my problems; they just make my work faster. It is easier to look for code, find data, and read through blocks of code I haven't seen in a while. Writing code, not so much. If it is simple and easy they certainly can do it, but for anything more complex it seems faster and more reliable to do it myself (and probably cheaper).

