Hacker News | new | past | comments | ask | show | jobs | submit | techbrovanguard's comments

what “work” are you doing, you parasitic hack?


i used protobuffers a lot at $previous_job and i agree with the entire article. i feel the author’s pain in my bones. protobuffers are so awful i can’t imagine google associating itself with such an amateur, ad hoc, ill-defined, user hostile, time wasting piece of shit.

the fact that protobuffers wasn’t immediately relegated to the dustbin shows just how low the bar is for serialization formats.


What do you use?


i tried to submit a patch to retroarch and it was one of the least pleasant experiences i've had in a while. makefile hell, layers and layers of incomprehensible macros, and spaghetti code everywhere. no thank you. i may be a masochist but i still respect my time.


Same, IMO most of the main devs are extremely toxic, and their turnover rate among contributors is the worst I've ever seen in my 30 years of professional and volunteer/FOSS coding.


> chuckles

jesus christ, you’re not light yagami—please stay on /g/


Someone is going on a witch-hunt. Most of my comments under this submission are flagged, including this one: https://news.ycombinator.com/item?id=44464177.

Crazy. They have gotten quite a lot of upvotes, for the record, but feel free to flag ALL of my comments.

I wonder what is so wrong about asking whether I cannot chuckle anymore, so wrong that it warrants flagging.

>> chuckles (written by me)

> jesus christ, you’re not light yagami—please stay on /g/ (written by some individual)

That one, however, is not flagged. It would be hilarious if it were not so sad.


i will never understand people who are puritans about swearing


There’s a special form of embarrassment when your five year old suddenly announces to the entire preschool that they “can’t fucking find the truck”.

Some don’t handle it well.


american exceptionalism at work


this is not the gotcha you think it is, you just dropped your gluestick


you’ve taken a slightly smaller shit on the floor than the seo slop factory next to you. do you want a medal?


yes, you purist! :)


> Tech pundits still seem to commonly assume that UB is so fundamentally entangled in C++’s specification and programs that C++ will never be able to address enough UB to really matter.

- denial ← you are here

- anger

- bargaining

- depression

- acceptance

Cope, seethe, mald, etc.


At depression they'll figure out the codebase is full of const-casts and null-dereferences.

I completely agree this is essentially trying to polish a turd. The train left the station decades ago.


But you are never going to rewrite the gazillion or so lines of C++ out there, and currently being used in all sorts of production systems.

But if you have a better compiler that points out more of the problem UB areas in your codebase, then you have somewhere you can make a start towards reducing the issues and attack surface.

The perfect is often the enemy of the good.

(edit - typo)


I don't doubt that most of the gazillion or so lines of legacy C++ will never be rewritten. But critical infrastructure - and there's a lot of it - most certainly needs to be either rewritten in safer languages, or somehow proved correct, and starting new projects in C++ just seems to me to be an unwise move when there are mature safer alternatives like Rust.

Human civilization is now so totally dependent on fragile, buggy software, and active threats against that software are increasing so rapidly, that we will look back on this era as we do on the eras of exploding steam engines, collapsing medieval cathedrals, cities that were built out of flammable materials, or earthquake-unsafe buildings in fault zones.

This doesn't mean that safer C++ isn't a good idea; but it's also clear that C++ is unlikely ever to become a safe language; it's too riddled with holes, and the codebase built on those holes too vast, for all the problems to be fixed.


I'm very much in agreement - in principle. But we are where we are, and that gazillion lines is out there. We don't necessarily know which bits of it are running critical infrastructure - I'm not sure that we are even sure which bits of our IT infrastructure are critical, and we don't always know what problems are lurking in the code.

So yes, moving to safer alternatives is a very good thing. But that's going to take a long time and cost a lot of money, which we don't necessarily have. So if we can mitigate a bunch of the problems with improved C++, it is a definite win.

Let's face it, most of central Italy is still beautiful little stone towns, despite being in an earthquake zone. People still live there in stone houses because demolishing and rebuilding half the country is just not feasible. Our IT infrastructure is possibly in the same state.


A lot of very critical infrastructure is still not even rewritten into C, rewriting it all in Rust or whatever is a pipe dream. And before you say that the financial system is not critical, I'd like to see you stop relying on it.


Does COBOL have undefined behavior and lifetime or aliasing issues like C? I have never heard that it does, but don’t know enough to say for sure it doesn’t.

Avoiding a rewrite in C seems like a dodged bullet. Better for it to stay on older, safer languages.

Most COBOL rewrites I have heard of went to Java, a safe language.


At this rate, we're more likely to see major advancements in AI enabling verifiable rewrites than we are to see the C++ committee make any substantive efforts towards improving safety or ergonomics. And I'm only half-joking.


There is some really promising-looking work on using a mixture of LLMs and formal proof techniques and/or unit testing to perform reliable and idiomatic translation from unsafe to safe languages. See, for example, https://arxiv.org/abs/2503.12511v2, https://arxiv.org/abs/2409.10506, and https://arxiv.org/abs/2503.17741v1

The nice thing about this approach is that the LLMs don't need to be flawless for it to work, as the formal analysis / unit testing will keep their errors at bay - they just need to be good enough to eventually output something that passes the tests.


It starts by rewriting LLVM, GCC, CUDA, Vulkan, OpenGL, DirectX, Metal, POSIX, ... any candidates?

That is the problem, and why we need to fix C and C++, somehow.


A rewrite of such technologies would not fix their semantic problems and related architecture decisions around tractability, debugging, etc. For example, rewriting LLVM and GCC does not fix the underlying problem of missing code semantics leading to miscompilations around provenance optimizations in the backend. Likewise, Vulkan is not an ideal driver API (to argue on GPU performance), and let's not even start with OpenGL. POSIX is neither optimal, nor does it have a formal model for process creation (security). So far there is no good one for non-microkernels.

From my experience with C++ I expect 1. "verschlimmbessern" (making things worse while trying to improve them) due to missing edge cases, 2. only spatial problems, i.e. bounds checks, to be usable (because temporal ones are not even theoretically discussed), and 3. even higher language complexity with slower front-end compile times (unless a C++ v2 like what Herb is doing becomes available).


Most likely, but then the question is what technologies get to replace those with safer approaches.

Which, as proven by the failure to push safer whole-OS stacks, tends to fail on the political front, even if the technologies are capable of achieving the same.

I would have loved for Managed DirectX and XNA to stay around and not be replaced by DirectXTK, and for Singularity, Midori, Inferno, Oberon, ... to have gotten a place in the market, and so forth.


This is why Rust is the leading alternative to C/C++; it was designed from the start to both call and be called from other languages to enable progressive migration, rather than requiring an incompatible and impractical big bang change that would never happen.

The mitigations in the cited article are good too, but they don't replace the need for safer languages.


Writing new graphics drivers in Rust will definitely be helpful, and is starting to happen.

The safety of LLVM and GCC need not be a priority... they're not normally exposed to untrusted input. Also, it's a particularly hard area because the safety of generated code matters just as much as the safety of the compiler itself. However Cranelift is an interesting option.

No silver bullet here unfortunately... but writing new infrastructure in C or C++ should mostly be illegal.


Yeah, but API surface also needs to change for it to actually work.


If we want everything sensitive to be written in Rust, we need many more Rust programmers...

... and we also need to be more paranoid about what makes its way into globally significant crates, otherwise we just trade one class of vulnerabilities for another.


> But you are never going to rewrite the gazillion or so lines of C++ out there, and currently being used in all sorts of production systems.

We are, because we will have to, and the momentum is already gathering. Foundational tools and libraries are already being rewritten. More will follow.

> But if you have a better compiler that points out more of the problem UB areas in your codebase, then you have somewhere you can make a start towards reducing the issues and attack surface.

Sure. But fixing those is going to be harder and less effective than rewriting.


Seriously

wtf someone comes up with "X is UB" and even worse, "Since it's UB this gives a license to do whatever the f we want, including something that's clearly not at all what the dev intended"

No wonder the languages being developed to solve real problems by people with real jobs are moving forward


> Since it's UB this gives a license to do whatever the f we want, including something that's clearly not at all what the dev intended

That’s really not how it works.

Compilers rather work in terms of UB being constraints on the program, which they can then leverage for optimisations. All the misbehaviour is emergent from the compiler assuming UB doesn’t happen (because that’s what UB is).

Of note, Rust very much has UB, and hitting it is as bad as in C++, but the “safe” subset of the language is defined such that you should not be able to hit UB from there at all (such feasibility is what “soundness” is about, and why “unsoundness” is one of the few things justifying breaking backwards compatibility: it undermines the entire point of the language).


> Compilers rather work in terms of UB being constraints on the program, which they can then leverage for optimisations. All the misbehaviour is emergent from the compiler assuming UB doesn’t happen (because that’s what UB is).

I think a good way to view this is that optimization passes have invariants. A pass transforms code from one shape to another while ensuring that the program’s observable output stays the same. But for the transformation to be valid, certain invariants must be upheld, and if they are not, the transformed program can behave differently; that is exactly what UB licenses.


That’s part of it, but compilers also use the information more directly, especially in languages with inexpressive type systems. For example, dereferencing a null pointer is UB, so the compiler will tag a dereferenced pointer as “non-null”, then propagate this constraint and remove unnecessary checks (e.g. any check downstream from the dereference, or unconditionally leading to it).


> 3) The overreliance on dbus turns the “the unix philosophy” ;) away. Text as a universal communication medium, everything is a file, etc.

have you considered the reality that the "unix philosophy" results in incredibly brittle systems? byte streams ("""plain text""") are untyped and prone to mishaps.


Some of the most reliable systems in the world were unix ones.

SunOS was famous for being incredibly reliable, and it’s a purer unix than the current linux environment.

And even if we ignore that, the majority of the web was functioning and functionally reliable atop linux with these text stream systems.

bytestreams are less debuggable, which feels silly to say openly since we are all aware that higher level interpreted languages are easier to write/debug/test and iterate on, but we seem not to be bothered by this not being true for the underlying systems.

Systemd clearly is working, though; I’m just leveling a criticism of its opacity.


> bytestreams are less debuggable

Text streams are considered "better", because the standard UNIX userland (unlike e.g. DOS) provided excellent tools for dealing with text streams: grep, sed, awk, find, tr, etc and of course the shell itself.

But once you get your hands on excellent tools (like jq) for dealing with other kinds of data (like JSON), it turns out everything is even more powerful - you can now work with JSON as easily as with text, it just plugs into the existing ecosystem. But even though JSON has a human-readable text representation, it is no longer just text - it is dynamically-structured, but strongly-typed data. A JSON array is a JSON array, you can't just awk it.

There are byte stream formats (e.g. msgpack) that have feature parity with JSON. jq can't accept msgpack-encoded byte streams, but suppose a hypothetical tool, msgpack2json, is widely available - just plug it into jq. You're still working on the same level of abstraction (shell pipes), but easily dealing with complex byte streams.

And of course, what we understand as "text" in the modern era are UTF-8-encoded byte streams. If your "text" kit deals with ASCII rather than Unicode runes, it's that much less powerful, and likely full of painful edge cases that you now have to debug. (Note that UTF-8 is a 1992 thing; it was invented when UNIX was 20-something years old, and it's been around for 30+ years.)

Debuggability of anything is entirely up to your toolkit, the quality and comprehensiveness of that toolkit is what decides the battle.


I don't think the most reliable systems in the world were Unix ones. At least, if you compare systems at that time, you should compare with what the telephone operators were using. They had legal requirements you would not find in the computing world.

