X.org Security Advisory: Security issue in the X server (x.org)
43 points by HieronymusBosch on Feb 7, 2023 | 82 comments


For anybody wondering what caused the bug: they forgot to set a pointer to NULL after freeing it, so their NULL checks didn't catch the dangling pointer, which led to the use-after-free.


Are there even any distros who still run Xorg as root?


Does it matter? A user account compromise is as good as a root compromise in many desktop configurations given how often people type "sudo" and how much user data is available as, well, user.


>Does it matter?

Yes, because this vulnerability requires a user compromise to pull off.

I agree that it's already game over once someone has access to your user account.


AFAICT Ubuntu 22.04 and up-to-date Void run Xorg as root by default

Same for sddm/lxdm


Wayland is far from mature, even with xorg-xwayland. I personally use it with (fairly up-to-date) KDE/Plasma on an Intel Iris Xe Graphics, and still get some annoyances.


I think the comment you're replying to was emphasizing "as root". I can't remember the last time I saw that.


1) I’ve been daily driving Wayland for six years and the only pain I have is that IntelliJ is blurry at 4K

2) Nothing in the parent comment even mentioned Wayland, unless it has been edited. They're saying that people don't run X.org as root these days (which I am not certain of, since I haven't run it in so long)


Wayland works great; it's just KDE's implementation that leaves something to be desired.

If you try other desktops that use Wayland such as Gnome or Sway you will find far fewer issues.


I've been using Wayland only, forever; I'm not sure what you're missing?


For example: inter-application communication to allow time-tracking apps, automation, accessibility, and a thousand other use cases. (I can't live without Barrier and X over SSH.)

Wayland is designed to block this feature, which means it will always remain a niche: only useful for those running multiple monitors with different refresh rates. And even those people are limited to AMD GPUs. So that's a very small niche of a very small niche.


X over SSH can still work on Wayland; there is an app called waypipe that does the same thing.


And KDE just built a screen recorder for Wayland. But why bother rewriting all of these tools, when perfectly good, battle-tested solutions already exist in X? Many software makers are unwilling to rewrite their apps from the ground up just to work on a niche system.


I think the idea is that they want to get rid of X eventually, if or when that happens is anyone's guess.


Yes, they have made that quite clear. I think it is a real shame that so much time has to be spent fixing bugs and recreating software that already exists, rather than developing interesting new software on a stable platform.


So bad we abandoned all those great SVN tools and created new tools for git…


I think the needs you listed there would be considered the niche.

Most people just want something to display their windows. Those things are cool, sure, but they probably come at some cost.


It is not designed to block this feature; it just has absolutely nothing to do with a goddamn display protocol. The Linux userspace is definitely lacking in many other areas, but that is simply due to the lack of a unified solution (inherent in bazaar-style development) and an insane reliance on C for everything.


My Ubuntu 22.04 LTS (jammy) is running Xorg as root. I have just dist-upgraded since installing 18.04, so maybe fresh installs don't?


Maybe.


Memory unsafe language strikes again. How much longer are we going to put up with this?


Forever. Computers are fundamentally unsafe if you tell them to do something you shouldn't have. That's true for every language which has ever been useful.


Formal verification is a thing. Good architectures make undesirable actions rare. When's the last time you heard about a worm that spreads from one Android phone to the next without the owners realizing it?


Given that the majority of android phones have dozens of well known security flaws (because they don't get updates, or get them very late), it's kinda weird they're not all one massive botnet. Why is that?


Mostly because the underlying design makes that difficult. On a traditional desktop an application running as root can do anything, and one running as user can do slightly less than everything. On Android running as root doesn't even exist as a function, and running as user comes with significant limitations enforced structurally by low level SELinux code in the kernel and OS. Android malware can't infect an existing application because the OS both segregates their storage and verifies code signatures.

TPMs take this a step further and create not only logical but physical separation between the main CPU and many cryptographic functions.

Exploits still happen but you need a much longer chain of vulnerabilities to pull it off.


Are you claiming the entirety of Android is formally verified?

Do you have links to those papers?


No, I'm claiming Android has good architecture. If you combined good arch with verification then exploits would be very rare indeed.


This isn’t a meaningful response: “the computer does what you tell it to do” doesn’t mean the same thing as “the computer ends up in an exploitable state when you don’t tell it to do the right thing.”


Yet computers necessarily end up in an exploitable state when you don't tell them to do the right thing. Your back-end is written in Rust? If you don't do your auth checks right, one user can delete another user's content.


Sure, but given a fixed probability of introducing a bug at each possible place, "doing an auth check correctly" is much less likely to be buggy than "doing every memory allocation correctly", because it happens in orders of magnitude fewer places.


No, not necessarily: Rust is by no means the first language to prevent missing bounds checks from becoming exploitable memory unsafety, as just one small example.

All programs are exploitable; the objective is to limit the nature and scope of their exploitability.


Just because we can’t go 100% safe, we shouldn’t even try? That’s a logical fallacy right there.


I'm not saying we shouldn't make safer languages. Rust is okay.


This comment adds nothing tbh; you're mentioning a known problem, but repeating the problem and asking open-ended questions isn't going to solve it. The solutions are there and people are working on them, but this isn't an overnight change.


> The solutions are there and people are working on them

People keep writing new programs in C.


Xorg is a "new program"?


What other options are there?

It's the only systems level language with a formally verified compiler afaik.

rust is a no go because you can't trust the compiler's output (remember, we can't trust people to write correct code, so we obviously can't trust the compiler writers either).


The biggest formally verified program is tiny compared to a regular software suite. That method simply doesn't scale. Also, it "trusts the hardware", which itself can be (surprisingly) buggy. So there is no 100% safe solution; should we not even try to improve our software then?

Trusting Trust is an interesting paper, but it was never meant as a gotcha; it was meant as a "don't forget to look at the whole picture from time to time".


The vast majority of C programmers do not use a formally verified C compiler, and most of them wouldn't care about that anyway.

From a security perspective, demanding a formally verified C compiler is rearranging the deck chairs on the Titanic. Switching to a safer language like Rust will do much more to improve security, even if the compiler is not verified.


Both formally proven compilers and black-box SMT-based translation validation are a thing. Rust is not there yet, but both exist for C.


> People keep writing new programs in C.

And you plan to stop me?


Stop lamenting and help X.org; I am pretty sure they have nothing against it if you re-implement stuff in Rust.

Also, why did you forget to tell Linus in 1991 that he should implement Linux in something like Ada?


Until security liability clauses affect all kinds of software, just like they affect physical goods.

If people don't put up with buying a defective pair of shoes, they should behave the same towards software.


> How much longer are we going to put up with this?

Until you rewrite it all in Rust


Memory-safe languages rely on someone else's software and someone else's compiler/interpreter. Rust and Go are memory-safe languages because their compilers are (said to be) checking all memory usage, either statically or dynamically. It's still software written by someone, running on hardware designed by someone. If I write software in assembler, I can just blame myself and the hardware. If I do it in C, I can blame myself, the hardware, the compiler and the libraries. If I do it in Java, I need to add the JRE and all the Java libraries, plus C, etc.

Memory safeness is just an illusion to me or a way to avoid responsibilities in case of issues.


> If I do it in C, I can only blame myself.

Have you considered the fact that the chances of a single developer messing up memory management in an application with thousands of malloc/free/strcpy/strcat/sprintf calls are much higher than the chances of something breaking in a compiler that took smart people years to build, just to solve the problem of memory management?

Of course, if it breaks in C you can only blame yourself. But, in my opinion, it's better to blame the compiler/VM e.g. once a year, rather than blaming oneself once a week.


Yes, that's exactly why i have my nuclear reactor at home. I don't trust anybody else to run one.

Ever heard of the concepts called "abstraction" and "separation of concerns"?


And yet companies like Google have reported significant drops in memory-related vulnerabilities.


I wonder if there ever could be a C compiler doing all those checks. What is so radically different about Rust?


As a single example: C’s semantics prevent perfect alias analysis, meaning that a C compiler (no matter how smart) can never provide many of Rust’s basic guarantees without significantly altering the C language.


Lifetime analysis is only possible in Rust because Rust has lifetimes. Even in a toy Rust program where you didn't apparently write any lifetimes, they were implied, and the implied lifetimes were checked; in C you can't express the lifetime at all, so it can't be checked.

The most glorious example of this is comparing C23's #embed with Rust's include_bytes! macro. These are both ways to include a file full of data in your program, which, if you're a C programmer, you will know was really annoying to do cross-platform before #embed.

In C23 the data is just some literals like 1, 86, 69, 204 that are promised to be one unsigned byte each and you do with them whatever you want. How long do they live? -shrug- in some sense forever, but, then again maybe not, it depends.

But in Rust it's specifically &'static [u8; N] -- a reference to an array of N bytes which has the 'static lifetime, it lives potentially for the life of your program.


Blaming you for unsafe code doesn't make code safer. Better systems over personal blame.


Safe languages have restrictions like immutability, garbage collection, or "one mutator at a time" (to ensure temporal safety) outside of deliberately-awkward escape hatches (some unsafe). These tradeoffs are often workable, but some applications require no GC pauses, or placement initialization and non-allocating intrusive linked lists, which can make languages with ergonomic manual memory management a better choice for those use cases. "Safe languages are better than having to write correct code" is a thought-terminating cliche that dismisses the complex tradeoffs and limitations of safe languages.


We're in 2023, and I'm still reading about pieces of software that can be exploited because of poor memory management (either unexpected reads, unexpected writes, or incorrect deallocation).

When I started diving into memory issues two decades ago, the younger version of me really thought that, 20 years down the line, we would have figured out better ways to deal with memory other than those buggy malloc, free, sprintf and strcat.

Guess what? We have! But way too much C/C++ code is still around. The X window system is 40 years old, some parts of its code literally stand on toothpicks and there's no one left who still understands them, security flaws keep popping up every 1-2 months, yet it still powers nearly the entirety of the UIs that run on UNIX-based systems. Isn't that just insane?


If you think of code as infrastructure, it's not insane, it's the expected outcome.

Bridges were built differently in the 1960s than they are today. We are not tearing down all the old bridges and building new ones, that would be crazy. Instead we need to spend enough time and money on maintenance of the old bridges, and replace them when that becomes the better option. When we fail to conduct proper maintenance, accidents happen, like Ponte Morandi.


Unlike software, civil engineering is subjected to regulations.


We take people to court who build broken bridges.


We also bring people to court who endanger/kill people via software (bugs)...

I have also seen many bridges where no responsibility can be assigned (especially in the mountains).


> We also bring people to court who endanger/kill people via software (bugs)...

Only in high integrity computing scenarios.

All kinds of refunds and lawsuits from physical goods should apply to software as well.

Regarding the bridges, just because "guilt died alone" as we say back home, doesn't mean we should disregard the bigger picture.


Old bridges are being replaced all the time. Modern bridges aren't built like in the 1960s anymore, yet with code it's like we're still in 1990 whenever a new project starts.


Too many people think they can write memory safe code when they can't, and they can't because they're human beings, and humans cannot write correct C. Then, when you point out that programs written in some languages are immune to entire classes of vulnerability, these people write things like this:

> Memory safeness is just an illusion

We're not going to make any progress as an industry until we stop indulging people who think C is cool.


I'm not saying that it's impossible to break things in Rust/Go/Dart.

I'm saying that it's several orders of magnitude more difficult.

Bugs like these show that, no matter your level of experience in C/C++, you ALWAYS have a high chance of messing things up on memory access, because there are so many things you have to keep in mind while writing your code - and often these bugs are also very difficult to spot.

We need to favour languages that don't make things perfect (there's no such thing), but that at least make it much harder for things to break. Like Stroustrup said, we need guns that make it harder to shoot your own leg.


Hence why the industry needs a little legal push, just like when dealing with hazardous chemicals and sharp cutting tools.

Thankfully it is starting to take place.


It is an illusion, in the sense that memory doesn't actually work "safely", and safety needs to be built from lower level primitives.

Mind you, being an illusion is not a criticism, and doesn't make it less of a good idea. There are people who believe all perception is illusion. Do they stop perceiving?


No, it isn't. Dealing with chemicals and sharp tools is hazardous, hence why there are laws in place about work practices that must be obeyed.

High-integrity computing already has them; we just need a little push to apply them everywhere, with liability for non-compliance.


The laws are not a property of the universe. They're a convention. Illusory. I'm not advocating not having laws, or saying they don't have value.


They look pretty physical for those affected by them.


Apple has been migrating everything to Swift in part because of this. There is no longer a reason to be using C/C++ in production for OS use. It's fun for prototyping, to get something quick and dirty, but not something to be trusted in the field where it's going to be attacked.

When we design physical products we put them through rigorous testing to ensure they withstand abuse, and we change the materials and entire toolsets to create safer products.

Rust has significant benefits beyond just the memory part. Because the code describes the program in so much detail, the compiler can be significantly smarter, unlike in C/C++, where certain optimizations can't be used because there isn't enough information in the code.


And physical products are constantly being recalled because problems are found on them after they’ve been released.

The problem we have here isn’t that a 40 year old software was written in C. It is that open source doesn’t have the resources to replace a lot of its core infrastructure, like how commercial physical products do.

All of the discussions in this thread about lack of legislation, and even taking developers to court, completely miss the root problem: there simply isn't the manpower to make these changes.

So what’s the solution?

1. Get businesses to invest more into open source? They’ll just pivot to closed source projects and we’ll be in the same dilemma we are now but with far more bad code out there and far less ability to patch it.

2. We could get the governments to pay for open source development? That’s never going to be popular. Particularly in circles like HN which are generally against government intervention.

3. Or we could just keep patching older software in C when these bugs crop up.

…and this is exactly why we are in the situation we are in. Because it is the only practical solution. It might not be pretty and it might lead to repeated complaints about old code but until open source funding changes drastically, it’s the only option available to us.

Source: someone who writes open source software in memory safe languages but has to do so as a hobby because I also have a family to feed.


> We're in 2023, and I'm still reading about pieces of software that can be exploited because of poor memory management (either unexpected reads, unexpected writes, or incorrect deallocation).

Wait until 2123, when you will still read about them and Wayland will almost be ready.

Side note: can we please stop this nonsense whining about infrastructure? Yes, infrastructure is old. It is old because it works, and it works because it was carefully engineered and maintained.

A bug in some infrastructure code is not the end of the world; on the contrary, it's a sign of its vitality.

Yes, roads still have holes in them even though we invented them thousands of years ago; that doesn't make them less useful.

EDIT: rewriting X11 has proved to be almost a failure. After 15 years, in 2023, KDE/Wayland can't start plasmashell on my laptop, while X11 still runs without a glitch.

This constant lamentation about "safe languages", which sounds like the sirens in The Odyssey, and the promotion of "rewrite it!" (in this case X11, AGAIN!) in some shiny new language of the day, as if 40 years made all the difference between past and future, is the most serious form of delusion I have ever witnessed in my whole life.


40 years is a really long time in computing. If you add to that the fact that C was already antiquated when it was designed, it means a modern language that learnt from decades of practice and CS advances can be a truly different beast.


> 40 years is a really long time in computing

get used to it.

it's just the first time we've had 40 years in which to use the same software for 40 years.

LISP is just 63 years old, C 51, C++ only 38...

Everything is still brand new in computing.

> it means a modern language that learnt from decades of practice and CS

it's only a matter of computational speed, we knew about "advanced concepts" in the 30s of the last century already.


> it's only a matter of computational speed, we knew about "advanced concepts" in the 30s of the last century already.

Believe it or not, but type theory and CS in general have kept making progress since. I know lambda calculus existed in the 30s, but it was a lot more basic in many ways.

C is so primitive. It has a poor type system, it has no sum types (enums, in Rust), no closures, no type inference, no bounds checking even though it was already known to be beneficial; really, its strong point is that it's easy to compile. If you think these are just "computational speed", that's fine, but other people think these are qualitative improvements.


> C is so primitive. It has a poor type system, it has no sum types (enums, in Rust), no closures, no type inference, no bounds checking even though it was already known to be beneficial,

computational speed mattered.

compilers needed to be fast.

bounds checking is not really in the ballpark of "advancements in CS".

C could easily become a better language and evolve; the problem is that it needs to keep the billions of lines already written valid.

OTOH there were better languages already when C came out, C won in large part because it was simpler, smaller and a better ASM than ASM.

Rust is a safer language; the problem is that once we've rewritten everything in Rust, Rust will be as legacy as C, and new languages with better features, enabled by orders-of-magnitude-faster chips, will be available.

It's like when young people blame the older people, forgetting that they will be old too one day and that in most part life is keeping going on evolving things at a very slow pace, because inertia is the strongest force in societies.


The problem is that, as IT technologies become older, there will be fewer and fewer people who can understand what's going on, debug, and fix things. At that point, degradation becomes inevitable, and projects go into maintenance-only mode because adding new features is just too hard and risky.

Have you wondered why the X.Org server still fails at doing simple things like inferring the DPI and supporting retina displays without extra configuration? Or why you need external window compositors?

Have you ever tried to play with the X11 C API, just to land on cryptic documentation that hasn't been updated in 25 years? Or, worse, dive into the source code of the server, and find 40 years of workarounds that make you feel like taking one straw away will make the whole castle collapse? Note that the same also applies to other pieces of software (like xterm) that are way past their expiry date.

Software should not reach this stage and still be massively used in production. We should prevent projects like X.Org from becoming the next COBOL.


> Have you wondered why the X.Org server still fails at doing simple things like inferring the DPI

At least on my monitor, X seems to have a good idea of the display size:

dimensions: 1920x1080 pixels (508x285 millimeters)

This should be all the information any software needs to calculate DPI. "DPI" is configured to 96 for backwards compatibility, but if a program actually needs the real value, it can query it.

> and supporting retina displays without extra configuration?

"Retina display" is just a marketing term for a display with high pixel density, and supporting those is really a matter for the applications (they need to handle scaling so that content appears at the same size as on a normal or low-pixel-density display). Unless the display server works with device-independent units and doesn't deal with any pixel content whatsoever, you always need application support for that.

> Or why you need external window compositors?

(I assume you mean desktop compositors like Compiz, etc.)

Because it is an optional feature introduced much later than X's inception, it cannot work on every single piece of hardware that can run X. And since X has a mantra of providing mechanism instead of policy, it only provided enough functionality to implement compositors, without forcing one or dictating what sort of functionality they'd provide (which at the time resulted in a multitude of different compositors with various effects and other functionality).

It is also an optional feature, which in my eyes is a great thing, as I don't like the input lag compositors add even to simple desktop interactions like moving windows around (even with Wayland, where the entire thing is designed with desktop composition in mind).


> At that point, degradation becomes inevitable, and projects go in maintenance-only mode because adding new features is just too hard and risky.

I repeat it: Wayland is 15 years old and it's written in C.

I'm waiting for a rewrite in Rust or whatever, so in 15 years I will still be using X11, because it's the only thing that works reliably across my devices, old and new.

> Have you ever tried to play with the X11 C API

yes, since 1996.

> and find 40 years of workarounds

that's called "SOLUTIONS" in the professional world.

The real world is full of edge cases; if your code deals with none of them, your solution is probably fragile.

https://www.luckymethod.com/2013/03/the-big-redesign-in-the-...

> Software should not get into this stage and still be massively used in production

> We should avoid projects like X.Org from becoming the next COBOL.

That's the wrong way to look at it.

We should ask ourselves: why is COBOL still in use after so long, while a JavaScript framework lasts 6 months and is deprecated after 9?

Can I rely long term on that thing or not?

If I had to guess, COBOL will still be in use 20 years from now. The demise of old technologies has been predicted so many times that it's now a joke in the industry.

COBOL was already old and on its way out when I was in high school, studying the languages that would SURELY replace it, at the beginning of the '90s, almost 35 years ago.

I understand that a young industry, one that mostly runs on young people because of the ageism in SV, feels it can fix everything in a few months, but that's the illusion: we collectively can't. The longer a technology has been in place, the longer it will stay relevant. There's also a law about it; I don't remember the name now.


Wayland is a fking protocol, it is not written in anything. If you can’t get that right then there is no point in any discussion.


X11 is a protocol too, my pedantic friend.

All the relevant Wayland implementations right now are written in C or C++.

All the beautiful and safe languages implementations out there are used by virtually no one.

After 15 years, X11 still works better (in the sense of on more hardware, with less headaches) than the new kid on the block.

There must be a reason why, and I strongly believe it's not the language the two protocols were mostly implemented in, because it's the same.

I believe, though I could be wrong, that rewriting a fundamental piece of software infrastructure is not as easy as people imagine. Implementing 90% is easy; making the remaining 10% work is hard; and making the switch worthwhile is where all the dreams of a perfect world full of rainbows and unicorns usually go to die.

And that's usually the moment when the "rewrite" gets "rewritten", to not admit failure.

I would love to switch to Wayland, if only it worked all the time.

I need to get things done unfortunately, I can't spend months debugging issues that should not be there in the first place, like drawing a few buttons and windows on the screen reliably.


> we would have figured out better ways to deal with memory other than those buggy malloc, free

It's not malloc and free that are buggy here. Fundamentally, there is some layer of the system that needs to work like that.

What people tend to advocate when they say they don't like this is keeping the size of code that needs to reason that way small and restricted. I.e. not writing an entire X server that way. But the allocator will still be there. Something needs to chop up the buffers in an unsafe way somewhere.


Of course, we all agree that on a low level the memory black magic still needs to happen.

But the developer shouldn't be in charge of it, just like today's developers don't decide in which CPU register a certain variable is stored, or how to overwrite the instruction pointer when performing a function call.

I mean, of course there will always be cases where a developer needs to go this low - think of OS/firmware developers. But that doesn't apply to 99.9% of the developers out there.

Most of the developers out there want a programming language where they can easily declare an array, append or remove stuff from it without breaking anything, and when there's no piece of code left that references that piece of memory then it should be deallocated - in such a way that should be invisible to the developer.

So C/C++ nowadays cater to the 0.1% who need to tinker with every bit of memory, and those who love those languages want us to believe that their use case is the same as the remaining 99.9%'s.


> I mean, of course there will always be cases where a developer needs to go this low

it made total sense when writing X11, though

which is what made X11 last so long and still totally work

we'll see in how many years Wayland gets rewritten because it's legacy; it still doesn't work everywhere after 15 years of trying hard.



