Getting rid of something just because it's old does not seem to be a valid justification. Sure, it's great to rewrite something and make it better, but unless the new thing supports all of the legacy display devices, modes, and protocols, you'll lose something when you "bury" the legacy project.
1. It's ancient, and was made for the graphical requirements of computers from a time before Windows 2.0 even hit shelves. Go back and look at Windows 1.0 - that's the kind of graphics this was made for.
2. Almost nobody understands the code. The contributors have openly said they are probably among the only dozen or so people who could ever work on it.
3. Those contributors hate the job and have basically abandoned Xorg since 2018, with only one minor release in 2021. Xorg is abandonware nowadays. It still works - but it's abandonware all the same.
4. X11's design was feature creep from the very beginning. At one point it even handled printing, before CUPS was invented and that part was ripped out. The fact that it tried to be many things at once, then was "simplified" into "only" being a graphics server, has left the code in an abysmal state [https://www.x.org/archive/X11R6.9.0/doc/html/Xprt.1.html].
5. Nobody knows how many security vulnerabilities are in X11. When you have a decades-old codebase in C, anything can happen - especially when, for most of its existence, it was never fuzzed and was barely testable. In 2013, a single security researcher found over 120 bugs in just one part of X11 (GLX) [https://media.ccc.de/v/30C3_-_5499_-_en_-_saal_1_-_201312291...]. Just last year, two major security bugs were found, both dating back to February 1988 [Note 1, https://lists.x.org/archives/xorg/2023-October/061506.html].
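The fuzzing point is worth making concrete. Here's a toy sketch (not real X11 code - `parse_request` is a made-up stand-in for a request handler) of how even naive random fuzzing surfaces crashes in a parser that trusts its input the way 1980s C code did:

```python
import random

# Hypothetical toy parser standing in for an X11 request handler.
# The "trust the claimed length" pattern mirrors the class of bug
# fuzzers keep finding in old protocol code.
def parse_request(data: bytes) -> str:
    opcode = data[0]                   # crashes (IndexError) on empty input
    length = data[1]                   # claimed payload length, never validated
    payload = data[2:2 + length]
    if len(payload) != length:
        raise ValueError("truncated request")  # the bug a fuzzer surfaces
    return f"opcode={opcode} len={length}"

# A minimal random fuzz loop: throw garbage at the parser, count failures.
random.seed(0)  # fixed seed so the run is reproducible
crashes = 0
for _ in range(1000):
    blob = bytes(random.randrange(256) for _ in range(random.randrange(8)))
    try:
        parse_request(blob)
    except (IndexError, ValueError):
        crashes += 1

print(f"{crashes} crashing inputs out of 1000")
```

Real fuzzers (AFL, libFuzzer) are coverage-guided rather than purely random, but the principle is the same - and X11's request parsers went decades without even this much scrutiny.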
It's time to move on. Yes, some things will be lost. That's the cost of progress. We lost the ability to run 16-bit DOS programs decades ago. Ironically, X11 is older than many DOS programs.
[Note 1] Living proof that "open source" does not necessarily mean "more secure," especially when the source code is so complex that "security by obscurity" becomes the de facto security strategy. Thirty-five years is longer than many people in this forum have been alive.
One more thing, even though this is going to be a controversial point:
Some might say, "But Wayland breaks XYZ, or can't do XYZ!"
I'm just going to quote Adam Jackson, who was the project owner for X.org at Red Hat:
"I'm of the opinion that keeping xfree86 alive as a viable alternative since Wayland started getting real traction in 2010ish is part of the reason those are still issues; time and effort that could have gone into Wayland has been diverted into xfree86."
Controversial is underselling it. "Just break things for users to try and get them to give up the working thing and move to our half-baked replacement" is everything that's wrong with Wayland.
I think you're missing the point. Xorg would have broken in every way Wayland has, and far worse, had it not been consuming all of the resources that should have gone into Wayland. Xorg is like a 1987 Hyundai Excel held together with duct tape, being replaced with a 2008 Toyota Yaris - with people complaining that the Yaris has less cargo space, so clearly we need 7 more rolls of duct tape.
I don't think so; Xorg would have (and arguably has, in fact) broken in a completely different way than Wayland. Xorg sucks in that 1. its underlying model of display hardware doesn't really map to how modern computers/GPUs work, and 2. its entire protocol has 40 years of cruft. Wayland sucks in that it declared everything beyond drawing pixels an optional extension, and then took 16 years to implement enough extensions to actually compete with X on features (hence my "half-baked" dig). Or more succinctly: X sucks on the backend, Wayland sucks on the frontend. In your analogy, X is the 1987 Hyundai Excel, and Wayland was born a moped - fast, fuel efficient, and useless as a car replacement.

What I firmly believe the X devs should have done (with 16 years of hindsight) is put all their initial effort into replacing the graphics backend while keeping Xorg for users - basically, make rootful XWayland the only Xorg server on Linux - in order to quickly burn off a lot of the parts that were painful to maintain, with minimal impact on users. Then, if the rest of X really needed to go, they should have written all the protocols needed to implement at least GNOME and KDE before releasing anything, so we didn't spend years on stupid "we have 3 different incompatible screenshot APIs" games. Instead, they shipped a minimum "viable" product and got upset when users didn't want to switch.