Ask HN: Why is there no performant remote desktop for Mac/Linux?
198 points by asiachick on Aug 19, 2022 | 214 comments
IIUC, RDP (Microsoft's remote desktop protocol) sends draw commands across. It's the default way to access remote desktops on Windows. Conversely, AFAICT, the default way on MacOS and Linux is VNC, which IIUC sends all the pixels (with compression). I've noticed for years that I can work remotely, edit code, etc. over RDP. I've done it from 8,000 miles away and had hardly any lag. Conversely, I'm trying to share between two computers right next to each other on the same network via VNC and it's horribly laggy.

Is this just not an itch anyone has wanted to scratch in the last 25 years? On Linux you can maybe XWindows your way to a faster connection, but what about Mac? Also, RDP seems to let me run GPU-based stuff, whereas with XWindows you're actually not seeing the computer's display.



0. RDP is not as simple as sending draw commands. It has eg commands that correspond to scrolling regions of the screen to save on network use, a framerate limit (25fps I think), and allows some colour space reduction to reduce bandwidth too. I think it does a bunch of raster things (eg maybe caching floating windows like right-click menus). I think the Windows server implementation can take advantage of information about the composition of the screen from Windows.

1. There is some trade-off of latency for bandwidth: it may take more time to figure out a small change to send over the network. Looking at API use from eg X may help with old apps that make small updates, but more modern apps (or even modern fonts), which just render to GPU buffers and composite, are less amenable to this. Very modern apps that use special APIs to do lower latency scrolling/resize may be a little better.

3. A lot of the time for Linux the solution is to use ssh and terminal apps, as they tend to make smaller updates and require less bandwidth. You can also try mosh to compensate for high latency connections. Text editors can work ok in terminals, especially fancy modern ones with eg mouse support. And for web things you can set up a SOCKS proxy over ssh, which I think can work for a lot of apps that are really just websites (a minimal sketch follows after point 4). So this may be part of the reason: fewer people see Remote Desktop as necessary.

4. I’ve had reasonable success with xrdp on the server and a Windows client. One needed to select 32-bit colour to get a better protocol version, and turning off double-buffering in some apps (eg emacs) helped. But that was over a wired high bandwidth low latency connection.
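
A minimal sketch of the ssh/mosh/SOCKS setup from point 3 (hostname and port are placeholders):

  # terminal work that tolerates a laggy link
  mosh user@remote-host

  # dynamic SOCKS proxy on local port 1080; point the browser's proxy settings at it
  ssh -N -D 1080 user@remote-host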


3 is the cart before the horse.

"Real" terminal apps (i.e. command line utilities) are fine the way they are and you wouldn't use them any differently locally. They don't even need to care if stdout is a tty, and they reap all the benefits from a CLI because you can script them and pipe them into grep/awk/etc...

But as soon as we skid into ncurses territory, we're stuck with ANSI terminal codes to make some sort of graphical UI. Those apps don't benefit from being in a terminal (e.g. you can't grep htop output), and there's obviously some desire to make something visually appealing. Using in-band magic escape codes from the 1970s with severely limited functionality for that is just a workaround for the lack of a standard remote desktop protocol that actually works, is reasonably efficient and SSH-friendly.

Sure, the fact that you can make ncurses apps feeds back into complacency with the status quo, but that's always true. The enemy of great is good enough.

The other alternative is making a web UI, which in principle has the right architecture (run the UI on the client side and only push/pull meaningful data to/from the server), but unfortunately that's tied to HTTP(s), which isn't exactly SSH-friendly. What we'd need is a transport-agnostic version of HTTP that can be given a socket and communicate with the server through that without demanding to know how to open more connections to the server and demanding that those connections be made through AF_INET or AF_INET6 sockets.

So again, some server software is implemented as a web UI (like CUPS), but that's a workaround for the lack of something more straightforward that's also a de-facto standard.


I've tunneled web UIs before with good results, for example to remotely adjust Syncthing settings. A slightly more user-friendly way of setting up tunnels and service discovery would help.
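
A minimal example of such a tunnel (the local port is arbitrary; Syncthing's GUI defaults to 8384). Afterwards the remote GUI is reachable at http://localhost:8385:

  ssh -N -L 8385:localhost:8384 user@remote-host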


I have been working on one for the past two/three months to scratch this itch. I will probably release it soon (and hopefully shill it on HN!). In the meantime, NoMachine is a very good remote desktop software (macOS and Linux), and possibly Sunshine (GameStream host), but basically I'm not sure why a more popular one doesn't exist.

In essence, a good remote desktop software will use video encoding/decoding (h264/265 for both ways being very fast) to encode captured frames of the desktop, and a good transport protocol over the network with good tuning parameters to achieve low latency (which is what mine does). I believe this is what NoMachine and Parsec do and why they are so good (along with NICE DCV). From my work I've found that video processing libraries/techniques are extremely poorly documented (think the libav* family of libraries), which makes it a very difficult segment to conquer because of what I perceive as honestly massive gatekeeping (or by Hanlon's razor perhaps laziness). There's nothing impossible about making a remote desktop software (I can say this since I'm doing it right now) but I can say it's harder than it has to be.
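
As a rough sketch of that capture-encode-stream pipeline (not what NoMachine or Parsec actually run; the host and port are placeholders, and a real product also needs input forwarding and a smarter transport than plain UDP):

  ffmpeg -f x11grab -framerate 60 -i :0.0 \
    -c:v libx264 -preset ultrafast -tune zerolatency \
    -f mpegts udp://client-host:5000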


NoMachine is ok...if you haven't spent much time in RDP. Windows RDP is so far ahead of anything else out there it puts them all to shame. I use RDP for work all day long and I couldn't find a workable solution that would have a similar experience on Mac/Linux. And these machines are all on the local network!


I've literally never found anything better than the NX-based NoMachine (I think version 3 was the last one to support it). I'll never understand why they downgraded themselves to yet another desktop video stream. If you can find a copy of the older version or one of the open source replacements (freenx? x2go?) it's worth it.

I was able to use my home computer lag-free at my high school when it was at home sitting behind a shitty ADSL connection. Later I used it at college with my computer at the dorm with a 100k/sec upload cap (enforced to discourage student file sharing). It pulled some dirty tricks with X forwarding, caching pixmaps, and jpegging the hell out of the individual elements, but it really felt like the computer I was using was right there.


I've found, over NoMachine and even RDP, that Parsec is hands down the best solution that I've encountered because of how low-latency it is. It might not support all the input methods/etc. that RDP does (IIUC RDP has good touchscreen support), but Parsec is faster than everything else.


This matches my experience. With a Gigabit Ethernet connection, I find it very difficult to distinguish a desktop streamed via Parsec from a local one. RDP is noticeably laggy in comparison. I would even say that Parsec over a good internet connection (a few tens of Mbit/s, ~10 ms ping) feels more responsive than RDP over LAN.

What I don't like is that Parsec relies on third-party servers for the session initiation, and that streaming Linux desktops is currently not supported. Maybe the open source implementations of Nvidia's GameStream (Moonlight and Sunshine) could be an alternative.


thanks for this input! Parsec looks promising


Interestingly the RDP client on the mac app store is really good.

It's better than the Windows client because it just works as another desktop in the Mac OS, with no bar at the top, and you can jump into and out of various remote sessions quickly just by switching desktops with Cmd+left or right arrows. It's a really simple, nice to use implementation.


I requested that feature for Windows RDP client over 2 years ago to no avail. Sort of surprising they made it work like this on Mac, but not on Windows.


This was the case 10 years ago too iirc. Windows RDP is excellent. Wish Linux RDP worked as well.


For me it's the opposite when it comes to LAN. VNC is buttery smooth with no perceivable delay, while RDP still seems to cap at some low-ish fps.

For low bandwidth/high latency links, RDP easily wins.


How is Parsec actually in comparison? Never tried RDP but Parsec just blew my mind


> In essence, a good remote desktop software will use video encoding/decoding (h264/265 for both ways being very fast) to encode captured frames of the desktop, and a good transport protocol over the network with good tuning parameters to achieve low latency (which is what mine does). I believe this is what NoMachine and Parsec do and why they are so good (along with NICE DCV)

They do, but the paid version of NoMachine has?/had? a mode called X11 vector graphics, explained better by NXdev here [0]. I've used it from 2016 until 2021 and for classic applications (ie apps not requiring a GPU) it is incredibly fast. To the point that I used to forget I was working remotely.

[0] https://news.ycombinator.com/threads?id=NXdev


I found x2go to be a nice and performant FOSS replacement for NX/NoMachine.


This. x2go has served me well and can be tuned to be even more performant with some X11 settings changes.


> NoMachine is a very good remote desktop software

Glad someone mentioned it. I wondered if NX was still a thing...

fast

secure (over ssh)

NoMachine blows past RDP every day of the week and twice on Sundays.


This was a solved problem 30 years ago in the Unix and later Linux worlds. Completely remote X based terminals/thin clients were quite common. My impression is that things have mostly regressed since then, probably due to neglect since it's a kind of functionality that not many people seem to be interested in these days.


> probably due to neglect

Kinda yes kinda no? Mostly from X drawing commands being a shit way to represent modern UI's and toolkits. So everyone draws stuff on their own, since it's far more flexible and faster.

Same is true on Windows. RDP is streaming video far more often than it streams drawing commands.

You could maybe build something that streams GPU commands, though. Use something like VirtIO-GPU + GFXSTREAM ( https://www.phoronix.com/news/VirtIO-Context-Type ) to stream to an entirely different computer instead of just to the VM host?


> Mostly from X drawing commands being a shit way to represent modern UI's and toolkits.

This is really fundamentally wrong. X drawing commands are still just fine for representing modern within-app UIs and toolkits.

The two things that have changed are the underlying GPU technology, which has shifted towards abstractions and APIs mostly targeting gaming and video, and desktop level UI stuff, which is expected to include compositing these days.

However, it's not as if the other remote desktop systems have really changed much to reflect these developments either.


> This is really fundamentally wrong. X drawing commands are still just fine for representing modern within-app UIs and toolkits.

This is really fundamentally wrong. Nobody has been updating X commands for modern effects (eg, blurs, advanced blend modes, etc..). Heck you can't even do round-rect clipping. You have to generate a clip mask pixmap instead. At which point I might as well just generate the final pixmap and forget about X anything. Especially since xlib doesn't even do anti aliasing. Like come the fuck on, it's friggin useless. Can't even handle text adequately!

There's a reason Wayland dumped it all entirely. If it had been maintained and updated sure maybe. But it wasn't. It's a drawing library straight out of the 90s, absolute abandonware


Well, on the one hand, you're right. But on the other hand, X is still being used to do all that stuff (not necessarily using the body of its drawing command set). So even if X's drawing commands themselves are less relevant than they once were, X itself is still entirely capable of presenting GUIs with blurs, anti-aliased text etc. etc.


Well sure but it's just compositing pixmaps from somewhere else. It's just a bad compositor at this point


> so everyone draws stuff on their own, since it's far more flexible and faster

Faster for which usage? Locally yes, but not remotely where it's worse..

In the beginning Unix desktops were optimised for LAN use, and since then they have been optimised for standalone PCs, to the dismay of those who access their Linux desktop remotely.


> Faster for which usage? Locally yes, but not remotely where it's worse..

How do you know it's actually worse for remote? I can render my entire UI on the GPU & route that straight into a video encoder incredibly quickly. Faster than XLib can draw in the first place.

Now there's the bandwidth question there yeah. The video stream takes X Mbps, and the Xlib stream takes Y Mbps. Unless you've measured it recently, you don't actually know if X or Y is smaller. You can make a guess that xlibs is smaller, sure. After all it's vector, right? Except it isn't, not really. Text is all pixmaps. Images are all pixmaps. Path clipping is all pixmaps. Are you compressing all those pixmaps before sending? If so, to what? And is that faster than your GPUs h264 encoder? Almost certainly not. Is it smaller? Also probably not really.

And I don't know about you, but I got way more bandwidth than I have latency for my Internet connection. So whether or not X or Y is smaller doesn't actually matter to me. What matters is which gives me less end to end latency, which all but certainly is the video encoder & decoder path, which are incredibly well optimized and hardware accelerated.


> How do you know it's actually worse for remote?

Because I've tried `ssh -X` and in my experience it sucks for any "modern" program (that I've used; I suppose there's bias in my sample).


"everyone" is just not true, Qt apps can still send X11 draw commands


You can easily draw any desktop with X11 draw commands. They’ll just be

  Blit
  Blit
  Blit
  Blit
  Blit
  Blit
  Blit
The x11 commands allow you to draw lines, ugly patterns, ugly fonts and blit pixmaps. All of them except blitting pixmaps are basically not useful for a modern desktop.


This is literally false for Qt. Here's the code: https://github.com/qt/qtbase/blob/dev/src/plugins/platforms/...

Here's e.g. dolphin rendered with it here over ssh: https://imgur.com/bLnop55

> All of them except blitting pixmaps are basically not useful for a modern desktop.

that's ridiculous. Most software is basically a succession of drawLines / drawRect calls with the occasional pixmap.


How much of that dolphin UI is hitting X drawing commands, which appears from your source link to be the simple case only, and how many are rendered to pixmaps and only composited by X?

All of the text, for example, is rendered by Qt and sent as pixmaps. The drawTextItem path doesn't ever attempt to use XDrawText.


What simple case ? Every call to drawRect/drawLine that I can see at a glance will end up inside X functions - e.g. calls to QPaintEngine::drawRects will just end up in the function below. And that's the huge majority of the UI.

Anyways, what matters to me is that I can resize the window without any lag and compression - it immediately becomes much more laggy and redraw-artefact-y if I disable X11 painting when I access my remote machines so it's pretty obvious when things are using it


It’s the simple case of erasing the buffer before blitting other buffers onto it. I hope you agree that clearing is not the interesting part of drawing a window.


> It’s the simple case of erasing the buffer before blitting other buffers onto it.

... but those buffers are going to be painted with the same calls to drawRect. If I look at my software's draw calls, the huge majority are calls to such drawLine / drawRect / drawRects with the occasional small text or pixmap.


> probably due to neglect since it's a kind of functionality that not many people seem to be interested in these days

X11 is at its heart an asynchronous distributed systems protocol (quite similar to Erlang Dist) that happens to output to the screen as a side effect, but a lot of libraries and toolkits (including xlib) provide synchronous abstractions on top. When applications use synchronous abstractions frequently, you end up waiting for many roundtrips and the UI gets painfully slow. Some app developers regularly test with remote X over high latency and/or low bandwidth links, and some don't, and it's pretty easy to tell which is which. Some apps are painful even on a LAN; lots of roundtrips add up.

If there was appetite to do the work, one could imagine an extension to push client-side rendered window images across in a compressed format, but xorg developers have mostly moved on. There's nx and other external compressors, but there's a missed opportunity, IMHO. Maybe waypipe will fill this void eventually?


X11 is hopeless. It was a turd 30 years ago. It still is a turd. If it was as simple as "make your toolkit asynchronous" then it would have been done a long time ago.

Take Xterm, for example. I did benchmarks over remote systems long ago. When it came to performance over the network it was absolutely trounced by Gnome-Terminal. And Gnome-Terminal was a lot slower then than it is now. The reason was that Gnome optimized text output on scrolling text in a way that minimized the latency impact. And it still sucked.

There was a reason that it never saw widespread use in the enterprise outside of very niche sysadmin users on pretty much local area networks.

Whereas Windows has been used to support thousands of users in a single environment with almost entirely remote applications for a very, very long time now. For most users and most applications there is no difference in functionality between remote and local applications.

Whereas X11 will never be able to accomplish that unless the computer is sitting in the room next to you. NoMachine and similar tech can speed it up, but that isn't really X11, is it?


>Whereas Windows has been used to support

It's turning things upside down. Windows came to be used remotely because it became dominant, not vice versa. It won the enterprise segment before even remote support, not to mention remote apps, became widespread. And Windows' success wasn't that much related to its technical excellence, as most admins from the 90s would say.


X allows me to run a program on one computer and see it appear on a local desktop, not access a remote desktop from random machines. The former is simply not a use case I am interested in, as I can pretty much always run the graphical program locally instead; in contrast, the latter is something I have found to be extremely useful in numerous situations and even on a near-daily basis back when my life was organized differently... and X simply does not do that, which is why people use solutions like Xvnc (which runs a local X server on the remote machine and then provides you VNC protocol access to its framebuffer).


X certainly does do that, the protocol is called XDMCP and it's been around since the 80's.

"I can pretty much always run the graphical program locally instead"

Then why do you need a remote desktop? This is a really strange comment, because you shouldn't ever care whether the window manager is local or remote (as with XDMCP) - the only thing that should matter is whether the apps are local or remote. Whatever it is you're doing with apps on a remote desktop can be done better by simply displaying the app remotely on your own desktop.

What exactly is the use case for needing to run a second window manager remotely? It's such a strange thing to say that you need.


Right now I am sitting at "my computer". On "my computer" I have a ton of running state: I have a bunch of open web browser windows with a ton of tabs in each, I have open terminal sessions that are currently displaying the output of histories I am interested in or are actively running programs that I'm working with... hell: I have background computations being managed by various programs, such as video editing and encoding software, that I am interacting with. You might choose to call this state my "desktop", and I want to have "remote desktop" access.

Now, I go somewhere else. While I'm out, I realize "oh, I need to change the queue on my encoding software" or I want to talk about some web page I have open. This is what Remote Desktop on Windows is doing for me: I don't connect to my computer remotely and then have some abstract notion of "I guess I can run software at home but see it here"... I actually am able to make that computer I'm at--wherever it is, and whatever it is (often it is my phone! the RDP clients for iOS and Android are excellent)--equivalent to the computer I have at home.

As far as I have ever understood--and I'm NOT an expert at X, though I've been using it for 25 years in various capacities ;P--X fundamentally doesn't do that: X clients connect to X servers, and if the X server dies then the X client also dies. The Window Manager--the behavior and interaction of which in this system I, in fact, care about deeply--also connects to the server. I have never come across a use case where I'd want to run a graphical app on one computer and merely have it display on a local computer: in that case, I can simply run that app locally.

However, what has given me superpowers is the ability to run the entire stack remotely and then "connect to it" from a local computer. I don't want to, while I'm out, run a new web browser process at home and have it appear on whatever device I'm at... I want to have the web browser I've already been running--along with all of its windows, all of their tabs, and their running state that have been actively continuing to execute and do stuff while I'm out--appear on the computer I'm at, making it feel like every computer becomes "my computer".


Ah, you want to connect to existing running apps.

Generally speaking, the workflows I try to use center around sharing data between hosts. The tabs in your web browser should be in your history and that's synced between your systems, etc etc.

Regarding rehoming X apps, X can do that with tools like xmove. I haven't ever really bothered to use them because I have no use for that use case, but I do know they exist.

I've always used screen for re-attaching terminals. Batch jobs and background computations are generally logging to files which I can check or interact with from anywhere. That sort of stuff isn't ever connected to a graphical interface in my experience.

The benefit of remote display was, I think, more clear back when there was a greater diversity between systems. I would quite often run graphical tools unique to one unix on my own system which was fundamentally incompatible (system-wise, cpu arch-wise, etc). Nowadays it's probably all x86 linux, so who cares, right?


I think Xpra might work? It's basically like screen/tmux for X. You start an app with Xpra and it creates an X session. You can attach to that locally or remotely (eg over ssh).

https://github.com/Xpra-org/xpra/blob/master/docs/Usage/READ...

That contains some examples of how it can be used. I don't use it much, and I don't know if there are simple GUIs to make it easy to use.
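
A minimal sketch of the workflow (the display number and hostname are placeholders; the ssh:// attach syntax is from recent xpra versions):

  # on the remote host: start a persistent session running xterm
  xpra start :100 --start=xterm

  # on the local machine: attach over ssh; detach and reattach at will
  xpra attach ssh://user@remote-host/100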


You want Xrdp or Xvnc. Whether they are as performant as you want, I have no idea, but each of them can maintain the session and you simply connect to it with the client.


I'm not sure what you mean. X can certainly run a full remote desktop, or at least it could, I'm not sure how well it works with current versions of KDE and Gnome.

If you mean connect remotely to a desktop that someone else is using on another machine, yes that is indeed a different use case.


I mean the latter, as I maintain the former isn't useful. The people who want to remote desktop into their Windows computer generally aren't asking about this feature so they can simply get fresh access to a remote computer as that's generally not how remote desktop is configured: it is so they can leave all of their programs running on the remote computer and get remote access to it.


how is the former not useful? At the end of the day there's some app you're running in that RDP session, right? X simply allows you to run that app remotely and display it locally.

Eg, my kvm (virtualization) host runs RHEL and no desktop environment, but I can easily run virt-manager using X forwarding over SSH when I'm creating a new VM, and a GUI is easier than writing a domain XML.

In fact I've forwarded virt-manager to Linux, macOS (XQuartz) and Windows (WSL w/ X server).


I mean, OK, it isn't never useful: maybe you are sitting at a computer that really doesn't have the power to do the thing you want to do, or maybe the thing you want to do really needs access to very large files and it is better to run the computation next to the data.

However, this isn't why people want to use the "remote desktop" protocol. The app that I'm accessing over RDP isn't something I'm running after I connect to RDP: virtually all of the time it is an app that has been running for weeks and will remain running for weeks to come.

I am using my computer right now, after having woken up from a nap. This computer is doing all the things I left it doing when I went to take that nap. It has a bunch of state for all of these running applications, and if every time I walked away from my computer all of that state disappeared, I'd be annoyed, as I want to be able to sit down at my computer and do five minutes of work and then go off and do something else and come back and do five minutes more of work.

This is the primary use case for the remote desktop protocol: I leave a bunch of software running on my desktop and then I go somewhere else and I can remotely log in to my desktop and see all of my running programs. If I have spotty wifi and I only manage to get connected for 30 seconds before I get disconnected, I can just reconnect and everything is exactly where it was, as I wasn't logging in to run anything: I'm accessing things that are already running.

So, no: "at the end of the day" these are fundamentally different use cases, and the former use case simply isn't something I feel like I ever want to do. Hell: it isn't even what I generally want with a terminal! (I had always set myself up so I had running static screen sessions going that I'd remotely connect into, and then I use either autossh or a little bash function running ssh in a loop to reconnect to that remote screen session so when I open my laptop it steals my terminal from wherever I was using it last to my local display.)

The premise of remote desktop connections is something which kind of changes how you use your computer: once you try it and get used to it, you start to treat the terminal you are at as a commodity, as every computer not only has access to your files but every computer has the same state and lets you immediately jump right in to where you left off with your work. In contrast, I'd claim that wanting to, while you are out, run an app on a computer you aren't at is a niche use case.


There's this https://xpra.org for X applications, though I haven't tried for awhile.


> as I want to be able to sit down at my computer and do five minutes of work and then go off and do something else and come back and do five minutes more of work.

A seamless experience of passing off a single session across multiple entry points seems like the holy grail of personal computing. I'm willing to accept slightly different experiences between my MacBook, Windows desktop, and iPhone - but I want to feel like I'm picking up where I left off.

In some ways this is starting to happen within certain suites of products, for example, Chrome session handoffs. Nonetheless, it's not a remotely consistent computing experience.

I'd bet it's an incentivization problem at this point, and therefore I wouldn't hold my breath.


I mean, have you tried Remote Desktop Protocol? ;P


Thanks for clarifying. Now I see your point more clearly. Interestingly, while I do similar things with remote ssh/screen/tmux sessions, I never got used to doing that with Remote Desktop sessions. I agree it’s different use cases and personal preferences.


We were playing XPilot on X terminals on a VAX 25 years ago with no problem (though on a local university network). I would agree the problem had been solved, but there was no progress toward internet scale. 99.99% of dev effort went into an endless stream of window managers.


It’s not neglect, we just don’t put 3 lines to the screen by CPU rendering anymore.

With the advent of GPUs, resolution and scene complexity both increased several times over, making the problem essentially a completely different one than back in the X/Motif days.


Yes it was, but the thing is that the problem to solve has since gotten harder.

Think about video streaming, 3d acceleration, sound input and output.

Then whereas you previously had only a handful of vendors to coerce into agreement, now you have a multitude of individuals that are going to complain for every change.


I built labs using NCD and HP xterminals. The world was a different place and today people have different expectations. In the xterminal era they often came with minimal RAM, I think it was 2MB, with options for 4 or 8MB. Mono, greyscale, or 8-bit color were common, with only the highest end terminals (often more than the entry level workstations) offering 24-bit color. Gradients were rare; usually stipples were used. It wasn't uncommon for per-window palettes to cause flashing when moving to a new window (because they had a different 256 colors). There were recommendations for particular 256-color palettes for compatibility with other tabs/windows. These days 24-bit color icons, buttons, live video, multiple animated ads per window, and heavy use of offscreen memory for scrolling and the like would kill X based terminals.

Sadly things moved so quickly that when the X based terminals were new, people were quite excited to use them. However over the 5 years we used them they got noticeably less usable each year. We ended up replacing them with some diskless linux boxes that could run a 24-bit color window manager and browser locally (fast scrolling and window moving) and ran heavy graphical apps remotely over X11, things like Mathematica, Maple, and Matlab. The labs were suddenly much more in demand, and people hugely preferred them over the Mac and Windows based labs students had access to. It was a nice mix of offloading servers (by running X11, window manager, and browsers locally) and offloading clients (by running memory intensive software on the server). People seemed to love the low latency keyboard, scrolling, and browsing. Waiting even a few seconds for Mathematica to churn through a notebook of formulas and rendering was much easier with a responsive desktop. The diskless linux boxes were easy to admin, reliable, and quiet.

Basically browsing is hugely more data intensive than in the era of X based terminals; even just scrolling animated windows is tough. Things like full screen video at 4k were not even under consideration. At the same time people's expectations on performance, latency, and visual richness have increased to the point where X based terminals no longer make sense. Even fast 64 core servers with fast 8 core clients networked over switched GigE give remote X11 a VERY poor experience these days. Try remote firefox over X11 sometime, it's painful.


Is it a case of people not being interested in the functionality, or a case of the functionality being so poor that people don't want to use it?

I remember using X to run applications from half way across North America about 20 years ago. To be specific, I remember using it to do image editing in Gimp. Performance was great. Heck, I even used it over dial-up connections to my university. For some X applications performance was fine, though I was clearly using it for less demanding software.

Of course, expectations have changed. Microsoft has acknowledged this by continuing to improve upon Remote Desktop. I love and use the feature, even though I cannot stand Windows. While I use a couple of X applications over the network, for the most part I don't even bother. It's finicky to get working. Even when the fiddly bits are set up properly, there is no guarantee the software will work.


> It's finicky to get working.

  ssh -Y remote-host
  ...
  % x-application-name
What's finicky?


I can't remember the details, just that I had to setup access control and environment variables.


That's precisely what ssh -Y or ssh -X do for you.


Are you sure you remember it correctly? I specifically remember tests we did over ISDN (64kbit/s). Opening KMail (X forwarded over SSH) and rendering changes took several tens of seconds. There is a reason NoMachine and X2Go got invented.


Just to clarify: I wasn't using Gimp over a modem.

The ability to run graphical applications across a network may have distorted my impressions, but I am pretty sure I am remembering correctly. The modem experiences were largely things like previewing papers and preparing graphs, neither of which are terribly dependent upon interactivity. It is also worth keeping in mind that many early widget libraries were much simpler than what came later.


I used to use FrameMaker over a 28.8kbps modem and, while it wasn't fast, it wasn't impossible to use. Typing latency was reasonable, but setting up the screen was slow. Expectations were also different back then of course. Running Frame on my 486sx-33 on Windows 3.1 with 4mb RAM would have been a bear, too.


Things have regressed a lot since then. IMHO this should be part of the normal regression testing on things like Chrome, Firefox, and Thunderbird. As things are now, at least on my system, Chrome doesn't run remotely at all, Firefox does, but lags unbearably (and sometimes crashes) at startup, and Thunderbird lags moderately at startup. (All of this on a 1000Base-T LAN.)


Ubuntu 22 ships an RDP server by default these days. Haven't tried it myself (my manual attempts at running xrdp way back in Ubuntu 18 broke stuff) but it seems to work quite well. Just open the settings application and enable it. Might not even be Ubuntu-specific, might be available in every GNOME distro?
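
If you'd rather script it than click through Settings, recent GNOME versions also ship a grdctl utility for the same toggle; treat this as a sketch, since the subcommands vary by version:

  grdctl rdp enable
  grdctl rdp set-credentials <user> <password>
  grdctl status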

Personally, I've used stuff like Steam in-home streaming and Parsec for remote desktop access. They're more meant for game streaming but they handle normal applications just fine and they run on pretty much any platform I can think of. As an added bonus, there's something nice about the idea of picking up a cheap second hand Steam Link and using it as a thin client for your PC.

Don't know about macOS, though.


On Ubuntu I used the Remmina client to get a remote desktop of another Ubuntu machine over an ssh tunnel (sketched below). I did it over local WiFi attached to low-speed DSL, with fiber on the remote end.

It's quite usable graphically even with such a channel. Of course, running graphically intense programs or remote video would not be practical, but spreadsheets are fine.
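
A sketch of that kind of tunnel (ports and hostname are illustrative), forwarding a remote VNC server to the local machine so that Remmina can connect to localhost:5901:

  ssh -N -L 5901:localhost:5900 user@remote-host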


The RDP protocol should be pretty good even for running video as it can switch to a simple MP4 stream on the fly, at least in Microsoft clients and servers. I don't know if the open source implementations have that capability, though.

I wouldn't expect too much from a DSL link but at a low render resolution (<720p) even video should be doable with the right tweaks. Quality won't be great, but those stupid video ads shouldn't lag out a browser session for example.

If xrdp or equivalent don't support those extensions, I see an opportunity for distro makers there. Linux distributions (and others) have been reliant on VNC for way too long in my opinion and the only somewhat supported alternatives with more than one implementation aren't exactly built for this purpose, sadly.


Wanted to add that this is also set up on Fedora Workstation 36 and is turned on the same way. It was super easy and works really well. I only use it on a local network now and then, though. So I can’t say I’ve hammered on it very hard, but it was exactly what I wanted at the time.


Last I checked it didn't work if your CPU is AMD or your resolution is wrong.


Well I'm not sure about the resolution part. Mine was nothing notable, maybe 1920x1080, but it seemed okay? Maybe there are other problems though.

I'm running an AMD CPU and that worked fine. Perhaps it's the built in GPU that causes a problem? The system I connected to has a standalone GPU.


I wonder if that RDP server works over Wayland.


I haven't seen anybody mention xpra ( https://xpra.org/ ) yet, but I used to use it daily a while back when I still used linux on my laptop. "Screen for X" really jives with how I wanted to use it (beefy server, ultrabook laptop, fast local network).

X2Go is also pretty good IIRC.

I used NoMachine at work a few jobs ago, but I didn't like how it required a weird dedicated unix user for itself. I don't know why they needed that (maybe so that they could multiplex multiple users over a single port or something like that?), but I never understood why it didn't just run as my own user. It was fast, I'll give it that.


Very big +1 for Xpra.

For anyone unfamiliar, it's similar to ssh -X but with modern compression, so it works well even with reduced bandwidth.

If you want, you can also use it to view an entire desktop but I find it much more comfortable to have the windows on the remote computer act as windows on my local computer.


Many servers create their own user. PostgreSQL creates postgres and gives ownership of configuration and data files to it. It's especially convenient when that server software runs on real servers or on multiuser machines. Historically single user Unix machines were the exception. They cost too much. Graphical workstations started the trend, then Linux.


Two hosts on the same LAN can have excellent performance with VNC. There are some things you need to know and some work you need to do, however.

First, the two most common DEs, Gnome and KDE, use X compositing by default. This is very bad for VNC. You want to turn that off. Unfortunately you can't with Gnome, which is sad. You can turn off compositing in KDE, and that's one reason I've preferred KDE for years. Other DEs can also forego compositing.

Second, turn off all encryption between the VNC client and server. No SSH-ing, no built-in VNC encryption. Nothing. Ignore the nag warning about the lack of encryption some VNC clients hit you with.

Third, no VNC compression. You're on a LAN and you don't need it or the lag it adds.

Fourth, don't "share" the VNC session with the remote host's desktop. Use the vncserver headless X server. Every "shared" VNC system (where the desktop appears both local and remote) I've encountered is laggy.

Fifth, use a fast VNC client. RealVNC's VNC Viewer is excellent.

Sixth, do not scale. Ensure your vncserver's resolution matches your VNC client's window size exactly.

Seventh, tune your DE to remove whatever animations or alpha effects exist.

Eighth, no wifi. Ethernet only between VNC server and client. Wifi is awful for this.

Ninth, use decent ethernet gear. Some NICs are low end and impose more latency. Likewise, low cost switches are bad news as well.

Do these things and your VNC session will be difficult to distinguish from a local DE. If you're using gigabit or faster LAN you'll be able to play video through VNC reasonably well.
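
As a sketch of points three through six, assuming TigerVNC on both ends rather than RealVNC (option names differ between VNC implementations):

  # on the remote host: headless server at exactly the client's resolution
  vncserver :1 -geometry 2560x1440 -depth 24

  # on the client: raw encoding, no compression, no scaling
  vncviewer -PreferredEncoding=raw -CompressLevel=0 remote-host:1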


9 requirements, and a low latency high bandwidth link... Compared to RDP which works well across the ocean, is encrypted, compressed, and scalable.

I think you just proved that VNC isn't a good replacement for RDP.


I agree. RDP and Windows are light years ahead of Linux here. This is just my take on this for a local LAN environment, and if that's your use case and you follow my guidance you'll be pleased with the result.

One benefit of using VNC is the lack of glitches. I've been down every road; nomachine, various commercial and open source X servers, xrdp, all of it. They all have glitches; sizing problems, decoration problems, font problems, keyboard state problems, pointer problems, you name it. VNC (operated as I've described above) doesn't; everything is correct, all of the time.


Other than the compositing thing, which I have not encountered, I disagree with all this advice.

I've had good luck with tigervnc in the past. My raspberry pi 4 had trouble pulling 4k video, but whatever.

Most of the vnc latency comes from unoptimized compression (like not using SSE for jpeg, etc), or poor default settings.

There are also remote desktop protocols that rely on GPU accelerated compression (mostly for gaming and 3d work). Those are mostly fine for light 3d shooters, etc. Sometimes I even watch YouTube videos over them with a 10-20mbit, 40ms internet connection. (I didn't bother measuring network usage for that; 20mbit is an upper bound.)


Hell of a lot of limitations to do it right in 2022.

Is compression really bad? In cases like file systems, sometimes it may even be faster to enable compression, as disk IO is slower than the computation.

And you're telling me we need desktop-to-desktop to make it work well, and what kind of NIC can't handle enough bandwidth for VNC? You sound like we're playing a next gen VR game.


> Is compression really bad? In cases like file systems, sometimes it may even be faster to enable compression, as disk IO is slower than the computation.

Compression noticeably adds latency. You see that easily when scrolling windows; it's not difficult to detect at all. The intent is to get minimum latency; I use these remote DEs all day and latency creates fatigue. If you're truly remote with limited bandwidth then compression will clearly have value.

> and what kind of NIC can't handle enough bandwidth for VNC

Today, integrated NICs are great, but not long ago some of these were poor and roundtrip latency was as high as 100 us or more. I could see the difference just holding the space bar and watching a terminal cursor move. If you really want a nice experience use a decent NIC; nothing crazy, just not low end integrated stuff that's barely more than a PHY and relies almost entirely on the CPU.

Latency is the thing I've worked to reduce so my remote desktops feel as local as possible. Every incremental change I've made has been noticeable; 2.5Gb/s is slightly better than 1 Gb/s when playing video, for example; something I noticed recently after upgrading the NIC on one remote host.


I don’t know the details of how a VNC program works, but I would be surprised if encryption would measurably change anything - at least on a file system level it doesn’t make a difference, due to out-of-order execution - it easily fits in the “waits” for IO.


Nobody mentioning something like PCoIP?

I think the direction to go, with compute/GPU nowadays much more powerful and even on the same chip (and with more bandwidth), is to just remote the framebuffer or the video output built on the source machine, with realtime hardware assistance. No primitives, no API... just pixels, so both worlds can evolve without being in lockstep.

PCoIP is proprietary but the basic philosophy is pretty sound ("graceful degradation" etc.; I've used it even across WANs and the experience was pretty good).

Amazon AWS is using it to stream its remote cloud desktops.


My work setup (in my house) has been entirely based on RDP for a few years now, and I have to say it's great. I basically never physically touch my work laptop unless I'm not working from within my actual office.

Two of my three monitors run fullscreen RDP to my work laptop. I run Slack and Teams on the other one (on my personal PC) because audio/video over RDP is the one thing that doesn't work well. I can even copy/paste images across.

At one point I tried using a dock. It required me to physically switch the cable as well as switch inputs on one monitor (to still connect to my PC's GPU). Though a multi-hundred-dollar KVM could have "solved" this, it was still occasionally glitchy, comparatively slow to switch, and, frankly, just not as convenient. Doing it 100% software is just better.

I wish I could get the same experience doing this to a linux or mac system. In my experience, remote macOS over VNC is a punishingly bad experience, while RDP to a system even sitting physically next to the mac (same internet/VPN/LAN connection) is entirely usable.


VNC doesn't send all pixels of every frame. If you look at the RFC for the frame buffer protocol it uses, it concerns itself with areas of the screen.

https://www.rfc-editor.org/rfc/rfc6143.html

I've only seen very good performance using the built-in VNC client/server for macOS. You can try it out by turning on remote sharing and doing this from the terminal:

`open vnc://<host>:5900`


Sending pixels is not necessarily a show-stopper for a remote desktop that feels "snappy".

Case in point - scrcpy, remote "desktop" to access Android from Linux/Mac/PC. Very responsive, including during fancy animations, playing video, games, etc.

https://github.com/Genymobile/scrcpy
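
Trying it is basically a one-liner once USB debugging is enabled and adb is installed (the size cap is optional):

  scrcpy --max-size 1920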


I used to use NoMachine with good success. I believe they have the cross platform capability and performance you might expect, though it's been years since I used it. Can it work?


NoMachine turned out to handle latency very well. My company is based in Brazil but we have our infrastructure on AWS in North Virginia. I built an EC2 instance with Ubuntu, installed the GNOME desktop and NoMachine, and the result was a very usable and cheap remote workspace.


I second this! I was also looking for performance (3d graphics usage) and NoMachine is surprisingly performant; currently using it across both Windows, Linux and Mac and consider my search to be over.


I remember being surprised that watching a video on my remote desktop worked as well as it did, over 100Mb LAN, circa 2010.


As someone who likes to remotely stream games from my desktop, Parsec has been extremely easy to use and very performant. Could be worth a try!


Same, I Parsec into my machine at the office using my dual screen 1440p monitor set up at home, the latency is incredibly good, so good sometimes I forget I'm using a remote machine. I was skeptical of it at first, but now I am a convert - it works very well for coding as well as gaming stuff.


The only downside IMHO is the lack of an iOS/iPad OS client. I’m in the “make my iPad a general purpose computer” crowd though, mainly to avoid carrying a beefy machine around.


I'll second this. It may be gaming focused, but it's perfectly functional as a general remote access tool too. The visual quality and latency are second to none in my experience.


I think the gaming focus is to market the free product. The enterprise version is all about productivity stuff. The big things it adds are better color quality and multi-monitor support.


https://www.paperspace.com/ is a SaaS that uses Parsec and they are not game-focused - it barely works for games. They are focused on professionals who need good GPUs, like 3D modeling and rendering.


It seems like it's "latency focused" which is good for gaming but also things like video editing, remote visual/graphical coding (like for games). It seems a good number of people use it for more business-related purposes.


Chrome Remote Desktop works pretty well and works on all OS's

https://remotedesktop.google.com/


I use this daily for a number of windows machines and it's great. Even the Android client is surprisingly usable.


I think it's worth mentioning that apple screen sharing is pretty amazing for apple <-> apple.

You can screen share a remote mac very efficiently, drag and drop files, use keystrokes such as modifier-plus-key in a sane way, and more.


Yea I have a 2009 MacBook lying around just for screen sharing to my work hackintosh.

VNC Linux/Windows > MacOS is so incredibly painful and unusable:

Multimonitor: No. Correct scaling: No. Copy/paste: No. That's only some of the bigger problems.

I wonder what Apple is doing differently with their VNC connection.


Which app is this? I’ve checked the box in sharing for screen sharing and it’s just using vnc. Is there a different one hidden somewhere?


You can use screen sharing to non-apple devices using the VNC protocol, but between macs it does something different.

I use it over ssh myself.

for example, if I have a machine called rem:

  Host rem
    Hostname 10.1.2.3
    User foo
    IdentitiesOnly yes
    IdentityFile ~/ssh/id_rem
    # rem screen shares on port 5900 locally
    # this forwards rem:5900 to this machine:5903
    LocalForward 5903 localhost:5900
then I do in one window (which you leave open):

   ssh rem
then launch /System/Library/CoreServices/Applications/Screen\ Sharing.app

choose Connection -> New and enter: localhost:5903

When you log in using username/password, you will get a nice remote window on your desktop.

The window has lots of nice features. You can drag files from your local desktop and drop them on the remote desktop and they are copied over.

cmd-tab in a screen sharing window works on the remote machine, but move it out of the window and it works on the local machine.

You can copy/paste between apps on the local machine and apps on the remote machine, including rich text stuff.

There is probably lots more I don't remember or haven't discovered.


Maybe they have some custom protocol extensions that only work between mac os and mac os?


/System/Library/CoreServices/Applications/Screen\ Sharing.app


The easiest way to see it is to browse to another Mac on the Network in Finder and if it has "Screen Sharing" enabled, you should see a "Share Screen..." button in the finder window.


I keep the Screen Sharing app in my dock and right-click it for prior connections from the popup list.


This comment is interesting to me but for a very different reason: the real magic I find in RDP is that the handoff when going from "local" to "remote" is seamless. On Windows, without any pre-setup other than enabling the service, I can log on remotely (and it will lock the screen locally) and resume using all my apps. Then I can come back to that workstation, log on (and it'll kick off the remote session if it's there) and keep using the apps locally - and repeat over and over.

And crucially: none of this harms the performance of my local session. I don't have any background daemons or special modes running, I'm not in a "virtual remote session" when working locally. It adapts resolution and screen sizes when I switch to remote consoles.

Everything just works really well, whereas the same is not remotely true on Linux. The best in class - x2go (NoMachine variant) can't do this. My local session can't become a resizeable remote session, it just becomes a slow screen grab of it, and doesn't allow me to lock the remote session when I log in, or kick off the remote session when I log back on.

If anyone knows how or what can get this experience to Linux, it's IMO not only something I greatly desire - it's vital.


Meshcentral is fantastic and cross-platform. Hosting it yourself is really simple. One of my favorite open source projects. https://meshcentral.com/info/


Thanks for the info -- I'm going to try this out!


TigerVNC[0] might be of interest in this domain.

0 - https://tigervnc.org/


Yes, along with x11vnc and its scale option.

A 4k screen scaled to 1080p, then rescaled on the remote end is very useful.
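
Something along these lines, assuming x11vnc serving the existing display (the display number is illustrative; the 0.5 factor matches the 4K-to-1080p example):

  x11vnc -display :0 -scale 0.5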


My work issues me a MBP that I can't stand, so finding a way to access it remotely has been a priority. For CLI apps, I SSH into it from a Linux machine, but for GUI apps, I've tried X (most Mac apps are Cocoa and not X-compatible), NoMachine (it was ok), Splashtop (ok speed but glitchy), VNC (slow), and Jump Desktop (https://www.jumpdesktop.com). I found Jump Desktop to be surprisingly good, roughly on par with RDP. They have a free version, so I'd recommend you try it.


There's an implementation of RDP for Linux, Xrdp.

https://github.com/neutrinolabs/xorgxrdp
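
On Debian/Ubuntu-family distros, getting it running is usually just the following (package and service names may differ on other distros):

  sudo apt install xrdp xorgxrdp
  sudo systemctl enable --now xrdp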


Glad someone mentioned Xrdp.

Some years ago I made a WYSIWYG customisation utility to configure not just Xrdp sessions, but also the look and feel of the Xrdp login manager : https://scarygliders.net/xrdpconfigurator/ , https://github.com/scarygliders/XRDPConfigurator

I've not done anything to it for years - lost interest in the whole thing - so it definitely needs an update to more modern versions of Python, PySide, whatever.

Pull requests welcome, in case anyone is ever interested (it's not a sexy project I suppose).

But I was particularly proud of the login screen emulator for the WYSIWYG part of it :)


There is also Miracast and Steam Link.


Remmina on linux will support RDP, VNC, SPICE and others. I've used it to remote into my Windows laptop from my linux desktop.


Yep, I just used Remmina earlier today to go cross-continent and it was fine.



We used to boot sun360s diskless using bootp to a freebsd 1.5 X server, which would serve our 20" black and white CRT monitors and the chunkiest keyboards ever. Remote desktop? Pah. Fvwm. ehem.


VNC is not too bad now with internet connection speeds increasing. It is usable for development, but not for video/graphics in the way RDP can.

I've found the SPICE protocol using the QXL driver with some compression settings tweaks gives the best performance. The latency is better than VNC, but video/graphics intensive screens are still a problem unfortunately.

I use it to connect to the console of a VM over VPN for Linux development currently.
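
For that VM-console case, the usual client is remote-viewer from the virt-viewer package (host and port are placeholders):

  remote-viewer spice://vm-host:5900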


I've been using https://remotedesktop.google.com/ for quite some time now with both Linux and MacOS because it provides the best latencies. Anyone know what protocols it's using? And is this feature set part of the open source Chromium distribution as well?


I think there is - if you're willing to pay for it: Parsec's a pretty good cheap solution, and Teradici (HP have taken them over now) is good at the enterprise level.


The issue is trust.

I don't mind paying for closed source software one bit if I can run it in a sandbox with extremely limited privileges and network access on an opt-in basis.

But a remote desktop solution isn't useful that way. By definition, it has to have all the powers that you have. And for that reason, it's one of a very small number of categories of software for which closed source is simply a non-starter.


There is an excellent and ultra high-performance remote desktop server for Linux. It's called xorgxrdp:

https://github.com/neutrinolabs/xorgxrdp

I love it. It only works with X11, not with Wayland. Wayland refuses to add a protocol command for "blit this image to this surface" (would be ~20 bytes). That is the critical feature your protocol must have in order to get good RDP performance. Windows GDI has it.

To get good, responsive RDP you need to be able to send an image across the network once, and then be able to blit it to on-screen surfaces without having to upload the whole image again.

Windows GDI has this. X11 has this. That's why they're fast.


In Linux's case I think it's partially because RDP/VNC afaik is only really useful for proprietary, locked down software. If I want to do work on remote resources that I can't do over ssh, I'll just download the files and my config, and install the program needed to work with them from my package manager.

The only real other uses for it that I can think of would be if I were working with graphics/video with files so large that transferring them would be annoying, or cross-platform access, like working on a remote Windows installation from a Linux machine. But for that, there seem to be multiple programs on Linux that implement RDP well enough for MS itself to recommend them.


Circa 1995 the physics department I was studying at received a grant from Microsoft and Intel to buy a large number of x86 workstations. At first most of them were running Windows NT, but the only people who would use NT were another grad student who liked Windows, and me, who would use VNC to log into a Linux computer without competing for one.

That grad student and I argued about many things, one of which was the relative merits of X Windows and RDP, and he was right about that one. RDP was written with the X Windows experience in mind and it performs much better. Compressors for the X protocol were made but they didn't address the high latency nature of the protocol.


"High latency nature of the protocol" is a bit misleading.

X is distinctly asynchronous. You don't send the server a command and then wait until it acks it before doing something else. The actual delay between the client sending the command and the result appearing on the server's display can be extremely short (dependent mostly on network performance), and by most definitions that is a "low latency" system. However, if the app on the client needs more synchronous behavior from the server, that's when you start to see things slow down and demonstrate "high latency".
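A rough way to see where the latency actually bites is the toy benchmark below (my own illustration, not anything from the X spec): plain draw requests are buffered and pipelined, while each XSync waits for the server's reply, so over a high-latency link the second loop grows roughly with one round trip per iteration.

```c
/* Toy demo of async vs. round-trip X requests. Build with: cc rtt.c -lX11 */
#include <X11/Xlib.h>
#include <stdio.h>
#include <time.h>

static double now_ms(void) {
    struct timespec t;
    clock_gettime(CLOCK_MONOTONIC, &t);
    return t.tv_sec * 1000.0 + t.tv_nsec / 1e6;
}

int main(void) {
    Display *dpy = XOpenDisplay(NULL);
    if (!dpy) return 1;
    int scr = DefaultScreen(dpy);
    Window win = XCreateSimpleWindow(dpy, RootWindow(dpy, scr), 0, 0,
                                     200, 200, 0, 0, WhitePixel(dpy, scr));
    XMapWindow(dpy, win);
    GC gc = XCreateGC(dpy, win, 0, NULL);

    /* 1000 asynchronous draw requests: queued locally, streamed out in
     * batches, never waiting on the server. */
    double t0 = now_ms();
    for (int i = 0; i < 1000; i++)
        XDrawLine(dpy, win, gc, 0, i % 200, 199, i % 200);
    XFlush(dpy);
    printf("1000 async draws:        %8.2f ms\n", now_ms() - t0);

    /* Same drawing, but XSync after each request waits for the server to
     * catch up and reply, i.e. one full network round trip per iteration. */
    t0 = now_ms();
    for (int i = 0; i < 1000; i++) {
        XDrawLine(dpy, win, gc, 0, i % 200, 199, i % 200);
        XSync(dpy, False);
    }
    printf("1000 draws + XSync each: %8.2f ms\n", now_ms() - t0);

    XCloseDisplay(dpy);
    return 0;
}
```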


I believe it is impossible for an RDP app to rely on "more synchronous behavior" as you put it.

Remote X11 worked well in the past. It's possible to latency-optimize software for it, but it's not common anymore.


I wonder why cloud vendors, even AWS, haven't invented a modern remote desktop protocol that works cross-platform in an easy way, e.g. over the ssh protocol, and performs at near-native speed. They would gain a wider use case for cloud servers and increase their revenue. Of course it would help other vendors too, but no one loses, including the users.


AWS apparently has implemented something that works decently and cross platform, but AFAICT they're not sharing. The best docs I can find in a hurry are https://aws.amazon.com/workspaces/faqs/?nc=sn&loc=4 which talks about it not being RDP and only supporting connecting via their custom client.


I’ve been using Xrdp (xorgxrdp) on Linux for years and thoroughly recommend it. It is stupendously efficient, and has evolved a lot beyond the ICA-based mapping of Windows primitives (current RDP streams owe more to video streaming than to any direct mapping of graphics calls).

It has also become a very efficient protocol on the client side: only last week I set up a dedicated terminal using a 512MB Raspberry Pi 3A (https://taoofmac.com/space/blog/2022/08/14/2030), and I can stream a Fedora desktop to it at 2560x1080 with audio (i.e., good enough to watch windowed video) and very low latency over Wi-Fi.

On the Mac, there is no equivalent because Apple pretty much neglected anything to do with remote displays — Apple Remote Desktop is a variation of the VNC protocol with extra authentication but no real encoding improvements (and designed to manage Macs in classrooms or small businesses over a LAN), so it completely lacks any real ability to work over real-world remote connections.

There have been a few attempts at jerry-rigging an RDP proxy on top of the built-in Mac VNC server, but I haven’t seen anything working in years. And NoMachine is just plain useless in most scenarios.

(I’ve been using VNC, Citrix and RDP over the past couple of decades and am quite into all the details - I’ve also streamed desktops over H.264 and various multicast setups for electronic signage, so I’ve explored plenty of hacks)


Apple Remote Desktop is the official client for Mac (server is built-in), only $80 with 2 stars after ~275 ratings:

https://support.apple.com/guide/remote-desktop/welcome/mac

https://apps.apple.com/us/app/apple-remote-desktop/id4099073...


I'm not sure VNC is really the default on Linux, or that there even is a "default" solution.

On local networks I've had good luck tunneling X, and it's even worked well running lightweight programs remote over a VPN in a pinch.

For "real" remote connections through the internet, x2go has been pretty good. At a previous job I used Remmina (an RDP client) to connect to Windows machines, and X2Go for Linux machines, and they felt about the same to me.

It really depends far more on network speed than anything else.


X2go has been unusable for me on macos (M1). Keeps crashing all the time.


I noticed that remote desktop from Linux to Mac (with Remmina) is much slower than Linux to Windows 10. It's definitely RDP on Windows, maybe it's VNC on the Mac.

No idea about the performance of a remote Linux desktop because I connect to remote machines with ssh and use only the command line. It's been maybe 25 years since I last ran remote X11 applications on my local server.

There are other apps like NoMachine and TeamViewer. I never used them with a remote Mac.


Regarding the slowness of Linux > Mac: I'm not sure if this helps, but I've always had better performance from the freerdp2 nightly and remmina-next repositories. At least on Ubuntu, looking at the package site for 22.04, freerdp is at version 2.6.0, while 2.8.0 is available in nightly.

I've been using the nightly branch for 6 or 7 years now and have rarely run into issues. In fact our entire company runs nightly, because we use the latest bastion and gateway servers, which the version in the main repository sometimes doesn't support yet.


With Windows RDP it's possible to log in to the same session that was started at the physical machine. Also, the physical monitor doesn't wake up upon login and the desktop remains password-locked.

None of the options I tried on Linux could match these features.

NoMachine did perform well, but with the remote computer in the same room I could see it was mirroring the physical monitor, and I could type on its keyboard. It does have a feature to blank the screen and lock inputs, but then you're unable to log in at the physical machine even if you know the password.


You might want to try https://drovio.com/ - it's not free, but I searched a few years ago for the best screen sharing software and so far never found one that beats it.

Key features:

1. Extremely fast and smooth screen sharing. I've used RDP, I've used VNC, I've used Chrome, I've used Apple's screen sharing. None beat it.

2. Multiple mouse cursors! You can see where your co-worker's mouse is, and they can click and interact with your desktop just like you do (if allowed, of course). So far this has been the best way to do interactive code review sessions and even some pair programming.

3. Smooth animations. I originally started using it (back when it was called USE Together) because I wanted to do a presentation remotely that would show smooth 60fps animations from an app I was working on. It was the only one that could do it.

I never used it on Linux however.


The very simple answer to your question is that the developers who have made these great things weren't management people. They don't generally work in MIS, and when they target something, they've just gone their own way.

This leaves multiple fragmented camps of GUI to target, increasing the technical debt before you can get anything done. Wayland was a step forward, but overall 20 years too late. This stuff isn't rocket science; if something hasn't been done, it's a structural issue that needs to be addressed, but the maintainers of the projects you would need to target don't see it that way.

Why should someone who puts effort into a project like this be forced into a dovetailed management strategy that is doomed to maintainability failure as a bus-factor-of-one project?


In the past, I did some optimizations on top of guac (https://guacamole.apache.org/) and got pretty nice results.

I used that 'protocol' to build this: https://allmydesktops.com/


meshcentral - https://meshcentral.com/info/ - self hosted server needed for best effect.

Open source, and development is led by an Intel employee. I use Arch (btw) exclusively on my personal gear and I'm a first-class citizen along with pretty much everything. The Windows binaries are signed too, which is nice. It also fully supports Intel AMT (vPro), which is probably why it is supported by Intel.

You can auth with, say, MS Azure (documented config required), so you simply click on the MS logo instead of filling in a username and password; if you have an Azure cookie you go straight in, otherwise you go through the usual MS sign-in flow. There are several more authentication/authorization mechanisms.

There is an agent install required and my Ansible playbook for it is roughly 10 lines long so rather simple.


I second this, MeshCentral is very good and efficient, works on Windows and macOS too, and the developers are very responsive.


You can use Cendio ThinLinc, which uses JPEG (libjpeg-turbo) over SSH for a faster remote VNC experience.

Cendio Thinlinc https://www.cendio.com/thinlinc/what-is-thinlinc

Also, you can use game-streaming technologies such as Steam Remote Play over a layer-2 VPN and hack it to stream your desktop. Since game streaming uses MPEG-4-like video compression and is built for low latency, the lag will be lower and the video quality good.

Steam streaming https://store.steampowered.com/streaming/

You need a layer-2 VPN to use Steam streaming. Freelan layer-2 VPN: https://www.freelan.org/


+1 for ThinLinc. Also, it's free for personal use.


X2Go here, although it looks like it needs updates for the future to continue.

NX (the protocol it uses) had a novel way of only updating the pixels that changed, and it tunnels through ssh for security. If you're curious about the technical details, look them up; my knowledge of it has got to be 20 years old at this point.
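For what it's worth, the generic "only resend what changed" idea looks roughly like the sketch below. This is a hedged illustration in C of plain dirty-tile diffing, not NX's actual algorithm, and send_tile is a hypothetical callback standing in for "compress and transmit this rectangle".

```c
/* Generic dirty-tile diffing: compare the previous and current framebuffer
 * tile by tile and hand only the changed tiles to a transmit callback. */
#include <stdbool.h>
#include <stdint.h>
#include <string.h>

#define TILE 32  /* tile edge in pixels; a bandwidth/CPU trade-off */

/* True if the TILE x TILE tile at tile coordinates (tx, ty) differs. */
bool tile_changed(const uint32_t *prev, const uint32_t *cur,
                  int width, int tx, int ty) {
    for (int y = 0; y < TILE; y++) {
        size_t off = (size_t)(ty * TILE + y) * width + (size_t)tx * TILE;
        if (memcmp(prev + off, cur + off, TILE * sizeof(uint32_t)) != 0)
            return true;
    }
    return false;
}

/* Walk the frame and pass each dirty tile to `send_tile` (hypothetical).
 * A real protocol would also coalesce adjacent dirty tiles into larger
 * rectangles before encoding. Assumes width/height are multiples of TILE. */
void send_dirty_tiles(const uint32_t *prev, const uint32_t *cur,
                      int width, int height,
                      void (*send_tile)(int x, int y, int w, int h,
                                        const uint32_t *frame)) {
    for (int ty = 0; ty < height / TILE; ty++)
        for (int tx = 0; tx < width / TILE; tx++)
            if (tile_changed(prev, cur, width, tx, ty))
                send_tile(tx * TILE, ty * TILE, TILE, TILE, cur);
}
```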


Really interesting conversations here! Thanks community.

Is there any free / open source solution for Mac? Parsec is really nice. So is JumpDesktop. But running closed-source software is a little unnerving because no one knows what it could do or is capable of.

The workflow does require some security and privacy, since it is not like gaming. Sometimes accessing a sensitive file or document across multiple networks requires the kind of trust open-source software provides, which no closed-source application ever can.

Some of the mentioned solutions, like RustDesk, are really good, open source, and self-hostable (winning in many areas, naturally!), but they still lack the performance of the likes of Parsec or JumpDesktop or similar high-performance remote desktop solutions. AnyDesk is again closed source.


For demonstration purposes I've stood up Digital Ocean droplets, installed Apache Guacamole and streamed YouTube videos. If that's your definition of performant check out the app: it's easily installed and configured and there are demo scripts on GitHub that automate it.


I went through a back injury that saw me wanting to use my desktop from bed for extended periods of time. Support for remote desktop _with audio_ seems very lacking. xrdp on Ubuntu sorta works, except that you can't easily drop into an existing session, and it develops horrible audio lag/stutter if too much of the screen changes at once.

I've settled on Steam Link for remote desktop on Linux. It still has one big issue for me: the cursor rendered by Steam jumps around weirdly at the edges of windows. I would love to solve that problem. However, apart from that, it flawlessly transmits video and audio in a way that Just Works.

I just wish it was packaged as a standalone solution.


For many setups, we now have enough bandwidth between peers to stream video with low latency and with a "good enough" to "great" quality, at 60 FPS.

For most cases you just need a GPU or some hardware video encoder (H264/H265/VP9/AV1, etc.) that can work in real time. It's not the most efficient in terms of bandwidth, of course, but if you have a decent connection to your remote peer it's the most straightforward way to accomplish this on any system.

I haven't tried Parsec personally but from their website description and screenshots, it looks like they're taking this approach and I bet it works great.


I found that RustDesk worked very well for me: https://github.com/rustdesk/rustdesk

(I'm using it for linux to linux.)


Related: https://news.ycombinator.com/item?id=31456007 - Rustdesk – Remote desktop software, an open source TeamViewer alternative

https://news.ycombinator.com/item?id=32239025 - RustDesk – Open-source TeamViewer alternative


As in LAN, or did you test across the tubes of the Internet?


I used it across the tubes! ;)

I support my in-laws and my daughters running Linux on their laptops and it worked fine and was pretty quick.

By the way -- we are all running various fairly recent versions of Linux Mint with the MATE desktop. The oldest being mine: [Release Linux Mint 19.3 Tricia 64-bit].


It's probably not what you're looking for, as it's proprietary & closed-source, but I've found AnyDesk to be pretty smooth. They have a free tier for personal use. I really only use it to help out my parents when they can't figure something out on their computers, but one advantage is it's very easy to walk a non-technical person through the setup process, without knowing anything about their network topology.

For my own gaming I used Parsec for a long time, which does stream pixels but has pretty impressive variable rate compression.


RDP is very good but it also hits its ceiling extremely quickly. E.g. the RDP client that ships with Windows is limited to 30 FPS, as is the server that ships with Windows in its default config. It can also be extremely CPU-intensive at 1080p and larger resolutions, to the point that it becomes another limitation. You can enable hardware-accelerated video encoding for RDP, but then it's really not any different from something like NoMachine or Parsec, except that it has a 30 FPS limit.


If you don't mind a hosted service that doesn't proxy the stream, then https://getscreen.me/ is great; you can even watch a movie over the wire performantly. It uses a custom native client for the handshake and WebRTC for the capture & encryption; in other words, the same as Chrome Remote Desktop but simpler to use.


There’s lots of solutions in this space but they’re not commonly used in non-business settings

- Teradici is the best I’ve used (hardware and software options)

- HP remote graphics licenses come with some Linux workstations

- Amazon NICE DCV is used in AWS Workspaces (disclaimer I work for AWS)

- Thinlinc is high performance, but it had no GPU acceleration or audio the last time I used it

There are others but those are the ones I’ve used the most and recommend


Even RDP isn't great with latency over about 30ms from my experience. The mouse cursor never quite follows my hand movements and everything feels slightly sluggish. Without tech whereby the client can somehow anticipate user actions and predict the next likely update I'm not sure how you solve that.


From experience, Parsec is the best instance of remote desktop I've seen. Another one that is pretty solid is RDP for Windows (it feels as if it uses more information than raw pixels from the display, since it seems to render UI elements accurately).

For macOS, Apple's Screen Sharing is pretty good, using some video compression and diffs to render the frames rather than vanilla VNC.


I'm surprised that nobody mentioned RustDesk.

https://rustdesk.com/


This is a quest for remote desktop software that is free and/or included in the OS, yes? Because otherwise, in the land of paid software, nothing beats TeamViewer. Not even Windows' RDP does. I use TV daily with clients across oceans and nothing else offers a better remote desktop experience than TV.


Others are giving options for Linux/Mac I've not used. Probably the reason a name like RDP for Linux/Mac isn't so widespread is that RDP is daily-driven in IT "enterprise" at scale for course-of-business stuff. The equivalent for IT pros on Linux is ssh. Mac is decidedly un-"enterprise"-friendly.


Reemo.io is all browser based with fantastic performance, easily competitive with Parsec without requiring a client side agent as it's all browser accessible.

Sound doesn't yet work on macOS (word is coming soon) but I found it otherwise the best in the space. Also supports Linux and Windows.


I used RDP-based remoting in the early 2000s, with a 28K modem, for development, on my full-time job for six weeks. It wasn't amazing, but totally usable. Even today's TightVNC would have been unusable at the time.


Linux has a CLI culture and SSH works just fine.

It’s more efficient to use a remote shell than to pipe a constant stream of screenshots at video rates, to say nothing about latencies and mousing.

Draw commands would require a uniform windowing solution for all clients.

I cannot speak for why Apple doesn’t make such a product.


> I cannot speak for why Apple doesn’t make such a product.

I think they do; there are a lot of comments in this thread about MacOS having some sort of remote desktop built in, which sounds like a tweaked VNC implementation that actually works well when you're just connecting from one Mac to another.


The answer is basically that in Unix-world (Mac and Linux) we focus a lot more of our attention on the command line. I have mostly remote Linux servers but also a couple of Macs I can remotely ssh into. You use your own computer for UI. The thin client dream is a bit.. 90s.


It's somewhat a case of "they almost went there"; around the time that MCX (Managed Client for Mac OS X) and Apple Remote Desktop (the central management tool, not the built-in one) peaked, they also had the option of selecting or starting a new user session over VNC. This was the foundation of concurrent desktop sessions in Mac OS X (now macOS) and was pretty much one step away from Terminal Services style multi-user computing.

Right then and there, the choice flipped the other way: instead of taking the management approach to remote computing they made management go the MDM route, and made remote computing more of an application-specific detail; if you need something 'remote', it's probably just data so remote data access could cover that. If you need a piece of software, the idea was that you'd simply run it locally instead of remotely over a stream, and if you need something specific to the remote location (i.e. the network) you'd use a VPN connection.

The biggest benefit of terminal-style computing is that you can lock away special software on a server or use computing resources that aren't available locally. That second part was something Apple probably never wanted to have to deal with: either you get the 'big fat expensive machine' for your heavy workload, or you get the Mac mini for your lighter workloads. If you want to have one big machine shared by two people, that wasn't really something they cared about, and you'd just have to buy two of them. This makes sense from their perspective: you buy the machine for a specific task or purpose, and that makes remote computing a bit redundant because you'd have bought the machine that fits your needs.

In a way they are right; nearly every device they make can do the same tasks and only heavy resource eaters really need more hardware than a base configuration can deliver.

For Linux it's different; you can just install an RDP client and server and do the same thing Windows does. The only thing you need to do yourself in such a setup is configure the desktop environment so it doesn't do weird things like wobbly window animations over RDP, which don't translate well. Microsoft doesn't write RDP clients or servers for Linux, and only has a client for macOS, so there's not much of a commercially validated option on Linux. There is NoMachine's NX, which essentially does what RDP does, and on Linux you'd also not actually transmit the entire application; most of the window chrome can be handled by the local window manager instead, like with X11 forwarding.


Gnome 42 in Ubuntu 22.04 uses RDP, otherwise use X11vnc and GPU acceleration.

https://linuxhint.com/enable-remote-desktop-ubuntu-access-fr...


SSH is super speedy, and for many Linux users a remote command line is all that is needed.


Have you considered something like NICE DCV which uses QUIC and is pretty performant?


This is what keeps me on Windows. If there were a Linux RDP client with these capabilities, I would switch to Fedora or something. But remote desktop performance on Windows with RDP vs. anything on Linux is not even comparable.


Remmina is what you're looking for. I used it daily to RDP into Windows machines from Fedora, and the latency/performance/features are actually either on par with or slightly better than Windows Remote Desktop. Prior to that, I had been using Windows 7/10/11 RDP for decades.


X2Go/FreeNX since forever.


It's a paid product, but NoMachine solves this problem fully, including across networks and between different platforms. Accessing a full desktop computer from your iPad is crazy magical the first time you see it.


But there is! With xrdp on the server and xfreerdp on the client, using the RFX codec, I can have a remote login across town at 4K resolution through a WireGuard tunnel over residential DOCSIS broadband.


macOS has Screen Sharing.app that works wonderfully across 8000 miles when screen sharing Mac to Mac.

Mac to Windows is easy with RDP.

But sharing a screen, Linux to Mac or vice versa... I wish you all the best. It's wild to me too.


There's an HP Remote Desktop thing that's "free" for HP workstations. I've used it before and it's good but absolutely chews through bandwidth. Surprisingly good over 5G.


I do 99% of my day job on a windows laptop connected to a Linux NoMachine session. It's not perfect but it works really well.

There are RDP implementations for Linux, both client and server: x11rdp, remotedesktop, etc.


Parsec performance is impressive. I've had good luck with Splashtop, though I've experienced some lagging at times.

There's also Teradici (HP bought them a few months ago) but that can get pretty costly.


It’s flipped the other way around. VMWare’s Blast Extreme is essentially an x264 stream that outperforms RDP in most ways.

It’s pretty awesome, any device with a x264 decoder chip can be a performant client.


So, like Parsec?


Very minor nitpick, but I think you mean h264. x264 is the name of a certain software project used for encoding video into h.264, but it's not a video codec itself.


Moonlight with an NVIDIA GPU is the best solution imo.


AnyDesk exists. It’s cross-platform and works well.


Usually when I'm remoting into systems I don't need a full GUI, I just need access to some resource, for which ssh is great.


We just use Xrdp. It's not as fast as RDP into a Windows server, but it's still pretty decent for us.


If you have an Nvidia card then GameStream (or Quadro Experience) works really well, with Sunshine and Moonlight.


X2Go is super smooth and fast; yes, Linux has a fast remote-access protocol, plus server and client software.


AnyDesk and NoMachine work well for me... Windows to Linux, Linux to Linux.


Remmina is your best bet. You should also try X2Go (an NX fork).


What about TeamViewer? Works very well on Windows.


Works very well on Linux as well.


ssh -X works much faster than RDP and VNC here, and doesn't have horrible blurry compression artifacts


x2go works quite well actually. It's definitely not like local, but it does the job.


AnyDesk runs great

Teamviewer runs great in Wine


Teamviewer has packages for Linux. Is there any reason to run it in Wine instead?

https://www.teamviewer.com/en/download/linux/


If I recall correctly, the Linux package was the Windows exe shipped with Wine.


I was curious, so I extracted the .deb package, and the TeamViewer executable seems to be an ELF binary. It looks to me that they actually compile for this platform.


What is wrong with x2go?


RDP also does its fair share of raster shuffling. Where it has the edge is that most Windows programs still use GDI (i.e. Windows-native draw commands), and RDP can send a lot of that as commands instead of pixels.

X Windows used to have the same advantage, but lost it for most modern applications (i.e. ones written in GTK or Qt, which pre-rasterize almost everything for simplified cross-platform compatibility). Nowadays, unless you restrict yourself to classic X applications, X forwarding is going to be a slower, dumber version of VNC.
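To illustrate the contrast (a hedged sketch of my own, not GTK/Qt internals): over a forwarded connection the first call below is a single small fixed-size X request regardless of the rectangle's size, while the second ships a finished, client-rendered buffer pixel by pixel, which is roughly what modern toolkits end up doing.

```c
/* Classic draw request vs. pushing pre-rendered pixels.
 * Build with: cc wire.c -lX11 */
#include <X11/Xlib.h>
#include <stdlib.h>

int main(void) {
    Display *dpy = XOpenDisplay(NULL);
    if (!dpy) return 1;
    int scr = DefaultScreen(dpy);
    Window win = XCreateSimpleWindow(dpy, RootWindow(dpy, scr), 0, 0,
                                     800, 600, 0, 0, WhitePixel(dpy, scr));
    XSelectInput(dpy, win, ExposureMask);
    XMapWindow(dpy, win);
    GC gc = XCreateGC(dpy, win, 0, NULL);

    XEvent ev;
    XNextEvent(dpy, &ev);                       /* wait for the first Expose */

    /* "Classic X" style: one ~20-byte PolyFillRectangle request; the server
     * rasterizes it locally, so almost nothing crosses the wire. */
    XFillRectangle(dpy, win, gc, 0, 0, 800, 600);

    /* "Modern toolkit" style: render into a client-side buffer, then push
     * the finished pixels -- 800 * 600 * 4 bytes over the connection
     * (the buffer here is just a blank placeholder). */
    int depth = DefaultDepth(dpy, scr);
    char *pixels = calloc(800 * 600, 4);        /* assumes a 32-bit visual */
    XImage *img = XCreateImage(dpy, DefaultVisual(dpy, scr), depth, ZPixmap,
                               0, pixels, 800, 600, 32, 0);
    XPutImage(dpy, win, gc, img, 0, 0, 0, 0, 800, 600);
    XDestroyImage(img);                         /* also frees pixels */

    XFlush(dpy);
    XCloseDisplay(dpy);
    return 0;
}
```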

Mac has so many fades & animations that, raster or command, it will probably always be slower.


RDP has not worked that way in a long time. Look up “rdp gfx rfx” for an idea of how it works a lot more like streaming video today.


DCV is the best remote desktop tool I've found - alas it is closed source and requires a paid license unless you're running on AWS EC2.


It also looks like a paid license can only be purchased through a "contact us" partner? That's a hurdle.


Do you know if this works with hardware acceleration (like GPUs)?


Yes, GPUs are supported.


Export your $DISPLAY with X11; Citrix stole the idea. Not many really do it anymore, it seems, and I doubt Wayland does it at all... works great for me.


ssh -X does that, _and_ it sets up authentication, making it harder to blow your foot off. No need to reinvent the wheel.


And it kills performance, making you watch redraws, while sshd is maxing out a single core.


The drawing is local to your machine, so it doesn't have any overhead over exporting $DISPLAY, and several sshd implementations have been fully multithreaded since at least '08.

The only performance hit is X11 (no different than normal) and the network - not ssh.


Doesn't fit with my experience. Moving from using ssh -X to just pointing remote X applications at my local X server without tunnelling it over SSH consistently improves performance for me. Not just in terms of experience, but also no process maxes out a CPU, so the machine has spare compute capacity for other tasks. It's really not an issue of ssh not being multithreaded. That shouldn't happen even on a single core. If sshd divided that workload over my n cores, it would still be very bad.


The capability is there in Linux. I use it myself now and then.

With regards to the Mac, MacOS is BSD-based. BSD UNIX has had that capability and the required software for decades. So ... the capability is there. The software 'app' probably isn't. It would probably only need the installation of several packages.

MacOS is a downgrade from the underlying BSD UNIX that is its foundation.


There are plenty of good ones, but you probably fail to recognize the best ones. In fact, the very question suggests that you have a limited world view. You will reject what I say because it doesn't fit into your narrow view. That's not meant as an attack, just an observation of human nature.

For example, one "remote desktop" that has become popular lately is called WireGuard. You connect, open the spreadsheet stored on the office network drive, and print it either to the printer in the office or to the printer sitting next to you.

Many people would say that is sooo much better than RDP. That is not Windows thinking. That is Unix thinking--you know, Mac/Linux. (And BSD and most everybody else in the world)

So, to say it differently, Mac/Linux people often solve problems a different way, so they use different tools. Why don't laser printers come with whiteout toner?


That solves a different problem.

A remote desktop does the CPU-intensive computation on the remote side, not the local side. A tunnel like Wireguard or SSH does the computation on the local side, for GUI programs that don't run in a terminal.

Now, quite a lot can be done from a terminal, but some compute-intensive things are better suited to GUI programs. CAD/CAM, for example.


What part of opening a spreadsheet made you jump into a terminal?



