Scrcpy is awesome, one of those rare pieces of software that has one clear, powerful thing to do and does it so well you forget it's even there. Essentially, you get the feeling that this is how it should work.
But at the same time I find it a bit of a sad example of how much effort and corner-case fiddling such a conceptually simple task has to take.
Caricaturing here, but it is essentially just blitting pixels over a network connection. I don't think it should be that hard a task on a more reasonable underlying stack. Plan 9 (and probably many other sadly forgotten systems) showed how something like scrcpy can fall out more or less automatically from a better-designed system.
Maybe I'm underestimating some fundamental complications, but I can't help but feel something has gone very wrong with how we do things with computers. Maybe it's something like worse-is-better piled on top of previous worse-is-better.
- An 8.3 MB frame (1920x1080 at 4 bytes per pixel) * 60 fps ≈ 500 MB/sec ≈ 4 Gbps, an order of magnitude more than even your average techie's entire internet pipe. And that's for 1080p. The market baseline for resolution, 4K, quadruples that; 240 fps (the market baseline for gamers) adds another 4x: roughly 64 Gbps.
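A quick sanity check of those numbers (a minimal sketch, assuming uncompressed RGBA at 4 bytes per pixel):

    // Back-of-the-envelope raw (uncompressed) screen-mirroring bandwidth.
    fun rawGbps(width: Int, height: Int, fps: Int, bytesPerPixel: Int = 4): Double =
        width.toLong() * height * bytesPerPixel * fps * 8 / 1e9

    fun main() {
        println("1080p60: %.1f Gbps".format(rawGbps(1920, 1080, 60)))  // ~4.0
        println("4K60:    %.1f Gbps".format(rawGbps(3840, 2160, 60)))  // ~15.9
        println("4K240:   %.1f Gbps".format(rawGbps(3840, 2160, 240))) // ~63.7
    }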
As long as you're willing to accept intellectually that _sometimes_ there will be video-like content playing, where more than 10% of the pixels change per frame, you now _need_ to assume some form of compression is applied to every frame, i.e. you have to do video encoding.
This is why VNC has been the solution for this for many years, and still remains so.
Scrcpy is different because it does not require you to install anything on the phone. It uses the built-in debugging tools (which you just need to activate in a few clicks).
Yep, exactly. Sending video over the network with low latency is not a trivial task. There’s a reason we have video and image formats that do heavy compression and inter-frame analysis to reduce bandwidth.
I think you're taking for granted just how performance-intensive this is. For example, "just a blit" is a full screen's worth of pixel operations on a hardware platform where pixel draws are known to come at a premium.
Of course there are a lot of implementation details that have to be handled for the bits to move and the pixels to light up.
But the scrcpy code is not about that; it's all hidden behind the libraries and in the hardware. What scrcpy has to do is micromanage and hack the APIs on both sides for all of this to happen in the guts of the systems.
I don't see a reason why all this couldn't be done with something like /dev/remote/window < /dev/screen
Of course there would be a lot of magic happening at the implementation level, but it's not like you type out the whole HTTP protocol and negotiate the compression algorithm manually when you go to a website.
It did video encoding because the cable wasn’t capable of streaming full quality video at the required bandwidth. Not because they wanted to make a complicated dongle.
What part of streaming a screen on Android do you find complex? You pretty much just set up a MediaCodec encoder and let it do its thing. For the input to the encoder, you create a VirtualDisplay that mirrors the default Display and is backed by the encoder's input Surface. For the output of the encoder, you can just send the output buffers over the network.
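Roughly, that pipeline looks like this in Kotlin (a sketch, not scrcpy's actual code: the AUTO_MIRROR flag is privileged, so a normal app has to obtain an equivalent display via the MediaProjection permission flow, while scrcpy runs as a shell process and uses hidden display APIs; the DisplayManager and socket stream are assumed to be already set up):

    import android.hardware.display.DisplayManager
    import android.media.MediaCodec
    import android.media.MediaCodecInfo
    import android.media.MediaFormat
    import java.io.OutputStream

    // Mirror the default display into an H.264 encoder and push the encoded
    // stream to an already-connected socket.
    fun streamScreen(dm: DisplayManager, out: OutputStream,
                     width: Int, height: Int, dpi: Int) {
        val format = MediaFormat.createVideoFormat(
                MediaFormat.MIMETYPE_VIDEO_AVC, width, height).apply {
            setInteger(MediaFormat.KEY_COLOR_FORMAT,
                       MediaCodecInfo.CodecCapabilities.COLOR_FormatSurface)
            setInteger(MediaFormat.KEY_BIT_RATE, 8_000_000)
            setInteger(MediaFormat.KEY_FRAME_RATE, 60)
            setInteger(MediaFormat.KEY_I_FRAME_INTERVAL, 10)
        }
        val codec = MediaCodec.createEncoderByType(MediaFormat.MIMETYPE_VIDEO_AVC)
        codec.configure(format, null, null, MediaCodec.CONFIGURE_FLAG_ENCODE)
        val surface = codec.createInputSurface() // the encoder's input Surface
        codec.start()

        // The virtual display renders the mirrored screen into the encoder's
        // Surface. AUTO_MIRROR is privileged; apps go through MediaProjection.
        dm.createVirtualDisplay("mirror", width, height, dpi, surface,
                DisplayManager.VIRTUAL_DISPLAY_FLAG_AUTO_MIRROR)

        val info = MediaCodec.BufferInfo()
        while (true) {
            val index = codec.dequeueOutputBuffer(info, 10_000)
            if (index >= 0) {
                val chunk = ByteArray(info.size)
                codec.getOutputBuffer(index)!!.get(chunk)
                out.write(chunk) // ship the encoded H.264 chunk over the wire
                codec.releaseOutputBuffer(index, false)
            }
        }
    }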
The PC side isn't complex either. You take those packets from the phone and run them through a decoder that draws the frames to a texture, which you can show in a window.
Because there are a lot of features beyond just streaming a screen capture, it supports many versions of Android, and it is a polished utility.
The actual screen-recording part is simple, since you are just using existing Android APIs to hook up the display server to a media encoder and the media encoder to the network.
That is reasonable-ish as video APIs go. But it's quite a small part of everything scrcpy has to do to mirror the screen over the network.
Most of that code would be covered with something like "gst-launch pipewiresrc ! encodebin", and the whole mirroring by adding a few more elements, as in the sketch below.
Well, in theory at least; GStreamer has a habit of getting quite fiddly. But I don't see any fundamental reason why this too couldn't work with simple composable tools.
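For illustration, an expanded version of that pipeline might look roughly like this (an untested sketch: the elements are real GStreamer ones, but the caps negotiation and encoder tuning are elided, and the host address is a placeholder):

    gst-launch-1.0 pipewiresrc ! videoconvert ! x264enc tune=zerolatency \
        ! rtph264pay ! udpsink host=192.168.1.2 port=5000

The elided parts are exactly where the fiddliness tends to creep in.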
Android 14 was originally supposed to add webcam functionality (IIRC the betas added an "act as a webcam" option when plugging the phone in via USB). The speculation is that although it was removed from the final release, it might be added in a December update.
Does that mean it will effectively only be available for devices that release with Android 15 (ignoring Pixels)?
Edit: I have a OnePlus Nord N30 5G that released with Android 13. Is it eligible for this feature? IIRC OnePlus said it will only give this budget phone one (1) version upgrade, so I'm wondering if a December feature drop counts as a second upgrade… :(
Maybe won’t even matter if this requires specialized hardware :(
I dropped my phone and the screen broke. The touchscreen worked, but no backlight.
Luckily I had enabled Android debugging. It took a bit of trial and error but I managed to unlock the phone and then accept the adb key from my laptop.
After that, scrcpy let me grab all my files and transfer everything to my new phone.
If you have an Android device, take a few minutes to get scrcpy set up. It just might let you recover from a dead screen some day.
Yes, it is quite good for rescuing data etc. because in a lot of cases it is the screen that fails.
In a case like yours, where only the backlight dies, it often helps to shine a flashlight on the screen; it will usually become intelligible enough to work with.
One thing to keep in mind is that Android will revoke saved debug computers after 7 days or so, meaning you have to accept the connection in a popup. You can turn that off in the developer options, and then you won't need any interaction on the device to start scrcpy (as long as it is a computer you've used before with adb).
Absolutely. It was a blast to use this in school to control the smartboards that let you enable adb. And it worked so well that I could write on the smartboard with my touchscreen laptop.
Which camera app will that use? Can I make it use Vector Camera[1] instead of the built-in one?
So that the picture would be modified by the app, rather than go 'raw' from the camera?