I still don't understand any practical use cases. Can you give some examples? (I'm not being obtuse here; I'm genuinely curious what this enables now.)
WebRTC excels at sub-second latency peer to peer, so you can do near-realtime video; it's useful anywhere that matters.
Say you wanted to do a virtual portal installation connecting views from two different cities with live audio: you could have ffmpeg feed off a professional cinema or DSLR camera with a clean audio feed and stream that over WebRTC into a webpage-based live viewer.
Or say you want a webpage that remote-controls a drone or rover robot; it would be great for that.
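Very roughly, the sending side of that portal setup could look something like the sketch below. The ingest URL is made up, the v4l2/ALSA devices are Linux stand-ins for the real camera rig, and it assumes the new WebRTC output is exposed as a WHIP muxer (-f whip); treat it as a sketch rather than a recipe.

    # Hypothetical: publish a camera + clean audio feed over WebRTC/WHIP.
    # /dev/video0 and hw:1 are placeholder capture devices; the URL is made up.
    ffmpeg -f v4l2 -i /dev/video0 \
           -f alsa -i hw:1 \
           -c:v libx264 -preset veryfast -tune zerolatency -g 60 \
           -c:a libopus -b:a 128k \
           -f whip "https://example.org/whip/portal-feed"

The webpage side then just plays that WHIP/WHEP feed in a browser-based viewer.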
The irony is that you don't _actually_ need WebRTC to get subsecond latency; you can fairly reliably get ~100–200ms (plus network latency) with a completely normal TCP stream.[1] But since browsers have effectively standardized on HLS, whose design is completely antithetical to low-latency (you _can_ do low-latency HLS, but only with heroic efforts), low-latency streaming video has never really been part of their bread and butter. So instead, we abuse a _much_ more complicated protocol (WebRTC), because that happens to hit a path that was meant for low-latency videoconferencing.
(I did sub-100ms glass-to-glass streaming with VLC back in the day, so it is eminently possible. But the browser is in your way.)
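Concretely, the kind of plain-TCP pipeline I mean looks something like this (made-up host/port; the receiver flags just tell the player not to build up its own buffer):

    # Sender: low-latency x264 into MPEG-TS over a raw TCP connection.
    ffmpeg -re -i input.mp4 \
           -c:v libx264 -preset ultrafast -tune zerolatency \
           -f mpegts "tcp://127.0.0.1:9000?listen"

    # Receiver: disable input buffering and probe as little as possible.
    ffplay -fflags nobuffer -flags low_delay -probesize 32 "tcp://127.0.0.1:9000"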
[1] Much less than that is going to be tricky under non-perfect network conditions, because once you start having any sort of packet loss, you want to move away from TCP's retransmission regime and instead start dropping packets, accept the artifacts for a little while, and then move on.
The "sub-second latency" thing is the standardized punchline coming from WebRTC folks, but yes, it's confusing. Nothing can make video flow faster than above, only thing you can do by inventing a new standard is to minimize the overhead you must add for your purposes.
I've seen this too. WebRTC is just the path of least resistance/highest adoption at this point.
We could go make a better/simpler standard for video streaming over TCP. What a giant battle that would be, though, and it would never see adoption.
I have accepted/embraced the challenge of making WebRTC as accessible as possible. Stuff like WebRTC for the Curious in hopes of making it less painful for everyone dealing with the complexity :)
> We could go make a better/simpler standard for video streaming over TCP. What a giant battle that would be, though, and it would never see adoption.
What do you mean? <video> against an HTTP stream works; you don't need a new standard. But it's not a low-latency path (you cannot control the buffer).
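For example, something like this (made-up port and filenames) feeds a fragmented MP4 straight to a <video> tag over plain HTTP; the browser still decides how much of it to buffer, which is exactly the part you can't control:

    # Serve a fragmented MP4 over HTTP with ffmpeg's built-in listen mode,
    # then point <video src="http://localhost:8080/live.mp4"> at it.
    # (Single client at a time; illustrative, not a production setup.)
    ffmpeg -re -i input.mp4 \
           -c:v libx264 -c:a aac \
           -movflags frag_keyframe+empty_moov \
           -listen 1 -f mp4 "http://0.0.0.0:8080/live.mp4"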
Explanation: HTTP Live Streaming slices the bitstream into “segments” (traditionally 6–10 s each) and only starts playing after it has downloaded several of them. Out of the box that means 30–60 s of startup and live-edge latency—fine for linear TV, terrible for anything interactive.
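In ffmpeg terms, the classic segmented flavour looks roughly like this (segment length and playlist size are just illustrative values):

    # Classic segment-based HLS via ffmpeg's hls muxer: 6-second segments and
    # a short rolling live playlist. The player still buffers several segments
    # before it starts, hence the large startup and live-edge latency.
    ffmpeg -re -i input.mp4 \
           -c:v libx264 -c:a aac \
           -f hls -hls_time 6 -hls_list_size 6 \
           -hls_flags delete_segments out.m3u8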
Apple’s LL-HLS spec shrinks those segments into “partial segments” and uses CMAF so the player can start decoding while a chunk is still arriving. With careful encoder tuning, the right HTTP delivery (the spec originally relied on HTTP/2 push, later replaced by blocking playlist reloads), CDN support, and a compatible player, you can reach 2–5 s, sometimes a bit lower, but every link in the chain has to cooperate, so implementations are still called “heroic” for a reason.
Safari plays HLS natively, and on other browsers you can bolt on hls.js via Media Source Extensions. DASH, RTMP, SRT, etc. all need extra code or plugins, so HLS became the “safe default” for on-demand and broadcast-style streaming even though it isn’t low-latency friendly.
My first thought is a nice way to save a stream in whatever format you want (e.g. transcode for watching on an old phone or something on your commute): just ffmpeg -i <stream> and then all your usual video format options, instead of having to download it and convert it afterwards.
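Something like this, where <stream> stays whatever input URL the WebRTC support ends up taking and the codec/size choices are just examples for an old phone:

    # One-step "save the stream in whatever format you want" (illustrative settings).
    ffmpeg -i <stream> \
           -c:v libx264 -vf scale=-2:480 -b:v 800k \
           -c:a aac -b:a 96k \
           commute.mp4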
ffmpeg also has some processing abilities of its own, so you could e.g. greenscreen (chroma key) from a stream onto an existing video background.
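A rough sketch of that with ffmpeg's chromakey and overlay filters; the filenames and the key colour/thresholds are made up:

    # Key out green from the incoming stream and composite it over an
    # existing background video (video only; audio mapping left out).
    ffmpeg -i background.mp4 -i <stream> \
           -filter_complex "[1:v]chromakey=0x00FF00:0.15:0.05[fg];[0:v][fg]overlay=shortest=1[out]" \
           -map "[out]" -c:v libx264 composited.mp4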
ffmpeg is a pretty low-level building block and, as others have said, it's mostly used as a library: a lot of video players and processing tools can now add support for stream inputs easily, and that's probably where the biggest impact is.
You can only really get a video stream out of Unreal Engine using WebRTC, so now clients can at least use ffmpeg/avconv instead of something even worse like libdatachannel.
I want my desktop app, https://ossia.io, which uses ffmpeg, to be able to send & receive video to/from another computer over the internet without having to fiddle with opening ports on each other's routers. This, combined with a server like vdo.ninja, solves that.
My guess is you could more easily build an open-source client for whatever video conferencing system you want that uses WebRTC (most services like Teams, Discord, Zoom, etc. seem to use WebRTC at least as a fallback for browsers, if not wholesale for everything, although there may be countermeasures to block unofficial clients).
Are there any popular/well-known WebRTC senders (or servers)? I'm pretty sure this is not for YouTube etc., right? So what would I watch through WebRTC?
Hmm, what does that mean, for instance, for workloads that use GStreamer's whepsrc? Is there a risk that a WHEP server running today will be incompatible with next year's WebRTC?