I wish there were a better way to scale receiver feedback. It is very hard, if not impossible, to scale without the server saving every frame and being able to resend frames as needed.
It's an umbrella term for a bunch of mechanisms by which the receiver can tell the sender about things like dropped packets, congestion, etc., so the sender can adjust the stream accordingly.
"Unfortunately, most WHIP implementations are highly complex and require modern C++11 or C++14, or RUST...To meet FFmpeg's requirements, this PR contains just C code. And we have rewritten the WHIP and WebRTC protocol stack using only around 2k lines of C code."
Where is the section on the security audit? For a protocol accepting remote streams especially this seems like a vector for exploitable bugs to be introduced unintentionally.
FFmpeg is an amazing project that everybody uses for good reason, but it's not written with the extreme focus on security that writing secure C demands. If you're using it in an adversarial environment or with untrusted data, you should be sandboxing it.
I tried to do this with OBS. The code worked really well. It was so stable and performant, and life is significantly better when you're not debugging SEGVs. However, it puts too much burden on the maintainers.
So much work goes into packaging etc… I don’t have much hope for the ‘Rust as a submodule in a C/C++ project’ story :(
Just FYI, what we do at https://qbix.com/platform is we send WebRTC via MediaRecorder to a Node.js server over a socket. We then call ffmpeg from there to record the video. In addition we can send RTMP to YouTube, Telegram, or Facebook for a livestream.
Actually, we are big fans of trying to eliminate ALL dependence on Big Tech platforms, so if you want to run your broadcast in a peer-to-peer way, you can:
Yes, we built a self-rebalancing peer-to-peer broadcast tree based on WebRTC that can be used to "livestream" to an unlimited number of people without needing Big Tech server farms.
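A rough sketch of that server-side hop, written in Go for brevity (the Qbix pipeline itself is Node.js, and the route, RTMP URL, and codec flags below are illustrative guesses, not their actual configuration): the server accepts the browser's MediaRecorder chunks and pipes them straight into an ffmpeg child process that restreams over RTMP.

```go
package main

import (
	"fmt"
	"io"
	"net/http"
	"os/exec"
)

// ffmpegArgs builds an argv that reads MediaRecorder's WebM chunks from
// stdin and restreams them as FLV over RTMP. RTMP/FLV wants H.264 and AAC,
// so the browser's VP8/Opus has to be transcoded on the way through.
func ffmpegArgs(rtmpURL string) []string {
	return []string{
		"-i", "pipe:0", // WebM chunks arrive on stdin
		"-c:v", "libx264", // transcode video for FLV
		"-c:a", "aac", // transcode audio for FLV
		"-f", "flv", rtmpURL, // container RTMP expects
	}
}

// handleUpload streams the incoming request body into ffmpeg's stdin.
// A real deployment would use a WebSocket per broadcaster; a plain POST
// body keeps the sketch short.
func handleUpload(w http.ResponseWriter, r *http.Request) {
	cmd := exec.Command("ffmpeg", ffmpegArgs("rtmp://a.rtmp.youtube.com/live2/STREAM_KEY")...)
	stdin, err := cmd.StdinPipe()
	if err != nil {
		http.Error(w, err.Error(), http.StatusInternalServerError)
		return
	}
	if err := cmd.Start(); err != nil {
		http.Error(w, err.Error(), http.StatusInternalServerError)
		return
	}
	io.Copy(stdin, r.Body) // feed chunks to ffmpeg as they arrive
	stdin.Close()
	cmd.Wait()
}

func main() {
	// In the real server you would wire handleUpload to a route, e.g.:
	//   http.HandleFunc("/upload", handleUpload)
	//   http.ListenAndServe(":8000", nil)
	fmt.Println(ffmpegArgs("rtmp://a.rtmp.youtube.com/live2/STREAM_KEY"))
}
```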
If you want to record WebRTC (or stream it to a service like YouTube/Twitch/Facebook), there are many different implementation strategies. The choices and trade-offs can be pretty confusing.
I work at Daily.co, where we provide a WebRTC platform that makes cloud-based recording/streaming not just possible but also visually rich and easy to use. The article is written from a neutral viewpoint, though; the options discussed could be used by any WebRTC app and are not specific to Daily.
More specifically it’s WHIP publishing support, not full WebRTC.
WHIP stands for WebRTC-HTTP Ingestion Protocol. It’s “a simple HTTP-based protocol that will allow WebRTC-based ingestion of content into streaming services and/or CDNs.”
Thanks, I was looking for a well-supported and actively maintained C++ library, and libdatachannel looks like it!
Btw, you need to check the links: the AWS one is broken, and rawrtc has had only a few commits over the last few years. Also, neither of those is compared in webrtc-echoes.
* Simulcast - WebRTC has the concept of uploading multiple quality levels baked in. This means they no longer need to be generated server-side. Huge savings on cost/complexity for broadcast servers.
* P2P - WebRTC lets two hosts connect to each other. Instead of uploading to a 3rd party, send video directly to a friend! Big wins on latency, cost, and privacy.
* Latency - With WebRTC you can get ~100ms of latency when broadcasting. Pretty magical if you are interacting with an audience.
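To make the simulcast point concrete: the sender already uploaded every quality level, so the server (an SFU) just picks which pre-encoded layer to forward to each viewer, with no transcode. A toy layer selector, with made-up rids and bitrates:

```go
package main

import "fmt"

// A simulcast-publishing browser commonly sends three encodings of the same
// source, identified by RTP "rid". The rids and bitrates here are illustrative.
type layer struct {
	rid  string
	kbps int
}

var layers = []layer{ // ordered low -> high
	{"q", 150},  // quarter resolution
	{"h", 500},  // half resolution
	{"f", 1500}, // full resolution
}

// pickLayer returns the highest-quality rid whose bitrate fits within the
// viewer's estimated downlink, falling back to the lowest layer.
func pickLayer(downlinkKbps int) string {
	best := layers[0].rid
	for _, l := range layers {
		if l.kbps <= downlinkKbps {
			best = l.rid
		}
	}
	return best
}

func main() {
	for _, bw := range []int{100, 600, 3000} {
		fmt.Printf("%d kbps -> %s\n", bw, pickLayer(bw))
	}
}
```

Running it selects q, h, and f for 100, 600, and 3000 kbps viewers respectively; all the server ever does is switch which packets it relays.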
This is fluff, but I can't help myself. I think people are sleeping on this second/third wave of WebRTC. Sean, you're a hero - pushing this work in all of the ecosystem, multi-language, promoting the cross-compat testing suite, all of it. Super significant work. And this? ffmpeg is everywhere. I know that even if this merges, it will take a long time to "trickle" to TVs, but the potential with a thin ffmpeg webrtc client built-in to TVs and set-top boxes, with an OSS broadcastbox setup? I'm excited, thank you!
I am not entirely sure where you are going with this comment, but FYI here is an ffmpeg fork with the build system replaced with a "build.zig" file and the Zig build environment: https://github.com/andrewrk/ffmpeg
I would assume that a C implementation of WebRTC would not differ much from other libraries and should be compatible.
Instead of doing fixed-interval keyframes, we can use receiver feedback (massive reduction in bandwidth)
Instead of server-side generated transcodes, we can use Simulcast. This will give better quality AND massively reduced server load.
If anyone wants to use with Pion check out https://github.com/Glimesh/broadcast-box
Then run `ffmpeg -re -f lavfi -i testsrc=s=1280x720:r=30 -f lavfi -i sine=f=440:b=4 -vcodec libx264 -pix_fmt yuv420p -profile:v baseline -r 25 -g 50 -acodec libopus -ar 48000 -ac 2 -f rtc -authorization "STREAM_NAME" "http://localhost:8080/whip"`