Sean-Der 1 day ago

I wrote this to make reverse engineering WebRTC services easier. It also lets you save/send arbitrary media from WebRTC sessions. The idea is that you do all your auth/interaction in the browser, but do all the WebRTC in Go, so you have a lot more control. There is more to do with it, but it is far enough along to share at least.

In the README is a screenshot of sending my webcam, but with the outgoing video replaced by an ffmpeg testsrc. Handoff sits in between, so it can substitute any arbitrary video.
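
For reference, the test-pattern side of that can be generated with ffmpeg's lavfi testsrc. This is only a sketch: the resolution, rate, and encoder settings here are assumptions, and where the output goes depends on how Handoff is wired up.

```
# Generate an H264 test pattern and write it to stdout;
# pipe it to whatever consumes the replacement media.
ffmpeg -re -f lavfi -i testsrc=size=1280x720:rate=30 \
  -c:v libx264 -preset ultrafast -tune zerolatency \
  -f h264 pipe:1
```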

  • czbond 1 day ago

    Interesting and novel project. I don't have anything constructive to add, but well done.

    • Sean-Der 1 day ago

      Thanks :)

      No better feeling than working on something and hearing it is novel! So many projects that I think will be useful miss the mark.

      • irq-1 18 hours ago

        Connect it to an AI talking head and you have a customer service center - users browsing a store can click to talk with 'someone'.

  • ikety 20 hours ago

    I bookmarked your project years ago when I wanted to attempt implementing WebRTC fully in a niche programming language. But I think I may have vastly underrated how difficult this is.

    Have you come across https://github.com/elixir-webrtc/ex_webrtc ?

    Wasn't sure if they used Pion as a guide

    • Sean-Der 12 hours ago

      What language? Would love to help :) Especially with AI coding, I think it would be a lot more accessible these days.

      ex_webrtc is super cool. They have a nice built-in dashboard/analytics flow. It seems way more 'operations friendly' than Pion. I haven't used it heavily myself though.

  • ericmcer 17 hours ago

    I am kind of a WebRTC noob, but... does this mean that after I define my input channel (audio track, video, etc.) and establish a peer connection, I can send data from a different source?

    Are there any complications with that or is it kind of on me to not confuse the other peer by sending unexpected formats?

    • Sean-Der 12 hours ago

      Yep, exactly! After it starts you can slice in any media you want.

      You need to make sure you are sending the same codec that the remote expects; beyond that there are no constraints! You can use a different resolution, bitrate, etc.
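
      A rough sketch of what that looks like with Pion (peer connection setup, signaling, and the frame source are elided; the track IDs and the H264 choice are assumptions, and the remote must have negotiated the same codec):

      ```go
      // Sketch only: pc is an already-negotiated *webrtc.PeerConnection.
      // The MimeType must match what the remote negotiated.
      videoTrack, _ := webrtc.NewTrackLocalStaticSample(
          webrtc.RTPCodecCapability{MimeType: webrtc.MimeTypeH264},
          "video", "handoff",
      )
      pc.AddTrack(videoTrack)

      // Frames can come from anywhere (webcam, file, ffmpeg testsrc, ...).
      // Resolution and bitrate are free to change; only the codec is fixed.
      for frame := range frames {
          videoTrack.WriteSample(media.Sample{Data: frame, Duration: time.Second / 30})
      }
      ```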

Hakkin 1 day ago

Oh, this is interesting. I have been messing around with a WebExtension for dumping encoded WebRTC media streams by intercepting streams on RTCPeerConnection.addTrack, but it doesn't work reliably: the current WebRTC encoded stream APIs only support a single reader, so if the website itself is also using the API, it either breaks the site or makes the media impossible to intercept. This seems like a nice workaround. I had briefly considered some kind of proxy, but I wrote it off since WebRTC traffic is encrypted; I never considered proxying the peer connection API calls themselves. Pretty clever.

  • Sean-Der 1 day ago

    I can’t wait for https://w3c.github.io/webrtc-rtptransport/. For pulling video out, it seems like the perfect fit.

    I went with a proxy because Google Meet doesn’t let me hook any RTCPeerConnection APIs at all. I wanted to send synthetic media in but couldn’t get it working, so I ended up doing a virtual webcam on Linux.

hparadiz 1 day ago

Would be interesting for a Wayland DM to catch this and draw it to a picture-in-picture overlay.

  • Sean-Der 1 day ago

    Oh yes! I will pull together a demo.

    With ‘media-send’ I can send it out to ffmpeg/GStreamer, and that does all the heavy lifting.
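
    I don't know media-send's exact output format, but if it hands over RTP, the ffmpeg side could be as simple as this sketch (the SDP filename and output path are assumptions):

    ```
    # Record the incoming RTP stream described by stream.sdp to a file,
    # copying the encoded media without re-encoding.
    ffmpeg -protocol_whitelist file,rtp,udp -i stream.sdp -c copy capture.mkv
    ```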

    • hparadiz 1 day ago

      I made a demo recently with my Google home camera using the official API https://github.com/hparadiz/camera-notif

      But your way of grabbing the stream is so much simpler.

      The only problem is that the overlay layer is super new in KDE Plasma. You can also use v4l2loopback and make it a virtual camera.

      • Sean-Der 1 day ago

        Have you tried doing video + pipewire yet?

        I am also using v4l2loopback, but it's annoying to juggle /dev/video* devices. I wanted to do video stuff in Docker containers, and it would be amazing if I could run PipeWire in each container and have no global state.

        I couldn't get anything to work in Chromium. Firefox saw the device, but video didn't come across.

        • hparadiz 20 hours ago

          When you say +pipewire, do you mean just audio playback? If you are pushing video to a picture-in-picture overlay, a user might expect that, so yes: you could write to the PipeWire socket like any other program. It's usually fully open for you to do just that.

          I use v4l2 regularly with OBS. In order for Chrome/Chromium to see the device, you need to create it before launching Chrome/Chromium. You can set up v4l2 devices automatically by adding a modprobe config for your kernel.
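
          A minimal sketch of that modprobe setup (the device number and label are arbitrary; exclusive_caps=1 is what makes Chrome/Chromium treat the loopback as a real capture device):

          ```
          # /etc/modules-load.d/v4l2loopback.conf -- load the module at boot
          v4l2loopback

          # /etc/modprobe.d/v4l2loopback.conf -- module options
          options v4l2loopback devices=1 video_nr=10 card_label="Virtual Cam" exclusive_caps=1
          ```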

          My v4l2 notes might be helpful: http://technex.us/2022/06/v4l2-notes-for-linux/

esafak 1 day ago

Is this a good way to improve performance (frame rate, latency, CPU load)?

  • Sean-Der 1 day ago

    Yea!

    * Do video playback outside the browser. You can render a subset of frames, use a different pipeline for decode, etc...

    * Pull video from a different source: join Google Meet on the current computer, but stream from another host.