Does WebRTC fit anywhere in M2M? Chromecast sheds some light on this question.
I’ve heard more than once that WebRTC is the next great thing. And it is, but there is hype around it, and some of that hype is a bit weird. One of the weirder claims I’ve heard about WebRTC is how good it is for M2M, and to me that looked unfounded. Say M2M to me and I immediately envision sprinklers, large Coke vending machines, thermostats and a bunch of other small controllers that go into just about anything. Somehow, video conferencing doesn’t make it onto that list.
I decided to look into it further. My usual technique is to place a draft post on my calendar and then move it around until I feel comfortable enough to write about it. That M2M and WebRTC synergy? Its draft has been floating around my calendar for a while, and it still is. In the meantime, here’s what I think might be the first indication of how WebRTC is used for an M2M type of use case.
Two weeks ago, Google pulled an Apple. They did that by introducing the Chromecast – a gadget that sold out immediately. Marketed as a streaming media player, it is an HDMI dongle that connects to a TV and shows whatever you have on your Android device. The linkage to Android is built on two things only:
- Chromecast comes from Google, whose current focus is probably on enriching only its own Android ecosystem
- If rumors are true, it operates by way of WebRTC
Android Authority has an explanation of the technology behind Chromecast:
> When you’re casting or mirroring a webpage using the extension, the Chromecast loads your current webpage using an HTML5 standard called WebRTC. If you’ve heard of WebRTC before, you normally would associate it with video chatting. That’s still what’s going on here, basically. Your computer essentially video chats or streams a video of the current tab in Chrome over to the Chromecast. The video is constantly encoding and transmitting.
WebRTC here is used for a rather simple task – grab a screen, encode it and then stream it in a single direction (not a video chat) to another device (the Chromecast in this case, which then decodes it and displays it via HDMI).
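To make that concrete, here is a minimal sketch of what such a one-way, send-only WebRTC session could look like. It assumes Chrome’s tabCapture extension API as the source, and sendToReceiver() is a made-up signaling helper; whether Chromecast actually works this way internally is anyone’s guess:

```typescript
// A one-way "cast" sketch: capture the current tab, let WebRTC encode
// it, and send it over a send-only peer connection. chrome.tabCapture
// is a real Chrome extension API; sendToReceiver() is a hypothetical
// signaling helper, and none of this is the actual Chromecast protocol.
declare const chrome: any; // extension APIs, kept loose for the sketch
declare function sendToReceiver(sdp: string): void; // hypothetical

const pc = new RTCPeerConnection();

chrome.tabCapture.capture({ video: true, audio: true }, (stream: MediaStream) => {
  // Send-only: we add our tracks and never expect media back.
  stream.getTracks().forEach((track) => pc.addTrack(track, stream));

  pc.createOffer().then(async (offer) => {
    await pc.setLocalDescription(offer);
    sendToReceiver(offer.sdp ?? ""); // the receiver answers out of band
  });
});
```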
I am not sure it is an M2M use case exactly, but it is the closest thing to date that I’ve found, and it is cool. It shows how versatile WebRTC can be and how it can be utilized in new gadgets.
If you have seen other interesting use cases or devices – do share. I am on the lookout for such stories.
Hi,
I read that article too…
One thing I am still not sure about is whether it is streaming the raw data (data channel or a similar technology) or actually encoding the content…
The former should still work, since the device is also Chrome based, so I guess it knows how to “handle” the raw data. In the case of encoding, we can (?) easily spot the difference in CPU/GPU usage. I don’t have a setup to try this out. If someone can come back with a try, it might be helpful.
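Roughly, the two alternatives I have in mind would look something like this (just a sketch, helper names made up, not tested):

```typescript
// Untested sketch of the two alternatives. serializeTab() is made up,
// and the signaling glue is omitted entirely.
declare function serializeTab(): ArrayBuffer; // hypothetical raw-data path
declare const tabStream: MediaStream;         // assume the tab is already captured

const pc = new RTCPeerConnection();

// Option A: raw data over a data channel. No video encoding happens,
// so the receiver must know how to interpret the bytes itself.
const channel = pc.createDataChannel("tab-contents");
channel.onopen = () => channel.send(serializeTab());

// Option B: an encoded media stream. Here the sender pays the video
// encoding cost, which is the CPU/GPU difference one could measure.
tabStream.getTracks().forEach((t) => pc.addTrack(t, tabStream));
```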
I really don’t know…
Actively encoding and decoding the data between the browser and the Chromecast means using VP8. It seems reasonable, considering the resolutions it is capable of. On the other hand, it means Marvell (the chipset vendor of the Chromecast) has added VP8 decoding to it – not unheard of, but a bit of a stretch if I may say so.
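One way to at least see what the browser side is willing to send is to peek at the SDP it generates; VP8 shows up in the rtpmap lines. A quick sketch (plain WebRTC introspection, nothing Chromecast specific):

```typescript
// Check which video codecs the browser offers. This only inspects the
// browser's own offer; it tells us nothing about the Chromecast side.
const pc = new RTCPeerConnection();
pc.addTransceiver("video"); // offer video without capturing anything

pc.createOffer().then((offer) => {
  // VP8 appears as "a=rtpmap:<payload type> VP8/90000" in the SDP.
  const offersVP8 = /a=rtpmap:\d+ VP8\/90000/.test(offer.sdp ?? "");
  console.log("VP8 offered:", offersVP8);
});
```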
So is the API that grabs the whole Chrome tab as a stream for sharing a public JS API, or something internal? Wouldn’t that determine how much custom work is going on inside Chrome versus just gluing together two streams at a high level? This raises the question of how many other useful content-sharing streams can be easily constructed and sent (and how good VP9/VP8 is at compressing non-video streams). Current #WebRTC demos have either very poor or no content sharing capabilities, and we need to “disrupt” the web conferencing market as well as video conferencing :).
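To illustrate what I mean by gluing streams together: once you have any MediaStream, attaching it to a peer connection is trivial, so a canvas could be yet another content-sharing source. A rough, untested sketch (canvas.captureStream() is a real browser API; the rendering loop is made up):

```typescript
// Any MediaStream can be glued onto a peer connection, so tab capture
// is just one source among many. A <canvas> works as another
// "content stream", e.g. for slides or rendered documents.
const canvas = document.createElement("canvas");
canvas.width = 1280;
canvas.height = 720;
const ctx = canvas.getContext("2d")!;

// Repaint the canvas with whatever content we want to share.
setInterval(() => {
  ctx.fillStyle = "#fff";
  ctx.fillRect(0, 0, canvas.width, canvas.height);
  ctx.fillStyle = "#000";
  ctx.fillText(new Date().toISOString(), 20, 40);
}, 100);

const pc = new RTCPeerConnection();
const stream = canvas.captureStream(10); // 10 fps is plenty for slides
stream.getTracks().forEach((t) => pc.addTrack(t, stream));
```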
Lawrence,
Give it time. I am sure the interesting use cases and flashy demos will arrive soon enough.