MTI (Mandatory To Implement) for WebRTC is… mandatory.
There have been talks again in the IETF about the mandatory codecs. It isn't only video either: some are trying to reopen the voice discussion as well, pushing for the AMR voice codecs to be added. I think it is time to state it clearly:
The more codecs we have, the less interoperability and quality we get
Think about it for a second.
Having more codecs leads to more choice and a richer feature set. This in turn means vendors need to develop these capabilities, which requires additional testing. That testing needs to include interoperability testing, and the interoperability testing effort grows exponentially with the number of features you need to test across vendors.
This is why H.323 struggled (and still does) with decent interoperability.
This is why SIP can’t achieve interoperability for anything beyond voice.
Don’t put WebRTC in that same position.
The fewer mandatory codecs we have, the better the solution will be
Let’s take it to the other extreme, and not offer any mandatory codec at all. Make ’em all optional. No least common denominator for a video codec.
In such a stupid case, how exactly do you make a video call from Chrome to IE (assuming Chrome does only VP8 and IE does only H.264)?
The whole point behind WebRTC is to be a seamless and integral part of the browser – not requiring users to think before they act on a web page.
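To make this concrete, here is a minimal sketch of the problem from the application’s point of view. It uses RTCRtpSender.getCapabilities(), an API standardized after this debate took place, and a hypothetical remote codec list; the point is that without any codec in common, the offer/answer negotiation simply cannot produce video.

```typescript
// Illustrative sketch: check whether the local browser can offer any of the
// video codecs the remote side claims to support. If the intersection is
// empty, SDP offer/answer negotiation cannot set up a video stream.
// RTCRtpSender.getCapabilities() is a later WebRTC API; older browsers only
// expose this information through the SDP they generate.

function sharedVideoCodecs(remoteCodecMimeTypes: string[]): string[] {
  const caps = RTCRtpSender.getCapabilities("video");
  if (!caps) {
    return []; // this browser cannot send video at all
  }
  const local = new Set(caps.codecs.map((c) => c.mimeType.toLowerCase()));
  return remoteCodecMimeTypes.filter((mime) => local.has(mime.toLowerCase()));
}

// Hypothetical remote endpoint that only implements H.264.
const common = sharedVideoCodecs(["video/H264"]);
if (common.length === 0) {
  // No mandatory-to-implement codec to fall back on: the call needs
  // server-side transcoding or simply cannot carry video.
  console.warn("No common video codec with the remote side");
}
```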
You can always mitigate that with a transcoding function in the backend of the service, but transcoding is expensive and will limit the use cases that adopt WebRTC, since many of them will no longer make business sense (remember that the whole point of WebRTC is to remove barriers to entry in this field).
No mandatory codec means a higher barrier to entry for startups
What about optional codecs then? Keep the mandatory ones, but let vendors add whatever additional codecs they want to the browser.
It is possible. In theory. But in practice? Where’s the business sense here?
You add AMR-WB to IE, providing a good-quality wideband voice codec with the purpose of making it easier to gateway a browser call to a mobile phone (which theoretically supports AMR-WB). Great. But the server still needs to handle all the other browsers out there and either transcode from the Opus wideband codec or reduce the quality to a narrowband voice solution using G.711.
This leads us nowhere.
The end result is a backend that needs to deal with transcoding anyway, as the sketch below illustrates. And that being the case, why make the investment in the browser at all? It can give some differentiation, but that is a game for the big boys of the ecosystem and no one else.
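A minimal sketch of that gateway decision, under assumed codec names and a deliberately simplified selection function: even when AMR-WB shows up in a browser’s offer, the transcoding path still has to exist for every browser that only offers Opus or G.711.

```typescript
// Illustrative sketch only: a gateway picking the voice codec handling for
// the mobile-network leg based on what the browser offered. The codec names
// and the return shape are hypothetical; the point is that unless every
// browser offers AMR-WB, the transcoding path must exist anyway.

type VoiceCodec = "AMR-WB" | "opus" | "PCMU"; // PCMU = G.711 µ-law

function selectMobileLegHandling(browserOffer: VoiceCodec[]): {
  browserCodec: VoiceCodec;
  needsTranscoding: boolean;
} {
  if (browserOffer.includes("AMR-WB")) {
    // Rare best case: pass the audio through untouched to the mobile side.
    return { browserCodec: "AMR-WB", needsTranscoding: false };
  }
  if (browserOffer.includes("opus")) {
    // Common case: transcode Opus to AMR-WB to keep wideband quality.
    return { browserCodec: "opus", needsTranscoding: true };
  }
  // Fallback: narrowband G.711, still transcoded on the gateway.
  return { browserCodec: "PCMU", needsTranscoding: true };
}

// Most browsers offer Opus and G.711 only, so transcoding is unavoidable.
console.log(selectMobileLegHandling(["opus", "PCMU"]));
```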
–
If you are planning to develop anything for WebRTC, work on the assumption that there will be a single mandated video codec and only two voice codecs (G.711 and Opus); the rest is fantasy-land (or science fiction).
Not convinced? Check out my thought process about browser differentiation; a lot of it revolves around codecs as well.