WebRTC will Kill Enterprise Video Conferencing as we Know it

By Tsahi Levent-Levi

March 22, 2012  

This is the second installment of a series of posts I am writing on WebRTC. The first one was a kind of an introduction to WebRTC. This time, I want to discuss the ramifications it has on enterprise video conferencing.

A bit over 13 years ago I started working in the video conferencing industry. At the time, video conferencing was ruled by circuit switched ISDN lines running a protocol called H.320. It was the dawn of IP for video conferencing, with the introduction of the H.323 protocol.

The abused buzzword of the day was convergence: have one device talk to another via video. People communicating with room systems, phones, PCs, their fridge and whatnot. Those were the days.

Then we got to a point where video conferencing required things like security, NAT traversal, manageability, etc. The word du jour was… interoperability! And not interoperability 2.0 mind you – simple interoperability: everyone using the same protocol, working in an orchestrated way – think GSM.

And we moved on. We got sophisticated. We had our SIP as well as H.323. And tada! We had interworking as a word. Having different protocols “speak” to each other through means of translation (gatewaying in our “language”).

Lately as an industry we’ve shifted to unified communications. This time, we’re bridging the islands: using a single application to do all of our communication interactions. Well – good luck with that one.

But really – enterprise video conferencing is still in the same place it was 15 years ago. A few more features, better resolution and frame rate. At the end of the day we still get the same crappy stuff: a product that works inside the silo of our own company and doesn’t really communicate with anything else.

Call that interoperability.

While there are ways to bridge these islands of siloed installations and there are companies who are working on it (mainly by way of cloud hosting), this isn’t going to be enough.

WebRTC shifts the whole paradigm: from now on, it won’t matter if you are using H.323 or SIP for your video conferencing. You won’t care about signaling and protocols and interoperability.
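To make that claim concrete, here is a toy sketch of why signaling stops being an interoperability battleground: WebRTC deliberately leaves signaling to the application, so any channel that can relay opaque offer/answer messages will do. All names below are illustrative, not a real WebRTC API – in an actual browser the SDP blobs would come from `RTCPeerConnection`’s `createOffer()` and `createAnswer()`.

```javascript
// A trivial in-memory stand-in for whatever signaling transport the
// application picks (WebSocket, HTTP polling, email, carrier pigeon...).
function makeSignalingChannel() {
  const handlers = [];
  return {
    send(msg) { handlers.forEach((h) => h(msg)); },
    onMessage(h) { handlers.push(h); },
  };
}

// The caller pushes an offer over the channel.
// In a browser, the SDP blob would come from pc.createOffer().
function placeCall(channel) {
  channel.send({ type: "offer", sdp: "v=0 ..." });
}

// The callee watches for offers and replies with an answer.
// In a browser, the SDP blob would come from pc.createAnswer().
function listenForCalls(channel) {
  channel.onMessage((msg) => {
    if (msg.type === "offer") {
      channel.send({ type: "answer", sdp: "v=0 ..." });
    }
  });
}
```

The point of the sketch: the application only shuttles opaque messages around; no H.323 or SIP stack is involved, and the media negotiation happens inside the browser.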

A WebRTC demo call in a native browser

Video conferencing will be web browser applications. Maybe bigger, smarter or faster – but still web applications. Once that happens – who cares which app is used for that video call or collaboration session? You fire up a link, get to the browser and communicate.

Enterprises won’t be able to stay behind. They will need to think differently and modify their systems to meet the demands of this brave new world. Call it Video Conferencing 2.0. Or just drop the name and focus on the transition.

How do they do that? This will be the topic of my next post in this series, where I will outline my own definition of the future video conferencing room system.


    1. I think that the impact on Polycom will be large, but then, there are those who believe Polycom is in a bad position already today.

      WebRTC has a long way to go until it reaches the enterprise: there’s stabilization, capturing B2C markets, targeting consumer markets, and only then the enterprise.
      Polycom and other video conferencing vendors will start seeing WebRTC targeting their home market of internal enterprise communications in 3-4 years. But before that happens, they will have to deal with B2C players that will want to eat their lunch.

  1. Dear Tsahi,

    What is interesting for me – it might have been called convergence in the past – is that in robotics we really need to be able to stream video (or sensor data in general) from one device to the next. And preferably not only streaming, but also processing.

    A lot of image processing is very tough for a robot to do on-board, while streaming the images is not such a big deal. It would be very helpful if people would think of real-time image/sensor processing in the cloud. Not so much the non-real-time solutions, such as http://www.blitline.com/, but really the low-latency characteristics that SIP/Jabber/WebRTC people care about!

    Maybe (real-time) games using accelerometer/gyroscope/compass data on smartphones will be a game changer.

    Thanks for your excellent posts, and I hope you will find time some day to post about these matters!

