The Challenges in Porting WebRTC to Mobile Devices

July 3, 2013

Ever thought of how WebRTC fits into mobile?

Arik Halperin from SoftSkills Solutions has had his share of mobile development. Recently, he ported WebRTC to Android and iOS. Here is his experience around that activity.

When people think of WebRTC, they usually think about Chrome or Firefox. But what about mobile applications? WebRTC is, above all, a great media engine; the source code is open and free, so integrating it into a mobile app should be a piece of cake. At least that is what I thought…

WebRTC on mobile

As I found out, building mobile applications around WebRTC can be a bit tricky. WebRTC does come with ports for Windows, Mac, Android and iOS (audio only), and Google is doing an amazing job, but in order to make a truly usable mobile app there are a few challenges that still need addressing:

  1. Device Challenges: Challenges which emerge from the fact that mobile devices vary in form factor, hardware configuration and computation resources. Included in this are processor architecture, CPU power, screen and camera resolution, audio acceleration support, different sensors and battery.
  2. Medium Challenges: Challenges which are a product of the instability of the mobile environment. For example: changes in connectivity (WiFi, 3G, CDMA), changes in network profile (network load, interference), ambient noise, changes in lighting, etc.

Device Challenges

  • Form factor differences: Form factor differences mean that one device’s image will differ from another’s in aspect ratio and resolution. What should happen if I use an iPad 4 and a Galaxy S4 mini? Or if a Chrome browser connects to a mobile app? How should these variations be treated?
  • Audio hardware configuration: Different devices have different implementations of the built-in speaker and Bluetooth. On Android especially, there are many differences between devices that require special handling.
  • Screen & camera resolutions: Different camera and screen resolutions can affect the quality of the image in the video stream.
  • Processor architecture: Video support in WebRTC requires a CPU with at least the ARMv7 architecture and its NEON extension. Audio can run on processors without NEON (you will just not have enough CPU power for video).
  • CPU Power: CPU power can be a limiting factor in frame rate and image resolution, especially image capture, due to the complexities of the video encoding.
  • Audio acceleration support: Not all devices support audio acceleration. On a device that does (iPhone 4 and above, for example), it can be utilized to save the CPU power used by WebRTC audio processing.
  • Sensors: Mobile devices can change orientation, which will affect the image generated by them and the one displayed on their screen.
  • Battery: I left it to the end, because it’s a real pain. Running a WebRTC video call kills your battery! There could be lots of innovation in this one, both from chip vendors and the developer community.
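
To make the processor-architecture point concrete, here is a minimal sketch in Java of how an app might decide between audio-only and full video before starting a call, by checking the device’s /proc/cpuinfo for the NEON extension. The class and method names are my own illustration, not part of WebRTC:

```java
// Minimal sketch (my own, not WebRTC code): decide whether a device can
// handle WebRTC video by looking for the NEON extension in /proc/cpuinfo.
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.Paths;

public class NeonCheck {

    // Returns true if a "Features" line in the given cpuinfo text lists "neon".
    static boolean hasNeon(String cpuinfo) {
        for (String line : cpuinfo.split("\n")) {
            if (line.toLowerCase().startsWith("features")) {
                String features = line.substring(line.indexOf(':') + 1).trim();
                for (String feature : features.split("\\s+")) {
                    if (feature.equals("neon")) {
                        return true;
                    }
                }
            }
        }
        return false;
    }

    public static void main(String[] args) throws IOException {
        Path cpuinfoPath = Paths.get("/proc/cpuinfo");
        String cpuinfo = Files.exists(cpuinfoPath)
                ? new String(Files.readAllBytes(cpuinfoPath))
                : "";
        System.out.println(hasNeon(cpuinfo)
                ? "NEON present: audio + video should be feasible"
                : "No NEON: plan for an audio-only call");
    }
}
```

The parser is kept separate from the file read so the decision logic can be exercised with sample cpuinfo text on any machine, not just on a device.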

Medium Challenges

  • Changes in connectivity: Changes in connectivity can create a problem when they happen inside a session. You need a strategy for handling them: for instance, if a device can change its connection from WiFi to 3G within a session, you should take the right measures to keep the session alive (block the 3G handoff, recreate the session after the handoff, etc.).
  • Network profile: In mobile, the network profile can change far more dynamically than in a stationary PC environment. For example, the user may change position, and WiFi or 3G jitter and bandwidth can change as a result. In a car, cell handoffs can occur and there can be glitches in connectivity (better not be the driver, though, if doing video). The WebRTC configuration in a mobile app needs to take these into account in elements like bandwidth, frame rate and resolution adaptation.
  • Changes in lighting: The user may move with the phone to places where the light is dimmer, and this can affect the video displayed on the receiving side.
  • Ambient noise and use of speaker: A mobile profile usually demands more rigorous noise suppression and echo cancellation algorithms.
  • Network challenges: In a mobile environment especially, NATs and firewalls are a real pain (Especially if you want your app to work in corporate environments).
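
The connectivity-change strategy above can be sketched as a small policy: per transition, decide whether to keep the session, renegotiate the transport, or tear the call down. This is a hedged sketch in plain Java; the enum and method names are my assumptions, and a real app would drive it from Android’s ConnectivityManager or iOS reachability callbacks:

```java
// Minimal sketch (names are mine, not a WebRTC API): a policy table for what
// to do with an ongoing call when the device's connectivity changes.
public class HandoffPolicy {

    enum Network { WIFI, CELLULAR, NONE }

    enum Action { KEEP_SESSION, RESTART_ICE, END_SESSION }

    // Decide how to keep (or end) the session on a connectivity change.
    static Action onConnectivityChange(Network from, Network to) {
        if (to == Network.NONE) {
            return Action.END_SESSION;   // no network left: tear the call down
        }
        if (from == to) {
            return Action.KEEP_SESSION;  // same network, nothing to do
        }
        // WiFi <-> 3G handoff: local addresses changed, so renegotiate the
        // transport (one way to "recreate the session after handoff")
        return Action.RESTART_ICE;
    }

    public static void main(String[] args) {
        // A WiFi-to-3G handoff mid-call should trigger renegotiation
        System.out.println(onConnectivityChange(Network.WIFI, Network.CELLULAR));
    }
}
```

Keeping the policy in one pure function makes it easy to test each transition, and the platform-specific connectivity listeners only have to translate their events into these enum values.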

Summary

WebRTC is a great platform for building live video chat mobile applications. However, it requires some tuning. There are inherent challenges in a mobile application that are not found in a stationary PC environment. If you want to create a great WebRTC mobile app, I will be happy to assist.

Comments

  1. Most of these are challenges that apply to any software that wants to support multiple devices and environments. Not saying it is easy, especially because WebRTC pushes hard on a lot of the capabilities of devices, from taxing the CPU for encoding and encryption to network latency and thus battery life.

    It is just something you should expect to run into when you set a goal to support all devices. This is something web developers are very aware of.

    It used to be true that supporting IE could be up to 80% of the work of delivering a website. That is insane, but that is just reality.

    1. The intent here is slightly different, Lennie.

      WebRTC should have been a “mobile first” initiative. It isn’t. Not by a long shot.

      The code itself isn’t ready for mobile – and the fact that these challenges exist from the point of view of porting the base code means there’s still work to be done there.

      1. Sorry.

        The reason they didn’t could be multiple.

        They probably needed to get something to work as fast as possible, the mobile situation is more complicated. So the desktop was the way to go.

        Why did they want that? Because of the IETF and W3C process, it’s better to have a working demo to see how things could work, and maybe because of the usual business reasons (quick to market).

        Or maybe some of the existing code they had only worked on desktop, who knows. Once desktop works, it is probably easier to get it working in the mobile browser first and then make a library of it.

  2. Isn’t an Android Chrome beta with built-in WebRTC already in the works? And for iOS, can we be fairly confident Google is working on porting the WebRTC engine into their next several Chrome releases?

    I’ve tested the beta on my Droid 4 and it is mediocre – no other apps running. So I wouldn’t expect WebRTC to shine for another year or two before hardware is widespread supporting the codecs along with WebRTC built in to the browsers.

    So is there any good reason to go ahead and implement our own mobile version and expect to beat Google to it? And from what I understand, no mobile device supports VP8 natively, thus reducing the quality of the conversation somewhat if the CPU gets maxed out?

    I’m just trying to make sense out of this idea of porting our own version of it into the mobiles or wait for one of the browsers to support it.

    1. I would argue that most consumption on mobile today is via apps and not browsers. This means that most “respectable” services that require voice or video communication can and probably should adopt WebRTC in such a form and not rely on the web browser yet.

      As for quality – you can get it to the level of Skype, which isn’t bad for mobile. Just a matter of effort.

      1. Thanks for the comment – that makes sense, as it mirrors how Flash switched from being browser-centric to downloadable apps running in AIR. I guess I was pondering the complexity of developing videophone apps and having to deal with inserting the WebRTC engine along with the rest of the UI.

        While Adobe’s AIR already has its RTMFP engine abstracted away so that developers can focus on what they do best, this is the idea we should follow for WebRTC on mobile. Perhaps this is what I should be looking for: someone to provide an abstracted kit for the rest of us, so we can focus on the UI and simply plug into the mobile WebRTC API.

  3. Hey,
    We’ve been working on a simple mobile video call application (Android).

    Can you outline the steps needed to get going?

  4. Hi,
    I am an iOS developer.
    I tried to use WebRTC in my project, following the link http://www.webrtc.org/native-code/ios
    But the source is too large and I can’t fetch it to my local computer. Sometimes I get a network error, or this error:
    File "/System/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/subprocess.py", line 542, in check_call
    raise CalledProcessError(retcode, cmd)
    subprocess.CalledProcessError: Command '('gclient', 'sync', '--with_branch_heads')' returned non-zero exit status 2
    I don’t know why.

    I ran the sample project from https://github.com/mschmulen/webrtc-demo with the AppRTC server https://apprtc.appspot.com, but it can’t connect to the room.

    Can you help me with the way to build an application using WebRTC? Please.

  5. Hi, I have one question about your post.
    I am trying to develop an Android WebRTC chat app,
    but Android development of WebRTC is only supported on Linux.
    Is that right?
    I hope to develop on Windows.
    Is that impossible?

    I await your answer, thank you.
