WebRTC doesn’t need SIP. SIP needs WebRTC.
I think it would be fair to say that WebRTC doesn’t really need SIP. You can use SIP with WebRTC if you want to – just pick it up as a signaling protocol choice – one of many.
We can probably split a WebRTC deployment into 3 parts:
- The browser side – the WebRTC implementation itself and the signaling protocol used from the browser.
- The web backend – this is where the interaction with the web page using WebRTC happens. I am not talking about serving the HTML itself, but about the WebRTC interaction – what a lot of developers do today with Node.js for example.
- The VoIP backend – where media gets processed, and to some extent signaling.
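At its simplest, the signaling role of that web backend is just routing offer/answer messages between peers. Here is a minimal sketch of that routing logic – the class name and message shape are my own illustration, not any real product's protocol, and the WebSocket transport a Node.js server would actually use is abstracted into plain callbacks:

```javascript
// Hypothetical signaling router: forwards offer/answer envelopes between
// registered peers. Transport (e.g. WebSockets) is out of scope here.
class SignalingRouter {
  constructor() {
    this.peers = new Map(); // peerId -> delivery callback
  }
  register(peerId, deliver) {
    this.peers.set(peerId, deliver);
  }
  // Forward an envelope to its addressee; report whether it was delivered.
  route(msg) {
    const deliver = this.peers.get(msg.to);
    if (!deliver) return false; // unknown peer: drop it
    deliver(msg);
    return true;
  }
}

// Usage: alice sends bob a (placeholder) SDP offer.
const router = new SignalingRouter();
const bobInbox = [];
router.register("bob", (m) => bobInbox.push(m));
const delivered = router.route({
  type: "offer",
  from: "alice",
  to: "bob",
  sdp: "v=0 ...", // placeholder, not a real SDP blob
});
```

Note there is nothing SIP-specific in any of this – which is part of the point: a web developer can get signaling working without ever touching SIP.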
My own view?
- Browser side should be proprietary or XMPP. SIP should be used there only by those who are VoIP’ers and wish to connect to their existing SIP network as an additional means of access.
- VoIP backend should probably use SIP for the complex use cases. In the future, probably 2-3 years down the road, this will also switch to a WebRTC/REST combination for infrastructure access, but for now it should be SIP.
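To make the contrast concrete, here is the same SDP offer wrapped two ways: a small JSON envelope of the kind a proprietary or REST-style API would use, and a SIP INVITE carrying it as a body. Both message shapes are my own sketches, and the INVITE shows only a minimal header subset:

```javascript
// The same SDP offer carried two ways. Both shapes are illustrative,
// not taken from any real product's wire format.
const sdp = "v=0\r\no=- 1 1 IN IP4 0.0.0.0\r\ns=-\r\n"; // placeholder SDP

// 1) Proprietary/REST-style signaling: a JSON envelope a web developer
//    can version and extend at web pace.
function toJsonEnvelope(from, to, sdp) {
  return JSON.stringify({ v: 1, type: "offer", from, to, sdp });
}

// 2) SIP: the offer rides as the body of an INVITE request. A real
//    INVITE also needs Via, Max-Forwards, Call-ID, CSeq and Contact
//    headers; they are omitted here for brevity.
function toSipInvite(from, to, sdp) {
  return [
    `INVITE sip:${to} SIP/2.0`,
    `From: <sip:${from}>`,
    `To: <sip:${to}>`,
    "Content-Type: application/sdp",
    `Content-Length: ${Buffer.byteLength(sdp)}`,
    "",
    sdp,
  ].join("\r\n");
}

const jsonMsg = toJsonEnvelope("alice@example.com", "bob@example.com", sdp);
const sipMsg = toSipInvite("alice@example.com", "bob@example.com", sdp);
```

The JSON version is trivial for a web developer; the SIP version drags in a whole protocol stack – which is exactly why SIP tends to stay in the backend, where its interop value outweighs that cost.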
SIP started adding WebSockets support only after the introduction of WebRTC.
But then, companies are already using solutions other than SIP on the browser side – just check out some of my interviews. AddLive and TenHands went proprietary for signaling. Drum uses Jingle on the browser and then translates it to SIP for their backend.
And why is that? People use the tool that is best suited for them: either because it is what they already know, or because it is what made sense.
> interesting response; Justin on SIP’s relevance to #WebRTC-SIP dev cycle has been in years whereas cadence of web is weeks,days, even hours
>
> — trentjohnsen (@trentjohnsen) November 27, 2012
The tweet above makes a lot of sense to me: SIP and WebRTC have a different vibe to them. As more web developers adopt WebRTC (as opposed to VoIP developers, who are still the majority), we will see less use of SIP as the signaling protocol and a lot more proprietary solutions in its place.