&yet and WebRTC: An Interview With Henrik Joreteg

By Tsahi Levent-Levi

December 12, 2013  

When a company goes wild with WebRTC.

If there’s a company that I’ve found hard to define in the past year, it is definitely &yet. They seem to be doing everything and anything related to WebRTC, but each in its own unique way.

Henrik Joreteg

I’ve been trying to get a hold of Henrik Joreteg (@HenrikJoreteg), President and JavaScript Developer at &yet, and once I did, I didn’t let go until he gave me some answers.

What is &yet all about?

People. We build web software with a focus on people and communication. We’re primarily a consulting company in that we build applications for clients. But we also invest a lot of time experimenting with and building software that we believe should exist (such as Otalk), even when there isn’t necessarily a clear business purpose.

We invest heavily in open source, both by creating it ourselves and by explicitly supporting open source work financially: https://blog.andyet.com/2013/11/27/happy-thanksgiving

We also run several conferences; most famously, we’ve put on The Realtime Conference for the last three years.

Can you elaborate a bit on the WebRTC-related projects you’ve done?

We’ve worked with AT&T to create att.js, which is an open source library for making/receiving real phone calls in your browser using WebRTC.

We built one of the first public multi-user WebRTC apps, Talky (we used to call it conversat.io).

We built and maintain SimpleWebRTC, which is one of the most popular open source WebRTC libraries available (http://simplewebrtc.com).

We built jingle.js and stanza.io, which in combination allow you to build Jingle-compliant, interoperable web clients without needing to touch XML.

We’re working on OTalk (http://otalk.im/), an open source Skype alternative using XMPP, Jingle, and WebRTC to provide interoperability and federation with other servers.

We set up http://iswebrtcreadyyet.com/ to track WebRTC implementation progress for various browsers.

Why the rocket on talky.io?

Why not?! It’s fun. Fritzy, on our team, wrote it for fun a while back, and when we made Talky we realized it’d be a fun thing to include to give users something to do while waiting for other people to join.

What excites you about working in WebRTC?

Being able to build telecom-like services with JavaScript and deploy them to nearly 1,000,000,000 WebRTC-capable endpoints is completely amazing. The addition of native peer-to-peer networking to browser capabilities enables a whole new genre of applications on the web.

I can’t wait to see what people build with it as time goes on.

What signaling have you decided to integrate on top of WebRTC?

For Talky, the first WebRTC app where we built the whole stack ourselves, we just wrote our own custom message-passing server using socket.io. Nothing fancy, very ad hoc. It’s open sourced here: https://github.com/andyet/signalmaster.
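A stripped-down relay in that spirit might look something like the sketch below. This is illustrative only, not the actual signalmaster code; the port number and the 'join'/'message' event names are assumptions.

```js
// Minimal socket.io message relay, in the spirit of (but not copied from)
// signalmaster. The server just forwards opaque signaling payloads between
// clients in the same room; it never inspects the SDP or ICE candidates.
var io = require('socket.io').listen(8888);

io.sockets.on('connection', function (socket) {
  // Each conversation gets its own room; clients ask to join one by name.
  socket.on('join', function (room) {
    socket.join(room);
    socket.room = room;
  });

  // Relay any signaling message to everyone else in the same room.
  socket.on('message', function (payload) {
    socket.broadcast.to(socket.room).emit('message', payload);
  });
});
```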

Ultimately, we like standards because they allow for interoperability. To that end, Lance Stout, an XMPP expert and XSF council member on our team, built jingle.js and has demonstrated interoperability with other Jingle implementations.

Backend. What technologies and architecture are you using there?

We rely heavily on node.js for server code. For XMPP we prefer Prosody.

Where do you see WebRTC going in 2-5 years?

Given the rate of adoption thus far and the extremely low cost of use, we feel that within 5 years WebRTC will be the #1 way people make audio/video calls.

If you had one piece of advice for those thinking of adopting WebRTC, what would it be?

Just get your hands dirty. Tinker with something at a high level and just have fun with it. For example, a developer from Portland State University who didn’t have tons of JavaScript experience was able to build a system for remotely viewing and controlling a scanning electron microscope. If you need ideas, I gave a lot of suggestions in my “Making WebRTC Awesome” talk at CascadiaJS: http://www.youtube.com/watch?v=velmlLKmIA8

Given the opportunity, what would you change in WebRTC?

Overall, I really don’t have many complaints. There are a few API design choices that seem a bit odd to me. For example, when getting an incoming SDP, it’s a bit silly that you have to take that string and instantiate an RTCSessionDescription object before adding it to the peer connection. Things like that just feel a bit unnecessary.
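Concretely, the pattern being described looks something like this, using the callback-style API of the time (browser prefixes omitted; `pc` is an existing RTCPeerConnection and `signalingChannel` stands in for whatever transport carries the SDP):

```js
// An incoming SDP shows up as a plain string over the signaling channel,
// but the peer connection won't accept it until it's been re-wrapped in
// an RTCSessionDescription object.
signalingChannel.onmessage = function (event) {
  var incoming = JSON.parse(event.data); // e.g. { type: 'offer', sdp: '...' }
  pc.setRemoteDescription(
    new RTCSessionDescription(incoming),
    function () { /* success: now create and send an answer, etc. */ },
    function (err) { console.error('setRemoteDescription failed:', err); }
  );
};
```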

Rather than having to listen for and pass around many varying object types (since nearly all of them end up stringified anyway), it seems like we should just have a single handler, something like onsignalingmessage, that would emit all required messages in string form for direct consumption by the receiving peer connection. Of course, we can encapsulate this logic in libraries and do it however we want, but ultimately the spec could have provided a much simpler API for the base case.
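Roughly, the simplification being proposed might look like the following; onsignalingmessage and processSignalingMessage are hypothetical names here, not part of the WebRTC API as shipped:

```js
// Hypothetical: every signaling message the browser needs to send surfaces
// through a single handler, already in string form...
pc.onsignalingmessage = function (msg) {
  signalingChannel.send(msg); // ship the string as-is, no object juggling
};

// ...and the receiving side would feed the string straight back in.
signalingChannel.onmessage = function (event) {
  pc.processSignalingMessage(event.data); // hypothetical counterpart method
};
```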

On a higher level, I’d love to see screen sharing enabled everywhere by default, instead of behind a flag in Chrome and not available at all elsewhere.

I’d also love to have access to simple ways to handle file chunking and transfer over data channels.

I’m in full support of giving developers low-level access to these types of things, but ultimately having some sort of peerConnection.sendFile() method would have been kind of amazing.
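For a sense of the bookkeeping such a method would hide, here is a rough sketch of manual file chunking over a data channel; `channel` is assumed to be an open RTCDataChannel, and the 16 KB chunk size and end-of-file marker are arbitrary choices:

```js
// Slice a File into chunks and push them over an RTCDataChannel one at a
// time; roughly the work a built-in peerConnection.sendFile() would hide.
function sendFile(channel, file) {
  var chunkSize = 16 * 1024; // small chunks to stay under message size limits
  var offset = 0;
  var reader = new FileReader();

  reader.onload = function () {
    channel.send(reader.result); // one ArrayBuffer chunk
    offset += chunkSize;
    if (offset < file.size) {
      readNextChunk();
    } else {
      // Arbitrary end-of-file marker so the receiver knows to reassemble.
      channel.send(JSON.stringify({ done: true, name: file.name, size: file.size }));
    }
  };

  function readNextChunk() {
    reader.readAsArrayBuffer(file.slice(offset, offset + chunkSize));
  }

  readNextChunk();
}
```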

What’s next for &yet?

Lots! We’re starting to offer a lot more technical training, beginning with JavaScript training built around our recent book launch: http://humanjavascript.com/. We’re currently working on and evaluating some really exciting client projects for the upcoming year, and we’re working toward a product launch. To keep an eye on us, follow @andyet on Twitter.

The interviews are intended to give different viewpoints than my own – you can read more WebRTC interviews.

