Augmented Reality getting a WebRTC boost.
One of the things that fascinates me about the web is when people take a tool and just use it in ways that aren’t obvious. BuildAR is doing just that by taking WebRTC into Augmented Reality and devices like the Oculus Rift.
Rob Manson, CEO of BuildAR, should not be new to my readers. He is also the author of Getting Started with WebRTC. Now that BuildAR is planning version 2 of its platform, funded through Kickstarter, I thought it would be a good time to interview him about BuildAR and its future plans.
What is BuildAR all about?
If you haven’t heard of buildAR.com before, you can think of us as “WordPress for the Real World”. We aim to make creating AR “as easy as adding a blog post or sending a tweet”. With our platform you can simply embed your digital content into the physical world around you by linking it to images, locations and more. Our latest work is all built upon open web standards such as WebRTC, Web Audio and WebGL.
What excites you about working in WebRTC?
This is the one standard that has really opened up the whole potential of the Augmented Web. It opened up camera and microphone access for the browser, so we could finally start bringing Augmented Reality to the Web.
Then, when you combine this with WebRTC DataChannels and the ability to stream video and audio to the cloud for real-time analysis, you start to see the amazing opportunities this has unlocked for us all.
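To make the camera-access point concrete, here is a minimal sketch using the standard getUserMedia API. The constraints are spec-standard; the "ar-view" element id is a hypothetical placeholder for wherever the AR view would live in a page.

```javascript
// Build the capture constraints for an AR view: prefer the
// rear-facing camera and also request the microphone.
function arCaptureConstraints() {
  return {
    video: { facingMode: "environment" }, // rear camera suits AR overlays
    audio: true
  };
}

// In a browser, feed the resulting stream into the page.
// (Guarded so the sketch is inert outside a browser.)
if (typeof navigator !== "undefined" && navigator.mediaDevices) {
  navigator.mediaDevices.getUserMedia(arCaptureConstraints())
    .then(function (stream) {
      // "ar-view" is a hypothetical <video> element id.
      document.getElementById("ar-view").srcObject = stream;
    })
    .catch(function (err) {
      console.error("Camera/microphone access denied:", err);
    });
}
```

From there the frames can be drawn to a canvas for analysis, or streamed onward as Rob describes.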
Your new Kickstarter initiative. What are you planning next for BuildAR?
We’ve already released an open source library that makes it easy to integrate WebRTC and all the other related web standards to create what we are calling Augmented Web Experiences. This has taken us over 5 years of R&D and has been an amazingly challenging process.
Now, we’re asking people to back our project and help fund further development of this open source project. As part of this we’re developing a new open source Natural Feature Tracking library that will let you use WebRTC based video to recognise and track natural images like photos, posters and more. This will hopefully lead on to all kinds of object recognition and 3D tracking.
We’re also using this project to extend our buildAR platform to make it as easy as possible to create these Augmented Web Experiences. We want to make it so you can use any device to create this type of content and interactivity anywhere. Part of this is bringing the community in to help us refine and shape the user experience.
How does WebRTC fit into these plans?
We’re already contributing to defining an extension to the getUserMedia API so it can support depth-based cameras. If you look at new devices like Project Tango from Google, it’s clear that 3D depth sensing will be a standard part of smart and wearable devices in the very near future.
WebRTC DataChannels and PeerConnection streams are also the key to letting us link together different types of devices to create whole new types of interactions. Imagine using your mobile as a controller for the apps running on your Google Glass while a Kinect sensor you’re standing in front of captures your body movements. The new possibilities are endless.
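A rough sketch of that controller scenario, assuming a simple JSON message format invented purely for illustration; only RTCPeerConnection and createDataChannel are standard WebRTC APIs here, and signalling is omitted.

```javascript
// Hypothetical wire format for controller events sent from a phone
// to another device over a DataChannel. The shape of the message
// is an assumption, not part of any spec.
function encodeControlMessage(type, payload) {
  return JSON.stringify({ type: type, payload: payload, t: Date.now() });
}

// In a browser, open a DataChannel and send events over it.
// (Guarded so the sketch is inert outside a browser; a real app
// would also need signalling to exchange offers and candidates.)
if (typeof RTCPeerConnection !== "undefined") {
  const pc = new RTCPeerConnection();
  const channel = pc.createDataChannel("controller");
  channel.onopen = function () {
    channel.send(encodeControlMessage("tap", { x: 0.5, y: 0.5 }));
  };
}
```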
Where do you see WebRTC going in 2-5 years?
Well the first thing is to get it to 1.0 and get all of the browser vendors on board. We’re about to launch a demo that will hopefully create some positive pressure and incentive for Apple to jump on board more quickly. But all of this should definitely be resolved within the next 12 months.
Beyond that there will be a lot more evolution of the WebRTC APIs as the 1.0 API has a few ugly lumps that need to be resolved.
Over the 2-5 year timeframe I think WebRTC will have some really big strategic impacts on enterprises all over the world. It will move them from managing boxes and networks into a world where they are focused on streams. Your teams will be creating an ever-increasing number of streams of media and data, and your core business question becomes “How do we extract value from these streams?”. I think people will really need to focus on intelligent services that process and analyse these streams to turn raw data into useful, actionable intelligence. This is a whole new computing environment.
If you had one piece of advice for those thinking of adopting WebRTC, what would it be?
Start. It works now, but there’s a lot to be learned from your first implementations and from watching how it can really be used. Nothing beats actually getting started and doing some real-world implementations. You don’t have to start by changing the way you handle all of your communications… but starting will definitely change the way you think about all of them.
Given the opportunity, what would you change in WebRTC?
The issues around SDP munging are probably the biggest, and I’m sure they will be resolved in 2.0. The other things I really want to see are the ability to pass pre-processed or programmatically generated streams into a PeerConnection, greater control over camera and microphone selection, and, as I mentioned above, support for depth cameras.
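On the camera-selection point, the standard enumerateDevices API already gives some control today: list the devices, pick one, then request it by deviceId. The label-matching heuristic below is just an illustrative assumption, not part of the spec.

```javascript
// Pick a camera from an enumerateDevices() result, preferring one
// whose label contains the hint (e.g. "back"); fall back to the
// first camera, or null if there are none.
function pickCamera(devices, labelHint) {
  const cams = devices.filter(function (d) { return d.kind === "videoinput"; });
  const match = cams.find(function (d) {
    return d.label.toLowerCase().indexOf(labelHint) !== -1;
  });
  return match || cams[0] || null;
}

// In a browser, request exactly the chosen camera via its deviceId.
// (Guarded so the sketch is inert outside a browser.)
if (typeof navigator !== "undefined" && navigator.mediaDevices) {
  navigator.mediaDevices.enumerateDevices().then(function (devices) {
    const cam = pickCamera(devices, "back");
    if (cam) {
      navigator.mediaDevices.getUserMedia({
        video: { deviceId: { exact: cam.deviceId } }
      });
    }
  });
}
```

Note that device labels are typically only populated after the user has granted camera permission at least once.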
What’s next for BuildAR?
Working with lots of people to build great working examples of Augmented Web Experiences that are supported by a strong business case. We’ve already shown that we can unlock some amazing new user experiences. Now we want to focus on showing some real utility and providing some real commercial benefit. If you have ideas like this then we’d love to discuss them with you.
We’d also love for you to back our project to help us continue this work.
These interviews are intended to give viewpoints different from my own – you can read more WebRTC interviews.