Gesture is taking center stage these days, with a focus on the living room. Does it make any sense?
Not sure about you, but for the most part I find a gesture interface rather useless – at least as a control mechanism. Several years ago, I had the opportunity to attend CES, the large consumer electronics show. Those were early days for gesture control – before the iPhone was introduced to the world.
I remember one of the large TV manufacturers showing off a gesture interface for a television. To demo it, they cleared a large area in front of the TV so that bystanders wouldn't confuse the gesture recognition algorithm. You had to stand in front of the television, at a very specific position, and use large hand gestures to control it. It was embarrassing.
Gesture has come a long way since then. PrimeSense came out with its 3D camera, allowing a device to "sniff" the room and learn about depth. That technology is now part of the Xbox Kinect – a gaming sensor that gives great pleasure to those who play with it.
But can it be used purely for control on a daily basis? Should we be using it to control a television instead of a smartphone or a remote control? Can it replace the good old keyboard?
I don’t think so.
The technology might be there, but using it for daily control is more complex than that – and it lacks the physical feedback you get from touching things.
I had skin cancer half a lifetime ago. Since then I've had minor surgeries every year or two – more like going to the local butcher to cut off another nevus. A few years ago, I had a nevus on my face that needed removing. My skin doctor sent me to one of the best surgeons he knew. He chose well – a year after that surgery, even I can't notice the spot.
There’s a moral to this story – bear with me…
Last year I went for my yearly flu vaccine. In the waiting room I watched the television, and there was my favorite surgeon – the one who took care of my face. He was asked about technology in healthcare, and specifically in surgery. His main thesis was that technology progresses too fast: it gives us insights and capabilities we didn't have before, like real-time feedback on what is being done and why – the simple act of adding a camera and a monitor to the operating room means surgeons can do their job more accurately.
The only problem, he said, is that the surgeon – who spent over 14 years in school learning his profession while looking at his hands as he cut and stitched – is now asked to raise his head to a monitor and not look at his hands. This small change in tactile feedback is an obstacle for 90% of existing surgeons. The younger ones, those now in school, are learning with the new techniques, but they are too green; and the experienced ones can't get used to it.
I guess gesture control is going to be similar: it changes the way we interact with our environment, and it offers no natural tactile or visual feedback. This isn't the next step after the iPhone's touch technology. It is something different.
It will have its niche markets, but it won't become mainstream or take over our TVs anytime soon.