After reading Mr. Norman’s commentary on natural gestures and technological interfaces, I agreed with some of his points about how there are still a few things that need to be corrected, especially when he pointed out that the same gesture doesn’t always mean the same thing. I’ve had similar experiences with hyper-sensitive touchscreens where I accidentally placed my finger on the screen, or my sleeve brushed up against it, and the screen thought I tapped twice or swiped, which then sent me to another screen I never wanted. This has happened to me on MetroCard machines, while using the GPS on an iPhone, and when grabbing my iPad and accidentally moving the screen. True, it’s my own fault for not paying attention, but when one is in a hurry, one just expects things to work without all the extra fuss, accidental or not. Interfaces like these were meant to make things efficient, but they can’t do that while they still produce errors.
Mr. Norman mentions that we need more time to perfect the natural user interface experience, and I agree. Technology’s ability to mimic human gestures perfectly has its limitations… for now, anyway. As with everything that is NOT natural in this world (i.e. human inventions, technology, software), it takes TIME to develop precision. And in the context of natural user interfaces, that means a lot of work: researchers to identify errors, the right developers to correct flaws, the right materials to build things properly, and the right funding for support, because none of these resources come cheap.
But I don’t think we’re too far off today; timing is just the key.
Take, for example, James Cameron and his film Avatar. He had the story well thought out already, but had to wait 12 years for the right technology to come along. The techniques used to animate the characters were interaction-driven: each human gesture, facial expression, and other characteristic idiosyncrasy was captured to make the film more believable, and it made shooting actors in a fantasy look and location far more convenient (no more traveling to sci-fi-looking locations and lugging expensive equipment). See the enclosed video detailing the process: Animation techniques for Avatar.
The one technology I’m waiting to become mainstream is human interaction with information without a keyboard, a mechanical pen, or a mouse… the kind seen in Minority Report, where Tom Cruise’s character uses high-tech gloves to search through a plethora of information.
To my surprise, a real company called Oblong Industries is actually developing this gestural interface concept with the use of gloves and a virtual dashboard. They call it g‑speak, a spatial operating environment (SOE). It would be useful in all sorts of industries (e.g. military, government, corporate, medical, law enforcement, home security).
Tech gloves actually do make mimicking human gestures for computers a little faster and easier, unlike the controls on video game consoles or other electronic kiosks out there, which tend to be cumbersome, overly sensitive, and bulky. Gloves allow more actual grasping and coverage, or so I believe anyway. Think of gloves in general: they help you interact properly with things while protecting your skin (e.g. pulling a hot pan from the oven, gardening, cleaning gross bathrooms, or performing surgery and other medical procedures). Gloves are well suited to capturing human gestures, so why not use them in interfaces to simulate actual gestures?
For an actual explanation of the SOE technology, view the attached video, where they use high-tech gloves to manipulate objects on a virtual screen: Oblong Industries
I think Mr. Norman would nod at the precision of the glove concept in terms of natural user interfaces, though I’m sure he’d come up with something to contradict its behavioral responses. But I believe we are closer to these gestural interfaces than we think, perfected or not.
Comments!