How do you redefine a user interface? What steps do you need to take to change the way people interact with technology? It's not just a matter of developing the right tools. You also have to take into account the way people want to use gadgets. The most technologically advanced interface means nothing if it just doesn't feel right when you're taking it out for a spin.
But we're entering an era in which we need to revisit user interfaces. Computers pop up in more gadgets and applications each year. Within a decade, even the most basic appliance might house some kind of computer. And with a growing emphasis on 3-D video, taking advantage of that third dimension calls for an innovative approach to interfaces.
[The ZCam camera from 3DV Systems was a motion-sensitive predecessor to today's 3-D gesture systems.]
A 3-D gesture system is one way to tackle this challenge. At its most basic level, a 3-D gesture system interprets motions within a physical space as commands. Applications for such technology fall across the spectrum of computing from video games to data management. But creating a workable 3-D gesture system presents a host of challenges.
Several engineers have tried to create systems that can interpret our movements as computer commands. But what kinds of applications will these systems make possible? And what kinds of components are necessary to put together a 3-D gesture system?
The Dimensions of a 3-D Gesture System
You can divide the parts of a 3-D gesture system into two main categories: hardware and software. Together, these elements interpret your movements and translate them into commands. You might be able to blast zombies in a video game, navigate menus while looking for the next blockbuster to watch on movie night or even get to work on the next great American novel just by moving around.

On the hardware side, you'll want a camera system, a computer and a display. The camera system may have additional elements built in to sense depth -- it's common to use an infrared projector and an infrared sensor. The computer takes the data gathered by the camera and sensors, crunches the numbers and pushes the image to the display so that you can see the results. The display presents the data in a way that lets you judge how far you need to move to manipulate what's going on.
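To make that division of labor concrete, here's a minimal sketch of the capture-interpret-display loop in Python. Everything in it is hypothetical -- the DepthCamera and Display classes and the one-line "push" rule are illustrative stand-ins, not any real camera vendor's API -- but the flow mirrors the hardware chain described above.

```python
# A minimal sketch of the sensor -> computer -> display pipeline.
# All names here are hypothetical stand-ins, not a real vendor's API.
import random

class DepthCamera:
    """Stands in for a depth sensor (infrared projector plus infrared sensor)."""
    def read_frame(self):
        # A real camera returns a grid of per-pixel distances;
        # here we fake a tiny 4x4 depth map in meters.
        return [[random.uniform(0.5, 1.5) for _ in range(4)] for _ in range(4)]

class Display:
    """Stands in for whatever screen shows the user the result."""
    def show(self, message):
        print(message)

def interpret(frame):
    """The 'computer' step: crunch the depth data into a candidate gesture.

    Toy rule: if the average distance drops below 1 meter, treat it as
    the user pushing a hand toward the sensor.
    """
    average = sum(sum(row) for row in frame) / (len(frame) * len(frame[0]))
    return "push" if average < 1.0 else None

camera, display = DepthCamera(), Display()
for _ in range(5):  # a few passes through the capture/interpret/display loop
    gesture = interpret(camera.read_frame())
    if gesture:
        display.show("Recognized gesture: " + gesture)
```

In a real system, the interpret step would be a full gesture-recognition model rather than a one-line distance check, but the loop itself stays the same.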
On the software side, you'll need applications that convert the information gathered by the hardware into meaningful results. Not every movement will become a command -- sometimes you might make an accidental motion that the computer mistakes for an instruction. To prevent unintended commands, 3-D gesture software relies on error-correction algorithms.
Why worry about error correction? A gesture may need to meet a threshold of confidence before the software will register it as a command. Otherwise, using the system could be an exercise in frustration. Imagine that you're working on an important three-dimensional drawing, moving your hands to change its size and shape. Suddenly, you sneeze, and the delicate work you've done so far is ruined as your involuntary motion dramatically distorts the drawing.
Error-correction algorithms require your actions to match pre-assigned gestures within a certain level of confidence before the action is carried out. If the software detects that your movements don't meet the required confidence level, it can ignore those motions rather than translate them into commands. This also means you may have to perform a gesture in a very specific way before the system will recognize it.
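In code, that gate might look something like the sketch below. The gesture name, the scores and the 0.9 threshold are all invented for illustration; the point is simply that a candidate match gets discarded unless its confidence score clears the bar.

```python
# Hypothetical confidence gate: a recognized gesture only becomes a
# command if its match score clears a preset threshold.
CONFIDENCE_THRESHOLD = 0.9  # illustrative value, not taken from any real system

def gate(candidate, confidence, threshold=CONFIDENCE_THRESHOLD):
    """Pass the gesture through as a command, or drop it if confidence is too low."""
    return candidate if confidence >= threshold else None

print(gate("resize_drawing", 0.42))  # None -- a sneeze-induced twitch is ignored
print(gate("resize_drawing", 0.95))  # "resize_drawing" -- deliberate motion passes
```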
Some commands may be less sensitive than others, and these can get a much lower threshold of confidence. For example, flipping between images by moving your hand to the left or right isn't really a mission-critical command. With a lower confidence requirement, the system accepts such commands more readily.
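Extending the same hypothetical sketch, a system could keep a per-gesture threshold table, so a destructive command demands near-certainty while a casual one slips through more easily. The gestures and numbers below are again made up for illustration.

```python
# Hypothetical per-gesture thresholds: risky commands demand near-certainty,
# casual ones do not.
THRESHOLDS = {
    "swipe_left": 0.6,       # low stakes: just flips to the next image
    "swipe_right": 0.6,
    "delete_drawing": 0.95,  # high stakes: should be hard to trigger by accident
}

def gate_by_gesture(candidate, confidence):
    return candidate if confidence >= THRESHOLDS.get(candidate, 0.9) else None

print(gate_by_gesture("swipe_left", 0.7))      # accepted at the relaxed bar
print(gate_by_gesture("delete_drawing", 0.7))  # rejected; needs at least 0.95
```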
Read more: http://computer.howstuffworks.com/3-d-gestures2.htm