Tell a friend about the last time you went to visit a family member, and you’ll notice that you’re moving your hands and arms while telling the story. Your body is gesturing without you even thinking about it.
Gesturing To Manipulate A Device
As designers, we’re now building in gestures as a way for users to interact with and manipulate interfaces. We’ve been designing interactions for keyboards, mice, trackballs, trackpads, pens, and fingers on touchscreens. And now we’re using more complicated hand, finger, and body movements as gestures for interacting with device interfaces. Just listing some of the finger and hand controls on a smartphone shows the variation (the short code sketch after the list shows how a few of these are wired up in practice):
- Touch
- Touch and drag
- Tap once with one finger
- Tap twice with one finger
- Tap once with two fingers
- Swipe with one finger
- Swipe with two fingers
- Swipe with three fingers
- Flick
- Pinch closed with two fingers
- Spread open with two fingers
- Rotate
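To make the list above concrete, here’s a minimal sketch, assuming an iOS app built with UIKit, of how a few of these gestures map onto the platform’s gesture recognizers. The view controller and handler names are hypothetical; the recognizer classes and their properties are standard UIKit.

```swift
import UIKit

// Hypothetical view controller showing how a few of the gestures listed
// above are registered with UIKit's gesture recognizers.
class GestureDemoViewController: UIViewController {
    override func viewDidLoad() {
        super.viewDidLoad()

        // "Tap twice with one finger"
        let doubleTap = UITapGestureRecognizer(target: self, action: #selector(handleDoubleTap))
        doubleTap.numberOfTapsRequired = 2
        doubleTap.numberOfTouchesRequired = 1
        view.addGestureRecognizer(doubleTap)

        // "Swipe with two fingers"
        let twoFingerSwipe = UISwipeGestureRecognizer(target: self, action: #selector(handleTwoFingerSwipe))
        twoFingerSwipe.numberOfTouchesRequired = 2
        twoFingerSwipe.direction = .left
        view.addGestureRecognizer(twoFingerSwipe)

        // "Pinch closed" and "spread open" with two fingers
        let pinch = UIPinchGestureRecognizer(target: self, action: #selector(handlePinch(_:)))
        view.addGestureRecognizer(pinch)

        // "Rotate"
        let rotate = UIRotationGestureRecognizer(target: self, action: #selector(handleRotation(_:)))
        view.addGestureRecognizer(rotate)
    }

    @objc func handleDoubleTap() {
        print("Double tap with one finger")
    }

    @objc func handleTwoFingerSwipe() {
        print("Two-finger swipe to the left")
    }

    @objc func handlePinch(_ gesture: UIPinchGestureRecognizer) {
        // scale < 1 means pinch closed; scale > 1 means spread open
        print("Pinch scale: \(gesture.scale)")
    }

    @objc func handleRotation(_ gesture: UIRotationGestureRecognizer) {
        print("Rotation in radians: \(gesture.rotation)")
    }
}
```

Even this small sketch shows the design issue: the platform treats a one-finger tap, a two-finger swipe, and a rotation as interchangeable inputs, so it’s up to the designer to decide which of them actually feel natural for a given task.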
Devices are getting better at understanding gestures. The latest technologies use radar to detect and interpret human gestures, so that people can control a device simply by gesturing near it.
It’s now possible for people to “grab” something on a screen by making a grabbing motion in the air, or to hold out a hand, palm facing out, to tell a robot to stop.
Why People Gesture
It’s often thought that people gesture while they talk in order to convey information. That’s true, but the latest theory is that the most important reason people gesture is that they need to gesture in order to think. It’s another example of embodied cognition.
Natural Gestures Versus Forced Gestures
While many gestures come naturally, others don’t. Moving a finger clockwise to signify that you want to rotate something is a natural gesture, as is holding up your hand with your palm out to tell someone or something to stop. Swiping with two fingers to mean one thing and swiping with three fingers to mean something else are not natural gestures.
Should people have to learn new gestures that aren’t natural to them in order to interact with devices? I don’t have a definitive answer to this question yet. On the one hand (embodied cognition metaphor!), people often learn new movements to interact with devices. Many people type quickly on a keyboard without thinking about it, yet that’s something they had to learn. On the other hand, if people have to read a manual to find out which gestures a device uses, maybe those gestures aren’t the best way to interact with it. Did the designer invest enough design time, energy, and knowledge in the interaction decisions when designing the device? Or, rather than taking the time up front to design the interface so that a limited set of natural gestures would cover all the needed tasks, did the designer optimize for the technology and just throw the human gestures needed to use it on top?
Takeaways
- People like using natural gestures rather than always having to type or touch.
- When you’re choosing gestures for people to use when interacting with a product, choose gestures that come naturally whenever possible.
- When you’re designing a product that will respond to human gestures, allow enough time in the project planning to decide on and test the gestures.