Gesture and User Experience

My company, Inherent Games, has published the first-ever language-learning app to incorporate gesture into its lessons. Here’s a little of what we’ve learned.

Gestures + Technology = Difficult.

We naturally gesture when we talk. It’s a huge part of how we convey meaning to people. But no one taught us these gestures: we picked them up by watching other people, and we invent our own in the moment. Often we’re not even aware that we’re gesturing (unless we’re running for political office and spending the better part of the day practicing how to press a thumb against a fist to emphasize words).

Consequently, it’s a little strange for a device to tell you how to perform a particular gesture to trigger a function.

Our goal was to teach people Spanish verbs by having them perform those verbs with their devices, because research shows that learners retain words better when they perform a gesture while learning them. Thanks to the device’s built-in sensors (the gyroscope and accelerometer), users could hold the device like a steering wheel and ‘drive’ while learning conducir, the Spanish word for drive.
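To give a concrete sense of the mechanics, here’s a minimal sketch of how a steering-wheel motion might be read from those sensors, assuming iOS and Apple’s CoreMotion framework. This isn’t our production code; the class name, angle convention, and threshold are all illustrative.

```swift
import Foundation
import CoreMotion

// A minimal sketch of steering-wheel gesture detection.
// The threshold and angle convention are illustrative, not tuned values.
final class SteeringGestureDetector {
    private let motion = CMMotionManager()
    private let turnThreshold = 0.6 // radians of "wheel" rotation; hypothetical

    // Calls onTurn("left") or onTurn("right") when the device is tilted
    // like a steering wheel being turned.
    func start(onTurn: @escaping (String) -> Void) {
        guard motion.isDeviceMotionAvailable else { return }
        motion.deviceMotionUpdateInterval = 1.0 / 30.0
        motion.startDeviceMotionUpdates(to: .main) { data, _ in
            guard let gravity = data?.gravity else { return }
            // With the device held upright like a wheel, gravity lies roughly
            // in the screen plane; its angle in that plane tracks how far
            // the "wheel" has been turned from level.
            let angle = atan2(gravity.x, -gravity.y)
            if angle > self.turnThreshold {
                onTurn("right")
            } else if angle < -self.turnThreshold {
                onTurn("left")
            }
        }
    }

    func stop() {
        motion.stopDeviceMotionUpdates()
    }
}
```

The sensing itself is the easy part; as the rest of this post explains, the hard part was getting users to make the motion at all.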

But here’s the rub: prompting users to engage in full-device motion means convincing them to think about the world outside the device. This isn’t easy.

Contrast full-device motion with a screen swipe. Showing people how to swipe onscreen is easy: provide a prompt with an arrow or a hand icon, then show the result of the swipe.
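The contrast holds in code, too. Recognizing a swipe is nearly free because the platform does it for you; the UIKit sketch below is illustrative, not anything from our app.

```swift
import UIKit

// Recognizing an onscreen swipe takes almost no code: attach a
// recognizer and the platform reports the gesture when it happens.
final class SwipeDemoViewController: UIViewController {
    override func viewDidLoad() {
        super.viewDidLoad()
        let swipe = UISwipeGestureRecognizer(target: self,
                                             action: #selector(didSwipe))
        swipe.direction = .right
        view.addGestureRecognizer(swipe)
    }

    // Show the result of the gesture right away, as the onscreen prompt promised.
    @objc private func didSwipe() {
        print("Swiped right")
    }
}
```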

However, to get people to perform actual hand gestures, not just swipes, we needed them to think about the world outside their phones. As it turns out, people don’t do this intuitively.

The Process

All our prototypes were tested in a Denver-area high school.

Prototype 1 used a bare prompt: “Gesture like you’re driving!” In hindsight the outcome seems obvious, but this version crashed and burned. Badly. No one knew what we wanted them to do.

We changed it.

Prototype 2 showed a picture of the device—and the device alone—and the desired motion. Again, users were baffled.

We changed it again.

Prototype 3 showed an icon of a user moving the device, coupled with a prompt: “Move your device like the picture shows!” Users were hesitant, but they finally got it. “Oh! I move it like that!” They then performed the gesture and got the payoff (a character named Jumbo Nano clears the screen of bad guys).


[Small note: I still think the name Jumbo Nano is hilarious.]

Why Is This Important?

I happen to believe that full-body gesture is the next critical step in user interfaces. But there are two big problems:

1. Developing technology that registers gestures.

2. Convincing users that they want to perform gestures.

Lots of people are working on Problem 1, so I’ll leave them to that.

Problem 2, however, is a domain for linguists and cognitive scientists (as others, like Microsoft, are beginning to point out).

My doctoral research shows that English speakers are particularly adept at projecting their ego onto an external locus (like a figure in a picture). This process is called deixis. I think this is exactly what’s happening with devices: people project their egos onto the device and ignore their bodies beyond it. To get people to meaningfully interact with devices that register full-body gesture, we need to remind them that their bodies can be the center of the action.

In other words, the device can’t be the center of the action; it has to be a conversant. After all, we’re used to dealing with conversants, and we naturally gesture with them.

In short: it’s robot time.

Microsoft thinks so, too.
