With the Rise of Bots, Linguists are Critical to User Experience


A robot lied to me.

And like the T-1000, it was relentless and felt no remorse.

I am presently at my parents-in-law’s house, and they have an older printer that they want to connect to their iPad. They also asked me for help with their VCR—you didn’t misread that, it’s an actual VCR—but one step at a time.

I googled options, and a particularly helpful site, which will remain nameless, popped up. Since millennials lose interest in websites that don't immediately provide assistance, a chat window quickly appeared in the right-hand corner of the screen.

A pleasant picture of a helpful man. 6,113 customers helped. An impressive background helping people in my exact situation.

Okay, I thought. Finding an answer to this problem is going to be a pain. So I’ll bite.

Him: Welcome! What’s going on with your Apple device?

Me: I need to hook up my iPad to a Pixma MP6000R.

Him: How old is your system?

Me: The iPad is 2014 and I’m not sure about the printer.

Him: Could you estimate how old the system is?

Wait. Something’s wrong here. And how is this guy typing so fast?

Me: 2012 maybe. Not sure.

Him: How long have you been having an issue with the system?

Okay. We’re done here.

The Problem

Was I naive to think that I was chatting with a real person? Probably. Is it wrong to use bots in this circumstance? Could we improve the user’s experience? Well, let’s talk about it.

First, it's okay to use a bot. But don't tell me it's a real person. That's lying. You just violated my trust as a user, and I'm no longer going to use your service. Ever.

Lean into it. Own it. Tell me it’s a robot. Have it bloop and bleep. Give it an animation like IBM’s Watson. I’m not afraid of robots. Okay. I’m a little afraid of robots.

Watson isn’t trying to convince me it’s human.

Second, let’s talk some Linguistics, shall we?

Conversation, as it turns out, is a big deal. There are a lot—A LOT—of unspoken rules that we learn about conversations in our native language. Granted, a chat is not a spoken conversation. But chats, too, come with an implicit set of conventions, largely based on speech patterns, that native speakers simply take for granted.

I don’t mean to go all Turing-testy here (trademark: Turing-testy), but let’s say (to appease me) that we show that the user is communicating with a bot. In order to make the exchange user-friendly, there are some basic principles that bots need to take into account:

Pauses. There’s a huge literature on the importance of pauses in conversation. This applies to chatting, as well. If you shoot language at me faster than a reasonable person can type, it feels fundamentally wrong. Use pauses to your advantage—don’t assume you need to get the message to me as quickly as possible.

Anaphora. Once humans have mentioned a noun in a conversation, they tend not to repeat it. In the conversation above, a human wouldn't keep saying 'system.' A human would say 'it' or introduce a new noun. Resolving this is a difficult problem in Natural Language Processing, but it's an extremely important one.
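
Both principles are cheap to approximate. Here's a minimal, hypothetical sketch (the noun list and typing speed are my own assumptions, not any real chatbot's API): the bot swaps a repeated noun for a pronoun, and can pause roughly as long as a person would need to type the reply.

```python
import time

TYPING_SPEED_CPS = 7  # rough human typing speed, characters per second

NOUNS = ("the system", "the printer", "the iPad")  # nouns this toy bot tracks

def humanlike_reply(message: str, mentioned: set, simulate_delay: bool = False) -> str:
    """Swap a second mention of a tracked noun for 'it', and optionally
    pause roughly as long as a person would need to type the reply."""
    for noun in NOUNS:
        if noun in message:
            if noun in mentioned:
                message = message.replace(noun, "it")  # anaphora: repeat mention -> pronoun
            else:
                mentioned.add(noun)  # first mention: remember it
    if simulate_delay:
        time.sleep(len(message) / TYPING_SPEED_CPS)  # pause like a human typist
    return message

seen: set = set()
print(humanlike_reply("How old is the system?", seen))
# second reply says "it" instead of repeating "the system"
print(humanlike_reply("How long have you been having an issue with the system?", seen))
```

Real anaphora resolution is far harder than a string swap, of course; this only shows how little it takes to stop a bot from robotically repeating 'the system' at a superhuman typing speed.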

Taking pauses and anaphora into account would go a long way toward improving this particular user's experience, and I might even use this service again. However, between lying to me and putting me through a very uncomfortable interaction, we're done here.

The Takeaway

User Experience is more than visual design. It's about humans' interactions with everyday products. Since bots are becoming an increasingly important part of that experience—maybe even influencing our votes—computer scientists and designers need to start looking to linguists to help resolve the problems cropping up alongside them.

After all, we are creatures of conversation. Let’s not forget that.


Prototyping: This Is Why We Have Nice Things

Innovation lives in the prototype.

At Inherent Games, we had a serious problem. We wanted to teach people the meaning of words—MEANING—but we didn’t want to use flashcards. That is, we didn’t want to show you the word apple in Spanish and then show you a picture of an apple. That’s not fun.

We wanted fun.

So, we came up with a pretty basic concept: let’s teach people word meanings through actual, physical networks. After all, words are ‘connected’ in several different ways. For example, some words are superordinate to other words (think fruit to apple). Some words are synonyms (think ball and sphere). Words can even be related in their physical relationship to each other, like whether or not they physically belong together in the real world (think chair and table).
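
Those relation types are easy to model as a small labeled graph. A minimal sketch, using only the example words from this paragraph (superordinate relations are really directional; direction is ignored here to keep the sketch small):

```python
from collections import defaultdict

# Edges labeled by relation type: 'superordinate', 'synonym', or 'physical'.
network = defaultdict(list)

def connect(a, b, relation):
    """Add an undirected, labeled edge between two words."""
    network[a].append((b, relation))
    network[b].append((a, relation))

connect("fruit", "apple", "superordinate")  # fruit is superordinate to apple
connect("ball", "sphere", "synonym")        # ball and sphere are synonyms
connect("chair", "table", "physical")       # chairs and tables belong together

def neighbors(word, relation=None):
    """Words connected to `word`, optionally filtered by relation type."""
    return [w for w, r in network[word] if relation is None or r == relation]

print(neighbors("apple"))              # ['fruit']
print(neighbors("chair", "physical"))  # ['table']
```

A handful of words and three edge types is all a playable semantic network needs to start with.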

Brilliant! Let’s make a game using semantic (meaning) networks!

We made a prototype and took it to a local high school.

Prototype I

Prototype I was a disaster.

We put together a network of words with different relationships to each other and expected players to memorize the network. We would then quiz them on their knowledge of the network. Because quizzes are fun, right?

Prototype I. Not our finest hour.

It bombed hard. Our users hated the prototype.

At the time, this made us sad. Now, it makes me very happy. We thought that we had invented something amazing, and that belief was immediately blown out of the water. This sparked our creativity.

Prototypes II-IV

For prototypes II-IV, we got more dynamic. We wanted to treat the networks like they were neural connections, so the design even included neurons and firing synapses.

The idea was that the user would be presented with three words. Two were random words, and one was a target word.

Players could press the target word and then ‘travel’ along a neural pathway, and their goal was to consistently choose the word that was connected to the previous word.

Prototypes II-IV. Still lacking.

Before you get all judgy, know this: it was more fun than the first prototype. It was dynamic, and it had promise. Of course, it also had all kinds of problems.

Our play testing sessions revealed that players had no idea where to begin. Three words were thrown at them: the target (the one we wanted them to press) and two random words. But they didn't know the words, so how on earth could they separate the target from the random words?

As I write this, it seems incredibly stupid that we didn't catch this as we were designing the game. But, when you're designing something, you simply don't see those huge gaps. YOU understand the game, so you assume everyone else does as well.

Prototyping helps eliminate this blindness.

We went through several iterations of this idea, and then we hit a wall. It wasn’t working.

We needed a big idea.

Nano Nano

Prototyping semantic networks gave rise to innovation: players should BUILD the networks.

Nano Nano. The player builds semantic networks.

So we re-thought everything, and more innovations came:

Players needed a translation button! It would be a crutch, and they could use it any time they were feeling overwhelmed (but it would only last a small amount of time).

Players needed a clear target word! What if the whole game was embedded in a story, and they needed to fill in words from the conversations?

Players needed to visually see different semantic connections (synonyms, superordinates, physical contact). So let’s color code them!

The result is Nano Nano, now up on the App Store.

While it still needs a few iterations, it is the result of lots of game prototyping.

Users informed our decisions, and prototyping gave rise to innovations that we NEVER would have come up with on our own.


This is why we have nice things.

Immediacy & Action

Let’s talk about the future.

In an earlier post, I talked about different representations of timelines. However, time can manifest itself in a myriad of ways, both in how a product ‘talks’ to its user and how a user describes a product. I want to consider both.

The Product Talking

Researchers have established that users conceptualize events and products through psychological distance. For example, we conceptualize tomorrow as physically closer than next year, we conceptualize friends as physically closer than strangers, and so on. This is important because our conceptions of psychological distance influence our behaviors.

For example, Chandran and Menon (2004) show that framing a negative future situation as psychologically closer caused people to consider that event to be more concrete and dangerous. People who read about heart disease as affecting people every day (as opposed to every year) reported that they found heart disease to be a greater problem and that they would take more steps to prevent themselves from developing heart disease.

In other words, presenting an event as closer in time modifies people's sense of immediacy and, consequently, the actions they will take for or against that event.

This is all well and good for day vs. year, but people do not usually talk about things in this way (these are called adverbials, and we can nerd out about them at another time). However, people DO frequently talk about future events. So how do they do it? Two main linguistic constructions:

Will Future (I’ll go with a burger)

Be Going to Future (I’m gonna go with a burger)

On the surface, the difference between these two constructions seems negligible.

However, my doctoral research shows that people think of events that are described with the Be Going to Future (I’m gonna go with a burger) as more immediate in time than events that are described with the Will Future.

I described an event to people and then asked them to estimate how long until that event would occur. There were two groups, and each read a different sentence.

Group 1: He’s gonna discover a planet (Be Going to Future).

Group 2: He will discover a planet (Will Future).

Participants estimated the time to the event on a 1-7 scale, with 1 as seconds from the present and 7 as years from the present.


As you can see from the graph, where the red dots indicate each group's average, participants reliably estimated the same event (the discovery of a planet) as more immediate (months to years in the future) when it was described with the Be Going to Future than when it was described with the Will Future (years in the future).
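
The analysis behind a graph like this can be sketched in a few lines. The ratings below are illustrative, invented numbers, not the study's data; the comparison itself is just a standard two-group test:

```python
from statistics import mean, stdev
from math import sqrt

# Illustrative 1-7 ratings (NOT the study data): 1 = seconds away, 7 = years away
be_going_to = [3, 4, 4, 5, 3, 4, 5, 4]   # "He's gonna discover a planet"
will_future = [5, 6, 6, 7, 5, 6, 6, 7]   # "He will discover a planet"

def welch_t(a, b):
    """Welch's t statistic for two independent samples."""
    va, vb = stdev(a) ** 2 / len(a), stdev(b) ** 2 / len(b)
    return (mean(a) - mean(b)) / sqrt(va + vb)

print(f"be-going-to mean: {mean(be_going_to):.2f}")  # lower = more immediate
print(f"will mean:        {mean(will_future):.2f}")
print(f"Welch's t:        {welch_t(be_going_to, will_future):.2f}")
```

A negative t here means the be-going-to group rated the event as closer in time, which is the pattern the graph shows.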

This is a big deal, because it suggests that the language a product uses could influence the user’s sense of immediacy and actions.

For example, say I’ve got a game on the app store. I’ve got a description for it, and I want to convince the user how much fun it’s gonna be.

My research suggests that using the form Be Going to (it’s gonna rock your world) as opposed to the Will Future (it will rock your world) will likely yield more downloads. That is, if it’s gonna rock my world, it’s more immediate and visceral.

Importantly, this would need to be tested, and it will certainly differ from context to context. But this is the perfect example of how cognitive linguistics can be directly applied to user experience research.

The User Talking

But that’s just for the product. What about the user?

Let's talk about a game again. I put an iPad in a kid's hands and ask her to play my newest game, Knights of Boulderia 2: Electric Boogaloo. I ask her to talk her way through the game. Don't just play. Talk. Explain to me what you're doing, what your intentions are. I record and transcribe.

If the kid uses the expression "I'm gonna go down this hall and blast that wizard" as opposed to "I'll go down that hall and blast that wizard," I may have strong linguistic evidence that the game is eliciting a strong, visceral response from the user. She feels a sense of immediacy and is therefore more engaged with the game.

I could even run a quantitative analysis of game junctures and the verb forms that she uses. This could set the stage for areas that require improvement and areas to capitalize on.
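
A first pass at that kind of tally can be as simple as a regex count. This is a crude sketch, not a full parse (it will miss negations, 'will' used as a noun, and so on), and the sample transcript is invented:

```python
import re

def future_form_counts(transcript: str) -> dict:
    """Tally be-going-to vs. will futures in a play-test transcript."""
    going = len(re.findall(r"\b(?:gonna|going to)\b", transcript, re.IGNORECASE))
    will = len(re.findall(r"\b(?:will|'ll)\b", transcript, re.IGNORECASE))
    return {"be_going_to": going, "will": will}

transcript = (
    "I'm gonna go down this hall and blast that wizard. "
    "Then I'll grab the key. I will open the door."
)
print(future_form_counts(transcript))  # {'be_going_to': 1, 'will': 2}
```

Pair counts like these with timestamps for game junctures, and you have the beginnings of a quantitative engagement measure.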

In short, user experience could be qualitatively and quantitatively described by the linguistic constructions that users employ.

Again, this is a hypothesis, and it would need to be tested—contextually! Still, my research shows that there's a good chance such research could yield strong results for actionable product changes.


This post covers the space-time topography of English speakers, or how they conceptualize future events differently depending on how those events are described to them or how they describe those events. In other words, language can tell us how users interpret products and how they are experiencing them.

These insights tell us more about the user experience.

There is so, so much exciting work to be done here.