A robot lied to me.
And like the T-1000, it was relentless and had no remorse.
I am presently at my parents-in-law’s house, and they have an older printer that they want to connect to their iPad. They also asked me for help with their VCR—you didn’t misread that, it’s an actual VCR—but one step at a time.
I googled options, and a particularly helpful site came up, but it will remain nameless. Since millennials lose interest in websites that don’t immediately provide assistance, a chat window quickly popped up in the right-hand corner of the screen.
A pleasant picture of a helpful man. 6,113 customers helped. An impressive background helping people in my exact situation.
Okay, I thought. Finding an answer to this problem is going to be a pain. So I’ll bite.
Him: Welcome! What’s going on with your Apple device?
Me: I need to hook up my iPad to a Pixma MP6000R.
Him: How old is your system?
Me: The iPad is 2014 and I’m not sure about the printer.
Him: Could you estimate how old the system is?
Wait. Something’s wrong here. And how is this guy typing so fast?
Me: 2012 maybe. Not sure.
Him: How long have you been having an issue with the system?
Okay. We’re done here.
Was I naive to think that I was chatting with a real person? Probably. Is it wrong to use bots in this circumstance? Could we improve the user’s experience? Well, let’s talk about it.
First, it’s okay to use a bot. But don’t tell me it’s a real person. That’s lying. You just violated my trust as a user, and I’m no longer going to use your service. Ever.
Lean into it. Own it. Tell me it’s a robot. Have it bloop and bleep. Give it an animation like IBM’s Watson. I’m not afraid of robots. Okay. I’m a little afraid of robots.
Second, let’s talk some linguistics, shall we?
Conversation, as it turns out, is a big deal. There are a lot—A LOT—of unspoken rules that we learn about conversations in our native language. Granted, a chat is not a spoken conversation. But chats, too, come with an implicit set of conventions, largely based on speech patterns, that native speakers simply take for granted.
I don’t mean to go all Turing-testy here (trademark: Turing-testy), but let’s say (to appease me) that we show that the user is communicating with a bot. In order to make the exchange user-friendly, there are some basic principles that bots need to take into account:
Pauses. There’s a huge literature on the importance of pauses in conversation. This applies to chatting, as well. If you shoot language at me faster than a reasonable person can type, it feels fundamentally wrong. Use pauses to your advantage—don’t assume you need to get the message to me as quickly as possible.
Anaphora. Once humans have mentioned a noun for the first time in a conversation, they tend to stop repeating it. In the conversation above, a human wouldn’t keep saying the word ‘system.’ A human would say ‘it’or introduce a new noun. This is a difficult problem in Natural Language Processing, but it’s an extremely important one.
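Neither principle is exotic to prototype, either. Here’s a minimal Python sketch of both ideas: a reply delay proportional to how long the message would take a human to type, and a naive pronoun swap once a noun has already come up. The noun list, the typing rate, and the phrase-matching heuristic are all illustrative assumptions, not how any production bot actually works.

```python
import time

# Rough human typing speed: ~40 words per minute, i.e. about 0.3 s per word.
# (Illustrative number, not an empirical claim.)
SECONDS_PER_WORD = 0.3

def typing_delay(message):
    """Delay proportional to message length, so replies arrive at a human pace."""
    return len(message.split()) * SECONDS_PER_WORD

def pronominalize(message, mentioned, nouns=("system",)):
    """Swap 'the <noun>' for 'it' once that noun has already been mentioned.

    A deliberately naive anaphora heuristic: real NLP systems resolve
    reference with far more context than simple string matching.
    """
    out = message
    for noun in nouns:
        phrase = "the " + noun
        if noun in mentioned:
            out = out.replace(phrase, "it").replace(phrase.capitalize(), "It")
        elif noun in out.lower():
            mentioned.add(noun)  # first mention: leave it, remember it
    return out

def send(message, mentioned):
    """Apply both principles before 'sending' a reply."""
    reply = pronominalize(message, mentioned)
    time.sleep(typing_delay(reply))  # pause: don't out-type a human
    return reply
```

With this sketch, the exchange above would read the way a person’s would: the first “How old is the system?” goes out untouched, but a follow-up like “Could you estimate how old the system is?” becomes “Could you estimate how old it is?” — and arrives after a believable pause.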
Taking pauses and anaphora into account would go a long way toward improving this particular user’s experience, and I might even use this service again. However, between lying to me and putting me through a very uncomfortable interaction, we’re done here.
User Experience is more than visual design. It’s about humans’ interactions with everyday products. Since bots are becoming an increasingly important part of that experience—maybe even in influencing our votes—computer scientists and designers need to start looking to linguists to help resolve the problems cropping up along the way.
After all, we are creatures of conversation. Let’s not forget that.