Mechanomorphs and the politeness of machines
Yesterday, Microsoft announced Tay, its new chatbot for US audiences. This morning, Tay was taken offline, apparently in response to several racist and offensive tweets in which it endorsed genocide, concentration camps, and more. Tay was “designed to engage and entertain people where they connect with each other online through casual and playful conversation”. Tay would converse with you on Twitter, Kik, or GroupMe and was aimed at 18-to-24-year-olds. According to the promotional site, “The more you chat with Tay the smarter she gets”. A mere 24 hours later, I’m not sure “smarter” is the adjective anyone would choose.
Tay serves as a perfect illustration of what can happen when bots engage with humans in conversation. The naive response to these kinds of incidents is, “Well, design your bot not to be offensive”. And indeed, much of what has happened with Tay is the result of a lack of thoughtful design. But this situation brings up much larger issues about the inability of bots to negotiate the complexity of human conversation. I would go a step beyond “make your bot polite” and say that we should stop trying to make bots act like people. We need to develop new models for how we converse with machines that move beyond human verisimilitude towards something that is more in keeping with their abilities. Rather than humanoid bots, we should design mechanomorphs: bots that can create a different set of expectations around how we converse with them. If we make bots more machine-like, our conversations with them can have new boundaries, creating a new space less fraught with pre-existing social norms.
Social expectations in human conversation
To speak to machines, we’ve previously had to speak their language or communicate through graphical interfaces, but now they’re speaking ours (sort of). The shift to natural language interactions means that machines are now participating in a space formerly occupied solely by humans. As a result, many bots have been designed to try to mimic human conversational style as closely as possible, to pass the Turing test, as it were.
However, when we model our conversations with bots on human interactions, we bring to bear all of our expectations about how people navigate social nuance — expectations that those bots will almost certainly fail to live up to.
Some of the key expectations we have about conversation revolve around politeness and adaptation to context. At its core, politeness might be defined as an ability to consider the social space you’re in and adapt accordingly. For example, my New York City-based Jewish family members often talk in a conversational pile-up, overlapping and interjecting, adding to each other’s thoughts. In this context, interruption is often friendly and shows enthusiasm, rather than being seen as rude. However, when I’m in different social contexts, I’ve learned to temper my interjections and adopt a more measured conversational style. In other words, I adapt to the cultural norms of the conversation in which I’m participating. This is something we all do every day, as we shift our interactions depending on whether we’re with strangers, colleagues, friends, or lovers. And anyone who has traveled outside of their home geography knows that what is considered polite or rude, appropriate or inappropriate, varies greatly in different locales as well.
Politeness has this contextual adaptation at its core, but at what point is adaptation to a cultural context unacceptable? In my own social circles, racist, sexist, or homophobic speech is considered beyond the pale. Unfortunately, that is not a universal truth. If I find myself in a social context where that kind of language is an acceptable part of the discussion, my desire to uphold my own ethics and values outweighs the need for politeness. We each perform these kinds of nuanced, complex social calculations all the time.
Considering the complexity of these negotiations and the limitations of computing, what is the likelihood that we could create bots that satisfyingly emulate human conversational behavior? Is it even possible to make bots that are polite in this way: bots that can read and adapt to different conversational contexts and norms? If bots are to participate effectively in social contexts, they need to learn the lingo and be able to mimic it. But what if that lingo is offensive? How do we teach a bot where to draw the line between situations where it should aspire to fit in and situations where it should assert a set of values that may be in opposition to the conversational norm?
It is important to consider these questions as part of the design process, explicitly authoring the values and interactions that bots will express. Bot builders should be clear and thoughtful about the cultural values being embedded into their bots, and should carefully consider any potential harm they might cause. However, even very intentionally designed bots will almost certainly fall short of their creators’ intentions as they attempt to navigate a complex and nuanced human space. And it is that gap that creates many of the social problems that arise with bots: they are designed to act human-like, which reinforces our expectations and sets us up for frustration, disappointment, or worse when they fail to live up to them.
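To make the point concrete, here is a minimal, purely hypothetical sketch (in Python; the function names and the word list are my own illustrative assumptions, not anything Microsoft shipped) of what “explicitly authoring values” might look like in code: screening messages against an author-defined blocklist before a bot is allowed to learn from them.

```python
# Purely hypothetical sketch: an author-defined "values filter" applied before a
# bot learns from user messages. The names and blocklist are illustrative
# assumptions; this is not how Tay was built.
import re

# Values the bot's authors choose to encode. A real deployment would need far
# more than a keyword list: misspellings, slang, irony, and context all evade it.
BLOCKED_TERMS = {"genocide", "some_slur"}

def violates_authored_values(message: str) -> bool:
    """Return True if the message contains any explicitly blocked term."""
    words = set(re.findall(r"[a-z']+", message.lower()))
    return not words.isdisjoint(BLOCKED_TERMS)

def learn_from(message: str, memory: list[str]) -> None:
    """Add a message to the bot's imitation memory only if it passes the filter."""
    if not violates_authored_values(message):
        memory.append(message)

if __name__ == "__main__":
    memory: list[str] = []
    for msg in ["hello there!", "i think genocide is good"]:
        learn_from(msg, memory)
    print(memory)  # only the benign message survives
```

Even as a sketch it makes the gap obvious: a keyword list catches only the crudest violations, and everything described above (slang, irony, context, shifting norms) slips straight past it.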
From humanoid to mechanomorphic
But what if bots conversed with us in a new way that is uniquely bot-like? At the moment, we anthropomorphize bots because we’ve never really had any non-human entities occupying conversational space with us. Could we create a new set of expectations and aesthetics that might ameliorate these social challenges and create new conversational possibilities? How might a machine express itself in ways that set up new kinds of expectations for our interactions with it?
I believe that the trend of trying to make bot conversations human-like is simply a transitional phase as we adapt to what it means to talk to machines. We have seen this happen historically: when new technologies are introduced, they rely heavily on the metaphors of old technology until we form appropriately new mental models (like the use of skeuomorphs in early digital design). I believe we’re going through the same growing pains with conversational interactions, where we will eventually form new constructs for conversations with bots that differ from our expectations of conversations with humans. I think of this as mechanomorphism instead of anthropomorphism: a future where, rather than passing the Turing test or even being “as smart as a puppy”, we will expect these entities to be “as smart as a bot”. This shift may allow for our conversations with bots to be situated within a different space, with a different set of expectations and constraints, ones which may help us create bots that can more accurately and responsibly express their authors’ intentions.
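To make that more tangible, here is one small, speculative sketch (again in Python; the class and its fields are invented for illustration, not a proposal for any particular product) of a reply style that is deliberately bot-like rather than human-like: the machine foregrounds its confidence and its source instead of dressing the answer up as small talk.

```python
# Speculative sketch of a "mechanomorphic" reply: the bot presents itself as a
# machine, exposing its confidence and the source of its answer, rather than
# imitating human conversation. All names and values here are invented.
from dataclasses import dataclass

@dataclass
class BotReply:
    text: str          # the answer itself
    confidence: float  # how sure the bot is, from 0.0 to 1.0
    source: str        # where the answer came from

    def render(self) -> str:
        # Explicitly bot-like framing sets "as smart as a bot" expectations.
        return f"[bot | confidence {self.confidence:.0%} | source: {self.source}] {self.text}"

if __name__ == "__main__":
    reply = BotReply(text="The museum opens at 10am.",
                     confidence=0.62,
                     source="venue website, cached 3 days ago")
    print(reply.render())
```

A reply framed this way invites us to treat the answer as machine output to be checked, rather than a social utterance to be judged by human norms, which is exactly the shift in expectations I am arguing for.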
Cross-posted at http://nytlabs.com/blog/2016/03/24/mechanomorphism/