R2D2 as a model for AI collaboration


C3PO is a protocol droid; he’s supposedly designed not only to emulate human social interaction, but to do so in a highly skilled way, such that he can negotiate and communicate across many languages and cultures. But he’s really bad at it. He’s annoying to most of the characters around him, he’s argumentative, he’s a know-it-all, he doesn’t understand basic human motivations, and he often misinterprets social cues. In many ways, C3PO is the perfect encapsulation of the popular fantasy of what a robot should be — and of the common failures inherent in that model.

Interacting like a human is hard (even for humans).

We don’t all talk to each other the same way. We don’t all share the same cultural backgrounds or conversational expectations. Charts created by the British linguist Richard Lewis map the conversational process of negotiating a deal in different cultures, and each culture’s conversation follows a distinctly different path.

Stop trying to make machines be like people

Obviously, the ideal of creating machines that interact with us just like people do is incredibly complicated and may actually be unattainable. But more importantly, this isn’t actually a compelling goal for how we should be implementing computational intelligence in our lives. We need to stop trying to make machines be like people and find some more interesting constructs for how to think about these entities.

Iron Man

Iron Man (as a metaphor) is an example of using the capabilities of machines to provide us with functional enhancements. You’re taking human abilities and augmenting them with mechanical abilities — superhuman strength, superhuman speed, and in the case of Jarvis, augmented information and intelligence. All the technology here is supporting Tony Stark, giving him superpowers.

Machine as prosthesis

Let’s start at one end: the idea of the machine as a prosthesis — something wired directly into our bodies or our brains that performs with us. Some of these are prosthetics in the traditional sense, developed as assistive devices to adapt for a disability: cochlear implants, or prosthetic limbs that replace missing ones. What’s interesting is what happens between replacing a missing limb and adding new capabilities that biological limbs don’t have — someone with a prosthetic hand that has a USB drive in the index finger, or runners whose prosthetics are designed to run more efficiently than a biological foot. The line between adaptation and augmentation gets very fuzzy.

Machine as servant

Now there’s a lot of fascinating work being done in that realm of prosthesis and human augmentation, but to get closer to thinking about relational dynamics and design, we need more space and more explicit communication between person and machine. We start to get that as we move along the spectrum, into the construct of the machine as a servant or butler. We see this in automotive tech like assisted parking and driving, in Slack bots that automate actions in our workflows, and in voice assistants like Alexa, Siri, and Google Assistant.

Machine as collaborator

But I think things get really interesting, both functionally and aesthetically, when we get to the end of the spectrum where the machine is not only separate from the self but also has agency — it has ways of learning and rubrics for making its own decisions. Here we start to get into the question of what it means for machines to be not just servants, but collaborators — entities that work with us to contribute their own unique form of intelligence to a process.


This is where I look to R2D2, who provides a fascinating construct for how we might imagine our future relationships with robots. R2D2 clearly has agency — he often follows orders from humans, but just as often disobeys them to pursue some higher-priority goal. And R2D2 has his own language. He doesn’t try to emulate human speech; he converses in a way that is expressive to humans but native to his own mechanical processes. He speaks droid. The other characters form relationships with him, but it is a completely distinct kind of relationship from the one they have with other people, or the one Luke has with his X-Wing.

“How does the digital sensor perceive the puppy?”

Ian Bogost, Alien Phenomenology

Timo Arnall took a stab at exploring this question nearly a decade ago in his wonderful film, Robot Readable World, which starts to give us a sense of how the world looks through the gaze of a number of different computer vision algorithms. As we design interactions with these kinds of machine intelligences, what are their versions of R2D2’s language? What expressions feel native to their processes? What unique insights can we gain from the computational gaze?

Let’s do more carpentry

The above people and projects are just a few examples of work that points the way toward a much more interesting future with our computational compatriots. And we need more of that experimentation. I hope to make and see more things that can explore these nascent and rich spaces of possibility and point in new directions. Let’s not let the future of AI be weird customer service bots and creepy uncanny-valley humanoids. Those are the things people make because they don’t have the new mental models in place yet. They are the skeuomorphs for AI; they are the radio scripts we’re reading into television cameras.


