by Alexis Lloyd, Devin Mancuso, Diana Sonis, and Lis Hubert
Imagine this: you have a text editor, and your team is there too. Your colleagues are making suggestions, answering questions, filling in gaps, and being sounding boards. But one of the team is an AI.
Matt Webb wrote this post yesterday about Ben Hammersley’s new startup. In it, he articulates a perspective on collaborative AI that I’ve been thinking about for a while, and I’m excited to see more people engaging in this conversation.
In thinking about human-machine collaboration, I find myself repeatedly returning to a talk I originally gave at Eyeo in 2016, which used C3PO, Iron Man, and R2D2 as three frameworks for how to design for AI, so I took Matt’s post as a nudge to write up those ideas and update them for the current moment. …
If there’s one thing the current moment has done, it is to peel back the façade of radical individualism and reveal the ways in which we are deeply dependent on other people and systems. The often invisible networks of infrastructure and labor that hold up our society have lately been thrown into brilliant relief:
The events of this last week reflected many of the themes we’ve touched on here for over a year: the responsibility of platforms for the conversations they promote, the failure of imagination that occurs when leaders don’t listen to enough perspectives, and how online and offline influence each other to the point where the boundary is mostly meaningless. We are sure there will be much more to say as we all learn more about what happened and come to terms with these risks to our institutions, but make no mistake: nothing that occurred on January 6 was unpredictable.
In this issue we hope to inspire new ways of thinking about our futures: moving beyond a single user to a complex system, considering who defines what future we aim for, and adapting and inventing in the face of radical change. Hopefully these tools will help us be more imaginative about our possibilities, and steer our futures toward brighter, more collaborative outcomes. …
We here at Ethical Futures Lab really like robots. This issue is chock full of them: some that help their human partners, and some that replace human labor. We take a deeper look at the designs and considerations that spring up depending on which of these approaches their manufacturers take. We also touch on the unseen risks within AI and the possibility of mining social capital.
— Matt & Alexis
Social tokens — digital currency that is backed by the reputation of a person or brand — seem to be the new hotness in the cryptocurrency world. Fans can buy a person’s tokens in exchange for access to perks and exclusive content, like a private chat with an influencer, access to a music video, or special merchandise. …
Happy Thanksgiving! This week we dive deep into the uncanny valley, trying to find where a person ends and a bot begins, and how the distinction is perhaps becoming less and less meaningful. Read on to discover the inherent bot-ness of bots, the ways in which we’re already becoming cyborgs, and how to give gifts this holiday season that protect your loved ones’ privacy.
— Matt & Alexis
Last week, Alexis published an essay on a topic she’s been exploring for a while: how we design interactions between humans and machine intelligence. She uses three archetypes — C3PO, Iron Man, and R2D2 — to describe different models for these interactions. C3PO is the model that’s most familiar, where we try to make machines act like humans, and she argues that it’s a fundamentally uninteresting approach that’s doomed to fail most of the time. Iron Man, or using machines to augment ourselves, is a somewhat more compelling model that lets us use computational capabilities to give ourselves superpowers. But where things get deeply weird, delightful, and full of possibility is when we look at R2D2, a model which suggests machine intelligence as a strange companion species that we can treat as a creative…
It seems impossible that it’s been two weeks since last we wrote, but we can’t decide if it feels more like two days or two months since Election Day. Yet again, 2020 proves to be an exercise in time dilation. Anyway, we’re back to our usual tricks this week, thinking about why Sesame Street is so great, how privacy online can be a double-edged sword, and what we would do with a truly public space on the web.
One of our favorite techniques at the Ethical Futures Lab, when confronted with an assertion that something is “better”, is to ask, “better for whom?” Setting clear goals for the success of a project is critical, and the more we investigate recent innovations like social networks, viral media sites, or smart home gadgets, the more often we find that the answer to “better for whom” is “the company that made it”. …
This is a newsletter about the possible futures we encounter. It’s about how we can make choices with the materials in front of us in order to make the future that emerges one that is desirable, thoughtful, and equitable.
Today is a day that is explicitly about the choices we make, and how we use those choices to shape the path we want. We do this every four years, but we can all recognize that this one is different from what has come before. It is clear that who we elect this year will be especially significant in determining the world to come. In our last issue, we mentioned George Packer’s piece in The Atlantic, entitled “America’s Plastic Hour is Upon Us”. If you haven’t read it, today is a good day to do so. Packer posits that, most of the time, a system is only capable of changing in small, iterative ways. But every once in a while, circumstances converge such that a paradigm shift is possible, and dramatic change can occur. We are in such a “plastic hour”. …
Over the last couple of weeks it’s become unavoidably clear that we have lived, and are living, through significant change. Nearly all our assumptions about “how stuff works” are shifting, and while that’s frightening, it can also be thrilling. This week we investigate signals that show us examples of that change, how to think rationally about this and other crises, and where new opportunities suddenly reveal themselves. Plus, some truly magical AI-generated real estate listings!
— Alexis & Matt
We talk a lot about how good design happens inside well-defined constraints, and as this long and well-researched piece from Matthew Ball shows, those constraints are often in flux. …
Agency is a critical component of successful innovations: does this give its users or recipients more control over their lives? Several of the signals we saw this week deal with agency: who has it, how they get it, and what they can do with it. We also heard other positive signals about the development of AI, computers that can run until they fall apart, and systems that can read your mind.
— Alexis & Matt
Who are you and what do you do? These two questions are tightly coupled for many of us, in that our identity is largely shaped by our work. Historically, that “work identity” has been determined by your employer and your job title. But this essay by Tom Critchlow explores how the way we define identity has changed from a permission-seeking system to a permissionless system: “People used to think they needed permission — so they would ask for somebody else to give them permission to advance, to be something different — a new job title, a new degree, a new certification, a new membership.” …