Camera Obscura: Beyond the lens of user-centered design
As the world grows increasingly complex, the limitations of user-centered design are beginning to emerge
by Alexis Lloyd, Devin Mancuso, Diana Sonis, and Lis Hubert
It’s 2020 and our systems are failing us. We are increasingly reliant on technology that automates bias. We are celebrating “essential workers” while they are underpaid and their work is precarious. We are protesting in the streets because of policing systems that put black and brown people at risk every day. We use apps for travel, shopping, and transportation that productize exploitative labor practices. The list goes on and on.
How did we get here? These systems didn’t just emerge of their own accord. They were crafted by people who made hundreds of decisions, big and small, that led to the outcomes we see now. In other words, these systems and all of their component parts were designed. And for the most part, they were designed with processes intended to create positive user experiences. So what went wrong? Might the way we approach design be contributing to the problems we now experience?
It’s unlikely that the techniques that got us into this situation will be the ones to get us out of it. In this essay, we’re going to take a deeper look at dominant design practices — specifically user-centered design — to identify where our frameworks might be failing us and how we can expand our design practices to close those gaps.
User-centered design (alone) won’t fix the problems it created
Any framework is a lens through which you see things. A lens allows you to see some things quite well, but almost always at the expense of obscuring others. Prior to the development of user-centered design, technological experiences were primarily designed through the lens of business needs. The needs of the user were only considered insofar as they furthered or hindered those goals, but it was the bottom line that was firmly the focal point of that approach.
User-centered design (UCD) was developed in reaction to those blind spots. It advocated for a design practice that instead focused on the person using the technology, and was intended to create experiences based on an understanding of their needs and goals. As designers, we’ve spent much of the last 25 years convincing our peers of the virtues of putting user needs at the center of our design process.
This practice has produced some amazing products, services, and technical innovations. And for designers who entered the industry in the past decade or so, UCD has become a default mindset and approach. By empathizing with users and designing with their needs and wants in mind, we have strived to create products that are more helpful, more intuitive, and less stressful. Certainly many of the digital tools and platforms we use today would not have been possible without the contributions of designers and the user-centered approach.
However, like any lens, UCD also has its own blind spots, and those have played a role in leading us to our current state. There are three key gaps that we’ve identified and will discuss in more detail.
- First, by focusing on the user, UCD has a tendency to obscure the experiences of other participants in the systems we design — those who aren’t end users, per se, but who interact with or are affected by the system.
- Second, by focusing on ease of use, the approach obscures the friction in an experience. Often that friction doesn’t disappear, but instead gets offloaded on to others whose experiences are less visible or less privileged.
- Finally, UCD’s focus on “successful” experiences obscures possibilities that lie outside of predetermined success metrics, preventing us from designing for uncertainty, failure, or experimentation in the ways we otherwise might.
It’s important to note that when UCD was first developed, the technological experiences we were designing also had a much smaller scope. These blind spots were less apparent and less problematic when confined to a specific product or workflow. But today, we find ourselves in a far more complex world, where every experience is technologically mediated, and ambient, pervasive computing is the norm. The Internet, smartphones, machine learning, deep fakes, decentralized finance, addictive social media: today’s world features fundamentally new levels of complexity and scale which are beginning to expose some gaps in our approach, and we need to develop new approaches that address these complexities.
In effect, many of the problems that we see in product and platform design are not due to malice, but simply because UCD as a practice makes some problems more visible than others. There is a whole set of second-order experiences that we don’t actively design, but happen as a consequence of what we design. Which means that there’s the potential for a great deal of positive change that can be created simply by shifting how we look and what we look at.
We believe that the lens of systems thinking, when added to traditional UCD techniques, can help us to expand our perspective. Systems thinking is intended to help us see how many complex elements interact; to understand a diversity of participants and the structures that connect them. By bringing these two practices together, we can provide ourselves with a robust toolkit that is appropriate to the complexity and scale of the challenges we face as designers today.
Below, we will dive deeper into the three gaps we’ve identified, going into detail about each one. However, we think it’s important to not only have these conversations at a theoretical level, but also to address the nuts and bolts of how these ideas can be applied in our design processes. So we will end with a few resources for you to take into your day-to-day work, starting with a set of questions to guide discussions with colleagues and ending with a set of design strategies and tools that can be applied throughout the creative process.
How much are designers responsible for?
Before we dig in, it’s important to note that, in many cases, the root causes of problematic systems are far bigger than design can neatly solve for. For example, many of the dark patterns we see are a result of the inherent incentive structures of economic systems that are based on scarcity and profit. It’s not typically within the scope of a designer’s job to change the foundational economics of our society (though if you want to try, go for it!). However, we do have a responsibility to develop a clear understanding of the systems that we’re contributing to so that we can minimize unintentional harm. If you are working on a two-sided marketplace, what does that mean? What are the ethical risks of that model? If you’re developing a SaaS product, what incentives does that create and what is the larger system it engages with? The strategies outlined here are about developing an understanding of incentives and impact so that you can more clearly understand the systems you’re working within, and can act more thoughtfully within them.
What’s being obscured in user-centered design
Putting the user at the center of our process has undoubtedly helped us create interactive systems that are more useful and usable than their predecessors. However, whenever we center something in a system, we give it more of our focus; we privilege it above the other elements in the system, often to the detriment of the broader system. A side effect of our pursuit to place the user at the center of our process is that all too often we say user when we really mean consumer. This begins to narrow our focus, placing the all-important, paying consumer at the center of our thinking — obscuring and de-prioritizing the broader ecosystem of participants who engage with, and are impacted by, our system.
As Kevin Slavin writes in his essay Design as Participation, “When designers center around the user, where do the needs and desires of the other actors in the system go? The lens of the user obscures the view of the ecosystems it affects.” In effect, user-centered design ends up being a mirror for both individualism and capitalism. It posits the consumer at the center, catering to their needs and privileging their purchasing power. And it obscures the labor and systems that are necessary to create that “delightful user experience” for them.
This is particularly evident in the large, complex, global-scale technology platforms we engage with every day; they often achieve success at such massive scales by ensuring that the greatest number of people can use, and pay for, their products and services.
As Abeba Birhane writes in her essay The Algorithmic Colonization of Africa:
“Society’s most vulnerable are disproportionally affected by the digitization of various services. Yet many of the ethical principles applied to AI are firmly utilitarian. What they care about is “the greatest happiness for the greatest number of people,” which by definition means that solutions that center minorities are never sought.”
When designers pursue the greatest amount of “delight” for the greatest number, it becomes increasingly difficult to prioritize attention on actors within the system who fall outside of the visible majority. So let us begin to shift our focus from solely addressing the end user, and instead acknowledge that there exists a much broader spectrum of actors, each with their own agenda, engaging within our system and being impacted by the outputs of it.
For example, let’s take a look at Airbnb. It was designed with two sets of users in mind: guests and hosts. The experience of each was deeply considered, they were interviewed and observed, the product was designed in response to their needs. But as we’ve seen, Airbnb also has an impact on a much wider set of participants, including neighbors, service workers, city planners, local legislators, hotel owners, and more. If we consider Airbnb as a system with a complex network of actors, we can start to better see and understand the potential impact of various choices and how they play out in the system as a whole.
By moving from users → actors, we can begin to address this gap in our user-centered approach and the externalities that centering the consumer often introduces. We can begin to think not only about end users but a much broader definition of participants: good actors who use the system as intended, bad actors who look to subvert the system for personal gain, periphery actors who are impacted by the system without ever interacting with it, and many more.
The challenge lies in how to accurately map the spectrum of actors within our system and identify those who might traditionally be overlooked. We can use techniques such as ‘Participant mapping’ to reveal participants we might not have traditionally accounted for, and dive deeper by ‘Designing for excluded users’ to address the pain points of those not served by our current designs. (More on these techniques in the “Filling the gaps” section below.)
Other projects such as IF’s Society-centered design and Don Norman and Eli Spencer’s Community Based HCD have also proposed alternative lenses to traditional user-centered design to help broaden the definition of participants impacted by the systems you work on.
User-centered design advocates for the design of interactive systems that are useful and easy to use. As designers we have been relentless about prioritizing ease-of-use, identifying friction and eradicating it from the interactions we design. We have strived to design ‘seamless’ digital systems in which users go about completing their goals with little awareness of the underlying technology, where things happen ‘automagically’ in a way that seems ingenious, inexplicable, or magical.
But by privileging ease-of-use above all else, we have at times obscured friction to the detriment of users. We’ve over-optimized, creating experiences that are addictive, irresponsible, and at times, too easy to use. In addition, friction often doesn’t get removed from an experience, but instead is shifted on to other parts of the system.
We have video platforms that have perfected algorithms to serve us the next tantalizing piece of content, regardless of how long we’ve already been watching. We have commission-free stock trading apps that shower the user’s screen in confetti when they execute a trade, regardless of how risky. And of course, we have social media platforms that place the burden of fact-checking on the user and have contributed to some of the largest global misinformation campaigns we have ever seen.
In somewhat of a tacit response to these criticisms, earlier this year we saw some apps begin to re-introduce friction into their experiences to try to course-correct. YouTube added bedtime reminders to help users log off late at night. Twitter introduced its “read the article before you retweet it” prompt, encouraging users to read the article they’re sharing in order to promote media literacy and combat the virality of misinformation. Meanwhile, Robinhood made it more difficult for users to access its options trading feature, a month after a 20-year-old user died by suicide, distraught after confusion over an apparent $730,000 in investing losses on his Robinhood account.
But it’s not just the friction between the user and the interface. Within tech, particularly Silicon Valley tech, the pursuit of obscuring friction in service-oriented products (think food delivery or support services) has resulted in the dehumanizing of the labor behind these services and the reifying of a two-tier class structure that has insecure gig workers on one hand and people who can afford to outsource their lives to those workers on the other. The fact that so much of the labor inherent in getting these products to our door has been intentionally rendered invisible (in the service of a more seamless user experience) gives businesses leeway to “optimize” that labor in ways that would be untenable if consumers were more exposed to those practices.
“This is the Amazon move: absolute obfuscation of labor and logistics behind a friendly buy button. The experience for a Sprig customer is super convenient, almost magical; the experience for a chef or courier…? We don’t know. We don’t get to know. We’re just here to press the button.”
— Robin Sloan, “Why I Quit Ordering From Uber-for-Food Start-Ups”
A team of unnamed journalists at Renwu recently collectively authored a report looking into the impact of algorithmic controls upon frontline delivery workers across China (known as “riders” because they deliver food and other items by riding electric scooters).
“The system that governs delivery services has the power to continuously consume delivery time. For the system’s creators, this is a praiseworthy advancement, a real-world embodiment of the deep-learning capacity of AI. … But, for the delivery riders tasked with realizing this technological advancement, this can be a nerve-wracking and even deadly experience. Among the variables evaluated by the system, delivery time is the most important metric, and missing delivery targets is strictly forbidden. Exceeding the delivery time limit results in bad reviews, pay cuts, and even dismissal from the job. In a message board for delivery riders, one wrote that delivery is a race with Death, a competition with traffic cops, and a friendship with red lights.”
— Unnamed journalists at Renwu, “Delivery Riders, Trapped in the System” (外卖骑手，困在系统里)
So therein lies the designer’s dilemma: we want to reduce friction in order to create a useful and usable experience, but sometimes, in our haste to create frictionless experiences, we produce unintended, undesirable consequences for other actors in our system. One tool from systems thinking that can help us here is system mapping — creating a simplified model of our real-world system in order to look at how the various actors within it are being incentivized to behave, and where those behaviors are being reinforced through feedback loops. By beginning to understand the various flows of value and feedback loops that exist within our system, we can begin to model out what might happen in different scenarios if we were to alter those incentives through the removal or introduction of friction.
When we design products, we typically seek to refine the precise details of the user’s experience and optimize their behaviors towards the specific success metrics the organization deems most important; we want the user to invite a friend, purchase an upgrade, or spend more time watching videos.
Within the confines of our website or app we optimize for our product’s success, guiding the user through their ‘happy path’ to the intended engagement or transaction. But when we look at the user experience solely through the lens of optimizing for success, we often fail to design appropriate failure states for when things don’t go as expected.
In addition, a narrow focus on the ‘successful’ experience limits the designer’s view of possibilities and obscures users’ desire paths that fall outside a predetermined definition of success. It makes it harder for people to ‘play’ a system in novel and creative ways. It is also underpinned by the inherently flawed assumption that the organization (and by proxy, the designer) knows what is best for the user in any given context.
One of the reasons for Twitter’s popularity is that it made space (at least in its early years) for a multitude of emergent behaviors. Some of the core features of the service today were user-invented hacks, like hash tags, @ replies, and threads. The platform had constraints, but was fuzzy enough at the edges to make space for emergent behavior, and then the platform was responsive to that behavior, learning from usage to build better features.
As the systems that we work on grow increasingly complex, we must move away from the idea that we can optimize for success through exercising explicit control over the user experience, and instead begin to ask how we might influence the bigger system at play. To do this we must shift our lens from a singular focus on success to a broader awareness of consequence, ultimately broadening our approach to support both the successes and the failures of the participants within our system.
Where product design seeks to own and control each detail of the user’s experience, when designing for systems we cede some of that control. Instead of designing the artifact, we are designing the playing field and the rules, then observing how participants react (or don’t). Steering influence comes from considering the consequences of introducing or removing elements and interconnections, of conflicting or complementary purposes, and of the intentions and behaviors of participants, both good and bad actors.
To design for systems rather than users, we must shift into a dynamic posture. Systems are ever-changing, and success metrics are too limited a goalpost; instead, we may be better served thinking in terms of incentives and consequences: “I am going to influence A by designing B in this way,” or “I will design A in such a way because it will likely affect B,” or “I will incentivize this behavior from A, so as to elicit this response from B, for the betterment of both.”
As Cassie Robinson writes in her essay Beyond Human Centered Design:
As designers we create user journeys and experience maps to show “if this then that” but we should also ask “if this then what?” — something that those of us who work more systemically already do through consequence mapping — if this for an individual, then what for a community? If this for a social enterprise investment, then what for society?
Kevin Slavin also speaks to this shift in thinking about what it means to design:
“The designers of complex adaptive systems are not strictly designing systems themselves. They are hinting those systems towards anticipated outcomes, from an array of existing interrelated systems. These are designers that do not understand themselves to be in the center of the system. Rather, they understand themselves to be participants, shaping the systems that interact with other forces, ideas, events and other designers.”
Filling the gaps: 5 design strategies
Now that we’ve discussed the gaps we see in user-centered design and some high-level approaches for addressing those shortcomings, let’s dig into some strategies and tactics you can use in your day-to-day design practice.
These design strategies are intended to give you practical techniques to help you consider the bigger system at play. They can help you examine your values and principles, reveal invisible participants, understand the interplay of incentives in your network, and expose potential impact of your product or service.
These are just a few approaches drawn from our own practices and thinking. If you want to explore further, some related work includes: Spotify’s Ethics Assessment Spreadsheet, Make me think! by Ralph Ammer, and Beyond human-centered design, to? by Cassie Robinson.
1. Uncover the exploits
Go beyond the happy path. Take a “white hat” approach to design by actively exploring unintended consequences.
Thinking through and designing for misuse, abuse, and bad actors in an ecosystem allows us to counteract dark patterns with mitigating measures. As well-meaning designers, we often design idealized happy states, flows, and scenarios. We are often reluctant to explore how our systems might be misused because we’re usually trying to “sell” those solutions, either to internal stakeholders or external clients. This leaves our systems with a variety of holes for bad actors to take advantage of.
In parallel to designing ideal flows, think through all the ways your system can be exploited. Figure out how people might maliciously subvert or break your system. This takes some unique, and likely uncomfortable, thinking on your part. You’ll want to consider:
- What’s the worst thing a user can do with this system?
- Who are my most vulnerable participants?
- Why are the bad actors incentivized to act this way? What do they gain?
Design mitigating solutions to your identified dark paths. Exposing the various ways bad actors can take advantage of your system gives you the power to make it right again. For each exploit you’ve identified, take the time to design a solution that provides guardrails or preventative measures. This overall practice will create a safer environment throughout the ecosystem you’re designing for all users involved.
2. If this, then what?
Consider second and third-order effects as well as alternative outcomes in order to understand the potential consequences of your system.
Going beyond single-outcome thinking and success metrics to imagine a variety of potential outcomes allows us to reveal the potential impact of our products and services. As designers, we often consider only the single path of “if this, then that” (e.g. if I show the customer how long until their order arrives, they will be less anxious waiting for it). Doing so constricts our perspective and leaves our designs open to creating negative impacts beyond those first-order, user-centered interactions.
There are a myriad of techniques you could use to explore more broadly, to understand “if this, then what”. A few we like to use include consequence scanning workshops, The Five Whys, STEEP, and Mapping Impact, but there are many more beyond what we listed here. The important part is that the activity encourages thinking beyond the obvious cause and effect of your product or service and considering possible second-order consequences. All of these techniques are intended to push you to think several steps ahead and anticipate a broader range of outcomes. By identifying those potential consequences, you may reveal new paths for innovation, expose unanticipated risks that you can work to prevent, or open up opportunities for future growth.
3. System mapping
Make the invisible actors visible. Mapping the various relationships and transactions between the actors in our system allows us to better understand how each group is incentivized to behave.
In order to better understand the complex systems we work on, we must take our abstract understanding and begin to create more concrete representations to express our assumptions and understanding. A system map can help us do just that, simplifying the often overwhelming complexity of reality by identifying the various elements, actors and interconnections and incorporating them into a simpler story.
A simple system map shows the actors involved in a system and the flow of value (information, goods, money) between them (“We give you A and in return you give us B.”). A system map is incredibly useful for creating a quick visual description of the system (or business model) you are operating within. By identifying how value flows we can start to identify where actors are being rewarded and how they are being incentivized to act, or not. Ultimately mapping out your system can also help you identify areas where your understanding is not as complete, and can enable teams to form a common understanding of how their system operates.
- Define the actors in your system, those who are involved in creating, delivering, capturing value (think users, creators, brands, merchants, partners, etc).
- Start to map the three flows of value between actors: money, goods, and information. Draw arrows between the actors indicating which way the value flows.
- Look at how actors are being incentivized or disincentivized within your system. How is this impacting their behavior? Are the incentives consistent with the espoused goals of your users? Consider whether your incentive flows help your actors or harm them.
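If it helps to make this concrete, the steps above can even be sketched as a toy data model. The sketch below (in Python) represents a system map as a list of value flows and tallies, for each actor, how much value they give versus receive — surfacing participants who feed value into the system without any flow coming back. The actors and flows here are entirely illustrative, loosely inspired by a home-sharing platform, and are not drawn from any real product’s actual model.

```python
from collections import defaultdict

# Each flow is (from_actor, to_actor, kind_of_value).
# Actor names and flows are hypothetical, for illustration only.
flows = [
    ("guest", "platform", "money"),        # booking payment
    ("platform", "host", "money"),         # payout, minus fees
    ("host", "guest", "goods"),            # the stay itself
    ("guest", "platform", "information"),  # reviews, usage data
    ("platform", "guest", "information"),  # listings, recommendations
    ("neighbor", "guest", "goods"),        # quiet streets, shared spaces
]

def uncompensated_actors(flows):
    """Return actors who give value into the system but receive
    no flow of value back — candidates for overlooked participants."""
    given = defaultdict(int)
    received = defaultdict(int)
    for src, dst, _kind in flows:
        given[src] += 1
        received[dst] += 1
    actors = set(given) | set(received)
    return sorted(a for a in actors if given[a] > 0 and received[a] == 0)

print(uncompensated_actors(flows))  # ['neighbor']
```

In this made-up map, the neighbor contributes value (shared space, tolerance of strangers) but receives nothing in return — exactly the kind of periphery actor a system map is meant to surface. A real mapping exercise would of course weigh flows qualitatively rather than by count, and is usually done on a whiteboard rather than in code; the point of the sketch is only that writing the flows down forces you to name every actor.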
4. Design for excluded users
Practice conscious design by looking beyond expected user groups to include a diverse range of people with a need for your product/service.
We often unintentionally, whether through business assumptions or personal biases, design for the majority in an attempt to develop solutions that work for “most people”. This often leads to design solutions that exclude diverse groups of people who may in fact be potential users, but who fall outside the majority.
Start by looking at your own team. How many different viewpoints do you have? How diverse is it when it comes to race, gender, socioeconomic status, and the disability spectrum? Who are you missing? How can you bring in those missing perspectives? Conscious design starts with the design team. Become aware of your own biases and assumptions; while we cannot fully escape them, we can be aware of missing viewpoints.
Next, look at your user research sample. You’ve included users based on your assumptions about who the users are. How diverse is your research sample? Conscious design can be practiced by conducting research/testing with users across race, gender, socioeconomic, and disability spectra to test assumptions and identify needs that are specific to each group for a more inclusive design solution.
5. Ethics-oriented competitive research
Examine the impact of similar products or systems to gather insight on which outcomes you want your system to replicate and which you want to avoid.
Understanding the ethical impact of competitors’ products can reveal methods to emulate or avoid. Most of us are familiar with conducting competitive research to inspire ideas for our designs. By turning that research to focus on the social and systemic effects of the competitive set, we take our findings beyond user-centered thinking. Consider what impact competitors have had, the criticism they’ve faced, and how you might avoid the same pitfalls (or realize similar successes).
You will especially want to include competitors that have similar systems architecture — for example, a SaaS business, an online retailer, a two-sided marketplace, etc. Orient your research towards ethics by including questions like: What impact has this competitor had on all participants in the system?, What unintended consequences has their product created?, and What praise or criticism have they encountered?
Let’s keep exploring
This collaboration began as a result of a shorter essay Alexis wrote back in March, which spurred new connections and led to deeper conversations with Devin, Lis, and Diana. What we’ve written here isn’t intended to be a fixed solution, but rather a set of ideas in an ongoing conversation. We want to continue to interrogate these ideas and our own practices in discussion with other designers, technologists, academics, social scientists, futurists, and more. Let’s keep learning from each others’ techniques and perspectives so that we can create a more thoughtful and intentional future.
You can reach Alexis, Devin, Lis, and Diana on Twitter.
Thank you to the friends and colleagues who generously offered their eyes and feedback: Mike Davidson, Matt Boggie, Shannon Blike, Jack Baldwin, Theo Strauss, Andrea Mignolo, and Katy Atherholt.