Keeping It 100 When It Comes to Human-Centered AI Design

Don Norman was right: human-centered design is harmful.

Thirteen years ago, the king of modern interaction design wrote a purely provocative piece calling into question the sacred tenet of the design empire he created. In the now-infamous article for the Association for Computing Machinery, “Human-Centered Design Considered Harmful,” he put a pitchfork in the budding design movement of the early 2000s when he wrote that human-centered design was problematic because…

“… first, the focus upon humans detracts from support for the activities themselves; second, too much attention to the needs of the users can lead to a lack of cohesion and added complexity in the design.”

The article got so much backlash that Norman had to write a “clarification” (others might call it a retraction) to clear up what he meant. I won’t belabor the debates that ensued, but I mention this provocative stance now because it’s a prime example of the myths and misunderstandings that can infiltrate an industry when principles and philosophies become catchphrases and pithy slogans.

It seems we’re at that harmful crossroads again, where the very codification and myopic principles that Norman saw popping up in interaction design all those years ago are now resurfacing in the race to inject some humanity into artificial intelligence design.

The same design misinterpretation and malpractice that pushed Norman to reinvent a new design theory, so as to cancel out his popularization of human-centered design, has now cropped up in the rush to label a new movement of human-centered AI. Like Norman, I’m going to be provocative here and call into question much of what has been bandied about as human-centered artificial intelligence design: it is not only harmful but dangerous, because the result of this design malpractice is not human-centered at all.

Do You Know Your Design Theory Origin Story?

Even though Don Norman popularized the catchy moniker “user-centered design,” his 2005 rant against human-centered design wasn’t really wrong.

He was fighting against a growing tide of posers and charlatans creating frameworks and designs without truly understanding the philosophy behind human-centered design. Sound familiar? (Coughs. Design Thinking, anyone?)

But before we get to Norman’s criticism of those HCD shortcut-takers, let’s do a refresher on just what human-centered design is.

Just where did this popular catchphrase originate?

According to the International Organization for Standardization (say that three times fast), or ISO, human-centered design is an approach to designing technology systems with the needs and requirements of users in mind, not only to enhance efficiency and effectiveness but also to “counteract possible adverse effects of use on human health, safety and performance.” We’ll get to the second part of that long sentence in a minute.

So there is a standard for what counts as HCD, and many of the products we’re creating today with machine learning and deep learning techniques just don’t meet it. To understand the mistakes of yesterday that we’re repeating today, let’s take a walk down memory lane to see how HCD was actually created as a design approach.

One of the best pieces I’ve read, and one I share with all my students, is this piece by Jo Szczepanska, a fellow Medium writer and one smart cookie. Her Design Thinking origin piece and its accompanying visualized human-centered design historical road map establish some much-needed foundation for how designers began to apply the HCD point of view in design.

Visualization of Design Thinking Evolution from Jo Szczepanska’s DT Medium article

Like any movement, Human-Centered Design was ushered in on a wave of technological, historical and economic transformations that forever changed how we humans communicate and relate.

Design Thinking became a popularized methodology for this approach to design. See what I did there? Design Thinking and Human-Centered Design are not the same. You can do Design Thinking and NOT do Human-Centered Design; this is where our current AI design problem lies. Let’s continue with the history lesson.

Wikipedia will tell you that HCD began with Mike Cooley and his Human-Centered Systems framework developed in the 1980s, but as Jo points out, this idea of designing expressly with humans in mind began a lot earlier.

Jo rightly pinpoints the period between the 1960s and 1980s as being the seminal decades in the development of the human-centered approach to design. She explains:

What I have to point out here is for anyone born after the 1980’s — yup, I’m one of them — is that this block of time between the 1960’s and the 1980’s was the first real instance of humans designing non-tangible things like software and interactions. What is quite fascinating is that at this early stage of making non-physical designs, the design profession called on social sciences like psychology and anthropology to help them understand how people reacted to fundamentally new ways of doing things via a machine.

But I would posit that the idea of designing with the full human in mind, not just our appendages or our physical attributes but our cognitive, social and psychological functions as well, began years before, with a man named J.C.R. Licklider.

J.C.R. Licklider

The Man Behind the Dream Machine

It’s ironic that Wikipedia labels Licklider an American psychologist when really he was the nation’s first computer scientist. I think this is a distinction worth pointing out because Licklider never saw computerized machines as just tools for humans to use. He always saw the future of computing as a fortuitous partnership of humans and machines.

This “man-computer symbiosis,” which he described in his seminal 1960 paper of the same name, was a new state of being for technology but also for humanity.

In the paper that spawned a thousand innovations, and what we now know as human-computer interaction design, Licklider gave a vision of a new world of man and machine:

The hope is that, in not too many years, human brains and computing machines will be coupled together very tightly, and that the resulting partnership will think as no human brain has ever thought and process data in a way not approached by the information-handling machines we know today.

Xerox, then Jobs, then Apple, then everyone else: they all drew inspiration from Licklider’s idea of a brave new world. For the first time, technology design placed humans in the center of the action, making technology personal instead of transactional.

Such a symbiosis meant there had to be a way to talk, touch and interact with machines that once stood in the corner of the room doing their own thing. Now they would be at your desk, in your home and even in your hand.

Engineers could no longer keep humans at bay; they had to go beyond perfunctory interactions like on/off switches into far more nuanced engagement.

With that one idea, that computers should be personal, Licklider catapulted computer science from the Human Factors era of big systems controlled by small groups of people to hand-sized systems, with as much computing power as it took to put a man on the moon, controlled by toddlers playing Baby Shark on repeat.

His ideas, eloquently documented in one of my favorite books, The Dream Machine, were honed during his time as a mathematician and cognitive psychologist at MIT, way back in the 40s and 50s.

I would argue that Human-Centered Design began in the dank basement laboratory Licklider shared with his wife and a few other crazy kids with PhDs. This lab, on MIT’s fledgling computer science campus, was the place where Licklider and his team cooked up all kinds of ways for military pilots to use auditory cues to guide missile computers.

Licklider thought of a network-powered machine not as a tool to be used but as an extension of humans. He was obsessed with designing ways for pilots to better control their environments through machines.

He wanted to know what they saw, how they thought and what they felt during bombing missions. And then he sought to design machines that made those processes easier not just by taking on tasks but by anticipating human needs.

And even though many thought leaders on human-centered AI invoke Licklider as an inspiration, I worry, as Don Norman did more than a decade ago, that they understand only half of the story. And the half they’re missing, the idea that computerized technology is not just a set of tools but something far greater and more powerful, is essential to the future of AI design.

Licklider sought not just to understand how a human could interact with a machine; he wanted to understand how humans thought, acted and believed in order to make a better machine. But that message of amplifying and augmenting human abilities has gotten lost in a deluge of “human replacement” phraseology. The AI doomsayers and their AI-boosting counterparts are aligned on the false assumption that AI means “without humans.” And for the sake of a better future, that assumption couldn’t be further from the truth.

Learning from our Human-Centered Sins of the Past

If the 17 million results on Google are any indication, the world is really into human-centered design and artificial intelligence. I haven’t read all 17 million entries, but this piece in Wired and this center at Stanford are two good indications of thought leadership in this arena.


And yet, as I scan conference after conference, read journal article after journal article and listen to podcasts and radio shows, I get the sneaking suspicion that people are talking about human-centered AI design without really understanding what it means. And this is precisely what led to that pesky troubleshooting Norman did over a decade ago.

He called out the adoption of “personas” in so-called human-centered design efforts as bogus. He wrote:

“Now, there has been one change, more in emphasis than substance. There has been far too much emphasis on individual people, trying to model them, trying to build fascinating scenarios and “personas.” I think much of this work misplaced, irrelevant, and potentially harmful if it diverts the limited time and resources of the design team away from matters that can actually help.”

If you read the piece and its clarifying companion article, you’ll see he’s not railing against human-centered design the philosophy, only its practice. His beef is with designers who were myopically focused on “listening to users” to the detriment of good design. He felt people were using human-centered design as a crutch, checking off boxes but not really doing good design. He wrote:

“The problem, however, is that HCD has developed as a limited view of design. Instead of looking at a person’s entire activity, it has primarily focused upon page-by-page analysis, screen-by-screen. As a result, sequences, interruptions, ill-defined goals — all the aspects of real activities — have been ignored. And error messages — there should not be any error messages. All messages should contain explanations and offer alternative ways of proceeding from the message itself.”

I believe the same is happening with AI design. For many, human-centered AI design is all about utility. It’s all about task replacement and decision-making. And the design methodology is focused narrowly on what humans do (their activities) and not on who we are (what makes us do what we do). And that, to me, is dangerous thinking.

Human-Centered Is More than Focusing on Humans

For examples of the prevailing philosophy of human-centered AI design, look no further than a few full-page adverts, Wired articles and conference speeches. You’ll know them when you hear them.

They are good and speak with good intent. They make all the right points, and I believe they are truly sincere. They quote both Don Norman and Daniel Kahneman, the Twin Towers of human-centered thinking, hitting all the HCD and cognitive home runs. They’ve got the information processing model down and are all about AI focusing on tasks, automation and efficiency.

And yet… something is off.

When I read these diatribes about how much we need human-centeredness in AI, why am I left with the feeling that humans are just this pesky entity you have to design around? Like we’re obstacles in the way of AI advancement, and in order to truly get to a high level of AI you have to design with the “weak humans” in mind. They speak of augmenting, but it feels more like parsing than partnering. Because you can put humans at the center of your design and still not be human-centered.

Instead of augmenting human abilities, I’m seeing AI platforms and products designed to exploit human frailty and behavior.

Instead of being built with human needs in mind, I see a drift toward AI products and services that are built with an understanding of human psychology but in service of business needs, and that therefore don’t counteract harmful effects on humans but rather cause them.

Examples include:

  • Creating a false sense of urgency to trick people into buying products
  • Defaulting app notifications onto people, forcing them to be tethered to their phone
  • Forcing people to give data to access content so you can then sell it, but giving them very little in return
  • Algorithms that spread dubious content with no filter and frightening speed
  • Being agnostic about content, allowing algorithms to promote whatever speech crosses your platform without redress, including hate, bigotry, fakery and the like

Each one of these examples plays upon a cognitive bias found in humans around the globe. And this doesn’t live up to the ISO standard of HCD. It violates the second half of the standard, counteracting “possible adverse effects of use on human health, safety and performance,” because the design exploits human cognitive responses and behavior. Consider the sketch below of the first pattern.
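To make the “false sense of urgency” bullet concrete, here is a minimal, hypothetical Python sketch (not any real product’s code, and the function name and reset interval are invented for illustration): a sale countdown that quietly restarts, so the scarcity it signals to the buyer is fake. The design exploits loss aversion by construction; there is no user need it serves.

```python
# Hypothetical sketch of a "false urgency" dark pattern: a countdown
# that silently restarts, so the pressure on the buyer never expires.

def tick_sale_timer(seconds_left: int) -> int:
    """Advance the 'offer ends soon!' timer shown next to the buy button."""
    seconds_left -= 1
    if seconds_left <= 0:
        # The deadline was never real: quietly reset to 10 minutes.
        seconds_left = 600
    return seconds_left

# The timer ticks down to zero... and the "deal" is magically extended.
t = tick_sale_timer(1)
print(t)  # 600
```

The point of the sketch is that the harm is not a side effect; the reset branch is the feature.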

So far AI has been built for efficiency. This is a task-focused view of AI that is as narrow as the persona-chasing HCD POV Norman complained about.

When I hear this version of human-centered AI, I keep thinking back to my overall worldview and how it evolved, and I go back to this quote by Licklider:

“Man-computer symbiosis is an expected development in cooperative interaction between men and electronic computers. It will involve very close coupling between the human and the electronic members of the partnership.”

I’m missing the symbiotic relationship here, the give and take. The idea that humans can teach and the AI can learn from them as they work together. Most important, I’m missing the AI that counteracts the harm of machines to people and amplifies the ability to serve human needs.

What we have now in AI, it seems, is an HCD approach that looks at human behavior and decision-making through a task orientation. It’s design as a noun, not a verb. It’s design about usability, automation, task efficiency and decision-making on task activities.

But isn’t human-centered design more than just a focus on tasks, on what we do? It’s more than how we think when we make decisions. This task-focused philosophy treats our quirky human tendency to buck logic as a weakness rather than as part of our humanity. Does it have to be?

If we want truly human-centered design in an intelligent world, shouldn’t it be in service of humans: yes, to make our tasks easier and better to do, but also to help us be who we are? Not just to help us think but to help us be human? To express identity, find a safe space and take ownership of our world?

So Don Norman was wrong. Human-centered design shouldn’t be considered harmful. But human-centered design misinterpretation and malpractice definitely is. Especially when it comes to the speed, scale and universality of artificial intelligence creation.

Mindful AI Focuses on Humanity Not Behavior

So I’d like to go beyond the term human-centered AI. Instead, I prefer to talk about “Mindful AI.” Mindful AI is less concerned with what people do and more concerned with why they do it. Mindful AI is designed not just with human behavior in mind but also with human values and aspirations.

Increasingly, designers are going to have to be concerned not just with human behavior and decision-making but also with fundamental human needs and values. That’s because AI products and services won’t just have the ability to do human tasks; they will also convince, cajole and even prevent humans from doing things.

And when you design products and services that interrupt a person’s free will (say, an autonomous vehicle’s door that refuses to open because its camera can see a cyclist in the door’s path better than the driver can), then designers had better have a fundamental understanding of how to create such gestures without violating abstract notions such as trust and the need for freedom.
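As a thought experiment, here is a minimal sketch of what that door gesture could look like. Everything in it is hypothetical: HazardReading, door_open_decision and the three-second threshold are invented for illustration, not any real AV API. The design choice it tries to capture is that the system may delay the human, but it never silently overrules them; it explains itself and leaves an override, which is where trust survives.

```python
from dataclasses import dataclass

@dataclass
class HazardReading:
    cyclist_detected: bool
    seconds_to_arrival: float  # estimated time until the cyclist reaches the door

def door_open_decision(reading: HazardReading, override_requested: bool) -> tuple[bool, str]:
    """Decide whether to unlatch the door; always return a message explaining why."""
    if reading.cyclist_detected and reading.seconds_to_arrival < 3.0:
        if override_requested:
            # The human keeps final authority after an explicit, explained warning.
            return True, "Opening at your request. Caution: a cyclist is approaching."
        # Refuse, but never silently: explain the refusal and offer recourse.
        return False, ("Hold on: a cyclist is approaching. The door will unlock "
                       "once they pass, or pull the handle again to override.")
    return True, ""

# Usage: the message is the design gesture; a mute, locked door would erode trust.
allow, message = door_open_decision(HazardReading(True, 1.2), override_requested=False)
print(allow, message)
```

The interesting part isn’t the threshold logic; it’s that every refusal carries an explanation and a path back to human control.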

In a truly AI world, design will be less about what humans do and more about what humans fundamentally need to meet that definition of humanity for themselves.

In light of that, I can see why human-centered design as a framework seems a little narrow.

It may seem not broad enough because you can’t really design something for every individual on the planet, right?

Well, with artificial intelligence’s scale, speed and broad adoption and adaptation, we’re going to have to get pretty damn good at designing for everyone on the planet without harming individuals in the process.

In an intelligent world, in the age of AI, we have to broaden our definitions and understanding of what it means to design for humans. As designers, we have to have a fundamental understanding of what it means to be human; otherwise we will continue down the path we’re on now, designing for business reasons or technical reasons and leaving humanity as collateral damage.

If we’re going to design products that are sentient, that will work side by side with humans, making decisions to fulfill human needs, then we as designers must have a better understanding of, and confidently know, what it freaking means to be human.

