Fixing the Human Problem in AI

writingprincess
16 min read · Oct 9, 2018

This post was originally published on LinkedIn on June 14, 2018.

Photo credit: Andy Kelly, Unsplash.com

There’s a scene in Cixin Liu’s “The Dark Forest,” the second book of his “Three-Body Problem” trilogy, that I find particularly disturbing. It’s right after…wait.

Before I go on, I’m going to give those of you who have yet to read this amazing and wonderful sci-fi book from China’s most popular sci-fi writer the opportunity to go to Amazon.com right now and buy it. Because you’re gonna want to read it. It will change your thinking about the future of humanity in ways that are both unexpected and uncomfortable.

Liu accomplishes a futuristic singularity in a way that only good science fiction authors can. Tackling the big ones like ethics, morality and humanity’s right to exist, he’s right up there with sci-fi’s top ABCs — Asimov, Bradbury/Butler and Clarke (Sir Clarke, if you will).

His “Three Body Problem” combines hard science (there really is a three-body problem), near-future technology (AR/VR), AI and, yes, even aliens to usher in conversations about some pretty heady ethical stuff.

Also this is the obligatory spoiler alert — so stop reading now if you don’t want to know what happens. I promise I’m only going to expose what’s needed to get into the tangled web of ethical, human-centered data and intelligent systems designs. But plot points will be revealed.

OK, still here?

Let us begin.

Is Being Human Centered Good for Designing AI?

So back to “The Dark Forest.” After being beckoned by some dispirited people on Earth, the aliens decided to come.

They’re coming to destroy us, press reset and start humanity all over again. They have human sympathizers who believe in the cause and are fighting on Earth so the aliens will win. Which means the sympathizers lose too, because the aliens are here to kill us all.

At first, these faraway strangers communicate regularly with their band of supporters on Earth. The aliens have real-time communication methods that help them talk to us. Yet they’re bothered by one exception: they can’t read our human minds.

This makes them nervous.

In “The Dark Forest,” an alien sympathizer is about to commit suicide because it’s been a year since he’s heard from the aliens. Just as he’s about to shoot himself, the Trisolarans break their silence.

“Don’t do that. We need you,” they tell the sympathizer.

The human whines and asks: Why did you abandon us?

“We were afraid of you.”

“Is it because our thoughts aren’t transparent?“ the man replies. ”That doesn’t matter, you know. All of the skills that you lack — deceit, trickery, disguise, and misdirection — we use in your service.”

The two — alien and human — go on to discuss the absence of such treachery on Trisolaris.

But the Trisolarans remain unconvinced. Humanity’s propensity to lie, scheme and hide the truth is something the aliens can’t understand and deem frightening.

“This difference in mental transparency gives us all the more resolve to wipe out humanity. Please, help us wipe out humanity, and then we will wipe you out.”

And there it is.

The space between what humans do and what they aspire to can be as thin as a blue line or as wide as the ocean. In the book, the Trisolaran aliens discover this fact as they study humanity.

They witness time and time again how humans espouse lofty human values and then contradict those values by their nature. It disturbs them so much that they feel obligated to wipe humanity out.

And this brings us to a critical point when discussing how to practice ethical, human-centered data and intelligent systems design.

As the topic of artificial intelligence slips from the fringes to frameworks, people in design, academia, industry and even Walmart are dabbling with its use.

Cries about ethics and humanity abound, and there is a hefty amount of skepticism and even fear associated with AI and its adoption. Mainly: will the machines eliminate our humanity?

But I’m looking at the creation, adoption and design of AI through a different lens; not just one of humanity but one beyond humanity.

For if we design intelligent systems based solely on our human nature, we should rightfully worry about the takeover of the machines.

But if we design our future based upon our aspirational human values, we may have a fighting chance when intelligent systems grow to rule our world.

The machines are coming, and I for one am glad. For if we pause this technology bullet train for just a moment and think, there may be no need for ethical technology frameworks such as Asimov’s Three Laws. Instead, we may just end up building the best humanity there ever was by opting out of building these systems in our own image.

So take this thought experiment ride with me. Let’s see if there is a case for AI Design to not be human-centered at all, and more humane for us all.

The Competing Mental Models of AI Design

Recently, a Twitter skirmish of the titans erupted when Facebook founder Mark Zuckerberg had some choice words for AI doomsayers — most notably that other tech titan, Elon Musk, who has been preaching caution on the advancement of AI for years now.

“I think that people who are naysayers and kind of try to drum up these doomsday scenarios — I just don’t understand it. I think it’s really negative and in some ways I actually think it is pretty irresponsible,” Zuckerberg said.

Musk shot back with a tightly wound Tweet:

“I’ve talked to Mark about this,” Musk tweeted on July 24, 2017. “His understanding of the subject is limited.”

So who is right?

I’m gonna punt here and say both titans are on to something.

The fact that self-driving cars, personality algorithms and disease detecting programs may soon replace drivers, matchmakers and family care docs has many envisioning a dystopian future.

I, on the other hand, remain optimistic. This coming from a black woman in America is a stunner in itself.

But it’s precisely because I understand humanity’s moral dissonance in a way few others in the tech world can that I can be optimistic. Because we finally have an opportunity to design a future without the mistakes of our past.

Intelligent systems displacing human beings isn’t necessarily a bad phenomenon if those systems are built to maximize human values over human nature.

Human Ideal Centered AI Design

AI pioneer Stuart Russell, in his seminal TED Talk, postulates a philosophical yet pragmatic framework for how to build intelligent systems to be better than us. Russell, who has literally written the book on AI design, advocates for building systems that are:

Altruistic

Humble

Designed to maximize human values

The first two are interesting, but the “last law of AI” spurs a mind-blowing discussion.

For if we build intelligent systems to replicate us, we will definitely have failed and sealed our fate to that dystopian future displayed in tech-gone-wrong novels like Brave New World.

But if we break free of all that is naturally human, that is, if we design beyond our humanity, then we will achieve a world filled with intelligent systems augmenting, nay completely transforming, our lives.

Sadly, we are not going in this direction.

In fact, much of the AI technology in use today falls way short of this ideal. And tragically, we are designing intelligent systems that are infused with everything that is wrong with us — racism, sexism, classism, bias and more. Let’s delve deeper.

Human Centered Does Not Mean Human Nature

So what is Stuart Russell actually talking about when he says AI should maximize human values; values, he says, that must never be explicitly known to the system, only followed? Huh?

So there are three books and one pamphlet which I regularly read and reread each year. They are:

The Constitution of the United States

The Federalist Papers

The Anti-Federalist Papers

The Bible (the Protestant one, not the Catholic one, though I’ve read both)

Within these books lies the ultimate illustration of the tensions that exist between human values and human nature.

You can go straight to the obvious dichotomies —

Slave owners writing about the equality of men

A benevolent God who gives life only to demand it back in a show of all-consuming loyalty

Advocating for no establishment of religion while invoking the Christian God to bless a new union

We could go on. But combing through these contradictions gives us a baseline to understand Russell’s idea of building systems with our ideals not our natural inclinations.

Let’s get 100% here.

Historically, we haven’t been good at designing out the indelicate parts of our culture. Just look at how we’ve used basic mathematical models in the past to push racist, sexist and biased agendas.

Take the US Census. In 1790, the burgeoning new democracy known as the United States wisely decided to take an accounting of all its citizens. That first census had a noble mission: count every head of household, free woman, man and slave.

Yet minorities, immigrants and anyone who wasn’t a white male remained undercounted in the census for centuries. That might not matter in terms of the math, but when you use that math to determine the allocation of resources, you’ve got a problem.

People used the numbers gathered from the US Census to enact policy, distribute resources, determine housing and medical subsidies. But because of racism, sexism and other inhumane, human tendencies there were many groups who either went underfunded or were just left out altogether. Let’s not even mention voting and the electoral college. Three-fifths anyone???

And to this day LGBTQ people aren’t (and won’t be) accounted for in the Census. If you don’t get counted do you even exist?

People tend to think of math, statistics, data and technology as some sort of agnostic representation of reality.

Think again.

The US Census and the Department of Justice criminal statistics remain two of the largest examples of Weapons of Math Destruction, so bluntly described in Cathy O’Neil’s book of the same name.

Like a jilted lover writing an accounting of her ex-lover’s indiscretions and secrets, O’Neil, an MIT-educated data scientist, airs the dirty laundry of how companies are using “Big Data” to create harmful mathematical models that reinforce some of the worst of humanity’s behaviors.

The book is filled with algorithmic programming that results in discriminatory, racist and highly prejudicial data models.

“A model’s blind spots reflect the judgments and priorities of its creators,” O’Neil writes. “Models are opinions embedded in mathematics… Whether or not a model works is also a matter of opinion… We must ask not only who designed the model but also what that person or company is trying to accomplish.”

Because if we refuse to acknowledge the human influence over what many see as agnostic data, we risk falling into the trap of practicing moral dissonance and intellectual dishonesty.

Also, if we go for a “quant-first culture,” one where the programming results are the only consideration, we run the risk of creating intelligent systems that perform well in a vacuum but ignore human nature and create dire consequences.

Remember that Zuckerberg quote? Check back with him now. He might have a different answer because of two words: the election.

In a critical but largely ignored article, Buzzfeed dissected the Facebook fake news scandal in a way that illuminated some inside mindsets that may explain why signals about ethics, privacy and abuse of their products were missed.

“Things are organized quantitatively at Facebook,” said an engineer, noting that the company was far more concerned with how many links were shared than what was being shared. “There wasn’t a team dedicated to what news outlets [were using the platform] and what news was propagating (though there was a sales-oriented media partnerships team). And why would they have had one, it simply wasn’t one of their business objectives.” — ”How People Inside Facebook Are Reacting To The Company’s Election Crisis” Oct. 2017 Buzzfeed

We are telling ourselves that we are using innocuous tools to design for the greater good when our outcomes reinforce some of the worst qualities we’ve ever displayed as human beings.

From the predictive policing model the Chicago Police Department is using to compile a list of 400,000 individuals in the city who “might” be involved in violent crimes, to the numerous HR screening intelligent systems that use personality tests to red-flag those with mental disabilities, to Amazon’s hiring algorithm’s gender bias against women, we are reinforcing the worst of humanity. Whether we’re doing it intentionally or not doesn’t really matter. The outcomes are the same.

Sure, Big Data is the punching bag because it’s new. But let’s be real here, AI isn’t the problem. We are. But the coolest thing is — we’re the solution too!

Solving the Human Problem with Humanity

Because for every unethical model O’Neil mentions, there are dozens of models that are intentionally, and with forethought, designed to overcome bias and discrimination, to minimize harm and to go beyond human nature for something better.

One example is the Array of Things project in Chicago (my hometown) of all places.

In 2014, Chicago debuted the Array of Things project, one of the first citywide IoT projects ever designed. The urban sensing project included a network of interactive sensor boxes installed around the city to collect real-time data about the city’s environment, infrastructure use and, of course, pedestrian activity. It’s the latter part that had people queasy with worry.

Cameras taking pictures of people walking on public streets raised privacy concerns. After a series of meetings with the public, the data team decided that cameras mounted in sensor boxes would take only still images, not video; process each image on the device with learning algorithms; keep the image for no more than four minutes; and then trash it. The image itself never leaves the computer and is never saved or stored.
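The pattern the team landed on, process on the device, keep only the derived measurement, throw the raw image away quickly, is simple enough to sketch. Here is a minimal, hypothetical illustration of that edge-processing idea in Python; the function names, the stand-in detector and the retention constant are my assumptions, not the Array of Things code.

```
import time

RETENTION_SECONDS = 240  # the "no more than four minutes" retention policy


def count_pedestrians(image_bytes: bytes) -> int:
    """Stand-in for an on-device detector; returns only a count, never the image."""
    # A real sensor node would run a local computer-vision model here.
    return 0


def process_frame(image_bytes: bytes) -> dict:
    captured_at = time.time()
    count = count_pedestrians(image_bytes)

    # Only the derived, non-identifying measurement ever leaves the device.
    measurement = {"timestamp": captured_at, "pedestrian_count": count}

    # Drop the raw image; nothing after this point can send or store it.
    # (A real node would also expire any buffered copy within RETENTION_SECONDS.)
    del image_bytes
    return measurement


if __name__ == "__main__":
    print(process_frame(b"\x00" * 10))
```

The point of the design is that nothing identifying ever leaves the node; the network only ever sees counts and timestamps.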

Artificial intelligence is neither independently artificial nor intelligent. It’s deliberately programmed by people to learn, do and act a certain way. Intelligent systems that learn outside of human intervention do exist, but their creation starts with one of us first. But soon, and very soon, as the Negro spiritual goes, this will not be the case.

Intelligent systems are already learning unsupervised, without human intervention. This presents a particularly thorny ethical problem, one that, if we don’t address it differently than we have in the past, will most assuredly bring about our society’s destruction. And that’s no doomsday nonsense. That’s just predictive analytics.

I’ve written extensively about the history of colorism, sexism and racism and sins of omission in technology. From LED sensors not working on black skin to racism in facial recognition programs, technology has not escaped our human nature. How could it when it’s designed by flawed human beings?

How can we strive for ethical data and intelligent systems design if we don’t recognize our own imperfection as creators?

That’s the first step to ethical data and intelligent systems design — recognizing our culpability and responsibility in this design process.

By acknowledging, out loud, written down for all to see, our own biases as designers we can begin to have the necessary dialogues within our design teams surrounding human-centered intelligent systems design.

Without this honest discussion, no matter the project or the design, we risk falling into the hubris trap I label AI “Agnostic Innocuousness”: the belief that because a system uses numbers, it’s exempt from human frailty.

But this doesn’t have to be the destination we arrive at. If we strive to design for our values and not our nature, we just may create the reality we aspire to rather than the one we’re living in.

Take self-driving cars.

Now that companies like Google, Facebook, Uber and others have pushed AI from the shadows of computer labs into the hallways of corporate America and living rooms of everyman, it is only right that we start thinking about the ethics of intelligent systems.

Like Godwin’s Law of the dark Internet, you can’t have a discussion about autonomous cars without someone inevitably presenting some sort of ethical dilemma exercise. Mention any type of intelligent system design and you’ll get all kinds of Hobbesian scenarios posited by pundits, critics, Luddites and technologists alike, where the system has to choose between killing someone and killing someone else.

The Trolley Problem is a favorite when it comes to autonomous cars. You can read about it, but basically it’s an ethics exercise where you have to decide whether to save one person or five.

It’s understandable why people would think in these terms because the mental model is that our intelligent systems must mimic human nature. But what if that’s a wrong mental model?

Why should our intelligent systems mimic our nature, how we think or even how we act? Is it a prerequisite for our interaction with technology to be just like us? Or can it be something just a bit better?

A Case for Human-Ideal Centered AI Design

So let’s get down to brass tacks. What exactly does ethical intelligent systems design look like? Hell, I don’t know. But I, and others smarter than I am, are working on it.

Here’s what I’ve got so far.

Human factors and ergonomics was introduced around the time of the industrial revolution. It was a design framework that sought to produce scientific research on how humans interacted with physical systems.

Human factors started out specifically concerned with people designing physical objects for other people.

Move along to another revolution, this one of the personal computing and Internet kind, and you get Human-Centered Design.

HCD, Design Thinking, User-Centered Design: these were the frameworks people used to design non-physical products for people.

In the 1990s, professors Batya Friedman and Peter Kahn introduced the concept of Value Sensitive Design. It took the human-centered framework a bit further by advocating for designing software and systems with certain values programmed in, most notably privacy, autonomy and trust.

VSD as a theory of design isn’t widely adopted. But even so, it presents a foundational framework for the next generation of design thinking.

Design Thinking: The Next Generation

So what can we do today about this runaway technology? Well, greater minds than mine are tackling this very issue, but as someone who cares about the intersection of humanity and technology, and who works to design and build such systems, here are some good practices I’ve learned while working on AI projects to build products that reach for human ideals in human-centered design:

Start with us and go beyond.

I know, the cardinal rule in UX is “You are not the user.” But you are the builder of an intelligent system, and that means designers must acknowledge their own humanity in all its bias, glory, frailty and, well, human-ness. We must acknowledge the same in the people who will use the product, too.

As a design researcher, I love working with my team on the “What’s My Bias?” exercise. It’s pretty simple. I start with a series of fill-in-the-blank statements designed to get hidden biases or worldviews out in the open.

So a service project that’s working with call centers may have statements such as “Call center work is ________” or “People who work at call centers are ________.” Everyone fills in the blanks and discussion ensues. The exercise doesn’t ask if you have a bias; it just assumes you do and moves on from there. The statements stay posted up for the entire project and serve as a “Check yo’ self before you wreck yo’ self” reminder when we’re designing.

Likewise, it’s great to look for the “cracks in the humanity,” the gaps between human nature and human ideals when it comes to the system design. What do people aspire to and what do they end up acquiescing to?

Airbnb recently released Another Lens, a set of activities designers can use to address bias, growth and inclusiveness in design.

Become the bad guy and do good.

Look, I’m sure all of us reading this would have been abolitionists back in the early 1800s, but the reality is that not everyone is that way. As a journalist, I often had to argue both sides of an issue so I could understand how each side worked. I quickly discovered the world isn’t black and white but filled with grey. Thinking about what not-so-moral people may do with a design can be difficult.

But when you’re designing systems that have the speed, scale, longevity and enormity that AI has, it’s imperative that we think about the things that can go wrong. We won’t get them all. But it’s good to design for the ones we know and let the AI system learn from such mistakes. Just writing them down and imagining them can help you design a better AI system. Prolific AI practitioner Danny Lange, who has built AI models that power Uber and Amazon, explains in Fast Company that “adversarial networks are key.”

So for instance, I may build a machine learning system that detects a fake product review or detects fake news. But I could also have a machine learning system that learns to generate a fake product review or fake news . . . As one of them gets better at detecting fake news, the opponent gets better at generating fake news, because it learns from the feedback loop. — Danny Lange

People use adversarial networks in machine learning all the time, but for things like discovering false positives in detecting cat images. What about using them to detect the opposite of what you intended in designing the system? We can’t create those types of programs without first acknowledging that not all people are good and not all systems will generate good. This doesn’t mean don’t be optimistic; it means don’t be naive.
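To make that feedback loop concrete, here is a deliberately tiny sketch of the adversarial pattern Lange describes: a detector that keeps re-learning how to flag fake reviews while a generator keeps adjusting to slip past it. This is an illustrative toy in Python, not Lange’s system or a production GAN; the single “authenticity” feature, the numbers and the names are all invented for the example.

```
import numpy as np

rng = np.random.default_rng(0)


def real_reviews(n):
    # Pretend genuine reviews score around 1.0 on a single "authenticity" feature.
    return rng.normal(loc=1.0, scale=0.3, size=n)


def fake_reviews(n, mu):
    # The generator controls mu, the mean score of the fakes it produces.
    return rng.normal(loc=mu, scale=0.3, size=n)


def fit_detector(real, fake):
    # Detector: nearest-class-mean classifier, re-fit every round.
    midpoint = (real.mean() + fake.mean()) / 2.0
    fakes_are_below = fake.mean() < real.mean()
    return midpoint, fakes_are_below


def detection_rate(scores, detector):
    # Fraction of the given fakes that the detector flags.
    midpoint, fakes_are_below = detector
    flagged = scores < midpoint if fakes_are_below else scores > midpoint
    return float(flagged.mean())


mu, step = 0.0, 0.05
for round_no in range(50):
    real, fake = real_reviews(500), fake_reviews(500, mu)
    detector = fit_detector(real, fake)          # the detector improves
    caught = detection_rate(fake, detector)
    # The generator improves too: nudge mu in whichever direction fools the detector more.
    for candidate in (mu + step, mu - step):
        if detection_rate(fake_reviews(500, candidate), detector) < caught:
            mu = candidate
            break
    if round_no % 10 == 0:
        print(f"round {round_no:2d}  generator mu={mu:.2f}  fakes caught={caught:.0%}")
```

Run it and you can watch the arms race: the generator’s fake scores drift toward the real ones and the detector’s catch rate falls from near-certain toward a coin flip, which is exactly the dynamic you want to study before someone hostile studies it for you.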

Get conceptual to develop concrete principles.

This is probably the toughest part, and I haven’t quite figured out how to do it well in AI system design. Right now, AI design is firmly ensconced in the “what we know” reality. We train systems to recognize cats, cultural greetings and so on: the world we know. But what happens when the system goes into unknown territory? How should it react or act? This is where Stuart Russell’s proposition for designing human values into AI systems becomes intriguing.

I like to call them human ideals. At their heart, AI systems are designed with goals and rewards in mind, especially intelligent AI systems that make decisions without human intervention.

As Russell suggests, why not create AI systems where the goals and rewards are based upon ideals drawn by machines that have all of human history and the present to draw upon? No human really would decide…if, as Russell postulates, the goal and reward of the AI system depend upon altruism, humility and maximizing human values. A system built upon this basis could create human-ideal-centered design without humans deciding. It could be conditional, and free of the human frailty that we all possess. Best yet, it could be, dare I say it, better than human.
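As a thought experiment, here is a minimal sketch of what Russell’s “humble” principle might look like in code: a system that never assumes it knows our values, keeps several candidate guesses about them and defers to a person whenever those guesses disagree. The actions, candidate scores and threshold are all hypothetical, invented only to illustrate the idea.

```
from statistics import mean, pstdev

# Hypothetical candidate "value functions": three different guesses at how a
# human would score each action. The agent never treats any one guess as truth.
CANDIDATE_VALUES = {
    "donate_usage_data_for_research": [0.9, 0.7, -0.2],   # the guesses disagree (privacy)
    "show_personalized_ad":           [-0.4, -0.6, -0.5],  # the guesses agree it's unwanted
    "send_health_reminder":           [0.8, 0.7, 0.9],     # the guesses agree it's welcome
}

DISAGREEMENT_THRESHOLD = 0.4  # how much spread before the system asks a human


def decide(action: str) -> str:
    scores = CANDIDATE_VALUES[action]
    if pstdev(scores) > DISAGREEMENT_THRESHOLD:
        return "defer to a human"   # humility: the system is unsure what we value
    return "do it" if mean(scores) > 0 else "don't do it"  # act on the values


for action in CANDIDATE_VALUES:
    print(f"{action}: {decide(action)}")
```

The deferral branch is the crucial bit: the system’s uncertainty about what we value is what keeps it asking us, which is the humility Russell is arguing for.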

A Brave New World (We Design)

Maybe I’m being naive, but as a student of history and an eternal optimist, I feel we are at a crucial time in design. We can, for the most part, design a future that is more equitable, just and safe for more people than ever, without having to sacrifice liberty, individualism and the pursuit of happiness. That’s the ideal, of course. But the good news is we only have to imagine it to begin to design it.


writingprincess

Executive design leader in ML/AI, Karaoke specialist, cold-water swim enthusiast, 3x Ironman — yep that’s me! Living life like it's golden.