Human Ideal Centered AI Design (Part II)

AI pioneer Stuart Russell, in his seminal TED Talk, proposes a philosophical yet pragmatic framework for how to build intelligent systems that are better than us. Russell, who has literally written the book on AI design, advocates for building systems that are:

Altruistic

Humble

Maximizers of human values

The first two are interesting, but the “last law of AI” spurs a mind-blowing discussion.

For if we build intelligent systems to replicate us, we will definitely have failed and sealed our fate to the dystopian future depicted in tech-gone-wrong novels like Brave New World.

But if we break free of all that is naturally human, that is, if we design beyond our humanity, then we will have achieved a world filled with intelligent systems augmenting, nay, completely transforming, our lives.

Sadly, we are not going in this direction. In fact, much of the AI technology in use today falls far short of this ideal. And tragically, we are designing intelligent systems infused with everything that is wrong with us: racism, sexism, classism, bias and more. Let’s delve deeper.

Human Centered Does Not Mean Human Nature

So what is Stuart Russell actually talking about when he says AI should maximize human values, values he says must never be explicitly known to the system, only followed? Huh?

So there are three books and one pamphlet that I regularly read and reread each year. They are:

The Constitution of the United States

The Federalist Papers

The Anti-Federalist Papers

The Bible (the Protestant one, not the Catholic one, though I’ve read both)

Within these books lies the ultimate illustration of the tension between human values and human nature.

You can go straight to the obvious contradictions:

Slave owners writing about the equality of men

A benevolent God who gives life only to demand it back in a show of all-consuming loyalty

Advocating for no establishment of religion while invoking the Christian God to bless a new union

We could go on. But combing through these contradictions gives us a baseline for understanding Russell’s idea of building systems with our ideals, not our natural inclinations.

Let’s be 100% honest here. Historically, we haven’t been good at designing out our humanity. Just look at how we’ve used basic mathematical models in the past to push racist, sexist and biased agendas.

Take the US Census. In 1790, the burgeoning new democracy known as the United States wisely decided to take an accounting of all its inhabitants. That first census had a noble mission: count every head of household, free woman, free man and slave.

Yet minorities, immigrants and anyone who wasn’t a white male remained undercounted in the census for centuries. That might not matter in terms of pure math, but when you use that math to determine the allocation of resources, you’ve got a problem.

People used the numbers gathered from the US Census to enact policy, distribute resources and determine housing and medical subsidies. But because of racism, sexism and other inhumane human tendencies, many groups either went underfunded or were left out altogether. Let’s not even mention voting and the electoral college. Three-fifths, anyone???
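To make the arithmetic concrete, here is a minimal sketch of how an undercount flows straight into funding. The budget, populations and 15% undercount rate below are invented for illustration, not actual census figures:

```python
# Hypothetical illustration: proportional funding allocation from census counts.
# All numbers below are made up for the example.

TOTAL_BUDGET = 1_000_000  # dollars distributed in proportion to counted population

true_population = {"group_a": 50_000, "group_b": 50_000}
undercount_rate = {"group_a": 0.00, "group_b": 0.15}  # group_b is missed 15% of the time

counted = {g: int(n * (1 - undercount_rate[g])) for g, n in true_population.items()}
total_counted = sum(counted.values())

for group, n in counted.items():
    allocated = TOTAL_BUDGET * n / total_counted
    fair_share = TOTAL_BUDGET * true_population[group] / sum(true_population.values())
    print(f"{group}: counted {n:,}, allocated ${allocated:,.0f} (fair share ${fair_share:,.0f})")
```

The formula itself is perfectly neutral. The 15% undercount, a human decision about who gets counted, is what turns a neutral proportion into roughly a $40,000 shortfall for the undercounted group.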

And to this day, LGBTQ people aren’t (and won’t be) counted in the Census. If you don’t get counted, do you even exist?

People tend to think of math, statistics, data and technology as some sort of agnostic representation of reality.

Think again.

The US Census and the Department of Justice’s criminal statistics remain two of the largest examples of Weapons of Math Destruction, so bluntly described in Cathy O’Neil’s book of the same name.

Like a bitter divorcee writing an accounting of her ex-lover’s indiscretions and secrets, O’Neil, a Harvard-educated data scientist, airs the dirty laundry of how companies are using “Big Data” to create harmful mathematical models that reinforce some of the worst of humanity’s behaviors.

The book is filled with examples of algorithms that produce discriminatory, racist and highly prejudicial data models.

“A model’s blind spots reflect the judgments and priorities of its creators,” O’Neil writes. “Models are opinions embedded in mathematics… Whether or not a model works is also a matter of opinion… We must ask not only who designed the model but also what that person or company is trying to accomplish.”
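A toy example makes her point concrete. The two weight vectors below are assumptions invented for illustration; each one is an “opinion” about what matters, and each produces a different ranking of the very same applicants:

```python
# Two scoring "models" over identical applicant data. Neither is more
# mathematically correct than the other; the weights are the designer's opinion.
# Names, fields and weights are hypothetical.

applicants = [
    {"name": "Ana", "test_score": 90, "zip_income_index": 0.3, "years_experience": 2},
    {"name": "Ben", "test_score": 75, "zip_income_index": 0.9, "years_experience": 8},
]

def score(applicant, weights):
    return sum(w * applicant[field] for field, w in weights.items())

# Opinion 1: merit is the test score, full stop.
merit_only = {"test_score": 1.0}

# Opinion 2: quietly mix in a neighborhood-wealth proxy.
with_zip_proxy = {"test_score": 0.5, "zip_income_index": 50.0, "years_experience": 2.0}

for label, weights in [("merit only", merit_only), ("with ZIP proxy", with_zip_proxy)]:
    ranked = sorted(applicants, key=lambda a: score(a, weights), reverse=True)
    print(f"{label}: {[a['name'] for a in ranked]}")
```

Same data, same arithmetic, two different winners. That is what it means for a model’s weightings and blind spots to be opinions embedded in mathematics.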

Because if we refuse to acknowledge the human influence over what so many see as agnostic, neutral data, we risk falling into the trap of practicing moral dissonance and intellectual dishonesty.

We are telling ourselves that we are using innocuous tools to design for the greater good when our outcomes reinforce some of the worst qualities we’ve ever displayed as human beings.

From the predictive policing model the Chicago Police Department is using to compile a list of 400,000 individuals in the city who “might” be involved in violent crimes, to the numerous HR screening systems that use personality tests to red-flag applicants with mental disabilities, we are reinforcing the worst of humanity. Whether we’re doing it intentionally or not doesn’t really matter. The outcomes are the same.

Sure, Big Data is the punching bag because it’s new. But let’s be real here: AI isn’t the problem. We are. But the coolest thing is, we’re the solution too!

