I’ve been in design, on and off, for more than 20 years. Design has always been the background to my storytelling foreground: a way to willfully widen the narrative of the world to include people like me, those often on the periphery and in the margins.
For the last five years, I’ve been wholly immersed in AI design, striving to make it ethically sound and less harmful. After learning from and experimenting with some of the brightest designers, data scientists and researchers in the world, I’m now focused on sharing what I’ve learned.
My brain has been overloaded: reading papers from the 1950s on how to create AI, articles on AI systems wreaking havoc at some of the biggest tech companies, books released this year on how algorithms are making inequality in America even worse (a must-read by Virginia Eubanks), and arguing with colleagues about who is really at fault when algorithms go awry.
And what I have concluded, after studying everything I could get my hands on about AI, ethics, design and society, is this: it’s the people, stupid.
No matter how many people point to the newest technology as a threat to humanity, I still maintain that it is us humans who are a threat to each other.
Algorithm outputs are a reflection of the algorithm curator.
In fact, our intelligent systems can even take on our personality and exhibit behaviors that they learn from us. It literally took less than a day for Microsoft’s chatbot “Tay” to become a racist, disgusting douchebag after being exposed to Twitter users.
Change the algorithm, create new practices, shut down systems; at the end of the day our AI will only be as ethical and humane as we are.
It’s so apparent that many see us as incapable of finding a way to ethical AI that even the biggest tech companies of today are clamoring for government to step in. (Read “Is Ethical A.I. Even Possible?” in the NYT.)
But I maintain, and I am by nooo means an expert, that if we want more ethical AI then we’re going to have to be more mindful of our ethics as designers, developers, engineers, business executives and consumers.
Forget Ethics…How Do We Learn to Care About Humanity?
So I’ve used terms like “ethical AI” and “ethics” because I know the algorithm will like it, because people like you have trained it that “ethical” is good. But I actually abhor the word. I think talking about creating “ethical AI” is a cop-out and just another layer of subterfuge that keeps us from addressing the real problem in tech these days: the lack of humanity.
Think about your undergraduate or graduate school courses. How much time did you spend learning to be nice? How much time did you spend learning to be compassionate? Like actually learning, through a pedagogical structure, how to care about your fellow humans? I’m not talking about religion or morality, but shared objective principles of how we as humans should behave with other humans.
From the bro culture in tech, to the racist bullying in schools, to the last four years of dehumanization in politics, and even in houses of worship, the lack of humanity we as a nation exhibit is frightening. It’s no wonder that it’s ended up in our code!
Because I’m a black woman in tech I’m often asked, especially in the BLM aftermath, what will it take for us to have more diverse workplaces? What will it take to have less racist, homophobic, transphobic, sexist orgs? And my answer is always simple and the same: Don’t hire racist, homophobic, transphobic, sexist leaders or those who tolerate that behavior.
Last I heard, organizations follow the lead of the people who lead them. Get better leaders and your diversity problem will be solved.
And the same goes for the AI industry. If you want less harmful AI technology, you need people who understand what it means to mitigate harm to humanity to be the ones creating AI. Period. It’s really that simple.
And after reading about and watching the actions and aftermath of the firing of respected AI ethics researcher Timnit Gebru, and the subsequent ouster of Margaret Mitchell, the woman who hired her and the founder of Google’s Ethical AI team, I’m beginning to wonder if a tech industry that’s overwhelmingly privileged, white and male is going to be capable of creating humane AI without a lot of training and work. Don’t get your hackles up. It’s not “attack a white guy” day. I’m certainly not saying that current engineers, data scientists, researchers and designers working in AI aren’t humane; I’m just saying that they’re not incentivized to create humane technology.
Same with leaders and diversity. You don’t have diverse organizations because leaders aren’t incentivized to create them. So you have a lot of speeches and posturing but very little progress.
In my work I try to incentivize humanity-focused product development by integrating it into current processes.
I don’t use fake personas. I design quick and useful research studies with real people. I don’t do design thinking workshops with C-suite folks only; I invite plant managers and frontline workers into the room to tell their truth. I don’t assume all data is equal, and I always question the suggestion of AI as a solution. As a matter of routine. It’s a reflex. Like breathing.
But again, not everyone has had the same embedded humanity education, so here are some ideas to bat around the water cooler when you’re struggling to figure out how your company can be more humane when it comes to AI.
Do the Work on You First.
If you’re smart enough to understand machine learning you’re smart enough to research the racist, sexist, classist and xenophobic history that is embedded in code, technology and everything we do today. Start here — My How to Be an Ally Starter Kit.
And if you feel this is too much for you, if you think that “liberals” are taking over AI, if you think that it’s much ado about nothing, then maybe ethical AI isn’t in your wheelhouse. And that’s OK. Just get out of the way of people who truly care about making AI more humane.
Because in my mind you don’t get to speak without receipts. You can’t really be serious about creating ethical AI if you’re not serious about understanding the inequities of our world today. Because everything we do as a society ends up in code.
Look around the room. If everyone looks like you, you’re doing it wrong.
If you read my piece on empathy, then you know that science says we humans have a difficult time empathizing with other humans who “aren’t in our tribe.” That literally means we find it difficult to care about people who aren’t like us. And so when a person who grew up in a middle-class suburban family creates an algorithm model that will predict who will be a criminal in a poverty-ridden urban city, can we really say that this is a fair method? Like ever? Until we get serious about diversifying ship rooms, data science teams, tech boards and C-suites with a more universal representation of people affected by AI, we are going to keep repeating our mistakes.
Hire more liberal arts grads. Make them equal to engineers.
I once had a client tell me that all he had to do was hire a legion of developers and his company would have better customer experience. I laughed. Like, not in his face, but then I cried because sooo many people believe this myth. Seriously dude, enough with the engineers and the developers and the data scientists; they’re amazing, but they need balance.
AI isn’t the holy grail. It just isn’t. So much more could be accomplished if you just tried to understand your customers’ needs first rather than automating the crap out of your already crappy products.
Design researchers, anthropologists, culturalists, artists, futurists: it’s high time they were not just invited into the code room, and that the room ceased to be only about coding. I have screamed to the rooftops that humane AI is best created with transdisciplinary teams. How do I know? Because I was on one.
There are too many nuances to model and product making that require diverse disciplinary acumen to be in the SAME space, creating together.
I do not want to know any more design researchers who have never worked with data scientists. I do not want to hear of any more designers who don’t understand how algorithm models are created. Proximity is not enough.
You can’t just make a triad of design, engineering and whatever and think you’re going to meld minds and processes.
It’s not skills you need to mix. It’s mental models, ideas, the intangible virtues of culture, values and varying ethics (yes, ethics isn’t universal any more than morality is). So if they’re not working side by side, then what you’re doing won’t get into products. You can’t do drive-by ethics and think you’re creating ethical products. Checklists don’t do it. What’s needed is a true understanding of the incongruence of AI and humanity.
And yes, I say incongruence, because artificial intelligence was never designed to interact with humanity. Its single-minded pursuit of rationality makes this clear. Humans by nature are irrational, and the tech companies who are creating AI aren’t taking that into account. Which is why we’re having so many unintended consequences. These can be avoided, not wholly, but a lot better than we’re doing today.
And I’m done. This may seem a short list, but everything on it seems impossible. Are people willing to do the work needed to break down their own power structures and dismantle their own privilege?
Are organizations willing to concede that tech-led design has produced just as many questionable results as good ones?
Are we really willing to dismantle our own power in favor of a more equitable future for us all?
I am a perennial optimist. We’ll see if I get disappointed.