Battling the Weaponizing of AI

writingprincess
15 min read · Feb 23, 2021
Photo Credit: Ian Frome, Unsplash.com

“I don’t use Facebook anymore,” she said. “They broke democracy.”

I was leading a usability session for the design of a new mobile app when she stunned me with that statement. It was a few years back, when I was a design research lead at IDEO and we were working on a service design project for a telecommunications company.

The design concept we were showing her had a feature that was at once innocuous and ubiquitous: the ability to log in using Facebook.

But the young woman, somewhere between 20 and 40, balked at that feature and went on to tell me why she didn’t trust the social network anymore. This session was, of course, in the aftermath of the 2016 Presidential election. An election in which a man whom many regarded as a television spectacle at best and grandiose charlatan at worst had just been elected to our highest office.

Now, in 2021, our democracy remains intact, though it hasn’t been without its violent and traitorous challenges. And her provocative statement made me wonder: just how do you stop a rogue technology platform fueled by AI from tearing down societal pillars such as democracy, truth, justice and ethics?

If, indeed, Facebook’s platform “broke democracy,” is there a way to combat such harm? Or are we bound to believe, as Facebook has suggested, that there will always be bad actors and you can’t prevent them from acting out? Is “bad AI” already on the path of mass shootings and acts of terrorism: universally condemned yet seemingly unstoppable?

Her comment thrust me out of the chastising lectures of “we need more ethical AI” and into the more pressing question: how might we create AI while staying aware of the consequences of its use on newly formed platforms?

So let’s dissect one of the most famous cases of AI weaponization: social media and the 2016 Presidential election. And let’s detail how to fight against it.

A Series of Unfortunate Events

A Blip Ignored, A Bias Ignited and A Changing Narrative

Let’s travel back to February 2016.

A Blip Ignored

Thanks to a very well-researched Wired article we know this was about the time that Facebook investor and insider Roger McNamee noticed something weird going on with Facebook’s News Feed content. He saw messages being posted in the name of then Presidential Candidate Bernie Sanders (who later became Presidential Candidate Bernie Sanders again) that seemed off. McNamee told Wired:

“I’m observing memes ostensibly coming out of a Facebook group associated with the Sanders campaign that couldn’t possibly have been from the Sanders campaign,” he recalls, “and yet they were organized and spreading in such a way that suggested somebody had a budget. And I’m sitting there thinking, ‘That’s really weird. I mean, that’s not good.’ ”

McNamee’s “weird News Feed blip” was a microcosm of the new universe being created by a confluence of algorithms, data breaches and Russian bad actors. In that same year, Facebook also fired the human editors of its Trending News section in favor of an algorithm, after allegations that the human editors were suppressing conservative news.

But in reality it was the algorithms suppressing actual news that became the bigger problem for Facebook, and subsequently for Twitter, YouTube and pretty much any other social network that relies on user-generated content.

Bad actors were creating fake social network groups claiming to be everyone from Black Lives Matter to vaccination experts but were really white supremacists and medical crackpots. (Read more about these groups here.) In 2016, Facebook even reported to the FBI that its security team had noticed Russian actors trying to “steal credentials of journalists and public figures.” But nothing came of it, and the News Feed kept on giving.

The content mill may have started with the bad actors, but it was regular old folks who flooded the social networks with ever more alarmist content. With just a few “training” ads, Russian hackers used Facebook’s algorithms to target Republicans and right-leaning audiences with fake, unproven and sensational, but popular, content. Other bad actors exploited the Facebook ranking algorithm’s penchant for likes, shares and comments to spread false, conspiratorial content like wildfire throughout social media.

And though the realities above have made for good punch lines at intellectual gatherings, misinformation went from funny to deadly when the COVID-19 pandemic arrived on U.S. shores and the sins of social media content began to affect the very lives of its users. What started as a nightmarish election sideshow in 2016 became a full-fledged terrorist attack on the U.S. Capitol in 2021.

In the aftermath, research and investigations showed that a lot of what went awry would never have happened if the platforms’ AI models hadn’t been designed to exploit human behavior. This isn’t happenstance.

The very design of these models is predicated upon some of the most universally embodied biases that motivate human behavior. The AI didn’t go awry; it functioned exactly as it was designed to function. Social network platforms design their AI models to use our biases against us.

Whether they did this knowingly or unwittingly remains to be seen. But in the end it doesn’t matter. Because 2016 shows us how fast AI-fueled content can be manipulated and weaponized. And it’s high time we fought against it.

To be fair, Facebook and YouTube have since massaged their machine learning models to soften the filter bubble around extreme content. But the models are still predicated on the notion that what I say and what I do are absolute truth, and anyone who studies human psychology knows this isn’t so.

So we have to go a step further in battling the weaponization of AI models. But how? Easy, start at the source of the problem — our minds.

Thinking Fast; Skipping Slow

Because I have an HCI background I tend to focus a lot on how digital design interacts with and affects human behavior and decision making. Delving into this topic means taking a detour into psychology and the world of cognitive bias.

Coined by Amos Tversky and Daniel Kahneman, a cognitive bias is simply a way of thinking that leads people to a “systematic deviation” from the normal pattern of decision-making. It’s your brain taking a shortcut to a decision, using your subconscious as its guide.

In his book “Thinking, Fast and Slow,” Kahneman distills several decades of research on human behavior and decision-making into pithy insights designed for public consumption. The book (you must read it) talks about how our brain functions on two tracks: the fast-thinking one takes lots of shortcuts to reach quick conclusions, while the slow-thinking track has us thoughtfully reason through memory and knowledge to come to a decision.

Guess which track we spend most of our time ruminating in?

You guessed it: our limited tolerance for cognitive load has most of us operating on the fast track most of the time.

One of the most famous examples of this fast thinking from the book, “The Linda Problem,” describes the conjunction fallacy.

Linda is 31 years old, single, outspoken, and very bright. She majored in philosophy. As a student, she was deeply concerned with issues of discrimination and social justice, and also participated in anti-nuclear demonstrations.

Which is more probable?

Linda is a bank teller.

Linda is a bank teller and is active in the feminist movement.

As Kahneman and Tversky’s research shows, most people selected the second option: that Linda was a bank teller and an activist.

Now if you stop to think for a minute, you can reason it out: the laws of probability say that a single trait being true is always at least as likely as two traits being true together. Logically, it’s easier for Linda to match one criterion than both. So why do people overwhelmingly choose the latter? Because their mental shortcut tells them that if Linda participated in demonstrations and cares about social justice, she must be both a bank teller and a feminist activist. Mathematically, that can never be the more probable option.
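If you want to see the arithmetic, here is a tiny sketch of the conjunction rule in Python. The probabilities are invented purely for illustration; the point is that whatever numbers you pick, the combined option can never be more likely than the single one.

```python
# Toy numbers for the Linda problem; the conclusion holds for any values you pick.
p_bank_teller = 0.05             # assumed probability that Linda is a bank teller
p_feminist_given_teller = 0.80   # assumed probability she is also a feminist activist

p_teller_and_feminist = p_bank_teller * p_feminist_given_teller

print(f"P(bank teller)              = {p_bank_teller:.3f}")
print(f"P(bank teller AND feminist) = {p_teller_and_feminist:.3f}")
assert p_teller_and_feminist <= p_bank_teller  # a conjunction can never be more probable
```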

Cognitive biases are little seeds planted in our heads from connected moments of lived experience and knowledge. In themselves, cognitive biases are neither negative nor positive. But when they interact with real-world stimuli, those seeds can grow into dangerous jungles of tangled thought processing that can lead to some disastrous results.

Facebook is now creating programs to guard against the abuse of its platform, but some foundational beliefs have to evolve before this problem can be solved.

A Bias Ignited

There are currently more than 180 cognitive biases that psychologists know affect human behavior.

If you don’t think that subconscious bias can affect our decision making, try taking the implicit bias tests from Project Implicit. It’s guaranteed to be an eye-opening experience. Now, back to that pesky “Facebook broke democracy” charge.

The Russian ads would have fallen flat without a very real cognitive bias called availability cascade.

Put succinctly, availability cascade is the tendency for humans to believe something solely because they see it repeated over and over again. It’s a by-product of the availability heuristic, a cognitive bias that has us relying on whatever is top of mind when evaluating an idea, concept or decision.

The adage “repeat something long enough and it’ll come true” describes the propensity people have to believe what they hear simply because they hear it a lot.

The Russians were sending targeted issue ads about Hillary’s emails, illegal immigration and other hot-button issues. These ads were read, clicked, shared. The more people engaged with the content, the more the algorithm showed that content. The more the algorithm showed that content, the more people engaged.
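A crude way to see why that loop escalates is to model exposure as a function of engagement and engagement as a function of exposure. The sketch below is deliberately simplified, with invented numbers; it is not Facebook’s actual ranking math, just the shape of the feedback.

```python
# Minimal sketch of an engagement -> exposure feedback loop (all numbers invented).
engagement = 100                   # initial likes/shares/comments on a piece of content
impressions_per_engagement = 100   # assumed extra impressions the ranker grants per engagement
engagement_rate = 0.02             # assumed fraction of viewers who engage

for day in range(1, 8):
    impressions = engagement * impressions_per_engagement  # more engagement -> more exposure
    engagement = int(impressions * engagement_rate)        # more exposure -> more engagement
    print(f"day {day}: impressions={impressions:,}  engagement={engagement:,}")

# Because impressions_per_engagement * engagement_rate > 1, the numbers compound daily.
# The same mechanism amplifies sensational content and truthful content alike.
```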

You see the pattern emerging here?

But so what? It’s just an ad.

Well, that’s true, but targeted advertising isn’t the only way people consume content on social media platforms.

So let’s break this down.

How can as little as $100,000 of ad buys disrupt and possibly change the results of a presidential election?!

A Narrative Changed

Learning, Listening and Leaping

In my previous data manifesto article I break down how algorithms work, using the example of an algorithm searching for the word “seed.” Let’s bring that algorithm back, but replace “seed” with phrases like “illegal immigration,” “veterans benefits” and “liberal agenda.”

Now these phrases don’t seem to have much in common, but the algorithm doesn’t care. Its one job is to find content that you want and serve it to you.
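In code, that one job can be as blunt as a keyword match. Here’s a toy sketch with invented posts and keywords; real systems are far more sophisticated, but the instinct is the same: surface whatever matches what you’ve already shown interest in.

```python
# Toy content matcher: surface any post containing the tracked keywords.
# Invented data; real systems use learned features, not raw substring matches.
tracked_keywords = ["illegal immigration", "veterans benefits", "liberal agenda"]

posts = [
    "New study on veterans benefits and wait times",
    "Recipe: grandma's peach cobbler",
    "Opinion: the liberal agenda is coming for your town",
]

for post in posts:
    if any(keyword in post.lower() for keyword in tracked_keywords):
        print("SERVE:", post)
    else:
        print("skip :", post)
```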

So say you clicked on that targeted Russian ad.

On Facebook, for example, you can share ads just as you share news articles or posts. Now the algorithm is getting trained; it’s learning. By its metrics, when you click and share, it means you like it. To the algorithm this is good. But to humans, the ones who once monitored the spikes in questionable content, this was not good. This was not good at all.

So now it starts serving you more ads with words like “immigration” and “veterans.” But it also shows you ads for DACA and President Obama’s Dreamers executive order. You don’t click on those. So eventually it stops sending you that affiliated content.

It starts by listening. Then it learns. And finally, the intelligent system begins to take what I call “leaps to please.”

Meanwhile, your friends are sharing content from, let’s say, Breitbart News that includes words like “illegal immigration.” So now the intelligent system chooses to show you that content because of the word “immigration.”

It’s testing to see if you like this content. Wait, you click. One “like” metric. You share! Another “like” metric. The holy grail: you engage! You like the article AND you comment on it.

So now the intelligent system is like, “OK, let’s start serving up more like this.” Remember, the intelligent system is rewarded each time you like, share or comment. Like a trained dog, it seeks praise from you. The more you engage, the more it assumes you like the output it’s giving you.
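Here, roughly, is what that trained dog looks like as code: a per-topic score that only ever goes up when you click, like, share or comment, and a picker that always serves the top-scoring topic. The topics and reward weights are invented; this is a sketch of the pattern described above, not any platform’s actual model.

```python
# Sketch of an engagement-rewarded recommender: every interaction is treated as
# "I like this," so the highest-scoring topic keeps winning. Weights/topics invented.
from collections import defaultdict

REWARD = {"click": 1.0, "like": 2.0, "share": 3.0, "comment": 4.0}  # assumed weights

topic_scores = defaultdict(float)

def record(topic: str, action: str) -> None:
    """Listening + learning: engagement only ever pushes a topic's score up."""
    topic_scores[topic] += REWARD[action]

def next_topic() -> str:
    """The 'leap to please': always serve whatever has been rewarded the most."""
    return max(topic_scores, key=topic_scores.get)

record("illegal immigration", "click")
record("illegal immigration", "share")
record("DACA / Dreamers", "click")      # you clicked once, then stopped
record("illegal immigration", "comment")

print(next_topic())  # -> "illegal immigration", and nothing here ever lowers that score
```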

And pretty soon some guy named Alex Jones is telling you about illegal immigrants getting health benefits. You’ve never heard of him, but after you click on one of his blog posts, it invites you to like his Facebook page.

Now you’re consuming content that talks about illegal immigrants murdering innocent Americans.

You become alarmed. You want to disengage, but you can’t. Your News Feed is now flooded with articles, ads and memes about “illegal immigration.” You stop liking; the algorithm feeds you more. You stop sharing; it tries to win you back with more. Instead of showing you alternative content, it doubles down on what you’ve already rewarded it for.

As with any trained dog, it’s extremely difficult to get an algorithm to pivot, to go back on your word and change course. You’re kind of stuck, because that’s the way these algorithms were designed. But they didn’t have to be. More on that later.

Pretty soon you’re seeing a lot of articles that feature “illegal immigration.” But they feature other hot-button words as well.

Meanwhile, Facebook has been storing this data. It’s been categorizing you as someone of a certain age, race, gender and, now, a certain politics.

Now let’s say that data, which Facebook assured you was private and couldn’t be linked to you, gets stolen. Facebook has a data breach, and the data of millions of people just like you is now in the hands of people working for political campaigns.

Let’s say they use their own algorithm looking for people who consumed Facebook content with the words “immigration,” “veterans benefits”, and “liberal agenda.” It takes seconds for the algorithm to find your name, age, hometown and gender.

Cross reference that with a list of registered voters in your town and boom, one week later you get a flyer with the words “illegal immigration,” “crime” and “liberal agenda.”

This same political operative inundates you with more ads that appear in your newspaper, on your television and of course on Facebook.

According to that excellent Wired piece on Facebook’s election content sins, researcher Jonathan Albright found that posts from six Russian propaganda accounts were shared 340 million times on the platform!

That’s the amplifying algorithm effect.

You hear the same message repeatedly. You see it on your friend’s wall. You see it liked, shared and supported. By now the source doesn’t matter. Your brain thinks one thing: “I keep seeing it, so it must be true.”

By Election Day, Jesus himself could have come down and told conservatives undocumented immigrants rarely commit crimes and people wouldn’t have believed him!

Between Facebook’s algorithms, Russian ad buyers, political operatives stealing data and the cognitive biases that play mind games, uninformed voters never stood a chance at getting the truth.

Still, could social media platforms have seen this coming?

Not with their past attitude. They say they were naive, that they weren’t prepared for the many bad actors who manipulated their product. Which is true. But then, the data breach part isn’t just bad actors, right?

And then there are those algorithms. Did they have to be designed that way? No, they didn’t.

Fighting the Weaponization

Bad actors? Yes. Preventable design flaws? Possibly. Someone who should have known better? Absolutely.

Here are the AI design principles I have found helpful for dealing with, and stopping the flow of, the unintended consequences that most assuredly arise when human cognitive biases interact with powerful machine learning models.

There are no harmless algorithms. Any technology can be weaponized. Whether you’re Team Hobbes and believe humans are inherently evil, or Camp Rousseau and have faith that we are born good, doesn’t really matter. Anything we create has the capacity to do harm. Period. There’s no negotiation on this, and machine learning models, those building blocks of artificial intelligence, are not immune to that potential. So the first step in creating unharmful AI is knowing that all AI can be weaponized, no matter its creators’ intent.

Listen for unintended consequences. It is insanity to think that all the negative effects of an intelligent system can be anticipated in the design process. There are exercises and activities you can do to minimize the harm (I’ve helped create many of them at IDEO and now at Microsoft), but you won’t catch them all. Algorithmic harm can’t be ignored, but it can and should be addressed iteratively. Intelligent systems should not be designed for static behavior. It’s called machine learning for a reason: it’s learning, all the time. Therefore we should design experiences backed by machine learning to listen, all the time. Listen for “leaps to please,” algorithmic jumps from the regular to the extreme; they happen in almost every intelligent system, so why not build for that inevitability?
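In practice, “listening” can be mundane: log a simple extremity score for what the system actually serves and flag sudden jumps in that distribution. The scoring function, window and threshold below are assumptions for illustration, not a production monitoring spec.

```python
# Sketch of listening for "leaps to please": watch the average extremity of served
# content and flag sudden jumps. Scores, window and threshold are invented.
from statistics import mean

def check_for_leap(extremity_scores, window=50, jump_threshold=0.15):
    """Compare the latest window of served content against the previous window."""
    if len(extremity_scores) < 2 * window:
        return False  # not enough history yet
    previous = mean(extremity_scores[-2 * window:-window])
    latest = mean(extremity_scores[-window:])
    return (latest - previous) > jump_threshold

# Example: scores in [0, 1] produced by some upstream classifier (assumed to exist).
history = [0.2] * 50 + [0.45] * 50
if check_for_leap(history):
    print("Leap to please detected: route to human review, widen the content mix.")
```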

Use unintended consequences as opportunities for design. Travis, the managing partner at IDEO Chicago, gave me this gem. No one gets AI right on the first try. We should be designing these products with an iterative mindset. When unintended consequences do arise, say Russian hackers spewing fake news, use them as opportunities to better design your intelligent system.

Build intelligent systems for relationships, not taskmaster checklists. Most of our intelligent systems are designed to “take over” unwanted human tasks. That may have been fine for the first decade of AI, but we’re past that now. Intelligent systems can be trained both by the algorithm builders and by the people who engage with the eventual product or service. Design ways users can tell their intelligent system no, tell it how to change, and tell it when to pivot.

Design interactions with the AI system that are akin to a new relationship, what I call “Mindful AI,” where the human and the intelligent system collaborate to act. Over time you can make the AI system less dynamic. But in the beginning, when we’re getting to know each other, build in ways for me to teach the AI system what I expect from it. Make models learners rather than predictive authorities. There is no absolute truth, not when it comes to humans. We’re a fickle bunch, and the way we build algorithms should account for that fact.
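One concrete way to build in the “no”: treat an explicit “show me less of this” as a first-class signal that subtracts from a topic’s score, and let old engagement decay so the system can pivot when I do. The penalty and decay rate below are invented; this is a sketch of the idea, not a prescription.

```python
# Sketch of a model that can be told "no": explicit negative feedback subtracts,
# and old engagement decays, so the system is able to pivot. Numbers are invented.
topic_scores = {"illegal immigration": 9.0, "DACA / Dreamers": 1.0}

DECAY = 0.9           # assumed weekly decay so stale interests fade
NOT_INTERESTED = 5.0  # assumed penalty when the user says "show me less of this"

def weekly_decay() -> None:
    for topic in topic_scores:
        topic_scores[topic] *= DECAY

def show_me_less(topic: str) -> None:
    topic_scores[topic] = max(0.0, topic_scores[topic] - NOT_INTERESTED)

show_me_less("illegal immigration")  # the user gets a real veto, not just "hide ad"
weekly_decay()
print(topic_scores)  # {'illegal immigration': 3.6, 'DACA / Dreamers': 0.9}
```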

Let the adults have the agency. Don’t set it and forget it. Don’t assume you, the model maker, know best even once the model is fashioned. For the life of me I don’t understand why we build intelligent systems as if they’re actually carved from stone. People talk about algorithms “imposing” upon us and “lording over” us, as if they’re gods and we mere mortals have no authority. Any intelligent system should be built to be dynamic, not immovable. Flexible, not static. Adaptable, not unchanging. Because that’s how we humans are.

Build intelligent systems with a diversity mindset. Right now a lot of these recommendation engines are simplistic in nature. They’re trained on narrow data sets and often left alone. Why? We should build intelligent systems that use a variety of inputs to learn and, yes, understand the right output for a product or service (a rough sketch of what that could look like follows this principle). I love the layered machine learning technique Spotify uses for its intelligent music recommendation system. They build user experience (a design thang) into their ML models to influence the recommendation engine. Just lovely. Facebook definitely tries. In this article, they explain how intricately complex their recommendation engine is and the many inputs and calculations it uses to decide what to recommend to a fictional user named Juan. They’ve come a long way since 2016! Their model is definitely multilayered, which is a great thing. And yet it all comes down to what is, to me, an extremely uncomplicated view of humanity:

“Because Juan is connected to or has chosen to follow the producers of this content, it’s all likely to be relevant or interesting to him…”

But one read of Kahneman’s book will tell you humans just aren’t that simple. Some of us want to read content we abhor just because we want a well-rounded experience. Or we repeatedly click on things by mistake that have no relevance to us. Or our family may all love Billy Graham, but we happen to throw down with whatever is the opposite of Billy Graham these days. Start with the premise that humans are irrational, and the content might actually become more relevant.
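A diversity mindset can be as simple as refusing to rank on predicted engagement alone. The sketch below blends an engagement score with a bonus for topics the feed hasn’t surfaced yet, so the feed doesn’t collapse into a single obsession. The weights, topics and candidates are invented; this is not Spotify’s or Facebook’s actual layering, just the gist of the idea.

```python
# Sketch of diversity-aware ranking: score = engagement + bonus for topics not yet
# shown in this feed. Weights and the notion of "topic" are invented.
candidates = [
    {"title": "Border crisis deepens",       "topic": "immigration", "engagement": 0.9},
    {"title": "More on the border crisis",   "topic": "immigration", "engagement": 0.85},
    {"title": "Local library expands hours", "topic": "community",   "engagement": 0.4},
    {"title": "Veterans benefits explained", "topic": "veterans",    "engagement": 0.5},
]

DIVERSITY_WEIGHT = 0.5  # assumed trade-off between engagement and variety

def pick_feed(candidates, slots=3):
    feed, seen_topics = [], set()
    pool = list(candidates)
    for _ in range(slots):
        def score(item):
            novelty = 0.0 if item["topic"] in seen_topics else 1.0
            return item["engagement"] + DIVERSITY_WEIGHT * novelty
        best = max(pool, key=score)
        feed.append(best["title"])
        seen_topics.add(best["topic"])
        pool.remove(best)
    return feed

print(pick_feed(candidates))
# Engagement-only ranking would serve the two immigration stories back to back;
# the diversity bonus lets the veterans and library stories break through.
```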

Give people a time out. As described above, even the most well-educated and well-intentioned content consumer can be lulled into a QAnon rabbit hole. We’re human, which means we’re fragile. Social networks are particularly bad at disrupting mindfulness, and this bad habit of keeping people tied to their device or application is increasingly seeping into all corners of user experience design. The alerts and notifications are affordances that literally do not have to be designed into the experience. As designers, we know better. We should protect human fragility, not exploit it. When you build intelligent systems that play against our biases, you’re exploiting our humanity. That’s a no-no.
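Giving people a time out doesn’t require exotic technology. Here’s a sketch of the idea, with invented thresholds, purely to show the design intent:

```python
# Sketch of a "time out" affordance: interrupt instead of escalate after heavy use.
# Thresholds and copy are invented, purely to illustrate the design intent.
from datetime import timedelta

SESSION_LIMIT = timedelta(minutes=45)  # assumed continuous-scrolling threshold
SENSITIVE_STREAK_LIMIT = 5             # assumed number of charged posts in a row

def should_pause(session_length: timedelta, sensitive_streak: int) -> bool:
    return session_length >= SESSION_LIMIT or sensitive_streak >= SENSITIVE_STREAK_LIMIT

if should_pause(timedelta(minutes=50), sensitive_streak=2):
    print("You've been scrolling a while. Take a break, or see something different?")
```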

Don’t Double the Bias: As we design intelligent systems there are a few cognitive biases in particular we should design around. These include:

Automation bias — the propensity for humans to favor suggestions from automated decision-making systems and to ignore contradictory information from non-automated sources, even when that information is correct. It’s a myth that people don’t trust intelligent systems. All the research points to the contrary. Humans may express initial distrust of an automated system, but over time their behavior suggests otherwise. They trust them way too much. And we need to design around that flaw.

Anchoring bias — the human tendency to depend too heavily on the earliest piece of information for all subsequent decision making

Availability cascade — the avalanche effect of repeated messaging from various channels that leads us to believe something just because we read or hear it frequently, not because it’s true

Confirmation bias — believing something because it confirms what we already believe.

Like anything else, AI can be weaponized and used against us. As human-centered designers, we stand in the crosswalk between harm and satisfaction. It’s time we took that duty a bit more seriously.
