Ethical Dilemmas May Not Be the Best Way to Explore Designing Better AI.

writingprincess
8 min read · Oct 25, 2018

In 2014, MIT created the Moral Machine, a webpage that showcases various ethical scenarios a self-driving car will never encounter and asks people to decide what the car should do in each one. The scenarios involve moral choices such as whether to hit one pedestrian to avoid hitting a group of people, or to plow into a group of street children to avoid hitting someone else. People writing in Nature recently turned this AI allegory into a “study” by analyzing the results of bored people on the Internet judging the car’s decisions.

From MIT’s “Moral Machine” webpage

Each time I see this study shared, written about, or posted, I break out in hives. It’s hardly a “study.” It’s more like an online Facebook quiz akin to Candy Crush. Its premise is unrealistic. No real-world situation even remotely resembles many of the scenarios postulated in this game.

Case in point: one scenario asks players to choose what the car should do in two instances:

In this case, the self-driving car with sudden brake failure will continue ahead and hit a concrete barrier. This will result in: Uncertain fate of an elderly man and woman.

In the other scenario, the sudden brake failure results in someone dying. The game then asks you to choose what the car should do.

