writingprincess
1 min read · Jun 4, 2019
I’m not saying we should take AI decision making out of the equation; it’s AI, and of course we have to program intelligent systems to make split-second decisions. What I’m advocating is a more cooperative relationship between the intelligent system and the human operator. Right now there’s a philosophy at Waymo, for instance, that a self-driving car should be totally controlled by the machine, with no human intervention. All I’m saying is that, knowing human nature, that may not be the best approach. Research has shown that qualities such as humility and uncertainty, when programmed into a robot or machine, can strengthen the human-machine relationship: one where neither the human nor the machine has absolute power, mostly because each will have the knowledge advantage at some point. And a cooperative environment is better than a dictatorial one.

The problem with dilemmas like the Trolley Problem in the article I mentioned, at least in the AI arena, is that they presume human non-intervention, and that presumption has already gotten us into trouble with Tesla and Boeing. Both have had accidents where human agency was overridden by a machine, the latter leading to the tragic deaths of hundreds of people.

My point is really that designing AI systems with the presumption that human intervention is not desired is problematic. There are things machines do better than humans; let’s stick to designing AI for those activities and leave the rest to human agency.
