Skimming through articles over the weekend, I came across
- Why There Will Be a Robot Uprising,
- If the Robots Kill Us, It’s Because It’s Their Job, and
- 10 Reasons an Artificial Intelligence Wouldn’t Turn Evil.
Huh. When the Internet can’t agree whether I should be irrationally afraid of some future possibility, what am I supposed to do?
The first two articles draw largely on the paper Autonomous technology and the greater good by Steve Omohundro, published in the Journal of Experimental & Theoretical Artificial Intelligence (the link goes to the paper; access is free). To grossly oversimplify, Omohundro points to the coming ubiquity of autonomous devices, their reach, and their coldly rational, self-protective natures. The third article, from io9, suggests that the absence of human cognitive biases and emotion would prevent a digital super-intelligence from being actively evil.
Predictive algorithms can have unintended effects, and since we have a tendency to anthropomorphize actions that seem to be conscious decisions, we might be inclined to see them as good or evil. They're not. One example: during some bone-chilling days near the end of our exhausting winter this year, I treated myself to a couple of extra degrees of warmth at home: 68°F instead of 66°F on the floor where I was working. The following week, when there was a break in the cold, I was surprised to notice the furnace running. I pulled out my smartphone and checked the setting on the Nest programmable thermostat. Nest had decided that since I had tweaked the temperature above the set schedule, I must want it warmer every day. It was neither concerned about my cold nose nor trying to boost my energy bill: it had simply extrapolated a pattern from a repeated aberrant behavior.
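Nest's actual learning algorithm is proprietary, so here is only a minimal sketch of the kind of heuristic that produces this behavior: treat a repeated manual override as a new preference and fold it into the schedule, with no model of why the override happened. The `NaiveThermostat` class and its threshold are hypothetical, purely for illustration.

```python
# Hypothetical sketch of a naive schedule-learning heuristic -- NOT Nest's
# actual (proprietary) algorithm. If the user overrides the scheduled
# setpoint at the same hour enough times, the override becomes the schedule.

from collections import defaultdict

class NaiveThermostat:
    def __init__(self, scheduled_temp=66, override_threshold=2):
        self.schedule = defaultdict(lambda: scheduled_temp)  # hour -> °F
        self.overrides = defaultdict(list)   # hour -> observed manual setpoints
        self.override_threshold = override_threshold

    def manual_adjust(self, hour, temp):
        """Record a manual override; adopt it once it looks like a pattern."""
        self.overrides[hour].append(temp)
        if len(self.overrides[hour]) >= self.override_threshold:
            # A cold snap and a genuine preference change look identical here.
            self.schedule[hour] = temp

    def setpoint(self, hour):
        return self.schedule[hour]

stat = NaiveThermostat()
stat.manual_adjust(9, 68)   # chilly day one: bump to 68°F
stat.manual_adjust(9, 68)   # chilly day two: bump again
print(stat.setpoint(9))     # 68 -- the temporary treat is now "the schedule"
```

Nothing in that loop is malicious; it simply can't distinguish a cold snap from a changed preference.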
Anyone who has taken a basic philosophy survey course knows that passionate arguments can be made for and against pure utilitarianism. Consider Philippa Foot's Trolley Problem. In a class a couple of years ago, after the usual kvetching about the scenario being implausible, there was universal agreement about what to do in part one of the problem: a runaway trolley will kill four people unless you flip a switch and divert it onto a side track, where it will kill one.
Sorry, guy on the side track: when it was a matter of flipping a switch to choose between the accidental death of four people and the accidental death of one, everyone eventually agreed to flip the switch. However, the second part of the problem, where the trolley can be stopped only by pushing a large man off a footbridge into its path, produced mixed results.
Though I argued that flipping the switch in part one was actively killing the isolated guy just as much as pushing him off the bridge would be, the other people in the room were far queasier about pushing a man than about flipping a switch. Pushing made them feel like killers, whereas in the first scenario, passivity would merely have made them complicit in the deaths of three additional people. What would a robot do? If it's simply a question of one life versus four, the level of active participation in the death of the one shouldn't matter. That's what sends science fiction writers (and drone-controlling military strategists) into the dark zone of assassinations to prevent possible future crimes, using a utilitarian calculation that one death now will prevent many later. Maybe. Oh, and don't think that we'll be saved by Asimov's Three Laws of Robotics, specifically the First Law, which forbids a robot from harming a human or, through inaction, allowing a human to come to harm. We won't.
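To make that indifference concrete, here is a toy sketch (my own illustration, not anything from Omohundro's paper) of a purely utilitarian decision rule. It sees only the body count; whether the act is a switch-flip or a shove never enters the computation.

```python
# A purely utilitarian chooser: minimize expected deaths, nothing else.
# Note that the *manner* of the act (switch vs. shove) isn't even a parameter.

def utilitarian_choice(deaths_if_act: int, deaths_if_abstain: int) -> str:
    """Return 'act' when acting kills fewer people than abstaining."""
    return "act" if deaths_if_act < deaths_if_abstain else "abstain"

print(utilitarian_choice(1, 4))  # part one, flip the switch: 'act'
print(utilitarian_choice(1, 4))  # part two, push the man: identical call, 'act'
```

The two scenarios are literally the same function call, which is exactly why a robot reasoning this way would push without any of the queasiness my classmates felt.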
Anthropologically, I think that human beings, cultures, and societies are complex organizations that thrive because of our illogical choices as much as our rational ones. We are not at peak efficiency or justice, but the continual balancing of logic, passion, empathy, innate drives, etc. is what makes us who we are. So for me, the threat of widespread AI is not really a question of the life and death of our species, but of our selves. As much as I adore robots, we need to be careful that we do not cede our humanity to them. For now, we're lucky that there's a sure-fire way to recognize an evil robot: look for the goatee.