Peace Games
Computers make better decisions than humans, sometimes…
In the 1983 Cold War movie WarGames, American nuclear commanders discover that many of the launch officers in missile silos are unwilling to actually launch their nuclear-tipped apocalypse engines, even when they believe they have been given a valid launch order. To remedy this “problem”, direct control over nuclear launches is turned over to a computer system called the WOPR, taking humans “out of the loop”.
The same year WarGames was released, Soviet Air Defense Colonel Stanislav Petrov was placed in much the same position as the fictional Captain Jerry Lawson from the film’s opening scene. Petrov was the night-shift watch officer for the Soviet satellite-based Oko early-warning system. In the wee small hours of September 26th, his computer advised him that it had detected the launch of American ICBMs. Over the course of a scant fifteen minutes, he concluded, fingers crossed, that it was a false alarm and declined to inform his superiors of the suspected launch. The Oko false alarm was later attributed to sunlight reflecting off clouds. We can only imagine what the world would look like today if the WOPR had been watching the Oko sensors.
Petrov is not the only person to have been placed in this position. At least a half-dozen cold warriors failed to order global thermonuclear war despite seemingly valid orders or sensor readings. Stanislav Petrov died earlier this year, in May of 2017, at the age of 77. With him passes an era in which a person was always in a position to second-guess and overrule the decisions of machines.
Every day, we outsource more and more analysis and decision-making to automated systems. This promises incredible benefits: computers can analyze vast amounts of information and take on tasks far too tedious for any person to perform. Fortunately, most of the tasks we currently assign to AI and expert systems don’t have the immediate annihilation potential of nuclear war. But the tools we are developing and deploying still have tremendous social impact, and their oversights and blind spots may be much harder to see.
Replacing human drivers with autonomous vehicles will save untold lives. Humans are simply terrible at driving, and even the mediocre driving computers we have now are a manifold improvement over our pathetic meat brains. Though driving computers will certainly make everyone safer, it’s important to ensure that they make everyone safer equally. It’s not inconceivable that a driving computer could be slightly worse at identifying people with particular skin tones — it’s a mistake that’s been made before. If the designers of driving computers mostly look and think alike, they might not be able to spot this type of oversight. And if driving computers are mostly tested in one area, with mostly homogeneous people and surroundings, subtle problems might not be detected until much later — making vehicles substantially safer when used by and around the people who designed them, and much less safe in other communities and surroundings.
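One way to catch that kind of oversight is to measure safety outcomes disaggregated by group rather than only in aggregate. Below is a minimal sketch of the idea in Python, assuming a hypothetical log of pedestrian-detection attempts labeled by group; the groups, the data, and the 1.5× disparity threshold are all invented for illustration, not drawn from any real system.

```python
from collections import defaultdict

# Hypothetical detection log: (group, detected) pairs. In a real
# evaluation these would come from labeled test data; every value
# here is made up for illustration.
detections = [
    ("group_a", True), ("group_a", True), ("group_a", True), ("group_a", True),
    ("group_b", True), ("group_b", False), ("group_b", False), ("group_b", True),
]

def miss_rates(records):
    """Return the per-group and overall false-negative (miss) rates."""
    totals, misses = defaultdict(int), defaultdict(int)
    for group, detected in records:
        totals[group] += 1
        if not detected:
            misses[group] += 1
    overall = sum(misses.values()) / sum(totals.values())
    return {g: misses[g] / totals[g] for g in totals}, overall

per_group, overall = miss_rates(detections)
for group, rate in sorted(per_group.items()):
    # Flag any group whose miss rate is far worse than the overall rate.
    flag = "  <-- disparity" if rate > 1.5 * overall else ""
    print(f"{group}: miss rate {rate:.0%} (overall {overall:.0%}){flag}")
```

The aggregate number alone (a 25% miss rate here) would look uniform; only the per-group view exposes that every missed detection falls on one group.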
Even more scrutiny is demanded when opaque expert systems advise courts and other institutions with immense power. These systems may base their decisions on data which merely encodes existing biases. When given decision-making authority, these systems can allow current inequities to metastasize, making them much harder to detect and combat. Courts already use risk-assessment tools which produce wildly inequitable recommendations. Imagine those same expert systems placed in a cybernetic loop where the outcomes of their past biased decisions become the source data for their future decisions. At least human decision-makers can be asked to explain their choices, and their reasoning can be searched for evidence of bias or animus. Neural networks are notoriously bad at articulating their “whys”. Without caution, we may be on track to replace the discretion of judges and juries with the inscrutable “computer says no”.
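The compounding danger of that loop is easy to demonstrate in miniature. The toy simulation below uses entirely made-up numbers: two groups with the same true underlying rate, a risk score that starts slightly biased against one of them, and a crude “retraining” rule in which higher scores attract more scrutiny and thus inflate the recorded incidents fed into the next round. It models no real tool; it only shows how a small initial gap can compound.

```python
base_rate = 0.10                  # true underlying rate, identical for both groups
score = {"A": 0.10, "B": 0.12}    # initial risk scores: a small bias against group B
amplification = 1.5               # >1 means extra scrutiny inflates recorded incidents

for round_num in range(1, 6):
    for group in score:
        # Each "retraining" round treats last round's recorded incidents as
        # ground truth: scores above the true rate draw more scrutiny, which
        # records more incidents, which raises the next score further.
        score[group] = base_rate + amplification * (score[group] - base_rate)
    print(f"round {round_num}: A={score['A']:.3f}  B={score['B']:.3f}  "
          f"gap={score['B'] - score['A']:.3f}")
```

After five rounds, the fabricated two-point gap has grown more than sevenfold, even though the two groups never differed in reality.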
None of this is to say that computers should not be entrusted to do the things at which they excel. Freeing people from tedious and error-prone work is a very good idea. There are so many things at which computers are better than people or mechanical systems, and we should absolutely delegate that work to halfway-intelligent robots. But when we deputize machines to make decisions, we should always be mindful of the ways those computers can make poor ones. We should be aware of their limitations and our own, and work diligently to ensure that their recommendations are equitable — especially when those decisions are opaque or difficult to appeal.
Much of the work on these sorts of computers is performed by a relatively small group of relatively similar people in a relatively small area. And these people — mostly straight, cis, white men — have demonstrated time and again that they are not adept at seeing their own biases or owning up to their own mistakes. As we continue down the utopian path of elevating society through reasoned automated analysis and evidence-based decision-making, we should be careful that these chuckleheads don’t put us on a path to dystopia instead.