The story so far
In case you haven't heard, a British t-shirt company was selling shirts on Amazon with the phrase "Keep Calm and Rape A Lot", among many others.
They then non-apologized by observing that an algorithm did it. Quiet Babylon adeptly explores the strangeness of this remark, but I think it misses one key point.
But we do this all the time
Back in 2010, on the 6th of May, at 2:42 P.M., the bottom fell out of the stock market. The Dow Jones Industrial Average lost 600 points in five minutes. In order to stop the sudden free-fall, the Chicago Mercantile Exchange froze electronic trading for a few seconds. Within fifteen minutes, the market recovered. This became known as the Flash Crash, but another way of thinking of it is: a bunch of algorithms had an argument.
Nearly three years after the Flash Crash, we're still trying to determine what caused it. The SEC issued a post-mortem that lays out one theory, but there's still a lot of discussion and disagreement among experts. Everyone agrees that it was automatic trading algorithms, executing financial transactions in microseconds, that caused it. But no one's quite sure which ones, or why.
Building a better mousetrap
If you hit a guy with a hammer, no one is going to accept the defense: "the hammer did it". A hammer is a simple tool: everyone understands how it works, everyone understands what it does. It operates in our world, as an extension of our own motion. It's possible you hit him by accident, but you're clearly still at fault.
If you make an offensive t-shirt with an algorithm, apparently we're in different territory. It's easy to make an algorithm that has unintended consequences. It's not so hard to make one too complex for its creator to understand, or one whose output is too vast for its human handlers to comprehend. Even very smart people (the people making lightning trades on the Chicago Mercantile Exchange hire very smart people) build programs that can behave unexpectedly.
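Nobody outside the company knows what its actual code looked like, but a slogan generator like this is reportedly just a template crossed with word lists. Here's a minimal sketch of how such a pipeline might work; the template, word lists, and names are all hypothetical, and the point is that with thousands of scraped dictionary words and no human review, nobody reads every combination before it goes on sale:

```python
from itertools import product

# Hypothetical word lists -- imagine these scraped from a dictionary,
# thousands of entries long, with nobody vetting them.
verbs = ["hug", "eat", "dance"]
modifiers = ["a lot", "quietly", "tonight"]

# Cross the template with every verb/modifier pair.
slogans = [f"KEEP CALM AND {v.upper()} {m.upper()}"
           for v, m in product(verbs, modifiers)]

print(len(slogans))   # 3 verbs x 3 modifiers = 9 shirts
print(slogans[0])     # KEEP CALM AND HUG A LOT
```

Three words per list gives nine shirts; real dictionaries give hundreds of thousands, which is exactly the scale at which "a human approved this" stops being true.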
But in the end, you can't blame an algorithm: it can only do what it's told.
Or, how I learned to stop worrying and love the algorithm
It's easy to make programs that run: what's hard about programming is making algorithms that do what you want them to. Writing and debugging programs is hard because programs aren't like hammers--they do not work the way we do. They are not extensions of our own motion. Writing a program requires us to think in ways that aren't human.
Programs are a series of small instructions, marching on in sequences and loops and forking paths towards some end. We aren't good at keeping track of thousands of little automata. Worse, when enough of them assemble, they can have emergent properties: they can seem to take on a life of their own, quite beyond our intentions. And they can crash the stock market without anyone in charge of them understanding why.
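To see how simple rules can feed on each other, here's a toy model, entirely hypothetical and nothing like real high-frequency trading: two trivial "bots", a momentum seller and a stop-loss, each individually sensible, whose interaction turns a one-point dip into a rout:

```python
# Toy model of two naive trading rules interacting.
# This is a sketch of a feedback loop, not a model of the real Flash Crash.

def simulate(ticks=20):
    price = 100.0
    history = [price]
    for _ in range(ticks):
        sell_pressure = 0
        # Bot A (momentum seller): sells whenever the last move was down.
        if len(history) >= 2 and history[-1] < history[-2]:
            sell_pressure += 1
        # Bot B (stop-loss): dumps shares once price falls below 95.
        if price < 95:
            sell_pressure += 2
        # A small one-time shock to get things started.
        if len(history) == 1:
            sell_pressure += 1
        price -= sell_pressure  # each unit of selling knocks off one point
        history.append(price)
    return history

prices = simulate()
print(prices[0], "->", prices[-1])  # 100.0 -> 52.0
```

Each rule is defensible on its own; together, every sale creates the downward tick that triggers the next sale. Nobody wrote "crash the market", yet the market crashes.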
We've grown to trust algorithms because they're all around us, running all sorts of things in our world. But algorithms are more plodding and stubborn than the most infuriating bureaucrat: they will only ever do what they're told, and they never answer questions.
Algorithms are as fallible as their creators. Actually, no: they're more fallible. Their creators may not understand what they've made.
Let's hope this is keeping at least one guy who makes Predator drones up at night.