A quick thought for your Sunday lunch:
Those who advocate self-driving cars propose not just to implement the trolley problem in the real world, but to impose an irreversible answer on it as well.
You’ve heard the trolley problem many times, right? Its basic version goes something like this: Walking across a bridge, you notice a runaway trolley with five people in it passing beneath you. Oh no, it’s about to make a fatal plunge into a trendy urban sinkhole! But if you pull this lever attached to the bridge — quite creative infrastructure planning around here, evidently — you can divert its course to a safe, alternate track. But there’s some yahoo eating a Snickers bar and listening to a Walkman standing on that track, pondering a cloud and oblivious to the peril. The rescued trolley would surely squash them flat! Do you pull the lever?
People will die in self-driving car accidents. The causes will differ from the causes of today’s auto accidents, and I have no doubt that the annual fatality numbers will prove far smaller. The fact remains, though, that if auto-autos supplant human-driven cars, then people will die in accidents that would (by definition) not have happened in a world where today’s familiar meat-dependent cars continue their reign.
I have long felt very skeptical about self-driving cars, but lately I have begun to hear very smart people whose voices I trust talking about the deprecation of human-controlled automobiles as not just inevitable but imminent, far sooner and more graspable than the perpetually twenty-years-away AI horizon or whatnot. I used to think that only Google, home of an increasing number of laughable failures born of Silicon Valley-blindness, put work into this field. More recently, I have started to come around to the notion that a plurality of diverse interests now treat this as a solvable problem. Even accepting this, though, I find myself resisting the notion of its feasibility, and I think this may relate to my answer to the trolley problem.
As contrived as it is, I find the thought experiment interesting in how it carefully avoids presenting an explicit binary choice — turn right, or turn left — in favor of the subtly different binary of action versus inaction. And while I empathize with those who readily answer that they would pull the lever, accepting the cold calculus of ending one life to save five, I know with certainty that I would not pull it. I know, from knowing myself, that I would follow an instinct to make myself a non-factor in a clearly unavoidable tragedy — even one of variable magnitude. If I do nothing, I will witness a terrible tragedy. If I pull the lever, I will cause a terrible tragedy, and I want nothing to do with that.
Those who propose self-driving cars would pull that lever. If self-driving cars become a practical reality, then the trolley will hop the track it’s ridden for the last hundred-plus years, and the tens of thousands of people otherwise doomed to die in car accidents every year in the U.S. alone will be saved. In their place: the relative handful who will die in novel ways at the hands of auto-autos, probably more of them early on, while the v1.0 kinks get ironed out.
(Oh, the many ways clever humans will find to get themselves killed by doing things that will confuse high-speed passenger robots and surprise their designers, especially in the early days. When one of my clients finds a bug in production code, I feel both sad and happy, and I send them an email that mixes apology, gratitude, and congratulations. I imagine the range of emotions that auto-autos’ engineers will feel as the bug reports roll in, and shiver.)
I don’t present this as an objectively correct “solution” to the trolley problem, but the parallel suddenly strikes me as spooky. And if this technology does in fact turn the corner and bear down on us, then, speaking as one who tends to favor the continuous application of new technology to improve civilization, I will do nothing to stop it.