Tesla is notorious for advertising a “self-driving” car while asking the passenger in the left front seat to pay full attention to driving, just as in a dumb old car. Eric Peters recently called out BMW for the same lie. Meanwhile, regulators are being asked to look elsewhere as the “trolley problem” makes the rounds again.
As an animal species, humans have two remarkable traits: we are good long-distance runners and we are good learners.
Even “dangerous” intersections are navigated successfully by 99.999% of road users. We have learned to pay enough attention to driving, walking, or riding to stay out of serious trouble.
We also learn when not to pay attention. The stream of signs along the roadside doesn’t distract you from driving because you know reading “mile 100.2” and “speed limit 55” won’t tell you anything useful. The instructions to treat a self-driving car the same as a traditional car are obeyed about as much as a 55 mile per hour sign on a freeway.
I read a story from the chemical industry that probably happened a thousand times. Once a year, a 99.9% reliable human failed to stop a process in time and caused an accident. So management replaced him with a machine. The machine wasn’t perfect either, but management figured the human could back up the machine. If the machine failed 1 in 1,000 times and the human failed 1 in 1,000 times, the failure rate would drop to 1 in 1,000,000.
In fact, they didn’t reduce the failure rate at all. The bored human stopped paying attention. When the machine failed, the human was not ready.
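To see why the bosses’ arithmetic was optimistic, here is a minimal sketch (my own, not from the original story) of the independence assumption and what happens when the backup human tunes out. The function name and the attention parameter are purely illustrative:

```python
# Toy illustration of why the 1-in-1,000,000 figure only holds if the
# machine's failures and the human's failures are independent.

def combined_failure_rate(machine_rate, human_rate, human_attention=1.0):
    """Probability that both safeguards miss the same event.

    human_attention scales the human's effective reliability:
    1.0 = as sharp as when working alone; 0.0 = tuned out, catches nothing.
    """
    effective_human_rate = 1.0 - human_attention * (1.0 - human_rate)
    return machine_rate * effective_human_rate

MACHINE = 1 / 1000   # machine misses 1 event in 1,000
HUMAN = 1 / 1000     # an attentive human misses 1 event in 1,000

# Management's assumption: failures are independent.
print(combined_failure_rate(MACHINE, HUMAN, human_attention=1.0))  # 1e-06, i.e. 1 in 1,000,000

# The chemical-plant story: the bored human stops watching.
print(combined_failure_rate(MACHINE, HUMAN, human_attention=0.0))  # 0.001, i.e. back to 1 in 1,000
```

Multiplying the two rates only works when the human’s vigilance doesn’t depend on the machine being there; once it does, the machine’s rate is all you get.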
This is a well-known phenomenon. Asking two inspectors to sign off on a job can be worse than asking only one. The first knows the second will catch mistakes, and the second trusts the first to have done the job already. Two bad jobs do not add up to one good job.
Humans asked to back up machines are not going to make split-second decisions reliably, no matter what Tesla’s terms of service say.
What about making machines back up humans? Sounds like a better idea. Machines don’t get bored. The car slams on the brakes and calls you an idiot for not paying attention. Will you learn?
Sure, you will learn, but maybe not the right lesson. When an activity feels safer we’re likely to take more risks. The phenomenon is called risk homeostasis. Sometimes changes in the name of safety fail because people overreact to the perception of safety.
We already know what happens when brakes get better. That experiment was done 30 years ago and humanity hasn’t evolved much since then. Emergency braking systems are going to trigger more and more often as drivers rely on the computer to bail them out.
One of my recurring themes is the failure of policy makers to acknowledge that roads are used by humans. I recently listened to a resident saying he didn’t care if his stop sign made the road more dangerous because anybody who got hurt deserved it. Red light camera advocates similarly dismissed injuries caused by photo enforcement. They don’t want a safer road, they want to kill people they disapprove of.
The trolley problem that’s making the rounds again is a hypothetical no-win situation like the Kobayashi Maru scenario from Star Trek. It’s a choice between death and death that is almost irrelevant to real-world driving. Is your car going to kill the right person when the time comes to kill somebody?
Who cares?
Once in decades of driving I almost had to choose who to hit. Two drivers cut me off simultaneously, stopping nose to nose to block the entire width of the icy road. I hit the brakes hard and slid to a stop in time. It wouldn’t have mattered much which one I hit. It’s tempting to say they both deserved to be hit, but my policy is to avoid hitting people who deserve it. I’m not out there to teach a lesson.
I bet self-driving car makers love to talk to regulators about the trolley problem. If regulators are busy with that hypothetical they aren’t looking at the real problem, which is human-machine interaction.
Tesla is being sued by a passenger whose car autopiloted itself into a stopped car at freeway speed. I’m hoping the case sets a precedent that Tesla is responsible for creating a false sense of security.
But surely Tesla’s lawyers will settle the case if there is the slightest chance of setting a precedent. Unlike driving, lawyering leaves plenty of time to weigh options.