Do you remember the horror flick Christine? It was about a self-driving ’58 Plymouth Fury that liked to run people over.
Body by Plymouth. Soul by Satan.
What unholy spirit animates “self-driving” Teslas? And who is responsible when one of these “Christines” runs over someone?
According to the felony manslaughter charges just filed against Los Angeles limo driver Kevin Riad, he is. Even though it was the car itself, his "self-driving" Tesla, that drove right through a red light and struck a Honda that had the right-of-way, killing both of its occupants and injuring both Riad and his passenger.
Tesla, which markets and sells its cars in part on the strength of this capability, isn't in the dock. It's all on Riad, the driver. Yet by offering this capability, Tesla encouraged Riad to use it, the same as power windows or air conditioning.
But not quite the same.
What exactly is meant by "self-driving" capability? On this, everything hinges.
Clearly, Tesla intends its customers to believe it means what the words literally mean– that the car drives itself. Tesla says (fast voice, many asterisks directing you to the fine print) that the driver must “monitor” the self-driving car at all times and be ready to “intervene” whenever necessary. In this case, we are not really talking about self-driving capability, are we?
We are talking about driver assistance technology.
The distinction is important because it speaks to the issue of responsibility and liability. An elevator is a good example of self-driving technology. A person enters and pushes a button for a destination. This passenger is not expected to “monitor” the subsequent actions of the elevator and is certainly not held responsible if the elevator malfunctions, even if he didn’t try to push the “emergency stop” button.
The presumption here is that he is a passenger, being conveyed and, as such, isn’t expected to be in charge of the elevator’s operation.
Tesla wants to have it both ways: to market and sell "self-driving" cars to customers who are nonetheless expected to "monitor" the car and be ready to "intervene" at all times. If they don't, and there's a wreck, it's the (ahem) driver's fault.
Catch, meet 22.
There is something shystery about a company that touts the “self-driving” capabilities of its cars, if the cars cannot be trusted to safely drive themselves without being constantly “monitored.” If the person behind the wheel must keep constant track of how the car drives, then isn’t that person driving?
Isn’t the glaring presence of the steering wheel, itself, indicative of the car not being truly and fully, to use Tesla’s term “self-driving”? Why have a steering wheel at all if the car doesn’t need it?
Tesla wants to have it both ways: to market cars that are supposedly capable of driving themselves (a big selling point for the company), while leaving the driver holding the bag for the consequences of far-from-fully "self-driving" technology, such as a car that drives itself through a red light and into another vehicle.
The company says that its “self-driving” technology is intended to make driving easier, not absolve the driver from being responsible for the drive.
Many new cars come equipped with various driver assistance technologies such as Lane Keep Assist, which uses cameras to track the car's position between the painted lane lines and electric motors to nudge the steering wheel left or right, helping the car maintain a centered position in the travel lane. But Lane Keep Assist is not marketed as "self-steering," and the driver is the one who will be held responsible if the car wanders into the opposing lane of traffic.
Driving is one of those either-it-is-or-isn’t things. A driver who doesn’t have his hands on the wheel and his eyes on the road isn’t driving the vehicle.
He is at best “monitoring” its driving–assuming he’s not asleep.
This idea that he won’t go to sleep or check his texts in a car that touts its ability to drive is disingenuous.
Granted, Riad bears some of the blame here. Obviously, he, like anyone who sits behind the wheel, had an obligation to “monitor” what his car was doing. But who is to blame for encouraging Riad to believe his car could be trusted to drive itself?
Riad expected the car to do what Tesla's advertising copy says it can do. He trusted that the "technology" was capable of doing what anyone who hears the words "self-driving" would assume: the very thing that makes "self-driving" technology appealing to people.
It lacks that appeal if the driver must maintain the same situational awareness and be ready to “intervene” as if he were, in fact, driving the car. And if he must “monitor” and be ready to “intervene” to prevent the car from blasting through red lights or running over pedestrians, then “self-driving” is just another electronic gimmick.
A very dangerous gimmick.
Precisely because it eggs on the kind of inattentive driving that resulted in Riad’s Tesla blowing through a red light, resulting in the deaths of two innocent people.
The fact that Tesla isn’t on trial is as startling as the fact that Tesla has been given a free pass to use the public right-of-way to test its “self-driving” technology, leaving others to clean up (and pay for) the mess created.
Whatever happened to safety?
It's interesting to speculate about why Tesla gets a pass (and a get-out-of-jail-free card).
Could it possibly have anything to do with the clearly broadcast intention of the government to take the driver's hands off the wheel entirely?
Eric Peters lives in Virginia and enjoys driving cars and motorcycles. In the past, Eric worked as a car journalist for many prominent mainstream media outlets. Currently, he focuses his time on writing auto history books, reviewing cars, and blogging about cars for his website EricPetersAutos.com.
Editor’s Note: The opinions expressed in this article are those of the author.