Uber Court Decision
This past week, Yavapai County, Arizona County Attorney Sheila Sullivan Polk determined that Uber has no criminal liability in the March 2018 death of Tempe pedestrian Elaine Herzberg. Herzberg was the first known pedestrian fatality involving a driverless car.
Walking her bike across a dark street outside of a crosswalk, Herzberg was hit by the Uber Volvo, which was operating in autonomous mode. The car was piloted by safety driver Rafaela Vasquez, who can be seen in an internal camera video feed glancing down and away from the road. According to the Tempe police report, Vasquez was watching a TV program on her phone at the time and could now face charges of vehicular manslaughter. Polk recommended that the police obtain an expert analysis of what Vasquez should have been able to see at the time of the crash, given the speed, lighting conditions, and other factors.
Soon after the accident, Uber settled with Herzberg’s family for an undisclosed amount, and the company declined to comment on the current ruling.
Arizona immediately suspended its consent for Uber to test driverless cars in the state. Uber also pulled its driverless program out of all other states until December. In a way, Uber has pivoted completely and is now concentrating on mobility-as-a-service devices such as bike and scooter rideshares. The biggest fallout from this tragedy has been the damage to public confidence in driverless cars in general.
This ruling certainly does not help with that. The question from the beginning has been: who is responsible for accidents involving driverless cars?
Most autonomous vehicle (AV) companies blame “driver error” for most of the accidents that occur. In a 2017 post, Consumer Reports noted that experts suggest the issue may stem from computers driving more cautiously than humans.
But what happens if and when these robot cars have no pilots at all? Who will be responsible then?
As we slowly gravitate toward autonomy Levels 3, 4, and 5 (with Level 5 meaning no human intervention at all), this question of responsibility may become even more acute.
Driving myself still seems like the best solution. I have no confidence that driverless cars will make us safer, and if people are hurt or killed, it appears no one will be held responsible either.
In other Auto Tech Watch news this week…
[Image credit: Dllu, own work, CC BY-SA 4.0, https://commons.wikimedia.org/w/index.php?curid=63450446]
Algorithmic Bias in Driverless Cars
The Georgia Institute of Technology released a new study that started with a simple question: How accurately do state-of-the-art object-detection models, like those used by self-driving cars, detect people from different demographic groups? Researchers found that detection was, on average, five percentage points less accurate for darker-skinned pedestrians. The study indicates how human bias seeps into automated decision-making systems, a phenomenon known as algorithmic bias.
I’m not surprised by this finding, since the same problem occurs with facial-recognition systems.
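For readers wondering what “five percentage points less accurate” means in practice, here is a minimal sketch of how a per-group detection-rate comparison can be computed. The groups and numbers below are hypothetical placeholders for illustration, not the study’s data or methodology.

```python
# Minimal sketch (not the Georgia Tech methodology): comparing pedestrian-
# detection rates across demographic groups to surface a potential accuracy gap.
# All records below are hypothetical placeholders, not results from the study.
from collections import defaultdict

# Each record: (skin_tone_group, was_pedestrian_detected)
# "lighter"/"darker" stand in for the skin-tone groupings used in fairness audits.
detections = [
    ("lighter", True), ("lighter", True), ("lighter", False), ("lighter", True),
    ("darker", True), ("darker", False), ("darker", False), ("darker", True),
]

totals = defaultdict(int)   # images per group
hits = defaultdict(int)     # successful detections per group

for group, detected in detections:
    totals[group] += 1
    hits[group] += detected

for group in totals:
    rate = hits[group] / totals[group]
    print(f"{group}: detection rate {rate:.0%} ({hits[group]}/{totals[group]})")

# A persistent gap between groups (on the order of the five percentage points
# reported in the study's averages) is one signal of algorithmic bias in the
# model or in the data it was trained on.
```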
Union of Glass and Metal
Researchers at Heriot-Watt University in Edinburgh, Scotland have figured out how to use lasers to melt glass onto metal, bonding the two into a single unit. The process, called “ultrafast laser microwelding,” uses ultrashort pulses of infrared laser light to fuse two dissimilar materials together. The researchers were able to weld quartz, borosilicate glass, and sapphire to aluminum, titanium, and stainless steel. Most auto glass is currently held in place with adhesives, which makes manufacturing more complex, and adhesives can wear out with age. It will be interesting to see how this research is actually put to use.
NMA’s Auto Tech Watch will be back again next week! If you see a story that should be featured on this blog, please contact us with the URL at nma@motorists.org.