By Lauren Ensor
The only thing clear is that as technology accelerates, the lack of guidelines and clear accountability may chill autonomous driving commercialisation.
“Hands down the best car I have ever owned and used to its full extent,” Joshua Brown of Canton, Ohio posted in April 2016. The 2015 Tesla Model S, with its semi-autonomous “Autopilot” function, had, according to Brown, saved his life when a truck swerved into his lane on the interstate. One month later, Brown was killed when the Tesla drove under the trailer of an eighteen-wheeler whose driver had made a last-minute decision to turn left across its path, shearing off the car’s roof on impact.
The accident was the first of several fatalities in the US involving semi-autonomous cars, and each incident has raised widespread safety concerns. Is the technology reliable? How do self-driving vehicles compare with driver-controlled ones? And, most significantly, who is held responsible when tragedy occurs?
The answers to these questions are, at times, a little uncertain. For instance, the technology is considered reliable, but only to a certain extent. A car’s ability to automatically detect objects on the road can be traced as far back as the late 1950s, when General Motors demonstrated roadway-embedded wire circuits designed to detect and guide vehicles. In the 1980s, universities teamed up with automotive companies and transportation agencies to develop autonomous functions including lane-keeping and distance following. From the early 2000s, the US Department of Defense, through DARPA’s driverless-vehicle challenges, sponsored further development in the area. In other words, the self-driving car has had more than sixty years of research and design investment, backed by governments, an automotive industry worth four trillion dollars, and technology giants like Google actively pursuing it. Yet the accidents that have already occurred reveal the technology’s somewhat rudimentary limitations. In March 2018, an Uber self-driving car killed a pedestrian after its cameras struggled in low-visibility conditions. Researchers attempting to establish whether the accident could have been avoided were at a loss, concluding, “It is still unknown as to why Uber’s systems were unable to detect the pedestrian before the crash happened.” Brown’s accident, too, was presumably caused by the Tesla’s camera failing to detect a white truck against a bright sky.
These risks, however, need to be weighed against the risks of driver-controlled vehicles. Every year in the US there are more than 30,000 traffic fatalities. A 2009 survey put the rate at roughly one fatality for every 100 million miles driven in conventional vehicles, while Tesla had logged 130 million Autopilot miles without a fatal incident. And, while an over-reliance on technology carries its own risks, autonomous cars are not subject to sleep deprivation, distraction, or drug and alcohol impairment. On these measures a strong case can be made for the driverless car, which should become safer as the technology develops, whereas driver-controlled vehicles will maintain much the same level of risk.
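For readers who want to see how those two figures compare, the sketch below is a rough, purely illustrative back-of-envelope calculation using only the numbers cited in this article (roughly one fatality per 100 million miles for conventional driving, and 130 million Autopilot miles without a fatality); it is not drawn from any official dataset.

```python
# Back-of-envelope comparison of the per-mile figures cited above.
# Both numbers come from this article; the calculation is illustrative only.

human_fatality_rate = 1 / 100_000_000   # ~1 fatality per 100 million miles, conventional driving
autopilot_miles = 130_000_000           # Autopilot miles Tesla had logged without a fatal incident

# Fatalities you would expect over the same distance at the conventional-driving rate
expected_fatalities = human_fatality_rate * autopilot_miles

print(f"Expected fatalities over {autopilot_miles:,} miles at the conventional rate: "
      f"{expected_fatalities:.1f}")
# Prints roughly 1.3 expected fatalities, versus zero observed on Autopilot at the time.
# Caveat: zero events in a single 130-million-mile window is far too small a sample to
# prove one rate is genuinely lower than the other; it only frames the comparison.
```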
Where problems really arise is over accident liability. The primary problem is that no federal regulatory agency is tasked with monitoring the development and production of artificial intelligence in the transport industry. Left to the courts, existing liability law can be applied, but only to a point; any advance towards fully autonomous vehicles will significantly alter the legal game, and the courts will struggle to keep up. Even at the technology’s current stage of development, the technicalities of software systems mean that any allegation of a design defect would involve sifting through code in search of programming errors, a task that is prohibitively time-consuming and complex. Waivers have been suggested as a way to shift responsibility back onto the driver; however, these may require consumers to understand software functions and their potential failures, which seems both unlikely and unreasonable.
A balance needs to be struck: placing sole responsibility on the manufacturer makes production fiscally unviable, but placing it on the consumer may invite negligent and risky manufacturing. Additionally, while public attitudes show an expectation that responsibility should rest with manufacturers and governments rather than owners, AI still cannot make ethical decisions the way humans can, because programmers cannot possibly code every scenario into a software system. This, too, presents an issue: the whole idea of a self-driving car is that it can, fundamentally, drive itself. Or so Joshua Brown appears to have assumed; a portable DVD player and a Harry Potter film were found in the wreckage, and the truck’s driver claimed the movie had been playing at the moment of collision.
The only thing clear is that as technology accelerates, the lack of guidelines and clear accountability may chill autonomous driving commercialisation. Regulatory measures should be considered to pave the way for the technology and mitigate the risk of future accidents.
Lauren Ensor is a master’s student studying conflict and terrorism studies at the University of Auckland.
Disclaimer: The ideas expressed in this article reflect the author’s views and not necessarily the views of The Big Q.