Automation is one such technological leap for which there is little legal framework. Innovation has outpaced legislation, leaving lawmakers and legal organisations playing catch-up on the ramifications of autopilot and assisted driving for the law, and on how liability should be assigned in criminal and civil cases. Here, we’ll examine the issue in closer detail, from the roll-out of assisted and automated driving to where responsibility lies, and the future of legal understanding on the subject.
Danger to Life
As forward-thinking as autopilot and driving automation are, there have been notable hiccups in experimentation with the technology. Despite assurances of safety from industry leaders and chief technicians alike, the roll-out of self-driving technology has resulted in a number of incidents, several of which have been sadly fatal. 49-year-old Elaine Herzberg was killed by a self-driving Uber in Arizona in 2018, a complicated case with many moving parts – from an allocated safety driver allegedly watching a reality TV show at the time of the accident to serious concerns regarding the safety of the car’s night-time driving capabilities. This very public incident is one of 12 noted accidents involving autonomous vehicles, the other 11 of which are attributed to Tesla’s Autopilot system.
The Issue of Liability
In the Elaine Herzberg case, the legal outcome regarding Uber’s liability for the accident was historic: it was decided that Uber were not criminally liable for Herzberg’s death, despite issues raised regarding Uber’s sensors and detection software. The assigned safety driver for the vehicle has been indicted for negligent homicide, but to what extent does this case inform other related cases? And does it accurately represent liability where autopilot is involved?
The situation is further complicated by the very concept of the safety driver. In the Herzberg case, it is argued that the driver could have prevented the accident had they been paying more attention to the road. But even for an attentive driver, the delay between noticing a hazard and responding to it is greater than if they had been in complete control of the vehicle. This is a function of human programming, not an individual fault; can that be punished?
Regulation Following Innovation
In the case of self-driving vehicles, the swift roll-out of the technology to live testing has resulted in a disconnect between existing technologies and the laws that could apply to them. The ramifications of an algorithm causing a fatal accident are not easily unpicked by lawmakers, and the reverberations of the accidents suffered by pedestrians and drivers alike at the hands of autonomous vehicles will be felt for years to come. Technology lawyers will become even more crucial to the understanding and implementation of new lines of thought in the legal process, while existential conversations about autonomous driving become central to discussions of new innovation.