Legal Challenges of Autonomous Driving

On September 25 of this year, California became the third state in the US, following Nevada and Florida, to pass legislation allowing the operation of driverless cars on public roads. Just two years earlier, Stanford Artificial Intelligence Lab director Sebastian Thrun had announced that Google had created the world's first autonomous car.

In testing, the cars, which were manned by trained operators, drove around California, from Mountain View to Santa Monica and on to Hollywood Boulevard. They successfully navigated the Pacific Coast Highway, crossed the Golden Gate Bridge, and even made it around Lake Tahoe. In total, the test cars logged over 140,000 miles. The following year, Thrun gave a TED talk explaining how autonomous cars could save lives, time, and fuel. The National Highway Traffic Safety Administration reported 32,885 motor vehicle deaths in 2010, and traffic accidents remain the leading cause of death among teenagers. Furthermore, the majority of car accidents (estimates range from 57% to as high as 90%) can be attributed to driver error and other human factors that would not be present in driverless cars.

Still, the public has not been unanimously welcoming of autonomous cars. Some are concerned about privacy. Since Google's cars rely heavily on Google Maps' GPS technology, Google could track a car wherever it goes. Not only would this make many people uneasy, but the data could also potentially be presented as evidence in court. In United States v. Jones, 565 U.S. ___ (2012), the Supreme Court ruled that data collected from a GPS device that police had placed on the underside of the defendant's car without a valid warrant was not admissible. However, the Court did not discuss how to treat such information when it is held by a third party like Google Maps. In the past, American courts have consistently held that a person has no legitimate expectation of privacy in information voluntarily turned over to third parties (e.g., Smith v. Maryland, 442 U.S. 735 (1979); United States v. Miller, 425 U.S. 435 (1976)). This premise was challenged by Justice Sotomayor in Jones (at 5), who argued that "[t]his approach is ill suited to the digital age." Indeed, we routinely disclose personal information (phone numbers, visited URLs, purchase histories) to third parties in our day-to-day activities, and that information could very well come to include our geographic location if autonomous driving becomes prevalent on the roads.

Another legal issue is accountability. While driverless cars may well significantly reduce the frequency of motor vehicle accidents, the number will never be zero. The question is: when driverless cars do get into accidents, who is liable? An article by Andrew P. Garza recently published in the New England Law Review posited that product liability law is capable of handling the advent of autonomous cars just as it handled earlier safety devices such as seat belts, air bags, and cruise control. Garza argued that although manufacturers have historically resisted incorporating new technologies for fear of increased liability, they would benefit from autonomous vehicles in the long run for three reasons. First, autonomous vehicles are safer on the whole, which leads to a net decrease in manufacturer liability and in the cost of insurance. Second, the sophisticated technology of autonomous vehicles will reduce litigation costs by decreasing reliance on expert testimony as well as the number of cases that go to trial. Electronic Data Recorders and Lane Departure Warning System cameras installed in driverless cars will act as black boxes, recording information on how an accident occurred. This objective information will help parties in product liability actions assess the potential outcome of their suits, making trial outcomes more predictable and settlements more likely. Lastly, Garza reasoned that since modern consumers are more likely to purchase vehicles with higher safety ratings, the market for autonomous vehicles would be promising as they gain mainstream awareness.

It should be noted that Garza was discussing only autonomous vehicles that can be overridden by human drivers (like Google's self-driving cars). However, he suggested that once these overridable autonomous vehicles become popular, there would likely be a shift to fully autonomous, nonoverridable vehicles. Although three states now allow driverless cars on the road, a licensed human driver is still required to be in the driver's seat. The human driver must be able to take over in the event that the technology malfunctions or is unable to anticipate or accommodate an obstacle. Should the human driver fail to perform adequately, he or she would inevitably be accountable for any damages incurred. If fully autonomous, nonoverridable vehicles are ever permitted on the road, the absence of a human driver would expose manufacturers to further liability.

Nancy Situ is a JD candidate at Osgoode Hall Law School and is currently enrolled in Osgoode's Intellectual Property Law and Technology Intensive Program. As part of the program requirements, students are asked to write a blog post on a topic of their choice.