“Autonomous cars are the new trend in the automotive sector. Every day, manufacturers announce or introduce new models based on new technologies. But what about the damage caused by defective sensors or defective software in the car? Do we need changes in our legal system, which is based upon the strict liability of the vehicle owner, the fault-based liability of the driver, and the liability of the producer, the foundation of which has caused many disputes among Turkish scholars? The aim of this paper is to address and find possible solutions to these and other questions in connection with autonomous driving.”
I interviewed Richard Kelley. He is the lead engineer for the University of Nevada’s autonomous vehicle research. He is also one of the lead investigators for Nevada’s Intelligent Mobility Initiative, a research project that explores the application of artificial intelligence to transportation problems. He is currently working with his students to program a Lincoln MKZ sedan to drive itself around Reno and Lake Tahoe.
Cetin: Shall we begin with a general question? What will be the potential benefits of autonomous vehicles? Why do we need them?
Kelley: I think that everyone who works on autonomous vehicles (AVs) does so at least in part because they believe that a future with self-driving vehicles will be a safer future. Most accidents today are caused by human error, and autonomous vehicles have the potential to eliminate that entire source of accidents. But beyond the safety case, autonomous vehicles represent a key step forward for artificial intelligence. Thus far, most autonomous robots have been deployed to solve very simple tasks in relatively constrained environments (think: vacuuming a house). It may not seem like it, but getting to the point where we have reliable vacuum robots is actually a tremendous technical achievement – it took decades to go from the lab to the living room. Think about how much harder driving a car is compared to vacuuming a house. Getting to fully autonomous cars — cars that can operate outside in the real world where all kinds of crazy things can happen — is requiring roboticists and AI experts to really push the limits of what AI technology can do, and will lead to all kinds of benefits for society as the technology diffuses into our daily lives.
Cetin: There are several criticisms about autonomous vehicles on the risks of hacking, terrorism, privacy and security. What do you think about these risks? Can these risks be overcome?
Kelley: All of these issues are critically important to address. One of the tragedies of the Internet and World Wide Web is that security was almost an afterthought, rather than a focus from the beginning. We have a chance to avoid making the same mistake with autonomous and connected vehicles. To deal with security issues (like hacking and terrorism), it will probably be necessary to rethink how cars are built. These days, a car consists of many dozens of microcontrollers and computers connected via a network protocol (CAN) that was designed in the mid-1980s, a few years before the first computer worm was even invented. We have to accept that a car is a computer network that happens to be on wheels, and work to develop new security systems that recognize and address that fact. But I think this is possible, and is something that the industry is working on as public awareness of the risks grows.
As far as privacy is concerned, I think society will need to learn about the modern threats to privacy from advanced technology, and based on that learning will need to decide, democratically, how to proceed. I think we’re starting to see this conversation unfold in the context of social media, so hopefully by the time autonomous cars are more common people will have a better sense of how they value their privacy. All of these challenges are substantial, but I remain hopeful that we can collectively choose a good path forward.
Cetin: What role should the state and private sector play in the development of this technology? Which one can be more useful?
Kelley: Both the state and the private sector are playing and will play essential roles in the future development of autonomous vehicle technology. I think the primary role of the state will be to create a beneficial regulatory environment – one that balances the safety concerns of the general public with the need of companies to exhaustively test their systems. At the same time, there are probably research questions that are too speculative for companies to work on, and there’s probably room for the state to fund research on those sorts of issues. As an example, very few autonomous vehicle companies are interested in technology for networked cities, because they want their systems to work without relying on public infrastructure. However, such infrastructure may have other benefits that the state could support research for. My team is currently working with the City of Reno and the State of Nevada to explore exactly how intelligent lidar networks can make cities safer and more responsive.
The primary job of the private sector will be to push core technical development forward. In the end, I think both roles will end up being more or less equally important.
Cetin: While developing autonomous vehicles, how are the rules of law and ethics reflected in the technology?
Kelley: I think the main thing for autonomous vehicle companies to focus on is building systems that robustly follow the law. If they can do this, I think the vast majority of the “obvious” ethical concerns will be addressed.
Something I don’t think is helpful is the interest in “trolley problems.” Fortunately this interest seems to be waning.
Cetin: What legal considerations should we take as a priority in the development of this technology?
Kelley: One of the tricky things about traffic laws is that they were written with humans in mind. We often think of written traffic laws as precise objects, but when you really carefully read them, there’s a lot of room for interpretation. This is something that computers may have a hard time with, so we’re going to need to clarify our expectations regarding the exact capabilities of an autonomous car. For example, should AVs be able to read? It might sound funny, but a surprising amount of traffic control is done using written signs that can’t simply be memorized. If we decide that AVs need to be able to read, then what level of reading comprehension will we require? Should they be able to read at the level of a typical driver? That seems like a high standard, but the point is that we need to decide what our expectations of autonomous vehicles are when it comes to understanding the law.
At the same time, it would probably be helpful if governments could start to create a standard way to present their laws to make it easier for AVs and other robots to download and understand legal constraints. The laws themselves don’t have to be uniform, but if there were a standard machine-readable format for the laws of both Nevada (where I am from) and, say, California, then it would be much easier to build robots that could operate in both states.
Basically, I think we need to decide what the “driver’s test” for robots is going to look like. This is another area where governments and private companies can work together.
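To make the idea of machine-readable law concrete, here is a minimal sketch of what such a format and lookup might look like. Everything below — the schema, the field names, and the sample speed limits — is invented for illustration and is not taken from any actual statute or standard:

```python
from dataclasses import dataclass

# Hypothetical schema: each jurisdiction publishes rules as structured
# records instead of prose, so a vehicle can load and query them directly.
@dataclass
class TrafficRule:
    jurisdiction: str    # e.g. "US-NV" for Nevada
    road_class: str      # e.g. "urban" or "highway"
    max_speed_kph: float

# Invented example rules; real limits differ and vary by road.
RULES = [
    TrafficRule("US-NV", "urban", 40.0),
    TrafficRule("US-NV", "highway", 130.0),
    TrafficRule("US-CA", "urban", 40.0),
]

def speed_limit(jurisdiction: str, road_class: str) -> float:
    """Look up the applicable limit; fail loudly if none is published."""
    for rule in RULES:
        if rule.jurisdiction == jurisdiction and rule.road_class == road_class:
            return rule.max_speed_kph
    raise LookupError(f"no rule published for {jurisdiction}/{road_class}")

def is_compliant(speed_kph: float, jurisdiction: str, road_class: str) -> bool:
    """Check a planned speed against the downloaded rule set."""
    return speed_kph <= speed_limit(jurisdiction, road_class)
```

The point of the sketch is the uniform interface: a vehicle crossing from Nevada into California would only need to load a different rule set, not different query code.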
Cetin: Responsibility of autonomous vehicles in an accident is one of the popular debates. What do you think about this responsibility? For example, how should the responsibility problem be solved when a situation, not foreseen by the programmer, is realized?
Kelley: Overall, I think liability is one of the areas that will get *less* interesting as autonomous vehicles are perfected. The US National Highway Traffic Safety Administration (NHTSA) conducted a comprehensive analysis from 2005 to 2007 to determine the causes of crashes in the US. NHTSA found that almost all (94%) crashes are caused by human error. Basically, humans making avoidably bad decisions. Once AVs can prevent this sort of crash, the remaining problems (that last 6%) will be easier to address. Because of the advanced sensors and AI in autonomous vehicles, even things like “wear and tear” that could cause a tire to burst on the road will likely be detected much earlier than they are now.
And even when there are crashes, they’ll probably be treated in much the same way that we treat airplane crashes. The so-called “black boxes” in each car involved in a crash will be taken from the scene and analyzed carefully by the government and AV manufacturers to make sure that similar crashes are prevented in the future.
Cetin: We know that there are five levels of autonomous cars. What problems could a level 5 autonomous vehicle create on a public road? How can they be overcome?
Kelley: To start I want to say that true “level 5 autonomy” – where you don’t even need a steering wheel and your car can go anywhere – is probably several years (or maybe even decades) away. But once we have that level of technology I think the questions shift away from “can we deploy this kind of car?” to “how do we best take advantage of this level of autonomy?” The role of autonomous systems is almost always to decrease the cost of some activity. In the case of level 5 AVs, it’s to decrease the cost of physical transportation. This will change economic incentives, and may even increase the number of cars on the road. The challenge will be to make sure that autonomy doesn’t lead to more congestion or longer commutes.
The other major issue is employment. For example, in the US a large number of people are employed as truck drivers. We’ll need to make sure that the development of level 5 technology doesn’t rapidly put all of those people out of work. We may also need to retrain drivers to do other kinds of jobs.
Fortunately the path from current technology to level 5 autonomy will take some time, and we’ll be able to start working on these issues before they become absolutely critical.
Cetin: How can the prejudices of manual drivers against autonomous vehicles be overcome in large cities where traffic is heavy, and some vehicles are not autonomous?
Kelley: Dealing effectively with humans is, in my mind, one of the last and largest technical challenges remaining for AVs. There’s still a lot of research required to make autonomous cars “socially intelligent.” This is in fact where most of my research is focused at the moment, and I think the answer is that we need to look to game theory. Traditionally, game theory has been used to analyze well-defined competitive situations with a finite number of outcomes. But there are ways to extend game theoretic analysis to the driving domain, and I’m currently working with several of my students to make that kind of analysis workable on our car. Right now autonomous vehicles behave too conservatively because they mostly don’t model the intentions of other drivers, so they can’t reliably predict how those drivers will react. My team is building “intent recognition” systems that are good at predicting how people will behave, and we are incorporating those systems into the decision-making software of our vehicle so that it is less likely to be bullied by aggressive human drivers.
It will also be important for cars to be able to adapt their behavior dynamically as driving conditions change, and to combine fast machine learning with robust safety guarantees. This is another area with a lot of research potential.
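One simple way to sketch the intent-recognition idea Kelley describes is a Bayesian update over another driver’s hidden intent: observe how the gap to a neighboring car evolves at a merge, update a belief over whether that driver is yielding or aggressive, and only merge once confidence is high. The intent labels, likelihood numbers, and decision threshold below are all invented for illustration, not taken from Kelley’s actual system:

```python
# Likelihood of each observation under each hypothesized intent
# (all probabilities invented for illustration).
LIKELIHOOD = {
    "yielding":   {"gap_opens": 0.8, "gap_closes": 0.2},
    "aggressive": {"gap_opens": 0.3, "gap_closes": 0.7},
}

def update_belief(belief, observation):
    """One Bayes step: P(intent|obs) is proportional to P(obs|intent)*P(intent)."""
    unnorm = {i: LIKELIHOOD[i][observation] * p for i, p in belief.items()}
    total = sum(unnorm.values())
    return {i: v / total for i, v in unnorm.items()}

def choose_action(belief, threshold=0.7):
    """Merge only when confident the other driver is yielding."""
    return "merge" if belief["yielding"] >= threshold else "wait"

# Start uncertain, then watch the gap open twice in a row.
belief = {"yielding": 0.5, "aggressive": 0.5}
for obs in ["gap_opens", "gap_opens"]:
    belief = update_belief(belief, obs)
```

After two consistent observations the belief concentrates on "yielding" and the vehicle commits to the merge, rather than waiting indefinitely — which is the conservative behavior Kelley says current AVs are trying to move past.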
Cetin: I want to ask a futuristic question. What might the future progress of autonomous vehicles look like? Do you foresee any software beyond level 5 for autonomous vehicles?
Kelley: Even though level 5 autonomy is still a long ways off, I think that’s really just the beginning for autonomous vehicles. I expect there will be a whole economy built on top of AVs, much in the same way a complex economy has grown around the Internet and the Web. The economics of transportation networks will need to be rethought, for example. And once cars can drive autonomously, there will be a need to carefully design the in-cabin user experience of these vehicles. We’ll find new ways to be entertained while our cars drive us around (hopefully with fewer ads than we have on the Web today).
More broadly, I think that the technology that will enable level 5 autonomy will be useful in more than just cars. By the time we have level 5 autonomous cars, the same technology will also probably drive smaller autonomous delivery vehicles, for example.
Cetin: Finally, can there be a collective and central software that can run multiple or all autonomous vehicles? What would be the consequences?
Kelley: This is a really interesting question! In the realm of unmanned aerial vehicles (drones) there has already been a lot of research on centralized traffic management systems, primarily through NASA’s UAS Traffic Management (UTM) project. I spent several years working with the State of Nevada to participate in UTM research, and I think the work that NASA is doing there is a template for how centralized robot control networks will develop in every area where robotics becomes prevalent. I expect that individual companies will definitely have their own centralized systems for managing their fleets. Moreover, it may be possible for local governments to have a role here. There’s some evidence that centralized control of a lot of vehicles can be more efficient than decentralized approaches, so I would expect to see a lot of practical research and experimentation on this kind of question in the coming years.