
Autonomous Vehicles for Personal Transport: A Technology Assessment

Whether it was flying cars or jetpacks, we were all promised a compelling vision of the future at some point in our childhoods. And, although it may seem like science fiction, autonomous vehicles—cars that drive themselves—are more than just a promise; they are a growing reality. The widespread coverage in October 2010 of Google’s fleet of Toyota Prii navigating the highways of California was a clear message to the world that this technology is fact, not fiction. Questions remain, however, as to whether the world and the marketplace are ready for this technology.

To explore this topic, Beverly Lu and Matthew Moore (two PhD students at Caltech) conducted an assessment of the current status of autonomous vehicle technologies in the U.S., mainly through interviews with experts in the field. They focused on technologies applicable to modern cars driven on existing, unmodified public roads. Their geographical focus was mainly the west coast of the United States, a hotbed of activity around this technology.

To download the full article, click here.

Nevada Bill Would Pave The Road To Autonomous Cars

Is it lawful for a car to drive itself? In the absence of any law to the contrary, it may well be. A new bill is working its way through the Nevada state legislature that would remove any doubt in that state. A.B. 511 directs the Nevada Department of Transportation to authorize autonomous vehicle testing in certain geographic areas of Nevada. Should vehicles meet Nevada DOT standards, they would be permitted to “operate on a highway.”

The bill defines not only autonomous vehicle, but artificial intelligence as well. AI is “the use of computers and related equipment to enable a machine to duplicate or mimic the behavior of human beings.” An autonomous vehicle uses “artificial intelligence, sensors, and [GPS] coordinates to drive itself.”

To be clear: autonomous vehicles are not yet the law of the land in Nevada. This bill must pass through two committees and receive a hearing before it can be voted on and become law. Some preliminary thoughts on the bill in its present form follow.

1) It is wonderful that the state of Nevada is being so proactive. The potential safety, mobility, efficiency, and other benefits of autonomous vehicles are enormous. Creating a process by which to test and certify such vehicles represents an invaluable step forward.

2) That said, the bill’s definition of autonomous vehicles is unclear, even circular. Autonomous driving exists on a spectrum. Many vehicles available today have autonomous features, while falling short of complete computer control. Surely the bill’s authors do not intend to require that, for instance, today’s self-parking Lexus LS 460L be tested and certified.

3) My personal guess is that it was John Markoff’s October 2010 coverage of Google’s autonomous vehicles that sparked this bill. I base this on the presence of that article on Nevada’s website under Exhibits/Senate.

One hopes A.B. 511 is the beginning of an important conversation about the promise and perils of autonomous driving in the United States. This Center, in conjunction with the Center for Automotive Research at Stanford, has started a program dedicated specifically to the Legal Aspects of Autonomous Driving. We just hired a fellow. Look to this blog in the coming months for more on this topic.

The bill itself is here. Thanks to Amanda Smith for her insights.

Copyright boundaries for the development and use of intelligent systems

From an intellectual property perspective, the factors determining whether artificial intelligence programs are protected by copyright are the same as for ordinary computer programs. The capacity of intelligent systems to act as authors, by contrast, is particularly interesting. Here the key issue is the identity of the copyright owner: is it the software itself, the software’s copyright holders, the users of the software, some combination of these, or no one?

The Anglo-American and civil law traditions differ in this respect. In the U.S. and the UK, a work may be copyrighted even if it is machine-generated. The UK has a specific rule concerning computer-generated works, and in the United States the same conclusion can be reached through interpretation. In Finland and elsewhere in Europe, it is assumed that copyrighted works must be made by a human: the result of a mechanical operation is not entitled to copyright protection, because works of this kind are not regarded as independent and original as copyright law requires. Copyright in works created by an intelligent system therefore belongs to no one. The civil law tradition emphasizes the importance of moral rights, which are connected to the author’s personality. The Anglo-American copyright tradition, in contrast, is based on economic rights, and moral rights are largely ignored. The European Commission has drawn attention to the UK’s special provision on computer-generated works. The Commission is concerned that the provision does not comply with the Software Directive due to the broad scope of copyright protection granted under it.

The copyright system is meant to encourage the creation of works. It is true that machines do not need incentives to create. Machine-generated output, however, reflects the contributions of a number of people who, for example, program the machine. These people need incentives, just as the author of a book or a composer does. Copyright protection for machine-generated works would encourage the development of useful technology. In this sense, copyright protection should also be available for machine-generated content and should not be categorically prohibited, even if work performed by machines is to some extent mechanical. For example, short telegraph news items are not necessarily protected, irrespective of whether they are written by a human or a machine. This kind of assessment of the creativity prerequisite is necessary. The assessment should not, however, be exclusively linked to a requirement that the direct author be a human.

In the preparatory works and legal literature in the United States, in the UK legislation, and in the Finnish preparatory works, authorship is allocated to the user of the software. The software programmer is ignored in this respect, although the programmer’s contribution should be recognized. The user’s actions may be extremely simple, as when the user merely presses a button to compose. In intermediate cases, the output is produced jointly by the computer user and the computer programmer and/or a person producing the database. In such cases, both the user and the programmer should be considered co-authors. The most practical way to achieve this result would be by contract. However, the parties obviously cannot validly agree that the threshold for copyrightability has been met.

This is an excerpt of an article published in Viestintäoikeuden vuosikirja 2010, Forum Iuris, Helsinki 2011, VII + 200 pages. Unfortunately, the book is available only in Finnish.


The paper on intelligent machines and legal liability has been published in the Nordic Journal of Commercial Law: http://www.njcl.utu.fi/

Duke's James Boyle On Artificial Intelligence

James Boyle of Duke Law School has written a nice paper on the issues around constitutional personhood for artificial intelligence. You’ll recall Lawrence Solum’s wonderful North Carolina Law Review article on AI and legal personhood from 1992. Solum was interested in whether soft AI might meet the technical requirements of a legal entity such as the executor of a trust. Boyle’s argument is more (too?) ambitious. Here’s an excerpt:

My point is a simple one. In the coming century, it is overwhelmingly likely that constitutional law will have to classify artificially created entities that have some but not all of the attributes we associate with human beings. They may look like human beings, but have a genome that is very different. Conversely, they may look very different, while genomic analysis reveals almost perfect genetic similarity. … How will, and how should, constitutional law meet these challenges?

I confess I am one of the “skeptics” Boyle refers to who “point to a history of overconfident predictions that the [AI] breakthrough was just around the corner.” We’ve been doing AI since that term was coined at my alma mater in the 1950s and, as Mary-Anne Williams put it to me succinctly, robots today are about as smart as insects.

Boyle was specifically asked to write about the future. And he includes biological possibilities that may be around an earlier corner. Either way, the paper is worth a read. (Hat tip to Gabe and Mike.)

Ethics@Noon: Robots & Privacy

Panel Discussion at Association of Defense Counsel (ADC) – Dec 9, 2010

Dan Siciliano and Ryan Calo (both of Stanford Law School), along with Stephen Wu (Cooke Kobrick & Wu LLP), Douglas Robinson (Shook, Hardy & Bacon L.L.P.), and Sven Beiker (Center for Automotive Research at Stanford) led a panel on the legal aspects of autonomous driving at the Annual Meeting of the Association of Defense Counsel of Northern California and Nevada in San Francisco on Dec 9, 2010. The panel’s title was “Sudden Acceleration Into the Future: Liability from Autonomous Driving and Robotics.” The objective of the panel discussion was to raise awareness among lawyers about advanced driver assistance systems as well as autonomous driving, and to initiate a conversation in the legal field about the implications this technology might have for the legal realm. The Annual ADC Meeting brings together defense attorneys from Northern California and Nevada to discuss current and upcoming topics in areas such as products liability, construction, medical malpractice, and employment law.

The ADC chose to focus on the future and upcoming legal issues as the theme for its Annual Meeting. The panel discussion on autonomous driving was part of the ADC’s outlook on future legal matters. The goal of the panel was to help lawyers prepare for defense litigation of the future and to seek the lawyers’ opinions and concerns about the fledgling field of robotics law.

At the beginning of the panel, Sven Beiker gave a presentation on the challenges of individual mobility and the motivation for autonomous driving. He showed several videos to explain current production and research systems. He also presented a timeline for the evolution of autonomous driving, beginning with today’s driver assistance systems and leading to fully autonomous driving. The presentation concluded with a call for collaboration and communication regarding the benefits and challenges of autonomous driving. Deploying autonomous vehicle technology will require not only engineering expertise and technical infrastructure, but also legal and public policy support.

Stephen Wu gave an overview of relevant litigation in the field of automated/autonomous systems. His program materials included a survey of robotics products liability cases. He emphasized that the automotive field, with more and more intelligent vehicles, presents a great challenge to the legal field, as accidents will happen even with autonomous vehicles. These accidents will lead to unprecedented liability questions. He also emphasized that lawyers will need to change the way they practice products liability law. With the use of information technology and artificial intelligence in vehicles, lawyers will need to become familiar with the parties involved in manufacturing driver assistance systems, they must understand how manufacturers design and build systems that involve information technology and artificial intelligence, and they will need to become familiar with sources of possible defects. In short, the practice of products liability law will become a “high tech” field.

Douglas Robinson summarized recent litigation in automotive products liability and explained that those cases have already reached a level of complexity that presents great challenges. For instance, he pointed out that vehicle manufacturers were accused in some cases of not offering certain safety systems (e.g. Electronic Stability Control) in their vehicles while those systems were available in others. The manufacturers therefore faced the claim that the vehicle was not state of the art. Such litigation might create a certain push toward autonomous vehicles once the first vehicles have been equipped with this technology.

Dan Siciliano discussed in his presentation how accident patterns might change with autonomous vehicles. While the normal assumption is that this technology would reduce accidents significantly, accidents will never disappear completely. Moreover, the typical driver profile and characteristics might change as driving tasks shift between the driver and the autonomous system. Therefore, not only lawyers but also the general public and governments need to consider that while some new accident types might occur, many others would be avoided. Courts may balance these costs and benefits, and should consider this broader perspective when determining possible liability.

Ryan Calo also gave an overview of other cutting-edge technology fields that raise new considerations in litigation. One set of considerations involves how robotics will challenge privacy law by making surveillance easier in public and private spaces. Another has to do with liability for personal robots. Robots will catch on more quickly and be more useful if third-party developers and users can program and modify them. But this also opens manufacturers up to potential products liability claims in the event of physical damage or injury. In response to these challenges, he is writing publications proposing limited immunity for manufacturers of robots and artificial intelligence systems, in order to foster innovation and prevent crippling products liability suits while preserving some leeway for compensation for damages.

In order to study this field in greater detail, Stanford Law School is currently establishing a fellowship program, “Legal Aspects of Autonomous Driving,” which is also supported by the Center for Automotive Research. The research by the fellow funded through this program will help lawyers and policy makers better understand the legal challenges involved with mass deployment of autonomous driving systems.

After discussion by the panelists, the question and answer session generated a lot of interest among the audience. While the topic was new to most of the attendees, the relevance and magnitude of the issues raised by the panelists became apparent immediately. Listed below are some of the questions that arose:

– When will autonomous driving be ready for public deployment and what steps will be taken to facilitate deployment?

– What are the next steps in research after the DARPA Grand and Urban Challenges?

– What role will infrastructure play in deploying autonomous cars?

– What does the litigation concerning unintended acceleration involve, and where does it stand?

– Is the vehicle loss of control/acceleration litigation a template for future litigation involving autonomous vehicles?

– What do we need to change in order to better defend the manufacturer and the driver?

– What are the compelling “economic” reasons that seem to argue for rapid deployment of autonomous vehicles?

– How might the widespread introduction of autonomous vehicle technology change the insurance industry?

– What can past cases concerning robotics tell us about liability concerning autonomous vehicles?

– If we are going to make legislative changes concerning autonomous vehicle liability, what bodies will consider that legislation?

– What sorts of policy and legal hurdles do autonomous vehicles confront?

– What unique business opportunities do autonomous vehicles create?

The publications and presentations can be downloaded below:

– Paper “Legal Aspects of Autonomous Driving” by S. Beiker and R. Calo: download here

– Paper “Summary of Robotics Liability Cases” by S. Wu: download here

– Presentation “Autonomous Driving – When Technology is Not Enough” by S. Beiker: download here

– Presentation “Generalized Model for Driving & Accident Reduction” by D. Siciliano: download here

For further information contact Sven Beiker: beiker@stanford.edu

(Sven Beiker wrote this blog post, and Stephen Wu edited the post.)

Apps for Robots

Over Christmas, I received a Roomba 530, the robotic vacuum cleaner from iRobot. It cleans the floor really well. But that is all it does. This year at the Consumer Electronics Show, iRobot revealed the prototype AVA. It is, essentially, an open robotic platform. Think of it as an iPad with a body. It has no dedicated purpose and, importantly, it has an API and will run software made by third-party developers.

Yes, apps for robots. This is a wonderful development, one that I predicted in a forthcoming essay in Maryland Law Review. As iRobot founder Colin Angle points out, “If you think of the thousands of apps out there: Which iPad apps would be more cool if they moved?” More importantly, would you not be more inclined to buy a personal robot that came with thousands of programs, with more on the way?

AVA is representative of a sea change in thinking about robotic products. Just a couple of years ago, iRobot’s Angle was telling The Economist that general-purpose robots were unlikely and that we would have one robot for each task. Now iRobot is building a multi-purpose platform open to outside programmers.

The change can likely be traced back to the startup Willow Garage, home to the laundry-folding, pool-playing, beer-fetching PR2. Willow Garage is betting that the next “killer app” will come from consumers or other third-party innovators.

To read about the promise and challenges open robots pose, please check out Open Robotics. Your thoughts warmly welcome.

Open Robotics

In a new paper, forthcoming from Maryland Law Review, I argue that consumer robotics will have to be sufficiently open to third party innovation to really take off.  Such openness, however, creates novel legal issues that may require statutory intervention.  Here’s the abstract:

With millions of home and service robots already on the market, and millions more on the way, robotics is poised to be the next transformative technology. As with personal computers, personal robots are more likely to thrive if they are sufficiently open to third-party contributions of software and hardware. No less than with telephony, cable, computing, and the Internet, an open robotics could foster innovation, spur consumer adoption, and create secondary markets.

But open robots also present the potential for inestimable legal liability, which may lead entrepreneurs and investors to abandon open robots in favor of products with more limited functionality. This possibility flows from a key difference between personal computers and robots. Like PCs, open robots have no set function, run third-party software, and invite modification. But unlike PCs, personal robots are in a position directly to cause physical damage and injury. Thus, norms against suit and expedients to limit liability such as the economic loss doctrine are unlikely to transfer from the PC and consumer software context to that of robotics.

This essay therefore recommends a selective immunity for manufacturers of open robotic platforms for what end users do with these platforms, akin to the immunity enjoyed under federal law by firearms manufacturers and websites. Selective immunity has the potential to preserve the conditions for innovation without compromising incentives for safety. The alternative is to risk being left behind in a key technology by countries with a higher bar to litigation and a serious head start.

Robot Paparazzi

The Wall Street Journal recently did a piece on the potential use of drones to stalk celebrities (and tail suspected adulterers).  I hope it’s not too late to include this example in my forthcoming book chapter on robots and privacy; it would make a great one.  Here’s an excerpt from the WSJ:

Ms. Cummings predicted it’s just a matter of time before drone technology and safety improvements make the gadgets a common part of the urban landscape.

Privacy issues could emerge if such drones become common. While the military has rules of engagement governing drone use, there is no similar set of rules to protect privacy for domestic use of drones.

“If everybody had enough money to buy one of these things, we could all be wandering around with little networks of vehicles flying over our heads spying on us,” Ms. Cummings said. “It really opens up a whole new Pandora’s Box of: What does it mean to have privacy?”


Program on Autonomous Driving and Liability

On December 9, 2010, three of us (Ryan Calo, Dan Siciliano, and I) will be presenting a continuing legal education program entitled “Sudden Acceleration Into the Future:  Liability from Autonomous Driving and Robotics” in San Francisco.  Sven Beiker of the Center for Automotive Research at Stanford (CARS) and Douglas Robinson of Shook, Hardy & Bacon L.L.P. will join us on the panel.  The program will be one of the track sessions at the Annual Meeting of the Association of Defense Counsel of Northern California and Nevada.

The program description reads:

“Over the next decades, the automotive industry will see sweeping changes, including adopting technology for autonomous vehicles that drive themselves to a destination.  At the same time, products liability poses a challenge for the industry and for robotics generally, especially after alleged sudden acceleration phenomena and resulting litigation.  See a snapshot of the future of the automotive industry, discuss the litigation and products liability challenge to autonomous driving, hear some of the public policy ideas for supporting the nascent field of robotics, and learn about changes in defense practice that will be necessary to defend motor vehicle cases in future decades.”

I will write again after the program to summarize some of the ideas exchanged by the panel.  We know that driver assistance systems, and eventually autonomous driving systems, will save many lives.  At the same time, the systems will not be perfect and will cause some accidents, deaths, and injuries.  On balance, the net effect of adopting anticipated autonomous driving technologies will be to reduce deaths, injuries, and property damage.

How do we encourage the adoption of technologies by industry and consumers that save lives on balance, even though in individual products liability cases, it may be that these same technologies have caused harm to specific people?  The panelists will address this and many other questions concerning liability arising from autonomous driving.

For more information about the Association of Defense Counsel’s Annual Meeting or to register for the program, click here.