
Robotics and the Law

Unmanned Insecurity: Emerging Issues of Drone Security & Information Assurance

I am pleased to announce a program on the security of drones presented by Donna A. Dulo MS, MA, MAS, MBA, JD, PhD(c).  Please attend in person if you can, or if you are out of town, a dial-in is available.  The presentation will be Wednesday, September 25, 2013 at 1:30 Pacific/4:30 Eastern, at the offices of Cooke Kobrick & Wu LLP, 166 Main Street, Los Altos, CA 94022.

The link to RSVP and to obtain the dial-in number is here:



With tens of thousands of unmanned aerial vehicles slated to enter our national airspace in the near future, much of the focus has been on privacy and safety issues. However, the issues of security and information assurance (“IA”) are just as pervasive. Data security, communications security, hostile takeover through methods such as GPS spoofing, and lost-link events resulting in vehicle and data loss are just a few of the security and IA issues that face drone operators in all facets of vehicle operations. Maintaining data confidentiality, availability, and integrity in drone operations is a complex task, and breaches could emerge not only as IA incidents but also as privacy and safety situations. This teleconference will focus on the issues of security and information assurance in drone operations and the potential legal issues that operators may face when security incidents occur during vehicle operations. It will also cover the compound issues that may emerge among security, privacy, and safety in drone operations.
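To make the lost-link problem concrete, here is a minimal failsafe sketch in Python. This is purely illustrative; the program announcement does not describe any particular implementation, and the class name, mode strings, and timeout value are all assumptions. The idea is simply that if no command packet arrives within a timeout, the vehicle switches to a return-to-home mode rather than continuing blind.

```python
import time

# Illustrative lost-link failsafe (hypothetical sketch): if no command
# packet has been received within the timeout, report a failsafe mode.

LINK_TIMEOUT_S = 5.0  # assumed value; real systems tune this carefully

class FailsafeMonitor:
    def __init__(self, timeout: float = LINK_TIMEOUT_S):
        self.timeout = timeout
        self.last_packet = time.monotonic()  # treat startup as "link up"

    def packet_received(self):
        # Called by the comms layer each time a command packet arrives.
        self.last_packet = time.monotonic()

    def mode(self) -> str:
        # Switch modes once the link has been silent longer than the timeout.
        if time.monotonic() - self.last_packet > self.timeout:
            return "RETURN_TO_HOME"
        return "NORMAL"
```

A real autopilot would also have to handle GPS spoofing of the home coordinates themselves, which is part of why these issues compound.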

Bio for Donna Dulo:

Donna A. Dulo is a senior mathematician, computer scientist, and software/systems engineer for the US Department of Defense where she has worked for over 25 years in both military and civilian capacities. She is also a systems, software, and safety engineer for Icarus Interstellar where she designs spacecraft and integrates safety and resilience into spacecraft systems. Donna is an adjunct faculty member at the Embry-Riddle Aeronautical University Worldwide Campus where she teaches computer security and information assurance law. Donna provides consulting services to various entities such as NASA and the Monterey Institute for Research in Astronomy, as well as to various universities. Donna is the author of several articles on law, drones, and software engineering and is writing the ABA’s book on unmanned aerial systems law which is due for publication in the summer of 2014.

Donna did her undergraduate work at the US Coast Guard Academy and the University of the State of New York in Management Economics. Her graduate work includes an MS in Computer Science specializing in computer security, wireless autonomous systems, and artificial intelligence from the US Naval Postgraduate School, an MS in Systems Engineering focusing on aerospace systems from Johns Hopkins University, an MAS in Aeronautics and Aerospace Safety Systems from Embry-Riddle Aeronautical University, an MBA in Engineering and Technology Management from City University of Seattle, an MA in National Security and Strategic Studies from the US Naval War College, an MS in Computer Information Systems from the University of Phoenix, and a Doctor of Jurisprudence from the Monterey College of Law. She is currently a PhD candidate in Aerospace Software Engineering at the US Naval Postgraduate School where she is researching the safety impact of software on manned and unmanned systems.

Stephen S. Wu, Partner, Cooke Kobrick & Wu LLP

(650) 917-8045

AUVSI Program on Product Liability

For robot manufacturers, one of the top legal issues is product liability.  No manufacturer wants to face a huge, company-ending product liability verdict.  On Wednesday, June 26, I will be presenting a program on managing the risk of product liability in commercializing robots.  The Association for Unmanned Vehicle Systems International will be hosting the program as a webinar.  To view a program description and register, click here.

This program follows my presentation at the We Robot Conference at Stanford Law School in April.  The white paper I presented at the program appears here.  I focused on the root causes of large-dollar verdicts against manufacturers in product liability suits.  Why did these verdicts take place?  What are the legal theories under which manufacturers face liability?  How can manufacturers limit and manage their risk?

I also considered some topics that product liability presentations don’t usually cover.  These topics concern sources of risk other than design, manufacturing, and other engineering risks.  In particular, I talked about information security risks and supply chain risk.  I also talked about effective records management practices that will help a manufacturer prepare today to win suits that won’t be filed until perhaps decades from now.

If you missed the We Robot Conference, you have a chance to hear the presentation this coming Wednesday.  I will also add an additional thought about a non-traditional source of risk:  liability arising from faulty data sources.  For instance, in the driverless car context, what happens if a manufacturer obtains map data from a vendor that provides faulty data?  We will discuss this and many other questions.

I hope you can tune into the program this Wednesday.

Stephen S. Wu, Partner, Cooke Kobrick & Wu LLP

(650) 917-8045

Automated Vehicles Are Probably Legal in the United States

“One of the most significant obstacles to the proliferation of autonomous cars is the fact that they are illegal on most public roads.” That’s what Wikipedia tells us—at least until I change it. I can’t change a New York Times op-ed that declared “driverless cars” to be “illegal in all 50 states” or the many articles that have repeated this claim.

To the extent that such pronouncements of illegality reflect assumption rather than analysis, they are inconsistent with our nation’s entrepreneurial narrative: An invention is not illegal simply because it is new, and a novel activity is not prohibited just because it has not been affirmatively permitted. So to determine the actual legal status of the automated vehicles that may someday roam our roads, I reviewed relevant law at the international, national, and state levels. While my 100-page study raises a number of questions about both the ultimate design of these vehicles and the duties of their human operators, it finds no law that categorically prohibits automated driving. In short, even without specific legislation, automated vehicles are probably legal in the United States.

A striking corollary of this finding is that Nevada, Florida, and California (the three states that have already enacted pertinent legislation) did not really “legalize” automated vehicles, as has been popularly reported. Instead, those recent laws primarily regulate these technologies. In Nevada, for example, both an automated vehicle and its operator must be specially registered with the state, but across the border in Arizona, where a similar bill failed to pass, no such requirements exist.

The laws are significant for other reasons as well: They endorse the potential of, catalyze important discussions about, and establish basic safety requirements for these long-term technologies. To a more limited extent, these laws also reduce legal uncertainty: “Definitely legal” sounds very different than “probably legal.”

Curiously, however, one of the stronger challenges to the legality of automated vehicles is actually a law that no state can repeal. After World War II, thousands of Americans began shipping their cars across the Atlantic to motor through Europe, where they encountered a variety of drivers—of horses, pack animals, and livestock in addition to cars and bikes—who were following a variety of road customs. National governments, including the United States, sought to harmonize these customs through the 1949 Geneva Convention on Road Traffic. One of the rules of the road specified in this international agreement is that every kind of road vehicle “shall have a driver” who is “at all times … able to control” it. Because the treaty is federal law—domestically comparable to a statute enacted by Congress—no state government or federal administrative agency can lawfully contravene it.

Fortunately for Nevada and its early-adopting brethren, this treaty provision is not necessarily inconsistent with automated driving. Human operators are able to control today’s research vehicles by starting them, stopping them, and intervening at any point along the way. Even a vehicle without a human behind the wheel would probably satisfy this requirement if it performs at least as safely, reasonably, and lawfully as a human driver would. A vehicle that operates within these bounds would essentially be under control, regardless of whether its legal driver is a human, a computer, or a company. The upshot: Emerging technologies are much more likely to shape the future interpretation of this treaty language than the language is to shape the future development of these technologies.

Nonetheless, significant legal uncertainty does remain, even in Nevada, Florida, and California. Take two examples. First, the human who operates or otherwise uses an automated vehicle may need to participate more actively in that operation than the particular technology itself may demand. New York rather uniquely requires a driver to keep one hand on the steering wheel—though it does not require her to actually steer the vehicle to which the wheel is attached. The District of Columbia, among others, prohibits “distracted driving” and mandates “full time and attention” during operation—requirements that the Autonomous Vehicle Act recently passed by its council will not change. And in state tort law, even driver behavior that is not expressly illegal might nonetheless be civilly negligent.

Second, current rules of the road reflect the fact that human drivers necessarily make real-time decisions that are generally judged, if at all, only afterward. Automated driving still requires human decisions, but they are the anticipatory decisions of human designers rather than or in addition to the reactive decisions of human drivers. At the state and local levels, how and to whom will laws that prescribe “reasonable,” “prudent,” “practicable,” and “safe” driving apply? And at the federal level, what will constitute the kind of “unreasonable risk” that triggers a vehicle recall? Do standards like these merely require an automated vehicle to perform as well as a reasonable human driver—or will governments, courts, and consumers expect something more? In particular, when crashes inevitably occur, how will legal responsibility be divided among manufacturers, designers, data providers, owners, operators, passengers, and other potential parties?

These are just some of the important questions that will emerge as particular automation technologies are further developed, tested, and ultimately commercialized. Governments may not be able to answer them yet (and perhaps they shouldn’t yet try), but this does not mean that automated vehicles are illegal. To the contrary, on this threshold question of legality, my analysis suggests that while the road may be curvy, the lights are not all red.

Bryant Walker Smith researches and teaches on the legal aspects of increasing vehicle automation. Stanford will host the Transportation Research Board’s Vehicle Automation Workshop on July 16-19, 2013.

Experiments with socialbots

First: hi. The idea is that I will post sporadically on topics relating to Robots, Freedom, and Privacy.

At the east coast hacker conference HOPE 9, held the weekend of July 13-15, 2012, Pacific Social Architecting Corporation’s Tim Hwang reported on experiments the company has been conducting with socialbots: bot accounts deployed on social networks like Twitter and Facebook for the purpose of studying how they can be used to influence and alter the behavior and social landscapes of other users. Their November 2011 paper (PDF) gives some of the background.

The highlights:
- Early in 2011, Hwang conducted a competition to study socialbots. Teams scored points by getting their bot-controlled Twitter accounts (and any number of supporting bots) to make connections with and elicit social behavior from an unsuspecting cluster of 500 online users. Teams got +1 point for mutual follows; +3 points for social responses; and -15 if the account was detected and killed by Twitter. The New Zealand team won with bland, encouraging statements; no AI was involved, but the bot’s responses were encouraging enough for people to talk to it. A second entrant used Amazon’s Mechanical Turk: users could ask its bot a direct question, and the bot would forward the question to Mechanical Turk workers and return their answer. A third effort redirected tweets randomly between unconnected groups of users talking about the same topics.

- A bot can get good, human responses to “Are you a bot?” by asking that question of human users and reusing the responses.

- To make bots more credible as accounts inhabited by humans, it helped for them to go offline for enough hours each day to seem to sleep like humans.

- Many bot personalities tend to fall apart in one-to-one communication, so they wouldn’t fare well in traditional AI/Turing test conditions – but online norms help them seem more credible.

- Governments are beginning to get into this. The researchers found bots actively promoting both sides of the most recent Mexican election. Newt Gingrich claimed that his number of Twitter followers showed he had a grassroots following on the Internet; however, an aide who had quit disclosed that most of his followers were fakes: blank accounts created by a company hired for the purpose. Experienced users are pretty quick to spot fake accounts; will we need crowd-based systems (like collaborative spam-reporting systems) to protect less sophisticated users? But this is only true of the rather crude bots we have so far. What about more sophisticated ones? Hwang believes the bigger problem will come when governments adopt the much more difficult-to-spot strategy of using bots to “shape the social universe around them” rather than to censor.
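The competition scoring in the first highlight can be sketched as a simple tally. This is a hypothetical reconstruction for illustration only; the actual contest infrastructure is not described here, and the function name and example counts are made up.

```python
# Tally for the socialbot competition scoring described above:
# +1 per mutual follow, +3 per social response, and -15 each time a
# bot account is detected and suspended by Twitter.

def team_score(mutual_follows: int, social_responses: int, suspensions: int) -> int:
    return 1 * mutual_follows + 3 * social_responses - 15 * suspensions

# Example: 50 mutual follows, 20 social responses, 1 suspended account.
print(team_score(50, 20, 1))  # -> 95
```

The weighting makes the incentives clear: a single suspension wipes out fifteen mutual follows, so stealth mattered more than volume.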

Hwang noted the ethical quandary raised by people beginning to flirt with the bot: how long should the bot go on? Should it shut down? What if the human feels rejected? I think the ethical quandary ought to have started much earlier; although the experiment was framed in terms of experimenting with bots, in reality the teams were experimenting on real people, even if it was only for two weeks and on Twitter.

Hwang asks the right question: “Does it presage a world in which people design systems to influence networks this way?” So is the question of how to defend against this kind of thing. But it seems to me typical of the computer industry’s constant reinvention that Hwang had not read, or heard of, Andrew Leonard’s 1997 book Bots: The Origin of New Species, which reports on the prior art in this field: experiments with software bots interacting with people through the late 1990s (I need to reread it myself). So perhaps one of the first Robots, Freedom, and Privacy dangers is the failure to study past experiments in the interest of avoiding ethical issues that have already been uncovered.


CalPoly Talk About Robots & Privacy, Filmed By Robot…

Drones Panel at Stanford Law Review

Technological Rationality: the Logos of Slavery or the Enabler of Human(e) Progress?

Technological development has long faced criticism. The efficiency brought by industrialization and computer technology is expected to lead, eventually, to unpleasant outcomes. Critics have elaborated these fears in science-fiction stories about a future filled with technology. In one scenario, people stagnate and indulge only their animal desires; in a second, people become insensitive robots relying only on pure reason. (Steve Fuller, New Frontiers in Science and Technology (Polity, Cambridge 2007), 232 pp.)

Herbert Marcuse held that the logos of technology equals the logos of slavery. People have become tools, even though technology was supposed to liberate them. In his book One-Dimensional Man, published in 1964, Marcuse argues that across the historical continuum man has been, and will remain, the master of other men: a social reality that societal changes do not alter. The basis for domination, however, has changed over the ages. Personal dependence has been replaced by an objective order of dependency; the dependence of slave on master has given way to dependence on economic laws and the market. According to Marcuse, this higher form of rationality exploits natural and spiritual resources more efficiently and distributes profits in a new way. Man can be seen as a slave within the production machinery, where a struggle for existence plays out; that struggle turns its destructive power on the machinery and its parts, including its builders and users.

Marcuse’s ideas certainly give some food for thought, even if I don’t completely agree with them. The debate continues in our new blog: http://legalfuturology.blogspot.com

Let me give you a brief introduction:

The title of the blog, Legal Futurology, contains a certain degree of deliberate ambiguity. You, Dear Reader, may wonder what kind of a future we are talking about and what the law has to do with it. At this point we don’t expect we will be writing about futures (the financial instruments) or about future developments in the law in general, say, regarding the resolution of the financial crisis or the next EU treaty or planned directive this or statute that or what the court will (or should) decide in Rubber v. Glue or whatever. While we cannot promise to avoid such topics altogether (classic evasive move there), what we have in mind are some very specific aspects of the future and the law.

The future we are referring to is that of the William Gibson quotation ‘The future is already here — it’s just not very evenly distributed.’ It is also that of Richard Susskind’s book The Future of Law. And since at least one of us is a board-certified Legal Realist, there might be the odd dash of future in the sense of Prediction Theory thrown in as well.

More concretely, we are both researchers at the University of Helsinki working at the intersection of law and artificial intelligence. Our perspectives are quite different: one of us (Anniina) studies AI as the object of legal regulation, whereas the other (Anna) studies AI as a tool to facilitate legal information retrieval or even to do legal reasoning by itself. These complementary perspectives should open up a broader range of topics than either of us could cover by herself. We are also planning to take advantage of this in more traditional fora through co-authored publications (stay tuned!).

Artificial Intelligence: A Legal Perspective

The Sorcerer's Apprentice, Or: Why Weak AI Is Interesting Enough

Not many people in the legal academy study artificial intelligence or robotics. One fellow enthusiast, Kenneth Anderson at American University, posed a provocative question over at Volokh Conspiracy yesterday: will the Nobel Prize for literature ever go to a software engineer who writes a program that writes a novel?

What I like about Ken’s question is its basic plausibility. Software has already composed original music and helped invent a new type of toothbrush. It does the majority of stock trading. Software could one day write a book. A focus on the achievable is also what I find compelling about Larry Solum’s exploration of whether AI might serve as an executor of a trust or Ian Kerr’s discussion of the effects of software agents on commerce.

Does Intelligence Matter? – Legal Ramifications of Intelligent Systems

Doctoral research plan. Please feel free to comment!

1) Introduction

“Michigan held off Iowa for a 7-5 win on Saturday. The Hawkeyes (16-21) were unable to overcome a four-run sixth inning deficit. The Hawkeyes clawed back in the eighth inning, putting up one run.” This piece of sports news was generated by an intelligent system: it was written by Narrative Science’s computers in the United States. It was neither created nor edited by a human, which means that it is completely computer generated. This particular text is likely not protected by copyright, as it is not sufficiently original and creative. However, when the software evolves and becomes able to create writings that fulfill the prerequisites for copyright protection, the question of authorship becomes relevant. As lawyers, we will then face the question of how to approach this issue under the copyright laws. Another example: Google’s intelligent car was involved in an accident in August 2011. That time, a human was responsible for the accident, but what if the autonomous vehicle had been considered responsible? How would we fit the case within the legal regime governing legal liability? Further, what if an intruder breaks into the video camera of a robot and spies on children in their bedroom? How would this problem be approached from the perspective of the privacy laws?
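As a rough illustration of how such data-to-text systems work, a box score can be turned into a recap sentence with a simple template. This is a minimal sketch under assumed field names; Narrative Science’s actual technology is far more sophisticated than template filling.

```python
# Minimal data-to-text sketch: fill a sentence template from structured
# game data. (Illustrative only; the field names are assumptions.)

def recap(game: dict) -> str:
    return (f"{game['winner']} held off {game['loser']} for a "
            f"{game['winner_runs']}-{game['loser_runs']} win on {game['day']}.")

game = {"winner": "Michigan", "loser": "Iowa",
        "winner_runs": 7, "loser_runs": 5, "day": "Saturday"}
print(recap(game))
# -> Michigan held off Iowa for a 7-5 win on Saturday.
```

Even this toy version shows why the copyright question bites: the creative choices live in the templates and selection logic written in advance by human engineers, not in any decision made at the moment a particular story is generated.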