
ROBOTS Podcast

I was lucky enough to be interviewed by ROBOTS, a wonderful podcast for news and opinions about robots. I discuss the state of robotics and the law, the issue of robots and liability, robots and privacy, and the future of robotics and artificial intelligence. You can listen to the podcast here.

What's it like to be a robot?

In 1974, as a means to cast new light on the role of intentionality in consciousness and experience, Thomas Nagel asked a simple question: “What is it like to be a bat?” His question is still thought-provoking today, and it extends Wittgenstein’s suggestion that if a lion could speak, we would not understand him, in a more tangible but no less tantalizing way.

The field of robotics is beginning to provide new perspectives and insights on many age-old philosophical questions and musings. Given the enormous potential impact robots will have on individuals and society, there is an exciting new opportunity to reconceptualize the role of intention and embodiment in subjective experience, and to explore the legal implications of robots in society.

The largely rationalist approach to robot design was challenged in the early 1990s by Rodney Brooks and his behavior-based robots, which used sensory data to drive action choices without the need to build elaborate, fragile, high-maintenance models of the world. Brooks took the innovative stance that robots do not need a model of the world to determine what to do next because they can simply sense it directly; he advocated the idea that the world is its own best representation. His view challenged the prevailing model-based approaches, and although it has since been found to lack support for high-level capabilities like planning and anticipation, Brooks certainly enriched the field: mainstream robotics now incorporates, and no longer questions, his once-extreme position.
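To make the contrast with model-based design concrete, here is a minimal sketch of a behavior-based control loop in the spirit of Brooks’ approach. Everything in it (sensor names, action tuples, the behaviors themselves) is a hypothetical illustration, not code from any real system:

```python
# Minimal, illustrative sketch of behavior-based control in the spirit of
# Brooks' subsumption architecture. Sensor names and action tuples are
# hypothetical; the point is that each behavior maps raw sensing directly
# to action, and a fixed priority ordering stands in for a world model.

def avoid_obstacle(sensors):
    # Highest priority: react to anything close ahead (range in metres).
    if sensors["range_ahead"] < 0.3:
        return ("turn", 90)
    return None

def seek_light(sensors):
    # Middle priority: steer toward the brighter side, if there is one.
    diff = sensors["light_left"] - sensors["light_right"]
    if abs(diff) < 0.05:
        return None  # no clear gradient; defer to lower layers
    return ("turn", -10) if diff > 0 else ("turn", 10)

def wander(sensors):
    # Default behavior when nothing else fires.
    return ("forward", 0.1)

BEHAVIORS = [avoid_obstacle, seek_light, wander]  # priority order

def control_step(sensors):
    # "The world is its own best representation": sense, then act.
    for behavior in BEHAVIORS:
        action = behavior(sensors)
        if action is not None:
            return action

print(control_step({"range_ahead": 1.2, "light_left": 0.7, "light_right": 0.4}))
# -> ('turn', -10)
```

Note that nothing here stores or predicts anything; each cycle re-reads the world. That is exactly the economy Brooks championed, and also why such systems struggle with planning and anticipation.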

If you are interested in these topics, take a look at two papers that connect behavior-based approaches to robot design with the philosophical stance that gives primacy to subjective experience. The first paper, Representation = Grounded Information, provides a new definition of intelligence, a concept that has proven difficult to define and has previously been described using a circumscribed but incomplete set of characteristics, such as the ability to adapt to change and to learn from mistakes. The paper defines intelligence as the capability that produces representations. Representation-making capabilities underpin the traits associated with intelligence: adapting to change requires an ability to make new representations for new situations, and learning involves the discovery and incorporation of new examples or whole new representations. Furthermore, the measure of intelligence can be given by measures of representation affordance. For example, deception is associated with high-end intelligence; reptilian representations do not afford deception, but people over the age of eight represent the minds of others with ease, so human representations do afford deception.

The second paper, Autonomy: Life and Being, builds on these definitions of representation and intelligence: a genuinely autonomous system must be capable of making sense of its subjective experience, i.e. able to make representations of its own experience. An object is then determined to be alive if it can autonomously represent its own experience, by itself, on the fly, as the experience unfolds. The paper offers a new high-level cognitive architecture founded on a robot’s subjective experience. My research group is using it to develop robot minds that allow a physical mobile robot to pay attention to different aspects of its subjective experience as it forages about and, importantly, to anticipate future experiences. Check out the CMU Dive and the UTS Dodge for real examples of attention-driven anticipatory behaviors in robots.
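The papers describe this architecture at a conceptual level. Purely as a toy illustration of the ingredients involved (recording experience, attending to part of it, anticipating what comes next), here is a sketch; every name and data structure in it is my own assumption, not the architecture from the papers:

```python
from collections import deque

# Toy illustration only: a robot that records its stream of experience,
# attends to the channel that changed most, and naively anticipates the
# next reading. All names are hypothetical, not the papers' architecture.

class ExperienceStream:
    def __init__(self, horizon=50):
        self.history = deque(maxlen=horizon)  # recent experience snapshots

    def record(self, observation):
        self.history.append(observation)      # observation: dict of channels

    def attend(self):
        # Attend to whichever channel changed most between the last two moments.
        if len(self.history) < 2:
            return None
        prev, curr = self.history[-2], self.history[-1]
        return max(curr, key=lambda ch: abs(curr[ch] - prev[ch]))

    def anticipate(self, channel):
        # Naive anticipation: extrapolate the attended channel linearly.
        prev, curr = self.history[-2][channel], self.history[-1][channel]
        return curr + (curr - prev)

stream = ExperienceStream()
stream.record({"light": 0.25, "sound": 0.5})
stream.record({"light": 0.75, "sound": 0.5})
focus = stream.attend()                       # -> "light"
print(focus, stream.anticipate(focus))        # -> light 1.25
```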

Robots are destined to have a profound impact on society, indeed on all aspects of life and being. They already have capabilities in perception, information retrieval, data mining, computation, and morphology beyond our own. In order to address the legal aspects of robotics in society, we must develop a better understanding of autonomy and how it applies to robot experience. Living entities are autonomous systems, and autonomy is vital to life. Could a genuinely autonomous robot have a right to life? Our philosophical, scientific, and legal understanding of autonomy and its implications lacks depth and clarity, and as a result progress towards designing, building, managing, exploiting, and regulating autonomous systems is inhibited. Considering robot experiences during robot design will help us grapple with issues around safety, security, privacy, and liability; it will help us develop effective cues and social mores for robots as we interact, collaborate, and develop a sense of social meaning with them. Without being too controversial, it is probably safe to say that robots today have representations, derived from their experiences, that afford insect-level intelligence. But it is inevitable that robots will one day enjoy rich and meaningful experiences as vivid and intense as our own, because the only way to achieve the goal of genuine autonomy is to build robots that can make sense of their subjective experiences and attain cognitive independence from us!

When Good Robots Do Bad Things

On August 6, 2010, Ryan Calo, Dan Siciliano, Ian Kerr, and I presented a program entitled “When Good Robots Do Bad Things:  Responsibility and Liability in an Era of Personal and Service Robotics” at the American Bar Association Annual Meeting in San Francisco.  The American Bar Association Section of Science & Technology Law sponsored the program.

The program covered a series of topics, beginning with a white paper I wrote, which I will share here later, discussing reported products liability cases involving robots. I described some of the rather ordinary cases that involve claims arising from the use of robots; over time, we will see more and more varied kinds of robotics cases. Ryan Calo then discussed some of his thoughts about immunity for the robotics industry, discussed in another post on this blog. Dan Siciliano talked about some of the issues arising from reliance on robot expertise and behavior. For instance, the duty to exercise reasonable care may come to involve an average competent driver obtaining robot assistance for some driving tasks but not others. Ian Kerr talked about electronic agency and attributing the conduct of a robot to a human.

Following these initial discussions, the panelists turned to more specific legal issues. Ryan Calo discussed how robots may create risks to our privacy through their surveillance capabilities. Dan Siciliano talked about damages arising from the loss of training or acquired skills when a trained robot is damaged: can a party recover more than just the replacement cost of the robot, and if so, how much more? Ian Kerr then discussed designing robots with legal and moral concepts for the purpose of constraining their conduct. Asimov’s Three (or, more accurately, four) Laws of Robotics are an example of such a robot design philosophy, as sketched below. Professor Kerr seeks to make us consider how we might implement a system of “robo-ethics.” Finally, I discussed some of the legal issues arising from strong artificial intelligence and dramatic advances in robotics, should these advances occur along the lines some have predicted.
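To give a flavor of what such design constraints might look like in code, here is a toy sketch of an Asimov-style, precedence-ordered rule filter. The predicates (harms_human and the rest) are hypothetical stand-ins; in reality the hard problem of “robo-ethics” is recognizing harm and disobedience, not ordering the rules:

```python
# Toy sketch of a precedence-ordered "robo-ethics" filter. The predicates
# (harms_human, disobeys_order, harms_self) are hypothetical stand-ins for
# the genuinely hard perception and judgment problems.

RULES = [
    ("never harm a human",         lambda a: a.get("harms_human", False)),
    ("obey human orders",          lambda a: a.get("disobeys_order", False)),
    ("protect your own existence", lambda a: a.get("harms_self", False)),
]

def permitted(action):
    """Veto an action at the first (highest-priority) rule it violates."""
    for name, violates in RULES:
        if violates(action):
            return False, name
    return True, None

print(permitted({"move": "forward"}))                       # (True, None)
print(permitted({"move": "forward", "harms_human": True}))  # (False, 'never harm a human')
```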

If you are interested in exploring these issues in more detail or more generally legal issues of artificial intelligence and robotics, I encourage you to join the Artificial Intelligence and Robotics Committee of the ABA Section of Science & Technology Law.  Ryan Calo is one of the co-chairs of this committee.  The committee looks at these issues and more generally all aspects of law and devices that replicate or appear to replicate human mental or physical activity – learning, reasoning, communicating, manipulating objects, and so on.

This Week In Law: Robotics

Liberating Intelligent Machines with Financial Instruments

Bit Bang – rays to the future

I was one of the lucky ones selected to participate in Bit Bang 1, the first multidisciplinary doctoral course organized by the MIDE research program. During the academic year 2008-2009, we drew up roadmaps describing future technology trends up to the year 2025. In addition to lectures, textbooks, and group assignments, we made a study tour to the University of California, Berkeley and Stanford University, and visited a number of high-technology companies in Silicon Valley. The joint publication comprising our reports can be found here.

Among other things, we predicted that the industry of delay (= lawyers) will most likely be forced to commit to a position on the legal status of robots. The status of artificially intelligent agents has been debated among Artificial Intelligence (AI) scholars since the 1990s. Recently, a British government report anticipated a “monumental shift” in the area of robo-rights once robots become sufficiently intelligent.

In our multidisciplinary group, I was the only lawyer. Obviously, I had no choice but to take those predictions seriously and commit myself to the legal status of robots. Nowadays, I am writing my LL.D. thesis on the social and legal ramifications of robotics.

From fiction to execution

Bit Bang participants were encouraged to think radically. We tried to predict long-term development and impacts of digitalization. During the spring term, our group was given the research topic “intelligent machines”. As a result of our collaboration, we ended up writing a book chapter entitled “Augmenting Man”.

Here is an excerpt from the chapter (Bit Bang – rays to the future, p. 256):

“The text above describes radioactive fallout after the Chernobyl disaster. In 1979, the inherent risk of nuclear energy was realized with well-known consequences. Nuclear energy is a peaceful side effect of the Manhattan project. Even though the purpose for which nuclear technology was used was within the legitimate interests of our human society, it still caused an enormous amount of human suffering. The disaster is an example of an error made by a man responsible for a machine. The same can happen anytime in the future. In fact, since machines are getting more complex and error prone the probability for accidents is likely to increase. Consequently, we argue that without proper actions our society is heading for disaster.”

Unfortunately, our disaster scenarios and speculations were not far wrong: Oil Spill in the Gulf of Mexico, April 29th.

Robots were used to block the oil spill. Maybe they were not intelligent enough. Who knows. All we know is that oil is still leaking into the Gulf of Mexico. Arguably some kind of action is needed in the near future.

Personal robotics is an area where we can expect breakthroughs in the near future. As Ryan Calo has pointed out in his blog post, the South Korean parliament has gone as far as setting the goal of having a robot in every home by 2015. I don’t know whether the United States and Europe are lagging behind, but it is at least certain that an ever-aging population is a reality that all of Western society has to face. Personal robots to aid the elderly and the disabled, for example, are clearly an area with enormous opportunities but also social and legal controversies.

We have been working on a paper about the legal liability of intelligent machines. In our working paper, entitled “Liberating Intelligent Machines with Financial Instruments”, we propose a model for the development of intelligent systems in which the machine itself becomes a locus of accountability by means of machine insurance. The model is implemented by converging engineering design, business and market practices, and legal and financial practices. Finally, we show how to bring the ultimate machine to the market. Here is an excerpt from the working paper:

“We propose a new kind of legal approach, i.e. the ultimate insurance machine, to solve the related legal and economical difficulties in order to support the technological pursuits. In the insurance machine framework, a machine can become an ultimate machine by emancipating itself from its manufacturer/owner/operator and indeed even become a distinct legal or societal entity. This can be achieved by creating a legal framework around this ultimate machine that in itself has economical value. It can be argued that the ultimate insurance machine emancipates the machine. After all, machines, or at least the men behind the machines, are liberated. The requirement for liberation is, however, that machines have a mandatory insurance, and through that insurance, a tight link to the liability stock market.

If machines are considered legal persons, they can be considered items having rights and duties. Interestingly, this Insurance Machine Constellation does not make it necessary to decide, whether robots or software agents have to be treated as legal persons.”

And below you can find a link to the working paper:

Huttunen, Anniina, Kulovesi, Jakke, Brace, William, Lechner, Lorenz G, Silvennoinen, Kari and Kantola, Vesa, Liberating Intelligent Machines with Financial Instruments (July 1, 2010). Available at SSRN: http://ssrn.com/abstract=1633460

Section 230 Analogy To Robotics

John Gregory has this interesting post on robots and the law over at Slaw, a Canadian legal blog.  Like most such posts (many of mine included!), Gregory largely poses questions.   How do we assess liability?  Will we have to recognize robots or artificial intelligence as legal entities?  Etc.

Gregory mentions my argument that we ought to immunize consumer robotics manufacturers for the uses to which consumers put personal or service robots.  I worry that without such immunity, manufacturers will not have the incentives to create the “generative” platforms (to borrow a term from Jonathan Zittrain) that will spark a cascade of end-user innovation.  I use Section 230 of the Communications Decency Act, which immunizes websites for what users post in most instances, as a point of reference.

I agree that Section 230 immunity is not a perfect analogy—while atoms may be “the new bits,” posting information is hardly akin to modifying or programming a robot.  The basic idea, however, is the same.  Section 230 immunizes platforms for the innovative, and sometimes controversial, uses to which users put their products.  Importantly, it also immunizes platforms for steps they take to guard against unwanted content.  The analogue of this latter immunity is robotic safety features: manufacturers should not be liable, at least initially, for safety features or other limits that can occasionally lead to an accident instead of avoiding one.  (Think autonomous lane correction or passenger-side air bags in cars.)

Still, I’m hardly wedded to the precise Section 230 mechanism.  We could look to the immunity small planes receive under the General Aviation Revitalization Act, without which the consumer aviation industry might never have recovered.  Or we could look to the (more controversial) immunity enjoyed by gun manufacturers for actions taken by gun owners.  Robotics has the potential to transform and benefit society to a far greater extent than small planes or guns.  Yet one or two high-profile lawsuits—with understandably sympathetic plaintiffs and uncertain law—could scare away engagement.  We’ll still see robots, sure, but they will only do one or two things.  They will not open up a universe of possibilities, for fear that some of those possibilities will place the manufacturer or distributor in front of a hostile jury.

The point is: robotics is an industry that needs ground rules to flourish.  Those ground rules should contain incentives for consumer robotics to be as versatile, open, and safe as possible.  The alternative is a world-wide robotics revolution that leaves the United States further behind.

Thanks to Robert Richards for flagging Gregory’s post.

Robotics and the ADA

It’s starting.  Legal scholarship is beginning to address how current and near-term robotic technology presents novel challenges for existing laws.  I get emails about once a month asking me to point a student toward sources and other leads.  I just noticed (thanks to Concurring Opinions) that a Note in the current issue of the Iowa Law Review addresses the question point-blank.  It argues that recent developments in cybernetics may eventually indicate a need to reform the Americans with Disabilities Act (ADA).  Here’s an excerpt:

In the near future, advances in neuroscience and robotics will change the way our society views the human body by further reinforcing the concept of the body as a machine with interchangeable, replaceable, and upgradeable parts. This Note focuses on cybernetic technologies—mechanical, electronic devices that link directly with the human nervous system.  As these cybernetic technologies become more advanced, they will approach and then surpass ordinary human function, raising the prospect of enhancing human capabilities well beyond the current baseline standard.  This increase in the breadth and depth of the ability spectrum, spurred by the rising power and popularity of cybernetic-enhancement technologies, may lead society to view the healthy, yet unenhanced, human as “disabled.”

I’m not sure I agree with the strong version of the author’s claim.  I believe our natural bodies will remain the default for the foreseeable future.  Many of us use devices in our daily lives capable of thousands of calculations per second, but we would not call a diner remedial for doing the tip in her head.  There may be something to the notion, however, that advances in cybernetics will vindicate the Supreme Court’s decision in Sutton v. United Air Lines, which held that courts should take into account available mitigating measures when assessing disability.  (That decision was later overruled by statute.)  Judge for yourself: you can download Collin Bockman’s full Note by clicking here (PDF).

Computers Freedom Privacy… And Robotics

ACM Computers Freedom Privacy is in its 20th year.  This year was especially exciting for me in that robots entered the mix.  My panel on the topic featured forecaster and essayist Paul Saffo, EFF’s Brad Templeton, and philosopher Patrick Lin, and was moderated by Wired’s Gary Wolf.  You can find a video recording of our panel here.  I also spoke on the Dr. Katherine Albrecht Radio Show, which was broadcasting live from the conference.  Click here to listen.

Should The Law Punish Robot Tasks Differently?

I attended a fascinating thesis defense today on the subject of human-robot interaction (HRI) by Stanford PhD candidate Victoria Groom. HRI experiments apparently tend to focus on human encounters with robots; few studies test the psychology behind robot operation. Groom’s work explores how we feel about the tasks we perform through robots. One of the more interesting questions she and her colleagues ask is: to what extent do we feel like it’s really us performing the task? The question is important where, as in the military, people work through robots to carry out morally charged tasks. And the answer might have repercussions for how we think about evaluation and punishment.

It turns out that how we feel about tasks we perform through robots varies depending on certain conditions. Groom and her colleagues show how people react differently if the robot is more or less anthropomorphic, if people teleoperate the robot or verbally instruct it, and if the robot is real or simulated digitally. In one study, she found that actual, autonomous robots promote self-extension, i.e., the feeling that the technology is a part of you. In another (PDF), she found that anthropomorphic robots tend to inhibit self-extension. We tend to attribute the actions of a humanoid robot to the robot, not to ourselves, at least relative to a non-humanoid robot (like a robotic car).

This area of study has the potential to inform multiple aspects of the law. One is punishment. Depending on our reasons for punishing—rehabilitation, deterrence, or retribution—the introduction of robots may have distorting effects. Consider the case of a soldier who misreads a situation and commands a ground or air robot to fire on civilians. If we punish the soldier to rehabilitate him, he or others may feel a sense of injustice because no blame appears to flow to the robot. If we punish out of retribution or to deter, the impact of punishment may be lessened by a first- or third-party perception that the robot is partly to blame.

You may be thinking: are you saying we should make a show of punishing the robot? Maybe. But more likely we should punish the soldier differently. Groom suggests (PDF) that we can head off this issue to some extent by designing robots for their particular anticipated use. Where we want to remind the human operator of her culpability, for instance, on the battlefield, we may want to maximize self-extension and presence. Where we want to avoid trauma—for instance, in search and rescue operations—we may want to do the opposite. But no design solution is perfect. Add punishment to the growing list of legal issues that robots do or will implicate.

Legal Aspects Of Autonomous Driving

Sven Beiker, Jan Becker, and I organized a workshop around the legal and policy hurdles to autonomous driving.  The workshop was hosted by the Center for Automotive Research at Stanford (CARS).  It featured discussions of liability, privacy, human-car interfaces, and the role of government.  The workshop resulted in a working group that includes industry and academia.  You can download the one-pager here (PDF).  Excerpt below.

While fully autonomous driving (all driving tasks performed by the vehicle without driver interaction) presents the long-term research vision, “advanced driver assistance systems” (e.g. adaptive cruise control, lane keeping assistance) already exist today. Next generation systems with higher degree of autonomy are currently being developed and will be released soon (e.g. automated parking, automated highway driving, etc.).

The workshop attendees from automotive industry, consumer organizations, law firms, government agencies, and academia assessed the situation of autonomous driving in both engineering and law. A consensus emerged that autonomous driving has tremendous potential to improve safety, efficiency, and mobility. However, policies are needed to establish a legal framework for the use of autonomous vehicles in traffic, and there is a clear need to engage in consumer and policy-maker education about the potential benefits of autonomous vehicles. Therefore, a process to develop public policies in conjunction with the evolving technologies is needed in the short term.
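As a concrete illustration of what one of the assistance systems mentioned in the excerpt computes, here is a simplified sketch of adaptive cruise control logic: a proportional controller that trades off holding a set speed against keeping a safe time gap to the car ahead. The gains and thresholds are invented for illustration and are not drawn from any production system.

```python
# Simplified adaptive-cruise-control logic: hold a set speed unless the
# time gap to the lead vehicle falls below a safe threshold, in which case
# slow down proportionally. All gains and thresholds are illustrative.

SET_SPEED = 30.0   # target speed, m/s (~108 km/h)
SAFE_GAP = 2.0     # desired time gap to lead car, seconds
K_SPEED = 0.5      # proportional gain on speed error
K_GAP = 2.0        # proportional gain on gap error

def acceleration(own_speed, lead_distance, lead_speed):
    """Return commanded acceleration in m/s^2 (positive = speed up)."""
    cruise_cmd = K_SPEED * (SET_SPEED - own_speed)
    if lead_distance is None:
        return cruise_cmd                    # free road: plain cruise control
    time_gap = lead_distance / max(own_speed, 0.1)
    if time_gap >= SAFE_GAP:
        return cruise_cmd
    # Too close: command follows the gap error plus the closing speed.
    follow_cmd = K_GAP * (time_gap - SAFE_GAP) + (lead_speed - own_speed)
    return min(cruise_cmd, follow_cmd)

print(acceleration(28.0, lead_distance=None, lead_speed=None))  # -> 1.0 (speed up)
print(acceleration(28.0, lead_distance=40.0, lead_speed=25.0))  # -> about -4.14 (brake)
```

Even in this toy form, the controller is making safety-relevant choices about when and how hard to brake, which is precisely why the liability and policy questions discussed at the workshop attach to these systems.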