JZ passed along this new paper out of Japan about criminal liability in the event that a robot “decides” to end a human life. Here’s the abstract:
In 1981, a 37-year-old Japanese employee of a motorcycle factory was killed by an artificial-intelligence robot working near him. The robot erroneously identified the employee as a threat to its mission, and calculated that the most efficient way to eliminate this threat was by pushing him into an adjacent operating machine. Using its very powerful hydraulic arm, the robot smashed the surprised worker into the operating machine, killing him instantly, and then resumed its duties with no one to interfere with its mission. Unfortunately, this is not science fiction, and the legal question is: Who is to be held liable for this killing?
Northwestern Law professor John McGinnis, meanwhile, recommends government sponsorship of "friendly AI" in his recent essay on the dangers and promise of artificial intelligence. Here's an excerpt:
The acceleration of technology can create unparalleled cascades of benefits as well as new risks of catastrophe. This acceleration could potentially endanger the future of the human race, but could also potentially radically extend the life span of individual humans. If such acceleration is the fundamental phenomenon of our age, the assessment of the consequences of technology is an essential task for society. As a result, the government has a particular interest in accelerating the one technology that may analyze the rest of technological acceleration—AI. The question of what degree and what form of support is warranted to boost the acceleration of this technology to help us with decisionmaking about the rest of accelerating is subtle and difficult. But that is the right question to ask, not whether we should retard its development with complex regulations, or still worse, relinquish it.
Still, for my money, the starting place is Lawrence Solum's classic article Legal Personhood for Artificial Intelligences and Sam Lehman-Wilzig's 1992 essay Frankenstein Unbound.
Patrick Lin, George Bekey, and Keith Abney are editing a new book, Robot Ethics: The Ethical and Social Implications of Robotics, forthcoming from MIT Press. I’m writing a chapter on the potential impact of robots on privacy. Here’s an excerpt:
According to a popular quote by science fiction writer William Gibson, “The future is already here. It just hasn’t been evenly distributed yet.” Gibson’s insight certainly appears to describe robotics. One day soon robots will be a part of the mainstream, profoundly affecting our society. The preceding chapter has attempted to introduce a variety of ways in which robots may implicate the set of societal values loosely grouped under the term privacy. The first two categories of impact—surveillance and access—admit of relatively well-understood ethical, technological, and legal responses. The third category, however, tied to social meaning, presents an extremely difficult set of challenges. The “harms” at issue are hard to identify, measure, and resist. They are in many instances invited. And neither law nor technology has obvious tools to combat them. Our basic recourse as creators and consumers of social robots is to proceed carefully.
I’m presenting a draft of the chapter at a cyberlaw colloquium at American University Washington College of Law organized by Michael Carroll. You can download the paper here.
TechDirt has this write up of a paper I’m working on. The subject is whether robot manufacturers and distributors may need immunity from lawsuit for what consumers end up doing with or through robots. I end up arguing for something similar (though not identical) to the immunity websites enjoy for user content under the Communications Decency Act. I gave the paper as a talk recently at the invitation of the Symbolic Systems department at Stanford. I’m trying to incorporate what I learned from this talk and other feedback.
Forget Asimov’s three laws of robotics. These days, the question is what human laws robots may need to follow. Michael Scott points us to an interesting, if highly speculative, article on the legal issues robots raise, asking whether a new arena of law will need to be developed to handle liability for actions taken by robots. Who would be liable? Those who built the robot? Those who programmed it? Those who operated it? Others? The robot itself? While the article seems to go a little overboard at times (claiming that there’s a problem if teens program a robot to do something bad since teens are “judgment proof” for lack of money, though that hardly stops liability against teens in other suits), it does make some important points.
Key among those is the point that if liability is too high for the companies doing the innovating in the US, the industry could develop elsewhere instead. As a parallel, the article brings up the Section 230 safe harbor of the CDA, which famously protects service providers from liability for the actions of their users, noting that this is part of why so many more internet businesses have been built in the US than elsewhere (there are other factors too, but such liability protections certainly help). So what would a “Section 230”-like liability safe harbor look like for robots?
Hoover Institution fellow and American University law professor Ken Anderson spoke to students last year about the use of unmanned drones in warfare. The event was unique in that it brought together three diverse student groups:
In November of last year, we had a great event around the legal ramifications of robotics. Ryan Calo moderated a panel with Paul Saffo, Ken Anderson, and Dan Siciliano. Australian roboticist Mary-Anne Williams introduced. The event was covered by the Associated Press, the San Francisco Chronicle, and many other news outlets and blogs.
Description and video below.
Once relegated to factories and fiction, robots are rapidly entering the mainstream. Advances in artificial intelligence translate into ever-broadening functionality and autonomy. Recent years have seen an explosion in the use of robotics in warfare, medicine, and exploration. Industry analysts and UN statistics predict equally significant growth in the market for personal or service robotics over the next few years. What unique legal challenges will the widespread availability of sophisticated robots pose? Three panelists with deep and varied expertise discuss the present, near future, and far future of robotics and the law.
Kenneth Anderson, Professor of Law, American University; Research Fellow, Hoover Institution on War, Revolution and Peace at Stanford University
Paul Saffo, Consulting Associate Professor, Stanford University; Visiting Scholar, Stanford Media X; Columnist, ABCNews.com
F. Daniel Siciliano, Faculty Director, Arthur and Toni Rembe Rock Center for Corporate Governance; Senior Lecturer in Law and Associate Dean for Executive Education and Special Programs, Stanford Law School
Moderator: M. Ryan Calo, Residential Fellow, Stanford Center for Internet and Society
We were delighted to welcome Yoshi Kohno of the University of Washington to discuss his work on the privacy and security vulnerabilities of household robots. Not only is it relatively easy to tap into the cameras and microphones of some commercially available robots, but an attacker could also drive a robot toward whatever he wants to see and use multiple robots to coordinate an attack. Dr. Kohno and his colleagues, including PhD candidate Tamara Denning, recommend encryption and other design improvements, as well as user education. Event details here. You can download the paper here (requires sign-up). This event was co-sponsored by our friends at SLATA and the Center for Law & the Biosciences and also covered the privacy and security of embedded medical devices.
On April 14, 2010, the Law, Science, and Technology program partnered with Adept Technology, Willow Garage, Bosch, and Neato Robotics to show off Stanford University and other Bay Area robots for National Robotics Week. We had more than 20 robots on display and over a thousand people in attendance. For a full list of displays, see the event page.
You can see pictures of the event here (CNET) and here (Willow Garage).