
Section 230 Immunity For Personal Robots

TechDirt has this write-up of a paper I'm working on. The subject is whether robot manufacturers and distributors may need immunity from lawsuits for what consumers end up doing with or through robots. I end up arguing for something similar (though not identical) to the immunity websites enjoy for user content under the Communications Decency Act. I recently gave the paper as a talk at the invitation of the Symbolic Systems department at Stanford, and I'm trying to incorporate what I learned from that talk and other feedback.

Forget Asimov's three laws of robotics. These days, there are questions about what human laws robots may need to follow. Michael Scott points us to an interesting, if highly speculative, article on legal issues related to robots, asking whether a new arena of law will need to be developed to handle liability for actions taken by robots. There are difficult questions about who would be liable: those who built the robot? Those who programmed it? Those who operated it? Others? The robot itself? While the article seems to go a little overboard at times (claiming that there's a problem if teens program a robot to do something bad, since teens are "judgment proof" due to a lack of money, which hardly prevents liability for teens in other suits), it does make some important points.

Key among those is the point that if liability is too high for the companies doing the innovating in the US, it could lead to the industry developing elsewhere. As a parallel, the article brings up the Section 230 safe harbors of the CDA, which famously protect service providers from liability for actions by users, noting that this is part of why so many more internet businesses have been built in the US than elsewhere (there are other issues too, but such liability protections certainly help). So, what would a "Section 230"-like liability safe harbor look like for robots?
