First: hi. The idea is that I will post sporadically on topics relating to Robots, Freedom, and Privacy.
At the east coast hacker conference HOPE 9 the weekend of July 13-15, 2012, Pacific Social Architecting Corporation’s Tim Hwang reported on experiments the company has been conducting with socialbots. That is, bot accounts deployed on social networks like Twitter and Facebooks for the purpose of studying how they can be used to influence and alter the behavior and social landscapes of other users. Their November 2011 paper (PDF) gives some of the background.
- Early in 2011, Hwang conducted a competition to study socialbots. Teams scored points by getting their bot-controlled Twitter accounts (and any number of supporting bots) to make connections with and elicit social behavior from an unsuspecting cluster of 500 online users. Teams got +1 point for mutual follows; +3 points for social responses; and -15 if the account was detected and killed by Twitter. The New Zealand team won with bland, encouraging statements; no AI was involved but the bot’s responses were encouraging enough for people to talk to it. A second entrant used Amazon’s Mechanical Turk; another user could ask it a direct question and it would forward it to the MT humans and return the answer. A third effort redirected tweets randomly between unconnected groups of users talking about the same topics.
- A bot can get good, human responses to “Are you a bot” by asking that question of human users and reusing the responses.
- In the interests of making bots more credible (as inhabited by humans) it helped for them to take enough hours off to seem to go to sleep like humans.
- Many bot personalities tend to fall apart in one-to-one communication, so they wouldn’t fare well in traditional AI/Turing test conditions – but online norms help them seem more credible.
- Governments are beginning to get into this. The researchers found bots active promoting both sides of the most recent Mexican election. Newt Gingrich claimed the number of Twitter followers he had showed that he had a grass roots following on the Internet; however, an aide who had quit disclosed that most of his followers were fakes, boosted by blank accounts created by a company hired for the purpose. Experienced users are pretty quick to spot fake accounts; will we need crowd-based systems to protect less sophisticated users (like collaborative spam-reporting systems)? But this is only true of the rather crude bots we have so far. What about more sophisticated ones? Hwang believes the bigger problem will come when governments adopt the much more difficult-to-spot strategy of using bots to “shape the social universe around them” rather than to censor.
Hwang noted the ethical quandary raised by people beginning to flirt with the bot: how long should the bot go on? Should it shut down? What if the human feels rejected? I think the ethical quandary ought to have started much earlier; although the experiment was framed in terms of experimenting with bots in reality the teams were experimenting on real people, even if it was only for two weeks and on Twitter.
Hwang is in the right place when he asks “Does it presage a world in which people design systems to influence networks this way?” It’s a good question, as is the question of how to defend against this kind of thing. But it seems to me typical of the constant reinvention of the computer industry that Hwang had not read – or heard of – Andrew Leonard’s 1997 book Bots: The Origin of New Species, which reports on the prior art in this field, experiments with software bots interacting with people through the late 1990s (I need to reread it myself). So perhaps one of the first Robots, Freedom, and Privacy dangers is the failure to study past experiments in the interests of avoiding the obvious ethical issues that have already been uncovered.
Technological development has faced criticism. The efficiency brought by industrialization and computer technology is expected to eventually lead to unpleasant outcome. The critics have developed by means of science-fiction stories about a future filled with technology. It is assumed that people stagnate and indulge only their animal desires. In the second scenario, the people become insensitive robots relying only on pure reason. (Steve Fuller, New frontiers in science and technology (Polity, Cambridge 2007) 232 p)
Herbert Marcuse was of the opinion that the logos of technology equals to the logos of slavery. People have become tools, even if it was thought that the technology releases persons. In his book One-dimensional Man published in 1964, Marcuse says that in the historical continuum man has been and will be the master of the other man. This is a social reality which societal changes do not affect. The basis for domination, however, has changed over the ages. Personal dependence has been replaced by an objective order of dependency, such as the dependency of a slave to the master has changed to the dependence of the economic laws and of the market. In accordance with Marcuse this higher form of rationality deprives natural and spiritual resources more efficiently and shares profits in a new way. A man can be seen as a slave in the production machinery and there is a battle of the existence in the production machinery. The battle affects with the destructive power the production machinery and its parts, such as builders and users.
Marcuse’s ideas certainly give some food for thought. And while I don’t completely agree with them. The debate continues in our new blog: http://legalfuturology.blogspot.com
Let me give you a brief introduction:
The title of the blog, Legal Futurology, contains a certain degree of deliberate ambiguity. You, Dear Reader, may wonder, what kind of a future we are talking about and what the law has to do it. At this point we don’t expect we will be writing about futures (the financial instruments) or about future developments in the law in general, say, regarding the resolution of the financial crisis or the next EU treaty or planned directive this or statute that or what the court will (or should) decide in Rubber v. Glue or whatever. While we cannot promise to avoid such topics altogether (classic evasive move there), what we have in mind are some very specific aspects of the future and the law.
The future we are referring to is that of the William Gibson quotation ‘The future is already here — it’s just not very evenly distributed.’ It is also that of Richard Susskind’s book Future of Law. And since at least one of us is a board-certified Legal Realist, there might be the odd dash of future in the sense of Prediction Theory thrown in as well.
More concretely, we both are researchers at the University of Helsinki working at the intersection of law and artificial intelligence. Our perspectives are quite different, as one of us (Anniina) studies AI as the object of legal regulation, whereas the other (Anna) studies AI as a tool to facilitate legal information retrieval or even do legal reasoning by itself. These complementary perspectives should open up for a broader range of topics than either one of us could do by herself. We are also planning to take advantage of this in more traditional fora through co-authored publications (stay tuned!).
Not many people in the legal academy study artificial intelligence or robotics. One fellow enthusiast, Kenneth Anderson at American University, posed a provocative question over at Volokh Conspiracy yesterday: will the Nobel Prize for literature ever go to a software engineer who writes a program that writes a novel?
What I like about Ken’s question is its basic plausibility. Software has already composed original music and helped invent a new type of toothbrush. It does the majority of stock trading. Software could one day write a book. A focus on the achievable is also what I find compelling about Larry Solum’s exploration of whether AI might serve as an executor of a trust or Ian Kerr’s discussion of the effects of software agents on commerce. Read the rest of this entry »
Doctoral research plan. Please feel free to comment!
“Michigan held off Iowa for a 7-5 win on Saturday. The Hawkeyes (16-21) were unable to overcome a four-run sixth inning deficit. The Hawkeyes clawed back in the eighth inning, putting up one run.” This piece of sports news was generated by an intelligent system. It was written by Narrative Science’s computers in the United States. It was not created, nor edited by a human, which means that it is completely computer generated. This particular text is likely not protected by copyright, as it is not sufficiently original and creative. However, when the software evolves and becomes able to create writings that fulfill the prerequisites for copyright protection, the question of authorship becomes relevant. As lawyers, we will then face the question of how to approach this issue under the copyright laws. Another example: Google’s intelligent car was involved in an accident in August 2011. This time, a human was responsible for the accident, but what if the autonomous vehicle had been considered responsible? How would we fit the case within the legal regime governing legal liability? Further, what if an intruder breaks into the video camera of a robot and spies on children in their bedroom? How would this problem be approached from the perspective of the privacy laws? Read the rest of this entry »
Whether it was flying cars or jetpacks, we were all promised a compelling vision of the future at some point in our childhoods. And, although it may seem like science fiction, autonomous vehicles—cars that drive themselves—are more than just a promise; they are a growing reality. The widespread coverage in October 2010 of Google’s fleet of Toyota Prii that continue to navigate the highways of California was a clear message to the world that this technology is fact, not fiction. Questions remain, however, as to whether or not the world and the marketplace are ready for this technology.
To explore this topic, Beverly Lu and Matthew Moore (two PhD students at Caltech) conducted an assessment of the current status of autonomous vehicle technologies in the U.S—mainly through interviews with experts in the field. They focused on those technologies that are applicable to modern cars driven on existing, unmodified public roads. Their geographical focus was mainly on the west coast of the United States where a hotbed of activity regarding this technology is located.
To download the full article, click here.
Is it lawful for a car to drive itself? In the absence of any law to the contrary, it should well be. A new bill is working its way through the Nevada state legislature that would remove any doubt in that state. A.B. 511 directs the Nevada Department of Transportation to authorize autonomous vehicle testing in certain geographic areas of Nevada. Should vehicles meet Nevada DOT standards, they would be permitted to “operate on a highway.”
The bill defines not only autonomous vehicle, but artificial intelligence as well. AI is “the use of computers and related equipment to enable a machine to duplicate or mimic the behavior of human beings.” An autonomous vehicle uses “artificial intelligence, sensors, and [GPS] coordinates to drive itself.”
To be clear: autonomous vehicles are not yet the law of the land in Nevada. This bill must pass through two committees and receive a hearing before it can be voted on and become law. Some preliminary thoughts on the bill in its present form follow.
1) It is wonderful that the state of Nevada is being so proactive. The potential safety, mobility, efficiency, and other benefits of autonomous vehicles are enormous. Creating a process by which to test and certify such vehicles represents an invaluable step forward.
2) That said, the bill’s definition of autonomous vehicles is unclear, even circular. Autonomous driving exists on a spectrum. Many vehicles available today have autonomous features, while falling short of complete computer control. Surely the bill’s authors do not intend to require that, for instance, today’s self-parking Lexus LS 460L be tested and certified.
3) My personal guess is that it was John Markoff’s October 2010 coverage of Google’s autonomous vehicles that sparked this bill. I base this on the presence of that article on Nevada’s website under Exhibits/Senate.
One hopes A.B. 511 is the beginning of an important conversation about the promise and perils of autonomous driving in the United States. This Center, in conjunction with the Center for Automotive Research at Stanford, has started a program dedicated specifically to the Legal Aspects of Autonomous Driving. We just hired a fellow. Look to this blog in the coming months for more on this topic.
The bill itself is here. Thanks to Amanda Smith for her insights.