In 1974, as a means to cast new light on the role of intentionality in consciousness and experience, Thomas Nagel asked a simple question: "What is it like to be a bat?" His question is still thought-provoking today, as it furthers Wittgenstein's suggestion that if a lion could speak, we could not understand him, in a more tangible but no less tantalizing way.
The field of robotics is beginning to provide new perspectives and insights on many age-old philosophical questions and musings. Given the enormous potential impact robots will have on individuals and society, there is an exciting new opportunity to reconceptualize the role of intention and embodiment in subjective experience, and to explore the legal implications of robots in society.
The largely rationalist approach to robot design was challenged in the early 90s by Rodney Brooks and his behavior-based robots, which used sensory data to drive action choices without the need to build fancy, fragile, high-maintenance models of the world. Brooks took the innovative stance that robots do not need a model of the world to determine what to do next because they can simply sense it directly; he advocated the idea that the world is its own best representation. His view challenged the prevailing model-based approaches, and although it has since been found lacking support for high-level capabilities like planning and anticipation, Brooks certainly enriched the field: mainstream robotics now incorporates, and no longer questions, his once-extreme position.
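To make the behavior-based idea concrete, here is a minimal sketch of a layered, subsumption-style controller: each behavior reads raw sensor values and either proposes an action or defers, with higher-priority layers subsuming lower ones and no world model anywhere. The sensor names, thresholds, and behaviors are hypothetical illustrations, not Brooks' actual designs.

```python
# Sketch of a behavior-based controller in the spirit of Brooks'
# subsumption architecture. Behaviors map raw sensor readings directly
# to actions; no internal model of the world is built or maintained.
# All sensor keys, thresholds, and action names are hypothetical.

def avoid_obstacle(sensors):
    # Highest priority: react directly to a nearby obstacle.
    if sensors["front_distance"] < 0.3:  # metres; hypothetical threshold
        return "turn_left"
    return None  # no obstacle: defer to lower layers

def seek_light(sensors):
    # Middle priority: steer toward the brighter photocell.
    if sensors["light_left"] > sensors["light_right"]:
        return "turn_left"
    if sensors["light_right"] > sensors["light_left"]:
        return "turn_right"
    return None  # balanced light: defer

def wander(sensors):
    # Lowest priority: default action when nothing else fires.
    return "forward"

# Layers ordered from highest to lowest priority.
LAYERS = [avoid_obstacle, seek_light, wander]

def act(sensors):
    """Return the action of the highest-priority behavior that fires."""
    for behavior in LAYERS:
        action = behavior(sensors)
        if action is not None:
            return action

# Obstacle avoidance subsumes light seeking when both would fire:
print(act({"front_distance": 0.2, "light_left": 1.0, "light_right": 2.0}))
# -> turn_left
```

The point of the sketch is the control flow, not the particular behaviors: each loop iteration consults the world afresh through the sensors, so the robot's "representation" of its situation is the situation itself.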
If you are interested in these topics, take a look at two papers that connect behavior-based approaches to robot design with the philosophical stance that gives primacy to subjective experience. The first paper, Representation = Grounded Information, provides a new definition of intelligence, a concept that has proven difficult to define and has previously been described by a circumscribed but incomplete set of characteristics, such as the ability to adapt to change and to learn from mistakes. The paper defines intelligence as the capability that produces representations. Representation-making capabilities underpin the traits associated with intelligence: adapting to change requires an ability to make new representations for new situations, and learning involves the discovery and incorporation of new examples or whole new representations. Furthermore, the measure of intelligence can be given by measures of representation affordance. For example, deception is associated with high-end intelligence; reptilian representations do not afford deception, but people over the age of eight represent the minds of others with ease, so human representations do afford deception.
The second paper, Autonomy: Life and Being, builds on these definitions of representation and intelligence: a genuinely autonomous system must be capable of making sense of its subjective experience, i.e. able to make representations of its own experience. An object is then determined to be alive if it can autonomously represent its own experience, by itself, on the fly, as it is experiencing. The paper offers a new high-level cognitive architecture founded on a robot's subjective experience. My research group is using it to develop robot minds that allow a physical mobile robot to pay attention to different aspects of its subjective experience as it forages about and, importantly, to anticipate future experiences. Check out the CMU Dive and the UTS Dodge for real examples of attention-driven anticipatory behaviors in robots.
Robots are destined to have a profound impact on society, indeed on all aspects of life and being. Already they have capabilities in perception, information retrieval, data mining, computation, and morphology beyond our own. To address the legal aspects of robotics in society, we must develop a better understanding of autonomy and how it applies to robot experience. Living entities are autonomous systems, and autonomy is vital to life. Could a genuinely autonomous robot have a right to life? Our philosophical, scientific, and legal understanding of autonomy and its implications lacks depth and clarity, and as a result progress toward designing, building, managing, exploiting, and regulating autonomous systems is inhibited.

Considering robot experiences during robot design will help us grapple with issues around safety, security, privacy, and liability; it will help us develop effective cues and social mores for robots as we interact, collaborate, and develop a sense of social meaning with them. Without being too controversial, it is probably safe to say that robots today have representations, derived from their experiences, that afford insect-level intelligence. It is inevitable, however, that they will come to enjoy rich and meaningful experiences as vivid and intense as our own, because the only way to achieve genuine autonomy is to build robots that can make sense of their subjective experiences and attain cognitive independence from us!