Saturday, 4 February 2012

Anthropomorphism and the social robot

Human-like features designed into a machine can build on our existing social understanding. It is the explicit design of anthropomorphic features, such as a head with eyes and a mouth, that can facilitate social interaction. This highlights the point that social interaction is fundamentally observer-dependent, and exploring the mechanisms underlying anthropomorphism provides the key to the social features required for a machine to be socially engaging.

A robot’s capacity to engage in meaningful social interaction with people inherently requires a degree of anthropomorphic, or human-like, qualities, whether in form, behaviour, or both. But this is not a simple problem. As discussed by Foner, strong anthropomorphic paradigms in HCI inflate a user’s expectations of a system’s performance. Similarly in social robotics, the ideal paradigm is not necessarily a synthetic human. Successful design in both software and robots involves a balance of illusion: the user is led to believe in the sophistication of the system in areas where its failings will not be encountered, and is told not to expect competence where the system has none.

The social robot can be perceived as the interface between man and technology. It is the use of socially acceptable functionality in a robotic system that helps break down the barrier between the digital information space and people. It may herald the first stages where people stop perceiving machines as simply tools.

Anthropomorphism (from the Greek word anthropos for man, and morphe, form/structure), as used in this paper, is the tendency to attribute human characteristics to inanimate objects, animals and others with a view to helping us rationalise their actions. It is attributing cognitive or emotional states to something based on observation in order to rationalise an entity’s behaviour in a given social environment. This rationalisation is reminiscent of what Dennett calls the intentional stance, which he explains as “the strategy of interpreting the behaviour of an entity (person, animal, artifact, whatever) by treating it as if it were a rational agent who governed its ‘choice’ of ‘action’ by a ‘consideration’ of its ‘beliefs’ and ‘desires’”. This is effectively the use of projective intelligence to rationalise a system’s actions.

The role of anthropomorphism in robotics in general should not be to build a synthetic human. Two motivations for employing anthropomorphism are, first, the design of a system that has to function in our physical and social space (i.e. using our tools, driving our cars, climbing stairs) and, second, taking advantage of it as a mechanism through which social interaction with people can be facilitated. It constitutes the basic integration of “humanness” into a system, from its behaviours, to its domains of expertise and competence, to its social environment, in addition to its form. Once domestic robots progress beyond the washing machine and start moving around our physical and social spaces, their role and our dealings with them will change significantly. Their success will come from embracing a balance of these anthropomorphic qualities for bootstrapping social interaction, together with their inherent advantages as machines, rather than treating the machine nature as a disadvantage.

Still the questions remain. Is there a notion of “optimal anthropomorphism”? What is the ideal set of human features that could supplement and augment a robot’s social functionality? When does anthropomorphism go too far?

A robot that does not embrace the anthropomorphic paradigm in some form risks a persistent bias against its acceptance into the social domain, a bias that becomes apparent when people ascribe mental states to robots. As shown in several psychological and human–robot interaction experiments, and as pointed out by Watt, familiarity may also ease social acceptance and even increase people’s tendency to anthropomorphise.

Roboticists have recently started to address what modalities beyond physical construction could be employed for the development of social relationships between a physical robot and people. Important arenas include expressive faces, often highlighting the importance of making eye contact and incorporating face- and eye-tracking systems. Examples demonstrate two methodologies for portraying artificial emotion states through facial gestures: a visually iconic construction, or a strongly realistic human-like one (i.e. with synthetic skin and hair). A more iconic head makes explicit the degree of anthropomorphism employed in the robot’s construction and functional capabilities, and thereby constrains and effectively manages that anthropomorphism. Building mannequin-like robotic heads, where the objective is to hide the “robotic” element as much as possible and blur the issue of whether one is talking to a machine or a person, results in effectively unconstrained anthropomorphism and a fragile manipulation of robot–human social interaction; this is reminiscent of Shneiderman’s discontent with anthropomorphism. Mori nicely illustrated the problems of developing anthropomorphic facial expressions on robotic heads with “The Uncanny Valley”.
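
The eye-contact behaviour mentioned above can be sketched with a simple heuristic: treat a detected face as “making eye contact” when its centre lies near the camera’s optical axis. This is an illustrative assumption, not a description of any particular robot’s tracking system; the function name and threshold are hypothetical.

```python
def is_eye_contact(face_center, frame_size, tolerance=0.15):
    """Hypothetical heuristic: a detected face counts as making eye
    contact when its centre lies within `tolerance` (as a fraction of
    the frame dimensions) of the image centre."""
    cx, cy = frame_size[0] / 2, frame_size[1] / 2
    dx = abs(face_center[0] - cx) / frame_size[0]
    dy = abs(face_center[1] - cy) / frame_size[1]
    return dx <= tolerance and dy <= tolerance

# A face near the centre of a 640x480 frame counts as eye contact;
# one far off-axis does not.
print(is_eye_contact((330, 250), (640, 480)))  # True
print(is_eye_contact((50, 50), (640, 480)))    # False
```

In a real system the face centre would come from a face-detection pipeline; the point here is only that a small amount of geometry suffices to drive an expressive “attending” behaviour.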

Consequently, it can be argued that the most successful implementation of expressive facial features is through more mechanistic and iconic heads.

From an engineering perspective, strong anthropomorphism as found in synthetic humans is more difficult to realise, but as a route to “solving” AI through hard research it is the simpler one to justify. A more stringent research approach is not to throw everything at the problem and force some answer, however constrained, but rather to explore and understand the minimal engineering solution needed to achieve a socially capable artificial “being”.

A useful analogy is not trying to replicate a bird in order to fly, but rather recognising those qualities of flight that led to the invention of the plane. From a social robotics perspective, the issue is not whether people believe that robots are “thinking”, but rather taking advantage of the social expectations people retain in social settings. If one bootstraps on these expectations, i.e. through exploiting anthropomorphism, the social robot can be made less frustrating to deal with and can be perceived as more helpful.

Contrary to the popular belief that the human form is the ideal general-purpose functional basis for a robot, robots have the opportunity to become something different. Through strong mechanistic and biological solutions to engineering and scientific problems, including such measurement devices as Geiger counters, infra-red cameras, sonar, radar and bio-sensors, a robot can find a clear functionality in our physical and social space. It can augment our social space rather than “take over the world”. A robot’s form should therefore not adhere to the constraining notion of strong humanoid functionality and aesthetics, but rather employ only those characteristics that facilitate social interaction with people when required. To facilitate the development of complex social scenarios, basic social cues can bootstrap a person’s ability to develop a social relationship with a robot. Stereotypical communication cues provide obvious mechanisms for communication between robots and people: nodding or shaking of the head are clear indications of acceptance or rejection (given a defined cultural context). Similarly, work on facial gestures with Kismet and on motion behaviours demonstrates how mechanistic-looking robots can be engaging and can greatly facilitate one’s willingness to develop a social relationship with a robot.
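
The nod/shake cue described above can be read by a robot with very little machinery: compare the accumulated horizontal and vertical motion of a tracked head. The sketch below is a minimal illustration of that idea, under the assumption that per-frame head-centre displacements are available; the function name and thresholds are hypothetical.

```python
def classify_head_gesture(displacements):
    """Classify a sequence of per-frame head-centre displacements
    (dx, dy) as a nod (motion mostly vertical), a shake (motion
    mostly horizontal), or neither.

    A hypothetical sketch of reading a stereotypical cue; the 2x
    dominance ratio is an arbitrary illustrative threshold."""
    horiz = sum(abs(dx) for dx, _ in displacements)
    vert = sum(abs(dy) for _, dy in displacements)
    if vert > 2 * horiz:
        return "nod"    # acceptance, in the assumed cultural context
    if horiz > 2 * vert:
        return "shake"  # rejection
    return "none"

print(classify_head_gesture([(0, 5), (1, -6), (0, 5)]))   # nod
print(classify_head_gesture([(5, 0), (-6, 1), (5, 0)]))   # shake
```

The cultural caveat in the text matters here: the same motion pattern maps to different meanings in different cultures, so the labels, not the geometry, are the context-dependent part.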

Building a robot based on a human constrains its capabilities. The human form and function are not the ultimate design reference for a machine, because a machine is not human. This does not challenge our humanity; rather, it frees the robot to be useful to us. Blurring the boundary between robot and human is unnecessary for a successful social robot. What is needed is a balance of anthropomorphic features that hint at certain capabilities and meet our expectations of a socially intelligent entity (and possibly even surprise and surpass those expectations).

When the robot itself is perceived as making its own decisions, people’s perspective of computers as simply tools will change. Consequently, a non-fear-inducing function and form are paramount to people’s willingness to enter into a social interaction with a robot.

As highlighted in the PINO research, “the aesthetic element [plays] a pivotal role in establishing harmonious co-existence between the consumer and the product”.

If a robot could perform a “techno handshake”, monitoring the person’s stress level through galvanic skin response and heart-rate monitoring and using an infrared camera system to measure blood flow in the person’s face, a traditional greeting takes on a new meaning. The ability of a robot to gather bio-information and use sensor fusion to augment its diagnosis of the human’s emotional state, through for example the “techno handshake”, illustrates how a machine can have an alternative technique for obtaining social information about people in its social space. This strategy effectively increases people’s perception of the social robot’s “emotional intelligence”, thus facilitating its social integration. A machine’s ability to know information about us using measurement techniques unavailable to us is an example of how the social robot, and robots in general, can have their own function and form without challenging ours. What would be the role of such a social robot? If therapeutic, say for psychological analysis and treatment, would such a robot require full “humanness” to allow this intensity of social interaction to develop?
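
The sensor fusion in the “techno handshake” can be sketched as a weighted combination of normalised bio-signals into a single arousal score. The text names the sensors but not the fusion method, so the weighting scheme below is a hypothetical, minimal choice for illustration.

```python
def fuse_arousal(gsr, heart_rate, face_blood_flow,
                 weights=(0.4, 0.4, 0.2)):
    """Fuse three normalised bio-signals (each pre-scaled to 0..1)
    into a single arousal score in 0..1.

    A hypothetical linear fusion; the weights are illustrative, not
    taken from any published 'techno handshake' implementation."""
    signals = (gsr, heart_rate, face_blood_flow)
    score = sum(w * s for w, s in zip(weights, signals))
    return min(max(score, 0.0), 1.0)

# A calm handshake reads low; an agitated one reads high.
print(fuse_arousal(0.2, 0.3, 0.1))  # 0.22
print(fuse_arousal(0.9, 0.8, 0.7))  # 0.82
```

Even this crude fusion makes the point in the text: the robot combines channels (skin conductance, heart rate, facial blood flow) that a human interlocutor cannot read directly, giving it its own route to social information.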

Excerpts from: 'Anthropomorphism and the social robot'
by Brian R. Duffy
