From AI to Robotics by Arkapravo Bhaumik



FIGURE 7.9 Sparky and Feelix. Sparky on the left and Feelix on the right; both robots have cartoonish actuated faces, and it is these faces that form the social bond in human-robot interactions. Both images from Fong et al. [107], used with permission.

7.2.1.2 Facial traits

Attention switching to meet the gaze of a human observer, together with mirroring human expressions, helps establish a ground for social interaction, but deriving meaning from a facial expression or action brings the robot closer to the domain of human-human interaction.

Determining psychological state from facial expressions is an effective mode of non-verbal communication in day-to-day interactions. Human facial traits can be incorporated into a social robot using a number of techniques: (1) Haar feature-based cascade classifiers, (2) Eigenfaces, (3) Active Shape Models (ASM) and (4) the Facial Action Coding System (FACS).

The method of Haar classifiers, proposed by Viola and Jones for real-time detection, has become the standard for face detection and is the default in various image processing software suites. Though easy to implement, Haar classifiers are not exhaustive and lack robustness, with the best results obtained for frontal views of faces. The Eigenface method represents a face image as a combination of principal components (eigenfaces) computed from a training set and compares a new face against a pre-existing large database in this reduced space.
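As an illustrative sketch, not taken from this chapter, the following Python snippet runs OpenCV's bundled frontal-face Haar cascade on a still image; the file name person.jpg is a placeholder for whatever image the robot's camera provides.

```python
# Minimal Haar-cascade face detection sketch (assumes OpenCV is installed
# and an input image named person.jpg exists).
import cv2

# Load the pretrained frontal-face cascade shipped with OpenCV.
face_cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

img = cv2.imread("person.jpg")                      # hypothetical input image
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)

# detectMultiScale scans the image at several scales; as noted above,
# results are best for frontal views of faces.
faces = face_cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)

for (x, y, w, h) in faces:
    cv2.rectangle(img, (x, y), (x + w, y + h), (0, 255, 0), 2)

cv2.imwrite("person_faces.jpg", img)
```

The detected bounding boxes are typically the starting point for the landmark-based and FACS-based analyses discussed next.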

Active shape models (ASM) are statistical models of object shape defined by a point distribution, in which the shape of an object is reduced to a set of points; such methods have been widely used to analyse facial images. To convey a human emotion, the relative positioning of critical points on the human face is observed. These points are known as 'landmark' points. A shape model by itself is not sufficient to detect faces, so ASM is usually employed together with a Haar cascade.
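As a practical illustration, the sketch below localises facial landmark points with dlib's pretrained 68-point predictor. This predictor is an ensemble of regression trees rather than a true ASM, but it plays the same role of fitting landmark points once a face has been detected; the model file shape_predictor_68_face_landmarks.dat is an assumption and must be downloaded separately.

```python
# Minimal landmark-localisation sketch (assumes dlib and OpenCV are installed
# and the pretrained file shape_predictor_68_face_landmarks.dat is available).
import cv2
import dlib

detector = dlib.get_frontal_face_detector()          # HOG-based face detector
predictor = dlib.shape_predictor("shape_predictor_68_face_landmarks.dat")

img = cv2.imread("person.jpg")                        # hypothetical input image
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)

for face in detector(gray):
    shape = predictor(gray, face)                     # fits 68 landmark points
    # The relative positions of these landmarks (brows, eyes, mouth corners)
    # carry the expression information described above.
    points = [(shape.part(i).x, shape.part(i).y) for i in range(shape.num_parts)]
    print(len(points), points[:5])
```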

These three methods work on still images; for real-time execution the best technique is the Facial Action Coding System (FACS). Developed in the 1970s, it is a powerful tool for correlating the facial expressions a human being can make with the corresponding psychological states. It relates facial traits to psychological state in a novel way: for example, happiness is seen as a cheek raise and a lip corner pull, both occurring at the same time. It takes into account the contraction of each facial muscle and how it affects the appearance of the face. Facial expressions are reduced to Ekman Action Units (EAU), and combinations of these EAUs provide a good estimate of the corresponding human expression, as shown in Table 7.1.

TABLE 7.1 Facial Action Coding System (FACS) (1978)

Psychological State | Facial Traits (EAU numeration is given in square brackets)
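As a hedged illustration of how such EAU combinations can be encoded in software, the sketch below maps a few commonly cited action-unit combinations to emotional states. The specific AU sets follow widely used EMFACS-style conventions rather than reproducing Table 7.1; the happiness entry (cheek raiser plus lip corner puller) matches the example given in the text.

```python
# Illustrative lookup of commonly cited FACS action-unit (AU) combinations.
AU_NAMES = {
    1: "inner brow raiser", 2: "outer brow raiser", 4: "brow lowerer",
    5: "upper lid raiser", 6: "cheek raiser", 9: "nose wrinkler",
    12: "lip corner puller", 15: "lip corner depressor", 26: "jaw drop",
}

EXPRESSIONS = {
    "happiness": {6, 12},          # cheek raise + lip corner pull (as in the text)
    "surprise":  {1, 2, 5, 26},
    "sadness":   {1, 4, 15},
    "disgust":   {9, 15},
}

def guess_expression(active_aus):
    """Return the expression whose AU set best overlaps the detected AUs."""
    return max(EXPRESSIONS, key=lambda e: len(EXPRESSIONS[e] & set(active_aus)))

print(guess_expression({6, 12}))   # -> happiness
```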


