Integrating Face and Voice in Person Perception by Pascal Belin, Salvatore Campanella & Thomas Ethofer



4.2 Methods

Thirty-one adult participants (mean age: 20 years) were instructed to discriminate the emotional expression (angry vs. sad) of a central target face. This target face was presented either alone or accompanied by an affective distracter. The distracter could be either auditory (an angry or sad tone of voice) or visual (the written name of an emotion or another face). Following standard practice (see Bertelson, 1999; Driver, 1996), we manipulated the emotional congruence between the target face and the concurrent, to-be-ignored affective information. We used a within-subject design, with the same procedure and the same presentation duration of the central target face across all conditions. Notably, the effect of the auditory distracter on emotional face perception was studied in a separate block from the effect of the visual distracters (either a written emotion name or an emotional facial expression). Note that only emotions with a negative valence, i.e., angry and sad, were used, in order to avoid confounding the effect of (in)congruence between the affective content of target and distracter with the actual valence of the displayed emotion (positive vs. negative).
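
The factorial structure just described (target emotion × distracter type × congruence, with auditory and visual distracters run in separate blocks and face-alone control trials mixed in) can be made concrete with a short sketch. The following Python is purely illustrative: the `make_trials` helper, the condition labels, and the repetition counts are our assumptions for exposition, not the authors' actual experiment scripts.

```python
import itertools
import random

TARGET_EMOTIONS = ["angry", "sad"]

def make_trials(distracter_types, n_reps=1, seed=0):
    """Cross target emotion with distracter type and congruence.

    `distracter_types` might be ["none", "voice"] for the auditory
    block or ["none", "word", "face"] for the visual block; "none"
    yields the face-alone control trials described in the text.
    """
    trials = []
    for emotion, dtype in itertools.product(TARGET_EMOTIONS, distracter_types):
        if dtype == "none":
            # Control trial: central target face presented alone.
            trials.append({"target": emotion, "distracter": None})
            continue
        for congruent in (True, False):
            other = "sad" if emotion == "angry" else "angry"
            trials.append({
                "target": emotion,
                "distracter": dtype,
                "distracter_emotion": emotion if congruent else other,
                "congruent": congruent,
            })
    trials = trials * n_reps
    random.Random(seed).shuffle(trials)  # trials presented in random order
    return trials

# Auditory distracters were studied in a separate block from visual ones:
auditory_block = make_trials(["none", "voice"], n_reps=10)
visual_block = make_trials(["none", "word", "face"], n_reps=10)
```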

The target face (5 cm width × 6.5 cm height) was a static black-and-white photograph of one of six actors posing either a sad or an angry facial expression (see Ekman & Friesen, 1976), presented briefly at the center of a 17-in. screen for 150 ms. Auditory stimuli were 12 different spoken words, always with the same neutral content (/plane/), pronounced by semiprofessional actors with either a sad or an angry affective tone of voice (see Pourtois et al., 2000, 2002, 2005, for additional details on these previously validated auditory stimuli). Mean duration of the auditory fragments was 348 ms. Congruent and incongruent audiovisual pairs were created based on the emotional content of the face and the voice. The spoken distracter was always timed so that its offset coincided with that of the central face stimulus (150 ms duration). Face distracters were identical to the targets: all combinations involved two pictures of the same actor, displaying the same emotion (thus twice the same picture) on congruent trials or different emotions on incongruent trials. The emotional face distracter was presented in full synchrony with the target face, 5 cm above it (viewing distance from the screen was 60 cm). Written emotion words were two adjectives (/ANGRY/ vs. /SAD/, in French) printed in 24-point Times font (3 cm width × 1 cm height). As with the distracting face, the distracting word was presented synchronously with the central emotional face, 5 cm above it. Congruent and incongruent trials were created based on the (mis)match between the emotion displayed by the central face and that conveyed by the written word briefly presented in the upper visual field.
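
The offset-alignment rule implies that the voice begins before the face: a voice fragment of the mean 348 ms duration that must end together with a 150 ms face has to start roughly 198 ms before face onset. A minimal sketch of this arithmetic follows; the `stimulus_onsets` helper and the millisecond timeline are our illustrative assumptions, not the authors' code.

```python
FACE_DURATION_MS = 150  # presentation time of the central target face

def stimulus_onsets(voice_duration_ms, face_onset_ms=0):
    """Schedule a spoken distracter so that its offset coincides with
    the offset of the target face, as described in the text."""
    face_offset_ms = face_onset_ms + FACE_DURATION_MS
    voice_onset_ms = face_offset_ms - voice_duration_ms
    return {
        "face_on": face_onset_ms,
        "face_off": face_offset_ms,
        "voice_on": voice_onset_ms,   # negative = before face onset
        "voice_off": face_offset_ms,  # offsets coincide by construction
    }

print(stimulus_onsets(348))
# {'face_on': 0, 'face_off': 150, 'voice_on': -198, 'voice_off': 150}
```

For the mean 348 ms fragment this places voice onset 198 ms before the face appears; shorter or longer fragments shift the voice onset accordingly while keeping the offsets aligned.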

The experimental session included control trials during which the central target face was presented alone (no distracter) and trials during which it was accompanied by a distracter, in random order. All trials started with the





