Research Report

“A nice face is the best letter of recommendation” (Swedish proverb) – a report by Artyom Zinchenko, student in the 'Neuro-Cognitive Psychology' (NCP) program

By Artyom Zinchenko, Dr. Dragan Rangelov and Prof. Dr. Hermann Müller (25.01.2013)

Perception of faces plays an important role in the life of every human being. It gives people valuable information about such parameters as the sex, age and emotional state of the individuals around them (Bruce & Young, 1986). This information provides important cues for building social relationships with family members, colleagues and authorities. Consistent with the importance of faces, studies show that people are very proficient at remembering, recognizing and distinguishing between different faces (Davies, Shepherd, & Ellis, 1978; Haig, 1985; Tanaka, 2001). Additionally, studies have shown that infants preferentially make saccades to faces, as compared to other complex objects, including scrambled faces (e.g., Bruce, 1986) – suggesting that the preference for faces develops very early, or might even be inborn.


Two opposing approaches provide alternative answers to the question of how face stimuli are processed: the analytical hypothesis (e.g., Sergent, 1986) and the holistic hypothesis (e.g., Davies, 1978; Ellis, 1975). According to the analytical hypothesis, since faces are very complex stimuli, humans process them as a set of objects (e.g., chin, mouth, nose, eyes, hair, etc.), one by one. So, if asked to compare whether two presented pictures depict the same or different faces, people would, for instance, first compare the mouths, then the chins, then the eyes and so on. For example, eye-movement studies show that humans tend to fixate on local parts when observing a face (such as the eyes or nose; Yarbus, 1967; Bate, Haslam, Tree & Hodgson, 2008), indicating that a face representation is constructed from representations of individual facial features. Moreover, experiments showed that people are able to recognize a face even when only a very limited amount of local facial information is provided (such as only the eyes or the nose), suggesting that individual facial features are encoded very precisely (G. Davies, Ellis, & Shepherd, 1977; Gosselin & Schyns, 2001; Sadr, Jarudi, & Sinha, 2003).
On the other hand, the holistic view assumes that faces, although composite stimuli, are perceived holistically as a single object. Evidence that people are limited in perceiving local changes in face configurations supports a holistic face perception mechanism (Barton, Zhao, & Keenan, 2003; Yovel & Kanwisher, 2004). In an fMRI study, Yovel and Kanwisher (2004) presented their participants with pairs of either upright or inverted (upside-down) face stimuli. The members of a pair were either identical or differed in one of two ways: either the local distances between face constituents (e.g., the space between the eyes) varied between the members of the pair (configuration differences), or one of the faces contained an entirely different face constituent (e.g., a different nose) (constituent differences). The task was to report whether the presented stimuli were the same or different. Comparisons between configuration and constituent differences showed no differential activation in the Fusiform Face Area (FFA) – a brain area specialized for face recognition – suggesting that the FFA discriminates between faces and non-faces on a global scale and does not encode local facial features. Thus, the findings of Yovel and Kanwisher's study further support the holistic face perception hypothesis.
The existence of the composite face illusion also supports the idea that face perception is holistic in nature (Gao, Flevaris, Robertson, & Bentin, 2011). In this paradigm, face stimuli are split into top and bottom halves. When the top half of one face is conjoined with the bottom halves of two different faces, participants perceive the two composite faces as depicting entirely different people, rather than as the mixtures of two faces that they actually are. Importantly, the composite face illusion disappears if the top and bottom halves are misaligned (i.e., do not form a continuous facial oval). The finding that vertical alignment – a global face property – influences face perception offers additional support for the holistic face perception view. Notably, an investigation of fixation patterns when inspecting composite faces revealed no differences between aligned (i.e., composite-illusion condition) and misaligned (no-illusion condition) top and bottom parts, suggesting a dissociation between the perception of faces (holistic) and the scanning patterns while inspecting faces (analytical) (de Heering, Rossion, Turati, & Simion, 2008).
In summary, investigations of the mechanisms of face processing have yielded mixed results – some studies support holistic, others analytic face perception. One way to resolve this impasse is to study whether or not local facial features are processed differently from local features of other, comparably complex, composite stimuli. We assumed that inferior processing of local facial features, relative to that of control stimuli, would suggest that faces are primarily processed at a global level, with local features accessed only subsequently; that is, it would support the holistic face processing hypothesis. Conversely, expedited perception of local facial features, relative to control stimuli, would imply that the facial configuration influences the processing of its local properties in a beneficial manner; that is, it would support the analytical face processing hypothesis.
In order to investigate the processing of local parts of face and non-face objects, we presented participants with two sets of stimuli. One set was composed of normal (Figures 1a and 1b, upper row) and scrambled (Figures 1c and 1d, upper row) schematic face stimuli. Schematic faces were used, rather than real-life face pictures, because they allowed for a better match between face and non-face stimuli. In order to make a schematic face look as close as possible to a real-life face, we used proportions adapted from Powell and Humphreys (1984). Thus, the schematic faces closely resembled real-life faces by maintaining the correct locations of the face parts (eyes, nose and mouth) relative to each other. The non-face stimuli (Figure 1, lower row) consisted of four line segments that were either arranged to form a diamond contour (normal configuration, Figures 1a and 1b) or randomly distributed within an oval (scrambled configuration, Figures 1c and 1d).
The target stimulus was always a triangle, embedded in the normal or scrambled face and non-face stimuli. The task was to determine as quickly as possible whether the target item was present or absent in a display. Reaction times and response accuracy were measured.

Figure 1. Normal (1a and 1b) and scrambled (1c and 1d) face and non-face stimuli are illustrated. Columns 1a and 1c show target-present conditions and columns 1b and 1d target-absent conditions for both face and non-face task stimuli. The upper row depicts faces, the lower row non-face diamonds.
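
To make the factorial structure of the paradigm concrete, the sketch below builds a randomized trial list crossing the three factors described above: stimulus category (face vs. non-face), configuration (normal vs. scrambled) and target presence. This is a minimal illustration in Python; the condition labels and the number of repetitions per cell are assumptions, not the original experiment script.

# Minimal sketch of the 2 x 2 x 2 design described above
# (stimulus category x configuration x target presence).
# Labels and trial counts are illustrative assumptions only.
import itertools
import random

categories = ["face", "non-face"]
configurations = ["normal", "scrambled"]
target_presence = ["present", "absent"]
repetitions_per_cell = 20  # assumed number of trials per condition

# One trial dictionary per factor combination and repetition.
trials = [
    {"category": cat, "configuration": cfg, "target": tgt}
    for cat, cfg, tgt in itertools.product(categories, configurations, target_presence)
    for _ in range(repetitions_per_cell)
]
random.shuffle(trials)  # randomize presentation order

print(len(trials), "trials, e.g.", trials[0])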

The results of several behavioral studies consistently showed slower reaction times and a higher percentage of errors for normal relative to scrambled faces – a phenomenon we termed the local feature suppression (LFS) effect. The LFS effect for faces implies that faces are initially processed at the global level and only later may local information be accessed. The opposite was true for non-faces: the processing of local features was enhanced when they were embedded in normal relative to scrambled configurations, that is, there was a local feature enhancement (LFE) effect. The disparate results for face and non-face stimuli (LFS vs. LFE, respectively) imply that a face context can interfere with the perception of its local parts. Face perception therefore differs from the perception of other comparably complex but less socially important objects.
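
As a rough illustration of how the LFS and LFE effects could be quantified from trial-level data, the following Python sketch computes the mean correct-trial reaction time and the error rate per condition, and then takes the normal-minus-scrambled reaction-time difference separately for face and non-face stimuli. The file name and column names are hypothetical; this does not reproduce the original analyses.

# Hypothetical illustration of quantifying the LFS / LFE effects.
# Assumed trial-level columns: category, configuration, rt (ms), correct (0/1).
import pandas as pd

df = pd.read_csv("detection_trials.csv")  # hypothetical data file

# Mean reaction time on correct trials and error rate per condition.
rt_mean = df[df["correct"] == 1].groupby(["category", "configuration"])["rt"].mean()
error_rate = 1 - df.groupby(["category", "configuration"])["correct"].mean()

# Normal-minus-scrambled RT difference per stimulus category:
# positive values indicate suppression (LFS), negative values enhancement (LFE).
rt_wide = rt_mean.unstack("configuration")
effect = rt_wide["normal"] - rt_wide["scrambled"]

print(rt_mean, error_rate, effect, sep="\n\n")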
The LFS effect for faces could arise from several different cognitive processes: (i) tuning to a particular spatial scale, (ii) the time needed to find the target in a serial manner, and (iii) the time needed to disengage attention from one object and shift it to another within a given (face vs. non-face) configuration. In order to distinguish between these possible causes of the LFS effect, dependent variables that are more precise than response speed and accuracy are necessary. In my master's thesis project, I focus on electroencephalography (EEG) measures recorded in a similar experimental paradigm. The main goal is to determine which EEG parameters are affected by having to detect a triangle embedded in normal and scrambled face stimuli. Understanding the nature of the LFS effect yields valuable information about some of the processes underlying face perception and face recognition.
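
For readers unfamiliar with EEG analysis, the sketch below shows, in broad strokes, how condition-wise event-related potentials (ERPs) could be extracted with the open-source MNE-Python library. The file name, trigger codes and epoch window are assumptions for illustration and do not describe the actual thesis pipeline.

# Minimal ERP sketch with MNE-Python; file name, trigger codes and epoch window
# are illustrative assumptions, not the actual thesis analysis.
import mne

raw = mne.io.read_raw_fif("face_task_raw.fif", preload=True)  # hypothetical recording
raw.filter(l_freq=0.1, h_freq=30.0)                           # typical ERP band-pass

events = mne.find_events(raw)                                  # assumes a trigger channel
event_id = {"face/normal": 1, "face/scrambled": 2}             # assumed trigger codes

epochs = mne.Epochs(raw, events, event_id=event_id,
                    tmin=-0.2, tmax=0.8, baseline=(None, 0), preload=True)

# Condition-wise ERPs, e.g. to compare face-sensitive components such as the N170.
evoked = {cond: epochs[cond].average() for cond in event_id}
mne.viz.plot_compare_evokeds(evoked)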


NCP alumnus Artyom Zinchenko:
  • Scientific interests and goals: attention, face perception, perception of emotional faces, prosopagnosia, ageing and face perception

Dr. Dragan Rangelov:

Prof. Dr. Hermann Müller: