REVIEW article

Front. Psychol., 09 July 2021
Sec. Perception Science
This article is part of the Research Topic Discrimination of Genuine and Posed Facial Expressions of Emotion

Review: Posed vs. Genuine Facial Emotion Recognition and Expression in Autism and Implications for Intervention

  • 1Department of Chemical and Biomedical Engineering, Rockefeller Neuroscience Institute, West Virginia University, Morgantown, WV, United States
  • 2Lane Department of Computer Science and Electrical Engineering, West Virginia University, Morgantown, WV, United States

Different styles of social interaction are one of the core characteristics of autism spectrum disorder (ASD). Social differences among individuals with ASD often include difficulty in discerning the emotions of neurotypical people based on their facial expressions. This review first covers the rich body of literature studying differences in facial emotion recognition (FER) in those with ASD, including behavioral studies and neurological findings. In particular, we highlight subtle emotion recognition and various factors related to inconsistent findings in behavioral studies of FER in ASD. We then discuss the dual problem of FER, namely facial emotion expression (FEE), the production of facial expressions of emotion. Although FEE has received less study, social interaction requires both the ability to recognize emotions and the ability to produce appropriate facial expressions, and how others perceive facial expressions of emotion in those with ASD has remained an under-researched area. Finally, we propose a method for teaching FER [the FER teaching hierarchy (FERTH)] based on recent research investigating FER in ASD, considering the use of posed vs. genuine emotions and static vs. dynamic stimuli. We also propose two possible teaching approaches: (1) a standard method of teaching progressively from simple drawings and cartoon characters to more complex audio-visual video clips of genuine human expressions of emotion with context clues, or (2) teaching in a field of images that includes posed and genuine emotions to improve generalizability before progressing to more complex audio-visual stimuli. Lastly, we advocate that autism interventionists use FER stimuli developed primarily for research purposes, which would incorporate well-controlled stimuli into FER teaching and bridge the gap between intervention and research in this area.

Introduction

Individuals with autism spectrum disorder (ASD) often have difficulty interpreting and regulating their own emotions, understanding the emotions expressed by others, and labeling emotions based on viewing the faces of others (Harms et al., 2010; Uljarevic and Hamilton, 2013; Sheppard et al., 2016). These differences can contribute to social isolation, either because others respond negatively when a person with ASD does not show a typical, socially expected response, or because the person with ASD chooses to withdraw to avoid potentially stressful interactions in which they struggle to recognize and respond appropriately to others' expressions of emotion (Jaswal and Akhtar, 2019).

Research investigating facial emotion recognition (FER) in ASD has primarily utilized static images of posed facial expressions (Pelphrey et al., 2007; Monk et al., 2010); however, more recent research has begun exploring the use of dynamic video of actors making posed facial expressions (Golan et al., 2015; Fridenson-Hayo et al., 2016; Simões et al., 2018). Few studies have utilized face stimuli, whether static or dynamic, of humans expressing genuine, spontaneous emotions (Cassidy et al., 2014). This distinction is important because research has shown that both the human brain and artificial intelligence (AI) systems process posed facial expressions differently from spontaneous expressions of emotion (Hess et al., 1989; Schmidt et al., 2006; Wang et al., 2015; Park et al., 2020).

Results have been mixed, with most studies indicating that posed expressions of emotion are easier to recognize than spontaneous ones (Naab and Russell, 2007); however, accuracy for FER may also depend on the specific emotion being evaluated (Faso et al., 2014; Sauter and Fischer, 2018). This may be due to the prototypical nature of posed expressions (e.g., most people show fewer teeth when they smile for posed pictures; Van Der Geld et al., 2008), whereas there is much more variability in genuine expressions of some feelings, such as sadness (Krumhuber et al., 2019). It has therefore been proposed that the traditional use of posed facial expression stimuli in research may have artificially inflated behavioral measures of accuracy during emotion recognition tasks (Sauter and Fischer, 2018). By extension, the historically prevalent use of posed facial expression stimuli in ASD research investigating FER may contribute to the mixed results seen in this research area.

How might these dissimilarities between posed and spontaneous facial expression stimuli be perceived differently by those with ASD? This review argues that posed vs. genuine emotion is a critical factor that deserves more consideration when studying FER in ASD. We first review the rich literature on the perception of posed facial expressions of emotion, highlighting differences between ASD and control groups, although findings remain inconclusive. We then discuss recent research investigating how individuals with ASD differ from controls when asked to produce posed facial expressions of emotion, and we review the latest advances in the study of posed vs. spontaneous/genuine facial expressions and their implications for autism research in terms of both the perception and the production of genuine facial expressions. Finally, based on these findings, we propose a method of teaching FER for individuals with ASD.

Differences in FER in ASD

Autism studies investigating differences in understanding how others think or feel date back to as early as the 1970s (Langdell, 1978; Mesibov, 1984; Weeks and Hobson, 1987; Hobson et al., 1988; Ozonoff et al., 1990). Langdell (1978) found that adolescents with autism could identify schematically drawn happy and sad faces but showed varying ability when sorting the faces using only the eye region. Another study (Hobson, 1986) provided further convincing evidence of differences in the appraisal of facial expressions of emotion by children with autism, suggesting that their failure to understand the emotional states of others might be related to difficulty in recognizing the differences between particular emotions. However, due to differing experimental designs (e.g., sorting, matching, and cross-modal tasks), the interpretation of these early results is often debatable (Celani et al., 1999).

A more systematic study of the nature of early differences in social cognition in autism was conducted by Dawson et al. (2004) using high-density event-related potentials (ERPs). Children with ASD as young as 3 years of age showed a disordered pattern of neural responses to emotional stimuli such as fearful vs. neutral facial expressions. More specifically, typically developing children demonstrated a larger early negative component and a negative slow wave to the fearful face than to the neutral face, whereas children with autism did not show significant differences in either experiment. In addition, faster early processing of the fearful face among children with autism was associated with better performance on tasks assessing social attention, such as social orienting, joint attention, and attention to distress. These findings serve as direct evidence of atypical psychological components involved in emotion recognition among young children with autism (3–4 years old).

Probing the pathology of the processes underlying dysfunction in emotional and social cognition, it has been suggested that amygdala dysfunction in ASD might contribute to a different ability to process social information (Adolphs et al., 2001). Varying face perception or emotion recognition in ASD might result from atypical fixations onto faces, which may, in turn, arise from amygdala dysfunction (Breiter et al., 1996; Baron-Cohen et al., 2000). This hypothesis is directly supported by evidence from both single-neuron recordings in the human amygdala (Rutishauser et al., 2013) and neuroimaging studies (Dalton et al., 2005; Kliemann et al., 2012). Given the critical role of the amygdala in emotion processing (Adolphs, 2008), more systematic studies will be needed to reveal whether the amygdala responds differently to posed vs. genuine emotions. Further studies using visual scanning/eye-tracking (Pelphrey et al., 2002) or functional neuroimaging (Dalton et al., 2005; Pelphrey et al., 2005) have shown atypical scanning patterns and brain activity in individuals with ASD. Even with enhanced emotional salience of facial stimuli, a positron emission tomography (PET) study showed that adults with ASD demonstrated lower activity in the fusiform cortex than typically developing (TD) controls and differed from the TD group in other brain regions (Hall et al., 2003). This line of research was further extended to the identification of differences in key components of the human face processing system that might contribute to differences in processing facial expressions of emotion (Pelphrey and Carter, 2008).

Unlike previous studies employing more simplistic stimuli (e.g., a face stimulus serving as a "100% expression" exemplar of a given emotion), later work examined subtle differences in FER (Law Smith et al., 2010; Black et al., 2020). Using stimuli that incrementally morphed the expression between a neutral face and the fully posed expression, these studies found that adolescents and young adults with ASD were less accurate at identifying the basic emotional expressions of disgust, anger, and surprise. In a follow-up study (Kennedy and Adolphs, 2013), adults with ASD were found to give ratings that were significantly less sensitive to a given emotion and less reliable across repeated testing. An overall decreased specificity in emotion perception therefore suggests a subtle but specific pattern of differences in facial emotion perception among those with ASD. Along this line of research, significant differences were found between males and females with ASD for emotion recognition but not for self-reported empathy (Sucksmith et al., 2013). Most recently, a study of females with autism showed that their differences in FER might not be attributable to ASD but instead to co-occurring alexithymia (difficulty describing one's own emotions and those of others; Ola and Gullon-Scott, 2020). Thus, a consideration for future FER studies is to recruit sufficient numbers of male and female participants with ASD and to include sex as a factor in the analysis.
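To make the notion of intensity-graded stimuli concrete, the sketch below shows how morph levels between a neutral face and a full expression could be approximated; it is our illustration only, since the cited studies relied on dedicated face-morphing software rather than simple blending.

```python
# A rough sketch of approximating intensity-graded ("morphed") stimuli by
# pixel-wise cross-fading between a neutral face and a full posed expression
# with OpenCV. The cited studies used dedicated face-morphing software; this
# blend is only an approximation, the file names are placeholders, and the two
# photographs are assumed to be aligned and of identical size.
import cv2

neutral = cv2.imread("neutral.png")       # 0% expression
full = cv2.imread("expression_100.png")   # 100% posed expression

for pct in range(10, 100, 10):            # 10%, 20%, ..., 90% intensity levels
    alpha = pct / 100.0
    # Weighted blend: (1 - alpha) * neutral + alpha * full expression
    morph = cv2.addWeighted(neutral, 1.0 - alpha, full, alpha, 0)
    cv2.imwrite(f"expression_{pct:02d}.png", morph)
```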

We note that there have been several excellent review articles on research findings of FER in ASD (Harms et al., 2010; Bons et al., 2011; Nuske et al., 2013; Uljarevic and Hamilton, 2013). Harms et al. (2010) address demographic and experiment-related factors to account for inconsistent findings in behavioral studies of FER in ASD, and suggest that future studies incorporate longitudinal designs to examine the developmental trajectory of FER, as well as behavioral and brain imaging paradigms that include young children. In Uljarevic and Hamilton (2013), a formal meta-analysis showed that recognition of happiness was only marginally affected in ASD, whereas recognition of fear was marginally worse than recognition of happiness. Nuske et al. (2013) found that (1) emotion-processing differences might not be universal to all individuals with ASD and are not specific to ASD; and (2) the specific pattern of emotion-processing strengths and weaknesses observed in ASD, involving difficulties with processing social vs. nonsocial and complex vs. simple emotional information, appears to be unique to ASD (Tang et al., 2019). It is also worth noting the "double empathy problem" described by Milton (2012): just as people with ASD have difficulty interpreting the facial emotions of TD individuals, TD people have comparable difficulty understanding people with autism. Such a "double" perspective has profound implications for ASD service providers because differences in neurology could lead to differences in sociality. A more recent study (Milton and Sims, 2016) argued for less focus on remediation for people with autism and instead advocated limiting social isolation as a more constructive solution. The most recent study (Crompton et al., 2020) has shown that autistic peer-to-peer information transfer is more effective than information transfer between persons with and without autism.

Given the finding that FER differences are not exclusive to those with ASD (Nuske et al., 2013), several studies have compared FER differences in ASD with those in other neurological disorders. Wong et al. (2012) examined emotion recognition abilities in three groups of children aged 7–13 years: high-functioning autism (HFA), social phobia (SP), and TD. Although no evidence was found for negative interpretation biases in children with HFA or SP, children with HFA detected mild affective expressions less accurately than TD peers, suggesting that subtle expressions of emotion are more difficult for those with ASD to recognize. Sachse et al. (2014) conducted a similar study with adolescents and adults with HFA, schizophrenia (SZ), and TD to identify convergent and divergent mechanisms between ASD and SZ. Individuals with SZ were comparable to TD in all emotion recognition measures, but their basic visuoperceptual abilities were reduced. By contrast, the HFA group was more affected in recognizing both basic and complex emotions when compared to both the SZ and TD groups. As reported in Sachse et al. (2014), group differences between SZ and ASD remained, but only for recognizing complex emotions, after taking facial identity recognition into account. These experimental results suggest that (1) there is an SZ subgroup with predominantly paranoid symptoms that shows no problems in FER but only visuoperceptual differences; and (2) no shared FER difference was found for paranoid SZ and ASD, implying differential cognitive underpinnings of ASD and SZ with respect to FER.

A study by Lundqvist (2015) directly linked sensory abnormality with social dysfunction in ASD, finding that hyper-responsiveness to touch mediated social dysfunction and suggesting that the tactile sensory system is foundational for social functioning in ASD. There is also evidence that social functioning in those with ASD is impacted by sensory dysregulation in multiple sensory modalities arising early in the progression of the disorder (Thye et al., 2018). This work supports early intervention targeting both sensory abnormalities and social differences, given the critical role that sensory processing differences in ASD play in social interactions. In another systematic review and meta-analysis (Zhou et al., 2018), quantitative comparisons of sensory temporal acuity were made between healthy controls and two clinical groups (ASD and SZ), revealing a consistent difference in multisensory temporal integration in ASD and SZ that may be associated with differences in social communication. Finally, a neuroimaging study of differential patterns of visual sensory alteration (Martínez et al., 2019) showed that SZ and ASD participants demonstrated similar FER and motion sensitivity differences, but that distinct visual processing profiles contributed to the FER group differences. These data suggest that FER differences are not unique to ASD.

Differences in Facial Emotion Expression in ASD

It has been hypothesized that in ASD both FER and FEE are affected, contributing importantly to social differences and difficulty in relationship formation (Manfredonia et al., 2019). By comparison, fewer studies of individuals with ASD have been devoted to FEE than to FER in the published literature. In an early study of imitation and expression of facial affect (Loveland et al., 1994), the production of elicited/posed affective expressions was found to be more difficult for individuals with ASD than for individuals with Down syndrome of similar chronological age, mental age, and IQ. Begeer et al. (2008) reviewed four aspects of emotional competence (expression, perception, responding, and understanding) in children and adolescents with ASD and found that differences in emotional competence in ASD were highly dependent on age, context, and intelligence. In another unique study (Faso et al., 2014), the dual problem of FER and FEE was studied, namely how facial expressivity by those with ASD is perceived by others. Facial expressions of emotion by participants with ASD were regarded as more intense and less natural than expressions by the TD group. Surprisingly, ASD expressions were also identified with greater accuracy by TD judges, due primarily to the category of angry expressions. These findings collectively suggest differences, rather than a reduced ability, in facial expressivity among individuals with ASD. Those differences do not necessarily hinder the accuracy of emotion recognition by others but may affect the quality of social interactions between ASD and TD individuals, as demonstrated in a recent study (Sasson et al., 2017).

In Volker et al. (2009), each participant was photographed after being prompted to enact a facial expression of one of the six basic emotions: happiness, sadness, anger, fear, surprise, and disgust. Children with HFA were significantly less adept at enacting sadness, and their expressions were rated as markedly odder than those of controls. However, no significant differences were found for anger and fear, and, more surprisingly, the ASD group demonstrated somewhat greater skill at enacting surprise and disgust. More recently, a systematic study (Brewer et al., 2016) investigated the ability of TD and ASD participants to recognize facial expressions of emotion produced by TD and ASD actors posing basic emotions. With three posing conditions, this study aimed to determine whether potential group differences were due to (1) atypical cognitive representations of emotion; (2) an affected understanding of the communicative value of expressions; or (3) poor proprioceptive feedback. Expressions posed by participants with ASD were not recognized as well by TD and ASD participants as expressions posed by TD posers. Subsequently, a computational approach was used in Guha et al. (2018) to study the details of facial expressions in children with HFA. This study aimed to uncover subtle characteristics of facial expressions by analyzing localized facial dynamics and found differences in the eye region. Finally, a meta-analysis (Trevisan et al., 2018) found that participants with ASD display facial expressions less frequently and for shorter durations. Participants with ASD are also less likely to share facial expressions with others or to automatically mimic others' expressions. These observations have partially inspired the design of an intervention system for young children with ASD, as we elaborate later.

Posed vs. Genuine Facial Expressions of Emotion

Multiple databases of face stimuli have been developed for FER research (Jia et al., 2020). These databases include static images of computer-generated human faces that can be titrated to modify facial expressions, as well as static or dynamic images of real human faces containing posed and spontaneous facial expressions (Cassidy et al., 2015). More recently, a question has arisen in the emotion recognition field regarding whether the human brain perceives and processes posed (deliberately generated) emotions differently from genuine (spontaneously generated) ones. One study found that adults are much more accurate at labeling emotions when the facial expression is posed than when it is spontaneous (Krumhuber et al., 2019). That study also used facial expression recognition software to label the emotions and found the software to be more accurate than the human participants at FER for posed emotions; however, accuracy dropped for the AI and the human participants to similar levels when the expressions of emotion were spontaneous. This result was attributed to posed expressions showing more prototypical facial features of each emotion (e.g., a downturned mouth and furrowed brow for sadness), enabling both humans and AI to learn and recognize posed emotions with higher accuracy. Spontaneous emotional expressions show subtle but substantial differences from posed expressions of emotion, with changes in small muscles and less prototypical facial configurations (Kim and Huynh, 2017). Few studies have compared FER for posed and genuine facial expressions, and their results have been mixed. Here, we first highlight a few existing studies on posed vs. genuine facial expressions of emotion in ASD and then discuss our envisioned future directions along this line of research.

Recent studies have revealed differences in how posed vs. genuine facial expressions of emotion are processed (Pelphrey et al., 2007). There are prototypical signs exhibited in posed versions of some emotions, while genuine expressions of the same emotion are more complex and harder to interpret. For example, the posed expression of sadness includes an out-turned lower lip, whereas spontaneous expressions of sadness are much more variable and often do not include this prototypical feature (Kim and Huynh, 2017). The class of smile expressions has received special attention regarding the posed vs. genuine distinction (Blampied, 2008; Boraston et al., 2008). In Blampied (2008), the sensitivity of children with ASD to the different emotions underlying posed vs. genuine smiles was compared against that of age- and sex-matched control children. Individuals with ASD were found to be less sensitive to the differences between posed and genuine smiles than TD participants. To explain this difference, it was hypothesized that developmental experience viewing the eye region of a face is critical for distinguishing genuine smiles from posed ones. In a related study (Boraston et al., 2008), the reduced ability of adults with ASD to discriminate genuine from posed smiles was attributed to reduced eye contact. It was also found that the individuals with ASD who were more affected in the recognition of genuine smiles also had more severe social interaction differences. In a review of studies using eye-tracking (ET) and electroencephalography (EEG) to explore FER in ASD (Black et al., 2017), the authors reported that differences in ET and EEG measures reflect differences in facial emotion processing that arise from functional differences in the social brain.

Posed and evoked facial expressions of emotion from adults with ASD have also been evaluated (Faso et al., 2014). ASD expressions were rated as more intense and less natural than TD expressions. Meanwhile, the naturalness ratings of evoked expressions were positively associated with identification accuracy for TD individuals but not for individuals with ASD. These findings collectively highlight differences in facial expressivity among individuals with ASD that do not hinder emotion recognition accuracy but may affect the quality of social interaction. Along this line of research, it has also been found that, just as individuals with ASD have difficulty recognizing the facial expressions of TD individuals (whether posed or spontaneous), TD individuals also find it difficult to recognize autistic emotional expressions (Brewer et al., 2016). More recently, it has been found that neurotypical peers are less willing to interact with those with autism based on thin-slice judgments (Sasson et al., 2017), and that first impressions of intellectually able adults with ASD improve with diagnostic disclosure and increased autism understanding on the part of peers (Sasson and Morrison, 2019).

Considering the differences in TD accuracy for posed and spontaneous facial expressions, it stands to reason that these types of stimuli should be differentiated in autism interventions targeting FER. Next, we propose a progressive intervention strategy inspired by research investigating posed vs. genuine expressions of emotion.

Implications for ASD Intervention

While FER differences in individuals with ASD may not be universal, they are highly prevalent, and thus FER is often specifically taught as part of a child's autism curriculum (Ayres and Robbins, 2005). Interventions have been developed that explicitly teach individuals with ASD to recognize specific emotions in others and in themselves, with mixed results (for a review, see Berggren et al., 2018). Stimuli for FER interventions vary widely and may include static or dynamic images of the six basic emotions (i.e., sad, happy, angry, afraid, disgusted, and surprised) as well as complex emotions, such as jealousy, that are more difficult to recognize and may require the use of contextual clues (Baron-Cohen et al., 2009). The basic goal of teaching FER to those with ASD is to help them better understand others and to foster communication and social interaction (core difference areas in ASD). Previous work (Gordon et al., 2014) focused on training children with ASD to produce happy and angry expressions with a computer game ("FaceMaze"). Recently, technology-based learning tools have been designed to help preschoolers with ASD with FER and emotional understanding (Boccanfuso et al., 2016; Zhang et al., 2019).

Additionally, based on the observation that happiness is the easiest of the six basic emotions for humans to encode and decode, a computer-based tutoring system called SmileMaze (Cockburn et al., 2008) was designed to improve the FEE production skills of children with ASD in a dynamic and interactive format. The Computer Expression Recognition Toolbox (CERT) in SmileMaze automatically detects frontal faces in a video stream and encodes each frame into 37 continuous features, including six basic facial expressions as well as 30 facial action units (AUs) as defined by the Facial Action Coding System (Ekman, 1997). Such a computational approach notably targets those characteristics in ASD that are distinct from those in TD children and are often difficult to detect by direct visual inspection. The combination of FEE training and computer vision systems led to more recent work (White et al., 2018): an automated, game-like system based on the Kinect 3D sensing technology developed by Microsoft. Youth with ASD reportedly preferred to interact with the system more than their TD peers did. This finding suggests that new technology-based interventions (e.g., 3D avatar-based digital twins; Wang et al., 2019), music-based therapeutic methods (Wagener et al., 2020), and computer-based recognition of posed vs. spontaneous facial expressions (Mavadati et al., 2016) have good potential for remediation of transdiagnostic processes such as FER and FEE in ASD, and possibly in other disorders with facial emotion processing differences such as SZ, traumatic brain injury, and stroke. It has recently been reported in Keating and Cook (2021) that individuals with autism have difficulties recognizing neurotypical facial expressions and vice versa. TD and ASD individuals might exhibit expressive differences, but individuals with autism tend to display expressions less frequently, and these are rated as lower in quality by TD observers. This observation suggests that future research should investigate what specifically differs between the facial expressions produced by ASD and TD individuals (e.g., how dynamic aspects of expressions affect emotion recognition).
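To give a concrete sense of this kind of per-frame encoding, the sketch below shows a hypothetical SmileMaze-style check over basic-expression scores and AU intensities. It is not CERT's actual interface; the data structure, field names, and threshold are illustrative assumptions.

```python
# Hypothetical illustration (not CERT's actual API) of a per-frame feature
# vector of basic-expression scores plus facial action unit (AU) intensities,
# used to drive a SmileMaze-style game step. The 0.5 threshold is arbitrary.
from dataclasses import dataclass
from typing import Dict

@dataclass
class FrameFeatures:
    expression_scores: Dict[str, float]  # one score per basic emotion
    au_intensities: Dict[str, float]     # e.g., AU6 (cheek raiser), AU12 (lip corner puller)

def smile_detected(frame: FrameFeatures, threshold: float = 0.5) -> bool:
    """Advance the maze when either the happiness score or the joint AU6/AU12
    evidence for a smile exceeds the threshold."""
    au_evidence = min(frame.au_intensities.get("AU6", 0.0),
                      frame.au_intensities.get("AU12", 0.0))
    return (frame.expression_scores.get("happiness", 0.0) >= threshold
            or au_evidence >= threshold)

# Example frame, as it might come from an upstream expression-analysis tool:
frame = FrameFeatures(expression_scores={"happiness": 0.82, "sadness": 0.03},
                      au_intensities={"AU6": 0.71, "AU12": 0.88})
print(smile_detected(frame))  # True -> the game character moves forward
```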

Considering the scientific literature on FER in ASD outlined in this review and the differences between posed and genuine facial expressions of emotion discussed above, we propose a hierarchical teaching method, as part of an intervention to teach FER to individuals with ASD, that accounts for the increased difficulty of processing more complex FER stimuli (Nuske et al., 2013). We propose three aspects for consideration when teaching FER: (1) whether the image is simple (drawings and cartoons) or complex (human faces or life-like artificially generated faces); (2) whether the image is static or dynamic [audio-visual (AV)]; and (3) for complex images, whether the expression of emotion is posed or genuine. These three aspects collectively take previous findings in the FER/FEE literature in ASD into consideration and introduce a new sequential approach to posed vs. genuine stimuli. Compared with previous approaches such as SmileMaze (Cockburn et al., 2008) and FaceMaze (Gordon et al., 2014), ours is distinguished by its emphasis on hierarchical learning and its coverage of more facial expressions.
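To make these three axes concrete, the sketch below (our own illustration, not a published scheme) tags each stimulus along the three dimensions and assigns it a rough difficulty level consistent with the hierarchy proposed here; the class name, fields, and level boundaries are assumptions.

```python
# A minimal sketch of tagging FER stimuli along the three proposed axes so
# they can be ordered by difficulty; the level boundaries are illustrative.
from dataclasses import dataclass

@dataclass(frozen=True)
class Stimulus:
    emotion: str        # e.g., "happiness", "fear"
    complex_face: bool  # False = drawing/cartoon, True = human or life-like face
    dynamic: bool       # False = static image, True = audio-visual (AV) clip
    genuine: bool       # False = posed, True = genuine/spontaneous

def difficulty_level(s: Stimulus) -> int:
    """Rough hierarchy: simple static (0) -> complex static posed (1) ->
    complex static genuine (2) -> dynamic AV posed (3) -> dynamic AV genuine (4)."""
    if not s.complex_face:
        return 0
    if not s.dynamic:
        return 1 if not s.genuine else 2
    return 3 if not s.genuine else 4

stimuli = [
    Stimulus("happiness", complex_face=False, dynamic=False, genuine=False),
    Stimulus("happiness", complex_face=True, dynamic=True, genuine=True),
    Stimulus("happiness", complex_face=True, dynamic=False, genuine=False),
]
print([difficulty_level(s) for s in sorted(stimuli, key=difficulty_level)])  # [0, 1, 4]
```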

We propose two possible approaches for teaching FER/FEE:

Approach (1) Teaching FER/FEE Progressively: This strategy is based on the previous finding that happiness and sadness are the least affected emotions in ASD, while fear, surprise, and disgust are more impacted. Teaching starts with simple, static images that include basic drawings and cartoon characters, progresses step-wise to more complex static images of human faces with posed and then genuine expressions, moves next to dynamic AV stimuli using a life-like avatar of the therapist or child conversing in real time (as a transition between static images and dynamic images of real people), and ends with real-world AV videos that contain context clues and genuine expressions of emotion. While photos of real faces constitute a more natural stimulus and may positively impact generalizability, the simplicity of hand-drawn images may make them a better place to begin teaching emotions for some individuals with ASD (Sasson et al., 2008). In this vein, similar to standard Applied Behavior Analysis (ABA) methods, once an emotion has been mastered at the hand-drawn image level, it may be beneficial to move to the next level of complexity and target cartoon characters that the child enjoys. Intervention would then move to stimuli with real human faces showing posed emotions, because posed photos are easier for typically developing individuals to label. Real human face stimuli with genuine, spontaneous expressions of emotion (static or dynamic) would be the final target, since they may be more difficult to interpret (Hanley et al., 2013). Images of the child undergoing intervention expressing these emotions, along with analysis of their facial expressions, could also be included.

Approach (2) Teaching FER/FEE in a Field of Images: Alternatively, since individuals with ASD often have difficulty generalizing what they have learned in many areas, including FER (Berggren et al., 2018) and FEE (White et al., 2018), it may be best to begin with multiple images of a specific emotion (e.g., a field of drawn images, cartoon characters, and posed and genuine static photos of human faces expressing a target emotion). Teaching skills to individuals with ASD in a field of stimuli has been proposed previously, based on the finding that repeatedly using limited stimuli increases rigidity of thinking and reduces generalizability (Harris et al., 2015). Thus, in Approach 2, we propose to begin by teaching FER in a field containing both simple and complex static images of posed and genuine expressions of a target emotion, and then progress to dynamic AV FER stimuli that may contain more context clues and incorporate multisensory integration to facilitate learning (Sasson, 2006). While teaching in a field may take longer to master, research shows it may reduce learned rigidity of thought and improve generalizability. Finally, incorporating these stimuli into games that are enjoyable to play (see the FER/FEE interventions referenced above) and that can be customized so that the interventionist selects the images at each level of FER/FEE functioning could facilitate facial emotion training in some individuals with ASD.
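The two approaches can be viewed as two ways of sequencing the same tagged stimuli: one difficulty level at a time vs. a mixed static field followed by dynamic AV stimuli. The sketch below is illustrative only; it assumes the difficulty levels from the taxonomy sketch above (0–2 static, 3–4 dynamic AV), and the function names and tuple representation are ours.

```python
# An illustrative sketch of the two proposed teaching orders, operating on
# (emotion, difficulty_level) pairs; not an implementation from cited work.
import random
from itertools import groupby

stimuli = [("happiness", lvl) for lvl in (0, 0, 1, 1, 2, 3, 4)]

def approach_1(items):
    """Progressive teaching: present one difficulty level at a time, in order."""
    ordered = sorted(items, key=lambda s: s[1])
    return [list(group) for _, group in groupby(ordered, key=lambda s: s[1])]

def approach_2(items, seed=0):
    """Field teaching: mix all static stimuli (levels 0-2) of the target emotion
    into one shuffled field, then present the dynamic AV stimuli (levels 3-4)."""
    static = [s for s in items if s[1] <= 2]
    dynamic = sorted((s for s in items if s[1] > 2), key=lambda s: s[1])
    random.Random(seed).shuffle(static)
    return [static, dynamic]

print(approach_1(stimuli))  # one block of stimuli per level
print(approach_2(stimuli))  # a mixed static field, then dynamic AV stimuli
```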

While the intrinsic social motivations of a child may not significantly impact how FER/FEE is taught (Garman et al., 2016), delivering the stimuli in a fun and intrinsically motivating way could improve generalizability (Baron-Cohen et al., 2009). White et al. (2018) conducted a feasibility study of their system developed to teach FEE to children with ASD. The system provides feedback to the child via computer analysis of the facial expression the child made in response to a cue. Such a system could be used in conjunction with an FER/FEE training program, since a child's ability to recognize their own emotions may facilitate FER/FEE learning and thus may be a framework upon which recognition of others' emotions can be built (Manfredonia et al., 2019; Ola and Gullon-Scott, 2020). Additionally, avatars can be created to interact in real time with a child and may provide an added opportunity for a person with ASD to initiate conversations of their own accord, as has been seen at Disney World, where children with ASD willingly interact with an avatar of Crush the turtle from the movie Finding Nemo (Carter et al., 2014). Regarding dynamic AV stimuli, since multisensory integration has been shown to enhance our ability to learn new information (Shams and Seitz, 2008), incorporating auditory input with visual input may facilitate the ability of individuals with ASD to learn emotion recognition, especially at the more complex levels of FER/FEE, where the stimuli would be considered the most complex (real-world, AV, and genuine expressions of emotion). Consideration should be given to the individual's level of functioning in face/emotion processing and their learning style when determining where to begin and whether to teach progressively (Approach 1) or in a field (Approach 2) of static images before progressing to dynamic AV videos.

Additionally, the scientific community has developed multiple datasets of face stimuli for research purposes to investigate how FER/FEE is perceived in TD, ASD, and other disorders (for a review of FER databases, see Jia et al., 2020). These stimuli include static and dynamic expressions of emotion that are often well titrated (morphed levels between two emotions), but they are generally not known to autism therapists and are not utilized by them for teaching FER/FEE. Thus, the availability of face stimuli for teaching often depends upon the funds available to an interventionist. Therapists have been creative in finding free face stimuli to use when teaching their students, which can benefit children when varied images are used. However, this can be time-consuming and costly, especially if therapists must purchase images from different datasets to acquire a set of images for teaching a specific emotion. Therefore, we propose that interventionists take advantage of the variety of FER datasets that include both posed and genuine expressions of emotion and dynamic videos of facial expressions of emotion.

Many FER/FEE databases have been developed using the six basic emotions found to be universal (Ekman, 1970) and the Facial Action Coding System (FACS), which breaks down the facial muscle movements used to make expressions of emotion into AUs (Ekman, 1997). These same AUs are the primary measures of facial expression used by entities like Disney to animate characters and make their facial movements more realistic. Thus, teaching FER/FEE in those with ASD using stimuli that incorporate realistic portrayals of human emotions (e.g., Disney characters), together with analyses of human expressions of emotion based on these same measurements of facial micromovements (Leo et al., 2019), would bring the research on this critical aspect of human existence full circle by applying it to help those who struggle in this area. Lastly, avatar software developed by companies like ObEN can benefit FER intervention by enabling the creation of life-like avatars of a therapist or of the person with ASD, which may help individuals with ASD transition between static images and dynamic real-world videos that contain context clues, and possibly help them better understand their own expressions of emotion.

The proposed FERTH method requires research to investigate the merits of teaching FER/FEE serially (Approach 1) vs. teaching in a field of images at different levels of complexity to improve generalizability (Approach 2). Regardless, a more refined FER/FEE intervention based on current scientific findings has far-reaching implications for children and adults with ASD and for other disorders in which FER/FEE difficulties can significantly hinder social interactions, including SZ, stroke, and traumatic brain injury.

Author Contributions

PW, SW, and XL contributed to the research, analysis, and writing of the manuscript. All authors contributed to the article and approved the submitted version.

Funding

This research was supported by an NSF CAREER Award (BCS-1945230), Air Force Young Investigator Program Award (FA9550-21-1-0088), Dana Foundation Clinical Neuroscience Award, ORAU Ralph E. Powe Junior Faculty Enhancement Award, West Virginia University (WVU), and WVU PSCoR Program (to SW), and an NSF grant (OAC-1839909) and the WV Higher Education Policy Commission grant (HEPC.dsr.18.5; to XL).

Conflict of Interest

The authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.

Acknowledgments

The authors thank the two reviewers for improving the presentation of this work and Ms. Amber Li for her help in preparing the references.

References

Adolphs, R. (2008). Fear, faces, and the human amygdala. Curr. Opin. Neurobiol. 18, 166–172. doi: 10.1016/j.conb.2008.06.006

Adolphs, R., Sears, L., and Piven, J. (2001). Abnormal processing of social information from faces in autism. J. Cogn. Neurosci. 13, 232–240. doi: 10.1162/089892901564289

Ayres, A. J., and Robbins, J. (2005). Sensory Integration and the Child: Understanding Hidden Sensory Challenges. Los Angeles: Western Psychological Services.

Baron-Cohen, S., Golan, O., and Ashwin, E. (2009). Can emotion recognition be taught to children with autism spectrum conditions? Philos. Trans. R. Soc. Lond. B Biol. Sci. 364, 3567–3574. doi: 10.1098/rstb.2009.0191

Baron-Cohen, S., Ring, H. A., Bullmore, E. T., Wheelwright, S., Ashwin, C., and Williams, S. C. R. (2000). The amygdala theory of autism. Neurosci. Biobehav. Rev. 24, 355–364. doi: 10.1016/S0149-7634(00)00011-7

Begeer, S., Koot, H. M., Rieffe, C., Meerum Terwogt, M., and Stegge, H. (2008). Emotional competence in children with autism: diagnostic criteria and empirical evidence. Dev. Rev. 28, 342–369. doi: 10.1016/j.dr.2007.09.001

Berggren, S., Fletcher-Watson, S., Milenkovic, N., Marschik, P. B., Bölte, S., and Jonsson, U. (2018). Emotion recognition training in autism spectrum disorder: a systematic review of challenges related to generalizability. Dev. Neurorehabil. 21, 141–154. doi: 10.1080/17518423.2017.1305004

Black, M. H., Chen, N. T. M., Iyer, K. K., Lipp, O. V., Bölte, S., Falkmer, M., et al. (2017). Mechanisms of facial emotion recognition in autism spectrum disorders: insights from eye tracking and electroencephalography. Neurosci. Biobehav. Rev. 80, 488–515. doi: 10.1016/j.neubiorev.2017.06.016

Black, M. H., Chen, N. T., Lipp, O. V., Bölte, S., and Girdler, S. (2020). Complex facial emotion recognition and atypical gaze patterns in autistic adults. Autism 24, 258–262. doi: 10.1177/1362361319856969

Blampied, M. (2008). Are children with Autism Spectrum Disorder sensitive to the different emotions underlying posed and genuine smiles? MS Thesis. University of Canterbury.

Boccanfuso, L., Barney, E., Foster, C., Ahn, Y. A., Chawarska, K., Scassellati, B., et al. (2016). “Emotional robot to examine different play patterns and affective responses of children with and without ASD,” in Proceedings of the 11th ACM/IEEE International Conference on Human-Robot Interaction (HRI); March, 2016; 19–26.

Bons, D., Scheepers, F. E., Rommelse, N. N. J., and Buitelaar, J. K. (2011). “Motor, emotional, and cognitive empathic abilities in children with autism and conduct disorder,” in Proceedings of the ACM International Conference Proceeding Series; August, 2011; 109–113.

Boraston, Z. L., Corden, B., Miles, L. K., Skuse, D. H., and Blakemore, S.-J. (2008). Brief report: perception of genuine and posed smiles by individuals with autism. J. Autism Dev. Disord. 38, 574–580. doi: 10.1007/s10803-007-0421-1

Breiter, H. C., Etcoff, N. L., Whalen, P. J., Kennedy, W. A., Rauch, S. L., Buckner, R. L., et al. (1996). Response and habituation of the human amygdala during visual processing of facial expression. Neuron 17, 875–887. doi: 10.1016/S0896-6273(00)80219-6

Brewer, R., Biotti, F., Catmur, C., Press, C., Happé, F., Cook, R., et al. (2016). Can neurotypical individuals read autistic facial expressions? Atypical production of emotional facial expressions in Autism Spectrum Disorders. Autism Res. 9, 262–271. doi: 10.1002/aur.1508

Carter, E. J., Williams, D. L., Hodgins, J. K., and Lehman, J. F. (2014). Are children with autism more responsive to animated characters? A study of interactions with humans and human-controlled avatars. J. Autism Dev. Disord. 44, 2475–2485. doi: 10.1007/s10803-014-2116-8

Cassidy, S., Mitchell, P., Chapman, P., and Ropar, D. (2015). Processing of spontaneous emotional responses in adolescents and adults with autism spectrum disorders: effect of stimulus type. Autism Res. 8, 534–544. doi: 10.1002/aur.1468

Cassidy, S., Ropar, D., Mitchell, P., and Chapman, P. (2014). Can adults with autism spectrum disorders infer what happened to someone from their emotional response? Autism Res. 7, 112–123. doi: 10.1002/aur.1351

Celani, G., Battacchi, M. W., and Arcidiacono, L. (1999). The understanding of the emotional meaning of facial expressions in people with autism. J. Autism Dev. Disord. 29, 57–66. doi: 10.1023/A:1025970600181

Cockburn, J., Bartlett, M., Tanaka, J., Movellan, J., Pierce, M., and Schultz, R. (2008). “SmileMaze: A Tutoring System in Real-Time Facial Expression Perception and Production for Children with Autism Spectrum Disorder,” in ECAG 2008 Workshop: Facial and Bodily Expressions for Control and Adaptation of Games; September, 2008; 3–9.

Crompton, C. J., Ropar, D., Evans-Williams, C. V., Flynn, E. G., and Fletcher-Watson, S. (2020). Autistic peer-to-peer information transfer is highly effective. Autism 24, 1704–1712. doi: 10.1177/1362361320919286

Dalton, K. M., Nacewicz, B. M., Johnstone, T., Schaefer, H. S., Gernsbacher, M. A., Goldsmith, H. H., et al. (2005). Gaze fixation and the neural circuitry of face processing in autism. Nat. Neurosci. 8, 519–526. doi: 10.1038/nn1421

Dawson, G., Webb, S. J., Carver, L., Panagiotides, H., and McPartland, J. (2004). Young children with autism show atypical brain responses to fearful versus neutral facial expressions of emotion. Dev. Sci. 7, 340–359. doi: 10.1111/j.1467-7687.2004.00352.x

Ekman, P. (1970). Universal facial expressions of emotion. Cal. Ment. Health 8, 151–158.

Ekman, R. (1997). What the Face Reveals: Basic and Applied Studies of Spontaneous Expression Using the Facial Action Coding System (FACS). USA: Oxford University Press.

Faso, D. J., Sasson, N. J., and Pinkham, A. E. (2014). Evaluating posed and evoked facial expressions of emotion from adults with Autism Spectrum Disorder. J. Autism Dev. Disord. 45, 75–89. doi: 10.1007/s10803-014-2194-7

Fridenson-Hayo, S., Berggren, S., Lassalle, A., Tal, S., Pigat, D., Bölte, S., et al. (2016). Basic and complex emotion recognition in children with autism: cross-cultural findings. Mol. Autism. 7, 1–11. doi: 10.1186/s13229-016-0113-9

Garman, H. D., Spaulding, C. J., Webb, S. J., Mikami, A. Y., Morris, J. P., and Lerner, M. D. (2016). Wanting it too much: an inverse relation between social motivation and facial emotion recognition in autism spectrum disorder. Child Psychiatry Hum. Dev. 47, 890–902. doi: 10.1007/s10578-015-0620-5

Golan, O., Sinai-Gavrilov, Y., and Baron-Cohen, S. (2015). The Cambridge mindreading face-voice battery for children (CAM-C): complex emotion recognition in children with and without autism spectrum conditions. Mol. Autism 6, 1–9. doi: 10.1186/s13229-015-0018-z

Gordon, I., Pierce, M. D., Bartlett, M. S., and Tanaka, J. W. (2014). Training facial expression production in children on the autism spectrum. J. Autism Dev. Disord. 44, 2486–2498. doi: 10.1007/s10803-014-2118-6

Guha, T., Yang, Z., Grossman, R. B., and Narayanan, S. S. (2018). A computational study of expressive facial dynamics in children with autism. IEEE Trans. Affect. Comput. 9, 14–20. doi: 10.1109/TAFFC.2016.2578316

Hall, G. B. C., Szechtman, H., and Nahmias, C. (2003). Enhanced salience and emotion recognition in autism: a PET study. Am. J. Psychiatry 160, 1439–1441. doi: 10.1176/appi.ajp.160.8.1439

Hanley, M., McPhillips, M., Mulhern, G., and Riby, D. M. (2013). Spontaneous attention to faces in Asperger syndrome using ecologically valid static stimuli. Autism 17, 754–761. doi: 10.1177/1362361312456746

Harms, M. B., Martin, A., and Wallace, G. L. (2010). Facial emotion recognition in autism spectrum disorders: a review of behavioral and neuroimaging studies. Neuropsychol. Rev. 20, 290–322. doi: 10.1007/s11065-010-9138-6

Harris, H., Israeli, D., Minshew, N., Bonneh, Y., Heeger, D. J., Behrmann, M., et al. (2015). Perceptual learning in autism: over-specificity and possible remedies. Nat. Neurosci. 18, 1–4. doi: 10.1038/nn.4129

Hess, U., Kappas, A., McHugo, G. J., Kleck, R. E., and Lanzetta, J. T. (1989). An analysis of the encoding and decoding of spontaneous and posed smiles: the use of facial electromyography. J. Nonverbal Behav. 13, 121–137. doi: 10.1007/BF00990794

Hobson, R. P. (1986). The autistic child’s appraisal of expressions of emotion. J. Child Psychol. Psychiatry 27, 321–342. doi: 10.1111/j.1469-7610.1986.tb01836.x

Hobson, R. P., Ouston, J., and Lee, A. (1988). What’s in a face? The case of autism. Br. J. Psychol. 79, 441–453. doi: 10.1111/j.2044-8295.1988.tb02745.x

Jaswal, V. K., and Akhtar, N. (2019). Being versus appearing socially uninterested: challenging assumptions about social motivation in autism. Behav. Brain Sci. 1–84. doi: 10.1017/S0140525X18001826 [Epub ahead of print]

Jia, S., Wang, S., Hu, C., Webster, P., and Li, X. (2020). Detection of genuine and posed facial expressions of emotion: databases and methods. Front. Psychol. 11:580287. doi: 10.3389/fpsyg.2020.580287

Keating, C. T., and Cook, J. L. (2021). Facial expression production and recognition in autism spectrum disorders: a shifting landscape. Psychiatr. Clin. 44, 125–139. doi: 10.1016/j.psc.2020.11.010

Kennedy, D. P., and Adolphs, R. (2013). Perception of emotions from facial expressions in high-functioning adults with autism. Neuropsychologia 50, 3313–3319. doi: 10.1016/j.neuropsychologia.2012.09.038

Kim, Y. G., and Huynh, X.-P. (2017). “Discrimination between Genuine Versus Fake Emotion Using Long-Short Term Memory with Parametric Bias and Facial Landmarks,” in Proceedings of the IEEE International Conference on Computer Vision Workshops (ICCVW); October, 2017; Venice, 3065–3072.

Kliemann, D., Dziobek, I., Hatri, A., Baudewig, J., and Heekeren, H. R. (2012). The role of the amygdala in atypical gaze on emotional faces in autism spectrum disorders. J. Neurosci. 32, 9469–9476. doi: 10.1523/JNEUROSCI.5294-11.2012

Krumhuber, E. G., Küster, D., Namba, S., Shah, D., and Calvo, M. G. (2019). Emotion recognition from posed and spontaneous dynamic expressions: human observers versus machine analysis. Emotion 21, 447–451. doi: 10.1037/emo0000712

Langdell, T. (1978). Recognition of faces: an approach to the study of autism. J. Child Psychol. Psychiatry 19, 255–268. doi: 10.1111/j.1469-7610.1978.tb00468.x

Law Smith, M. J., Montagne, B., Perrett, D. I., Gill, M., and Gallagher, L. (2010). Detecting subtle facial emotion recognition deficits in high-functioning autism using dynamic stimuli of varying intensities. Neuropsychologia 48, 2777–2781. doi: 10.1016/j.neuropsychologia.2010.03.008

Leo, M., Carcagnì, P., Distante, C., Mazzeo, P. L., Spagnolo, P., Levante, A., et al. (2019). Computational analysis of deep visual data for quantifying facial expression production. Appl. Sci. 9:4542. doi: 10.3390/app9214542

Loveland, K. A., Tunali-Kotoski, B., Pearson, D. A., Brelsford, K. A., Ortegon, J., and Chen, R. (1994). Imitation and expression of facial affect in autism. Dev. Psychopathol. 6, 433–444. doi: 10.1017/S0954579400006039

Lundqvist, L. O. (2015). Hyper-responsiveness to touch mediates social dysfunction in adults with autism spectrum disorders. Res. Autism Spectr. Disord. 9, 13–20. doi: 10.1016/j.rasd.2014.09.012

Manfredonia, J., Bangerter, A., Manyakov, N. V., Ness, S., Lewin, D., Skalkin, A., et al. (2019). Automatic recognition of posed facial expression of emotion in individuals with Autism Spectrum Disorder. J. Autism Dev. Disord. 49, 279–293. doi: 10.1007/s10803-018-3757-9

Martínez, A., Tobe, R., Dias, E. C., Ardekani, B. A., Veenstra-VanderWeele, J., Patel, G., et al. (2019). Differential patterns of visual sensory alteration underlying face emotion recognition impairment and motion perception deficits in Schizophrenia and Autism Spectrum Disorder. Biol. Psychiatry 86, 557–567. doi: 10.1016/j.biopsych.2019.05.016

Mavadati, M., Sanger, P., and Mahoor, M. H. (2016). “Extended DISFA Dataset: investigating posed and spontaneous facial expressions,” in Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition Workshops; June, 2016; 1452–1459.

Mesibov, G. B. (1984). Social skills training with verbal autistic adolescents and adults: a program model. J. Autism Dev. Disord. 14, 395–404. doi: 10.1007/BF02409830

Milton, D. E. (2012). On the ontological status of autism: the “double empathy problem”. Disabil. Soc. 27, 883–887. doi: 10.1080/09687599.2012.710008

Milton, D., and Sims, T. (2016). How is a sense of well-being and belonging constructed in the accounts of autistic adults? Disabil. Soc. 31, 520–534. doi: 10.1080/09687599.2016.1186529

Monk, C. S., Weng, S. J., Wiggins, J. L., Kurapati, N., Louro, H. M. C., Carrasco, M., et al. (2010). Neural circuitry of emotional face processing in autism spectrum disorders. J. Psychiatry Neurosci. 35, 105–114. doi: 10.1503/jpn.090085

Naab, P. J., and Russell, J. A. (2007). Judgments of emotion from spontaneous facial expressions of new Guineans. Emotion 7, 736–744. doi: 10.1037/1528-3542.7.4.736

Nuske, H. J., Vivanti, G., and Dissanayake, C. (2013). Are emotion impairments unique to, universal, or specific in autism spectrum disorder? A comprehensive review. Cognit. Emot. 27, 1042–1061. doi: 10.1080/02699931.2012.762900

Ola, L., and Gullon-Scott, F. (2020). Facial emotion recognition in autistic adult females correlates with alexithymia, not autism. Autism 24, 2021–2034. doi: 10.1177/1362361320932727

Ozonoff, S., Pennington, B. F., and Rogers, S. J. (1990). Are there emotion perception deficits in young autistic children? J. Child Psychol. Psychiatry 31, 343–361. doi: 10.1111/j.1469-7610.1990.tb01574.x

Park, S., Lee, K., Lim, J. A., Ko, H., Kim, T., Lee, J. I., et al. (2020). Differences in facial expressions between spontaneous and posed smiles: automated method by action units and three-dimensional facial landmarks. Sensors 20:1199. doi: 10.3390/s20041199

Pelphrey, K. A., and Carter, E. J. (2008). Brain mechanisms for social perception: from autism and typical development. Ann. N. Y. Acad. Sci. 1145, 283–299. doi: 10.1196/annals.1416.007

Pelphrey, K. A., Morris, J. P., and McCarthy, G. (2005). Neural basis of eye gaze processing deficits in autism. Brain 128, 1038–1048. doi: 10.1093/brain/awh404

Pelphrey, K. A., Morris, J. P., McCarthy, G., and Labar, K. S. (2007). Perception of dynamic changes in facial affect and identity in autism. Soc. Cogn. Affect. Neurosci. 2, 140–149. doi: 10.1093/scan/nsm010

Pelphrey, K., Sasson, N. J., Reznick, J. S., Paul, G., Goldman, B. D., and Piven, J. (2002). Visual scanning of faces in Autism. J. Autism Dev. Disord. 32, 249–261. doi: 10.1023/A:1016374617369

Rutishauser, U., Tudusciuc, O., Wang, S., Mamelak, A. N., Ross, I. B., and Adolphs, R. (2013). Single-neuron correlates of atypical face processing in autism. Neuron 80, 887–899. doi: 10.1016/j.neuron.2013.08.029

Sachse, M., Schlitt, S., Hainz, D., Ciaramidaro, A., Walter, H., Poustka, F., et al. (2014). Facial emotion recognition in paranoid schizophrenia and autism spectrum disorder. Schizophr. Res. 159, 509–514. doi: 10.1016/j.schres.2014.08.030

Sasson, N. J. (2006). The development of face processing in autism. J. Autism Dev. Disord. 36, 381–394. doi: 10.1007/s10803-006-0076-3

Sasson, N. J., Faso, D. J., Nugent, J., Lovell, S., Kennedy, D. P., and Grossman, R. B. (2017). Neurotypical peers are less willing to interact with those with autism based on thin slice judgments. Sci. Rep. 7, 1–10. doi: 10.1038/srep40700

Sasson, N. J., and Morrison, K. E. (2019). First impressions of adults with autism improve with diagnostic disclosure and increased autism knowledge of peers. Autism 23, 50–59. doi: 10.1177/1362361317729526

Sasson, N. J., Turner-Brown, L. M., Holtzclaw, T. N., Lam, K. S., and Bodfish, J. W. (2008). Children with autism demonstrate circumscribed attention during passive viewing of complex social and nonsocial picture arrays. Autism Res. 1, 31–42. doi: 10.1002/aur.4

Sauter, D. A., and Fischer, A. H. (2018). Can perceivers recognise emotions from spontaneous expressions? Cognit. Emot. 32, 504–515. doi: 10.1080/02699931.2017.1320978

Schmidt, K. L., Ambadar, Z., Cohn, J. F., and Reed, L. I. (2006). Movement differences between deliberate and spontaneous facial expressions: zygomaticus major action in smiling. J. Nonverbal Behav. 30, 37–52. doi: 10.1007/s10919-005-0003-x

Shams, L., and Seitz, A. R. (2008). Benefits of multisensory learning. Trends Cogn. Sci. 12, 411–417. doi: 10.1016/j.tics.2008.07.006

Sheppard, E., Pillai, D., Wong, G. T. L., Ropar, D., and Mitchell, P. (2016). How easy is it to read the minds of people with autism spectrum disorder? J. Autism Dev. Disord. 46, 1247–1254. doi: 10.1007/s10803-015-2662-8

Simões, M., Monteiro, R., Andrade, J., Mouga, S., França, F., Oliveira, G., et al. (2018). A novel biomarker of compensatory recruitment of face emotional imagery networks in autism spectrum disorder. Front. Neurosci. 12:791. doi: 10.3389/fnins.2018.00791

Sucksmith, E., Allison, C., Baron-Cohen, S., Chakrabarti, B., and Hoekstra, R. A. (2013). Empathy and emotion recognition in people with autism, first-degree relatives, and controls. Neuropsychologia 51, 98–105. doi: 10.1016/j.neuropsychologia.2012.11.013

Tang, J. S., Chen, N. T., Falkmer, M., Bölte, S., and Girdler, S. (2019). Atypical visual processing but comparable levels of emotion recognition in adults with autism during the processing of social scenes. J. Autism Dev. Disord. 49, 4009–4018. doi: 10.1007/s10803-019-04104-y

Thye, M. D., Bednarz, H. M., Herringshaw, A. J., Sartin, E. B., and Kana, R. K. (2018). The impact of atypical sensory processing on social impairments in autism spectrum disorder. Dev. Cogn. Neurosci. 29, 151–167. doi: 10.1016/j.dcn.2017.04.010

Trevisan, D. A., Hoskyn, M., and Birmingham, E. (2018). Facial expression production in autism: a meta-analysis. Autism Res. 11, 1586–1601. doi: 10.1002/aur.2037

Uljarevic, M., and Hamilton, A. (2013). Recognition of emotions in autism: a formal meta-analysis. J. Autism Dev. Disord. 43, 1517–1526. doi: 10.1007/s10803-012-1695-5

Van Der Geld, P., Oosterveld, P., Bergé, S. J., and Kuijpers-Jagtman, A. M. (2008). Tooth display and lip position during spontaneous and posed smiling in adults. Acta Odontol. Scand. 66, 207–213. doi: 10.1080/00016350802060617

Volker, M. A., Lopata, C., Smith, D. A., and Thomeer, M. L. (2009). Facial encoding of children with high-functioning autism spectrum disorders. Focus Autism Other Dev. Disabil. 24, 195–204. doi: 10.1177/1088357609347325

Wagener, G. L., Berning, M., Costa, A. P., Steffgen, G., and Melzer, A. (2020). Effects of emotional music on facial emotion recognition in children with autism spectrum disorder (ASD). J. Autism Dev. Disord. 1–10. doi: 10.1007/s10803-020-04781-0 [Epub ahead of print]

Wang, R., Chen, C. F., Peng, H., Liu, X., Liu, O., and Li, X. (2019). Digital Twin: acquiring high-fidelity 3D avatar from a single image. ArXiv [Preprint].

Wang, S., Wu, C., He, M., Wang, J., and Ji, Q. (2015). Posed and spontaneous expression recognition through modeling their spatial patterns. Mach. Vis. Appl. 26, 219–231. doi: 10.1007/s00138-015-0657-2

Weeks, S. J., and Hobson, R. P. (1987). The salience of facial expression for autistic children. J. Child Psychol. Psychiatry 28, 137–152. doi: 10.1111/j.1469-7610.1987.tb00658.x

White, S. W., Abbott, L., Wieckowski, A. T., Capriola-Hall, N. N., Aly, S., and Youssef, A. (2018). Feasibility of automated training for facial emotion expression and recognition in autism. Behav. Ther. 49, 881–888. doi: 10.1016/j.beth.2017.12.010

Wong, N., Beidel, D. C., Sarver, D. E., and Sims, V. (2012). Facial emotion recognition in children with high functioning autism and children with social phobia. Child Psychiatry Hum. Dev. 43, 775–794. doi: 10.1007/s10578-012-0296-z

Zhang, S., Xia, X., Li, S., Shen, L., Liu, J., Zhao, L., et al. (2019). Using technology-based learning tool to train facial expression recognition and emotion understanding skills of Chinese preschoolers with autism spectrum disorder. Int. J. Dev. Disabil. 65, 378–386. doi: 10.1080/20473869.2019.1656384

Zhou, H., Cai, X., Weigl, M., Bang, P., Cheung, E. F. C., and Chan, R. C. K. (2018). Multisensory temporal binding window in autism spectrum disorders and schizophrenia spectrum disorders: a systematic review and meta-analysis. Neurosci. Biobehav. Rev. 86, 66–76. doi: 10.1016/j.neubiorev.2017.12.013

Keywords: facial expression of emotion, emotion recognition, posed vs. genuine emotion, autism spectrum disorder, social deficits

Citation: Webster PJ, Wang S and Li X (2021) Review: Posed vs. Genuine Facial Emotion Recognition and Expression in Autism and Implications for Intervention. Front. Psychol. 12:653112. doi: 10.3389/fpsyg.2021.653112

Received: 13 January 2021; Accepted: 02 June 2021;
Published: 09 July 2021.

Edited by:

Anthony P. Atkinson, Durham University, United Kingdom

Reviewed by:

Mikle South, Brigham Young University, United States
Michael K. Yeung, Hong Kong Polytechnic University, China

Copyright © 2021 Webster, Wang and Li. This is an open-access article distributed under the terms of the Creative Commons Attribution License (CC BY). The use, distribution or reproduction in other forums is permitted, provided the original author(s) and the copyright owner(s) are credited and that the original publication in this journal is cited, in accordance with accepted academic practice. No use, distribution or reproduction is permitted which does not comply with these terms.

*Correspondence: Shuo Wang, shuo.wang@mail.wvu.edu

Disclaimer: All claims expressed in this article are solely those of the authors and do not necessarily represent those of their affiliated organizations, or those of the publisher, the editors and the reviewers. Any product that may be evaluated in this article or claim that may be made by its manufacturer is not guaranteed or endorsed by the publisher.