Emotion recognition deficits in eating disorders are explained by co-occurring alexithymia

Previous research has yielded inconsistent findings regarding the ability of individuals with eating disorders (EDs) to recognize facial emotion, making the clinical features of this population hard to determine. This study tested the hypothesis that where observed, emotion recognition deficits exhibited by patients with EDs are due to alexithymia, a co-occurring condition also associated with emotion recognition difficulties. Ability to recognize facial emotion was investigated in a sample of individuals with EDs and varying degrees of co-occurring alexithymia, and an alexithymia-matched control group. Alexithymia, but not ED symptomology, was predictive of individuals' emotion recognition ability, inferred from tolerance to high-frequency visual noise. This relationship was specific to emotion recognition, as neither alexithymia nor ED symptomology was associated with ability to recognize facial identity. These findings suggest that emotion recognition difficulties exhibited by patients with ED are attributable to alexithymia, and may not be a feature of EDs per se.


Introduction
Feeding and eating disorders (hereafter EDs) are axis I disorders characterized by disturbed and inappropriate patterns of eating [1]. Three subtypes are recognized, namely anorexia nervosa (AN; associated with emaciation, distorted body image and a fear of gaining weight), bulimia nervosa (BN; associated with recurrent binge eating followed by compensatory behaviours) and binge eating disorder (BED; associated with recurrent binge eating in the absence of compensatory behaviours).

© 2015 The Authors. Published by the Royal Society under the terms of the Creative Commons Attribution License http://creativecommons.org/licenses/by/4.0/, which permits unrestricted use, provided the original author and source are credited.

Stimuli and materials
Emotional face stimuli were taken from the STOIC database [37] and supplemented by stimuli purposely created for the study, yielding a total of 49 stimuli. The stimulus set used in the test phase comprised seven male identities, expressing happiness, sadness, surprise, fear, disgust, anger and pain (see figure 1 for examples). An additional seven images showing the seven identities with an emotionally neutral expression were used during training. Each stimulus was a greyscale image depicting a face, cropped to remove external features. Stimuli were presented on a 15″ LCD screen and subtended approximately 8° × 7° when viewed at a distance of 60 cm.

Training procedure
Two trial-to-criterion training phases, whereby participants were required to meet predetermined criteria before progressing, preceded the experimental task, allowing participants to learn the names of the seven identities. In the first training phase, participants viewed seven emotionally neutral identities and were prompted to select a particular identity (e.g. 'pick Oscar') using key presses. The location of each identity was randomized throughout. The second training phase ensured that participants could correctly identify each face when presented in isolation; i.e. without the other identities present for comparison. Participants viewed one facial identity and had to select the correct name from the seven options. A screen showing the seven identities with their corresponding names preceded both training tasks. Stimuli remained visible until participants responded, and were followed by accuracy feedback alongside the correct answer. On both training tasks, participants were required to correctly and consecutively identify the seven individuals twice in order to proceed. This ensured that participants had sufficiently memorized the identities. Participants did not receive training in emotion recognition, nor was their ability to recognize emotions measured prior to the test procedure, as this may have altered participants' recognition performance in the experimental paradigm, and, unlike the novel identities, emotional expressions are encountered in daily life.

Figure 1. During the experimental procedure, each emotion was equally likely to be expressed by each of the seven identities, ensuring that identity and emotion were not confounded. Examples show stimuli obscured by increasing levels of visual noise (10%, 30%, 50%, 70% and 90%).
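The trial-to-criterion logic described above can be sketched as follows. This is an illustrative reconstruction, not the authors' code; the function name is hypothetical, and the criterion is simplified to 14 consecutive correct responses (seven identities × two passes):

```python
def reached_criterion(responses, n_identities=7, repeats=2):
    """Return True once the most recent n_identities * repeats responses
    are all correct (a simplified stand-in for the paper's criterion of
    naming all seven identities correctly and consecutively, twice)."""
    need = n_identities * repeats  # 14 consecutive correct responses
    return len(responses) >= need and all(responses[-need:])
```

A training loop would append each trial's accuracy to `responses` and stop presenting trials once `reached_criterion(responses)` returns True.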

Test procedure
Experimental trials began with a fixation point (1000 ms) followed by a single facial image, depicting one of the seven possible identities exhibiting an emotional expression (800 ms). This was replaced by a prompt to attribute either emotion (e.g. 'anger: yes or no?') or identity (e.g. 'Oscar: yes or no?'). Importantly, identical stimuli were used to test emotion and identity recognition, and emotion and identity trials were interleaved. Participants were therefore unaware whether they would have to judge identity or emotion when viewing each stimulus. This feature of the design ensured that both attributes had to be processed simultaneously, as is the case in real-life interactions. The attribution prompt remained visible until participants responded with a key press. Participants' emotion and identity recognition ability was estimated by determining their tolerance to high-frequency visual noise (figure 1). An adaptive staircase procedure was used whereby the amount of noise superimposed on each stimulus image was varied incrementally, to determine the maximum level of noise participants could tolerate and still recognize each emotion and identity reliably (hereafter their recognition threshold). Higher thresholds are indicative of superior recognition ability. Initial threshold estimates for each of the seven identities and emotions were set at 50%, and remained at this level for the first 84 trials (six presentations per emotion and identity). Thereafter, when participants correctly recognized a particular emotion or identity twice consecutively, or made a single incorrect response, the corresponding noise parameter was increased (making that attribution harder on subsequent trials) or decreased (making that attribution easier on subsequent trials), respectively. The noise manipulation was achieved by replacing a given proportion (initially 50%) of the greyscale intensity values comprising the image with zeros, setting them to black. Intensity values were selected for distortion at random, sampling the entire facial image uniformly. The size of each stepwise increment decreased as participants progressed through the experiment. From the 85th trial until the 140th trial, stepwise adjustments of ±16% were made. In the second, third, fourth and fifth blocks (each comprising 140 trials), the stepwise adjustments were decreased to ±8%, ±4%, ±2% and ±1%, respectively. Large increments early in the procedure ensured that the staircase quickly arrived at the approximate threshold for each identity and emotion. Smaller increments towards the end of the procedure ensured that threshold estimates became more stable as participants approached their maximum level of tolerance, and allowed estimates to be 'fine-tuned'. The 14 threshold estimates (seven emotions, seven identities) reached after 700 trials (50 trials per emotion and identity) were taken as the final recognition thresholds for that participant. Prior to the experimental paradigm, participants completed five practice trials to familiarize themselves with the format of experimental trials.

Table 1. Means and standard deviations for recognition thresholds demonstrated by control and eating disorder (ED) groups, with t-tests for group differences, and correlations with alexithymia and ED symptomology. (Alexithymia is measured by the Toronto alexithymia scale (TAS-20), whereas ED symptomology is measured by the eating disorder examination questionnaire (EDE-Q). None of the measures of emotion or identity recognition was associated with ED symptomology. Alexithymia was significantly negatively correlated with the threshold for global emotion recognition, and for happiness, disgust and pain recognition. There was also a strong trend for alexithymia to be negatively correlated with anger recognition threshold. *p < 0.05; **p < 0.01.)
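The staircase and noise manipulation described above can be sketched as follows. This is an illustrative reconstruction, not the authors' code: the class and function names are my own, and only the rules stated in the text (two consecutive correct responses raise the noise, any error lowers it; a ±16/8/4/2/1% step schedule; noise applied by blacking out a random proportion of intensity values) are taken from the source:

```python
import random

STEP_SCHEDULE = [16, 8, 4, 2, 1]  # % adjustment per 140-trial block

class Staircase:
    """Tracks the noise threshold for one emotion or identity category:
    two consecutive correct responses raise the noise (harder); any
    incorrect response lowers it (easier)."""

    def __init__(self, start_noise=50):
        self.noise = start_noise       # % of intensity values set to black
        self.run_of_correct = 0

    def update(self, correct, step):
        if correct:
            self.run_of_correct += 1
            if self.run_of_correct == 2:
                self.noise = min(100, self.noise + step)
                self.run_of_correct = 0
        else:
            self.noise = max(0, self.noise - step)
            self.run_of_correct = 0

def add_noise(pixels, noise_pct, rng=None):
    """Replace a random noise_pct% of greyscale values with 0 (black),
    sampling uniformly across the whole image."""
    rng = rng or random.Random(0)
    pixels = list(pixels)
    n = round(len(pixels) * noise_pct / 100)
    for i in rng.sample(range(len(pixels)), n):
        pixels[i] = 0
    return pixels
```

One `Staircase` per emotion and per identity (14 in total) would be updated only on trials probing that category, with `step` drawn from `STEP_SCHEDULE` according to the current block.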
A subset of individuals who took part in the pilot study (eight typical participants) completed the task a second time, in order for test-retest reliability of the current paradigm to be determined. Test-retest reliability analysis revealed a trend for global emotion thresholds to correlate across the two time points (r = 0.513, p = 0.073), and a significant correlation between global identity thresholds across the two time points (r = 0.641, p = 0.018). Importantly, this suggests that a specific deficit in emotion recognition could not simply be explained by reduced reliability of the identity task.

Results
In addition to the 14 recognition thresholds (seven emotions, seven identities) estimated for each participant, global emotion and identity thresholds were calculated by averaging across the seven individual emotion and identity estimates. These global measures of recognition ability for identity and emotion are directly comparable: each is a composite of the thresholds estimated for seven categories, comprising seven exemplars. Associations between the resulting distributions, alexithymia and ED symptomology were then determined (table 1). Alexithymia was significantly negatively correlated with thresholds for global emotion recognition, and for happiness, disgust and pain recognition; the correlation between alexithymia and anger threshold also approached significance (r(40) = −0.301, p = 0.053). Scatter plots depicting these simple correlations can be seen in figure 2. Recognition of the remaining emotions (sadness, surprise and fear) also showed negative correlations with alexithymia severity, but these fell short of statistical significance. In order to determine the impact of ED diagnosis on emotion and identity recognition, a 2 (ED group) × 2 (task: global emotion threshold versus global identity threshold) analysis of variance was performed on the emotion and identity recognition thresholds. Neither the main effect of ED group (F(1,40) = 0.68, p = 0.416, η² = 0.017), nor the ED group × task interaction (F(1,40) = 0.05, p = 0.819, η² = 0.001) was significant. Follow-up t-tests were conducted to assess the impact of ED group on each task individually. As predicted by the alexithymia hypothesis of affective impairment, individuals with EDs and control participants matched for co-occurring alexithymia demonstrated equivalent recognition thresholds for facial emotion (t(40) = 0.978, p = 0.334, d = 0.309, CI (−0.05, 0.14)) and identity (t(40) = 0.546, p = 0.588, d = 0.137, CI (−0.09, 0.16)). There was no difference between the groups in ability to recognize any of the seven individual emotions, and no correlation between EDE-Q score and either emotion or identity recognition thresholds (table 1).
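The global thresholds and simple correlations above reduce to a mean over seven per-category thresholds and Pearson's r. A minimal sketch (function names are my own; inputs would be the per-participant threshold estimates):

```python
from statistics import mean

def global_threshold(category_thresholds):
    """Composite threshold: the mean of the seven per-category estimates."""
    return mean(category_thresholds)

def pearson_r(x, y):
    """Pearson correlation between two equal-length samples."""
    mx, my = mean(x), mean(y)
    num = sum((a - mx) * (b - my) for a, b in zip(x, y))
    den = (sum((a - mx) ** 2 for a in x)
           * sum((b - my) ** 2 for b in y)) ** 0.5
    return num / den
```

A negative `pearson_r` between TAS-20 scores and global emotion thresholds corresponds to the reported pattern: more severe alexithymia, lower tolerable noise.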
Together, these results suggest that ED symptomology is unrelated to emotion and identity recognition ability, whereas alexithymia explains substantial variation in participants' ability to recognize facial emotion. Nevertheless, it is important to determine whether the significant relationships observed between alexithymia and emotion recognition survive once individuals' depression scores are accounted for [34,35]. In addition, individual differences attributable to age and IQ (variables known to affect performance on face perception tasks) may prevent detection of simple correlations between ED symptomology and emotion recognition ability.
To address these issues, hierarchical regression analyses were conducted on the global emotion and global identity thresholds. Depression scores, age and IQ were entered in the first step of the regression model, followed by alexithymia in the second step, and ED symptomology in the third. Alexithymia was found to be a significant predictor of the emotion recognition thresholds over and above the demographic variables (t(40) = 2.422, p = 0.020, d = 0.766), and its addition to the model increased the variance accounted for by 10.1%. ED symptomology, when added in the third step of the model, did not predict emotion recognition threshold significantly and yielded a non-significant increase in the variance accounted for (2.3%). When global identity threshold was also controlled for in the first step of the model, the predictive ability of alexithymia fell to a two-tailed trend (p = 0.065).

Table 2. (a) Regression models for the prediction of emotion recognition threshold, including demographic variables (age, IQ and depression) in the first step, alexithymia, measured by the TAS-20, in the second step, and ED symptomology, measured by the EDE-Q, in the third. (b) Regression models including demographic variables in the first step, ED symptomology in the second step, and alexithymia in the third. (Both hierarchical regressions indicate that alexithymia does, and ED symptomology does not, predict emotion recognition threshold over and above demographic variables, regardless of the order in which they were entered into the regression model.)
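The stepwise change in variance explained (ΔR²) that these hierarchical regressions report can be illustrated with ordinary least squares; a minimal sketch under stated assumptions (function names are my own, and this is not the authors' analysis code):

```python
import numpy as np

def r_squared(predictors, y):
    """R^2 of an OLS fit of y on the given predictor columns (plus intercept)."""
    X = np.column_stack([np.ones(len(y))] + list(predictors))
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta
    return 1 - (resid @ resid) / ((y - y.mean()) @ (y - y.mean()))

def delta_r2(base, added, y):
    """Variance explained gained by adding a block of predictors
    (e.g. alexithymia) over a base model (e.g. age, IQ, depression)."""
    return r_squared(list(base) + list(added), y) - r_squared(base, y)
```

Entering blocks in both orders, as in table 2, amounts to calling `delta_r2` with the roles of the two predictor blocks swapped.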
Although the control and ED groups were approximately matched for alexithymia, a significant correlation was observed between alexithymia and ED symptomology (r(40) = 0.343, p = 0.026). To ensure that multicollinearity between these variables did not obscure a relationship between ED symptomology and emotion recognition ability, a further regression analysis was conducted: depression scores, age and IQ were again entered in the first step, but now followed by ED symptomology in the second step, and alexithymia in the third. ED symptomology again failed to predict recognition thresholds for emotion, whereas alexithymia remained a significant predictor (t(40) = 2.56, p = 0.015, d = 0.812), increasing the variance accounted for by 11.3%. See table 2 for a summary of the regression models. In a convergent analysis, partial correlation coefficients were computed between emotion threshold and alexithymia, and between emotion threshold and ED symptomology, adjusted for age, IQ and depression. Partial coefficients were compared using Steiger's Z-test. Results showed a significantly stronger relationship between alexithymia and emotion threshold than between ED and emotion threshold, whether ED was measured using EDE-Q scores (z = 1.75, p < 0.05) or as a categorical variable (z = 2.25, p = 0.01).
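A comparison of two dependent correlations sharing one variable (here, emotion threshold) can be computed with the Meng, Rosenthal and Rubin (1992) formulation of Steiger's test. This is a generic sketch with illustrative inputs, not the study's data:

```python
import math

def steiger_z(r1, r2, r12, n):
    """Z for H0: corr(y, x1) == corr(y, x2), where r1 = corr(y, x1),
    r2 = corr(y, x2), r12 = corr(x1, x2) and n is the sample size
    (Meng, Rosenthal & Rubin, 1992)."""
    z1, z2 = math.atanh(r1), math.atanh(r2)   # Fisher z-transforms
    rbar2 = (r1 ** 2 + r2 ** 2) / 2
    f = min(1.0, (1 - r12) / (2 * (1 - rbar2)))
    h = (1 - f * rbar2) / (1 - rbar2)
    return (z1 - z2) * math.sqrt((n - 3) / (2 * (1 - r12) * h))
```

A positive Z indicates the first correlation (e.g. alexithymia with emotion threshold) is reliably larger in magnitude than the second (e.g. ED symptomology with emotion threshold).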
Finally, hierarchical regression analyses were also conducted for the recognition thresholds calculated for the seven individual emotions. Recognition ability for happiness, anger and disgust was significantly predicted by alexithymia and not by ED symptomology, having taken account of age, IQ and depression, irrespective of the order in which alexithymia and ED symptomology were entered into the model (table 3). Although a significant simple correlation was observed between alexithymia and recognition of pain, alexithymia failed to predict pain thresholds once the variance attributable to age, IQ and depression was accounted for (p = 0.064). Overall, these results indicate that alexithymia, and not ED symptomology, explains variation in emotion recognition ability.

Discussion
According to the alexithymia hypothesis of affective impairment [17], where observed, emotion recognition deficits in individuals with EDs are in fact attributable to co-occurring alexithymia and should not be regarded as a core feature of these conditions. To test this assertion, this study compared facial emotion recognition ability in individuals with an ED and an alexithymia-matched control group. Consistent with the alexithymia hypothesis, the ED and alexithymia-matched control groups demonstrated comparable recognition of facial emotion and identity. Crucially, however, alexithymia, not ED symptomology, was found to predict emotion recognition ability; severe alexithymia was associated with impaired emotion recognition, whereas ED diagnosis was unrelated to emotion recognition thresholds. Importantly, this relationship remained once age, IQ and depression were taken into account.
These results shed light on the inconsistent literature on emotion recognition in ED populations. Several authors have reported evidence for impaired recognition of facial emotion in EDs [4][5][6], prompting speculation that atypical emotion processing may be an important feature of these conditions [2]. The current findings suggest that emotion recognition impairment is, in fact, unrelated to EDs per se, and that heterogeneity of ED samples, with respect to alexithymia, is probably responsible for many of the contradictory findings reported previously. Where impaired emotion recognition in ED samples has been reported, clinical groups may have contained a greater proportion of individuals with severe alexithymia than control groups [27].
The current findings further support the suggestion that co-occurring alexithymia may explain inconsistent reports of impaired emotion recognition across a range of disorders [17]. Many conditions are associated with elevated rates of alexithymia, and equivocal reports of emotion recognition deficits [16]. Recognition of facial emotion in autism was predicted by levels of co-occurring alexithymia and not by the presence or severity of autism [18]; a pattern replicated in this study with an ED sample. These convergent findings suggest that co-occurring alexithymia can produce similar emotion recognition difficulties in different clinical conditions. It is therefore crucial that future studies of emotion recognition in clinical populations match control groups for alexithymia, or control for its influence statistically, allowing researchers to test whether condition symptomology makes an independent contribution to emotion recognition.
The individual emotion thresholds most strongly related to alexithymia in this study were happiness, disgust and anger. While the relationship with disgust and anger recognition accords well with Cook et al.'s [18] recent findings in ASD, the association with happiness recognition is observed less often, and contradicts the view that alexithymia is disproportionately related to impaired recognition of emotions with negative valence (see [16] for a review). Where recognition deficits are restricted to negative emotions, it is unclear whether this pattern reflects the ease with which happiness may be discriminated. Happiness is often the only emotion studied with a positive valence, and happy expressions have highly distinctive local features [38]. In this study, ceiling effects were avoided by increasing difficulty for each emotion independently, by altering levels of visual noise based on performance. In addition, although the current procedure ensured that pain and happiness were assessed independently, the presence of pain is likely to have made happiness harder to discriminate than in previous studies, owing to expressions of pain sharing more physical features with happiness expressions than do other negative facial emotions.

Table 3. (a) Regression models for the prediction of happiness, anger, disgust and pain recognition thresholds, including demographic variables (age, IQ and depression) in the first step, alexithymia, as measured by the TAS-20, in the second step, and ED symptomology, as measured by the EDE-Q, in the third. (For happiness, anger and disgust, alexithymia significantly predicts recognition threshold, above demographic variables, whilst ED symptomology is not predictive. For pain, once demographic variables are accounted for, the predictive ability of alexithymia falls short of significance.) (b) Regression models for the prediction of happiness, anger, disgust and pain recognition thresholds, including demographic variables in the first step, ED symptomology in the second step, and alexithymia in the third. (As was the case when alexithymia was entered into the model before ED symptomology, for happiness, anger and disgust, it was alexithymia, and not ED symptomology, which significantly predicted recognition threshold, over and above demographic variables. For pain, alexithymia was no longer a significant predictor of recognition threshold once demographic variables were accounted for.)

The influence of co-occurring alexithymia may extend beyond expression recognition, potentially explaining a wide range of emotion processing difficulties observed in EDs. Alexithymia is associated with impaired performance when judging protagonists' emotions from vignettes, both in ED participants [39] and in individuals with non-clinical disordered eating [8]. While those studies did not address the independent contributions of ED and alexithymia, their findings suggest that the impact of co-occurring alexithymia in ED samples may extend to broader socio-emotional abilities. We also note that co-occurring alexithymia is known to be responsible for the difficulties interpreting vocal [19] and musical affect [20], and the reduced empathy [21], seen in some individuals with ASD.
The current ED sample did not contain sufficient numbers of BN or BED patients to determine how the association with alexithymia varies as a function of ED subtype. Alexithymia co-occurs with all subtypes [23,24], however, and neither BN patient was an outlier. Determining whether alexithymia produces equivalent deficits of facial emotion recognition in each subtype remains a priority for future research, especially as only one study has directly compared emotion recognition ability across ED subtypes [23]. Similarly, while the sample size was modest, it was sufficient to observe reliable associations between alexithymia and emotion recognition ability. A power calculation suggested that over 4500 patients would be required for the observed effect of EDs on emotion recognition to reach significance, suggesting the effect of alexithymia on emotion recognition is an order of magnitude greater than that of EDs.
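The scale of the power calculation reported above can be illustrated with the standard normal-approximation formula for a two-sided, two-sample t-test; this sketch uses generic values, not the study's observed effect size:

```python
import math
from statistics import NormalDist

def n_per_group(d, alpha=0.05, power=0.80):
    """Approximate participants needed per group to detect effect size d
    (Cohen's d) with an independent-samples t-test, two-sided."""
    nd = NormalDist()
    z_a = nd.inv_cdf(1 - alpha / 2)   # ~1.96 for alpha = 0.05
    z_b = nd.inv_cdf(power)           # ~0.84 for 80% power
    return math.ceil(2 * ((z_a + z_b) / d) ** 2)
```

For a conventional large effect (d = 0.8) this gives about 25 per group, whereas a very small between-group effect drives the required sample into the thousands, consistent with the estimate above that the observed ED effect would require several thousand patients to reach significance.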
As expected, no association was seen between alexithymia or ED symptomology and identity recognition ability. Although a weak negative relationship was observed between alexithymia and identity recognition, this may reflect top-down effects, whereby individuals with high alexithymia, aware of their emotion recognition problems, attend to cues to facial emotion at the expense of identity recognition. Indeed, anecdotal accounts during debriefing suggest this may be the case. Short presentation durations, coupled with the demand to process identity and emotion simultaneously, may have proved challenging for individuals with high levels of alexithymia. That no significant relationship was seen between alexithymia and identity recognition thresholds confirms that alexithymia is associated with problems of emotion interpretation, and not simply with impaired interpretation of degraded visual images.
Overall, these findings suggest that levels of co-occurring alexithymia, and not ED symptomology, predict ability to recognize facial emotion, in individuals with and without EDs. These results have significant implications for the conceptualization of EDs, suggesting that disordered emotion processing may not be a core feature of these conditions, as well as for socio-emotional research practice, highlighting the need to measure and control for the influence of co-occurring alexithymia when testing clinical populations. Together with previous findings, these results suggest that alexithymia may explain individual differences in affective processing across a range of clinical conditions.

Ethics statement. Ethical approval was granted by the King's College London Research Ethics Committee, and the study was conducted in accordance with the ethical standards laid down in the 2008 (sixth) Declaration of Helsinki. All participants gave informed consent prior to participation.
Data accessibility. The dataset supporting this article is available as the electronic supplementary material.

Author contributions. V.C. and J.T. assisted with participant recruitment and R.B. completed data collection. All authors contributed to data analysis and interpretation, and writing of the report for publication. R.B. and R.C. created figures.
Funding statement. This research was supported by the Economic and Social Research Council. R.C. was supported by a Future Research Leaders award (ES/K008226/1). R.B. was supported by an ESRC Doctoral Studentship.