
Mapping misophonia: Sound aversion profiles differentiate between similar conditions

Recent investigations at the University of Sussex open up new ways to understand, and potentially diagnose, misophonia and other conditions involving atypical responses to sound.

27 July 2023

By Emma Barratt

Chewing, slurping, sniffling… if just the mention of those sounds made you cringe, you may already be familiar with misophonia.

Misophonia is a recently recognised condition centred around a 'hatred of sound'. Those who suffer from it typically find themselves reacting with anger, revulsion, or even panic to sounds that would leave someone without misophonia unbothered. Far from just a passing squick, these reactions can be so extreme and intrusive that they necessitate seeking treatment.

While some sounds — like chewing, slurping, or sniffling — are generally associated with the condition, previous research suggests that those with misophonia (misophonics) actually tend to rate a wide variety of sounds as more abrasive than controls do.

This is the backdrop for a recent pre-registered investigation by Nora Andermane and colleagues at the University of Sussex, who set out to document the responses of misophonics to a wide set of sounds. In doing this, the team hoped to assess whether misophonics have a specific 'profile' of negative responses to sound that could differentiate them from those with other sound-based sensitivities, such as hyperacusis, or those seen in autism. To achieve this, they devised an innovative method they call "phenomenological cartography" – mapping out what makes a person misophonic using their responses to sounds.

Through machine learning, the team were able to analyse 418 misophonic and control participants' ratings of various sounds across 17 reaction dimensions (such as rage, body tension, pain, or anxiety). There were 32 sounds in total, including those commonly associated with misophonia – such as lip smacking – as well as sounds not usually labelled as triggers, like pen clicks, frog croaking, coughing, and more. Several sounds were also presented in a scrambled manner, masking semantic information that would suggest what exactly they were, while retaining the auditory characteristics of the noises themselves.
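To illustrate the general idea behind classifying people from their sound-rating profiles (not the team's actual pipeline, whose models and parameters are not described here), the sketch below simulates rating profiles for two hypothetical groups and applies a simple nearest-centroid rule: a held-out profile is labelled by whichever group average it sits closer to. All group sizes, rating values, and the classifier itself are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)
n, sounds, dims = 100, 32, 17  # participants per group, sounds, reaction dimensions

# Hypothetical data: each row is one person's ratings of 32 sounds on 17
# dimensions, flattened; the 'misophonic' group rates most sounds higher
controls = rng.normal(2.0, 1.0, size=(n, sounds * dims))
misophonics = rng.normal(3.0, 1.0, size=(n, sounds * dims))

# Split each group: first half fits the group centroids, second half is held out
train_c, test_c = controls[:50], controls[50:]
train_m, test_m = misophonics[:50], misophonics[50:]
centroid_c, centroid_m = train_c.mean(axis=0), train_m.mean(axis=0)

def classify(profiles):
    # Nearest-centroid rule: label each profile by the closer group mean
    d_c = np.linalg.norm(profiles - centroid_c, axis=1)
    d_m = np.linalg.norm(profiles - centroid_m, axis=1)
    return (d_m < d_c).astype(int)  # 1 = 'misophonic'

accuracy = np.concatenate([classify(test_c) == 0, classify(test_m) == 1]).mean()
print(f"held-out accuracy: {accuracy:.2f}")
```

The point of the sketch is only that a consistent shift across many sounds and reaction dimensions is easy for a classifier to pick up, which is why the team could discriminate groups even from sounds not usually labelled as triggers.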

Of those 32 sounds, 30 were useful in discriminating misophonics from controls. Reactions to commonly known trigger sounds, such as crunching chips or an apple, were most useful at spotting this distinction. However, not everything was as expected: reactions did not cluster within similar categories of sound (human oral/nasal, human actions, non-human, scrambled).

What the team actually observed was that misophonics had an overall atypical response to most sounds, with those with more severe misophonia being more widely reactive to sound, regardless of what it was. Reactions such as body tension, annoyance, and anxiety were most highly associated with the condition. Rather than being an issue of noises feeling amplified, or an issue with a particular frequency of sound, the authors suggest that the degree of sound aversion in misophonia might be determined by a complex interplay of several factors, including semantic content, social context, and the physical properties of the sound itself.

Through cross-classification of the strength of those 17 reactions for each sound, the team was able to establish whether those with misophonia, hyperacusis, or a high number of autistic traits shared a common profile of sound aversions. Analyses did seem to confirm that, within each of these conditions, a reaction to one sound predicted reactions to another, unrelated sound – something that would not occur without a common profile of sound aversions. These aversion profiles all seemed to differ between conditions, which could not only assist in classifying participants in future research, but may also provide a new avenue for clinical diagnoses.

Misophonia is also thought to be co-morbid with another type of sound sensitivity, autonomous sensory meridian response (ASMR). However, unlike misophonia, ASMR involves sounds (such as whispering, tapping, and chewing) being experienced as particularly pleasant. In order to investigate the similarities and differences between the profiles of these two conditions, the team recruited a further 254 participants, and assessed them for both misophonia and ASMR.

After excluding those who also had misophonia, the team split the remaining participants into high- and low-ASMR groups and played them the original 32 sounds, plus 8 ASMR-trigger sounds. Analysis of their reactions showed a wide difference in likelihood to experience sounds as soothing, pleasurable, or having induced tingling sensations. Scrambled sounds in particular seemed effective at inducing ASMR. Though the response profiles of the two conditions differed widely, ASMR responses were also not limited to traditional triggers. The team state that their findings do not support a link between misophonia and ASMR, as has been hypothesised in previous research.

Surprisingly, misophonic participants reported fewer 'visual experiences' in response to sounds. Though this study doesn't probe this aspect in depth, and the participants' understanding of this phrase isn't clear, this could suggest that part of what drives sounds to be so aversive to misophonics may be that they divert their attention inwards, attending to their own bodily reactions, rather than visualising the source of the sound. If so, new interventions for misophonia involving visualisation of sound sources might be developed.

Though machine learning is useful in many ways, it is something of a 'black box'; the results are tricky to interpret, and there's a risk that performance of the machine learning model may be overstated. The authors acknowledge that although they attempted to mitigate these issues by running more traditional analyses to check results, this may still limit the validity of their findings. They also highlight the need for more focused research on the use of this technique in autism and hyperacusis, as well as across other demographics and cultures.

Read the paper in full: https://doi.org/10.1016/j.isci.2023.106299