
Words which can catch a wolf

Fears of child sexual abuse are on the rise in the digital era, with the internet providing a perfect playground for predators. But could technology also be the solution? Talia Gilbey writes.

04 December 2020

In April this year, the National Crime Agency revealed that 300,000 people in the UK pose a sexual threat to children online. Even more alarmingly, the Internet Watch Foundation (IWF) reported a 50 per cent increase in reports of images and videos containing child sexual abuse material circulating online during the first Covid-19 lockdown. About a third of this abusive material actioned by the IWF is 'self-generated' – created and posted by the child themselves after being groomed by online predators.

The task of detecting cyber predators is extremely challenging. They hide behind increasingly sophisticated technology as they target one of the more vulnerable groups in our society. Their concealed identity allows them to communicate with multiple children at once, across several online platforms, adopting numerous personas, each tailored to maximise their appeal to individual targets (de Santisteban et al., 2018; Grant & Macleod, 2020). 

Building a relationship with a child over the internet with the intention of making them engage in some kind of sexual activity is a criminal act regardless of whether the predator ultimately meets with the child face-to-face. As explained by the National Society for the Prevention of Cruelty to Children, grooming can have lasting effects on child victims, including anxiety, depression or suicidal thoughts, irrespective of whether physical contact was involved. Predators who do use the internet to try to gain physical access to children are sometimes referred to as 'contact-driven' offenders (Briggs et al., 2011). On the other hand, 'fantasy-driven' offenders have no intention of meeting the child offline and instead focus on engaging the child in inappropriate sexual activity online, ranging from sexual conversation to convincing the child to view or produce pornographic images (Briggs et al., 2011). While the usefulness of the distinction between contact-driven and fantasy-driven offenders remains debated – given, for example, that it overlooks mixed offenders who engage in both types of abuse (Broome et al., 2018) – what remains clear is that all of these offenders are using technology to facilitate the abuse. 

But can technology also be the solution?

Seeing through the sheep's clothing

With the media perpetuating a highly stereotypical image of a child sex offender, the challenge of spotting one might not seem so great. A creepy figure in a trench coat, lurking around parks and playgrounds, convincing children to play, to trust and befriend them. Parents warn their children of 'stranger-danger', explaining that behind the friendly façade lies dangerous, cunning, malicious intent – a wolf in sheep's clothing. The question is, how does this wolf present itself online?

Through analysing chat logs and transcripts of grooming conversations, teams of psychologists, criminologists and linguists are beginning to understand the complex manipulative strategies used by sexual predators to 'successfully' groom children. More importantly, they are beginning to identify how these grooming goals are being realised in and through language and other semiotic modes. 

This knowledge can inform a detection database, where a computer algorithm can recognise online grooming by spotting distinctive language patterns of a grooming conversation. In this way, technology can be used to bridge the gap between a psychological and linguistic understanding of the grooming process and an effective way of detecting these predators in action. If perpetrators can be spotted quickly and early on in the grooming process, it is hoped that the damage to the child will be minimised. 

Trust

One core manipulative strategy consistently identified in grooming conversations is the creation and maintenance of a strong sense of trust between the child and predator (Lorenzo-Dus et al., 2016). For children to be lured into engaging in sexual activities, a deceptive relationship where the child feels an emotional bond to the perpetrator must first exist. This process is often labelled 'deceptive trust development' as groomers hide their ulterior motive behind a seemingly trusting bond (Olson et al., 2007). Examining the linguistic patterns involved in developing a sense of trust through praising the child reveals the importance of compliments (Lorenzo-Dus & Izura, 2017). Compliments on the child's physical appearance tend to focus on their sexual attributes. However, these types of compliments are 'strategically balanced' with non-sexually orientated compliments, which instead often concentrate on the child's personality (Lorenzo-Dus & Izura, 2017, p.80). Therefore, a computer algorithm sensitive enough to detect deceptive trust development will spot not only sexual but also non-sexually orientated compliments, as both work in tandem to achieve the same manipulative grooming strategy.
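
Purely to illustrate what such a signal might look like, the Python sketch below counts appearance-focused and personality-focused compliment cues and treats a conversation containing both types as stronger evidence of deceptive trust development than one containing only one type. The word lists, weighting and function names are invented for illustration; a real detector would need far richer lexicons and contextual modelling, not bare keyword matching.

```python
import re

# Hypothetical, deliberately tiny cue lists -- illustrative only.
APPEARANCE_CUES = re.compile(r"\b(cute|pretty|hot|sexy|gorgeous)\b", re.IGNORECASE)
PERSONALITY_CUES = re.compile(r"\b(smart|funny|mature|special|sweet|kind)\b", re.IGNORECASE)

def deceptive_trust_score(messages):
    """Return a rough score for 'strategically balanced' complimenting:
    conversations mixing appearance- and personality-focused compliments
    score higher than one-sided ones."""
    appearance = sum(len(APPEARANCE_CUES.findall(m)) for m in messages)
    personality = sum(len(PERSONALITY_CUES.findall(m)) for m in messages)
    total = appearance + personality
    # Assumed heuristic: weight balanced complimenting more heavily.
    return 2 * total if (appearance and personality) else total
```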

Self-disclosure

Another potentially distinctive linguistic approach used by predators to gain children's trust is through self-disclosure, particularly of negative emotions (Chiu et al., 2018). When predators share such personal information, they appear to show the child a more vulnerable side to themselves. They also demonstrate to the child that they trust them enough to disclose their feelings and experiences in the first place, thus encouraging the child to reciprocate. What is particularly telling is that contact-driven offenders are more likely to use this strategy compared to fantasy-driven offenders as predators wishing to meet the child offline must develop a particularly strong sense of trust with the child (Chiu et al., 2018). Therefore, effective preventative technology will also be able to detect deceptive trust development by identifying self-disclosures through positive and negative emotion words and the use of first-person pronouns. 
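
As a rough sketch of how such cues might be operationalised, the snippet below computes the rate of first-person pronouns and of positive and negative emotion words in a message, in the spirit of dictionary-based text analysis. The word lists here are illustrative placeholders, not a validated lexicon.

```python
FIRST_PERSON = {"i", "me", "my", "mine", "myself"}
NEG_EMOTION = {"sad", "lonely", "hurt", "scared", "upset", "depressed"}
POS_EMOTION = {"happy", "glad", "excited", "love", "proud"}

def self_disclosure_features(message):
    """Per-message rates of first-person pronouns and emotion words,
    which together can signal self-disclosure."""
    tokens = [t.strip(".,!?'\"").lower() for t in message.split()]
    n = max(len(tokens), 1)
    return {
        "first_person_rate": sum(t in FIRST_PERSON for t in tokens) / n,
        "neg_emotion_rate": sum(t in NEG_EMOTION for t in tokens) / n,
        "pos_emotion_rate": sum(t in POS_EMOTION for t in tokens) / n,
    }

# Example: self_disclosure_features("i feel so lonely and my week was awful")
# yields elevated first-person and negative-emotion rates, a possible disclosure cue.
```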

Distance and isolate

While the predator employs a trust development strategy to build an emotional bond between themselves and the child, they simultaneously work to weaken the child's bonds with other people, distancing and isolating them (Lorenzo-Dus et al., 2016). This strategy, referred to as 'mental isolation', creates gaps in the child's support network, leaving space for the predator to become the person on whom the child relies and depends. Given that a child's support network often includes their parents, the language used to facilitate this isolation strategy tends to involve family terms, particularly the words 'mum' and 'daddy' (Lorenzo-Dus & Kinzel, 2019). For example, the predator might ask the child questions like, 'do you forgive your mum for what she has done to you' as a way of emotionally separating them from their mother (Lorenzo-Dus & Kinzel, 2019). 

Unfortunately, mental isolation comprises only part of the groomer's isolation strategy. In order to minimise the risk of being caught, predators also try to gauge and deepen the child's physical isolation. This self-preservation strategy can be recognised through questions about parents' work schedules, attempts to seek assurance that no adults are supervising the child online, and even requests that the child delete previous chats (Barber & Bettez, 2014). Predators can also protect themselves from being caught by telling the child that, because of the age difference between them, he (the predator) could get into trouble if anybody found out (Chiang & Grant, 2017). This encourages the child to keep the relationship a secret (Kloess et al., 2019).

Desensitisation

Groomers increase the child's isolation in linguistically subtle ways, making it difficult to distinguish a grooming conversation from an ordinary conversation between friends. It may therefore be more effective for a computer algorithm to focus on detecting when groomers prepare the child for various sexual activities, either online or offline; this strategy is labelled 'sexual gratification' (Lorenzo-Dus et al., 2016). One of the ways this is achieved linguistically is through explicit desensitisation. From graphically describing various sexual activities to using sexual slang, the predator desensitises the child, leading them to believe that this behaviour is normal. The process is in itself sexually gratifying for the predator (Barber & Bettez, 2014) and may be the most obvious indicator of a case of grooming. 

Another approach to obtaining sexual gratification is through implicit desensitisation, which may include speaking about sexual activities in a more metaphorical sense, perhaps making it harder to detect. Groomers also use reframing techniques such as positive politeness strategies where the aim is to maintain the child's positive 'face' or self-image (Lorenzo-Dus et al., 2016; Brown & Levinson, 1987). For example, by framing the sexual activity as ultimately benefitting the child, it appears as though the perpetrator wants what the child wants, making the child feel accepted, appreciated and approved of by the perpetrator; this works to persuade the child to engage in the sexual activities (Lorenzo-Dus et al., 2016; Brown & Levinson, 1987). The complexity of developing algorithms to 'pick out' such subtly dangerous language presents a significant challenge to the potential for using technology to identify online groomers.

Compliance testing

Interestingly, increased use of the sexual gratification and isolation strategies correlates with increased compliance from the child (Lorenzo-Dus et al., 2016). How willing the child is to engage in sexual activities with the predator is constantly assessed throughout the conversation, a strategy referred to as 'compliance testing'. Barber and Bettez (2014) found that if the child was not compliant, the predator tended not to use blackmail or force the child to engage in these activities, but instead would simply stop conversing with the child. 

One of the ways predators test the child's compliance is through reverse psychology, for example by asking the child whether they were 'gonna chiken out' (Lorenzo-Dus et al., 2016, p.49). Additionally, the groomer may adopt a role-reversal technique in which they mirror the child's expected cautious behaviour, such as suggesting they meet in a public area. What is interesting is that any plans or decisions made with the child are framed to make the child believe that they are in control. This technique, labelled 'strategic withdrawal', can be identified when predators claim, for example, that they only want what the child wants. 

Caution, with a step forward

From strategic withdrawal to complimenting behaviour, it seems as though researchers have a solid grasp not only of the manipulative strategies used by predators to lure victims, but also of how these are realised linguistically. However, an element of caution is required when interpreting data from the large majority of these studies. Given the difficulty of obtaining genuine transcripts of child-adult grooming conversations, many studies have had to settle for analysing conversations between convicted predators and adult decoys pretending to be children (i.e. adult-adult conversations). It remains unclear how accurately adults portray children during these conversations and, therefore, how different the results would have been had these studies analysed interactions with real children (Lorenzo-Dus et al., 2016; Lorenzo-Dus, Kinzel & Di Cristofaro, 2020). That said, the fact that these transcripts involve predators who have been convicted suggests that they genuinely believed they were interacting with a child, indicating that the adult decoys mimicked a child's responses convincingly. The conclusions drawn from such research can therefore still be considered to have significantly helped in uncovering how a wolf presents itself online.

However, the question still remains how easy it is, in reality, to see through the sheep's clothing. 

Driven to find an answer, I spent a summer as an intern at Keepers Child Safety (KCS), a company committed to protecting children online through software that automatically detects potentially dangerous communications. Their artificial intelligence-based app is programmed to identify parts of a conversation which may indicate grooming behaviour. 

The most apparent challenge facing the company, and likely facing any grooming detection software, is its ability to differentiate a harmless from a dangerous conversation. Many of the words and phrases detailed above are used as part of our everyday communications, not just in a grooming context. Hence it is crucial that the algorithm can accurately detect distinctive language patterns indicative of manipulative grooming strategies. The effectiveness of the algorithm may also be measured by the speed at which grooming conversations can be detected. It is hoped that quicker detection will minimise the harm ultimately caused to the child. 
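
One way of framing both requirements (telling harmless chat apart from grooming, and flagging it quickly) is to accumulate evidence across several of the strategies described above and raise an alert only once the combined score crosses a threshold, message by message rather than after the fact. The sketch below is a minimal illustration of that idea; the scorers, weights and threshold are assumptions for demonstration, not a description of Keepers Child Safety's actual approach.

```python
from typing import Callable, Dict, Iterable, Optional

# Hypothetical per-message scorers, one per grooming strategy (trust,
# self-disclosure, isolation, desensitisation, compliance testing...).
Scorer = Callable[[str], float]

def flag_conversation(messages: Iterable[str],
                      scorers: Dict[str, Scorer],
                      threshold: float = 5.0) -> Optional[int]:
    """Scan messages in order, accumulating evidence across strategies.
    Returns the index of the first message at which the running score
    crosses the threshold, or None if it never does."""
    running = 0.0
    strategies_seen = set()
    for i, msg in enumerate(messages):
        for name, score in scorers.items():
            s = score(msg)
            running += s
            if s > 0:
                strategies_seen.add(name)
        # Assumed rule: alert only once several distinct strategies have appeared,
        # so that everyday compliments or emotion words alone do not trigger alarms.
        if running >= threshold and len(strategies_seen) >= 2:
            return i
    return None
```

Scoring incrementally, rather than judging a whole chat log retrospectively, is what would allow the early detection the article describes; requiring evidence from more than one strategy is one (assumed) way of reducing false positives on innocent conversations.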

Despite these challenges, safeguarding children from sexual abuse is too important a battle to give up on, and one that cannot be fought by technology companies alone. If law enforcement agencies released more 'naturally occurring' (i.e. child-adult) grooming transcripts for scientific scrutiny, these could act as fuel for detection algorithms. Constantly topping up detection databases improves the algorithms' capability, accuracy and effectiveness; it works to save children from abuse. 

Nonetheless, the existence of algorithms that can detect some cases of online grooming is a huge step towards creating a safer online environment for children. We may never achieve the ultimate goal of foolproof technology that detects every grooming conversation; yet meaningful progress that protects children, and offers a solid base from which to keep improving the effectiveness of a technological solution, is worth pursuing. The protection that current technology can offer even one child from sexual abuse makes research combining psychology, linguistics and algorithms to detect online grooming indispensable. 

- Talia Gilbey, psychology undergraduate at Durham University.

References  

Barber, C., & Bettez, S. (2014). Deconstructing the online grooming of youth: Toward improved information systems for detection of online sexual predators.

Briggs, P., Simon, W. T., & Simonsen, S. (2011). An exploratory study of Internet-initiated sexual offenses and the chat room sex offender: Has the Internet enabled a new typology of sex offender? Sexual Abuse, 23(1), 72-91.

Broome, L. J., Izura, C., & Lorenzo-Dus, N. (2018). A systematic review of fantasy driven vs. contact driven internet-initiated sexual offences: Discrete or overlapping typologies? Child Abuse & Neglect, 79, 434-444.

Brown, P., & Levinson, S. C. (1987). Politeness: Some universals in language usage (Vol. 4). Cambridge University Press.

Chiang, E., & Grant, T. (2017). Online grooming: Moves and strategies. Language and Law = Linguagem e Direito, 4(1), 103-141.

Children may be at greater risk of grooming during coronavirus pandemic as IWF braces for spike in public reports. IWF. (2020). Retrieved 9 October 2020, from https://www.iwf.org.uk/news/children-may-be-at-greater-risk-of-grooming-during-coronavirus-pandemic-as-iwf-braces-for.

Chiu, M. M., Seigfried-Spellar, K. C., & Ringenberg, T. R. (2018). Exploring detection of contact vs. fantasy online sexual offenders in chats with minors: Statistical discourse analysis of self-disclosure and emotion words. Child Abuse & Neglect, 81, 128-138.

De Santisteban, P., Del Hoyo, J., Alcázar-Córcoles, M. Á., & Gámez-Guadix, M. (2018). Progression, maintenance, and feedback of online child sexual grooming: A qualitative analysis of online predators. Child Abuse & Neglect, 80, 203-215.

Grant, T., & MacLeod, N. (2020). Language and online identities. Cambridge University Press.

Grooming. NSPCC. (2020). Retrieved 9 October 2020, from https://www.nspcc.org.uk/what-is-child-abuse/types-of-abuse/grooming/.

Kloess, J. A., Hamilton-Giachritsis, C. E., & Beech, A. R. (2019). Offense processes of online sexual grooming and abuse of children via internet communication platforms. Sexual Abuse, 31(1), 73-96.

Law enforcement in coronavirus online safety push as National Crime Agency reveals 300,000 in UK pose sexual threat to children. Nationalcrimeagency.gov.uk. (2020). Retrieved 9 October 2020, from https://www.nationalcrimeagency.gov.uk/news/onlinesafetyathome.

Lorenzo-Dus, N., & Izura, C. (2017). "cause ur special": Understanding trust and complimenting behaviour in online grooming discourse. Journal of Pragmatics, 112, 68-82.

Lorenzo-Dus, N., & Kinzel, A. (2019). 'So is your mom as cute as you?': Examining patterns of language use in online sexual grooming of children. Journal of Corpora and Discourse Studies, 2, 15-39.

Lorenzo-Dus, N., Izura, C., & Pérez-Tattam, R. (2016). Understanding grooming discourse in computer-mediated environments. Discourse, Context & Media, 12, 40-50.

Lorenzo-Dus, N., Kinzel, A., & Di Cristofaro, M. (2020). The communicative modus operandi of online child sexual groomers: Recurring patterns in their language use. Journal of Pragmatics, 155, 15-27.

Olson, L. N., Daggs, J. L., Ellevold, B. L., & Rogers, T. K. (2007). Entrapping the innocent: Toward a theory of child sexual predators' luring communication. Communication Theory, 17(3), 231-251.