
‘It's down to you to be more reflective around screen use’

Our editor Jon Sutton meets Pete Etchells, author of 'Unlocked: The Real Science of Screen Time (and How to Spend it Better)'.

21 March 2024

I am interested in screen time as mirror, window, lens – it suits all those metaphors – on psychology and on science communication. As a Professor of Psychology and Science Communication, it just seems like the perfect topic for you.

I think you're right, and that's how I got where I am today. I did a PhD in vision science, and then a postdoc at Bristol which was multidisciplinary. I was there for a specific purpose, and I'd never really thought of my career beyond the PhD, so that time was a bit nerve-wracking.

I remember going to the pub on a Friday. I'd read this article on the Daily Mail website – 'Computer games leave children with dementia, warns top neurologist'. I'd probably had a couple of drinks, so I was a bit more ranty than usual. Marcus Munafo was there actually, and he said 'Well, why don't you put your money where your mouth is? If you think this is all nonsense, do some research on it'. That was the point my identity began to form, around not only good research but also how we communicate it.

I started blogging, and one thing led to another and I started writing for The Guardian. Most of what I've written about over the past decade is tech – video games, social media, or screen time more generally. If you look at science blogging 15 years ago, it was often reactive – course corrections of science in the news. People were pontificating about quite serious negative effects of this thing they were calling 'screen time'. Nobody seemed to be saying that 'screen time' might sound catchy, but it doesn't mean anything, it's a completely nonsensical thing to talk about. And here we are years later, and we're talking about it even more. Everybody's getting freaked out about it, we're talking about legislation around all these things. I just find it really weird. I consider it a failure of science communication, and that's something I'm a part of myself.

And also a failure of Psychology? Because it's not necessarily that you're thinking, 'Oh, they're talking about all this and they haven't properly defined it'. We haven't properly defined it. You write in the book about 'jingle jangle problems' – concepts we're still struggling to pin down.

To be fair, it's a really hard topic. Screen time and digital tech occupy this weird space in people's minds – seen as completely unserious science, but at the same time, everybody's freaking out about it. Amy Orben has written about how the cycle of research is invariably reactive. Some new technology comes along, everybody starts worrying about it, politicians get involved because it's the topic du jour to save the kids, and then well-meaning psychologists come along and go, 'We need to research it, and we need to do this urgently'. There's a panic, and people end up doing poor research.

Take self-report surveys. Nothing wrong with them as a starting point, but then forevermore it's self-report surveys. I do research on loot boxes – mechanisms in video games where you spend a bit of money for a random chance at getting something rare in the game. Does that drive gambling problems? It's an important question, because it's a huge industry. It's super open science, transparent, people share their data, that's great. But we've got 100 studies in that area now, of which mine is one, where everybody asks their participants, 'How much money do you spend on loot boxes?', gives them the Problem Gambling Severity Index, and gets a correlation of 0.2. Everybody's doing the same study. Slightly different populations, maybe tweaks here and there. But it's not a big body of evidence that shows a causal thing exists. It's just one very big study that shows you a correlation, nothing more.

And you see that in other areas?

Absolutely. Take addiction research. Ivan Goldberg literally invented internet addiction as a joke in 1995, to satirise how easy it is to problematise everyday behaviours. It backfired. Everybody started going, 'that sounds like me!', and self-organised internet addiction groups cropped up on the internet. A few years later, researchers started saying, 'everybody's talking about this internet addiction thing, we should look at it'. The first major study was by Kimberly Young, back in 1998, and she created this internet addiction scale, based on gambling addiction. If you went on Google and searched for internet addiction, a link to her study would crop up. So you find this study, self-report your internet addiction tendencies, and you've got the age of internet addiction.

Then everybody does the same study, with slight tweaks, and we all go about pretending that this is a big body of convergent evidence. At no point has anybody gone 'Hang on? What's the theory here?' The American Psychiatric Association had internet gaming disorder as a thing that they were worried about back in 2013, but they were calling for better research on it. All that happened was everybody went, 'This validates everything that I've done for the past 20 years'. Another 150 studies get published, and then the World Health Organisation says 'This is clearly a thing, we'll formally name it'.

In terms of that reliance on self-report, there are particular reasons it's unreliable with screen time?

Yes, there's work from Heather Shaw at Lancaster which compares self-report with objective measures such as Apple Screen Time, and the negative association with measures of well-being was stronger for self-report. The moment you start looking at more objective measures, effects are dampened, sometimes they disappear, or they might even go in the opposite direction. Part of the book is trying to figure out why that is the case.

One of my lines of thinking is 'the influence of presumed media influence'. Ask people about the effects of media, and they tend to say it's worse for other people than it is for them. But there's an extension of that – if you're told persistently that things are bad for you, you'll start to think that things are bad for you. That would explain why you see stronger effects in self-report data than you do in objective data.

Attention is a great example of this. There's this longstanding worry that our attention is being eroded. It feels like that. It's really easy to get distracted. But then people go down this weird rabbit hole and say 'everything's been designed to be that way', like it's some terrible nefarious scheme. Taking a step back, wouldn't it be weird if anybody in the history of making products did not design that product with the aim that people would want to use it? But there's always a balance to be had. Facebook do not want you on Facebook 24 hours a day, because if you were they would never make any money. Deliberately addicting people to things is a terrible business plan: people burn out, people will spot it, it'll get regulated out of existence. Nobody does that intentionally.

So people confuse good product design with evil intent, sometimes. In his book Stolen Focus, Johann Hari wrote that if Facebook wanted you to meet up with friends in real life, they would have some sort of mechanism that would allow you to see other people that you know, using Facebook, that are nearby. Facebook literally had that – it was called Nearby Friends. So we have this weird assumption that technology is this big evil in the world, we should fight it at all costs, but ultimately we're powerless to defeat it. It's just the wrong way of thinking about it.

I guess psychologists aren't immune to that – picking up on panic or fear around a thing, and looking to sell books off the back of it. Andy Przybylski has talked about those 'moral entrepreneurs'.

And I'm a massive hypocrite, right? Because I've got a book to sell. And I'm sure that I've done this – we start stepping out of our areas of expertise, and we're going to use terms wrong, or suffer a failure of critical thinking.  

Take a headline from The New York Post a few years ago: 'It's digital heroin: How screens turn kids into psychotic junkies'. But in that article, it's saying 'you want to have conversations with your kid about limiting their time'. Imagine you find out your kid is using actual heroin: the conversation is not going to be, 'maybe you could use it for no more than two hours a day instead of 10'. You would be looking at detox regimes, and helping them through the physical aspects of something that could impact them for the rest of their life. We need to stop talking about addiction because it's a stupid frame. It's not actually what's going on. It ignores any positives that we get from screen use, and it leaves you with a very limited set of tools, which we know actually don't work.

In the book you point out that Amy Orben has used that digital 'diet' metaphor to show the power of psychology, I suppose, in considering all the individual, contextual and societal factors that you would need to consider around your consumption, whether that's food or digital time.

Yes, and I'd like to see that developed into a more theoretical framework. One I talk about, which changed my thinking, is the technology habits approach. The things you do on your phone are not good or bad, in and of themselves. They can become good or bad, depending on context. There's some great work by Adrian Meier and Leonard Reinecke in this area. Take phone checking: it's not just frequency, it's that if you are doing it in an automatic, mindless way, you may put yourself in a position where that's going to create a bad habit – checking your phone when somebody's trying to talk to you, or in the car, for example. Fundamentally, it's down to you to be more reflective around screen use. If you can do that, and not beat yourself up when things occasionally go wrong, that's good.

But that's not what we tend to see. Any story about digital detox will begin with a 40-plus-year-old bloke who's got kids who are trying to play with him. He completely ignores them, his kid asks 'Daddy, why do you love phones more than me?', and then there's this epiphany where the Dad goes 'and it was in that moment that I realised that I needed to change'. Now, I get that. It's not a great thing for your kid. But it's really hard being a parent, there's so much shame forced on us. Since writing the book I've thought about those times my daughter Matilda might have wanted to play with me and I haven't given her full attention, and I've tried not to think 'oh my god screens are addictive and it's completely beyond my control, damn you screens!' I've thought 'why was I on my phone at that moment? Can I turn it into a shared experience?' Matilda likes playing with my phone, and she likes taking photos, so I try to use these things as opportunities.

The worst thing you can do is go 'This is a bad thing, I must put it away'. Your phone becomes this magical thing that only adults can have. Matilda is growing up in a world where screens are everywhere: what I want to do with her is figure out how we can best provide a support network for her, educate her in good use versus bad use, and just have lines of communication open. This one thing keeps running through loads of studies – you don't want to get into a situation where kids hide their tech use, because then you don't know what's happening, and if they do come across anything at all, they end up having to deal with it themselves.

It's interesting how so much of Psychology comes back to being mindful and having open conversations. And seeing the potential good as well as bad. You use that Matthew Sweet quote, 'Technologies of pleasure always acquire enemies'.

Yes, and we see this around moral panics throughout history. There's often quite a strong misogynistic, almost paternalistic element. With the repeal of the paper tax, there was this puritanical elite hand wringing over the moral degradation that would come with women and children being able to buy more books. That's been true of every moral panic.

There's also always this thing that 'yes, there have always been moral panics around tech, but this is the one that we should worry about'. 10-15 years ago it was video games. We're in one now with social media and screens, and it's hard to say this is not going to be the one… because there could be one, you do need to be careful about being dismissive for the sake of being dismissive.

But I do think that we have researchers such as Jonathan Haidt and Jean Twenge who have gone to great lengths to systematically say, 'it's definitely the screens'. I think that's a wrongheaded approach. We've got a real problem at the minute with kids and free, unstructured play, and the loss of third spaces. We don't let our kids out of our sight. There's less and less investment in public spaces.

So we've got this weird situation where we're very restrictive when it comes to physical spaces, but, relatively speaking, much more lackadaisical about online spaces. We're not really paying attention to what they're watching on YouTube. Perhaps we view some of these forms of technology as fundamentally childish or childlike. If you didn't grow up with video games, they're another world that you're maybe not that invested in, a blind spot. And this ends up with loads of kids playing something like Grand Theft Auto when they really, obviously, shouldn't be.

You need to be interested and involved in your kids' online lives. But also, you need to give them a bit of freedom to explore, set rules of engagement. It is such a hard conversation to navigate, and I'm sure my own plan will go horribly wrong, because I've not got a teenager yet! But it's to not have these elusive, mystical things that they're not allowed until a certain day, it's to encourage structured play around it, and co-play, so that we can do things together. I can understand what stuff they like and what they don't like. It's the same with anything – curating experiences, having open conversations around what is age-appropriate, educating yourself a little bit about what media is out there, and offering alternatives.

Let's come back to the psychology research. What's the shift we need?

I remember when I interviewed Andy Przybylski for my last book, Lost In A Good Game. He said imagine if you play football, and somebody kicks the ball down the field, and then everybody moves towards the ball. By the time everybody's got to where the ball is, somebody's kicked it somewhere else. So everybody moves to where the ball is. That's not how you play football, right? You need to work out where the ball is going to be, and strategically place people to maximise your chances of getting a goal, right? Same goes for research on screen time, and for science communication too actually. Everybody's going to where the problem is now. By the time we get there – if we actually get there – the ball has gone somewhere else.

- Unlocked: The Real Science of Screen Time, and How to Spend it Better is out now.

We also talked about science communication, in a 'Part Two' of the interview…

'We've got the players, but we've not got the structure'

Science communication, and your journey in it, has been kind of mediated by screens themselves… the actual platforms for science communication, and what that says about screen use as well.

I've just finished a new module on psych science communication at Bath Spa. That's been a nice thing to do, because it's forced me to think about where we are with it all and where it's going, and where the potential problems are with science communication, and particularly psychological science communication.

Obviously, online psychology is a very different beast now, to even 10 years ago. When I got into it, it was really at the peak of blogging and blog networks. That was really liberating, this idea that anybody could come along and put some stuff on the internet. It was almost a self-regulating community – the people who were writing good, robust stuff got a reputation for that. Then we saw the rise of blog networks – newspapers and more formal outlets trying to capitalise on that and lend, I guess, almost immediate credibility. Which, of course, ultimately failed, because all the blog networks died.

Was that down to taking that underground, cottage industry type thing and killing it by the big players buying things up? 

It turned into something that it wasn't originally. With The Guardian science blog network, which I co-ordinated, it was such a great idea, such an innovative thing, but absolutely terrifying at the same time.  I think we had more access to The Guardian's website than anybody else. With your average journalist, what they wrote would have to go through an editor… I could have put tags on anything to put it on the front page. Having that freedom kept people on the straight and narrow… everybody was there with the right intentions, everybody wanted to do good stuff.

But what happened ultimately was that readers didn't know whether what they were seeing was written by a news journalist or was basically a glorified opinion piece by a non-journalist. Everything started looking the same, really. There was quite a clear identity to the blogs when they first started, almost set separate from the rest of the site. Over time they became more and more amalgamated. Increasingly bloggers were held, rightly, to the same journalistic standards that everybody else was. When they wrote things that maybe other journalists would have had edited or checked or subbed, and it went out, some people did quite rightly complain and threaten to sue. It happened to me once, and it scared the shit out of me. I was thinking, 'Oh, my God, I'm gonna get sued, I don't have the fortitude to go through something like that'. Thankfully The Guardian legal team had my back, but to me, that was a near miss. I'm now always quite careful in those sorts of situations. Words are important. Think carefully about them.

So do you think that unfiltered, unmediated communication to audiences is a thing of the past now?

Well, no. This is the problem. Those more formalised blog networks fell away – perhaps from the institution perspective, it wasn't clear where the added value was anymore, which is a shame because I think there's still definitely a place in major mainstream news sites for good, reliable, known scientific voices to talk about this sort of stuff in a way where they have that freedom. It was such a powerful thing. Six years ago it stopped, and I'm still quite sad about it, I still miss it. Certainly through the pandemic, I think that would have been a tremendous force for good. 

But the way we interact with each other and get our information online changed. We went to more immediate, almost instantaneous social media stuff. People can come along who are seemingly knowledgeable about stuff, and we buy into what they say, but the markers of what constitutes expertise have gone wrong. People like Michael Gove saying we've had enough of experts, that caused a lot more damage than I think it was originally intended to. You work on an area for a significant chunk of your life, you know it inside and out, and yet it is not valued as much as your ability to say something in an engaging, impassioned and enthralling way. 

So the problem for me has been the rise of the influencer. It's fairly random, whether you become somebody with a massive following on Instagram or wherever. There's no regulation, no training, if it happens to you, it happens. And it's inevitable then that at some point, you're going to start talking about things that are science-related, even if you don't intend to… usually it's health stuff. And usually, it's uninformed. But it's taken as worth listening to, because lots of people follow you. And I think that's a huge problem – for misinformation, and for science communication, generally.

We should have seen that coming, and we didn't. We talk about science communication being really important and valued within academic institutions. But I'm not convinced it is. I think a lot of it is just lip service. Because if it were valued, we would have a much more standardised approach to it. Psychologists doing science communication are doing it because they care about engagement, and because they enjoy it. Not because it's a fundamental part of their job. You might get the odd half-day workshop here and there, and you can go and do a master's in science communication. There's a brilliant one at the University of the West of England, but that is very much the exception to the rule. Generally, there's no training, and no support network.

Yes, I was talking with Gilly Forrester about that the other day, and she was saying how science communication funding tends to be tied to a particular piece of research, rather than supported within institutions more broadly.

Yes. And we have no clear measures of what success is. If it's getting your message out to a lot of people, then there are lots of successful science communicators out there who don't know anything about science.

The most successful science communication of the past few years is Chris Whitty and Patrick Vallance in the pandemic. Regardless of your position on the politics or the decisions that were made, I can't think of any other scientists, ever, who have been able to go on TV and say, 'we all need to do this thing right now, because it will save people's lives', and the vast majority of the country has done it. That's incredibly powerful science communication. And yet they also had to deal with backlash, death threats, all those sorts of things. It's the same with Colin Blakemore: 30-40 years ago, he was advocating for sensible, objective, empathetic discussions around animal testing, and Sarah-Jayne Blakemore talked in her book about the absolutely horrific abuse that he got from animal rights activists, and the impact on her.

So these are scientists who have taken it upon themselves to become communicators. They're self-taught, they do really impactful, powerful work, and they come to harm as a result. At no point is there any formal structure to support them. And that's been true forever, right? Despite us all having conversations about what science communication is, and what it means, that doesn't change. 

What's the solution?

Well, I think one of the great weaknesses of science communication over the past 10-15 years, is that people have largely been doing it for themselves. Bumbling along, trying to do the best they can. Roger Pielke Jr. wrote a good blog post about this a few years ago, after he got backlash around comments on climate science that he'd made to Congress. He said we really need to start thinking in science communication about making music instead of noise. If you get a bunch of people who are all good at playing their instruments into a room and just say 'Right, go', it's awful, right? Just noise. Whereas if you organise them, you create a beautiful symphony. We've got the players, but we've not got the structure.

I don't know how we do that. Obviously, everything always needs investment. But it's becoming increasingly time-critical that we do something along those lines. We can't keep relying on people becoming excellent science communicators, almost by happenstance. We need to find a more sustainable way of doing it properly.