
When therapy causes harm

Christian Jarrett on the ‘dark underbelly’ of psychology. Could the use of client feedback be the answer?

07 January 2008

When someone undergoes psychotherapy, the hope, obviously, is that they will recover. But if they don't, what is the worst that can happen? That the therapy will prove ineffective? In fact, therapy can be harmful, with research showing that approximately 10 per cent of clients actually get worse after starting therapy.

Yet belief in the innocuousness of psychotherapy remains persistent and prevalent. In 2006, for example, when Charles Boisvert at the Rhode Island Centre for Cognitive Behavioral Therapy and David Faust at Brown University Medical School surveyed 181 practising psychologists across America, they found that a significant portion (28 per cent) were unaware of negative effects in psychotherapy.

'One of the subtle things we looked at', says Boisvert, 'was whether psychologists' perceived familiarity with the research on psychotherapy was consistent with their actual familiarity. It wasn't – some clinicians thought that they knew the research but when it came down to actually testing them, they didn't score too well.'

A list of harmful therapies

Against this backdrop of apparent ignorance, Scott Lilienfeld, professor of psychology at Emory University in America, put professional sensitivities to one side last year and used current research findings to propose a preliminary list of potentially harmful therapies – 'a work in progress to be revised and refined' (Lilienfeld, 2007). Writing in the journal Perspectives on Psychological Science, he said it should be possible for the field to agree that treatments that potentially cause harm 'should be avoided, or in the case of treatments that yield both positive and negative effects, implemented only with caution'.

Among the treatments Lilienfeld listed are critical incident stress debriefing, facilitated communication, recovered-memory techniques, boot camps for conduct disorder (see Box 1), attachment therapy, dissociative identity disorder-oriented psychotherapy, grief counselling for normal bereavement and expressive-experiential psychotherapies.

How prevalent such approaches are in the UK, America or elsewhere is unknown, but Lilienfeld tells The Psychologist there is a 'dark underbelly' of clinical practice, and that 'many research-oriented clinical psychologists and counselling psychologists underestimate the magnitude of the problem or are unaware of it'.

Lilienfeld's idea would serve to complement the empirically supported therapies movement in the USA; in the UK, responsibility for such a list would presumably fall to the government's independent advisory body, the National Institute for Health and Clinical Excellence. 'Theoretically it makes sense,' says Boisvert, himself a practising clinician. 'If we're going to look at a list of treatments that are effective, then it makes sense to look at treatments that could potentially be harmful.'

Methodological issues

Banning treatment approaches that the evidence shows cause harm may sound like common sense. But the issue is a methodological minefield. Firstly, one has to decide what counts as psychological harm. Some cases will be obvious. But then there are cases where, even if a client shows improvement after starting psychotherapy, it is possible they would have recovered more quickly if they had participated in a different, more effective, treatment approach. In that sense, their current therapy has been harmful because it has slowed down their progress. Moreover, many trials that find negative effects are never published – what's known as the 'file-drawer' effect. On the other hand, psychological harm can be overestimated. A client who deteriorates after starting psychotherapy might well have deteriorated anyway. In fact, undertaking psychotherapy could have slowed down their deterioration.

Then there is the issue of how much evidence is needed before a treatment approach is classified as harmful. Peter Fonagy, professor of clinical psychology at UCL and a practising psychoanalyst, says this is a 'complex problem', and he cautions that while the idea of banning harmful treatments is welcome in principle, 'you've got to be very, very careful that you specify in what groups a treatment is causing harm, and that the trial this is based on was as rigorously carried out as it would be for evidence-based treatments'.

To take some specific examples from Lilienfeld's list, Fonagy says that concerns about critical incident stress debriefing come from just a couple of randomised controlled trials, and that most of the studies that looked at expressive-experiential psychotherapy were looking at nothing of the sort. 'The treatments were labelled as that in the trials,' he says, 'but actually they were much more like "treatment as usual", as we call it nowadays.'

Another problem with identifying harmful treatments is how well findings from randomised controlled trials (RCTs) translate to the real world. For example, much of the psychological research on treatment efficacy has looked at short-term treatments, where the trial therapists are told to stick to a specific 'manualised' way of conducting therapy. 'I think it's a little bit like how you perform when you do your driving test and somebody is watching you, compared with how you drive normally,' says Fonagy. 'For example, do you really use the ten to two position on the steering wheel? There are things that therapists just don't do… You're looking at two different animals when you look at an RCT and when you look at practice as it really is. I think there is a disconnect between the two.'

A further reminder of the need to tread carefully in this area comes from a recent paper published in Professional Psychology: Research and Practice that focused on bereavement counselling, another of the treatments on Lilienfeld's list. Counselling psychologists Dale Larson and William Hoyt found that bereavement counselling's dire reputation stems entirely from an unpublished student dissertation, in which it was claimed that 38 per cent of bereaved clients would have fared better if, instead of receiving counselling, they had been in the no-treatment control group.

Responding to these concerns, however, Lilienfeld says that, while Fonagy and others might be right, 'the burden of proof rests on those people who make these kinds of claims to do the necessary studies and show that their treatments do work'. Regarding the issue of how well research translates to the real world, he says that more and more people, at least in the US, are doing realistic studies with random assignment, no manuals, in a real clinical context. 'Oftentimes the results of efficacy trials are generalising well to real world settings,' he says. 'But ultimately it's up to people [who want to practise suspect treatment approaches] to show that their efficacy is better in real world settings.'

A third way?

So at present it looks like the idea of banning a list of potentially harmful therapies could end up trapped in the scientist-practitioner gap, so to speak. But could there be an alternative route to preventing psychological harm?

For the last 16 years Michael Lambert, professor of psychology at Brigham Young University in America, has been studying the use of client feedback to improve outcomes in psychotherapy.

The principle is simple – before each session, ask clients a few brief questions about how they are feeling and how they feel the course of therapy is going. These days, it can even be done on a palmtop at reception while they are waiting to see their therapist. By comparing a client's answers to the average progress made by similar clients at that stage – that is, clients who had similar problems, of similar severity, at treatment outset – Lambert's algorithms are able to say whether a client is 'on track' or 'off track'. This information can then be fed back to the therapist.
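To make the principle concrete, here is a minimal sketch in Python of the kind of 'on track'/'off track' check described above. The expected-progress values, the tolerance band and the assumption that higher scores mean greater distress are hypothetical placeholders; this is an illustration of the idea, not Lambert's actual algorithm or norms, which are derived from large archives of real client data.

```python
# Illustrative sketch only: a toy 'on track / off track' check of the kind
# described in the article. All numbers and names are hypothetical, not
# Lambert's actual algorithm or normative data.

from dataclasses import dataclass


@dataclass
class ExpectedCourse:
    """Average symptom score, session by session, for clients who began
    treatment at a similar level of severity (higher score = more distress)."""
    mean_by_session: list[float]
    tolerance: float  # how far above the average still counts as 'on track'


def classify_progress(course: ExpectedCourse, session: int, score: float) -> str:
    """Compare this client's current score with the average trajectory of
    comparable clients at the same session and flag them as on or off track."""
    last = len(course.mean_by_session) - 1
    expected = course.mean_by_session[min(session, last)]
    return "off track" if score > expected + course.tolerance else "on track"


# Hypothetical norms: similar clients typically drop from 80 to 50 over seven sessions
course = ExpectedCourse(mean_by_session=[80, 74, 68, 63, 58, 54, 50], tolerance=10)
print(classify_progress(course, session=3, score=78))  # -> 'off track'
```

In a real system the normative trajectories would be estimated separately for clients grouped by presenting problem and initial severity, which is what makes the comparison with 'similar clients' meaningful, and the result would be fed back to the therapist before the next session.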

Lambert explains: 'The baseline for off-track patients is that, on average, 20 per cent will be worse off when they leave treatment than when they started. If feedback is given to the therapist saying essentially "alarm, alarm, this patient is off-track", we can reduce the deterioration rate down to about 5 per cent. But getting down that low requires more than simply telling the therapist that the case is off-track.'

After a client is identified as off-track, Lambert's system then gives a follow-up questionnaire to the client with the aim of identifying where things have gone wrong – looking at issues such as the rapport they have with their therapist, their motivation and social support. This is then presented to the therapist with a list of recommendations. 'We essentially give clinicians a problem-solving decision tree and scores on each of those dimensions and suggested interventions,' says Lambert.
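The follow-up step can be pictured in the same spirit. The sketch below, again purely illustrative, maps scores on a few feedback dimensions of the kind mentioned above (therapeutic alliance, motivation, social support) to suggested actions for the therapist. The dimension names, the cut-off and the wording of the suggestions are invented for this example and are not Lambert's actual decision tree or recommended interventions.

```python
# Illustrative sketch only: turning follow-up questionnaire scores into
# suggested areas for the therapist to address. Dimension names, the cut-off
# and the suggestions are hypothetical.

LOW_SCORE = 30  # hypothetical threshold below which a dimension is flagged

SUGGESTIONS = {
    "alliance": "Discuss the therapeutic relationship and repair any rupture.",
    "motivation": "Revisit the client's goals and readiness to change.",
    "social_support": "Explore sources of support outside the sessions.",
}


def feedback_report(scores: dict[str, int]) -> list[str]:
    """Return a suggestion for every flagged (low-scoring) dimension."""
    return [suggestion for dim, suggestion in SUGGESTIONS.items()
            if scores.get(dim, LOW_SCORE) < LOW_SCORE]


# Hypothetical off-track client: weak alliance, adequate motivation and support
print(feedback_report({"alliance": 22, "motivation": 55, "social_support": 48}))
# -> ['Discuss the therapeutic relationship and repair any rupture.']
```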

Part of the reason client feedback has such a dramatic effect is that, as we saw earlier, without this information, therapists seldom expect their clients to get worse. In one striking study conducted by Michael Lambert and colleagues, 40 clinicians were asked to predict which of over 500 of their patients would be worse off at the end of treatment. 'Even though we told them the deterioration base rate for all patients is about 8 per cent, they predicted that virtually no patients would be any worse off. Our algorithms and stats methods picked up 85 to 90 per cent of the 40 cases who deteriorated, the therapists picked up like 3 per cent.'

Clinicians generally react with resistance to client feedback systems. Lambert quips: 'If you think you're a superior clinician, as all clinicians do, then why would you feel you need it? Why collect data that can only bring you down?'

However, Lambert likens measuring and feeding back client outcomes to the measurement of blood pressure in medicine. 'When you go to a physician you get your blood pressure checked. You don't say "I got it checked two months ago so there's no need to check it today". We're just trying to present practitioners with a vital sign like blood pressure. It's simply a mental health vital sign – it says whether it's going in the right direction or it's not and you're going to get weekly fluctuations and so you have to be going in the wrong direction for quite a bit before we say this is alarming, this is not satisfactory.'

A twist of Lambert's work is that it has tended to reveal that outcomes have more to do with the therapist than with the treatment approach they use, and in that sense his programme of research is inconsistent with Lilienfeld's call for a list of potentially harmful treatments. However, it also has the benefit of not getting into the politics of upsetting advocates of a particular treatment approach. As Lilienfeld says, 'The APA has shown a great reluctance to alienate any of its base and to take a stance against any problematic treatments.'

Peter Fonagy agrees there is much to recommend in Lambert's approach: 'The therapist is not in as much of a privileged position to know what is happening with their clients as they think they are. Furthermore, they often ignore signs that patients aren't benefiting, for obvious reasons. But Michael Lambert's tracking of progress methodology makes it so that therapists are forced to keep an eye on how the patient is doing and overall this seems to be beneficial.'

Scott Lilienfeld too is a big supporter of Lambert's work. But he points out that Lambert isn't looking at the 'low end' of treatment efficacy, of the kind in the list of potentially harmful therapies, rather 'he's looking at treatments that generally work pretty well'. And just because a large part of the variance in psychotherapy outcome is explained by the therapist, doesn't mean that we should ignore ineffective treatments. 'To use a sports analogy,' Lilienfeld says, 'if you did a study that found that the quality of the coach mattered, but that the players on the field accounted for a lot more variance – say 75 per cent – would that mean that you wouldn't care who your coach was? In fact you may find you have more control over the coach than you do over the players, in the same way that it might be easier to ban harmful treatments than it is to prohibit poor therapists.'

Looking to the future

At the very least, the idea of publishing a list of potentially harmful treatments draws attention to the issue of psychological harm. Longer term, however, the experts appear to agree that the answer may lie in the way we teach psychology students and trainee clinical psychologists.

As Jim Wood (professor of psychology at the University of Texas at El Paso) says, 'Educating graduate students, that's the crucial thing, the smarter they get, the more critical they become – the harder they are to fool.' Boisvert agrees it is important to raise awareness among students, 'maybe even publishing a textbook that is like the cousin to "treatments that work" – I brought this up in my class the other night and the students were really interested.'

Lilienfeld believes the answer lies with spending as much time teaching students critical thinking skills as teaching them the facts about conditions like schizophrenia and depression. Students need to be taught how to think about the evidence, he says, 'to understand the basic biases and heuristics that can impact our judgement and lead us to believe that certain techniques or treatments work even when they don't.'

Box 1: No safety in numbers

It isn't just individual psychotherapy that can be associated with negative outcomes. Several group interventions for problem behaviour in young people have also been found to make things worse. These include the Cambridge-Somerville Youth Study implemented in the 1940s, the Adolescent Transitions Programme (ATP) and the Drug Abuse Resistance Education (DARE) prevention programme.

A common theme in these group-style interventions seems to have been that the young people learned bad habits from one another, and that by bringing misbehaving youngsters together, the interventions unintentionally made antisocial behaviour appear 'normal' and widespread.

For example, Chudley Werch and Deborah Owen reported in 2002 that the DARE programme was actually associated with increased alcohol abuse among participants. One possible explanation is that, by addressing several different drugs in the same intervention, the programme left participants with the impression that abusing cigarettes and alcohol is not so bad.

Similarly, in 1987 when James Catterall evaluated a four-day intensive group CBT-based workshop for low-achieving students, he found participants actually achieved lower grades than controls and showed a trend towards being more likely to drop out of school. The participants reported high levels of satisfaction with the programme, but Catterall suggested perhaps the intervention had served to bond the low-achieving students together, thus exacerbating their estrangement from school.

A key review paper in this area was published in Professional Psychology: Research and Practice in 2005 by Dana Rhule of the University of Washington, entitled 'Take care to do no harm: Harmful interventions for youth problem behaviour'. Rhule says one way to try to prevent the use of harmful interventions in the future could be for funding agencies to seek information from applicants, such as school districts, to show that the programme they want to implement actually has an empirical basis. 'If there was some sort of process in place when people seek funding, I think that would be really helpful,' she says.

Box 2: Psychological assessments that cause harm

Another area where psychologists can potentially cause harm by using techniques that lack empirical support is in assessment. Many feel that projective techniques like the Rorschach inkblot test and the Thematic Apperception Test are particularly problematic. The Rorschach is still used widely, especially in the United States (www.thepsychologist.org.uk archive search term: Rorschach). According to Jim Wood, professor of psychology at the University of Texas at El Paso, the most serious problem with the Rorschach is not that it is invalid – that it doesn't measure what it is supposed to be measuring – but that its norms and decision rules grossly pathologise normal, healthy people. In fact, Wood says, studies show that 70 per cent of people with no psychopathology will appear, according to the Rorschach, to be seriously disturbed. This can have dire real-life consequences in contexts such as custody cases.

So what about the idea of creating a database of psychological assessment tools that could potentially cause harm, akin to Lilienfeld's list of harmful treatments? Wood says such a database would be extremely useful, but that in the US at least, this would be difficult to implement because of the political situation in psychology. 'Rorschach people carry a huge amount of clout in the APA. A few years back they formed the "Psychological Assessment Working Group" to talk about psychological assessment and it was just packed with Rorschachers – there were no critics put on it. So I think the idea of putting together a list is great, but politically, if it were handled by the APA, I think it's predictable that it would be a whitewash.'

- Dr Christian Jarrett is The Psychologist's staff journalist. [email protected]

References

Boisvert, C.M. & Faust, D. (2006). Practicing psychologists' knowledge of general psychotherapy research findings. Professional Psychology: Research and Practice, 37, 708–716.
Lambert, M. (2007). What we have learned from a decade of research aimed at improving psychotherapy outcome in routine care. Psychotherapy Research, 17, 1–14.
Lilienfeld, S.O. (2007). Psychological treatments that cause harm. Perspectives on Psychological Science, 2, 53–70.
Rhule, D. (2005). Take care to do no harm: Harmful interventions for youth problem behaviour. Professional Psychology: Research and Practice, 36, 618–625.
Roth, A. & Fonagy, P. (2004). What works for whom? New York: Guilford Press.
Wood, J.M., Garb, H.N., Lilienfeld, S.O. & Nezworski, M.T. (2002). Clinical assessment. Annual Review of Psychology, 53, 519–543.