Letters, July 2012
voluntary work experience; replication; marathon running and risk; asylums; speaking out at seminars; healthcare informatics; dyslexia; and more
18 July 2012
Some of us can't work for nothing...
…but some of us can and this has serious consequences for representation across society and for social mobility.
Take Parliament, for example: in 1945 the key players in Clement Attlee's cabinet were Ernest Bevin, a farm boy; Nye Bevan – founder of the NHS – a miner; and Herbert Morrison – Attlee's number two – a grocer's assistant. Together with colleagues from all classes, they were a powerhouse for social justice, raising life expectancy and living standards beyond anything believed possible in the 1940s.
Jump forward to 2012, and only one in 20 MPs is from a blue-collar background, a number that has halved since 1987.
Why? Unpaid internships are now almost always a prerequisite for getting an MP's staff role. Increasingly, only those who can afford to work without pay, i.e. not blue-collar families, can get their foot in the door.
Similarly, in journalism, over half of the current top 100 journalists were educated at private schools (compared with 1 in 14 of the general population). Only a tiny handful of senior journalists are from working-class backgrounds.
Why? A significant contributing factor is the notoriously low pay for junior staff, which means that only those with some other form of financial support can do the work, i.e. not those from the more than half of families in Britain – some 28 million people – who work in blue-collar manual jobs or routine white-collar clerical jobs.
This skew in representation in politics and the media has been part of a chain of events with significant consequences. According to the New Policy Institute, the share of GDP going to wages has fallen steadily since 1978, and this loss has primarily affected the groups underrepresented in politics and the media. The wages of the poorest 20 per cent of workers are 43 per cent lower in real terms now than in 1978. In stark contrast, even during the economic crisis, boardroom pay at the top 100 companies soared by 49 per cent, after shooting up by 55 per cent in 2010. In London, the richest 10 per cent are worth 237 times more than the poorest 10 per cent. Compare this with Japan and some Scandinavian countries, where the richest 20 per cent are less than four times richer than the bottom 20 per cent. At the same time, during the economic crisis, the number of people in poverty has risen to more than 13 million.
This information is not on the front pages of our media, nor, it seems, at the forefront of the minds of our politicians. Could it be because those suffering the most are not adequately represented in these two key professions in our democracy? And those with the most to gain are overrepresented?
We must fight a similar 'social cleansing' happening within psychology. Only rich people can work for free. If that becomes the main route into our profession, only rich people will be able to become psychologists. What consequences might that have?
Julie Bullen
Independent Consultant
Oxford
I wholeheartedly concur with Megan Down's letter in your June issue: unpaid positions are unfair and discourage diversity in a domain that requires it more than any other.
I found the same when considering training after my first degree. I was advised to take an unpaid position to give me a chance of getting a place, but refused to do so. After three years at university, graduates are looking to pay off debt, get a place of their own and perhaps some even dream of getting a mortgage. Getting into more debt seems irresponsible, particularly when it doesn't ensure admission to clinical training.
Clinical trainers like applicants with life experience, and it is the same in psychotherapy, where training organisations specifically ask for applicants over the age of 40 and will not consider those who haven't already worked in the domain. Unpaid work is again suggested. If it is financially difficult for most people to work wage-free in their twenties, it is likely to be even harder in their thirties and forties, when they have more commitments and responsibilities.
The consequence of this for the client is that he or she is left with a choice of professionals dominated by the independently wealthy and the spouses of high earners; often this correlates with variables like class, ethnicity and, in the case of private practitioners, geography too. These are the individuals who go on to run the domain, its training and its publications. If unpaid work was viable for them, they may not understand that it just isn't for everyone.
We need to listen to the hungry young graduates, who find the winds of the industry blowing them off course. The status quo favours wealthy hobbyists over individuals who have far more to offer.
Peter Sear
Theydon Bois
Epping, Essex
In response to the letter 'We can't work for free' (June 2012), I wish Megan Down had canvassed the opinion of qualified psychologists: she would have quickly ascertained that most have done their fair share of unpaid voluntary work and that it is far from a new concept! This isn't just the case for psychology, but also for most other professions, e.g. teaching and law.
I myself worked unpaid to obtain my first assistant post and was later an unpaid trainee for three years, working in paid employment every weekend and holiday to allow me to do so. I am now nearly three years post-qualification and only just finishing paying off my student loans, so believe me, I certainly did not come from a privileged background.
This is also the case for the peers I qualified with, and for the other psychologist I work with; we have both become increasingly surprised at the apparently growing sense of entitlement among some graduates.
In respect of the comments about honorary positions being nothing other than unpaid internships, can I also highlight the amount of support and supervision an assistant at this level often needs; the reality is that it would be far easier and quicker for a qualified psychologist to do most tasks themselves, and that we would be far better off if we charged for supervision at 'the standard hourly rate'. It is often not possible for people with no or limited clinical experience to obtain paid assistant posts, as they do not possess the skills needed to fulfil such a role. One mistake I see being made is that individuals feel they can only get relevant experience from assistant psychologist posts, when in fact they can also obtain valuable experience in paid nursing assistant and other paid clinical roles.
The route to becoming a qualified psychologist is far from quick or easy, and honorary positions also often give people the opportunity to see whether the career is for them. Working in a female mental health unit, I have had a few honorary assistants who were grateful for the opportunity but who felt this was not a clinical area they wished to pursue. From my own experience, voluntary work is as much about finding out the areas in which you don't wish to work as those in which you do. I personally am very grateful for the opportunities I was fortunate to have a fair few years ago!
Claire Thompson
Chartered Psychologist
Nottingham
I congratulate Megan Down on her much needed and honest reflection on the route to clinical training. I remember being shocked when advised to find somewhere I could work for free to gain the relevant experience. To be honest, as the breadwinner for my family, I dismissed the advice as coming from someone who did not live in the real world! At the time I was unaware that it was common practice.
Once on clinical training I quickly realised I was different from most of my peers, having not walked the normal walk to training. I am for ever grateful to my first placement supervisor who helped me value my previous experience by rescripting it as equivalent to a dual heritage.
At the university I enquired why someone like me had been selected for the course. I was told it was in response to service users' feedback about the need for increased diversity.
I continue to have a somewhat uneasy relationship with the profession of clinical psychology. When I graduated I applied for a number of posts. For one post I was told I was not selected because I had not been on a specialist third-year placement with looked-after children; this was despite the fact that prior to my clinical training I had worked for over 12 years in a care leavers' project supporting looked-after young people's transition to independence. For another I was told I had not been offered an interview because I had never supervised a clinical trainee; this was despite having many years of experience of, and some intensive training in, recruiting, training and supervising staff and volunteers. I have also completed the social work practice teaching award: three modules of teaching, observations of practice of student supervision and a written assessment. I continue to supervise and assess student social workers. None of these experiences seems to have been considered relevant.
Having routes to clinical training that require working for free limits the pool of potential applicants to those who have no responsibilities and/or alternative means of supporting themselves. Diversity is a broad concept. In order to value diversity, a profession has to be critically reflective. I believe clinical psychology is a work in progress in this respect. We need to reflect critically on how we evaluate 'experience', particularly experience that is different from our own. This will enable the profession to open up the possible routes into training and to better integrate inclusion and diversity.
Rachel Clarke
Plymouth
Replication – the difference between reliability and validity
I read with interest the articles devoted to replication (May 2012). The issue of replication has concerned me for years and it's good to see it brought into the open.
I suspect that some extremely well-known findings in the psychological literature would fail replication. However, even if a finding is robust and replicable, this does not mean it is valid.
I have replicated Piaget's number conservation test many times. Children of six usually say the longer line of counters has more, even though there is the same number of counters in each line. From this, Piaget infers that they do not conserve number. However, all this shows is that the test is reliable. It is certainly not valid. My MSc research showed that they passed a far more rigorous test of the number concept, yet still failed Piaget's test, and… I did repeat the experiment with a totally different set of subjects a month or two later.
Gibson and Walk's famous visual cliff experiment may be another example. It's probably reliable but not valid. Our own baby would have gone over cliff edges if not prevented. It would not be difficult to design an experiment that tested this safely.
I suspect that babies don't go over the visual cliff because of the discrepancy between what their tactile sense tells them and what their vision tells them.
I have a serious suggestion to make. Every psychology degree should have a compulsory replication project in it. This way, students would find: (1) that many experiments are not written up well enough for proper replication; and (2) that not as much stacks up as you'd expect. More rigour is needed – and a better understanding of the difference between validity and reliability.
A further suggestion would be that every journal has a replication article as part of every issue – with passionate calls going out for replications. It would give the students a forum for articles and save their professors from doing things that won't enhance their reputations.
Bill Bailey
Former Lecturer in Educational Psychology
Institute of Education, University of London
I write to thank you for the 'Opinion special' on replication. Our lab has been following the three incidents mentioned and appreciated the readings very much; the team has done a really elegant job of summarising the key issues raised.
I sincerely think this is a comprehensive write-up that will last far beyond the incidents themselves, a much-appreciated discussion of the practice of science and of confidence in the inferences made. Many of us in the lab have bookmarked the articles and forwarded them to our friends; we have no doubt that we will continue to do so when new research students join in years to come, to encourage them towards a higher standard of science.
On that note, thank you very much for your contribution to the field.
Jean Liu Chuan Jin
Cognitive Neuroscience Lab
Duke-NUS Graduate Medical School
Singapore
Running risks – reality and perception
The tragic death of Claire Squires in this year's London Marathon reminds us that long-distance running is not without risk. How dangerous is it to run the marathon?
Some back-of-the-envelope calculations using data from the Office for National Statistics show that the age-standardised mortality rate in the UK, averaged over the period 1981–2010, is 0.002 per cent per day, or a 0.06 per cent mortality rate across all 31 London Marathons. Since the London Marathon's inception in 1981 there have been 854,561 finishers and 11 deaths. Clearly, this is 11 too many; however, if London Marathon runners had shown a mortality rate comparable to that of the UK population, this would have resulted in 512 deaths over the history of the race. Thus the actual mortality rate we see for the London Marathon is less than a 46th of the national average. One could speculate that this is because of the fitter-than-average people who take part, the lower-than-average age of competitors and ready access to medical services.
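For readers who want to check the figures, the arithmetic above can be reproduced in a few lines of Python. This is a minimal sketch using only the numbers quoted in this letter; the rounding (0.06 per cent, roughly 512 expected deaths, 'over 46 times') follows the text above.

    # Back-of-the-envelope check using only the figures quoted in the letter.
    daily_rate = 0.002 / 100                      # UK age-standardised mortality per person per day
    race_days = 31                                # London Marathons held 1981-2012
    rate_across_races = daily_rate * race_days    # about 0.062%, quoted as 0.06% above

    finishers = 854561
    actual_deaths = 11
    expected_deaths = finishers * 0.06 / 100      # about 512 deaths at the quoted UK rate

    print(round(rate_across_races * 100, 3))          # 0.062 (per cent)
    print(round(expected_deaths))                     # 513 (the letter quotes 512)
    print(round(expected_deaths / actual_deaths, 1))  # 46.6, i.e. 'over 46 times' fewer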
This is important because we know people are likely to overestimate the risk of widely reported events. Following the 9/11 terrorist attacks, many people switched from travelling by plane to travelling by car, which was perceived as safer. Researchers estimate that this caused an extra 2300 deaths – almost as many as died in the Twin Towers themselves (Blalock et al., 2009). There is a danger, then, that some will avoid running because they think it is too risky. This would be unfortunate given the evidence for the benefits of running and the relatively low mortality rate in the marathon.
Paul Ibbotson
University of Manchester
Reference
Blalock, G., Kadiyali, V. & Simon, D.H. (2009). Driving fatalities after 9/11: A hidden cost of terrorism. Applied Economics, 41(14), 1717–1729.
Museums of madness?
Diane Lockley's 'Looking back' article on Leicestershire's first lunatic asylum (June 2012) is interesting. However, it seems to have been written with little reference to the wider context. This leads to a number of issues, but this letter will only comment upon one. Her main thesis is that this institution was 'curative'. It is the case that many of the earlier public asylums, with Lancaster being the prime example, were influenced by the principles of 'moral treatment' especially as practised at the Retreat, a Quaker private asylum in York. (Here the term 'moral' is being used in an older sense roughly approximating to 'mental', and not necessarily with the ethical connotations that the word has today.)
As things worked out in public asylums, the steady accumulation of cases, despite many discharges, together with the identification of more and more people as insane, led to considerable overcrowding even with extensions to buildings. Overcrowding and low staffing levels pushed asylums more towards warehousing the insane than offering treatment. In Andrew Scull's term they became 'museums of madness' (Scull, 1979). It would be interesting to know whether the Leicestershire Asylum's presumption of being curative changed as the century went on due to these influences.
Apart from a need to justify their existence, another factor pushing asylum authorities to stress their therapeutic outcomes arose from their being, at least to a degree, in competition. Public asylums were created to cater for 'pauper lunatics'. Who was admitted was outside the control of the asylum or its superintendent, and admission was actually determined by poor law officials and magistrates. The poor law system could deal with the insane in other ways than admission to the local public asylum (Miller, 2009). Early in the 19th century, some were sent to the 'pauper departments' of private asylums, although this gradually tailed off. A not uncommon response was to admit the person to the workhouse. The Leicester workhouse even created a special ward for the insane in the 1840s and was not unique in having such a facility (Carpenter, 1997). Since the costs of caring for pauper lunatics had to be met by the poor law union concerned and since costs for the workhouse were cheaper than the asylum, there was an obvious temptation for the poor law authorities to think of using the workhouse first.
Given this situation there was an incentive for asylums to puff their beneficial contribution to the care of the insane, especially before the pressure of numbers built up. Some overblown 'advertisements' were circulated. An example from the Leicestershire asylum is too long to quote in this letter but is reproduced by Bartlett (1999, p.120); it presents the asylum as a utopian curative establishment. Claims to 'cure' therefore need to be taken with an appropriate pinch of salt.
Ed Miller
Exton, Rutland
References
Bartlett, P. (1999). The poor law of lunacy. London: Leicester University Press.
Carpenter, P.K. (1997). The pauper insane of Leicester. History of Psychiatry, 8, 517–537.
Miller, E. (2009). Variations in the official prevalence and disposal of the insane under the poor law. History of Psychiatry, 18, 25–38.
Scull, A. (1979). Museums of madness. London: Allen Lane.
Forum – Survival guide
In a recent post on my blog, I lamented the reluctance of many women to speak out in seminars. I'd noticed that when I was invited to give a talk, question time was often dominated by men, but women would then come up afterwards and ask me a question in private. So I wrote about this, urging women to be bolder and to speak out more often. I regard seminar question time as an important part of academic activity, and one where we need more women's voices.
I'm pleased to say that a handful of women responded to tell me that, emboldened by my blogpost, they had asked a question in a seminar and discovered that it wasn't as bad as they had imagined. But several people confessed to a real dislike of speaking out in question time. These included men as well as women, and some quite eminent figures in the field. In some cases, this seemed like a form of social phobia: reluctance to put oneself in a situation where one might end up looking foolish or ill-informed. But others implied that they did not want to ask questions for fear of appearing rude, boastful or bullying.
Now, I'm all in favour of good manners. And I accept that there are people who will use seminar question time to show off. They are a nuisance, but the way to deal with them is not to remain silent, but rather to crowd them out. Otherwise, in our concern to be polite, we may be losing something important: the ability to have public discussions about science.
We need to distinguish between questions requesting information and those that are more challenging. It's always worth requesting information if something is unclear: speakers often leave out crucial detail, not realising they have done so. You may feel worried that you'll be revealed as inattentive or ignorant, but the odds are that if you didn't get it, nor did other people in the audience. Usually, you are doing both the speaker and the audience a favour by drawing attention to the problem – and the talk will be better next time as a result.
Challenging questions are more controversial. None of us likes being put on the spot, and for most speakers their greatest fear is that a fatal flaw in the talk will be revealed and they will be exposed as incompetent. I suspect that empathy with the speaker puts many people off asking questions that could be hard to answer. But I think this is a mistake. A question can be challenging without being aggressive, and posing such questions to a speaker is not rudeness: it's part of the engine that drives research forward. If there is a flaw in my argument, or a relevant line of research that I have missed, or an alternative interpretation of my data, I want to know about it! For junior people, this is especially important, because they are likely to be competing for fellowships and jobs where they need to be able to cope with tough interviews. I can't count the number of times I have been helped by a difficult question. I may not always deal well with it at the time, but I'll go away and mull it over. I'll either take the point on board, or develop a good defence against it for the next time. This is one reason why I like, if possible, to give talks about my latest research before I try to publish it: if there are problems with the work, I'd prefer to flush them out before submitting the paper. But is it getting increasingly hard to find audiences who will perform this useful function?
Dorothy Bishop is Professor of Developmental Neuropsychology at the University of Oxford. Read the full version of this column at http://deevybee.blogspot.com. This column aims to prompt debate surrounding surviving and thriving in academia and research.
Psychologists, healthcare informatics and the BPS
What goes on in health care affects us all, and what goes on in health care is increasingly affected by advances in information and communication technologies that have already led to the introduction of electronic general practitioner and hospital records, to the use of the internet to access personal healthcare records and health-related information, to web-based interactions for assessment, therapy, consultation, and to much else. Additionally, the increasing availability and use of smartphones and portable and tablet computers adds a new dimension, enabling quick and easy remote communication and access to an abundance of information. Many of these developments are already in place, and their potential, together with other developments, to revolutionise health care, is evident in the newly released information strategy for the NHS 'The power of information: Putting all of us in control of the health and care information we need' (available from http://informationstrategy.dh.gov.uk).
Information and communication technologies, their applications and associated developments come under the umbrella of healthcare informatics, a discipline encompassing the resources, devices and methods for optimising the acquisition, storage, retrieval and use of information. It also includes the use of computers, clinical guidelines, formal medical terminologies, and information and communication systems. The scope of informatics in health care is much broader than medicine, and includes applications in social care, the criminal justice system, education, and the other domains of applied psychology. Consequently, developments in informatics have profound implications for the Society's members, including its 10,000 plus practitioner psychologists: ultimately, information processing, management and use also involve people, often in key roles, and in doing so, have major psychological ramifications, in relation to which psychologists – researchers, academics and practitioners – will have much to contribute.
The Society has been active in a number of ways in relation to informatics-related issues, for instance publishing guidelines on internet-based psychological services and commissioning a position paper on electronic clinical records. It responded to the consultation on the proposed initial NHS information strategy and secured psychology representation in work on the clinical record structure. These instances of Society involvement are important, but contributions are commonly reactive rather than proactive, dependent on too few individuals, and essentially piecemeal and uncoordinated, failing fully to reflect the actual and potential contribution of psychology and psychologists. Underlying this is the absence of a wider appreciation of the fundamental implications of the advances in healthcare informatics for members and for the users of the services they provide.
These concerns were communicated in a paper I submitted to the Society's Professional Practice Board, and I am delighted that the Board has announced the formation of an Advisory Group on Informatics that will be able to support a proactive, coordinated and psychologically informed approach to these developments [see p.540].
On the proactive front, the Advisory Group might seek to: identify acceptable datasets for psychological practice, and to capture and process psychological formulations, both in ways that meet practice, policy and management objectives; consider constructive ways of managing sensitive service-user information so as not to compromise safety, confidentiality and the use of services; identify research opportunities and priorities; develop undergraduate and pre-qualification material in the psychology of informatics; and identify and engage with other informatics issues relevant to the membership. The group would seek links with other professions and groups with cognate interests, and identify representatives for relevant external groups, particularly those that influence policy. It would also be responsible for overseeing timely responses to consultations with informatics implications, ensuring that the Society can present an informed, representative and coherent position based on the available psychological evidence.
Establishing and developing the Advisory Group will undoubtedly be challenging. There are other ongoing, compelling demands on the Society, its members and its resources, and informatics – short-sightedly – seems to be of minimal interest. Consequently, possibly the most difficult task will be fostering a mindset in the Society's membership that understands informatics as a key component of health care – their own and that of their wider personal and professional networks – and of other areas of psychology. Such an understanding is especially important for the practitioner Divisions and is integral to their healthcare practice.
The Advisory Group, albeit set up for a time-limited initial period, will be a start, hopefully enabling Society members to secure a role and influence in shaping psychology-aware and psychology-relevant national healthcare and other informatics agendas, now and for the future.
Michael Berger
Emeritus Professor of Clinical Psychology
Royal Holloway, University of London
IQ and learning disability
I am writing in response to the article 'Defining learning disability' (June 2012). I absolutely recognise the situation that the authors describe, and I wish to congratulate them on summarising the inherent flaws in relying on a measure of IQ to assess people's levels of need.
In my experience working in CAMHS, some of the most damaged and needy young people are those in the borderline learning disability range of IQ 70–80 who have had very abusive carer histories but who fail to qualify for any adult services. Those with additional developmental problems, for example ASD or a head injury, can also present as particularly complex and in need of support, but if their IQ happens to fall above 70 they too fall through the net.
I feel it is time to challenge this overly simplistic way of defining need with our colleagues locally, and I wonder if the BPS could offer some leadership in this matter at a national level.
Martina Waring
Clinical Lead for the Learning Disability Service within CAMHS
Child and Family Unit
St James Hospital, Leeds
The Steve Reicher & Alex Haslam column, 'The "dyslexia" label' and 'A level of professional psychology overlooked?' can all be found in the PDF version.