
Evidence and learning styles

Several writers respond to Professor Rita Jordan's letter in the May issue.

12 May 2017

In her response ('Why don't educators listen to us?', Letters, May 2017) to our letter to The Guardian, Professor Jordan takes issue with our concerns about the widespread practice of 'learning styles' in our schools.

Professor Jordan accepts our assertion that there is no scientific evidence to support the efficacy of 'meshing' as advocated by the learning-styles approach, but nevertheless advocates its continued use, drawing attention to the difficulty of establishing reliable interventions and outcome measures through randomised controlled trials (RCTs) in a heterogeneous group of atypical children with autism spectrum disorder.

While we welcome a debate over this issue, we believe that our position is misrepresented on a number of critical points. First, we are criticising a specific set of practices that are supposedly supported by neuroscience when they are not, and hence we regard learning styles as a 'neuromyth'. We are not criticising individually tailored learning for atypical populations with special needs. Second, our concerns were based on the lack of evidence, across numerous studies, to support learning styles as a general educational approach, not on the problems of using RCTs with atypical populations. These are separate issues.

In 'unpicking' our claim that there is no scientific evidence to support the use of learning styles in education, Professor Jordan questions what evidence is and how one should go about assessing it. It is noteworthy that both the terms 'scientific' and 'evidence' appear in quotation marks, as if they were questionable. She points out that most educators and therapists are interested in individuals, and that research using individual designs is more useful. Crucially, Professor Jordan acknowledges that educators should pay attention to evidence-based practices that 'might give an idea of what is worth trying'. We would argue that, in practice, you cannot have it both ways.

'Whatever works' might be fine for individual interventions, but with approximately 8.5 million schoolchildren in the UK it is simply not practical to provide tailored education for every child. We need evidence-based studies derived from group data to provide the most effective interventions for this large population; and when claims have neither scientific support nor scientific validity, as with neuromyths, it is right to draw attention to them, because many perceive them as scientifically credible.

Professor Jordan writes that we should 'save [our] "lectures" for the educational administrators' rather than telling teachers 'how, and what to think'. The speakezee.org network, which I lead, organises talks for schools to inspire students and teachers about neuroscience, to explain what scientific evidence is, and to show why learning styles fail to meet that standard.

If spreading critical thinking, by giving teachers the skills to recognise why certain claims and interventions are deemed pseudoscience, counts as lecturing, then so be it. It is something we can all benefit from.

Professor Bruce Hood
University of Bristol
(on behalf of the original co-signatories)

Whilst I agree with Professor Jordan that most educators are concerned with 'What is the best approach for this individual, at this time, in this context, for this purpose?', I am sceptical about the value of psychologists helping educators to learn how to conduct and publish the high-quality individual designs needed to develop more effective individual approaches. Putting it this way still seems to suggest a shift away from the educator's prime preoccupation.

In the background here is a larger question about the role of psychological research in teaching and learning processes in schools. Producing what psychologists would regard as high-quality research designs is not something teachers would find very helpful when trying to improve their day-to-day practice. This is not to say that reflective teachers are not interested in evidence-based practice or in carrying out research, but their idea of research is different. It is less concerned with design and control and more concerned with trial and error, with seeing what works and what doesn't, and with the best that can be done in a particular context. In short, they use an ongoing action research model rather than the traditional experimental model employed by psychologists. This is not to say that well-controlled studies have no place, but they can only ever be a small part of what teachers do when they make action research integral to their teaching.

John Quicke
retired Professor of Education, Hull

Rita Jordan comments on how oversimplifying issues of concern is common in psychology. Indeed, careers and industries have been built on maintaining naive dichotomies. For instance, in his chapter in the 2012 collection Bad Education: Debunking Myths in Education, Frank Coffield listed 29 such dichotomies in his survey of contrasting teaching styles.

There are dichotomies that ignore the middle ground in other areas, for example: Passive/Aggressive in personality measures; Expert/Facilitator in teaching styles; Fast/Slow in ways of thinking; and many more. Such dichotomies may well exist – but the labels oversimplify the case. I don't want to imply that the most accurate position is always somewhere in the middle. But I do wish psychologists would stop oversimplifying things – as I have done!

James Hartley
School of Psychology, Keele University