How trustworthy is the data that psychologists collect online?
Findings raise general concerns about how closely participants read task instructions.
09 March 2016
The internet has changed the way that many psychologists collect their data. It's now cheap and easy to recruit hundreds of people to complete questionnaires and tests online, for example through Amazon's Mechanical Turk website. This is a good thing in terms of reducing the dependence on student samples, but there are concerns about the quality of data collected through websites. For example, how do researchers know that the participants have read the questions properly or that they weren't watching TV at the same time as they completed the study?
Good news about the quality of online psychology data comes from a new paper in Computers in Human Behavior. Sarah Ramsey and her colleagues at Northern Illinois University first asked hundreds of university students to complete two questionnaires on a computer – half of them did this on campus in the presence of a researcher, the others did it remotely, off campus.
The questions covered a whole range of topics, from sex to coffee. The researchers initially led the participants to believe that the study was about their attitudes to these topics. But when the students started the second questionnaire, they were told the real test was to spot how many of its questions were repeats from the first. The idea was to see whether the students had really been paying attention – if they hadn't, they wouldn't be very good at spotting the duplicates.
In fact, both groups of students – those supervised on campus and those who could do the questionnaire anywhere – performed well at spotting when questions were repeated. This suggests that even those who'd completed the questionnaires at home, or out and about, had been paying attention – good news for any researchers who like to collect data online.
A follow-up study was similar, but this time there were three participant groups: students on campus, students off campus, and 246 people recruited via Amazon's Mechanical Turk. The researchers also added a trick to check whether participants had read the questionnaire instructions properly: the instructions included an unusual request about how participants should record the time at which they completed the questionnaires.
When it came to paying attention to the questionnaire items, the results were again promising – all three groups did well at spotting duplicate items. When it came to reading the instructions, the results were more disappointing overall, but the Turkers actually performed best: 49.6 per cent of them appeared to have read the instructions closely, compared with just under 15 per cent of on-campus students and 8.5 per cent of off-campus students. Perhaps users of sites like Amazon's Mechanical Turk are more motivated to pay attention than students because they have an inherent interest in participating, whereas students might just be fulfilling their course requirements.
Of course, this paper looked at only two specific aspects of conducting psychological research online, both relating to the use of questionnaires. Still, the researchers were relatively upbeat – "These results should increase our confidence in data provided by crowdsourced participants [those recruited via Amazon and other sites]," they said. But they also added that their findings raise general concerns about how closely participants read task instructions. There are easy ways around this, though – for example, instructions can include a compliance test that must be completed before the proper questionnaire or other task begins, or researchers could try using audio to provide spoken instructions.
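To make that last suggestion concrete, here is a minimal, hypothetical Python sketch (not from the paper) of what such a compliance gate might look like: the instructions embed an unusual request, and only participants who follow it proceed to the real questionnaire. The wording and pass criterion are illustrative assumptions, not the authors' actual procedure.

    # Hypothetical sketch of a pre-task compliance check, not the study's own code.
    # The instructions contain an embedded request that only careful readers will follow.

    INSTRUCTIONS = (
        "Welcome, and please read these instructions carefully.\n"
        "You will rate a series of statements from 1 to 5.\n"
        "To confirm you have read this, type the word 'coffee' when asked "
        "for your favourite colour."
    )

    def passed_compliance_check(response: str) -> bool:
        """Return True only if the participant followed the embedded instruction."""
        return response.strip().lower() == "coffee"

    def run_survey() -> None:
        print(INSTRUCTIONS)
        answer = input("What is your favourite colour? ")
        if not passed_compliance_check(answer):
            # Failing participants could be excluded or flagged for later analysis.
            print("Compliance check failed - your responses will be flagged.")
            return
        print("Compliance check passed - starting the questionnaire...")
        # ...present the real questionnaire items here...

    if __name__ == "__main__":
        run_survey()

In practice a researcher would log the check's outcome alongside the responses rather than turning participants away, but the basic idea is the same: the gate only admits people who demonstrably read the instructions.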
Further reading
Ramsey, S., Thompson, K., McKenzie, M., & Rosenbaum, A. (2016). Psychological research in the internet age: The quality of web-based data. Computers in Human Behavior, 58, 354-360. DOI: 10.1016/j.chb.2015.12.049