
‘As a large language model, apathetic students are more likely to use me’

New research from Swansea University takes a look at student use of AI tools, and what can be done to discourage it.

02 May 2024

By Emily Reynolds

There's been plenty of speculation about ChatGPT's pervasiveness in academia, and in particular about its potential role in helping students cheat. Yet much of the chatter around students' use of AI tools is anecdotal, and there is currently little research on how often such tools are actually being used, for what reasons, and by whom.

In their new paper, published in The Internet and Higher Education, David Playfoot, Martyn Quigley, and Andrew G. Thomas of Swansea University sought to understand how willing students are to employ ChatGPT in assignments, and to explore factors associated with its use. Through a series of analyses, they found that the level of apathy students feel towards their studies predicts AI tool usage, suggesting potential interventions that may make this brand of plagiarism and cheating less appealing.

Participants were 160 undergraduates (76% female) who were at different stages of their degree. To start with, they answered questions on their interest in and enthusiasm about their studies, indicating how much they agreed with statements like "I started my degree because I wasn't sure what else to do" and "I feel engaged in my degree".

Next, they completed a questionnaire probing the Big Five personality traits, as well as the so-called 'Dark Triad', which has previously been linked to self-reported cheating behaviour. Further questions looked at their academic lives, using a scale measuring confidence in study skills, including study routines, critical thinking, and the ability to use resources successfully to complete assignments.

Finally, they were asked about their knowledge and use of AI, as well as how likely they would be to use it under various punishment conditions, increasing in severity from no punishment, to failing a module, to total expulsion.

Students generally had a high level of awareness of ChatGPT and similar tools, with about 83% stating that they had heard of them. Around 32% said they would use such a tool to write an assignment, and 15% reported that they had already used one to help with their work.

Personality did not appear to play a role in willingness to use AI for coursework; there was no significant relationship between either Big Five or Dark Triad traits and likelihood of using ChatGPT.

Instead, apathy was a better predictor of AI use. Once data from students who had never heard of ChatGPT before the study were removed (17% of the sample), each one standard deviation increase in apathy made students around 145% more likely to consider using AI tools. Stage of study was also relevant: AI-aware second-year students were about 66% less likely to consider using ChatGPT than first-years (previous analyses found no difference between first and final year students).

Another crucial factor was risk. If there was no chance of getting caught, students were somewhere between slightly and moderately likely to use ChatGPT, but as the risk of being rumbled increased, willingness decreased. Willingness also fell in line with the severity of potential punishments, suggesting that clearly setting out the consequences of unauthorised AI use may deter students from turning to such tools.

Interestingly, this study did not replicate previous research linking certain personality traits to a greater likelihood of plagiarism. The team suggests this may be down to the self-report nature of the study: answering questions about cheating in a study run at their own university may have motivated students to be less than honest. It's also possible that the self-selecting sample was not fully representative of the student body.

Investigations in this area may also be limited by a lack of clarity around whether the use of AI tools should be considered 'cheating'. As the authors note, ChatGPT can be used for anything from wholesale plagiarism to tidying up sentence structure or explaining complex concepts. Whether all of these uses amount to academic dishonesty is unclear, and this ambiguity may have affected how students responded to questions about their willingness to use such tools; subsequent studies should make the distinction explicit.

It's not entirely surprising that students who are disengaged with their studies might be more tempted to use AI tools to help complete their degrees. As the team puts it, such students "just want to 'get by'", gleaning no real sense of meaning from their degree, and thus may be more susceptible to using tools that help them cut corners.

The results of the study therefore provide educators with some clear areas of focus. Identifying students who feel disengaged with their degrees and working with them to establish a deeper sense of meaning and purpose could limit use of these tools. Beyond that, ensuring that students are clear about potential consequences may make them think twice before logging onto ChatGPT.

Read the paper in full.