
AI and the work of psychologists: Practical applications and ethical considerations

Ian MacRae, a member of the BPS’s Division of Occupational Psychology and Research Board, offers a practical guide for psychologists using or considering generative AI.

09 October 2024

This article covers both the opportunities and risks of using generative AI, emphasising ongoing debates and areas of disagreement. It discusses why the term 'Artificial intelligence' can be misleading and why attributing human-like cognition and emotion to computer software is unwise, particularly for psychologists.

Generative AI – What it is

Generative AI creates new content - such as text, images, or music - by mimicking patterns found in large datasets of existing work. 

Generative AI is better understood as a complex form of pattern matching than as decision-making. It maps the underlying structure of data, its patterns and relationships, to generate outputs that mimic that data.

Chatbots powered by large language models are a common use of this technology, often for generating, revising, and translating text. While they can quickly create and format content, they are prone to errors and cannot assess the truth or accuracy of what they produce. There's no underlying understanding, intention, or judgment - just a series of calculations to generate content that is the most likely match for the query.

This is also what makes them error-prone: these models can just as easily produce content in the style of a scientific article as of science fiction, because they lack any underlying ability to judge the truth, accuracy, or relevance of what they generate.

Psychologists are often asked, "Can chatbots replace mental health professionals?" The answer, for now, is no, and there are a few important cautions to keep in mind. Some tools that have undergone rigorous and extensive medical device testing in the UK have been approved for initial screening and assessment purposes in clinical settings.

However, the language models available to the general public, such as ChatGPT, Gemini, and Anthropic's Claude, have clear limitations. Their terms and conditions specify that they should not be used for medical, psychological or diagnostic purposes, or for making consequential decisions for, or about, people. This is because decisions involving mental and physical health require complex, contextually informed judgment that AI isn't equipped to handle.

Is AI intelligent?

This brings us to a broader question: Is AI intelligent? For psychologists, this is a crucial issue. Neil Lawrence, professor of machine learning at the University of Cambridge, draws a clear line in his book The Atomic Human between 'machine intelligence' and human intelligence, arguing that the term 'artificial intelligence' is misleading. Human intelligence is embodied; it involves focusing on specific stimuli and managing limited attention in an environment full of more information than we can ever process at once.

Lawrence argues that our ability to focus on what is contextually and socially important is uniquely human. Our processing power is limited and directed by focus and attention, which makes our intelligence distinct from that of machines. For Lawrence, this distinction is fundamental: while machines can be considered intelligent, their machine intelligence is fundamentally different from our own.

Use by psychologists

Psychologists and researchers in general are already using machine intelligence for many text-based tasks. Recent research suggests a sudden spike in published articles containing words and phrases commonly favoured by large language models, and in a survey by Nature about 30 per cent of researchers said they had used generative AI tools in their work. There are also some optimistic notes on productivity: recent research across three randomised controlled trials found that coders using an AI assistant completed 26 per cent more tasks.

A recent article from the American Psychological Association discusses some of these psychological applications of generative AI in therapy and in education, including higher education, along with the potential opportunities and cautions.

Large language models can be especially useful for psychologists and researchers in coding tasks. They can generate useful code in languages such as R or Python for tasks where outcomes are easily verifiable. For example, a language model can easily generate code to format graphs into APA style (a writing style and format for academic documents).
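
As an illustration, here is a minimal sketch of the kind of Python code a language model might generate for that request. The specific styling choices (sans-serif font, no top or right borders, labelled axes, high-resolution output) are assumptions approximating common APA figure conventions rather than an official APA template, and the simulated data are purely for demonstration.

# Sketch: plot a histogram with APA-like figure styling (illustrative, not an official template)
import numpy as np
import matplotlib.pyplot as plt

rng = np.random.default_rng(seed=1)
scores = rng.normal(loc=100, scale=15, size=300)  # simulated test scores for demonstration

plt.rcParams.update({
    "font.family": "sans-serif",   # APA figures typically use a clear sans-serif font
    "axes.spines.top": False,      # remove unnecessary borders ('chart junk')
    "axes.spines.right": False,
})

fig, ax = plt.subplots(figsize=(6, 4))
ax.hist(scores, bins=20, color="grey", edgecolor="black")
ax.set_xlabel("Score")
ax.set_ylabel("Frequency")
fig.tight_layout()
fig.savefig("figure1_histogram.png", dpi=300)  # high resolution suitable for print

Because the requirement is concrete, it is easy to check whether the output actually looks like an APA-style figure, which is exactly the kind of verifiable outcome where these tools are most useful.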

Additionally, language models can assist in debugging. Paste an error message into a language model and it will often suggest likely causes and possible fixes. Language models are safest for tasks with clear, verifiable outcomes. For example, asking a language model to 'generate a histogram following APA style' sets specific, objective criteria, so it is straightforward to evaluate the accuracy of the results.
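
A hypothetical example of that debugging workflow is sketched below: a line of analysis code raises an error, the error message is pasted into a chatbot, and the suggested correction is applied. The data frame, column names and error here are invented purely for illustration.

# Sketch of the debugging workflow described above (hypothetical data and column names)
import pandas as pd

df = pd.DataFrame({"group": ["a", "b", "a", "b"], "score": [3, 5, 4, 6]})

# Original line, which raises a KeyError because column names are case sensitive:
# means = df.groupby("group")["Score"].mean()

# Pasting the KeyError message into a chatbot typically produces a suggestion
# along the lines of 'check the column name'; the corrected line is:
means = df.groupby("group")["score"].mean()
print(means)

As with any generated suggestion, the fix still needs to be checked by someone who understands the analysis; the model proposes a plausible correction, it does not verify it.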

Generative AI can write computer code without using any personal or confidential data, which helps protect sensitive information. However, if you enter your own data into these models, the same risks and ethical concerns around data privacy and security apply, just as they would with any sensitive information.

Cautions for psychologists

Generative AI raises a range of potential ethical, legal and philosophical issues. These will likely remain areas of discussion and debate, as technology tends to move far more rapidly than courts and lawmakers. However, psychologists should keep two key points in mind:

1) Ensure content is verified by qualified professionals

AI-generated content should be verified by someone qualified to assess its accuracy and relevance, rather than relying on a 'feels right' judgment. This aligns with the BPS Code of Ethics under the Principle of Competence.

2) Safeguard privacy by avoiding sharing confidential data

Psychologists should not input confidential or personal data into language models, because they cannot control how that data is stored or used. Once entered, the information may be retained and potentially regenerated or accessed by others, posing a significant risk to privacy. This aligns with the Principle of Respect, and entering confidential data into language models may also breach the UK GDPR, as models can inadvertently reproduce confidential data for other users.

Be cautious about cognitive and emotional language

Psychologists should avoid attributing human emotions or cognitive processes to AI. It is common to anthropomorphise systems like language models or image generators, but psychologists should resist the habit.

For example, rather than saying, "This is what AI thinks the future will look like," it's more accurate to describe these outputs as responses generated by software from patterns in data, not as products of thought or understanding. These systems generate results based on queries and training data; they don't think or process information like humans. It would be misleading to say, "this is what SPSS (software used for statistical data analysis) thinks the relationships between personality traits and health outcomes are"; instead, we would describe the results of such an analysis as statistical outputs based on the data entered, not as a product of reasoning or insight by the software.

While it's tempting to imagine machine intelligence with human-like capabilities, psychologists must avoid this error. As Neil Lawrence notes, what seems remarkable about AI is due to engineering, vast datasets, and powerful computation. The real 'trick' is that AI mimics us, refining patterns from human data. Psychologists must resist ascribing human qualities to AI, especially given how differently these systems operate.

Additional concerns

There are two other issues with generative AI that will likely be long-running debates. The first is largely practical and legal, while the second is a broader philosophical discussion that many will feel very strongly about.

1) Copyright

One of the major concerns with generative AI models is that they have consumed vast amounts of data without the consent of authors, writers, artists or creators. There are ongoing legal discussions and battles that could have significant impacts on the regulation of both training data and generative AI outputs.

The Authors' Licensing and Collecting Society says, "the large language models underpinning these systems are developed using vast amounts of existing content, including copyright works which are being used without consent, credit or compensation." Large lawsuits are also underway in the US against OpenAI and Microsoft, which are summarised in a blog by Harvard Law.

Remember that when you are using any new technology, especially software as a service, the rules and terms of service can change suddenly, without notice, and not necessarily in your favour. 

2) Philosophical objections

Many people have philosophical objections to machines doing human work, particularly when it involves their own jobs. The idea of machines replacing human effort can feel unsettling, especially when it comes to tasks people consider uniquely theirs.

There's also an ongoing debate about the role of humans in creativity. These debates have been around as long as automation, and were summarised exceptionally well by the Victorian art critic John Ruskin in The Stones of Venice (1853), which emphasises the value of human agency, creativity, mastery, and the appreciation of imperfections that are inseparable from human endeavour. For Ruskin, the soul of good work was in applying one's best efforts and skill without shying away from imperfections. He believed automation smoothed away the rough edges and flaws inherent in the expression of human creativity.

Ruskin's core arguments remain relevant, and the debate is as heated as ever. The question of what work is fundamentally human, and what can (and what should) be automated, is far from settled.

About the author

Ian MacRae is a Division of Occupational Psychology committee member. He is an independent researcher specialising in the psychology of work, digital communication, and emerging technologies. He has developed psychometric tests that have been used by hundreds of thousands of people. He is the author of several books that have been translated into a dozen languages, including High Potential and Dark Social; his latest book is Web of Value: Understanding blockchain and web3's intersection of technology, psychology and business.

 
