
The value of mental health chatbots
Dr Dawn Branley-Bell, Chair of the Cyberpsychology Section, explores the benefits and drawbacks of chatbot technology, and considers how AI can complement, but not replace, human involvement.
31 January 2023
Artificial Intelligence (AI) mental health chatbot Limbic Access has achieved Class IIa UKCA medical device certification, and is already being successfully used by some NHS Talking Therapies services to streamline mental health referrals.
AI and chatbots have numerous benefits to offer across many sectors, including healthcare. The healthcare service is notoriously, and increasingly, overworked and operating beyond capacity. Administrative tasks can be incredibly time-consuming, so tools that free up staff time and reduce staff burden can be hugely beneficial.
Reducing workload not only improves staff wellbeing but also contributes to a more effective workforce. Tired staff do not perform to the best of their abilities and anything that can protect staff, and subsequently patients, from the impacts of burnout is a positive.
Chatbots are one of the ways in which AI can aid healthcare processes, for example by signposting patients to the most appropriate services based on their symptoms. Reducing workload in this way increases the capacity and time staff have to dedicate to the most vital tasks.
For patients, chatbots have the potential to improve availability and access to services. Some patients may find chatbots more convenient - easier to fit around time constraints, for example - and/or more accessible. Many individuals may find chatbots a less intrusive or embarrassing way to seek initial advice on a healthcare problem, whether physical or mental.
Creating capacity
The NHS is experiencing significant capacity challenges in the face of record demand for Talking Therapies services, with the latest data showing a 21.5% increase in people accessing NHS Talking Therapies services over the last year. This is an area where AI chatbots may have a valuable role to play in service provision.
The Limbic Access chatbot uses machine learning to continuously improve the quality of its digital assessments and conversations, and can be safely incorporated into the psychological therapy pathway to support patient self-referral.
It can classify the eight common mental health disorders treated by NHS Talking Therapies (formerly IAPT) with an accuracy of 93%, further supporting therapists and augmenting the human-led clinical assessment.
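For readers curious about what 'classifying' a referral involves, a highly simplified sketch follows. This is not Limbic's model, which is proprietary; the example texts, category labels and library choices are illustrative assumptions only, and a real clinical tool would require thousands of labelled referrals, rigorous validation and regulatory approval.

```python
# Highly simplified sketch of referral-text classification.
# NOT Limbic's method: the training texts, labels and model choice
# below are illustrative assumptions only.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Toy self-referral descriptions with provisional condition labels.
texts = [
    "I can't stop worrying about everything, even small things",
    "I feel low and have lost interest in things I used to enjoy",
    "I keep reliving the accident in flashbacks and nightmares",
    "I avoid crowded places because my heart races and I panic",
]
labels = ["generalised anxiety", "depression", "PTSD", "panic disorder"]

# Bag-of-words features feeding a simple linear classifier.
model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)),
                      LogisticRegression(max_iter=1000))
model.fit(texts, labels)

# Suggest a provisional category for a new referral, for human review.
new_referral = ["I worry constantly and can't seem to switch off"]
print(model.predict(new_referral)[0])
```

The crucial point is the final step: the output is a provisional suggestion to support triage, augmenting rather than replacing the human-led clinical assessment.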
I am currently working alongside colleagues within the Psychology and Communication Technology (PaCT) Lab, here at Northumbria University, exploring the use of chatbots to encourage individuals to seek advice for health conditions that may be regarded as stigmatised or embarrassing. Being able to talk to a chatbot first may help individuals to take that sometimes difficult first step towards diagnosis and/or treatment.
Training the system
That said, AI and chatbots also have their limitations. An AI system is only as good as the data it is trained on. If you put bad data in while developing and training the system, you can't expect to get anything but bad results out! That may seem obvious, but identifying good data is a lot trickier than it seems.
Datasets may contain unintentional biases. If a dataset largely represents a certain population, whether by location, socioeconomic status, ethnicity, gender or occupation, then a model trained on it may produce results biased towards that population. These biases are not always immediately obvious.
This has been demonstrated in the past, when AI systems built with good intentions were later discovered to have been trained on data that unknowingly introduced bias. In the worst-case scenario, this could produce a system that is more likely to diagnose certain groups than others, even when patients present with the same symptoms.
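A toy demonstration may help show how this happens. The sketch below uses entirely synthetic data, not any real clinical dataset or specific system: a model trained mostly on one group learns that group's symptom pattern, then performs no better than chance on an under-represented group whose condition presents differently, even though the underlying rule is equally learnable for both.

```python
# Synthetic demonstration of dataset bias: all data here is fabricated.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(42)

def sample(n, signal_col):
    # Five 'symptom' features; which one signals the condition differs
    # by group, mimicking different presentation across populations.
    X = rng.normal(size=(n, 5))
    y = (X[:, signal_col] > 0).astype(int)
    return X, y

# Group A (signal in feature 0) dominates training; group B (feature 1) is rare.
Xa, ya = sample(2000, signal_col=0)
Xb, yb = sample(50, signal_col=1)

model = LogisticRegression(max_iter=1000).fit(
    np.vstack([Xa, Xb]), np.concatenate([ya, yb]))

# Fresh test data per group: the accuracy gap reveals the hidden bias.
for name, col in [("group A", 0), ("group B", 1)]:
    Xt, yt = sample(1000, col)
    print(f"{name} accuracy: {model.score(Xt, yt):.2f}")
# Typical output: group A around 0.95 or higher, group B close to 0.50 (chance).
```

This is why auditing training data and evaluating performance separately for each subgroup are essential steps before any clinical deployment.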
Of course, if an AI system is trained using good data, it can be incredibly accurate. It also does not suffer fatigue or human error. Therefore, the combination of accurate AI and human expertise could be a huge asset to healthcare.
Human involvement
AI is inevitably going to continue to grow, developing and improving over time. However, I feel it will always have limitations in the role it plays, and it will, or should, always require human involvement.
In recent years there has been a significant shift within the research field towards explainable AI (known as XAI): the creation of AI systems that are open to human understanding, interpretation and, crucially, evaluation. Previous AI has tended to operate in a 'black box' style, i.e. the user would input the data and the system would provide an output or decision, but how the system arrived at that decision was largely a mystery to the user.
Now developers are encouraged to create systems that are more transparent and allow the user to understand how the output is computed. The goal of explainable AI is that humans can critically evaluate these systems. It is vital that users can understand, and therefore check and verify, the output of AI systems; without this, we would not be able to detect errors.
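What might that look like in practice? One of the simplest illustrations, sketched below with hypothetical feature names and synthetic data, is an inherently interpretable model whose per-feature contributions to an individual prediction can be printed out and checked. Production XAI tooling (such as SHAP or LIME) extends the same principle to more complex models.

```python
# Minimal interpretability sketch: synthetic data, hypothetical feature names.
import numpy as np
from sklearn.linear_model import LogisticRegression

features = ["low_mood", "sleep_problems", "worry", "flashbacks", "panic_attacks"]
rng = np.random.default_rng(0)

# Toy training data in which the outcome is loosely driven by the
# first two features (severity scores on a 0-3 scale).
X = rng.integers(0, 4, size=(200, len(features))).astype(float)
y = (X[:, 0] + X[:, 1] + rng.normal(0, 1, size=200) > 4).astype(int)

model = LogisticRegression(max_iter=1000).fit(X, y)

# Explain one prediction: for a linear model, each feature's contribution
# to the log-odds is simply coefficient * value, so the 'why' is visible.
patient = np.array([3.0, 2.0, 0.0, 1.0, 0.0])
contributions = model.coef_[0] * patient
for name, c in sorted(zip(features, contributions), key=lambda t: -abs(t[1])):
    print(f"{name:15s} {c:+.2f}")
print("predicted probability:", round(model.predict_proba([patient])[0, 1], 2))
```

A clinician reviewing output like this can see at a glance which responses drove the score, and challenge the result if it conflicts with their professional judgement.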
In settings such as healthcare this verifiability is vital, as the accuracy of the system is paramount to patient care and safety. Explainable AI is also crucial for patients, not just healthcare professionals: many users may lack trust in automated systems and therefore be initially reluctant to adopt them. Being transparent about how systems operate is key to gaining user trust.
AI-based systems and chatbots will continue to be utilised, and offer the benefits I've discussed, but they should always be open to human evaluation to enable us to monitor system accuracy. Much as we would expect medical professionals to seek a second opinion where appropriate, we should expect our AI systems to be overseen by a human.
AI within healthcare has the potential to complement the vital role of medical professionals, and to facilitate and improve our healthcare services, provided it is developed and implemented correctly; this includes maintaining human involvement in medical diagnoses and decisions.