When you need to talk, will a chatbot suffice?

As mental health services struggle to meet demand, AI is being plugged in to fill the gaps. Danielle Olavario finds out how chatbots can help us to cope, as well as the tech’s limitations.

Let me start by painting a picture of what the mental health system looks like right now. In Ireland, only 6% of the overall health budget goes towards mental health. This is not new – historically, charities, community groups and civil society organisations have had to step in to plug the gap, playing a crucial role in delivering mental health services and raising public awareness of mental health.

In the UK, it’s just as bleak. Almost four months ago, NHS bosses declared mental healthcare a “national emergency”, citing overwhelmed services unable to handle the surge in post-COVID cases.

Amid this crisis, artificial intelligence (AI) has emerged as a promising ray of hope. With mental health services so often strained and underfunded, there is growing interest in how AI could support and enhance them. The pivotal question, however, remains: is interacting with a chatbot enough for a person struggling with their mental health?

Back in the 1960s, Joseph Weizenbaum, a computer scientist at MIT, created ELIZA, a program that simulated a psychotherapist. ELIZA was one of the earliest examples of a chatbot and, over the years, advances in AI have paved the way for more sophisticated applications in mental health.

Nowadays, we have Woebot, Wysa, Youper – the list goes on. These chatbots leverage AI to facilitate mood logging, offer support and provide advice. “The reason why they’re so useful is because they’re almost seen as non-judgemental,” said Dr Ben Cowan, a researcher at ADAPT, Ireland’s scientific research centre for AI-driven digital content.

“There’s no self-disclosure, which we see in the literature is seen very positively by users,” Ben added.

With no human on the other end, chatbots offer a judgement-free haven. Not only that, but they can be available on demand, 24/7, for users who might need them.

A first-of-its-kind study of Woebot’s app for adolescents, W-GenZD, showed positive results. It compared an agent-based digital mental health intervention with therapist-led intervention in a clinical setting. The participants, who were actively seeking outpatient mental health services, reported significant reductions in depressive symptoms. What’s more, the study reported that these reductions were statistically indistinguishable from those experienced in clinician-delivered therapy.

But despite these apparent advantages, tension persists between mental health professionals and the evolving role of AI applications. Ben said that mental health professionals acknowledge the utility of AI tools for tasks like mood reporting, but have reservations about them replacing the nuanced role of therapists. “Where it becomes a little bit ropey is if you’re looking for these agents to give clinical intervention-based advice,” he said. There are concerns that these agents might provide inaccurate information, give misleading advice, or fail to respond appropriately to the situation at hand.

Privacy concerns about the integration of AI into mental health practices also demand careful consideration. Algorithmic bias and the security of sensitive user data are paramount issues that necessitate stringent guidelines and legislative safeguards. “There may be private organisations who are involved in this,” said Ben, “and so what happens with the data, and how secure that data is, should be a concern for users.”

And then there’s the ethics of it. Is it ethical for a machine to simulate human emotion, to pretend to relate to people who are struggling emotionally, and to give them advice? Here, Ben emphasises the importance of considering the ethics of the interaction when designing these apps. “Simulating this kind of human emotion is probably not correct, because your agent doesn’t have that shared emotion or that shared experience that might be being looked at. So it’s important for designers to consider that whenever coming up with these applications.”

So how best to deal with mental health chatbots? Ben mentioned that a key consideration is having some level of control over what the chatbot will say. “A lot of the time, there are discussions internally among people who research this area about whether we should be looking at a mix between generative AI or something that’s a little bit more controlled, so that we know what the AI is going to say at particular moments in time,” he said. “That would help with the ability to trust its output.”
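For readers curious about what that mix might look like in practice, here is a minimal sketch in Python. It is entirely hypothetical – the keyword list, the scripted reply and the generative fallback are placeholders, not how Woebot, Wysa or any real product works – but it shows the kind of control Ben describes: on the highest-risk path, nothing is generated on the fly.

# A hypothetical hybrid chatbot: pre-approved, scripted responses for
# high-risk messages; a generative model only for everyday check-ins.
# All names, keywords and wording here are illustrative placeholders.

CRISIS_KEYWORDS = {"suicide", "self-harm", "hurt myself"}

SCRIPTED_CRISIS_REPLY = (
    "It sounds like you're going through something serious. "
    "I'm not able to help with that, but a trained professional can. "
    "Please contact your local emergency services or a crisis line."
)

def generative_reply(message: str) -> str:
    # Stand-in for a call to a language model; a real system would
    # constrain and review whatever the model produces.
    return "Thanks for checking in. How has your mood been today?"

def respond(message: str) -> str:
    lowered = message.lower()
    if any(keyword in lowered for keyword in CRISIS_KEYWORDS):
        # Controlled path: we know exactly what the bot will say.
        return SCRIPTED_CRISIS_REPLY
    # Lower-risk path: defer to the generative model.
    return generative_reply(message)

print(respond("I want to hurt myself"))  # always the scripted reply
print(respond("I had an okay day"))      # falls through to the model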

And for users? Ben said it’s important for them to reflect on how the chatbot is affecting their mental health. “If you feel that journalling, or reporting moods, or creating diary entries with a chatbot is helpful, then that application can really work for you at that particular moment in time,” he said.

But for a more intervention-based approach, Ben said it’s best to leave it to the professionals. “These things are not a replacement for that treatment, they’re generally an addition to that type of treatment, or they can help with mental wellbeing. If there’s something more serious in terms of going into serious medical levels of anxiety or medical levels of depression, it’s important to seek help from mental health professionals.”

However, that doesn’t mean we should fear the use of AI in mental health. In fact, it can be a very useful tool. But regulation is still needed, said Ben. “This is an area where AI can definitely do some major good, but we just need to make sure that we think through what the good and the bad things can be in these interactions, and legislate it.”

If you’re looking to improve your mental wellbeing, check out our upcoming live workshops on stress control, mindfulness, and holistic wellbeing.

Danielle Olavario
Danielle Olavario is a full-time social media expert, part-time writer, and life-long Sex and the City super fan. A digital native born in the Philippines, she has had a love-hate relationship with the internet since the dial-up days.
