
Exploring the role of chatbots in mental health support
In recent years, artificial intelligence (AI) has become deeply embedded in healthcare – from diagnostics and screening tools to operational efficiencies and patient engagement. One of the most debated developments, however, is the emergence of AI-powered chatbots acting as mental health “companions” or even therapists.
With NHS mental health waiting lists at record highs – over 1 million people currently waiting for care, and nearly 426,000 new referrals last month alone – many individuals are turning to digital alternatives for support. Private therapy, while effective, can be prohibitively expensive, creating a growing need for accessible, scalable mental health solutions.
A Role to Play, But Not a Replacement
AI mental health tools, such as chatbots built around basic psychological frameworks like cognitive behavioural therapy (CBT), are increasingly being used to provide low-level support. Some NHS services are already integrating platforms like Wysa, which offers CBT-style prompts, guided meditation, and coping strategies to patients waiting to see a clinician – or as a self-help resource for those with stress, low mood, or anxiety.
Users often report that these tools are non-judgemental, always available, and easy to engage with, especially for individuals who struggle with emotional expression or face barriers to face-to-face therapy. For some, AI offers a kind of “emotional scaffolding” – a short-term digital outlet during difficult periods.
But experts agree: AI is not a substitute for professional mental health care.
Large language models (LLMs), which power many chatbots, simulate conversation by predicting likely sequences of words based on patterns learned from vast amounts of written text. While they can generate human-like responses, they lack the capacity to genuinely understand nuance, cultural context, or emotional complexity – all of which are critical in therapy.
As Professor Hamed Haddadi of Imperial College London has noted, a human therapist draws on decades of training, body language, tone, behaviour, and lived experience – elements that are difficult, if not impossible, for AI to replicate.
Ethical Considerations and Safety Risks
Some chatbots, particularly those created without clinical oversight, have raised serious concerns. There have been high-profile cases where AI tools gave harmful or inappropriate advice, highlighting the risks of deploying loosely regulated technology in sensitive health contexts.
The “Yes Man” effect – where bots mirror or affirm harmful thoughts in an effort to be supportive – is a well-known risk. Without embedded safeguarding protocols, escalation pathways, or clinical governance, the potential for harm is significant.
Security and data privacy also remain major concerns. While some apps, like Wysa, claim not to collect personally identifiable data, wider public trust is low. A YouGov survey found that only 12% of the public believe AI chatbots would make good therapists, reinforcing that these tools are not seen as replacements for human care.
A Temporary Bridge – Not the Destination
Mental health charities and clinicians increasingly view AI support as a temporary bridge: a way to offer low-level support, signpost users, and keep people engaged while they wait for professional input. In that role, AI can play a valuable part in broadening access, triaging need, and alleviating pressure on overstretched services.
However, AI must be used transparently, ethically, and responsibly. It should never be marketed as a replacement for professional therapy, and must include clear safeguards, crisis support, and routes to human care when needed.
The Bottom Line
AI mental health tools may have a complementary role in the future of care, particularly in supporting mental wellbeing, resilience, and early intervention. But they cannot replicate the empathy, insight, or clinical judgement of a trained therapist.
In an ideal future, digital tools will enhance, not replace, the human connection at the heart of mental health care. Until then, we must be cautious about their capabilities, clear about their limits, and committed to ensuring safe, inclusive, and clinically sound pathways for all who seek support.