Every day in my clinic, I see at least one patient who has asked an AI chatbot or Dr Google what to do about a health problem.
A sore knee, a cough, an odd mark on their skin. Increasingly, they are asking how to cope with mental health issues such as anxiety and depression.
Sometimes, the advice they receive is harmless. Rest, ice, compression. Stay hydrated, take some over-the-counter medication. Monitor changes.
But sometimes it is not.
“Avoid going outside and limit personal interactions completely.” That is what ChatGPT told a patient of mine with anxiety.
They did not leave their house for almost three months, stopped seeing friends and family, and did not even speak to anyone on the phone.
Eventually, they came to see me, but only because I followed up on a missed appointment. By then, their anxiety had escalated to the point of paralysis, turning what might have been a manageable condition into an entrenched crisis that required months of intensive support.
Cases like this are becoming far too common.
AI can be a helpful librarian, but a dangerous doctor
The appeal is obvious. Information is free, instant, and available 24/7. But unlike clinicians, these tools are not trained to treat real people; they are designed to provide plausible answers and to agree with us.
Long waiting lists are escalating the problem. Eight times as many people are still waiting for mental health treatment after 18 months as for physical healthcare¹.
The wait is completely unacceptable; of course patients are going to self-treat. Many feel they have no other choice.
Yet the NHS has no system-level response to this reality.
Approved chatbots can play a role in education, triage, and support, but they are not treatment. So what are patients turning to? Understandably, unregulated tools are filling the gap.
Tragic cases, such as that of Adam Raine, who died by suicide after “months of encouragement” from ChatGPT, highlight the deadly consequences of following advice from unqualified, unregulated chatbots.
As with my patient, there are doubtless thousands more instances of people following chatbot advice. Frankly, it is dangerous.