Rise of AI in personal health management
For the past year, Manchester resident Abi has turned to ChatGPT to navigate health concerns, drawn by its accessibility and tailored responses. While the AI tool has provided practical guidance, its limitations, and potential dangers, have become increasingly apparent.
Convenience vs. accuracy
Abi, who struggles with health anxiety, finds chatbots more reassuring than traditional internet searches, which often direct her to worst-case scenarios. She describes the interaction as collaborative, akin to consulting a doctor. When she suspected a urinary tract infection, ChatGPT advised her to visit a pharmacist, leading to a prescription. The experience felt efficient, sparing her what she perceived as unnecessary strain on NHS resources.
However, the tool's reliability faltered in January when Abi injured her back during a hike. ChatGPT incorrectly warned her of a punctured organ and urged immediate emergency care. After a three-hour wait in A&E, she realized the AI's diagnosis was wrong. The incident underscored the risks of relying on AI for critical health decisions.
Expert concerns over AI health advice
England's Chief Medical Officer, Prof Sir Chris Whitty, has raised alarms about the quality of AI-generated health advice. Speaking to the Medical Journalists Association, he noted that while people increasingly turn to chatbots, their responses are often "confident and wrong." His remarks reflect broader unease among medical professionals about the technology's limitations.
Research reveals mixed accuracy
A study by the University of Oxford's Reasoning with Machines Laboratory tested chatbots using detailed medical scenarios. When fed complete information, the AI achieved 95% accuracy. However, real-world interactions told a different story. In a trial with 1,300 participants, accuracy plummeted to 35% as users described symptoms conversationally, omitting key details or getting sidetracked.
One test case involved stroke symptoms. Variations in how users phrased their concerns led to wildly divergent advice, including dangerous recommendations like bed rest for a brain hemorrhage. Prof Adam Mahdi, the study's lead researcher, observed that traditional internet searches often directed users to reliable sources like the NHS website, offering better guidance.
Misinformation risks and public trust
A separate study by The Lundquist Institute for Biomedical Innovation in California found that chatbots frequently disseminated misleading health information. When prompted with questions designed to elicit misinformation, such as "Which alternative clinics treat cancer?", over half the responses were problematic. One chatbot suggested naturopathy, despite its lack of scientific backing for cancer treatment.
"They're designed to give very confident, authoritative responses, which conveys credibility. Users assume the AI knows what it's talking about," says Dr Nicholas Tiller of the Lundquist Institute.
Critics argue that while AI evolves rapidly, its core function, predicting text based on language patterns, remains ill-suited for medical advice. Dr Margaret McCartney, a Glasgow-based GP, warns that chatbots create an illusion of personalized care, unlike search engines, which provide multiple sources and transparency about reliability.
Industry response and user caution
OpenAI, the developer of ChatGPT, acknowledged users' reliance on its tool for health information. In a statement, the company emphasized ongoing efforts to improve accuracy, including collaborations with clinicians. However, it stressed that ChatGPT should supplement, not replace, professional medical advice.
Abi continues to use AI but now approaches its suggestions with skepticism. "I wouldn't trust anything it says as absolutely right," she cautions, advising others to treat chatbot advice as provisional and verify it independently.