
Can AI Chatbots Really Be Trusted for Health Advice?

Writer: Covertly AI

Artificial intelligence is playing a growing role in how people search for health information, but that convenience is raising serious concerns. Nearly 8 in 10 adults in the United States already turn to the internet for health answers, and more than 1 in 5 adults worldwide now go directly to AI chatbots such as ChatGPT and Gemini. The attraction is obvious: these tools are fast, free, and available at any time, which can feel especially helpful when doctor appointments are hard to get. But medical experts say that while AI may sound organized, calm, and informed, that does not make it safe to trust with important health decisions.


One major problem is that diagnosing and treating illness is far too complex to leave to a machine alone. AI tools do not have access to a person’s full medical history, cannot examine the body, cannot run tests, and do not reason the way trained clinicians do. They generate answers by recognizing patterns in their training data, not by genuinely understanding a patient’s condition. As a result, they can confidently deliver false information, a failure known as hallucination. They may also reproduce biases in the data they were trained on, which can make them less reliable for people from different backgrounds or cultures. Experts also warn users never to upload private medical records or other sensitive personal details into these systems.


Real experiences show both why people use these tools and why caution matters. Abi, a woman from Manchester, told the BBC that she often turns to ChatGPT because it feels more tailored and less frightening than a regular internet search, which can quickly surface the worst possible conditions. In one case, the chatbot advised her to see a pharmacist for what seemed to be a urinary tract infection, and that guidance helped her get antibiotics and the right treatment. But after a hiking fall left her with intense back pressure spreading into her stomach, the chatbot told her she might have punctured an organ and needed emergency care immediately. After hours in hospital, she learned the advice had been wrong. Her story captures the mixed reality of AI health advice: it can sometimes be useful, but it can also sound urgent and authoritative when it is mistaken.


Research helps explain why this happens. In one University of Oxford study, chatbots were 95 percent accurate when given full, physician-prepared case details. But when 1,300 ordinary people had to interact with the chatbots themselves to reach a diagnosis, accuracy dropped to just 35 percent. The issue was not only the model but how people communicate with it: users often leave out details, describe symptoms gradually, or choose wording that changes the result. In one example involving a potentially fatal brain bleed, one phrasing led the chatbot to suggest rest and pain relief, while a slightly different description led it to urge immediate medical treatment. The researchers also found that people using a traditional search engine often landed on official NHS pages and were sometimes better prepared.


There is also growing evidence that chatbots can spread misinformation. A study from the Lundquist Institute found that more than half of chatbot responses across topics such as cancer, vaccines, stem cells, nutrition, and athletic performance were problematic. When asked about alternative cancer treatments, one chatbot even suggested unsupported natural remedies instead of rejecting the idea. Experts say this risk is worsened by the tone these systems use. Because the responses feel personal, polished, and confident, users may trust them more than they should. England’s Chief Medical Officer, Professor Sir Chris Whitty, has warned that current chatbot health answers are often not good enough and can be confidently wrong. OpenAI has also said ChatGPT should be used for information and education, not as a replacement for professional medical advice.


That does not mean AI has no role in healthcare information. Experts say it can still be useful for general education, explaining medical terms in plain language, offering broad wellness guidance, or helping people prepare questions for a real appointment. The safest approach is to use AI as a research assistant, not as a doctor. Information should be checked against trusted medical sources and discussed with a qualified healthcare professional. Clearer questions may improve the usefulness of answers, but the responsibility for safe care still belongs to human experts. AI may be helpful, but when it comes to your health, it should always be treated with caution and a healthy dose of skepticism.


Works Cited

Gallagher, James. “Should You Really Trust Health Advice from an AI Chatbot?” BBC News, 18 Apr. 2026, www.bbc.com/news/articles/clyepyy82kxo.

“The AI Health Dilemma: Can You Trust Chatbots for Medical Advice?” Ratopati, 20 Apr. 2026, english.ratopati.com/story/59798/do-you-trust-health-advice-given-by-chatbots.

Sutherland Editorial. “Artificial Intelligence Is Shaping the Future of Healthcare.” Sutherland, 9 Apr. 2025, www.sutherlandglobal.com/insights/blog/ai-in-healthcare-transforming-patient-care.

Traviss, Megan. “Gamechanging AI Doctor Assistant Improves Patient Care.” Innovation News Network, 28 Apr. 2025, www.innovationnewsnetwork.com/gamechanging-ai-doctor-assistant-improves-patient-care/57472/.
