
Google Removes AI Search Feature After Dangerous Health Mistakes

  • Writer: Covertly AI

Google has removed some of its AI Overviews for specific health-related searches after an investigation by the Guardian found that the feature was providing misleading and potentially dangerous medical information.



AI Overviews are generative AI summaries that appear at the top of Google Search results and are designed to give users quick snapshots of information. While Google has repeatedly described these summaries as helpful and reliable, the investigation raised serious concerns about their accuracy when applied to sensitive medical topics, particularly blood test results related to liver health (TechCrunch; The Guardian; Yahoo News).


One of the most concerning examples involved searches such as “what is the normal range for liver blood tests” and “what is the normal range for liver function tests.” According to the Guardian, Google’s AI Overviews presented long lists of numerical ranges without sufficient context and failed to account for important variables such as a patient’s age, sex, ethnicity, or nationality. Medical experts warned that these omissions could lead people with serious liver disease to believe their test results were normal and skip essential follow-up care. In response to these findings, Google removed AI Overviews for those exact search terms, a move welcomed by health professionals who described the original summaries as “dangerous” and “alarming” (The Guardian).



However, the investigation also found that the problem was not fully resolved. Slight variations of the same queries, such as “lft reference range” or “lft test reference range,” continued to trigger AI-generated summaries in some cases. Liver function tests are complex and involve multiple enzymes and proteins, and experts emphasized that interpreting the results requires clinical context, not just comparing numbers. The AI Overviews often bolded test names and values, which could give users false confidence that they were reading definitive guidance. Crucially, the summaries did not warn that people can have normal-looking results while still having serious liver disease, a gap that could result in harmful false reassurance (The Guardian).


Google said it does not comment on individual removals within Search but stated that it works to make broad improvements when issues are identified. A company spokesperson explained that an internal team of clinicians reviewed the examples provided by the Guardian and concluded that in many cases the information was not inaccurate and was supported by high-quality websites. Google also reiterated that AI Overviews are shown only when the company has high confidence in the quality of the response and that their accuracy is continuously measured across categories, including health. Still, critics argue that even occasional errors are unacceptable when medical decisions may be influenced (TechCrunch; The Guardian).



The Guardian’s reporting also highlighted confusion between AI Overviews and featured snippets. For the removed liver-related searches, Google now displays featured snippets instead of AI summaries. These snippets are not AI-generated but are pulled directly from external websites. In this case, Google extracted numerical ranges for liver enzymes such as ALT, AST, and ALP from Max Healthcare, a for-profit hospital chain based in India. While featured snippets carry their own limitations, experts noted that they are fundamentally different from AI Overviews because they do not synthesize or reinterpret information, reducing the risk of hallucinated or invented medical advice (Yahoo News).


Health advocates say the removals are a positive first step but far from sufficient. Vanessa Hebditch of the British Liver Trust and Sue Farrington of the Patient Information Forum both stressed that many inaccurate AI Overviews remain active, including summaries related to cancer and mental health that experts have called completely wrong. With Google controlling roughly 91 percent of the global search engine market, critics argue that the company has a responsibility to ensure that health-related AI features prioritize safety and direct users to trusted, evidence-based medical sources. As concerns grow about AI systems hallucinating false information, the debate over whether generative AI belongs at the top of health searches is likely far from over (The Guardian; Yahoo News).


This article was written by the Covertly.AI team. Covertly.AI is a secure, anonymous AI chat that protects your privacy. Connect to advanced AI models without tracking, logging, or exposure of your data. Whether you’re an individual who values privacy or a business seeking enterprise-grade data protection, Covertly.AI helps you stay secure and anonymous when using AI. With Covertly.AI, you get seamless access to all popular large language models - without compromising your identity or data privacy.


Try Covertly.AI today for free at www.covertly.ai, or contact us to learn more about custom privacy and security solutions for your business.  



Works Cited


TechCrunch. “Google Removes AI Overviews for Certain Medical Queries.” TechCrunch, 11 Jan. 2026, https://techcrunch.com/2026/01/11/google-removes-ai-overviews-for-certain-medical-queries/.


The Guardian. “‘Dangerous and Alarming’: Google Removes Some of Its AI Summaries After Users’ Health Put at Risk.” The Guardian, 11 Jan. 2026, https://www.theguardian.com/technology/2026/jan/11/google-ai-overviews-health-guardian-investigation.


Yahoo News. “Google Removes Some Health Related Questions from Its AI Overviews Following Accuracy Concerns.” Yahoo News Canada, 11 Jan. 2026, https://ca.news.yahoo.com/google-removes-health-related-questions-115941311.html.


Arora, Dipti. “Google AI Overviews Found to Deliver Misleading Health Advice.” Stan Ventures, 5 Jan. 2026, https://www.stanventures.com/news/google-ai-overviews-found-to-deliver-misleading-health-advice-6560/.

