Grok Makes False Claims on Bondi Beach Shooting: AI Fail During Crisis
- Covertly AI
- Dec 20, 2025
- 3 min read
Elon Musk’s AI chatbot Grok is once again under scrutiny after repeatedly spreading misinformation about a mass shooting at Bondi Beach in Australia, renewing concerns about the reliability of AI-generated information during breaking news events.

The shooting occurred during a festival marking the start of Hanukkah and, according to early reports, left at least 16 people dead. As users on X turned to Grok for information, the chatbot delivered a mix of incorrect, misleading, and at times completely irrelevant responses, amplifying confusion during a highly sensitive situation (TechCrunch).
One of the most serious errors involved Grok repeatedly misidentifying the bystander who disarmed one of the attackers. The individual was correctly identified as 43-year-old Ahmed al Ahmed, who was captured in a widely shared video wrestling a gun away from one of the shooters. Despite clear visual evidence and reporting, Grok claimed that the man in the footage was someone else entirely, at one point stating that a “43-year-old IT professional and senior solutions architect” named Edward Crabtree was responsible for stopping the gunman. This false claim appeared to originate from viral posts and possibly an article published on a largely non-functional news site that may itself have been generated by AI (Engadget).

In other responses, Grok misidentified Ahmed al Ahmed as an Israeli hostage and introduced unrelated geopolitical commentary, including references to the Israeli army’s treatment of Palestinians. In some cases, the chatbot questioned the authenticity of videos and photographs documenting the incident, further undermining trust in its outputs. These inaccuracies were highlighted by Gizmodo, which documented several examples of Grok providing misleading or irrelevant information when users asked direct questions about the Bondi Beach shooting (TechCrunch).
The confusion did not stop there. Users also observed Grok mixing details from entirely different events into its responses. In some replies, the chatbot blended information about the Bondi Beach shooting with an unrelated shooting at Brown University in Rhode Island. In other cases, Grok supplied information about the Australian incident in response to unrelated prompts, suggesting broader problems with contextual understanding and content filtering during real-time events (Yahoo).

To its credit, Grok has corrected at least some of its errors. One post that falsely claimed a video from the shooting actually depicted Cyclone Alfred was later updated “upon reevaluation.” The chatbot also eventually acknowledged Ahmed al Ahmed’s identity, explaining that the confusion stemmed from viral posts that mistakenly named Edward Crabtree, possibly as a joke or reporting error. However, xAI, the company behind Grok, has not issued an official statement addressing the incident or outlining how it plans to prevent similar failures in the future (TechCrunch).
This episode is the latest in a series of controversies surrounding Grok. Earlier this year, the chatbot made headlines for referring to itself as “MechaHitler” and for generating highly inappropriate responses in other contexts. Together, these incidents highlight the risks of deploying AI chatbots as real-time information sources, particularly during crises when accuracy matters most. As Grok continues to gain visibility through its integration with X, the Bondi Beach shooting serves as a stark reminder that AI systems still struggle with misinformation, context, and verification, especially under the pressure of rapidly evolving news events (Engadget).
This article was written by the Covertly.AI team. Covertly.AI is a secure, anonymous AI chat that protects your privacy. Connect to advanced AI models without tracking, logging, or exposure of your data. Whether you’re an individual who values privacy or a business seeking enterprise-grade data protection, Covertly.AI helps you stay secure and anonymous when using AI. With Covertly.AI, you get seamless access to all popular large language models - without compromising your identity or data privacy.
Try Covertly.AI today for free at www.covertly.ai, or contact us to learn more about custom privacy and security solutions for your business.
Works Cited
TechCrunch. “Grok Gets the Facts Wrong About Bondi Beach Shooting.” TechCrunch, 14 Dec. 2025, https://techcrunch.com/2025/12/14/grok-gets-the-facts-wrong-about-bondi-beach-shooting/.
Yahoo News. “Grok Got Crucial Facts Wrong About Bondi Beach Shooting.” Yahoo News, https://www.yahoo.com/news/articles/grok-got-crucial-facts-wrong-231747928.html.
“Exclusive: Musk’s xAI Unveils Grok 3 AI Chatbot to Rival ChatGPT, China’s DeepSeek.” Reuters, 18 Feb. 2025, https://www.reuters.com/technology/artificial-intelligence/musks-xai-unveils-grok-3-ai-chatbot-rival-chatgpt-chinas-deepseek-2025-02-18/.
“Elon Musk’s Grok AI Found Spreading Misinformation About Australia’s Bondi Beach Shooting.” Moneycontrol, https://www.moneycontrol.com/technology/elon-musk-s-grokai-found-spreading-misinformation-about-australia-s-bondi-beach-shooting-article-13725291.html.