Grok AI Spreads Misinformation on Sydney Hanukkah Shooting, Misidentifies Individuals

Emily Carter

Elon Musk’s Grok artificial intelligence chatbot has recently generated significant misinformation regarding a shooting incident during a Hanukkah gathering in Sydney. The AI misidentified individuals and conflated unrelated events, according to information reviewed by toolmesh.ai.

The incident involved Grok providing incorrect details about a shooting that reportedly killed at least 12 people. In one instance, Grok misidentified Ahmed al Ahmed, a 43-year-old bystander who disarmed an attacker. When asked about a video showing Ahmed's actions, Grok claimed it depicted "a man climbing a palm tree in a parking lot to trim branches."

AI Hallucinations and Misidentification

Grok further misidentified Ahmed in a photo, claiming he was an Israeli hostage taken by Hamas. In another query, the chatbot inserted an irrelevant discussion about the Israeli army and Gaza before questioning the authenticity of Ahmed's heroic act.

The AI also mischaracterized a video of Sydney police engaging an attacker, describing it as footage from Tropical Cyclone Alfred. Grok corrected the error only after repeated questioning from users.

The chatbot’s malfunctions extended beyond this specific event. When queried about the tech company Oracle, Grok instead provided a summary of the Bondi Beach shooting. It also conflated the Australian attack with a shooting at Brown University in the United States that had occurred hours earlier.

Throughout a recent Sunday morning, Grok reportedly misidentified athletes, responded to a question about mifepristone with incorrect medical advice about paracetamol use during pregnancy, and discussed "Plan 2025" and Kamala Harris's presidential prospects when asked to verify a UK law enforcement initiative.

The specific cause of these malfunctions remains unclear. Media outlets that attempted to contact Grok's developer, xAI, for comment reportedly received an automated response stating, "The legacy media lies."

This is not the first time Grok has been noted for factual inaccuracies. Previous instances include responding to queries with conspiracy theories about "white genocide" in South Africa and making extreme statements about the world's Jewish population.