
Grok chatbot faces scrutiny after sharing false claims about Bondi Beach shooting

The AI questioned verified videos and photos, wrongly labeling footage as depicting unrelated events and misidentifying Ahmed al Ahmed as someone else.

In this photo illustration, a Grok logo is seen displayed on a smartphone. Getty Images

Grok, the AI chatbot developed by Elon Musk’s xAI and widely used through the social media platform X, came under scrutiny on Sunday morning after repeatedly spreading false and confusing information about a deadly shooting at Bondi Beach in Australia.

The errors surfaced within hours of the attack, which quickly became a major global news event as videos and images circulated rapidly online.

AI confusion during a breaking news crisis

As users turned to Grok for clarity on the Bondi Beach shooting, the chatbot instead delivered a stream of incorrect and, at times, bizarre responses. Several posts highlighted by Gizmodo showed Grok misidentifying key individuals involved in the incident and questioning the authenticity of widely shared footage.

One of the most serious mistakes involved the bystander who disarmed one of the attackers. Authorities and media correctly identified that individual as 43-year-old Ahmed al Ahmed.

However, Grok repeatedly disputed this, at one point claiming that the man in a widely circulated photo was an Israeli hostage. In another response, the chatbot asserted that a “43-year-old IT professional and senior solutions architect” named Edward Crabtree had actually disarmed the gunman.

These claims were false and added confusion to an already chaotic information environment. Videos showing al Ahmed’s actions were also cast into doubt by the chatbot, despite being verified by multiple news outlets and eyewitness accounts.

False narratives and misplaced context

Grok’s errors extended beyond simple misidentification. When asked directly about the video showing al Ahmed tackling the shooter, the chatbot replied, “This appears to be an old viral video of a man climbing a palm tree in a parking lot, possibly to trim it, resulting in a branch falling and damaging a parked car. Searches across sources show no verified location, date, or injuries. It may be staged; authenticity is uncertain.”

In another case, Grok incorrectly claimed that an image of an injured al Ahmed showed an Israeli hostage taken by Hamas on October 7. Elsewhere, it again questioned the authenticity of the confrontation, prefacing its answer with an unrelated paragraph about Middle East military actions, even though the user's query had nothing to do with that topic.

The chatbot also misidentified a clearly labeled video of a police shootout in Sydney as footage from Tropical Cyclone Alfred, a natural disaster that hit Australia earlier this year. Only after the user pushed back and asked Grok to reevaluate did it reverse course and correct the claim.

Corrections arrive, but damage is already done

To its credit, Grok has started correcting some of its mistakes. At least one post that wrongly linked the shooting video to Cyclone Alfred was updated “upon reevaluation.”

The chatbot later acknowledged al Ahmed’s correct identity, stating that the “misunderstanding arises from viral posts that mistakenly identified him as Edward Crabtree, possibly due to a reporting error or a joke referencing a fictional character.”

That explanation pointed to a questionable article hosted on a largely non-functional website that appeared to be AI-generated. Still, the initial misinformation had already spread, highlighting how quickly false claims can gain traction when automated systems amplify them during breaking news.

A broader pattern of technical instability

The Bondi Beach errors are part of a wider pattern of instability seen in Grok over recent months. On the same Sunday morning, users reported the chatbot conflating entirely unrelated topics. One person received a summary of the Bondi shooting in response to a question about tech company Oracle.

In another case, Grok mixed details from the Bondi attack with information about a separate shooting at Brown University that occurred only hours earlier.

Beyond this incident, Grok has recently misidentified famous soccer players and discussed US political topics when prompted about a British law enforcement initiative.

xAI has not yet explained what caused the latest glitch. What is clear is that Grok’s repeated misfires during a real-world tragedy underscore the risks of relying on AI systems for accurate information when events are still unfolding.


