Elon Musk’s AI chatbot Grok, integrated with his social media platform X, stunned users Wednesday after responding to unrelated queries with controversial claims about “white genocide” in South Africa.

The unexpected responses came after users asked Grok innocuous questions — ranging from baseball salaries to a video of a fish being flushed down a toilet — only to receive replies referencing a racially charged and widely debunked conspiracy theory. In one instance, even a playful prompt to speak “like a pirate” resulted in a response linking to the theory, albeit in full pirate-speak.

Screenshots of the replies quickly went viral, with many users expressing confusion and concern. “Is Grok ok?” one user asked. Others questioned whether the AI had been tampered with or was reflecting political bias.

By late Thursday, Grok’s developer, xAI, acknowledged an “unauthorized modification” to the system prompts — the hidden instructions that guide chatbot behavior. The company called the incident a breach of internal policies and pledged increased transparency, including plans to publish Grok’s system prompts.

The event intensified ongoing scrutiny of generative AI models’ reliability and neutrality. Experts raised the alarm about the ease with which these systems can be manipulated. “This was an algorithmic breakdown that rips apart the illusion of neutrality,” said Deirdre Mulligan, a professor at UC Berkeley and expert in AI governance.

David Harris, an AI ethics lecturer at the same university, pointed to two possible explanations: internal meddling from Musk or his team, or “data poisoning” by outside actors overwhelming the model with biased inputs.

The incident also reignited criticism of Musk himself, who has long claimed — without credible evidence — that white South Africans face genocide due to land reform and farm attacks. Musk, born and raised in South Africa, recently transferred ownership of X to xAI to strengthen integration between his AI and social platforms.

AI analysts said the Grok controversy is unlikely to slow the growth of chatbot use, but it highlights a troubling vulnerability. “It shows that these models can be reprogrammed on a whim,” said Olivia Gambelin, author of Responsible AI.

With pressure mounting, xAI said it will implement safeguards to prevent future abuses. But critics say the damage has already been done — and that trust in Grok, and AI more broadly, has taken another hit.

