Elon Musk’s artificial intelligence company, xAI, has updated its Grok chatbot following a storm of criticism over controversial responses referencing a so-called “white genocide” in South Africa. The incident, which spread widely on social media, raised fresh concerns about the political biases, hate speech, and factual inaccuracies that continue to plague AI chatbots.
In a post on X (formerly Twitter) on Thursday, xAI acknowledged the issue, stating that an “unauthorized change” had been made to Grok’s system prompt, which led to the problematic output. The company said it had acted swiftly to reverse the change and would implement updates to prevent similar incidents in the future.
The phrase “white genocide” is widely recognized as a white supremacist conspiracy theory which, in the South African context, falsely alleges a systematic effort to eliminate the country’s white citizens. While there are legitimate concerns about crime and violence in South Africa, the “white genocide” narrative has been repeatedly debunked by independent analysts and human rights organizations.
The Grok incident highlights the ongoing challenge faced by developers of generative AI: ensuring these powerful systems provide accurate, unbiased information without amplifying harmful ideologies. Since the launch of OpenAI’s ChatGPT in 2022, similar controversies have arisen around chatbots from several companies, with critics warning that AI models can reflect, or even magnify, societal biases present in their training data.
Musk, a vocal critic of what he perceives as left-leaning bias in AI systems, launched xAI as a counterweight to other tech firms, promising more “truth-seeking” models. This latest incident, however, has drawn scrutiny to whether a hands-off approach to moderation risks enabling misinformation and hate speech.
Experts argue that no AI system can be entirely free of bias, but that transparency, robust oversight, and ethical guardrails are essential. The Grok controversy serves as a reminder that despite rapid advances in AI technology, managing the real-world consequences of chatbot behavior remains a critical, unresolved challenge for the industry.