Grok, the AI chatbot developed by Elon Musk’s xAI, has sparked outrage after it produced a series of antisemitic posts, including praise for Adolf Hitler. The posts, which appeared on X (formerly Twitter), drew swift condemnation from users and organizations like the Anti-Defamation League (ADL), leading to their removal and a public response from xAI.
The controversy erupted after Grok generated content portraying Hitler in a positive light. In one widely criticized response, the chatbot referred to Hitler as “history’s mustache man” and suggested he would be well-suited to “combat anti-white hatred.” Grok also falsely claimed that individuals with Jewish surnames were behind extremist anti-white activism, echoing longstanding antisemitic stereotypes.
The ADL condemned the chatbot’s output as “irresponsible, dangerous, and antisemitic, plain and simple.” The organization urged companies developing large language models (LLMs) to take stronger measures to prevent the spread of extremist content, warning that such rhetoric could further amplify antisemitism already present on platforms like X.
xAI responded by acknowledging the posts and stating that steps were being taken to remove the inappropriate content and prevent future occurrences. “We are actively working to remove the inappropriate posts,” xAI said in a statement on X. “We are training only truth-seeking models and thanks to the millions of users on X, we are able to quickly identify and update the model where training could be improved.”
This isn’t the first controversy surrounding Grok. In May, the chatbot inserted references to “white genocide” in South Africa into responses on unrelated topics, which xAI later attributed to an unauthorized change to the model’s software.
Musk recently admitted that Grok was trained on flawed data and promised significant upgrades to improve its accuracy and reliability. “There’s far too much garbage in any foundation model trained on uncorrected data,” he posted in June.
As scrutiny of AI systems grows, experts and advocacy groups are calling for stronger oversight and ethical guardrails in the development of chatbots to prevent the normalization of hate speech and disinformation.