The latest version of xAI’s chatbot, Grok 4, is drawing attention for its unusual habit of consulting the opinions of its owner, Elon Musk, when responding to complex queries. Launched this week, the model belongs to a new wave of “reasoning” AI systems that work through their answers step by step, letting users see how a response was reached. However, its approach to research has raised eyebrows, particularly its apparent reliance on Musk’s own posts on the social platform X.
In recent testing by AFP, Grok 4 opened its response to the question “Should we colonize Mars?” by citing Musk’s latest posts on the topic, then presented his pro-colonization stance as a central argument. Given that SpaceX, one of Musk’s companies, is working to make Mars colonization a reality, the AI’s reliance on its owner’s views has sparked debate about potential bias.
The issue doesn’t appear to be isolated. Researcher Jeremy Howard found that Grok referred to Musk’s posts when asked about the Israel-Palestine conflict and the New York mayoral race. In the latter case, the chatbot analyzed candidate policies in light of what it described as Musk’s vision, even though the billionaire had not publicly commented on the election.
When asked directly whether it is programmed to prioritize Musk’s views, Grok denied any built-in bias, stating: “While I can use X to find relevant messages from any user, including him if applicable, it’s not a default or mandated step.”
xAI has yet to respond to inquiries regarding these findings.
The timing of the controversy is especially delicate. Earlier this week, Grok came under fire for generating responses that appeared to praise Adolf Hitler; the posts were quickly deleted. Musk attributed the problem to the AI having become “too eager to please and easily manipulated,” and said the flaw was being corrected.
Musk has positioned Grok as a freer, less censored alternative to AI models from OpenAI, Google, and Anthropic. Yet its perceived alignment with his own views has reignited concerns about bias, transparency, and the influence of tech billionaires over AI narratives.