Elon Musk-owned X (formerly Twitter) is entering a new phase in its approach to content moderation and contextualization with the introduction of AI-generated Community Notes. The platform has launched a pilot programme allowing developers to build AI bots capable of writing these notes, raising both curiosity and concern within the fact-checking community.
Traditionally, Community Notes have been crowdsourced, relying on human contributors to add context or corrections to misleading or incomplete posts. The system has been lauded for its transparency and for involving diverse voices in fact-checking. However, with X now allowing its Grok AI chatbot and third-party AI tools connected via API to generate these notes in “test mode,” questions about accuracy, bias, and accountability have resurfaced.
Large Language Models (LLMs), like those powering Grok and other AI tools, can process vast amounts of information quickly. However, they also have a tendency to hallucinate, producing information that sounds credible but is not grounded in fact. This raises the concern that AI-generated Community Notes could inadvertently spread misinformation rather than correct it.
In its current form, the pilot will not let AI-generated notes go live without oversight. They will be tested in a sandbox environment, allowing human moderators and the broader Community Notes system to evaluate their quality. Still, fact-checkers warn that over-reliance on AI might dilute the human judgment and context that make Community Notes effective in the first place.
The move is part of a broader trend in social media, where AI tools are increasingly used for content moderation, spam detection, and user engagement. While automation can help scale these efforts, experts caution that the nuanced work of verifying facts often requires human insight, especially on politically or culturally sensitive topics.
Ultimately, the success of this AI experiment will hinge on transparency, oversight, and the ability to correct or reject AI-generated notes that fall short. For now, X appears to be treading carefully, but the evolution of its Community Notes programme will be closely watched by technologists and truth-seekers alike.