One of the maxims of the AI world is "garbage in, garbage out." In other words, if you train artificial intelligence on lies or data of questionable origin, the content it generates will reproduce those same patterns. The same goes for the web sources its answers draw on. This reality became clear after a newspaper investigation discovered that the latest ChatGPT model, GPT-5.2, had started using Grokipedia as an authoritative source when answering user queries.
Grokipedia, the AI-generated online encyclopedia launched by Elon Musk last October, has been criticized for spreading controversial narratives and for its lack of direct human editing. In testing, ChatGPT cited this source nine times when responding to sensitive topics, including political structures in Iran and biographies related to Holocaust deniers. For example, the chatbot reproduced claims from Grokipedia about the historian Sir Richard Evans that the British newspaper itself had previously debunked.
The danger of subtle misinformation
The most worrying thing is not that the AI hallucinates, but that it lends validity to unreliable sources. The analysis showed that ChatGPT did not cite Grokipedia on topics where misinformation is obvious and easy to filter, such as the January 6 insurrection in the United States. However, information from Musk's encyclopedia slipped through on more obscure or niche topics, where safety filters are laxer. This creates a vicious cycle of validation: if ChatGPT cites Grokipedia, users may wrongly assume it is a verified, reliable source.
Security experts warn of a phenomenon known as LLM grooming, in which bad actors generate huge volumes of misinformation so that chatbots absorb these falsehoods during training or while searching the web. Disinformation researcher Nina Jankowicz points out that Grokipedia often relies on unreliable sources and that its inclusion in ChatGPT replies legitimizes those biases.
While OpenAI maintains that its search engine attempts to draw data from a wide range of sources and applies safety filters, the response from xAI, owner of Grokipedia, to the controversy was brief and blunt: "Mainstream media lies."
