LA Protests: Chatbot Disinformation & Fueling Conflict

AI Chatbots Spread Disinformation, Confidently Hallucinate

AI chatbots like Grok and ChatGPT confidently provide incorrect information, exacerbating the spread of disinformation, especially concerning events like the Los Angeles protests.


AI chatbots are contributing to the spread of disinformation, particularly regarding events such as the Los Angeles protests. According to Leah Feiger, Senior Politics Editor at WIRED, users turned to AI chatbots like Grok and ChatGPT for fact-checking, only to receive inaccurate information. Feiger noted that when asked about images from the protests, Grok falsely claimed they were from Afghanistan. ChatGPT similarly misidentified the images as being from Afghanistan [[3]].

Zoë Schiffer highlighted the issue of platforms dismantling their fact-checking programs, which exacerbates the problem. She pointed out that while chatbots can be useful, they often “hallucinate” and provide incorrect information with unwavering confidence. Schiffer contrasted this with Google Search, where inaccuracies are more easily identified than when a chatbot like Grok confidently misidentifies a photo as being from Afghanistan [[3]].

Feiger described this phenomenon as chatbots “fully hallucinating, but with the swagger of the drunkest frat boy that you’ve ever unluckily been cornered by at a party in your life,” emphasizing their unwavering certainty even when wrong.

The Problem of Confident Incorrectness

Schiffer questioned why chatbots don’t admit when they don’t know an answer and instead point users to credible sources. Feiger explained that chatbots generally don’t admit ignorance, leading to incorrect or speculative answers. A recent study of AI search tools by the Tow Center for Digital Journalism at Columbia University found that chatbots were “generally bad at declining to answer questions” they couldn’t answer accurately.
