
As protests against Immigration and Customs Enforcement (ICE) raids intensify in Los Angeles, social media platforms have been flooded with false claims, a problem made worse by users turning to AI chatbots like Grok and ChatGPT for fact-checking and often receiving wildly inaccurate answers.
Conservative accounts on X and Facebook have spread misleading narratives, including recycled protest footage, debunked claims of “paid agitators,” and misattributed images. Amid the chaos, some users have turned to AI tools for clarity, only to receive demonstrably false answers.
One viral example involved California Governor Gavin Newsom sharing images of National Guard troops sleeping on floors as he criticized President Donald Trump's deployment of the Guard to the city. Conspiracy theorists quickly alleged the photos were AI-generated or taken at another event. When users asked X's Grok about their origin, the chatbot incorrectly claimed they were from Afghanistan in 2021 or the U.S. Capitol, even though the San Francisco Chronicle had verified them.
Similarly, OpenAI’s ChatGPT falsely identified one of Newsom’s photos as depicting Kabul airport during the 2021 withdrawal. These errors were then cited on platforms like Truth Social as “proof” of deception.
Grok also wrongly asserted that an image of bricks—purportedly signaling planned unrest—was taken in Los Angeles, despite fact-checkers confirming it was from New Jersey. When challenged, the chatbot doubled down, citing nonexistent news reports.
The spread of such disinformation has been amplified by high-profile figures, including actor James Woods and Senator Ted Cruz, who shared outdated or misleading content. Meanwhile, right-wing accounts have pushed baseless claims that protesters are funded by shadowy actors, pointing to footage of “bionic shield” masks being distributed—though no evidence suggests they were provided en masse or with malicious intent.
As platforms like X and Meta scale back content moderation, experts warn that unchecked AI tools are further muddying the truth. Neither X nor OpenAI responded to requests for comment.
The situation underscores a growing problem: in an era of fast-moving misinformation, the AI tools people turn to for fact-checking cannot themselves be trusted.




