Reddit filed a lawsuit on Wednesday against artificial intelligence company Anthropic, accusing it of illegally harvesting the comments of millions of users to train its Claude chatbot without permission.

The social media platform alleges that Anthropic used automated bots to scrape Reddit’s content despite being asked to stop, and intentionally trained its AI on users’ personal data without obtaining consent. The lawsuit, filed in California Superior Court in San Francisco, marks the latest clash between tech companies over the use of online content to fuel AI development.

“AI companies should not be allowed to scrape information and content from people without clear limitations on how they can use that data,” Ben Lee, Reddit’s chief legal officer, said in a statement.

Anthropic, founded by former OpenAI executives, denied the allegations. “We disagree with Reddit’s claims and will defend ourselves vigorously,” the company said in a statement.

Reddit has previously struck licensing deals with Google, OpenAI, and other firms, allowing them to train AI models on its vast trove of user-generated posts while enforcing privacy protections. These agreements, Lee said, help safeguard user data and prevent misuse.

The lawsuit does not accuse Anthropic of copyright infringement but instead focuses on alleged violations of Reddit’s terms of service and unfair competition. The case highlights growing tensions as AI firms increasingly rely on publicly available web content—from Wikipedia to forums like Reddit—to develop their systems.

Anthropic, a key rival to OpenAI, has faced legal scrutiny before. Last year, major music publishers sued the company, claiming Claude reproduced copyrighted song lyrics. In response, Anthropic has argued that its training methods constitute lawful use of publicly available data.

The outcome of the case could set a precedent for how AI companies access and utilize online content in the rapidly evolving landscape of generative AI.
