Anthropic settles authors’ class-action over AI training on copyrighted works
Anthropic has reached a settlement in a class-action lawsuit brought by a group of book authors; the financial terms were not disclosed. The case had raised the prospect of enormous statutory damages after a judge found the company had acquired copyrighted works without payment — even while recognizing that training large language models can in some circumstances constitute fair use.
Background
In June, U.S. District Judge William Alsup issued a mixed ruling: he held that using copyrighted material to train LLMs could qualify as fair use, but that Anthropic's acquisition of pirated copies of certain works without payment left it exposed to infringement claims. Statutory damages for copyright infringement start at $750 per infringed work (and can run far higher for willful infringement), and plaintiffs alleged a library of roughly 7 million works, creating potential multibillion-dollar exposure.
Why the settlement matters
- The deal avoids a potentially massive damages ruling and may shape future litigation strategies around AI training data.
- Litigation over AI and copyright is still developing; settlements like this could set practical precedents even if they aren’t legal precedents from appeals courts.
- Anthropic has faced related claims before, including a 2023 lawsuit brought by music publishers that was partially resolved earlier this year.
Next steps
The settlement requires court approval; a fairness hearing before Judge Alsup was reportedly expected on September 8, 2025. Details such as how many class members will file claims and what payouts, if any, individual authors will receive have not been disclosed.
Sources and further reading
For more detail, see TechCrunch's report, "Anthropic settles AI book training lawsuit".
Industry analysis: Publishers Marketplace.