Meta AI fair use ruling and what it could mean for ANI’s lawsuit against OpenAI before the High Court of Delhi
- Kiratraj Sadana
- Jul 1, 2025
- 4 min read
On 25 June 2025, the US District Court for the Northern District of California granted summary judgment for Meta in Kadrey v Meta Platforms. Judge Vince Chhabria found that copying more than 190,000 books to train Meta’s large language model (LLM) was protected by fair use under section 107 of the US Copyright Act.
Although dozens of AI-copyright suits are pending, this is one of the first merits rulings to squarely bless unlicensed training. It arrives just as investors, policymakers and creators are asking whether AI labs must pay for the data that powers their models.

Background
Who sued? Authors Richard Kadrey, Christopher Golden, comedian Sarah Silverman and others, backed by the Authors Guild.
What they alleged: Meta scraped their books from shadow libraries, duplicated them verbatim during training and thereby committed wholesale piracy.
Meta’s response: Training is “statistical learning,” not verbatim republication, and therefore highly transformative. Any direct copying is temporary and invisible to end-users.
Judge Chhabria’s fair-use analysis
The Meta AI fair use ruling turns on Judge Chhabria’s methodical walk-through of the four statutory factors. First, he found the purpose and character of Meta’s copying to be “exceedingly transformative”: the Llama training process converts expressive prose into numerical weights that cannot recreate the original passages, allowing the model to learn from the books rather than republish them. Second, although novels and non-fiction are undeniably creative works, a point that usually cuts against a fair-use defence, the judge treated that creativity as neutral in this context because the transformative purpose dominated. Third, he held that wholesale copying of entire books was permissible since complete ingestion was “reasonably necessary” to achieve the new, analytic objective, echoing the Second Circuit’s reasoning in the Google Books litigation (Authors Guild v Google). Finally, and most decisively, he concluded that the plaintiffs had offered “no meaningful evidence” of market harm: they neither showed reduced sales nor identified a functioning or nascent licensing market that Meta’s activities might displace. In short, without proof of economic injury, the balance of factors tipped squarely in Meta’s favour.
A win, but not a blank cheque
Judge Chhabria cautioned that his ruling is limited to the facts before him. He did not declare all AI training lawful. Instead, he faulted the plaintiffs for failing to gather data on market impact. In other words, the victory is procedural as much as doctrinal: better evidence could tip the scales next time.
One day, two opinions: comparing Meta and Anthropic
Just 24 hours earlier, Judge William Alsup ruled in Bartz v Anthropic that training was fair use but that storing seven million pirated files in a permanent central repository might still infringe. Together, the opinions suggest US courts will tolerate unlicensed training so long as:
The training process is genuinely transformative.
The developer does not keep illicit archives that compete with the originals.
Plaintiffs cannot prove a well-defined licensing market that is being displaced.
How the ruling could influence ANI v OpenAI in the Delhi High Court
India’s first generative-AI copyright action, ANI Media Pvt Ltd v OpenAI Inc (CS(COMM) 1028/2024), is inching toward trial in Delhi. ANI alleges that ChatGPT was trained on and can still reproduce its pay-walled news reports, eroding the value of a potential licensing market.
OpenAI will almost certainly invoke Judge Chhabria’s reasoning to argue that large-scale text-and-data mining is transformative learning, not conventional copying, and that ANI must quantify real market damage if it wishes to prevail. Yet the persuasive power of the Meta AI fair use ruling has limits: Indian law relies on a closed list of “fair-dealing” exceptions under section 52 of the Copyright Act, 1957, none of which expressly covers machine learning.
The Delhi High Court may be reluctant to graft a US-style transformative-use doctrine onto that statutory framework. Moreover, while Judge Chhabria downplayed the provenance of Meta’s training corpus, the Indian court could take a stricter view, especially in light of Judge Alsup’s parallel opinion in Bartz v Anthropic, which flagged the continued storage of pirated files as a separate infringement.
Ultimately, the California decision arms OpenAI with useful rhetoric but does not foreclose ANI’s path to victory: if the news agency can demonstrate a credible licensing market and show that ChatGPT’s outputs act as substitutes for its reporting, Indian judges may still find infringement despite the US precedent.
What the ruling signals for the wider ecosystem
Authors and publishers must gather hard data on lost sales or blocked licensing opportunities. Rhetoric alone will not suffice.
AI developers gain a roadmap: ensure your use is clearly transformative, maintain clean data-handling procedures and be ready to rebut market-loss claims with economic evidence.
Policy-makers may see the decision as a nudge toward collective licensing or levy schemes, akin to broadcast royalties, that compensate creators without stalling innovation.
Key takeaways
Fair use survives, but evidence is paramount. Meta prevailed because the plaintiffs’ record was thin, not because the court green-lit unlimited scraping.
Transformation is necessary, not sufficient. Plaintiffs can still win if they prove genuine market harm, especially in sectors such as music or journalism.
Data provenance matters. Developers who retain or distribute pirated source files remain exposed, as Judge Alsup’s split decision on Anthropic shows.
Expect local divergence. Courts outside the US, including in Delhi, will study these opinions yet adapt them to domestic statutes and policy concerns.