New Delhi: A day before the civil lawsuit hearing scheduled for January 28, Asian News International's (ANI) legal battle with OpenAI took an interesting turn. The Digital News Publishers Association (DNPA), along with members like Indian Express, Hindustan Times, and NDTV, officially joined the fray by intervening in the lawsuit.
Filed in November 2024, this case is the first of its kind in India, with ANI accusing OpenAI of copyright infringement over its use of news content.
The digital news platforms, already facing tough competition for ad revenue from the likes of Google, Meta, and e-commerce giants, have joined forces with ANI and are collectively raising concerns about OpenAI using their copyrighted content without permission. They seek compensation for the material used to train AI models and are also worried about the rise of disinformation driven by generative AI.
While global publishers like The Associated Press, News Corp, and Axel Springer have struck licensing deals with OpenAI for training ChatGPT—sometimes even including paywalled content—Indian media companies are still waiting for their turn.
The Indian publishers contend that OpenAI's reluctance to enter into such partnerships in India demonstrates a disregard for local laws and undermines the nation’s media industry.
Anant Goenka, Executive Director at The Indian Express, pointed out that, over the last five years, “either by way of legislation or through independent commercial agreements, big tech platforms in almost every territory, including India, have a well-established market value for the content generated by news organizations. AI companies like OpenAI have also struck mutually beneficial arrangements in some foreign markets. While in this case, we have intervened on a question of law, the larger question remains as to why AI companies are discriminating against India.”
In July 2023, The Associated Press struck a deal with OpenAI to license its archives for ChatGPT training. In May 2024, News Corp signed a five-year deal with OpenAI, worth $250 million, to allow ChatGPT to use its content with a delay. Axel Springer also partnered with OpenAI, enabling ChatGPT to summarise articles from its publications like Politico and Business Insider, including paywalled content, with attribution.
Having said that, these exclusive deals may overlook smaller and local publishers, potentially hindering the development of sustainable business models in favour of one-off licensing agreements.
In its November 19, 2024, order, the Delhi High Court identified and framed key legal questions stemming from the ANI v. OpenAI case. These focus on OpenAI’s extensive—and, according to DNPA, unlawful—use of Indian copyrighted content, raising critical implications for intellectual property rights and the future of India’s news industry in the digital age.
DNPA contends that companies like OpenAI have developed large language models (LLMs) by “training” on vast quantities of text, including, without a licence or permission, copyright-protected works. “This unlawful utilisation of copyrighted materials exclusively benefits OpenAI and its investors, to the detriment of the creative works across the entire industry in India,” said DNPA.
DNPA further asserted that OpenAI’s appropriation of news content presents an increasing threat to press transparency in India.
DNPA also voiced serious concerns over OpenAI’s lack of transparency and disclosure in its AI models. With disinformation and deepfakes on the rise, the DNPA claims that generative AI is amplifying online misinformation, threatening the quality, reliability, and diversity of news content in India. “AI models are supercharging misinformation online, and this could have far-reaching consequences for the Indian news ecosystem,” wrote DNPA in a statement.
The association also highlighted how these opaque algorithms risk exposing the public to manipulated narratives. “Without proper checks and balances, we’re heading toward a future of algorithmic opacity that will undermine public trust in journalism,” it added.
Reaffirming its commitment to protecting the rights of its members and the future of journalism, the DNPA is preparing to present its arguments in the Delhi High Court on January 28. “We are here to advocate for fairness and transparency, ensuring that AI companies respect intellectual property and the value of journalistic work,” the DNPA emphasised.
In the past, even The New York Times has sued OpenAI and Microsoft, seeking billions in damages for the unauthorised use of its content.
Similarly, News Corp, the parent company of major media outlets such as The Wall Street Journal and the New York Post, filed a lawsuit against Perplexity for allegedly infringing on copyrighted content.
Expressing his thoughts on why the lawsuit was filed in the first place, Anant Nath, Editor at The Caravan and Executive Publisher at Delhi Press, said, “The principle here is that generative AI companies need to engage with original content producers. Another issue is that their algorithms and systems, which train on content created by original publishers, should require a license. Additionally, there needs to be proper citation of where the content is being sourced from.
If their models are being trained on proprietary content, there should be a license fee, as it doesn't fall under the fair use policy. This necessitates a commercial arrangement, which includes determining the extent of citation required.”
Nath believes that, ideally, this issue should have been resolved without a lawsuit, but since GenAI companies have already trained their models without permission, litigation has become the only option.
The impact of AI tools on news platforms extends beyond OpenAI to even Google's AI Overviews, which provide direct answers to search queries. These summaries risk bypassing links to publishers' websites, leading to significant drops in web traffic and revenue.
Highlighting the growing imbalance, Nath stated, “The combined revenues of Meta and Google surpass those of all media companies in the country put together. This is significant because neither Meta nor Google produces original content. For instance, when you perform a Google search, the response that appears in Google Overview is often sufficient for the majority of readers, eliminating the need to visit the original source. Previously, a Google search query would lead to a list of links, prompting users to click through to the original content source website.”
The rise of generative AI has exacerbated the problem, with original content now being summarised in AI-generated language. “As a result, the traffic is now being contained within the big tech platforms, as they provide summarised responses to queries, reducing the need for users to visit the original content websites,” Nath explained.
To address this complex issue, Nath believes that stronger enforcement of licensing agreements and clear content usage terms are essential. “The evolution should be negotiated because, right now, it is almost like a completely free run. It has to move away from the free use of content,” he asserted.
Since these tech giants are basically the backbone of today’s media landscape and have used journalism to add value to their empires, it’s high time policymakers step in. They need to make sure news organisations get their fair share and that big corporations don’t just play the game—they play fair.
Nath emphasised the need for collective action: “This requires an industry-wide response. A few players taking action is not enough, given the vast amount of content being produced globally. Therefore, concerted action is needed across all major original content producers.”
In November, Information and Broadcasting Minister Ashwini Vaishnaw showed urgency in addressing the challenges posed by AI systems during his National Press Day speech.
He emphasised the need to safeguard the intellectual property (IP) rights of original creators. “AI models today can generate creative content based on vast datasets they are trained on. But what happens to the rights and recognition of the original creators who contributed to that data? Are they being compensated or acknowledged for their work?” the minister questioned.
“This is not just an economic issue; it is an ethical issue too,” he added.