New Delhi: Union Minister Ashwini Vaishnaw on Tuesday said the government believes news publishers must receive fair compensation when their content is used by artificial intelligence platforms, signalling a stronger policy push around revenue sharing and attribution in the AI era.
Responding to a query by BestMediaInfo.com at the India AI Summit 2026, Vaishnaw said, “We believe that there has to be a fair distribution of the revenue which comes out of the big efforts that the conventional media teams create.”
His remarks come at a time when news organisations globally are negotiating licensing deals and pursuing legal action over the use of copyrighted content to train large language models.
Vaishnaw said conventional media companies invest significant resources in producing credible journalism, and that this effort must be recognised as AI platforms monetise information and scale content discovery.
While he did not outline a specific regulatory framework, Vaishnaw’s comments indicate the government is examining how AI platforms source, attribute and monetise news content, and how publishers should be compensated.
The demand for a fair share was the first point in a nine-point charter unveiled by India Today Group Vice Chairperson Kalli Purie on Monday at the “AI and Media: Opportunities, Responsible Pathways, and the Road Ahead” session.
She opened with what she called the first principle: fair value for journalistic content used in AI systems.
“Publishers cannot be expected to invest in reporting if their work is freely absorbed into AI products without a fair return,” she said.
Vaishnaw also said global digital intermediaries operating in India must follow the country’s legal and constitutional framework. “Any company which operates must operate within the constitutional framework of the country in which it is operating,” he said, adding that companies must also respect the local cultural context.
Separately, Vaishnaw said the rise of deepfakes requires stronger safeguards. “I think we need much stronger regulation on deepfakes. It is a problem growing day by day. Certainly, there is a need for protecting our children and our society from these harms… we have initiated a dialogue with industry on what kind of regulation will be needed beyond what we already have,” he said.
He added that Parliament’s IT committee has studied the issue and made recommendations, and noted that many countries have accepted the need for age-based regulation.
“…this is something that has been accepted by many countries, that age-based regulation has to be there. It was part of our DPDP… when we created this age-based differentiation on the content which is accessible to students and to young people. So, at that time itself, we took that forward-looking step,” he said.
Globally, publishers have raised concerns around unauthorised scraping, weak attribution, and the absence of revenue-sharing mechanisms as AI tools become mainstream.
Vaishnaw’s remarks suggest India could explore norms around attribution, licensing or compensation models as AI deployment expands and platform dependence on news content grows.