AI Summit: Kalli Purie’s 9-point AI charter meets Tanmay Maheshwari’s execution plan

At the “AI and Media” session on the inaugural day, India Today Vice Chairperson Kalli Purie presses for fair value and labelling, while Amar Ujala MD Tanmay Maheshwari outlines provenance, data access and infrastructure priorities

Lalit Kumar
Kalli Purie and Tanmay Maheshwari


New Delhi: A call for fair value for journalistic content, mandatory labelling of AI outputs and stricter penalties for hallucinations dominated the “AI and Media: Opportunities, Responsible Pathways, and the Road Ahead” session at India AI Impact Summit 2026 on Monday.

At the session, India Today Group’s Kalli Purie laid out a nine-point charter, followed by Amar Ujala’s Tanmay Maheshwari, who outlined an execution-focused plan.

The session, held on the inaugural day of the summit at Bharat Mandapam, brought together senior media leaders and global experts to discuss how AI is reshaping newsrooms, editorial trust and the economics of publishing, while also examining the guardrails needed for responsible adoption.

Purie, speaking on the panel, said her “nine-point charter” was aimed at ensuring AI is “done right”, starting with “fair value for journalistic content used in AI systems”.

She called for transparency on how AI systems “digest” and “metabolise” news, and argued that attribution and traceability should be treated as democratic principles rather than a “commercial favour” extended by large platforms.

On disclosure, Purie questioned why AI labelling is still left to user discretion. Referring to the scale of content publishing across multiple channels, she said tech companies should build automatic labelling into the code so AI-generated or AI-altered content is clearly marked.

“Bad actors are never going to say it’s AI,” she said, arguing that voluntary labelling places a higher compliance burden on responsible actors while doing little to deter manipulation.

Purie’s charter also called for recognising journalism as a “public good” and for creating stronger signals that reward reporting with social impact.

She said current algorithms are tuned to virality and lack meaningful cues that privilege content designed to inform citizens or serve the public interest.

She also pushed for greater value to be assigned to verified content produced by credible institutions, and said AI hallucinations should be penalised “severely” instead of being treated as minor, “cute” mistakes.

Pointing to what she described as uneven standards, Purie argued for ending the “asymmetry of reward and punishment” between legacy media and social media platforms.

She said traditional news organisations operate under guidelines and accountability frameworks, while similar standards are routinely breached on social media with far less consequence.

Purie also framed attention as the scarcest resource in the digital economy. She said the population’s attention should be treated as “the rarest mineral we have”, and called for insisting on reciprocity from major tech companies that benefit from it.

“What are they giving us back?” she asked, as part of her ninth point.

Maheshwari, Managing Director at Amar Ujala, picked up on Purie’s charter and said the key challenge is execution. Calling her agenda “great”, he said the focus now should be on “how do we execute it”, and offered a plan built around provenance, traceability, data access and infrastructure.

His first proposal was a system in which the government and big tech label the source of original, verified content and create a signature around it.

Maheshwari said this can be done using existing technologies, including blockchain-based methods, and would help establish authenticity at the point of origin.

He said such signatures would lead to the second outcome: traceability. If a piece of content’s origin is known, misuse can be tracked back to the source of manipulation, he argued, and the knowledge that actors can be identified would reduce the risk of large-scale manipulation of public opinion.

Maheshwari also flagged what he called a structural constraint for emerging Indian models: data access.

Referencing remarks made in the session about limitations of western models in the Indian context, he said many critical datasets are not digitised or accessible at scale.

He listed gaps across healthcare, public transportation, regulatory data and criminal records, arguing that without reliable datasets, training strong models becomes difficult.

He said the government must open up data and enable access for Indian organisations building and training models domestically, positioning it as essential to building competitive capabilities.

Maheshwari’s final thrust was infrastructure. He argued that foundational AI infrastructure (compute, chips and supporting capacity) needs a government-led push in the early phase, much as roads, dams, highways and airports are typically built with state support before private participation expands.

He cited China as a reference point for large-scale public investment, and said the visible turnout and queues at the summit underlined both public interest and national commitment.

He said he hoped the summit would be an early step toward a larger build-out, but warned that without Indian infrastructure and sufficient availability of chips and data, “even talking about an Indian model” would not be logical.

To drive home the point, Maheshwari used a Hindi idiom, “Pani ke upar malai kabhi nahin banti” (cream never forms on water), adding, “You need to have milk to have cream over it; you cannot have cream over water,” to argue that applications and innovation cannot sit on a weak foundational base.

Watch the full session:
