New Delhi: Industry leaders on Wednesday backed the government’s proposal to mandate labelling of AI-generated content, saying stronger identifiers and clear implementation standards are essential to curb deepfakes and build user trust.
Mahesh Makhija, Partner and Technology Consulting Leader at EY India, said the move is a step toward authenticity in digital content and will underpin responsible AI adoption.
“Labelling AI-generated material and embedding non-removable identifiers will help users distinguish real content from synthetic, serving as the foundation for responsible AI adoption,” he said, adding that the next step must be practical standards and government–industry collaboration “so the rules are scalable and supportive of India’s AI leadership ambitions”.
Calling deepfakes “worryingly convincing,” Akshay Garkel, Partner at Grant Thornton Bharat, termed the proposal timely. “It’s good to see the government and law enforcement taking the issue seriously and acting to curb this menace,” he said.
The government on Wednesday proposed changes to the IT rules that would mandate clear labelling of AI-generated content and place greater accountability on large platforms such as Facebook and YouTube for verifying and flagging synthetic information, a move aimed at curbing user harm from deepfakes and misinformation.
The IT ministry noted that deepfake audio, videos and synthetic media going viral on social platforms have demonstrated the potential of generative AI to create “convincing falsehoods”, content that can be “weaponised” to spread misinformation, damage reputations, manipulate or influence elections, or commit financial fraud.
The proposed amendments to the IT rules provide a clear legal basis for labelling, traceability, and accountability related to synthetically generated information.
Apart from clearly defining synthetically generated information, the draft amendment, on which comments from stakeholders have been sought by November 6, 2025, mandates labelling, visibility, and metadata embedding for synthetically generated or modified information to distinguish such content from authentic media.
The stricter rules would increase accountability of significant social media intermediaries (those with 50 lakh or more registered users) in verifying and flagging synthetic information through reasonable and appropriate technical measures.