New Delhi: India’s top tech industry bodies have urged the IT ministry to narrow and clarify its proposed AI content labelling rules, warning that broadly drafted obligations could overreach, strain compliance systems, and dampen innovation, even as they support decisive action against deepfakes.
According to news reports, the Internet and Mobile Association of India (IAMAI) and Nasscom asked the government to precisely define “synthetically generated information” and “deepfake synthetic content”, and to focus enforcement on harmful or malicious material rather than sweeping in all algorithmically altered media.
Both groups sought global alignment (for example, with standards like C2PA) and raised feasibility concerns about some labelling mandates.
The Ministry of Electronics and Information Technology (MeitY) has circulated draft amendments for consultation that would require platforms and users to label AI-generated visuals with a visible marker covering at least 10% of the display area, and to add disclaimers to AI-generated audio for the first 10% of its duration.
Significant social media intermediaries (SSMIs) with 5 million or more registered users in India would need to obtain user declarations about synthetic content, verify them using automated tools, and treat unlabelled or unverifiable content as non-compliant. Industry respondents cautioned that visible watermarks are easily removed, metadata often gets stripped in cross-platform sharing, and classifier-based verification can be error-prone.
The associations also flagged potential knock-on effects for safe-harbour protections. While the core immunity framework is not being scrapped, due diligence duties would expand to include verification and labelling, raising the risk that platforms could lose conditional immunity if they fail these checks. Startups and smaller firms, they argued, could face disproportionate burdens relative to their resources.
IAMAI and Nasscom recommended a more flexible approach: prioritise high-risk content and actors; prefer machine-readable provenance signals over large, on-screen markers; clarify obligations by use case (consumer vs enterprise); and pace implementation to avoid fragmenting India’s rules from emerging global norms. The groups said such calibration would still deter deepfakes while preserving room for legitimate AI uses in media, advertising, and software tools.
MeitY’s consultation window has closed, and the ministry is expected to review feedback before finalising amendments to the IT (Intermediary Guidelines and Digital Media Ethics Code) Rules, 2021. With AI now embedded across creation and distribution workflows, the industry expects continued dialogue on technical feasibility, redress for wrongful takedowns, and timelines for compliance.