Govt notifies AI content rules: AI labels mandatory, 3-hour takedown window on social media

The government has brought in stricter obligations for online platforms on handling AI-generated and synthetic content, saying platforms such as X and Instagram must take down within three hours any such content flagged by a competent authority or the courts

BestMediaInfo Bureau

New Delhi: The government has notified amendments to the Information Technology (Intermediary Guidelines and Digital Media Ethics Code) Rules, 2021, tightening compliance for AI-generated and synthetic content on online platforms.

The fresh rules require platforms to prominently label content generated or altered using AI tools and mandate that social media companies seek user declarations on whether uploads are AI-generated and deploy tools to verify those declarations.

The rules bring in stricter obligations for online platforms on handling AI-generated and synthetic content, including deepfakes, requiring platforms such as X and Instagram to take down within three hours any such content flagged by a competent authority or the courts.

The amended rules will come into force on February 20, 2026, as per the Gazette notification issued by the Ministry of Electronics and Information Technology (MeitY).

User declaration and verification before publishing

A key insertion requires significant social media intermediaries, before publishing any content, to obtain a user declaration on whether the information is “synthetically generated”.

Platforms must also deploy “reasonable and appropriate technical measures”, including automated tools, to verify the correctness of that declaration, keeping in view the nature, format and source of the content.

Where verification indicates that the content is synthetic, the platform must ensure it is displayed clearly and prominently with an appropriate label or notice.

What counts as “synthetically generated”

The amendments formally define “synthetically generated information” as audio, visual or audio-visual content that is artificially or algorithmically created or altered in a way that appears real, authentic or true, and is likely to be perceived as indistinguishable from a natural person or a real-world event.

The definition also spells out exclusions: routine editing that does not materially misrepresent the underlying content; good-faith creation or design work that does not result in a false document or false electronic record; and accessibility or quality improvements, such as translation or searchability, that do not manipulate material parts of the content.

Mandatory labels, permanent metadata and a bar on removal

For intermediaries that enable the creation or sharing of synthetic content, the rules require such content to be clearly and prominently labelled so users can immediately identify it as synthetically generated.

They also require intermediaries to embed permanent metadata or other provenance mechanisms, including a unique identifier, “to the extent technically feasible”, to help identify the computer resource of the intermediary used to create or alter the content.

Intermediaries are barred from enabling the modification, suppression or removal of the label or embedded metadata once applied.

Platforms must deploy tools to prevent illegal AI content

The amendments also place responsibility on platforms to deploy reasonable and appropriate technical measures, including automated tools, to prevent users from creating or sharing synthetically generated information that violates the law.

The notification flags categories such as child sexual exploitative and abuse material, non-consensual intimate imagery, content resulting in false documents or false electronic records, content linked to explosives or arms procurement, and deceptive impersonation-like depictions of persons or events.

The rules also clarify that when intermediaries remove or disable access to content in compliance with these provisions, using reasonable and appropriate technical measures, including automated tools, such action will not be treated as a violation of safe harbour conditions under Section 79 of the IT Act.

3-hour takedown deadline, faster grievance redressal

The amendments also tighten compliance timelines. For lawful directions to remove or disable access to content, the time limit has been cut from 36 hours to three hours.

User grievance redressal timelines have been shortened as well. Platforms must acknowledge complaints within two hours (earlier 24 hours) and dispose of complaints within seven days (earlier 15 days).

The notification further specifies that where an intimation is issued by the police administration, the authorised officer must not be below the rank of Deputy Inspector General of Police, and must be specially authorised by the appropriate government.

AI content brought on par with “information”

Another change clarifies that references to “information” for the purposes of unlawful content determinations will be construed to include synthetically generated information as well, bringing AI-generated content into the same compliance and enforcement framework under the IT Rules.

The rules also reiterate that intermediaries must periodically inform users, at least once every three months, about compliance requirements and consequences, in English or any language in the Eighth Schedule of the Constitution.

The amendments also update legal references by substituting the Indian Penal Code with the Bharatiya Nyaya Sanhita, 2023, in the relevant provisions. 
