Govt brings photos and personal data shared with AI under legal protection

Personal images and biometric details shared on AI platforms now fall under the Digital Personal Data Protection Act, 2023, and new Rules notified on November 13, 2025

BestMediaInfo Bureau

New Delhi: Millions of Indians who upload their photos and personal data to AI applications will now have stronger legal protection, the government said on Wednesday, signalling a new push to regulate the booming world of artificial intelligence.

Minister for Electronics and Information Technology Ashwini Vaishnaw said that personal images, biometric data, and other information shared with AI platforms are now covered under the Digital Personal Data Protection Act, 2023, and the Digital Personal Data Protection Rules, 2025, which came into effect on November 13, 2025. 

“The Act empowers individuals with specific rights over their data and imposes clear obligations on organisations that process it,” Vaishnaw told Parliament.

At the centre of the new framework is the Data Protection Board of India, which will oversee complaints, ensure compliance, and enforce penalties.

The Board will have a Chairperson and four members, all appointed through a Search-cum-Selection Committee. The move aims to give users more control over how AI platforms handle their images and data.

The announcement comes amid rising concerns over deepfakes, morphed images, and other AI-generated content that can misrepresent individuals, damage reputations, or spread misinformation. The Indian government has been engaging social media platforms and intermediaries to strengthen safeguards against such misuse. 

Advisories issued on December 26, 2023, March 15, 2024, and November 21, 2025, reminded platforms of their obligations under the Information Technology (Intermediary Guidelines and Digital Media Ethics Code) Rules, 2021, including prompt detection and removal of unlawful content.

Under the IT Rules as amended in 2025, platforms must remove or disable access to reported unlawful content within 36 hours of receiving notice from the government or a court.

Draft amendments under public consultation go further, proposing mandatory labelling, watermarking, and traceability for AI-generated images and videos, so users can easily identify content created or manipulated by AI.

The Government emphasised that enforcement against crimes involving the misuse of AI-generated content remains a State responsibility. Police and law-enforcement agencies in States and Union Territories are empowered to investigate and prosecute offences involving social media misuse.

On the technology front, the Government highlighted projects under the IndiaAI Mission, launched in March 2024 to ensure safe and responsible AI use in India. 

Under the mission’s Safe & Trusted AI pillar, three initiatives have been selected to detect and prevent misuse of AI-generated content:

  • Saakshya, a deepfake detection and governance framework developed by IIT Jodhpur and IIT Madras.
  • AI Vishleshak, led by IIT Mandi and the Himachal Pradesh Directorate of Forensic Services, focusing on detecting audio-visual deepfakes and forged handwritten signatures.
  • A Real-Time Voice Deepfake Detection System from IIT Kharagpur.

These projects aim to strengthen India’s ability to identify AI-generated manipulations and protect citizens from reputational and financial harm caused by synthetic content.
