New Delhi: What seems like a whimsical new trend online—transforming selfies into Studio Ghibli-style portraits using AI—may actually be hiding a serious privacy dilemma, experts warn.
While millions of netizens indulge in these AI filters for their artistic charm, cybersecurity professionals are urging caution. The tools powering these viral transformations often come with unclear terms of service, raising questions about what happens to users’ personal data and photographs after they’re uploaded, according to a NewsDrum report.
The trend took off after OpenAI added image generation to its GPT-4o model, letting people recreate personal photos in the dreamy style of the Japanese animation studio, Studio Ghibli. But beneath the charm lies a sophisticated data-processing pipeline, one that can collect far more than facial features.
“These tools rely on neural style transfer (NST) algorithms, which isolate the content of an image from its style to combine it with artwork,” said Vishal Salvi, CEO of Quick Heal Technologies. “What users don’t often realise is that photos can contain hidden metadata—location, time, device details—that can quietly reveal sensitive personal information.”
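To see what Salvi means, here is a minimal Python sketch using the Pillow imaging library that reads the EXIF metadata embedded in a photo. The filename `selfie.jpg` is a placeholder; the tag names and GPS sub-block are standard EXIF structures.

```python
# read_exif.py - inspect the hidden metadata embedded in a photo.
# Requires Pillow (pip install Pillow); "selfie.jpg" is a placeholder path.
from PIL import Image
from PIL.ExifTags import TAGS, GPSTAGS

img = Image.open("selfie.jpg")
exif = img.getexif()

# Standard EXIF tags: camera make/model, capture timestamp, software, etc.
for tag_id, value in exif.items():
    print(f"{TAGS.get(tag_id, tag_id)}: {value}")

# GPS data lives in its own sub-IFD (tag 0x8825) and, when present,
# records where the photo was taken.
gps = exif.get_ifd(0x8825)
for tag_id, value in gps.items():
    print(f"GPS {GPSTAGS.get(tag_id, tag_id)}: {value}")
```

On a typical smartphone photo this prints the device make and model, the exact capture time, and, if location tagging was on, latitude and longitude, all of which travel with the file when it is uploaded.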
Salvi warned that AI models remain vulnerable to attacks like model inversion, which could potentially reconstruct the original images from their stylised versions. “Even if companies claim not to store your photos, data fragments could still be retained and repurposed—for training surveillance algorithms, targeted ads, or worse.”
That concern is echoed by Pratim Mukherjee, Senior Director of Engineering at McAfee. He pointed out that the slick, engaging design of these platforms often masks the true extent of data access and sharing.
“These platforms are built to make engagement frictionless. You upload a photo, enjoy the art—and in that moment, you might not even realise what you’ve agreed to share,” Mukherjee told NewsDrum. “Creativity is the bait, but it’s data collection that’s being normalised.”
He added, “Once a photo is out there, it's out there. You can’t reset your face the way you change a password.”
According to Mukherjee, such platforms often bury their data usage terms in long, dense policies that few users read or understand. “Just because someone clicks ‘accept’ doesn’t mean they’ve given informed consent.”
Vladislav Tushkanov, Group Manager at Kaspersky’s AI Technology Research Centre, added another layer of concern: even when companies promise to delete images or claim secure storage, no system is completely immune to breaches.
“Due to technical flaws or malicious activity, user data can leak—and often ends up for sale on the dark web,” Tushkanov said. “User accounts can also be compromised if login credentials or the device itself is hacked.”
The broader implications go beyond targeted ads. Experts caution that stolen images can feed the creation of deepfakes, fuel identity theft, or be used to train AI models without consent.
Some companies say they delete photos after one-time use, but the ambiguity remains: is deletion instant, delayed, or partial? Without clear answers, users are left vulnerable.
To reduce these risks, Tushkanov recommends basic cybersecurity hygiene—like using strong passwords, enabling two-factor authentication, and avoiding unverified platforms. Salvi added that stripping metadata from images before upload can provide an additional layer of protection.
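Stripping that metadata is straightforward. The sketch below, again Python with Pillow and hypothetical filenames, rebuilds the image from raw pixel data only, so no EXIF block survives into the saved copy:

```python
# strip_exif.py - remove embedded metadata before sharing a photo.
# Requires Pillow; file names are placeholders.
from PIL import Image

img = Image.open("selfie.jpg")

# Copy only the pixel data into a fresh image; EXIF, GPS and other
# metadata blocks are left behind in the original file.
clean = Image.new(img.mode, img.size)
clean.putdata(list(img.getdata()))
clean.save("selfie_clean.jpg")
```

One caveat: re-saving a JPEG this way re-encodes the pixels, which can cost a little quality; dedicated tools such as exiftool (`exiftool -all= photo.jpg`) can strip tags without re-encoding.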
Mukherjee urges governments to intervene, calling for simplified, up-front disclosures about data handling. “We need transparency by design, not as a footnote in fine print.”
Until then, experts say, it’s up to users to weigh the fleeting fun of viral filters against the long-term risks of digital exposure.