As synthetic media goes viral, advertisers push for tamper-proof trust signals

At India AI Impact Summit 2026, Adobe’s Andy Parsons and other speakers backed standards such as C2PA content credentials to help brands verify how AI-era campaigns are created and edited, as policymakers flagged deepfake virality risks

Lalit Kumar

New Delhi: As AI-generated campaigns move from experimentation to mainstream marketing, industry leaders at the India AI Impact Summit 2026 underscored the growing importance of transparency and provenance standards in protecting brand trust.

While policymakers focused on risks around virality and citizen protection, Andy Parsons, Global Head of Content Authenticity at Adobe, made a strong case for why content credentials are equally critical for advertisers navigating the AI era.

At the centre of this conversation is the Coalition for Content Provenance and Authenticity (C2PA), an industry-led initiative that has developed an open technical standard for attaching secure, cryptographic “content credentials” to digital media. These credentials allow viewers and platforms to verify how a piece of content was created, whether it has been altered, and in some cases, whether AI tools were involved.

Speaking on the sidelines of the summit in an exclusive interaction with BestMediaInfo.com, Parsons said C2PA is emerging as a key trust layer for marketing.

“C2PA is an aid to the marketing industry,” Parsons told BestMediaInfo.com. “If you think about movements like truth in advertising, people want to know that a dentist actually uses toothpaste and that a photo is genuine. Trust signals matter.”

The broader summit discussion framed synthetic media as both an opportunity and a risk. MeitY’s Deepak Goyal emphasised that the government’s primary concern is not AI creation itself, but its amplification.

“The issue is the amplification,” Goyal said during the session, pointing to the viral spread of manipulated audio-visual content.

He stressed that regulatory thinking is centred on individuals rather than platforms. “This is not about content moderation,” Goyal stated. “It is about believability and keeping the citizen at the centre.”

According to Goyal, deepfakes and synthetic media primarily harm individuals whose “likeness can be misused” and whose “voice can be synthesised.” He added that policy safeguards must ensure “the right to know, the right to protection against impersonation, and the right to remedy.”

While MeitY’s framing focused on citizen risk and regulatory principles, the advertising industry faces a parallel challenge of protecting brand credibility in an environment where consumers increasingly question what is real.

Parsons argued that the strongest trust anchor remains the brand itself, but AI-generated campaigns require additional signals to reinforce that trust.

C2PA embeds tamper-resistant metadata into images and videos, but it does not determine whether content is permissible. Instead, it provides provenance: contextual information about origin and edits, enabling audiences to make informed judgments.
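To illustrate the tamper-evidence principle behind content credentials, here is a minimal, hypothetical sketch of a signed provenance manifest. It is not the actual C2PA format (which uses X.509 certificate chains and CBOR/JUMBF containers, not JSON or HMAC); the point is only that altering either the asset or its claimed provenance breaks verification.

```python
import hashlib
import hmac
import json

# Stand-in for a private signing key; real content credentials use
# public-key certificates issued to the signing tool or organisation.
SECRET = b"demo-signing-key"

def issue_credential(asset: bytes, generator: str) -> dict:
    """Create a signed manifest binding provenance claims to the asset bytes."""
    manifest = {
        "asset_sha256": hashlib.sha256(asset).hexdigest(),
        "generator": generator,  # e.g. which AI tool produced the asset
    }
    payload = json.dumps(manifest, sort_keys=True).encode()
    manifest["signature"] = hmac.new(SECRET, payload, hashlib.sha256).hexdigest()
    return manifest

def verify_credential(asset: bytes, manifest: dict) -> bool:
    """Check that neither the manifest nor the asset has been altered."""
    claims = {k: v for k, v in manifest.items() if k != "signature"}
    payload = json.dumps(claims, sort_keys=True).encode()
    expected = hmac.new(SECRET, payload, hashlib.sha256).hexdigest()
    if not hmac.compare_digest(expected, manifest["signature"]):
        return False  # the manifest itself was tampered with
    # The manifest is intact; now confirm it describes these exact bytes.
    return claims["asset_sha256"] == hashlib.sha256(asset).hexdigest()

image = b"\x89PNG...original pixels"
cred = issue_credential(image, "GenAI Tool v1")
print(verify_credential(image, cred))           # True: untouched asset
print(verify_credential(image + b"edit", cred)) # False: asset was altered
```

As the sketch shows, verification answers "was this changed?" rather than "is this acceptable?", which is exactly the distinction Parsons drew between provenance and content moderation.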

As global brands, including major beverage and consumer goods companies, experiment with AI-driven storytelling, the authenticity debate has intensified.

“The same principle can apply to brands facing questions around AI-generated campaigns,” Parsons said. “Technologies like C2PA can help people better differentiate what is real and what is created using AI.”

Google echoed the importance of transparency, though from a platform and ecosystem perspective. Gail Kent, Global Public Policy Director at Google, cautioned against equating AI-generated content with misinformation.

“Just because something is created by AI does not mean it is untrustworthy,” Kent said at the summit.

She noted that Google is deploying tools such as C2PA credentials and SynthID to embed signals into content. “We need tools like C2PA that provide information about how and when content was created,” Kent said, adding that tamper-resistant credentials strengthen trust.

“If content credentials cannot be tampered with, that significantly strengthens trust,” she observed.

For advertisers, such tools could offer a proactive way to address scepticism before it escalates into reputational damage. Parsons emphasised that Adobe’s interest in the space is both strategic and commercial.

“This is an area of great interest for Adobe,” he said. “We have an entire business around the Marketing Cloud, and trust, authenticity, and brand protection are key themes within it.”

Adobe’s Marketing Cloud supports campaign management, analytics and digital customer journeys for global brands. In that ecosystem, authenticity is directly linked to business performance. Industry collaboration, Parsons added, is already underway.

“Publicis is also a member of the C2PA steering committee, which shows there is a strong and focused effort around advertising, marketing, and brand protection,” he said. “Overall, this is extremely helpful for advertisers.”

Sameer Boray of the Information Technology Industry Council reinforced during the session that while C2PA is not a "silver bullet," it represents a step forward. He said it is "better than the status quo," highlighting that multiple tools, including watermarking and provenance standards, will need to work together.

As policymakers focus on curbing the risks of viral synthetic media and platforms invest in interoperable transparency tools, advertisers appear to be entering the conversation with a distinct objective: safeguarding brand equity.

In an AI-saturated creative landscape, provenance technology like C2PA may not eliminate scepticism, but, as Parsons suggested, it can provide the trust signals that modern marketing increasingly depends on.
