Google’s Preeti Lobana highlights steps to fight misinformation and fake content

As AI adoption grows, Google outlines multi-layered approach to tackle misinformation and deepfakes, highlighting watermarking tools and cross-sector collaboration

BestMediaInfo Bureau

New Delhi: New technologies such as artificial intelligence (AI) have created significant opportunities but also introduced challenges like deepfakes, according to Google India’s Country Manager and Vice President, Preeti Lobana. 


She said tackling misinformation remains a priority for the company, which relies on a combination of policies, AI tools, and human oversight to address such issues.

“The Asia Pacific region is particularly seeing higher degree of scams/frauds and misinformation has been a challenge,” Lobana said, adding that Google is ramping up its efforts to address misleading and fake content.

Lobana said that the company’s plan to launch the Google Safety Engineering Centre in India, first announced in 2023, was now “imminent.”

“This (tackling misinformation) is super important for us, when you think about our mission, about information being universally accessible and organising it in a certain way, making sure that we are tackling misinformation in a very systematic manner is very, very critical. So...(it is about) having the right policies and guidelines, having the right technology, having the right human oversight to make sure that we are catching misinformation,” she said.

Lobana highlighted that the company is introducing innovations like SynthID to watermark and verify AI-generated content. The tool applies an invisible watermark to content created using Google’s AI tools, which remains detectable even if the content is edited or shared widely.

“...we're introducing innovation like SynthID, so when any content is created using some of Google's AI tools, there is an invisible watermark, and it's pretty strong technology, because even if it is shared across multiple people or edited, it is detectable,” she said.

She added that Google has launched a SynthID verifier, allowing users to upload content to check whether it is synthetic or AI-generated.

According to Lobana, combating misinformation is an ongoing process and requires a coordinated approach across the wider ecosystem.

“These are our efforts. The ecosystem needs to come together, but it is deeply important to us to make sure that we are combating that,” she said.

The company also unveiled its Google Safety Charter for India’s AI-led transformation. The document outlines a collaborative approach to tackle emerging online challenges, including fraud and scams, cybersecurity across sectors, and responsible AI development.

Lobana noted that while AI has unlocked new creative possibilities, it has also contributed to a rise in misinformation and manipulated content.

“Therefore our effort is to make sure that whatever content is created using our AI, there are watermarks on that, and then (the idea is) enabling and sharing tools through which a wider section of users can upload some of this content to be able to identify it. But like I said, it is about working with a broader ecosystem as well, because multiple AIs are used to generate some of this content,” she said.

“Combating misinformation and deepfakes is a work in progress and an area of deep focus for not just Google, but others in the industry,” she added.
