Can advertisers trust ads inside OpenAI’s ChatGPT?

As ChatGPT prepares to monetise attention through advertising, the industry questions whether conversational AI can truly be trusted

Lalit Kumar

New Delhi: India’s digital advertising market already runs on a glaring contradiction. It is the biggest line item in many marketing plans, yet it is still among the least independently audited.

To quantify the scale of the problem: India loses an estimated Rs 30 crore every day to digital ad fraud, or nearly Rs 10,000 crore annually. Between 12% and 14% of total digital ad spends are believed to be lost to invalid traffic, including increasingly sophisticated AI-led bot activity.

In this volatile landscape, open-auction programmatic environments remain the weakest link, with invalid traffic levels in some programmatic inventories reaching as high as 31%, making them the single largest contributor to wasted media spends.

It is against this backdrop of persistent fraud, weak verification, and eroding advertiser trust that OpenAI’s decision to introduce advertising inside ChatGPT has triggered both curiosity and unease across the marketing ecosystem.

Industry conversations indicate that beta testing could begin as early as February 6, 2026, with CPMs reportedly priced three times higher than those on Google and Meta.

What OpenAI is effectively offering is not clicks or conversions, but presence inside an AI-generated response, a space many users increasingly treat as authoritative.

For advertisers, the question is no longer just whether ChatGPT ads will work.

The deeper, more uncomfortable question is whether they can be trusted.

Prashant Puri

Prashant Puri, Co-Founder & CEO at AdLift (acquired by Liqvd Asia), placed ChatGPT’s fraud profile closer to search than social, but with important conditions.

“ChatGPT’s ad model likely sits closer to Google Search than Meta on classic ‘invalid traffic’ (bot/click-farm) risk, but for a different reason. It’s a new, closed, logged-in, low-inventory surface with limited measurement, so the fraud profile is narrower but also less battle-tested,” he stated.

Puri suggested that lower exposure to classic bot-driven fraud does not automatically mean higher safety. It simply means fewer known attack surfaces today.

Dhiraj Sinha

Dhiraj Sinha, CTO, mFilterIT, refused to place ChatGPT anywhere on the fraud-risk spectrum at this stage, and that refusal itself is telling.

He said, “Compared to Google Search and Meta, ChatGPT’s ad model doesn’t clearly sit anywhere on the fraud-risk spectrum yet, simply because the ecosystem is still too early and too limited to evaluate properly.”

In his view, the lack of historical data and mature tooling makes any definitive judgement premature.

Sinha went further to underline the core vulnerability. “Today, tracking, monitoring, and measurement capabilities are minimal. Without mature visibility into how impressions, interactions, or conversions are logged and verified, it’s difficult to assess fraud exposure with confidence. If you can’t measure deeply, you can’t truly detect fraud either,” he noted.

What disappears, what mutates

One reason ChatGPT advertising appears attractive is that it structurally removes several long-standing fraud vectors that plague open digital ecosystems. According to Puri, some categories of fraud simply struggle to exist in a closed, logged-in environment with limited inventory.

“ChatGPT structurally eliminates supply-chain and inventory fraud and sharply reduces bot-driven IVT, but shifts risk toward influence manipulation, prompt steering, and measurement opacity rather than classic fake traffic,” he said.

This distinction is critical for Indian advertisers, who have seen the worst fraud losses emerge from programmatic arbitrage, where low-cost inventory masked poor-quality traffic at scale.

Open-auction programmatic has been flagged as the most compromised part of the digital stack, with agencies and fraud-detection firms warning that a significant share of impressions never reach real humans. Against that backdrop, ChatGPT’s controlled environment does remove several layers of leakage.

However, removal does not mean immunity. The risk does not vanish; it mutates. In conversational AI, the manipulation shifts away from traffic generation and toward influence and context, raising questions about how recommendations are surfaced and how much weight users assign to AI-generated answers.

Sinha argued that the industry should not assume old frameworks will work here. He observed, “What is clear is that conversational advertising will require a different measurement framework altogether. The models we use for search and social, clicks, impressions, device-level tracking, and third-party verification may not translate directly here.”

This creates a gap between how advertisers are used to validating media and what ChatGPT can currently offer.

The “view” is the weakest link

One issue consistently emerged as the most fragile: the definition of a “view.” In an ecosystem where billions have been lost to inflated impressions, any ambiguity around exposure becomes dangerous.

“In conversational AI, a ‘view’ is not a cheap impression; it’s a credible opportunity to influence, but it must earn trust through transparency, not assumed attention,” Puri remarked. In other words, the premium pricing only makes sense if the exposure itself is robustly defined and auditable.

He also identified the precise fault line where fraud could re-enter the system. Zooming in, he said, “Any definition of a view that counts mere rendering, without visibility, time, and conversational stability, recreates impression fraud economics, even in a logged-in conversational AI.”

This mirrors the mistakes that turned banners and low-quality video into arbitrage-heavy channels.

Sinha’s position remained more cautious. “The risk isn’t necessarily higher or lower. It’s less understood,” he said. For advertisers, this uncertainty means early performance data should be treated as directional learning rather than proof of safety or efficiency.

Is ChatGPT more like CTV than search?

Puri believed the closest comparison is not search or social, but connected TV. “ChatGPT’s ad fraud risk profile is closer to CTV than to Search or Social: structurally resistant to classic IVT, but vulnerable to weak definitions of exposure and opaque measurement early on,” he noted.

The analogy holds because CTV, despite its premium positioning, has faced sustained scrutiny over measurement consistency and third-party verification. Premium digital environments are not immune to control gaps, reinforcing the argument that “closed” does not automatically mean “clean.” The same lesson applies here.

Sinha reinforced the need for independent oversight. “Until stronger tracking and independent monitoring mechanisms are built, fraud assessment will remain limited. Advertisers should treat it as an evolving channel where today’s risk evaluation tools simply don’t apply yet,” he cautioned.

This does not amount to a rejection of the channel, but a call for disciplined experimentation.

What “safe” should mean in a ChatGPT world

OpenAI’s ChatGPT is currently positioned as a utility rather than a media platform. But advertiser safety will depend less on intent and more on execution.

India’s digital ad fraud problem persists largely because measurement has not been demanded forcefully enough. If ChatGPT advertising scales without strong definitions of exposure and credible verification pathways, it risks repeating the same mistakes, albeit in a more sophisticated interface.

In a conversational AI environment, advertising sits next to answers, not content feeds. That elevates its influence and raises the cost of getting the measurement wrong.

Safety, in this context, will not be about the absence of bots alone. It will be about whether brands can see, verify, and trust what they are paying for.
