Neil Thompson
New Delhi: Artificial intelligence may be the hottest buzzword in marketing, but it still struggles with something old-school: human understanding.
And that, according to Neil Thompson of the Massachusetts Institute of Technology, is exactly why the future of advertising won’t be machine-led; it’ll be machine-managed.
In an exclusive conversation with BestMediaInfo.com on the sidelines of the NDTV World Summit, Thompson, Director of MIT FutureTech, explained that while AI can make advertising sharper and faster, it also makes it riskier.
Combine AI’s propensity for error with India’s highly complex, nuanced demographics, and marketing efforts in targeting, media planning or media buying can easily go wrong.
The takeaway? Machines can help marketers aim better, but they can’t always tell what’s worth aiming at.
The illusion of perfection
On paper, AI is a marketer’s dream. It promises to understand you, anticipate your next move, and serve you just the right ad at the right time. In practice, it talks with the confidence of a scholar and the accuracy of a rumour.
“When we think about the role of AI in marketing, especially the idea of targeting individuals with hyper-personalized messages, the potential is immense, but so are the challenges. The very precision that makes AI powerful also means that when it errs, the consequences can be significant,” he said.
He added, “However, there’s always a long tail of risk.”
Zooming in, Thompson stated that if an AI system is built with rich and accurate data, it tends to perform remarkably well. For many consumers, the messages generated by such systems feel far more relevant and engaging than anything they have encountered before.
Even when the targeting is not entirely precise, it often represents an improvement over the earlier model of broad segmentation, where audiences were categorised by limited factors such as profession or age, overlooking deeper nuances of identity and behaviour.
Highlighting the long tail risks, he further explained that in certain cases, AI-generated messages may not only miss the intended mark but also inadvertently offend recipients or misrepresent the brand’s values.
Consequently, most organisations developing AI-driven marketing systems incorporate a layer of human oversight, a safeguard designed to ensure that automation does not compromise sensitivity, ethics, or brand reputation.
That long tail, the unpredictable space where personalisation turns into misjudgement, is what keeps marketers awake at night. Because one wrong ad placement, one culturally tone-deaf line, or one mismatched image can undo months of precision-driven marketing.
AI doesn’t mean to offend, of course. It just doesn’t know better yet.
Humans: the unsung editors of AI
While marketers are quick to automate, Thompson believes the smartest brands are the ones quietly building human checkpoints into their AI workflows. “Often when people build these kinds of AI systems, they have to have a wrapper of human control or checking on it,” he explained.
Sometimes that check is algorithmic, like filters that prevent hate speech. But sometimes it’s not about code. “Sometimes you actually just need a human to read it,” he said.
And that’s where the human layer comes in, not as decoration, but as defence. Because even the most sophisticated AI can’t recognise tone, intent, or irony, the very things that make advertising resonate.
It’s the human eye that catches the nuance between witty and offensive, between relevant and random. Or as Thompson implied, the human brain that looks at an AI-generated tagline and says, “Maybe don’t post that.”
Data: the new power play
The marketing world has always been about influence, but today, it’s also about information. And as AI systems grow, the one who owns the best data wins the game.
Thompson explained that as platforms like OpenAI integrate payments and transactions through India’s UPI network, the old power balance in digital advertising could shift.
“Who wins is not clear yet, but there’s going to be a lot of platform competition right now,” he said.
He broke it down simply: “It’s going to matter whether the thing that’s rare is a good model or the underlying data about you.”
That’s where the battle between AI intelligence and human behaviour data begins. Google, for example, has powerful models; Meta, on the other hand, sits on mountains of personal insights. “Even if Meta’s AI system isn’t quite as developed as OpenAI’s, they might still win because of the data,” Thompson said.
In short, it’s no longer about who has the smartest tech; it’s about who knows you better.
The great balancing act
What Thompson is really hinting at is an evolution, not a replacement. AI isn’t here to take over; it’s here to test how well humans can keep it grounded.
Marketers often romanticise data: the dashboards, the graphs, the predictive models. But data doesn’t feel. It doesn’t know if a message sounds too clinical or too clever. It doesn’t know when a campaign has crossed from relatable to robotic.
And that’s where humans come back into the picture, not to slow AI down, but to remind it who it’s talking to.
The best advertising of the future, Thompson seemed to suggest, will be co-written: AI will handle the scale and precision, and humans will ensure the message still feels like it’s coming from a person, not a server.
Because in a country as layered and emotionally charged as India, algorithms can’t afford to guess. They need to understand. And until machines learn how to read between the lines, the way people do, the human hand on the marketing wheel will always be the one steering it in the right direction.