New Delhi: The BBC has issued a legal warning to US-based startup Perplexity AI, claiming the company has used its content without permission to train artificial intelligence models. This marks the broadcaster’s first formal action to protect its intellectual property from being used in AI development.
In a letter addressed to Perplexity AI’s chief executive, Aravind Srinivas, the BBC stated it had gathered evidence indicating its material was used in training the company’s model. The letter, first reported by the Financial Times, demands the startup cease scraping BBC content and delete any copies already obtained, unless it offers “a proposal for financial compensation”. It also warns of potential legal proceedings, including an injunction.
The development comes amid broader industry concerns over the unauthorised use of copyrighted content in AI training. In recent weeks, BBC Director General Tim Davie and the chief executive of Sky criticised proposed UK government regulations that could permit tech companies to use copyright-protected material without consent.
“If we currently drift in the way we are doing now we will be in crisis,” Davie said during the Enders conference. “We need to make quick decisions now around areas like … protection of IP. We need to protect our national intellectual property, that is where the value is. What do I need? IP protection; come on, let’s get on with it.”
Perplexity has responded to the BBC’s letter, calling the broadcaster’s claims “manipulative and opportunistic”. The company told the Financial Times that the allegations reflected a “fundamental misunderstanding of technology, the internet and intellectual property law”.
Unlike companies such as OpenAI, Google and Meta, Perplexity says it does not build or train its own foundation models. Instead, it offers an interface that lets users choose between existing models. However, the BBC claims that some of its content has been reproduced “verbatim” by Perplexity, and argues that the startup’s tool “directly competes with the BBC’s own services, circumventing the need for users to access those services”.
Last October, the BBC began registering copyright for its online news content in the United States, enabling it to seek statutory damages for unauthorised use. That same month, Dow Jones, the publisher of The Wall Street Journal, filed a lawsuit against Perplexity, accusing it of “a massive amount of illegal copying” in a “brazen scheme … free-riding on the valuable content the publishers produce”.
Concerns over AI training practices have prompted calls for regulation. The publishing sector has pushed for an opt-in model that would require tech firms to obtain permission and secure licensing agreements before using copyrighted material.
While a consultation in the UK previously suggested that media owners might be required to opt out to protect their content, Culture Secretary Lisa Nandy has indicated that no final decision has been made.
“We are a Labour government, and the principle [that] people must be paid for their work is foundational,” Nandy said at a media conference earlier this month. “You have our word that if it doesn’t work for the creative industries, it will not work for us.”
Several major publishers, including the Financial Times, Axel Springer, Hearst and News Corporation, have signed licensing agreements with OpenAI. Meta has entered into a deal with Reuters, while the Daily Mail’s parent company has partnered with ProRata.ai.