How brands are battling polarised & cringe content on social media

Industry leaders deliberate on why some brands continue to prefer eyeballs over brand safety, the growing challenges in safeguarding brands on digital platforms, the measures taken by the industry and the government, and the strides made by YouTube and Meta in ensuring brand safety

Sakshi Sharma

In today's digital landscape, where divisive and polarised content is prevalent, some brands find themselves at a crossroads, wrestling with the challenge of brand safety. As they seek to engage their target audiences, the need to safeguard their reputation and values has never been more pressing.

For example, regular brand ads have found their way into or between videos of the Ukraine-Russia and Israel-Hamas wars, as well as misleading, polarised political content and cringe-worthy content on social media and YouTube.

More eyeballs vs brand-safe environment


Brand safety is important for brands to maintain ethics and be true to consumers, said Anita Nayyar, COO of Media, Branding and Communications at Patanjali Ayurved. “However, in the same breath, eyeballs are what brands gun for, and hence many brand categories do advertise on polarised content as they are more likely to find eyeballs there.”

“Having said that, brand retention and awareness are more likely in the right environment, and hence, while eyeballs will be plentiful, it is most likely that consumers will not relate to or connect with the brands at that time,” she added.

She also highlighted that responsible brands need to ensure that they do not support polarised content that is not in good taste, as this may affect the brand loyalty and trust the consumers have. Loyal consumers will not want the brand they trust to be part of controversial content.

Nayyar emphasised that brands that put eyeballs above everything else will likely engage with polarised content. The motivation is purely to garner maximum reach.


Anil Shankar, Senior Vice-President - Digital, Starcom India, noted that motivations can vary both within and across industries. While some brands may actively seek out polarised content, others may end up alongside it inadvertently or as a result of complex ad placement algorithms. For example, a political campaign may engage with polarised content to target specific demographics with its messages.

“Their primary motivation may be to expand their reach and mobilise like-minded individuals who are active on polarised platforms. In contrast, a content creator or influencer may have diverse motivations, including financial incentives and building a loyal following. They might create or engage with polarised content to cater to their audience's interests or to generate engagement and discussion,” he added.


Similarly, Karan Anand, SVP – Strategy, Interactive Avenues (the digital arm of IPG Mediabrands India), said that certain industries, such as those in controversial product categories or brands adopting an edgy persona, may engage with polarised content to target specific audiences and create unique brand identities. Media outlets focused on sensationalism and companies in high-risk financial sectors also lean towards polarised content for its high engagement and receptive audiences.

Betting and gambling apps and quick-loan companies choose to advertise on cringe-worthy and polarised content

Certain industries, such as betting apps, real-money gaming, financial institutions offering quick loans or returns on investments, and alcoholic beverages, may be inclined to advertise on polarised content.


According to Krishnarao Buddha, Senior Category Head at Parle Products, every type of content will find its audience. For instance, brands associated with betting may choose to place their content on platforms featuring negative content. They believe this audience is the right fit, as it appears vulnerable and drawn to negativity. These viewers might be in a distressed state of mind, making them susceptible to ads from companies like OctaFX, Rummy Circle or Junglee Rummy. These brands aim to gain control over whatever limited funds these viewers possess.


“Their primary motive for the same would be that they wish to reach out to their prospective customers who would be present on platforms consuming that content. One reason could be the country in which they want to advertise would have laws that restrict them from using the platform channels like any other advertiser industry would,” Suchi Jain, General Manager, Madison Digital, added.

“However, for reputable brands like Parle, we make it a priority to ensure our ads are placed in a safe environment. We avoid contexts with curse words or negative content when planning our advertising initiatives,” Buddha said.

Buddha continued, “We avoid any association with such content entirely, making it a top priority when advertising on platforms like Facebook and YouTube, where content control isn't 100% guaranteed.”

“However, for platforms like Zee5, SonyLIV and Disney+ Hotstar, where we know the content, we are comfortable with advertising,” he added.

Ensuring brand safety is a difficult task

Shankar of Starcom India pointed out that while ensuring brand safety amid rising user-generated content is a difficult task, it should be viewed from two perspectives: one from the consumer's standpoint and the other from the brand's standpoint. Both are closely linked to the world of brands.

“Firstly, in the continuously evolving data landscape, brands need to scrutinise platform (website, app, e-commerce) policies and guidelines to ensure good compliance and enhance the consumer experience. Additionally, brands should focus on controlling ad placements to ensure ads appear in brand-safe contexts, which can be challenging to manage due to user-generated content.”

The second point relates to reputation risk and potential customer backlash, he said. “Brands risk association with divisive or controversial content, which can harm their reputation and values. Consumers may react negatively to brands appearing on polarised platforms, potentially leading to boycotts or negative sentiment,” Shankar said.

Interactive Avenues’ Anand said that navigating ads amid polarised content poses brand safety challenges due to algorithm-driven placements and unpredictable user-generated content.

“The delicate balance between maximising reach and maintaining a positive image requires continuous adaptation, safety tools and responsible marketing to align with positive social causes,” he added.

Meanwhile, Madison Digital’s Jain pointed out that brands need to define the standard of the content and the suitability of the platforms they wish to advertise on.

“Some advertisers continue to do this by choice, while others have their ads served because of no controls or prior measures taken. To prevent the challenge, it comes as part of the association and willingness of the brand to be present with sensitive content. This, in turn, leads to the audience consuming content along with the ad to form a perception for the brand that may damage its reputation eventually,” Jain added.

What should be done to stop or even reduce this trend?

Nayyar pointed out that the industry is certainly taking baby steps to ensure brands do not consider polarised content; however, more than any industry-led initiative, it is the brands' responsibility to practise what is right.

“While there are many platforms that refrain from offering polarised content for advertising, it is important that both platforms offering this content and brands join hands to ensure brand safety,” she added.

Buddha suggested that improved regulations would create a safer environment for all brands, which is a positive step.

“If it happens, platforms like YouTube, Facebook and Instagram will have a greater responsibility to regulate content, particularly because they cater to family audiences. With the widespread availability of cheap mobile phones and data, controlling access to content meant for those 18 and older is challenging. While platforms typically require users to be at least 13 years old, it's easy to fake one's age and access content. So, regulations would be a welcome move,” he added.

Shankar said that government initiatives such as the Digital Personal Data Protection (DPDP) Act, 2023 focus on the rights of users, data collection, localisation and processing. As scrutiny increases in India, the advertising industry is likely to witness changes in standards and regulations aimed at addressing the brand safety concerns associated with polarised content.

“For example, regulatory bodies and industry associations may impose stricter guidelines on content moderation for platforms, ensuring that polarised and controversial content is better identified and controlled. Another example is the increased demand for transparency from both brands and advertising platforms. Brands may be required to disclose their ad placement strategies, while platforms might have to provide more information on their content policies and algorithms,” Shankar added.

Shankar also pointed out that advertising platforms play a pivotal role in ensuring brand safety for the advertisers on their platforms. As the gatekeepers of content, ad placements and user interactions, their primary role should be to provide transparency on how content is moderated, how ad placements are determined, and any changes made to content policies.

“They should also periodically audit ad placements and content moderation processes to ensure compliance with brand safety standards. It is essential to hold platforms accountable for any lapses and ensure their compliance with emerging regulations related to brand safety and transparency in advertising,” he added.

According to Anand, growing scrutiny around advertising on controversial content is expected to drive industry standards towards transparency, ethical practices, and improved control over ad placements. Enhanced AI tools, regulatory guidelines, and responsible advertising strategies are crucial for striking a balance between free expression and maintaining a safe digital advertising space.

Meanwhile, Jain said, “Changes are expected. At an advertiser consortium level, the rules and regulations of advertising on polarised content need to be defined, and there has to be homogenisation of platform policies across industries. Steps can be taken in the form of generally applicable policies, customised policies for specific sectors that wish to advertise on polarised content, policies for influencers who promote the brand on social channels, and setting a limit or a no-advertising policy on inventory based on content suitability, which can be classified as high, medium or low for advertisers.”
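To picture how the high, medium or low suitability classification Jain describes could work on the advertiser's side, here is a minimal, purely illustrative sketch in Python. The tier names, keyword lists and spend caps are assumptions made for illustration; they are not drawn from any platform policy or from Madison Digital.

```python
# Illustrative sketch only: tier names, keywords and rules are hypothetical
# and are not taken from any platform or industry policy cited in this article.

RISK_KEYWORDS = {
    "high": ["war footage", "graphic violence", "hate speech"],
    "medium": ["political debate", "protest", "crime news"],
}

# Hypothetical per-tier rules: no advertising on high-risk inventory,
# capped spend on medium-risk inventory, no restriction on low-risk inventory.
TIER_RULES = {
    "high": {"allow_ads": False, "max_share_of_spend": 0.0},
    "medium": {"allow_ads": True, "max_share_of_spend": 0.10},
    "low": {"allow_ads": True, "max_share_of_spend": 1.0},
}


def classify_inventory(description: str) -> str:
    """Bucket a piece of inventory into a suitability tier by keyword match."""
    text = description.lower()
    for tier in ("high", "medium"):
        if any(keyword in text for keyword in RISK_KEYWORDS[tier]):
            return tier
    return "low"


def placement_decision(description: str) -> dict:
    """Return the tier and the advertising rule that applies to this inventory."""
    tier = classify_inventory(description)
    return {"tier": tier, **TIER_RULES[tier]}


if __name__ == "__main__":
    print(placement_decision("Graphic violence compilation from the front line"))
    print(placement_decision("Weekend recipe video for a family cooking channel"))
```

In practice, this kind of classification would come from a verification partner or a platform's own suitability signals rather than simple keyword lists, but the tiered allow, limit or block structure is the part of Jain's proposal the sketch is meant to show.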

How are platforms like Google and Meta addressing brand safety concerns?

According to Google's 2022 Ads Safety Report, the company added or updated 29 policies for advertisers and publishers in 2022. This included expanding its financial services verification programme to 10 new countries, expanding protections for teens and strengthening its election ads policies.

"In 2022, we removed over 5.2 billion ads, restricted over 4.3 billion ads and suspended over 6.7 million advertiser accounts. This represents an increase of 2 billion more ads removed in 2022 from the previous year. We also blocked or restricted ads from serving on over 1.5 billion publisher pages and took broader site-level enforcement action on over 143,000 publisher sites. To enforce our policies at this scale, we rely on a combination of human reviews and automated systems powered by artificial intelligence and machine learning. This helps sort through content and better detect violations across the globe,” Google stated.

“Following the start of the war in Ukraine, we acted quickly to prohibit ads that exploit, dismiss or condone the war. This is in addition to our longstanding policies prohibiting content that incites violence or denies the occurrence of tragic events to run as ads or monetise using our services,” it added.

In May this year, Integral Ad Science (IAS) announced it had enhanced its partnership with YouTube to provide advertisers with industry-leading brand safety and suitability measurement across the online video platform.

Utilising advanced machine learning technology, IAS measurement will now offer a more thorough examination of video content on YouTube. This enhancement provides marketers with improved tools for ensuring safety and relevance. IAS's updated reporting follows the Global Alliance for Responsible Media (GARM) Brand Safety and Suitability guidelines, allowing for detailed campaign reporting to achieve the highest effectiveness.

On the other hand, Meta also has tools that give advertisers control over where their ads appear on Facebook and Instagram. According to Meta, it offers a variety of brand suitability controls to prevent ads from running with certain types of content on Facebook, Instagram, and Meta Audience Network.

"When creating an ad, you can choose where you want your ad to show on Facebook, Instagram, Messenger, and Audience Network. If you don't want your ads to run in certain placements, you can opt out of them. If there are certain placements where you don't want your ads to appear, you can upload a list of those URLs and prevent your ads from being delivered there," Meta stated.

"Our ad placement list details the URLs where your ads can appear, such as the Audience Network, Facebook in-stream videos, ads on Facebook Reels, and ads on Instagram Reels. You can download the list and review it. After this, you can copy the selected URL to your block list or publisher permission list. You can also search, sort, and filter publishers in the Brand Safety and Suitability Control interface to spot-check publishers without downloading the entire ad placement list," it added.

