Leveraging UGC (user-generated content) on social networks is a mainstay of the modern marketer’s arsenal, but it is also riddled with risks for brand integrity. And while tools like TikTok’s inventory filter — which lets advertisers filter out content categories and determine the type of content they are happy to advertise against — are a useful place to start, brands should be wary of defaulting to a blanket ban on certain types of content, Publicis Media Head of Standards EMEA Tom Burns told the conference.
“We don’t want to be creating blunt instruments,” said Burns, noting that the ecosystem had moved away from blanket bans of hashtags in recent years. He said that in an ideal world, brands and marketers should make decisions that are inclusive of all types of content.
“Risk is really important as a definition, because risk exists in everything,” commented Burns. “We can’t just change the word and pretend it doesn’t exist. UGC is probably the biggest area of risk to safety, to suitability, for brand integrity. So I think [the key is] just understanding that risk and being comfortable with it.”
In relation to TikTok, whose dynamic video feed presents a completely different advertising environment from the traditional, static settings of the web, brands should use concrete examples of content to workshop and define what they are comfortable appearing alongside.
“Getting out of the theoretical and actually seeing content is the main way we’ve been seeing a lot of success in getting advertisers to understand what risk means for platforms like TikTok which are so different from the traditional media world,” commented Burns.
When assessing the level of risk tolerance that is suitable for their brand on TikTok, marketers should refer to the GARM (Global Alliance for Responsible Media) brand safety and suitability standard, which defines 11 content categories across four risk levels: low, medium, high, and floor.
What actions did TikTok take for Ukraine?
“Each individual brand [is] going to have their own risk tolerance, [and] has their own strategy around what’s right for them,” responsible marketing data agency ZEFR EVP Strategy and Marketing Andrew Serby said.
Meanwhile, earlier in the session, TikTok revealed actions it has taken to prevent the spread of misinformation on the platform relating to the war in Ukraine. The company has suspended livestreaming and new content on the platform in Russia while it “reviews” the implications of Russia’s new ‘fake news’ law, TikTok Head of Trust & Safety Cormac Keenan said. Direct messaging within the platform is still permitted.
Keenan said that in response to events in Ukraine, TikTok had evolved its methods in real time to identify and combat harmful content, enabling it to take action on livestreams broadcasting misleading or unoriginal content. TikTok is using a combination of technology and people — it has partnered with independent fact checking organizations — to help it identify and remove content that violates its community guidelines.
Between February 24 and March 31, 2022, TikTok:
- Removed 41,191 videos relating to the war in Ukraine, 87% of which violated the platform’s misinformation policy.
- Worked with fact-checking partners to assess 13,738 videos globally, resulting in prompts on 5,600 videos informing viewers that the content could not be verified.
- Removed 321,784 fake accounts in Russia, and 46,298 fake accounts in Ukraine.
- Identified and removed six networks and 204 accounts globally for coordinated efforts to influence public opinion and mislead users about their identities.
- Labeled content from 49 Russian state-controlled media accounts as part of its pilot state-controlled media policy.