Internet monitoring in the service of brand safety
Brand safety is one of the key challenges of modern digital marketing. In online advertising, the term “brand safety” refers to the practices and tools that help ensure an ad for a product or service does not appear in a place that could harm the manufacturer or service provider. Another aspect of brand safety is the need to avoid certain combinations of message contexts. For example, reputable makers of sports and luxury cars do not want photos of their latest models placed next to images of crashed and burned-out wrecks.
Serving an ad (and thereby financially supporting the publisher) next to content that infringes intellectual property or spreads hate speech is a central brand-safety concern. A quick example: a Samsung Galaxy S8 ad that appeared some time ago on Yahoo.com. The banner was well composed at the very top of the site, accompanied by an innocuous photo of an ice-cream cone melting in the summer heat. Scrolling a little lower, however, to the day's top stories, controversial political content appeared immediately:
In 2017, Procter & Gamble and Unilever reduced their involvement in the online advertising market. They cut their roster of internet publishers down to those who could guarantee a safe environment for their brands, mainly around video content. Because it meant slashing large advertising budgets, this news shook the market. Publishers offering content that is safe and vetted by marketers gained from it. It marked the end of an era in which unchecked content created by any user could attract large brands' budgets.
Marketers who want to advertise on safe placements have three options: the old model of buying advertising space with a reduced risk of appearing next to risky content; a model with contractual safety guarantees; or the use of content-analysis technology that scans websites and individual subpages and stops ad delivery when predefined guidelines are exceeded. All of these approaches become less effective the more of the content is user-generated (UGC), where publishers have limited oversight and a limited ability to react.
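The third option, stopping ad delivery once predefined guidelines are exceeded, can be illustrated with a minimal sketch. The 5% threshold and the numbers below are hypothetical, not industry figures:

```python
# Sketch of threshold-based ad pausing: stop serving ads on a publisher
# when the share of flagged subpages exceeds a predetermined guideline.
# The limit and the example counts are hypothetical.

FLAGGED_SHARE_LIMIT = 0.05  # pause if more than 5% of scanned subpages are flagged

def should_pause(scanned: int, flagged: int,
                 limit: float = FLAGGED_SHARE_LIMIT) -> bool:
    """Decide whether to stop serving ads on a publisher."""
    if scanned == 0:
        return False  # no data yet; keep serving
    return flagged / scanned > limit

print(should_pause(scanned=200, flagged=3))   # False (1.5% flagged)
print(should_pause(scanned=200, flagged=15))  # True (7.5% flagged)
```

In practice the "flagged" signal would come from a content-analysis system; the point here is only the guideline check that trips the kill switch.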
Harmful and undesirable areas
A fuller explanation of brand safety is more complex. Each brand has its own unique offer, vision and values, and there is usually no ready-made list of dangerous media. In general, brand safety can be defined as staying away from content that is illegal (for example, in copyright terms), offensive, or directly contrary to a company's specific rules and the basic principles of a good image. That is only a small fragment of so-called undesirable content. Other areas that adversely affect a brand's reputation and image include:
- erotica and pornography;
- xenophobia, racism, discrimination, hatred;
- weapons, war, crime, violence, aggression;
- gambling;
- addictions, alcohol, drugs, tobacco;
- pirated and otherwise illegal content;
- accidents, disasters, tragedies, cataclysms, deaths, crimes, murders;
- profanity, blasphemy, vulgar language;
- currently sensitive content (abortion, refugees, …);
- fake news.
Online brand protection also prevents the loss of revenue, reputation and customer trust that often occurs when someone else exploits your company for their own benefit. A company's trademark can easily fall victim to this kind of practice. Many people do not realize that, without the owner's permission, a logo cannot be used as the symbol of another brand for someone else's gain. Many people unwittingly make this mistake when creating so-called “memes” in which registered trademarks are used illegally. Unfortunately, these are usually accompanied by unwanted content.
Creating whitelists and blacklists
To protect against spam and fake accounts on social networks that can damage a company's image, so-called whitelists and blacklists are created. The easiest way to illustrate these concepts is unwanted phone calls from telemarketers or robocalls pushing unnecessary goods and services. When we receive a call from an unknown source and do not want it repeated, we reject it and, with one click, throw the number onto a “blacklist”. Future calls from that untrusted, unwanted number will then be silenced.
Very similar practices apply online. This is how a “blacklist” is created: a group of accounts or sources we do not want to follow on the web. Thanks to it, we are not “bombarded” with undesirable content or information we have no use for. A “whitelist” is the opposite: a list of social-media users we explicitly want to follow. These can be, for example, celebrities, close friends, perhaps even family: trusted accounts we want to stay in touch with or see notifications from. The concept is not limited to Facebook, Twitter or Instagram; it also applies to mailboxes and to trusted IP addresses of specific internet servers.
By adding specific sources to a whitelist, we can expect to always receive emails from those accounts without checking the SPAM folder or antivirus quarantine (though it is still a good idea to run scans to protect against malicious software). Whitelists are useful for internal correspondence in large corporations, where we want to be sure an e-mail leaves us and reaches the recipient without problems, rather than getting lost along the way in folders such as “junk” or “unwanted mail”.
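The same blacklist/whitelist logic applies to ad placements. The following is a minimal sketch of how the two filtering modes differ; the domain names are hypothetical examples, not real recommendations:

```python
# Minimal sketch of blacklist/whitelist placement filtering.
# Domain names are hypothetical, for illustration only.

BLACKLIST = {"piracy-site.example", "hate-forum.example"}
WHITELIST = {"trusted-news.example", "verified-publisher.example"}

def placement_allowed(domain: str, strict: bool = False) -> bool:
    """Return True if an ad may be served on this domain.

    In strict mode only whitelisted domains pass (safer, smaller reach);
    otherwise everything not blacklisted passes (wider reach, more risk).
    """
    if domain in BLACKLIST:
        return False
    if strict:
        return domain in WHITELIST
    return True

print(placement_allowed("trusted-news.example", strict=True))  # True
print(placement_allowed("random-blog.example", strict=True))   # False
print(placement_allowed("random-blog.example"))                # True
```

The `strict` flag captures the trade-off discussed below: a pure whitelist is the safest policy but cuts reach, while blacklist-only filtering preserves reach at a higher risk.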
Advertisers are taking more proactive steps regarding where their content appears in an effort to improve online brand safety. A whitelist is an ideal solution for sensitive companies. Unfortunately, it is a double-edged sword: such restrictions can limit the reach of a campaign for the advertised product or service, because the fewer recipients there are in the network, the lower the reach. It is a trade-off.
Solutions based on algorithms
The internet undeniably creates new ways for brands to communicate with audiences. Companies such as Google, Facebook, YouTube and Twitter capitalize on their huge online reach and offer businesses the option to advertise on their platforms worldwide. However, as with many things in life, the message may not land the way we expected. After all, no one wants their company's logo, professionally designed at considerable cost, to appear in the company of an ad for, say, a well-known pornographic label. To avoid the threats lurking in the depths of the internet, it is worth looking into media monitoring. Companies offering such services help with brand safety and communication.
YouTube has been pushing such algorithmic solutions since 2017, when the site's advertising partners began to withdraw their product and service campaigns en masse because many of their advertisements were appearing alongside extremist content.
The question arises: how to choose the right content? For global marketers, the nightmare is managing multiple partners, sellers and suppliers while still enforcing brand-safety guidelines. There is no single brand-safety solution that fits every network. What works for one brand may not work for another. When designing safety mechanisms, many factors need to be taken into account: context, page quality, ad fraud, language, images, and more. How can a brand maximize reach without the risk of appearing in the wrong places?
How to care for brand safety?
- Use blacklists and whitelists
- Restrict publication on websites with a political context
- Monitor the list of websites you advertise on
- Limit advertising on UGC publishers
- Use contextual targeting
- Collaborate with trusted publishers and marketplaces
- Demand more responsibility from social platforms
- Use internet monitoring
- Cooperate with associations that set the brand’s safety standards
Human factor
Companies in the IT industry are constantly developing new content-analysis systems, which mostly operate on keywords and context. This works fairly well for text. It is harder with pictures, photos and video, where no ready-made solutions exist yet. The only remedy there is the human factor and appropriate content labeling on the publishers' side.
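A keyword-based text screen of the kind described above can be sketched in a few lines. The category lists here are toy examples, not a production taxonomy, and real systems also weigh context rather than matching bare words:

```python
# Toy keyword-based content screen: report which undesirable categories
# appear on a page. Category keyword lists are illustrative only.
import re

UNDESIRABLE = {
    "violence": {"weapon", "war", "murder"},
    "gambling": {"casino", "betting", "jackpot"},
}

def flag_page(text):
    """Return categories whose keywords appear in the page text."""
    words = set(re.findall(r"[a-z]+", text.lower()))
    hits = {}
    for category, keywords in UNDESIRABLE.items():
        matched = sorted(words & keywords)
        if matched:
            hits[category] = matched
    return hits

page = "Visit our casino for the biggest jackpot in town!"
print(flag_page(page))  # {'gambling': ['casino', 'jackpot']}
```

The sketch also shows why the article says images and video are harder: there is no equivalent of a simple word-set intersection for pixels, which is where human review and publisher-side labeling come in.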
Comprehensive media monitoring would help a great deal in collecting user feedback on a platform. The voice of people surfing the internet, the potential consumers, is essential here. Enterprises could benefit from involving users, who have proved open and willing to help when given the means to report content. Human participation in the content-verification process is still necessary.
Algorithms are a popular method, especially given the huge amount of information and the number of platforms on the market. These strictly defined sets of rules can be designed to offer users interesting and relevant topics based on specific preferences and online behaviors. However, the same systems can potentially be used to fuel disinformation and spread harmful ideas. So far, no algorithm can faultlessly decide what is true and what is false.
Today the internet is saturated with negative and untrue content. We must strive to make online platforms a safe and trustworthy environment for consumers and brands. The Joint Industry Committee for Web Standards (JICWEBS) can help here. This body was created by British advertising and media companies and is responsible for creating standards and codes for digital media. It works with industry organizations (IPA, IAB and AOP) to find solutions to problems such as viewability, ad fraud and brand safety. Companies can now obtain the organization's accreditation as proof that they follow full brand-safety procedures. For as long as the internet exists, there will be no unambiguous measure of what a “friendly” environment looks like. Ultimately, brand safety always involves a degree of human judgment.
Trusted Media Brands research
In the “Programmatic in the Era of Transparency” study, carried out in May 2018 by Trusted Media Brands on a group of 300 US marketers drawn from the TOP 200 list of US advertisers, 58% of participants named brand safety as their biggest fear when buying digital media. It was outranked only by the emphasis on ROI (62%) and concern for ad viewability (59%). Asked about the actions they take to improve brand safety, respondents most often indicated:
- using blacklists (66%)
- restricting websites to a political context (58%)
- monitoring service lists (56%)
- using whitelists (55%)
- avoiding UGC sites (48%)
- increasing the use of contextual targeting (47%)
- demanding more responsibility from social platforms (46%)
- increasing the use of programmatic guaranteed / market places (44%)