To help advertisers, Facebook is developing tools that let them keep their ads in its news feed away from harmful content and certain topics.
The company said it will begin by testing its topic exclusion controls with a small group of advertisers.
The company explained that a children's games maker, for example, could choose to avoid content related to crime and tragedy.
Other excludable topics include news, politics, and social issues. The company indicated that developing and testing the topic exclusion controls will take most of the year.
Facebook, along with players such as Google and Twitter, works with marketers and agencies through a group called the Global Alliance for Responsible Media, or GARM, to develop standards in this area.
The group has been working on measures to protect consumers and advertisers, including common definitions of harmful content, reporting standards, independent oversight, and a commitment to build tools that give advertisers better control over the content their ads appear next to.
Facebook's news feed controls build on tools it already offers in other areas of the platform, such as in-stream video and its Audience Network, which lets developers serve in-app ads based on Facebook data.
Brand safety matters to any advertiser that wants to ensure its ads do not appear near certain topics, and there has been growing industry pressure to make platforms like Facebook safer.
"It has moved from brand safety to focus more on societal safety," said the CEO of the World Federation of Advertisers, which created the Global Alliance for Responsible Media.
Advertising helps fund free online content, and many advertisers say they feel responsible for what happens on the ad-supported web.
This was vividly illustrated last summer, when a large number of advertisers temporarily boycotted Facebook, demanding that it take tougher steps to stop the spread of hate speech and disinformation on its platform.
Not only did some of these advertisers want to steer their ads away from offensive or discriminatory content; they also wanted a plan to ensure such content was removed from the platform entirely.
Advertisers have complained for years that the major social media companies do too little to prevent ads from appearing alongside hate speech, fake news, and other harmful content.
In September, Facebook, Twitter, and YouTube signed an agreement with major advertisers to curb harmful online content.