Facebook Executive Describes Monumental Task of Detecting and Removing Terrorism Content

A director of counterterrorism policy at the social network briefed Israeli and American homeland security officials in Jerusalem about the company’s efforts to combat extremist speech

Asaf Shalev | 09:37, 15.06.18
A director of counterterrorism policy at Facebook briefed Israeli and American homeland security officials on Wednesday about the company’s efforts to combat extremist speech, as part of a three-day conference in Jerusalem.

Speaking at the first International Homeland Security Forum, Erin Marie Saltman discussed the social network’s first-ever Community Standards Enforcement Report, released last month. According to the report, Facebook removed 1.9 million pieces of content related to the terrorist groups Al Qaeda and ISIS in the first quarter of the year, and 99.5% of that content was caught by the company’s own screening process before any user reported it.

Facebook offices. Photo: AFP
“When it comes to violent extremism policies, our policy line does say that we do not allow violent organizations to have a presence on Facebook,” Ms. Saltman said. “This means they cannot use Facebook even just to talk about their favorite radio stations.”

“We have a team that is trained to understand these organizations and remove pages, posts, groups and profiles they might have,” Ms. Saltman added. “Once designated, no one is allowed to support, praise, or promote these groups, so even if a person who is not involved in terrorism says something to the effect of ‘ISIS is great,’ that content, when flagged to us, would also be removed.”

The company indexes extremist videos and photos using what are known as hashes, which allows the network to automatically remove content that has been previously flagged.

“If somebody is raw-sharing and uploading terrorist content without context, the machine can actually make a binary decision to remove that without it ever hitting the platform,” Ms. Saltman said.
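To make the mechanism concrete, here is a minimal Python sketch of hash-based matching. It is not Facebook’s actual implementation: every name below is hypothetical, and production systems use perceptual hashes designed to survive re-encoding and cropping, whereas this sketch uses a plain cryptographic digest for simplicity.

```python
import hashlib

# Hypothetical index of hashes of previously flagged terrorist media.
# A production system would hold millions of entries and use perceptual
# hashes (robust to re-encoding); SHA-256 stands in here for simplicity.
KNOWN_EXTREMIST_HASHES = {
    "3b8a1c0f9d2e...",  # placeholder entry, not a real hash
}

def media_hash(data: bytes) -> str:
    """Fingerprint the raw media bytes; stands in for a perceptual hash."""
    return hashlib.sha256(data).hexdigest()

def should_block_upload(data: bytes) -> bool:
    """The 'binary decision': block if the upload matches a known item."""
    return media_hash(data) in KNOWN_EXTREMIST_HASHES

# An upload whose hash matches the index never reaches the platform.
upload = b"...raw media bytes..."
if should_block_upload(upload):
    print("Removed before publication")
else:
    print("Passed to further screening or human review")
```

A set lookup like this is constant-time per upload, which is what makes automatic pre-publication screening plausible at the scale Ms. Saltman describes.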

Sometimes, however, activists, officials, and journalists use imagery from terrorist groups in order to expose or criticize them. In those cases, human judgment must kick in. Ms. Saltman said that the company’s human reviewers speak more than 40 languages and receive special training to determine the local context of flagged content.

Facebook grew its global online safety workforce from 4,000 to 7,500 people over the past year and plans to reach 10,000 by the end of 2018, according to Ms. Saltman.

These efforts support a larger initiative by Facebook, Google, Twitter, Microsoft, and others called the Global Internet Forum to Counter Terrorism.

Through GIFCT, the large tech companies share technology and knowledge with smaller platforms that may not have accumulated experience in identifying and removing harmful content. “We should all be learning from one another,” Ms. Saltman said.