New Mechanisms “Guarantee” Transparency Ahead of U.S. Midterm Elections, Facebook Executive Says

Facebook’s top security executive Guy Rosen spoke to Calcalist about the technology behind the company’s war on fake news

Omer Kabir | 18:06, 12.07.18
At the end of 2017, Facebook's vice president of product management Guy Rosen received a personal email from Mark Zuckerberg, sent days before the budget for 2018 was allotted. "Before I begin, how much do you need?" Zuckerberg asked, making it clear that Rosen had a blank check as far as resources went.

"At that moment, the penny dropped," Rosen said Tuesday in an interview with Calcalist. "We were the top priority."

Rosen is Facebook’s top security executive. He is in charge of dealing with some of the issues that have been at the heart of the criticism levied against the media company in recent years, such as fake news, hate speech, and graphic and violent content.

Guy Rosen. Photo: Avigail Uzi

He joined Facebook in 2013, when Onavo Mobile Ltd., a mobile data analytics startup he co-founded, was acquired by the company and turned into Facebook's research and development center in Israel. Today, Rosen works out of Facebook's headquarters in Menlo Park, California.

Rosen has been leading the unit in charge of the technological aspects of user safety at Facebook since 2016. It was in 2016 that Facebook realized it was not doing enough to ensure the safety of its users, Rosen said in an interview with Calcalist on Tuesday. The company has since invested vast amounts of monetary and human resources in addressing the problem, Rosen said.

The term "fake news" has risen to infamy in the context of the 2016 U.S. election, with many laying the blame at the feet of social media outlets like Facebook, through which disinformation and fraudulent stories were often spread. The company has been lambasted for not doing enough to counteract such content.

Despite initially dismissing as "crazy" the claim that fake news spread on Facebook might have influenced the election, Facebook’s founder and CEO Mark Zuckerberg later stated the company "didn't do enough to prevent these tools from being used for harm as well.”

Like Zuckerberg, Rosen owns up to Facebook's earlier idleness, but states the company has learned the lesson well. "If there's something that characterizes the company, it is its ability to alter its course," he said. "Everyone always talks about the shift to mobile in 2012. The company understood that mobile is the new playing field. We were a desktop company, and we made an abrupt change, helmed by Mark."

Facebook CEO Mark Zuckerberg testifies before the U.S. Senate. Photo: AP

Focus on user safety is a change of the same scale, according to Rosen. "Everything we do has to be oriented towards community safety, taking into consideration everything that could go wrong."

To combat fake news, Rosen’s team has declared a war on fake accounts. “The general belief is that fake news is driven by political motivation. In fact, most fake news is driven by financial motivation,” Rosen said.

“Much of our work is concentrated on limiting the profitability of fake news,” he said.

His team is also collaborating with organizations that specialize in fact-checking. “When a fact checker detects fake news, our system significantly lowers its circulation, by about 80%, and we add links to related articles to provide a different viewpoint.”

Rosen believes that what happened around the 2016 elections will not repeat itself as the 2018 U.S. midterm elections draw near. In May, Facebook began labeling political ads on the Facebook and Instagram platforms, as well as including a “paid for by” disclosure. “We’re making big changes to the way ads with political content work on Facebook to help prevent abuse, especially during elections,” Facebook said in a post announcing the new feature.

“We are still in the midst of development, but the defense mechanisms we have put in place can guarantee that what happened in 2016 will not happen again, as far as transparency goes,” Rosen said. “On the other hand, there are people who go to work in the morning and their job is to figure out how to bypass the barriers we put in place,” he said.

Facebook has people trying to hack its own systems and barricades to identify potential exploitation methods, he said. “We also work with law enforcement agencies and an FBI task force,” he added.

According to Rosen, what fuels fake news is a general distrust in institutions and people's tendency to favor conspiratorial narratives. “We are studying this issue with academics, trying to understand what we can add to help people understand,” he said.

U.S. President Donald Trump. Photo: Reuters

In its self-declared fight against fake news, Facebook is facing powerful players who are trying to shape the global agenda and public opinion in their favor. One of these players is Russia. After learning about the role Cambridge Analytica played, Facebook had to “shift to a higher gear,” Rosen said.

“We had to internalize the lessons and change the way we were treating the issue of privacy.”

While Facebook is working to turn its namesake social platform into the face of the fight against fake news, other platforms owned by the company pose unique challenges. WhatsApp, Facebook's encrypted instant messaging platform, has become fertile ground for the distribution of fake news and other forbidden content, such as violent or obscene images and videos.

“Our approach is to figure out what tools we can give our users to help them make the right decisions,” Rosen said. This week, Facebook began marking forwarded messages. In Mexico, Rosen said, Facebook launched a pilot in collaboration with a local fact-checking organization, in which WhatsApp users could send messages they received to a designated WhatsApp chat and receive a reply confirming or refuting the information.

A significant part of Rosen’s job at Facebook has to do with overseeing the company’s content filtering operation, whose role it is to detect and remove content that violates Facebook’s policies, such as hate speech and nudity. Rosen leads the technological part of this operation, but the company also engages a team of human content moderators whose job it is to go over content flagged on the platform and remove anything that is at odds with the company’s policies.

Facebook’s post removal policies are the fruit of countless hours of debate and consultation with researchers and human rights organizations, Rosen said. “We have to balance between freedom of speech and maintaining a safe place,” he said. According to Rosen, posts flagged for hate speech are only removed after one of Facebook’s content moderators confirms them as such. Policing nudity, on the other hand, is the domain of the company’s AI algorithm, which removes some risky posts before they are even viewed.

In March, U.N. human rights officials said Facebook was complicit in spreading hate speech and fake news that fueled violence against the Muslim Rohingya minority in Myanmar. According to Medecins Sans Frontieres (MSF), at least 6,700 Rohingya men, women, and children were killed in the first month of violence between August and September of 2017, and many women and girls were raped by Myanmar military troops, in what is widely considered an act of ethnic cleansing in the country.

“The situation is horrible,” Rosen said. “Myanmar made a very quick transformation into the world of mobile and internet, and we were too slow in working with local organizations and applying our tools.” Last month, Facebook sent a team to Myanmar to study the issue, and has been working more closely with local organizations since, Rosen said. The company also hired additional Burmese-speaking content moderators.


“It is not enough that our moderators speak the language, they have to understand the culture,” Rosen said. “Much of the propaganda and hate speech is nuanced. We also began working with local organizations to develop digital literacy education programs to help locals understand how to use the internet.”

A specialized team at Facebook is working to train a Burmese hate speech detection AI algorithm, he added. While Burmese is a complex language that poses great challenges to Facebook’s AI team, Rosen said the company has been able to improve its rate of response to hate speech content in the country.