“Skypes, Skittles, and Yahoos”: Digital Dog Whistles signal new language for online hate

According to University of Haifa professor Gabriel Weimann, online extremists are creating their own language to bypass new rules put in place against hate speech

James Spiro | 14:15, 09.02.21

Far-right social media users are adopting a new form of hate-slang to refer to Jews, Muslims, Mexicans, and other minority groups, according to a new paper by Professor Gabriel Weimann of the University of Haifa and Ari Ben Am, a web intelligence analyst. Extremists online are understood to be adopting the new language to circumvent the abuse policies recently put in place by social media platforms, while also avoiding detection by automated systems such as Google’s conversational AI.

According to the paper, titled 'Digital Dog Whistles: The New Online Language of Extremism': “Jews are being referred to as ‘Skypes,’ African-Americans are labeled ‘Googles,’ Latinos are described as ‘Yahoos,’ and Muslims are called ‘Skittles.’” It is understood that ‘Skype’ was chosen for its similar sound to the anti-Semitic slur ‘kike,’ and that ‘Skittles’ references Donald Trump Jr.’s comparison of refugees to a bowl of Skittles, some of which he claimed “would kill you,” made during his father’s 2016 election campaign.
Extremists online are adopting new language to try to circumvent new abuse policies. Photo: Shutterstock

The paper highlights an emerging trend of hateful online language, which Weimann calls “the systematic use of innocuous words to stand in for offensive racial slurs.” It details the effort, known as ‘Operation Google,’ through which users disguise their language so they can keep communicating and spreading their beliefs while staying under the radar.

“Terrorists and extremists groups are communicating sometimes openly but very often in concealed formats,” Professor Weimann wrote. “Alarmed by police and security forces attempts to find them online and by the social platforms attempts to remove their contents, they try to apply the new language of codes and doublespeak. A study conducted in 2019 revealed how white supremacists use coded language on social media networks to promote violence, terror, and radicalism.”

The danger of coded language for expressing hateful or violent ideas is that it becomes increasingly difficult for human moderators or AI to identify and remove. Researchers who combat online hate have repeatedly found that automatic detection of racist, anti-Semitic, or neo-Nazi messaging can be defeated by simple substitutions, such as ‘$’ in place of the letter ‘S’ or numbers in place of vowels. With an entirely new vocabulary of different words and phrases, detection can become near impossible.
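To see why such tricks defeat simple filters, consider a minimal sketch of a keyword-based detector (illustrative only, written in Python; the blocklist entry and character-swap table are hypothetical placeholders, not any platform’s actual rules):

# A minimal illustrative sketch, not any platform's actual system: a naive
# keyword filter with simple character "un-swapping". The blocklist entry and
# swap table below are hypothetical placeholders.

BLOCKLIST = {"slur"}  # stand-in for a list of known offensive terms

def normalize(token: str) -> str:
    """Undo common character swaps such as '$' for 's' or digits for vowels."""
    swaps = str.maketrans({"$": "s", "0": "o", "1": "i", "3": "e", "4": "a"})
    return token.lower().translate(swaps)

def flags_hate(text: str) -> bool:
    """Flag the text if any normalized token appears on the blocklist."""
    return any(normalize(token) in BLOCKLIST for token in text.split())

print(flags_hate("$lur"))    # True  - character swaps can be normalized away
print(flags_hate("Skypes"))  # False - a coded word looks innocuous on its own

Once the swap table is in place, leetspeak-style substitutions are caught, but a coded word such as ‘Skypes’ still passes, because the surface word is innocuous and its hateful meaning exists only in context.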


While hate speech has been well documented on platforms like Twitter, TikTok, Parler, and others, the paper focused primarily on content uploaded to Facebook and publicly visible on pages, groups, and posts. For years, Facebook has faced criticism for its delay in labeling Holocaust denial as hate speech, and it recently angered many Jews again when it admitted that its handling of the term ‘Zionist’ is under review as it tries to distinguish political discourse from anti-Semitism.

Weimann’s paper arrives at a delicate moment in the debate over where hate speech ends and free speech begins online. While private tech companies are doing what they can with AI to curb online hate, they can also permanently ban world leaders with almost no government oversight. The paper argues that there should be clear cooperation between law enforcement and private-sector bodies interested in preventing hateful speech.

“Finding nonintrusive and privacy and civil-rights oriented methods of cooperating with Law Enforcement should be of paramount importance to Facebook and other social media platforms to identify and monitor imminently dangerous online extremists,” he concluded.

