
Tricking Algorithms—Fraudsters’ Next Frontier

Computer scientists systematically tricked speaker verification systems into making false identifications

Keren-Or Grinberg and Orr Hirschauge | 18:05, 30.10.17
An Israeli computer lab has been researching ways to fool computer algorithms, including speaker verification systems similar to those used by banks, into making false identifications.

According to Joseph (Yossi) Keshet, who heads a deep learning laboratory at Israel’s Bar Ilan University, researchers on his team have systematically tricked speaker verification systems, similar to those used by Google and by banks, into identifying one speaker as another. To achieve this, the researchers fed the systems manipulated audio data, fooling them in more than half of the attempts.
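
The article does not detail the team's exact technique, but attacks of this kind are commonly built by nudging the raw waveform, a little at a time, until the system's "voiceprint" for the clip drifts toward a different speaker. The sketch below illustrates only that general idea: SpeakerEncoder is a hypothetical stand-in for a real speaker-embedding model, and the perturbation bound and optimizer settings are illustrative assumptions, not details from the research.

```python
# Hedged sketch: push an audio clip's speaker embedding toward a target
# speaker by optimizing a small additive perturbation of the waveform.
# SpeakerEncoder is a hypothetical stand-in, not a system named in the article.
import torch
import torch.nn as nn
import torch.nn.functional as F

class SpeakerEncoder(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv1d(1, 16, 9, stride=4), nn.ReLU(),
            nn.AdaptiveAvgPool1d(1), nn.Flatten(),
            nn.Linear(16, 64),
        )
    def forward(self, wav):
        return F.normalize(self.net(wav), dim=1)   # unit-length speaker embedding

encoder = SpeakerEncoder().eval()
source_wav = torch.randn(1, 1, 16000)                 # stand-in for one second of the attacker's voice
target_emb = F.normalize(torch.randn(1, 64), dim=1)   # stand-in embedding of the impersonated speaker

delta = torch.zeros_like(source_wav, requires_grad=True)   # additive perturbation being optimized
opt = torch.optim.Adam([delta], lr=1e-3)

for _ in range(200):
    adv = source_wav + delta.clamp(-0.01, 0.01)        # keep the change small and nearly inaudible
    loss = 1 - F.cosine_similarity(encoder(adv), target_emb).mean()
    opt.zero_grad(); loss.backward(); opt.step()

# If the similarity to the target embedding exceeds the system's decision
# threshold, the manipulated clip is accepted as the target speaker.
```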

Professor Joseph (Yossi) Keshet

Speaking at Calcalist’s digital and mobile conference held in Tel-Aviv Monday, Mr. Keshet said past research has shown how pictures can be manipulated to trick image recognition algorithms into making false identifications with a high reported level of confidence. For example, an image recognition algorithm can be fooled into identifying a picture of a giant panda as a gibbon with 99% confidence. Similarly, a computer has been tricked into misclassifying images of a car, a truck, and a dog as an ostrich.
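
Such confidently wrong predictions are typically produced with a gradient-based perturbation, for example the fast gradient sign method (FGSM); the article does not say which attack the cited experiments used. A minimal sketch, assuming a stand-in classifier, class index, and step size:

```python
# Minimal FGSM sketch: take one small step along the sign of the loss
# gradient with respect to the image. The tiny classifier, class index,
# and epsilon below are illustrative stand-ins, not the systems cited above.
import torch
import torch.nn as nn
import torch.nn.functional as F

model = nn.Sequential(                       # stand-in for a trained image classifier
    nn.Conv2d(3, 8, 3, padding=1), nn.ReLU(),
    nn.AdaptiveAvgPool2d(1), nn.Flatten(),
    nn.Linear(8, 1000),
).eval()

image = torch.rand(1, 3, 224, 224, requires_grad=True)   # stand-in for a real photo
true_label = torch.tensor([388])                          # e.g. the "giant panda" class

loss = F.cross_entropy(model(image), true_label)          # loss for the correct label
loss.backward()

epsilon = 0.007                                            # change per pixel, too small to notice
adversarial = (image + epsilon * image.grad.sign()).clamp(0, 1)

with torch.no_grad():
    probs = F.softmax(model(adversarial), dim=1)
print("predicted class:", probs.argmax().item(), "confidence:", probs.max().item())
```

Taking a single step along the sign of the gradient keeps each pixel change tiny while still moving the whole image across a trained model's decision boundary, which is how a nearly identical picture can receive a confidently wrong label.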

“There’s a fundamental problem here,” Mr. Keshet said.

For these manipulations to work, not all of the input data needs to be altered. For example, by wearing printed cardboard glasses carrying the right visual pattern, researchers have tricked facial recognition algorithms into making false identifications.
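
As a rough illustration of why changing only a small region can suffice, the sketch below restricts the perturbation to a glasses-shaped mask over the eyes and leaves every other pixel untouched. The stand-in face classifier, mask coordinates, and optimizer settings are assumptions for illustration, not details from the published attack.

```python
# Sketch of a perturbation confined to a small region of the image, in the
# spirit of the printed-glasses attack: only pixels inside the mask change.
# Model, mask shape, and settings are illustrative assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F

face_model = nn.Sequential(nn.Flatten(), nn.Linear(3 * 64 * 64, 10))  # stand-in face classifier
face = torch.rand(1, 3, 64, 64)                    # stand-in for the attacker's face image
target_identity = torch.tensor([7])                # identity the attacker wants to be mistaken for

mask = torch.zeros_like(face)
mask[:, :, 20:28, 8:56] = 1.0                      # rough "glasses" band across the eyes

patch = torch.zeros_like(face, requires_grad=True) # the printed pattern being optimized
opt = torch.optim.Adam([patch], lr=0.05)

for _ in range(100):
    adv = face * (1 - mask) + patch.clamp(0, 1) * mask          # edit only the masked region
    loss = F.cross_entropy(face_model(adv), target_identity)    # pull prediction toward the target
    opt.zero_grad(); loss.backward(); opt.step()
```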

According to Mr. Keshet, such manipulations have repeatedly tricked machine learning services by MetaMind, Google, and Amazon. Amazon’s machine learning service, offered through the company’s cloud services, was fooled over 96% of the time, and Google’s algorithms have been tricked over 97% of the time.

“I don’t hack into their systems; I don’t know their IP address, I just manipulate the pictures fed to the algorithm,” Mr. Keshet said.

Such manipulations can also be performed with systems used for autonomous driving. Collaborating with Mr. Keshet’s team, researchers at Facebook Research in Paris have made such a system falsely classify pedestrians, a sidewalk, and the main road. The research is scheduled to be presented at NIPS (Neural Information Processing Systems), a machine learning conference to be held in Long Beach, California in December.

Combined with growing fears over hackers’ ability to take over connected cars, the method presented by Mr. Keshet has alarming implications.

“There’s hardly any noticeable difference between the original pictures and the manipulated ones,” Mr. Keshet said.

The researchers also made a Microsoft Kinect falsely identify body postures.

By adding noise to an audio feed, the researchers were also able to throw off Google Voice, making it erroneously identify spoken words.