From Crisis Response to Accessibility Tools: Some Recent Developments From Google’s Israel R&D Center
“There is nothing new about solving problems with tech, but AI can help us accomplish things that were just science fiction until now,” Yossi Matias, head of Google’s research and development center in Israel, said at a press conference Wednesday
Omer Kabir | 17:55, 19.05.19
Yossi Matias, head of Google’s research and development center in Israel, stood in his office in the northern Israeli city of Haifa in 2010 and watched as thick smoke devoured the nearby Mount Carmel Forest. The forest burned for four days straight, resulting in 44 fatalities. “I did not know what was going on, did not know if I needed to evacuate the office, there was no information online,” Matias said, speaking at a press conference held in Tel Aviv Wednesday. “I called the city and they referred me to the police, which had the information,” he said. “I asked my crew to make the information I got available on Google search results, and in just a few hours, emergency guidelines were integrated into search results for the first time.”
This anecdote marked the birth of Google’s crisis response products. Its first development, SOS Alerts, pushes relevant content from authorities across all Google products during natural or man-made disasters in a user’s area. Since its launch in 2017, the tool has been used in 250 crisis situations worldwide, and information distributed through it has been viewed 2 billion times.

Earlier this month, at its annual I/O developer conference, Google presented its latest crisis response product, which, like SOS Alerts, was developed in Israel. “Floods are among the most common and most deadly natural disasters, but much of the damage they cause is avoidable,” said Sella Nevo, a senior software engineer at Google Israel who heads the company’s flood forecasting initiative, speaking at the Tel Aviv press conference. “Between one third and one half of the damage caused by floods can be prevented with a good enough alert system,” he said. “The problem is that in most parts of the world, and especially in developing countries, there is not enough data or computational capacity to establish accurate alert systems. Existing data is so inaccurate that such systems are difficult to implement.”

Nevo’s team created an algorithm that scans a large number of aerial and satellite photos to assess ground elevation and create topographic maps with 90 times the resolution of existing maps. “We use the maps in combination with data on rivers to simulate the water’s movement in case of a flood,” Nevo said. “You can send this data to people to warn them of a flood in their area, show it on Google Maps to indicate which areas are safe and which are not, or integrate it into relevant search results,” he said. According to Nevo, last year the company conducted a pilot of the technology in India, where 20% of all global floods occur. This year, coverage has been extended to include the millions of people living along the Ganges and Brahmaputra rivers, he said.

In addition to the flood forecasting tool developed by Nevo and his team, several other developments by the Israeli R&D center were showcased. One of them is Live Relay, a tool that instantly converts speech to text and vice versa to help people who are hard of hearing conduct phone calls. According to Matias, Live Relay was derived from another Israeli development released last year, called Call Screen. “Call Screen helps users deal with unwanted phone calls,” Matias explained. “When I get a call from an unidentified number, my phone automatically asks the caller to identify themselves and state their business,” he said. The caller’s response is automatically transcribed into text, letting the user choose whether to pose more questions, accept the call, or report it as spam, he added.

“For years, I worked in my free time to develop technological solutions for people with disabilities,” Sapir Caduri, the software engineer who initiated Live Relay, said at the Tel Aviv event. “About a year ago, I decided to combine my hobby with my work,” Caduri said. The feature lets people who are hard of hearing conduct a phone call independently and type while the other party is speaking, she added. This means that both parties can communicate in the way that suits them, whether by speech or by text. Matias believes the feature can also serve the general public, regardless of physical limitations. “A lot of people are excited by the ability to make phone calls during meetings or in noisy environments; we all encounter situations in which this technology can be helpful,” he said.

Live Relay does not yet have an official launch date, but another feature for the hard of hearing, called Live Caption, will reach users with the release of Google’s latest mobile device, the Pixel 4, scheduled for later this year. “People who are deaf cannot watch videos without captions,” Matias said. “Live Caption can take any video and automatically transcribe its audio channel, whether it is stored on the device or streamed online,” he added.

Another Israeli development presented at the conference was the integration of a feature called Lens into Google Go, Google’s search app for cheap devices aimed at developing markets. The feature makes the device read aloud any text its camera is pointed at. “A year ago, we integrated a text-reading feature for websites into Google Go,” Matias said. “There are people who have trouble reading, and now they can consume written information through hearing,” he added. The feature also lets users listen to written texts while driving or running, as well as use Google Translate to read texts in different languages, he explained.

It is no accident that many of Google’s new tools focus on converting between text and speech, as voice has become one of the most dominant interfaces of the past few years. Not everyone, however, can work with voice-activated assistants, in particular people with speech impairments. To this end, Google started a research project called Euphonia, which trains speech recognition systems to recognize irregular speech patterns. “The idea was born from a meeting with an ALS (amyotrophic lateral sclerosis, also known as Lou Gehrig’s disease) patient, in an attempt to help patients,” Dotan Emanuel, who heads the project, said at the Tel Aviv event. “Speech recognition systems sometimes find it hard to identify irregular speech, but family members and caregivers do understand these people,” he explained. If people can learn to understand, it is also possible to teach and train artificial intelligence to do the same, he added. “The project focuses on severe speech impairments caused by Parkinson’s disease, stuttering, or deafness,” Emanuel said. The team used recordings of patients to train systems capable of understanding these particular people, he explained. The team’s efforts focused on Google Home, the company’s smart speaker, because many of these people are wheelchair-bound and find everyday activities, such as turning on the television or shutting off the lights, challenging.

According to Emanuel, the project has already had some initial success. One of the team members, an engineer with a hearing impairment, recorded 10 hours of speech that helped lower the software’s margin of error from 70% to just 10%, he said. There is still a lot of work to be done before universal systems capable of understanding everyone are in place, he cautioned, adding, “in the meantime, we are helping people on a personal level, and that is a worthy cause in and of itself.”

According to Matias, all the tools featured at the event grew out of independent initiatives by Google Israel workers. “We are examining the use of AI to solve social problems worldwide,” he said. “There is nothing new about solving problems with tech, but AI can help us accomplish things that were just science fiction up until now,” he added.