Microsoft denies its AI and cloud services were used to harm people in Gaza

Company says internal and external reviews found no misuse by the Israeli military.

CTech | 17:13, 16.05.25

Microsoft said Thursday that it has found no evidence that its Azure or AI technologies have been used by the Israeli military to target or harm civilians in Gaza, following what it described as growing concerns from employees and the public in recent months.

The company conducted an internal review and commissioned an external firm to perform additional fact-finding after media reports and employee questions surfaced about the use of its technology in the Gaza conflict. According to Microsoft, the investigations included interviews with dozens of employees and an assessment of relevant documents. Microsoft’s announcement comes just days before its Build developer conference in Seattle, where the activist group No Azure for Apartheid has pledged to stage another protest.

Microsoft exhibition.

“Based on these reviews,” the company said, “we have found no evidence to date that Microsoft’s Azure and AI technologies have been used to target or harm people in the conflict in Gaza.”

Microsoft confirmed that it works with countries and customers around the world, including Israel’s Ministry of Defense (IMOD), and supplies the ministry with software, professional services, Azure cloud services, and Azure AI services, including language translation. It also provides cybersecurity support to the Israeli government to protect against external threats.

The company said this relationship is structured as a “standard commercial relationship” governed by Microsoft’s terms of service, its Acceptable Use Policy, and its AI Code of Conduct. These conditions prohibit the use of Microsoft technologies to inflict harm or violate legal boundaries and require responsible AI practices such as human oversight and access controls.

“We have found no evidence that Microsoft’s Azure and AI technologies, or any of our other software, have been used to harm people or that IMOD has failed to comply with our terms of service or our AI Code of Conduct,” the company stated.


However, Microsoft acknowledged that it provided “limited emergency support” to the Israeli government in the weeks after the October 7, 2023, attacks, specifically to assist in hostage rescue efforts. This support was described as exceptional and subject to high-level oversight. The company said it approved some requests and denied others and aimed to balance urgent rescue operations with respect for the privacy and rights of civilians in Gaza.

Microsoft also emphasized that it does not have visibility into how customers use on-premises software or operate their own systems, including the IMOD’s use of non-Microsoft government cloud providers. “By definition, our reviews do not cover these situations,” the company said.

The company noted that it has not created or provided surveillance or targeting software for the Israeli military and said such functions are typically carried out using proprietary or defense-specific tools not supplied by Microsoft.

Reaffirming its broader position, Microsoft said it is committed to human rights principles and to providing humanitarian assistance in both Israel and Gaza. “Based on everything we currently know, we believe Microsoft has abided by these Commitments in Israel and Gaza,” the statement concluded.
