TEL AVIV, Israel — U.S. tech giants are quietly empowering Israel to track and eliminate suspected militants more swiftly in Gaza and Lebanon through a sharp increase in artificial intelligence and computing services. However, the rising death toll among civilians has sparked fears that these advanced tools are contributing to the loss of innocent lives.
For years, armies have relied on private companies to build custom autonomous weapons. Israel's recent campaigns, however, stand out as a leading example of commercially available AI, predominantly made in the United States, being used in active combat, despite concerns that these technologies were never designed to help make life-and-death decisions.
The Emergence of AI in Warfare
The Israeli military now employs AI to sift through vast troves of intelligence, intercepted communications, and surveillance data, seeking to flag suspicious behavior and track the movements of potential adversaries. Following the devastating Hamas attack of October 7, 2023, which killed approximately 1,200 people and saw more than 250 taken hostage, the military's use of Microsoft and OpenAI technology surged.
According to an Associated Press investigation based on internal documents, interviews with Israeli officials, and company employees, the military’s efforts to eradicate Hamas accelerated significantly. Since the conflict began, over 50,000 people have perished in Gaza and Lebanon, with around 70% of Gaza’s buildings damaged or destroyed.
Intense Data Processing
The Israeli military's use of Microsoft's cloud services surged to nearly 200 times its level before the October 7 attack as the war intensified. The amount of data stored on Microsoft's servers doubled between March and July 2024, reaching over 13.6 petabytes, roughly 350 times the digital memory needed to store every book in the Library of Congress.
While the Israeli military claims these AI tools enhance target identification and minimize collateral damage, critics argue the technology can lead to errors. Faulty data or flawed algorithms may result in misidentification of targets, impacting both military operations and civilian lives.
Human vs. Machine in Decision-Making
Despite the reliance on AI, the Israeli military maintains that human analysts and high-ranking officers independently review every AI-generated recommendation to ensure compliance with international law. This dual-check system aims to balance military effectiveness with minimizing civilian casualties.
“These AI tools make the intelligence process more accurate and effective,” an official Israeli military statement claims. “They accelerate target identification without compromising accuracy and often help reduce civilian harm.”
However, skeptics contend that human review introduces its own vulnerabilities, especially in high-pressure scenarios. Young officers may jump to conclusions based on AI-generated data, leading to potential targeting mistakes.
The Case of the Hijazi Family
In November 2023, an air strike mistakenly hit the vehicle driven by Hoda Hijazi and her family near the Lebanese border, killing her mother and three young daughters. Before the attack, the family took precautions to indicate the presence of children, hoping to avoid the strike.
Video footage from a nearby convenience store shows the car driven by Hoda Hijazi hours before the attack. The footage is consistent with the strike location later released by the Israeli military, suggesting the mistake occurred during target identification or verification.
The incident highlights the potential risks of AI-driven warfare, where even with careful processes, errors can happen. An intelligence officer admits that AI could misidentify residences, leading to innocent civilians being targeted.
Support from Major Tech Firms
Beyond Microsoft and OpenAI, Google, Amazon, and other tech giants provide cloud computing and AI services to the Israeli military. Google’s and Amazon’s $1.2 billion contract, known as “Project Nimbus,” began in 2021. Cisco, Dell, Red Hat, and Palantir Technologies also offer AI and cloud solutions to Israel.
Microsoft uses its Azure platform to transcribe, translate, and compile intelligence data, including intercepted communications. OpenAI models, accessible through Azure, assist Israel in sifting through vast amounts of data to find patterns and movements associated with militants.
However, these models carry their own risks. OpenAI's Whisper transcription and translation model, for example, occasionally generates text that was never spoken, including racist remarks and violent rhetoric. This raises ethical questions about relying on such AI for critical military decisions.
Errors and Overconfidence
While the military insists on human oversight, mistakes still occur. An intelligence officer recounted an instance in which a faulty translation caused a word meaning payment to be read as a reference to handling rocket-propelled grenades, incorrectly adding individuals to target lists.
Another example involved a group of high school students being misidentified as potential militants because an exam list had been attached to their profiles. These errors, while rare, underscore the importance of rigorous data verification.
Over-reliance on AI can also entrench biases. “Confirmation bias can prevent people from investigating further,” warns Tal Mimran, a former reserve legal officer for the Israeli military. “Some might be too quick to accept AI recommendations without questioning their validity.”
Growing Ethical Concerns
The ethical implications of tech companies assisting in military operations extend beyond Israel. Some U.S. employees have expressed deep moral objections to their involvement with military contracts.
In October 2024, Microsoft fired two workers for organizing an unauthorized lunchtime vigil for Palestinian refugees. The company maintained that the terminations were in line with internal policy but did not provide specific details.
Google also terminated about 50 employees who had protested its Project Nimbus work for the Israeli government. While the company cited disruption of workspaces as the reason, the firings highlighted tensions between employee activism and corporate interests.
AI and National Security
Despite pushback from concerned employees, some tech firms continue to expand their military partnerships. OpenAI modified its terms of use in 2024 to allow for "national security use cases" aligned with its mission. Google followed suit earlier this year, revising its policies to support national security efforts.
These changes reflect a broader trend in which U.S. tech giants push for increased military contracts, echoing similar moves under previous U.S. administrations. Alexander Karp, CEO of Palantir Technologies, advocates for closer collaboration between the U.S. military and Silicon Valley.
“The fate of the United States, and its allies, depends on the ability of their defense agencies to evolve rapidly,” Karp wrote in a new book. “This evolution must include advanced AI weaponry like drone swarms and robots.”
Feeling the Impact
While high-level debates about AI in warfare continue, those affected by the conflict demand accountability. Mahmoud Adnan Chour, the father of the three children killed in the mistakenly struck car, remains unsatisfied. "Why did the plane choose this car, filled with laughter?" he asks.
The ongoing ceasefire in Gaza provides temporary respite, but the Israeli government has announced plans to further develop AI across all its military units. The question remains: How can technology be leveraged ethically in warfare while minimizing collateral damage?
As tech companies compete for military business, the human toll of faulty AI systems becomes a critical issue. Ensuring transparency, ethical guidelines, and rigorous testing processes is paramount to preventing future tragedies.
Seeking Clarity
The complex relationship between U.S. tech firms and the Israeli military raises crucial questions about responsibility and accountability. The fate of thousands of lives hangs in the balance as these advanced technologies shape the battlefield.
As the conflict continues, the voices of concerned employees, intelligence officers, and victims' families echo the need for a more thorough examination of AI's role in warfare. The future will depend on whether tech giants can strike a balance between innovation and ethical conduct.
Contact AP’s global investigative team at [email protected] or https://www.ap.org/tips/ to share more information or seek clarification.