U.S. tech giants, including Microsoft and OpenAI, have significantly bolstered Israel’s military capabilities in its ongoing conflicts in Gaza and Lebanon through advanced artificial intelligence (AI) systems. The surge in AI-driven warfare, however, has also brought a sharp increase in civilian casualties, raising ethical concerns about the role of technology in determining who lives and who dies.
An Associated Press investigation revealed that Israel’s military has relied heavily on AI to sift through vast amounts of intelligence, intercepted communications, and surveillance data to identify and target suspected militants. Since the October 7, 2023, Hamas attack that killed about 1,200 people in Israel and took more than 250 hostages, the Israeli military’s use of Microsoft and OpenAI technology has skyrocketed. Internal documents and interviews with current and former officials show that AI systems are now integral to Israel’s targeting process, though flawed data or faulty algorithms have led to tragic mistakes.
“This is the first confirmation we have gotten that commercial AI models are directly being used in warfare,” said Heidy Khlaaf, chief AI scientist at the AI Now Institute. “The implications are enormous for the role of tech in enabling this type of unethical and unlawful warfare.”
The Israeli military has defended its use of AI, stating that it enhances accuracy and efficiency while minimizing civilian casualties. However, the AP’s investigation found that over 50,000 people have died in Gaza and Lebanon since the war began, with nearly 70% of buildings in Gaza destroyed.
One tragic example occurred in November 2023, when an Israeli airstrike killed three young girls and their grandmother in southern Lebanon. The family had been fleeing clashes near the border, and security footage showed the car filled with children moments before the strike. The Israeli military later admitted the strike was a mistake but did not clarify whether AI played a role in the targeting decision.
Microsoft and OpenAI have not commented on their involvement, though internal documents reveal a $133 million contract between Microsoft and Israel’s Ministry of Defense. OpenAI’s usage policies prohibit the development of weapons, but the company recently amended its terms to allow for “national security use cases.”
As U.S. tech companies deepen their ties with militaries worldwide, the ethical implications of AI in warfare remain a pressing concern. Critics argue that the use of commercial AI models in conflict zones risks normalizing automated warfare and eroding accountability.
“Cloud and AI are the bombs and bullets of the 21st century,” said Hossam Nasr, a former Microsoft employee advocating for the company to end its military contracts. “Microsoft is providing the Israeli military with digital weapons to kill, maim, and displace Palestinians.”
The AP’s findings highlight the growing intersection of technology and warfare, raising urgent questions about the future of AI in global conflicts.