Sat 26 Apr 2025 3:29 pm - Jerusalem Time

Israeli AI experiments in the Gaza war raise ethical concerns.

Washington - Saeed Erekat


The New York Times reported in an investigation published on Saturday, April 26, 2025, that in late 2023, Israel aimed to assassinate Ibrahim al-Bayari, a senior Hamas leader in northern Gaza who helped plan the October 7, 2023, attack. However, Israeli intelligence was unable to locate al-Bayari, who was believed to be hiding in a network of tunnels under Gaza.


Therefore, according to the newspaper, "Israeli officers turned to a new military technology enhanced by artificial intelligence, according to three Israeli and American officials familiar with the events," noting that "this technology was developed a decade ago, but had not been used in combat. The hunt for al-Bayari provided new impetus to improve the tool, so engineers from Israel's Unit 8200, the Israeli counterpart to the US National Security Agency, rushed to integrate artificial intelligence into it, according to these people."


Shortly thereafter, Israel listened in on al-Bayari's calls and tested an AI audio tool, which provided an approximate location for where he was making his calls. Using this information, Israel ordered airstrikes targeting the area on October 31, 2023, killing al-Bayari. More than 125 civilians were also killed in the attack, according to Airwars, a London-based conflict monitoring organization.


The audio tool was just one example of how Israel has used the war in Gaza to rapidly test and deploy AI-powered military technologies to an unprecedented degree, according to interviews with nine American and Israeli defense officials, who spoke on condition of anonymity because the work is classified.


According to the newspaper, two people familiar with the programs said that many of these efforts were partnerships between Unit 8200 conscripts and reservists working for technology companies such as Google, Microsoft, and Meta (Facebook). Unit 8200 created what became known as "The Studio," an innovation center and a place to match experts with AI projects, the people said.


However, even as Israel races to develop its AI arsenal, deploying these technologies has sometimes led to mistaken identifications, wrongful arrests, and civilian deaths, according to Israeli and American officials. Some officials have grappled with the ethical implications of the AI tools, which could lead to increased surveillance and further killings of civilians.


European and US defense officials said no other country has been as active as Israel in testing AI tools in real-time combat, offering a glimpse into how these technologies could be used in future wars—and how they could go wrong.


"The urgent need to deal with the crisis has accelerated innovation, much of it powered by artificial intelligence," Hadas Lorber, head of the Institute for Applied Research in Artificial Intelligence at Israel's Holon Institute of Technology and a former senior director at Israel's National Security Council, told the newspaper. This has led to the development of game-changing technologies on the battlefield and advantages that have proven valuable in combat.


But Lorber said these technologies "also raise serious ethical questions." She warned that AI requires checks and balances, and that humans should make the final decisions.


An Israeli military spokeswoman said she could not comment on specific technologies due to their "secret nature." She added that Israel is "committed to the legal and responsible use of data technology tools," that the military is investigating the al-Bayari strike, and that it is "unable to provide any additional information until the investigation is complete."


Meta and Microsoft declined to comment to the newspaper. Google said it has "employees serving as reserve personnel in various countries around the world. The work these employees perform as reserve personnel is not associated with Google."


Israel has previously used the conflicts in Gaza and Lebanon to test and develop technological tools for its military, such as drones, phone hacking tools, and the Iron Dome defense system, which can help intercept short-range ballistic missiles.


The newspaper says: "After Hamas launched cross-border attacks on Israel on October 7, 2023, killing more than 1,200 people and taking 250 hostages, the use of artificial intelligence technologies was quickly authorized, according to four Israeli officials. They added that this led to cooperation between Unit 8200 and reservists at The Studio to quickly develop new artificial intelligence capabilities."


The newspaper quotes Avi Hasson, CEO of Startup Nation Central, an Israeli nonprofit that connects investors with companies, as saying that reservists from Meta, Google, and Microsoft have become a key element in driving innovation in the field of drones and data fusion. He added, "Reservists bring technical know-how and access to key technologies that were not available in the military."


The Israeli military quickly used artificial intelligence to enhance its drone fleet. Aviv Shapira, founder and CEO of Extend, a software and drone company working with the Israeli military, said AI-powered algorithms were used to build drones that identify and track targets remotely.


"In the past, guidance capabilities relied on focusing on the target image. Now, AI can recognize and track the object itself—whether a moving vehicle or a person—with pinpoint accuracy," he said. Shapira said his main clients, the Israeli military and the US Department of Defense, were aware of the ethical implications of AI in warfare and discussed the responsible use of this technology.


Three Israeli officers familiar with the program said that one of the tools developed by The Studio was an Arabic-language large language model. (+972 Magazine, an Israeli-Palestinian news site, previously reported on the model.)


Developers faced difficulty creating such a model due to the scarcity of Arabic data to train the technology. When such data was available, it was mostly in written Modern Standard Arabic, a more formal language than the dozens of dialects used in spoken Arabic.


The three officers said the Israeli military did not face this problem: it had decades of intercepted text messages, transcribed phone calls, and social media posts in spoken Arabic dialects. So Israeli officers created the large language model in the first few months of the war and built a chatbot to process queries in Arabic. Four Israeli officials said the tool was integrated with multimedia databases, allowing analysts to conduct complex searches across images and videos.
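
The article does not say how that integration works. A common way to let text queries search images and video is a shared embedding space; in the hypothetical sketch below, the index vectors stand in for the output of any CLIP-style encoder, and results are ranked by cosine similarity.

```python
# Hypothetical sketch of embedding-based cross-modal retrieval; the vectors
# here are random stand-ins for a real text/image encoder's output.
import numpy as np

def cosine(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-9))

def search(query_vec, index):
    """Rank stored media items by similarity to the query embedding."""
    return sorted(index, key=lambda item: cosine(query_vec, item["vec"]),
                  reverse=True)

rng = np.random.default_rng(0)
# Pre-computed index: each image or video keyframe stored with its vector.
index = [{"id": f"media_{i}", "vec": rng.normal(size=512)} for i in range(3)]

query_vec = rng.normal(size=512)  # would come from embedding a text query
for item in search(query_vec, index)[:2]:
    print(item["id"], round(cosine(query_vec, item["vec"]), 3))
```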


Three Israeli officers told the newspaper that when Israel assassinated Hezbollah leader Hassan Nasrallah last September, the chatbot analyzed responses across the Arabic-speaking world. The technology distinguished between different dialects in Lebanon to gauge public reaction, helping Israel assess whether there was public pressure to launch a counterstrike.
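
As a toy illustration only, gauging reaction this way reduces to two per-post classifiers, one for dialect and one for sentiment, whose outputs are aggregated by group; the stand-in functions below are deliberately trivial and bear no relation to the military's models.

```python
# Toy sketch: classify each post's dialect and sentiment with stand-in
# functions, then average sentiment per dialect group.
from collections import defaultdict

def classify_dialect(text):    # stand-in for a trained dialect classifier
    return "Levantine" if "شو" in text else "Other"

def classify_sentiment(text):  # stand-in for a trained sentiment model
    return 1 if "جيد" in text else -1

posts = ["شو صار؟", "الوضع جيد", "شو هالخبر"]
scores = defaultdict(list)
for post in posts:
    scores[classify_dialect(post)].append(classify_sentiment(post))

for dialect, values in scores.items():
    print(dialect, sum(values) / len(values))  # mean sentiment per group
```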


Two officers said the chatbot was sometimes unable to identify some modern slang terms and words translated from English into Arabic. One officer said this required Israeli intelligence officers with experience in various dialects to review and correct its work.


Two Israeli intelligence officers said the chatbot also occasionally gave incorrect answers—for example, returning images of pipes instead of guns. However, they said the AI tool significantly accelerated research and analysis.


After the October 7 attacks, the newspaper reported, Israel also began equipping cameras at temporary checkpoints between the north and south of the Gaza Strip with the ability to scan and send high-resolution images of Palestinians to AI-powered facial recognition software.


The system sometimes had difficulty identifying people whose faces were obscured, which led to the arrest and interrogation of Palestinians mistakenly identified by the facial recognition system, according to two Israeli intelligence officers.
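
That failure mode is consistent with how embedding-based face matching generally works: a probe image is reduced to a vector and compared against enrolled vectors, and an acceptance threshold trades missed matches against false identifications. A minimal sketch with made-up data:

```python
# Minimal sketch of threshold-based face matching; vectors are synthetic
# stand-ins for a real face-embedding model's output.
import numpy as np

def cosine(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-9))

def identify(probe, gallery, threshold=0.6):
    """Return the best-matching identity, or None if below the threshold."""
    name, vec = max(gallery.items(), key=lambda kv: cosine(probe, kv[1]))
    score = cosine(probe, vec)
    return (name if score >= threshold else None, round(score, 3))

rng = np.random.default_rng(1)
gallery = {"person_a": rng.normal(size=128), "person_b": rng.normal(size=128)}
probe = gallery["person_a"] + rng.normal(scale=0.3, size=128)  # noisy capture
print(identify(probe, gallery))  # ('person_a', ~0.96)
# An obscured face yields a much noisier probe vector; lowering the threshold
# to still match it also raises the rate of false identifications.
```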


Israel also used artificial intelligence to sift through data collected by intelligence officials on Hamas members. Before the war, Israel built a machine learning algorithm—codenamed "Lavender"—that could quickly sort through data to search for low-level militants. It was trained on a database of confirmed Hamas members and was intended to predict who might be part of the group. Although the system's predictions were inaccurate, Israel used it early in the Gaza war to help select attack targets.
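
Why an inaccurate classifier matters so much at this scale follows from base-rate arithmetic. The figures below are invented purely for illustration, not reported numbers:

```python
# Invented numbers, for illustration only: an imperfect classifier searching
# for rare targets in a large population flags mostly the wrong people.
population = 1_000_000    # people whose data is scanned (assumption)
base_rate  = 0.001        # fraction who are actual members (assumption)
tpr, fpr   = 0.90, 0.05   # assumed true- and false-positive rates

actual    = population * base_rate           # 1,000 actual members
true_pos  = actual * tpr                     # 900 correctly flagged
false_pos = (population - actual) * fpr      # 49,950 wrongly flagged
precision = true_pos / (true_pos + false_pos)
print(f"flagged: {true_pos + false_pos:,.0f}, precision: {precision:.1%}")
# flagged: 50,850, precision: 1.8% -- about 98% of those flagged are not members.
```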


There was no greater goal than finding and eliminating Hamas's top leadership, and at the top of the list was al-Bayari, the commander Israeli officials believe played a pivotal role in planning the October 7 attacks. Israeli military intelligence quickly intercepted al-Bayari's calls with other Hamas members but could not pinpoint his location, so it turned to the AI-powered audio tool, which analyzed various sounds, such as stun grenades and airstrikes.


Two intelligence officers said that after an approximate location for where al-Bayari was making his calls had been deduced, Israeli military officials were warned that the area, which includes several residential complexes, was densely populated. They said an airstrike would need to target several buildings to ensure al-Bayari's assassination. Approval was given for the operation.


Since then, Israeli intelligence has also used the audio tool, along with maps and photos of Gaza's labyrinth of underground tunnels, to pinpoint the locations of hostages. Over time, two Israeli officers said, the tool has been refined to locate individuals with greater precision.
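
The report does not explain how the audio tool works. One speculative reading of "analyzed various sounds, such as stun grenades and airstrikes" is ranging from sound delays: if loud events with known positions and times are audible on an intercepted call, each delay bounds the caller's distance from that event, and intersecting those range circles yields an approximate position. A sketch under those assumptions:

```python
# Speculative sketch, not the actual method: locate a caller from the delays
# at which known loud events (with known positions and times) are heard on
# the call, by least-squares intersection of the implied range circles.
import numpy as np
from scipy.optimize import least_squares

C = 343.0  # speed of sound in air, m/s

events   = np.array([[0.0, 0.0], [1200.0, 0.0], [0.0, 900.0]])  # known positions
true_pos = np.array([400.0, 300.0])                             # unknown caller
delays   = np.linalg.norm(events - true_pos, axis=1) / C        # observed delays

def residuals(p):
    # Difference between predicted and observed propagation delays.
    return np.linalg.norm(events - p, axis=1) / C - delays

estimate = least_squares(residuals, x0=np.array([100.0, 100.0])).x
print(estimate.round(1))  # ~[400. 300.]
```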
