+972 Magazine / June 6, 2023
With algorithms making warfare easier to sustain, automated weapons have turned Israeli assaults on besieged Palestinians into an annual event.
“The skies above Gaza are filled with Israeli bombs,” Anas Baba told me when we spoke over WhatsApp a few weeks ago, just after the Israeli army and Islamic Jihad had reached a tenuous ceasefire in the wake of Israel’s latest offensive on the blockaded strip, which killed 33 and wounded dozens more. While the drone strikes had ceased, the incessant hum of UAVs had not. Their sound was a reminder — as Baba, a Gaza-based journalist, put it — that war was now an annual event.
So many Palestinian families have lost their homes in the repeated bombardments of the strip over the 16 years since the siege began that reconstruction is never-ending, made even more difficult by the involvement of numerous organizations and governments offering limited humanitarian assistance. And because of the vast number of people and the amount of funding required to rebuild, Baba explained, families could find themselves on waiting lists for years.
Israeli bombardments on Gaza are becoming more frequent, thanks to innovations in artificial intelligence (AI) and a military that bends to the dictates of increasingly right-wing governments. The army boasts that intelligence units can now pinpoint targets — a process that used to take years — in just a month. Even as the death tolls across the occupied territories climb, visions of this humanitarian crisis rarely puncture the Jewish-Israeli public sphere, fortified by military censors, missile defense systems, and plain indifference. Instead, regional violence is parsed through the redemptive parlance of technological innovation.
In the Israeli press, these wars unfold in a familiar pattern. New military offensives on Gaza are announced like the release of a much anticipated Call of Duty game. The army saturates social media feeds with epic graphics of gunslinging soldiers, while Biblical names evoke military might of mythological proportions. Then missiles rain over Gaza, blasting away Palestinian infrastructure, homes, and lives, while rocket sirens send Israelis across the south scampering into fortified shelters.
In the days after a ceasefire is agreed, generals make their media rounds to talk up innovations in automation unveiled in the last assault. Swarms of killer drones directed by supercomputing algorithms, which can shoot and kill with minimal human intervention, are celebrated the same way Silicon Valley CEOs praise chatbots. As the world reckons with runaway developments in AI, each war waged by Israel’s automated military arsenal in Gaza illustrates the human cost of these systems.
‘A force multiplier’
War has always been an occasion for militaries to trade in weaponry. But as Israel’s asymmetrical bombardments on Gaza have become annual events, the army has started branding itself as something of a pioneer, exploring the uncharted territory of automated warfare. The IDF proclaimed it waged the “world’s first AI war” in 2021 — the 11-day offensive on Gaza codenamed “Operation Guardian of the Walls” which, according to B’Tselem, killed 261 and injured 2,200 Palestinians. Drones wiped out entire families, damaged schools and medical clinics, and destroyed high-rise buildings that housed families, businesses, and media offices far from any military targets.
As 72,000 Palestinians were displaced and thousands more mourned the dead, Israeli generals boasted that they had revolutionized warfare. “AI was a force multiplier for the IDF,” officials bragged, detailing how robotic drone swarms amassed surveillance data, pinpointed targets, and dropped bombs with minimal human intervention.
The pattern repeated a little over a year later. In August 2022, the IDF launched a five-day offensive on Gaza named “Operation Breaking Dawn,” which took the lives of 49 Palestinians, including 17 civilians. Missiles exploded in the streets of the Jabalia refugee camp, killing seven civilians who had been driven from their homes by power outages. Drones also struck a nearby cemetery, taking the lives of children playing in a coveted strip of open space.
In the wake of the destruction, the army launched another manicured PR campaign, breaking a decades-long ban on openly discussing the use of AI-powered drones in military operations. Brig. Gen. Omri Dor, commander of the Palmachim airbase, told the Times of Israel that drones equipped with AI gave the army “surgical precision” in the assault, allowing troops to minimize “collateral damage or harm to another person.”
Such announcements, however, are exercises in self-aggrandizement. For starters, Israel did not wage the world’s “first AI war” in 2021. Drones, missile defense systems, and cyberwarfare have been used for decades worldwide, and the United States, rather than the Israeli army, is often hailed as the real pioneer.
In Vietnam, for example, sensors and hundreds of IBM computers helped U.S. troops to track, locate, and kill Vietcong combatants — and plenty of civilians — in lethal airstrikes. When U.S. soldiers rolled into Iraq, so did robots armed with guns and capable of detonating explosives. Since the late 2000s, most governments have incorporated machine learning systems into their military and surveillance arsenals. Today, automated drone swarms have killed militants and civilians in wars in Libya and Ukraine.
It was therefore a problem of market saturation that motivated Israel’s army to turn assaults on Gaza into coordinated advertising campaigns. In 2021, AI experts sounded the alarm over Turkish-manufactured killer drones that could swarm and kill targets without human intervention. China came under fire for exporting automated weapons systems — from robotic submarines to stealth drones — to Pakistan and Saudi Arabia.
Seeing this, Israeli arms dealers worried other countries might eclipse the “start-up nation’s” competitive edge on weapons exports to regimes with sordid human rights records. “It was obvious that things have changed and that Israel has to change its attitude if it does not want to lose more potential markets,” a senior Israeli military official told a Defense Industry newsletter after the August 2022 operation.
Their efforts paid off: after Guardian of the Walls, Israel’s arms exports hit an all-time high in 2021. Amid repeated bombardments on Gaza and with war raging in Ukraine, that number will likely grow.
The ubiquity of AI warfare does not mean this technology should be deployed without safeguards and limitations. Algorithms may indeed make many aspects of warfare more efficient, from guiding missiles to sifting through data to monitoring border crossings. Yet experts list a litany of dangers posed by these systems: from digital dehumanization that reduces human beings into lines of code for a machine to determine who should live or die, to a lowered cost and threshold for warfare that replaces ground troops with algorithms. Much of the weaponry on the market is riddled with glitches, said to misidentify targets or pre-programmed to kill certain demographic groups with more frequency. Even if they reduce the number of civilians killed in a single bombardment, as their advocates claim, automated weapons systems risk making battle more frequent and easier to sustain, allowing warfare to drag on with no end in sight.
This is the case in Gaza. As Baba, the journalist, put it: “With a population of 2.3 million people in an area less than 45 kilometers long, Gaza is one of the most densely populated places in the world.” No matter how advanced the technologies used, each Israeli bombardment on the strip takes the lives of countless innocent bystanders. “Civilians are often caught in the crossfire,” he added.
Since 2021, when Israel began publicly promoting the use of AI in military operations, over 300 Palestinians have been killed in Israel’s annual assaults and thousands more have been injured and displaced; vital infrastructure like sewage systems and electricity grids have been irrevocably damaged in the regular assaults. Automation may have prevented Israel from sending in ground troops and causing loss of life on its side — if it could muster the forces and political support — but mostly, the technology has simply made the bombs and bullets fall more often.
Political pundits often discuss the dangers posed by automated weapon systems in the future tense. But the human cost is already present across Palestine. “We have long witnessed evidence of Israel’s use of the OPT, especially Gaza, as a laboratory for testing and deploying experimental weapons technologies,” Omar Shakir, Israel and Palestine Director for Human Rights Watch, told +972.
Shakir emphasized that such weapons used across the West Bank and Gaza, from drones to biometrics to AI-powered gun turrets, “serve to automate Israel’s unlawful use of force and its apartheid against Palestinians.” Given Israel’s centrality in global weapons markets, Shakir believes that “it is only a matter of time before the weapon systems deployed today by Israel end up in the farthest-flung corners of the globe.”
Digital rights advocates have also warned that weapons developed in Palestine will wreak havoc when exported abroad, emphasizing that these systems emerge from political contexts where prejudice against Palestinians is paramount. For example, if the Israeli army once taught operators that a certain number of non-combatants could be killed in drone strikes, as +972 reported last year, is this number replicated in the algorithms guiding precision missiles? If Israeli soldiers operating checkpoints are directed to temporarily detain Palestinian men of a certain age, will new biometric borders, as Amnesty International recently reported on, recommend the detention of all those slotted within this demographic? As Mona Shtaya, advocacy director at 7amleh, explained: “If the data is biased, then the product’s end result is going to be biased against Palestinians.”
The Israeli army does not seem concerned by the pace of such AI development. “What does ChatGPT do? It distills knowledge, insight, that you need,” said Col. Uri, commander of the IDF’s new AI research and information unit, during a rare interview in February. “There is a limit to your capability as a human being. If you sit down to process the information for a week, you might come to the exact same conclusion. But a machine can do in a minute what would take you a week.”
This techno-optimism is seen across the board of Israel’s military ranks and has helped it justify ongoing warfare. Commanders of elite intelligence units have self-published tracts embracing a “human-machine synergy.” Others hold key positions in arms companies like Elbit Systems, eager to export automated weapons systems worldwide. When 60 countries, including China and the United States, penned a largely symbolic “call to action” endorsing the responsible use of military AI in February, Israel refused to sign the statement. Instead, high-ranking commanders compare killer robots to chatbots, parroting Silicon Valley tech executives who say AI will only enhance human life.
The breadth of destruction in besieged Gaza makes such claims harder and harder to believe. If the latest bombardment reveals anything, it is that even the most technologically advanced weaponry cannot offset the human cost of warfare, no matter how sophisticated the algorithms are.
Sophia Goodfriend is a Ph.D. candidate in Anthropology at Duke University with expertise in digital rights and digital surveillance in Israel/Palestine.