A Page One story in Sunday’s New York Times detailed the November 2020 assassination of a nuclear scientist in Iran: “The Scientist and the A.I.-Assisted, Remote-Control Killing Machine” (published online Sept. 18, 2021). I was taken aback by the phrase “A.I.-assisted, remote-control killing machine” — going for the shock value!
Here’s a sample of the writing about the technology:
“The straight-out-of-science-fiction story of what really happened that afternoon and the events leading up to it, published here for the first time, is based on interviews with American, Israeli and Iranian officials …”
The assassination was “the debut test of a high-tech, computerized sharpshooter kitted out with artificial intelligence and multiple-camera eyes, operated via satellite and capable of firing 600 rounds a minute.”
“Unlike a drone, the robotic machine gun draws no attention in the sky, where a drone could be shot down, and can be situated anywhere, qualities likely to reshape the worlds of security and espionage” (boldface mine).
Most of the (lengthy) article is about Iran’s nuclear program and the role of the scientist who was assassinated.
The remote assassination system was built into the bed of a pickup truck studded with “cameras pointing in multiple directions.” The whole rig was blown up after it achieved its objective (although the gun robot itself was not destroyed as intended).
A crucial point about this setup is the role of humans in the loop. People had to assemble the rig in Iran and drive the truck to its waiting place. A human operator “more than 1,000 miles away” was the actual sniper. The operation depended on satellites transmitting data “at the speed of light” between the truck and the distant humans.
So where does the AI enter into it?
There was an estimated 1.6-second lag between what the cameras saw and what the sniper saw, and a similar lag between the sniper’s actions and the firing of the gun positioned on the rig. There was the physical effect of the recoil of the gun (which affects the bullets’ trajectory). There was the speed of the car in which the nuclear scientist was traveling past the parked rig. “The A.I. was programmed to compensate for the delay, the shake and the car’s speed,” according to the article.
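To make the compensation concrete, here is a minimal sketch of the lag-and-motion part of the problem. Nothing here comes from the actual system; the function name, the gain of the example, and the car speed are my own illustrative assumptions — only the 1.6-second lag figure is from the article.

```python
# Hypothetical illustration: how far ahead of the observed position a
# remote operator's aim must lead, given transmission lag and target speed.
# This is NOT the system described in the article, just the arithmetic.

def lead_offset(target_speed_mps: float, lag_seconds: float) -> float:
    """Distance (meters) the target travels during the signal lag —
    i.e., how far the observed position trails the true position."""
    return target_speed_mps * lag_seconds

# Assumed example: a car at roughly 50 km/h (~13.9 m/s) with the reported
# 1.6-second lag has moved about 22 meters past where the cameras "see" it.
offset = lead_offset(13.9, 1.6)
```

Even this toy version shows why an unassisted human sniper could not simply aim at what appeared on screen: the target is no longer there by the time the image arrives, let alone by the time the fire command travels back.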
A chilling coda to this story: “Iranian investigators noted that not one of [the bullets] hit [the scientist’s wife], seated inches away, accuracy that they attributed to the use of facial recognition software.”
If you’re familiar with the work of Norbert Wiener (1894–1964), particularly his World War II work on automated anti-aircraft artillery, you might be experiencing déjà vu. The very idea of a feedback loop came from Wiener’s observations of the adjustments made as a gun moves in response to a target’s movements. To track and target an aircraft, the weapon’s aim is constantly changing, and its new position continually feeds back into the calculation of when to fire.
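The feedback loop Wiener described can be sketched in a few lines. This is a generic proportional-correction loop, not anything from Wiener’s actual predictor or the Iranian operation; the function name, gain, and step count are illustrative assumptions.

```python
# A minimal sketch of a feedback loop in the Wiener sense: the aim is
# repeatedly corrected by a fraction of its current error, and each new
# position feeds back into the next adjustment. All values are made up.

def track(aim: float, target: float, gain: float = 0.5, steps: int = 10) -> float:
    """Repeatedly feed the aiming error back into the next correction."""
    for _ in range(steps):
        error = target - aim   # observe the discrepancy
        aim += gain * error    # adjust by a fraction of the error
    return aim

# Starting far off target, the aim converges toward the target as the
# loop repeats — the output of each adjustment becomes the input of the next.
final_aim = track(aim=0.0, target=100.0)
```

The point is structural: output feeding back into input. Swap in faster sensors, satellite links, and learned models, and you have the lineage from 1940s fire control to the rig in the pickup truck.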
The November assassination in Iran is not so much a “straight-out-of-science-fiction story” as it is one more incremental step in computer-assisted surveillance and warfare. An AI system using multiple cameras and calculating satellite lag times, the shaking of the truck and the weapon, and the movement of the target will be using faster computer hardware and more sophisticated algorithms than anything buildable in the 1940s — but its ancestors are real and solid, not imaginary.
Related: Suchman, Lucy (2020). “Algorithmic warfare and the reinvention of accuracy.”
AI in Media and Society by Mindy McAdams is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License.
Include the author’s name (Mindy McAdams) and a link to the original post in any reuse of this content.