Beyond drones: Assassination by robot?

A Page One story in Sunday’s New York Times detailed the November 2020 assassination of a nuclear scientist in Iran: The Scientist and the A.I.-Assisted, Remote-Control Killing Machine (published online Sept. 18, 2021). I was taken aback by the phrase “A.I.-assisted, remote-control killing machine” right there in the headline. Talk about going for shock value!

Here’s a sample of the writing about the technology:

“The straight-out-of-science-fiction story of what really happened that afternoon and the events leading up to it, published here for the first time, is based on interviews with American, Israeli and Iranian officials …”

The assassination was “the debut test of a high-tech, computerized sharpshooter kitted out with artificial intelligence and multiple-camera eyes, operated via satellite and capable of firing 600 rounds a minute.”

“Unlike a drone, the robotic machine gun draws no attention in the sky, where a drone could be shot down, and can be situated anywhere, qualities likely to reshape the worlds of security and espionage.”

Most of the (lengthy) article is about Iran’s nuclear program and the role of the scientist who was assassinated.

The remote assassination system was built into the bed of a pickup truck, studded with “cameras pointing in multiple directions.” The whole rig was blown up after achieving its objective (although the gun robot was not destroyed as intended).

A crucial point about this setup is the role of humans in the loop. People had to assemble the rig in Iran and drive the truck to its waiting place. A human operator “more than 1,000 miles away” was the actual sniper. The operation depended on satellites transmitting data “at the speed of light” between the truck and the distant humans.

So where does the AI enter into it?

There was an estimated 1.6-second lag between what the cameras saw and what the sniper saw, and a similar lag between the sniper’s actions and the firing of the gun positioned on the rig. There was the physical effect of the recoil of the gun (which affects the bullets’ trajectory). There was the speed of the car in which the nuclear scientist was traveling past the parked rig. “The A.I. was programmed to compensate for the delay, the shake and the car’s speed,” according to the article.
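To get a feel for what “compensate for the delay” means, here’s a back-of-the-envelope sketch in Python. It’s my own illustration, not anything from the article, and the car’s speed is my assumption; only the 1.6-second lags come from the reporting.

```python
# Back-of-the-envelope lead compensation (my own sketch, not the actual
# system). The sniper sees delayed imagery, and the fire command takes
# time to reach the gun, so the aim point must be where the car WILL be,
# not where it appears to be.

def lead_distance_m(target_speed_mps: float, total_delay_s: float) -> float:
    """Distance the target travels during the total signal delay."""
    return target_speed_mps * total_delay_s

delay = 1.6 + 1.6   # camera-to-sniper lag plus sniper-to-gun lag (per the article)
speed = 50 / 3.6    # assume the car is moving at 50 km/h (my guess, not reported)

print(f"Aim about {lead_distance_m(speed, delay):.0f} m ahead of the apparent position")
# -> about 44 m of lead over a 3.2-second round trip
```

Even at modest speeds the required lead is substantial, which suggests why a human sniper watching a 1.6-second-old picture could not have done this unassisted.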

A chilling coda to this story: “Iranian investigators noted that not one of [the bullets] hit [the scientist’s wife], seated inches away, accuracy that they attributed to the use of facial recognition software.”

If you’re familiar with the work of Norbert Wiener (1894–1964), particularly on automated anti-aircraft artillery in World War II, you might be experiencing déjà vu. The very idea of a feedback loop grew out of Wiener’s work on the continual adjustments a gun makes in response to a moving target: to track an aircraft, the weapon’s aim is constantly changing, and its new position continually feeds back into the calculations for when to fire.
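The core of such a feedback loop fits in a few lines of code. Here’s a minimal one-dimensional sketch of mine, a simple proportional controller chasing a moving target, in the spirit of Wiener’s gun directors:

```python
# Minimal one-dimensional feedback loop in the spirit of Wiener's gun
# directors (my sketch). Each tick: measure the error between where the
# gun points and where the target is, correct a fraction of it, and feed
# the new aim back into the next cycle.

def track(target_positions, gain=0.5):
    aim = 0.0
    for target in target_positions:
        error = target - aim   # feedback: compare the output to the goal
        aim += gain * error    # correction proportional to the error
        yield aim

# A target moving steadily to the right, one unit per tick:
targets = [float(t) for t in range(1, 11)]
for target, aim in zip(targets, track(targets)):
    print(f"target = {target:4.1f}   aim = {aim:5.2f}")
```

Run it and you’ll see the aim settle into a constant lag behind the moving target. That persistent error is exactly why predictive compensation, like the lead calculation sketched above, gets layered on top of plain feedback.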

The November assassination in Iran is not so much a “straight-out-of-science-fiction story” as it is one more incremental step in computer-assisted surveillance and warfare. An AI system using multiple cameras and calculating satellite lag times, the shaking of the truck and the weapon, and the movement of the target will be using faster computer hardware and more sophisticated algorithms than anything buildable in the 1940s — but its ancestors are real and solid, not imaginary.


Related:

Algorithmic warfare and the reinvention of accuracy (Suchman, 2020)

Killer robots already exist, and they’ve been here a very long time (Ryder, 2019)




How weird is AI?

For Friday AI Fun, I’m sharing one of the first videos I ever watched about artificial intelligence. It’s a 10-minute TED Talk by Janelle Shane, and it’s actually pretty funny. I do mean “funny ha-ha.”

I’m not wild about the ice-cream-flavors example she starts out with, because what does an AI know about ice cream, anyway? It’s got no tongue. It’s got no taste buds.

But starting at 2:07, she shows illustrations and animations of what an AI does in a simulation when it is instructed to go from point A to point B. For a robot with legs, you can imagine it walking, yes? Well, watch the video to see what really happens.

This brings up something I’ve only recently begun to appreciate: The results of an AI doing something may be entirely satisfactory — but the manner in which it produces those results is very unlike the way a human would do it. With both machine vision and game playing, I’ve seen how utterly un-human the hidden processes are. This doesn’t scare me, but it does make me wonder about how our human future will change as we rely more on these un-human ways of solving problems.

“When you’re working with AI, it’s less like working with another human and a lot more like working with some kind of weird force of nature.”

—Janelle Shane
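Here’s a contrived example of my own (the numbers are invented, and this is not how Shane’s simulations actually work) to show how that weirdness creeps in. Score candidate strategies only on the stated objective, and whatever scores highest wins, no matter what we meant:

```python
# A contrived illustration (mine, not Shane's) of how "weird" solutions
# win. Score candidate strategies ONLY on distance covered toward point B,
# as a naive simulated-evolution setup might, and see which one is chosen.

def distance_covered(strategy: str) -> float:
    """Stand-in for a physics simulation scoring one episode."""
    scores = {
        "walk_upright": 1.0,                   # what we had in mind
        "assemble_into_tower_and_fall": 2.5,   # covers more ground per episode
        "twitch_in_place": 0.1,
    }
    return scores.get(strategy, 0.0)

candidates = ["walk_upright", "assemble_into_tower_and_fall", "twitch_in_place"]
print(max(candidates, key=distance_covered))
# -> assemble_into_tower_and_fall: perfectly rational given the stated
#    objective, and nothing like what a human would choose.
```

Assembling into a tall tower and falling over is, in fact, one of the solutions Shane shows in the video.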

At 6:23 in the video, Shane shows another example that I really love. It shows the attributes (in a photo) that an image recognition system decided to use when identifying a particular species of fish. You or I would look at the tail, the fins, the head — yes? Check out what the AI looks for.

Shane has a new book, You Look Like a Thing and I Love You: How Artificial Intelligence Works and Why It’s Making the World a Weirder Place. I haven’t read it yet. Have you?



Robots, and what’s not AI

Think of a robot. Do you picture a human-looking construct? Does it have a human-like face? Does it have two legs and two arms? Does it have a head? Does it walk?

It’s easy to assume that a robot that walks across a room and picks something up has AI operating inside it. What’s often obscured in viral videos is how much a human controller is directing the actions of the robot.

I am a gigantic fan of the Spot videos from Boston Dynamics. Spot is not the only robot the company makes, but for me it is the most interesting. My favorite Spot video is only 2 minutes long, and if you’ve never seen Spot in action, it will blow your mind.

But how much “intelligence” is built into Spot?

The answer lies somewhere between “very little” and “fully autonomous.” To be clear, Spot is not autonomous. You can’t just take him out of the box, turn him on, and say, “Spot, fetch that red object over there.” (I’m not sure Spot can be trained to respond to voice commands at all. But maybe?) Voice commands aside, though, Spot can be programmed to perform certain tasks in certain ways and to walk from one given location to another.

This need for additional programming doesn’t mean that Spot lacks AI, and I think Spot provides a nice opportunity to think about rule-based programming and the more flexible reinforcement-learning type of AI.
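Here’s a minimal sketch of that contrast (mine, not Boston Dynamics’ code, with invented numbers). A rule-based controller handles only the cases its programmer anticipated; a learned policy maps whatever the sensors report onto an action:

```python
# Rule-based control vs. a learned policy, side by side (my sketch, not
# Boston Dynamics' code; all numbers invented for illustration).

def rule_based_step(obstacle_height_cm: float) -> str:
    # Every case must be spelled out in advance by a programmer.
    if obstacle_height_cm == 0:
        return "walk forward"
    elif obstacle_height_cm < 20:
        return "step over"
    else:
        return "stop"  # no rule written for this case, so the robot halts

def learned_policy_step(sensor_readings, weights):
    # A drastically simplified learned policy: a weighted sum of sensor
    # inputs produces a continuous command. The weights come from
    # training, not from hand-written rules.
    return sum(s * w for s, w in zip(sensor_readings, weights))

print(rule_based_step(35.0))                         # -> stop
print(learned_policy_step([0.4, 1.2], [0.8, -0.3]))  # -> -0.04 (a joint command)
```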

This 20-minute video from Adam Savage (of MythBusters fame) gives us a look behind the scenes that clarifies how much of what we see in a video about a robot is caused by a human operator with a joystick in hand. If you pay attention, though, you’ll hear Savage point out what Spot can do that is outside the human’s commands.

Two points in particular stand out for me. The first is that when Spot falls over, or is upside-down, he “knows” how to make himself get right-side-up again. The human doesn’t need to tell Spot he’s upside-down. Spot’s programming recognizes his inoperable position and corrects it. Watching him move his four slender legs to do so, I feel slightly creeped out. I’m also awed by it.

Given the many incorrect positions in which Spot might land, there’s no way to program this get-right-side-up procedure using set, spelled-out rules. Spot must be able to use estimations in this process — just like AlphaGo did when playing a human Go master.

The second point, which Savage demonstrates explicitly, is accounting for non-standard terrain. One of the practical uses for a robot would be to send it somewhere a human cannot safely go, such as inside a bombed-out building — which would require the robot to walk over heaps of rubble and avoid craters. The human operator doesn’t need to tell Spot anything about craters or obstacles. The instruction is “Go to this location,” and Spot’s AI figures out how to go up or down stairs or place its feet between or on uneven surfaces.

The final idea to think about here is how the training of a robot’s AI takes place. Reinforcement learning requires many, many iterations, or attempts. Possibly millions. Possibly more than that. It would take lifetimes to run through all those training episodes with an actual, physical robot.

So, simulations. Here again we see how super-fast computer hardware, with multiple processes running in parallel, must exist for this work to be done. Before Spot — the actual robot — could be tested, he existed as a virtual system inside a machine, learning over nearly endless iterations how not to fall down — and when he did fall, how to stand back up.
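A bare-bones version of that simulate-and-iterate loop might look like this. It’s a toy of my own, with a one-number “policy” tuned by random search, nothing like Spot’s actual training pipeline, but it shows why cheap, fast simulated episodes are the whole game:

```python
# Toy version of learning in simulation (invented for illustration; Spot's
# real pipeline is vastly more complex). The "policy" is a single number,
# the simulator rewards policies near a hidden ideal, and random search
# over many cheap episodes does the learning.
import random

IDEAL = 0.7  # the "right" recovery move, unknown to the learner

def simulate_episode(policy: float) -> float:
    """Reward is higher the closer the policy is to the ideal move."""
    return -abs(policy - IDEAL)

best_policy = 0.0
best_reward = simulate_episode(best_policy)
for episode in range(100_000):  # a real setup might run millions, in parallel
    candidate = best_policy + random.gauss(0, 0.1)
    reward = simulate_episode(candidate)
    if reward > best_reward:    # keep only the improvements
        best_policy, best_reward = candidate, reward

print(f"learned policy: {best_policy:.3f} (the ideal was {IDEAL})")
```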

See more robot videos on Boston Dynamics’ YouTube channel.

Creative Commons License
AI in Media and Society by Mindy McAdams is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License.
Include the author’s name (Mindy McAdams) and a link to the original post in any reuse of this content.