
The military calls in AI for support
For all our fears about Terminator-style killer robots, the focus of AI in the U.S. military is likely to be on augmenting humans, not replacing them.
Why it matters: AI has been described as the “third revolution” in warfare, after gunpowder and nuclear weapons. But every revolution carries risks, and even an AI strategy that focuses on assisting human warfighters will carry enormous operational and ethical challenges.
Driving the news: On Tuesday, Armenia accepted a cease-fire with its neighbor Azerbaijan, bringing a hoped-for end to their brief war over the disputed enclave of Nagorno-Karabakh.
- Azerbaijan dominated the conflict in part thanks to the ability of its fleets of cheap, armed drones to destroy Armenia’s tanks, in what military analyst Malcolm Davis called a “potential game-changer for land warfare.”
An even bigger game-changer would be if such armed drones were made fully autonomous, but for the foreseeable future such fears of “slaughterbots” that could be used to kill with impunity appear overstated, says Michael Horowitz, a political scientist at the University of Pennsylvania.
- A report released last month by Georgetown’s Center for Security and Emerging Technology found defense research into AI is focused “not on displacing humans but assisting them in ways that adapt to how humans think and process information,” said Margarita Konaev, the report’s co-author, at an event earlier this week.
Details: A version of that future was on display at an event held in September by the Air Force to demonstrate its Advanced Battle Management System (ABMS), which can rapidly process data in battle and use it to guide warfighters in the field.
- At the demo, Anduril, a young defense-focused Silicon Valley startup backed by Peter Thiel and co-founded by Palmer Luckey, showed off its Lattice software system, which processes sensor data through machine-learning algorithms to automatically identify and track targets like an incoming cruise missile.
- Using the company’s virtual reality interface, an airman in the demo only had to designate the target as hostile and pair it with a weapons system to destroy it, closing what the military calls a “kill chain.”
What they’re saying: “At the core, our view is that the military has struggled with the question of, how do I know what’s happening in the world and how do we process it,” says Brian Schimpf, Anduril CEO.
- What Anduril and other companies in the sector are aiming to do is make AI work for defense much the way it currently works for other industries: speeding up information processing and creating what amounts to a more effective human-machine hybrid workforce.
Yes, but: Even though a human is still the one deciding whether or not to pull the trigger, experts worry about the accuracy of the algorithms that are advising that decision.
- Just as it’s not fully clear who would be responsible for an accident involving a mostly self-driving car — the human inside or the technology — “who owns the consequences if something goes wrong on the battlefield?” says P.W. Singer, a senior fellow at New America.
Be smart: The strength of AI is also its vulnerability: speed.
- It’s bad enough when malfunctioning trading algorithms cause a stock market flash crash. But if faulty AI systems encourage the military to move too quickly on the battlefield, the result could be civilian casualties, an international incident — or even a war.
The bottom line: Two questions should always be asked whenever AI spreads to a new industry: Does it work and should it work? In war, the stakes of those questions can’t get any higher.
Source: axios.com