AI Could Enable ‘Swarm Warfare’ for Tomorrow’s Fighter Jets
But Missy Cummings, a professor at Duke University and former fighter pilot who studies automated systems, says the speed at which decisions must be made aboard fast-moving jets means any AI system will have to be largely autonomous.
She’s skeptical that advanced AI is really needed for dogfights, where planes could be guided by a simpler set of hand-coded rules. She is also wary of the Pentagon’s rush to adopt AI, saying errors could erode faith in the technology. “The more the DOD fields bad AI, the less pilots, or anyone associated with these systems, will trust them,” she says.
AI-controlled fighter planes might eventually carry out parts of a mission, such as surveying an area autonomously. For now, EpiSci’s algorithms are learning to follow the same protocols as human pilots and to fly like another member of the squadron. Gentile has been flying simulated test flights where the AI takes all responsibility for avoiding midair collisions.
Military adoption of AI only seems to be accelerating. The Pentagon believes that AI will prove critical for future warfighting and is testing the technology for everything from logistics and mission planning to reconnaissance and combat.
AI has begun creeping into some aircraft. In December, the Air Force used an AI program to control the radar system aboard a U-2 spy plane. Although less demanding than controlling a fighter jet, that is still a life-or-death responsibility, since missing a missile system on the ground could leave the plane exposed to attack.
The algorithm used, inspired by one developed by the Alphabet subsidiary DeepMind, learned over thousands of simulated missions how to direct the radar to identify enemy missile systems on the ground, a task that would be critical to defense in a real mission.
Will Roper, who stepped down as the assistant secretary of the Air Force in January, says the demonstration was partly about showing that it is possible to fast-track the deployment of new code on older military hardware. “We didn’t give the pilot override buttons, because we wanted to say, ‘We need to get ready to operate this way where AI is truly in control of mission,’” he says.
But Roper says it will be important to ensure these systems work properly and that they are not themselves vulnerable. “I do worry about us over-relying on AI,” he says.
The DOD may already have some trust issues around the use of AI. A report last month from Georgetown University’s Center for Security and Emerging Technology found that few military contracts involving AI made any mention of designing systems to be trustworthy.
Margarita Konaev, a research fellow at the center, says the Pentagon seems conscious of the issue but that it’s complicated, because different people tend to trust AI differently.
Part of the challenge comes from how modern AI algorithms work. With reinforcement learning, a program is never given explicit instructions; it learns its behavior through trial and error, guided only by a reward signal, and it can sometimes learn to behave in unexpected ways.
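To make that concrete, here is a minimal sketch of reinforcement learning in Python, using a toy five-cell world invented for illustration (not the Pentagon’s or DeepMind’s code). Nothing in it says “move toward the goal”; the behavior emerges from the reward signal alone.

```python
import random

# Toy 1-D world: the agent starts at cell 0, and the reward sits at cell 4.
# No rule anywhere says "move right"; that behavior is learned from reward.
N_STATES, GOAL = 5, 4
ACTIONS = [-1, 1]  # step left, step right
q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}
alpha, gamma, epsilon = 0.5, 0.9, 0.2  # learning rate, discount, exploration

for episode in range(200):
    s = 0
    while s != GOAL:
        # Epsilon-greedy: mostly exploit learned values, sometimes explore.
        a = random.choice(ACTIONS) if random.random() < epsilon \
            else max(ACTIONS, key=lambda act: q[(s, act)])
        s_next = min(max(s + a, 0), N_STATES - 1)
        reward = 1.0 if s_next == GOAL else 0.0
        # Q-learning update: nudge the value toward reward plus best future value.
        q[(s, a)] += alpha * (
            reward + gamma * max(q[(s_next, b)] for b in ACTIONS) - q[(s, a)]
        )
        s = s_next

# The policy emerges from the learned value table, not from hand-coded logic.
# After training this typically prints [1, 1, 1, 1]: always step right.
print([max(ACTIONS, key=lambda act: q[(s, act)]) for s in range(N_STATES - 1)])
```

Even in this tiny setting, the connection between the reward and the learned behavior is indirect, which is why far larger systems trained the same way can surprise their designers.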
Bo Ryu, CEO of EpiSci, says his company’s algorithms are being designed in line with the military’s plan for the use of AI, with a human operator responsible for deploying deadly force and able to take control at any time. The company is also developing a software platform called Swarm Sense to enable teams of civilian drones to map or inspect an area collaboratively.
He says EpiSci’s system does not rely only on reinforcement learning but also has handwritten rules built in. “Neural nets certainly hold a lot of benefits and gains, no doubt about it,” Ryu says. “But I think that the essence of our research, the value, is finding out where you should put and should not put one.”
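Ryu does not describe EpiSci’s internals, but the hybrid pattern he is pointing at can be sketched in general terms: a learned policy proposes an action, and handwritten rules get the final say. Every name and threshold below is a hypothetical stand-in, not EpiSci code.

```python
from dataclasses import dataclass

@dataclass
class State:
    separation_m: float   # distance to the nearest aircraft, in meters
    closing: bool         # True if that aircraft is getting closer

MIN_SEPARATION_M = 150.0  # illustrative threshold, not a real standard

def learned_policy(state: State) -> str:
    """Stand-in for a neural-net policy; in practice, a trained model."""
    return "pursue"

def safety_rules(state: State, proposed: str) -> str:
    """Hand-coded rules get the last word, whatever the net proposes."""
    if state.closing and state.separation_m < MIN_SEPARATION_M:
        return "break_off"  # deterministic, auditable override
    return proposed

def decide(state: State) -> str:
    return safety_rules(state, learned_policy(state))

print(decide(State(separation_m=120.0, closing=True)))   # break_off
print(decide(State(separation_m=500.0, closing=False)))  # pursue
```

The appeal of this arrangement is that the override is deterministic: the hand-coded rule can be inspected and tested in a way a learned policy cannot.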