Why the Aviation Industry Is Taking a Cautious Approach to Cockpit A.I.

While the speed at which artificial intelligence (A.I.) is pervading mainstream society could seemingly break the sound barrier, aviation authorities are taking a slower, more cautious approach to its potential applications in the cockpit. After all, a sophisticated A.I. system acting as copilot raises serious questions: Will it second-guess, or even override, the human pilot if the computer perceives an imminent emergency that may or may not exist?
The answer is not yet definitive. The Federal Aviation Administration (FAA) and the European Union Aviation Safety Agency (EASA) have both published papers acknowledging A.I.’s future in aircraft operations, emphasizing that safety needs to be the primary criterion for certification. EASA forecasts a three-stage implementation: A.I. first assisting human pilots with information, then human and A.I. “teams” working together through 2035, and, finally, advanced automation and autonomous flight after 2035.
“People feel comfortable with a human pilot in the cockpit,” says Robin Riedel, an aerospace engineer, a certified airline pilot, and a current partner at McKinsey & Company. “Elevators used to be run by humans for much the same reason, but there aren’t many elevator operators these days.”
Riedel notes that A.I. is already at work across private aviation, including aircraft design, operations, and maintenance scheduling. Autopilot systems, fly-by-wire controls, and synthetic vision, while not the equivalent of an A.I. copilot, are all heavily computer-dependent. “A.I.’s not 30 years away,” he says. “It’s already in the cockpit.”
The initial focus is on deterministic A.I., a form of machine learning in which the outcomes do not deviate from predefined rules: the same input always produces the same output. According to Riedel, that technology should be certifiable because the results are predictable. “Ultimately, it’s about reducing the pilot’s workload and making flying safer,” he says.
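To make the distinction concrete, here is a minimal sketch of the kind of rule-bound behavior a deterministic aid exhibits. The function, thresholds, and alert strings are hypothetical, not drawn from any certified avionics system.

```python
# A minimal, hypothetical sketch of a deterministic cockpit aid: fixed rules,
# so identical inputs always produce identical alerts. Thresholds are invented.

def stall_margin_alert(airspeed_kts: float, stall_speed_kts: float) -> str:
    """Map an airspeed margin to an alert level using fixed rules."""
    margin = airspeed_kts - stall_speed_kts
    if margin <= 0:
        return "STALL"
    if margin < 10:
        return "CAUTION: LOW SPEED"
    return "NORMAL"

# Determinism in action: repeated calls with the same input never diverge,
# which is what makes exhaustive testing against a specification feasible.
assert stall_margin_alert(105.0, 100.0) == stall_margin_alert(105.0, 100.0)
print(stall_margin_alert(105.0, 100.0))  # -> CAUTION: LOW SPEED
```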
Then there’s nondeterministic A.I., which is designed to learn, adapt, and offer a variety of responses to the same input. This form worries regulators, especially the prospect of the computer initiating a maneuver based on its own logic rather than the actual flight situation. “We know we can’t rely on it 100 percent, but we can test and test and get to a level where we think we can trust it,” says Riedel, noting that, in any scenario, human pilots will be able to overrule the system. Of greater concern is the potential for a cybersecurity breach that could allow hackers to control the aircraft remotely.
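The sketch below contrasts that with a toy stochastic advisor, whose advice can vary for the same input, and illustrates Riedel’s two points: the pilot’s command always wins, and trust is built statistically through repeated testing rather than by proof. Everything here is invented for illustration; no real system issues advice by random sampling.

```python
# A toy nondeterministic advisor: same input, possibly different advice.
# Purely illustrative; names, logic, and thresholds are hypothetical.
import random
from typing import Optional

def advisory(closure_rate: float, rng: random.Random) -> str:
    """Stochastic stand-in for a learned model's traffic-avoidance advice."""
    urgency = min(1.0, closure_rate / 100.0)
    return "CLIMB" if rng.random() < urgency else "MAINTAIN"

def resolve(ai_advice: str, pilot_command: Optional[str]) -> str:
    """The human pilot can always overrule the system, as Riedel notes."""
    return pilot_command if pilot_command is not None else ai_advice

# "Test and test": estimate empirically how often the advisor gives the
# expected response, rather than proving it analytically.
rng = random.Random(0)
trials = [advisory(90.0, rng) for _ in range(10_000)]
print(f"advised CLIMB in {trials.count('CLIMB') / len(trials):.1%} of trials")
print(resolve("CLIMB", pilot_command="MAINTAIN"))  # pilot override wins
```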
Despite public apprehension and certification hurdles, Swiss start-up Daedalean AI has created a vision system named PilotEye that uses a neural network to identify and categorize approaching aircraft and other airborne objects. It’s poised to be one of the first nondeterministic cockpit applications. “The goal is to improve safety and create situational intelligence,” says Yemaya Bordain, Daedalean AI’s chief commercial officer and president of the Americas. Bordain says the system won’t make decisions for the pilot but will provide visual and verbal alerts, and even guidance on landings. She asserts that the program differs from other nondeterministic A.I. “because we trained our model in the field, so it’s representative of the real world.”
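For a rough sense of what a neural-network traffic classifier involves, here is a sketch in PyTorch. It is emphatically not PilotEye: the architecture, class list, and alert threshold are all invented, and the network here is untrained.

```python
# A minimal sketch of a camera-based traffic classifier in PyTorch.
# NOT Daedalean's system: classes, layers, and thresholds are hypothetical.
import torch
import torch.nn as nn

CLASSES = ["aircraft", "helicopter", "bird", "drone", "clear_sky"]  # invented

class TinyTrafficNet(nn.Module):
    """A small CNN that maps a camera frame to object-class scores."""
    def __init__(self, num_classes: int = len(CLASSES)):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.head = nn.Linear(32, num_classes)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.head(self.features(x).flatten(1))

model = TinyTrafficNet().eval()
frame = torch.rand(1, 3, 224, 224)  # stand-in for one camera frame
with torch.no_grad():
    probs = model(frame).softmax(dim=1)
conf, idx = probs.max(dim=1)
label = CLASSES[idx.item()]
# Alert the pilot; never act on their behalf, per Bordain's description.
if label != "clear_sky" and conf.item() > 0.5:
    print(f"TRAFFIC: {label} (confidence {conf.item():.0%})")
```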
In 2023, Daedalean successfully flight-tested the system with Leonardo Helicopters, and its research collaboration with EASA helped inform the agency’s guidelines. The Zurich-based company is working toward certification of PilotEye later this year. “We’re showing how nondeterministic doesn’t have to mean non-certifiable,” says Bordain. Realistically, though, with no clear regulations and little public understanding, it could take advanced A.I. considerably more time to earn its wings.