Why users are so important in AI development

When AI is brought into a complex system, we observe varying degrees of success. How well it works depends on the complexity of the system, the training and understanding of the end user, and the maintenance processes around the system. In relatively simple systems, such as the gearbox monitoring applications reported in academic studies, measurable success has been achieved. However, when AI is added to more complex systems, serious problems start to appear.

For example, in the Boeing 737 MAX disasters, automated software had been added to the platform that took away the pilots' ability to act outside a tightly reserved 'envelope' of operation, and the result was loss of life. To prevent such disasters, the implementation of AI in complex systems, especially in the aerospace and subsurface industries, needs to be considered carefully.

Captain Sullenberger, the pilot who ditched his aircraft in the Hudson River, observes that the operators of complex systems such as aircraft are so highly attuned that they know the 'feel' of the vessel and every valve and connection in the system. Indeed, he states that "pilots must be capable of absolute mastery of the aircraft and the situation at all times, a concept pilots call airmanship". In the same way that we could feel something was wrong with our car in the days of manual steering, an operator knows their system.

This attunement can be undermined, as we saw when the Boeing 737 MAX crashed: instead of developing a new platform, Boeing decided to mount larger, more fuel-efficient engines onto the existing airframe. This meant the engines had to be mounted higher and farther forward on the wings than on previous models of the 737. This significantly changed the aerodynamics of the aircraft and created a tendency for the nose to pitch up, risking a stall under certain flight conditions. There were multiple explanations for the crashes, but only one was a design flaw: the MAX's new flight control software, designed to prevent stalls. The remaining explanations were ethical and political: internal pressure to keep pace with Boeing's chief competitor, Airbus; Boeing's lack of transparency about the new software; and the lack of adequate monitoring of Boeing by the Federal Aviation Administration (FAA), especially during the certification of the MAX model and following the first crash.

The apparent gap in communication in the development of software systems that are then added onto complex platforms is concerning. Historically, system testing would have involved the users and training would have been supplied; in the 737 MAX disasters, however, the user was treated as quite separate from the software. Moreover, the existence of the software, designed to prevent a stall caused by the reconfiguration of the engines, was not disclosed to pilots until after the first crash. Even after that tragic incident, pilots were not required to undergo simulator training on the 737 MAX.

The design of any AI system in engineering needs to consider the operator and their ability to interact with the system and to override it if necessary. Ultimately, the operator's input is imperative to the safety of the platform. If the AI is programmed to conservative regulatory limits but the operator needs to exploit the wider engineering limits to avert a crisis, they cannot do so unless the AI system provides an override.
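
To make this concrete, the sketch below is a minimal, purely illustrative Python example, not any real avionics interface. It shows one way a command arbiter could give the operator the final say: the AI is held to a conservative 'regulatory' envelope, while an explicit operator input may use the wider engineering limits. All names and limit values here are assumptions for illustration.

```python
from dataclasses import dataclass
from typing import Optional

# Hypothetical limits for illustration only.
AI_ENVELOPE_LIMIT = 15.0   # degrees: conservative "regulatory" envelope the AI enforces
ENGINEERING_LIMIT = 25.0   # degrees: wider physical limit the airframe can tolerate

@dataclass
class Command:
    pitch_deg: float
    source: str  # "ai" or "operator"

def clamp(value: float, limit: float) -> float:
    """Constrain a value to the range [-limit, +limit]."""
    return max(-limit, min(limit, value))

def resolve_command(ai_cmd: Command, operator_cmd: Optional[Command]) -> Command:
    """Return the command to act on, giving the operator the final say."""
    if operator_cmd is not None:
        # Operator override: anything within the engineering limits is allowed,
        # even if it falls outside the AI's conservative envelope.
        return Command(clamp(operator_cmd.pitch_deg, ENGINEERING_LIMIT), "operator")
    # No operator input: the AI acts, but only within its reserved envelope.
    return Command(clamp(ai_cmd.pitch_deg, AI_ENVELOPE_LIMIT), "ai")

# The AI requests a modest correction, but the operator commands a steeper
# manoeuvre than the AI's envelope allows; the operator's command is executed.
print(resolve_command(Command(-5.0, "ai"), Command(-20.0, "operator")))
# -> Command(pitch_deg=-20.0, source='operator')
```

The design choice this sketch highlights is simply that the AI's conservative envelope never silently caps an explicit operator command; only the physical engineering limits do.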

In the 737 MAX implementation, the user was not put at the heart of the new technological developments, and the consequences, as we have seen, were very severe.