Explainability in AI systems and why it is so important

Complex platforms are difficult to manage and maintain, which is why industries such as aerospace, nuclear and subsurface rely on teams of engineers, many of them highly specialised, to carry out this role. Maintaining these systems is a critical undertaking: the systems are complex, and their components are often at varying degrees of degradation. When AI is brought into such a system we observe varying degrees of success, depending on the complexity of the system, the training and understanding of the end user, and the maintenance processes around the system.

We see AI employed with a measure of success in wind turbines and, to some extent, in gearboxes. However, when we add AI to more complex systems, serious problems start to appear. In the Boeing 737 MAX disasters, an automated system had been added to the platform that took away the pilots' prerogative to move outside an extremely narrow 'envelope' of operation. If AI is implemented in this way in complex systems such as subsurface and aerospace, we may well see further loss of life.


The AI models that have proliferated in Condition-Based Monitoring are currently neither well explained nor well understood by users, because the technical research is inaccessible to the non-specialist. This creates a communication problem that may put systems and lives at risk, and so we must proceed with caution.
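As a concrete illustration of the kind of transparency being argued for, the sketch below shows one common way to surface which measurements drive a condition-monitoring model's verdict: permutation feature importance. The feature names, data and model here are invented for illustration, assuming a hypothetical gearbox-health classifier; they are not from the article or any particular platform.

```python
# Minimal sketch (hypothetical data): inspecting which inputs a
# condition-monitoring classifier relies on, via permutation importance.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance

rng = np.random.default_rng(0)

# Toy stand-in for vibration-derived features from a gearbox sensor;
# these names are illustrative assumptions, not real telemetry.
feature_names = ["rms", "kurtosis", "crest_factor",
                 "band_energy_1x", "band_energy_2x"]
X = rng.normal(size=(500, len(feature_names)))
# Synthetic label: 1 = "degraded", 0 = "healthy".
y = (X[:, 1] + 0.5 * X[:, 3] > 0).astype(int)

model = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)

# Permutation importance: how much does shuffling each feature
# degrade the model's score? Larger drops mean heavier reliance.
result = permutation_importance(model, X, y, n_repeats=10, random_state=0)
for name, score in sorted(zip(feature_names, result.importances_mean),
                          key=lambda t: -t[1]):
    print(f"{name:>16s}: {score:.3f}")
```

A ranked list like this is one small step towards the explainability discussed above: an engineer can at least check whether the model is attending to physically plausible indicators of degradation rather than artefacts of the data.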

There is also the open question of whether AI will create further risks over and above the implementation problems that have been widely reported of late. It may be that the systems being developed are too idealistic. With further interdisciplinary work, however, we might move towards systems that are transparent, explainable and genuinely valuable for maintenance as a whole.

Implementation of AI systems in industry has seen mixed results, from success in the wind industry to failure in aerospace. The complexity of a platform becomes a significant issue once AI is intertwined with it: when AI takes significant control of the platform or its systems and removes the natural feel of the platform from the human operator, the consequences can be unintended and fatal. Given this complexity, it is more important than ever to promote interdisciplinary working so that the end user understands the system and can operate it effectively. We must deploy robust systems that have been developed in context, with the user and society at their heart.

Read more at Springer Link