Explainability in modelling and AI

Over the last few decades the accountability of algorithms has become a pressing issue. AI systems with low accuracy or misleading outputs continue to be developed, deployed and sold. Multiple frameworks are now being developed by multiple parties, all claiming to produce ‘ethical AI’. In this ‘wild west’ of AI there are, ultimately, people on the receiving end of the decisions made by AI and automated systems, and as yet there is little transparency about either how these models are built or how they reach their decisions.

There are numerous examples of AI being used to make decisions on credit cards, on recruitment and on jail sentences, and I know I don’t like being on the receiving end of an invisible system that I can’t interact with. I would much rather a person talked me through the details and explained what is happening.

As we move forward we need to ensure that algorithms and the technology built on them can be explained. This is particularly difficult for complex systems and models where people’s lives can hang in the balance. The operators of these systems might be pilots or nuclear plant managers, who really do need an override button and an explanation of the decision the algorithm has taken.
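To make the idea of an explanation concrete, here is a minimal, hypothetical sketch of what one might look like for a simple scoring model: the decision is decomposed into per-feature contributions so the person affected can see which factors drove the outcome. All feature names, weights and thresholds below are invented for illustration and do not describe any real deployed system; real models are usually far more complex and need correspondingly more sophisticated explanation techniques.

```python
# Hypothetical linear scoring model whose decision can be decomposed into
# per-feature contributions. Every name, weight and threshold is invented.

WEIGHTS = {"income": 0.4, "years_employed": 0.3, "missed_payments": -0.8}
BIAS = 0.1
THRESHOLD = 0.5  # scores at or above this value mean "approve"


def score(applicant: dict) -> float:
    """Linear score: bias plus the weighted sum of the applicant's features."""
    return BIAS + sum(WEIGHTS[name] * value for name, value in applicant.items())


def explain(applicant: dict) -> None:
    """Print the decision and each feature's contribution, largest first."""
    contributions = {name: WEIGHTS[name] * value for name, value in applicant.items()}
    decision = "approved" if score(applicant) >= THRESHOLD else "declined"
    print(f"Decision: {decision} (score = {score(applicant):.2f})")
    for name, contribution in sorted(contributions.items(), key=lambda kv: -abs(kv[1])):
        print(f"  {name}: {contribution:+.2f}")


if __name__ == "__main__":
    explain({"income": 1.2, "years_employed": 0.5, "missed_payments": 1.0})
```

Even this toy example shows the point: the person affected (or an operator with an override button) can see not just the outcome but the reasons behind it, which is exactly what opaque systems fail to provide.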

We will never be able to take people out of the decision-making process entirely, because there will inevitably be aspects of the problem that the algorithm cannot consider, and context that is not captured in the programme but is nevertheless critical to the decision.

How we go about integrating humans and technology will be the crucial part of this journey.