Reports continue to emerge of AI systems that exhibit significant bias or lower-than-claimed accuracy, resulting in individual and societal harms. Such reports raise the question of why these systems continue to be funded, developed and deployed despite the many published ethical AI principles. This is why the funding processes for AI research grants must be reviewed.
The funding stage has been identified as a gap in the current range of ethical AI solutions, which includes AI procurement guidelines, AI impact assessments and AI audit frameworks.
Increasingly, funding bodies are taking on the responsibility of ensuring that investment is channelled towards trustworthy and safe AI systems.
The journal article details two proposals:
The first proposal is for the inclusion of a ‘Trustworthy AI Statement’ section in the grant application form.
The second proposal outlines the wider management responsibilities of a funding body for the ethical review and monitoring of funded projects, ensuring adherence to the ethical strategies set out in the applicant’s Trustworthy AI Statement.
The anticipated outcome of adopting these proposals is a ‘stop and think’ stage during project planning and application, requiring applicants to set out the methods they will use for the ethically aligned design of AI. In essence, it asks funders to send the message: “if you want the money, then build trustworthy AI!”
Journal Article published in AI and Ethics Journal