In general, we don’t blindly trust those who can’t explain their reasoning. The same goes for AI, perhaps even more so. As an AI grows in capability and in the range of its impact, its decision-making process should be explainable in terms people can understand. Explainability is what allows users to understand an AI’s conclusions and recommendations, and to judge how far to trust them. Your users should always be aware that they are interacting with an AI. Good design does not sacrifice transparency in creating a seamless experience. Imperceptible AI is not ethical AI.
Allow for questions. A user should be able to ask, at any point, why an AI is doing what it’s doing, and that ability should be clear and up front in the user interface at all times. One way such an “ask why” affordance might be backed is sketched below.
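The sketch is a minimal illustration in Python, assuming a simple linear scoring model: the system keeps the per-feature ingredients of each decision around so the question “why?” can be answered whenever it is asked, not only at decision time. The loan scenario, the weights, and names like decide and explain are hypothetical, not a prescribed design.

```python
from dataclasses import dataclass

# Hypothetical feature weights for a simple linear scoring model.
# In a real system these would come from a trained, versioned model.
WEIGHTS = {"income": 0.4, "debt_ratio": -0.7, "years_employed": 0.2}
BIAS = 0.1
APPROVAL_THRESHOLD = 0.5

@dataclass
class Decision:
    score: float
    approved: bool
    contributions: dict  # per-feature contribution to the score

def decide(features: dict) -> Decision:
    """Score an applicant, keeping per-feature contributions so 'why?' stays answerable."""
    contributions = {name: WEIGHTS[name] * features[name] for name in WEIGHTS}
    score = BIAS + sum(contributions.values())
    return Decision(score=score, approved=score >= APPROVAL_THRESHOLD, contributions=contributions)

def explain(decision: Decision) -> str:
    """Answer an 'ask why' request: list the factors that drove the decision, largest first."""
    ranked = sorted(decision.contributions.items(), key=lambda kv: abs(kv[1]), reverse=True)
    lines = [f"Recommendation: {'approve' if decision.approved else 'decline'} (score {decision.score:.2f})"]
    lines += [f"  {name}: {value:+.2f}" for name, value in ranked]
    return "\n".join(lines)

decision = decide({"income": 0.8, "debt_ratio": 0.6, "years_employed": 0.3})
print(explain(decision))  # the UI can call this at any time, not only at decision time
```

For more complex models the contract is the same: the system retains whatever it needs (feature attributions, retrieved evidence, intermediate scores) to answer “why?” on demand, rather than discarding it once a score is produced.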
Decision-making processes must be reviewable, especially if the AI is working with highly sensitive personal data such as personally identifiable information, protected health information, or biometric data.
When an AI assists users with highly sensitive decisions, it must be able to provide them with a sufficient explanation of its recommendations, the data used, and the reasoning behind them.
Teams should maintain access to a record of an AI’s decision processes and be open to having those processes verified. One possible shape for such a record is sketched below.
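As one illustration, the sketch below keeps an append-only log in which each entry captures the timestamp, model version, inputs, output, and explanation, and is chained to the previous entry by hash so a reviewer can check that nothing was altered or dropped. The JSON-lines format, the field names, and the hash-chain scheme are assumptions made for this sketch, not a mandated mechanism.

```python
import hashlib
import json
import time

LOG_PATH = "decision_log.jsonl"  # illustrative; use durable, access-controlled storage in practice

def _last_hash(path: str) -> str:
    """Return the hash of the most recent record, or a fixed seed for an empty log."""
    last = "genesis"
    try:
        with open(path) as f:
            for line in f:
                last = json.loads(line)["record_hash"]
    except FileNotFoundError:
        pass
    return last

def log_decision(path: str, model_version: str, inputs: dict, output: dict, explanation: str) -> None:
    """Append one reviewable decision record, chained to the previous record by hash."""
    record = {
        "timestamp": time.time(),
        "model_version": model_version,
        "inputs": inputs,  # for PII/PHI, store a minimized or referenced form, not a raw copy
        "output": output,
        "explanation": explanation,
        "prev_hash": _last_hash(path),
    }
    record["record_hash"] = hashlib.sha256(
        json.dumps(record, sort_keys=True).encode()
    ).hexdigest()
    with open(path, "a") as f:
        f.write(json.dumps(record) + "\n")

def verify(path: str) -> bool:
    """Recompute the hash chain so reviewers can confirm the log is intact and in order."""
    prev = "genesis"
    with open(path) as f:
        for line in f:
            record = json.loads(line)
            claimed = record.pop("record_hash")
            recomputed = hashlib.sha256(json.dumps(record, sort_keys=True).encode()).hexdigest()
            if record["prev_hash"] != prev or recomputed != claimed:
                return False
            prev = claimed
    return True
```

Hash-chaining is one design choice among many; the essential properties are that the record is durable, complete, and independently checkable by someone other than the team that produced it.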
“IBM supports transparency and data governance policies that will ensure people understand how an AI system came to a given conclusion or recommendation. Companies must be able to explain what went into their algorithm’s recommendations. If they can’t, then their systems shouldn’t be on the market.”
- Data Responsibility at IBM