By: Abhilash Rajaram, RIG Intern Researcher

 

Our lives are increasingly mediated by the algorithms of Artificial Intelligence (AI) systems. These systems have reached significant and safety-critical domains such as healthcare, finance, and automated driving. Despite their continuously improving capabilities, AI systems suffer from transparency issues: the mechanics underlying their decisions often remain invisible or incomprehensible to end-users.

For AI to be trustworthy and human-accountable, AI systems also need to make their decisions explainable and comprehensible to the people who rely on them. Explanations should allow users to act on or contest a decision, help guarantee fair and ethical outcomes, and support good Human-AI cooperation. Although Explainable AI is a prominent, interdisciplinary, and fast-growing field, there is no agreed-upon definition of it, few methods to evaluate it, and no guidelines for creating such technology [1]. Discussions to outline the domain and shape these important topics call for human-centered and socio-technical perspectives, input from diverse stakeholders, and the participation of the broader Human-Computer Interaction community.

Explainable AI (XAI) refers to artificial intelligence and machine learning techniques that can provide a human-understandable justification for their output behavior. The algorithm-centered XAI community consists mostly of machine learning researchers, and explanation techniques are often developed based on researchers’ own intuition rather than the needs of their intended audience. Much of the work on XAI has focused on interpretability: a property of machine-learned models that dictates the degree to which a human user, whether an AI expert or a non-expert, can come to conclusions about the performance of the model on specific inputs [2]. Bringing user-centered perspectives to the forefront of XAI is therefore necessary to support diverse types of users and stakeholders. Moreover, how users interpret and react to explanations can vary significantly with their motivational, social, and cognitive characteristics, as well as with their professional and educational profiles.
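To make the notion of interpretability concrete, here is a minimal, illustrative sketch (not drawn from the cited work) that trains a shallow decision tree with scikit-learn and prints its decision rules. The dataset and tree depth are arbitrary choices; the point is that a non-expert can read the resulting rules end to end, which is not true of a deep ensemble trained on the same data.

```python
# Illustrative sketch only: a model whose decision logic a user can read directly.
# Assumes scikit-learn is installed; the dataset and max_depth are arbitrary choices.
from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier, export_text

X, y = load_iris(return_X_y=True, as_frame=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# A shallow tree trades some accuracy for rules a person can actually inspect.
tree = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X_train, y_train)

print("Test accuracy:", tree.score(X_test, y_test))
print(export_text(tree, feature_names=list(X.columns)))
```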

 

Figure 1: Emergent relationship between the dimensions (left) and components (right) of user perceptions and preferences

When adopting a Human-Centered XAI approach, the question is not just about the “who” (whom the explanation is for), but also about the “why”. Reasons for seeking explanations include trustworthiness, causality, transferability, informativeness, fairness, accessibility, interactivity, and privacy awareness. Understanding the “who” and the “why” is crucial because it governs what the explanation requires for a given problem [3]. It also scopes how the data is collected, what data can be collected, and the most effective way of describing the “why” behind an action. For instance, with automated vehicles, the engineer may have different explainability requirements than the passenger in the car. By studying why end-users seek explanations across various AI systems, engineers can summarize the user goals that XAI should aim to support: gaining further insights for decision-making, appropriately evaluating the AI’s capability, adapting their usage or control of the AI, and improving AI performance.

Another important aspect is the “where”: the application domain or context for XAI. Many XAI contributions address case-based explanations, using either local methods (specific to a single decision or instance) or global methods (describing the behavior of the overall model); a brief sketch of this distinction appears below. There has also been progress on natural-language explanations generated in real time, with the goal of making explanations accessible to end-users who might not be AI experts. Recent work by researchers such as Upol Ehsan and Mark Riedl has introduced XAI features into model development tools and into AI-assisted decision-support tools for evaluating model fairness. For instance, a neural machine translation approach has been used to generate rationales in plain English that justify the actions of an AI agent completing a sequential decision-making task. Generating accessible explanations in real time has critical implications in domains such as automated driving (Figure 2).
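The local/global distinction mentioned above can be illustrated with a short, hedged sketch (a generic example, not a method from the cited work): permutation importance summarizes which features matter to a model across an entire test set (global), while perturbing the features of a single instance shows which of them drive that one prediction (local).

```python
# Sketch of global vs. local explanation, assuming only scikit-learn and NumPy.
import numpy as np
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

X, y = load_breast_cancer(return_X_y=True)
names = load_breast_cancer().feature_names
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = RandomForestClassifier(random_state=0).fit(X_train, y_train)

# Global: which features matter to the model across the whole test set?
global_imp = permutation_importance(model, X_test, y_test, n_repeats=5, random_state=0)
top_global = np.argsort(global_imp.importances_mean)[::-1][:3]
print("Global:", [names[i] for i in top_global])

# Local: which features most change the prediction for one specific instance?
x = X_test[0].copy()
base = model.predict_proba([x])[0, 1]
local_effect = []
for j in range(len(x)):
    x_pert = x.copy()
    x_pert[j] = X_train[:, j].mean()  # replace one feature value with its average
    local_effect.append(abs(model.predict_proba([x_pert])[0, 1] - base))
top_local = np.argsort(local_effect)[::-1][:3]
print("Local :", [names[i] for i in top_local])
```

The two lists often differ: a feature that matters little on average can dominate a particular borderline case, which is exactly why the choice between local and global explanations depends on who is asking and why.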

Figure 2: The AI agent generates natural language rationales while playing a computer game (left). This approach may be extended to real environments, in which potential passengers of automated vehicles select objects in the driving environment that should be included in explanations, making the decisions of driving algorithms more comprehensible (right).
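As a rough illustration of what real-time, plain-English rationale generation could look like, the sketch below uses a sequence-to-sequence model from the Hugging Face transformers library. It is an assumption-laden placeholder, not the setup from Ehsan and Riedl’s work: `t5-small` stands in for a model that would first have to be fine-tuned on (game state, action) → rationale pairs, and the textual encoding of the state is invented for illustration.

```python
# Hypothetical sketch: generating a plain-English rationale from a textual
# encoding of the agent's state and chosen action. Assumes a seq2seq model
# fine-tuned on (state, action) -> rationale pairs; "t5-small" is a placeholder.
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer

MODEL_NAME = "t5-small"  # would be replaced by a rationale-tuned checkpoint
tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
model = AutoModelForSeq2SeqLM.from_pretrained(MODEL_NAME)

def rationale_for(state_text: str, action: str) -> str:
    """Translate a (state, action) description into a natural language rationale."""
    prompt = f"explain: state: {state_text} action: {action}"
    inputs = tokenizer(prompt, return_tensors="pt")
    output_ids = model.generate(**inputs, max_new_tokens=40)
    return tokenizer.decode(output_ids[0], skip_special_tokens=True)

print(rationale_for("car approaching from the left, gap ahead", "move forward"))
```

In a deployed system the state description would come from the vehicle’s perception pipeline rather than a hand-written string, and the generated rationale would need to be evaluated with the human-subject studies discussed below.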

 

Building on the above reflections, the following research questions need to be addressed [5]:

  • Who are the consumers and relevant stakeholders of XAI? What are their needs for explainability? What values are reflected in these needs, and what tensions arise among them?
  • Why is explainability sought? What user goals should XAI aim to support? How are these goals shaped by technological, individual, and social factors?
  • How can we develop transferable evaluation methods for XAI? What key constructs need to be considered?
  • What explainability challenges arise when we move beyond the dominant one-to-one Human-AI interaction paradigms? How might a human-centered perspective address these challenges?

It is important to dedicate time to these issues because the pace of algorithmic development will likely outstrip the rate of societal adaptation. Systematic human-subject studies are necessary to understand how different strategies for explanation generation affect end-users, especially non-AI-experts. End-to-end studies with fully realized XAI systems are needed to determine whether explanation generation can be controlled to produce the desired downstream effects on its intended recipients. Human-Computer Interaction (HCI) research in AI can and should play a calibrating and moderating role in this journey, mitigating unintended consequences and facilitating the inclusion of diverse voices in the design of the future of AI.

References:

  • On Design and Evaluation of Human-centered Explainable AI Systems. https://www.cc.gatech.edu/~riedl/pubs/ehsan-chi-hcml19.pdf
  • Explainable AI: A Review of Machine Learning Interpretability Methods. https://www.ncbi.nlm.nih.gov/pmc/articles/PMC7824368/
  • Towards Human-Centered Explainable AI: The Journey So Far. https://thegradient.pub/human-centered-explainable-ai/
  • Human-centered Explainable AI: Towards a Reflective Sociotechnical Approach. https://arxiv.org/abs/2002.01092
  • Workshop on Operationalizing Human-Centered Perspectives in Explainable AI. https://hcxai.jimdosite.com/