Introduction
With the advancement of technology, Artificial Intelligence (AI) has gained enormous popularity and applicability in various domains such as healthcare, finance, and retail (Sarivougioukas & Vagelatos, 2020). To produce high-performance commercial products, many AI-based companies develop predictive models whose behaviour may sometimes deviate from human expectations (Cheng et al., 2021). Traditional AI systems often lack transparency in their decisions due to their complex nature and hence are unable to explain such deviations (EU, 2019). In critical systems such as autonomous systems (Fiorini, 2020; Pandey & Banerjee, 2019) and AI-assisted healthcare systems (Gupta et al., 2021; Sun et al., 2019), decisions need to be explainable for social, practical, and legal reasons. Hence, the branch of eXplainable Artificial Intelligence (XAI) has recently gained importance in applications where the cost of a mistake can be disastrous (Gunning & Aha, 2019). XAI refers to the branch of AI that provides the reasoning behind the predictions of any AI model. XAI techniques can be broadly divided into intrinsic and post-hoc: intrinsic techniques provide an explanation along with the prediction, whereas post-hoc techniques are applied to a model to produce an explanation after the output is predicted. This paper aims to build an explanation-to-narration module for post-hoc explanations.
The current state-of-the-art XAI techniques present explanations in many forms, such as visual, audio, linguistic, and tabular. The traditional trend in the literature is to present results visually, especially as heat maps; however, these may not be well understood by non-technical users. Of the above-mentioned forms, linguistic explanations can be attractive to interested non-expert users (J.M. Alonso et al., 2020). They allow users to understand a model's predictions without any mathematics or engineering background and foster willingness among them to use autonomous systems (J.M. Alonso et al., 2020). To date, few works have directly addressed the possibility of generating textual explanations from the structured output of an explainer. However, the NLP community (Singh & Sachan, 2021), especially researchers working on data-to-text generation, can add a linguistic layer to many of the state-of-the-art post-hoc XAI systems proposed so far (Fayoumi & Hajjar, 2020; Inan & Dikenelli, 2021).
The explanations generated by state-of-the-art post-hoc local XAI techniques such as LIME (Ribeiro et al., 2016) and SHAP (Roth, 1988) are generally in the form of feature–contribution pairs (f, w), where f and w represent a feature and its contribution, respectively, for each instance x. Natural Language Generation (NLG) techniques can convert data into text or text into text, depending on the application requirement (Reiter & Dale, 1997). A sub-field of NLG, i.e., data-to-text generation, can be employed on the structured explanation to generate the corresponding narrative.
Figure 1.
An example of the structured explanation and corresponding narrations
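A transformation of this kind can be illustrated with a minimal template-based sketch: it takes a LIME/SHAP-style structured explanation, given here as a hypothetical dictionary of feature–contribution pairs, and renders a short narrative. The function name, input format, and wording templates are illustrative assumptions, not the module proposed in the paper.

```python
def narrate(instance_id, contributions, top_k=3):
    """Render the top-k feature contributions of one instance as a sentence."""
    # Rank features by absolute contribution, strongest first.
    ranked = sorted(contributions.items(), key=lambda fw: abs(fw[1]), reverse=True)
    clauses = []
    for feature, weight in ranked[:top_k]:
        # A positive weight pushes the prediction towards the predicted class.
        direction = "supports" if weight > 0 else "opposes"
        clauses.append(f"'{feature}' {direction} the prediction (weight {weight:+.2f})")
    return f"For instance {instance_id}, " + "; ".join(clauses) + "."

# Hypothetical structured explanation for a single instance.
explanation = {"age": 0.42, "income": -0.17, "tenure": 0.05}
print(narrate("x1", explanation))
```

Such fixed templates are only the simplest form of data-to-text generation; neural data-to-text models can produce more fluent and varied narratives from the same structured input.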