Understanding explainable artificial intelligence techniques: a comparative analysis for practical application

Shweta Bhatnagar, Rashmi Agrawal

Abstract


Explainable artificial intelligence (XAI) uses artificial intelligence (AI) tools and techniques to build interpretability into black-box algorithms. XAI methods are classified by purpose (pre-model, in-model, and post-model), scope (local or global), and usability (model-agnostic or model-specific). This paper summarizes XAI methods and techniques with real-life examples of XAI applications. The local interpretable model-agnostic explanations (LIME) and Shapley additive explanations (SHAP) methods were applied to the moral dataset to compare their performance outcomes. The study found that XAI algorithms can be custom-built for enhanced model-specific explanations. Relying on only one XAI method has several limitations; a combination of techniques gives more complete insight for all stakeholders.
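To make the comparison concrete, the sketch below shows how LIME and SHAP are commonly applied to a tabular classifier in Python. It assumes the open-source lime and shap packages and uses a scikit-learn dataset and random forest purely as placeholders; it is not the authors' pipeline, and the paper's moral dataset and evaluation procedure are not reproduced here.

```python
# Minimal, illustrative sketch: LIME and SHAP on a generic tabular classifier.
# Dataset, model, and column names are placeholders, not the paper's setup.
import numpy as np
import shap
from lime.lime_tabular import LimeTabularExplainer
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

data = load_breast_cancer()
X_train, X_test, y_train, y_test = train_test_split(
    data.data, data.target, random_state=0
)
model = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_train, y_train)

# LIME: local, model-agnostic explanation of a single prediction.
lime_explainer = LimeTabularExplainer(
    X_train,
    feature_names=list(data.feature_names),
    class_names=list(data.target_names),
    mode="classification",
)
lime_exp = lime_explainer.explain_instance(
    X_test[0], model.predict_proba, num_features=5
)
print(lime_exp.as_list())  # top local feature contributions for one instance

# SHAP: Shapley-value attributions; TreeExplainer is model-specific to tree
# ensembles, giving exact values efficiently for this random forest.
shap_explainer = shap.TreeExplainer(model)
shap_values = shap_explainer.shap_values(X_test[:10])
print(np.array(shap_values).shape)  # per-instance, per-feature attributions
```

In practice, the LIME output explains individual predictions, while aggregating SHAP values over the test set also supports global feature-importance views, which is why combining the two methods covers both local and global stakeholder needs.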

Keywords


Explainable artificial intelligence; Explainable artificial intelligence models; Explainable artificial intelligence techniques; Local interpretable model-agnostic explanations; Shapley additive explanations

DOI: https://doi.org/10.11591/eei.v13i6.8378


This work is licensed under a Creative Commons Attribution-ShareAlike 4.0 International License.


Bulletin of Electrical Engineering and Informatics (BEEI)
ISSN: 2089-3191, e-ISSN: 2302-9285
This journal is published by the Institute of Advanced Engineering and Science (IAES) in collaboration with Intelektual Pustaka Media Utama (IPMU).