Collection
Special Issue: Interpretable Machine Learning and Explainable Artificial Intelligence
- Submission status: Closed
Machine learning (ML) has garnered significant interest in recent years due to its applicability to a wide range of complex problems. There is growing recognition that ML models, in addition to making predictions, reveal information about relationships among domain data items, commonly referred to as the interpretability of the model. A similar development is occurring in the artificial intelligence (AI) community, which has concentrated on explainable AI (XAI) along the dimensions of algorithmic interpretability, explainability, transparency, and accountability of algorithmic judgments. ML approaches may be classified as white-box or black-box. White-box techniques, such as rule learners and inductive logic programming, produce explicit models that are intrinsically interpretable, whereas black-box techniques, such as (deep) neural networks, produce opaque models. With the growing use of ML, significant societal concerns have arisen about applying black-box models to decisions that require an explanation of domain relationships. The ability to express information obtained from ML models in human-comprehensible terms, known as interpretability, has attracted considerable attention in academia and industry. Such interpretations have found applications in healthcare, transportation, finance, education, policymaking, criminal justice, and other domains. As ML evolves, one aim is the development of interpretable techniques and models that explain themselves and their output.
This special issue invites papers on advancements in interpretable ML from the modeling and learning perspectives. We are looking for high-quality, original articles presenting work on the following (non-exhaustive) list of topics:
• Probabilistic graphical model applications
• Explainable artificial intelligence
• Rule learning for interpretable machine learning
• Interpretation of black-box models
• Interpretability in reinforcement learning
• Interpretable supervised and unsupervised models
• Interpretation of neural networks and ensemble-based methods
• Interpretation of random forests and other ensemble models
• Causality of machine learning models
• Novel applications requiring interpretability
• Methodologies for measuring interpretability of machine learning models
• Interpretability-accuracy trade-off and its benchmarks
Call for Papers Flyer: Interpretable Machine Learning and Explainable Artificial Intelligence
Editors
- Kazim Topuz, The University of Tulsa, USA
- Akhilesh Bajaj, The University of Tulsa, USA
- Kristof Coussement, IÉSEG School of Management, France
- Timothy L. Urban, The University of Tulsa, USA
Articles (8 in this collection)
-
Understanding the challenges affecting food-sharing apps’ usage: insights using a text-mining and interpretable machine learning approach
Authors
- Praveen Puram
- Soumya Roy
- Anand Gurumurthy
- Content type: Original Research
- Published: 27 June 2024
-
Toward interpretable machine learning: evaluating models of heterogeneous predictions
Authors
- Ruixun Zhang
- Content type: Original Research
- Published: 10 May 2024
-
Quantifying and explaining machine learning uncertainty in predictive process monitoring: an operations research perspective
Authors
- Nijat Mehdiyev
- Maxim Majlatow
- Peter Fettke
- Content type: Original Research
- Open Access
- Published: 03 April 2024
-
Population-based exploration in reinforcement learning through repulsive reward shaping using eligibility traces
Authors (first, second and last of 4)
- Melis Ilayda Bal
- Cem Iyigun
- Huseyin Aydin
- Content type: Original Research
- Published: 18 January 2024
-
Interpretable multi-hop knowledge reasoning for gastrointestinal disease
Authors (first, second and last of 5)
- Dujuan Wang
- Xinwei Wang
- Yunqiang Yin
- Content type: Original Research
- Published: 30 October 2023
-
How to customize an early start preparatory course policy to improve student graduation success: an application of uplift modeling
Authors
- Yertai Tanai
- Kamil Ciftci
- Content type: Original Research
- Published: 26 September 2023
-
An interpretable system for predicting the impact of COVID-19 government interventions on stock market sectors
Authors (first, second and last of 5)
- Cai Yang
- Mohammad Zoynul Abedin
- Petr Hajek
- Content type: Original Research
- Published: 24 April 2023