Abstract
Recently, several methods for understanding machine learning models' outputs have been developed. SHAP and LIME are two well-known examples. They provide individual explanations based on feature importance for each instance. While remarkable results have been achieved for individual explanations, understanding a model's decisions globally remains a complex task. Methods like LIME were extended to face this complexity by using individual explanations; in this approach, the problem was expressed as a submodular optimization problem. This algorithm is a bottom-up method aiming at providing a global explanation. It consists of picking a group of individual explanations that illustrate the global behavior of the model while avoiding redundancy. In this paper, we propose CoSP (Co-Selection Pick), a framework that allows global explainability of any black-box model by selecting individual explanations based on a similarity preserving approach. Unlike submodular optimization, our method considers the problem as a co-selection task. This approach achieves a co-selection of instances and features over the explanations provided by any explainer. The proposed framework is more generic given that the co-selection can be performed in either supervised or unsupervised scenarios, and over explanations provided by any local explainer. Preliminary experiments are conducted to validate our proposal.
1 Introduction
Nowadays, a wide range of real-life applications such as computer vision [1, 2], speech processing, natural language understanding [3], health [4], and military fields [5, 6] make use of Machine Learning (ML) models for decision making or prediction/classification purposes. However, those models are often implemented as black boxes, which makes their predictions difficult for humans to understand. This nature of ML models limits their adoption and practical applicability in many real-world domains and affects human trust in them. Making ML models more explainable and transparent is currently a trending topic in the data science and artificial intelligence fields, attracting the interest of many researchers.
Explainable AI (XAI) refers to the tools, methods, and techniques that can be used to make the behavior and predictions of ML models understandable to humans [7]. Thus, the higher the interpretability/explainability of an ML model, the easier it is for someone to comprehend why certain decisions or predictions have been made.
Multiple interpretability approaches are based on additive models where the prediction is a sum of individual marginal effects, like feature contribution [8], where a value (denoting the influence on the output) is assigned to each feature. One of the latest proposed methods is based on mathematical Shapley values and was introduced by Lundberg et al. [9] as SHAP (SHapley Additive exPlanations). It relies on combining ideas from cooperative game theory and local explanations [10]. LIME (Local Interpretable Model-agnostic Explanations), introduced by Ribeiro et al. [11], is also one of the most famous local explanation models. It explains individual predictions of any classifier or regressor in a faithful and intelligible way by approximating them locally with an interpretable model (e.g., linear models, decision trees). However, obtaining a global explanation of the model can be challenging, as it is more complicated to maintain a good fidelity-interpretability trade-off. To this end, the authors in [11] proposed an approach, called Submodular Pick, an algorithm aiming to maximize a coverage function of total feature importance for a set of instances. While maximizing the coverage function is NP-hard, the authors use a greedy algorithm that iteratively adds the instances with the highest marginal coverage to the solution set, offering a constant-factor approximation to the optimum. The selected set contains the most representative, non-redundant individual explanations of the model.
In this paper, our aim is to introduce a new approach to select individual instances (explanations) to be considered for global explanation, ensuring that the picked group reflects the global behavior of the black-box model. Unlike the submodular optimization proposed in [11], we advocate considering the problem of picking representative instances as a co-selection task. The idea is to apply a similarity preserving co-selection approach to select a set of instances and features on the explanations provided by any explainer. In fact, feature selection and instance selection have been widely considered separately in the literature to remove noise and irrelevant or redundant features or instances in datasets [12,13,14,15,16]. Unfortunately, selecting features and instances separately and sequentially is time-consuming, especially when dealing with large-scale datasets. To overcome this problem, co-selection, i.e., the simultaneous selection of features and instances, has been proposed, making use of the duality between the feature space and the instance space. In this context, several approaches have been proposed. For instance, Kuncheva et al. [17] proposed a genetic algorithm that simultaneously selects features and reference cases to improve the performance of nearest neighbor classifiers. Derrac et al. [18] suggested an evolutionary model based on cooperative coevolution to perform co-selection in nearest neighbor classification. García-Pedrajas et al. [19] proposed a scalable method for simultaneous evolutionary instance and feature selection. On the other hand, similarity preserving approaches have been considered in the literature with the aim of evaluating features by their ability to preserve locality. For instance, Zhao et al. [20] introduced a similarity preserving feature selection framework that overcomes a common weakness in handling feature redundancy. Ma et al. [21] proposed a similarity preserving method that generates unseen visual features from random noise concatenated with semantic descriptions. Shang et al. [22] suggested UFSRL, a framework that uses local similarity preservation for feature selection.
1.1 Contributions
The technical contributions of this paper are summarized as follows.
-
We propose a new approach, called CoSP, for global explainability of black-box machine learning models.
-
The proposed approach selects individual explanations to provide global explanation for machine learning models.
-
CoSP is based on a similarity preserving co-selection approach.
-
Experiments are conducted to validate the efficacy of these contributions.
-
We release a performant implementation of CoSP at [23].
The original version of this work is published at WISE’22 [24]. The main changes in this paper are presented below:
-
Creating a new section (Related Work) that presents some modern explainability methods proposed in the literature to further clarify the importance of interpretability of machine learning models.
-
Detailing, in the Proposed approach section, the alternative optimization procedure applied to the objective function.
-
Adding computational complexity in the Algorithm Analysis subsection.
-
Conducting further experiments to validate the effectiveness of CoSP by comparing it against four approaches, including Random, Greedy [25], Parzen [26] and LIME [11], combined with Submodular Pick (SP) and Random Pick (RP).
The paper is structured as follows. Section 2 introduces the related work. Section 3 provides a necessary background on LIME method. In Section 4, we present our approach allowing for a global explanation of black box ML models. Section 5 shows the preliminary experiments done to validate our proposal. In Section 6, we conclude the paper and draw some research lines for future work.
2 Related work
Interpretability of ML models reflects their ability to provide meaning in terms understandable to humans. It is crucial for trusting the system and getting insights based on its decisions. The quality of an explanation can be improved by making it more interpretable, faithful, and model-agnostic [27]. Faithfulness represents how well the explanation describes the actual behavior of the model. Model-agnostic methods can be used with any type of model. Several explainability methods have been proposed in the literature. LIME, introduced by Ribeiro et al. [11], is one of the well-known examples. It is a framework which explains a prediction by approximating it locally using an interpretable model. Other methods were proposed later. For instance, Burkart et al. [28] provided a survey that presents the main explainability methods for supervised machine learning. Lundberg et al. [29] suggested a novel explanation method that improves the interpretability of tree-based models by directly measuring local feature interaction effects. Vlahek et al. [30] introduced an iterative approach to learning explainable features, where new features are generated at each iteration and high-quality ones are selected. Dinh et al. [31] suggested a consistent feature selection method for analytic deep neural networks. Cancela et al. [32] proposed E2E-FS, a feature selection algorithm providing both precision and explainability. Wang et al. [33] proposed RC-Explainer, a Reinforced Causal Explainer for Graph Neural Networks, a powerful framework that generates faithful and concise explanations for unseen graphs. Böhle et al. [34] introduced CoDA Nets, powerful classifiers with a high degree of inherent interpretability. Table 1 gives a concise overview of other existing explainability algorithms.
3 Background on LIME
The basic idea of LIME is to replace a data instance x by its interpretable representation \(x'\) thanks to a mapping function \(\Phi (x)\). For example, an image will be represented as a group of super-pixels, and a text as a binary vector indicating the presence or absence of words. The interpretable representations are more easily understandable and closer to human intuition. Then, \(x'\) is perturbed to generate a set of new instances. The black box model is used to make predictions on the instances generated from \(x'\), which are weighted according to their proximity to \(x'\). Then, for explanation purposes, an interpretable model, such as a linear model, is trained on the weighted data to explain the prediction locally (see, Algorithm 1).
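To make this procedure concrete, the following minimal sketch mimics the perturb-weight-fit loop on a toy binary interpretable representation. The actual method is implemented in the `lime` library; the sampling scheme, kernel, function names, and data below are simplified assumptions, not the library's API.

```python
import numpy as np
from sklearn.linear_model import Ridge

rng = np.random.default_rng(0)

def lime_explain(f, x_prime, num_samples=1000, kernel_width=0.75):
    """Fit a weighted linear surrogate around the binary representation x_prime."""
    p = len(x_prime)
    # 1. Perturb x' by randomly switching interpretable components on/off.
    Z = rng.integers(0, 2, size=(num_samples, p))
    Z[0] = 1                                   # keep the original instance
    # 2. Query the black box on the perturbed samples.
    y = f(Z * x_prime)
    # 3. Weight samples by their proximity to x' (RBF kernel on distance).
    d = np.sqrt(((Z - 1) ** 2).sum(axis=1) / p)
    w = np.exp(-(d ** 2) / kernel_width ** 2)
    # 4. The coefficients of the weighted linear model are the explanation.
    g = Ridge(alpha=1.0).fit(Z, y, sample_weight=w)
    return g.coef_

# Toy black box: a fixed linear score over 5 interpretable components.
beta = np.array([2.0, 0.0, -1.0, 0.0, 0.5])
coef = lime_explain(lambda Zs: Zs @ beta, np.ones(5))
```

With a linear black box, the weighted surrogate essentially recovers the underlying coefficients, which is why the explanation is faithful in the locality of \(x'\).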
3.1 LIME: fidelity-interpretability trade-off
Authors in [11] define an explanation as a model \( g \in G\), where G is a class of potentially interpretable models (e.g., linear models, decision trees). Let \( \Omega {(g)} \) be a measure of complexity (as opposed to interpretability) of the explanation g. For example, for linear models \(\Omega (g)\) may be the number of non-zero weights. The model being explained is denoted by \(f : \mathbb R^d \longrightarrow \mathbb R\). Let \(\pi _x\) define a locality around x and let \(\mathcal {L}(f, g, \pi _x) \) be a measure of how unfaithful g is in approximating f in the locality \(\pi _x\). The explanation produced by LIME is then obtained by the following minimization problem [11]:
$$\begin{aligned} \xi (x) = \mathop {\arg \min }\limits _{g \in G} \; \mathcal {L}(f, g, \pi _x) + \Omega (g) \end{aligned}$$(1)
3.2 Explaining global behavior
LIME explains a single prediction locally. Then, it picks K representative explanations to show to the user. Submodular Pick is used to choose the instances to be inspected for global understanding. The quality of the selected instances is critical to get insights from the model in a reasonable time (see, Algorithm 2). Let \(\mathrm {\textbf{X}}\) (with \(|\mathrm {\textbf{X}}|= n\)) be the set of instances to explain. Algorithm 2 calculates \(\mathrm {\textbf{W}}\in \mathbb R^{n \times d'}\), an explanation matrix, using each individual explanation given by Algorithm 1. Then, it computes a global feature importance \(I_{j}\) for each column j in \(\mathrm {\textbf{W}}\), such that the highest importance score is given to the feature explaining an important number of different instances. Submodular Pick then aims at finding the set of instances V, \(|V| < \mathrm {\textbf{B}}\), that scores the highest coverage, defined as the total importance of the features that appear in at least one instance of V:
$$\begin{aligned} c(V,\mathrm {\textbf{W}},I) = \sum _{j=1}^{d'} \mathbb {1}_{[\exists i \in V : \mathrm {\textbf{W}}_{ij}>0]} \; I_j \end{aligned}$$(2)
Finally, a greedy algorithm is used to build V by iteratively adding the instance with the highest marginal coverage gain.
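The greedy coverage maximization described above can be sketched as follows (toy explanation matrix; following [11], the global importance \(I_j\) is taken as the square root of the total contribution of feature j):

```python
import numpy as np

def submodular_pick(W, B):
    """Greedily pick B rows of W maximizing coverage of important features."""
    I = np.sqrt(np.abs(W).sum(axis=0))          # global feature importance I_j
    V, covered = [], np.zeros(W.shape[1], dtype=bool)
    for _ in range(B):
        best, best_gain = None, -1.0
        for i in range(W.shape[0]):
            if i in V:
                continue
            # marginal coverage gain of adding instance i
            gain = I[covered | (np.abs(W[i]) > 0)].sum() - I[covered].sum()
            if gain > best_gain:
                best, best_gain = i, gain
        V.append(best)
        covered |= np.abs(W[best]) > 0
    return V

# Toy explanation matrix: 3 instances, 3 features.
W = np.array([[1.0, 0.0, 2.0],
              [0.0, 3.0, 0.0],
              [1.0, 0.0, 0.0]])
picked = submodular_pick(W, B=2)
```

Here the first pick covers two features at once and the second adds the only uncovered feature, so the third (redundant) instance is never chosen.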
4 Proposed approach
The approach we propose in this paper consists of two sequential phases (see Figure 1). The first uses LIME (without loss of generality, any other explainer can be used) to obtain the explanations of the predictions for the test data. The second phase focuses on global explainability by co-selecting the most important test instances and features. Thus, we provide a global understanding of the black-box model.
4.1 Notation
Table 2 summarizes the significant notations used in this paper. Let \({{\,\mathrm{\textbf{E}}\,}}\) be an explanation matrix of n instances and m features. The \(l_{2,1}\)-norm of \({{\,\mathrm{\textbf{E}}\,}}\) is:
$$\begin{aligned} \Vert {{\,\mathrm{\textbf{E}}\,}}\Vert _{2,1} = \sum _{i=1}^{n}\sqrt{\sum _{j=1}^{m}{{\,\mathrm{\textbf{E}}\,}}_{ij}^2} \end{aligned}$$(3)
and its Frobenius norm (\(l_{2,2}\)) is:
$$\begin{aligned} \Vert {{\,\mathrm{\textbf{E}}\,}}\Vert _{F} = \sqrt{\sum _{i=1}^{n}\sum _{j=1}^{m}{{\,\mathrm{\textbf{E}}\,}}_{ij}^2} \end{aligned}$$(4)
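As a quick numerical check of these two norms (assuming the standard definitions: the \(l_{2,1}\)-norm sums the \(l_2\)-norms of the rows, and the Frobenius norm is the square root of the sum of all squared entries):

```python
import numpy as np

# Toy explanation matrix with easy-to-verify row norms (5, 0, 13).
E = np.array([[3.0, 4.0],
              [0.0, 0.0],
              [5.0, 12.0]])

l21 = np.sqrt((E ** 2).sum(axis=1)).sum()   # sum of row l2-norms -> 18
fro = np.sqrt((E ** 2).sum())               # Frobenius norm -> sqrt(194)
```

The \(l_{2,1}\)-norm is what induces row-sparsity when used as a regularizer, which is why it appears in the feature/instance selection terms later.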
4.2 Explanation space
Let f be a black box model, and \(\mathrm {\textbf{X}}\) a test dataset of n instances and \(\Phi (\mathrm {\textbf{X}})=\mathrm {\mathbf {X'}}\) its interpretable representation in \(\mathbb R^p\). First, to obtain an individual explanation of the prediction made by f for each instance \(x_i\), we use LIME by fitting a linear model on a dataset generated around \({x'}_{i}\), the interpretable representation of \(x_i\). Thus, for each instance \(x_i\), we obtain an explanation of length k (\(k < p\)). It is worth noting that the length is a parameter set by the user and corresponds to the number of features retained. Once the individual explanations have been obtained, we construct an explanation space represented by \({{\,\mathrm{\textbf{E}}\,}}\in \mathbb R^{n\times m}\), where the dimension m of the explanation space corresponds to the union of the k features of each explanation. We illustrate this step with the following example:
Example
Let \(\mathrm {\mathbf {X'}}\) be the interpretable representation of 3 instances in \(\mathbb R^{500}\), and \(k=3\) be the length of the explanation desired for these three instances. By performing LIME algorithm on \(\mathrm {\mathbf {X'}}\), we obtain 3 explanations of length 3:
where \(e_1,e_2\), and \(e_3\) are the explanations of \(x'_1\), \(x'_2\) and \(x'_3\) respectively. Thus, the matrix \({{\,\mathrm{\textbf{E}}\,}}\in \mathbb R^{3\times 7}\) can be seen as the concatenation of all the explanations and the union of the set of features obtained by each explanation. Note that the dimension m here is equal to 7.
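This construction can be sketched directly. The words and weights below are illustrative stand-ins for the explanations \(e_1, e_2, e_3\); the union of the selected features fixes the dimension m, here 7:

```python
import numpy as np

# Each explanation is a list of k = 3 (feature, weight) pairs from LIME.
explanations = [
    [("good", 0.8), ("great", 0.5), ("movie", 0.1)],     # e1
    [("bad", -0.9), ("movie", 0.2), ("boring", -0.4)],   # e2
    [("good", 0.7), ("fun", 0.6), ("plot", 0.3)],        # e3
]

# m = size of the union of all selected features.
features = sorted({f for e in explanations for f, _ in e})
index = {f: j for j, f in enumerate(features)}

# E[i, j] holds the weight of feature j in explanation i (0 if absent).
E = np.zeros((len(explanations), len(features)))
for i, e in enumerate(explanations):
    for f, w in e:
        E[i, index[f]] = w
```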
4.3 Global explainability by co-selection
Understanding the model’s decisions globally remains a complex task. In fact, some approaches like LIME were extended to face this complexity by only picking a group of individual explanations. In this paper, we advocate a method allowing global explainability by co-selecting the most important instances and features over the explanations provided by any explainer. The idea is to find a residual matrix \(\mathrm {\textbf{R}}\) and a transformation matrix \(\mathrm {\textbf{W}}\), which transforms the high-dimensional explanation data \({{\,\mathrm{\textbf{E}}\,}}\) into the low-dimensional data \({{\,\mathrm{\textbf{E}}\,}}\mathrm {\textbf{W}}\), so as to maximize the global similarity between \({{\,\mathrm{\textbf{E}}\,}}\) and \({{\,\mathrm{\textbf{E}}\,}}\mathrm {\textbf{W}}\). After the optimal \(\mathrm {\textbf{W}}\) and \(\mathrm {\textbf{R}}\) have been obtained, the original features and instances are ranked based on the \(\ell _{2}\)-norm values of the rows of \(\mathrm {\textbf{W}}\) and the columns of \(\mathrm {\textbf{R}}\), and the top features and instances are selected accordingly.
4.4 Co-selection pick (CoSP)
To perform a co-selection of instances and features on the explanations matrix, we must minimize the following problem as pointed out in [49]:
$$\begin{aligned} \min _{\mathrm {\textbf{W}},\mathrm {\textbf{R}}} \Vert \mathrm {\textbf{W}}^T{{\,\mathrm{\textbf{E}}\,}}^T-\mathrm {\textbf{Z}}^T-\mathrm {\textbf{R}}\Vert _F^2 + \lambda \Vert \mathrm {\textbf{W}}\Vert _{2,1} + \beta \Vert \mathrm {\textbf{R}}^T\Vert _{2,1} \end{aligned}$$(5)
Where:
-
Z is the eigen-decomposition of the pairwise similarity matrix, \(\mathrm {\textbf{A}}\), computed over the explanation matrix \({{\,\mathrm{\textbf{E}}\,}}\). Note that the similarity matrix \(\mathrm {\textbf{A}}\) can be calculated in supervised fashion (e.g. adjacency matrix, fully binary matrix) if the labels of test instances are available, or in unsupervised mode as follows:
$$\begin{aligned} \mathrm {\textbf{A}}_{ij} = e^{-\frac{\Vert e_i-e_j \Vert ^2}{2\delta ^2}} \end{aligned}$$(6)
-
\(\mathrm {\textbf{R}}={\textbf {W}}^T{{\,\mathrm{\textbf{E}}\,}}^T-{\textbf {Z}}^T-\Theta \) is a residual matrix, where \(\Theta \) is a random matrix, usually assumed to follow a multi-dimensional normal distribution [50]. Note that the matrix \(\mathrm {\textbf{R}}\) is a good indicator of outliers and of less important or irrelevant instances in a dataset, according to [51, 52].
-
\(\lambda \) and \(\beta \) are regularization parameters, used to control the sparsity of \(\mathrm {\textbf{W}}\) and \(\mathrm {\textbf{R}}\) respectively; and \(\delta \) is a parameter for the RBF kernel used to compute the matrix \(\mathrm {\textbf{A}}\) in the unsupervised mode in (6).
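The unsupervised computation of \(\mathrm {\textbf{A}}\) and its eigen-decomposition can be sketched as follows. The explanation matrix is a toy one, and taking Z as the top-h scaled eigenvectors (so that \(\mathrm {\textbf{Z}}\mathrm {\textbf{Z}}^T \approx \mathrm {\textbf{A}}\)) is our assumption about how Z is formed:

```python
import numpy as np

rng = np.random.default_rng(0)
E = rng.standard_normal((6, 4))      # toy explanation matrix (n=6, m=4)
delta = 1.0                          # RBF bandwidth

# Pairwise squared distances ||e_i - e_j||^2, then the RBF kernel of (6).
sq = ((E[:, None, :] - E[None, :, :]) ** 2).sum(-1)
A = np.exp(-sq / (2 * delta ** 2))

# Eigen-decompose the symmetric matrix A and keep the top-h components,
# scaled so that Z Z^T approximates A.
vals, vecs = np.linalg.eigh(A)       # eigenvalues in ascending order
h = 3
Z = vecs[:, -h:] * np.sqrt(np.clip(vals[-h:], 0.0, None))
```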
The first term of the objective in (5) exploits the \({{\,\mathrm{\textbf{E}}\,}}\) structure by preserving the pairwise explanations similarity while the second and third terms are used to perform feature selection and instance selection, respectively.
4.4.1 Optimization
In order to minimize (5), we adopt an alternating optimization over \(\mathrm {\textbf{W}}\) and \(\mathrm {\textbf{R}}\) as in [49], by solving two reduced minimization problems:
Problem 1
Minimizing (5) by fixing \(\mathrm {\textbf{R}}\) to compute \(\mathrm {\textbf{W}}\) (for feature selection). To solve this problem, we consider the Lagrangian function of (5):
$$\begin{aligned} \mathcal {L}_\mathrm {\textbf{W}} = \Vert \mathrm {\textbf{W}}^T{{\,\mathrm{\textbf{E}}\,}}^T-\mathrm {\textbf{Z}}^T-\mathrm {\textbf{R}}\Vert _F^2 + \lambda \, tr(\mathrm {\textbf{W}}^T\mathcal {D}_W\mathrm {\textbf{W}}) \end{aligned}$$(7)
Then, we calculate the derivative of \(\mathcal {L}_\mathrm {\textbf{W}}\) w.r.t \(\mathrm {\textbf{W}}\):
$$\begin{aligned} \frac{\partial \mathcal {L}_\mathrm {\textbf{W}}}{\partial \mathrm {\textbf{W}}} = 2{{\,\mathrm{\textbf{E}}\,}}^T({{\,\mathrm{\textbf{E}}\,}}\mathrm {\textbf{W}}-\mathrm {\textbf{Z}}-\mathrm {\textbf{R}}^T) + 2\lambda \mathcal {D}_W\mathrm {\textbf{W}} \end{aligned}$$(8)
where \(\mathcal {D}_W\) is a (\(m\times m\)) diagonal matrix with the \(i^{th}\) element equal to \(\frac{1}{2\parallel {\textbf {W}}(i,:)\parallel _2 }\). Subsequently, we set the derivative to zero to update \(\mathrm {\textbf{W}}\):
$$\begin{aligned} \mathrm {\textbf{W}} = ({{\,\mathrm{\textbf{E}}\,}}^T{{\,\mathrm{\textbf{E}}\,}}+\lambda \mathcal {D}_W)^{-1}{{\,\mathrm{\textbf{E}}\,}}^T(\mathrm {\textbf{Z}}+\mathrm {\textbf{R}}^T) \end{aligned}$$(9)
Problem 2
Minimizing (5) by fixing \(\mathrm {\textbf{W}}\) to compute the solution for \(\mathrm {\textbf{R}}\) (for explanation selection). To solve this problem, we consider the Lagrangian function of (5):
$$\begin{aligned} \mathcal {L}_\mathrm {\textbf{R}} = \Vert \mathrm {\textbf{W}}^T{{\,\mathrm{\textbf{E}}\,}}^T-\mathrm {\textbf{Z}}^T-\mathrm {\textbf{R}}\Vert _F^2 + \beta \, tr(\mathrm {\textbf{R}}\mathcal {D}_\mathrm {\textbf{R}}\mathrm {\textbf{R}}^T) \end{aligned}$$(10)
Then, we calculate the derivative of \(\mathcal {L}_\mathrm {\textbf{R}}\) w.r.t \(\mathrm {\textbf{R}}\):
$$\begin{aligned} \frac{\partial \mathcal {L}_\mathrm {\textbf{R}}}{\partial \mathrm {\textbf{R}}} = -2(\mathrm {\textbf{W}}^T{{\,\mathrm{\textbf{E}}\,}}^T-\mathrm {\textbf{Z}}^T-\mathrm {\textbf{R}}) + 2\beta \mathrm {\textbf{R}}\mathcal {D}_\mathrm {\textbf{R}} \end{aligned}$$(11)
where \(\mathcal {D}_\mathrm {\textbf{R}}\) is a (\(n\times n\)) diagonal matrix with the \(i^{th}\) element equal to \(\frac{1}{2\parallel \mathrm {\textbf{R}}^T(i,:)\parallel _2 }\).
Subsequently, we set the derivative to zero to update \(\mathrm {\textbf{R}}\):
$$\begin{aligned} \mathrm {\textbf{R}} = (\mathrm {\textbf{W}}^T{{\,\mathrm{\textbf{E}}\,}}^T-\mathrm {\textbf{Z}}^T)(\mathrm {\textbf{I}}+\beta \mathcal {D}_\mathrm {\textbf{R}})^{-1} \end{aligned}$$(12)
where \(\mathrm {\textbf{I}}\) is a (\(n\times n\)) identity matrix. All of the above developments are summarized in Algorithm 3.
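A minimal sketch of the alternating procedure follows, assuming the closed forms obtained by zeroing the two derivatives: \(\mathrm {\textbf{W}} = ({{\,\mathrm{\textbf{E}}\,}}^T{{\,\mathrm{\textbf{E}}\,}}+\lambda \mathcal {D}_W)^{-1}{{\,\mathrm{\textbf{E}}\,}}^T(\mathrm {\textbf{Z}}+\mathrm {\textbf{R}}^T)\) and \(\mathrm {\textbf{R}} = (\mathrm {\textbf{W}}^T{{\,\mathrm{\textbf{E}}\,}}^T-\mathrm {\textbf{Z}}^T)(\mathrm {\textbf{I}}+\beta \mathcal {D}_\mathrm {\textbf{R}})^{-1}\). All sizes, hyper-parameter values, and the fixed iteration count are toy choices, not the paper's settings:

```python
import numpy as np

rng = np.random.default_rng(0)
n, m, h = 8, 5, 3
E = rng.standard_normal((n, m))      # explanation matrix
Z = rng.standard_normal((n, h))      # from the eigen-decomposition of A
lam, beta = 0.1, 0.1

W = rng.standard_normal((m, h))      # transformation matrix (m x h)
R = rng.standard_normal((h, n))      # residual matrix (h x n)
for _ in range(20):
    # D_W from the current rows of W, then the closed-form W update.
    Dw = np.diag(1.0 / (2 * np.linalg.norm(W, axis=1) + 1e-12))
    W = np.linalg.solve(E.T @ E + lam * Dw, E.T @ (Z + R.T))
    # D_R from the current columns of R (rows of R^T), then the R update.
    Dr = np.diag(1.0 / (2 * np.linalg.norm(R.T, axis=1) + 1e-12))
    R = (W.T @ E.T - Z.T) @ np.linalg.inv(np.eye(n) + beta * Dr)

# Rank features by row norms of W (descending = most important first) and
# instances by column norms of R (ascending = most representative first).
feature_rank = np.argsort(-np.linalg.norm(W, axis=1))
instance_rank = np.argsort(np.linalg.norm(R, axis=0))
```

The small epsilon in the diagonal matrices is a standard guard against zero-norm rows during the \(l_{2,1}\) reweighting.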
4.5 Algorithm analysis
In Algorithm 3, the final user expects a selection of \(\mathrm {\textbf{B}}\) instances (e.g., explanations) and \(\mathrm {\textbf{L}}\) features which are the most relevant for providing a global explanation of the model. In order to achieve this, CoSP requires four hyper-parameters, \(\lambda \), \(\beta \), \(\delta \) and h, that are used later on to build the set of chosen instances and features. Firstly, we build the explanation matrix \({{\,\mathrm{\textbf{E}}\,}}\) using any explainer; in our case we use LIME. Secondly, we compute the similarity matrix \(\mathrm {\textbf{A}}\), either in supervised mode (as an adjacency matrix or a binary matrix) or in an unsupervised way, according to the availability of the labels of the test instances \(\mathrm {\textbf{X}}\). Then, we eigen-decompose \(\mathrm {\textbf{A}}\) to find Z. From line 9 to line 13, \(\mathrm {\textbf{W}}\) and \(\mathrm {\textbf{R}}\) are updated until convergence according to (9) and (12). Following the alternating optimization, we rank the instances and the features according to \(\mathrm {\textbf{R}}\) and \(\mathrm {\textbf{W}}\), respectively. Thus, the higher the norm \(\parallel {\textbf {R}}(:,j)\parallel _2\), the less representative the \(j^{th}\) explanation, while the higher the norm \(\parallel {\textbf {W}}(i,:)\parallel _2\), the more important the \(i^{th}\) feature. The computational complexity of Algorithm 3 is given by the following lemma.
Lemma
CoSP is computed in time of \(\mathcal {O}\) \((nmh+m^3+n^3+nm^2+n^2h)\).
Proof
The time complexity of CoSP essentially depends on the update rule (9) as well as the update rule (12). These two rules update the matrices W and R and consist of matrix multiplication and inversion operations at each iteration. Specifically, the computation of the derivative w.r.t W requires \(\mathcal {O}\) \((nmh+nm^2+m^3)\), while the derivative w.r.t R needs \(\mathcal {O}\) \((nmh+n^2h+n^3)\).
5 Experiments
In this section, we conduct some experiments to validate our framework on some known sentiment datasets.
5.1 Datasets and compared methods
We use a binary sentiment classification dataset. Sentiment analysis is the task of analyzing people’s opinions, reviews, and comments presented as textual data. It gives intuition about different points of view and feedback by detecting relevant words used to express specific sentiments [53]. Today, companies rely on sentiment analysis to improve their strategies. People’s opinions are collected from different sources like Facebook posts, tweets, and product reviews, and processed in order to understand customers’ needs and improve marketing plans. When the sentiment is divided into positive and negative, the task is called binary sentiment analysis, which is the most common type and the one used in our case; multi-class sentiment analysis, in contrast, classifies text into groups of possible labels. We use the Multi-Domain Sentiment Dataset, which contains reviews from multiple domains (Books and DVDs) on Amazon.com, where for each type of product there are hundreds of thousands of collected reviews. Then, we use an experiment introduced in [11] which aims to evaluate whether explanations can help a simulated user recognize the best model from a group of models having the same accuracy on the validation set. To do this, a new dataset is generated by adding 10 artificial features to the train and validation sets from the original public dataset (reviews). For the train examples, each of those features appears in 10% of the instances of one class and in 20% of the other class. In the test examples, an artificial feature appears in 10% of the examples of both classes. This represents the case of spurious correlations in the data introduced by non-informative features.
Furthermore, we train pairs of classifiers until their validation accuracies are within 0.1% of each other, while their test accuracies differ by at least 5%, which makes one classifier better than the other. Then, we explain the global behavior of both classifiers using our proposed approach CoSP.
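The artificial-feature injection can be sketched as follows (toy sizes; the appearance rates follow the description above, and all names are illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)

def add_fake_features(y, train=True):
    """Return 10 binary artificial features for labels y.

    At train time each feature appears in 10% of one class and 20% of the
    other (spurious correlation); at test time in 10% of both classes.
    """
    n = len(y)
    fake = np.zeros((n, 10))
    for j in range(10):
        if train:
            p = np.where(y == 1, 0.10, 0.20)   # class-dependent appearance
        else:
            p = np.full(n, 0.10)               # uninformative at test time
        fake[:, j] = rng.random(n) < p
    return fake

y_train = rng.integers(0, 2, 1000)
F_train = add_fake_features(y_train, train=True)
F_test = add_fake_features(rng.integers(0, 2, 1000), train=False)
```

A model that latches onto these features looks fine on validation data drawn from the same distribution but loses accuracy at test time, which is exactly what a good global explanation should reveal.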
We compare CoSP against Random, Greedy [25], Parzen [26] and LIME [11], combined with Submodular Pick (SP) and Random Pick (RP). In the following, we briefly describe each approach.
-
Random randomly chooses the features as an explanation.
-
Greedy removes features highly contributing to the predicted class until the prediction changes.
-
Parzen uses Parzen windows to globally approximate the classifier.
-
LIME explains the classifier's predictions by approximating them locally with an interpretable model.
In the experiment, the explanations were obtained with the above four local explainability techniques. Then, the global explainability approaches CoSP, SP or RP were used to select the relevant instances.
5.2 Experimental setting
To validate our approach, we use the same experimental setting introduced in [54], selecting the top five important features per class as the most relevant ones to be considered for the classification task. A global approach is validated if it selects distinguishing features. The four hyper-parameters required by CoSP have been set as follows: \(\lambda \approx 2.11\), \(\beta \approx 61.79\), \(\delta = 1\) and \(h = 17000\) (which stands for the number of features selected by CoSP). In parallel, the parameter configuration of the compared methods is as follows: K = 10 words in each explanation and B = 10 instances.
5.3 Evaluation and results
In this section, we present the main results of our experiments. Figures 3, 4, 5 and 6 show the experimental results over two datasets, Books and DVDs. We summarize the main observations of the experimental results in the following points.
-
In terms of accuracy, LIME combined with either Co-Selection Pick (CoSP) or Submodular Pick (SP-LIME) outperforms the other algorithms. This means that the explanations provided by LIME are faithful to the models (see Figure 6).
-
Regardless of the choice of the explainer, CoSP is significantly better than SP or RP across the two datasets, followed by SP-LIME (see Figures 3 and 6).
-
CoSP further improves the user’s ability to select the best classifier compared with SP or RP (see Figures 3 and 6).
-
From Figures 4 and 5, the displayed explanations contain words that are meaningful for judging the type of comment. The features are aligned with human intuition, and words with no representative meaning, like stop words, were not selected. Also, the noisy features labeled with the prefix “FAKE” added to the dataset were not deemed important.
6 Conclusion
In this paper, we presented CoSP, a generic framework aiming to select individual instances in order to provide a global explanation for machine learning models. We used similarity-based co-selection as a foundation to build a global understanding of the black box's internal logic over any local explainer. Furthermore, we conducted experiments showing that CoSP offers representative insights. This study is another step towards understanding machine learning models globally. For future work, we would like to explore this method in the context of time series data, as it is challenging to find representative illustrations for this type of data. The approach we proposed is independent of the type of data, since it is based on the explanations provided by a local explainer. Concerning time series, the local explainer must be capable of processing this type of data. This involves in particular the choice of an efficient representation of the time series. In the case of LIME, it is necessary to find a vector representation of the series to be able to apply LASSO and obtain the explanations. Among the applications to which we want to apply our approach is the detection of contextual anomalies in time series. The idea is then not only to detect abnormal segments in a time series but also to explain why a segment was detected as abnormal.
Availability of data and materials
Not applicable.
References
Mohaghegh, F., Murthy, J.: Machine learning and computer vision techniques to predict thermal properties of particulate composites. CoRR. abs/2010.01968 (2020). arXiv:2010.01968
Holm, E.A., Cohn, R., Gao, N., Kitahara, A.R., Matson, T.P., Lei, B., Yarasi, S.R.: Overview: Computer vision and machine learning for microstructural characterization and analysis. CoRR. abs/2005.14260 (2020). arXiv:2005.14260. https://doi.org/10.1007/s11661-020-06008-4
Kosowski, P.: Deep learning for natural language processing and language modelling. In: 2018 Signal Processing: Algorithms, Architectures, Arrangements, and Applications (SPA), pp. 223–228 (2018). https://doi.org/10.23919/SPA.2018.8563389
Shailaja, K., Seetharamulu, B., Jabbar, M.A.: Machine learning in healthcare: a review. In: 2018 Second International Conference on Electronics, Communication and Aerospace Technology (ICECA), pp. 910–914 (2018). https://doi.org/10.1109/ICECA.2018.8474918
Bistron, M., Piotrowski, Z.: Artificial intelligence applications in military systems and their influence on sense of security of citizens. Electronics. 10(7), (2021). https://www.mdpi.com/2079-9292/10/7/871
Gunning, D., Aha, D.: DARPA's explainable artificial intelligence (XAI) program. AI Mag. 40(2), 44–58 (2019)
Guidotti, R., Monreale, A., Ruggieri, S., Turini, F., Giannotti, F., Pedreschi, D.: A survey of methods for explaining black box models. ACM Comput. Surv. 51(5), (2018). https://doi.org/10.48550/arXiv.1802.01933
Strumbelj, E., Kononenko, I.: Explaining prediction models and individual predictions with feature contributions. Knowl. Inf. Syst. 41, 647–665 (2013)
Lundberg, S., Lee, S.: A unified approach to interpreting model predictions. In: Advances in Neural Information Processing Systems, pp. 4765–4774 (2017)
Lundberg, S., Erion, G., Chen, H., DeGrave, A., Prutkin, J., Nair, B., Katz, R., Himmelfarb, J., Bansal, N., Lee, S.: Explainable ai for trees: From local explanations to global understanding. ArXiv. abs/1905.04610, (2019)
Ribeiro, M., Singh, S., Guestrin, C.: “Why should I trust you?”: Explaining the predictions of any classifier. In: Proc. of the 22nd ACM SIGKDD Inter. Conf. on Knowledge Discovery and Data Mining, San Francisco, CA, USA, August 13-17, 2016, pp. 1135–1144. ACM (2016). https://doi.org/10.1145/2939672.2939778
Li, J., Cheng, K., Wang, S., Morstatter, F., Trevino, R.P., Tang, J., Liu, H.: Feature selection: A data perspective. ACM computing surveys (CSUR). 50(6), 1–45 (2017)
Guyon, I., Elisseeff, A.: An introduction to variable and feature selection. J. Mach. Learn. Res. 3(Mar), 1157–1182 (2003)
Olvera-López, J.A., Carrasco-Ochoa, J.A., Martínez-Trinidad, J.F., Kittler, J.: A review of instance selection methods. Artif. Intell. Rev. 34, 133–143 (2010)
Liu, H., Motoda, H.: On issues of instance selection. Data Min. Knowl. Disc. 6(2), 115 (2002)
Li, Y.-F., Zhou, Z.-H.: Improving semi-supervised support vector machines through unlabeled instances selection. In: Proceedings of the AAAI Conference on Artificial Intelligence, 25, pp. 386–391 (2011)
Kuncheva, L.I., Jain, L.C.: Nearest neighbor classifier: Simultaneous editing and feature selection. Pattern Recogn. Lett. 20(11–13), 1149–1156 (1999)
Derrac, J., García, S., Herrera, F.: Ifs-coco: Instance and feature selection based on cooperative coevolution with nearest neighbor rule. Pattern Recogn. 43(6), 2082–2105 (2010)
GarcíA-Pedrajas, N., De Haro-GarcíA, A., PéRez-RodríGuez, J.: A scalable approach to simultaneous evolutionary instance and feature selection. Inf. Sci. 228, 150–174 (2013)
Zhao, Z., Wang, L., Liu, H., Ye, J.: On similarity preserving feature selection. IEEE Trans. Knowl. Data Eng. 25(3), 619–632 (2011)
Ma, Y., Xu, X., Shen, F., Shen, H.T.: Similarity preserving feature generating networks for zero-shot learning. Neurocomputing 406, 333–342 (2020)
Shang, R., Chang, J., Jiao, L., Xue, Y.: Unsupervised feature selection based on self-representation sparse regression and local similarity preserving. Int. J. Mach. Learn. & Cybernet. 10, 757–770 (2019)
Code for CoSP. https://github.com/KhaoulaBF/CoSPIctai/blob/main/dvd_features_scos%20(2).ipynb
Meddahi, K., Benkabou, S.-E., Hadjali, A., Mesmoudi, A., El Kefel Mansouri, D., Benabdeslem, K., Chaib, S.: Towards a co-selection approach for a global explainability of black box machine learning models. In: International Conference on Web Information Systems Engineering, pp. 97–109 (2022). Springer
Martens, D., Provost, F.: Explaining data-driven document classifications. MIS quarterly. 38(1), 73–100 (2014)
Baehrens, D., Schroeter, T., Harmeling, S., Kawanabe, M., Hansen, K., Müller, K.-R.: How to explain individual classification decisions. J. Mach. Learn. Res. 11, 1803–1831 (2010)
Ribeiro, M., Singh, S., Guestrin, C.: Fairness, Accountability, and Transparency in Machine Learning, paper ‘Why Should I Trust You?’ Explaining the Predictions of Any Classifier. https://www.fatml.org/schedule/2016/presentation/why-should-i-trust-you-explaining-predictions (2016)
Burkart, N., Huber, M.F.: A survey on the explainability of supervised machine learning. J. Artif. Intell. Res. 70, 245–317 (2021)
Lundberg, S.M., Erion, G., Chen, H., DeGrave, A., Prutkin, J.M., Nair, B., Katz, R., Himmelfarb, J., Bansal, N., Lee, S.-I.: From local explanations to global understanding with explainable ai for trees. Nat. Mach. Intell. 2(1), 56–67 (2020)
Vlahek, D., Mongus, D.: An efficient iterative approach to explainable feature learning. IEEE Trans. Neural Netw. Learn. Syst. (2021)
Dinh, V.C., Ho, L.S.: Consistent feature selection for analytic deep neural networks. Adv. Neural Inf. Proc. Syst. 33, 2420–2431 (2020)
Cancela, B., Bolón-Canedo, V., Alonso-Betanzos, A.: E2E-FS: An end-to-end feature selection method for neural networks. IEEE Trans. Pattern Anal. Mach. Intell. (2022)
Wang, X., Wu, Y., Zhang, A., Feng, F., He, X., Chua, T.-S.: Reinforced causal explainer for graph neural networks. IEEE Trans. Pattern Anal. Mach. Intell. (2022)
Böhle, M., Fritz, M., Schiele, B.: Optimising for interpretability: Convolutional dynamic alignment networks. IEEE Trans. Pattern Anal. Mach. Intell. (2022)
Shrikumar, A., Greenside, P., Shcherbina, A., Kundaje, A.: Not just a black box: Learning important features through propagating activation differences. arXiv:1605.01713 (2016)
Krishnan, S., Wu, E.: Palm: Machine learning explanations for iterative debugging. In: Proceedings of the 2Nd Workshop on Human-in-the-loop Data Analytics, pp. 1–6 (2017)
Ribeiro, M.T., Singh, S., Guestrin, C.: Anchors: High-precision model-agnostic explanations. In: Proc. AAAI Conf. Artif. Intell. vol. 32 (2018)
Zhou, B., Bau, D., Oliva, A., Torralba, A.: Interpreting deep visual representations via network dissection. IEEE Trans. Pattern Anal. Mach. Intell. 41(9), 2131–2145 (2018)
Schwab, P., Karlen, W.: Cxplain: Causal explanations for model interpretation under uncertainty. Adv. Neural Inf. Proc. Syst. 32 (2019)
Yang, M., Kim, B.: Benchmarking attribution methods with relative feature importance. arXiv:1907.09701 (2019)
Albini, E., Rago, A., Baroni, P., Toni, F.: Relation-based counterfactual explanations for bayesian network classifiers. In: IJCAI, pp. 451–457 (2020)
Agarwal, R., Melnick, L., Frosst, N., Zhang, X., Lengerich, B., Caruana, R., Hinton, G.E.: Neural additive models: Interpretable machine learning with neural nets. Adv. Neural Inf. Proc. Syst. 34, 4699–4711 (2021)
Liu, Y., Khandagale, S., White, C., Neiswanger, W.: Synthetic benchmarks for scientific research in explainable machine learning. In: Adv. Neural Inf. Proc. Syst. Datasets Track. (2021)
Graziani, M., Lompech, T., Müller, H., Andrearczyk, V.: Evaluation and comparison of cnn visual explanations for histopathology. In: Proceedings of the AAAI Conference on Artificial Intelligence Workshops (XAI-AAAI-21), Virtual Event, pp. 8–9 (2021)
Agarwal, C., Krishna, S., Saxena, E., Pawelczyk, M., Johnson, N., Puri, I., Zitnik, M., Lakkaraju, H.: Openxai: Towards a transparent evaluation of model explanations. Adv. Neural Inf. Proc. Syst. 35, 15784–15799 (2022)
Li, X., Xiong, H., Li, X., Wu, X., Chen, Z., Dou, D.: Interpretdl: explaining deep models in paddlepaddle. J. Mach. Learn. Res. 23(1), 8969–8974 (2022)
Motallebi, M., Anik, M.T.A., Zaïane, O.R.: Explaining decisions of black-box models using BARBE. In: International Conference on Database and Expert Systems Applications, pp. 82–97 (2023). Springer
Agarwal, C., Queen, O., Lakkaraju, H., Zitnik, M.: Evaluating explainability for graph neural networks. Scientific Data. 10(1), 144 (2023)
Benabdeslem, K., Mansouri, D.E.K., Makkhongkaew, R.: sCOs: Semi-supervised co-selection by a similarity preserving approach. IEEE Trans. Knowl. Data Eng. 34(6), 2899–2911 (2022). https://doi.org/10.1109/TKDE.2020.3014262
She, Y., Owen, A.B.: Outlier detection using nonconvex penalized regression. CoRR abs/1006.2592 (2010). arXiv:1006.2592
Tong, H., Lin, C.: Non-negative residual matrix factorization with application to graph anomaly detection. In: Proceedings of the Eleventh SIAM International Conference on Data Mining, SDM 2011, Mesa, Arizona, USA, April 28–30, 2011, pp. 143–153. SIAM/Omnipress (2011)
Tang, J., Liu, H.: CoSelect: Feature selection with instance selection for social media data. In: Proceedings of the 13th SIAM International Conference on Data Mining, Austin, Texas, USA, May 2–4, 2013, pp. 695–703. SIAM (2013)
Minaee, S., Kalchbrenner, N., Cambria, E., Nikzad, N., Chenaghlu, M., Gao, J.: Deep learning-based text classification. ACM Computing Surveys (CSUR) 54, 1–40 (2021)
van der Linden, I., Haned, H., Kanoulas, E.: Global aggregations of local explanations for black box models. CoRR abs/1907.03039 (2019). arXiv:1907.03039
Funding
Not applicable.
Author information
Authors and Affiliations
Contributions
1- Dou El Kefel Mansouri: Methodology, Software, Validation, Formal analysis, Investigation, Writing – original draft preparation.
2- Seif-Eddine Benkabou: Conceptualization of this study, Methodology, Software, Validation, Formal analysis, Investigation, Writing – original draft preparation.
3- Khaoula Meddahi: Conceptualization of this study, Methodology, Software, Validation, Formal analysis, Investigation, Data curation, Writing – original draft preparation.
4- Allel Hadjali: Methodology, Investigation, Visualization.
5- Amin Mesmoudi: Software, Methodology, Investigation, Visualization.
6- Khalid Benabdeslem: Methodology, Investigation, Visualization.
7- Souleyman Chaib: Methodology, Investigation, Visualization.
All authors reviewed the manuscript.
Corresponding author
Ethics declarations
Conflicts of interest
The authors declare that they have no conflict of interest.
Ethical Approval
Not applicable.
Additional information
Publisher's Note
Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
This article belongs to the Topical Collection: Special Issue on Web Information Systems Engineering 2022
Guest Editors: Richard Chbeir, Helen Huang, Yannis Manolopoulos and Fabrizio Silvestri.
Rights and permissions
Springer Nature or its licensor (e.g. a society or other partner) holds exclusive rights to this article under a publishing agreement with the author(s) or other rightsholder(s); author self-archiving of the accepted manuscript version of this article is solely governed by the terms of such publishing agreement and applicable law.
About this article
Cite this article
Mansouri, D.E.K., Benkabou, SE., Meddahi, K. et al. CoSP: co-selection pick for a global explainability of black box machine learning models. World Wide Web 26, 3965–3981 (2023). https://doi.org/10.1007/s11280-023-01213-8
Received:
Revised:
Accepted:
Published:
Issue Date:
DOI: https://doi.org/10.1007/s11280-023-01213-8