Abstract
In real applications, multi-view clustering with incomplete data plays an important role in data mining. Designing an algorithm that improves clustering performance in this setting is challenging. In this paper, we propose a learned-graph approach to handle the case where every view suffers from missing information. It integrates incomplete multi-view data and clusters the data simultaneously by learning ideal graph structures. For each view, starting from an initial input graph, it excavates a clustering structure while enforcing consistency with the other views. The learned structured graphs have exactly c (the predefined number of clusters) connected components, so the clustering results can be obtained without any post-clustering. An efficient optimization strategy is provided that simultaneously handles both the whole and the partial regularization problems. The proposed method exhibits impressive performance in experiments.
Supported by the National Natural Science Foundation of China (No. 61473302, 61503396).
1 Introduction
In many real applications, data often come from multiple sources or with multiple modalities, forming multi-view data, which have attracted extensive attention in the data mining field [1, 7, 14]. Observing that multiple views usually provide complementary and compatible information, it is natural to integrate them for better performance [6, 14]. However, in real applications, it is often the case that some or even all of the views suffer from missing information [12, 19]. For example, in speaker grouping, the audio and visual appearances represent two views, and some speakers may lack audio or visual information. Another example is document clustering: different language versions of a document can be regarded as multiple views, but many documents may not be translated into every language. Therefore, it is necessary to explore how to integrate such incomplete multi-view data. In this paper, we focus on multi-view clustering with incomplete data. Existing approaches for this task can be divided into two categories: completion methods [2, 13] and subspace methods [12, 20].
Completion methods are based on matrix completion. For example, singular value decomposition imputation [2] first uses the eigenvalues to regress on the complete attributes of the instance, obtaining an estimate of the missing value, and then applies conventional multi-view clustering methods [7, 10] to derive the clustering results. Different completion methods [7, 9, 10] differ in the principles by which they complete the incomplete data. Their performance is usually unsatisfactory when the data are missing block-wise [12].
In recent years, some subspace methods have been proposed to handle this case. Partial multi-view clustering (PVC) [12] first divides the partial examples into two blocks and then performs non-negative matrix factorization (NMF) [11] to learn a low-dimensional representation of the multi-view data. Incomplete multi-modal visual data grouping (IMG) [20] can be regarded as a variant of PVC that additionally requires the low-dimensional representations to conform to a self-learned manifold structure. Other methods such as [17, 18] also fall into this category. Although these methods have achieved good performance, there is still room for improvement. Most of them are based on NMF, which aims at learning latent low-dimensional representations rather than clustering, so they must apply a post-clustering step such as k-means or spectral clustering to obtain the final results. Besides, they assume that the shared data in different views have exactly the same representation in the latent space, which may negatively affect the intrinsic inner structure of each individual view.
In this paper, with the goal of learning cluster structures directly, we propose the Structured Graph Learning (SGL) method to integrate incomplete multi-view data and cluster them simultaneously. For each view, SGL excavates a structured graph that combines the partial and the complete examples. To establish interaction between the views, we naturally constrain the subgraphs corresponding to the shared data (in different views) to be close. Thus, to some extent, we maintain the intrinsic structure of each individual view as well as the consistency between views. By improving the mechanism of graph learning, the graph matrices learned by our method have ideal structures, namely exactly c connected components, so the clustering results can be derived from these graphs without any post-clustering. Besides, we propose an efficient optimization strategy to solve the formulated problem. Experimental results on real benchmark data sets validate the advantages of our method.
2 The Proposed SGL
For convenience of presentation, we take two-view data for illustration; as can be seen from the following formulations, the extension to any number of views is direct. For example, most simply, we can separate the multiple views into pairs and then solve all the two-view problems. For the input data, each column is a data point and each row is a feature. The feature dimensions of view 1 and view 2 are \(d_1\) and \(d_2\), respectively. The input data \([{X^{(1)}}^T,{X^{(2)}}^T]^T\in \mathbb {R}^{(d_1+d_2)\times n_3}\), \(\hat{X}^{(1)}\in \mathbb {R}^{d_1\times n_1}\) and \(\hat{X}^{(2)}\in \mathbb {R}^{d_2\times n_2}\) denote the examples present in both views, only in view 1, and only in view 2, respectively. \(X^{(1)}\in \mathbb {R}^{d_1\times n_3}\) and \(X^{(2)}\in \mathbb {R}^{d_2\times n_3}\) denote the shared data in view 1 and view 2, respectively. We set \(X_1=[X^{(1)},\hat{X}^{(1)}]\in \mathbb {R}^{d_1\times (n_3+n_1)}\) and \(X_2=[X^{(2)},\hat{X}^{(2)}]\in \mathbb {R}^{d_2\times (n_3+n_2)}\), so that the n-th (\(\forall n\le n_3\)) columns of \(X_1\) and \(X_2\) belong to the same example, while their remaining columns share no common example. Figure 1 illustrates the notation.
We denote the initial graphs constructed from \(X_1\) and \(X_2\) as \(A_1\in \mathbb {R}^{(n_3+n_1) \times (n_3+n_1)}\) and \(A_2\in \mathbb {R}^{(n_3+n_2)\times (n_3+n_2)}\), and the learned graphs that best approximate them as \(S_1\in \mathbb {R}^{(n_3+n_1)\times (n_3+n_1)}\) and \(S_2\in \mathbb {R}^{(n_3+n_2)\times (n_3+n_2)}\), respectively. Similarly, \(\bar{A}_1, \bar{A}_2\in \mathbb {R}^{n_3\times n_3}\) denote the initial graphs corresponding to \(X^{(1)}\) and \(X^{(2)}\), and \(\bar{S}_1, \bar{S}_2\in \mathbb {R}^{n_3\times n_3}\) denote the corresponding learned graphs; thus \(\bar{A}_1, \bar{A}_2\) are subgraphs of \(A_1, A_2\), and \(\bar{S}_1, \bar{S}_2\) are subgraphs of \(S_1, S_2\), respectively. Figure 2 illustrates the learned graphs \(S_1\) and \(S_2\).
For each individual view, we aim to excavate its intrinsic clustering structure. Motivated by [16], which is effective in learning the clustering structure of complete single-view data, we intend to learn an ideally structured graph from the initial graph. Given initial affinity matrices \(A_1\) and \(A_2\) constructed from \(X_1\) and \(X_2\), we learn the graph matrices \(S_1\) and \(S_2\) that best approximate \(A_1\) and \(A_2\), respectively. The elementary objectives are:
Here we append the constraints \(\sum _js_{1ij}=1\) and \(\sum _js_{2ij}=1\) \((\forall i)\) to avoid the case that some rows of \(S_1\) or \(S_2\) are all zeros, and to make \(S_1\) and \(S_2\) comparable at the same scale.
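Each of these elementary objectives decomposes row by row into a quadratic program over the probability simplex, which has a closed-form solution via Euclidean projection. A minimal sketch (function names are ours, not from the paper):

```python
import numpy as np

def project_simplex(v):
    """Euclidean projection of v onto {s : s >= 0, sum(s) = 1},
    computed in closed form by sorting (Duchi et al. style)."""
    u = np.sort(v)[::-1]
    css = np.cumsum(u)
    # largest rho with u_rho + (1 - css_rho)/rho > 0
    rho = np.nonzero(u * np.arange(1, len(v) + 1) > (css - 1))[0][-1]
    theta = (css[rho] - 1.0) / (rho + 1)
    return np.maximum(v - theta, 0.0)

def learn_graph(A):
    """Row-wise solution of min_S ||S - A||_F^2 s.t. every row of S
    lies on the simplex (the elementary objective, no rank constraint)."""
    return np.vstack([project_simplex(a) for a in A])
```

Note that a row of A that already satisfies the constraints is left unchanged, as expected of a projection.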
For each view, according to Eq. (1) (or Eq. (2)), we can learn a graph that explores and maintains the inner structure of that view. Further, we aim to bridge the different views, for which it is natural to take into account the consistency between the shared data across views. \(\bar{S}_1\) can be learned not only from \(\bar{A}_1\) but also from \(\bar{S}_2\); likewise, \(\bar{S}_2\) can be learned not only from \(\bar{A}_2\) but also from \(\bar{S}_1\). Thus we have the following objective function:
where \(\mu >0\) is a parameter that balances the first two terms and the last term.
To make the learned graphs have ideal structures directly for clustering tasks, we introduce the following property [4, 15]:
Property 1
The number of connected components in the graph with the similarity matrix S is equal to the multiplicity c of the eigenvalue zero of \(L_S\).
\( L_S \in \mathbb {R}^{n\times n}\) in Property 1 is the Laplacian matrix of the nonnegative similarity matrix S, i.e. \(L_S=D_S-(S^T+S)/2\) (where \(D_S\) is the degree matrix defined as a diagonal matrix whose i-th diagonal element is \(\sum _j{(s_{ij}+s_{ji})/2}\)).
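Property 1 is easy to verify numerically: the multiplicity of the zero eigenvalue of \(L_S\) equals the number of connected components. A small sanity check with SciPy on a graph made of two disjoint edges:

```python
import numpy as np
from scipy.sparse import csr_matrix
from scipy.sparse.csgraph import connected_components

def laplacian(S):
    """L_S = D_S - (S^T + S)/2, with D_S the degree matrix defined above."""
    W = (S + S.T) / 2.0
    return np.diag(W.sum(axis=1)) - W

# Two disjoint edges -> a graph with exactly 2 connected components.
S = np.zeros((4, 4))
S[0, 1] = S[1, 0] = 1.0
S[2, 3] = S[3, 2] = 1.0
zero_mult = int(np.sum(np.isclose(np.linalg.eigvalsh(laplacian(S)), 0.0)))
n_comp, _ = connected_components(csr_matrix(S), directed=False)
assert zero_mult == n_comp == 2
```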
Given a graph with graph matrix S, Property 1 indicates that if \(rank(L_S)=n-c\), then the graph contains c connected components, each corresponding to a cluster, so the clustering results can be read directly from the learned graph. Thus problem (3) can be refined into the following problem:
which is equivalent to the following optimization problem for large enough values of both \(\lambda _1\) and \(\lambda _2\):
Here, \(\sigma _{1i}(L_{S_1})\) and \(\sigma _{2i}(L_{S_2})\) denote the i-th smallest eigenvalues of \(L_{S_1}\) and \(L_{S_2}\), respectively. Since \(\sigma _{1i}(L_{S_1})\ge 0\) and \(\sigma _{2i}(L_{S_2})\ge 0\) \((\forall i)\), when \(\lambda _1\) and \(\lambda _2\) are large enough we have \(\sum _{i=1}^c\sigma _{1i}(L_{S_1})=0\) and \(\sum _{i=1}^c\sigma _{2i}(L_{S_2})=0\), so the constraints \(rank(L_{S_1})=n_1+n_3-c\) and \(rank(L_{S_2})=n_2+n_3-c\) of problem (4) are satisfied. Thus, problem (5) is equivalent to problem (4). According to Ky Fan's Theorem [5], we have
where \(F_1\) and \(F_2\) are the intermediate variables. Thus the problem (5) is further equivalent to the following problem:
where both parameters \(\lambda _1\) and \(\lambda _2\) are large enough to guarantee that the sums of the c smallest eigenvalues of \(L_{S_1}\) and \(L_{S_2}\) are equal to zero.
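Ky Fan's theorem, as used here, states that \(\min _{F^TF=I}\mathrm {Tr}(F^TLF)\) equals the sum of the c smallest eigenvalues of L, attained by the corresponding eigenvectors. A quick numerical check on a random symmetric PSD matrix standing in for a Laplacian:

```python
import numpy as np

rng = np.random.default_rng(0)
B = rng.standard_normal((6, 6))
L = B @ B.T          # symmetric PSD stand-in for a graph Laplacian
c = 2
vals, vecs = np.linalg.eigh(L)   # eigenvalues returned in ascending order
F = vecs[:, :c]                  # eigenvectors of the c smallest eigenvalues
# The minimum of Tr(F^T L F) over orthonormal F equals sum of c smallest eigenvalues.
assert np.isclose(np.trace(F.T @ L @ F), vals[:c].sum())
# No other orthonormal F does better:
Q, _ = np.linalg.qr(rng.standard_normal((6, c)))
assert np.trace(Q.T @ L @ Q) >= vals[:c].sum() - 1e-9
```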
Note that, different from previous graph-based methods, the graphs in Eq. (7) are learned from both the initial graphs and the consistency between views. Besides, Eq. (7) is designed to cluster data points directly by learning graphs whose structures are ideal for clustering. From the learned structured graphs, we obtain the clustering results of each view directly. Then, by computing a simple best one-to-one map between the clustering results of \(X^{(1)}\) and \(X^{(2)}\), the final clustering results are derived.
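The best one-to-one map between the two views' cluster labels can be computed with the Hungarian algorithm on the label co-occurrence counts of the shared examples; the paper does not specify an implementation, so the following is one reasonable sketch:

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def best_map(labels1, labels2):
    """Relabel labels2's clusters to best agree with labels1 on the
    shared examples, via the Hungarian algorithm (one-to-one map)."""
    k = int(max(labels1.max(), labels2.max())) + 1
    overlap = np.zeros((k, k), dtype=int)
    for a, b in zip(labels1, labels2):
        overlap[a, b] += 1
    row, col = linear_sum_assignment(-overlap)  # maximize total overlap
    mapping = dict(zip(col, row))
    return np.array([mapping[b] for b in labels2])

l1 = np.array([0, 0, 1, 1, 2, 2])
l2 = np.array([1, 1, 2, 2, 0, 0])  # same partition, permuted labels
assert (best_map(l1, l2) == l1).all()
```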
Since problem (7) is not convex and the regularization is imposed on only part of the elements of the graph matrices, it appears difficult to solve. We propose an efficient algorithm for it in the next section.
3 Optimization Algorithms
When \(S_1\) and \(S_2\) are fixed, optimizing \(F_1,F_2\), the problem (7) becomes
The optimal solution of \(F_1\) and \(F_2\) is formed by the c eigenvectors of \(L_{S_1}\) and \(L_{S_2}\) respectively corresponding to their c smallest eigenvalues.
When \(F_1\), \(F_2\) and \(S_2\) are fixed, optimizing \(S_1\), the problem (7) becomes
where \(S_1, A_1\in \mathbb {R}^{(n_3+n_1)\times (n_3+n_1)}\) and \(\bar{S}_1, \bar{S}_2\in \mathbb {R}^{n_3\times n_3}\).
The problem (10) is equivalent to
Since problem (11) is independent for different i, we can solve the following problems separately for each i.
Update the first \(n_3\) rows of \(S_1\)(\(1\le i\le n_3\)). For each i, we denote \({v_1}_{ij}=f_{1i}-f_{1j}\). Then the problem (11) becomes
For each i, we denote \(s_{1i}=[\bar{s}_{1i} , \hat{s}_{1i}]\), \(v_{1i}=[\bar{v}_{1i} , \hat{v}_{1i}]\) and \(a_{1i}=[\bar{a}_{1i} , \hat{a}_{1i}]\), where \(\bar{s}_{1i}\), \(\bar{v}_{1i}\) and \(\bar{a}_{1i}\) contain exactly the first \(n_3\) elements of \(s_{1i}\), \(v_{1i}\) and \(a_{1i}\), and \(\hat{s}_{1i}\), \(\hat{v}_{1i}\) and \(\hat{a}_{1i}\) contain exactly the last \(n_1\) elements, respectively.
The problem (12) can be written in vector form as
We denote \(c_1 = \bar{a}_{1i}-\frac{\lambda _1}{2}\bar{v}_{1i}+\mu \bar{s}_{2i}\), \(c_2=\hat{a}_{1i}-\frac{\lambda _1}{2}\hat{v}_{1i}\) and \(b_1=[c_1, c_2]\). \(U_1\) is a diagonal matrix whose first \(n_3\) diagonal elements are all \(1+\mu \) and whose remaining elements are all 1. Then the problem (13) becomes
This problem can be solved by the algorithm in [16].
Update the last \(n_1\) rows of \(S_1\)(\(n_3<i\le n_3+n_1\)). When \(n_3<i\le n_3+n_1\), denoting \(v_{1ij}=f_{1i}-f_{1j}\), the problem (11) becomes
The problem (15) can be written in vector form as
This problem can be solved with an effective iterative algorithm [8], or by a solution of a form similar to Eq. (30) in [16].
When \(F_1\), \(F_2\) and \(S_1\) are fixed, optimizing \(S_2\) is similar to optimizing \(S_1\). We omit the detailed process.
Update the first \(n_3\) rows of \(S_2\)(\(1\le i\le n_3\)). When \(1\le i\le n_3\), the problem (7) becomes
where \(b_2=[c_3, c_4]\), \(U_2\) is a diagonal matrix whose first \(n_3\) diagonal elements are all \(1+\mu \) and whose remaining elements are all 1, \(c_3=\bar{a}_{2i}-\frac{\lambda _2}{2}\bar{v}_{2i}+\mu \bar{s}_{1i}\), and \(c_4=\hat{a}_{2i}-\frac{\lambda _2}{2}\hat{v}_{2i}\).
Update the last \(n_2\) rows of \(S_2\)(\(n_3<i\le n_3+n_2\)). When \(n_3<i\le n_3+n_2\), the problem (7) becomes
The full procedure is provided in Algorithm 1, in which, for each data point, we only update the nearest k similarities in S to significantly reduce the complexity of updating S and F. This technique makes our method applicable to very large-scale data sets.
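For reference, a kNN-sparse initial affinity in the spirit of CLR [16] keeps, for each point, only its k nearest neighbours with simplex-normalized weights. This mirrors, but is not verbatim, Eq. (35) of [16]:

```python
import numpy as np

def knn_affinity(X, k=5):
    """Sparse initial affinity: row i keeps its k nearest neighbours with
    weights s_ij proportional to (d_{i,k+1} - d_ij), normalized to sum to 1
    (a CLR-style construction; exact form in [16] may differ)."""
    n = X.shape[1]  # columns are data points, as in Sect. 2
    d = ((X[:, :, None] - X[:, None, :]) ** 2).sum(axis=0)  # squared distances
    A = np.zeros((n, n))
    for i in range(n):
        idx = np.argsort(d[i])[1:k + 2]      # skip self, take k+1 neighbours
        dk1, dj = d[i, idx[-1]], d[i, idx[:-1]]
        w = (dk1 - dj) / max(k * dk1 - dj.sum(), 1e-12)
        A[i, idx[:-1]] = w
    return A
```

Each row of the returned affinity lies on the simplex and has at most k nonzero entries, which is exactly the sparsity that makes the per-iteration updates of S and F cheap.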
4 Discussion
4.1 Convergence Analysis
Property 2
Algorithm 1 monotonically decreases the objective of problem (7) in each iteration and converges to a local optimum of the problem.
The idea of the proof is summarized as follows. Algorithm 1 converges because the objective function value of Eq. (7) decreases as the iterations proceed. In detail, with \(S_1\) and \(S_2\) fixed, the optimal \(F_1\) and \(F_2\) are obtained by solving problems (8) and (9), which reduces the objective value; with \(F_1\) and \(F_2\) fixed, the optimal \(S_1\) and \(S_2\) are obtained by solving problems (14), (16) and (17), (18), which also reduces the objective value. In summary, Algorithm 1 converges to a local optimum of problem (7).
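The alternating scheme, and its monotone decrease, can be illustrated on a single view by dropping the cross-view coupling term (a simplified sketch; the names and the handling of λ are ours):

```python
import numpy as np

def project_simplex(v):
    # Euclidean projection onto {s : s >= 0, sum(s) = 1}
    u = np.sort(v)[::-1]
    css = np.cumsum(u)
    rho = np.nonzero(u * np.arange(1, len(v) + 1) > (css - 1))[0][-1]
    return np.maximum(v - (css[rho] - 1.0) / (rho + 1), 0.0)

def sgl_single_view(A, c, lam=1.0, n_iter=15):
    """Single-view sketch of the alternating scheme: the F-step takes the
    c smallest eigenvectors of L_S; the S-step is a row-wise simplex
    projection. The full SGL additionally couples the shared subgraphs."""
    n = A.shape[0]
    S = A.copy()
    obj = []
    for _ in range(n_iter):
        W = (S + S.T) / 2
        L = np.diag(W.sum(1)) - W
        _, vecs = np.linalg.eigh(L)
        F = vecs[:, :c]                                     # F-step
        V = ((F[:, None, :] - F[None, :, :]) ** 2).sum(-1)  # v_ij = ||f_i - f_j||^2
        for i in range(n):                                  # S-step, row by row
            S[i] = project_simplex(A[i] - lam * V[i] / 4)
        W = (S + S.T) / 2
        L = np.diag(W.sum(1)) - W
        obj.append(((S - A) ** 2).sum() + lam * np.trace(F.T @ L @ F))
    return S, obj
```

Since each step exactly minimizes the objective over its own block, the recorded objective values are non-increasing, matching Property 2.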
4.2 Computational Time
Since SGL is solved in an alternating fashion, we obtain its total computational complexity by analyzing the complexity of each alternating subproblem. Algorithm 1 can be divided into three such subproblems. The problems in Eqs. (8) and (9), which update \(F_1\) and \(F_2\), are solved by eigen-decomposition with complexity \(O((n_1+n_3)^3)\) and \(O((n_2+n_3)^3)\), respectively. Therefore, the total computational complexity of this step is \(O(\max \{(n_1+n_3)^3,(n_2+n_3)^3\})\).
The problems in Eqs. (14) and (16) to update \(S_1\) row by row are the subproblems of (10). The problem in Eq. (14) can be solved by Lagrange Multiplier and Newton’s method, of which the computational complexity is \(O((n_1+n_3)\times n_3)\). The problem (16) can be solved by an efficient iterative algorithm [8], and the computational complexity is \(O((n_1+n_3)\times n_1)\). Therefore, the total computational complexity of this step is \(O((n_1+n_3)^2)\).
The problems in Eqs. (17) and (18) to update \(S_2\) are similar to Eqs. (14) and (16) respectively, and the total computational complexity is \(O((n_2+n_3)^2)\).
As a result, the total computational complexity of SGL is \(O(T\times \max \{(n_1+n_3)^3,(n_2+n_3)^3\})\), where T is the number of iterations. The cubic term is caused by the spectral decompositions. In our algorithm, however, the Laplacian matrices on which we perform spectral decomposition are sparse, and many efficient alternatives for spectral decomposition are available.
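Since the F-step only needs the c smallest eigenpairs of a sparse Laplacian, a shift-invert Lanczos solver sidesteps the dense \(O(n^3)\) decomposition. For example, with SciPy (illustrative; the paper does not prescribe a particular solver):

```python
import numpy as np
from scipy.sparse import diags, random as sparse_random
from scipy.sparse.linalg import eigsh

# Build a sparse nonnegative similarity and its Laplacian.
S = sparse_random(500, 500, density=0.01, random_state=0)
W = ((S + S.T) / 2).tocsr()
L = diags(np.asarray(W.sum(axis=1)).ravel()) - W

# Shift-invert near zero to get the 3 smallest eigenpairs cheaply
# (sigma is slightly negative so the shifted matrix stays nonsingular).
vals, vecs = eigsh(L.tocsc(), k=3, sigma=-1e-3, which='LM')
vals = np.sort(vals)
assert vals[0] > -1e-6          # a Laplacian is positive semidefinite
assert vecs.shape == (500, 3)
```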
5 Experiment
5.1 Data Sets
MSRCv1 comprises 240 images in 8 classes in total. Two pairs of visual features are extracted: 256-dimensional Local Binary Patterns (LBP) and 512-dimensional GIST, denoted MSRCv1GC; and 1302-dimensional CENTRIST and 200-dimensional SIFT, denoted MSRCv1CS. Ionosphere consists of 351 radar returns, collected by a system in Goose Bay, Labrador, that characterize free electrons in the ionosphere. The data fall into two classes: "good" returns show evidence of some type of structure in the ionosphere, while "bad" returns do not. Caltech101-7 contains 441 object images in 7 categories in total. We extract two features from each image: 200-dimensional SIFT and 32-dimensional Gabor texture. Handwritten numerals (HW) contains 2000 image data points for the digit classes 0 to 9, with 200 data points per class. We select 216 profile correlations (FAC) and 76 Fourier coefficients of the character shapes (FOU) for our clustering. WebKB comprises 1051 pages collected from four universities. Each page has 2 views: a 334-dimensional citation view and a 2949-dimensional content view. Statistics of the data sets are summarized in Table 1.
5.2 Comparing Methods
Single-view methods V1CLR and V2CLR: with the partial example ratio set to zero, the CLR [16] algorithm is executed separately on view 1 and view 2, denoted V1CLR and V2CLR, respectively.
Completion methods CentroidSC and PairwiseSC: the partial examples are first completed with the robust rank-k matrix completion method [9]; then the two co-regularization schemes of [10] are applied: centroid-based co-regularization (CentroidSC) and pairwise-based co-regularization (PairwiseSC).
Subspace methods PVC [12] and IMG [20].
Other method: CGC [3] is designed to deal with many-to-many instance relationships and supports incomplete views.
We construct the sparse affinity matrices \(A_1\) and \(A_2\) by Eq. (35) in [16]. Following [12], we randomly select a ratio of examples to be partial to simulate the partial-view setting, i.e., they appear in only one of the two views while the remaining examples are described by both views; we evenly assign the partial examples to the two views to simplify the experiment. Same as [12], we test all methods under a Partial Example Ratio (PER) varying from \(10\%\) to \(90\%\) with an interval of \(10\%\); for each setting, we repeat the random selection 10 times and record the average and standard deviation of the results. For PVC and IMG, the k-means algorithm is performed to obtain the final clustering result, as in the original papers; other clustering methods would also be feasible. We use two standard clustering evaluation metrics to measure multi-view clustering performance: Clustering Accuracy (ACC) and Normalized Mutual Information (NMI).
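For reference, ACC and NMI can be computed as follows. This is one common formulation (ACC via Hungarian label matching, NMI with the \(\sqrt{H_1H_2}\) normalization); the paper does not give exact definitions:

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def clustering_acc(y_true, y_pred):
    """ACC: find the best one-to-one label matching (Hungarian), then
    report the fraction of correctly matched points."""
    k = int(max(y_true.max(), y_pred.max())) + 1
    cont = np.zeros((k, k), dtype=int)
    for t, p in zip(y_true, y_pred):
        cont[t, p] += 1
    row, col = linear_sum_assignment(-cont)   # maximize matched counts
    return cont[row, col].sum() / len(y_true)

def nmi(y1, y2):
    """NMI = MI(y1, y2) / sqrt(H(y1) * H(y2)), from the contingency table."""
    n = len(y1)
    c = np.zeros((int(y1.max()) + 1, int(y2.max()) + 1))
    for a, b in zip(y1, y2):
        c[a, b] += 1
    p = c / n
    px, py = p.sum(1), p.sum(0)
    nz = p > 0
    mi = np.sum(p[nz] * np.log(p[nz] / np.outer(px, py)[nz]))
    h1 = -np.sum(px[px > 0] * np.log(px[px > 0]))
    h2 = -np.sum(py[py > 0] * np.log(py[py > 0]))
    return mi / np.sqrt(h1 * h2)
```

Both metrics are invariant to the arbitrary permutation of cluster labels, which is why a label-matching step is required for ACC.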
5.3 Clustering Result Comparison
Table 2 and Fig. 3 report the ACC and NMI values, respectively, on the various data sets under different PER settings. From these figures and the table, we make the following observations.
In almost all settings, our method outperforms both methods executed on a single complete view (V1CLR and V2CLR), even when the PER reaches \(30\%\), which confirms that our approach validly synthesizes the information from both views and that the proposed constraint between the subgraphs corresponding to the same samples (from different views) is effective. As the PER varies from \(10\%\) to \(90\%\), the proposed method usually performs much better than the other multi-view clustering baselines; in particular, the improvement over the baselines is largest when the PER is below \(40\%\). With more missing examples, the performance of all methods generally drops.
CentroidSC and PairwiseSC usually perform worst on almost all data sets, likely because matrix completion assumes the missing entries are randomly located, whereas in the incomplete multi-view setting the data are missing block-wise. One may wonder why our method performs much better than the other subspace learning methods. A likely reason is that their assumption that the shared data in different views have exactly the same representation in the latent space may damage the inner structure of each view and increase the risk of over-fitting. Besides, our graphs are learned from both the initial graphs and the consistency between views, and the learned graphs contain ideal clustering structures.
One may also be interested in why our method achieves a considerable improvement when the PER is high. In that regime, most subspace learning methods can hardly estimate the common representation \(P_c\) accurately from the few complete shared examples. IMG utilizes a general global structure to remedy this deviation, but the global structure overlooks the specific intrinsic structure of each individual view; our learned structures are more detailed and specific.
5.4 Convergence Study
We experiment on the MSRCv1 data set to show the convergence property. The convergence curves and the corresponding NMI performance under the \(PER=30\%\) and \(PER=70\%\) settings are plotted in Fig. 4. For the \(PER=30\%\) setting, we set \(\{\lambda _1, \lambda _2, \mu \}\) to \(\{50, 50, 2.0\}\); for the \(PER=70\%\) setting, we set \(\{\lambda _1, \lambda _2, \mu \}\) to \(\{50, 50, 9.0\}\). The objective function value at each iteration is drawn in black; it decreases rapidly as the iterations proceed, and, encouragingly, only around 12 rounds are needed to converge. The NMI value at each iteration is drawn in red, from which we can see that 12 rounds are enough to obtain good clustering performance.
5.5 Parameter Study
We study the parameters on four data sets: MSRCv1GC, MSRCv1CS, Caltech101-7 and HW. There are three parameters to explore: \(\lambda _1\), \(\lambda _2\) and \(\mu \). Following [16], we determine both \(\lambda _1\) and \(\lambda _2\) in a heuristic way: in each iteration, we compute the number of zero eigenvalues of \(L_{S_1}\) and \(L_{S_2}\); if it is larger (smaller) than c, we divide (multiply) the corresponding parameter by two. Following [12], we tune \(\mu \) for three different PERs: 30%, 50% and 70%. As in the above experiments, we randomly select a ratio of examples to be partial and repeat the process 10 times to record the average results. The effect of the parameter \(\mu \) is shown in Fig. 5.
From Fig. 5, it is easy to see that on all data sets, our method achieves steadily good performance for NMI with a very large range of \(\mu \) in all settings, which validates the robustness of our method. Our method usually has a relatively good performance when \(\mu \) is in the range of [2.0, 4.0] for the \(PER=30\%\) and \(PER=50\%\) settings, while the best range becomes [7.0, 9.0] for the \(PER=70\%\) setting.
6 Conclusion
In this paper, we propose a method to handle the multi-view clustering problem in the case where all views suffer from missing data. Different from existing approaches, we simultaneously integrate and cluster the incomplete multi-view data. We excavate and maintain the intrinsic structure of each individual view, and establish interaction between different views through the shared data. For each view, a graph with exactly c connected components can be learned, so that the clustering results can be derived from the graphs without any post-clustering. To optimize the proposed objective, we provide a solution that simultaneously handles both the whole and the partial regularization problems. Experimental results on six real-world multi-view data sets, compared against several baselines, validate the effectiveness of our method. In the future, we will study how to reduce the computational cost of our method.
References
Bickel, S., Scheffer, T.: Multi-view clustering. In: Proceedings of the 4th IEEE International Conference on Data Mining (ICDM), pp. 19–26 (2004)
Brand, M.: Incremental singular value decomposition of uncertain data with missing values. In: Computer Vision - ECCV 7th European Conference on Computer Vision, pp. 707–720 (2002)
Cheng, W., Zhang, X., Guo, Z., Wu, Y., Sullivan, P.F., Wang, W.: Flexible and robust co-regularized multi-domain graph clustering. In: Proceedings of the 19th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, pp. 320–328 (2013)
Chung, F.R.: Spectral Graph Theory, vol. 92. American Mathematical Society, New York (1997)
Fan, K.: On a theorem of Weyl concerning eigenvalues of linear transformations i. Proc. Natl. Acad. Sci. 35(11), 652–655 (1949)
Greene, D., Cunningham, P.: A matrix factorization approach for integrating multiple data views. In: Proceedings of Machine Learning and Knowledge Discovery in Databases, European Conference, ECML PKDD 2009, pp. 423–438 (2009)
Guo, Y.: Convex subspace representation learning from multi-view data. In: Proceedings of the 27th AAAI Conference on Artificial Intelligence (2013)
Huang, J., Nie, F., Huang, H.: A new simplex sparse learning model to measure data similarity for clustering. In: Proceedings of the Twenty-Fourth International Joint Conference on Artificial Intelligence IJCAI, pp. 3569–3575 (2015)
Huang, J., Nie, F., Huang, H., Lei, Y., Ding, C.H.Q.: Social trust prediction using rank-k matrix recovery. In: Proceedings of the 23rd International Joint Conference on Artificial Intelligence IJCAI, pp. 2647–2653 (2013)
Kumar, A., Rai, P., Daume, H.: Co-regularized multi-view spectral clustering. In: Advances in Neural Information Processing Systems 24: 25th Annual Conference on Neural Information Processing Systems, pp. 1413–1421 (2011)
Lee, D.D., Seung, H.S.: Learning the parts of objects by non-negative matrix factorization. Nature 401(6755), 788–791 (1999)
Li, S., Jiang, Y., Zhou, Z.: Partial multi-view clustering. In: Proceedings of the 28th AAAI Conference on Artificial Intelligence, pp. 1968–1974 (2014)
Lin, Z., Chen, M., Ma, Y.: The augmented lagrange multiplier method for exact recovery of corrupted low-rank matrices. arXiv preprint arXiv:1009.5055 (2010)
Liu, J., Wang, C., Gao, J., Han, J.: Multi-view clustering via joint nonnegative matrix factorization. In: Proceedings of the 2013 SIAM International Conference on Data Mining, pp. 252–260. SIAM (2013)
Mohar, B., Alavi, Y., Chartrand, G., Oellermann, O.: The Laplacian spectrum of graphs. In: Graph Theory, Combinatorics, and Applications, vol. 2, pp. 871–898 (1991)
Nie, F., Wang, X., Jordan, M.I., Huang, H.: The constrained laplacian rank algorithm for graph-based clustering. In: Proceedings of the 30th AAAI Conference on Artificial Intelligence, pp. 1969–1976 (2016)
Shao, W., He, L., Yu, P.S.: Multiple incomplete views clustering via weighted nonnegative matrix factorization with \(L_{2,1}\) regularization. In: Appice, A., Rodrigues, P.P., Santos Costa, V., Soares, C., Gama, J., Jorge, A. (eds.) ECML PKDD 2015. LNCS (LNAI), vol. 9284, pp. 318–334. Springer, Cham (2015). https://doi.org/10.1007/978-3-319-23528-8_20
Shao, W., Shi, X., Philip, S.Y.: Clustering on multiple incomplete datasets via collective kernel learning. In: 2013 IEEE 13th International Conference on Data Mining (ICDM), pp. 1181–1186. IEEE (2013)
Wang, Q., Si, L., Shen, B.: Learning to hash on partial multi-modal data. In: Proceedings of the 24th International Joint Conference on Artificial Intelligence, IJCAI, pp. 3904–3910 (2015)
Zhao, H., Liu, H., Fu, Y.: Incomplete multi-modal visual data grouping. In: Proceedings of the 25th International Joint Conference on Artificial Intelligence, IJCAI, pp. 2392–2398 (2016)
Wu, J., Zhuge, W., Tao, H., Hou, C., Zhang, Z. (2018). Incomplete Multi-view Clustering via Structured Graph Learning. In: Geng, X., Kang, BH. (eds) PRICAI 2018: Trends in Artificial Intelligence. PRICAI 2018. Lecture Notes in Computer Science(), vol 11012. Springer, Cham. https://doi.org/10.1007/978-3-319-97304-3_8