
1 Motivation

Understanding the nature of a generative process for observed data typically involves uncovering explanatory factors of variation responsible for the observations. But the relationship between these factors and our observations usually remains unclear. A common assumption is that the relevant factors can be expressed by a low-dimensional latent representation Z [25]. Therefore, popular machine learning methods involve learning appropriate latent representations to disentangle factors of variation. Learning disentangled representations is often considered in an unsupervised setting which does not rely on prior knowledge about the data such as labels [7, 8, 13, 17, 24]. However, it has been shown that an inductive bias on the dataset and learning approach is necessary to obtain disentanglement [25]. Inductive biases allow us to express assumptions about the generative process and to prioritise different solutions not only in terms of disentanglement [5, 13, 21, 35, 44], but also in terms of constrained latent space structures [15, 16], preservation of causal relationships [40], or interpretability [45].

We consider a supervised setting where semantic knowledge about the input data allows structuring the latent representation into disjoint subspaces \(Z_0\) and \(Z_1\) of the latent space Z by enforcing conditional invariance. In such supervised settings, disentanglement can be viewed as an extraction of level sets or symmetries inherent to our data X which leave a specified property Y invariant. An important application in this direction is the generation of diverse molecular structures with similar chemical properties [44]. The goal is to disentangle the factors of variation relevant for the property. Typically, level sets \(L_y\) are defined implicitly through \(L_y(f)=\{(x_1,...,x_d) \vert f(x_1,...,x_d)=y\}\) for a property y, which describes the level curve or surface w.r.t. inputs \((x_1,...,x_d)\in \mathbb {R}^d\). The aim of this paper is to identify a sparse parameterisation of level sets which encodes conditional invariances and thus selects a correct model. Several techniques have been developed to steer model selection by sparsifying the number of features, e.g. [38, 39], or compressing features into a low-dimensional feature space, e.g. [4, 33, 42]. These methods improve generalisation by focusing on only a subset of relevant features and using these to explain a phenomenon. Existing methods for including such prior knowledge in the model usually do not include dimensionality reduction techniques and perform a hand-tuned selection [15, 21, 44].

In this paper, we introduce a novel approach to cycle consistency, relying on property side information Y as our semantic knowledge, to provide conditional invariance in the latent space. By this we mean that conditioning on part of the latent space, i.e. \(Z_0\), allows property-invariant sampling in the latent subspace \(Z_1\). By ensuring that property predictions remain consistent when generated samples are fed back through the network, we achieve more disentangled and sparser representations. Our work builds on [42], where a general sparsity constraint on latent representations is provided, and on [21, 44], where conditional invariance is obtained through adversarial training. We show that our approach addresses some drawbacks of previous approaches, allows us to identify more meaningful factors for learning better models, and achieves improved invariance performance. Our contributions may thus be summarised as follows:

  • We propose a novel approach for supervised disentanglement where conditional invariance is enforced by a novel cycle consistency on property side information. This facilitates the guided exploration of the latent space and improves sampling with a fixed property.

  • Our model inherently favours sparse solutions, leading to more interpretable latent dimensions and facilitating built-in model selection.

  • We demonstrate that our method improves on state-of-the-art conditional invariance performance compared to existing approaches on both synthetic and molecular benchmark datasets.

2 Related Work

2.1 Deep Generative Latent Variable Models and Disentanglement

Because of its flexibility, the variational autoencoder (VAE) [20, 34] is a popular deep generative latent variable model in many areas such as fairness [27], causality [26], semi-supervised learning [19], and design and discovery of novel molecular structures [11, 22, 28]. The VAE is closely related to the Information Bottleneck (IB) principle [4, 39]. Various approaches exploit this relation, e.g. the deep variational information bottleneck (DVIB) [2, 4]. Further extensions were proposed in the context of causality [9, 29, 30] or archetypal analysis [15, 16].

The \(\beta \)-VAE [13] extends the standard VAE approach and allows unsupervised disentanglement. In unsupervised settings, there exists a great variety of approaches based on VAEs and generative adversarial networks (GANs) to achieve disentanglement, such as FactorVAE [17], \(\beta \)-TCVAE [7] or InfoGAN [8, 24]. Partitioning the latent space into subspaces is inspired by the multi-level VAE [5], where the latent space is decomposed into a local feature space that is only relevant for a subgroup and a global feature space. In supervised settings, several approaches such as [10, 21, 23, 44] achieve disentanglement by applying adversarial information elimination to select a model with partitioned feature and property space. In such a setting, different from unsupervised disentanglement, our goal is supervised disentanglement with respect to a particular target property.

Another important line of research employs the idea of cycle consistency for learning disentangled representations. The works most closely related to this study are [14, 43, 44]. There, the authors employ a cycle-consistent loss on the latent representations to learn symmetries and disentangled representations in weakly supervised settings. Moreover, in [44], the authors use adversarial training and mutual information estimation to learn symmetry transformations instead of explicitly modelling them. In contrast, our work replaces adversarial training with cycle consistency.

2.2 Model Selection via Sparsity

Several works perform model selection by introducing sparsity constraints which penalise the model complexity. A common sparsity constraint is the Least Absolute Shrinkage and Selection Operator (LASSO) [38]. Extensions of the LASSO propose a log-penalty to obtain even sparser solutions in the compressed IB setting [33] and generalise it further to deep generative models [42]. Furthermore, the LASSO has been extended to the group LASSO, where combinations of covariates are set to zero, the sparse group LASSO [36], and the Bayesian group LASSO [32]. Perhaps most closely related to our work is the oi-VAE [3], which incorporates a group LASSO prior in deep latent variable models. These methods employ a general sparsity constraint to achieve a sparse representation. Our model extends these ideas and imposes a semantic sparsity constraint in the form of cycle consistency that performs regularisation based on prior knowledge.

3 Preliminaries

3.1 Deep Variational Information Bottleneck

We focus on the DVIB [4] which is a method for information compression based on the IB principle [39]. The objective is to compress a random variable X into a random variable Z while being able to predict a third random variable Y. The DVIB is closely related to the VAE [20, 34]. The optimal compression is achieved by solving the parametric problem

$$\begin{aligned} \min _{\phi ,\theta } I_{\phi }(Z;X) - \lambda I_{\phi ,\theta }(Z;Y), \end{aligned}$$
(1)

where I is the mutual information between two random variables. Hence, the DVIB objective balances maximisation of \(I_{\phi ,\theta }(Z;Y)\), i.e. Z being informative about Y, and minimisation of \(I_{\phi }(Z;X)\), i.e. compression of X into Z. We assume a parametric form of the conditionals \(p_\phi (Z|X)\) and \( p_\theta (Y|Z)\) with \(\phi \) and \(\theta \) representing the parameters of the encoder and decoder network, respectively. Parameter \(\lambda \) controls the degree of compression and is closely related to \(\beta \) in the \(\beta \)-VAE [13]. The relationship to the VAE becomes more apparent with the definition of the mutual information terms:

$$\begin{aligned} I_\phi (Z;X)&= \mathbb {E}_{p(X)} D_{KL}(p_\phi (Z|X)\Vert p(Z)) ,\end{aligned}$$
(2)
$$\begin{aligned} I_{\phi ,\theta }(Z;Y)&\ge \mathbb {E}_{p(X,Y)} \mathbb {E}_{p_\phi (Z|X)}\log p_\theta (Y|Z) + h(Y), \end{aligned}$$
(3)

with \(D_{KL}\) being the Kullback-Leibler divergence, and h(Y) the entropy. Note that we write Eq. (3) as an inequality, which uses the insight of [41] that the RHS is in fact a lower bound to \(I_{\phi ,\theta }(Z;Y)\); see [41] for details.
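To make the objective concrete, the following is a minimal sketch of Eqs. (1)–(3), assuming a Gaussian encoder \(p_\phi (Z|X)\) with mean mu and log-variance logvar, a standard-normal prior p(Z), and a fixed-variance Gaussian decoder for Y so that the log-likelihood in Eq. (3) reduces to a squared error up to constants. The function and variable names are illustrative, not the authors' implementation.

```python
# Minimal DVIB objective sketch (Eqs. 1-3), under the stated Gaussian assumptions.
import torch

def dvib_loss(mu, logvar, y_pred, y_true, lam):
    # I_phi(Z;X) term of Eq. (2): KL( N(mu, diag(sigma^2)) || N(0, I) ), batch-averaged
    kl = 0.5 * torch.sum(mu.pow(2) + logvar.exp() - logvar - 1.0, dim=1).mean()
    # Lower bound on I(Z;Y) from Eq. (3), up to the constant h(Y):
    # E[log p_theta(Y|Z)] ~ -||y - y_pred||^2 for a fixed-variance Gaussian decoder
    log_lik = -((y_pred - y_true) ** 2).sum(dim=1).mean()
    return kl - lam * log_lik   # Eq. (1): minimise compression minus lambda * prediction
```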

3.2 Cycle Consistency

We use the notion of cycle consistency similar to [14, 46]. The CycleGAN [46] performs unsupervised image-to-image translation, where a data point is mapped to its initial position after being transferred to a different space. For instance, suppose that domain X consists of summer landscapes, while domain Y consists of winter landscapes (see Appendix Fig. A1). A function f(x) may be used to transform a summer landscape x to a corresponding winter landscape y. Similarly, function g(y) maps y back to the domain X. The goal of cycle consistency is to learn a mapping to \(\hat{x}\), which is close to the initial x. In most cases, there is a discrepancy between x and \(\hat{x}\) referred to as the cycle consistency loss. In order to obtain an almost invertible mapping, the loss \(\Vert g(f(x)) - x \Vert _1\) is minimised.
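As a minimal illustration of this idea, the sketch below computes the CycleGAN-style cycle consistency loss for two hypothetical mapping networks f (X to Y) and g (Y to X); the function names and the batch-averaging are assumptions.

```python
# Cycle consistency sketch from Sect. 3.2: penalise the L1 distance between x and g(f(x)).
import torch

def cycle_consistency_loss(f, g, x):
    x_hat = g(f(x))                     # forward cycle X -> Y -> X
    return (x_hat - x).abs().mean()     # ||g(f(x)) - x||_1, averaged over the batch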

Fig. 1.

Model illustration. (a) Firstly, we learn a sparse representation Z from our input data X which we separate into a property space \(Z_0\) and an invariant space \(Z_1\). Given this representation, we try to predict the property \(\hat{Y}\) and reconstruct our input \(\hat{X}\). Grey arrows indicate that \(\hat{Y}=\text {dec}_Y(Z_0)\) instead of \(Z_0\) is used for decoding \(\hat{X}\) (see Sect. 4.3). (b) Secondly, we sample new data in two ways: (i) uniformly in Z to get new data points \(\tilde{X}\) and \(\tilde{Y}\) (orange data), (ii) uniformly in \(Z_1\) with fixed \(Z_0\) to get \(\tilde{X}^\star \) at fixed \(\hat{Y}\) (cyan data). We concatenate the respective decoder outputs. (c) Lastly, we feed the concatenated input batch \(X^\text {c}\) into our model and calculate the cycle consistency loss between the properties. (Color figure online)

4 Model

Our model is based on the DVIB to learn a compact latent representation. The input X and the output Y may be complex objects and take continuous values, such as molecules with their respective molecular properties. Unlike the standard DVIB, we not only want to predict Y from an input X but also want to generate new \(\tilde{X}\) by sampling from our latent representation. As a consequence, we add a second decoder that reconstructs X from Z (similar to the additional decoder for Y in the VAE setting of [11]), leading to the adjusted parametric objective

$$\begin{aligned} \min _{\phi ,\theta ,\tau } I_{\phi }(Z;X) - \lambda \big ( I_{\phi ,\theta }(Z;Y) + I_{\phi ,\tau }(Z;X)\big ), \end{aligned}$$
(4)

where \(\phi \) are the encoder parameters, and \(\theta \) and \(\tau \) describe network parameters for decoding Y and X, respectively.
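A minimal architecture sketch for the objective in Eq. (4) is given below: a single encoder produces the latent code Z, and two decoders reconstruct X and predict Y from it. Layer sizes, module names, and the reparameterised Gaussian encoder are illustrative assumptions; the partitioning of Z and the fixed noise level used in the actual model are introduced in Sects. 4.1–4.3.

```python
# Sketch of an encoder with two decoders, matching the structure of Eq. (4).
import torch
import torch.nn as nn

class TwoDecoderDVIB(nn.Module):
    def __init__(self, d_x, d_y, d_z):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(d_x, 64), nn.ReLU(),
                                     nn.Linear(64, 2 * d_z))   # mean and log-variance
        self.dec_x = nn.Sequential(nn.Linear(d_z, 64), nn.ReLU(), nn.Linear(64, d_x))
        self.dec_y = nn.Sequential(nn.Linear(d_z, 64), nn.ReLU(), nn.Linear(64, d_y))

    def forward(self, x):
        mu, logvar = self.encoder(x).chunk(2, dim=1)
        z = mu + torch.randn_like(mu) * (0.5 * logvar).exp()   # reparameterisation
        return self.dec_x(z), self.dec_y(z), mu, logvar        # X reconstruction, Y prediction
```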

4.1 Learning a Compact Representation

Formulating our model as a DVIB allows leveraging properties of the mutual information with respect to learning compact latent representations. To see this, first assume that X and Y are jointly Gaussian-distributed, which leads to the Gaussian Information Bottleneck [6], where the solution Z can be found analytically and is provably Gaussian. In particular, for \(X \sim \mathcal {N}(0, \varSigma _{X})\), the optimal Z is a noisy projection of X: \(Z = AX + \xi \), where \(\xi \sim \mathcal {N}(0,I)\). The mutual information between X and Z is then equal to

$$\begin{aligned} I(X;Z) = \tfrac{1}{2} \log |A\varSigma _{X}A^\top + I|. \end{aligned}$$
(5)

If we now assume A to be diagonal, the model becomes sparse [33]. This is because a full-rank projection \(A X^\prime \) of \(X^\prime \) does not change the mutual information since \(I(X;X^\prime )=I(X;AX^\prime )\). A reduction in mutual information can only be achieved by a rank-deficient matrix A. In general, the conditionals Z|X and Y|Z in Eq. (1) may be parameterised by neural networks with X and Z as input. The diagonality constraint on A does not cause any loss of generality of the DVIB solution as long as the neural network encoder \(f_\phi \) makes it possible to diagonalise \(Af_\phi (X)f_\phi (X)^\top A^\top \) (see [42] for more details). In the following, we consider A to be diagonal and define the sparse representation as the dimensions of the latent space Z selected by the non-zero entries of A. Recalling Eq. (5), this allows us to approximate the mutual information for the encoder in Eq. (2) in a sparse manner

$$\begin{aligned} I_\phi (X;Z) = \tfrac{1}{2}\log |\text {diag} (f_\phi (X)f_\phi (X)^\top ) + \mathbf {1}|, \end{aligned}$$
(6)

where \(\mathbf {1}\) is the all-one vector and the diagonal elements of A are subsumed in the encoder \(f_\phi \).
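Under the diagonal-A assumption, the determinant in Eq. (6) is that of a diagonal matrix and therefore reduces to a product over latent dimensions, i.e. a sum of \(\tfrac{1}{2}\log (1+f_j(x)^2)\) terms. A minimal sketch, assuming f_x holds the encoder means for a batch:

```python
# Sparse compression term of Eq. (6): diagonal determinant -> sum of per-dimension logs.
import torch

def sparse_mi_term(f_x):                       # f_x: encoder means, shape [N, d_z]
    per_dim = 0.5 * torch.log1p(f_x.pow(2))    # 1/2 * log(1 + f_j(x)^2) per latent dimension
    return per_dim.sum(dim=1).mean()           # sum over dimensions, average over the batch
```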

4.2 Conditional Invariance and Informed Sparsity

A general sparsity constraint is not sufficient to ensure that latent dimensions indeed represent independent factors. In a supervised setting, our target Y conveys semantic knowledge about the input X, e.g. a chemical property of a molecule. To incorporate semantic knowledge into our model, we require a mechanism that partitions the representation such that it encodes the semantic meaning not only sparsely but preferably independently of other information concerning the input.

To this end, the central element of our approach is cycle consistency with respect to the target property Y, which is illustrated in steps (b) and (c) in Fig. 1. The idea is that a reconstructed \(\hat{X}\) or newly sampled \(\tilde{X}\) with associated predictions \(\hat{Y}\) and \(\tilde{Y}\) is expected to provide matching predictions \(\hat{Y}'\) and \(\tilde{Y}'\) when \(\hat{X}\) and \(\tilde{X}\) are used as inputs to the network. This means that if we perform another cycle through the network with sampled or reconstructed inputs, the property prediction should stay consistent. The partitioning of the latent space Z into the property subspace \(Z_0\) and the invariant subspace \(Z_1\) is crucial. The property Y is predicted from \(Z_0\), while the input is reconstructed from the full latent space Z. Ensuring cycle consistency with respect to the property allows putting property-relevant information into the property subspace \(Z_0\). Furthermore, the latent space is regularised by drawing samples which adhere to cycle consistency and provide additional sparsity. If information about Y is encoded in \(Z_1\), this will lead to a higher cycle consistency loss. In this way, cycle consistency enforces invariance in subspace \(Z_1\). By fixing coordinates in \(Z_0\), and thus fixing a property, sampling in \(Z_1\) results in newly generated \(\tilde{X}\) with the same property \(\tilde{Y}\). More formally, fixing \(Z_0\) renders the random variables X and Y conditionally independent, i.e. \(X \perp \!\!\!\perp Y \vert Z_0\) (see Appendix Fig. A2). We ensure conditional invariance with a particular sampling: we fix the \(Z_0\) coordinates and sample in \(Z_1\) to obtain generated \(\tilde{X}^\star \), all with a fixed property \(\hat{Y}\). Using these inputs allows us to obtain a new prediction \(\tilde{Y}^\star \) which should be close to the fixed target property \(\hat{Y}\). We choose the L\(_2\) norm for convenience and define the full cycle consistency loss by

$$\begin{aligned} \mathcal {L}_\text {cycle} = \Vert \hat{Y} - \hat{Y}' \Vert _2 + \Vert \tilde{Y} - \tilde{Y}'\Vert _2 + \Vert \hat{Y} - \tilde{Y}^\star \Vert _2 . \end{aligned}$$
(7)
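A direct transcription of Eq. (7), assuming each argument is a batch of property predictions and that the three L\(_2\) terms are averaged over the batch (the averaging is an assumption):

```python
# Cycle consistency loss of Eq. (7): three L2 terms between first- and second-pass predictions.
import torch

def cycle_loss(y_hat, y_hat_cycle, y_tilde, y_tilde_cycle, y_star_cycle):
    return ((y_hat - y_hat_cycle).norm(dim=1).mean()       # reconstructed inputs
            + (y_tilde - y_tilde_cycle).norm(dim=1).mean() # uniformly sampled inputs
            + (y_hat - y_star_cycle).norm(dim=1).mean())   # fixed-property samples
```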

4.3 Proposed Framework

The resulting model in Eq. (8) combines sparse DVIBs with a partitioned latent space and a novel approach to cycle consistency, which drives conditional invariance and informed sparsity in the latent structure. This allows the latent dimensions in \(Z_0\) relevant for the prediction of Y to disentangle from the latent dimensions in \(Z_1\) which encode the remaining input information of X.

$$\begin{aligned} \mathcal {L} = I_{\phi }(X;Z) - \lambda \Big (I_{\phi ,\tau }(Z_0,Z_1;X) + I_{\phi ,\theta }(Z_0;Y) - \beta \big ( \Vert \hat{Y} - \hat{Y}' \Vert _2 + \Vert \tilde{Y} - \tilde{Y}'\Vert _2 + \Vert \hat{Y} - \tilde{Y}^\star \Vert _2 \big )\Big ) \end{aligned}$$
(8)

The proposed model performs model selection as it inherently favours sparser latent representations. This in turn facilitates easier interpretation of latent factors because of the built-in conditional independence between the property space \(Z_0\) and the invariant space \(Z_1\). These adjustments address some of the issues of the STIB [44], which relies on adversarial training, mutual information estimation (which can be difficult in high dimensions [37]) and a bijective mapping, all of which can make training challenging. In contrast to the work of [14], we impose a novel cycle consistency loss on the predicted outputs Y instead of the latent representation Z. A reason to consider Y rather than Z is that varying latent dimensionality leads to severe problems in the optimisation process as it requires an adaptive rescaling of the different loss weights. To overcome this drawback, we close the full cycle and define the loss on the outputs. Appendix Sec. A.3 and Algorithm A.1 provide more information on the implementation. As an implementation detail, we choose to concatenate the decoded \(Z_0\) code with \(Z_1\) in order to decode \(\hat{X}\), i.e. \(\hat{X}=\text {dec}_X (Z_1, \hat{Y}=\text {dec}_Y(Z_0))\). This is an additional measure to ensure that \(Z_0\) contains information relevant for the property prediction Y and to prevent superfluous information about the input X from remaining in the property space \(Z_0\).
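The following sketch ties the pieces together into one training step for Eq. (8), reusing the sparse_mi_term and cycle_loss helpers sketched above. It assumes Gaussian X- and Y-decoders (so the mutual information lower bounds reduce to squared errors up to constants), unit sampling noise, uniform sampling ranges in \([-1,1]\), and an encoder returning latent means; all of these details are illustrative rather than the authors' exact implementation (see Appendix Sec. A.3 and Algorithm A.1).

```python
# One training step for Eq. (8), under the stated assumptions.
import torch

def training_step(x, y, enc, dec_x, dec_y, d_z0, lam, beta):
    # First pass: encode, add unit sampling noise, partition, decode (Fig. 1a).
    f_x = enc(x)                                   # encoder means, shape [N, d_z]
    z = f_x + torch.randn_like(f_x)                # sigma_noise fixed to 1 (Sect. 4.1)
    z0, z1 = z[:, :d_z0], z[:, d_z0:]
    y_hat = dec_y(z0)                              # property predicted from Z_0 only
    x_hat = dec_x(torch.cat([z1, y_hat], dim=1))   # X decoded from (Z_1, dec_Y(Z_0))

    # Sampling (Fig. 1b): (i) uniformly in Z, (ii) in Z_1 with Z_0 (property) held fixed.
    z_new = 2.0 * torch.rand_like(z) - 1.0
    y_tilde = dec_y(z_new[:, :d_z0])
    x_tilde = dec_x(torch.cat([z_new[:, d_z0:], y_tilde], dim=1))
    z1_new = 2.0 * torch.rand_like(z1) - 1.0
    x_star = dec_x(torch.cat([z1_new, y_hat], dim=1))

    # Second cycle (Fig. 1c): feed generated inputs back and predict the property again.
    y_hat_c = dec_y(enc(x_hat)[:, :d_z0])
    y_tilde_c = dec_y(enc(x_tilde)[:, :d_z0])
    y_star_c = dec_y(enc(x_star)[:, :d_z0])

    # Assemble Eq. (8); squared errors stand in for the MI lower bounds (up to constants).
    compression = sparse_mi_term(f_x)                        # Eq. (6)
    recon_x = -((x_hat - x) ** 2).sum(dim=1).mean()          # stands in for I(Z_0, Z_1; X)
    pred_y = -((y_hat - y) ** 2).sum(dim=1).mean()           # stands in for I(Z_0; Y)
    cyc = cycle_loss(y_hat, y_hat_c, y_tilde, y_tilde_c, y_star_c)   # Eq. (7)
    return compression - lam * (recon_x + pred_y - beta * cyc)
```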

5 Experimental Evaluation

We evaluate the effectiveness of our proposed method w.r.t. (i) selection of a sparse representation with meaningful factors of variation (i.e. model selection) and (ii) enforcing conditional independence in the latent space between these factors. To this end, we conduct experiments on a synthetic dataset with knowledge about appropriate parameterisations to highlight the differences from existing models. Additionally, we evaluate our model on a real-world application with a focus on conditional invariance and generation of novel samples. To assess the performance of our model, we compare our approach to two state-of-the-art baselines: (i) the \(\beta \)-VAE [13], which is a typical baseline model in disentanglement studies, and (ii) the symmetry-transformation information bottleneck (STIB) [44], which ensures conditional invariance through adversarial training and is the direct competitor to our model. We adapt the \(\beta \)-VAE by adding a decoder for property Y (similar to [11]) which takes only subspace \(Z_0\) as input. The latent space of the adapted \(\beta \)-VAE is split into two subspaces as in the STIB and our model, but has no explicit mechanisms to enforce invariance. This setup can be viewed as an ablation study in which the \(\beta \)-VAE is the base model of our approach without cycle consistency and sparsity constraints. The STIB provides an alternative approach for the same goal but with a different mechanism.

The objective of the supervised disentanglement approach is to ensure disentanglement of a fixed property with respect to variations in the invariant space \(Z_1\). This is a slightly different setting from standard unsupervised disentanglement, and therefore standard disentanglement metrics might be less insightful. Instead, in order to test the property invariance, we first encode the inputs of the test set and fix the coordinates in the property subspace \(Z_0\), which provides the prediction \(\hat{Y}\). Then we sample uniformly at random in \(Z_1\) (plus/minus one standard deviation), decode the generated \(\tilde{X}\) and perform a cycle through the network to obtain \(\tilde{Y}\). This provides the predicted property for the generated \(\tilde{X}\). If conditional invariance between X and Y at a fixed \(Z_0\) holds, the mean absolute error (MAE) between \(\hat{Y}\) and \(\tilde{Y}\) should be close to zero. To ensure a fair comparison, all models are trained to attain similar MAEs for reconstructing X and, in particular, predicting Y.
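A sketch of this invariance evaluation is shown below, under the assumptions that sampling is performed around the encoded \(Z_1\) coordinates within one per-dimension standard deviation and that the encoder returns latent means; the exact sampling scheme and ranges are assumptions, and all names are illustrative.

```python
# Property-invariance evaluation: fix Z_0 (hence Y_hat), resample Z_1, cycle, compare.
import torch

@torch.no_grad()
def invariance_mae(x_test, enc, dec_x, dec_y, d_z0, n_samples=400):
    z = enc(x_test)                                    # encoder means, shape [N, d_z]
    z0, z1 = z[:, :d_z0], z[:, d_z0:]
    y_fixed = dec_y(z0)                                # fixed property prediction Y_hat
    errors = []
    for _ in range(n_samples):
        z1_new = z1 + (2 * torch.rand_like(z1) - 1) * z1.std(dim=0)  # +/- one std in Z_1
        x_gen = dec_x(torch.cat([z1_new, y_fixed], dim=1))
        y_new = dec_y(enc(x_gen)[:, :d_z0])            # cycle through the network -> Y_tilde
        errors.append((y_new - y_fixed).abs().mean())  # MAE between Y_hat and Y_tilde
    return torch.stack(errors).mean()
```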

5.1 Synthetic Dataset

In the first experiments, we focus on learning level sets of ellipses and ellipsoids mapped into five dimensions. We consider these experiments because they allow a clear interpretation and visualisation of fixing a property, i.e. choosing the ellipse curve or ellipsoid surface, and known low-dimensional parameterisations are readily available. To this end, we sample data points \(X_\text {original}\) uniformly at random from \(\mathcal {U}([-1,1]^{d_X})\) and compute the corresponding one-dimensional properties \(Y_\text {original}\) as the values of the ellipse curves (\(d_X=2\)) and ellipsoid surfaces (\(d_X=3\)) rotated by \(45^\circ \) in the \(X_1X_2\)-plane. In addition, we add Gaussian noise to the property \(Y_\text {original}\). In a real-world scenario, we typically do not have access to the underlying generating process providing \(X_\text {original}\) and property \(Y_\text {original}\) but only a transformed view of these quantities. To reflect this, we map the input \(X_\text {original}\) into a five-dimensional space (\(d'_X=5\)), i.e. \(X_\text {original}\in [-1,1]^{N\times d_X} \rightarrow X\in \mathbb {R}^{N\times 5}\), and the property \(Y_\text {original}\) into a three-dimensional space (\(d'_Y=3\)), i.e. \(Y_\text {original}\in \mathbb {R}_+^{N\times 1} \rightarrow Y\in \mathbb {R}^{N\times 3}\), with N data points and dimensions \(d_X=\{2,3\}\). See Appendix Sec. A.4 for more details and Fig. A3 for an illustration of the dataset.
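For the ellipse case (\(d_X=2\)), a minimal data-generation sketch is shown below. The axis lengths, noise scale, and the linear maps lifting \(X_\text {original}\) and \(Y_\text {original}\) to five and three dimensions are stand-in assumptions; the actual setup is detailed in Appendix Sec. A.4.

```python
# Sketch of the 2-d ellipse dataset: rotated ellipse level values as the property,
# with stand-in random linear maps lifting inputs to 5-d and properties to 3-d.
import numpy as np

def make_ellipse_data(n=10_000, a=1.0, b=0.5, noise=0.01, seed=0):
    rng = np.random.default_rng(seed)
    x_orig = rng.uniform(-1.0, 1.0, size=(n, 2))                   # X_original ~ U([-1,1]^2)
    c, s = np.cos(np.pi / 4), np.sin(np.pi / 4)
    x_rot = x_orig @ np.array([[c, -s], [s, c]]).T                 # 45-degree rotation
    y_orig = (x_rot[:, :1] / a) ** 2 + (x_rot[:, 1:] / b) ** 2     # ellipse level value
    y_orig += noise * rng.standard_normal(y_orig.shape)            # Gaussian property noise
    x = x_orig @ rng.standard_normal((2, 5))                       # lift input to 5 dimensions
    y = y_orig @ rng.standard_normal((1, 3))                       # lift property to 3 dimensions
    return x, y
```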

Level sets are usually defined implicitly (see Appendix Eq. (A.9)). Common parameterisations consider polar coordinates \((x,y) = (r\cos \varphi , r\sin \varphi )\) for the ellipse and spherical coordinates \((x,y,z)=(r\cos \varphi \sin \theta ,r\sin \varphi \sin \theta ,r\cos \theta )\) for the ellipsoid, with radius \(r\in [0,\infty )\), (azimuth) angle \(\varphi \in [0,2\pi )\) in the \(X_1X_2\)-plane, and polar angle \(\theta \in [0,\pi ]\) measured from the \(X_3\) axis. The goal of our experiment is to identify a low-dimensional parameterisation which captures the underlying radial and angular components, i.e. identify latent dimensions which correspond to parameters \((r,\varphi )\) and \((r,\varphi ,\theta )\).

Details on the architecture and training can be found in Appendix Sec. A.4. We use fully-connected layers for our encoder and decoder networks. Note that in our model, the noise level is fixed at \(\sigma _\text {noise}=1\) w.l.o.g. (see Sect. 4.1). We choose an 8-dim. latent space, with 3 dimensions reserved for property subspace \(Z_0\) and 5 dimensions for invariant subspace \(Z_1\). We consider a generous latent space with \(d_{Z_1}=d'_X=5\) and \(d_{Z_0}=d'_Y=3\) to evaluate the sparsity and model selection.

Fig. 2.

Results for ellipse and ellipsoid in original input space (\(d_X=\{2,3\}\)). (a, c) Illustration of standard deviation in the different latent dimensions, where property subspace \(Z_0\) spans dimensions 1–3 and invariant subspace \(Z_1\) spans dimensions 4–8. Grey bars indicate the sampling noise \(\sigma _\text {noise}\) and orange bars the sample standard deviation \(\sigma _\text {signal}\) in the respective dimension. We consider a latent dimension to be selected if the signal exceeds the noise, i.e. orange bars are visible. Only our model selects the expected numbers of parameters. (b, d) Illustration of latent traversal in our model in latent dimensions 4 to 8 in the original input space for fixed values in the property space dimension 1 (different colours). (b) The selected dimension 8 represents the angular component \(\varphi \) and reconstructs the full ellipse curves. (d) The selected dimension 6 represents the polar angle \(\theta \), while dimension 8 can be related to the azimuth angle \(\varphi \). (b, d) The last plot (red borders) samples in all selected dimensions, which reconstructs the full ellipse and ellipsoid, respectively. We intentionally did not sample the ellipsoid surfaces completely to allow seeing surfaces underneath. (Color figure online)

Results: All models attain similar MAEs for X reconstruction and Y prediction but differ in the property invariance as summarised in Table 1. Our model learns more invariant representations, improving property invariance by several factors in both experiments. In Fig. 2(a), signal vs. noise for the different models is presented. The standard deviation \(\sigma _\text {signal}\) is calculated as the sample standard deviation of the learned means in the respective latent dimension. The sampling noise \(\sigma _\text {noise}\) is optimised as a free parameter during training. We consider a latent dimension to be informative or selected if the signal exceeds the noise. The sparsest solution is obtained in our model with one latent dimension selected in the property subspace \(Z_0\) and one in the invariant subspace \(Z_1\). In Fig. 2(b), we examine the obtained solution more closely in the original data space by mapping back from \(d'_X=5\) to \(d_X=2\) dimensions. We consider ten equidistant values in the selected \(Z_0\) dim. 1 and sample points in the selected \(Z_1\) dim. 8. The different colours represent fixed values in \(Z_0\), with latent traversal in \(Z_1\) dim. 8 reconstructing the full ellipse. This means that the selected latent dim. 8 contains all relevant information at a given coordinate in \(Z_0\), while dim. 4 to 7 do not contain any relevant information. We can relate the selected dim. 1 in \(Z_0\) to the radius r and dim. 8 in \(Z_1\) to the angle \(\varphi \). For the ellipsoid (\(d_X=3\)) we obtain qualitatively the same results as for the ellipse. Again, only our model selects the correct number of latent factors with one in \(Z_0\) and two in \(Z_1\) (see Fig. 2(c)). The latent traversal results in Fig. 2(d) are more intricate to interpret. For latent dim. 6, we obtain a representation which can be interpreted as encoding the polar angle \(\theta \). Traversal in latent dim. 8 yields closed curves in three dimensions which can be viewed as an orthogonal representation to dim. 6 and be interpreted as an encoding of the azimuth angle \(\varphi \). In both Fig. 2(b) and 2(d), the last plot shows sampling in the selected \(Z_1\) dimensions for fixed \(Z_0\) (i.e. property Y) and reconstructs the full ellipse and ellipsoid. Although the \(\beta \)-VAE and STIB perform equally well at reconstruction and prediction on the test set, these models do not consistently lead to sparse and easily interpretable representations which allow direct traversal on the level sets, as shown for our model. The presented results remain qualitatively the same for reruns of the models.

Table 1. Mean absolute errors (MAE) for reconstruction of input X, prediction of property Y, and property invariance. Ellipse/Ellipsoid: MAEs on 5-dim. input X and 3-dim. property Y are depicted. Molecules: MAEs on input X and property Y as the band gap energy in \(\text {kcal} \text { mol}^{-1}\).

5.2 Small Organic Molecules (QM9)

As a more challenging example, we consider the QM9 dataset [31], which includes 133,885 organic molecules. The molecules consist of up to nine heavy atoms (C, O, N, and F), not including hydrogen. Each molecule comes with corresponding chemical properties computed with Density Functional Theory methods. In our experiments, we select a subset with a fixed stoichiometry \((C_7 O_2 H_{10})\), which consists of 6,093 molecules. We choose the band gap energy as the property.

Details on the architecture and training can be found in Appendix Sec. A.5. We use fully-connected layers for our encoder and decoder. For the input X we use the bag-of-bonds [12] descriptor as a translation, rotation, and permutation invariant representation of molecules, which has 190 dimensions. The latent space size is 17, where \(Z_0\) is 1-dimensional and \(Z_1\) is 16-dimensional. To evaluate the invariance, we first adjust the regularisation loss weights for a fair comparison of the models. The weights for the irrelevance loss in the STIB and the invariance loss terms in our model were increased until a noticeable drop in reconstruction and prediction performance compared to the \(\beta \)-VAE results occurred.

Results: Table 1 summarises the results. On a test set of 300 molecules, all models achieve a similar MAE of 0.01 for the reconstruction of X. For the prediction of the band gap energy Y, an MAE of approx. 4 \(\text {kcal} \text { mol}^{-1}\) is achieved. The invariance is computed on the basis of 25 test molecules and 400 samples generated for each reference molecule. Similarly to the synthetic experiments, the STIB model performs almost twice as well as the \(\beta \)-VAE, while our model yields a distinctly better invariance of 1.34 \(\text {kcal} \text { mol}^{-1}\) than both. With this result, we can generate novel molecules whose properties are very close to a fixed target. This capability is illustrated in Fig. 3. For two reference molecules in the test set, we generate 2,000 new molecules by sampling uniformly at random within one standard deviation in the invariant subspace \(Z_1\) and keeping the reference property value, i.e. fixed \(Z_0\) coordinates. We show three such examples in Fig. 3a and select the nearest neighbours in the test set for visualisation of the molecular structure. For all samples, the boxplots in Fig. 3b illustrate the distribution of the predicted property values. The spread of predicted property values is generally smaller than the model prediction error of 4.06 \(\text {kcal} \text { mol}^{-1}\), and the predicted property of the majority of samples is close to the target property value.

Fig. 3.

Illustration of the generative capability of our model for two reference molecules (rows). (a) The first molecule is the reference molecule with a fixed reference band gap energy. We display three samples and their predicted band gap energies out of 2,000 samples. (b) Boxplots for distribution of predicted property. The star symbol marks the fixed reference band gap energy. The shaded background depicts the prediction error range of the model. (Color figure online)

6 Discussion

Sparsity constraints and cycle consistency lead to sparse and interpretable models facilitating model selection. The results in Fig. 2(a, c) demonstrate that our method identifies the sparsest solution in comparison to the standard disentanglement baseline \(\beta \)-VAE and the direct competitor STIB, which do not address sparsity explicitly. Furthermore, the experiments on ellipses and ellipsoids show that only our model identifies a correct parameterisation. It correctly learns the radius r in the property subspace \(Z_0\) as it encodes the level set, i.e. the ellipse curve or ellipsoid surface given by property Y. The angular components \(\varphi \) and \(\theta \) are correctly – and in particular independently – learned in the invariant subspace \(Z_1\) (see Fig. 2(b, d)). This is a direct consequence of the cycle consistency on the property Y. It allows structuring the latent space on the basis of the semantic knowledge about property Y. Finally, these results highlight that our method is able to inherently select the correct model. Although the \(\beta \)-VAE and STIB are capable of attaining similar reconstruction and prediction errors, a reconstruction of level sets in these models requires a more complicated combination of latent dimensions and hinders interpretation. Therefore, only our model makes an interpretation of the learned latent representation feasible.

Cycle Consistency Enforces Conditional Invariance. Table 1 shows that for all experiments, our model exhibits the best property invariance at otherwise similar reconstruction and prediction errors. The \(\beta \)-VAE has no mechanisms to ensure invariance and thus performs worst. The STIB relies on adversarial training to minimise the mutual information (MI) between \(Z_1\) and Y, but the alternating training and MI estimation can pose practical obstacles, especially for high-dimensional latent spaces. Our cycle-consistency-based approach provides the same benefit while being easier to train. In particular, our approach can operate on arbitrarily large latent spaces in both \(Z_0\) and \(Z_1\), because of the inherent sparsity of the solution. Typically, an upper limit for the size of the property subspace \(Z_0\) and the invariant subspace \(Z_1\) can be defined by the dimensionality of the property Y and input X (see Fig. 2). Notably, although our model is trained and tested on data in the interval \([-1,1]^{d_X}\), \(d_X=\{2,3\}\), the results generalise well beyond this interval, as long as a part of the level curve or surface was encountered during training (see Fig. 2(b)). This can be directly attributed to the regularisation of the latent space through additional sampling and cycle consistency of generated samples. These mechanisms impose conditional invariance which, in turn, facilitates generalisation and the exploration of new samples that share the same level set, i.e. a symmetry-conserved property.

Conditional Invariance Improves Targeted Molecule Discovery. Conditional invariance is of great importance for the generative potential of our model. In Fig. 3, we explore the generated molecular structures for two reference molecules as examples. By sampling in the invariant space \(Z_1\), we discover molecular structures with property values which are very close to the fixed targets, i.e. the mean absolute deviation is below the model prediction error. Our experiment demonstrates the ability to generate molecules with self-consistent properties, which relies on the improved conditional invariance provided by our model. This facilitates the discovery of novel molecules with desired chemical properties.

In conclusion, we demonstrated on synthetic and real-world use cases that our method allows selecting a correct model and improves interpretability as well as exploration of the latent representation. In our synthetic study, we focused on simple cases of connected and convex level sets. To generalise these findings, it would be interesting to investigate more general level sets in order to relate to more realistic scenarios. In addition, our approach could be applied to medical applications, where the selection of interpretable models is of particular relevance.