1 Introduction

Deep neural networks (DNNs) have proven to be extremely high-performing in computer vision [7], natural language processing [26], autonomous control [16], and deep generative modeling [5, 13], among other fields. These successes have made DNNs almost ubiquitous as an engineering tool, and they routinely appear in new applications. However, as we rush to deploy DNNs in the real world, we have also exposed many of their shortcomings.

Fig. 1.

Depiction of the proposed disentanglement method, NashAE, for a latent space dimensionality of \(m=3\). An autoencoder (AE), composed of the encoder \(\phi \) and decoder \(\psi \), compresses a high dimensional input \({\textbf{x}}\in \mathbb {R}^n\) to a lower dimensional latent vector \({\textbf{z}}\in \mathbb {R}^m\), and decompresses \({\textbf{z}}\) to approximate \({\textbf{x}}\) as \({\textbf{x}}'\). An ensemble of m independently trained regression networks \(\rho \) takes m duplicates of \({\textbf{z}}\) which each have an element removed, and each independent regression network tries to predict the value of its missing element using knowledge of the other elements. Disentanglement is achieved through equilibrium in an adversarial game in which \(\phi \) minimizes the element-wise covariance between the true latent vector \({\textbf{z}}\) and concatenated predictions vector \({\textbf{z}}'\)

One such shortcoming is that DNNs are extremely sensitive to minute perturbations of their inputs [6, 25] or weights [8], which can cause otherwise high-performing models to fail consistently. Additionally, DNNs trained on image classification tasks are observed to predict labels confidently even when the image shares no relationship with their in-distribution label space [9, 17]. Furthermore, DNNs are known to perpetuate biases in their training data through their predictions, exacerbating salient issues such as racial and social inequity and gender inequality [1]. While this small subset of examples may appear to be unrelated, they are all linked by a pervasive issue of DNNs: their lack of interpretability [23, 24]. Because DNNs are treated as black boxes, where engineers lack a clear explanation of why or how DNNs make decisions, the root causes of their shortcomings are difficult to diagnose.

A promising remedy to this overarching issue is to clarify learning representations through feature disentanglement: the process of learning unique data representations that are each only sensitive to independent factors of the underlying data distribution. It follows that a disentangled representation is an inherently interpretable representation, where each disentangled unit has a consistent, unique, and independent interpretation of the data across its domain.

Several works have pioneered the field of feature disentanglement, moving from supervised [14] to unsupervised approaches [2, 4, 10, 11], which we focus on in this work. Chen et al. [4] present InfoGAN, an extension of the GAN framework [5] that enables better control of generated images using special noise variables. Higgins et al. [10] introduce \(\beta \)-VAE, a generalization of the VAE framework [13] that allows the VAE to extract more statistically independent, disentangled representations.

While these highly successful methods are considered to be unsupervised, they still have a considerable amount of prior knowledge built into their operation. InfoGAN requires prior knowledge of the number and form of disentangled factors to extract, and \(\beta \)-VAE encounters bottleneck capacity issues and inconsistent results with seemingly innocuous changes to hyperparameters, requiring fine-tuning with some supervision [2, 10].

We propose a new method, NashAE, to promote a sparse and disentangled latent space in the standard AE that does not make assumptions on the number or distribution of underlying data generating factors. The core intuition behind the approach is to reduce the redundant information between latent encoding elements, regardless of their distribution. To accomplish this, we present a new technique to reduce the information shared between encoded continuous and/or discrete random variables using only samples drawn from the unknown underlying distributions (Fig. 1). We empirically demonstrate that the method can reliably extract high-quality disentangled representations according to the metric presented in [10], and that the method has a higher latent feature capacity with respect to salient data characteristics.

The paper makes the following contributions:

  • We develop a method to quantify the relationship between random variables with unknown distribution (arbitrary continuous or discrete/categorical), and show that it can be used to promote statistical independence among latent variables in an AE.

  • We provide qualitative and quantitative evidence that NashAE reliably extracts a set of disentangled continuous and/or discrete factors of variation in a variety of scenarios, and we demonstrate the method’s improved latent feature capacity with regard to salient data characteristics.

  • We release the Beamsynthesis disentanglement dataset, a collection of time-series data based on particle physics studies and their associated data generating factor ground truth.

The code for all experiments and the Beamsynthesis dataset can be found at: https://github.com/ericyeats/nashae-beamsynthesis.

2 Related Work

Autoencoders. Much of this work derives from autoencoders (AE), which consist of an encoder function followed by a decoder function. The encoder transforms high-dimensional input \({\textbf{x}}\sim X\) into a low-dimensional latent representation \({\textbf{z}}\), and the decoder transforms \({\textbf{z}}\) to a reconstruction of the high-dimensional input \({\textbf{x}}'\). AEs have numerous applications in unsupervised feature learning, denoising, and feature disentanglement [11, 20, 22]. Variational autoencoders (VAEs) [13] take AEs further by using them to parameterize probability distributions for X. VAEs are trained by maximizing a lower bound of the likelihood of X, a process which involves conforming the encoded latent space \({\textbf{z}}\sim Z\) with a prior distribution P. Adversarial AEs [21], like VAEs, match encoded distributions to a prior distribution, but do so through an adversarial procedure inspired by Generative Adversarial Networks (GANs) [5].

Unsupervised Disentanglement Methods. One of the most successful approaches to feature disentanglement is \(\beta \)-VAE [10], which builds on the VAE framework. \(\beta \)-VAE adjusts the VAE training framework by modulating the relative strength of the \(D_{\textrm{KL}}(Z||P)\) term with hyperparameter \(\beta \), effectively limiting the capacity of the VAE and encouraging disentanglement as \(\beta \) becomes larger. Higgins et al. [10] note a positive correlation between the size of the VAE latent dimension and the optimal \(\beta \) hyperparameter for disentanglement, which requires some hyperparameter search and limited supervision. Another important contribution of Higgins et al. [10] is a metric for quantifying disentanglement which depends on the accuracy of a linear classifier in determining which data generating factor is held constant over a pair of data batches.

Multiple works have augmented \(\beta \)-VAE with loss functions that isolate the Total Correlation (TC) component of \(D_{\textrm{KL}}(Z||P)\), further boosting quantitative disentanglement performance in certain scenarios [3, 12]. Another VAE-based work proposed by Kumar et al. [15] directly minimizes the covariance of the encoded representation. However, simple covariance of the latent elements fails to capture more complex, nonlinear relationships between the elements. Our work employs regression neural networks to capture complex dependencies.

Chen et al. [4] present InfoGAN, which builds on the GAN framework [5]. InfoGAN augments the base GAN training procedure with a special set of independent noise inputs. A tractable lower bound on the mutual information (MI) between the special noise inputs and the output of the generator is maximized, leading to the special noise inputs resembling data generating factors. While the method claims to be unsupervised, choosing its special noise inputs requires prior knowledge of the number and nature (e.g. distribution) of factors to extract.

Limitations of Unsupervised Disentanglement. Locatello et al. [19] demonstrate that unsupervised disentanglement learning is fundamentally impossible without incorporating inductive biases on both models and data. However, they assert that given the right inductive biases, the prospect of unsupervised disentanglement learning is not so bleak. We incorporate several inductive biases in our method to achieve unsupervised disentanglement. First, our approach assumes that disentangled learning representations are characterized by being statistically independent. Second, we posit that breaking up the latent factorization problem into multiple parts by individual masking and adversarial covariance minimization helps boost disentanglement reliability. In terms of models and data, we employ the network architectures and data preparation suggested by previous works in unsupervised disentanglement. Under such conditions, NashAE has demonstrated superior reliability in retrieving disentangled representations.

3 NashAE Methodology

Our approach starts with a purely deterministic encoder \(\phi \), which takes input observations \({\textbf{x}}\sim X\) and creates a latent representation \({\textbf{z}}= \phi ({\textbf{x}})\), where \({\textbf{x}}\in \mathbb {R}^n\), \({\textbf{z}}\in \mathbb {R}^m\), and typically \(n \gg m\). Furthermore, \(\phi \) employs a sigmoid activation function \(\sigma \) at its output to produce \({\textbf{z}}\) such that \({\textbf{z}}= \sigma ({\mathbf {\zeta }})\) and \({\textbf{z}}\in [0, 1]^m\), where \({\mathbf {\zeta }}\) is the output of \(\phi \) before it is passed through the sigmoid non-linearity. A deterministic decoder \(\psi \) maps the latent representation \({\textbf{z}}\) back to the observation domain \({\textbf{x}}' = \psi ({\textbf{z}})\). To achieve disentanglement, the AE is trained with two complementary objectives: (1) reconstructing the observations, and (2) maximizing the discrepancy between each latent variable and predicted values of that variable obtained using information of all other variables. The intuitions behind each are the following. First, reconstruction of the input observations \({\textbf{x}}\) is standard for AEs and ensures that they learn features relevant to the distribution X. Second, promoting discrepancy between the i-th latent element and its prediction (conditioned on all other \(j \ne i\) elements) reduces the information between latent element i and all other elements \(j \ne i\).

For the reconstruction objective, the goal is to minimize the mean squared error:

$$\begin{aligned} \mathcal {L}_R = \frac{1}{2 n}\mathbb {E}_{{\textbf{x}}\sim X}\big |\big |{\textbf{x}}' - {\textbf{x}}\big |\big |^2_2 . \end{aligned}$$
(1)

Reconstructing the input observation \({\textbf{x}}\) ensures that the features of the latent space are relevant to the underlying data distribution X. The following subsection describes the adversarial game loss objectives, which settle on an equilibrium and inspire the name of the proposed disentanglement method, NashAE.
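To make the setup concrete, the following is a minimal PyTorch-style sketch of the deterministic encoder/decoder pair with the sigmoid-bounded latent space and the reconstruction loss of Eq. (1). The MLP architecture, layer widths, and names such as `NashAE` and `reconstruction_loss` are illustrative assumptions, not the exact networks used in the experiments (see the supplementary material for those).

```python
import torch.nn as nn

class NashAE(nn.Module):
    """Deterministic AE with a sigmoid-bounded latent code z in [0, 1]^m."""
    def __init__(self, n: int, m: int, hidden: int = 256):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Linear(n, hidden), nn.ReLU(),
            nn.Linear(hidden, m), nn.Sigmoid(),   # z = sigmoid(zeta)
        )
        self.decoder = nn.Sequential(
            nn.Linear(m, hidden), nn.ReLU(),
            nn.Linear(hidden, n),
        )

    def forward(self, x):
        z = self.encoder(x)       # latent code z in [0, 1]^m
        x_rec = self.decoder(z)   # reconstruction x'
        return z, x_rec


def reconstruction_loss(x, x_rec):
    """L_R = (1 / 2n) E ||x' - x||_2^2, averaged over the minibatch (Eq. 1)."""
    n = x.shape[1]
    return 0.5 / n * ((x_rec - x) ** 2).sum(dim=1).mean()
```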

Adversarial Covariance Minimization

In general, it is difficult to compute the information between latent variables when one only has access to samples of observations \({\textbf{z}}\sim Z\). Since the underlying distribution Z is unknown, standard methods of computing the information directly are not possible. To overcome this challenge, we propose to reduce the information between latent variables indirectly using an ensemble of regression networks which attempt to capture the relationships between latent variables. The process is computationally efficient; it uses simple measures of linear statistical independence and an adversarial game.

Consider an ensemble of m independent regression networks \(\rho \), where the output of the i-th network \(\rho _i\) corresponds to a missing i-th latent element. The objective of each \(\rho _i\) is to minimize the mean squared error:

$$\begin{aligned} \mathcal {L}_{\rho _i} = \frac{1}{2}\mathbb {E}_{{\textbf{x}}\sim X}\big (\rho (\overline{{\textbf{z}}_i})_i - {\textbf{z}}_i\big )^2 , \end{aligned}$$
(2)

where \({\textbf{z}}_i\) is the i-th true latent element, and \(\overline{{\textbf{z}}_i}\) is the latent vector with the i-th latent element masked with 0 (i.e., all elements of the latent vector are present except \({\textbf{z}}_i\)).

We call \(\rho \) the predictors, since they are each optimized to predict one missing value of \({\textbf{z}}\) given knowledge of all other \({\textbf{z}}\). If all their individual predictions are concatenated together, they form \({\textbf{z}}'\) such that each \({\textbf{z}}'_i = \rho (\overline{{\textbf{z}}_i})_i\).
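A sketch of the predictor ensemble is given below, assuming one small MLP per latent element, with the mask implemented by zeroing the i-th column of a copied latent batch; the class name `Predictors` and the architecture are illustrative choices rather than the paper's exact networks.

```python
import torch
import torch.nn as nn

class Predictors(nn.Module):
    """Ensemble of m regression networks; network i predicts z_i from all other elements."""
    def __init__(self, m: int, hidden: int = 64):
        super().__init__()
        self.nets = nn.ModuleList(
            nn.Sequential(nn.Linear(m, hidden), nn.ReLU(), nn.Linear(hidden, 1))
            for _ in range(m)
        )

    def forward(self, z):
        preds = []
        for i, net in enumerate(self.nets):
            z_masked = z.clone()
            z_masked[:, i] = 0.0          # mask the i-th latent element with 0
            preds.append(net(z_masked))   # predict the missing value z_i
        return torch.cat(preds, dim=1)    # concatenated predictions z'
```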

For the disentanglement objective, we want to choose encodings \({\textbf{z}}\sim \phi (X)\) that make it difficult to recover information of one element from all others. This leads to a natural minmax formulation for the AE and predictors:

$$\begin{aligned} \min _{\phi ,\psi } \max _{\rho } \frac{1}{2} \mathbb {E}_{{\textbf{x}}\sim X} \left[ \frac{1}{n}\big |\big |{\textbf{x}}' - {\textbf{x}}\big |\big |^2_2 - \big |\big |{\textbf{z}}' - {\textbf{z}}\big |\big |^2_2 \right] . \end{aligned}$$
(3)

In general, each predictor attempts to use information of \(\overline{{\textbf{z}}_i}\) to establish a one-to-one linear relationship between \({\textbf{z}}'\) and \({\textbf{z}}\). Hence we propose to use covariance between \({\textbf{z}}'\) and \({\textbf{z}}\) across a batch of examples to capture the degree to which they are related. In practice, we find that training the AE to minimize the summed covariance objective between each of the \({\textbf{z}}_i'\) and \({\textbf{z}}_i\) random variable pairs,

$$\begin{aligned} \mathcal {L}_A = \sum _{i=1}^m\textrm{Cov}({\textbf{z}}'_i, {\textbf{z}}_i) , \end{aligned}$$
(4)

is more stable than maximizing \(\frac{1}{2}\mathbb {E}_{{\textbf{x}}\sim X}\big |\big |{\textbf{z}}' - {\textbf{z}}\big |\big |^2_2\) and leads to more reliable disentanglement outcomes. Hence, in all the following experiments we train the AE to minimize this summed covariance measure, \(\mathcal {L}_A\). Furthermore, one can show that the fixed points of the minmax objective (3) are the same as those of training \(\phi \) to minimize \(\mathcal {L}_R + \mathcal {L}_A\) for disentangled representations (see supplementary material).
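The summed covariance of Eq. (4) can be estimated over a minibatch from centered samples; a minimal sketch under that assumption:

```python
def adversarial_covariance(z, z_pred):
    """L_A = sum_i Cov(z'_i, z_i), estimated with per-element sample covariances
    over the batch dimension (Eq. 4). z, z_pred: (batch, m) tensors."""
    z_c = z - z.mean(dim=0, keepdim=True)              # center the true latents
    zp_c = z_pred - z_pred.mean(dim=0, keepdim=True)   # center the predictions
    return (z_c * zp_c).mean(dim=0).sum()              # covariance per element, summed
```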

In the adversarial loss \(\mathcal {L}_A\), the optimization objective of the encoder \(\phi \) is to adjust its latent representations to minimize the covariance between each \({\textbf{z}}'_i\) and \({\textbf{z}}_i\). Using minibatch stochastic gradient descent (SGD), the encoder \(\phi \) can use gradients passed through the predictors \(\rho \) to learn exactly how to adjust its latent representations to minimize the adversarial loss. Assuming that \(\rho \) can learn faster than \(\phi \), each i-th covariance term will reach zero when \(\mathbb {E}[{\textbf{z}}_i|{\textbf{z}}_j, \forall j \ne i] = \mathbb {E}[{\textbf{z}}_i]\) everywhere.

In the following experiments, we weight the sum of \(\mathcal {L}_R\) and \(\mathcal {L}_A\) with the hyperparameter \(\lambda \in [0, 1)\) in order to establish a normalized balance between the reconstruction and adversarial objectives:

$$\begin{aligned} \mathcal {L}_{R,A}(\lambda ) = (1 - \lambda ) \mathcal {L}_R + \lambda \mathcal {L}_A . \end{aligned}$$
(5)

Intuitively, higher values of \(\lambda \) result in lower covariance between elements of \({\textbf{z}}\) and \({\textbf{z}}'\), and eventually the equilibrium covariance settles to zero. In the special case where all data generating factors are independent, the AE can theoretically achieve \(\mathcal {L}_{R,A}=0\).
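Putting the pieces together, one possible alternating update implementing Eqs. (2) and (5) is sketched below, reusing the `NashAE`, `Predictors`, `reconstruction_loss`, and `adversarial_covariance` helpers from the earlier sketches. The one-gradient-step-each schedule and the choice of optimizers are assumptions; the analysis above only requires that the predictors learn faster than the encoder.

```python
import torch

def train_step(x, ae, predictors, opt_ae, opt_pred, lam):
    """One adversarial round: predictors regress the latents (Eq. 2),
    then the AE takes a step on (1 - lam) * L_R + lam * L_A (Eq. 5)."""
    # Predictor update: fit z'_i to z_i with the encoder held fixed.
    with torch.no_grad():
        z, _ = ae(x)
    loss_pred = 0.5 * ((predictors(z) - z) ** 2).mean()
    opt_pred.zero_grad()
    loss_pred.backward()
    opt_pred.step()

    # Autoencoder update: gradients flow through the predictors into the encoder.
    z, x_rec = ae(x)
    z_pred = predictors(z)
    loss = (1 - lam) * reconstruction_loss(x, x_rec) + lam * adversarial_covariance(z, z_pred)
    opt_ae.zero_grad()
    loss.backward()
    opt_ae.step()
    return loss.item()
```

Here `opt_ae` and `opt_pred` would be separate optimizers over the AE and predictor parameters respectively, e.g. `torch.optim.Adam(ae.parameters())` and `torch.optim.Adam(predictors.parameters())`.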

Proposed Disentanglement Metric: TAD

In the following section, we find that NashAE and \(\beta \)-VAE could achieve equally high scores using the \(\beta \)-VAE metric. However, the \(\beta \)-VAE metric fails to capture a key aspect of a truly disentangled latent representation: change in one independent data generating factor should correspond to change in just one disentangled latent feature. This is not captured in the \(\beta \)-VAE disentanglement metric since the score can benefit from spreading the information of one data generating factor over multiple latent features. For example, duplicate latent representations of the same unique data generating factor can only increase the score of the \(\beta \)-VAE metric.

Furthermore, a disentanglement metric should quantify the degree to which its set of independent latent axes aligns with the independent data generating factor ground truth axes. In essence, a unique latent feature should be a confident predictor of a unique data generating factor, and all other latents should be orthogonal to the same data generating factor. Intuitively, the greater number of latent axes that align uniquely with the data generating factors and the more confident the latents are as predictors of the factors, the higher the metric score should be.

For these reasons, we design a disentanglement metric for datasets with binary attribute ground truth labels, called Total AUROC Difference (TAD). Given a large number l of examples, we collect a batch of latent representations \({\textbf{z}}\) of shape (l, m) and perform the following steps to calculate the TAD:

  1. For each independent ground truth attribute, calculate the AUROC score of each latent variable in detecting the attribute.

  2. For each independent ground truth attribute, find the maximum latent AUROC score \(a_{1,i}\) and the next-largest latent AUROC score \(a_{2,i}\), where i is the index of the independent ground truth attribute under consideration.

  3. Take \(\sum _i a_{1,i}-a_{2,i}\) as the TAD score, where i indexes over the independent ground truth attributes.

The TAD metric captures important aspects of a disentangled latent representation. First, each AUROC difference \(a_{1,i}-a_{2,i}\) captures the degree to which a unique attribute is detected by a unique latent representation. Second, summing the AUROC difference scores for each independent ground truth attribute quantifies the degree to which the latent axes confidently replicate the ground truth axes. See the supplementary material for more details on how TAD is calculated and for a discussion relating it with other work.
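As an illustration, TAD can be computed from a latent matrix and binary attribute labels roughly as follows. The use of scikit-learn's `roc_auc_score` and of raw (unflipped) AUROC values are assumptions of this sketch; the precise procedure is given in the supplementary material.

```python
import numpy as np
from sklearn.metrics import roc_auc_score

def tad_score(z, attributes):
    """z: (l, m) array of latent codes; attributes: (l, k) array of binary labels.
    Returns the Total AUROC Difference summed over the k attributes."""
    m = z.shape[1]
    k = attributes.shape[1]
    total = 0.0
    for i in range(k):
        y = attributes[:, i]
        # AUROC of every latent element as a detector for attribute i
        aurocs = np.array([roc_auc_score(y, z[:, j]) for j in range(m)])
        srt = np.sort(aurocs)
        total += srt[-1] - srt[-2]   # a_{1,i} - a_{2,i}
    return total
```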

4 Experiments

The following section contains a mix of qualitative and quantitative results for four unsupervised disentanglement algorithms: NashAE (this work), \(\beta \)-VAE [10], FactorVAE [12], and \(\beta \)-TCVAE [3]. The results are collected for disentanglement tasks on three datasets: Beamsynthesis, dSprites [10], and CelebA [18]. Please refer to the supplementary material for details on algorithm hyperparameters, network architectures, and data normalization for the different experiments.

Beamsynthesis is a simple dataset of 360 time-series data of current waveforms constructed from simulations of the LINAC (linear particle accelerator) portion of high-energy particle accelerators. The dataset contains two ground truth data generating factors: a categorical random variable representing the frequency of the particle waveform which can take on one of the three values (10, 15, 20) and a continuous random variable constructed from a uniform sweep of 120 waveform duty cycle values \(\in [0.2, 0.8)\). The Cartesian product of the two data generating factors forms the set of observations. The challenge in disentangling this dataset arises from the fact that both the frequency and duty cycle of a waveform affect the length of the “on” period of each wave. We visualize the complete latent space of different algorithms and evaluate the reliability of the algorithms in extracting the correct number of ground truth data generating factors using this dataset.

dSprites is a disentanglement dataset released by the authors of \(\beta \)-VAE; it comprises 737,280 images of white 2D shapes on a black background. The Cartesian product of the type of shape (categorical: square, ellipse, heart), scale (continuous: 6 values), orientation (continuous: 40 values), x-position (continuous: 32 values), and y-position (continuous: 32 values) forms the independent ground truth of the dataset. We measure the \(\beta \)-VAE disentanglement metric score for different algorithms using this dataset.

CelebA is a dataset comprised of 202,599 images of the faces of 10,177 different celebrities. Associated with each image are 40 different binary attribute labels such as bangs, blond hair, black hair, chubby, male, and eyeglasses. We measure the TAD score of different algorithms using this dataset.

Empirical Fixed Point Results

In Sect. 3, we indicate that higher values of \(\lambda \in (0, 1)\) should result in a statistically independent NashAE latent space, and that redundant latent elements will not be learned. This is supported by observations of the fixed point of the optimization process for all experiments with nonzero \(\lambda \): as \(\lambda \) is increased, the number of dead latent representations increases, and the average \(R^2\) correlation statistic between latent representations and their predictions decreases.

Fig. 2.

Visualization of true latent representations (x-axis) vs predicted latent representations (y-axis) on the CelebA dataset

Figure 2 depicts each of the 32 true latent representations vs their predictions for 1000 samples of the CelebA dataset after three different NashAE networks have converged. When \(\lambda =0\) (standard AE), all latent elements are employed towards the reconstruction objective, and the predictions exhibit a strong positive linear relationship with the true latent variables (average \(R^2\) is 0.52). When \(\lambda =0.1\), only 28 latent representations are maintained, and the average \(R^2\) statistic between true latents and their predictions becomes 0.09. The 4 unused latent representations are each isolated in a dead zone of the sigmoid non-linearity, respectively. When \(\lambda \) is increased to 0.2, only 22 latent representations are maintained and the average \(R^2\) statistic decreases even further to 0.04. Note also that the predictions become constant and are each equal to the expected value of their respective true latent feature. This is consistent with the conditional expectation of each variable being equal to its marginal expectation everywhere, and it indicates that no useful information is given to the predictors towards their regression task.
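For reference, the average \(R^2\) statistic quoted above can be estimated from paired samples of true latents and predictor outputs; interpreting \(R^2\) as the squared Pearson correlation between \({\textbf{z}}_i\) and \({\textbf{z}}'_i\) is an assumption of this sketch.

```python
import numpy as np

def average_r2(z, z_pred, eps=1e-12):
    """Average squared Pearson correlation between each true latent z_i and its
    prediction z'_i. z, z_pred: (l, m) arrays collected from a batch of samples."""
    z_c = z - z.mean(axis=0)
    p_c = z_pred - z_pred.mean(axis=0)
    cov = (z_c * p_c).mean(axis=0)
    var_z = (z_c ** 2).mean(axis=0)
    var_p = (p_c ** 2).mean(axis=0)
    r2 = cov ** 2 / np.maximum(var_z * var_p, eps)   # ~0 for constant (dead) predictions
    return r2.mean()
```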

Beamsynthesis Latent Space Visualization

Figure 3 depicts the complete latent space generated by encoding all 360 observations of the Beamsynthesis dataset for the different algorithms and their baselines with a starting latent size of \(m=4\).

A standard AE latent space (leftmost) employs all latent elements towards the reconstruction objective, and their relationship with the ground truth data generating factors, frequency (categorical) and duty cycle (continuous), is unclear. Similarly, when a standard VAE (center right) converges and the \(\mu \) component of the latent space is plotted for all observations, all latent variables are employed towards the reconstruction objective, and no clear relationship can be established for the latent variables.

Fig. 3.

Visualizations of the learned latent space for the different algorithms on the Beamsynthesis dataset

If an adversarial game is played with \(\lambda =0.2\) (center left), the correct number of latent dimensions is extracted, and each nontrivial latent representation aligns with just one data generating factor. In this case, L1 level-encodes the frequency categorical data generating factor, and L2 encodes the duty cycle continuous data generating factor with a consistent interpretation. The unused neurons remain in a dead zone of the sigmoid non-linearity.

\(\beta \)-VAE with \(\beta =100\) (rightmost) can disentangle the 360 observations in a similar fashion. In this example, L2 level-encodes the frequency categorical data generating factor, and L3 encodes the duty cycle continuous data generating factor with a consistent interpretation. The unused neurons each have approximately 0 variance in their \(\mu \) component and an approximately constant value of 1 in their learned variance component.

Although all algorithms are capable of extracting a disentangled representation of the ground truth data generating factors, there is a stark difference in the reliability of the methods in extracting the correct number of latent variables when the starting latent space size m is changed. Reliability in this aspect is critical, as the dimensionality of the independent data generating factors is often an important unknown quantity to recover from new data. To determine this unknown dimensionality, one should start with a latent space size m which is larger than the number of latent factors that should be extracted.

Table 1. Average absolute difference between the number of learned latent dimensions and the number of ground truth factors for different starting latent space sizes m on the Beamsynthesis dataset. Lower is better, and the lowest for each latent space size configuration (m) are in bold. The results are averaged over 8 trials

Table 1 depicts the results of an experiment in which all hyperparameters are held constant except the starting latent size m as each of the algorithms are trained to convergence on the Beamsynthesis dataset. Each entry in the table is the average absolute difference between the number of learned latent representations and the number of ground truth data generating factors (2 for Beamsynthesis), collected over 8 trials. Both NashAE \(\lambda =0\) and \(\beta \)-(TC)VAE \(\beta =1\) learn far too many latent variables, and \(\beta \)-VAE \(\beta =125\) tends to learn too few latent variables when \(m=4\) and \(m=8\). NashAE \(\lambda =0.2\) and NashAE \(\lambda =0.3\) perform very well in comparison, keeping the average absolute difference less than or equal to one in all configurations of m. \(\beta \)-TCVAE and FactorVAE perform second-best overall, tending to learn too many latent variables. The results indicate that NashAE is the most consistent in recovering the correct number of data generating factors. See the supplementary material for a similar experiment with the dSprites dataset and details on how learned latent representations are counted.

\(\beta \)-VAE Metric on dSprites

Table 2 reports the disentanglement score of each algorithm averaged over 15 trials - please refer to Higgins et al. [10] for more details on the metric. In general, the standard AE (NashAE, \(\lambda =0\)) and standard VAE (\(\beta \)-VAE, \(\beta =1\); \(\beta \)-TCVAE, \(\beta =1\)) performed the worst on the \(\beta \)-VAE disentanglement metric. As \(\lambda \) and \(\beta \) are increased, the disentanglement score of NashAE and \(\beta \)-VAE increases to over \(96\%\). We do not observe the difference between NashAE and \(\beta \)-VAE in top performance on this metric to be significant, so both are in bold. In general, \(\beta \)-TCVAE performed slightly worse on this metric than \(\beta \)-VAE and NashAE, achieving just over \(95\%\). We observed that increasing \(\lambda \) or \(\beta \) beyond these values leads to poorer performance for all algorithms. All algorithms achieve higher disentanglement scores on some initializations than others, but no outliers are removed from the reported scores (as is done in [10]). Overall, the results indicate that NashAE scores at least as high as the \(\beta \)-VAE and \(\beta \)-TCVAE algorithms on the \(\beta \)-VAE metric.

Table 2. \(\beta \)-VAE Metric Scores on dSprites averaged over 15 trials. Higher is better, and the highest scores of all models and hyperparameter configurations are in bold. Optimal \(\lambda \) values for disentanglement are different for this dataset because the dSprites image data is not normalized, following the precedent of previous works [10]
Fig. 4.

Traversals of latent features corresponding to the highest AUROC score for the bangs, blond, and male attributes for the different disentanglement algorithms. Each latent representation with maximum AUROC score and its corresponding score are reported

Latent Traversals and TAD Metric on CelebA

We include traversals of latent representations that have the highest AUROC detector score for a small set of attributes on CelebA in Fig. 4. In each case, we start with a random image from the dataset and hold all latent representations constant except the one identified to have the highest AUROC score for the attribute of interest. We vary that representation evenly from its minimum to its maximum (as observed across 1000 random samples) and decode the resulting latent representation to generate the images reported in Fig. 4.
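A sketch of this traversal procedure is given below, assuming the `NashAE` module from the earlier sketch (with `encoder`/`decoder` attributes) and a reference batch used to estimate the observed range of the selected latent; the function name and the number of traversal steps are illustrative.

```python
import torch

@torch.no_grad()
def traverse_latent(ae, x_ref, x_sample, dim, steps=8):
    """Decode a sweep of latent element `dim` from its observed minimum to maximum.
    x_ref: reference batch (e.g. 1000 random samples) used to estimate the range of z_dim;
    x_sample: a single input whose other latent values are held constant."""
    z_ref = ae.encoder(x_ref)                                  # observed latent codes
    lo, hi = z_ref[:, dim].min().item(), z_ref[:, dim].max().item()
    z = ae.encoder(x_sample).repeat(steps, 1)                  # copy the sample's latent code
    z[:, dim] = torch.linspace(lo, hi, steps)                  # vary only the selected element
    return ae.decoder(z)                                       # decoded traversal images
```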

Note that in all cases, employing disentanglement methods (NashAE \(\lambda > 0\) or \(\beta \)-VAE \(\beta \gg 1\)) leads to a visual traversal that intuitively matches the attribute that the latent representation is a good detector for. Furthermore, the visual changes are significant and obvious. Contrarily, when there is no effort to disentangle the representations (\(\lambda =0\) or \(\beta =1\)), the relationship between the representation's high AUROC score and its traversal visualization becomes far less clear. In some cases, the traversal does not produce a meaningful change, or it causes odd artifacts during decoding. We hypothesize that this is because redundant information is shared between the latent features: changing just one may have no significant effect, or the resulting combination may be “out of distribution” to the decoder, leading to unnatural decoding artifacts. The idea that standard AE latents hold redundant information is supported by Fig. 2, where the predictors establish a high average \(R^2\) value on CelebA when \(\lambda =0\).

Table 3. TAD Scores on CelebA (averaged over 3 trials). Higher TAD scores are better, and the highest average score is in bold

We employ the TAD metric to quantify disentanglement on the CelebA dataset. Table 3 summarizes the TAD results and number of captured attributes for each of the algorithms averaged over three trials. An attribute is considered captured if it has a corresponding latent representation with an AUROC score of at least 0.75. The resulting scores indicate that NashAE consistently achieves a higher TAD score, suggesting that its latent space captures more of the salient data characteristics (determined by the labelled attributes). Furthermore, NashAE achieves high scores over a broad range of \(\lambda \in (0, 1)\). \(\beta \)-TCVAE performs second best, achieving a TAD score of 0.446 when \(\beta = 15\), yet it does not capture as many attributes as NashAE. In general, \(\beta \)-VAE and FactorVAE tend to capture fewer attributes and achieve lower TAD scores, suggesting that their latent spaces capture fewer of the salient data characteristics.

5 Discussion

We have shown with our quantitative experiments that NashAE can reliably extract disentangled representations. Furthermore, qualitative latent traversal inspection indicates that the latent variables of NashAE which are the best detectors for a given attribute indeed visually reflect independent traversals of the attribute. Hence, the adversarial covariance minimization objective presented in this work promotes learning of clarified, interpretable representations in neural networks. We believe that improvements in neural network interpretability can aid engineers in diagnosing and treating the current ailments of neural networks, such as security vulnerabilities, lack of fairness, and poor out-of-distribution detection.

Future work will investigate more sophisticated latent distribution modeling and extend NashAE into a generative model. This could further boost NashAE's disentanglement performance and provide deeper insight through information-theoretic approaches. It could also be interesting to apply the adversarial covariance minimization objective to clarify the representations of DNNs for image classification.

6 Conclusion

We have presented NashAE, a new adversarial method to disentangle factors of variation which makes minimal assumptions on the number and form of factors to extract. We have shown that the method leads to a more statistically independent and disentangled AE latent space. Our quantitative experiments indicate that this flexible method is more reliable in retrieving the true number of data generating factors and has a higher capacity to align its latent representations with salient data characteristics than leading VAE-based algorithms.