
1 Introduction

Contemporary deep learning methods now provide excellent performance across a variety of computer vision tasks when ample annotated training data is available. However, this performance often degrades rapidly when models are applied to novel domains whose data statistics differ substantially from the training data, a problem known as domain shift. Meanwhile, data collection and annotation for every possible domain of application is expensive and sometimes impossible. This challenge has motivated extensive study of domain adaptation (DA), which addresses training models that work well on a target domain using only unlabelled or partially labelled data from that domain together with labelled data from a source domain.

Fig. 1. Left: Exact meta-learning of the initial condition, with an inner loop that trains DA to convergence, is intractable. Right: Online meta-learning alternates between meta-optimization and domain adaptation.

Several variants of the domain adaptation problem have been studied. Single-source domain adaptation (SDA) considers adaptation from a single source domain [5, 6], while multi-source domain adaptation (MSDA) considers improving cross-domain generalization by aggregating information across multiple sources [35, 47]. Unsupervised domain adaptation (UDA) learns solely from unlabelled data in the target domain [16, 43], while semi-supervised domain adaptation (SSDA) learns from a mixture of labelled and unlabelled target domain data [9, 10]. The main means of progress has been developing improved methods for aligning representations between source(s) and target in order to improve generalization. These methods span distribution alignment, for example by maximum mean discrepancy (MMD) [27, 48], domain adversarial training [16, 43], and cycle-consistent image transformation [18, 26].

In this paper we adopt a novel research perspective that is complementary to all these existing methods. Rather than proposing a new domain adaptation strategy, we study a meta-learning framework for improving these existing adaptation algorithms. Meta-learning (a.k.a. learning to learn) has a long history [44, 45], and has resurged recently, especially due to its efficacy in improving few-shot deep learning [12, 40, 50]. Meta-learning pipelines aim to improve learning by training some aspect of a learning algorithm, such as a comparison metric [50], model optimizer [40] or model initialisation [12], so as to improve outcomes according to some meta-objective such as few-shot learning efficacy [12, 40, 50] or learning speed [1]. In this paper we provide a first attempt to define a meta-learning framework for improving domain adaptive learning.

We take the perspective of meta-optimizing the initial condition [12, 33] of domain adaptive learning (Footnote 1). While several facets of a learning algorithm can be meta-learned, such as hyper-parameters [14] and learning rates [24], these are somewhat tied to the base learning algorithm (the domain adaptive algorithm in our case). In contrast, our framework is algorithm agnostic in that it can be used to improve many existing gradient-based domain adaptation algorithms.

Furthermore, we develop variants for both unsupervised multi-source adaptation and semi-supervised single-source adaptation, thus providing broad potential benefit to existing frameworks and application settings. In particular, we demonstrate the application of our framework to the classic domain adversarial neural network (DANN) [16] algorithm, as well as the recent maximum classifier discrepancy (MCD) [43] and minimax entropy (MME) [41] algorithms.

Meta-learning can often be cleanly formalised as a bi-level optimization problem [14, 39], where an outer loop optimizes the meta-parameter of interest (such as the initial condition in our case) with respect to some meta-loss (such as performance on a validation set), and the inner loop runs the learning algorithm conditioned on the chosen meta-parameter. However, this is tricky to apply directly in a domain adaptation scenario because: (i) the computation graph of the inner loop is typically long (unlike in the popular few-shot meta-learning setting [12]), making meta-optimization intractable, and (ii) especially in unsupervised domain adaptation, there is no labelled data in the target domain to define a supervised learning loss for the outer-loop meta-objective. We surmount these challenges by proposing a simple, fast and efficient meta-learning strategy based on online shortest path gradient descent [36], and by defining meta-learning pipelines suited to meta-optimization of domain adaptation problems. Although motivated by initial condition learning, our online algorithm ultimately has the interpretation of intermittently performing meta-update(s) of the parameters in order to achieve the best outcome from the following DA updates (Fig. 1).

Overall, our contributions are: (i) introducing a meta-learning framework suitable for both multi-source and semi-supervised domain adaptation settings; (ii) demonstrating the algorithm-agnostic nature of our framework through its application to several base domain adaptation methods, including MME [41], DANN [16] and MCD [43]; (iii) applying our meta-learner to these base adaptation methods to achieve state-of-the-art performance on several MSDA and SSDA benchmarks, including the largest-scale DA benchmark, DomainNet [38].

2 Related Work

Single-Source Domain Adaptation.  Single-source unsupervised domain adaptation (SDA) is a well-established area [5, 6, 16, 19, 27, 29, 30, 31, 43, 48]. Theoretical results bound the cross-domain generalization error in terms of domain divergence [4], and numerous algorithms have been proposed to reduce the divergence between source and target features. Representative approaches include minimising distribution shift via MMD [27, 48] or Wasserstein distance [2, 51], adversarial training [16, 20, 43], and alignment by cycle-consistent image translation [18, 26]. Given the difficulty of SDA, studies have considered exploiting semi-supervised or multi-source adaptation where possible.

Semi-supervised Domain Adaptation.  This setting assumes that besides the labelled source and unlabelled target domain data, there are a few labelled samples available in the target domain. Exploiting the few target labels allows better domain alignment compared to purely unsupervised approaches. Representative approaches are based on regularization [9], subspace learning [54], label smoothing [10] and entropy minimisation in the target domain [17]. The state of the art method in this area, MME, extends the entropy minimisation idea to adversarial entropy minimisation in a deep network setting [41].

Multi-source Domain Adaptation.  This setting assumes there are multiple labelled source domains for training. In deep learning, simply aggregating the data from all source domains often already improves performance, since the larger dataset supports learning a stronger representation. Theoretical results based on \(\mathcal {H}\)-divergence [4] still apply after aggregation, and existing SDA methods that attempt to reduce source-target divergence [5, 16, 43] can be used. Meanwhile, new generalization bounds for MSDA have been derived [38, 55], which motivate algorithms that align amongst source domains as well as between source and target. Nevertheless, practical deep network optimization is non-convex, and the degree of alignment achieved depends on the details of the optimization strategy. Therefore our paradigm of meta-learning the initial condition of optimization is compatible with, and complementary to, all this prior work.

Meta-learning for Neural Networks.  Meta-Learning (learning to learn)  [44, 46] has experienced a recent resurgence. This has largely been driven by its efficacy for few-shot deep learning via initial condition learning [12], optimizer learning [40] and embedding learning [50]. More generally it has been applied to improve optimization efficiency  [1], reinforcement learning  [37], gradient-based hyperparameter optimization [14] and neural architecture search  [25]. We start from the perspective of MAML [12], in terms of focusing on learning initial conditions of neural network optimization. However besides the different application (domain adaptation vs few-shot learning), our resulting algorithm is very different as we end up performing meta-optimization online while solving the target task rather than in advance of solving it [12, 39]. A few recent studies also attempted online meta-learning [13, 53], but these are designed specifically to backprop through RL [53] or few-shot supervised [13] learning. Meta-learning with domain adaptation in the inner loop has not been studied before now.

In terms of learning with multiple domains, a few studies [3, 11, 21] have considered meta-learning for multi-source domain generalization, which evaluates the ability of models to generalise directly without adaptation. In practice these methods use supervised learning in their inner optimization. No meta-learning method has been proposed for the domain adaptation problem addressed here.

3 Methodology

3.1 Background


Unsupervised Domain Adaptation.  Domain adaptation techniques aim to reduce the domain shift between source domain(s) \(\mathcal {D}_S\) and target domain \(\mathcal {D}_T\), so that a model trained on labels from \(\mathcal {D}_S\) performs well when deployed on \(\mathcal {D}_T\). Commonly, such algorithms train a model \(\varTheta \) with a loss \(\mathcal {L}_{\text {uda}}\) that decomposes into a supervised learning term on the source domain, \(\mathcal {L}_{\text {sup}}\), and an adaptation loss, \(\mathcal {L}_{\text {a}}\), that attempts to align the target and source data:

$$\begin{aligned} \begin{aligned} \mathcal {L}_{\text {uda}}(\varTheta , \mathcal {D}_{S}, \overline{\mathcal {D}}_{T}) = \mathcal {L}_{\text {sup}}(\varTheta , \mathcal {D}_S) + \lambda \mathcal {L}_{\text {a}}(\varTheta , \mathcal {D}_{S}, \overline{\mathcal {D}}_{T}). \end{aligned} \end{aligned}$$
(1)

We use the notation \(\mathcal {D}_S\) and \(\overline{\mathcal {D}}_T\) to indicate that the source and target domains contain labelled and unlabelled data respectively. Many existing domain adaptation algorithms [16, 38, 43, 48] fit this template, differing in their definition of the domain alignment loss \(\mathcal {L}_{\text {a}}\). In the case of multi-source adaptation [38], \(\mathcal {D}_{S}\) may contain several source domains \(\mathcal {D}_{S}=\{ {D}_1, \dots , {D}_N\}\), and the first supervised learning term \(\mathcal {L}_{\text {sup}}\) sums the loss over all of them.
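To make the template concrete, the following is a minimal PyTorch-style sketch of Eq. 1. The backbone/classifier split, the `adaptation_loss` callable and all names here are illustrative assumptions, not the implementation used in the paper.

```python
import torch
import torch.nn.functional as F

def uda_loss(backbone, classifier, src_x, src_y, tgt_x, adaptation_loss, lam=1.0):
    """Generic UDA objective of Eq. 1: supervised source loss plus a weighted
    alignment term. `adaptation_loss` is any callable comparing source and
    target features, e.g. an MMD, domain-adversarial or discrepancy term."""
    src_feat = backbone(src_x)
    tgt_feat = backbone(tgt_x)
    l_sup = F.cross_entropy(classifier(src_feat), src_y)   # L_sup on labelled source
    l_a = adaptation_loss(src_feat, tgt_feat)               # L_a aligning source and target
    return l_sup + lam * l_a
```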

Semi-supervised Domain Adaptation.  In the SSDA setting [41], we assume a sparse set of labelled target data \(\mathcal {D}_T\) is provided along with a large set of unlabelled target data \(\overline{\mathcal {D}}_{T}\). The goal is to learn a model that fits both the source and few-shot target labels via \(\mathcal {L}_\text {sup}\), while also aligning the unlabelled target data to the source with an adaptation loss \(\mathcal {L}_a\):

$$\begin{aligned} \begin{aligned} \mathcal {L}_{\text {ssda}}(\varTheta ,\mathcal {D}_{S},\overline{\mathcal {D}}_{T},\mathcal {D}_T)=&\mathcal {L}_{\text {sup}}(\varTheta , \mathcal {D}_{S}) + \mathcal {L}_{\text {sup}}(\varTheta , \mathcal {D}_{T}) \\&+\lambda \mathcal {L}_{\text {a}}(\varTheta , \mathcal {D}_{S}, \overline{\mathcal {D}}_{T}) \end{aligned} \end{aligned}$$
(2)

Several existing algorithms [16, 28, 41] fit this template and hence can potentially be optimized by our framework.
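Under the same illustrative conventions as the previous sketch (again an assumption, not the paper's code), Eq. 2 only adds a few-shot supervised target term:

```python
import torch.nn.functional as F

def ssda_loss(backbone, classifier, src_x, src_y, tgt_x, tgt_y, tgt_u_x,
              adaptation_loss, lam=1.0):
    """SSDA objective of Eq. 2: source supervision, few-shot target supervision,
    and the same alignment term as in the UDA sketch above."""
    src_feat = backbone(src_x)
    l_src = F.cross_entropy(classifier(src_feat), src_y)          # L_sup on source
    l_tgt = F.cross_entropy(classifier(backbone(tgt_x)), tgt_y)   # L_sup on few-shot target
    l_a = adaptation_loss(src_feat, backbone(tgt_u_x))            # L_a on unlabelled target
    return l_src + l_tgt + lam * l_a
```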

Meta Learning Model Initialisation.  The problem of meta-learning the initial condition of an optimization can be seen as a bi-level optimization problem  [14, 39]. In this view there is a standard task-specific (inner) algorithm of interest whose initial condition we wish to optimize, and an outer-level meta-algorithm that optimizes that initial condition. This setup can be described as

$$\begin{aligned} \begin{aligned} \varTheta&=\underbrace{\underset{\varTheta }{{\text {argmin}} }~~ \mathcal {L}_{\text {outer}}( \overbrace{\mathcal {L}_{\text {inner}}(\varTheta , \mathcal {D}_{\text {tr}})}^{\text {Inner-level}}, \mathcal {D}_{\text {val}})}_{\text {Outer-level}} \end{aligned} \end{aligned}$$
(3)

where \(\mathcal {L}_{\text {inner}}(\varTheta ,\mathcal {D}_{\text {tr}})\) denotes the standard loss of the base task-specific algorithm on its training set, and \(\mathcal {L}_{\text {outer}}(\varTheta ^*,\mathcal {D}_\text {val})\) denotes the validation-set loss after \(\mathcal {L}_{\text {inner}}\) has been optimized (\(\varTheta ^*={\text {argmin}}\,\mathcal {L}_\text {inner}\)), starting from the initial condition set by the outer optimization. The overall goal in Eq. 3 is thus to set the initial condition of the base algorithm \(\mathcal {L}_{\text {inner}}\) such that it achieves minimum loss on the validation set. When both losses are differentiable, we can in principle solve Eq. 3 by taking gradient steps on \(\mathcal {L}_\text {outer}\), as shown in Algorithm 1. However, such exact meta-learning requires backpropagating through the path of the inner optimization, which is costly and inaccurate for a long computation graph.
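For intuition, a minimal sketch of what the exact UpdateIC of Algorithm 1 computes, written for a single flat parameter tensor and user-supplied differentiable loss callables (all names are ours, and this is only a sketch of the exact scheme, which as noted above quickly becomes intractable for long inner loops):

```python
import torch

def update_ic_exact(theta0, inner_loss, outer_loss, J=3, inner_lr=1e-2, meta_lr=1e-3):
    """Exact meta-gradient update of an initial condition (cf. Eq. 3 / Algorithm 1).

    `theta0` is a flat parameter tensor with requires_grad=True; `inner_loss(theta)`
    plays the role of L_inner on D_tr and `outer_loss(theta)` of L_outer on D_val.
    """
    theta = theta0
    for _ in range(J):
        # keep the graph so the outer gradient can flow back through every inner step
        g = torch.autograd.grad(inner_loss(theta), theta, create_graph=True)[0]
        theta = theta - inner_lr * g
    meta_grad = torch.autograd.grad(outer_loss(theta), theta0)[0]  # higher-order meta-gradient
    return (theta0 - meta_lr * meta_grad).detach().requires_grad_()
```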


3.2 Meta-learning for Domain Adaptation

Overview.  For meta domain adaptation, we would like to instantiate the initial condition learning idea summarised in Eq. 3 in order to initialize popular domain adaptation algorithms such as [16, 41, 43], which can be represented as problems of the form of Eqs. 1 and 2, so as to maximise the resulting performance on the target domain upon deployment. To this end, in the following we introduce appropriate definitions of the inner and outer tasks, as well as a tractable optimization strategy.

Multi-source Domain Adaptation.  Suppose we have an adequate algorithm to optimize for initial conditions as required in Eq. 3. How could we apply this idea to the multi-source unsupervised domain adaptation setting, given that there is no target domain training data to take the role of \(\mathcal {D}_{\text {val}}\) in providing the metric for outer-loop optimization of the initial condition? Our idea is that in the multi-source setting we can split the available source domains into disjoint meta-training and meta-testing domains \(\mathcal {D}_S=\mathcal {D}_S^{\text {mtr}}\cup \mathcal {D}_S^{\text {mte}}\), for both of which we actually have labels. Now we can let \(\mathcal {L}_\text {inner}\) be an unsupervised domain adaptation method [16, 43], \(\mathcal {L}_\text {inner}:=\mathcal {L}_\text {uda}\), and ask it to adapt from the meta-train domains to the unlabelled meta-test domain. In the outer loop, we can then use the labels of the meta-test domain as a validation set to evaluate the adaptation performance via a supervised loss \(\mathcal {L}_\text {outer}:=\mathcal {L}_\text {sup}\), such as cross-entropy. Thus we aim to find an initial condition for our base domain adaptation method \(\mathcal {L}_\text {uda}\) that enables it to adapt effectively between source domains

$$\begin{aligned} \varTheta _0 = \underset{\varTheta _0}{{\text {argmin}}} \sum _{\mathcal {D}_S^{\text {mtr}},\mathcal {D}_S^{\text {mte}}\sim \mathcal {D}_S}\mathcal {L}_\text {sup}(\mathcal {L}_\text {uda}(\mathcal {D}_S^{\text {mtr}},\overline{\mathcal {D}}_S^{\text {mte}};\varTheta _0),\mathcal {D}_S^{\text {mte}}) \end{aligned}$$
(4)

where we use \(\mathcal {L}(\cdot ;\varTheta _0)\) to denote optimizing a loss from starting point \(\varTheta _0\). This initial condition could be optimized by taking gradient descent steps on the outer supervised loss using \({\text {UpdateIC}}\) from Algorithm 1. The resulting \(\varTheta _0\) is suited to adapting between all source domains, and hence should also be good for adapting to the target domain. Thus we would finally instantiate the same UDA algorithm with the learned initial condition, but this time between the full set of source domains and the true unlabelled target domain \(\overline{\mathcal {D}}_T\):

$$\begin{aligned} \varTheta = \underset{\varTheta }{{\text {argmin}}}~~ \mathcal {L}_\text {uda}(\mathcal {D}_{S},\overline{\mathcal {D}}_{T};\varTheta _0) \end{aligned}$$
(5)

An Online Solution.  While conceptually simple, the problem with the direct approach above is that it requires completing domain-adaptive training multiple times in the inner optimization. Instead, we propose to perform online meta-learning [53, 56] by alternating between steps of meta-optimization of Eq. 4 and steps on the final unsupervised domain adaptation problem of Eq. 5. That is, we iterate

$$\begin{aligned} \begin{aligned} \varTheta&= {\text {UpdateIC}} (\varTheta , (\mathcal {D}_S^{\text {mtr}})\cup (\overline{\mathcal {D}}_S^{\text {mte}}), (\mathcal {D}_S^{\text {mte}}), \mathcal {L}_{\text {uda}}, \mathcal {L}_{\text {sup}} )\\ \varTheta&= \varTheta - \alpha \nabla _{\varTheta }\mathcal {L}_{\text {uda}}(\varTheta , (\mathcal {D}_S), (\overline{\mathcal {D}}_T)) \end{aligned} \end{aligned}$$
(6)

where \((\mathcal {D})\) denotes minibatch sampling from the corresponding dataset, and we call \({\text {UpdateIC}}(\cdot )\) with a small number of inner-loop optimization steps, such as \(J=1\).

Our method, summarised in Fig. 1 and Algorithm 3, performs meta-learning online by simultaneously solving the meta-objective and the target task. It amounts to tuning the initial condition between optimization steps on the target DA task. This avoids the intractability and instability of backpropagating through the long computational graph of the exact approach, which meta-optimizes \(\varTheta _0\) to completion before doing DA. Online meta-learning is also potentially advantageous in practice because it improves optimization throughout training rather than only at the start – cf. the vanilla exact method, where the impact of the initial condition on the final outcome is very indirect.
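A schematic of the online alternation in Eq. 6 is sketched below. The helpers `update_ic` (the shortest-path UpdateIC of Sect. 3.3) and `uda_step` are hypothetical, and drawing the meta-test domain uniformly at random is one simple way to sample the meta-train/meta-test split, not necessarily the paper's exact sampling scheme.

```python
import random

def meta_msda(theta, sources, target_u, update_ic, uda_step, iters=10_000):
    """Online Meta-MSDA (cf. Eq. 6 / Algorithm 3): alternate a meta-update of the
    current parameters with an ordinary UDA update from all sources to the target."""
    for _ in range(iters):
        # meta-update: hold one source out as meta-test, adapt from the rest to its
        # unlabelled images, and score the adaptation with its labels (Eq. 4)
        mte = random.choice(sources)
        mtr = [d for d in sources if d is not mte]
        theta = update_ic(theta, mtr, mte)
        # ordinary DA update on the real task: all labelled sources -> unlabelled target
        theta = uda_step(theta, sources, target_u)
    return theta
```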

Semi-supervised Domain Adaptation.  We next consider how to adapt the ideas introduced previously to the semi-supervised domain adaptation setting. In the MSDA setting above, we divided the source domains into meta-train and meta-test, used unlabelled data from meta-test to drive adaptation, and then used meta-test labels to validate adaptation performance. In SSDA we do not have multiple source domains with which to use such a meta-train/meta-test split strategy, but we do have a small amount of labelled data in the target domain that we can use to validate adaptation performance and drive initial condition optimization. By analogy to Eq. 4, we aim to find the initial condition for the unsupervised component \(\mathcal {L}_\text {uda}\) of an SSDA method, but now we use the few labelled target examples \(\mathcal {D}_T\) to validate the adaptation in the outer loop:

$$\begin{aligned} \varTheta _0 = \underset{\varTheta _0}{{\text {argmin}}} \sum \mathcal {L}_\text {sup}(\mathcal {L}_\text {uda}(\mathcal {D}_{\text {S}},\overline{\mathcal {D}}_{T}; \varTheta _0),\mathcal {D}_{T}) \end{aligned}$$
(7)

The learned initial condition can then be used to instantiate the final semi-supervised domain adaptive training.

$$\begin{aligned} \varTheta =\underset{\varTheta }{{\text {argmin}}}~~ \mathcal {L}_{\text {ssda}} (\varTheta ,\mathcal {D}_{S},\overline{\mathcal {D}}_{T},\mathcal {D}_T;\varTheta _0) \end{aligned}$$
(8)

An Online Solution.  The exact meta-SSDA approach above suffers from the same limitations as exact meta-MSDA. So we again apply online meta-learning, iterating between meta-optimization of Eq. 7 and the final semi-supervised domain adaptation problem of Eq. 8.

$$\begin{aligned} \begin{aligned} \varTheta&= {\text {UpdateIC}} (\varTheta , (\mathcal {D}_S)\cup (\overline{\mathcal {D}}_T), (\mathcal {D}_T), \mathcal {L}_{\text {uda}}, \mathcal {L}_{\text {sup}} )\\ \varTheta&= \varTheta -\alpha \nabla _\varTheta \mathcal {L}_{\text {ssda}}(\varTheta ,\mathcal {D}_{S},\overline{\mathcal {D}}_{T},\mathcal {D}_T) \end{aligned} \end{aligned}$$
(9)

The final procedure is summarized in Algorithm 4.
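The SSDA variant follows the same alternation as the MSDA sketch above, with the few labelled target examples taking the role of the meta-test labels; `update_ic` and `ssda_step` are again hypothetical helpers.

```python
def meta_ssda(theta, source, target_u, target_l, update_ic, ssda_step, iters=10_000):
    """Online Meta-SSDA (cf. Eq. 9 / Algorithm 4): the few labelled target examples
    validate the inner UDA adaptation, then a full SSDA step is taken."""
    for _ in range(iters):
        theta = update_ic(theta, source, target_u, target_l)   # meta-update (Eq. 7)
        theta = ssda_step(theta, source, target_u, target_l)   # SSDA update (Eq. 2)
    return theta
```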

3.3 Shortest Path Optimization


Meta-learning Model Initialisation.  As described so far, our meta-learning approach to domain adaptation relies on the ability to meta-optimize initial conditions using gradient descent steps, as described in Algorithm 1. Such steps evaluate a meta-gradient that depends on the parameters \(\varTheta ^*\) output by the base domain adaptation algorithm:

$$\begin{aligned} \begin{aligned} \varTheta _0 = \varTheta _0 - \alpha \overbrace{\nabla _{\varTheta }\mathcal {L}_{\text {sup}}(\varTheta _{}^* , \mathcal {D}_{\text {val}})}^{\text {Meta Gradient}} \end{aligned} \end{aligned}$$
(10)

Evaluating the meta-gradient directly is impractical because: (i) the inner loop that runs the base domain adaptation algorithm may take multiple gradient descent iterations \(j=1\dots J\), which triggers a long chain of higher-order gradients \(\nabla _{\varTheta _{0}}\mathcal {L}_{\text {inner}}(\cdot ), \dots ,\nabla _{\varTheta _{J-1}}\mathcal {L}_{\text {inner}}(\cdot )\); (ii) more fundamentally, several state-of-the-art domain adaptation algorithms [41, 43] use multiple optimization steps when making updates on \(\mathcal {L}_\text {inner}\), for example to adversarially train the deep feature extractor and classifier modules of the model \(\varTheta \). Taking gradient steps on \(\mathcal {L}_\text {outer}(\varTheta ^*)\) thus triggers higher-order gradients even if one takes only a single step \(J=1\) of domain adaptation optimization.

Shortest Path Optimization.  To obtain the meta-gradient in Eq. 10 efficiently, we use the shortest-path gradient (SPG) [36]. Before optimising the inner loop, we copy the parameters \(\varTheta _0\) as \(\tilde{\varTheta }_{0}\) and use \(\tilde{\varTheta }_{0}\) in the inner-level algorithm. Then, after finishing the inner loop, we compute the shortest-path gradient between \(\tilde{\varTheta }_{J}\) and \(\varTheta _0\):

$$\begin{aligned} \begin{aligned} \nabla _{\varTheta _0}^{\text {short}} = \varTheta _0 - \tilde{\varTheta }_{J} \end{aligned} \end{aligned}$$
(11)

Each meta-gradient step (Eq. 10) is then approximated as

$$\begin{aligned} \begin{aligned} \varTheta _0&= \varTheta _0 - \alpha \nabla _{\varTheta _0}\mathcal {L}_{\text {sup}}(\varTheta _0 - \nabla _{\varTheta _0}^{\text {short}}, \mathcal {D}_{\text {val}}) \\ \end{aligned} \end{aligned}$$
(12)

Summary.  We now have an efficient implementation of \({\text {UpdateIC}}\) for updating initial conditions as summarised in Algorithm 2. This shortest path approximation has the advantage of allowing efficient initial condition updates both for multiple iterations of inner loop optimization \(J>1\), as well as for inner loop domain adaptation algorithms that use multiple steps [41, 43]. We use this implementation for the MSDA and SSDA methods in Algorithms 3 and 4.
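The sketch below is our reading of this shortest-path UpdateIC (Eqs. 10–12), again written for a flat parameter tensor and user-supplied callables; it is a minimal illustration, not the authors' released implementation.

```python
import torch

def update_ic_spg(theta, da_step, val_loss, J=1, meta_lr=1e-3):
    """Shortest-path UpdateIC (Eqs. 10-12), for a flat parameter tensor.

    `da_step(theta)` runs one (possibly multi-part) step of the base DA algorithm
    and returns new parameters; `val_loss(theta)` is the supervised meta-test loss.
    """
    theta = theta.detach()
    theta_tilde = theta.clone()
    for _ in range(J):                          # inner DA loop on the copy \tilde{Theta}
        theta_tilde = da_step(theta_tilde).detach()
    spg = theta - theta_tilde                   # shortest-path gradient (Eq. 11)
    theta = theta.requires_grad_()
    meta_loss = val_loss(theta - spg)           # validation loss at Theta - spg (Eq. 12)
    meta_grad = torch.autograd.grad(meta_loss, theta)[0]
    return (theta - meta_lr * meta_grad).detach()
```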


4 Experiments

Datasets.  We evaluate our method on several multi-source domain adaptation benchmarks including PACS [22], Office-Home  [49] and DomainNet [38]; as well as on the semi-supervised setting of Office-Home and DomainNet.

Base Domain Adaptation Algorithms and Ablation.  Our Meta-DA framework is designed to complement existing base domain adaptation algorithms. We evaluate it in conjunction with Domain Adversarial Neural Networks (DANN, [16]) – as a representative classic approach to deep domain adaptation; as well as Maximum Classifier Discrepancy (MCD, [43]) and MinMax Entropy (MME, [41]) – as examples of state of the art multi-source and semi-supervised domain adaptation algorithms respectively. Our goal is to evaluate whether our Meta-DA framework can improve these base learners. We note that the MCD algorithm has two variants: (1) A multi-step variant that alternates between updating the classifiers and several steps of updating the feature extractor and (2) A one-step variant that uses a gradient reversal [16] layer so that classifier and feature extractor can be updated in a single gradient step. We evaluate both of these. Sequential Meta-Learning: As an ablation, we also consider an alternative fast meta-learning approach that performs all meta-updates at the start of learning, before doing DA; rather than performing meta-updates online with DA as in our proposed Meta-DA algorithms.
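For readers unfamiliar with the gradient-reversal trick of [16] used by the one-step MCD variant, a standard sketch of the layer is given below; this is the usual textbook construction (with `lam` as the reversal weight), not code taken from the paper.

```python
import torch

class GradReverse(torch.autograd.Function):
    """Identity in the forward pass; multiplies the gradient by -lam on the way back,
    so the feature extractor and classifier can be updated in a single gradient step."""
    @staticmethod
    def forward(ctx, x, lam):
        ctx.lam = lam
        return x.view_as(x)

    @staticmethod
    def backward(ctx, grad_output):
        return -ctx.lam * grad_output, None

def grad_reverse(x, lam=1.0):
    return GradReverse.apply(x, lam)
```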

Table 1. Multi-source DA results on PACS. Bold: Best. Red: Second Best.
Table 2. Multi-source domain adaptation on office-home.
Table 3. Multi-source domain adaptation on DomainNet dataset.

4.1 Multi-source Domain Adaptation

PACS: Dataset. PACS [22] was initially proposed for domain generalization and has subsequently been re-purposed [7, 34] for multi-source domain adaptation. This dataset has four diverse domains: (A)rt painting, (C)artoon, (P)hoto and (S)ketch, with seven object categories ‘dog’, ‘elephant’, ‘giraffe’, ‘guitar’, ‘house’, ‘horse’ and ‘person’, and 9,991 images in total. Setting. We follow the setting in [7] and perform leave-one-domain-out evaluation, setting each domain as the adaptation target in turn. As per [7], we use an ImageNet pre-trained ResNet-18 as our feature extractor for fair comparison. We train with M-SGD (batch size = 32, learning rate = \(2\times 10^{-3}\), momentum = 0.9, weight decay = \(10^{-4}\)). All models are trained for 5k iterations before testing. Results. From the results in Table 1, we can see that: (i) Several recent methods with published results on PACS achieve similar performance, with JiGen [7] performing best. We additionally evaluate DANN and MCD, including the one-step MCD (os) and multi-step MCD (n = 4) variants, and find that one-step MCD performs similarly to JiGen. (ii) Applying our Meta-DA framework to DANN and both MCD variants boosts all three base domain adaptation methods, by \(1.96\%\), \(2.5\%\) and \(1.2\%\) respectively. (iii) In particular, our Meta-MCD surpasses the previous state-of-the-art performance set by JiGen. Together these results show the broad applicability and absolute efficacy of our method. Based on these results, we focus on the better-performing single-step MCD in the following evaluations.

Table 4. Semi-supervised DA on DomainNet.
Fig. 2. Vanilla DA vs. sequential and online meta-learning (T: MSDA, B: SSDA).

Office-Home:  Dataset and Settings. Office-Home was initially proposed for single-source domain adaptation, containing \(\approx 15,500\) images from four domains ‘artistic’, ‘clip art’, ‘product’ and ‘real-world’, with 65 different categories. We follow the setting in [8] and use an ImageNet pre-trained ResNet-50 as our backbone. We train all models with M-SGD (batch size = 32, learning rate = \(10^{-3}\), momentum = 0.9 and weight decay = \(10^{-4}\)) for 3k iterations. Results. From Table 2, we see that MCD achieves the best performance among the baselines. Applying our meta-learning framework improves both baselines by a small amount, and Meta-MCD achieves state-of-the-art performance on this benchmark.

Table 5. Semi-supervised domain adaptation: Office-home.
Fig. 3. Performance across weight-space slices defined by a common initial condition \(\varTheta _0\) and the MCD and Meta-MCD solutions (\(\varTheta _\text {MCD}\) and \(\varTheta _\text {Meta-MCD}\) respectively). MSDA PACS benchmark with Sketch as the target.

Table 6. Comparison between DG and DA methods on PACS.

DomainNet:  Dataset. DomainNet is a recently introduced benchmark [38] for multi-source domain adaptation in object recognition. It is the largest-scale DA benchmark so far, with \(\approx \)0.6M images across six domains and 345 categories.

Settings.  We follow the official train/test split protocol [38] (Footnote 2). Various feature extraction backbones were used in the original paper [38], making it hard to compare results. We use ImageNet pre-trained ResNet-18 and ResNet-34 for our own implementations to facilitate direct comparison. We use M-SGD to train all competitors (batch size = 32, learning rate = 0.001, momentum = 0.9, weight decay = 0.0001) for 10k iterations (Footnote 3). We re-train each model three times to generate standard deviations. Results. From the results in Table 3, we can see that: (i) The top group of results from [38] shows that the dataset is a much more challenging domain adaptation benchmark than previous ones. Most existing domain adaptation methods (typically tuned on small-scale benchmarks) fail to improve over the source-only baseline according to the results in [38]. (ii) The middle group of ResNet-18 results shows that our MCD experiment achieves comparable results to those in [38]. (iii) Our Meta-MCD and Meta-DANN methods provide a small but consistent improvement over the corresponding MCD and DANN baselines for both ResNet-18 and ResNet-34 backbones. While the improvement margins are relatively small, this is a significant outcome: the base DA methods already struggle to improve substantially over the source-only baseline when using ResNet-18/34, and the multi-run standard deviation is small compared to the margins. (iv) Overall, our Meta-MCD achieves state-of-the-art performance on the benchmark by a small margin.

4.2 Semi-supervised Domain Adaptation

Office-Home: Setting. We follow the setting in [41]. We focus on 3-shot learning in the target domain (only three annotated examples per category) and on the five most difficult source-target domain pairs. We use ImageNet pre-trained AlexNet and ResNet-34 as backbone models. We train all models with M-SGD, with batch size 24 for the labelled source and target domains and 48 for the unlabelled target as in [41]; the learning rate is \(10^{-2}\) for the fully-connected layers and \(10^{-3}\) for the remaining trainable layers. We also use horizontal-flipping and random-cropping data augmentation for the training images. Results. From the results in Table 4, we can see that our Meta-MME does not impact performance with AlexNet. However, for a modern ResNet-34 architecture, Meta-MME provides a visible \(\sim \)0.8% accuracy gain over the MME baseline, which results in state-of-the-art SSDA performance on this benchmark.

DomainNet:  Settings. We evaluate DomainNet for 1-1 few-shot domain adaptation as in [41]. We evaluate both the AlexNet and the modern ResNet-34 backbones, and apply our meta-learning method to MME. As per [41], we train our models using M-SGD, where the initial learning rate is 0.01 for the fully-connected layers and 0.001 for the remaining trainable layers. During training we use the annealing strategy of [16] to decay the learning rate, and use the same batch size as [41].

Results.  From the results in Table 5, we can see that our Meta-MME improves on the accuracy of the base MME algorithm for all pairwise transfer choices and for both backbones. These results show the consistent effectiveness of our method, and improve the state of the art for DomainNet SSDA.

4.3 Further Analysis

Discussion.  Our final online algorithm can be understood as performing DA with periodic meta-updates that adjust parameters to optimize the impact of the following DA steps. From the perspective of any given DA step, the role of the preceding meta-update is to tune its initial condition.

Non-meta vs Sequential vs Online Meta.  This work is the first to propose meta-learning to improve domain adaptation, and in particular to contribute an efficient and effective online meta-learning algorithm for initial condition training. Exact meta-learning is intractable to compare against. However, in this section we compare our online meta-update with the alternative sequential approximation, and with non-meta alternatives, for both MSDA and SSDA, using A, C, P \(\rightarrow \) S and R \(\rightarrow \) C as examples. For a fair comparison, we give the sequential and online meta-learning methods the same number of meta-updates (\({\text {UpdateIC}}\)) and base DA updates. Figure 2 shows that: (1) The sequential meta-learning method already improves performance on the target domain compared to vanilla domain adaptation, which confirms the potential for improvement by refining the model initialization. (2) The sequential strategy has a slight advantage early in DA training, which makes sense as all its meta-updates occur in advance. But overall our online method, which interleaves meta-updates and DA updates, leads to higher test accuracy.

Computational Cost.  Our Meta-DA imposes only a small computational overhead over the base DA algorithm. For example, comparing Meta-MCD and MCD on ResNet-34 DomainNet, the time per iteration is 2.96 s vs. 2.49 s respectively (Footnote 4).

Weight-Space Illustration.  To investigate our method's mechanism, we train MCD and Meta-MCD from a common initial condition on MSDA PACS with ‘Sketch’ as the target domain. We use the initial condition \(\varTheta _0\) and the two different solutions (\(\varTheta _\text {Meta-MCD}\) and \(\varTheta _\text {MCD}\)) to define a plane in weight-space, and colour it according to the performance at each point. We can see from Fig. 3(a) that Meta-MCD finds a solution with greater test accuracy. Figures 3(b) and (c) break down the training loss components. We can see that, in this slice, both methods managed to minimize MCD's adaptation (classifier discrepancy) loss \(\mathcal {L}_a\) adequately, but MCD failed to minimize the supervised loss as well as Meta-MCD did (\(\varTheta _\text {Meta-MCD}\) is closer to the minimum than \(\varTheta _\text {MCD}\)). Note that both methods were trained to convergence in generating these solutions. This suggests that Meta-MCD's meta-optimization step, using meta-train/meta-test splits, materially benefits the optimization dynamics of the downstream MSDA task.
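One way such a slice can be produced is sketched below; the `evaluate` callable, the flat weight vectors and the grid settings are assumptions for illustration, and the paper's exact plotting procedure may differ (e.g. it may orthonormalize the basis vectors).

```python
import numpy as np

def weight_plane(theta0, theta_a, theta_b, evaluate, grid=25, margin=0.25):
    """Score a model over the 2-D plane spanned by three flat weight vectors
    (e.g. Theta_0, Theta_MCD and Theta_Meta-MCD). `evaluate(theta)` should load
    the weights into the network and return a scalar such as test accuracy or
    one of the training loss terms."""
    u, v = theta_a - theta0, theta_b - theta0
    coords = np.linspace(-margin, 1.0 + margin, grid)
    scores = np.zeros((grid, grid))
    for i, a in enumerate(coords):
        for j, b in enumerate(coords):
            scores[i, j] = evaluate(theta0 + a * u + b * v)
    return coords, scores
```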

Model Agnostic.  We emphasize that, although we focused on DANN, MCD and MME, our MetaDA framework can apply to any base DA algorithm. Supplementary C shows some results for JiGen and M\(^3\)SDA algorithms.

Comparison Between DA and DG Methods.  As a problem closely related to domain adaptation, domain generalization (DG) assumes no access to target domain data during training. DA and DG methods are rarely directly compared. Here we compare our Meta-MCD and MCD with state-of-the-art DG methods on PACS, as shown in Table 6. From the results, we can see that DA methods generally outperform DG methods by a noticeable margin, which is expected as DA methods ‘see’ the target domain data during training.

5 Conclusion

We proposed a meta-learning pipeline to improve domain adaptation by initial condition optimization. Our online shortest-path solution is efficient and effective, and provides a consistent boost to several domain adaptation algorithms, improving the state of the art in both multi-source and semi-supervised settings. Our approach is agnostic to the base adaptation method, and can potentially be used to improve many DA algorithms that fit a very general template. In future work we aim to meta-learn other DA hyper-parameters beyond initial conditions.