
1 Introduction

Recent years have seen incredible progress in Visual Dialog [1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16,17,18,19,20,21,22], spurred in part by the initial efforts of Das et al. [2] in developing a concrete task definition – given an image, a dialog history consisting of a sequence of question-answer pairs, and a follow-up question about the image, predict a free-form natural language answer to the question – along with a large-scale dataset and evaluation metrics. The state-of-the-art on the task has improved by more than \(20\%\) absolute (\({\sim }54\% \rightarrow {\sim }74\%\) NDCG), and the original task has since been extended to challenging domains, e.g. video understanding [23] and navigation assistants [24,25,26].

Fig. 1. First, the language stream of our model is pretrained on English Wikipedia and the BooksCorpus [27] datasets with the masked language modeling (MLM) and next sentence prediction (NSP) losses. Next, the entire model is trained on the Conceptual Captions [28] and VQA [29] datasets with the masked image region (MIR), MLM and NSP losses. Finally, we finetune the model on sparse annotations from VisDial [2] with the MIR, MLM and NSP losses, and optionally finetune on dense annotations.

While this is promising, much of this progress has happened in isolation, wherein sophisticated neural architectures are trained and benchmarked solely on the VisDial dataset. This is limiting – since there is a significant amount of shared abstraction and visual grounding in related tasks in vision and language (e.g. captioning, visual question answering) that can benefit Visual Dialog – and wasteful – since it is expensive and dissatisfying to have to collect a large-scale dataset for every new task. In this work, we explore an approach to pretrain our model on other related vision and language datasets and then transfer to Visual Dialog (Fig. 1).

Our work is inspired by prior work in transfer learning in computer vision and natural language understanding, where large models [30,31,32,33,34,35,36,37,38,39,40] are pretrained on large datasets [27, 41, 42] with simple yet effective self-supervised objectives to learn powerful representations that are then transferred to downstream tasks, leading to state-of-the-art results on a variety of benchmarks [41, 43]. Recent work has extended this to vision and language tasks [44,45,46,47,48,49,50], leading to compelling results in Visual Question Answering [29], Commonsense Reasoning [51], Natural Language Visual Reasoning [52], Entailment [53], Image-Text Retrieval [54, 55], Referring Expressions [56], and Vision-Language Navigation [57].

In this work, we adapt ViLBERT [44] to Visual Dialog. ViLBERT uses two Transformer-based [34] encoders, one for each of the two modalities – language and vision – and interaction between the two modalities is enabled by co-attention layers, i.e. attention over inputs from one modality conditioned on inputs from the other. Note that adapting ViLBERT to Visual Dialog is not trivial. The Visual Dialog dataset has image-grounded conversation sequences that are up to 10 rounds long. These are significantly longer than captions (which are \(\le \) \(2\) sentences) from the Conceptual Captions dataset [28] or question-answer pairs from VQA [29] used to pretrain ViLBERT, and thus require a different input representation and careful reconsideration of the masked language modeling and next sentence prediction objectives used to train BERT [35] and ViLBERT [44].

This adapted model outperforms prior published work by \(> 1\%\) absolute and achieves state-of-the-art on Visual Dialog. Next, we carefully analyse our model and find that additional finetuning on ‘dense’ annotations, i.e. relevance scores for all 100 answer options corresponding to each question on a subset of the training set, highlights an interesting trade-off – the model gets to \({\sim }74.5\%\) NDCG (outperforming the 2019 VisDial Challenge winner), but an MRR of \({\sim }52\%\) (\({\sim }17\%\) below our base model!). We find this happens because dense annotations in VisDial do not correlate well with the ground-truth answers to questions, often rewarding the model for generic, uncertain responses.

Concretely, our contributions are as follows:

  • We introduce an adaptation of the ViLBERT [44] model for Visual Dialog, thus making use of the large-scale Conceptual Captions [28] and Visual Question Answering (VQA) [29] datasets for pretraining and learning powerful visually-grounded representations before finetuning on VisDial [2]. Since captioning and VQA differ significantly from Visual Dialog in input size (\(\le \) \(2\) sentence descriptions vs. \(\le \) \(10\) question-answer rounds), this requires rethinking the input representation to learn additional segment embeddings representing question-answer pairs. Our adapted model improves over prior published work by \(1\%\) and sets a new state-of-the-art.

  • We next finetune our model on dense annotations, i.e. relevance scores for all 100 answer options corresponding to each question on a subset of the training set, leading to even higher NDCG – more than \(10\%\) over our base model – but hurting MRR (more than \(17\%\) below our base model!). This highlights a stark trade-off between the two primary metrics for this task – NDCG and MRR. Through qualitative and quantitative results, we show that this happens because dense annotations do not correlate well with the original ground-truth answers, often rewarding the model for generic, uncertain responses.

  • Our PyTorch [58] code is publicly available to encourage further work in large-scale transfer learning for VisDial.

2 Related Work

Our work is related to prior work in visual dialog [1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16,17,18,19,20,21,22], and self-supervised pretraining and transfer learning in computer vision and language [30,31,32,33,34,35,36,37,38,39,40].

Visual Dialog. Das et al.  [2] and de Vries et al.  [1] introduced the task of Visual Dialog – given an image, dialog history consisting of a sequence of question-answer pairs, and a follow-up question, predict a free-form natural language answer to the question – along with a dataset, evaluation metrics, and baseline models. Follow-up works on visual dialog have explored the use of deep reinforcement learning [3, 4, 17], knowledge transfer from discriminative to generative decoders [5], conditional variational autoencoders [6], generative adversarial networks [7], attention mechanisms for visual coreference resolution [9, 11], and modeling the questioner’s theory of mind [10]. Crucially, all of these works train and evaluate on the VisDial dataset in isolation, without leveraging related visual grounding signals from other large-scale datasets in vision and language. We devise a unified model that can be pretrained on the Conceptual Captions [28] and VQA [29] datasets, and then transferred and finetuned on VisDial.

Self-supervised Learning in Vision and Language. Building on the success of transfer learning in natural language understanding [33,34,35,36,37,38,39,40], which has led to state-of-the-art results on a broad set of benchmarks [41, 43], recent work has extended this to vision and language tasks [44,45,46,47,48,49,50]. These works pretrain single-stream [45, 48, 49] or two-stream [44, 46] Transformer [34]-based models with self-supervised objectives, such as next-sentence prediction and masked language/image modeling, on large-scale image-text datasets, and have led to compelling results in Visual Question Answering [29], Commonsense Reasoning [51], Natural Language Visual Reasoning [52], Entailment [53], Image-Text Retrieval [54, 55], Referring Expressions [56], and Vision-Language Navigation [57].

3 Adapting ViLBERT [44] for Visual Dialog

Lu et al. [44] introduced ViLBERT, which extended BERT [35] to a two-stream multi-modal architecture for jointly modeling visual and linguistic inputs. Interaction between the two modalities was enabled through co-attention layers, i.e. attending to one modality conditioned on the other – attention over language conditioned on visual input, and attention over image regions conditioned on linguistic input. This was operationalized as swapping the key and value matrices between the visual and linguistic Transformer [34] blocks. We next discuss our changes to adapt it for Visual Dialog, followed by our training pipeline.
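To make this key/value swap concrete, a minimal PyTorch sketch of a single co-attention step is shown below. It assumes a shared hidden size for both streams and standard multi-head attention modules; the actual ViLBERT blocks additionally contain feed-forward sub-layers, residual connections, and different hidden sizes per stream, so this is an illustration rather than the authors' implementation.

```python
import torch
import torch.nn as nn

class CoAttentionBlock(nn.Module):
    """Each stream attends over the other stream by borrowing its keys and
    values, mirroring the key/value swap described above."""

    def __init__(self, dim: int = 768, num_heads: int = 8):
        super().__init__()
        self.lang_attends_vision = nn.MultiheadAttention(dim, num_heads, batch_first=True)
        self.vision_attends_lang = nn.MultiheadAttention(dim, num_heads, batch_first=True)

    def forward(self, lang, vis):
        # Language queries attend over visual keys/values, and vice versa.
        lang_out, _ = self.lang_attends_vision(query=lang, key=vis, value=vis)
        vis_out, _ = self.vision_attends_lang(query=vis, key=lang, value=lang)
        return lang_out, vis_out

# Toy usage: batch of 2, 20 text tokens and 37 image regions, hidden size 768.
lang_out, vis_out = CoAttentionBlock()(torch.randn(2, 20, 768), torch.randn(2, 37, 768))
```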

Input Representation. Recall that the model gets an image \(I\), dialog history (including the image caption \(C\)) \(H = (C, (Q_1, A_1), ..., (Q_{t-1}, A_{t-1}))\), a follow-up question \(Q_t\), and a list of 100 answer options \(A_t = \{A_t^{(1)}, A_t^{(2)}, ..., A_t^{(100)}\}\) as input, and is asked to return a sorting of \(A_t\). We concatenate the rounds of dialog history and the follow-up question \(Q_t\), with each question and answer separated by a <SEP> token. The overall input to the language stream is represented as:

\( C \;\; \texttt{<SEP>} \;\; Q_1 \;\; \texttt{<SEP>} \;\; A_1 \;\; \texttt{<SEP>} \;\; \cdots \;\; \texttt{<SEP>} \;\; Q_{t-1} \;\; \texttt{<SEP>} \;\; A_{t-1} \;\; \texttt{<SEP>} \;\; Q_t \)   (1)

Similar to Wolf et al. [59], we use different segment embeddings for questions and answers to help the model distinguish between the two and understand question and answer boundaries in the input. Captions and answers share the same segment embeddings. To represent the image, we follow [44, 60] and extract object bounding boxes and their visual features for the top-36 detected objects in the image from a Faster R-CNN [61] (with a ResNet-101 [30] backbone) object detection network pretrained on the Visual Genome dataset [42]. The feature vector for each detected object is computed as mean-pooled convolutional features from the regions of that object. A 5-d feature vector, consisting of the normalized top-left and bottom-right object coordinates and the fraction of image area covered, is projected to the same dimensions as the feature vector for the detected object and added to it, giving us the final visual features \(\{v_1, ..., v_{36}\}\). The beginning of this image region sequence (consisting of object detection features) is demarcated by an <IMG> token with mean-pooled features from the entire image. The overall input to ViLBERT can be written as the following sequence:

\( \texttt{<IMG>} \;\, v_1, \ldots, v_{36} \;\; C \;\; \texttt{<SEP>} \;\; Q_1 \;\; \texttt{<SEP>} \;\; A_1 \;\; \texttt{<SEP>} \;\; \cdots \;\; \texttt{<SEP>} \;\; Q_{t-1} \;\; \texttt{<SEP>} \;\; A_{t-1} \;\; \texttt{<SEP>} \;\; Q_t \)   (2)
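For illustration, the sketch below assembles the flattened language sequence and per-chunk segment ids (questions vs. caption/answers) under simplifying assumptions: the <SEP> string stands in for the tokenizer's separator id, the helper name is ours, the segment assignment of the separators is an assumption, and a real implementation would operate on sub-word token ids rather than raw strings.

```python
from typing import List, Tuple

SEP = "<SEP>"  # stands in for the tokenizer's separator id

def build_language_input(caption: str,
                         history: List[Tuple[str, str]],
                         question: str) -> Tuple[List[str], List[int]]:
    """Flatten the caption, history rounds, and follow-up question into one
    sequence, with segment id 1 for questions and 0 for the caption and
    answers (which share a segment embedding). A candidate answer is appended
    later, when scoring options with the NSP head (Sect. 3.3)."""
    chunks, segments = [caption], [0]
    for q, a in history:
        chunks += [SEP, q, SEP, a]
        segments += [1, 1, 0, 0]  # separators take the following chunk's segment (an assumption)
    chunks += [SEP, question]
    segments += [1, 1]
    return chunks, segments

chunks, segments = build_language_input(
    "a dog sleeping on a couch",
    [("is it a puppy?", "no, an adult dog")],
    "what color is it?",
)
```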

3.1 Pretraining on Conceptual Captions [28]

To pretrain the model, we follow [44] and train on the Conceptual Captions (CC) dataset, a large corpus (\({\sim }3\)M samples) of aligned image-caption pairs. During pretraining, we optimize the sum of the masked language modeling (MLM) loss [35] and the masked image region (MIR) loss. To compute the MLM loss, a subset of tokens in the input sequence is masked and the model is trained to predict these tokens given context; we mask around \(15\%\) of the tokens in the input sequence. For the MIR loss, analogously, we zero out \(15\%\) of the image features and the model learns to predict the semantic category of the masked-out object (out of 1601 classes from Visual Genome [42, 60]).
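A hedged sketch of this masking step is shown below; the sampling is simplified relative to BERT's 80/10/10 replacement scheme, and the tensor shapes and mask-token id in the usage example are assumptions.

```python
import torch

def mask_for_pretraining(token_ids: torch.Tensor, region_feats: torch.Tensor,
                         mask_token_id: int, p_text: float = 0.15, p_img: float = 0.15):
    """Simplified sketch of the MLM/MIR masking: ~15% of text tokens are
    replaced with a mask token and ~15% of region features are zeroed out.
    The MLM loss is then computed at the masked text positions and the MIR
    loss at the masked regions (predicting their Visual Genome class)."""
    text_mask = torch.rand(token_ids.shape) < p_text
    masked_tokens = token_ids.masked_fill(text_mask, mask_token_id)

    region_mask = torch.rand(region_feats.shape[:2]) < p_img
    masked_feats = region_feats * (~region_mask).unsqueeze(-1).float()
    return masked_tokens, text_mask, masked_feats, region_mask

# Toy usage: 2 sequences of 32 token ids, 36 region features of (assumed) dim 2048.
toks, tmask, feats, rmask = mask_for_pretraining(
    torch.randint(0, 30000, (2, 32)), torch.randn(2, 36, 2048), mask_token_id=103)
```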

3.2 Pretraining on VQA [29]

The VQA dataset is closely related to Visual Dialog in that it can be interpreted as independent visually-grounded question-answer pairs with no dialog history, and is thus a natural choice for further pretraining prior to finetuning on VisDial. Similar to Lu et al. [44], we pretrain on VQA by learning a small decoder – a two-layer MLP – on top of the element-wise product between the image and text representations to predict a distribution over 3129 answers.
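A minimal sketch of such a decoder head is shown below; the pooled-representation dimension and MLP hidden size are illustrative assumptions, not necessarily the exact values used in ViLBERT.

```python
import torch
import torch.nn as nn

class VQAAnswerHead(nn.Module):
    """Two-layer MLP over the element-wise product of the pooled image and
    text representations, predicting scores over 3129 candidate answers."""

    def __init__(self, dim: int = 1024, hidden: int = 2048, num_answers: int = 3129):
        super().__init__()
        self.mlp = nn.Sequential(nn.Linear(dim, hidden), nn.ReLU(), nn.Linear(hidden, num_answers))

    def forward(self, img_repr: torch.Tensor, txt_repr: torch.Tensor) -> torch.Tensor:
        return self.mlp(img_repr * txt_repr)  # element-wise product, then MLP

logits = VQAAnswerHead()(torch.randn(4, 1024), torch.randn(4, 1024))  # shape (4, 3129)
```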

3.3 Finetuning on Visual Dialog [2]

To finetune on Visual Dialog, we use the MLM loss along with the next sentence prediction (NSP) and MIR losses. For MLM, we mask \(10\%\) of the tokens in the dialog sequence. For MIR, similar to pretraining, we mask \(15\%\) of the image features. Note that the discriminative task in Visual Dialog is to identify the ground-truth answer from a list of 100 answer options consisting of popular answers, nearest-neighbor answers, and random answers from the dataset. We achieve this through the NSP loss. The NSP head is trained to predict 1 when the ground-truth answer is appended to the input sequence, and 0 when a negative answer sampled from the remaining answer options is appended to it. Each image in VisDial has 10 rounds of dialog, leading to 10 positive and 10 negative samples for the NSP loss per image. Since these are fairly correlated samples, we randomly sub-sample 2 out of these 20 during training. At test time, we use log-probabilities from the NSP head to rank the 100 answer options per round.
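At test time, the ranking step could be sketched as below; the scoring interface (`score_fn`, returning NSP logits for a single concatenated sequence) is an assumption for exposition, not the released API.

```python
import torch

@torch.no_grad()
def rank_options_with_nsp(score_fn, context, answer_options):
    """Sketch of answer ranking with the NSP head: append each candidate to the
    shared (image, history, question) context, take the log-probability that it
    is the true next answer, and sort the options by that score.
    `score_fn(sequence) -> NSP logits of shape (2,)` is an assumed interface."""
    scores = []
    for option in answer_options:
        logits = score_fn(context + [option])
        scores.append(torch.log_softmax(logits, dim=-1)[1].item())  # class 1 = "is the next answer"
    return sorted(range(len(answer_options)), key=lambda i: -scores[i])  # option indices, best first

# Toy usage with a dummy scorer that returns random logits.
ranking = rank_options_with_nsp(lambda seq: torch.randn(2),
                                ["a dog on a couch", "what color is it?"],
                                ["brown", "i cannot tell", "2"])
```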

3.4 Finetuning with Dense Annotations

The authors of [2] recently released dense annotations, i.e. relevance scores for all 100 answer options from \(A_t\) corresponding to the question, on a subset of the training set. These relevance scores range from 0 to 1 and are calculated as the ratio of the number of human annotators who marked a particular answer option as correct to the total number of human annotators (\(=4\)). A score of 1 thus means that the answer option was considered correct by all 4 human annotators. In our final stage of training, we utilize these dense annotations to finetune our model. Concretely, we use the NSP head to predict likelihood scores \(\hat{\ell }_t^{(i)}\) for each answer option \(A_t^{(i)}\) at round \(t\), normalize these to form a probability distribution over the 100 answers \(\hat{y}_t = [ \hat{y}_t^{(1)}, ..., \hat{y}_t^{(100)} ]\), and then compute a cross-entropy (CE) loss against the normalized ground-truth relevance scores \(y_t\), given by \(-\sum _i y_t^{(i)} \log \hat{y}_t^{(i)}\).
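A sketch of this loss for a single round is shown below; normalizing the NSP scores with a softmax is one plausible choice of normalization, and the helper name is ours.

```python
import torch
import torch.nn.functional as F

def dense_annotation_loss(nsp_scores: torch.Tensor, relevance: torch.Tensor) -> torch.Tensor:
    """Cross-entropy between the model's distribution over the 100 options
    (here obtained by a softmax over NSP scores) and the normalized human
    relevance scores, i.e. -sum_i y_i * log(y_hat_i) for one round."""
    log_pred = F.log_softmax(nsp_scores, dim=-1)            # \hat{y}_t
    target = relevance / relevance.sum().clamp(min=1e-8)    # normalized y_t
    return -(target * log_pred).sum()

loss = dense_annotation_loss(torch.randn(100), torch.rand(100))
```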

4 Experiments

To compare to previous research, we conduct experiments on VisDial v1.0 [2]. The dataset contains human-human dialogs on \({\sim }130k\) COCO [62]-like images. We follow the original splits and use \({\sim }120k\) for training, \({\sim }2k\) for validation, and \({\sim }8k\) for testing. We next describe the various settings we experiment with.

Evaluation Metrics. We use metrics introduced in [2]. Specifically, given the predicted ranking of 100 answer options from a model at each round, we compute retrieval metrics – mean rank (MR) of the ground-truth answer, mean reciprocal rank (MRR), and recall@k (\(k=\{1,5,10\}\)). Additionally, along with the release of dense annotations, i.e. relevance scores \(\in [0, 1]\) for all 100 answer options, a new metric – NDCG – was introduced. NDCG accounts for multiple correct answers in the option set and penalizes low-ranked but correct answer options.
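For concreteness, the sketch below computes these metrics for a single round in simplified form (NDCG is computed over the top-\(K\) ranked options, with \(K\) the number of options having nonzero relevance); it is not the official evaluation script.

```python
import numpy as np

def visdial_metrics(ranked_options, gt_index, relevance):
    """Simplified per-round metrics. ranked_options: option indices sorted
    best-first; gt_index: index of the ground-truth answer; relevance:
    per-option relevance scores in [0, 1] from the dense annotations."""
    rank = ranked_options.index(gt_index) + 1                 # 1-indexed rank of the GT answer
    metrics = {"MR": rank, "MRR": 1.0 / rank}
    metrics.update({f"R@{k}": float(rank <= k) for k in (1, 5, 10)})

    rel = np.asarray(relevance, dtype=float)
    k = int((rel > 0).sum())                                  # NDCG over the top-K positions
    discounts = 1.0 / np.log2(np.arange(2, k + 2))
    dcg = float((rel[ranked_options][:k] * discounts).sum())
    ideal = float((np.sort(rel)[::-1][:k] * discounts).sum())
    metrics["NDCG"] = dcg / ideal if ideal > 0 else 0.0
    return metrics

# Toy usage with 5 options: option 2 is the GT answer.
print(visdial_metrics([2, 0, 4, 1, 3], gt_index=2, relevance=[0.0, 0.5, 1.0, 0.0, 0.25]))
```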

4.1 Language-Only

We begin with a ‘blind’ setting, where given the dialog history and follow-up question, and without access to the image, the model is tasked with predicting the answer. We do not use the ViLBERT formulation for these experiments, and finetune the BERT model released in [35] and pretrained on BooksCorpus [27] and English Wikipedia. For the MLM loss, we mask \(15\%\) of tokens and sub-sample 8 out of 20 sequences per mini-batch during training. We experiment with two variants – training only with NSP, and training with both NSP and MLM. See Table 3 for language-only results (marked ‘L-only’). This setting helps us benchmark gains coming from switching to Transformer [34]-based architectures before the added complexity of incorporating visual input.

Varying Number of Dialog Rounds. We train ablations of our language-only model (with NSP and MLM losses) where we vary the number of rounds in dialog history, starting from 0, where the input sequence only contains the follow-up question and answer, to 2, 4, 6, and 10 rounds of dialog history (Table 1).

Zero-Shot and ‘Cheap’ Finetuning. We report performance for ablations of our NSP+MLM model with no/minimal training in Table 2. First, we do a zero-shot test where we initialize BERT with weights from Wikipedia and BooksCorpus pretraining and simply run inference on VisDial. Second, with the same initialization, we freeze all layers and finetune only the MLM and NSP loss heads.

4.2 Finetuning on VisDial

We finetune ViLBERT on VisDial with four different weight initializations – 1) with randomly initialized weights, 2) from the best language-only weights (from Sect. 4.1) for the language stream (visual stream and co-attention layers initialized randomly), 3) from a model pretrained on CC [28] (as described in Sect. 3.1) and 4) from a model pretrained on CC [28] +VQA [29] (as described in Sect. 3.2). 1) helps us benchmark improvements due to pretraining, 2) helps us benchmark performance if the model learns visual grounding solely from VisDial, 3) quantifies effects of learning visual grounding additionally from CC, and 4) helps us quantify improvements with additional exposure to visually-grounded question-answering data. See Table 3 for results.

4.3 Finetuning with Dense Annotations

Finally, we finetune our best model from Sect. 4.2 – marked ‘w/ CC+VQA’ in Table 3 – on dense annotations, as described in Sect. 3.4. Note that computing the CE loss requires a separate forward pass for each of the 100 answer options, since dialog history, question, and answer are all concatenated together before being passed as input. This is memory-expensive, so in practice we sub-sample and only use 80 options, and use gradient accumulation to (artificially) construct a larger mini-batch, as sketched below. Finetuning with only the CE loss leads to significant improvements on NDCG but hurts other metrics (see Table 3). We discuss and analyse this in more detail later. To control for this ‘metric-overfitting’, we also train a variant with both the CE and NSP losses.
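A self-contained sketch of this gradient-accumulation pattern is shown below; the toy model, toy data, and accumulation factor are placeholders, not values from the paper.

```python
import torch
import torch.nn as nn

# Gradients from several small "micro-batches" are accumulated before a single
# optimizer step, emulating a larger batch when each answer option needs its
# own forward pass and GPU memory is the bottleneck.
model = nn.Linear(16, 100)
optimizer = torch.optim.Adam(model.parameters(), lr=2e-5)
accum_steps = 8                                    # illustrative value

optimizer.zero_grad()
for step in range(32):
    x, relevance = torch.randn(1, 16), torch.rand(1, 100)       # placeholder batch
    log_pred = torch.log_softmax(model(x), dim=-1)
    target = relevance / relevance.sum(dim=-1, keepdim=True)
    loss = -(target * log_pred).sum()
    (loss / accum_steps).backward()                # scale so accumulated grads average
    if (step + 1) % accum_steps == 0:
        optimizer.step()
        optimizer.zero_grad()
```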

5 Results

We list findings from all experiments described in Sect. 4 below.

Table 1. Performance of the NSP + MLM language-only model on VisDial v1.0 val as the number of dialog history rounds is varied
Table 2. Performance of the NSP + MLM language-only model on VisDial v1.0 val with no/minimal training (described in Sect. 4.1)
  • Language-only performs well. The language-only model gets to 57.22 on NDCG and 64.10 on MRR (Table 3), which is already competitive with several prior published works (Table 4). These trends are consistent with high human performance on VisDial [2] with just language (question and dialog history) – 48.5 on MRR – which further improves to 63.5 on MRR with image.

  • Increasing dialog history rounds helps. We report performance of the language-only model as a function of dialog history rounds in Table 1 and Fig. 2a. Note that the change in performance from including 0 to 4 rounds of dialog history (\(+4.56\) on NDCG, \(+8.54\) on MRR) is much more than from 4 to 10 dialog history rounds (\(+2.12\) on NDCG, \(+1.27\) on MRR). Thus, performance continues to go up with increasing dialog history rounds but starts to plateau with \(\ge \) \(4\) history rounds. We believe these improvements are largely indicative of the Transformer’s ability to model long-term dependencies.

  • Zero-shot model performs poorly. Running inference with the language-only model pretrained on BooksCorpus [27] and Wikipedia without any finetuning on VisDial only gets to 11.63 on NDCG and 6.88 on MRR (Table 2). Finetuning the loss heads with all other layers frozen leads to an improvement of \({\sim }8\) NDCG points over this. This low performance can be attributed to significantly longer sequences in VisDial than the model was pretrained with.

  • VQA initialization helps more than random or CC initialization. Finetuning ViLBERT on VisDial with weights initialized from VQA pretraining gets to 64.82 on NDCG and 68.97 on MRR, \({\sim }3\) points better than random initialization on NDCG and \({\sim }2\) points better than CC pretraining (Table 3). We believe poorer transfer from CC is because both VQA and VisDial have images from COCO and are more closely related tasks than captioning on CC.

  • Dense annotations boost NDCG, hurt MRR. Finetuning with the CE loss leads to 74.47 on NDCG – a \({\sim }10\%\) improvement over the ‘w/ CC + VQA’ base model – but 50.74 on MRR, a \({\sim }17\%\) decline below the base model (Table 4). This is a surprising finding! We carefully analyze this behavior in Sect. 6.

  • Ensembling does not improve performance. We trained 3 models initialized with different random seeds for each of the 3 variants (‘w/ CC + VQA’, ‘CE’ and ‘CE + NSP’) and aggregated results by averaging the normalized scores from the 3 models. We did not observe any significant improvement.

Table 3. Results on VisDial v1.0 val (with 95% CI). \(\uparrow \) indicates higher is better.
Table 4. Results on VisDial v1.0 test-std. \(\uparrow \) indicates higher is better. \(\downarrow \) indicates lower is better. \(\dag \) denotes ensembles. Best single-model results are bolded and best ensemble results are underlined. \(\star \) denotes the winning team of the 2019 Visual Dialog Challenge.

We report results from the Visual Dialog evaluation server for our best models – ‘w/ CC + VQA’, ‘CE’ and ‘CE + NSP’ – on the unseen test-std split in Table 4. We compare against prior published results and top entries from the leaderboard. Our models outperform prior results and set a new state-of-the-art – ViLBERT with CC + VQA pretraining on the MRR, R@k, and MR metrics, and the variant further finetuned with a CE loss on dense annotations on NDCG. Finally, adding the NSP loss along with CE (as in Sect. 4.3) offers a balance between optimizing metrics that reward the sparse annotations (original ground-truth answers) and those that reward the dense annotations.

Fig. 2. Analysis plots of dense annotations in VisDial v1.0 val split and dialog history ablations.

6 Analysis

As described in Sect. 5, finetuning on dense annotations leads to a significant increase in NDCG, but hurts the other 5 metrics – MRR, R@1, R@5, R@10 and MR – which depend on the original sparse annotations in VisDial, i.e. the follow-up answers provided during human-human dialog.

We begin by visualizing the distribution of dense relevance scores for these sparse ground-truth (GT) answers in Fig. 2b and observe that \({\sim }50\%\) GT answers have relevance \(\le \) \(0.8\), and \({\sim }30\%\) have relevance \(\le \) \(0.6\). Thus, there is some degree of misalignment between dense and sparse annotations – answers originally provided during human-human dialog in VisDial were not always judged to be relevant by all humans during the post-hoc dense annotation phase.

Why are GT and Dense Annotations Misaligned? We notice that many questions with a discrepancy between GT and dense annotations are somewhat subjective. For example, in row 1, round 7 (Fig. 5), Q: ‘what color is the chair?’, the GT answer is ‘black’, but the chair is in shadow and it is difficult to accurately identify its color. We would thus expect variance if multiple humans were polled for the answer; the GT answer, however, is just one sample from this answer distribution, not necessarily from its peak. In general, the dense annotations seem less wrong than GT (as they are sourced by consensus) since they are safer – often resolving to answers like ‘I cannot tell’ when there is uncertainty/subjectivity – but also uninformative – not conveying additional information, e.g. ‘I think 3 but they are occluded so it is hard to tell’ – since such nuanced answers are not part of the list of answer options in VisDial [2].

Model Performance on GT vs. Dense Annotations. Fig. 2c shows mean ranks of these GT answers as predicted by three model variants – ViLBERT w/ CC + VQA, CE, and CE + NSP – grouped by dense relevance scores. The ‘CE’ model gets worse mean ranks than ‘w/ CC + VQA’ for all GT answers, since it is no longer trained with these GT answers during dense annotation finetuning. The CE model assigns low (i.e. good) mean ranks to GT answers with higher relevance scores (\(\ge \) \(0.8\)), which translates to a high NDCG score (Table 3). But it assigns poor mean ranks to GT answers with relatively lower relevance scores (\(\le \) \(0.8\)), and since \({\sim }50\%\) of GT answers have relevance scores \(\le \) \(0.8\), this hurts MRR, R@k, and MR for the CE model (Table 3).

Fig. 3. Mean relevance scores and counts for the top-50 most-relevant answers from VisDial v1.0 val dense annotations. These contain several sets of paraphrases – \(\{\)“yes it’s in color”, “yes this picture is in color”, “the picture is in color”, “yes the picture is in color”, “yes, it is in color”, “yes it is in color”, “yes, it’s in color”, “yes in color”\(\}\), etc. – and have a bias towards binary answers (Color figure online)

Next, we consider the top-50 most-relevant answer options (occurring \(\ge \) \(10\) times) as per dense annotations in VisDial v1.0 val (not restricting ourselves to only GT answers). Figure 3 shows the mean relevance scores for this set, and Fig. 4 shows the mean ranks assigned to these answers by our models. The CE model gets better mean ranks in this set compared to Base, leading to high NDCG.

Fig. 4. Predicted mean rank for each of the top-50 most relevant answers as per dense annotations (from Fig. 3) by three model variants – ViLBERT w/ CC + VQA (called ‘Base’), CE, and CE + NSP. The CE model gets lower mean ranks for most answers in this set compared to Base. This leads to significantly higher NDCG, as reported in Table 3 and Table 4, but low MRR, since these relevant answers as per dense annotations do not correlate well with the set of original ground-truth answers, as shown in Fig. 2b

Fig. 5. Qualitative samples for three model variants – ViLBERT w/ CC + VQA (called ‘Base’), Base + CE, and Base + CE + NSP

Qualitative Examples. Finally, we present uniformly sampled example answer predictions on VisDial v1.0 val from our models along with the ground-truth dialog sequences in Fig. 5 and present additional samples in the appendix. In these examples, consistent with the Visual Dialog task definition [2], at every round of dialog, the model gets the image, ground-truth human dialog history (including caption), and follow-up question as input, and predicts the answer. Specifically, the model ranks 100 answer options. Here we show the top-1 prediction.

We make a few observations. 1) The Base model is surprisingly accurate, e.g. in row 2, round 1 (Fig. 5), Q: ‘can you see any people?’, predicted answer: ‘part of a person’; and in row 2, round 10, Q: ‘anything else interesting about the photo?’, predicted answer: ‘the dog is looking up at the person with his tongue out’. 2) The CE model often answers with generic responses (such as ‘I cannot tell’), especially for questions involving some amount of subjectivity/uncertainty, e.g. in row 1, round 7, Q: ‘what color is the chair?’, predicted answer: ‘I cannot tell’ (the chair seems to be in shadow in the image); and in row 2, round 7, Q: ‘does the dog look happy?’, predicted answer: ‘I can’t tell’ (a subjective question). 3) This also highlights a consequence of the misalignment between ground-truth and dense annotations. While the ground-truth answer provides one reasonable response for the question asked, it is answerer-specific to quite an extent, and there may be other correct answers (annotated in the dense annotations). A negative effect of this misalignment is that when finetuned on dense annotations (CE), the model gets rewarded for generic answers (e.g. ‘cannot tell’). While being able to capture and reason about uncertainty is a desirable property for models to have, it would be more helpful if these agents could convey more information with appropriate qualifiers (e.g. ‘I think 3 but they are occluded so it is hard to tell’) than a blanket ‘I cannot tell’. We aim to study this in future work.

7 Implementation

We use the BERT\(_{\text {BASE}}\) model [35] for the linguistic stream. We use 6 layers of Transformer blocks (with 8 attention heads and a hidden state size of 1024) for the visual stream. The co-attention layers connect the 6 Transformer layers in the visual stream to the last 6 Transformer layers in the linguistic stream. We train on dialog sequences with at most 256 tokens, as most sequences fit within this limit. During inference, we truncate longer sequences by removing rounds starting from round 1 (we keep the caption). We set all loss coefficients to 1. We use a batch size of 128 for language-only experiments and 80 for the other experiments. We use Adam [63] and linearly increase the learning rate from 0 to \(2\text {e}^{-5}\) over 10k iterations, then decay it to \(1\text {e}^{-5}\) over 200k iterations. Our code is available at github.com/vmurahari3/visdial-bert/.
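One plausible realization of this learning-rate schedule (linear warmup followed by linear decay), sketched with PyTorch's LambdaLR and a placeholder model, is shown below; the authors' exact scheduler may differ.

```python
import torch
from torch.optim import Adam
from torch.optim.lr_scheduler import LambdaLR

# Linear warmup from 0 to 2e-5 over the first 10k iterations, then linear
# decay to 1e-5 by iteration 200k. The model here is a placeholder.
model = torch.nn.Linear(10, 10)
optimizer = Adam(model.parameters(), lr=2e-5)

warmup, total, final_scale = 10_000, 200_000, 0.5   # 1e-5 / 2e-5 = 0.5

def lr_scale(it: int) -> float:
    if it < warmup:
        return it / warmup                           # warmup: 0 -> 1
    frac = min(1.0, (it - warmup) / (total - warmup))
    return 1.0 - frac * (1.0 - final_scale)          # decay: 1 -> 0.5

scheduler = LambdaLR(optimizer, lr_lambda=lr_scale)
# Typical loop: optimizer.step(); scheduler.step()   # once per iteration
```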

8 Conclusion

We introduce a model for Visual Dialog that enables pretraining on large-scale image-text datasets before transferring and finetuning on VisDial. Our model is an adaptation of ViLBERT [44], and our best single model is pretrained on BooksCorpus [27], English Wikipedia (at the BERT stage), and on Conceptual Captions [28], VQA [29] (at the ViLBERT stage), before finetuning on VisDial, optionally with dense annotations. Our model outperforms prior published results by \(> 1\%\) absolute on NDCG and MRR, achieving state-of-the-art results, and providing a simple baseline for future ‘pretrain-then-transfer’ approaches.

Through careful analysis of our results, we find that the recently released dense annotations for the task do not correlate well with the original ground-truth dialog answers, leading to a trade-off when models optimize for metrics that take into account these dense annotations (NDCG) vs. the original sparse annotations (MRR). This opens up avenues for future research into better evaluation metrics.

Finally, note that our model is discriminative – it can pick a good answer from a list of answer options – but cannot generate an answer. In the future, we aim to develop robust decoding techniques, based on decoding strategies for transformer-based models introduced in [33, 64], for a strong generative model.