1 Introduction

State-of-the-art interactive video retrieval systems [1, 2, 14, 16, 17, 20] address difficult video retrieval tasks by combining information retrieval [3], deep learning [8], and interactive search [19, 21] approaches. Such systems are compared annually at the Video Browser Showdown (VBS) [5, 11, 12] evaluation campaign, where participating teams compete concurrently in one room, trying to solve a presented video retrieval task within a given time frame. The tasks are revealed one by one during the competition, while the video dataset is provided to the teams in advance. This paper presents new features of the VIRET tool prototype, which regularly participates in interactive video/lifelog search competitions [9, 12].

VIRET [14, 15] is a frame-based interactive video retrieval framework that relies mostly on three basic image retrieval approaches for searching a set of selected video frames: keyword search based on automatic annotation, color sketching based on position-color signatures, and query by example image relying on deep features. The basic approaches can be combined to construct more complex multi-modal and/or temporal queries, which turned out to be an important VIRET feature at VBS 2019. To capture the location of frequent semantic concepts, the sketch canvas also supports filters for faces or displayed text. Several additional filters are supported as well, based either on content (e.g., a black-and-white filter) or on the knowledge of video/shot boundaries (e.g., show only the top ranked frames for each video/shot).
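
As an illustration of how the basic models can be chained, the following Python sketch fuses per-frame scores by a simple product and formulates a temporal query over a short frame window; the fusion rule, window size, and function names are assumptions made for this sketch, not the exact VIRET formulation.

```python
import numpy as np

def combine_models(keyword_scores, sketch_scores, example_scores):
    """Fuse per-frame scores of the three basic models by a simple product
    (illustrative late fusion; the actual VIRET fusion may differ)."""
    return keyword_scores * sketch_scores * example_scores

def temporal_query(scores_a, scores_b, window=5):
    """Score frame i by query A combined with the best match of query B among
    the following `window` frames (a common temporal-pair formulation)."""
    n = len(scores_a)
    result = np.zeros(n)
    for i in range(n):
        future = scores_b[i + 1:i + 1 + window]
        result[i] = scores_a[i] * (future.max() if len(future) else 0.0)
    return result
```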

The interface is divided into two blocks: one for query formulation and one for inspecting the top ranked results and their context. Frames are displayed using extracted small thumbnails. All entered queries are visible, as are other query settings such as the sorting model or the filters limiting the number of returned frames for each basic query model. Once a query is provided, users can browse the temporal context of each displayed frame, observe a video summary, or view an image map dynamically computed for the top ranked frames.

Whereas the presented VIRET features were competitive in expert VBS search sessions, the previous version of the VIRET tool prototype did not perform well in novice sessions. Specifically, while expert VIRET users solved all ten visual KIS tasks and six out of eight textual KIS tasks (the best result among all participating teams), novice VIRET users solved just two out of five evaluated visual KIS tasks (a subset of the expert tasks). According to our analysis, in two of the unsolved novice tasks the searched frame appeared on the first page but was overlooked. In addition, novice users struggled with keyword search because they did not know the set of supported labels used for automatic annotation. Hence, for the next installment of the Video Browser Showdown we focus both on updates of the retrieval models and on several modifications of the user interface.

2 New VIRET Features for VBS 2020

This section summarizes the considered updates of the employed retrieval models and the user interface.

2.1 Retrieval Models

Since the basic retrieval models and their multi-modal temporal combination used by VIRET often proved effective at bringing searched frames to the first page, we plan to keep the main querying scheme. However, several updates are considered for VBS 2020.

First, the automatic annotation process employing a retrained NASNet deep classification network [22] is modified to produce different score values in the network output vector (i.e., scores of assigned labels). Instead of the softmax used for training, the feature extraction process currently uses another form of output score normalization that enables more effective retrieval with queries comprising a combination of multiple class labels. In the normalized vectors, all potential zero scores are further replaced with a small constant. We also plan to investigate the performance benefits of additional annotation sources (e.g., object detection networks).
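
Since the exact normalization is not detailed here, the sketch below shows one plausible retrieval-time alternative to softmax, including the replacement of zero scores with a small constant so that a multi-label query, scored as the product of its label scores, never collapses to zero.

```python
import numpy as np

def normalize_annotation_scores(logits, eps=1e-6):
    """One plausible retrieval-time alternative to softmax: keep positive
    activations, L1-normalize them, and replace zeros with a small constant so
    that multiplying the scores of several query labels never yields 0.
    The exact normalization used by VIRET may differ."""
    scores = np.maximum(logits, 0.0)
    total = scores.sum()
    if total > 0:
        scores = scores / total
    scores[scores == 0.0] = eps
    return scores
```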

Second, the keyword search in ASR annotations of the vitrivr tool [17] performed impressively at the previous VBS event. Since the vitrivr team shared the ASR data with other teams, we plan to integrate these data into the VIRET framework. Specifically, we plan to include a video filter based on the presence of a spoken word or phrase.
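
A minimal sketch of such a filter is shown below; the mapping from video identifiers to transcript text is an assumed data layout, as the shared ASR data may be structured differently.

```python
def asr_filter(candidate_videos, asr_transcripts, phrase):
    """Keep only videos whose ASR transcript contains the given word or phrase.
    `asr_transcripts` maps video_id -> transcript text (assumed layout)."""
    phrase = phrase.lower()
    return [v for v in candidate_videos
            if phrase in asr_transcripts.get(v, "").lower()]
```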

Third, given a set of collected query logs, we plan to investigate and optimize the meta-parameters of the retrieval models with respect to the whole set of logged queries. More specifically, we plan to fine-tune the initial settings of the filters limiting the number of top ranked frames for each model, as well as the presentation filters. Given a detected effective setting, the corresponding interface controls can be hidden or simplified for novice sessions.
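
The sketch below illustrates one way such tuning could be set up as a grid search over per-model top-K settings; the `rank_fn` replay function and the candidate grid are hypothetical placeholders for this illustration.

```python
import itertools

def tune_topk_filters(query_logs, rank_fn, topk_grid=(100, 500, 1000, 5000)):
    """Grid-search per-model top-K filter settings against logged queries with
    known target frames. `rank_fn(query, target, topk)` is a hypothetical
    replay function returning the rank of the target frame under the setting."""
    best_setting, best_avg_rank = None, float("inf")
    for topk in itertools.product(topk_grid, repeat=3):  # one K per basic model
        avg_rank = sum(rank_fn(q, t, topk) for q, t in query_logs) / len(query_logs)
        if avg_rank < best_avg_rank:
            best_setting, best_avg_rank = topk, avg_rank
    return best_setting, best_avg_rank
```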

Last but not least, we plan to include free-form text search using a variant of the recently introduced W2VV++ model [10], which extends the W2VV model [7] and relies on visual features from deep networks that leverage the whole ImageNet hierarchy [6] to train effective representations [13].
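
At retrieval time, such a model reduces free-form text search to nearest-neighbor ranking in a shared embedding space; the following sketch shows this ranking step only and assumes the text embedding and the frame features are already projected to vectors of the same dimensionality.

```python
import numpy as np

def rank_frames_by_text(text_embedding, frame_features):
    """Rank frames by cosine similarity between a sentence embedding produced
    by a W2VV++-style model and precomputed visual frame features, assuming
    both live in a shared space of equal dimensionality."""
    t = text_embedding / np.linalg.norm(text_embedding)
    f = frame_features / np.linalg.norm(frame_features, axis=1, keepdims=True)
    return np.argsort(-(f @ t))  # frame indices, most similar first
```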

2.2 User Interface

Since the VIRET tool focuses on frequent query (re)formulations, informative visualizations are important to aid querying and to help bridge the semantic gap. For VBS 2020, we consider the following updates of the VIRET tool prototype interface.

So far, the keyword search component has prompted only the supported class labels and their descriptions during query formulation. In the new version, we consider automatically showing also the top ranked frames for the prompted labels (see Fig. 1).

Fig. 1. Top ranked frames for prompted labels.

Without knowledge of the supported labels used for automatic annotation, (especially) novice users may face problems initializing a keyword search. To help bridge this gap, the interface was updated to show the few automatically assigned labels with the highest score for a displayed frame once the mouse cursor hovers over the frame. This feedback helps novice users observe and gradually learn how the automatic annotation works, and they can interactively extend the query expression with labels that originally did not come to mind.
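
The hover feedback itself amounts to a simple top-k lookup over the stored annotation scores, as in the sketch below (one score per supported class label is assumed).

```python
import numpy as np

def top_labels_for_frame(annotation_scores, label_names, k=5):
    """Return the k highest-scoring automatic labels of one frame, as shown
    when the cursor hovers over its thumbnail."""
    top = np.argsort(-annotation_scores)[:k]
    return [(label_names[i], float(annotation_scores[i])) for i in top]
```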

In order to construct temporal queries using example images, we consider a hierarchical static image map [4] with an organized/sorted representative sample of the whole dataset. Let us emphasize that the primary purpose of the map is to find a suitable query example frame, as finding one particular searched frame is a far more difficult task.
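
The toy sketch below only conveys the general idea of laying out a representative sample on a grid ordered by feature similarity; it is not the hierarchical map construction of [4], and the sampling and ordering choices are illustrative assumptions.

```python
import numpy as np

def build_static_map(frame_features, rows, cols, seed=0):
    """Toy image map: pick a random representative sample of the dataset and
    lay it out on a rows x cols grid ordered by the first principal component
    of the features, so visually similar frames tend to be placed close
    together. `frame_features` is assumed to be an (n_frames x dim) array."""
    rng = np.random.default_rng(seed)
    sample = rng.choice(len(frame_features), size=rows * cols, replace=False)
    feats = frame_features[sample]
    centered = feats - feats.mean(axis=0)
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    order = np.argsort(centered @ vt[0])      # 1-D projection onto the 1st PC
    return sample[order].reshape(rows, cols)  # grid of frame indices
```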

The last modification concerns the result presentation displays (already used at the Lifelog Search Challenge 2019). One display shows a classical long list of frames sorted by relevance (using larger thumbnails), through which users can navigate with the scroll bar. The second display shows one larger page of the ranked result set, where frames are locally rearranged so that frames from the same video are collocated and sorted by frame number (see Fig. 2). The video groups on a page are ordered by the most relevant frame of each group and separated by a green vertical line.
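
The rearrangement of one page can be sketched as follows; the representation of a page as a relevance-sorted list of (video_id, frame_number) pairs is an assumption made for this illustration.

```python
from collections import defaultdict

def group_page_by_video(page_frames):
    """Rearrange one result page so frames of the same video are collocated and
    ordered by frame number, with video groups ordered by their best-ranked
    frame. `page_frames` is a relevance-sorted list of (video_id, frame_number)
    pairs (assumed layout)."""
    groups, order = defaultdict(list), []
    for video_id, frame_number in page_frames:
        if video_id not in groups:
            order.append(video_id)            # first (best-ranked) occurrence
        groups[video_id].append(frame_number)
    return [(v, sorted(groups[v])) for v in order]
```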

Fig. 2. Top ranked frames grouped by video ID on one page. (Color figure online)

3 Conclusion

This paper presents a new version of the VIRET system, focusing on updates of the underlying retrieval toolkit and the user interface. The updates aim at more convenient query formulation, a new modality (speech), and fine-tuning of the employed ranking and filtering models.