
1 Introduction

Face anti-spoofing is an important task in computer vision: it enables facial interaction systems to determine whether a presented face is live or spoof. With successful deployments in phone unlocking, access control and e-wallet payment, facial interaction systems have already become an integral part of the real world. However, these systems face a vital threat: imagine a scenario where an attacker with a photo or video of you can unlock your phone and even pay bills with your e-wallet. Face anti-spoofing has therefore emerged as a crucial technique to protect our privacy and property from illegal use by others.

Most modern face anti-spoofing methods [8, 14, 31] are fueled by the availability of face anti-spoofing datasets [4, 5, 18, 24, 29, 32, 34], as shown in Table 1. However, the existing datasets have several limitations: 1) Lack of Diversity. Existing datasets lack sufficient subjects, sessions and input sensors (e.g. mostly fewer than 2000 subjects, 4 sessions and 10 input sensors). 2) Lack of Annotations. Existing datasets annotate only the spoof type. The face anti-spoofing community lacks a densely annotated dataset covering rich attributes, which would help researchers explore the face anti-spoofing task with diverse attributes. 3) Performance Saturation. The classification performance on several face anti-spoofing datasets has already saturated, failing to evaluate the capability of existing and future algorithms. For example, the recall under FPR = 0.5% on the SiW and Oulu-NPU datasets using a vanilla ResNet-18 has already reached 100.0% and 99.0%, respectively (Fig. 1).
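For concreteness, "recall under a fixed FPR" can be computed as in the minimal sketch below. The function names and the class convention (live as the positive class, so FPR is the fraction of spoof images wrongly accepted as live) are our assumptions for illustration, not code from any referenced implementation.

```python
import numpy as np

def recall_at_fpr(pos_scores: np.ndarray, neg_scores: np.ndarray,
                  target_fpr: float = 0.005) -> float:
    """pos_scores / neg_scores: model scores for positive / negative images."""
    # Pick the threshold so that at most `target_fpr` of negatives pass it.
    thresh = np.quantile(neg_scores, 1.0 - target_fpr)
    # Recall: fraction of positives whose score clears that threshold.
    return float((pos_scores > thresh).mean())
```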

To address these shortcomings of existing face anti-spoofing datasets, in this work we propose a large-scale and densely annotated dataset, CelebA-Spoof. Besides the standard Spoof Type annotation, CelebA-Spoof also contains annotations for Illumination Condition and Environment, which express more information in face anti-spoofing than a categorical Live/Spoof label. Essentially, these dense annotations describe images by answering questions like "Is the person in the image live or spoof?", "What kind of spoof type is this?", "What kind of illumination condition is this?" and "What kind of environment is in the background?". Specifically, all live images in CelebA-Spoof are selected from CelebA [20], and all spoof images are collected and annotated by skillful annotators. CelebA-Spoof has several appealing properties. 1) Large-Scale. CelebA-Spoof comprises a total of 10,177 subjects and 625,537 images, making it the largest face anti-spoofing dataset. 2) Diversity. We collect images with more than 10 different input sensors, including phones, pads and personal computers (PCs), and cover 8 different sessions. 3) Rich Annotations. Each image in CelebA-Spoof is annotated with 43 different attributes: the 40 Face Attributes defined in CelebA [20] plus 3 attributes specific to face anti-spoofing: Spoof Type, Illumination Condition and Environment. With rich annotations, we can comprehensively investigate the face anti-spoofing task from various perspectives.

Equipped with CelebA-Spoof, we design a simple yet powerful network named Auxiliary information Embedding Network (AENet), and carefully benchmark existing methods within this unified multi-task framework. Several valuable observations are revealed: 1) We analyze the effectiveness of auxiliary geometric information, i.e. depth maps and reflection maps, for different spoof types, and illustrate the sensitivity of geometric information to special illumination conditions. 2) We validate that auxiliary semantic information, including face attributes and spoof types, plays an important role in improving classification performance. 3) We build three CelebA-Spoof benchmarks based on these two kinds of auxiliary information. Through extensive experiments, we demonstrate that our large-scale and densely annotated dataset serves as an effective data source for face anti-spoofing, enabling state-of-the-art performance. Furthermore, models trained with auxiliary semantic information exhibit better generalizability than other alternatives.

Fig. 1. A quick glance at the CelebA-Spoof face anti-spoofing dataset with its attributes. A hypothetical space of scenes is partitioned by attributes and Live/Spoof; in reality, this space is much higher-dimensional and there are no clean boundaries between attribute presence and absence

Table 1. Comparison of CelebA-Spoof with existing face anti-spoofing datasets. Different illumination conditions and environments make up different sessions. (V = video, I = image; Ill. = illumination condition, Env. = environment; "-" means the information is not annotated)

In summary, the contributions of this work are three-fold: 1) We contribute a large-scale face anti-spoofing dataset, CelebA-Spoof, with 625,537 images from 10,177 subjects, annotated with 43 rich attributes on face, illumination, environment and spoof types. 2) Based on these rich attributes, we further propose a simple yet powerful multi-task framework, namely AENet. Through AENet, we conduct extensive experiments to explore the roles of semantic information and geometric information in face anti-spoofing. 3) To support comprehensive evaluation and diagnosis, we establish three versatile benchmarks to evaluate the performance and generalization ability of various methods under different carefully designed protocols. With several valuable observations revealed, we demonstrate the effectiveness of CelebA-Spoof and its rich attributes, which can significantly facilitate future research.

2 Related Work

Face Anti-spoofing Datasets. The face anti-spoofing community mainly has three types of datasets. First, multi-modal datasets: 3DMAD [7], Msspoof [6], CASIA-SURF [32] and CSMAD [1]. However, since widely used mobile phones are not equipped with the required modules, such datasets cannot be widely applied in real scenes. Second, single-modal datasets, such as Replay-Attack [5], CASIA-MFSD [34], MSU-MFSD [29], MSU-USSA [24] and HKBU-MARs V2 [16]. However, these datasets were collected more than three years ago; with the rapid development of electronic equipment, their acquisition devices are completely outdated and cannot meet actual needs. SiW [18], Oulu-NPU [4] and HKBU-MARs V1+ [15] are relatively up-to-date, but the limited numbers of subjects, spoof types and environments (indoors only) in these datasets do not guarantee the generalization capability required in real applications. Third, SiW-M [19] is mainly used for zero-shot face anti-spoofing tasks. The CelebA-Spoof dataset has 625,537 images from 10,177 subjects and 8 sessions (2 environments × 4 illumination conditions) with rich annotations. Its large scale and diversity further close the gap between face anti-spoofing datasets and real scenes, and its rich annotations allow us to better analyze the face anti-spoofing task. All datasets mentioned above are listed in Table 1.

Face Anti-spoofing Methods. In recent years, face anti-spoofing algorithms have made great progress. Most traditional algorithms focus on handcrafted features, such as LBP [5, 21, 22, 30], HoG [21, 25, 30] and SURF [2]. Other works focus on temporal features such as eye blinking [23, 27] and lip motion [12]. To improve robustness to lighting changes, some researchers have turned to different color spaces, such as HSV [3], YCbCr [2] and the Fourier spectrum [13]. With the development of deep learning, researchers have also begun to focus on methods based on Convolutional Neural Networks. [8, 14] treated the face PAD problem as binary classification and achieved good performance. Auxiliary supervision has also been used to improve binary classification: Atoum et al. let a fully convolutional network learn the depth map to assist the binary classification task. Liu et al. [15, 17] proposed remote photoplethysmography (rPPG signal)-based methods to foster the development of 3D face anti-spoofing. Liu et al. [18] proposed to leverage the depth map combined with the rPPG signal as auxiliary supervision. Kim et al. [11] proposed using the depth map and the reflection map as bipartite auxiliary supervision. Besides, Yang et al. [31] proposed to combine spatial information with temporal information in the video stream to improve model generalization. Amin et al. [10] approached face anti-spoofing by decomposing a spoof photo into a live photo and a spoof noise pattern. The methods mentioned above are prone to over-fitting the training data, and their generalization performance in real scenarios is poor. To address the poor generalization, Shao et al. [26] adopted transfer learning to further improve performance. A more complex face anti-spoofing dataset with large scale and diversity is therefore necessary. Extensive experiments show that CelebA-Spoof significantly improves the generalization of basic models, and that methods based on auxiliary semantic information achieve even better generalization.

Fig. 2. Representative examples of the semantic attributes (i.e. spoof type, illumination and environment) defined upon spoof images. In detail, (a) 4 macro-types and 11 micro-types of spoof type and (b) 4 illumination and 2 environmental conditions are defined

3 CelebA-Spoof Dataset

Existing face anti-spoofing datasets cannot satisfy the requirements of real-scenario applications. As shown in Table 1, most of them contain fewer than 200 subjects and 5 sessions, and they are captured indoors only, with fewer than 10 types of input sensors. In contrast, our proposed CelebA-Spoof dataset provides 625,537 images of 10,177 subjects, offering a more comprehensive dataset for the area of face anti-spoofing. Furthermore, each image is annotated with 43 attributes. This abundant information enriches the diversity and makes face anti-spoofing more illustrative. To the best of our knowledge, our dataset surpasses all existing datasets in both scale and diversity.

In this section, we describe our CelebA-Spoof dataset and analyze it through a variety of informative statistics. The dataset is built on CelebA [20]: all live images come from CelebA, while the spoof images of CelebA-Spoof are collected and annotated by us.

Fig. 3. The statistical distribution of the CelebA-Spoof dataset. (a) Overall live and spoof distribution, as well as face size statistics. (b) An exemplar live attribute, i.e. "gender". (c) Three types of spoof attributes

3.1 Semantic Information Collection

In recent decades, studies on attribute-based representations of objects, faces and scenes have drawn much attention as a complement to categorical representations. However, few works attempt to exploit semantic information in face anti-spoofing. Indeed, for face anti-spoofing, additional semantic information can characterize target images by attributes rather than by discrete assignment into a single category, i.e. "live" or "spoof".

Semantic for Live - Face Attribute \({\varvec{\mathcal {S}}}^\mathbf{f}\). In our dataset, we directly adopt the 40 face attributes defined in CelebA [20] as "live" attributes. Attributes of "live" faces typically include gender, hair color, expression, etc. These abundant semantic cues have shown their potential in providing more information for face identification; this is the first time they are incorporated into face anti-spoofing. Extensive studies can be found in Sect. 5.1.

Semantic for Spoof - Spoof Type \({\varvec{\mathcal {S}}}^\mathbf{s}\), Illumination \({\varvec{\mathcal {S}}}^\mathbf{i}\), and Environment \({\varvec{\mathcal {S}}}^\mathbf{e}\). Unlike "live" face attributes, "spoof" images can be characterized by another set of properties or attributes, since they are not only related to the face region. Indeed, the material of the spoof type, as well as the illumination condition and environment in which spoof images are captured, express more semantic information in "spoof" images, as shown in Fig. 2. Note that the combination of illumination and environment forms the "session" defined in existing face anti-spoofing datasets. As shown in Table 1, the combination of four illumination conditions and two environments forms 8 sessions. To the best of our knowledge, CelebA-Spoof is the first dataset covering spoof images in outdoor environments.

3.2 Statistics on CelebA-Spoof Dataset

The CelebA-Spoof dataset is constructed with a total of 625,537 images. As shown in Fig. 3(a), the ratio of live to spoof is 1 : 3. Face sizes in all images mainly range from 0.01 to 0.1 megapixels. We split the CelebA-Spoof dataset into training, validation and test sets with a ratio of 8 : 1 : 1. Note that the three sets are guaranteed to have no overlap in subjects, i.e. there is no case where a live image of a certain subject is in the training set while its spoof counterpart is in the test set. The distribution of live images across the three splits is the same as defined in the CelebA dataset.
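A subject-disjoint split of this kind can be obtained by partitioning subject identities first and assigning images afterwards. The following is a minimal sketch; the `image_subjects` mapping and the fixed seed are illustrative assumptions, not the dataset's actual release format.

```python
import random
from collections import defaultdict

def split_by_subject(image_subjects: dict, seed: int = 0):
    """image_subjects: maps each image path to its subject id (assumed format)."""
    by_subject = defaultdict(list)
    for img, sid in image_subjects.items():
        by_subject[sid].append(img)
    subjects = sorted(by_subject)
    random.Random(seed).shuffle(subjects)
    n = len(subjects)
    train_ids = subjects[:int(0.8 * n)]
    val_ids = subjects[int(0.8 * n):int(0.9 * n)]
    test_ids = subjects[int(0.9 * n):]
    pick = lambda ids: [img for sid in ids for img in by_subject[sid]]
    # Because the split is over subjects, a live/spoof pair belonging to one
    # subject can never straddle two splits.
    return pick(train_ids), pick(val_ids), pick(test_ids)
```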

The semantic attribute statistics are shown in Fig. 3(c). The portion of each attack type is almost the same, guaranteeing a balanced distribution. Data under normal illumination in an indoor environment, which most existing datasets adopt, is easy to collect. Besides such easy cases, the CelebA-Spoof dataset also contains \(12\%\) dark, \(11\%\) back, and \(19\%\) strong illumination. Furthermore, both indoor and outdoor environments cover all illumination conditions.

4 Auxiliary Information Embedding Network

Fig. 4. Auxiliary information Embedding Network (AENet). We append two Conv\(_{3\times 3}\) layers after the CNN and upsample to size \(14\times 14\) to learn the geometric information, and use three FC layers to learn the semantic information. The prediction score of \(\mathcal {S}^\text {f}\) for a spoof image should be very low, and the predictions of \(\mathcal {S}^\text {s}\) and \(\mathcal {S}^\text {i}\) for a live image should be "No attack" and "No illumination", i.e. the first label of \(\mathcal {S}^\text {s}\) and \(\mathcal {S}^\text {i}\), respectively (Color figure online)

Equipped with the CelebA-Spoof dataset, in this section we design a simple yet effective network named Auxiliary information Embedding Network (AENet), as shown in Fig. 4. In addition to the main binary classification branch (in green), we 1) incorporate the semantic branch (in orange) to exploit the auxiliary capacity of the dataset's richly annotated semantic attributes, and 2) benchmark existing geometric auxiliary information within this unified multi-task framework.

AENet\(_{\mathcal {C},\mathcal {S}}\) refers to the multi-task model that jointly learns auxiliary "semantic" attributes and the binary "classification" label. The auxiliary semantic attributes defined in our dataset provide complementary cues rather than a discrete assignment into a single category. The semantic attributes are learned via the backbone network followed by three FC layers. In detail, given a batch of n images, AENet\(_{\mathcal {C},\mathcal {S}}\) simultaneously learns the live/spoof class \(\{\mathcal {C}_{k}\}_{k=1}^{n}\) and the semantic information, i.e. live face attributes \(\{\mathcal {S}_{k}^{\text {f}}\}_{k=1}^{n}\), spoof types \(\{\mathcal {S}_{k}^{\text {s}}\}_{k=1}^{n}\) and illumination conditions \(\{\mathcal {S}_{k}^{\text {i}}\}_{k=1}^{n}\). The loss function of AENet\(_{\mathcal {C},\mathcal {S}}\) is

$$\begin{aligned} \mathcal {L}_{c,s}=\mathcal {L}_{\mathcal {C}} + \lambda _{f}\mathcal {L}_{\mathcal {S}^\text {f}} + \lambda _{s}\mathcal {L}_{\mathcal {S}^\text {s}} + \lambda _{i}\mathcal {L}_{\mathcal {S}^\text {i}}, \end{aligned}$$
(1)

where \(\mathcal {L}_{\mathcal {S}^\text {f}}\) is a binary cross-entropy loss, and \(\mathcal {L}_{\mathcal {C}}\), \(\mathcal {L}_{\mathcal {S}^\text {s}}\) and \(\mathcal {L}_{\mathcal {S}^\text {i}}\) are softmax cross-entropy losses. We set the loss weights to \(\lambda _{f} = 1\), \(\lambda _{s} = 0.1\) and \(\lambda _{i} = 0.01\); the \(\lambda \) values are empirically selected to balance the contribution of each loss.
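The semantic branch and Eq. (1) can be realized in a few lines of PyTorch. The sketch below is ours, not the authors' released code: the 512-d feature size and the exact head dimensions (40 face attributes; 11 spoof types plus a "No attack" label; 4 illuminations plus a "No illumination" label) are assumptions consistent with the description above.

```python
import torch.nn as nn
import torch.nn.functional as F

class AENetCS(nn.Module):
    def __init__(self, backbone: nn.Module, feat_dim: int = 512):
        super().__init__()
        self.backbone = backbone                 # e.g. ResNet-18 without its FC head
        self.fc_cls = nn.Linear(feat_dim, 2)     # C: live/spoof
        self.fc_face = nn.Linear(feat_dim, 40)   # S^f: 40 face attributes (multi-label)
        self.fc_spoof = nn.Linear(feat_dim, 12)  # S^s: spoof types + "No attack" (count assumed)
        self.fc_illum = nn.Linear(feat_dim, 5)   # S^i: illuminations + "No illumination" (assumed)

    def forward(self, x):
        f = self.backbone(x)
        return self.fc_cls(f), self.fc_face(f), self.fc_spoof(f), self.fc_illum(f)

def loss_cs(outputs, targets, lam_f=1.0, lam_s=0.1, lam_i=0.01):
    c, sf, ss, si = outputs
    y_c, y_f, y_s, y_i = targets  # class id, 40-d float multi-label, class ids
    return (F.cross_entropy(c, y_c)
            + lam_f * F.binary_cross_entropy_with_logits(sf, y_f)
            + lam_s * F.cross_entropy(ss, y_s)
            + lam_i * F.cross_entropy(si, y_i))
```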

AENet\(_{\mathcal {C},\mathcal {G}}\). Besides semantic auxiliary information, some recent works claim that geometric cues such as the reflection map and the depth map can facilitate face anti-spoofing. As shown in Fig. 4 (marked in blue), spoof images exhibit even, flat surfaces, which can be easily distinguished by the depth map. The reflection map, on the other hand, may display reflection artifacts caused by light reflected from the flat surface. However, few works explore their pros and cons.

AENet\(_{\mathcal {C},\mathcal {G}}\) also learns auxiliary geometric information in a multi-task fashion together with live/spoof classification. Specifically, we append a Conv\(_{3\times 3}\) after the backbone network and upsample to \(14\times 14\) to output the geometric maps. We denote the depth and reflection cues as \(\mathcal {G}^\text {d}\) and \(\mathcal {G}^\text {r}\), respectively. The loss function is defined as

$$\begin{aligned} \mathcal {L}_{c,g}=\mathcal {L}_{\mathcal {C}} + \lambda _{d}\mathcal {L}_{\mathcal {G}^\text {d}} + \lambda _{r}\mathcal {L}_{\mathcal {G}^\text {r}}, \end{aligned}$$
(2)

where \(\mathcal {L}_{\mathcal {G}^\text {d}}\) and \(\mathcal {L}_{\mathcal {G}^\text {r}}\) are mean squared error losses, and \(\lambda _{d}\) and \(\lambda _{r}\) are set to 0.1. In detail, following [11], the ground-truth depth map of a live image is generated by PRNet [9], and the ground-truth reflection map of a spoof image is generated by the method in [33]. The ground-truth depth maps of spoof images and the ground-truth reflection maps of live images are set to zero.
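A corresponding sketch of the geometric branch and Eq. (2) is given below, again under assumed channel sizes (a single-channel depth map and a 3-channel reflection map, both upsampled to \(14\times 14\)); the ground-truth generation via PRNet and [33] is not reproduced here.

```python
import torch.nn as nn
import torch.nn.functional as F

class GeometricBranch(nn.Module):
    def __init__(self, in_ch: int = 512):
        super().__init__()
        # Two 3x3 convs after the backbone, one per geometric map (Fig. 4).
        self.depth_head = nn.Conv2d(in_ch, 1, kernel_size=3, padding=1)
        self.refl_head = nn.Conv2d(in_ch, 3, kernel_size=3, padding=1)

    def forward(self, feat_map):  # feat_map: (B, C, H, W) backbone features
        depth = F.interpolate(self.depth_head(feat_map), size=(14, 14),
                              mode='bilinear', align_corners=False)
        refl = F.interpolate(self.refl_head(feat_map), size=(14, 14),
                             mode='bilinear', align_corners=False)
        return depth, refl

def loss_cg(logits, depth, refl, y_c, gt_depth, gt_refl, lam_d=0.1, lam_r=0.1):
    # Eq. (2): classification loss plus MSE on the two geometric maps.
    return (F.cross_entropy(logits, y_c)
            + lam_d * F.mse_loss(depth, gt_depth)
            + lam_r * F.mse_loss(refl, gt_refl))
```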

5 Ablation Study on CelebA-Spoof

Based on the rich annotations in CelebA-Spoof and the designed AENet, we conduct extensive experiments to analyze semantic information and geometric information. Several valuable observations are revealed: 1) We validate that \(\mathcal {S}^\text {f}\) and \(\mathcal {S}^\text {s}\) can greatly facilitate live/spoof classification performance. 2) We analyze the effectiveness of geometric information on different spoof types and find that depth information is particularly sensitive to dark illumination.

Table 2. Different settings in the ablation study. For Baseline, we use the softmax score of \(\mathcal {C}\) for classification. (a) For AENet\(_\mathcal {S}\), we use the average softmax score of \(\mathcal {S}^\text {f}\), \(\mathcal {S}^\text {s}\) and \(\mathcal {S}^\text {i}\) for classification. AENet\(_{\mathcal {S}^\text {f}}\), AENet\(_{\mathcal {S}^\text {s}}\) and AENet\(_{\mathcal {S}^\text {i}}\) refer to each single semantic attribute, respectively. Based on AENet\(_{\mathcal {C},\mathcal {S}}\), w/o \(\mathcal {S}^\text {f}\), w/o \(\mathcal {S}^\text {s}\) and w/o \(\mathcal {S}^\text {i}\) mean AENet\(_{\mathcal {C},\mathcal {S}}\) discards \(\mathcal {S}^\text {f}\), \(\mathcal {S}^\text {s}\) or \(\mathcal {S}^\text {i}\), respectively. (b) For AENet\(_{\mathcal {G}^\text {d}}\), we use \(\left\| \mathcal {G}^\text {d}\right\| _{2}\) for classification. Based on AENet\(_{\mathcal {C},\mathcal {G}}\), w/o \(\mathcal {G}^\text {d}\) and w/o \(\mathcal {G}^\text {r}\) mean AENet\(_{\mathcal {C},\mathcal {G}}\) discards \(\mathcal {G}^\text {d}\) or \(\mathcal {G}^\text {r}\), respectively
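For reference, the scoring rules named in Table 2 might be computed as in the sketch below. Since the caption leaves the exact aggregation open, this reading (spoof probability mass outside the "No attack"/"No illumination" labels, plus inverted face-attribute confidence, and the per-image L2 norm of the predicted depth map) is one plausible interpretation, not the authors' code.

```python
import torch

def semantic_score(sf_logits, ss_logits, si_logits):
    # Spoof evidence from each semantic head (interpretation assumed):
    s_spoof = 1.0 - torch.softmax(ss_logits, dim=1)[:, 0]   # mass beyond "No attack"
    s_illum = 1.0 - torch.softmax(si_logits, dim=1)[:, 0]   # mass beyond "No illumination"
    s_face = 1.0 - torch.sigmoid(sf_logits).max(dim=1).values  # spoof faces score low on S^f
    return (s_spoof + s_illum + s_face) / 3.0

def depth_score(depth_pred):                  # depth_pred: (B, 1, 14, 14)
    # ||G^d||_2 per image; a large norm indicates a live (3D) face, so the
    # decision threshold is applied accordingly.
    return depth_pred.flatten(1).norm(p=2, dim=1)
```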

5.1 Study of Semantic Information

In this subsection, we explore the role of the different semantic information annotated in CelebA-Spoof in face anti-spoofing. Based on AENet\(_{\mathcal {C},\mathcal {S}}\), we design eight different models, listed in Table 2(a). The key observations are:

Binary Supervision is Indispensable. As shown in Table 3(a), AENet\(_\mathcal {S}\), which leverages only the three semantic attributes for classification, cannot surpass the performance of the baseline. However, as shown in Table 3(b), AENet\(_{\mathcal {C},\mathcal {S}}\), which jointly learns the auxiliary semantic attributes and binary classification, significantly improves on the baseline. We can therefore infer that even such rich semantic information cannot fully replace the live/spoof label, while live/spoof supervision with semantic attributes as auxiliary information is more effective. This is because the semantic attributes of an image cannot be enumerated completely, so good classification performance cannot be achieved by relying only on several annotated semantic attributes. However, semantic attributes can help the model pay more attention to cues in the image, thus improving its classification performance.

Semantic Attribute Matters. In Table 3(c), we study the impact of each individual semantic attribute on AENet\(_{\mathcal {C},\mathcal {S}}\). As shown in the table, AENet\(_{\mathcal {C},\mathcal {S}}\) w/o \(\mathcal {S}^\text {s}\) achieves the worst APCER. Since APCER reflects the classification ability on spoof images, this shows that, compared to the other semantic attributes, spoof type most significantly affects the spoof-image classification of AENet\(_{\mathcal {C},\mathcal {S}}\). Furthermore, we list detailed results of AENet\(_{\mathcal {C},\mathcal {S}}\) in Fig. 5(a): AENet\(_{\mathcal {C},\mathcal {S}}\) without spoof types yields the 5 worst APCER\(_{\mathcal {S}^\text {s}}\) values out of 10, which are shown in the figure. Besides, in Table 3(b), AENet\(_{\mathcal {C},\mathcal {S}}\) w/o \(\mathcal {S}^\text {f}\) yields the highest BPCER. We also obtain the BPCER\(_{\mathcal {S}^\text {f}}\) of each face attribute: as shown in Fig. 5(b), among the 40 face attributes, AENet\(_{\mathcal {C},\mathcal {S}}\) w/o \(\mathcal {S}^\text {f}\) accounts for the 25 worst BPCER\(_{\mathcal {S}^\text {f}}\) scores. Since BPCER reflects the classification ability on live images, this demonstrates that \(\mathcal {S}^\text {f}\) plays an important role in the classification of live images.
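As a reminder of the two error rates used here, the standard ISO/IEC 30107-3 definitions apply: APCER is the fraction of spoof (attack) presentations misclassified as live, and BPCER the fraction of live (bona fide) presentations misclassified as spoof. A minimal sketch, with array names of our own choosing:

```python
import numpy as np

def apcer_bpcer(pred_live: np.ndarray, is_live: np.ndarray):
    """pred_live: binary predictions (1 = classified live); is_live: labels (1 = live)."""
    apcer = float(pred_live[is_live == 0].mean())        # spoof accepted as live
    bpcer = float(1.0 - pred_live[is_live == 1].mean())  # live rejected as spoof
    return apcer, bpcer
```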

Table 3. Semantic information study results in Sect. 5.1. (a) AENet\(_\mathcal {S}\), which depends only on semantic attributes for classification, cannot surpass the performance of the baseline. (b) AENet\(_{\mathcal {C},\mathcal {S}}\), which leverages all semantic attributes, achieves the best result. Bold marks the best results; \(\uparrow \) means a bigger value is better; \(\downarrow \) means a smaller value is better
Fig. 5. Representative examples of the effect of dropping partial semantic attributes on AENet\(_{\mathcal {C},\mathcal {S}}\) performance. Higher APCER\(_{\mathcal {S}^\text {s}}\) and BPCER\(_{\mathcal {S}^\text {f}}\) are worse. (a) Spoof types on which AENet\(_{\mathcal {C},\mathcal {S}}\) w/o \(\mathcal {S}^\text {s}\) achieves the worst APCER\(_{\mathcal {S}^\text {s}}\). (b) Face attributes on which AENet\(_{\mathcal {C},\mathcal {S}}\) w/o \(\mathcal {S}^\text {f}\) achieves the worst BPCER\(_{\mathcal {S}^\text {f}}\)

Qualitative Evaluation. Success and failure cases for live/spoof and semantic attribute predictions are shown in Fig. 6. For live examples, the "glasses" and "hat" attributes of the first example in Fig. 6(a-i) help AENet\(_{\mathcal {C},\mathcal {S}}\) pay more attention to the cues of the live image and further improve the live/spoof prediction. For the first example in Fig. 6(a-ii), AENet\(_{\mathcal {C},\mathcal {S}}\) significantly improves the live/spoof classification over the baseline, because the spoof semantic attributes, including "back illumination" and "phone", help AENet\(_{\mathcal {C},\mathcal {S}}\) recognize the distinct characteristics of the spoof image. Note that the prediction of the second example in Fig. 6(b-i) is mistaken.

5.2 Study of Geometric Information

Based on AENet\(_{\mathcal {C},\mathcal {G}}\) under different settings, we design four models, as shown in Table 2(b), and use the semantic attributes we annotated to analyze the usage of geometric information in the face anti-spoofing task. The key observations are:

Depth Maps are More Versatile. As shown in Table 4(a), geometric information is insufficient as the sole supervision for live/spoof classification, but it can boost the baseline when used as auxiliary supervision. We further study the impact of each kind of geometric information on AENet\(_{\mathcal {C},\mathcal {G}}\). As shown in Fig. 7(a), AENet\(_{\mathcal {C},\mathcal {G}}\) w/o \(\mathcal {G}^\text {d}\) performs best on the macro spoof type "replay", because reflection artifacts appear frequently in its three micro types. For "phone", AENet\(_{\mathcal {C},\mathcal {G}}\) w/o \(\mathcal {G}^\text {d}\) improves over the baseline by 56\(\%\). However, AENet\(_{\mathcal {C},\mathcal {G}}\) w/o \(\mathcal {G}^\text {d}\) performs worse than the baseline on the macro spoof type "print". Moreover, AENet\(_{\mathcal {C},\mathcal {G}}\) w/o \(\mathcal {G}^\text {r}\) greatly improves the baseline's classification performance on both "replay" and "print"; for "poster" in particular, it improves over the baseline by 81\(\%\). Therefore, the depth map improves classification performance on most spoof types, whereas the benefit of the reflection map is mainly reflected in the macro type "replay".

Sensitive to Illumination. As shown in Fig. 7(a), within the macro spoof type "print", AENet\(_{\mathcal {C},\mathcal {G}}\) w/o \(\mathcal {G}^\text {r}\) performs much worse on "A4" than on "poster" and "photo", although all three belong to "print". The main reason for this large performance gap is that learning the depth map is sensitive to dark illumination, as shown in Fig. 7(b). When we compute APCER only under the other illumination conditions (normal, strong and back), AENet\(_{\mathcal {C},\mathcal {G}}\) w/o \(\mathcal {G}^\text {r}\) achieves almost the same results on "A4", "poster" and "photo".

Table 4. Geometric information study results in Sect. 5.2. (a) AENet\(_{\mathcal {G}^\text {d}}\), which depends only on the depth map for classification, performs worse than the baseline. (b) AENet\(_{\mathcal {C},\mathcal {G}}\), which leverages all geometric information, achieves the best result. Bold marks the best results; \(\uparrow \) means a bigger value is better; \(\downarrow \) means a smaller value is better
Fig. 6. Success and failure cases. Row (i) presents live images and row (ii) presents spoof images. For each image, the first row is the highest live/spoof prediction score of the baseline; the others are the highest live/spoof and semantic attribute predictions of AENet\(_{\mathcal {C},\mathcal {S}}\). Blue indicates correctly predicted results and orange indicates wrong results. In detail, we list the top three prediction scores of face attributes in the last three rows of each image (Color figure online)

6 Benchmarks

To facilitate future research in the community, we carefully build three different benchmarks to investigate face anti-spoofing algorithms. For a comprehensive evaluation, besides ResNet-18 we also provide corresponding results based on a heavier backbone, i.e. Xception. Detailed results based on Xception are given in the supplementary material.

6.1 Intra-dataset Benchmark

Based on this benchmark, models are trained and evaluated on the whole training and test sets of CelebA-Spoof. This benchmark evaluates the overall capability of the classification models. According to the input data type, face anti-spoofing methods fall into two categories, i.e. "video-driven methods" and "image-driven methods". Since the data in CelebA-Spoof are image-based, we benchmark state-of-the-art "image-driven methods" in this subsection. As shown in Table 5, AENet\(_{\mathcal {C},\mathcal {S}, \mathcal {G}}\), which combines geometric and semantic information, achieves the best results on CelebA-Spoof. Specifically, our approach outperforms the state of the art by 38% with much fewer parameters.

Table 5. Intra-dataset benchmark results on CelebA-Spoof. AENet\(_{\mathcal {C},\mathcal {S}, \mathcal {G}}\) achieves the best result. Bold marks the best results; \(\uparrow \) means a bigger value is better; \(\downarrow \) means a smaller value is better. * Model 2 defined in Auxiliary can be used as an "image-driven method"

6.2 Cross-Domain Benchmark

Since face anti-spoofing is an open-set problem, even though CelebA-Spoof contains diverse images, it cannot cover all spoof types, environments, sensors, etc. that exist in the real world. Inspired by [4, 18], we carefully design two protocols for CelebA-Spoof based on real-world scenarios; in each protocol, we evaluate the performance of trained models under controlled domain shifts. 1) Protocol 1 evaluates the cross-medium performance across spoof types. This protocol covers 3 macro spoof types, "print", "replay" and "paper cut", each of which contains 3 micro spoof types. In each macro type, we choose 2 micro types for training and leave the remaining one for testing; specifically, "A4", "face mask" and "PC" are reserved for testing. 2) Protocol 2 evaluates the effect of input sensor variations. According to imaging quality, we split the input sensors into three groups: low-quality, middle-quality and high-quality. Since we need to test on three different kinds of sensors and the average FPR-Recall performance is hard to measure, we do not include FPR-Recall in the evaluation metrics of Protocol 2. Table 6 shows the performance under each protocol.
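The two protocols can be read as simple filters over per-image metadata, as in the sketch below. This is illustrative only: the metadata keys, the handling of live images (which carry no spoof type), and the one-group-held-out arrangement for Protocol 2 are our assumptions about the release format, not a specification of it.

```python
# Protocol 1: hold out one micro spoof type per macro type for testing.
HELD_OUT = {"A4", "face mask", "PC"}  # test micro types named in the text

def protocol1_split(samples):
    # Live images have spoof_type None here; keeping them in both pools
    # (subject-disjointly) is an assumption.
    train = [s for s in samples if s.get("spoof_type") not in HELD_OUT]
    test = [s for s in samples
            if s.get("spoof_type") in HELD_OUT or s.get("spoof_type") is None]
    return train, test

# Protocol 2: vary the input sensor; one plausible arrangement holds out one
# sensor-quality group at a time.
def protocol2_split(samples, test_group):  # test_group in {"low", "middle", "high"}
    train = [s for s in samples if s["sensor_quality"] != test_group]
    test = [s for s in samples if s["sensor_quality"] == test_group]
    return train, test
```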

Fig. 7. Representative examples of the effectiveness of geometric information. Higher APCER\(_{\mathcal {S}^\text {s}}\) is worse. (a) AENet\(_{\mathcal {C},\mathcal {G}}\) w/o \(\mathcal {G}^\text {d}\) performs best on the macro spoof type "replay", and AENet\(_{\mathcal {C},\mathcal {G}}\) w/o \(\mathcal {G}^\text {r}\) performs best on the macro spoof type "print". (b) The performance of AENet\(_{\mathcal {C},\mathcal {G}}\) w/o \(\mathcal {G}^\text {r}\) on spoof type "A4" improves largely if we only calculate APCER under the illumination conditions "normal", "strong" and "back"

Table 6. Cross-domain benchmark results on CelebA-Spoof. Bold marks the best results; \(\uparrow \) means a bigger value is better; \(\downarrow \) means a smaller value is better
Table 7. Cross-dataset benchmark results. AENet\(_{\mathcal {C},\mathcal {S},\mathcal {G}}\) based on ResNet-18 achieves the best generalization performance. Bold marks the best results; \(\uparrow \) means a bigger value is better; \(\downarrow \) means a smaller value is better

6.3 Cross-Dataset Benchmark

In this subsection, we perform cross-dataset testing between CelebA-Spoof and the CASIA-MFSD dataset to construct the cross-dataset benchmark. On the one hand, we offer a quantitative measure of the quality of our dataset; on the other hand, this benchmark evaluates the generalization ability of different methods. The currently largest face anti-spoofing dataset, CASIA-SURF [32], adopted FAS-TD-SF [28] (trained on SiW or CASIA-SURF and tested on CASIA-MFSD) to demonstrate its quality. Following this setting, we first train AENet\(_{\mathcal {C}, \mathcal {G}}\), AENet\(_{\mathcal {C}, \mathcal {S}}\) and AENet\(_{\mathcal {C}, \mathcal {S}, \mathcal {G}}\) on CelebA-Spoof and then test them on CASIA-MFSD. As shown in Table 7, we conclude that: 1) the diversity and large quantity of CelebA-Spoof drastically boost the performance of vanilla models; a simple ResNet-18 achieves state-of-the-art cross-dataset performance. 2) Compared to geometric information, semantic information equips the model with better generalization ability.

7 Conclusion

In this paper, we construct a large-scale face anti-spoofing dataset, CelebA-Spoof, with 625,537 images from 10,177 subjects, annotated with 43 rich attributes on face, illumination, environment and spoof types. We believe CelebA-Spoof is a significant contribution to the face anti-spoofing community. Based on these rich attributes, we further propose a simple yet powerful multi-task framework, namely AENet. Through AENet, we conduct extensive experiments to explore the roles of semantic information and geometric information in face anti-spoofing. To support comprehensive evaluation and diagnosis, we establish three versatile benchmarks to evaluate the performance and generalization ability of various methods under different carefully designed protocols. With several valuable observations revealed, we demonstrate the effectiveness of CelebA-Spoof and its rich attributes, which can significantly facilitate future research.