
1 Introduction

The acquisition and analysis of latent fingerprint marks is one of the oldest disciplines in forensic science and paved the way for the utilization of the fingerprint as a biometric modality in a wide range of applications. Of course, there are known differences between the two domains, briefly summarized as follows:

(1)

    The fingerprint trace (finger mark) is formed by human secretions (e.g., invisible human sweat) left by a finger touching a surface (also called a latent mark), or it is caused by the removable, adherent or deformable nature of the surface, such as other liquids on the surface (e.g., visible blood or ink) or surface dust (often called a patent mark), or ductile and plastic surface behavior (called a plastic mark, see [1, pp. 105–106]). Depending on the visibility of the mark, the fingerprint characteristics might be invisible to the human eye. As the presence and location of fingerprint traces are not known a priori in forensics, the marks first need to be searched for and detected. Due to the small size of the marks and their often invisible (latent) nature on the surface, the search needs to include mechanisms to render the mark visible and/or accessible for further acquisition techniques. This can be done either by a physical or chemical treatment (see details in Sect. 2.3), allowing camera photos to be acquired, or with contact-less methods which exploit properties of the residue using special sensors, such as optical sensing in the nanometer range or special lighting conditions (UV or infrared, see Sect. 2.4), further combined with an additional so-called digital treatment. Contact-less, non-destructive approaches are of course very valuable, as they allow a trace to be processed several times without altering the trace itself.

(2)

    For trace detection, as well as for analysis after acquisition, the trace needs to be separated or segmented from the surface in order to determine whether the finger mark contains a sufficient number of forensically relevant characteristics of sufficient quality (see Sect. 2.2).

(3)

    The analysis is usually performed by a forensic expert (called dactyloscopy expert or latent print examiner) supported with further means (microscopes, digital processing, etc.) as described in Sect. 2.5.

(4)

    Additional forensic challenges which occur in case work are, for example:

(a) overlapping marks: Detection of overlapping marks, determination of the kind of overlapping traces (e.g., when other traces such as fibers are expected) and/or the number of overlapping marks, and separation of the marks with a corresponding quality assessment (see e.g. [2, 3]).

(b) trace age estimation: Determination of the absolute or relative time at which the trace was left at the crime scene. The trace age is valuable for excluding or including traces which are, e.g., older or younger than the time of interest, as summarized in [4].

(c) forgery detection: As discussed in several publications, such as Harper [5], Wertheim [6] and more recently [7], crime scene trace forgeries exist and need to be detected. In [8], for example, several fingerprints printed with artificial sweat and corresponding forgery detection approaches are summarized and benchmarked.

In comparison to the forensic case, in the biometric case the fingerprint is explicitly and intentionally presented to the biometric sensor, producing a so-called exemplar fingerprint. Therefore, the detection from the forensic case (1) is limited here to searching for and locating the fingerprint within a limited-size sensor region. With respect to (2), depending on the kind of biometric sensor used, the surface separation and segmentation is fixed and well defined. The analysis from (3) is usually performed by a biometric algorithm.

With respect to the further challenges (4):

(a) overlapping marks occur in the case of contact-based sensing, where remaining secretions from a preceding touch remain on the sensor. Here it might be valuable if the sensor detects such disturbances and performs or requests a cleaning.

In biometrics, the mark age (trace age, (4)(b)) can be determined explicitly during sensor acquisition from the sensor time. For both domains, the age of the subject might also be of interest; see for example the discussion of challenges caused by the aging of fingerprints in [9].

Forgery detection (c) is also relevant for biometric applications. The kinds of forgeries differ with respect to time and means: artificial sweat printings cannot easily be placed in front of a biometric sensor to forge it, but 3D fingers can of course be produced to forge both domains. Liveness detection or presentation attack detection is therefore a relevant feature in the biometric domain, while in the forensic domain even marks and prints from dead subjects might be of interest in order to link them to a crime or to identify corpses.

In the forensic case, the finger mark is evaluated in order to determine whether the mark's characteristics belong to a particular known subject, by comparing it either with exemplar prints taken under controlled conditions directly from the subject or with other latent marks found at crime scenes where the subject's identity is not yet known. Nowadays, the potential outcomes of this comparison are an inconclusive result if an insufficient number of usable features is present, an exclusion for sufficiently different patterns, or an identification if the mark has a sufficient number of characteristics in common with the pattern it has been compared to. In the future, alternative measures such as likelihood ratios might be used (see e.g. [10]). In the biometric case, a verification or identification is performed by using samples from a template database which has to be created in advance in the enrollment stage (called enrollment sample(s), reference(s) or template(s)). Known error rates are the false rejection rate (FRR, also known as type I error) and the false acceptance rate (FAR, also known as type II error), as well as the failure to enroll rate (FTE) and the failure to acquire rate (FTA); additionally, the equal error rate (EER) is used to describe the performance of a biometric system [11, pp. 6–12]. An additional challenge here is the sensor dependency of template and test samples, and cross-sensor evaluations are therefore of interest (see e.g. [12]).
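The threshold trade-off behind FAR, FRR and EER can be sketched in a few lines. The comparison scores below are purely illustrative assumptions, not data from any cited system:

```python
def far_frr(genuine, impostor, threshold):
    """FAR: fraction of impostor scores accepted; FRR: fraction of
    genuine scores rejected. A sample is accepted if score >= threshold."""
    far = sum(s >= threshold for s in impostor) / len(impostor)
    frr = sum(s < threshold for s in genuine) / len(genuine)
    return far, frr

def equal_error_rate(genuine, impostor, steps=1000):
    """Approximate the EER by sweeping the decision threshold and taking
    the operating point where FAR and FRR are closest."""
    lo, hi = min(genuine + impostor), max(genuine + impostor)
    best_gap, best = 2.0, (1.0, 1.0)
    for i in range(steps + 1):
        t = lo + (hi - lo) * i / steps
        far, frr = far_frr(genuine, impostor, t)
        if abs(far - frr) < best_gap:
            best_gap, best = abs(far - frr), (far, frr)
    return sum(best) / 2

# illustrative comparison scores (higher = more similar)
genuine = [0.9, 0.8, 0.45, 0.6, 0.55]    # same-finger comparisons
impostor = [0.4, 0.65, 0.3, 0.25, 0.5]   # different-finger comparisons
```

In practice the EER is read off a DET or ROC curve computed over large score sets; the simple sweep above only illustrates the principle.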

This chapter provides a brief overview of the state of the art in the acquisition and analysis of latent marks. In line with the literature, e.g. [1], the term finger mark is used for the latent trace at the crime scene, whereas the term fingerprint is used for an exemplar print captured directly from the finger, e.g., within a biometric system. The term fingerprint pattern is used for the pattern of both fingerprints and finger marks.

The remainder of this chapter is structured as follows: Sect. 2.2 summarizes the fingerprint characteristics as a foundation for the comparison. The current state of the art regarding conventional latent mark acquisition techniques is discussed in Sect. 2.3. An overview of contact-less, non-destructive acquisition techniques is presented in Sect. 2.4. In Sect. 2.5 the finger mark analysis process is discussed. Afterward, particular legal challenges for new techniques are discussed in Sect. 2.6. Subsequently, the chapter is summarized in Sect. 2.7.

2 Fingerprint Characteristics

The foundations for the analysis of fingerprints and marks were primarily established by Johann C. A. Mayer [13], Johann E. Purkinje [14], William Herschel [15], Henry Faulds [16], Francis Galton [17], Edward Henry [18], Edmond Locard and Salil Chatterjee, whereas the usage of fingerprints in general dates back several centuries; see e.g. [19] for a historical timeline of forensic sciences.

Mayer [13, pp. 5–6] described the uniqueness of the fingerprint pattern for the first time in his explanation of the second copper plate: “Although the wrinkles of two humans never coincide with each other, nevertheless some humans have more similarities whereas others seem to differ more in their visual appearance. However, the peculiarities of all arrangements are similarly formed.”

Purkinje [14] describes a classification of nine global patterns of the fingerprint: simple arch, tented arch, oblique stripe, oblique loop, almond, spiral, ellipse, circle and double whorl. Starting in 1858, Herschel [15] utilized the fingerprint patterns as a means of identification in India. The uniqueness of the skin furrows is also described by Henry Faulds [16] based on experiments with fingerprints from monkeys and humans. It is also one of the first public articles considering the usage of fingerprints for performing individualization at crime scenes.

Galton [17] describes different kinds of arches, loops and whorls as global patterns, as well as minutiae points and pores. He also discusses the evidential value of fingerprints, indexing methods and personal identification. Galton also performed experiments on the persistence of the fingerprint pattern.

Henry [18] developed a classification system for fingerprints in collaboration with Galton. He differentiates between the delta and core points of the global level 1 pattern. For his primary classification he differentiates between loops (including arches) and whorls for the five pairs of two fingers [18, pp. 69–75].

Locard established the first rules towards a minimum number of minutiae points necessary for identification (see e.g. [20]) in 1914. Furthermore, he established the science of poroscopy [21] in 1912. However, his most important contribution to forensic science is probably the formulation of his exchange principle (see e.g. [22, p. 44]), which is the foundation for many other forensic disciplines as well. It basically states that every offender inevitably leaves traces at the scene of a crime and takes traces from it with him as well.

Chatterjee described the analysis of the edges of the ridge lines, known as edgeoscopy [21], in 1962.

Nowadays, three different levels of features are used within the scope of fingerprints [1, pp. 15–20]. The first level describes the global pattern which is visible on the fingertip even to the naked eye. The second level describes local characteristics known as minutiae points. These particular features are primarily used for matching fingerprints. The third level of features describes microscopic details such as pores or the edges of papillary lines. In the following, the characteristics of each feature level are described.

The first level of fingerprint features has already been used, e.g., by Galton [17] and Henry [18] for their classification systems. In particular, usually the five different global pattern types left loop, right loop, whorl, (plain) arch and tented arch are used.

In forensic investigations those patterns can be used for a quick exclusion. However, a matching level 1 pattern is insufficient for a verification or identification of individuals. Besides their different visual appearance, the patterns can be distinguished by the number of core and delta points. The delta points are characterized by a triangular-shaped region within the ridge flow; thus, the ridge flow has three different orientations within close proximity of this point. The core point is a point where the ridge orientation changes significantly or, in other words, a point with a non-continuous ridge flow in its neighborhood.

An arch has no core and delta points at all. Tented arches, left loops and right loops have one core and one delta point. Whorls have two core points and two delta points.
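These level 1 relations can be captured in a small lookup table. The sketch below is illustrative: a real system would first have to detect the singular points, e.g. via the Poincaré index of the orientation field, which is omitted here:

```python
# Level 1 singular point counts (cores, deltas) per global pattern,
# as listed above, together with the quick-exclusion rule: differing
# level 1 patterns allow an exclusion, while matching patterns alone
# are never sufficient for an identification.

SINGULAR_POINTS = {
    "arch":        (0, 0),
    "tented arch": (1, 1),
    "left loop":   (1, 1),
    "right loop":  (1, 1),
    "whorl":       (2, 2),
}

def quick_exclusion(pattern_mark, pattern_reference):
    """True if the two level 1 patterns already exclude a common origin."""
    return pattern_mark != pattern_reference
```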

The level 2 patterns describe local characteristics of the ridge flow. The most common level 2 features are called minutiae points. Other features include warts, scars, creases, wrinkles, incipient ridges and subsidiary ridges [1, p. 17]. The two most common minutiae types are ridge endings (a termination of the papillary line) and bifurcations (a splitting point of one papillary line into two ridges) as illustrated in Fig. 2.1.

Fig. 2.1 Illustration of minutiae and pores (re-sketched)

Usually four types of information are stored for each minutia point: the x and y coordinates, the minutia type and the orientation of the minutia. Depending on the utilized format, the origin of the coordinate system can be in the lower left or upper left corner of the image. Furthermore, the coordinates can be stored as pixel values or in metric units. The latter has the advantage of achieving a resolution-independent template. The orientation of a minutia is determined as depicted in Fig. 2.2.

Fig. 2.2 Minutiae orientation: ridge ending (left) and bifurcation (right) (re-sketched)

For ridge endings, the angle θ of the minutia is measured between a perpendicular line through its y coordinate and the prolonged ridge line. For bifurcations, the angle is measured between a perpendicular line through its y coordinate and the prolonged valley between the two ridges.
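The stored minutia information can be sketched as a small record. The field names, the 500 ppi default and the conversion helper below are illustrative assumptions, not a specific template format (standards such as ISO/IEC 19794-2 define their own encodings):

```python
from dataclasses import dataclass

MM_PER_INCH = 25.4

@dataclass
class Minutia:
    x: int          # pixel column
    y: int          # pixel row
    kind: str       # "ridge_ending" or "bifurcation"
    theta: float    # orientation angle in degrees, as in Fig. 2.2

    def to_metric(self, ppi=500):
        """Convert pixel coordinates to millimetres, yielding the
        resolution-independent representation mentioned above."""
        return (self.x * MM_PER_INCH / ppi, self.y * MM_PER_INCH / ppi)

m = Minutia(x=250, y=500, kind="bifurcation", theta=45.0)
```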

On the third level of features, primarily small details are investigated. Such features include pores, as illustrated in Fig. 2.1, and the edges of the ridges. The pores are formed by the sweat glands on the ridges. Due to their nature, a high acquisition resolution of at least 1000 ppi is required in order to extract such features reliably. Common biometric systems usually take no advantage of those features, see e.g. [23]. In contrast, especially for partial finger marks in forensics, such features can make the difference between an inconclusive comparison and an exclusion or identification of the mark.

3 Conventional Latent Mark Acquisition Techniques

Several latent mark detection and acquisition techniques exist and are applied in daily police work. Such methods are usually necessary to render the invisible residue from the fingertip visible. Some of those methods are also combined with special illumination techniques for further contrast enhancement.

The detection techniques are usually divided into methods for non-porous and porous substrates. On non-porous substrates the residue resides on top of the object carrying the finger mark. In contrast, the residue is absorbed over time into porous substrates. A detailed description of the multitude of detection techniques is provided in [1, pp. 111–173] and [24]. Due to the nature of these techniques, the latent mark is inevitably altered in terms of its chemical composition during processing. Therefore, forensic laboratories implement the ISO 17025 standard to produce consistent results [24]. In the following, the most commonly applied techniques for latent mark development, such as powdering, cyanoacrylate fuming, vacuum metal deposition, ninhydrin spraying, physical developer and multimetal deposition, are summarized.

The oldest detection technique for smooth non-porous substrates is the powdering of finger marks, see e.g. [1, pp. 136–137]. During this physical treatment, powder particles are applied to the mark using brushes in order to detect the mark. The particles adhere to the greasy, sticky or humid substances in the residue and thus render the mark visible. Afterward, the mark can be photographed and/or lifted using adhesive tape or gel lifters. Due to the low cost and low requirements regarding equipment and special training, this technique is still the most commonly applied method for latent mark detection at crime scenes. However, special substrate characteristics might require different powders in terms of color or other properties [1, pp. 136–137]. Other special forms of powdering include magnetic powders, which should cause less destruction by avoiding brushing, and wet powder, which, in contrast to standard powdering techniques, can be applied on wet substrates [1, p. 137].

A chemical treatment for non-porous substrates is cyanoacrylate fuming, also known as super glue fuming. This technique uses the effect of the polymerization of cyanoacrylate vapor at the fingerprint ridges. In particular, it reacts with eccrine and sebaceous components within the residue. The process is performed in special fuming chambers which provide a defined level of humidity and controlled heat sources for vaporizing the cyanoacrylate. It is the most commonly used technique for latent mark development on non-porous substrates in laboratories [1, p. 138]. The result of this process is a hard white polymer on the ridges. Thus, after the fuming, an additional staining is often applied to enhance the visibility of the developed marks.

The vacuum metal deposition technique is another method for visualizing latent marks on non-porous substrates [1, pp. 145–147]. The key aspect of this method is the effect that the finger mark residue hinders the deposition of metallic films. In this technique gold is evaporated under vacuum to form a thin layer of metal on the surface of an object. This layer is deposited across the surface and penetrates the residue which forms the mark. In a second step zinc is deposited in the same manner on the layer of gold. In contrast to the gold layer, this layer of zinc does not penetrate the fingerprint residue. Thus an increased contrast between the substrate and the mark is achieved. The result is usually a negative mark. An additional advantage of this technique is that it can be applied even after fuming the mark with cyanoacrylate.

On porous substrates the application of ninhydrin is one of the common detection techniques [1, pp. 114–124]. Ninhydrin reacts with amino acids, proteins and peptides within the residue resulting in a dark purple color. In the development process the substrate is often dipped into a solution of the reagent. Alternatively, the solution can be applied by spraying or brushing. Afterward, the samples are dried and stored at room temperature for several hours for developing the purple color caused by the chemical reaction. Besides ninhydrin, several analogs can be applied as well; diazafluorene (DFO), see e.g. [1, pp. 128–131] is one example for such an analog reagent.

An alternative to the ninhydrin treatment is the utilization of a physical developer [1, pp. 131–133]. This particular technique is sensitive to water-insoluble components of the residue and hence applicable to wet surfaces as well. The technique is based on the photographic physical developer. Once the substrate is placed in the reagent, silver is slowly deposited on it. The residue increases the amount of deposited silver, resulting in darker areas on the developed surface.

The multimetal deposition (MMD) technique is a two-step process [1, pp. 133–134]. In the first step gold is deposited on the fingerprint residue within a colloidal gold solution. In the second step a modified physical developer solution is used to amplify the visibility of the treated mark. This technique can be applied on porous and non-porous substrates.

After the development of the latent mark, it is usually acquired by taking photographs [24, pp. 3–38, 289–321]. Those photos are afterward developed or printed for the following investigation steps. Depending on the characteristics of the developed mark special illumination techniques can be applied for an additional enhanced contrast.

4 Contact-Less Latent Mark Acquisition Techniques

The contact-less acquisition of latent marks has the same purpose as conventional acquisition techniques: detecting the mark and making it usable for a forensic analysis by latent print examiners (LPEs). The main difference is the selection of the means of processing the mark. In particular, contact-less techniques exploit physical/optical properties and the chemical composition of the residue which can be sensed without direct contact with the mark. However, not every contact-less acquisition method can be considered non-destructive, see e.g. [24, pp. 294–295]. Especially the utilization of UV radiation is known for its potential impact on DNA traces within the mark. Several known contact-less approaches are selected and summarized in the remainder of this section.

Early experiments with detecting the inherent fingerprint luminescence with lasers date back to 1977 [25]. However, the high performance could only be reproduced for contaminated residues. Without such a contamination, less than 20% of the marks allowed for the detection of fingerprints by laser-excited luminescence [26]. Similar to the luminescence, the fluorescence of fingerprints can be investigated as well [24]. However, the quality of the results for untreated marks also depends on particular contaminations.

Other approaches exploit the diffuse reflection of light on the residue. Such approaches can be applied to recover latent marks from smooth, non-porous, glossy substrates such as glass, plastic or polished metals [1, p. 112], or to recover marks in dust [24]. Examples include oblique illumination setups, dark field illumination, coaxial illumination and the usage of polarized light (see [24, 27]). Those illumination techniques are already applied in forensic practice to enhance the contrast of latent and developed marks.

Various ultraviolet imaging techniques are also used in today’s police practice. The most commonly used UV imaging technique is the reflected UV imaging system (RUVIS), which usually employs shortwave UV radiation [24, pp. 289–299]. This approach utilizes specific UV absorption characteristics of the substrate and the residue as well as the diffuse reflection from the residue [1, pp. 112–113]. The results depend on the substrate, the angle of the light source and the type of the light source.

Another spectrum is used by the EVISCAN system [28]. The system employs a heat source emitting long wavelength infrared radiation and a high-resolution thermal vision camera to capture the diffuse reflection of IR radiation from the fingerprint in an oblique top measurement setup.

For glossy, curved substrates, a gloss-meter and a rotary stage are employed in [29] to acquire finger marks. Such a system has the advantage of compensating the perspective distortion which would occur in normal 2D imaging.

The technique of optical coherence tomography can be used to detect latent marks covered by a layer of dust [30]. The sensor is able to produce 3D volume data of light scattering material.

The approach described in [31] utilizes the chemical composition of the residue in conjunction with Fourier transform infrared spectroscopy (FTIR). This technique allows for acquiring latent marks from various porous and non-porous substrates. However, the sensor has significant limitations regarding the size of the samples.

Within the scope of the research project “DigiDak”, funded by the German Federal Ministry of Education and Research, reflection-based sensors are employed as well. The experiments use three different measurement principles: hyperspectral imaging in the UV to NIR range [32], chromatic confocal sensing [33] and confocal laser scanning microscopy [34]. The latter two techniques additionally allow for capturing height information (topography) from the latent marks or other types of traces. Due to the perpendicular illumination and measurement, those sensors primarily sense an intensity reduction in the areas covered with residue, caused by the previously mentioned diffuse reflection of the latent mark.

As an outcome of this research project, a data set of computer-generated finger marks is available upon request [35]. This particular data set consists of fingerprint patterns generated using SFinGe [36], which have afterward been printed on an overhead foil using artificial sweat and standard inkjet printers. Subsequently, each of the 24 samples has been acquired contact-lessly using a FRT MicroProf 200 surface measurement device equipped with a FRT CWL 600 sensor at resolutions of 500 and 1000 ppi.

A specific advantage of the non-destructive acquisition is the possibility to observe a mark over an interval of time by acquiring a series of scheduled images of it, called a time series in [4]. This is a foundation for estimating the age of the mark by determining its speed of degradation [4]. Additionally, such degradation can be used to determine the persistence of finger marks on the surface of specific substrates.

Estimating the age of latent marks is an old challenge which has not been solved yet. If it were possible to determine the age of a mark, its evidential value would increase significantly because it would be possible to prove that an individual was at the crime scene at the time of the crime. In [4], time series of latent marks covering the first 24 h after the placement of the mark are the foundation for extracting 17 features. The feature space consists of binary pixel features from intensity and topography images as well as statistical features. The individual aging speed is determined by calculating ten consecutive feature values for each sample. The experimental setup comprises numerous donors and various influence factors. In the experiments, two disjoint time classes are defined in order to determine with different classifiers whether a mark is younger or older than 5 h. Here, the classification accuracy for single features varies between 79.29% in the best case and 30.02% in the worst case. However, the performance can be slightly improved, up to 83.10%, if non-deterministic aging tendencies are excluded from the classification. For the combined feature space, the classification accuracy varies between 65.08% and 79.79%, depending on the number of excluded time series.
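The two-class decision can be illustrated with a toy sketch: a single feature is tracked over consecutive scans of a mark, and its degradation tendency (here a simple least-squares slope) is thresholded. The feature values and the threshold are synthetic illustrations, not the 17 features or the classifiers used in [4]:

```python
def slope(values):
    """Least-squares slope of equally spaced feature values."""
    n = len(values)
    mean_x = (n - 1) / 2
    mean_y = sum(values) / n
    num = sum((x - mean_x) * (y - mean_y) for x, y in enumerate(values))
    den = sum((x - mean_x) ** 2 for x in range(n))
    return num / den

def classify_age(series, threshold=-0.01):
    """Fresh marks degrade fast (steep negative tendency); older marks
    have already flattened out. The threshold is a toy assumption."""
    return "younger than 5 h" if slope(series) < threshold else "older than 5 h"

fresh = [1.00, 0.90, 0.81, 0.74, 0.68]   # strong decay over consecutive scans
old   = [0.50, 0.49, 0.50, 0.48, 0.49]   # nearly flat
```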

Besides the age estimation of finger marks, the degradation of the residue can be used to locate traces within a temporal feature space. This particular localization approach is motivated by the persistence of fibers at crime scenes. The temporal feature space for latent marks is based on spectral texture features which are observed over a series of consecutive scans. For comparison, this feature space is also applied in the spatial domain. The general approach of using temporal features is similar to the age estimation; however, there are some differences within the processing pipeline. In contrast to the age estimation, where a set of features is extracted from a series of samples, each sample is separated into blocks of 2 × 2 mm. Each block allows for determining the individual tendencies within the feature space in the region it covers. In [37], the experiments are performed on low-resolution scans (125 ppi) covering an area of 20 × 20 cm on three (semi-)porous substrates: copying paper, photographic paper and a textured catalog paper. Each substrate is prepared with 60 latent prints from six test subjects. The low acquisition resolution is necessary in order to achieve a reasonable acquisition time per scan of 2.1 h for copying paper and 1.1 h for the reflective photographic and catalog papers. The results for the three investigated substrates in [37] show an improved performance of the temporal feature space in comparison to the spatial feature space. The largest performance gain of 6.7 percentage points is achieved when eight consecutive images are used to determine the temporal features. However, the results in [37] also indicate that a large number of consecutive scans might lead to a deteriorated performance as well.

5 Latent Mark Analysis Process

After the acquisition of the mark, a digital preprocessing is usually applied for an additional enhancement of the fingerprint pattern. Depending on the substrate, various artifacts might interfere with the pattern of the mark and the contrast within the image. Thus, emphasizing the fingerprint pattern can be quite challenging because the fingerprint is not necessarily the dominating pattern within the image, as summarized in [33]. Hence, the digital preprocessing of latent marks often differs significantly from the enhancement of exemplar prints as used in biometric systems. Moreover, it is not possible to ask the subject to provide a new sample of better quality. These particular challenges are addressed in [38] for conventionally captured latent marks from the NIST Special Database 27 [39], and in [40] for contact-less acquired marks, using image processing and pattern recognition approaches.

The approach in [38] consists of a manual markup of the ROI, core and delta points, the block-based computation of multiple dominant orientations utilizing the short-time Fourier transform, the orientation field estimation using R-RANSAC and, subsequently, the enhancement by employing Gabor filters. The hypothesis creation and evaluation for determining the orientation fields using R-RANSAC is necessary to detect a plausible ridge flow. Otherwise, parts of the background orientations might be mixed with the fingerprint orientations, which could possibly alter minutiae points. The last step of utilizing Gabor filters is known from the processing of exemplar prints in biometric systems as well. Here, on the foundation of the local ridge frequency and orientation, the fingerprint pattern can be emphasized.
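The final Gabor filtering step can be sketched as follows. The kernel size, σ and the synthetic ridge patch are illustrative assumptions; a real enhancement would convolve such kernels over the whole image using the locally estimated orientation and frequency:

```python
import numpy as np

def gabor_kernel(theta, freq, sigma=4.0, size=17):
    """Even-symmetric Gabor kernel tuned to ridge orientation theta
    (radians) and spatial ridge frequency freq (cycles/pixel)."""
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    # rotate coordinates so the cosine oscillates across the ridges
    x_t = x * np.cos(theta) + y * np.sin(theta)
    envelope = np.exp(-(x**2 + y**2) / (2 * sigma**2))
    return envelope * np.cos(2 * np.pi * freq * x_t)

# a synthetic patch of vertical ridges with the matching frequency
freq = 0.1
_, x = np.mgrid[0:17, 0:17]
ridges = np.cos(2 * np.pi * freq * x)

aligned = float(np.sum(gabor_kernel(0.0, freq) * ridges))
crossed = float(np.sum(gabor_kernel(np.pi / 2, freq) * ridges))
# the filter responds strongly only when orientation and frequency match
```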

The approach in [40] utilizes all available data from the contact-less sensor, namely the intensity and the topography image. In contrast to [38], no human interaction is necessary during the processing of the image of the mark. The first step of this approach is the preprocessing of each input image using various Sobel operators, unsharp masking and various Gabor filters. Each preprocessed image is afterward processed separately. The feature extraction for the pattern-recognition-based decision whether a block contains fingerprint residue or not is performed in blocks of 50 × 50 μm. The feature space consists of statistical, structural and fingerprint semantics features. Afterward, each block is classified using pre-trained models. Subsequently, a fingerprint image can be reconstructed based on the classifier's decisions. The primary challenge of this approach is the training of the classifiers. This step is time-consuming and usually requires human interaction in order to obtain a ground truth for the supervised learning. The evaluation is performed for eight different substrates, ranging from the rather cooperative white furniture surface to the very challenging blued metal and golden oak veneer. In the best case, a classification accuracy of 95.1% is achieved. On golden oak, an accuracy of 81.1% is still achieved. However, an automated biometric feature extraction and matching is only successful for fingerprints from three of the eight substrates.
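The block-wise decision can be illustrated with a strongly simplified sketch. A single variance feature and a fixed threshold stand in for the statistical, structural and semantic feature space and the pre-trained classifiers described above:

```python
import numpy as np

def classify_blocks(image, block=4, threshold=0.01):
    """Cut the image into blocks and return a boolean map that is True
    where the block variance suggests fingerprint residue (a stand-in
    for the trained per-block classifiers of the described approach)."""
    h, w = image.shape
    out = np.zeros((h // block, w // block), dtype=bool)
    for i in range(h // block):
        for j in range(w // block):
            patch = image[i * block:(i + 1) * block, j * block:(j + 1) * block]
            out[i, j] = patch.var() > threshold
    return out

rng = np.random.default_rng(0)
img = np.zeros((8, 8))
img[:4, :4] = rng.normal(0.0, 0.5, (4, 4))   # textured "residue" region
result = classify_blocks(img)                # residue map of 2 x 2 blocks
```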

Another challenge for the preprocessing is the handling of overlapping marks. Such marks frequently appear at locations which are often touched, such as door handles, elevator buttons or touch screens. In current police practice, such traces are usually discarded if the non-overlapped region of the mark does not contain a sufficient number of minutiae points.

One of the first approaches addressing the challenge of separating two overlapping patterns is published in [2]. This approach is designed for conventionally captured marks. In the first step, the region mask must be marked manually: the two non-overlapped regions and the overlapped region are defined by the user. Afterward, the initial orientation field is determined, which contains two orientation labels within the overlapped region. The separation of the overlapped orientation field is performed using a relaxation labeling algorithm, resulting in two component fingerprints which are constructed using compatibility coefficients based on the local neighborhood of each block or object. After this step, the separated orientation fields are merged with the orientation fields of the non-overlapped regions.
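The relaxation idea can be illustrated with a strongly reduced sketch: each block in the overlapped region carries two candidate orientations, and the assignment to the two component fingerprints is iteratively updated so that each component stays smooth across neighboring blocks. The 1D block row, the compatibility measure and the data are toy assumptions; the actual algorithm in [2] operates on 2D blocks with proper compatibility coefficients:

```python
def compatibility(a, b):
    """Orientation similarity in [0, 1]; orientations are mod 180 degrees."""
    d = abs(a - b) % 180
    return 1 - min(d, 180 - d) / 90

def separate(candidates, iterations=5):
    """candidates[i] = (theta_0, theta_1) per block; returns labels[i]
    selecting which candidate belongs to component fingerprint A."""
    labels = [0] * len(candidates)
    for _ in range(iterations):
        for i in range(1, len(candidates)):
            prev = candidates[i - 1][labels[i - 1]]
            keep = compatibility(prev, candidates[i][labels[i]])
            swap = compatibility(prev, candidates[i][1 - labels[i]])
            if swap > keep:          # flip so that component A stays smooth
                labels[i] = 1 - labels[i]
    return labels

# two overlapping ridge flows, one near 10 deg and one near 100 deg,
# with the candidate order scrambled per block
blocks = [(10, 100), (100, 12), (8, 95), (105, 11)]
labels = separate(blocks)
component_a = [blocks[i][l] for i, l in enumerate(labels)]
```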

An extended separation approach for conventionally and contact-less acquired marks is proposed in [3]. Here, the context-based computation and optimization of parameters are introduced, e.g. for accounting for different acquisition resolutions. The biometric evaluation of the separation results shows an equal error rate of 5.7% for contact-less acquired marks and 17.9% for conventionally captured marks. These results show that the separation benefits from the increased acquisition resolution of 2540 ppi of the contact-less acquired latent marks.

For the analysis of latent marks, usually the ACE-V process (see [41]), or a variation of it, is applied. ACE-V is an abbreviation for the four processing stages: analysis, comparison, evaluation and verification.

During the first step, the analysis, the acquired latent mark is investigated regarding the overall quality and the clarity and number of usable features. Several influence factors, such as the substrate, the development medium, assumed deposition pressure or distortions, are taken into account in this step. If the latent print examiner comes to the conclusion that the quality is too low, e.g., due to a lack of sufficient features, the mark is discarded as insufficient without performing the remaining steps of the ACE-V process. Otherwise it is compared to a reference print in the following step.

The comparison step (second step) involves a side-by-side comparison of two fingerprint patterns: usually the latent mark from a crime scene and a reference print of known origin connected with an identity. First, the level 1 patterns are compared. The comparison can be aborted if those patterns do not match; if the patterns are identical, or not visible in one of the samples, the comparison is continued by establishing relationships between level 2 features. Due to potential distortions and low-quality areas, this is usually not a physical measurement with a fixed scale and is thus hard to automate. The features are usually matched by following ridge paths or by counting ridges between two feature points. The matching itself is a subjective process which requires extensive training. Especially for poor-quality marks the tolerance for variations is increased, which usually requires an increased number of features in order to decide about the result of the comparison in the next step.

The third step of the latent print examination is the evaluation. This step comprises the formulation of the final conclusion based on the analysis and comparison of the samples. The examiner has to decide whether both patterns originate from the same source. This also requires explaining the differences and variations found during the comparison. If the level 1, 2 and 3 features are sufficiently similar between the two prints, the conclusion is called individualization. Conversely, if a number of differing features are found and cannot be explained, e.g., by distortions, the conclusion is called exclusion, which means that the patterns originate from different fingers. The result is marked as inconclusive if the examiner is unable to make a decision beyond reasonable doubt, e.g., due to a lack of matching and non-matching features.

The last (fourth) step of the ACE-V process is the verification. It is performed because the comparison of two fingerprint patterns is a subjective process. The verification consists of an independent analysis, comparison and evaluation of the samples by a second examiner. It is intended to increase the reliability of the final conclusion by reducing the subjectivity of the decision; ideally, the second examiner is unaware of the initial investigation results. If both examiners reach the same conclusion, the examination of the latent mark is finished. Otherwise, a third examiner might repeat the investigation, or the outcomes can be discussed and reevaluated by the two examiners in order to reach an agreement.

The requirements for the decision-making are different in various countries. Currently two standards exist [42]: the numerical standard and the non-numerical standard.

The numerical standard defines a minimum number of matching level 2 feature points in order to draw the conclusion of identification. However, the threshold regarding the number of necessary features varies between 7 and 17 [42, p. 47]. Such a fixed threshold does not account for the rarity of particular feature points: the conclusion can be drawn from a specific number of rather common feature points just as well as from rarer ones. In order to account for this discrepancy in the evidential value of the features, some countries have switched to non-numerical standards [43]. These result in dynamic thresholds based on, e.g., the specificity of the features, considering the rarity of feature points and their relationship to other points. In other words, a smaller number of matching rare features can suffice for drawing the conclusion of identification, whereas a large number of matching common features might result in an inconclusive outcome. Thus, the non-numerical standard provides increased flexibility in the decision making. On the other hand, this places an increased responsibility on the examiner, who must provide sufficient statistical data and sophisticated knowledge of the state of the art to back the decision.
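The difference between the two standards can be sketched as two decision rules. The rarity weighting below (negative decimal logarithm of an assumed population frequency per feature) and both thresholds are invented for illustration and are not taken from any national standard.

```python
import math

def numerical_decision(n_matching, threshold=12):
    """Fixed-count (numerical) standard: identification once the number of
    matching level 2 features reaches the national threshold (7-17 in
    practice; 12 here is an arbitrary example value)."""
    return "identification" if n_matching >= threshold else "inconclusive"

def non_numerical_decision(match_frequencies, weight_threshold=4.0):
    """Rarity-weighted sketch of a non-numerical standard: each matching
    feature contributes -log10 of its assumed population frequency, so a
    few rare features can outweigh many common ones."""
    weight = sum(-math.log10(f) for f in match_frequencies)
    return "identification" if weight >= weight_threshold else "inconclusive"

# Ten common features (frequency 0.5) remain inconclusive, while four
# rare ones (frequency 0.01) already suffice for identification.
print(non_numerical_decision([0.5] * 10))   # inconclusive
print(non_numerical_decision([0.01] * 4))   # identification
```

The sketch makes the trade-off explicit: the numerical rule is trivial to apply but blind to feature specificity, whereas the weighted rule requires population statistics for every feature type.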

6 Legal Challenges of Applying New Techniques in the Latent Mark Processing

Each new procedure in forensics needs to be evaluated regarding its suitability and scientific foundation. In the US, a so-called Daubert challenge (see [44]) can be used to assess the suitability of a scientific method prior to the admission of the evidence in court. In such a Daubert challenge, the judge takes the role of a gatekeeper who screens scientific evidence in order to ensure that only relevant and reliable sources are admitted. In the original trial of Daubert v. Merrell Dow Pharmaceuticals in 1993, the Supreme Court provided a list of five factors which a judge might consider during the assessment of the scientific validity of a theory or method [44]:

  • Whether it [the method] can be (and has been) tested,

  • Whether it has been subjected to peer review and publication,

  • The known or potential rate of error [of the method],

  • The existence and maintenance of standards controlling the technique’s operation,

  • Whether it is generally accepted in the scientific community.

This list has been extended by additional factors after the initial Daubert decision, as summarized in [44, p. 38]. For new acquisition techniques this would require extensive testing and the definition of particular standards, as well as a comparison with existing techniques to show the validity of the results. Here, contact-less acquisition techniques have the advantage that they do not interfere with any conventional detection technique. Thus, it is possible to acquire the mark by non-destructive means in the first place and to verify the results using accepted standard procedures afterward.

However, in the context of latent mark examination in general, a critical review of court rulings is given in [45]. In essence, [45] argues that fingerprint evidence owes its acceptance in court to its usage over more than a century, even though it would hardly withstand a careful evaluation against the Daubert factors. In [46] the rate of error is investigated in more detail. The author describes multiple cases of erroneous identifications, indicating a non-zero error rate; however, he also states that the available data is inadequate to calculate a meaningful error rate. One way to account for the non-zero error rate is the usage of likelihood ratios (LR): instead of presenting the result of a binary decision, a quotient between the probabilities of two opposing hypotheses is given.

In the context of fingerprints [10], the two hypotheses are that the patterns originate from the same finger or from different fingers. Especially the variability of the pattern due to skin elasticity needs to be taken into account when determining the probabilities. In such an LR-based approach, the outcome of exclusion would correspond to an LR of zero, whereas identification corresponds to an LR of infinity (due to an assumed error rate of zero). In practice the likelihood ratio lies somewhere between those two extremes due to the variability of the patterns.

Although the first experiments in [10] look promising, they face a challenge in real-world applications: it is almost impossible to calculate the probability of the two patterns originating from different fingers, which is needed as the denominator of the LR, since this would require knowing the patterns and properties of all fingers in the world. Thus, this probability can only be estimated. Nevertheless, the application of LRs helps to express uncertainties in the decision process, see e.g. [47].
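The basic LR computation is a one-line formula; the numbers below are invented purely to illustrate the scale of the result and do not come from any of the cited studies.

```python
import math

def likelihood_ratio(p_e_same, p_e_diff):
    """LR = P(E | same finger) / P(E | different fingers). In practice the
    denominator can only be estimated, e.g. from the frequency of the
    observed feature configuration in a reference population."""
    return p_e_same / p_e_diff

# Illustrative (invented) numbers: the observed configuration is well
# explained by the reference print (0.8) but expected in roughly 1 of
# 10,000 random fingers (1e-4), yielding an LR of 8000 in favour of the
# same-source hypothesis.
lr = likelihood_ratio(0.8, 1e-4)
print(lr, round(math.log10(lr), 2))
```

Reporting the (logarithm of the) LR instead of a binary conclusion lets the examiner express how strongly the evidence supports one hypothesis over the other without claiming certainty.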

7 Summary

This chapter summarizes a selection of the state of the art in latent mark acquisition and analysis. Even though the comparison of fingerprints has a long tradition in forensic investigations, several crucial questions remain unanswered to date. From a technical point of view, a broad variety of sensors allows for acquiring latent marks without altering the trace composition by adding reagents. On the other hand, such techniques require extensive testing in line with the best practices of police forces before they can replace conventional techniques. Nevertheless, non-destructive sensors enable new investigation techniques such as age estimation or the observation of fingerprint persistence.

The comparison of fingerprint patterns poses several challenges as well. Especially the elasticity of the skin is a cause of uncertainty in the decision-making process. Here, it is necessary to establish a statistical foundation for determining likelihood ratios based on a common standard, in order to express and consider potential uncertainties while retaining comparability of the resulting values.

With respect to the legal requirements there is currently no common ground. Some countries employ the numerical standard with thresholds requiring between 7 and 17 matching features [42, p. 47], while others use non-numerical standards which account for the specificity of particular feature points.