1 INTRODUCTION

Firearms identification by the marks on cartridge cases is one of the important problems of forensic ballistics. Solving it makes it possible to link a particular firearm to the place of its criminal use. Although a firearm leaves many marks on a cartridge case during a shot, firearms identification remains a rather complex and sometimes ambiguous procedure. To reach an identification conclusion, firearm examiners rely on subjective criteria based on their previous experience.

The purpose of this work is to investigate the possibility of developing an objective computerized method suitable for forensic firearms identification. One stage of the research examines whether a fully connected neural network (FCNN) can be trained effectively by augmenting the training dataset of source images of firing pin impressions. The augmentation is carried out by subjecting the source images to spatial and brightness distortions in accordance with the predicted variability of individual features. The FCNN is trained on the augmented dataset, and the accuracy of multigroup classification of firing pin impressions is estimated.

Figure 1 shows the marks left by firearms on discharged cartridge cases. The most informative ones are the marks of the firing pin, breech face, ejector, and ejection port, as well as marks that depend on the firearm’s action, etc. Firearms identification is often carried out by examining firing pin impressions. In this work, images of hemispherical firing pin impressions were used.

Fig. 1.

Firearm marks on a discharged cartridge case: 1 is the firing pin impression, 2 is the mark of the ejection port, 3 is the ejector mark, 4 are the breech face markings, 5 is the extractor mark, 6 is the indicator mark, and 7 is the mark of the indicator aperture.

The development of an effective algorithm for automated classification of images of firing pin impressions is a complex technical problem, which is first of all due to the wide variety of types of micro-inhomogeneities on the firing pin surface that are transferred to the marks. For example, Fig. 2 shows images of firing pin impressions with dominant features of different types [1]. In addition, significant variability of features can be observed within one class (in this case, one class means firing pin impressions made by the same firearm). This variability of features can be caused by different reasons, e.g., the use of cartridges from different manufacturers, the presence or absence of a lacquer coating on the surface of a primer, different amounts of lubricants in the firing mechanism, etc. These factors significantly complicate the comparison of images of firing pin impressions and, therefore, identification of firearms. For instance, Fig. 3 shows two pairs of firing pin impressions. The first pair represents the marks left by different firing pins (different classes), while the second pair represents the marks left by one firing pin (one class). Visually, the second pair has more differences than the first one.

Fig. 2.

Images of firing pin impressions with their distinctive individual characteristics.

Fig. 3.

Firing pin impressions: (a) non-matching marks and (b) impressions of the same firing pin.

2 MAIN CAUSES OF THE VARIABILITY OF FIRING PIN IMPRESSIONS

The high variability of features in firing pin impressions within one class complicates the procedure of automated comparison of their images and, ultimately, reduces its effectiveness. The main factors that lead to the high variability of the marks in firing pin impressions are as follows.

1. Despite the unification of ammunition, each shot occurs under slightly different conditions (the rate of increase in the pressure of the powder gases, the maximum pressure value, the weight and type of gunpowder used by different manufacturers, different behavior of the primer mixture, including misfire or hangfire, etc.). As a result, the individual characteristics look different in the resulting firing pin impressions.

2. The presence of various inhomogeneities in the form of oxidation, lacquer coating, specks, etc. on the primer before the firing pin strikes it is random and can significantly distort the overall picture [2]. Figure 4 shows the main types of inhomogeneities on primers that are not related to the shot itself: traces of the primer’s foil roll, notches and grooves, cavities caused by erosion of the surface of old primers, oxidation spots, and manufacturer’s markings. These inhomogeneities can be partially preserved in firing pin impressions, which can significantly complicate the comparison of individual characteristics.

Fig. 4.

Main types of inhomogeneities on the surface of primers that can be preserved in firing pin impressions: (a) traces of the primer’s foil roll, (b) notches and grooves, (c) cavities caused by erosion of the surface of old primers, (d) oxidation spots, and (e) manufacturer’s markings on primers.

3. Differences in the mechanical properties of the primer’s foil can cause impressions of the same firing pin with varying depths, diameters (Fig. 3b), and different levels of distinctiveness in individual characteristics.

4. A cartridge case cannot be installed in the scanner in exactly the same position each time its base is scanned, so matching features occupy different positions in different images of impressions of the same firing pin (match marks). This can cause angular misorientation of one image relative to another, as well as left-right and up-down displacements of the analyzed marks. The displacements generally do not exceed 10% of the diameter of a firing pin impression. Thus, the matching features of different impressions of the same firing pin can have different positions relative to the center of the image.

3 EFFECT OF THE TOPOLOGY OF FEATURES ON THEIR STABILITY

The topology of the inhomogeneities on the striking surface of the firing pin affects the repeatability of the marks in the impressions. Folds and stiffening ribs in the microrelief of the firing pin are consistently carried over to the impressions. Figure 5 shows images of impressions of the same firing pin with clearly defined boundaries of its individual characteristics in the form of spots of indeterminate shape (see mark 1 in Fig. 5). The boundaries of the spots quite accurately match when compared. The same is true for other marks with sharp boundaries. It can be assumed with high probability that the boundaries of these marks change slightly for different shots when using cartridges from the same manufacturer. The regions with a smoothly changing microrelief are more susceptible to changes, e.g., due to different maximum pressures of gunpowder gases on the inner surface of the primer for different shots. In Fig. 5, the regions with a smoothly changing microrelief in the central part are indicated as mark 2. It can be seen that the variability of the central regions of the spots is much higher than the variability of the sharp outer boundaries. These features can be taken into account to predict the most probable variation of firing pin impressions, e.g., when creating images with slightly modified individual characteristics.

Fig. 5.

Firing pin impressions: mark 1 represents sharp boundaries of individual characteristics and mark 2 represents blurred boundaries of individual characteristics.

In general, despite the variability of the characteristics and the presence of masking inhomogeneities on the surface of primers, having three firing pin impressions for each firearm specimen allows the examiner to visually identify matching individual characteristics and, when forming an augmented dataset, purposefully modify individual characteristics. In this case, pronounced inhomogeneities on the primer’s surface are partially or completely eliminated by superimposing images of neighboring fragments of the primer’s surface.

4 METHODS FOR ESTIMATING THE SIMILARITY OF FIRING PIN IMPRESSIONS

Let us briefly consider the main methods for estimating the similarity of firing pin impressions. In various automated ballistic identification systems, the similarity of images of firing pin impressions is estimated mainly by the maximum of a cross-correlation function (CCF), which quantifies the degree of their similarity. The position of the maximum corresponds to the coordinates of the central element of a mask for which the maximum similarity of the images is observed (Fig. 6a). The CCF of images of different firing pin impressions is usually characterized by several equivalent maxima with relatively small values (Fig. 6b). It should be noted that the CCF is not invariant to the rotation, scale, and position of one image relative to another. Thus, to implement this method, the CCF has to be evaluated for various rotations of one image relative to another. This makes it almost impossible to perform multigroup classification when searching an electronic database with thousands of digital images of firing pin impressions. The presence of various artifacts and glare in the images also significantly reduces the effectiveness of correlation analysis.
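
For illustration only (this is a generic sketch, not the implementation of any particular ballistic identification system), the following Python code scores the similarity of two equally sized grayscale impression images by the peak of their normalized CCF and, because the CCF is not rotation-invariant, repeats the evaluation over a set of trial rotations; the angle range and step are assumptions.

```python
# Generic sketch of CCF-based similarity scoring (assumed parameters).
import numpy as np
from scipy.ndimage import rotate
from scipy.signal import fftconvolve

def ncc_peak(a: np.ndarray, b: np.ndarray) -> float:
    """Peak of the normalized cross-correlation of two equally sized images."""
    a = (a - a.mean()) / (a.std() + 1e-9)
    b = (b - b.mean()) / (b.std() + 1e-9)
    ccf = fftconvolve(a, b[::-1, ::-1], mode="same") / a.size
    return float(ccf.max())

def similarity_over_rotations(a, b, angles=range(-35, 36, 5)):
    """Best CCF peak over trial rotations of image b relative to image a."""
    return max(ncc_peak(a, rotate(b, ang, reshape=False, mode="nearest"))
               for ang in angles)
```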

Fig. 6.

Cross-correlation function: (a) images of two impressions of the same firing pin and (b) images of two impressions of different firing pins.

J. Song developed the congruent matching cells (CMC) method, which allows one to effectively compare digital images of breech face markings [3, 4]. This method is based on the exclusion of low-informative regions of compared images from further analysis and the identification of matching inhomogeneities congruently located in the image. For this purpose, the analyzed images are divided into cells of the same size. Cells suitable for correlation (which contain spatial inhomogeneities) are used in the analysis.

The method of correlation cells, which is similar to the CMC method, was proposed in [5]. It is based on superimposing a grid of cells of the same size onto the analyzed images, finding the CCF maximum for each pair of correspondingly positioned cells of the first and second images, and determining the shifts of one image relative to the superimposed grid (see Figs. 7a and 7b) for which the number of matched cells with the greatest CCF values is maximal. The similarity of the shift coordinates at which the maximum number of matched cells with the maximum CCF value is observed (Fig. 7c) characterizes the degree of similarity of the inhomogeneities (characteristics) distributed over the compared images. The method of correlation cells can be successfully used to analyze images of breech face impressions and firing pin impressions.
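
The following simplified sketch illustrates the correlation-cells idea under our own assumptions about cell size, thresholds, and shift tolerance (the exact parameters of [5] are not reproduced here): both images are split into a grid of equal cells, the CCF peak and its shift are found for each pair of correspondingly positioned cells, and similarity is judged by how many high-correlation cells agree on a common shift.

```python
# Simplified correlation-cells sketch; cell size, thresholds, and the shift
# tolerance are illustrative assumptions, not values from [5].
import numpy as np
from scipy.signal import correlate2d

def cell_shifts(img1, img2, cell=64):
    """(dx, dy, CCF peak) for each pair of correspondingly positioned cells."""
    h, w = img1.shape
    results = []
    for y in range(0, h - cell + 1, cell):
        for x in range(0, w - cell + 1, cell):
            c1 = img1[y:y + cell, x:x + cell].astype(float)
            c2 = img2[y:y + cell, x:x + cell].astype(float)
            if c1.std() < 1e-3 or c2.std() < 1e-3:  # skip low-information cells
                continue
            c1 = (c1 - c1.mean()) / c1.std()
            c2 = (c2 - c2.mean()) / c2.std()
            ccf = correlate2d(c1, c2, mode="same") / c1.size
            py, px = np.unravel_index(ccf.argmax(), ccf.shape)
            results.append((px - cell // 2, py - cell // 2, float(ccf.max())))
    return results

def matched_cells(shifts, ccf_thr=0.5, tol=3):
    """Count high-CCF cells whose shifts cluster around the median shift."""
    good = np.array([(dx, dy) for dx, dy, p in shifts if p > ccf_thr])
    if good.size == 0:
        return 0
    med = np.median(good, axis=0)
    return int(np.sum(np.all(np.abs(good - med) <= tol, axis=1)))
```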

Fig. 7.

Method of correlation cells: (a) image of the first firing pin impression with a superimposed grid of equidistant cells, (b) image (shiftable along x and y) of the second firing pin impression with a superimposed grid of equidistant cells, (c) and (d) are distributions of maximum CCF values on the shift diagram for the images of impressions of the same and different firing pins, respectively.

The disadvantages of the CMC method and method of correlation cells include their low efficiency in analyzing firing pin impressions and breech face impressions with characteristics in the form of arcs and circles, as well as the difficulty of their use for multigroup classification.

Another approach is the method of potential functions, which describes the relief of firing pin impressions with descriptors that do not depend on the orientation of the images, e.g., the perimeter (P), the area of a characteristic (S), and the maximum and minimum moments of inertia (Imax and Imin) (see Fig. 8). The descriptors form what is known as a feature space [6, 7], and comparing the feature coordinates of the compared images allows one to draw conclusions about their similarity. Each object is characterized by a point in the feature space, i.e., a feature vector. The closer an object from the test set is to the analyzed mark in the feature space, the greater the similarity between their descriptors. This method is effective when comparing toolmarks with large individual characteristics in the form of arbitrarily shaped spots (Fig. 8a).
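
A minimal sketch of how such orientation-independent descriptors can be computed for one binary characteristic is given below; the exact descriptor definitions of [6, 7] may differ, so the perimeter estimate and the moment formulas here should be read as illustrative assumptions.

```python
# Illustrative descriptor extraction for a binary characteristic (mask).
import numpy as np

def blob_descriptors(mask: np.ndarray) -> dict:
    mask = mask.astype(bool)
    ys, xs = np.nonzero(mask)
    S = xs.size                                    # area (pixel count)
    cy, cx = ys.mean(), xs.mean()                  # centroid
    # second-order central moments
    mu20 = ((xs - cx) ** 2).sum()
    mu02 = ((ys - cy) ** 2).sum()
    mu11 = ((xs - cx) * (ys - cy)).sum()
    # principal (maximum/minimum) moments of inertia
    half_trace = 0.5 * (mu20 + mu02)
    delta = np.sqrt(0.25 * (mu20 - mu02) ** 2 + mu11 ** 2)
    Imax, Imin = half_trace + delta, half_trace - delta
    # crude perimeter estimate: mask pixels with at least one background neighbor
    pad = np.pad(mask, 1)
    interior = (pad[:-2, 1:-1] & pad[2:, 1:-1] &
                pad[1:-1, :-2] & pad[1:-1, 2:])
    P = int((mask & ~interior).sum())
    return {"S": int(S), "P": P, "Imax": float(Imax), "Imin": float(Imin)}
```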

Fig. 8.

Methods for comparing firing pin impressions with individual characteristics in the form of large spots: (a) extraction of descriptors in the firing pin impressions and (b) coordinate system and description of the object’s boundary in complex coordinates (the object’s boundary is shown by the dashed line).

In [8, 9], toolmarks with features in the form of large spots were compared using contour analysis. Representing features as contours is especially useful when the image is binary and the information relevant for identification is contained in the boundaries of objects. To represent characteristics as contours, the boundary of each characteristic is described by unit vectors that connect points in accordance with the directions of an 8-connected system (Fig. 8b). The contours encoded in this way have remarkable properties: the maximum absolute value of the normalized scalar product of the contours is invariant to their rotation, position in the plane, and scale [10]. In this case, the absolute value of the normalized scalar product indicates the degree of similarity of the contours, while its argument represents the angle of their misorientation. The disadvantage of this method is its low efficiency when the analyzed contours are close to a circle and when each mark has several characteristics.
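
A minimal sketch of this contour comparison, written from the description above rather than from the code of [8–10], is shown below: each boundary is encoded as a sequence of complex elementary vectors, and similarity is the maximum modulus of the normalized scalar product over cyclic shifts of one contour; the modulus measures similarity and the argument gives the misorientation angle.

```python
# Contour comparison sketch based on the description above (assumed detail:
# both contours are truncated to the same length before comparison).
import numpy as np

def contour_code(points: np.ndarray) -> np.ndarray:
    """Complex elementary vectors between consecutive boundary points (closed)."""
    z = points[:, 0] + 1j * points[:, 1]
    return np.diff(np.append(z, z[0]))

def contour_similarity(g1: np.ndarray, g2: np.ndarray):
    """Max |normalized scalar product| over cyclic shifts of the second contour."""
    n = min(len(g1), len(g2))
    g1, g2 = g1[:n], g2[:n]
    norm = np.linalg.norm(g1) * np.linalg.norm(g2)
    best = 0j
    for s in range(n):
        eta = np.vdot(np.roll(g2, s), g1) / norm   # normalized scalar product
        if abs(eta) > abs(best):
            best = eta
    return abs(best), np.angle(best, deg=True)     # similarity, misorientation
```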

Machine learning methods can also be employed for firearm examination. In [11], a neural network was used to classify images of firing pin impressions on the surface of a primer. The original images were preprocessed without spatial distortion of individual characteristics. In that work, all 747 cartridges, which were scanned to obtain images of firing pin impressions and to form the training and test sets, were discharged from only five specimens of a 9 mm Parabellum Vector SPI pistol. The small number of classes makes it difficult to qualitatively estimate the classification accuracy of a neural network. Since there were 747 original images for 5 classes, the authors did not need to form an augmented dataset or estimate its effectiveness.

In [12], a Siamese neural network was used for binary classification of matching and non-matching images of firing pin impressions. Instead of two-dimensional images, point clouds generated by 3D confocal scanning of firing pin impressions were used. In addition, the purpose of that research was the binary classification into categories “match” and “non-match”, rather than multigroup classification (where several predefined classes are specified).

This overview of related works allows us to conclude that there is no universal method for multigroup classification of firing pin impressions with different types of individual characteristics given a small number of objects in each class.

5 ARTIFICIAL NEURAL NETWORKS

Recently, artificial neural networks have become widely used in forensic science. Fully connected neural networks (FCNNs), being one of the simplest architectures, have been successfully employed in solving multigroup classification problems [13–15]. An FCNN consists of a layer of input neurons, several hidden layers, and a layer of output neurons. The numbers of input and output neurons are strictly determined: the former depends on the number of pixels (M) in the analyzed images, while the latter depends on the number of classes (N) that constitute the training set (Fig. 9). The number of hidden layers and the number of neurons in them are not strictly specified.

Fig. 9.

Typical architecture of a fully connected neural network.

At the preliminary stage, the network is trained to extract features characteristic of each class of objects. The training process is based on computing an error signal, backpropagating it, and adjusting the weight coefficients wij that connect neurons of adjacent layers (Fig. 9). Initially, the weight coefficients of the connections between neurons of adjacent layers are set randomly, with their total value generally not exceeding 1 for each layer. In the process of training, the connection weights are adjusted in such a way that, when an image of class i is input, the output neuron corresponding to this class produces a signal close to 1, whereas the signals on the other output neurons are close to 0. The connection weights are adjusted until a chosen criterion of neural network performance is met. In this work, the adopted criterion was that the deviation (ε) of the signal at each output neuron from its ideal value must be less than a predefined threshold (e.g., 0.05):

$$|d_j - y_j| = \varepsilon_j < 0.05 \quad \text{for} \quad j = 1, \ldots, N,$$

where dj is the ideal signal at the jth output neuron (0 or 1), yj is the real signal at the jth output neuron, and N is the number of classes.
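
As a small illustration of this stopping criterion (the threshold 0.05 is the example value quoted above), training can be considered converged when the condition holds for every output neuron:

```python
# Check of the stopping criterion |d_j - y_j| < eps for all output neurons.
import numpy as np

def converged(d: np.ndarray, y: np.ndarray, eps: float = 0.05) -> bool:
    """d: ideal outputs (0 or 1), y: real outputs; both of length N."""
    return bool(np.all(np.abs(d - y) < eps))
```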

As a result of training, the FCNN learns to ignore various random artifacts in the images of the training set and, at the same time, to extract features that characterize a particular class.

The use of neural networks for firearms identification by digital images of firing pin impressions is complicated by the small number of objects available for each firearm. This is because only a few cartridges are discharged when collecting samples of cartridge cases during test firing. For instance, in the Russian Federation, three rounds are fired from each firearm; in other countries, from two to five rounds. It is well known that, to effectively train a neural network, each class requires a large number of images, which are represented at different scales, have different orientations and positions of the object in the frame, represent different overlaps of the object with other objects, etc. The greater the variety of objects for each class, the better the network is trained and the more accurate the classification.

The analysis of images of firing pin impressions suggests that effective training of the FCNN may require a training sample with a much smaller number of objects per class than is necessary for solving traditional image classification problems. Indeed, the analyzed images of firing pin impressions suitable for identification have the same scale and resolution, are identically centered, and almost always represent the complete impression. Therefore, it can be assumed that 20 to 30 images per class can be sufficient to train the neural network.

6 CREATING IMAGES WITH DISTORTED INDIVIDUAL CHARACTERISTICS

The problem of a small number of objects in the training set can be solved by transforming each source image into a set of images with individual characteristics modified within acceptable limits. For this purpose, at the first step, source images 500 × 500 px in size were obtained, with the center of the firing pin impression positioned at the center of the frame. The frame regions not related to the firing pin impression were blackened. Then, the images were subjected to homomorphic processing [8, 9] to equalize their brightness (Fig. 10a). At the second step, to obtain an extended training dataset, new images were created by the following method.

Fig. 10.

Example of a source image and its clones with modified individual characteristics: (a) source image, (b) image with modified individual characteristics without rotation, (c) and (d) are images with modified individual characteristics, rotated by an angle of 7° and 15° clockwise, respectively.

1. The brightness of the regions with a small gradient was varied within 10–15% of the dynamic range.

2. The contours of the large characteristics with well-defined boundaries were deformed by no more than 5% of their linear size.

3. The region of the firing pin impression itself in the new images was randomly shifted within 5–7% of the linear frame size (500 × 500 px).

4. All images with modified characteristics were rotated with a step of 7–9° at angles from 0 to ±35°. For each source image, one image with modified individual characteristics and its eight rotations by ±7–9°, ±15–17°, ±23–25°, and ±32–35° were obtained (see Figs. 10c and 10d). Technically, it was possible to rotate the images within ±180°, thus eliminating the need to ensure their identical orientation; however, in that case, more images with modified features and, therefore, more training time would be required. A minimal sketch of steps 1–4 is given after this list.
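
The sketch below reproduces steps 1–4 with simple numpy/scipy operations; the paper does not specify how the contours of large characteristics were deformed, so a smooth random displacement field is used here as a stand-in, and all parameter values are taken from the ranges quoted above.

```python
# Illustrative augmentation of one source impression image (steps 1-4 above).
import numpy as np
from scipy.ndimage import gaussian_filter, map_coordinates, rotate, shift, sobel

def augment(img: np.ndarray, rng: np.random.Generator) -> list:
    out = img.astype(float).copy()
    h, w = out.shape
    # 1. vary the brightness of low-gradient regions by 10-15% of the dynamic range
    grad = np.hypot(sobel(out, 0), sobel(out, 1))
    low = grad < np.percentile(grad, 50)
    out[low] += rng.uniform(0.10, 0.15) * rng.choice([-1, 1]) * (out.max() - out.min())
    # 2. deform contours slightly (stand-in: smooth random displacement field,
    #    amplitude kept well below 5% of the linear image size)
    dx = gaussian_filter(rng.standard_normal((h, w)), 15) * 0.02 * w
    dy = gaussian_filter(rng.standard_normal((h, w)), 15) * 0.02 * h
    yy, xx = np.meshgrid(np.arange(h), np.arange(w), indexing="ij")
    out = map_coordinates(out, [yy + dy, xx + dx], order=1, mode="nearest")
    # 3. random shift within 5-7% of the linear frame size
    out = shift(out, rng.uniform(-0.06, 0.06, size=2) * w, mode="nearest")
    # 4. rotations with a 7-9 degree step in the range 0 to +/-35 degrees
    step = rng.uniform(7, 9)
    return [rotate(out, k * step, reshape=False, mode="nearest")
            for k in range(-4, 5)]

# Example: clones = augment(source_image, np.random.default_rng(0))
```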

After the images with modified features are created and rotated, each class is represented by several sets of similar images (branches). The number of branches depends on the number of source images (Fig. 11a). If necessary, two or three images with differently modified individual characteristics can be obtained from each source image. To train the FCNN, all images were reduced to 75 × 75 px. This reduction in resolution was made to reduce the training time and the amount of computational resources required at the stage of estimating the effectiveness of the proposed method. In the future, when developing an applied model to solve real-world problems, it will be necessary to use larger images. The two-dimensional source images of firing pin impressions for the training and test datasets were obtained using the POISC (Russia) and IBIS (Canada) ballistic scanners. All images were brought to the same size and resolution.

Fig. 11.

Generation of the training and test sets: (a) typical structure of an augmentation class, (b) formation of the training and test sets in accordance with variant 1, and (c) formation of the training and test sets in accordance with variant 2.

7 GENERATION OF THE TRAINING AND TEST DATASETS

The datasets were generated in accordance with the following rule. If the ith class contained three branches, then two of them formed class i in the training dataset and the third branch formed the same class in the test dataset (Figs. 11a and 11b). Thus, the images or their modifications included in the test dataset to estimate model performance were never used to train the model itself. Each branch contained a source image, at least one source image with individual characteristics modified within certain limits, and eight images with individual characteristics modified within certain limits that were rotated by different angles in the range of ±35 deg. In addition, the test dataset contained classes that were not included in the training dataset.

For a better description of the structure of the datasets, additional definitions of class groups were introduced. The classes of the test dataset that are also present in the training dataset are called matched classes. The other classes of the test dataset are called non-matched. Two variants of the training and test datasets were generated, which allowed us to estimate the accuracy of classification of matched classes and the accuracy of detecting non-matched classes. In the first variant (see Fig. 11b), the training and test datasets each contained 30 matched classes (approximately 700 images in the training dataset and 350 images in the test dataset), and 78 non-matched classes were included in the test dataset (approximately 900 images). In the second variant (Fig. 11c), the training and test datasets also contained 30 matched classes each (700 and 350 images, respectively) plus one combined class each. The combined classes were formed as follows: the 78 non-matched classes from the test dataset of variant 1 (Fig. 11b) were divided into two groups. The first group, included in the training dataset, contained the classes with the most pronounced individual characteristics. The second group included all other non-matched classes. It can be seen from Fig. 11c that the non-matched classes included in the combined class of the training dataset do not coincide with the non-matched classes included in the combined class of the test dataset. Including the combined class in both datasets makes it possible to train the FCNN to extract features of non-matched classes and to estimate the accuracy of their prediction using a confusion matrix.

To avoid using objects of the test dataset to form the combined class, images of firing pin impressions with similar dimensional and geometric characteristics (diameter of approximately 1.5 mm, hemispherical profile, etc.) from other firearm models can be used. For instance, for an FCNN trained to classify firing pin impressions of a Makarov pistol (9 × 18 mm caliber), the combined classes can be formed using firing pin impressions of pistols with similar characteristics, e.g., Taurus, Beretta-92, etc. (9 × 19 mm caliber).

8 TRAINING A FULLY CONNECTED NEURAL NETWORK

To estimate the effectiveness of the augmented training dataset formed by modifying the individual characteristics of firing pin impressions, an FCNN [13] with two hidden layers was constructed. The FCNN had the following structure: the input layer consisted of 5625 neurons (in accordance with the number of image pixels), the first hidden layer had 625 neurons, the second hidden layer had 156 neurons, and the number of neurons in the output layer depended on the number of classes in the training set (Fig. 12). As a result, the FCNN had approximately 3.5 million adjustable weight coefficients.
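
A minimal sketch of this architecture is given below; the framework (PyTorch) and the sigmoid activations are our assumptions, since the paper specifies only the layer sizes and the output behavior (signals close to 0 or 1).

```python
# FCNN with the layer sizes quoted above; activations and framework are assumed.
import torch.nn as nn

def build_fcnn(n_classes: int) -> nn.Sequential:
    return nn.Sequential(
        nn.Flatten(),                               # 75 x 75 image -> 5625 inputs
        nn.Linear(75 * 75, 625), nn.Sigmoid(),      # first hidden layer
        nn.Linear(625, 156), nn.Sigmoid(),          # second hidden layer
        nn.Linear(156, n_classes), nn.Sigmoid(),    # one output neuron per class
    )

# Weight count: 5625*625 + 625*156 + 156*N, i.e. roughly 3.6 million for N ~ 30,
# consistent with the ~3.5 million adjustable coefficients quoted above.
model = build_fcnn(n_classes=31)   # e.g. 30 matched classes + 1 combined class
```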

Fig. 12.

Architecture of the fully connected neural network used in this research.

The FCNN was trained in several steps. At the first step, the training set included only source images. Initially, the FCNNs were trained on the images with well-defined characteristics; then, the images with less pronounced characteristics were used. At the second step, the training dataset was extended to include images with modified individual characteristics without rotation. At the third step, the training dataset included images with modified individual characteristics randomly rotated relative to their initial positions by angles within ±35°.

The preliminary analysis of the FCNN showed that the inclusion of the images with modified individual characteristics into the training dataset improves the classification accuracy by several percent; the inclusion of the images with different rotations additionally improves the accuracy by 8–10%. Therefore, the results of training the FCNN on the dataset that included rotated images with modified individual characteristics are discussed below. In total, more than 50 FCNNs were trained with different initial weights, which were set randomly.

For matched classes, a success was interpreted as the appearance of a signal higher than the classification threshold on the corresponding output neuron or, with an unfixed classification threshold, the appearance of the maximum signal on the corresponding output neuron. A failure was interpreted as the appearance of a signal below the classification threshold on the corresponding output neuron or, with an unfixed classification threshold, the appearance of the maximum signal on the non-corresponding output neuron.

For non-matched classes (variant 1), a success was the case in which the signals on all output neurons remained below the classification threshold, while a failure was the appearance of a signal above the classification threshold on at least one output neuron. In the absence of a fixed classification threshold, it is impossible to estimate the accuracy of prediction of non-matched classes. That is why, in variant 2, the combined class was introduced; training on it made it possible to estimate the accuracy of prediction of non-matched classes. A minimal sketch of these decision rules is given below.
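
The following sketch expresses the decision rules described in the two paragraphs above (fixed threshold, maximum signal, and, as used later, the top-three rule) for a vector of output signals; it is an illustration of the rules as described, not the authors' code.

```python
# Decision rules for a vector y of FCNN output signals (illustrative).
import numpy as np

def decide_fixed_threshold(y: np.ndarray, thr: float = 0.7):
    """Predicted class index, or None (treated as non-matched in variant 1)."""
    return int(y.argmax()) if y.max() >= thr else None

def decide_argmax(y: np.ndarray) -> int:
    """Unfixed-threshold rule: the class with the maximum output signal."""
    return int(y.argmax())

def correct_in_top3(y: np.ndarray, true_class: int) -> bool:
    """Success if the correct class is among the three largest output signals."""
    return true_class in np.argsort(y)[-3:]
```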

9 ESTIMATION OF CLASSIFICATION ACCURACY

The accuracy of predicting the classes of the test dataset was estimated using the Accuracy, Recall, Precision, and F1 metrics.

Accuracy = (TP + TN)/(TP + TN + FP + FN) is the ratio of correct predictions to the total number of predictions, where TP, TN, FP, and FN are the numbers of true positive, true negative, false positive, and false negative predictions, respectively.

Recall = TP/(TP + FN) reflects the proportion of objects of a class that are recognized correctly and thus accounts for false negative predictions (misclassifications), which is a very important parameter when conducting a search through a database of firing pin impressions.

Precision = TP/(TP + FP) reflects the rate of false positive predictions and characterizes the ability of the classifier to distinguish a class from other, similar classes.

The F1 metric combines the two previous metrics: F1 = 2 × Precision × Recall/(Precision + Recall).
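
For completeness, the four metrics can be computed directly from the counts accumulated in the confusion matrix, as in the short sketch below.

```python
# Accuracy, Recall, Precision, and F1 from TP, TN, FP, FN counts.
def metrics(tp: int, tn: int, fp: int, fn: int) -> dict:
    accuracy = (tp + tn) / (tp + tn + fp + fn)
    recall = tp / (tp + fn) if (tp + fn) else 0.0
    precision = tp / (tp + fp) if (tp + fp) else 0.0
    f1 = (2 * precision * recall / (precision + recall)
          if (precision + recall) else 0.0)
    return {"Accuracy": accuracy, "Recall": recall,
            "Precision": precision, "F1": f1}
```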

To estimate the classification accuracy, several FCNNs were trained on 30 classes of images of firing pin impressions. For one of the best-trained FCNNs, using the confusion matrix, the metrics were calculated for matched classes, depending on the threshold level of the signal on one of the output neurons (see Fig. 13). The graph shows that, for this FCNN, the optimal threshold is 0.7, at which all metrics have fairly high values on the order of 78–82%.

Fig. 13.

Effectiveness of FCNN predictions in terms of different metrics depending on the threshold value on output neurons.

Then, the trained FCNNs were tested both with a fixed classification threshold and with the maximum signal on the output neurons without a classification threshold (see Table 1). The FCNNs were trained in two ways. In the first case, several FCNNs with different initial sets of weight coefficients were trained in parallel, the best FCNN was selected, and its classification accuracy was estimated. In the second case, a collection of FCNNs (at least 10) with different initial sets of weight coefficients was trained, the three FCNNs with the best performance were selected, and a new FCNN with weight coefficients averaged over the selected FCNNs was constructed and then retrained on the same dataset. In Table 1, the first method is denoted by “1 FCNN” and the second method by “3 FCNN optimization.”
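
A minimal sketch of the second scheme (“3 FCNN optimization”) is given below under the assumption that the networks are identically structured PyTorch modules; their parameters are averaged element-wise and the averaged model is then retrained on the same dataset.

```python
# Parameter-wise averaging of several identically structured FCNNs (assumed to
# be PyTorch modules, e.g. built with build_fcnn above).
def average_weights(models):
    avg = {k: v.clone().float() for k, v in models[0].state_dict().items()}
    for m in models[1:]:
        for k, v in m.state_dict().items():
            avg[k] += v.float()
    for k in avg:
        avg[k] /= len(models)
    return avg

# best3 = the three best of >=10 FCNNs trained from random initial weights
# fused = build_fcnn(n_classes=31); fused.load_state_dict(average_weights(best3))
# ... then retrain `fused` on the same augmented training dataset.
```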

Table 1. Classification accuracy

The table shows that the classification accuracy at the given threshold value of the signal on output neurons for matched classes ranges from 63% to 84%; for non-matched classes, it ranges from 80% to 95%. When the threshold is not fixed, the prediction accuracy ranges from 72% to 96% for matched classes when the classification is carried out by the maximum signal on one output neuron, and it ranges from 95% to 98% when the success is interpreted as the appearance of the signal on the corresponding neuron among three output neurons with the highest signal values.

Analysis of the data presented in Table 1 allows us to draw the following conclusions.

1. Using several FCNNs to construct one FCNN with averaged weight coefficients makes it possible to improve the classification accuracy, which is probably due to reducing the effect of overfitting.

2. The classification by the maximum signal on the output neuron (on one out of three output neurons with the maximum signals) without the fixed threshold makes it possible to improve the accuracy of classifying objects of matched classes.

3. Training the FCNN on the combined class formed from non-matched classes makes it possible to improve the accuracy of predicting non-matched classes up to 95% for the fixed threshold and up to 99% for the unfixed threshold.

It should be noted that the trained FCNN exhibits quite high precision. Figure 14 shows the images of firing pin impressions for two Makarov pistols with serial numbers 1784 and 1699 that have topologically similar individual characteristics. The FCNN confidently attributed these images to correct classes.

Fig. 14.

Images of firing pin impressions for two firearms with a similar topology of individual characteristics: (a) and (b) images of firing pin impressions on the cartridge cases discharged from one specimen of the Makarov pistol and (c) image of a firing pin impression on the cartridge case discharged from another specimen of the Makarov pistol.

10 DISCUSSION

Obviously, to improve the classification accuracy, it is necessary to increase the size of the images, because, when the source images are reduced to 75 × 75 px, the inhomogeneities are smoothed out with a loss of information about the small details of individual characteristics. However, to analyze images of 250 × 250 px or larger, it is necessary to switch to a convolutional neural network, an architecture developed specifically for image analysis.

It is also required to increase the number of classes in the training and test datasets.

11 CONCLUSIONS

The research showed the following:

– the fundamental possibility of developing an FCNN-based system for toolmark classification with characteristics that allow it to be used for forensic examination;

– the use of the augmented images of firing pin impressions that have individual characteristics modified within certain limits, in the case of a small number of initial objects in each class, allows the FCNN to be trained and used for the classification of firearms marks;

– the images of firing pin impressions can be classified by the FCNN into matched classes with an accuracy of about 84% for a fixed classification criterion, about 96% when the classification is carried out by the maximum (unfixed) signal on one output neuron, and about 98% in the case of classification by the three maximum signals on output neurons;

– the non-matched classes can be detected with an accuracy of about 95% in the case of the fixed classification criterion and 99% when the detection is carried out by the three maximum signals on output neurons;

– using several FCNNs to form one FCNN with averaged weight coefficients and retraining it, together with taking into account the maximum signals not only on one output neuron but also on two or three output neurons, makes it possible to improve the classification accuracy.

Overall, the research showed that augmenting samples of firing pin impression images by purposefully modifying their individual characteristics can provide more favorable conditions for effective training of neural networks and their subsequent use in searching databases of digital images of firing pin impressions.