1 Introduction

In modern society, information exchange is increasingly frequent, and personal identity authentication has become correspondingly important; it is widely used in electronic transactions, public safety, commercial finance and other fields. The human palm contains abundant vein and palmprint information. Veins are internal characteristics of the body and offer a degree of resistance to forgery (Yuan et al. 2013; Wu et al. 2012; Trabelsi et al. 2016). However, because of the physical structure of the palm, some palm vein information appears very faint under near-infrared light. The palm texture is composed of the principal lines of the palmprint and the orientation of the ridges. These textures have strong directivity, and their width and depth vary. Research shows that the distribution of the palmprint is unique and stable, so it can serve as a basis for identity authentication (Kong et al. 2006). Multi-modal recognition based on palmprint and palm vein can improve the reliability of personal identification. These characteristics give palm feature recognition technology great advantages in access control systems.

The initial concept of palm vein recognition appeared in the 1990s, and the Japanese company Fujitsu developed a palm vein recognition instrument in 2006. Since then, palm vein recognition has been widely studied. From 2006 to 2010, palm acquisition equipment was based on contact designs (Guo et al. 2017; El-Tarhouni et al. 2017; Xu et al. 2016; Mohsen et al. 2017a, b, c; Lee et al. 2012; Lin et al. 2013; Zhang et al. 2011). Contact collection raises health and safety concerns. The sensor surface in contact collection is easily contaminated, especially since access control systems are often used outdoors under poor hygienic conditions; this increases the false positive rate of the system and shortens the service life of the acquisition instrument.

Since 2010, a number of papers have studied palm feature recognition with contactless imaging (Nesrine et al. 2017; Mohsen et al. 2017a, b, c; Mokni et al. 2016; Tamrakar et al. 2016; Lin et al. 2015; Wang et al. 2014; Han et al. 2012; Yuan et al. 2012), obtaining the distribution of the palm vein and palmprint without physical contact. Contactless acquisition of hand features is non-intrusive and enjoys a high degree of public acceptance. It is more hygienic, meets the requirements of practical application, and is well suited to users who worry about disease transmission. At present, palm feature recognition methods for contactless imaging fall into three categories: subspace-based methods (Nesrine et al. 2017; Tamrakar et al. 2016; Yuan et al. 2012), line-feature-based methods (Mohsen et al. 2017a, b, c; Mokni et al. 2017; Lin et al. 2015) and statistical-feature-based methods (Mohsen et al. 2017a, b, c; Han et al. 2012; Wang et al. 2014). Subspace methods describe the palm vein and palmprint globally and project the hand image into a subspace to extract the feature vector. Although these methods can achieve a good recognition rate, the feature dimension is high and feature extraction time increases. Line-feature methods extract the curve or straight-line features of the palm vein and palmprint; they work well for high-quality hand images with few line breakpoints. Statistical-feature methods extract the texture features of the hand image with different filters and can achieve better recognition results by combining texture information in the frequency domain, but they require a set of experimental parameters tied to the specific environment; when the palm posture or the illumination conditions change, their stability is poor.

In contactless acquisition, the position of the palm is only loosely controlled. The user is required to keep the outstretched palm parallel to the sensor surface of the imaging device and within a certain depth-of-field range. The uncertainty of the position between the palm and the device leads to in-plane rotation, translation and scaling of the hand image. There are two solutions: normalize the palm image, or extract hand features invariant to translation, rotation and scaling. Normalization changes some properties of the original image that may affect feature extraction, and it also increases the running time, so this paper chooses the latter approach.

This paper proposes a novel feature extraction method that reflects the geometric features of palmprint and palm vein and is unaffected by scaling, rotation and translation. First, the inscribed circle of the palm is obtained as the feature extraction area, and its center serves as the reference point for locating the palm image; this eliminates the influence of palm translation during image acquisition. Second, several radial segments are drawn from the circle center to the circumference; the length of each segment equals the radius of the inscribed circle, and the image gradient of the pixels inside the inscribed circle is calculated with a template. Finally, the feature vector space is established from the relative radius of the centroid of each feature segment, defined as the ratio of the absolute distance between the circle center and the centroid to the radius of the inscribed circle. This method extracts stable palm features even after the palm image is translated, rotated or scaled. Moreover, feature extraction is fast, making the method well suited to contactless hand identification devices with high real-time requirements.

2 The basic idea of selecting the palm invariant features

This paper studies palm feature extraction for contactless acquisition. Because of the uncertainty of the position and angle between the palm and the imaging device, the palm in the image may be rotated in-plane, translated or scaled. It is therefore important to choose a palm feature that is independent of translation, rotation and scaling. In addition, a palm recognition instrument requires a small data space and fast computation.

Because of the physical structure of the palm, some palm vein information appears very faint under near-infrared light. The texture is composed of the principal lines of the palmprint and the orientation of the ridges. For such images, this paper proposes a novel feature extraction method, whose basic idea is as follows.

A line segment is composed of a number of points, and an image is composed of a number of lines. When enough line segments are used to cover the palm image, the distribution structure of the palmprint and palm vein can be described by the line segments combined with the corresponding gray-level information, as shown in Fig. 1.

Fig. 1 Constructing invariant features based on the geometric features of palmprint and palm vein. a Structural feature line segment in the inscribed circle. b Illustration of feature line segment. c Illustration of gray value of pixel in feature line segment. d Illustration of feature line segment’s centroids

According to the above considerations, this paper constructs an inscribed circle in the palm and chooses the circle center as the palm's stable reference point. Several radial segments are drawn from the circle center to the circumference; these rays constitute the feature segments.

The detailed process of invariant feature selection is as follows:

  1.

    Selection of stable reference point in palm

    First, choose a stable reference point in the palm whose position does not change with the relative position of the palm and the imaging device. If the relative spatial position between the palm vein and the reference point is taken as the palm feature, this feature will not change with the palm's position above the imaging device.

    Considering the limits of computation space and running speed, this paper selects some points on the palm vein and palmprint as feature points. The relative positions between the feature points and the reference point compose the palm feature. The more feature points, the more details of palm vein and palmprint are revealed, but the computational complexity also increases. The number of feature points is therefore a compromise among calculation space, running speed and feature recognition rate.

    Extracting the spatial position of palm vein and palmprint is not easy, and even a small number of errors introduces a larger recognition error. Therefore, this paper uses the gradient intensity value as the basis of calculation. The palm vein and palmprint have large gradient intensity values, which play a decisive role in the feature calculation.

    According to the above considerations, this paper constructs an inscribed circle in the palm and chooses the circle center as the palm's stable reference point. To ensure the inscribed circle is unique, it must meet the following conditions: it must be tangent to both sides of the palm contour, and its margin must pass through the junction point of the middle finger, the ring finger and the palm, as shown in Fig. 2.

    Fig. 2 Detection of palm inscribed circle

  2.

    Selection of centroid on feature segment

    This paper constructs a relative position relationship between palm vein, palmprint and the stable reference point. First, several radial segments are drawn from the circle center to the circumference; these rays constitute the feature segments, as shown in Fig. 1a. Each feature segment may cross several palm veins or palmprint lines. According to the principle of first-order moment invariants, the gradient of each point on the feature segment is multiplied by the relative distance of the point and the products are accumulated (Yuan et al. 2013); this sum equals the product of the total gradient along the feature segment and the relative distance of the centroid. The relative distance here is the ratio of the absolute distance from a point to the circle center to the radius of the inscribed circle. The relative distance of the centroid can thus be obtained by calculation.

    Because pixels located on the palm veins and palmprint have large gradient values, they directly influence the relative distance of the centroid on the feature segment. Therefore, the relative distance of the centroid reflects the spatial position of palm vein and palmprint. It is independent of the size, position and direction of the palm in the image, but it depends on the direction of the feature segment within the palm.

    To fix the directions of the feature segments uniquely, this paper presents an implementation method. The inscribed circle is divided into several sectors by the feature segments; once the direction of the first feature segment is determined, the directions of the other feature segments follow. In this paper, the direction from the circle center to the root point of the middle and ring fingers is taken as the direction of the first feature segment. Because this root point does not change when the position, direction or size of the palm in the image changes, the direction of the first feature segment is unique.

    Based on the above analysis, the centroid's relative distance on each feature segment reflects the spatial position of palmprint and palm vein, and it is unique and stable. As a palm feature, it is invariant to translation, rotation and scaling.

3 Extraction of the palm invariant features

At present, palm location methods fall into two types: segmentation of a square area in the palm, and segmentation of an inscribed circle area in the palm. This paper uses the relative spatial position between palm vein and reference point as the palm feature, and the circle has better rotation invariance than the square. Therefore, this paper chooses the inscribed circle as the region of interest and its center as the stable reference point of the palm.

3.1 Extraction of the palm reference point

Because the palm is approximately rectangular, several inscribed circles can be drawn in it. To make the size and position of the inscribed circle stable, this paper sets the following requirements for palm imaging: the thumb is separated, the other four fingers are held together, and the palm is stretched out into a plane parallel to the sensor plane of the imaging device, within the depth of field of the lens. The inscribed circle must meet the following conditions: it must be tangent to both sides of the palm contour, and its margin must pass through the junction point of the middle finger, the ring finger and the palm. This paper selects the circle center as the stable reference point of the palm.

The inscribed circle lies near the roots of the four fingers. One tangency point of the inscribed circle and the palm contour is located on the outer palm contour on the index-finger side; the other is on the outer palm contour on the little-finger side. That is, two boundary points of the inscribed circle lie on the outer palm contours of the index finger and the little finger. At the same time, the boundary of the inscribed circle passes through the root point of the middle and ring fingers, which is therefore a third boundary point. Because three non-collinear points determine a circle, the inscribed circle constructed in this paper must exist.

From the geometric properties of the circle, the perpendicular bisector of any chord passes through the center, and a tangent to a circle is perpendicular to the radius through the tangency point. According to these two properties, if another boundary point can be found, a chord can be constructed through that boundary point and the root point of the middle and ring fingers, and the tangent line at the boundary point can be obtained by line fitting. The center of the circle can then be determined.
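The construction above relies on the fact that three non-collinear points determine a circle. As a minimal illustration (a sketch, not the paper's tangent-based implementation), the circle through three known boundary points can be recovered by solving the two perpendicular-bisector equations as a linear system:

```python
import numpy as np

def circle_from_three_points(p1, p2, p3):
    """Return (center, radius) of the circle through three non-collinear points.

    Equating squared distances |c - p1|^2 = |c - p2|^2 and |c - p1|^2 = |c - p3|^2
    gives two linear equations in the center coordinates.
    """
    (x1, y1), (x2, y2), (x3, y3) = p1, p2, p3
    A = np.array([[x2 - x1, y2 - y1],
                  [x3 - x1, y3 - y1]], dtype=float)
    b = 0.5 * np.array([x2**2 - x1**2 + y2**2 - y1**2,
                        x3**2 - x1**2 + y3**2 - y1**2])
    center = np.linalg.solve(A, b)   # raises LinAlgError if the points are collinear
    radius = float(np.hypot(*(center - np.array(p1, dtype=float))))
    return center, radius
```

For example, the circle through (0, 0), (2, 0) and (0, 2) has center (1, 1) and radius √2.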

Referring to Fig. 3, the extraction process of the inscribed circle is as follows:

Fig. 3 Detection of palm inscribed circle

  1.

    Extraction of the hand contour. The points P and Q on both sides of the wrist contour are found along the vertical direction by edge detection. The contour is then tracked from P to Q using the hand-shape contour tracing method based on the directional gradient extremum (Yuan et al. 2010). The hand contour is thus extracted.

  2.

    Extraction of the middle fingertip position. The middle finger is the longest of the four folded fingers. The hand in the image is roughly horizontal, with the fingers pointing to the right, so the right-most position of the hand contour corresponds to the middle fingertip.

  3.

    Extraction of the intersection point \({P_1}\) of the middle finger's root, the ring finger's root and the palm. For a left-hand image, the ring finger lies below the middle finger. Starting from the middle fingertip and moving along the hand contour, the junction point C of the middle and ring fingertips is found. Then, in the corresponding gray image, starting from point C, the intersection point \({P_1}\) is found by tracking the gray-level minimum, and this point is marked in the binary image of the hand contour. For a right-hand image, the ring finger lies above the middle finger and the search direction is simply reversed, so the details are omitted.

  4.

    In the binary image of the hand contour, the inscribed circle is extracted. Its edge passes through the intersection point \({P_1}\) and is tangent to the hand contour; its center is the reference point O of the palm. The details are as follows (Yuan et al. 2013). ① Determine the region of candidate center points of the inscribed circle by detecting the boundary points on the palm contour lines on the index-finger side and the little-finger side, respectively. ② For every candidate center point in the region, calculate three distances: the distance to the palm contour line on the index-finger side, the distance to the palm contour line on the little-finger side, and the distance to the intersection point \({P_1}\). ③ The candidate point for which the difference among the three distances is minimal is the center of the inscribed circle, denoted O(x0,y0). The Euclidean distance between the center O and the intersection point \({P_1}\) is the radius of the inscribed circle, denoted R.

  5.

    The inscribed circle in the palm is thus constructed, with center O(x0,y0) and radius R. This paper chooses the inscribed circle as the ROI.
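Step ④ above can be sketched as follows. This is an illustrative sketch, not the authors' code: the two contour point sets and the candidate region are assumed to be supplied by the contour extraction of steps 1–3, and the function name and interface are hypothetical.

```python
import numpy as np

def find_inscribed_circle(candidates, index_side_pts, pinkie_side_pts, p1):
    """Pick the inscribed-circle center from a set of candidate points.

    candidates     : iterable of (x, y) candidate center coordinates
    index_side_pts : (K, 2) contour points on the index-finger side of the palm
    pinkie_side_pts: (M, 2) contour points on the little-finger side
    p1             : root point between the middle and ring fingers
    Returns (center, radius). The chosen center is the candidate whose three
    distances (to both contour lines and to P1) differ the least.
    """
    p1 = np.asarray(p1, dtype=float)
    best, best_spread = None, np.inf
    for c in np.asarray(candidates, dtype=float):
        d1 = np.min(np.linalg.norm(index_side_pts - c, axis=1))   # to index-side contour
        d2 = np.min(np.linalg.norm(pinkie_side_pts - c, axis=1))  # to pinkie-side contour
        d3 = np.linalg.norm(p1 - c)                               # to root point P1
        spread = max(d1, d2, d3) - min(d1, d2, d3)  # how far from equidistant
        if spread < best_spread:
            best_spread, best = spread, c
    return best, float(np.linalg.norm(p1 - best))
```

For two parallel contour lines 10 pixels apart with P1 midway between them, the candidate equidistant from all three wins, giving a radius of 5.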

3.2 Generation of gradient image

When calculating the centroid on a feature segment, the gradient value of every point on the segment is required, so the hand gray image must be converted into a gradient image. Referring to Fig. 4, the conversion principle is as follows:

Fig. 4 Block diagram of gradient image generation

  1.

    In the direction crossing a palm vein or palmprint line, the gray value of a pixel on the line is a gray minimum. The four directions (0°, 45°, 90°, 135°) are used to judge whether the gray value of a pixel is minimal in a small linear neighborhood. If it is, the pixel is a candidate pixel of palm vein or palmprint (Yuan and Wang 2011; Yuan and Zhang 2001). Pixels that do not meet the gray-minimum condition cannot belong to a vein or palmprint, so their gradient value is set to zero.

  2.

    When a pixel lies on a vein or palmprint line, it meets the gray-minimum condition and the two gradient values in the perpendicular direction are both relatively large. When the pixel deviates from the line, one of the two gradient values becomes large and the other small, as shown in Fig. 5. Therefore, the smaller of the two gradients in each direction meeting the gray minimum is taken as the gradient value of that direction, and the maximum over all such directions is taken as the gradient value of the pixel. The maximum gradient value corresponds to the central position of the vein or palmprint line; the gradient gradually decreases with distance from the center.

Fig. 5 Gradient values schematic diagram

According to the above method, the gradient value of every pixel in the image is calculated, forming a gradient image, as shown in Fig. 11.

Specific implementation steps are as follows:

  1.

    The gray value of pixels outside the inscribed circle (the feature extraction region) is set to zero.

  2.

    Each pixel in the feature extraction region is used in turn as the center pixel (x0,y0) to calculate the image gradient value I0(x0,y0) from the pixels in its neighborhood, as follows:

    1. Construct the gradient calculation template. The template consists of 3 × 3 sub-templates Xm arranged in a square matrix, where the subscript m = 0,1,…,8 is the index of the sub-template within the template; X0 is located at the geometric center of the template, as shown in Fig. 6a.

    2. Each sub-template is composed of l × l (l = 2,3,…) pixels arranged in a square matrix. The subscript n = 0,1,…, l × l − 1 is the index of a pixel within a sub-template; amn denotes the gray value of the nth pixel in the mth sub-template, and am0 lies at the geometric center of the sub-template.

    If a sub-template consists of an odd number of pixels, am0 is the pixel at its geometric center; if it consists of an even number, am0 is the pixel at the upper left of the geometric center. a00 corresponds to the center pixel (x0,y0) of the template, as shown in Fig. 6b, c.

    Fig. 6 Gradient calculation template. a Gradient calculation template. b The center position of the sub-template when the size of the sub-template is two. c The center position of the sub-template when the size of the sub-template is three

  3. The gray average \({\overline {F} _m}\) of the pixels within each sub-template is calculated as follows:

    $${\overline {F} _m}=\frac{1}{{l \times l}}\sum\limits_{{i=0}}^{{l \times l - 1}} {{a_{mi}}} .$$
    (1)

    The value of \({\overline {F} _m}\) is assigned to the center pixel of the sub-template:

    $${a_{m0}}={\overline {F} _m}.$$
    (2)
    4. The four directions (0°, 45°, 90°, 135°) are used to judge whether the center pixel of the template meets the gray-minimum condition. The judgment method is as follows:

      $${a_{i0}}>{a_{00}}<{a_{(i+4)0}},$$
      (3)

      where i ∈ {1, 2, 3, 4}. If formula (3) is not satisfied, the template lies in a region of gentle gray variation, and the image gradient value I0(x0,y0) of the center pixel is set to zero. If formula (3) is satisfied, there is an edge within the range of the template; go to step 5 and calculate the image gradient value I0(x0,y0).

    5. The image gradient value I0(x0,y0) is calculated as follows:

      $${I_0}({x_0},{y_0})=\mathop {\max }\limits_{{1 \leqslant i \leqslant 4}} \hbox{min} \left\{ {\left| {{a_{(i+4)0}} - {a_{00}}} \right|,\left| {{a_{i0}} - {a_{00}}} \right|} \right\}.$$
  3.

    Regenerate an image of the same size as the palm image. The calculated image gradient values I0(x0,y0) are assigned to the pixels at the corresponding positions, and the corresponding gradient image is obtained. Different sub-template sizes produce different effects on the generated gradient image; Sect. 5.2 determines the size of the sub-template through experiment.
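The per-pixel template computation of this subsection (the gray averages of Eqs. (1)–(2), the gray-minimum test of Eq. (3), and the directional minimum/maximum rule) can be sketched as follows. The sub-template offsets are illustrative assumptions, and border handling is left to the caller:

```python
import numpy as np

# Offsets (dy, dx) of the eight outer sub-template centers relative to X0,
# ordered so that indices i and i + 4 are opposite directions
# (0°, 45°, 90°, 135°, 180°, 225°, 270°, 315°).
_OFFSETS = [(0, 1), (-1, 1), (-1, 0), (-1, -1),
            (0, -1), (1, -1), (1, 0), (1, 1)]

def block_mean(img, y, x, l):
    """Gray average of the l x l sub-template centered at pixel (y, x) -- Eq. (1)."""
    h = l // 2
    return float(np.mean(img[y - h:y - h + l, x - h:x - h + l]))

def gradient_value(img, y0, x0, l=3):
    """Image gradient value I0(x0, y0) for one pixel.

    Zero unless the template center is a gray minimum in at least one of the
    four directions (Eq. (3)); otherwise the maximum, over qualifying
    directions, of the smaller of the two opposite-side differences. The
    caller must keep (y0, x0) far enough from the border for the template to fit.
    """
    a00 = block_mean(img, y0, x0, l)                       # center sub-template mean
    a = [block_mean(img, y0 + l * dy, x0 + l * dx, l) for dy, dx in _OFFSETS]
    grads = [min(abs(a[i + 4] - a00), abs(a[i] - a00))
             for i in range(4)
             if a[i] > a00 < a[i + 4]]                     # gray-minimum test, Eq. (3)
    return max(grads) if grads else 0.0
```

On a flat image the gray-minimum test fails everywhere and the gradient is zero; a dark vertical line on a bright background yields a positive gradient at its center.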

3.3 Extraction of the centroid’s relative radius on feature segment

As shown in Fig. 1, starting from the zero direction at the reference point in the palm and proceeding counterclockwise, the inscribed circle is divided into H sector regions, giving H feature segments K1,K2,…,KH. The distance between the centroid of a feature segment and the reference point in the palm is the centroid radius, denoted rj. The centroid radius of the jth feature segment is calculated as follows:

$${r_j}=\frac{{\sum\nolimits_{{i=1}}^{M} {i \times {I_{0i}}} }}{{\sum\nolimits_{{i=1}}^{M} {{I_{0i}}} }}.$$
(4)

Each feature segment is a radius of the inscribed circle. For convenience of calculation, the radius of the inscribed circle is rounded to an integer. In formula (4), i ranges from 1 to M, where M is the number of pixels along a radius in the horizontal direction, and I0i is the image gradient value at point i.

Except in the horizontal and vertical directions, a unit point i on a feature segment generally does not fall exactly on a pixel. Therefore, this paper uses bilinear interpolation to calculate its gradient value, as follows.

Let the four nearest pixels of the unit point (x0, y0) be A1, A2, A3, A4, with coordinates (i,j), (i + 1, j), (i + 1, j + 1), (i,j + 1) and gradient values I(A1), I(A2), I(A3), I(A4). The gradient value of the unit point is calculated from these four values. First, the gradient values I(A5) and I(A6) of the points A5 and A6 are computed; their positions are shown in Fig. 7, and the formulas are as follows:

Fig. 7 Bilinear interpolation schema

$$I({A_5})=({x_0} - i)\left[ {I({A_3}) - I({A_4})} \right]+I({A_4}),$$
(5)
$$I({A_6})=({x_0} - i)\left[ {I({A_2}) - I({A_1})} \right]+I({A_1}).$$
(6)

The gradient value I(x0,y0) of the unit point (x0,y0) is then calculated as follows:

$$I({x_0},{y_0})=({y_0} - j)\left[ {I({A_5}) - I({A_6})} \right]+I({A_6}).$$
(7)
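Formulas (5)–(7) amount to standard bilinear interpolation of the gradient image. A minimal sketch, assuming the gradient image I is indexed as I[x][y] with A1–A4 as defined above:

```python
def bilinear_gradient(I, x0, y0):
    """Gradient value at a non-integer point (x0, y0) via Eqs. (5)-(7).

    I is indexed as I[x][y]; A1..A4 are the four nearest pixels at
    (i, j), (i + 1, j), (i + 1, j + 1), (i, j + 1).
    """
    i, j = int(x0), int(y0)
    IA1, IA2 = I[i][j], I[i + 1][j]
    IA4, IA3 = I[i][j + 1], I[i + 1][j + 1]
    IA5 = (x0 - i) * (IA3 - IA4) + IA4      # Eq. (5)
    IA6 = (x0 - i) * (IA2 - IA1) + IA1      # Eq. (6)
    return (y0 - j) * (IA5 - IA6) + IA6     # Eq. (7)
```

At the midpoint of four pixels the result is simply their average, and at a pixel itself the formula reduces to that pixel's value.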

Because the size of the palm differs from image to image, the distance from the centroid of a feature segment to the center changes. To solve the scaling problem, the relative radius of the centroid is calculated. It is defined as the ratio of the distance rj between the centroid and the circle center to the radius R of the inscribed circle:

$${\delta _j}=\frac{{{r_j}}}{R}.$$

The feature vector is constructed from the relative centroid radii of the H feature segments. This feature is invariant when the palm image is scaled, rotated or translated. The constructed feature vector is as follows:

$$F=\left\{ {{\delta _1},{\delta _2}, \ldots ,{\delta _H}} \right\}.$$
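Putting formula (4) and the relative radius together, the construction of F can be sketched as below. For brevity the sketch samples each segment with nearest-pixel lookup rather than the bilinear interpolation of this section, and the function name is hypothetical:

```python
import numpy as np

def palm_feature_vector(grad_img, center, R, H, theta0=0.0):
    """Build F = {delta_1, ..., delta_H} from a gradient image.

    grad_img : 2-D array of gradient values, indexed [y, x]
    center   : (x0, y0) of the inscribed circle; R its radius; H segment count
    theta0   : direction of the first feature segment (toward the finger-root point)
    """
    x0, y0 = center
    M = int(round(R))                        # rounded radius, unit steps
    F = []
    for j in range(H):
        theta = theta0 + 2 * np.pi * j / H   # counterclockwise segment directions
        num = den = 0.0
        for i in range(1, M + 1):
            x = int(round(x0 + i * np.cos(theta)))
            y = int(round(y0 + i * np.sin(theta)))
            g = grad_img[y, x]
            num += i * g                     # numerator of Eq. (4)
            den += g                         # denominator of Eq. (4)
        rj = num / den if den > 0 else 0.0   # centroid radius r_j
        F.append(rj / R)                     # relative radius delta_j
    return np.array(F)
```

A single nonzero gradient pixel at distance 5 along a segment of radius 8 yields δ = 5/8 for that segment and 0 for segments with no gradient response.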

4 Matching of feature vectors

After the above steps, H feature segments reflecting the texture of the palm can be constructed. The feature vector extracted from the registered palm image is as follows:

$${F_a}=\left\{ {{\delta _{1a}},{\delta _{2a}}, \ldots ,{\delta _{Ha}}} \right\}.$$

The feature vector extracted from the palm image to be recognized is as follows:

$${F_b}=\left\{ {{\delta _{1b}},{\delta _{2b}}, \ldots ,{\delta _{Hb}}} \right\}.$$

To determine whether the registered palm image matches the palm image to be recognized, the number of matching segments between the two images must be judged.

In the matching of feature segments, the distance dj between the centroid relative radii of corresponding feature segments is used as their similarity criterion.

The distance of the feature segment j between the registered palm image and the palm image to be recognized is as follows:

$${d_j}=\left| {{\delta _{ja}} - {\delta _{jb}}} \right|,$$
(8)

When dj is less than the threshold T1, the two corresponding segments are considered matching segments. If the number of matching segments is \(g\), the matching rate is as follows:

$$G=\frac{g}{H} \times 100\% ,$$
(9)

When the match rate G is greater than the threshold T2, the two palm images are considered to come from the same person.
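The matching rule of Eqs. (8) and (9) can be sketched as follows; the threshold values T1 and T2 are determined experimentally and are not fixed here:

```python
import numpy as np

def match_rate(Fa, Fb, T1):
    """Match rate G (Eq. (9)): percentage of feature segments whose
    relative-radius distance d_j = |delta_ja - delta_jb| (Eq. (8)) is below T1."""
    d = np.abs(np.asarray(Fa) - np.asarray(Fb))
    return 100.0 * np.count_nonzero(d < T1) / len(d)

def same_palm(Fa, Fb, T1, T2):
    """Decide whether two palm images come from the same person:
    accept when the match rate G exceeds the threshold T2."""
    return match_rate(Fa, Fb, T1) > T2
```

For instance, with H = 4 segments and three distances below T1, G = 75%; the decision then depends on whether T2 is below or above 75.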

5 Experiment and result analysis

The experimental images come from the palm image database built by our research group, which designed and developed a contactless palm image acquisition device, shown in Fig. 8. The device uses an AD-080CL multi-spectral near-infrared camera to collect the palm image. The video signal is digitized by a TVP5146 video decoder chip, and the digital signal is input to a computer. To avoid the effect of visible light, a 750 nm filter is mounted in front of the camera lens. This filter passes more than 90% of the near-infrared light while effectively cutting off visible light.

Fig. 8 Device of palm vein image acquisition

In this paper, the veins and the principal lines of the palmprint together constitute multiple palm features. Palmprint imaging is based on the reflection of the incident light. As the wavelength of the light source increases, the palm absorbs more light, the intensity reflected back from the skin decreases, and the palm ridges become obviously weaker. Therefore, this paper selects 760 and 850 nm as the wavelengths of the light source. The collected palm image contains both palm vein and palmprint information.

The image database includes 85 people aged between 20 and 33. Twenty images were captured of each person's left and right palms. Three palm imaging modes were used: (1) the thumb separated and the other four fingers held together; (2) the five fingers slightly open; (3) the five fingers fully spread. During image acquisition the palm is stretched into a plane parallel to the plane of the camera lens, at a distance of 20 cm from the lens. The complete palm area is visible in the collected image, and the palm is aligned with the length direction of the display. The database contains 3400 images in total, each 1024 × 768 pixels.

5.1 Testing of the position stability under three imaging modes

When the fingers are outstretched, the palm near the finger roots deforms slightly. To understand the influence of finger spreading on positioning stability, this paper tests the positioning stability of the palm under the three imaging modes. Positioning stability refers to the degree of similarity between the inscribed-circle regions extracted from different images of the same person.

First, the palm image in the inscribed circle is normalized in both direction and size. The direction from the circle center to the root point of the middle and ring fingers is taken as the reference direction of the first radial segment and rotated to the horizontal position. The number of pixels along the radius of the inscribed circle in the horizontal direction ranges from 140 to 160, so the radius is normalized to 150 pixels.

For a palm image, the ordinate is taken as the X axis, the abscissa as the Y axis and the pixel gray value as the Z axis. A three-dimensional coordinate system is thus built, in which each pixel of the image is a point and the palm image becomes a gray-value surface, as shown in Fig. 9.

Fig. 9 Palm image and its gray-value surface. a The palm inscribed circle. b Gray-value surfaces

This paper judges positioning stability by the standard deviation of the gray-difference surface within the inscribed circle. The gray-difference surfaces between different images of the same palm and between images of different palms are calculated. As seen in Fig. 10, the gray-difference surface of the same palm is flat, while that of different palms fluctuates strongly. The more similar the images within the inscribed circle, the higher the positioning stability and the smaller the standard deviation.

Fig. 10 Difference surfaces of the same hand and different hand. a Gray difference surfaces of same hand. b Gray difference surfaces of different hand

In order to understand the influence of finger spreading on positioning stability, the following experiment was designed. The experimental samples are 1700 left-palm images of 85 individuals from the database, 15 images per person: 425 pairs of palm images with the four fingers held together (85 × 5), 425 pairs with the five fingers slightly opened (85 × 5) and 425 pairs with the five fingers fully spread (85 × 5). For each imaging mode, the five images of every person are traversed pairwise, giving ten calculations and therefore ten standard deviations. After eliminating gross errors, the positioning can be considered relatively stable if the remaining standard deviations vary little. Accordingly, the standard deviation of the remaining standard deviations is calculated and used as the evaluation value of that palm's positioning stability. Performing this operation for all 85 people yields 85 standard deviations, whose average value is taken as the evaluation value of positioning stability. The evaluation values for the three imaging modes are shown in Table 1.
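Assuming a pairwise comparison function (such as the difference-surface standard deviation above) is available, the evaluation value for one imaging mode can be sketched as follows; gross-error elimination is omitted for brevity, and both helper names are hypothetical:

```python
from itertools import combinations
from statistics import mean, pstdev

def pairwise_stds(rois, diff_std):
    """Traverse one person's five images: C(5, 2) = 10 pairwise
    difference-surface standard deviations (diff_std compares two ROIs)."""
    return [diff_std(a, b) for a, b in combinations(rois, 2)]

def stability_evaluation(per_person_stds):
    """per_person_stds: for each person, the list of pairwise standard
    deviations (ten values per person, gross errors already removed).
    Returns the mean over persons of the std of those values -- the
    positioning-stability evaluation value (smaller = more stable)."""
    return mean(pstdev(stds) for stds in per_person_stds)
```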

Table 1 Comparison of positioning stability

The experimental data show that positioning is relatively stable when the four fingers are held together.

5.2 Determination of the size of the sub-template

Because the distance and direction of image capture differ from person to person, the position and direction of the palm in the image vary. The following experimental samples are selected from 850 left-palm images in the database.

In Sect. 3.2 of this paper, the gradient calculation template is composed of 3 × 3 sub-templates. In the database images, the width of the palm vein and the palmprint main line is approximately 6–9 pixels, and the other palmprint lines are narrower than this. This paper selects eight sizes of sub-template for the experiment and obtains the corresponding gradient images. The dimensions l × l of the sub-templates are 3 × 3, 4 × 4, 5 × 5, 6 × 6, 7 × 7, 8 × 8, 9 × 9 and 10 × 10. To make the differences between gradient images of different sub-template sizes visible, the gradient images are linearly stretched; the processed images are shown in Fig. 11. The gradient images show that noise is obviously reduced as the sub-template size grows, while at the same time the palmprint and palm vein lines become thicker.
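Since the exact 3 × 3 arrangement of sub-templates from Sect. 3.2 is not reproduced here, the following sketch only approximates the described behavior: an l × l mean filter (centered window) followed by central differences. It illustrates why a larger sub-template suppresses noise while thickening the lines; all names are hypothetical:

```python
import numpy as np

def subblock_gradient(img, l):
    """Gradient magnitude after l-by-l sub-block averaging (approximation).

    The mean filter is computed with an integral image (cumulative sums);
    the centered window has roughly l-by-l extent. A larger l averages away
    noise but spreads (thickens) the palmprint/palm-vein responses.
    """
    img = img.astype(np.float64)
    pad = np.pad(img, ((1, 0), (1, 0)))          # one zero row/col at top/left
    ii = pad.cumsum(0).cumsum(1)                 # ii[y, x] = sum of img[:y, :x]
    h, w = img.shape
    out = np.zeros_like(img)
    r = l // 2
    for y in range(h):
        for x in range(w):
            y0, y1 = max(0, y - r), min(h, y + r + 1)
            x0, x1 = max(0, x - r), min(w, x + r + 1)
            s = ii[y1, x1] - ii[y0, x1] - ii[y1, x0] + ii[y0, x0]
            out[y, x] = s / ((y1 - y0) * (x1 - x0))
    gy, gx = np.gradient(out)                    # central differences
    return np.hypot(gx, gy)
```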

Fig. 11
figure 11

Gradient images for different sub-template sizes. a 3 × 3. b 4 × 4. c 5 × 5. d 6 × 6. e 7 × 7. f 8 × 8. g 9 × 9. h 10 × 10

The sub-template size is evaluated by the positional stability of the centroids of the feature segments in the gradient image. Ten people are chosen arbitrarily from the image database, ten images per person, for a total of 100 experimental images. The experimental method is as follows:

  1.

    The number of feature segments in the inscribed circle is set to 60, because this number of feature segments can basically cover the main lines of the palmprint and palm vein.

  2.

    Under the 3 × 3 sub-template, the centroid's relative radius for the 60 feature segments in every gradient image is calculated according to formula (7). From these, the standard deviation of the centroid's relative radius in each direction is obtained. The 60 resulting standard deviations are defined as the variation of the centroid's relative radius. The variation \(\Delta {\delta _{ij}}\) of the centroid's relative radius of the j-th person in the i-th direction is calculated as follows:

    $$\Delta {\delta _{ij}}=\sqrt {\frac{1}{{10}}\sum\limits_{{k=1}}^{{10}} {{{({\delta _{kij}} - {\mu _{ij}})}^2}} } ,{\mu _{ij}}=\frac{1}{{10}}\sum\limits_{{k=1}}^{{10}} {{\delta _{kij}}} .$$

    The maximum variation of the relative radius over the 60 directions of the j-th individual is expressed as follows:

    $$\Delta {\delta _j}=\hbox{max} \left\{ {\Delta {\delta _{ij}},\quad i=1, \ldots ,60} \right\},\quad j \in \left\{ {1,2, \ldots ,10} \right\}.$$

    Thus the maximum variation of the relative radius of the feature segments is obtained for each of the ten individuals under the 3 × 3 sub-template.

  3.

    Change the size of the sub-template. Following step (2), the maximum variation of the relative radius of the feature segments is calculated for all eight sub-templates, as shown in Fig. 12.
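Steps (2) and (3) can be sketched as follows for one person, assuming the relative radii \(\delta_{kij}\) have already been extracted into a 10 × 60 array (ten images by 60 directions); the function name is hypothetical:

```python
import numpy as np

def max_radius_variation(delta):
    """delta: array of shape (10, 60) holding the centroid relative radii
    delta_kij for one person j -- 10 images (k) by 60 directions (i).

    Returns Delta-delta_j: the maximum over the 60 directions of the
    population standard deviation over the 10 images (the 1/10 factor in
    the formula corresponds to numpy's default ddof=0)."""
    per_direction_std = delta.std(axis=0)
    return per_direction_std.max()
```

Repeating this per person and per sub-template size yields the curves compared in Fig. 12.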

The data in Fig. 12 show that most of the maximum variations of the relative radius are smallest when the sub-template size is 7 × 7; the remaining values, while not minimal, are close to the minimum. It is therefore considered that with a 7 × 7 sub-template the maximum variation of the relative radius in the gradient image is smallest, and the stability of the centroid point is best.

Fig. 12
figure 12

The variation of relative radius of centroid in different sizes sub-templates

In the experimental results, the optimal sub-template size is close to the width of the palm vein and palmprint lines, which shows that the palm vein and palmprint play the major role in determining the centroid locations of the feature segments. Therefore, this paper chooses 7 × 7 as the gradient calculation sub-template.

5.3 Determination of the number of the feature segments

In general, as the number of centroid relative radii increases, the discriminative power grows, but so does the computational complexity. Moreover, as the number of feature segments increases, the centroid locations of adjacent feature segments become increasingly similar, and such similar features cannot improve the recognition rate significantly.

When the number of features reaches a certain value, the recognition rate no longer improves significantly. If such a point can be found, it can serve as the basis for determining the number of feature segments. This paper selects 7 × 7 as the size of the gradient calculation sub-template. Based on the 850 images of the database, the experimental method is as follows:

  1.

    The following steps determine the matching threshold T1 for the centroid's relative radius of matching segments and the matching-rate threshold T2 for two images.

Firstly, with the 7 × 7 sub-template and 60 feature segments, the centroid's relative radius is analyzed to determine the value of T1. The 850 database images come from 85 people, ten each. With formula (8) as the evaluation criterion, the centroid relative radii of the 60 feature segments of the 85 individuals are evaluated statistically. For different images of the same person, the difference of the centroid's relative radius in the corresponding direction mostly lies in the range 0.02–0.06, whereas for different people it is generally greater than 0.06. Therefore, T1 is set to 0.06.
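The resulting segment-level and image-level decision rule can be sketched as follows; `image_match` is a hypothetical name, and the image-level threshold t2 = 0.52 is the value determined next in this section:

```python
def image_match(radii_a, radii_b, t1=0.06, t2=0.52):
    """Match two palm images by their centroid relative-radius vectors.

    A feature segment matches when the relative radii in the corresponding
    direction differ by at most t1; the two images are judged to come from
    the same palm when the fraction of matching segments reaches t2.
    Returns (decision, matching rate)."""
    matches = sum(abs(a - b) <= t1 for a, b in zip(radii_a, radii_b))
    rate = matches / len(radii_a)
    return rate >= t2, rate
```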

Secondly, the matching-rate threshold T2 is determined from the FRR and FAR curves. The image database contains 85 individuals with ten palm images each, 850 in total. Using the proposed algorithm for feature extraction and traversal matching, the total number of matches is 360,825, of which 3825 are intra-class matches and 357,000 are inter-class matches. An intra-class match compares different palm images of the same person; an inter-class match compares palm images of different persons. The matching distributions of the intra-class and inter-class matches are shown in Fig. 13. The figure shows that the proposed method effectively distinguishes intra-class from inter-class matches of the palm vein and palmprint, and thus realizes palm recognition. By calculating the false rejection rate (FRR) of intra-class matching and the false acceptance rate (FAR) of inter-class matching under different matching thresholds, the distribution curves of FRR and FAR are obtained, as shown in Fig. 14. FRR and FAR are defined as follows:

Fig. 13
figure 13

Matching distribution for intra-class and inter-class

$$FRR=\frac{{NFR}}{{NAA}} \times 100\% \quad FAR=\frac{{NFA}}{{NIA}} \times 100\% .$$

In the above formulas, NAA is the number of genuine access attempts, NIA the number of impostor attempts, NFR the number of false rejections and NFA the number of false acceptances. When, at a given matching threshold, the FRR and the FAR are equal, this error rate is called the equal error rate (EER). In Fig. 14, the FRR and FAR curves intersect at a matching threshold of 0.52, which is assigned to T2.
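A sketch of how the FRR/FAR curves and the EER are read off from the intra-class and inter-class matching rates (simplified: the EER is approximated at the threshold where the two curves are closest; the function name is hypothetical):

```python
import numpy as np

def frr_far_eer(genuine_rates, impostor_rates, thresholds):
    """FRR/FAR curves and EER from intra-class (genuine) and inter-class
    (impostor) matching rates.

    FRR = NFR / NAA: fraction of genuine attempts rejected (rate < threshold).
    FAR = NFA / NIA: fraction of impostor attempts accepted (rate >= threshold).
    """
    genuine = np.asarray(genuine_rates, dtype=float)
    impostor = np.asarray(impostor_rates, dtype=float)
    frr = np.array([(genuine < t).mean() for t in thresholds])
    far = np.array([(impostor >= t).mean() for t in thresholds])
    i = int(np.argmin(np.abs(frr - far)))   # closest crossing point
    eer = (frr[i] + far[i]) / 2
    return frr, far, thresholds[i], eer
```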

Fig. 14
figure 14

Curves of FRR and FAR

  2.

    With T1 = 0.06 and T2 = 0.52, feature sets of 8, 12, 18, 36, 40, 60, 72, 90, 120 and 180 feature segments are constructed in turn, and the EER over the image database is calculated for each number of feature segments. The results are shown in Fig. 15.

Fig. 15
figure 15

The relationship of the number of feature segments and recognition performance

Analysis of Fig. 15 shows that the EER is basically stable once the number of feature segments reaches 60. Considering the running time, the number of feature segments is set to 60.

5.4 Performance comparison of algorithms

Recognition speed is an important index of whether an algorithm meets real-time requirements, and high efficiency has practical significance for the design of an actual recognition system. In this paper, the test code is written in Java. The test machine has an Intel i5-2450M CPU and 3 GB of memory.

The sample preprocessing time, feature extraction time and matching time are measured, and their average values are shown in Table 2. As is visible from Table 2, the proposed method takes little time at every step and can meet the real-time requirements of an actual system.

Table 2 Efficiency comparison of recognition algorithms

Table 3 compares several palm recognition methods based on palmprint and palm vein. As can be seen from the table, the proposed method obtains better recognition results when the palm is translated, rotated or scaled.

Table 3 Performance comparison of recognition algorithms

5.5 Rotation and scaling invariance verification of palm feature

This section analyzes whether the palm features constructed in this paper remain invariant when the palm image is rotated and scaled. Two images are chosen for each of the 85 people, giving 170 palm images, and their ROI images are obtained after preprocessing. These images are then rotated and scaled, and the invariance of the palm features is analyzed. Because a practical application environment restricts the rotation angle of the user's palm and guarantees the integrity of the palm in the image, extreme distortions of the palm are not considered. The images are rotated about the center from − 30° to 30° to simulate rotation of the palm, and scaled from 85 to 115% to simulate changes in the distance between the palm and the sensor. The original images serve as registration samples and the rotated and scaled images as test samples, and the matching rate between the registration and test samples is calculated. The higher the matching rate, the less the palm features are affected by rotation and scaling, and the better their stability. Because some image properties change during scaling and rotation, the matching rate will not reach 100%. The 85 people are treated separately, the average value is taken, and the matching curves are constructed. Figure 16a gives the matching curve of the palm images after rotation, and Fig. 16b gives the matching curve after scaling. The curves show that the matching rate between the original images and the processed images is higher than 97%. The experimental results show that the palm features are robust and invariant to rotation and scaling.

Fig. 16
figure 16

Robustness analysis of the proposed method. a Robustness analysis of rotating. b Robustness analysis of scale

6 Conclusions

This paper proposes a palm feature extraction method for the contactless acquisition mode. Firstly, several feature segments are constructed within the inscribed circle. Secondly, according to the gradient value of each point on a feature segment, the centroid's relative radius of the segment is calculated to form the feature vector space. Finally, the validity of the features is verified from two perspectives.

  1.

    Feature stability under different sub-template sizes. Ten people are chosen from the image database, ten images per person, for a total of 100 experimental images. When the sub-template size is 7 × 7, the variation of the relative radius is smallest and the stability of the centroid's location is best.

  2.

    Recognition performance under different numbers of feature segments. The 850 palm images in the image database are used as the experimental objects. The EER is 0.3188% when the number of feature segments is 60; increasing the number further no longer improves the recognition rate significantly. At this setting, the feature extraction time is 0.0019 s.

These conclusions show that the structural features of this paper are invariant to translation, rotation and scaling, and that the feature extraction method is both efficient and effective.