Vein Pattern Verification and Identification Based on Local Geometric Invariants Constructed from Minutia Points and Augmented with Barcoded Local Feature

Abstract: This paper presents the development of a hybrid-feature modality (dorsal hand vein and dorsal geometry) for human recognition. Our proposed hybrid feature extraction method exploits two types of features: dorsal hand geometric-related features and the local vein pattern. Using geometric affine invariants, the peg-free system extracts minutia points (vein terminations and bifurcations) and constructs a set of geometric invariants, which are then used to establish the correspondence between two sets of minutiae: one for the query vein image and the other for the reference vein image. When the correspondence is established, geometric transformation parameters are computed to align the query with the reference image. Once aligned, hybrid features are extracted for identification. In this study, the algorithm was tested on a database of 140 subjects, in which ten different dorsal hand images were taken for each individual, and yielded promising results. We achieved an equal error rate (EER) of 0.243%, indicating that our method is feasible and effective for dorsal vein recognition with high accuracy. This hierarchical scheme significantly improves the performance of personal verification and/or identification.


Introduction
A biometric system uses signature points of measurable uniqueness, derived from the physiological and/or behavioral characteristics possessed by an individual, to characterize and determine his/her identity. Biometric characteristics are preferably used in security systems over more traditional security measures. They are also used in internet access, computer system security, secure electronic passport control, banking, mobile phones, credit cards, secured access to buildings, health and social services, parenthood determination, terrorist determination and corpse identification. A number of relevant biometric technologies have been developed based on diverse biometric cues, such as DNA [1], ear morphology [2], facial features [3], fingerprints [4], gait [5], hand and finger geometry [6], iris [7], keystroke [8], odor [9], palm print [10], hand writing and signature [11], voice [12], etc.
The problem of personal verification and/or identification using palm images has drawn considerable attention. Researchers have proposed various methods [6,13-16], which can be categorized into geometric-related methods and vein pattern feature methods. Geometric-related methods exploit the geometric characteristics of the hand and/or fingers to provide biometric information.
The paper is organized as follows: Section 2 explains the vein pattern image acquisition system; Section 3 is devoted to the identification process; Section 4 describes the experimental validation procedures and results; and Section 5 presents the discussion and conclusion.

(3) The proposed hybrid feature approach exploits both global and local aspects: the geometric feature is global, whereas the vein pattern feature and barcoded feature are local aspects of human recognition. This hierarchical scheme was found to significantly improve the performance.
(4) This system is invariant to affine geometric transformations. From the vein pattern image, the minutia points, namely vein terminations and vein bifurcations, are extracted. With the extracted minutia points, we construct a set of geometric invariants and use it to establish a correspondence between two sets of minutiae: one for the query image and the other for the reference image. Once the correspondence is established, the geometric transformation parameters are computed to align the query against the reference image. With this geometric-invariant algorithm, the subjects can freely align their hand in the acquisition system.

Vein Pattern Identification System
Our proposed contactless human recognition system using the dorsal vein pattern is shown in Figure 1. The system consists of four units including (i) image acquisition, (ii) image preprocessing, (iii) a feature extraction unit and (iv) an identification process. The first unit acquires an image of the query hand without the user touching the sensor. The captured image is then preprocessed to enhance the image in preparation for the feature extraction. With the extracted features, the person is identified in the final unit. The following subsections provide detailed descriptions of each unit in this system.


Image Acquisition Unit
The image acquisition unit is the frontend part of the system, used for capturing a person's palm image in a contactless way. This is a crucial step, as it affects the complexity of the image preprocessing unit. Capturing high-quality images results in less complex image processing requirements and expectedly yields fewer errors in human recognition. In contrast, low-quality images typically result in more complex image processing and higher error rates. Obtaining high-quality images allows the relevant key points and features to be extracted more effectively. The design of palm imaging systems for human recognition has been an active research topic over the last decade. These systems exploit the infrared light absorption of deoxygenated (reduced) hemoglobin (HbR) in the blood. Most of the systems presented in the literature use NIR [23,41-45] light sources with wavelengths of 750-1500 nm or far infrared (FIR) [23,46] light sources. Figure 2 shows our proposed image acquisition system using an 850-nm wavelength light source. The system can be designed to operate in two modes: a transmission mode [28,41,47], where the infrared light source and the infrared-sensitive camera are installed on opposite sides of the hand, and a reflection mode [17,19,24,26,48], where the infrared light source and infrared-sensitive camera are aligned on the same side of the hand. We opted to implement a transmission-mode configuration because it is considered to be more resistant to noise [41]. As shown in Figure 2, an infrared LED is installed in a scatter-light protection cylindrical case with a 5 cm diameter and 5 cm height. A concave lens is also installed at the orifice of the cylindrical case to focus the light onto the subject's hand. On the opposite side of the infrared LED, an 850-nm wavelength IR-filtered camera is installed.
For our usage scenario, the FUJIKO CCTV camera is modified by removing its IR-cut filter, which would otherwise block infrared transmission. Fitted with a replacement IR-pass filter, it selectively transmits light at the 850 nm wavelength, and an adjustable arm provides focal-length control. To prevent the scattering of ambient light, the system is covered with protective plastic, leaving only the front side exposed for hand imaging, as shown in Figure 2b. Samples of IR-illuminated vein images are shown in Figure 2c.

Image Preprocessing Unit
As shown in Figure 3a, the images obtained from the image acquisition unit are in color, with veins appearing darker than the surrounding tissue. Image preprocessing is applied to enhance the quality of the acquired images and to ensure the efficacy of the feature extraction process. The image is first converted into grayscale and then segmented, based on a global threshold [49], to extract the hand portion from the background of the image. After that, a 3 × 3 median filter is applied to remove salt-and-pepper noise, together with morphological opening and closing to remove the remaining speckle noise. Figure 3c zooms into the region marked by the red square, showing the interference present in the finer details before image enhancement. Figure 3d illustrates a typical output image, with the hand portion in the foreground after the image enhancement.

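The thresholding and median-filtering steps described above can be sketched in a few lines of NumPy. This is an illustrative version only, not the authors' implementation; the threshold value and the reflection padding at the borders are our assumptions.

```python
import numpy as np

def preprocess(gray, thresh=128):
    """Sketch of the preprocessing unit: global thresholding to separate
    the hand from the background, then a 3x3 median filter for
    salt-and-pepper noise. `thresh` is an assumed value."""
    # Global threshold: pixels brighter than `thresh` belong to the hand.
    mask = (gray > thresh).astype(np.uint8)
    # 3x3 median filter; borders handled by reflection padding (our choice).
    padded = np.pad(gray, 1, mode="reflect")
    windows = np.stack([padded[r:r + gray.shape[0], c:c + gray.shape[1]]
                        for r in range(3) for c in range(3)])
    filtered = np.median(windows, axis=0).astype(gray.dtype)
    return mask, filtered
```

In a full pipeline, morphological opening and closing would follow the median filter to remove the remaining speckle noise.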

Feature Extraction Process
The proposed recognition system is based on hand geometry and vein patterns. These are utilized in a hierarchical scheme consisting of local and global features. Global features are related to hand geometry, whereas local ones are associated with vein patterns.


Global Hand Geometry Feature Extraction
The goal of the hand geometry feature extraction stage is to find the fingertips and the concave points (valleys) joining the fingers. The distinguishing characteristic of a fingertip or a concave point is the extremal curvature of the hand contour there. Therefore, most methods find the distal and concave points of the fingers from the curvature of the hand contour [17,18,50]. Fingertip points can also be identified using the convex hull of the binary palm image. Once a fingertip point is located, the concave point is further identified by measuring the distance from the midpoint between adjacent fingertips to the contour, known as the convexity defect [50,51]. In this research, we exploit the radius transform, measuring the distance from the hand centroid to the hand contour. A fingertip and a concave point yield a maximal and a minimal radius distance, respectively. To find the hand centroid, a distance map method is applied. As shown in Figure 4, the point with maximal distance represents the hand centroid. Figure 5 demonstrates the robustness of this centroid detection under various geometric transformations of the hand. Once the centroid is located, the Euclidean distance between the centroid and the hand contour, the so-called radius distance, is computed using Equation (1).
D_ki = sqrt((P^x_ki - Q^x)^2 + (P^y_ki - Q^y)^2), (1)

where P_k = {P_ki}, i = 1, ..., I, is the hand contour extracted from each image, consisting of I points; P_ki = (P^x_ki, P^y_ki) are the coordinates of these hand contour points; Q = (Q^x, Q^y) are the coordinates of the reference point (the centroid); and D_ki is the distance between the centroid and the hand contour.
Figure 6a shows a centroid detected with the distance transform process. Figure 6b shows a plot of the radius distance. From this plot, the fiducial points, including the fingertip and concave points, can be located by finding the extremal points. The fingertip points are the five points of maximal radius distance, denoted T1-T5, while the concave points are the four points of minimal radius distance, denoted V1-V4 (each V_n is the local minimum of the radius distance between T_n and T_n+1). The detected fingertip and concave points are shown in Figure 6c.
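A minimal NumPy sketch of the radius-distance computation of Equation (1) and the extremal-point search is shown below. The function names and the toy contour in the test are ours, purely for illustration; in practice the five largest maxima and the four minima between them would be kept as T1-T5 and V1-V4.

```python
import numpy as np

def radius_distance(contour, centroid):
    """Equation (1): Euclidean distance from the centroid Q to each
    hand-contour point P_ki (illustrative sketch)."""
    return np.sqrt(((contour - centroid) ** 2).sum(axis=1))

def local_extrema(d):
    """Indices of local maxima (fingertip candidates T) and local
    minima (valley candidates V) of the radius-distance signal."""
    maxima = [i for i in range(1, len(d) - 1) if d[i - 1] < d[i] > d[i + 1]]
    minima = [i for i in range(1, len(d) - 1) if d[i - 1] > d[i] < d[i + 1]]
    return maxima, minima
```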

Local Dorsal Hand Vein Feature Extraction
There are many different types of vein features. Ibrahim et al. [47] classified them into seven types, namely termination, bifurcation, lake, independent ridge, dot, spur and crossover. With their unique characteristics across individuals, vein patterns can potentially serve as biometrics. Human recognition using vein pattern biometrics can be roughly classified into two main categories: (i) minutia based and (ii) non-minutia based. For minutia-based vein pattern identification [23,41-46,48-53], minutia points, such as points of vein termination or bifurcation, are extracted and used for human recognition. Non-minutia-based vein pattern identification matches the vein pattern of the unknown identity against that of the reference directly, without extracting minutia point information [27]. In this study, we have opted for a minutia extraction approach. The minutia points used here are the vein terminations and bifurcations. With the extracted minutia points, a set of geometric invariant features is constructed and used to establish the correspondence between two sets of minutiae: one from the query image and the other from the reference image. Once the correspondence is established, the geometric transformation parameters are computed to align the query with the reference image. A quantitative similarity criterion is then computed and used for human recognition. Our process of minutia-based feature extraction from vein patterns is divided into two subprocesses: (i) vein extraction and (ii) minutia extraction, as explained in more detail below.

A. Vein Pattern Extraction
The vein pattern extraction process is started by converting the color palm image from the image acquisition system into a grayscale image. Veins in the grayscale image appear dark, as deoxyhemoglobin contained within the blood inside the veins absorbs more infrared light. As a result, less infrared light is transmitted to the IR-sensitive camera. Vein extraction from the grayscale image is subdivided into two steps: (i) region of interest (ROI) extraction and (ii) vein extraction. The process of vein extraction is shown in Figure 7.



(i) ROI Extraction
To improve the efficiency of human recognition using vein patterns, a defined ROI is used rather than the whole vein image. From the hand geometric feature extraction process, U2 is determined as the point on the contour at the same traversal distance from T2 as V2 (traversing from T2 toward V2) and, similarly, U3 as the point on the contour at the same traversal distance from T5 as V4 (traversing from T5 toward V4). P1 is then marked as the middle point between U2 and V2, and P2 as the middle point between V4 and U3, as shown in Figure 8a. The coordinates of P3 and P4 are then calculated as in Equations (2) and (3). This ROI bounds a section of the vein pattern inside a square box, having P1, P2, P3 and P4 as its vertices. An identified ROI is shown in Figure 8b, with the corresponding derived vein pattern shown in Figure 8c.
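The ROI corner construction can be sketched as follows. P1 and P2 are the midpoints the text defines; since Equations (2) and (3) for P3 and P4 are not reproduced here, completing the square with a perpendicular offset of the side P1P2 is our assumption, used only for illustration.

```python
import numpy as np

def roi_corners(u2, v2, v4, u3):
    """Sketch of the ROI construction. P1 and P2 follow the paper's
    midpoint definitions; P3 and P4 are completed into a square by a
    90-degree rotation of the side P1->P2 (an assumption, standing in
    for the paper's Equations (2)-(3))."""
    p1 = (np.asarray(u2, dtype=float) + np.asarray(v2, dtype=float)) / 2.0
    p2 = (np.asarray(v4, dtype=float) + np.asarray(u3, dtype=float)) / 2.0
    side = p2 - p1
    normal = np.array([-side[1], side[0]])  # perpendicular of equal length
    p3, p4 = p2 + normal, p1 + normal
    return p1, p2, p3, p4
```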

(ii) Vein Extraction
As shown in Figure 8c, venous regions within the image will be darker than the other areas due to the process of infrared light absorption by deoxyhemoglobin in the bloodstream. The process of extracting the vein pattern from the ROI involves three steps.
Step 1: Intensity Profile
To extract the veins from the ROI, the intensity between adjacent pixels is compared. Intensity profiles are calculated along four axes (horizontal, vertical, left diagonal and right diagonal), as shown in Figure 9. From these intensity profiles, vein location pixels are determined from the points of minimal intensity. Figure 10 shows the vertical intensity profile with the corresponding vein location pixels. Note that these pixels represent the center points of different veins.

Step 2: Calculation of the Curvatures of Profiles
As explained in Step 1, a vein location pixel is a point of locally minimal intensity. However, rather than using the point of minimal intensity, we use the point of maximum curvature, where a threshold can be applied to determine the width of the vein at the vein location pixel. The curvature of the intensity profile is computed with Equation (4):

κ(z) = Pf''(z) / (1 + (Pf'(z))^2)^(3/2), (4)

where Pf(z) is the intensity profile, Pf''(z) is the second derivative of the intensity profile and Pf'(z) is the first derivative of the intensity profile.
Step 3: Vein Detection
Figure 11 depicts the vein extraction process. The intensity profile is shown in Figure 11a; after applying Equation (4), the curvature graph shown in Figure 11c is produced. To mitigate noise disturbance, a curvature threshold is set, annotated with solid lines in Figure 11c. Curvatures exceeding this threshold are considered veins. The point of maximum curvature corresponds to the center of the vein, identified as Px in Figure 11e. The points where the curvature crosses the threshold line determine the width of the vein detected in that vicinity, illustrated as Wx in Figure 11e. Pixels lying within this region are labeled as vein pixels, whereas those lying outside are classified as non-vein pixels. To improve the accuracy of vein detection, this algorithm is repeated for the intensity profiles computed in the four directions (Figure 9). The resulting images detected from all intensity profiles are then combined to yield the final vein image. Figure 12 demonstrates vein image detection from the four directional intensity profiles computed using the described algorithm.
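Equation (4) together with the thresholding of Step 3 can be sketched on a one-dimensional profile as follows. The function names, the use of `np.gradient` for the derivatives and the threshold value in the test are our assumptions for illustration.

```python
import numpy as np

def curvature(profile):
    """Equation (4): curvature of an intensity profile Pf(z),
    kappa(z) = Pf''(z) / (1 + Pf'(z)**2)**1.5 (a sketch; derivatives
    are approximated with central differences)."""
    d1 = np.gradient(profile.astype(float))
    d2 = np.gradient(d1)
    return d2 / (1.0 + d1 ** 2) ** 1.5

def detect_vein(profile, kappa_thresh):
    """Step 3 sketch: pixels whose curvature exceeds the threshold are
    vein pixels; the maximum-curvature pixel gives the vein centre Px,
    and the run of supra-threshold pixels gives the width Wx."""
    k = curvature(profile)
    vein = k > kappa_thresh
    centre = int(np.argmax(np.where(vein, k, -np.inf))) if vein.any() else None
    return vein, centre
```

A dark dip in the profile (a vein) yields a large positive curvature at its bottom, which is why thresholding the curvature rather than the raw intensity is more robust to slow illumination changes.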

B. Minutia Extraction
The goal of this feature extraction process is to obtain minutiae from the vein image. The minutiae of interest are the points of vein termination or bifurcation. Minutia extraction is performed by the steps below.
(i) Apply morphological closing to remove possible noise from the vein image, as shown in Figure 13a.
(ii) Apply morphological thinning to the result from step (i).
(iii) Apply morphological pruning to the result from step (ii) to remove existing spurs.
(iv) Apply vein-ending and vein-bifurcation pattern matching to locate the minutiae.
This minutia extraction process is illustrated in Figure 13. Example results of this procedure are shown in Figure 13e.
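The paper's pattern-matching templates for step (iv) are not reproduced here; a common way to locate terminations and bifurcations on a one-pixel-wide skeleton is a neighbour-count test, sketched below as an assumption (a pixel with one skeleton neighbour is a vein ending, one with three or more is a bifurcation).

```python
import numpy as np

def minutiae(skel):
    """Neighbour-count sketch for step (iv) on a thinned skeleton
    (binary array). This simplification stands in for the paper's
    pattern matching: 1 neighbour -> termination, >=3 -> bifurcation."""
    ends, forks = [], []
    for r in range(1, skel.shape[0] - 1):
        for c in range(1, skel.shape[1] - 1):
            if not skel[r, c]:
                continue
            # count of 8-connected skeleton neighbours (excluding the pixel)
            n = skel[r - 1:r + 2, c - 1:c + 2].sum() - 1
            if n == 1:
                ends.append((r, c))
            elif n >= 3:
                forks.append((r, c))
    return ends, forks
```

On real skeletons, the preceding pruning step (iii) matters: without it, short spurs create spurious terminations and bifurcations.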

Human Recognition Process Based on Vein Pattern Matching
Our method of vein pattern matching is based on minutia registration. The registration process can be carried out in the presence of a rigid transformation or an affine transformation. The registration process is shown in Figure 14 for the general nonlinear case. A set of triangles, each spanning three minutia points, is obtained from the query as well as from the candidate vein pattern, using the minutia points previously extracted from each. These are sorted in ascending order of area prior to establishing correspondences. Once enough correspondences are established using the matching method described below, the transformation parameters are determined, the images are aligned and a distance-map error between the test vein pattern image and the inverse-transformed image is computed for later use in vein pattern recognition.

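The triangle-forming and area-sorting prelude to registration can be sketched as follows; this is illustrative only (the function name is ours), forming every triangle over the minutia points and sorting by area as the matcher does before establishing correspondences.

```python
import numpy as np
from itertools import combinations

def triangle_areas(points):
    """Form every triangle over three minutia points and sort the list
    in ascending order of area (sketch of the registration prelude)."""
    tris = []
    for a, b, c in combinations([np.asarray(p) for p in points], 3):
        ab, ac = b - a, c - a
        area = 0.5 * abs(ab[0] * ac[1] - ab[1] * ac[0])  # shoelace formula
        tris.append((area, (tuple(a), tuple(b), tuple(c))))
    tris.sort(key=lambda t: t[0])
    return tris
```

Sorting by area lets the matcher compare the two lists sequentially instead of testing all triangle pairs.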

Minutia-Based Matching
Given a set of minutia points on the query vein pattern image and another on one of the templates, we want to consider a model that is preserved under the existing (rigid or affine) map. Taking the centroid as a reference point, only the minutia points within a certain distance of the centroid are considered, as shown in Figure 15.

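The centroid-distance filter can be written in one step; the radius is a free parameter not specified in the text, so the value used below is purely illustrative.

```python
import numpy as np

def near_centroid(points, centroid, radius):
    """Keep only the minutia points within `radius` of the centroid,
    as the matcher restricts itself to (illustrative sketch; the
    radius value is an assumption)."""
    pts = np.asarray(points, dtype=float)
    d = np.linalg.norm(pts - np.asarray(centroid, dtype=float), axis=1)
    return pts[d <= radius]
```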


Matching in the Presence of a Rigid Transformation
The rigid transformation corresponds to the case where the hand of the query vein pattern and the candidate retrieved from the database are in exactly the same position and orientation. For each triangle, the shape (the triangle's side lengths and angles) and area are preserved.
The algorithm for matching two sets of triangles, one from the template and the other from the query (in Figure 16), proceeds as follows: (1) Obtain the set of triangles for the template and the query vein pattern using a triangulation process. These sets are shown in Figure 16a,g, respectively.
(2) Sort the triangles of each set in ascending order, as shown in Figure 16b,f, respectively.
(3) Reorient each triangle in the list in a standardized position where the longest side is taken as the base of the triangle and the side lengths decrease from the base in a clockwise direction. The lists of triangles in the standardized position for the template and the query are shown in Figure 16c,e, respectively. (4) Perform a run length technique, similar to the "list-matching algorithm", by searching for the longest string of matching triangles between the template list and the query list. A circular shift of the triangles in the query list is applied while searching for the matching triangle in the template list. The matching criterion between a pair of triangles, say the ith and the jth, is ||F_i - F_j|| <= ξ, where F_i is the feature vector of the ith triangle defined in Figure 17, with d_1, d_2 and d_3 being the lengths of the triangle sides, α_1, α_2 and α_3 the angles and A the area of the triangle; ξ is some threshold value.
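The triangle feature vector and matching test described above can be sketched as follows. This is a minimal illustration, not the paper's implementation: the function names, the component-wise tolerance test and the exact ordering of the feature vector are assumptions.

```python
import math

def triangle_features(p1, p2, p3):
    """Return (d1, d2, d3, a1, a2, a3, A): side lengths sorted in descending
    order, the angle opposite each side, and the triangle area."""
    def dist(a, b):
        return math.hypot(a[0] - b[0], a[1] - b[1])
    d1, d2, d3 = sorted([dist(p1, p2), dist(p2, p3), dist(p3, p1)],
                        reverse=True)
    # Law of cosines gives the angle opposite each side.
    a1 = math.acos((d2**2 + d3**2 - d1**2) / (2 * d2 * d3))
    a2 = math.acos((d1**2 + d3**2 - d2**2) / (2 * d1 * d3))
    a3 = math.pi - a1 - a2
    # Heron's formula gives the area.
    s = (d1 + d2 + d3) / 2
    area = math.sqrt(s * (s - d1) * (s - d2) * (s - d3))
    return (d1, d2, d3, a1, a2, a3, area)

def triangles_match(f_i, f_j, xi=1e-3):
    """Declare a match when every feature component agrees to within xi."""
    return all(abs(a - b) <= xi for a, b in zip(f_i, f_j))
```

Because the features are built only from side lengths, angles and area, two triangles related by a rigid transformation produce identical feature vectors, which is exactly why the criterion works in the rigid case.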

Figure 17. Definition of the parameters in feature vector F; d_i is the corresponding side length of the triangle, α_i is the corresponding angle and A is the area of the triangle.

Matching in the Presence of Affine Transformation
Under an affine transformation, the lengths and angles of the triangles are no longer preserved. The area of the corresponding triangles, however, becomes a relative invariant, with the two corresponding areas related through the determinant of the linear transformation matrix T in the affine map (T, b), where b is the translation vector. If the area patches of the triangle sequence in the template are [A_a(1), . . . , A_a(n)], the corresponding area patches A(k) associated with the sequence of triangles on the query are related to those of the template through the relative invariant A(k) = |T| A_a(k), where |T| is the determinant of the transformation matrix T. As the linear matrix is unknown, absolute affine invariants are constructed from the area relative invariants by taking the ratio of two triangle areas, which cancels the dependence on |T|. Taking the ratio of consecutive elements in the sequence yields the absolute invariants of (7) and (8): I_a(k) = A_a(k + 1)/A_a(k) (7) and I(k) = A(k + 1)/A(k) (8), for k = 1, 2, . . . , n. In the case of noise-free measurement, the absolute invariant of the template equals that of the query, i.e., I_a(k) = I(k), and in the absence of noise and occlusion each I_a(k) has a counterpart I(k). That counterpart is determined through a circular shift involving n comparisons, where n is the number of invariants. To allow for noise and small deviations from an affine map, we tolerate only a small percentage difference between corresponding invariants before declaring them as matching. This may reduce the length of the matching triangle sequence. Lowering the allowable error percentage makes the matching stricter. Experimentally, an error percentage of 5% was applied.
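The area-ratio invariants of (7) and (8) can be sketched as follows. This is a minimal illustration under the assumptions stated above; the function names are not from the paper.

```python
def triangle_area(p1, p2, p3):
    """Triangle area via the shoelace formula."""
    return abs((p2[0] - p1[0]) * (p3[1] - p1[1])
               - (p3[0] - p1[0]) * (p2[1] - p1[1])) / 2.0

def absolute_invariants(areas):
    """I(k) = A(k+1) / A(k), computed circularly for k = 1..n.
    Under an affine map every area scales by |det T|, so these ratios
    are unchanged."""
    n = len(areas)
    return [areas[(k + 1) % n] / areas[k] for k in range(n)]
```

For example, the area sequence [2, 4, 8] and its affinely scaled copy [6, 12, 24] (the same triangles after a map with |T| = 3) yield the identical invariant sequence [2, 2, 0.25].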
We adopted a run length method to decide on the correspondence between the two ordered sets of triangles. For every starting point on the sequence, the run length method computes a string of consecutive invariants satisfying the criterion of Equation (9): |I(k) - I_a(k)| / I_a(k) <= ε (9), where ε is the allowable error percentage.
We declare a match on the longest string (M) of triangles that yields the minimum averaged error of Equation (10): E = (1/M) Σ_{k=1}^{M} |I(k) - I_a(k)| / I_a(k) (10). Once correspondences are found, the vertices of the matching triangles are used to estimate the affine transformation. The algorithm for matching the triangles in the query and template vein patterns under an affine map is shown in Figure 18.
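The circular-shift run length search can be sketched as below. This is an illustrative implementation under the 5% tolerance stated in the text; the exact relative-error form and function name are assumptions.

```python
def longest_matching_run(template_inv, query_inv, tol=0.05):
    """For every circular shift of the query invariant sequence, find the
    longest run of consecutive positions whose relative error stays below
    tol; return (best_shift, run_length)."""
    n = len(template_inv)
    best_shift, best_len = 0, 0
    for shift in range(n):
        run = best_here = 0
        for k in range(n):
            i_a = template_inv[k]
            i_q = query_inv[(k + shift) % n]
            if abs(i_a - i_q) / abs(i_a) <= tol:
                run += 1
                best_here = max(best_here, run)
            else:
                run = 0
        if best_here > best_len:
            best_shift, best_len = shift, best_here
    return best_shift, best_len
```

A query sequence that is simply a rotation of the template sequence is recovered with a full-length run at the corresponding shift.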

Distance Map Error
Once the best warping transformation between the candidate and the query finger positions is estimated from the matching triangles, the corresponding minutia points are established from the matching minutia-based triangles. We rely only on strong matches, comparing each minutia point on the query image to the closest one on the reference image; triangles that have no counterpart are discarded. The performance of our method is represented by an average distance map error, denoted E_VeinMAP and defined in Equation (11). This is done for every other possible candidate vein pattern. In an identification problem, the query is identified as the candidate that yields the smallest average geometric distance map error. In a verification problem, the identity of the query vein pattern is verified if the average geometric distance map error is below a set threshold. The distance map error is given by
E_VeinMAP = (1/n) Σ_{i=1}^{n} ||m_q(i) - m_c(i)|| (11), where m_q(i) = (x_i, y_i) is the coordinate of the ith minutia of the query image after the inverse (undo) transformation, m_c(i) is the coordinate of the minutia of the reference image closest to that transformed minutia, and n is the number of minutiae subjected to the inverse transformation. Figure 19 presents an example of the image registration of the query hand contour (white) against the reference hand contour (red).
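The average distance map error of Equation (11) can be computed as below. This is a straightforward sketch; the pairing by nearest reference minutia follows the text, while the function name is illustrative.

```python
import math

def distance_map_error(query_minutiae, ref_minutiae):
    """E_VeinMAP = (1/n) * sum_i || m_q(i) - m_c(i) ||, where m_c(i) is the
    reference minutia closest to the (already back-transformed) query
    minutia m_q(i)."""
    total = 0.0
    for (xq, yq) in query_minutiae:
        total += min(math.hypot(xq - xr, yq - yr)
                     for (xr, yr) in ref_minutiae)
    return total / len(query_minutiae)
```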
To improve the specificity for human recognition, we used another feature vector containing physical parameters extracted from the finger and palm, including the length and width of the fingers, as shown in Figure 20.
In the figure, for example, point C1 is the middle point between the adjacent concave points V2 and V3. The finger length is defined as the distance between the fingertip T3 and C1. The finger width is defined as the distance between points C2 and C3, where C2 is the half-distance point on the contour between T3 and V2, and C3 is the half-distance point on the contour between T3 and V3. Feature d11 is the palm width, which is the distance between C4 and U3, where C4 is the mid-point between V1 and U2. The error of the length and width of the fingers is then calculated using Equation (12),
where (lThumb, lIndex, lMiddle, lRing, lLittle, wThumb, wIndex, wMiddle, wRing, wLittle, wPalm) is the feature vector of hand x; lThumb, lIndex, lMiddle, lRing, lLittle are the lengths of the thumb, index, middle, ring and little fingers, respectively; wThumb, wIndex, wMiddle, wRing, wLittle, wPalm are the widths of the thumb, index, middle, ring and little fingers and the palm, respectively.
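A hand-geometry error in the spirit of Equation (12) can be sketched as below. The paper does not give the exact form of Equation (12), so the Euclidean distance between the two 11-element feature vectors is an assumption; a normalized or weighted variant would follow the same pattern.

```python
import math

# The 11 hand-geometry features named in the text: five finger lengths,
# five finger widths and the palm width.
FEATURE_NAMES = ("lThumb", "lIndex", "lMiddle", "lRing", "lLittle",
                 "wThumb", "wIndex", "wMiddle", "wRing", "wLittle", "wPalm")

def hand_geometry_error(f_query, f_ref):
    """E_Hand as the Euclidean distance between the query and reference
    feature vectors (assumed form, not the paper's exact Equation (12))."""
    assert len(f_query) == len(f_ref) == len(FEATURE_NAMES)
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(f_query, f_ref)))
```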

Feature Matching: Barcoded Features
Recognition can also be augmented by an appearance-based matching process that compares the image content inside the matching triangles declared by the distance map, encoded as barcoded features, between the query and the candidate image. Two reasons make barcoded features beneficial for enhancing vein pattern identification and classification: (i) the vein image consists only of a background and foreground (vein and non-vein), and (ii) the geometric distance map error is a global feature, whereas barcoded features reflect local properties. This combination of local and global features further enhances recognition performance.

Computing the Barcoded Features
Given the two corresponding triangles formed by the two sets of three corresponding minutia points (in the same manner as the distance map described in Section 4), the vein pattern sub-images enclosed by the query and the candidate triangles are extracted, as shown in Figure 21a. Figure 21b presents the square boxes that bound the triangle patches of interest, from which a sub-image is derived. Each sub-image is then divided into ten vertical stripes; we compute the energy contained in each stripe, normalize it and assign the resulting gray level to the corresponding stripe shown in Figure 21c. This constitutes the barcoded feature vector shown in Figure 21d. A similar feature vector is obtained for the corresponding minutia triangle on the candidate vein pattern.
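The stripe-energy barcode construction can be sketched as follows. The use of NumPy, the sum-of-squares energy and the normalization by total energy are assumptions made for illustration; the paper does not specify these details.

```python
import numpy as np

def barcode_from_patch(patch, n_stripes=10):
    """Cut a 2-D gray-level patch (the bounding box of a matched minutia
    triangle) into n_stripes vertical stripes and return the normalized
    energy of each stripe as the barcoded feature vector."""
    patch = np.asarray(patch, dtype=float)
    # Split the columns into n_stripes nearly equal vertical stripes.
    stripes = np.array_split(patch, n_stripes, axis=1)
    energy = np.array([np.sum(s ** 2) for s in stripes])
    total = energy.sum()
    return energy / total if total > 0 else energy
```

A uniform patch yields a flat barcode (all stripes carry equal energy), while vein structure concentrated in a few stripes produces the dark/light bars of Figure 21d.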

Matching Based on Barcoded Features
Once the barcoded feature vector is computed for all the query triangles (warped in accordance with the estimated transformation under an assumed candidate vein pattern) and the declared corresponding triangles on the given candidate vein pattern, the barcoded error function is defined by Equation (13): E_Vein = Σ_{(i,j)} ||S_c(i) - S_q(j)|| (13), where S_c and S_q are the corresponding barcoded strips (Figure 21d) associated with the (i, j) corresponding triangles on the candidate and the query vein patterns, and where the summation is over all the triangles on the query and their counterparts on the candidate. With the augmentation of the error metric based on the barcoded features, the overall verification and identification rule combines the geometric error map described in Section 3 with the barcoded error in the combined metric of Equation (14): E_Total = β_1 E_Hand + β_2 E_VeinMAP + β_3 E_Vein (14), where the factors β_1, β_2, β_3 are decimal values between 0 and 1 and β_1 + β_2 + β_3 = 1.
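The combined rule of Equation (14) is a convex combination of the three error terms. A minimal sketch follows; the weight values are placeholders, since the paper does not report the β values used.

```python
def combined_error(e_hand, e_vein_map, e_vein,
                   beta1=0.3, beta2=0.4, beta3=0.3):
    """E_Total = beta1*E_Hand + beta2*E_VeinMAP + beta3*E_Vein, with the
    weights summing to 1 (placeholder values, not the paper's)."""
    assert abs(beta1 + beta2 + beta3 - 1.0) < 1e-9
    return beta1 * e_hand + beta2 * e_vein_map + beta3 * e_vein
```

Because the weights sum to 1, the combined error stays on the same scale as its inputs, which makes a single verification threshold meaningful.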
The complete vein pattern human recognition process is shown in Figure 22. From the query dorsal hand image, the hand centroid and minutia points are extracted. All the minutia points located within a certain distance of the centroid are used to derive a set of absolute affine invariant features, as explained in Section 2.4.3. This set of absolute affine invariants is then matched against that of a reference image. The corresponding minutiae between the query and the reference are then used to align the query image against the reference. Once aligned, the error function given in Equation (14) is computed. The score function, defined in Equation (15), is then used to identify and/or verify the query person.

Experimental Results
The experiments are divided into two parts: (i) the registration of two vein patterns after performing an alignment with the affine transformations and (ii) the results of the vein pattern verification of our proposed algorithm against a database of vein patterns.

Vein Pattern Registration Based on Minutia-Based Matching in the Presence of Affine Transformation
A dorsal image is captured with our system. The size of the original image is 420 × 456 pixels. After the ROI process (see Section 2.3.2 A), the ROI vein pattern image has a size of 145 × 145 pixels. Preprocessing image enhancement is then performed on the ROI, including a median filter, closing morphological process, thinning morphological process, pruning morphological process and minutia detection (see Section 2.3.2 B). Triangulation is performed on a set of selected minutia triplets, as described in Section 2.4.1. The sequences of triangles in both the query image and the reference image are sorted in ascending order. The ratio of consecutive triangle areas is formed to derive the absolute invariants. The two sequences are then circularly shifted and compared to find the matching triangles. The vertices of the corresponding matching triangles are used to estimate the affine transformation parameters. With the estimated transformation, the query image is aligned against the reference image. To provide a quantitative measure, the average distance error is computed. Figure 23 shows the matching triangles in the query image and the reference image. Figure 24 demonstrates the alignment of the query against the reference image. The average distance map error before and after the alignment of Figure 24 is shown in Table 1. To test the performance of the alignment further, we randomly selected a dataset corresponding to 10 individuals with two vein images from each individual, giving a total of (10 × 2) = 20 vein images. The data are divided into two sets, denoted the reference set and the query set. The vein images in the query set are aligned against those in the reference set using the proposed algorithm. The data are shown in Figure 25. In the figure, the reference set is the first row, while the query set is the first column. The average alignment error is also shown in Figure 25.
Figure 25. Ten randomly selected vein images in the query set aligned against each of the corresponding ten reference vein images. Images on the diagonal are perfectly matched.
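Estimating the affine parameters from the vertices of the matched triangles is a standard linear least-squares problem. The sketch below illustrates that step; it is a textbook formulation, not the paper's exact implementation.

```python
import numpy as np

def estimate_affine(src_pts, dst_pts):
    """Solve dst ≈ T @ src + b from point correspondences (at least three,
    e.g. the vertices of the matched minutia triangles).
    Returns (T, b) with T a 2x2 matrix and b a length-2 translation."""
    src = np.asarray(src_pts, dtype=float)
    dst = np.asarray(dst_pts, dtype=float)
    # Design matrix [x, y, 1]; solving both output coordinates at once
    # gives the six affine parameters by least squares.
    A = np.hstack([src, np.ones((len(src), 1))])
    params, *_ = np.linalg.lstsq(A, dst, rcond=None)  # shape (3, 2)
    T = params[:2].T   # 2x2 linear part
    b = params[2]      # translation vector
    return T, b
```

With exactly three non-collinear correspondences the fit is exact; with more matched triangles the least-squares solution averages out minutia localization noise.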

Vein Pattern Recognition Based on Combined Features
Our human recognition is based on hybrid features that combine geometric-related features and vein pattern features. The geometric features are related to the geometric structure of the hand, namely the geometric error (E_Hand) derived from Equation (12). The vein pattern features are the vein pattern error (E_VeinMAP) and the barcoded features error (E_Vein) derived from Equations (11) and (13), respectively.
The combined error is the weighted linear combination of these errors (Equation (14)), which is then used in our vein pattern human recognition system.
We tested the performance of our vein pattern recognition system on 140 subjects (87 females; 53 males) aged between 18 and 40 years. Ten images were collected from each subject with different hand orientations, including left and right roll rotation, front and back pitch rotation, left and right yaw rotation and four relaxed hand positions. The roll, pitch and yaw rotation collection is shown in Figure 26.
The limits of the roll and pitch angles are within ±20°, while that of the yaw angle is 0–180°. The total number of images is 1400 (140 × 10). The number of genuine matches is 6300 (45 possible pairs of vein patterns from the same hand × 140 individuals = 6300), and the number of impostor matches is 9730 ((140 × 139)/2 possible pairs of template vein patterns from different hands). The distribution of matching scores is presented as two curves: scores obtained from the same hand (red line on the right) and scores from different hands (blue line on the left), as shown in Figure 27, where the cross-over point of the false non-match rate (FNMR) curve and the false match rate (FMR) curve is 0.607.
As shown in Figure 28, we calculated the Receiver Operating Characteristic (ROC) curve. The area under the curve (AUC) is used as the optimization objective since it provides a good representation of the ROC performance. In this system, the AUC value of 0.99842 indicates that our method discriminates extremely well. We also evaluated the overall performance in terms of the equal error rate (EER), defined as the error rate at which the false non-match rate (FNMR) of genuine vein patterns and the false match rate (FMR) of impostor vein patterns assume the same value, as shown in Figure 29. A common approach is to use the cross-over point between the FNMR curve and the FMR curve. In this regard, we achieved an EER of 0.243%, indicating that our method is feasible and effective for dorsal vein recognition with high accuracy. Finally, a detection error tradeoff (DET) curve plotting the FMR against the FNMR is presented in Figure 30. The comparison results are presented in Table 2.
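Reading the EER off the FNMR/FMR cross-over point can be sketched as follows. This is a generic threshold sweep, not the paper's evaluation code; the convention that larger scores mean "more similar" is an assumption.

```python
import numpy as np

def equal_error_rate(genuine, impostor):
    """Sweep a decision threshold over all observed scores; FNMR is the
    fraction of genuine scores rejected, FMR the fraction of impostor
    scores accepted. The EER is taken at the threshold where the two
    rates are closest."""
    scores = np.sort(np.concatenate([genuine, impostor]))
    best_gap, eer = np.inf, 1.0
    for t in scores:
        fnmr = np.mean(genuine < t)    # genuine pairs rejected at t
        fmr = np.mean(impostor >= t)   # impostor pairs accepted at t
        if abs(fnmr - fmr) < best_gap:
            best_gap, eer = abs(fnmr - fmr), (fnmr + fmr) / 2
    return eer
```

Perfectly separated genuine and impostor score distributions give an EER of 0; overlapping distributions, as in any real system, give the small positive value reported above.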

Discussion and Conclusion
In this paper, we introduced a hybrid dorsal hand vein and dorsal geometry modality for human recognition. We designed a hand vein pattern acquisition system that is immune to ambient light disturbance. The captured infrared image is preprocessed using a median filter, closing morphological process, thinning morphological process, pruning morphological process and minutia detection. To find correspondences between the minutia points of two vein pattern images, a set of geometric invariants was determined based on triangles constructed from sets of minutia point triplets. After the correspondences were established, the parameters of the relevant transformation were estimated and the two images were aligned. The performance of our method was demonstrated by its ability to register two vein pattern images scanned under a host of shape transformations. The results of the vein pattern alignment revealed that our proposed method can find the corresponding minutiae and align any two vein patterns related by an affine transformation. This makes our system applicable under conditions that differ from those under which the database of vein patterns was constructed. For vein pattern verification, we also proposed a rule that combines the geometric distance map error with the barcoded features to verify the query vein pattern against the reference vein pattern. Our method yielded an area of 0.99842 under the ROC curve and compares favorably to other methods, with an EER of 0.243%.
Funding: This research received no external funding.