Article

The Method of Automatic Knuckle Image Acquisition for Continuous Verification Systems

Institute of Computer Science, University of Silesia, Bedzinska 39, 41-200 Sosnowiec, Poland
Symmetry 2018, 10(11), 624; https://doi.org/10.3390/sym10110624
Submission received: 30 August 2018 / Revised: 5 November 2018 / Accepted: 7 November 2018 / Published: 10 November 2018

Abstract

The paper proposes a method of automatic knuckle image acquisition for continuous verification systems. The developed acquisition method is dedicated to verification systems in which the person being verified uses a computer keyboard. This manner of acquisition enables registration of the knuckle image without interrupting the user’s work, an important advantage not offered by currently known methods. The process of automatically locating the finger knuckle can be considered a pattern recognition task and is based on the analysis of symmetry and similarity between reference knuckle patterns and the live camera image. The effectiveness of this approach was tested experimentally, and the results confirmed that it is high. The effectiveness of the proposed method was also determined in the case where it forms part of a multi-biometric method.

1. Introduction

One of the most important problems faced every day by both companies and individuals is the protection of sensitive data stored in computer systems. These data can be a very valuable asset for thieves, who use them in broadly understood cybercrime. Cyberattacks can be carried out remotely, i.e., from outside the premises of the company being attacked; however, a large percentage of intrusions are initiated while the criminal is inside the company [1]. This type of attack occurs, for example, when users walk away from their computers without logging out, thus allowing an intruder to access the machine. This threat makes it necessary to develop methods that allow such attacks to be detected and effectively neutralized [2,3].
In most of the security systems known so far, user verification is carried out once, when the user starts work. With such one-time login, detecting an attack in which an intruder takes over access to the computer is practically impossible. An effective solution to this problem is continuous verification, carried out repeatedly at short time intervals or whenever there is a suspicion that an unauthorized person is at the computer. The verification systems known so far are based mainly on passwords, PIN codes or ID cards. Unfortunately, such solutions do not always guarantee compliance with relevant security standards, mainly due to the imperfections of human nature: passwords or cards can be lost, forgotten or stolen. In addition, having to enter a password many times is a nuisance, which makes such solutions unpopular in continuous verification systems.
Biometrics is a tool whose usefulness for detecting intruders trying to gain unauthorized access to computer systems has been demonstrated recently [4,5]. Biometric verification and identification methods are based on the analysis of popular physical features (e.g., iris, retina, friction ridges, fingerprints, blood vessel patterns, ear shape) [6,7,8,9,10].
The usefulness of a feature in a biometric system depends on which of the following requirements it fulfills [11]:
  • versatility—each person should have a given feature;
  • uniqueness—no two persons should have the same feature;
  • durability—invariability of the feature in time;
  • measurability—a possibility of measuring with the use of a practical device;
  • storability—features can be registered and stored;
  • acceptability and convenience of use along with the adequacy of the size of the device.
Continuous verification requires frequent acquisition of biometric data. It is therefore important that the biometric feature used, in addition to being unique and universal, should also be convenient to acquire and acceptable to users. Otherwise, using the feature will be just as troublesome as using a password or an ID card. Unfortunately, for the vast majority of physical features, the convenience of acquisition is not satisfactory. For example, when fingerprints are acquired, the user must stop working and press a finger against the scanner. A similar rule applies when acquiring a vein or retina image. In addition, the acquisition of retina images is characterized by a low level of acceptability: despite the lack of physical contact with the scanner, some users worry about their health when their eye is scanned. Another physical feature that is inconvenient to acquire, and thus to use, is the knuckle image. This feature relies on the analysis of the skin furrows visible on the surface of a knuckle; examples of methods used for analyzing the knuckle can be found in [12,13,14,15]. Unfortunately, for this feature too, acquisition requires placing the hand in a special scanner. This work was aimed at developing a method of acquiring a physical feature that is convenient and, very importantly, undemanding for the user; such a method has not been developed so far. The result of the work is a new method of automatic acquisition of knuckle images. In the new approach, a camera continuously observes the user’s hands during keyboard use. When an image needs to be acquired, a method developed especially for this purpose locates the index finger in the image and a photo of its knuckle is taken. Another method then evaluates the quality of the photo; blurred images are not used in further stages of verification. This approach, in which the knuckle image is registered during the normal course of the user’s work, does not require the user to interrupt work and place a hand in a scanner. All these factors very significantly increase the possibilities of the practical use of knuckle images in biometric systems. The effectiveness of the new acquisition method was determined experimentally, and its usefulness was examined by measuring its speed of operation. The research also included its implementation as an element of two biometric methods: the first is a verification method based on the knuckle image, while the second is a multi-biometric method combining knuckle analysis with the analysis of keystroke dynamics. The outcomes of the experiments, presented in the research part, showed a high level of usability of the proposed acquisition method.
To sum up, the scientific contribution of this work includes:
  • developing a method of automatic acquisition of knuckle images that enables continuous verification of the user without the necessity to interrupt the user’s work,
  • developing a method of evaluating the quality of the image obtained as a result of the acquisition,
  • demonstrating the high effectiveness and speed of operation of the method,
  • proposing the implementation of the method as an element of a biometric or multi-biometric system.

2. A Method of Automatic Acquisition of Finger Knuckle Images

Person verification based on the knuckle image consists of analyzing the skin furrows located on the knuckle between the middle phalanx and the proximal phalanx. The analysis compares the shapes and locations of individual furrows in the reference and verified images. A sample photo of a finger knuckle is shown in Figure 1.
In order to analyze the furrows, the knuckle image has to be acquired. A significant disadvantage of currently known acquisition methods is that users have to put their hand inside a special rig where the image recording device is located. An example of such a rig is presented in Figure 2.
Such a method of acquisition requires interrupting a user’s work for the time of acquisition. The average time a user needs to complete the entire acquisition process is about eight seconds, and the procedure is repeated each time there is a suspicion that an unauthorized person is working at the computer. To eliminate this inconvenience, this study proposes a method in which acquisition is performed automatically, i.e., without interrupting the user’s work. The new approach assumes that acquisition is performed with a camera positioned so that the user’s hands can be observed the whole time the keyboard is in use. The device takes a photo of the hand; image processing methods are then used to locate the right hand in the image and, subsequently, the index finger on the hand itself. The specific character of the proposed method means that photos are taken while the hands are moving, which may result in blurring. For this reason, the method assesses the quality of the image. A detailed description of the stages of the method is presented in the following subsections.

2.1. Taking a Photo of the Hand

The aim of the first stage of the method is to take a photo of the user’s hand. The photo is taken using a camera or video camera installed in the central part of the keyboard; in this study, a small tripod was used for this purpose. Such a rig can be used to register images of keyboards of both desktop computers and laptops. The method requires that the camera always be located at the same distance from the computer keyboard. As a result, the hands visible in the photo always have a similar size, so the image does not have to be scaled. The rig used in the studies is shown in Figure 3, where (1) indicates the laptop and (2) the video camera on the tripod.
Initially, a reference photo of the keyboard without the user’s hands should be taken. Only after taking the reference photo does the camera take a photo of the hand typing on the keyboard. Both the reference keyboard photo and the photo of the hand are saved in grayscale and designated as $I_{ref}(x, y)$ and $I(x, y)$, respectively, with $x = 1, \dots, M$, $y = 1, \dots, M$, where $M$ is the width and height of the images.
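As an illustration of this capture step, a minimal sketch using OpenCV is given below; the camera index and the moment at which a frame is grabbed are assumptions, not part of the original method.

```python
import cv2

def capture_grayscale(camera_index=0):
    """Grab a single frame from the keyboard-facing camera and
    return it in grayscale (a minimal sketch)."""
    cap = cv2.VideoCapture(camera_index)  # assumed camera index
    ok, frame = cap.read()
    cap.release()
    if not ok:
        raise RuntimeError("camera frame could not be read")
    return cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)

# Reference photo of the empty keyboard, then a photo with the hands:
# I_ref = capture_grayscale()  # taken once, without the hands
# I = capture_grayscale()      # taken while the user is typing
```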

2.2. Exposing the Hand on the Keyboard

In the next step, the contour of the user’s hand in the image $I$ should be exposed. This task is carried out using the foreground detection technique [16,17]. It is a very simple and fast operation, and its outcome is sufficient to ensure the correct course of the further stages of the method. Image subtraction is possible thanks to the reference image $I_{ref}(x, y)$ of the keyboard itself. The image $I_S(x, y)$ resulting from the subtraction of the images is obtained using operation (1):

$$I_S(x, y) = I_{ref}(x, y) - I(x, y), \quad x = 1, \dots, M, \; y = 1, \dots, M. \quad (1)$$

To reduce the influence of external factors (e.g., lighting), the image $I_S$ is subjected to binarization, with the binarization threshold selected using the Otsu method [18]. The result of the image subtraction and binarization operations is shown in Figure 4.
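A minimal sketch of this step with OpenCV follows; taking the absolute difference, which keeps the 8-bit pixel values non-negative, is an implementation assumption.

```python
import cv2

def expose_hand(I_ref, I):
    """Foreground detection by subtracting the keyboard reference
    image, Equation (1), followed by Otsu binarization (a sketch)."""
    I_S = cv2.absdiff(I_ref, I)  # assumed absolute difference
    # Otsu's method selects the binarization threshold automatically.
    _, I_S_bin = cv2.threshold(I_S, 0, 255,
                               cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    return I_S_bin
```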

2.3. Location of Patterns in the Image

After exposing the contour of the hand, the right-hand index finger in the image is searched using the Template Matching ( T M ) technique [19,20,21]. This technique is used to indicate the part of the image matching the pattern searched for. The input for the Template Matching technique is the image I S with the size of M × M and the pattern T with the size of N × N , where M > N . Furthermore, the operation of searching for the pattern T in the image I S using the Template Matching technique will be designated as τ ( I S , T ) . The result of the operation of the Template Matching technique is the M × M matrix R . Elements of the matrix R will be designated as R ( x , y ) and can be determined based on different metrics of the function of matching of the images being compared [22]. Below, the definitions of some measures that were used during the experiments are given, the results of which are described in the section “Experimental verification”. The following methods have been selected because of the ease of their implementation and a high effectiveness:
- Square Difference (SD):
$$R_{SD}(x, y) = \sum_{x', y'} \left( T(x', y') - I_S(x + x', y + y') \right)^2, \quad (2)$$
- Square Difference Normed (SDN):
$$R_{SDN}(x, y) = \frac{\sum_{x', y'} \left( T(x', y') - I_S(x + x', y + y') \right)^2}{\sqrt{\sum_{x', y'} T(x', y')^2 \cdot \sum_{x', y'} I_S(x + x', y + y')^2}}, \quad (3)$$
- Correlation (C):
$$R_{C}(x, y) = \sum_{x', y'} \left( T(x', y') \cdot I_S(x + x', y + y') \right)^2, \quad (4)$$
- Correlation Normed (CN):
$$R_{CN}(x, y) = \frac{\sum_{x', y'} \left( T(x', y') \cdot I_S(x + x', y + y') \right)^2}{\sqrt{\sum_{x', y'} T(x', y')^2 \cdot \sum_{x', y'} I_S(x + x', y + y')^2}}. \quad (5)$$
Determining the coordinates (row and column) of the maximum value in the matrix $R$ allows the location of the searched pattern $T$ in the image $I_S$ to be determined. Figure 5 shows the image $I_S$, the pattern $T$ and the matrix $R$ created on their basis. In the image $I_S$, the location of the pattern $T$ determined with the $TM$ method is marked with a square.
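For illustration, a minimal sketch of this step using OpenCV’s template matcher is given below. OpenCV exposes the four measures above under its own names; note that for the squared-difference variants the best match is the minimum of $R$, not the maximum.

```python
import cv2

# OpenCV names of the matching metrics used in the experiments.
METHODS = {
    "SD":  cv2.TM_SQDIFF,
    "SDN": cv2.TM_SQDIFF_NORMED,
    "C":   cv2.TM_CCORR,
    "CN":  cv2.TM_CCORR_NORMED,
}

def tau(I_S, T, method=cv2.TM_CCORR):
    """Search for the pattern T in the image I_S and return the center
    of the best-matching window (the operation designated tau(I_S, T))."""
    R = cv2.matchTemplate(I_S, T, method)
    _, _, min_loc, max_loc = cv2.minMaxLoc(R)
    # SD/SDN: the best match minimizes R; C/CN: it maximizes R.
    sqdiff = (cv2.TM_SQDIFF, cv2.TM_SQDIFF_NORMED)
    top_left = min_loc if method in sqdiff else max_loc
    h, w = T.shape[:2]
    return (top_left[0] + w // 2, top_left[1] + h // 2)
```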
The operation of locating the user’s finger in the image is performed in two stages. The aim of the first stage is to limit the search area to the part of the image containing the right hand of the user. Only in the second stage is the index finger searched for in this limited fragment of the image. The two-stage localization of the finger gives better effectiveness than methods in which the finger pattern is determined directly in the entire image, as shown in the experimental section.
In the course of work on the method, it became apparent that each user puts their hands on the keyboard in a slightly different manner. As a result, the hands, and thus the fingers, are placed at different angles in relation to the keyboard. In addition, the distance between fingers may vary between individuals. This hinders, and sometimes simply prevents, correct localization of the index finger. Therefore, in the proposed method, the hand and the index finger are searched for in the analyzed image using $n$ patterns $T_i$ from the set $W = \{T_1, \dots, T_n\}$ consisting of hand or index finger patterns, respectively. The patterns represent the hands of different people and differ from each other. Examples of the hand and finger patterns used are shown in Figure 6, where the mentioned differences in the positions of hands and fingers are clearly visible.
The general way of locating $n$ patterns using the $TM$ method is presented in Algorithm 1. The input for the algorithm is the image $I_S$ and the set $W$ containing the patterns searched for in the image. The result of Algorithm 1 is the set $P$ containing $n$ points; the coordinates of each point indicate the center of a given pattern from the set $W$ in the image $I_S$.
Algorithm 1: Location of n patterns in the image
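The listing itself appears as an image in the published paper; based on the description above, a minimal reconstruction is sketched below (the helper tau is the template-matching operation from the previous sketch, and the exact loop structure is an assumption).

```python
import cv2

def locate_patterns(I_S, W, method=cv2.TM_CCORR):
    """Algorithm 1 (sketch): locate every pattern of the set W in the
    image I_S and collect the centers of the matched windows."""
    P = []
    for T_i in W:
        # tau(I_S, T_i) returns the center of the best match of T_i.
        P.append(tau(I_S, T_i, method))
    return P
```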
In the next stage of the method, the coordinates of all points from the set $P$ are averaged, which yields the coordinates of one point $p(x_{avg}, y_{avg})$. It should be remembered that the individual patterns in the set $W$ were selected in such a way that they differ from each other. This increases the probability that one of the patterns will be similar to the object (hand or finger) in the analyzed image. Unfortunately, the diversity of patterns means that some of them may have a shape very different from the shape of the object currently searched for. In this case, the $TM$ technique may perform the localization incorrectly, i.e., indicate a pattern location that deviates significantly from the actual one. An example of such a situation can be seen in Figure 7, where two points were located incorrectly. Such indications should be treated as outliers. To eliminate outliers, the method described in [23] was used. In this method, a parameter $D_i$ is determined for each point $p_i(x, y)$; it assesses how far the point is located from the other points. Determination of the measure $D_i$ begins with the determination of the cumulative distribution functions $\hat{F}_1^i(\lambda)$ and $\hat{F}_2^i(\lambda)$:

$$\hat{F}_1^i(\lambda) = \frac{1}{n} \sum_{j=1}^{n} \mathbb{1}\left(d_E(p_i(x, y), p_j(x, y)) \le \lambda\right), \qquad \hat{F}_2^i(\lambda) = \frac{1}{n^2} \sum_{k=1}^{n} \sum_{j=1}^{n} \mathbb{1}\left(d_E(p_k(x, y), p_j(x, y)) \le \lambda\right), \quad \lambda = 1, 2, \dots, \delta, \quad i = 1, \dots, n, \quad (6)$$

where $d_E(p_i(x, y), p_j(x, y))$ is the Euclidean distance between $p_i(x, y)$ and $p_j(x, y)$, the value $\delta$ is the length of the diagonal of the image $I$, and $\mathbb{1}(\cdot)$ is an indicator function:

$$\mathbb{1}\left(d_E(p_k(x, y), p_j(x, y)) \le \lambda\right) = \begin{cases} 1, & \text{if } d_E(p_k(x, y), p_j(x, y)) \le \lambda, \\ 0, & \text{otherwise.} \end{cases} \quad (7)$$

Next, for each point $p_i(x, y)$, we define the value $D_i \in [0, 1]$ as the maximum distance between $\hat{F}_1^i(\lambda)$ and $\hat{F}_2^i(\lambda)$:

$$D_i = \max_{1 \le \lambda \le \delta} \left| \hat{F}_1^i(\lambda) - \hat{F}_2^i(\lambda) \right|, \quad i = 1, \dots, n. \quad (8)$$

A small value of $D_i$ indicates that the point $p_i$ is located near the other points and should not be treated as an outlier. Outliers should not be taken into account when determining the average value. In the proposed method, the point $p_i(x, y)$ is removed from the set $P$ if the value $D_i$ determined for this point is greater than 0.39 [23]. All remaining points are put into the set $O$:

$$O = \{ p_i(x, y) \in P : D_i < 0.39 \}, \quad i = 1, \dots, n. \quad (9)$$
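A minimal NumPy sketch of this outlier-elimination step, written directly from Equations (6)–(9), is given below; the function name and the vectorized loop structure are assumptions.

```python
import numpy as np

def remove_outliers(P, delta, threshold=0.39):
    """Keep only the points of P whose statistic D_i (Equation (8))
    stays below the threshold; returns the set O of Equation (9)."""
    P = np.asarray(P, dtype=float)                 # n points, shape (n, 2)
    lambdas = np.arange(1, int(delta) + 1)         # lambda = 1, ..., delta
    # Pairwise Euclidean distances d_E(p_k, p_j).
    d = np.linalg.norm(P[:, None, :] - P[None, :, :], axis=2)
    # F1[i, l]: share of points within distance lambda_l of p_i.
    F1 = (d[:, :, None] <= lambdas).mean(axis=1)
    # F2[l]: share of all point pairs within distance lambda_l.
    F2 = (d[:, :, None] <= lambdas).mean(axis=(0, 1))
    D = np.abs(F1 - F2).max(axis=1)                # Equation (8)
    return P[D < threshold]                        # Equation (9)
```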
Determination of the point $p(x_{avg}, y_{avg})$, based on the points from the set $O$, takes place on the basis of the following formulas:

$$x_{avg} = \frac{1}{m} \sum_{i=1}^{m} x_i, \qquad y_{avg} = \frac{1}{m} \sum_{i=1}^{m} y_i, \quad (10)$$

where $p(x_i, y_i) \in O$ and $m$ is the number of points in the set $O$.
Then, based on the coordinates $x_{avg}$ and $y_{avg}$, an image fragment $I_f$ centered at these coordinates and with dimensions $w \times h$ is cut out:

$$I_f = I(x, y), \quad x = (x_{avg} - w), \dots, (x_{avg} + w), \quad y = (y_{avg} - h), \dots, (y_{avg} + h). \quad (11)$$

The values of the parameters $w$ and $h$ should be selected so that the entire hand or the entire knuckle is visible in the cut-out image. Under this assumption, the cut-out image showing the hand has the size of the hand pattern searched for, i.e., $w = W/2$, $h = H/2$, where $W$ and $H$ are the width and the height of the hand pattern, respectively. In the case of the finger pattern, the knuckle covers only a small part of it. Taking this into consideration, the average size of knuckles was determined experimentally and compared to the size of the whole finger pattern. Based on these measurements, it has been assumed that, for the finger pattern, $w = W/10$, $h = H/10$, where $W$ and $H$ are the width and the height of the finger pattern, respectively.
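The averaging and cropping steps, Equations (10) and (11), reduce to a few lines; the sketch below assumes NumPy images indexed as [row, column] and a window that lies entirely inside the image.

```python
import numpy as np

def crop_around_average(I, O, w, h):
    """Average the inlier points (Equation (10)) and cut out the
    fragment I_f of half-extent (w, h) around them (Equation (11))."""
    x_avg = int(round(np.mean([x for x, _ in O])))
    y_avg = int(round(np.mean([y for _, y in O])))
    # NumPy images are indexed [row, column] = [y, x].
    return I[y_avg - h : y_avg + h + 1, x_avg - w : x_avg + w + 1]
```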

2.4. Assessment of Finger Image Quality

The proposed method assumes that the photos of the knuckle are taken during the normal work of the user. A photo of a moving object can be blurred or noisy, and it may not be possible to determine furrows in low-quality images. Therefore, in the proposed method, each determined knuckle image $I_f$ is subjected to quality assessment. To assess the quality of the image $I_f$, the measure $\vartheta(I_f)$ was used, which is based on determining the components of the edge gradient in the horizontal and vertical directions, i.e., along rows and columns, respectively:

$$\vartheta(I_f) = \frac{\sum_{x, y} S(x, y)}{w \cdot h}, \qquad S = G_x \circ G_x + G_y \circ G_y, \quad (12)$$

where $G_x = \partial I_f / \partial x$ is the image gradient matrix in the $x$ direction, $G_y = \partial I_f / \partial y$ is the image gradient matrix in the $y$ direction, $w$ and $h$ are the width and height of the image $I_f$, and $\circ$ is the Hadamard (element-wise) product of two matrices.
The values of $\vartheta(I_f)$, determined for four degrees of image blurring, are shown in Figure 8.
If the determined value of the quality measure $\vartheta(I_f)$ is lower than the assumed threshold $Q$, the image $I_f$ is rejected and the process of localizing the finger image starts from the beginning, i.e., from the stage of taking a photo of the hand:

$$\text{image } I_f = \begin{cases} \text{accepted}, & \text{if } \vartheta(I_f) \ge Q, \\ \text{rejected}, & \text{if } \vartheta(I_f) < Q. \end{cases} \quad (13)$$
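A minimal sketch of this quality gate follows; the default threshold Q = 1.2 is the value established experimentally in Section 4, and since the paper does not state the intensity scale of $I_f$, the sketch leaves the pixel values as given, which affects the numeric scale of $\vartheta$.

```python
import numpy as np

def sharpness(I_f):
    """Gradient-energy quality measure of Equation (12)."""
    Gy, Gx = np.gradient(I_f.astype(float))  # gradients along rows, columns
    S = Gx * Gx + Gy * Gy                    # Hadamard products of Equation (12)
    return S.sum() / S.size                  # sum of S(x, y) over w * h pixels

def is_acceptable(I_f, Q=1.2):
    """Quality gate of Equation (13): True only for sharp images."""
    return sharpness(I_f) >= Q
```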
As already mentioned, the presented method is a two-stage one. The stages above, carried out in the right order, apply both to hand and to finger patterns. To illustrate the operation of the method better, its course is presented in the form of the flowchart shown in Figure 9.
The final result of the method is an image showing the furrows located on the knuckle.

3. A Method for Continuous Verification Based on the Finger Knuckle Image

The proposed method of automatic acquisition of a finger knuckle image can be used as a part of a continuous verification system. In this study, a continuous verification method based on [12,14] was developed; its general principle of operation is presented below. The analysis begins by exposing the furrows visible in the image. For this purpose, inter alia, the Frangi filter [24,25] and Otsu binarization were used. The Shape Contexts (SC) and Thin Plate Spline (TPS) methods [12,26,27,28] were used to calculate the similarity between the test knuckle images and the reference image registered earlier in the database. Proper use of these methods allows a preliminary matching of the furrows present in the two compared images. The need for preliminary matching results from the elastic properties of human skin [29]: because of them, the position and size of the furrows belonging to the same person may differ slightly between subsequent images coming from that person. In the proposed method, an adequate selection of the level of matching of the furrows is very important. Too low a level of matching will not reduce the slight differences between samples coming from the same person, while too high a level may cause furrows coming from different people to become too similar to each other. After the preliminary matching of two images, the furrows visible on the knuckles are recorded in the form of chains of points. The resulting chains are then compared with each other and their similarity is determined. If the similarity is greater than the assumed threshold, the compared images are deemed to come from the same person; otherwise, the tested image comes from an unauthorized person. In the proposed method, the knuckle images are captured on the fly and knuckle verification is performed continuously. The principle of operation of the continuous verification process used is given in [12,26,27,28].
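As an illustration of the furrow-exposure step alone, a sketch using the Frangi filter and Otsu thresholding from scikit-image is given below; the filter parameters are the library defaults, not values taken from the cited method.

```python
from skimage.filters import frangi, threshold_otsu

def expose_furrows(I_f):
    """Enhance ridge-like skin furrows with the Frangi filter and
    binarize the response with Otsu's threshold (a sketch)."""
    response = frangi(I_f.astype(float))  # ridge/furrow enhancement
    t = threshold_otsu(response)          # automatic binarization threshold
    return response > t                   # binary furrow map
```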

4. Experimental Verification

The effectiveness of the proposed method was verified in a series of experiments carried out under real conditions. A group of 50 users participated in the tests. A Logitech C920 Pro webcam, which takes photos in Full HD resolution, was used for the tests. This camera is characterized by high photo quality and has a built-in autofocus function; an additional advantage is its integrated automatic correction of light intensity.
Initially, the parameter $Q$ in Formula (13) had to be established. Images with quality $\vartheta(I_f)$ lower than the threshold $Q$ are considered useless and are eliminated from further analysis. To determine the value of $Q$, a set of images showing a knuckle, obtained with the camera, was prepared. The set was prepared manually and was composed of 200 images of varying quality, i.e., characterized by different degrees of blurring. The images were verified using the method described in Section 3. During the experiment, the value of the parameter was changed in the range $Q = 0, \dots, 2$ with a step of 0.1, and for each value of $Q$, the Accuracy (ACC) was determined. The results of the computations are shown in Figure 10.
Based on Figure 10, the value $Q = 1.2$ was adopted in further tests. After determining the value of the parameter $Q$, the evaluation of the effectiveness of the proposed method began; this time, finger knuckle images were registered automatically. The scenario of the subsequent experiments assumed that each user would work with a computer using a keyboard. Every 5 min, the users moved away from their computers; then, they either returned to their own workstations or switched workstations with other users. The cases in which users sat at other users’ computers were treated as attack attempts, and in such situations, the method should block access to the computer. In total, the tests included 400 cases of switching computers and 400 cases in which users returned to their own computers. Such a test scenario enabled the determination of the popular measures FAR (False Acceptance Rate), FRR (False Rejection Rate) and ACC (Accuracy) [30]:
$$FAR = \frac{\text{number of imposters accepted}}{\text{number of imposters tested}} \cdot 100\%, \quad (14)$$

$$FRR = \frac{\text{number of genuine users rejected}}{\text{number of genuine users tested}} \cdot 100\%, \quad (15)$$

$$ACC = \frac{\text{number of users correctly recognized}}{\text{number of users tested}} \cdot 100\%. \quad (16)$$
Following this, the average values and standard deviations in each experiment have been calculated.
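These error measures reduce to simple ratios; a minimal helper is sketched below, with the raw counts as assumed inputs.

```python
def error_rates(imposters_accepted, imposters_tested,
                genuine_rejected, genuine_tested):
    """FAR and FRR of Equations (14) and (15), in percent."""
    far = 100.0 * imposters_accepted / imposters_tested
    frr = 100.0 * genuine_rejected / genuine_tested
    return far, frr
```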
The experiment determined how the effectiveness of the method is affected by the different metrics of the image matching function used in the template matching method: Square Difference (SD), Square Difference Normed (SDN), Correlation (C), and Correlation Normed (CN). The impact of the size of the images registered by the camera on the effectiveness of the method was also determined. The size of the image was defined by the parameter $M$, which specifies the height and width of the image. The experiments were carried out using $n = 10$ patterns in the set $W$. The results are presented in Table 1.
Table 1 shows that the selection of the metric of the image matching function affects the effectiveness of the method. The smallest FAR and FRR errors were obtained using the Correlation (C) measure: FAR = 4.18%, FRR = 7.85%, and ACC = 94.81%. When analyzing the impact of the size of the image $I$ registered by the camera, we can see that the minimum resolution that allows high recognition effectiveness is 400 × 400 px; an increase in resolution does not significantly improve the effectiveness of the method. The deterioration in effectiveness at lower resolutions was caused by problems with the proper detection of furrows in the images. Based on the results obtained, the Correlation measure was used in the subsequent experiments, with the resolution of the analyzed images set to 400 × 400 px.
An important element of the studies was the determination of the impact of the number of patterns used to locate both the hand and the index finger on the effectiveness of the method. In the previous experiment, the number of patterns was $n = 10$. To check whether a reduction in the number of patterns negatively affects the effectiveness of the method, in the next experiment the number of patterns was varied in the range $n = 4, \dots, 10$. The FAR, FRR and ACC values obtained for individual values of $n$ are presented in Table 2.
The results presented in Table 2 show that a set composed of $n = 7$ patterns is sufficient to obtain the highest possible effectiveness of the method; using a larger number of patterns does not significantly improve it.
In addition, the validity of the approach in which many patterns are searched for in the image was examined in the next experiment, in which the set $W$ contained only one pattern. Of course, in such a scenario, the stage of eliminating outliers was omitted, because only one point was determined in the image. The tests were repeated ten times, each time with a different pattern. The average values obtained were FAR = 15.84%, FRR = 26.41% and ACC = 79.50%. These results are significantly worse than those obtained with the method based on searching for multiple patterns, which unambiguously confirms that the developed method of locating the object with multiple patterns is effective.
To confirm the validity of the two-stage method of searching for patterns in the image (see Section 2.3), its effectiveness was compared with the results obtained using only one search stage, in which the knuckle was searched for straight away in the entire image, i.e., the search area had not been previously narrowed down to the area of the right hand. In this case, the effectiveness obtained was only FAR = 23.46%, FRR = 32.31% and ACC = 71.74%, definitely worse than in the two-stage approach.
An increase in the number of patterns improves the effectiveness of the method but also extends the time of image analysis. To fully evaluate the impact of the parameter $n$, its influence on the execution time of Algorithm 1 was also measured. These measurements are important because the extent of the input data of Algorithm 1 depends directly on the number $n$ of patterns used. The hand patterns are larger than the finger patterns, so the tests for Algorithm 1 were carried out separately for each pattern type. In our experiments, the execution times were measured on a PC equipped with an Intel Xeon E5440 processor running at 2.83 GHz, 8 GB of RAM and a Windows 7 x64 operating system. The results are presented in Table 3.
The results show that the dependence between the algorithm’s execution time and the number $n$ is close to linear: finding a single pattern using the TM method takes about 110 ms for a hand pattern and about 75 ms for a finger pattern. When determining the optimal value of $n$ on the basis of the results from Table 2 and Table 3, achieving high effectiveness of the method was set as the priority; therefore, in further experiments, the value $n = 7$ was used, following Table 2. The time complexity was also determined for the individual stages of the proposed method, from the stage of image acquisition to the stage of verification of the image by the classifier. The results are presented in Table 4.
In the tests carried out by the author, the duration of the entire verification process based on the knuckle image was about 1.8 s.
The last stage of the studies was to determine the suitability of the proposed method in single- and multi-biometric continuous person verification systems [31]. The studies included both the determination of the effectiveness of the single method used separately and the assessment of the effectiveness of a method combining two methods.
For the tests of the continuous multi-biometric system, systems combining the analysis of keystroke dynamics with the analysis of the knuckle image were selected. These methods were developed by the author and described in detail in [12,13,15]. In the proposed methods, the verification of the user’s identity is performed in two stages. The purpose of the first stage is verification based on the analysis of keystroke dynamics. If the verification of the user’s identity is successful, the user can continue working. If the verification is unsuccessful, there is a suspicion that the current user is an intruder; in such a situation, the user is subjected to additional verification, this time based on an analysis of the finger knuckle image. A positive result of the additional verification means that the user can continue working, and the verification procedure returns to the stage of analyzing keystroke dynamics. However, if the additional verification is not successful, the user’s access to the computer’s resources is blocked. The results obtained are presented in Table 5, which also includes values describing the effectiveness of the multi-biometric continuous verification method when the acquisition of knuckle images took place in the traditional way, i.e., by putting the hand in a special device.
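The two-stage decision logic described above can be sketched as a simple loop; the verifier callables and the polling interval are assumptions standing in for the keystroke-dynamics and knuckle classifiers of [12,13,15].

```python
import time

def continuous_verification(verify_keystrokes, verify_knuckle, lock_computer,
                            interval_s=1.0):
    """Two-stage continuous verification loop (a sketch): keystroke
    dynamics are checked first; knuckle analysis runs only on suspicion."""
    while True:
        if verify_keystrokes():
            time.sleep(interval_s)   # identity confirmed; keep monitoring
            continue
        # Suspicion of an intruder: additional knuckle-based verification.
        if not verify_knuckle():
            lock_computer()          # both stages failed: block access
            break
```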
When analyzing the results from Table 5, it can be seen that the proposed automatic acquisition method gives results comparable to those of the methods in which the acquisition of knuckle images took place in the traditional way (by putting a hand into the scanner). However, it should be emphasized that the proposed method has an indisputable advantage: fully automatic acquisition of the finger image, which competing methods do not offer.

5. Conclusions

This paper proposes a method of automatic acquisition of knuckle images. The acquisition takes place while the user is using a computer keyboard. The conclusions concerning the proposed method can be summarized as follows:
  • The proposed method does not require the user to interrupt the work.
  • The tests indicated a high effectiveness of the proposed method. After determining the optimal parameters of the method, the following verification errors were obtained: FAR = 4.18%, FRR = 7.85%.
  • The values obtained are comparable with results of currently known methods; however, it should be emphasized that the competing methods do not offer automatic image acquisition, which negatively affects the usability of such methods.
  • The effectiveness of the proposed method has also been tested as part of a multi-biometric method in which, apart from the analysis of the knuckle image, keystroke dynamics are analyzed as well. In this case too, the new manner of acquisition did not negatively affect the effectiveness of the method.
The tests indicated that the method has some limitations. One of them is the problem of locating the knuckle image for people typing on the keyboard with the left hand only; there was only one such person in the test group of 50 people. Another limitation is related to the method used for exposing the hand in the image of the keyboard: it requires that the color of the keyboard contrast with the color of the hand. If, for example, a white keyboard is used, it will be harder to expose the hand on it. Future studies will address, inter alia, the elimination of the aforementioned limitations of the method and the use of machine learning algorithms for the recognition of patterns in the image.

Funding

This research received no external funding.

Conflicts of Interest

The author declares no conflict of interest.

References

  1. Salem, M.B.; Hershkop, S.; Stolfo, S.J. A Survey of Insider Attack Detection Research. In Insider Attack and Cyber Security: Beyond the Hacker; Stolfo, S.J., Bellovin, S.M., Keromytis, A.D., Hershkop, S., Smith, S.W., Sinclair, S., Eds.; Springer US: Boston, MA, USA, 2008; pp. 69–90.
  2. Fernández-Alemán, J.L.; Señor, I.C.; Lozoya, P.Á.O.; Toval, A. Security and privacy in electronic health records: A systematic literature review. J. Biomed. Inform. 2013, 46, 541–562.
  3. Gunter, T.D.; Terry, N.P. The Emergence of National Electronic Health Record Architectures in the United States and Australia: Models, Costs, and Questions. J. Med. Internet Res. 2005, 7, e3.
  4. Doroz, R.; Porwik, P.; Wrobel, K. Signature Recognition Based on Voting Schemes. In Proceedings of the 2013 International Conference on Biometrics and Kansei Engineering, Tokyo, Japan, 5–7 July 2013; pp. 53–57.
  5. Doroz, R.; Porwik, P. Handwritten Signature Recognition with Adaptive Selection of Behavioral Features. In Proceedings of the 10th International Conference on Computer Information Systems Analysis and Technologies, Kolkata, India, 14–16 December 2011; pp. 128–136.
  6. Peralta, D.; Galar, M.; Triguero, I.; Miguel-Hurtado, O.; Benitez, J.M.; Herrera, F. Minutiae filtering to improve both efficacy and efficiency of fingerprint matching algorithms. Eng. Appl. Artif. Intell. 2014, 32, 37–53.
  7. Barpanda, S.S.; Sa, P.K.; Marques, O.; Majhi, B.; Bakshi, S. Iris recognition with tunable filter bank based feature. Multimedia Tools Appl. 2018, 77, 7637–7674.
  8. Albakoor, M.; Saeed, K.; Rybnik, M.; Dabash, M. FE8R—A Universal Method for Face Expression Recognition. In Proceedings of the 15th IFIP International Conference on Computer Information Systems and Industrial Management (CISIM), Vilnius, Lithuania, 14–16 September 2016; Springer International Publishing: Cham, Switzerland, 2016; pp. 633–646.
  9. Arsalan, M.; Hong, H.G.; Naqvi, R.A.; Lee, M.B.; Kim, M.C.; Kim, D.S.; Kim, C.S.; Park, K.R. Deep Learning-Based Iris Segmentation for Iris Recognition in Visible Light Environment. Symmetry 2017, 9, 263.
  10. Porwik, P.; Wrobel, K. The New Algorithm of Fingerprint Reference Point Location Based on Identification Masks. In Proceedings of the 4th International Conference on Computer Recognition Systems, CORES’05, Rydzyna Castle, Poland, 22–25 May 2005; pp. 807–814.
  11. Clarke, R. Human Identification in Information Systems: Management Challenges and Public Policy Issues. Inf. Technol. People 1994, 7, 6–37.
  12. Safaverdi, H.; Wesolowski, T.E.; Doroz, R.; Wrobel, K.; Porwik, P. Computer User Verification Based on Typing Habits and Finger-Knuckle Analysis. In Proceedings of the 9th International Conference on Computational Collective Intelligence, ICCCI 2017, Nicosia, Cyprus, 27–29 September 2017; pp. 161–170.
  13. Wesolowski, T.E.; Doroz, R.; Wrobel, K.; Safaverdi, H. Keystroke Dynamics and Finger Knuckle Imaging Fusion for Continuous User Verification. In Proceedings of the 16th IFIP TC8 International Conference on Computer Information Systems and Industrial Management, CISIM 2017, Bialystok, Poland, 16–18 June 2017; pp. 141–152.
  14. Doroz, R.; Wrobel, K.; Porwik, P.; Safaverdi, H. The Method of Person Verification by Use of Finger Knuckle Images. In Proceedings of the 10th International Conference on Computer Recognition Systems, CORES 2017, Polanica Zdroj, Poland, 22–24 May 2017; pp. 248–257.
  15. Wesolowski, T.E.; Safaverdi, H.; Doroz, R.; Wrobel, K. Hybrid verification method based on finger-knuckle analysis and keystroke dynamics. J. Med. Inform. Technol. 2017, 26, 26–36.
  16. Jeeva, S.; Sivabalakrishnan, M. Survey on Background Modeling and Foreground Detection for Real Time Video Surveillance. Procedia Comput. Sci. 2015, 50, 566–571.
  17. Nawaz, M.; Cosmas, J.; Adnan, A.; Haq, M.I.U.; Alazawi, E. Foreground detection using background subtraction with histogram. In Proceedings of the IEEE International Symposium on Broadband Multimedia Systems and Broadcasting, BMSB 2013, London, UK, 5–7 June 2013; pp. 1–5.
  18. Otsu, N. A Threshold Selection Method from Gray-Level Histograms. IEEE Trans. Syst. Man Cybern. 1979, 9, 62–66.
  19. Khalil, M. Car plate recognition using the template matching method. Int. J. Comput. Theory Eng. 2010, 2, 683.
  20. Oron, S.; Dekel, T.; Xue, T.; Freeman, W.T.; Avidan, S. Best-Buddies Similarity—Robust Template Matching Using Mutual Nearest Neighbors. IEEE Trans. Pattern Anal. Mach. Intell. 2018, 40, 1799–1813.
  21. Weber, J.; Lefèvre, S. Spatial and spectral morphological template matching. Image Vis. Comput. 2012, 30, 934–945.
  22. Hisham, M.B.; Yaakob, S.N.; Raof, R.A.A.; Nazren, A.B.A.; Embedded, N.M.W. Template Matching using Sum of Squared Difference and Normalized Cross Correlation. In Proceedings of the 2015 IEEE Student Conference on Research and Development (SCOReD), Kuala Lumpur, Malaysia, 13–14 December 2015; pp. 100–104.
  23. Doroz, R.; Wrobel, K.; Porwik, P. An accurate fingerprint reference point determination method based on curvature estimation of separated ridges. Appl. Math. Comput. Sci. 2018, 28, 209–225.
  24. Ng, C.C.; Yap, M.H.; Costen, N.; Li, B. Automatic Wrinkle Detection Using Hybrid Hessian Filter. In Proceedings of the Tenth Asian Conference on Computer Vision, ACCV 2014, Columbus, OH, USA, 23–28 June 2014; Springer International Publishing: Cham, Switzerland, 2015; pp. 609–622.
  25. Iwahori, Y.; Hattori, A.; Adachi, Y.; Bhuyan, M.; Woodham, R.J.; Kasugai, K. Automatic Detection of Polyp Using Hessian Filter and HOG Features. Procedia Comput. Sci. 2015, 60, 730–739.
  26. Belongie, S.; Mori, G.; Malik, J. Matching with Shape Contexts. In Statistics and Analysis of Shapes; Krim, H., Yezzi, A., Eds.; Birkhäuser Boston: Boston, MA, USA, 2006; pp. 81–105.
  27. Zhang, H.; Malik, J. Learning a discriminative classifier using shape context distances. In Proceedings of the 2003 IEEE Computer Society Conference on Computer Vision and Pattern Recognition, Madison, WI, USA, 18–20 June 2003; Volume 1.
  28. Belongie, S.; Malik, J.; Puzicha, J. Shape matching and object recognition using shape contexts. IEEE Trans. Pattern Anal. Mach. Intell. 2002, 24, 509–522.
  29. Fagert, M.; Morris, K. Quantifying the limits of fingerprint variability. Forensic Sci. Int. 2015, 254, 87–99.
  30. Bolle, R. Guide to Biometrics; Springer Professional Computing; Springer: New York, NY, USA, 2004.
  31. Yang, J.; Sun, W.; Liu, N.; Chen, Y.; Wang, Y.; Han, S. A Novel Multimodal Biometrics Recognition Model Based on Stacked ELM and CCA Methods. Symmetry 2018, 10, 96.
Figure 1. A finger knuckle with visible furrows.
Figure 2. The rig for the acquisition of knuckle images.
Figure 3. Rig for image acquisition: 1—laptop, 2—video camera on a tripod.
Figure 4. Stages of the image subtraction operation: (a) reference photo of the keyboard $I_{ref}(x, y)$; (b) photo of the keyboard with the hands on it, $I(x, y)$; (c) photo $I_S(x, y)$ obtained as a result of subtracting the images $I_{ref}(x, y)$ and $I(x, y)$; (d) photo $I_S(x, y)$ subjected to binarization.
Figure 5. (a) Image $I_S$; (b) pattern $T$; (c) graphical representation of the matching function in the matrix $R$.
Figure 6. The patterns searched for in the image: (a–d) hand patterns; (e–h) finger patterns.
Figure 7. Image $I_S$, where squares indicate the locations of the right hand determined with the use of $n = 10$ patterns.
Figure 8. Images with four degrees of blurring: (a) $\vartheta(I_f) = 1.35$; (b) $\vartheta(I_f) = 1.16$; (c) $\vartheta(I_f) = 1.00$; (d) $\vartheta(I_f) = 0.90$.
Figure 9. All stages of the presented method.
Figure 10. The influence of the parameter $Q$ on the effectiveness of the method.
Table 1. The effectiveness of the method depending on the matching metric used. SD = Square Difference, SDN = Square Difference Normed, C = Correlation, CN = Correlation Normed; M is the width and height of the image.

M (px) | SD: FAR / FRR / ACC [%] | SDN: FAR / FRR / ACC [%] | C: FAR / FRR / ACC [%] | CN: FAR / FRR / ACC [%]
100 | 17.09 ± 0.22 / 29.72 ± 0.25 / 76.05 ± 0.79 | 20.14 ± 0.53 / 32.42 ± 0.42 / 64.10 ± 0.55 | 15.26 ± 0.21 / 24.56 ± 0.36 / 79.15 ± 1.16 | 20.14 ± 0.20 / 32.42 ± 0.42 / 67.38 ± 0.74
200 | 9.21 ± 0.14 / 22.09 ± 0.37 / 86.42 ± 0.93 | 10.06 ± 0.25 / 23.47 ± 0.33 / 75.22 ± 0.89 | 8.45 ± 0.09 / 19.72 ± 0.29 / 84.86 ± 1.07 | 10.06 ± 0.15 / 23.47 ± 0.22 / 83.57 ± 0.72
300 | 5.18 ± 0.78 / 11.49 ± 0.09 / 90.85 ± 1.35 | 5.67 ± 0.07 / 13.15 ± 0.20 / 90.61 ± 0.99 | 4.89 ± 0.06 / 11.05 ± 0.11 / 92.84 ± 1.05 | 5.67 ± 0.06 / 13.15 ± 0.17 / 90.70 ± 1.32
400 | 4.39 ± 0.08 / 8.16 ± 0.12 / 91.66 ± 0.99 | 4.81 ± 0.06 / 9.03 ± 0.08 / 93.19 ± 1.44 | 4.18 ± 0.05 / 7.85 ± 0.10 / 94.81 ± 1.08 | 4.81 ± 0.06 / 9.03 ± 0.15 / 92.69 ± 1.31
500 | 4.26 ± 0.06 / 8.15 ± 0.09 / 94.15 ± 1.13 | 4.68 ± 0.06 / 8.86 ± 0.10 / 93.98 ± 1.20 | 4.14 ± 0.07 / 7.84 ± 0.09 / 94.85 ± 0.99 | 4.68 ± 0.05 / 8.86 ± 0.14 / 94.20 ± 1.11
600 | 4.56 ± 0.07 / 8.84 ± 0.08 / 92.23 ± 1.15 | 4.97 ± 0.06 / 9.39 ± 0.13 / 89.32 ± 1.01 | 4.18 ± 0.06 / 7.89 ± 0.12 / 94.80 ± 0.98 | 4.97 ± 0.07 / 9.39 ± 0.11 / 89.69 ± 1.08
700 | 4.61 ± 0.07 / 9.11 ± 0.16 / 93.01 ± 1.01 | 4.61 ± 0.06 / 8.79 ± 0.11 / 91.23 ± 1.28 | 4.15 ± 0.06 / 7.92 ± 0.09 / 94.85 ± 1.43 | 4.61 ± 0.07 / 8.79 ± 0.12 / 94.29 ± 1.00
800 | 4.42 ± 0.05 / 8.43 ± 0.10 / 94.29 ± 1.07 | 4.84 ± 0.06 / 8.98 ± 0.11 / 93.03 ± 1.22 | 4.21 ± 0.06 / 7.81 ± 0.09 / 94.81 ± 1.12 | 4.84 ± 0.08 / 8.98 ± 0.13 / 93.44 ± 1.12
900 | 4.53 ± 0.07 / 8.53 ± 0.11 / 93.37 ± 1.16 | 4.99 ± 0.06 / 9.32 ± 0.13 / 89.88 ± 0.97 | 4.19 ± 0.07 / 7.83 ± 0.10 / 94.81 ± 1.20 | 4.99 ± 0.06 / 9.32 ± 0.14 / 91.56 ± 1.13
Table 2. The impact of the number of patterns searched for in the image on the effectiveness of the method.

Number n of patterns | FAR [%] | FRR [%] | ACC [%]
4 | 6.45 ± 0.09 | 11.26 ± 0.14 | 94.42 ± 1.25
5 | 6.18 ± 0.11 | 10.97 ± 0.13 | 92.15 ± 1.22
6 | 4.49 ± 0.10 | 8.01 ± 0.13 | 94.71 ± 1.32
7 | 4.18 ± 0.11 | 7.85 ± 0.12 | 94.81 ± 1.28
8 | 4.17 ± 0.10 | 7.85 ± 0.12 | 94.78 ± 1.22
9 | 4.18 ± 0.10 | 7.86 ± 0.12 | 94.79 ± 1.21
10 | 4.18 ± 0.11 | 7.85 ± 0.13 | 94.81 ± 1.22
Table 3. The impact of the number of patterns used on the execution time of Algorithm 1.

Number n of patterns | Time (ms), palm | Time (ms), finger
4 | 422 | 296
5 | 545 | 373
6 | 650 | 450
7 | 776 | 519
8 | 892 | 592
9 | 985 | 661
10 | 1063 | 734
Table 4. The time of performing particular stages of acquisition and analysis of images.

Stage | Time (s)
Taking a photo of the hand | 0.048
Exposing the hand on the keyboard | 0.032
Location of the hand on the keyboard | 0.776
Location of the finger on the keyboard | 0.519
Assessment of finger image quality | 0.163
Verification | 0.295
Sum | 1.833
Table 5. The comparison of the performance of various continuous verification methods.

Method | Non-automatic acquisition: FAR / FRR / ACC [%] | Automatic acquisition: FAR / FRR / ACC [%]
Proposed (only knuckle) | 4.03 / 7.22 / 95.20 | 4.18 / 7.85 / 94.81
Keystroke + Knuckle [13] | 1.07 / 3.35 / 98.50 | 1.17 / 3.23 / 97.36
Keystroke + Knuckle [15] | – / – / 98.71 | – / – / 96.97
Keystroke + Knuckle [12] | 0.67 / 2.16 / 98.96 | 0.94 / 2.87 / 98.61
