A Novel Multi-Scaled Deep Convolutional Structure for Punctilious Human Gait Authentication
Abstract
1. Introduction
- Introducing a novel technique for extracting gait images and resizing them into three scales;
- Developing a deep CNN algorithm specifically tailored for efficient gait feature extraction;
- Proposing an innovative approach for fusing features extracted at multiple scales, enhancing the robustness of gait recognition;
- Implementing a fine-tuned multilayer perceptron (MLP) for feature recognition;
- Developing an optimized convolutional neural network (CNN) model;
- Conducting a comprehensive comparative analysis with leading traditional CNN algorithms, demonstrating that the proposed model is more accurate and efficient than state-of-the-art gait recognition approaches.
2. Related Works
- Real-World Data Application: There is a lack of models rigorously tested on diverse, real-world datasets. Improvements are needed for systems to perform well in various environmental and situational contexts;
- Viewing Angle Variability: Many models struggle with large variations in viewing angles. Enhanced view synthesis methods could help improve recognition accuracy in practical settings;
- Dataset Quality: Enhancements in data segmentation and dataset quality are necessary to improve model training outcomes and recognition accuracy;
- System Complexity and Accuracy: Some innovative models exhibit low accuracy due to their complexity. Simplifying these models while maintaining or improving accuracy is crucial;
- Cross-View Recognition: Improved techniques for cross-view gait recognition are needed to ensure consistent performance across different viewing angles.
3. Convolutional Neural Network Models (CNNs)
4. Methodology
- Multi-Scale Image Processing: To accommodate variations in distance and perspective that naturally occur in gait data, silhouette images are segmented, preprocessed, and resized into three distinct scales: 50 × 50, 100 × 100, and 150 × 150 pixels. This scaling ensures that the network learns to recognize features across different resolutions, enhancing its ability to generalize across diverse real-world conditions;
- Feature Extraction: Each scaled image is processed by a custom-designed convolutional neural network (CNN). These networks are tailored to extract spatial hierarchies of features from simple edges to more complex gait patterns. The architecture of each CNN layer is specifically optimized to maximize the extraction of discriminative features from the gait data, which is crucial for the subsequent classification accuracy;
- Feature Fusion: After feature extraction, a fusion mechanism is employed to integrate the features from all scales into a coherent feature set. This integrated set harnesses the complementary information available across different scales, significantly boosting the robustness and accuracy of the gait recognition process.
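The three stages above can be sketched end to end in NumPy. This is an illustrative sketch only: `rescale` and `extract_features` are hypothetical stand-ins for the silhouette resizing and the per-scale CNN (they are not the paper's implementation); only the three scales and the concatenation fusion follow the text.

```python
import numpy as np

SCALES = [50, 100, 150]  # the three input scales used in the paper

def rescale(img: np.ndarray, size: int) -> np.ndarray:
    """Nearest-neighbour resize of a grayscale image (stand-in for proper resizing)."""
    h, w = img.shape
    rows = np.arange(size) * h // size
    cols = np.arange(size) * w // size
    return img[np.ix_(rows, cols)]

def extract_features(img: np.ndarray) -> np.ndarray:
    """Stand-in for a per-scale CNN: a crude edge-energy histogram."""
    gy, gx = np.gradient(img.astype(float))
    mag = np.hypot(gx, gy)
    hist, _ = np.histogram(mag, bins=16, range=(0, mag.max() + 1e-9))
    return hist / hist.sum()

def fused_features(silhouette: np.ndarray) -> np.ndarray:
    """Concatenation feature fusion across the three scales."""
    return np.concatenate([extract_features(rescale(silhouette, s)) for s in SCALES])

# A synthetic binary silhouette, just to make the pipeline runnable.
silhouette = (np.random.default_rng(0).random((240, 320)) > 0.5).astype(np.uint8)
f = fused_features(silhouette)
print(f.shape)  # 3 scales x 16 bins = (48,)
```

The point of the fusion step is visible in the output shape: each scale contributes its own feature vector, and the classifier sees all of them jointly.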
4.1. The Proposed Procedures for Gait Recognition
Algorithm 1: Gait Authentication Model
Input: Isolated gait dataset (I).
Output: Identified gait images.
1.  I ← read(ROI)                                   // Read the isolated gait dataset (ROI)
2.  N ← normalize(I)                                // Normalize all isolated images
3.  train_X, test_X, valid_X, train_y, test_y, valid_y ← split(N)   // Split into training, testing, and validation sets
4.  M ← [ABDGNet, LeNet, AlexNet, VGG, Inception, ResNet50, Xception]   // Training and testing algorithms
5.  Foreach model m in M
6.      F ← extract(m, train_X, test_X, valid_X)    // Extract features
7.      Ffused ← concatenate(F)                     // Apply concatenation feature fusion
8.      mFused ← train(Ffused, train_y)             // Train the fused model layers
9.      Summary ← summary(mFused)                   // Summary of the fused model
10.     mCompiled ← compile(mFused)                 // Compile the fused model
11.     epochs ← 30
12.     batch_size ← 32
13.     ver ← 2
14.     mFit ← fit(mCompiled, train_X, train_y, epochs, batch_size, ver)
15.     plot(mFit)                                  // Plot the training curves
16.     ConfMatrix ← calculate(mFit, test_X, test_y)
17.     plot(ConfMatrix)
18.     Store ← store(mFused)                       // Store the model
19. End For
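As a concrete toy illustration of Algorithm 1's control flow, the NumPy sketch below mirrors the split, the loop over models, the concatenation feature fusion, and the confusion-matrix evaluation. The stand-ins are hypothetical: `extract` replaces each CNN backbone with a fixed random projection, and a nearest-centroid classifier replaces the fine-tuned MLP.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy "isolated gait" data: 90 samples, 3 subjects, 100-dim flattened silhouettes.
y = np.repeat(np.arange(3), 30)
subject_signature = rng.normal(size=(3, 100))
X = rng.normal(size=(90, 100)) + subject_signature[y]

def split(N, labels, train=0.7, valid=0.15):
    """Step 3: train/test/validation split."""
    idx = rng.permutation(len(N))
    i, j = int(len(N) * train), int(len(N) * (train + valid))
    tr, va, te = idx[:i], idx[i:j], idx[j:]
    return N[tr], N[te], N[va], labels[tr], labels[te], labels[va]

def extract(model_name, scale, data):
    # Hypothetical stub for one backbone at one scale: a fixed random projection.
    seed = sum(map(ord, model_name)) + scale
    proj = np.random.default_rng(seed).normal(size=(data.shape[1], 16))
    return data @ proj

train_X, test_X, valid_X, train_y, test_y, valid_y = split(X, y)

results = {}
for m in ["ABDGNet", "LeNet", "AlexNet"]:  # a subset of the model list M
    # Step 7: concatenation feature fusion over the three scales.
    F_train = np.concatenate([extract(m, s, train_X) for s in (50, 100, 150)], axis=1)
    F_test = np.concatenate([extract(m, s, test_X) for s in (50, 100, 150)], axis=1)
    # Nearest-centroid classifier as a minimal stand-in for training the fused layers.
    centroids = np.stack([F_train[train_y == c].mean(axis=0) for c in range(3)])
    pred = np.argmin(((F_test[:, None, :] - centroids) ** 2).sum(-1), axis=1)
    conf = np.zeros((3, 3), dtype=int)  # step 16: confusion matrix
    np.add.at(conf, (test_y, pred), 1)
    results[m] = conf
    print(m, conf.trace() / conf.sum())
```

The loop structure (one fused model per backbone, each evaluated with its own confusion matrix) is the part that corresponds directly to steps 5–19.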
4.2. Dataset Description
4.2.1. CASIA
4.2.2. OU-ISIR
4.2.3. OU-MVLP
4.3. Preprocessing Dataset
Algorithm 2: Preprocessing Steps
Input: RGB gait dataset (D) and background images (DBack).
Output: Binary isolated gait dataset images (I).
1.  Procedure GenerateIsolatedGaitDataset(D, DBack):
2.      DResized ← resize(D, [150, 100, 50])            // Resize all images to three sizes (150, 100, and 50)
3.      DGS ← grayScale(DResized)                       // Apply RGB-to-grayscale conversion
4.      I ← DGS − DBack                                 // Subtract the background from the grayscale gait image
5.      IEqualized ← histogramEqualization(I)           // Apply histogram equalization
6.      IThreshold ← OtsuThresholding(IEqualized)       // Apply Otsu image thresholding
7.      SE ← CreateStructuringElement(radius = 1)       // Construct a structuring element (disk, radius = 1)
8.      IDilated ← DilateImage(IThreshold, SE)          // Dilate the image
9.      IFilled ← fillHoles(IDilated, threshold = t)    // Fill the holes
10.     I ← RemoveSmallComponents(IFilled, threshold = t)   // Remove connected components smaller than t
11.     Return I
12. End Procedure
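A runnable sketch of the core of Algorithm 2 in plain NumPy, with pure-Python stand-ins for the morphology routines (a real implementation would more likely use OpenCV or scikit-image). Histogram equalization and hole filling are omitted for brevity, and the frame/background pair is synthetic, purely for illustration.

```python
import numpy as np
from collections import deque

def otsu_threshold(gray: np.ndarray) -> int:
    """Otsu's method: choose the threshold maximising between-class variance."""
    hist = np.bincount(gray.ravel(), minlength=256).astype(float)
    p = hist / hist.sum()
    w = np.cumsum(p)                      # class-0 weight
    mu = np.cumsum(p * np.arange(256))    # class-0 cumulative mean
    mu_t = mu[-1]
    with np.errstate(divide="ignore", invalid="ignore"):
        sigma_b = (mu_t * w - mu) ** 2 / (w * (1 - w))
    sigma_b[~np.isfinite(sigma_b)] = 0
    return int(np.argmax(sigma_b))

def dilate(binary: np.ndarray) -> np.ndarray:
    """Dilation with a disk of radius 1 (4-connected cross)."""
    out = binary.copy()
    out[1:, :] |= binary[:-1, :]; out[:-1, :] |= binary[1:, :]
    out[:, 1:] |= binary[:, :-1]; out[:, :-1] |= binary[:, 1:]
    return out

def remove_small_components(binary: np.ndarray, t: int) -> np.ndarray:
    """Drop 4-connected foreground components with fewer than t pixels (BFS)."""
    seen = np.zeros_like(binary, dtype=bool)
    out = binary.copy()
    h, w = binary.shape
    for r in range(h):
        for c in range(w):
            if binary[r, c] and not seen[r, c]:
                comp, q = [(r, c)], deque([(r, c)])
                seen[r, c] = True
                while q:
                    y, x = q.popleft()
                    for ny, nx in ((y + 1, x), (y - 1, x), (y, x + 1), (y, x - 1)):
                        if 0 <= ny < h and 0 <= nx < w and binary[ny, nx] and not seen[ny, nx]:
                            seen[ny, nx] = True
                            comp.append((ny, nx)); q.append((ny, nx))
                if len(comp) < t:
                    for y, x in comp:
                        out[y, x] = False
    return out

# Background subtraction then segmentation, mirroring steps 4-10 above.
rng = np.random.default_rng(1)
back = rng.integers(0, 40, size=(60, 60), dtype=np.uint8)
frame = back.copy()
frame[20:50, 25:35] = 200                 # the walking subject
frame[5, 5] = 255                         # salt noise to be removed
diff = np.abs(frame.astype(int) - back.astype(int)).astype(np.uint8)
mask = diff > otsu_threshold(diff)
mask = dilate(mask)
mask = remove_small_components(mask, t=50)
print(bool(mask[30, 30]), bool(mask[5, 5]))  # True False: subject kept, noise removed
```

The small-component filter is what makes the final silhouettes "isolated": subtraction and thresholding alone would keep every stray foreground speck.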
4.4. The Structure of the Proposed CNN Model
4.5. Performance Evaluation
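All of the metrics reported in the result tables of Section 5 (accuracy, precision, recall/sensitivity, F-score, specificity, and FNR) can be derived from a confusion matrix. A minimal NumPy sketch follows; the macro-averaging over classes shown here is an assumption, not necessarily the paper's exact aggregation.

```python
import numpy as np

def metrics(conf):
    """Per-class one-vs-rest metrics from a confusion matrix (rows = true class)."""
    conf = np.asarray(conf, dtype=float)
    tp = np.diag(conf)
    fp = conf.sum(axis=0) - tp
    fn = conf.sum(axis=1) - tp
    tn = conf.sum() - tp - fp - fn
    recall = tp / (tp + fn)          # recall = sensitivity
    precision = tp / (tp + fp)
    specificity = tn / (tn + fp)
    f_score = 2 * precision * recall / (precision + recall)
    fnr = 100 * fn / (fn + tp)       # false-negative rate in %, = 100 * (1 - sensitivity)
    return {
        "accuracy": tp.sum() / conf.sum(),
        "precision": precision.mean(),
        "recall": recall.mean(),
        "f_score": f_score.mean(),
        "specificity": specificity.mean(),
        "fnr_percent": fnr.mean(),
    }

m = metrics([[50, 2], [3, 45]])      # a toy 2-class confusion matrix
print({k: round(v, 3) for k, v in m.items()})
```

Note the complementarity visible in the tables: FNR is simply 100 minus the sensitivity percentage, which is why high-sensitivity rows always show low FNR.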
5. Results and Discussion
5.1. LeNet
5.2. AlexNet
5.3. VggNet
5.4. Inception-v3
5.5. Resnet
5.6. Xception
5.7. The Proposed CNN (ABDGNet)
5.8. Evaluation of CASIA
5.9. Evaluation of OU-ISIR
5.10. Evaluation of OU-MVLP
6. Conclusions
Author Contributions
Funding
Institutional Review Board Statement
Data Availability Statement
Acknowledgments
Conflicts of Interest
References
- Okano, T.; Izumi, S.; Kawaguchi, H.; Yoshimoto, M. Non-contact biometric identification and authentication using microwave Doppler sensor. In Proceedings of the 2017 IEEE Biomedical Circuits and Systems Conference (BioCAS), Turin, Italy, 19–21 October 2017; IEEE: Piscataway, NJ, USA, 2017; pp. 1–4.
- Dadakhanov, S. Analyze and Development System with Multiple Biometric Identification. arXiv 2020, arXiv:2004.04911.
- Krish, R.P.; Fierrez, J.; Ramos, D.; Alonso-Fernandez, F.; Bigun, J. Improving automated latent fingerprint identification using extended minutia types. Inf. Fusion 2019, 50, 9–19.
- Huang, Q.; Duan, B.; Qu, Z.; Fan, S.; Xia, B. The DNA Recognition Motif of GapR Has an Intrinsic DNA Binding Preference towards AT-rich DNA. Molecules 2021, 26, 5776.
- Kortli, Y.; Jridi, M.; Al Falou, A.; Atri, M. Face recognition systems: A survey. Sensors 2020, 20, 342.
- Khanam, R.; Haseen, Z.; Rahman, N.; Singh, J. Performance analysis of iris recognition system. In Data and Communication Networks; Springer: Cham, Switzerland, 2019; pp. 159–171.
- Zhang, Z.; Tran, L.; Yin, X.; Atoum, Y.; Liu, X.; Wan, J.; Wang, N. Gait recognition via disentangled representation learning. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, Long Beach, CA, USA, 15–20 June 2019; pp. 4710–4719.
- Tariq, W.; Othman, M.L.; Akhtar, S.; Tariq, F. Gait Feature Based on Human Identification & Classification by Using Artificial Neural Network and Project Management Approaches for Its Implementation. Int. J. Eng. Technol. 2019, 8, 133–137.
- Yamada, H.; Ahn, J.; Mozos, O.M.; Iwashita, Y.; Kurazume, R. Gait-based person identification using 3D LiDAR and long short-term memory deep networks. Adv. Robot. 2020, 34, 1201–1211.
- Kumar, M.; Singh, N.; Kumar, R.; Goel, S.; Kumar, K. Gait recognition based on vision systems: A systematic survey. J. Vis. Commun. Image Represent. 2021, 75, 103052.
- Liao, R.; Yu, S.; An, W.; Huang, Y. A model-based gait recognition method with body pose and human prior knowledge. Pattern Recognit. 2020, 98, 107069.
- Wu, Z.; Huang, Y.; Wang, L.; Wang, X.; Tan, T. A comprehensive study on cross-view gait based human identification with deep cnns. IEEE Trans. Pattern Anal. Mach. Intell. 2016, 39, 209–226.
- Bukhari, M.; Bajwa, K.B.; Gillani, S.; Maqsood, M.; Durrani, M.Y.; Mehmood, I.; Ugail, H.; Rho, S. An efficient gait recognition method for known and unknown covariate conditions. IEEE Access 2020, 9, 6465–6477.
- Altilio, R.; Rossetti, A.; Fang, Q.; Gu, X.; Panella, M. A comparison of machine learning classifiers for smartphone-based gait analysis. Med. Biol. Eng. Comput. 2021, 59, 535–546.
- Saleh, A.M.; Hamoud, T. Analysis and best parameters selection for person recognition based on gait model using CNN algorithm and image augmentation. J. Big Data 2021, 8, 1–20.
- Liao, R.; An, W.; Li, Z.; Bhattacharyya, S.S. A novel view synthesis approach based on view space covering for gait recognition. Neurocomputing 2021, 453, 13–25.
- Elharrouss, O.; Almaadeed, N.; Al-Maadeed, S.; Bouridane, A. Gait recognition for person re-identification. J. Supercomput. 2021, 77, 3653–3672.
- Zou, Q.; Wang, Y.; Wang, Q.; Zhao, Y.; Li, Q. Deep learning-based gait recognition using smartphones in the wild. IEEE Trans. Inf. Forensics Secur. 2020, 15, 3197–3212.
- Wu, X.; An, W.; Yu, S.; Guo, W.; García, E.B. Spatial-temporal graph attention network for video-based gait recognition. In Proceedings of the Asian Conference on Pattern Recognition, Auckland, New Zealand, 26–29 November 2019; Springer: Cham, Switzerland, 2019; pp. 274–286.
- Guo, H.; Li, B.; Zhang, Y.; Zhang, Y.; Li, W.; Qiao, F.; Rong, X.; Zhou, S. Gait recognition based on the feature extraction of Gabor filter and linear discriminant analysis and improved local coupled extreme learning machine. Math. Probl. Eng. 2020, 2020, 5393058.
- Kececi, A.; Yildirak, A.; Ozyazici, K.; Ayluctarhan, G.; Agbulut, O.; Zincir, I. Implementation of machine learning algorithms for gait recognition. Eng. Sci. Technol. Int. J. 2020, 23, 931–937.
- Zhang, K.; Luo, W.; Ma, L.; Liu, W.; Li, H. Learning joint gait representation via quintuplet loss minimization. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, Long Beach, CA, USA, 15–20 June 2019; pp. 4700–4709.
- Sivarathinabala, M.; Abirami, S. AGRS: Automated gait recognition system in smart environment. J. Intell. Fuzzy Syst. 2019, 36, 2511–2525.
- Saleem, F.; Khan, M.A.; Alhaisoni, M.; Tariq, U.; Armghan, A.; Alenezi, F.; Choi, J.-I.; Kadry, S. Human gait recognition: A single stream optimal deep learning features fusion. Sensors 2021, 21, 7584.
- Yu, S.; Chen, H.; Garcia Reyes, E.B.; Poh, N. Gaitgan: Invariant gait feature extraction using generative adversarial networks. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition Workshops, Honolulu, HI, USA, 21–26 July 2017; pp. 30–37.
- Anbalagan, E.; Anbhazhagan, S.M. Deep learning model using ensemble based approach for walking activity recognition and gait event prediction with grey level co-occurrence matrix. Expert Syst. Appl. 2023, 227, 120337.
- Xia, Y.; Sun, H.; Zhang, B.; Xu, Y.; Ye, Q. Prediction of freezing of gait based on self-supervised pretraining via contrastive learning. Biomed. Signal Process. Control 2024, 89, 105765.
- Guffanti, D.; Brunete, A.; Hernando, M.; Álvarez, D.; Rueda, J.; Navarro, E. Supervised learning for improving the accuracy of robot-mounted 3D camera applied to human gait analysis. Heliyon 2024, 10, e26227.
- Wang, X.; Yan, W.Q. Human gait recognition based on frame-by-frame gait energy images and convolutional long short-term memory. Int. J. Neural Syst. 2020, 30, 1950027.
- Zhao, A.; Li, J.; Ahmed, M. Spidernet: A spiderweb graph neural network for multi-view gait recognition. Knowl.-Based Syst. 2020, 206, 106273.
- CASIA Gait Dataset. Available online: http://www.cbsr.ia.ac.cn/users/szheng/?page_id=71 (accessed on 1 May 2023).
- OU-ISIR Dataset. Available online: http://www.am.sanken.osaka-u.ac.jp/BiometricDB/GaitTM.html (accessed on 10 May 2023).
- Takemura, N.; Makihara, Y.; Muramatsu, D.; Echigo, T.; Yagi, Y. Multi-view large population gait dataset and its performance evaluation for cross-view gait recognition. IPSJ Trans. Comput. Vis. Appl. 2018, 10, 4.
- Makihara, Y.; Suzuki, A.; Muramatsu, D.; Li, X.; Yagi, Y. Joint intensity and spatial metric learning for robust gait recognition. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Honolulu, HI, USA, 21–26 July 2017; pp. 5705–5715.
- Ghosh, A.; Sufian, A.; Sultana, F.; Chakrabarti, A.; De, D. Fundamental concepts of convolutional neural network. In Recent Trends and Advances in Artificial Intelligence and Internet of Things; Springer: Cham, Switzerland, 2020; pp. 519–567.
- Cheng, G.; Guo, W. Rock images classification by using deep convolution neural network. J. Phys. Conf. Ser. 2017, 887, 012089.
- Condori, R.H.M.; Romualdo, L.M.; Bruno, O.M.; de Cerqueira Luz, P.H. Comparison between traditional texture methods and deep learning descriptors for detection of nitrogen deficiency in maize crops. In Proceedings of the 2017 Workshop of Computer Vision (WVC), Natal, Brazil, 30 October–1 November 2017; IEEE: Piscataway, NJ, USA, 2017; pp. 7–12.
- Elmahdy, M.S.; Abdeldayem, S.S.; Yassine, I.A. Low quality dermal image classification using transfer learning. In Proceedings of the 2017 IEEE EMBS International Conference on Biomedical & Health Informatics (BHI), Orlando, FL, USA, 16–19 February 2017; IEEE: Piscataway, NJ, USA, 2017; pp. 373–376.
- Yamashita, R.; Nishio, M.; Do, R.K.G.; Togashi, K. Convolutional neural networks: An overview and application in radiology. Insights into Imaging 2018, 9, 611–629.
- Rao, P.S.; Sahu, G.; Parida, P. Methods for Automatic Gait Recognition: A Review. In Proceedings of the International Conference on Innovations in Bio-Inspired Computing and Applications, Gunupur, India, 16–18 December 2019; Springer: Cham, Switzerland, 2019; pp. 57–65.
- Makihara, Y.; Mannami, H.; Tsuji, A.; Hossain, M.A.; Sugiura, K.; Mori, A.; Yagi, Y. The OU-ISIR gait database comprising the treadmill dataset. IPSJ Trans. Comput. Vis. Appl. 2012, 4, 53–62.
- Xu, C.; Makihara, Y.; Liao, R.; Niitsuma, H.; Li, X.; Yagi, Y.; Lu, J. Real-time gait-based age estimation and gender classification from a single image. In Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, Virtual, 5–9 January 2021; pp. 3460–3470.
- Li, Z.; Xiong, J.; Ye, X. A new gait energy image based on mask processing for pedestrian gait recognition. In Proceedings of the 2019 International Conference on Image and Video Processing, and Artificial Intelligence, Shanghai, China, 23–25 August 2019; International Society for Optics and Photonics: Bellingham, WA, USA, 2019; p. 113212A.
- Connie, T.; Goh, M.K.O.; Teoh, A.B.J. Human gait recognition using localized Grassmann mean representatives with partial least squares regression. Multimed. Tools Appl. 2018, 77, 28457–28482.
- Zheng, S.; Zhang, J.; Huang, K.; He, R.; Tan, T. Robust view transformation model for gait recognition. In Proceedings of the 2011 18th IEEE International Conference on Image Processing, Brussels, Belgium, 11–14 September 2011; IEEE: Piscataway, NJ, USA, 2011; pp. 2073–2076.
- He, R.; Tan, T.; Wang, L. Robust recovery of corrupted low-rank matrix by implicit regularizers. IEEE Trans. Pattern Anal. Mach. Intell. 2013, 36, 770–783.
- Xie, Y.; Ning, L.; Wang, M.; Li, C. Image enhancement based on histogram equalization. J. Phys. Conf. Ser. 2019, 1314, 012161.
- Yang, P.; Song, W.; Zhao, X.; Zheng, R.; Qingge, L. An improved Otsu threshold segmentation algorithm. Int. J. Comput. Sci. Eng. 2020, 22, 146–153.
- Donon, Y.; Kupriyanov, A.; Paringer, R. Image normalization for Blurred Image Matching. CEUR Workshop Proc. 2020, 127–131.
- Raju, S.; Rajan, E. Skin Texture Analysis Using Morphological Dilation and Erosion. Int. J. Pure Appl. Math. 2018, 118, 205–223.
- Švábek, D. Comparison of morphological face filling in image with human-made fill. AIP Conf. Proc. 2018, 2040, 030009.
- Agarap, A.F. Deep learning using rectified linear units (relu). arXiv 2018, arXiv:1803.08375.
- Jie, H.J.; Wanda, P. RunPool: A dynamic pooling layer for convolution neural network. Int. J. Comput. Intell. Syst. 2020, 13, 66–76.
- Yousif, B.B.; Ata, M.M.; Fawzy, N.; Obaya, M. Toward an optimized neutrosophic k-means with genetic algorithm for automatic vehicle license plate recognition (ONKM-AVLPR). IEEE Access 2020, 8, 49285–49312.
- Francies, M.L.; Ata, M.M.; Mohamed, M.A. A robust multiclass 3D object recognition based on modern YOLO deep learning algorithms. Concurr. Comput. Pract. Exp. 2021, 36, e6517.
- Zhang, Z.; Tran, L.; Liu, F.; Liu, X. On learning disentangled representations for gait recognition. IEEE Trans. Pattern Anal. Mach. Intell. 2020, 44, 345–360.
- Ben, X.; Zhang, P.; Lai, Z.; Yan, R.; Zhai, X.; Meng, W. A general tensor representation framework for cross-view gait recognition. Pattern Recognit. 2019, 90, 87–98.
- Carley, C.; Ristani, E.; Tomasi, C. Person re-identification from gait using an autocorrelation network. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops, Long Beach, CA, USA, 15–20 June 2019.
- Chao, H.; He, Y.; Zhang, J.; Feng, J. Gaitset: Regarding gait as a set for cross-view gait recognition. Proc. AAAI Conf. Artif. Intell. 2019, 33, 8126–8133.
Study | Dataset | Goal | Methodology | Challenges |
---|---|---|---|---|
Liao et al. [16] | CASIA [31] OU-ISIR [32] | This study introduced a model for resolving the problem of large-angle intervals. | They introduced a new view synthesis method based on view space coverage. | Does not deal with real-world datasets. |
Omar et al. [17] | CASIA [31] OU-ISIR [32] OU-MVLP [33] | This research proposed a gait authentication model. | They generated GEIs from gait images, estimated the gait angle, and recognized it using a convolutional neural network. | Does not enhance the segmented datasets. |
Xinhui et al. [19] | CASIA [31] OU-ISIR [32] | This paper presented a human gait identification algorithm based on video sequence frames. | They extracted discriminative features in the temporal and spatial domains and recognized them. | Accuracy is very low. |
K. Zhang et al. [22] | CASIA [31] OU-ISIR [32] [34] | This study presented a novel gait authentication model. | They introduced a novel Joint Unique-gait and Cross-gait Network (JUCNet) representation to incorporate the benefits of both approaches. | Complicated system. |
Shiqi et al. [25] | CASIA [31] | This study presented a transform model. | They introduced the GaitGANv2 transform model, which transforms gait images from any viewpoint to a side view and then recognizes them. | Accuracy is very low. |
Data | Image Size | Categories | Images per Category |
---|---|---|---|
CASIA-A | 240 × 320 | 3 | 3500–6000 |
CASIA-B | 11 | 1200–2500 | |
CASIA-C | 4 | 3000–5000 | |
OU-ISIR (A) | 128 × 88 | 9 | 2000–3000 |
OU-ISIR (B) | 8 | 2500–2800 | |
OU-MVLP | 14 | 1300–1500 |
Parameters | Value |
---|---|
Image Size | 150 × 150 |
100 × 100 | |
50 × 50 | |
Mini Batch Size | 32 |
Initial Learning Rate | 0.0005 |
Number of Epochs | 30 |
Optimizer | Adam |
Momentum Factor | 0.5 |
Execution Environment | GPU |
CNN Initial Weights | ImageNet |
Activation Function | SoftMax (final dense layer) |
Parameters | Value |
---|---|
Image Size | LeNet = 32 × 32 |
AlexNet = 227 × 227 | |
VGGnet = 224 × 224 | |
Inception-v3 = 224 × 224 | |
ResNet = 224 × 224 | |
Xception = 224 × 224 | |
Mini Batch Size | 32 |
Initial Learning Rate | 0.0005 |
Number of Epochs | 30 |
Momentum Factor | 0.5 |
Execution Environment | GPU |
CNN Initial Weights | ImageNet |
Activation Function | SoftMax (final dense layer) |
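The optimizer settings shared by the two tables above (Adam, initial learning rate 0.0005, momentum factor 0.5) can be illustrated with a single hand-rolled Adam update in NumPy. Mapping the tables' "Momentum Factor" to Adam's beta1 is an assumption on our part, as is the choice of beta2 and epsilon.

```python
import numpy as np

lr, beta1, beta2, eps = 5e-4, 0.5, 0.999, 1e-7  # lr = 0.0005, momentum factor = 0.5

def adam_step(w, grad, m, v, t):
    """One Adam update with the tabulated learning rate and momentum factor."""
    m = beta1 * m + (1 - beta1) * grad           # first-moment (momentum) estimate
    v = beta2 * v + (1 - beta2) * grad ** 2      # second-moment estimate
    m_hat = m / (1 - beta1 ** t)                 # bias correction
    v_hat = v / (1 - beta2 ** t)
    return w - lr * m_hat / (np.sqrt(v_hat) + eps), m, v

w = np.zeros(3)
m = v = np.zeros(3)
grad = np.array([1.0, -1.0, 0.5])
w, m, v = adam_step(w, grad, m, v, t=1)
print(w)  # on step 1, each weight moves by ~lr against its gradient's sign
```

Because Adam normalizes by the gradient magnitude, the very first step has size approximately equal to the learning rate regardless of gradient scale, which makes the 0.0005 setting easy to sanity-check.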
Model_1 | Sensitivity | Recall | F-Score | Specificity | FNR (%) | Training Time |
---|---|---|---|---|---|---|
LeNet | 0.926 | 0.91 | 0.92 | 0.992 | 7.73 | 1 min 39 s |

Model_2 | Sensitivity | Recall | F-Score | Specificity | FNR (%) | Training Time |
---|---|---|---|---|---|---|
AlexNet | 0.994 | 0.99 | 0.99 | 0.999 | 0.526 | 40 min 26 s |

Model_3 | Sensitivity | Recall | F-Score | Specificity | FNR (%) | Training Time |
---|---|---|---|---|---|---|
VggNet | 0.987 | 0.99 | 0.98 | 0.998 | 1.21 | 41 min 26 s |

Model_4 | Sensitivity | Recall | F-Score | Specificity | FNR (%) | Training Time |
---|---|---|---|---|---|---|
Inception-v3 | 0.968 | 0.96 | 0.96 | 0.996 | 3.1 | 37 min 36 s |

Model_5 | Sensitivity | Recall | F-Score | Specificity | FNR (%) | Training Time |
---|---|---|---|---|---|---|
ResNet50 | 0.987 | 0.99 | 0.98 | 0.998 | 1.24 | 46 min 17 s |

Model_6 | Sensitivity | Recall | F-Score | Specificity | FNR (%) | Training Time |
---|---|---|---|---|---|---|
Xception | 0.964 | 0.96 | 0.96 | 0.996 | 3.56 | 1 h 11 min 23 s |
Dataset Name | Accuracy (%) | Precision (%) | Recall (%) | F-Score | Specificity (%) | FNR (%) | Training Time |
---|---|---|---|---|---|---|---|
CASIA (A) | 99.6 | 99.4 | 99 | 0.99 | 99.7 | 0.55 | 3 min 51 s |
CASIA (B) | 99.9 | 99.6 | 100 | 0.99 | 99.9 | 0.36 | 3 min 25 s |
CASIA (C) | 95.3 | 90.7 | 90 | 0.907 | 96.9 | 9.25 | 4 min 14 s |
OU-ISIR A | 97.4 | 88.7 | 87 | 0.89 | 98.5 | 11.2 | 3 min 30 s |
OU-ISIR B | 99.9 | 99.8 | 100 | 0.99 | 99.9 | 0.16 | 3 min 27 s |
OU-MVLP | 99.8 | 98.6 | 98 | 0.98 | 99.8 | 1.31 | 3 min 13 s |
Class | LeNet | AlexNet | VggNet | Inception | ResNet | Xception | Proposed Model |
---|---|---|---|---|---|---|---|
000 | 97.51 | 100 | 100 | 100 | 99.75 | 100 | 98.93 |
018 | 97.99 | 100 | 100 | 100 | 100 | 99.66 | 100 |
036 | 96.90 | 100 | 100 | 99.60 | 100 | 99.65 | 99.65 |
054 | 95.27 | 99.62 | 100 | 100 | 99.54 | 99.62 | 100 |
072 | 86.19 | 98.89 | 98.90 | 97.80 | 99.26 | 99.45 | 100 |
090 | 74.83 | 94.87 | 96.08 | 98.60 | 93.52 | 98.67 | 98.03 |
108 | 81.87 | 99.39 | 93.71 | 100 | 93.56 | 99.40 | 99.40 |
126 | 85.58 | 100 | 98.55 | 99.50 | 99.69 | 100 | 100 |
144 | 94.55 | 96.85 | 97.69 | 98.10 | 99.39 | 100 | 100 |
162 | 98.38 | 100 | 99.49 | 100 | 98.02 | 100 | 100 |
180 | 96.35 | 99.53 | 99.07 | 100 | 100 | 100 | 99.53 |
Average | 91.40 | 99.01 | 98.40 | 99.40 | 98.40 | 99.00 | 99.60 |
Model | Accuracy | Precision | Recall | F1-Score | Specificity | FNR (%) | Training Time |
---|---|---|---|---|---|---|---|
LeNet | 0.91 | 0.93 | 0.91 | 0.92 | 0.99 | 7.73 | 1 min 39 s |
AlexNet | 0.99 | 0.99 | 0.99 | 0.99 | 1.00 | 0.53 | 20 min 26 s |
VggNet | 0.98 | 0.99 | 0.99 | 0.98 | 1.00 | 1.21 | 41 min 26 s |
Inception | 0.99 | 0.97 | 0.96 | 0.96 | 0.99 | 3.10 | 37 min 36 s |
ResNet | 0.98 | 0.99 | 0.99 | 0.98 | 0.99 | 1.24 | 46 min 17 s |
Xception | 0.99 | 0.96 | 0.96 | 0.96 | 0.99 | 3.56 | 1 h 11 min 23 s |
Proposed | 0.99 | 0.99 | 1.00 | 0.99 | 1.00 | 0.40 | 3 min 25 s |
Model | 0° | 18° | 36° | 54° | 72° | 90° | 108° | 126° | 144° | 162° | 180° | Average |
---|---|---|---|---|---|---|---|---|---|---|---|---|
[56] | 0.93 | 0.92 | 0.90 | 0.92 | 0.87 | 0.95 | 0.94 | 0.95 | 0.92 | 0.90 | 0.90 | 0.92 |
[11] | 0.95 | 0.96 | 0.95 | 0.96 | 0.95 | 0.97 | 0.97 | 0.94 | 0.96 | 0.97 | 0.97 | 0.96 |
[57] | 0.43 | 0.78 | 0.99 | - | 0.98 | 0.82 | 0.77 | 0.76 | 0.57 | 0.42 | 0.35 | 0.69 |
[17] | 0.94 | 0.95 | 0.97 | 0.97 | 0.98 | 0.98 | 0.98 | 0.98 | 0.97 | 0.95 | 0.93 | 0.96 |
Proposed | 0.98 | 1.00 | 0.99 | 1.00 | 1.00 | 0.98 | 0.99 | 1.00 | 1.00 | 1.00 | 0.99 | 0.99 |
Model | 0° | 15° | 30° | 45° | 60° | 75° | 90° | 180° | 195° | 210° | 225° | 240° | 255° | 270° |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
[58] | 0.79 | 0.89 | 0.93 | 0.95 | 0.95 | 0.95 | 0.95 | 0.96 | 0.90 | 0.95 | 0.95 | 0.93 | 0.94 | 0.94 |
[59] | 0.79 | 0.87 | 0.89 | 0.90 | 0.88 | 0.88 | 0.87 | 0.81 | 0.86 | 0.89 | 0.89 | 0.87 | 0.87 | 0.86 |
[17] | 0.93 | 0.95 | 0.95 | 0.97 | 0.98 | 0.97 | 0.98 | 0.92 | 0.94 | 0.95 | 0.95 | 0.97 | 0.97 | 0.98 |
Proposed | 0.96 | 0.99 | 0.99 | 0.99 | 0.98 | 0.98 | 0.99 | 0.95 | 0.99 | 0.99 | 0.98 | 0.98 | 0.98 | 0.98 |
Study | Dataset Used | Success Accuracy |
---|---|---|
Liao et al. [16] | CASIA [31] OU-ISIR [32] | The output features enhanced the gait recognition |
Omar et al. [17] | CASIA [31] OU-ISIR [32] OU-MVLP [33] | |
Xinhui et al. [19] | CASIA [31] OU-ISIR [32] | |
K. Zhang et al. [22] | CASIA [31] OU-ISIR [32] [34] | |
Shiqi et al. [25] | CASIA [31] | 62.8% |
Proposed | CASIA (A) CASIA (B) OU-ISIR (A) OU-ISIR (B) OU-MVLP | |
Yousef, R.N.; Ata, M.M.; Rashed, A.E.E.; Badawy, M.; Elhosseini, M.A.; Bahgat, W.M. A Novel Multi-Scaled Deep Convolutional Structure for Punctilious Human Gait Authentication. Biomimetics 2024, 9, 364. https://doi.org/10.3390/biomimetics9060364