Search Results (12)

Search Parameters:
Keywords = ear authentication

25 pages, 15271 KB  
Article
Symmetry Alignment–Feature Interaction Network for Human Ear Similarity Detection and Authentication
by Li Yuan, He-Bin Zhou, Jiang-Yun Li, Li Liu, Xiao-Chai Gu and Ya-Nan Zhao
Symmetry 2025, 17(5), 654; https://doi.org/10.3390/sym17050654 - 26 Apr 2025
Cited by 1 | Viewed by 874
Abstract
In the context of ear-based biometric identity authentication, symmetry between the left and right ears emerges as a pivotal factor, particularly when registration involves one ear and authentication utilizes its contralateral counterpart. The extent to which bilateral ear symmetry supports consistent identity verification warrants significant investigation. This study addresses this challenge by proposing a novel framework, the Symmetry Alignment–Feature Interaction Network, designed to enhance authentication robustness. The proposed network incorporates a Symmetry Alignment Module, leveraging differentiable geometric alignment and a dual-attention mechanism to achieve precise feature correspondence between the left and right ears, thereby mitigating the robustness deficiencies of conventional methods under pose variations. Additionally, a Feature Interaction Network is introduced to amplify nonlinear interdependencies between binaural features, employing a difference–product dual-path architecture to enhance feature discriminability through Dual-Path Feature Interaction and Similarity Fusion. Experimental validation on a dataset from the University of Science and Technology of Beijing demonstrates that the proposed method achieves a similarity detection accuracy of 99.03% (a 9.11% improvement over the baseline ResNet18) and an F1 score of 0.9252 in identity authentication tasks. Ablation experiments further confirm the efficacy of the Symmetry Alignment Module, which reduces the false positive rate by 3.05%, and of the Feature Interaction Network, which shrinks the standard deviation of the similarity distributions of positive and negative samples by 67%. A multi-task loss function, governed by a dynamic weighting mechanism, effectively balances the feature learning objectives. This work establishes a new paradigm for the authentication of biometric features with symmetry, integrating symmetry modeling with Dual-Path Feature Interaction and Similarity Fusion to advance the precision of ear authentication.
(This article belongs to the Special Issue Symmetry Applied in Biometrics Technology)
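The difference–product dual-path interaction described in this abstract can be illustrated with a short sketch. The following is a minimal, hypothetical PyTorch module, not the authors' released implementation: it assumes each ear image has already been encoded into a fixed-length embedding (e.g., by a ResNet18 backbone), forms an element-wise difference path and product path, and fuses them into a similarity score.

```python
import torch
import torch.nn as nn

class DualPathInteraction(nn.Module):
    """Illustrative difference-product feature interaction head.

    Assumes left/right ear images have already been mapped to
    fixed-length embeddings by a backbone (e.g., ResNet18).
    """
    def __init__(self, feat_dim: int = 512, hidden: int = 256):
        super().__init__()
        # The two paths (difference and product) are concatenated and
        # fused by a small MLP that outputs a similarity score.
        self.fusion = nn.Sequential(
            nn.Linear(2 * feat_dim, hidden),
            nn.ReLU(inplace=True),
            nn.Linear(hidden, 1),
        )

    def forward(self, f_left: torch.Tensor, f_right: torch.Tensor) -> torch.Tensor:
        diff = torch.abs(f_left - f_right)   # difference path
        prod = f_left * f_right              # product path
        fused = torch.cat([diff, prod], dim=-1)
        return torch.sigmoid(self.fusion(fused)).squeeze(-1)

# Usage: similarity in [0, 1] for a batch of left/right embedding pairs.
head = DualPathInteraction()
sim = head(torch.randn(4, 512), torch.randn(4, 512))
```

The feature dimensions and MLP sizes here are placeholders; the paper's alignment module, attention mechanism, and multi-task loss are not reproduced.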

19 pages, 7917 KB  
Article
Tekt3 Safeguards Proper Functions and Morphology of Neuromast Hair Bundles
by Dongmei Su, Sirun Lu, Ling Zheng and Dong Liu
Int. J. Mol. Sci. 2025, 26(7), 3115; https://doi.org/10.3390/ijms26073115 - 28 Mar 2025
Viewed by 969
Abstract
The inner ear and/or lateral line are responsible for hearing and balance in vertebrates. The otic sensory hair cells (HCs) employ cilium organelles, namely stereocilia and/or kinocilia, to mediate the transduction of mechanical stimuli into electrical signals. Tektins (Tekts) are known as cilium microtubule stabilizers and inner-space fillers, and four Tekt(1-4)-encoding genes have been identified in zebrafish HCs, but the subcellular location of Tekts in HCs remains unknown. In the present study, we first found that tekt3 is expressed in the inner ear and lateral line neuromasts. Antibody staining revealed that Tekt3 is present in neuromast and utricular HCs but absent in the saccule, the authentic hearing end-organ of zebrafish, and in the cristae of the semicircular canals. Furthermore, Tekt3 was enriched at the apical side of neuromast and utricular HCs, mainly in the cytosol. A similar subcellular distribution of Tekt3 was also evident in the outer HCs of the mature mouse cochlea, which are not directly linked to the hearing sense. However, only neuromast HCs exhibited morphological defects of the kinocilia in the tekt3 mutant. The disrupted or distorted HC kinocilia of mutant neuromasts ultimately resulted in slower vital dye intake, delayed HC regeneration after neomycin treatment, and a reduced startle response to vibration stimulation. All functional defects of the tekt3 mutant were largely rescued by wild-type tekt3 mRNA. Our study thus suggests that zebrafish Tekt3 maintains the integrity and function of neuromast kinocilia against surrounding, persistent low-frequency noise, perhaps via the intracellular distribution of Tekt3. Nevertheless, TEKT3/Tekt3 could be used to clarify HC sub-types in both zebrafish and mice, highlighting the non-hearing HCs.
(This article belongs to the Section Molecular Neurobiology)

17 pages, 3652 KB  
Article
Toward Personal Identification Using Multi-Angle-Captured Ear Images: A Feasibility Study
by Ryuhi Fukuda, Yuto Yokoyanagi, Chotirose Prathom and Yoshifumi Okada
Appl. Sci. 2025, 15(6), 3329; https://doi.org/10.3390/app15063329 - 18 Mar 2025
Viewed by 926
Abstract
The ear is an effective biometric feature for personal identification. Although numerous studies have attempted personal identification using frontal-view images of the ear, only a few have attempted it using multi-angle-captured ear images. To expand the extant literature and facilitate future biometric authentication technologies, we explore the feasibility of personal identification using multidirectionally captured ear images and attempt to identify the direction-independent feature points that contribute to the identification process. First, we construct a convolutional neural network model for personal identification based on multi-angle-captured ear images, after which we conduct identification experiments. We obtained high identification accuracies, exceeding 0.980 for all the evaluation metrics, confirming the feasibility of personal identification using multi-angle-captured ear images. Further, we performed Gradient-weighted Class Activation Mapping (Grad-CAM) to visualize the feature points that contribute to the identification process, identifying the helix region of the ear as a key feature point. Notably, the contribution ratios for ear images in which the inner ear was visible and not visible were 97.5% and 56.0%, respectively. These findings indicate the feasibility of implementing personal identification using multi-angle-captured ear images in applications such as surveillance and access control systems, and should promote the development of future biometric authentication technologies.
(This article belongs to the Special Issue Applications of Signal Analysis in Biometrics)
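As a hedged illustration of the visualization step, the sketch below applies standard Grad-CAM to a generic torchvision classifier. The model, layer choice, and input are placeholders, not the network or weights used in the paper.

```python
import torch
import torch.nn.functional as F
from torchvision import models

# Minimal Grad-CAM sketch (assumes a torchvision ResNet-style model;
# the paper's actual architecture and weights are not reproduced here).
model = models.resnet18(weights=None)
model.eval()

activations, gradients = {}, {}
layer = model.layer4  # last convolutional block (placeholder choice)
layer.register_forward_hook(lambda m, i, o: activations.update(a=o))
layer.register_full_backward_hook(lambda m, gi, go: gradients.update(g=go[0]))

x = torch.randn(1, 3, 224, 224, requires_grad=True)  # stand-in ear image
logits = model(x)
logits[0, logits.argmax()].backward()  # gradient of the predicted class score

# Channel weights = global-average-pooled gradients; CAM = weighted sum of activations.
weights = gradients["g"].mean(dim=(2, 3), keepdim=True)
cam = F.relu((weights * activations["a"]).sum(dim=1, keepdim=True))
cam = F.interpolate(cam, size=x.shape[-2:], mode="bilinear", align_corners=False)
cam = (cam - cam.min()) / (cam.max() - cam.min() + 1e-8)  # normalized heatmap
```

Overlaying the normalized heatmap on the input image is what reveals which ear regions (e.g., the helix) drive the prediction.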

27 pages, 5593 KB  
Article
Ear-Touch-Based Mobile User Authentication
by Jalil Nourmohammadi Khiarak, Samaneh Mazaheri and Rohollah Moosavi Tayebi
Mathematics 2024, 12(5), 752; https://doi.org/10.3390/math12050752 - 2 Mar 2024
Viewed by 2342
Abstract
Mobile devices have become integral to daily life, necessitating robust user authentication methods to safeguard personal information. In this study, we present a new approach to mobile user authentication utilizing ear-touch interactions. Our novel system employs an analytical algorithm to authenticate users based on features extracted from ear-touch images. We conducted extensive evaluations on a dataset comprising ear-touch images from 92 subjects, achieving an average equal error rate of 0.04, indicative of high accuracy and reliability. Our results suggest that ear-touch-based authentication is a feasible and effective method for securing mobile devices.
(This article belongs to the Special Issue New Advances and Applications in Image Processing and Computer Vision)
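Since this entry reports an average equal error rate (EER), a small sketch of how an EER can be estimated from genuine and impostor match scores may help. The scores below are synthetic placeholders, not the paper's data.

```python
import numpy as np
from sklearn.metrics import roc_curve

# Synthetic match scores: higher = more likely the same user (placeholder data).
rng = np.random.default_rng(0)
genuine = rng.normal(0.8, 0.1, 500)    # same-user ear-touch comparisons
impostor = rng.normal(0.4, 0.1, 5000)  # different-user comparisons

labels = np.concatenate([np.ones_like(genuine), np.zeros_like(impostor)])
scores = np.concatenate([genuine, impostor])

# EER = operating point where the false accept rate equals the false reject rate.
far, tpr, _ = roc_curve(labels, scores)
frr = 1 - tpr
idx = np.argmin(np.abs(far - frr))
eer = (far[idx] + frr[idx]) / 2
print(f"EER ~ {eer:.4f}")
```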

13 pages, 7819 KB  
Article
Attempts to Communicate the Transcendent in Contemporary Art: An Artist’s Point of View
by Ivana Gagić Kičinbači
Religions 2023, 14(10), 1279; https://doi.org/10.3390/rel14101279 - 10 Oct 2023
Cited by 2 | Viewed by 4783
Abstract
The article investigates attempts in contemporary art to convey transcendent realities through the lens of the artist. This study examines three key moments of the artistic creative process: intuition, asceticism, and silence. The article assesses silence or stillness as a specific mental state that enables us to evaluate reality with a heightened awareness of our own length, fragility, and the infinite that awaits us on the other side of existence. In artistic practice, silence is a prerequisite for authenticity, believability, and creativity. The article explores the possibility of uncovering and revealing the transcendent via matter through the author's own artistic inquiry. It discusses art as a master of transforming material, psychological, and physical facts into shapes that hint at what is beyond what the eye or ear can perceive. Art can lead to the sublime and open the mind, eyes, and heart to that which is beyond. The expression of the transcendent through artistic action is observed by analyzing the relationship between the artist and intuitive knowledge in the artistic practices of contemporary and modern artists. Along with the qualitative method of narrative research, research methodologies specific to the artistic field (visual arts) were predominantly used, expanding the boundaries of qualitative research by taking a holistic approach closer to the very nature of the artistic process and allowing for a more complete understanding of the process itself.
(This article belongs to the Special Issue Religious Education and Via Pulchritudinis)

17 pages, 38646 KB  
Article
Lightweight Human Ear Recognition Based on Attention Mechanism and Feature Fusion
by Yanmin Lei, Dong Pan, Zhibin Feng and Junru Qian
Appl. Sci. 2023, 13(14), 8441; https://doi.org/10.3390/app13148441 - 21 Jul 2023
Cited by 1 | Viewed by 1816
Abstract
With the development of deep learning technology, more and more researchers are interested in ear recognition. Human ear recognition is a biometric identification technology based on human ear feature information, and it is often used in authentication and intelligent monitoring applications. For ear recognition to be useful in practice, real-time performance and accuracy have always been important and challenging topics. Therefore, focusing on the problem that the mAP@0.5 value of the YOLOv5s-MG method is lower than that of the YOLOv5s method on the EarVN1.0 human ear dataset, which contains low-resolution, small-target images with rotation, brightness changes, and occlusions such as earrings and glasses, a lightweight ear recognition method based on an attention mechanism and feature fusion is proposed. The method mainly comprises the following steps. First, the CBAM attention mechanism is added to the connection between the backbone and the neck of the lightweight ear recognition network YOLOv5s-MG, constructing the YOLOv5s-MG-CBAM ear recognition network and improving accuracy. Second, an SPPF layer and cross-regional feature fusion are added to construct the YOLOv5s-MG-CBAM-F ear recognition method, which further improves accuracy. Three distinctive human ear datasets, namely CCU-DE, USTB, and EarVN1.0, are used to evaluate the proposed method. An experimental comparison of seven methods (YOLOv5s-MG-CBAM-F, YOLOv5s-MG-SE-F, YOLOv5s-MG-CA-F, YOLOv5s-MG-ECA-F, YOLOv5s, YOLOv7, and YOLOv5s-MG) on the EarVN1.0 dataset shows that the YOLOv5s-MG-CBAM-F method achieves the highest ear recognition rate. Its mAP@0.5 on EarVN1.0 is 91.9%, which is 6.4% higher than that of YOLOv5s-MG and 3.7% higher than that of YOLOv5s. Its parameter count, GFLOPs, model size, and per-image inference time on EarVN1.0 are 5.2 M, 8.3 G, 10.9 MB, and 16.4 ms, respectively, which are larger than those of YOLOv5s-MG but smaller than those of YOLOv5s. The quantitative results show that the proposed method improves the ear recognition rate while satisfying real-time requirements, making it especially suitable for applications where high ear recognition rates are required.
(This article belongs to the Special Issue Deep Learning in Object Detection and Tracking)
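For readers unfamiliar with CBAM, the attention block this entry inserts between backbone and neck, here is a minimal, self-contained PyTorch sketch of the standard CBAM module (channel attention followed by spatial attention). It follows the widely used CBAM formulation rather than the authors' specific YOLOv5s-MG integration, and the feature-map sizes are placeholders.

```python
import torch
import torch.nn as nn

class CBAM(nn.Module):
    """Convolutional Block Attention Module: channel then spatial attention."""
    def __init__(self, channels: int, reduction: int = 16, kernel_size: int = 7):
        super().__init__()
        # Channel attention: shared MLP over average- and max-pooled descriptors.
        self.mlp = nn.Sequential(
            nn.Conv2d(channels, channels // reduction, 1, bias=False),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels // reduction, channels, 1, bias=False),
        )
        # Spatial attention: 7x7 conv over channel-wise average and max maps.
        self.spatial = nn.Conv2d(2, 1, kernel_size, padding=kernel_size // 2, bias=False)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        avg = self.mlp(torch.mean(x, dim=(2, 3), keepdim=True))
        mx = self.mlp(torch.amax(x, dim=(2, 3), keepdim=True))
        x = x * torch.sigmoid(avg + mx)              # apply channel attention
        avg_map = torch.mean(x, dim=1, keepdim=True)
        max_map, _ = torch.max(x, dim=1, keepdim=True)
        x = x * torch.sigmoid(self.spatial(torch.cat([avg_map, max_map], dim=1)))
        return x                                     # spatial attention applied

# Usage: drop-in refinement of a feature map, e.g., between backbone and neck.
feat = torch.randn(1, 256, 40, 40)
refined = CBAM(256)(feat)
```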

27 pages, 5361 KB  
Article
Biometric-Based Key Generation and User Authentication Using Acoustic Characteristics of the Outer Ear and a Network of Correlation Neurons
by Alexey Sulavko
Sensors 2022, 22(23), 9551; https://doi.org/10.3390/s22239551 - 6 Dec 2022
Cited by 9 | Viewed by 2991
Abstract
Trustworthy AI applications such as biometric authentication must be implemented in a secure manner so that a malefactor is not able to take advantage of the knowledge and use it to make decisions. The goal of the present work is to increase the reliability of biometric-based key generation, which is used for remote authentication with protection of biometric templates. Ear canal echograms were used as biometric images. Multilayer convolutional neural networks of the autoencoder type were used to extract features from the echograms. A new class of neurons (correlation neurons), which analyzes correlations between features instead of feature values, is proposed. A neuro-extractor model was developed to associate a feature vector with a cryptographic key or user password. An open data set of ear canal echograms was used to test the performance of the proposed model. The following indicators were achieved: EER = 0.0238 (FRR = 0.093, FAR < 0.001) with a key length of 8192 bits. The proposed model is superior to known analogues in terms of key length and the probability of erroneous decisions. The ear canal parameters are hidden from direct observation and photography; this creates additional difficulties for the synthesis of adversarial examples.
(This article belongs to the Special Issue Biometrics Recognition Based on Sensor Technology)
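The idea of analyzing correlations between features rather than the feature values themselves can be illustrated loosely as follows. This is a hypothetical interpretation for intuition only: it binarizes the signs of pairwise feature correlations over an enrollment set. The paper's correlation-neuron and neuro-extractor models are not reproduced here, and the array sizes are arbitrary.

```python
import numpy as np

# Hypothetical illustration only: derive a binary template from correlations
# *between* features rather than from the feature values themselves.
rng = np.random.default_rng(1)
enroll_feats = rng.normal(size=(20, 64))   # 20 echogram feature vectors, 64 features

corr = np.corrcoef(enroll_feats, rowvar=False)    # 64x64 feature-feature correlations
upper = corr[np.triu_indices_from(corr, k=1)]     # unique feature pairs
template_bits = (upper > 0).astype(np.uint8)      # binarize the correlation signs
print(len(template_bits), "bits")                 # 64*63/2 = 2016 bits
```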

14 pages, 18640 KB  
Article
Biometric Security: A Novel Ear Recognition Approach Using a 3D Morphable Ear Model
by Md Mursalin, Mohiuddin Ahmed and Paul Haskell-Dowland
Sensors 2022, 22(22), 8988; https://doi.org/10.3390/s22228988 - 20 Nov 2022
Cited by 8 | Viewed by 3717
Abstract
Biometrics is a critical component of cybersecurity that identifies persons by verifying their behavioral and physical traits. In biometric-based authentication, each individual can be correctly recognized based on intrinsic behavioral or physical features such as the face, fingerprint, iris, and ears. This work proposes a novel approach for human identification using 3D ear images. In conventional methods, the probe image is usually registered against each gallery image using computationally heavy registration algorithms, making recognition practically infeasible due to its time-consuming nature. Therefore, this work proposes a recognition pipeline that avoids one-to-one registration between probe and gallery. First, a deep learning-based algorithm is used for ear detection in 3D side-face images. Second, a statistical ear model known as the 3D morphable ear model (3DMEM) is constructed for use as a feature extractor from the detected ear images. Finally, a novel recognition algorithm named you morph once (YMO) is proposed for human recognition; it reduces computational time by eliminating one-to-one registration between probe and gallery and only calculates the distance between the parameters stored in the gallery and those of the probe. The experimental results show the significance of the proposed method for real-time applications.
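The matching step described above (comparing fitted model parameters instead of registering each probe against each gallery mesh) can be sketched as a nearest-neighbour search in coefficient space. The dimensions and random data below are placeholders, not the paper's 3DMEM parameters.

```python
import numpy as np

# Placeholder 3DMEM-style coefficient vectors: one per enrolled subject, one probe.
rng = np.random.default_rng(2)
gallery = rng.normal(size=(100, 50))   # 100 subjects, 50 shape coefficients each
probe = rng.normal(size=50)            # coefficients fitted to the probe ear

# Matching without pairwise registration: distance in parameter space only.
dists = np.linalg.norm(gallery - probe, axis=1)
best_id = int(np.argmin(dists))
print(f"closest gallery identity: {best_id}, distance: {dists[best_id]:.3f}")
```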

20 pages, 7083 KB  
Article
MetaEar: Imperceptible Acoustic Side Channel Continuous Authentication Based on ERTF
by Zhuo Chang, Lin Wang, Binbin Li and Wenyuan Liu
Electronics 2022, 11(20), 3401; https://doi.org/10.3390/electronics11203401 - 20 Oct 2022
Cited by 7 | Viewed by 3354
Abstract
With the development of ubiquitous mobile devices, biometric authentication has received much attention from researchers. For immersive experiences in AR (augmented reality), convenient continuous biometric authentication technologies are required to secure electronic assets and transactions made through head-mounted devices. Existing fingerprint or face authentication methods are vulnerable to spoofing and replay attacks. In this paper, we propose MetaEar, which harnesses head-mounted devices to send FMCW (Frequency-Modulated Continuous Wave) ultrasonic signals for continuous biometric authentication of the human ear. Leveraging channel estimation theory, the CIR (channel impulse response) is used to model the physiological structure of the human ear as the Ear Related Transfer Function (ERTF), which extracts unique representations of the human ear's intrinsic and extrinsic biometric features. To overcome the data dependency of deep learning and improve deployability on mobile devices, we use a lightweight learning approach for classification and authentication. Our implementation and evaluation show that the average accuracy reaches about 96% in different scenarios with small amounts of data. MetaEar thus enables deployable continuous authentication for immersive applications and is more sensitive to replay and impersonation attacks.
(This article belongs to the Special Issue Wearable Sensing Devices and Technology)
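To make the sensing idea concrete, here is a generic sketch of FMCW-style probing: emit a linear chirp, record the response, and estimate an echo profile by matched filtering (cross-correlation). It illustrates the general technique only; the chirp parameters are assumptions, and the paper's ERTF estimation and classifier are not reproduced.

```python
import numpy as np
from scipy.signal import chirp, fftconvolve

fs = 48_000                      # assumed audio sampling rate (placeholder)
t = np.arange(0, 0.02, 1 / fs)   # 20 ms probe
tx = chirp(t, f0=16_000, t1=t[-1], f1=22_000, method="linear")  # near-ultrasonic sweep

# Simulated received signal: two attenuated, delayed echoes plus noise
# (a stand-in for reflections from the ear and ear canal).
rx = np.zeros_like(tx)
rx[30:] += 0.6 * tx[:-30]
rx[75:] += 0.3 * tx[:-75]
rx += 0.01 * np.random.default_rng(3).normal(size=rx.shape)

# Matched filtering: cross-correlate rx with tx to get an echo profile,
# a rough proxy for the channel response used to derive ear features.
echo_profile = fftconvolve(rx, tx[::-1], mode="full")[len(tx) - 1:]
print("strongest echo at sample", int(np.argmax(np.abs(echo_profile))))
```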

11 pages, 1484 KB  
Article
Bilateral Ear Acoustic Authentication: A Biometric Authentication System Using Both Ears and a Special Earphone
by Masaki Yasuhara, Isao Nambu and Shohei Yano
Appl. Sci. 2022, 12(6), 3167; https://doi.org/10.3390/app12063167 - 20 Mar 2022
Cited by 6 | Viewed by 4171
Abstract
In existing biometric authentication methods, the user must perform an authentication operation such as placing a finger on a scanner or facing a camera. With ear acoustic authentication, the acoustic characteristics of the ear canal can be used as biometric information, so a person wearing earphones does not need to perform any explicit authentication operation. Existing studies that use the acoustic characteristics of the ear canal as biometric information measure the characteristics of only one ear, even though these characteristics can be measured from both ears. Hence, we propose a new acoustic authentication method based on measuring the acoustic characteristics of the ear canal from both ears. The acoustic characteristics of the ear canals of 52 subjects were measured. Comparing the acoustic characteristics of the left and right ear canals, a difference in the signal between the left and right ears was observed. To evaluate the authentication accuracy, we calculated standard biometric evaluation indices, the equal error rate (EER) and the area under the curve (AUC). The EER for bilateral ear acoustic authentication using signals from both ears was 0.39%, lower than that for a single ear, and the AUC was 0.0016 higher. The use of bilateral signals for ear acoustic authentication was therefore shown to be effective in improving authentication accuracy.

13 pages, 1088 KB  
Article
Object Selection as a Biometric
by Joyce Tlhoolebe and Bin Dai
Entropy 2022, 24(2), 148; https://doi.org/10.3390/e24020148 - 19 Jan 2022
Cited by 3 | Viewed by 2110
Abstract
The use of eye movement as a biometric is a new biometric technology that now competes with many other technologies such as fingerprint, face, and ear recognition. Problems encountered with traditional authentication methods such as passwords and tokens have led to the emergence of biometric authentication techniques, which identify people by their physical or behavioral characteristics. In biometric authentication, feature extraction is a vital stage, although extracted features that are not very useful may degrade the performance of the biometric system. In this study, object selection using eye movement is proposed as a technique for biometric authentication. To achieve this, an experiment for collecting eye movement data for biometric purposes was conducted: eye movement data were measured with eye-tracking equipment from twenty participants while they chose and found still objects. The model proposed in this paper creates a template from these observations, aiming to assign a unique binary signature to each enrolled user. Error correction is used when authenticating a user who submits an eye movement sample. The XORed biometric template is further secured by multiplication with an identity matrix of size n × n. The results show positive feedback for this model, as individuals can be uniquely identified by their eye movement features, and the use of the Hamming distance as an additional verification helper increased model performance significantly. The proposed scheme achieves a 37% FRR and a 27% FAR over 400 trials, which are promising results for future improvements. A sketch of the XOR-and-Hamming-distance idea is given below.
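The following is a hedged sketch of the template-protection idea mentioned in the abstract: XOR-masking a binary biometric signature with a key and verifying a fresh, noisy sample with a Hamming-distance threshold. The bit length, noise rate, and threshold are illustrative assumptions, not the paper's exact scheme.

```python
import numpy as np

rng = np.random.default_rng(4)
N, THRESHOLD = 256, 32   # assumed signature length and Hamming-distance threshold

# Enrollment: binarize eye-movement features into a signature, then XOR it
# with a secret key so that only the masked template is stored.
enrolled = rng.integers(0, 2, N, dtype=np.uint8)       # stand-in binary signature
key = rng.integers(0, 2, N, dtype=np.uint8)
stored_template = np.bitwise_xor(enrolled, key)

# Verification: a fresh sample of the same user differs in a few noisy bits.
flips = (rng.random(N) < 0.05).astype(np.uint8)
probe = np.bitwise_xor(enrolled, flips)

recovered = np.bitwise_xor(stored_template, key)        # unmask the template
hamming = int(np.count_nonzero(probe != recovered))     # error-tolerant comparison
print("accept" if hamming <= THRESHOLD else "reject", f"(Hamming distance {hamming})")
```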

19 pages, 1074 KB  
Article
Combining Multiple Biometric Traits Using Asymmetric Aggregation Operators for Improved Person Recognition
by Abderrahmane Herbadji, Zahid Akhtar, Kamran Siddique, Noubeil Guermat, Lahcene Ziet, Mohamed Cheniti and Khan Muhammad
Symmetry 2020, 12(3), 444; https://doi.org/10.3390/sym12030444 - 10 Mar 2020
Cited by 12 | Viewed by 4379
Abstract
Biometrics is a scientific technology for recognizing a person using their physical, behavioral, or chemical attributes. It is nowadays widely used in several daily applications, ranging from smart device user authentication to border crossing. A system that uses a single source of biometric information (e.g., a single fingerprint) to recognize people is known as a unimodal or unibiometric system, whereas a system that consolidates data from multiple biometric sources (e.g., face and fingerprint) is called a multimodal or multibiometric system. Multibiometric systems can alleviate the error rates and some inherent weaknesses of unibiometric systems. Therefore, we present in this study a novel score-level fusion scheme for multibiometric user recognition. The proposed framework hinges on Asymmetric Aggregation Operators (Asym-AOs), which are estimated via the generator functions of triangular norms (t-norms). An extensive set of experiments using seven publicly available benchmark databases, namely the National Institute of Standards and Technology (NIST)-Face, NIST-Multimodal, IIT Delhi Palmprint V1, IIT Delhi Ear, Hong Kong PolyU Contactless Hand Dorsal Images, Mobile Biometry (MOBIO) face, and Visible light mobile Ocular Biometric (VISOB) iPhone Day Light Ocular Mobile databases, is reported to show the efficacy of the proposed scheme. The experimental results demonstrate that Asym-AO-based score fusion schemes not only increase authentication rates compared to existing score-level fusion methods (e.g., min, max, t-norms, symmetric sum) but are also computationally fast.
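As a hedged illustration of score-level fusion, the sketch below combines two normalized match scores with a few classical rules mentioned as baselines in this abstract (min, max, t-norms, symmetric sum). The paper's Asymmetric Aggregation Operators are its own contribution and are not reproduced here; the scores are synthetic examples.

```python
import numpy as np

def fuse(s1: np.ndarray, s2: np.ndarray, rule: str) -> np.ndarray:
    """Fuse two match scores normalized to [0, 1] with simple classical rules."""
    if rule == "min":
        return np.minimum(s1, s2)
    if rule == "max":
        return np.maximum(s1, s2)
    if rule == "product_tnorm":          # classical product t-norm
        return s1 * s2
    if rule == "hamacher_tnorm":         # Hamacher product t-norm
        return (s1 * s2) / (s1 + s2 - s1 * s2 + 1e-12)
    if rule == "symmetric_sum":          # a simple associative symmetric sum
        return (s1 * s2) / (s1 * s2 + (1 - s1) * (1 - s2) + 1e-12)
    raise ValueError(rule)

# Example: face and ear match scores for the same probes (synthetic values).
face_scores = np.array([0.92, 0.35, 0.71])
ear_scores = np.array([0.88, 0.40, 0.55])
for rule in ["min", "max", "product_tnorm", "hamacher_tnorm", "symmetric_sum"]:
    print(rule, fuse(face_scores, ear_scores, rule).round(3))
```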