Search Results (43)

Search Parameters:
Keywords = eyewear

17 pages, 2836 KiB  
Article
Estimating Heart Rate from Inertial Sensors Embedded in Smart Eyewear: A Validation Study
by Sarah Solbiati, Federica Mozzini, Jean Sahler, Paul Gil, Bruno Amir, Niccolò Antonello, Diana Trojaniello and Enrico Gianluca Caiani
Sensors 2025, 25(15), 4531; https://doi.org/10.3390/s25154531 - 22 Jul 2025
Viewed by 327
Abstract
Smart glasses are promising alternatives for the continuous, unobtrusive monitoring of heart rate (HR). This study validates HR estimates obtained with the “Essilor Connected Glasses” (SmartEW) during sedentary activities. Thirty participants wore the SmartEW, equipped with an IMU sensor for HR estimation, a commercial smartwatch (Garmin Venu 3), and an ECG device (Movesense Flash). The protocol included six static tasks performed under controlled laboratory conditions. The SmartEW algorithm analyzed 22.5 s signal windows using spectral analysis to estimate HR and provide a quality index (QI). Statistical analyses assessed agreement with ECG and the impact of QI on HR accuracy. SmartEW showed high agreement with ECG, especially with QI threshold equal to 70, as a trade-off between accuracy, low error, and acceptable data coverage (80%). Correlation for QI ≥ 70 was high across all the experimental phases (r2 up to 0.96), and the accuracy within ±5 bpm reached 95%. QI ≥ 70 also allowed biases to decrease (e.g., from −1.83 to −0.19 bpm while standing), with narrower limits of agreement, compared to ECG. SmartEW showed promising HR accuracy across sedentary activities, yielding high correlation and strong agreement with ECG and Garmin. SmartEW appears suitable for HR monitoring in static conditions, particularly when data quality is ensured. Full article
(This article belongs to the Special Issue IMU and Innovative Sensors for Healthcare)
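The abstract describes spectral analysis of 22.5 s IMU windows but does not disclose the SmartEW algorithm itself. As a rough illustration only, dominant-frequency HR estimation on a window of that length can be sketched as below; the 50 Hz sampling rate and the synthetic signal model are assumptions, not details from the paper:

```python
import numpy as np

def estimate_hr(signal, fs, lo_bpm=40, hi_bpm=180):
    """Estimate HR as the dominant spectral peak in a plausible band."""
    spectrum = np.abs(np.fft.rfft(signal - np.mean(signal)))
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fs)
    band = (freqs >= lo_bpm / 60.0) & (freqs <= hi_bpm / 60.0)
    return freqs[band][np.argmax(spectrum[band])] * 60.0  # Hz -> bpm

# Synthetic 22.5 s window sampled at 50 Hz with a 1.2 Hz (72 bpm) component
fs = 50.0
t = np.arange(0, 22.5, 1.0 / fs)
rng = np.random.default_rng(0)
sig = 0.1 * np.sin(2 * np.pi * 1.2 * t) + 0.02 * rng.standard_normal(len(t))
print(round(estimate_hr(sig, fs), 1))  # 72.0
```

A 22.5 s window gives a frequency resolution of 1/22.5 Hz, i.e. about 2.7 bpm per FFT bin, which is one reason window length matters for HR accuracy.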

17 pages, 4473 KiB  
Article
Dual-Band Wearable Antenna Integrated with Glasses for 5G and Wi-Fi Systems
by Łukasz Januszkiewicz
Appl. Sci. 2025, 15(14), 8018; https://doi.org/10.3390/app15148018 - 18 Jul 2025
Viewed by 242
Abstract
This paper presents a dual-band antenna designed for integration into eyewear. The antenna is intended for a system supporting visually impaired individuals, where a wearable camera integrated into glasses transmits data to a remote receiver. To enhance system reliability within indoor environments, the proposed design supports both fifth-generation (5G) wireless communication and Wi-Fi networks. The compact antenna is specifically dimensioned for integration within eyeglass temples and operates in the 3.5 GHz and 5.8 GHz frequency bands. Prototype measurements, conducted using a human head phantom, validate the antenna’s performance. The results demonstrate good impedance matching across the desired frequency bands and a maximum gain of at least 4 dBi in both bands. Full article
(This article belongs to the Special Issue Antenna Technology for 5G Communication)
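For a sense of why these bands suit an eyeglass-temple form factor, the free-space half-wavelengths at the two operating frequencies are only a few centimetres. This is a back-of-envelope check only; the actual antenna geometry differs once dielectric loading and proximity to the body are accounted for:

```python
C = 299_792_458.0  # speed of light in vacuum, m/s

def half_wavelength_mm(freq_hz):
    """Free-space half-wavelength in millimetres."""
    return C / freq_hz / 2.0 * 1000.0

for f in (3.5e9, 5.8e9):
    print(f"{f / 1e9:.1f} GHz: {half_wavelength_mm(f):.1f} mm")
# 3.5 GHz: 42.8 mm; 5.8 GHz: 25.8 mm
```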

21 pages, 3250 KiB  
Article
Deploying Optimized Deep Vision Models for Eyeglasses Detection on Low-Power Platforms
by Henrikas Giedra, Tomyslav Sledevič and Dalius Matuzevičius
Electronics 2025, 14(14), 2796; https://doi.org/10.3390/electronics14142796 - 11 Jul 2025
Viewed by 497
Abstract
This research addresses the optimization and deployment of convolutional neural networks for eyeglasses detection on low-power edge devices. Multiple convolutional neural network architectures were trained and evaluated using the FFHQ dataset, which contains annotated eyeglasses in the context of faces with diverse facial features and eyewear styles. Several post-training quantization techniques, including Float16, dynamic range, and full integer quantization, were applied to reduce model size and computational demand while preserving detection accuracy. The impact of model architecture and quantization methods on detection accuracy and inference latency was systematically evaluated. The optimized models were deployed and benchmarked on Raspberry Pi 5 and NVIDIA Jetson Orin Nano platforms. Experimental results show that full integer quantization reduces model size by up to 75% while maintaining competitive detection accuracy. Among the evaluated models, MobileNet architectures achieved the most favorable balance between inference speed and accuracy, demonstrating their suitability for real-time eyeglasses detection in resource-constrained environments. These findings enable efficient on-device eyeglasses detection, supporting applications such as virtual try-ons and IoT-based facial analysis systems. Full article
(This article belongs to the Special Issue Convolutional Neural Networks and Vision Applications, 4th Edition)
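The reported reduction of up to 75% in model size from full integer quantization follows directly from storing float32 weights as int8. A minimal symmetric per-tensor sketch is shown below; the actual TFLite pipeline is more involved (calibration with representative data, per-axis scales, zero-points):

```python
import numpy as np

def quantize_int8(weights):
    """Symmetric per-tensor int8 quantization: w ~= scale * q."""
    scale = np.max(np.abs(weights)) / 127.0
    q = np.clip(np.round(weights / scale), -127, 127).astype(np.int8)
    return q, scale

rng = np.random.default_rng(0)
w = rng.standard_normal(10_000).astype(np.float32)
q, scale = quantize_int8(w)

print(f"size reduction: {1 - q.nbytes / w.nbytes:.0%}")  # 75%
print(f"max abs error: {np.max(np.abs(w - q * scale)):.4f}")
```

The quantization error per weight is bounded by half the scale step, which is why accuracy often stays competitive after quantization.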

16 pages, 1719 KiB  
Article
Finite Element Analysis of Ocular Impact Forces and Potential Complications in Pickleball-Related Eye Injuries
by Cezary Rydz, Jose A. Colmenarez, Kourosh Shahraki, Pengfei Dong, Linxia Gu and Donny W. Suh
Bioengineering 2025, 12(6), 570; https://doi.org/10.3390/bioengineering12060570 - 26 May 2025
Viewed by 522
Abstract
Purpose: Pickleball, the fastest-growing sport in the United States, has seen a rapid increase in participation across all age groups, particularly among older adults. However, the sport introduces specific risks for ocular injuries due to the unique dynamics of gameplay and the physical properties of the pickleball. This study aims to explore the mechanisms of pickleball-related eye injuries, utilizing finite element modeling (FEM) to simulate ocular trauma and better understand injury mechanisms. Methods: A multi-modal approach was employed to investigate pickleball-related ocular injuries. Finite element modeling (FEM) was used to simulate blunt trauma to the eye caused by a pickleball. The FEM incorporated detailed anatomical models of the periorbital structures, cornea, sclera, and vitreous body, using hyperelastic material properties derived from experimental data. The simulations evaluated various impact scenarios, including changes in ball velocity, angle of impact, and material stiffness, to determine the stress distribution, peak strain, and deformation in ocular structures. The FEM outputs were correlated with clinical findings to validate the injury mechanisms. Results: The FE analysis revealed that the rigid, hard-plastic construction of a pickleball results in concentrated stress and strain transfer to ocular structures upon impact. At velocities exceeding 30 mph, simulations showed significant corneal deformation, with peak stresses localized at the limbus and anterior sclera. Moreover, our results show a significant stress applied to lens zonules (as high as 0.35 MPa), leading to potential lens dislocation. Posterior segment deformation was also observed, with high strain levels in the retina and vitreous, consistent with clinical observations of retinal tears and vitreous hemorrhage. Validation against reported injuries confirmed the model’s accuracy in predicting both mild injuries (e.g., corneal abrasions) and severe outcomes (e.g., hyphema, globe rupture). Conclusions: Finite element analysis provides critical insights into the biomechanical mechanisms underlying pickleball-related ocular injuries. The findings underscore the need for preventive measures, particularly among older adults, who exhibit age-related vulnerabilities. Education on the importance of wearing protective eyewear and optimizing game rules to minimize high-risk scenarios, such as close-range volleys, is essential. Further refinement of the FEM, including parametric studies and integration of protective eyewear, can guide the development of safety standards and reduce the socio-economic burden of these injuries. Full article
(This article belongs to the Special Issue Biomechanics Studies in Ophthalmology)

13 pages, 376 KiB  
Article
Relationship Between Facial Melasma and Ocular Photoaging Diseases
by Lunla Udomwech, Chime Eden and Weeratian Tawanwongsri
Med. Sci. 2025, 13(2), 61; https://doi.org/10.3390/medsci13020061 - 16 May 2025
Viewed by 1143
Abstract
Background/Objectives: Facial melasma is a common, chronic, and relapsing hyperpigmentation disorder, affecting up to 40% of adult women in Southeast Asia. Although most cases are mild, the condition may have a considerable psychological impact. Ocular photoaging diseases are also common and have been increasingly recognized in aging populations exposed to chronic sunlight. Ultraviolet (UV) radiation is implicated in both melasma and ocular photoaging; however, their relationship remains unclear. Methods: This cross-sectional study investigated the association between facial melasma and UV-induced ocular conditions among 315 participants aged 30–80 years at Walailak University Hospital, Thailand. Facial melasma was diagnosed clinically and dermoscopically, with severity assessed using the modified Melasma Area Severity Index. Ophthalmological examinations evaluated UV-related ocular conditions, including pinguecula, pterygium, climatic droplet keratopathy, cataracts, and age-related macular degeneration. Logistic regression analyses were performed, adjusting for age, sex, and sun exposure. Results: Facial melasma was identified in 66.0% of participants (n = 208), and nuclear cataracts were significantly associated with melasma (adjusted odds ratio, 2.590; 95% confidence interval, 1.410–4.770; p = 0.002). Additionally, melasma severity correlated with nuclear cataract severity (ρ = 0.186, p = 0.001). Other ocular conditions were not significantly associated with melasma. Conclusions: These findings suggest a shared UV-related pathogenesis between facial melasma and nuclear cataracts. Sun protection measures, including regular sunscreen use, UV-blocking eyewear, and wide-brimmed hats, may help mitigate the risk of both conditions. Further multicenter studies are warranted to confirm these findings and explore the underlying mechanisms. Full article

17 pages, 3331 KiB  
Article
Investigating the Use of Electrooculography Sensors to Detect Stress During Working Activities
by Alessandra Papetti, Marianna Ciccarelli, Andrea Manni, Andrea Caroppo and Gabriele Rescio
Sensors 2025, 25(10), 3015; https://doi.org/10.3390/s25103015 - 10 May 2025
Cited by 1 | Viewed by 579
Abstract
To tackle work-related stress in the evolving landscape of Industry 5.0, organizations need to prioritize employee well-being through a comprehensive strategy. While electrocardiograms (ECGs) and electrodermal activity (EDA) are widely adopted physiological measures for monitoring work-related stress, electrooculography (EOG) remains underexplored in this context. Although less extensively studied, EOG shows significant promise for comparable applications. Furthermore, the realm of human factors and ergonomics lacks sufficient research on the integration of wearable sensors, particularly in the evaluation of human work. This article aims to bridge these gaps by examining the potential of EOG signals, captured through smart eyewear, as indicators of stress. The study involved twelve subjects in a controlled environment, engaging in four stress-inducing tasks interspersed with two-minute relaxation intervals. Emotional responses were categorized both into two classes (relaxed and stressed) and three classes (relaxed, slightly stressed, and stressed). Employing supervised machine learning (ML) algorithms—Random Forest (RF), Logistic Regression (LR), Support Vector Machine (SVM), Decision Tree (DT), and K-Nearest Neighbors (KNN)—the analysis revealed accuracy rates exceeding 80%, with RF leading at 85.8% and 82.4% for two classes and three classes, respectively. The proposed wearable system shows promise in monitoring workers’ well-being, especially during visual activities. Full article
(This article belongs to the Special Issue Sensing Human Cognitive Factors)

16 pages, 7057 KiB  
Article
VRBiom: A New Periocular Dataset for Biometric Applications of Head-Mounted Display
by Ketan Kotwal, Ibrahim Ulucan, Gökhan Özbulak, Janani Selliah and Sébastien Marcel
Electronics 2025, 14(9), 1835; https://doi.org/10.3390/electronics14091835 - 30 Apr 2025
Viewed by 763
Abstract
With advancements in hardware, high-quality head-mounted display (HMD) devices are being developed by numerous companies, driving increased consumer interest in AR, VR, and MR applications. This proliferation of HMD devices opens up possibilities for a wide range of applications beyond entertainment. Most commercially available HMD devices are equipped with internal inward-facing cameras to record the periocular areas. Given the nature of these devices and captured data, many applications such as biometric authentication and gaze analysis become feasible. To effectively explore the potential of HMDs for these diverse use-cases and to enhance the corresponding techniques, it is essential to have an HMD dataset that captures realistic scenarios. In this work, we present a new dataset of periocular videos acquired using a virtual reality headset called VRBiom. The VRBiom, targeted at biometric applications, consists of 900 short videos acquired from 25 individuals recorded in the NIR spectrum. These 10 s long videos have been captured using the internal tracking cameras of Meta Quest Pro at 72 FPS. To encompass real-world variations, the dataset includes recordings under three gaze conditions: steady, moving, and partially closed eyes. We have also ensured an equal split of recordings without and with glasses to facilitate the analysis of eyewear. These videos, characterized by non-frontal views of the eye and relatively low spatial resolutions (400×400), can be instrumental in advancing state-of-the-art research across various biometric applications. The VRBiom dataset can be utilized to evaluate, train, or adapt models for biometric use-cases such as iris and/or periocular recognition and associated sub-tasks such as detection and semantic segmentation. In addition to data from real individuals, we have included around 1100 presentation attacks constructed from 92 PA instruments. These PAIs fall into six categories constructed through combinations of print attacks (real and synthetic identities), fake 3D eyeballs, plastic eyes, and various types of masks and mannequins. These PA videos, combined with genuine (bona fide) data, can be utilized to address concerns related to spoofing, which is a significant threat if these devices are to be used for authentication. The VRBiom dataset is publicly available for research purposes related to biometric applications only. Full article

23 pages, 1237 KiB  
Review
Risk of Permanent Corneal Injury in Microgravity: Spaceflight-Associated Hazards, Challenges to Vision Restoration, and Role of Biotechnology in Long-Term Planetary Missions
by Jainam Shah, Joshua Ong, Ryung Lee, Alex Suh, Ethan Waisberg, C. Robert Gibson, John Berdahl and Thomas H. Mader
Life 2025, 15(4), 602; https://doi.org/10.3390/life15040602 - 4 Apr 2025
Cited by 2 | Viewed by 1033
Abstract
Human space exploration presents an unparalleled opportunity to study life in extreme environments—but it also exposes astronauts to physiological stressors that jeopardize key systems like vision. Corneal health, essential for maintaining precise visual acuity, is threatened by microgravity-induced fluid shifts, cosmic radiation, and the confined nature of spacecraft living environments. These conditions elevate the risk of corneal abrasions, infections, and structural damage. In addition, Spaceflight-Associated Neuro-Ocular Syndrome (SANS)—while primarily affecting the posterior segment—has also been potentially linked to anterior segment alterations such as corneal edema and tear film instability. This review examines these ocular challenges and assesses current mitigation strategies. Traditional approaches, such as terrestrial eye banking and corneal transplantation, are impractical for spaceflight due to the limited viability of preserved tissues, surgical complexities, anesthetic risks, infection potential, and logistical constraints. The paper explores emerging technologies like 3D bioprinting and stem cell-based tissue engineering, which offer promising solutions by enabling the on-demand production of personalized corneal constructs. Complementary advancements, including adaptive protective eyewear, bioengineered tear substitutes, telemedicine, and AI-driven diagnostic tools, also show potential in autonomously managing ocular health during long-duration missions. By addressing the complex interplay of environmental stressors and biological vulnerabilities, these innovations not only safeguard astronaut vision and mission performance but also catalyze new pathways for regenerative medicine on Earth. The evolution of space-based ophthalmic care underscores the dual impact of space medicine investments across planetary exploration and terrestrial health systems. Full article

11 pages, 4280 KiB  
Article
Fog-Proof and Anti-Reflection Nano-Coating Prepared by Atmosphere Plasma Spraying
by Xiqiang Zhong, Zimo Zhou, Guanghua Liu, Dan Wang, Yan Xing and Wei Pan
Coatings 2025, 15(3), 331; https://doi.org/10.3390/coatings15030331 - 13 Mar 2025
Viewed by 863
Abstract
Fog-proof coatings have been widely utilized in various fields, including automobile windshields, curtain walls, and fog-resistant eyewear. To date, numerous methods have been developed for preparing fog-proof coatings. However, the most effective fog-proof surfaces often suffer from poor light transmittance. In this report, we present a method for preparing fog-proof nano-coatings using atmospheric plasma spraying (APS). Hexamethyldisiloxane (HMDSO) was employed as a precursor solution, resulting in the formation of amorphous nano-coatings on glass substrates with a thickness ranging from 15 to 25 nm. The APS-coated glasses exhibit superhydrophilic properties, excellent fog resistance, and anti-reflective characteristics. Additionally, the APS coatings enhance light transmittance from 90% to 92%. Full article

1 page, 127 KiB  
Correction
Correction: Majerič et al. Study of the Application of Recycled Gold Nanoparticles in Coatings for Eyewear Lenses. Coatings 2023, 13, 1666
by Peter Majerič, Djuro Koruga, Zorana Njegovan, Žiga Jelen, Tilen Švarc, Andrej Horvat and Rebeka Rudolf
Coatings 2025, 15(3), 251; https://doi.org/10.3390/coatings15030251 - 20 Feb 2025
Viewed by 541
Abstract
In the original publication [...] Full article
22 pages, 10440 KiB  
Article
Hybrid BCI for Meal-Assist Robot Using Dry-Type EEG and Pupillary Light Reflex
by Jihyeon Ha, Sangin Park, Yaeeun Han and Laehyun Kim
Biomimetics 2025, 10(2), 118; https://doi.org/10.3390/biomimetics10020118 - 18 Feb 2025
Cited by 1 | Viewed by 964
Abstract
Brain–computer interface (BCI)-based assistive technologies enable intuitive and efficient user interaction, significantly enhancing the independence and quality of life of elderly and disabled individuals. Although existing wet EEG-based systems report high accuracy, they suffer from limited practicality. This study presents a hybrid BCI system combining dry-type EEG-based flash visual-evoked potentials (FVEP) and pupillary light reflex (PLR) designed to control an LED-based meal-assist robot. The hybrid system integrates dry-type EEG and eyewear-type infrared cameras, addressing the preparation challenges of wet electrodes, while maintaining practical usability and high classification performance. Offline experiments demonstrated an average accuracy of 88.59% and an information transfer rate (ITR) of 18.23 bit/min across the four target classifications. Real-time implementation uses PLR triggers to initiate the meal cycle and EMG triggers to detect chewing, indicating the completion of the cycle. These features allow intuitive and efficient operation of the meal-assist robot. This study advances the BCI-based assistive technologies by introducing a hybrid system optimized for real-world applications. The successful integration of the FVEP and PLR in a meal-assisted robot demonstrates the potential for robust and user-friendly solutions that empower the users with autonomy and dignity in their daily activities. Full article
(This article belongs to the Special Issue Advances in Brain–Computer Interfaces)
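The reported ITR follows the standard Wolpaw formula. Using the reported accuracy (P = 0.8859) and N = 4 targets gives about 1.31 bits per selection; converting that to the quoted 18.23 bit/min requires a trial duration the abstract does not state, so the seconds-per-trial figure below is inferred, not reported:

```python
import math

def bits_per_trial(n_targets, accuracy):
    """Wolpaw information transfer per selection, in bits."""
    n, p = n_targets, accuracy
    if p >= 1.0:
        return math.log2(n)
    return (math.log2(n) + p * math.log2(p)
            + (1 - p) * math.log2((1 - p) / (n - 1)))

b = bits_per_trial(4, 0.8859)
print(f"{b:.2f} bits/selection")        # 1.31 bits/selection
print(f"{60 * b / 18.23:.1f} s/trial")  # ~4.3 s/trial (inferred)
```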

19 pages, 1074 KiB  
Article
A Retrospective Analysis of Automated Image Labeling for Eyewear Detection Using Zero-Shot Object Detectors
by Dalius Matuzevičius
Electronics 2024, 13(23), 4763; https://doi.org/10.3390/electronics13234763 - 2 Dec 2024
Cited by 2 | Viewed by 1816
Abstract
This research presents a retrospective analysis of zero-shot object detectors in automating image labeling for eyeglasses detection. The increasing demand for high-quality annotations in object detection is being met by AI foundation models with open-vocabulary capabilities, reducing the need for labor-intensive manual labeling. There is a notable gap in systematic analyses of foundation models for specialized detection tasks, particularly within the domain of facial accessories. Six state-of-the-art models—Grounding DINO, Detic, OWLViT, OWLv2, YOLO World, and Florence-2—were evaluated across three datasets (FFHQ with custom annotations, CelebAMask-HQ, and Face Synthetics) to assess their effectiveness in zero-shot detection and labeling. Performance metrics, including Average Precision (AP), Average Recall (AR), and Intersection over Union (IoU), were used to benchmark foundation models. The results show that Detic achieved the highest performance scores (AP of 0.97 and AR of 0.98 on FFHQ, with IoU values reaching 0.97), making it highly suitable for automated annotation workflows. Grounding DINO and OWLv2 also showed potential, especially in high-recall scenarios. The results emphasize the importance of prompt engineering. Practical recommendations for using foundation models in specialized dataset annotation are provided. Full article
(This article belongs to the Special Issue IoT-Enabled Smart Devices and Systems in Smart Environments)
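Of the metrics used to benchmark the detectors, IoU is the simplest to state precisely. A sketch for axis-aligned boxes in (x1, y1, x2, y2) form follows; the box convention is an assumption, and the paper's evaluation code may differ:

```python
def iou(box_a, box_b):
    """Intersection over Union for axis-aligned (x1, y1, x2, y2) boxes."""
    ax1, ay1, ax2, ay2 = box_a
    bx1, by1, bx2, by2 = box_b
    iw = max(0.0, min(ax2, bx2) - max(ax1, bx1))  # overlap width
    ih = max(0.0, min(ay2, by2) - max(ay1, by1))  # overlap height
    inter = iw * ih
    union = ((ax2 - ax1) * (ay2 - ay1)
             + (bx2 - bx1) * (by2 - by1) - inter)
    return inter / union if union > 0 else 0.0

print(iou((0, 0, 10, 10), (0, 0, 10, 10)))  # 1.0
print(iou((0, 0, 10, 10), (5, 5, 15, 15)))  # 25 / 175, about 0.143
```

AP and AR are then computed by thresholding detections on IoU against the ground-truth boxes, so IoU values near 0.97, as reported for Detic, indicate near-exact localization.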

27 pages, 4935 KiB  
Article
Diverse Dataset for Eyeglasses Detection: Extending the Flickr-Faces-HQ (FFHQ) Dataset
by Dalius Matuzevičius
Sensors 2024, 24(23), 7697; https://doi.org/10.3390/s24237697 - 1 Dec 2024
Cited by 1 | Viewed by 2347
Abstract
Facial analysis is an important area of research in computer vision and machine learning, with applications spanning security, healthcare, and user interaction systems. The data-centric AI approach emphasizes the importance of high-quality, diverse, and well-annotated datasets in driving advancements in this field. However, current facial datasets, such as Flickr-Faces-HQ (FFHQ), lack detailed annotations for detecting facial accessories, particularly eyeglasses. This work addresses this limitation by extending the FFHQ dataset with precise bounding box annotations for eyeglasses detection, enhancing its utility for data-centric AI applications. The extended dataset comprises 70,000 images, including over 16,000 images containing eyewear, and it exceeds the CelebAMask-HQ dataset in size and diversity. A semi-automated protocol was employed to efficiently generate accurate bounding box annotations, minimizing the demand for extensive manual labeling. This enriched dataset serves as a valuable resource for training and benchmarking eyewear detection models. Additionally, the baseline benchmark results for eyeglasses detection were presented using deep learning methods, including YOLOv8 and MobileNetV3. The evaluation, conducted through cross-dataset validation, demonstrated the robustness of models trained on the extended FFHQ dataset with their superior performances over existing alternative CelebAMask-HQ. The extended dataset, which has been made publicly available, is expected to support future research and development in eyewear detection, contributing to advancements in facial analysis and related fields. Full article

16 pages, 1291 KiB  
Article
Silent Speech Eyewear Interface: Silent Speech Recognition Method Using Eyewear and an Ear-Mounted Microphone with Infrared Distance Sensors
by Yuya Igarashi, Kyosuke Futami and Kazuya Murao
Sensors 2024, 24(22), 7368; https://doi.org/10.3390/s24227368 - 19 Nov 2024
Viewed by 1463
Abstract
As eyewear devices such as smart glasses become more common, it is important to provide input methods that can be used at all times for such situations and people. Silent speech interaction (SSI) has the potential to be useful as a hands-free input method for various situations and people, including those who have difficulty with voiced speech. However, previous methods have involved sensor devices that are difficult to use anytime and anywhere. We propose a method for SSI that involves using an eyewear device equipped with infrared distance sensors. The proposed method measures facial skin movements associated with speech from the infrared distance sensor mounted on an eyewear device and recognizes silent speech commands by applying machine learning to time series sensor data. The proposed method was applied to a prototype system including a sensor device consisting of eyewear and ear-mounted microphones to measure the movements of the cheek, jaw joint, and jaw. Evaluations 1 and 2 showed that five speech commands could be recognized with an F value of 0.90 and ten longer speech commands with an F value of 0.83. Evaluation 3 showed how the recognition accuracy changes with the combination of sensor points. Evaluation 4 examined whether the proposed method can be used for a larger number of speech commands with 21 commands by using deep learning LSTM and a combination of DTW and kNN. Evaluation 5 examined the recognition accuracy in some situations affecting recognition accuracy such as re-attaching devices and walking. These results show the feasibility of the proposed method for a simple hands-free input interface, such as with media players and voice assistants. Our study provides the first wearable sensing method that can easily apply SSI functions to eyewear devices. Full article
(This article belongs to the Special Issue Artificial Intelligence and Sensors Technology in Smart Cities)
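Evaluation 4 pairs DTW with kNN. A minimal DTW distance for 1-D sequences is sketched below; the actual system operates on multichannel infrared-distance time series, and in a DTW+kNN classifier this distance simply replaces Euclidean distance in the neighbor search:

```python
import numpy as np

def dtw_distance(a, b):
    """Classic dynamic-time-warping distance between 1-D sequences."""
    n, m = len(a), len(b)
    cost = np.full((n + 1, m + 1), np.inf)
    cost[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            d = abs(a[i - 1] - b[j - 1])
            cost[i, j] = d + min(cost[i - 1, j],      # insertion
                                 cost[i, j - 1],      # deletion
                                 cost[i - 1, j - 1])  # match
    return cost[n, m]

a = [0, 1, 2, 1, 0]
print(dtw_distance(a, a))                   # 0.0
print(dtw_distance(a, [0, 1, 1, 2, 1, 0]))  # 0.0: a time-warped copy
```

Because DTW tolerates local stretching and compression of the time axis, it suits speech gestures whose duration varies between repetitions.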

7 pages, 504 KiB  
Proceeding Paper
A Study on Eyewear Design Quality Using the Kano Two-Dimensional Quality Model
by Hsiao-Ni Su
Eng. Proc. 2024, 74(1), 72; https://doi.org/10.3390/engproc2024074072 - 16 Oct 2024
Viewed by 949
Abstract
In the recent consumer market, eyewear has gone beyond the improvement of visual function and is used for the individual style and the symbolization of social standing. Consequently, the aesthetic design of eyewear influences consumer purchasing decisions. Thus, it is necessary to investigate how eyewear design incorporating elements with aesthetic appeal can enhance the sensory experiences of consumers, thereby intensifying their preference for products and fostering their intent to purchase. Utilizing the Evaluation Grid Method (EGM), the design characteristics of eyewear products in the market were explored to assess how these characteristics affect consumer selections. A quantitative analysis of the key quality attributes in eyewear design was conducted using the Kano Model. The results demonstrated a nonlinear relationship between design attributes and consumer satisfaction, confirming the relevance of the Kano Model’s classification. By providing a multi-dimensional quality, the Kano Model elucidated variations in consumer quality requirements for eyewear design, allowing designers and manufacturers to strategically enhance key product design elements, thus creating items with greater market appeal. The results provide recommendations for the improvement of product design aesthetics to increase visual allure for consumers and strengthen market competitiveness. Full article
