Search Results (96)

Search Parameters:
Keywords = eye tracking sensors

13 pages, 3186 KiB  
Article
The Design and Performance Evaluation of an Eye-Tracking System Based on an Electrostatic MEMS Scanning Mirror
by Minqiang Li, Lin Qin, Xiasheng Wang, Jiaojiao Wen, Tong Wu, Xiaoming Huang, Hongbo Yin, Yi Tian and Zhuqing Wang
Micromachines 2025, 16(6), 640; https://doi.org/10.3390/mi16060640 - 28 May 2025
Viewed by 1902
Abstract
In this paper, we propose an eye-tracking system featuring a small size and high scanning frequency, utilizing an electrostatic biaxial scanning mirror fabricated through a micro-electro-mechanical system (MEMS) process. A laser beam is directed onto the mirror, and the two axes of the mirror generate a Lissajous scanning pattern within an artificial eyeball. The scanning pattern reflected from the eyeball is detected by a linear photodiode sensor array (LPSA). The direction and rotation angle of the artificial eyeball produce varying grayscale values across the pixels detected by the LPSA, so the average grayscale value changes accordingly. By performing a linear fit between different rotation angles for the same eye-movement direction and the corresponding grayscale values, we can determine the correlation between the direction of eye movement and the signal magnitude received by the LPSA, thereby enabling precise eye tracking. The results demonstrate a minimum resolution of 0.6°, indicating that the system has good accuracy. In the future, this eye-tracking system can be integrated into wearable glasses and applied in fields including medicine and psychology. Full article
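As a minimal illustration of the calibration step described in the abstract, the sketch below fits a line between known eyeball rotation angles and mean LPSA grayscale values, then inverts it to estimate the angle from a new measurement; all numbers and names are illustrative assumptions, not the paper's data or code.

```python
# Illustrative sketch (not the authors' code): fit a line between known eyeball
# rotation angles and the mean grayscale values measured by the LPSA, then invert
# the fit to estimate gaze angle from a new measurement. Values are made up.
import numpy as np

angles_deg = np.array([-9.0, -6.0, -3.0, 0.0, 3.0, 6.0, 9.0])            # calibration angles
mean_gray  = np.array([118.0, 124.5, 131.2, 137.8, 144.1, 150.6, 157.3])  # mean LPSA grayscale

slope, intercept = np.polyfit(angles_deg, mean_gray, deg=1)   # gray ~ slope*angle + intercept

def estimate_angle(gray_value: float) -> float:
    """Invert the calibration line to map a mean grayscale value to a rotation angle."""
    return (gray_value - intercept) / slope

print(round(estimate_angle(140.0), 2))   # estimated rotation angle in degrees
```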

23 pages, 2544 KiB  
Article
Fuzzy-Based Sensor Fusion for Cognitive Load Assessment in Inclusive Manufacturing Strategies
by Agnese Testa, Alessandro Simeone, Massimiliano Zecca, Andrea Paoli and Luca Settineri
Sensors 2025, 25(11), 3356; https://doi.org/10.3390/s25113356 - 27 May 2025
Viewed by 727
Abstract
In recent years, the need to design inclusive workplaces has grown, particularly in manufacturing contexts where high cognitive demands may disadvantage neurodiverse individuals. In manufacturing environments, neurodiverse workers often experience difficulties processing standard instructions, increasing cognitive load and errors and reducing overall performance. This study proposes a methodology to assess cognitive load during assembly tasks to support workers with dyslexia. A multi-layer fuzzy logic framework was developed, integrating physiological, environmental, and task-related data. Physiological signals, including heart rate, heart rate variability, electrodermal activity, and eye-tracking data, were collected using wearable sensors. Ambient conditions were also measured. The model emphasizes the Reading dimension of cognitive load, critical for dyslexic individuals challenged by text-based instructions. A controlled laboratory study with 18 neurotypical participants simulated dyslexia scenarios with and without support, compared to a control condition. Results indicated that a lack of support increased cognitive load and reduced performance in complex tasks. In simpler tasks, control participants showed higher cognitive effort, possibly employing overcompensation strategies by exerting additional cognitive resources to maintain performance. Support mechanisms, such as audio prompts, effectively reduced cognitive load, highlighting the framework’s potential for fostering inclusive practices in industrial environments. Full article
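The sketch below is a minimal, hypothetical illustration of the kind of fuzzy aggregation layer the abstract describes: two normalised physiological features are mapped through triangular membership functions and combined into a single cognitive-load score. The breakpoints, weights, and feature names are assumptions, not the study's rule base.

```python
# Illustrative sketch (not the study's framework): map two normalised physiological
# signals to fuzzy "high load" memberships and aggregate them into a cognitive-load
# score. Breakpoints and weights are assumptions for demonstration only.
import numpy as np

def tri_membership(x, a, b, c):
    """Triangular membership function rising from a to b and falling from b to c."""
    return np.maximum(np.minimum((x - a) / (b - a), (c - x) / (c - b)), 0.0)

def cognitive_load(hrv_norm, pupil_norm, w_hrv=0.5, w_pupil=0.5):
    high_load_hrv   = tri_membership(1.0 - hrv_norm, 0.3, 0.7, 1.0)   # low HRV -> high load
    high_load_pupil = tri_membership(pupil_norm, 0.3, 0.7, 1.0)       # dilation -> high load
    return w_hrv * high_load_hrv + w_pupil * high_load_pupil

print(round(cognitive_load(hrv_norm=0.25, pupil_norm=0.8), 3))   # score in [0, 1]
```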

13 pages, 1193 KiB  
Article
Validation of an Automated Scoring Algorithm That Assesses Eye Exploration in a 3-Dimensional Virtual Reality Environment Using Eye-Tracking Sensors
by Or Koren, Anais Di Via Ioschpe, Meytal Wilf, Bailasan Dahly, Ramit Ravona-Springer and Meir Plotnik
Sensors 2025, 25(11), 3331; https://doi.org/10.3390/s25113331 - 26 May 2025
Viewed by 409
Abstract
Eye-tracking studies in virtual reality (VR) deliver insights into behavioral function. The gold standard of evaluating gaze behavior is based on manual scoring, which is labor-intensive. Previously proposed automated eye-tracking algorithms for VR head-mounted displays (HMDs) were not validated against manual scoring, nor tested on dynamic areas of interest (AOIs). Our study validates the accuracy of an automated scoring algorithm, which determines temporal fixation behavior on static and dynamic AOIs in VR, against subjective human annotation. The intraclass correlation coefficient (ICC) was calculated for the time of first fixation (TOFF) and total fixation duration (TFD) in ten participants, each presented with 36 static and dynamic AOIs. High ICC values (≥0.982; p < 0.0001) were obtained when comparing the algorithm-generated TOFF and TFD to the raters' annotations. In sum, our algorithm is accurate in determining temporal parameters related to gaze behavior when using HMD-based VR. Thus, the significant time required for human scoring among numerous raters can be rendered obsolete by a reliable automated scoring system. The algorithm proposed here was designed to subserve a separate study that uses TOFF and TFD to differentiate apathy from depression in those suffering from Alzheimer's dementia. Full article
(This article belongs to the Section Optical Sensors)
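A minimal sketch of the validation statistic reported above, computing an intraclass correlation between algorithm-generated and human-rated time-of-first-fixation values; it assumes the pingouin package is available, and the values and column names are illustrative, not the study's data.

```python
# Minimal sketch (not the authors' code): compare algorithm-derived time of first
# fixation (TOFF) against a human rater's annotations with an intraclass correlation.
# Assumes the pingouin package is installed; values and names are illustrative.
import pandas as pd
import pingouin as pg

toff = pd.DataFrame({
    "aoi":    list(range(6)) * 2,
    "scorer": ["algorithm"] * 6 + ["rater"] * 6,
    "toff_s": [0.42, 1.10, 0.88, 2.30, 0.51, 1.75,    # algorithm TOFF (seconds)
               0.45, 1.05, 0.90, 2.28, 0.55, 1.70],   # rater TOFF (seconds)
})

icc = pg.intraclass_corr(data=toff, targets="aoi", raters="scorer", ratings="toff_s")
print(icc[["Type", "ICC", "pval"]])   # the ICC2 row is two-way random, absolute agreement
```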

24 pages, 2268 KiB  
Article
Fusion of Driving Behavior and Monitoring System in Scenarios of Driving Under the Influence: An Experimental Approach
by Jan-Philipp Göbel, Niklas Peuckmann, Thomas Kundinger and Andreas Riener
Appl. Sci. 2025, 15(10), 5302; https://doi.org/10.3390/app15105302 - 9 May 2025
Viewed by 382
Abstract
Driving under the influence of alcohol (DUI) remains a leading cause of accidents globally, with accident risk rising exponentially with blood alcohol concentration (BAC). This study aims to distinguish between sober and intoxicated drivers using driving behavior analysis and a driver monitoring system (DMS), technologies that align with emerging EU regulations. In a driving simulator, twenty-three participants (average age: 32) completed five drives (one practice and two each while sober and intoxicated) on separate days across city, rural, and highway settings. Each 30-minute drive was analyzed using eye-tracking and driving behavior data. We applied significance testing and classification models to assess the data. Our study goes beyond the state of the art by (a) combining data from various sensors and (b) not only examining the effects of alcohol on driving behavior but also using these data to classify driver impairment. Fusing gaze and driving behavior data improved classification accuracy, with models achieving over 70% accuracy in city and rural conditions and a Long Short-Term Memory (LSTM) network reaching up to 80% on rural roads. Although the detection rate is still far too low for a production system, the results provide valuable insights for improving DUI detection technologies and enhancing road safety. Full article
(This article belongs to the Special Issue Human-Centered Approaches to Automated Vehicles)
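As a rough illustration of the classification setup mentioned above, the sketch below defines a small LSTM that maps windows of fused gaze and driving features to a sober/intoxicated logit; the feature count, window length, and architecture are assumptions, not the authors' model.

```python
# Illustrative sketch (not the study's model): an LSTM classifier over windows of
# fused gaze + driving features. Feature count and window length are assumptions.
import torch
import torch.nn as nn

class DUIClassifier(nn.Module):
    def __init__(self, n_features=8, hidden=64):
        super().__init__()
        self.lstm = nn.LSTM(n_features, hidden, batch_first=True)
        self.head = nn.Linear(hidden, 1)

    def forward(self, x):              # x: (batch, time, features)
        _, (h, _) = self.lstm(x)
        return self.head(h[-1])        # logit: sober vs. intoxicated

model = DUIClassifier()
windows = torch.randn(4, 300, 8)       # 4 windows of 300 samples x 8 fused features
print(torch.sigmoid(model(windows)).shape)   # torch.Size([4, 1]) impairment probabilities
```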

17 pages, 2842 KiB  
Article
YOLO Model-Based Eye Movement Detection During Closed-Eye State
by Shigui Zhang, Junhui He and Yuanwen Zou
Appl. Sci. 2025, 15(9), 4981; https://doi.org/10.3390/app15094981 - 30 Apr 2025
Viewed by 623
Abstract
Eye movement detection technology holds significant potential across medicine, psychology, and human–computer interaction. However, traditional methods, which primarily rely on tracking the pupil and cornea during the open-eye state, are ineffective when the eye is closed. To address this limitation, we developed a novel system capable of real-time eye movement detection even in the closed-eye state. Utilizing a micro-camera based on the OV9734 image sensor, our system captures image data to construct a dataset of eyelid images during ocular movements. We performed extensive experiments with multiple versions of the YOLO algorithm, including v5s, v8s, v9s, and v10s, in addition to testing different sizes of the YOLO v11 model (n < s < m < l < x), to achieve optimal performance. Ultimately, we selected YOLO11m as the optimal model based on its highest AP0.5 score of 0.838. Our tracker achieved a mean distance error of 0.77 mm, with 90% of predicted eye position distances having an error of less than 1.67 mm, enabling real-time tracking at 30 frames per second. This study introduces an innovative method for the real-time detection of eye movements during eye closure, enhancing and diversifying the applications of eye-tracking technology. Full article
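A minimal sketch of the two error metrics reported above, the mean distance error and the 90th-percentile error between predicted and ground-truth eye positions; the positions are made-up values, not the study's measurements.

```python
# Illustrative sketch (not the authors' evaluation code): compute mean and
# 90th-percentile Euclidean distance errors between predicted and ground-truth
# eye positions in millimetres. Values are made up.
import numpy as np

pred_mm = np.array([[1.2, 0.4], [0.8, -0.3], [2.1, 1.0], [0.2, 0.5], [1.5, -1.1]])
true_mm = np.array([[1.0, 0.5], [0.9, -0.1], [1.6, 1.2], [0.3, 0.4], [1.1, -0.6]])

errors = np.linalg.norm(pred_mm - true_mm, axis=1)   # per-frame distance error
print(f"mean error: {errors.mean():.2f} mm")
print(f"90th percentile: {np.percentile(errors, 90):.2f} mm")
```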

25 pages, 630 KiB  
Review
Innovative Approaches in Sensory Food Science: From Digital Tools to Virtual Reality
by Fernanda Cosme, Tânia Rocha, Catarina Marques, João Barroso and Alice Vilela
Appl. Sci. 2025, 15(8), 4538; https://doi.org/10.3390/app15084538 - 20 Apr 2025
Cited by 1 | Viewed by 2800
Abstract
The food industry faces growing challenges due to evolving consumer demands, requiring digital technologies to enhance sensory analysis. Innovations such as eye tracking, FaceReader, virtual reality (VR), augmented reality (AR), and artificial intelligence (AI) are transforming consumer behavior research by providing deeper insights into sensory experiences. For instance, FaceReader captures emotional responses to food by analyzing facial expressions, offering valuable data on consumer preferences for taste, texture, and aroma. Together, these technologies provide a comprehensive understanding of the sensory experience, aiding product development and branding. Electronic nose, tongue, and eye technologies also replicate human sensory capabilities, enabling objective and efficient assessment of aroma, taste, and color. The electronic nose (E-nose) detects volatile compounds for aroma evaluation, while the electronic tongue (E-tongue) evaluates taste through electrochemical sensors, ensuring accuracy and consistency in sensory analysis. The electronic eye (E-eye) analyzes food color, supporting quality control processes. These advancements offer rapid, non-invasive, reproducible assessments, benefiting research and industrial applications. By improving the precision and efficiency of sensory analysis, digital tools help enhance product quality and consumer satisfaction in the competitive food industry. This review explores the latest digital methods shaping food sensory research and innovation. Full article

20 pages, 4083 KiB  
Article
MultiScaleAnalyzer for Spatiotemporal Learning Data Analysis: A Case Study of Eye-Tracking and Mouse Movement
by Shuang Wei, Chen Guo, Qingli Lei, Yingjie Chen and Yan Ping Xin
Appl. Sci. 2025, 15(8), 4237; https://doi.org/10.3390/app15084237 - 11 Apr 2025
Viewed by 430
Abstract
With the development of high-performance computers, cloud storage, and advanced sensors, people's ability to gather complex learning data has greatly improved. However, analyzing these data remains a significant challenge, especially for spatiotemporal learning data such as eye-tracking and mouse movements, where identifying the learning insights behind the data is difficult. We propose a visualization platform called "MultiScaleAnalyzer", which employs a hierarchical structure to present spatiotemporal learning data in multiple views. From high-level overviews to detailed analyses, "MultiScaleAnalyzer" provides varying resolutions of data tailored to educators' needs. To demonstrate the platform's effectiveness, we applied "MultiScaleAnalyzer" to a mathematical word problem-solving dataset, showcasing how the visualization platform facilitates the exploration of student problem-solving patterns and strategies. Full article
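As a small, hypothetical illustration of the multi-resolution idea, the sketch below resamples a synthetic gaze trace at three temporal scales, the kind of overview-to-detail hierarchy a multi-scale view could render; it is not the MultiScaleAnalyzer implementation.

```python
# Illustrative sketch (not MultiScaleAnalyzer): produce multiple temporal resolutions
# of a gaze trace, from a coarse overview down to the raw samples, using pandas.
import numpy as np
import pandas as pd

rng = np.random.default_rng(0)
t = pd.date_range("2024-01-01", periods=6000, freq="10ms")    # ~60 s at 100 Hz
gaze = pd.DataFrame({"x": rng.normal(512, 80, len(t)),
                     "y": rng.normal(384, 60, len(t))}, index=t)

overview = gaze.resample("1s").mean()      # coarse overview: 1 s bins
mid      = gaze.resample("100ms").mean()   # mid-level detail: 100 ms bins
print(len(overview), len(mid), len(gaze))  # 60, 600, 6000 points per scale
```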

44 pages, 38981 KiB  
Article
From Camera Image to Active Target Tracking: Modelling, Encoding and Metrical Analysis for Unmanned Underwater Vehicles
by Samuel Appleby, Giacomo Bergami and Gary Ushaw
AI 2025, 6(4), 71; https://doi.org/10.3390/ai6040071 - 7 Apr 2025
Viewed by 694
Abstract
Marine mammal monitoring, a growing field of research, is critical to cetacean conservation. Traditional 'tagging' attaches sensors such as GPS to the animals, but this is intrusive and can lead to infection and, ultimately, death. A less intrusive approach uses a UUV commanded by a human operator at the surface. The development of AI for autonomous underwater vehicle navigation models training environments in simulation, providing visual and physical fidelity suitable for sim-to-real transfer. Previous solutions, including UVMS and L2D, provide only satisfactory results due to poor environment generalisation, while sensors such as sonar create environmental disturbances. Though rich in features, image data suffer from high dimensionality, yielding a state space too large for many machine learning tasks; underwater environments, prone to image noise, further complicate this issue. We propose SWiMM2.0, coupling a Unity simulation of a BLUEROV UUV with a DRL backend. A pre-processing step exploits a state-of-the-art CMVAE, reducing dimensionality while minimising data loss. Sim-to-real generalisation is validated by prior research. Custom behaviour metrics, unbiased to the naked eye and unprecedented in current ROV simulators, link our objectives, ensuring successful ROV behaviour while tracking targets. Our experiments show that SAC maximises the former, achieving near-perfect behaviour while exploiting image data alone. Full article
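The sketch below is a loose, hypothetical illustration of the encode-then-act structure described above: a small convolutional encoder compresses a camera frame into a low-dimensional latent vector that a policy head maps to actuator commands. It is not the SWiMM2.0 CMVAE or SAC agent; all sizes and layer choices are assumptions.

```python
# Illustrative sketch (not SWiMM2.0): compress a camera frame to a latent vector,
# then map it to normalised actuator commands. Shapes and sizes are assumptions.
import torch
import torch.nn as nn

class Encoder(nn.Module):
    def __init__(self, latent_dim=32):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv2d(3, 16, 4, stride=2), nn.ReLU(),
            nn.Conv2d(16, 32, 4, stride=2), nn.ReLU(),
            nn.Flatten(),
        )
        self.fc = nn.LazyLinear(latent_dim)   # infers the flattened size at first call

    def forward(self, img):
        return self.fc(self.conv(img))

policy = nn.Sequential(nn.Linear(32, 64), nn.ReLU(), nn.Linear(64, 4), nn.Tanh())

frame = torch.rand(1, 3, 64, 64)      # one RGB camera frame (toy resolution)
action = policy(Encoder()(frame))     # 4 normalised actuator commands
print(action.shape)                   # torch.Size([1, 4])
```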

19 pages, 1902 KiB  
Article
Facial Features Controlled Smart Vehicle for Disabled/Elderly People
by Yijun Hu, Ruiheng Wu, Guoquan Li, Zhilong Shen and Jin Xie
Electronics 2025, 14(6), 1088; https://doi.org/10.3390/electronics14061088 - 10 Mar 2025
Viewed by 725
Abstract
Mobility limitations due to congenital disabilities, accidents, or illnesses pose significant challenges to the daily lives of individuals with disabilities. This study presents a novel design for a multifunctional intelligent vehicle, integrating head recognition, eye-tracking, Bluetooth control, and ultrasonic obstacle avoidance to offer an innovative mobility solution. The smart vehicle supports three driving modes: (1) a nostril-based control system using MediaPipe to track displacement for movement commands, (2) an eye-tracking control system based on the Viola–Jones algorithm processed via an Arduino Nano board, and (3) a Bluetooth-assisted mode for caregiver intervention. Additionally, an ultrasonic sensor system ensures real-time obstacle detection and avoidance, enhancing user safety. Extensive experimental evaluations were conducted to validate the effectiveness of the system. The results indicate that the proposed vehicle achieves an 85% accuracy in nostril tracking, over 90% precision in eye direction detection, and efficient obstacle avoidance within a 1 m range. These findings demonstrate the robustness and reliability of the system in real-world applications. Compared to existing assistive mobility solutions, this vehicle offers non-invasive, cost-effective, and adaptable control mechanisms that cater to a diverse range of disabilities. By enhancing accessibility and promoting user independence, this research contributes to the development of inclusive mobility solutions for disabled and elderly individuals. Full article
(This article belongs to the Special Issue Active Mobility: Innovations, Technologies, and Applications)
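As a hypothetical illustration of the nostril-based control mode, the sketch below maps a tracked nostril position, relative to a calibrated neutral point, to discrete movement commands; the dead-zone radius and coordinate convention are assumptions, not the vehicle's firmware.

```python
# Illustrative sketch (not the vehicle's firmware): map the tracked nostril position,
# relative to a calibrated neutral point, to a discrete movement command. The
# dead-zone radius and thresholds are illustrative assumptions.
def nostril_to_command(x, y, neutral=(0.5, 0.5), dead_zone=0.03):
    """x, y are normalised image coordinates of the tracked nostril (0..1)."""
    dx, dy = x - neutral[0], y - neutral[1]
    if abs(dx) < dead_zone and abs(dy) < dead_zone:
        return "stop"
    if abs(dx) >= abs(dy):
        return "right" if dx > 0 else "left"
    return "backward" if dy > 0 else "forward"   # image y grows downward

print(nostril_to_command(0.58, 0.49))   # -> "right"
```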

10 pages, 1157 KiB  
Article
The Effectiveness of a Virtual Reality-Based Exergame Protocol in Improving Postural Balance in Older Adults During the COVID-19 Pandemic
by Valeska Gatica-Rojas, María Isabel Camoglino-Escobar, Hernán Carrillo-Bestagno and Ricardo Cartes-Velásquez
Multimodal Technol. Interact. 2025, 9(1), 7; https://doi.org/10.3390/mti9010007 - 15 Jan 2025
Viewed by 1137
Abstract
Background: The COVID-19 pandemic significantly reduced physical activity levels, particularly among older people, negatively impacting their postural balance and increasing the risk of falls and hip fractures. This study aims to assess the effect of a virtual reality-based exergame physical activity protocol at home on improving postural balance in older people. Materials and Methods: A quasi-experimental design was employed with 10 older people (71 ± 9 years) who participated in a virtual reality-based exergame physical activity protocol consisting of eighteen 25 min sessions conducted at home. The protocol incorporated 3D movement tracking using a sensor attached to the participants’ bodies to monitor postural sway in real time. Clinical measurements included the Timed Up and Go test and posturographic measures of center-of-pressure, including sway area, velocity, and standard deviation in the mediolateral and anteroposterior directions under four conditions: static with the eyes open and eyes closed and dynamic voluntary sway in the mediolateral direction following a 30 Hz metronome with the eyes open and eyes closed. Paired t-tests were used to compare pre- and post-intervention data. Results: The intervention led to significant improvements in postural balance as measured using both posturographic measures (p < 0.05) and the Timed Up and Go test (p = 0.04). Conclusion: The virtual reality-based exergame physical activity protocol conducted at home, comprising eighteen 25 min sessions, effectively improves postural balance in older people. Full article
(This article belongs to the Special Issue 3D User Interfaces and Virtual Reality—2nd Edition)
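A minimal sketch of the pre/post comparison described above, running a paired t-test on hypothetical Timed Up and Go times; the values are illustrative, not the trial's data.

```python
# Illustrative sketch (not the study's analysis script): paired t-test comparing
# pre- and post-intervention Timed Up and Go times. Values are made up.
from scipy import stats

tug_pre  = [12.4, 14.1, 11.8, 15.3, 13.0, 12.7, 16.2, 14.8, 13.5, 12.9]  # seconds
tug_post = [11.6, 13.2, 11.5, 14.1, 12.4, 12.1, 15.0, 13.9, 12.8, 12.3]

t_stat, p_value = stats.ttest_rel(tug_pre, tug_post)
print(f"t = {t_stat:.2f}, p = {p_value:.4f}")
```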

27 pages, 553 KiB  
Systematic Review
Integrating Artificial Intelligence, Internet of Things, and Sensor-Based Technologies: A Systematic Review of Methodologies in Autism Spectrum Disorder Detection
by Georgios Bouchouras and Konstantinos Kotis
Algorithms 2025, 18(1), 34; https://doi.org/10.3390/a18010034 - 9 Jan 2025
Cited by 2 | Viewed by 2866
Abstract
This paper presents a systematic review of the emerging applications of artificial intelligence (AI), Internet of Things (IoT), and sensor-based technologies in the diagnosis of autism spectrum disorder (ASD). The integration of these technologies has led to promising advances in identifying unique behavioral, physiological, and neuroanatomical markers associated with ASD. Through an examination of recent studies, we explore how technologies such as wearable sensors, eye-tracking systems, virtual reality environments, neuroimaging, and microbiome analysis contribute to a holistic approach to ASD diagnostics. The analysis reveals how these technologies facilitate non-invasive, real-time assessments across diverse settings, enhancing both diagnostic accuracy and accessibility. The findings underscore the transformative potential of AI, IoT, and sensor-based tools in providing personalized and continuous ASD detection, advocating for data-driven approaches that extend beyond traditional methodologies. Ultimately, this review emphasizes the role of technology in improving ASD diagnostic processes, paving the way for targeted and individualized assessments. Full article

16 pages, 2423 KiB  
Article
Enhancing Autism Detection Through Gaze Analysis Using Eye Tracking Sensors and Data Attribution with Distillation in Deep Neural Networks
by Federica Colonnese, Francesco Di Luzio, Antonello Rosato and Massimo Panella
Sensors 2024, 24(23), 7792; https://doi.org/10.3390/s24237792 - 5 Dec 2024
Viewed by 2005
Abstract
Autism Spectrum Disorder (ASD) is a neurodevelopmental condition characterized by differences in social communication and repetitive behaviors, often associated with atypical visual attention patterns. In this paper, the Gaze-Based Autism Classifier (GBAC) is proposed, which is a Deep Neural Network model that leverages both data distillation and data attribution techniques to enhance ASD classification accuracy and explainability. Using data sampled by eye tracking sensors, the model identifies unique gaze behaviors linked to ASD and applies an explainability technique called TracIn for data attribution by computing self-influence scores to filter out noisy or anomalous training samples. This refinement process significantly improves both accuracy and computational efficiency, achieving a test accuracy of 94.35% while using only 77% of the dataset, showing that the proposed GBAC outperforms the same model trained on the full dataset and random sample reductions, as well as the benchmarks. Additionally, the data attribution analysis provides insights into the most influential training examples, offering a deeper understanding of how gaze patterns correlate with ASD-specific characteristics. These results underscore the potential of integrating explainable artificial intelligence into neurodevelopmental disorder diagnostics, advancing clinical research by providing deeper insights into the visual attention patterns associated with ASD. Full article
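As a rough illustration of the data-attribution step, the sketch below computes a TracIn-style self-influence score for a single training sample at one checkpoint (the learning rate times the squared gradient norm of its loss); the toy model and data are placeholders, not the GBAC implementation, and in TracIn the scores are summed over several checkpoints.

```python
# Illustrative sketch (not the GBAC implementation): a TracIn-style self-influence
# score for one training sample at a single checkpoint. High scores flag noisy or
# atypical training samples that can be filtered out.
import torch
import torch.nn as nn

def self_influence(model, loss_fn, x, y, lr=0.01):
    """Self-influence at one checkpoint: lr * squared norm of the loss gradient."""
    model.zero_grad()
    loss = loss_fn(model(x), y)
    grads = torch.autograd.grad(loss, [p for p in model.parameters() if p.requires_grad])
    return lr * sum(g.pow(2).sum() for g in grads).item()

model  = nn.Sequential(nn.Linear(10, 16), nn.ReLU(), nn.Linear(16, 2))
sample = torch.randn(1, 10)          # one gaze-feature vector (toy)
label  = torch.tensor([1])
print(self_influence(model, nn.CrossEntropyLoss(), sample, label))
```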

18 pages, 4024 KiB  
Article
Kalman Filter-Based Fusion of LiDAR and Camera Data in Bird’s Eye View for Multi-Object Tracking in Autonomous Vehicles
by Loay Alfeqy, Hossam E. Hassan Abdelmunim, Shady A. Maged and Diaa Emad
Sensors 2024, 24(23), 7718; https://doi.org/10.3390/s24237718 - 3 Dec 2024
Cited by 1 | Viewed by 2932
Abstract
Accurate multi-object tracking (MOT) is essential for autonomous vehicles, enabling them to perceive and interact with dynamic environments effectively. Single-modality 3D MOT algorithms often face limitations due to sensor constraints, resulting in unreliable tracking. Recent multi-modal approaches have improved performance but rely heavily on complex, deep-learning-based fusion techniques. In this work, we present CLF-BEVSORT, a camera-LiDAR fusion model operating in the bird's eye view (BEV) space using the SORT tracking framework. The proposed method introduces a novel association strategy that incorporates structural similarity into the cost function, enabling effective data fusion between 2D camera detections and 3D LiDAR detections for robust track recovery during short occlusions by leveraging LiDAR depth. Evaluated on the KITTI dataset, CLF-BEVSORT achieves state-of-the-art performance with a HOTA score of 77.26% for the Car class, surpassing StrongFusionMOT and DeepFusionMOT by 2.13%, with high precision (85.13%) and recall (80.45%). For the Pedestrian class, it achieves a HOTA score of 46.03%, outperforming Be-Track and StrongFusionMOT by 6.16%. Additionally, CLF-BEVSORT reduces identity switches (IDSW) by over 45% for cars compared to the baselines AB3DMOT and BEVSORT, demonstrating robust, consistent tracking and setting a new benchmark for 3D MOT in autonomous driving. Full article
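The sketch below gives a hypothetical flavour of the association step: a track-detection cost matrix mixing (1 - IoU) in the BEV plane with a crude shape-similarity term is solved with the Hungarian algorithm. The box format, similarity term, and 0.7/0.3 weights are assumptions, not the CLF-BEVSORT cost function.

```python
# Illustrative sketch (not CLF-BEVSORT): build a track-detection cost matrix that
# mixes (1 - IoU) in the BEV plane with a shape-similarity term, then solve the
# assignment with the Hungarian algorithm.
import numpy as np
from scipy.optimize import linear_sum_assignment

def bev_iou(a, b):
    """Axis-aligned IoU of two BEV boxes given as (x1, y1, x2, y2)."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter + 1e-9)

def shape_similarity(a, b):
    """Crude structural term comparing box widths and heights (1 = identical shape)."""
    wa, ha = a[2] - a[0], a[3] - a[1]
    wb, hb = b[2] - b[0], b[3] - b[1]
    return min(wa, wb) / max(wa, wb) * min(ha, hb) / max(ha, hb)

tracks     = [(0.0, 0.0, 2.0, 4.0), (5.0, 5.0, 7.0, 9.0)]
detections = [(0.2, 0.1, 2.1, 4.2), (5.1, 5.2, 7.2, 9.1)]

cost = np.array([[0.7 * (1 - bev_iou(t, d)) + 0.3 * (1 - shape_similarity(t, d))
                  for d in detections] for t in tracks])
rows, cols = linear_sum_assignment(cost)
print(list(zip(rows, cols)))   # matched (track, detection) pairs -> [(0, 0), (1, 1)]
```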

13 pages, 1294 KiB  
Proceeding Paper
IoT-Enabled Intelligent Health Care Screen System for Long-Time Screen Users
by Subramanian Vijayalakshmi, Joseph Alwin and Jayabal Lekha
Eng. Proc. 2024, 82(1), 96; https://doi.org/10.3390/ecsa-11-20364 - 25 Nov 2024
Viewed by 353
Abstract
With the rapid rise in technological advancement, health can be tracked and monitored in multiple ways. Health tracking and monitoring make it possible to deliver precise interventions, enabling people to maintain healthier lifestyles by minimising the health issues associated with long screen time. Artificial Intelligence (AI) techniques such as Large Language Models (LLMs) enable intelligent assistants on mobile devices and other platforms. The proposed system uses IoT and LLMs to create a virtual personal assistant for long-time screen users, monitoring their health parameters with various sensors for the real-time tracking of seating posture, heartbeat, stress levels, and eye movements, giving necessary advice and ensuring that their vitals remain within safe parameters. The system combines AI and Natural Language Processing (NLP) to build a virtual assistant embedded into the screens of mobile devices, laptops, desktops, and other screen devices used by employees across various workspaces. The intelligent screen, integrating multiple sensors, tracks and monitors the users' vitals and other necessary health parameters and alerts them to take breaks, drink water, and refresh, ensuring that users stay healthy while working. The system also suggests exercises for the eyes, head, and other body parts. It is supported by user recognition to identify the current user and tailor advisory actions accordingly, and it adapts to ensure that users enjoy proper relaxation and focus, providing a flexible and personalised experience. By monitoring and improving the health of employees who work at screens for long periods, the intelligent screen system enhances productivity and concentration across organisations. Full article

17 pages, 5344 KiB  
Article
The Effects of Competition on Exercise Intensity and the User Experience of Exercise during Virtual Reality Bicycling for Young Adults
by John L. Palmieri and Judith E. Deutsch
Sensors 2024, 24(21), 6873; https://doi.org/10.3390/s24216873 - 26 Oct 2024
Viewed by 1927
Abstract
Background: Regular moderate–vigorous intensity exercise is recommended for adults as it can improve longevity and reduce health risks associated with a sedentary lifestyle. However, there are barriers to achieving intense exercise that may be addressed using virtual reality (VR) as a tool to promote exercise intensity and adherence, particularly through visual feedback and competition. The purpose of this work is to compare visual feedback and competition within fully immersive VR to enhance exercise intensity and user experience of exercise for young adults; and to describe and compare visual attention during each of the conditions. Methods: Young adults (21–34 years old) bicycled in three 5 min VR conditions (visual feedback, self-competition, and competition against others). Exercise intensity (cycling cadence and % of maximum heart rate) and visual attention (derived from a wearable eye tracking sensor) were measured continuously. User experience was measured by an intrinsic motivation questionnaire, perceived effort, and participant preference. A repeated-measures ANOVA with paired t-test post hoc tests was conducted to detect differences between conditions. Results: Participants exercised at a higher intensity and had higher intrinsic motivation in the two competitive conditions compared to visual feedback. Further, participants preferred the competitive conditions and only reached a vigorous exercise intensity during self-competition. Visual exploration was higher in visual feedback compared to self-competition. Conclusions: For young adults bicycling in VR, competition promoted higher exercise intensity and motivation compared to visual feedback. Full article
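A minimal sketch of the statistical comparison described above, running a repeated-measures ANOVA across the three conditions followed by Bonferroni-corrected paired tests; it assumes a recent version of the pingouin package, and the heart-rate values are made up, not the study's data.

```python
# Illustrative sketch (not the study's statistics script): repeated-measures ANOVA
# across three VR conditions with paired post hoc tests. Assumes pingouin is
# installed; the %HRmax values are made up.
import pandas as pd
import pingouin as pg

df = pd.DataFrame({
    "subject":   [1, 2, 3, 4, 5] * 3,
    "condition": ["feedback"] * 5 + ["self_comp"] * 5 + ["other_comp"] * 5,
    "pct_hrmax": [62, 65, 60, 68, 63,   74, 76, 71, 79, 73,   72, 75, 70, 77, 71],
})

anova = pg.rm_anova(data=df, dv="pct_hrmax", within="condition", subject="subject")
posthoc = pg.pairwise_tests(data=df, dv="pct_hrmax", within="condition",
                            subject="subject", padjust="bonf")
print(anova[["Source", "F", "p-unc"]])
print(posthoc[["A", "B", "T", "p-corr"]])
```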
