Search Results (130)

Search Parameters:
Keywords = eye-hand data

23 pages, 6848 KB  
Review
The Expanding Frontier: The Role of Artificial Intelligence in Pediatric Neuroradiology
by Alessia Guarnera, Antonio Napolitano, Flavia Liporace, Fabio Marconi, Maria Camilla Rossi-Espagnet, Carlo Gandolfo, Andrea Romano, Alessandro Bozzao and Daniela Longo
Children 2025, 12(9), 1127; https://doi.org/10.3390/children12091127 - 27 Aug 2025
Abstract
Artificial intelligence (AI) is reshaping the entire landscape of medicine, and particularly the field of radiology, which produces a significant amount of data in the form of images. Currently, AI implementation in radiology is continuously increasing, from automating image analysis to enhancing workflow management, and specifically, pediatric neuroradiology is emerging as an expanding frontier. Pediatric neuroradiology presents unique opportunities and challenges since neonates’ and small children’s brains are continuously developing, with age-specific changes in terms of anatomy, physiology, and disease presentation. By enhancing diagnostic accuracy, reducing reporting times, and enabling earlier intervention, AI has the potential to significantly impact clinical practice and patients’ quality of life and outcomes. For instance, AI reduces MRI and CT scanner time by employing advanced deep learning (DL) algorithms to accelerate image acquisition through compressed sensing and undersampling, and to enhance image reconstruction by denoising and super-resolving low-quality datasets, thereby producing diagnostic-quality images with significantly fewer data points and in a shorter timeframe. Furthermore, as healthcare systems become increasingly burdened by rising demands and limited radiology workforce capacity, AI offers a practical solution to support clinical decision-making, particularly in institutions where pediatric neuroradiology is limited. For example, the MELD (Multicenter Epilepsy Lesion Detection) algorithm is specifically designed to help radiologists find focal cortical dysplasias (FCDs), which are a common cause of drug-resistant epilepsy. It works by analyzing a patient’s MRI scan and comparing a wide range of features—such as cortical thickness and folding patterns—to a large database of scans from both healthy individuals and epilepsy patients.
By identifying subtle deviations from normal brain anatomy, the MELD graph algorithm can highlight potential lesions that are often missed by the human eye, which is a critical step in identifying patients who could benefit from life-changing epilepsy surgery. On the other hand, the integration of AI into pediatric neuroradiology faces technical and ethical challenges, such as data scarcity and ethical and legal restrictions on pediatric data sharing, that complicate the development of robust and generalizable AI models. Moreover, many radiologists remain sceptical of AI’s interpretability and reliability, and there are also important medico-legal questions around responsibility and liability when AI systems are involved in clinical decision-making. Future promising perspectives to overcome these concerns are represented by federated learning and collaborative research and AI development, which require technological innovation and multidisciplinary collaboration between neuroradiologists, data scientists, ethicists, and pediatricians. The paper aims to address: (1) current applications of AI in pediatric neuroradiology; (2) current challenges and ethical considerations related to AI implementation in pediatric neuroradiology; and (3) future opportunities in the clinical and educational pediatric neuroradiology field. AI in pediatric neuroradiology is not meant to replace neuroradiologists, but to amplify human intellect and extend our capacity to diagnose, prognosticate, and treat with unprecedented precision and speed. Full article
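The normative-comparison idea behind lesion detectors like MELD can be illustrated with a minimal sketch (this is not the MELD implementation; the region names, feature values, and z-score threshold below are all hypothetical): each cortical region's features are z-scored against a normative database, and regions with large deviations are flagged for review.

```python
# Illustrative sketch of normative z-scoring for lesion screening.
# Regions, features, and the threshold are hypothetical, not MELD's.
def zscore_outliers(patient, norm_mean, norm_std, threshold=2.5):
    """Return regions where any feature deviates from the normative
    database by more than `threshold` standard deviations."""
    flagged = []
    for region, feats in patient.items():
        for name, value in feats.items():
            z = (value - norm_mean[region][name]) / norm_std[region][name]
            if abs(z) > threshold:
                flagged.append(region)
                break
    return flagged

# Hypothetical normative statistics for two regions and two features
norm_mean = {"r1": {"thickness": 2.5, "sulcal_depth": 10.0},
             "r2": {"thickness": 2.4, "sulcal_depth": 9.0}}
norm_std  = {"r1": {"thickness": 0.2, "sulcal_depth": 1.0},
             "r2": {"thickness": 0.2, "sulcal_depth": 1.0}}
patient   = {"r1": {"thickness": 3.3, "sulcal_depth": 10.2},  # abnormally thick cortex
             "r2": {"thickness": 2.45, "sulcal_depth": 9.1}}

print(zscore_outliers(patient, norm_mean, norm_std))  # ['r1']
```

Real systems operate on thousands of surface vertices rather than a handful of regions, but the core step is the same per-feature comparison against healthy controls.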

31 pages, 34013 KB  
Article
Vision-Based 6D Pose Analytics Solution for High-Precision Industrial Robot Pick-and-Place Applications
by Balamurugan Balasubramanian and Kamil Cetin
Sensors 2025, 25(15), 4824; https://doi.org/10.3390/s25154824 - 6 Aug 2025
Abstract
High-precision 6D pose estimation for pick-and-place operations remains a critical problem for industrial robot arms in manufacturing. This study introduces an analytics-based solution for 6D pose estimation designed for a real-world industrial application: it enables the Staubli TX2-60L (manufactured by Stäubli International AG, Horgen, Switzerland) robot arm to pick up metal plates from various locations and place them into a precisely defined slot on a brake pad production line. The system uses a fixed eye-to-hand Intel RealSense D435 RGB-D camera (manufactured by Intel Corporation, Santa Clara, California, USA) to capture color and depth data. A robust software infrastructure developed in LabVIEW (ver.2019) integrated with the NI Vision (ver.2019) library processes the images through a series of steps, including particle filtering, equalization, and pattern matching, to determine the X-Y positions and Z-axis rotation of the object. The Z-position of the object is calculated from the camera’s intensity data, while the remaining X-Y rotation angles are determined using the angle-of-inclination analytics method. It is experimentally verified that the proposed analytical solution outperforms the hybrid-based method (YOLO-v8 combined with PnP/RANSAC algorithms). Experimental results across four distinct picking scenarios demonstrate the proposed solution’s superior accuracy, with position errors under 2 mm, orientation errors below 1°, and a perfect success rate in pick-and-place tasks. Full article
(This article belongs to the Section Sensors and Robotics)
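The depth-based part of such a pipeline can be sketched as standard pinhole back-projection: once pattern matching yields a pixel location and the depth camera supplies Z, the metric X-Y position follows from the camera intrinsics. This is a generic illustration, not the authors' LabVIEW/NI Vision code, and the intrinsics below are made-up values only loosely in the range of a D435 color stream.

```python
def pixel_to_camera_xyz(u, v, depth, fx, fy, cx, cy):
    """Back-project a pixel (u, v) with a measured depth into camera-frame
    XYZ using the pinhole model; fx, fy are focal lengths in pixels and
    (cx, cy) is the principal point."""
    X = (u - cx) * depth / fx
    Y = (v - cy) * depth / fy
    return X, Y, depth

# Hypothetical intrinsics and a detection 60 px right of the image center
X, Y, Z = pixel_to_camera_xyz(u=380, v=240, depth=0.5,
                              fx=600.0, fy=600.0, cx=320.0, cy=240.0)
print(X, Y, Z)  # 0.05 0.0 0.5  (metres)
```

The remaining rotational degrees of freedom would come from the pattern-match angle (Z-rotation) and, in the paper's method, an angle-of-inclination analysis for the X-Y rotations.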

15 pages, 2879 KB  
Article
Study on the Eye Movement Transfer Characteristics of Drivers Under Different Road Conditions
by Zhenxiang Hao, Jianping Hu, Xiaohui Sun, Jin Ran, Yuhang Zheng, Binhe Yang and Junyao Tang
Appl. Sci. 2025, 15(15), 8559; https://doi.org/10.3390/app15158559 - 1 Aug 2025
Abstract
Given the severe global traffic safety challenges—including threats to human lives and socioeconomic impacts—this study analyzes visual behavior to promote sustainable transportation, improve road safety, and reduce resource waste and pollution caused by accidents. Four typical road sections, namely, turning, straight ahead, uphill, and downhill, were selected, and the eye movement data of 23 drivers in different driving stages were collected by an aSee Glasses eye-tracking device to analyze the visual gaze characteristics of the drivers and their transfer patterns in each road section. Using Markov chain theory, the probability of staying at each gaze point and the transfer probability distribution between gaze points were investigated. The results of the study showed that drivers’ visual behaviors in different road sections differed significantly: drivers in the turning section had the largest percentage of fixation on the near front, with a fixation duration and frequency of 29.99% and 28.80%, respectively; the straight ahead section, on the other hand, mainly focused on the right side of the road, with 31.57% of fixation duration and 19.45% of frequency of fixation; on the uphill section, drivers’ fixation duration on the left and right roads was more balanced, with 24.36% of fixation duration on the left side of the road and 25.51% on the right side of the road; drivers on the downhill section looked more frequently at the distance ahead, with a total fixation frequency of 23.20%, while paying higher attention to the right side of the road environment, with a fixation duration of 27.09%.
In terms of visual fixation, the fixation shift in the turning road section was mainly concentrated between the near and distant parts of the road ahead and frequently turned to the left and right sides; the straight road section mainly showed a shift between the distant parts of the road ahead and the dashboard; the uphill road section was concentrated on the shift between the near parts of the road ahead and the two sides of the road, while the downhill road section mainly occurred between the distant parts of the road ahead and the rearview mirror. Although drivers’ fixations on the front of the road were most concentrated under the four road sections, with an overall fixation stability probability exceeding 67%, there were significant differences in fixation smoothness between different road sections. Through this study, this paper not only reveals the laws of drivers’ visual behavior under different driving environments but also provides theoretical support for behavior-based traffic safety improvement strategies. Full article
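The Markov-chain analysis described above amounts to estimating a transition matrix from a sequence of fixation-region labels. A minimal sketch (the region labels and the example sequence are invented, not the study's data):

```python
from collections import defaultdict

def transition_matrix(seq):
    """Estimate first-order Markov transition probabilities from a
    sequence of fixation-region labels: P[a][b] is the observed
    probability of moving from region a to region b."""
    counts = defaultdict(lambda: defaultdict(int))
    for a, b in zip(seq, seq[1:]):
        counts[a][b] += 1
    return {a: {b: n / sum(row.values()) for b, n in row.items()}
            for a, row in counts.items()}

# Hypothetical fixation-region sequence for one driver
gaze = ["near", "near", "far", "near", "left", "near"]
P = transition_matrix(gaze)
print(P["far"])  # {'near': 1.0}
```

The diagonal entries of such a matrix correspond to the "fixation stability" probabilities reported in the abstract (the chance of staying in the same region from one fixation to the next).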

18 pages, 566 KB  
Review
Skeletal Muscle Pathology in Autosomal Recessive Cerebellar Ataxias: Insights from Marinesco–Sjögren Syndrome
by Fabio Bellia, Luca Federici, Valentina Gatta, Giuseppe Calabrese and Michele Sallese
Int. J. Mol. Sci. 2025, 26(14), 6736; https://doi.org/10.3390/ijms26146736 - 14 Jul 2025
Abstract
Cerebellar ataxias are a group of disorders characterized by clumsy movements because of defective muscle control. In affected individuals, muscular impairment might have an impact on activities like walking, balance, hand coordination, speech, and feeding, as well as eye movements. The development of symptoms typically takes place during the span of adolescence, and it has the potential to cause distress for individuals in many areas of their lives, including professional and interpersonal relationships. Although skeletal muscle is understudied in ataxias, its examination may provide hitherto unexplored details in this family of disorders. Observing muscle involvement can assist in diagnosing conditions where genetic tests alone are inconclusive. Furthermore, it helps determine the stage of progression of a pathology that might otherwise be challenging to assess. In this study, we reviewed the main scientific literature reporting on skeletal muscle examination in autosomal recessive cerebellar ataxias (ARCAs), with a focus on the rare Marinesco–Sjögren syndrome (MSS). Our aim was to highlight the similarities in muscle alterations observed in ARCA patients while also considering data gathered from preclinical models. Analyzing the similarities among these disorders could enhance our understanding of the unidentified mechanisms underlying the phenotypic evolution of some less common conditions. Full article
(This article belongs to the Section Molecular Pathology, Diagnostics, and Therapeutics)

10 pages, 1863 KB  
Case Report
Corneal Perforation as a Possible Ocular Adverse Event Caused by Cabozantinib: A Clinical Case and Brief Review
by Carmelo Laface, Luca Scartozzi, Chiara Pisano, Paola Vanella, Antonio Greco, Agostino Salvatore Vaiano and Gianmauro Numico
J. Clin. Med. 2025, 14(12), 4052; https://doi.org/10.3390/jcm14124052 - 8 Jun 2025
Abstract
Background: Cabozantinib is a Vascular Endothelial Growth Factor Receptor Tyrosine Kinase Inhibitor (VEGFR-TKI). These drugs are employed as therapy for several malignancies, against which Cabozantinib in particular has demonstrated its efficacy. On the other hand, Cabozantinib and other VEGFR-TKIs can be responsible for various adverse events (AEs), in particular hepatic and dermatological AEs. Methods: To date, limited data are available in the literature regarding ocular AEs due to therapy with these drugs. In this regard, one case of corneal perforation during treatment with a VEGFR-TKI, Regorafenib, has been reported, while there are no data about Cabozantinib. In this paper, we present another clinical case of corneal perforation in a patient affected by advanced RCC and treated with Cabozantinib as a second-line therapy. The patient started Cabozantinib at a dosage of 60 mg/day, although it was necessary to apply some dose reductions because of grade 2 AEs (according to CTCAE v6.0), such as asthenia, diarrhea, dysgeusia, and loss of appetite. Results: After approximately 15 months of treatment, the patient began to experience pain and vision loss in the right eye. A diagnosis of corneal perforation was made, followed by medical and surgical treatment. As regards the etiology of this pathology, all other possible causes were excluded, including a history of ocular disease, contact trauma, exposure to damaging agents (e.g., chemical agents and prolonged use of drugs such as topical NSAIDs), infections, or dry eye. Therefore, we hypothesized a correlation with Cabozantinib’s mechanisms of action and paused its administration. Conclusions: Cabozantinib may alter the ocular environment due to a lack of or imbalance in growth factors in the tear film, with a reduction in corneal epithelium proliferation. This condition might cause dry eye and a delay in corneal healing.
Therefore, particular importance should be placed on ophthalmologic surveillance during treatment with these drugs in patients who develop ocular symptoms. Further in vitro and in vivo studies are necessary to deepen the knowledge about VEGFR-TKI-mediated ocular AEs. Full article
(This article belongs to the Section Ophthalmology)

22 pages, 4481 KB  
Article
Hybrid Deep Learning Framework for Eye-in-Hand Visual Control Systems
by Adrian-Paul Botezatu, Andrei-Iulian Iancu and Adrian Burlacu
Robotics 2025, 14(5), 66; https://doi.org/10.3390/robotics14050066 - 19 May 2025
Abstract
This work proposes a hybrid deep learning-based framework for visual feedback control in an eye-in-hand robotic system. The framework uses an early fusion approach in which real and synthetic images define the training data. The first layer of a ResNet-18 backbone is augmented to fuse interest-point maps with RGB channels, enabling the network to capture scene geometry better. A manipulator robot with an eye-in-hand configuration provides a reference image, while subsequent poses and images are generated synthetically, removing the need for extensive real data collection. The experimental results reveal that this enriched input representation significantly improves convergence accuracy and velocity smoothness compared to a baseline that processes real images alone. Specifically, including feature point maps allows the network to discriminate crucial elements in the scene, resulting in more precise velocity commands and stable end-effector trajectories. Thus, integrating additional, synthetically generated map data into convolutional architectures can enhance the robustness and performance of the visual servoing system, particularly when real-world data gathering is challenging. Unlike existing visual servoing methods, our early fusion strategy integrates feature maps directly into the network’s initial convolutional layer, allowing the model to learn critical geometric details from the very first stage of training. This approach yields superior velocity predictions and smoother servoing compared to conventional frameworks. Full article
(This article belongs to the Special Issue Visual Servoing-Based Robotic Manipulation)
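The early-fusion input described above, interest-point maps stacked with the RGB channels before the first convolution, can be sketched as a simple channel concatenation (a generic NumPy illustration; the actual framework augments the first layer of a ResNet-18 to accept the extra channel):

```python
import numpy as np

def early_fuse(rgb, feature_map):
    """Stack a single-channel interest-point map onto an RGB image,
    producing an (H, W, 4) tensor for a network whose first conv layer
    accepts four input channels instead of three."""
    assert rgb.shape[:2] == feature_map.shape, "spatial sizes must match"
    return np.concatenate([rgb, feature_map[..., None]], axis=-1)

# Hypothetical 224x224 input: a blank image and a map marking interest points
rgb = np.zeros((224, 224, 3), dtype=np.float32)
fmap = np.zeros((224, 224), dtype=np.float32)
fmap[100, 100] = 1.0  # one detected interest point
x = early_fuse(rgb, fmap)
print(x.shape)  # (224, 224, 4)
```

In a deep learning framework, the corresponding change is widening the first convolution's input channels from 3 to 4 and training the extra kernel slice from scratch.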

23 pages, 9051 KB  
Article
Predicting User Attention States from Multimodal Eye–Hand Data in VR Selection Tasks
by Xiaoxi Du, Jinchun Wu, Xinyi Tang, Xiaolei Lv, Lesong Jia and Chengqi Xue
Electronics 2025, 14(10), 2052; https://doi.org/10.3390/electronics14102052 - 19 May 2025
Abstract
Virtual reality (VR) devices that integrate eye-tracking and hand-tracking technologies can capture users’ natural eye–hand data in real time within a three-dimensional virtual space, providing new opportunities to explore users’ attentional states during natural 3D interactions. This study aims to develop an attention-state prediction model based on the multimodal fusion of eye and hand features, which distinguishes whether users primarily employ goal-directed attention or stimulus-driven attention during the execution of their intentions. In our experiment, we collected three types of data—eye movements, hand movements, and pupil changes—and instructed participants to complete a virtual button selection task. This setup allowed us to establish a binary ground truth label for attentional state during the execution of selection intentions for model training. To investigate the impact of different time windows on prediction performance, we designed eight time windows ranging from 0 to 4.0 s (in increments of 0.5 s) and compared the performance of eleven algorithms, including logistic regression, support vector machine, naïve Bayes, k-nearest neighbors, decision tree, linear discriminant analysis, random forest, AdaBoost, gradient boosting, XGBoost, and neural networks. The results indicate that, within the 3 s window, the gradient boosting model performed best, achieving a weighted F1-score of 0.8835 and an Accuracy of 0.8860. Furthermore, the analysis of feature importance demonstrated that the multimodal eye–hand features play a critical role in the prediction. Overall, this study introduces an innovative approach that integrates three types of multimodal eye–hand behavioral and physiological data within a virtual reality interaction context. This framework provides both theoretical and methodological support for predicting users’ attentional states within short time windows and contributes practical guidance for the design of attention-adaptive 3D interfaces. 
In addition, the proposed multimodal eye–hand data fusion framework also demonstrates potential applicability in other three-dimensional interaction domains, such as game experience optimization, rehabilitation training, and driver attention monitoring. Full article
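The time-window comparison described above rests on a feature-extraction step: for each window length, eye and hand samples in the trailing window are aggregated into one feature vector per trial. A minimal sketch of that step (the field names, sampling rate, and values are hypothetical, not the study's feature set):

```python
def window_features(samples, t_end, window_s):
    """Aggregate eye-hand samples falling in the trailing `window_s`
    seconds before `t_end` into mean-valued features; returns None if
    the window contains no samples."""
    sel = [s for s in samples if t_end - window_s <= s["t"] <= t_end]
    if not sel:
        return None
    mean = lambda key: sum(s[key] for s in sel) / len(sel)
    return {"gaze_speed": mean("gaze_speed"),
            "hand_speed": mean("hand_speed"),
            "pupil": mean("pupil")}

# Hypothetical 10 Hz recording covering a 4 s trial
samples = [{"t": 0.1 * i, "gaze_speed": 1.0, "hand_speed": 0.5, "pupil": 3.0}
           for i in range(40)]
feats = window_features(samples, t_end=4.0, window_s=3.0)
```

Vectors like `feats` would then be fed to the compared classifiers (logistic regression, gradient boosting, and so on) with the binary attention label as the target.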

18 pages, 8682 KB  
Article
On the Validity and Benefit of Manual and Automated Drift Correction in Reading Tasks
by Naser Al Madi
J. Eye Mov. Res. 2025, 18(3), 17; https://doi.org/10.3390/jemr18030017 - 9 May 2025
Abstract
Drift represents a common distortion that affects the position of fixations in eye tracking data. While manual correction is considered very accurate, it is also subjective and time-consuming. On the other hand, automated correction is fast, objective, and considered less accurate. An objective comparison of the accuracy of manual and automated correction has not been conducted before, and the extent of subjectivity in manual correction is not entirely quantified. In this paper, we compare the accuracy of manual and automated correction of eye tracking data in reading tasks through a novel approach that relies on synthetic data with known ground truth. Moreover, we quantify the subjectivity in manual human correction with real eye tracking data. Our results show that expert human correction is significantly more accurate than automated algorithms, yet novice human correctors are on par with the best automated algorithms. In addition, we found that human correctors show excellent agreement in their correction, challenging the notion that manual correction is “highly subjective”. Our findings provide unique insights, quantifying the benefits of manual and automated correction. Full article
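A simple example of the kind of automated correction being evaluated is the nearest-line ("attach") heuristic, which snaps each fixation's vertical coordinate to the closest line of text. This is a simplified sketch of one common baseline, not the specific algorithms compared in the paper; the coordinates below are invented.

```python
def attach_to_lines(fixations, line_ys):
    """Correct vertical drift by snapping each fixation (x, y) to the
    nearest text-line y-coordinate, leaving x untouched."""
    return [(x, min(line_ys, key=lambda ly: abs(ly - y)))
            for x, y in fixations]

# Hypothetical line positions (pixels) and drifted fixations
lines = [100, 150, 200]
fixes = [(40, 112), (80, 149), (120, 181)]
print(attach_to_lines(fixes, lines))  # [(40, 100), (80, 150), (120, 200)]
```

Heuristics like this are fast and objective but can mis-assign fixations when drift exceeds half the line spacing, which is exactly where the synthetic ground-truth comparison in the paper becomes useful.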

13 pages, 1510 KB  
Article
Binocular Advantage in Established Eye–Hand Coordination Tests in Young and Healthy Adults
by Michael Mendes Wefelnberg, Felix Bargstedt, Marcel Lippert and Freerk T. Baumann
J. Eye Mov. Res. 2025, 18(3), 14; https://doi.org/10.3390/jemr18030014 - 7 May 2025
Abstract
Background: Eye–hand coordination (EHC) plays a critical role in daily activities and is affected by monocular vision impairment. This study evaluates existing EHC tests to detect performance decline under monocular conditions, supports the assessment and monitoring of vision rehabilitation, and quantifies the binocular advantage of each test. Methods: A total of 70 healthy sports students (aged 19–30 years) participated in four EHC tests: the Purdue Pegboard Test (PPT), Finger–Nose Test (FNT), Alternate Hand Wall Toss Test (AHWTT), and Loop-Wire Test (LWT). Each participant completed the tests under both binocular and monocular conditions in a randomized order, with assessments conducted by two independent raters. Performance differences, binocular advantage, effect sizes, and interrater reliability were analyzed. Results: Data from 66 participants were included in the final analysis. Significant performance differences between binocular and monocular conditions were observed for the LWT (p < 0.001), AHWTT (p < 0.001), and PPT (p < 0.05), with a clear binocular advantage and large effect sizes (SMD range: 0.583–1.660) for the AHWTT and LWT. Female participants performed better in fine motor tasks, while males demonstrated superior performance in gross motor tasks. Binocular performance averages aligned with published reference values. Conclusions: The findings support the inclusion of the LWT and AHWTT in clinical protocols to assess and assist individuals with monocular vision impairment, particularly following sudden uniocular vision loss. Future research should extend these findings to different age groups and clinically relevant populations. Full article
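The SMD effect sizes reported above are standardized mean differences; with a pooled standard deviation this is Cohen's d. A plain implementation (the sample data below are made up, not the study's measurements):

```python
import math

def cohens_d(a, b):
    """Standardized mean difference between two samples using the
    pooled standard deviation (Cohen's d)."""
    na, nb = len(a), len(b)
    ma, mb = sum(a) / na, sum(b) / nb
    va = sum((x - ma) ** 2 for x in a) / (na - 1)
    vb = sum((x - mb) ** 2 for x in b) / (nb - 1)
    pooled_sd = math.sqrt(((na - 1) * va + (nb - 1) * vb) / (na + nb - 2))
    return (ma - mb) / pooled_sd

# Hypothetical binocular vs. monocular scores on some EHC test
binocular = [10, 12, 14]
monocular = [7, 9, 11]
print(cohens_d(binocular, monocular))  # 1.5
```

By the usual convention, |d| around 0.8 or higher counts as a large effect, which is how the reported SMD range of 0.583 to 1.660 supports a "clear binocular advantage" for the AHWTT and LWT.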

13 pages, 554 KB  
Article
Evaluating the Impact of a Laboratory-Based Program on Children’s Coordination Skills Using the MABC-2
by Sara Aliberti, Tiziana D’Isanto and Francesca D’Elia
Educ. Sci. 2025, 15(5), 527; https://doi.org/10.3390/educsci15050527 - 24 Apr 2025
Abstract
The aim of this study was to verify the effects of laboratory learning on children’s fundamental movement skills (FMS) through an intervention designed and implemented by specially trained generalist teachers. A total of 114 children attending the 1st and 2nd grades of primary school (6.7 ± 0.8 years old) and 28 children attending preschool (4.1 ± 0.9 years old) in Naples (Italy) participated in the study. To assess FMS, the Movement ABC-2 (MABC-2) was administered. A two-way ANOVA for repeated measures was used to compare data. The laboratory was effective in improving coordination in primary school children, with a significant reduction in medium/severe movement difficulties from 23.7% to 12.4%. The results showed significant changes in the execution time of several MABC-2 tests, indicating an improvement in FMS, particularly hand-eye coordination and dynamic balance. However, the intervention was less effective in preschool children, with a limited improvement of 2.9%, highlighting that the intervention only had an impact on some specific skills. Targeted interventions can be effective in improving FMS, providing a basis for educational programs that respond to the movement needs of students. Full article

32 pages, 13506 KB  
Article
VR Co-Lab: A Virtual Reality Platform for Human–Robot Disassembly Training and Synthetic Data Generation
by Yashwanth Maddipatla, Sibo Tian, Xiao Liang, Minghui Zheng and Beiwen Li
Machines 2025, 13(3), 239; https://doi.org/10.3390/machines13030239 - 17 Mar 2025
Cited by 1
Abstract
This research introduces a virtual reality (VR) training system for improving human–robot collaboration (HRC) in industrial disassembly tasks, particularly for e-waste recycling. Conventional training approaches frequently fail to provide sufficient adaptability, immediate feedback, or scalable solutions for complex industrial workflows. The implementation leverages Quest Pro’s body-tracking capabilities to enable ergonomic, immersive interactions with planned eye-tracking integration for improved interactivity and accuracy. The Niryo One robot aids users in hands-on disassembly while generating synthetic data to refine robot motion planning models. A Robot Operating System (ROS) bridge enables the seamless simulation and control of various robotic platforms using Unified Robotics Description Format (URDF) files, bridging virtual and physical training environments. A Long Short-Term Memory (LSTM) model predicts user interactions and robotic motions, optimizing trajectory planning and minimizing errors. Monte Carlo dropout-based uncertainty estimation enhances prediction reliability, ensuring adaptability to dynamic user behavior. Initial technical validation demonstrates the platform’s potential, with preliminary testing showing promising results in task execution efficiency and human–robot motion alignment, though comprehensive user studies remain for future work. Limitations include the lack of multi-user scenarios, potential tracking inaccuracies, and the need for further real-world validation. This system establishes a sandbox training framework for HRC in disassembly, leveraging VR and AI-driven feedback to improve skill acquisition, task efficiency, and training scalability across industrial applications. Full article
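The Monte Carlo dropout uncertainty estimation mentioned above keeps dropout active at inference time and averages many stochastic forward passes; the spread of the predictions serves as an uncertainty estimate. A toy sketch (the model below is a stand-in with one randomly dropped unit, not the paper's LSTM):

```python
import random

def mc_dropout_predict(model, x, n_samples=200, p_drop=0.2):
    """Monte Carlo dropout: run `n_samples` stochastic forward passes
    with dropout enabled and return the mean prediction and its
    variance as an uncertainty estimate."""
    preds = [model(x, p_drop) for _ in range(n_samples)]
    mean = sum(preds) / n_samples
    var = sum((p - mean) ** 2 for p in preds) / n_samples
    return mean, var

def toy_model(x, p_drop):
    # Stand-in for a network forward pass: one hidden unit is randomly
    # dropped with probability p_drop, changing the output.
    keep = 0.0 if random.random() < p_drop else 1.0
    return x * (0.5 + 0.5 * keep)

random.seed(0)
mean, var = mc_dropout_predict(toy_model, 2.0)
```

In the trajectory-prediction setting, a high variance across passes flags motion predictions the system should treat cautiously, which is how the technique "ensures adaptability to dynamic user behavior."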

23 pages, 10794 KB  
Article
Hand–Eye Separation-Based First-Frame Positioning and Follower Tracking Method for Perforating Robotic Arm
by Handuo Zhang, Jun Guo, Chunyan Xu and Bin Zhang
Appl. Sci. 2025, 15(5), 2769; https://doi.org/10.3390/app15052769 - 4 Mar 2025
Abstract
In subway tunnel construction, current hand–eye integrated drilling robots use a camera mounted on the drilling arm for image acquisition. However, dust interference and long-distance operation cause a decline in image quality, affecting the stability and accuracy of the visual recognition system. Additionally, the computational complexity of high-precision detection models limits deployment on resource-constrained edge devices, such as industrial controllers. To address these challenges, this paper proposes a dual-arm tunnel drilling robot system with hand–eye separation, utilizing the first-frame localization and follower tracking method. The vision arm (“eye”) provides real-time position data to the drilling arm (“hand”), ensuring accurate and efficient operation. The study employs an RFBNet model for initial frame localization, replacing the original VGG16 backbone with ShuffleNet V2. This reduces model parameters by 30% (135.5 MB vs. 146.3 MB) through channel splitting and depthwise separable convolutions, reducing computational complexity. Additionally, the GIoU loss function is introduced to replace the traditional IoU, further optimizing bounding box regression through the calculation of the minimum enclosing box. This resolves the gradient vanishing problem in traditional IoU and improves average precision (AP) by 3.3% (from 0.91 to 0.94). For continuous tracking, a SiamRPN-based algorithm combined with Kalman filtering and PID control ensures robustness against occlusions and nonlinear disturbances, increasing the success rate by 1.6% (0.639 vs. 0.629). Experimental results show that this approach significantly improves tracking accuracy and operational stability, achieving 31 FPS inference speed on edge devices and providing a deployable solution for tunnel construction’s safety and efficiency needs. Full article
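The GIoU term mentioned above extends IoU with a minimum-enclosing-box penalty, so the score (and hence the gradient of the loss) stays informative even when the predicted and target boxes do not overlap, which plain IoU cannot do. A plain implementation of the GIoU score for axis-aligned boxes:

```python
def giou(box_a, box_b):
    """Generalized IoU for axis-aligned boxes given as (x1, y1, x2, y2).
    Returns a value in [-1, 1]: 1 for identical boxes, negative when the
    boxes are far apart relative to their enclosing box."""
    ax1, ay1, ax2, ay2 = box_a
    bx1, by1, bx2, by2 = box_b
    iw = max(0.0, min(ax2, bx2) - max(ax1, bx1))
    ih = max(0.0, min(ay2, by2) - max(ay1, by1))
    inter = iw * ih
    union = (ax2 - ax1) * (ay2 - ay1) + (bx2 - bx1) * (by2 - by1) - inter
    iou = inter / union
    # Smallest box enclosing both: the GIoU penalty term
    cw = max(ax2, bx2) - min(ax1, bx1)
    ch = max(ay2, by2) - min(ay1, by1)
    c = cw * ch
    return iou - (c - union) / c

print(giou((0, 0, 1, 1), (0, 0, 1, 1)))  # 1.0
print(giou((0, 0, 1, 1), (2, 0, 3, 1)))  # -0.333... (disjoint boxes)
```

The training loss is typically `1 - giou(pred, target)`, which is what replaces the IoU loss in the paper's bounding box regression.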

10 pages, 2588 KB  
Proceeding Paper
Combining Interactive Technology and Visual Cognition—A Case Study on Preventing Dementia in Older Adults
by Chung-Shun Feng and Chao-Ming Wang
Eng. Proc. 2025, 89(1), 16; https://doi.org/10.3390/engproc2025089016 - 25 Feb 2025
Viewed by 738
Abstract
According to the World Health Organization, the global population is aging, with cognitive and memory functions declining from the age of 40–50. Individuals aged 65 and older are particularly prone to dementia. Therefore, we developed an interactive system for visual cognitive training to [...] Read more.
According to the World Health Organization, the global population is aging, with cognitive and memory functions declining from the age of 40–50. Individuals aged 65 and older are particularly prone to dementia. Therefore, we developed an interactive system for visual cognitive training to prevent dementia and delay the onset of memory loss. The system comprises three “three-dimensional objects” with printed 2D barcodes and near-field communication (NFC) tags, together with operating software that processes text, images, and multimedia content. Electroencephalography (EEG) data from a brainwave sensor were used to interpret brain signals. The system operates through interactive games combined with real-time feedback from EEG data to reduce the likelihood of dementia. The system provides feedback based on textual, visual, and multimedia information and offers a new form of entertainment. Thirty participants were invited to complete a pre-test questionnaire survey. Different tasks involving the three-dimensional objects were assigned to randomly selected participants. Sensing technologies such as quick-response (QR) codes and NFC were used to display information on smartphones. Visual content included text–image narratives and media playback. EEG was used to measure visual recognition and perception responses. The system was evaluated using the system usability scale (SUS). Finally, the data obtained from participants using the system were analyzed. The system improved hand–eye coordination and brain memory through interactive games. After participants received visual information, brain function was stimulated through focused reading, which may help prevent dementia. This system could be introduced into the healthcare industry to accumulate long-term cognitive function data and personal health data to prevent the occurrence of dementia. Full article
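The system usability scale (SUS) used in the evaluation above has a fixed scoring rule: ten 1–5 Likert items, where odd items contribute (response − 1), even items contribute (5 − response), and the sum is scaled by 2.5 to a 0–100 score. A small illustrative implementation (the study's participant data are not reproduced here):

```python
def sus_score(responses):
    """SUS score from ten Likert responses (each 1-5).

    Odd-numbered items are positively worded: contribution = response - 1.
    Even-numbered items are negatively worded: contribution = 5 - response.
    The total (0-40) is multiplied by 2.5 to give a 0-100 score.
    """
    if len(responses) != 10:
        raise ValueError("SUS requires exactly 10 responses")
    total = 0
    for item, r in enumerate(responses, start=1):
        total += (r - 1) if item % 2 == 1 else (5 - r)
    return total * 2.5
```

For example, answering 5 on every positive item and 1 on every negative item yields the maximum score of 100, while uniform neutral answers (all 3s) yield 50.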
14 pages, 721 KB  
Article
Determinants of Safe Pesticide Handling and Application Among Rural Farmers
by Olamide Stephanie Oshingbade, Haruna Musa Moda, Shade John Akinsete, Mumuni Adejumo and Norr Hassan
Int. J. Environ. Res. Public Health 2025, 22(2), 211; https://doi.org/10.3390/ijerph22020211 - 2 Feb 2025
Viewed by 1420
Abstract
The study investigated the determinants of safe pesticide handling and application among farmers in rural communities of Oyo State, southwestern Nigeria. A cross-sectional design utilizing 2-stage cluster sampling techniques was used to select Ido and Ibarapa central Local Government Areas and to interview [...] Read more.
The study investigated the determinants of safe pesticide handling and application among farmers in rural communities of Oyo State, southwestern Nigeria. A cross-sectional design utilizing 2-stage cluster sampling techniques was used to select Ido and Ibarapa central Local Government Areas and to interview 383 farmers via a structured questionnaire. Data were analyzed using descriptive statistics and logistic regression at p = 0.05. Results showed that 41.8% of the farmers had been working with pesticides on farms for at least 5 years, 33.0% had attended training on pesticide application, 73.5% had good safety and health knowledge, and 72.3% had safe pesticide handling and application practices. About half (50.2%) stated that they wear coveralls, gloves, and masks to protect their body, face, and hands when applying pesticides; 9.8% use empty pesticide containers for other purposes in the house/farm; while 11.5% blow the nozzle with their mouth to unclog it if it becomes blocked. The three major health symptoms reported by the participants were skin irritation (65.0%), itchy eyes (51.3%), and excessive sweating (32.5%). Having attended training on pesticide application and use increased the odds of safe pesticide handling and application (OR = 2.821; CI = 1.513–5.261). Farmers with good knowledge (OR = 5.494; CI = 3.385–8.919) were more likely to practice safe pesticide handling and application than those with poor knowledge about pesticide use. It is essential to develop and deliver mandatory, comprehensive training programs for farmers on the health and environmental impacts of pesticides; on safe handling, application, and disposal of pesticides using proper waste-management techniques; and on recognizing early symptoms and seeking medical assistance. Strengthening policy to regulate pesticide use and limit farmers’ access to banned products is also urgently needed. Full article
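Odds ratios like those reported above (e.g. OR = 2.821 for training) come from logistic regression; for a single binary predictor they can equivalently be computed directly from the 2×2 table, with a Wald confidence interval on the log-odds scale. A hedged sketch with illustrative counts only — not the study's data:

```python
import math

def odds_ratio_ci(a, b, c, d, z=1.96):
    """Odds ratio and Wald 95% CI from a 2x2 table.

    a / b : outcome present / absent among the exposed group
    c / d : outcome present / absent among the unexposed group
    The CI is computed on the log scale, where the standard error is
    sqrt(1/a + 1/b + 1/c + 1/d), then exponentiated back.
    """
    or_ = (a * d) / (b * c)
    se_log = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)
    lower = math.exp(math.log(or_) - z * se_log)
    upper = math.exp(math.log(or_) + z * se_log)
    return or_, lower, upper
```

An interval excluding 1 (as in both reported ORs) indicates a statistically significant association at the chosen level.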
10 pages, 1003 KB  
Article
Eyelid Contact Dermatitis: 25-Year Single-Center Retrospective Study
by Giovanni Rubegni, Tommaso Padula, Laura Calabrese, Martina D’Onghia, Linda Tognetti, Elisa Cinotti, Laura Lazzeri, Gabriele Ermini, Alessandra Cartocci and Gian Marco Tosi
J. Clin. Med. 2025, 14(3), 823; https://doi.org/10.3390/jcm14030823 - 27 Jan 2025
Cited by 2 | Viewed by 2131
Abstract
Background/Objectives: Eyelid dermatitis is an inflammatory disease affecting the palpebral skin characterized by itching, edema, and scaling of the periorbital area. This entity can be a manifestation of various underlying dermatological diseases, but allergic contact dermatitis (ACD) is the predominant etiology of [...] Read more.
Background/Objectives: Eyelid dermatitis is an inflammatory disease affecting the palpebral skin characterized by itching, edema, and scaling of the periorbital area. This entity can be a manifestation of various underlying dermatological diseases, but allergic contact dermatitis (ACD) is the predominant etiology of eyelid dermatitis among patients, being diagnosed in 43.4% of cases. The thin and highly permeable nature of eyelid skin increases its susceptibility to allergens, making it a distinct clinical entity. This study aimed to identify the primary haptens associated with eyelid ACD and compare these findings with the allergens implicated in non-eyelid ACD over a 25-year period in a large cohort of patients. Methods: We conducted a monocentric, retrospective study on a dataset of 7955 patients patch-tested for ACD at the Outpatient Allergy Dermatology Clinic of the Azienda Ospedaliera Universitaria Senese (AOUS) from 1997 to 2021. Eyelid ACD cases were identified based on clinical features and positive patch test results. Data on demographics, occupation, and personal history of atopy were collected. The statistical analyses assessed the associations between allergens and eyelid ACD. The trends in the sensitization rates for the most prevalent allergens were also evaluated. Results: Eyelid ACD was identified in 4.6% of the study population, predominantly affecting women (88.6%). Patients with eyelid ACD were more likely to exhibit single-hapten positivity (54.6%) and an atopic phenotype (52.3%) compared to non-eyelid ACD cases. Nickel sulfate (54%), cobalt chloride (13.4%), and thimerosal (12.6%) were the most common allergens associated with eyelid ACD. While thimerosal sensitization decreased significantly following its removal from topical products, nickel sensitization increased, likely due to exposure from electronic devices and hand–eye contact. 
Conclusions: The haptens identified in eyelid ACD largely overlap with those found in other body regions, including metals, fragrances, and preservatives. However, the unique characteristics of eyelid skin and hand–eye contact patterns play a significant role in sensitization. This study highlights the need for further investigation into the pathophysiology of eyelid allergic contact dermatitis, with particular emphasis on elucidating the mechanisms of hapten sensitization. Such insights could contribute to the development of effective strategies aimed at reducing allergen exposure. Full article
(This article belongs to the Section Dermatology)
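Associations such as those assessed between individual haptens and eyelid ACD are typically tested with a Pearson chi-square statistic on a 2×2 contingency table. A minimal sketch using the standard 2×2 shortcut formula, with made-up counts — not the authors' analysis code:

```python
def chi_square_2x2(a, b, c, d):
    """Pearson chi-square statistic (no continuity correction) for a
    2x2 table [[a, b], [c, d]], via the closed-form shortcut:
    n * (ad - bc)^2 / ((a+b)(c+d)(a+c)(b+d)).
    """
    n = a + b + c + d
    numerator = n * (a * d - b * c) ** 2
    denominator = (a + b) * (c + d) * (a + c) * (b + d)
    return numerator / denominator
```

The statistic is compared against the chi-square distribution with 1 degree of freedom; for small expected cell counts, Fisher's exact test is the usual fallback.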