Search Results (1,553)

Search Parameters:
Keywords = human–machine interactions

23 pages, 85184 KiB  
Article
MB-MSTFNet: A Multi-Band Spatio-Temporal Attention Network for EEG Sensor-Based Emotion Recognition
by Cheng Fang, Sitong Liu and Bing Gao
Sensors 2025, 25(15), 4819; https://doi.org/10.3390/s25154819 - 5 Aug 2025
Abstract
Emotion analysis based on electroencephalogram (EEG) sensors is pivotal for human–machine interaction yet faces key challenges in spatio-temporal feature fusion and cross-band and brain-region integration from multi-channel sensor-derived signals. This paper proposes MB-MSTFNet, a novel framework for EEG emotion recognition. The model constructs a 3D tensor to encode band–space–time correlations of sensor data, explicitly modeling frequency-domain dynamics and spatial distributions of EEG sensors across brain regions. A multi-scale CNN-Inception module extracts hierarchical spatial features via diverse convolutional kernels and pooling operations, capturing localized sensor activations and global brain network interactions. Bi-directional GRUs (BiGRUs) model temporal dependencies in sensor time-series, adept at capturing long-range dynamic patterns. Multi-head self-attention highlights critical time windows and brain regions by assigning adaptive weights to relevant sensor channels, suppressing noise from non-contributory electrodes. Experiments on the DEAP dataset, containing multi-channel EEG sensor recordings, show that MB-MSTFNet achieves 96.80 ± 0.92% valence accuracy, 98.02 ± 0.76% arousal accuracy for binary classification tasks, and 92.85 ± 1.45% accuracy for four-class classification. Ablation studies validate that feature fusion, bidirectional temporal modeling, and multi-scale mechanisms significantly enhance performance by improving feature complementarity. This sensor-driven framework advances affective computing by integrating spatio-temporal dynamics and multi-band interactions of EEG sensor signals, enabling efficient real-time emotion recognition. Full article
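The band–space–time tensor this abstract describes can be sketched with standard signal-processing tools. This is an illustrative reconstruction, not the authors' code: the sampling rate, band edges, and synthetic EEG below are all assumptions.

```python
import numpy as np
from scipy.signal import butter, filtfilt

def band_space_time_tensor(eeg, fs, bands):
    """Decompose multi-channel EEG (channels x samples) into a
    (band x channel x time) tensor via zero-phase band-pass filtering."""
    tensor = np.empty((len(bands),) + eeg.shape)
    for i, (lo, hi) in enumerate(bands):
        b, a = butter(4, [lo / (fs / 2), hi / (fs / 2)], btype="band")
        tensor[i] = filtfilt(b, a, eeg, axis=-1)
    return tensor

rng = np.random.default_rng(0)
eeg = rng.standard_normal((32, 512))            # 32 channels, 4 s at 128 Hz (synthetic)
bands = [(4, 8), (8, 13), (13, 30), (30, 45)]   # theta, alpha, beta, gamma (Hz)
X = band_space_time_tensor(eeg, fs=128, bands=bands)
print(X.shape)  # one spatio-temporal slice per frequency band
```

A model like MB-MSTFNet would consume such a tensor downstream; only the tensor construction is shown here.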
(This article belongs to the Section Intelligent Sensors)

16 pages, 5104 KiB  
Article
Integrating OpenPose for Proactive Human–Robot Interaction Through Upper-Body Pose Recognition
by Shih-Huan Tseng, Jhih-Ciang Chiang, Cheng-En Shiue and Hsiu-Ping Yueh
Electronics 2025, 14(15), 3112; https://doi.org/10.3390/electronics14153112 - 5 Aug 2025
Abstract
This paper introduces a novel system that utilizes OpenPose for skeleton estimation to enable a tabletop robot to interact with humans proactively. By accurately recognizing upper-body poses based on the skeleton information, the robot autonomously approaches individuals and initiates conversations. The contributions of this paper can be summarized into three main features. Firstly, we conducted a comprehensive data collection process, capturing five different table-front poses: looking down, looking at the screen, looking at the robot, resting the head on hands, and stretching both hands. These poses were selected to represent common interaction scenarios. Secondly, we designed the robot’s dialog content and movement patterns to correspond with the identified table-front poses. By aligning the robot’s responses with the specific pose, we aimed to create a more engaging and intuitive interaction experience for users. Finally, we performed an extensive evaluation by exploring the performance of three classification models—non-linear Support Vector Machine (SVM), Artificial Neural Network (ANN), and convolutional neural network (CNN)—for accurately recognizing table-front poses. We used an Asus Zenbo Junior robot to acquire images and leveraged OpenPose to extract 12 upper-body skeleton points as input for training the classification models. The experimental results indicate that the ANN model outperformed the other models, demonstrating its effectiveness in pose recognition. Overall, the proposed system not only showcases the potential of utilizing OpenPose for proactive human–robot interaction but also demonstrates its real-world applicability. By combining advanced pose recognition techniques with carefully designed dialog and movement patterns, the tabletop robot successfully engages with humans in a proactive manner. Full article
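The pose-classification stage — 12 upper-body keypoints fed to a small neural network — can be approximated with scikit-learn on synthetic keypoint clusters. The data, network size, and split below are invented; the paper's actual OpenPose features and ANN architecture may differ.

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(1)
n_poses, per_pose = 5, 40
# 12 upper-body keypoints -> 24 (x, y) features; one synthetic cluster per pose
centers = rng.uniform(0.0, 1.0, size=(n_poses, 24))
X = np.vstack([c + 0.05 * rng.standard_normal((per_pose, 24)) for c in centers])
y = np.repeat(np.arange(n_poses), per_pose)

X_tr, X_te, y_tr, y_te = train_test_split(
    X, y, test_size=0.25, random_state=0, stratify=y)
clf = MLPClassifier(hidden_layer_sizes=(32,), max_iter=2000,
                    random_state=0).fit(X_tr, y_tr)
acc = clf.score(X_te, y_te)
print(f"held-out accuracy: {acc:.2f}")
```

In the paper's setting, each class would correspond to one of the five table-front poses and the features would come from OpenPose landmark coordinates rather than Gaussian clusters.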

24 pages, 48949 KiB  
Article
Co-Construction Mechanisms of Spatial Encoding and Communicability in Culture-Featured Districts—A Case Study of Harbin Central Street
by Hehui Zhu and Chunyu Pang
Sustainability 2025, 17(15), 7059; https://doi.org/10.3390/su17157059 - 4 Aug 2025
Abstract
During the transition of culture-featured district planning from static conservation to innovation-driven models, existing research remains constrained by mechanistic paradigms, reducing districts to functional containers and neglecting human perceptual interactions and meaning-production mechanisms. This study explores and quantifies the generative mechanisms of spatial communicability and cultural dissemination efficacy within human-centered frameworks. Grounded in humanistic urbanism, we analyze Harbin Central Street as a case study integrating historical heritage with contemporary vitality, developing a tripartite communicability assessment framework comprising perceptual experience, infrastructure utility, and behavioral dynamics. Machine learning-based threshold analysis reveals that spatial encoding elements govern communicability through significant nonlinear mechanisms. The conclusions show that synergies between street-view-quantified greenery visibility and pedestrian accessibility establish critical human-centered design thresholds. Spatial data analysis integrating physiologically sensed emotional experiences and topologically analyzed spatial morphology resolves metric fragmentation while examining spatial encoding’s impact on interaction efficacy. This research provides data-driven decision support for sustainable urban renewal and enhanced cultural dissemination, advancing heritage sustainability. Full article
(This article belongs to the Section Sustainable Urban and Rural Development)

14 pages, 654 KiB  
Article
A Conceptual Framework for User Trust in AI Biosensors: Integrating Cognition, Context, and Contrast
by Andrew Prahl
Sensors 2025, 25(15), 4766; https://doi.org/10.3390/s25154766 - 2 Aug 2025
Abstract
Artificial intelligence (AI) techniques have propelled biomedical sensors beyond measuring physiological markers to interpreting subjective states like stress, pain, or emotions. Despite these technological advances, user trust is not guaranteed and is inadequately addressed in extant research. This review proposes the Cognition–Context–Contrast (CCC) conceptual framework to explain the trust and acceptance of AI-enabled sensors. First, we map cognition, comprising the expectations and stereotypes that humans have about machines. Second, we integrate task context by situating sensor applications along an intellective-to-judgmental continuum and showing how demonstrability predicts tolerance for sensor uncertainty and/or errors. Third, we analyze contrast effects that arise when automated sensing displaces familiar human routines, heightening scrutiny and accelerating rejection if roll-out is abrupt. We then derive practical implications such as enhancing interpretability, tailoring data presentations to task demonstrability, and implementing transitional introduction phases. The framework offers researchers, engineers, and clinicians a structured conceptual framework for designing and implementing the next generation of AI biosensors. Full article
(This article belongs to the Special Issue AI in Sensor-Based E-Health, Wearables and Assisted Technologies)

23 pages, 10868 KiB  
Article
Quantitative Analysis and Nonlinear Response of Vegetation Dynamic to Driving Factors in Arid and Semi-Arid Regions of China
by Shihao Liu, Dazhi Yang, Xuyang Zhang and Fangtian Liu
Land 2025, 14(8), 1575; https://doi.org/10.3390/land14081575 - 1 Aug 2025
Abstract
Vegetation dynamics are complexly influenced by multiple factors such as climate, human activities, and topography. In recent years, the frequency, intensity, and diversity of human activities have increased, placing substantial pressure on the growth of vegetation. Arid and semi-arid regions are particularly sensitive to climate change, and climate change and large-scale ecological restoration have led to significant changes in the dynamic of dryland vegetation. However, few studies have explored the nonlinear relationships between these factors and vegetation dynamic. In this study, we integrated trend analysis (using the Mann–Kendall test and Theil–Sen estimation) and machine learning algorithms (XGBoost-SHAP model) based on long time-series remote sensing data from 2001 to 2020 to quantify the nonlinear response patterns and threshold effects of bioclimatic variables, topographic features, soil attributes, and anthropogenic factors on vegetation dynamic. The results revealed the following key findings: (1) The kNDVI in the study area showed an overall significant increasing trend (p < 0.01) during the observation period, of which 26.7% of the area showed a significant increase. (2) The water content index (Bio 23, 19.6%), the change in land use (15.2%), multi-year average precipitation (pre, 15.0%), population density (13.2%), and rainfall seasonality (Bio 15, 10.9%) were the key factors driving the dynamic change of vegetation, with the combined contribution of natural factors amounting to 64.3%. (3) Among the topographic factors, altitude had a more significant effect on vegetation dynamics, with higher altitude regions less likely to experience vegetation greening. Both natural and anthropogenic factors exhibited nonlinear responses and interactive effects, contributing to the observed dynamic trends. 
This study provides valuable insights into the driving mechanisms behind the condition of vegetation in arid and semi-arid regions of China and, by extension, in other arid regions globally. Full article
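The paper's XGBoost-SHAP attribution of vegetation drivers can be approximated, where those libraries are unavailable, with scikit-learn's gradient boosting plus permutation importance — a stand-in technique, not the authors' pipeline. All variables and response curves below are synthetic.

```python
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.inspection import permutation_importance

rng = np.random.default_rng(2)
n = 600
precip = rng.uniform(0, 800, n)        # mean annual precipitation, mm (synthetic)
altitude = rng.uniform(0, 4000, n)     # elevation, m (synthetic)
pop = rng.uniform(0, 200, n)           # population density (synthetic)
# Threshold-style response: greening saturates above ~400 mm of precipitation
kndvi = (np.tanh(precip / 400) - 1e-4 * altitude + 1e-3 * pop
         + 0.05 * rng.standard_normal(n))

X = np.column_stack([precip, altitude, pop])
model = GradientBoostingRegressor(random_state=0).fit(X, kndvi)
imp = permutation_importance(model, X, kndvi, n_repeats=5, random_state=0)
for name, v in zip(["precipitation", "altitude", "pop_density"],
                   imp.importances_mean):
    print(f"{name}: {v:.3f}")
```

Permutation importance, like SHAP, attributes predictive contribution per driver and can reveal the kind of nonlinear, saturating response the study reports for precipitation.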
(This article belongs to the Section Land Use, Impact Assessment and Sustainability)

25 pages, 953 KiB  
Article
Command Redefined: Neural-Adaptive Leadership in the Age of Autonomous Intelligence
by Raul Ionuț Riti, Claudiu Ioan Abrudan, Laura Bacali and Nicolae Bâlc
AI 2025, 6(8), 176; https://doi.org/10.3390/ai6080176 - 1 Aug 2025
Abstract
Artificial intelligence has taken a seat at the executive table, challenging the assumption that humans alone should hold positions of power. This article offers conjectures on a future of leadership in which managers collaborate with learning algorithms under the Neural-Adaptive Artificial Intelligence Leadership Model, which is informed by the transformational leadership literature, socio-technical systems theory, and work on algorithmic governance. We assessed the model with thirty in-depth interviews, system-level traces of behavior, and a validated survey, exploring six hypotheses that relate algorithmic delegation, ethical oversight, and human judgment versus machine insight to agility and performance. We found that decisions are made more quickly, change is more effective, and interaction is more vivid where agile practices and strong digital literacy exist, and statistical tests suggest that human flexibility and clear governance amplify those benefits. As single-industry research relying on self-reported measures, the study is limited; extending it to other industries with more objective measures remains future work. Practitioners are offered a practical playbook for making algorithmic roles meaningful, introducing moral fail-safes, and building learning feedback loops that keep people and machines aligned. Socially, the practice can reduce bias and foster inclusion by making accountability visible in both code and practice. Bridging the gap between leadership theory and algorithmic reality, the study provides a reproducible model for leading with intelligent systems in organizations. Full article
(This article belongs to the Section AI Systems: Theory and Applications)
18 pages, 8744 KiB  
Article
A User-Centered Teleoperation GUI for Automated Vehicles: Identifying and Evaluating Information Requirements for Remote Driving and Assistance
by Maria-Magdalena Wolf, Henrik Schmidt, Michael Christl, Jana Fank and Frank Diermeyer
Multimodal Technol. Interact. 2025, 9(8), 78; https://doi.org/10.3390/mti9080078 - 31 Jul 2025
Abstract
Teleoperation emerged as a promising fallback for situations beyond the capabilities of automated vehicles. Nevertheless, teleoperation still faces challenges, such as reduced situational awareness. Since situational awareness is primarily built through the remote operator’s visual perception, the graphical user interface (GUI) design is critical. In addition to video feed, supplemental informational elements are crucial—not only for the predominantly studied remote driving, but also for emerging desk-based remote assistance concepts. This work develops a GUI for different teleoperation concepts by identifying key informational elements during the teleoperation process through expert interviews (N = 9). Following this, a static and dynamic GUI prototype was developed and evaluated in a click dummy study (N = 36). Thereby, the dynamic GUI adapts the number of displayed elements according to the teleoperation phase. Results show that both GUIs achieve good system usability scale (SUS) ratings, with the dynamic GUI significantly outperforming the static version in both usability and task completion time. However, the results might be attributable to a learning effect due to the lack of randomization. The user experience questionnaire (UEQ) score shows potential for improvement. To enhance the user experience, the GUI should be evaluated in a follow-up study that includes interaction with a real vehicle. Full article

19 pages, 1753 KiB  
Article
EMG-Driven Shared Control Architecture for Human–Robot Co-Manipulation Tasks
by Francesca Patriarca, Paolo Di Lillo and Filippo Arrichiello
Machines 2025, 13(8), 669; https://doi.org/10.3390/machines13080669 - 31 Jul 2025
Abstract
The paper presents a shared control strategy that allows a human operator to physically guide the end-effector of a robotic manipulator to perform different tasks, possibly in interaction with the environment. To switch among different operational modes referring to a finite state machine algorithm, ElectroMyoGraphic (EMG) signals from the user’s arm are used to detect muscular contractions and to interact with a variable admittance control strategy. Specifically, a Support Vector Machine (SVM) classifier processes the raw EMG data to identify three classes of contractions that trigger the activation of different sets of admittance control parameters corresponding to the envisaged operational modes. The proposed architecture has been experimentally validated using a Kinova Jaco2 manipulator, equipped with force/torque sensor at the end-effector, and with a limited group of users wearing Delsys Trigno Avanti EMG sensors on the dominant upper limb, demonstrating promising results. Full article
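The core loop — classify an EMG window, then switch admittance gains — can be sketched as follows. The RMS feature, contraction levels, and gain values are hypothetical placeholders, not the paper's tuned parameters or its actual Delsys/Kinova processing chain.

```python
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(3)

def rms_windows(level, n=60):
    """Synthetic EMG windows at a given contraction level -> 1 RMS feature each."""
    sig = level + 0.1 * rng.standard_normal((n, 100))
    return np.sqrt((sig ** 2).mean(axis=1, keepdims=True))

# Three contraction classes: rest, soft, strong
X = np.vstack([rms_windows(0.1), rms_windows(0.6), rms_windows(1.2)])
y = np.repeat([0, 1, 2], 60)
clf = SVC(kernel="rbf").fit(X, y)

# Hypothetical class -> admittance-parameter mapping (virtual mass M, damping D)
ADMITTANCE = {0: {"mass": 8.0, "damping": 40.0},   # stiff hold
              1: {"mass": 4.0, "damping": 15.0},   # compliant guiding
              2: {"mass": 2.0, "damping": 5.0}}    # highly compliant contact

mode = int(clf.predict(rms_windows(0.6, n=1))[0])
print("selected gains:", ADMITTANCE[mode])
```

In the paper this switching is mediated by a finite state machine over the detected contraction classes; the dictionary lookup above stands in for that logic.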
(This article belongs to the Special Issue Design and Control of Assistive Robots)

21 pages, 6921 KiB  
Article
Transcriptomic Analysis Identifies Oxidative Stress-Related Hub Genes and Key Pathways in Sperm Maturation
by Ali Shakeri Abroudi, Hossein Azizi, Vyan A. Qadir, Melika Djamali, Marwa Fadhil Alsaffar and Thomas Skutella
Antioxidants 2025, 14(8), 936; https://doi.org/10.3390/antiox14080936 - 30 Jul 2025
Abstract
Background: Oxidative stress is a critical factor contributing to male infertility, impairing spermatogonial stem cells (SSCs) and disrupting normal spermatogenesis. This study aimed to isolate and characterize human SSCs and to investigate oxidative stress-related gene expression, protein interaction networks, and developmental trajectories involved in SSC function. Methods: SSCs were enriched from human orchiectomy samples using CD49f-based magnetic-activated cell sorting (MACS) and laminin-binding matrix selection. Enriched cultures were assessed through morphological criteria and immunocytochemistry using VASA and SSEA4. Transcriptomic profiling was performed using microarray and single-cell RNA sequencing (scRNA-seq) to identify oxidative stress-related genes. Bioinformatic analyses included STRING-based protein–protein interaction (PPI) networks, FunRich enrichment, weighted gene co-expression network analysis (WGCNA), and predictive modeling using machine learning algorithms. Results: The enriched SSC populations displayed characteristic morphology, positive germline marker expression, and minimal fibroblast contamination. Microarray analysis revealed six significantly upregulated oxidative stress-related genes in SSCs—including CYB5R3 and NDUFA10—and three downregulated genes, such as TXN and SQLE, compared to fibroblasts. PPI and functional enrichment analyses highlighted tightly clustered gene networks involved in mitochondrial function, redox balance, and spermatogenesis. scRNA-seq data further confirmed stage-specific expression of antioxidant genes during spermatogenic differentiation, particularly in late germ cell stages. Among the machine learning models tested, logistic regression demonstrated the highest predictive accuracy for antioxidant gene expression, with an area under the curve (AUC) of 0.741. Protein oxidation was implicated as a major mechanism of oxidative damage, affecting sperm motility, metabolism, and acrosome integrity. 
Conclusion: This study identifies key oxidative stress-related genes and pathways in human SSCs that may regulate spermatogenesis and impact sperm function. These findings offer potential targets for future functional validation and therapeutic interventions, including antioxidant-based strategies to improve male fertility outcomes. Full article
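The predictive-modeling step — logistic regression scored by AUC — can be illustrated on synthetic expression data. The gene effects and sample sizes below are invented; the study reports an AUC of 0.741 on its real data.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(6)
n_cells = 300
expr = rng.standard_normal((n_cells, 4))   # hypothetical log-expression, 4 genes
# Antioxidant-high label driven mostly by the first two genes, plus noise
logit = 1.5 * expr[:, 0] - 1.0 * expr[:, 1] + 0.8 * rng.standard_normal(n_cells)
y = (logit > 0).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(expr, y, test_size=0.3, random_state=0)
clf = LogisticRegression().fit(X_tr, y_tr)
auc = roc_auc_score(y_te, clf.predict_proba(X_te)[:, 1])
print(f"AUC: {auc:.3f}")
```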
(This article belongs to the Special Issue Oxidative Stress and Male Reproductive Health)

18 pages, 2688 KiB  
Article
Generalized Hierarchical Co-Saliency Learning for Label-Efficient Tracking
by Jie Zhao, Ying Gao, Chunjuan Bo and Dong Wang
Sensors 2025, 25(15), 4691; https://doi.org/10.3390/s25154691 - 29 Jul 2025
Abstract
Visual object tracking is one of the core techniques in human-centered artificial intelligence and is very useful for human–machine interaction. State-of-the-art tracking methods have shown robustness and accuracy on many challenging benchmarks. However, a large number of videos with precise, dense annotations is required for fully supervised training of their models. Since annotating videos frame by frame is a labor- and time-consuming workload, reducing the reliance on manual annotations when training tracking models is an important open problem. To trade off annotation cost against tracking performance, we propose a weakly supervised tracking method based on co-saliency learning, which can be flexibly integrated into various tracking frameworks to reduce annotation costs and further enhance the target representation in current search images. Because our method enables the model to explore valuable visual information from unlabeled frames and to calculate co-salient attention maps from multiple frames, our weakly supervised method obtains competitive performance compared to fully supervised baseline trackers while using only 3.33% of the manual annotations. We integrate our method into two CNN-based trackers and a Transformer-based tracker; extensive experiments on four general tracking benchmarks demonstrate its effectiveness. Furthermore, we demonstrate the advantages of our method on an egocentric tracking task: our weakly supervised method obtains 0.538 success on TREK-150, surpassing the prior state-of-the-art fully supervised tracker by 7.7%. Full article
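The idea of scoring locations by their agreement across frames can be sketched with plain cosine similarity — a simplified stand-in for the paper's learned co-salient attention. Features, dimensions, and the recurring-target setup below are synthetic.

```python
import numpy as np

def co_attention(feats):
    """feats: (n_frames, n_locations, dim). Score each location by its mean
    cosine similarity to every location in the other frames, per frame."""
    f = feats / np.linalg.norm(feats, axis=-1, keepdims=True)
    n_frames = f.shape[0]
    maps = np.empty(f.shape[:2])
    for i in range(n_frames):
        others = np.concatenate([f[j] for j in range(n_frames) if j != i])
        maps[i] = (f[i] @ others.T).mean(axis=1)
    maps -= maps.min(axis=1, keepdims=True)          # normalize each frame's
    maps /= maps.max(axis=1, keepdims=True) + 1e-8   # map to [0, 1]
    return maps

rng = np.random.default_rng(4)
target = rng.standard_normal(64)              # shared target appearance
frames = rng.standard_normal((5, 6, 64))      # 5 frames, 6 candidate locations
frames[:, 0] = target + 0.1 * rng.standard_normal((5, 64))  # location 0 recurs
att = co_attention(frames)
print(att.argmax(axis=1))  # location 0 scores highest in every frame
```

The recurring target lights up in every frame's map because it alone is consistent across frames — the intuition behind using unlabeled frames to sharpen the target representation.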

16 pages, 358 KiB  
Article
Artificial Intelligence in Curriculum Design: A Data-Driven Approach to Higher Education Innovation
by Thai Son Chu and Mahfuz Ashraf
Knowledge 2025, 5(3), 14; https://doi.org/10.3390/knowledge5030014 - 29 Jul 2025
Abstract
This paper shows that artificial intelligence is fundamentally transforming college curricula by enabling data-driven personalization, which enhances student outcomes and better aligns educational programs with evolving workforce demands. Specifically, predictive analytics, machine learning algorithms, and natural language processing were applied here, grounded in constructivist learning theory and Human–Computer Interaction principles, to evaluate student performance and identify at-risk students in order to propose personalized learning pathways. Results indicated that the AI-based curriculum achieved much higher course completion (89.72%) and retention (91.44%) rates, and a markedly lower dropout rate (4.98%), than the traditional model. Sentiment analysis of learner feedback showed a more positive learning experience, while regression and ANOVA analyses confirmed that AI had a statistically significant effect on academic performance. The learning content delivered to each student was continuously refined, based on individual learner characteristics and industry trends, by AI-enabled recommender systems and adaptive learning models. Its advantages notwithstanding, the study emphasizes the need to address ethical concerns, ensure data-privacy safeguards, and mitigate algorithmic bias before an equitable outcome can be claimed. These findings can inform institutions aspiring to adopt AI-driven models for curriculum innovation to build a more dynamic, responsive, and learner-centered educational ecosystem. Full article
(This article belongs to the Special Issue Knowledge Management in Learning and Education)

23 pages, 19710 KiB  
Article
Hybrid EEG Feature Learning Method for Cross-Session Human Mental Attention State Classification
by Xu Chen, Xingtong Bao, Kailun Jitian, Ruihan Li, Li Zhu and Wanzeng Kong
Brain Sci. 2025, 15(8), 805; https://doi.org/10.3390/brainsci15080805 - 28 Jul 2025
Abstract
Background: Decoding mental attention states from electroencephalogram (EEG) signals is crucial for numerous applications such as cognitive monitoring, adaptive human–computer interaction, and brain–computer interfaces (BCIs). However, conventional EEG-based approaches often focus on channel-wise processing and are limited to intra-session or subject-specific scenarios, lacking robustness in cross-session or inter-subject conditions. Methods: In this study, we propose a hybrid feature learning framework for robust classification of mental attention states, including focused, unfocused, and drowsy conditions, across both sessions and individuals. Our method integrates preprocessing, feature extraction, feature selection, and classification in a unified pipeline. We extract channel-wise spectral features using short-time Fourier transform (STFT) and further incorporate both functional and structural connectivity features to capture inter-regional interactions in the brain. A two-stage feature selection strategy, combining correlation-based filtering and random forest ranking, is adopted to enhance feature relevance and reduce dimensionality. Support vector machine (SVM) is employed for final classification due to its efficiency and generalization capability. Results: Experimental results on two cross-session and inter-subject EEG datasets demonstrate that our approach achieves classification accuracy of 86.27% and 94.01%, respectively, significantly outperforming traditional methods. Conclusions: These findings suggest that integrating connectivity-aware features with spectral analysis can enhance the generalizability of attention decoding models. The proposed framework provides a promising foundation for the development of practical EEG-based systems for continuous mental state monitoring and adaptive BCIs in real-world environments. Full article
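The two-stage selection described above — a correlation filter, then random-forest ranking, then an SVM — can be sketched end-to-end on synthetic features. The threshold, feature counts, and data are assumptions, not the paper's settings.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC

rng = np.random.default_rng(5)
n = 200
informative = rng.standard_normal((n, 5))
y = (informative[:, :2].sum(axis=1) > 0).astype(int)   # label uses features 0, 1
redundant = informative[:, :3] + 0.01 * rng.standard_normal((n, 3))
noise = rng.standard_normal((n, 10))
X = np.hstack([informative, redundant, noise])         # 18 features total

# Stage 1: correlation filter -- drop the later member of each correlated pair
corr = np.abs(np.corrcoef(X, rowvar=False))
keep = [i for i in range(X.shape[1])
        if not any(corr[i, j] > 0.95 for j in range(i))]

# Stage 2: random-forest ranking on the surviving features
rf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X[:, keep], y)
top = np.argsort(rf.feature_importances_)[::-1][:4]
selected = [keep[i] for i in top]

acc = cross_val_score(SVC(), X[:, selected], y, cv=5).mean()
print(f"selected: {sorted(selected)}, CV accuracy: {acc:.2f}")
```

In the paper's pipeline the inputs would be STFT spectral features plus functional and structural connectivity measures rather than Gaussian columns, but the filter-then-rank structure is the same.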

16 pages, 1192 KiB  
Article
Application of the AI-Based Framework for Analyzing the Dynamics of Persistent Organic Pollutants (POPs) in Human Breast Milk
by Gordana Jovanović, Timea Bezdan, Snježana Herceg Romanić, Marijana Matek Sarić, Martina Biošić, Gordana Mendaš, Andreja Stojić and Mirjana Perišić
Toxics 2025, 13(8), 631; https://doi.org/10.3390/toxics13080631 - 27 Jul 2025
Abstract
Human milk has been used for over 70 years to monitor pollutants such as polychlorinated biphenyls (PCBs) and organochlorine pesticides (OCPs). Despite the growing body of data, our understanding of the pollutant exposome, particularly co-exposure patterns and their interactions, remains limited. Artificial intelligence (AI) offers considerable potential to enhance biomonitoring efforts through advanced data modelling, yet its application to pollutant dynamics in complex biological matrices such as human milk remains underutilized. This study applied an AI-based framework, integrating machine learning, metaheuristic hyperparameter optimization, explainable AI, and postprocessing, to analyze PCB-170 levels in breast milk samples from 186 mothers in Zadar, Croatia. Among 24 analyzed POPs, the most influential predictors of PCB-170 concentrations were hexa- and hepta-chlorinated PCBs (PCB-180, -153, and -138), alongside p,p’-DDE. Maternal age and other POPs exhibited negligible global influence. SHAP-based interaction analysis revealed pronounced co-behavior among highly chlorinated congeners, especially PCB-138–PCB-153, PCB-138–PCB-180, and PCB-180–PCB-153. These findings highlight the importance of examining pollutant interactions rather than individual contributions alone. They also advocate for the revision of current monitoring strategies to prioritize multi-pollutant assessment and focus on toxicologically relevant PCB groups, improving risk evaluation in real-world exposure scenarios. Full article

26 pages, 27333 KiB  
Article
Gest-SAR: A Gesture-Controlled Spatial AR System for Interactive Manual Assembly Guidance with Real-Time Operational Feedback
by Naimul Hasan and Bugra Alkan
Machines 2025, 13(8), 658; https://doi.org/10.3390/machines13080658 - 27 Jul 2025
Abstract
Manual assembly remains essential in modern manufacturing, yet the increasing complexity of customised production imposes significant cognitive burdens and error rates on workers. Existing Spatial Augmented Reality (SAR) systems often operate passively, lacking adaptive interaction, real-time feedback, and gesture-based control. In response, we present Gest-SAR, a SAR framework that integrates a custom MediaPipe-based gesture classification model to deliver adaptive light-guided pick-to-place assembly instructions and real-time error feedback within a closed-loop interaction cycle. In a within-subject study, ten participants completed standardised Duplo-based assembly tasks using Gest-SAR, paper-based manuals, and tablet-based instructions; performance was evaluated via assembly cycle time, selection and placement error rates, cognitive workload assessed by NASA-TLX, and usability via post-experiment questionnaires. Quantitative results demonstrate that Gest-SAR significantly reduces cycle time (Mean = 3.95 min) compared to Paper (Mean = 7.89 min, p < 0.01) and Tablet (Mean = 6.99 min, p < 0.01). It also achieved an average error rate roughly seven times lower, while reducing perceived cognitive workload (p < 0.05 for mental demand) compared to the conventional modalities. In total, 90% of participants reported preferring SAR over the paper and tablet modalities. These outcomes indicate that natural hand-gesture interaction coupled with real-time visual feedback enhances both the efficiency and accuracy of manual assembly. By embedding AI-driven gesture recognition and AR projection into a human-centric assistance system, Gest-SAR advances the collaborative interplay between humans and machines, aligning with Industry 5.0 objectives of resilient, sustainable, and intelligent manufacturing. Full article
(This article belongs to the Special Issue AI-Integrated Advanced Robotics Towards Industry 5.0)

23 pages, 2229 KiB  
Article
Assessing the Impact of Risk-Warning eHMI Information Content on Pedestrian Mental Workload, Situation Awareness, and Gap Acceptance in Full and Partial eHMI Penetration Vehicle Platoons
by Fang Yang, Xu Sun, Jiming Bai, Bingjian Liu, Luis Felipe Moreno Leyva and Sheng Zhang
Appl. Sci. 2025, 15(15), 8250; https://doi.org/10.3390/app15158250 - 24 Jul 2025
Abstract
External Human–Machine Interfaces (eHMIs) enhance pedestrian safety in interactions with autonomous vehicles (AVs) by signaling crossing risk based on time-to-arrival (TTA), categorized as low, medium, or high. This study compared five eHMI configurations (single-level low, medium, high; two-level low-medium, medium-high) against a three-level (low-medium-high) configuration to assess their impact on pedestrians’ crossing decisions, mental workload (MW), and situation awareness (SA) in vehicle platoon scenarios under full and partial eHMI penetration. In a video-based experiment with 24 participants, crossing decisions were evaluated via temporal gap selection, MW via P300 event-related potentials in an auditory oddball task, and SA via the Situation Awareness Rating Technique. The three-level configuration outperformed the single-level medium, single-level high, two-level low-medium, and two-level medium-high configurations in gap acceptance, promoting safer decisions by rejecting smaller gaps and accepting larger ones, and exhibited lower MW than the two-level medium-high configuration under partial penetration. No SA differences were observed. Although the three-level configuration was generally preferred, future research should optimize its presentation to mitigate issues caused by rapid signal changes. Notably, the single-level low configuration showed comparable performance, suggesting a simpler alternative for real-world eHMI deployment. Full article
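The risk signaling described above maps time-to-arrival onto discrete levels. A minimal sketch of that mapping follows; the 7 s and 3 s thresholds are illustrative assumptions, since the abstract does not report the study's TTA cut-offs.

```python
def tta_seconds(distance_m, speed_mps):
    """Time-to-arrival of an approaching vehicle, in seconds."""
    if speed_mps <= 0:
        return float("inf")  # stationary or receding vehicle: no arrival
    return distance_m / speed_mps

def risk_level(tta, low_threshold=7.0, high_threshold=3.0):
    """Map TTA to the three-level crossing-risk signal an eHMI would display.
    The 7 s / 3 s thresholds are invented for illustration."""
    if tta >= low_threshold:
        return "low"      # ample time: crossing is safe
    if tta >= high_threshold:
        return "medium"   # marginal gap: cross with caution
    return "high"         # imminent arrival: do not cross

print(risk_level(tta_seconds(80.0, 10.0)))  # low (TTA = 8 s)
print(risk_level(tta_seconds(40.0, 10.0)))  # medium (TTA = 4 s)
print(risk_level(tta_seconds(20.0, 10.0)))  # high (TTA = 2 s)
```

A two-level configuration such as low-medium simply merges adjacent categories into one signal, which is how the compared configurations differ.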
