Search Results (733)

Search Parameters:
Keywords = human error factor

25 pages, 1391 KB  
Article
Human Factor Risk Analysis (HFRA) Based on an Integrated Perspective of Socio-Technical Systems and Safety Information Cognition
by Changqin Xiong and Yiling Ma
Systems 2026, 14(2), 199; https://doi.org/10.3390/systems14020199 - 12 Feb 2026
Abstract
Unsafe behavior remains a dominant contributor to accidents in complex socio-technical systems (STSs), yet it is still frequently interpreted as an individual-level information failure. This study argues that unsafe behavior is more accurately understood as a systemic outcome shaped by multi-level technological, organizational, and environmental conditions. To address this gap, an integrated human factor risk analysis framework is proposed by combining the STS perspective with safety information cognition (SIC) theory. The framework conceptualizes unsafe behavior as the result of risk transmission through safety information flows, linking system-level risk sources to individual perception, cognition, decision-making, and action. Within this perspective, human factor risk does not arise directly from individual error, but from deficiencies and asymmetries in the generation, transmission, and utilization of safety-related information embedded in the STS. Based on this conceptualization, a system-oriented human factor risk analysis (HFRA) approach is developed to support the identification, assessment, and control of unsafe behaviors across both accident scenarios and operational contexts. The framework is applied to road transportation of dangerous goods in China, a typical high-risk STS. The application results demonstrate that the proposed approach can effectively distinguish the comprehensive risk characteristics of different unsafe behaviors and reveal their underlying systemic causes. This study contributes to systems thinking in safety governance by shifting the analytical focus from individual behavior correction to upstream system conditions and information processes. The proposed framework provides a transferable approach for understanding and managing human factor risk in complex STSs and offers practical implications for proactive, system-oriented safety governance. Full article
(This article belongs to the Section Systems Theory and Methodology)

33 pages, 2497 KB  
Article
Human Error Identification for Air Traffic Controller in Remote Tower Apron Operation
by Rong Yi, Jianping Zhang, Jingyu Zhang, Xiaoqiang Tian, Xinyi Yang and Di Yao
Aerospace 2026, 13(2), 166; https://doi.org/10.3390/aerospace13020166 - 10 Feb 2026
Viewed by 88
Abstract
Remote towers are increasingly deployed at small-to-medium airports globally for cost efficiency, yet safety optimization for large airport remote apron control remains underexplored. This study proposes a human error identification framework for air traffic controllers (ATCOs) in large airport remote apron operations. Using hierarchical task analysis (HTA), a cognitive-behavioral model, and the technique for retrospective analysis of cognitive errors (TRACEr), we analyzed error probability and severity through field research. Key findings reveal critical divergences. Memory functions showed the highest error probability, while perception errors caused the most severe outcomes. Working memory errors were most prevalent, but visual detection errors were most severe. Attention deficits were most frequent, while spatial confusion and information integration failures exceeded severity thresholds. Personal factors dominated performance-shaping factors, with low vigilance and equipment unavailability as primary high-risk conditions. This research provides an error identification checklist and analysis methodology to enhance human performance and aviation safety in remote apron control. Full article
(This article belongs to the Section Air Traffic and Transportation)

24 pages, 37585 KB  
Article
Dynamic Failure Analysis of Suction Anchor Installation Operation in Marine Natural Gas Hydrate Development Using DBN-GO Method
by Kang Liu, Haojun Zhang, Haitao Xu, Fei Cao, Guoming Chen, Lei Liu and Duoya Liu
Sustainability 2026, 18(4), 1769; https://doi.org/10.3390/su18041769 - 9 Feb 2026
Viewed by 130
Abstract
Suction anchors play an important role in the exploration and development of marine natural gas hydrate (NGH). They increase the bearing capacity of underwater wellheads and reduce their risk of tilting or sinking during exploration and development. This study proposes a dynamic failure analysis procedure for suction anchor installation based on the DBN-GO method. Firstly, a Goal-Oriented (GO) model is established by analyzing the human and equipment factor nodes in the suction anchor installation operation process. A Bayesian Network (BN) analysis model is set up by mapping the key nodes in the GO model. Then, the Cognitive Reliability and Error Analysis Method (CREAM) and the Dempster–Shafer (D-S) evidence theory are used to quantify the failure probabilities of human and equipment factor nodes in the BN model. The main risk factors are identified using Bayesian backward inference. Finally, the dynamic risk assessment of the suction anchor installation operation is conducted, considering the equipment node transition probability of the BN. The second production test of natural gas hydrates in the South China Sea is taken as a case study. The study results show that the failure probability of the suction anchor installation operation is 0.298%, which is at a low-risk level. Suction pump pressure control is the most critical factor leading to human errors. Among the equipment factors, the reliability of the suction pump and the ROV is the most important. Dynamic Bayesian inference shows that the risk gradually increases with time. A reasonable maintenance strategy is conducive to reducing the accumulated risks caused by the time-varying degradation of equipment performance. The results could provide significant support in risk management and decision-making for the suction anchor installation operation, which will further promote the environmental sustainability, operational safety, and economic feasibility of marine natural gas hydrate development. Full article
(This article belongs to the Special Issue Advanced Research on Marine and Deep Oil & Gas Development)
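The DBN-GO analysis itself requires the full model, but the way per-node failure probabilities roll up into an overall installation failure probability can be illustrated with a minimal, purely hypothetical sketch (node names and numbers below are invented and independence is assumed; the paper's CREAM quantification, D-S evidence fusion, and dynamic BN go far beyond this):

```python
# Minimal sketch (hypothetical numbers, not the paper's model): combine
# independent human and equipment factor node failure probabilities into an
# overall installation failure probability, series-system style.
human_nodes = {"suction_pump_pressure_control": 2.0e-3,  # hypothetical CREAM-style values
               "anchor_positioning": 8.0e-4}
equipment_nodes = {"suction_pump": 5.0e-4,
                   "ROV": 4.0e-4,
                   "lowering_winch": 2.0e-4}

def series_failure(probs):
    """P(at least one node fails), assuming independent nodes."""
    p_ok = 1.0
    for p in probs:
        p_ok *= (1.0 - p)
    return 1.0 - p_ok

p_total = series_failure(list(human_nodes.values()) + list(equipment_nodes.values()))
print(f"overall installation failure probability: {p_total:.4%}")
```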

13 pages, 545 KB  
Review
Near Misses as Signals of System Vulnerability in Thoracic Surgery: A Narrative Review on Quality Improvement and Patient Safety
by Dimitrios E. Magouliotis, Vasiliki Androutsopoulou, Prokopis-Andreas Zotos, Andrew Xanthopoulos, Ugo Cioffi, Noah Sicouri, Piergiorgio Solli and Marco Scarci
Healthcare 2026, 14(4), 423; https://doi.org/10.3390/healthcare14040423 - 8 Feb 2026
Viewed by 115
Abstract
Near misses—clinical events that could have resulted in patient harm but did not—are increasingly recognized as critical yet underutilized sources of insight in surgical quality improvement. In thoracic surgery, where procedures are physiologically demanding and care pathways are highly interdependent, near misses frequently precede major complications and expose latent system vulnerabilities rather than isolated technical errors. A structured narrative review methodology was employed, including a targeted literature search of major biomedical databases and thematic synthesis of relevant studies. This narrative review synthesizes evidence from patient safety science, surgical quality literature, and thoracic surgery—specific outcomes research to examine how near misses can be systematically leveraged to improve care. We discuss the transition from individual-centered explanations of adverse events to system-based models that emphasize human factors, communication, escalation pathways, and organizational culture. Particular attention is given to contemporary quality frameworks such as failure to rescue and textbook outcome, which highlight the importance of early recognition, coordinated response, and recovery from complications rather than complication avoidance alone. We further explore the central role of psychological safety and leadership behaviors in enabling meaningful learning from near misses. By reframing near misses as actionable data rather than anecdotal “close calls,” quality improvement emerges as a core professional responsibility in thoracic surgery. We conclude that excellence in thoracic surgery should be defined not by the absence of complications, but by the capacity of surgical systems to learn, adapt, and prevent future harm. Full article

21 pages, 649 KB  
Review
Smart Lies and Sharp Eyes: Pragmatic Artificial Intelligence for Cancer Pathology: Promise, Pitfalls, and Access Pathways
by Mohamed-Amine Bani
Cancers 2026, 18(3), 421; https://doi.org/10.3390/cancers18030421 - 28 Jan 2026
Viewed by 210
Abstract
Background: Whole-slide imaging and algorithmic advances have moved computational pathology from research to routine consideration. Despite notable successes, real-world deployment remains limited by generalization, validation gaps, and human-factor risks, which can be amplified in resource-constrained settings. Content/Scope: This narrative review and implementation perspective summarizes clinically proximate AI capabilities in cancer pathology, including lesion detection, metastasis triage, mitosis counting, immunomarker quantification, and prediction of selected molecular alterations from routine histology. We also summarize recurring failure modes, dataset leakage, stain/batch/site shifts, misleading explanation overlays, calibration errors, and automation bias, and distinguish applications supported by external retrospective validation, prospective reader-assistance or real-world studies, and regulatory-cleared use. We translate these evidence patterns into a practical checklist covering dataset design, external and temporal validation, robustness testing, calibration and uncertainty handling, explainability sanity checks, and workflow-safety design. Equity Focus: We propose a stepwise adoption pathway for low- and middle-income countries: prioritize narrow, high-impact use cases; match compute and storage requirements to local infrastructure; standardize pre-analytics; pool validation cohorts; and embed quality management, privacy protections, and audit trails. Conclusions: AI can already serve as a reliable second reader for selected tasks, reducing variance and freeing expert time. Safe, equitable deployment requires disciplined validation, calibrated uncertainty, and guardrails against human-factor failure. With pragmatic scoping and shared infrastructure, pathology programs can realize benefits while preserving trust and accountability. Full article

19 pages, 1710 KB  
Article
Bacterial Colony Counting and Classification System Based on Deep Learning Model
by Chuchart Pintavirooj, Manao Bunkum, Naphatsawan Vongmanee, Jindapa Nampeng and Sarinporn Visitsattapongse
Appl. Sci. 2026, 16(3), 1313; https://doi.org/10.3390/app16031313 - 28 Jan 2026
Viewed by 232
Abstract
Microbiological analysis is crucial for identifying species, assessing infections, and diagnosing infectious diseases, thereby supporting both research studies and medical diagnosis. In response to these needs, accurate and efficient identification of bacterial colonies is essential. Conventionally, this process is performed through manual counting and visual inspection of colonies on agar plates. However, this approach is prone to several limitations arising from human error and external factors such as lighting conditions, surface reflections, and image resolution. To overcome these limitations, an automated bacterial colony counting and classification system was developed by integrating a custom-designed imaging device with advanced deep learning models. The imaging device incorporates controlled illumination, matte-coated surfaces, and a high-resolution camera to minimize reflections and external noise, thereby ensuring consistent and reliable image acquisition. Image-processing algorithms implemented in MATLAB were employed to detect bacterial colonies, remove background artifacts, and generate cropped colony images for subsequent classification. A dataset comprising nine bacterial species was compiled and systematically evaluated using five deep learning architectures: ResNet-18, ResNet-50, Inception V3, GoogLeNet, and the state-of-the-art EfficientNet-B0. Experimental results demonstrated high colony-counting accuracy, with a mean accuracy of 90.79% ± 5.25% compared to manual counting. The coefficient of determination (R2 = 0.9083) indicated a strong correlation between automated and manual counting results. For colony classification, EfficientNet-B0 achieved the best performance, with an accuracy of 99.78% and a macro-F1 score of 0.99, demonstrating strong capability in distinguishing morphologically distinct colonies such as Serratia marcescens. Compared with previous studies, this research provides a time-efficient and scalable solution that balances high accuracy with computational efficiency. Overall, the findings highlight the potential of combining optimized imaging systems with modern lightweight deep learning models to advance microbiological diagnostics and improve routine laboratory workflows. Full article
(This article belongs to the Special Issue AI-Based Biomedical Signal and Image Processing)
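As a rough sketch of how a pretrained EfficientNet-B0 might be adapted to the paper's nine-species colony classification task (this is not the authors' code; the dummy batch, hyperparameters, and training step are placeholders):

```python
# Minimal transfer-learning sketch (not the authors' pipeline): adapt a
# pretrained EfficientNet-B0 to classify colony crops into 9 bacterial species.
import torch
import torch.nn as nn
from torchvision import models

NUM_SPECIES = 9  # per the paper's dataset; everything else here is assumed

model = models.efficientnet_b0(weights=models.EfficientNet_B0_Weights.DEFAULT)
# Replace the 1000-class ImageNet head with a 9-class head.
model.classifier[1] = nn.Linear(model.classifier[1].in_features, NUM_SPECIES)

criterion = nn.CrossEntropyLoss()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)

# One illustrative training step on a dummy batch of 224x224 RGB colony crops.
images = torch.randn(8, 3, 224, 224)
labels = torch.randint(0, NUM_SPECIES, (8,))
optimizer.zero_grad()
loss = criterion(model(images), labels)
loss.backward()
optimizer.step()
```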

22 pages, 3191 KB  
Review
Airway Management in the ICU and Emergency Department in Resource-Limited Settings
by Sahil Kataria, Deven Juneja, Ravi Jain, Tonny Veenith and Prashant Nasa
Life 2026, 16(2), 195; https://doi.org/10.3390/life16020195 - 23 Jan 2026
Viewed by 1074
Abstract
Airway management is central to the care of critically ill patients, yet it remains one of the most challenging interventions in emergency departments and intensive care units. Patients often present with severe physiological instability, limited cardiopulmonary reserve, and high acuity, while clinicians often work under constraints related to time for preparation, equipment availability, trained workforce, monitoring, and access to advanced rescue techniques. These challenges are particularly pronounced in low- and middle-income countries and other resource-limited or austere environments, where the margin for error is narrow and delays or repeated attempts in airway management may rapidly precipitate hypoxemia, hemodynamic collapse, or cardiac arrest. Although contemporary airway guidelines emphasize structured preparation and rescue pathways, many assume resources that are not consistently available in such settings. This narrative review discusses pragmatic, context-adapted strategies for airway management in constrained environments, with emphasis on physiology-first preparation, appropriate oxygenation and induction techniques, simplified rapid-sequence intubation, and the judicious use of basic airway adjuncts, supraglottic devices, and video laryngoscopy, where available. Adapted difficult airway algorithms, front-of-neck access in the absence of surgical backup, human factors, team training, and ethical considerations are also addressed. This review aims to support safer and effective airway management for critically ill patients in resource-limited emergency and intensive care settings. Full article
(This article belongs to the Special Issue Intensive Care Medicine: Current Concepts and Future Perspectives)

23 pages, 2572 KB  
Review
The Impact of User Interface and Experience (UI/UX) Design on Visual Ergonomics: A Technical Approach for Reducing Human Error in Industrial Settings
by Anael Vizcarra, Gustavo Quiroz and Jose Cornejo
Designs 2026, 10(1), 8; https://doi.org/10.3390/designs10010008 - 21 Jan 2026
Viewed by 469
Abstract
User Interface (UI) and User Experience (UX) design play a critical role in shaping human interaction with digital systems, particularly in professional environments where accuracy, safety, and efficiency are essential. Poor visual design increases cognitive load and the likelihood of human error, whereas ergonomically informed interfaces can substantially improve task performance. This systematic literature review analyzes 20 peer-reviewed studies published between 2020 and 2024 to examine how visual ergonomics embedded in UI/UX design contributes to error reduction across industrial and professional contexts. The reviewed studies report measurable improvements when ergonomic principles are applied, including reductions in operational errors ranging from approximately 30% to 70%, improvements in task completion time between 20% and 60%, and increased user accuracy and satisfaction in safety-critical and high-workload environments. The findings indicate that visual hierarchy, modular layouts, adaptive components, and real-time feedback are consistently associated with improved performance outcomes. Moreover, task complexity, user expertise, and working conditions were identified as moderating factors influencing ergonomic demands. Overall, the review demonstrates that visual ergonomics should be treated not merely as a usability enhancement but as a strategic design approach for minimizing human error and supporting reliable human–machine interaction in complex digital environments. Full article

24 pages, 6115 KB  
Article
Comparison of GLMM, RF and XGBoost Methods for Estimating Daily Relative Humidity in China Based on Remote Sensing Data
by Ying Yao, Ling Wu, Hongbo Liu and Wenbin Zhu
Remote Sens. 2026, 18(2), 306; https://doi.org/10.3390/rs18020306 - 16 Jan 2026
Viewed by 170
Abstract
Relative humidity (RH) is an important meteorological factor that affects both the climate system and human activities. However, the existing observational station data are insufficient to meet the requirements of regional-scale research. Machine learning methods offer new avenues for high-precision RH estimation, but the performance of different algorithms in complex geographical environments still needs to be thoroughly evaluated. Based on Chinese observational station data from 2011 to 2020, this study systematically evaluated the performance of three methods for estimating RH: the generalized linear mixed model (GLMM), random forest (RF), and the XGBoost algorithm. The results of ten-fold cross-validation indicate that the two machine learning methods are significantly superior to the traditional GLMM. Among them, RF performed the best (coefficient of determination (R2) = 0.73, root mean square error (RMSE) = 8.85%), followed by XGBoost (R2 = 0.72, RMSE = 9.07%), while the GLMM performed relatively poorly (R2 = 0.58, RMSE = 11.08%). The model performance shows significant spatial heterogeneity. All models exhibit high correlation but relatively large errors in the northern regions, while demonstrating low errors yet low correlation in the southern regions. Meanwhile, the model performance also shows significant seasonal variations, with the highest accuracy observed in the summer (June to September). Among all features, dew point temperature (Td), aridity index (AI), and day of year (DOY) are the main contributing factors for RH estimation. This study confirms that the RF model provides the highest accuracy in RH estimation. Full article
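A minimal sketch of the kind of ten-fold cross-validated comparison described above, on synthetic data (the features merely stand in for predictors such as Td, AI, and DOY; this is not the authors' code):

```python
# Minimal sketch (not the authors' code): 10-fold CV comparison of RF and
# XGBoost for RH regression on a generic feature matrix X and target y.
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import KFold, cross_val_score
from xgboost import XGBRegressor

rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 5))                              # stand-ins for Td, AI, DOY, etc.
y = 50 + 10 * X[:, 0] + rng.normal(scale=5, size=1000)      # synthetic RH (%)

cv = KFold(n_splits=10, shuffle=True, random_state=0)
for name, model in [("RF", RandomForestRegressor(n_estimators=300, random_state=0)),
                    ("XGBoost", XGBRegressor(n_estimators=300, learning_rate=0.05))]:
    r2 = cross_val_score(model, X, y, cv=cv, scoring="r2").mean()
    rmse = -cross_val_score(model, X, y, cv=cv,
                            scoring="neg_root_mean_squared_error").mean()
    print(f"{name}: R2={r2:.2f}, RMSE={rmse:.2f}%")
```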

22 pages, 363 KB  
Review
Human Factors, Competencies, and System Interaction in Remotely Piloted Aircraft Systems
by John Murray and Graham Wild
Aerospace 2026, 13(1), 85; https://doi.org/10.3390/aerospace13010085 - 13 Jan 2026
Viewed by 431
Abstract
Research into Remotely Piloted Aircraft Systems (RPASs) has expanded rapidly, yet the competencies, knowledge, skills, and other attributes (KSaOs) required of RPAS pilots remain comparatively underexamined. This review consolidates existing studies addressing human performance, subject matter expertise, training practices, and accident causation to provide a comprehensive account of the KSaOs underpinning safe civilian and commercial drone operations. Prior research demonstrates that early work drew heavily on military contexts, which may not generalize to contemporary civilian operations characterized by smaller platforms, single-pilot tasks, and diverse industry applications. Studies employing subject matter experts highlight cognitive demands in areas such as situational awareness, workload management, planning, fatigue recognition, perceptual acuity, and decision-making. Accident analyses, predominantly using the human factors accident classification system and related taxonomies, show that skill errors and preconditions for unsafe acts are the most frequent contributors to RPAS occurrences, with limited evidence of higher-level latent organizational factors in civilian contexts. Emerging research emphasizes that RPAS pilots increasingly perform data-collection tasks integral to professional workflows, requiring competencies beyond aircraft handling alone. The review identifies significant gaps in training specificity, selection processes, and taxonomy suitability, indicating opportunities for future research to refine RPAS competency frameworks and support improved operational safety. Full article
(This article belongs to the Special Issue Human Factors and Performance in Aviation Safety)

19 pages, 7461 KB  
Article
Walking Dynamics, User Variability, and Window Size Effects in FGO-Based Smartphone PDR+GNSS Fusion
by Amjad Hussain Magsi and Luis Enrique Díez
Sensors 2026, 26(2), 431; https://doi.org/10.3390/s26020431 - 9 Jan 2026
Viewed by 297
Abstract
The performance of smartphone-based pedestrian positioning strongly depends on the GNSS signal quality, the motion dynamics that influence PDR accuracy, and the way both sources of information are fused. While recent studies have shown the benefits of Factor Graph Optimization (FGO) for Pedestrian Dead Reckoning (PDR) + Global Navigation Satellite System (GNSS) fusion, the interaction between human motion, PDR errors, and FGO window configuration has not been systematically examined. This work investigates how walking dynamics affect the optimal configuration of sliding-window FGO (SWFGO), and to what extent FGO mitigates motion-dependent PDR errors compared with the Kalman Filter (KF). Using data collected from ten pedestrians performing four motion types (slow walking, normal walking, jogging, and running), we analyze (1) the relationship between walking speed and the FGO window size required to achieve stable positioning accuracy, and (2) the ability of FGO to suppress PDR outliers arising from motion irregularities across different users. The results show that a window size of around 10 poses offers the best overall balance between accuracy and computational load, providing substantial improvement over SWFGO with a 1-pose window and approaching the accuracy of batch FGO at a fraction of its cost. Increasing the window further to 30 poses yields only marginal accuracy gains while increasing computation, and this trend is consistent across all motion types. Additionally, FGO and SWFGO reduce PDR-induced outliers more effectively than KF across all users and motions, demonstrating improved robustness under gait variability and transient disturbances. Full article
(This article belongs to the Special Issue Smart Sensor Systems for Positioning and Navigation)
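To convey the sliding-window idea only (the paper's FGO operates on full pose graphs, not this toy problem), here is a 1-D sketch that fuses PDR step increments with sparse absolute fixes over a 10-pose window by weighted least squares; all data and noise levels are synthetic:

```python
# Toy 1-D sketch of sliding-window fusion (not the paper's FGO implementation):
# estimate the last W poses from PDR increments (relative factors) and noisy
# absolute fixes (GNSS-like factors) by weighted linear least squares.
import numpy as np

def solve_window(pdr_steps, gnss, sigma_pdr=0.1, sigma_gnss=2.0):
    """pdr_steps: W-1 relative displacements; gnss: dict {pose_index: position}."""
    W = len(pdr_steps) + 1
    rows, rhs, weights = [], [], []
    for i, d in enumerate(pdr_steps):            # relative factor: x_{i+1} - x_i = d
        r = np.zeros(W); r[i + 1], r[i] = 1.0, -1.0
        rows.append(r); rhs.append(d); weights.append(1.0 / sigma_pdr)
    for i, z in gnss.items():                    # absolute factor: x_i = z
        r = np.zeros(W); r[i] = 1.0
        rows.append(r); rhs.append(z); weights.append(1.0 / sigma_gnss)
    A = np.array(rows) * np.array(weights)[:, None]
    b = np.array(rhs) * np.array(weights)
    x, *_ = np.linalg.lstsq(A, b, rcond=None)
    return x

# Window of 10 poses: 9 PDR steps of ~0.7 m and absolute fixes at poses 0 and 9.
steps = 0.7 + 0.05 * np.random.default_rng(1).normal(size=9)
print(solve_window(steps, {0: 0.0, 9: 6.5}))
```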

19 pages, 917 KB  
Article
Leveraging Artificial Intelligence-Based Applications to Remove Disruptive Factors from Pharmaceutical Care: A Quantitative Study in Eastern Romania
by Ionela Daniela Ferțu, Alina Mihaela Elisei, Mariana Lupoae, Alexandra Burlacu, Claudia Simona Ștefan, Luminița Enache, Andrei Vlad Brădeanu, Loredana Sabina Pascu, Iulia Chiscop, Mădălina Nicoleta Matei, Aurel Nechita and Ancuța Iacob
Pharmacy 2026, 14(1), 7; https://doi.org/10.3390/pharmacy14010007 - 9 Jan 2026
Viewed by 369
Abstract
Artificial Intelligence (AI) has increasingly contributed to advancements in pharmaceutical practice, particularly by enhancing the pharmacist–patient relationship and improving medication adherence. This quantitative, descriptive, cross-sectional study investigated Eastern Romanian pharmacists’ perception of AI-based applications as effective optimization tools, correlating it with disruptive communication factors. An anonymous and online questionnaire was distributed to community pharmacists, examining sociodemographic characteristics, awareness of disruptive factors, and the perceived usefulness of AI. The sample included 437 respondents: pharmacists (55.6%), mostly female (83.8%), and aged between 25 and 44 (52.6%). Data analysis involved descriptive statistics and independent t-tests. The statistical analysis revealed a significantly positive perception (p < 0.001) of AI on pharmacist–patient communication. Respondents viewed AI as a valuable tool for reducing medication errors and optimizing counseling time, though they maintain a strong emphasis on genuine human interaction. Significant correlations were found between disruptive factors—such as noise and high patient volume—and the quality of communication. Participants also expressed an increased interest in applications like automatic prescription scheduling and the use of chatbots. The study concludes that a balanced implementation of AI technologies is necessary, one that runs parallel with the continuous development of pharmacists’ communication skills. Future research should focus on validating AI’s impact on clinical outcomes and establishing clear ethical guidelines regarding the use of patient data. Full article
(This article belongs to the Special Issue AI Use in Pharmacy and Pharmacy Education)
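The analysis relies on descriptive statistics and independent t-tests; a minimal sketch of such a test with invented scores (group sizes loosely echo the reported 437-respondent sample, but the values are not the study's data):

```python
# Minimal sketch (invented data): independent-samples t-test comparing
# perceived AI usefulness scores between two respondent groups.
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
group_a = rng.normal(loc=4.2, scale=0.6, size=243)   # hypothetical Likert-scale scores
group_b = rng.normal(loc=3.9, scale=0.7, size=194)

t, p = stats.ttest_ind(group_a, group_b, equal_var=False)  # Welch's t-test
print(f"t = {t:.2f}, p = {p:.4f}")
```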

30 pages, 6739 KB  
Article
A Fusion Algorithm for Pedestrian Anomaly Detection and Tracking on Urban Roads Based on Multi-Module Collaboration and Cross-Frame Matching Optimization
by Wei Zhao, Xin Gong, Lanlan Li and Luoyang Zuo
Sensors 2026, 26(2), 400; https://doi.org/10.3390/s26020400 - 8 Jan 2026
Viewed by 371
Abstract
Amid rapid advancements in artificial intelligence, the detection of abnormal human behaviors in complex traffic environments has garnered significant attention. However, detection errors frequently occur due to interference from complex backgrounds, small targets, and other factors. Therefore, this paper proposes a research methodology that integrates the anomaly detection YOLO-SGCF algorithm with the tracking BoT-SORT-ReID algorithm. The detection module uses YOLOv8 as the baseline model, incorporating Swin Transformer to enhance global feature modeling capabilities in complex scenes. CBAM and CA attention are embedded into the Neck and backbone, respectively: CBAM enables dual-dimensional channel-spatial weighting, while CA precisely captures object location features by encoding coordinate information. The Neck layer incorporates GSConv convolutional modules to reduce computational load while expanding feature receptive fields. The loss function is replaced with Focal-EIoU to address sample imbalance issues and precisely optimize bounding box regression. For tracking, to enhance long-term tracking stability, ReID feature distances are incorporated during the BoT-SORT data association phase. This integrates behavioral category information from YOLO-SGCF, enabling the identification and tracking of abnormal pedestrian behaviors in complex environments. Evaluations on our self-built dataset (covering four abnormal behaviors: Climb, Fall, Fight, Phone) show mAP@50%, precision, and recall reaching 92.2%, 90.75%, and 86.57% respectively—improvements of 3.4%, 4.4%, and 6% over the original model—while maintaining an inference speed of 328.49 FPS. Additionally, generalization testing on the UCSD Ped1 dataset (covering six abnormal behaviors: Biker, Skater, Car, Wheelchair, Lawn, Runner) yielded an mAP score of 92.7%, representing a 1.5% improvement over the original model and outperforming existing mainstream models. Furthermore, the tracking algorithm achieved an MOTA of 90.8% and an MOTP of 92.6%, with a 47.6% reduction in IDS, demonstrating superior tracking performance compared to existing mainstream algorithms. Full article
(This article belongs to the Section Intelligent Sensors)
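The reported mAP@50 values hinge on bounding-box IoU; as a self-contained reminder of that building block only (the Focal-EIoU loss used in the paper additionally penalizes centre distance and width/height differences), a short sketch:

```python
# Minimal sketch: IoU between two axis-aligned boxes in (x1, y1, x2, y2) form,
# the overlap measure underlying mAP@50 and the IoU family of regression losses.
def iou(a, b):
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter)

print(iou((0, 0, 10, 10), (5, 5, 15, 15)))  # 0.1428..., below the 0.5 match threshold
```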

19 pages, 667 KB  
Article
Replacing Stumbo’s Tables with Simple and Accurate Mathematical Modelling for Food Thermal Process Calculations
by Dario Friso
Processes 2026, 14(1), 155; https://doi.org/10.3390/pr14010155 - 2 Jan 2026
Viewed by 469
Abstract
The practical use of computational thermo-fluid dynamics (CFD) for food thermal process calculations still appears very premature due to both the high costs and the inhomogeneity and anisotropy of foods. Therefore, the traditional formula method with both Ball and Stumbo’s tables is still widely used due to its accuracy and safety. In both cases, the calculations require consulting and interpolating data from the respective tables, making the procedure slow and prone to human errors. The computerization of Ball’s tables to speed up and automate the calculations with a new mathematical approach based on the substitution of the integral exponential function and the initial cooling hyperbola has already been developed. The high accuracy obtained, superior to the direct regression of the table data, suggested adopting it also in the computerization of Stumbo’s tables. However, the latter are 14 times larger than those of Ball due to the extension of the thermo-bacteriological parameter z up to over 100 °C and the variability of the cooling lag factor Jcc. Therefore, the mathematical modelling was modified using an additional function, dependent on z and Jcc. The results obtained with the mathematical modelling showed a mean relative error and the standard deviation with respect to the Stumbo’s tables equal to MRE ± SD = 0.62% ± 1.29%. Further validation was obtained by calculating the thermal process time for different lethalities and thermo-bacteriological parameters with MRE ± SD compared to the Stumbo tables equal to 1.04% ± 0.82%. Full article
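The accuracy figures quoted (MRE ± SD relative to the tables) are straightforward to reproduce once paired model and table values are available; a small sketch with placeholder numbers:

```python
# Minimal sketch (placeholder numbers, not the paper's data): mean relative
# error and its standard deviation between modelled values and the
# corresponding Stumbo table entries.
import numpy as np

table_values = np.array([12.1, 15.4, 18.9, 22.3])   # hypothetical tabulated values
model_values = np.array([12.2, 15.3, 19.1, 22.2])   # hypothetical model output

rel_err = 100.0 * np.abs(model_values - table_values) / table_values
print(f"MRE ± SD = {rel_err.mean():.2f}% ± {rel_err.std(ddof=1):.2f}%")
```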

27 pages, 16705 KB  
Article
Development of an Ozone (O3) Predictive Emissions Model Using the XGBoost Machine Learning Algorithm
by Esteban Hernandez-Santiago, Edgar Tello-Leal, Jailene Marlen Jaramillo-Perez and Bárbara A. Macías-Hernández
Big Data Cogn. Comput. 2026, 10(1), 15; https://doi.org/10.3390/bdcc10010015 - 1 Jan 2026
Viewed by 550
Abstract
High concentrations of tropospheric ozone (O3) in urban areas pose a significant risk to human health. This study proposes an evaluation framework based on the XGBoost algorithm to predict O3 concentration, assessing the model’s capacity for seasonal extrapolation and spatial transferability. The experiment uses hourly air pollution data (O3, NO, NO2, and NOx) and meteorological factors (temperature, relative humidity, barometric pressure, wind speed, and wind direction) from six monitoring stations in the Monterrey Metropolitan Area, Mexico (from 22 September 2022 to 21 September 2023). In the preprocessing phase, the datasets were extended via feature engineering, including cyclic variables, rolling windows, and lag features, to capture temporal dynamics. The prediction models were optimized using a random search, with time-series cross-validation to prevent data leakage. The models were evaluated across a concentration range of 0.001 to 0.122 ppm, demonstrating high predictive accuracy, with a coefficient of determination (R2) of up to 0.96 and a root-mean-square error (RMSE) of 0.0034 ppm when predicting summer (O3) concentrations without prior knowledge. Spatial generalization was robust in residential areas (R2 > 0.90), but performance decreased in the industrial corridor (AQMS-NL03). We identified that this decrease is related to local complexity through the quantification of domain shift (Kolmogorov–Smirnov test) and Shapley additive explanations (SHAP) diagnostics, since the model effectively learns atmospheric inertia in stable areas but struggles with the stochastic effects of NOx titration driven by industrial emissions. These findings position the proposed approach as a reliable tool for “virtual detection” while highlighting the crucial role of environmental topology in model implementation. Full article
(This article belongs to the Special Issue Machine Learning and AI Technology for Sustainable Development)
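A minimal sketch of the style of temporal feature engineering and leakage-free evaluation the abstract describes, on a synthetic hourly series (all column names, parameters, and data are placeholders, not the authors' configuration):

```python
# Minimal sketch (not the authors' pipeline): cyclic/lag/rolling features for a
# synthetic hourly O3 series, then XGBoost scored with time-series cross-validation.
import numpy as np
import pandas as pd
from sklearn.model_selection import TimeSeriesSplit, cross_val_score
from xgboost import XGBRegressor

rng = np.random.default_rng(0)
idx = pd.date_range("2022-09-22", periods=24 * 365, freq="h")
o3 = 0.03 + 0.02 * np.sin(2 * np.pi * idx.hour / 24) + rng.normal(0, 0.005, len(idx))
df = pd.DataFrame({"o3": np.clip(o3, 0.0, None),
                   "temp": 25 + 8 * np.sin(2 * np.pi * idx.hour / 24)}, index=idx)

df["hour_sin"] = np.sin(2 * np.pi * df.index.hour / 24)   # cyclic encoding of hour
df["hour_cos"] = np.cos(2 * np.pi * df.index.hour / 24)
df["o3_lag1"] = df["o3"].shift(1)                         # lag feature
df["o3_roll24"] = df["o3"].rolling(24).mean()             # rolling-window feature
df = df.dropna()

X, y = df.drop(columns="o3"), df["o3"]
cv = TimeSeriesSplit(n_splits=5)                          # respects temporal order, no leakage
model = XGBRegressor(n_estimators=400, learning_rate=0.05, max_depth=4)
print("mean R2 across folds:",
      round(cross_val_score(model, X, y, cv=cv, scoring="r2").mean(), 2))
```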
