Search Results (235)

Search Parameters:
Keywords = train delays analysis

29 pages, 1289 KiB  
Article
An Analysis of Hybrid Management Strategies for Addressing Passenger Injuries and Equipment Failures in the Taipei Metro System: Enhancing Operational Quality and Resilience
by Sung-Neng Peng, Chien-Yi Huang, Hwa-Dong Liu and Ping-Jui Lin
Mathematics 2025, 13(15), 2470; https://doi.org/10.3390/math13152470 - 31 Jul 2025
Abstract
This study is the first to systematically integrate supervised machine learning (decision tree) and association rule mining techniques to analyze accident data from the Taipei Metro system, conducting a large-scale data-driven investigation into both passenger injury and train malfunction events. The research demonstrates strong novelty and practical contributions. In the passenger injury analysis, a dataset of 3331 cases was examined, from which two highly explanatory rules were extracted: (i) elderly passengers (aged > 61) involved in station incidents are more likely to suffer moderate to severe injuries; and (ii) younger passengers (aged ≤ 61) involved in escalator incidents during off-peak hours are also at higher risk of severe injury. This is the first study to quantitatively reveal the interactive effect of age and time of use on injury severity. In the train malfunction analysis, 1157 incidents with delays exceeding five minutes were analyzed. The study identified high-risk condition combinations—such as those involving rolling stock, power supply, communication, and signaling systems—associated with specific seasons and time periods (e.g., a lift value of 4.0 for power system failures during clear mornings from 06:00–12:00, and 3.27 for communication failures during summer evenings from 18:00–24:00). These findings were further cross-validated with maintenance records to uncover underlying causes, including brake system failures, cable aging, and automatic train operation (ATO) module malfunctions. Targeted preventive maintenance recommendations were proposed. Additionally, the study highlighted existing gaps in the completeness and consistency of maintenance records, recommending improvements in documentation standards and data auditing mechanisms. Overall, this research presents a new paradigm for intelligent metro system maintenance and safety prediction, offering substantial potential for broader adoption and practical application. Full article
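The association-rule side of the study above hinges on support, confidence, and lift (the abstract quotes lift values such as 4.0 for power-system failures in clear mornings). Below is a minimal sketch, under assumed data, of how those metrics are computed for a single candidate rule over a boolean incident table; the column names and values are hypothetical and are not the authors' data or code.

```python
import pandas as pd

# Hypothetical boolean incident table: each row is one delay event (>5 min),
# each column flags a condition or failure type. Values are illustrative only.
incidents = pd.DataFrame({
    "clear_morning_06_12": [1, 1, 0, 1, 0, 1, 0, 0],
    "power_system_failure": [1, 1, 0, 0, 0, 1, 0, 0],
})

def rule_metrics(df: pd.DataFrame, antecedent: str, consequent: str):
    """Support, confidence, and lift for the rule antecedent -> consequent."""
    p_a = df[antecedent].mean()                      # P(A)
    p_b = df[consequent].mean()                      # P(B)
    p_ab = (df[antecedent] & df[consequent]).mean()  # P(A and B)
    support = p_ab
    confidence = p_ab / p_a if p_a else float("nan")
    lift = p_ab / (p_a * p_b) if p_a * p_b else float("nan")
    return support, confidence, lift

s, c, l = rule_metrics(incidents, "clear_morning_06_12", "power_system_failure")
print(f"support={s:.2f} confidence={c:.2f} lift={l:.2f}")
```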

11 pages, 551 KiB  
Article
Artificial Neural Network for the Fast Screening of Samples from Suspected Urinary Tract Infections
by Cristiano Ialongo, Marco Ciotti, Alfredo Giovannelli, Flaminia Tomassetti, Martina Pelagalli, Stefano Di Carlo, Sergio Bernardini, Massimo Pieri and Eleonora Nicolai
Antibiotics 2025, 14(8), 768; https://doi.org/10.3390/antibiotics14080768 - 30 Jul 2025
Viewed by 173
Abstract
Background: Urine microbial analysis is a frequently requested test that is often associated with contamination during specimen collection or storage, which leads to false-positive diagnoses and delayed reporting. In the era of digitalization, machine learning (ML) can serve as a valuable tool to support clinical decision-making. Methods: This study investigates the application of a simple artificial neural network (ANN) to pre-identify negative and contaminated (false-positive) specimens. An ML model was developed using 8181 urine samples, including cytology, dipstick tests, and culture results. The dataset was randomly split 2:1 for training and testing a multilayer perceptron (MLP). Input variables with a normalized importance below 0.2 were excluded. Results: The final model used only microbial and either urine color or urobilinogen pigment analysis as inputs; other physical, chemical, and cellular parameters were omitted. The frequency of positive and negative specimens for bacteria was 6.9% and 89.6%, respectively. Contaminated specimens represented 3.5% of cases and were predominantly misclassified as negative by the MLP. Thus, the negative predictive value (NPV) was 96.5% and the positive predictive value (PPV) was 87.2%, with unnecessary microbial cultures (UMC) accounting for 0.82% of all cultures. Conclusions: These results suggest that the MLP is reliable for screening out negative specimens but less effective at identifying positive ones. ANN models can effectively support the screening of negative urine samples, detect clinically significant bacteriuria, and potentially reduce unnecessary cultures. Incorporating morphological information could further improve the accuracy of our model and minimize false negatives. Full article
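As a rough sketch of the screening pipeline described above (an MLP classifier judged mainly by its negative and positive predictive values), here is a scikit-learn version on synthetic stand-in data; the feature set and model settings are assumptions, and only the 2:1 split follows the abstract.

```python
import numpy as np
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import confusion_matrix

rng = np.random.default_rng(0)
# Hypothetical stand-in features: e.g., a bacteria count and a urine colour index.
X = rng.normal(size=(600, 2))
y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(scale=0.5, size=600) > 0.8).astype(int)

# 2:1 train/test split, as described in the abstract.
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=1/3, random_state=0)

clf = MLPClassifier(hidden_layer_sizes=(8,), max_iter=2000, random_state=0).fit(X_tr, y_tr)
tn, fp, fn, tp = confusion_matrix(y_te, clf.predict(X_te)).ravel()
ppv = tp / (tp + fp)   # positive predictive value
npv = tn / (tn + fn)   # negative predictive value
print(f"PPV={ppv:.3f}  NPV={npv:.3f}")
```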

23 pages, 524 KiB  
Article
Clinician Experiences with Adolescents with Comorbid Chronic Pain and Eating Disorders
by Emily A. Beckmann, Claire M. Aarnio-Peterson, Kendra J. Homan, Cathleen Odar Stough and Kristen E. Jastrowski Mano
J. Clin. Med. 2025, 14(15), 5300; https://doi.org/10.3390/jcm14155300 - 27 Jul 2025
Viewed by 336
Abstract
Background/Objectives: Chronic pain and eating disorders are two prevalent and disabling pediatric health concerns, with serious, life-threatening consequences. These conditions can co-occur, yet little is known about best practices addressing comorbid pain and eating disorders. Delayed intervention for eating disorders may have grave implications, as eating disorders have one of the highest mortality rates among psychological disorders. Moreover, chronic pain not only persists but worsens into adulthood when left untreated. This study aimed to understand pediatric clinicians’ experiences with adolescents with chronic pain and eating disorders. Methods: Semi-structured interviews were conducted with hospital-based physicians (N = 10; 70% female; M years of experience = 15.3) and psychologists (N = 10; 80% female; M years of experience = 10.2) specializing in anesthesiology/pain, adolescent medicine/eating disorders, and gastroenterology across the United States. Audio transcripts were coded, and thematic analysis was used to identify key themes. Results: Clinicians described frequently encountering adolescents with chronic pain and eating disorders. Clinicians described low confidence in diagnosing comorbid eating disorders and chronic pain, which they attributed to lack of screening tools and limited training. Clinicians collaborated with and consulted clinicians who encountered adolescents with chronic pain and/or eating disorders. Conclusions: Results reflect clinicians’ desire for additional resources, training, and collaboration to address the needs of this population. Targets for future research efforts in comorbid pain and eating disorders were highlighted. Specifically, results support the development of screening tools, program development to improve training in complex medical and psychiatric presentations, and methods to facilitate more collaboration and consultation across health care settings, disciplines, and specialties. Full article
(This article belongs to the Section Clinical Pediatrics)

19 pages, 3636 KiB  
Article
Research on Wellbore Trajectory Prediction Based on a Pi-GRU Model
by Hanlin Liu, Yule Hu and Zhenkun Wu
Appl. Sci. 2025, 15(15), 8317; https://doi.org/10.3390/app15158317 - 26 Jul 2025
Viewed by 191
Abstract
Accurate wellbore trajectory prediction is of great significance for enhancing the efficiency and safety of directional drilling in coal mines. However, traditional mechanical analysis methods have high computational complexity, and existing data-driven models cannot fully integrate non-sequential features such as stratum lithology. To solve these problems, this study proposes a parallel input gated recurrent unit (Pi-GRU) model based on the TensorFlow framework. The GRU network captures the temporal dependencies of sequence data (such as dip angle and azimuth angle), while the BP neural network extracts deep correlations from non-sequence features (such as stratum lithology), thereby achieving multi-source data fusion modeling. Orthogonal experimental design was adopted to optimize the model hyperparameters, and an ablation experiment confirmed the necessity of the parallel architecture. Experimental results based on data from a coal mine in Shanxi Province show that the mean square errors (MSE) of the azimuth and dip angles predicted by the Pi-GRU model are 0.06° and 0.01°, respectively. Compared with the emerging CNN-BiLSTM model, these errors are reduced by 66.67% and 76.92%, respectively. To evaluate the generalization performance of the model, we conducted cross-scenario validation on the dataset of the Dehong Coal Mine. The results showed that even under unknown geological conditions, the Pi-GRU model could still maintain high-precision predictions. The Pi-GRU model not only outperforms existing methods in prediction accuracy, with an inference delay of only 0.21 milliseconds, but also requires far less computing power for training and inference than the Jetson TX2 hardware can provide. This demonstrates that the model is practical and deployable in engineering settings, provides a new approach to real-time wellbore trajectory correction in intelligent drilling systems, and shows strong application potential. Full article
(This article belongs to the Section Computing and Artificial Intelligence)
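The parallel design described above, a GRU branch for sequential measurements plus a dense (BP-style) branch for non-sequential features that are fused before the output layer, can be sketched with the Keras functional API; the layer sizes, sequence length, and feature counts below are placeholders rather than the paper's tuned hyperparameters.

```python
import tensorflow as tf
from tensorflow.keras import layers, Model

seq_len, n_seq_feats, n_static_feats = 20, 2, 4  # placeholder dimensions

# Branch 1: GRU over the measurement sequence (e.g., dip and azimuth history).
seq_in = layers.Input(shape=(seq_len, n_seq_feats), name="sequence")
h_seq = layers.GRU(32)(seq_in)

# Branch 2: dense (BP-style) network over non-sequential features (e.g., lithology codes).
static_in = layers.Input(shape=(n_static_feats,), name="static")
h_static = layers.Dense(16, activation="relu")(static_in)

# Fuse both branches and regress the next trajectory angles.
merged = layers.Concatenate()([h_seq, h_static])
hidden = layers.Dense(32, activation="relu")(merged)
out = layers.Dense(2, name="dip_and_azimuth")(hidden)

model = Model(inputs=[seq_in, static_in], outputs=out)
model.compile(optimizer="adam", loss="mse")
model.summary()
```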

27 pages, 705 KiB  
Article
A Novel Wavelet Transform and Deep Learning-Based Algorithm for Low-Latency Internet Traffic Classification
by Ramazan Enisoglu and Veselin Rakocevic
Algorithms 2025, 18(8), 457; https://doi.org/10.3390/a18080457 - 23 Jul 2025
Viewed by 306
Abstract
Accurate and real-time classification of low-latency Internet traffic is critical for applications such as video conferencing, online gaming, financial trading, and autonomous systems, where millisecond-level delays can degrade user experience. Existing methods for low-latency traffic classification, reliant on raw temporal features or static statistical analyses, fail to capture dynamic frequency patterns inherent to real-time applications. These limitations hinder accurate resource allocation in heterogeneous networks. This paper proposes a novel framework integrating wavelet transform (WT) and artificial neural networks (ANNs) to address this gap. Unlike prior works, we systematically apply WT to commonly used temporal features, such as throughput, slope, ratio, and moving averages, transforming them into frequency-domain representations. This approach reveals hidden multi-scale patterns in low-latency traffic, akin to structured noise in signal processing, which traditional time-domain analyses often overlook. These wavelet-enhanced features train a multilayer perceptron (MLP) ANN, enabling dual-domain (time-frequency) analysis. We evaluate our approach on a dataset comprising FTP, video streaming, and low-latency traffic, including mixed scenarios with up to four concurrent traffic types. Experiments demonstrate 99.56% accuracy in distinguishing low-latency traffic (e.g., video conferencing) from FTP and streaming, outperforming k-NN, CNNs, and LSTMs, and 74.2–92.8% accuracy in mixed-traffic scenarios. Notably, our method eliminates reliance on deep packet inspection (DPI), offering ISPs a privacy-preserving and scalable solution for prioritizing time-sensitive traffic. By bridging signal processing and deep learning, this work advances efficient bandwidth allocation and improves quality of service in heterogeneous network environments. Full article
(This article belongs to the Section Algorithms for Multidisciplinary Applications)
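A minimal sketch of the wavelet-feature idea above: decompose a windowed throughput series with a discrete wavelet transform and feed the sub-band energies to an MLP. It uses PyWavelets and scikit-learn with assumed parameters (db4 wavelet, 3 levels, synthetic windows); the paper's exact features and network are not reproduced.

```python
import numpy as np
import pywt
from sklearn.neural_network import MLPClassifier

def wavelet_energy_features(window: np.ndarray, wavelet: str = "db4", level: int = 3):
    """Energy of each sub-band of a multilevel DWT of one traffic window."""
    coeffs = pywt.wavedec(window, wavelet, level=level)
    return np.array([np.sum(c ** 2) for c in coeffs])

rng = np.random.default_rng(1)
# Hypothetical throughput windows: class 0 = smooth bulk transfer, class 1 = bursty low-latency flow.
windows = [rng.normal(10, 0.2, 128) for _ in range(50)] + \
          [rng.normal(10, 0.2, 128) + rng.choice([0, 5], 128) for _ in range(50)]
labels = np.array([0] * 50 + [1] * 50)

X = np.vstack([wavelet_energy_features(w) for w in windows])
clf = MLPClassifier(hidden_layer_sizes=(16,), max_iter=2000, random_state=0).fit(X, labels)
print("training accuracy:", clf.score(X, labels))
```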

12 pages, 1396 KiB  
Article
Lateral Flow Assay to Detect Carbonic Anhydrase IX in Seromas of Breast Implant-Associated Anaplastic Large Cell Lymphoma
by Peng Xu, Katerina Kourentzi, Richard Willson, Honghua Hu, Anand Deva, Christopher Campbell and Marshall Kadin
Cancers 2025, 17(14), 2405; https://doi.org/10.3390/cancers17142405 - 21 Jul 2025
Viewed by 339
Abstract
Background/Objective: Breast implant-associated anaplastic large cell lymphoma (BIA-ALCL) has affected more than 1700 women with textured breast implants. About 80% of patients present with fluid (seroma) around their implant. BIA-ALCL can be cured by surgery alone when confined to the seroma and lining of the peri-implant capsule. To address the need for early detection, we developed a rapid point of care (POC) lateral flow assay (LFA) to identify lymphoma in seromas. Methods: We compared 28 malignant seromas to 23 benign seromas using both ELISA and LFA. LFA test lines (TL) and control lines (CL) were visualized and measured with imaging software and the TL/CL ratio for each sample was calculated. Results: By visual exam, the sensitivity for detection of CA9 was 93% and specificity 78%, while the positive predictive value was 84% and negative predictive value 90%. Quantitative image analysis increased the positive predictive value to 96% while the negative predictive value reduced to 79%. Conclusions: We conclude that CA9 is a sensitive biomarker for detection and screening of patients for BIA-ALCL in patients who present with seromas of unknown etiology. The CA9 LFA can potentially replace ELISA, flow cytometry and other tests requiring specialized equipment, highly trained personnel, larger amounts of fluid and delay in diagnosis of BIA-ALCL. Full article
(This article belongs to the Special Issue Pre-Clinical Studies of Personalized Medicine for Cancer Research)
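The quantitative readout described above reduces to a test-line/control-line (TL/CL) intensity ratio compared against a cutoff, followed by the standard diagnostic-accuracy formulas; the sketch below shows that arithmetic on made-up intensities with an assumed cutoff, not the study's calibration.

```python
import numpy as np

# Hypothetical measured line intensities for a handful of seroma samples.
test_line = np.array([180.0, 20.0, 150.0, 35.0, 90.0])
control_line = np.array([200.0, 210.0, 190.0, 205.0, 195.0])
truth = np.array([1, 0, 1, 0, 1])        # 1 = malignant (BIA-ALCL), 0 = benign

tl_cl = test_line / control_line          # TL/CL ratio per sample
predicted = (tl_cl >= 0.3).astype(int)    # assumed cutoff, not the study's value

tp = np.sum((predicted == 1) & (truth == 1))
tn = np.sum((predicted == 0) & (truth == 0))
fp = np.sum((predicted == 1) & (truth == 0))
fn = np.sum((predicted == 0) & (truth == 1))

print("sensitivity:", tp / (tp + fn))
print("specificity:", tn / (tn + fp))
print("PPV:", tp / (tp + fp))
print("NPV:", tn / (tn + fn))
```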

20 pages, 690 KiB  
Article
Wearable Sensor-Based Human Activity Recognition: Performance and Interpretability of Dynamic Neural Networks
by Dalius Navakauskas and Martynas Dumpis
Sensors 2025, 25(14), 4420; https://doi.org/10.3390/s25144420 - 16 Jul 2025
Viewed by 379
Abstract
Human Activity Recognition (HAR) using wearable sensor data is increasingly important in healthcare, rehabilitation, and smart monitoring. This study systematically compared three dynamic neural network architectures, the Finite Impulse Response Neural Network (FIRNN), Long Short-Term Memory (LSTM), and Gated Recurrent Unit (GRU), to examine their suitability and specificity for HAR tasks. A controlled experimental setup was applied, training 16,500 models across different delay lengths and hidden neuron counts. The investigation focused on classification accuracy, computational cost, and model interpretability. LSTM achieved the highest classification accuracy (98.76%), followed by GRU (97.33%) and FIRNN (95.74%), with FIRNN offering the lowest computational complexity. To improve model transparency, Layer-wise Relevance Propagation (LRP) was applied to both input and hidden layers. The results showed that gyroscope Y-axis data was consistently the most informative, while accelerometer Y-axis data was the least informative. LRP analysis also revealed that GRU distributed relevance more broadly across hidden units, while FIRNN relied more on a small subset. These findings highlight trade-offs between performance, complexity, and interpretability and provide practical guidance for applying explainable dynamic neural networks to wearable sensor-based HAR. Full article

11 pages, 363 KiB  
Article
The Role of Centralized Sexual Assault Care Centers in HIV Post-Exposure Prophylaxis Treatment Adherence: A Retrospective Single Center Analysis
by Stefano Malinverni, Shirine Kargar Samani, Christine Gilles, Agnès Libois and Floriane Bédoret
Infect. Dis. Rep. 2025, 17(4), 77; https://doi.org/10.3390/idr17040077 - 3 Jul 2025
Viewed by 317
Abstract
Background: Victims of sexual assault involving penetration are at risk of contracting human immunodeficiency virus (HIV). Post-exposure prophylaxis (PEP) can effectively prevent HIV infection if initiated within 72 h of exposure and adhered to for 28 days. Nonetheless, therapeutic adherence amongst sexual assault victims is low. Victim-centered care, provided by specially trained forensic nurses and midwives, may increase adherence. Methods: We conducted a retrospective case–control study to evaluate the impact of sexual assault center (SAC)-centered care on adherence to PEP compared to care received in the emergency department (ED). Data from January 2011 to February 2022 were reviewed. Multivariable logistic regression analysis was employed to determine the association between centralized specific care for sexual assault victims and completion of the 28-day PEP regimen. The secondary outcome assessed was provision of psychological support within 5 days following the assault. Results: We analyzed 856 patients, of whom 403 (47.1%) received care at a specialized center for sexual assault victims. Attendance at the SAC, relative to the ED, was not associated with a greater probability of PEP completion in either the unadjusted (52% vs. 50.6%; odds ratio [OR]: 1.06, 95% CI: 0.81 to 1.39; p = 0.666) or the adjusted (OR: 0.81, 95% CI: 0.58–1.11; p = 0.193) analysis. Care provided at the SAC was associated with improved early (42.7% vs. 21.5%; p < 0.001) and delayed (67.3% vs. 33.7%; p < 0.001) psychological support. Conclusions: SAC-centered care was not associated with higher PEP completion rates in sexual assault victims, although it was associated with improved access to early and delayed psychological support. Other measures to improve PEP completion rates should be developed. What is already known on this topic: Completion rates for HIV post-exposure prophylaxis (PEP) among victims of sexual assault are low. Specialized sexual assault centers, which provide comprehensive care and are distinct from emergency departments, have been suggested as a potential means of improving treatment adherence and completion rates. However, their actual impact on treatment completion remains unclear. What this study adds: This study found that HIV PEP completion rates in sexual assault victims were not significantly improved by centralized care in a specialized sexual assault center when compared to care initiated in the emergency department and continued within a sexually transmitted infection clinic. However, linkage to urgent psychological and psychiatric care was better in the specialized sexual assault center. How this study might affect research, practice or policy: Healthcare providers in sexual assault centers should be more aware of their critical role in promoting PEP adherence and improving completion rates. Policymakers should ensure that measures aimed at improving HIV PEP outcomes are implemented at all points of patient contact in these centers. Further research is needed to assess the cost-effectiveness of specialized sexual assault centers. Full article
(This article belongs to the Section Sexually Transmitted Diseases)
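The adjusted odds ratios quoted above come from multivariable logistic regression; a minimal statsmodels sketch is shown below, exponentiating the fitted coefficients to obtain ORs with 95% CIs. The covariates and data are invented for illustration and are not the study's variables.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 400
# Hypothetical covariates: care setting (1 = specialized SAC, 0 = ED) plus two confounders.
df = pd.DataFrame({
    "sac_care": rng.integers(0, 2, n),
    "age": rng.normal(30, 8, n),
    "early_psych_support": rng.integers(0, 2, n),
})
logit_p = -0.5 + 0.1 * df["sac_care"] + 0.02 * (df["age"] - 30) + 0.6 * df["early_psych_support"]
df["pep_completed"] = rng.binomial(1, 1 / (1 + np.exp(-logit_p)))

# Multivariable logistic regression; exponentiated coefficients are adjusted odds ratios.
X = sm.add_constant(df[["sac_care", "age", "early_psych_support"]])
model = sm.Logit(df["pep_completed"], X).fit(disp=0)

odds_ratios = np.exp(model.params)
ci = np.exp(model.conf_int())
print(pd.concat([odds_ratios.rename("OR"), ci.rename(columns={0: "2.5%", 1: "97.5%"})], axis=1))
```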

21 pages, 4080 KiB  
Article
M-Learning: Heuristic Approach for Delayed Rewards in Reinforcement Learning
by Cesar Andrey Perdomo Charry, Marlon Sneider Mora Cortes and Oscar J. Perdomo
Mathematics 2025, 13(13), 2108; https://doi.org/10.3390/math13132108 - 27 Jun 2025
Viewed by 345
Abstract
The current design of reinforcement learning methods requires extensive computational resources. Algorithms such as Deep Q-Network (DQN) have obtained outstanding results in advancing the field. However, the need to tune thousands of parameters and run millions of training episodes remains a significant challenge. This document proposes a comparative analysis between the Q-Learning algorithm, which laid the foundations for Deep Q-Learning, and our proposed method, termed M-Learning. The comparison is conducted using Markov Decision Processes with the delayed reward as a general test bench framework. Firstly, this document provides a full description of the main challenges related to implementing Q-Learning, particularly concerning its multiple parameters. Then, the foundations of our proposed heuristic are presented, including its formulation, and the algorithm is described in detail. The methodology used to compare both algorithms involved training them in the Frozen Lake environment. The experimental results, along with an analysis of the best solutions, demonstrate that our proposal requires fewer episodes and exhibits reduced variability in the outcomes. Specifically, M-Learning trains agents 30.7% faster in the deterministic environment and 61.66% faster in the stochastic environment. Additionally, it achieves greater consistency, reducing the standard deviation of scores by 58.37% and 49.75% in the deterministic and stochastic settings, respectively. The code will be made available in a GitHub repository upon this paper’s publication. Full article
(This article belongs to the Special Issue Metaheuristic Algorithms, 2nd Edition)
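For readers unfamiliar with the Q-Learning baseline that M-Learning is compared against, here is a compact tabular Q-Learning loop on the Frozen Lake environment via Gymnasium; the hyperparameters are arbitrary, and M-Learning itself is not reproduced here because its formulation is specific to the paper.

```python
import numpy as np
import gymnasium as gym

env = gym.make("FrozenLake-v1", is_slippery=True)
n_states, n_actions = env.observation_space.n, env.action_space.n
Q = np.zeros((n_states, n_actions))
alpha, gamma, epsilon = 0.1, 0.99, 0.1   # arbitrary hyperparameters

for episode in range(5000):
    state, _ = env.reset()
    done = False
    while not done:
        # Epsilon-greedy action selection.
        if np.random.random() < epsilon:
            action = env.action_space.sample()
        else:
            action = int(np.argmax(Q[state]))
        next_state, reward, terminated, truncated, _ = env.step(action)
        done = terminated or truncated
        # Standard Q-Learning update driven by the delayed terminal reward.
        Q[state, action] += alpha * (reward + gamma * np.max(Q[next_state]) - Q[state, action])
        state = next_state

print("greedy policy:\n", np.argmax(Q, axis=1).reshape(4, 4))
```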

17 pages, 23962 KiB  
Article
AI-Powered Mobile App for Nuclear Cataract Detection
by Alicja Anna Ignatowicz, Tomasz Marciniak and Elżbieta Marciniak
Sensors 2025, 25(13), 3954; https://doi.org/10.3390/s25133954 - 25 Jun 2025
Viewed by 527
Abstract
Cataract remains the leading cause of blindness worldwide, and the number of individuals affected by this condition is expected to rise significantly due to global population ageing. Early diagnosis is crucial, as delayed treatment may result in irreversible vision loss. This study presents a mobile application for Android devices designed for the detection of cataracts using deep learning models. The proposed solution utilizes a multi-stage classification approach to analyze ocular images acquired with a slit lamp, sourced from the Nuclear Cataract Database for Biomedical and Machine Learning Applications. The process involves identifying pathological features and assessing the severity of the detected condition, enabling comprehensive characterization of nuclear cataract (NC) progression based on the LOCS III classification scale. The evaluation included a range of convolutional neural network architectures, from larger models such as VGG16 and ResNet50 to lighter alternatives such as VGG11, ResNet18, MobileNetV2, and EfficientNet-B0. All models demonstrated comparable performance, with classification accuracies ranging from 91% to 94.5%. The trained models were optimized for mobile deployment, enabling real-time analysis of eye images captured with the device camera or selected from local storage. The presented mobile application, trained and validated on authentic clinician-labeled pictures, represents a significant advancement over existing mobile tools. Preliminary evaluations demonstrated high accuracy in cataract detection and severity grading. These results confirm that the approach is feasible and will serve as the foundation for ongoing development and extensions. Full article
(This article belongs to the Special Issue Recent Trends and Advances in Biomedical Optics and Imaging)
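A transfer-learning sketch in the spirit of the mobile-friendly backbones listed above (a frozen MobileNetV2 with a small classification head for severity grades); the class count, input size, and training settings are assumptions, and the study's slit-lamp dataset is not used.

```python
import tensorflow as tf
from tensorflow.keras import layers, Model

NUM_CLASSES = 4  # assumed number of LOCS III severity bins

base = tf.keras.applications.MobileNetV2(
    input_shape=(224, 224, 3), include_top=False, weights="imagenet")
base.trainable = False  # freeze the ImageNet backbone for the first training stage

inputs = layers.Input(shape=(224, 224, 3))
x = tf.keras.applications.mobilenet_v2.preprocess_input(inputs)
x = base(x, training=False)
x = layers.GlobalAveragePooling2D()(x)
x = layers.Dropout(0.2)(x)
outputs = layers.Dense(NUM_CLASSES, activation="softmax")(x)

model = Model(inputs, outputs)
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.summary()
# model.fit(train_ds, validation_data=val_ds, epochs=10)  # slit-lamp image datasets would go here
```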

14 pages, 2070 KiB  
Article
Comparative Analysis of Machine/Deep Learning Models for Single-Step and Multi-Step Forecasting in River Water Quality Time Series
by Hongzhe Fang, Tianhong Li and Huiting Xian
Water 2025, 17(13), 1866; https://doi.org/10.3390/w17131866 - 23 Jun 2025
Viewed by 538
Abstract
There is a lack of a systematic comparison framework that can assess models in both single-step and multi-step forecasting situations while balancing accuracy, training efficiency, and prediction horizon. This study aims to evaluate the predictive capabilities of machine learning and deep learning models in water quality time series forecasting. It used 22 months of data collected at a 4 h interval from two monitoring stations located in a tributary of the Pearl River. Seven models, specifically Support Vector Regression (SVR), XGBoost, K-Nearest Neighbors (KNN), Recurrent Neural Network (RNN), Long Short-Term Memory (LSTM) Network, Gated Recurrent Unit (GRU), and PatchTST, were employed in this study. In single-step forecasting, the LSTM Network achieved superior accuracy for a univariate feature set and attained an overall 22.0% (Welch’s t-test, p = 3.03 × 10⁻⁷) reduction in Mean Squared Error (MSE) compared with the machine learning models (SVR, XGBoost, KNN), while the RNN demonstrated significantly reduced training time. For a multivariate feature set, the deep learning models exhibited comparable accuracy, but no model achieved a significant increase in accuracy compared to the univariate scenario. The KNN model underperformed across error evaluation metrics, with the lowest accuracy, and the XGBoost model exhibited the highest computational complexity. In multi-step forecasting, the direct multi-step PatchTST model outperformed the iterated multi-step models (RNN, LSTM, GRU), with a reduced time-delay effect and a slower decrease in accuracy with increasing prediction length, but it still required specific adjustments to be better suited to river water quality time series forecasting. The findings provide actionable guidelines for model selection, balancing predictive accuracy, training efficiency, and forecasting horizon requirements in environmental time series analysis. Full article
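Single-step forecasting of a water-quality series, as compared above, amounts to building lagged windows and fitting a sequence model; below is a minimal Keras LSTM sketch on a synthetic 4 h-interval series. The window length and network size are arbitrary, not the paper's settings.

```python
import numpy as np
from tensorflow.keras import layers, Sequential

def make_windows(series: np.ndarray, lookback: int):
    """Turn a 1-D series into (samples, lookback, 1) inputs and next-step targets."""
    X = np.stack([series[i:i + lookback] for i in range(len(series) - lookback)])
    y = series[lookback:]
    return X[..., None], y

# Synthetic stand-in for a 4 h-interval water-quality indicator (e.g., dissolved oxygen).
t = np.arange(2000)
series = 8 + np.sin(2 * np.pi * t / 42) + 0.1 * np.random.default_rng(0).normal(size=t.size)

X, y = make_windows(series, lookback=24)
model = Sequential([
    layers.Input(shape=(24, 1)),
    layers.LSTM(32),
    layers.Dense(1),
])
model.compile(optimizer="adam", loss="mse")
model.fit(X, y, epochs=2, batch_size=64, verbose=0)
print("next-step prediction:", model.predict(X[-1:], verbose=0)[0, 0])
```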

25 pages, 9063 KiB  
Article
Zonal Estimation of the Earliest Winter Wheat Identification Time in Shandong Province Considering Phenological and Environmental Factors
by Jiaqi Chen, Xin Du, Chen Wang, Cheng Cai, Guanru Fang, Ziming Wang, Mengyu Liu and Huanxue Zhang
Agronomy 2025, 15(6), 1463; https://doi.org/10.3390/agronomy15061463 - 16 Jun 2025
Viewed by 351
Abstract
Early-season crop mapping plays a critical role in yield estimation, agricultural management, and policy-making. However, most existing methods assign a uniform earliest identification time across provincial or broader extents, overlooking spatial heterogeneity in crop phenology and environmental conditions. This often results in delayed detection or reduced mapping accuracy. To address this issue, we proposed a zonal-based early-season mapping framework for winter wheat by integrating phenological and environmental factors. Aggregation zones across Shandong Province were delineated using Principal Component Analysis (PCA) based on factors such as start of season, end of season, temperature, slope, and others. On this basis, early-season winter wheat identification was conducted for each zone individually. Training samples were generated using the Time-Weighted Dynamic Time Warping (TWDTW) method. Time-series datasets derived from Sentinel-1/2 imagery (2021–2022) were processed on the Google Earth Engine (GEE) platform, followed by feature selection and classification using the Random Forest (RF) algorithm. Results indicated that Shandong Province was divided into four zones (A–D), with Zone D (southwestern Shandong) achieving the earliest mapping by early December with an overall accuracy (OA) of 97.0%. Other zones reached optimal timing between late December and late January, all with OA above 95%. The zonal strategy improved OA by 3.6% compared to the non-zonal approach, demonstrated a high correlation with official municipal-level statistics (R2 = 0.97), and surpassed the ChinaWheat10 and ChinaWheatMap10 datasets in terms of crop differentiation and boundary delineation. Historical validation using 2017–2018 data from Liaocheng City, a prefecture-level city in Shandong Province, achieved an OA of 0.98 and an F1 score of 0.96, further confirming the temporal robustness of the proposed approach. This zonal strategy significantly enhances the accuracy and timeliness of early-season winter wheat mapping at a large scale. Full article
(This article belongs to the Section Precision and Digital Agriculture)
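Conceptually, the zonal workflow above is "cluster phenological and environmental variables into zones, then train a classifier per zone". The sketch below shows that two-step structure with scikit-learn (PCA plus k-means for zoning, one Random Forest per zone); the variables, zone count, and data are placeholders, since the study itself runs on Sentinel-1/2 time series in Google Earth Engine with TWDTW-generated samples.

```python
import numpy as np
import pandas as pd
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler
from sklearn.cluster import KMeans
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)
# Hypothetical per-pixel zoning variables: start/end of season, temperature, slope.
zoning = pd.DataFrame(rng.normal(size=(500, 4)),
                      columns=["start_of_season", "end_of_season", "temperature", "slope"])

# Reduce correlated variables, then delineate zones (the paper ends up with four, A to D).
pcs = PCA(n_components=2).fit_transform(StandardScaler().fit_transform(zoning))
zones = KMeans(n_clusters=4, random_state=0, n_init=10).fit_predict(pcs)

# Hypothetical spectral/temporal features and wheat labels for each pixel.
features = rng.normal(size=(500, 10))
is_wheat = rng.integers(0, 2, 500)

# Train one Random Forest per zone so each zone gets its own earliest-identification model.
models = {}
for z in np.unique(zones):
    mask = zones == z
    models[z] = RandomForestClassifier(n_estimators=100, random_state=0).fit(
        features[mask], is_wheat[mask])
    print(f"zone {z}: {mask.sum()} training pixels")
```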

16 pages, 272 KiB  
Review
Enhancing Safety and Quality of Cardiopulmonary Resuscitation During Coronavirus Pandemic
by Diána Pálok, Barbara Kiss, László Gergely Élő, Ágnes Dósa, László Zubek and Gábor Élő
J. Clin. Med. 2025, 14(12), 4145; https://doi.org/10.3390/jcm14124145 - 11 Jun 2025
Viewed by 547
Abstract
Background: Professional knowledge and the organization of healthcare changed and developed continuously as the waves of the COVID-19 pandemic progressed. However, carefully developed guidelines for cardiopulmonary resuscitation (CPR) remained largely unchanged regardless of the epidemic situation, with the largest change being a more prominent bioethical approach. It would be possible to further improve the quality of CPR through systematic data collection, the facilitation of prospective studies, and further development of the methodology based on this evidence, as well as by providing information and developing provisions on interventions with expected poor outcomes, and ultimately by refusing resuscitation. Methods: This study involved the critical collection and analysis of literature from the Web of Science and PubMed databases concerning bioethical aspects and the efficacy of CPR during the COVID-19 pandemic. Results: According to the current professional recommendation of the European Resuscitation Council (ERC), CPR should be initiated immediately in case of cardiac arrest in the absence of an exclusionary circumstance. One such circumstance is explicit refusal of CPR by a well-informed patient, which in practice takes the form of a prior declaration. The ERC prescribes the following conjunctive conditions for do-not-attempt-CPR (DNACPR) declarations: present, real, and applicable. It is recommended that the declaration be made as part of complex end-of-life planning, with the corresponding documentation available in an electronic database. The pandemic brought significant changes in resuscitation practice at both lay and professional levels. The incidence of out-of-hospital cardiac arrest (OHCA) did not differ from the previous period, while cardiac deaths in public places almost halved during the epidemic (p < 0.001), as did the use of AEDs (p = 0.037). The number of resuscitations performed by bystanders and by the emergency medical service (EMS) also showed a significant decrease (p = 0.001), and the most important interventions (defibrillation, first adrenaline time) suffered significant delays. Secondary survival until hospital discharge thus decreased by 50% during the pandemic period. Conclusions: The COVID-19 pandemic provided a significant impetus for the revision of guidelines. While the detailed methodology has changed only slightly compared to previous procedures, the DNACPR declaration regarding self-determination is now mentioned in the context of complex end-of-life planning. The issue of a safe environment has come to the fore for both lay and trained resuscitators. Future Directions: Prospective evaluation of standardized methods can further improve patient autonomy and quality of life. Since clinical data are controversial, further prospective controlled studies are needed to evaluate the real hazards of aerosol-generating procedures. Full article

26 pages, 4216 KiB  
Article
Exploration of the Ignition Delay Time of RP-3 Fuel Using the Artificial Bee Colony Algorithm in a Machine Learning Framework
by Wenbo Liu, Zhirui Liu and Hongan Ma
Energies 2025, 18(12), 3037; https://doi.org/10.3390/en18123037 - 8 Jun 2025
Cited by 1 | Viewed by 423
Abstract
Ignition delay time (IDT) is a critical parameter for evaluating the autoignition characteristics of aviation fuels. However, its accurate prediction remains challenging due to the complex coupling of temperature, pressure, and compositional factors, resulting in a high-dimensional and nonlinear problem. To address this challenge for the complex aviation kerosene RP-3, this study proposes a multi-stage hybrid optimization framework based on a five-input, one-output BP neural network. The framework—referred to as CGD-ABC-BP—integrates randomized initialization, conjugate gradient descent (CGD), the artificial bee colony (ABC) algorithm, and L2 regularization to enhance convergence stability and model robustness. The dataset includes 700 experimental and simulated samples, covering a wide range of thermodynamic conditions: 624–1700 K, 0.5–20 bar, and equivalence ratios φ = 0.5 − 2.0. To improve training efficiency, the temperature feature was linearized using a 1000/T transformation. Based on 30 independent resampling trials, the CGD-ABC-BP model with a three-hidden-layer structure of [21 17 19] achieved strong performance on internal test data: R2 = 0.994 ± 0.001, MAE = 0.04 ± 0.015, MAPE = 1.4 ± 0.05%, and RMSE = 0.07 ± 0.01. These results consistently outperformed the baseline model that lacked ABC optimization. On an entirely independent external test set comprising 70 low-pressure shock tube samples, the model still exhibited strong generalization capability, achieving R2 = 0.976 and MAPE = 2.18%, thereby confirming its robustness across datasets with different sources. Furthermore, permutation importance and local gradient sensitivity analysis reveal that the model can reliably identify and rank key controlling factors—such as temperature, diluent fraction, and oxidizer mole fraction—across low-temperature, NTC, and high-temperature regimes. The observed trends align well with established findings in the chemical kinetics literature. In conclusion, the proposed CGD-ABC-BP framework offers a highly accurate and interpretable data-driven approach for modeling IDT in complex aviation fuels, and it shows promising potential for practical engineering deployment. Full article
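One concrete detail in the abstract is the 1000/T linearization of the temperature input before training (ignition delay is roughly log-linear in inverse temperature); the sketch below applies that transform and fits a plain scikit-learn MLP regressor on synthetic Arrhenius-like data. The CGD-ABC hybrid optimizer is not reproduced, and only the [21 17 19] hidden-layer sizes are taken from the abstract.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(0)
n = 700
T = rng.uniform(624, 1700, n)        # temperature, K
P = rng.uniform(0.5, 20, n)          # pressure, bar
phi = rng.uniform(0.5, 2.0, n)       # equivalence ratio

# Synthetic Arrhenius-like target: log10(IDT) grows with 1000/T and falls with pressure.
log_idt = -4 + 3.0 * (1000.0 / T) - 0.5 * np.log10(P) + 0.1 * phi \
          + rng.normal(scale=0.05, size=n)

# The 1000/T transform turns the strong exponential temperature dependence into a
# near-linear feature, which makes training easier for the network.
X = np.column_stack([1000.0 / T, np.log10(P), phi])

model = make_pipeline(
    StandardScaler(),
    MLPRegressor(hidden_layer_sizes=(21, 17, 19),  # layer sizes quoted in the abstract
                 max_iter=5000, random_state=0))
model.fit(X, log_idt)
print("R^2 on training data:", model.score(X, log_idt))
```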

19 pages, 840 KiB  
Article
A Dual-Feature Framework for Enhanced Diagnosis of Myeloproliferative Neoplasm Subtypes Using Artificial Intelligence
by Amna Bamaqa, N. S. Labeeb, Eman M. El-Gendy, Hani M. Ibrahim, Mohamed Farsi, Hossam Magdy Balaha, Mahmoud Badawy and Mostafa A. Elhosseini
Bioengineering 2025, 12(6), 623; https://doi.org/10.3390/bioengineering12060623 - 7 Jun 2025
Viewed by 674
Abstract
Myeloproliferative neoplasms, particularly the Philadelphia chromosome-negative (Ph-negative) subtypes such as essential thrombocythemia, polycythemia vera, and primary myelofibrosis, present diagnostic challenges due to overlapping morphological features and clinical heterogeneity. Traditional diagnostic approaches, including imaging and histopathological analysis, are often limited by interobserver variability, delayed diagnosis, and subjective interpretations. To address these limitations, we propose a novel framework that integrates handcrafted and automatic feature extraction techniques for improved classification of Ph-negative myeloproliferative neoplasms. Handcrafted features capture interpretable morphological and textural characteristics. In contrast, automatic features utilize deep learning models to identify complex patterns in histopathological images. The extracted features were used to train machine learning models, with hyperparameter optimization performed using Optuna. Our framework achieved high performance across multiple metrics, including precision, recall, F1 score, accuracy, specificity, and weighted average. The concatenated probabilities, which combine both feature types, demonstrated the highest mean weighted average of 0.9969, surpassing the individual performances of handcrafted (0.9765) and embedded features (0.9686). Statistical analysis confirmed the robustness and reliability of the results. However, challenges remain in assuming normal distributions for certain feature types. This study highlights the potential of combining domain-specific knowledge with data-driven approaches to enhance diagnostic accuracy and support clinical decision-making. Full article
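The "concatenated probabilities" idea above (fusing the class-probability outputs of a handcrafted-feature model and a deep-feature model before the final decision) can be sketched as follows with scikit-learn; the two feature blocks are random placeholders rather than real texture descriptors or CNN embeddings, and the meta-classifier choice is an assumption.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 300
handcrafted = rng.normal(size=(n, 12))   # stand-in for morphology/texture descriptors
embedded = rng.normal(size=(n, 64))      # stand-in for deep-learning image embeddings
y = rng.integers(0, 3, n)                # three Ph-negative MPN subtypes

idx_tr, idx_te = train_test_split(np.arange(n), test_size=0.3, random_state=0)

# Train one classifier per feature type.
clf_hand = RandomForestClassifier(random_state=0).fit(handcrafted[idx_tr], y[idx_tr])
clf_deep = RandomForestClassifier(random_state=0).fit(embedded[idx_tr], y[idx_tr])

# Concatenate the two probability vectors and train a small meta-classifier on them.
# (In practice, out-of-fold probabilities should be used here to avoid leakage.)
def fused_probs(idx):
    return np.hstack([clf_hand.predict_proba(handcrafted[idx]),
                      clf_deep.predict_proba(embedded[idx])])

meta = LogisticRegression(max_iter=1000).fit(fused_probs(idx_tr), y[idx_tr])
print("fused accuracy:", meta.score(fused_probs(idx_te), y[idx_te]))
```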
