Search Results (552)

Search Parameters:
Keywords = sample information entropy

25 pages, 2357 KB  
Article
Heart Rate Variability Patterns Reflect Yoga Intervention in Chronically Stressed Pregnant Women: A Quasi-Randomized Controlled Trial
by Marlene J. E. Mayer, Nicolas B. Garnier, Clara Becker, Marta C. Antonelli, Silvia M. Lobmaier and Martin G. Frasch
Bioengineering 2025, 12(11), 1141; https://doi.org/10.3390/bioengineering12111141 - 22 Oct 2025
Abstract
Prenatal maternal stress (PS) is a risk factor for adverse offspring neurodevelopment. Heart rate variability (HRV) complexity provides a non-invasive marker of maternal autonomic regulation and may be influenced by mind–body interventions such as Yoga. In this quasi-randomized controlled trial, 28 chronically stressed pregnant women were followed from the second trimester until birth: 14 participated in weekly Hatha Yoga with electrocardiogram (ECG) recordings, and 14 received standard obstetric care with monthly ECGs. Group allocation was based on availability, with participants unaware of their assignment at enrollment. HRV complexity was assessed first with Sample Entropy and Entropy Rate and then expanded to 94 HRV metrics spanning temporal, frequency, nonlinear, and information-theoretical domains. All metrics were covariate-adjusted (maternal age, BMI, gestational age), standardized, and analyzed using timepoint-specific principal component analysis (PCA). From this, a unified HRV index was derived. Analyses revealed that HRV metric relationships changed dynamically across pregnancy, with PCA loadings shifting from frequency toward complexity measures in late gestation. The mixed-effects model identified a significant time × group interaction effect (p = 0.041). These findings suggest a restructuring of HRV signal-analytical domains with advancing pregnancy attributable to Yoga and highlight the utility of advanced HRV analysis frameworks for future, larger trials. Full article
(This article belongs to the Section Biosignal Processing)
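Sample Entropy, the first complexity measure this trial applies to maternal HRV, is simple to compute from its definition: count pairs of length-m templates that agree within tolerance r (B), repeat for length m + 1 (A), and return −ln(A/B). The sketch below is a minimal pure-Python illustration of that definition, not the authors' code; the default tolerance r = 0.2 × SD is a common convention assumed here.

```python
import math

def sample_entropy(x, m=2, r=None):
    """Sample Entropy SampEn(m, r) of a 1-D series (illustrative sketch).

    B counts pairs of length-m templates whose Chebyshev distance is
    below r; A does the same for length m + 1. Self-matches are
    excluded. r defaults to 0.2 * std(x), a common convention.
    """
    n = len(x)
    if r is None:
        mean = sum(x) / n
        r = 0.2 * math.sqrt(sum((v - mean) ** 2 for v in x) / n)

    def count_matches(length):
        # All overlapping templates of the given length.
        templates = [x[i:i + length] for i in range(n - length + 1)]
        count = 0
        for i in range(len(templates)):
            for j in range(i + 1, len(templates)):
                if max(abs(a - b) for a, b in zip(templates[i], templates[j])) < r:
                    count += 1
        return count

    b = count_matches(m)
    a = count_matches(m + 1)
    if a == 0 or b == 0:
        return float("inf")  # undefined for too-short or too-irregular series
    return -math.log(a / b)
```

A strictly periodic series yields a value near zero, while an irregular series scores higher, which is what makes the measure useful as a complexity index for RR-interval data.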

24 pages, 10663 KB  
Article
Feature Decomposition-Based Framework for Source-Free Universal Domain Adaptation in Mechanical Equipment Fault Diagnosis
by Peiyi Zhou, Weige Liang, Shiyan Sun and Qizheng Zhou
Mathematics 2025, 13(20), 3338; https://doi.org/10.3390/math13203338 - 20 Oct 2025
Viewed by 183
Abstract
Aiming at the problems of high complexity in source domain data, inaccessibility of target domain data, and unknown fault patterns in real-world industrial scenarios for mechanical fault diagnosis, this paper proposes a Feature Decomposition-based Source-Free Universal Domain Adaptation (FD-SFUniDA) framework for mechanical equipment fault diagnosis. First, the CBAM attention module is incorporated to enhance the ResNet-50 convolutional network for extracting feature information from source domain data. During the target domain adaptation phase, singular value decomposition is applied to the weights of the pre-trained model’s classification layer, orthogonally decoupling the feature space into a source-known subspace and a target-private subspace. Then, based on the magnitude of feature projections, a dynamic decision boundary is constructed and combined with an entropy threshold mechanism to accurately distinguish between known and unknown class samples. Furthermore, intra-class feature consistency is strengthened through neighborhood-expanded contrastive learning, and semantic weight calibration is employed to reconstruct the feature space, thereby suppressing the negative transfer effect. Finally, extensive experiments under multiple operating conditions on rolling bearing and reciprocating mechanism datasets demonstrate that the proposed method excels in addressing source-free fault diagnosis problems for mechanical equipment and shows promising potential for practical engineering applications in fault classification tasks. Full article

26 pages, 2425 KB  
Article
The Operational Safety Evaluation of UAVs Based on Improved Support Vector Machines
by Yulin Zhou and Shuguang Liu
Aerospace 2025, 12(10), 932; https://doi.org/10.3390/aerospace12100932 - 16 Oct 2025
Viewed by 193
Abstract
In response to the challenge of dynamic adaptability in operational safety assessment for UAVs operating in complex operational environments, this study proposes a novel operational safety assessment method based on an Improved Support Vector Machine. An operational safety assessment index system encompassing four dimensions—operator, UAV platform, flight environment, flight mission—is constructed to provide a comprehensive foundation for evaluation. The method introduces a dynamic weighted information entropy mechanism based on a sliding window, overcoming the static features and delayed response of traditional SVM methods. Additionally, it integrates Gaussian and polynomial kernel functions to significantly enhance the generalization capability and classification accuracy of the SVM model in complex operational environments. Experimental results show that the proposed model demonstrates superior performance on test samples, effectively improving the accuracy of operational safety assessment for the Reconnaissance–Strike UAV in complex operational environments, and offering a novel methodology for UAV safety assessment. Full article
(This article belongs to the Special Issue Airworthiness, Safety and Reliability of Aircraft)
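The abstract's integration of Gaussian and polynomial kernels can be realized in several ways; the simplest valid construction is a convex combination, since any non-negative weighted sum of positive semi-definite kernels is again a kernel. The sketch below is an illustrative assumption, not the paper's formulation; the mixing weight `lam` and the kernel parameters are placeholders.

```python
import math

def mixed_kernel(x, y, lam=0.5, gamma=0.5, degree=3, coef0=1.0):
    """Convex combination of a Gaussian (RBF) and a polynomial kernel.

    lam * k_rbf + (1 - lam) * k_poly remains positive semi-definite,
    so it can be plugged into any standard SVM solver as a custom
    kernel. lam trades locality (RBF) against global fit (polynomial).
    """
    sq_dist = sum((a - b) ** 2 for a, b in zip(x, y))
    rbf = math.exp(-gamma * sq_dist)
    dot = sum(a * b for a, b in zip(x, y))
    poly = (gamma * dot + coef0) ** degree
    return lam * rbf + (1.0 - lam) * poly
```

In scikit-learn, for instance, such a function can be passed directly as `SVC(kernel=mixed_kernel)` after fixing the hyperparameters.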

28 pages, 3748 KB  
Article
Enabling Adaptive Food Monitoring Through Sampling Rate Adaptation for Efficient, Reliable Critical Event Detection
by Elia Henrichs, Dana Jox, Pia Schweizer and Christian Krupitzer
J. Sens. Actuator Netw. 2025, 14(5), 102; https://doi.org/10.3390/jsan14050102 - 14 Oct 2025
Viewed by 313
Abstract
Monitoring systems are essential in many fields, such as food production, storage, and supply, to collect information about applications or their environments to enable decision-making. However, these systems generate massive amounts of data that require substantial processing. To improve data analysis efficiency and reduce data collectors’ energy demand, adaptive monitoring is a promising approach to reduce the gathered data while ensuring the monitoring of critical events. Adaptive monitoring is a system’s ability to adjust its monitoring activity during runtime in response to internal and external changes. This work investigates the application of adaptive monitoring—especially, the adaptation of the sensor sampling rate—in dynamic and unstable environments. This work evaluates 11 distinct approaches, based on threshold determination, statistical analysis techniques, and optimization methods, encompassing 33 customized implementations, regarding their data reduction extent and identification of critical events. Furthermore, analyses of Shannon’s entropy and the oscillation behavior allow for estimating the efficiency of the adaptation algorithms. The results demonstrate the applicability of adaptive monitoring in food storage environments, such as cold storage rooms and transportation containers, but also reveal differences in the approaches’ performance. Generally, some approaches achieve high observation accuracies while significantly reducing the data collected by adapting efficiently. Full article
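The abstract uses Shannon's entropy to gauge how much information the adapted sampling still captures. A minimal sketch of that idea, assuming a simple histogram-binning discretisation of a sensor trace (the paper's exact estimator is not specified): a near-constant window carries little information, a cue that sampling can be sparser, while high entropy suggests the signal is changing and sampling should stay dense.

```python
import math
from collections import Counter

def shannon_entropy(samples, bins=10, lo=None, hi=None):
    """Shannon entropy (bits) of a sensor trace after histogram binning.

    Values are assigned to equal-width bins between lo and hi; the
    entropy of the bin-occupancy distribution is returned. 0 bits for
    a constant trace, log2(bins) for a trace spread evenly over bins.
    """
    lo = min(samples) if lo is None else lo
    hi = max(samples) if hi is None else hi
    width = (hi - lo) / bins or 1.0  # guard against a zero-range trace
    counts = Counter(min(int((s - lo) / width), bins - 1) for s in samples)
    n = len(samples)
    return -sum((c / n) * math.log2(c / n) for c in counts.values())
```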

39 pages, 13725 KB  
Article
SRTSOD-YOLO: Stronger Real-Time Small Object Detection Algorithm Based on Improved YOLO11 for UAV Imageries
by Zechao Xu, Huaici Zhao, Pengfei Liu, Liyong Wang, Guilong Zhang and Yuan Chai
Remote Sens. 2025, 17(20), 3414; https://doi.org/10.3390/rs17203414 - 12 Oct 2025
Viewed by 942
Abstract
To address the challenges of small target detection in UAV aerial images—such as difficulty in feature extraction, complex background interference, high miss rates, and stringent real-time requirements—this paper proposes an innovative model series named SRTSOD-YOLO, based on YOLO11. The backbone network incorporates a Multi-scale Feature Complementary Aggregation Module (MFCAM), designed to mitigate the loss of small target information as network depth increases. By integrating channel and spatial attention mechanisms with multi-scale convolutional feature extraction, MFCAM effectively locates small objects in the image. Furthermore, we introduce a novel neck architecture termed Gated Activation Convolutional Fusion Pyramid Network (GAC-FPN). This module enhances multi-scale feature fusion by emphasizing salient features while suppressing irrelevant background information. GAC-FPN employs three key strategies: adding a detection head with a small receptive field while removing the original largest one, leveraging large-scale features more effectively, and incorporating gated activation convolutional modules. To tackle the issue of positive-negative sample imbalance, we replace the conventional binary cross-entropy loss with an adaptive threshold focal loss in the detection head, accelerating network convergence. Additionally, to accommodate diverse application scenarios, we develop multiple versions of SRTSOD-YOLO by adjusting the width and depth of the network modules: a nano version (SRTSOD-YOLO-n), small (SRTSOD-YOLO-s), medium (SRTSOD-YOLO-m), and large (SRTSOD-YOLO-l). Experimental results on the VisDrone2019 and UAVDT datasets demonstrate that SRTSOD-YOLO-n improves the mAP@0.5 by 3.1% and 1.2% compared to YOLO11n, while SRTSOD-YOLO-l achieves gains of 7.9% and 3.3% over YOLO11l, respectively. Compared to other state-of-the-art methods, SRTSOD-YOLO-l attains the highest detection accuracy while maintaining real-time performance, underscoring the superiority of the proposed approach. Full article

17 pages, 2165 KB  
Article
Seizure Type Classification Based on Hybrid Feature Engineering and Mutual Information Analysis Using Electroencephalogram
by Yao Miao
Entropy 2025, 27(10), 1057; https://doi.org/10.3390/e27101057 - 11 Oct 2025
Viewed by 285
Abstract
Epilepsy has diverse seizure types that challenge diagnosis and treatment, requiring automated and accurate classification to improve patient outcomes. Traditional electroencephalogram (EEG)-based diagnosis relies on manual interpretation, which is subjective and inefficient, particularly for multi-class differentiation in imbalanced datasets. This study aims to develop a hybrid framework for automated multi-class seizure type classification using segment-wise EEG processing and multi-band feature engineering to enhance precision and address data challenges. EEG signals from the TUSZ dataset were segmented into 1-s windows with 0.5-s overlaps, followed by the extraction of multi-band features, including statistical measures, sample entropy, wavelet energies, Hurst exponent, and Hjorth parameters. The mutual information (MI) approach was employed to select the optimal features, and seven machine learning models (SVM, KNN, DT, RF, XGBoost, CatBoost, LightGBM) were evaluated via 10-fold stratified cross-validation with a class balancing strategy. The results showed the following: (1) XGBoost achieved the highest performance (accuracy: 0.8710, F1 score: 0.8721, AUC: 0.9797), with γ-band features dominating importance. (2) Confusion matrices indicated robust discrimination but noted overlaps in focal subtypes. This framework advances seizure type classification by integrating multi-band features and the MI method, which offers a scalable and interpretable tool for supporting clinical epilepsy diagnostics. Full article
(This article belongs to the Section Signal and Data Analysis)
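The mutual information (MI) selection step this abstract describes scores each (discretised) feature against the class label and keeps the top-ranked features. A minimal plug-in estimator for discrete data, shown as an illustrative sketch rather than the authors' pipeline (which likely uses a library routine such as scikit-learn's `mutual_info_classif` on continuous features):

```python
import math
from collections import Counter

def mutual_information(xs, ys):
    """Plug-in estimate of I(X; Y) in bits for two discrete sequences.

    Sums p(x, y) * log2(p(x, y) / (p(x) p(y))) over observed pairs.
    Zero for independent variables; equal to H(X) when Y determines X.
    """
    n = len(xs)
    px = Counter(xs)
    py = Counter(ys)
    pxy = Counter(zip(xs, ys))
    mi = 0.0
    for (x, y), c in pxy.items():
        p_joint = c / n
        # p_joint / (p(x) * p(y)) written to avoid tiny denominators.
        mi += p_joint * math.log2(p_joint * n * n / (px[x] * py[y]))
    return mi
```

Ranking features by this score and keeping the top k is the usual filter-style selection the abstract refers to.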

26 pages, 4780 KB  
Article
Uncertainty Quantification Based on Block Masking of Test Images
by Pai-Xuan Wang, Chien-Hung Liu and Shingchern D. You
Information 2025, 16(10), 885; https://doi.org/10.3390/info16100885 - 11 Oct 2025
Viewed by 153
Abstract
In image classification tasks, models may occasionally produce incorrect predictions, which can lead to severe consequences in safety-critical applications. For instance, if a model mistakenly classifies a red traffic light as green, it could result in a traffic accident. Therefore, it is essential to assess the confidence level associated with each prediction. Predictions accompanied by high confidence scores are generally more reliable and can serve as a basis for informed decision-making. To address this, the present paper extends the block-scaling approach—originally developed for estimating classifier accuracy on unlabeled datasets—to compute confidence scores for individual samples in image classification. The proposed method, termed block masking confidence (BMC), applies a sliding mask filled with random noise to occlude localized regions of the input image. Each masked variant is classified, and predictions are aggregated across all variants. The final class is selected via majority voting, and a confidence score is derived based on prediction consistency. To evaluate the effectiveness of BMC, we conducted experiments comparing it against Monte Carlo (MC) dropout and a vanilla baseline across image datasets of varying sizes and distortion levels. While BMC does not consistently outperform the baselines under standard (in-distribution) conditions, it shows clear advantages on distorted and out-of-distribution (OOD) samples. Specifically, on the level-3 distorted iNaturalist 2018 dataset, BMC achieves a median expected calibration error (ECE) of 0.135, compared to 0.345 for MC dropout and 0.264 for the vanilla approach. On the level-3 distorted Places365 dataset, BMC yields an ECE of 0.173, outperforming MC dropout (0.290) and vanilla (0.201). For OOD samples in Places365, BMC achieves a peak entropy of 1.43, higher than the 1.06 observed for both MC dropout and vanilla. Furthermore, combining BMC with MC dropout leads to additional improvements. On distorted Places365, the median ECE is reduced to 0.151, and the peak entropy for OOD samples increases to 1.73. Overall, the proposed BMC method offers a promising framework for uncertainty quantification in image classification, particularly under challenging or distribution-shifted conditions. Full article
(This article belongs to the Special Issue Machine Learning and Data Mining for User Classification)
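The expected calibration error (ECE) figures this abstract reports have a standard definition: bin predictions by confidence and average the gap between each bin's mean confidence and its empirical accuracy, weighted by bin size (lower is better). A minimal sketch of that standard metric, not the paper's exact evaluation code:

```python
def expected_calibration_error(confidences, correct, n_bins=10):
    """Expected calibration error from per-sample confidence and correctness.

    confidences: per-sample top-class probabilities in [0, 1].
    correct:     per-sample booleans (prediction == label).
    Returns sum over bins of (bin size / n) * |mean confidence - accuracy|.
    """
    n = len(confidences)
    bins = [[] for _ in range(n_bins)]
    for conf, ok in zip(confidences, correct):
        idx = min(int(conf * n_bins), n_bins - 1)  # clamp conf == 1.0 into last bin
        bins[idx].append((conf, ok))
    ece = 0.0
    for b in bins:
        if not b:
            continue
        avg_conf = sum(c for c, _ in b) / len(b)
        acc = sum(1 for _, ok in b if ok) / len(b)
        ece += (len(b) / n) * abs(avg_conf - acc)
    return ece
```

An overconfident model (for example, 95% confidence with 50% accuracy) receives a large ECE, which is the behavior the distorted-data comparisons above are measuring.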

15 pages, 1215 KB  
Communication
Agegraphic Dark Energy from Entropy of the Anti-de Sitter Black Hole
by Qihong Huang, Yang Liu and He Huang
Universe 2025, 11(10), 336; https://doi.org/10.3390/universe11100336 - 10 Oct 2025
Viewed by 294
Abstract
In this paper, we analyze the agegraphic dark energy from the entropy of the anti-de Sitter black hole using the age of the universe as the IR cutoff. We constrain its parameter with the Pantheon+ Type Ia supernova sample and observational Hubble parameter data, finding that the Akaike Information Criterion cannot effectively distinguish this model from the standard ΛCDM model. The present value of the Hubble constant H0 and the model parameter b^2 are constrained to H0 = 67.7 ± 1.8 and b^2 = 0.303 (+0.019, −0.024). This model realizes the whole evolution of the universe, including the late-time accelerated expansion. Although it asymptotically approaches the standard ΛCDM model in the future, statefinder analysis shows that late-time deviations allow the two models to be distinguished. Full article
(This article belongs to the Special Issue Modified Gravity and Dark Energy Theories)

10 pages, 689 KB  
Article
Sex Differences in Foot Arch Structure Affect Postural Control and Energy Flow During Dynamic Tasks
by Xuan Liu, Shu Zhou, Yan Pan, Lei Li and Ye Liu
Life 2025, 15(10), 1550; https://doi.org/10.3390/life15101550 - 3 Oct 2025
Viewed by 539
Abstract
Background: This study investigated sex differences in foot arch structure and function, and their impact on postural control and energy flow during dynamic tasks. Findings aim to inform sex-specific training, movement assessment, and injury prevention strategies. Methods: A total of 108 participants (53 males and 55 females) underwent foot arch morphological assessments and performed a sit-to-stand (STS). Motion data were collected using an infrared motion capture system, three-dimensional force plates, and wireless surface electromyography. A rigid body model was constructed in Visual3D, and joint forces, segmental angular and linear velocities, center of pressure (COP), and center of mass (COM) were calculated using MATLAB. Segmental net energy was integrated to determine energy flow across different phases of the STS. Results: Arch stiffness was significantly higher in males. In terms of postural control, males exhibited significantly lower mediolateral COP frequency and anteroposterior COM peak velocity during the pre-seat-off phase, and lower COM displacement, peak velocity, and sample entropy during the post-seat-off phase compared to females. Conversely, males showed higher anteroposterior COM velocity before seat-off, and greater anteroposterior and vertical momentum after seat-off (p < 0.05). Regarding energy flow, males exhibited higher thigh muscle power, segmental net power during both phases, and greater shank joint power before seat-off. In contrast, females showed higher thigh joint power before seat-off and greater shank joint power after seat-off (p < 0.05). Conclusions: Significant sex differences in foot arch function influence postural control and energy transfer during STS. Compared to males, females rely on more frequent postural adjustments to compensate for lower arch stiffness, which may increase mechanical loading on the knee and ankle and elevate injury risk. Full article
(This article belongs to the Special Issue Focus on Exercise Physiology and Sports Performance: 2nd Edition)

29 pages, 17619 KB  
Article
Fusing Historical Records and Physics-Informed Priors for Urban Waterlogging Susceptibility Assessment: A Framework Integrating Machine Learning, Fuzzy Evaluation, and Decision Analysis
by Guangyao Chen, Wenxin Guan, Jiaming Xu, Chan Ghee Koh and Zhao Xu
Appl. Sci. 2025, 15(19), 10604; https://doi.org/10.3390/app151910604 - 30 Sep 2025
Viewed by 212
Abstract
Urban Waterlogging Susceptibility Assessment (UWSA) is vital for resilient urban planning and disaster preparedness. Conventional methods depend heavily on Historical Waterlogging Records (HWR), which are limited by their reliance on extreme rainfall events and prone to human omissions, resulting in spatial bias and incomplete coverage. While hydrodynamic models can simulate waterlogging scenarios, their large-scale application is restricted by the lack of accessible underground drainage data. Recently released flood control plans and risk maps provide valuable physics-informed priors (PI-Priors) that can supplement HWR for susceptibility modeling. This study introduces a dual-source integration framework that fuses HWR with PI-Priors to improve UWSA performance. PI-Priors rasters were vectorized to delineate two-dimensional waterlogging zones, and based on the Three-Way Decision (TWD) theory, a Multi-dimensional Connection Cloud Model (MCCM) with CRITIC-TOPSIS was employed to build an index system incorporating membership degree, credibility, and impact scores. High-quality samples were extracted and combined with HWR to create an enhanced dataset. A Maximum Entropy (MaxEnt) model was then applied with 20 variables spanning natural conditions, social capital, infrastructure, and built environment. The results demonstrate that this framework increases sample adequacy, reduces spatial bias, and substantially improves the accuracy and generalizability of UWSA under extreme rainfall. Full article
(This article belongs to the Topic Resilient Civil Infrastructure, 2nd Edition)

25 pages, 20020 KB  
Article
GLFNet: Attention Mechanism-Based Global–Local Feature Fusion Network for Micro-Expression Recognition
by Meng Zhang, Long Yao, Wenzhong Yang and Yabo Yin
Entropy 2025, 27(10), 1023; https://doi.org/10.3390/e27101023 - 28 Sep 2025
Viewed by 288
Abstract
Micro-expressions are extremely subtle and short-lived facial muscle movements that often reveal an individual’s genuine emotions. However, micro-expression recognition (MER) remains highly challenging due to its short duration, low motion intensity, and the imbalanced distribution of training samples. To address these issues, this paper proposes a Global–Local Feature Fusion Network (GLFNet) to effectively extract discriminative features for MER. Specifically, GLFNet consists of three core modules: the Global Attention (GA) module, which captures subtle variations across the entire facial region; the Local Block (LB) module, which partitions the feature map into four non-overlapping regions to emphasize salient local movements while suppressing irrelevant information; and the Adaptive Feature Fusion (AFF) module, which employs an attention mechanism to dynamically adjust channel-wise weights for efficient global–local feature integration. In addition, a class-balanced loss function is introduced to replace the conventional cross-entropy loss, mitigating the common issue of class imbalance in micro-expression datasets. Extensive experiments are conducted on three benchmark databases, SMIC, CASME II, and SAMM, under two evaluation protocols. The experimental results demonstrate that under the Composite Database Evaluation protocol, GLFNet consistently outperforms existing state-of-the-art methods in overall performance. Specifically, the unweighted F1-scores on the Combined, SAMM, CASME II, and SMIC datasets are improved by 2.49%, 2.02%, 0.49%, and 4.67%, respectively, compared to the current best methods. These results strongly validate the effectiveness and superiority of the proposed global–local feature fusion strategy in micro-expression recognition tasks. Full article
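The class-balanced loss mentioned here is usually plain cross-entropy reweighted per class. The abstract does not specify which weighting the authors use; the sketch below shows one widely used scheme, the "effective number of samples" weighting of Cui et al. (2019), as an illustrative assumption.

```python
def class_balanced_weights(counts, beta=0.999):
    """Per-class loss weights from the 'effective number of samples'.

    counts: number of training samples per class.
    Each class's raw weight is (1 - beta) / (1 - beta ** n_c), then the
    weights are rescaled to sum to the number of classes. Rare classes
    receive larger weights, which is how a class-balanced loss replaces
    plain cross-entropy on imbalanced micro-expression datasets.
    """
    raw = [(1.0 - beta) / (1.0 - beta ** n) for n in counts]
    scale = len(counts) / sum(raw)
    return [w * scale for w in raw]
```

The resulting weights are typically passed to the framework's cross-entropy loss (e.g. a per-class `weight` tensor in PyTorch's `CrossEntropyLoss`).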

23 pages, 3811 KB  
Article
NSCLC EGFR Mutation Prediction via Random Forest Model: A Clinical–CT–Radiomics Integration Approach
by Anass Benfares, Badreddine Alami, Sara Boukansa, Mamoun Qjidaa, Ikram Benomar, Mounia Serraj, Ahmed Lakhssassi, Mohammed Ouazzani Jamil, Mustapha Maaroufi and Hassan Qjidaa
Adv. Respir. Med. 2025, 93(5), 39; https://doi.org/10.3390/arm93050039 - 26 Sep 2025
Viewed by 495
Abstract
Non-small cell lung cancer (NSCLC) is the leading cause of cancer-related mortality worldwide. Accurate determination of epidermal growth factor receptor (EGFR) mutation status is essential for selecting patients eligible for tyrosine kinase inhibitors (TKIs). However, invasive genotyping is often limited by tissue accessibility and sample quality. This study presents a non-invasive machine learning model combining clinical data, CT morphological features, and radiomic descriptors to predict EGFR mutation status. A retrospective cohort of 138 patients with confirmed EGFR status and pre-treatment CT scans was analyzed. Radiomic features were extracted with PyRadiomics, and feature selection applied mutual information, Spearman correlation, and wrapper-based methods. Five Random Forest models were trained with different feature sets. The best-performing model, based on 11 selected variables, achieved an AUC of 0.91 (95% CI: 0.81–1.00) under stratified five-fold cross-validation, with an accuracy of 0.88 ± 0.03. Subgroup analysis showed that EGFR-WT had a performance of precision 0.93 ± 0.04, recall 0.92 ± 0.03, F1-score 0.91 ± 0.02, and EGFR-Mutant had a performance of precision 0.76 ± 0.05, recall 0.71 ± 0.05, F1-score 0.68 ± 0.04. SHapley Additive exPlanations (SHAP) analysis identified tobacco use, enhancement pattern, and gray-level-zone entropy as key predictors. Decision curve analysis confirmed clinical utility, supporting its role as a non-invasive tool for EGFR-screening. Full article

25 pages, 7439 KB  
Article
COA–VMPE–WD: A Novel Dual-Denoising Method for GPS Time Series Based on Permutation Entropy Constraint
by Ziyu Wang and Xiaoxing He
Appl. Sci. 2025, 15(19), 10418; https://doi.org/10.3390/app151910418 - 25 Sep 2025
Viewed by 210
Abstract
To address the challenge of effectively filtering out noise components in GPS coordinate time series, we propose a denoising method based on parameter-optimized variational mode decomposition (VMD). The method combines permutation entropy with mutual information as the fitness function and uses the crayfish optimization algorithm (COA) to adaptively obtain the optimal combination of the number of modal decompositions and the quadratic penalty factor for VMD; sample entropy is then used to identify the effective mode components (IMFs), which are reconstructed into a denoised signal, achieving effective separation of signal and noise. Experiments were conducted using simulated signals and data from 52 GPS stations of CMONOC to compare the COA–VMPE–WD method with wavelet denoising (WD), empirical mode decomposition (EMD), ensemble empirical mode decomposition (EEMD), and complete ensemble empirical mode decomposition with adaptive noise (CEEMDAN). The results show that the COA–VMPE–WD method effectively removes noise from GNSS coordinate time series while preserving the original features of the signal, with the most significant effect on the U component. The COA–VMPE–WD method reduced station velocity by an average of 50.00%, 59.09%, 18.18%, and 64.00% compared to the WD, EMD, EEMD, and CEEMDAN methods, respectively. Its noise reduction outperforms the other four methods, providing reliable data for subsequent analysis and processing. Full article
(This article belongs to the Special Issue Advanced GNSS Technologies: Measurement, Analysis, and Applications)
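The permutation entropy that constrains this method's fitness function maps each short window of the series to the ordinal pattern of its values and takes the entropy of the pattern distribution. A minimal sketch of the standard Bandt–Pompe definition, not the authors' implementation; `order=3` is a typical embedding dimension assumed here.

```python
import math

def permutation_entropy(x, order=3, normalize=True):
    """Permutation entropy of a 1-D series (Bandt–Pompe, illustrative).

    Each length-`order` window is reduced to the permutation that sorts
    its values (its ordinal pattern); the Shannon entropy of the pattern
    distribution measures complexity. Normalised by log2(order!), it is
    0 for a monotone series and approaches 1 for white noise.
    """
    counts = {}
    for i in range(len(x) - order + 1):
        window = x[i:i + order]
        pattern = tuple(sorted(range(order), key=lambda k: window[k]))  # argsort
        counts[pattern] = counts.get(pattern, 0) + 1
    total = sum(counts.values())
    h = -sum((c / total) * math.log2(c / total) for c in counts.values())
    if normalize:
        h /= math.log2(math.factorial(order))
    return h
```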

26 pages, 3429 KB  
Article
A Robust AI Framework for Safety-Critical LIB Degradation Prognostics: SE-VMD and Dual-Branch GRU-Transformer
by Yang Liu, Quan Li, Jinqi Zhu, Bo Zhang and Jia Guo
Electronics 2025, 14(19), 3794; https://doi.org/10.3390/electronics14193794 - 24 Sep 2025
Viewed by 361
Abstract
Lithium-ion batteries (LIBs) are critical components in safety-critical systems such as electric vehicles, aerospace, and grid-scale energy storage. Their degradation over time can lead to catastrophic failures, including thermal runaway and uncontrolled combustion, posing severe threats to human safety and infrastructure. Developing a robust AI framework for degradation prognostics in safety-critical systems is essential to mitigate these risks and ensure operational safety. However, sensor noise, dynamic operating conditions, and the multi-scale nature of degradation processes complicate this task. Traditional denoising and modeling approaches often fail to preserve informative temporal features or capture both abrupt fluctuations and long-term trends simultaneously. To address these limitations, this paper proposes a hybrid data-driven framework that combines Sample Entropy-guided Variational Mode Decomposition (SE-VMD) with K-means clustering for adaptive signal preprocessing. The SE-VMD algorithm automatically determines the optimal number of decomposition modes, while K-means separates high- and low-frequency components, enabling robust feature extraction. A dual-branch architecture is designed, where Gated Recurrent Units (GRUs) extract short-term dynamics from high-frequency signals, and Transformers model long-term trends from low-frequency signals. This dual-branch approach ensures comprehensive multi-scale degradation feature learning. Additionally, experiments with varying sliding window sizes are conducted to optimize temporal modeling and enhance the framework's robustness and generalization. Benchmark dataset evaluations demonstrate that the proposed method outperforms traditional approaches in prediction accuracy and stability under diverse conditions. The framework directly contributes to Artificial Intelligence for Security by providing a reliable solution for battery health monitoring in safety-critical applications, enabling early risk mitigation and ensuring operational safety in real-world scenarios. Full article
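The SE-VMD step described above uses Sample Entropy as the regularity criterion for choosing the number of VMD modes: a mode's entropy drops when it captures a coherent oscillation and rises when it contains mostly noise. The paper's exact implementation is not given; the following is a minimal NumPy sketch of the standard SampEn(m, r) definition (Chebyshev distance, self-matches excluded), with the function name and defaults chosen here for illustration.

```python
import numpy as np

def sample_entropy(x, m=2, r_factor=0.2):
    """Sample Entropy SampEn(m, r) of a 1-D series.

    Counts template matches of length m and m+1 within tolerance
    r = r_factor * std(x) under the Chebyshev distance, excluding
    self-matches, and returns -ln(A/B).
    """
    x = np.asarray(x, dtype=float)
    r = r_factor * np.std(x)
    n = len(x)

    def count_matches(length):
        # All overlapping templates of the given length
        templates = np.array([x[i:i + length] for i in range(n - length)])
        count = 0
        for i in range(len(templates)):
            # Chebyshev distance to all *later* templates (no self-match)
            d = np.max(np.abs(templates[i + 1:] - templates[i]), axis=1)
            count += int(np.sum(d <= r))
        return count

    b = count_matches(m)      # matches at template length m
    a = count_matches(m + 1)  # matches at template length m + 1
    return -np.log(a / b) if a > 0 and b > 0 else np.inf
```

In an SE-VMD-style loop one would decompose the signal for increasing mode counts K and stop once the entropy of the extracted modes stabilizes; a regular signal (e.g. a sinusoid) scores much lower than white noise, which is what makes the measure usable as a mode-selection criterion.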
24 pages, 374 KB  
Article
Research on the Impact of Enterprise Artificial Intelligence on Supply Chain Resilience: Empirical Evidence from Chinese Listed Companies
by Lijie Lin and Xiangyu Zhang
Sustainability 2025, 17(19), 8576; https://doi.org/10.3390/su17198576 - 24 Sep 2025
Abstract
Artificial intelligence (AI), as a strategic technology leading the current technological revolution and industrial transformation, is a pivotal catalyst for high-quality supply chain development and a primary engine driving supply chains toward environmentally sustainable, low-carbon models. This study seeks to clarify how AI bolsters supply chain resilience through enhanced information transparency and dynamic capabilities, and examines the moderating influence of digital government in this context. To this end, the study takes A-share listed companies from 2012 to 2023 as its research sample. An entropy-based weighting approach was used to construct a supply chain resilience indicator system, and a two-way fixed-effects model was employed to analyze the mechanism by which enterprise AI affects supply chain resilience. The results demonstrate that enterprise AI can markedly improve supply chain resilience. In this process, information transparency, innovative capacity, and absorptive capacity partially mediate the effect, while digital government exerts a positive moderating influence. Heterogeneity analyses indicate that the favorable effect of AI on supply chain resilience is significantly greater for high-tech corporations, manufacturing firms, growth-stage and mature-stage companies, and chain-master enterprises. The findings not only reveal the impact and underlying mechanisms of enterprise AI on supply chain resilience, offering a new perspective for systematically understanding their relationship, but also provide key pathways and empirical evidence for leveraging digital technologies to build sustainable supply chains. Full article
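The entropy-based indicator system mentioned above is typically built with the entropy-weight method: each indicator is min-max normalized, its information entropy is computed across the sample, and indicators with more dispersion (lower entropy, hence more discriminating power) receive larger weights. The paper's exact indicator set is not reproduced here; this is a generic NumPy sketch with function names chosen for illustration, assuming all indicators are positively oriented.

```python
import numpy as np

def entropy_weights(X):
    """Entropy-weight method for building a composite index.

    X: (n_samples, n_indicators) matrix of positive-oriented indicators.
    Returns per-indicator weights summing to 1; indicators with more
    dispersion (lower information entropy) receive larger weights.
    """
    X = np.asarray(X, dtype=float)
    n = X.shape[0]
    # Min-max normalize each indicator column to [0, 1]
    Z = (X - X.min(axis=0)) / (X.max(axis=0) - X.min(axis=0))
    # Proportion of each sample under each indicator (epsilon avoids log 0)
    P = (Z + 1e-12) / (Z + 1e-12).sum(axis=0)
    # Information entropy per indicator, scaled to [0, 1] by 1/ln(n)
    e = -(P * np.log(P)).sum(axis=0) / np.log(n)
    # Redundancy (1 - entropy) normalized into weights
    d = 1.0 - e
    return d / d.sum()

def composite_score(X):
    """Weighted sum of normalized indicators, e.g. a resilience index."""
    X = np.asarray(X, dtype=float)
    Z = (X - X.min(axis=0)) / (X.max(axis=0) - X.min(axis=0))
    return Z @ entropy_weights(X)
```

A firm-year panel of resilience indicators fed through `composite_score` yields the scalar resilience measure that then serves as the dependent variable in the two-way fixed-effects regression.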
(This article belongs to the Section Economic and Business Aspects of Sustainability)
