Search Results (762)

Search Parameters:
Keywords = autoregressive processes

16 pages, 1121 KB  
Article
A Residual Control Chart Based on Convolutional Neural Network for Normal Interval-Censored Data
by Pei-Hsi Lee
Mathematics 2026, 14(3), 423; https://doi.org/10.3390/math14030423 - 26 Jan 2026
Viewed by 64
Abstract
To reduce reliability testing time, experiments are often terminated at a predetermined time, producing right-censored lifetime data. Alternatively, when test samples are inspected at fixed intervals, failures are only observed within these intervals, resulting in interval-censored lifetime data. Although quality control methods for right-censored data are well established, relatively little attention has been given to interval-censored observations. Motivated by the success of residual control charts based on convolutional neural network (CNN) for right-censored data, this study extends the chart for monitoring normally distributed interval-censored lifetime data. Simulation results based on average run length (ARL) indicate that the proposed method outperforms the traditional exponentially weighted moving average (EWMA) chart in detecting decreases in mean lifetime. The findings also highlight the practical benefits of employing high- or low-order autoregressive CNN models depending on the magnitude of process shifts. Full article
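For readers unfamiliar with the EWMA baseline this abstract compares against, a standard EWMA control chart with time-varying limits can be sketched in a few lines of numpy; the smoothing weight, limit width, and simulated mean shift below are illustrative choices, not the paper's settings.

```python
import numpy as np

def ewma_chart(x, lam=0.2, L=3.0):
    """Standard EWMA chart: returns the EWMA statistic z_t and the index
    of the first out-of-control sample (or None if no signal)."""
    x = np.asarray(x, dtype=float)
    mu0, sigma = x[:50].mean(), x[:50].std(ddof=1)  # baseline from in-control phase
    z = np.empty_like(x)
    prev, signal = mu0, None
    for t, xt in enumerate(x):
        prev = lam * xt + (1 - lam) * prev          # EWMA recursion
        z[t] = prev
        # time-varying control limits that converge to the asymptotic width
        w = L * sigma * np.sqrt(lam / (2 - lam) * (1 - (1 - lam) ** (2 * (t + 1))))
        if signal is None and abs(prev - mu0) > w:
            signal = t
    return z, signal

rng = np.random.default_rng(0)
x = np.concatenate([rng.normal(0.0, 1.0, 100),    # in-control samples
                    rng.normal(-1.5, 1.0, 50)])   # downward mean shift after sample 100
z, first_alarm = ewma_chart(x)
```

The ARL comparisons in the paper are computed over many such simulated runs; this sketch shows a single run.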

27 pages, 3763 KB  
Article
GO-PILL: A Geometry-Aware OCR Pipeline for Reliable Recognition of Debossed and Curved Pill Imprints
by Jaehyeon Jo, Sungan Yoon and Jeongho Cho
Mathematics 2026, 14(2), 356; https://doi.org/10.3390/math14020356 - 21 Jan 2026
Viewed by 149
Abstract
Manual pill identification is often inefficient and error-prone due to the large variety of medications and frequent visual similarity among pills, leading to misuse or dispensing errors. These challenges are exacerbated when pill imprints are engraved, curved, or irregularly arranged, conditions under which conventional optical character recognition (OCR)-based methods degrade significantly. To address this problem, we propose GO-PILL, a geometry-aware OCR pipeline for robust pill imprint recognition. The framework extracts text centerlines and imprint regions using the TextSnake algorithm. During imprint refinement, background noise is suppressed and contrast is enhanced to improve the visibility of embossed and debossed imprints. The imprint localization and alignment stage then rectifies curved or obliquely oriented text into a linear representation, producing geometrically normalized inputs suitable for OCR decoding. The refined imprints are processed by a multimodal OCR module that integrates a non-autoregressive language–vision fusion architecture for accurate character-level recognition. Experiments on a pill image dataset from the U.S. National Library of Medicine show that GO-PILL achieves an F1-score of 81.83% under set-based evaluation and a Top-10 pill identification accuracy of 76.52% in a simulated clinical scenario. GO-PILL consistently outperforms existing methods under challenging imprint conditions, demonstrating strong robustness and practical feasibility. Full article
(This article belongs to the Special Issue Applications of Deep Learning and Convolutional Neural Network)

26 pages, 2749 KB  
Article
Deep-Learning-Driven Adaptive Filtering for Non-Stationary Signals: Theory and Simulation
by Manuel J. Cabral S. Reis
Electronics 2026, 15(2), 381; https://doi.org/10.3390/electronics15020381 - 15 Jan 2026
Viewed by 212
Abstract
Adaptive filtering remains a cornerstone of modern signal processing but faces fundamental challenges when confronted with rapidly changing or nonlinear environments. This work investigates the integration of deep learning into adaptive-filter architectures to enhance tracking capability and robustness in non-stationary conditions. After reviewing and analyzing classical algorithms—LMS, NLMS, RLS, and a variable step-size LMS (VSS-LMS)—their theoretical stability and mean-square error behavior are formalized under a slow-variation system model. Comprehensive simulations using drifting autoregressive (AR(2)) processes, piecewise-stationary FIR systems, and time-varying sinusoidal signals confirm the classical trade-off between performance and complexity: RLS achieves the lowest steady-state error, at a quadratic cost, whereas LMS remains computationally efficient with slower adaptation. A stabilized VSS-LMS algorithm is proposed to balance these extremes; the results show that it maintains numerical stability under abrupt parameter jumps while attaining steady-state MSEs that are comparable to RLS (approximately 3 × 10⁻²) and superior robustness to noise. These findings are validated by theoretical tracking-error bounds that are derived for bounded parameter drift. Building on this foundation, a deep-learning-driven adaptive filter is introduced, where the update rule is parameterized by a neural function, Uθ, that generalizes the classical gradient descent. This approach offers a pathway toward adaptive filters that are capable of self-tuning and context-aware learning, aligning with emerging trends in AI-augmented system architectures and next-generation computing. Future work will focus on online learning and FPGA/ASIC implementations for real-time deployment. Full article
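As a point of reference for the classical algorithms this abstract analyzes, a minimal LMS one-step predictor for a stationary AR(2) process can be sketched as follows; the coefficients, step size, and noise level are illustrative and do not reproduce the paper's simulation settings.

```python
import numpy as np

rng = np.random.default_rng(1)
a1, a2 = 0.75, -0.5                 # illustrative stationary AR(2) coefficients
n = 5000
x = np.zeros(n)
for t in range(2, n):
    x[t] = a1 * x[t-1] + a2 * x[t-2] + rng.normal(0, 0.1)

# 2-tap LMS one-step predictor: the weights should settle near (a1, a2)
mu = 0.2                            # step size, well inside the stability bound
w = np.zeros(2)
for t in range(2, n):
    u = np.array([x[t-1], x[t-2]])  # regressor of past samples
    err = x[t] - w @ u              # a priori prediction error
    w += mu * err * u               # LMS stochastic-gradient update
```

The trade-off the abstract describes shows up here directly: a larger `mu` tracks drift in `(a1, a2)` faster but raises the steady-state weight fluctuation, while RLS would converge in far fewer samples at quadratic per-step cost.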

23 pages, 9799 KB  
Article
Inertia Estimation of Regional Power Systems Using Band-Pass Filtering of PMU Ambient Data
by Kyeong-Yeong Lee, Sung-Guk Yoon and Jin Kwon Hwang
Energies 2026, 19(2), 424; https://doi.org/10.3390/en19020424 - 15 Jan 2026
Viewed by 258
Abstract
This paper proposes a regional inertia estimation method in power systems using ambient data measured by phasor measurement units (PMUs). The proposed method employs band-pass filtering to suppress the low-frequency influence of mechanical power and to attenuate high-frequency noise and discrepancies between rotor speed and electrical frequency. By utilizing a simple first-order AutoRegressive Moving Average with eXogenous input (ARMAX) model, this process allows the inertia constant to be directly identified. This method requires no prior model order selection, rotor speed estimation, or computation of the rate of change of frequency (RoCoF). The proposed method was validated through simulation on three benchmark systems: the Kundur two-area system, the IEEE Australian simplified 14-generator system, and the IEEE 39-bus system. The method achieved area-level inertia estimates within approximately ±5% error across all test cases, exhibiting consistent performance despite variations in disturbance models and system configurations. The estimation also maintained stable performance with short data windows of a few minutes, demonstrating its suitability for near real-time monitoring applications. Full article
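The idea of identifying a first-order model directly by least squares can be sketched in numpy; the hypothetical system below is a generic first-order ARX fit on simulated data, not the paper's ARMAX formulation, its band-pass pre-processing, or real PMU measurements.

```python
import numpy as np

rng = np.random.default_rng(2)
a_true, b_true = 0.9, 0.05          # illustrative first-order ARX parameters
n = 5000
u = rng.normal(0, 1, n)             # exogenous input (stand-in for a filtered power signal)
y = np.zeros(n)
for t in range(1, n):
    y[t] = a_true * y[t-1] + b_true * u[t-1] + rng.normal(0, 0.01)

# least-squares fit of y[t] = a*y[t-1] + b*u[t-1]
Phi = np.column_stack([y[:-1], u[:-1]])
theta, *_ = np.linalg.lstsq(Phi, y[1:], rcond=None)
a_hat, b_hat = theta
```

In the paper the inertia constant is then read off from the identified first-order parameters; here the point is only that a low-order model needs no model-order selection step.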

20 pages, 3960 KB  
Article
Prediction and Performance of BDS Satellite Clock Bias Based on CNN-LSTM-Attention Model
by Junwei Ma, Jun Tang, Hanyang Teng and Xuequn Wu
Sensors 2026, 26(2), 422; https://doi.org/10.3390/s26020422 - 8 Jan 2026
Viewed by 258
Abstract
Satellite Clock Bias (SCB) is a major source of error in Precise Point Positioning (PPP). The real-time service products from the International GNSS Service (IGS) are susceptible to network interruptions. Such disruptions can compromise product availability and, consequently, degrade positioning accuracy. We introduce the CNN-LSTM-Attention model to address this challenge. The model enhances a Long Short-Term Memory (LSTM) network by integrating Convolutional Neural Networks (CNNs) and an Attention mechanism. The proposed model can efficiently extract data features and balance the weight allocation in the Attention mechanism, thereby improving both the accuracy and stability of predictions. Across various forecasting horizons (1, 2, 4, and 6 h), the CNN-LSTM-Attention model demonstrates prediction accuracy improvements of (76.95%, 66.84%, 65.92%, 84.33%, and 43.87%), (72.59%, 65.61%, 74.60%, 82.98%, and 51.13%), (70.45%, 68.52%, 81.63%, 88.44%, and 60.49%), and (70.26%, 70.51%, 84.28%, 93.66%, and 66.76%), respectively, across the five benchmark models: Linear Polynomial (LP), Quadratic Polynomial (QP), Autoregressive Integrated Moving Average (ARIMA), Backpropagation Neural Network (BP), and LSTM models. Furthermore, in dynamic PPP experiments utilizing IGS tracking stations, the model predictions achieve positioning accuracy comparable to that of post-processed products. This proves that the proposed model demonstrates superior accuracy and stability for predicting SCB, while also satisfying the demands of positioning applications. Full article
(This article belongs to the Section Navigation and Positioning)

22 pages, 1159 KB  
Article
Domestic Financial Investment, Resource-Backed Capital Flows, and Economic Growth in Niger: An ARDL Approach
by Nesrine Gafsi
Resources 2026, 15(1), 11; https://doi.org/10.3390/resources15010011 - 5 Jan 2026
Viewed by 396
Abstract
Using the Autoregressive Distributed Lag (ARDL) model cointegration framework, this paper examines the long- and short-run impact of domestic financial investment and natural resource rents on economic growth in Niger within the period 1990–2021. The Bounds test confirms a long-run relationship among variables: F = 4.646 > 3.79 at 5%. Long-run results indicate that increasing domestic investment by 1% raises real Gross Domestic Product (GDP) per capita by approximately 0.30%, whereas a 1% increase in natural resource rents leads to a reduction in growth by approximately 0.06%. At the same time, exports have a positive but very small effect, while imports and labor have negative long-run influences. Short-run dynamics further support a significant positive impact of domestic investment, at p = 0.0007, and a lagged effect of natural resources at p = 0.0308. The error-correction term is negative and significant, at −0.75, showing rapid adjustment toward equilibrium. Diagnostic tests confirm an absence of serial correlation and heteroskedasticity, while stability is confirmed by CUSUM and CUSUMSQ tests. The findings reveal a dualism in the growth path of Niger in that domestic financial investments favor sustainable expansion, whereas resource-based revenues undermine the growth process in the long run and call for financial market deepening and improved governance of resource revenues. Full article
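The error-correction mechanism behind an adjustment speed like the reported −0.75 can be illustrated with a two-step Engle–Granger-style sketch on simulated data; the slope, adjustment speed, and noise levels below are invented for illustration and are not Niger's data, and the paper itself works in an ARDL bounds framework rather than this simpler procedure.

```python
import numpy as np

rng = np.random.default_rng(3)
n = 2000
x = np.cumsum(rng.normal(0, 1, n))            # I(1) regressor (e.g., an investment series)
u = np.zeros(n)
for t in range(1, n):
    u[t] = 0.25 * u[t-1] + rng.normal(0, 0.2) # stationary deviation from equilibrium
y = 0.30 * x + u                              # cointegrated pair with long-run slope 0.30

# step 1: long-run relation by OLS
X1 = np.column_stack([np.ones(n), x])
beta = np.linalg.lstsq(X1, y, rcond=None)[0]
ect = y - X1 @ beta                           # error-correction term

# step 2: short-run dynamics -- the adjustment speed is the coefficient on ect[t-1]
dy, dx = np.diff(y), np.diff(x)
X2 = np.column_stack([np.ones(n - 1), dx, ect[:-1]])
gamma = np.linalg.lstsq(X2, dy, rcond=None)[0]
speed = gamma[2]                              # should land near -(1 - 0.25) = -0.75
```

A negative, significant `speed` is what "rapid adjustment toward equilibrium" means: roughly 75% of any deviation from the long-run relation is corrected each period.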

30 pages, 3551 KB  
Article
Research on Bayesian Hierarchical Spatio-Temporal Model for Pricing Bias of Green Bonds
by Yiran Liu and Hanshen Li
Sustainability 2026, 18(1), 455; https://doi.org/10.3390/su18010455 - 2 Jan 2026
Viewed by 262
Abstract
Driven by carbon neutrality policies, the cumulative issuance volume of the global green bond market has surpassed $2.5 trillion over the past five years, with China, as the second largest issuer, accounting for 15%. However, there exists a yield difference of up to 0.8% for bonds with the same credit rating across different policy regions, and the premium level fluctuates dramatically with market cycles, severely restricting the efficiency of green resource allocation. This study innovatively constructs a Bayesian hierarchical spatiotemporal model framework to systematically analyze pricing deviations through a three-level data structure: the base level quantifies the impact of bond micro-characteristics (third-party certification reduces financing costs by 0.15%), the temporal level captures market dynamics using autoregressive processes (premium volatility increases by 50% during economic recessions), and the spatial level reveals policy regional dependencies using conditional autoregressive models (carbon trading pilot provinces and cities form premium sinkholes). The core breakthroughs are: 1. Designing spatiotemporal interaction terms to explicitly model the policy diffusion process, with empirical evidence showing that the green finance reform pilot zone policy has a radiation radius of 200 km within three years, leading to a 0.10% increase in premiums in neighboring provinces; 2. Quantifying the posterior distribution of parameters using the Markov Chain Monte Carlo algorithm, demonstrating that the posterior mean of the policy effect in pilot provinces is −0.211%, with a half-life of 0.75 years, and the residual effect in non-pilot provinces is only −0.042%; 3. Establishing a hierarchical shrinkage prior mechanism, which reduces prediction error by 41% compared to traditional models in out-of-sample testing. 
Key findings include: the contribution of policy pilots is −0.192%, surpassing the effect of issuer credit ratings, and a 10 yuan/ton increase in carbon price can sustainably reduce premiums by 0.117%. In 2021, the “dual carbon” policy contributed 32% to premium changes through spatiotemporal interaction channels. The research results provide quantitative tools for issuers to optimize financing timing, investors to identify cross-regional arbitrage, and regulators to assess policy coordination, promoting the transformation of the green bond market from an efficiency priority to equitable allocation paradigm. Full article

24 pages, 980 KB  
Article
Multi-Task Seq2Seq Framework for Highway Incident Duration Prediction Incorporating Response Steps and Time Offsets
by Fengze Fan, Jianuo Hao and Xin Fu
Vehicles 2026, 8(1), 5; https://doi.org/10.3390/vehicles8010005 - 2 Jan 2026
Viewed by 244
Abstract
Highway traffic incident management is a dynamic and time-dependent process, and rapidly and accurately predicting its complete sequence of actions and corresponding time schedule is essential for improving the refinement and intelligence of traffic control systems. To address the limitations of existing studies that predominantly focus on predicting the total duration while lacking fine-grained modeling of the response procedure, this study proposed a multi-task sequence-to-sequence (Seq2Seq) framework based on a BERT encoder and Transformer decoder to jointly predict incident response steps and their associated time offsets. The model first leveraged a pretrained BERT to encode the incident type and alarm description text, followed by an autoregressive Transformer decoder that generated a sequence of response actions. An action-aware temporal prediction module was incorporated to predict the time offset of each step in parallel, and an adaptive weighted multitask loss was adopted to optimize both action classification and time regression tasks. Experiments based on 4128 real records of highway incident handling in Yunnan Province demonstrated that the proposed model achieved improved performance in duration prediction, outperforming baseline approaches in RMSE (18.05), MAE (14.69), MAPE (37.13%), MedAE (13.23), and SMAPE (33.55%). In addition, the model attained BLEU-4 and ROUGE-L scores of 62.33% and 82.04% in procedure text generation, which confirmed its capability to effectively learn procedural logic and temporal patterns from textual data and offered an interpretable decision-support approach for traffic incident duration prediction. The findings of this study could further support intelligent traffic management systems by enhancing incident response planning, real-time control strategies, and resource allocation for expressway operations. Full article

34 pages, 1847 KB  
Article
Interpretable Nonlinear Forecasting of China’s CPI with Adaptive Threshold ARMA and Information Criterion Guided Integration
by Dezhi Cao, Yue Zhao and Xiaona Xu
Big Data Cogn. Comput. 2026, 10(1), 14; https://doi.org/10.3390/bdcc10010014 - 1 Jan 2026
Viewed by 242
Abstract
Accurate forecasting of China’s Consumer Price Index (CPI) is crucial for effective macroeconomic policymaking, yet remains challenging due to structural breaks and nonlinear dynamics inherent in the inflation process. Traditional linear models, such as ARIMA, often fail to capture threshold effects and regime shifts. This study introduces a Threshold Autoregressive Moving Average (TARMA) model that embeds a nonlinear threshold mechanism within the conventional ARMA framework, enabling it to better capture the CPI’s complex behavior. Leveraging an evolutionary modeling approach, the TARMA model effectively identifies high- and low-inflation regimes, offering enhanced flexibility and interpretability. Empirical results demonstrate that TARMA significantly outperforms standard models. Specifically, regarding the CPI Index level, the out-of-sample Mean Absolute Percentage Error (MAPE) is reduced to approximately 0.35% (under the S-BIC integration scheme), significantly improving upon the baseline ARIMA model. The model adapts well to inflation regime shifts and delivers substantial improvements near turning points. Furthermore, integrating an information-criterion-based weighting scheme further refines forecasts and reduces errors. By addressing the limitations of linear models through threshold-driven nonlinearity, this study offers a more accurate and interpretable framework for forecasting China’s CPI inflation. Full article
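The regime-switching idea behind the threshold mechanism can be illustrated with a minimal two-regime threshold AR(1) simulation and regime-wise least squares; the threshold and coefficients here are invented for illustration and are far simpler than the paper's TARMA model and its evolutionary estimation.

```python
import numpy as np

rng = np.random.default_rng(4)
n = 3000
# two-regime TAR(1): the AR coefficient switches with the previous value
phi_low, phi_high, thresh = 0.3, 0.8, 0.0   # illustrative regime parameters
y = np.zeros(n)
for t in range(1, n):
    phi = phi_high if y[t-1] > thresh else phi_low
    y[t] = phi * y[t-1] + rng.normal(0, 1)

# regime-wise least squares recovers the two coefficients
lo = y[:-1] <= thresh
hi = ~lo
phi_lo_hat = (y[1:][lo] @ y[:-1][lo]) / (y[:-1][lo] @ y[:-1][lo])
phi_hi_hat = (y[1:][hi] @ y[:-1][hi]) / (y[:-1][hi] @ y[:-1][hi])
```

The interpretability claim in the abstract comes from exactly this structure: each regime (e.g., high versus low inflation) gets its own linear dynamics, readable directly from the fitted coefficients.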
(This article belongs to the Special Issue Artificial Intelligence in Digital Humanities)

32 pages, 1816 KB  
Article
Pragmatic Models for Detection of Hypertension Using Ballistocardiograph Signals and Machine Learning
by Sunil Kumar Prabhakar and Dong-Ok Won
Bioengineering 2026, 13(1), 43; https://doi.org/10.3390/bioengineering13010043 - 30 Dec 2025
Viewed by 313
Abstract
To identify hypertension, Ballistocardiograph (BCG) signals can be primarily utilized. The BCG signal must be thoroughly understood and interpreted so that its application in the classification process could become clearer and more distinct. Various unhealthy habits such as excess consumption of alcohol and tobacco, accompanied by a lack of good diet and a sedentary lifestyle, lead to hypertension. Common symptoms of hypertension include chest pain, shortness of breath, blurred vision, mood swings, frequent urination, etc. In this work, two pragmatic models are proposed for the detection of hypertension using BCG signals and machine learning models. The first model uses K-means clustering, the maximum overlap discrete wavelet transform (MODWT) and the Empirical Wavelet Transform (EWT) techniques for feature extraction, followed by the Binary Tunicate Swarm Algorithm (BTSA) and Information Gain (IG) for feature selection, as well as two efficient hybrid classifiers such as the Hybrid AdaBoost–Maximum Uncertainty Linear Discriminant Analysis (MULDA) classifier and the Hybrid AdaBoost–Random Forest (RF) classifier for the classification of BCG signals. The second model uses Principal Component Analysis (PCA), Kernel Principal Component Analysis (KPCA) and the Random Feature Mapping (RFM) technique for feature extraction, followed by IG and the Aquila Optimization Algorithm (AOA) for feature selection, as well as two versatile hybrid classifiers such as the Hybrid AutoRegressive Integrated Moving Average (ARIMA)–AdaBoost classifier and the Time-weighted Hybrid AdaBoost–Support Vector Machine (TW-HASVM) classifier for the classification of BCG signals. The proposed methodology was tested on a publicly available BCG dataset, and the best results were obtained when the KPCA feature extraction technique was used with the AOA feature selection technique and classified using the Hybrid ARIMA–AdaBoost classifier, reporting a good classification accuracy of 96.89%.
Full article
(This article belongs to the Section Biosignal Processing)

18 pages, 951 KB  
Article
Assessing the Performance and Evolution of China’s Quality Policies from a Value Co-Creation Perspective
by Jing Jiang, Hanting Zhou, Wenhe Chen, Longsheng Cheng and Suli Zheng
Sustainability 2026, 18(1), 323; https://doi.org/10.3390/su18010323 - 29 Dec 2025
Viewed by 332
Abstract
This study develops a value co-creation-oriented analytical framework to evaluate the performance and evolutionary dynamics of China’s national-level quality policies from 1979 to 2023. A comprehensive categorization and scoring system is established to measure policy intensity, coordination, and comprehensiveness. Policy texts are systematically coded through content analysis, and indicator weights are determined using the Analytic Hierarchy Process (AHP). The resulting composite effect values are further analyzed through punctuated-equilibrium testing, breakpoint analysis, and a Vector Autoregression (VAR) model to estimate the temporal lag of policy implementation. Based on 10,962 policy documents retrieved from the Peking University Law Database, the results reveal clear evolutionary stages and cyclical upward trends in policy performance since the reform and opening-up, while the insufficient supply of demand-side policies remains a long-term structural weakness. The overall evolution path shows a transition from unilateral government provision centered on public value to dual government–market regulation driven by mixed commercial value, and finally toward pluralistic quality governance under value co-creation. Empirical evidence also indicates that quality policies act as short-term stimulus instruments that generate positive but sectorally differentiated effects across the three major industries. These findings highlight the need to expand policy coverage, enhance coordination and comprehensiveness, and rebalance the supply structure. Strengthening short-term stimulus effects while promoting inclusive, co-governed, and sustainable quality policy systems can further improve long-term effectiveness and provide useful insights for international discussions on value co-creation-based governance. Full article

27 pages, 4293 KB  
Article
SNR-Guided Enhancement and Autoregressive Depth Estimation for Single-Photon Camera Imaging
by Qingze Yin, Fangming Mu, Qinge Wu, Ding Ding, Ziyu Fan and Tongpo Zhang
Appl. Sci. 2026, 16(1), 245; https://doi.org/10.3390/app16010245 - 25 Dec 2025
Viewed by 353
Abstract
Recent advances in deep learning have intensified the need for robust low-light image processing in critical applications like autonomous driving, where single-photon cameras (SPCs) offer high photon sensitivity but produce noisy outputs requiring specialized enhancement. This work addresses this challenge through a unified framework integrating three key components: an SNR-guided adaptive enhancement framework that dynamically processes regions with varying noise levels using spatial-adaptive operations and intelligent feature fusion; a specialized self-attention mechanism optimized for low-light conditions; and a conditional autoregressive generation approach applied to robust depth estimation from enhanced SPC images. Our comprehensive evaluation across multiple datasets demonstrates improved performance over state-of-the-art methods, achieving a PSNR of 24.61 dB on the LOL-v1 dataset and effectively recovering fine-grained textures in depth estimation, particularly in real-world SPC applications, while maintaining computational efficiency. The integrated solution effectively bridges the gap between single-photon sensing and practical computer vision tasks, facilitating more reliable operation in photon-starved environments through its novel combination of adaptive noise processing, attention-based feature enhancement, and generative depth reconstruction. Full article

18 pages, 4821 KB  
Article
Automated Baseline-Correction and Signal-Detection Algorithms with Web-Based Implementation for Thermal Liquid Biopsy Data Analysis
by Karl C. Reger, Gabriela Schneider, Keegan T. Line, Alagammai Kaliappan, Robert Buscaglia and Nichola C. Garbett
Cancers 2026, 18(1), 60; https://doi.org/10.3390/cancers18010060 - 24 Dec 2025
Viewed by 402
Abstract
Background/Objectives: Differential scanning calorimetry (DSC) analysis of blood plasma, also known as thermal liquid biopsy (TLB), is a promising approach for disease detection and monitoring; however, its wider adoption in clinical settings has been hindered by labor-intensive data processing workflows, particularly baseline correction. Methods: We developed and tested two automated algorithms to address critical bottlenecks in TLB analysis: (1) a baseline-correction algorithm utilizing rolling-variance analysis for endpoint detection, and (2) a signal-detection algorithm that applies auto-regressive integrated moving average (ARIMA)-based stationarity testing to determine whether a profile contains interpretable thermal features. Both algorithms are implemented in ThermogramForge, an open-source R Shiny web application providing an end-to-end workflow for data upload, processing, and report generation. Results: The baseline-correction algorithm demonstrated excellent performance on plasma TLB data (characterized by high heat capacity), matching the quality of rigorous manual processing. However, its performance was less robust for low signal biofluids, such as urine, where weak thermal transitions reduce the reliability of baseline estimation. To address this, a complementary signal-detection algorithm was developed to screen for TLB profiles with discernable thermal transitions prior to baseline correction, enabling users to exclude non-informative data. The signal-detection algorithm achieved near-perfect classification accuracy for TLB profiles with well-defined thermal transitions and maintained a low false-positive rate of 3.1% for true noise profiles, with expected lower performance for borderline cases. The interactive review interface in ThermogramForge further supports quality control and expert refinement. 
Conclusions: The automated baseline-correction and signal-detection algorithms, together with their web-based implementation, substantially reduce analysis time while maintaining quality, supporting more efficient and reproducible TLB research. Full article
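Stationarity screening of this kind is often built on a unit-root statistic; below is a minimal Dickey-Fuller-style sketch in numpy that separates a flat, noise-only trace from a drifting one. The paper's ARIMA-based test is more involved, so this is only the underlying idea, and the two simulated traces are illustrative stand-ins rather than thermogram data.

```python
import numpy as np

def df_tstat(y):
    """t-statistic of rho in the Dickey-Fuller regression dy_t = rho*y_{t-1} + e_t
    (no constant or trend). Strongly negative values indicate mean reversion."""
    y = np.asarray(y, dtype=float)
    dy, ylag = np.diff(y), y[:-1]
    rho = (ylag @ dy) / (ylag @ ylag)          # OLS slope
    resid = dy - rho * ylag
    s2 = resid @ resid / (len(dy) - 1)          # residual variance
    se = np.sqrt(s2 / (ylag @ ylag))            # standard error of rho
    return rho / se

rng = np.random.default_rng(5)
flat_noise = rng.normal(0, 1, 500)              # stationary, noise-only trace
random_walk = np.cumsum(rng.normal(0, 1, 500))  # drifting trace with persistent structure
stat_noise = df_tstat(flat_noise)               # strongly negative
stat_walk = df_tstat(random_walk)               # near zero
```

Thresholding such a statistic gives a simple screen: profiles whose statistic says "pure stationary noise" can be excluded before baseline correction is attempted.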

37 pages, 7149 KB  
Article
An AI Digital Platform for Fault Diagnosis and RUL Estimation in Drivetrain Systems Under Varying Operating Conditions
by Dimitrios M. Bourdalos, Xenofon D. Konstantinou, Josef Koutsoupakis, Ilias A. Iliopoulos, Kyriakos Kritikakos, George Karyofyllas, Panayotis E. Spiliotopoulos, Ioannis E. Saramantas, John S. Sakellariou, Dimitrios Giagopoulos, Spilios D. Fassois, Panagiotis Seventekidis and Sotirios Natsiavas
Machines 2026, 14(1), 26; https://doi.org/10.3390/machines14010026 - 24 Dec 2025
Viewed by 331
Abstract
Drivetrain systems operate under varying operating conditions (OCs), which often obscure early-stage fault signatures and hinder robust condition monitoring (CM). This work introduces an AI digital platform developed during the EEDRIVEN project, featuring a holistic CM framework that integrates statistical time series methods—using Generalized AutoRegressive (GAR) models in a multiple model fault diagnosis scheme—with deep learning approaches, including autoencoders and convolutional neural networks, enhanced through a dedicated decision fusion methodology. The platform addresses all key CM tasks, including fault detection, fault type identification, fault severity characterization, and remaining useful life (RUL) estimation, which is performed using a dynamics-informed health indicator derived from GAR parameters and a simple linear Wiener process model. Training for the platform relies on a limited set of experimental vibration signals from the physical drivetrain, augmented with high-fidelity multibody dynamics simulations and surrogate-model realizations to ensure coverage of the full space of OCs and fault scenarios. Its performance is validated on hundreds of inspection experiments using confusion matrices, ROC curves, and metric-based plots, while the decision fusion scheme significantly strengthens diagnostic reliability across the CM stages. The results demonstrate near-perfect fault detection (99.8%), 97.8% accuracy in fault type identification, and over 96% in severity characterization. Moreover, the method yields reliable early-stage RUL estimates for the outer gear of the drivetrain, with normalized errors < 20% and consistently narrow confidence bounds, which confirms the platform’s robustness and practicality for real-world drivetrain systems monitoring. Full article
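The RUL estimation step described above rests on a simple linear Wiener process model for the health indicator. A minimal sketch of that idea follows; the estimator below (pooled drift estimate, residual-based diffusion estimate, drift-only mean first-passage time) is a generic textbook formulation, not the paper's exact estimator, and the function name is an assumption.

```python
import numpy as np

def wiener_rul(t, hi, threshold):
    """Fit a linear-drift Wiener process HI(t) = HI(0) + mu*t + sigma*B(t)
    to an increasing health indicator, and return the drift estimate, the
    diffusion estimate, and the mean remaining useful life: the expected
    first-passage time of the drift term to `threshold`."""
    t = np.asarray(t, dtype=float)
    hi = np.asarray(hi, dtype=float)
    dt = np.diff(t)
    dh = np.diff(hi)
    mu = dh.sum() / dt.sum()                     # pooled drift estimate
    sigma2 = np.mean((dh - mu * dt) ** 2 / dt)   # diffusion estimate from residuals
    rul = (threshold - hi[-1]) / mu              # mean first passage, drift only
    return mu, sigma2, rul
```

For a Wiener process with positive drift, the first-passage time to a fixed threshold is inverse-Gaussian distributed with mean (threshold − HI(t))/mu, which is what the last line computes; confidence bounds like those reported above would come from the full first-passage distribution.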
25 pages, 5120 KB  
Article
Application of a Hybrid CNN-LSTM Model for Groundwater Level Forecasting in Arid Regions: A Case Study from the Tailan River Basin
by Shuting Hu, Mingliang Du, Jiayun Yang, Yankun Liu, Ziyun Tuo and Xiaofei Ma
ISPRS Int. J. Geo-Inf. 2026, 15(1), 6; https://doi.org/10.3390/ijgi15010006 - 21 Dec 2025
Viewed by 437
Abstract
Accurate forecasting of groundwater level dynamics poses a critical challenge for sustainable water management in arid regions. However, the strong spatiotemporal heterogeneity inherent in groundwater systems and their complex interactions between natural processes and human activities often limit the effectiveness of conventional prediction methods. To address this, a hybrid CNN-LSTM deep learning model is constructed. This model is designed to extract multivariate coupled features and capture temporal dependencies from multi-variable time series data, while simultaneously simulating the nonlinear and delayed responses of aquifers to groundwater abstraction. Specifically, the convolutional neural network (CNN) component extracts the multivariate coupled features of hydro-meteorological driving factors, and the long short-term memory (LSTM) network component models the temporal dependencies in groundwater level fluctuations. This integrated architecture comprehensively represents the combined effects of natural recharge–discharge processes and anthropogenic pumping on the groundwater system. Utilizing monitoring data from 2021 to 2024, the model was trained and tested using a rolling time-series validation strategy. Its performance was benchmarked against traditional models, including the autoregressive integrated moving average (ARIMA) model, recurrent neural network (RNN), and standalone LSTM. The results show that the CNN-LSTM model delivers superior performance across diverse hydrogeological conditions: at the upstream well AJC-7, which is dominated by natural recharge and discharge, the Nash–Sutcliffe efficiency (NSE) coefficient reached 0.922; at the downstream well AJC-21, which is subject to intensive pumping, the model maintained a robust NSE of 0.787, significantly outperforming the benchmark models. 
Further sensitivity analysis reveals an asymmetric response of the model’s predictions to uncertainties in pumping data, highlighting the role of key hydrogeological processes such as delayed drainage from the vadose zone. This study not only confirms the strong applicability of the hybrid deep learning model for groundwater level prediction in data-scarce arid regions but also provides a novel analytical pathway and mechanistic insight into the nonlinear behavior of aquifer systems under significant human influence. Full article
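The Nash–Sutcliffe efficiency (NSE) used above to benchmark the CNN-LSTM against ARIMA, RNN, and LSTM baselines is computed directly from observed and simulated series; a minimal sketch (the function name is illustrative):

```python
import numpy as np

def nse(observed, simulated):
    """Nash-Sutcliffe efficiency: 1 minus the ratio of residual variance to
    the variance of the observations. 1.0 is a perfect fit; 0.0 means the
    model is no better than predicting the observed mean."""
    observed = np.asarray(observed, dtype=float)
    simulated = np.asarray(simulated, dtype=float)
    resid = np.sum((observed - simulated) ** 2)
    denom = np.sum((observed - observed.mean()) ** 2)
    return 1.0 - resid / denom
```

An NSE of 0.922 at well AJC-7 thus means the model removes 92.2% of the squared error relative to the naive observed-mean predictor.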