Search Results (497)

Search Parameters:
Keywords = multi-variate time series analysis

18 pages, 21324 KB  
Article
Comprehensive Evaluation of Adhesive Compounds and Their Properties Involving Harrington’s Desirability Function
by Anna Kornilova, Aleksandr Shuvalov, Valentin Ermakov, Oleg Kornev and Mikhail Kovalev
Buildings 2025, 15(20), 3733; https://doi.org/10.3390/buildings15203733 - 16 Oct 2025
Abstract
The increase in the volume of construction work carried out with chemical anchors has led to a corresponding growth in the supply of these products on the market. Anchors possess numerous characteristics, including strength, anchorage displacement, temperature, curing time, and cost. Designers face the challenge of choosing the optimal solution for specific construction conditions. In practice, this often results in choosing anchors with maximum strength and minimum cost, which is not always the best option for long-term use. The authors of this study propose addressing this challenge through a multi-criteria optimization method based on the Harrington function. For implementation, 18 criteria were used. They were derived from reference sources and experimental results. Tests were conducted under short-term and long-term static loading. Based on these tests, strength characteristics were determined, and statistical analysis was carried out to calculate coefficients of variation and confidence intervals for the mean values. Nine types of chemical anchors with different bases were tested: epoxy-based, acrylate-based, methacrylate-based, polyester-based, and epoxy-acrylate-based (five samples in each series). In this study, the assumption that all criteria have equal weight coefficients is made as a limitation. The results of the study are valid only for static loading of anchors in uncracked concrete. The optimal adhesive compound was determined for the basic winter and summer sets of criteria. The practical significance lies in the implementation of a multi-criteria optimization method for selecting the adhesive compound. This approach allows users to choose the optimal adhesive compound for their needs. Full article
(This article belongs to the Special Issue Research on Performance of Buildings Structures and Materials)
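For readers unfamiliar with Harrington's desirability function, a minimal sketch of the selection logic described above. The coded criterion scores and the two candidate anchors below are invented for illustration, and equal weights are assumed, matching the limitation stated in the abstract:

```python
import math

def harrington_d(y):
    # One-sided Harrington desirability: maps a coded response y
    # (roughly -2..5) onto (0, 1) via a double exponential.
    return math.exp(-math.exp(-y))

def overall_desirability(ds):
    # Equal-weight geometric mean of per-criterion desirabilities.
    prod = 1.0
    for d in ds:
        prod *= d
    return prod ** (1.0 / len(ds))

# Hypothetical coded scores on three criteria for two candidate anchors
anchor_a = overall_desirability([harrington_d(y) for y in (2.0, 1.0, 0.5)])
anchor_b = overall_desirability([harrington_d(y) for y in (3.0, -0.5, 1.5)])
best = "A" if anchor_a > anchor_b else "B"
```

The geometric mean penalizes any single poor criterion strongly, which is why anchor B's one weak score (-0.5) outweighs its two strong ones here.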

22 pages, 3339 KB  
Article
An AutoML Algorithm: Multiple-Steps Ahead Forecasting of Correlated Multivariate Time Series with Anomalies Using Gated Recurrent Unit Networks
by Ying Su and Morgan C. Wang
AI 2025, 6(10), 267; https://doi.org/10.3390/ai6100267 - 14 Oct 2025
Abstract
Multiple time series forecasting is critical in domains such as energy management, economic analysis, web traffic prediction and air pollution monitoring to support effective resource planning. Traditional statistical learning methods, including Vector Autoregression (VAR) and Vector Autoregressive Integrated Moving Average (VARIMA), struggle with nonstationarity, temporal dependencies, inter-series correlations, and data anomalies such as trend shifts, seasonal variations, and missing data. Furthermore, their effectiveness in multi-step ahead forecasting is often limited. This article presents an Automated Machine Learning (AutoML) framework that provides an end-to-end solution for researchers who lack in-depth knowledge of time series forecasting or advanced programming skills. This framework utilizes Gated Recurrent Unit (GRU) networks, a variant of Recurrent Neural Networks (RNNs), to tackle multiple correlated time series forecasting problems, even in the presence of anomalies. To reduce complexity and facilitate the AutoML process, many model parameters are pre-specified, thereby requiring minimal tuning. This design enables efficient and accurate multi-step forecasting while addressing issues including missing values and structural shifts. We also examine the advantages and limitations of GRU-based RNNs within the AutoML system for multivariate time series forecasting. Model performance is evaluated using multiple accuracy metrics across various forecast horizons. The empirical results confirm our proposed approach’s ability to capture inter-series dependencies and handle anomalies in long-range forecasts. Full article
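The GRU recursion and the recursive multiple-steps-ahead strategy the abstract describes can be sketched with untrained weights. This illustrates the mechanics only, not the paper's AutoML framework; all sizes and the toy sine history are invented:

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

class GRUCell:
    # Minimal GRU cell with update (z) and reset (r) gates.
    # Weights are random, i.e. untrained.
    def __init__(self, n_in, n_hidden):
        def w(*shape):
            return 0.1 * rng.standard_normal(shape)
        self.Wz, self.Uz = w(n_hidden, n_in), w(n_hidden, n_hidden)
        self.Wr, self.Ur = w(n_hidden, n_in), w(n_hidden, n_hidden)
        self.Wh, self.Uh = w(n_hidden, n_in), w(n_hidden, n_hidden)

    def step(self, x, h):
        z = sigmoid(self.Wz @ x + self.Uz @ h)
        r = sigmoid(self.Wr @ x + self.Ur @ h)
        h_cand = np.tanh(self.Wh @ x + self.Uh @ (r * h))
        return (1.0 - z) * h + z * h_cand

def multi_step_forecast(cell, readout, history, horizon):
    # Encode the history, then feed each prediction back in as the
    # next input: recursive multiple-steps-ahead forecasting.
    h = np.zeros(cell.Uz.shape[0])
    for x in history:
        h = cell.step(x, h)
    x, preds = history[-1], []
    for _ in range(horizon):
        h = cell.step(x, h)
        x = readout @ h
        preds.append(x)
    return np.array(preds)

n_series, n_hidden = 3, 8
cell = GRUCell(n_series, n_hidden)
readout = 0.1 * rng.standard_normal((n_series, n_hidden))
history = [np.sin(0.3 * t + np.arange(n_series)) for t in range(30)]
forecast = multi_step_forecast(cell, readout, history, horizon=5)
```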

37 pages, 20433 KB  
Article
Change Point Detection in Financial Market Using Topological Data Analysis
by Jian Yao, Jingyan Li, Jie Wu, Mengxi Yang and Xiaoxi Wang
Systems 2025, 13(10), 875; https://doi.org/10.3390/systems13100875 - 6 Oct 2025
Abstract
Change points caused by extreme events in global economic markets have been widely studied in the literature. However, existing techniques to identify change points rely on subjective judgments and lack robust methodologies. The objective of this paper is to generalize a novel approach that leverages topological data analysis (TDA) to extract topological features from time series data using persistent homology. In this approach, we use Takens' embedding and sliding window techniques to transform the initial time series data into a high-dimensional topological space. Then, in this topological space, persistent homology is used to extract topological features which can give important information related to change points. As a case study, we analyzed 26 stocks over the last 12 years by using this method and found that there were two financial market volatility indicators derived from our method, denoted as L1 and L2. They serve as effective indicators of long-term and short-term financial market fluctuations, respectively. Moreover, significant differences are observed across markets in different regions and sectors by using these indicators. By setting a significance threshold of 98% for the two indicators, we found that the detected change points correspond exactly to four major financial extreme events in the past twelve years: the intensification of the European debt crisis in 2011, Brexit in 2016, the outbreak of the COVID-19 pandemic in 2020, and the energy crisis triggered by the Russia–Ukraine war in 2022. Furthermore, benchmark comparisons with established univariate and multivariate CPD methods confirm that the TDA-based indicators consistently achieve superior F1 scores across different tolerance windows, particularly in capturing widely recognized consensus events. Full article
(This article belongs to the Section Systems Practice in Social Science)
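The first two steps of the pipeline, Takens' delay embedding and sliding windows, can be sketched directly; the persistent homology step itself needs a TDA library (e.g. ripser) and is omitted. The toy sine series and all parameters below are invented:

```python
import numpy as np

def takens_embedding(x, dim, delay):
    # Delay embedding: point i is (x[i], x[i+delay], ..., x[i+(dim-1)*delay]),
    # lifting a scalar series into a dim-dimensional point cloud.
    n = len(x) - (dim - 1) * delay
    return np.column_stack([x[i * delay : i * delay + n] for i in range(dim)])

def sliding_windows(points, width, stride):
    # Point clouds over which persistent homology would be computed,
    # one cloud per window position along the embedded series.
    return [points[s:s + width] for s in range(0, len(points) - width + 1, stride)]

t = np.linspace(0, 8 * np.pi, 400)
series = np.sin(t)
cloud = takens_embedding(series, dim=3, delay=10)
windows = sliding_windows(cloud, width=50, stride=25)
```

Change points would then show up as abrupt shifts in the persistence diagrams computed per window (the L1/L2 indicators in the paper).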

31 pages, 1116 KB  
Article
MoCap-Impute: A Comprehensive Benchmark and Comparative Analysis of Imputation Methods for IMU-Based Motion Capture Data
by Mahmoud Bekhit, Ahmad Salah, Ahmed Salim Alrawahi, Tarek Attia, Ahmed Ali, Esraa Eldesouky and Ahmed Fathalla
Information 2025, 16(10), 851; https://doi.org/10.3390/info16100851 - 1 Oct 2025
Abstract
Motion capture (MoCap) data derived from wearable Inertial Measurement Units is essential to applications in sports science and healthcare robotics. However, much of this data's potential is limited by missing data arising from sensor limitations, network issues, and environmental interference. Such gaps can introduce bias, prevent the fusion of critical data streams, and ultimately compromise the integrity of human activity analysis. Despite the plethora of data imputation techniques available, there have been few systematic performance evaluations of these techniques explicitly for IMU-derived MoCap time series. To address this limitation, we propose a systematic comparative analysis of imputation techniques, spanning statistical, machine learning, and deep learning methods, evaluated across three distinct contexts: univariate time series, multivariate across players, and multivariate across kinematic angles. We also introduce the first publicly available MoCap dataset specifically for benchmarking missing value imputation, with three missingness mechanisms: missing completely at random, block missingness, and a value-dependent missingness pattern simulated at signal transition points. Using data from 53 karate practitioners performing standardized movements, we artificially generated missing values to create controlled experimental conditions. Experiments across the 53 subjects with 39 kinematic variables showed that multivariate imputation frameworks surpass univariate approaches when working with more complex missingness mechanisms.
Specifically, multivariate approaches achieved up to a 50% error reduction (with the MAE improving from 10.8 ± 6.9 to 5.8 ± 5.5) compared to univariate methods for transition point missingness. Specialized time series deep learning models (i.e., SAITS, BRITS, GRU-D) demonstrated a superior performance with MAE values consistently below 8.0 for univariate contexts and below 3.2 for multivariate contexts across all missing data percentages, significantly surpassing traditional machine learning and statistical methods. Notable traditional methods such as Generative Adversarial Imputation Networks and Iterative Imputers exhibited a competitive performance but remained less stable than the specialized temporal models. This work offers an important baseline for future studies, in addition to recommendations for researchers looking to increase the accuracy and robustness of MoCap data analysis, as well as integrity and trustworthiness. Full article
(This article belongs to the Section Information Processes)
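Two of the benchmark's missingness mechanisms (MCAR and block missingness) and the forward-fill baseline are simple to sketch; the signal, gap sizes, and missingness fractions below are invented:

```python
import numpy as np

rng = np.random.default_rng(1)

def mcar_mask(n, frac):
    # Missing completely at random: each sample dropped independently.
    return rng.random(n) < frac

def block_mask(n, n_blocks, block_len):
    # Contiguous gaps, e.g. a sensor dropping out mid-capture.
    mask = np.zeros(n, dtype=bool)
    for s in rng.integers(1, n - block_len, n_blocks):
        mask[s:s + block_len] = True
    return mask

def forward_fill(x, mask):
    # Baseline imputer: carry the last observed value forward.
    y = x.astype(float).copy()
    last = y[0]
    for i in range(len(y)):
        if mask[i]:
            y[i] = last
        else:
            last = y[i]
    return y

signal = np.sin(np.linspace(0, 6 * np.pi, 500))
mask = block_mask(len(signal), n_blocks=5, block_len=20)
mcar = mcar_mask(len(signal), 0.1)
imputed = forward_fill(signal, mask)
mae = np.mean(np.abs(imputed[mask] - signal[mask]))
```

Evaluating only at masked positions, as done here for `mae`, is the standard way to score imputers when the ground truth is artificially removed.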

20 pages, 4016 KB  
Article
Transfer Learning-Enhanced N-BEATSx for Multivariate Forecasting of Tight Gas Well Production
by Yangnan Shangguan, Junhong Jia, Weiliang Xiong, Jinghua Wang, Xianlin Ma, Shilong Chang and Zhenzihao Zhang
Electronics 2025, 14(19), 3875; https://doi.org/10.3390/electronics14193875 - 29 Sep 2025
Abstract
Tight gas reservoirs present unique forecasting challenges due to steep decline rates, nonlinear production dynamics, and sensitivity to operational conditions. Conventional decline-curve methods and reservoir simulations are limited either by oversimplifying assumptions or by the need for extensive input data, while univariate deep learning models fail to fully capture external influences on well performance. To address these limitations, this study develops a transfer learning–enhanced N-BEATSx (Neural Basis Expansion Analysis Time Series with exogenous variables) framework for multivariate forecasting of tight gas well production. The model integrates exogenous variables, particularly casing pressure, with production histories to jointly represent reservoir behavior and operational effects. A pretraining dataset, comprising more than 100,000 daily records from Block S of the Sulige Gas Field, was used to initialize the model, which was subsequently applied in a zero-shot setting to wells A1 and A2. Comparative analysis with the transfer learning-enhanced N-BEATS model demonstrates that N-BEATSx achieves consistently higher accuracy, with RMSE reductions of 23.9%, 39.1%, and 33.1% for Well A1 in short-, medium-, and long-term forecasts, respectively. These advances establish N-BEATSx as a robust tool for multivariate production forecasting, with direct industrial value in optimizing resource allocation, guiding development strategies, and enhancing operational decision-making in unconventional gas fields. Full article
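The basis-expansion idea behind N-BEATS can be sketched as a single doubly-residual block with a polynomial (trend) basis. The weights here are random and untrained, the sizes are invented, and a real model stacks many trained blocks; in the N-BEATSx variant, exogenous covariates such as casing pressure are appended to the lookback window:

```python
import numpy as np

rng = np.random.default_rng(2)

def poly_basis(n_points, degree):
    # Polynomial basis rows, as used by interpretable N-BEATS trend blocks.
    t = np.linspace(0, 1, n_points)
    return np.vstack([t ** d for d in range(degree + 1)])

class NBeatsBlock:
    # One doubly-residual block: a small ReLU MLP maps the lookback
    # window to expansion coefficients theta, which are projected onto
    # fixed bases for the backcast and the forecast.
    def __init__(self, lookback, horizon, degree=2, width=16):
        self.W1 = 0.1 * rng.standard_normal((width, lookback))
        self.W2 = 0.1 * rng.standard_normal((2 * (degree + 1), width))
        self.back_basis = poly_basis(lookback, degree)
        self.fore_basis = poly_basis(horizon, degree)
        self.k = degree + 1

    def __call__(self, window):
        theta = self.W2 @ np.maximum(self.W1 @ window, 0.0)
        backcast = theta[:self.k] @ self.back_basis
        forecast = theta[self.k:] @ self.fore_basis
        return backcast, forecast

lookback, horizon = 24, 6
block = NBeatsBlock(lookback, horizon)
window = np.sin(np.linspace(0, 3, lookback))
backcast, forecast = block(window)
residual = window - backcast  # what the next block in a full stack would see
```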

38 pages, 6865 KB  
Article
Land Use and Land Cover Change Patterns from Orbital Remote Sensing Products: Spatial Dynamics and Trend Analysis in Northeastern Brazil
by Jhon Lennon Bezerra da Silva, Marcos Vinícius da Silva, Pabrício Marcos Oliveira Lopes, Rodrigo Couto Santos, Ailton Alves de Carvalho, Geber Barbosa de Albuquerque Moura, Thieres George Freire da Silva, Alan Cézar Bezerra, Alexandre Maniçoba da Rosa Ferraz Jardim, Maria Beatriz Ferreira, Patrícia Costa Silva, Josef Augusto Oberdan Souza Silva, Marcio Mesquita, Pedro Henrique Dias Batista, Rodrigo Aparecido Jordan and Henrique Fonseca Elias de Oliveira
Land 2025, 14(10), 1954; https://doi.org/10.3390/land14101954 - 26 Sep 2025
Abstract
Environmental degradation and soil desertification are among the most severe environmental issues of recent decades worldwide. Over time, these processes have led to increasingly extreme and highly dynamic climatic conditions. In Brazil, the Northeast Region is characterized by semi-arid and arid areas that exhibit high climatic variability and are extremely vulnerable to environmental changes and pressures from human activities. The application of geotechnologies and geographic information system (GIS) modeling is essential to mitigate the impacts and pressures on the various ecosystems of Northeastern Brazil (NEB), where the Caatinga biome is predominant and critically threatened by these factors. In this context, the objective was to map and assess the spatiotemporal patterns of land use and land cover (LULC), detecting significant trends of loss and gain, based on surface reflectance data and precipitation data over two decades (2000–2019). Remote sensing datasets were utilized, including Landsat satellite data (LULC data), MODIS sensor data (surface reflectance product) and TRMM data (precipitation data). The Google Earth Engine (GEE) software was used to process orbital images and determine surface albedo and acquisition of the LULC dataset. Satellite data were subjected to multivariate analysis, descriptive statistics, dispersion and variability assessments. The results indicated a significant loss trend over the time series (2000–2019) for forest areas (ZMK = −5.872; Tau = −0.958; p < 0.01) with an annual loss of −3705.853 km2 and a total loss of −74,117.06 km2. Conversely, farming areas (agriculture and pasture) exhibited a significant gain trend (ZMK = 5.807; Tau = 0.947; p < 0.01), with an annual gain of +3978.898 km2 and a total gain of +79,577.96 km2, indicating a substantial expansion of these areas over time. 
However, it is important to emphasize that deforestation of the region’s native vegetation contributes to reduced water production and availability. The trend analysis identified an increase in environmental degradation due to the rapid expansion of land use. LULC and albedo data confirmed the intensification of deforestation in the Northern, Northwestern, Southern and Southeastern regions of NEB. The Northwestern region was the most directly impacted by this increase due to anthropogenic pressures. Over two decades (2000–2019), forested areas in the NEB lost approximately 80,000 km2. Principal component analysis (PCA) identified a significant cumulative variance of 87.15%. It is concluded that the spatiotemporal relationship between biophysical conditions and regional climate helps us to understand and evaluate the impacts and environmental dynamics, especially of the vegetation cover of the NEB. Full article
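The ZMK and Tau values quoted above come from the Mann-Kendall trend test, which is compact enough to show in full (no tie correction, for clarity; the example series is invented):

```python
import math

def mann_kendall(x):
    # Mann-Kendall trend test: sign statistic S, Kendall's tau, and the
    # normal-approximation Z score (Z_MK). |Z| > 2.58 is significant at p < 0.01.
    n = len(x)
    s = sum((x[j] > x[i]) - (x[j] < x[i])
            for i in range(n) for j in range(i + 1, n))
    tau = 2.0 * s / (n * (n - 1))
    var_s = n * (n - 1) * (2 * n + 5) / 18.0
    if s > 0:
        z = (s - 1) / math.sqrt(var_s)
    elif s < 0:
        z = (s + 1) / math.sqrt(var_s)
    else:
        z = 0.0
    return s, tau, z

s_inc, tau_inc, z_inc = mann_kendall(list(range(20)))     # monotone gain trend
s_dec, tau_dec, z_dec = mann_kendall(list(range(20))[::-1])  # monotone loss trend
```

A loss trend like the forest result above (ZMK = −5.872, Tau = −0.958) corresponds to a strongly negative S with nearly all pairs decreasing.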

34 pages, 6187 KB  
Article
An Automated Domain-Agnostic and Explainable Data Quality Assurance Framework for Energy Analytics and Beyond
by Balázs András Tolnai, Zhipeng Ma, Bo Nørregaard Jørgensen and Zheng Grace Ma
Information 2025, 16(10), 836; https://doi.org/10.3390/info16100836 - 26 Sep 2025
Abstract
Nonintrusive load monitoring (NILM) relies on high-resolution sensor data to disaggregate total building energy into end-use load components, for example HVAC, ventilation, and appliances. On the ADRENALIN corpus, simple NaN handling with forward fill and mean substitution reduced average NMAE from 0.82 to 0.76 for the Bayesian baseline, from 0.71 to 0.64 for BI-LSTM, and from 0.59 to 0.53 for the Time–Frequency Mask (TFM) model, across nine buildings and four temporal resolutions. However, many NILM models still show degraded accuracy due to unresolved data-quality issues, especially missing values, timestamp irregularities, and sensor inconsistencies, a limitation underexplored in current benchmarks. This paper presents a fully automated data-quality assurance pipeline for time-series energy datasets. The pipeline performs multivariate profiling, statistical analysis, and threshold-based diagnostics to compute standardized quality metrics, which are aggregated into an interpretable Building Quality Score (BQS) that predicts NILM performance and supports dataset ranking and selection. Explainability is provided by SHAP and a lightweight large language model, which turns visual diagnostics into concise, actionable narratives. The study evaluates practical quality improvement through systematic handling of missing values, linking metric changes to downstream error reduction. Using random-forest surrogates, SHAP identifies missingness and timestamp irregularity as dominant drivers of error across models. Core contributions include the definition and validation of BQS, an interpretable scoring and explanation framework for time-series quality, and an end-to-end evaluation of how quality diagnostics affect NILM performance at scale. Full article
(This article belongs to the Special Issue Artificial Intelligence and Data Science for Smart Cities)
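Two of the diagnostics the pipeline aggregates (missingness ratio and timestamp irregularity) plus the NMAE metric can be sketched directly. The timestamps and values below are invented, and the final score line is a toy placeholder, not the paper's BQS formula:

```python
import numpy as np

def nmae(y_true, y_pred):
    # Normalized MAE: mean absolute error scaled by the mean magnitude
    # of the target, so scores are comparable across buildings.
    return np.mean(np.abs(y_true - y_pred)) / np.mean(np.abs(y_true))

def quality_metrics(timestamps, values):
    # Fraction of missing samples, and fraction of timestamp gaps that
    # deviate from the dominant sampling interval.
    missing = float(np.mean(np.isnan(values)))
    gaps = np.diff(timestamps)
    expected = np.median(gaps)
    irregular = float(np.mean(gaps != expected))
    return {"missing_ratio": missing, "irregular_gap_ratio": irregular}

ts = np.array([0, 1, 2, 3, 5, 6, 7, 8], dtype=float)  # one dropped sample
vals = np.array([1.0, np.nan, 1.2, 1.1, 1.3, np.nan, 1.2, 1.4])
metrics = quality_metrics(ts, vals)
# Toy aggregation into a single quality score (hypothetical weights):
score = 1.0 - 0.5 * metrics["missing_ratio"] - 0.5 * metrics["irregular_gap_ratio"]
nmae_example = nmae(np.array([1.0, 2.0, 3.0]), np.array([1.1, 1.9, 3.2]))
```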

18 pages, 6741 KB  
Article
Revealing Sea-Level Dynamics Driven by El Niño–Southern Oscillation: A Hybrid Local Mean Decomposition–Wavelet Framework for Multi-Scale Analysis
by Xilong Yuan, Shijian Zhou, Fengwei Wang and Huan Wu
J. Mar. Sci. Eng. 2025, 13(10), 1844; https://doi.org/10.3390/jmse13101844 - 24 Sep 2025
Abstract
Analysis of global mean sea-level (GMSL) variations provides insights into their spatial and temporal characteristics. To analyze the sea-level cycle and its correlation with the El Niño–Southern Oscillation (ENSO, represented by the Oceanic Niño Index), this study proposes an enhanced analytical framework integrating Local Mean Decomposition with an improved wavelet thresholding technique and wavelet transform. The GMSL time series (January 1993 to July 2020) underwent multi-scale decomposition and noise reduction using Local Mean Decomposition combined with improved wavelet thresholding. Subsequently, the Morlet continuous wavelet transform was applied to analyze the signal characteristics of both GMSL and the Oceanic Niño Index. Finally, cross-wavelet transform and wavelet coherence analyses were employed to investigate their correlation and phase relationships. Key findings include the following: (1) Persistent intra-annual variability (8–16-month cycles) dominates the GMSL signal, superimposed by interannual fluctuations (4–8-month cycles) related to climatic and seasonal forcing. (2) Phase analysis reveals that GMSL generally leads the Oceanic Niño Index during El Niño events but lags during La Niña events. (3) Strong El Niño episodes (May 1997 to May 1998 and October 2014 to April 2016) resulted in substantial net GMSL increases (+7 mm and +6 mm) and significant peak anomalies (+8 mm and +10 mm). (4) Pronounced negative peak anomalies occur during La Niña events, though prolonged events are often masked by the long-term sea-level rise trend, whereas shorter events exhibit clearly discernible and rapid GMSL decline. The results demonstrate that the proposed framework effectively elucidates the multi-scale coupling between ENSO and sea-level variations, underscoring its value for refining the understanding and prediction of climate-driven sea-level changes. Full article
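Wavelet threshold denoising of the decomposed components is the core of the noise-reduction step. Below, plain soft thresholding next to one common "improved" rule that interpolates toward hard thresholding for large coefficients; this rule is illustrative only, since the abstract does not specify the paper's exact improved threshold function, and the coefficients are invented:

```python
import numpy as np

def soft_threshold(c, lam):
    # Classic soft thresholding: shrink every coefficient by lam.
    return np.sign(c) * np.maximum(np.abs(c) - lam, 0.0)

def improved_threshold(c, lam, alpha=2.0):
    # A common "improved" rule: the shrinkage decays exponentially as
    # |c| grows past lam, so large (signal) coefficients are barely
    # attenuated while small (noise) coefficients are still zeroed.
    shrink = lam * np.exp(-alpha * (np.abs(c) / lam - 1.0))
    out = np.sign(c) * np.maximum(np.abs(c) - shrink, 0.0)
    out[np.abs(c) < lam] = 0.0
    return out

coeffs = np.array([-3.0, -0.5, 0.2, 0.8, 2.5, 6.0])
lam = 1.0
soft = soft_threshold(coeffs, lam)
imp = improved_threshold(coeffs, lam)
```

Soft thresholding biases every surviving coefficient downward by a constant lam; the improved rule keeps that bias only near the threshold, which is why such rules tend to preserve signal fidelity better.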

21 pages, 3009 KB  
Article
A Synergistic Fault Diagnosis Method for Rolling Bearings: Variational Mode Decomposition Coupled with Deep Learning
by Shuzhen Wang, Xintian Su, Jinghan Li, Fei Li, Mingwei Li, Yafei Ren, Guoqiang Wang, Nianfeng Shi and Huafei Qian
Electronics 2025, 14(18), 3714; https://doi.org/10.3390/electronics14183714 - 19 Sep 2025
Abstract
To address the limitations of the traditional methods that are used to extract features from non-stationary signals and capture temporal dependency relationships, a rolling bearing fault diagnosis method combining variational mode decomposition (VMD) and deep learning is proposed. A hybrid VMD-CNN-Transformer model is constructed, where VMD is used to adaptively decompose bearing vibration signals into multiple intrinsic mode functions (IMFs). The convolutional neural network (CNN) captures the local features of each modal time series, while the multi-head self-attention mechanism of the Transformer captures the global dependencies of each mode, enabling the global analysis and fusion of features from each mode. Finally, a fully connected layer is used to classify the 10 fault types. The experimental results on the Case Western Reserve University bearing dataset demonstrate that the model achieves a fault diagnosis accuracy of 99.48%, which is significantly higher than that of single or traditional combined methods, providing a new technical path for the intelligent diagnosis of rolling bearing faults. Full article
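The Transformer half of the hybrid model rests on scaled dot-product self-attention, where every time step attends to every other, capturing the global dependencies that the CNN's local filters miss. A single-head numpy sketch with random untrained projections over an invented feature sequence (the paper's model uses multi-head attention inside a full Transformer):

```python
import numpy as np

rng = np.random.default_rng(3)

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def self_attention(X, Wq, Wk, Wv):
    # Scaled dot-product self-attention: each row of the output is a
    # weighted mix of all value vectors, with weights from Q.K^T.
    Q, K, V = X @ Wq, X @ Wk, X @ Wv
    scores = Q @ K.T / np.sqrt(K.shape[1])
    return softmax(scores, axis=-1) @ V

seq_len, d_model, d_head = 16, 8, 4
X = rng.standard_normal((seq_len, d_model))   # stand-in for CNN features of one IMF
Wq, Wk, Wv = (rng.standard_normal((d_model, d_head)) for _ in range(3))
out = self_attention(X, Wq, Wk, Wv)
attn_weights = softmax((X @ Wq) @ (X @ Wk).T / np.sqrt(d_head), axis=-1)
```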

17 pages, 615 KB  
Article
Personal Health and Well-Being Determinants Associated with the Day-to-Day Variability of Sedentary Behaviour in Community-Dwelling People with Stroke
by Lisa van Oirschot, Wendy Hendrickx and Martijn F. Pisters
J. Clin. Med. 2025, 14(18), 6560; https://doi.org/10.3390/jcm14186560 - 18 Sep 2025
Abstract
Background: To improve personalized behavioural interventions for people with stroke, it is crucial to understand which factors influence fluctuations in sedentary behaviour. This study aimed to explore the association between health and well-being determinants and daily variability in sedentary behaviour over time within community-dwelling people with stroke. Methods: An n-of-1 study design was conducted to examine the associations between determinants and sedentary behaviour during the RISE-intervention randomized multiple baseline study. The percentage of sedentary behaviour was measured daily with the ActivPAL activity monitor. The Visual Analogue Scale scores of the determinants were collected with diaries. Dynamic regression modelling (time-series analysis) was performed, starting with univariable and followed by multivariable linear regressions. Results: The analysis included twelve community-dwelling people with stroke (median age 65 years), with daily sedentary behaviour ranging from 55.5 to 81.4 percent. Objectively measured sleep length was positively associated with the percentage of sedentary behaviour in three participants (p = 0.001, p < 0.0001, and p = 0.045), and negatively associated in one participant (p = 0.002). Subjective sleep length (p = 0.016), fatigue (p = 0.013) and pain (p = 0.0098) were exclusively associated with the percentage of sedentary behaviour in one participant. No significant associations were found for sleep quality, happiness, stress or time pressure in five participants. Conclusions: The findings indicate inconsistent association patterns between health and well-being determinants and the day-to-day variability of sedentary behaviour across participants. This highlights the need for a personalized approach, as the determinants associated with the daily variability of sedentary behaviour in one individual may differ from those influencing another individual. Full article
(This article belongs to the Section Clinical Rehabilitation)
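Dynamic regression in the sense used here adds a lagged outcome term to an ordinary regression, so today's sedentary percentage depends on yesterday's plus the day's determinants. A toy n-of-1 simulation, with all numbers invented, recovering a sleep-length effect by least squares:

```python
import numpy as np

rng = np.random.default_rng(4)

def dynamic_regression(y, X):
    # OLS of today's outcome on an intercept, yesterday's outcome
    # (an AR(1) term), and today's determinants.
    y_t, y_lag = y[1:], y[:-1]
    design = np.column_stack([np.ones(len(y_t)), y_lag, X[1:]])
    beta, *_ = np.linalg.lstsq(design, y_t, rcond=None)
    return beta  # [intercept, AR(1) coefficient, determinant effects...]

# Simulated diary data: % sedentary time driven by sleep length (hours)
days = 60
sleep = 7.0 + rng.standard_normal(days)
sedentary = np.empty(days)
sedentary[0] = 65.0
for t in range(1, days):
    sedentary[t] = 20.0 + 0.4 * sedentary[t - 1] + 3.0 * sleep[t] \
                   + rng.standard_normal()
beta = dynamic_regression(sedentary, sleep.reshape(-1, 1))
```

In an n-of-1 design this model is fit per participant, which is why (as the abstract reports) the significant determinants can differ from one individual to the next.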

34 pages, 16782 KB  
Article
Ultra-Short-Term Prediction of Monopile Offshore Wind Turbine Vibration Based on a Hybrid Model Combining Secondary Decomposition and Frequency-Enhanced Channel Self-Attention Transformer
by Zhenju Chuang, Yijie Zhao, Nan Gao and Zhenze Yang
J. Mar. Sci. Eng. 2025, 13(9), 1760; https://doi.org/10.3390/jmse13091760 - 11 Sep 2025
Abstract
Ice loads continue to pose challenges to the structural safety of offshore wind turbines (OWTs), while the rapid development of offshore wind power in cold regions is enabling the deployment of OWTs in deeper waters. To accurately simulate the dynamic response of an OWT under combined ice–wind loading, this paper proposes a Discrete Element Method–Wind Turbine Integrated Analysis (DEM-WTIA) framework. The framework can synchronously simulate discontinuous ice-crushing processes and aeroelastic–structural dynamic responses through a holistic turbine model that incorporates rotor dynamics and control systems. To address the issue of insufficient prediction accuracy for dynamic responses, we introduced a multivariate time series forecasting method that integrates a secondary decomposition strategy with a hybrid prediction model. First, we developed a parallel signal processing mechanism, termed Adaptive Complete Ensemble Empirical Mode Decomposition with Improved Singular Spectrum Analysis (CEEMDAN-ISSA), which achieves adaptive denoising via permutation entropy-driven dynamic window optimization and multi-feature fusion-based anomaly detection, yielding a noise suppression rate of 76.4%. Furthermore, we propose the F-Transformer prediction model, which incorporates a Frequency-Enhanced Channel Attention Mechanism (FECAM). By integrating the Discrete Cosine Transform (DCT) into the Transformer architecture, the F-Transformer mines hidden features in the frequency domain, capturing potential periodicities in discontinuous data. Experimental results demonstrate that signals processed by ISSA exhibit increased signal-to-noise ratios and enhanced fidelity. The F-Transformer achieves a maximum reduction of 31.86% in mean squared error compared to the standard Transformer and maintains a coefficient of determination (R2) above 0.91 under multi-condition coupled testing. 
By combining adaptive decomposition and frequency-domain enhancement techniques, this framework provides a precise and highly adaptable ultra-short-term response forecasting tool for the safe operation and maintenance of offshore wind power in cold regions. Full article
(This article belongs to the Section Coastal Engineering)
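Permutation entropy, which drives the dynamic window optimization in the ISSA step, is straightforward to compute: it is the Shannon entropy of the ordinal patterns in the series, normalized to [0, 1]. A pure-Python sketch over invented sequences:

```python
import math

def permutation_entropy(x, order=3):
    # Count ordinal patterns of length `order`, then return their
    # Shannon entropy divided by log(order!) so the result lies in [0, 1].
    counts = {}
    for i in range(len(x) - order + 1):
        pattern = tuple(sorted(range(order), key=lambda k: x[i + k]))
        counts[pattern] = counts.get(pattern, 0) + 1
    total = sum(counts.values())
    h = -sum(c / total * math.log(c / total) for c in counts.values())
    return h / math.log(math.factorial(order))

monotone = list(range(100))                       # fully predictable
noisy = [(i * 7919) % 101 for i in range(100)]    # pseudo-random sequence
pe_mono = permutation_entropy(monotone)
pe_noisy = permutation_entropy(noisy)
```

Low permutation entropy means regular, predictable structure; values near 1 indicate noise-like behavior, which is what an adaptive denoiser uses to decide how aggressively to filter each component.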

18 pages, 2376 KB  
Article
CNN-Based Interpretable Feature Extraction Methods Considering Pairwise Interactions
by Kyuchang Chang, Sujin Lee and Jun-Geol Baek
Sensors 2025, 25(18), 5634; https://doi.org/10.3390/s25185634 - 10 Sep 2025
Abstract
This paper proposes a framework that improves classification performance for multivariate time series data while providing an objective assessment of each variable’s influence, including interaction effects. While convolutional neural networks (CNNs) offer significant advantages in analyzing multivariate time series data, the structural limitations of CNNs have restricted their ability to detect statistical interactions. Our approach creatively modifies convolutional filters and layer structures, enabling feature extraction that captures the influence of pairwise interactions. These extracted features are processed by interpretable models to calculate feature importance, enabling in-depth causal analysis by quantifying both individual and pairwise variable effects. In addition, the proposed method enhances the overall classification performance of multivariate time series data. Synthetic data experiments verified that the proposed method effectively extracted relevant features that explain pairwise interactions. In addition, in the multivariate time series classification experiments using real data, the proposed method demonstrated superior performance compared to baseline methods. These results suggest that the proposed approach is a practical and interpretable solution for multivariate time series classification tasks in domains where variable interactions play a decisive role, such as healthcare, finance, and manufacturing. Full article
(This article belongs to the Special Issue Artificial Intelligence for Medical Sensing)
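The core idea of the first paper — restricting feature extraction to variable pairs so that pairwise interaction effects become detectable — can be illustrated with a minimal numpy sketch. This is a stand-in, not the paper's actual filter/layer design: it forms the elementwise-product series for each variable pair, convolves it with a simple moving-average kernel (playing the role of a convolutional filter), and max-pools the response into one feature per pair.

```python
import numpy as np
from itertools import combinations

rng = np.random.default_rng(1)

# Hypothetical multivariate series: 4 variables, 100 time steps.
X = rng.normal(size=(4, 100))

def pairwise_interaction_features(X, kernel_size=5):
    """One feature per variable pair (i, j): convolve the interaction
    series x_i * x_j with a moving-average kernel, then max-pool.
    A numpy stand-in for pair-restricted convolutional filters."""
    kernel = np.ones(kernel_size) / kernel_size
    feats = {}
    for i, j in combinations(range(X.shape[0]), 2):
        interaction = X[i] * X[j]                       # pairwise interaction series
        response = np.convolve(interaction, kernel, mode="valid")
        feats[(i, j)] = response.max()                  # global max pooling
    return feats

feats = pairwise_interaction_features(X)
print(len(feats))  # → 6 pairs for 4 variables
```

Features keyed by variable pair can then be fed to an interpretable model, so importance scores map back to specific pairs — the property the paper exploits for causal analysis.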

40 pages, 7228 KB  
Article
Guidance for Interactive Visual Analysis in Multivariate Time Series Preprocessing
by Flor de Luz Palomino Valdivia and Herwin Alayn Huillcen Baca
Sensors 2025, 25(18), 5617; https://doi.org/10.3390/s25185617 - 9 Sep 2025
Abstract
Multivariate time series analysis presents significant challenges due to its dynamism, heterogeneity, and scalability. Given this, preprocessing is considered a crucial step to ensure analytical quality. However, this phase falls solely on the user without system support, resulting in wasted time, subjective decision-making, and cognitive overload, and is prone to errors that affect the quality of the results. This situation reflects the lack of interactive visual analysis approaches that effectively integrate preprocessing with guidance mechanisms. The main objective of this work was to design and develop a guidance system for interactive visual analysis in multivariate time series preprocessing, allowing users to understand, evaluate, and adapt their decisions in this critical phase of the analytical workflow. To this end, we propose a new guide-based approach that incorporates recommendations, explainability, and interactive visualization. This approach is embodied in the GUIAVisWeb tool, which organizes a workflow through tasks, subtasks, and preprocessing algorithms; recommends appropriate components through consensus validation and predictive evaluation; and explains the justification for each recommendation through visual representations. The proposal was evaluated in two dimensions: (i) quality of the guidance, with an average score of 6.19 on the Likert scale (1–7), and (ii) explainability of the algorithm recommendations, with an average score of 5.56 on the Likert scale (1–6). In addition, a case study was developed with air quality data that demonstrated the functionality of the tool and its ability to support more informed, transparent, and effective preprocessing decisions. Full article
(This article belongs to the Section Intelligent Sensors)
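The recommendation step described for the second paper — ranking candidate preprocessing algorithms by consensus across several evaluation criteria — can be sketched in a few lines of stdlib Python. The algorithm names and scores below are hypothetical; GUIAVisWeb's actual consensus-validation and predictive-evaluation mechanism is more elaborate.

```python
from statistics import mean

# Hypothetical: three candidate imputation algorithms, each scored by
# three validators/criteria on a 0–1 scale.
scores = {
    "linear_interpolation": [0.91, 0.88, 0.90],
    "mean_imputation":      [0.75, 0.80, 0.78],
    "knn_imputation":       [0.89, 0.92, 0.90],
}

def recommend(scores):
    """Rank candidates by mean score (descending) and return the
    top recommendation plus the full ranking for explanation."""
    ranking = sorted(scores, key=lambda k: mean(scores[k]), reverse=True)
    return ranking[0], ranking

best, ranking = recommend(scores)
print(best)  # → knn_imputation
```

Returning the full ranking (not just the winner) is what makes the recommendation explainable: a tool can show the user why an algorithm was preferred by displaying each candidate's per-criterion scores.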

22 pages, 3520 KB  
Article
A Deep Learning–Random Forest Hybrid Model for Predicting Historical Temperature Variations Driven by Air Pollution: Methodological Insights from Wuhan
by Yu Liu and Yuanfang Du
Atmosphere 2025, 16(9), 1056; https://doi.org/10.3390/atmos16091056 - 8 Sep 2025
Abstract
With the continuous acceleration of industrialization, air pollution has become increasingly severe and has, to some extent, contributed to the progression of global climate change. Against this backdrop, accurate temperature forecasting plays a vital role in various fields, including agricultural production, energy scheduling, environmental governance, and public health protection. To improve the accuracy and stability of temperature prediction, this study proposes a hybrid modeling approach that integrates convolutional neural networks (CNNs), Long Short-Term Memory (LSTM) networks, and random forests (RFs). This model fully leverages the strengths of CNNs in extracting local spatial features, the advantages of LSTM in modeling long-term dependencies in time series, and the capabilities of RF in nonlinear modeling and feature selection through ensemble learning. Based on daily temperature, meteorological, and air pollutant observation data from Wuhan during the period 2015–2023, this study conducted multi-scale modeling and seasonal performance evaluations. Pearson correlation analysis and random forest-based feature importance ranking were used to identify two key pollutants (PM2.5 and O3) and two critical meteorological variables (air pressure and visibility) that are strongly associated with temperature variation. A CNN-LSTM model was then constructed using the meteorological variables as input to generate preliminary predictions. These predictions were subsequently combined with the concentrations of the selected pollutants to form a new feature set, which was input into the RF model for secondary regression, thereby enhancing the overall model performance. The main findings are as follows: (1) The six major pollutants exhibit clear seasonal distribution patterns, with generally higher concentrations in winter and lower in summer, while O3 shows the opposite trend. 
Moreover, the influence of pollutants on temperature demonstrates significant seasonal heterogeneity. (2) The CNN-LSTM-RF hybrid model shows excellent performance in temperature prediction tasks. The predicted values align closely with observed data in the test set, with a low prediction error (RMSE = 0.88, MAE = 0.66) and a high coefficient of determination (R2 = 0.99), confirming the model’s accuracy and robustness. (3) In multi-scale forecasting, the model performs well on both daily (short-term) and monthly (mid- to long-term) scales. While daily-scale predictions exhibit higher precision, monthly-scale forecasts effectively capture long-term trends. A paired-sample t-test on annual mean temperature predictions across the two time scales revealed a statistically significant difference at the 95% confidence level (t = −3.5299, p = 0.0242), indicating that time granularity has a notable impact on prediction outcomes and should be carefully selected and optimized based on practical application needs. (4) One-way ANOVA and the non-parametric Kruskal–Wallis test were employed to assess the statistical significance of seasonal differences in daily absolute prediction errors. Results showed significant variation across seasons (ANOVA: F = 2.94, p = 0.032; Kruskal–Wallis: H = 8.82, p = 0.031; both p < 0.05), suggesting that seasonal changes considerably affect the model’s predictive performance. Specifically, the model exhibited the highest RMSE and MAE in spring, indicating poorer fit, whereas performance was best in autumn, with the highest R2 value, suggesting a stronger fitting capability. Full article
(This article belongs to the Section Atmospheric Techniques, Instruments, and Modeling)
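The two-stage architecture of the third paper — a base model on meteorological inputs whose predictions are combined with pollutant features and passed to a second regressor — is a form of stacking. The sketch below uses ordinary least squares for both stages (stand-ins for the paper's CNN-LSTM and random forest) on synthetic data, purely to show the data flow; variable names and coefficients are invented.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical data: meteorological inputs, pollutant features, temperature target.
n = 200
met = rng.normal(size=(n, 2))     # stand-ins for air pressure, visibility
poll = rng.normal(size=(n, 2))    # stand-ins for PM2.5, O3
temp = 15 + 3 * met[:, 0] - 2 * met[:, 1] + 1.5 * poll[:, 0] \
       + rng.normal(scale=0.5, size=n)

def fit_linear(X, y):
    """Ordinary least squares with an intercept column."""
    Xb = np.column_stack([np.ones(len(X)), X])
    w, *_ = np.linalg.lstsq(Xb, y, rcond=None)
    return w

def predict_linear(w, X):
    return np.column_stack([np.ones(len(X)), X]) @ w

# Stage 1: base model on meteorological variables only
# (stands in for the paper's CNN-LSTM).
w1 = fit_linear(met, temp)
stage1_pred = predict_linear(w1, met)

# Stage 2: stage-1 predictions + pollutant features, refit
# (stands in for the random-forest secondary regression).
X2 = np.column_stack([stage1_pred, poll])
w2 = fit_linear(X2, temp)
final_pred = predict_linear(w2, X2)

rmse1 = np.sqrt(np.mean((temp - stage1_pred) ** 2))
rmse2 = np.sqrt(np.mean((temp - final_pred) ** 2))
print(rmse1, rmse2)  # stage 2 cannot fit worse in-sample
```

Because the stage-2 design matrix contains the stage-1 predictions as a column, the second fit can always reproduce the first, so its in-sample error never increases — the extra pollutant features can only help, which is the rationale for the secondary regression.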

24 pages, 603 KB  
Review
Dexamethasone Suppression Testing in Patients with Adrenal Incidentalomas with/Without Mild Autonomous Cortisol Secretion: Spectrum of Cortisol Cutoffs and Additional Assays (An Updated Analysis)
by Alexandra-Ioana Trandafir and Mara Carsote
Biomedicines 2025, 13(9), 2169; https://doi.org/10.3390/biomedicines13092169 - 5 Sep 2025
Abstract
Background/Objective: The overnight 1-mg dexamethasone suppression test (DST) represents the conventional/standard tool for endogenous hypercortisolemia screening, typically in relationship with adrenal and pituitary masses. Nevertheless, an associated spectrum of challenges and pitfalls is found in daily practice. This analysis aimed to evaluate: (I.) the diagnostic relevance of 1-mg DST in patients with adrenal incidentalomas (AIs) with/without mild autonomous cortisol secretion (MACS), exploring different cutoffs of the second-day plasma cortisol after dexamethasone administration (cs-DST) with respect to cardio-metabolic outcomes; (II.) the potential utility of adding other biomarkers to DST [plasma morning adrenocorticotropic hormone (ACTH), 24-h urinary free cortisol (UFC), late-night salivary cortisol (LNSC), dehydroepiandrosterone sulfate (DHEAS)]; and (III.) DST variability over time. Methods: This narrative analysis was based on searching full-text, English articles in PubMed (between January 2023 and April 2025) using different term combinations: “dexamethasone suppression test” (n = 239), “diagnosis test for autonomous cortisol secretion” (n = 22), “diagnosis test for mild autonomous cortisol secretion” (n = 13) and “diagnosis test for Cushing Syndrome” (n = 61). We manually checked the title and abstract and finally included only the studies that provided hormonal testing results in adults with non-functional adenomas (NFAs) ± MACS. We excluded: reviews, meta-analyses, editorials, conference abstracts, case reports, and case series; non-human research; studies that did not provide clear criteria for distinguishing between Cushing syndrome and MACS; primary aldosteronism. Results: The sample-focused analysis (n = 13 studies) involved various designs: cross-sectional (n = 4), prospective (n = 1), retrospective (n = 7), and cohort (n = 1); a total of 4203 patients (female-to-male ratio = 1.45), mean age of 59.92 years. I. 
Cs-DST cutoffs varied among the studies (n = 6), specifically, 0.87, 0.9, 1.2, and 1.4 µg/dL in relationship with the cardio-metabolic outcomes. After adjusting for age (n = 1), only the prevalence of cardiovascular disease remained significantly higher in the >0.9 µg/dL vs. ≤0.9 µg/dL group (OR = 2.23). Multivariate analysis (n = 1) found that cs-DST between 1.2 and 1.79 µg/dL was independently associated with hypertension (OR = 1.55, 95%CI: 1.08–2.23, p = 0.018), diabetes (OR = 1.60, 95%CI: 1.01–2.57, p = 0.045), and their combination (OR = 1.96, 95%CI: 1.12–3.41, p = 0.018) after adjusting for age, gender, obesity, and dyslipidemia. A higher cs-DST was associated with a lower estimated glomerular filtration rate (eGFR), independently of traditional cardiovascular risk factors. Post-adrenalectomy eGFR improvement was more pronounced in younger individuals, those with lower eGFR before surgery, and those with a longer post-operative follow-up. Cs-DST (n = 1) was strongly associated with AI size and weakly associated with age, body mass index, and eGFR. Cortisol level increased by 9% (95% CI: 6–11%) for each 10 mL/min/1.73 m2 decrease in eGFR. A lower cs-DST was associated with a faster post-adrenalectomy function recovery; the co-diagnosis of diabetes reduced the likelihood of this recovery (OR = 24.55, p = 0.036). II. Additional biomarker assays (n = 5) showed effectiveness only for lower DHEAS to pinpoint MACS amid AIs (n = 2, cutoffs of <49.31 µg/dL and <75 µg/dL, respectively), and lower ACTH (n = 1, <12.6 pmol/L). III. Longitudinal analysis of DST results (n = 3): 22% of NFAs switched to MACS after a median of 35.7 months (n = 1), 29% (n = 1) after 48.6 ± 12.5 months, and 11.8% (n = 1) after 40.4 ± 51.17 months. A multifactorial prediction model showed the lowest risk of switching (2.4%) in individuals < 50 years with a unilateral tumor and cs-DST < 0.45 µg/dL. 
In the subgroup of subjects without cardio-metabolic comorbidities at presentation, 25.6% developed ≥1 comorbidity during surveillance. Conclusions: The importance of exploring the domain of AIs/NFAs/MACS relates to their increasing detection in an aging population; hence the importance of optimal hormonal characterization and of identifying/forestalling cardio-metabolic consequences. The spectrum of additional biomarkers in MACS (other than DST) remains heterogeneous and still controversial, noting the importance of their cost-effectiveness and availability in daily practice. Cs-DST serves as an independent predictor of cardio-metabolic outcomes and kidney dysfunction, while adrenalectomy may correct them in both MACS and NFAs, especially in younger patients. Moreover, it serves as a predictor of an NFA switching into the MACS category during surveillance. Such a change in hormonal behavior over time warrants awareness, since it increases the overall disease burden. Full article
(This article belongs to the Section Neurobiology and Clinical Neuroscience)
