
Search Results (525)

Search Parameters:
Keywords = Memory-Tree

25 pages, 2292 KB  
Article
Tuning for Precision Forecasting of Green Market Volatility Time Series
by Sonia Benghiat and Salim Lahmiri
Stats 2026, 9(1), 12; https://doi.org/10.3390/stats9010012 - 29 Jan 2026
Abstract
In recent years, the green financial market has exhibited heightened daily volatility, largely due to policy changes and economic shifts. This study analyzes how fine-tuning the hyperparameters of predictive models improves short-term forecasts of market volatility, particularly within the rapidly evolving domain of green financial markets. While traditional econometric models have long been employed to model market volatility, their application to green markets remains limited, especially when contrasted with the emerging potential of machine-learning and deep-learning approaches for capturing complex dynamics in this context. This study evaluates several data-driven forecasting models: two machine-learning models, regression tree (RT) and support vector regression (SVR), and three deep-learning models, long short-term memory (LSTM), convolutional neural network (CNN), and gated recurrent unit (GRU), applied to over a decade of daily estimated volatility data from three distinct green markets. Predictive accuracy is compared both with and without hyperparameter optimization. In addition, the study introduces the quantile loss metric, alongside two widely used evaluation metrics, to better capture the skewness and heavy tails inherent in these financial series. The comparative analysis yields numerical and graphical insights into short-term volatility predictability in green markets, a relatively underexplored research domain. It demonstrates that deep-learning predictors outperform machine-learning ones, and that hyperparameter tuning yields consistent improvements across all deep-learning models and all volatility time series.
(This article belongs to the Section Applied Statistics and Machine Learning Methods)
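The quantile loss the abstract mentions is commonly formulated as the pinball loss; a minimal sketch follows (the function name and signature are mine, not the paper's, and the paper's exact normalization may differ):

```python
def pinball_loss(y_true, y_pred, tau):
    """Average quantile (pinball) loss at quantile level tau in (0, 1).

    Under-prediction is penalized by tau and over-prediction by (1 - tau),
    which makes the metric sensitive to the skewness and heavy tails that
    volatility series typically exhibit.
    """
    total = 0.0
    for y, yhat in zip(y_true, y_pred):
        diff = y - yhat
        total += tau * diff if diff >= 0 else (tau - 1) * diff
    return total / len(y_true)
```

At tau = 0.9, under-predicting by one unit costs 0.9 while over-predicting by one unit costs only 0.1, so the metric rewards forecasts that rarely fall below the true upper-tail values.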

30 pages, 3115 KB  
Article
HST–MB–CREH: A Hybrid Spatio-Temporal Transformer with Multi-Branch CNN/RNN for Rare-Event-Aware PV Power Forecasting
by Guldana Taganova, Jamalbek Tussupov, Assel Abdildayeva, Mira Kaldarova, Alfiya Kazi, Ronald Cowie Simpson, Alma Zakirova and Bakhyt Nurbekov
Algorithms 2026, 19(2), 94; https://doi.org/10.3390/a19020094 - 23 Jan 2026
Abstract
We propose the Hybrid Spatio-Temporal Transformer with Multi-Branch CNN/RNN and Extreme-Event Head (HST–MB–CREH), a hybrid spatio-temporal deep learning architecture for joint short-term photovoltaic (PV) power forecasting and the detection of rare extreme events, to support the reliable operation of renewable-rich power systems. The model combines a spatio-temporal transformer encoder with three convolutional neural network (CNN)/recurrent neural network (RNN) branches (CNN → long short-term memory (LSTM), LSTM → gated recurrent unit (GRU), CNN → GRU) and a dense pathway for tabular meteorological and calendar features. A multitask output head simultaneously performs regression of PV power and binary classification of extremes defined above the 95th percentile. We evaluate HST–MB–CREH on the publicly available Renewable Power Generation and Weather Conditions dataset at hourly resolution from 2017 to 2022, using a 5-fold TimeSeriesSplit protocol to avoid temporal leakage and to cover multiple seasons. Compared with tree ensembles (RandomForest, XGBoost), recurrent baselines (Stacked GRU, LSTM), and advanced hybrid/transformer models (Hybrid Multi-Branch CNN–LSTM/GRU with Dense Path and Extreme-Event Head (HMB–CLED) and Spatio-Temporal Multitask Transformer with Extreme-Event Head (STM–EEH)), the proposed architecture achieves the best overall trade-off between accuracy and rare-event sensitivity, with normalized performance of RMSE_z = 0.2159 ± 0.0167, MAE_z = 0.1100 ± 0.0085, mean absolute percentage error (MAPE) = 9.17 ± 0.45%, R2 = 0.9534 ± 0.0072, and AUC_ext = 0.9851 ± 0.0051 across folds. Knowledge extraction is supported via attention-based analysis and permutation feature importance, which highlight the dominant role of global horizontal irradiance, diurnal harmonics, and solar geometry features. The results indicate that hybrid spatio-temporal multitask architectures can substantially improve both forecast accuracy and robustness to extremes, making HST–MB–CREH a promising building block for intelligent decision-support tools in smart grids with a high share of PV generation.
(This article belongs to the Section Evolutionary Algorithms and Machine Learning)
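The leakage-free evaluation protocol mentioned above uses expanding-window splits in the spirit of scikit-learn's `TimeSeriesSplit`; a dependency-free sketch of that splitting logic (my own simplified re-implementation, not the authors' code):

```python
def time_series_splits(n_samples, n_splits=5):
    """Expanding-window cross-validation splits for time-ordered data.

    Each fold trains on all samples strictly before a contiguous test block,
    so no future information leaks into training, and successive folds cover
    later (e.g., seasonal) portions of the series.
    """
    fold = n_samples // (n_splits + 1)
    splits = []
    for k in range(1, n_splits + 1):
        train = list(range(0, k * fold))
        test = list(range(k * fold, min((k + 1) * fold, n_samples)))
        splits.append((train, test))
    return splits
```

For 12 hourly samples and 5 splits, the last fold trains on samples 0–9 and tests on 10–11; every training index always precedes every test index.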

21 pages, 817 KB  
Article
Predicting Learner Contributions in MOOC Learning Forums Using the Hidden Markov Model
by Bing Wu and Ruodan Xie
Appl. Sci. 2026, 16(2), 881; https://doi.org/10.3390/app16020881 - 15 Jan 2026
Abstract
Learner engagement is a pivotal factor affecting the effectiveness of Massive Open Online Courses (MOOCs), as it promotes collaborative learning environments. However, measuring the extent of learners’ contributions in MOOC learning forums is challenging due to the complex nature of engagement and its variability. Given the limited research in this domain, further investigation is necessary. This study addresses this gap by utilizing the Hidden Markov Model (HMM) to identify latent states of MOOC learners and improve their participation in learning forums. The study constructs a multidimensional observable signal sequence based on learner-generated post data from MOOC forums, with a particular focus on a widely attended course on a MOOC platform. To evaluate the predictive accuracy of the HMM in forecasting learner contributions, the study employs several prominent prediction models for comparative analysis, including k-nearest neighbor, logistic regression, random forest, extreme gradient boosting tree, and the long short-term memory network. The results demonstrate that the HMM predicts learner contributions more accurately than the other models. These findings not only validate the effectiveness of the HMM but also offer significant insights and recommendations for enhancing forum management practices. This research represents a substantial advancement in addressing the challenges related to learner engagement in MOOC learning forums and underscores the potential benefits of the HMM approach in this context.
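Recovering latent learner states from an observed signal sequence, as the HMM approach above does, is typically done with Viterbi decoding; a toy stand-in with made-up states ("Active"/"Passive") and observations ("post"/"lurk") — the paper's actual state space and fitted parameters are not given here:

```python
def viterbi(obs, states, start_p, trans_p, emit_p):
    """Most likely hidden-state sequence for a discrete-observation HMM.

    Dynamic programming: V[t][s] holds the probability of the best path
    ending in state s at time t, plus a back-pointer to its predecessor.
    """
    V = [{s: (start_p[s] * emit_p[s][obs[0]], None) for s in states}]
    for t in range(1, len(obs)):
        V.append({})
        for s in states:
            prob, prev = max(
                (V[t - 1][p][0] * trans_p[p][s] * emit_p[s][obs[t]], p)
                for p in states)
            V[t][s] = (prob, prev)
    # Backtrack from the most probable final state.
    last = max(states, key=lambda s: V[-1][s][0])
    path = [last]
    for t in range(len(obs) - 1, 0, -1):
        path.append(V[t][path[-1]][1])
    return list(reversed(path))
```

Given transition and emission tables estimated from forum data, the decoded state sequence can then be mapped to predicted contribution levels.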

20 pages, 1248 KB  
Article
A Custom Transformer-Based Framework for Joint Traffic Flow and Speed Prediction in Autonomous Driving Contexts
by Behrouz Samieiyan and Anjali Awasthi
Future Transp. 2026, 6(1), 15; https://doi.org/10.3390/futuretransp6010015 - 12 Jan 2026
Abstract
Short-term traffic prediction is vital for intelligent transportation systems, enabling adaptive congestion control, real-time signal management, and dynamic route planning for autonomous vehicles (AVs). This study introduces a custom Transformer-based deep learning framework for joint forecasting of traffic flow and vehicle speed, leveraging handcrafted positional encoding and stacked multi-head attention layers to model multivariate traffic patterns. Evaluated against baselines including Long Short-Term Memory (LSTM), Support Vector Machine (SVM), Random Tree, and Random Forest on the Next-Generation Simulation (NGSIM) dataset, the model achieves 94.2% accuracy (Root Mean Squared Error (RMSE) 0.16) for flow and 92.1% accuracy for speed, outperforming traditional and deep learning approaches. A hybrid evaluation metric, integrating RMSE and threshold-based accuracy tailored to AV operational needs, enhances its practical relevance. With its parallel processing capability, this framework offers a scalable, real-time solution, advancing AV ecosystems and smart mobility infrastructure.
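A hybrid metric of the kind described, combining RMSE with threshold-based accuracy, might look like the following; the weighting scheme and RMSE normalization here are my assumptions, not the authors' exact definition:

```python
import math

def hybrid_score(y_true, y_pred, tol, w=0.5):
    """Illustrative hybrid evaluation metric.

    Blends error magnitude (RMSE, squashed into (0, 1]) with a
    threshold-based accuracy: the fraction of predictions within +/- tol
    of the true value, a tolerance an AV controller could act on.
    """
    n = len(y_true)
    rmse = math.sqrt(sum((a - b) ** 2 for a, b in zip(y_true, y_pred)) / n)
    acc = sum(abs(a - b) <= tol for a, b in zip(y_true, y_pred)) / n
    return {"rmse": rmse,
            "threshold_accuracy": acc,
            "hybrid": w * acc + (1 - w) * (1.0 / (1.0 + rmse))}
```

The threshold term reflects operational usefulness (is the forecast close enough to act on?), while the RMSE term still penalizes large outlier errors that a pure pass/fail accuracy would hide.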

16 pages, 940 KB  
Article
A Reinforcement Learning Framework for Fraud Detection in Highly Imbalanced Financial Data
by Alkis Papanastassiou, Benedetta Camaiani, Piergiulio Lenzi and Riccardo Crupi
Appl. Sci. 2026, 16(1), 252; https://doi.org/10.3390/app16010252 - 26 Dec 2025
Abstract
Anomaly detection in financial transactions is a challenging task, primarily due to severe class imbalance and the adaptive behavior of fraudulent activities. This paper presents a reinforcement learning framework for fraud detection (RLFD) to address this problem. We train a deep Q-network (DQN) agent with a long short-term memory (LSTM) encoder to process sequences of financial events and identify anomalies. On a proprietary, highly imbalanced dataset, 10-fold cross-validation highlights a distinct trade-off in performance. While a gradient boosted trees (GBT) baseline demonstrates superior global ranking capabilities (higher ROC and PR AUC), the RLFD agent successfully learns a high-recall policy directly from the reward signal, meeting operational needs for rare event detection. Importantly, a dynamic orthogonality analysis shows that the two models detect distinct subsets of fraudulent activity. The RLFD agent consistently identifies fraudulent transactions that the tree-based model misses, regardless of the decision threshold. Even at high-confidence operating points, the RLFD agent accounts for nearly 30% of the detected anomalies. These results suggest that while tree-based models offer high precision for static patterns, RL-based agents capture sequential anomalies that are otherwise missed, supporting a hybrid, parallel deployment strategy.

18 pages, 1173 KB  
Article
Machine Learning Methods for Predicting Cancer Complications Using Smartphone Sensor Data: A Prospective Study
by Gabrielė Dargė, Gabrielė Kasputytė, Paulius Savickas, Adomas Bunevičius, Inesa Bunevičienė, Erika Korobeinikova, Domas Vaitiekus, Arturas Inčiūra, Laimonas Jaruševičius, Romas Bunevičius, Ričardas Krikštolaitis, Tomas Krilavičius and Elona Juozaitytė
Appl. Sci. 2026, 16(1), 249; https://doi.org/10.3390/app16010249 - 25 Dec 2025
Abstract
Complications are frequent in cancer patients and contribute to adverse outcomes and higher healthcare costs, underscoring the need for earlier identification and prediction. This study evaluated the feasibility of using passively generated smartphone sensor data to explore early-warning signals of complications and symptom worsening during cancer treatment. A total of 108 patients were continuously monitored using accelerometer, GPS, and screen on/off data collected through the LAIMA application, while symptoms of depression, fatigue, and nausea were assessed every two weeks and complications were confirmed during clinic visits or emergency presentations. Smartphone data streams were aggregated into variables describing activity and sociability patterns. Machine learning models, including Decision Tree, Extreme Gradient Boosting, K-Nearest Neighbors, and Support Vector Machine, were used for complication prediction, and time-series models such as Autoregressive Integrated Moving Average, Holt–Winters, TBATS, Long Short-Term Memory neural network, and General Regression Neural Network were applied to identify early behavioral changes preceding symptom reports. In this exploratory analysis, the ensemble model demonstrated high sensitivity (89%) for identifying complication events. Smartphone-derived behavioral indicators enabled earlier detection of depression, fatigue, and vomiting by about nine days in a subset of patients. These findings demonstrate the feasibility of passive smartphone sensor data as exploratory early-warning signals, warranting validation in larger cohorts.

27 pages, 1906 KB  
Article
GenIIoT: Generative Models Aided Proactive Fault Management in Industrial Internet of Things
by Isra Zafat, Arshad Iqbal, Maqbool Khan, Naveed Ahmad and Mohammed Ali Alshara
Information 2025, 16(12), 1114; https://doi.org/10.3390/info16121114 - 18 Dec 2025
Abstract
Proactively detecting failures is important for the Industrial Internet of Things (IIoT). The IIoT aims to connect devices and machinery across industries. The devices connect via the Internet and provide large amounts of data which, when processed, can generate information and even support automated decisions on the administration of industries. However, traditional proactive fault-management techniques face significant challenges, including highly imbalanced datasets, a limited availability of failure data, and poor generalization to real-world conditions. These issues hinder prompt and accurate fault detection in real IIoT environments. To overcome these challenges, this work proposes a data augmentation mechanism which integrates generative adversarial networks (GANs) and the synthetic minority oversampling technique (SMOTE). The integrated GAN-SMOTE method augments minority-class data by generating failure patterns that closely resemble industrial conditions, increasing model robustness and mitigating data imbalance. Consequently, the dataset is well balanced and suitable for the robust training and validation of learning models. The data are then used to train and evaluate a variety of models, including deep learning architectures, such as convolutional neural networks (CNNs) and long short-term memory networks (LSTMs), and conventional machine learning models, such as support vector machines (SVMs), K-nearest neighbors (KNN), and decision trees. The proposed mechanism provides an end-to-end framework that is validated on both generated and real-world industrial datasets. In particular, the evaluation is performed using the AI4I, SECOM, and APS datasets, which enable comprehensive testing across different fault scenarios. The proposed scheme improves the usability of the model and supports its deployment in real IIoT environments. The improved detection performance of the integrated GAN-SMOTE framework effectively addresses fault classification challenges, raising classification accuracy to 0.99. The GAN-SMOTE framework overcomes major limitations of traditional fault detection approaches and offers a robust, scalable, and practical solution for intelligent maintenance systems in the IIoT environment.
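The SMOTE half of the GAN-SMOTE pipeline interpolates between nearby minority samples; a minimal sketch of that interpolation (my own simplified version, not the `imbalanced-learn` implementation or the paper's code, and the GAN component is omitted entirely):

```python
import random

def smote_like(minority, n_new, k=2, seed=0):
    """SMOTE-style oversampling sketch.

    Each synthetic sample is a random point on the line segment between a
    minority sample and one of its k nearest minority neighbors, so new
    failure examples stay inside the region the minority class occupies.
    """
    rng = random.Random(seed)

    def dist2(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))

    synthetic = []
    for _ in range(n_new):
        base = rng.choice(minority)
        neighbors = sorted((p for p in minority if p is not base),
                           key=lambda p: dist2(base, p))[:k]
        nb = rng.choice(neighbors)
        lam = rng.random()  # interpolation factor in [0, 1)
        synthetic.append(tuple(x + lam * (y - x) for x, y in zip(base, nb)))
    return synthetic
```

In the paper's hybrid scheme, GAN-generated failure patterns complement these interpolated points, which helps when the minority class is too sparse for interpolation alone to cover realistic failure modes.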

45 pages, 17121 KB  
Article
From Black Box to Transparency: An Explainable Machine Learning (ML) Framework for Ocean Wave Prediction Using SHAP and Feature-Engineering-Derived Variable
by Ahmet Durap
Mathematics 2025, 13(24), 3962; https://doi.org/10.3390/math13243962 - 12 Dec 2025
Abstract
Accurate prediction of significant wave height (SWH) is central to coastal ocean dynamics, wave–climate assessment, and operational marine forecasting, yet many high-performing machine-learning (ML) models remain opaque and weakly connected to underlying wave physics. We propose an explainable, feature-engineering-guided ML framework for coastal SWH prediction that combines extremal wave statistics, temporal descriptors, and SHAP-based interpretation. Using 30 min buoy observations from a high-energy, wave-dominated coastal site off Australia’s Gold Coast, we benchmarked seven regression models (Linear Regression, Decision Tree, Random Forest, Gradient Boosting, Support Vector Regression, K-Nearest Neighbors, and Neural Networks) across four feature sets: (i) Base (Hmax, Tz, Tp, SST, peak direction), (ii) Base + Temporal (lags, rolling statistics, cyclical hour/month encodings), (iii) Base + a physics-informed Wave Height Ratio, WHR = Hmax/Hs, and (iv) Full (Base + Temporal + WHR). Model skill is evaluated for full-year, 1-month, and 10-day prediction windows. Performance was assessed using R2, RMSE, MAE, and bias metrics, with the Technique for Order Preference by Similarity to Ideal Solution (TOPSIS) employed for multi-criteria ranking. Inclusion of WHR systematically improves performance, raising test R2 from a baseline range of ~0.85–0.95 to values exceeding 0.97 and reducing RMSE by up to 86%, with the Random Forest model on the Base + WHR feature set achieving the top TOPSIS score (1.000). SHAP analysis identifies WHR and lagged SWH as dominant predictors, linking model behavior to extremal sea states and short-term memory in the wave field. The proposed framework demonstrates how embedding simple, physically motivated features and explainable AI tools can transform black-box coastal wave predictors into transparent models suitable for geophysical fluid dynamics, coastal hazard assessment, and wave-energy applications.
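The two dominant predictors named above are simple to construct; a sketch of the WHR feature (using the abstract's own definition, WHR = Hmax/Hs) and a generic lagged-feature helper whose name is mine:

```python
def wave_height_ratio(h_max, h_s):
    """Physics-informed feature from the paper: WHR = Hmax / Hs.

    The ratio of maximum to significant wave height links model inputs to
    extremal sea states (larger WHR implies more extreme individual waves
    relative to the prevailing sea state).
    """
    if h_s <= 0:
        raise ValueError("significant wave height must be positive")
    return h_max / h_s

def add_lagged(series, lag=1):
    """Pair each SWH observation with the value `lag` steps earlier,
    giving the model the short-term memory SHAP identified as important.
    The earliest observations, which have no lagged value, are dropped."""
    return [(series[i - lag], series[i]) for i in range(lag, len(series))]
```

Both features are cheap to compute from buoy records, which is part of the abstract's point: simple physically motivated features, not model complexity, drive most of the skill gain.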

38 pages, 6341 KB  
Article
Nonlinear Perceptual Thresholds and Trade-Offs of Visual Environment in Historic Districts: Evidence from Street View Images in Shanghai
by Zhanzhu Wang, Weiying Zhang and Yongming Huang
Sustainability 2025, 17(24), 11075; https://doi.org/10.3390/su172411075 - 10 Dec 2025
Abstract
Historic districts, as important spatial units that carry urban cultural memory and everyday social life, play a crucial role in shaping residents’ spatial identity, emotional attachment, and perceptual experience. Although quantitative research on built environments and perception has advanced considerably in recent years, the mechanisms through which perception is formed in historic districts, particularly the nonlinear threshold effects and perceptual trade-off patterns that arise under conditions of high-density and mixed land use, remain insufficiently examined. To address this gap, this study develops an analytical framework that integrates spatial attributes with multidimensional subjective perceptions. Focusing on six historic districts in central Shanghai, the study combines micro-scale environmental indicators extracted from street-view imagery, POI data, and public perceptual evaluations and employs an XGBoost model to identify the nonlinear response patterns, threshold effects, and perceptual trade-offs across seven perceptual dimensions. The results show that natural elements such as visual greenery and sky openness generate significant threshold-based enhancement effects: once they reach a certain level of visibility, they substantially increase positive perceptions including beauty, safety, and cleanliness. By contrast, commercial and traffic-related facilities exhibit dual and competing perceptual influences. Moderate densities enhance liveliness, whereas high concentrations tend to induce perceptual fatigue and intensify negative emotional responses. Overall, perceptual quality in historic districts does not arise from linear accumulation but is shaped by dynamic perceptual trade-offs among natural features, functional elements, and cultural symbolism. More broadly, the study reveals the coupling mechanism between spatial renewal and perceptual experience amid the pressures of urban modernization. It also demonstrates that increasing visible greenery (e.g., planting street trees, incorporating micro-green spaces, improving façade greening), enhancing street openness (e.g., optimizing view corridors, reducing visual obstruction, implementing moderate setback adjustments), guiding a moderate mix and spatial distribution of commercial and service functions, and strengthening the perceptibility of cultural landscape elements (e.g., façade restoration, streetscape coordination, and improved signage systems) are concrete and effective planning and design actions for improving landscape quality and enhancing the experiential quality of historic districts.
(This article belongs to the Section Tourism, Culture, and Heritage)

21 pages, 3252 KB  
Article
A Machine Learning-Based Calibration Framework for Low-Cost PM2.5 Sensors Integrating Meteorological Predictors
by Xuying Ma, Yuanyuan Fan, Yifan Wang, Xiaoqi Wang, Zelei Tan, Danyang Li, Jun Gao, Leshu Zhang, Yixin Xu, Xueyao Liu, Shuyan Cai, Yuxin Ma and Yongzhe Huang
Chemosensors 2025, 13(12), 425; https://doi.org/10.3390/chemosensors13120425 - 8 Dec 2025
Abstract
Low-cost sensors (LCSs) have rapidly expanded in urban air quality monitoring but still suffer from limited data accuracy and vulnerability to environmental interference compared with regulatory monitoring stations. To improve their reliability, we propose a machine learning (ML)-based framework for LCS correction that integrates various meteorological factors at observation sites. Taking Tongshan District of Xuzhou City as an example, this study carried out continuous co-location collection of hourly PM2.5 measurements by placing our LCS (American Temtop M10+ series) close to a regular fixed monitoring station. A mathematical model was developed to regress the PM2.5 deviations (the difference between the PM2.5 concentration at the fixed station and that at the LCS) on the most important predictor variables. Calibration was carried out with six ML algorithms: random forest (RF), support vector regression (SVR), long short-term memory network (LSTM), decision tree regression (DTR), Gated Recurrent Unit (GRU), and Bidirectional LSTM (BiLSTM), and the best-performing model was selected. Calibration performance was then evaluated on a testing dataset generated in a bootstrap fashion with ten repetitions. The results show that RF achieved the best overall accuracy, with R2 of 0.99 (training), 0.94 (validation), and 0.94 (testing), followed by DTR, BiLSTM, and GRU, which also showed strong predictive capabilities. In contrast, LSTM and SVR produced lower accuracy with larger errors under the limited data conditions. The results demonstrate that tree-based and advanced deep learning models can effectively capture the complex nonlinear relationships influencing LCS performance. The proposed framework exhibits high scalability and transferability, allowing its application to different LCS types and regions. This study advances the development of innovative techniques that enhance air quality assessment and support environmental research.

27 pages, 11265 KB  
Article
Using Machine Learning Methods to Predict Cognitive Age from Psychophysiological Tests
by Daria D. Tyurina, Sergey V. Stasenko, Konstantin V. Lushnikov and Maria V. Vedunova
Healthcare 2025, 13(24), 3193; https://doi.org/10.3390/healthcare13243193 - 5 Dec 2025
Abstract
Background/Objectives: This paper presents the results of predicting chronological age from psychophysiological tests using machine learning regressors. Methods: Subjects completed a series of psychological tests measuring various cognitive functions, including reaction time and cognitive conflict, short-term memory, verbal functions, and color and spatial perception. The sample included 99 subjects, 68 percent men and 32 percent women. Based on the test results, 43 features were generated. To determine the optimal feature selection method, several approaches were tested alongside the regression models using MAE, R2, and CV_R2 metrics. SHAP and Permutation Importance (via Random Forest) delivered the best performance with 10 features. Features selected through Permutation Importance were used in subsequent analyses. To predict participants’ age from psychophysiological test results, we evaluated several regression models, including Random Forest, Extra Trees, Gradient Boosting, SVR, Linear Regression, LassoCV, RidgeCV, ElasticNetCV, AdaBoost, and Bagging. Model performance was compared using the coefficient of determination (R2) and mean absolute error (MAE). Cross-validated performance (CV_R2) was estimated via 5-fold cross-validation. To assess metric stability and uncertainty, bootstrapping (1000 resamples) was applied to the test set, yielding distributions of MAE and RMSE from which mean values and 95% confidence intervals were derived. Results: The study identified RidgeCV with winsorization and standardization as the best model for predicting cognitive age, achieving a mean absolute error of 5.7 years and an R2 of 0.60. Feature importance was evaluated using SHAP values and permutation importance. SHAP analysis showed that stroop_time_color and stroop_var_attempt_time were the strongest predictors, followed by several task-timing features with moderate contributions. Permutation importance confirmed this ranking, with these two features causing the largest performance drop when permuted. Partial dependence plots further indicated clear positive relationships between these key features and predicted age. Correlation analysis stratified by sex revealed that most features were significantly associated with age, with stronger effects generally observed in men. Conclusions: Feature selection revealed Stroop timing measures and task-related metrics from math and campimetry tests as the strongest predictors, reflecting core cognitive processes linked to aging. The results underscore the value of careful outlier handling, feature selection, and interpretable regularized models for analyzing psychophysiological data. Future work should include longitudinal studies and integration with biological markers to further improve clinical relevance.
(This article belongs to the Special Issue AI-Driven Healthcare Insights)
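The bootstrap procedure described (1000 resamples of the test set, yielding mean MAE and a 95% confidence interval) can be sketched as follows; the function name and the percentile-interval variant are my choices, not necessarily the authors' exact implementation:

```python
import random

def bootstrap_mae_ci(errors, n_boot=1000, alpha=0.05, seed=42):
    """Bootstrap the mean absolute error from per-subject prediction errors.

    Resample the absolute errors with replacement, recompute MAE on each
    resample, and report the bootstrap mean plus the (1 - alpha) percentile
    confidence interval of the MAE distribution.
    """
    rng = random.Random(seed)
    n = len(errors)
    maes = sorted(
        sum(abs(rng.choice(errors)) for _ in range(n)) / n
        for _ in range(n_boot))
    lo = maes[int((alpha / 2) * n_boot)]
    hi = maes[int((1 - alpha / 2) * n_boot) - 1]
    return sum(maes) / n_boot, (lo, hi)
```

The same routine applied to squared errors (with a square root at the end) would give the RMSE distribution the abstract also reports.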

23 pages, 1977 KB  
Article
A Generalizable Hybrid AI-LSTM Model for Energy Consumption and Decarbonization Forecasting
by Khaled M. Salem, A. O. Elgharib, Javier M. Rey-Hernández and Francisco J. Rey-Martínez
Sustainability 2025, 17(23), 10882; https://doi.org/10.3390/su172310882 - 4 Dec 2025
Abstract
This research presents a solution to the problem of controlling the energy demand and carbon footprint of old buildings, focusing on a heated building located in Madrid, Spain. A framework that incorporates AI and advanced hybrid ensemble approaches to make highly accurate energy consumption predictions was developed and tested in the MATLAB environment. First, the study evaluated six individual AI models (ANN, RF, XGBoost, RBF, Autoencoder, and Decision Tree) using a dataset of 100 points collected from the building’s sensors. Their performance was evaluated with high-quality data, verified to be free of missing values and outliers and prepared using L1/L2 normalization to guarantee optimal model performance. Higher accuracy was then achieved by combining the models through hybrid ensemble techniques (voting, stacking, and blending). The main contribution is the application of a Long Short-Term Memory (LSTM) model for predicting the building’s energy consumption and, importantly, its carbon footprint over a 30-year period until 2050. Additionally, the proposed methodology provides a structured pathway for existing buildings to progress toward nearly Zero-Energy Building (nZEB) performance by enabling more effective control of their energy demand and operational emissions. The comprehensive assessment of predictive models concludes that the blended ensemble method is the most powerful and accurate forecasting tool, achieving 97% accuracy. A scenario in which building heating energy use jumps to 135 by 2050 (a 35% increase above 2020 levels) represents a complete failure to achieve energy efficiency and decarbonization goals, which would fundamentally jeopardize climate targets, energy security, and consumer expenditure.
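The combination step behind the voting/blending ensembles above reduces, at its simplest, to a weighted average of per-model predictions; a generic sketch (the paper's blending weights are learned from a hold-out set, which is not reproduced here):

```python
def blend_predictions(model_preds, weights=None):
    """Combine per-model prediction lists into one blended forecast.

    With equal weights this is a simple averaging (voting) ensemble; in
    blending/stacking, the weights would instead be fitted by a meta-model
    on held-out data so that more accurate base models count for more.
    """
    n_models = len(model_preds)
    if weights is None:
        weights = [1.0 / n_models] * n_models
    return [sum(w * preds[i] for w, preds in zip(weights, model_preds))
            for i in range(len(model_preds[0]))]
```

Averaging works because the base models' errors are partly uncorrelated, so their disagreements tend to cancel; the paper's 97%-accurate blended ensemble exploits exactly this effect.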

29 pages, 6244 KB  
Article
Application of Long Short-Term Memory and XGBoost Model for Carbon Emission Reduction: Sustainable Travel Route Planning
by Sevcan Emek, Gizem Ildırar and Yeşim Gürbüzer
Sustainability 2025, 17(23), 10802; https://doi.org/10.3390/su172310802 - 2 Dec 2025
Viewed by 745
Abstract
Travel planning is a process that allows users to obtain maximum benefit from their time, cost, and energy. When planning a route from one place to another, presenting alternative travel areas along the route is an important option. This study proposes a travel route planning (TRP) architecture using a Long Short-Term Memory (LSTM) and Extreme Gradient Boosting (XGBoost) model to improve both travel efficiency and environmental sustainability in route selection. This model incorporates carbon emissions directly into the route planning process by unifying user preferences, location recommendations, route optimization, and multimodal vehicle selection within a comprehensive framework. By merging environmental sustainability with user-focused travel planning, it generates personalized, practical, and low-carbon travel routes. The carbon emissions observed with TRP's artificial intelligence (AI) recommendation route are presented comparatively with those of the user-determined route. XGBoost, Random Forest (RF), Categorical Boosting (CatBoost), Light Gradient Boosting Machine (LightGBM), Extra Trees Regressor (ETR), and Multi-Layer Perceptron (MLP) models are applied to the TRP model. LSTM is compared with Recurrent Neural Network (RNN) and Gated Recurrent Unit (GRU) models. Root Mean Square Error (RMSE), Mean Absolute Error (MAE), Mean Squared Error (MSE), and Normalized Root Mean Square Error (NRMSE) measurements of these models are carried out, and the best results are obtained using XGBoost and LSTM. TRP enhances environmental responsibility awareness within travel planning by integrating sustainability-oriented parameters into the decision-making process. Unlike conventional reservation systems, this model encourages individuals and organizations to prioritize eco-friendly options by considering not only financial factors but also environmental and socio-cultural impacts. By promoting responsible travel behaviors and supporting the adoption of sustainable tourism practices, the proposed approach contributes significantly to the broader dissemination of environmentally conscious travel choices. Full article
(This article belongs to the Special Issue Design of Sustainable Supply Chains and Industrial Processes)
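The four error measures named in the abstract can be computed as in the sketch below. Note that the NRMSE normalization by the observed range is one common convention; the paper may use a different denominator (e.g., the mean or standard deviation of the observations).

```python
import numpy as np

def regression_metrics(y_true, y_pred):
    """Compute MSE, RMSE, MAE, and NRMSE for a pair of series."""
    y_true = np.asarray(y_true, dtype=float)
    y_pred = np.asarray(y_pred, dtype=float)
    mse = np.mean((y_true - y_pred) ** 2)
    rmse = np.sqrt(mse)
    mae = np.mean(np.abs(y_true - y_pred))
    # NRMSE here divides by the range of the observations (one common choice).
    nrmse = rmse / (y_true.max() - y_true.min())
    return {"MSE": mse, "RMSE": rmse, "MAE": mae, "NRMSE": nrmse}
```

Because RMSE and MSE penalize large deviations quadratically while MAE does not, comparing all four side by side (as the study does) helps distinguish models that are accurate on average from those that avoid occasional large errors.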

18 pages, 1405 KB  
Article
Bidirectional Algorithms for Polygon Triangulations and (m + 2)-Angulations via Fuss–Catalan Numbers
by Aybeyan Selim, Muzafer Saracevic, Lazar Stosic, Omer Aydin and Mahir Zajmović
Mathematics 2025, 13(23), 3837; https://doi.org/10.3390/math13233837 - 30 Nov 2025
Cited by 1 | Viewed by 423
Abstract
Polygon triangulations and their generalizations to (m+2)-angulations are fundamental in combinatorics and computational geometry. This paper presents a unified linear-time framework that establishes explicit bijections between m-Dyck words, planted (m+1)-ary trees, and (m+2)-angulations of convex polygons. We introduce stack-based and tree-based algorithms that enable reversible conversion between symbolic and geometric representations, prove their correctness and optimal complexity, and demonstrate their scalability through extensive experiments. The approach reveals a hierarchical decomposition encoded by Fuss–Catalan numbers, providing a compact and uniform representation for triangulations, quadrangulations, pentangulations, and higher-arity angulations. Experimental comparisons show clear advantages over rotation-based methods in both runtime and memory usage. The framework offers a general combinatorial foundation that supports efficient enumeration, compressed representation, and extensions to higher-dimensional or non-convex settings. Full article
(This article belongs to the Special Issue Advances in Algorithms, Data Structures, and Computing)
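The Fuss–Catalan counts underlying this decomposition have a standard closed form: the number of (m+2)-angulations of a convex polygon with mn+2 vertices is binom((m+1)n, n) / (mn+1), which reduces to the ordinary Catalan numbers (triangulations) when m = 1. A minimal sketch:

```python
from math import comb

def fuss_catalan(m: int, n: int) -> int:
    """Fuss–Catalan number comb((m+1)*n, n) // (m*n + 1).

    Counts the dissections of a convex polygon with m*n + 2 vertices
    into n cells of m + 2 sides each; m = 1 gives the Catalan numbers.
    """
    return comb((m + 1) * n, n) // (m * n + 1)
```

The integer division is exact (the Fuss–Catalan formula always yields an integer), so no floating-point arithmetic is needed even for large m and n.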

16 pages, 1831 KB  
Article
The ICN-UN Battery: A Machine Learning-Optimized Tool for Expeditious Alzheimer’s Disease Diagnosis
by Ernesto Barceló, Duban Romero, Ricardo Allegri, Eliana Meza, María I. Mosquera-Heredia, Oscar M. Vidal, Carlos Silvera-Redondo, Mauricio Arcos-Burgos, Pilar Garavito-Galofre and Jorge I. Vélez
Diagnostics 2025, 15(23), 3045; https://doi.org/10.3390/diagnostics15233045 - 28 Nov 2025
Viewed by 479
Abstract
Background/Objectives: Alzheimer’s disease (AD) accounts for ~70% of global dementia cases, with projections estimating 139 million affected individuals by 2050. This increasing burden highlights the urgent need for accessible, cost-effective diagnostic tools, particularly in low- and middle-income countries (LMICs). Traditional neuropsychological assessments, while effective, are resource-intensive and time-consuming. Methods: A total of 760 older adults (394 [51.8%] with AD) were recruited and neuropsychologically evaluated at the Instituto Colombiano de Neuropedagogía (ICN) in collaboration with Universidad del Norte (UN), Barranquilla. Machine learning (ML) algorithms were trained on a screening protocol incorporating demographic data and neuropsychological measures assessing memory, language, executive function, and praxis. Model performance was determined using 10-fold cross-validation. Variable importance analyses identified key predictors to develop optimized, abbreviated ML-based protocols. Metrics of compactness, cohesion, and separation further quantified diagnostic differentiation performance. Results: The eXtreme Gradient Boosting (xgbTree) algorithm achieved the highest diagnostic accuracy (91%) with the full protocol. Five ML-optimized screening protocols were also developed. The most efficient, the ICN-UN battery (including MMSE, Rey–Osterrieth Complex Figure recall, Rey Auditory Verbal Learning, Lawton & Brody Scale, and FAST), maintained strong diagnostic performance while reducing screening time from over four hours to under 25 min. Conclusions: The ML-optimized ICN-UN protocol offers a rapid, accurate, and scalable AD screening solution for LMICs. While promising for clinical adoption and earlier detection, further validation in diverse populations is recommended. Full article
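The evaluation scheme described above (a tree-boosting classifier scored by 10-fold cross-validation, with variable importance used to abbreviate the protocol) can be sketched as follows. This is an illustrative outline: sklearn's GradientBoostingClassifier stands in for caret's xgbTree, and the data are synthetic rather than the study's neuropsychological measures.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import cross_val_score

# Synthetic stand-in for the demographic + neuropsychological predictors.
X, y = make_classification(
    n_samples=300, n_features=8, n_informative=4, random_state=0
)
clf = GradientBoostingClassifier(random_state=0)

# 10-fold cross-validated accuracy, as in the study's validation protocol.
scores = cross_val_score(clf, X, y, cv=10, scoring="accuracy")

# Fit once on all data to rank predictors; the top-ranked variables would
# form an abbreviated screening battery.
clf.fit(X, y)
ranking = np.argsort(clf.feature_importances_)[::-1]
```

Keeping only the highest-importance predictors is what lets an abbreviated battery preserve most of the full protocol's accuracy while cutting administration time.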
