Search Results (1,948)

Search Parameters:
Keywords = linear and nonlinear regression

23 pages, 5029 KB  
Article
Fundamental Validation of an AI-Based Impact Analysis Framework for Structural Elements in Wooden Structures
by Tokikatsu Namba
Appl. Sci. 2026, 16(2), 915; https://doi.org/10.3390/app16020915 - 15 Jan 2026
Abstract
This study proposes an AI-based framework for impact analysis of wooden structures, focusing on quantitatively assessing how individual seismic elements and their spatial locations influence structural response. A single-story residential building was used as a case study. Numerical time-history analyses were performed using a detailed three-dimensional nonlinear model, and parametric variations in stiffness and strength were systematically generated using an orthogonal array. Machine learning models were then trained to investigate the relationship between these parameters and seismic responses, and explainable artificial intelligence (XAI) techniques, including SHAP, were applied to evaluate and interpret parameter influences. The results suggest that wall elements oriented parallel to the target inter-story drift direction generally have the greatest effect on seismic response. Quantitative analysis indicates that the relative importance of these elements roughly corresponds to their wall lengths, providing physically interpretable evidence. Model comparisons show that linear regression achieves high accuracy in the elastic range, while Gradient Boosting performs better under strong excitations inducing nonlinear behavior, reflecting the transition from elastic to plastic response. SHAP-based analysis further provides insights into both the magnitude and direction of parameter influence, enabling element- and location-specific interpretation not readily obtained from traditional global sensitivity measures. Overall, the findings indicate that the proposed framework has the potential to support the identification of influential structural elements and the quantitative assessment of their contributions, which could assist in informed engineering decision-making. Full article
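As a rough illustration of the model comparison and SHAP attribution this abstract describes, the sketch below fits a linear model and a gradient-boosting model on synthetic stiffness/strength scale factors and reports mean absolute SHAP values per parameter. The data, feature count, and toy drift response are invented for the example, not the paper's dataset; the shap package is assumed installed.

```python
# Hypothetical sketch: linear vs gradient-boosting models on simulated
# stiffness/strength parameters, with SHAP used to rank parameter influence.
import numpy as np
import shap
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X = rng.uniform(0.5, 1.5, size=(500, 6))  # toy stiffness/strength scale factors
drift = 1.0 / X[:, :3].sum(axis=1) + 0.05 * rng.standard_normal(500)  # toy drift

X_tr, X_te, y_tr, y_te = train_test_split(X, drift, random_state=0)
for model in (LinearRegression(), GradientBoostingRegressor(random_state=0)):
    model.fit(X_tr, y_tr)
    print(type(model).__name__, "test R2:", round(model.score(X_te, y_te), 3))

# SHAP values give signed, per-parameter influence on the predicted drift
gbr = GradientBoostingRegressor(random_state=0).fit(X_tr, y_tr)
shap_values = shap.TreeExplainer(gbr).shap_values(X_te)
print("mean |SHAP| per parameter:", np.abs(shap_values).mean(axis=0))
```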

43 pages, 43591 KB  
Article
Research on the Formation Mechanism of Spontaneous Living Spaces and Their Impact on Community Vitality
by Xiyue Guan, Wei Shang, Fukang Chen and Wei Liu
Buildings 2026, 16(2), 352; https://doi.org/10.3390/buildings16020352 - 14 Jan 2026
Abstract
Spontaneous living spaces are public activity venues within cities that emerge through residents’ autonomous creation and informal planning. Although these spaces may appear disorganized, they serve vital functions: fostering social interaction, enhancing community vitality, improving spatial adaptability, and increasing life satisfaction. However, research on the formation mechanisms, structural logic, resident satisfaction, and the impact of spontaneous living spaces on community vitality is limited, and there is a lack of robust research methodologies. This study aims to explore the formation mechanisms of spontaneous living spaces within historic cultural districts and their influence on community vitality. Using Wuhan’s Tanhualin National Historic and Cultural District as a case study, this research innovatively combines the Mask R-CNN deep learning model with a Random Forest regression model. The Mask R-CNN model was employed to accurately identify and perform pixel-level segmentation of 1249 spontaneous living spaces. Combined with questionnaire surveys and the Random Forest model, this study reveals non-linear relationships between key factors such as community vitality, resident satisfaction with various types of spontaneous living spaces, and crowd density. The findings show that spontaneous living spaces effectively address residents’ unmet needs for emotional connection and dynamic lifestyles—needs often overlooked by official residential planning. This research provides a reliable technical framework and quantitative decision support for regulating the formation of spontaneous living spaces, thereby enhancing residents’ quality of life and urban vitality while preserving historical character. Full article
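As a rough sketch of the study's second stage, the snippet below fits a Random Forest regressor relating space attributes to a vitality score and prints feature importances. The variable names and data are hypothetical stand-ins, not the study's survey or Mask R-CNN outputs.

```python
# Illustrative sketch: Random Forest regression of community vitality on
# invented spontaneous-space attributes, with feature importances printed.
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(9)
features = ["space_area", "crowd_density", "seating", "greenery", "shade"]
X = rng.random((1249, len(features)))          # one row per detected space (toy)
vitality = 2 * X[:, 1] + np.sin(3 * X[:, 0]) + rng.normal(0, 0.1, 1249)

rf = RandomForestRegressor(n_estimators=300, random_state=0).fit(X, vitality)
for name, imp in zip(features, rf.feature_importances_):
    print(f"{name:14s} importance: {imp:.3f}")
```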
(This article belongs to the Special Issue Advancing Urban Analytics and Sensing for Sustainable Cities)

16 pages, 962 KB  
Article
Temporal Cardiorenal Dynamics and Mortality Prediction After TAVR: The Prognostic Value of the 48–72 h BUN/EF Ratio
by Aykan Çelik, Tuncay Kırış, Fatma Kayaaltı Esin, Semih Babacan, Harun Erdem and Mustafa Karaca
J. Clin. Med. 2026, 15(2), 676; https://doi.org/10.3390/jcm15020676 - 14 Jan 2026
Viewed by 15
Abstract
Background: Renal and cardiac dysfunction are major determinants of adverse outcomes following transcatheter aortic valve replacement (TAVR). The ratio of blood urea nitrogen to left ventricular ejection fraction (BUN/EF) integrates renal and cardiac status into a single physiological index. This study aimed to evaluate the prognostic value of both baseline and temporal (48–72 h) BUN/EF ratios for predicting mortality after TAVR. Methods: A total of 429 patients (mean age 76 ± 8 years; 51% female) who underwent TAVR for severe aortic stenosis between 2017 and 2025 were retrospectively analyzed. The primary endpoint was long-term all-cause mortality; in-hospital mortality was secondary. Receiver operating characteristic (ROC) curves, Cox regression, and reclassification metrics (NRI, IDI) assessed prognostic performance. Restricted cubic spline (RCS) analysis explored non-linear associations. Results: During a median follow-up of 733 days, overall and in-hospital mortality rates were 37.8% and 7.9%, respectively. Both baseline and 48–72 h BUN/EF ratios were independently associated with mortality (HR = 3.46 and 3.79 per 1 SD increase; both p < 0.001). The temporal ratio showed superior discrimination for in-hospital mortality (AUC = 0.826 vs. 0.743, p = 0.007). Adding baseline BUN/EF to EuroSCORE II significantly improved model performance (AUC 0.712 vs. 0.668, p = 0.031; NRI = 0.33; IDI = 0.067). RCS analysis revealed a linear relationship for baseline and a steep, non-linear association for temporal ratios with mortality risk. Conclusions: The 48–72 h BUN/EF ratio is a robust dynamic biomarker that predicts early mortality after TAVR, while baseline BUN/EF identifies patients at long-term risk. Integrating this simple bedside index into risk algorithms may refine postoperative monitoring and improve outcome prediction in TAVR populations. Full article
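A minimal sketch of the BUN/EF computation and ROC comparison on synthetic values follows; the distributions and the mortality mechanism are assumptions for illustration only, not the cohort data.

```python
# Illustrative sketch: BUN/EF at baseline and 48-72 h, compared by ROC AUC
# for a toy mortality label driven more strongly by the temporal ratio.
import numpy as np
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(1)
n = 429
bun_baseline = rng.normal(25, 8, n).clip(5)    # mg/dL (synthetic)
bun_48_72h = bun_baseline + rng.normal(3, 6, n)
ef = rng.normal(55, 10, n).clip(20, 75)        # % (synthetic)

ratio_baseline = bun_baseline / ef
ratio_temporal = bun_48_72h / ef
p = 1 / (1 + np.exp(-(ratio_temporal - ratio_temporal.mean()) * 6))
died = rng.random(n) < p

print("baseline AUC:", round(roc_auc_score(died, ratio_baseline), 3))
print("48-72 h AUC:", round(roc_auc_score(died, ratio_temporal), 3))
```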

23 pages, 4679 KB  
Article
A Synergistic Rehabilitation Approach for Post-Stroke Patients with a Hand Exoskeleton: A Feasibility Study with Healthy Subjects
by Cristian Camardella, Tommaso Bagneschi, Federica Serra, Claudio Loconsole and Antonio Frisoli
Robotics 2026, 15(1), 21; https://doi.org/10.3390/robotics15010021 - 14 Jan 2026
Viewed by 78
Abstract
Hand exoskeletons are increasingly used to support post-stroke reach-to-grasp, yet most intention-detection strategies trigger assistance from local hand events without considering the synergy between proximal arm transport and distal hand shaping. We evaluated whether proximal arm kinematics, alone or fused with EMG, can predict flexor and extensor digitorum activity for synergy-aligned hand assistance. We trained nine models per participant: linear regression (LINEAR), feedforward neural network (NONLINEAR), and LSTM, each under EMG-only, kinematics-only (KIN), and EMG+KIN inputs. Performance was assessed by RMSE on test trials and by a synergy-retention analysis, comparing synergy weights from original EMG versus a hybrid EMG in which extensor and flexor digitorum measured signals were replaced by model predictions. Results showed that kinematic information can predict muscle activity even with a simple linear model (average RMSE around 30% of signal amplitude peak during go-to-grasp contractions), and synergy analysis indicated high cosine similarity between original and hybrid synergy weights (on average 0.87 for the LINEAR model). Furthermore, the LINEAR model with kinematic input was tested in a real-time go-to-grasp task as part of a high-level control strategy for a hand exoskeleton, to better simulate post-stroke rehabilitation scenarios. These results highlight the intrinsically synergistic nature of go-to-grasp actions, offering a practical path, in hand rehabilitation contexts, for timing hand assistance in synergy with arm transport with minimal setup burden. Full article
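A minimal sketch of the synergy-retention check follows: it extracts synergy weights from original versus hybrid EMG envelopes and compares them by cosine similarity. The abstract does not state the decomposition method; NMF is a common choice for muscle synergies and is used here as an assumption, and the data shapes are invented.

```python
# Sketch: synergy weights from original vs hybrid EMG, compared per synergy
# by cosine similarity (NMF assumed as the decomposition method).
import numpy as np
from sklearn.decomposition import NMF

rng = np.random.default_rng(2)
emg = rng.random((1000, 8))                   # time x muscles, toy envelopes
hybrid = emg.copy()
hybrid[:, :2] += 0.1 * rng.random((1000, 2))  # stand-in for model-predicted channels

def synergy_weights(signals, n_synergies=3):
    model = NMF(n_components=n_synergies, init="nndsvda", max_iter=500, random_state=0)
    model.fit(signals)
    return model.components_                   # (synergies x muscles)

W_orig = synergy_weights(emg)
W_hyb = synergy_weights(hybrid)
cos = [w1 @ w2 / (np.linalg.norm(w1) * np.linalg.norm(w2))
       for w1, w2 in zip(W_orig, W_hyb)]
print("cosine similarity per synergy:", np.round(cos, 3))
```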
(This article belongs to the Special Issue AI for Robotic Exoskeletons and Prostheses)

21 pages, 9269 KB  
Article
Study on Shaft Soft Rock Deformation Prediction Based on Weighted Improved Stacking Ensemble Learning
by Longlong Zhao, Shuang You, Qixing Feng and Hongguang Ji
Appl. Sci. 2026, 16(2), 834; https://doi.org/10.3390/app16020834 - 14 Jan 2026
Viewed by 65
Abstract
In recent years, deformation disasters in mine shafts have occurred frequently, posing a threat to mine safety. The nonlinear coupling relationship between shaft surrounding rock deformation and rock mass mechanical parameters is a key criterion for surrounding rock stability. However, existing machine learning prediction methods are rarely applied to shaft deformation, and issues such as poor accuracy and generalization of single models remain. To address this, this study proposes a feature-weighted Stacking ensemble model that considers 15 feature variables. Using RMSE, MAE, R2, and inter-model MAPE correlation as evaluation metrics, GBDT, XGBoost, KNN, and MLP are selected as base learners, with Lasso linear regression as the meta-learner. Prediction errors are corrected by weighting the outputs of base learners according to their prediction accuracy. Experiments show that, using MAPE as the evaluation metric, the improved model reduces the error by 2.59% compared with the best base learner KNN, by 6.83% compared with XGBoost, and by 0.18% compared with the traditional Stacking algorithm, making it suitable for predicting weak surrounding rock shaft deformation under multi-feature conditions. Full article
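A minimal scikit-learn sketch of the stacking layout named above (GBDT, XGBoost, KNN, and MLP base learners with a Lasso meta-learner) follows. The paper's accuracy-based weighting of base-learner outputs is not reproduced, xgboost is assumed installed, and all hyperparameters are placeholders.

```python
# Sketch of the base-learner / meta-learner layout via StackingRegressor;
# the feature-weighting correction described in the abstract is omitted.
from sklearn.ensemble import GradientBoostingRegressor, StackingRegressor
from sklearn.linear_model import Lasso
from sklearn.neighbors import KNeighborsRegressor
from sklearn.neural_network import MLPRegressor
from xgboost import XGBRegressor

stack = StackingRegressor(
    estimators=[
        ("gbdt", GradientBoostingRegressor(random_state=0)),
        ("xgb", XGBRegressor(random_state=0)),
        ("knn", KNeighborsRegressor(n_neighbors=5)),
        ("mlp", MLPRegressor(hidden_layer_sizes=(64, 32), max_iter=2000, random_state=0)),
    ],
    final_estimator=Lasso(alpha=0.01),
    cv=5,
)
# stack.fit(X_train, y_train); stack.predict(X_test)  # 15 feature variables per sample
```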

28 pages, 8930 KB  
Article
Data-Driven AI Modeling of Renewable Energy-Based Smart EV Charging Stations Using Historical Weather and Load Data
by Hamza Bin Sajjad, Farhan Hameed Malik, Muhammad Irfan Abid, Muhammad Omer Khan, Zunaib Maqsood Haider and Muhammad Junaid Arshad
World Electr. Veh. J. 2026, 17(1), 37; https://doi.org/10.3390/wevj17010037 - 13 Jan 2026
Viewed by 175
Abstract
The global shift to electric mobility and the integration of renewable energy require sophisticated control and predictive models for Smart Electric Vehicle Charging Stations (SEVCSs). This paper describes an experimental artificial intelligence (AI) model for optimizing EV charging in New York City based on ten years of historical load and weather information. Neural Fitting and Regression Learner models in MATLAB were used to explore the nonlinear relationships between environmental conditions and urban energy demand. Input data quality was maintained through extensive preprocessing, including outlier removal, smoothing, and time alignment. Performance measurements showed a Mean Absolute Percentage Error (MAPE) of 4.9% and a coefficient of determination (R2) of 0.93, indicating close agreement between the predicted and measured load profiles. These findings indicate that AI-based models can replicate load dynamics under renewable energy variability. The research combines long-term, multi-source data with short-term forecasting to address the gaps of past studies that were limited to small datasets or single-variable time series, providing a replicable basis for developing energy-efficient, intelligent EV charging networks in line with future grid decarbonization goals. The proposed neural network achieved R2 = 0.93 and RMSE = 36.4 MW. The Neural Fitting model reduced RMSE relative to linear regression and MAPE relative to the persistence method by about 15 and 22 percent, respectively. Full article
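A short sketch of the evaluation metrics quoted above (MAPE and R2) plus a persistence baseline follows, on placeholder arrays; this is not the paper's MATLAB pipeline, and the synthetic load series is an assumption.

```python
# Sketch: MAPE and R2 for a stand-in load forecast, plus a persistence
# baseline (previous value predicts the next) for comparison.
import numpy as np
from sklearn.metrics import r2_score

def mape(y_true, y_pred):
    y_true, y_pred = np.asarray(y_true), np.asarray(y_pred)
    return 100 * np.mean(np.abs((y_true - y_pred) / y_true))

rng = np.random.default_rng(3)
load = 5000 + 500 * np.sin(np.linspace(0, 40, 400)) + rng.normal(0, 50, 400)  # MW
pred = load + rng.normal(0, 60, 400)           # stand-in for the neural model
persistence = np.roll(load, 1)[1:]             # previous step predicts current

print("model MAPE %:", round(mape(load, pred), 2))
print("model R2:", round(r2_score(load, pred), 3))
print("persistence MAPE %:", round(mape(load[1:], persistence), 2))
```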

30 pages, 1128 KB  
Article
Analysis of Technological Readiness Indexes for Offshore Renewable Energies in Ibero-American Countries
by Claudio Moscoloni, Emiliano Gorr-Pozzi, Manuel Corrales-González, Adriana García-Mendoza, Héctor García-Nava, Isabel Villalba, Giuseppe Giorgi, Gustavo Guarniz-Avalos, Rodrigo Rojas and Marcos Lafoz
Energies 2026, 19(2), 370; https://doi.org/10.3390/en19020370 - 12 Jan 2026
Viewed by 99
Abstract
The energy transition in Ibero-American countries demands significant diversification, yet the vast potential of offshore renewable energies (ORE) remains largely untapped. Slow adoption is often attributed to the hostile marine environment, high investment costs, and a lack of institutional, regulatory, and industrial readiness. A critical barrier for policymakers is the absence of methodologically robust tools to assess national preparedness. Existing indices typically rely on simplistic weighting schemes or are susceptible to known flaws, such as the rank reversal phenomenon, which undermines their credibility for strategic decision-making. This study addresses this gap by developing a multi-criteria decision-making (MCDM) framework based on a problem-specific synthesis of established optimization principles to construct a comprehensive Offshore Readiness Index (ORI) for 13 Ibero-American countries. The framework moves beyond traditional methods by employing an advanced weight-elicitation model rooted in the Robust Ordinal Regression (ROR) paradigm to analyze 42 sub-criteria across five domains: Regulation, Planning, Resource, Industry, and Grid. Its methodological core is a non-linear objective function that synergistically combines a Shannon entropy term to promote a maximally unbiased weight distribution and to prevent criterion exclusion, with an epistemic regularization penalty that anchors the solution to expert-derived priorities within each domain. The model is guided by high-level hierarchical constraints that reflect overarching policy assumptions, such as the primacy of Regulation and Planning, thereby ensuring strategic alignment. The resulting ORI ranks Spain first, followed by Mexico and Costa Rica. Spain’s leadership is underpinned by its exceptional performance in key domains, supported by specific enablers, such as a dedicated renewable energy roadmap. The optimized block weights validate the model’s structure, with Regulation (0.272) and Electric Grid (0.272) receiving the highest importance. In contrast, lower-ranked countries exhibit systemic deficiencies across multiple domains. This research offers a dual contribution: methodological innovation in readiness assessment and an actionable tool for policy instruments. The primary policy conclusion is clear: robust regulatory frameworks and strategic planning are the pivotal enabling conditions for ORE development, while industrial capacity and infrastructure are consequent steps that must follow, not precede, a solid policy foundation. Full article
(This article belongs to the Special Issue Advanced Technologies for the Integration of Marine Energies)

10 pages, 1829 KB  
Proceeding Paper
Machine Learning Based Agricultural Price Forecasting for Major Food Crops in India Using Environmental and Economic Factors
by P. Ankit Krishna, Gurugubelli V. S. Narayana, Siva Krishna Kotha and Debabrata Pattnayak
Biol. Life Sci. Forum 2025, 54(1), 7; https://doi.org/10.3390/blsf2025054007 - 12 Jan 2026
Viewed by 118
Abstract
The contemporary agricultural market is profoundly volatile, with prices driven by complex supply chains, climatic irregularity, and unpredictable market demand. Reliable and timely crop price prediction enables farmers, policy-makers, and other stakeholders to make evidence-based decisions in support of sustainable agriculture and economic stability. Objective: The objective of this study is to develop and evaluate a comprehensive machine learning model for predicting agricultural prices that incorporates logistic, economic, and environmental considerations, with the broader aim of making agriculture more profitable through simple and accurate forecasting models. Methods: A diverse dataset was assembled covering major factors: temperature, rainfall, fertiliser use, pest and disease attack level, transportation cost, market demand-supply ratio, and regional competitiveness. The data underwent pre-processing and feature extraction for quality assurance. Several machine learning models (Linear Regression, Support Vector Machines, AdaBoost, Random Forest, and XGBoost) were trained and evaluated using performance metrics such as R2 score, Root Mean Squared Error (RMSE), and Mean Absolute Error (MAE). Results: Among the models analysed, XGBoost achieved the best predictive performance (R2 score of 0.94, RMSE of 12.8, and MAE of 8.6); its ability to capture complex non-linear relationships between market and environmental variables was essential for accurate predictions. Conclusions: The XGBoost-based prediction system is reliable, of low complexity, and applicable to large-scale real-time forecasting in agricultural monitoring. By incorporating multivariate environmental and economic factors, the model substantially reduces price-forecasting uncertainty and supports more profitable management practices for sustainable agriculture. Full article
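A minimal sketch of the model comparison reported above follows, on synthetic stand-in features; the metric names match the text, xgboost is assumed installed, and the price-generating function is invented.

```python
# Sketch: linear baseline vs XGBoost on toy multi-factor price data,
# scored with R2, RMSE, and MAE as in the abstract.
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.metrics import mean_absolute_error, mean_squared_error, r2_score
from sklearn.model_selection import train_test_split
from xgboost import XGBRegressor

rng = np.random.default_rng(4)
X = rng.random((800, 7))  # stand-ins for the 7 factors listed in the abstract
price = 100 + 40 * X[:, 0] * X[:, 5] + 20 * X[:, 1] + rng.normal(0, 5, 800)

X_tr, X_te, y_tr, y_te = train_test_split(X, price, random_state=0)
for model in (LinearRegression(), XGBRegressor(random_state=0)):
    pred = model.fit(X_tr, y_tr).predict(X_te)
    rmse = mean_squared_error(y_te, pred) ** 0.5
    print(type(model).__name__, "R2:", round(r2_score(y_te, pred), 3),
          "RMSE:", round(rmse, 2), "MAE:", round(mean_absolute_error(y_te, pred), 2))
```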
(This article belongs to the Proceedings of The 3rd International Online Conference on Agriculture)

13 pages, 979 KB  
Article
Modeling Absolute CO2–GDP Decoupling in the Context of the Global Energy Transition: Evidence from Econometrics and Explainable Machine Learning
by Ricardo Teruel-Gutiérrez, Pedro Fernandes da Anunciação and Ricardo Teruel-Sánchez
Sustainability 2026, 18(2), 758; https://doi.org/10.3390/su18020758 - 12 Jan 2026
Viewed by 118
Abstract
This study investigates the feasibility of absolute decoupling—where economies expand while CO2 (Carbon Dioxide) emissions decline in absolute terms—by identifying its key macro–energy drivers across 79 countries (2000–2025). We construct a comprehensive panel of energy-system indicators and estimate the probability of decoupling using two complementary classifiers: a penalized logistic regression and a gradient-boosted decision tree model (GBM). The non-parametric GBM significantly outperforms the linear baseline (ROC–AUC ~0.80 vs. 0.67), revealing complex non-linearities in the transition process. Explainable AI analysis (SHAP) demonstrates that decoupling is not driven by GDP growth rates alone, but primarily by sharp reductions in energy intensity and the active displacement of fossil fuels. Crucially, our results indicate that increasing renewable capacity is insufficient for absolute decoupling if the fossil fuel share does not simultaneously decline. These findings challenge passive “green growth” narratives, suggesting that current policies are inadequate; achieving climate targets requires targeted mechanisms for active fossil fuel phase-out rather than merely relying on renewable additions or economic modernization. Full article
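A sketch of the two-classifier comparison described above follows, on synthetic panel features; the data are invented stand-ins for the 79-country panel, and HistGradientBoostingClassifier is used as a generic GBM.

```python
# Sketch: penalized logistic regression vs a gradient-boosted model,
# compared by ROC-AUC on a synthetic classification problem.
from sklearn.datasets import make_classification
from sklearn.ensemble import HistGradientBoostingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=2000, n_features=10, n_informative=6,
                           random_state=0)  # stand-in for the country panel
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

logit = LogisticRegression(penalty="l2", C=0.5, max_iter=1000).fit(X_tr, y_tr)
gbm = HistGradientBoostingClassifier(random_state=0).fit(X_tr, y_tr)
for name, model in [("penalized logit", logit), ("GBM", gbm)]:
    auc = roc_auc_score(y_te, model.predict_proba(X_te)[:, 1])
    print(name, "ROC-AUC:", round(auc, 3))
```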

24 pages, 3327 KB  
Article
From Binary Scores to Risk Tiers: An Interpretable Hybrid Stacking Model for Multi-Class Loan Default Prediction
by Ghazi Abbas, Zhou Ying and Muzaffar Iqbal
Systems 2026, 14(1), 78; https://doi.org/10.3390/systems14010078 - 11 Jan 2026
Viewed by 80
Abstract
Accurate credit risk assessment for small firms and farmers is crucial for financial stability and inclusion; however, many models still rely on binary default labels, overlooking the continuum of borrower vulnerability. To address this, we propose Transformer–LightGBM–Stacked Logistic Regression (TL-StackLR), a hybrid stacking framework for multi-class loan default prediction. The framework combines three learners: a Feature Tokenizer Transformer (FT-Transformer) for feature interactions, LightGBM for non-linear pattern recognition, and a stacked LR meta-learner for calibrated probability fusion. We transform binary labels into three risk tiers, Low, Medium, and High, based on quantile-based stratification of default probabilities, aligning the model with real-world risk management. Evaluated on datasets from 3045 firms and 2044 farmers in China, TL-StackLR achieves state-of-the-art ROC-AUC scores of 0.986 (firms) and 0.972 (farmers), with superior calibration and discrimination across all risk classes, outperforming all standalone and partial-hybrid benchmarks. The framework provides SHapley Additive exPlanations (SHAP) interpretability, showing how key risk drivers, such as income, industry experience, and mortgage score for firms and loan purpose, Engel coefficient, and income for farmers, influence risk tiers. This transparency transforms TL-StackLR into a decision-support tool, enabling targeted interventions for inclusive lending, thus offering a practical foundation for equitable credit risk management. Full article
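A small sketch of the quantile-based tiering step the abstract describes follows: turning predicted default probabilities into Low/Medium/High tiers. The tercile cutoffs and the synthetic probabilities are assumptions for illustration.

```python
# Sketch: quantile-based stratification of default probabilities into
# three risk tiers, as described for TL-StackLR's label construction.
import numpy as np
import pandas as pd

rng = np.random.default_rng(5)
default_prob = rng.beta(2, 8, size=3045)  # stand-in model probabilities

tiers = pd.qcut(default_prob, q=[0, 1/3, 2/3, 1.0],
                labels=["Low", "Medium", "High"])
print(pd.Series(tiers).value_counts())
```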
(This article belongs to the Section Artificial Intelligence and Digital Systems Engineering)

24 pages, 15357 KB  
Article
Quantitative Assessment of Drought Impact on Grassland Productivity in Inner Mongolia Using SPI and Biome-BGC
by Yunjia Ma, Tianjie Lei, Jiabao Wang, Zhitao Lin, Hang Li and Baoyin Liu
Diversity 2026, 18(1), 36; https://doi.org/10.3390/d18010036 - 9 Jan 2026
Viewed by 130
Abstract
Drought poses a severe threat to grassland biodiversity and ecosystem function. However, quantitative frameworks that capture the interactive effects of drought intensity and duration on productivity remain scarce, limiting impact assessment accuracy. To bridge this gap, we developed and validated a novel hybrid modeling framework to quantify drought impacts on net primary productivity (NPP) across Inner Mongolia’s major grasslands (1961–2012). Drought was characterized using the Standardized Precipitation Index (SPI), and ecosystem productivity was simulated with the Biome-BGC model. Our core innovation is the hybrid model, which integrates linear and nonlinear components to explicitly capture the compounded, nonlinear influence of combined drought intensity and duration. This represents a significant advance over conventional single-perspective approaches. Key results demonstrate that the hybrid model substantially outperforms linear and nonlinear models alone, yielding highly significant regression equations for all grassland types (meadow, typical, desert; all p < 0.001). Independent validation confirmed its robustness and high predictive skill (NSE ≈ 0.868, RMSE = 20.09 gC/m2/yr). The analysis reveals two critical findings: (1) drought duration is a stronger driver of productivity decline than instantaneous intensity, and (2) desert grasslands are the most vulnerable, followed by typical and meadow grasslands. The hybrid model serves as a practical tool for estimating site-specific productivity loss, directly informing grassland management priorities, adaptive grazing strategies, and early-warning system design. Beyond immediate applications, this framework provides a transferable methodology for assessing drought-induced vulnerability in biodiverse ecosystems, supporting conservation and climate-adaptive management. Full article
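A short sketch of the validation metrics quoted above follows: Nash-Sutcliffe efficiency (NSE) alongside RMSE for simulated versus observed NPP, on placeholder arrays rather than the Biome-BGC outputs.

```python
# Sketch: NSE and RMSE for toy observed vs simulated annual NPP series.
import numpy as np

def nse(observed, simulated):
    observed, simulated = np.asarray(observed), np.asarray(simulated)
    return 1 - np.sum((observed - simulated) ** 2) / np.sum(
        (observed - observed.mean()) ** 2)

rng = np.random.default_rng(6)
npp_obs = rng.normal(250, 60, 52)   # gC/m2/yr, one value per year (toy)
npp_sim = npp_obs + rng.normal(0, 20, 52)
print("NSE:", round(nse(npp_obs, npp_sim), 3))
print("RMSE:", round(np.sqrt(np.mean((npp_obs - npp_sim) ** 2)), 2), "gC/m2/yr")
```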
(This article belongs to the Special Issue Ecology and Restoration of Grassland—2nd Edition)

20 pages, 36648 KB  
Article
Global Lunar FeO Mapping via Wavelet–Autoencoder Feature Learning from M3 Hyperspectral Data
by Julia Fernández–Díaz, Fernando Sánchez Lasheras, Javier Gracia Rodríguez, Santiago Iglesias Álvarez, Antonio Luis Marqués Sierra and Francisco Javier de Cos Juez
Mathematics 2026, 14(2), 254; https://doi.org/10.3390/math14020254 - 9 Jan 2026
Viewed by 126
Abstract
Accurate global mapping of lunar iron oxide (FeO) abundance is essential for understanding the Moon’s geological evolution and for supporting future in situ resource utilization (ISRU). While hyperspectral data from the Moon Mineralogy Mapper (M3) provide a unique combination of high spectral dimensionality, hectometre-scale spatial resolution, and near-global coverage, existing FeO retrieval approaches struggle to fully exploit the high dimensionality, nonlinear spectral variability, and planetary-scale volume of the Global Mode dataset. To address these limitations, we present an integrated machine learning pipeline for estimating lunar FeO abundance from M3 hyperspectral observations. Unlike traditional methods based on raw reflectance or empirical spectral indices, the proposed framework combines Discrete Wavelet Transform (DWT), deep autoencoder-based feature compression, and ensemble regression to achieve robust and scalable FeO prediction. M3 spectra (83 bands, 475–3000 nm) are transformed using a Daubechies-4 (db4) DWT to extract 42 representative coefficients per pixel, capturing the dominant spectral information while filtering high-frequency noise. These features are further compressed into a six-dimensional latent space via a deep autoencoder and used as input to a Random Forest regressor, which outperforms kernel-based and linear Support Vector Regression (SVR) as well as Lasso regression in predictive accuracy and stability. The proposed model achieves an average prediction error of 1.204 wt.% FeO and demonstrates consistent performance across diverse lunar geological units. Applied to 806 orbital tracks (approximately 3.5 × 10⁹ pixels), covering more than 95% of the lunar surface, the pipeline produces a global FeO abundance map at 150 m per pixel resolution. These results demonstrate the potential of integrating multiscale wavelet representations with nonlinear feature learning to enable large-scale, geochemically constrained planetary mineral mapping. Full article
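A sketch of the pipeline's first and last stages follows: a db4 wavelet decomposition of an 83-band spectrum feeding a Random Forest regressor. The autoencoder stage is omitted, PyWavelets (pywt) is assumed installed, and the decomposition level, coefficient selection, and toy data are illustrative rather than the paper's 42-coefficient scheme.

```python
# Sketch: db4 DWT features per spectrum, then Random Forest FeO regression.
import numpy as np
import pywt
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(7)
spectra = rng.random((200, 83))   # 83 M3 bands per pixel (toy data)
feo = rng.uniform(2, 20, 200)     # wt.% FeO labels (toy data)

def dwt_features(spectrum):
    coeffs = pywt.wavedec(spectrum, "db4", level=2)
    return np.concatenate(coeffs[:2])  # approximation + coarsest detail band

X = np.array([dwt_features(s) for s in spectra])
rf = RandomForestRegressor(n_estimators=200, random_state=0).fit(X, feo)
print("features per pixel:", X.shape[1], "| train R2:", round(rf.score(X, feo), 3))
```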

25 pages, 5517 KB  
Article
A Novel Online Real-Time Prediction Method for Copper Particle Content in the Oil of Mining Equipment Based on Neural Networks
by Long Yuan, Zibin Du, Xun Gao, Yukang Zhang, Liusong Yang, Yuehui Wang and Junzhe Lin
Machines 2026, 14(1), 76; https://doi.org/10.3390/machines14010076 - 8 Jan 2026
Viewed by 140
Abstract
Online real-time prediction of copper particle content in the lubricating oil of the main spindle-bearing system of mining equipment is difficult because the traditional direct detection method is costly and lacks real-time capability. To this end, this paper proposes an indirect prediction method based on data-driven neural networks. The method rests on a core assumption: during the stable wear stage of the equipment, there is a modelable statistical correlation between the copper particle content in the oil and the total amount of non-ferromagnetic particles, which is easy to measure online. On this basis, a neural network prediction model was constructed with the online metal abrasive particle sensor signal (non-ferromagnetic particle content) as the input and the copper particle content as the output. The experimental data derive from 100 real oil samples collected on-site from the lubrication system of the main shaft bearing of a mine mill. To improve performance with small samples, data augmentation techniques were adopted. Verification results show that the average prediction accuracy of the proposed neural network model reaches 95.66%, with a coefficient of determination (R2) of 0.91 and a mean absolute error (MAE) of 0.3398; this significantly outperforms the linear regression benchmark (average accuracy of approximately 80%, R2 = 0.71, MAE = 1.5628). The comparison not only preliminarily verifies the assumed correlation between non-ferromagnetic particles and copper particles in specific scenarios, but also reveals the nonlinear nature of the relationship between them. This research explores and preliminarily validates a low-cost technical path for online prediction of copper particle content during the stable wear stage of the main shaft bearing system, suggesting its potential for engineering application within specific, well-defined scenarios. Full article
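A minimal sketch of the comparison reported above follows: a small neural network versus a linear baseline mapping non-ferromagnetic particle readings to copper content. The synthetic nonlinear relation and network size are assumptions for the example.

```python
# Sketch: MLP vs linear regression on a toy nonlinear sensor-to-copper mapping.
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.metrics import mean_absolute_error, r2_score
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(8)
nonferrous = rng.uniform(0, 10, (100, 1))  # online sensor reading (toy)
copper = 0.5 * nonferrous[:, 0] ** 1.5 + rng.normal(0, 0.3, 100)

for model in (LinearRegression(),
              MLPRegressor(hidden_layer_sizes=(32,), max_iter=5000, random_state=0)):
    pred = model.fit(nonferrous, copper).predict(nonferrous)
    print(type(model).__name__, "R2:", round(r2_score(copper, pred), 2),
          "MAE:", round(mean_absolute_error(copper, pred), 3))
```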

38 pages, 2642 KB  
Article
Capturing Short- and Long-Term Temporal Dependencies Using Bahdanau-Enhanced Fused Attention Model for Financial Data—An Explainable AI Approach
by Rasmi Ranjan Khansama, Rojalina Priyadarshini, Surendra Kumar Nanda and Rabindra Kumar Barik
FinTech 2026, 5(1), 4; https://doi.org/10.3390/fintech5010004 - 7 Jan 2026
Viewed by 128
Abstract
Prediction of stock closing price plays a critical role in financial planning, risk management, and informed investment decision-making. In this study, we propose a novel model that synergistically amalgamates Bidirectional GRU (BiGRU) with three complementary attention techniques—Top-k Sparse, Global, and Bahdanau Attention—to tackle the complex, intricate, and non-linear temporal dependencies in financial time series. The proposed Fused Attention Model is validated on two highly volatile, non-linear, and complex-patterned stock indices, NIFTY 50 and S&P 500, with 80% of the historical price data used for model learning and the remaining 20% for testing. A comprehensive analysis of the results, benchmarked against various baseline and hybrid deep learning architectures across multiple regression performance metrics such as Mean Absolute Error (MAE), Root Mean Squared Error (RMSE), Mean Absolute Percentage Error (MAPE), and R2 Score, demonstrates the superiority of our proposed Fused Attention Model. Most significantly, the proposed model yields the highest prediction accuracy and generalization capability, with R2 scores of 0.9955 on NIFTY 50 and 0.9961 on S&P 500. Additionally, to mitigate the issues of interpretability and transparency of deep learning models for financial forecasting, we utilized three different Explainable Artificial Intelligence (XAI) techniques, namely Integrated Gradients, SHapley Additive exPlanations (SHAP), and Attention Weight Analysis. The results of these three XAI techniques validated the utilization of the three attention techniques alongside the BiGRU model. The explainability of the proposed model, named BiGRU-based Fused Attention (BiG-FA), in addition to its superior performance, thus offers a robust and interpretable deep learning model for time-series prediction, making it applicable beyond the financial domain. Full article
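As a conceptual analogue of one component of the fused model, the PyTorch sketch below wires a BiGRU encoder to Bahdanau (additive) attention over its hidden states and a linear prediction head. The dimensions, single attention mechanism, and single-feature input are assumptions; the paper fuses three attention techniques.

```python
# Sketch: BiGRU encoder + Bahdanau-style additive attention + linear head.
import torch
import torch.nn as nn

class BiGRUAttention(nn.Module):
    def __init__(self, n_features=1, hidden=32):
        super().__init__()
        self.gru = nn.GRU(n_features, hidden, batch_first=True, bidirectional=True)
        self.W = nn.Linear(2 * hidden, hidden)   # Bahdanau score network
        self.v = nn.Linear(hidden, 1, bias=False)
        self.head = nn.Linear(2 * hidden, 1)     # next closing price

    def forward(self, x):                        # x: (batch, time, features)
        h, _ = self.gru(x)                       # (batch, time, 2*hidden)
        scores = self.v(torch.tanh(self.W(h)))   # (batch, time, 1)
        alpha = torch.softmax(scores, dim=1)     # attention weights over time
        context = (alpha * h).sum(dim=1)         # weighted sum of hidden states
        return self.head(context)

model = BiGRUAttention()
prices = torch.randn(8, 30, 1)                   # 8 windows of 30 time steps (toy)
print(model(prices).shape)                       # torch.Size([8, 1])
```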

19 pages, 857 KB  
Article
Data-Driven Insights: Leveraging Sentiment Analysis and Latent Profile Analysis for Financial Market Forecasting
by Eyal Eckhaus
Big Data Cogn. Comput. 2026, 10(1), 24; https://doi.org/10.3390/bdcc10010024 - 7 Jan 2026
Viewed by 296
Abstract
Background: This study explores an innovative integration of big data analytics techniques aimed at enhancing predictive modeling in financial markets. It investigates how combining sentiment analysis with latent profile analysis (LPA) can accurately forecast stock prices. This research aligns with big data methodologies by leveraging automated content analysis and segmentation algorithms to address real-world challenges in data-driven decision-making. This study leverages advanced computational methods to process and segment large-scale unstructured data, demonstrating scalability in data-rich environments. Methods: We compiled a corpus of 3843 financial news articles on Teva Pharmaceuticals from Bloomberg and Reuters. Sentiment scores were generated using the VADER tool, and LPA was applied to identify eight distinct sentiment profiles. These profiles were then used in segmented regression models and Structural Equation Modeling (SEM) to assess their predictive value for stock price fluctuations. Results: Six of the eight latent profiles demonstrated significantly higher predictive accuracy compared to traditional sentiment-based models. The combined profile-based regression model explained 47% of the stock price variance (R2 = 0.47), compared to 10% (R2 = 0.10) in the baseline model using sentiment analysis alone. Conclusion: This study pioneers the use of latent profile analysis (LPA) in sentiment analysis for stock price prediction, offering a novel integration of clustering and financial forecasting. By uncovering complex, non-linear links between market sentiment and stock movements, it addresses a key gap in the literature and establishes a powerful foundation for advancing sentiment-based financial models. Full article
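A sketch of the two-stage idea described above follows: VADER compound scores from headlines, then a Gaussian mixture as a stand-in for latent profile analysis. The vaderSentiment package is assumed installed, the headlines are invented examples, and a real corpus would fit more components (the study identified eight profiles).

```python
# Sketch: VADER sentiment scoring followed by mixture-based profile assignment.
import numpy as np
from sklearn.mixture import GaussianMixture
from vaderSentiment.vaderSentiment import SentimentIntensityAnalyzer

headlines = [
    "Teva beats earnings expectations and raises guidance",
    "Teva faces new litigation over generic pricing",
    "Analysts neutral on Teva restructuring plan",
]
analyzer = SentimentIntensityAnalyzer()
scores = np.array([[analyzer.polarity_scores(h)["compound"]] for h in headlines])

gmm = GaussianMixture(n_components=2, random_state=0).fit(scores)
print("profile assignments:", gmm.predict(scores))
```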
