Search Results (2,832)

Search Parameters:
Keywords = machining alternatives

22 pages, 2120 KiB  
Article
Machine Learning Algorithms and Explainable Artificial Intelligence for Property Valuation
by Gabriella Maselli and Antonio Nesticò
Real Estate 2025, 2(3), 12; https://doi.org/10.3390/realestate2030012 - 1 Aug 2025
Abstract
The accurate estimation of urban property values is a key challenge for appraisers, market participants, financial institutions, and urban planners. In recent years, machine learning (ML) techniques have emerged as promising tools for price forecasting due to their ability to model complex relationships among variables. However, their application raises two main critical issues: (i) the risk of overfitting, especially with small or noisy datasets; (ii) the interpretive issues associated with the “black box” nature of many models. Within this framework, this paper proposes a methodological approach that addresses both issues, comparing the predictive performance of three ML algorithms—k-Nearest Neighbors (kNN), Random Forest (RF), and the Artificial Neural Network (ANN)—applied to the housing market in the city of Salerno, Italy. For each model, overfitting is preliminarily assessed to ensure predictive robustness. Subsequently, the results are interpreted using explainability techniques such as SHapley Additive exPlanations (SHAP) and Permutation Feature Importance (PFI). This analysis reveals that the Random Forest offers the best balance between predictive accuracy and transparency, with features such as area and proximity to the train station identified as the main drivers of property prices. kNN and the ANN are viable alternatives that are particularly robust in terms of generalization. The results demonstrate how the defined methodological framework successfully balances predictive effectiveness and interpretability, supporting the informed and transparent use of ML in real estate valuation.
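As a concrete illustration of the overfitting check plus PFI step described in this abstract, here is a minimal scikit-learn sketch on synthetic housing data. The feature names (area_m2, dist_station_km, floor), the price process, and all hyperparameters are illustrative assumptions, not the paper's dataset or settings.

```python
import numpy as np
import pandas as pd
from sklearn.ensemble import RandomForestRegressor
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 500
X = pd.DataFrame({
    "area_m2": rng.uniform(40, 200, n),
    "dist_station_km": rng.uniform(0.1, 5.0, n),
    "floor": rng.integers(0, 8, n),
})
# Hypothetical price process: area dominates, station proximity helps.
y = 2500 * X["area_m2"] - 15000 * X["dist_station_km"] + rng.normal(0, 20000, n)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
rf = RandomForestRegressor(n_estimators=300, random_state=0).fit(X_tr, y_tr)

# Overfitting check: compare train vs. test R^2 before interpreting anything.
print(f"train R2 = {rf.score(X_tr, y_tr):.3f}, test R2 = {rf.score(X_te, y_te):.3f}")

# PFI: mean drop in test R^2 when each feature is shuffled.
pfi = permutation_importance(rf, X_te, y_te, n_repeats=20, random_state=0)
for name, imp in sorted(zip(X.columns, pfi.importances_mean), key=lambda t: -t[1]):
    print(f"{name:18s} {imp:.3f}")
```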

18 pages, 3318 KiB  
Article
Indirect AI-Based Estimation of Cardiorespiratory Fitness from Daily Activities Using Wearables
by Laura Saldaña-Aristizábal, Jhonathan L. Rivas-Caicedo, Kevin Niño-Tejada and Juan F. Patarroyo-Montenegro
Electronics 2025, 14(15), 3081; https://doi.org/10.3390/electronics14153081 - 1 Aug 2025
Abstract
Cardiorespiratory fitness is a predictor of long-term health, traditionally assessed through structured exercise protocols that require maximal effort and controlled laboratory conditions. These protocols, while clinically validated, are often inaccessible, physically demanding, and unsuitable for unsupervised monitoring. This study proposes a non-invasive, unsupervised alternative—predicting the heart rate a person would reach after completing the step test, using wearable data collected during natural daily activities. Ground-truth post-exercise heart rate was obtained through the Queens College Step Test, a submaximal protocol widely used in fitness settings. Separately, wearable sensors recorded heart rate (HR), blood oxygen saturation, and motion data during a protocol of lifestyle tasks spanning a range of intensities. Two machine learning models were developed—a Human Activity Recognition (HAR) model that classified daily activities from inertial data with 96.93% accuracy, and a regression model that estimated post-step-test HR using motion features, physiological trends, and demographic context. The regression model achieved an average root mean squared error (RMSE) of 5.13 beats per minute (bpm) and a mean absolute error (MAE) of 4.37 bpm. These findings demonstrate the potential of test-free methods to estimate standardized test outcomes from daily activity data, offering an accessible pathway to infer cardiorespiratory fitness.
(This article belongs to the Special Issue Wearable Sensors for Human Position, Attitude and Motion Tracking)
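A hedged sketch of the second stage described in the abstract above: regressing post-step-test HR on motion, physiological, and demographic features and scoring with RMSE and MAE. The feature set, the HR-generating process, and the model choice here are synthetic stand-ins, not the study's protocol.

```python
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.metrics import mean_absolute_error, mean_squared_error
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(1)
n = 400
resting_hr = rng.normal(68, 8, n)        # bpm, daily-life baseline
hr_slope = rng.normal(0.5, 0.15, n)      # HR rise per unit activity intensity
age = rng.uniform(18, 60, n)
spo2 = rng.normal(97.5, 1.0, n)          # blood oxygen saturation, %
X = np.column_stack([resting_hr, hr_slope, age, spo2])
# Hypothetical ground truth: fitter subjects (lower slope and resting HR)
# finish the step test at a lower heart rate.
y = 90 + 0.6 * resting_hr + 40 * hr_slope + 0.2 * age + rng.normal(0, 4, n)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=1)
model = GradientBoostingRegressor(random_state=1).fit(X_tr, y_tr)
pred = model.predict(X_te)
rmse = mean_squared_error(y_te, pred) ** 0.5
print(f"RMSE = {rmse:.2f} bpm, MAE = {mean_absolute_error(y_te, pred):.2f} bpm")
```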

15 pages, 1635 KiB  
Article
Modeling the Abrasive Index from Mineralogical and Calorific Properties Using Tree-Based Machine Learning: A Case Study on the KwaZulu-Natal Coalfield
by Mohammad Afrazi, Chia Yu Huat, Moshood Onifade, Manoj Khandelwal, Deji Olatunji Shonuga, Hadi Fattahi and Danial Jahed Armaghani
Mining 2025, 5(3), 48; https://doi.org/10.3390/mining5030048 - 1 Aug 2025
Abstract
Accurate prediction of the coal abrasive index (AI) is critical for optimizing coal processing efficiency and minimizing equipment wear in industrial applications. This study explores three tree-based machine learning models, Random Forest (RF), Gradient Boosting Trees (GBT), and Extreme Gradient Boosting (XGBoost), to predict AI using selected coal properties. A database of 112 coal samples from the KwaZulu-Natal Coalfield in South Africa was used. Initial predictions using all eight input properties revealed suboptimal testing performance (R2: 0.63–0.72), attributed to outliers and noisy data. Feature importance analysis identified calorific value, quartz, ash, and pyrite as dominant predictors, aligning with their physicochemical roles in abrasiveness. After data cleaning and feature selection, XGBoost achieved superior accuracy (R2 = 0.92), outperforming RF (R2 = 0.85) and GBT (R2 = 0.81). The results highlight XGBoost’s robustness in modeling non-linear relationships between coal properties and AI. This approach offers a cost-effective alternative to traditional laboratory methods, enabling industries to optimize coal selection, reduce maintenance costs, and enhance operational sustainability through data-driven decision-making. Additionally, quartz and ash content were identified as the most influential parameters on AI using the Cosine Amplitude technique, while calorific value had the least impact among the selected features.
(This article belongs to the Special Issue Mine Automation and New Technologies)
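For orientation, a minimal XGBoost regression sketch in the spirit of the abstract above. The four predictors mirror the dominant features it names (calorific value, quartz, ash, pyrite), but the data, target process, and hyperparameters are synthetic assumptions.

```python
import numpy as np
import xgboost as xgb
from sklearn.metrics import r2_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(2)
n = 112  # same order as the study's sample count
calorific = rng.uniform(15, 30, n)   # MJ/kg
quartz = rng.uniform(0, 12, n)       # %
ash = rng.uniform(10, 35, n)         # %
pyrite = rng.uniform(0, 4, n)        # %
X = np.column_stack([calorific, quartz, ash, pyrite])
# Toy abrasiveness: hard minerals raise AI, higher-rank coal lowers it.
y = 5 * quartz + 2 * ash + 8 * pyrite - 1.5 * calorific + rng.normal(0, 5, n)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=2)
model = xgb.XGBRegressor(n_estimators=400, max_depth=4, learning_rate=0.05)
model.fit(X_tr, y_tr)
print(f"test R2 = {r2_score(y_te, model.predict(X_te)):.2f}")
```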

24 pages, 5018 KiB  
Article
Machine Learning for the Photonic Evaluation of Cranial and Extracranial Sites in Healthy Individuals and in Patients with Multiple Sclerosis
by Antonio Currà, Riccardo Gasbarrone, Davide Gattabria, Nicola Luigi Bragazzi, Giuseppe Bonifazi, Silvia Serranti, Paolo Missori, Francesco Fattapposta, Carlotta Manfredi, Andrea Maffucci, Luca Puce, Lucio Marinelli and Carlo Trompetto
Appl. Sci. 2025, 15(15), 8534; https://doi.org/10.3390/app15158534 - 31 Jul 2025
Abstract
This study aims to characterize short-wave infrared (SWIR) reflectance spectra at cranial (the scalp overlying the frontal cortex and the temporal bone window) and extracranial (biceps and triceps) sites in patients with multiple sclerosis (MS) and age-/sex-matched controls. We sought to identify the diagnostic accuracy of wavelength-specific patterns in distinguishing MS from normal controls, and spectral markers associated with disability (e.g., Expanded Disability Status Scale scores). To achieve these objectives, we employed a multi-site SWIR spectroscopy acquisition protocol that included measurements from traditional cranial locations as well as extracranial reference sites. Advanced spectral analysis techniques, including wavelength-dependent absorption modeling and machine learning-based classification, were applied to differentiate MS-related hemodynamic changes from normal physiological variability. Classification models achieved perfect performance (accuracy = 1.00), and cortical-site regression models showed strong predictive power (cross-validated R2 = 0.980 for EDSS and 0.939 for FSS). Variable Importance in Projection (VIP) analysis highlighted key wavelengths as potential spectral biomarkers. This approach allowed us to explore novel biomarkers of neural and systemic impairment in MS, paving the way for potential clinical applications of SWIR spectroscopy in disease monitoring and management. In conclusion, spectral analysis revealed distinct wavelength-specific patterns at cranial and extracranial sites, reflecting biochemical and structural differences between patients with MS and normal subjects. These differences are driven by underlying physiological changes, including myelin integrity, neuronal density, oxidative stress, and water content fluctuations in the brain or muscles. This study shows that portable spectral devices may contribute to bedside identification and monitoring of neural diseases, offering a cost-effective alternative to repeated imaging.
(This article belongs to the Special Issue Artificial Intelligence in Medical Diagnostics: Second Edition)
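The abstract above reports VIP scores but does not spell out the underlying model; VIP is conventionally computed from a PLS regression, so the sketch below assumes one. It implements the standard VIP formula on a scikit-learn PLS fit; the spectra and the target are random placeholders.

```python
import numpy as np
from sklearn.cross_decomposition import PLSRegression

def vip_scores(pls: PLSRegression) -> np.ndarray:
    """VIP_j = sqrt(p * sum_a SSY_a * (w_ja / ||w_a||)^2 / sum_a SSY_a)."""
    t = pls.x_scores_      # (n_samples, n_components)
    w = pls.x_weights_     # (n_features, n_components)
    q = pls.y_loadings_    # (n_targets, n_components)
    p = w.shape[0]
    # Variance in y explained by each latent component.
    ssy = np.array([(q[:, a] @ q[:, a]) * (t[:, a] @ t[:, a])
                    for a in range(w.shape[1])])
    w_norm = w / np.linalg.norm(w, axis=0, keepdims=True)
    return np.sqrt(p * (w_norm ** 2 @ ssy) / ssy.sum())

rng = np.random.default_rng(3)
X = rng.normal(size=(60, 200))        # 60 spectra x 200 SWIR wavelength bins
y = 2 * X[:, 50] + X[:, 120] + rng.normal(0, 0.1, 60)  # disability-like score
pls = PLSRegression(n_components=5).fit(X, y)
top = np.argsort(vip_scores(pls))[::-1][:5]
print("most informative wavelength indices:", top)
```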

14 pages, 1885 KiB  
Article
Advancements in Hole Quality for AISI 1045 Steel Using Helical Milling
by Pedro Mendes Silva, António José da Fonseca Festas, Robson Bruno Dutra Pereira and João Paulo Davim
J. Manuf. Mater. Process. 2025, 9(8), 256; https://doi.org/10.3390/jmmp9080256 - 31 Jul 2025
Abstract
Helical milling presents a promising alternative to conventional drilling for hole production, offering superior surface quality and improved production efficiency. While this technique has been extensively applied in the aerospace industry, its potential for machining common engineering materials, such as AISI 1045 steel, remains underexplored in the literature. This study addresses this gap by systematically evaluating the influence of key process parameters—cutting speed (Vc), axial depth of cut (ap), and tool diameter (Dt)—on hole quality attributes, including surface roughness, burr formation, and nominal diameter accuracy. A full factorial experimental design (2³) was employed, coupled with analysis of variance (ANOVA), to quantify the effects and interactions of these parameters. The results reveal that a higher Vc can reduce surface roughness (Ra) by 30% to 40%, while an increased ap leads to a 50% increase in Ra. Additionally, Dt emerged as the most critical factor for nominal diameter accuracy, with a larger Dt reducing geometrical errors by 1%. Burr formation was predominantly observed at the lower end of the hole, highlighting challenges specific to this technique. These findings provide valuable insights into optimizing helical milling for low-carbon steels, offering a foundation for broader industrial adoption and further research.
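A minimal sketch of the 2³ full factorial ANOVA described above, using statsmodels. The factor levels, the toy Ra response, and the replicate count are placeholders, not the paper's experimental values.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf
from itertools import product
from statsmodels.stats.anova import anova_lm

rng = np.random.default_rng(4)
rows = []
for vc, ap, dt in product([40, 80], [0.1, 0.2], [6, 10]):   # low/high levels
    for _ in range(2):                                       # two replicates per cell
        # Toy response: higher Vc lowers Ra, higher ap raises it.
        ra = 1.2 - 0.004 * vc + 3.0 * ap - 0.01 * dt + rng.normal(0, 0.03)
        rows.append({"Vc": vc, "ap": ap, "Dt": dt, "Ra": ra})
df = pd.DataFrame(rows)

# Full factorial model: main effects plus all interactions of the three factors.
model = smf.ols("Ra ~ C(Vc) * C(ap) * C(Dt)", data=df).fit()
print(anova_lm(model))
```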

17 pages, 2622 KiB  
Article
A Method for Evaluating the Performance of Main Bearings of TBM Based on Entropy Weight–Grey Correlation Degree
by Zhihong Sun, Yuanke Wu, Hao Xiao, Panpan Hu, Zhenyong Weng, Shunhai Xu and Wei Sun
Sensors 2025, 25(15), 4715; https://doi.org/10.3390/s25154715 - 31 Jul 2025
Abstract
The main bearing of a tunnel boring machine (TBM) is a critical component of the main driving system that enables continuous excavation, and its performance is crucial for ensuring the safe operation of the TBM. Currently, there are few testing technologies for TBM main bearings, and a comprehensive testing and evaluation system has yet to be established. This study presents an experimental investigation using a self-developed, full-scale TBM main bearing test bench. Based on a representative load spectrum, operational condition tests and life cycle tests are conducted alternately, during which the signals of the main bearing are collected. The observed vibration signals are weak, with significant vibration attenuation occurring in the large structural components; they are several orders of magnitude smaller than the 10 g vibration amplitudes that test bearings reach in scale tests. To make effective use of the selected evaluation indicators, the entropy weight method is employed to assign weights to the indicators, and a comprehensive analysis is conducted using grey relational analysis. This yields a comprehensive evaluation method based on entropy weighting and grey relational analysis. Main bearing performance is evaluated both across different working conditions and under the same working conditions in different time periods. The results show that the greater the bearing load, the lower the comprehensive evaluation coefficient of bearing performance. A multistage evaluation method is adopted to evaluate the performance and condition of the main bearing across multiple working scenarios. As the test duration increases, the bearing performance exhibits gradual degradation, in line with the expected outcomes. The findings demonstrate that the proposed performance evaluation method can effectively and accurately evaluate the performance of TBM main bearings, providing theoretical and technical support for the safe operation of TBMs.
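A hedged numpy sketch of the entropy weight plus grey relational scheme named above. The indicator matrix is a placeholder (rows as monitoring samples, columns as hypothetical condition indicators such as vibration RMS or temperature), and the reference series and resolution coefficient follow common textbook choices, not the paper's setup.

```python
import numpy as np

def entropy_weights(X: np.ndarray) -> np.ndarray:
    # Min-max normalize each indicator, then compute Shannon entropy per column;
    # indicators with lower entropy (more variation) receive larger weights.
    Z = (X - X.min(axis=0)) / (X.max(axis=0) - X.min(axis=0)) + 1e-12
    P = Z / Z.sum(axis=0)
    e = -(P * np.log(P)).sum(axis=0) / np.log(X.shape[0])
    d = 1.0 - e                      # degree of diversification
    return d / d.sum()

def grey_relational_grade(X, ref, weights, rho=0.5):
    # Grey relational coefficient of each sample against the reference series,
    # then the weighted grade; rho is the usual resolution coefficient.
    delta = np.abs(X - ref)
    gamma = (delta.min() + rho * delta.max()) / (delta + rho * delta.max())
    return gamma @ weights

rng = np.random.default_rng(5)
X = rng.random((20, 4))              # 20 samples x 4 normalized indicators
ref = X.max(axis=0)                  # ideal (best-performing) reference series
w = entropy_weights(X)
grade = grey_relational_grade(X, ref, w)
print("weights:", np.round(w, 3))
print("grade of first 5 samples:", np.round(grade[:5], 3))
```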

9 pages, 1552 KiB  
Proceeding Paper
Kolmogorov–Arnold Networks for System Identification of First- and Second-Order Dynamic Systems
by Lily Chiparova and Vasil Popov
Eng. Proc. 2025, 100(1), 100059; https://doi.org/10.3390/engproc2025100059 - 30 Jul 2025
Abstract
System identification—originating in the 1950s from statistical theory—has since developed a wealth of algorithms, insights, and practical expertise. We introduce Kolmogorov–Arnold neural networks (KANs) as an interpretable alternative for model discovery. Leveraging KANs’ inherent ability to approximate data interpretably through learnable activation functions and the decomposition of multivariate mappings into univariate transforms, we test their ability to recover the step responses of first- and second-order systems both numerically and symbolically. We employ synthetic datasets, both noise-free and with Gaussian noise, and find that KANs can achieve very low RMSE and parameter error with simple architectures. Our results demonstrate that KANs combine ease of implementation with symbolic transparency, positioning them as a compelling bridge between classical identification and modern machine learning.
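The task in the abstract above has a compact classical counterpart that makes a useful reference point: recovering the gain K and time constant tau of a first-order system from its noisy step response by nonlinear least squares. The sketch below shows that baseline (a KAN fit itself would require a dedicated library and is not reproduced here); the true parameters and noise level are made up.

```python
import numpy as np
from scipy.optimize import curve_fit

def step_response(t, K, tau):
    """First-order step response y(t) = K * (1 - exp(-t / tau))."""
    return K * (1.0 - np.exp(-t / tau))

rng = np.random.default_rng(6)
t = np.linspace(0, 10, 200)
y_true = step_response(t, K=2.0, tau=1.5)
y_noisy = y_true + rng.normal(0, 0.05, t.size)   # Gaussian measurement noise

(K_hat, tau_hat), _ = curve_fit(step_response, t, y_noisy, p0=[1.0, 1.0])
rmse = np.sqrt(np.mean((step_response(t, K_hat, tau_hat) - y_true) ** 2))
print(f"K = {K_hat:.3f}, tau = {tau_hat:.3f}, RMSE = {rmse:.4f}")
```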

22 pages, 3083 KiB  
Article
Evaluating the Effect of Thermal Treatment on Phenolic Compounds in Functional Flours Using Vis–NIR–SWIR Spectroscopy: A Machine Learning Approach
by Achilleas Panagiotis Zalidis, Nikolaos Tsakiridis, George Zalidis, Ioannis Mourtzinos and Konstantinos Gkatzionis
Foods 2025, 14(15), 2663; https://doi.org/10.3390/foods14152663 - 29 Jul 2025
Abstract
Functional flours, high in bioactive compounds, have garnered increasing attention, driven by consumer demand for alternative ingredients and the nutritional limitations of wheat flour. This study explores the thermal stability of phenolic compounds in various functional flours using visible, near-, and shortwave-infrared (Vis–NIR–SWIR) spectroscopy (350–2500 nm) integrated with machine learning (ML) algorithms. Random Forest models were employed to classify samples by flour type, baking temperature, and phenolic concentration. The full spectral range yielded high classification accuracy (0.98, 0.98, and 0.99, respectively), and an explainability framework revealed the wavelengths most relevant to each class. To address concerns about color acting as a confounding factor, a targeted spectral refinement was implemented by sequentially excluding the visible region. Models trained on the 1000–2500 nm and 1400–2500 nm ranges showed only minor reductions in accuracy, suggesting that classification is not driven solely by visible characteristics. Results indicated that legume and wheat flours retain higher total phenolic content (TPC) under mild thermal conditions, whereas grape seed flour (GSF) and olive stone flour (OSF) exhibited notable thermal stability of TPC even at elevated temperatures. These initial findings suggest that the proposed non-destructive spectroscopic approach enables rapid classification and quality assessment of functional flours, supporting future applications in precision food formulation and quality control.
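A sketch of the spectral-refinement check described above: train a Random Forest on a simulated full-range spectrum, then retrain with the visible region excluded and compare accuracy. The wavelength grid, class-specific spectral shapes, and noise are synthetic assumptions.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(7)
wavelengths = np.arange(350, 2501, 10)             # 350-2500 nm grid
n_per_class = 40
spectra, labels = [], []
for cls in range(3):                               # three hypothetical flour types
    # Class signature with a SWIR-only component so the non-visible
    # region remains informative after the visible band is dropped.
    base = np.sin(wavelengths / (300 + 40 * cls)) + 0.02 * cls * (wavelengths > 1400)
    spectra.append(base + rng.normal(0, 0.05, (n_per_class, wavelengths.size)))
    labels += [cls] * n_per_class
X, y = np.vstack(spectra), np.array(labels)

for lo in (350, 1000, 1400):                       # full, NIR-SWIR, SWIR-only
    mask = wavelengths >= lo
    acc = cross_val_score(RandomForestClassifier(random_state=0), X[:, mask], y, cv=5)
    print(f">= {lo} nm: accuracy = {acc.mean():.3f}")
```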

20 pages, 2776 KiB  
Article
Automatic 3D Reconstruction: Mesh Extraction Based on Gaussian Splatting from Romanesque–Mudéjar Churches
by Nelson Montas-Laracuente, Emilio Delgado Martos, Carlos Pesqueira-Calvo, Giovanni Intra Sidola, Ana Maitín, Alberto Nogales and Álvaro José García-Tejedor
Appl. Sci. 2025, 15(15), 8379; https://doi.org/10.3390/app15158379 - 28 Jul 2025
Abstract
This research introduces an automated 3D virtual reconstruction system tailored for architectural heritage (AH) applications, contributing to the ongoing paradigm shift from traditional CAD-based workflows to artificial intelligence-driven methodologies. It reviews recent advancements in machine learning and deep learning—particularly neural radiance fields (NeRFs) and their successor, Gaussian splatting (GS)—as state-of-the-art techniques in the domain. The study advocates replacing point cloud data in heritage building information modeling (HBIM) workflows with image-based inputs, proposing a novel “photo-to-BIM” pipeline. A proof-of-concept system is presented, capable of processing photographs or video footage of ancient ruins—specifically, Romanesque–Mudéjar churches—to automatically generate 3D mesh reconstructions. The system’s performance is assessed using both objective metrics and subjective evaluations of mesh quality. The results confirm the feasibility and promise of image-based reconstruction as a viable alternative to conventional methods. The system applies GS with Mip-Splatting, which proved superior in noise reduction, and extracts meshes via surface-aligned Gaussian splatting for efficient 3D reconstruction. This photo-to-mesh pipeline marks a viable step towards HBIM.

25 pages, 946 KiB  
Article
Short-Term Forecasting of the JSE All-Share Index Using Gradient Boosting Machines
by Mueletshedzi Mukhaninga, Thakhani Ravele and Caston Sigauke
Economies 2025, 13(8), 219; https://doi.org/10.3390/economies13080219 - 28 Jul 2025
Abstract
This study applies Gradient Boosting Machines (GBMs) and principal component regression (PCR) to forecast the closing price of the Johannesburg Stock Exchange (JSE) All-Share Index (ALSI), using daily data from 2009 to 2024, sourced from the Wall Street Journal. The models are evaluated under three training–testing split ratios to assess short-term forecasting performance. Forecast accuracy is assessed using standard error metrics: mean absolute error (MAE), root mean square error (RMSE), mean absolute percentage error (MAPE), and mean absolute scaled error (MASE). Across all test splits, the GBM consistently achieves lower forecast errors than PCR, demonstrating superior predictive accuracy. To validate the significance of this performance difference, the Diebold–Mariano (DM) test is applied, confirming that the forecast errors from the GBM are statistically significantly lower than those of PCR at conventional significance levels. These findings highlight the GBM’s strength in capturing nonlinear relationships and complex interactions in financial time series, particularly when using features such as the USD/ZAR exchange rate, oil, platinum, and gold prices, the S&P 500 index, and calendar-based variables like month and day. Future research should consider integrating additional macroeconomic indicators and exploring alternative or hybrid forecasting models to improve robustness and generalisability across different market conditions.
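A minimal sketch of the Diebold–Mariano comparison used above: testing whether one model's forecast errors are significantly smaller than another's. This is the simple one-step-ahead variant under squared-error loss, ignoring autocorrelation corrections; the error series are synthetic stand-ins for GBM and PCR residuals.

```python
import numpy as np
from scipy import stats

def diebold_mariano(e1, e2):
    """DM statistic for one-step-ahead forecasts under squared-error loss."""
    d = e1 ** 2 - e2 ** 2                   # loss differential series
    dm = d.mean() / np.sqrt(d.var(ddof=1) / d.size)
    p = 2 * (1 - stats.norm.cdf(abs(dm)))   # two-sided asymptotic p-value
    return dm, p

rng = np.random.default_rng(8)
e_gbm = rng.normal(0, 1.0, 250)             # stand-in GBM forecast errors
e_pcr = rng.normal(0, 1.3, 250)             # stand-in PCR errors (noisier)
dm, p = diebold_mariano(e_gbm, e_pcr)
print(f"DM = {dm:.2f}, p = {p:.4f}")        # negative DM favors the first model
```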

18 pages, 3347 KiB  
Article
Assessment of Machine Learning-Driven Retrievals of Arctic Sea Ice Thickness from L-Band Radiometry Remote Sensing
by Ferran Hernández-Macià, Gemma Sanjuan Gomez, Carolina Gabarró and Maria José Escorihuela
Computers 2025, 14(8), 305; https://doi.org/10.3390/computers14080305 - 28 Jul 2025
Abstract
This study evaluates machine learning-based methods for retrieving thin Arctic sea ice thickness (SIT) from L-band radiometry, using data from the European Space Agency’s (ESA) Soil Moisture and Ocean Salinity (SMOS) satellite. In addition to the operational ESA product, three alternative approaches are assessed: a Random Forest (RF) algorithm, a Convolutional Neural Network (CNN) that incorporates spatial coherence, and a Long Short-Term Memory (LSTM) neural network designed to capture temporal coherence. Validation against in situ data from the Beaufort Gyre Exploration Project (BGEP) moorings and the ESA SMOSice campaign demonstrates that the RF algorithm achieves robust performance comparable to the ESA product, despite its simplicity and lack of explicit spatial or temporal modeling. The CNN exhibits a tendency to overestimate SIT and shows higher dispersion, suggesting limited added value when spatial coherence is already present in the input data. The LSTM approach does not improve retrieval accuracy, likely due to the mismatch between satellite resolution and the temporal variability of sea ice conditions. These results highlight the importance of L-band sea ice emission modeling over increasing algorithm complexity and suggest that simpler, adaptable methods such as RF offer a promising foundation for future SIT retrieval efforts. The findings are relevant for refining current methods used with SMOS and for developing upcoming satellite missions, such as ESA’s Copernicus Imaging Microwave Radiometer (CIMR).
(This article belongs to the Special Issue Machine Learning and Statistical Learning with Applications 2025)

12 pages, 1066 KiB  
Article
Prediction of the Maximum and Minimum Prices of Stocks in the Stock Market Using a Hybrid Model Based on Stacking
by Sebastian Tuesta, Nahum Flores and David Mauricio
Algorithms 2025, 18(8), 471; https://doi.org/10.3390/a18080471 - 28 Jul 2025
Abstract
Predicting stock prices on stock markets is challenging due to the nonlinear and nonstationary nature of financial markets. This study presents a hybrid model based on integrated machine learning (ML) techniques—neural networks, support vector regression (SVR), and decision trees—that uses the stacking method to estimate the next day’s maximum and minimum stock prices. The model’s performance was evaluated using three data sets: two from Brazil’s São Paulo Stock Exchange (iBovespa)—Companhia Energética do Rio Grande do Norte (CSRN) and CPFL Energia (CPFE)—and one from the New York Stock Exchange (NYSE), the Dow Jones Industrial Average (DJI). The datasets covered the following time periods: CSRN and CPFE from 1 January 2008 to 30 September 2013, and DJI from 3 December 2018 to 31 August 2024. For the CSRN dataset, the hybrid model achieved a mean absolute percentage error (MAPE) of 0.197% for the maximum price and 0.224% for the minimum price, outperforming results from the literature. For the CPFE dataset, the model showed a MAPE of 0.834% for the maximum price and 0.937% for the minimum price, demonstrating comparable accuracy. For the DJI dataset, the model obtained a MAPE of 0.439% for the maximum price and 0.474% for the minimum price, demonstrating its applicability across different market contexts. These results suggest that the proposed hybrid approach offers a robust alternative for stock price prediction by overcoming the limitations of using a single ML technique.
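A hedged sketch of the stacking arrangement described above: neural network, SVR, and decision tree base learners combined by a meta-learner, via scikit-learn's StackingRegressor. The base-model hyperparameters, the Ridge meta-learner, and the data-generating process are illustrative only.

```python
import numpy as np
from sklearn.ensemble import StackingRegressor
from sklearn.linear_model import Ridge
from sklearn.neural_network import MLPRegressor
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVR
from sklearn.tree import DecisionTreeRegressor

rng = np.random.default_rng(9)
n = 600
X = rng.normal(size=(n, 5))                        # e.g., lagged prices and returns
# Toy next-day maximum price around a 100-unit level.
y = 100 + X @ np.array([1.0, -0.5, 0.3, 0.0, 0.2]) + np.sin(X[:, 0]) + rng.normal(0, 0.1, n)

stack = StackingRegressor(
    estimators=[
        ("mlp", make_pipeline(StandardScaler(), MLPRegressor(max_iter=2000, random_state=0))),
        ("svr", make_pipeline(StandardScaler(), SVR(C=10.0))),
        ("tree", DecisionTreeRegressor(max_depth=6, random_state=0)),
    ],
    final_estimator=Ridge(),                       # meta-learner on out-of-fold predictions
)
stack.fit(X[:500], y[:500])
pred = stack.predict(X[500:])
mape = np.mean(np.abs((y[500:] - pred) / y[500:])) * 100
print(f"MAPE = {mape:.3f}%")
```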

10 pages, 6510 KiB  
Proceeding Paper
Energy Consumption Forecasting for Renewable Energy Communities: A Case Study of Loureiro, Portugal
by Muhammad Akram, Chiara Martone, Ilenia Perugini and Emmanuele Maria Petruzziello
Eng. Proc. 2025, 101(1), 7; https://doi.org/10.3390/engproc2025101007 - 25 Jul 2025
Abstract
Intensive energy consumption in the building sector remains one of the primary contributors to climate change and global warming. Within Renewable Energy Communities (RECs), improving energy management is essential for promoting sustainability and reducing environmental impact, and accurate forecasting of energy consumption at the community level is a key tool in this effort. Traditionally, engineering-based methods grounded in thermodynamic principles have been employed, offering high accuracy under controlled conditions. However, their reliance on exhaustive building-level data and high computational costs limits their scalability in dynamic REC settings. In contrast, Artificial Intelligence (AI)-driven methods provide flexible and scalable alternatives by learning patterns from historical consumption and environmental data. This study investigates three Machine Learning (ML) models (Decision Tree (DT), Random Forest (RF), and CatBoost) and one Deep Learning (DL) model (a Convolutional Neural Network, CNN) to forecast community electricity consumption using real smart meter data and local meteorological variables. The study focuses on a REC in Loureiro, Portugal, consisting of 172 residential users, from whom 16 months of 15-min-interval electricity consumption data were collected. Temporal features (hour of the day, day of the week, month) were combined with lag-based usage features, including consumption at the corresponding time in the previous hour and on the previous day, to leverage short-term dependencies and daily repetition in usage behavior. Models were evaluated using Mean Squared Error (MSE), Mean Absolute Percentage Error (MAPE), and the coefficient of determination (R2). Among all models, CatBoost achieved the best performance, with an MSE of 0.1262, a MAPE of 4.77%, and an R2 of 0.9018. These results highlight the potential of ensemble learning approaches for improving energy demand forecasting in RECs, supporting smarter energy management and improved energy and environmental performance.
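A sketch of the lag-feature setup plus CatBoost regressor described above, on a synthetic 15-minute consumption series. The temporal and lag features follow the text; the load process, split, and hyperparameters are illustrative assumptions.

```python
import numpy as np
import pandas as pd
from catboost import CatBoostRegressor
from sklearn.metrics import mean_absolute_percentage_error, r2_score

rng = np.random.default_rng(10)
idx = pd.date_range("2024-01-01", periods=96 * 120, freq="15min")  # ~4 months
daily = 1.0 + 0.5 * np.sin(2 * np.pi * idx.hour / 24)              # daily cycle
load = daily + rng.normal(0, 0.05, idx.size)

df = pd.DataFrame({"load": load}, index=idx)
df["hour"] = df.index.hour
df["dow"] = df.index.dayofweek
df["month"] = df.index.month
df["lag_1h"] = df["load"].shift(4)     # same time, previous hour (4 x 15 min)
df["lag_1d"] = df["load"].shift(96)    # same time, previous day
df = df.dropna()

split = int(len(df) * 0.8)
feats = ["hour", "dow", "month", "lag_1h", "lag_1d"]
train, test = df.iloc[:split], df.iloc[split:]

model = CatBoostRegressor(iterations=300, depth=6, verbose=0, random_seed=0)
model.fit(train[feats], train["load"])
pred = model.predict(test[feats])
print(f"MAPE = {100 * mean_absolute_percentage_error(test['load'], pred):.2f}%")
print(f"R2 = {r2_score(test['load'], pred):.3f}")
```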

19 pages, 3405 KiB  
Article
Study on Hydrological–Meteorological Response in the Upper Yellow River Based on 100-Year Series Reconstruction
by Xiaohui He, Xiaoyu He, Yajun Gao and Fanchao Li
Water 2025, 17(15), 2223; https://doi.org/10.3390/w17152223 - 25 Jul 2025
Abstract
Precipitation, as a key input to the water cycle, directly influences the formation and evolution of runoff, while return runoff directly reflects the available quantity of water resources in a river basin. An in-depth analysis of the evolution laws and response relationships between precipitation and return runoff over a long time scale is an important support for exploring the evolution of hydrometeorological conditions and provides an accurate basis for the scientific planning and management of water resources. Taking Lanzhou Station on the upper Yellow River as a typical case, this study proposes the VSSL (LSTM Fusion Method Optimized by SSA with VMD Decomposition) deep learning method for extending the precipitation series and the SSVR (SVR Fusion Method Optimized by SSA) machine learning method for extending the runoff series. These methods achieve a reasonable extension of the missing data and construct 100-year precipitation and return runoff series from 1921 to 2020. The results show that the machine learning and deep learning methods outperform traditional statistical methods on the precipitation and return runoff test sets, and the fit for return runoff is better than that for precipitation. The 100-year precipitation and return runoff series of Lanzhou Station from 1921 to 2020 show non-significant upward trends at rates of 0.26 mm/a and 0.42 × 10⁸ m³/a, respectively. There is no significant change point in precipitation, while the change point of return runoff occurred in 1991. The 100-year precipitation series of Lanzhou Station alternates between dry and wet periods on four time scales, with main periods of 60, 20, 12, and 6 years; the 100-year return runoff series alternates on three time scales, with main periods of 60, 34, and 26 years. During 1940–2000, at the approximately 50-year cycle, precipitation and runoff show not only strong common oscillation energy and significant interaction but also a fixed phase difference: precipitation changes precede runoff, and runoff responds after a fixed time interval.
(This article belongs to the Section Water and Climate Change)

18 pages, 1687 KiB  
Article
Beyond Classical AI: Detecting Fake News with Hybrid Quantum Neural Networks
by Volkan Altıntaş
Appl. Sci. 2025, 15(15), 8300; https://doi.org/10.3390/app15158300 - 25 Jul 2025
Abstract
The advent of quantum computing has introduced new opportunities for enhancing classical machine learning architectures. In this study, we propose a novel hybrid model, the HQDNN (Hybrid Quantum–Deep Neural Network), designed for the automatic detection of fake news. The model integrates classical fully connected neural layers with a parameterized quantum circuit, enabling the processing of textual data within both classical and quantum computational domains. To assess its effectiveness, we conducted experiments on the widely used LIAR dataset utilizing Term Frequency–Inverse Document Frequency (TF-IDF) features, as well as transformer-based DistilBERT embeddings. The experimental results demonstrate that the HQDNN achieves a superior recall performance—92.58% with TF-IDF and 94.40% with DistilBERT—surpassing traditional machine learning models such as Logistic Regression, Linear SVM, and Multilayer Perceptron. Additionally, we compare the HQDNN with SetFit, a recent CPU-efficient few-shot transformer model, and show that while SetFit achieves higher precision, the HQDNN significantly outperforms it in recall. Furthermore, an ablation experiment confirms the critical contribution of the quantum component, revealing a substantial drop in performance when the quantum layer is removed. These findings highlight the potential of hybrid quantum–classical models as effective and compact alternatives for high-sensitivity classification tasks, particularly in domains such as fake news detection.
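A hedged sketch of a hybrid quantum-deep model in the spirit of the HQDNN described above: classical dense layers feeding a small parameterized quantum circuit, built with PennyLane and PyTorch. The circuit template, qubit count, layer sizes, and the 300-dimensional TF-IDF input are illustrative assumptions, not the paper's architecture.

```python
import pennylane as qml
import torch
from torch import nn

n_qubits = 4
dev = qml.device("default.qubit", wires=n_qubits)

@qml.qnode(dev, interface="torch")
def circuit(inputs, weights):
    qml.AngleEmbedding(inputs, wires=range(n_qubits))          # encode classical features
    qml.BasicEntanglerLayers(weights, wires=range(n_qubits))   # trainable entangling layers
    return [qml.expval(qml.PauliZ(w)) for w in range(n_qubits)]

weight_shapes = {"weights": (2, n_qubits)}       # two entangling layers
model = nn.Sequential(
    nn.Linear(300, 32), nn.ReLU(),               # classical head over TF-IDF features
    nn.Linear(32, n_qubits), nn.Tanh(),          # compress to qubit rotation angles
    qml.qnn.TorchLayer(circuit, weight_shapes),  # quantum layer, trained end-to-end
    nn.Linear(n_qubits, 1),                      # real/fake logit
)

x = torch.randn(8, 300)                          # a batch of 8 TF-IDF vectors
logits = model(x)
print(logits.shape)                              # torch.Size([8, 1])
```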
