Search Results (659)

Search Parameters:
Keywords = hybrid machining simulations

38 pages, 6505 KiB  
Review
Trends in Oil Spill Modeling: A Review of the Literature
by Rodrigo N. Vasconcelos, André T. Cunha Lima, Carlos A. D. Lentini, José Garcia V. Miranda, Luís F. F. de Mendonça, Diego P. Costa, Soltan G. Duverger and Elaine C. B. Cambui
Water 2025, 17(15), 2300; https://doi.org/10.3390/w17152300 - 2 Aug 2025
Abstract
Oil spill simulation models are essential for predicting the oil spill behavior and movement in marine environments. In this study, we comprehensively reviewed a large and diverse body of peer-reviewed literature obtained from Scopus and Web of Science. Our initial analysis phase focused on examining trends in scientific publications, utilizing the complete dataset derived after systematic screening and database integration. In the second phase, we applied elements of a systematic review to identify and evaluate the most influential contributions in the scientific field of oil spill simulations. Our analysis revealed a steady and accelerating growth of research activity over the past five decades, with a particularly notable expansion in the last two. The field has also experienced a marked increase in collaborative practices, including a rise in international co-authorship and multi-authored contributions, reflecting a more global and interdisciplinary research landscape. We cataloged the key modeling frameworks that have shaped the field from established systems such as OSCAR, OIL-MAP/SIMAP, and GNOME to emerging hybrid and Lagrangian approaches. Hydrodynamic models were consistently central, often integrated with biogeochemical, wave, atmospheric, and oil-spill-specific modules. Environmental variables such as wind, ocean currents, and temperature were frequently used to drive model behavior. Geographically, research has concentrated on ecologically and economically sensitive coastal and marine regions. We conclude that future progress will rely on the real-time integration of high-resolution environmental data streams, the development of machine-learning-based surrogate models to accelerate computations, and the incorporation of advanced biodegradation and weathering mechanisms supported by experimental data. These advancements are expected to enhance the accuracy, responsiveness, and operational value of oil spill modeling tools, supporting environmental monitoring and emergency response. Full article
(This article belongs to the Special Issue Advanced Remote Sensing for Coastal System Monitoring and Management)
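The review above highlights Lagrangian approaches driven by winds and ocean currents as a core modeling family. Purely as an illustrative sketch of that idea (not GNOME, OSCAR, or any other reviewed system), a minimal Lagrangian drift step with an assumed 3% wind-drift factor and a toy diffusion term could look like this:

```python
import numpy as np

def advect_particles(pos, current, wind, dt, wind_factor=0.03, diff_coeff=1.0):
    """One Lagrangian time step: advection by currents plus a wind-drift term
    and a random-walk term standing in for turbulent diffusion.

    pos         : (N, 2) particle positions in metres
    current     : (N, 2) ocean current velocities in m/s
    wind        : (N, 2) wind velocities at 10 m in m/s
    dt          : time step in seconds
    wind_factor : assumed 3% wind-drift coefficient (illustrative value)
    diff_coeff  : horizontal diffusion coefficient in m^2/s (illustrative)
    """
    drift = current + wind_factor * wind
    random_walk = np.sqrt(2.0 * diff_coeff * dt) * np.random.standard_normal(pos.shape)
    return pos + drift * dt + random_walk

# Toy usage: 1,000 particles released at the origin, uniform current and wind.
particles = np.zeros((1000, 2))
current = np.tile([0.2, 0.05], (1000, 1))   # m/s, eastward-dominated current
wind = np.tile([5.0, -2.0], (1000, 1))      # m/s
for _ in range(24):                          # 24 hourly steps
    particles = advect_particles(particles, current, wind, dt=3600.0)
print("Mean drift after 24 h (km):", particles.mean(axis=0) / 1000.0)
```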
48 pages, 2506 KiB  
Article
Enhancing Ship Propulsion Efficiency Predictions with Integrated Physics and Machine Learning
by Hamid Reza Soltani Motlagh, Seyed Behbood Issa-Zadeh, Md Redzuan Zoolfakar and Claudia Lizette Garay-Rondero
J. Mar. Sci. Eng. 2025, 13(8), 1487; https://doi.org/10.3390/jmse13081487 - 31 Jul 2025
Abstract
This research develops a dual physics-based machine learning system to forecast fuel consumption and CO2 emissions for a 100 m oil tanker across six operational scenarios: Original, Paint, Advanced Propeller, Fin, Bulbous Bow, and Combined. The combination of hydrodynamic calculations with Monte Carlo simulations provides a solid foundation for training machine learning models, particularly in cases where dataset restrictions are present. The XGBoost model demonstrated superior performance compared to Support Vector Regression, Gaussian Process Regression, Random Forest, and Shallow Neural Network models, achieving near-zero prediction errors that closely matched physics-based calculations. The physics-based analysis demonstrated that the Combined scenario, which combines hull coatings with bulbous bow modifications, produced the largest fuel consumption reduction (5.37% at 15 knots), followed by the Advanced Propeller scenario. The results demonstrate that user inputs (e.g., engine power: 870 kW, speed: 12.7 knots) match the Advanced Propeller scenario, followed by Paint, which indicates that advanced propellers or hull coatings would optimize efficiency. The obtained insights help ship operators modify their operational parameters and designers select essential modifications for sustainable operations. The model maintains its strength at low speeds, where fuel consumption is minimal, making it applicable to other oil tankers. The hybrid approach provides a new tool for maritime efficiency analysis, yielding interpretable results that support International Maritime Organization objectives, despite starting with a limited dataset. The model requires additional research to enhance its predictive accuracy using larger datasets and real-time data collection, which will aid in achieving global environmental stewardship. Full article
(This article belongs to the Special Issue Machine Learning for Prediction of Ship Motion)
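The abstract above describes training regressors on data generated by hydrodynamic calculations perturbed with Monte Carlo sampling. The hedged sketch below shows that general recipe with a deliberately simplified cubic speed-power fuel model; the formula, sample sizes, and parameter values are assumptions for illustration, not the authors' implementation:

```python
import numpy as np
from xgboost import XGBRegressor
from sklearn.model_selection import train_test_split
from sklearn.metrics import mean_absolute_error

rng = np.random.default_rng(0)

def physics_fuel_model(speed_kn, draft_m, hull_factor):
    """Toy physics surrogate: fuel consumption rises roughly with speed cubed,
    scaled by draft and a hull-condition factor (illustrative only)."""
    return 0.004 * hull_factor * draft_m * speed_kn ** 3

# Monte Carlo sampling of operating conditions to build a training set.
n = 5000
speed = rng.uniform(8.0, 16.0, n)          # knots
draft = rng.uniform(5.5, 7.5, n)           # metres
hull = rng.normal(1.0, 0.05, n)            # hull-roughness multiplier
fuel = physics_fuel_model(speed, draft, hull) * rng.normal(1.0, 0.02, n)

X = np.column_stack([speed, draft, hull])
X_train, X_test, y_train, y_test = train_test_split(X, fuel, random_state=0)

# Gradient-boosted regressor trained on the physics-generated samples.
model = XGBRegressor(n_estimators=400, max_depth=4, learning_rate=0.05)
model.fit(X_train, y_train)
print("MAE (toy units):", mean_absolute_error(y_test, model.predict(X_test)))
```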
18 pages, 1498 KiB  
Article
A Proactive Predictive Model for Machine Failure Forecasting
by Olusola O. Ajayi, Anish M. Kurien, Karim Djouani and Lamine Dieng
Machines 2025, 13(8), 663; https://doi.org/10.3390/machines13080663 - 29 Jul 2025
Viewed by 276
Abstract
Unexpected machine failures in industrial environments lead to high maintenance costs, unplanned downtime, and safety risks. This study proposes a proactive predictive model using a hybrid of eXtreme Gradient Boosting (XGBoost) and Neural Networks (NN) to forecast machine failures. A synthetic dataset capturing recent breakdown history and time since last failure was used to simulate industrial scenarios. To address class imbalance, SMOTE and class weighting were applied, alongside a focal loss function to emphasize difficult-to-classify failures. The XGBoost model was tuned via GridSearchCV, while the NN model utilized ReLU-activated hidden layers with dropout. Evaluation using stratified 5-fold cross-validation showed that the NN achieved an F1-score of 0.7199 and a recall of 0.9545 for the minority class. XGBoost attained a higher PR AUC of 0.7126 and a more balanced precision–recall trade-off. Sample predictions demonstrated strong recall (100%) for failures, but also a high false positive rate, with most prediction probabilities clustered between 0.50–0.55. Additional benchmarking against Logistic Regression, Random Forest, and SVM further confirmed the superiority of the proposed hybrid model. Model interpretability was enhanced using SHAP and LIME, confirming that recent breakdowns and time since last failure were key predictors. While the model effectively detects failures, further improvements in feature engineering and threshold tuning are recommended to reduce false alarms and boost decision confidence. Full article
(This article belongs to the Section Machines Testing and Maintenance)
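As a rough illustration of the imbalanced-classification pipeline named in the abstract above (GridSearchCV-tuned XGBoost with class weighting and stratified 5-fold validation), the sketch below runs on synthetic data; SMOTE, the focal loss, and the neural-network branch are omitted, and all parameter values are assumptions rather than the authors' settings:

```python
import numpy as np
from xgboost import XGBClassifier
from sklearn.datasets import make_classification
from sklearn.model_selection import GridSearchCV, StratifiedKFold

# Synthetic, heavily imbalanced failure data (about 5% positive class).
X, y = make_classification(n_samples=4000, n_features=8, weights=[0.95, 0.05],
                           random_state=42)

# Class weighting via scale_pos_weight = (# negatives) / (# positives).
spw = (y == 0).sum() / max((y == 1).sum(), 1)

param_grid = {"max_depth": [3, 5], "n_estimators": [200, 400],
              "learning_rate": [0.05, 0.1]}
search = GridSearchCV(
    XGBClassifier(scale_pos_weight=spw, eval_metric="logloss"),
    param_grid,
    scoring="f1",                                # favour the minority class
    cv=StratifiedKFold(n_splits=5, shuffle=True, random_state=42),
)
search.fit(X, y)
print("Best params:", search.best_params_, "best F1:", round(search.best_score_, 3))
```

Threshold tuning on the predicted probabilities, as the authors recommend, would be a natural follow-up to reduce the false-positive rate.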
28 pages, 2918 KiB  
Article
Machine Learning-Powered KPI Framework for Real-Time, Sustainable Ship Performance Management
by Christos Spandonidis, Vasileios Iliopoulos and Iason Athanasopoulos
J. Mar. Sci. Eng. 2025, 13(8), 1440; https://doi.org/10.3390/jmse13081440 - 28 Jul 2025
Viewed by 275
Abstract
The maritime sector faces escalating demands to minimize emissions and optimize operational efficiency under tightening environmental regulations. Although technologies such as the Internet of Things (IoT), Artificial Intelligence (AI), and Digital Twins (DT) offer substantial potential, their deployment in real-time ship performance analytics is still at an early stage. This paper proposes a machine learning-driven framework for real-time ship performance management. The framework starts with data collected from onboard sensors and culminates in a decision support system that is easily interpretable, even by non-experts. It also provides a method to forecast vessel performance by extrapolating Key Performance Indicator (KPI) values. Furthermore, it offers a flexible methodology for defining KPIs for every crucial component or aspect of vessel performance, illustrated through a use case focusing on fuel oil consumption. Leveraging Artificial Neural Networks (ANNs), hybrid multivariate data fusion, and high-frequency sensor streams, the system facilitates continuous diagnostics, early fault detection, and data-driven decision-making. Unlike conventional static performance models, the framework employs dynamic KPIs that evolve with the vessel’s operational state, enabling advanced trend analysis, predictive maintenance scheduling, and compliance assurance. Experimental comparison against classical KPI models highlights superior predictive fidelity, robustness, and temporal consistency. Furthermore, the paper delineates AI and ML applications across core maritime operations and introduces a scalable, modular system architecture applicable to both commercial and naval platforms. This approach bridges advanced simulation ecosystems with in situ operational data, laying a robust foundation for digital transformation and sustainability in maritime domains. Full article
(This article belongs to the Section Ocean Engineering)
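As a loose illustration of the dynamic-KPI idea summarized above, the sketch below defines a fuel-oil-consumption KPI as the ratio of measured consumption to an ANN-predicted expected value for the current operating state; the network architecture, feature names, threshold, and data are assumptions for illustration, not the paper's specification:

```python
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(1)

# Assumed historical sensor features: speed (kn), draft (m), wind speed (m/s).
X_hist = rng.uniform([10, 6, 0], [18, 9, 20], size=(3000, 3))
fuel_hist = 0.003 * X_hist[:, 0] ** 3 * X_hist[:, 1] / 7 + 0.1 * X_hist[:, 2] \
            + rng.normal(0, 0.3, 3000)        # toy "clean-hull" consumption, t/h

# ANN baseline model of expected consumption for a given operating state.
baseline = make_pipeline(StandardScaler(),
                         MLPRegressor(hidden_layer_sizes=(32, 16), max_iter=2000,
                                      random_state=1))
baseline.fit(X_hist, fuel_hist)

def fuel_kpi(state, measured_fuel):
    """KPI > 1 means the ship burns more fuel than the model expects for this
    state, hinting at degradation (fouling, engine issues)."""
    expected = baseline.predict(np.asarray(state).reshape(1, -1))[0]
    return measured_fuel / expected

kpi = fuel_kpi([14.0, 7.5, 12.0], measured_fuel=11.2)
print("Fuel KPI:", round(kpi, 2), "-> investigate" if kpi > 1.1 else "-> nominal")
```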
23 pages, 13580 KiB  
Article
Enabling Smart Grid Resilience with Deep Learning-Based Battery Health Prediction in EV Fleets
by Muhammed Cavus and Margaret Bell
Batteries 2025, 11(8), 283; https://doi.org/10.3390/batteries11080283 - 24 Jul 2025
Viewed by 244
Abstract
The widespread integration of electric vehicles (EVs) into smart grid infrastructures necessitates intelligent and robust battery health diagnostics to ensure system resilience and performance longevity. While numerous studies have addressed the estimation of State of Health (SOH) and the prediction of remaining useful life (RUL) using machine and deep learning, most existing models fail to capture both short-term degradation trends and long-range contextual dependencies jointly. In this study, we introduce V2G-HealthNet, a novel hybrid deep learning framework that uniquely combines Long Short-Term Memory (LSTM) networks with Transformer-based attention mechanisms to model battery degradation under dynamic vehicle-to-grid (V2G) scenarios. Unlike prior approaches that treat SOH estimation in isolation, our method directly links health prediction to operational decisions by enabling SOH-informed adaptive load scheduling and predictive maintenance across EV fleets. Trained on over 3400 proxy charge-discharge cycles derived from 1 million telemetry samples, V2G-HealthNet achieved state-of-the-art performance (SOH RMSE: 0.015, MAE: 0.012, R2: 0.97), outperforming leading baselines including XGBoost and Random Forest. For RUL prediction, the model maintained an MAE of 0.42 cycles over a five-cycle horizon. Importantly, deployment simulations revealed that V2G-HealthNet triggered maintenance alerts at least three cycles ahead of critical degradation thresholds and redistributed high-load tasks away from ageing batteries—capabilities not demonstrated in previous works. These findings establish V2G-HealthNet as a deployable, health-aware control layer for smart city electrification strategies. Full article
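The abstract above couples an LSTM with Transformer-style attention for SOH regression. The PyTorch sketch below shows one generic way such a hybrid can be wired together; the layer sizes, sequence length, and single-output head are illustrative assumptions, not the published V2G-HealthNet architecture:

```python
import torch
import torch.nn as nn

class LstmAttentionSOH(nn.Module):
    """Toy LSTM + Transformer-encoder hybrid mapping a window of per-cycle
    features (voltage, current, temperature, ...) to a State-of-Health value."""
    def __init__(self, n_features=6, hidden=64, n_heads=4, n_layers=2):
        super().__init__()
        self.lstm = nn.LSTM(n_features, hidden, batch_first=True)
        enc_layer = nn.TransformerEncoderLayer(d_model=hidden, nhead=n_heads,
                                               batch_first=True)
        self.encoder = nn.TransformerEncoder(enc_layer, num_layers=n_layers)
        self.head = nn.Linear(hidden, 1)

    def forward(self, x):                      # x: (batch, seq_len, n_features)
        seq, _ = self.lstm(x)                  # short-term degradation trends
        ctx = self.encoder(seq)                # long-range contextual attention
        return self.head(ctx[:, -1, :]).squeeze(-1)   # predicted SOH

model = LstmAttentionSOH()
dummy = torch.randn(8, 50, 6)                  # 8 windows of 50 cycles each
print(model(dummy).shape)                      # torch.Size([8])
```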
16 pages, 3807 KiB  
Article
Optimization of Machining Efficiency of Aluminum Honeycomb Structures by Hybrid Milling Assisted by Longitudinal Ultrasonic Vibrations
by Oussama Beldi, Tarik Zarrouk, Ahmed Abbadi, Mohammed Nouari, Mohammed Abbadi, Jamal-Eddine Salhi and Mohammed Barboucha
Processes 2025, 13(8), 2348; https://doi.org/10.3390/pr13082348 - 23 Jul 2025
Viewed by 299
Abstract
The use of aluminum honeycomb structures is expanding rapidly in advanced sectors such as the aeronautics, aerospace, marine, and automotive industries. However, processing these structures represents a major challenge for producing parts that meet strict standards. To address this issue, an innovative manufacturing method using longitudinal ultrasonic vibration-assisted cutting, combined with a CDZ10 hybrid cutting tool, was developed to optimize the efficiency of traditional machining processes. To this end, a 3D numerical model was developed using the finite element method and Abaqus/Explicit 2017 software to simulate the complex interactions between the cutting tool and the thin walls of the structures. This model was validated by experimental tests, allowing the study of the influence of milling conditions such as feed rate, cutting angle, and vibration amplitude. The numerical results revealed that the hybrid technology significantly reduces the cutting force components, with a decrease ranging from 10% to 42%. In addition, it improves cutting quality by reducing plastic deformation and cell wall tearing, which prevents the formation of chip clumps on the tool edges, thus avoiding premature tool wear. These outcomes offer new insights into optimizing industrial processes, particularly in fields with stringent precision and performance demands, such as the aerospace sector. Full article
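To make the "longitudinal ultrasonic vibration superimposed on the feed motion" concrete, the short sketch below evaluates an assumed tool-tip displacement z(t) = A·sin(2πft) added to a constant feed and the resulting instantaneous velocity; the amplitude, frequency, and feed values are generic illustrations, not those used in the study:

```python
import numpy as np

# Assumed ultrasonic parameters (illustrative, not the paper's values).
amplitude = 10e-6      # m  (10 µm peak longitudinal amplitude)
frequency = 20e3       # Hz (typical ultrasonic range, ~20 kHz)
feed_rate = 2e-3       # m/s constant axial feed

t = np.linspace(0.0, 5.0 / frequency, 1000)            # five vibration periods
z_vibration = amplitude * np.sin(2 * np.pi * frequency * t)
z_tool = feed_rate * t + z_vibration                    # superimposed motion
v_tool = feed_rate + 2 * np.pi * frequency * amplitude * np.cos(
    2 * np.pi * frequency * t)                          # instantaneous velocity

# The peak vibratory speed dwarfs the feed, which is why tool-wall contact
# becomes intermittent and cutting forces drop in vibration-assisted milling.
print(f"Peak tool speed: {v_tool.max():.3f} m/s vs feed {feed_rate:.3f} m/s")
```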
17 pages, 1794 KiB  
Article
Detection of Cumulative Bruising in Prunes Using Vis–NIR Spectroscopy and Machine Learning: A Nonlinear Spectral Response Approach
by Lisi Lai, Hui Zhang, Jiahui Gu and Long Wen
Appl. Sci. 2025, 15(15), 8190; https://doi.org/10.3390/app15158190 - 23 Jul 2025
Viewed by 166
Abstract
Early and accurate detection of mechanical damage in prunes is crucial for preserving postharvest quality and enabling automated sorting. This study proposes a practical and reproducible method for identifying cumulative bruising in prunes using visible–near-infrared (Vis–NIR) reflectance spectroscopy coupled with machine learning techniques. A self-developed impact simulation device was designed to induce progressive damage under controlled energy levels, simulating realistic postharvest handling conditions. Spectral data were collected from the equatorial region of each fruit and processed using a hybrid modeling framework comprising continuous wavelet transform (CWT) for spectral enhancement, uninformative variable elimination (UVE) for optimal wavelength selection, and support vector machine (SVM) for classification. The proposed CWT-UVE-SVM model achieved an overall classification accuracy of 93.22%, successfully distinguishing intact, mildly bruised, and cumulatively damaged samples. Notably, the results revealed nonlinear reflectance variations in the near-infrared region associated with repeated low-energy impacts, highlighting the capacity of spectral response patterns to capture progressive physiological changes. This research not only advances nondestructive detection methods for prune grading but also provides a scalable modeling strategy for cumulative mechanical damage assessment in soft horticultural products. Full article
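The CWT → wavelength selection → SVM pipeline summarized above can be approximated as follows. PyWavelets supplies the continuous wavelet transform, and the UVE step is replaced by a simple univariate filter purely to keep the sketch short; the wavelet choice, scales, and spectra are all assumptions, not the authors' data or settings:

```python
import numpy as np
import pywt
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

rng = np.random.default_rng(7)

# Stand-in Vis-NIR spectra: 150 fruit x 256 wavelengths, 3 bruise classes.
spectra = rng.normal(size=(150, 256)).cumsum(axis=1)
labels = rng.integers(0, 3, size=150)        # intact / mild / cumulative

def cwt_features(spec, scales=np.arange(1, 17)):
    """Continuous wavelet transform of one spectrum; the mean absolute
    coefficient per scale band serves as an enhanced feature vector."""
    coeffs, _ = pywt.cwt(spec, scales, "morl")
    return np.abs(coeffs).mean(axis=1)        # one value per scale

X = np.vstack([cwt_features(s) for s in spectra])

# Variable selection (UVE in the paper; a plain ANOVA filter here) + SVM.
clf = make_pipeline(StandardScaler(), SelectKBest(f_classif, k=8),
                    SVC(kernel="rbf", C=10.0, gamma="scale"))
scores = cross_val_score(clf, X, labels, cv=5)
print("CV accuracy on synthetic data:", scores.mean().round(3))
```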
25 pages, 2201 KiB  
Article
Evolutionary-Assisted Data-Driven Approach for Dissolved Oxygen Modeling: A Case Study in Kosovo
by Bruno da S. Macêdo, Larissa Lima, Douglas Lima Fonseca, Tales H. A. Boratto, Camila M. Saporetti, Osman Fetoshi, Edmond Hajrizi, Pajtim Bytyçi, Uilson R. V. Aires, Roland Yonaba, Priscila Capriles and Leonardo Goliatt
Earth 2025, 6(3), 81; https://doi.org/10.3390/earth6030081 - 21 Jul 2025
Viewed by 268
Abstract
Dissolved oxygen (DO) is widely recognized as a fundamental parameter in assessing water quality, given its critical role in supporting aquatic ecosystems. Accurate estimation of DO levels is crucial for effective management of riverine environments, especially in anthropogenically stressed regions. In this study, a hybrid machine learning (ML) framework is introduced to predict DO concentrations, where optimization is performed through Genetic Algorithm Search with Cross-Validation (GASearchCV). The methodology was applied to a dataset collected from the Sitnica River in Kosovo, comprising more than 18,000 observations of temperature, conductivity, pH, and dissolved oxygen. The ML models Elastic Net (EN), Support Vector Regression (SVR), and Light Gradient Boosting Machine (LGBM) were fine-tuned using cross-validation and assessed using five performance metrics: coefficient of determination (R2), root mean square error (RMSE), mean absolute error (MAE), mean absolute relative error (MARE), and mean square error (MSE). Among them, the LGBM model yielded the best predictive results, achieving an R2 of 0.944 and RMSE of 8.430 mg/L on average. A Monte Carlo Simulation-based uncertainty analysis further confirmed the model’s robustness, enabling comparison of the trade-off between uncertainty and predictive precision. Comparison with recent studies confirms the proposed framework’s competitive performance, demonstrating the effectiveness of automated tuning and ensemble learning in achieving reliable and real-time water quality forecasting. The methodology offers a scalable and reliable solution for advancing data-driven water quality forecasting, with direct applicability to real-time environmental monitoring and sustainable resource management. Full article
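A stripped-down version of the tuning-plus-evaluation loop described above is sketched below. To keep the example dependency-light, the genetic search (GASearchCV) is swapped for scikit-learn's plain GridSearchCV; the feature set (temperature, conductivity, pH) follows the abstract, while the data and grid values are invented placeholders:

```python
import numpy as np
from lightgbm import LGBMRegressor
from sklearn.metrics import mean_absolute_error, r2_score
from sklearn.model_selection import GridSearchCV, train_test_split

rng = np.random.default_rng(3)

# Placeholder water-quality data: temperature (C), conductivity (uS/cm), pH.
n = 5000
X = np.column_stack([rng.uniform(2, 28, n), rng.uniform(200, 900, n),
                     rng.uniform(6.5, 8.8, n)])
do = 14.6 - 0.35 * X[:, 0] + 0.002 * (X[:, 2] - 7) * X[:, 1] \
     + rng.normal(0, 0.4, n)                     # toy DO response, mg/L

X_tr, X_te, y_tr, y_te = train_test_split(X, do, random_state=3)

grid = {"num_leaves": [15, 31], "learning_rate": [0.05, 0.1],
        "n_estimators": [200, 400]}
search = GridSearchCV(LGBMRegressor(), grid, cv=5, scoring="r2")
search.fit(X_tr, y_tr)

pred = search.predict(X_te)
print("R2:", round(r2_score(y_te, pred), 3),
      "MAE (mg/L):", round(mean_absolute_error(y_te, pred), 3))
```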
33 pages, 2299 KiB  
Review
Edge Intelligence in Urban Landscapes: Reviewing TinyML Applications for Connected and Sustainable Smart Cities
by Athanasios Trigkas, Dimitrios Piromalis and Panagiotis Papageorgas
Electronics 2025, 14(14), 2890; https://doi.org/10.3390/electronics14142890 - 19 Jul 2025
Viewed by 448
Abstract
Tiny Machine Learning (TinyML) extends edge AI capabilities to resource-constrained devices, offering a promising solution for real-time, low-power intelligence in smart cities. This review systematically analyzes 66 peer-reviewed studies from 2019 to 2024, covering applications across urban mobility, environmental monitoring, public safety, waste management, and infrastructure health. We examine hardware platforms and machine learning models, with particular attention to power-efficient deployment and data privacy. We review the approaches employed in published studies for deploying machine learning models on resource-constrained hardware, emphasizing the most commonly used communication technologies—while noting the limited uptake of low-power options such as Low Power Wide Area Networks (LPWANs). We also discuss hardware–software co-design strategies that enable sustainable operation. Furthermore, we evaluate the alignment of these deployments with the United Nations Sustainable Development Goals (SDGs), highlighting both their contributions and existing gaps in current practices. This review identifies recurring technical patterns, methodological challenges, and underexplored opportunities, particularly in the areas of hardware provisioning, usage of inherent privacy benefits in relevant applications, communication technologies, and dataset practices, offering a roadmap for future TinyML research and deployment in smart urban systems. Among the 66 studies examined, 29 focused on mobility and transportation, 17 on public safety, 10 on environmental sensing, 6 on waste management, and 4 on infrastructure monitoring. TinyML was deployed on constrained microcontrollers in 32 studies, while 36 used optimized models for resource-limited environments. Energy harvesting, primarily solar, was featured in 6 studies, and low-power communication networks were used in 5. Public datasets were used in 27 studies, custom datasets in 24, and the remainder relied on hybrid or simulated data. Only one study explicitly referenced SDGs, and 13 studies considered privacy in their system design. Full article
(This article belongs to the Special Issue New Advances in Embedded Software and Applications)
16 pages, 1383 KiB  
Article
Probabilistic Demand Forecasting in the Southeast Region of the Mexican Power System Using Machine Learning Methods
by Ivan Itai Bernal Lara, Roberto Jair Lorenzo Diaz, María de los Ángeles Sánchez Galván, Jaime Robles García, Mohamed Badaoui, David Romero Romero and Rodolfo Alfonso Moreno Flores
Forecasting 2025, 7(3), 39; https://doi.org/10.3390/forecast7030039 - 18 Jul 2025
Viewed by 368
Abstract
This paper focuses on electricity demand forecasting and its uncertainty representation using a hybrid machine learning (ML) model in the eastern control area of southeastern Mexico. In this case, different sources of uncertainty are integrated by applying the Bootstrap method, which adds the characteristics of stochastic noise, resulting in a hybrid probabilistic and ML model in the form of a time series. The proposed methodology uses a probability density function, the generalized extreme value distribution, fitted to the errors of the ML model; it is adaptable and independent and simulates the variability that may arise due to unforeseen events. Results indicate that for a five-day forecast using only demand data, the proposed model achieves a Mean Absolute Percentage Error (MAPE) of 4.358%; however, incorporating temperature increases the MAPE to 5.123% due to growing uncertainty. In contrast, a day-ahead forecast, including temperature, improves accuracy, reducing MAPE to 1.644%. The stochastic noise component enhances probabilistic modeling, yielding a MAPE of 3.042% with and 2.073% without temperature in five-day forecasts. Therefore, the proposed model proves useful for regions with high demand variability, such as southeastern Mexico, while maintaining accuracy over longer time horizons. Full article
(This article belongs to the Section Power and Energy Forecasting)
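The extreme-value-plus-stochastic-noise idea in the abstract above can be illustrated roughly as follows: fit a generalized extreme value distribution to the model's historical errors with SciPy, then widen a point forecast into a prediction band by adding simulated noise replicates. The point forecast and error sample here are fabricated placeholders, not the paper's data:

```python
import numpy as np
from scipy.stats import genextreme

rng = np.random.default_rng(11)

# Placeholder 24 h point forecast (MW) and historical model errors (MW).
point_forecast = 1200 + 150 * np.sin(np.linspace(0, 2 * np.pi, 24))
hist_errors = rng.normal(0, 35, size=2000) + rng.gumbel(0, 10, size=2000)

# Fit a generalized extreme value distribution to the historical errors.
shape, loc, scale = genextreme.fit(hist_errors)

# Bootstrap-style ensemble: each replicate adds GEV-distributed noise,
# standing in for uncertainty from unforeseen operating conditions.
n_reps = 500
noise = genextreme.rvs(shape, loc=loc, scale=scale,
                       size=(n_reps, point_forecast.size), random_state=11)
ensemble = point_forecast + noise

lower, upper = np.percentile(ensemble, [5, 95], axis=0)
print(f"90% band width at hour 0: {upper[0] - lower[0]:.1f} MW")
```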
21 pages, 1661 KiB  
Article
Performance Assessment of B-Series Marine Propellers with Cupping and Face Camber Ratio Using Machine Learning Techniques
by Mina Tadros and Evangelos Boulougouris
J. Mar. Sci. Eng. 2025, 13(7), 1345; https://doi.org/10.3390/jmse13071345 - 15 Jul 2025
Viewed by 355
Abstract
This study investigates the performance of B-series marine propellers enhanced through geometric modifications, namely face camber ratio (FCR) and cupping percentage modifications, using a machine learning (ML)-driven optimization framework. A large dataset of over 7000 open-water propeller configurations is curated, incorporating variations in blade number, expanded area ratio (EAR), pitch-to-diameter ratio (P/D), FCR, and cupping percentage. A multi-layer artificial neural network (ANN) is trained to predict thrust, torque, and open-water efficiency (ηo) with a high coefficient of determination (R2), greater than 0.9999. The ANN is integrated into an optimization algorithm to identify optimal propeller designs for the KRISO Container Ship (KCS) using empirical constraints for cavitation and tip speed. Unlike prior studies that rely on boundary element method (BEM)-ML hybrids or multi-fidelity simulations, this study introduces a geometry-coupled analysis of FCR and cupping—parameters often treated independently—and applies empirical cavitation and acoustic (tip speed) limits to guide the design process. The results indicate that incorporating 1.0–1.5% cupping leads to a significant improvement in efficiency, up to 9.3% above the reference propeller, while maintaining cavitation safety margins and acoustic limits. Conversely, designs with non-zero FCR values (0.5–1.5%) show a modest efficiency penalty (up to 4.3%), although some configurations remain competitive when compensated by higher EAR, P/D, or blade count. The study confirms that the combination of cupping with optimized geometric parameters yields high-efficiency, cavitation-safe propellers. Furthermore, the ML-based framework demonstrates excellent potential for rapid, accurate, and scalable propeller design optimization that meets both performance and regulatory constraints. Full article
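The surrogate-then-optimize workflow summarized above (an ANN trained on open-water data, then searched under cavitation and tip-speed limits) is sketched generically below; the training data, network size, and the crude constraint placeholder are assumptions, not the study's B-series dataset or empirical criteria:

```python
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(5)

# Fake "open-water" samples: blades Z, EAR, P/D, FCR %, cup % -> efficiency.
n = 7000
X = np.column_stack([rng.integers(3, 7, n), rng.uniform(0.4, 1.0, n),
                     rng.uniform(0.6, 1.4, n), rng.uniform(0.0, 1.5, n),
                     rng.uniform(0.0, 1.5, n)])
eta = (0.55 + 0.05 * X[:, 4] - 0.02 * X[:, 3] + 0.1 * np.sin(X[:, 2])
       + rng.normal(0, 0.01, n))                      # toy efficiency response

surrogate = make_pipeline(StandardScaler(),
                          MLPRegressor(hidden_layer_sizes=(64, 64),
                                       max_iter=1500, random_state=5))
surrogate.fit(X, eta)

# Brute-force search of candidate designs under a placeholder EAR-based
# constraint (the paper applies empirical cavitation and tip-speed limits).
cand = np.column_stack([rng.integers(3, 7, 20000), rng.uniform(0.4, 1.0, 20000),
                        rng.uniform(0.6, 1.4, 20000), rng.uniform(0.0, 1.5, 20000),
                        rng.uniform(0.0, 1.5, 20000)])
feasible = cand[cand[:, 1] > 0.55]                    # crude stand-in constraint
best = feasible[np.argmax(surrogate.predict(feasible))]
print("Best design [Z, EAR, P/D, FCR%, cup%]:", np.round(best, 3))
```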
49 pages, 1398 KiB  
Review
Navigating AI-Driven Financial Forecasting: A Systematic Review of Current Status and Critical Research Gaps
by László Vancsura, Tibor Tatay and Tibor Bareith
Forecasting 2025, 7(3), 36; https://doi.org/10.3390/forecast7030036 - 14 Jul 2025
Viewed by 1379
Abstract
This systematic literature review explores the application of artificial intelligence (AI) and machine learning (ML) in financial market forecasting, with a focus on four asset classes: equities, cryptocurrencies, commodities, and foreign exchange markets. Guided by the PRISMA methodology, the study identifies the most widely used predictive models, particularly LSTM, GRU, XGBoost, and hybrid deep learning architectures, as well as key evaluation metrics, such as RMSE and MAPE. The findings confirm that AI-based approaches, especially neural networks, outperform traditional statistical methods in capturing non-linear and high-dimensional dynamics. However, the analysis also reveals several critical research gaps. Most notably, current models are rarely embedded into real or simulated trading strategies, limiting their practical applicability. Furthermore, the sensitivity of widely used metrics like MAPE to volatility remains underexplored, particularly in highly unstable environments such as crypto markets. Temporal robustness is also a concern, as many studies fail to validate their models across different market regimes. While data covering one to ten years is most common, few studies assess performance stability over time. By highlighting these limitations, this review not only synthesizes the current state of the art but also outlines essential directions for future research. Specifically, it calls for greater emphasis on model interpretability, strategy-level evaluation, and volatility-aware validation frameworks, thereby contributing to the advancement of AI’s real-world utility in financial forecasting. Full article
(This article belongs to the Section Forecasting in Computer Science)
42 pages, 5715 KiB  
Article
Development and Fuel Economy Optimization of Series–Parallel Hybrid Powertrain for Van-Style VW Crafter Vehicle
by Ahmed Nabil Farouk Abdelbaky, Aminu Babangida, Abdullahi Bala Kunya and Péter Tamás Szemes
Energies 2025, 18(14), 3688; https://doi.org/10.3390/en18143688 - 12 Jul 2025
Viewed by 476
Abstract
Toxic gas emissions from conventional vehicles are a global concern. Over the past few years, there has been a broad adoption of electric vehicles (EVs) to reduce energy usage and mitigate environmental emissions. However, EVs remain constrained by their limited range and cost, which prompts the need for hybrid electric vehicles (HEVs). This study describes the conversion of a 2022 Volkswagen Crafter (VW) 35 TDI 340 delivery van from a conventional diesel powertrain into a hybrid electric vehicle (HEV) augmented with synchronous electrical machines (motor and generator) and a BMW i3 60 Ah battery pack. A downsized 1.5 L diesel engine and an electric motor–generator unit are integrated via a planetary power split device supported by a high-voltage lithium-ion battery. A MATLAB (R2024b) Simulink model of the hybrid system is developed, and its speed-tracking PID controller is optimized using genetic algorithm (GA) and particle swarm optimization (PSO) methods. The simulation results show significant efficiency gains: for example, average fuel consumption falls from 9.952 to 7.014 L/100 km (a 29.5% saving) and CO2 emissions drop from 260.8 to 186.0 g/km (a 74.8 g/km reduction), while the vehicle range on a 75 L tank grows by ~40.7% (from 785.7 to 1105.5 km). The optimized series–parallel powertrain design significantly improves urban driving economy and reduces emissions without compromising performance. Full article
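To illustrate the kind of controller tuning the abstract describes (a metaheuristic search over PID gains for speed tracking), the sketch below tunes a PID on a deliberately crude first-order vehicle-speed model using SciPy's differential evolution in place of the paper's GA/PSO implementations in Simulink; the plant constants and cost function are invented for the example:

```python
import numpy as np
from scipy.optimize import differential_evolution

DT, T_END = 0.1, 60.0          # s
TARGET = 15.0                  # m/s step in demanded vehicle speed

def tracking_cost(gains):
    """Toy first-order longitudinal model: m*dv/dt = u - c*v, PID on speed."""
    kp, ki, kd = gains
    m, c = 2500.0, 120.0                         # assumed mass (kg), drag (N s/m)
    v, integ, prev_err, cost = 0.0, 0.0, TARGET, 0.0
    for t in np.arange(0.0, T_END, DT):
        err = TARGET - v
        integ += err * DT
        deriv = (err - prev_err) / DT
        u = np.clip(kp * err + ki * integ + kd * deriv, 0.0, 6000.0)  # drive force, N
        v += (u - c * v) / m * DT
        cost += t * abs(err) * DT                # ITAE-style tracking cost
        prev_err = err
    return cost

result = differential_evolution(tracking_cost,
                                bounds=[(0, 2000), (0, 200), (0, 500)],
                                seed=1, maxiter=40, tol=1e-6)
print("Tuned [Kp, Ki, Kd]:", np.round(result.x, 2), "cost:", round(result.fun, 2))
```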
23 pages, 309 KiB  
Review
Mathematical Optimization in Machine Learning for Computational Chemistry
by Ana Zekić
Computation 2025, 13(7), 169; https://doi.org/10.3390/computation13070169 - 11 Jul 2025
Viewed by 418
Abstract
Machine learning (ML) is transforming computational chemistry by accelerating molecular simulations, property prediction, and inverse design. Central to this transformation is mathematical optimization, which underpins nearly every stage of model development, from training neural networks and tuning hyperparameters to navigating chemical space for molecular discovery. This review presents a structured overview of optimization techniques used in ML for computational chemistry, including gradient-based methods (e.g., SGD and Adam), probabilistic approaches (e.g., Monte Carlo sampling and Bayesian optimization), and spectral methods. We classify optimization targets into model parameter optimization, hyperparameter selection, and molecular optimization and analyze their application across supervised, unsupervised, and reinforcement learning frameworks. Additionally, we examine key challenges such as data scarcity, limited generalization, and computational cost, outlining how mathematical strategies like active learning, meta-learning, and hybrid physics-informed models can address these issues. By bridging optimization methodology with domain-specific challenges, this review highlights how tailored optimization strategies enhance the accuracy, efficiency, and scalability of ML models in computational chemistry. Full article
(This article belongs to the Special Issue Feature Papers in Computational Chemistry)
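As a minimal, generic example of the "model parameter optimization" category discussed in the review above, the snippet below fits the two parameters of a toy harmonic bond-stretch potential to synthetic energy data by plain gradient descent; both the potential and the data are illustrative, not drawn from the paper:

```python
import numpy as np

rng = np.random.default_rng(2)

# Synthetic reference energies from a harmonic bond-stretch potential.
r = np.linspace(0.8, 1.6, 60)                       # bond length, angstrom
k_true, r0_true = 4.0, 1.2                           # toy force constant / eq. length
e_ref = 0.5 * k_true * (r - r0_true) ** 2 + rng.normal(0, 0.005, r.size)

# Plain gradient descent on the two model parameters (k, r0),
# minimising the mean squared error against the reference energies.
k, r0, lr = 2.0, 1.0, 0.05
for step in range(5000):
    err = 0.5 * k * (r - r0) ** 2 - e_ref            # model minus reference
    grad_k = np.mean(err * (r - r0) ** 2)            # dL/dk
    grad_r0 = np.mean(-2.0 * err * k * (r - r0))     # dL/dr0
    k -= lr * grad_k
    r0 -= lr * grad_r0
print(f"Fitted k = {k:.3f}, r0 = {r0:.3f} (true: {k_true}, {r0_true})")
```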
18 pages, 3556 KiB  
Article
Multi-Sensor Fusion for Autonomous Mobile Robot Docking: Integrating LiDAR, YOLO-Based AprilTag Detection, and Depth-Aided Localization
by Yanyan Dai and Kidong Lee
Electronics 2025, 14(14), 2769; https://doi.org/10.3390/electronics14142769 - 10 Jul 2025
Viewed by 503
Abstract
Reliable and accurate docking remains a fundamental challenge for autonomous mobile robots (AMRs) operating in complex industrial environments with dynamic lighting, motion blur, and occlusion. This study proposes a novel multi-sensor fusion-based docking framework that significantly enhances robustness and precision by integrating YOLOv8-based AprilTag detection, depth-aided 3D localization, and LiDAR-based orientation correction. A key contribution of this work is the construction of a custom AprilTag dataset featuring real-world visual disturbances, enabling the YOLOv8 model to achieve high-accuracy detection and ID classification under challenging conditions. To ensure precise spatial localization, 2D visual tag coordinates are fused with depth data to compute 3D positions in the robot’s frame. A LiDAR group-symmetry mechanism estimates heading deviation, which is combined with visual feedback in a hybrid PID controller to correct angular errors. A finite-state machine governs the docking sequence, including detection, approach, yaw alignment, and final engagement. Simulation and experimental results demonstrate that the proposed system achieves higher docking success rates and improved pose accuracy under various challenging conditions compared to traditional vision- or LiDAR-only approaches. Full article
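The "hybrid PID plus finite-state machine" control scheme summarized above can be sketched in a few lines; the fusion weight, gains, state names, and the fake sensor readings below are illustrative assumptions rather than the published controller:

```python
from enum import Enum, auto

class DockState(Enum):
    DETECT = auto()
    APPROACH = auto()
    YAW_ALIGN = auto()
    ENGAGE = auto()

def hybrid_yaw_command(vision_err, lidar_err, integ, prev_err, dt,
                       w_vision=0.6, kp=1.2, ki=0.05, kd=0.3):
    """Blend vision- and LiDAR-derived heading errors, then apply PID."""
    err = w_vision * vision_err + (1.0 - w_vision) * lidar_err
    integ += err * dt
    deriv = (err - prev_err) / dt
    return kp * err + ki * integ + kd * deriv, integ, err

def docking_step(state, tag_visible, distance_m, yaw_err_rad):
    """Finite-state machine governing the docking sequence."""
    if state is DockState.DETECT and tag_visible:
        return DockState.APPROACH
    if state is DockState.APPROACH and distance_m < 0.8:
        return DockState.YAW_ALIGN
    if state is DockState.YAW_ALIGN and abs(yaw_err_rad) < 0.02:
        return DockState.ENGAGE
    return state

# Toy run-through of the sequence with fabricated measurements.
state, integ, prev = DockState.DETECT, 0.0, 0.0
for tag, dist, yaw_v, yaw_l in [(True, 2.0, 0.20, 0.25), (True, 0.6, 0.05, 0.04),
                                (True, 0.5, 0.01, 0.01)]:
    cmd, integ, prev = hybrid_yaw_command(yaw_v, yaw_l, integ, prev, dt=0.1)
    state = docking_step(state, tag, dist, prev)
    print(state.name, f"yaw_cmd={cmd:.3f} rad/s")
```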