Search Results (4,575)

Search Parameters:
Keywords = random neural networks

22 pages, 2120 KiB  
Article
Machine Learning Algorithms and Explainable Artificial Intelligence for Property Valuation
by Gabriella Maselli and Antonio Nesticò
Real Estate 2025, 2(3), 12; https://doi.org/10.3390/realestate2030012 (registering DOI) - 1 Aug 2025
Abstract
The accurate estimation of urban property values is a key challenge for appraisers, market participants, financial institutions, and urban planners. In recent years, machine learning (ML) techniques have emerged as promising tools for price forecasting due to their ability to model complex relationships among variables. However, their application raises two main critical issues: (i) the risk of overfitting, especially with small datasets or with noisy data; (ii) the interpretive issues associated with the “black box” nature of many models. Within this framework, this paper proposes a methodological approach that addresses both these issues, comparing the predictive performance of three ML algorithms—k-Nearest Neighbors (kNN), Random Forest (RF), and the Artificial Neural Network (ANN)—applied to the housing market in the city of Salerno, Italy. For each model, overfitting is preliminarily assessed to ensure predictive robustness. Subsequently, the results are interpreted using explainability techniques, such as SHapley Additive exPlanations (SHAPs) and Permutation Feature Importance (PFI). This analysis reveals that the Random Forest offers the best balance between predictive accuracy and transparency, with features such as area and proximity to the train station identified as the main drivers of property prices. kNN and the ANN are viable alternatives that are particularly robust in terms of generalization. The results demonstrate how the defined methodological framework successfully balances predictive effectiveness and interpretability, supporting the informed and transparent use of ML in real estate valuation. Full article
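Permutation Feature Importance (PFI), one of the two explainability techniques named in this abstract, can be sketched with scikit-learn. The feature names and synthetic prices below are invented stand-ins; the study's Salerno dataset is not reproduced here.

```python
# PFI sketch: shuffle each feature in turn and measure the drop in model score.
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.inspection import permutation_importance

rng = np.random.default_rng(0)
n = 300
area = rng.uniform(40, 200, n)                  # dwelling area, m^2 (invented)
dist_station = rng.uniform(0.1, 5.0, n)         # km to train station (invented)
price = 2.5 * area - 15 * dist_station + rng.normal(0, 10, n)  # synthetic signal

X = np.column_stack([area, dist_station])
model = RandomForestRegressor(n_estimators=200, random_state=0).fit(X, price)

# Mean decrease in R^2 over repeated shuffles of each column.
result = permutation_importance(model, X, price, n_repeats=10, random_state=0)
for name, imp in zip(["area", "dist_station"], result.importances_mean):
    print(f"{name}: {imp:.3f}")
```

With this synthetic signal, area dominates the importance ranking, mirroring the abstract's finding that area and station proximity drive prices.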

28 pages, 10147 KiB  
Article
Construction of Analogy Indicator System and Machine-Learning-Based Optimization of Analogy Methods for Oilfield Development Projects
by Muzhen Zhang, Zhanxiang Lei, Chengyun Yan, Baoquan Zeng, Fei Huang, Tailai Qu, Bin Wang and Li Fu
Energies 2025, 18(15), 4076; https://doi.org/10.3390/en18154076 (registering DOI) - 1 Aug 2025
Abstract
Oil and gas development is characterized by high technical complexity, strong interdisciplinarity, long investment cycles, and significant uncertainty. To meet the need for quick evaluation of overseas oilfield projects with limited data and experience, this study develops an analogy indicator system and tests multiple machine-learning algorithms on two analogy tasks to identify the optimal method. Using an initial set of basic indicators and a database of 1436 oilfield samples, a combined subjective–objective weighting strategy that integrates statistical methods with expert judgment is used to select, classify, and assign weights to the indicators. This process results in 26 key indicators for practical analogy analysis. Single-indicator and whole-asset analogy experiments are then performed with five standard machine-learning algorithms—support vector machine (SVM), random forest (RF), backpropagation neural network (BP), k-nearest neighbor (KNN), and decision tree (DT). Results show that SVM achieves classification accuracies of 86% and 95% on the single-indicator and whole-asset analogy tasks in medium-high permeability sandstone oilfields, respectively, greatly surpassing the other methods. These results demonstrate the effectiveness of the proposed indicator system and methodology, providing efficient and objective technical support for evaluating and making decisions on overseas oilfield development projects. Full article
(This article belongs to the Section H1: Petroleum Engineering)
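The SVM analogy classification compared above can be illustrated with a minimal scikit-learn sketch. The 26-feature toy dataset is an invented stand-in for the study's oilfield indicator database, which is not available here; only the fit/predict/accuracy workflow is shown.

```python
# Toy stand-in: 26 features echo the study's 26 key analogy indicators.
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC
from sklearn.metrics import accuracy_score

X, y = make_classification(n_samples=400, n_features=26, n_informative=10,
                           random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)

clf = SVC(kernel="rbf", C=1.0).fit(X_tr, y_tr)   # RBF-kernel SVM classifier
acc = accuracy_score(y_te, clf.predict(X_te))
print(f"test accuracy: {acc:.2f}")
```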

48 pages, 2506 KiB  
Article
Enhancing Ship Propulsion Efficiency Predictions with Integrated Physics and Machine Learning
by Hamid Reza Soltani Motlagh, Seyed Behbood Issa-Zadeh, Md Redzuan Zoolfakar and Claudia Lizette Garay-Rondero
J. Mar. Sci. Eng. 2025, 13(8), 1487; https://doi.org/10.3390/jmse13081487 - 31 Jul 2025
Abstract
This research develops a dual physics-based machine learning system to forecast fuel consumption and CO2 emissions for a 100 m oil tanker across six operational scenarios: Original, Paint, Advanced Propeller, Fin, Bulbous Bow, and Combined. The combination of hydrodynamic calculations with Monte Carlo simulations provides a solid foundation for training machine learning models, particularly in cases where dataset restrictions are present. The XGBoost model demonstrated superior performance compared to Support Vector Regression, Gaussian Process Regression, Random Forest, and Shallow Neural Network models, achieving near-zero prediction errors that closely matched physics-based calculations. The physics-based analysis demonstrated that the Combined scenario, which combines hull coatings with bulbous bow modifications, produced the largest fuel consumption reduction (5.37% at 15 knots), followed by the Advanced Propeller scenario. The results demonstrate that user inputs (e.g., engine power: 870 kW, speed: 12.7 knots) match the Advanced Propeller scenario, followed by Paint, which indicates that advanced propellers or hull coatings would optimize efficiency. The obtained insights help ship operators modify their operational parameters and designers select essential modifications for sustainable operations. The model maintains its strength at low speeds, where fuel consumption is minimal, making it applicable to other oil tankers. The hybrid approach provides a new tool for maritime efficiency analysis, yielding interpretable results that support International Maritime Organization objectives, despite starting with a limited dataset. The model requires additional research to enhance its predictive accuracy using larger datasets and real-time data collection, which will aid in achieving global environmental stewardship. Full article
(This article belongs to the Special Issue Machine Learning for Prediction of Ship Motion)
27 pages, 1628 KiB  
Article
Reliability Evaluation and Optimization of System with Fractional-Order Damping and Negative Stiffness Device
by Mingzhi Lin, Wei Li, Dongmei Huang and Natasa Trisovic
Fractal Fract. 2025, 9(8), 504; https://doi.org/10.3390/fractalfract9080504 (registering DOI) - 31 Jul 2025
Abstract
Research on reliability control for enhancing power systems under random loads holds significant and undeniable importance in maintaining system stability, performance, and safety. The primary challenge lies in determining the reliability index while optimizing system parameters. To effectively address this challenge, we developed a novel intelligent algorithm and conducted an optimal reliability assessment for a Negative Stiffness Device (NSD) seismic isolation structure incorporating fractional-order damping. This algorithm combines the Gaussian Radial Basis Function Neural Network (GRBFNN) with the Particle Swarm Optimization (PSO) algorithm. It takes the reliability function with unknown parameters as the objective function, while using the Backward Kolmogorov (BK) equation, which governs the reliability function and is accompanied by boundary and initial conditions, as the constraint condition. During the operation of this algorithm, the neural network is employed to solve the BK equation, thereby deriving the fitness function in each iteration of the PSO algorithm. Then the PSO algorithm is utilized to obtain the optimal parameters. The unique advantage of this algorithm is its ability to simultaneously achieve the optimization of implicit objectives and the solution of time-dependent BK equations. To evaluate the performance of the proposed algorithm, this study compared it with an algorithm that combines the GRBFNN with a Genetic Algorithm (GA-GRBFNN) across multiple dimensions, including performance and operational efficiency. The effectiveness of the proposed algorithm has been validated through numerical comparisons and Monte Carlo simulations. The control strategy presented in this paper provides a solid theoretical foundation for improving the reliability performance of mechanical engineering systems and demonstrates significant potential for practical applications. Full article

28 pages, 8732 KiB  
Article
Acceleration Command Tracking via Hierarchical Neural Predictive Control for the Effectiveness of Unknown Control
by Zhengpeng Yang, Chao Ming, Huaiyan Wang and Tongxing Peng
Aerospace 2025, 12(8), 689; https://doi.org/10.3390/aerospace12080689 (registering DOI) - 31 Jul 2025
Abstract
This paper presents a flight control framework based on neural network Model Predictive Control (NN-MPC) to tackle the challenges of acceleration command tracking for supersonic vehicles (SVs) in complex flight environments, addressing the shortcomings of traditional methods in managing nonlinearity, random disturbances, and real-time performance requirements. Initially, a dynamic model is developed through a comprehensive analysis of the vehicle’s dynamic characteristics, incorporating strong cross-coupling effects and disturbance influences. Subsequently, a predictive mechanism is employed to forecast future states and generate virtual control commands, effectively resolving the issue of sluggish responses under rapidly changing commands. Furthermore, the approximation capability of neural networks is leveraged to optimize the control strategy in real time, ensuring that rudder deflection commands adapt to disturbance variations, thus overcoming the robustness limitations inherent in fixed-parameter control approaches. Within the proposed framework, the ultimate uniform bounded stability of the control system is rigorously established using the Lyapunov method. Simulation results demonstrate that the method exhibits exceptional performance under conditions of system state uncertainty and unknown external disturbances, confirming its effectiveness and reliability. Full article
(This article belongs to the Section Aeronautics)
18 pages, 4863 KiB  
Article
Evaluation of Explainable, Interpretable and Non-Interpretable Algorithms for Cyber Threat Detection
by José Ramón Trillo, Felipe González-López, Juan Antonio Morente-Molinera, Roberto Magán-Carrión and Pablo García-Sánchez
Electronics 2025, 14(15), 3073; https://doi.org/10.3390/electronics14153073 (registering DOI) - 31 Jul 2025
Abstract
As anonymity-enabling technologies such as VPNs and proxies become increasingly exploited for malicious purposes, detecting traffic associated with such services emerges as a critical first step in anticipating potential cyber threats. This study analyses a network traffic dataset focused on anonymised IP addresses—not direct attacks—to evaluate and compare explainable, interpretable, and opaque machine learning models. Through advanced preprocessing and feature engineering, we examine the trade-off between model performance and transparency in the early detection of suspicious connections. We evaluate explainable ML-based models such as k-nearest neighbours, fuzzy algorithms, decision trees, and random forests, alongside interpretable models like naïve Bayes and support vector machines, and non-interpretable algorithms such as neural networks. Results show that neural networks achieve the highest performance, with a macro F1-score of 0.8786, but explainable models like HFER offer strong performance (macro F1-score = 0.6106) with greater interpretability. The choice of algorithm depends on project-specific needs: neural networks excel in accuracy, while explainable algorithms are preferred for resource efficiency and transparency. This work underscores the importance of aligning cybersecurity strategies with operational requirements, providing insights into balancing performance with interpretability. Full article
(This article belongs to the Special Issue Network Security and Cryptography Applications)
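The macro F1-scores quoted above average the per-class F1 values without weighting by class frequency. A small sketch with invented labels and predictions:

```python
# Macro F1 = unweighted mean of per-class F1 scores.
from sklearn.metrics import f1_score

y_true = [0, 0, 0, 0, 1, 1, 1, 2, 2, 2]   # invented 3-class labels
y_pred = [0, 0, 1, 0, 1, 1, 0, 2, 2, 1]   # invented predictions
macro = f1_score(y_true, y_pred, average="macro")
# Per-class F1: 0.75 (class 0), 0.5714 (class 1), 0.80 (class 2)
print(f"macro F1 = {macro:.4f}")
```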

16 pages, 1194 KiB  
Systematic Review
Artificial Intelligence in the Diagnosis of Tongue Cancer: A Systematic Review with Meta-Analysis
by Seorin Jeong, Hae-In Choi, Keon-Il Yang, Jin Soo Kim, Ji-Won Ryu and Hyun-Jeong Park
Biomedicines 2025, 13(8), 1849; https://doi.org/10.3390/biomedicines13081849 - 30 Jul 2025
Abstract
Background: Tongue squamous cell carcinoma (TSCC) is an aggressive oral malignancy characterized by early submucosal invasion and a high risk of cervical lymph node metastasis. Accurate and timely diagnosis is essential, but it remains challenging when relying solely on conventional imaging and histopathology. This systematic review aimed to evaluate studies applying artificial intelligence (AI) in the diagnostic imaging of TSCC. Methods: This review was conducted under PRISMA 2020 guidelines and included studies from January 2020 to December 2024 that utilized AI in TSCC imaging. A total of 13 studies were included, employing AI models such as Convolutional Neural Networks (CNNs), Support Vector Machines (SVMs), and Random Forest (RF). Imaging modalities analyzed included MRI, CT, PET, ultrasound, histopathological whole-slide images (WSI), and endoscopic photographs. Results: Diagnostic performance was generally high, with area under the curve (AUC) values ranging from 0.717 to 0.991, sensitivity from 63.3% to 100%, and specificity from 70.0% to 96.7%. Several models demonstrated superior performance compared to expert clinicians, particularly in delineating tumor margins and estimating the depth of invasion (DOI). However, only one study conducted external validation, and most exhibited moderate risk of bias in patient selection or index test interpretation. Conclusions: AI-based diagnostic tools hold strong potential for enhancing TSCC detection, but future research must address external validation, standardization, and clinical integration to ensure their reliable and widespread adoption. Full article
(This article belongs to the Special Issue Recent Advances in Oral Medicine—2nd Edition)
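Sensitivity and specificity, the diagnostic metrics summarized above, derive directly from the binary confusion matrix. The counts below are invented for illustration, not taken from the reviewed studies.

```python
# Sensitivity = TP/(TP+FN); specificity = TN/(TN+FP).
from sklearn.metrics import confusion_matrix

y_true = [1] * 30 + [0] * 30                      # 30 cancer, 30 benign (invented)
y_pred = [1] * 27 + [0] * 3 + [0] * 28 + [1] * 2  # 3 missed, 2 false alarms
tn, fp, fn, tp = confusion_matrix(y_true, y_pred).ravel()

sensitivity = tp / (tp + fn)   # 27/30 = 0.90
specificity = tn / (tn + fp)   # 28/30 ≈ 0.933
print(f"sensitivity {sensitivity:.3f}, specificity {specificity:.3f}")
```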

24 pages, 1686 KiB  
Review
Data-Driven Predictive Modeling for Investigating the Impact of Gear Manufacturing Parameters on Noise Levels in Electric Vehicle Drivetrains
by Krisztián Horváth
World Electr. Veh. J. 2025, 16(8), 426; https://doi.org/10.3390/wevj16080426 - 30 Jul 2025
Abstract
Reducing gear noise in electric vehicle (EV) drivetrains is crucial due to the absence of internal combustion engine noise, making even minor acoustic disturbances noticeable. Manufacturing parameters significantly influence gear-generated noise, yet traditional analytical methods often fail to predict these complex relationships accurately. This research addresses this gap by introducing a data-driven approach using machine learning (ML) to predict gear noise levels from manufacturing and sensor-derived data. The presented methodology encompasses systematic data collection from various production stages—including soft and hard machining, heat treatment, honing, rolling tests, and end-of-line (EOL) acoustic measurements. Predictive models employing Random Forest, Gradient Boosting (XGBoost), and Neural Network algorithms were developed and compared to traditional statistical approaches. The analysis identified critical manufacturing parameters, such as surface waviness, profile errors, and tooth geometry deviations, significantly influencing noise generation. Advanced ML models, specifically Random Forest, XGBoost, and deep neural networks, demonstrated superior prediction accuracy, providing early-stage identification of gear units likely to exceed acceptable noise thresholds. Integrating these data-driven models into manufacturing processes enables early detection of potential noise issues, reduces quality assurance costs, and supports sustainable manufacturing by minimizing prototype production and resource consumption. This research enhances the understanding of gear noise formation and offers practical solutions for real-time quality assurance. Full article

22 pages, 4093 KiB  
Article
A Deep Learning-Driven Black-Box Benchmark Generation Method via Exploratory Landscape Analysis
by Haoming Liang, Fuqing Zhao, Tianpeng Xu and Jianlin Zhang
Appl. Sci. 2025, 15(15), 8454; https://doi.org/10.3390/app15158454 - 30 Jul 2025
Abstract
In the context of algorithm selection, the careful design of benchmark functions and problem instances plays a pivotal role in evaluating the performance of optimization methods. Traditional benchmark functions have been criticized for their limited resemblance to real-world problems and insufficient coverage of the problem space. Exploratory landscape analysis (ELA) offers a systematic framework for characterizing objective functions, based on quantitative landscape features. This study proposes a method for generating benchmark functions tailored to single-objective continuous optimization problems with boundary constraints using predefined ELA feature vectors to guide their construction. The process begins with the creation of random decision variables and corresponding objective values, which are iteratively adjusted using the covariance matrix adaptation evolution strategy (CMA-ES) to ensure alignment with a target ELA feature vector within a specified tolerance. Once the feature criteria are met, the resulting topological map point is used to train a neural network to produce a surrogate function that retains the desired landscape characteristics. To validate the proposed approach, functions from the well-known Black Box Optimization Benchmark (BBOB) suite are replicated, and novel functions are generated with unique ELA feature combinations not found in the original suite. The experiment results demonstrate that the synthesized landscapes closely resemble their BBOB counterparts and preserve the consistency of the algorithm rankings, thereby supporting the effectiveness of the proposed approach. Full article

20 pages, 732 KiB  
Review
AI Methods Tailored to Influenza, RSV, HIV, and SARS-CoV-2: A Focused Review
by Achilleas Livieratos, George C. Kagadis, Charalambos Gogos and Karolina Akinosoglou
Pathogens 2025, 14(8), 748; https://doi.org/10.3390/pathogens14080748 - 30 Jul 2025
Abstract
Artificial intelligence (AI) techniques—ranging from hybrid mechanistic–machine learning (ML) ensembles to gradient-boosted decision trees, support-vector machines, and deep neural networks—are transforming the management of seasonal influenza, respiratory syncytial virus (RSV), human immunodeficiency virus (HIV), and severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2). Symptom-based triage models using eXtreme Gradient Boosting (XGBoost) and Random Forests, as well as imaging classifiers built on convolutional neural networks (CNNs), have improved diagnostic accuracy across respiratory infections. Transformer-based architectures and social media surveillance pipelines have enabled real-time monitoring of COVID-19. In HIV research, support-vector machines (SVMs), logistic regression, and deep neural network (DNN) frameworks advance viral-protein classification and drug-resistance mapping, accelerating antiviral and vaccine discovery. Despite these successes, persistent challenges remain—data heterogeneity, limited model interpretability, hallucinations in large language models (LLMs), and infrastructure gaps in low-resource settings. We recommend standardized open-access data pipelines and integration of explainable-AI methodologies to ensure safe, equitable deployment of AI-driven interventions in future viral-outbreak responses. Full article
(This article belongs to the Section Viral Pathogens)

22 pages, 1724 KiB  
Article
Development and Clinical Interpretation of an Explainable AI Model for Predicting Patient Pathways in the Emergency Department: A Retrospective Study
by Émilien Arnaud, Pedro Antonio Moreno-Sanchez, Mahmoud Elbattah, Christine Ammirati, Mark van Gils, Gilles Dequen and Daniel Aiham Ghazali
Appl. Sci. 2025, 15(15), 8449; https://doi.org/10.3390/app15158449 - 30 Jul 2025
Abstract
Background: Overcrowded emergency departments (EDs) create significant challenges for patient management and hospital efficiency. In response, Amiens Picardy University Hospital (APUH) developed the “Prediction of the Patient Pathway in the Emergency Department” (3P-U) model to enhance patient flow management. Objectives: To develop and clinically validate an explainable artificial intelligence (XAI) model for hospital admission predictions, using structured triage data, and demonstrate its real-world applicability in the ED setting. Methods: Our retrospective, single-center study involved 351,019 patients consulting in APUH’s EDs between 2015 and 2018. Various models (including a cross-validation artificial neural network (ANN), a k-nearest neighbors (KNN) model, a logistic regression (LR) model, and a random forest (RF) model) were trained and assessed for performance with regard to the area under the receiver operating characteristic curve (AUROC). The best model was validated internally with a test set, and the F1 score was used to determine the best threshold for recall, precision, and accuracy. XAI techniques, such as Shapley additive explanations (SHAP) and partial dependence plots (PDP) were employed, and the clinical explanations were evaluated by emergency physicians. Results: The ANN gave the best performance during the training stage, with an AUROC of 83.1% (SD: 0.2%) for the test set; it surpassed the RF (AUROC: 71.6%, SD: 0.1%), KNN (AUROC: 67.2%, SD: 0.2%), and LR (AUROC: 71.5%, SD: 0.2%) models. In an internal validation, the ANN’s AUROC was 83.2%. The best F1 score (0.67) determined that 0.35 was the optimal threshold; the corresponding recall, precision, and accuracy were 75.7%, 59.7%, and 75.3%, respectively. 
The SHAP and PDP XAI techniques (as assessed by emergency physicians) highlighted patient age, heart rate, and presentation with multiple injuries as the features that most specifically influenced the admission from the ED to a hospital ward. These insights are being used in bed allocation and patient prioritization, directly improving ED operations. Conclusions: The 3P-U model demonstrates practical utility by reducing ED crowding and enhancing decision-making processes at APUH. Its transparency and physician validation foster trust, facilitating its adoption in clinical practice and offering a replicable framework for other hospitals to optimize patient flow. Full article
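The threshold-selection step described above (choosing 0.35 by maximizing F1) can be sketched as a sweep over candidate cut-offs. The probabilities below are synthetic, not the 3P-U model's outputs.

```python
# Sweep decision thresholds and keep the one that maximizes F1.
import numpy as np
from sklearn.metrics import f1_score

rng = np.random.default_rng(1)
y = rng.integers(0, 2, 500)                           # invented admission labels
p = np.clip(0.3 * y + rng.uniform(0, 0.7, 500), 0, 1) # invented, mildly informative scores

thresholds = np.linspace(0.05, 0.95, 19)
f1s = [f1_score(y, (p >= t).astype(int)) for t in thresholds]
best_t = thresholds[int(np.argmax(f1s))]
print(f"best threshold: {best_t:.2f}, F1 = {max(f1s):.3f}")
```

The chosen threshold then fixes the recall/precision/accuracy trade-off reported in the abstract.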

17 pages, 539 KiB  
Article
Non-Fragile H∞ Asynchronous State Estimation for Delayed Markovian Jumping NNs with Stochastic Disturbance
by Lan Wang, Juping Tang, Qiang Li, Xianwei Yang and Haiyang Zhang
Mathematics 2025, 13(15), 2452; https://doi.org/10.3390/math13152452 - 30 Jul 2025
Abstract
This article focuses on tackling the non-fragile H∞ asynchronous estimation problem for delayed Markovian jumping neural networks (NNs) featuring stochastic disturbance. To more accurately reflect real-world scenarios, external random disturbances with known statistical characteristics are incorporated. Through the integration of stochastic analysis theory and Lyapunov stability techniques, as well as several matrix constraint formulas, sufficient and effective criteria are established. These criteria ensure that the considered NNs achieve the anticipated H∞ stability in line with an external disturbance mitigation level. Meanwhile, the expected estimator gains will be explicitly constructed by dealing with corresponding matrix constraints. To conclude, a numerical simulation example is offered to showcase the workability and validity of the formulated estimation method. Full article
(This article belongs to the Special Issue Advanced Filtering and Control Methods for Stochastic Systems)

23 pages, 3478 KiB  
Article
Research on Fatigue Life Prediction Method of Spot-Welded Joints Based on Machine Learning
by Shanshan Li, Zhenfei Zhan, Jie Zou and Zihan Wang
Materials 2025, 18(15), 3542; https://doi.org/10.3390/ma18153542 - 29 Jul 2025
Abstract
Spot-welding joints are widely used in modern industries, and their fatigue life is crucial for the safety and reliability of structures. This paper proposes a method for predicting the fatigue life of spot-welding joints by integrating traditional structural stress methods and machine learning algorithms. Systematic fatigue tests were conducted on Q&P980 steel spot-welding joints to investigate the influence of the galvanized layer on fatigue life. It was found that the galvanized layer significantly reduces the fatigue life of spot-welding joints. Further predictions of fatigue life using machine learning algorithms, including Random Forest, Artificial Neural Networks, and Gaussian Process Regression, demonstrated superior prediction accuracy and generalization ability compared to traditional structural stress methods. The Random Forest algorithm achieved an R2 value of 0.93, with lower error than traditional methods. This study provides an effective tool for the fatigue life assessment of spot-welding joints and highlights the potential application of machine learning in this field. Full article
(This article belongs to the Section Manufacturing Processes and Systems)
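The R2 value of 0.93 quoted above is the coefficient of determination, the fraction of variance in the measured fatigue lives explained by the predictions. A minimal sketch of how it is computed, using invented life values rather than the paper's test data:

```python
# R^2 = 1 - SS_res / SS_tot on measured vs. predicted values.
from sklearn.metrics import r2_score

lives_true = [5.1, 5.6, 6.0, 6.4, 7.0]   # invented log10(cycles to failure)
lives_pred = [5.0, 5.7, 6.1, 6.3, 6.9]   # invented model predictions
r2 = r2_score(lives_true, lives_pred)
print(round(r2, 3))
```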

18 pages, 1498 KiB  
Article
A Proactive Predictive Model for Machine Failure Forecasting
by Olusola O. Ajayi, Anish M. Kurien, Karim Djouani and Lamine Dieng
Machines 2025, 13(8), 663; https://doi.org/10.3390/machines13080663 - 29 Jul 2025
Abstract
Unexpected machine failures in industrial environments lead to high maintenance costs, unplanned downtime, and safety risks. This study proposes a proactive predictive model using a hybrid of eXtreme Gradient Boosting (XGBoost) and Neural Networks (NN) to forecast machine failures. A synthetic dataset capturing recent breakdown history and time since last failure was used to simulate industrial scenarios. To address class imbalance, SMOTE and class weighting were applied, alongside a focal loss function to emphasize difficult-to-classify failures. The XGBoost model was tuned via GridSearchCV, while the NN model utilized ReLU-activated hidden layers with dropout. Evaluation using stratified 5-fold cross-validation showed that the NN achieved an F1-score of 0.7199 and a recall of 0.9545 for the minority class. XGBoost attained a higher PR AUC of 0.7126 and a more balanced precision–recall trade-off. Sample predictions demonstrated strong recall (100%) for failures, but also a high false positive rate, with most prediction probabilities clustered between 0.50 and 0.55. Additional benchmarking against Logistic Regression, Random Forest, and SVM further confirmed the superiority of the proposed hybrid model. Model interpretability was enhanced using SHAP and LIME, confirming that recent breakdowns and time since last failure were key predictors. While the model effectively detects failures, further improvements in feature engineering and threshold tuning are recommended to reduce false alarms and boost decision confidence. Full article
(This article belongs to the Section Machines Testing and Maintenance)
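The focal loss strategy described in the abstract can be sketched with a minimal NumPy implementation of the binary focal loss; the γ and α values below are common defaults chosen for illustration, not the paper's tuned settings:

```python
import numpy as np

def focal_loss(y_true, y_pred, gamma=2.0, alpha=0.25, eps=1e-7):
    """Binary focal loss: scales cross-entropy by (1 - p_t)^gamma so
    well-classified (easy) examples contribute less, emphasizing the
    rare, hard-to-classify failure cases."""
    p = np.clip(np.asarray(y_pred, dtype=float), eps, 1.0 - eps)
    y = np.asarray(y_true, dtype=float)
    p_t = np.where(y == 1, p, 1.0 - p)              # probability of the true class
    alpha_t = np.where(y == 1, alpha, 1.0 - alpha)  # class weighting term
    return float(np.mean(-alpha_t * (1.0 - p_t) ** gamma * np.log(p_t)))

# A confidently correct prediction on a failure label is penalized far
# less than a confidently wrong one, steering training toward hard cases.
easy = focal_loss([1], [0.95])
hard = focal_loss([1], [0.05])
```

With gamma set to 0 the modulating factor vanishes and the loss reduces to an alpha-weighted cross-entropy, which is one way to sanity-check such an implementation.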
26 pages, 11912 KiB  
Article
Multi-Dimensional Estimation of Leaf Loss Rate from Larch Caterpillar Under Insect Pest Stress Using UAV-Based Multi-Source Remote Sensing
by He-Ya Sa, Xiaojun Huang, Li Ling, Debao Zhou, Junsheng Zhang, Gang Bao, Siqin Tong, Yuhai Bao, Dashzebeg Ganbat, Mungunkhuyag Ariunaa, Dorjsuren Altanchimeg and Davaadorj Enkhnasan
Drones 2025, 9(8), 529; https://doi.org/10.3390/drones9080529 - 28 Jul 2025
Abstract
Leaf loss caused by pest infestations poses a serious threat to forest health. The leaf loss rate (LLR) is the percentage of overall tree-crown leaf loss per unit area and is an important indicator for evaluating forest health, so its rapid and accurate acquisition via remote sensing is crucial. This study used UAV hyperspectral and LiDAR data together with ground survey data to calculate hyperspectral indices (HSI), multispectral indices (MSI), and LiDAR indices (LI). Savitzky–Golay (S–G) smoothing with different window sizes (W) and polynomial orders (P) was combined with recursive feature elimination (RFE) to select sensitive features. Random Forest Regression (RFR) and Convolutional Neural Network Regression (CNNR) were then used to construct multi-dimensional (horizontal and vertical) LLR estimation models, which, combined with the LiDAR point cloud data, enabled three-dimensional visualization of tree-level LLR. The results showed the following: (1) The optimal S–G combination was W11P3 for HSI and MSI and W5P2 for LI. (2) The RFE algorithm selected 13 HSI features, 16 MSI features, and hierarchical LI features (2 in layer I, 9 in layer II, and 11 in layer III). (3) For horizontal estimation of the defoliation rate, the CNNR-HSI model (model performance index, MPI = 0.9383) significantly outperformed RFR-MSI (MPI = 0.8817), indicating that the continuous hyperspectral bands better capture subtle changes in LLR. (4) Vertical estimation models (I-CNNR-HSI+LI, II-CNNR-HSI+LI, and III-CNNR-HSI+LI) were constructed by combining the most accurate CNNR-HSI model with the LI features sensitive to each vertical layer; all reached MPIs above 0.8, indicating high estimation accuracy at every vertical level.
Based on these models, the pixel-level LLR of the sample trees was estimated, and a three-dimensional display of LLR for forest trees under larch caterpillar stress was generated, providing a high-precision research scheme for LLR estimation under pest stress. Full article
(This article belongs to the Section Drones in Agriculture and Forestry)
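The spectral preprocessing step above, Savitzky–Golay smoothing over a grid of window sizes (W) and polynomial orders (P), can be sketched with SciPy. The synthetic spectrum, noise level, and selection criterion below are assumptions for illustration; the study selects combinations such as W11P3 and W5P2 against real UAV data:

```python
import numpy as np
from scipy.signal import savgol_filter

rng = np.random.default_rng(42)
wavelengths = np.linspace(400, 1000, 201)              # nm, synthetic band grid
clean = np.exp(-((wavelengths - 700.0) / 120.0) ** 2)  # idealized reflectance peak
noisy = clean + rng.normal(0.0, 0.02, clean.shape)     # assumed sensor noise

# Try candidate (window, polyorder) pairs, e.g. W5P2 and W11P3 as in the
# study, and keep the one with the smallest error against the clean signal.
results = {}
for w, p in [(5, 2), (11, 3)]:
    smoothed = savgol_filter(noisy, window_length=w, polyorder=p)
    results[(w, p)] = np.sqrt(np.mean((smoothed - clean) ** 2))

best = min(results, key=results.get)
```

In practice the clean signal is unavailable for real spectra, so the W/P choice would instead be scored by downstream model performance after RFE feature selection rather than by reconstruction error.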