
Search Results (277)

Search Parameters:
Keywords = decision forest layer

22 pages, 8689 KiB  
Article
Transfer Learning-Based Accurate Detection of Shrub Crown Boundaries Using UAS Imagery
by Jiawei Li, Huihui Zhang and David Barnard
Remote Sens. 2025, 17(13), 2275; https://doi.org/10.3390/rs17132275 - 3 Jul 2025
Viewed by 294
Abstract
The accurate delineation of shrub crown boundaries is critical for ecological monitoring, land management, and understanding vegetation dynamics in fragile ecosystems such as semi-arid shrublands. While traditional image processing techniques often struggle with overlapping canopies, deep learning methods, such as convolutional neural networks (CNNs), offer promising solutions for precise segmentation. This study employed high-resolution imagery captured by unmanned aircraft systems (UASs) throughout the shrub growing season and explored the effectiveness of transfer learning for both semantic segmentation (Attention U-Net) and instance segmentation (Mask R-CNN). It utilized pre-trained model weights from two previous studies that originally focused on tree crown delineation to improve shrub crown segmentation in non-forested areas. Results showed that transfer learning alone did not achieve satisfactory performance due to differences in object characteristics and environmental conditions. However, fine-tuning the pre-trained models by unfreezing additional layers improved segmentation accuracy by around 30%. Fine-tuned pre-trained models show limited sensitivity to shrubs in the early growing season (April to June) and improved performance when shrub crowns become more spectrally unique in late summer (July to September). These findings highlight the value of combining pre-trained models with targeted fine-tuning to enhance model adaptability in complex remote sensing environments. The proposed framework demonstrates a scalable solution for ecological monitoring in data-scarce regions, supporting informed land management decisions and advancing the use of deep learning for long-term environmental monitoring. Full article
(This article belongs to the Section Remote Sensing in Geology, Geomorphology and Hydrology)
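The fine-tuning step described above (unfreezing additional layers of a pre-trained crown-delineation network before retraining on shrub imagery) follows a common transfer-learning pattern. A minimal Keras sketch of that idea, assuming a hypothetical saved model `pretrained_unet.h5` and a prepared dataset `train_ds`; it is not the authors' exact implementation:

```python
import tensorflow as tf

# Load a previously trained crown-delineation model (hypothetical file name).
model = tf.keras.models.load_model("pretrained_unet.h5", compile=False)

# Freeze everything, then unfreeze only the last few layers for fine-tuning.
for layer in model.layers:
    layer.trainable = False
for layer in model.layers[-8:]:   # how many layers to unfreeze is a tuning choice
    layer.trainable = True

model.compile(optimizer=tf.keras.optimizers.Adam(1e-4),
              loss="binary_crossentropy",
              metrics=["accuracy"])

# train_ds: tf.data.Dataset of (UAS image tile, shrub-crown mask) pairs.
# model.fit(train_ds, epochs=20)
```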

19 pages, 1433 KiB  
Article
Cost-Optimised Machine Learning Model Comparison for Predictive Maintenance
by Yating Yang and Muhammad Zahid Iqbal
Electronics 2025, 14(12), 2497; https://doi.org/10.3390/electronics14122497 - 19 Jun 2025
Viewed by 485
Abstract
Predictive maintenance is essential for reducing industrial downtime and costs, yet real-world datasets frequently encounter class imbalance and require cost-sensitive evaluation due to costly misclassification errors. This study utilises the SCANIA Component X dataset to advance predictive maintenance through machine learning, employing seven supervised algorithms (Support Vector Machine, Random Forest, Decision Tree, K-Nearest Neighbours, Multi-Layer Perceptron, XGBoost, and LightGBM) trained on time-series features extracted via a sliding-window approach. A bespoke cost-sensitive metric, aligned with SCANIA’s misclassification cost matrix, assesses model performance. Three imbalance mitigation strategies (downsampling, downsampling with SMOTETomek, and manual class weighting) were explored, with downsampling proving most effective. Random Forest and Support Vector Machine models achieved high accuracy and low misclassification costs, whilst a voting ensemble further enhanced cost efficiency. This research emphasises the critical role of cost-aware evaluation and imbalance handling, proposing an ensemble-based framework to improve predictive maintenance in industrial applications. Full article
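The combination of a cost-matrix-based metric, downsampling, and a standard classifier maps onto scikit-learn and imbalanced-learn components. A sketch under an assumed misclassification cost matrix (the actual SCANIA costs are not reproduced here):

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import confusion_matrix, make_scorer
from sklearn.model_selection import cross_val_score
from imblearn.under_sampling import RandomUnderSampler

# Assumed cost matrix: rows = true class, columns = predicted class.
# A missed failure (false negative) is taken to be far more expensive than a false alarm.
COSTS = np.array([[0, 1],
                  [50, 0]])

def total_cost(y_true, y_pred):
    cm = confusion_matrix(y_true, y_pred, labels=[0, 1])
    return float((cm * COSTS).sum())

# Lower total cost is better, so flag the scorer accordingly.
cost_scorer = make_scorer(total_cost, greater_is_better=False)

# X, y: sliding-window features and labels (placeholders).
# X_res, y_res = RandomUnderSampler(random_state=0).fit_resample(X, y)
# scores = cross_val_score(RandomForestClassifier(), X_res, y_res,
#                          scoring=cost_scorer, cv=5)
```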

24 pages, 4557 KiB  
Article
Advanced Multi-Level Ensemble Learning Approaches for Comprehensive Sperm Morphology Assessment
by Abdulsamet Aktas, Taha Cap, Gorkem Serbes, Hamza Osman Ilhan and Hakkı Uzun
Diagnostics 2025, 15(12), 1564; https://doi.org/10.3390/diagnostics15121564 - 19 Jun 2025
Viewed by 438
Abstract
Introduction: Fertility is fundamental to human well-being, significantly impacting both individual lives and societal development. In particular, sperm morphology—referring to the shape, size, and structural integrity of sperm cells—is a key indicator in diagnosing male infertility and selecting viable sperm in assisted reproductive technologies such as in vitro fertilisation (IVF) and intracytoplasmic sperm injection (ICSI). However, traditional manual evaluation methods are highly subjective and inconsistent, creating a need for standardized, automated systems. Objectives: This study aims to develop a robust and fully automated sperm morphology classification framework capable of accurately identifying a wide range of morphological abnormalities, thereby minimizing observer variability and improving diagnostic support in reproductive healthcare. Methods: We propose a novel ensemble-based classification approach that combines convolutional neural network (CNN)-derived features using both feature-level and decision-level fusion techniques. Features extracted from multiple EfficientNetV2 variants are fused and classified using Support Vector Machines (SVM), Random Forest (RF), and Multi-Layer Perceptron with Attention (MLP-Attention). Decision-level fusion is achieved via soft voting to enhance robustness and accuracy. Results: The proposed ensemble framework was evaluated using the Hi-LabSpermMorpho dataset, which contains 18 distinct sperm morphology classes. The fusion-based model achieved an accuracy of 67.70%, significantly outperforming individual classifiers. The integration of multiple CNN architectures and ensemble techniques effectively mitigated class imbalance and enhanced the generalizability of the model. Conclusions: The presented methodology demonstrates a substantial improvement over traditional and single-model approaches in automated sperm morphology classification. By leveraging ensemble learning and multi-level fusion, the model provides a reliable and scalable solution for clinical decision-making in male fertility assessment. Full article
(This article belongs to the Section Machine Learning and Artificial Intelligence in Diagnostics)
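Decision-level fusion by soft voting over heterogeneous classifiers trained on CNN-derived features can be sketched with scikit-learn; the EfficientNetV2 feature extraction is assumed to have already produced the fused array `X_fused` and labels `y`:

```python
from sklearn.ensemble import RandomForestClassifier, VotingClassifier
from sklearn.neural_network import MLPClassifier
from sklearn.svm import SVC

# Feature-level fusion: concatenate per-backbone CNN embeddings beforehand,
# e.g. X_fused = np.hstack([feat_v2s, feat_v2m, feat_v2l]).
ensemble = VotingClassifier(
    estimators=[
        ("svm", SVC(probability=True)),          # probability=True enables soft voting
        ("rf", RandomForestClassifier(n_estimators=300)),
        ("mlp", MLPClassifier(hidden_layer_sizes=(256,), max_iter=500)),
    ],
    voting="soft",   # average predicted class probabilities across classifiers
)
# ensemble.fit(X_fused, y)
# y_pred = ensemble.predict(X_test)
```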

23 pages, 5327 KiB  
Article
Optimized ANN Model for Predicting Buckling Strength of Metallic Aerospace Panels Under Compressive Loading
by Shahrukh Khan, Saiaf Bin Rayhan, Md Mazedur Rahman, Jakiya Sultana and Gyula Varga
Metals 2025, 15(6), 666; https://doi.org/10.3390/met15060666 - 15 Jun 2025
Viewed by 444
Abstract
The present research proposes an Artificial Neural Network (ANN) model to predict the critical buckling load of six different types of metallic aerospace grid-stiffened panels: isogrid type I, isogrid type II, bi-grid, X-grid, anisogrid, and waffle, all subjected to compressive loading. Six thousand samples (one thousand per panel type) were generated using the Latin Hypercube Sampling method to ensure a diverse and comprehensive dataset. The ANN model was systematically fine-tuned by testing various batch sizes, learning rates, optimizers, dense layer configurations, and activation functions. The optimized model featured an eight-layer architecture (200/100/50/25/12/6/3/1 neurons), used a selu–relu–linear activation sequence, and was trained using the Nadam optimizer (learning rate = 0.0025, batch size = 8). Using regression metrics, performance was benchmarked against classical machine learning models such as CatBoost, XGBoost, LightGBM, random forest, decision tree, and k-nearest neighbors. The ANN achieved superior results: MSE = 2.9584, MAE = 0.9875, RMSE = 1.72, and R2 = 0.9998, significantly outperforming all other models across all metrics. Finally, a Taylor Diagram was plotted to assess the model’s reliability and check for overfitting, further confirming the consistent performance of the ANN model across both training and testing datasets. These findings highlight the model’s potential as a robust and efficient tool for predicting the buckling strength of metallic aerospace grid-stiffened panels. Full article
(This article belongs to the Special Issue Mechanical Structure Damage of Metallic Materials)
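The reported architecture (eight dense layers of 200/100/50/25/12/6/3/1 neurons, a selu/relu/linear activation sequence, Nadam with learning rate 0.0025, batch size 8) translates almost directly into Keras. A sketch, with the exact layer-to-activation mapping assumed since the abstract does not spell it out:

```python
import tensorflow as tf

def build_model(n_features: int) -> tf.keras.Model:
    model = tf.keras.Sequential([
        tf.keras.layers.Input(shape=(n_features,)),
        tf.keras.layers.Dense(200, activation="selu"),
        tf.keras.layers.Dense(100, activation="selu"),
        tf.keras.layers.Dense(50, activation="relu"),
        tf.keras.layers.Dense(25, activation="relu"),
        tf.keras.layers.Dense(12, activation="relu"),
        tf.keras.layers.Dense(6, activation="relu"),
        tf.keras.layers.Dense(3, activation="relu"),
        tf.keras.layers.Dense(1, activation="linear"),  # critical buckling load
    ])
    model.compile(optimizer=tf.keras.optimizers.Nadam(learning_rate=0.0025),
                  loss="mse", metrics=["mae"])
    return model

# model = build_model(n_features=X_train.shape[1])
# model.fit(X_train, y_train, batch_size=8, epochs=200, validation_split=0.2)
```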

29 pages, 973 KiB  
Article
Connected Vehicles Security: A Lightweight Machine Learning Model to Detect VANET Attacks
by Muawia A. Elsadig, Abdelrahman Altigani, Yasir Mohamed, Abdul Hakim Mohamed, Akbar Kannan, Mohamed Bashir and Mousab A. E. Adiel
World Electr. Veh. J. 2025, 16(6), 324; https://doi.org/10.3390/wevj16060324 - 11 Jun 2025
Viewed by 1828
Abstract
Vehicular ad hoc networks (VANETs) aim to manage traffic, prevent accidents, and regulate various parts of traffic. However, owing to their nature, the security of VANETs remains a significant concern. This study provides insightful information regarding VANET vulnerabilities and attacks. It investigates a number of security models that have recently been introduced to counter VANET security attacks with a focus on machine learning detection methods. This confirms that several challenges remain unsolved. Accordingly, this study introduces a lightweight machine learning model with a gain information feature selection method to detect VANET attacks. A balanced version of the well-known and recent dataset CISDS2017 was developed by applying a random oversampling technique. The developed dataset was used to train, test, and evaluate the proposed model. In other words, two layers of enhancements were applied—using a suitable feature selection technique and fixing the dataset imbalance problem. The results show that the proposed model, which is based on the Random Forest (RF) classifier, achieved excellent performance in terms of classification accuracy, computational cost, and classification error. It achieved an accuracy rate of 99.8%, outperforming all benchmark classifiers, including AdaBoost, decision tree (DT), K-nearest neighbors (KNNs), and multi-layer perceptron (MLP). To the best of our knowledge, this model outperforms all the existing classification techniques. In terms of processing cost, it consumes the least processing time, requiring only 69%, 59%, 35%, and 1.4% of the AdaBoost, DT, KNN, and MLP processing times, respectively. It causes negligible classification errors. Full article
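The two enhancement layers described (information-gain feature selection and random oversampling to balance the dataset, followed by a Random Forest) map onto standard scikit-learn / imbalanced-learn components. A sketch using mutual information as the information-gain proxy, with `k=20` chosen purely for illustration:

```python
from sklearn.ensemble import RandomForestClassifier
from sklearn.feature_selection import SelectKBest, mutual_info_classif
from sklearn.pipeline import Pipeline
from imblearn.over_sampling import RandomOverSampler

# Balance the training data first via random oversampling (as in the study).
# X_bal, y_bal = RandomOverSampler(random_state=42).fit_resample(X_train, y_train)

pipeline = Pipeline([
    # Keep the k most informative traffic features by mutual information.
    ("select", SelectKBest(score_func=mutual_info_classif, k=20)),
    ("rf", RandomForestClassifier(n_estimators=100, n_jobs=-1)),
])
# pipeline.fit(X_bal, y_bal)
# accuracy = pipeline.score(X_test, y_test)
```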

16 pages, 2965 KiB  
Article
Comparison of Selected Ensemble Supervised Learning Algorithms Used for Meteorological Normalisation of Particulate Matter (PM10)
by Karolina Gora and Mateusz Rzeszutek
Sustainability 2025, 17(12), 5274; https://doi.org/10.3390/su17125274 - 7 Jun 2025
Viewed by 474
Abstract
Air pollution, particularly PM10 particulate matter, poses significant health risks related to respiratory and cardiovascular diseases as well as cancer. Accurate identification of PM10 reduction factors is therefore essential for developing effective sustainable development strategies. According to the current state of knowledge, machine learning methods are most frequently employed for this purpose due to their superior performance compared to classical statistical approaches. This study evaluated the performance of three machine learning algorithms—Decision Tree (CART), Random Forest, and Cubist Rule—in predicting PM10 concentrations and estimating long-term trends following meteorological normalisation. The research focused on Tarnów, Poland (2010–2022), with comprehensive consideration of meteorological variability. The results demonstrated superior accuracy for the Random Forest and Cubist models (R2 ~0.88–0.89, RMSE ~14 μg/m3) compared to CART (RMSE 19.96 μg/m3). Air temperature and boundary layer height emerged as the most significant predictive variables across all algorithms. The Cubist algorithm proved particularly effective in detecting the impact of policy interventions, making it valuable for air quality trend analysis. While the study confirmed a statistically significant annual decrease in PM10 concentrations (0.83–1.03 μg/m3), pollution levels still exceeded both the updated EU air quality standards from 2024 (Directive (EU) 2024/2881), which will come into force in 2030, and the more stringent WHO guidelines from 2021. Full article
(This article belongs to the Section Pollution Prevention, Mitigation and Sustainability)
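Meteorological normalisation with a tree ensemble is typically done by training on meteorological and time features, then repeatedly predicting with the meteorology resampled at random and averaging, which strips weather-driven variability from the concentration series. A minimal Random Forest sketch of that general idea (column names assumed; not the authors' exact implementation):

```python
import numpy as np
import pandas as pd
from sklearn.ensemble import RandomForestRegressor

def normalise_pm10(df: pd.DataFrame, met_cols, time_cols, n_resamples=200):
    """Return a meteorologically normalised PM10 series (mean over resampled weather)."""
    X = df[met_cols + time_cols]
    model = RandomForestRegressor(n_estimators=300, n_jobs=-1)
    model.fit(X, df["pm10"])

    rng = np.random.default_rng(0)
    preds = np.zeros((n_resamples, len(df)))
    for i in range(n_resamples):
        X_res = X.copy()
        # Shuffle the meteorology across the whole record; keep the time features fixed.
        X_res[met_cols] = X[met_cols].to_numpy()[rng.permutation(len(df))]
        preds[i] = model.predict(X_res)
    return preds.mean(axis=0)

# normalised = normalise_pm10(df, met_cols=["temp", "blh", "wind_speed"],
#                             time_cols=["doy", "weekday", "hour"])
```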

27 pages, 7294 KiB  
Article
Enhancing Predictive Accuracy of Landslide Susceptibility via Machine Learning Optimization
by Chuanwei Zhang, Dingshuai Liu, Paraskevas Tsangaratos, Ioanna Ilia, Sijin Ma and Wei Chen
Appl. Sci. 2025, 15(11), 6325; https://doi.org/10.3390/app15116325 - 4 Jun 2025
Viewed by 605
Abstract
The present study examines the application of four machine learning models—Multi-Layer Perceptron, Naive Bayes, Credal Decision Trees, and Random Forests—to assess landslide susceptibility using Mei County, China, as a case study. Aerial photographs and field survey data were integrated into a GIS system to develop a landslide inventory map. Additionally, 16 landslide conditioning factors were collected and processed, including elevation, Normalized Difference Vegetation Index, precipitation, terrain, land use, lithology, slope, aspect, stream power index, topographic wetness index, sediment transport index, plan curvature, profile curvature, and distance to roads. From the landslide inventory, 87 landslides were identified, along with an equal number of randomly selected non-landslide locations. These data points, combined with the conditioning factors, formed a spatial dataset for our landslide analysis. To implement the proposed methodological approach, the dataset was divided into two subsets: 70% formed the training subset and 30% formed the testing subset. A correlation analysis was conducted to examine the relationship between the conditioning factors and landslide occurrence, and the certainty factor method was applied to assess their influence. Beyond model comparison, the central focus of this research is the optimization of machine learning parameters to enhance prediction reliability and spatial accuracy. The results show that the Random Forests and Multi-Layer Perceptron models provided superior predictive capability, offering detailed and actionable landslide susceptibility maps. Specifically, the area under the receiver operating characteristic curve and other statistical indicators were calculated to assess the models’ predictive accuracy. By producing high-resolution susceptibility maps tailored to local geomorphological conditions, this work supports more informed land-use planning, infrastructure development, and early warning systems in landslide-prone areas. The findings also contribute to the growing body of research on artificial intelligence-driven natural hazard assessment, offering a replicable framework for integrating machine learning in geospatial risk analysis and environmental decision-making. Full article
(This article belongs to the Special Issue Novel Technology in Landslide Monitoring and Risk Assessment)
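The evaluation protocol (70/30 split, area under the ROC curve as the headline statistic) is straightforward to reproduce for any of the four classifiers. A scikit-learn sketch with the conditioning factors in `X` and landslide/non-landslide labels in `y` as placeholders:

```python
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

# X: the 16 conditioning factors; y: 1 = landslide, 0 = non-landslide (placeholders).
# X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.30,
#                                                     stratify=y, random_state=1)
models = {
    "RF": RandomForestClassifier(n_estimators=500),
    "MLP": MLPClassifier(hidden_layer_sizes=(64, 32), max_iter=1000),
}
# for name, m in models.items():
#     m.fit(X_train, y_train)
#     auc = roc_auc_score(y_test, m.predict_proba(X_test)[:, 1])
#     print(name, round(auc, 3))
```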

22 pages, 4971 KiB  
Article
Machine Learning and Multilayer Perceptron-Based Customized Predictive Models for Individual Processes in Food Factories
by Byunghyun Lim, Dongju Kim, Woojin Cho and Jae-Hoi Gu
Energies 2025, 18(11), 2964; https://doi.org/10.3390/en18112964 - 4 Jun 2025
Viewed by 391
Abstract
A factory energy management system, based on information and communication technology, facilitates efficient energy management through real-time monitoring, analysis, and control of a factory's energy consumption. However, traditional food processing plants use basic control systems that cannot analyze energy consumption for each phase of processing. This makes it difficult to identify usage patterns for individual operations. This study identifies steam energy consumption patterns across four stages of food processing. Additionally, it proposes a customized predictive model employing four machine learning algorithms—linear regression, decision tree, random forest, and k-nearest neighbor—as well as two deep learning algorithms: long short-term memory and multi-layer perceptron. The enhanced multi-layer perceptron model achieved high performance, with a coefficient of determination (R2) of 0.9418, a coefficient of variation of root mean square error (CVRMSE) of 9.49%, and a relative accuracy of 93.28%. The results of this study demonstrate that straightforward data and models can accurately predict steam energy consumption for individual processes. These findings suggest that a customized predictive model, tailored to the energy consumption characteristics of each process, can offer precise energy operation guidance for food manufacturers, thereby improving energy efficiency and reducing consumption. Full article
(This article belongs to the Section K: State-of-the-Art Energy Related Technologies)
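The reported performance statistics (R², CVRMSE, relative accuracy) are simple functions of the predictions. A sketch of how they are commonly computed, with the relative-accuracy definition assumed as 100% minus the mean absolute percentage error since the abstract does not define it:

```python
import numpy as np
from sklearn.metrics import r2_score

def cvrmse(y_true, y_pred):
    """Coefficient of variation of the RMSE, in percent."""
    y_true, y_pred = np.asarray(y_true), np.asarray(y_pred)
    rmse = np.sqrt(np.mean((y_true - y_pred) ** 2))
    return 100.0 * rmse / np.mean(y_true)

def relative_accuracy(y_true, y_pred):
    """Assumed definition: 100% minus the mean absolute percentage error."""
    y_true, y_pred = np.asarray(y_true), np.asarray(y_pred)
    mape = 100.0 * np.mean(np.abs((y_true - y_pred) / y_true))
    return 100.0 - mape

# print(r2_score(y_test, y_hat), cvrmse(y_test, y_hat), relative_accuracy(y_test, y_hat))
```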

27 pages, 5926 KiB  
Article
Evaluation of Machine Learning Models for Enhancing Sustainability in Additive Manufacturing
by Waqar Shehbaz and Qingjin Peng
Technologies 2025, 13(6), 228; https://doi.org/10.3390/technologies13060228 - 3 Jun 2025
Viewed by 517
Abstract
Additive manufacturing (AM) presents significant opportunities for advancing sustainability through optimized process control and material utilization. This research investigates the application of machine learning (ML) models to directly associate AM process parameters with sustainability metrics, a mapping that is often difficult to establish through experimental methods alone. Initially, experimental data are generated by systematically varying key AM parameters: layer height, infill density, infill pattern, build orientation, and number of shells. Subsequently, four ML models (Linear Regression, Decision Trees, Random Forest, and Gradient Boosting) are trained and evaluated. Hyperparameter tuning is conducted using the Limited-memory Broyden–Fletcher–Goldfarb–Shanno with Box constraints (L-BFGS-B) algorithm, which demonstrates superior computational efficiency compared to traditional approaches such as grid and random search. Among the models, Random Forest yields the highest predictive accuracy and lowest mean squared error across all target sustainability indicators: energy consumption, part weight, scrap weight, and production time. The results confirm the efficacy of ML in predicting sustainability outcomes when supported by robust experimental data. This research offers a scalable and computationally efficient approach to enhancing sustainability in AM processes and contributes to data-driven decision-making in sustainable manufacturing. Full article
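Using L-BFGS-B for hyperparameter tuning amounts to minimising a cross-validated error over a box-constrained, continuous relaxation of the hyperparameters. A sketch for a Random Forest, with the two tuned parameters and their bounds chosen purely for illustration (the paper's actual search space is not reproduced here):

```python
import numpy as np
from scipy.optimize import minimize
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import cross_val_score

def cv_mse(params, X, y):
    """Objective: mean cross-validated MSE for the rounded hyperparameters."""
    n_estimators, max_depth = int(round(params[0])), int(round(params[1]))
    model = RandomForestRegressor(n_estimators=n_estimators, max_depth=max_depth,
                                  random_state=0)
    scores = cross_val_score(model, X, y, scoring="neg_mean_squared_error", cv=3)
    return -scores.mean()

# X, y: AM process parameters and a sustainability target such as energy consumption.
# result = minimize(cv_mse, x0=np.array([100.0, 8.0]), args=(X, y),
#                   method="L-BFGS-B", bounds=[(10, 500), (2, 30)])
# best_n_estimators, best_max_depth = int(result.x[0]), int(result.x[1])
```

Because the rounded objective is piecewise constant, L-BFGS-B relies on finite-difference gradients here; the snippet illustrates the interface rather than a tuned setup.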

29 pages, 4204 KiB  
Article
A Comparative Study of Ensemble Machine Learning and Explainable AI for Predicting Harmful Algal Blooms
by Omer Mermer, Eddie Zhang and Ibrahim Demir
Big Data Cogn. Comput. 2025, 9(5), 138; https://doi.org/10.3390/bdcc9050138 - 20 May 2025
Viewed by 860
Abstract
Harmful algal blooms (HABs), driven by environmental pollution, pose significant threats to water quality, public health, and aquatic ecosystems. This study enhances the prediction of HABs in Lake Erie, part of the Great Lakes system, by utilizing ensemble machine learning (ML) models coupled with explainable artificial intelligence (XAI) for interpretability. Using water quality data from 2013 to 2020, various physical, chemical, and biological parameters were analyzed to predict chlorophyll-a (Chl-a) concentrations, which are a commonly used indicator of phytoplankton biomass and a proxy for algal blooms. This study employed multiple ensemble ML models, including random forest (RF), deep forest (DF), gradient boosting (GB), and XGBoost, and compared their performance against individual models, such as support vector machine (SVM), decision tree (DT), and multi-layer perceptron (MLP). The findings revealed that the ensemble models, particularly XGBoost and deep forest (DF), achieved superior predictive accuracy, with R2 values of 0.8517 and 0.8544, respectively. The application of SHapley Additive exPlanations (SHAPs) provided insights into the relative importance of the input features, identifying the particulate organic nitrogen (PON), particulate organic carbon (POC), and total phosphorus (TP) as the critical factors influencing the Chl-a concentrations. This research demonstrates the effectiveness of ensemble ML models for achieving high predictive accuracy, while the integration of XAI enhances model interpretability. The results support the development of proactive water quality management strategies and highlight the potential of advanced ML techniques for environmental monitoring. Full article
(This article belongs to the Special Issue Machine Learning Applications and Big Data Challenges)
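The XGBoost-plus-SHAP workflow reported above is a standard pairing. A sketch, with feature columns such as PON, POC, and TP assumed and the hyperparameters illustrative:

```python
import shap
import xgboost as xgb

# X: water-quality predictors (e.g. PON, POC, TP); y: chlorophyll-a concentration.
model = xgb.XGBRegressor(n_estimators=500, learning_rate=0.05, max_depth=6)
# model.fit(X_train, y_train)

# TreeExplainer gives exact SHAP values for tree ensembles.
# explainer = shap.TreeExplainer(model)
# shap_values = explainer.shap_values(X_test)
# shap.summary_plot(shap_values, X_test)   # ranks features by mean |SHAP value|
```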

21 pages, 8045 KiB  
Article
A GIS-Based Decision Support Model (DSM) for Harvesting System Selection on Steep Terrain: Integrating Operational and Silvicultural Criteria
by Benno Eberhard, Zoran Trailovic, Natascia Magagnotti and Raffaele Spinelli
Forests 2025, 16(5), 854; https://doi.org/10.3390/f16050854 - 20 May 2025
Viewed by 342
Abstract
The goal of this study was to develop a GIS-based Decision Support Model for selecting the best timber harvesting systems on steep terrain. The model combines multiple layers, each representing an important factor in mechanized logging. These layers are used to create a final map that functions as a spatially explicit Decision Support Model that helps decide which machines are best suited for different forest areas. A key idea of this study is to consider not only operational criteria (slope, ruggedness, wetness, and road accessibility), but also a fundamental silvicultural aspect, i.e., the assessment of tree growth classes to enable the integration of silvicultural deliberations into timber harvest planning. The data used for this model come from an orthophoto image and a Digital Terrain Model (DTM). The operational factors were analyzed using GIS tools, while the silvicultural aspects were assessed using the deep learning algorithm DeepForest and tree growth equations (allometric functions). The model was tested by comparing its results with field data taken in a Norway Spruce stand in South Tyrol/Italy. The findings show that the model reliably evaluates operational factors. For silvicultural aspects, it tends to underestimate the number of small trees, but provides a good representation of tree size classes within a forest stand. The innovation of this method is that it relies on low-cost, open-source tools instead of expensive 3D scanning devices. Full article
(This article belongs to the Section Forest Operations and Engineering)
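The core of such a GIS Decision Support Model is an overlay of thresholded raster layers. A simplified numpy sketch that classifies each cell into a harvesting-system class from slope and road distance alone, with the thresholds purely illustrative (the published model also uses ruggedness, wetness, and tree growth classes):

```python
import numpy as np

def harvesting_system_map(slope_pct: np.ndarray, road_dist_m: np.ndarray) -> np.ndarray:
    """0 = ground-based harvester/forwarder, 1 = winch-assisted, 2 = cable yarder."""
    system = np.full(slope_pct.shape, 2, dtype=np.int8)      # default: cable yarder
    system[(slope_pct <= 60) & (road_dist_m <= 300)] = 1      # winch-assisted machines
    system[(slope_pct <= 30) & (road_dist_m <= 300)] = 0      # fully mechanised ground-based
    return system

# The slope and road-distance rasters would normally be derived from the DTM
# and the road network with standard GIS tools before this overlay step.
```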

21 pages, 8395 KiB  
Article
Deep Artificial Neural Network Modeling of the Ablation Performance of Ceramic Matrix Composites in the Hydrogen Torch Test
by Jayanta Bhusan Deb, Christopher Varela, Fahim Faysal, Yiting Wang, Chiranjit Maiti and Jihua Gou
J. Compos. Sci. 2025, 9(5), 239; https://doi.org/10.3390/jcs9050239 - 13 May 2025
Viewed by 652
Abstract
In recent years, there has been increasing interest in new materials such as ceramic matrix composites (CMCs) for power generation and aerospace propulsion applications through hydrogen combustion. This study employed a deep artificial neural network (DANN) model to predict the ablation performance of CMCs in the hydrogen torch test (HTT). The study was conducted in three phases to increase the accuracy of the model’s predictions. Initially, to predict the thermal behavior of ceramic composites, two linear machine learning models were used known as Lasso and Ridge regression. In the second step, four decision tree-based ensemble machine learning models, namely random forest, gradient boosting regression, extreme gradient boosting regression, and extra tree regression, were used to improve the prediction accuracy metrics, including root mean square error (RMSE), mean absolute error (MAE), correlation coefficient (R2 score), and mean absolute percentage error (MAPE), relative to the previously introduced linear models. Finally, to forecast the thermal stability of CMCs with time, an optimized DANN model with two hidden layers having rectified linear unit activation function was developed. The data collection procedure involved preparing CMCs with continuous Yttria-Stabilized Zirconia (YSZ) fibers and silicon carbide (SiC) matrix using a polymer infiltration and pyrolysis (PIP) technique. The samples were exposed to a hydrogen flame at a high heat flux of 183 W/cm2 for a duration of 10 min. A good agreement between the DANN model’s predictions and experimental data with an R2 score of 0.9671, RMSE of 16.45, an MAE of 14.07, and an MAPE of 3.92% confirmed the acceptability of the developed neural network model in this study. Full article
(This article belongs to the Special Issue Feature Papers in Journal of Composites Science in 2025)
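The optimised DANN described (two hidden layers with rectified linear unit activations, a regression output, evaluation with RMSE/MAE/MAPE/R²) corresponds to a small Keras regressor. A sketch with the layer widths assumed, since the abstract does not give them:

```python
import tensorflow as tf

def build_dann(n_features: int) -> tf.keras.Model:
    model = tf.keras.Sequential([
        tf.keras.layers.Input(shape=(n_features,)),
        tf.keras.layers.Dense(64, activation="relu"),   # hidden layer 1 (width assumed)
        tf.keras.layers.Dense(32, activation="relu"),   # hidden layer 2 (width assumed)
        tf.keras.layers.Dense(1),                        # thermal response target (assumed)
    ])
    model.compile(optimizer="adam", loss="mse",
                  metrics=[tf.keras.metrics.MeanAbsoluteError(),
                           tf.keras.metrics.MeanAbsolutePercentageError()])
    return model

# Inputs might include exposure time and material/process descriptors from the
# hydrogen torch test; the target is the measured thermal behaviour over time.
```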

25 pages, 9072 KiB  
Article
An Application Study of Machine Learning Methods for Lithological Classification Based on Logging Data in the Permafrost Zones of the Qilian Mountains
by Xudong Hu, Guo Song, Chengnan Wang, Kun Xiao, Hai Yuan, Wangfeng Leng and Yiming Wei
Processes 2025, 13(5), 1475; https://doi.org/10.3390/pr13051475 - 12 May 2025
Cited by 1 | Viewed by 440
Abstract
Lithology identification is fundamental for the logging evaluation of natural gas hydrate reservoirs. The Sanlutian field, located in the permafrost zones of the Qilian Mountains (PZQM), presents unique challenges for lithology identification due to its complex geological features, including fault development, missing and duplicated stratigraphy, and a diverse array of rock types. Conventional methods frequently encounter difficulties in precisely discerning these rock types. This study employs well logging and core data from hydrate boreholes in the region to evaluate the performance of four data-driven machine learning (ML) algorithms for lithological classification: random forest (RF), multi-layer perceptron (MLP), logistic regression (LR), and decision tree (DT). The results indicate that seven principal lithologies—sandstone, siltstone, argillaceous siltstone, silty mudstone, mudstone, oil shale, and coal—can be effectively distinguished through the analysis of logging data. Among the tested models, the random forest algorithm demonstrated superior performance, achieving optimal precision, recall, F1-score, and Jaccard coefficient values of 0.941, 0.941, 0.940, and 0.889, respectively. The models were ranked in the following order based on evaluation criteria: RF > MLP > DT > LR. This research highlights the potential of integrating artificial intelligence with logging data to enhance lithological classification in complex geological settings, providing valuable technical support for the exploration and development of gas hydrate resources. Full article
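Reproducing the reported comparison is a matter of fitting each classifier on the logging curves and computing the same averaged scores. A scikit-learn sketch for the best-performing Random Forest, with the input log curves only indicated generically:

```python
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import f1_score, jaccard_score, precision_score, recall_score

# X: well-log curves (e.g. gamma ray, density, neutron, resistivity); y: lithology labels.
clf = RandomForestClassifier(n_estimators=300, random_state=0)
# clf.fit(X_train, y_train)
# y_pred = clf.predict(X_test)
# for name, fn in [("precision", precision_score), ("recall", recall_score),
#                  ("f1", f1_score), ("jaccard", jaccard_score)]:
#     print(name, round(fn(y_test, y_pred, average="weighted"), 3))
```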

21 pages, 7195 KiB  
Article
A Deep Learning Algorithm for Multi-Source Data Fusion to Predict Effluent Quality of Wastewater Treatment Plant
by Shitao Zhang, Jiafei Cao, Yang Gao, Fangfang Sun and Yong Yang
Toxics 2025, 13(5), 349; https://doi.org/10.3390/toxics13050349 - 27 Apr 2025
Viewed by 555
Abstract
The operational complexity of wastewater treatment systems mainly stems from the diversity of influent characteristics and the nonlinear nature of the treatment process. Together, these factors make the control of effluent quality in wastewater treatment plants (WWTPs) difficult to manage effectively. To address this challenge, constructing accurate effluent quality models for WWTPs can not only mitigate these complexities, but also provide critical decision support for operational management. In this research, we introduce a deep learning method that fuses multi-source data. This method draws on four groups of indicators to comprehensively analyse and predict effluent quality: water quantity data, process data, energy consumption data, and water quality data. To assess the efficacy of this method, a case study was carried out at an industrial effluent treatment plant (IETP) in Anhui Province, China. Deep learning algorithms including long short-term memory (LSTM) and gated recurrent unit (GRU) were found to have favourable prediction performance compared with traditional machine learning algorithms (random forest, RF) and multi-layer perceptron (MLP). The results show that the R2 of LSTM and GRU is 1.36%~31.82% higher than that of MLP and 9.10%~47.75% higher than that of traditional machine learning algorithms. Finally, the RReliefF approach was used to identify the key parameters affecting the water quality behaviour of IETP effluent, and it was found that, by optimising the multi-source feature structure, not only can monitoring and management strategies be optimised, but the modelling efficiency can also be further improved. Full article
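A multi-source LSTM of the kind compared above takes a window of past time steps over the concatenated feature groups and predicts the next effluent-quality value. A Keras sketch with the window length, feature count, and layer sizes assumed:

```python
import tensorflow as tf

WINDOW = 24        # past time steps per sample (assumed)
N_FEATURES = 16    # concatenated water-quantity, process, energy, and quality features (assumed)

model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(WINDOW, N_FEATURES)),
    tf.keras.layers.LSTM(64, return_sequences=False),
    tf.keras.layers.Dense(32, activation="relu"),
    tf.keras.layers.Dense(1),                     # next-step effluent quality indicator
])
model.compile(optimizer="adam", loss="mse", metrics=["mae"])
# model.fit(X_windows, y_next, epochs=100, batch_size=32, validation_split=0.2)
```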

41 pages, 18914 KiB  
Article
Cost-Efficient RSSI-Based Indoor Proximity Positioning, for Large/Complex Museum Exhibition Spaces
by Panos I. Philippopoulos, Kostas N. Koutrakis, Efstathios D. Tsafaras, Evangelia G. Papadopoulou, Dimitrios Sigalas, Nikolaos D. Tselikas, Stefanos Ougiaroglou and Costas Vassilakis
Sensors 2025, 25(9), 2713; https://doi.org/10.3390/s25092713 - 25 Apr 2025
Viewed by 615
Abstract
RSSI-based proximity positioning is a well-established technique for indoor localization, featuring simplicity and cost-effectiveness and requiring low-price, off-the-shelf hardware. However, it suffers from low accuracy (in NLOS traffic), noise, and multipath fading issues. In large complex spaces, such as museums, where heavy visitor traffic is expected to seriously impact the ability to maintain LOS, RSSI coupled with Bluetooth Low Energy (BLE) seems ideal in terms of the market availability, cost-/energy-efficiency and scalability that affect competing technologies, provided it achieves adequate accuracy. Our work reports and discusses findings of a BLE/RSSI-based pilot, implemented at the Museum of Modern Greek Culture in Athens, involving eight buildings and 47 halls with diverse areas, shapes, and showcase layouts. Wearable visitor BLE beacons provided cell-level location determined by a prototype tool (VTT), integrating in its architecture different functionalities: raw RSSI data smoothing with Kalman filters, hybrid positioning provision, temporal methods for visitor cell prediction, spatial filtering, and prediction based on popular machine learning classifiers. Visitor movement modeling, based on critical parameters influencing signal measurements, provided scenarios mapped to popular behavioral models. One such model, “ant”, corresponding to relatively slow nomadic cell roaming, was selected for basic experimentation. Pilot implementation decisions and methods adopted at all layers of the VTT architecture followed the overall concept of simplicity, availability, and cost-efficiency, yielding a maximum infrastructure cost of 8 Euro per m2 covered. A total of 15 methods/algorithms were evaluated for prediction accuracy across 20 RSSI datasets incorporating diverse hall cell allocations and visitor movement patterns. The simple, low-processing RSSI data, temporal, and spatial management methods adopted achieved a maximum average prediction accuracy of 81.53% across all datasets, while ML algorithms (Random Forest) achieved a maximum average prediction accuracy of 87.24%. Full article
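The raw-RSSI smoothing step mentioned above is usually a one-dimensional Kalman filter with a constant-signal model. A numpy sketch with the process and measurement noise parameters chosen for illustration (the pilot's actual tuning is not given in the abstract):

```python
import numpy as np

def kalman_smooth_rssi(rssi, process_var=0.01, meas_var=4.0):
    """Smooth a 1-D RSSI series (dBm) with a constant-level Kalman filter."""
    x = rssi[0]          # state estimate
    p = 1.0              # estimate variance
    smoothed = []
    for z in rssi:
        p += process_var                 # predict: the signal level drifts slowly
        k = p / (p + meas_var)           # Kalman gain
        x += k * (z - x)                 # update with the new measurement
        p *= (1.0 - k)
        smoothed.append(x)
    return np.array(smoothed)

# Cell prediction would then run (e.g. a Random Forest classifier)
# on features derived from the smoothed per-beacon RSSI streams.
```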