Search Results (3,584)

Search Parameters:
Keywords = boosting technique

14 pages, 1332 KB  
Article
Leakage-Free Evaluation for Employee Attrition Prediction on Tabular Data
by Ana Maria Căvescu and Alina Nirvana Popescu
Information 2026, 17(3), 308; https://doi.org/10.3390/info17030308 (registering DOI) - 23 Mar 2026
Abstract
In the context of employee attrition prediction using imbalanced tabular data, we propose a reproducible, leakage-aware evaluation protocol and validate it on the IBM HR Attrition dataset. We perform the train/test split prior to any rebalancing; SMOTE (Synthetic Minority Over-sampling Technique) is applied exclusively within the training portion of each fold in stratified 5-fold cross-validation, while the test set remains untouched. One-Hot Encoding is performed consistently using pd.get_dummies. We benchmark Logistic Regression, Random Forest, ExtraTrees, LightGBM, and XGBoost using imbalance-aware metrics: F1 for the minority class, PR-AUC reported as Average Precision (AP), and ROC-AUC reported both in cross-validation and on the held-out test set. XGBoost attains the best mean AP in cross-validation (0.556 ± 0.056). Logistic Regression achieves the highest mean F1 (0.439 ± 0.048), while LightGBM yields the best mean ROC-AUC (0.791 ± 0.026). On the test set, XGBoost achieves a precision value of 0.65 and a recall value of 0.45 at a fixed threshold of 0.5. Overall, the results highlight a trade-off between stable minority-class detection (Logistic Regression) and stronger risk ranking performance (boosting models) under class imbalance. Full article
(This article belongs to the Section Artificial Intelligence)
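The protocol this abstract describes — rebalance only inside each training fold, never the test fold — can be sketched with scikit-learn alone. The snippet below is illustrative, not the authors' code: it uses a synthetic imbalanced dataset rather than the IBM HR data, and plain random oversampling stands in for SMOTE.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import average_precision_score, f1_score, roc_auc_score
from sklearn.model_selection import StratifiedKFold

# Hypothetical imbalanced stand-in for the IBM HR data (~16% positives).
X, y = make_classification(n_samples=1000, weights=[0.84], random_state=0)

skf = StratifiedKFold(n_splits=5, shuffle=True, random_state=0)
f1s, aps, aucs = [], [], []
for train_idx, test_idx in skf.split(X, y):
    X_tr, y_tr = X[train_idx], y[train_idx]
    # Rebalance ONLY the training fold (random oversampling stands in
    # for SMOTE here); the test fold is never resampled.
    rng = np.random.default_rng(0)
    minority = np.flatnonzero(y_tr == 1)
    extra = rng.choice(minority, size=(y_tr == 0).sum() - minority.size,
                       replace=True)
    X_bal = np.vstack([X_tr, X_tr[extra]])
    y_bal = np.concatenate([y_tr, y_tr[extra]])

    proba = (LogisticRegression(max_iter=1000)
             .fit(X_bal, y_bal)
             .predict_proba(X[test_idx])[:, 1])
    f1s.append(f1_score(y[test_idx], proba >= 0.5))
    aps.append(average_precision_score(y[test_idx], proba))
    aucs.append(roc_auc_score(y[test_idx], proba))
```

The key point is that the oversampled rows are built only from `X_tr`, so no test-fold information leaks into training.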

13 pages, 516 KB  
Article
Ultra-Hypofractionated Whole-Breast Irradiation With or Without Simultaneous Integrated Boost Using Helical Tomotherapy for Early-Stage Breast Cancer: A Real-World Dosimetric and Clinical Outcome Study
by Pei-Yu Hou, Chen-Hsi Hsieh, Hsin-Pei Yeh and Eva Yu-Hsuan Chuang
Cancers 2026, 18(6), 1015; https://doi.org/10.3390/cancers18061015 - 20 Mar 2026
Abstract
Background: Ultra-hypofractionated whole-breast irradiation (WBI) delivering 26 Gy in five fractions has been established as a standard of care following the FAST-Forward trial. However, real-world data addressing advanced delivery techniques and the feasibility of incorporating a simultaneous integrated boost (SIB) remain limited. Methods: We retrospectively analyzed 40 patients with early-stage breast cancer (pT1–2N0M0) treated with breast-conserving surgery, followed by ultra-hypofractionated WBI using helical tomotherapy. Patients received either WBI alone (26 Gy in five fractions) or WBI with an SIB to the tumor bed (29–30 Gy in five fractions). Dosimetric parameters for planning target volumes (PTVs) and organs at risk (OARs) were evaluated. Acute skin toxicity was assessed using CTCAE version 5.0. Results: The median patient age was 55.7 years. The mean PTV V95% was 97.8%, with excellent hotspot control (PTV V105% < 5% and V107% < 2%). For left-sided tumors, the mean heart dose was 1.67 Gy, and the ipsilateral lung V8Gy remained below 15% in all patients. Acute radiation dermatitis was limited to Grade 0–1 in all cases. At a median follow-up of 14.8 months, both local control and overall survival were 100%. Conclusions: Ultra-hypofractionated WBI delivered using helical tomotherapy, with or without SIB, demonstrates robust dosimetric quality, minimal acute toxicity, and favorable early clinical outcomes in routine clinical practice. Full article
15 pages, 1153 KB  
Article
Structured Over-Relaxed Monotone FISTA for Linear Inverse Problems in Image Restoration
by Zixuan Chen and Xinzhu Zhao
Axioms 2026, 15(3), 235; https://doi.org/10.3390/axioms15030235 - 20 Mar 2026
Abstract
In this paper, we propose an efficient numerical algorithm for solving large-scale ill-posed linear inverse problems encountered in image restoration. To boost computational efficiency, we extend the structured fast iterative shrinkage-thresholding algorithm (sFISTA) for addressing the corresponding l1-regularized minimization problem, and further introduce the over-relaxation technique to accelerate the algorithm. The proposed algorithm is termed structured over-relaxed monotone FISTA (sOMFISTA). The convergence analysis of sOMFISTA is also conducted. The algorithmic framework of sOMFISTA is universally applicable to any non-smooth convex regularization term, exhibiting remarkable flexibility. Extensive numerical experiments are carried out to systematically validate the superiority in efficiency and performance of the proposed sOMFISTA. Full article
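A minimal reference point for the abstract above is plain FISTA on the l1-regularized least-squares problem. This sketch omits the structured (sFISTA), over-relaxed, and monotone refinements the paper introduces; `fista_lasso` and `soft_threshold` are names chosen here for illustration.

```python
import numpy as np

def soft_threshold(v, t):
    # Proximal operator of t * ||.||_1
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def fista_lasso(A, b, lam, n_iter=200):
    """Plain FISTA for min_x 0.5*||Ax - b||^2 + lam*||x||_1."""
    L = np.linalg.norm(A, 2) ** 2        # Lipschitz constant of the gradient
    x = np.zeros(A.shape[1])
    z, t = x.copy(), 1.0
    for _ in range(n_iter):
        x_new = soft_threshold(z - A.T @ (A @ z - b) / L, lam / L)
        t_new = (1 + np.sqrt(1 + 4 * t * t)) / 2
        z = x_new + ((t - 1) / t_new) * (x_new - x)  # momentum extrapolation
        x, t = x_new, t_new
    return x
```

The same proximal-gradient skeleton applies to any non-smooth convex regularizer once `soft_threshold` is replaced by the corresponding proximal operator, which is the flexibility the abstract refers to.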
33 pages, 2201 KB  
Review
Machine Learning Models for Non-Intrusive Load Monitoring: A Systematic Review and Meta-Analysis
by Herman Cristiano Jaime, Adler Diniz de Souza, Raphael Carlos Santos Machado and Otávio de Souza Martins Gomes
Inventions 2026, 11(2), 29; https://doi.org/10.3390/inventions11020029 - 19 Mar 2026
Abstract
Non-Intrusive Load Monitoring (NILM) systems are increasingly applied in residential and commercial environments to disaggregate energy consumption without requiring additional hardware sensors. The integration of Machine Learning (ML) techniques has enhanced the accuracy and efficiency of load identification and classification in smart meter-based systems. This study presents a systematic review and meta-analysis aimed at identifying, classifying, and quantitatively evaluating ML models applied to NILM. Searches were conducted in the IEEE Xplore and Scopus databases, restricted to peer-reviewed publications from 2017 to 2024. Thirty studies met the eligibility criteria and were included in the quantitative synthesis using a random-effects meta-analysis model (DerSimonian–Laird estimator). The primary effect measure was the F1-score. Statistical analyses were performed using R (version 4.5.0) and Python (version 3.10.0), including heterogeneity assessment and subgroup analyses according to model type. Hybrid models, such as SVDT-KNN-MLP, LE-CRNN, and RBFNN-MOGA, achieved the highest pooled F1-scores, although supported by a limited number of studies. Traditional approaches, including CNN, KNN, and Random Forest, demonstrated consistently strong performance and broader validation, whereas Boosted Trees and RNN-based models showed lower or more variable results. Substantial heterogeneity was observed across studies, highlighting the need for dataset standardization, reproducible evaluation frameworks, and further validation of emerging hybrid architectures in diverse operational scenarios. This study contributes by providing a quantitative synthesis of machine learning models applied to NILM using a structured PRISMA-based methodology and subgroup analysis by model architecture. 
Unlike previous narrative reviews, this work integrates scientometric analysis with meta-analytic performance aggregation, offering a consolidated and comparative evidence base for future NILM research. Full article
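The random-effects pooling step named above (DerSimonian–Laird estimator) is compact enough to sketch directly. The per-study F1-scores and variances in the usage line are hypothetical, not values from the review.

```python
import numpy as np

def dersimonian_laird(effects, variances):
    """Random-effects meta-analytic pooling (DerSimonian–Laird)."""
    y = np.asarray(effects, dtype=float)
    v = np.asarray(variances, dtype=float)
    w = 1.0 / v                               # fixed-effect weights
    y_fixed = np.sum(w * y) / np.sum(w)
    Q = np.sum(w * (y - y_fixed) ** 2)        # Cochran's Q heterogeneity
    c = np.sum(w) - np.sum(w ** 2) / np.sum(w)
    tau2 = max(0.0, (Q - (len(y) - 1)) / c)   # between-study variance
    w_star = 1.0 / (v + tau2)                 # random-effects weights
    pooled = np.sum(w_star * y) / np.sum(w_star)
    se = np.sqrt(1.0 / np.sum(w_star))
    return pooled, tau2, se

# Hypothetical per-study F1-scores and sampling variances:
pooled, tau2, se = dersimonian_laird([0.90, 0.80, 0.95],
                                     [0.010, 0.020, 0.005])
```

A non-zero `tau2` is what the abstract's "substantial heterogeneity" refers to: between-study variance beyond sampling error.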

15 pages, 896 KB  
Article
Enhancing Network Intrusion Detection Under Class Imbalance Using a Three-Discriminator Generative Adversarial Network
by Taesu Kim, Hyoseong Park, Dongil Shin and Dongkyoo Shin
Electronics 2026, 15(6), 1253; https://doi.org/10.3390/electronics15061253 - 17 Mar 2026
Abstract
Network Intrusion Detection Systems (NIDS) play a crucial role in protecting network environments against cyberattacks. However, traditional NIDS rely heavily on predefined attack signatures, which limits their ability to detect zero-day attacks. Although machine learning-based intrusion detection techniques have been widely adopted in Network Intrusion Prevention Systems (NIPS), publicly available network traffic datasets often suffer from severe class imbalance, leading to biased learning and degraded detection performance. To address this issue, this study proposes a data augmentation framework based on a 3D-GAN (Three-Discriminator Generative Adversarial Network). The proposed architecture integrates an autoencoder, a CNN (Convolutional Neural Network), and an LSTM (Long Short-Term Memory) network as parallel discriminators to capture the statistical, spatial, and temporal characteristics of network traffic. By jointly optimizing multiple discriminator losses, the framework enhances training stability and generates high-quality synthetic samples. Experiments were conducted on the CIC-UNSW-NB15 dataset using Random Forest, XGBoost (eXtreme Gradient Boosting), and BiGRU (Bidirectional Gated Recurrent Unit) classifiers. Two augmented datasets were constructed to address class imbalance, containing approximately 100,000 and 350,000 samples, respectively. Among them, Dataset 2, augmented using the proposed 3D-GAN, demonstrated the most significant performance improvement. Compared to the original imbalanced dataset, the XGBoost classifier trained on Dataset 2 achieved approximately a 4% increase in both accuracy and F1-score, while reducing the false positive rate and false negative rate by approximately 3.5%. Furthermore, the optimal configuration attained an F1-score of 0.9816, indicating superior capability in modeling complex network traffic patterns.
Overall, this study highlights the potential of GAN-based data augmentation for alleviating class imbalance and improving the robustness and generalization of intrusion detection systems. Full article

36 pages, 4766 KB  
Article
Fault Diagnosis of Rotating Machinery Using Supervised Machine Learning Algorithms with Integrated Data-Driven and Physics-Informed Feature Sets
by Anastasija Angjusheva Ignjatovska, Zlatko Petreski, Viktor Gavriloski, Dejan Shishkovski, Simona Domazetovska Markovska, Maja Anachkova and Damjan Pecioski
Sensors 2026, 26(6), 1876; https://doi.org/10.3390/s26061876 - 17 Mar 2026
Abstract
This study proposes a supervised machine learning framework for vibration-based fault diagnosis of rotating machinery using integrated data-driven and physics-informed feature sets. A dataset acquired under variable load and multiple operating conditions was used for model training. Parallel signal processing techniques were applied to capture fault-related information across multiple frequency bands including time-domain analysis, frequency-domain analysis, baseband analysis, and envelope analysis. From the corresponding signal representations, statistical, spectral, and physics-based features associated with characteristic fault frequencies were extracted and combined into integrated feature sets. The diagnostic performance of models trained using purely data-driven features was systematically compared with models incorporating integrated data-driven and physics-informed features. Support Vector Machine, Random Forests, Gradient Boosting, and an ensemble classifier were evaluated using accuracy, precision, recall, and F1-score metrics. The proposed framework employs a two-layer classification strategy, where the first layer performs multiclass fault identification, while the second layer evaluates the presence of imbalance as a coexisting fault. In addition, the influence of different feature groups as well as individual measurement axes and their combinations on diagnostic performance were analyzed. Validation using a new dataset measured in laboratory conditions confirmed the robustness and generalization capability of the proposed diagnostic framework. Full article
(This article belongs to the Special Issue AI-Assisted Condition Monitoring and Fault Diagnosis)
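Of the parallel signal-processing steps listed above, envelope analysis is the most bearing-specific; a minimal sketch uses the Hilbert transform to recover an amplitude-modulation (fault) frequency from a high-frequency resonance carrier. The signal, sampling rate, and 105 Hz fault frequency below are synthetic assumptions, not the paper's data.

```python
import numpy as np
from scipy.signal import hilbert

# Synthetic bearing-like signal: a 3 kHz resonance carrier amplitude-
# modulated at an assumed 105 Hz fault frequency (parameters hypothetical).
fs = 12_000                                   # sampling rate, Hz
t = np.arange(0, 1.0, 1 / fs)
modulation = 1 + np.sin(2 * np.pi * 105 * t)  # impact train, 105 Hz
signal = modulation * np.sin(2 * np.pi * 3000 * t)
signal += 0.1 * np.random.default_rng(0).normal(size=t.size)

# Envelope analysis: magnitude of the analytic signal, then its spectrum.
envelope = np.abs(hilbert(signal))
spectrum = np.abs(np.fft.rfft(envelope - envelope.mean()))
freqs = np.fft.rfftfreq(signal.size, 1 / fs)
peak = freqs[np.argmax(spectrum)]             # lands near 105 Hz
```

Peaks at characteristic fault frequencies in such an envelope spectrum are the kind of physics-based features the abstract combines with statistical and spectral ones.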

30 pages, 2223 KB  
Article
Comparative Performance Analysis of Machine Learning Models for Predicting the Weighted Arithmetic Water Quality Index
by Bedia Çalış, İbrahim Bayhan, Hamza Yalçin, İbrahim Öztürk and Mehmet İrfan Yeşilnacar
Water 2026, 18(6), 696; https://doi.org/10.3390/w18060696 - 16 Mar 2026
Abstract
Precise water quality forecasting is vital for sustainable resource management and public health, especially in semi-arid environments. This study investigates the predictive capabilities of ten Machine Learning (ML) algorithms using a dataset of 308 drinking water samples collected from various districts in Şanlıurfa Province, Türkiye. We evaluated ten predictive models, including Support Vector Regressor (SVR) and Extreme Gradient Boosting (XGBoost), both integrated with dimensionality reduction and hyperparameter optimization. Nineteen physicochemical and microbiological parameters—Temperature, chlorine (Cl), pH, Electrical Conductivity (EC), Total Dissolved Solids (TDS), nitrite (NO2), nitrate (NO3), ammonium (NH4+), sulfate (SO42−), Free Chlorine (Cl2), calcium (Ca2+), magnesium (Mg2+), sodium (Na+), potassium (K+), fluoride (F), trihalomethanes (THMs), Escherichia coli, Enterococci, Total Coliform—were used as input features. The dataset was split into training (75%) and testing (25%) subsets, and model performance was assessed through 10-fold cross-validation and hold-out testing procedures. To improve model generalization and mitigate the effects of class imbalance, we implemented the Adaptive Synthetic Sampling (ADASYN) technique. ML algorithms were evaluated using standard regression metrics: Mean Absolute Error (MAE), Mean Squared Error (MSE), Root Mean Squared Error (RMSE), and the Coefficient of Determination (R2). The LSTM model optimized using Randomized Search outperformed the SVR and XGBoost models, demonstrating the highest accuracy and generalization capability, as evidenced by the superior R2 value of 0.999 following ADASYN balancing and the lowest RMSE (1.206). These findings underscore the effectiveness of the LSTM framework in modeling the complex variance of the Weighted Arithmetic Water Quality Index (WAWQI). 
The findings of this study are expected to support future water quality monitoring strategies, inform policy development, and contribute to sustainable water resource management in arid and semi-arid regions. Full article
(This article belongs to the Section Urban Water Management)
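The target variable above, the Weighted Arithmetic Water Quality Index, follows a standard textbook formulation (index = sum of w_i * q_i over sum of w_i, with weights inversely proportional to permissible limits). The sketch below assumes that formulation and uses two illustrative parameters, not the paper's nineteen.

```python
def wawqi(concentrations, standards, ideals=None, K=1.0):
    """Weighted Arithmetic WQI: sum(w_i * q_i) / sum(w_i), with
    w_i = K / S_i and q_i = 100 * (C_i - V_i) / (S_i - V_i)."""
    if ideals is None:
        ideals = [0.0] * len(standards)
    weights = [K / s for s in standards]
    qualities = [100.0 * (c - v) / (s - v)
                 for c, s, v in zip(concentrations, standards, ideals)]
    return sum(w * q for w, q in zip(weights, qualities)) / sum(weights)

# Illustrative: two parameters measured exactly at their permissible
# limits give an index of 100 (values hypothetical, not from the paper).
index = wawqi(concentrations=[8.5, 250.0], standards=[8.5, 250.0])
```

An index of 100 marks the boundary of the permissible limits; values well below indicate better quality, values above indicate pollution beyond standards.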

22 pages, 4100 KB  
Article
Explainable Machine Learning-Based Urban Waterlogging Prediction Framework
by Yinghua Deng and Xin Lu
Urban Sci. 2026, 10(3), 156; https://doi.org/10.3390/urbansci10030156 - 13 Mar 2026
Abstract
Urban waterlogging has become a critical challenge to urban sustainability under the combined pressures of rapid urbanization and increasingly frequent extreme weather events. However, traditional predictive models struggle to achieve real-time, point-specific early warning effectively, primarily due to the interference of redundant high-dimensional data and the inability to handle severe data imbalance. This study proposes a lightweight and interpretable machine learning framework for real-time waterlogging hotspot prediction, based on a multi-dimensional feature space. Specifically, we implement a Lasso-based mechanism to distill 37 multi-source variables into five core determinants. This process effectively isolates dominant environmental drivers while filtering noise. To further overcome the recall bottleneck, we propose a Synthetic Minority Over-sampling Technique based on Weighted Distance and Cleaning (SMOTE-WDC) algorithm that incorporates weighted feature distances and density-based noise cleaning. Validating the framework on datasets from Shenzhen (2023–2024), we demonstrate that the Gradient Boosting Decision Tree (GBDT) model integrated with this strategy achieves optimal performance using only five features, yielding an F1-score of 0.808 and an Area Under the Precision-Recall Curve (AUC-PR) of 0.895. Notably, a Recall of 0.882 is attained, representing a 4.6% improvement over the baseline. This study contributes a cost-effective, high-sensitivity approach to disaster risk reduction, advancing predictive urban waterlogging management. Full article
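The Lasso-based distillation step described above (37 variables down to five determinants) can be sketched with scikit-learn's `SelectFromModel`. The data here is synthetic and the paper's exact selection mechanism may differ; this only illustrates the idea of letting l1 shrinkage pick a handful of dominant drivers.

```python
import numpy as np
from sklearn.datasets import make_regression
from sklearn.feature_selection import SelectFromModel
from sklearn.linear_model import LassoCV

# Synthetic stand-in: 37 candidate variables, only 5 truly informative,
# mirroring the 37 -> 5 distillation described in the abstract.
X, y = make_regression(n_samples=300, n_features=37, n_informative=5,
                       noise=5.0, random_state=0)

selector = SelectFromModel(LassoCV(cv=5), max_features=5)
selector.fit(X, y)
kept = np.flatnonzero(selector.get_support())   # indices of retained features
```

Downstream models are then trained only on `X[:, kept]`, which is what makes the final predictor lightweight.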

24 pages, 6557 KB  
Article
Ka-Band 16-Channel T/R Module Based on MMIC with Low Cost and High Integration
by Mengyun He, Qinghua Zeng, Xuesong Zhao, Song Wang, Yan Zhao, Pengfei Zhang, Gaoang Li and Xiao Liu
Electronics 2026, 15(6), 1185; https://doi.org/10.3390/electronics15061185 - 12 Mar 2026
Abstract
Based on monolithic microwave integrated circuit (MMIC) technology, this paper presents the design and implementation of a low-cost, highly integrated Ka-band sixteen-channel transmit/receive (T/R) module, specifically tailored to meet the application requirements of phased array antennas in airborne and spaceborne radar systems, satellite communications, and 5G/6G millimeter-wave networks. The proposed module employs an MMIC-based single-channel dual-chip discrete architecture, optimally integrating amplitude-phase multifunction chips and transmit-receive multifunction chips in terms of both fabrication process and performance characteristics, achieving a favorable balance between high performance and high-integration density. Using low-cost, low-temperature co-fired ceramic (LTCC) substrates, full-silver conductive paste, and a nickel–palladium–gold plating process, a novel “back-to-back” thin-slice packaging technique is presented to improve integration, lower manufacturing costs, and boost long-term reliability. Furthermore, the design incorporates glass insulators and a direct array interconnection scheme, which significantly minimizes transmission losses and reduces interface dimensions. The final module measures 70.3 mm × 26.2 mm × 10.9 mm and weighs only 34 g. Experimental results demonstrate a transmit output power of at least 23 dBm, a receive gain exceeding 26 dB, and a noise figure below 3.5 dB, achieving a 22.5–58% reduction in volume per channel while maintaining competitive RF performance. To improve testing effectiveness and guarantee data consistency, an automated radio frequency (RF) test system based on Python 3.11.5 was also developed. This work provides a practical technical approach for the engineering realization of Ka-band phased array systems. Full article

10 pages, 2482 KB  
Proceeding Paper
A Clustering-Enhanced Explainable Approach Involving Convolutional Neural Networks for Predicting the Compressive Strength of Lightweight Aggregate Concrete
by Violeta Migallón, Héctor Penadés and José Penadés
Eng. Proc. 2026, 124(1), 77; https://doi.org/10.3390/engproc2026124077 - 11 Mar 2026
Abstract
Lightweight aggregate concrete (LWAC) is a practical alternative to conventional concrete in civil engineering, offering advantages such as reduced density, enhanced insulation properties, and improved seismic performance. However, segregation during compaction remains a limitation, as it can lead to non-uniform material distribution and reduced compressive strength. This study addresses this issue by combining non-destructive techniques with deep learning methods to predict the compressive strength of LWAC. We propose an explainable approach based on a convolutional recurrent neural network architecture, enhanced by unsupervised clustering and SHapley Additive exPlanations (SHAP), to improve interpretability. To optimize predictive performance, several aggregation strategies are evaluated at the recurrent layer before the dense layers, including full-sequence flattening, max pooling, average pooling, and an attention mechanism over the full sequence. Experimental results show that the proposed model outperforms conventional machine learning methods such as multilayer perceptron (MLP), random forest (RF), and support vector regression (SVR), as well as ensemble methods such as gradient boosting (GBR), XGBoost, and weighted average ensemble (WAE). Furthermore, when combined with unsupervised clustering, the model identifies latent behavioral patterns that are not observable through traditional evaluation techniques. This demonstrates the potential of integrating non-destructive testing with interpretable deep learning as a reliable approach for the structural assessment of LWAC. Full article
(This article belongs to the Proceedings of The 6th International Electronic Conference on Applied Sciences)
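Among the aggregation strategies compared above, the attention mechanism over the full recurrent output sequence is easy to sketch in isolation. This numpy version uses a random scoring vector where the paper's model would learn one; shapes and names are illustrative.

```python
import numpy as np

def attention_pool(seq, w):
    """Attention aggregation over a recurrent layer's full output
    sequence: softmax-weighted sum of the timestep vectors."""
    scores = seq @ w                      # (T,) relevance score per step
    scores = scores - scores.max()        # numerical stability
    alpha = np.exp(scores) / np.exp(scores).sum()  # attention weights
    return alpha @ seq                    # (D,) pooled context vector

rng = np.random.default_rng(0)
seq = rng.normal(size=(10, 8))            # T=10 timesteps, D=8 features
pooled = attention_pool(seq, rng.normal(size=8))
```

Because the weights form a convex combination, the pooled vector stays inside the per-dimension range of the sequence, unlike full-sequence flattening, which preserves every timestep but multiplies the parameter count of the dense layers.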

21 pages, 474 KB  
Article
Performance Evaluation of Machine Learning and Deep Learning Models for Credit Risk Prediction
by Irvine Mapfumo and Thokozani Shongwe
J. Risk Financial Manag. 2026, 19(3), 210; https://doi.org/10.3390/jrfm19030210 - 11 Mar 2026
Abstract
Credit risk prediction is essential for financial institutions to effectively assess the likelihood of borrower defaults and manage associated risks. This study presents a comparative analysis of deep learning architectures and traditional machine learning models on imbalanced credit risk datasets. To address class imbalance, we employ three resampling techniques: Synthetic Minority Over-sampling Technique (SMOTE), Edited Nearest Neighbors (ENN), and the hybrid SMOTE-ENN. We evaluate the performance of various models, including multilayer perceptron (MLP), convolutional neural network (CNN), long short-term memory (LSTM), gated recurrent unit (GRU), logistic regression, decision tree, support vector machine (SVM), random forest, adaptive boosting, and extreme gradient boosting. The analysis reveals that SMOTE-ENN combined with MLP achieves the highest F1-score of 0.928 (accuracy 95.4%) on the German dataset, while SMOTE-ENN with random forest attains the best F1-score of 0.789 (accuracy 82.1%) on the Taiwanese dataset. SHapley Additive exPlanations (SHAP) are employed to enhance model interpretability, identifying key drivers of credit default. These findings provide actionable guidance for developing transparent, high-performing, and robust credit risk assessment systems. Full article
(This article belongs to the Section Financial Technology and Innovation)
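The ENN cleaning step named above can be sketched without imblearn: drop any sample whose 3-nearest-neighbour majority vote disagrees with its own label. This is a generic ENN sketch on synthetic noisy data; imblearn's `EditedNearestNeighbours` differs in details such as which classes it edits.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.neighbors import KNeighborsClassifier

# Generic Edited Nearest Neighbours cleaning on synthetic noisy data:
# remove any sample whose 3-NN majority vote contradicts its own label.
X, y = make_classification(n_samples=500, weights=[0.8], flip_y=0.05,
                           random_state=0)

knn = KNeighborsClassifier(n_neighbors=4).fit(X, y)   # 3 neighbours + self
neigh = knn.kneighbors(X, return_distance=False)[:, 1:]  # drop self column
votes = y[neigh]                                      # labels of the 3 NNs
predicted = votes.mean(axis=1) >= 0.5                 # majority vote
keep = predicted == (y == 1)                          # agreement with label
X_clean, y_clean = X[keep], y[keep]
```

In the hybrid SMOTE-ENN scheme the abstract evaluates, this cleaning pass runs after oversampling, thinning noisy points near the class boundary that SMOTE may have amplified.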

24 pages, 1495 KB  
Article
Predicting Bioactive Compounds in Arbutus unedo L. Leaves Using Machine Learning: Influence of Extraction Technique, Solvent Type, and Geographical Location
by Jasmina Lapić, Anica Bebek Markovinović, Nikolina Račić, Lana Vujanić, Marko Kostić, Dušan Rakić, Senka Djaković and Danijela Bursać Kovačević
Foods 2026, 15(6), 993; https://doi.org/10.3390/foods15060993 - 11 Mar 2026
Abstract
This study investigates the effects of extraction technique, solvent type, and geographical origin on the recovery of bioactive compounds from Arbutus unedo L. leaves collected from two Croatian islands (Vis and Mali Lošinj) and extracted using conventional, Soxhlet, and ultrasound-assisted extraction (UAE) with green solvents (distilled water, 70% ethanol, and ethyl acetate). Extracts were purified and characterized by thin-layer chromatography, column chromatography, and FTIR spectroscopy. Total phenols, hydroxycinnamic acids, flavonols, condensed tannins, and antioxidant capacity were quantified spectrophotometrically. Solvent type had the greatest influence, with 70% ethanol yielding the highest levels of bioactives and antioxidant capacity. Geographical origin significantly affected total phenolics and condensed tannins, with leaves from Vis outperforming those from Mali Lošinj. UAE was slightly more efficient than conventional and Soxhlet methods, particularly for thermolabile phenolics. Machine learning algorithms were applied as exploratory tools, using total phenols as a proxy variable to estimate selected bioactive compounds and antioxidant capacity based on extraction parameters. Decision Tree and Gradient Boosting models showed high goodness of fit within the experimental dataset (R2 > 0.91). These results support the potential of green extraction strategies combined with data-driven screening for the valorization of A. unedo leaf extracts, while highlighting the need for further validation prior to industrial application. Full article

19 pages, 13647 KB  
Article
Identification and Application of Flow Units in Tight Sandstone Reservoirs Under Complex Structural Settings Based on the SSOM Algorithm: A Case Study of the Shaximiao Formation in Southern Sichuan Basin
by Hanxuan Yang, Jiaxun Lu, Yani Deng, Zhiwei Zheng, Lin Jiang, Hui Long, Lei Zhang and Xinrui Wang
Energies 2026, 19(6), 1397; https://doi.org/10.3390/en19061397 - 10 Mar 2026
Abstract
To address the challenges of strong tectonic stress anisotropy, multi-scale pore networks, and complex seepage pathways in the tight sandstone reservoirs of the Shaximiao Formation, southern Sichuan Basin, this study integrates petrophysical analysis with machine learning techniques to develop an intelligent flow unit identification methodology applicable to complex structural settings. Based on core petrophysical properties, mercury injection capillary pressure (MICP) data, and production dynamics, the reservoirs were classified into a fracture-type plus four conventional-type (I–IV) flow unit system. Quantitative identification of flow units was achieved using conventional well-logging curves (Gamma Ray, Spontaneous Potential, Caliper, etc.—eight curves total) using the Gradient Boosting Decision Tree (GBDT), Backpropagation Neural Network (BPANN), and Supervised Self-Organizing Map (SSOM) algorithms. Key findings include the following: The SSOM algorithm delivered optimal performance, achieving a 90.1% average accuracy on the test set, significantly outperforming GBDT (87.8%) and BPANN (85.5%), particularly in capturing nonlinear responses of fracture-type reservoirs and class-overlapping samples. Flow unit spatial distribution exhibits dual sedimentary-structural control: High-quality units (Types I/II) are enriched at the base of distributary channels in deltaic plain facies (J2S12), while fracture-type units cluster near fault peripheries. Strong planar heterogeneity is observed in the J2S13 sub-member: Near-source areas (south/southwest) develop banded Type I/II units, whereas distal regions are dominated by Type IV units. This methodology provides a theoretical foundation and intelligent technological pathway for the efficient development of highly heterogeneous tight sandstone reservoirs. Full article
(This article belongs to the Section H: Geo-Energy)
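The supervised workflow in this abstract (multi-class flow-unit labels predicted from a handful of well-logging curves) can be illustrated with one of the three algorithms it names, a Gradient Boosting Decision Tree. This is a minimal sketch only: the data are synthetic placeholders, and the curve count, class encoding (0 = fracture type, 1–4 = Types I–IV), and all hyperparameters are assumptions, not the study's actual setup.

```python
# Sketch: flow-unit classification from well-log curves with a GBDT.
# All data are synthetic; the eight "curves" and five class labels
# (0 = fracture type, 1-4 = Types I-IV) are illustrative assumptions.
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n_samples, n_curves = 1000, 8                # eight logging curves per depth sample
X = rng.normal(size=(n_samples, n_curves))   # placeholder log responses (GR, SP, CAL, ...)
y = rng.integers(0, 5, size=n_samples)       # placeholder flow-unit labels
y = (y + (X[:, 0] > 0.5).astype(int)) % 5    # weak signal so the demo is non-trivial

X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)
clf = GradientBoostingClassifier(random_state=0).fit(X_tr, y_tr)
acc = accuracy_score(y_te, clf.predict(X_te))
print(f"test accuracy on synthetic data: {acc:.3f}")
```

On real well-log data the same pattern applies, with the synthetic `X` replaced by depth-aligned curve values and `y` by core-calibrated flow-unit labels.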
9 pages, 514 KB  
Proceeding Paper
Predictive Analytics for Inventory Backorder Optimization Using Machine Learning
by Thean Pheng Lim, Shi Yean Wong, Wei Chien Ng and Guat Guan Toh
Eng. Proc. 2026, 128(1), 13; https://doi.org/10.3390/engproc2026128013 - 9 Mar 2026
Viewed by 249
Abstract
The need for effective inventory management in the transition from “Just-in-Time” to “Just-in-Case” supply chain strategies was addressed by developing a machine learning model to predict inventory backorders. Using a large stock-keeping unit (SKU) dataset, five supervised learning algorithms, namely, logistic regression, random forest, k-nearest neighbours, Naïve Bayes, and gradient boosting, were implemented with Python 3.13. Data imbalance was managed using the synthetic minority over-sampling technique, while power transformation was applied to improve data distribution and model performance. Among the models, random forest demonstrated the highest prediction accuracy at 98% and a strong receiver operating characteristic score of 0.897, making it the best model for backorder prediction. This approach enhances supply chain resilience and proactive inventory control, enabling manufacturers to mitigate risks of stockouts and optimize resource planning. It is necessary to incorporate advanced balancing techniques, hyperparameter tuning, and cross-validation methods to improve predictive performance further. Full article
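The pipeline this abstract describes (power-transform the features, rebalance the training data, then fit a random forest) can be sketched as follows. The paper uses SMOTE via an external library; here plain random over-sampling of the minority class stands in so the example needs only scikit-learn, and all data are synthetic stand-ins for the SKU dataset.

```python
# Sketch of the backorder pipeline: power transform + rebalancing + random forest.
# Random over-sampling substitutes for SMOTE; features and labels are synthetic.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import PowerTransformer
from sklearn.utils import resample

rng = np.random.default_rng(1)
X = rng.lognormal(size=(2000, 6))                              # skewed inventory features
y = (0.3 * X[:, 0] + rng.normal(size=2000) > 2.5).astype(int)  # rare backorder events

X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=1)

pt = PowerTransformer().fit(X_tr)            # fit the transform on training data only
X_tr_t, X_te_t = pt.transform(X_tr), pt.transform(X_te)

# over-sample the minority class in the training split only (test set untouched)
minority = y_tr == 1
X_min_up, y_min_up = resample(X_tr_t[minority], y_tr[minority],
                              n_samples=int((~minority).sum()), random_state=1)
X_bal = np.vstack([X_tr_t[~minority], X_min_up])
y_bal = np.concatenate([y_tr[~minority], y_min_up])

rf = RandomForestClassifier(random_state=1).fit(X_bal, y_bal)
auc = roc_auc_score(y_te, rf.predict_proba(X_te_t)[:, 1])
print(f"ROC-AUC on held-out synthetic data: {auc:.3f}")
```

Note that both the power transform and the over-sampling are fitted or applied only on the training split, which keeps the held-out evaluation free of leakage.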

43 pages, 1950 KB  
Review
A Comprehensive Review of Machine Learning and Deep Learning Methods for Flood Inundation Mapping
by Abinash Silwal, Anil Subedi, Rajee Tamrakar, Kshitij Dahal, Dewasis Dahal, Kenneth Okechukwu Ekpetere and Mohamed Zhran
Earth 2026, 7(2), 44; https://doi.org/10.3390/earth7020044 - 9 Mar 2026
Viewed by 938
Abstract
Flood inundation mapping (FIM) is essential in disaster risk management, infrastructure planning, and climate adaptation. Traditional hydrodynamic models, such as the Hydrologic Engineering Center’s River Analysis System (HEC-RAS) and LISFLOOD-Floodplain (LISFLOOD-FP), provide physically interpretable flood simulations but are often data- and computation-intensive and difficult to scale across regions. In recent years, machine learning (ML) and deep learning (DL) approaches have emerged as data-driven alternatives that leverage remote sensing observations, digital elevation models (DEMs), and hydro-climatic datasets to enable scalable and near-real-time flood mapping. Our review synthesizes recent advances in ML-based flood inundation mapping, categorizing methods into traditional machine learning techniques (e.g., Random Forest (RF), Support Vector Machines (SVM), Gradient Boosting (GB)), deep learning architectures (e.g., Convolutional Neural Networks (CNNs), U-Net, Long Short-Term Memory networks (LSTM)), and emerging hybrid and physics-informed frameworks. We evaluate model performance across flood extent and flood depth estimation tasks, highlighting strengths, limitations, and common benchmarking practices reported in the literature. The review identifies key challenges related to model interpretability, data bias, transferability, and regulatory acceptance, and highlights recent progress in explainable artificial intelligence (XAI), uncertainty-aware modeling, and physics-informed learning as pathways toward operational adoption. By unifying terminology, performance metrics, and methodological comparisons, this review provides a coherent framework for advancing trustworthy, scalable, and decision-relevant flood inundation mapping under increasing climate-driven flood risk. Full article