Search Results (227)

Search Parameters:
Keywords = histogram-based operation

18 pages, 5195 KB  
Article
Computational Ghost Imaging Encryption for Multiple Images Based on Compressed Sensing and Block Scrambling
by Zhipeng Wang, Jiahuan Yang, Ruizhi Ge, Yingying Zhang and Yi Qin
Information 2026, 17(3), 239; https://doi.org/10.3390/info17030239 - 1 Mar 2026
Viewed by 173
Abstract
To achieve high capacity, high speed, and secure image transmission, we propose a multi-image computational ghost imaging (CGI)-based encryption scheme that integrates compressed sensing (CS), block scrambling, and dynamic-salt-driven bidirectional XOR diffusion. First, multiple images are partitioned into 8 × 8 pixel blocks, and their spatial structure is disrupted through random scrambling. The scrambled composite image then undergoes pixel-level encryption via two-round bidirectional XOR diffusion, using session-unique keys derived from SHA-256-based dynamic salt, eliminating the statistical characteristics of the original images. Subsequently, each pixel block is subjected to both Gaussian CS and Hadamard-based CGI measurements in parallel, achieving dual-mode compressive encryption and enhancing robustness through measurement redundancy. Finally, only the scrambling key, the XOR-diffusion key, and the compressed measurements are stored; the original image information is thus transformed into unrecognizable measurement data. During the decryption process, the Fast Iterative Shrinkage-Thresholding Algorithm (FISTA) with a Discrete Cosine Transform (DCT) sparse basis is employed for dual-sparse reconstruction from the compressed measurements, recovering the encrypted composite image. An inverse XOR operation is then applied to remove the pixel-level diffusion, followed by block reordering using the scrambling key to restore the original images. Experimental results demonstrate that the proposed scheme enables efficient and secure multi-image transmission while maintaining high decrypted image quality. Security analysis indicates that the scheme possesses high key sensitivity, effectively resisting chosen-plaintext attacks. Histogram uniformity analysis and cropping attack resistance experiments further confirm its excellent statistical security and robustness. Full article
(This article belongs to the Section Information Processes)
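The dynamic-salt-driven bidirectional XOR diffusion step lends itself to a compact sketch. The key schedule below (chained SHA-256 over a session salt) is an assumption for illustration, since the abstract does not spell out the authors' exact derivation, and the block-scrambling and CGI measurement stages are omitted:

```python
import hashlib
import numpy as np

def keystream(salt: bytes, n: int) -> np.ndarray:
    # Assumed key schedule: chain SHA-256 over the session salt until n bytes exist.
    out, block = bytearray(), salt
    while len(out) < n:
        block = hashlib.sha256(block).digest()
        out.extend(block)
    return np.frombuffer(bytes(out[:n]), dtype=np.uint8)

def xor_diffuse(p: np.ndarray, k1: np.ndarray, k2: np.ndarray) -> np.ndarray:
    """Two-round bidirectional diffusion: forward then backward XOR chaining."""
    c = p.copy()
    for i in range(len(c)):                    # forward: c[i] = p[i] ^ k1[i] ^ c[i-1]
        c[i] ^= k1[i] ^ (c[i - 1] if i else 0)
    for i in range(len(c) - 1, -1, -1):        # backward: chain with the already-diffused c[i+1]
        c[i] ^= k2[i] ^ (c[i + 1] if i + 1 < len(c) else 0)
    return c

def xor_undiffuse(c: np.ndarray, k1: np.ndarray, k2: np.ndarray) -> np.ndarray:
    """Invert the backward pass, then the forward pass."""
    mid = np.empty_like(c)
    for i in range(len(c)):
        mid[i] = c[i] ^ k2[i] ^ (c[i + 1] if i + 1 < len(c) else 0)
    p = np.empty_like(c)
    for i in range(len(c)):
        p[i] = mid[i] ^ k1[i] ^ (mid[i - 1] if i else 0)
    return p
```

Because each pass chains neighbouring pixels, a single-bit plaintext change propagates through the whole ciphertext, which is what flattens the histograms the authors analyse.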

14 pages, 421 KB  
Article
Artificial Intelligence-Based Evaluation of Permanent First Molar Extraction Indications in Children Using Panoramic Radiographs
by Serap Gülçin Çetin, Ömer Faruk Ertuğrul, Nursezen Kavasoğlu and Veysel Eratilla
Children 2026, 13(2), 277; https://doi.org/10.3390/children13020277 - 17 Feb 2026
Viewed by 268
Abstract
Background: The aim of this study was to develop an artificial intelligence (AI)-based decision support model for evaluating the extraction indication of permanent first molars in pediatric patients using panoramic radiographs, and to investigate the potential contribution of this model to the clinical decision-making process. Methods: This retrospective observational study analyzed 1000 panoramic radiographs obtained from children aged 8–10 years who attended the Clinics of Batman University Faculty of Dentistry for routine dental examination. Among the radiographs meeting the inclusion criteria, a total of 176 panoramic images were selected based on dental maturation assessment using the Demirjian tooth development staging system. Cases in which the permanent second molar was classified as Demirjian stages E–F were labeled as “extraction indication present”, while the remaining stages were labeled as “extraction indication absent”. A balanced dataset was created, consisting of 88 cases in each group. Image features were extracted using Gabor filters and Histogram of Oriented Gradients (HOG). The selected features were analyzed using a Support Vector Machine (SVM) classifier with a radial basis function (RBF) kernel. Model performance was evaluated using accuracy, sensitivity, specificity, F1-score, and area under the receiver operating characteristic curve (ROC–AUC). Results: The proposed Gabor–HOG–SVM-based AI model achieved an overall classification accuracy of 77.78% with an AUC value of 0.77 in distinguishing between “extraction indication present” and “extraction indication absent” cases. For the extraction-indicated group, the sensitivity was 0.81 and the F1-score was 0.79, whereas for the non-indicated group, the sensitivity and F1-score were 0.74 and 0.77, respectively. No statistically significant differences were observed between the groups in terms of age or sex distribution (p > 0.05). 
Conclusions: This study demonstrates that artificial intelligence-based analysis of panoramic radiographic images can provide an objective and reproducible decision support approach for evaluating extraction indications of permanent first molars in pediatric patients. The proposed model should be considered as an adjunctive tool to reduce observer-dependent variability rather than a replacement for clinical judgment, and its clinical applicability should be further validated through multicenter and multi-parametric studies. Full article
(This article belongs to the Section Pediatric Dentistry & Oral Medicine)
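As a rough illustration of the HOG half of the Gabor–HOG–SVM pipeline, a global gradient-orientation histogram can be computed in a few lines of NumPy. This toy version omits full HOG's cell/block decomposition and normalization:

```python
import numpy as np

def orientation_histogram(img: np.ndarray, bins: int = 9) -> np.ndarray:
    """Minimal HOG-style global descriptor: histogram of unsigned gradient
    orientations (0-180 deg) weighted by gradient magnitude."""
    gy, gx = np.gradient(img.astype(float))
    mag = np.hypot(gx, gy)
    ang = np.rad2deg(np.arctan2(gy, gx)) % 180.0   # fold to unsigned orientation
    hist, _ = np.histogram(ang, bins=bins, range=(0, 180), weights=mag)
    s = hist.sum()
    return hist / s if s else hist
```

In the paper's setting such descriptors, concatenated with Gabor responses, would feed an RBF-kernel SVM; a vertical edge loads the first (0 degree) bin, a horizontal edge the middle (90 degree) bin.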

50 pages, 3261 KB  
Article
Impact of Internal Validation Protocols on Predictive Maintenance Performance in Biomedical Equipment
by Jihanne Moufid, Rim Koulali, Khalid Moussaid and Noreddine Abghour
Technologies 2026, 14(2), 115; https://doi.org/10.3390/technologies14020115 - 12 Feb 2026
Viewed by 393
Abstract
Predictive maintenance (PdM) is a strategic enabler of healthcare digitalization, yet its deployment remains constrained by methodological weaknesses in model evaluation. Biomedical maintenance data, structured around equipment life cycles and repeated interventions, violate the independence and stationarity assumptions of conventional random cross-validation. This work presents an empirical analysis of internal validation protocol design using a real-world, multi-hospital dataset comprising 3403 maintenance interventions. Three classification models (logistic regression, random forest, histogram-based gradient boosting) are evaluated under four validation schemes: random K-fold, equipment-grouped K-fold, temporal holdout, and roll-forward validation. The results reveal a consistent decrease in apparent predictive performance as validation constraints are progressively strengthened. Random cross-validation overestimates AUROC by approximately 0.03–0.06 compared with temporally constrained protocols. Under deployment-aligned temporal validation, model performance stabilizes at an AUROC of approximately 0.83–0.84. Equipment-grouped and temporal validation effectively mitigate structural bias and yield more stable and interpretable models. These findings highlight the critical role of validation protocol choice in the credible assessment of predictive maintenance models and provide practical guidance for the deployment of PdM systems based on real-world data in resource-limited healthcare environments. The analysis is limited to public hospitals within a single national context and relies on a class-balanced experimental subset, which may affect the direct transferability of absolute performance estimates to other healthcare systems or operational settings. Full article
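The optimism of random cross-validation on life-cycle-structured data is easy to reproduce synthetically. In the hypothetical construction below, each machine has a feature "fingerprint" and a near-constant label, so random K-fold lets the model memorize machines it will be tested on, while equipment-grouped K-fold does not; this is an illustration, not the paper's dataset:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import GroupKFold, KFold, cross_val_score

rng = np.random.default_rng(42)
n_groups, per_group = 40, 25
fingerprint = rng.normal(size=(n_groups, 3))   # identifies the machine, not the fault
machine_label = rng.integers(0, 2, n_groups)   # failure-prone machine or not
groups = np.repeat(np.arange(n_groups), per_group)
X = fingerprint[groups] + rng.normal(scale=0.05, size=(len(groups), 3))
y = machine_label[groups].copy()
flip = rng.random(len(y)) < 0.15               # per-intervention label noise
y[flip] = 1 - y[flip]

clf = RandomForestClassifier(n_estimators=100, random_state=0)
auc_random = cross_val_score(clf, X, y, scoring="roc_auc",
                             cv=KFold(5, shuffle=True, random_state=0)).mean()
auc_grouped = cross_val_score(clf, X, y, groups=groups, scoring="roc_auc",
                              cv=GroupKFold(5)).mean()
```

The gap between `auc_random` and `auc_grouped` is the same kind of structural bias the authors quantify at roughly 0.03 to 0.06 AUROC on real maintenance data (the synthetic gap here is deliberately exaggerated).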

25 pages, 7517 KB  
Article
VCC: Vertical Feature and Circle Combined Descriptor for 3D Place Recognition
by Wenguang Li, Yongxin Ma, Jiying Ren, Jinshun Ou, Jun Zhou and Panling Huang
Sensors 2026, 26(4), 1185; https://doi.org/10.3390/s26041185 - 11 Feb 2026
Viewed by 243
Abstract
Loop closure detection remains a critical challenge in LiDAR-based SLAM, particularly for achieving robust place recognition in environments with rotational and translational variations. To extract more concise environmental representations from point clouds and improve extraction efficiency, this paper proposes a novel composite descriptor—the vertical feature and circle combined (VCC) descriptor, a novel 3D local descriptor designed for efficient and rotation-invariant place recognition. The VCC descriptor captures environmental structure by extracting vertical features from voxelized point clouds and encoding them into circular arc-based histograms, ensuring robustness to viewpoint changes. Under the same hardware, experiments conducted on different datasets demonstrate that the proposed algorithm significantly improves both feature representation efficiency and loop closure recognition performance when compared with the other descriptors, completing loop closure retrieval within 30 ms, which satisfies real-time operation requirements. The results confirm that VCC provides a compact, efficient, and rotation-invariant representation suitable for LiDAR-based SLAM systems. Full article
(This article belongs to the Section Radar Sensors)
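The idea of encoding per-arc vertical features and matching under rotation can be sketched as follows; this toy descriptor (max point height per azimuthal arc, compared over circular shifts) is a stand-in for, not a reproduction of, the published VCC definition:

```python
import numpy as np

def circle_descriptor(points: np.ndarray, n_bins: int = 36) -> np.ndarray:
    """Toy ring descriptor: tallest point height per azimuthal arc around the sensor."""
    ang = np.arctan2(points[:, 1], points[:, 0]) % (2 * np.pi)
    idx = (ang / (2 * np.pi) * n_bins).astype(int) % n_bins
    desc = np.zeros(n_bins)
    np.maximum.at(desc, idx, points[:, 2])     # keep the max z per arc
    return desc

def yaw_invariant_distance(a: np.ndarray, b: np.ndarray) -> float:
    """Minimum L2 distance over circular shifts: invariant to sensor yaw."""
    return min(float(np.linalg.norm(a - np.roll(b, s))) for s in range(len(b)))
```

Rotating the scan about the vertical axis only shifts the descriptor circularly, so shift-minimized matching is what buys the rotation invariance the abstract emphasizes.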

33 pages, 745 KB  
Article
XAI-Driven Malware Detection from Memory Artifacts: An Alert-Driven AI Framework with TabNet and Ensemble Classification
by Aristeidis Mystakidis, Grigorios Kalogiannnis, Nikolaos Vakakis, Nikolaos Altanis, Konstantina Milousi, Iason Somarakis, Gabriela Mihalachi, Mariana S. Mazi, Dimitris Sotos, Antonis Voulgaridis, Christos Tjortjis, Konstantinos Votis and Dimitrios Tzovaras
AI 2026, 7(2), 66; https://doi.org/10.3390/ai7020066 - 10 Feb 2026
Viewed by 718
Abstract
Modern malware presents significant challenges to traditional detection methods, often leveraging fileless techniques, in-memory execution, and process injection to evade antivirus and signature-based systems. To address these challenges, alert-driven memory forensics has emerged as a critical capability for uncovering stealthy, persistent, and zero-day threats. This study presents a two-stage host-based malware detection framework that integrates memory forensics, explainable machine learning, and ensemble classification, designed as a post-alert asynchronous SOC workflow balancing forensic depth and operational efficiency. Utilizing the MemMal-D2024 dataset—comprising rich memory forensic artifacts from Windows systems infected with malware samples whose creation metadata spans 2006–2021—the system performs malware detection using features extracted from volatile memory. In the first stage, an Attentive and Interpretable Learning for structured Tabular data (TabNet) model is used for binary classification (benign vs. malware), leveraging its sequential attention mechanism and built-in explainability. In the second stage, a Voting Classifier ensemble, composed of Light Gradient Boosting Machine (LGBM), eXtreme Gradient Boosting (XGB), and Histogram Gradient Boosting (HGB) models, is used to identify the specific malware family (Trojan, Ransomware, Spyware). To reduce memory dump extraction and analysis time without compromising detection performance, only a curated subset of 24 memory features—operationally selected to reduce acquisition/extraction time and validated via redundancy inspection, model explainability (SHAP/TabNet), and training data correlation analysis—was used during training and runtime, identifying the best trade-off between memory analysis and detection accuracy. 
The pipeline, which is triggered by host-based Wazuh Security Information and Event Management (SIEM) alerts, achieved 99.97% accuracy in binary detection and 70.17% multiclass accuracy, resulting in an overall performance of 87.02%, while providing both global and local explainability to ensure operational transparency and forensic interpretability. This approach provides an efficient and interpretable detection solution that can be used in combination with conventional security tools as an extra layer of defense suitable for modern threat landscapes. Full article

17 pages, 4637 KB  
Article
An Approach for Spectrum Extraction Based on Canny Operator-Enabled Adaptive Edge Extraction and Centroid Localization
by Ao Li, Xinlan Ge, Zeyu Gao, Qiang Yuan, Yong Chen, Chao Yang, Licheng Zhu, Shiqing Ma, Shuai Wang and Ping Yang
Photonics 2026, 13(2), 169; https://doi.org/10.3390/photonics13020169 - 10 Feb 2026
Viewed by 255
Abstract
In adaptive optics systems, high spatial resolution detection is a core prerequisite for achieving accurate wavefront correction. High spatial resolution wavefront measurement based on the traditional Shack-Hartmann technique is limited by the density of the microlens array. In contrast, off-axis digital holography technology is applied in wavefront measurement systems of adaptive optics systems due to its advantages of high spatial resolution, non-contact measurement, and full-field measurement. However, during the demodulation of its interference fringes, the accurate extraction of the complex amplitude of the +1st-order diffraction order directly determines the precision of wavefront reconstruction. Traditional frequency-domain filtering methods suffer from drawbacks such as reliance on manual threshold setting, poor adaptability to irregular spectra, and localization deviations caused by multi-region interference, making it difficult to meet the dynamic application requirements of adaptive optics. To address these issues, this study proposes a spectrum extraction method based on the Canny operator for adaptive edge extraction and centroid localization. The method first locks the rough range of the +1st-order spectrum through multi-stage peak screening, then achieves complete segmentation of spectrum spots by combining adaptive histogram equalization with edge closing and filling, resolves centroid indexing errors via maximum connected component screening, and ultimately accomplishes accurate extraction through Gaussian window filtering. Simulation experimental results show that, in comparison with two classical spectrum filtering methods, the centroid estimation error of the proposed method remains below 0.245 pixels under different noise intensity conditions. 
Moreover, the root mean square error of the residual wavefront corresponding to the reconstructed wavefront of the proposed method is reduced by 89.0% and 87.2% compared with those of the two classical methods, respectively. We further carried out measurement experiments based on a self-developed atmospheric turbulence test bench. The experimental results demonstrate that the proposed method exhibits higher-precision spectral centroid localization capability, which provides a reliable technical support for the high-precision measurement of dynamic distortion induced by atmospheric turbulence. Full article
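The maximum-connected-component screening followed by centroid localization can be sketched as below (pure-NumPy/BFS labelling; the Canny edge extraction, adaptive histogram equalization, and Gaussian window filtering stages are omitted):

```python
from collections import deque
import numpy as np

def largest_component_centroid(mag: np.ndarray, thresh: float):
    """Magnitude-weighted centroid of the largest 4-connected region above
    threshold: a toy version of maximum-connected-component screening."""
    mask = mag > thresh
    seen = np.zeros_like(mask, dtype=bool)
    best = []
    for r0, c0 in zip(*np.nonzero(mask)):
        if seen[r0, c0]:
            continue
        comp, q = [], deque([(r0, c0)])        # BFS flood fill of one component
        seen[r0, c0] = True
        while q:
            r, c = q.popleft()
            comp.append((r, c))
            for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                rr, cc = r + dr, c + dc
                if (0 <= rr < mask.shape[0] and 0 <= cc < mask.shape[1]
                        and mask[rr, cc] and not seen[rr, cc]):
                    seen[rr, cc] = True
                    q.append((rr, cc))
        if len(comp) > len(best):              # keep only the biggest region
            best = comp
    rows, cols = np.array(best).T
    w = mag[rows, cols]
    return float((rows * w).sum() / w.sum()), float((cols * w).sum() / w.sum())
```

Keeping only the largest component is what suppresses the multi-region interference the authors identify as a source of centroid indexing errors.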

16 pages, 4846 KB  
Article
Therapeutically Induced Modulation of Collagen I-to-III Ratio Three Weeks After Rabbit Achilles Tendon Full Transection
by Gabriella Meier Bürgisser, Olivera Evrova, Pietro Giovanoli, Maurizio Calcagni and Johanna Buschmann
Biology 2026, 15(2), 204; https://doi.org/10.3390/biology15020204 - 22 Jan 2026
Viewed by 262
Abstract
During tendon healing, collagen III expression precedes that of collagen I. The collagen I-to-III ratio at a certain time point post-laceration serves as an indicator of the healing status. Consequently, it is crucial to understand how different therapeutic approaches to support tendon healing affect the collagen I-to-III ratio in the extracellular matrix of a healing tendon, particularly across distinct anatomical zones. We compared the impact of a platelet-derived growth factor-BB (PDGF-BB) treatment via controlled release from coaxially electrospun DegraPol® (Ab medica, Cerro Maggiore, Italy) hollow-fiber mesh with a treatment by the vehicle alone (no PDGF-BB) in the rabbit Achilles tendon full transection model and provide data on the collagen I-to-III ratio 3 weeks post-operation. For this purpose, we compared a dual-color Herovici staining to two single IHC labelings for collagen I and collagen III, respectively. Herovici staining (HV) was expected to offer a more precise approach (pink-to-blue histogram) than the two separately labeled IHC stainings, both with chromogenic DAB labeling (red-to-green histogram), despite an anticipated positive correlation of the data assessed by these methods. Different zones were compared, i.e., native tendon tissue, the reactive zone at the interface to the implant, the hot zone within the core of the healing tendon, and the zone within the scaffold, i.e., the collagen deposited within the fibers of the implanted DegraPol® tube. The analysis revealed that the ratios obtained via HV correlated weakly with the ratios obtained by IHC. Based on HV, PDGF-BB therapy led to higher collagen I-to-III ratios in all zones, except for the zone within the scaffold pores, while IHC did not reveal significant differences. Notably, collagen I-to-III ratios were not higher in immediate proximity, but rather distal from the PDGF-BB releasing implant, specifically in the core of the healing tendon tissue. 
Hence, a PDGF-BB therapy is suggestive of greater collagen maturation in specific zones of the healing tendon. Full article
(This article belongs to the Section Zoology)

24 pages, 3406 KB  
Article
Reliability Assessment of the Infrastructure Leakage Index for a Single DMA Using High-Resolution AMI Water Meter Data
by Ewelina Kilian-Błażejewska, Wojciech Koral and Bożena Gil
Water 2026, 18(2), 198; https://doi.org/10.3390/w18020198 - 12 Jan 2026
Viewed by 346
Abstract
This study presents an analysis of the Infrastructure Leakage Index (ILI) variability for two District Metered Areas (DMAs) in the Silesian Region (Poland), based on 2024 data. The objective of the study was to evaluate whether high-frequency AMI data can be used to reliably identify and remove distorted measurement periods, thereby improving the credibility of the annual ILI value for each individual DMA. ILIT values were calculated for daily, weekly, and monthly intervals using synchronized hourly data from an Advanced Metering Infrastructure (AMI) system and water network monitoring platforms. A key methodological advantage was the use of fully synchronous inflow–outflow–consumption data, enabling diagnostic reconstruction of hourly water balances and validation of the representativeness of data segments used for ILIT estimation. The study applied statistical measures of variability (standard deviation, variance, coefficient of variation) and graphical methods (histograms, boxplots) to evaluate ILIT behavior across time resolutions. Rather than comparing leakage performance between DMAs—which is performed exclusively using normalized indicators such as ILI—the analysis examined how hourly diagnostic information explains short-term distortions in the ILI and how filtering such periods affects the stability of the annual value for each DMA. The results confirm that ILIT interpretation is highly dependent on temporal resolution. Daily data is more responsive to anomalies and operational events, while monthly data provides more stable values suitable for benchmarking. The findings demonstrate that daily and hourly data should be used diagnostically to detect non-representative periods, whereas monthly aggregation provides the most robust basis for reporting and inter-DMA comparison. Overall, the study proposes a practical procedure for ILI validation using AMI data and demonstrates its application on two real DMAs. Full article
(This article belongs to the Section Urban Water Management)
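The core observation, that aggregation level changes the variability of the indicator and that daily data flags distorted periods, can be illustrated on toy data (the series below is synthetic, not an ILI computed from AMI measurements):

```python
import numpy as np

rng = np.random.default_rng(1)
daily = 2.0 + rng.normal(scale=0.6, size=360)  # toy daily ILI-like indicator
daily[90:97] += 3.0                            # one distorted week (e.g. a meter outage)

def cv(x: np.ndarray) -> float:
    """Coefficient of variation, one of the paper's variability measures."""
    return float(x.std() / x.mean())

monthly = daily.reshape(12, 30).mean(axis=1)   # monthly aggregation smooths anomalies
keep = np.abs(daily - np.median(daily)) < 3 * daily.std()
annual_filtered = daily[keep].mean()           # annual value after dropping distorted days
```

The daily series has a much higher coefficient of variation than the monthly one, and the simple median-distance filter removes most of the distorted week before the annual value is formed, mirroring the diagnostic-then-aggregate procedure the authors propose.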

25 pages, 8488 KB  
Article
From Localized Collapse to City-Wide Impact: Ensemble Machine Learning for Post-Earthquake Damage Classification
by Bilal Ein Larouzi and Yasin Fahjan
Infrastructures 2026, 11(1), 25; https://doi.org/10.3390/infrastructures11010025 - 12 Jan 2026
Viewed by 404
Abstract
Effective disaster management depends on rapidly understanding earthquake damage, yet traditional methods struggle to operate at scale and rely on expert inspections that become difficult when access is limited or time is critical. Satellite-based damage detection also faces limitations, particularly under adverse weather conditions and delays associated with satellite overpass schedules. This study introduces a machine learning-based approach to assess post-earthquake building damage using real observations collected after the event. The aim is to develop fast and reliable estimation techniques that can be deployed immediately after the mainshock by integrating structural, seismic, and geographic data. Three machine learning models—Random Forest, Histogram Gradient Boosting, and Bagging Classifier—are evaluated across both reinforced concrete and masonry buildings and across multiple spatial levels, including building, district, and city scales. Damage is categorized using practical three-class (traffic light) and detailed four-class systems. The models generally perform better in simpler classifications, with the Bagging Classifier offering the most consistent results across different scales. Although detecting severely damaged buildings remains challenging in some cases, the three-class system proves especially effective for supporting rapid decision-making during emergency response. Overall, this study demonstrates how machine learning can provide faster, scalable, and practical earthquake damage assessments that benefit emergency teams and urban planners. Full article
(This article belongs to the Topic Disaster Risk Management and Resilience)

11 pages, 1585 KB  
Article
Statistical Post-Processing of Ensemble LLWS Forecasts Using EMOS: A Case Study at Incheon International Airport
by Chansoo Kim
Appl. Sci. 2026, 16(2), 750; https://doi.org/10.3390/app16020750 - 11 Jan 2026
Viewed by 255
Abstract
Low-level wind shear (LLWS) is a critical aviation hazard that can cause flight disruptions and pose significant safety risks. Despite its operational importance, forecasting LLWS remains a challenging task. To improve LLWS prediction, probabilistic forecasting approaches based on ensemble prediction systems are increasingly used. In this study, LLWS forecasts were generated using a high-resolution, limited-area ensemble model, which allows for the representation of forecast uncertainty and variability in atmospheric conditions. Forecasts for Incheon International Airport were generated twice daily over the period from December 2018 to February 2020. To enhance forecast skill, statistical post-processing techniques, specifically Ensemble Model Output Statistics (EMOS), were applied and calibrated using Aircraft Meteorological Data Relay (AMDAR) observations. Prior to calibration, rank histograms were examined to assess the reliability and distributional consistency of the ensemble forecasts. Forecast performance was evaluated using commonly applied probabilistic verification metrics, including the mean absolute error (MAE), the continuous ranked probability score (CRPS), and probability integral transform (PIT). The results indicate that ensemble forecasts adjusted through statistical post-processing generally provide more reliable and accurate predictions than the unprocessed raw ensemble outputs. Full article
(This article belongs to the Special Issue Advanced Statistical Methods in Environmental and Climate Sciences)
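A rank histogram of the kind used for the pre-calibration reliability check can be computed in a few lines; a flat histogram indicates a well-dispersed ensemble, while heavy end bins signal underdispersion:

```python
import numpy as np

def rank_histogram(ens: np.ndarray, obs: np.ndarray) -> np.ndarray:
    """Verification rank histogram: rank of each observation within its
    m-member ensemble, counted into m+1 bins."""
    m = ens.shape[1]
    ranks = (ens < obs[:, None]).sum(axis=1)   # how many members fall below the obs
    return np.bincount(ranks, minlength=m + 1)
```

An observation that keeps falling outside the ensemble envelope piles counts into the two outermost bins, which is the classic U-shape diagnosed before applying EMOS-style recalibration.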

33 pages, 3089 KB  
Article
A Machine Learning-Based Data-Driven Model for Predicting Wastewater Quality Parameters in the Industrial Domain
by Madalina Carbureanu and Catalina Gabriela Gheorghe
Appl. Sci. 2026, 16(2), 694; https://doi.org/10.3390/app16020694 - 9 Jan 2026
Cited by 1 | Viewed by 491
Abstract
This study proposes HGBRCond, a machine learning model for conductivity prediction in controlled biodegradation processes. Eight regression algorithms were evaluated using experimental data (n = 424) from a micro-pilot treatment system. HGBRCond, based on Histogram-Gradient Boosting Regression (the best-performing ML model), achieved optimal performance (R2 = 0.877 ± 0.011, RMSE = 10.235 ± 0.54 µS/cm) through 10-fold cross-validation. Unlike standard HGBR and previous conductivity models that lack comprehensive validation frameworks, HGBRCond integrates rigorous statistical validation (cross-validation, sensitivity analysis, confidence intervals) with multi-level interpretability (Morris screening, SHAP analysis, feature importance), achieving a 6.8% performance improvement over standard gradient boosting approaches while addressing mechanistic interpretability gaps present in prior work. However, several limitations constrain direct industrial applicability: a limited dataset (n = 424), a narrow conductivity range (285–360 µS/cm), strong dissolved oxygen dependence, sensitivity across two critical parameters, constant flowrate, and validation restricted to controlled conditions. These constraints require model recalibration before industrial application. Future work will focus on model validation across extended operational ranges using industrial samples and full-scale testing to establish applicability beyond controlled experimental settings. Full article

45 pages, 1557 KB  
Article
A Hybrid Gradient-Based Optimiser for Solving Complex Engineering Design Problems
by Jamal Zraqou, Riyad Alrousan, Zaid Khrisat, Faten Hamad, Niveen Halalsheh and Hussam Fakhouri
Computation 2026, 14(1), 11; https://doi.org/10.3390/computation14010011 - 4 Jan 2026
Cited by 1 | Viewed by 462
Abstract
This paper proposes JADEGBO, a hybrid gradient-based metaheuristic for solving complex single- and multi-constraint engineering design problems as well as cost-sensitive security optimisation tasks. The method combines Adaptive Differential Evolution with Optional External Archive (JADE), which provides self-adaptive exploration through p-best mutation, an external archive, and success-based parameter learning, with the Gradient-Based Optimiser (GBO), which contributes Newton-inspired gradient search rules and a local escaping operator. In the proposed scheme, JADE is first employed to discover promising regions of the search space, after which GBO performs an intensified local refinement of the best individuals inherited from JADE. The performance of JADEGBO is assessed on the CEC2017 single-objective benchmark suite and compared against a broad set of classical and recent metaheuristics. Statistical indicators, convergence curves, box plots, histograms, sensitivity analyses, and scatter plots show that the hybrid typically attains the best or near-best mean fitness, exhibits low run-to-run variance, and maintains a favourable balance between exploration and exploitation across rotated, shifted, and composite landscapes. To demonstrate practical relevance, JADEGBO is further applied to the following four well-known constrained engineering design problems: welded beam, pressure vessel, speed reducer, and three-bar truss design. The algorithm consistently produces feasible high-quality designs and closely matches or improves upon the best reported results while keeping computation time competitive. Full article
(This article belongs to the Section Computational Engineering)
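The explore-then-refine structure of the hybrid can be sketched generically. The sketch below substitutes plain DE/rand/1/bin for JADE's p-best machinery and finite-difference gradient descent for GBO's search rules, so it illustrates the architecture rather than the algorithm itself:

```python
import numpy as np

def sphere(x: np.ndarray) -> float:
    return float((x ** 2).sum())

def de_phase(f, dim, pop=20, gens=60, rng=None):
    """Global exploration: classic DE/rand/1/bin (JADE's archive and
    parameter adaptation are omitted in this sketch)."""
    rng = rng or np.random.default_rng(0)
    P = rng.uniform(-5, 5, (pop, dim))
    fit = np.array([f(x) for x in P])
    for _ in range(gens):
        for i in range(pop):
            a, b, c = P[rng.choice(pop, 3, replace=False)]
            trial = np.where(rng.random(dim) < 0.9, a + 0.8 * (b - c), P[i])
            ft = f(trial)
            if ft < fit[i]:                    # greedy selection
                P[i], fit[i] = trial, ft
    return P[fit.argmin()]

def gradient_refine(f, x, steps=200, lr=0.05, eps=1e-6):
    """Local exploitation: finite-difference gradient descent standing in
    for GBO's Newton-inspired refinement of the best DE individual."""
    x = x.copy()
    for _ in range(steps):
        g = np.array([(f(x + eps * e) - f(x - eps * e)) / (2 * eps)
                      for e in np.eye(len(x))])
        x -= lr * g
    return x
```

Handing DE's best individual to a gradient-driven local phase is the division of labour the paper describes: population search finds the basin, gradient information polishes the solution inside it.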

29 pages, 9773 KB  
Article
Prediction of Mean Fragmentation Size in Open-Pit Mine Blasting Operations Using Histogram-Based Gradient Boosting and Grey Wolf Optimization Approach
by Madalitso Mame, Shuai Huang, Chuanqi Li, Xiaoguang Zhou and Jian Zhou
Appl. Sci. 2026, 16(1), 311; https://doi.org/10.3390/app16010311 - 28 Dec 2025
Viewed by 454
Abstract
Blast-induced rock fragmentation plays a critical role in mining and civil engineering. One of the primary objectives of blasting operations is to achieve the desired rock fragmentation size, which is a key indicator of the quality of the blasting process. Predicting the mean fragmentation size (MFS) is crucial to avoid increased production costs, material loss, and ore dilution. This study integrates three tree-based regression techniques—gradient boosting regression (GBR), histogram-based gradient boosting machine (HGB), and extra trees (ET)—with two optimization algorithms, grey wolf optimization (GWO) and particle swarm optimization (PSO), to predict the MFS. The performance of the resulting models was evaluated using four statistical measures: coefficient of determination (R²), root mean squared error (RMSE), mean absolute error (MAE), and mean absolute percentage error (MAPE). The results indicate that the GWO-HGB model outperformed all other models, achieving R², RMSE, MAE, and MAPE values of 0.9402, 0.0251, 0.0185, and 0.0560, respectively, in the testing phase. Additionally, Shapley additive explanations (SHAP), local interpretable model-agnostic explanations (LIME), and neural network-based sensitivity analyses were applied to examine how the input parameters influence model predictions. The analysis revealed that unconfined compressive strength (UCS) was the most influential parameter in MFS prediction. This study provides a novel hybrid intelligent model to predict MFS for optimized blasting operations in open-pit mines. Full article
(This article belongs to the Special Issue Advances and Technologies in Rock Mechanics and Rock Engineering)
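The four evaluation metrics used to rank the MFS models (R², RMSE, MAE, MAPE) can be computed directly from predictions and observations; a minimal NumPy sketch is below (the GWO/PSO tuning and the tree models themselves are not reproduced, and the sample arrays are illustrative, not the paper's data).

```python
import numpy as np

def regression_metrics(y_true, y_pred):
    """Return R^2, RMSE, MAE, and MAPE (MAPE as a fraction, not a percent)."""
    y_true = np.asarray(y_true, dtype=float)
    y_pred = np.asarray(y_pred, dtype=float)
    resid = y_true - y_pred
    ss_res = np.sum(resid ** 2)                       # residual sum of squares
    ss_tot = np.sum((y_true - y_true.mean()) ** 2)    # total sum of squares
    return {
        "R2": 1.0 - ss_res / ss_tot,
        "RMSE": float(np.sqrt(np.mean(resid ** 2))),
        "MAE": float(np.mean(np.abs(resid))),
        "MAPE": float(np.mean(np.abs(resid / y_true))),  # assumes y_true != 0
    }

scores = regression_metrics([1, 2, 3, 4], [1.1, 1.9, 3.2, 3.8])
```

Note that MAPE is undefined when any observed value is zero, which is why normalized targets (as in fragmentation-size modeling) suit it well.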

18 pages, 2022 KB  
Article
Study of the Flowability Properties, Morphology and Microstructure of Hazelnut (Corylus avellana L.) Shell Waste Particles Obtained by Milling
by Israel Arzate-Vázquez, Juan Vicente Méndez-Méndez, Ruth Nohemí Domínguez-Fernández, Mayra Beatriz Gómez-Patiño, Daniel Arrieta-Baez, José Jorge Chanona-Pérez, Nayeli Vélez-Rivera and Germán Anibal Rodríguez-Castro
Recycling 2026, 11(1), 3; https://doi.org/10.3390/recycling11010003 - 22 Dec 2025
Viewed by 493
Abstract
Mechanical milling is a relevant preliminary processing operation that is widely used for the reuse of various types of agro-industrial waste. The objective of this study was to conduct milling experiments on hazelnut (Corylus avellana L.) shell waste at different times (0.5, 1 and 1.5 min) and subsequently evaluate the particle size distribution (PSD) of the resulting powders by sieving. In addition, flowability parameters were determined for the particles retained on the sieves, and their morphology and microstructure were examined using several microscopy techniques. The results demonstrated that the hazelnut shells were successfully fractionated under the milling conditions investigated (short milling times ≤ 1.5 min), and the histograms of the PSD exhibited a wide dispersion of sizes (≤1.7 mm). The particles retained from sieve 100 to the residue exhibited poor or no flow, attributable to the high degree of cohesion between them. Morphological analysis based on optical microscopy and image analysis revealed an increase in the aspect ratio parameter as particle size decreased, meaning that the smaller particles had elongated shapes. Microscopic analysis (SEM, AFM and CLSM) showed that the particles exhibited complex shapes and a comparable microstructure, comprising tightly packed clusters of sclerenchyma cells. From the microscopy images obtained (SEM and AFM), it was inferred that the cracks generated during blade impacts propagate along the middle lamella of the cells, allowing the cluster-like arrangement to be preserved. The CLSM results demonstrated that as the size of hazelnut shell particles decreases, the exposure of lignin on their surface is favored. The findings of this study demonstrate that hazelnut shell waste can be readily pre-processed using a blade grinder, thereby facilitating its reuse in applications that demand fine particle sizes (e.g., bioadsorption of pollutants and the production of biocomposite materials). Likewise, the flowability parameters, microstructural arrangement, and morphological features of the different particle fractions are crucial variables that must be considered, as they significantly influence the possible applications for the revalorization of this type of agro-industrial waste. Full article
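The sieving-based PSD evaluation above amounts to converting the mass retained on each sieve into differential and cumulative distributions. A minimal sketch follows; the retained-mass values are hypothetical illustrations, not the paper's measurements.

```python
import numpy as np

def sieve_psd(retained_g):
    """Differential and cumulative PSD from a sieve-stack analysis.

    `retained_g` lists the mass (g) retained on each sieve, ordered from
    the coarsest aperture down to the collection pan.
    """
    retained = np.asarray(retained_g, dtype=float)
    fraction = retained / retained.sum()            # mass fraction per sieve
    cum_retained = np.cumsum(fraction)              # cumulative, coarse -> fine
    percent_passing = 100.0 * (1.0 - cum_retained)  # finer than each aperture
    return fraction, percent_passing

# Hypothetical masses retained on 1.7, 0.85, 0.425, 0.15 mm sieves + pan:
frac, passing = sieve_psd([12.0, 25.0, 30.0, 20.0, 13.0])
```

The `fraction` array is what a PSD histogram plots against the sieve apertures, while `percent_passing` is the usual cumulative "percent finer" curve.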

36 pages, 7466 KB  
Article
Prediction and Uncertainty Quantification of Flow Rate Through Rectangular Top-Hinged Gate Using Hybrid Gradient Boosting Models
by Pourya Nejatipour, Giuseppe Oliveto, Ibrokhim Sapaev, Ehsan Afaridegan and Reza Fatahi-Alkouhi
Water 2025, 17(24), 3470; https://doi.org/10.3390/w17243470 - 6 Dec 2025
Cited by 2 | Viewed by 821
Abstract
Accurate estimation of flow discharge, Q, through hydraulic structures such as spillways and gates is of great importance in water resources engineering. Each hydraulic structure, owing to its unique characteristics, requires a specific and comprehensive study. In this regard, the present study focuses on predicting Q through Rectangular Top-Hinged Gates (RTHGs) using advanced Gradient Boosting (GB) models. The GB models evaluated include Categorical Boosting (CatBoost), Histogram-based Gradient Boosting (HistGBoost), Light Gradient Boosting Machine (LightGBoost), Natural Gradient Boosting (NGBoost), and Extreme Gradient Boosting (XGBoost). Accurate tuning of hyperparameters is essential when developing artificial intelligence models. Therefore, four powerful metaheuristic algorithms—Covariance Matrix Adaptation Evolution Strategy (CMA-ES), Sparrow Search Algorithm (SSA), Particle Swarm Optimization (PSO), and Genetic Algorithm (GA)—were compared for hyperparameter tuning, using LightGBoost as the baseline model. An assessment of error metrics, convergence speed, stability, and computational cost revealed that SSA achieved the best performance for hyperparameter optimization of the GB models. Consequently, hybrid models combining the GB algorithms with SSA were developed to predict Q through RTHGs. The dataset was randomly split into 70% for training and 30% for testing. Prediction uncertainty was quantified via Confidence Intervals (CI) and the R-Factor index. CatBoost-SSA produced the most accurate predictions among the models (R² = 0.999 in training, 0.984 in testing), and NGBoost-SSA provided the lowest uncertainty (CI = 0.616, R-Factor = 3.596). The SHapley Additive exPlanations (SHAP) method identified h/B (the ratio of upstream water depth to channel width) and the channel slope, S, as the most influential predictors. Overall, this study confirms the effectiveness of SSA-optimized boosting models for reliable and interpretable hydraulic modeling, offering a robust tool for the design and operation of gated flow control systems. Full article
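Band-based uncertainty quantification of the kind used above can be sketched from an ensemble of predictions. The snippet below computes a 95% prediction band, its coverage, and an R-Factor following the common definition (mean band width divided by the standard deviation of the observations); the paper's exact CI and R-Factor formulations may differ, and the data here are synthetic.

```python
import numpy as np

def prediction_band_metrics(y_obs, ensemble, alpha=0.05):
    """Coverage and R-Factor of a (1 - alpha) prediction band.

    `ensemble` has shape (n_realizations, n_points): one row per
    predictive sample (e.g. from bootstrapped or probabilistic models).
    """
    ensemble = np.asarray(ensemble, dtype=float)
    lower = np.quantile(ensemble, alpha / 2, axis=0)
    upper = np.quantile(ensemble, 1 - alpha / 2, axis=0)
    coverage = float(np.mean((y_obs >= lower) & (y_obs <= upper)))
    r_factor = float(np.mean(upper - lower) / np.std(y_obs))
    return coverage, r_factor

# Synthetic illustration: noisy ensemble around a smooth signal.
rng = np.random.default_rng(1)
y_obs = np.sin(np.linspace(0.0, 3.0, 50))
ensemble = y_obs + rng.normal(0.0, 0.1, size=(200, 50))
cov, r = prediction_band_metrics(y_obs, ensemble)
```

A narrower band (smaller R-Factor) at comparable coverage indicates lower predictive uncertainty, which is how the abstract distinguishes NGBoost-SSA from the other hybrids.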
