Search Results (213)

Search Parameters:
Keywords = global histogramming

26 pages, 17603 KB  
Article
SICABI: Symmetry-Informed Stochastic Modeling via Dominant-Period Stationarity and Recursive Adaptive Parametric Density Estimation
by Daniel Canton-Enriquez, Jorge-Luis Perez-Ramos, Selene Ramirez-Rosales, Luis-Antonio Diaz-Jimenez, Ana-Marcela Herrera-Navarro and Hugo Jimenez-Hernandez
Symmetry 2026, 18(4), 681; https://doi.org/10.3390/sym18040681 - 20 Apr 2026
Abstract
Wind dynamics in urban environments exhibit non-stationarity and marked spatial variability, complicating stochastic modeling when a single global distribution is assumed. This article discusses the estimation of wind density under quasi-stationary regimes at the local level using SICABI, a two-phase framework: (i) Stationary Region Identification (ISR) estimates, through spectral power analysis, a specific dominant period for each location and validates the induced subsampling using the Augmented Dickey–Fuller (ADF) test, and (ii) RAPID adjusts an adaptive parametric density by recursively updating the mixture parameters and creating new components when a normalized membership distance exceeds a threshold. The analysis uses wind speed records collected from eight stations in the Metropolitan Area of Queretaro, Mexico, during the period from 1 January 2023 to 31 December 2023, aggregated at a 10 min resolution, from which X_{δ,s} is constructed for each site. RAPID is compared against Gaussian Kernel Density Estimation (KDE) with Silverman bandwidth and EM-fitted Gaussian mixtures with BIC-based selection (K_max = 12). The resulting densities were compared with an empirical density estimated from a histogram over a fixed grid (m = 50) using the MISE and RMSE metrics. The results reveal marked site-dependent differences in dominant periodicity and residual behavior, including asymmetry and heavy tails. ISR identified dominant periods ranging from 37 to 166 days, and RAPID adapted its complexity with K_s ∈ [5, 10] without fixing the number of mixture components in advance. Quantitatively, RAPID achieved the lowest RMSE at 6/8 sites and the lowest MISE at 5/8 sites, while also exhibiting shorter execution times than KDE and MoG under the same input X_{δ,s}. The results support RAPID as a competitive adaptive method for site-specific density estimation in non-stationary urban climate signals. In this context, local regimes can be viewed as approximate invariants under time translation in the weak stochastic sense, while deviations from this assumption are reflected in increased distributional complexity across sites.
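The abstract above scores fitted densities against an empirical histogram density on a fixed grid (m = 50) via MISE and RMSE. A minimal numpy sketch of that scoring step, with a synthetic Normal sample standing in for one site's wind records and the true generating density standing in for a fitted model:

```python
import numpy as np

def rmse(p, q):
    """Root mean squared error between two gridded densities."""
    return float(np.sqrt(np.mean((p - q) ** 2)))

def mise(p, q, dx):
    """Discrete approximation of the mean integrated squared error."""
    return float(np.sum((p - q) ** 2) * dx)

rng = np.random.default_rng(0)
samples = rng.normal(3.0, 1.0, 5000)       # stand-in for one site's wind speeds

edges = np.linspace(0.0, 6.0, 51)          # fixed grid with m = 50 cells
centers = 0.5 * (edges[:-1] + edges[1:])
dx = edges[1] - edges[0]

# Empirical reference density: normalized histogram over the fixed grid.
emp, _ = np.histogram(samples, bins=edges, density=True)

# Candidate model density evaluated at the cell centers (here the true
# generating Normal, standing in for a fitted mixture):
model = np.exp(-(centers - 3.0) ** 2 / 2.0) / np.sqrt(2.0 * np.pi)

print(rmse(emp, model), mise(emp, model, dx))
```

With density=True the histogram integrates to one over the grid, so both metrics compare like with like.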

20 pages, 1048 KB  
Article
Soiling Status Detection in Photovoltaic Energy Systems Using Machine Learning and Weather Data for Cleaning Alerts
by Bruno Knevitz Hammerschmitt, João Carlos Jachenski Junior, Leandro Mario, Edwin Augusto Tonolo, Patryk Henrique de Fonseca, Rafael Martini Silva and Natália Pereira Menezes
Energies 2026, 19(8), 1964; https://doi.org/10.3390/en19081964 - 18 Apr 2026
Abstract
Soiling in photovoltaic systems is a recurring problem that reduces energy generation and demands efficient operation and maintenance (O&M) strategies. In this context, this paper proposes a machine learning-based approach to identify dirt levels and generate cleaning alerts using operational and weather data. Initially, the models were evaluated with a decision threshold ranging from 0.5 to 0.7, using only operational features. Subsequently, the inclusion of weather features was tested, which improved the models’ performance and enabled the selection of the best models for the exhaustive feature search step. The models analyzed in this step were Extra Trees, Histogram-based Gradient Boosting, Extreme Gradient Boosting, and Random Forest. Exhaustive analysis further improved model performance, as indicated by global metrics and ROC curves. The Extra Trees model with a threshold of 0.5 showed the best performance and was selected as the final configuration, achieving an accuracy of 0.9884 and an AUC-ROC of 0.9957. Finally, the selected model was applied to determine daily soiling levels and trigger alerts based on temporal persistence, indicating its potential to support predictive O&M decisions and cleaning actions in PV systems.
(This article belongs to the Section A2: Solar Energy and Photovoltaic Systems)
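The cleaning alerts above are triggered by temporal persistence of daily soiling classifications. A minimal sketch of such persistence logic in plain Python; the three-day window is an illustrative assumption, not the paper's setting:

```python
def persistence_alert(daily_soiled, min_consecutive=3):
    """Raise a cleaning alert only after `min_consecutive` soiled days in a
    row, suppressing one-off misclassifications. `min_consecutive` is a
    hypothetical parameter, not taken from the paper."""
    alerts = []
    streak = 0
    for day, soiled in enumerate(daily_soiled):
        streak = streak + 1 if soiled else 0
        if streak == min_consecutive:
            alerts.append(day)  # first day the persistence criterion is met
    return alerts

# 0 = clean, 1 = soiled (e.g. thresholded model outputs per day)
days = [0, 1, 0, 1, 1, 1, 1, 0, 1, 1, 1]
print(persistence_alert(days))  # → [5, 10]
```

The isolated soiled day (index 1) never triggers an alert; only sustained runs do.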

15 pages, 1420 KB  
Article
DC-MEPV: Dual-Channel Assisted Music Emotion Perception and Visualization in Acousto-Optic Synergistic Intelligent Cockpits
by Wei Shen, Xingang Mou, Songqing Le, Zhixing Zong and Jiaji Li
Appl. Sci. 2026, 16(8), 3800; https://doi.org/10.3390/app16083800 - 13 Apr 2026
Abstract
We propose a Dual-Channel assisted Music Emotion Perception and Visualization (DC-MEPV) framework designed for ambient lighting in intelligent vehicle cockpits, addressing the increasing demand for advanced human–machine interaction in the automotive industry. This framework consists of three main components: the Multi-Scale Feature Extraction Block (MSFEB), the Global Sequence Modeling Block (GSMB), and the Emotional Color Visualization Algorithm (ECV-Algo). The MSFEB extracts valence and arousal (V-A) features from dual channels at multiple temporal scales, with each channel employing a hybrid neural network architecture to capture multi-scale emotional representations. The GSMB integrates positional encoding, bidirectional long short-term memory (BiLSTM) networks, and multi-head self-attention mechanisms to dynamically model global emotional sequences. The ECV algorithm utilizes personalized emotion–color association rules to achieve expressive emotion-driven lighting visualization based on a continuous mapping from emotion space to color space. We conducted comprehensive comparison and ablation experiments to evaluate the model’s emotion perception performance, and designed three metrics to evaluate the quality of the generated visualizations. The model outperformed other networks in both comparative and ablation experiments. Additionally, the generated lights demonstrated strong performance in terms of CIEDE2000 variation rates, unique color ratios, and joint histogram entropy. DC-MEPV achieved excellent performance in emotion perception and visualizations on the DEAM and PMEmo datasets.
(This article belongs to the Section Electrical, Electronics and Communications Engineering)
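Joint histogram entropy, one of the three visualization metrics above, can be read as the Shannon entropy of a 2D histogram over two color channels. A numpy sketch under that reading; the bin count and channel choice are assumptions, not the paper's definition:

```python
import numpy as np

def joint_histogram_entropy(x, y, bins=16):
    """Shannon entropy (bits) of the joint histogram of two signals.
    Higher values indicate a richer joint distribution of the channels."""
    h, _, _ = np.histogram2d(x, y, bins=bins)
    p = h / h.sum()
    p = p[p > 0]                       # drop empty cells (0·log 0 := 0)
    return float(-(p * np.log2(p)).sum())

rng = np.random.default_rng(1)
hue = rng.uniform(0, 1, 10_000)         # stand-in for a hue channel
brightness = rng.uniform(0, 1, 10_000)  # stand-in for a brightness channel

print(joint_histogram_entropy(hue, brightness))
```

A constant lighting pattern collapses the joint histogram to one cell (entropy 0), while varied colors approach the log2(bins²) upper bound.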

28 pages, 1021 KB  
Article
Cost-Aware Network Traffic Anomaly Detection with Histogram-Based Gradient Boosting
by Dariusz Żelasko
Appl. Sci. 2026, 16(7), 3496; https://doi.org/10.3390/app16073496 - 3 Apr 2026
Abstract
Intrusion Detection Systems (IDSs) operate under asymmetric misclassification costs: false alarms (FP) consume analysts’ time and erode trust, whereas missed attacks (FN) carry business risks. This paper presents a complete pipeline for network anomaly detection on the CIC-IDS2017 dataset using Histogram-Based Gradient Boosting (HGB), with a particular focus on cost-aware threshold selection on a validation split for representative operating regimes w_FP:w_FN ∈ {1:1, 1:2, 1:3, 1:4, 1:5, 1:10}—treated as scenario-based proxies for varying risk posture, attack severity, and analyst workload rather than as universally fixed costs—and on the role of isotonic calibration. The results indicate that (i) under 1:1, the cost-optimal operating point aligns with the F1/MCC optimum; (ii) for 1:k cost regimes, the optimum shifts to lower thresholds, reducing FN at the expense of FP and increasing the alert rate; and (iii) isotonic calibration improves PR/ROC (ranking separation), but in the reported 1:5 experiment it did not reduce the final TEST-set operational cost relative to the uncalibrated run, despite using a separately selected post-calibration threshold. The evaluation includes PR/ROC curves, Cost–Threshold and Alert–Threshold sweeps, per-class recall, and permutation importance. In addition, the proposed approach is compared with unsupervised baselines (Isolation Forest, LOF). The results provide practical guidance for SOC decisions on how to choose thresholds consistent with alert budgets and risk profiles. In deployment, these operating points can be indexed to context (e.g., user type, service class, or time of day), yielding a small library of adaptive thresholds rather than one immutable global threshold.
(This article belongs to the Section Computing and Artificial Intelligence)
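The cost-aware threshold selection described above amounts to sweeping candidate thresholds on a validation split and minimizing w_FP·FP + w_FN·FN. A numpy sketch with synthetic scores; the score model and sweep grid are illustrative assumptions:

```python
import numpy as np

def cost_optimal_threshold(y_true, scores, w_fp, w_fn, grid=None):
    """Sweep candidate thresholds and return the one minimizing
    total cost = w_fp * FP + w_fn * FN on the given (validation) split."""
    if grid is None:
        grid = np.linspace(0.0, 1.0, 101)
    best_t, best_cost = None, np.inf
    for t in grid:
        pred = scores >= t
        fp = int(np.sum(pred & (y_true == 0)))   # false alarms
        fn = int(np.sum(~pred & (y_true == 1)))  # missed attacks
        cost = w_fp * fp + w_fn * fn
        if cost < best_cost:
            best_t, best_cost = float(t), cost
    return best_t, best_cost

rng = np.random.default_rng(2)
y = rng.integers(0, 2, 2000)
# Synthetic, imperfectly separated anomaly scores:
scores = np.clip(0.35 * y + 0.3 + 0.2 * rng.standard_normal(2000), 0, 1)

t_balanced, _ = cost_optimal_threshold(y, scores, w_fp=1, w_fn=1)
t_fn_averse, _ = cost_optimal_threshold(y, scores, w_fp=1, w_fn=10)
print(t_balanced, t_fn_averse)
```

Penalizing misses more heavily (the 1:10 regime) pushes the selected threshold down, trading false alarms for recall, which matches finding (ii) in the abstract.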

41 pages, 25740 KB  
Article
Standardized Images and Evaluation Metrics for Tomography
by Anna Frixou, Theodoros Leontiou, Efstathios Stiliaris and Costas N. Papanicolas
Tomography 2026, 12(4), 49; https://doi.org/10.3390/tomography12040049 - 1 Apr 2026
Abstract
Background/Objectives: Modern tomographic reconstruction methods—including physics-informed and AI-based approaches—can produce very high fidelity images. In this regime, widely used global image quality metrics often approach saturation, making it harder to distinguish residual differences between methods and identify remaining performance gaps. This study introduces a physically grounded and standardized evaluation framework designed to retain sensitivity beyond conventional global metrics and support both comparison and systematic improvement in tomographic reconstruction methods. Methods: The proposed framework defines standardized reference images—“Source”, “Detector”, “Ideal”, and “Realistic”—using Monte Carlo simulations, with the Ideal Image serving as a physically grounded benchmark. Reconstruction performance is evaluated using pixel-wise difference and χ² maps, Region-of-Interest analysis, intensity (gray-value) histogram comparisons, and the Structure and Contrast Index (SCI), computed on difference maps. Demonstrations use simulated SPECT data reconstructed with ART, MLEM, and RISE-1. Results: Across case studies, SCI and χ²-based diagnostics reveal structured residuals and localized deficiencies not evident from global similarity metrics such as SSIM or NMSE. Comparative analyses show that methods with similar global scores can exhibit distinct residual structures and region-specific performance variations, while improved agreement in the sinogram domain does not necessarily translate into improved image fidelity. Histogram-based diagnostics provide complementary information on intensity redistribution not captured by pixel-domain summaries. Conclusions: The framework provides a reproducible, physically meaningful, and sensitive approach for evaluating tomographic reconstruction performance in the high-fidelity regime. By combining standardized reference images with multi-domain and multi-metric analysis, it enables robust benchmarking and supports physically consistent interpretation of reconstruction quality.
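The pixel-wise χ² maps above can be sketched as squared residuals scaled by an assumed Poisson-like variance (variance ≈ reference counts); that variance model is a common choice for count data but an assumption here, not stated in the abstract:

```python
import numpy as np

def chi2_map(reconstructed, reference, eps=1e-9):
    """Pixel-wise chi-squared map: squared residual scaled by an assumed
    Poisson-like variance (variance ~ reference counts)."""
    return (reconstructed - reference) ** 2 / (reference + eps)

rng = np.random.default_rng(3)
reference = rng.uniform(50, 100, size=(64, 64))        # stand-in phantom
noisy = reference + rng.normal(0, np.sqrt(reference))  # Poisson-like noise

cmap = chi2_map(noisy, reference)
# Mean of the map (reduced chi-square) should hover near 1 when the
# residual noise matches the assumed variance model.
print(float(cmap.mean()))
```

Structured residuals show up as spatially coherent regions where the map departs from 1, which is what makes such maps more sensitive than a single global score.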

20 pages, 3303 KB  
Article
Revisiting Remote Sensing Image Dehazing via a Dynamic Histogram-Sorted Transformer
by Naiwei Chen, Xin He, Shengyuan Li, Fengning Liu, Haoyi Lv, Haowei Peng and Yuebu Qubie
Remote Sens. 2026, 18(7), 1040; https://doi.org/10.3390/rs18071040 - 30 Mar 2026
Abstract
Remote sensing images are highly susceptible to spatially non-uniform haze under complex atmospheric conditions, leading to contrast degradation and structural detail loss. Moreover, remote sensing scenes usually exhibit complex spatial structures, highly uneven haze distribution, and significant statistical variability, which further increases the difficulty of haze removal. To address this issue, we revisit the haze degradation mechanism of remote sensing imagery and propose a dynamic histogram-sorted Transformer dehazing method from the perspectives of statistical distribution modeling and region-adaptive restoration. Specifically, a Histogram-Sorted Adaptive Attention module is designed to map spatial features into the statistical distribution domain through a dynamic histogram sorting mechanism, enabling explicit discrimination and precise modeling of regions with different haze densities. Meanwhile, a Perception-Adaptive Feed-Forward Network is constructed, which incorporates a stable routing-based mixture-of-experts mechanism to adaptively select restoration strategies according to local texture characteristics and global haze density, thereby significantly enhancing the adaptability of the model in complex remote sensing scenarios. Extensive experimental results demonstrate that the proposed method achieves superior performance over existing approaches across multiple remote sensing benchmark datasets, effectively improving both visual quality and robustness of remote sensing imagery.
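The dynamic histogram sorting idea above (mapping spatial features into the statistical-distribution domain) can be loosely illustrated by ranking pixels by intensity and processing each quantile band separately before scattering results back in place. This is an illustration of the sorting step only, not the paper's attention mechanism:

```python
import numpy as np

def per_group_normalize(img, n_groups=4):
    """Illustrative histogram-sorting step: rank pixels by intensity,
    split the ranking into quantile groups (a crude proxy for bands of
    different haze density), normalize each group independently, and
    scatter the results back to their spatial positions."""
    flat = img.ravel().astype(float)
    order = np.argsort(flat)                   # sort into distribution domain
    out = np.empty_like(flat)
    for g in np.array_split(order, n_groups):  # one quantile band per group
        vals = flat[g]
        out[g] = (vals - vals.mean()) / (vals.std() + 1e-9)
    return out.reshape(img.shape)

rng = np.random.default_rng(4)
img = rng.uniform(0, 255, size=(8, 8))
res = per_group_normalize(img)
print(res.shape)
```

Each band is treated with its own statistics, which is the intuition behind discriminating regions of different haze density in the distribution domain.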

29 pages, 3941 KB  
Article
Explainable Deep Learning for Thoracic Radiographic Diagnosis: A COVID-19 Case Study Toward Clinically Meaningful Evaluation
by Divine Nicholas-Omoregbe, Olamilekan Shobayo, Obinna Okoyeigbo, Mansi Khurana and Reza Saatchi
Electronics 2026, 15(7), 1443; https://doi.org/10.3390/electronics15071443 - 30 Mar 2026
Abstract
COVID-19 still poses a global public health challenge, exerting pressure on radiology services. Chest X-ray (CXR) imaging is widely used for respiratory assessment due to its accessibility and cost-effectiveness. However, its interpretation is often challenging because of subtle radiographic features and inter-observer variability. Although recent deep learning (DL) approaches have shown strong performance in automated CXR classification, their black-box nature limits interpretability. This study proposes an explainable deep learning framework for COVID-19 detection from chest X-ray images. The framework incorporates anatomically guided preprocessing, including lung-region isolation, contrast-limited adaptive histogram equalization (CLAHE), bone suppression, and feature enhancement. A novel four-channel input representation was constructed by combining lung-isolated soft-tissue images with frequency-domain opacity maps, vessel enhancement maps, and texture-based features. Classification was performed using a modified Xception-based convolutional neural network, while Gradient-weighted Class Activation Mapping (Grad-CAM) was employed to provide visual explanations and enhance interpretability. The framework was evaluated on the publicly available COVID-19 Radiography Database, achieving an accuracy of 95.3%, an AUC of 0.983, and a Matthews Correlation Coefficient of approximately 0.83. Threshold optimisation improved sensitivity, reducing missed COVID-19 cases while maintaining high overall performance. Explainability analysis showed that model attention was primarily focused on clinically relevant lung regions.
(This article belongs to the Special Issue Image Processing Based on Convolution Neural Network: 2nd Edition)
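CLAHE, used in the preprocessing above, builds on plain histogram equalization. A numpy sketch of the global (non-adaptive) version for an 8-bit image; CLAHE additionally works per tile with a clipped histogram, which this sketch omits:

```python
import numpy as np

def histogram_equalize(img):
    """Global histogram equalization for an 8-bit image: remap each gray
    level through the normalized cumulative histogram. CLAHE extends this
    by equalizing per tile with a clipped histogram (omitted here)."""
    hist = np.bincount(img.ravel(), minlength=256)
    cdf = hist.cumsum() / img.size           # normalized CDF in [0, 1]
    lut = np.round(cdf * 255).astype(np.uint8)
    return lut[img]

rng = np.random.default_rng(5)
# Low-contrast stand-in image: values squeezed into [100, 140]
img = rng.integers(100, 141, size=(64, 64)).astype(np.uint8)
eq = histogram_equalize(img)
print(int(img.max() - img.min()), int(eq.max() - eq.min()))
```

The equalized image spreads the squeezed intensity range toward the full 0–255 scale, which is the contrast-enhancement effect the preprocessing step relies on.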

18 pages, 2683 KB  
Article
Engineering the Image Representation for Deep Learning in Contrast-Enhanced Mammography: A Systematic Analysis of Preprocessing and Anatomical Masking
by Roberta Fusco, Vincenza Granata, Paolo Vallone, Teresa Petrosino, Maria Daniela Iasevoli, Mauro Mattace Raso, Davide Pupo, Piero Trovato, Igino Simonetti, Paolo Pariante, Vincenzo Cerciello, Gerardo Ferrara, Modesta Longobucco, Giulia Capuano, Roberto Morcavallo, Caterina Todisco, Fabiana Antenucci, Mario Sansone, Daniele La Forgia and Antonella Petrillo
Bioengineering 2026, 13(3), 322; https://doi.org/10.3390/bioengineering13030322 - 11 Mar 2026
Abstract
Deep-learning models applied to contrast-enhanced mammography (CEM) are known to be highly sensitive to the input image representation. However, preprocessing is often treated as a secondary step and rarely analyzed as an independent design variable. In this work, we present a systematic engineering analysis of a deterministic, label-independent preprocessing pipeline for CEM images. The pipeline integrates intensity normalization, global histogram matching, local contrast enhancement, denoising, and anatomically constrained breast masking. Using a controlled experimental design, identical deep-learning architectures were trained under different input representations to isolate the impact of preprocessing on classification performance and stability. Across convolutional neural network architectures, anatomically constrained preprocessing consistently improves discrimination performance, reduces variability across cross-validation folds, and enhances training stability. Breast mask-based representations demonstrate substantial gains in AUROC and AUPRC compared to raw DICOM inputs. These findings highlight image preprocessing as a first-class engineering component in medical AI pipelines. Breast masking significantly improves robustness and generalization, independently of network architecture complexity. From a clinical perspective, improving model robustness and sensitivity to malignant lesions may contribute to more reliable AI-assisted decision support in contrast-enhanced mammography, particularly in settings characterized by acquisition variability and heterogeneous patient populations.
(This article belongs to the Special Issue New Sights of Deep Learning and Digital Model in Biomedicine)
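The global histogram matching step above can be sketched as quantile (rank) mapping: source intensities are remapped so their empirical distribution matches a reference image's. A numpy sketch on synthetic images (the rank-mapping formulation is one standard way to implement matching, not necessarily the paper's exact variant):

```python
import numpy as np

def match_histogram(source, reference):
    """Map `source` intensities so their empirical distribution matches
    `reference`, via quantile (rank) mapping."""
    s = source.ravel()
    ranks = np.argsort(np.argsort(s))          # rank of each source pixel
    quantiles = (ranks + 0.5) / s.size
    ref_sorted = np.sort(reference.ravel())
    ref_q = (np.arange(ref_sorted.size) + 0.5) / ref_sorted.size
    matched = np.interp(quantiles, ref_q, ref_sorted)
    return matched.reshape(source.shape)

rng = np.random.default_rng(6)
src = rng.normal(50, 5, size=(32, 32))     # dim, low-contrast stand-in
ref = rng.normal(128, 30, size=(32, 32))   # target intensity distribution

out = match_histogram(src, ref)
print(round(float(out.mean()), 1), round(float(out.std()), 1))
```

After matching, the output's value distribution coincides with the reference's while preserving the source's spatial ordering of intensities.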

23 pages, 4427 KB  
Article
Virtual Reassembly Method for Cultural Relic Fragments Based on Multi-Feature Extraction
by Jianghong Zhao, Jia Yang, Mengtian Cao, Lisha Yin, Rui Liu and Xinfeng Chang
Appl. Sci. 2026, 16(5), 2588; https://doi.org/10.3390/app16052588 - 8 Mar 2026
Abstract
The virtual reassembly of fragmented cultural relics remains a challenging task due to incomplete contours, complex fracture geometries, and the lack of reliable accuracy evaluation when ground-truth models are unavailable. To address these issues, this study proposes an automated virtual reassembly framework based on multi-feature extraction and hierarchical fragment matching. First, contour points are extracted from fragment point clouds using neighborhood roughness analysis and further refined through a Cylinder Box-based completion strategy to recover missing contour segments. Then, multiple complementary features, including Fast Point Feature Histograms (FPFHs), Heat Kernel Signatures (HKSs), and a spatial cube-based contour shape descriptor, are jointly constructed to characterize both local geometric details and global structural properties of fragments. To improve matching efficiency and robustness, a tree-based fragment retrieval strategy combined with a coarse-to-fine registration scheme is employed to identify adjacent fragments while reducing computational complexity. In addition, a pseudo-ground-truth accuracy evaluation method is introduced to quantitatively assess cumulative reassembly errors in the absence of reliable reference data. Experiments conducted on the public Buddha head dataset demonstrate that the proposed method achieves stable and visually consistent reassembly results, with a cumulative error as low as 1.58%, while significantly reducing retrieval computations compared with exhaustive matching strategies. These results indicate that the proposed framework provides a practical and verifiable solution for the automated digital restoration of fragmented cultural relics.
(This article belongs to the Special Issue Non-Destructive Techniques for Heritage Conservation)
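Fast Point Feature Histograms, one of the descriptors above, summarize angular relations between neighboring surface normals. A heavily simplified sketch of that idea (one angle, no Darboux frame, no weighted two-pass scheme), for illustration only:

```python
import numpy as np

def normal_angle_histogram(normals, center_idx, neighbor_idx, bins=8):
    """Simplified point-feature descriptor: histogram of angles between a
    center point's normal and its neighbors' normals. Real FPFH uses three
    Darboux-frame angles and a weighted two-pass scheme; this keeps one
    angle to show the histogram-of-angular-relations idea."""
    n0 = normals[center_idx]
    cosines = normals[neighbor_idx] @ n0          # normals assumed unit length
    angles = np.arccos(np.clip(cosines, -1.0, 1.0))
    hist, _ = np.histogram(angles, bins=bins, range=(0.0, np.pi))
    return hist / hist.sum()                      # normalized descriptor

rng = np.random.default_rng(7)
normals = rng.standard_normal((100, 3))
normals /= np.linalg.norm(normals, axis=1, keepdims=True)
desc = normal_angle_histogram(normals, 0, np.arange(1, 30))
print(desc.shape, round(float(desc.sum()), 6))
```

On a flat surface all normals agree, so the descriptor concentrates in the first bin; rough fracture surfaces spread mass across bins, which is what makes such histograms discriminative for matching.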

21 pages, 3308 KB  
Article
NILM-Based Feedback for Demand Response: A Reproducible Binary State-Detection Algorithm Using Active Power
by Yuriy Zhukovskiy, Pavel Suslikov and Daniil Rasputin
Electricity 2026, 7(1), 23; https://doi.org/10.3390/electricity7010023 - 5 Mar 2026
Abstract
Non-intrusive load monitoring (NILM) can provide actionable feedback for demand response (DR) when direct measurements of device states are unavailable. We propose a reproducible, engineering-oriented pipeline for detecting ON/OFF states of end-use load groups from an aggregated active power time series. The method uses robust hysteresis-based labeling with adaptive thresholds derived from the median and median absolute deviation, followed by compact feature engineering restricted to global active power (GAP). After removing collinear features (|r| > 0.98), permutation importance is used to retain informative predictors. Probabilistic binary classifiers (LGBM, Histogram-based Gradient Boosting, XGBoost, and CatBoost) are trained for each target load, and the decision threshold is optimized via Fβ to balance missed events and false alarms. A post-processing stage stabilizes predictions by smoothing probabilities and suppressing spurious triggers. Model quality is assessed with both sample-wise metrics and event-based metrics that credit the correct detection of switching intervals within a time tolerance. Experiments on the open “Individual Household Electric Power Consumption” dataset (1-min resolution, 2007–2010) demonstrate that lightweight gradient boosting models, particularly LGBM, deliver reliable and interpretable state estimates suitable for practical DR integration and edge deployment.
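The hysteresis-based labeling with median/MAD-adaptive thresholds described above can be sketched as a two-threshold state machine: switch ON above the high threshold, switch OFF only below the low one. A numpy sketch; the k_on/k_off multipliers are illustrative, not the paper's values:

```python
import numpy as np

def hysteresis_labels(power, k_on=3.0, k_off=1.0):
    """ON/OFF labeling with hysteresis: switch ON above t_on, switch OFF
    only below t_off (t_off < t_on), suppressing chatter near a single
    threshold. Thresholds adapt to the series via median and MAD;
    k_on/k_off are hypothetical, not taken from the paper."""
    med = np.median(power)
    mad = np.median(np.abs(power - med))
    t_on, t_off = med + k_on * mad, med + k_off * mad
    state, labels = 0, np.empty(len(power), dtype=int)
    for i, p in enumerate(power):
        if state == 0 and p > t_on:
            state = 1
        elif state == 1 and p < t_off:
            state = 0
        labels[i] = state
    return labels

power = np.array([10, 11, 10, 60, 62, 35, 61, 12, 10, 11], dtype=float)
print(hysteresis_labels(power))  # → [0 0 0 1 1 1 1 0 0 0]
```

Note that the brief dip to 35 stays labeled ON: it falls between the two thresholds, which is exactly the chatter hysteresis is meant to suppress.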

23 pages, 2371 KB  
Article
Machine-Learning Crop-Type Mapping Sensitivity to Feature Selection and Hyperparameter Tuning
by Mayra Perez-Flores, Frédéric Satgé, Jorge Molina-Carpio, Renaud Hostache, Ramiro Pillco-Zolá, Diego Tola, Elvis Uscamayta-Ferrano, Lautaro Bustillos, Marie-Paule Bonnet and Celine Duwig
Remote Sens. 2026, 18(4), 563; https://doi.org/10.3390/rs18040563 - 11 Feb 2026
Abstract
To improve crop yields and incomes, farmers consistently adapt their practices to climate and market fluctuations, resulting in highly variable crop field distribution and coverage in space and time. As these dynamics illustrate farmers’ challenges, up-to-date crop-type mapping is essential for understanding farmers’ needs and supporting their adoption of sustainable practices. With global coverage and frequent temporal observations, remote sensing data are generally integrated into machine learning models to monitor crop dynamics. Unlike physics-based models, whose use is comparatively straightforward, implementing machine learning models requires extensive user interaction. In this context, this study assesses how sensitive the models’ outputs are to feature selection and hyperparameter tuning, as both processes rely on user judgment. To achieve this, Sentinel-1 (S1) and Sentinel-2 (S2) features are integrated into five distinct models (Random Forest (RF), Support Vector Machine (SVM), Light Gradient Boosting (LGB), Histogram-based Gradient Boosting (HGB), and Extreme Gradient Boosting (XGB)), considering several feature selection (Variance Inflation Factor (VIF) and Sequential Feature Selector (SFS)) and hyperparameter tuning (Grid-Search) setups. Results show that pre-modeling feature selection (VIF) discards features that the wrapper method (SFS) keeps, resulting in less reliable crop-type mapping. Additionally, hyperparameter tuning appears to be sensitive to the input features, and considering it after any feature selection improved the crop-type mapping. In this context, a three-step nested modeling setup, with hyperparameter tuning first, followed by wrapper feature selection (SFS) and additional hyperparameter tuning, leads to the most reliable model outputs. For the study region, LGB and XGB (SVM) are the most (least) suitable models for crop-type mapping, and model reliability improves when integrating S1 and S2 features rather than considering S1 or S2 alone. Finally, crop-type maps are derived across different regions and time periods to highlight the benefits of the proposed method for monitoring crop dynamics in space and time.
(This article belongs to the Special Issue Application of Remote Sensing in Agroforestry (Third Edition))
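The VIF-based feature selection above can be sketched directly: regress each feature on the others and compute VIF_i = 1/(1 − R²_i). A numpy sketch on synthetic features with one deliberately collinear column; the common cutoff of 10 is a convention, not necessarily the paper's:

```python
import numpy as np

def vif(X):
    """Variance inflation factor per column: regress each feature on the
    rest (with intercept); VIF_i = 1 / (1 - R^2_i)."""
    n, d = X.shape
    out = np.empty(d)
    for i in range(d):
        y = X[:, i]
        others = np.column_stack([np.ones(n), np.delete(X, i, axis=1)])
        beta, *_ = np.linalg.lstsq(others, y, rcond=None)
        resid = y - others @ beta
        r2 = 1.0 - resid.var() / y.var()
        out[i] = 1.0 / max(1.0 - r2, 1e-12)
    return out

rng = np.random.default_rng(8)
a = rng.standard_normal(500)
b = rng.standard_normal(500)
X = np.column_stack([a, b, a + 0.05 * rng.standard_normal(500)])  # col 2 ≈ col 0

scores = vif(X)
print(np.round(scores, 1))  # columns 0 and 2 are strongly inflated
```

VIF filtering would drop one of the near-duplicate columns even if it carries predictive signal, which is the behavior the abstract contrasts with the wrapper method (SFS).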

33 pages, 745 KB  
Article
XAI-Driven Malware Detection from Memory Artifacts: An Alert-Driven AI Framework with TabNet and Ensemble Classification
by Aristeidis Mystakidis, Grigorios Kalogiannnis, Nikolaos Vakakis, Nikolaos Altanis, Konstantina Milousi, Iason Somarakis, Gabriela Mihalachi, Mariana S. Mazi, Dimitris Sotos, Antonis Voulgaridis, Christos Tjortjis, Konstantinos Votis and Dimitrios Tzovaras
AI 2026, 7(2), 66; https://doi.org/10.3390/ai7020066 - 10 Feb 2026
Abstract
Modern malware presents significant challenges to traditional detection methods, often leveraging fileless techniques, in-memory execution, and process injection to evade antivirus and signature-based systems. To address these challenges, alert-driven memory forensics has emerged as a critical capability for uncovering stealthy, persistent, and zero-day threats. This study presents a two-stage host-based malware detection framework that integrates memory forensics, explainable machine learning, and ensemble classification, designed as a post-alert asynchronous SOC workflow balancing forensic depth and operational efficiency. Utilizing the MemMal-D2024 dataset—comprising rich memory forensic artifacts from Windows systems infected with malware samples whose creation metadata spans 2006–2021—the system performs malware detection, using features extracted from volatile memory. In the first stage, an Attentive and Interpretable Learning for structured Tabular data (TabNet) model is used for binary classification (benign vs. malware), leveraging its sequential attention mechanism and built-in explainability. In the second stage, a Voting Classifier ensemble, composed of Light Gradient Boosting Machine (LGBM), eXtreme Gradient Boosting (XGB), and Histogram Gradient Boosting (HGB) models, is used to identify the specific malware family (Trojan, Ransomware, Spyware). To reduce memory dump extraction and analysis time without compromising detection performance, only a curated subset of 24 memory features—operationally selected to reduce acquisition/extraction time and validated via redundancy inspection, model explainability (SHAP/TabNet), and training data correlation analysis—was used during training and runtime, identifying the best trade-off between memory analysis and detection accuracy. The pipeline, which is triggered by host-based Wazuh Security Information and Event Management (SIEM) alerts, achieved 99.97% accuracy in binary detection and 70.17% multiclass accuracy, resulting in an overall performance of 87.02%, with both global and local explainability ensuring operational transparency and forensic interpretability. This approach provides an efficient and interpretable detection solution that can be used in combination with conventional security tools as an extra layer of defense, suitable for modern threat landscapes.
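The second-stage Voting Classifier above combines class probabilities from LGBM, XGB, and HGB. A minimal sketch of soft voting itself (average member probabilities, then argmax), with toy probability matrices standing in for the three boosted models:

```python
import numpy as np

def soft_vote(prob_list, weights=None):
    """Soft voting: (weighted) average of member class-probability
    matrices, then argmax per sample."""
    stacked = np.stack(prob_list)                 # (models, samples, classes)
    avg = np.average(stacked, axis=0, weights=weights)
    return avg.argmax(axis=1), avg

# Toy per-model probabilities over 3 classes (Trojan, Ransomware, Spyware),
# standing in for LGBM / XGB / HGB outputs on two samples:
p1 = np.array([[0.7, 0.2, 0.1], [0.3, 0.4, 0.3]])
p2 = np.array([[0.6, 0.3, 0.1], [0.2, 0.5, 0.3]])
p3 = np.array([[0.5, 0.1, 0.4], [0.1, 0.6, 0.3]])

labels, avg = soft_vote([p1, p2, p3])
print(labels)  # → [0 1]
```

Soft voting lets a confident minority model sway the ensemble, unlike hard (majority-label) voting, which discards the probability magnitudes.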

52 pages, 9165 KB  
Article
A Hybrid Deep Learning Framework for Automated Dental Disorder Diagnosis from X-Ray Images
by A. A. Abd El-Aziz, Mohammed Elmogy, Mahmood A. Mahmood and Sameh Abd El-Ghany
J. Clin. Med. 2026, 15(3), 1076; https://doi.org/10.3390/jcm15031076 - 29 Jan 2026
Abstract
Background: Dental disorders, such as cavities, periodontal disease, and periapical infections, remain major global health issues, often resulting in pain, tooth loss, and systemic complications if not identified early. Traditional diagnostic methods rely heavily on visual inspection and manual interpretation of panoramic X-ray images by dental professionals, making them time-consuming, subjective, and less accessible in resource-limited settings. Objectives: Accurate and timely diagnosis is vital for effective treatment and prevention of disease progression, reducing healthcare costs and patient discomfort. Recent advances in deep learning (DL) have demonstrated remarkable potential to automate and improve the precision of dental diagnostics by objectively analyzing panoramic, periapical, and bitewing X-rays. Methods: In this research, a hybrid feature-fusion framework is proposed. It integrates handcrafted Histogram of Oriented Gradients (HOG) features with deep representations from the DenseNet-201 and Shifted Window (Swin) Transformer models. Sequential dependencies among the fused features are learned using a Long Short-Term Memory (LSTM) classifier. The framework was evaluated on the Dental Radiography Analysis and Diagnosis (DRAD) dataset following preprocessing steps, including resizing, normalization, Contrast Limited Adaptive Histogram Equalization (CLAHE) enhancement, and image cropping. Results: The proposed LSTM-based hybrid model achieved 96.47% accuracy, 91.76% specificity, 94.92% precision, 91.76% recall, and 93.14% F1-score. Conclusions: The proposed framework offers flexibility, interpretability, and strong empirical performance, making it suitable for various image-based recognition applications and serving as a reproducible framework for future research on hybrid feature fusion and sequence-based classification. Full article
(This article belongs to the Special Issue Clinical Advances in Cancer Imaging)
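As a rough illustration of the hybrid feature-fusion idea, the snippet below computes a simplified, NumPy-only HOG-style descriptor (a single global orientation histogram, not the cell/block HOG the paper uses) and concatenates it with a stand-in deep embedding. The image, the embedding, and all sizes are synthetic assumptions; the paper's actual pipeline extracts the deep features from DenseNet-201 and a Swin Transformer.

```python
import numpy as np

def hog_like(img, nbins=9):
    """Tiny HOG-style descriptor: one global histogram of gradient
    orientations weighted by gradient magnitude (real HOG adds
    cells, blocks, and block normalization)."""
    gy, gx = np.gradient(img.astype(float))
    mag = np.hypot(gx, gy)
    ang = np.mod(np.arctan2(gy, gx), np.pi)   # unsigned orientation
    hist, _ = np.histogram(ang, bins=nbins, range=(0.0, np.pi), weights=mag)
    return hist / (hist.sum() + 1e-8)         # L1-normalize

rng = np.random.default_rng(0)
img = rng.random((64, 64))                     # stand-in X-ray patch
deep_feat = rng.random(128)                    # stand-in CNN/Swin embedding
fused = np.concatenate([hog_like(img), deep_feat])   # handcrafted + deep
```

In the paper, the fused vector would then be fed to an LSTM classifier; here the point is only that the handcrafted and learned representations live in one concatenated feature space (9 + 128 = 137 dimensions in this toy setup).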

23 pages, 3238 KB  
Article
Agricultural Injury Severity Prediction Using Integrated Data-Driven Analysis: Global Versus Local Explainability Using SHAP
by Omer Mermer, Yanan Liu, Charles A. Jennissen, Milan Sonka and Ibrahim Demir
Safety 2026, 12(1), 6; https://doi.org/10.3390/safety12010006 - 8 Jan 2026
Abstract
Despite the agricultural sector’s consistently high injury rates, formal reporting is often limited, leading to sparse national datasets that hinder effective safety interventions. To address this, our study introduces a comprehensive framework leveraging advanced ensemble machine learning (ML) models to predict and interpret the severity of agricultural injuries. We use a unique, manually curated dataset of over 2400 agricultural incidents from AgInjuryNews, a public repository of news reports detailing incidents across the United States. We evaluated six ensemble models, including Gradient Boosting (GB), eXtreme Gradient Boosting (XGB), Light Gradient Boosting Machine (LightGBM), Adaptive Boosting (AdaBoost), Histogram-based Gradient Boosting Regression Trees (HistGBRT), and Random Forest (RF), for their accuracy in classifying injury outcomes as fatal or non-fatal. A key contribution of our work is the novel integration of explainable artificial intelligence (XAI), specifically SHapley Additive exPlanations (SHAP), to overcome the “black-box” nature of complex ensemble models. The models demonstrated strong predictive performance, with most achieving an accuracy of approximately 0.71 and an F1-score of 0.81. Through global SHAP analysis, we identified key factors influencing injury severity across the dataset, such as helmet use, victim age, and the type of injury agent. Additionally, our application of local SHAP analysis revealed how specific variables like location and the victim’s role can have varying impacts depending on the context of the incident. These findings provide actionable, context-aware insights for developing targeted policy and safety interventions for a range of stakeholders, from first responders to policymakers, offering a powerful tool for a more proactive approach to agricultural safety. Full article
(This article belongs to the Special Issue Farm Safety, 2nd Edition)
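The global-versus-local SHAP distinction this study draws can be made concrete with the one model class where SHAP values have a simple closed form, a linear model: phi_i(x) = w_i * (x_i - E[x_i]). The data and coefficients below are synthetic, and a real analysis of tree ensembles like those in the paper would use the `shap` package (e.g. its tree explainer) instead.

```python
import numpy as np
from sklearn.linear_model import LinearRegression

# Synthetic stand-in data: three features with known effect sizes.
rng = np.random.default_rng(0)
X = rng.normal(size=(500, 3))
y = 2.0 * X[:, 0] - 1.0 * X[:, 1] + 0.5 * X[:, 2]
model = LinearRegression().fit(X, y)

# Exact SHAP values for a linear model with independent features:
# phi_i(x) = w_i * (x_i - E[x_i]).
phi = model.coef_ * (X - X.mean(axis=0))  # local: one attribution row per incident
base = model.predict(X).mean()            # expected model output (base value)

# Global view: rank features by mean absolute SHAP value.
global_importance = np.abs(phi).mean(axis=0)
```

The local property (base value plus a sample's attributions reconstructs its prediction) is what lets the study explain individual incidents, while averaging |phi| over all samples yields the global feature ranking.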

14 pages, 4855 KB  
Article
Generalized Synchronization of a Novel Hyperchaotic System and Application in Secure Communication
by Mohamed M. El-Dessoky, Nehad Almohammadi and Mansoor Alsulami
Mathematics 2026, 14(1), 111; https://doi.org/10.3390/math14010111 - 28 Dec 2025
Abstract
In this paper, a generalized synchronization (GS) framework for identical hyperchaotic systems is presented. The main objective is to achieve generalized synchronization with guaranteed global stability and effective convergence, which remains a key challenge in synchronization-based secure communication systems. The proposed controller is systematically derived to ensure global asymptotic convergence of the synchronization errors for arbitrary initial conditions and distinct scaling factors. This formulation unifies complete, anti-, and generalized synchronization within a single control structure. To demonstrate the applicability of the proposed method, it is integrated into an image encryption algorithm, where the hyperchaotic trajectories of the drive system generate highly random permutation and diffusion sequences. Simulation results verify that the designed controller achieves effective generalized synchronization and that the encrypted images exhibit uniform histograms and low pixel correlation, indicating strong security and resistance to statistical attacks. Full article
(This article belongs to the Special Issue Chaotic Systems and Their Applications, 2nd Edition)
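A minimal sketch of the chaotic permutation-plus-diffusion encryption scheme mentioned above, assuming a one-dimensional logistic map as a stand-in for the paper's hyperchaotic drive system (which has four state variables and produces richer keystreams): the permutation comes from sorting the chaotic sequence, and diffusion is a byte-wise XOR with a keystream derived from the same sequence.

```python
import numpy as np

def logistic_stream(n, x0=0.3, r=3.99):
    """Chaotic sequence from a logistic map (stand-in for the
    hyperchaotic trajectories in the paper)."""
    xs = np.empty(n)
    x = x0
    for i in range(n):
        x = r * x * (1.0 - x)
        xs[i] = x
    return xs

def encrypt(img, x0=0.3):
    flat = img.ravel()
    s = logistic_stream(flat.size, x0)
    perm = np.argsort(s)                  # permutation: chaotic sort order
    key = (s * 256).astype(np.uint8)      # diffusion keystream
    return (flat[perm] ^ key).reshape(img.shape), perm, key

def decrypt(cipher, perm, key):
    flat = cipher.ravel() ^ key           # undo diffusion
    out = np.empty_like(flat)
    out[perm] = flat                      # invert the permutation
    return out.reshape(cipher.shape)

rng = np.random.default_rng(1)
img = rng.integers(0, 256, size=(8, 8), dtype=np.uint8)
cipher, perm, key = encrypt(img)
plain = decrypt(cipher, perm, key)
```

In the paper, the synchronized response system lets the receiver regenerate the same keystream from the transmitted drive signal, so `perm` and `key` never need to be sent; here they are passed explicitly for simplicity.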
