Search Results (388)

Search Parameters:
Keywords = histogram selection

16 pages, 3373 KB  
Article
Intelligent Assessment Framework of Unmanned Air Vehicle Health Status Based on Bayesian Stacking
by Junfu Qiao, Jinqin Guo, Yu Zhang and Yongwei Li
Batteries 2026, 12(2), 62; https://doi.org/10.3390/batteries12020062 - 14 Feb 2026
Viewed by 87
Abstract
This paper proposes a stacking-based ensemble model to replace the traditional single-machine-learning-model prediction approach, significantly improving the efficiency of evaluating the state of charge (SoC) and state of health (SoH) of lithium batteries. Firstly, a dataset was constructed including three input variables (temperature, current, and voltage) and two output variables (SoC and SoH). Pearson correlation coefficients and histograms were used for preliminary analysis of the correlations and distributions in the dataset. A multi-layer perceptron (MLP), support vector machine (SVM), random forest (RF), and extreme gradient boosting tree (XGB) were used as base prediction models. Bayesian optimization (BO) was used to fine-tune the parameters of these models, and three statistical indicators were compared to assess the prediction accuracy of the four ML models. Furthermore, MLP, SVM, and RF were selected as base models, while XGB was used as the meta-model, enhancing the integrated performance of the prediction models. SHAP was used to quantify the influence of the input variables on SoC. Finally, linked measures for the prediction model were proposed to enable autonomous monitoring of drones. The results showed that XGB exhibited superior prediction accuracy, with an R² of 0.93 and an RMSE of 0.14. The ensemble model obtained through stacking reduced the number of outliers by 89.4%. Current was identified as the key variable influencing both SoC and SoH. Furthermore, the intelligent prediction model proposed in this paper can be integrated with controllers, visualization web pages, and other systems to enable health status assessment of drones. Full article
(This article belongs to the Section Battery Performance, Ageing, Reliability and Safety)
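The stacking arrangement described in this abstract (MLP, SVM, and RF base learners feeding an XGB meta-model) can be illustrated with a minimal scikit-learn sketch; the synthetic data, placeholder hyperparameters, and use of xgboost.XGBRegressor below are illustrative assumptions, and the Bayesian optimization step is omitted.

```python
# Minimal sketch of the stacking layout described in the abstract:
# MLP, SVM, and RF base regressors feeding an XGBoost meta-model.
# Hyperparameters are placeholders, not the paper's BO-tuned values.
import numpy as np
from sklearn.ensemble import RandomForestRegressor, StackingRegressor
from sklearn.neural_network import MLPRegressor
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVR
from xgboost import XGBRegressor

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 3))   # temperature, current, voltage (synthetic)
y = rng.random(500)             # SoC target (synthetic stand-in)

stack = StackingRegressor(
    estimators=[
        ("mlp", make_pipeline(StandardScaler(), MLPRegressor(max_iter=1000))),
        ("svm", make_pipeline(StandardScaler(), SVR())),
        ("rf", RandomForestRegressor(n_estimators=200, random_state=0)),
    ],
    final_estimator=XGBRegressor(n_estimators=300),  # meta-model
)
stack.fit(X, y)
print(stack.predict(X[:5]))
```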

17 pages, 3648 KB  
Article
Feasibility Study of Multiorgan Dosiomics for Evaluating Radiation-Induced Xerostomia and Dysphagia in Head and Neck Cancer Radiotherapy
by Takahiro Nakamoto, Koichi Yasuda, Takaaki Yoshimura, Takahiro Kanehira, Hiroshi Tamura, Sora Takagi, Sora Kobayashi, Tomohiko Miyazaki, Shuhei Takahashi, Yoshihiro Fujita, Takayuki Hashimoto and Hidefumi Aoyama
Cancers 2026, 18(4), 619; https://doi.org/10.3390/cancers18040619 - 13 Feb 2026
Viewed by 97
Abstract
Background/Objectives: The severity of radiation-induced toxicity in head and neck cancer (HNC) radiotherapy should be predicted for the prognosis of the patient’s quality of life. Multiple organs at risk (OARs) are susceptible to toxicity in the head and neck. Hence, we aimed to investigate the feasibility of evaluating radiation-induced xerostomia and dysphagia based on multi-OAR dosiomics in HNC radiotherapy. Methods: We used radiotherapy treatment planning and toxicity data collected from 44 patients with HNC. High- and low-toxicity grades were classified using dosiomic models derived from multiple OARs. Dosiomic features were computed from the planned dose distribution per OAR. A prediction model was derived using selected dosiomic features and toxicity grades based on extreme gradient boosting for every OAR and for all OARs combined. Model performance was evaluated in terms of the area under the curve (AUC) from leave-one-out cross-validation. Models based on dose–volume histogram (DVH) features, and models combining these with dosiomic features, were derived to compare prediction performance per OAR. Performance comparisons across OARs were also conducted. Results: The prediction models with the highest AUCs for xerostomia and dysphagia were the dosiomic model using all OARs, with an AUC of 0.843 (95% confidence interval (CI), 0.725–0.961), and that using the middle pharyngeal constrictor muscle, with an AUC of 0.878 (95% CI, 0.772–0.984). Conclusions: The evaluation results demonstrated the feasibility and potential of predicting radiation-induced toxicities based on multi-OAR dosiomics in HNC radiotherapy. Further investigations are required to determine the generalizability of our findings. Full article
(This article belongs to the Section Methods and Technologies Development)
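The leave-one-out evaluation protocol described above can be sketched as follows; the synthetic feature matrix and XGBClassifier settings are placeholder assumptions, not the study's dosiomic features or tuned model.

```python
# Sketch: leave-one-out cross-validated AUC for a boosted-tree toxicity
# classifier, mirroring the evaluation protocol described in the abstract.
import numpy as np
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import LeaveOneOut, cross_val_predict
from xgboost import XGBClassifier

rng = np.random.default_rng(1)
X = rng.normal(size=(44, 20))    # 44 patients x 20 dosiomic features (synthetic)
y = rng.integers(0, 2, size=44)  # high/low toxicity grade (synthetic)

# Out-of-fold predicted probabilities, one held-out patient per fold.
proba = cross_val_predict(
    XGBClassifier(n_estimators=100, eval_metric="logloss"),
    X, y, cv=LeaveOneOut(), method="predict_proba",
)[:, 1]
print("LOO AUC:", roc_auc_score(y, proba))
```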

23 pages, 5744 KB  
Article
Improving the Prediction of Radiation Pneumonitis: Leveraging Radiomics and Dosiomics Within IDLSS Lung Subregions
by Tsair-Fwu Lee, Wen-Ping Yun, Ling-Chuan Chang-Chien, Hung-Yu Chang, Yi-Lun Liao, Ya-Shin Kuan, Chiu-Feng Chiu, Cheng-Shie Wuu, Yang-Wei Hsieh, Liyun Chang, Yu-Chang Hu, Yu-Wei Lin and Pei-Ju Chao
Life 2026, 16(2), 328; https://doi.org/10.3390/life16020328 - 13 Feb 2026
Viewed by 98
Abstract
Purpose: This study develops a predictive model for radiation pneumonitis (RP) risk in lung cancer patients after volume-modulated arc therapy (VMAT) that leverages high-dimensional dosiomics and dose–volume histogram (DVH) features within IDLSS (incremental-dose interval-based lung subregion) lung subregions. Methods: We retrospectively analyzed data from 136 lung cancer patients treated with VMAT between 2015 and 2022, including 39 patients who developed RP greater than Grade 2. Using the IDLSS method, seven regions of interest (ROIs), including the Planning Target Volume (PTV), normal lung, and five subdivided lung areas, were delineated on pretreatment Computed Tomography (CT) images. DVH, radiomics, and dosiomics features were extracted from these ROIs and organized into nine distinct feature sets. A comprehensive pipeline was applied, integrating IDLSS-defined lung subregions, high-dimensional dosiomics features, LASSO-based feature selection, and SMOTE oversampling to address class imbalance in the training data. Logistic regression, random forest, and feedforward neural networks were constructed and optimized via tenfold cross-validation. Model performance across different feature sets was evaluated via the average AUC, F1 score, and other performance metrics. Results: LASSO regression revealed that BMI and volume within the 5–10 Gy and 10–20 Gy lung subregions were significant predictors of RP. The performance evaluation demonstrated that the dosiomics features consistently outperformed the DVH features across the models. Combining radiomics and dosiomics achieved the highest predictive accuracy (AUC = 0.91, ACC = 0.89, NPV = 0.95, PPV = 0.78, F1 score = 0.82, sensitivity = 0.88, specificity = 0.90). Applying SMOTE during training significantly improved sensitivity without compromising specificity, confirming the value of balancing strategies in enhancing model performance. Incorporating all the features together did not provide additional performance gains. Conclusions: Integrating radiomics and dosiomics features extracted from IDLSS-defined lung subregions significantly enhances the ability to predict RP after VMAT, surpassing traditional DVH metrics. The substantial contribution of dosiomics features highlights the importance of spatial dose heterogeneity in RP risk assessment. Full article
(This article belongs to the Special Issue Advanced Technologies and Clinical Practice of Cancer Radiotherapy)
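A minimal sketch of two pipeline elements named in this abstract, LASSO-based feature selection with SMOTE applied only to the training folds, is shown below; the data shapes, regularization strength, and final classifier are illustrative assumptions rather than the study's configuration.

```python
# Sketch: LASSO-based feature selection plus SMOTE oversampling inside a
# cross-validated pipeline, so resampling touches only the training folds.
import numpy as np
from imblearn.over_sampling import SMOTE
from imblearn.pipeline import Pipeline
from sklearn.feature_selection import SelectFromModel
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(2)
X = rng.normal(size=(136, 300))                # radiomic + dosiomic features (synthetic)
y = (rng.random(136) < 39 / 136).astype(int)   # ~39/136 positives, as in the cohort

pipe = Pipeline([
    ("scale", StandardScaler()),
    ("smote", SMOTE(random_state=0)),          # applied to training data only
    ("lasso", SelectFromModel(
        LogisticRegression(penalty="l1", solver="liblinear", C=0.1))),
    ("clf", LogisticRegression(max_iter=1000)),
])
print(cross_val_score(pipe, X, y, cv=10, scoring="roc_auc").mean())
```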

22 pages, 1730 KB  
Article
Toward a Hybrid Intrusion Detection Framework for IIoT Using a Large Language Model
by Musaad Algarni, Mohamed Y. Dahab, Abdulaziz A. Alsulami, Badraddin Alturki and Raed Alsini
Sensors 2026, 26(4), 1231; https://doi.org/10.3390/s26041231 - 13 Feb 2026
Viewed by 126
Abstract
The widespread connectivity of the Industrial Internet of Things (IIoT) improves the efficiency and functionality of connected devices. However, it also raises serious concerns about cybersecurity threats. Implementing an effective intrusion detection system (IDS) for IIoT is challenging due to heterogeneous data, high feature dimensionality, class imbalance, and the risk of data leakage during evaluation. This paper presents a leakage-safe hybrid intrusion detection framework that combines text-based and numerical network flow features in an IIoT environment. Each network flow is converted into a short text description and encoded using a frozen Large Language Model (LLM) called the Bidirectional Encoder Representations from Transformers (BERT) model to obtain fixed semantic embeddings, while numerical traffic features are standardized in parallel. To improve class separation, class prototypes are computed in Principal Component Analysis (PCA) space, and cosine similarity scores for these prototypes are added to the feature set. Class imbalance is handled only in the training data using the Synthetic Minority Over-sampling Technique (SMOTE). A Random Forest (RF) is used to select the top features, followed by a Histogram-based Gradient Boosting (HGB) classifier for final prediction. The proposed framework is evaluated on the Edge-IIoTset and ToN_IoT datasets and achieves promising results. Empirically, the framework attains 98.19% accuracy on Edge-IIoTset and 99.15% accuracy on ToN_IoT, indicating robust, leakage-safe performance. Full article
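A rough sketch of the hybrid feature construction described above, frozen BERT embeddings of textual flow descriptions concatenated with standardized numeric features and classified by histogram-based gradient boosting, might look as follows; the model checkpoint name, example flows, and labels are illustrative assumptions.

```python
# Sketch: frozen BERT embeddings of textual flow descriptions combined with
# standardized numeric features, fed to a histogram-based gradient-boosting
# classifier. Checkpoint and feature layout are illustrative assumptions.
import numpy as np
import torch
from sklearn.ensemble import HistGradientBoostingClassifier
from sklearn.preprocessing import StandardScaler
from transformers import AutoModel, AutoTokenizer

tok = AutoTokenizer.from_pretrained("bert-base-uncased")
bert = AutoModel.from_pretrained("bert-base-uncased").eval()  # frozen encoder

texts = [
    "tcp flow, 42 packets, dst port 502",
    "udp flow, 3 packets, dst port 53",
    "tcp flow, 980 packets, dst port 80",
    "udp flow, 7 packets, dst port 123",
]
with torch.no_grad():                                      # no gradients: LLM stays frozen
    enc = tok(texts, padding=True, truncation=True, return_tensors="pt")
    emb = bert(**enc).last_hidden_state[:, 0, :].numpy()   # [CLS] token embeddings

numeric = np.array([[42, 502], [3, 53], [980, 80], [7, 123]], dtype=float)
features = np.hstack([emb, StandardScaler().fit_transform(numeric)])

labels = [1, 0, 1, 0]                                      # attack vs. benign (toy)
clf = HistGradientBoostingClassifier().fit(features, labels)
print(clf.predict(features))
```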

23 pages, 2371 KB  
Article
Machine-Learning Crop-Type Mapping Sensitivity to Feature Selection and Hyperparameter Tuning
by Mayra Perez-Flores, Frédéric Satgé, Jorge Molina-Carpio, Renaud Hostache, Ramiro Pillco-Zolá, Diego Tola, Elvis Uscamayta-Ferrano, Lautaro Bustillos, Marie-Paule Bonnet and Celine Duwig
Remote Sens. 2026, 18(4), 563; https://doi.org/10.3390/rs18040563 - 11 Feb 2026
Viewed by 111
Abstract
To improve crop yields and incomes, farmers consistently adapt their practices to climate and market fluctuations, resulting in highly variable crop field distribution and coverage in space and time. As these dynamics illustrate farmers’ challenges, up-to-date crop-type mapping is essential for understanding farmers’ needs and supporting their adoption of sustainable practices. With global coverage and frequent temporal observations, remote sensing data are generally integrated into machine learning models to monitor crop dynamics. Unlike physics-based models, which are comparatively straightforward to apply, machine learning models require extensive user interaction to implement. In this context, this study assesses how sensitive the models’ outputs are to feature selection and hyperparameter tuning, as both processes rely on user judgment. To achieve this, Sentinel-1 (S1) and Sentinel-2 (S2) features are integrated into five distinct models (Random Forest (RF), Support Vector Machine (SVM), Light Gradient Boosting (LGB), Histogram-based Gradient Boosting (HGB), and Extreme Gradient Boosting (XGB)), considering several feature selection (Variance Inflation Factor (VIF) and Sequential Feature Selector (SFS)) and hyperparameter tuning (grid search) setups. Results show that pre-modeling feature selection (VIF) discards features that the wrapper method (SFS) retains, resulting in less reliable crop-type mapping. Additionally, hyperparameter tuning appears to be sensitive to the input features, and applying it after any feature selection improved the crop-type mapping. In this context, a three-step nested modeling setup, consisting of initial hyperparameter tuning, followed by wrapper feature selection (SFS) and additional hyperparameter tuning, leads to the most reliable model outputs. For the study region, LGB and XGB (SVM) are the most (least) suitable models for crop-type mapping, and model reliability improves when integrating S1 and S2 features rather than considering S1 or S2 alone. Finally, crop-type maps are derived across different regions and time periods to highlight the benefits of the proposed method for monitoring crop dynamics in space and time. Full article
(This article belongs to the Special Issue Application of Remote Sensing in Agroforestry (Third Edition))
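The three-step nested setup the authors favor (tune, select with SFS, re-tune) can be sketched with scikit-learn; the grid, classifier choice, and synthetic data below are placeholder assumptions, not the paper's configuration.

```python
# Sketch of the three-step nested setup: tune hyperparameters, run wrapper
# feature selection (SFS) with the tuned model, then re-tune on the
# selected features. Grids and data are illustrative placeholders.
import numpy as np
from sklearn.ensemble import HistGradientBoostingClassifier
from sklearn.feature_selection import SequentialFeatureSelector
from sklearn.model_selection import GridSearchCV

rng = np.random.default_rng(3)
X = rng.normal(size=(300, 20))    # S1 + S2 features (synthetic)
y = rng.integers(0, 4, size=300)  # four crop types (synthetic)

grid = {"max_depth": [3, 6], "learning_rate": [0.05, 0.1]}

# Step 1: initial hyperparameter tuning on all features.
step1 = GridSearchCV(HistGradientBoostingClassifier(), grid, cv=3).fit(X, y)

# Step 2: wrapper feature selection with the tuned estimator.
sfs = SequentialFeatureSelector(step1.best_estimator_, n_features_to_select=8, cv=3)
X_sel = sfs.fit_transform(X, y)

# Step 3: re-tune on the selected feature subset.
step3 = GridSearchCV(HistGradientBoostingClassifier(), grid, cv=3).fit(X_sel, y)
print(step3.best_params_, step3.best_score_)
```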

33 pages, 745 KB  
Article
XAI-Driven Malware Detection from Memory Artifacts: An Alert-Driven AI Framework with TabNet and Ensemble Classification
by Aristeidis Mystakidis, Grigorios Kalogiannnis, Nikolaos Vakakis, Nikolaos Altanis, Konstantina Milousi, Iason Somarakis, Gabriela Mihalachi, Mariana S. Mazi, Dimitris Sotos, Antonis Voulgaridis, Christos Tjortjis, Konstantinos Votis and Dimitrios Tzovaras
AI 2026, 7(2), 66; https://doi.org/10.3390/ai7020066 - 10 Feb 2026
Viewed by 287
Abstract
Modern malware presents significant challenges to traditional detection methods, often leveraging fileless techniques, in-memory execution, and process injection to evade antivirus and signature-based systems. To address these challenges, alert-driven memory forensics has emerged as a critical capability for uncovering stealthy, persistent, and zero-day threats. This study presents a two-stage host-based malware detection framework that integrates memory forensics, explainable machine learning, and ensemble classification, designed as a post-alert asynchronous SOC workflow balancing forensic depth and operational efficiency. Utilizing the MemMal-D2024 dataset—comprising rich memory forensic artifacts from Windows systems infected with malware samples whose creation metadata spans 2006–2021—the system performs malware detection using features extracted from volatile memory. In the first stage, an Attentive Interpretable Tabular Learning (TabNet) model is used for binary classification (benign vs. malware), leveraging its sequential attention mechanism and built-in explainability. In the second stage, a Voting Classifier ensemble, composed of Light Gradient Boosting Machine (LGBM), eXtreme Gradient Boosting (XGB), and Histogram Gradient Boosting (HGB) models, is used to identify the specific malware family (Trojan, Ransomware, Spyware). To reduce memory dump extraction and analysis time without compromising detection performance, only a curated subset of 24 memory features—operationally selected to reduce acquisition/extraction time and validated via redundancy inspection, model explainability (SHAP/TabNet), and training-data correlation analysis—was used during training and at runtime, identifying the best trade-off between memory analysis and detection accuracy. The pipeline, which is triggered by host-based Wazuh Security Information and Event Management (SIEM) alerts, achieved 99.97% accuracy in binary detection and 70.17% multiclass accuracy, for an overall performance of 87.02%, and provides both global and local explainability, ensuring operational transparency and forensic interpretability. This approach offers an efficient and interpretable detection solution that can be used in combination with conventional security tools as an extra layer of defense, suitable for modern threat landscapes. Full article
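The stage-two soft-voting ensemble described in this abstract can be sketched as follows; hyperparameters, feature values, and labels are illustrative placeholders rather than the authors' tuned configuration.

```python
# Sketch of the stage-two soft-voting ensemble (LGBM + XGB + HGB) used for
# malware-family classification. Hyperparameters are placeholders.
import numpy as np
from lightgbm import LGBMClassifier
from sklearn.ensemble import HistGradientBoostingClassifier, VotingClassifier
from xgboost import XGBClassifier

rng = np.random.default_rng(4)
X = rng.normal(size=(600, 24))    # 24 curated memory features (synthetic)
y = rng.integers(0, 3, size=600)  # Trojan / Ransomware / Spyware (toy labels)

ensemble = VotingClassifier(
    estimators=[
        ("lgbm", LGBMClassifier(n_estimators=200)),
        ("xgb", XGBClassifier(n_estimators=200, eval_metric="mlogloss")),
        ("hgb", HistGradientBoostingClassifier()),
    ],
    voting="soft",                # average class probabilities across models
)
ensemble.fit(X, y)
print(ensemble.predict(X[:5]))
```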

14 pages, 2997 KB  
Article
Impact of Non-Linear CT Resampling on Enhancing Synthetic-CT Generation in Total Marrow and Lymphoid Irradiation
by Monica Bianchi, Nicola Lambri, Daniele Loiacono, Stefano Tomatis, Marta Scorsetti, Cristina Lenardi and Pietro Mancosu
Appl. Sci. 2026, 16(3), 1660; https://doi.org/10.3390/app16031660 - 6 Feb 2026
Viewed by 138
Abstract
Computed tomography (CT) images are stored at a 12-bit depth. However, many deep learning libraries and pre-trained models are designed for 8-bit images, requiring an intermediate compression step before restoring the original 12-bit physical range. This process causes information loss and can compromise image reliability. This study investigated the impact of two CT resampling methods (8-bit compression; 12-bit decompression) on dose calculation and image quality. Ten total marrow and lymphoid irradiation patients were selected. CT scans were resampled using linear and non-linear look-up tables (l_LUT/nl_LUT). Original and resampled CTs were evaluated in terms of: (i) Hounsfield unit (HU) root mean squared error (RMSE); (ii) dose–volume histogram (DVH) statistics for the target volume and several organs; (iii) 3D gamma passing rate (GPR) with a 1%/1.25 mm criterion; (iv) lymph node contouring and diagnostic quality (scale 1–5). The RMSE for l_LUT vs. nl_LUT was 7 ± 1 vs. 10 ± 1 HU. Maximum differences in DVH statistics were 0.4%, with a 3D-GPR of 100% for all cases. CTs resampled with l_LUT exhibited evident brain pixelation (score = 1), whereas nl_LUT matched the original CT quality (score = 4). Both LUTs were acceptable for lymph node delineation. The nl_LUT optimized the CT resampling process, providing a more efficient method for possible deep learning applications in synthetic CT generation. Full article
(This article belongs to the Section Applied Biosciences and Bioengineering)
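The round-trip resampling being compared (12-bit to 8-bit and back, via linear or non-linear look-up tables) can be illustrated with NumPy; the gamma-style curve below is an assumed stand-in, since the abstract does not specify the actual nl_LUT.

```python
# Sketch: 12-bit -> 8-bit CT compression with linear vs. non-linear look-up
# tables, then restoration to the 12-bit range. The gamma-style curve is an
# illustrative assumption; the paper's actual nl_LUT is not specified here.
import numpy as np

ct12 = np.random.default_rng(5).integers(0, 4096, size=(64, 64))  # synthetic 12-bit CT

# Linear LUT: uniform quantization of 4096 levels into 256.
lin8 = (ct12 / 4095 * 255).round().astype(np.uint8)
lin12 = (lin8.astype(float) / 255 * 4095).round()

# Non-linear LUT: allocate more 8-bit codes to low intensities (gamma < 1).
gamma = 0.5
nl8 = ((ct12 / 4095) ** gamma * 255).round().astype(np.uint8)
nl12 = ((nl8.astype(float) / 255) ** (1 / gamma) * 4095).round()

for name, rec in [("linear", lin12), ("non-linear", nl12)]:
    rmse = np.sqrt(np.mean((rec - ct12) ** 2))
    print(f"{name} LUT round-trip RMSE: {rmse:.1f} (12-bit levels)")
```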

25 pages, 51444 KB  
Article
Local Contrast Enhancement in Digital Images Using a Tunable Modified Hyperbolic Tangent Transformation
by Camilo E. Echeverry and Manuel G. Forero
Mathematics 2026, 14(3), 571; https://doi.org/10.3390/math14030571 - 5 Feb 2026
Viewed by 155
Abstract
Low contrast is a frequent challenge in image analysis, especially within medical imaging and highly saturated scenes. To address this issue, we present a nonlinear transformation for local contrast enhancement in digital images. Our method adapts the hyperbolic tangent function using two parameters: one to select the intensity range for modification and another to control the degree of enhancement. This approach outperforms conventional histogram-based techniques such as histogram equalization and specification in local contrast enhancement, without increasing computational cost, and produces smooth, artifact-free results in user-defined regions of interest. In addition, the proposed method was compared with CLAHE in MRIs, showing that, unlike CLAHE, the proposed method does not enhance the noise present in the background of the image. Furthermore, in deep learning contexts where dataset size is often limited, our method could serve as an effective data augmentation tool—generating varied contrast images while preserving anatomical structures, which improves neural network training for brain tumor detection in magnetic resonance imaging. The ability to manipulate local contrast may offer a pathway toward better interpretability of convolutional neural networks, as targeted contrast adjustments allow researchers to probe model sensitivity and enhance the explainability of classification and detection mechanisms. Full article
(This article belongs to the Special Issue Data Mining and Algorithms Applied in Image Processing)
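A transform in the spirit of the one described, a hyperbolic tangent with one parameter selecting the intensity range and another controlling enhancement strength, could be sketched as follows; this exact parameterization is an assumption, not the authors' formulation.

```python
# Sketch of a tunable hyperbolic-tangent intensity transform with two
# parameters: m selects the intensity range to modify and k controls the
# enhancement strength. This parameterization is an illustrative
# assumption, not the authors' exact formulation.
import numpy as np

def tanh_contrast(img, m=0.5, k=5.0):
    """Map normalized intensities through a tanh centered at m."""
    x = img.astype(float) / 255.0
    out = 0.5 * (1.0 + np.tanh(k * (x - m)) / np.tanh(k * max(m, 1.0 - m)))
    return np.clip(out * 255.0, 0, 255).astype(np.uint8)

img = np.random.default_rng(6).integers(0, 256, size=(128, 128), dtype=np.uint8)
enhanced = tanh_contrast(img, m=0.4, k=6.0)  # boost contrast around mid-tones
print(f"std before/after: {img.std():.1f} / {enhanced.std():.1f}")
```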

7 pages, 893 KB  
Proceeding Paper
Histogram-Based Vehicle Black Smoke Identification in Fixed Monitoring Environments
by Meng-Syuan Tsai, Yun-Sin Lin and Jiun-Jian Liaw
Eng. Proc. 2025, 120(1), 24; https://doi.org/10.3390/engproc2025120024 - 3 Feb 2026
Viewed by 151
Abstract
The black smoke emitted by diesel vehicles poses a long-term threat to air quality and human health, with suspended particulate matter being the most significant concern. We developed an image-based black smoke detection system in this study. The system uses YOLOv9 to locate vehicles and vertically divides the bounding box into nine regions, selecting the bottom three as regions of interest. A reference baseline histogram is established from the first frame of the video under a non-smoke condition. For subsequent frames, a dynamic baseline histogram is calculated, and the presence of black smoke emissions is determined using baseline histogram differences. Experimental results confirm that the system can reliably identify black smoke-emitting vehicles in both dynamic and static environments. Full article
(This article belongs to the Proceedings of 8th International Conference on Knowledge Innovation and Invention)
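The baseline-histogram comparison at the core of the method might be sketched as follows; the histogram distance measure and decision threshold are illustrative assumptions, not the paper's calibrated settings.

```python
# Sketch: black-smoke check by comparing an ROI grayscale histogram against
# a baseline histogram from a known non-smoke frame. The L1 distance and
# threshold are illustrative choices, not the paper's calibrated settings.
import numpy as np

def roi_histogram(gray_roi, bins=32):
    hist, _ = np.histogram(gray_roi, bins=bins, range=(0, 256))
    return hist / max(hist.sum(), 1)          # normalize to a distribution

def is_smoke(frame_roi, baseline_hist, threshold=0.25):
    diff = np.abs(roi_histogram(frame_roi) - baseline_hist).sum()  # L1 distance
    return diff > threshold

rng = np.random.default_rng(7)
baseline = roi_histogram(rng.integers(100, 200, size=(40, 120)))  # clean frame
smoky = rng.integers(0, 80, size=(40, 120))                       # darker ROI
print(is_smoke(smoky, baseline))  # True: histogram shifted toward dark bins
```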

27 pages, 1579 KB  
Article
Quadra Sense: A Fusion of Deep Learning Classifiers for Mitosis Detection in Breast Cancer Histopathology
by Afnan M. Alhassan and Nouf I. Altmami
Diagnostics 2026, 16(3), 393; https://doi.org/10.3390/diagnostics16030393 - 26 Jan 2026
Viewed by 274
Abstract
Background/Objectives: The difficulties caused by breast cancer have been addressed in a number of ways. Since it is reported to be the second most common cause of death from cancer among women, early intervention is crucial. Early detection is difficult because of existing detection tools’ shortcomings in objectivity and accuracy. Quadra Sense, a fusion of deep learning (DL) classifiers for mitosis detection in breast cancer histopathology, is proposed to address the shortcomings of current approaches and demonstrates a greater capacity to produce accurate results. Methods: Initially, the raw dataset is preprocessed using color channel normalization (zero-mean normalization) and stain normalization (Macenko stain normalization); artifacts are removed via median filtering, and contrast is enhanced using histogram equalization. ROI identification is then performed using modified Fully Convolutional Networks (FCNs), followed by feature extraction (FE) with Modified InceptionV4 (M-IV4), from which deep features are retrieved; features are selected by means of a Self-Improved Seagull Optimization Algorithm (SA-SOA), and finally, classification is performed using Mito-Quartet. Results: In the performance evaluation, the proposed approach achieved a higher accuracy of 99.2% in comparison with current methods. Conclusions: The outcomes show that the recommended technique performs well. Full article
(This article belongs to the Section Machine Learning and Artificial Intelligence in Diagnostics)
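Two of the preprocessing steps named above, median filtering for artifact removal and histogram equalization for contrast enhancement, can be sketched with OpenCV; the per-channel equalization strategy is an assumption, and the remaining pipeline stages are omitted.

```python
# Sketch of two preprocessing steps from the abstract: median filtering for
# artifact removal and histogram equalization for contrast enhancement.
# Stain normalization and the downstream networks (FCN ROI detection,
# M-IV4, SA-SOA, Mito-Quartet) are omitted.
import cv2
import numpy as np

img = np.random.default_rng(8).integers(0, 256, size=(256, 256, 3), dtype=np.uint8)

denoised = cv2.medianBlur(img, ksize=3)  # remove small artifacts

# Equalize each color channel's histogram independently (an assumption).
channels = [cv2.equalizeHist(denoised[:, :, c]) for c in range(3)]
preprocessed = cv2.merge(channels)
print(preprocessed.shape, preprocessed.dtype)
```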

17 pages, 10961 KB  
Article
Optimizing Image Segmentation for Microstructure Analysis of High-Strength Steel: Histogram-Based Recognition of Martensite and Bainite
by Filip Hallo, Tomasz Jażdżewski, Piotr Bała, Grzegorz Korpała and Krzysztof Regulski
Materials 2026, 19(2), 429; https://doi.org/10.3390/ma19020429 - 22 Jan 2026
Viewed by 156
Abstract
This study systematically compares three unsupervised segmentation algorithms (Simple Linear Iterative Clustering (SLIC), Felzenszwalb’s graph-based method, and the Watershed algorithm) in combination with two classification approaches: Random Forest using histogram-based features and Convolutional Neural Networks (CNNs). The study employs Bayesian optimization to jointly tune segmentation parameters and model hyperparameters, investigating how segmentation quality impacts downstream classification performance. The methodology is validated using light optical microscopy images of a high-strength steel sample, with performance evaluated through stratified cross-validation and independent test sets. The findings demonstrate the critical importance of segmentation algorithm selection and provide insights into the trade-offs between feature-engineered and end-to-end learning approaches for microstructure analysis. Full article
(This article belongs to the Section Metals and Alloys)
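One of the compared pairings, SLIC superpixels classified by a Random Forest over histogram features, can be sketched as follows; the segment labels, bin count, and SLIC parameters are illustrative placeholders.

```python
# Sketch: SLIC superpixels with a Random Forest over per-segment intensity
# histograms, mirroring one segmentation/classification pairing above.
# Segment labels here are random placeholders for expert annotations.
import numpy as np
from skimage.segmentation import slic
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(9)
img = rng.random((128, 128))  # synthetic grayscale micrograph

segments = slic(img, n_segments=50, compactness=0.1, channel_axis=None)

def segment_histograms(image, segs, bins=16):
    feats = []
    for label in np.unique(segs):
        vals = image[segs == label]
        hist, _ = np.histogram(vals, bins=bins, range=(0, 1))
        feats.append(hist / max(vals.size, 1))
    return np.array(feats)

X = segment_histograms(img, segments)
y = rng.integers(0, 2, size=len(X))  # martensite vs. bainite (placeholder)
clf = RandomForestClassifier(n_estimators=100).fit(X, y)
print(clf.predict(X[:5]))
```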

23 pages, 13473 KB  
Article
Automatic Threshold Selection Guided by Maximizing Homologous Isomeric Similarity Under Unified Transformation Toward Unimodal Distribution
by Yaobin Zou, Wenli Yu and Qingqing Huang
Electronics 2026, 15(2), 451; https://doi.org/10.3390/electronics15020451 - 20 Jan 2026
Viewed by 1303
Abstract
Traditional thresholding methods are often tailored to specific histogram patterns, making it difficult to achieve robust segmentation across diverse images exhibiting non-modal, unimodal, bimodal, or multimodal distributions. To address this limitation, this paper proposes an automatic thresholding method guided by maximizing homologous isomeric similarity under a unified transformation toward unimodal distribution. The primary objective is to establish a generalized selection criterion that functions independently of the input histogram’s pattern. The methodology employs bilateral filtering, non-maximum suppression, and Sobel operators to transform diverse histogram patterns into a unified, right-skewed unimodal distribution. Subsequently, the optimal threshold is determined by maximizing the normalized Renyi mutual information between the transformed edge image and binary contour images extracted at varying levels. Experimental validation on both synthetic and real-world images demonstrates that the proposed method offers greater adaptability and higher accuracy compared to representative thresholding and non-thresholding techniques. The results show a significant reduction in misclassification errors and improved correlation metrics, confirming the method’s effectiveness as a unified thresholding solution for images with non-modal, unimodal, bimodal, or multimodal histogram patterns. Full article
(This article belongs to the Special Issue Image Processing and Pattern Recognition)
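The outer selection loop, score every candidate threshold and keep the maximizer, can be sketched generically; here scikit-learn's normalized mutual information stands in for the paper's normalized Renyi mutual information, and the unimodal histogram transformation is omitted, so this is a structural sketch only.

```python
# Structural sketch only: exhaustively score candidate thresholds and keep
# the one maximizing a similarity criterion between the edge image and each
# binary contour image. sklearn's normalized mutual information is a
# stand-in for the paper's normalized Renyi mutual information.
import numpy as np
from sklearn.metrics import normalized_mutual_info_score

def best_threshold(edge_img, gray_img):
    edge_flat = (edge_img > edge_img.mean()).ravel()  # binarized edge map
    scores = {}
    for t in range(1, 255):
        contour = (gray_img >= t).ravel()             # binary image at level t
        scores[t] = normalized_mutual_info_score(edge_flat, contour)
    return max(scores, key=scores.get)

rng = np.random.default_rng(10)
gray = rng.integers(0, 256, size=(64, 64))
edges = np.abs(np.gradient(gray.astype(float))[0])    # crude edge magnitude
print("selected threshold:", best_threshold(edges, gray))
```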

11 pages, 2095 KB  
Article
Dosimetric Challenges of Small Lung Lesions in Low-Density Tissue Treated with Stereotactic Body Radiation Therapy
by Indra J. Das, Meisong Ding and Mohamed E. Abazeed
J. Clin. Med. 2026, 15(2), 603; https://doi.org/10.3390/jcm15020603 - 12 Jan 2026
Viewed by 308
Abstract
Background/Objectives: Stereotactic body radiation therapy (SBRT) is widely used for small lung tumors, but the physics of electron transport in low-density lungs remains incompletely understood. This study quantifies the effect of lung density on dosimetry for small lesions. Methods: To study the dosimetric parameters, a pseudo-patient model was used. A lung SBRT patient with a central lesion was modeled in the Eclipse treatment planning system using the AAA algorithm. Three target sizes (1.0, 1.5, and 2.0 cm) were planned with lung densities overridden from 0.1 to 1.0 g/cm³. Standard SBRT constraints were applied, and conformity, homogeneity, and gradient indices (CI, HI, GI), maximum dose, and MU/Gy were recorded to characterize the resulting patterns. Results: Dose–volume histograms (DVHs) showed marked dependence on both lesion size and lung density. Lower densities produced higher maximum doses (up to 135% at 0.1 g/cm³), steeper DVH tails, and significantly increased MU/Gy. Conformity was achievable in all cases, but at the cost of degraded homogeneity and gradient indices. At higher density (1.0 g/cm³), maximum dose values fell to 108–110%, which is typical of non-lung cases. Conclusions: SBRT planning in low-density lungs requires substantially higher MU and results in greater dose spillage despite acceptable conformity. These findings highlight the importance of considering density effects when comparing clinical outcomes across institutions and selecting optimal plans, where minimizing MU/Gy may reduce unnecessary dose burden. Full article
(This article belongs to the Section Nuclear Medicine & Radiology)
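Since the results center on dose–volume histograms, a minimal NumPy sketch of computing a cumulative DVH from a dose grid and structure mask may help; the dose values and mask are synthetic placeholders.

```python
# Sketch: computing a cumulative dose-volume histogram (DVH) from a dose
# grid and a structure mask. Dose values and the mask are synthetic.
import numpy as np

rng = np.random.default_rng(11)
dose = rng.random((50, 50, 50)) * 60.0  # dose grid in Gy (synthetic)
mask = rng.random((50, 50, 50)) > 0.9   # structure voxels (synthetic)

structure_dose = dose[mask]
levels = np.linspace(0, structure_dose.max(), 100)
# Fraction of structure volume receiving at least each dose level.
dvh = np.array([(structure_dose >= d).mean() * 100.0 for d in levels])
print(f"V20Gy = {np.interp(20.0, levels, dvh):.1f}% of structure volume")
```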

23 pages, 12620 KB  
Article
The Color Image Watermarking Algorithm Based on Quantum Discrete Wavelet Transform and Chaotic Mapping
by Yikang Yuan, Wenbo Zhao, Zhongyan Li and Wanquan Liu
Symmetry 2026, 18(1), 33; https://doi.org/10.3390/sym18010033 - 24 Dec 2025
Viewed by 449
Abstract
Quantum watermarking is a technique that embeds specific information into a quantum carrier for the purpose of digital copyright protection. In this paper, we propose a novel color image watermarking algorithm that integrates the quantum discrete wavelet transform with Sinusoidal–Tent mapping and baker mapping. Initially, chaotic sequences are generated using Sinusoidal–Tent mapping to determine the channels suitable for watermark embedding. Subsequently, a one-level quantum Haar wavelet transform is applied to the selected channel to decompose the image. The watermark image is then scrambled via discrete baker mapping, and the scrambled watermark is embedded into the high-high (HH) subbands. The invisibility of the watermark is evaluated by calculating the peak signal-to-noise ratio (PSNR), structural similarity index measure (SSIM), and learned perceptual image patch similarity (LPIPS), with comparisons made against the color histogram. The robustness of the proposed algorithm is assessed through the calculation of normalized cross-correlation (NCC). In the simulation results, PSNR is close to 63, SSIM is close to 1, LPIPS is close to 0.001, and NCC is close to 0.97. This indicates that the proposed watermarking algorithm exhibits excellent visual quality and a robust capability to withstand various attacks. Additionally, an ablation study systematically evaluated the contribution of each technique to overall performance. Full article
(This article belongs to the Section Computer)
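A classical (non-quantum) analogue of the embedding step, a one-level Haar wavelet transform with additive insertion into the HH subband, can be sketched with PyWavelets; the embedding strength and the omission of chaotic channel selection and baker scrambling are simplifying assumptions.

```python
# Sketch: a classical analogue of the embedding step, using a one-level
# Haar wavelet transform and additive insertion into the HH subband of one
# color channel. The quantum circuits, Sinusoidal-Tent channel selection,
# and baker scrambling are omitted; alpha is an illustrative assumption.
import numpy as np
import pywt

rng = np.random.default_rng(12)
channel = rng.random((128, 128))               # selected color channel
watermark = rng.integers(0, 2, size=(64, 64))  # binary watermark (pre-scrambled)

LL, (LH, HL, HH) = pywt.dwt2(channel, "haar")  # one-level Haar decomposition
alpha = 0.02                                   # embedding strength (assumption)
HH_marked = HH + alpha * watermark

marked = pywt.idwt2((LL, (LH, HL, HH_marked)), "haar")
psnr = 10 * np.log10(1.0 / np.mean((marked - channel) ** 2))
print(f"PSNR of watermarked channel: {psnr:.1f} dB")
```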

16 pages, 1560 KB  
Article
Performance Comparison of U-Net and Its Variants for Carotid Intima–Media Segmentation in Ultrasound Images
by Seungju Jeong, Minjeong Park, Sumin Jeong and Dong Chan Park
Diagnostics 2026, 16(1), 2; https://doi.org/10.3390/diagnostics16010002 - 19 Dec 2025
Viewed by 537
Abstract
Background/Objectives: This study systematically compared the performance of U-Net and its variants for automatic analysis of carotid intima–media thickness (CIMT) in ultrasound images, focusing on segmentation accuracy and real-time efficiency. Methods: Ten models were trained and evaluated using the publicly available Carotid Ultrasound Boundary Study (CUBS) dataset (2176 images from 1088 subjects). Images were preprocessed using histogram-based smoothing and resized to a resolution of 256 × 256 pixels. Model training was conducted using identical hyperparameters (50 epochs, batch size 8, Adam optimizer with a learning rate of 1 × 10⁻⁴, and binary cross-entropy loss). Segmentation accuracy was assessed using the Dice, Intersection over Union (IoU), Precision, Recall, and Accuracy metrics, while real-time performance was evaluated based on training/inference times and model parameter counts. Results: All models achieved high accuracy, with Dice/IoU scores above 0.80/0.67. Attention U-Net achieved the highest segmentation accuracy, while UNeXt demonstrated the fastest training/inference speeds (approximately 420,000 parameters). Qualitatively, UNet++ produced smooth and natural boundaries, highlighting its strength in boundary reconstruction. Additionally, the relationship between model parameter count and Dice performance was visualized to illustrate the trade-off between accuracy and efficiency. Conclusions: This study provides a quantitative and qualitative evaluation of the accuracy, efficiency, and boundary reconstruction characteristics of U-Net-based models for CIMT segmentation, offering guidance for model selection according to clinical requirements (accuracy vs. real-time performance). Full article
(This article belongs to the Special Issue 3rd Edition: AI/ML-Based Medical Image Processing and Analysis)
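The Dice and IoU overlap metrics used to rank the models above are straightforward to compute from binary masks; the random masks below are placeholders for model predictions and ground truth.

```python
# Sketch: the Dice and IoU overlap metrics used to compare the segmentation
# models above, computed from binary masks with NumPy.
import numpy as np

def dice_iou(pred, target, eps=1e-7):
    pred, target = pred.astype(bool), target.astype(bool)
    inter = np.logical_and(pred, target).sum()
    dice = (2 * inter + eps) / (pred.sum() + target.sum() + eps)
    iou = (inter + eps) / (np.logical_or(pred, target).sum() + eps)
    return dice, iou

rng = np.random.default_rng(13)
pred = rng.random((256, 256)) > 0.5    # placeholder model prediction
target = rng.random((256, 256)) > 0.5  # placeholder ground-truth mask
print("Dice = %.3f, IoU = %.3f" % dice_iou(pred, target))
```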
