Search Results (71)

Search Parameters:
Keywords = Jaccard similarity measure

23 pages, 6234 KiB  
Article
Characterizing Breast Tumor Heterogeneity Through IVIM-DWI Parameters and Signal Decay Analysis
by Si-Wa Chan, Chun-An Lin, Yen-Chieh Ouyang, Guan-Yuan Chen, Chein-I Chang, Chin-Yao Lin, Chih-Chiang Hung, Chih-Yean Lum, Kuo-Chung Wang and Ming-Cheng Liu
Diagnostics 2025, 15(12), 1499; https://doi.org/10.3390/diagnostics15121499 - 12 Jun 2025
Viewed by 1687
Abstract
Background/Objectives: This research presents a novel analytical method for breast tumor characterization and tissue classification by leveraging intravoxel incoherent motion diffusion-weighted imaging (IVIM-DWI) combined with hyperspectral imaging techniques and deep learning. Traditionally, dynamic contrast-enhanced MRI (DCE-MRI) is employed for breast tumor diagnosis, but it involves gadolinium-based contrast agents, which carry potential health risks. IVIM imaging extends conventional diffusion-weighted imaging (DWI) by explicitly separating the signal decay into components representing true molecular diffusion (D) and microcirculation of capillary blood (pseudo-diffusion or D*). This separation allows for a more comprehensive, non-invasive assessment of tissue characteristics without the need for contrast agents, thereby offering a safer alternative for breast cancer diagnosis. The primary purpose of this study was to evaluate different methods for breast tumor characterization using IVIM-DWI data treated as hyperspectral image stacks. Dice similarity coefficients and Jaccard indices were specifically used to evaluate the spatial segmentation accuracy of tumor boundaries, confirmed by experienced physicians on dynamic contrast-enhanced MRI (DCE-MRI), emphasizing detailed tumor characterization rather than binary diagnosis of cancer. Methods: The data source for this study consisted of breast MRI scans obtained from 22 patients diagnosed with mass-type breast cancer, resulting in 22 distinct mass tumor cases analyzed. MR images were acquired using a 3T MRI system (Discovery MR750 3.0 Tesla, GE Healthcare, Chicago, IL, USA) with axial IVIM sequences and a bipolar pulsed gradient spin echo sequence. Multiple b-values ranging from 0 to 2500 s/mm2 were utilized, specifically thirteen original b-values (0, 15, 30, 45, 60, 100, 200, 400, 600, 1000, 1500, 2000, and 2500 s/mm2), with the last four b-value images replicated once for a total of 17 bands used in the analysis. The methodology involved several steps: acquisition of multi-b-value IVIM-DWI images, image pre-processing, including correction for motion and intensity inhomogeneity, treating the multi-b-value data as hyperspectral image stacks, applying hyperspectral techniques like band expansion, and evaluating three tumor detection methods: kernel-based constrained energy minimization (KCEM), iterative KCEM (I-KCEM), and deep neural networks (DNNs). The comparisons were assessed by evaluating the similarity of the detection results from each method to ground truth tumor areas, which were manually drawn on DCE-MRI images and confirmed by experienced physicians. Similarity was quantitatively measured using the Dice similarity coefficient and the Jaccard index. Additionally, the performance of the detectors was evaluated using 3D-ROC analysis and its derived criteria (AUCOD, AUCTD, AUCBS, AUCTDBS, AUCODP, AUCSNPR). Results: The findings objectively demonstrated that the DNN method achieved superior performance in breast tumor detection compared to KCEM and I-KCEM. Specifically, the DNN yielded a Dice similarity coefficient of 86.56% and a Jaccard index of 76.30%, whereas KCEM achieved 78.49% (Dice) and 64.60% (Jaccard), and I-KCEM achieved 78.55% (Dice) and 61.37% (Jaccard). Evaluation using 3D-ROC analysis also indicated that the DNN was the best detector based on metrics like target detection rate and overall effectiveness. 
The DNN model further exhibited the capability to identify tumor heterogeneity, differentiating high- and low-cellularity regions. Quantitative parameters, including apparent diffusion coefficient (ADC), pure diffusion coefficient (D), pseudo-diffusion coefficient (D*), and perfusion fraction (PF), were calculated and analyzed, providing insights into the diffusion characteristics of different breast tissues. Analysis of signal intensity decay curves generated from these parameters further illustrated distinct diffusion patterns and confirmed that high cellularity tumor regions showed greater water molecule confinement compared to low cellularity regions. Conclusions: This study highlights the potential of combining IVIM-DWI, hyperspectral imaging techniques, and deep learning as a robust, safe, and effective non-invasive diagnostic tool for breast cancer, offering a valuable alternative to contrast-enhanced methods by providing detailed information about tissue microstructure and heterogeneity without the need for contrast agents. Full article
(This article belongs to the Special Issue Recent Advances in Breast Cancer Imaging)
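As a point of reference for the Dice and Jaccard figures reported above, the sketch below shows how both overlap measures are typically computed between a detected tumor mask and a physician-drawn ground-truth mask; the arrays and their sizes are hypothetical, not the study's data.

```python
import numpy as np

def dice_jaccard(pred: np.ndarray, truth: np.ndarray) -> tuple[float, float]:
    """Dice similarity coefficient and Jaccard index between two binary masks."""
    pred, truth = pred.astype(bool), truth.astype(bool)
    intersection = np.logical_and(pred, truth).sum()
    union = np.logical_or(pred, truth).sum()
    if union == 0:                      # both masks empty: treat as perfect agreement
        return 1.0, 1.0
    dice = 2.0 * intersection / (pred.sum() + truth.sum())
    jaccard = intersection / union
    return float(dice), float(jaccard)

# Hypothetical detector output vs. manually delineated ground truth
pred = np.zeros((128, 128), dtype=bool); pred[40:80, 40:80] = True
truth = np.zeros((128, 128), dtype=bool); truth[45:85, 45:85] = True
d, j = dice_jaccard(pred, truth)
print(f"Dice = {d:.2%}, Jaccard = {j:.2%}")
```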

13 pages, 750 KiB  
Article
Semantic Evaluation of Nursing Assessment Scales Translations by ChatGPT 4.0: A Lexicometric Analysis
by Mauro Parozzi, Mattia Bozzetti, Alessio Lo Cascio, Daniele Napolitano, Roberta Pendoni, Ilaria Marcomini, Elena Sblendorio, Giovanni Cangelosi, Stefano Mancin and Antonio Bonacaro
Nurs. Rep. 2025, 15(6), 211; https://doi.org/10.3390/nursrep15060211 - 11 Jun 2025
Cited by 2 | Viewed by 1023 | Correction
Abstract
Background/Objectives: The use of standardized assessment tools within the nursing care process is a globally established practice, widely recognized as a foundation for evidence-based evaluation. Accurate translation is essential to ensure their correct and consistent clinical use. While effective, traditional procedures are time-consuming and resource-intensive, leading to increasing interest in whether artificial intelligence can assist or streamline this process for nursing researchers. Therefore, this study aimed to assess the quality of translations of nursing assessment scales performed by ChatGPT 4.0. Methods: A total of 31 nursing rating scales with 772 items were translated from English to Italian using two different prompts, and then underwent a deep lexicometric analysis. To assess the semantic accuracy of the translations, Sentence-BERT, Jaccard similarity, TF-IDF cosine similarity, and overlap ratio were used. Sensitivity, specificity, AUC, and AUROC were calculated to assess the quality of the translation classification. Paired-sample t-tests were conducted to compare the similarity scores. Results: The Maastricht prompt produced translations that were marginally but consistently more semantically and lexically faithful to the original. While all differences were found to be statistically significant, the corresponding effect sizes indicate that the advantage of the Maastricht prompt is slight but consistent across all measures. The sensitivity of the prompts was 0.929 (92.9%) for York and 0.932 (93.2%) for Maastricht. Specificity and precision remained at 1.000 for both. Conclusions: Findings highlight the potential of prompt engineering as a low-cost, effective method to enhance translation outcomes. Nonetheless, as translation represents only a preliminary step in the full validation process, further studies should investigate the integration of AI-assisted translation within the broader framework of instrument adaptation and validation. Full article
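Two of the lexical measures named above are simple enough to sketch directly: token-set Jaccard similarity and TF-IDF cosine similarity for a pair of texts, assuming scikit-learn is available. The example sentences are placeholders, not items from the study's scales.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

def jaccard_tokens(a: str, b: str) -> float:
    """Jaccard similarity between the token sets of two texts."""
    sa, sb = set(a.lower().split()), set(b.lower().split())
    return len(sa & sb) / len(sa | sb)

def tfidf_cosine(a: str, b: str) -> float:
    """Cosine similarity between TF-IDF vectors fitted on the two texts."""
    tfidf = TfidfVectorizer().fit_transform([a, b])
    return float(cosine_similarity(tfidf[0], tfidf[1])[0, 0])

item = "The patient is able to walk independently"        # hypothetical scale item
reference = "The patient can walk without assistance"     # hypothetical reference translation
print(jaccard_tokens(item, reference), tfidf_cosine(item, reference))
```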

17 pages, 3120 KiB  
Article
LAAVOS: A DeAOT-Based Approach for Medaka Larval Ventricular Video Segmentation
by Kai Rao, Minghao Wang and Shutan Xu
Appl. Sci. 2025, 15(12), 6537; https://doi.org/10.3390/app15126537 - 10 Jun 2025
Viewed by 428
Abstract
Accurate segmentation of the ventricular region in embryonic heart videos of medaka fish (Oryzias latipes) holds significant scientific value for research on heart development mechanisms. However, existing medaka ventricular datasets are overly simplistic and fail to meet practical application requirements. Moreover, the video frames contain multiple complex interfering factors, including optical interference from the filming environment, dynamic color changes caused by blood flow, significant diversity in ventricular scales, image blurring in certain video frames, high similarity in organ structures, and indistinct boundaries between the ventricles and atria. These challenges mean that existing methods still face notable technical difficulties in medaka embryonic ventricular segmentation. To address these challenges, this study first constructs a medaka embryonic ventricular video dataset containing 4200 frames with pixel-level annotations. Building upon this, we propose a semi-supervised video segmentation model based on the hierarchical propagation feature decoupling framework (DeAOT) and innovatively design an architecture that combines the LA-ResNet encoder with the AFPViS decoder, significantly improving the accuracy of medaka ventricular segmentation. Experimental results demonstrate that, compared to the traditional U-Net model, our method achieves a 13.48% improvement in the mean Intersection over Union (mIoU) metric. Additionally, compared to the state-of-the-art DeAOT method, it achieves a notable 4.83% enhancement in the comprehensive evaluation metric Jaccard and F-measure (J&F), providing reliable technical support for research on embryonic heart development. Full article
(This article belongs to the Special Issue Pattern Recognition in Video Processing)
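The J&F score referenced above is conventionally the mean of a region similarity J (intersection over union) and a boundary F-measure. The sketch below approximates it for a sequence of boolean masks, using a simplified erosion/dilation boundary match with a hypothetical pixel tolerance rather than the exact DAVIS-style evaluation.

```python
import numpy as np
from scipy.ndimage import binary_erosion, binary_dilation

def region_j(pred, gt):
    """Region similarity J: intersection over union of two boolean masks."""
    union = np.logical_or(pred, gt).sum()
    return np.logical_and(pred, gt).sum() / union if union else 1.0

def boundary_f(pred, gt, tol=2):
    """Simplified boundary F: precision/recall of boundary pixels within `tol` pixels."""
    pb = pred ^ binary_erosion(pred)          # predicted boundary pixels
    gb = gt ^ binary_erosion(gt)              # ground-truth boundary pixels
    struct = np.ones((2 * tol + 1, 2 * tol + 1), dtype=bool)
    prec = (pb & binary_dilation(gb, struct)).sum() / max(pb.sum(), 1)
    rec = (gb & binary_dilation(pb, struct)).sum() / max(gb.sum(), 1)
    return 2 * prec * rec / (prec + rec) if prec + rec else 0.0

def j_and_f(pred_frames, gt_frames):
    """Mean J&F over a sequence of per-frame boolean masks."""
    js = [region_j(p, g) for p, g in zip(pred_frames, gt_frames)]
    fs = [boundary_f(p, g) for p, g in zip(pred_frames, gt_frames)]
    return (np.mean(js) + np.mean(fs)) / 2.0
```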

32 pages, 2404 KiB  
Review
Bio-Inspired Metaheuristics in Deep Learning for Brain Tumor Segmentation: A Decade of Advances and Future Directions
by Shoffan Saifullah, Rafał Dreżewski, Anton Yudhana, Wahyu Caesarendra and Nurul Huda
Information 2025, 16(6), 456; https://doi.org/10.3390/info16060456 - 29 May 2025
Cited by 1 | Viewed by 900
Abstract
Accurate segmentation of brain tumors in magnetic resonance imaging (MRI) remains a challenging task due to heterogeneous tumor structures, varying intensities across modalities, and limited annotated data. Deep learning has significantly advanced segmentation accuracy; however, it often suffers from sensitivity to hyperparameter settings and limited generalization. To overcome these challenges, bio-inspired metaheuristic algorithms have been increasingly employed to optimize various stages of the deep learning pipeline—including hyperparameter tuning, preprocessing, architectural design, and attention modulation. This review systematically examines developments from 2015 to 2025, focusing on the integration of nature-inspired optimization methods such as Particle Swarm Optimization (PSO), Genetic Algorithm (GA), Differential Evolution (DE), Grey Wolf Optimizer (GWO), Whale Optimization Algorithm (WOA), and novel hybrids including CJHBA and BioSwarmNet into deep learning-based brain tumor segmentation frameworks. A structured multi-query search strategy was executed using Publish or Perish across Google Scholar and Scopus databases. Following PRISMA guidelines, 3895 records were screened through automated filtering and manual eligibility checks, yielding a curated set of 106 primary studies. Through bibliometric mapping, methodological synthesis, and performance analysis, we highlight trends in algorithm usage, application domains (e.g., preprocessing, architecture search), and segmentation outcomes measured by metrics such as Dice Similarity Coefficient (DSC), Jaccard Index (JI), Hausdorff Distance (HD), and ASSD. Our findings demonstrate that bio-inspired optimization significantly enhances segmentation accuracy and robustness, particularly in multimodal settings involving FLAIR and T1CE modalities. The review concludes by identifying emerging research directions in hybrid optimization, real-time clinical applicability, and explainable AI, providing a roadmap for future exploration in this interdisciplinary domain. Full article
(This article belongs to the Section Review)

24 pages, 12924 KiB  
Article
Analysis of Forest Change Detection Induced by Hurricane Helene Using Remote Sensing Data
by Rizwan Ahmed Ansari, Tony Esimaje, Oluwatosin Michael Ibrahim and Timothy Mulrooney
Forests 2025, 16(5), 788; https://doi.org/10.3390/f16050788 - 8 May 2025
Cited by 1 | Viewed by 510
Abstract
The occurrence of hurricanes in the southern U.S. is on the rise, and assessing the damage caused to forests is essential for implementing protective measures and comprehending recovery dynamics. This work aims to create a novel data integration framework that employs LANDSAT 8, drone-based images, and geographic information system data for change detection analysis across different forest types. We propose a method for change vector analysis based on a unique spectral mixture model utilizing composite spectral indices along with univariate difference imaging to create a change detection map illustrating disturbances in the areas of McDowell County in western North Carolina impacted by Hurricane Helene. The spectral indices included near-infrared-to-red ratios, a normalized difference vegetation index, Tasseled Cap indices, and a soil-adjusted vegetation index. In addition to the satellite imagery, ground truth data on forest damage were collected through field investigation and interpretation of post-Helene drone images. Accuracy assessment was conducted against geographic information system (GIS) data and maps from the National Land Cover Database, using metrics such as overall accuracy, precision, recall, F-score, Jaccard similarity, and kappa statistics. The proposed composite method performed well, with overall accuracy and Jaccard similarity values of 73.80% and 0.6042, respectively. The results exhibit a reasonable correlation with GIS data and can be employed to assess damage severity. Full article
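One ingredient of the workflow described above, NDVI-based univariate difference imaging, can be sketched as follows; the band arrays and the change threshold are hypothetical and stand in for calibrated Landsat 8 red and near-infrared reflectance before and after the storm.

```python
import numpy as np

def ndvi(nir: np.ndarray, red: np.ndarray) -> np.ndarray:
    """Normalized difference vegetation index from reflectance bands."""
    nir = nir.astype(float)
    red = red.astype(float)
    return (nir - red) / np.clip(nir + red, 1e-6, None)   # guard against division by zero

def ndvi_loss_map(nir_pre, red_pre, nir_post, red_post, threshold=0.2):
    """Univariate difference image: flag pixels whose NDVI dropped by more than `threshold`."""
    drop = ndvi(nir_pre, red_pre) - ndvi(nir_post, red_post)
    return drop > threshold            # boolean map of likely storm-disturbed canopy
```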

18 pages, 315 KiB  
Article
Generalizations and Properties of Normalized Similarity Measures for Boolean Models
by Amelia Bădică, Costin Bădică, Doina Logofătu and Ionuţ-Dragoş Neremzoiu
Mathematics 2025, 13(3), 384; https://doi.org/10.3390/math13030384 - 24 Jan 2025
Viewed by 704
Abstract
In this paper, we provide a closer look at some of the most popular normalized similarity/distance measures for Boolean models. This work covers the generalization of three classes of measures, described as generalized Kulczynski, generalized Jaccard, and generalized Consonni and Todeschini measures; the theoretical ordering of the similarity measures within each class and between classes; and positive and negative results on whether the measures satisfy the triangle inequality axiom. Full article
(This article belongs to the Special Issue Mathematics and Applications)
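For two Boolean vectors, measures of this kind are usually written in terms of the 2×2 contingency counts a (1,1), b (1,0), c (0,1), and d (0,0). The sketch below gives the standard, non-generalized forms of Jaccard, Kulczyński's second coefficient, and the Sokal–Michener simple matching coefficient; the generalized families studied in the paper extend these.

```python
import numpy as np

def contingency(x: np.ndarray, y: np.ndarray):
    """Counts a, b, c, d for two equal-length Boolean vectors."""
    x, y = x.astype(bool), y.astype(bool)
    a = np.sum(x & y)      # 1 in both
    b = np.sum(x & ~y)     # 1 only in x
    c = np.sum(~x & y)     # 1 only in y
    d = np.sum(~x & ~y)    # 0 in both
    return a, b, c, d

def jaccard(x, y):
    a, b, c, _ = contingency(x, y)
    return a / (a + b + c)                      # assumes not both vectors are all zeros

def kulczynski(x, y):
    """Kulczyński's second coefficient: mean of the two conditional match rates."""
    a, b, c, _ = contingency(x, y)
    return 0.5 * (a / (a + b) + a / (a + c))

def simple_matching(x, y):
    """Sokal–Michener simple matching coefficient."""
    a, b, c, d = contingency(x, y)
    return (a + d) / (a + b + c + d)
```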

21 pages, 3668 KiB  
Article
LD-SMOTE: A Novel Local Density Estimation-Based Oversampling Method for Imbalanced Datasets
by Jiacheng Lyu, Jie Yang, Zhixun Su and Zilu Zhu
Symmetry 2025, 17(2), 160; https://doi.org/10.3390/sym17020160 - 22 Jan 2025
Viewed by 1113
Abstract
Imbalanced data have become a major stumbling block in the field of machine learning. In this paper, a novel oversampling method based on local density estimation, namely LD-SMOTE, is presented to address limitations of the popular rebalancing technique SMOTE. LD-SMOTE starts with k-means clustering to quantitatively measure the classification contribution of each feature. Subsequently, a novel distance metric grounded in Jaccard similarity is defined, which accentuates the features that are more intricately linked to the minority class. Utilizing this metric, we estimate the local density with a Gaussian-like function to control the quantity of synthetic samples around every minority sample, thus simulating the distribution of the minority class. Additionally, in LD-SMOTE the generation of synthetic samples occurs within a triangular region constructed by the minority sample and its two chosen neighbors, instead of on the line connecting the minority sample and one of its neighbors. Experimental comparisons between LD-SMOTE and 16 existing resampling methods on 19 datasets show that LD-SMOTE yields significant average improvements of 6.4% in accuracy, 4.4% in F-measure, 5.4% in G-mean, and 4.0% in AUC. These results indicate that LD-SMOTE can be an alternative oversampling method for imbalanced datasets. Full article
(This article belongs to the Section Computer)
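The triangular generation step described above can be illustrated with barycentric sampling: a synthetic point is drawn uniformly inside the triangle formed by a minority sample and two of its neighbors. This is a sketch of the geometric idea only, not the authors' full LD-SMOTE procedure, and the coordinates are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_in_triangle(x, n1, n2):
    """Draw one point uniformly inside the triangle (x, n1, n2) via barycentric weights."""
    r1, r2 = rng.random(), rng.random()
    if r1 + r2 > 1.0:                 # reflect so the point stays inside the triangle
        r1, r2 = 1.0 - r1, 1.0 - r2
    return x + r1 * (n1 - x) + r2 * (n2 - x)

# Hypothetical minority sample and two of its nearest minority-class neighbors
x = np.array([1.0, 2.0])
n1 = np.array([1.5, 2.5])
n2 = np.array([0.8, 2.8])
synthetic = sample_in_triangle(x, n1, n2)
```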

21 pages, 2867 KiB  
Article
A Resource-Efficient Multi-Entropy Fusion Method and Its Application for EEG-Based Emotion Recognition
by Jiawen Li, Guanyuan Feng, Chen Ling, Ximing Ren, Xin Liu, Shuang Zhang, Leijun Wang, Yanmei Chen, Xianxian Zeng and Rongjun Chen
Entropy 2025, 27(1), 96; https://doi.org/10.3390/e27010096 - 20 Jan 2025
Cited by 2 | Viewed by 1457
Abstract
Emotion recognition is an advanced technology for understanding human behavior and psychological states, with extensive applications for mental health monitoring, human–computer interaction, and affective computing. Based on electroencephalography (EEG), the biomedical signals naturally generated by the brain, this work proposes a resource-efficient multi-entropy fusion method for classifying emotional states. First, Discrete Wavelet Transform (DWT) is applied to extract five brain rhythms, i.e., delta, theta, alpha, beta, and gamma, from EEG signals, followed by the acquisition of multi-entropy features, including Spectral Entropy (PSDE), Singular Spectrum Entropy (SSE), Sample Entropy (SE), Fuzzy Entropy (FE), Approximation Entropy (AE), and Permutation Entropy (PE). Then, such entropies are fused into a matrix to represent complex and dynamic characteristics of EEG, denoted as the Brain Rhythm Entropy Matrix (BREM). Next, Dynamic Time Warping (DTW), Mutual Information (MI), the Spearman Correlation Coefficient (SCC), and the Jaccard Similarity Coefficient (JSC) are applied to measure the similarity between the unknown testing BREM data and positive/negative emotional samples for classification. Experiments were conducted using the DEAP dataset, aiming to find a suitable scheme regarding similarity measures, time windows, and input numbers of channel data. The results reveal that DTW yields the best performance in similarity measures with a 5 s window. In addition, the single-channel input mode outperforms the single-region mode. The proposed method achieves 84.62% and 82.48% accuracy in arousal and valence classification tasks, respectively, indicating its effectiveness in reducing data dimensionality and computational complexity while maintaining an accuracy of over 80%. Such performances are remarkable when considering limited data resources as a concern, which opens possibilities for an innovative entropy fusion method that can help to design portable EEG-based emotion-aware devices for daily usage. Full article
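Since DTW was the best-performing similarity measure in this comparison, a compact textbook implementation is sketched below for two 1-D sequences; applying it to the entropy-matrix representation described above (for example, row by row or on flattened features) is an assumption made for illustration.

```python
import numpy as np

def dtw_distance(s: np.ndarray, t: np.ndarray) -> float:
    """Classic O(len(s)*len(t)) dynamic time warping distance between two 1-D sequences."""
    n, m = len(s), len(t)
    cost = np.full((n + 1, m + 1), np.inf)
    cost[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            d = abs(s[i - 1] - t[j - 1])
            cost[i, j] = d + min(cost[i - 1, j],      # insertion
                                 cost[i, j - 1],      # deletion
                                 cost[i - 1, j - 1])  # match
    return float(cost[n, m])
```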

18 pages, 2811 KiB  
Article
The Power of Words from the 2024 United States Presidential Debates: A Natural Language Processing Approach
by Ana Lorena Jiménez-Preciado, José Álvarez-García, Salvador Cruz-Aké and Francisco Venegas-Martínez
Information 2025, 16(1), 2; https://doi.org/10.3390/info16010002 - 25 Dec 2024
Cited by 1 | Viewed by 3974
Abstract
This study analyzes the linguistic patterns and rhetorical strategies employed in the 2024 U.S. presidential debates from the exchanges between Donald Trump, Joe Biden, and Kamala Harris. This paper examines debate transcripts to find underlying themes and communication styles using advanced Natural Language Processing (NLP) techniques, including n-gram analysis, sentiment analysis, and lexical diversity measurements. The methodology combines quantitative text analysis with qualitative interpretation through the Jaccard similarity coefficient, the Type–Token Ratio, and the Measure of Textual Lexical Diversity. The empirical results reveal distinct linguistic profiles for each candidate: Trump consistently employed emotionally charged language with high sentiment volatility, while Biden and Harris demonstrated more measured approaches with higher lexical diversity. Finally, this research contributes to the understanding of political discourse in high-stakes debates through NLP and can offer insight into the evolution of the communication strategies of presidential candidates in any country with a similar political system. Full article
(This article belongs to the Special Issue 2nd Edition of Information Retrieval and Social Media Mining)
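The Type–Token Ratio mentioned above is straightforward to compute; the sketch below also adds a windowed variant as a crude length-robust alternative (the study's Measure of Textual Lexical Diversity is a more involved algorithm and is not reproduced here). The sample text is a placeholder, not a debate transcript.

```python
import re
import numpy as np

def tokens(text: str) -> list[str]:
    return re.findall(r"[a-z']+", text.lower())

def type_token_ratio(text: str) -> float:
    """TTR: unique word types divided by total tokens."""
    toks = tokens(text)
    return len(set(toks)) / len(toks)

def windowed_ttr(text: str, window: int = 50) -> float:
    """Mean TTR over fixed-size windows, a crude length-robust variant of TTR."""
    toks = tokens(text)
    spans = [toks[i:i + window] for i in range(0, len(toks) - window + 1, window)]
    return float(np.mean([len(set(s)) / len(s) for s in spans])) if spans else type_token_ratio(text)

# Placeholder text standing in for a transcript segment
print(type_token_ratio("the quick brown fox jumps over the lazy dog the quick brown fox"))
```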

28 pages, 3496 KiB  
Article
The Diversity of Macrofungi in the Forests of Ningxia, Western China
by Xiaojuan Deng, Minqi Li, Yucheng Dai, Xuetai Zhu, Xingfu Yan, Zhaojun Wei and Yuan Yuan
Diversity 2024, 16(12), 725; https://doi.org/10.3390/d16120725 - 26 Nov 2024
Viewed by 1131
Abstract
The diversity of macrofungi has been closely associated with forest diversity and stability. However, such a correlation has not been established for the forests of the Ningxia Autonomous Region due to the lack of systematic data on its macrofungal diversity. Therefore, for the present study, we collected 3130 macrofungal specimens from the forests of the Helan Mts., Luo Mts., and Liupan Mts. in Ningxia and assessed them using morphological and molecular approaches. We identified 468 species belonging to 157 genera, 72 families, 18 orders, 11 classes, and 2 phyla. Among them, 31 species were ascomycetes, and 437 species were basidiomycetes. Tricholomataceae, with 96 species of 22 genera, was the most species-rich family, and Inocybe was the most species-rich genus (6.2%). The Jaccard similarity index measurement revealed the highest similarity in macrofungal species (16.15%) between the Helan and Liupan Mountains and the lowest (7.72%) between the Luo and Liupan Mountains. Further analyses of the macrofungal population of Ningxia showed that 206 species possess considerable potential for utilization, including 172 edible, 70 medicinal, and 36 edible–medicinal ones. Meanwhile, 54 species were identified as being poisonous. In these forests, saprophytic fungi were the most abundant, with 318 species (67.95%), followed by symbiotic fungi (31.62%) and parasitic fungi (0.04%). Grouping based on the geographical distribution indicated that the fungi of Ningxia are composed mainly of the cosmopolitan and north temperate types. These observations unveil the diversity and community structure of macrofungi in Ningxia forests. Full article

16 pages, 2605 KiB  
Article
Applying a Deep Learning Model for Total Kidney Volume Measurement in Autosomal Dominant Polycystic Kidney Disease
by Jia-Lien Hsu, Anandakumar Singaravelan, Chih-Yun Lai, Zhi-Lin Li, Chia-Nan Lin, Wen-Shuo Wu, Tze-Wah Kao and Pei-Lun Chu
Bioengineering 2024, 11(10), 963; https://doi.org/10.3390/bioengineering11100963 - 26 Sep 2024
Cited by 4 | Viewed by 1903
Abstract
Background: Autosomal dominant polycystic kidney disease (ADPKD) is the most common hereditary renal disease leading to end-stage renal disease. Total kidney volume (TKV) measurement has been considered a surrogate in the evaluation of disease severity and a prognostic predictor of ADPKD. However, the traditional manual measurement of TKV by medical professionals is labor-intensive, time-consuming, and prone to human error. Materials and methods: In this investigation, we conducted TKV measurements utilizing magnetic resonance imaging (MRI) data. The dataset consisted of 30 patients with ADPKD and 10 healthy individuals. To calculate TKV, we trained models using both coronal- and axial-section MRI images. The process involved extracting images in Digital Imaging and Communications in Medicine (DICOM) format, followed by augmentation and labeling. We employed a U-net model for image segmentation, generating mask images of the target areas. Subsequent post-processing steps and TKV estimation were performed based on the outputs obtained from these mask images. Results: The average TKV, as assessed by medical professionals from the testing dataset, was 1501.84 ± 965.85 mL with axial-section images and 1740.31 ± 1172.21 mL with coronal-section images, respectively (p = 0.73). Utilizing the deep learning model, the mean TKV derived from axial- and coronal-section images was 1536.33 ± 958.68 mL and 1636.25 ± 964.67 mL, respectively (p = 0.85). The discrepancy in mean TKV between medical professionals and the deep learning model was 44.23 ± 58.69 mL with axial-section images (p = 0.8) and 329.12 ± 352.56 mL with coronal-section images (p = 0.9), respectively. The average variability in TKV measurement was 21.6% with the coronal-section model and 3.95% with the axial-section model. The axial-section model demonstrated a mean Dice Similarity Coefficient (DSC) of 0.89 ± 0.27 and an average patient-wise Jaccard coefficient of 0.86 ± 0.27, while the mean DSC and Jaccard coefficient of the coronal-section model were 0.82 ± 0.29 and 0.77 ± 0.31, respectively. Conclusion: The integration of deep learning into image processing and interpretation is becoming increasingly prevalent in clinical practice. In our pilot study, we conducted a comparative analysis of the performance of a deep learning model alongside corresponding axial- and coronal-section models, a comparison that has been less explored in prior research. Our findings suggest that our deep learning model for TKV measurement performs comparably to medical professionals. However, we observed that varying image orientations could introduce measurement bias. Specifically, our AI model exhibited superior performance with axial-section images compared to coronal-section images. Full article
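Once a segmentation mask stack is available, converting it to a volume is a simple voxel count; the sketch below shows the usual calculation, with pixel spacing and slice thickness passed in as parameters (in practice they would be read from the DICOM header). The default spacing values are hypothetical.

```python
import numpy as np

def total_kidney_volume_ml(masks: np.ndarray,
                           pixel_spacing_mm=(1.0, 1.0),
                           slice_thickness_mm=4.0) -> float:
    """Total kidney volume from a (slices, rows, cols) stack of binary masks.

    Each segmented voxel contributes row_spacing * col_spacing * slice_thickness mm^3,
    and 1 mL = 1000 mm^3.
    """
    voxel_mm3 = pixel_spacing_mm[0] * pixel_spacing_mm[1] * slice_thickness_mm
    return float(masks.astype(bool).sum() * voxel_mm3 / 1000.0)
```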

10 pages, 1804 KiB  
Article
The Development of a Yolov8-Based Model for the Measurement of Critical Shoulder Angle (CSA), Lateral Acromion Angle (LAA), and Acromion Index (AI) from Shoulder X-ray Images
by Turab Selçuk
Diagnostics 2024, 14(18), 2092; https://doi.org/10.3390/diagnostics14182092 - 22 Sep 2024
Cited by 1 | Viewed by 1156
Abstract
Background: The accurate and effective evaluation of parameters such as critical shoulder angle, lateral acromion angle, and acromion index from shoulder X-ray images is crucial for identifying pathological changes and assessing disease risk in the shoulder joint. Methods: In this study, a YOLOv8-based model was developed to automatically measure these three parameters together, contributing to the existing literature. Initially, YOLOv8 was used to segment the acromion, glenoid, and humerus regions, after which the CSA, LAA angles, and AI between these regions were calculated. The MURA dataset was employed in this study. Results: Segmentation performance was evaluated with the Dice and Jaccard similarity indices, both exceeding 0.9. Statistical analyses of the measurement performance, including Pearson correlation coefficient, RMSE, and ICC values demonstrated that the proposed model exhibits high consistency and similarity with manual measurements. Conclusions: The results indicate that automatic measurement methods align with manual measurements with high accuracy and offer an effective alternative for clinical applications. This study provides valuable insights for the early diagnosis and management of shoulder diseases and makes a significant contribution to existing measurement methods. Full article
(This article belongs to the Special Issue Recent Advances in Bone and Joint Imaging—2nd Edition)
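The critical shoulder angle is conventionally measured at the inferior glenoid margin, between the line to the superior glenoid margin and the line to the most lateral point of the acromion. The sketch below computes such an angle from three landmark coordinates; extracting those landmarks from the YOLOv8 segmentations is the part the paper automates and is not shown, and the pixel coordinates are hypothetical.

```python
import numpy as np

def angle_deg(vertex, p1, p2) -> float:
    """Angle in degrees at `vertex` between rays vertex->p1 and vertex->p2."""
    v1 = np.asarray(p1, float) - np.asarray(vertex, float)
    v2 = np.asarray(p2, float) - np.asarray(vertex, float)
    cos = np.dot(v1, v2) / (np.linalg.norm(v1) * np.linalg.norm(v2))
    return float(np.degrees(np.arccos(np.clip(cos, -1.0, 1.0))))

# Hypothetical landmark pixel coordinates derived from the segmented regions
inferior_glenoid = (210, 340)
superior_glenoid = (205, 250)
lateral_acromion = (300, 255)
csa = angle_deg(inferior_glenoid, superior_glenoid, lateral_acromion)
```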

21 pages, 8578 KiB  
Article
Noise Resilience in Dermoscopic Image Segmentation: Comparing Deep Learning Architectures for Enhanced Accuracy
by Fatih Ergin, Ismail Burak Parlak, Mouloud Adel, Ömer Melih Gül and Kostas Karpouzis
Electronics 2024, 13(17), 3414; https://doi.org/10.3390/electronics13173414 - 28 Aug 2024
Cited by 2 | Viewed by 1537
Abstract
Skin diseases and lesions can be ambiguous to recognize due to the similarity of lesions and enhanced imaging features. In this study, we compared three cutting-edge deep learning frameworks for dermoscopic segmentation: U-Net, SegAN, and MultiResUNet. We used a dermoscopic dataset including detailed lesion annotations with segmentation masks to help train and evaluate models on the precise localization of melanomas. SegAN is a special type of Generative Adversarial Network (GAN) that introduces a new architecture by adding generator and discriminator steps. U-Net has become a common strategy in segmentation to encode and decode image features for limited data. MultiResUNet is a U-Net-based architecture that overcomes the insufficient data problem in medical imaging by extracting contextual details. We trained the three frameworks on colored images after preprocessing. We added incremental Gaussian noise to measure the robustness of segmentation performance. We evaluated the frameworks using the following parameters: accuracy, sensitivity, specificity, Dice and Jaccard coefficients. Our accuracy results show that SegAN (92%) and MultiResUNet (92%) both outperform U-Net (86%), which is a well-known segmentation framework for skin lesion analysis. MultiResUNet sensitivity (96%) outperforms the methods in the challenge leaderboard. These results suggest that SegAN and MultiResUNet are more resistant techniques against noise in dermoscopic segmentation. Full article
(This article belongs to the Section Computer Science & Engineering)
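The incremental Gaussian noise perturbation described above can be sketched as a simple sweep over increasing standard deviations, re-evaluating segmentation quality at each level; the sigma schedule and stand-in image below are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(42)

def add_gaussian_noise(image: np.ndarray, sigma: float) -> np.ndarray:
    """Add zero-mean Gaussian noise to an image scaled to [0, 1] and clip back to range."""
    noisy = image + rng.normal(0.0, sigma, size=image.shape)
    return np.clip(noisy, 0.0, 1.0)

test_image = rng.random((256, 256, 3))   # stand-in for a preprocessed dermoscopic image
for sigma in (0.01, 0.05, 0.10, 0.20):   # hypothetical incremental noise levels
    noisy = add_gaussian_noise(test_image, sigma)
    # ... run the trained model on `noisy` and record Dice / Jaccard at this noise level
```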

15 pages, 3474 KiB  
Article
Comparison of Six Measures of Genetic Similarity of Interspecific Brassicaceae Hybrids F2 Generation and Their Parental Forms Estimated on the Basis of ISSR Markers
by Jan Bocianowski, Janetta Niemann, Anna Jagieniak and Justyna Szwarc
Genes 2024, 15(9), 1114; https://doi.org/10.3390/genes15091114 - 23 Aug 2024
Cited by 3 | Viewed by 1733
Abstract
Genetic similarity determines the extent to which two genotypes share common genetic material. It can be measured in various ways, such as by comparing DNA sequences, proteins, or other genetic markers. The significance of genetic similarity is multifaceted and encompasses various fields, including evolutionary biology, medicine, forensic science, animal and plant breeding, and anthropology. Genetic similarity is an important concept with wide application across different scientific disciplines. The research material included 21 rapeseed genotypes (ten interspecific Brassicaceae hybrids of F2 generation and 11 of their parental forms) and 146 alleles obtained using 21 ISSR molecular markers. In the presented study, six measures for calculating genetic similarity were compared: Euclidean, Jaccard, Kulczyński, Sokal and Michener, Nei, and Rogers. Genetic similarity values were estimated between all pairs of examined genotypes using the six measures proposed above. For each genetic similarity measure, the average, minimum, maximum values, and coefficient of variation were calculated. Correlation coefficients between the genetic similarity values obtained from each measure were determined. The obtained genetic similarity coefficients were used for the hierarchical clustering of objects using the unweighted pair group method with an arithmetic mean. A multiple regression model was written for each method, where the independent variables were the remaining methods. For each model, the coefficient of multiple determination was calculated. Genetic similarity values ranged from 0.486 to 0.993 (for the Euclidean method), from 0.157 to 0.986 (for the Jaccard method), from 0.275 to 0.993 (for the Kulczyński method), from 0.272 to 0.993 (for the Nei method), from 0.801 to 1.000 (for the Rogers method) and from 0.486 to 0.993 (for the Sokal and Michener method). The results indicate that the research material was divided into two identical groups using any of the proposed methods despite differences in the values of genetic similarity coefficients. Two of the presented measures of genetic similarity (the Sokal and Michener method and the Euclidean method) were the same. Full article
(This article belongs to the Section Plant Genetics and Genomics)
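The clustering step described above, UPGMA on a genetic similarity matrix, can be sketched with SciPy by converting similarities to dissimilarities and using average linkage; the similarity values below are hypothetical placeholders, not the study's data.

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, dendrogram
from scipy.spatial.distance import squareform

# Hypothetical symmetric Jaccard similarity matrix for four genotypes
similarity = np.array([[1.00, 0.85, 0.40, 0.42],
                       [0.85, 1.00, 0.38, 0.45],
                       [0.40, 0.38, 1.00, 0.90],
                       [0.42, 0.45, 0.90, 1.00]])

distance = 1.0 - similarity                     # convert similarity to dissimilarity
np.fill_diagonal(distance, 0.0)
condensed = squareform(distance, checks=False)  # condensed form required by linkage
tree = linkage(condensed, method="average")     # "average" linkage is UPGMA
dendro = dendrogram(tree, labels=["G1", "G2", "G3", "G4"], no_plot=True)
```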

23 pages, 16012 KiB  
Article
Investigation of Flood Hazard Susceptibility Using Various Distance Measures in Technique for Order Preference by Similarity to Ideal Solution
by Hüseyin Akay and Müsteyde Baduna Koçyiğit
Appl. Sci. 2024, 14(16), 7023; https://doi.org/10.3390/app14167023 - 10 Aug 2024
Cited by 4 | Viewed by 1966
Abstract
In the present study, flood hazard susceptibility maps generated using various distance measures in the Technique for Order Preference by Similarity to an Ideal Solution (TOPSIS) were analyzed. Widely applied distance measures such as Euclidean, Manhattan, Chebyshev, Jaccard, and Soergel were used in TOPSIS to generate flood hazard susceptibility maps of the Gökırmak sub-basin located in the Western Black Sea Region, Türkiye. A frequency ratio (FR) and weight of evidence (WoE) were adapted to hybridize the nine flood conditioning factors considered in this study. The Receiver Operating Characteristic (ROC) analysis and Seed Cell Area Index (SCAI) were used for the validation and testing of the generated flood susceptibility maps by extracting 70% and 30% of the inventory data of the generated flood susceptibility map for validation and testing, respectively. When the Area Under Curve (AUC) and SCAI values were examined, it was found that the Manhattan distance metric hybridized with the FR method gave the best prediction results with AUC values of 0.904 and 0.942 for training and testing, respectively. Furthermore, the natural break method was found to give the best predictions of the flood hazard susceptibility classes. So, the Manhattan distance measure could be preferred to Euclidean for flood susceptibility mapping studies. Full article
(This article belongs to the Special Issue Emerging Approaches in Hydrology and Water Resources)
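A minimal TOPSIS sketch with a selectable distance measure is shown below, covering three of the five distances compared in the study (Euclidean, Manhattan, Chebyshev); the decision matrix, weights, and benefit flags are hypothetical, and the FR/WoE hybridization of conditioning factors is not reproduced.

```python
import numpy as np

def topsis(matrix, weights, benefit, distance="euclidean"):
    """Rank alternatives with TOPSIS using a selectable distance measure.

    matrix : (alternatives x criteria) decision matrix
    weights: criterion weights summing to 1
    benefit: boolean array, True where larger criterion values are better
    """
    dist_funcs = {
        "euclidean": lambda a, b: np.sqrt(((a - b) ** 2).sum(axis=1)),
        "manhattan": lambda a, b: np.abs(a - b).sum(axis=1),
        "chebyshev": lambda a, b: np.abs(a - b).max(axis=1),
    }
    dist = dist_funcs[distance]
    norm = matrix / np.sqrt((matrix ** 2).sum(axis=0))   # vector normalization
    weighted = norm * weights
    ideal = np.where(benefit, weighted.max(axis=0), weighted.min(axis=0))
    anti = np.where(benefit, weighted.min(axis=0), weighted.max(axis=0))
    d_best = dist(weighted, ideal)
    d_worst = dist(weighted, anti)
    return d_worst / (d_best + d_worst)   # closeness coefficient per alternative

# Hypothetical pixel-by-conditioning-factor scores
scores = np.array([[0.2, 0.7, 0.1], [0.9, 0.4, 0.6], [0.5, 0.5, 0.5]])
print(topsis(scores, np.array([0.5, 0.3, 0.2]), np.array([True, True, True]), "manhattan"))
```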