Search Results (76)

Search Parameters:
Keywords = LBP-SVM

29 pages, 10358 KiB  
Article
Smartphone-Based Sensing System for Identifying Artificially Marbled Beef Using Texture and Color Analysis to Enhance Food Safety
by Hong-Dar Lin, Yi-Ting Hsieh and Chou-Hsien Lin
Sensors 2025, 25(14), 4440; https://doi.org/10.3390/s25144440 - 16 Jul 2025
Viewed by 277
Abstract
Beef fat injection technology, used to enhance the perceived quality of lower-grade meat, often results in artificially marbled beef that mimics the visual traits of Wagyu, characterized by dense fat distribution. This practice, driven by the high cost of Wagyu and the affordability of fat-injected beef, has led to the proliferation of mislabeled “Wagyu-grade” products sold at premium prices, posing potential food safety risks such as allergen exposure or consumption of unverified additives, which can adversely affect consumer health. To address this, this study introduces a smart sensing system integrated with handheld mobile devices, enabling consumers to capture beef images during purchase for real-time health-focused assessment. The system analyzes surface texture and color, transmitting data to a server for classification to determine whether the beef is artificially marbled, thus supporting informed dietary choices and reducing health risks. Images are processed by applying a region of interest (ROI) mask to remove background noise, followed by partitioning into grid blocks. Local binary pattern (LBP) texture features and RGB color features are extracted from these blocks to characterize the surface properties of three beef types (Wagyu, regular, and fat-injected). A support vector machine (SVM) model classifies the blocks, with the final image classification determined via majority voting. Experimental results show that the system achieves a recall rate of 95.00% for fat-injected beef, a misjudgment rate of 1.67% for non-fat-injected beef, a correct classification rate (CR) of 93.89%, and an F1-score of 95.80%, demonstrating its potential as a human-centered healthcare tool for ensuring food safety and transparency.
(This article belongs to the Section Physical Sensors)
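
A minimal sketch of the block-wise pipeline this abstract describes: uniform-LBP histograms plus mean RGB color per grid block, an SVM trained on blocks, and a majority vote per image. The grid size, LBP parameters, RBF kernel, and the random stand-in images are assumptions, not the authors' settings.

```python
import numpy as np
from skimage.feature import local_binary_pattern
from sklearn.svm import SVC

def block_features(rgb, grid=4, P=8, R=1):
    """Split an RGB image into grid x grid blocks; per block, return a
    uniform-LBP histogram concatenated with mean RGB values."""
    h, w, _ = rgb.shape
    gray = rgb.mean(axis=2)
    lbp = local_binary_pattern(gray, P, R, method="uniform")
    feats = []
    bh, bw = h // grid, w // grid
    for i in range(grid):
        for j in range(grid):
            sl = (slice(i * bh, (i + 1) * bh), slice(j * bw, (j + 1) * bw))
            hist, _ = np.histogram(lbp[sl], bins=P + 2, range=(0, P + 2), density=True)
            color = rgb[sl].reshape(-1, 3).mean(axis=0) / 255.0
            feats.append(np.concatenate([hist, color]))
    return np.array(feats)                        # (grid*grid, P+2+3)

rng = np.random.default_rng(0)
images = rng.integers(0, 256, size=(30, 64, 64, 3)).astype(float)
labels = rng.integers(0, 3, size=30)              # 0=Wagyu, 1=regular, 2=fat-injected

X = np.vstack([block_features(im) for im in images])
y = np.repeat(labels, 16)                         # every block inherits its image label
clf = SVC(kernel="rbf").fit(X, y)

# Image-level decision: majority vote over the 16 block predictions.
block_pred = clf.predict(block_features(images[0]))
image_pred = np.bincount(block_pred).argmax()
```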

16 pages, 3953 KiB  
Article
Skin Lesion Classification Using Hybrid Feature Extraction Based on Classical and Deep Learning Methods
by Maryem Zahid, Mohammed Rziza and Rachid Alaoui
BioMedInformatics 2025, 5(3), 41; https://doi.org/10.3390/biomedinformatics5030041 - 16 Jul 2025
Viewed by 377
Abstract
This paper proposes a hybrid method for skin lesion classification that combines deep learning features with conventional descriptors such as HOG, Gabor, SIFT, and LBP. Features of interest were extracted within the tumor area using the suggested fusion methods. We tested and compared features obtained from different deep learning models coupled with HOG-based features. Dimensionality reduction and performance improvement were achieved by Principal Component Analysis, after which SVM was used for classification. The compared methods were tested on the reference database skin cancer-malignant-vs-benign. The results show a significant improvement in accuracy due to the complementarity between the conventional and deep learning-based methods. Specifically, the addition of HOG descriptors led to an accuracy increase of 5% for EfficientNetB0, 7% for ResNet50, 5% for ResNet101, 1% for NASNetMobile, 1% for DenseNet201, and 1% for MobileNetV2. These findings confirm that feature fusion significantly enhances performance compared to the individual application of each method.
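
A hedged sketch of the fusion idea above: CNN features concatenated with HOG descriptors, PCA for dimensionality reduction, then an SVM. A randomly initialized ResNet50 stands in for the pretrained backbones (pretrained weights would be used in practice); image size and PCA dimensionality are illustrative.

```python
import numpy as np
import torch
import torchvision.models as models
from skimage.feature import hog
from sklearn.decomposition import PCA
from sklearn.svm import SVC

backbone = models.resnet50(weights=None)            # use pretrained weights in practice
backbone.fc = torch.nn.Identity()                   # expose 2048-d pooled features
backbone.eval()

def fused_features(img):                            # img: (224, 224, 3) float in [0, 1]
    with torch.no_grad():
        t = torch.from_numpy(img).permute(2, 0, 1).unsqueeze(0).float()
        deep = backbone(t).squeeze(0).numpy()       # (2048,) deep features
    h = hog(img.mean(axis=2), orientations=9,
            pixels_per_cell=(16, 16), cells_per_block=(2, 2))
    return np.concatenate([deep, h])                # deep + HOG fusion

rng = np.random.default_rng(1)
imgs = rng.random((20, 224, 224, 3))
y = rng.integers(0, 2, 20)                          # benign vs malignant

X = np.array([fused_features(im) for im in imgs])
pca = PCA(n_components=10).fit(X)                   # keep leading components
clf = SVC().fit(pca.transform(X), y)
```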

11 pages, 779 KiB  
Proceeding Paper
A Novel Approach for Classifying Gliomas from Magnetic Resonance Images Using Image Decomposition and Texture Analysis
by Kunda Suresh Babu, Benjmin Jashva Munigeti, Krishna Santosh Naidana and Sesikala Bapatla
Eng. Proc. 2025, 87(1), 70; https://doi.org/10.3390/engproc2025087070 - 30 May 2025
Viewed by 310
Abstract
Accurate glioma categorization using magnetic resonance (MR) imaging is critical for optimal treatment planning. However, the uneven and diffuse nature of glioma borders makes manual classification difficult and time-consuming. To address these limitations, we provide a unique strategy that combines image decomposition and local texture feature extraction to improve classification precision. The procedure starts with a Gaussian filter (GF) to smooth and reduce noise in MR images, followed by non-subsampled Laplacian Pyramid (NSLP) decomposition to capture multi-scale image information and make glioma borders more visible, and TV-L1 normalization to handle intensity discrepancies. Local binary patterns (LBPs) then extract significant texture features from the processed images, which are fed into a range of supervised machine learning classifiers, such as support vector machines (SVMs), K-nearest neighbors (KNNs), decision trees (DTs), AdaBoost, and LogitBoost, trained to distinguish between low-grade (LG) and high-grade (HG) gliomas. According to experimental findings, our proposed approach consistently performs better than state-of-the-art glioma classification techniques, with a higher degree of accuracy in differentiating LG and HG gliomas. This method has the potential to significantly increase diagnostic precision, enabling doctors to make better-informed and efficient treatment choices.
(This article belongs to the Proceedings of The 5th International Electronic Conference on Applied Sciences)
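
A rough sketch of the decomposition-plus-texture chain, with a standard subsampled Laplacian pyramid standing in for the paper's non-subsampled NSLP; the Gaussian kernel, pyramid depth, and LBP settings are assumptions.

```python
import numpy as np
import cv2
from skimage.feature import local_binary_pattern
from sklearn.svm import SVC

def laplacian_levels(img, levels=3):
    """Classic Laplacian pyramid; the paper's NSLP avoids subsampling."""
    out, cur = [], img
    for _ in range(levels):
        down = cv2.pyrDown(cur)
        up = cv2.pyrUp(down, dstsize=(cur.shape[1], cur.shape[0]))
        out.append(cur - up)                         # band-pass detail level
        cur = down
    return out

def texture_vector(img):
    img = cv2.GaussianBlur(img, (5, 5), 1.0)         # denoise first
    feats = []
    for level in laplacian_levels(img):
        lbp = local_binary_pattern(level, 8, 1, method="uniform")
        hist, _ = np.histogram(lbp, bins=10, range=(0, 10), density=True)
        feats.append(hist)
    return np.concatenate(feats)                     # per-level LBP histograms

rng = np.random.default_rng(2)
slices = rng.random((16, 128, 128)).astype(np.float32)
grade = rng.integers(0, 2, 16)                       # 0 = LG, 1 = HG
X = np.array([texture_vector(s) for s in slices])
clf = SVC().fit(X, grade)
```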

19 pages, 3037 KiB  
Article
Histopathological Image Analysis Using Machine Learning to Evaluate Cisplatin and Exosome Effects on Ovarian Tissue in Cancer Patients
by Tuğba Şentürk, Fatma Latifoğlu, Çiğdem Gülüzar Altıntop, Arzu Yay, Zeynep Burçin Gönen, Gözde Özge Önder, Özge Cengiz Mat and Yusuf Özkul
Appl. Sci. 2025, 15(4), 1984; https://doi.org/10.3390/app15041984 - 14 Feb 2025
Viewed by 874
Abstract
Cisplatin, a widely used chemotherapeutic agent, is highly effective in treating various cancers, including ovarian and lung cancers, but it often causes ovarian tissue damage and impairs reproductive health. Exosomes derived from mesenchymal stem cells are believed to possess reparative effects on such damage, as suggested by previous studies. This study aims to evaluate the reparative effects of cisplatin and exosome treatments on ovarian tissue damage through the analysis of histopathological images and machine learning (ML)-based classification techniques. Five experimental groups were examined: Control, cisplatin-treated (Cis), exosome-treated (Exo), exosome-before-cisplatin (ExoCis), and cisplatin-before-exosome (CisExo). A set of 177 Local Binary Pattern (LBP) features were extracted from histopathological images, followed by feature selection using Lasso regression. Classification was performed using ML algorithms, including decision tree (DT), k-nearest neighbors (KNN), support vector machine (SVM), and Artificial Neural Network (ANN). The CisExo group exhibited the most homogeneous texture, suggesting effective tissue recovery, whereas the ExoCis group demonstrated greater heterogeneity, possibly indicating incomplete recovery. KNN and ANN classifiers achieved the highest accuracy, particularly in comparisons between the Control and CisExo groups, reaching an accuracy of 87%. The highest classification accuracy was observed for the Control vs. Cis groups (approximately 91%), reflecting distinct features, whereas the Control vs. Exo groups demonstrated lower accuracy (around 68%) due to feature similarity. Exosome treatments, particularly when administered post-cisplatin, significantly improve ovarian tissue recovery. This study highlights the potential of ML-based classification as a robust tool for evaluating therapeutic outcomes. Additionally, it underscores the promise of exosome therapy in mitigating chemotherapy-induced ovarian damage and preserving reproductive health. Further research is warranted to validate these findings and optimize treatment protocols.
(This article belongs to the Section Biomedical Engineering)
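
A minimal sketch of LBP features filtered by Lasso-based selection before classification, mirroring the pipeline above; the 177-feature count matches the text, while the Lasso alpha and the KNN settings are assumed.

```python
import numpy as np
from sklearn.linear_model import Lasso
from sklearn.feature_selection import SelectFromModel
from sklearn.neighbors import KNeighborsClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(3)
X = rng.random((100, 177))                  # 177 LBP features per image, as in the study
y = rng.integers(0, 5, 100)                 # five treatment groups

# Lasso shrinks uninformative feature weights to zero; SelectFromModel keeps the rest.
selector = SelectFromModel(Lasso(alpha=0.01))
model = make_pipeline(StandardScaler(), selector, KNeighborsClassifier(n_neighbors=5))
model.fit(X, y)
print("features kept:", selector.get_support().sum())
```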

9 pages, 3908 KiB  
Proceeding Paper
Automated Glaucoma Detection in Fundus Images Using Comprehensive Feature Extraction and Advanced Classification Techniques
by Vijaya Kumar Velpula, Jyothisri Vadlamudi, Purna Prakash Kasaraneni and Yellapragada Venkata Pavan Kumar
Eng. Proc. 2024, 82(1), 33; https://doi.org/10.3390/ecsa-11-20437 - 25 Nov 2024
Cited by 2 | Viewed by 731
Abstract
Glaucoma, a primary cause of irreversible blindness, necessitates early detection to prevent significant vision loss. In the literature, fundus imaging is identified as a key tool in diagnosing glaucoma, which captures detailed retina images. However, the manual analysis of these images can be time-consuming and subjective. Thus, this paper presents an automated system for glaucoma detection using fundus images, combining diverse feature extraction methods with advanced classifiers, specifically Support Vector Machine (SVM) and AdaBoost. The pre-processing step incorporated image enhancement via Contrast-Limited Adaptive Histogram Equalization (CLAHE) to enhance image quality and feature extraction. This work investigated individual features such as the histogram of oriented gradients (HOG), local binary patterns (LBP), chip histogram features, and the gray-level co-occurrence matrix (GLCM), as well as their various combinations, including HOG + LBP + chip histogram + GLCM, HOG + LBP + chip histogram, and others. These features were utilized with SVM and AdaBoost classifiers to improve classification performance. For validation, the ACRIMA dataset, a public fundus image collection comprising 369 glaucoma-affected and 309 normal images, was used in this work, with 80% of the data allocated for training and 20% for testing. The results of the proposed study show that different feature sets yielded varying accuracies with the SVM and AdaBoost classifiers. For instance, the combination of LBP + chip histogram achieved the highest accuracy of 99.29% with AdaBoost, while the same combination yielded a 65.25% accuracy with SVM. The individual feature LBP alone achieved 97.87% with AdaBoost and 98.58% with SVM. Furthermore, the combination of GLCM + LBP provided a 98.58% accuracy with AdaBoost and 97.87% with SVM. The results demonstrate that CLAHE and combined feature sets significantly enhance detection accuracy, providing a reliable tool for early and precise glaucoma diagnosis, thus facilitating timely intervention and improved patient outcomes.
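
An illustrative sketch of the preprocessing and classifier comparison: CLAHE enhancement, LBP and GLCM features, and SVM versus AdaBoost on an 80/20 split. Feature parameters and the synthetic images are assumptions.

```python
import numpy as np
import cv2
from skimage.feature import local_binary_pattern, graycomatrix, graycoprops
from sklearn.ensemble import AdaBoostClassifier
from sklearn.svm import SVC
from sklearn.model_selection import train_test_split

clahe = cv2.createCLAHE(clipLimit=2.0, tileGridSize=(8, 8))

def features(gray_u8):
    eq = clahe.apply(gray_u8)                         # contrast-limited equalization
    lbp = local_binary_pattern(eq, 8, 1, method="uniform")
    hist, _ = np.histogram(lbp, bins=10, range=(0, 10), density=True)
    glcm = graycomatrix(eq, distances=[1], angles=[0], levels=256, normed=True)
    props = [graycoprops(glcm, p)[0, 0] for p in ("contrast", "homogeneity", "energy")]
    return np.concatenate([hist, props])              # LBP + GLCM combination

rng = np.random.default_rng(4)
imgs = rng.integers(0, 256, (60, 64, 64), dtype=np.uint8)
y = rng.integers(0, 2, 60)                            # glaucoma vs normal
X = np.array([features(im) for im in imgs])
Xtr, Xte, ytr, yte = train_test_split(X, y, test_size=0.2, random_state=0)

for clf in (SVC(), AdaBoostClassifier()):
    print(type(clf).__name__, clf.fit(Xtr, ytr).score(Xte, yte))
```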

14 pages, 2039 KiB  
Article
Deep Learning Based Breast Cancer Detection Using Decision Fusion
by Doğu Manalı, Hasan Demirel and Alaa Eleyan
Computers 2024, 13(11), 294; https://doi.org/10.3390/computers13110294 - 14 Nov 2024
Cited by 4 | Viewed by 3059
Abstract
Breast cancer, which has the highest mortality and morbidity rates among diseases affecting women, poses a significant threat to their lives and health. Early diagnosis is crucial for effective treatment. Recent advancements in artificial intelligence have enabled innovative techniques for early breast cancer detection. Convolutional neural networks (CNNs) and support vector machines (SVMs) have been used in computer-aided diagnosis (CAD) systems to identify breast tumors from mammograms. However, existing methods often face challenges in accuracy and reliability across diverse diagnostic scenarios. This paper proposes an artificial intelligence-based system with three parallel channels. First, an SVM distinguishes between different tumor types using local binary pattern (LBP) features. Second, a pre-trained CNN extracts features, and an SVM identifies potential tumors. Third, a newly developed CNN is trained and used to classify mammogram images. Finally, decision fusion combining the results of the three channels is implemented using different rules to enhance system performance. The proposed decision fusion-based system outperforms state-of-the-art alternatives with an overall accuracy of 99.1% using the product rule.
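
A small sketch of product-rule decision fusion across three channels; the posterior values below are made up, and each channel's model is a placeholder for the paper's designs.

```python
import numpy as np

def product_rule(prob_per_channel):
    """Fuse class posteriors from several classifiers by elementwise product."""
    fused = np.prod(np.stack(prob_per_channel), axis=0)
    return fused / fused.sum(axis=-1, keepdims=True)

# Posteriors for one mammogram from the three channels (benign, malignant):
p_lbp_svm = np.array([0.40, 0.60])      # channel 1: LBP features + SVM
p_cnn_svm = np.array([0.30, 0.70])      # channel 2: pretrained-CNN features + SVM
p_cnn     = np.array([0.45, 0.55])      # channel 3: end-to-end CNN softmax
print(product_rule([p_lbp_svm, p_cnn_svm, p_cnn]))   # fused posterior
```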

18 pages, 6161 KiB  
Article
A Novel Hybrid Model for Automatic Non-Small Cell Lung Cancer Classification Using Histopathological Images
by Oguzhan Katar, Ozal Yildirim, Ru-San Tan and U Rajendra Acharya
Diagnostics 2024, 14(22), 2497; https://doi.org/10.3390/diagnostics14222497 - 8 Nov 2024
Cited by 2 | Viewed by 2201
Abstract
Background/Objectives: Despite recent advances in research, cancer remains a significant public health concern and a leading cause of death. Among all cancer types, lung cancer is the most common cause of cancer-related deaths, with most cases linked to non-small cell lung cancer (NSCLC). Accurate classification of NSCLC subtypes is essential for developing treatment strategies. Medical professionals regard tissue biopsy as the gold standard for the identification of lung cancer subtypes. However, since biopsy images have very high resolutions, manual examination is time-consuming and depends on the pathologist’s expertise. Methods: In this study, we propose a hybrid model to assist pathologists in the classification of NSCLC subtypes from histopathological images. This model processes deep, textural and contextual features obtained by using EfficientNet-B0, local binary pattern (LBP) and vision transformer (ViT) encoder as feature extractors, respectively. In the proposed method, each feature matrix is flattened separately and then combined to form a comprehensive feature vector. The feature vector is given as input to machine learning classifiers to identify the NSCLC subtype. Results: We set up 13 different training scenarios to test 4 different classifiers: support vector machine (SVM), logistic regression (LR), light gradient boosting machine (LightGBM) and extreme gradient boosting (XGBoost). Among these scenarios, we obtained the highest classification accuracy (99.87%) with the combination of EfficientNet-B0 + LBP + ViT Encoder + SVM. The proposed hybrid model significantly enhanced the classification accuracy of NSCLC subtypes. Conclusions: The integration of deep, textural, and contextual features assisted the model in capturing subtle information from the images, thereby reducing the risk of misdiagnosis and facilitating more effective treatment planning.
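
A minimal sketch of the fusion step: each feature matrix is flattened and concatenated into one vector per image before the SVM. Random arrays stand in for the EfficientNet-B0, LBP, and ViT-encoder outputs, and the dimensions are assumptions.

```python
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(5)
n = 40
deep = rng.random((n, 1280))             # EfficientNet-B0 global features
texture = rng.random((n, 7, 7))          # block-wise LBP statistics
context = rng.random((n, 197, 16))       # ViT encoder tokens (truncated dims)

# Flatten each feature matrix separately, then concatenate per image.
X = np.concatenate(
    [deep.reshape(n, -1), texture.reshape(n, -1), context.reshape(n, -1)], axis=1
)
y = rng.integers(0, 2, n)                # NSCLC subtype label
clf = SVC().fit(X, y)
print("fused vector length:", X.shape[1])
```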

27 pages, 10884 KiB  
Article
Two-Stage Detection and Localization of Inter-Frame Tampering in Surveillance Videos Using Texture and Optical Flow
by Naheed Akhtar, Muhammad Hussain and Zulfiqar Habib
Mathematics 2024, 12(22), 3482; https://doi.org/10.3390/math12223482 - 7 Nov 2024
Viewed by 1208
Abstract
Surveillance cameras provide security and protection through real-time monitoring or through the investigation of recorded videos. The authenticity of surveillance videos cannot be taken for granted, but tampering detection is challenging. Existing techniques face significant limitations, including restricted applicability, poor generalizability, and high computational complexity. This paper presents a robust detection system to meet the challenges of frame duplication (FD) and frame insertion (FI) detection in surveillance videos. The system leverages the alterations in texture patterns and optical flow between consecutive frames and works in two stages; first, suspicious tampered videos are detected using motion residual-based local binary patterns (MR-LBPs) and SVM; second, by eliminating false positives, the precise tampering location is determined using the consistency in the aggregation of optical flow and the variance in MR-LBPs. The system is extensively evaluated on a large COMSATS Structured Video Tampering Evaluation Dataset (CSVTED) comprising challenging videos with varying quality of tampering and complexity levels and cross-validated on benchmark public domain datasets. The system exhibits outstanding performance, achieving 99.5% accuracy in detecting and pinpointing tampered regions. It ensures the generalization and wide applicability of the system while maintaining computational efficiency.
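
A rough sketch of the two cues named above: LBP histograms of the motion residual between consecutive frames, and aggregated Farneback optical flow. The residual definition and the neighbor-consistency check are simplified assumptions.

```python
import numpy as np
import cv2
from skimage.feature import local_binary_pattern

def frame_pair_features(prev, cur):
    residual = cv2.absdiff(cur, prev)                       # motion residual
    lbp = local_binary_pattern(residual, 8, 1, method="uniform")
    mr_lbp, _ = np.histogram(lbp, bins=10, range=(0, 10), density=True)
    flow = cv2.calcOpticalFlowFarneback(prev, cur, None,
                                        0.5, 3, 15, 3, 5, 1.2, 0)
    flow_mag = np.linalg.norm(flow, axis=2).sum()           # aggregated flow
    return mr_lbp, flow_mag

rng = np.random.default_rng(6)
video = rng.integers(0, 256, (10, 120, 160), dtype=np.uint8)

# Stage 1 would feed MR-LBP histograms to an SVM; stage 2 flags frames whose
# aggregated flow breaks the consistency of their neighbors.
hists, flows = zip(*(frame_pair_features(video[i], video[i + 1])
                     for i in range(len(video) - 1)))
print("suspicious transition:", int(np.argmax(np.abs(np.diff(flows)))))
```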

33 pages, 30114 KiB  
Article
Exploring the Influence of Object, Subject, and Context on Aesthetic Evaluation through Computational Aesthetics and Neuroaesthetics
by Fangfu Lin, Wanni Xu, Yan Li and Wu Song
Appl. Sci. 2024, 14(16), 7384; https://doi.org/10.3390/app14167384 - 21 Aug 2024
Cited by 4 | Viewed by 2149
Abstract
Background: In recent years, computational aesthetics and neuroaesthetics have provided novel insights into understanding beauty. Building upon the findings of traditional aesthetics, this study aims to combine these two research methods to explore an interdisciplinary approach to studying aesthetics. Method: Abstract artworks were used as experimental materials. Drawing on traditional aesthetics, features of composition, tone, and texture were selected. Computational aesthetic methods were then employed to map these features to physical quantities: blank space, gray histogram, Gray Level Co-occurrence Matrix (GLCM), Local Binary Pattern (LBP), and Gabor filters. An electroencephalogram (EEG) experiment was carried out, in which participants conducted aesthetic evaluations of the experimental materials in different contexts (genuine, fake), and their EEG data were recorded to analyze the impact of various feature classes in the aesthetic evaluation process. Finally, Support Vector Machines (SVMs) were utilized to model the feature data, Event-Related Potentials (ERPs), context data, and subjective aesthetic evaluation data. Result: Behavioral data revealed higher aesthetic ratings in the genuine context. ERP data indicated that genuine contexts elicited more negative deflections in the prefrontal lobes between 200 and 1000 ms. Class II compositions demonstrated more positive deflections in the parietal lobes at 50–120 ms, while Class I tones evoked more positive amplitudes in the occipital lobes at 200–300 ms. Gabor features showed significant variations in the parieto-occipital area at an early stage. Class II LBP elicited a prefrontal negative wave with a larger amplitude. The results of the SVM models indicated that the model incorporating aesthetic subject and context data (ACC = 0.76866) outperforms the model using only parameters of the aesthetic object (ACC = 0.68657). Conclusion: A positive context tends to provide participants with a more positive aesthetic experience, but abstract artworks may not respond to this positivity. During aesthetic evaluation, the ERP data activated by different features show a trend from global to local. The SVM model based on multimodal data fusion effectively predicts aesthetics, further demonstrating the feasibility of the combined research approach of computational aesthetics and neuroaesthetics.
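
A condensed sketch of the feature side of this setup: the named physical quantities (gray histogram, GLCM contrast, LBP, Gabor energy) computed per artwork, with the context flag appended before the SVM. ERP features are omitted, and all parameters are illustrative.

```python
import numpy as np
from skimage.feature import local_binary_pattern, graycomatrix, graycoprops
from skimage.filters import gabor
from sklearn.svm import SVC

def artwork_features(gray_u8, genuine):
    hist, _ = np.histogram(gray_u8, bins=16, range=(0, 256), density=True)
    glcm = graycomatrix(gray_u8, [1], [0], levels=256, normed=True)
    contrast = graycoprops(glcm, "contrast")[0, 0]
    lbp = local_binary_pattern(gray_u8, 8, 1, "uniform")
    lbp_hist, _ = np.histogram(lbp, bins=10, range=(0, 10), density=True)
    real, _ = gabor(gray_u8, frequency=0.2)            # one Gabor band as texture energy
    return np.concatenate([hist, [contrast], lbp_hist,
                           [np.abs(real).mean(), float(genuine)]])

rng = np.random.default_rng(7)
X = np.array([artwork_features(rng.integers(0, 256, (64, 64), dtype=np.uint8),
                               genuine=i % 2) for i in range(24)])
y = rng.integers(0, 2, 24)                             # high vs low aesthetic rating
clf = SVC().fit(X, y)
```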

21 pages, 8540 KiB  
Article
LBCNIN: Local Binary Convolution Network with Intra-Class Normalization for Texture Recognition with Applications in Tactile Internet
by Nikolay Neshov, Krasimir Tonchev and Agata Manolova
Electronics 2024, 13(15), 2942; https://doi.org/10.3390/electronics13152942 - 25 Jul 2024
Viewed by 1272
Abstract
Texture recognition is a pivotal task in computer vision, crucial for applications in material sciences, medicine, and agriculture. Leveraging advancements in Deep Neural Networks (DNNs), researchers seek robust methods to discern intricate patterns in images. In the context of the burgeoning Tactile Internet (TI), efficient texture recognition algorithms are essential for real-time applications. This paper introduces a method named Local Binary Convolution Network with Intra-class Normalization (LBCNIN) for texture recognition. Incorporating features from the last layer of the backbone, LBCNIN employs a non-trainable Local Binary Convolution (LBC) layer, inspired by Local Binary Patterns (LBP), without fine-tuning the backbone. The encoded feature vector is fed into a linear Support Vector Machine (SVM) for classification, serving as the only trainable component. In the context of TI, the availability of images from multiple views, such as in 3D object semantic segmentation, allows for more data per object. Consequently, LBCNIN processes batches where each batch contains images from the same material class, with batch normalization employed as an intra-class normalization method, aiming to produce better results than single images. Comprehensive evaluations across texture benchmarks demonstrate LBCNIN’s ability to achieve very good results under different resource constraints, attributed to the variability in backbone architectures.
(This article belongs to the Section Electronic Multimedia)
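
A toy sketch of the LBCNIN idea: a frozen Local Binary Convolution layer with random ±1 weights applied over backbone activations, batch normalization over same-class batches as the intra-class normalization, and a linear SVM as the only trainable component. Shapes and filter counts are assumptions.

```python
import numpy as np
import torch
import torch.nn as nn
from sklearn.svm import LinearSVC

torch.manual_seed(0)
lbc = nn.Conv2d(64, 32, kernel_size=3, padding=1, bias=False)
lbc.weight.data = torch.sign(torch.randn_like(lbc.weight))   # binary +/-1 filters
lbc.weight.requires_grad_(False)                              # non-trainable LBC layer
bn = nn.BatchNorm2d(32, affine=False)                         # uses batch statistics

def encode(batch_same_class):
    """Encode a batch of same-material backbone features; the batch
    statistics in BN act as the intra-class normalization."""
    with torch.no_grad():
        z = bn(torch.relu(lbc(batch_same_class)))
        return z.mean(dim=(2, 3)).numpy()                     # (B, 32) pooled codes

X, y = [], []
for cls in range(4):                            # four material classes
    feats = torch.randn(8, 64, 14, 14)          # stand-in backbone activations
    X.append(encode(feats))
    y += [cls] * 8
clf = LinearSVC().fit(np.vstack(X), np.array(y))              # only trainable part
```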

15 pages, 1620 KiB  
Article
Classification of the Pathological Range of Motion in Low Back Pain Using Wearable Sensors and Machine Learning
by Fernando Villalba-Meneses, Cesar Guevara, Alejandro B. Lojan, Mario G. Gualsaqui, Isaac Arias-Serrano, Paolo A. Velásquez-López, Diego Almeida-Galárraga, Andrés Tirado-Espín, Javier Marín and José J. Marín
Sensors 2024, 24(3), 831; https://doi.org/10.3390/s24030831 - 27 Jan 2024
Cited by 8 | Viewed by 3526
Abstract
Low back pain (LBP) is a highly common musculoskeletal condition and the leading cause of work absenteeism. This project aims to develop a medical test to help healthcare professionals decide on and assign physical treatment for patients with nonspecific LBP. The design uses machine learning (ML) models based on the classification of motion capture (MoCap) data obtained from the range of motion (ROM) exercises among healthy and clinically diagnosed patients with LBP from Imbabura, Ecuador. The following seven ML algorithms were tested for evaluation and comparison: logistic regression, decision tree, random forest, support vector machine (SVM), k-nearest neighbor (KNN), multilayer perceptron (MLP), and gradient boosting algorithms. All ML techniques obtained an accuracy above 80%, and three models (SVM, random forest, and MLP) obtained an accuracy of >90%. SVM was found to be the best-performing algorithm. This article aims to improve the applicability of inertial MoCap in healthcare by making use of precise spatiotemporal measurements with a data-driven treatment approach to improve the quality of life of people with chronic LBP.
(This article belongs to the Special Issue Biomedical Sensors for Diagnosis and Rehabilitation)
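
A compact sketch of the reported comparison: the same seven classifier families scored with cross-validation on ROM feature vectors. The synthetic data and default hyperparameters are placeholders.

```python
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.linear_model import LogisticRegression
from sklearn.tree import DecisionTreeClassifier
from sklearn.ensemble import RandomForestClassifier, GradientBoostingClassifier
from sklearn.svm import SVC
from sklearn.neighbors import KNeighborsClassifier
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(8)
X = rng.random((120, 30))                 # ROM features from MoCap exercises
y = rng.integers(0, 2, 120)               # healthy vs nonspecific LBP

models = [LogisticRegression(max_iter=1000), DecisionTreeClassifier(),
          RandomForestClassifier(), SVC(), KNeighborsClassifier(),
          MLPClassifier(max_iter=2000), GradientBoostingClassifier()]
for m in models:
    score = cross_val_score(make_pipeline(StandardScaler(), m), X, y, cv=5).mean()
    print(f"{type(m).__name__:28s} {score:.3f}")
```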

34 pages, 2723 KiB  
Article
Bamboo Plant Classification Using Deep Transfer Learning with a Majority Multiclass Voting Algorithm
by Ankush D. Sawarkar, Deepti D. Shrimankar, Sarvat Ali, Anurag Agrahari and Lal Singh
Appl. Sci. 2024, 14(3), 1023; https://doi.org/10.3390/app14031023 - 25 Jan 2024
Cited by 9 | Viewed by 4548
Abstract
Bamboos, also known as non-timber forest products (NTFPs) and belonging to the family Poaceae and subfamily Bambusoideae, have a wide range of flowering cycles from 3 to 120 years; hence, it is difficult to identify species. Here, the focus is on supervised machine learning (ML) and deep learning (DL) as a potential automated approach for the identification and classification of commercial bamboo species, with the help of the majority multiclass voting (MajMulVot) algorithm. We created an image dataset of 2000 bamboo instances, followed by a texture dataset prepared using local binary patterns (LBP) and gray-level co-occurrence matrix (GLCM)-based methods. First, we deployed five ML models for the texture datasets, where support vector machine (SVM) shows an accuracy rate of 82.27%. We next deployed five DL-based convolutional neural network (CNN) models for bamboo classification, namely AlexNet, VGG16, ResNet18, VGG19, and GoogleNet, using the transfer learning (TL) approach, where VGG16 prevails, with an accuracy rate of 88.75%. Further, a MajMulVot-based ensemble approach was introduced to improve the classification accuracy of all ML- and DL-based models. The ML-MajMulVot enhanced the accuracy for the texture dataset to 86.96%. In the same way, DL-MajMulVot increased the accuracy to 92.8%. We performed a comparative analysis of all classification models with and without K-fold cross-validation and MajMulVot methods. The proposed findings indicate that even difficult-to-identify species may be identified accurately with adequate image datasets. The suggested technology can also be incorporated into a mobile app to offer farmers effective agricultural methods.
(This article belongs to the Section Computing and Artificial Intelligence)
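
A minimal sketch of majority multiclass voting (MajMulVot) over the predictions of several trained models; the vote matrix below is simulated.

```python
import numpy as np

def maj_mul_vot(predictions):
    """predictions: (n_models, n_samples) integer class labels.
    Returns the per-sample majority class (ties go to the lowest label)."""
    preds = np.asarray(predictions)
    n_classes = preds.max() + 1
    votes = np.apply_along_axis(np.bincount, 0, preds, minlength=n_classes)
    return votes.argmax(axis=0)

# Five models voting on six bamboo images over three species classes:
preds = np.array([[0, 1, 2, 2, 1, 0],
                  [0, 1, 2, 1, 1, 0],
                  [1, 1, 2, 2, 0, 0],
                  [0, 2, 2, 2, 1, 1],
                  [0, 1, 1, 2, 1, 0]])
print(maj_mul_vot(preds))   # -> [0 1 2 2 1 0]
```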

31 pages, 9627 KiB  
Article
Multi-Method Analysis of Histopathological Image for Early Diagnosis of Oral Squamous Cell Carcinoma Using Deep Learning and Hybrid Techniques
by Mehran Ahmad, Muhammad Abeer Irfan, Umar Sadique, Ihtisham ul Haq, Atif Jan, Muhammad Irfan Khattak, Yazeed Yasin Ghadi and Hanan Aljuaid
Cancers 2023, 15(21), 5247; https://doi.org/10.3390/cancers15215247 - 31 Oct 2023
Cited by 18 | Viewed by 3612
Abstract
Oral cancer is a fatal disease and ranks seventh among the most common cancers across the globe. It is a type of cancer that usually affects the head and neck. The current gold standard for diagnosis is histopathological investigation; however, the conventional approach is time-consuming and requires professional interpretation. Therefore, early diagnosis of Oral Squamous Cell Carcinoma (OSCC) is crucial for successful therapy, reducing the risk of mortality and morbidity, while improving the patient’s chances of survival. Thus, we employed several artificial intelligence techniques to aid clinicians or physicians, thereby significantly reducing the workload of pathologists. This study aimed to develop hybrid methodologies based on fused features to generate better results for early diagnosis of OSCC. This study employed three different strategies, each using five distinct models. The first strategy is transfer learning using the Xception, Inceptionv3, InceptionResNetV2, NASNetLarge, and DenseNet201 models. The second strategy involves using a pre-trained state-of-the-art CNN for feature extraction coupled with a Support Vector Machine (SVM) for classification. In particular, features were extracted using various pre-trained models, namely Xception, Inceptionv3, InceptionResNetV2, NASNetLarge, and DenseNet201, and were subsequently applied to the SVM algorithm to evaluate the classification accuracy. The final strategy employs a cutting-edge hybrid feature fusion technique, using the aforementioned state-of-the-art CNN models to extract deep features. These deep features underwent dimensionality reduction through principal component analysis (PCA). Subsequently, the low-dimensional features were combined with shape, color, and texture features extracted using a gray-level co-occurrence matrix (GLCM), Histogram of Oriented Gradient (HOG), and Local Binary Pattern (LBP) methods. Hybrid feature fusion was incorporated into the SVM to enhance the classification performance. The proposed system achieved promising results for rapid diagnosis of OSCC using histological images. The accuracy, precision, sensitivity, specificity, F-1 score, and area under the curve (AUC) of the support vector machine (SVM) algorithm based on the hybrid feature fusion of DenseNet201 with GLCM, HOG, and LBP features were 97.00%, 96.77%, 90.90%, 98.92%, 93.74%, and 96.80%, respectively.
(This article belongs to the Section Cancer Causes, Screening and Diagnosis)
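
A sketch of the third strategy's fusion step: deep features reduced with PCA, then concatenated with handcrafted GLCM/HOG/LBP descriptors before the SVM. Random arrays stand in for DenseNet201 features, and the dimensions are assumptions.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.svm import SVC

rng = np.random.default_rng(9)
n = 50
deep = rng.random((n, 1920))              # DenseNet201 pooled features
handcrafted = rng.random((n, 64))         # GLCM + HOG + LBP descriptors
y = rng.integers(0, 2, n)                 # normal vs OSCC

pca = PCA(n_components=32).fit(deep)      # reduce only the deep features
X = np.hstack([pca.transform(deep), handcrafted])
clf = SVC().fit(X, y)
print("explained variance kept:", pca.explained_variance_ratio_.sum().round(3))
```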

6 pages, 2944 KiB  
Proceeding Paper
Feature Extraction of Ophthalmic Images Using Deep Learning and Machine Learning Algorithms
by Tunuri Sundeep, Uppalapati Divyasree, Karumanchi Tejaswi, Ummadi Reddy Vinithanjali and Anumandla Kiran Kumar
Eng. Proc. 2023, 56(1), 170; https://doi.org/10.3390/ASEC2023-15231 - 26 Oct 2023
Cited by 2 | Viewed by 1766
Abstract
Deep learning and machine learning algorithms have become the most popular methods for analyzing and extracting features, especially in medical images, and feature extraction has made this task much easier. Our aim is to check which feature extraction technique works best for a given classifier. We used ophthalmic images and applied feature extraction techniques such as Gabor, LBP (Local Binary Pattern), HOG (Histograms of Oriented Gradients), and SIFT (Scale-Invariant Feature Transform); the extracted features were then passed through classifiers such as RFC (Random Forest Classifier), CNN (Convolutional Neural Network), SVM (Support Vector Machine), and KNN (K-Nearest Neighbors). We then compared the performance of each technique and selected the feature extraction technique that gives the best performance for a specified classifier. We achieved an accuracy of 94% for the Gabor feature extraction technique using the CNN classifier, 92% for the HOG feature extraction technique using the RFC classifier, 90% for the LBP feature extraction technique using the RFC classifier, and 92% for the SIFT feature extraction technique using the RFC classifier.
(This article belongs to the Proceedings of The 4th International Electronic Conference on Applied Sciences)
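
A small sketch of the extractor-versus-classifier search described here, pairing each feature extractor with each classifier and scoring by cross-validation; only Gabor and LBP with two classifiers are shown, on synthetic data.

```python
import numpy as np
from skimage.feature import local_binary_pattern
from skimage.filters import gabor
from sklearn.ensemble import RandomForestClassifier
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

def gabor_feats(img):
    real, imag = gabor(img, frequency=0.3)
    return [real.mean(), real.std(), np.abs(imag).mean()]

def lbp_feats(img):
    lbp = local_binary_pattern(img, 8, 1, "uniform")
    return np.histogram(lbp, bins=10, range=(0, 10), density=True)[0]

rng = np.random.default_rng(10)
imgs = rng.random((40, 32, 32))
y = rng.integers(0, 2, 40)                       # e.g., diseased vs healthy eye

for name, fx in (("Gabor", gabor_feats), ("LBP", lbp_feats)):
    X = np.array([fx(im) for im in imgs])
    for clf in (RandomForestClassifier(), SVC()):
        acc = cross_val_score(clf, X, y, cv=4).mean()
        print(f"{name:5s} + {type(clf).__name__:22s} {acc:.2f}")
```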

19 pages, 3035 KiB  
Article
A Comparative Analysis of Skin Cancer Detection Applications Using Histogram-Based Local Descriptors
by Yildiz Aydin
Diagnostics 2023, 13(19), 3142; https://doi.org/10.3390/diagnostics13193142 - 6 Oct 2023
Cited by 12 | Viewed by 3337
Abstract
Among the most serious types of cancer is skin cancer. Despite the risk of death, when caught early, the rate of survival is greater than 95%. This inspires researchers to explore methods that allow for the early detection of skin cancer and could save millions of lives. The ability to detect the early signs of skin cancer has become more urgent in light of the rising number of illnesses, the high death rate, and costly healthcare treatments. Given the gravity of these issues, experts have developed a number of approaches for detecting skin cancer. Identifying skin cancer and whether it is benign or malignant involves detecting features of the lesions such as size, form, symmetry, color, etc. The aim of this study is to determine the most successful skin cancer detection methods by comparing the outcomes and effectiveness of the various applications that categorize benign and malignant forms of skin cancer. Descriptors such as the Local Binary Pattern (LBP), the Local Directional Number Pattern (LDN), the Pyramid of Histogram of Oriented Gradients (PHOG), the Local Directional Pattern (LDiP), and Monogenic Binary Coding (MBC) are used to extract the necessary features. Support vector machines (SVM) and XGBoost are used in the classification process. In addition, this study uses colored histogram-based features to classify the various characteristics obtained from the color images. In the experimental results, the applications implemented with the proposed color histogram-based features were observed to be more successful. Under the proposed method (the colored LDN feature obtained using the YCbCr color space with the XGBoost classifier), a 90% accuracy rate was achieved on Dataset 1, which was obtained from the Kaggle website. For the HAM10000 data set, an accuracy rate of 96.50% was achieved under a similar proposed method (the colored MBC feature obtained using the HSV color space with the XGBoost classifier).
(This article belongs to the Section Machine Learning and Artificial Intelligence in Diagnostics)
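
A sketch of the colored-descriptor idea: a histogram descriptor computed per YCbCr channel, concatenated, and classified with XGBoost. Plain LBP is substituted here for the paper's LDN/MBC descriptors, and the data is synthetic.

```python
import numpy as np
from skimage.color import rgb2ycbcr
from skimage.feature import local_binary_pattern
from xgboost import XGBClassifier

def colored_lbp(rgb):
    ycbcr = rgb2ycbcr(rgb)
    hists = []
    for c in range(3):                                   # Y, Cb, Cr channels
        lbp = local_binary_pattern(ycbcr[..., c], 8, 1, "uniform")
        hists.append(np.histogram(lbp, bins=10, range=(0, 10), density=True)[0])
    return np.concatenate(hists)                         # colored histogram feature

rng = np.random.default_rng(11)
imgs = rng.random((30, 64, 64, 3))
y = rng.integers(0, 2, 30)                               # benign vs malignant
X = np.array([colored_lbp(im) for im in imgs])
clf = XGBClassifier(n_estimators=50).fit(X, y)
```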
