Search Results (14)

Search Parameters:
Keywords = HEp-2 cell image classification

9 pages, 202 KB  
Article
Pilot Study of AI-Assisted ANA Immunofluorescence Reading—Comparison with Classical Visual Interpretation
by Sarah Mayr, Margit Dollinger, Boris Ehrenstein, Florian Günther, Olga Krammer, Antonia Schuster, Thomas Büttner, Rico Hiemann, Peter Schierack, Dirk Roggenbuck and Martin Fleck
J. Clin. Med. 2025, 14(19), 6924; https://doi.org/10.3390/jcm14196924 - 30 Sep 2025
Cited by 1 | Viewed by 1565
Abstract
Background: Antinuclear antibodies (ANAs) play a crucial role in diagnosing systemic autoimmune rheumatic diseases, particularly systemic lupus erythematosus. The recommended standard for ANA detection is indirect immunofluorescence testing (IIFT) using human epithelial (HEp-2) cells. Since visual interpretation (VI) of IIFT images is time-consuming and labor-intensive, research is focusing on automated interpretation systems that use artificial intelligence (AI). Methods: Consecutive serum samples (number of sera = 143) from routine clinical care were collected from patients visiting our tertiary rheumatology center. ANA were detected by IIFT with visual interpretation and compared with IIFT using the AI-based interpretation system akiron® NEO (Medipan, 15827 Blankenfelde-Mahlow, Germany). ANA titer levels and patterns were analyzed according to the Competent Level of the International Consensus on ANA Pattern classification. Results: Agreement of positive/negative ANA discrimination between AI-aided and VI-IIFT at the recommended cut-off of 80 was good (Cohen’s kappa [κ] 0.69) but significantly different (McNemar test, p < 0.0001). At a cut-off of ≥1/80, the agreement was improved (κ 0.76) and the difference between both methods was non-significant (p = 1.0000). The ANA pattern recognition agreement between both approaches was moderate (κ = 0.54). The direct comparison using only the akiron® NEO HEp-2 cell ANA assay revealed a good agreement (0.67), which improved to very good (κ = 0.80) when differences between ANA patterns anti-cell (AC)4/5 and AC2 were neglected. Notably, titer levels in the automated evaluations were frequently assessed at higher values than in the gold standard interpretation. Conclusions: Our study demonstrates a good agreement for positive/negative ANA discrimination. ANA pattern recognition by AI-aided interpretation showed moderate to very good agreement with VI. 
Further research and algorithm refinement (e.g., improved pattern recognition and titer calibration) are necessary to support its future implementation as a reliable screening method. Full article
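The agreement statistic reported above, Cohen's kappa, corrects raw agreement for the agreement expected by chance. A minimal self-contained sketch (with made-up positive/negative calls, not the study's data):

```python
from collections import Counter

def cohens_kappa(rater_a, rater_b):
    """Cohen's kappa for two raters over the same samples.

    kappa = (p_o - p_e) / (1 - p_e), where p_o is the observed agreement
    rate and p_e is the agreement expected by chance, computed from each
    rater's marginal label frequencies.
    """
    assert len(rater_a) == len(rater_b)
    n = len(rater_a)
    p_o = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    freq_a, freq_b = Counter(rater_a), Counter(rater_b)
    labels = set(freq_a) | set(freq_b)
    p_e = sum((freq_a[l] / n) * (freq_b[l] / n) for l in labels)
    return (p_o - p_e) / (1 - p_e)

# Hypothetical ANA calls from AI-aided and visual interpretation
ai = ["pos", "pos", "neg", "neg", "pos", "neg", "pos", "neg"]
vi = ["pos", "pos", "neg", "pos", "pos", "neg", "neg", "neg"]
print(round(cohens_kappa(ai, vi), 2))  # 0.5
```

On the rough scale used in the abstract, κ around 0.7 is "good" and above 0.8 "very good"; the paired McNemar test it cites additionally checks whether the two methods' disagreements are systematically one-sided.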
18 pages, 13103 KB  
Article
ILViT: An Inception-Linear Attention-Based Lightweight Vision Transformer for Microscopic Cell Classification
by Zhangda Liu, Panpan Wu, Ziping Zhao and Hengyong Yu
J. Imaging 2025, 11(7), 219; https://doi.org/10.3390/jimaging11070219 - 1 Jul 2025
Cited by 1 | Viewed by 1274
Abstract
Microscopic cell classification is a fundamental challenge in both clinical diagnosis and biological research. However, existing methods still struggle with the complexity and morphological diversity of cellular images, leading to limited accuracy or high computational costs. To overcome these constraints, we propose an efficient classification method that balances strong feature representation with a lightweight design. Specifically, an Inception-Linear Attention-based Lightweight Vision Transformer (ILViT) model is developed for microscopic cell classification. The ILViT integrates two innovative modules: Dynamic Inception Convolution (DIC) and Contrastive Omni-Kolmogorov Attention (COKA). DIC combines dynamic and Inception-style convolutions to replace large kernels with fewer parameters. COKA integrates Omni-Dimensional Dynamic Convolution (ODC), linear attention, and a Kolmogorov-Arnold Network (KAN) structure to enhance feature learning and model interpretability. With only 1.91 GFLOPs and 8.98 million parameters, ILViT achieves high efficiency. Extensive experiments on four public datasets are conducted to validate the effectiveness of the proposed method. It achieves an accuracy of 97.185% on the BioMediTech dataset for classifying retinal pigment epithelial cells, 97.436% on the ICPR-HEp-2 dataset for diagnosing autoimmune disorders via HEp-2 cell classification, 90.528% on the Hematological Malignancy Bone Marrow Cytology Expert Annotation dataset for categorizing bone marrow cells, and 99.758% on a white blood cell dataset for distinguishing leukocyte subtypes. These results show that ILViT outperforms state-of-the-art models in both accuracy and efficiency, demonstrating strong generalizability and practical potential for cell image classification. Full article
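The "linear attention" that gives ILViT its low FLOP count replaces the quadratic softmax attention with a kernel feature map, so attention can be reassociated as phi(Q) @ (phi(K)^T V) in O(N·d²) rather than O(N²·d). A generic NumPy sketch of that reassociation trick (not the paper's COKA module, whose exact design differs):

```python
import numpy as np

def elu_plus_one(x):
    # Positive feature map: x+1 for x>0, exp(x) otherwise (i.e. elu(x)+1)
    return np.where(x > 0, x + 1.0, np.exp(x))

def linear_attention(q, k, v):
    """Kernelized linear attention.

    softmax(QK^T)V costs O(N^2 d); mapping queries/keys through a positive
    feature map and reassociating the products costs O(N d^2) instead.
    """
    q, k = elu_plus_one(q), elu_plus_one(k)
    kv = k.T @ v                  # (d, d_v): one summary of all keys/values
    z = q @ k.sum(axis=0)         # (N,): per-query normaliser
    return (q @ kv) / z[:, None]

rng = np.random.default_rng(0)
n, d = 64, 16                     # e.g. 64 patch tokens of width 16
q, k, v = (rng.standard_normal((n, d)) for _ in range(3))
out = linear_attention(q, k, v)
print(out.shape)                  # (64, 16)
```

Because the feature map is strictly positive, every output row is a convex combination of the value rows, just as with softmax attention.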

15 pages, 1588 KB  
Article
Classification of HEp-2 Staining Pattern Images Using Adapted Multilayer Perceptron Neural Network-Based Intra-Class Variation of Cell Shape
by Khamael Al-Dulaimi, Jasmine Banks, Aiman Al-Sabaawi, Kien Nguyen, Vinod Chandran and Inmaculada Tomeo-Reyes
Sensors 2023, 23(4), 2195; https://doi.org/10.3390/s23042195 - 15 Feb 2023
Cited by 4 | Viewed by 2921
Abstract
There is growing interest from the clinical practice and research communities in the development of methods to automate the HEp-2 stained cell classification procedure from histopathological images. Challenges faced by these methods include variations in cell densities and cell patterns, overfitting of features, large-scale data volumes and staining variations. In this paper, a multi-class multilayer perceptron technique is adapted by adding a new hidden layer to calculate the variation in the mean, scale, kurtosis and skewness of higher order spectra features of the cell shape information. The adapted technique is then jointly trained and the probability of classification calculated using a Softmax activation function. This method is proposed to address the overfitting, staining and large-scale data volume problems, and to classify HEp-2 staining cells into six classes. An extensive experimental analysis is conducted to verify the results of the proposed method. The technique has been trained and tested on the Task-1 datasets from the ICPR-2014 and ICPR-2016 competitions. The experimental results show that the proposed model achieved a higher accuracy of 90.3% with data augmentation than the 87.5% obtained without data augmentation. In addition, the proposed framework is compared with existing methods, as well as with the results of methods used in the ICPR-2014 and ICPR-2016 competitions. The results demonstrate that our proposed method effectively outperforms recent methods. Full article
(This article belongs to the Special Issue Machine Learning for Biomedical Imaging and Sensing II)
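The four summary statistics the adapted hidden layer is described as computing over the spectral features (mean, scale, skewness, kurtosis) are ordinary moment statistics. A minimal sketch of those four quantities for one feature vector (illustrative values, not the paper's features):

```python
import math

def shape_statistics(values):
    """Mean, scale (std), skewness and excess kurtosis of a feature vector.

    Skewness is the standardized third moment; excess kurtosis is the
    standardized fourth moment minus 3, so a Gaussian scores 0.
    """
    n = len(values)
    mean = sum(values) / n
    var = sum((x - mean) ** 2 for x in values) / n
    std = math.sqrt(var)
    skew = sum((x - mean) ** 3 for x in values) / (n * std ** 3)
    kurt = sum((x - mean) ** 4 for x in values) / (n * var ** 2) - 3.0
    return mean, std, skew, kurt

m, s, g1, g2 = shape_statistics([1.0, 2.0, 2.0, 3.0, 9.0])
```

The single outlier at 9.0 pulls the skewness positive, which is exactly the kind of intra-class shape variation these statistics are meant to summarize.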

15 pages, 2289 KB  
Article
IoMT-Based Automated Diagnosis of Autoimmune Diseases Using MultiStage Classification Scheme for Sustainable Smart Cities
by Divya Biligere Shivanna, Thompson Stephan, Fadi Al-Turjman, Manjur Kolhar and Sinem Alturjman
Sustainability 2022, 14(21), 13891; https://doi.org/10.3390/su142113891 - 26 Oct 2022
Cited by 9 | Viewed by 3714
Abstract
The resolution of complex medical diagnoses using pattern recognition requires an artificial neural network-based expert system to automate autoimmune disease diagnosis in blood samples. This process is done using image-based computer-aided diagnosis (CAD) to reduce errors in the diagnosis process. This paper describes a Multistage Classification Scheme (MSCS), which uses antinuclear antibody (ANA) tests to identify and classify the existence of autoantibodies in the blood serum that bind to antigens found in the nuclei of mammalian cells. The MSCS classifies HEp-2 cells in three stages, using Binary Tree (BT), Artificial Neural Network (ANN), and Support Vector Machine (SVM) classifiers as basic blocks. The Indirect Immunofluorescence (IIF) technique is used in the ANA test with Human Epithelial type-2 (HEp-2) cells as substrates. The efficiency of the proposed methodology is assessed using the ICPR 2016 dataset. The intermediate cells (IMC) and positive cells (PC) were separated in Stage 1 prior to preprocessing based on their total strength; special preprocessing is applied to intermediate cells for improved output, while positive cells are subjected to mild preprocessing. The mean class accuracy (MCA) was 84.9% for intermediate cells and 95.8% for positive cells when the carefully picked 24 features and the SVM classifier were applied. The ANN showed better performance by adjusting the weights using the SCGBP algorithm, giving an MCA of 88.4% and 97.1% for intermediate and positive cells, respectively. The BT had an MCA of 95.3% for intermediate and 98.6% for positive cells. In Stage 2, the meta learners BT2, ANN2, and SVM2 were trained on an augmented feature set (the 24 features plus the 3 results from the base learners). The performance of BT2, ANN2, and SVM2 thereby increased by 1.8%, 4.5%, and 4.1% compared to Stage 1. In Stage 3, the final prediction was performed by majority voting among the results of the three meta learners to achieve 99.1% MCA.
The proposed algorithm can be embedded into a CAD framework built for the ANA examination. The proposed model will improve operational efficiency, decrease medical expenses, expand accessibility to healthcare, and improve patient safety in the sector, enabling enterprises to lower unplanned downtime, develop new products or services, increase operational effectiveness, and enhance risk management. Full article
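The Stage-3 decision rule above is plain majority voting over the three meta learners. A minimal sketch (the labels are hypothetical, and the tie-break shown is a simplification since the abstract does not state one):

```python
from collections import Counter

def majority_vote(predictions):
    """Final decision by majority vote among classifier outputs.

    `predictions` holds one label per learner for a single cell image.
    Counter.most_common breaks ties by first-encountered label, which is
    an arbitrary but deterministic simplification.
    """
    return Counter(predictions).most_common(1)[0][0]

# Hypothetical per-cell labels from the meta learners BT2, ANN2, SVM2
print(majority_vote(["speckled", "speckled", "homogeneous"]))  # speckled
```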

18 pages, 2967 KB  
Article
SIFT-CNN: When Convolutional Neural Networks Meet Dense SIFT Descriptors for Image and Sequence Classification
by Dimitrios Tsourounis, Dimitris Kastaniotis, Christos Theoharatos, Andreas Kazantzidis and George Economou
J. Imaging 2022, 8(10), 256; https://doi.org/10.3390/jimaging8100256 - 21 Sep 2022
Cited by 38 | Viewed by 11732
Abstract
Despite the success of hand-crafted features in computer vision for many years, these have now been replaced by end-to-end learnable features extracted from deep convolutional neural networks (CNNs). Whilst CNNs can learn robust features directly from image pixels, they require large amounts of samples and extreme augmentations. On the contrary, hand-crafted features, like SIFT, exhibit several interesting properties, as they can provide local rotation invariance. In this work, a novel scheme combining the strengths of SIFT descriptors with CNNs, namely SIFT-CNN, is presented. Given a single-channel image, one SIFT descriptor is computed for every pixel, and thus, every pixel is represented as an M-dimensional histogram, which ultimately results in an M-channel image. Thus, the SIFT image is generated from the SIFT descriptors for all the pixels in a single-channel image, while at the same time, the original spatial size is preserved. Next, a CNN is trained to utilize these M-channel images as inputs by operating directly on the multiscale SIFT images with the regular convolution processes. Since these images incorporate spatial relations between the histograms of the SIFT descriptors, the CNN is guided to learn features from local gradient information of images that otherwise can be neglected. In this manner, the SIFT-CNN implicitly acquires a local rotation invariance property, which is desired for problems where local areas within the image can be rotated without affecting the overall classification result of the respective image. Some of these problems include indirect immunofluorescence (IIF) cell image classification, ground-based all-sky image-cloud classification and human lip-reading classification.
The results for the popular datasets related to the three different aforementioned problems indicate that the proposed SIFT-CNN can improve the performance and surpasses the corresponding CNNs trained directly on pixel values in various challenging tasks due to its robustness in local rotations. Our findings highlight the importance of the input image representation in the overall efficiency of a data-driven system. Full article
(This article belongs to the Special Issue Advances and Challenges in Multimodal Machine Learning)
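The core data transformation above — one M-dimensional descriptor per pixel, stacked into an (H, W, M) tensor — can be illustrated with a deliberately simplified stand-in: a per-pixel histogram of gradient orientations over a small neighbourhood. Real dense SIFT adds Gaussian weighting, 4x4 spatial cells and normalisation, so treat this only as a shape-level sketch:

```python
import numpy as np

def orientation_histogram_image(img, bins=8, radius=2):
    """Toy stand-in for the SIFT-image idea: describe every pixel by an
    M-bin histogram of gradient orientations (magnitude-weighted) in its
    neighbourhood, producing an (H, W, M) tensor a CNN can consume like
    an M-channel image while preserving the spatial size.
    """
    gy, gx = np.gradient(img.astype(float))
    mag = np.hypot(gx, gy)
    ang = np.mod(np.arctan2(gy, gx), 2 * np.pi)
    bin_idx = np.minimum((ang / (2 * np.pi) * bins).astype(int), bins - 1)
    h, w = img.shape
    out = np.zeros((h, w, bins))
    for y in range(h):
        for x in range(w):
            y0, y1 = max(0, y - radius), min(h, y + radius + 1)
            x0, x1 = max(0, x - radius), min(w, x + radius + 1)
            for b, m in zip(bin_idx[y0:y1, x0:x1].ravel(),
                            mag[y0:y1, x0:x1].ravel()):
                out[y, x, b] += m
    return out

img = np.random.default_rng(1).random((16, 16))
print(orientation_histogram_image(img).shape)  # (16, 16, 8)
```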

17 pages, 3852 KB  
Article
Application of Supervised Machine Learning to Recognize Competent Level and Mixed Antinuclear Antibody Patterns Based on ICAP International Consensus
by Yi-Da Wu, Ruey-Kai Sheu, Chih-Wei Chung, Yen-Ching Wu, Chiao-Chi Ou, Chien-Wen Hsiao, Huang-Chen Chang, Ying-Chieh Huang, Yi-Ming Chen, Win-Tsung Lo, Lun-Chi Chen, Chien-Chung Huang, Tsu-Yi Hsieh, Wen-Nan Huang, Tsai-Hung Yen, Yun-Wen Chen, Chia-Yu Chen and Yi-Hsing Chen
Diagnostics 2021, 11(4), 642; https://doi.org/10.3390/diagnostics11040642 - 1 Apr 2021
Cited by 13 | Viewed by 13776
Abstract
Background: Antinuclear antibody pattern recognition is vital for autoimmune disease diagnosis but labor-intensive for manual interpretation. To develop an automated pattern recognition system, we established machine learning models based on the International Consensus on Antinuclear Antibody Patterns (ICAP) at the competent level, including mixed pattern recognition, and evaluated their consistency with human reading. Methods: 51,694 human epithelial (HEp-2) cell images with patterns assigned by experienced medical technologists, collected in a medical center, were used to train six machine learning algorithms, which were compared by their performance. Next, we chose the best-performing model to test its consistency with five experienced readers and two beginners. Results: The mean F1 score in each classification of the best-performing model was 0.86, evaluated on Testing Data 1. For the inter-observer agreement test on Testing Data 2, the average agreement (κ) was 0.849 among the five experienced readers, 0.844 between the best-performing model and the experienced readers, and 0.528 between the experienced readers and the beginners. The results indicate that the proposed model outperformed the beginners and achieved excellent agreement with the experienced readers. Conclusions: This study demonstrated that the developed model could reach an excellent agreement with experienced human readers using machine learning methods. Full article
(This article belongs to the Special Issue Advances in Identification and Management of Systemic Sclerosis)

23 pages, 5688 KB  
Article
A Classification Method for the Cellular Images Based on Active Learning and Cross-Modal Transfer Learning
by Caleb Vununu, Suk-Hwan Lee and Ki-Ryong Kwon
Sensors 2021, 21(4), 1469; https://doi.org/10.3390/s21041469 - 20 Feb 2021
Cited by 11 | Viewed by 3602
Abstract
In computer-aided diagnosis (CAD) systems, the automatic classification of the different types of human epithelial type 2 (HEp-2) cells represents one of the critical steps in the diagnosis procedure of autoimmune diseases. Most methods prefer to tackle this task using the supervised learning paradigm. However, the necessity of having thousands of manually annotated examples constitutes a serious concern for the state-of-the-art HEp-2 cell classification methods. We present in this work a method that uses active learning in order to minimize the necessity of annotating the majority of the examples in the dataset. For this purpose, we use cross-modal transfer learning coupled with parallel deep residual networks. First, the parallel networks, which simultaneously take different wavelet coefficients as inputs, are trained in a fully supervised way by using a very small and already annotated dataset. Then, the trained networks are utilized on the targeted dataset, which is considerably larger than the first one, using active learning techniques in order to select only the images that really need to be annotated among all the examples. The obtained results show that active learning, when combined with an efficient transfer learning technique, can achieve quite satisfactory discrimination performance with only a few annotated examples in hand. This will help in building CAD systems by simplifying the burdensome task of labeling images while maintaining a performance similar to the state-of-the-art methods. Full article
(This article belongs to the Special Issue Computer Vision and Machine Learning for Medical Imaging System)
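A standard active-learning criterion for "which images really need to be annotated" is uncertainty sampling: query the images whose predicted class distribution has the highest entropy. The abstract does not specify the paper's exact selection rule, so the following is one common formulation with made-up image ids and probabilities:

```python
import math

def entropy(probs):
    # Shannon entropy of a discrete distribution (natural log)
    return -sum(p * math.log(p) for p in probs if p > 0)

def select_for_annotation(proba_by_image, budget):
    """Uncertainty sampling: rank unlabeled images by prediction entropy
    and return the `budget` most uncertain ones for human annotation.
    """
    ranked = sorted(proba_by_image,
                    key=lambda item: entropy(item[1]), reverse=True)
    return [image_id for image_id, _ in ranked[:budget]]

pool = [("img_a", [0.98, 0.01, 0.01]),   # confident: leave unlabeled
        ("img_b", [0.40, 0.35, 0.25]),   # uncertain: worth annotating
        ("img_c", [0.70, 0.20, 0.10])]
print(select_for_annotation(pool, budget=1))  # ['img_b']
```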

20 pages, 2002 KB  
Technical Note
Performance of Fine-Tuning Convolutional Neural Networks for HEp-2 Image Classification
by Vincenzo Taormina, Donato Cascio, Leonardo Abbene and Giuseppe Raso
Appl. Sci. 2020, 10(19), 6940; https://doi.org/10.3390/app10196940 - 3 Oct 2020
Cited by 26 | Viewed by 3230
Abstract
The search for antinuclear antibodies (ANA) represents a fundamental step in the diagnosis of autoimmune diseases. The test considered the gold standard for ANA detection is indirect immunofluorescence (IIF). The best substrate for ANA detection is provided by Human Epithelial type 2 (HEp-2) cells. The first phase of HEp-2 image analysis involves the classification of fluorescence intensity into the positive/negative classes. However, the analysis of IIF images is difficult to perform and particularly dependent on the experience of the immunologist. For this reason, the interest of the scientific community in finding relevant technological solutions to the problem has been high. Deep learning, and in particular Convolutional Neural Networks (CNNs), have demonstrated their effectiveness in the classification of biomedical images. In this work the efficacy of the CNN fine-tuning method applied to the problem of classifying fluorescence intensity in HEp-2 images was investigated. For this purpose, four of the best-known pre-trained networks were analyzed (AlexNet, SqueezeNet, ResNet18, GoogLeNet). The classifying power of the CNNs was investigated with different training modalities: three levels of weight freezing and training from scratch. Performance analysis was conducted, in terms of area under the ROC (Receiver Operating Characteristic) curve (AUC) and accuracy, using a public database. The best result achieved an AUC equal to 98.6% and an accuracy of 93.9%, demonstrating an excellent ability to discriminate between the positive/negative fluorescence classes. For an effective performance comparison, the fine-tuning mode was compared to those in which CNNs are used as feature extractors, and the best configuration found was compared with other state-of-the-art works. Full article
(This article belongs to the Special Issue Signal Processing and Machine Learning for Biomedical Data)
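The AUC metric reported here has a useful rank interpretation: it is the probability that a randomly chosen positive sample is scored above a randomly chosen negative one. A minimal sketch of that Mann-Whitney formulation (the scores below are hypothetical, not the paper's outputs):

```python
def roc_auc(labels, scores):
    """AUC via the rank/Mann-Whitney formulation: the fraction of
    (positive, negative) pairs where the positive is scored higher,
    with ties counting 1/2. Equivalent to the area under the ROC curve.
    """
    pos = [s for y, s in zip(labels, scores) if y == 1]
    neg = [s for y, s in zip(labels, scores) if y == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

# Hypothetical positive/negative fluorescence scores from a classifier
labels = [1, 1, 1, 0, 0, 0]
scores = [0.9, 0.8, 0.4, 0.5, 0.3, 0.1]
print(roc_auc(labels, scores))  # 8/9: one of nine pairs is misordered
```

The quadratic pair loop is fine for illustration; production implementations sort once and use rank sums.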

22 pages, 4541 KB  
Article
A Strictly Unsupervised Deep Learning Method for HEp-2 Cell Image Classification
by Caleb Vununu, Suk-Hwan Lee and Ki-Ryong Kwon
Sensors 2020, 20(9), 2717; https://doi.org/10.3390/s20092717 - 9 May 2020
Cited by 16 | Viewed by 5684
Abstract
Classifying the images that portray the Human Epithelial cells of type 2 (HEp-2) represents one of the most important steps in the diagnosis procedure of autoimmune diseases. Performing this classification manually represents an extremely complicated task due to the heterogeneity of these cellular images. Hence, an automated classification scheme appears to be necessary. However, the majority of the available methods prefer to utilize the supervised learning approach for this problem. The need for thousands of images labelled manually can represent a difficulty with this approach. The first contribution of this work is to demonstrate that classifying HEp-2 cell images can also be done using the unsupervised learning paradigm. Unlike the majority of the existing methods, we propose here a deep learning scheme that performs both the feature extraction and the cells’ discrimination through an end-to-end unsupervised paradigm. We propose the use of a deep convolutional autoencoder (DCAE) that performs feature extraction via an encoding–decoding scheme. At the same time, we embed in the network a clustering layer whose purpose is to automatically discriminate, during the feature learning process, the latent representations produced by the DCAE. Furthermore, we investigate how the quality of the network’s reconstruction can affect the quality of the produced representations. We have investigated the effectiveness of our method on some benchmark datasets and we demonstrate here that the unsupervised learning, when done properly, performs at the same level as the actual supervised learning-based state-of-the-art methods in terms of accuracy. Full article
(This article belongs to the Special Issue Machine Learning for Biomedical Imaging and Sensing)
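The "clustering layer embedded in the network" described above is commonly realized (e.g. in DEC-style models) as a soft assignment of each latent code to learnable cluster centres via a Student's t kernel; whether this paper uses exactly that kernel is an assumption of the sketch. A NumPy version of the soft-assignment step, with random stand-in latents and centres:

```python
import numpy as np

def soft_assignments(z, centroids, alpha=1.0):
    """DEC-style soft cluster assignment:
    q[i, j] ∝ (1 + ||z_i - mu_j||^2 / alpha)^(-(alpha + 1) / 2),
    normalised so each row is a probability distribution over clusters.
    Training sharpens these distributions to separate the latent codes.
    """
    d2 = ((z[:, None, :] - centroids[None, :, :]) ** 2).sum(-1)
    q = (1.0 + d2 / alpha) ** (-(alpha + 1.0) / 2.0)
    return q / q.sum(axis=1, keepdims=True)

rng = np.random.default_rng(2)
z = rng.standard_normal((5, 8))    # 5 latent codes from the DCAE encoder
mu = rng.standard_normal((3, 8))   # 3 learnable cluster centres
q = soft_assignments(z, mu)
print(q.shape)                     # (5, 3); each row sums to 1
```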

18 pages, 4684 KB  
Article
A Dynamic Learning Method for the Classification of the HEp-2 Cell Images
by Caleb Vununu, Suk-Hwan Lee, Oh-Jun Kwon and Ki-Ryong Kwon
Electronics 2019, 8(8), 850; https://doi.org/10.3390/electronics8080850 - 31 Jul 2019
Cited by 13 | Viewed by 3964
Abstract
The complete analysis of the images representing the human epithelial cells of type 2, commonly referred to as HEp-2 cells, is one of the most important tasks in the diagnosis procedure of various autoimmune diseases. The problem of the automatic classification of these images has been widely discussed since the unfolding of deep learning-based methods. Certain datasets of the HEp-2 cell images exhibit an extreme complexity due to their significant heterogeneity. We propose in this work a method that tackles specifically the problem related to this disparity. A dynamic learning process is conducted with different networks taking different input variations in parallel. In order to emphasize the localized changes in intensity, the discrete wavelet transform is used to produce different versions of the input image. The approximation and detail coefficients are fed to four different deep networks in a parallel learning paradigm in order to efficiently homogenize the features extracted from the images that have different intensity levels. The feature maps from these different networks are then concatenated and passed to the classification layers to produce the final type of the cellular image. The proposed method was tested on a public dataset that comprises images from two intensity levels. The significant heterogeneity of this dataset limits the discrimination results of some of the state-of-the-art deep learning-based methods. We have conducted a comparative study with these methods in order to demonstrate how the dynamic learning proposed in this work manages to significantly minimize this heterogeneity related problem, thus boosting the discrimination results. Full article
(This article belongs to the Section Artificial Intelligence)
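The four parallel inputs described above come from a single level of the 2-D discrete wavelet transform. A minimal Haar-wavelet sketch of that decomposition (conventions for normalisation and subband naming vary between libraries, so treat the 1/4 scaling as one choice among several):

```python
import numpy as np

def haar_dwt2(img):
    """One level of the 2-D Haar wavelet transform: returns the
    approximation (LL) and three detail subbands (LH, HL, HH), each at
    half resolution — the four inputs for the parallel networks.
    """
    a = img[0::2, 0::2].astype(float)  # top-left of each 2x2 block
    b = img[0::2, 1::2].astype(float)  # top-right
    c = img[1::2, 0::2].astype(float)  # bottom-left
    d = img[1::2, 1::2].astype(float)  # bottom-right
    ll = (a + b + c + d) / 4.0   # local average: overall shape
    lh = (a + b - c - d) / 4.0   # detail across rows
    hl = (a - b + c - d) / 4.0   # detail across columns
    hh = (a - b - c + d) / 4.0   # diagonal detail
    return ll, lh, hl, hh

img = np.arange(64).reshape(8, 8)
ll, lh, hl, hh = haar_dwt2(img)
print(ll.shape)  # (4, 4)
```

The detail subbands isolate localized intensity changes, which is why feeding them to separate networks helps homogenize features across the dataset's two intensity levels.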

12 pages, 1150 KB  
Article
Deep CNN for IIF Images Classification in Autoimmune Diagnostics
by Donato Cascio, Vincenzo Taormina and Giuseppe Raso
Appl. Sci. 2019, 9(8), 1618; https://doi.org/10.3390/app9081618 - 18 Apr 2019
Cited by 24 | Viewed by 3794
Abstract
The diagnosis and monitoring of autoimmune diseases are very important problems in medicine. The most used test for this purpose is the antinuclear antibody (ANA) test. An indirect immunofluorescence (IIF) test performed with Human Epithelial type 2 (HEp-2) cells as the substrate antigen is the most common method to determine ANA. In this paper we present an automatic HEp-2 specimen system, based on a convolutional neural network method, able to classify IIF images. The system consists of a module for feature extraction based on a pre-trained AlexNet network and a classification phase for the cell-pattern association using six support vector machines and a k-nearest neighbors classifier. The classification at the image level was obtained by analyzing the pattern prevalence at the cell level. The layers of the pre-trained network and various system parameters were evaluated in order to optimize the process. This system has been developed and tested on the indirect immunofluorescence image analysis (I3A) public HEp-2 image database. To test the generalisation performance of the method, the leave-one-specimen-out procedure was used in this work. The performance analysis showed an accuracy of 96.4% and a mean class accuracy equal to 93.8%. The results have been evaluated by comparing them with some of the most representative works using the same database. Full article
(This article belongs to the Section Applied Biosciences and Bioengineering)
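The leave-one-specimen-out procedure mentioned above holds out all cell images from one specimen at a time, so cells from the same slide never appear in both training and test sets. A minimal sketch with a hypothetical grouping of six images over three specimens:

```python
def leave_one_specimen_out(specimen_of):
    """Yield (train_idx, test_idx) pairs where every cell image from one
    specimen is held out together, preventing specimen-level leakage.
    `specimen_of` maps each image index to its specimen id.
    """
    for held_out in sorted(set(specimen_of)):
        test = [i for i, s in enumerate(specimen_of) if s == held_out]
        train = [i for i, s in enumerate(specimen_of) if s != held_out]
        yield train, test

# 6 cell images taken from 3 specimens (hypothetical grouping)
groups = ["s1", "s1", "s2", "s2", "s2", "s3"]
for train, test in leave_one_specimen_out(groups):
    print(test)
# [0, 1] then [2, 3, 4] then [5]
```

This is the grouped analogue of leave-one-out cross-validation (scikit-learn offers the same idea as `LeaveOneGroupOut`).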

21 pages, 2545 KB  
Article
An Automatic HEp-2 Specimen Analysis System Based on an Active Contours Model and an SVM Classification
by Donato Cascio, Vincenzo Taormina and Giuseppe Raso
Appl. Sci. 2019, 9(2), 307; https://doi.org/10.3390/app9020307 - 16 Jan 2019
Cited by 19 | Viewed by 4616 | Correction
Abstract
The antinuclear antibody (ANA) test is widely used for the screening, diagnosis, and monitoring of autoimmune diseases. The most common method to determine ANA is indirect immunofluorescence (IIF), performed with human epithelial type 2 (HEp-2) cells as the substrate antigen. The evaluation of ANA consists of an analysis of fluorescence intensity and staining patterns. This paper presents a complete and fully automatic system able to characterize IIF images. The fluorescence intensity classification was obtained by performing an image preprocessing phase and implementing a Support Vector Machines (SVM) classifier. The cell identification problem has been addressed by developing a flexible segmentation method, based on the Hough transform for ellipses and on an active contours model. In order to classify the HEp-2 cells, six SVM and one k-nearest neighbors (KNN) classifiers were developed. The system was tested on a public database consisting of 2080 IIF images. Unlike almost all work presented on this topic, the proposed system automatically addresses all phases of the HEp-2 image analysis process. All results have been evaluated by comparing them with some of the most representative state-of-the-art works, demonstrating the effectiveness of the system in the characterization of HEp-2 images. Full article
(This article belongs to the Section Applied Biosciences and Bioengineering)

18 pages, 2216 KB  
Article
A Deep Feature Extraction Method for HEp-2 Cell Image Classification
by Caleb Vununu, Suk-Hwan Lee and Ki-Ryong Kwon
Electronics 2019, 8(1), 20; https://doi.org/10.3390/electronics8010020 - 24 Dec 2018
Cited by 22 | Viewed by 4932
Abstract
The automated and accurate classification of the images portraying the Human Epithelial cells of type 2 (HEp-2) represents one of the most important steps in the diagnosis procedure of many autoimmune diseases. The extreme intra-class variations of HEp-2 cell image datasets drastically complicate the classification task. We propose in this work a classification framework that, unlike most of the state-of-the-art methods, uses a deep learning-based feature extraction method in a strictly unsupervised way. We propose a deep learning-based hybrid feature learning with two levels of deep convolutional autoencoders. The first level takes the original cell images as the inputs and learns to reconstruct them, in order to capture the features related to the global shape of the cells, and the second network takes the gradients of the images, in order to encode the localized changes in intensity (gray variations) that characterize each cell type. A final feature vector is constructed by combining the latent representations extracted from the two networks, giving a highly discriminative feature representation. The created features are fed to a nonlinear classifier whose output represents the type of the cell image. We have tested the discriminability of the proposed features on two of the most popular HEp-2 cell classification datasets, the SNPHEp-2 and ICPR 2016 datasets. The results show that the proposed features manage to capture the distinctive characteristics of the different cell types while performing at least as well as the actual deep learning-based state-of-the-art methods in terms of discrimination. Full article
(This article belongs to the Section Artificial Intelligence)

24 pages, 1285 KB  
Article
Feature Importance for Human Epithelial (HEp-2) Cell Image Classification
by Vibha Gupta and Arnav Bhavsar
J. Imaging 2018, 4(3), 46; https://doi.org/10.3390/jimaging4030046 - 26 Feb 2018
Cited by 7 | Viewed by 7952
Abstract
Indirect Immuno-Fluorescence (IIF) microscopy imaging of human epithelial (HEp-2) cells is a popular method for diagnosing autoimmune diseases. Considering large data volumes, computer-aided diagnosis (CAD) systems, based on image-based classification, can help in terms of time, effort, and reliability of diagnosis. Such approaches are based on extracting some representative features from the images. This work explores the selection of the most distinctive features for HEp-2 cell images using various feature selection (FS) methods. Considering that there is no single universally optimal feature selection technique, we also propose hybridization of one class of FS methods (filter methods). Furthermore, the notion of variable importance for ranking features, provided by another type of approach (embedded methods such as Random forest and Random uniform forest), is exploited to select a good subset of features from a large set, such that the addition of new features does not increase classification accuracy. In this work, we have also, with great consideration, designed class-specific features to capture morphological visual traits of the cell patterns. We perform various experiments and present discussions to demonstrate the effectiveness of the FS methods along with the proposed and a standard feature set. We achieve state-of-the-art performance even with a small number of features obtained after the feature selection. Full article
(This article belongs to the Special Issue Selected Papers from “MIUA 2017”)
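Filter-method feature selection of the kind hybridized above scores each feature independently of any classifier and keeps the top-ranked subset. As a deliberately minimal sketch, the filter criterion below is plain column variance; the paper's actual filters (and their hybridization) use richer criteria such as correlation or mutual information with the class label:

```python
def variance_ranking(feature_matrix):
    """Rank feature columns by variance, most variable first.

    A toy filter method: each feature gets a score computed without
    reference to any classifier, and indices are returned in ranked
    order so the caller can keep the top k.
    """
    n = len(feature_matrix)
    n_feat = len(feature_matrix[0])
    variances = []
    for j in range(n_feat):
        col = [row[j] for row in feature_matrix]
        mean = sum(col) / n
        variances.append(sum((x - mean) ** 2 for x in col) / n)
    return sorted(range(n_feat), key=lambda j: variances[j], reverse=True)

X = [[0.0, 5.0, 1.0],
     [0.1, 1.0, 1.0],
     [0.0, 9.0, 1.0]]
print(variance_ranking(X))  # [1, 0, 2]: the constant feature ranks last
```

Hybridization then combines several such rankings (e.g. by rank aggregation) before a final subset is chosen.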
