Article

Singular Nuclei Segmentation for Automatic HER2 Quantification Using CISH Whole Slide Images

by Md Shakhawat Hossain 1,2,*, M. M. Mahbubul Syeed 2,3, Kaniz Fatema 2,3, Md Sakir Hossain 1 and Mohammad Faisal Uddin 2,3

1 Department of CS, American International University-Bangladesh, Dhaka 1229, Bangladesh
2 RIoT Research Center, Independent University, Bangladesh, Dhaka 1229, Bangladesh
3 Department of CSE, Independent University, Bangladesh, Dhaka 1229, Bangladesh
* Author to whom correspondence should be addressed.
Sensors 2022, 22(19), 7361; https://doi.org/10.3390/s22197361
Submission received: 3 September 2022 / Revised: 20 September 2022 / Accepted: 22 September 2022 / Published: 28 September 2022
(This article belongs to the Special Issue Biomedical Image Processing and Sensing Application)

Abstract:
Human epidermal growth factor receptor 2 (HER2) quantification is performed routinely for all breast cancer patients to determine their suitability for HER2-targeted therapy. Fluorescence in situ hybridization (FISH) and chromogenic in situ hybridization (CISH) are the US Food and Drug Administration (FDA) approved tests for HER2 quantification, in which at least 20 cancer-affected singular nuclei are quantified for HER2 grading. CISH is more advantageous than FISH in terms of cost, time and practical usability. In clinical practice, nuclei suitable for HER2 quantification are selected manually by pathologists, which is time-consuming and laborious. Previously, a method was proposed for automatic HER2 quantification using a support vector machine (SVM) to detect suitable singular nuclei from CISH slides. However, the SVM-based method occasionally failed to detect singular nuclei, leading to inaccurate grading. Therefore, it is necessary to develop a robust nuclei detection method for reliable automatic HER2 quantification. In this paper, we propose a robust U-net-based singular nuclei detection method, with complementary color correction and color deconvolution, adapted for accurate HER2 grading using CISH whole slide images (WSIs). The efficacy of the proposed method was demonstrated for automatic HER2 quantification in a comparison with the SVM-based approach.

1. Introduction

Breast cancer is one of the world’s most frequent forms of cancer. In the United States alone, 268,600 cases were diagnosed among women in 2019, which climbed to 330,840 cases in 2021 [1]. Approximately 20% of the patients are HER2 positive due to HER2 gene amplification or subsequent HER2 protein over-expression [2]. HER2, a transmembrane tyrosine kinase receptor encoded by the ERBB2 gene on chromosome 17q12, is a predictive and prognostic biomarker for breast, gastric and other cancers [3]. HER2 grading is performed for all breast cancer patients to identify HER2-positive patients. As an aggressive subgroup, HER2-positive breast cancer is treated with anti-HER2 targeted therapy, such as trastuzumab or lapatinib, to destroy the nucleus of the cancer cell [4,5,6,7,8]. Targeted therapy improves the patient’s condition, and in 1998 the FDA approved trastuzumab to treat HER2-positive breast cancer patients. However, if such treatment is given to HER2-negative patients, it may cause cardiac toxicity [9]; in addition, it is highly expensive [9,10,11]. Therefore, accurate HER2 grading is crucial for designing a treatment plan.
Clinically, HER2 positivity is determined by counting copies of the HER2 gene inside the nucleus or of the corresponding HER2 protein on the cell membrane, as illustrated in Figure 1. Thus, HER2 quantification methods can be divided into two groups: HER2 protein-based and HER2 gene-based. Of the two, gene-based tests are considered more reliable. Immunohistochemistry (IHC), FISH and CISH are the FDA-approved tests for HER2 quantification [12,13,14,15]. IHC is a protein-based qualitative test that rates the intensity of membranous staining as 0, 1+, 2+ or 3+, whereas FISH and CISH count HER2 gene copies. However, an IHC test is not conclusive. The American Society of Clinical Oncology and College of American Pathologists (ASCO/CAP) recommend conducting a reflex FISH or CISH test to confirm the HER2 grade [12].
For FISH or CISH analysis, the invasive breast cancer regions are first identified from a biopsy. After that, singular nuclei suitable for HER2 quantification are selected. A singular nucleus, one that does not overlap with another nucleus and does not have any missing parts, is suitable for quantification, as shown in Figure 2. Usually, a healthy nucleus has two copies of CEP17 and two copies of the HER2 gene. The number of HER2 gene copies increases relative to the number of CEP17 copies in a HER2-positive nucleus. Therefore, HER2 and CEP17 signals are counted for singular nuclei from cancer regions. As seen in Figure 3, the inclusion of non-singular nuclei in the quantification causes inaccurate signal counting and incorrect analysis. As a result, quantifying only singular nuclei is essential for precise HER2 grading. ASCO/CAP recommends quantifying at least 20 nuclei. The HER2 grade is then determined based on the HER2-to-CEP17 ratio and the average HER2 copy number, as shown in Table 1.
For HER2 gene-based quantification, laboratories select suitable nuclei and then count signals manually from FISH or CISH slides under a microscope. It is a labor-intensive and time-consuming task that is also vulnerable to subjective interpretation. As a result, an automated quantification method has many advantages. Several methods have been proposed for counting signals from FISH in a semi-automated or automated fashion [16,17,18,19]. A few methods have been proposed to quantify CISH slides automatically [20]. The choice of FISH versus CISH varies among institutions. FISH uses fluorescence imaging and requires special training and setup. FISH dyes are expensive, and preparing a specimen takes a long time. On the other hand, CISH uses bright-field imaging and does not require any special setup or training. In addition, CISH dyes are cheaper, and the specimen preparation time is shorter. Thus, the CISH test is more practical than FISH. Previously, an automated HER2 grading system called Shimaris-PACQ was proposed using CISH WSIs by Yagi et al. [20]. Shimaris-PACQ used an SVM to detect singular nuclei, and the system was considered the state of the art for automatic HER2 quantification. However, it failed to detect singular nuclei on some occasions, which led to inaccurate results. Therefore, in this paper we propose a robust nuclei detection method using deep learning for reliable automatic HER2 grading using CISH.

2. Literature Review

Cell or nuclei-based assessment is a widely used technique in biomedical image analysis for a variety of purposes, including determining cancer grade, counting bio-marker signals inside a nucleus, distinguishing cancerous nuclei from non-cancerous nuclei, characterizing nuclei and assessing tumor cellularity [19,20,21,22,23,24,25,26,27,28,29,30,31,32,33,34,35,36,37,38]. These assessment methods can be divided into morphological and molecular image analysis. Hematoxylin & Eosin (H&E) is the most commonly used staining method for morphological analysis of features such as nuclei shape, size and distortion. Molecular analysis is used to detect and quantify molecules that are not visible in H&E specimens; popular techniques relevant to this research include CISH and FISH. As a result, we concentrated primarily on the nuclei segmentation methods developed for CISH and FISH specimens. Several methods have been proposed for segmenting nuclei from FISH slides [19,21,37,38]. In contrast, only a limited number of methods are available for segmenting nuclei from CISH slides [20].
One of the major challenges in analyzing histopathology specimens such as H&E, CISH and FISH is that they preserve the original tissue structure, and the nuclei are often part of these structures. The presence of different tissue structures, varied staining and nuclei overlap make nuclei detection challenging. Therefore, a method optimized for one stain and specimen does not work as well for another. Furthermore, because the methods are not applied to the same dataset, comparing their performance is difficult. The majority of nuclei detection methods developed for H&E or IHC slides do not work well for CISH and FISH [22,32,33,35]. Moreover, the nuclei detection results produced by these methods are not suitable for HER2 quantification, because detecting nuclei is not enough; only singular nuclei may be included in the quantification. Furthermore, a method developed for FISH specimens does not generalize well to CISH specimens. Therefore, it is necessary to develop a nuclei detection method for HER2 quantification using CISH. Yagi et al. proposed a method for detecting singular nuclei from CISH slides, but it failed in some cases when the stain condition changed and the slides were scanned with a different scanner [20]. A commercial application was developed by 3dhistech, but it produced a high number of false positives and included non-singular nuclei in the quantification. This underscores the importance of a robust singular nuclei segmentation method for CISH slides to perform automatic HER2 quantification.
The approaches used for nuclei segmentation can be divided into three groups: (1) using image analysis tools such as ImageJ, CellProfiler, 3dhistech and CaseViewer; (2) using traditional machine learning such as SVM and Random Forest; and (3) using deep learning such as U-net and other pixel-wise classification methods. Among the three, deep learning has achieved higher accuracy and reliability, with U-net-based methods leading the way. U-net-based pixel-wise segmentation has been found to be the most effective, fast and state of the art [21]. This motivated us to use a U-net-based method for singular nuclei segmentation from CISH slides. For this purpose, we modified the original U-net model to differentiate the boundary pixels inside the nuclei from pixels outside the nuclei. To segment singular nuclei, the method was trained on an expert’s manual annotations. Moreover, expert pathologists evaluated the proposed method, and we compared the results to the SVM-based method. The proposed method outperformed the SVM-based method in our demonstration and was found to be robust across multiple scanners and varying stain conditions. Furthermore, the method was also found effective for FISH slides.

3. Materials and Methods

The existing state of the art in CISH-based automatic HER2 quantification failed to segment suitable nuclei in some cases. Accurate marking of the nucleus boundary is important because the system uses the boundary to count the bio-marker signals. If a signal is inside the nucleus or lies entirely on the boundary, it is counted for that nucleus; if it only partially overlaps the boundary, it is excluded. The previous SVM-based nucleus detection method used color deconvolution to separate the nuclei-dye channel, from which the nuclei were detected. This step is very useful when there is cross-talk among dyes in an image such as CISH. However, it used intensity values to distinguish noise pixels from nuclear pixels and to mark the boundary, which is not effective when the staining condition of the specimen varies. Moreover, this method was highly parameterized, and its performance depended on the careful selection of parameters. To develop a more robust nucleus detection method, we relied on deep learning, which allows the extraction and selection of non-handcrafted features by a convolutional neural network (CNN).
Algorithm 1 explains the proposed nuclei segmentation approach, which begins by assessing the quality of the CISH WSI. Automated nuclei detection fails if the quality of the image is not satisfactory, as shown in Figure 4. Figure 5 shows how image quality affects nuclei segmentation. Therefore, before segmenting nuclei, we evaluated the WSI’s quality using the referenceless method proposed by Yamaguchi et al. [39,40].
If the quality of the WSI is satisfactory, only then is it used for nuclei detection. After the quality check, the color of the input image is corrected by comparing it to an ideally stained image. Then, the nucleus-dye channel of the image is obtained using color deconvolution. The resulting single-channel grayscale image is used by the U-net to segment singular nuclei. Each step of the nuclei segmentation method is explained in detail below.
Algorithm 1: Singular nuclei segmentation method

Initialize W_th, Q_th, α, β, γ, M, θ, P_Nuclei^Ref, P_CEP17^Ref, P_HER2^Ref

procedure SingularNucleiSegmentation(I_RGB, W_th, Q_th)
    while I_RGB != NIL do
        W_i = WhiteCheck(I_RGB)
        if W_i < W_th then
            Q_i = QualityCheck(I_RGB, α, β, γ)
            if Q_i ≤ Q_th then
                I_CC = ColorCorrection(I_RGB, P_Nuclei^Ref, P_CEP17^Ref, P_HER2^Ref)
                I_CDN = ColorDeconvolution(I_CC, M)
                I_Out, Score_nuclei = Segmentation(I_CDN, u_θ)
            end if
        end if
    end while
    return I_Out, Score_nuclei
end procedure

procedure WhiteCheck(I_RGB)
    I_gray = 0.299 × I_R + 0.587 × I_G + 0.114 × I_B
    while Pixel of I_gray != NIL do
        if Pixel ≥ 200 then
            W_count++
        end if
    end while
    W_pixels = W_count / Pixels(I_gray)
    return W_pixels
end procedure

procedure QualityCheck(I_RGB, α, β, γ)
    Q = α + β × Blurriness + γ × Noise
    Blurriness and Noise are calculated using Equations (1) and (2)
    return Q
end procedure

procedure ColorCorrection(I_RGB, P_Nuclei^Ref, P_CEP17^Ref, P_HER2^Ref)
    while Pixel of I_RGB != NIL do
        For each pixel of I_RGB, estimate I_R^C, I_G^C, I_B^C using Equations (4) and (5)
    end while
    Construct I_CC from I_R^C, I_G^C, I_B^C
    return I_CC
end procedure

procedure ColorDeconvolution(I_CC, M)
    Camera response I_CC^k is estimated using Equation (7)
    a_Nuclei = I_CC^k · M_Nuclei ; k = R, G, B
    return I_CDN ← a_Nuclei
end procedure

procedure Segmentation(I_CDN, u_θ)
    I_Seg = u_θ(I_CDN)
    I_Out, Score_nuclei = PostProcess(I_Seg)
    return I_Out, Score_nuclei
end procedure

procedure PostProcess(I_Seg)
    I_Seg = Apply morphological opening on I_Seg
    Count nuclei in I_Seg
    while Nuclei != NIL do
        Calculate Circularity and Area of each Nucleus using Equation (9)
        if Circularity > 0.80 && (Area > 500 && Area < 5000) then
            Score_nuclei = (Circularity + Curvature) / 2
            I_PP = I_PP & Nucleus
        end if
    end while
    return I_PP, Score_nuclei
end procedure

3.1. Dataset

In this experiment, we used 32 randomly selected breast cancer CISH WSI specimens. The CISH WSIs were scanned using a 3dhistech WSI scanner with a 40× objective lens (NA 0.95), providing an image resolution of 0.13 µm/pixel. The specimens were de-identified and did not contain any patient details. However, for confidentiality reasons, the dataset cannot yet be made public. We selected and exported images from the WSIs using 3dhistech CaseViewer. For training the proposed U-net model, we used a set of 35 images exported from 22 CISH WSIs. Another set of 15 images exported from 10 CISH WSIs was used for testing the model. The test dataset was unseen during training and included the cases for which the previously proposed SVM-based method failed to detect sufficient singular nuclei.

3.2. Image Quality Assessment

Image quality evaluation methods can be broadly categorized into three groups: (1) full-reference assessment (FR-IQA), (2) reduced-reference assessment (RR-IQA) and (3) no-reference or referenceless assessment (NR-IQA). A full-reference method evaluates quality by comparing the image with a reference, considered to be the ideal image. Reduced-reference assessment evaluates the perceptual quality of an image through partial information about the corresponding reference image. The goal of the no-reference approach is to estimate the perceptual image quality in accordance with subjective evaluations without using any reference. This approach is suitable when it is difficult to obtain an ideal reference image, as in our case.
In our work, we used the no-reference quality evaluation method proposed by Yamaguchi et al. [40] to evaluate the quality of a WSI for automated image analysis and diagnosis. This method first estimates the number of white pixels in the input image, I_RGB. The image is converted to grayscale using weights that reflect human sensitivity to the red (I_R), green (I_G) and blue (I_B) components. A pixel with a grayscale intensity higher than 200 is considered white. If the fraction of white pixels, W_pixels, exceeds a threshold W_th, say 50%, the image is considered useless for analysis and rejected for nucleus segmentation, as it does not contain enough tissue. Otherwise, the quality of the image is estimated based on its blurriness and noise. If these indices are high, the image is considered poor quality; it is rejected for nuclei segmentation when the quality degradation index exceeds the selected threshold Q_th.
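The white-pixel pre-check described above can be sketched in a few lines of plain Python. This is an illustrative reading of the text, not the authors' implementation: `image` is assumed to be a flat list of (R, G, B) tuples, and the 50% default for `w_th` follows the example threshold in the text.

```python
def white_pixel_fraction(image):
    """Fraction of pixels whose grayscale intensity is >= 200 (considered white)."""
    white = 0
    for r, g, b in image:
        # luminance weights for human color sensitivity, as in the text
        gray = 0.299 * r + 0.587 * g + 0.114 * b
        if gray >= 200:
            white += 1
    return white / len(image)

def is_useful_tile(image, w_th=0.5):
    """Reject a tile as mostly glass if the white fraction reaches W_th (e.g. 50%)."""
    return white_pixel_fraction(image) < w_th
```

In practice this check is cheap enough to run on every exported tile before the more expensive quality, color and segmentation steps.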
An edge is defined by its gradient being higher than a pre-defined threshold. The edge width is obtained by measuring the distance between the local maximum and local minimum around the edge pixels. The total width of all edges divided by the number of edges gives the blurriness index, as shown in Equation (1). A blurry image has small gradients at its edges, resulting in larger widths; thus, a large average width indicates a blurry image, in contrast to the smaller widths of a sharp one.
Blurriness = (1/E) · Σ_{i=1}^{E} w(i)    (1)
where E is the total number of edges, and w(i) is the width of edge i. A pixel is considered noise if its value is high and independent of its surrounding pixels. First, high-intensity pixels, which are either noise or edge pixels, are detected using an unsharp mask. Then, the minimum difference between the center pixel and its surrounding pixels in a 3 × 3 window is calculated at these pixels to remove the edge pixels. Finally, the average of the squared minimum differences is calculated to derive the image noise; a higher value indicates more noise.
Noise = (1/N) · Σ_{i=1}^{N} [d_min(i)]²    (2)
where N is the total number of pixels, and d_min(i) is the minimum difference for pixel i. Finally, the quality degradation index was estimated using linear regression, in which blurriness and noise were used as the predictors, and the coefficients of the predictors were derived by training the regression model given in Equation (3). The mean square error (MSE) between the original images and their digitally degraded versions was used as the quality degradation target to train the model. In our experiment, we found a linear relationship between the MSE and the quality degradation indices.
Quality degradation index = α + β · Blurriness + γ · Noise    (3)
Here, α is the intercept, and β and γ are the coefficients of the predictors.
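The noise estimate of Equation (2) and the linear model of Equation (3) can be sketched as follows. This is a simplified reading: the paper detects high-intensity candidates with an unsharp mask and derives α, β, γ by regression, whereas here a plain intensity threshold (`high`) and placeholder coefficients are assumed for illustration, and the average is taken over the candidate pixels rather than all N pixels.

```python
def noise_index(gray, width, height, high=200):
    """Mean squared minimum 3x3 difference d_min over high-intensity pixels."""
    diffs = []
    for y in range(1, height - 1):
        for x in range(1, width - 1):
            c = gray[y][x]
            if c < high:  # simplified stand-in for the unsharp-mask candidate test
                continue
            # edge pixels have a similar neighbor along the edge (small d_min);
            # isolated noise pixels differ from all 8 neighbors (large d_min)
            d_min = min(abs(c - gray[y + dy][x + dx])
                        for dy in (-1, 0, 1) for dx in (-1, 0, 1)
                        if (dy, dx) != (0, 0))
            diffs.append(d_min)
    if not diffs:
        return 0.0
    return sum(d * d for d in diffs) / len(diffs)

def quality_degradation_index(blurriness, noise, alpha=0.1, beta=0.5, gamma=0.4):
    """Equation (3); the coefficient values here are illustrative, not trained."""
    return alpha + beta * blurriness + gamma * noise
```

A tile whose index exceeds Q_th would be rejected before segmentation, mirroring the QualityCheck step of Algorithm 1.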

3.3. Color Correction

Color correction is a typical step in pathological image analysis to handle the color variations that may be caused by a variety of factors, including staining conditions, WSI scanner settings and the WSI viewer. The majority of color correction techniques rely on data from an external reference, considered to be an ideally prepared specimen. The proposed method modifies the color distribution of the input image using a reference CISH WSI, following the reference-based color correction method of Murakami et al. [41]. In this model, the values of a pixel are derived by multiplying the primary color vectors by weighting coefficients. Figure 6 shows the color distribution of a CISH image.
Thus, the color of a pixel as a CISH image can be represented using the following model:
[R G B]^T = [P_Nuclei  P_CEP17  P_HER2] [W_1 W_2 W_3]^T    (4)
where R, G, B are the red, green and blue values of a pixel in the RGB channels; P_Nuclei, P_CEP17, P_HER2 are the primary color vectors; and W_1, W_2, W_3 are the weighting coefficients. The primary vectors are derived from the image. For example, the primary vector P_Nuclei is derived by averaging the red, green and blue values over the nuclei-only areas. Similarly, the P_CEP17 and P_HER2 vectors are derived from the CEP17-only and HER2-only areas. The proposed method uses the model in Equation (4) to correct the color of an input image, as given in Equation (5):
[R^C G^C B^C]^T = [P_Nuclei^Ref  P_CEP17^Ref  P_HER2^Ref] [W_1 W_2 W_3]^T    (5)
where R^C, G^C, B^C are the color-corrected values of the pixel; P_Nuclei^Ref, P_CEP17^Ref, P_HER2^Ref are the reference color vectors, derived from the nuclei-only, CEP17-only and HER2-only areas of the reference image; and W_1, W_2, W_3 are the weights. The values of the weighting coefficients are derived by inverting Equation (4). The color-corrected image is then passed to color deconvolution, where the nuclei-dye channel is separated for nuclei segmentation.
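Equations (4) and (5) amount to recovering the weights with the inverse of the input image's 3 × 3 primary matrix and recombining them with the reference primaries. A minimal sketch, with illustrative matrices (in the paper, both primary matrices are measured from nuclei-, CEP17- and HER2-only areas):

```python
def invert3(m):
    """Inverse of a 3x3 matrix via the adjugate (no external libraries)."""
    a, b, c = m[0]; d, e, f = m[1]; g, h, i = m[2]
    det = a * (e * i - f * h) - b * (d * i - f * g) + c * (d * h - e * g)
    adj = [[e * i - f * h, c * h - b * i, b * f - c * e],
           [f * g - d * i, a * i - c * g, c * d - a * f],
           [d * h - e * g, b * g - a * h, a * e - b * d]]
    return [[x / det for x in row] for row in adj]

def matvec(m, v):
    return [sum(m[r][k] * v[k] for k in range(3)) for r in range(3)]

def correct_pixel(rgb, primaries, primaries_ref):
    """W = P^-1 [R G B]^T (Eq. 4), then corrected pixel = P_Ref W (Eq. 5).

    The columns of `primaries` are P_Nuclei, P_CEP17, P_HER2 of the input
    image; `primaries_ref` holds the reference vectors.
    """
    w = matvec(invert3(primaries), rgb)
    return matvec(primaries_ref, w)
```

When the input and reference primaries coincide, the pixel is returned unchanged, which is a useful sanity check for an implementation.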

3.4. Color Deconvolution

Color deconvolution is useful for separating the dye contributions when the cross-talk of dyes in the specimen is significant. In the case of CISH, three different dyes are used: blue for highlighting the nuclei, magenta for CEP17 and black for HER2. (The cross-talk of dyes is not significant in FISH images.) Based on the Beer–Lambert law, the proposed method applies color deconvolution to the CISH image to separate the nuclear dye channel, which is then used to segment the nuclei with the U-net. The Beer–Lambert law states a linear relationship between the absorbance and the concentration of an absorbing species. For an imaging device, it can be represented using the following model:
g = M a ;  a = g M^(-1)    (6)
where g is the camera response; M is the color matrix; and a is the dye contribution. Using this model, we derived the contribution of each dye. Now, the camera response g can be derived from the input image as:
g_k = log(I_o^k / I_k) ;  k = R, G, B    (7)
where g = (g_R g_G g_B)^T is the optical density vector; I = (I_R I_G I_B)^T is the intensity of the R, G, B components of each pixel; and I_o = (I_o^R I_o^G I_o^B)^T is the average intensity of the glass pixels. The color matrix is composed of the optical density vectors of the specific dye colors. Therefore, M was derived from the average R, G and B values of the nuclei-only, CEP17-only and HER2-only areas as
M = [ Avg_R^Nuclei  Avg_G^Nuclei  Avg_B^Nuclei ;
      Avg_R^CEP17   Avg_G^CEP17   Avg_B^CEP17 ;
      Avg_R^HER2    Avg_G^HER2    Avg_B^HER2 ]    (8)
Here, each element of M represents an optical density, derived by dividing by the glass intensity and then taking the logarithm. The nuclei, CEP17 and HER2 color responses are denoted as M_Nuclei, M_CEP17 and M_HER2. Thus, the model in Equation (6) yields three stain-separated grayscale images (a_Nuclei, a_CEP17, a_HER2), corresponding to the nuclei, CEP17 and HER2 dyes. The proposed method feeds the nuclei channel to the U-net.

3.5. U-Net for Nuclei Segmentation

The proposed method uses a U-net [28] network to detect the untruncated, non-overlapping singular nuclei from a grayscale image. Since the conventional U-net performs semantic segmentation, multiple touching nuclei are sometimes merged in the segmentation result and are difficult to separate. To avoid this problem, following an approach similar to instance segmentation, we trained the U-net with three classes: background, boundary and nucleus interior [32]. Grayscale images obtained by applying color deconvolution to the CISH images were used to train the network. The output of the nuclei detection model has three channels, each with the same height and width as the input image; their pixel values represent the probability of each pixel being background, boundary or nucleus interior. A pixel belongs to the boundary class if it is on, or within 2 pixels inside, an annotated boundary. We trained the U-net-based segmentation model u_θ : I_CDN → S, such that the segmentation of the nuclei-dye image I_CDN is obtained as I_Seg = u_θ(I_CDN), where u_θ is a non-linear function, θ is a vector of parameters, and S is the set of segments. The parameter vector θ is derived from training such that the segmentation error of u_θ(I_CDN) is minimal. If L is a segment of I_CDN, then L is represented as L = ϕ I_Seg, where ϕ is the labeling operator and I_Seg is a segmentation approximation of I_CDN.
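The three-class target maps described above could be derived from a binary annotation mask roughly as follows. The 2-pixel boundary band follows the text; the square (Chebyshev) neighborhood and the treatment of the image border as background are illustrative assumptions, not details from the paper.

```python
BACKGROUND, BOUNDARY, INTERIOR = 0, 1, 2

def three_class_labels(mask):
    """mask: 2-D list of 0/1 nucleus pixels; returns per-pixel class labels."""
    h, w = len(mask), len(mask[0])
    labels = [[BACKGROUND] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            if not mask[y][x]:
                continue  # background stays class 0
            near_bg = False
            for dy in range(-2, 3):
                for dx in range(-2, 3):
                    ny, nx = y + dy, x + dx
                    # outside the image, or a background pixel within 2 px
                    if not (0 <= ny < h and 0 <= nx < w) or not mask[ny][nx]:
                        near_bg = True
            labels[y][x] = BOUNDARY if near_bg else INTERIOR
    return labels
```

Separating the boundary class in this way is what lets touching nuclei appear as distinct interior regions in the network's prediction.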
We used 35 color-deconvoluted nuclei-dye images (384 × 768 × 1) for training, collected from the CISH WSIs of breast cancer patients. Each image contained approximately 100 nuclei, and a total of roughly 3500 nuclei were annotated manually for training. Figure 7 shows the pathologist’s annotation, from which the nuclei boundary (NB) label was produced and served as the ground truth. Data were augmented by applying vertical flips, horizontal flips and random zooming (×1.0–×1.1) during training. The network was trained with the Adam optimizer, and the loss function was categorical cross-entropy. Figure 8 illustrates the training process. The epoch count was 30 and the learning rate was 0.0001. SoftMax was used as the output function, and we also used batch normalization. The U-net consisted of paired encoding and decoding layers. Another set of 15 images was prepared for testing the model, including cases where the SVM model failed. Figure 9 illustrates the process of predicting nuclei for a given input image and the post-processing of the prediction. In post-processing, the inside-class map is transformed into a binary map; the inside region is marked in blue in the prediction rectangle in Figure 9. To recover the nucleus shape, we simply applied a dilation operation to each connected component at the end of segmentation.
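The core of the post-processing, splitting the binarized inside-class map into one region per candidate nucleus, can be sketched with a plain flood-fill labeling. The 0.5 threshold and 4-connectivity are illustrative assumptions; the paper additionally applies morphological opening before, and dilation after, this step.

```python
from collections import deque

def connected_components(prob, threshold=0.5):
    """Label connected regions where prob >= threshold; returns (labels, count)."""
    h, w = len(prob), len(prob[0])
    labels = [[0] * w for _ in range(h)]
    count = 0
    for y in range(h):
        for x in range(w):
            if prob[y][x] >= threshold and labels[y][x] == 0:
                count += 1  # new candidate nucleus found
                queue = deque([(y, x)])
                labels[y][x] = count
                while queue:  # breadth-first flood fill of the region
                    cy, cx = queue.popleft()
                    for ny, nx in ((cy - 1, cx), (cy + 1, cx),
                                   (cy, cx - 1), (cy, cx + 1)):
                        if (0 <= ny < h and 0 <= nx < w
                                and prob[ny][nx] >= threshold
                                and labels[ny][nx] == 0):
                            labels[ny][nx] = count
                            queue.append((ny, nx))
    return labels, count
```

Each labeled component is then scored and filtered by the suitability criteria of Section 3.6.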

3.6. Scoring Nuclei Suitability

A non-singular nucleus tends to have low circularity compared to a singular nucleus. Furthermore, if some parts are missing, the area of the nucleus becomes very small; conversely, if multiple nuclei overlap, the combined area becomes larger than that of a singular nucleus. Therefore, the proposed method scores each segmented nucleus based on its circularity, estimated as
Circularity = 4π · Area / Perimeter²    (9)
The proposed method eliminates a nucleus if its circularity < 0.80, area < 500 or area > 5000. The remaining nuclei are each assigned a score based on their circularity.
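The suitability filter above reduces to a few lines once area and perimeter are known for each component; a minimal sketch, using the thresholds stated in the text (areas in pixels):

```python
import math

def circularity(area, perimeter):
    """Equation (9): 1.0 for a perfect circle, lower for elongated shapes."""
    return 4.0 * math.pi * area / (perimeter ** 2)

def is_suitable(area, perimeter):
    """Keep a nucleus only if circularity >= 0.80 and 500 < area < 5000."""
    return circularity(area, perimeter) >= 0.80 and 500 < area < 5000
```

For a circle, Area = πr² and Perimeter = 2πr, so the measure evaluates to exactly 1; overlapping or truncated nuclei fall below the 0.80 cutoff or outside the area band.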

4. Results

4.1. Accuracy of Proposed Nuclei Segmentation

The proposed method was demonstrated on 14 breast cancer cases scanned by two different scanners, comprising 7 CISH WSIs, for which the SVM-based method had failed, and 7 FISH WSIs. First, the performance of the proposed method was evaluated on the 7 CISH WSIs using the intersection over union (IoU) metric. IoU measures the number of pixels common to the pathologist’s annotation and the model’s prediction, divided by the total number of pixels covered by either, as defined below.
IoU = |Annotation ∩ Prediction| / |Annotation ∪ Prediction|
However, when judging the performance of nuclei detection, the IoU metric can be misleading. Such an example is illustrated in Figure 10: both segmentations result in a similar IoU, but the left one is completely useless for singular nuclei quantification. Moreover, the inclusion of a non-nuclear area or the exclusion of part of a nucleus can affect signal counting even if the difference is very small. Therefore, we relied on the pathologist’s manual evaluation to identify the false and true positives. For HER2 quantification, only a limited number of singular nuclei is needed; thus, the number of false negatives was not a major issue as long as the method detected enough singular nuclei. Two pathologists marked the true positives and false positives for the 7 CISH cases where the SVM-based method failed. We then compared the SVM-based and the proposed U-net nuclei detection, as provided in Table 2, which shows that the proposed method increased true positive detection and reduced false detection significantly. The true positive rate and false positive rate were 0.96 and 0.03 for the U-net, versus 0.60 and 0.39 for the SVM, respectively. Figure 11 shows nuclei detection by the SVM- and U-net-based methods for the same image.
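The IoU metric used in the evaluation above can be computed directly from two binary masks; a small sketch with masks as 2-D lists of 0/1 values:

```python
def iou(annotation, prediction):
    """Intersection over union of two same-sized binary masks."""
    inter = union = 0
    for row_a, row_p in zip(annotation, prediction):
        for a, p in zip(row_a, row_p):
            inter += a and p   # 1 only where both masks are 1
            union += a or p    # 1 where either mask is 1
    return inter / union if union else 0.0
```

Identical masks give 1.0 and disjoint masks give 0.0, which matches the caveat in the text: a high IoU alone says nothing about whether the detected regions are singular nuclei.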

4.2. Application for Automatic HER2 Quantification

We also applied the proposed method to identify singular nuclei using FISH, but with the intensity image instead of the dye image. To evaluate the efficacy of the proposed method for automatic HER2 quantification, we integrated it with Shimaris–PACQ. We then compared the results with the pathologist’s manual CISH and FISH counts and with automated CISH using SVM-based nuclei detection, as shown in Figures 12 and 13. From the figures, it is clear that the proposed method yielded higher concordance between automatic HER2 quantification and the pathologist’s manual quantification than the SVM-based automated quantification. In practice, manual FISH counts were considered the clinical guideline. The correlation between the pathologist’s FISH count and Shimaris–PACQ was 0.99 when used with the proposed method, versus 0.29 with the SVM-based method.
The proposed method segmented singular nuclei where the SVM-based method failed. The method was integrated with Shimaris–PACQ to verify its effectiveness for automated quantification; Shimaris–PACQ achieved higher concordance with the pathologist’s results when used with the proposed method than with the SVM. Thus, it can be stated that the proposed nuclei segmentation method enables precise and reliable automatic HER2 quantification. Moreover, this method also segmented singular nuclei from FISH and is effective for FISH-based automatic quantification, as shown in Figures 12 and 13. The proposed nuclei segmentation method is therefore robust across scanners, staining conditions and histology specimen types.

4.3. Time Requirement Analysis

We analyzed the time requirements of the proposed nuclei segmentation method to ensure its practical usability for automatic quantification, as shown in Table 3. We also estimated the time required for automatic HER2 quantification using the proposed nuclei segmentation and compared it with the time requirement of the previous system that used SVM-based nuclei detection [20]. The time was measured on a personal notebook with a 2.6 GHz Intel Core i5 processor and no external graphics card. The turnaround time of the proposed method ranged from 1.33 to 4.00 min over the 7 cases, compared to 2.90 to 7.10 min for the previous system in our experiment. Thus, it can be concluded that the proposed nuclei segmentation method saves a significant amount of time for HER2 assessment compared to the previously proposed automatic CISH–HER2 quantification.

5. Discussion

The proposed nuclei segmentation method segments singular nuclei suitable for HER2 quantification in breast cancer patients. When trained on a limited dataset, it produced very few false positives while detecting a large number of true singular nuclei, a significant improvement over previous methods. In our demonstration, it outperformed the state-of-the-art method [20]. That method, for automatic HER2 quantification using CISH, was validated by comparing its results to the pathologists’ manual FISH and CISH counts; however, when the staining condition changed or the specimen was scanned with a different scanner, it failed to segment suitable singular nuclei for some CISH cases. The proposed nuclei segmentation method is robust against stain variation and multiple scanners. Moreover, it yielded higher concordance between the automatic CISH-based HER2 quantification and the pathologists’ FISH and CISH counts. In addition, the method is applicable to HER2 quantification using FISH. A significant benefit of such a nuclei segmentation method is that it allows laboratories to choose CISH or FISH at their convenience.
The method in [20] was highly dependent on a large number of parameters, which limited its generalizability; as a result, it failed when the optimal scanner and staining conditions changed. The proposed method uses fewer parameters and proved effective for a variety of scanner and staining conditions. Furthermore, it works for FISH slides scanned by a different scanner with different settings, which confirms its generalizability. In contrast, the method in [20] does not apply to FISH slides because its parameters, particularly those of its noise removal technique, were optimized for CISH images.
The method in [30] increased the average IoU score for nuclei segmentation, but this improvement had no effect on HER2 quantification because the score was calculated over all segmented nuclei pixels, including many non-singular nuclei. Furthermore, it frequently misclassified boundary pixels, which are critical for HER2 quantification. The method in [31] achieved a good segmentation result for FISH images; however, its performance degraded when segmenting singular nuclei from CISH images, where nuclei overlap more than in FISH. Our method not only improved segmentation performance but also ensured clinical relevance by having experts evaluate its results and by combining it with the quantification method. Most nuclei detection methods previously proposed for H&E, CISH and FISH failed when image quality became poor. Adequate image quality is a prerequisite for autonomous image analysis; according to [20], automatic HER2 quantification systems fail if the image quality is poor. We therefore used a quality evaluation method to ensure that only images of sufficient quality were used for nuclei segmentation.
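For reference, the IoU metric discussed here can be computed per binary mask as follows; the toy masks are illustrative only:

```python
import numpy as np

def iou(pred, gt):
    # Intersection-over-union of two binary segmentation masks.
    pred = pred.astype(bool)
    gt = gt.astype(bool)
    inter = np.logical_and(pred, gt).sum()
    union = np.logical_or(pred, gt).sum()
    return inter / union if union else 1.0

# Toy 4x4 masks: the predicted nucleus covers 3 of the 4 ground-truth
# pixels plus 1 spurious pixel, so IoU = 3 / 5 = 0.6.
gt = np.zeros((4, 4), dtype=int)
gt[1:3, 1:3] = 1
pred = np.zeros((4, 4), dtype=int)
pred[1, 1] = pred[1, 2] = pred[2, 1] = pred[3, 3] = 1
print(iou(pred, gt))  # 0.6
```

Note that averaging this score over all nuclei pixels, as in [30], can mask errors on the singular nuclei that actually drive HER2 quantification.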
In this paper, we proposed a more robust nuclei detection method based on deep learning. A major limitation of applying deep learning to medical images is obtaining training data, which is time-consuming, costly and laborious. The proposed U-net-based nuclei detection method worked well and demonstrated high reliability when trained on a limited dataset, an important feature for critical clinical applications where large image samples are difficult to obtain. More training data would improve accuracy, as more nuclei features could be learned, but the current performance of the model was sufficient to quantify the limited number of nuclei required for HER2 assessment.
Another notable aspect of the proposed method is that it was tested and found effective for multi-modal images; its efficacy was demonstrated through integration with both CISH- and FISH-based HER2 quantification. Time is another advantage: the nuclei segmentation was demonstrated within HER2 quantification on a personal notebook without a GPU, taking at most 4 min per case. Since it is impractical to allocate advanced computing resources in hospitals, this practical usability and modest time requirement make the method efficient for automatic HER2 quantification.
By optimizing some parameters, this nuclei segmentation method can be adapted to other nuclei-based assessment applications, such as estimating the tumor cellularity of breast cancer patients, and to other imaging modalities, such as H&E.

6. Conclusions

In this paper, we presented a U-net-based singular-nuclei segmentation method for automatic HER2 quantification using CISH, and further showed that the method is effective for FISH. The method first assesses image quality, then adjusts the image’s color to handle color variation, and then separates the nuclei dye channel using color deconvolution; these three procedures ensure robustness against stain and scanner variation. Finally, the singular nuclei are segmented by the U-net, which identifies the nucleus boundary concurrently and performs well when trained with a small number of images.
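The dye-separation step can be sketched with Ruifrok-Johnston optical-density color deconvolution; the stain vectors below are generic hematoxylin/DAB values for illustration, not the paper’s calibrated matrix:

```python
import numpy as np

# Generic stain optical-density vectors (rows), assumed for illustration:
STAINS = np.array([
    [0.650, 0.704, 0.286],   # hematoxylin (nuclei counterstain)
    [0.268, 0.570, 0.776],   # DAB (signal chromogen)
    [0.754, 0.077, 0.652],   # residual channel
])
STAINS /= np.linalg.norm(STAINS, axis=1, keepdims=True)
DECONV = np.linalg.inv(STAINS)

def nuclei_dye_channel(rgb):
    # rgb: float image with values in (0, 1]. Convert to optical
    # density, project onto the stain basis; channel 0 is the
    # nuclei dye concentration used for segmentation.
    od = -np.log10(np.clip(rgb, 1e-6, 1.0))
    concentrations = od.reshape(-1, 3) @ DECONV
    return concentrations[:, 0].reshape(rgb.shape[:2])

# Synthetic 8x8 RGB patch standing in for a CISH tile:
rgb = np.random.default_rng(0).random((8, 8, 3)) * 0.9 + 0.1
dye = nuclei_dye_channel(rgb)
```

The resulting single-channel dye image is what the U-net would consume; for FISH, the intensity image is used instead, as noted in Section 4.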

Author Contributions

Formal analysis, M.S.H. (Md Shakhawat Hossain); Funding acquisition, M.M.M.S. and M.F.U.; Methodology, M.S.H. (Md Shakhawat Hossain); Project administration, M.M.M.S. and M.F.U.; Supervision, M.S.H. (Md Shakhawat Hossain); Validation, K.F. and M.S.H. (Md Sakir Hossain); Visualization, K.F.; Writing—original draft, M.S.H. (Md Shakhawat Hossain); Writing—review & editing, M.S.H. (Md Shakhawat Hossain). All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

Not applicable.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. American Cancer Society. Breast Cancer Facts & Figures 2019–2020; American Cancer Society: Atlanta, GA, USA, 2019; pp. 1–44.
  2. Yaziji, H.; Goldstein, L.C.; Barry, T.S.; Werling, R.; Hwang, H.; Ellis, G.K.; Gralow, J.R.; Livingston, R.B.; Gown, A.M. HER-2 testing in breast cancer using parallel tissue-based methods. JAMA 2004, 291, 1972–1977.
  3. English, D.P.; Roque, D.M.; Santin, A.D. HER2 expression beyond breast cancer: Therapeutic implications for gynecologic malignancies. Mol. Diagn. Ther. 2013, 17, 85–99.
  4. Seidman, A.D.; Fornier, M.N.; Esteva, F.J.; Tan, L.; Kaptain, S.; Bach, A.; Panageas, K.S.; Arroyo, C.; Valero, V.; Currie, V.; et al. Weekly trastuzumab and paclitaxel therapy for metastatic breast cancer with analysis of efficacy by HER2 immunophenotype and gene amplification. J. Clin. Oncol. 2001, 19, 2587–2595.
  5. Vogel, C.L.; Cobleigh, M.A.; Tripathy, D.; Gutheil, J.C.; Harris, L.N.; Fehrenbacher, L.; Slamon, D.J.; Murphy, M.; Novotny, W.F.; Burchmore, M.; et al. Efficacy and safety of trastuzumab as a single agent in first-line treatment of HER2-overexpressing metastatic breast cancer. J. Clin. Oncol. 2002, 20, 719–726.
  6. Gianni, L.; Dafni, U.; Gelber, R.D.; Azambuja, E.; Muehlbauer, S.; Goldhirsch, A.; Untch, M.; Smith, I.; Baselga, J.; Jackisch, C.; et al. Treatment with trastuzumab for 1 year after adjuvant chemotherapy in patients with HER2-positive early breast cancer: A 4-year follow-up of a randomised controlled trial. Lancet Oncol. 2011, 12, 236–244.
  7. Piccart-Gebhart, M.J.; Procter, M.; Leyland-Jones, B.; Goldhirsch, A.; Untch, M.; Smith, I.; Gianni, L.; Baselga, J.; Bell, R.; Jackisch, C.; et al. Trastuzumab after adjuvant chemotherapy in HER2-positive breast cancer. N. Engl. J. Med. 2005, 353, 1659–1672.
  8. Ryan, Q.; Ibrahim, A.; Cohen, M.H.; Johnson, J.; Ko, C.W.; Sridhara, R.; Justice, R.; Pazdur, R. FDA drug approval summary: Lapatinib in combination with capecitabine for previously treated metastatic breast cancer that overexpresses HER-2. Oncologist 2008, 13, 1114–1119.
  9. Tan-Chiu, E.; Yothers, G.; Romond, E.; Geyer, C.E., Jr.; Ewer, M.; Keefe, D.; Shannon, R.P.; Swain, S.M.; Brown, A.; Fehrenbacher, L.; et al. Assessment of cardiac dysfunction in a randomized trial comparing doxorubicin and cyclophosphamide followed by paclitaxel, with or without trastuzumab as adjuvant therapy in node-positive, human epidermal growth factor receptor 2–overexpressing breast cancer: NSABP B-31. J. Clin. Oncol. 2005, 23, 7811–7819.
  10. Kurian, A.W.; Thompson, R.N.; Gaw, A.F.; Arai, S.; Ortiz, R.; Garber, A.M. A cost-effectiveness analysis of adjuvant trastuzumab regimens in early HER2/neu–positive breast cancer. J. Clin. Oncol. 2007, 25, 634–641.
  11. Liberato, N.L.; Marchetti, M.; Barosi, G. Cost effectiveness of adjuvant trastuzumab in human epidermal growth factor receptor 2–positive breast cancer. J. Clin. Oncol. 2007, 25, 625–633.
  12. Wolff, A.C.; Hammond, M.E.H.; Allison, K.H.; Harvey, B.E.; Mangu, P.B.; Bartlett, J.M.; Bilous, M.; Ellis, I.O.; Fitzgibbons, P.; Hanna, W.; et al. Human epidermal growth factor receptor 2 testing in breast cancer: American Society of Clinical Oncology/College of American Pathologists clinical practice guideline focused update. Arch. Pathol. Lab. Med. 2018, 142, 1364–1382.
  13. Furrer, D.; Sanschagrin, F.; Jacob, S.; Diorio, C. Advantages and disadvantages of technologies for HER2 testing in breast cancer specimens. Am. J. Clin. Pathol. 2015, 144, 686–703.
  14. Wolff, A.C.; Hammond, M.E.H.; Schwartz, J.N.; Hagerty, K.L.; Allred, D.C.; Cote, R.J.; Dowsett, M.; Fitzgibbons, P.L.; Hanna, W.M.; Langer, A.; et al. American Society of Clinical Oncology/College of American Pathologists guideline recommendations for human epidermal growth factor receptor 2 testing in breast cancer. Arch. Pathol. Lab. Med. 2007, 131, 18–43.
  15. Wolff, A.C.; Hammond, M.E.H.; Hicks, D.G.; Dowsett, M.; McShane, L.M.; Allison, K.H.; Allred, D.C.; Bartlett, J.M.; Bilous, M.; Fitzgibbons, P.; et al. Recommendations for human epidermal growth factor receptor 2 testing in breast cancer: American Society of Clinical Oncology/College of American Pathologists clinical practice guideline update. Arch. Pathol. Lab. Med. 2014, 138, 241–256.
  16. Konsti, J.; Lundin, J.; Jumppanen, M.; Lundin, M.; Viitanen, A.; Isola, J. A public-domain image processing tool for automated quantification of fluorescence in situ hybridisation signals. J. Clin. Pathol. 2008, 61, 278–282.
  17. Furrer, D.; Jacob, S.; Caron, C.; Sanschagrin, F.; Provencher, L.; Diorio, C. Validation of a new classifier for the automated analysis of the human epidermal growth factor receptor 2 (HER2) gene amplification in breast cancer specimens. Diagn. Pathol. 2013, 8, 17.
  18. van der Logt, E.M.; Kuperus, D.A.; van Setten, J.W.; van den Heuvel, M.C.; Boers, J.E.; Schuuring, E.; Kibbelaar, R.E. Fully automated fluorescent in situ hybridization (FISH) staining and digital analysis of HER2 in breast cancer: A validation study. PLoS ONE 2015, 10, e0123201.
  19. Zakrzewski, F.; de Back, W.; Weigert, M.; Wenke, T.; Zeugner, S.; Mantey, R.; Sperling, C.; Friedrich, K.; Roeder, I.; Aust, D.; et al. Automated detection of the HER2 gene amplification status in Fluorescence in situ hybridization images for the diagnostics of cancer tissues. Sci. Rep. 2019, 9, 8231.
  20. Hossain, M.S.; Hanna, M.G.; Uraoka, N.; Nakamura, T.; Edelweiss, M.; Brogi, E.; Hameed, M.R.; Yamaguchi, M.; Ross, D.S.; Yagi, Y. Automatic quantification of HER2 gene amplification in invasive breast cancer from chromogenic in situ hybridization whole slide images. J. Med. Imaging 2019, 6, 047501.
  21. Caicedo, J.C.; Roth, J.; Goodman, A.; Becker, T.; Karhohs, K.W.; Broisin, M.; Molnar, C.; McQuin, C.; Singh, S.; Theis, F.J.; et al. Evaluation of deep learning strategies for nucleus segmentation in fluorescence images. Cytom. Part A 2019, 95, 952–965.
  22. Veta, M.; Van Diest, P.J.; Kornegoor, R.; Huisman, A.; Viergever, M.A.; Pluim, J.P. Automatic nuclei segmentation in H&E stained breast cancer histopathology images. PLoS ONE 2013, 8, e70221.
  23. Liu, Y.; Long, F. Acute lymphoblastic leukemia cells image analysis with deep bagging ensemble learning. In ISBI 2019 C-NMC Challenge: Classification in Cancer Cell Imaging; Springer: Berlin/Heidelberg, Germany, 2019; pp. 113–121.
  24. Tran, T.; Kwon, O.H.; Kwon, K.R.; Lee, S.H.; Kang, K.W. Blood cell images segmentation using deep learning semantic segmentation. In Proceedings of the 2018 IEEE International Conference on Electronics and Communication Engineering (ICECE), Xi’an, China, 10–12 December 2018; pp. 13–16.
  25. Bougen-Zhukov, N.; Loh, S.Y.; Lee, H.K.; Loo, L.H. Large-scale image-based screening and profiling of cellular phenotypes. Cytom. Part A 2017, 91, 115–125.
  26. Hernández, C.X.; Sultan, M.M.; Pande, V.S. Using deep learning for segmentation and counting within microscopy data. arXiv 2018, arXiv:1802.10548.
  27. Araújo, F.H.; Silva, R.R.; Ushizima, D.M.; Rezende, M.T.; Carneiro, C.M.; Bianchi, A.G.C.; Medeiros, F.N. Deep learning for cell image segmentation and ranking. Comput. Med. Imaging Graph. 2019, 72, 13–21.
  28. Ronneberger, O.; Fischer, P.; Brox, T. U-net: Convolutional networks for biomedical image segmentation. In Proceedings of the International Conference on Medical Image Computing and Computer-Assisted Intervention, Munich, Germany, 5–9 October 2015; Springer: Berlin/Heidelberg, Germany, 2015; pp. 234–241.
  29. Hollandi, R.; Szkalisity, A.; Toth, T.; Tasnadi, E.; Molnar, C.; Mathe, B.; Grexa, I.; Molnar, J.; Balind, A.; Gorbe, M.; et al. nucleAIzer: A parameter-free deep learning framework for nucleus segmentation using image style transfer. Cell Syst. 2020, 10, 453–458.
  30. Long, F. Microscopy cell nuclei segmentation with enhanced U-Net. BMC Bioinform. 2020, 21, 8.
  31. Fang, J.; Zhou, Q.; Wang, S. Segmentation technology of nucleus image based on U-net network. Sci. Program. 2021, 2021, 1892497.
  32. Cui, Y.; Zhang, G.; Liu, Z.; Xiong, Z.; Hu, J. A deep learning algorithm for one-step contour aware nuclei segmentation of histopathology images. Med. Biol. Eng. Comput. 2019, 57, 2027–2043.
  33. Cui, Y.; Hu, J. Self-adjusting nuclei segmentation (SANS) of Hematoxylin-Eosin stained histopathological breast cancer images. In Proceedings of the 2016 IEEE International Conference on Bioinformatics and Biomedicine (BIBM), Shenzhen, China, 15–18 December 2016; pp. 956–963.
  34. Naylor, P.; Laé, M.; Reyal, F.; Walter, T. Nuclei segmentation in histopathology images using deep neural networks. In Proceedings of the 2017 IEEE 14th International Symposium on Biomedical Imaging (ISBI 2017), Melbourne, Australia, 18–21 April 2017; pp. 933–936.
  35. Paramanandam, M.; O’Byrne, M.; Ghosh, B.; Mammen, J.J.; Manipadam, M.T.; Thamburaj, R.; Pakrashi, V. Automated segmentation of nuclei in breast cancer histopathology images. PLoS ONE 2016, 11, e0162053.
  36. Xing, F.; Xie, Y.; Yang, L. An automatic learning-based framework for robust nucleus segmentation. IEEE Trans. Med. Imaging 2015, 35, 550–566.
  37. Nandy, K.; Gudla, P.R.; Meaburn, K.J.; Misteli, T.; Lockett, S.J. Automatic nuclei segmentation and spatial FISH analysis for cancer detection. In Proceedings of the 2009 Annual International Conference of the IEEE Engineering in Medicine and Biology Society, Minneapolis, MN, USA, 3–6 September 2009; pp. 6718–6721.
  38. Paternoster, S.F.; Brockman, S.R.; McClure, R.F.; Remstein, E.D.; Kurtin, P.J.; Dewald, G.W. A new method to extract nuclei from paraffin-embedded tissue to study lymphomas using interphase fluorescence in situ hybridization. Am. J. Pathol. 2002, 160, 1967–1972.
  39. Hossain, M.S.; Nakamura, T.; Kimura, F.; Yagi, Y.; Yamaguchi, M. Practical image quality evaluation for whole slide imaging scanner. In Proceedings of the Biomedical Imaging and Sensing Conference, SPIE, Yokohama, Japan, 25–27 April 2018; Volume 10711, pp. 203–206.
  40. Shakhawat, H.M.; Nakamura, T.; Kimura, F.; Yagi, Y.; Yamaguchi, M. Automatic Quality Evaluation of Whole Slide Images for the Practical Use of Whole Slide Imaging Scanner. ITE Trans. Media Technol. Appl. 2020, 8, 252–268.
  41. Murakami, Y.; Abe, T.; Hashiguchi, A.; Yamaguchi, M.; Saito, A.; Sakamoto, M. Color correction for automatic fibrosis quantification in liver biopsy specimens. J. Pathol. Inform. 2013, 4, 36.
Figure 1. Example of normal and HER2-positive cells.
Figure 2. Examples of singular and non-singular nuclei.
Figure 3. False positives of singular nuclei which may lead to inaccurate signal counting.
Figure 4. The nuclei detection method succeeded for the original (top-left and bottom-left) images but failed for the blurry (bottom-right) and noisy (top-right) versions of the images.
Figure 5. Effect of image quality on the nuclei detection method.
Figure 6. An example model of the color distribution of a CISH WSI.
Figure 7. Examples of how the data were annotated. The top row shows the deconvoluted version of the original color images, and the bottom row shows the pathologist’s annotations for the corresponding images.
Figure 8. Overview of the training process.
Figure 9. Overview of nuclei segmentation.
Figure 10. Wrong prediction (left) and correct prediction (right).
Figure 11. The proposed method outperformed the SVM-based nuclei detection. Original image (top-left), ground truth (top-right), SVM detection (bottom-left) and proposed U-net segmentation (bottom-right).
Figure 12. Evaluation of the proposed nuclei segmentation-enabled automatic HER2 quantification using both CISH and FISH with respect to the HER2/CEP17 ratio.
Figure 13. Evaluation of the proposed nuclei segmentation-enabled automatic HER2 quantification using both CISH and FISH with respect to the average HER2 copy number.
Table 1. HER2 grading based on HER2 and CEP17 counts.

HER2 Group | Criteria | HER2 Status
Group 1 | HER2/CEP17 ≥ 2 and avg. HER2 copy ≥ 4 | Positive
Group 2 | HER2/CEP17 ≥ 2 and avg. HER2 copy < 4 | Positive
Group 3 | HER2/CEP17 < 2 and avg. HER2 copy ≥ 6 | Positive
Group 4 | HER2/CEP17 < 2 and avg. HER2 copy ≥ 4 and < 6 | Equivocal
Group 5 | HER2/CEP17 < 2 and avg. HER2 copy < 4 | Negative
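The grading rules in Table 1 can be expressed as a small decision function; this is a sketch of the table’s logic, not the paper’s implementation:

```python
def her2_grade(her2_cep17_ratio, avg_her2_copy):
    # Map a case's HER2/CEP17 ratio and average HER2 copy number
    # to its group and status per Table 1.
    if her2_cep17_ratio >= 2:
        group = 1 if avg_her2_copy >= 4 else 2
        status = "Positive"
    elif avg_her2_copy >= 6:
        group, status = 3, "Positive"
    elif avg_her2_copy >= 4:
        group, status = 4, "Equivocal"
    else:
        group, status = 5, "Negative"
    return group, status

assert her2_grade(2.5, 5.0) == (1, "Positive")
assert her2_grade(1.2, 4.5) == (4, "Equivocal")
assert her2_grade(1.0, 2.0) == (5, "Negative")
```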
Table 2. Comparison of SVM- and proposed U-net-based nuclei detection results.

SI | SVM TP | SVM FP | SVM IoU | U-Net TP | U-Net FP | U-Net IoU
1 | 4 | 0 | 0.05 | 46 | 0 | 0.67
2 | 25 | 14 | 0.50 | 50 | 1 | 0.87
3 | 19 | 17 | 0.27 | 39 | 2 | 0.86
4 | 2 | 5 | 0.09 | 46 | 6 | 0.80
5 | 28 | 17 | 0.33 | 58 | 2 | 0.93
6 | 6 | 5 | 0.09 | 49 | 1 | 0.90
7 | 11 | 5 | 0.22 | 49 | 0 | 0.80
Total | 95 | 63 | 1.55 | 337 | 12 | 5.83

TP: true positives; FP: false positives.
Table 3. Time requirements for automatic HER2 quantification using the SVM-based and proposed methods.

SI | Quantified Area (µm²) | Total Quantification Time (min), SVM-Based Method | Total Quantification Time (min), Proposed Method | Proposed Nuclei Segmentation Time (min)
1 | 21.1 K | 2.91 | 1.33 | 0.66
2 | 44.8 K | 7.10 | 4.00 | 1.81
3 | 34.3 K | 5.00 | 3.60 | 1.41
4 | 23.7 K | 4.10 | 2.24 | 0.52
5 | 23.7 K | 4.70 | 2.91 | 1.65
6 | 18.4 K | 3.10 | 1.82 | 0.40
7 | 18.4 K | 3.80 | 2.59 | 1.61