A Novel Computer-Aided-Diagnosis System for Breast Ultrasound Images Based on BI-RADS Categories

Abstract: Breast ultrasound is not only one of the major modalities for breast tissue imaging, but also one of the most important methods in breast tumor screening. It is non-radiative, non-invasive, harmless, simple, and low in cost. The American College of Radiology (ACR) proposed the Breast Imaging Reporting and Data System (BI-RADS) to evaluate breast lesion severity in far finer detail than traditional diagnoses, according to five descriptor categories of mass composition: shape, orientation, margin, echo pattern, and posterior features. However, problems such as intensity differences and varying resolutions in image acquisition among different types of ultrasound imaging modalities mean that clinicians cannot always identify BI-RADS categories or disease severities accurately. To this end, this article adopted three different brands of ultrasound scanners to acquire breast images as our experimental samples. The breast lesion was detected on the original image via preprocessing, image segmentation, etc. The severity of the breast tumor was then evaluated from the features of the breast lesion by our proposed classifiers according to the BI-RADS standard, rather than by the traditional assessment of severity as merely benign or malignant. In this work, we focused mainly on BI-RADS categories 2–5 after the segmentation stage, in line with clinical practice. Moreover, several features related to lesion severity based on the selected BI-RADS categories were introduced into three machine learning classifiers, namely a Support Vector Machine (SVM), Random Forest (RF), and Convolutional Neural Network (CNN), combined with feature selection, to develop a multi-class assessment of breast tumor severity based on BI-RADS. Experimental results show that the proposed CAD system based on BI-RADS obtains identification accuracies with SVM, RF, and CNN of 80.00%, 77.78%, and 85.42%, respectively.
We also validated the performance and adaptability of the classification using different ultrasound scanners. Results further indicate that the F-score evaluations based on CNN exceed 75% (i.e., prominent adaptability) when samples of the various BI-RADS categories were tested.


Introduction
The commonly used modalities in the diagnosis of breast carcinoma include mammography, Breast Ultrasound (BUS), Magnetic Resonance Imaging (MRI), and Computed Tomography (CT). The major advantages of BUS are that it is non-radiative, non-invasive, simple, and low in cost for screening. This screening tool is suitable for women of any age, especially young women under the age of 35. Medical ultrasound transmits high-frequency sound waves into the breast via a transducer. Once a wave hits a tissue or structure, it bounces back, and the transducer receives the changes in the waves to create a black-and-white image of breast tissues and structures called a sonogram. Various tissues can be visualized in a sonogram under different volumes, pitches, and frequencies. This information can help a physician confirm whether a breast lesion is benign or malignant. BUS offers a panoramic view of the different tissues of the breast, which is composed of skin, subcutaneous tissue, fat, muscle, glands, the chest wall, and other tissues. Above all, BUS is real-time, so results are obtained quickly. Therefore, it has become one of the major pieces of equipment for breast lesion detection, covering breast cancers, breast cysts, benign tumors, fibrocystic breasts, etc. Traditionally, the severity of a patient's lesion was assessed as benign or malignant by a clinician's subjective and qualitative judgment. Such assessments cannot effectively guide diagnosis or treatment for patients, and may lead to unnecessary surgery or pathological biopsy for a patient with a suspicious breast carcinoma. To address this problem, the American College of Radiology (ACR) proposed a gold standard in 1993, namely the Breast Imaging Reporting and Data System (BI-RADS). The initial edition of BI-RADS was created in 1993, and the newest edition is the 5th edition [1].
BI-RADS provides a uniform standard for breast carcinoma severity and sorts the results into categories numbered 0 through 6 according to the degree of severity, as described in Table 1. It also provides mass descriptors examined with ultrasound, including shape, orientation, margin, echo pattern, and posterior features, as listed in Table 2. Clinically, radiologists analyze and identify the characteristics of tumor composition intuitively, and clinicians assess the severity of breast tumors as benign or malignant according to this standard. Furthermore, this reduces the differences between physicians' interpretations of breast cancer so that patients can obtain appropriate treatment. Table 1. Concordance between BI-RADS assessment categories and management recommendations [1]. In Table 2, the shape category represents the shape of the mass: round, oval, or irregular. Orientation refers to the lesion's long axis relative to the skin. The descriptors for the mass margin are circumscribed or non-circumscribed, where non-circumscribed margins include microlobulated, indistinct, angular, and spiculated. The descriptors for the mass echo pattern rely on tissue having various reference echotextures. The posterior feature refers to the attenuation characteristics of a mass relative to its acoustic transmission. In other words, BI-RADS provides a uniform standard for the interpretation of a breast image regarding the breast lesion: whether a tumor exists, the shape regularity of the mass, the degree of distinction of a non-circumscribed margin, the variability of the reference echotexture, the variability of the posterior feature, etc. (See Figure 1). Table 2. Mass feature categories [1].

Shape
- Oval: Elliptical or egg-shaped.
- Irregular: Neither round nor oval in shape.

Orientation
- Parallel: The long axis of the lesion parallels the skin line.
- Not parallel: The long axis is not oriented along the skin line.

Margin
- Circumscribed: A margin that is well defined or sharp, with an abrupt transition between the lesion and the surrounding tissue.
- Not circumscribed: The mass has one or more of the following features: indistinct, angular, microlobulated, or spiculated.
  - Indistinct: No clear demarcation between a mass and its surrounding tissue.
  - Angular: Some or all of the margin has sharp corners, often forming acute angles.
  - Microlobulated: Short-cycle undulations impart a scalloped appearance to the margin of the mass.
  - Spiculated: The margin is formed or characterized by sharp lines projecting from the mass.

Echo pattern
- Anechoic: Without internal echoes.
- Hyperechoic: Having increased echogenicity relative to fat or equal to fibroglandular tissue.
- Complex cystic and solid: The mass contains both anechoic and echogenic components.
- Hypoechoic: Defined relative to fat; masses are characterized by low-level echoes.
- Isoechoic: Having the same echogenicity as fat.
- Heterogeneous: The breasts are heterogeneously dense, which may obscure small masses.

In this paper, machine learning classifiers were applied to identify the severity of breast lesions on BUS images so as to develop an automated and accurate CAD system. The proposed CAD system can provide diagnostic references for surgeons when interpreting BI-RADS categories. Basically, the proposed CAD system can be roughly divided into the following steps: image preprocessing, image segmentation, feature extraction, feature selection, and classification. Figure 2 shows a flowchart of the BI-RADS category system. In particular, this study focuses on multi-class identification of BI-RADS categories 2 to 5, and copes with problems such as different resolutions and intensity variations in image acquisition among different types of ultrasound imaging modalities. The rest of this paper is organized as follows. Section II surveys related work on the classification of breast ultrasound BI-RADS and reviews the relevant techniques. Section III describes the proposed method for a multi-class Computer-Aided-Diagnosis system. The experimental results are illustrated in Section IV, and Section V concludes this paper.

Related Work
Various Computer-Aided-Diagnosis (CAD) systems for breast imaging modalities have been widely developed. In general, traditional CAD systems consist of two major stages: segmentation of masses, and severity classification of masses. Some works have explored segmentation of breast masses. For example, Alvarenga et al. [2] used a semi-automatic CAD system to detect the contours of breast lesions, and Shan [3] developed a fully automatic detection system for breast lesion segmentation. Both accurate lesion boundary detection and feature selection are important for breast cancer diagnosis. The research in [2] identifies lesion severity on ultrasound images using morphology features, while the studies in [3][4][5] adopt texture features for classification. Both morphology and texture features are used for classification in [6][7][8]. Moon et al. [7] combined pathological diagnosis to identify a breast lesion with BI-RADS as benign or malignant. As a whole, most of these studies can only predict the detected breast lesion as benign or malignant [2][3][4][5][6][7][8].
In recent years, most studies on the classification of breast ultrasound BI-RADS have focused only on borderline categories, such as BI-RADS 3, to reduce ambiguities. Related studies and their adopted methods are listed in Table 3.
In actuality, BUS images contain considerable speckle noise as a result of human factors or mechanical imaging factors, and traditional noise removal methods cannot reduce speckle noise effectively. Therefore, Yu and Acton [9] developed a nonlinear anisotropic diffusion technique, Speckle Reducing Anisotropic Diffusion (SRAD), to remove the speckle. Besides, how to extract robust features related to the BI-RADS descriptions is an important issue for accurate classification. Shan [3] adopted region growing from a seed point, speckle reduction, and Neutrosophic L-Means clustering (NLM) to develop a fully automatic segmentation method for BUS images. Although the segmentation accuracy reaches 94%, this method has two major limitations. The first is an over-segmentation problem that occurs when the lesion lies near the image boundary. The second is that it cannot detect multiple lesions in one image. In this work, we propose a method that improves on these problems.
In recent years, some studies have capitalized on machine learning, e.g., selected features fed into a Support Vector Machine (SVM), to predict lesion severity with a two-category classification. Random Forest (RF) has also become a commonly used method for multi-class identification, especially in surface-type analysis of satellite imagery [10]. In 2012, Krizhevsky et al. [11] proposed a Convolutional Neural Network (CNN) on color images to identify objects of interest, and classification research in medical imaging has gradually shifted toward approaches based on deep learning. CNN has been applied and demonstrated effectively for tissue identification in brain MRI images [12,13], because it can achieve better classification results. More recently, Yap et al. [14] used CNN for breast lesion detection, and Chiang et al. [15] adopted a 3-D CNN for breast tumor detection.
After further exploration, we combined the proposed methods and existing techniques into our breast ultrasound CAD system so that it can extract more robust features. The features selected by the proposed system are further listed in Table 4. Table 3. Literature review of breast ultrasound CAD systems.


Materials
In this paper, we collected retrospective data (between 2012 and 2014) from our cooperating hospital. A total of 151 tumor lesion samples was used to validate the performance of the proposed system. These images contained 151 tumor lesions composed of 79 benign tumors (BI-RADS 2–3) and 72 malignant tumors (BI-RADS 4–5), and were first identified by an experienced physician. Links to the personal information of the patients in these images were removed by an experienced radiologist. Each acquired ultrasound image was then stored in JPEG format under its corresponding serial number, with a spatial resolution of pixels and 256 intensity values. Because the original images acquired by different imaging instruments do not have the same size, the border of each image was cropped to a specified size so that the details of the acquired image were preserved and the size was consistent (321×321). The original images acquired from three different imaging instruments, namely PHILIPS, SIEMENS, and TOSHIBA, after the crop operation are shown in Figure 3, respectively; hereinafter these are referred to as Model A, Model B, and Model C. We find that these images may render the various breast tissues differently.

Methods
The major goal of the proposed CAD system is to predict the severity of breast carcinoma. The system can be roughly divided into the following stages: preprocessing, segmentation/detection of the breast tumor, feature extraction, and BI-RADS category prediction. Our proposed CAD system first acquires the BUS image, performs image preprocessing including speckle noise removal, image normalization, and image enhancement, and then performs image segmentation including k-means clustering, boundary region removal, region ranking, and region growing.

Image Preprocessing
In the preprocessing stage, Speckle Reducing Anisotropic Diffusion (SRAD) [17] was conducted to remove the speckle noise caused by different diffusion signals. It reduced the interference of noise on the breast tissues, as shown in Figure 5b. Next, intensity normalization was applied to decrease the brightness differences among all images acquired by the different types of imaging instruments. This operation preserves image quality and eliminates variability introduced by different radiologists acquiring images with different ultrasound machines, as shown in Figure 5c. Finally, a contrast enhancement method based on histogram equalization was performed to increase the contrast between the tumor regions and the non-tumor (background) regions, as shown in Figure 5d.
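The intensity normalization and contrast enhancement steps above can be sketched as follows. This is a minimal NumPy illustration: the target mean/standard deviation values and the classic 8-bit histogram equalization variant are our assumptions, not parameters reported in the paper.

```python
import numpy as np

def normalize_intensity(img, target_mean=128.0, target_std=40.0):
    """Linearly rescale so images from different scanners share a common
    mean/std (target values are illustrative assumptions)."""
    img = img.astype(np.float64)
    std = img.std()
    if std == 0:
        return np.full_like(img, target_mean)
    out = (img - img.mean()) / std * target_std + target_mean
    return np.clip(out, 0, 255)

def equalize_histogram(img):
    """Classic 8-bit histogram equalization to boost the contrast between
    tumor and background regions."""
    img = img.astype(np.uint8)
    hist = np.bincount(img.ravel(), minlength=256)
    cdf = hist.cumsum()
    cdf_min = cdf[cdf > 0][0]
    lut = np.round((cdf - cdf_min) / (cdf[-1] - cdf_min) * 255).astype(np.uint8)
    return lut[img]
```

In practice, SRAD would run before these two steps; here only the scanner-independent normalization and the contrast stretch are shown.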

Segmentation of Breast Tumor
In the segmentation stage, we focus only on detecting true tumor regions effectively for later processing such as feature extraction and classification. To this end, the tumor Region Of Interest (ROI) needs to be located beforehand. An initial segmentation based on K-means clustering [18] was performed to detect the suspicious tumor areas, separating tumor from background composed of fat, skin, muscle, glands, and other tissues, as shown in Figure 5e. Since the dark tissues around a tumor are easily misclassified as candidate tumors after K-means clustering, pixels that do not belong to tumor tissue must be eliminated with further processing in order to improve the clustering accuracy. In this step, an operation based on morphological dilation reconstruction was applied to remove the regions touching the border of the image, and objects with smaller areas were considered artifacts and filtered out. The remaining regions may thus be considered suspicious/candidate tumor or lesion regions, as shown in Figure 5f.
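A minimal sketch of this initial clustering and border-region removal is shown below, assuming a plain intensity-only k-means and a BFS flood fill standing in for the paper's morphological dilation reconstruction:

```python
import numpy as np
from collections import deque

def kmeans_1d(values, k=2, iters=20):
    """Intensity-only k-means; centers seeded evenly over the value range."""
    values = np.asarray(values, dtype=float)
    centers = np.linspace(values.min(), values.max(), k)
    for _ in range(iters):
        labels = np.argmin(np.abs(values[:, None] - centers[None, :]), axis=1)
        for c in range(k):
            if np.any(labels == c):
                centers[c] = values[labels == c].mean()
    return labels, centers

def clear_border_regions(mask):
    """Drop 4-connected foreground regions touching the image border
    (a BFS flood fill in place of morphological dilation reconstruction)."""
    h, w = mask.shape
    keep = mask.copy()
    seen = np.zeros_like(mask, dtype=bool)
    q = deque()
    for r in range(h):
        for c in range(w):
            if mask[r, c] and (r in (0, h - 1) or c in (0, w - 1)):
                seen[r, c] = True
                q.append((r, c))
    while q:
        r, c = q.popleft()
        keep[r, c] = False
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if 0 <= nr < h and 0 <= nc < w and mask[nr, nc] and not seen[nr, nc]:
                seen[nr, nc] = True
                q.append((nr, nc))
    return keep
```

Small-area filtering (dropping components below an area threshold) would follow the same connected-component traversal and is omitted for brevity.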
Unfortunately, after the previous step, some false suspicious tumor regions may still be preserved. Clinically, a tumor is typically found as an object with a round shape. In order to detect an actual tumor region, a region ranking method [3] based on scoring in terms of area similarity and circularity was applied to Figure 5f to keep the regions whose ranking scores are greater than a specified threshold. Thus, candidate tumor areas may be further selected as tumor areas, as shown in Figure 5g. However, holes or irregular shapes remain in the detected tumor regions due to non-uniform intensities caused by different skills or experience in acquiring images. Seed regions inside these candidate regions may be determined and used as reference points for region growing [19]. In order to resolve this problem, we used the growing method proposed by Shan [3]. The seed point is used as a start point, and the region is expanded gradually by examining the intensity values and gradient magnitudes of its neighborhood pixels. This operation can be expressed as Equation (1), where G(v) is the intensity value of pixel v, m is the intensity mean of the region, and M is the intensity mean of the entire image. In our experiment, we used b1 = 1 and b2 = 1.9 as our parameters. Nevertheless, this approach still cannot accurately detect tumors acquired by different ultrasound machines. Therefore, the growing method is further modified from Equation (1) to Equation (2) in order to resolve the problem arising from different machines, where G(v) is the intensity value of pixel v, m is the average intensity of the growing area, M is the average intensity of the entire image, and b is the criterion parameter for halting growth. According to the different sonogram models, the parameter b was determined by various experiments and set as 7, 18, and 14 for Model A, Model B, and Model C, respectively.
After performing region growing, the actual breast tumor can be detected, as shown in Figure 5h. Resultant images of the detected tumor contours for the different machines are shown in Figure 6, respectively. Therefore, the proposed breast ultrasound CAD system not only segments multiple tumors automatically, but is also suitable for different types of breast ultrasonic scanners.
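The region-growing step can be illustrated as follows. The halting criterion used here, admitting a neighbour while its intensity stays within b of the running region mean, is a simplified stand-in for Equation (2); the exact form and the gradient-magnitude term are given in the paper's equations.

```python
import numpy as np
from collections import deque

def region_grow(img, seed, b):
    """Grow a region from `seed` (row, col): a 4-connected neighbour joins
    while |G(v) - m| < b, where m is the running region mean (a simplified
    version of the halting criterion in Equation (2))."""
    img = img.astype(np.float64)
    h, w = img.shape
    region = np.zeros((h, w), dtype=bool)
    region[seed] = True
    total, count = img[seed], 1
    q = deque([seed])
    while q:
        r, c = q.popleft()
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if 0 <= nr < h and 0 <= nc < w and not region[nr, nc]:
                m = total / count  # current region mean
                if abs(img[nr, nc] - m) < b:
                    region[nr, nc] = True
                    total += img[nr, nc]
                    count += 1
                    q.append((nr, nc))
    return region
```

With a hypoechoic (dark) tumor on a brighter background, growth halts at the lesion boundary once the intensity jump exceeds b, which is why b must be tuned per scanner model.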

Feature Extraction
Once the actual tumor contour has been located, extracting robust features is important for correct BI-RADS classification. Here, morphology features and texture features corresponding to the BI-RADS descriptors are defined and measured. A flow chart of the feature extraction stage is illustrated in Figure 7. A total of 145 features with corresponding descriptors, including shape, orientation, margin, echo pattern, and posterior features, as listed in Table 4, were extracted to identify the BI-RADS category. Of these, 17 features corresponding to morphology, such as shape, orientation, and margin, were measured quantitatively.
Besides, texture can be used efficiently to describe the echo pattern and the posterior features, as mentioned in Table 4, because both can be measured from the spatial intensity distribution of the tissues or from the degree of degradation as the ultrasound signals penetrate the tissues. In general, the Gray-Level Co-occurrence Matrix (GLCM) [4] based on orientation decomposition is a good measure of texture, and it is typically evaluated at given distances and orientations between neighboring pixels. Here, GLCMs at four directions (0°, 45°, 90°, 135°) and one distance (d = 1 pixel) are first constructed; let G denote the matrix. Then, an estimate of the probability p_ij is computed from the elements of G. A variety of texture descriptors characterizing the contents of the GLCM, including contrast, correlation, energy, entropy, homogeneity, sum average, and sum entropy, were computed and averaged, respectively. Therefore, a total of seven features were obtained when performing GLCM on the original image. In addition, the multi-resolution wavelet transform (producing eight images) and the multi-resolution ranklet transform (producing nine images) were processed with the same GLCM descriptors, respectively. In total, 126 features were generated from the original image, the multi-resolution wavelet transform, and the multi-resolution ranklet transform. Because the posterior feature evaluates the echo variations behind the tumor, it can be measured directly from the intensity distributions close to the tumor region by computing the difference of the averages between the upper and lower areas, and the difference of the averages between the right and left areas (generating two features). Finally, a total of 128 mass features corresponding to BI-RADS descriptors such as echo pattern and posterior features were extracted in this stage [4].
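The GLCM computation can be sketched as below. This minimal implementation averages the four directions at d = 1 and computes a subset of the listed descriptors (contrast, energy, homogeneity, entropy); the quantization to 8 gray levels is our assumption.

```python
import numpy as np

# Pixel offsets for the four GLCM directions at distance d = 1.
OFFSETS = {0: (0, 1), 45: (-1, 1), 90: (-1, 0), 135: (-1, -1)}

def glcm(img, offset, levels=8):
    """Normalized co-occurrence matrix for one direction."""
    dr, dc = offset
    h, w = img.shape
    g = np.zeros((levels, levels))
    for r in range(h):
        for c in range(w):
            nr, nc = r + dr, c + dc
            if 0 <= nr < h and 0 <= nc < w:
                g[img[r, c], img[nr, nc]] += 1
    return g / g.sum()

def glcm_descriptors(img, levels=8):
    """Average a subset of the paper's descriptors over the four directions."""
    i, j = np.indices((levels, levels))
    acc = {"contrast": 0.0, "energy": 0.0, "homogeneity": 0.0, "entropy": 0.0}
    for off in OFFSETS.values():
        p = glcm(img, off, levels)
        nz = p[p > 0]
        acc["contrast"] += np.sum(p * (i - j) ** 2)
        acc["energy"] += np.sum(p ** 2)
        acc["homogeneity"] += np.sum(p / (1.0 + np.abs(i - j)))
        acc["entropy"] += -np.sum(nz * np.log2(nz))
    return {k: v / len(OFFSETS) for k, v in acc.items()}
```

The same descriptor set is then re-applied to each wavelet and ranklet sub-image, which is how the 7 × 18 = 126 texture features arise.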
However, only the SVM classifier requires a feature selection stage, such as feature dimension reduction and feature scaling, to exclude unnecessary or insignificant features, because the classifier itself has no built-in capability for feature selection. The various features related to lesion severity corresponding to the BI-RADS categories were fed into three machine learning and deep learning classifiers, namely the Support Vector Machine (SVM), Random Forest (RF), and Convolutional Neural Network (CNN), with or without the feature selection operation, to develop a multi-class assessment of breast tumor severity based on BI-RADS, as shown in Figure 8. Of these three classifiers, the one with the highest evaluated performance was chosen as the final classification decision of the proposed CAD system.

Classifiers
Three types of machine learning techniques were applied to predict the BI-RADS category, including SVM, RF, and CNN. The implementations of these classifiers are described below.

Support Vector Machine
Basically, SVM is a two-class classifier based on statistical learning theory. In this study, a multi-class SVM classifier with a Radial Basis Function (RBF) kernel was used, whose gamma parameter and penalty coefficient C were tuned continuously in order to obtain better performance and speed up the entire computation process. The model was implemented with LIBSVM [20] in Matlab. The RBF (Gaussian) kernel function was used as the mapping transform for the SVM. The model computes the correct-classification rate after mapping and compares rates to obtain a better trade-off. Finally, it obtains the optimal parameter set over the grid C = 2^i, gamma = 2^j. Experimental testing shows that the optimal C and gamma values are 181.0193 and 0.088388, respectively, with which the proposed classifier obtains the best classification performance. This stage aims to design a multi-class SVM classifier [21] from a number of optimal hyperplanes in a large data space. In fact, multi-class classification between different categories not only requires considerable computation time, but may also indirectly reduce identification accuracy. SVM is often combined with a heuristic algorithm to evaluate the robustness of feature correlation and selectivity in reasonable time; to determine the best solution, excessive computational cost must be avoided. In this paper, a multi-class SVM based on a Directed Acyclic Graph (DAG-SVM) [22] was presented to predict the BI-RADS category, and the classification architecture of the BI-RADS DAG-SVM is shown in Figure 9.
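As a quick check, the reported optimum lies exactly on the usual LIBSVM power-of-two search grid; the exponent ranges and half-step granularity below are our assumptions, since the paper only reports the selected optimum.

```python
# Power-of-two grid in the usual LIBSVM style (half-integer exponents are
# an assumption; the paper reports only the selected optimum).
C_grid = [2.0 ** (i / 2.0) for i in range(-10, 21)]     # 2^-5 .. 2^10
gamma_grid = [2.0 ** (j / 2.0) for j in range(-30, 7)]  # 2^-15 .. 2^3

# The reported optimum corresponds to i = 7.5 and j = -3.5:
best_C = 2.0 ** 7.5       # ~ 181.0193
best_gamma = 2.0 ** -3.5  # ~ 0.088388
```

That both reported values are exact half-integer powers of two strongly suggests a standard grid search was used to select C and gamma.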

Random Forest
A Decision Tree (DT) is one of the commonly used predictive models in machine learning. It is a non-parametric supervised learning method used for both classification and regression tasks, whereas a Random Forest (RF) [23] is generated randomly from various Decision Trees. Above all, RF uses a majority voting strategy to obtain the best prediction from the preliminary classification results. As shown in Figure 10, the optimal subset for each tree is drawn from the feature set x, and a total of N Decision Trees is thus established. The optimal decision of each tree is combined into the overall classification result Y. That is, RF automatically selects the best combination from the individual trees based on gain ratio and split information. The RF classifier can be denoted as Equation (3), where c is the class label and P_t is the BI-RADS category probability measured by the t-th DT. After evaluating the performance of the forest with the mean square error, when the tree number reaches 200 the Out-Of-Bag (OOB) error curve is no longer significantly reduced, and the classification finally reaches a steady state. This not only minimizes the error measure, but also ensures that the proposed prediction model obtains identification accuracies as consistent as possible between the training stage and the testing stage. Our experiments also show that if the tree number is greater than 200, the RF classification efficiency no longer increases significantly. Therefore, 200 trees were selected as the limiting criterion for our RF classifier.
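The voting rule described around Equation (3) can be sketched as follows; the use of averaged per-tree probabilities (soft voting) and the mapping to BI-RADS labels 2-5 are our assumptions.

```python
import numpy as np

def rf_predict(tree_probs, classes=(2, 3, 4, 5)):
    """tree_probs: (N_trees, n_classes) array of per-tree BI-RADS category
    probabilities P_t(c). The forest returns the class whose averaged
    probability is highest (a soft majority vote)."""
    mean_p = np.asarray(tree_probs, dtype=float).mean(axis=0)
    return classes[int(np.argmax(mean_p))]
```

With hard voting, each tree would instead contribute a single argmax label and the majority label would win; both variants plateau as the number of trees grows, which is why 200 trees suffice here.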

Convolutional Neural Network
A Convolutional Neural Network (CNN) [11] is a class of deep artificial neural networks that has recently become a popular classification approach in computer vision. It consists of an input layer, convolution layers, pooling layers, fully connected layers, and an output layer. A feature map is obtained by processing the input image with a convolutional kernel; weights are shared among neurons in the same convolutional layer. During backpropagation, the convolution layers are updated so that the network learns the features itself. The weights and biases of every neuron are modified using gradient descent and backpropagation, where the chain rule of calculus is used to adjust the weights so as to minimize the loss function.
In this study, a CNN architecture was implemented on the BUS images in order to validate the classification performance of the model. Before each acquired BUS image was introduced into the model, it was resized to a spatial resolution of 99 × 99 pixels in order to satisfy the input criterion of the proposed CNN model. Because the sample size of the dataset is limited, effective data preprocessing and augmentation are mandatory for medical image datasets, especially for CNN training. In the CNN training stage, only geometric translation and rotation were performed, so as to preserve the shape textures of the breast tumors for the final classification. Moreover, all BUS image samples were decomposed into five categories: the background itself (negative sample) and BI-RADS categories 2–5 (positive samples). The operation of the CNN on an input image is illustrated in Figure 11. The CNN architecture, which includes an input layer, two convolution layers, two max-pooling layers, and two fully connected layers for the multi-class classification, is shown in Figure 12. Fully connected layer 1 flattens max-pooling layer 2 into a single vector. The detailed parameters of the proposed CNN architecture are further summarized in Table 5.
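The spatial dimensions flowing through the described stack (99 × 99 input, two convolutions and two 2 × 2 max-poolings, then a flattened vector) can be checked with simple size arithmetic. The 5 × 5 kernels, valid padding, and 16 output channels below are illustrative assumptions; the actual values are those in Table 5.

```python
def conv_out(n, k, stride=1, pad=0):
    """Spatial size after a convolution layer."""
    return (n + 2 * pad - k) // stride + 1

def pool_out(n, k=2, stride=2):
    """Spatial size after a max-pooling layer."""
    return (n - k) // stride + 1

def feature_vector_length(input_size=99, kernel=5, channels=16):
    # Kernel size and channel count are hypothetical; see Table 5.
    s = conv_out(input_size, kernel)  # conv layer 1
    s = pool_out(s)                   # max-pooling layer 1
    s = conv_out(s, kernel)           # conv layer 2
    s = pool_out(s)                   # max-pooling layer 2
    return s * s * channels           # flattened by fully connected layer 1
```

Under these assumptions the chain is 99 → 95 → 47 → 43 → 21, so fully connected layer 1 receives a 21 × 21 × 16 tensor flattened into a single vector.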

Experimental Results and Discussion
In our experiments, we used the retrospective data collected from our cooperating hospital (between 2012 and 2014). There was a total of 151 tumor lesion samples. These 151 images were divided into 106 training samples and 45 testing samples for evaluating the performance of the SVM. In order to evaluate the performance of RF and CNN, 103 training samples and 48 testing samples were used.

Performance Evaluation of Tumor Segmentation
As mentioned previously, in order to validate the detection performance of the proposed CAD system, we compared the segmentation results against those of Equation (1). Figure 13 shows the contour detection results using the different approaches on the different ultrasound instruments, including Model A, Model B, and Model C.

Performance Evaluation with Confusion Matrix
In general, the confusion matrix is a commonly used index to evaluate the efficiency of machine learning. In order to validate the performance of the multi-class identification, the predicted and actual BI-RADS categories (ground truth) were compared to generate a confusion matrix [25]. As shown in Figure 14, a 2D confusion matrix is generated in terms of the predicted and actual classification results using the different machine learning approaches. Assume that K_1, K_2, ..., K_n are the multi-class labels; A_ij denotes the number of samples whose actual class is K_i but whose predicted class is K_j. From the confusion matrix, various performance indices can be measured to evaluate the classification efficiency. Some popular indicators are expressed by the following definitions.
Accuracy evaluates the percentage of correct predictions over all categories, and is calculated by Equation (4). Precision is the prediction accuracy for a specific class; in other words, it measures how often a prediction of that class is correct, and it is calculated by Equation (5). Recall is an accuracy measure of the prediction model for a particular class; in other words, it is the probability that samples of a certain category are not misclassified, and it is defined by Equation (6). The statistical F-score (F1-score) is the harmonic mean of the precision rate and the recall rate; when the F-score is over 70%, the test method is considered more effective, and it is calculated by Equation (7). In our experiments, a total of 106 training samples and 45 testing samples were selected and identified by an experienced radiologist. The evaluation using SVM+PCA to classify the BI-RADS category is summarized in Table 6, and the evaluation using RF is shown in Table 7. In order to evaluate the performance of the CNN, 103 training samples and 48 testing samples were used; the evaluation result for this method is shown in Table 8. In terms of the three confusion matrices, Tables 6-8, the classification efficiency can be evaluated and compared, as shown in Table 9. The optimal performance on the various evaluation indices for the different BI-RADS category classifications is obtained with CNN.
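The per-class indices in Equations (4)-(7) can be computed directly from a multi-class confusion matrix, as sketched below, using the A_ij convention above (row = actual class, column = predicted class):

```python
import numpy as np

def confusion_scores(cm):
    """cm[i, j]: number of samples with actual class i predicted as class j.
    Returns overall accuracy plus per-class precision, recall, and F1."""
    cm = np.asarray(cm, dtype=float)
    tp = np.diag(cm)                  # correct predictions per class
    predicted = cm.sum(axis=0)        # column sums: times class was predicted
    actual = cm.sum(axis=1)           # row sums: true samples per class
    precision = np.divide(tp, predicted, out=np.zeros_like(tp),
                          where=predicted > 0)
    recall = np.divide(tp, actual, out=np.zeros_like(tp), where=actual > 0)
    pr = precision + recall
    f1 = np.divide(2 * precision * recall, pr, out=np.zeros_like(tp),
                   where=pr > 0)
    accuracy = tp.sum() / cm.sum()
    return accuracy, precision, recall, f1
```

For a 4 × 4 BI-RADS 2-5 confusion matrix such as Tables 6-8, each class thus receives its own precision, recall, and F1, while accuracy summarizes the diagonal mass.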

Conclusions
In this study, three types of ultrasound imaging instruments, Model A, Model B, and Model C, were adopted to acquire breast ultrasound images as our experimental samples. A series of image processing operations, such as speckle noise removal, image normalization, and image enhancement, was performed to detect the breast lesion contours and to extract features related to the BI-RADS categories, with or without a feature selection procedure depending on the selected classifier. Finally, multi-class classifiers based on machine learning methods were implemented to identify the actual BI-RADS 2-5 categories in terms of the selected features. Experimental results reveal that the proposed system can automatically, accurately, and reliably detect multiple tumors, and that the identification accuracies achieved with SVM, RF, and CNN were 80.00%, 77.78%, and 85.42%, respectively. CNN is higher in accuracy than RF and SVM because the CNN model is a pixel-based structure whose feature maps can represent more features than the total of 145 mass features extracted in our proposed feature selection. Using different ultrasound imaging instruments, the F-score evaluations based on CNN exceed 75% when samples of the various BI-RADS categories were tested.
In the future, we hope to acquire sufficient BI-RADS 3-4 samples from ultrasound elastography and B-mode ultrasound images in order to improve and validate the evaluation performance. Cross-validation can also be demonstrated if we can further combine many more ultrasound samples with histopathological examination results, so that the performance of feature extraction and classification in the proposed CAD system can be further improved.

Conflicts of Interest:
The authors declare no conflict of interest.