Article

High-Resolution Network with Dynamic Convolution and Coordinate Attention for Classification of Chest X-ray Images

School of Microelectronics, Tianjin University, Tianjin 300072, China
* Authors to whom correspondence should be addressed.
These authors contributed equally to this work.
Diagnostics 2023, 13(13), 2165; https://doi.org/10.3390/diagnostics13132165
Submission received: 9 April 2023 / Revised: 12 June 2023 / Accepted: 21 June 2023 / Published: 25 June 2023
(This article belongs to the Special Issue Classification of Diseases Using Machine Learning Algorithms)

Abstract:
The development of automatic chest X-ray (CXR) disease classification algorithms is significant for diagnosing thoracic diseases. Owing to the characteristics of lesions in CXR images, including high inter-disease similarity in appearance, varied sizes, and different occurrence locations, most existing convolutional neural network-based methods extract thoracic lesion features insufficiently and struggle to adapt to changes in lesion size and location. To address these issues, this study proposes a high-resolution classification network with dynamic convolution and coordinate attention (HRCC-Net). In this method, a parallel multi-resolution network is constructed in which a high-resolution branch acquires essential detailed features of the lesion, and multi-resolution feature swapping and fusion provide multiple receptive fields to extract complicated disease features adequately. Furthermore, dynamic convolution is introduced to enhance the network’s ability to represent multi-scale information and accommodate lesions of diverse scales. In addition, a coordinate attention mechanism enables automatic focus on pathologically relevant regions and captures variations in lesion location. The proposed method is evaluated on the ChestX-ray14 and CheXpert datasets, where the average AUC (area under the ROC curve) values reach 0.845 and 0.913, respectively, indicating its advantages over currently available methods. Measured by the specificity and sensitivity commonly used to assess medical diagnostic systems, the network can improve diagnostic efficiency while reducing the rate of misdiagnosis. The proposed algorithm has great potential for thoracic disease diagnosis and treatment.

1. Introduction

In clinical practice and medicine, X-ray imaging technology (X-ray), magnetic resonance imaging (MRI), and computed tomography (CT) are widely employed for disease diagnosis. Among these modalities, X-ray imaging is commonly employed for examining chest lesions due to its advantages of low radiation dose, cost-effectiveness, and ability to detect obvious lesion tissues and structures easily [1]. By utilizing the different densities and thicknesses of human tissues, X-rays create grey-scale images of chest radiographs with varying contrasts on the film [2]. However, the analysis and interpretation of the numerous chest radiographs generated worldwide heavily rely on human visual examination, which presents challenges such as the need for specialized skills, concentration, time consumption, high costs, potential operator bias, and the inability to leverage valuable information from large-scale datasets [3]. Furthermore, the shortage of radiologists proficient in reading chest radiographs poses a significant challenge to public health in many countries [4]. Hence, the development of automated algorithms for computer-aided diagnosis of chest diseases in CXR is of paramount importance.
Numerous studies have demonstrated the effectiveness of deep learning techniques, particularly convolutional neural networks (CNNs), in medical image processing. Much research has been conducted on thoracic disease classification tasks. In recent years, deep convolutional neural networks (DCNN) have achieved impressive success in medical image classification [5,6,7]. However, despite the advancements made in this field, the automatic classification of thoracic diseases in CXR images for multi-label scenarios still requires further improvement. Firstly, the high similarity in the appearance of certain thoracic conditions on CXR images can lead to inaccuracies in distinguishing between two categories, especially for some patients with two or more pathologies. The remarkable interclass similarity observed in these images hampers the effective learning of discriminative features, thereby posing difficulties in accurately diagnosing conditions [8]. Secondly, the size of the lesions on the CXR varies considerably from one disease to another, as shown in Figure 1; for instance, cardiomegaly can encompass the entire heart, whereas effusion and mass tend to be much smaller in size. This scale variability poses difficulties for the model in accommodating and adapting to the diverse scale changes encountered while classifying different thoracic disorders. As depicted in Figure 1, the region of pneumonia can appear at various locations within the lung field, while cardiomegaly is typically located around the heart region. The presence of highly variable locations adds complexity to the task of CXR classification. Furthermore, CXR images often contain numerous areas that are irrelevant to the specific disease being diagnosed. As illustrated in Figure 1, healthy tissues constitute the majority of the image [9]. These non-disease regions provide limited diagnostic information and can impose unnecessary computational costs during analysis. Especially in exceptional cases where the lesion area is relatively small, it is necessary to exclude the interference of disease-independent regions [10]. Therefore, extracting detailed features of chest lesions and adaptively capturing variations in size and location is crucial to formulating an accurate and robust model for CXR analysis.
Currently, the prevailing frameworks for multi-label thoracic disease classification consist of serially connected networks that employ high-to-low-resolution convolutions to encode the input image into a low-resolution representation before classification. However, this approach has limitations as the extracted features may generate ambiguous mappings due to multiple convolutions, resulting in the loss of critical details. Furthermore, the aforementioned network architecture typically utilizes a single scale of the receptive domain for feature extraction in each convolutional layer. This limitation restricts the network’s ability to extract features across a wide range of spatial scales, thereby compromising its feature extraction capability to some extent. This study uses a parallel multi-resolution network as the basic network for feature extraction, which connects high-resolution to low-resolution networks in parallel. Additionally, it includes multi-resolution feature switching and fusion modules, enabling the network to capture detailed features of complex pathologies and sustain high-resolution representations. This approach is advantageous compared to traditional networks as it provides discriminative information for indistinct diseases.
To extract multi-scale information and enhance the representation of different scales in each convolutional layer, GoogLeNet [11] introduces the concept of “inception modules”. These modules utilize convolutional kernels of different sizes, enabling the extraction of features from multiple receptive domains simultaneously. This innovative approach breaks away from the traditional convolutional model and significantly improves the network’s ability to capture multi-scale information. Finally, to enhance the diversity of feature learning, a common approach is to splice and aggregate features extracted at different scales [12]. This approach can improve model performance, but it is computationally expensive. This paper uses a dynamic convolution module to extract multi-scale information without adding many parameters or much computation. In this case, the dynamic convolution is equivalent to conditionally parameterized convolution (CondConv), which can capture lesions of different sizes while improving classification accuracy.
Numerous studies [9,13] have demonstrated that the attention mechanism gives DCNNs the ability to allocate more processing resources to vital information during the learning process, which makes it possible to strengthen the discriminatory power of DCNNs by adapting to variances in appearance, location, and scale of thoracic abnormalities [9]. This study presents a coordinate attention (CA) module, which allows the network to focus more on focal regions while suppressing features in the image that are irrelevant to the condition, further enhancing the classification results.

1.1. Challenges and Motivation

In summary, certain thoracic diseases are similar in appearance, and pathological abnormalities with closely matching presentation features are difficult to discriminate. However, the existing state-of-the-art classification methods do not sufficiently extract the features of chest lesions and lose essential details, which leads to suboptimal recognition accuracy. In addition, the location and size of lesion regions vary significantly for different disease categories. Such scale variability and location diversity make it difficult for models to adapt to variations in scale and location when classifying various thoracic disorders, ultimately limiting the overall classification performance.
To address these issues, this study proposes a high-resolution classification network with dynamic convolution and coordinate attention (HRCC-Net), aiming to extract essential detailed features of the lesions as well as adapting to the variations of sizes and locations of the disease. First, this study proposes a multi-resolution network as the backbone that maintains high-resolution representations to obtain accurate, detailed lesion characteristics and thus identify subtle distinctions between different pathologies. Simultaneous multi-resolution feature swapping and fusion allow multiple receptive fields to capture rich contextual information to extract disease features adequately. Second, to enhance the network’s expression of lesions at various scales and improve the multi-scale representation of each convolutional layer, this study dynamically aggregates multiple convolutional kernels through the dynamic convolution (CondConv) module. Third, this study introduces a coordinate attention (CA) mechanism into the multi-resolution network to focus on the lesion region and extract the critical location features of the pathology by capturing location information and channel relationships. Finally, to alleviate the problem of imbalance of sample data in the dataset, this study adopts a weighted focal loss (WFL) function, which can make the network more effective in adjusting the corresponding weights according to the difficulty of disease classification.

1.2. Contributions

The contributions of this paper are as follows:
  • This study proposes a multi-scale high-resolution network, HRCC-Net, which contains a high-resolution branch to obtain critical detail features of lesions and multi-resolution units to acquire multiple receptive fields, thus sufficiently extracting complex appearance representations of pathological abnormalities.
  • This study proposes dynamic convolution (CondConv) blocks for feature extraction of diseases to acquire multi-scale information of lesions in images, adapting to lesion size variations.
  • This study introduces a coordinate attention (CA) mechanism to detect the spatial location of pathological abnormalities while excluding the interference of irrelevant regions and automatically capturing changes in lesion location.
The structure of the rest of the paper is as follows. Section 2 presents the related work of the paper. Section 3 provides an in-depth description of our method. Section 4 contains the appropriate experiments and the analysis of the results. Section 5 discusses the effectiveness of our proposed network. Section 6 summarises the work of the paper.

2. Related Work

2.1. Related CNN Networks

Wang et al. [14] presented the ChestX-ray14 dataset and evaluated classical CNN architectures, namely AlexNet [15], VGGNet [16], GoogLeNet [11], and ResNet [17], to predict the presence of multiple diseases. Huang et al. [18] proposed DenseNet, a new CNN structure that outperformed the then state-of-the-art results to reach optimality in a benchmark image classification task. Rajpurkar et al. [19] suggested classifying CXR images by fine-tuning a modified DenseNet that replaced the last fully connected layer with a 14-output fully connected layer, which successfully outperformed experienced radiologists in detecting pneumonia. Chen et al. [20] proposed DualCheXNet. This network focuses on cooperative complementary learning, combining two asymmetric networks based on ResNet and DenseNet to improve the model based on the different anomalies from the original CXR to capture more discriminative features adaptively. In addition, in the diagnosis of chest X-ray images, researchers have tried to apply Transformer to images, and some works have combined CNN architecture with self-attention. Okolo et al. [21] proposed the IEViT model, built on the ViT architecture, and introduced a CNN block to classify various pathological conditions in CXR images.

2.2. Multi-Scale Convolution Module

Convolution kernels of different scales produce different receptive domains at each layer, allowing the extraction of multi-scale features at different levels. Multi-scale convolutional structures, such as Inception [11] and ResNeXt [22], have succeeded in various computer vision tasks. In such an architecture, a layer consists of multiple convolutional branches that are aggregated to compute the final output [23]. Currently, the Inception module is still a commonly used multi-scale feature extractor. Ibtehaz et al. [24] cleverly used skip connections to modify the parallel-connected Inception into continuous connections and obtained features at different scales. Cheng et al. [25] used an expanded Res2Net block similar to the Inception operation, which also extracted and aggregated features from a four-scale receptive domain. Xie et al. [6] proposed ResNeXt, which is essentially a grouped convolution in which the number of groups is controlled by the cardinality, turning a single-path convolution into a multi-path convolution with multiple branches. The dynamic convolutional layer is the mathematical equivalent of a multi-branch convolutional layer, where each branch is a single convolution and the output is conditionally adapted to the input through weighting and aggregation of the branch activations. In this paper, the dynamic convolution module [23,26] learns to scale the activation of each layer’s output through a squeeze-excitation operation [27], which weights the input of the previous layer according to the learned attention [28,29,30]. Still, only one convolution needs to be computed, increasing neither the depth nor the width of the network but improving the model’s performance by attentionally aggregating multiple convolution kernels.

2.3. Attention Mechanism

Visual attention mechanisms enable deep learning models to learn more discriminative representations. Integrating visual attention mechanisms into deep learning has led to significant progress on many visual tasks, such as localization [31], tracking [32], visual question answering [33], and segmentation [34]. Hu et al. [27] proposed the squeeze-excitation (SE) network, which adaptively recalibrates the channel feature maps by explicitly modeling the interdependencies between channels. The SE module is channel attention, which enables the network to achieve feature rescaling by learning the importance of feature channels. However, the SE module only considers channel weights and ignores spatial information. Our model aims to focus on extracting pixel features of lesion locations, which requires further optimization of the attention module so that the network can concentrate on lesion regions while reinforcing channel information. Ypsilantis et al. [35] proposed a stochastic attention-based model to determine which areas should be visually explored and to decide whether specific radiological abnormalities are present. However, only one disease, cardiomegaly, was considered in their study. Guan et al. [13] explored visual attention mechanisms and proposed a categorical residual attention learning (CRAL) framework that aims to suppress the impairment caused by irrelevant classes by assigning smaller weights to their feature representations, thereby addressing the problem that the recognition of one or more pathologies in CXR images is often hindered by pathologies that are not relevant to the target. However, the mechanism is guided only by the target loss function. Similar soft attention mechanisms were explored in [13,35,36] to determine which regions were more critical for classification or localization tasks. This paper uses a coordinate attention approach that can effectively capture location information and channel relationships to enhance the feature representation of the network [27], focusing information processing on the lesion region and automatically locating areas of pathological abnormality.

3. Method

This study proposes a high-resolution thoracic disease classification network (HRCC-Net) with dynamic convolution and coordinate attention to address the limitations in the classification process of existing methods for the multi-label thoracic disease. The architecture of HRCC-Net is shown in Figure 2 and Table 1. The network contains three key components: (1) Its backbone network has a parallel multi-resolution feature extraction network as its core. The network consists of multiple branches from high to low resolution, which can maintain the high resolution of the input image throughout the path, thus preserving spatial details for the identification of complicated thoracic diseases, especially those with similar feature presentation. Additionally, the duplicated multi-resolution feature exchange and fusion modules can capture rich contextual information by acquiring multiple receptive fields. (2) A multi-scale lesion feature extraction module that dynamically aggregates multiple convolutional kernels on each convolutional layer according to the relevant attention weights to improve the accuracy of feature extraction for different scale targets as much as possible without adding additional feature channels and to more fully characterize pathological images of thoracic diseases with variable sizes and slight differences in lesion textures. (3) The coordinate attention module, which simultaneously considers location information and channel relationships, enables more accurate identification and localization of the lesion’s exact location.
This paper uses two standard 3 × 3 convolutions with stride 2 for initial feature extraction of multi-labeled CXR images. The output feature map is then fed into a parallel multi-resolution feature extraction network, which employs 4 stages in the feature extraction process. Each stage is divided into a parallel feature extraction layer and a multi-resolution fusion layer. The first stage consists of a convolution block containing 4 residual bottleneck units to ensure the quality of the original feature map. Stages 2–4 extract image features using 4 multi-scale attention modules, each containing convolutional blocks consisting of a multi-scale module and coordinate attention. In the parallel feature extraction, starting from the high-resolution convolution stream of the first stage, lower-resolution streams are gradually added by downsampling with a 3 × 3 convolution kernel of stride 2, and the multi-resolution streams are connected in parallel. The multi-resolution fusion process uses a 3 × 3 deconvolution with stride 2 for upsampling; the resolution of the feature map is recovered layer by layer to the dimension of the previous layer for feature fusion. Finally, the feature map is passed through the output head, followed by classification. At the same time, to improve the classification accuracy of the difficult-to-classify diseases and achieve an overall improvement in classification performance, this paper uses a weighted focal loss (WFL) function, which allows the network to adjust the corresponding weights more effectively according to the difficulty of disease classification.
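As a rough illustration of the pipeline described above, the following PyTorch sketch shows a stem of two stride-2 3 × 3 convolutions and a simplified two-branch exchange unit that downsamples with a stride-2 3 × 3 convolution and upsamples with a stride-2 3 × 3 deconvolution. The class names (Stem, ExchangeUnit) and the channel counts are illustrative assumptions rather than the exact HRCC-Net configuration.

```python
# A minimal sketch (not the exact HRCC-Net implementation): a two-convolution
# stem and one cross-resolution exchange unit between two branches.
import torch
import torch.nn as nn

class Stem(nn.Module):
    """Two stride-2 3x3 convolutions: a 224x224 input becomes a 56x56 feature map."""
    def __init__(self, in_ch=3, out_ch=64):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(in_ch, out_ch, 3, stride=2, padding=1, bias=False),
            nn.BatchNorm2d(out_ch), nn.ReLU(inplace=True),
            nn.Conv2d(out_ch, out_ch, 3, stride=2, padding=1, bias=False),
            nn.BatchNorm2d(out_ch), nn.ReLU(inplace=True),
        )

    def forward(self, x):
        return self.body(x)

class ExchangeUnit(nn.Module):
    """Fuses a high-resolution and a low-resolution branch in both directions."""
    def __init__(self, hi_ch=32, lo_ch=64):
        super().__init__()
        # high -> low: stride-2 3x3 convolution for downsampling
        self.down = nn.Conv2d(hi_ch, lo_ch, 3, stride=2, padding=1)
        # low -> high: stride-2 3x3 deconvolution for upsampling
        self.up = nn.ConvTranspose2d(lo_ch, hi_ch, 3, stride=2,
                                     padding=1, output_padding=1)

    def forward(self, hi, lo):
        return hi + self.up(lo), lo + self.down(hi)

feat = Stem()(torch.randn(1, 3, 224, 224))                     # (1, 64, 56, 56)
hi, lo = torch.randn(1, 32, 56, 56), torch.randn(1, 64, 28, 28)
hi_out, lo_out = ExchangeUnit()(hi, lo)                         # shapes preserved
```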

3.1. Parallel Multi-Resolution Feature Extraction Network

Inspired by the high-resolution network (HRNet) [8], this study employs the parallel multi-resolution feature extraction network as the backbone. The specific structure of the network is shown in Figure 2; the network consists of four parallel multi-resolution sub-networks, designed from top to bottom with four different resolutions of 56 × 56, 28 × 28, 14 × 14, and 7 × 7.
This network consists of 4 stages. In the first stage, the high-resolution branch is output after a convolution block containing 4 residual bottleneck units. This high-resolution branch can maintain a high-resolution representation, providing more detailed information about the area of the lesion and its location, thus identifying slight differences between lesions and between lesions and normal tissue. Moreover, for some location-sensitive lesions, the spatial location information retained by the parallel multi-resolution design facilitates disease classification. In stage 2, the network has two resolution subnets: the original high-resolution subnet and a low-resolution subnet with half the resolution. In stages 3 and 4, a further low-resolution subnet is added with half the resolution and twice the number of channels of the previous branch. The multi-resolution subnetworks provide receptive fields of different scales and address the problem that feature maps with high semantic information lack spatial detail while feature maps rich in spatial detail lack semantic information, enabling richer semantic information to accommodate the differences between lesions of different sizes. In addition, the repeated cross-resolution exchange and fusion modules enrich the high-resolution representation with the help of the low-resolution branches, allowing for richer semantic information and the extraction of abundant lesion features.

3.2. Multi-Scale Feature Extraction Module

This study uses dynamic convolution as a multi-scale lesion feature extraction module to obtain feature information from lesions at different scales. The kernels of each convolution layer at different scales are dynamically weighted according to the relevant attention weights to extract multi-level lesion features from various receptive fields. Because of the small size of the convolutional kernels and the fact that these kernels are aggregated in a non-linear way by attention, redundant feature information is reduced. This study uses the dynamic convolution (CondConv) module to enhance the standard convolution by inserting the CondConv module into the residual module, the exact structure of which is shown in Figure 3. After the first dynamic convolution module, the aggregated features are activated by the ReLU function; they then pass through another dynamic convolution module, where coordinate attention strengthens the location information, are combined with the identity path, and finally pass through ReLU activation to obtain the output.
This paper uses a dynamic convolution containing 3 convolution kernels, i.e., n = 3. The dynamic convolution output is as follows [26]:
$$\mathrm{Output}(x) = \sigma\big((\alpha_1 \cdot W_1 + \alpha_2 \cdot W_2 + \cdots + \alpha_n \cdot W_n) * x\big)$$
where each $\alpha_i = r_i(x)$ is an input-dependent learnable weight parameter, $n$ is the number of convolutional kernels, $\sigma$ is the activation function, and each convolutional kernel $W_i$ has the same size as a standard convolutional kernel.
The weight coefficients $\alpha_i$ for each ordinary convolutional kernel are obtained through the attention module shown in Figure 3. Assuming that $x$ is an input to the attention module, the process can be represented as follows:
$$\alpha_i = r_i(x) = \mathrm{Sigmoid}\big(\mathrm{FC}(\mathrm{GAP}(x))\big)$$
where GAP stands for global average pooling, FC stands for fully connected, and Sigmoid implements the excitation operation. According to the weight coefficients $\alpha_i$, a dynamic convolutional kernel suited to this input is obtained by a weighted combination of the multiple convolutional kernels $W_i$, which can be expressed as $(\alpha_1 \cdot W_1 + \cdots + \alpha_n \cdot W_n)$.
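The following PyTorch sketch illustrates Equations (1) and (2) under the assumptions stated here: $n = 3$ expert kernels, a routing branch of global average pooling, a fully connected layer and a Sigmoid, and a grouped-convolution trick so that each example in a batch is convolved with its own aggregated kernel. It is a simplified illustration rather than the exact CondConv implementation used in HRCC-Net.

```python
# A minimal sketch of dynamic convolution (CondConv) with n = 3 kernels.
import torch
import torch.nn as nn
import torch.nn.functional as F

class CondConv2d(nn.Module):
    def __init__(self, in_ch, out_ch, kernel_size=3, num_experts=3, padding=1):
        super().__init__()
        self.out_ch, self.padding = out_ch, padding
        # n expert kernels W_i, each the size of a standard convolution kernel
        self.weight = nn.Parameter(
            0.01 * torch.randn(num_experts, out_ch, in_ch, kernel_size, kernel_size))
        self.route = nn.Linear(in_ch, num_experts)   # FC layer of the routing branch

    def forward(self, x):
        b, c, h, w = x.shape
        # alpha_i = Sigmoid(FC(GAP(x))): one weight per expert kernel, per example
        alpha = torch.sigmoid(self.route(x.mean(dim=(2, 3))))          # (b, n)
        # weighted combination of the expert kernels for each example
        kernels = torch.einsum('bn,noihw->boihw', alpha, self.weight)
        kernels = kernels.reshape(-1, *self.weight.shape[2:])          # (b*out, in, k, k)
        # one grouped convolution applies each example's aggregated kernel
        out = F.conv2d(x.reshape(1, b * c, h, w), kernels,
                       padding=self.padding, groups=b)
        return out.reshape(b, self.out_ch, h, w)

y = CondConv2d(32, 32)(torch.randn(2, 32, 56, 56))   # -> (2, 32, 56, 56)
```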

3.3. Lesion Location Enhancement Module

Different pathological abnormalities commonly occur in various areas of the lung field. To identify and localize the exact location of the lesion more accurately, while excluding the large number of disease-irrelevant regions in the CXR image, this paper adopts the coordinate attention (CA) mechanism. As shown in Figure 4, after the input feature map passes through the multi-scale focal feature extraction module, the CA block alleviates the loss of location information caused by 2D global pooling by using two 1D global pooling operations to aggregate the input features along the X and Y directions into two independent direction-aware feature maps. This allows long-distance dependencies to be captured along one spatial direction while precise position information is retained along the other. The two maps are then concatenated and convolved so that the information in both directions interacts. After batch normalization and a non-linear activation function, this feature map is split and convolved separately, focusing on the horizontal and vertical directions, and the two feature maps embedded with direction-specific information are encoded as two attention maps. The coordinate attention weight generation step uses the location-rich features produced in the coordinate embedding stage, combined with the relationships between channels, to generate channel weight coefficients $g^h$ and $g^w$ in the X and Y directions that reweight the input feature maps. The weights in both directions attend to both location and channel, reinforcing the channel information and the location information of the lesion area, so the network can accurately localize the lesion, automatically capture location changes, and achieve better classification.
Assume that we have the input $X = [x_1, x_2, \ldots, x_C] \in \mathbb{R}^{C \times H \times W}$, where $C$, $H$, and $W$ represent the number of channels, height, and width of the feature map, respectively. The first step is the embedding of the coordinate information. Given the input $X$, each channel is encoded along the horizontal and vertical coordinates using pooling kernels of spatial extent $(H, 1)$ or $(1, W)$, respectively. Thus, the output of the $c$th channel at height $h$ can be expressed as:
$$z_c^h(h) = \frac{1}{W} \sum_{0 \le i < W} x_c(h, i)$$
Similarly, the output of the $c$th channel at width $w$ can be written as:
$$z_c^w(w) = \frac{1}{H} \sum_{0 \le j < H} x_c(j, w)$$
In the next step of coordinate attention generation, given the aggregated feature maps generated by Equations (3) and (4), this study first concatenates and then passes them through the shared 1 × 1 convolutional transform function F to obtain:
$$f = \delta\big(F([z^h, z^w])\big)$$
where $[\cdot,\cdot]$ stands for the concatenation operation, $z^h$ and $z^w$ are the 1D feature vectors output after global average pooling in the two directions, $F$ stands for the convolution operation, $\delta$ for the batch normalization and hard-swish activation calculation, and $f$ for the intermediate feature map output by this operation.
Then $f$ is divided into two independent tensors $f^h$ and $f^w$ along the different spatial dimensions, which are each restored to a tensor with the same number of channels as the input $X$ by a 1 × 1 convolution operation, and the attention weights are mapped to the range (0, 1) by the Sigmoid function. The output can be expressed as:
$$g^h = \sigma\big(F_h(f^h)\big)$$
$$g^w = \sigma\big(F_w(f^w)\big)$$
where $F_h$ and $F_w$ are 1 × 1 convolution operations, $\sigma$ stands for the Sigmoid activation function, and $g^h$ and $g^w$ are the attention weights. The output feature map can be expressed as:
$$y_c(i, j) = x_c(i, j) \times g_c^h(i) \times g_c^w(j)$$
where $c$ stands for the channel, $x_c(i, j)$ for the input feature map, and $y_c(i, j)$ for the output feature map.
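A minimal PyTorch sketch of the coordinate attention block following Equations (3)–(8) is given below; the reduction ratio and intermediate channel count are illustrative assumptions, not the exact settings of HRCC-Net.

```python
# A minimal sketch of coordinate attention (Equations (3)-(8)).
import torch
import torch.nn as nn

class CoordinateAttention(nn.Module):
    def __init__(self, channels, reduction=32):
        super().__init__()
        mid = max(8, channels // reduction)
        self.conv1 = nn.Conv2d(channels, mid, 1)   # shared 1x1 transform F
        self.bn = nn.BatchNorm2d(mid)
        self.act = nn.Hardswish()                  # delta: BN + hard-swish
        self.conv_h = nn.Conv2d(mid, channels, 1)  # F_h
        self.conv_w = nn.Conv2d(mid, channels, 1)  # F_w

    def forward(self, x):
        b, c, h, w = x.shape
        # 1D global pooling along each spatial direction, Eqs. (3)-(4)
        z_h = x.mean(dim=3, keepdim=True)                          # (b, c, h, 1)
        z_w = x.mean(dim=2, keepdim=True).permute(0, 1, 3, 2)      # (b, c, w, 1)
        # concatenate, transform, and split, Eq. (5)
        f = self.act(self.bn(self.conv1(torch.cat([z_h, z_w], dim=2))))
        f_h, f_w = f.split([h, w], dim=2)
        # direction-specific attention weights, Eqs. (6)-(7)
        g_h = torch.sigmoid(self.conv_h(f_h))                      # (b, c, h, 1)
        g_w = torch.sigmoid(self.conv_w(f_w.permute(0, 1, 3, 2)))  # (b, c, 1, w)
        return x * g_h * g_w                                       # Eq. (8)

y = CoordinateAttention(64)(torch.randn(2, 64, 28, 28))   # same shape as input
```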

3.4. Weighted Focal Loss Function

This paper uses the ChestX-ray14 dataset, where the label of each chest radiograph can be represented as a 14-dimensional vector, i.e.,
$$Y = [y_1, y_2, \ldots, y_n]$$
where $n = 14$, representing the 14 diseases; $y_i\ (i = 1, 2, \ldots, n)$ is the category label for each condition: $y_i = 1$ means having the disease and is considered positive, while $y_i = 0$ means not having the disease and is considered negative.
Due to the diversity of textural and hierarchical features of each disease in the ChestX-ray14 dataset, and the class imbalance in the number of samples, the classification difficulty of each illness varies greatly. The loss function used in this paper is an improvement on the focal loss (FL) function to address these issues. Let $p_i$ represent the prediction probability of the network model for label $y_i = 1$. Then the focal loss function for each disease is formulated as [37]:
$$FL(p_i) = \begin{cases} -\alpha\,(1 - p_i)^{\gamma} \ln p_i, & y_i = 1 \\ -(1 - \alpha)\, p_i^{\gamma} \ln(1 - p_i), & y_i = 0 \end{cases}$$
where the role of $\alpha$ is to balance the number of positive and negative samples for each disease, usually set to 0.25; the role of $\gamma$ is to balance the contributions of easy and difficult-to-classify samples, and $\gamma$ is traditionally taken as 2. The expression of the focal loss function depends on whether $y_i$ takes the value 0 or 1.
However, for the ChestX-ray14 dataset, the information on texture, size, and location presented on radiographs varies with each disease, resulting in a significant difference in the difficulty of classifying each disease label. The results of several experiments show that the classification accuracy of different diseases varies greatly; for example, infiltration is classified more accurately, while hernia is classified less accurately. In the training process, to improve the classification accuracy of the difficult-to-classify diseases and thus improve the overall classification performance, the most direct method is to increase the weight of the difficult-to-classify diseases and decrease the weight of the easily classified conditions. In addition, the larger the difference between the accuracy of a difficult-to-classify disease and the average accuracy, the more significant the change in the corresponding weight for that disease should be and the larger the range of variation of its associated loss function, thus skewing the network’s computational resources towards the difficult-to-classify conditions. Therefore, to make the network more effective in adjusting the corresponding weights according to the difficulty of disease classification, this paper adds weight coefficients to the FL function as the corresponding loss function for each disease and calls it the WFL function, defined as:
$$WFL(p_i) = \frac{1/\alpha_i}{\sum_{j=1}^{N} 1/\alpha_j}\, FL(p_i) = \begin{cases} -\alpha\, \dfrac{1/\alpha_i}{\sum_{j=1}^{N} 1/\alpha_j}\,(1 - p_i)^{\gamma} \ln p_i, & y_i = 1 \\ -(1 - \alpha)\, \dfrac{1/\alpha_i}{\sum_{j=1}^{N} 1/\alpha_j}\, p_i^{\gamma} \ln(1 - p_i), & y_i = 0 \end{cases}$$
where $i, j = 1, 2, \ldots, N$, with $N$ the total number of classification labels; $N = 14$ since 14 diseases are classified in this paper; and $\alpha_j$ stands for the classification accuracy of the disease corresponding to label $j$ in the last round of training.
From Equation (11), it can be seen that the loss weights are linearly related to the inverse of the disease classification accuracy, so the overall classification accuracy can be improved by increasing the proportion of losses for difficult-to-classify diseases so that the network applies more attention to them.
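A minimal PyTorch sketch of the weighted focal loss in Equation (11) is shown below; the per-class accuracies from the previous training round are assumed to be supplied as a 14-element tensor prev_acc, and the values alpha = 0.25 and gamma = 2 follow the text above. The function name and signature are illustrative.

```python
# A minimal sketch of the weighted focal loss (WFL), Equation (11).
import torch

def weighted_focal_loss(probs, targets, prev_acc, alpha=0.25, gamma=2.0, eps=1e-7):
    """probs, targets: (batch, 14) tensors; prev_acc: (14,) accuracies in (0, 1]."""
    probs = probs.clamp(eps, 1.0 - eps)
    # per-class weight proportional to 1 / accuracy, normalized over all labels
    inv_acc = 1.0 / prev_acc
    class_w = inv_acc / inv_acc.sum()                              # (14,)
    pos = -alpha * (1 - probs) ** gamma * torch.log(probs)         # y_i = 1 branch
    neg = -(1 - alpha) * probs ** gamma * torch.log(1 - probs)     # y_i = 0 branch
    fl = torch.where(targets == 1, pos, neg)
    return (class_w * fl).mean()

loss = weighted_focal_loss(torch.sigmoid(torch.randn(4, 14)),
                           torch.randint(0, 2, (4, 14)).float(),
                           torch.full((14,), 0.8))
```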

4. Experimentation and Analysis of Results

4.1. Datasets and PreProcessing

This experiment evaluated our method on the ChestX-ray14 dataset and used the CheXpert dataset as a complementary validation experiment to verify the classification performance for uncertainty-label-containing pathologies. The AUC scores for each pathology and the mean AUC scores for all pathologies were reported separately.
ChestX-ray14: The experiments use the large multi-label dataset ChestX-ray14 collated by the National Institutes of Health (NIH). The dataset contains a total of 112,120 frontal-view X-ray chest films with 14 different diseases. Of these, 60,361 images are labeled as “No Finding”; the rest are labeled as one or more chest diseases. The distribution of different conditions in the dataset is shown in Figure 5. A few disease categories have fewer samples, such as hernia, pneumonia, and fibrosis, while conditions such as infiltration and effusion have more samples. This imbalance of sample numbers increases the difficulty of model classification. In this paper, the dataset is randomly partitioned into a training set, a test set, and a validation set in the ratio of 7:2:1, ensuring that the same images do not appear in more than one of the three parts.
CheXpert: A large dataset of chest radiographs with uncertainty labels and expert comparisons. It contains 224,316 chest radiographs of 65,240 patients. A total of 14 observations are labeled in radiology reports, capturing uncertainties inherent in radiography interpretation. Samples are marked as positive, negative, or indeterminate (with different types of thoracic disease) based on the observations. The HRCC-Net is evaluated and compared for performance on the validation set segmented by [38]. It is also evaluated for five competing pathologies (atelectasis, cardiomegaly, consolidation, edema, and pleural effusion).

4.2. Experimental Details

This paper’s experimental code is based on Python 3.6; the server environment is Ubuntu 16.04. The experiments are based on the PyTorch 1.10.1 framework for network construction and training, and the driver version is CUDA 11.3. The CPU is an Intel Core i9-9900X, and the graphics cards are Nvidia RTX 2080Ti (11 GB) × 4. Training stops when the validation loss reaches stability, and the Adam optimizer is used for optimization. Among the experimental parameters, the initial learning rate Lr is a crucial hyperparameter in model training. This study experimentally compares the classification performance of the model with different initial learning rates and batch sizes. Specifically, Lr is chosen from 0.0006, 0.0008, 0.001, 0.0012, and 0.0014 in the proposed HRCC-Net model, batch sizes of 64 and 128 are chosen, and the maximum number of training epochs is determined by the validation set: training stops when the loss on the validation set reaches its minimum and becomes stable. Experimental results (Table 2) show that with a batch size of 128, the mean AUC improves as Lr increases from 0.0006 to 0.001, and at Lr = 0.001 the average AUC of the classification model reaches an optimum of 0.845. When Lr > 0.001, the model’s performance decreases and deviates from the optimal value. Therefore, to achieve the best classification effect, we set the initial learning rate Lr to 0.001, the batch size to 128, and the maximum number of training epochs to 80.
To objectively and comprehensively evaluate the network during training, the following data augmentation techniques were applied to the training set in this study. The original chest film image in the dataset is a grayscale image of 1024 × 1024 pixels. To reduce the computational effort and match the network model, the image is scaled to 256 × 256 pixels and converted to RGB 3-channel format. The image is then randomly cropped around the center to 224 × 224 pixels and randomly flipped horizontally for data augmentation. Finally, the image is converted to tensor format and normalized, with pixel values restricted to 0–255, for input to the network.
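The preprocessing steps above can be expressed, for illustration, with a torchvision transform pipeline such as the sketch below; the use of RandomCrop and the ImageNet normalization statistics are assumptions, since the paper does not specify the exact implementation.

```python
# A minimal sketch of the training-set preprocessing and augmentation.
from torchvision import transforms

train_transform = transforms.Compose([
    transforms.Resize((256, 256)),                 # scale 1024x1024 film to 256x256
    transforms.Grayscale(num_output_channels=3),   # convert to RGB 3-channel format
    transforms.RandomCrop(224),                    # random 224x224 crop
    transforms.RandomHorizontalFlip(),             # random horizontal flip
    transforms.ToTensor(),                         # to a tensor for the network
    transforms.Normalize([0.485, 0.456, 0.406],    # assumed ImageNet statistics
                         [0.229, 0.224, 0.225]),
])
```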

4.3. Evaluation Metrics

This paper defines thoracic disease classification as a 14-dimensional binary classification task for the multi-label classification problem, i.e., for each disease label each CXR image falls into only two cases: the positive class (containing the label) and the negative class (not containing the label). To objectively and comprehensively evaluate the diagnostic performance of the network and to facilitate comparison with other algorithms, this paper uses the receiver operating characteristic (ROC) curve to represent the algorithm’s ability to identify each disease and calculates the area under the ROC curve (AUC) for quantitative analysis and comparison. The ROC curve is used to analyze the classification behavior of a binary classification model, which outputs positive and negative classes, and can visually reflect the performance of the classifier. Its horizontal axis is the false positive rate (FPR), the proportion of all negative samples that the classifier misclassifies as positive. The vertical axis is the true positive rate (TPR), the proportion of all positive samples that the classifier correctly classifies as positive, calculated as [39]:
$$FPR = \frac{FP}{FP + TN}$$
$$TPR = \frac{TP}{TP + FN}$$
Meanwhile, the average accuracy, sensitivity, specificity and F1 are used as additional evaluation metrics to further validate the classification performance of the proposed method, which can be expressed as:
$$Accuracy = \frac{TP + TN}{TP + FP + FN + TN}$$
$$Sensitivity = \frac{TP}{TP + FN}$$
$$Specificity = \frac{TN}{FP + TN}$$
$$F1\text{-}score = \frac{2 \times TP}{2 \times TP + FP + FN}$$
FP is the false positive count, TN the true negative count, TP the true positive count, and FN the false negative count. The ROC curve depicts the trade-off between the true positive rate (vertical) and the false positive rate (horizontal). Therefore, the closer the curve is to the top left corner, the better the classification performance of the algorithm, indicating that the network obtains a high true positive rate while keeping the false positive rate low. In the ROC plot, the line f(x) = x represents random guessing, i.e., the poorest useful performance of a classifier, so the ROC curve generally lies above this line.
The AUC value represents the area under the ROC curve. It is a probability value in the range [0.5, 1], indicating the probability that the prediction score of a randomly chosen positive sample is greater than that of a randomly chosen negative sample. Therefore, the AUC value is positively correlated with the accuracy of disease classification: the higher the AUC value, the higher the classification accuracy for the corresponding disease. The average AUC value is used to evaluate the overall classification performance of the network model.
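For reference, per-disease AUC values and their mean can be computed with scikit-learn as in the sketch below; the array names and shapes are assumptions (ground-truth labels and predicted probabilities for the 14 classes).

```python
# A minimal sketch of per-disease and mean AUC computation.
import numpy as np
from sklearn.metrics import roc_auc_score

def per_class_auc(y_true, y_score):
    """y_true, y_score: (num_samples, 14) arrays of binary labels / probabilities."""
    aucs = [roc_auc_score(y_true[:, c], y_score[:, c]) for c in range(y_true.shape[1])]
    return np.array(aucs), float(np.mean(aucs))

y_true = np.random.randint(0, 2, size=(1000, 14))   # placeholder labels
y_score = np.random.rand(1000, 14)                  # placeholder predictions
aucs, mean_auc = per_class_auc(y_true, y_score)
```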

4.4. Results and Analysis

4.4.1. Results on ChestX-ray14

The ROC curves in Figure 6 depict the performance of the algorithm in this paper on the ChestX-ray14 dataset. The figure shows the ROC curve for each disease and the average ROC curve for the 14 diseases. From the definition of the ROC curve, the point (0, 0) in the lower-left corner indicates that all samples are predicted as the negative class, and the point (1, 1) in the upper-right corner indicates that all samples are predicted as the positive class. The point (1, 0) in the lower-right corner means that all positive samples are predicted as negative; at this point, the classification is the worst. The point (0, 1) in the upper-left corner indicates that all samples are correctly classified; at this point, the best classification is achieved.
As shown in Figure 6, all curves are located above the line f(x) = x, and most curves lie toward the upper-left corner overall, indicating that the algorithm has good overall classification performance for the 14 diseases, with an average AUC of 0.845, confirming the algorithm’s effectiveness. Meanwhile, the 14 curves are densely clustered on the axes, indicating that our algorithm achieves a certain degree of balance in classification accuracy across the diseases.
This study applied the focal loss function and the weighted focal loss function to the proposed model in separate experiments, as shown in Figure 7. This paper records the average AUC value at each epoch and stops training and testing when the loss function reaches stability. The curves are plotted on the same axes, with the training epoch as the horizontal axis and the average AUC value as the vertical axis. When WFL is applied, the curve rises smoothly and the model performance is better, showing that WFL is more effective in improving the learning ability of the model. It allows the network to direct more attention to the more difficult-to-classify diseases, thus optimizing the overall classification performance.
To validate the performance of the proposed network, this paper compares HRCC-Net with nine recent deep-learning networks on the ChestX-ray14 dataset. The comparison networks chosen are those with the best classification results so far: Wang et al. [14] proposed a benchmark classification network based on ResNet-50, Yao et al. [40] suggested a weakly supervised diagnosis network, Ma et al. [41] proposed a multi-attention classification network based on ResNet-101, Guan et al. [10,13] presented a categorical residual attention learning (CRAL) framework based on ResNet and DenseNet as well as a two-branch network constructed via DenseNet-121, Wang et al. [9] proposed a three-attention learning A3Net model based on DenseNet-121, Ouyang et al. [42] proposed a hierarchical attention learning algorithm for weakly supervised learning, Zhu et al. [43] proposed a pixel-level classification and attention network (PCAN) with DenseNet-121 as the backbone, and Chen et al. [39] proposed a semantic similarity graph embedding (SSGE) framework. The multi-resolution backbone network is used as the baseline network. Table 3 gives the AUC per category and the overall average obtained from our model and the other nine methods. The highest AUC in each row is highlighted in bold.
First, the performance of the baseline network was evaluated on the ChestX-ray14 dataset; the average AUC of the multi-resolution network reaches 0.833, which is competitive with all the other networks. Regarding individual pathologies, the baseline network shows an improvement of more than 2% for diseases with difficult-to-identify pathological features, namely effusion, pneumonia, consolidation, and edema. Notably, the AUC of effusion improves by about 4.3% from 0.840 to 0.883, and the AUC of consolidation improves by about 4.7% from 0.763 to 0.810. Cardiomegaly obtains the best classification accuracy among the compared methods (0.915). For other disorders, such as infiltration and hernia, the differences relative to Wang et al. [9] are smaller. In addition, it was observed that nodule and pleural thickening are identified with low accuracy by several methods, indicating that those networks have insufficient ability to detect small target diseases and cannot focus on the disease areas. However, in the final model, introducing the multi-scale attention module results in a roughly 2% improvement in accuracy for these diseases.
Then, the method in this paper is compared with those of others. Chen et al.’s method is the most advanced method at present; meanwhile, the model in this paper achieves an average accuracy of 0.845 for identifying the 14 diseases, which is the best among all methods. The improvement is most apparent compared with [14,40], with gains of 10% and 8.4%, respectively, and 1.2% compared to the baseline. Regarding the AUC values for each disease, our model achieves the highest AUC in 10 of the 14 pathologies. Most importantly, this model adapts to diseases of different scales, all with a clear enhancement, such as cardiomegaly, pneumothorax, effusion, and mass. Among these pathologies, the AUCs of atelectasis, effusion, pneumonia, consolidation, and edema are all improved by 3% or more compared to other methods. For some other pathologies, the improvement is less noticeable, e.g., hernia and fibrosis. In addition, the baseline network performs well on fibrosis, but the accuracy decreases after introducing the multi-scale module, which is worth exploring. This difference may be due to the small number of positive samples (e.g., only 227 hernia and 1686 fibrosis samples). In summary, this model is competitive with current deep learning networks.

4.4.2. Results on CheXpert

The performance evaluation of HRCC-Net is conducted on the CheXpert dataset to assess its comprehensiveness and robustness. This paper primarily focuses on comparing the results obtained using the single-model architecture. On the CheXpert dataset, due to the uncertainty setting of the training labels, this paper explicitly merges the uncertainty labels in CheXpert with the help of two uncertainty labeling methods commonly used in multi-label classification [38]: (1) replacing all uncertainty labels with “0”; (2) replacing all uncertainty labels with “1” labels. Table 4 illustrates the experimental results, using the multi-resolution backbone network as the baseline, the average AUC scores for the five pathologies reached 0.875 and 0.879 for the “0” and “1” strategies, respectively. With HRCC-Net, both improve by more than 2% with 0.904 and 0.913, respectively. In particular, when the uncertainty label is set to “1” the mean AUC score exceeds the corresponding baseline by approximately 3.5%. When the uncertainty label is set to “0”, the performance of cardiomegaly and edema improved significantly. When set to “1”, the performance of atelectasis and edema enhanced considerably in terms of AUC score. This paper compares the method with several other ways. When the uncertainty label is set to “1”, our model performs best in the AUC and the average AUC for all pathologies except atelectasis. When set to “0”, the performance of disease classification is improved to a certain extent compared to other methods.

4.4.3. Computational Consumption Analysis

In addition, computational consumption such as the number of parameters and floating point operations (FLOPs) is also a factor that should be considered in experiments. The number of parameters and FLOPs of HRCC-Net are given in Table 5. Taking 256 × 256 input images as an example and using the multi-resolution backbone network as the baseline, the model parameters increase from 20.9 M to 23.1 M after adding the CondConv module, and the FLOPs increase from 9.71 G to 10.87 G. The model parameters and computational effort increase, but not by much, indicating that the CondConv module can maintain efficient inference while improving the multi-scale representation of the convolutional layers. With the addition of the CA module, the FLOPs and parameters increase slightly; the CA module focuses on the relevant focal areas without adding too much computation. Moreover, the training time differs with the batch size used. In our experiments, when the batch size is 64, it takes about 22 h to train the HRCC-Net network, consuming approximately 28.3 G of GPU memory. When the batch size is set to 128, the experimental devices can be fully utilized, occupying 36.2 G of GPU memory and reducing the training time by one-third.

4.4.4. Other Parameters Comparison

The comparative results of the above evaluation metrics are summarized in Table 6, which shows that HRCC-Net improves on the best results of the other models in terms of accuracy, sensitivity, specificity, and F1 score by 1.3%, 0.4%, 1.3%, and 1.2%, respectively, indicating that the model in this paper obtains superior classification results. In general, the performance of a medical diagnostic system can be measured by its specificity and sensitivity; the increases in sensitivity and specificity indicate that the method can improve diagnostic efficiency while reducing the rate of misdiagnosis.
The performance of the proposed HRCC-Net is also measured with other technical parameters, such as FLOPs and test time. As shown in Table 7, for the 25,596 test images provided in the ChestX-ray14 test set, the time cost is recorded and averaged at the end of the test run. The experimental results show that the method in this paper requires less computation while achieving a higher average AUC value than other networks. The computational effort is larger than that of [10], but the AUC values and average test times are satisfactory, indicating that the algorithm can maintain efficient inference and diagnosis with a reasonable computational effort. Overall, the method is valuable for improving the performance of chest disease classification on the ChestX-ray14 dataset.

4.5. Ablation Study

The unique feature of the proposed HRCC-Net is the use of the multi-resolution network HRNet, the CondConv module, and the CA mechanism. To assess the effectiveness of each module or component in the model, ablation experiments are conducted on the ChestX-ray14 dataset. In this case, a DenseNet-121 pre-trained on ImageNet is used as the baseline network for this experiment, the last global average pooling is replaced with global max pooling, and the last classifier is replaced with a 14-dimensional fully connected layer. The proposed model is compared with the DenseNet-121 baseline network while activating one or more modules. The results of the ablation experiments are shown in Table 8. It is observed that the average AUC is up to 4% higher than the baseline network when using only the HRNet backbone network. The high-resolution branch of the multi-resolution backbone network preserves key detailed features of the disease, and the multi-resolution fusion module provides much richer contextual information to characterize complex pathologies, so the ability to identify the disease is substantially improved. With the addition of the CondConv module, the average AUC increases by 0.7%. This module improves the multi-scale expression capability of the network, which can capture the lesion size adaptively and adapt to the dimensional changes of various diseases. The introduction of the CA module enables the model to obtain better performance: the CA module lets the network focus more on the focal region, concentrate information processing on the target region, and exclude the interference of irrelevant tissues. By combining them, the average AUC score reaches 84.5%, 5.2% higher than the baseline network.

4.5.1. Selection of Backbone Network

In this paper, the backbone network is critical in determining classification performance. A multi-resolution branching parallel network, HRNet [8], is chosen as the backbone classification network in this study to maintain a high-resolution representation. Some current frameworks (e.g., ResNet, DenseNet) encode the input image as a low-resolution representation by concatenating high- to low-resolution convolutions. This experiment compares HRNet with the basic VGGNet-16, ResNet-101, and DenseNet-121 networks using pre-trained weights. For these CNN networks, transfer learning allows the extraction of as many common features as possible from a large amount of training data, thus lightening the learning burden of the model for a specific task. For the current multi-label thoracic disease classification task, some pre-trained parameters may not be suitable, so the parameters were fine-tuned on our task to obtain better results. Weights pre-trained on ImageNet are loaded, the weight parameters of all layers except the last fully connected layer are frozen, and the final classifier is replaced with a 14-dimensional fully connected layer. The results are shown in Figure 8, showing that the serially connected networks above indeed lose some feature information during repeated convolution, whereas the HRNet network with parallel multi-resolution convolution and feature fusion units extracts the critical details of the lesions and adequately learns the complicated appearance of the disease to distinguish lesions with different characteristics. In terms of classification results, cardiomegaly, nodule, mass, and emphysema all show significant improvement, and for some diseases with strong inter-class similarity, such as infiltration, the improvement is also obvious.

4.5.2. Effectiveness of the CondConv Module

HRCC-Net uses multiple convolution kernels to extract multi-scale information from lesions, obtaining multiple receptive fields at a finer granularity for feature extraction. The experiments use the multi-resolution parallel network as the baseline to demonstrate the necessity of the CondConv module, as Table 9 shows. In this comparison, the Inception module and multi-branch grouped convolution are compared with the CondConv module. The Inception module can improve model performance but is computationally expensive. In contrast, multi-branch grouped convolution is relatively lightweight but yields only a slight improvement in model performance. The CondConv module dynamically aggregates multiple parallel convolution kernels based on attention weights in a way that requires only one convolution to be computed, so it adapts to variations in disease size without increasing the computational cost.

4.5.3. Effectiveness of the CA Module

To demonstrate the performance of coordinate attention, a series of ablation experiments are performed with a multi-resolution network as the baseline, as shown in Table 10. The importance of encoding coordinate information is understood by removing horizontal or vertical attention from coordinate attention. The model’s performance with attention along any direction is comparable to that of the model with SE attention. The SE module only considers the channel features. In chest radiograph images, due to the variations in lesion size and location, spatial features must also be considered to focus on the lesion area and identify the spatial location of the abnormal pathology. Meanwhile, the CBAM is compared with the CA module, where the former uses dual attention but tries to exploit positional information by reducing the channel dimensionality of the input data, and then uses convolution to compute spatial attention that only captures local relationships. On the other hand, CA embeds positional information into channel attention, considering both pixel and location features, which can capture the long-distance dependence of lesions. The results show that our adopted method can identify and localize the exact location of the lesion more accurately.

4.6. Visualized Analysis

To further demonstrate the effectiveness of the HRCC-Net network, the decision process of the model is visualized using the gradient-weighted class activation mapping (Grad-CAM) method. The generated heat maps qualitatively show whether the model focuses on the correct location and whether the learned features are rich, providing an explanatory view of the model’s effectiveness. The absolute feature value of each position is first obtained from the last convolutional layer of the HRCC-Net model, and then the maximum value is computed along the feature channel. This experiment uses 983 chest radiographs with disease locations from eight diseases in the ChestX-ray14 dataset to map the disease annotations. As shown in Figure 9, it can be observed that discriminative regions of the images are activated, allowing a visual assessment of the network’s ability to locate lesion locations and learn disease features. The brighter the color of an area in the heat map, the more attention the model pays to that location and the richer the learned features. The results show that the multi-scale attention module allows the model to adapt to variations in the size of pathological abnormalities, automatically focus on lesion regions, and learn abundant disease features to identify lesions accurately. Both larger chest diseases, such as cardiomegaly in Figure 9b, and small diseases, such as the mass in Figure 9e, can be captured automatically. For diseases at different locations, such as pneumothorax, the regions of high activation approximately match the bounding box of the pathological abnormality, indicating that the network can identify the spatial location of the pathological abnormality and accurately localize the diseased region.
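A minimal sketch of Grad-CAM heat-map generation with forward and backward hooks is shown below; model and target_layer are assumptions (any trained classifier and its last convolutional layer), and this is a generic Grad-CAM routine rather than the exact visualization code used for Figure 9.

```python
# A minimal sketch of Grad-CAM: weight the last convolutional feature maps by
# the spatially averaged gradients of the chosen class score.
import torch
import torch.nn.functional as F

def grad_cam(model, image, target_layer, class_idx):
    """image: (1, 3, H, W) tensor; returns a (1, 1, H, W) heat map in [0, 1]."""
    feats, grads = [], []
    h1 = target_layer.register_forward_hook(lambda m, i, o: feats.append(o))
    h2 = target_layer.register_full_backward_hook(lambda m, gi, go: grads.append(go[0]))
    model.zero_grad()
    score = model(image)[0, class_idx]          # logit of the chosen disease
    score.backward()
    h1.remove(); h2.remove()
    # channel weights = global average of the gradients
    w = grads[0].mean(dim=(2, 3), keepdim=True)                    # (1, C, 1, 1)
    cam = F.relu((w * feats[0]).sum(dim=1, keepdim=True))          # (1, 1, h, w)
    cam = F.interpolate(cam, size=image.shape[2:], mode='bilinear',
                        align_corners=False)
    return (cam - cam.min()) / (cam.max() - cam.min() + 1e-8)
```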

5. Discussion

This study proposes a multi-scale high-resolution classification network with dynamic convolution and coordinate attention (HRCC-Net). Several characteristics of CXR images motivate this design: some thoracic diseases appear highly similar, lesion sizes vary considerably, different conditions occur at various locations in the thorax, and large areas of the images are unrelated to the disease. Existing CNN-based thoracic disease classifiers extract lesion features insufficiently and lose essential details, so their recognition accuracy is less than optimal, and this scale variability and location diversity make it difficult for such models to adapt to changes in the scale and position of the disease. This study therefore proposes a multi-scale high-resolution classification network to classify complex and diverse multi-label thoracic diseases. Evaluated on the ChestX-ray14 dataset, the proposed model achieves an average AUC of 0.845 in classifying 14 thoracic diseases. In addition, this study validates the performance of HRCC-Net on the CheXpert dataset, showing that the proposed solution can be extended and retrained for the classification of CXR images of various pathologies; the algorithm is thus generalizable and robust.
Compared with the other methods in Table 3, the network in this paper obtains satisfactory results. Even with only the multi-resolution backbone, the classification results are competitive, because the network maintains high-resolution representations of the images during feature extraction: the high-resolution branch extracts the vital detail features that distinguish lesions, while the multi-resolution fusion module acquires multiple receptive fields and rich semantic information, allowing subtle differences between lesions to be identified and improving classification accuracy. The final HRCC-Net model is more accurate still and adapts to diseases of different sizes, such as cardiomegaly and pleural thickening. This is explained by the multi-scale convolution module, which extracts multi-scale information in each convolutional layer to accommodate variations in disease size, and by the coordinate attention module, which attends to both pixel features and location information so that the network automatically focuses on disease-relevant regions and captures changes in lesion location. Model complexity is also considered: the model achieves high inference performance with relatively low complexity, and the multi-scale convolution and coordinate attention modules do not add excessive computational cost.
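To make the fusion idea concrete, the following is an illustrative two-branch exchange step in PyTorch: the high-resolution branch is downsampled into the low-resolution branch and the low-resolution branch is upsampled back, so each branch combines fine detail with a larger receptive field. The channel counts, strided convolution, and additive fusion are assumptions for illustration, not the exact HRCC-Net fusion module.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class TwoBranchExchange(nn.Module):
    """Illustrative multi-resolution exchange between two parallel branches."""
    def __init__(self, ch_high=48, ch_low=96):
        super().__init__()
        self.down = nn.Conv2d(ch_high, ch_low, kernel_size=3, stride=2, padding=1)
        self.up = nn.Conv2d(ch_low, ch_high, kernel_size=1)

    def forward(self, x_high, x_low):
        # x_high: (B, 48, 56, 56); x_low: (B, 96, 28, 28) in this sketch
        fused_high = x_high + F.interpolate(self.up(x_low), size=x_high.shape[-2:],
                                            mode='bilinear', align_corners=False)
        fused_low = x_low + self.down(x_high)      # stride-2 conv halves the resolution
        return fused_high, fused_low

h, l = TwoBranchExchange()(torch.randn(2, 48, 56, 56), torch.randn(2, 96, 28, 28))
```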
Moreover, the effectiveness of each module is verified through ablation experiments. The multi-resolution network yields significantly better classification results than the other backbone networks. The Inception module, containing multiple convolution kernels, improves model capability but introduces a large amount of computation, whereas multi-branch group convolution is lightweight but inferior at extracting multi-scale information. Dynamic convolution allows the network to adapt to variations in disease size without greatly increasing the computational cost. The coordinate attention mechanism performs better than SE and CBAM attention, enabling more accurate identification and localization of lesions. Although the network attends to spatial information as well as channel information, it does not exploit the correlations between different chest diseases.

6. Conclusions

This study proposes a multi-scale high-resolution classification network with dynamic convolution and coordinate attention (HRCC-Net) to classify complicated and diverse multi-label thoracic diseases, capable of extracting critical detail features of lesions and capturing changes in their size and location. The network consists of multiple branches from high to low resolution, maintaining high-resolution representations of the input image throughout the path, while the multi-resolution swapping module gathers abundant disease features and thus extracts the critical detail information needed to identify complicated thoracic diseases, especially those with similar appearance. Dynamic convolution is used in each convolutional layer to acquire multi-scale receptive fields and thereby obtain multi-scale information about the illness, accommodating lesions of different scales. In addition, the coordinate attention mechanism attends to the disease's pixel and location features, identifying the spatial location of abnormal regions and capturing changes in lesion location. The method is evaluated on the ChestX-ray14 and CheXpert datasets, with average AUCs of 0.845 and 0.913, respectively. Comparisons with several currently advanced methods demonstrate the competitiveness of HRCC-Net, and a series of ablation experiments verifies the effectiveness of each component. Although the proposed method achieves high classification performance, it still has limitations: it does not exploit the correlations between different thoracic diseases, and it assumes that all labels in the ChestX-ray datasets are valid, ignoring the error rate of the labels.
In future work, the following issues will be studied further. (1) Because multiple diseases are intrinsically correlated, the potential semantic relationships between diseases can be explored by introducing a Transformer to encode this prior knowledge. (2) Handling the uncertainty introduced by noisy labels: most existing classification methods ignore the fact that labels are rarely completely accurate [47]. Multiple levels of weight assignment and replacement can be applied to the labels to suppress noise and thus reduce its interference. (3) Processing the ambiguity and uncertainty of medical images: a CNN alone cannot handle the uncertainty and fuzziness present in the images; with the help of fuzzy sets, irrelevant noise and undesired background regions can be handled during the image fusion process [48,49].

Author Contributions

Methodology and writing-original draft preparation, J.G. and X.G.; writing-review and editing, M.C. and Q.L.; resources, M.C. and J.G.; data curation, M.C. and J.G.; software, M.C.; supervision, X.G. and M.J.A.; project administration, X.G.; funding acquisition, Q.L. All authors have read and agreed to the published version of this manuscript.

Funding

This research was supported by the National Natural Science Foundation of China under Grant No. 61471263, No. 61872267 and U21B2024, the Natural Science Foundation of Tianjin, China, under Grant 16JCZDJC31100, and Tianjin University Innovation Foundation under Grant 2021XZC-0024.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

Data will be made available.

Conflicts of Interest

The authors declare no conflict of interest.

References

1. Mulrenan, C.; Rhode, K.; Fischer, B.M. A Literature Review on the Use of Artificial Intelligence for the Diagnosis of COVID-19 on CT and Chest X-ray. Diagnostics 2022, 12, 869.
2. Nneji, G.U.; Cai, J.; Deng, J.; Monday, H.N.; James, E.C.; Ukwuoma, C.C. Multi-Channel Based Image Processing Scheme for Pneumonia Identification. Diagnostics 2022, 12, 325.
3. Dunnmon, J.A.; Yi, D.; Langlotz, C.P.; Ré, C.; Rubin, D.L.; Lungren, M.P. Assessment of convolutional neural networks for automated classification of chest radiographs. Radiology 2019, 290, 537–544.
4. Williams, B.G.; Gouws, E.; Boschi-Pinto, C.; Bryce, J.; Dye, C. Estimates of world-wide distribution of child deaths from acute respiratory infections. Lancet Infect. Dis. 2002, 2, 25–32.
5. Zhang, J.; Xie, Y.; Xia, Y.; Shen, C. Attention Residual Learning for Skin Lesion Classification. IEEE Trans. Med. Imaging 2019, 38, 2092–2103.
6. Xie, Y.; Xia, Y.; Zhang, J.; Song, Y.; Feng, D.; Fulham, M.; Cai, W. Knowledge-based Collaborative Deep Learning for Benign-Malignant Lung Nodule Classification on Chest CT. IEEE Trans. Med. Imaging 2019, 38, 991–1004.
7. Wang, H.; Jia, H.; Lu, L.; Xia, Y. Thorax-Net: An attention regularized deep neural network for classification of thoracic diseases on chest radiography. IEEE J. Biomed. Health Inform. 2019, 24, 475–485.
8. Wang, J.; Sun, K.; Cheng, T.; Jiang, B.; Deng, C.; Zhao, Y.; Liu, D.; Mu, Y.; Tan, M.; Wang, X.; et al. Deep high-resolution representation learning for visual recognition. IEEE Trans. Pattern Anal. Mach. Intell. 2020, 43, 3349–3364.
9. Wang, H.; Wang, S.; Qin, Z.; Zhang, Y.; Li, R.; Xia, Y. Triple attention learning for classification of 14 thoracic diseases using chest radiography. Med. Image Anal. 2021, 67, 101846.
10. Guan, Q.; Huang, Y.; Luo, Y.; Liu, P.; Xu, M.; Yang, Y. Discriminative feature learning for thorax disease classification in chest X-ray images. IEEE Trans. Image Process. 2021, 30, 2476–2487.
11. Szegedy, C.; Liu, W.; Jia, Y.; Sermanet, P.; Reed, S.; Anguelov, D.; Erhan, D.; Vanhoucke, V.; Rabinovich, A. Going deeper with convolutions. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Boston, MA, USA, 7–12 June 2015; pp. 1–9.
12. Zhang, Y.; Lu, Y.; Chen, W.; Chang, Y. MSMANet: A multi-scale mesh aggregation network for brain tumor segmentation. Appl. Soft Comput. 2021, 110, 107733.
13. Guan, Q.; Huang, Y. Multi-label chest X-ray image classification via category-wise residual attention learning. Pattern Recognit. Lett. 2020, 130, 259–266.
14. Wang, X.; Peng, Y.; Lu, L.; Lu, Z.; Bagheri, M.; Summers, R.M. ChestX-Ray8: Hospital-Scale Chest X-Ray Database and Benchmarks on Weakly-Supervised Classification and Localization of Common Thorax Diseases. In Proceedings of the 2017 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Honolulu, HI, USA, 21–26 July 2017; pp. 3462–3471.
15. Krizhevsky, A.; Sutskever, I.; Hinton, G.E. ImageNet classification with deep convolutional neural networks. Commun. ACM 2017, 60, 84–90.
16. Simonyan, K.; Zisserman, A. Very deep convolutional networks for large-scale image recognition. arXiv 2014, arXiv:1409.1556.
17. He, K.; Zhang, X.; Ren, S.; Sun, J. Deep Residual Learning for Image Recognition. In Proceedings of the 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Las Vegas, NV, USA, 27–30 June 2016; pp. 770–778.
18. Huang, G.; Liu, Z.; Van Der Maaten, L.; Weinberger, K.Q. Densely Connected Convolutional Networks. In Proceedings of the 2017 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Honolulu, HI, USA, 21–26 July 2017; pp. 2261–2269.
19. Rajpurkar, P.; Irvin, J.; Zhu, K.; Yang, B.; Mehta, H.; Duan, T.; Ding, D.; Bagul, A.; Langlotz, C.; Shpanskaya, K.; et al. CheXNet: Radiologist-level pneumonia detection on chest X-rays with deep learning. arXiv 2017, arXiv:1711.05225.
20. Chen, B.; Li, J.; Guo, X.; Lu, G. DualCheXNet: Dual asymmetric feature learning for thoracic disease classification in chest X-rays. Biomed. Signal Process. Control 2019, 53, 101554.
21. Okolo, G.I.; Katsigiannis, S.; Ramzan, N. IEViT: An enhanced vision transformer architecture for chest X-ray image classification. Comput. Methods Programs Biomed. 2022, 226, 107141.
22. Xie, S.; Girshick, R.; Dollár, P.; Tu, Z.; He, K. Aggregated Residual Transformations for Deep Neural Networks. In Proceedings of the 2017 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Honolulu, HI, USA, 21–26 July 2017; pp. 5987–5995.
23. Yang, B.; Bender, G.; Le, Q.V.; Ngiam, J. CondConv: Conditionally parameterized convolutions for efficient inference. Adv. Neural Inf. Process. Syst. 2019, 32.
24. Ibtehaz, N.; Sohel Rahman, M.M. Rethinking the U-Net architecture for multimodal biomedical image segmentation. arXiv 2019, arXiv:1902.04049.
25. Gao, S.H.; Cheng, M.M.; Zhao, K.; Zhang, X.Y.; Yang, M.H.; Torr, P. Res2Net: A new multi-scale backbone architecture. IEEE Trans. Pattern Anal. Mach. Intell. 2019, 43, 652–662.
26. Chen, Y.; Dai, X.; Liu, M.; Chen, D.; Yuan, L.; Liu, Z. Dynamic Convolution: Attention Over Convolution Kernels. In Proceedings of the 2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), Seattle, WA, USA, 13–19 June 2020; pp. 11027–11036.
27. Hu, J.; Shen, L.; Sun, G. Squeeze-and-Excitation Networks. In Proceedings of the 2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition, Salt Lake City, UT, USA, 18–23 June 2018; pp. 7132–7141.
28. Luong, M.T.; Pham, H.; Manning, C.D. Effective approaches to attention-based neural machine translation. arXiv 2015, arXiv:1508.04025.
29. Bahdanau, D.; Cho, K.; Bengio, Y. Neural machine translation by jointly learning to align and translate. arXiv 2014, arXiv:1409.0473.
30. Vaswani, A.; Shazeer, N.; Parmar, N.; Uszkoreit, J.; Jones, L.; Gomez, A.N.; Kaiser, Ł.; Polosukhin, I. Attention is all you need. Adv. Neural Inf. Process. Syst. 2017, 30.
31. Cao, C.; Liu, X.; Yang, Y.; Yu, Y.; Wang, J.; Wang, Z.; Huang, Y.; Wang, L.; Huang, C.; Xu, W.; et al. Look and Think Twice: Capturing Top-Down Visual Attention with Feedback Convolutional Neural Networks. In Proceedings of the 2015 IEEE International Conference on Computer Vision (ICCV), Santiago, Chile, 7–13 December 2015; pp. 2956–2964.
32. Bazzani, L.; Freitas, N.d.; Larochelle, H.; Murino, V.; Ting, J.A. Learning Attentional Policies for Tracking and Recognition in Video with Deep Networks. In Proceedings of the 28th International Conference on Machine Learning (ICML 2011), Bellevue, WA, USA, 28 June–2 July 2011; Omnipress: Madison, WI, USA, 2011; pp. 937–944.
33. Yang, Z.; He, X.; Gao, J.; Deng, L.; Smola, A. Stacked Attention Networks for Image Question Answering. In Proceedings of the 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Las Vegas, NV, USA, 27–30 June 2016; pp. 21–29.
34. Hong, S.; Oh, J.; Lee, H.; Han, B. Learning Transferrable Knowledge for Semantic Segmentation with Deep Convolutional Neural Network. In Proceedings of the 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Las Vegas, NV, USA, 27–30 June 2016; pp. 3204–3212.
35. Pesce, E.; Withey, S.J.; Ypsilantis, P.P.; Bakewell, R.; Goh, V.; Montana, G. Learning to detect chest radiographs containing pulmonary lesions using visual attention networks. Med. Image Anal. 2019, 53, 26–38.
36. Fan, D.P.; Zhou, T.; Ji, G.P.; Zhou, Y.; Chen, G.; Fu, H.; Shen, J.; Shao, L. Inf-Net: Automatic COVID-19 lung infection segmentation from CT images. IEEE Trans. Med. Imaging 2020, 39, 2626–2637.
37. Lin, T.Y.; Goyal, P.; Girshick, R.; He, K.; Dollár, P. Focal Loss for Dense Object Detection. In Proceedings of the 2017 IEEE International Conference on Computer Vision (ICCV), Venice, Italy, 22–29 October 2017; pp. 2999–3007.
38. Irvin, J.; Rajpurkar, P.; Ko, M.; Yu, Y.; Ciurea-Ilcus, S.; Chute, C.; Marklund, H.; Haghgoo, B.; Ball, R.; Shpanskaya, K.; et al. CheXpert: A Large Chest Radiograph Dataset with Uncertainty Labels and Expert Comparison. Proc. AAAI Conf. Artif. Intell. 2019, 33, 590–597.
39. Chen, B.; Zhang, Z.; Li, Y.; Lu, G.; Zhang, D. Multi-label chest X-ray image classification via semantic similarity graph embedding. IEEE Trans. Circuits Syst. Video Technol. 2022, 32, 2455–2468.
40. Yao, L.; Prosky, J.; Poblenz, E.; Covington, B.; Lyman, K. Weakly supervised medical diagnosis and localization from multiple resolutions. arXiv 2018, arXiv:1803.07703.
41. Ma, Y.; Zhou, Q.; Chen, X.; Lu, H.; Zhao, Y. Multi-attention Network for Thoracic Disease Classification and Localization. In Proceedings of the 2019 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), Brighton, UK, 12–17 May 2019; pp. 1378–1382.
42. Ouyang, X.; Karanam, S.; Wu, Z.; Chen, T.; Huo, J.; Zhou, X.S.; Wang, Q.; Cheng, J.Z. Learning hierarchical attention for weakly-supervised chest X-ray abnormality localization and diagnosis. IEEE Trans. Med. Imaging 2020, 40, 2698–2710.
43. Zhu, X.; Pang, S.; Zhang, X.; Huang, J.; Zhao, L.; Tang, K.; Feng, Q. PCAN: Pixel-wise classification and attention network for thoracic disease classification and weakly supervised localization. Comput. Med. Imaging Graph. 2022, 102, 102137.
44. Pham, H.H.; Le, T.T.; Tran, D.Q.; Ngo, D.T.; Nguyen, H.Q. Interpreting chest X-rays via CNNs that exploit hierarchical disease dependencies and uncertainty labels. Neurocomputing 2021, 437, 186–194.
45. Guendel, S.; Grbic, S.; Georgescu, B.; Liu, S.; Maier, A.; Comaniciu, D. Learning to recognize abnormalities in chest X-rays with location-aware dense networks. In Proceedings of the Progress in Pattern Recognition, Image Analysis, Computer Vision, and Applications: 23rd Iberoamerican Congress, CIARP 2018, Madrid, Spain, 19–22 November 2018; Springer: Cham, Switzerland, 2019; pp. 757–765.
46. Chen, B.; Li, J.; Lu, G.; Zhang, D. Lesion location attention guided network for multi-label thoracic disease classification in chest X-rays. IEEE J. Biomed. Health Inform. 2019, 24, 2016–2027.
47. Ying, X.; Liu, H.; Huang, R. COVID-19 chest X-ray image classification in the presence of noisy labels. Displays 2023, 77, 102370.
48. Koundal, D.; Sharma, B.; Gandotra, E. Spatial intuitionistic fuzzy set based image segmentation. Imaging Med. 2017, 9, 95–101.
49. Bhalla, K.; Koundal, D.; Sharma, B.; Hu, Y.C.; Zaguia, A. A fuzzy convolutional neural network for enhancing multi-focus image fusion. J. Vis. Commun. Image Represent. 2022, 84, 103485.
Figure 1. Examples of lesion areas on the ChestX-ray14 dataset. The first row shows a CXR image with a relatively small lesion area overall. The second row shows CXR images where multiple diseases are present. The disease existing in each bounding box corresponds to the pathology name in the same colour in the next row.
Figure 2. HRCC-Net network structure containing the parallel multi-resolution feature extraction network backbone and multiscale attention modules consisting of dynamic convolution (CondConv) and coordinate attention (CA). C is the number of channels for feature mapping and x is the input resolution.
Figure 3. Multi-scale lesion feature extraction module structure, containing two dynamic convolution (CondConv) modules.
Figure 4. Structure of the CA mechanism.
Figure 5. The ChestX-ray14 dataset, containing 112,120 anterior views of X-ray thoracic radiographs.
Figure 6. ROC curves of the HRCC-Net network for 14 pathologies, corresponding to the AUC scores in Table 3.
Figure 7. AUC values for each training epoch with different loss functions.
Figure 8. Per-class AUC of different backbone networks.
Figure 9. The right side of each example shows the heat map, and the left side shows the physician-annotated image with the disease location marked by a yellow rectangular box. The heat maps show that the network accurately locates the diseased areas, which overlap strongly with the physician annotations.
Table 1. The network structure of HRCC-Net, where Transition denotes the fusion module and Concat denotes channel concatenation. Each multi-scale attention module contains four multi-scale modules and coordinate attention.
Layer | Operation | Num_Branches | Resolution Branch | Output Size | Output_Channels
Input | – | – | – | 224 × 224 | 3
Stem | Conv3 × 3 Block; Conv3 × 3 Block | 1; 1 | – | 112 × 112; 56 × 56 | 64; 64
Stage1 | Bottleneck Block × 4; Transition1 | 1 | 1, 1/2 | 56 × 56; 28 × 28 | 48; 96
Stage2 | Multi-Scale Attention Module × 4; Transition2 | 2 | 1, 1/2, 1/4 | 56 × 56; 28 × 28; 14 × 14 | 48; 96; 192
Stage3 | Multi-Scale Attention Module × 4; Transition3 | 3 | 1, 1/2, 1/4, 1/8 | 56 × 56; 28 × 28; 14 × 14; 7 × 7 | 48; 96; 192; 384
Stage4 | Multi-Scale Attention Module × 4; Transition4 | 4 | 1, 1/2, 1/4, 1/8 | 56 × 56; 28 × 28; 14 × 14; 7 × 7 | 48; 96; 192; 384
Output Head | Downsample, Concat (×3) | – | – | – | 96; 192; 384
Classification | Conv1 × 1 Block; Linear1; Linear2 | – | – | 1 × 1; 1 × 1; 1 × 1 | 2048; 512; 14
Table 2. Selection of HRCC-Net hyperparameters.
Lr | AUC (Batch Size = 64) | AUC (Batch Size = 128) | Epoch
0.0006 | 0.8396 | 0.8348 | 90
0.0008 | 0.8415 | 0.8402 | 80
0.001 | 0.8408 | 0.8451 | 80
0.0012 | 0.8391 | 0.8432 | 75
0.0014 | 0.8346 | 0.8358 | 70
Table 3. Comparative results of different methods on ChestX-ray14.
Disease | [14] | [40] | [41] | [13] | [10] | [9] | [42] | [43] | [39] | Baseline | Ours
Backbone | R-50 | * | R-101 | D-121 | D-121 | D-121 | * | D-121 | GCN | – | –
Atel | 0.700 | 0.733 | 0.763 | 0.781 | 0.785 | 0.779 | 0.770 | 0.785 | 0.792 | 0.808 | 0.823
Card | 0.810 | 0.865 | 0.884 | 0.880 | 0.899 | 0.895 | 0.870 | 0.897 | 0.892 | 0.915 | 0.908
Effu | 0.758 | 0.806 | 0.816 | 0.829 | 0.835 | 0.836 | 0.830 | 0.837 | 0.840 | 0.883 | 0.886
Infi | 0.661 | 0.673 | 0.679 | 0.702 | 0.699 | 0.710 | 0.710 | 0.706 | 0.714 | 0.709 | 0.711
Mass | 0.693 | 0.718 | 0.801 | 0.834 | 0.838 | 0.834 | 0.830 | 0.834 | 0.848 | 0.846 | 0.848
Nodu | 0.669 | 0.777 | 0.729 | 0.773 | 0.775 | 0.777 | 0.790 | 0.786 | 0.812 | 0.761 | 0.781
Pne1 | 0.658 | 0.684 | 0.710 | 0.729 | 0.738 | 0.737 | 0.720 | 0.730 | 0.733 | 0.762 | 0.768
Pne2 | 0.799 | 0.805 | 0.837 | 0.857 | 0.871 | 0.878 | 0.880 | 0.871 | 0.885 | 0.871 | 0.897
Cons | 0.703 | 0.711 | 0.744 | 0.754 | 0.763 | 0.759 | 0.740 | 0.763 | 0.753 | 0.810 | 0.819
Edem | 0.805 | 0.806 | 0.841 | 0.850 | 0.850 | 0.855 | 0.840 | 0.854 | 0.848 | 0.889 | 0.897
Emph | 0.833 | 0.842 | 0.884 | 0.908 | 0.924 | 0.933 | 0.940 | 0.921 | 0.948 | 0.904 | 0.933
Fibr | 0.786 | 0.743 | 0.801 | 0.830 | 0.831 | 0.838 | 0.830 | 0.817 | 0.827 | 0.837 | 0.808
PT | 0.684 | 0.724 | 0.754 | 0.778 | 0.776 | 0.791 | 0.790 | 0.791 | 0.795 | 0.774 | 0.808
Hern | 0.872 | 0.775 | 0.876 | 0.917 | 0.922 | 0.938 | 0.910 | 0.943 | 0.932 | 0.909 | 0.943
Mean | 0.745 | 0.761 | 0.794 | 0.816 | 0.822 | 0.826 | 0.819 | 0.824 | 0.830 | 0.833 | 0.845
* The 14 pathologies are Atelectasis (Atel), Cardiomegaly (Card), Effusion (Effu), Infiltration (Infi), Mass (Mass), Nodule (Nodu), Pneumonia (Pne1), Pneumothorax (Pne2), Consolidation (Cons), Edema (Edem), Emphysema (Emph), Fibrosis (Fibr), Pleural Thickening (PT) and Hernia (Hern). * in the Backbone row indicates that the combination of ResNet and DenseNet is used in [40,42].
Table 4. Comparison results of HRCC-Net on CheXpert.
Policy | Method | Atelectasis | Cardiomegaly | Consolidation | Edema | Pleural Effusion | Mean
Zeros | [38] | 0.811 | 0.840 | 0.932 | 0.929 | 0.931 | 0.889
Zeros | [44] | 0.806 | 0.833 | 0.929 | 0.933 | 0.921 | 0.884
Zeros | [10] | 0.804 | 0.874 | 0.940 | 0.894 | 0.923 | 0.889
Zeros | Baseline | 0.796 | 0.830 | 0.928 | 0.897 | 0.925 | 0.875
Zeros | Ours | 0.828 | 0.882 | 0.943 | 0.937 | 0.930 | 0.904
Ones | [38] | 0.858 | 0.832 | 0.899 | 0.941 | 0.934 | 0.893
Ones | [44] | 0.825 | 0.855 | 0.937 | 0.930 | 0.923 | 0.894
Ones | [10] | 0.847 | 0.868 | 0.923 | 0.924 | 0.926 | 0.898
Ones | [43] | 0.848 | 0.865 | 0.908 | 0.912 | 0.940 | 0.895
Ones | Baseline | 0.770 | 0.849 | 0.942 | 0.906 | 0.928 | 0.879
Ones | Ours | 0.847 | 0.891 | 0.945 | 0.940 | 0.943 | 0.913
Table 5. Computational consumption of HRCC-Net.
Model | Parameters/M | FLOPs/G | Time/h (Batch Size = 64) | Time/h (Batch Size = 128)
Baseline | 20.9 | 9.71 | – | –
Baseline + CondConv | 23.1 | 10.87 | – | –
Baseline + CondConv + CA | 24.3 | 11.25 | – | –
HRCC-Net | 24.3 | 11.25 | 22 | 7.1
Table 6. Comparison of other evaluation metrics on the ChestX-ray14 dataset.
Method | Accuracy (%) | Sensitivity (%) | Specificity (%) | F1 (%)
[14] | 75.6 | 73.4 | 76.1 | 72.9
[45] | 75.9 | 73.8 | 76.4 | 73.3
[46] | 76.7 | 74.3 | 76.9 | 73.7
[44] | 77.2 | 74.9 | 77.3 | 74.1
[39] | 77.5 | 75.3 | 77.6 | 74.3
Ours | 78.8 | 75.7 | 78.9 | 75.5
Table 7. Comparison of other technical parameters.
Model | FLOPs/G | Time/s | Average AUC
[14] | 21.47 | 0.031 | 0.745
[45] | 14.97 | 0.035 | 0.807
[10] | 2.96 | 0.350 | 0.822
[20] | 15.03 | 0.045 | 0.823
[46] | 34.96 | 0.094 | 0.824
[9] | – | 0.100 | 0.826
[39] | 17.74 | 0.059 | 0.830
Ours | 11.25 | 0.041 | 0.845
Table 8. Ablation experiments on the ChestX-ray14 dataset.
HRNet | CondConv | CA | Mean AUC
– | – | – | 0.793
✓ | – | – | 0.833
✓ | ✓ | – | 0.840
✓ | – | ✓ | 0.839
✓ | ✓ | ✓ | 0.845
Table 9. Comparative experiments of the CondConv module.
Setting | Average AUC | Parameters/M
Baseline | 0.8330 | 20.9
+Inception | 0.8382 | 27.0
+GroupConv | 0.8348 | 21.8
+CondConv | 0.840 | 23.1
Table 10. Comparative experiments of the CA module.
Setting | Average AUC | Parameters/M
Baseline | 0.8330 | 20.9
+SE | 0.8344 | 21.46
+X Attention | 0.8344 | 21.46
+Y Attention | 0.8345 | 21.46
+CBAM | 0.8358 | 21.88
+CA | 0.8372 | 22
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
