Article

Advancing Image Object Detection: Enhanced Feature Pyramid Network and Gradient Density Loss for Improved Performance

1 School of Physics and Mechanical and Electrical Engineering, Longyan University, Longyan 364012, China
2 School of Software Engineering, Xi’an Jiaotong University, Xi’an 710049, China
3 Institute of Artificial Intelligence and Robotics, Xi’an Jiaotong University, Xi’an 710049, China
* Author to whom correspondence should be addressed.
Appl. Sci. 2023, 13(22), 12174; https://doi.org/10.3390/app132212174
Submission received: 24 September 2023 / Revised: 2 November 2023 / Accepted: 3 November 2023 / Published: 9 November 2023
(This article belongs to the Special Issue Autonomous Vehicles and Robotics)

Abstract
In the era of artificial intelligence, the significance of images and videos as intuitive conveyors of information cannot be overstated. Computer vision techniques rooted in deep learning have revolutionized our ability to autonomously and accurately identify objects within visual media, making them a focal point of contemporary research. This study addresses the pivotal role of image object detection, particularly in the contexts of autonomous driving and security surveillance, by presenting an in-depth exploration of this field with a focus on enhancing the feature pyramid network. One of the key challenges in existing object detection methodologies lies in mitigating information loss caused by multi-scale feature fusion. To tackle this issue, we propose the enhanced feature pyramid, which adeptly amalgamates features extracted across different scales. This strategic enhancement effectively curbs information attrition across various layers, thereby strengthening the feature extraction capabilities of the foundational network. Furthermore, we confront the issue of excessive classification loss in image object detection tasks by introducing the gradient density loss function, designed to mitigate classification discrepancies. Empirical results unequivocally demonstrate the efficacy of our approach in enhancing the detection of multi-scale objects within images. When evaluated across benchmark datasets, including MS COCO 2017, MS COCO 2014, Pascal VOC 2007, and Pascal VOC 2012, our method achieves impressive average precision scores of 39.4%, 42.0%, 51.5%, and 49.9%, respectively. This performance clearly outperforms alternative state-of-the-art methods in the field. This research not only contributes to the evolving landscape of computer vision and object detection but also has practical implications for a wide range of applications, aligning with the transformative trends in the automotive industry and security technologies.

1. Introduction

Object detection holds a central position in the realm of computer vision, forming the bedrock for tasks such as image segmentation and object tracking. With the relentless progression of computer hardware capabilities, object detection has pervaded numerous domains, encompassing security, healthcare, and autonomous driving. It has notably catalyzed urban development by curtailing expenses and elevating societal efficacy. In everyday life, applications like facial recognition-based transactions, access control, and text recognition have seamlessly integrated into our routines. In the medical arena, object detection technology lends critical support to physicians, aiding in diagnoses and accurately pinpointing potential anomaly regions. Within industrial production, object detection methods discern product defects, facilitating automated oversight and serving as a pivotal component in assembly lines. In the military domain, these techniques offer reconnaissance potential by surveilling adversary targets through remote sensing imagery. In agriculture, object detection assumes the role of pest infestation monitoring and constitutes a cornerstone in automated harvest equipment. The ongoing evolution of object detection methodologies has instigated transformative revolutions across multifarious sectors, yielding enhancements in efficiency, precision, and automation.
The objective of object detection encompasses the identification of object categories within input images and the precise delineation of their positions and boundaries using rectangular bounding boxes, coupled with associated confidence scores. While the human eye may find recognizing object categories and positions intuitive, for computers, an image translates to a set of data points. Deriving object category insights and accurately localizing objects from these data points constitutes a formidable computational task. Variables like object dimensions, orientations, illumination, and background disturbances all contribute to the intricacies of object detection performance. Hence, object detection confronts a multitude of challenges, demanding precise object localization and classification. Attaining these objectives with a harmonious interplay of accuracy and speed underscores the core of successful object detection. Furthermore, objects manifest in diverse scales, driving the pursuit of equilibrium in performance across these scales as a pressing research endeavor. The incorporation of adept attention mechanisms further augments object detection efficiency, requiring the construction of streamlined attention mechanisms that adeptly balance efficacy and computational efficiency.
Conventional approaches to image object detection employ sliding window methodologies to systematically scan entire images. These methods rely on manually engineered features designed to remain invariant to factors such as lighting conditions, rotation, translation, and scaling, thereby accommodating the common variations in illumination and shape encountered in practice. Classifiers are then trained on these features. Viola and Jones [1] pioneered the Viola–Jones (VJ) detector, which extracts Haar features from images to automate facial detection through a sliding window mechanism. Dalal and Triggs [2] introduced histogram of oriented gradient (HOG) features, which balance invariance to translation and scaling with descriptive power. Felzenszwalb et al. [3] developed the deformable part model (DPM), which decomposes object detection into the detection of constituent parts. Prominent classifiers employed include support vector machines (SVMs) and Bayesian classifiers.
Conventional methodologies for image object detection are fraught with several challenges. These include the substantial computational load of sliding window mechanisms and the intricate development of handcrafted features. Deep-learning-driven object detection algorithms present remarkable advantages over their traditional counterparts, excelling in terms of precision and efficiency, thereby positioning themselves as the leading approach in the field of object detection. The integration of deep learning into object detection has led to effective resolutions for many of these challenges. In 2012, Hinton et al. [4] introduced the groundbreaking AlexNet model, achieving a substantial breakthrough in the ImageNet image recognition competition. Following this milestone, convolutional neural networks (CNNs) have gradually gained prominence in the realm of object detection, forming the cornerstone of computer vision models. With each passing year, numerous researchers have contributed to the development of diverse network models that exhibit enhanced performance in both accuracy and speed for image recognition tasks. Deep-learning-based object detection algorithms predominantly fall into two distinct categories: one-stage algorithms and two-stage algorithms. Among these, the two-stage algorithms initially extract features from the input image, subsequently conducting object classification and bounding box regression on selected candidate regions.
The two-stage era began with R-CNN (regions with CNN features), introduced by Girshick et al. [5], a breakthrough in the realm of object detection. R-CNN ushered in the integration of convolutional neural networks (CNNs) into object detection, reshaping the landscape of this domain. The method used the Selective Search algorithm to generate a set of candidate regions, which were then resized to a standardized dimension and fed into a neural network for feature extraction; this network was fine-tuned from an ImageNet-pretrained model for the specific object detection task. Despite achieving commendable results compared to the traditional methodologies of that time, R-CNN faced certain limitations: its fully connected layers hindered accuracy under object deformations, and its adaptability to varying image sizes remained constrained.
The challenges were addressed by He et al. [6] through the introduction of the SPPNet (Spatial Pyramid Pooling Network), a pioneering approach that brought about the concept of spatial pyramid pooling. This innovation facilitated the model’s ability to seamlessly adapt to images of varying sizes. Subsequently, Girshick [7] presented the Fast R-CNN framework, leveraging the VGG network as the foundational architecture to enhance both the computational speed and precision of object detection. Further building upon the Fast R-CNN foundation, Faster R-CNN, as introduced by Ren et al. [8], incorporated the Region Proposal Network (RPN) for the generation of candidate bounding boxes. This innovation significantly improved both the accuracy and speed of the approach by replacing the resource-intensive selective search step with a more efficient candidate box generation mechanism. The Faster R-CNN methodology employed a shared feature map to simultaneously handle the tasks of classification and regression, integrating it seamlessly into the RPN for the generation of candidate bounding boxes.
Cai and Vasconcelos [9] introduced the Cascade R-CNN, a novel approach that involved training with varying thresholds to enhance the accuracy of object detection. It is important to note that while two-stage algorithms typically offer superior accuracy by initially predicting potential object locations based on input features and subsequently conducting object classification and bounding box regression for more precise classification and positioning, they often exhibit slower processing speeds. Recognizing this challenge, one-stage algorithms such as YOLO (You Only Look Once), proposed by Redmon et al. [10], execute both classification and regression tasks in a single step, thus boosting detection speed, albeit at the cost of a slight reduction in accuracy compared to Faster R-CNN.
Liu et al. [11] introduced the single-shot multibox detector (SSD), which leverages features from various layers to enhance feature extraction and achieve greater accuracy than YOLO. YOLOv3, presented by Redmon and Farhadi [12], incorporated the feature pyramid network (FPN) [13] with multi-scale features to enhance accuracy in detecting small objects. Bochkovskiy et al. [14] proposed YOLOv4, which employed CSPDarkNet as the backbone, boasting increased parameters and utilizing a spatial pyramid pooling (SPP) layer to expand the receptive field of the backbone network, ultimately leading to superior performance. Dai et al. [15] adopted the Transformer architecture for object detection, eliminating the requirement for anchor boxes and introducing a direct set-based prediction approach. This work demonstrated the potential of Transformers in computer vision tasks beyond their traditional application in natural language processing.
This study addresses the imperative need for enhancing the performance of current object detection methodologies and introduces improvements in the accuracy of image object detection tasks through the utilization of computer vision techniques such as feature pyramids. The proposed approach introduces an object detection methodology founded upon an augmented feature pyramid network. This approach specifically targets two principal limitations encountered in prevailing image object detection techniques. Firstly, it addresses the challenge of inadequate object detection across various scales by incorporating an augmented feature pyramid network to extract multi-scale features. This ameliorates the issue of significant information loss in top-level features during the fusion of multi-scale features in conventional methods, resulting in superior-quality multi-scale information. Secondly, the method tackles the limitation of commonly employed focal classification loss functions in object detection models, which tend to overly emphasize the classification features of intricate samples, leading to delayed model convergence. This is mitigated by refining the classification loss function to align more effectively with the attributes of the classification task, rendering it more proficient in accurate classification based on real-world conditions. The final experimental section encompasses diverse assessments, including ablation experiments conducted on the COCO2017 dataset and performance comparisons with 14 existing object detection algorithms on four widely recognized publicly accessible object detection datasets. The consistently observed outcomes of these experiments underscore the effectiveness of the proposed approach, showcasing higher detection accuracy in comparison to established baseline image object detection methods across these extensively utilized object detection datasets.
In today’s data-driven world, the role of images and videos as intuitive vehicles for transmitting information has taken on unprecedented significance. With the dawn of the AI era, computer vision techniques, particularly those grounded in deep learning, have enabled the autonomous and precise identification of objects within visual media. This technological advancement has propelled image and video analysis to the forefront of contemporary research, ushering in transformative possibilities across various domains. Our empirical findings not only substantiate the effectiveness of our approach but also establish it as a promising solution for discerning multi-scale objects within images and videos. Importantly, our research aligns with the quest for safer and more reliable autonomous driving systems, as it contributes to the evolving landscape of computer vision and object detection, directly impacting the future of mobility and security.

2. Problem Description

Object detection constitutes a ubiquitous domain within the realm of computer vision. Throughout its evolutionary trajectory, a central hurdle has involved crafting more potent feature extraction modules rooted in deep learning. In the nascent phases of object detection networks, comparatively straightforward convolutional neural networks were integrated as feature extraction modules within the bedrock network architecture. As a result, the foundational network architecture was constrained to extracting features solely at a singular scale. While these features showcased robust information representation capacities at a macroscopic level, the escalating requisites for heightened object detection performance accentuated the inadequacy of these features operating at a solitary scale. In the course of convolutional pooling, information pertaining to diminutive objects experienced a gradual erosion within the confines of the one-scale feature maps.
Subsequently, the significance of multi-scale feature extraction networks gained prominence, exemplified by models such as UNet [16] and FPN [17]. At present, the extensively employed FPN architecture has exhibited enhanced performance in object detection, as demonstrated in the case of Sparse R-CNN [18]. Through the fusion of multi-scale feature maps, FPN facilitates the reciprocal exploitation of both high-level and low-level features. High-level features augment the semantic representation of low-level features, while low-level features amplify intricate details grounded in high-level features. Nevertheless, considering FPN’s multi-scale fusion principle, directly amalgamating features from distinct levels featuring pronounced semantic disparities may not yield optimal results. Moreover, during the utilization of FPN, direct fusion of features spanning the highest and lowest levels could potentially engender information loss.
The proposed enhancements in this study are predicated on the Sparse R-CNN object detection algorithm, which represents a distinctive approach. Sparse R-CNN, functioning as a two-stage object detection technique, deviates from the conventional paradigm of such algorithms. Notably, its Region Proposal Network (RPN) engenders a fixed ensemble of 100 candidate boxes. Subsequently, these designated boxes traverse a Dynamic Instance Interactive Head (DIIHead) module. For every individual candidate box, the RoIAlign algorithm orchestrates the extraction of region-specific features, subsequently facilitating the execution of tasks pertaining to target classification and positional regression. The processed boxes subsequently undergo a succession of six iterations through the DIIHead module, progressively refining the initial 100 candidate boxes and culminating in the derivation of the ultimate output.
Of notable significance, the Sparse R-CNN method circumvents the necessity for a post-processing step such as nonmaximum suppression (NMS). This sets it apart from conventional two-stage object detection techniques like Faster R-CNN, which generate a voluminous array of candidate boxes exceeding hundreds of thousands during the Region Proposal Network (RPN) phase. In stark contrast, Sparse R-CNN judiciously employs a sparse collection of 100 candidate boxes within the output feature map, each imbued with learnable attributes. This limited set of 100 candidates places a higher reliance on the extraction of high-level features. Notably, the feature pyramid network (FPN) encounters a diminution of high-level features as a consequence of fusion. Consequently, the direct adoption of FPN as the multi-scale feature extraction module for Sparse R-CNN remains poised for enhancements.
Within Sparse R-CNN, the ultimate goal is to derive conclusive outcomes from the sparse set of 100 learnable candidate boxes. To ensure a closer alignment between the ascertained classes and the actual target classes, an improved classification loss function becomes imperative. Such a function would yield a more accurate assessment of the proximity between identified and genuine target classes. In its classification loss, Sparse R-CNN employs focal loss. The introduction of focal loss was primarily aimed at mitigating the challenge of imbalanced hard and easy samples. Given the abundance of candidate boxes generated in detection, coupled with the paucity of positive samples, a class imbalance dilemma emerges. Nonetheless, incorporating focal loss in Sparse R-CNN gives rise to certain issues. Notably, focal loss tends to excessively concentrate on intricately classifiable samples. This predicament becomes pronounced when the model attains advanced training stages, resulting in a decline in performance.
To tackle the abovementioned challenges, this paper presents an image object detection methodology grounded in a feature pyramid framework. This approach rectifies the deficiencies in the multi-scale feature extraction network employed by Sparse R-CNN, consequently elevating the expressive capacity of multi-scale features within Sparse R-CNN. Furthermore, the classification loss incorporated in this methodology leverages a gradient density loss function. This function is adept at providing a more accurate assessment of classification loss concerning detection boxes.
In this work, we make the following key contributions and introduce novel aspects:
Enhanced feature pyramid network: We propose an advanced feature pyramid network that significantly enhances multi-scale feature extraction and fusion capabilities. This innovation reduces information loss during feature fusion, leading to improved feature representations.
Gradient density loss function: Our introduction of the gradient density loss function, as a replacement for the traditional focal loss, offers a more precise assessment of object detection classification losses, particularly for challenging samples.
Comprehensive evaluation: We rigorously evaluate our method on four well-established public object detection datasets, including MS COCO 2017, MS COCO 2014, Pascal VOC 2007, and Pascal VOC 2012. Substantial improvements in average precision are achieved compared to existing methods.
Enhanced small-scale target detection: The application of the enhanced feature pyramid network results in superior detection capabilities for small-scale objects, while maintaining high accuracy for larger and medium-sized targets.
Visual comparisons: We present visual comparisons to demonstrate the practical effectiveness of our enhancements.

3. Method

Multi-scale feature extraction networks play a crucial role in object detection tasks. In widely employed datasets, small objects account for a smaller fraction of the entire image, while other objects necessitate the preservation of high-level large-scale features. Furthermore, the utilized loss function for assessing classification performance might not comprehensively cater to real-world scenarios. With these considerations in mind, this paper introduces an object detection approach founded on a feature pyramid network. The overarching structure of this methodology encompasses two primary constituents: an enhanced feature pyramid network module and a loss function tailored for object detection classification.

3.1. Enhanced Feature Pyramid Network

In preparation for feature fusion within the FPN, a 1 × 1 convolutional layer is employed. This step serves to transform the channel dimensions, facilitating the subsequent aggregation of feature maps from different scales. The outcome is a fused feature map endowed with enriched information. However, a commonly employed fusion strategy involves channel dimension reduction in higher-level features via convolutional layers, enabling their summation with lower-level feature maps. This process, unfortunately, introduces information loss within feature maps originating from varying scales. Additionally, the semantic gap between feature maps generated at different scales is substantial. If the model neglects these notable semantic disparities and directly applies 1 × 1 convolutions followed by addition for multi-scale information fusion, there remains room for improvement within this fusion strategy. It is essential to acknowledge that the fundamental design principle of the FPN involves fusing high-level features with their low-level counterparts, thereby empowering high-level features to enhance low-level feature representation by leveraging semantic insights from larger scales. Nevertheless, the feature map located at the highest level lacks a corresponding higher-level, larger-scale feature map for fusion. Consequently, the original network architecture generates a feature map at the highest level, corresponding to the largest scale, devoid of input from features at other scales. Instead, this map is directly subjected to channel dimension reduction before being merged with feature maps from lower scales. This approach seems inequitable to the highest-scale feature map, especially considering that lower-scale feature maps have already undergone fusion with features from more elevated scales.
The approach proposed in this paper effectively tackles the limitations of the FPN utilized in the Sparse R-CNN network to extract multi-scale features. It incorporates mechanisms for enhancing both top-level and bottom-level features, thereby facilitating the extraction of multi-scale image features that encompass both fine-grained and coarse-grained details. This approach better satisfies the demands of object detection. Given an input image, the process of multi-scale feature extraction benefits from the top-level and progressive feature enhancement mechanisms, resulting in the extraction of more finely fused and higher-quality multi-scale feature maps with minimized information loss. Subsequently, the detection network processes these multi-scale features to generate object classification scores and bounding box coordinates, which collectively constitute the foundation for the final object detection outcomes.
The architecture of the feature extraction network is depicted in Figure 1. It encompasses two principal stages: top-level feature enhancement and progressive feature fusion across multiple scales. During the first stage, top-level feature enhancement leverages the multi-scale features obtained from the backbone network. These features are integrated into the ultimate output network features. By employing a reduction in channel dimensionality, the extent of information loss in top-level features is ameliorated. The subsequent stage involves progressive feature fusion across multiple scales during the multi-scale feature fusion phase. This approach effectively diminishes the semantic gap existing between feature maps of varying resolutions within the feature pyramid’s multi-scale feature fusion process. Consequently, this strategy contributes to more refined multi-scale feature fusion. The combination of these two stages culminates in the creation of well-integrated multi-scale feature maps.
During the stage of multi-scale feature extraction, the initial input image is processed by the backbone network, which is constructed on the ResNet-50 architecture. Convolutional operations are executed across four residual convolutional modules (referred to as layer_1 to layer_4), generating feature maps at the corresponding scales $256 \times H \times W$, $512 \times H/2 \times W/2$, $1024 \times H/4 \times W/4$, and $2048 \times H/8 \times W/8$. This design allows the feature extraction network to progressively gather information from both small-scale and large-scale features spanning diverse scales. Notably, relative to the output of the first module, the feature maps of the subsequent residual convolutional modules are proportionally downsampled by factors of 0.5, 0.25, and 0.125, respectively. Within the FPN network of Sparse R-CNN, after the extraction of feature maps at the four distinct scales, a process of reversed (top-down) feature fusion is initiated. To facilitate this fusion, a 1 × 1 convolutional operation is applied to the feature map of each layer; this operation ensures uniform output channel dimensions across layers, thus simplifying the ensuing feature fusion. Typically, fusion is executed through element-wise addition.
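For readers who want a concrete picture of this stage, the following minimal PyTorch sketch (our illustration, not the authors' code) builds the four-stage ResNet-50 feature hierarchy and applies the 1 × 1 lateral convolutions that unify channel dimensions before fusion; it assumes a recent torchvision, and the class and variable names are hypothetical.

```python
import torch
import torch.nn as nn
from torchvision.models import resnet50

class MultiScaleBackbone(nn.Module):
    """Sketch of the multi-scale feature extraction described above.

    ResNet-50 stages layer1-layer4 produce 256-, 512-, 1024-, and 2048-channel
    feature maps at strides 4, 8, 16, and 32 of the input; 1x1 lateral
    convolutions reduce all of them to a common channel count before fusion.
    """

    def __init__(self, out_channels=256):
        super().__init__()
        r = resnet50(weights=None)  # pretrained ImageNet weights would be loaded in practice
        self.stem = nn.Sequential(r.conv1, r.bn1, r.relu, r.maxpool)
        self.stages = nn.ModuleList([r.layer1, r.layer2, r.layer3, r.layer4])
        self.laterals = nn.ModuleList(
            [nn.Conv2d(c, out_channels, kernel_size=1) for c in (256, 512, 1024, 2048)]
        )

    def forward(self, x):
        x = self.stem(x)
        feats = []
        for stage in self.stages:
            x = stage(x)
            feats.append(x)
        return [lat(f) for lat, f in zip(self.laterals, feats)]

feats = MultiScaleBackbone()(torch.randn(1, 3, 512, 512))
print([tuple(f.shape) for f in feats])  # each successive map is half the resolution of the previous one
```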
In the ultimate top-level feature map at the 0.125× scale (depicted in Figure 2), a module for top-level feature enhancement is incorporated. The module’s output undergoes fusion with the original top-level features, thereby generating enhanced top-level features. This augmentation effectively counteracts the loss of top-level feature data that can arise during the ensuing process of reversed multi-scale feature fusion. Consequently, this addresses the constraint attributed to the absence of higher-level features available for fusion with the top-level counterparts.
To be more precise, the top-level feature enhancement module incorporates an adaptive pooling mechanism. In this process, the aspect ratio of the feature map remains constant, while the pooling ratio can be altered. This permits the execution of adaptive pooling using multiple pooling ratios. Consequently, the model operates at diverse scales, generating target features of varying scales based on the original top-level input features. This approach enables progressive feature fusion across distinct channels, and the resultant feature scale ς after extraction can be expressed as shown in Equation (1).
$$\varsigma = (a_1 \times S,\; a_2 \times S,\; \ldots,\; a_n \times S) \qquad (1)$$
where $S$ represents the initial feature scale, corresponding to the scale of the top-level feature, and $a_i$, $i = 1, 2, \ldots, n$, denote the designated pooling ratios. Following this, a 3 × 3 convolutional kernel is applied to condense the multi-channel features to 256 channels. Subsequently, an upsampling procedure restores the dimensions to the original scale $S$ in preparation for the subsequent fusion. Bilinear interpolation is harnessed in the upsampling phase to compute new pixel values. For given values at the four corner points $Q_{11}$, $Q_{12}$, $Q_{21}$, and $Q_{22}$ (defined below), the interpolation formula employed to ascertain the value at a novel coordinate $(x, y)$ is as follows:
$$f(x, y_1) \approx \frac{x_2 - x}{x_2 - x_1} f(Q_{11}) + \frac{x - x_1}{x_2 - x_1} f(Q_{21})$$
$$f(x, y_2) \approx \frac{x_2 - x}{x_2 - x_1} f(Q_{12}) + \frac{x - x_1}{x_2 - x_1} f(Q_{22})$$
$$f(x, y) \approx \frac{y_2 - y}{y_2 - y_1} f(x, y_1) + \frac{y - y_1}{y_2 - y_1} f(x, y_2)$$
where $Q_{11} = (x_1, y_1)$, $Q_{12} = (x_1, y_2)$, $Q_{21} = (x_2, y_1)$, and $Q_{22} = (x_2, y_2)$.
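As a quick sanity check of these formulas, the small Python function below (illustrative only, with hypothetical corner values) evaluates the two horizontal interpolations and the final vertical one for a single query point.

```python
def bilinear_interpolate(x, y, x1, y1, x2, y2, q11, q12, q21, q22):
    """Evaluate the bilinear interpolation formulas above at the point (x, y).

    q11 = f(Q11), q12 = f(Q12), q21 = f(Q21), q22 = f(Q22); assumes x1 < x2 and y1 < y2.
    """
    # Interpolate along x at the two fixed rows y1 and y2.
    f_x_y1 = (x2 - x) / (x2 - x1) * q11 + (x - x1) / (x2 - x1) * q21
    f_x_y2 = (x2 - x) / (x2 - x1) * q12 + (x - x1) / (x2 - x1) * q22
    # Interpolate along y between the two intermediate values.
    return (y2 - y) / (y2 - y1) * f_x_y1 + (y - y1) / (y2 - y1) * f_x_y2

# The midpoint of four unit-spaced corners equals the mean of the corner values.
print(bilinear_interpolate(0.5, 0.5, 0, 0, 1, 1, q11=1.0, q12=2.0, q21=3.0, q22=4.0))  # 2.5
```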
In this process, following the upsampling step, the feature maps that underwent pooling with varying ratios in the preceding stage have their scales equalized. These feature maps are subsequently concatenated. Recognizing the semantic disparity among these feature maps with distinct original ratios, each of them undergoes a sequence consisting of a 1 × 1 convolutional layer, a ReLU layer, and a 3 × 3 convolutional layer. This series of operations generates weight maps for each multi-scale feature map after the concatenation. Finally, these weight maps are element-wise multiplied, channel by channel, with their corresponding pre-concatenation feature maps, followed by summation. This sequence constitutes the output of the top-level feature enhancement module.
The rationale behind the capability of the top-level enhancement module to improve the representation of the highest-level features lies in its utilization of adaptive pooling at the highest scale. This process effectively extracts features one level higher on the top-level feature map, which contains richer semantic content. Subsequently, this enhanced feature map is integrated with the original top-level feature map. Moreover, the paper introduces two adaptive pooling branches with distinct scaling ratios to combine high-level semantic features with even higher-level, diverse semantic information. These adaptive pooling branches can be extended to encompass multiple ratios, further enhancing the feature representation potential of the highest-level features.
Following the operations of the two adaptive pooling branches, bilinear interpolation is employed to upsample the feature maps. These upsampled maps are then concatenated, forming a cascaded representation of multi-scale enhanced features based on the top-level features. Subsequent to a 1 × 1 convolutional layer that reduces the number of channels, the result is multiplied with the original feature map that has not been concatenated. This procedure yields the final enhanced top-level features.
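The following PyTorch sketch captures one plausible reading of the top-level feature enhancement module described above; it is not the authors' implementation, and details such as the use of average pooling, the default pooling ratios, and the softmax normalization of the weight maps are our assumptions.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class TopLevelEnhancement(nn.Module):
    """Illustrative top-level feature enhancement: adaptive pooling branches with
    different ratios, channel reduction, bilinear upsampling back to the input
    scale, weight-map generation from the concatenated branches, and a weighted
    sum added to the (channel-reduced) original top-level features."""

    def __init__(self, in_channels=2048, channels=256, ratios=(0.5, 0.25)):
        super().__init__()
        self.ratios = ratios
        self.reduce = nn.ModuleList(
            [nn.Conv2d(in_channels, channels, 3, padding=1) for _ in ratios]
        )
        self.weight_head = nn.Sequential(       # 1x1 conv -> ReLU -> 3x3 conv, as in the text
            nn.Conv2d(channels * len(ratios), channels, 1),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels, len(ratios), 3, padding=1),
        )
        self.identity = nn.Conv2d(in_channels, channels, 1)  # aligns the original top-level channels

    def forward(self, top):
        h, w = top.shape[-2:]
        branches = []
        for ratio, conv in zip(self.ratios, self.reduce):
            size = (max(1, round(h * ratio)), max(1, round(w * ratio)))
            pooled = F.adaptive_avg_pool2d(top, size)          # adaptive pooling, aspect ratio preserved
            branches.append(F.interpolate(conv(pooled), size=(h, w),
                                          mode="bilinear", align_corners=False))
        weights = torch.softmax(self.weight_head(torch.cat(branches, dim=1)), dim=1)
        fused = sum(weights[:, i:i + 1] * b for i, b in enumerate(branches))
        return self.identity(top) + fused                      # enhanced top-level features

out = TopLevelEnhancement()(torch.randn(1, 2048, 16, 16))  # e.g., the 1/32-scale map of a 512x512 input
print(out.shape)                                           # torch.Size([1, 256, 16, 16])
```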
In the reverse progressive feature fusion process of the pyramid network, the prevalent approach involves upsampling each layer’s feature map, resulting in uniform dimensions and channel numbers for the feature maps at every level. This facilitates direct merging during the fusion with lower-level features. While nearest neighbor interpolation is frequently used for upsampling in FPN implementation, this method lacks consideration for semantic differences among features of different scales. Hence, the use of bilinear interpolation during upsampling generates smoother multi-scale feature maps, thereby minimizing the semantic gap between distinct feature maps. Moreover, when enhancing top-level features and conducting multi-scale feature fusion, direct application of nearest neighbor interpolation during upsampling might lead to information loss during feature fusion at the top level. The enhanced progressive feature fusion module effectively mitigates the decline in quality of multi-scale features caused by significant semantic gaps across different levels.
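Below is a compact sketch of the top-down fusion pass with bilinear (rather than nearest-neighbor) upsampling, which is the change advocated above; the function name and the assumption that all levels already share a channel count are ours.

```python
import torch
import torch.nn.functional as F

def top_down_fuse(laterals, mode="bilinear"):
    """Progressively fuse feature maps from the top (coarsest) level downward.

    `laterals` is ordered from the finest (stride 4) to the coarsest (stride 32)
    level, all with the same channel count; each coarser map is upsampled and
    added to the next finer one."""
    align = False if mode in ("bilinear", "bicubic") else None
    outs = [laterals[-1]]                        # start from the (enhanced) top-level map
    for feat in reversed(laterals[:-1]):
        up = F.interpolate(outs[0], size=feat.shape[-2:], mode=mode, align_corners=align)
        outs.insert(0, feat + up)
    return outs                                  # same ordering as the input

levels = [torch.randn(1, 256, s, s) for s in (128, 64, 32, 16)]
print([tuple(f.shape) for f in top_down_fuse(levels)])
```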

3.2. Gradient Density Loss Function

In conventional classification tasks, the cross-entropy loss function is widely utilized. The equation for the binary cross-entropy loss function is expressed as follows:
$$L_{\text{cross-entropy}} = \begin{cases} -\log(p), & \text{if } T = 1 \\ -\log(1 - p), & \text{if } T = 0 \end{cases}$$
where $T \in \{0, 1\}$ signifies the true class label of the sample, and the corresponding predicted probability is denoted as $p \in [0, 1]$.
Next, let us denote $out$ as the output of the model. In this scenario, we have $p = \mathrm{sigmoid}(out)$. Now, by differentiating the aforementioned cross-entropy loss function with respect to the model's output, we arrive at
$$\frac{\partial L_{\text{cross-entropy}}}{\partial\, out} = \begin{cases} p - 1, & \text{if } T = 1 \\ p, & \text{if } T = 0 \end{cases}$$
At this juncture, the magnitude of the gradient is defined as $gradnorm$, which can be represented as
$$gradnorm = |p - T| = \begin{cases} 1 - p, & \text{if } T = 1 \\ p, & \text{if } T = 0 \end{cases}$$
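As an illustration, the snippet below (the helper name is ours) computes this gradient magnitude for a small batch of logits; confidently correct predictions yield small magnitudes, while ambiguous or wrong ones yield large magnitudes.

```python
import torch

def grad_norm(logits, targets):
    """|p - T|: magnitude of the cross-entropy gradient w.r.t. the model output."""
    p = torch.sigmoid(logits)
    return (p - targets.float()).abs()

logits = torch.tensor([4.0, -3.0, 0.1, -0.2])   # two easy samples, two hard ones
targets = torch.tensor([1.0, 0.0, 0.0, 1.0])
print(grad_norm(logits, targets))               # ~[0.018, 0.047, 0.525, 0.550]
```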
where the value of $gradnorm$ indicates the level of difficulty posed by a sample and its impact on the overall gradient. To address the challenge stemming from the uneven gradient distribution caused by varying sample quantities, the concept of the gradient density loss function is introduced. This strategy involves partitioning the gradient magnitude into distinct ranges, counting the number of samples within each range, and evaluating the distribution within these ranges. Subsequently, using the sample count in each range and the length of the range, the gradient density (GD) is defined; this metric reflects the number of samples per unit gradient magnitude:
$$GD(gradnorm) = \frac{1}{l_{\varepsilon}(gradnorm)} \sum_{k=1}^{N} \delta_{\varepsilon}(gradnorm_k,\, gradnorm)$$
$$l_{\varepsilon}(gradnorm) = \min\!\left(gradnorm + \frac{\varepsilon}{2},\, 1\right) - \max\!\left(gradnorm - \frac{\varepsilon}{2},\, 0\right)$$
$$\delta_{\varepsilon}(x, y) = \begin{cases} 1, & y - \frac{\varepsilon}{2} \le x < y + \frac{\varepsilon}{2} \\ 0, & \text{otherwise} \end{cases}$$
where $\varepsilon$ represents the width of each range. Therefore, the gradient density $GD(gradnorm)$ can be interpreted as the number of samples whose gradient magnitude falls within the interval $[gradnorm - \frac{\varepsilon}{2},\, gradnorm + \frac{\varepsilon}{2}]$, normalized by the length of that interval. Subsequently, the gradient density parameter is defined as follows:
$$\beta_i = \frac{N}{GD(gradnorm_i)}$$
In the equation, N represents the total number of samples. This parameter is used to weigh the cross-entropy loss classification function, resulting in a new gradient density loss function. This function is designed to reduce the weight of larger gradient density intervals, thereby diminishing the impact of challenging samples on the model. This adjustment helps mitigate the influence of difficult outliers on the final accuracy. For better comprehension, let us express the above equation in an alternative form:
$$\beta_i = \frac{1}{GD(gradnorm_i)/N}$$
As can be discerned, $GD(gradnorm_i)/N$ signifies the proportion of sample gradients within the corresponding gradient partitioning interval relative to the total sample count. If the sample count within the gradient intervals were uniformly distributed, then $\beta_i = 1$ for every interval, implying that the gradient density parameter in that interval would hold no sway over the result. Conversely, for demanding samples characterized by higher gradient densities and larger counts, this value would diminish. This characteristic serves to counterbalance the impact of a multitude of challenging samples on the model's precision.
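The toy example below (our sketch; it approximates $l_{\varepsilon}$ by the bin width $\varepsilon$ and ignores the boundary clipping) histograms a handful of gradient magnitudes into equal-width bins and computes the resulting $\beta_i$, showing how samples in a densely populated gradient interval receive smaller weights.

```python
import torch

def density_weights(g, num_bins=10):
    """Approximate GD(gradnorm) by binning the magnitudes in [0, 1] into
    `num_bins` intervals of width eps, then return beta_i = N / GD."""
    eps = 1.0 / num_bins
    bins = torch.clamp((g / eps).long(), max=num_bins - 1)      # interval index of each sample
    counts = torch.bincount(bins, minlength=num_bins).float()
    gd = counts[bins] / eps                                     # samples per unit gradient magnitude
    return g.numel() / gd                                       # beta_i

g = torch.tensor([0.02, 0.03, 0.05, 0.90])                      # three samples crowd the first interval
print(density_weights(g))                                       # ~[0.133, 0.133, 0.133, 0.400]
```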
Hence, the substitution of the classification loss function focal loss [19] with the gradient density loss function in Sparse R-CNN can be mathematically expressed as follows:
$$L_{gcls} = \frac{1}{N} \sum_{i=1}^{N} \beta_i\, L_{\text{cross-entropy}}(p_i, p_i^{gt}) = \sum_{i=1}^{N} \frac{L_{\text{cross-entropy}}(p_i, p_i^{gt})}{GD(gradnorm_i)}$$
In this equation, $p_i^{gt}$ represents the ground-truth classification label, $p_i$ represents the predicted label, $N$ is the total number of samples, and $GD(gradnorm_i)$ denotes the gradient density evaluated at the gradient magnitude of the $i$th sample, i.e., the number of samples whose gradient magnitude falls within the same partitioned interval, normalized by the interval length. The term $L_{\text{cross-entropy}}(p_i, p_i^{gt})$ denotes the cross-entropy classification loss function.
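Putting the pieces together, the sketch below implements one straightforward version of this weighted classification loss for binary labels; the bin count, the omission of the boundary clipping in $l_{\varepsilon}$, and the detachment of the weights from the computation graph are our assumptions.

```python
import torch
import torch.nn.functional as F

def gradient_density_loss(logits, targets, num_bins=10):
    """Per-sample binary cross-entropy weighted by the inverse gradient density,
    i.e., (1/N) * sum_i beta_i * CE_i = sum_i CE_i / GD(gradnorm_i)."""
    targets = targets.float()
    g = (torch.sigmoid(logits) - targets).abs().detach()        # gradnorm; no gradient through the weights
    eps = 1.0 / num_bins
    bins = torch.clamp((g / eps).long(), max=num_bins - 1)
    counts = torch.bincount(bins, minlength=num_bins).float()
    gd = counts[bins] / eps                                     # gradient density per sample
    ce = F.binary_cross_entropy_with_logits(logits, targets, reduction="none")
    return (ce / gd).sum()

loss = gradient_density_loss(torch.randn(8, requires_grad=True), torch.randint(0, 2, (8,)))
loss.backward()   # usable as a drop-in classification loss term
```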
Figure 3 displays the gradient magnitude distribution of a converged model. Along the horizontal axis, the gradient magnitude is depicted, with the right side representing more challenging samples and the left side representing simpler ones. The vertical axis, on a logarithmic scale, illustrates the corresponding number of samples at each gradient magnitude. The logarithmic scale is chosen due to the substantial variance in quantities between simple and moderately difficult samples in practical scenarios. The graph highlights that the largest number of samples falls into the simple category, while moderately difficult samples are fewer in number. Remarkably, the count of extremely challenging samples increases notably, surpassing even the quantity of moderately difficult ones by a substantial margin. As a consequence, these highly challenging samples can potentially lead to a reduction in model accuracy. This is due to the fact that these challenging samples, acting as outliers, often possess gradient directions significantly distinct from those of other simple and moderately difficult samples. Forcing a nearly converged model to learn from these exceedingly challenging exceptional samples can result in inaccurate classification for a considerable number of other samples.
After applying the gradient density loss function to weight the cross-entropy loss function, the relationship between the gradient magnitudes of samples and their corresponding quantities is illustrated in Figure 4.
In Figure 4, the horizontal axis represents the initial gradient magnitudes, while the vertical axis illustrates the gradient magnitudes after employing distinct loss functions. The observation reveals that within the region associated with simple and moderately difficult samples, situated on the left side of Figure 3, the tendencies of the gradient density loss and focal loss align. This correspondence implies that these two loss functions demonstrate coherent behavior in discriminating between simple and moderately difficult samples. Conversely, for challenging samples situated on the right side of the graph, the gradient density loss effectively diminishes the gradient magnitudes. This adjustment serves to stabilize the model when confronted with difficult and exceptional samples, thereby amplifying training efficiency.
After integrating the enhanced loss function, the comprehensive loss function for the image object detection method is formulated as
$$Loss = \lambda_1 Loss_{cls} + \lambda_2 Loss_{L1} + \lambda_3 Loss_{giou}$$
where $\lambda_1$, $\lambda_2$, and $\lambda_3$ denote the coefficients assigned to the classification loss, the bounding box regression loss, and the intersection over union (IoU) loss, respectively.
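For completeness, a small sketch of how the three terms might be combined; the GIoU term uses torchvision's generalized_box_iou on matched box pairs, the default coefficients echo the values reported later in Section 3.3 (2.0, 2.0, and 5.0), and the scalar classification and L1 losses are passed in precomputed for brevity.

```python
import torch
from torchvision.ops import generalized_box_iou

def total_detection_loss(cls_loss, l1_loss, pred_boxes, gt_boxes,
                         lam_cls=2.0, lam_l1=2.0, lam_giou=5.0):
    """Weighted sum of classification, L1 regression, and GIoU losses for
    matched prediction/ground-truth box pairs in (x1, y1, x2, y2) format."""
    giou = torch.diag(generalized_box_iou(pred_boxes, gt_boxes))  # matched pairs lie on the diagonal
    giou_loss = (1.0 - giou).mean()
    return lam_cls * cls_loss + lam_l1 * l1_loss + lam_giou * giou_loss

pred = torch.tensor([[0.0, 0.0, 10.0, 10.0], [5.0, 5.0, 15.0, 15.0]])
gt = torch.tensor([[1.0, 1.0, 11.0, 11.0], [5.0, 5.0, 15.0, 15.0]])
print(total_detection_loss(torch.tensor(0.3), torch.tensor(0.2), pred, gt))
```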
Focal loss (FL) is a well-known loss function introduced to address the class imbalance problem in object detection. It assigns higher weights to hard-to-classify examples, reducing the contribution of easily classifiable examples to the loss. While FL effectively handles class imbalance, it does not explicitly consider the density of object instances within an image, potentially leading to suboptimal performance in scenarios where objects are densely packed.
Class-agnostic focal loss (CF) extends FL by making it class-agnostic, thus focusing on the overall detection quality without distinguishing between different object classes. This modification enhances the detection of objects irrespective of their classes, which is particularly useful when dealing with diverse or unknown objects. However, CF may not effectively address the issue of object density when objects of varying classes are densely distributed.
GHM-C introduces gradient harmonization to address the imbalance between easy and hard examples during training. It effectively balances the learning process between different examples based on the gradient distribution, enhancing the network’s ability to focus on challenging examples. However, similar to FL, GHM-C does not explicitly consider object density.
In contrast, our proposed gradient density loss is specifically designed to address the challenge of object detection in scenarios with varying object densities. It combines the benefits of FL and GHM-C by incorporating density-based weighting into the loss calculation. Gradient density loss assigns higher weights to challenging examples, both in terms of classification difficulty and object density. By integrating these aspects, it aims to enhance the detection of objects in densely populated scenes, contributing to improved overall detection performance.
In summary, the proposed gradient density loss introduces an innovative approach that combines the strengths of FL, CF, and GHM-C while explicitly considering object density. This approach aims to offer a more comprehensive solution for object detection, particularly in challenging scenarios. Our comparative analysis shows that gradient density loss provides a valuable addition to the field of object detection, especially in cases where object density varies significantly across images.

3.3. The Overall Framework

The model proposed in this paper follows the overall architecture of Sparse R-CNN while introducing improvements on top of it, which is depicted in Figure 5.
In the section pertaining to the backbone network, the model employs a ResNet-50 network that has been pretrained on the ImageNet dataset. Subsequent enhancements are applied to the feature pyramid network (FPN) module of the Sparse R-CNN network using the proposed enhanced feature pyramid network (FPN) for multi-layer feature fusion. Given that the input and output scales, as well as the channel numbers, of the enhanced FPN remain consistent with those of the Sparse R-CNN network, which utilizes feature vectors from the four stages of the ResNet-50 backbone network (Stage 1–4), these improvements do not affect other components of the network.
Within the enhanced FPN’s module for multi-scale fusion, the top-level features receive initial enhancement. A dedicated top-level feature enhancement module is constructed based on the output of the Stage 4 residual layer of ResNet-50. The convolutional layers within the top-level feature enhancement module utilize Xavier initialization. After producing higher-level feature vectors through this module, subsequent stages involve the utilization of bilinear interpolation for both upsampling and fusion. This stands in contrast to the nearest-neighbor interpolation employed in the fusion stage of the original Sparse R-CNN algorithm. This refined approach mitigates top-level feature degradation, a concern in the original Sparse R-CNN algorithm’s multi-scale features. Moreover, due to the smoother feature fusion between different scales, the semantic gap between these features is reduced, leading to superior quality features and ultimately augmenting the algorithm’s accuracy.
Post feature extraction, the process proceeds by employing the Region Proposal Network (RPN) to generate an embedding matrix for each of the fixed 100 detection boxes. These matrices encapsulate the coordinates and classification features of the detection boxes and possess dimensions of 100 × 4 and 100 × 256 , respectively. The initial coordinates of the detection boxes are assigned using a random distribution. Subsequently, RoIAlign is utilized to extract features from the corresponding positions of the input feature maps for each detection box. Employing the multi-scale feature maps from the four layers, each layer maps the embedding matrices of the detection boxes to the extracted multi-scale feature maps. Consequently, feature vectors linked to each detection box are retrieved.
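The snippet below illustrates the shapes involved in this step: 100 learnable proposal boxes (100 × 4) and proposal features (100 × 256), with torchvision's roi_align extracting per-box features from a feature map. It is a shape-level illustration only; the box parameterization, the random initialization, and the use of a single feature level are simplifying assumptions.

```python
import torch
import torch.nn as nn
from torchvision.ops import roi_align

num_proposals, feat_dim, fmap_size = 100, 256, 64

proposal_boxes = nn.Parameter(torch.rand(num_proposals, 4))          # learnable, randomly initialized
proposal_feats = nn.Parameter(torch.randn(num_proposals, feat_dim))  # 100 x 256 embedding matrix
feature_map = torch.randn(1, feat_dim, fmap_size, fmap_size)         # one level of the multi-scale maps

# Interpret the learnable boxes as normalized (cx, cy, w, h) and convert them to
# absolute (x1, y1, x2, y2) coordinates on the feature map.
cx, cy, w, h = (proposal_boxes * fmap_size).unbind(-1)
boxes_xyxy = torch.stack([cx - w / 2, cy - h / 2, cx + w / 2, cy + h / 2], dim=-1)
rois = torch.cat([torch.zeros(num_proposals, 1), boxes_xyxy], dim=1)  # prepend the batch index

roi_feats = roi_align(feature_map, rois, output_size=(7, 7), spatial_scale=1.0, aligned=True)
print(roi_feats.shape)  # torch.Size([100, 256, 7, 7]) -- per-box features fed to the DIIHead
```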
In relation to the label assigner within the detection head, a threshold of 0.5 is defined as the criterion for categorizing positive and negative samples. Classification and regression tasks are accomplished via fully connected layers. The regression loss employs the L1 loss from the original Sparse R-CNN. The GIoU loss is employed for IoU loss. Additionally, an enhancement is introduced via the proposed gradient density classification loss to modify the original Sparse R-CNN’s focal loss. The classification loss weight is set to 2.0, the regression loss weight to 2.0, and the IoU loss weight to 5.0. Upon completion of the Sparse R-CNN detection process, the customary nonmaximum suppression (NMS) post-processing step is eschewed. Instead, the model directly outputs the 100 detection boxes. During visualization, confidence thresholding can be applied to display pertinent detection boxes.

4. Experiments and Analysis

4.1. Datasets and Evaluation Metrics

This section outlines the datasets employed in the experiments and the evaluation criteria utilized.
MS COCO 2017: The Microsoft Common Objects in Context (MS COCO) dataset [21] stands as a cornerstone in computer vision tasks, including object detection and semantic segmentation. It gained prominence following the discontinuation of the ImageNet competition in 2017. COCO has attracted contributions from global tech giants like Google, Microsoft, Facebook, and esteemed research institutions. It boasts a comprehensive object detection dataset drawn from intricate everyday scenes, featuring over 200,000 images and 80 categories. The training set of COCO 2017 comprises 118,287 images, and the validation set contains 5000 images.
MS COCO 2014: Introduced in 2014 [21], the COCO dataset of 2014 diverges in content from its 2017 counterpart. It incorporates a training set of 82,783 images, a validation set of 40,504 images, and a test set encompassing 40,775 images.
Pascal VOC 2007 [22]: Pascal VOC (Pattern Analysis, Statistical Modeling, and Computational Learning Visual Object Classes) serves as a benchmark dataset for object detection, encompassing image classification, object detection, and semantic segmentation tasks. The 2007 edition comprises 20 categories and a total of 9963 annotated images. The dataset is partitioned into a training set of 5011 images and a test set of 4952 images.
Pascal VOC 2012: Representing the concluding year of this esteemed computer vision challenge, Pascal VOC 2012 features image classification, object detection, object segmentation, and action recognition as its principal tasks. It encompasses 11,530 images with 27,450 object annotations and 6929 segmentation annotations.
Evaluation Criteria: For object detection evaluation, the COCO dataset’s evaluation metrics are employed due to the conversion of Pascal VOC datasets into COCO format for experimentation. The primary evaluation metric for object detection in the COCO dataset is the mean average precision (mAP), a measure of average accuracy. Given that object detection involves generating detection boxes, calculating the average precision entails a series of steps. The average AP values are determined for detecting diverse classes of objects prior to assessing classification accuracy.
Throughout COCO evaluation, multiple intersection over union (IoU) thresholds are set, often spanning from 0.5 to 0.95 with increments of 0.05. For each IoU threshold, the average precision is computed for each object class. Ultimately, the obtained AP values across various IoU thresholds are averaged to yield the comprehensive mAP score.
When computing the mean mAP, a crucial metric at play is intersection over union (IoU), employed to gauge the intersection extent between a detected bounding box and its corresponding ground truth bounding box. While assessing the COCO dataset, IoU thresholds, frequently set at levels like 0.5 and 0.75, come into play. If the intersection over union value between the network’s output detection box and the actual ground truth bounding box exceeds the threshold, COCO deems the detection as accurate. A comprehensive illustration of the IoU calculation process is depicted in Figure 6.
IoU stands as a pivotal metric in evaluating the efficacy of object detection, as it gauges the alignment between the predicted detection box and the true ground truth box, thereby indicating the accuracy of the detection. This assessment is based on the comparison of the predicted detection box, denoted as $area_1$, and the actual annotated bounding box for the target, referred to as $area_2$, with their intersection represented as $area_3$. Mathematically, IoU is computed as follows:
$$IoU = \frac{S_{area_3}}{S_{area_1} + S_{area_2} - S_{area_3}}$$
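A direct transcription of this formula for two axis-aligned boxes, with a small worked example (our helper, not part of the paper):

```python
def box_iou(box_a, box_b):
    """IoU of two boxes given as (x1, y1, x2, y2)."""
    ix1, iy1 = max(box_a[0], box_b[0]), max(box_a[1], box_b[1])
    ix2, iy2 = min(box_a[2], box_b[2]), min(box_a[3], box_b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)           # S_area3
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])      # S_area1
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])      # S_area2
    return inter / (area_a + area_b - inter)

print(box_iou((0, 0, 10, 10), (5, 5, 15, 15)))  # 25 / 175 ≈ 0.143, below a 0.5 IoU threshold
```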
In tandem with IoU calculation, the evaluation process also incorporates classification attributes, encompassing the counts of true positives (TPs), false positives (FPs), false negatives (FNs), and true negatives (TNs). These metrics capture the four potential scenarios that arise between the classification outcome and the actual ground truth object: (1) TPs: Detection boxes with an IoU greater than the threshold accurately represent detected objects. (2) FPs: Detection boxes lacking corresponding ground truth annotations are deemed false positives. (3) FNs: Ground truth annotations without corresponding detection boxes are classified as false negatives. (4) TNs: Instances where neither ground truth annotations nor detection boxes are present.
Drawing from these scenarios, the evaluation of precision and recall becomes feasible. Precision, also referred to as the positive predictive value, assesses the proportion of correctly identified positive instances out of all instances predicted as positive. On the other hand, recall, also known as sensitivity, measures the proportion of correctly identified positive instances out of all actual positive instances. The formulas for precision and recall computation are as follows:
$$Precision = \frac{TP}{TP + FP}$$
$$Recall = \frac{TP}{TP + FN}$$
Precision signifies the proportion of correct outcomes among all the targets predicted by the network, whereas recall indicates the proportion of correctly predicted detection boxes output by the network relative to all the true target annotations. In the COCO dataset, each annotated detection box is accompanied by a class confidence score. By varying the confidence score threshold, different detection results can be generated, yielding varying precision and recall values, which are graphically represented in the precision–recall (P–R) curve. The principal metric, mean average precision (mAP), is calculated over a series of IoU thresholds spanning from 0.5 to 0.95 in increments of 0.05; the detection precision is computed at each IoU threshold and averaged to produce the final mAP value.
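To make the precision–recall computation concrete, the sketch below scores a single class at a single IoU threshold: detections are sorted by confidence, cumulative precision and recall are computed from the TP/FP flags, and AP is taken as the area under the enveloped P–R curve. It uses all-point interpolation for simplicity; the official COCO evaluator uses a 101-point approximation and then averages over the IoU thresholds listed above.

```python
import numpy as np

def average_precision(scores, is_tp, num_gt):
    """AP for one class at one IoU threshold from detection confidences,
    true-positive flags, and the number of ground-truth boxes."""
    order = np.argsort(-np.asarray(scores, dtype=float))
    tp = np.asarray(is_tp, dtype=float)[order]
    cum_tp = np.cumsum(tp)
    cum_fp = np.cumsum(1.0 - tp)
    precision = cum_tp / (cum_tp + cum_fp)
    recall = cum_tp / num_gt
    # Envelope the precision curve, then integrate it over recall.
    prec = np.concatenate(([0.0], precision, [0.0]))
    rec = np.concatenate(([0.0], recall, [1.0]))
    prec = np.maximum.accumulate(prec[::-1])[::-1]
    return float(np.sum((rec[1:] - rec[:-1]) * prec[1:]))

# Three detections, two ground-truth boxes: the second-highest-scoring one is a false positive.
print(average_precision([0.9, 0.8, 0.7], [1, 0, 1], num_gt=2))  # ≈ 0.83
```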
Within the ambit of COCO’s evaluation metrics, there exists an element that computes precision based on target size. The COCO dataset classifies targets into three size categories: large, medium, and small, contingent on the area of their authentic annotation boxes. Targets featuring an annotation pixel area exceeding 96 × 96 are designated as large, while those below 32 × 32 are denoted as small, and those encompassed between 32 × 32 and 96 × 96 are categorized as medium. In the ultimate assessment, the COCO dataset computes precision separately for these three target sizes, denoted as $AP_{small}$, $AP_{medium}$, and $AP_{large}$.

4.2. Experimental Environment

The method presented in this paper is implemented using the PyTorch deep learning framework, and all experiments are carried out on an NVIDIA GTX 1080 Ti GPU with 11 GB of memory. The initial learning rate for training is set to 0.000025, and the training process employs the SGD optimizer with a momentum value of 0.9. The learning rate follows an equal-interval step decay schedule. The model is trained for a total of 150 epochs over the complete dataset, with a batch size of 2 for training images.
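The configuration above translates roughly into the following PyTorch setup; the linear layer is only a placeholder for the detector, and the step size and decay factor of the scheduler are assumptions, since the paper states only that an equal-interval reduction strategy is used.

```python
import torch
import torch.nn as nn

model = nn.Linear(10, 4)  # placeholder standing in for the Sparse R-CNN-based detector
optimizer = torch.optim.SGD(model.parameters(), lr=2.5e-5, momentum=0.9)
scheduler = torch.optim.lr_scheduler.StepLR(optimizer, step_size=50, gamma=0.1)  # interval and factor assumed

for epoch in range(150):
    # ... one pass over the training set with batch size 2 goes here ...
    scheduler.step()
```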

4.3. Ablation Experiments

In contrast to existing methodologies, this paper introduces two novel modules: the enhanced feature pyramid network and the gradient density object detection classification loss function. To meticulously assess the actual efficacy of these modules in enhancing the performance of the baseline method in real-world scenarios, this study undertakes ablation experiments. The comparative reference point for these evaluations is the baseline method derived from the official implementation of Sparse R-CNN.
The results in Table 1 highlight the enhancements achieved by incorporating the enhanced feature pyramid network and gradient density object detection classification loss function modules into the initial baseline method, Sparse R-CNN. The integration of the enhanced feature pyramid network module contributes to a 0.4% increase in average precision compared to the original Sparse R-CNN baseline. This improvement is consistent across various object scales, encompassing large, medium, and small objects. By replacing focal loss with the gradient density loss function in the baseline approach, no substantial changes are observed in the overall metrics, yet the average precision experiences a slight elevation of 0.1%. Notably, when both the enhanced feature pyramid network module and gradient density loss function are simultaneously applied to the baseline, the average precision advances by 1.1%. This translates to a 2.4% boost in precision for small-scale objects, and 0.9% and 0.3% enhancements for medium and large objects, respectively.
The adoption of the enhanced feature pyramid network module brings about a marked improvement in feature quality, positively impacting detection performance across various object scales. Furthermore, the incorporation of the gradient density loss function during training effectively reduces the model’s susceptibility to challenging small objects, resulting in enhanced detection accuracy for such objects, without compromising the performance for medium and large objects. In terms of computational efficiency, both Improvement 1 and Improvement 2 had a minor effect on the training and inference speeds of the model. Specifically, the model’s inference rate slightly decreased by 1.1 images per second, and the training time per iteration saw a marginal increase of 0.0159 s.

4.4. Comparative Experiments with Existing Methods

To validate the practical effectiveness of the proposed enhancements, a series of comparative experiments were conducted by incorporating the proposed modules into the baseline method. Subsequently, these enhanced methods were systematically compared against several state-of-the-art object detection approaches, all evaluated on the four widely used benchmark datasets. The obtained validation results unequivocally demonstrate that the proposed method surpassed the performance of these existing advanced methods.
Table 2 presents the performance comparison on the COCO 2017 dataset, where the enhancements proposed in this study were rigorously assessed against prominent 2021 methods, including YOLOX, YOLOF, and Deformable DETR. The observed improvements in average precision were substantial, with increments of 0.3%, 0.1%, and 0.4% for the aforementioned methods, respectively. Furthermore, Table 3 reveals the outcome of the enhancements on the COCO 2014 dataset. The proposed improvements exhibited remarkable average precision boosts of 0.7% and 0.6% when juxtaposed with the YOLOX and YOLOF methods proposed in 2021. Additionally, a 0.1% enhancement was achieved when compared to Deformable DETR. These comparisons illustrate the potency of the introduced enhancements in enhancing object detection accuracy.
The superiority of the proposed method is evident from the results displayed in Table 4 on the Pascal VOC 2007 dataset. Against Deformable DETR, YOLOF, and YOLOX, the method introduced in this paper exhibited remarkable average precision enhancements of 0.4%, 0.6%, and 0.7%, respectively. Notably, in terms of accuracy for detecting large objects, the proposed method outperformed YOLOX by a substantial margin of 1.9%. Moreover, when benchmarked against Dynamic R-CNN, DETR, and CentriPetalNet, the enhancements showcased their prowess by delivering substantial average precision improvements of 2.4%, 2.3%, and an impressive 4.0%, respectively. This compelling comparison demonstrates the considerable advancements introduced by the proposed method across multiple evaluation metrics.
Table 5 reports the performance on the Pascal VOC 2012 dataset. Compared with YOLOX, YOLOF, and Deformable DETR, the proposed method achieved average precision gains of 0.2%, 0.7%, and 0.1%, respectively; against Dynamic R-CNN, DETR, and CentripetalNet, the improvements were 1.9%, 2.4%, and 3.7%. Overall, the proposed method shows small but consistent improvements in average precision and recall across object sizes relative to strong recent detectors such as Deformable DETR and YOLOX, and clearer margins over Dynamic R-CNN, DETR, and CentripetalNet.
Our characterization of the results as “superior performance” is based on improvements over the baseline model within the specific domain of image object detection, for the following reasons.
(1)
Benchmark and context: When referring to our results as “superior performance”, we take into account the benchmark and the context of the evaluation. Our research aims to enhance image object detection methods, and the proposed techniques achieve consistent improvements over existing approaches. While an absolute value of 51% AP may not seem extraordinarily high in a broader context, it represents a noteworthy advancement within image object detection, which involves challenges such as diverse object sizes and scales and varying degrees of scene complexity.
(2)
Superiority over baseline: We compare our approach with a baseline model, which is standard practice in computer vision. By demonstrating an improvement over the baseline model’s performance, we highlight the effectiveness of the proposed methodologies. All experiments use the same training/testing data setting, ensuring a fair evaluation.
(3)
Impact of data variability: Data variability can affect model performance. To address this concern, our experiments cover a diverse range of object classes, sizes, and complexities that reflect real-world scenarios. Performance may still vary when the dataset changes, and studying this sensitivity is an interesting direction for future research.

4.5. Visualization Results

This subsection presents the visualization results obtained on the COCO 2017, COCO 2014, Pascal VOC 2007, and Pascal VOC 2012 datasets. Figure 7 shows the outcomes on the COCO 2017 dataset. In the first row of the visual comparison, the baseline method detects the human targets and three bird targets but misses the handbag; the improved model detects the handbag without degrading the detection of the other objects. In the second row, the baseline method detects the train cars and traffic lights yet overlooks the small, distant bird, which the improved model successfully captures. In the third row, the baseline method misidentifies the camera in the bottom left corner as a water bottle and mistakes the charger for a cellphone; the improved model corrects both misclassifications.
Figure 8 illustrates the visualization results of this work on the MS COCO 2014 dataset.
In the first row of results, the baseline method detects the occluded teddy bear target and the sliced banana target but misses another occluded banana target, which the improved method identifies; a few misclassifications remain, such as pies incorrectly labeled as bowl and banana targets. In the second row, the baseline method detects the bus and fire hydrant targets but misclassifies several pillars behind the bus as humans, an error the improved method corrects. The third row presents a scene with many extremely small car targets: the baseline method misses a substantial number of them, while the improved method detects considerably more. The improved method also detects the clock target, although both methods exhibit some misclassifications of bus targets. These examples indicate that the enhanced feature pyramid module strengthens the detection of small-scale targets while maintaining accuracy for medium and large targets.
Figure 9 provides a visual comparison of results on the VOC 2007 dataset. In the first row, the baseline method detects the car target and the person driving it but misclassifies the billboard next to the race track as a train; the improved model corrects this error. In the second row, the baseline method produces inaccurate detections that are largely corrected by the improved model, although some misclassifications persist, such as a gift-wrapped package labeled as a handbag. In the third row, the baseline method erroneously labels a substantial number of fruit and biscuit targets as cake targets.
Figure 10 provides a visual comparison of results on the VOC 2012 dataset. In the first row of examples, the baseline method successfully detects the dining table, person, and chair targets. However, the car detection result in the top right corner is incorrect. The improved model corrects this misclassification and identifies additional targets, such as mobile phones, blankets, bowls, and more chairs, showcasing superior performance compared to the baseline method. Transitioning to the second row of results, the baseline method misclassifies a distant car as a bus and erroneously labels a prominent piece of clothing as a person. These misclassifications are rectified in the output of the improved model. Moving on to the third row of results, in scenarios featuring smaller distant person targets, the improved model demonstrates improved detection performance compared to the baseline method, successfully capturing more of the smaller targets.
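The qualitative comparisons in Figures 7–10 can be reproduced with standard tooling. The minimal OpenCV sketch below is an illustrative script, not the exact code used to generate our figures; the file names and detection tuples in the usage comment are hypothetical.

```python
# Illustrative sketch: draw baseline and improved-model detections side by side.
import cv2

def draw_detections(image_bgr, detections, color=(0, 255, 0), score_thr=0.3):
    """Draw (x1, y1, x2, y2, score, label) tuples on a copy of the image."""
    canvas = image_bgr.copy()
    for x1, y1, x2, y2, score, label in detections:
        if score < score_thr:
            continue  # suppress low-confidence predictions, as in the figures
        cv2.rectangle(canvas, (int(x1), int(y1)), (int(x2), int(y2)), color, 2)
        cv2.putText(canvas, f"{label} {score:.2f}", (int(x1), int(y1) - 5),
                    cv2.FONT_HERSHEY_SIMPLEX, 0.5, color, 1)
    return canvas

# Hypothetical usage:
# img = cv2.imread("coco_sample.jpg")
# panel = cv2.hconcat([draw_detections(img, baseline_dets, (0, 0, 255)),
#                      draw_detections(img, improved_dets, (0, 255, 0))])
# cv2.imwrite("comparison.jpg", panel)
```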
Here, we analyze the performance variations observed when applying the proposed method to the different datasets. Understanding these variations is important for assessing the adaptability and limitations of our approach, as they reflect the interplay of dataset characteristics, dataset-specific challenges, data distribution, model adaptability, and generalization capability.
(1)
Dataset characteristics: The COCO 2017 dataset and the COCO 2014 dataset differ in terms of the number of images, object categories, and scene complexity. COCO 2017 is more extensive, while COCO 2014 is a slightly smaller dataset. These differences in dataset characteristics can influence the performance due to variations in object diversity, object sizes, and scene complexities. The Pascal VOC datasets (2007 and 2012) are relatively smaller and contain fewer object categories. The smaller dataset size can make it more challenging for models to generalize, potentially affecting performance.
(2)
Data distribution: The distribution of object sizes and object densities within images can vary significantly between datasets. COCO datasets are known for their diverse object sizes and complex scenes, while Pascal VOC datasets may have different size and density characteristics. Our method, which takes into account object density using the gradient density loss, may show variations in performance based on these distributions. Class imbalance, where some classes have more instances than others, can also impact performance. Some datasets may exhibit more pronounced class imbalances than others, leading to variations in detection performance across object categories.
(3)
Specific challenges: Each dataset may come with its unique challenges. For example, COCO datasets may involve instances of small objects or objects with heavy occlusions, while Pascal VOC datasets may have specific challenges related to class distribution and object appearance. These dataset-specific challenges can affect detection performance.
(4)
Adaptability of model: Our proposed approach, including the enhanced feature pyramid network and gradient density loss, is designed to improve object detection under varying conditions. However, its adaptability to different datasets depends on the specific characteristics of those datasets. The adaptability of our approach may also be influenced by the selection of hyperparameters. Fine-tuning hyperparameters for specific datasets can further enhance performance.
(5)
Model generalization: The ability of our model to generalize across datasets is a crucial factor. Generalization depends on the diversity of the training data, the robustness of the network architecture, and the effectiveness of the proposed loss function.
The proposed image object detection approach brings several novel elements to the field:
(1)
Enhanced feature pyramid network: The introduction of an enhanced feature pyramid network stands as a key innovation. It improves the baseline’s multi-scale feature extraction and fusion, which is crucial for accurate object detection, and the reduction in information loss during feature fusion strengthens the network’s feature representation (see the illustrative sketch after this list).
(2)
Gradient density loss function: Replacing the conventional focal loss with the gradient density loss function is the second novel contribution. This loss provides a more accurate assessment of the object detection classification loss, mitigating the influence of challenging samples and leading to improved detection performance.
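To make the first point concrete, the following PyTorch sketch illustrates one way multi-scale backbone features can be fused with an additional top-level fusion step. It is a simplified illustration only: the module name, channel counts, and pooling-based top-level fusion are assumptions made for this sketch, not the exact enhanced feature pyramid used in our experiments, whose structure is depicted in Figures 1 and 2.

```python
# Simplified sketch of a feature pyramid with an extra top-level fusion step.
# This is NOT the exact module used in our experiments; it only illustrates the
# kind of multi-scale fusion the enhanced FPN is intended to improve.
import torch
import torch.nn as nn
import torch.nn.functional as F

class SimpleEnhancedFPN(nn.Module):
    def __init__(self, in_channels=(512, 1024, 2048), out_channels=256):
        super().__init__()
        self.lateral = nn.ModuleList(nn.Conv2d(c, out_channels, 1) for c in in_channels)
        self.smooth = nn.ModuleList(nn.Conv2d(out_channels, out_channels, 3, padding=1)
                                    for _ in in_channels)
        # Top-level fusion: re-aggregate all pyramid levels into the coarsest map
        # so high-level semantics are not diluted during top-down fusion.
        self.top_fuse = nn.Conv2d(out_channels * len(in_channels), out_channels, 1)

    def forward(self, feats):                      # feats: C3, C4, C5 (fine -> coarse)
        laterals = [l(f) for l, f in zip(self.lateral, feats)]
        for i in range(len(laterals) - 1, 0, -1):  # top-down pathway
            laterals[i - 1] = laterals[i - 1] + F.interpolate(
                laterals[i], size=laterals[i - 1].shape[-2:], mode="nearest")
        outs = [s(p) for s, p in zip(self.smooth, laterals)]
        top_size = outs[-1].shape[-2:]
        pooled = [F.adaptive_max_pool2d(o, top_size) for o in outs]
        outs[-1] = outs[-1] + self.top_fuse(torch.cat(pooled, dim=1))
        return outs

# Smoke test with ResNet-50-like channel counts (assumed):
# p3, p4, p5 = SimpleEnhancedFPN()([torch.randn(1, 512, 100, 100),
#                                   torch.randn(1, 1024, 50, 50),
#                                   torch.randn(1, 2048, 25, 25)])
```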
The contributions of this work can be summarized as follows:
(1)
Enhanced detection accuracy: The integration of the enhanced feature pyramid network and the gradient density loss function leads to a significant improvement in detection accuracy across a range of object scales and complexities. This contributes to the advancement of object detection methodologies.
(2)
Improved multi-scale features: The proposed network architecture enhances the extraction and fusion of multi-scale features, which results in higher-quality features and overall improved detection performance. This contribution addresses a crucial aspect of object detection tasks.
(3)
Novel loss function: The introduction of the gradient density loss function provides a novel approach to handling classification loss in object detection. By accounting for the influence of challenging samples, this loss function enhances the model’s capability to accurately classify objects (a simplified sketch is given after this list).
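To illustrate the idea behind the gradient density loss, the sketch below implements a simplified gradient-density (GHM-C-style) reweighting in the spirit of [20]: each sample’s gradient norm is binned, and samples falling in densely populated bins are down-weighted. The number of bins, the binary-classification setting, and the absence of momentum smoothing are simplifying assumptions; this is not the exact loss implementation used in our experiments.

```python
# Simplified, GHM-C-style gradient density weighting for binary classification
# logits, following the gradient harmonizing idea of Li et al. [20].
import torch
import torch.nn.functional as F

def gradient_density_loss(logits, targets, num_bins=10):
    """logits, targets: tensors of shape (N,), targets in {0, 1}."""
    probs = torch.sigmoid(logits.detach())
    g = (probs - targets).abs()                      # gradient norm per sample
    weights = torch.zeros_like(g)
    edges = torch.linspace(0, 1, num_bins + 1, device=g.device)
    edges[-1] += 1e-6                                # include g == 1 in the last bin
    n = g.numel()
    for i in range(num_bins):
        in_bin = (g >= edges[i]) & (g < edges[i + 1])
        count = in_bin.sum().item()
        if count > 0:
            # Density = samples per unit of gradient norm; weight = N / density,
            # so samples in crowded bins contribute less to the loss.
            density = count * num_bins               # count / bin_width, bin_width = 1/num_bins
            weights[in_bin] = n / density
    ce = F.binary_cross_entropy_with_logits(logits, targets.float(), reduction="none")
    return (weights * ce).sum() / n

# Intuition: very easy backgrounds (g near 0) and, when numerous, extreme
# outliers (g near 1) fall into densely populated bins and are down-weighted,
# while moderately hard samples retain larger weights.
```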
While the proposed approach shows promising results, there are certain limitations to be considered:
(1)
Computational overhead: Although the introduced enhancements have a minor impact on training and inference speeds, there might still be computational overhead associated with the proposed method. Further optimization might be necessary for real-time applications.
(2)
Generalization to diverse datasets: While the proposed approach performs well on the tested benchmark datasets, its generalization to more diverse and challenging datasets remains to be validated. Different datasets may present unique challenges that could affect the method’s performance.
In the pursuit of further advancement, several avenues for future research open up:
(1)
Domain adaptation: Exploring the adaptability of the proposed approach to domain-specific challenges, such as specific object categories or environmental conditions, could enhance its practical applicability.
(2)
Efficiency optimization: Continued efforts to optimize the computational efficiency of the method will be valuable for real-time applications and large-scale deployments.
(3)
Robustness testing: Evaluating the proposed approach’s robustness against occlusions, lighting variations, and other real-world challenges could provide insights into its practical viability.
(4)
Exploration of loss functions: Further investigation into alternative loss functions or modifications to the existing ones could contribute to even more accurate and stable training processes.
(5)
Interpretability: Exploring methods to enhance the interpretability of the model’s decisions could improve the transparency and trustworthiness of the proposed approach.

5. Conclusions

This paper presents an image object detection approach built on an enhanced pyramid network, aimed at improving the detection performance of current object detection methods. The contributions can be summarized in three aspects. First, an enhanced feature pyramid network is introduced into the baseline method, strengthening multi-scale feature extraction and fusion; by minimizing information loss during the fusion of multi-scale features, the network improves its feature representation. Second, the classification loss of the baseline is revised by replacing the focal loss with the gradient density loss function, which yields a more precise evaluation of the object detection classification loss. This refinement is relevant not only to our method but also to application domains such as autonomous driving and security surveillance, where higher classification accuracy directly supports safety and reliability. Finally, comprehensive experiments validate the effectiveness of the proposed method on four widely used public object detection datasets: MS COCO 2017, MS COCO 2014, Pascal VOC 2007, and Pascal VOC 2012, on which it achieves average precision of 39.4%, 42.0%, 51.5%, and 49.9%, respectively, surpassing the compared methods. In summary, the proposed approach combines an enhanced feature pyramid network with a gradient density classification loss, and the experimental results demonstrate its consistent advantage over existing methods on diverse benchmark datasets.

Author Contributions

Conceptualization, W.Z.; methodology, Y.W.; software, Q.W., F.W. and F.L.; validation, R.Z. and Y.Z.; writing, Y.W. and Y.Z.; writing—review and editing, F.W.; project administration, S.D.; funding acquisition, S.D. and W.Z. All authors have read and agreed to the published version of the manuscript.

Funding

This work was supported by the Natural Science Foundation of Fujian Province (Grant Nos. 2021J011086, 2023J01964, 2023J01965, 2023J01966), by the Fujian Province Chinese Academy of Sciences STS Program Supporting Project (Grant No. 2023T3084), and by the Qimai Science and Technology Innovation Project of Wuping County (Grant No. WPQM001).

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

All datasets used in this manuscript are publicly available.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Viola, P.; Jones, M. Rapid object detection using a boosted cascade of simple features. In Proceedings of the 2001 IEEE Computer Society Conference on Computer Vision and Pattern Recognition (CVPR 2001), Kauai, HI, USA, 8–14 December 2001. [Google Scholar]
  2. Dalal, N.; Triggs, B. Histograms of oriented gradients for human detection. In Proceedings of the 2005 IEEE Computer Society Conference on Computer Vision and Pattern Recognition (CVPR’05), San Diego, CA, USA, 20–25 June 2005; pp. 886–893. [Google Scholar]
  3. Felzenszwalb, P.F.; Girshick, R.B.; McAllester, D.; Ramanan, D. Object detection with discriminatively trained part-based models. IEEE Trans. Pattern Anal. Mach. Intell. 2009, 32, 1627–1645. [Google Scholar] [CrossRef] [PubMed]
  4. Krizhevsky, A.; Sutskever, I.; Hinton, G.E. Imagenet classification with deep convolutional neural networks. Adv. Neural Inf. Process. Syst. 2012, 25, 1097–1105. [Google Scholar] [CrossRef]
  5. Girshick, R.; Donahue, J.; Darrell, T.; Malik, J. Region-based convolutional networks for accurate object detection and segmentation. IEEE Trans. Pattern Anal. Mach. Intell. 2015, 38, 142–158. [Google Scholar] [CrossRef] [PubMed]
  6. He, K.; Zhang, X.; Ren, S.; Sun, J. Spatial pyramid pooling in deep convolutional networks for visual recognition. IEEE Trans. Pattern Anal. Mach. Intell. 2015, 37, 1904–1916. [Google Scholar] [CrossRef] [PubMed]
  7. Girshick, R. Fast r-cnn. In Proceedings of the IEEE International Conference on Computer Vision, Santiago, Chile, 7–13 December 2015; pp. 1440–1448. [Google Scholar]
  8. Ren, S.; He, K.; Girshick, R.; Sun, J. Faster r-cnn: Towards real-time object detection with region proposal networks. Adv. Neural Inf. Process. Syst. 2015, 28, 1137–1149. [Google Scholar] [CrossRef] [PubMed]
  9. Cai, Z.; Vasconcelos, N. Cascade r-cnn: Delving into high quality object detection. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Salt Lake City, UT, USA, 18–23 June 2018; pp. 6154–6162. [Google Scholar]
  10. Redmon, J.; Divvala, S.; Girshick, R.; Farhadi, A. You only look once: Unified, real-time object detection. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Las Vegas, NV, USA, 27–30 June 2016; pp. 779–788. [Google Scholar]
  11. Liu, W.; Anguelov, D.; Erhan, D.; Szegedy, C.; Reed, S.; Fu, C.Y.; Berg, A.C. Ssd: Single shot multibox detector. In Proceedings of the Computer Vision-ECCV 2016: 14th European Conference, Amsterdam, The Netherlands, 11–14 October 2016; pp. 21–37. [Google Scholar]
  12. Redmon, J.; Farhadi, A. Yolov3: An incremental improvement. arXiv 2018, arXiv:1804.02767. [Google Scholar]
  13. Lin, T.Y.; Dollár, P.; Girshick, R.; He, K.; Hariharan, B.; Belongie, S. Feature pyramid networks for object detection. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Honolulu, HI, USA, 21–26 July 2017; pp. 2117–2125. [Google Scholar]
  14. Bochkovskiy, A.; Wang, C.Y.; Liao, H.Y.M. Yolov4: Optimal speed and accuracy of object detection. arXiv 2020, arXiv:2004.10934. [Google Scholar]
  15. Dai, X.; Chen, Y.; Yang, J.; Zhang, P.; Yuan, L.; Zhang, L. Dynamic detr: End-to-end object detection with dynamic attention. In Proceedings of the IEEE/CVF International Conference on Computer Vision, Montreal, QC, Canada, 11–17 October 2021; pp. 2988–2997. [Google Scholar]
  16. Guan, S.; Khan, A.A.; Sikdar, S.; Chitnis, P.V. Fully dense UNet for 2-D sparse photoacoustic tomography artifact removal. IEEE J. Biomed. Health Inform. 2019, 24, 568–576. [Google Scholar] [CrossRef] [PubMed]
  17. Ghiasi, G.; Lin, T.Y.; Le, Q.V. Nas-fpn: Learning scalable feature pyramid architecture for object detection. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, Long Beach, CA, USA, 15–20 June 2019; pp. 7036–7045. [Google Scholar]
  18. Lopez-Marcano, S.; Jinks, E.L.; Buelow, C.A.; Brown, C.J.; Wang, D.; Kusy, B.; Connolly, R.M. Automatic detection of fish and tracking of movement for ecology. Ecol. Evol. 2021, 11, 8254–8263. [Google Scholar] [CrossRef] [PubMed]
  19. Lin, T.Y.; Goyal, P.; Girshick, R.; He, K.; Dollár, P. Focal loss for dense object detection. In Proceedings of the IEEE International Conference on Computer Vision, Venice, Italy, 22–29 October 2017; pp. 2980–2988. [Google Scholar]
  20. Li, B.; Liu, Y.; Wang, X. Gradient harmonized single-stage detector. In Proceedings of the AAAI Conference on Artificial Intelligence, Honolulu, HI, USA, 27 January–1 February 2019; pp. 8577–8584. [Google Scholar]
  21. Lin, T.Y.; Maire, M.; Belongie, S.; Hays, J.; Perona, P.; Ramanan, D.; Zitnick, C.L. Microsoft coco: Common objects in context. In Proceedings of the Computer Vision-ECCV 2014: 13th European Conference, Zurich, Switzerland, 6–12 September 2014; pp. 740–755. [Google Scholar]
  22. Shetty, S. Application of convolutional neural network for image classification on Pascal VOC challenge 2012 dataset. arXiv 2016, arXiv:1607.03785. [Google Scholar]
  23. Law, H.; Deng, J. Cornernet: Detecting objects as paired keypoints. In Proceedings of the European Conference on Computer Vision, Munich, Germany, 8–14 September 2018; pp. 734–750. [Google Scholar]
  24. Zhang, S.; Chi, C.; Yao, Y.; Lei, Z.; Li, S.Z. Bridging the gap between anchor-based and anchor-free detection via adaptive training sample selection. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, Seattle, WA, USA, 13–19 June 2020; pp. 9759–9768. [Google Scholar]
  25. Wang, J.; Chen, K.; Xu, R.; Liu, Z.; Loy, C.C.; Lin, D. Carafe: Content-aware reassembly of features. In Proceedings of the IEEE/CVF International Conference on Computer Vision, Seoul, Korea, 27 October–2 November 2019; pp. 3007–3016. [Google Scholar]
  26. Duan, K.; Bai, S.; Xie, L.; Qi, H.; Huang, Q.; Tian, Q. Centernet: Keypoint triplets for object detection. In Proceedings of the IEEE/CVF International Conference on Computer Vision, Seoul, Korea, 27 October–2 November 2019; pp. 6569–6578. [Google Scholar]
  27. Zhu, B.; Wang, J.; Jiang, Z.; Zong, F.; Liu, S.; Li, Z.; Sun, J. Autoassign: Differentiable label assignment for dense object detection. arXiv 2020, arXiv:2007.03496. [Google Scholar]
  28. Dong, Z.; Li, G.; Liao, Y.; Wang, F.; Ren, P.; Qian, C. Centripetalnet: Pursuing high-quality keypoint pairs for object detection. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, Seattle, WA, USA, 13–19 June 2020; pp. 10519–10528. [Google Scholar]
  29. Carion, N.; Massa, F.; Synnaeve, G.; Usunier, N.; Kirillov, A.; Zagoruyko, S. End-to-end object detection with transformers. In Proceedings of the 16th European Conference on Computer Vision, Glasgow, UK, 23–28 August 2020; pp. 213–229. [Google Scholar]
  30. Zhang, H.; Chang, H.; Ma, B.; Wang, N.; Chen, X. Dynamic R-CNN: Towards high quality object detection via dynamic training. In Proceedings of the 16th European Conference on Computer Vision, Glasgow, UK, 23–28 August 2020; pp. 260–275. [Google Scholar]
  31. Zhu, X.; Su, W.; Lu, L.; Li, B.; Wang, X.; Dai, J. Deformable detr: Deformable transformers for end-to-end object detection. arXiv 2020, arXiv:2010.04159. [Google Scholar]
  32. Chen, Q.; Wang, Y.; Yang, T.; Zhang, X.; Cheng, J.; Sun, J. You only look one-level feature. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, Nashville, TN, USA, 20–25 June 2021; pp. 13039–13048. [Google Scholar]
  33. Ge, Z.; Liu, S.; Wang, F.; Li, Z.; Sun, J. Yolox: Exceeding yolo series in 2021. arXiv 2021, arXiv:2107.08430. [Google Scholar]
Figure 1. Feature extraction network.
Figure 2. Top-level feature fusion module.
Figure 3. Distribution of sample gradient magnitudes and corresponding quantities [20].
Figure 4. Gradient magnitudes with different weighted loss functions [20], where gradient density loss is represented by the red line and focal loss is represented by the blue line. Here, FL represents the focal loss, CE represents the cross-entropy, and GHM-C represents the gradient harmonizing mechanism classification.
Figure 5. The overall framework.
Figure 6. Calculation of IoU for bounding boxes.
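Figure 6 depicts the standard intersection-over-union computation; for completeness, a minimal sketch of that calculation for axis-aligned boxes given as (x1, y1, x2, y2) corners follows.

```python
# Minimal sketch of the IoU computation illustrated in Figure 6 for two
# axis-aligned boxes given as (x1, y1, x2, y2) corner coordinates.
def iou(box_a, box_b):
    ax1, ay1, ax2, ay2 = box_a
    bx1, by1, bx2, by2 = box_b
    inter_w = max(0.0, min(ax2, bx2) - max(ax1, bx1))
    inter_h = max(0.0, min(ay2, by2) - max(ay1, by1))
    inter = inter_w * inter_h
    union = (ax2 - ax1) * (ay2 - ay1) + (bx2 - bx1) * (by2 - by1) - inter
    return inter / union if union > 0 else 0.0

# Example: two unit squares offset by half a side overlap with IoU = 1/3.
assert abs(iou((0, 0, 1, 1), (0.5, 0, 1.5, 1)) - 1 / 3) < 1e-9
```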
Figure 7. Visualization results on MS COCO 2017 dataset.
Figure 8. Visualization results on MS COCO 2014 dataset.
Figure 9. Visualization results on VOC 2007 dataset.
Figure 10. Visualization results on Pascal VOC 2012 dataset.
Table 1. The results of the ablation experiments on the COCO 2017 dataset.
| Model | AP_{0.5:0.95} | AP_{0.5} | AP_{0.75} | AP_s | AP_m | AP_l | AR_{100} | AR_s | AR_m | AR_l | Inference (img/s) | Training (s/iter) |
|---|---|---|---|---|---|---|---|---|---|---|---|---|
| Baseline | 0.384 | 0.560 | 0.414 | 0.206 | 0.409 | 0.540 | 0.578 | 0.340 | 0.621 | 0.772 | 27.5 | 0.2982 |
| Baseline + Improvement 1 | 0.388 | 0.566 | 0.418 | 0.209 | 0.413 | 0.547 | 0.579 | 0.341 | 0.621 | 0.782 | 26.6 | 0.3095 |
| Baseline + Improvement 2 | 0.385 | 0.564 | 0.413 | 0.208 | 0.407 | 0.540 | 0.579 | 0.342 | 0.621 | 0.781 | 26.7 | 0.3100 |
| Baseline + Improvement 1 + Improvement 2 | 0.395 | 0.578 | 0.423 | 0.232 | 0.418 | 0.543 | 0.581 | 0.367 | 0.621 | 0.788 | 26.4 | 0.3141 |
Table 2. Comparative experiments were conducted on the COCO 2017 dataset.
| Model | AP_{0.5:0.95} | AP_{0.5} | AP_{0.75} | AP_s | AP_m | AP_l | AR_{100} | AR_s | AR_m | AR_l |
|---|---|---|---|---|---|---|---|---|---|---|
| Faster R-CNN [8] | 0.306 | 0.511 | 0.324 | 0.150 | 0.336 | 0.389 | 0.482 | 0.179 | 0.530 | 0.618 |
| Focal loss [19] | 0.307 | 0.502 | 0.330 | 0.142 | 0.335 | 0.390 | 0.478 | 0.175 | 0.528 | 0.623 |
| CornerNet [23] | 0.302 | 0.444 | 0.316 | 0.137 | 0.320 | 0.389 | 0.480 | 0.170 | 0.522 | 0.619 |
| ATSS [24] | 0.315 | 0.479 | 0.340 | 0.176 | 0.348 | 0.406 | 0.489 | 0.202 | 0.532 | 0.628 |
| CARAFE [25] | 0.308 | 0.208 | 0.331 | 0.184 | 0.342 | 0.377 | 0.487 | 0.201 | 0.524 | 0.611 |
| Cascade R-CNN [9] | 0.342 | 0.518 | 0.369 | 0.188 | 0.375 | 0.447 | 0.509 | 0.312 | 0.585 | 0.706 |
| CenterNet [26] | 0.295 | 0.461 | 0.314 | 0.102 | 0.329 | 0.467 | 0.485 | 0.228 | 0.517 | 0.728 |
| AutoAssign [27] | 0.348 | 0.528 | 0.377 | 0.184 | 0.385 | 0.449 | 0.506 | 0.309 | 0.599 | 0.709 |
| CentripetalNet [28] | 0.358 | 0.522 | 0.378 | 0.188 | 0.382 | 0.470 | 0.517 | 0.320 | 0.608 | 0.720 |
| DETR [29] | 0.391 | 0.577 | 0.409 | 0.174 | 0.424 | 0.587 | 0.571 | 0.350 | 0.617 | 0.790 |
| Dynamic R-CNN [30] | 0.389 | 0.576 | 0.427 | 0.221 | 0.419 | 0.517 | 0.570 | 0.354 | 0.621 | 0.779 |
| Deformable DETR [31] | 0.391 | 0.581 | 0.424 | 0.239 | 0.423 | 0.541 | 0.585 | 0.369 | 0.631 | 0.790 |
| YOLOF [32] | 0.393 | 0.583 | 0.427 | 0.238 | 0.421 | 0.542 | 0.582 | 0.368 | 0.620 | 0.788 |
| YOLOX [33] | 0.390 | 0.580 | 0.430 | 0.237 | 0.440 | 0.506 | 0.583 | 0.369 | 0.638 | 0.772 |
| Ours | 0.394 | 0.588 | 0.423 | 0.232 | 0.418 | 0.543 | 0.586 | 0.370 | 0.639 | 0.788 |
Table 3. Comparative experiments were conducted on the COCO 2014 dataset.
| Model | AP_{0.5:0.95} | AP_{0.5} | AP_{0.75} | AP_s | AP_m | AP_l | AR_{100} | AR_s | AR_m | AR_l |
|---|---|---|---|---|---|---|---|---|---|---|
| Faster R-CNN [8] | 0.324 | 0.528 | 0.345 | 0.178 | 0.359 | 0.403 | 0.501 | 0.195 | 0.552 | 0.638 |
| Focal loss [19] | 0.327 | 0.527 | 0.347 | 0.182 | 0.352 | 0.404 | 0.506 | 0.198 | 0.554 | 0.642 |
| CornerNet [23] | 0.324 | 0.468 | 0.335 | 0.157 | 0.348 | 0.408 | 0.504 | 0.195 | 0.541 | 0.632 |
| ATSS [24] | 0.336 | 0.498 | 0.361 | 0.195 | 0.368 | 0.428 | 0.528 | 0.224 | 0.554 | 0.648 |
| CARAFE [25] | 0.324 | 0.228 | 0.354 | 0.201 | 0.368 | 0.394 | 0.521 | 0.224 | 0.548 | 0.637 |
| Cascade R-CNN [9] | 0.362 | 0.538 | 0.375 | 0.204 | 0.392 | 0.467 | 0.527 | 0.334 | 0.607 | 0.724 |
| CenterNet [26] | 0.314 | 0.485 | 0.338 | 0.125 | 0.348 | 0.483 | 0.502 | 0.248 | 0.538 | 0.743 |
| AutoAssign [27] | 0.362 | 0.548 | 0.395 | 0.204 | 0.405 | 0.468 | 0.542 | 0.328 | 0.617 | 0.725 |
| CentripetalNet [28] | 0.380 | 0.546 | 0.392 | 0.207 | 0.415 | 0.498 | 0.547 | 0.337 | 0.608 | 0.747 |
| DETR [29] | 0.410 | 0.604 | 0.425 | 0.195 | 0.442 | 0.601 | 0.595 | 0.372 | 0.638 | 0.809 |
| Dynamic R-CNN [30] | 0.402 | 0.594 | 0.448 | 0.243 | 0.440 | 0.534 | 0.591 | 0.372 | 0.641 | 0.799 |
| Deformable DETR [31] | 0.424 | 0.594 | 0.447 | 0.259 | 0.442 | 0.561 | 0.607 | 0.389 | 0.651 | 0.814 |
| YOLOF [32] | 0.418 | 0.608 | 0.448 | 0.268 | 0.448 | 0.567 | 0.604 | 0.382 | 0.620 | 0.801 |
| YOLOX [33] | 0.419 | 0.601 | 0.439 | 0.258 | 0.431 | 0.529 | 0.602 | 0.378 | 0.655 | 0.795 |
| Ours | 0.425 | 0.609 | 0.449 | 0.268 | 0.442 | 0.568 | 0.609 | 0.389 | 0.658 | 0.819 |
Table 4. Comparative experiments were conducted on the Pascal VOC 2007 dataset.
| Model | AP_{0.5:0.95} | AP_{0.5} | AP_{0.75} | AP_s | AP_m | AP_l | AR_{100} | AR_s | AR_m | AR_l |
|---|---|---|---|---|---|---|---|---|---|---|
| Faster R-CNN [8] | 0.418 | 0.619 | 0.418 | 0.275 | 0.429 | 0.508 | 0.598 | 0.284 | 0.624 | 0.729 |
| Focal loss [19] | 0.416 | 0.614 | 0.420 | 0.267 | 0.425 | 0.511 | 0.596 | 0.280 | 0.618 | 0.722 |
| CornerNet [23] | 0.428 | 0.548 | 0.429 | 0.284 | 0.427 | 0.518 | 0.618 | 0.297 | 0.633 | 0.728 |
| ATSS [24] | 0.440 | 0.599 | 0.481 | 0.299 | 0.459 | 0.518 | 0.628 | 0.314 | 0.642 | 0.740 |
| CARAFE [25] | 0.428 | 0.358 | 0.438 | 0.315 | 0.485 | 0.499 | 0.618 | 0.328 | 0.649 | 0.735 |
| Cascade R-CNN [9] | 0.471 | 0.642 | 0.474 | 0.308 | 0.402 | 0.575 | 0.631 | 0.425 | 0.709 | 0.815 |
| CenterNet [26] | 0.423 | 0.574 | 0.427 | 0.234 | 0.468 | 0.591 | 0.603 | 0.359 | 0.637 | 0.832 |
| AutoAssign [27] | 0.452 | 0.658 | 0.499 | 0.305 | 0.496 | 0.557 | 0.651 | 0.417 | 0.729 | 0.834 |
| CentripetalNet [28] | 0.485 | 0.644 | 0.483 | 0.308 | 0.524 | 0.599 | 0.658 | 0.428 | 0.708 | 0.851 |
| DETR [29] | 0.502 | 0.700 | 0.528 | 0.286 | 0.561 | 0.623 | 0.684 | 0.483 | 0.728 | 0.895 |
| Dynamic R-CNN [30] | 0.501 | 0.692 | 0.538 | 0.344 | 0.548 | 0.625 | 0.689 | 0.483 | 0.752 | 0.899 |
| Deformable DETR [31] | 0.521 | 0.703 | 0.548 | 0.354 | 0.554 | 0.672 | 0.699 | 0.481 | 0.755 | 0.910 |
| YOLOF [32] | 0.519 | 0.700 | 0.548 | 0.353 | 0.542 | 0.661 | 0.702 | 0.482 | 0.731 | 0.900 |
| YOLOX [33] | 0.518 | 0.701 | 0.547 | 0.351 | 0.561 | 0.655 | 0.700 | 0.472 | 0.751 | 0.895 |
| Ours | 0.525 | 0.708 | 0.548 | 0.358 | 0.568 | 0.674 | 0.704 | 0.485 | 0.755 | 0.899 |
Table 5. Comparative experiments were conducted on the Pascal VOC 2012 dataset.
| Model | AP_{0.5:0.95} | AP_{0.5} | AP_{0.75} | AP_s | AP_m | AP_l | AR_{100} | AR_s | AR_m | AR_l |
|---|---|---|---|---|---|---|---|---|---|---|
| Faster R-CNN [8] | 0.395 | 0.592 | 0.392 | 0.254 | 0.402 | 0.486 | 0.532 | 0.263 | 0.608 | 0.701 |
| Focal loss [19] | 0.394 | 0.589 | 0.389 | 0.251 | 0.397 | 0.485 | 0.530 | 0.251 | 0.610 | 0.698 |
| CornerNet [23] | 0.402 | 0.521 | 0.405 | 0.261 | 0.404 | 0.496 | 0.598 | 0.275 | 0.612 | 0.704 |
| ATSS [24] | 0.421 | 0.572 | 0.462 | 0.271 | 0.432 | 0.492 | 0.604 | 0.294 | 0.624 | 0.723 |
| CARAFE [25] | 0.401 | 0.331 | 0.418 | 0.295 | 0.465 | 0.472 | 0.592 | 0.301 | 0.624 | 0.712 |
| Cascade R-CNN [9] | 0.458 | 0.621 | 0.450 | 0.281 | 0.381 | 0.551 | 0.612 | 0.402 | 0.698 | 0.792 |
| CenterNet [26] | 0.401 | 0.557 | 0.449 | 0.215 | 0.448 | 0.571 | 0.605 | 0.334 | 0.624 | 0.819 |
| AutoAssign [27] | 0.431 | 0.635 | 0.471 | 0.285 | 0.474 | 0.539 | 0.621 | 0.394 | 0.700 | 0.804 |
| CentripetalNet [28] | 0.462 | 0.621 | 0.468 | 0.281 | 0.504 | 0.574 | 0.638 | 0.402 | 0.684 | 0.831 |
| DETR [29] | 0.475 | 0.675 | 0.501 | 0.263 | 0.541 | 0.682 | 0.659 | 0.462 | 0.705 | 0.876 |
| Dynamic R-CNN [30] | 0.480 | 0.672 | 0.512 | 0.322 | 0.524 | 0.602 | 0.664 | 0.462 | 0.730 | 0.870 |
| Deformable DETR [31] | 0.498 | 0.682 | 0.521 | 0.335 | 0.532 | 0.651 | 0.681 | 0.462 | 0.732 | 0.871 |
| YOLOF [32] | 0.492 | 0.688 | 0.521 | 0.342 | 0.528 | 0.642 | 0.681 | 0.468 | 0.710 | 0.870 |
| YOLOX [33] | 0.497 | 0.682 | 0.521 | 0.335 | 0.538 | 0.601 | 0.681 | 0.474 | 0.731 | 0.872 |
| Ours | 0.499 | 0.689 | 0.525 | 0.304 | 0.547 | 0.652 | 0.682 | 0.478 | 0.741 | 0.874 |