Article

DynseNet: A Dynamic Dense-Connection Neural Network for Land–Sea Classification of Radar Targets

by Jingang Wang 1,2, Tong Xiao 1,2, Kang Chen 1,2 and Peng Liu 1,2,*

1 Hainan Branch, Institute of Acoustics, Chinese Academy of Sciences, Haikou 570105, China
2 Lingshui Marine Information Hainan Observation and Research Station, Lingshui 572423, China
* Author to whom correspondence should be addressed.
Appl. Sci. 2025, 15(15), 8703; https://doi.org/10.3390/app15158703
Submission received: 23 June 2025 / Revised: 4 August 2025 / Accepted: 5 August 2025 / Published: 6 August 2025

Abstract

Radar is one of the primary means of monitoring maritime targets. Compared to electro-optical systems, radar offers the advantage of all-weather, day-and-night operation. However, existing radar target detection algorithms predominantly achieve binary detection (i.e., determining the presence or absence of a target) and are unable to accurately classify target types. This limitation is particularly significant for coastal-deployed maritime surveillance radars, which must contend with not only maritime vessels but also various land-based and island targets within their monitoring range. This paper aims to enhance the informational breadth of existing binary detection methods by proposing a land–sea classification method for radar targets based on dynamic dense connections. The core idea behind this method is to merge the interlayer output features of the network and to augment and weight them through dynamic convolutional combinations, thereby improving the feature extraction capability of the network. The experimental results demonstrate that the proposed attribute recognition method outperforms current deep network architectures.

1. Introduction

Radar is one of the primary means of monitoring maritime targets [1,2,3]. Owing to its unique operating principles, radar can effectively detect and track targets under various weather conditions. This capability stands in stark contrast to electro-optical systems, which are often limited by lighting and meteorological factors; for instance, adverse weather conditions such as fog, heavy rain, or strong winds significantly impair their performance [4]. Therefore, radar plays an indispensable role in fields such as maritime surveillance, military reconnaissance, aerospace applications, and shipping safety due to its all-weather operational capability [5]. Furthermore, with continuous technological advancements, radar technology is evolving, expanding its application potential in target identification, data fusion, and intelligent analysis.
However, existing target detection algorithms [6,7,8,9] for X-band pulse radar primarily achieve binary detection, meaning they can only determine the presence or absence of a target without accurately classifying its type, as shown in Figure 1. This limitation poses numerous challenges in practical applications, particularly in complex maritime environments, where relying solely on binary detection may lead to false alarms or missed detections, thereby undermining sound and effective decision-making. For example, during maritime surveillance operations, failing to differentiate between types of vessels could adversely impact law enforcement actions or resource allocation, wasting precious monitoring resources and potentially creating safety hazards. Consequently, enhancing the target classification capability of radar systems has become an urgent issue that must be addressed to meet increasingly complex maritime monitoring demands. Researchers must explore more advanced and effective algorithms to enable these systems to simultaneously perform the dual tasks of target detection and classification.
For land-based maritime surveillance radars, the monitored area not only includes vessels on the water’s surface but often also involves various terrestrial and island targets. These different types of targets possess unique attributes and characteristics, making it crucial to conduct in-depth attribute analysis of detected targets. By accurately determining the category to which a target belongs, an efficient monitoring system can better support decision-making processes in practical law enforcement and emergency response. For instance, swiftly identifying the type of vessel (such as fishing boats, commercial ships, or military vessels) during marine patrols directly influences the degree of and approach to law enforcement, ensuring maritime safety and order. Additionally, monitoring surrounding land and island targets enhances overall security measures, enabling timely identification of and response to potential risks. Moreover, clarifying the properties of targets can support multidimensional data analyses, providing a scientific basis for strategic planning and resource allocation.
Our approach uses X-band pulse radar echoes for a preliminary exploration of sea-surface target category recognition (e.g., cargo ships or fishing vessels), first distinguishing sea-surface targets from land targets, which greatly benefits the subsequent fine classification of sea-surface targets. This paper enhances the informational breadth of existing binary detection methods by proposing a radar target attribute recognition method based on dynamic dense connections. Researchers have proposed clustering [11,12,13,14] and sea–land segmentation methods [15,16,17,18,19] based on radar echoes. Reference [11] proposes pre-sampling the original data to reduce data volume, thereby minimizing computation time during the clustering process. References [12,13] employ an automatic parameter adjustment method to enable the clustering algorithm to adapt to the distribution characteristics of large-scale datasets. Reference [14] utilizes a subdivision approach to further segment target detections, enhancing the performance of the clustering algorithm. Reference [16] achieves sea–land segmentation by distinguishing land clutter from sea clutter through an iterative covariance matrix. Reference [18] integrates three modules (an edge enhancement module, maximum fusion difference convolution, and a multiscale spatial attention module) to extract richer features. References [15,16,17,18,19] determine the boundary between the ocean and land through clustering or classification.
Building upon these methods, this paper introduces a neural network-based approach for attribute analysis of radar targets, enabling precise discrimination between land-based targets (including islands in the ocean) and sea-based vessel targets. Our method takes into account both the characteristics of pulse compression radar echoes and the task of target attribute recognition. The core idea of this method is to fuse the output features across the layers of the network, employing dynamic augmentation and weighted combinations of convolutional outputs to enhance the feature extraction capabilities of the network.
Specifically, this approach considers the interrelations between various levels while incorporating a dynamic adjustment mechanism, allowing the network to flexibly optimize the feature extraction process based on the characteristics of the input data. Furthermore, the method employs multiscale feature fusion techniques to improve sensitivity to targets of varying sizes and shapes, further enhancing the accuracy of target recognition. The experimental results demonstrate that the proposed attribute recognition method outperforms current mainstream deep network architectures in terms of accuracy and efficiency, validating its effectiveness and practicality in radar target detection and providing new insights and solutions for future related research and applications. The remainder of this paper is organized as follows: Section 2 introduces the proposed method, detailing the structure and principles of the DynseNet model. Section 3 presents the experimental results and discussion. Finally, Section 4 concludes the paper.

2. The Proposed Methodology

The proposed DynseNet architecture is shown in Figure 2 and primarily consists of a dynamic convolution module, a dense network, and an attention fusion module. First, we use the dynamic convolution module to dynamically adjust the convolution operations based on the input data, allowing the model to better adapt to varying inputs. The input data are the X-band radar echoes after processing through Cell-Averaging Constant-False-Alarm-Rate (CA-CFAR) detection [20], clustering, and related steps; the detailed processing procedure is described in Section 2.1. Next, we employ a dense network to extract echo features. Through the dense connectivity mechanism, each layer receives feature maps from all preceding layers, enabling feature reuse. Finally, we utilize a dual-domain attention fusion module to fuse information from both the global and local domains to obtain relevant features. These relevant features are then fed into a classifier to achieve classification of land and vessel targets.

2.1. Data Processing

This paper conducts maritime observation using an X-band radar deployed on the Qiongzhou Strait. The radar deployment location and related information are shown in Figure 3 and Table 1. The received echo data are transmitted from the development board to the computer via a fiber-optic connection, where we first perform matched filtering. Subsequently, the CA-CFAR algorithm is employed for target detection; CA-CFAR estimates the background noise level by calculating the average power of the reference cells surrounding the cell under test, and it dynamically adjusts the detection threshold to maintain a constant false alarm rate. Following this, the radar point cloud is clustered using the algorithm outlined in reference [2], which allows us to identify several target regions. The aim of this study is to classify these target regions by attribute; thus, cargo ships, yachts, and other similar objects on the sea are all considered maritime targets, without distinguishing between specific vessel types, and detections are categorized only as either maritime or terrestrial targets. We delineate the maritime and terrestrial areas based on the coastline displayed on electronic navigational charts and annotate the samples located in the different regions as maritime targets or terrestrial targets. The detailed information for the final dataset is presented in Table 2, which also records the weather conditions during the observation period.
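To make the CA-CFAR step concrete, the sketch below implements the cell-averaging detector described above for a single range profile, assuming square-law (power) samples; the reference/guard window sizes and the false-alarm probability are illustrative placeholders, not the values used in this paper.

```python
import numpy as np

def ca_cfar(power, num_ref=16, num_guard=4, pfa=1e-4):
    """Cell-Averaging CFAR along one range profile.

    power:     1-D array of power samples after matched filtering.
    num_ref:   reference cells on EACH side of the cell under test.
    num_guard: guard cells on each side, excluded from the noise estimate.
    pfa:       desired probability of false alarm.
    """
    n = 2 * num_ref                       # total reference cells
    alpha = n * (pfa ** (-1.0 / n) - 1)   # threshold factor for a constant false-alarm rate
    detections = np.zeros_like(power, dtype=bool)
    half = num_ref + num_guard
    for i in range(half, len(power) - half):
        # average power of the leading and trailing reference windows
        lead = power[i - half : i - num_guard]
        trail = power[i + num_guard + 1 : i + half + 1]
        noise = (lead.sum() + trail.sum()) / n
        detections[i] = power[i] > alpha * noise
    return detections
```

The threshold factor alpha follows from the standard CA-CFAR relation P_fa = (1 + alpha/N)^(-N) for exponentially distributed noise power, which is what makes the false-alarm rate constant as the estimated noise level varies.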
The observation period spans four days (designated as training sets D1–D2 and test sets D1–D2), with each observation focusing on a fixed area. We also retain the original radar echo data to augment the sample data in the dataset. As mentioned earlier, the objective of this paper is to conduct attribute analysis on the clustered point cloud data to determine whether they belong to the sea-surface target or land target category. Therefore, the input to the neural network is the raw return intensity of the point cloud in the specific area. To facilitate further analysis and comparison, we also performed region filling on the returns within specific observation windows. The entire data processing pipeline is as follows:
  • Perform matched filtering and CA-CFAR detection on the radar echo data to obtain a frame of radar point cloud data, including the detected target point positions and their echo intensities.
  • Use the Density-Based Spatial Clustering of Applications with Noise (DBSCAN) algorithm described in [11] to cluster and merge the target points into clusters; a minimal code sketch of the clustering and filling steps is given after this list. The specific steps are as follows:
    (a) Initialize the distance threshold $Dis$ and the target-number threshold $Pts$.
    (b) Randomly select a detected target point, and use this point as the center to search for neighboring target points within a radius of $Dis$.
    (c) If the number of neighboring target points $N_{ts} \geq Pts$, then the selected target point is considered a core point.
    (d) Then, randomly select another target point from the remaining target points and determine whether it is a core point.
    (e) Repeat the previous step until all the target points in the current frame have been traversed.
    (f) Traverse each core point, search for neighboring core points within the distance threshold $Dis$, and classify them into the same cluster.
  • For each cluster, define the bounding rectangle based on the boundary of its core points and represent it as a matrix. Each cell in the matrix corresponds to a target point within the cluster, with the cell value being the echo intensity of the corresponding target point in the matched-filtering data. The cells at non-target positions in the rectangle are set to 0.
  • Obtain the center coordinates of each bounding rectangle. Within the rectangle, reverse-search the corresponding echo intensities in the matched-filtering data, and fill the cells that still hold a value of 0 with the intensities retrieved from the corresponding positions.
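As referenced in the list above, the following is a minimal sketch of the clustering and region-filling steps, using scikit-learn's DBSCAN as a stand-in for the algorithm of [11]; the function name, array layout, and threshold values are illustrative assumptions rather than the authors' implementation.

```python
import numpy as np
from sklearn.cluster import DBSCAN

def extract_cluster_patches(points, intensities, mf_image, dis=3.0, pts=5):
    """Cluster detected target points and build filled bounding-rectangle patches.

    points:      (N, 2) integer (x, y) cells that passed CA-CFAR detection.
    intensities: (N,) echo intensities of those cells from the matched-filtering data.
    mf_image:    full matched-filtering intensity map, indexed as mf_image[y, x].
    dis, pts:    DBSCAN distance and minimum-point thresholds (steps (a)-(f) above).
    """
    labels = DBSCAN(eps=dis, min_samples=pts).fit_predict(points)
    patches = []
    for lbl in set(labels) - {-1}:                  # -1 marks DBSCAN noise points
        cluster = points[labels == lbl]
        vals = intensities[labels == lbl]
        x0, y0 = cluster.min(axis=0)                # bottom-left corner -> origin (0, 0)
        x1, y1 = cluster.max(axis=0)
        patch = np.zeros((y1 - y0 + 1, x1 - x0 + 1))
        patch[cluster[:, 1] - y0, cluster[:, 0] - x0] = vals   # unfilled rectangle
        # fill remaining zero cells with matched-filtering intensities looked up
        # at the corresponding absolute positions (the "reverse search" step)
        sub = mf_image[y0 : y1 + 1, x0 : x1 + 1]
        patch = np.where(patch == 0, sub, patch)
        patches.append(patch)
    return patches
```

Each returned patch corresponds to one candidate target region; the unfilled variant shown in Figure 4 is obtained by skipping the final np.where step.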
The effect before and after filling is shown in Figure 4. Panels (a) and (c) show the target points detected in a cluster before filling, with the bottom-left corner of the bounding rectangle taken as the origin (0, 0) and X and Y representing the horizontal and vertical coordinates relative to the origin. Panels (b) and (d) show the corresponding bounding rectangles filled using the matched-filtering data; the filled versions retain more local information.

2.2. Dynamic Convolution Module

We select a sequence of multiple frames of echo data $D \in \mathbb{R}^{r \times c}$ as the input to the network model, where $r$ represents the number of consecutive echo samples and $c$ is the number of sampling points in a single echo. In this paper, both $r$ and $c$ are set to 224. We employ a dynamic convolution module to perform adaptive feature encoding on the input data $D$. Depending on the input, the attention mechanism generates varying filter weights within the same network layer, calculated as follows:
$P = \mathrm{AvgPool}(D), \qquad C = \mathrm{Conv}_2(U(\mathrm{Conv}_1(P)))$
where $\mathrm{AvgPool}$ represents the average pooling operation, $\mathrm{Conv}_1$ and $\mathrm{Conv}_2$ represent convolution operations, and $U$ represents the Rectified Linear Unit (ReLU) activation function, defined as $\mathrm{ReLU}(x) = \max(0, x)$. Subsequently, Softmax is applied to $C$ for normalization, yielding the filter weight coefficients $w_1, w_2, \ldots, w_k$, calculated as follows:
$\{w_1, w_2, \ldots, w_k\} = \mathrm{softmax}(C), \qquad W_c = \sum_{i=1}^{k} w_i W_i, \qquad b_c = \sum_{i=1}^{k} w_i b_i$
where $W_i$ and $b_i$ represent the learnable parameters of the $i$-th convolution kernel, $W_c$ and $b_c$ are the parameters after weighted fusion, and $k$ is the number of convolution kernels. We define the above operations as $\mathrm{DyConv}$. Next, we use the weighted fusion filter for feature encoding, calculated as follows:
$C = \mathrm{MaxPool}(U(F_{\mathrm{norm}}(\mathrm{DyConv}(D))))$
where $\mathrm{MaxPool}$ represents the max pooling operation and $F_{\mathrm{norm}}$ represents the batch normalization operation.
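The following PyTorch sketch is one plausible implementation of the $\mathrm{DyConv}$ operator defined above: an attention branch produces softmax weights over $k$ candidate kernels, the kernels are fused per sample, and the fused kernel is applied via a grouped convolution. The kernel count, reduction ratio, and initialization are assumptions, as the paper does not specify them.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class DyConv(nn.Module):
    """Dynamic convolution: k parallel kernels mixed by input-dependent weights."""

    def __init__(self, in_ch, out_ch, k=4, kernel_size=3, reduction=4):
        super().__init__()
        self.k, self.ks, self.out_ch = k, kernel_size, out_ch
        mid = max(in_ch // reduction, 1)
        # attention branch: AvgPool -> Conv1 -> ReLU -> Conv2 -> softmax over k kernels
        self.conv1 = nn.Conv2d(in_ch, mid, 1)
        self.conv2 = nn.Conv2d(mid, k, 1)
        # k learnable kernels W_i and biases b_i
        self.weight = nn.Parameter(
            torch.randn(k, out_ch, in_ch, kernel_size, kernel_size) * 0.02)
        self.bias = nn.Parameter(torch.zeros(k, out_ch))

    def forward(self, x):
        b, c, h, w = x.shape
        p = F.adaptive_avg_pool2d(x, 1)               # P = AvgPool(D)
        att = self.conv2(F.relu(self.conv1(p)))       # C = Conv2(U(Conv1(P)))
        att = att.view(b, self.k).softmax(dim=1)      # w_1..w_k = softmax(C)
        # W_c = sum_i w_i W_i and b_c = sum_i w_i b_i, fused per sample
        w_c = torch.einsum('bk,koihw->boihw', att, self.weight)
        w_c = w_c.reshape(b * self.out_ch, c, self.ks, self.ks)
        b_c = torch.einsum('bk,ko->bo', att, self.bias).reshape(-1)
        # grouped convolution applies each sample's own fused kernel
        out = F.conv2d(x.reshape(1, b * c, h, w), w_c, b_c,
                       padding=self.ks // 2, groups=b)
        return out.reshape(b, self.out_ch, h, w)
```

Folding the batch into the group dimension lets each sample be convolved with its own fused kernel $W_c$ in a single conv2d call.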

2.3. Dense Block

The dense network we propose consists of four dense blocks and three transition layers. The four dense blocks contain 6, 12, 24, and 16 dense layers, respectively. The computation process for each dense layer is as follows:
$x = \mathrm{DyConv}(U(F_{\mathrm{norm}}(\mathrm{DyConv}(U(F_{\mathrm{norm}}(X))))))$
The dense layers are connected in a dense manner, where the input of the current layer includes the outputs of all previous layers. If the current layer is the $l$-th layer, its input $X$ can be defined as follows:
$X = x_0 \oplus x_1 \oplus \cdots \oplus x_{l-1}$
where $x_0, x_1, \ldots, x_{l-1}$ represent the outputs from layers $0$ to $l-1$, and $\oplus$ denotes the concatenation operation. The encoded features serve as the input to the first dense layer of the first dense block. Before entering the next dense block, we utilize a transition layer to reduce the number of feature map channels, making the network model more efficient. The computation process is as follows:
$T = \mathrm{AvgPool}(\mathrm{DyConv}(U(F_{\mathrm{norm}}(X))))$
where $\mathrm{AvgPool}$ represents the average pooling operation, and $T$ is the downsampled feature map, which serves as the input to the next dense block.
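Assuming the DyConv sketch from Section 2.2, a dense layer and a transition layer consistent with the two equations above could be written as follows; the growth rate, bottleneck width, and channel-halving ratio mirror common DenseNet conventions and are our assumptions.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F
# DyConv is the dynamic convolution module sketched in Section 2.2.

class DenseLayer(nn.Module):
    """x = DyConv(U(Fnorm(DyConv(U(Fnorm(X)))))), then concatenate with the input."""

    def __init__(self, in_ch, growth=32, bottleneck=4):
        super().__init__()
        mid = bottleneck * growth
        self.norm1 = nn.BatchNorm2d(in_ch)
        self.conv1 = DyConv(in_ch, mid, kernel_size=1)    # 1x1 bottleneck
        self.norm2 = nn.BatchNorm2d(mid)
        self.conv2 = DyConv(mid, growth, kernel_size=3)   # 3x3 feature extraction

    def forward(self, x):
        out = self.conv1(F.relu(self.norm1(x)))
        out = self.conv2(F.relu(self.norm2(out)))
        return torch.cat([x, out], dim=1)   # X = x_0 + x_1 + ... (channel concat)

class Transition(nn.Module):
    """T = AvgPool(DyConv(U(Fnorm(X)))): compress channels between dense blocks."""

    def __init__(self, in_ch):
        super().__init__()
        self.norm = nn.BatchNorm2d(in_ch)
        self.conv = DyConv(in_ch, in_ch // 2, kernel_size=1)

    def forward(self, x):
        return F.avg_pool2d(self.conv(F.relu(self.norm(x))), kernel_size=2)
```

Stacking 6, 12, 24, and 16 such layers, with a transition layer between consecutive blocks, yields the four dense blocks described above.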

2.4. Attention Fusion Block

After obtaining the feature map M from the final dense block, both global and local attention mechanisms are employed to capture feature information across various aspects. By integrating these features, related features are obtained. The computation process for global attention is outlined below:
$Q_j = W_q^j M, \quad K_j = W_k^j M, \quad V_j = W_v^j M, \quad \mathrm{head}_j = \mathrm{softmax}\!\left(\frac{Q_j K_j^{\top}}{\sqrt{d_k}}\right) V_j$
where $\mathrm{head}_j$ represents the attention output of the $j$-th head; the number of heads is set to 8 in this paper. $W_q^j$, $W_k^j$, and $W_v^j$ are the learnable projection matrices, and $\sqrt{d_k}$ is the scaling factor, with $d_k$ the key dimension. Subsequently, the attention values from the multiple heads are concatenated together and a skip connection is applied. The computation process is as follows:
$G = (\mathrm{head}_1 \oplus \mathrm{head}_2 \oplus \cdots \oplus \mathrm{head}_8) W_0 + M$
where $W_0$ represents the output weight matrix and $G$ denotes the extracted global features. The computation process for local attention is outlined as follows:
$L = \mathrm{Sigmoid}(\mathrm{Conv}_3(\mathrm{AvgPool}(M))) \odot M + M$
where $\mathrm{Sigmoid}$ is the sigmoid activation function and $\odot$ denotes element-wise multiplication. $L$ represents the extracted local features. We integrate the global and local attention as follows:
$F = G \oplus L, \qquad S = \mathrm{Conv}_6(F) + \mathrm{Conv}_5(\mathrm{Conv}_4(F)), \qquad O = S + \mathrm{FC}(\mathrm{FC}(S))$
where F C represents the fully connected layer and O denotes the fused features.
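The sketch below gives one plausible reading of the attention fusion equations, using PyTorch's built-in multi-head attention for the global branch (its output projection plays the role of $W_0$). The kernel sizes of $\mathrm{Conv}_3$–$\mathrm{Conv}_6$, the FC bottleneck width, and the reading of $F = G \oplus L$ as channel concatenation are assumptions.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class AttentionFusion(nn.Module):
    """Dual-domain fusion: global multi-head self-attention plus a local channel gate."""

    def __init__(self, ch, heads=8):
        super().__init__()
        assert ch % heads == 0, "channel count must divide evenly across heads"
        self.attn = nn.MultiheadAttention(ch, heads, batch_first=True)
        self.conv3 = nn.Conv2d(ch, ch, 1)               # gate conv in the local branch
        self.conv4 = nn.Conv2d(2 * ch, ch, 3, padding=1)
        self.conv5 = nn.Conv2d(ch, ch, 3, padding=1)
        self.conv6 = nn.Conv2d(2 * ch, ch, 1)
        self.fc1 = nn.Linear(ch, ch // 4)
        self.fc2 = nn.Linear(ch // 4, ch)

    def forward(self, m):
        b, c, h, w = m.shape
        # global branch: G = (head_1 ++ ... ++ head_8) W_0 + M
        seq = m.flatten(2).transpose(1, 2)              # (b, h*w, c) token sequence
        g, _ = self.attn(seq, seq, seq)                 # output projection acts as W_0
        g = g.transpose(1, 2).reshape(b, c, h, w) + m
        # local branch: L = Sigmoid(Conv3(AvgPool(M))) * M + M
        gate = torch.sigmoid(self.conv3(F.adaptive_avg_pool2d(m, 1)))
        l = gate * m + m
        # fusion: concatenate, mix with convolutions, then an FC residual
        f = torch.cat([g, l], dim=1)                    # F = G ++ L
        s = self.conv6(f) + self.conv5(self.conv4(f))
        o = s + self.fc2(self.fc1(s.permute(0, 2, 3, 1))).permute(0, 3, 1, 2)
        return o
```

Here the FC pair is applied per spatial position over the channel dimension; the paper does not specify whether the features are pooled before the FC residual, so this is one consistent interpretation.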

2.5. Training Loss

The fused features are input into the average pooling and fully connected layers to obtain the prediction results. During the entire network training process, we calculate the classification loss using cross-entropy as follows:
$\mathrm{Loss} = -\frac{1}{N} \sum_{i=1}^{N} y_i \log(\hat{y}_i)$
where $y_i$ represents the true label, $\hat{y}_i$ represents the predicted output of the model, and $N$ represents the current batch size.
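For reference, PyTorch's built-in cross-entropy loss reproduces this objective when applied to raw class logits; the batch size and the 0 = land / 1 = vessel label convention below are illustrative.

```python
import torch
import torch.nn as nn

# Minimal check of the classification loss: CrossEntropyLoss combines
# log-softmax and negative log-likelihood, i.e. -(1/N) * sum_i log(softmax(z_i)[y_i]).
criterion = nn.CrossEntropyLoss()
logits = torch.randn(16, 2)              # N = 16 predictions over 2 classes
labels = torch.randint(0, 2, (16,))      # assumed convention: 0 = land, 1 = vessel
loss = criterion(logits, labels)
print(loss.item())
```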

3. Experiments and Discussion

3.1. Comparison Methods and Evaluation Metrics

In the experiments, we use LeNet, ResNet, GoogleNet, and DenseNet as baselines for comparison. To verify the effectiveness of the padding preprocessing step, we also conducted comparative experiments before and after preprocessing. During the training process, we trained the network for 50 epochs with a batch size of 16, using accuracy as the key metric to evaluate the model’s performance.
$\mathrm{Accuracy} = \frac{TP + TN}{TP + TN + FP + FN}$
where $TP$ represents the number of actual positive samples correctly predicted by the model, $FP$ denotes the number of actual negative samples incorrectly predicted as positive, $TN$ represents the number of actual negative samples correctly predicted, and $FN$ denotes the number of actual positive samples incorrectly predicted as negative.

3.2. Performance Analysis

In this section, we compare DynseNet, the proposed radar target attribute recognition network based on dynamic dense connections, with other networks. Overall, DynseNet demonstrates superior performance in enhancing the effectiveness of detection algorithms compared to the four comparison methods.
From the experimental results in Table 3, in which the best-performing algorithm on each dataset is highlighted, it can be observed that when the data is not padded, the proposed model’s attribute recognition accuracy on the D1 data exceeds that of the comparison algorithms by 27.46%, 3.95%, 1.33%, and 1.18%, respectively. For the D2 data, the proposed model’s accuracy exceeds the comparison algorithms by 19.32%, 3.32%, 2.48%, and 0.48%, respectively. When the data is padded, the proposed model’s attribute recognition accuracy on the D1 data exceeds that of the comparison algorithms by 22.16%, 6.68%, 3.04%, and 1.60%, respectively. For the D2 data, the accuracy exceeds the comparison algorithms by 10.19%, 2.12%, 2.02%, and 1.21%, respectively. It can also be observed that both the proposed method and the comparison algorithms improve on the D1 and D2 data after padding, with the proposed method improving by 5.69% and 14.41%, respectively, which demonstrates the effectiveness of our preprocessing of the radar echoes.
To more intuitively demonstrate the superiority of our proposed model, we performed visualization processing on the detection results from different models applied to single-frame radar data. The resulting comparison is shown in Figure 5. It is evident that in the comparative algorithms, there are instances where land is misidentified as sea targets or sea targets are misidentified as land, leading to some false detections. In contrast, the detection results from our proposed model are almost consistent with the actual targets displayed.

4. Conclusions

Radar plays a crucial role in monitoring maritime targets, and it is essential to distinguish between sea-surface and land targets before performing fine classification; this differentiation greatly aids subsequent fine classification. To address this, this paper proposes a radar target classification method using dynamic dense connections. The proposed method takes into full consideration the characteristics of pulse compression radar echoes and the task of target attribute recognition, enabling it to enhance existing binary detection methods. The experimental results show that this approach surpasses current deep network architectures in attribute recognition performance. The proposed method also has certain limitations, including high computational complexity and a current inability to achieve multi-target recognition. Future work will focus on further improving the method’s ability to capture subtle differences between target types by exploring advanced techniques such as reinforcement learning and on enhancing multi-target recognition. Additionally, incorporating multi-sensor data fusion strategies to combine radar information with other sensor modalities could improve target classification accuracy in complex maritime environments. Moreover, investigating real-time implementation on embedded systems for practical deployment will be a key area for future research.

Author Contributions

Conceptualization, J.W. and P.L.; methodology, J.W.; validation, K.C. and T.X.; data processing and analyzing, J.W. and T.X.; resources, P.L.; data curation, T.X.; writing—original draft, K.C. and T.X.; writing—review and editing, J.W. and P.L. All authors have read and agreed to the published version of the manuscript.

Funding

This work was supported in part by a Hainan Province Science and Technology Special Fund under Grant ZDYF2025SHFZ058, in part by the Youth Innovation Promotion Association, Chinese Academy of Sciences, under Grant 2022022, in part by the South China Sea Nova project of Hainan Province under Grant NHXXRCXM202340, and in part by the Haikou Key Science and Technology Project under Grant 2024020.

Data Availability Statement

The data that support the findings of this study are available from the corresponding author upon reasonable request.

Acknowledgments

We appreciate Jingyuan Bai’s assistance during the paper revision phase.

Conflicts of Interest

The authors declare no conflicts of interest.

References

  1. Wang, X.; Wang, Y.; Zang, C.; Chen, X.; Ren, H.; Cui, G. Intelligent Maritime Radar Target Detection With Partial Annotation via Progressive Learning. IEEE Sens. J. 2024, 24, 34987–34998.
  2. Chen, X.; Su, N.; Huang, Y.; Guan, J. False-Alarm-Controllable Radar Detection for Marine Target Based on Multi Features Fusion via CNNs. IEEE Sens. J. 2021, 21, 9099–9111.
  3. Xue, J.; Fan, Z.; Xu, S. Adaptive Coherent Detection for Maritime Radar Range-Spread Targets in Correlated Heavy-Tailed Sea Clutter With Lognormal Texture. IEEE Geosci. Remote Sens. Lett. 2024, 21, 3505805.
  4. Li, S.; Yang, X.; Wang, J. Sea Surface Object Detection Based on Background Dynamic Perception and Cross-Layer Semantic Interaction. In Proceedings of the 2023 IEEE International Conference on Multimedia and Expo (ICME), Brisbane, QLD, Australia, 10–14 July 2023; pp. 72–77.
  5. Wang, J.; Li, S. Maritime Radar Target Detection Model Self-Evolution Based on Semisupervised Learning. IEEE Trans. Geosci. Remote Sens. 2024, 62, 5101011.
  6. Wan, H.; Tian, X.; Liang, J.; Shen, X. Sequence-Feature Detection of Small Targets in Sea Clutter Based on Bi-LSTM. IEEE Trans. Geosci. Remote Sens. 2022, 60, 4208811.
  7. Su, N.; Chen, X.; Guan, J.; Huang, Y.; Wang, X.; Xue, Y. Radar Maritime Target Detection via Spatial–Temporal Feature Attention Graph Convolutional Network. IEEE Trans. Geosci. Remote Sens. 2024, 62, 5102615.
  8. Zhao, W.; Jin, M.; Cui, G.; Wang, Y. Eigenvalues-Based Detector Design for Radar Small Floating Target Detection in Sea Clutter. IEEE Geosci. Remote Sens. Lett. 2022, 19, 3509105.
  9. Zhang, J.; Ding, T.; Zhang, L. Longtime Coherent Integration Algorithm for High-Speed Maneuvering Target Detection Using Space-Based Bistatic Radar. IEEE Trans. Geosci. Remote Sens. 2022, 60, 5100216.
  10. OpenStreetMap. Available online: https://www.openstreetmap.org (accessed on 29 July 2025).
  11. Cheng, D.; Zhang, C.; Li, Y.; Xia, S.; Wang, G.; Huang, J.; Zhang, S.; Xie, J. GB-DBSCAN: A Fast Granular-Ball Based DBSCAN Clustering Algorithm. Inf. Sci. 2024, 674, 120731.
  12. Guo, P.; Liu, Z.; Wang, J. Radar Group Target Recognition Based on HRRPs and Weighted Mean Shift Clustering. J. Syst. Eng. Electron. 2020, 31, 1152–1159.
  13. Guo, Z.; Liu, H.; Pang, L.; Fang, L.; Dou, W. DBSCAN-Based Point Cloud Extraction for Tomographic Synthetic Aperture Radar (TomoSAR) Three-Dimensional (3D) Building Reconstruction. Int. J. Remote Sens. 2021, 42, 2327–2349.
  14. Li, J.; Cheng, X.; Wu, Z.; Guo, W. An Over-Segmentation-Based Uphill Clustering Method for Individual Trees Extraction in Urban Street Areas From MLS Data. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 2021, 14, 2206–2221.
  15. Zhou, M.; Ma, L.; Wang, N.; Yang, Y.; Sun, J. Land–Sea Separation Algorithm Based on Phase Correlation for Marine Surveillance Radar. In Proceedings of the 2019 IEEE International Conference on Signal, Information and Data Processing (ICSIDP), Chongqing, China, 11–13 December 2019; pp. 1–4.
  16. Xu, S.; Bai, X.; Ren, Q.; Li, D. Sea–Land Segmentation Algorithm Based on Multiframe Radar Echoes. IEEE Trans. Geosci. Remote Sens. 2023, 61, 5110310.
  17. Shui, P.L.; Xia, X.Y.; Zhang, Y.S. Sea–Land Segmentation in Maritime Surveillance Radars via K-Nearest Neighbor Classifier. IEEE Trans. Aerosp. Electron. Syst. 2020, 56, 3854–3867.
  18. Zhu, R.; Zhang, T.; Li, J.; Wei, F.; Yu, W. A Network for Merging SAR Image Sea-Land Segmentation and Coastline Detection Tasks. IEEE Geosci. Remote Sens. Lett. 2024, 21, 4017305.
  19. Li, K.; Shan, T.; Zhang, Y. Sea-Land Clutter Segmentation Algorithm Based on Multi-Measure Fusion with SVM Classifier. In Proceedings of the 2021 13th International Conference on Communication Software and Networks (ICCSN), Chongqing, China, 4–7 June 2021; pp. 94–98.
  20. Garcia, F.D.A.; Rodriguez, A.C.F.; Fraidenraich, G.; Filho, J.C.S.S. CA-CFAR Detection Performance in Homogeneous Weibull Clutter. IEEE Geosci. Remote Sens. Lett. 2019, 16, 887–891.
Figure 1. Map and corresponding radar scan results. Radar scanning can only distinguish between the presence or absence of targets, but it cannot determine their attributes. The map is obtained from OpenStreetMap. The copyright and license information can be found at https://www.openstreetmap.org/copyright (accessed on 29 July 2025) [10].
Figure 2. The overall structure of our proposed method.
Figure 3. Radar deployment site for maritime observation. (a) Radar deployment location. (b) Radar screen.
Figure 4. Two comparison examples before and after augmentation based on radar echo data. (a) Sample 1—Unfilled. (b) Sample 1—Filled. (c) Sample 2—Unfilled. (d) Sample 2—Filled.
Figure 5. Comparison of attribute recognition results using different neural networks. (a) Ground truth; (b) result for LeNet; (c) result for ResNet; (d) result for GoogleNet; (e) result for DenseNet; (f) result for DynseNet.
Table 1. Radar parameters.

Parameter                Value
Frequency Range          9.38–9.44 GHz
Pulse Width              4 μs
Antenna Speed            24 rpm
Antenna Length           2 m
Antenna Mode             Spin
Polarization             HH
Horizontal Beam Width    1°
Vertical Beam Width      22°
Table 2. Detailed information of the constructed ocean observation dataset.

Subset             Observation Periods   Land Samples   Vessel Samples   Total Samples   Weather
Training Set D1    84                    3650           3187             6837            Sunny
Training Set D2    438                   24,143         21,524           45,667          Sunny
Test Set D1        230                   9290           16,387           25,677          Rainy
Test Set D2        277                   16,787         10,720           27,507          Sunny
Table 3. Comparison of experimental results (accuracy); the best result in each column is achieved by DynseNet.

Index   Model Variant   Unfilled D1   Unfilled D2   Filled D1   Filled D2
#1      LeNet           0.6321        0.6384        0.7420      0.8738
#2      ResNet          0.8672        0.7984        0.8968      0.9545
#3      GoogleNet       0.8934        0.8068        0.9332      0.9555
#4      DenseNet        0.8949        0.8268        0.9476      0.9636
#5      DynseNet        0.9067        0.8316        0.9636      0.9757