Article

Dynamic Tracking Matched Filter with Adaptive Feedback Recurrent Neural Network for Accurate and Stable Ship Extraction in UAV Remote Sensing Images

School of Electronics and Information Engineering, Guangdong Ocean University, Zhanjiang 524025, China
* Author to whom correspondence should be addressed.
Remote Sens. 2024, 16(12), 2203; https://doi.org/10.3390/rs16122203
Submission received: 9 May 2024 / Revised: 11 June 2024 / Accepted: 12 June 2024 / Published: 17 June 2024

Abstract: In an increasingly globalized world, the intelligent extraction of maritime targets is crucial for both military defense and maritime traffic monitoring. The flexibility and cost-effectiveness of unmanned aerial vehicles (UAVs) in remote sensing make them invaluable tools for ship extraction. Therefore, this paper introduces a training-free, highly accurate, and stable method for ship extraction in UAV remote sensing images. First, we present the dynamic tracking matched filter (DTMF), which leverages the concept of time as a tuning factor to enhance the traditional matched filter (MF). This refinement gives DTMF superior adaptability and consistent detection performance across different time points. Next, the DTMF method is rigorously integrated into a recurrent neural network (RNN) framework using mathematical derivation and optimization principles. To further improve the convergence and robustness of the RNN solution, we design an adaptive feedback recurrent neural network (AFRNN), which optimally solves the DTMF problem. Finally, we evaluate the performance of different methods based on ship extraction accuracy using specific evaluation metrics. The results show that the proposed method achieves over 99% overall accuracy and KAPPA coefficients above 82% in various scenarios. This approach excels in complex scenes with multiple targets and background interference, delivering distinct and precise extraction results while minimizing errors. The efficacy of the DTMF method in extracting ship targets was validated through rigorous testing.

1. Introduction

In today’s globalized world, maritime ship detection is crucial for national defense, military security [1], and maritime traffic monitoring [2]. First and foremost, real-time surveillance of vast sea areas is essential to identify and track potential maritime threats, such as enemy ships, submarines, or other suspicious vessels. This vigilance plays a critical role in safeguarding national maritime security and sovereignty [3]. Moreover, as maritime routes grow increasingly congested due to the expansion of global trade [4], an effective ship monitoring system is imperative to prevent accidents like collisions and groundings, ensuring safe navigation [5].
In this context, unmanned aerial vehicle (UAV) remote sensing technology has emerged as a powerful tool for enhancing military defense and maritime traffic planning [6,7]. UAV remote sensing, with its high flexibility and low operational costs [8], complements satellite remote sensing by providing localized and high-resolution data. While UAVs can be rapidly deployed in a variety of weather conditions, their effectiveness may be limited under severe weather circumstances such as high winds. Despite this, they remain particularly valuable for swift emergency response under most operational environments [9]. Unlike traditional satellite or airborne remote sensing, UAVs can adjust flight paths and altitudes flexibly, focusing on specific areas, particularly those with challenging or inaccessible geography. Moreover, UAVs deliver near real-time data crucial for swift decision-making and response in search and rescue operations or environmental monitoring. Modern UAVs often come equipped with advanced data processing systems that can analyze images mid-flight, offering real-time insights [10,11,12]. Furthermore, UAVs can capture targets from various angles and altitudes [13], improving the accuracy of ship identification and surveillance. This study aims to propose a fast and accurate ship identification method for effective monitoring. Typically, researchers rely on two primary techniques for ship extraction: traditional target detection from remote sensing images and deep learning approaches.
Remote sensing target extraction is fundamentally a binary classification problem, with its primary objective being the effective separation of the object of interest from the complex background [14,15]. Given the widespread acquisition and application of remote sensing images, numerous classical target detection algorithms have been developed and widely applied in recent decades. These algorithms are designed to minimize the interference of background information while enhancing the expression of target features, thus making targets more prominent and easier to detect in complex environments. Some of the representative algorithms include the matched filter (MF) [16,17], adaptive coherence estimator (ACE) [18,19], and constrained energy minimization (CEM) [20,21]. Stephanie et al. [22] describe the ACE algorithm as a generalized likelihood ratio test (GLRT) within a homogeneous setting, where the covariance matrix of the auxiliary data is proportional to that of the measured vectors. The MF approach resembles ACE by treating target detection as a hypothesis-testing problem, assuming that targets and backgrounds follow different probability models. Through GLRT, MF effectively distinguishes targets from their background. This statistical target detection technique relies on the accurate differentiation of target and background probability distributions for successful identification. In practice, MF usually leverages local statistical information through a double-window technique to gather accurate statistics of the surrounding background. The CEM method proposed by William et al. [23] seeks to maximize the spectral response of the target while suppressing the background response, enabling effective separation between them. The CEM method employs a finite impulse response (FIR) detector to minimize the energy across the image while maintaining a fixed target spectral response value.
Although classical remote sensing target extraction methods have yielded positive results across various fields, the complexity and variability of real-world application scenarios necessitate further refinement. Many researchers have worked to improve traditional methods to enhance detection accuracy in different contexts. Various strategies have been proposed to better tailor target detection methods to specific environmental needs. Xiong et al. [24] combined the CEM method with a neural dynamics algorithm to extract Arctic sea ice from remote sensing images, demonstrating its efficacy in noisy environments. Shuo et al. [25] introduced an algorithm that integrates sparsity with both CEM and ACE. Similarly, Chen et al. [26] developed a noise-resistant matched filter scheme using Newton's algorithm to identify islands and reefs. In a related study, Chaillan et al. [27] proposed a stochastic matched filter for synthetic aperture radar (SAR) image wake monitoring, using a discrete Radon transform to detect straight-line patterns. This technique is followed by a stochastic matched filter that enhances the signal-to-noise ratio of the observation. Despite these improvements, these methods lack spatial and dynamic information regarding the underlying optimization problems, resulting in suboptimal performance for ship extraction tasks. Moreover, current UAV remote sensing ship extraction techniques still have significant room for enhancement. Optimizing pixel-oriented remote sensing feature detection algorithms remains challenging, especially in the presence of phenomena like 'same object, different spectrum' and 'same spectrum, different object', which affect feature extraction.
In recent years, deep learning has gained popularity and has been applied across numerous engineering fields. Compared to object-oriented methods, deep learning has significant advantages in feature learning, end-to-end learning, adaptability, nonlinear modeling, and handling big data, particularly for remote sensing target extraction, such as ship monitoring. Deep learning has achieved remarkable success in fields like image recognition, speech recognition, and natural language processing, becoming a central focus in AI research. Kim et al. [28] employed the Faster-R-CNN network combined with Bayesian methods for ship detection and classification, achieving an average detection accuracy of 93.92%. The S-CNN model proposed by Zhang et al. [29] addresses suboptimal detection when confronted with varied classes and sizes of ship targets. Wang et al. [30] developed GT-YOLO, an enhanced model based on YOLOv5s, incorporating a feature fusion module with an attention mechanism to improve network feature fusion. This model introduces separable convolution to enhance the detection accuracy of small targets and low-resolution images. Finally, Zhao et al. [31] proposed a Domain Adaptive (DA) Transformer target detection method to tackle challenges posed by unlabeled multi-source satellite-borne SAR images of ships.
While deep learning has achieved remarkable success in various domains, it is crucial to recognize its limitations. Deep learning models often require large amounts of labeled data, especially for complex tasks involving extensive datasets [32,33,34]. Thus, the data volume needed for optimal generalization expands significantly. Collecting substantial labeled data is a time-consuming and labor-intensive process, incurring high costs and requiring significant human resources [35,36,37]. Moreover, when training data are limited or the model complexity is excessive, deep learning models are prone to overfitting [38,39]. This occurs when the model performs well on the training data but its performance deteriorates with unseen data, impairing its predictive capacity. Additionally, the opaque nature of deep learning models makes it challenging to understand their decision-making processes and how features are extracted. This lack of transparency is particularly problematic in sensitive fields such as medical diagnosis and legal decision-making. Furthermore, the training and inference processes of deep learning models require significant computational resources, particularly for deep models with extensive datasets. These models rely heavily on advanced computational hardware and sufficient resources to function effectively. Consequently, training complex deep learning models with limited resources can be time-consuming and inefficient, posing a challenge for applications that require rapid and accurate ship monitoring and extraction from remote sensing imagery.
Meanwhile, recurrent neural networks (RNNs) [40,41,42,43,44] have gained popularity due to their efficiency and robustness in solving real-time problems. However, existing RNN models are fundamentally designed for dynamic control and optimization tasks, making them less suitable for remote sensing target extraction. Additionally, the activation function [45] is a critical component affecting the convergence speed and accuracy of RNN models. Despite its importance, few studies have focused on developing specialized activation functions to improve RNN performance in remote sensing target extraction.
This paper proposes a dynamic tracking matched filter (DTMF) scheme for the extraction of ships from UAV remote sensing images. DTMF incorporates a dynamic penalty term based on the MF and combines the dynamic adjustment of the regularization parameter with the time variable to strengthen the orientation towards satisfying the constraints. Furthermore, the time variable is introduced as a reconciliation factor, whereby the regularization parameter grows exponentially with time, adapting to the dynamic changes in the system. This ensures that the algorithm accurately tracks the target spectral vectors over time and maintains good adaptability and sustained detection performance at different time points. Subsequently, DTMF is integrated into the RNN solution framework through a rigorous mathematical derivation. In order to enhance the resilience and precision of the detection scheme, a novel nonlinear activation function is introduced, and an AFRNN model that can be dynamically calibrated to adapt to fluctuations in the input data or environmental conditions is proposed. This model eliminates the time lag problem and facilitates rapid convergence. In addition, a systematic theoretical analysis and corresponding results for the AFRNN model are presented to investigate and ensure its convergence and robustness. The technical route of this paper is shown in Figure 1.
This paper is divided into five sections. The initial section of the paper presents a summary of the current state of research in the field of remote sensing target extraction and RNN methods, along with an analysis of their respective limitations. This serves to establish a foundation for the proposed method. The second part of the paper proposes the DTMF method and transforms it into a dynamic system of equations in order to prepare for the subsequent solution. The third part incorporates the DTMF dynamic equation set into the RNN solving framework and proposes an AFRNN model for solving. Subsequently, the proposed AFRNN model is subjected to comprehensive algorithmic analysis and some theorem proofs are provided to demonstrate the algorithm’s feasibility. The fourth part compares the visualization results of the proposed DTMF ship target extraction method with those of the traditional remote sensing target extraction method. It discusses the advantages and disadvantages of the different algorithms and verifies the superiority of this paper’s method. The principal contributions of this paper are as follows:
  • A novel DTMF ship target extraction model is proposed. DTMF introduces dynamic penalty terms based on MF, combining the dynamic adjustment of regularization parameters and time variables to enhance the orientation towards satisfying the constraints. Furthermore, the time variable is introduced as a reconciliation factor, whereby the regularization parameter grows exponentially with time, adapting to the dynamic changes in the system. This ensures that the algorithm accurately tracks the target spectral vectors over time, and is able to demonstrate good adaptability and sustained detection performance at different time points;
  • From the control point of view, this paper proposes an AFRNN model based on the RNN for solving the DTMF ship extraction method. The essence of the AFRNN model is to introduce an adaptive feedback term on the basis of the gradient RNN and to design a special nonlinear projection function, which adjusts dynamically according to changes in the input data or the environment and eliminates the time lag problem;
  • The efficacy of the proposed AFRNN model in addressing the DTMF method for remote sensing ship extraction is evidenced by the corresponding quantitative and visual simulation experiments and outcomes.

2. Dynamic Tracking Matched Filter

This section presents a description of the DTMF method for ship extraction. To facilitate its solution, the DTMF method is first constructed as a dynamic quadratic programming problem, which is then simplified in preparation for the subsequent solution.

2.1. Model Construction

In practical applications, the remote sensing image of the ship can be expressed as $X = [x_1, x_2, \ldots, x_n] \in \mathbb{R}^{m \times n}$, where each column of the matrix $X$ belongs to the space $\mathbb{R}^m$. Here, $m$ represents the number of spectral bands, and $n$ the number of pixels. Let $p \in \mathbb{R}^m$ be a column vector representing the spectral feature of the target. Then, the matched filter coefficients can be represented by $w \in \mathbb{R}^m$, which is the linear operator that transforms the space. To maximize the output signal-to-interference-plus-noise ratio (SINR), we need to adjust $w$ so that the output $w^T x$ focuses on the direction of the target feature vector $p$. In this way, the output of the filter for the background clutter $y$ can be modeled as a normal distribution with variance $\mathrm{Var}(y) = w^T \Phi w$, where $\Phi$ is the background covariance matrix. Thus, the matched filter is the optimal linear estimator that minimizes the output variance:
$$\min_{w} \; w^T \Phi\, w, \quad \text{s.t.} \; w^T p = 1. \tag{1}$$
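For concreteness, Equation (1) admits the well-known closed-form solution $w = \Phi^{-1}p/(p^T\Phi^{-1}p)$. The following is a minimal NumPy sketch of this classical baseline, assuming the covariance $\Phi$ is estimated from the mean-centered image itself; it is an illustration, not the authors' MATLAB implementation:

```python
import numpy as np

def matched_filter(X, p, eps=1e-6):
    """Closed-form MF of Equation (1): w = Phi^{-1} p / (p^T Phi^{-1} p).

    X : (m, n) image matrix with one spectral pixel vector per column.
    p : (m,) target spectral signature.
    Returns the (n,) filter output w^T x for every pixel.
    """
    m, n = X.shape
    mu = X.mean(axis=1, keepdims=True)      # background mean spectrum
    Xc = X - mu                             # mean-centred pixels
    Phi = (Xc @ Xc.T) / n                   # sample covariance, (m, m)
    Phi += eps * np.eye(m)                  # small ridge for invertibility
    Phi_inv_p = np.linalg.solve(Phi, p)
    w = Phi_inv_p / (p @ Phi_inv_p)         # optimal coefficients of (1)
    return w @ X                            # per-pixel detector response
```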
The coefficients of the matched filter are denoted as w . Then, by reconstructing the original image using the optimal filter coefficients, we obtain the filtered output image. Finally, by applying the relevant image thresholding segmentation methods, the detection of the desired target information can be achieved. The central idea of the penalty function method is to add a penalty term to the objective function, which includes a penalty factor proportional to the degree of constraint violation. As the optimization process progresses, the method enhances the effect of the penalty term by increasing a dynamic weight factor, thereby guiding the solution towards satisfying the constraints. In this way, we can effectively solve complex constrained optimization problems. Then, by incorporating temporal information, we extend the equality-constrained matched filter in Equation (1) to a dynamic optimization:
$$\min \; w^T(t)\,\Phi\,w(t) + \lambda\,\Xi\big(w^T(t)\,p - 1\big), \quad \text{s.t.} \; w^T(t)\,D = 1. \tag{2}$$
In the optimization process, the goal is to solve the dynamic unknown vector $w(t) \in \mathbb{R}^m$ in real time, where $t \in [0, +\infty)$. The optimization objective is to minimize the objective function $w^T(t)\Phi w(t) + \lambda\,\Xi(w^T(t)p - 1)$. The main component of the objective function, $w^T(t)\Phi w(t)$, represents the variance of the filter output signal. By minimizing this term, the effect of background noise on the filter output can be reduced. The penalty term, $\lambda\,\Xi(w^T(t)p - 1)$, is used to guide the optimization process towards a solution that satisfies the constraints. Here, $\lambda$ is a regularization parameter that controls the weight of the penalty term in the overall objective function, and $\Xi(\cdot)$ is a penalty function that measures the deviation $w^T(t)p - 1$.
In reality, the background of remote sensing images is highly complex, and phenomena such as 'same object, different spectrum' and 'same spectrum, different object' frequently occur, impeding the ability to obtain the spectral vectors of the target of interest in their entirety. Consequently, spectral vectors analogous or similar to the target spectral vectors are obtained through sampling. The coefficient matrix $D \in \mathbb{R}^{m \times m}$ is the ensemble of sampled spectral vectors, and $w^T(t)D = 1$ ensures a fixed gain in the direction of the target spectral feature.
The regularization parameter λ balances the importance of minimizing energy with satisfying the constraints. Its size directly affects the convergence and computational efficiency of the problem. To ensure computational efficiency and convergence, it is crucial to choose appropriate penalty functions and parameters. Theoretically, under the condition that the penalty function is satisfied, the regularization parameter needs to be continuously increased during the iteration process. To achieve this, we introduce the time variable as a harmonizing factor, ensuring that the regularization parameter exponentially increases with time. This means that the system’s tolerance for error decreases over time, and as time progresses, the system increasingly ensures that the filter coefficients accurately track the target spectral vector. This dynamic adjustment is essential in systems where the spectral characteristics change over time, necessitating continuous parameter adjustments to maintain optimal performance.
To minimize algorithm complexity and save computational resources, we design the penalty function as $\Xi(\cdot) = (\cdot)^2$. This squared term is continuously differentiable, and the penalty rapidly increases with the degree of deviation, imposing greater penalties on solutions that deviate further from the constraints. This ensures that the solution moves towards satisfying the constraints. Additionally, the penalty term is always non-negative, ensuring that it is always an increment to the total objective function. This maintains the bounds of the optimization function, making the optimization process more stable.
Based on the above analysis, we design the following optimization model:
$$\min \; w^T(t)\,\Phi\,w(t) + \lambda e^t\big(w^T(t)\,p - 1\big)^2, \quad \text{s.t.} \; w^T(t)\,D = 1. \tag{3}$$
This optimization model finds the optimal filter coefficients $w(t)$ by minimizing the sum of the filter output signal variance and the penalty term. The penalty term ensures a fixed gain in the direction of the target spectral feature, allowing for target detection in a complex background. By dynamically updating the filter coefficients, this model can adapt to real-time requirements in practical applications.

2.2. Model Simplification

To further expand and simplify the optimization problem, we need to extract the term $\lambda e^t\big(w^T(t)\,p\,p^T w(t)\big)$. Given the term $\big(w^T(t)\,p - 1\big)^2$, we can use the binomial square formula to expand it:
$$\big(w^T(t)\,p - 1\big)^2 = \big(w^T(t)\,p\big)^2 - 2\,w^T(t)\,p + 1,$$
so we can transform the penalty term into the following form:
$$\lambda e^t\big(w^T(t)\,p - 1\big)^2 = \lambda e^t\Big[\big(w^T(t)\,p\big)^2 - 2\,w^T(t)\,p + 1\Big].$$
First, we extract the term $w^T(t)\,p\,\big(p^T w(t)\big)$:
$$\lambda e^t\big(w^T(t)\,p\big)^2 = \lambda e^t\big(w^T(t)\,p\,\big(p^T w(t)\big)\big),$$
we can rewrite this as:
$$\lambda e^t\big(w^T(t)\,p\big)^2 = w^T(t)\big(\lambda e^t\,p\,p^T\big)\,w(t).$$
The optimization problem simplifies to:
$$\min \; w^T(t)\big(\Phi + \lambda e^t\,p\,p^T\big)\,w(t) - 2\lambda e^t\,w^T(t)\,p + \lambda e^t, \quad \text{s.t.} \; w^T(t)\,D = 1.$$
We introduce the Lagrange multiplier $\mu$ and construct the Lagrangian function $\mathcal{L}$:
$$\mathcal{L}\big(w(t), \mu\big) = w^T(t)\big(\Phi + \lambda e^t\,p\,p^T\big)\,w(t) - 2\lambda e^t\,w^T(t)\,p + \lambda e^t + \mu\big(w^T(t)\,D - 1\big).$$
To solve the optimization problem, we need to take the partial derivatives of $\mathcal{L}$ and set them to zero:
$$\frac{\partial \mathcal{L}}{\partial w(t)} = 2\big(\Phi + \lambda e^t\,p\,p^T\big)\,w(t) - 2\lambda e^t\,p + \mu D = 0, \qquad \frac{\partial \mathcal{L}}{\partial \mu} = w^T(t)\,D - 1 = 0. \tag{6}$$
In summary, when Equation (6) is satisfied, the DTMF can obtain the optimal solution. Subsequently, to simplify the expression, the following matrix is defined:
$$K(t) = \begin{bmatrix} \Phi + \lambda e^t\,p\,p^T & \frac{1}{2}D \\ D^T & 0 \end{bmatrix} \in \mathbb{R}^{(m+n)\times(m+n)}, \quad U(t) = \begin{bmatrix} w(t) \\ \mu \end{bmatrix} \in \mathbb{R}^{m+n}, \quad \Psi(t) = \begin{bmatrix} \lambda e^t\,p \\ 1 \end{bmatrix} \in \mathbb{R}^{m+n}.$$
As a result, the complex equation to be solved can be transformed into a simple linear equation as follows:
$$K(t)\,U(t) = \Psi(t). \tag{7}$$
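To make the construction concrete, the following NumPy sketch assembles $K(t)$ and $\Psi(t)$ at one frozen time instant; the shapes and names are illustrative assumptions (here $D$ gathers $k$ sampled target-like spectra as columns), not the authors' implementation:

```python
import numpy as np

def dtmf_system(Phi, p, D, lam, t):
    """Assemble K(t) and Psi(t) of Equation (7) at one time instant.

    Phi : (m, m) background covariance.
    p   : (m,) target spectral signature.
    D   : (m, k) sampled target-like spectra, one per column.
    """
    m, k = D.shape
    A = Phi + lam * np.exp(t) * np.outer(p, p)   # Phi + lambda e^t p p^T
    K = np.block([[A, 0.5 * D],
                  [D.T, np.zeros((k, k))]])      # (m+k, m+k) block matrix
    Psi = np.concatenate([lam * np.exp(t) * p,
                          np.ones(k)])           # right-hand side of (7)
    return K, Psi

# At a frozen instant t, U(t) = [w(t); mu] solves K(t) U(t) = Psi(t):
#   U = np.linalg.solve(K, Psi);  w = U[:m]
```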

3. Adaptive Feedback Recurrent Neural Network

In the previous section, we transformed the DTMF ship target extraction model into a dynamic quadratic programming problem. The introduction of the time variable prevents this problem from being treated like a conventional static optimization problem. Therefore, traditional optimization algorithms such as the steepest descent method and Newton's method are not applicable in this scenario. These classical optimization methods usually rely on the derivative information of the objective function to approach the optimal solution step by step, and they each have fixed accuracy limits. However, the dynamic objective function changes over time, which means that each iteration instant faces a different optimization problem than the previous instant. The iterative approach in traditional algorithms, based on the derivative information of the current moment, lacks immediate compensation for the time-varying parameters, which results in an irreparable time delay, no matter how the step size and sampling interval are set.
Based on the above problems, this section studies how to solve the DTMF target extraction model constructed in the previous section for ship extraction from UAV remote sensing imagery.
First, we construct an error function based on the above dynamic linear Equation (7):
$$\epsilon(t) = K(t)\,U(t) - \Psi(t), \tag{8}$$
Based on this error function, we introduce the original zeroing neural network (OZNN) [40] and construct it as the following first-order linear differential equation:
$$\dot{\epsilon}(t) + \sigma\,\epsilon(t) = 0, \tag{9}$$
where σ > 0 is a scaling parameter that adjusts the convergence speed of the linear model. Previous research has demonstrated that the dynamical system (9) can globally converge.
The Gradient Neural Network (GNN) [46] model requires the definition of an energy function based on the error and seeks the optimal solution along the negative gradient direction of the energy function. The formula for the GNN model used to solve the dynamic quadratic programming problem is as follows:
$$\dot{U}(t) = -\delta\,K^T(t)\big(K(t)\,U(t) - \Psi(t)\big). \tag{10}$$
Although the GNN model (10) eliminates the need for matrix inversion, thus substantially reducing computational complexity, it incurs time delays when dealing with dynamic problems. Therefore, the GNN model (10) is not suitable for all practical application scenarios.
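As a rough illustration of why the lag arises, consider an Euler discretization of model (10); the step size $\tau$ and sampled matrices below are assumptions for illustration only:

```python
import numpy as np

def gnn_step(U, K, Psi, delta=1.0, tau=1e-3):
    """One Euler step of GNN model (10): U <- U - tau*delta*K^T (K U - Psi).

    The update only uses current-time information, so when K(t) and Psi(t)
    drift, the iterate trails the moving solution of K(t) U(t) = Psi(t)
    and the residual settles at a nonzero level (see Theorem 1).
    """
    return U - tau * delta * (K.T @ (K @ U - Psi))
```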
Remark 1. 
It should be noted that $K(t)K^T(t)$ is a time-varying real symmetric matrix. According to the theory of matrix diagonalization and similarity, it is similar to a time-varying diagonal matrix. Therefore, we can obtain the following expression:
$$K(t)\,K^T(t) \sim \begin{bmatrix} \lambda_1(t) & 0 & \cdots & 0 \\ 0 & \lambda_2(t) & \cdots & 0 \\ \vdots & \vdots & \ddots & \vdots \\ 0 & 0 & \cdots & \lambda_{n+1}(t) \end{bmatrix},$$
where $\lambda_i(t) \ge 0$ $(i = 1, 2, \ldots, n+1)$ are the eigenvalues of the matrix $K(t)K^T(t)$ at each moment in time. In addition, all eigenvalues $\lambda_i(t)$ satisfy the following inequality:
$$0 < \lambda_m \le \lambda_i(t) \le \lambda_M,$$
where $\lambda_m$ and $\lambda_M$, respectively, represent the global minimum and global maximum eigenvalues of the time-varying symmetric matrix.
In solving the dynamic quadratic programming problem, although the GNN model (10) uses parallel computation, it lacks mechanisms to effectively cope with rapid changes in the relevant parameters, which means that there is still a gap between the solution and the theoretical solution, even as time tends to infinity [47].
Theorem 1. 
The GNN model (10) allows $\|\epsilon(t)\|_2$ to converge globally to a constant bound, which we denote as:
$$\lim_{t\to\infty}\|\epsilon(t)\|_2 = \frac{\xi}{\delta\,\lambda_m},$$
where the time-derivative term is bounded as $\|\dot{K}(t)\,U(t) - \dot{\Psi}(t)\|_2 \le \xi$, and $\lambda_m$ denotes the smallest eigenvalue of $K(t)K^T(t)$.
Proof. 
First, we define a Lyapunov candidate function [48]:
$$L_1(t) = \epsilon^T(t)\,\epsilon(t). \tag{11}$$
The time derivative of Equation (11) is:
$$\dot{L}_1(t) = 2\,\epsilon^T(t)\,\dot{\epsilon}(t) = 2\,\epsilon^T(t)\big(K(t)\dot{U}(t) + \dot{K}(t)U(t) - \dot{\Psi}(t)\big) = 2\,\epsilon^T(t)\big({-\delta}\,K(t)K^T(t)\,\epsilon(t) + \dot{K}(t)U(t) - \dot{\Psi}(t)\big) \le -2\delta\lambda_m\,\|\epsilon(t)\|_2^2 + 2\,\|\epsilon(t)\|_2\,\big\|\dot{K}(t)U(t) - \dot{\Psi}(t)\big\|_2 \le -2\,\|\epsilon(t)\|_2\big(\delta\lambda_m\,\|\epsilon(t)\|_2 - \xi\big).$$
Finally, to demonstrate the convergence properties, note that the GNN model (10) guarantees that $\|\epsilon(t)\|_2$ eventually remains less than or equal to $\xi/(\delta\lambda_m)$. Therefore, for the final part of the proof, we can write:
$$\lim_{t\to\infty}\|\epsilon(t)\|_2 = \frac{\xi}{\delta\,\lambda_m}.$$
Due to physical constraints, the value of $\delta$ cannot increase indefinitely, which leaves the GNN model (10) unable to converge $\epsilon(t)$ exactly to zero. In other words, the GNN model cannot solve the dynamic quadratic programming problem precisely. The proof is complete. □
To compensate for this limitation, we introduce an unknown adaptive feedback term $\varpi(t)$:
$$\dot{U}(t) = -\delta\,K^T(t)\big(K(t)\,U(t) - \Psi(t)\big) + \varpi(t). \tag{12}$$
From the perspective of convergence, the adaptive feedback term is incrementally scaled up as the error function converges, which in turn greatly reduces the convergence time of the model. When the error function $\epsilon(t)$ converges to 0, we obtain:
$$\dot{U}(t) = -\delta\,K^T(t)\big(K(t)\,U(t) - \Psi(t)\big) + \varpi(t) = \varpi(t).$$
Therefore, we obtain a completely new GNN model:
$$\dot{U}(t) = -\delta\,K^T(t)\big(K(t)\,U(t) - \Psi(t)\big) - K^{-1}(t)\big(\dot{K}(t)\,U(t) - \dot{\Psi}(t)\big). \tag{13}$$
In order to improve the convergence speed of model (13), we design a Nonlinear Response Power-Law Modulation Function $\varrho(\cdot)$:
$$\varrho(\epsilon_i) = \epsilon_i + e^{1-\nu}\,\mathrm{sign}(\epsilon_i)\,|\epsilon_i|^{\nu}, \tag{14}$$
where $\nu \in (0, 1)$ is a design parameter and $\mathrm{sign}(\cdot)$ denotes the sign function. Therefore, the final form of the AFRNN model can be written as:
$$\dot{U}(t) = -\delta\,K^T(t)\,\varrho\big(K(t)\,U(t) - \Psi(t)\big) - K^{-1}(t)\big(\dot{K}(t)\,U(t) - \dot{\Psi}(t)\big), \tag{15}$$
where $\varrho(\cdot)$ is applied element-wise. The Nonlinear Response Power-Law Modulation Function $\varrho(\cdot)$ is the key to efficient convergence of the AFRNN model (15). This mechanism allows the model to accelerate the convergence process as the error function decreases.
AFRNN model (15) is a dynamic system constructed based on explicit formulas. Starting from any initial state, the model evolves over time, gradually approaching the theoretical solution by continuously updating parameters. This process relies on the current time-varying coefficient matrix and its derivative information over a predetermined time span and is implemented through specific algebraic operations. Figure 2 shows the framework structure of how the AFRNN model (15) updates parameters in a unified manner.
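The following NumPy sketch makes one update of this evolution explicit, assuming $\dot{K}(t)$ and $\dot{\Psi}(t)$ are available analytically (for DTMF they follow from the $\lambda e^t$ terms entering Equation (7)); the step size $\tau$ and all names are illustrative, not the authors' implementation:

```python
import numpy as np

def varrho(eps, nu=0.7):
    """Element-wise power-law modulation function of Equation (14)."""
    return eps + np.exp(1.0 - nu) * np.sign(eps) * np.abs(eps) ** nu

def afrnn_step(U, K, Psi, K_dot, Psi_dot, delta=1.0, nu=0.7, tau=1e-3):
    """One Euler step of AFRNN model (15).

    The gradient part is modulated by varrho, which enlarges the update
    for large residuals; the adaptive feedback part
    -K^{-1}(K_dot U - Psi_dot) compensates the time variation of K(t)
    and Psi(t), removing the lag of the plain GNN iteration.
    """
    grad = -delta * (K.T @ varrho(K @ U - Psi, nu))
    feedback = -np.linalg.solve(K, K_dot @ U - Psi_dot)
    return U + tau * (grad + feedback)
```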
Remark 2. 
The decision to utilize an RNN-based learning approach in preference to a deep learning approach was based on three key considerations. Firstly, methods such as Convolutional Neural Networks (CNNs) or Vision Transformers (ViTs) necessitate a substantial quantity of data for training purposes. However, our detection scheme is a training-free and model-driven approach that avoids the need for redundant training. This makes the DTMF with the AFRNN model an economically viable and adaptable option for software or hardware implementation. Secondly, deep learning approaches necessitate the mapping of unknown features from input to output. In contrast, our detection scheme is based on rigorous mathematical derivation and modeling, which renders it highly interpretable. Furthermore, a systematic theoretical analysis with corresponding results is provided to guarantee its convergence and robustness. Thirdly, the DTMF method implemented with the AFRNN model solves the underlying optimization problem in real time, thereby achieving the detection operation. Nevertheless, in analogous circumstances, the efficacy of deep learning-based methodologies can be significantly impaired.

3.1. Complexity Analysis

Here, we present a complexity analysis in terms of floating-point operations (flops). Before discussing the time and space complexity of the AFRNN model (15), the sizes of the matrices and vectors involved in the algorithm need to be clarified. We have $K(t) \in \mathbb{R}^{G \times G}$ and $U(t), \Psi(t) \in \mathbb{R}^{G}$, where $G = m + n$. Then, some flop counts for matrix-vector calculus [49] and storage requirements for matrix operations are introduced as follows:
(1) The addition or subtraction of two vectors requires $n$ flops.
(2) Multiplication of a scalar and a vector $a \in \mathbb{R}^n$ requires $n$ flops.
(3) Multiplication of a vector $a \in \mathbb{R}^n$ by a full-rank matrix $A \in \mathbb{R}^{n \times n}$ requires $2n(n-1)$ flops.
(4) The inverse of a matrix $A \in \mathbb{R}^{n \times n}$ requires $n^3$ flops.
(5) Multiplication of a scalar and a vector $a \in \mathbb{R}^n$ occupies $n$ storage space.
(6) Multiplication of a vector $a \in \mathbb{R}^n$ by a full-rank matrix $A \in \mathbb{R}^{n \times n}$ occupies $n^2$ storage space.
(7) The transpose of a matrix $A \in \mathbb{R}^{n \times n}$ occupies $n^2$ storage space.
(8) The inverse of a matrix $A \in \mathbb{R}^{n \times n}$ occupies $n^2$ storage space.
In light of the above representations, calculating $\delta K^T(t)\,\varrho\big(K(t)U(t) - \Psi(t)\big)$ needs $5G^2 + G$ flops and occupies $G^2$ storage space, and calculating $K^{-1}(t)\big(\dot{K}(t)U(t) - \dot{\Psi}(t)\big)$ needs $G^3 + 4G^2 - G$ flops and occupies $G^2$ storage space. Thus, the AFRNN model (15) costs $G^3 + 9G^2$ flops in total at every time instant, with an overall storage requirement on the order of $G^2$. Furthermore, in the field of computer science, the term "time complexity" is employed to describe the computational resources required to execute an algorithm, while "space complexity" refers to the memory space required during an algorithm's execution; it is a quantitative assessment of the amount of memory an algorithm uses when processing data. Essentially, an algorithm with greater computational complexity incurs higher time and space complexity. Following this premise, the computational time complexity and space complexity of the proposed AFRNN model (15) are $O(G^3)$ and $O(G^2)$, respectively.

For medium-sized problems, i.e., where the computational requirements grow polynomially with the problem size, the complexity of the AFRNN model (15) is acceptable in many practical applications, especially as modern computers can quickly handle large matrix operations. Matrix operations are naturally parallel, which means that the AFRNN model (15) can make good use of multi-core processors, GPUs, or distributed computing systems to further reduce computation time. Furthermore, an analytical assessment of the space complexity associated with the AFRNN model (15) underscores its pronounced superiority in computational resource management. In particular, the AFRNN model (15) shows robust adaptability to medium-scale challenges, with space requirements on the order of $O(G^2)$, congruent with the storage capacities of prevailing computing infrastructures, and it shows excellent space efficiency and memory management capability, which is of great significance for improving calculation efficiency and processing speed. This feature makes the AFRNN model (15) suitable not only for prototypical theoretical investigations but also for concrete engineering and scientific computational tasks.

3.2. Convergence Analysis

To justify the introduction of $\varrho(\cdot)$, the convergence of the model must first be established.
Theorem 2. 
When solving the dynamic quadratic programming problem, model (13) allows the norm of the error to converge globally and exponentially to 0, that is:
$$\lim_{t\to\infty}\epsilon_i(t) = \lim_{t\to\infty}\epsilon_i(0)\,e^{-\delta\int_0^t \lambda_i(\tau)\,d\tau} = 0,$$
where $\delta$ is the scaling parameter and $i = 1, 2, \ldots, n+1$.
Proof. 
Model (13) is equivalent to:
$$K(t)\,\dot{U}(t) = -\delta\,K(t)K^T(t)\big(K(t)\,U(t) - \Psi(t)\big) - \dot{K}(t)\,U(t) + \dot{\Psi}(t).$$
Next, we can obtain:
$$\dot{\epsilon}(t) = -\delta\,K(t)K^T(t)\,\epsilon(t).$$
Then, the $i$-th subsystem of model (13) can be expressed as:
$$\dot{\epsilon}_i(t) = -\delta\,\lambda_i(t)\,\epsilon_i(t). \tag{18}$$
Solving differential Equation (18), we obtain:
$$\epsilon_i(t) = \epsilon_i(0)\,e^{-\delta\int_0^t \lambda_i(\tau)\,d\tau}.$$
Since $\lambda_m \le \lambda_i(t) \le \lambda_M$, we have:
$$e^{-\delta\lambda_M t} \le e^{-\delta\int_0^t \lambda_i(\tau)\,d\tau} \le e^{-\delta\lambda_m t}.$$
Using the known facts $\lambda_M > 0$ and $\lambda_m > 0$, we obtain the following equation:
$$\lim_{t\to\infty} e^{-\delta\lambda_M t} = 0, \qquad \lim_{t\to\infty} e^{-\delta\lambda_m t} = 0.$$
Thus, model (13) is able to solve the dynamic quadratic programming problem in such a way that the steady-state error converges globally and exponentially to 0. The proof is complete. □
Nonetheless, model (13) still suffers from a significant drawback: its convergence time is, in principle, infinitely long. In practical applications, the trajectory of the theoretical solution usually needs to be obtained within a short time. Therefore, $\varrho(\cdot)$ is designed to address this shortcoming.
To ensure that the Nonlinear Response Power-Law Modulation Function $\varrho(\cdot)$ does not affect the convergence of the AFRNN model, we propose the following theorem:
Theorem 3. 
Any monotonically increasing odd function can be used as the Power-Law Modulation Function $\varrho(\cdot)$ of the model without affecting its convergence.
Proof. 
The Lyapunov candidate function for the $i$-th subsystem of the AFRNN model (15) is considered as follows:
$$L_i(t) = \frac{1}{2}\,\epsilon_i^2(t),$$
and its time derivative is:
$$\frac{dL_i(t)}{dt} = \epsilon_i(t)\,\dot{\epsilon}_i(t) = -\delta\,\lambda_i(t)\,\epsilon_i(t)\,\varrho\big(\epsilon_i(t)\big).$$
Obviously, since the Nonlinear Response Power-Law Modulation Function $\varrho(\cdot)$ is a monotonically increasing odd function, we derive the following conclusion:
$$\epsilon_i(t)\,\varrho\big(\epsilon_i(t)\big) \ge 0,$$
and in turn we obtain:
$$\frac{dL_i(t)}{dt} \le 0.$$
Therefore, the introduction of the Nonlinear Response Power-Law Modulation Function $\varrho(\cdot)$ does not affect the convergence of the model, and any monotonically increasing odd function can be used to activate the AFRNN model (15). The proof is complete. □
After power-law modulation of the AFRNN model (15), the time derivative of the error function is changed. After rigorous theoretical analysis, the AFRNN model (15) under power-law modulation has an upper bound on the convergence time, and this upper bound can be adjusted by parameters. In order to verify the accelerated convergence effect of the AFRNN model (15), we propose the following theorem.
Theorem 4. 
The AFRNN model (15) for solving dynamic quadratic programming problems allows $\epsilon(t)$ to converge globally in finite time. The convergence time $t_M$ is capped at:
$$t_M \le \frac{\ln\big(\epsilon_M^{1-\nu}(0) + e^{1-\nu}\big) - (1-\nu)}{\delta\,\lambda_m\,(1-\nu)},$$
where $\epsilon_M(0)$ denotes the element of the initial error vector $\epsilon(0)$ with the largest absolute value.
Proof. 
Since each element shares the same dynamic system, the subsystem of the AFRNN model (15) belonging to $\epsilon_M(t)$ can be expressed as:
$$\dot{\epsilon}_M(t) = -\delta\,\bar{\lambda}(t)\,\varrho\big(\epsilon_M(t)\big),$$
where $\bar{\lambda}(t)$ is the corresponding time-varying eigenvalue of $K(t)K^T(t)$.
In the same dynamical system, $\epsilon_M(t)$ must be the last to converge to 0; in other words, the time required for $\epsilon_M(t)$ to converge to 0 is the maximum convergence time. Since the AFRNN model is symmetric, to simplify the proof we assume $\epsilon_M(t) > 0$. Then, we can obtain:
$$\dot{\epsilon}_M(t) = -\delta\,\bar{\lambda}(t)\big(\epsilon_M(t) + e^{1-\nu}\,\epsilon_M^{\nu}(t)\big). \tag{20}$$
Equation (20) can be rewritten as:
$$\epsilon_M^{-\nu}(t)\,\frac{d\epsilon_M(t)}{dt} = -\delta\,\bar{\lambda}(t)\big(\epsilon_M^{1-\nu}(t) + e^{1-\nu}\big). \tag{21}$$
We denote $h(t) = \epsilon_M^{1-\nu}(t) + e^{1-\nu}$; then, we obtain:
$$\frac{dh(t)}{dt} = (1-\nu)\,\epsilon_M^{-\nu}(t)\,\frac{d\epsilon_M(t)}{dt}. \tag{22}$$
Combining Equations (21) and (22), we obtain the following equation:
$$\frac{dh(t)}{dt} + \delta\,\bar{\lambda}(t)\,(1-\nu)\,h(t) = 0.$$
According to the formulae for solving first-order differential equations, we have:
$$h(t) = h(0)\,e^{-\delta(1-\nu)\int_0^t \bar{\lambda}(\tau)\,d\tau}.$$
According to Theorem 2, when $t = t_M$, $\epsilon_M(t_M) = 0$, so:
$$h(t_M) = e^{1-\nu} = \big(\epsilon_M^{1-\nu}(0) + e^{1-\nu}\big)\,e^{-\delta(1-\nu)\int_0^{t_M}\bar{\lambda}(\tau)\,d\tau}. \tag{26}$$
Equation (26) can be restated as:
$$\int_0^{t_M}\bar{\lambda}(\tau)\,d\tau = \frac{\ln\big(\epsilon_M^{1-\nu}(0) + e^{1-\nu}\big) - (1-\nu)}{\delta\,(1-\nu)}.$$
Because $\bar{\lambda}(t) \in \{\lambda_i(t) \mid i = 1, 2, \ldots, n+1\}$ and $\lambda_i(t) \ge \lambda_m$, we can derive:
$$\int_0^{t_M}\bar{\lambda}(\tau)\,d\tau \ge \int_0^{t_M}\lambda_m\,d\tau = \lambda_m\,t_M.$$
Based on the above derivation, an upper bound on the convergence time $t_M$ is obtained:
$$t_M \le \frac{\ln\big(\epsilon_M^{1-\nu}(0) + e^{1-\nu}\big) - (1-\nu)}{\delta\,\lambda_m\,(1-\nu)}.$$
The proof is complete. □
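As a quick numeric sanity check of this bound (with illustrative values, not figures from the paper), the following snippet evaluates the right-hand side:

```python
import numpy as np

def t_max(eps0_max, delta, lam_min, nu=0.7):
    """Evaluate the Theorem 4 upper bound on the convergence time t_M."""
    num = np.log(abs(eps0_max) ** (1.0 - nu) + np.e ** (1.0 - nu)) - (1.0 - nu)
    return num / (delta * lam_min * (1.0 - nu))

# Illustrative values: |eps_M(0)| = 10, delta = 1, lambda_m = 0.5, nu = 0.7
print(t_max(10.0, 1.0, 0.5))   # ~6 time units
```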

3.3. Robustness Analysis

In engineering calculations, while the pursuit of a disturbance-free computational environment is an ideal goal, it is often unattainable due to physical constraints and practical conditions. Hence, the robustness of the model—the ability to maintain the accuracy of computation results despite the presence of higher-order residuals, hardware-induced rounding errors, and other forms of disturbances—becomes crucial. These perturbations can originate from multiple sources, including external noise caused by hardware and the working environment, as well as internal errors during data storage and signal transmission, which can be either static or dynamic.
The presence of such noise and errors clearly affects the solution algorithms. They not only reduce the performance of the algorithm but may also lead to meaningless results in practical applications. Therefore, when designing solution algorithms and computational models, it is imperative to consider these factors to ensure that the model can operate not only in an ideal state but more importantly, maintain its performance and accuracy under the complex conditions of the real world.
Firstly, we consider the impact of a type of static noise on the model, which can be described as follows:
$$\rho(t) = \bar{\rho} \in \mathbb{R}.$$
Theorem 5. 
The AFRNN model can converge to a fixed range under the influence of the constant noise $\rho(t)$:
$$\lim_{t\to\infty}\|\epsilon(t)\|_2 \le \sqrt{n}\,\varrho^{-1}\!\left(\frac{\bar{\rho}}{\delta\,\lambda_m}\right).$$
Proof. 
The AFRNN model (15) under the constant noise $\bar{\rho}$ can be rewritten as:
$$\dot{\epsilon}(t) = -\delta\,K(t)K^T(t)\,\varrho\big(\epsilon(t)\big) + \bar{\rho}.$$
Similarly, its sub-model can be expressed as:
$$\dot{\epsilon}_i(t) = -\delta\,\lambda_i(t)\,\varrho\big(\epsilon_i(t)\big) + \bar{\rho}.$$
Define a new Lyapunov positive definite function $y_j(t) = \epsilon_j^2(t)/2$. Differentiating and simplifying yields the following expression:
$$\dot{y}_j(t) = -\epsilon_j(t)\big(\delta\,\lambda_j(t)\,\varrho(\epsilon_j(t)) - \bar{\rho}\big) \le -\epsilon_j(t)\big(\delta\,\lambda_m\,\varrho(\epsilon_j(t)) - \bar{\rho}\big).$$
According to the Lyapunov stability principle, the results in the following three cases need to be considered separately:
(1) $\epsilon_j(t) < 0$
If $\delta\lambda_m\,\varrho(\epsilon_j(t)) - \bar{\rho} < 0$, i.e., $\dot{y}_j(t) < 0$, then $\|\epsilon(t)\|_2$ is globally convergent, and we have:
$$\epsilon_j(t) < \varrho^{-1}\!\left(\frac{\bar{\rho}}{\delta\,\lambda_m}\right).$$
When $\delta\lambda_m\,\varrho(\epsilon_j(t)) - \bar{\rho} \ge 0$, the AFRNN model (15) can achieve global convergence. In this scenario, we can demonstrate that $\dot{y}_j(t) \le 0$. If $\|\epsilon(t)\|_2$ is within the permissible error range and $y_j(t) > 0$, it implies that $0 > \epsilon_j(t) > \varrho^{-1}\big(\frac{\bar{\rho}}{\delta\lambda_m}\big)$. The boundary of the steady-state error $\epsilon_j(t)$ can diminish to 0 over time. To summarize, $\epsilon_j(t)$ fluctuates within the bounded range $\big(\varrho^{-1}\big(\frac{\bar{\rho}}{\delta\lambda_m}\big),\, 0\big)$.
(2) $\epsilon_j(t) = 0$
At this point, $\varrho(\epsilon_j(t)) = 0$ and $\dot{\epsilon}_j(t) = \bar{\rho}$, which indicates that the value of $\dot{\epsilon}_j(t)$ depends on the sign of $\bar{\rho}$. Obviously, this is only a transient state, and the system subsequently moves to case 1 or case 3.
(3) $\epsilon_j(t) > 0$
If $\delta\lambda_m\,\varrho(\epsilon_j(t)) - \bar{\rho} > 0$, then $\dot{y}_j(t) < 0$. According to the Lyapunov stability principle, $\epsilon_j(t)$ gradually converges towards the state that satisfies $\epsilon_j(t) > \varrho^{-1}\big(\frac{\bar{\rho}}{\delta\lambda_m}\big)$.
When $\delta\lambda_m\,\varrho(\epsilon_j(t)) - \bar{\rho} \le 0$, similar to case 1, $\epsilon_j(t)$ satisfies $\epsilon_j(t) < \varrho^{-1}\big(\frac{\bar{\rho}}{\delta\lambda_m}\big)$ and grows after reaching the critical point $\dot{y}_j(t) = 0$. Clearly, $\epsilon_j(t)$ will move towards the boundary and then stabilize around $\dot{y}_j(t) = 0$.
Taking the above analyses together, each subsystem of the AFRNN model eventually reaches stability at $\dot{y}_j(t) = 0$ and necessarily satisfies $|\epsilon_j(t)| \le \varrho^{-1}\big(\frac{\bar{\rho}}{\delta\lambda_m}\big)$. Therefore, we can conclude the following:
$$\lim_{t\to\infty}\|\epsilon(t)\|_2 \le \sqrt{n}\,\varrho^{-1}\!\left(\frac{\bar{\rho}}{\delta\,\lambda_m}\right).$$
At this point, the proof is complete. □
The noise input is typically not constant; therefore, it becomes necessary to expand our assumptions regarding the noise structure:
$$\rho(t) \in \mathbb{R}^n \quad \text{s.t.} \quad \|\rho(t)\|_2 \le \bar{\iota}.$$
We now analyse the impact of periodic and non-periodic bounded noise on the AFRNN model, even when information on the derivatives of the noise is unavailable. The following theorem is presented:
Theorem 6. 
Under conditions of bounded time-varying noise $\rho(t)$, the steady-state error $\|\epsilon(t)\|_2$ of the AFRNN model converges to the following range:
$$\lim_{t\to\infty}\|\epsilon(t)\|_2 \le \frac{\bar{\iota}}{\delta\,\lambda_m}.$$
Proof. 
While the AFRNN model primarily utilizes a power-law modulation structure, linear functions can be regarded as a subset of this structure. For the purpose of simplifying the analysis, we take $\varrho(\cdot)$ to be linear. As a result, the perturbed error dynamics can be written as follows:
$$\dot{\epsilon}(t) = -\delta\,K(t)K^T(t)\,\epsilon(t) + \rho(t).$$
The corresponding submodel is expressed as:
$$\dot{\epsilon}_j(t) = -\delta\,\lambda_j(t)\,\epsilon_j(t) + \rho_j(t). \tag{31}$$
By combining Equation (31) with the general solution of the first-order differential equation, the following expression is obtained:
$$\epsilon_j(t) = \epsilon_j(0)\,e^{-\int_0^t \delta\lambda_j(\tau)\,d\tau} + e^{-\int_0^t \delta\lambda_j(\tau)\,d\tau}\int_0^t e^{\int_0^s \delta\lambda_j(\tau)\,d\tau}\,\rho_j(s)\,ds.$$
Denoting $\gamma(t) = \int_0^t \delta\lambda_j(\tau)\,d\tau$, the equation can be simplified to:
$$\epsilon_j(t) = \epsilon_j(0)\,e^{-\gamma(t)} + e^{-\gamma(t)}\int_0^t e^{\gamma(s)}\,\rho_j(s)\,ds.$$
Knowing $\lambda_j(t) \ge \lambda_m > 0$, we have:
$$\lim_{t\to\infty}\gamma(t) = \lim_{t\to\infty}\int_0^t \delta\lambda_j(\tau)\,d\tau = \infty.$$
The following inequality can be derived from the triangle inequality:
$$|\epsilon_j(t)| \le \big|\epsilon_j(0)\,e^{-\gamma(t)}\big| + e^{-\gamma(t)}\int_0^t e^{\gamma(s)}\,\big|\rho_j(s)\big|\,ds.$$
The following inequality is then derived based on L'Hôpital's rule:
$$\lim_{t\to\infty}|\epsilon_j(t)| \le \lim_{t\to\infty}\big|\epsilon_j(0)\,e^{-\gamma(t)}\big| + \frac{\bar{\iota}}{\delta\,\lambda_j(t)} \le \frac{\bar{\iota}}{\delta\,\lambda_m}. \tag{32}$$
It can be inferred from Equation (32) that:
$$\lim_{t\to\infty}\|\epsilon(t)\|_2 \le \frac{\bar{\iota}}{\delta\,\lambda_m}.$$
At this point, the proof is complete. □

4. Identification Experiments and Accuracy Assessment

This section introduces the data sources and preprocessing. For comparison, we selected the classic remote sensing target extraction methods ACE, CEM, and MF to verify the high accuracy and stability of the proposed method. It should be noted that all experiments are conducted using MATLAB R2017a on a computer with Windows 10, an AMD Ryzen 5 3600 6-core CPU @ 3.60 GHz, and 16 GB RAM. The experimental parameters are as follows: dynamic penalty factor $\lambda = 1$, scale factor $\delta = 1$, and nonlinear parameter $\nu = 0.7$.

4.1. Dataset

This research investigates the Zhanjiang Port fishery port region and Tongming Port in Zhanjiang City, as depicted by the red boxes in Figure 3, which serve as the primary research areas. On-site aerial photography was conducted in these regions to systematically collect data for experimentation. The base image in Figure 3 is from EarthExplorer.
Furthermore, the remote sensing images presented in this paper were captured using the DJI M300 RTK, developed and manufactured by DJI in Shenzhen, China. The DJI M300 RTK is equipped with a high-precision RTK navigation system capable of achieving centimeter-level positioning accuracy, which ensures that the vehicle flies stably and accurately reaches the designated destination. The UAV in this study is equipped with the MS600 PRO multispectral camera, a custom-developed multispectral camera based on the DJI PSDK and manufactured by YUSENSE in Qingdao, China. It can be seamlessly connected to the DJI M200 and M300 RTK UAV flight platforms. The MS600 PRO has six spectral bands, each employing a 1.2-megapixel high-dynamic-range global-shutter CMOS detector. The acquisition of UAV image data is influenced by a number of factors, including flight speed, flight altitude, weather conditions, attitude orientation, and the parameter settings of the gimbaled intelligent camera. These factors can significantly impact the quality and effectiveness of the acquired data, so a significant number of preliminary tests must be conducted to determine the optimal settings. In this paper, the acquisition was conducted under clear and cloudless weather conditions, and the flight altitudes were set at 100 and 150 m. These measures optimize the image acquisition process and ensure the quality of the image data.
During UAV aerial photography, remote sensing images may be affected by a variety of factors, including optical distortion and other issues. To overcome these problems, an image processing pipeline is crucial, which includes steps such as image stitching and orthorectification. Before orthorectification, the original data need to be initially processed, including key point extraction and matching, automatic aerial triangulation, bundle block adjustment, and camera self-calibration, to remove factors that affect image quality, such as the UAV calibration gray plate, underexposure or overexposure, and inaccurate focus. This research uses the Pix4Dmapper automated 3D modeling software and ENVI to preprocess the UAV remote sensing data.

4.2. Parameter Descriptions

Confusion matrices [50] play a crucial role in remote sensing image extraction, providing a basic and intuitive method for evaluating the accuracy of extraction models. A confusion matrix quantifies model performance by comparing the model's prediction results on test images with the real ground object images and counting the numbers of correct and incorrect observations. The confusion matrix is particularly useful when dealing with binary classification problems because it provides a clear analytical framework for the positive (1) or negative (0) results predicted by the model. The core components of the confusion matrix are four basic parameters: true positives (TP), true negatives (TN), false positives (FP), and false negatives (FN). These parameters describe in detail the various possibilities of model predictions, as shown in Table 1. These key parameters are the cornerstone of understanding and evaluating the performance of a classification model. They not only help us intuitively see the performance of the model in actual tests but are also the basis for calculating efficiency indicators such as accuracy, recall, precision, and F1 score.
The confusion matrix is an extremely valuable tool in remote sensing image classification tasks, but its limitations when dealing with large-scale datasets cannot be ignored. Assessing the quality of a model by merely counting pixels in remote sensing images may prove inadequate, particularly in remote sensing, where the quantity of data is voluminous and the subject matter is intricate. Therefore, in order to achieve a more comprehensive performance evaluation, the basic statistical results of the confusion matrix must be used in conjunction with other detailed evaluation indicators, including overall accuracy (OA), precision, recall, false alarm rate (FA), average accuracy (AA), F-score, and computing time (CT). The formulae for these are as follows:
$$\mathrm{OA} = \frac{TP + TN}{TP + TN + FP + FN}, \quad \mathrm{Precision} = \frac{TP}{TP + FP}, \quad \mathrm{Recall} = \frac{TP}{TP + FN},$$
$$\mathrm{FA} = \frac{FP}{FP + TN}, \quad \mathrm{AA} = \frac{1}{2}\left(\frac{TP}{TP + FN} + \frac{TN}{TN + FP}\right), \quad \mathrm{F\text{-}score} = \frac{(1+\beta^2)\,(\mathrm{Precision}\cdot\mathrm{Recall})}{\beta^2\cdot\mathrm{Precision} + \mathrm{Recall}}.$$
Furthermore, this research employs the KAPPA coefficient [51] as the principal indicator for the evaluation of model performance. The formula is as follows:
$$\mathrm{Kappa} = \frac{p_0 - p_e}{1 - p_e}, \quad p_0 = \frac{TP + TN}{TP + TN + FP + FN}, \quad p_e = \frac{(TP + FP)(TP + FN) + (FN + TN)(FP + TN)}{(TP + TN + FP + FN)^2}.$$
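As an illustrative sketch (NumPy, not the MATLAB evaluation scripts used in this research), these metrics can be computed from a predicted binary map and a ground-truth map as follows:

```python
import numpy as np

def extraction_metrics(pred, truth, beta=1.0):
    """OA, Precision, Recall, FA, AA, F-score and Kappa from binary maps."""
    pred = np.asarray(pred, dtype=bool).ravel()
    truth = np.asarray(truth, dtype=bool).ravel()
    TP = np.sum(pred & truth)
    TN = np.sum(~pred & ~truth)
    FP = np.sum(pred & ~truth)
    FN = np.sum(~pred & truth)
    total = TP + TN + FP + FN
    oa = (TP + TN) / total
    precision = TP / (TP + FP)
    recall = TP / (TP + FN)
    fa = FP / (FP + TN)
    aa = 0.5 * (TP / (TP + FN) + TN / (TN + FP))
    fscore = ((1 + beta**2) * precision * recall
              / (beta**2 * precision + recall))
    pe = ((TP + FP) * (TP + FN) + (FN + TN) * (FP + TN)) / total**2
    kappa = (oa - pe) / (1 - pe)
    return {"OA": oa, "Precision": precision, "Recall": recall,
            "FA": fa, "AA": aa, "F-score": fscore, "Kappa": kappa}
```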

4.3. Extraction Results and Accuracy Evaluation

In this subsection, we divide the experiments into three different sub-subsections, with the first two focusing on different UAV imaging altitudes: 100 m and 150 m. This division corresponds to the different flight altitudes used during the experiments. To evaluate the generalization ability of our proposed method and verify its performance, we use the public HRSC2016 [52] dataset in Section 4.3.3. This dataset is known for its diversity and complexity, making it an excellent benchmark for testing our method. In addition, we present a structured evaluation framework in Section 4.2 to facilitate a detailed comparison between the DTMF method and three other classic ship extraction methods. This framework aims to comprehensively evaluate their performance under different imaging conditions, especially in terms of ship extraction accuracy.

4.3.1. Imaging Height 100 m

This sub-subsection presents the results of an experimental investigation conducted on images captured at a flight height of 100 m. The numerical accuracy results recorded in Table 1 further quantify the effectiveness of the four methods in extracting ships in different regions, providing a detailed performance comparison. Using the visual extraction results displayed in Figure 4, we show the extraction effects of the four methods in an intuitive way, so that the evaluation process is not limited to numerical analysis but also includes a visual comparison.
This research uses the graythresh function in MATLAB, which automatically selects the optimal threshold based on the OTSU algorithm, to convert the image into a binary image. Through threshold segmentation, we obtain clear extraction results in the form of binary images. It is worth noting that, in order to ensure the fairness of the comparison and avoid biases that may be introduced by customized pre-processing and post-processing of the remote sensing images, the image processing steps and related parameter settings are kept identical across all methods. This ensures the objectivity and comparability of the analysis results and provides a reliable basis for evaluating the performance of the different algorithms.
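A Python near-equivalent of this thresholding step, using scikit-image's threshold_otsu in place of MATLAB's graythresh (an assumption for illustration, not the code used in this research), is sketched below:

```python
import numpy as np
from skimage.filters import threshold_otsu

def binarize(score_map):
    """Otsu thresholding of a filter-output map, analogous to the
    graythresh-based segmentation used in this research."""
    s = np.asarray(score_map, dtype=float)
    s = (s - s.min()) / (s.max() - s.min() + 1e-12)   # rescale to [0, 1]
    return s > threshold_otsu(s)
```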
After threshold segmentation, it is necessary to visually compare the extraction map of each algorithm with the ground-truth map of the feature to be extracted (Figure 4b) and to evaluate the parameters through index calculation. It can be intuitively seen from Figure 4c–f that the method in this research has the best extraction effect.
In addition, we represent different categories in the contrast map by using four colors, where red is used to mark misclassified ship regions, green represents missed extraction regions, and black and gray represent seawater and land, respectively. This method not only enhances the interpretability of results but also facilitates the identification of errors and omissions at a glance.
As can be seen from Figure 4a, there are significant red, green, and blue (RGB) noise spots in the image, which may be caused by sensor abnormalities during image capture or by the special properties of spectral reflection. This widespread noise may be misidentified as a small ship or part of a ship. We can intuitively see that the DTMF method accurately extracts the shape and outline of the ship, while the other three methods all exhibit a certain degree of missed and false detections.
From Table 1, we can see that the DTMF method shows excellent performance on almost all evaluation metrics, especially in terms of overall accuracy (0.9988) and recall (0.9668), which shows that it is highly efficient at correctly identifying ship pixels. In addition, DTMF also exhibits a high Kappa coefficient (0.9768), indicating that its results are highly consistent and reliable. Although the precision of DTMF (0.9201) is slightly lower than that of CEM (0.9719), the recall of CEM is significantly lower (0.5836), indicating that although it can accurately identify ships, it misses a large number of actual ship targets. This low recall may limit its utility in practical applications.
Figure 5a shows several ships in relatively dark water. The ships are arranged in a rough diagonal, pointing from the upper left to the lower right. The largest ship is at the bottom of the array, with two smaller ships above it. While the ships themselves are more visible, there is still a lot of colored noise in the background. This noise visually resembles small boats or other objects, which can pose a challenge to pixel-based detection algorithms. Color distortion in the image creates a large amount of red and green noise, and this color distribution may affect the performance of the extraction methods.
As can be seen from Figure 5c–f, the DTMF method can effectively overcome the interference of background noise and extract the shape and contour of the hull, while the other three methods all suffer a certain degree of missed and false detections.
We can conclude from Table 2 that the DTMF method performs optimally in sample region 2. It achieved an OA of 0.9979, meaning it was able to correctly distinguish between ship and non-ship pixels in most cases. DTMF also performed best in terms of precision, reaching 0.9385, indicating that its recognition results for ships contain a high proportion of true positives. In terms of recall, DTMF leads with a result of 0.9582 and can identify most of the real ship pixels. At the same time, its AA and Kappa coefficient are 0.9784 and 0.9471, respectively, both showing high classification consistency and reliability.
Figure 6a depicts sample region 3, wherein the ship is moored alongside the pier. Due to the phenomenon of light reflection, the spectral characteristics of the ship are very similar to those of the pier. This results in the phenomenon of different objects having the same spectrum. Figure 6c–f illustrates the extraction effects of various methods. It can be observed that the DTMF method demonstrates the most effective background suppression, while also retaining the outline of the ship to the greatest extent. Table 3 shows that the DTMF method performs best in terms of OA (0.9989), Recall (0.9236), AA (0.9617), and KAPPA coefficient (0.9528). At the same time, the FA of the DTMF method is extremely low (0.0002) and the CT is the shortest (0.0141), indicating that it not only has an advantage in accuracy but also has high computational efficiency. Various indicators show that the DTMF method can accurately and efficiently extract ships while dealing with the complexity of the background and the similarity of spectral features in sample region 3, showing the best overall performance.

4.3.2. Imaging Height 150 m

In this sub-subsection, an experimental investigation is conducted on images captured at an altitude of 150 m.
Figure 7a presents a remote sensing image of a group of ships at night or under low-light conditions. The ships are distributed throughout the image with varying orientations and positions, some aligned in a straight line and others slightly off. There is also a large amount of color noise, likely generated by the light-sensitive elements of the remote sensing equipment under low-light conditions. As illustrated in Figure 7c–f, the DTMF method accurately detects the edges and contours of the ships, whereas the other three methods misidentify the edges. The clarity of ship edges is a crucial factor in remote sensing target detection, as it enables algorithms to correctly segment and identify targets. Table 4 indicates that the DTMF method once again demonstrates its superiority in extracting ships from remote sensing images: in the performance evaluation of sample region 4, it achieves the best value on every metric.
As illustrated in Figure 8a, sample region 5 exhibits luxuriant, verdant tree growth. The low color contrast between the boats and the surrounding seawater and forest may challenge threshold-based segmentation methods, and reflected light from the water surface may alter pixel values, further complicating the separation of boats from the reflective water. As illustrated in Figure 8c–f, the DTMF method delivers the best extraction results of the four, with only minor errors along the ship contours, whereas the remaining three methods mis-extract shoals and water bodies as ship hulls.
As shown in Table 5, the performance metrics for sample region 5 demonstrate that the DTMF method is significantly more effective than the other three. The DTMF method has an OA of 0.9964, accurately distinguishing ship from non-ship pixels in the majority of cases, and exhibits the highest precision at 0.8733. Although this precision is lower than DTMF's values in sample regions 1–4, it implies relatively few false positives among the pixels predicted as ship in sample region 5. With a recall of 0.9450, the DTMF method correctly identifies most true ship pixels; this metric is important because it reflects the method's efficacy in recovering genuine ship pixels. The Kappa coefficient of DTMF reaches 0.9059, considerably higher than those of the other methods. In contrast, MF, CEM, and ACE all perform worse, particularly in precision and Kappa coefficient.
Figure 9a depicts sample region 6, whose more complex background may impede ship extraction. The ships are situated close to the jetty, and the boundaries between them are not entirely clear. The complex rust and mottled colors on the hulls create challenges for spectral differentiation, and containers or cargo on the quayside may be spectrally similar to ships, further complicating extraction. The hulls of different ships also exhibit different spectral characteristics. The simultaneous presence of "same object, different spectra" and "same spectrum, different objects" makes the extraction task more challenging. Figure 9c–f presents the results of the different detection methods. The DTMF method overcomes both issues: it suppresses interfering background that shares the hull's spectral signature while still extracting all hulls despite their differing spectra. Its result is the closest to the real situation on the ground, while the other methods show multiple missed or false detections, with the CEM and ACE methods in particular producing a high number of false alarms.
As shown in Table 6, the DTMF method achieved the best overall performance in the ship extraction assessment for sample region 6. In particular, it reached an OA of 0.9946, underscoring its accuracy in the extraction task. Although the MF method has a slightly higher precision of 0.8482, the DTMF method also achieves a high precision of 0.8048. In recall, the CEM method leads with 0.8359, showing a superior ability to recover genuine ship pixels, while DTMF maintains a respectable 0.7017. In AA, the CEM method stands out with 0.9128, reflecting balanced detection across both classes, while DTMF achieves 0.8699. Most importantly, DTMF outperforms the other methods with a Kappa coefficient of 0.8268, demonstrating its advantage in classification consistency. While the other methods hold slight advantages on specific metrics, the overall performance of DTMF is superior in accuracy, reliability, and consistency.
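For reference, the Kappa coefficient reported throughout these tables is the standard two-class Cohen's kappa, which corrects the observed agreement $p_o$ (identical to OA here) for the agreement $p_e$ expected by chance:

$$\kappa = \frac{p_o - p_e}{1 - p_e}, \qquad p_e = \frac{(TP+FP)(TP+FN) + (FN+TN)(FP+TN)}{N^2},$$

where $N$ is the total number of pixels. Because kappa discounts the agreement that a trivial all-background classifier would obtain, a value of 0.8268 indicates strong agreement beyond chance even though the raw recall in this region is only 0.7017.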

4.3.3. HRSC2016

This sub-subsection employs the public dataset HRSC2016 to assess the generalization capacity of the proposed method. Four distinct scenes from HRSC2016 were selected to evaluate the extraction capabilities of the various methods. The visualization results in Figure 10 show that the proposed method extracts the target contours most accurately while avoiding erroneous extraction in all four groups of experiments. The quantitative results are presented in Table 7.
As shown in Table 7, the quantitative results reveal that the four methods (DTMF, MF, CEM, and ACE) exhibit varying efficacy across the four HRSC2016 tests. The DTMF method delivers the best performance in OA, F-score, Kappa coefficient, and computational efficiency, combining high precision, high recall, and rapid processing. Its overall accuracy exceeds 0.99 in all four tests, and its recall and F-score demonstrate resilience against complex backgrounds, making it well suited to real-time applications. Although the MF method achieves extremely high precision, up to 0.9997 in test 2, its recall is relatively low, indicating deficiencies in detecting all targets; at the same time, MF has the shortest computation time and is therefore well suited to time-sensitive scenarios. The CEM method shows excellent recall, approaching 1 in most tests, confirming its effectiveness at detecting objects, but its precision and overall performance fluctuate and its computational efficiency is moderate. The ACE method, despite a high recall, is limited in practice by low precision and lengthy computation; its precision and composite scores are the lowest of the four methods. In conclusion, the DTMF method performs well across all indicators, corroborating its capacity to generalize across different datasets.
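As a point of reference for the comparison methods, the sketch below gives the textbook constrained energy minimization (CEM) detector, whose score map, once thresholded, yields binary masks like those scored in Table 7. This is the standard formulation from the literature, not the authors' implementation, and the function name cem_scores is our own; DTMF differs by adding the time-dependent tuning of the matched filter described earlier in the paper.

```python
import numpy as np

def cem_scores(cube: np.ndarray, d: np.ndarray) -> np.ndarray:
    """Standard CEM detector: cube is an (H, W, B) image and d a (B,)
    target spectral signature; returns an (H, W) detection-score map."""
    h, w, b = cube.shape
    x = cube.reshape(-1, b).astype(np.float64)   # one pixel spectrum per row
    r = x.T @ x / x.shape[0]                     # sample correlation matrix R
    rinv_d = np.linalg.solve(r, d)               # R^{-1} d, avoiding an explicit inverse
    weights = rinv_d / (d @ rinv_d)              # w = R^{-1} d / (d^T R^{-1} d)
    return (x @ weights).reshape(h, w)           # filter output per pixel

# A binary ship mask then follows from a scene-dependent threshold, e.g.:
# mask = cem_scores(cube, d) > 0.5
```

The unit-gain constraint in the denominator keeps the response to the target signature fixed at 1 while the correlation-weighted numerator minimizes the energy passed from the background, which is why CEM tends toward high recall but fluctuating precision in these tests.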

4.4. Discussion

The efficacy of the DTMF method is demonstrated in experimental analyses across a range of scenarios. The DTMF method is effective in processing remote sensing images containing multiple targets and complex background interference, and is particularly notable for enhancing target signals, adapting dynamically to changing conditions, reducing background interference, optimizing resources, and ensuring system robustness. In extracting targets from UAV remote sensing images captured at varying altitudes, the DTMF method demonstrated superior performance, with OA above 0.99 and Kappa coefficients above 0.82, both higher than those of the three comparison methods. Although MF and CEM occasionally approach DTMF on individual indicators, DTMF consistently performs best across almost all indicators and sample regions. It also effectively suppresses the phenomena of different objects sharing the same spectrum and the same object exhibiting different spectra, verifying the stability of DTMF in ship detection across different scenarios. In the HRSC2016 extraction experiments, the OA of the DTMF method reached 0.99 and the Kappa coefficient reached 0.86, both higher than those of the three comparison methods, verifying the ability of the DTMF method to generalize across different datasets.

5. Conclusions

This research presents a DTMF method for extracting ships from UAV remote sensing images. In contrast to the MF, DTMF adapts to dynamic changes in the system, accurately tracking the target spectral vectors over time and providing sustained detection performance at different time points. Furthermore, to solve DTMF, this paper develops the AFRNN model from the perspective of the control domain. The AFRNN model introduces an adaptive feedback term into the gradient RNN and designs a special nonlinear projection function that adapts dynamically to changes in the input data or environment, eliminating the time-lag problem. This paper then presents rigorous theoretical proof that the AFRNN model for solving DTMF has high extraction accuracy and strong robustness, ensuring the extraction performance of DTMF in complex environments. Finally, the quantitative and visual results demonstrate that the DTMF method has clear advantages over other classical remote sensing target extraction methods in addressing the challenges of 'same object, different spectra' and 'same spectrum, different objects' in complex scenes. In the extraction experiments, the method achieved an overall accuracy (OA) above 0.99 and a Kappa coefficient above 0.82, produced clear extraction results, and effectively overcame missed and false detections. This evidence demonstrates the efficacy and superiority of the DTMF method for extracting ships from UAV remote sensing images.

Author Contributions

Conceptualization, S.D. and D.F.; methodology, S.D. and Y.S.; software, S.D., Y.S. and Y.L.; validation, D.F.; formal analysis, S.D. and Y.S.; investigation, D.F.; resources, S.D.; data curation, S.D. and Y.Z.; writing—original draft preparation, S.D. and D.F.; writing—review and editing, S.D. and Y.S.; visualization, S.D. and Y.L.; supervision, S.D. and D.F.; project administration, D.F.; funding acquisition, D.F. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded in part by the National Key Research and Development Program of China under Grant 2022YFC3103101; the Key Special Project for Introduced Talents Team of Southern Marine Science and Engineering Guangdong Laboratory under Contract GML2021GD0809; the National Natural Science Foundation of China under Contract 42206187; and the Key Projects of the Guangdong Education Department under Grant 2023ZDZX4009.

Data Availability Statement

Data are contained within the article.

Conflicts of Interest

The authors declare no conflicts of interest.

Figure 1. Technical line.
Figure 2. Logical block diagram of the AFRNN model.
Figure 3. Research region.
Figure 4. Results of different methods for ship extraction in sample region 1 of UAV remote sensing image.
Figure 5. Results of different methods for ship extraction in sample region 2 of UAV remote sensing image.
Figure 6. Results of different methods for ship extraction in sample region 3 of UAV remote sensing image.
Figure 7. Results of different methods for ship extraction in sample region 4 of UAV remote sensing image.
Figure 8. Results of different methods for ship extraction in sample region 5 of UAV remote sensing image.
Figure 9. Results of different methods for ship extraction in sample region 6 of UAV remote sensing image.
Figure 10. Visualization of the extraction effects of the different methods in four scenarios from HRSC2016. (a–e) test 1, (f–j) test 2, (k–o) test 3, and (p–t) test 4; within each group, the original image is shown first, followed in order by the DTMF, MF, CEM, and ACE extraction results.
Table 1. Region 1 extraction accuracy evaluation results.

Sample Region 1
Method  OA      Precision  Recall  AA      FA      F-Score  Kappa   CT
DTMF    0.9988  0.9201     0.9668  0.9829  0.0010  0.9427   0.9768  0.0449
MF      0.9982  0.8904     0.9480  0.9437  0.0015  0.9069   0.9174  0.0187
CEM     0.9955  0.9719     0.5836  0.7917  0.0002  0.7778   0.7271  0.0509
ACE     0.9973  0.9700     0.7701  0.8849  0.0012  0.9360   0.8573  8.0028
Table 2. Region 2 extraction accuracy evaluation results.

Sample Region 2
Method  OA      Precision  Recall  AA      FA      F-Score  Kappa   CT
DTMF    0.9979  0.9385     0.9582  0.9784  0.0154  0.9474   0.9471  0.0184
MF      0.9934  0.9045     0.7576  0.8780  0.0186  0.8245   0.8212  0.0078
CEM     0.9910  0.7489     0.8412  0.9177  0.0643  0.7922   0.7878  0.0196
ACE     0.9085  0.6680     0.8660  0.9285  0.1049  0.7543   0.7484  3.0190
Table 3. Region 3 extraction accuracy evaluation results.

Sample Region 3
Method  OA      Precision  Recall  AA      FA      F-Score  Kappa   CT
DTMF    0.9989  0.9852     0.9236  0.9617  0.0002  0.9534   0.9528  0.0141
MF      0.9963  1.0000     0.6866  0.8433  0.0000  0.8142   0.8123  0.0351
CEM     0.9981  0.9253     0.9170  0.9581  0.0009  0.9212   0.9202  0.0385
ACE     0.9978  0.9030     0.9144  0.9566  0.0012  0.9087   0.9076  6.1368
Table 4. Region 4 extraction accuracy evaluation results.

Sample Region 4
Method  OA      Precision  Recall  AA      FA      F-Score  Kappa   CT
DTMF    0.9991  0.9889     0.9750  0.9875  0.0119  0.9821   0.9768  0.0134
MF      0.9970  0.9526     0.9679  0.9830  0.0275  0.9602   0.9586  0.0058
CEM     0.9972  0.9579     0.9669  0.9826  0.0106  0.9621   0.9609  0.0142
ACE     0.9692  0.8901     0.9634  0.9794  0.0299  0.9258   0.9223  2.0782
Table 5. Region 5 extraction accuracy evaluation results.

Sample Region 5
Method  OA      Precision  Recall  AA      FA      F-Score  Kappa   CT
DTMF    0.9964  0.8733     0.9450  0.9712  0.0328  0.9082   0.9059  0.0375
MF      0.9861  0.5966     0.8221  0.9057  0.1193  0.6916   0.6845  0.0375
CEM     0.9801  0.4794     0.6328  0.8098  0.1388  0.5456   0.5355  0.0999
ACE     0.9765  0.4082     0.5349  0.7600  0.1513  0.4631   0.4512  16.747
Table 6. Region 6 extraction accuracy evaluation results.

Sample Region 6
Method  OA      Precision  Recall  AA      FA      F-Score  Kappa   CT
DTMF    0.9946  0.8048     0.7017  0.8699  0.0382  0.7492   0.8268  0.0098
MF      0.9943  0.8482     0.6166  0.8077  0.0247  0.7141   0.7670  0.0040
CEM     0.9880  0.6896     0.8359  0.9128  0.0840  0.7559   0.5617  0.0101
ACE     0.9522  0.0818     0.3380  0.6487  0.4617  0.1316   0.1246  1.4815
Table 7. HRSC2016 extraction experiment quantitative results.

Method  OA      Precision  Recall  AA      FA      F-Score  Kappa   CT
test 1
DTMF    0.9995  0.9516     0.9330  0.9664  0.0215  0.9422   0.9419  0.0248
MF      0.9988  0.9712     0.7448  0.8724  0.0001  0.8431   0.8425  0.0097
CEM     0.9990  0.8349     0.9687  0.9839  0.0008  0.9609   0.8964  0.0194
ACE     0.9583  0.0966     0.9905  0.9743  0.0418  0.1760   0.1692  6.4374
test 2
DTMF    0.9986  0.8940     0.9935  0.9961  0.0013  0.9411   0.9404  0.0256
MF      0.9972  0.9997     0.7437  0.8719  0.0000  0.8529   0.8515  0.0107
CEM     0.9981  0.8873     0.9515  0.9751  0.0013  0.9183   0.9173  0.0215
ACE     0.9755  0.3062     0.9780  0.9767  0.0246  0.4663   0.4573  7.3774
test 3
DTMF    0.9929  0.9920     0.7661  0.8830  0.0002  0.8645   0.8610  0.0210
MF      0.9665  0.4090     0.9722  0.9693  0.0336  0.5757   0.5613  0.0088
CEM     0.9808  0.5501     0.9842  0.9825  0.0193  0.7057   0.6966  0.0176
ACE     0.7636  0.0874     0.9656  0.8621  0.2412  0.1603   0.1227  6.2302
test 4
DTMF    0.9936  0.9403     0.9647  0.9802  0.0044  0.9523   0.9489  0.0265
MF      0.9857  0.9586     0.8202  0.9088  0.0025  0.8840   0.8764  0.0110
CEM     0.9817  0.8072     0.9527  0.9683  0.0162  0.8740   0.8642  0.0223
ACE     0.8914  0.3656     0.8610  0.8773  0.1063  0.5133   0.4632  7.4502