Article

Fast Line Segment Detection and Large Scene Airport Detection for PolSAR

Daochang Wang, Qi Liu, Qiang Yin and Fei Ma

1 College of Information Science and Technology, Beijing University of Chemical Technology, Beijing 100029, China
2 Research Center on Flood & Drought Disaster Reduction of the Ministry of Water Resources, China Institute of Water Resources and Hydropower Research, Beijing 100038, China
* Author to whom correspondence should be addressed.
Remote Sens. 2022, 14(22), 5842; https://doi.org/10.3390/rs14225842
Submission received: 7 September 2022 / Revised: 9 November 2022 / Accepted: 13 November 2022 / Published: 18 November 2022

Abstract

In this paper, we propose a fast Line Segment Detection algorithm for Polarimetric synthetic aperture radar (PolSAR) data (PLSD). We introduce the Constant False Alarm Rate (CFAR) edge detector to obtain the gradient map of the PolSAR image, which tests the equality of covariance matrices using a test statistic based on the complex Wishart distribution. A new filter configuration is applied to reduce computation time. Then, the Statistical Region Merging (SRM) framework is utilized for the generation of line-support regions. As one of our main contributions, we propose a new Statistical Region Merging algorithm based on gradient Strength and Direction (SRMSD). It determines the merging predicate with consideration of both gradient strength and gradient direction. The merging order is set by bucket sorting the pixels according to gradient strength. Furthermore, each pixel is restricted to belong to a unique region, making the algorithm linear in time cost. Finally, based on Markov chains and the a contrario approach, false alarm control of the line segments is implemented. Moreover, a large scene airport detection method is designed based on the proposed line segment detection algorithm and scattering characteristics. The effectiveness and applicability of the two methods are demonstrated with PolSAR data provided by UAVSAR.

1. Introduction

Detection of line segments in images, as a basic processing function, plays a crucial role in various applications. Line segments can describe a variety of objects, such as roads, rivers, airports, and more [1]. More importantly, when the quality of the image decreases, line segments are more stable and clear than edge features [2]. Therefore, line segment detection has been successfully applied to image registration [3,4], target detection [5,6], target recognition [7], change detection [8,9], etc.
The polarimetric synthetic aperture radar (PolSAR) is an advanced imaging radar system which measures the reflectivity of targets using four polarizations [10]. The additional information in PolSAR can be utilized by target detection algorithms to improve performance; for instance, Li et al. [11] implemented marine oil slick detection using polarimetric decomposition components and descriptors, and He et al. [12] used multi-channel scattering information for ship detection of PolSAR data. Hence, image interpretation by PolSAR data has received increasing attention. This paper aims to design an algorithm for detecting line segments of PolSAR images, providing basic characteristics for advanced image interpretation tasks.
Recently, many approaches have been developed for line segment detection. One of the well-known methods is the Hough Transform (HT), proposed by Duda and Hart [13], which has attracted much attention. This method constructs the parameter space using edge pixels and performs peak detection within the parameter space to determine the position of line segments. Based on the concept of HT, a large number of line segment detection algorithms have been proposed [14,15,16]. However, such methods perform line segment detection using a binary edge map, discarding the strength and direction information of the gradient, which makes them more susceptible to interference from noise. To enable such methods to be applied to SAR images with strong speckle [17,18], Wei et al. [19] proposed the Image Edge Field Accumulation (IEFA) straight-line detection method, utilizing both edge strength and direction information for peak detection to detect line segments. Subsequently, Wei et al. [2] used the Fourier transform and the central slice theorem to transform the image from the space domain to the frequency domain, which accelerates the straight-line extraction process. In addition, Xiong et al. [20] proposed to decompose the edge information in parameter space into horizontal and vertical components, stripping off part of the random edge information, which reduces the interference of pseudo-peaks with the line segment detection results.
Other typical line segment detection methods are referred to as non-HT methods. Non-HT methods use local information to detect line segments. The Line Segment Detector (LSD) [21] and Edge Drawing Line Segments Detection (EDLines) [22] methods are representative algorithms. They group the pixels by the gradient direction and validate each group of pixels by the Helmholtz principle [23] and a contrario approach. By introducing the knowledge of undirected graphs, Cho et al. [24] designed a Linelet-based line segment detection (Linelet-LSD) method. Linelets are defined as horizontally or vertically connected pixels. Linelets and the relationships between them form an undirected graph, which can help to achieve line segment detection. Moreover, with the development of deep learning [25,26], Li et al. [27] implemented the detection of line segments using a deep learning-based model. For the characteristics of SAR images, Liu et al. [28] modified the LSD, obtaining Line Segment Detector for SAR images (LSDSAR).
Line segment detection for optical and SAR images has been thoroughly studied. However, due to the multiplicative noise and multi-polarization information of PolSAR data, the gradient calculation methods used for optical and SAR data cannot obtain accurate gradient results from PolSAR data. Furthermore, there are fewer studies on line segment detection for PolSAR data. An important contribution was made by Borghys et al. [29], who proposed a scheme for detecting linear features in PolSAR images using Hotelling's test. Based on this, Jin et al. [30] proposed a linear feature detection method for PolSAR images by replacing Hotelling's test with Wilks' test. In [31], a fuzzy linear feature detector was constructed from the Wishart likelihood ratio test statistic. Nevertheless, all the above algorithms detect many curves in PolSAR data along with the line segments. Moreover, the width of the obtained linear regions is large, which makes it difficult to describe the line segments accurately. To address the above problems, we propose a PolSAR image Line Segment Detection method (PLSD) which concentrates on detecting the line segments in PolSAR images. PLSD can be divided into three stages. In the first stage, the gradient map of the PolSAR image is acquired by the improved Constant False Alarm Rate (CFAR) edge detector [32]. Then, Statistical Region Merging based on gradient Strength and Direction (SRMSD) is applied to obtain line-support regions. Finally, to validate the line-support regions, an extension of the a contrario approach is utilized, which is based on the Helmholtz principle [23] and Markov chains. In addition, based on PLSD, we propose a large scene airport detection method for PolSAR images. The experimental results validate the effectiveness of the two methods and demonstrate the potential of line segments for application in PolSAR images.
In all, the main contributions of this paper are as follows:
  • The literature on line segment detection is surveyed and the importance of line segments in the image interpretation task is determined. Then, we propose a new fast line segment detection algorithm, PLSD, to detect line segments in PolSAR images. It can detect line segments in linear time.
  • Based on PLSD and the scattering characteristics of airports, we propose an airport detection method for large scenes.
  • The results of the above two detection methods and state-of-the-art (SOTA) methods are compared on several PolSAR images, demonstrating the superiority of the proposed methods.
The rest of this paper is organized as follows. Section 2 describes the design details of the PLSD and the airport detection method. Experimental results are presented and analyzed in Section 3. Section 4 discusses the parameter settings and algorithm complexity of the line segment detector. Finally, Section 5 draws conclusions.

2. Method

This section introduces our proposed methods, including the line segment detector and the airport detection method. First, the line segment detector is described. Specifically, the edge detector with the covariance matrix is described first, followed by SRMSD, then the line segment validation method is presented. Finally, the complete PLSD algorithm is provided and the airport detection algorithm based on the line segment detector is introduced.

2.1. Line Segment Detection for PolSAR (PLSD)

To obtain line segments from PolSAR images, PLSD is divided into three steps: (1) obtaining the gradient strength and gradient direction of each point in PolSAR images by improved edge detector with covariance matrix; (2) performing SRMSD to acquire line-support regions; and (3) utilizing the Helmholtz principle [23] and Markov chains, the line-support region is validated, removing the false alarms. The flowchart of the PLSD is illustrated in Figure 1.

2.1.1. Edge Detector with Covariance Matrix

PolSAR measures the backscattering coefficient of the target using multiple combinations of transmitting and receiving antenna polarizations. In the case of a full PolSAR, each resolution unit is described by a 2 × 2 complex scattering matrix S, as shown in Equation (1):
$$ S = \begin{bmatrix} S_{hh} & S_{hv} \\ S_{vh} & S_{vv} \end{bmatrix} \tag{1} $$
where the elements $S_{ij}$ in the scattering matrix represent the complex scattering coefficients, with $i$ denoting the transmitting polarization and $j$ denoting the receiving polarization.
Under the monostatic backscattering case, the reciprocity theorem holds, i.e., $S_{hv} = S_{vh}$; the scattering matrix becomes symmetric, and can be reduced to a three-dimensional single-look scattering vector $\Omega = \left[ S_{hh}, \ \sqrt{2}\,S_{hv}, \ S_{vv} \right]^{T}$, with $T$ being the transpose. We discuss this backscattering case in the following part of this paper.
For the purpose of speckle reduction, multi-look processing is usually performed on PolSAR data. For multi-look PolSAR data, each pixel can be represented by a covariance matrix C, which is defined as
$$ C = \frac{1}{L} \sum_{i=1}^{L} \Omega_i \, \Omega_i^{T} \tag{2} $$

where the superscript $T$ here denotes the conjugate transpose, $L$ represents the number of looks, and $\Omega_i$ is the scattering vector to be averaged. The covariance matrix C follows a complex Wishart distribution, i.e., $C \sim W_C(q, L, \Sigma)$, with $\Sigma = E\left[ \Omega \, \Omega^{T} \right]$ and with $q$ representing the dimension of the scattering vector. For a fully polarimetric SAR under the monostatic backscattering case with reciprocity, $q = 3$. This can be described by the following probability density function:

$$ P\left( C \mid \Sigma \right) = \frac{L^{qL} \, |C|^{L-q} \, \exp\left( -L \, \mathrm{Tr}\left( \Sigma^{-1} C \right) \right)}{K(L, q) \, |\Sigma|^{L}} \tag{3} $$

$$ K(L, q) = \pi^{q(q-1)/2} \prod_{j=1}^{q} \Gamma(L - j + 1) \tag{4} $$

where $\Gamma$ is the Gamma function, while $\mathrm{Tr}(\cdot)$ and $|\cdot|$ denote the trace and the determinant, respectively.
Suppose $C_x$ and $C_y$ are independent and both follow a complex Wishart distribution, i.e., $C_x \sim W_C(q, L, \Sigma_x)$ and $C_y \sim W_C(q, L, \Sigma_y)$. Their equality can then be tested by the Wishart likelihood ratio test (LRT) statistic Q [33], which is given by

$$ Q = 2^{2qL} \, \frac{|C_x|^{L} \, |C_y|^{L}}{|C_x + C_y|^{2L}} \tag{5} $$

To simplify the calculation, the logarithm is applied to Q according to [33]:

$$ \ln Q = L \left( 2q \ln 2 + \ln|C_x| + \ln|C_y| - 2 \ln|C_x + C_y| \right) \tag{6} $$

The value of $\ln Q$ is bounded within $(-\infty, 0]$, with $\ln Q = 0$ for $C_x = C_y$.
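For illustration only, the test statistic of Equation (6) can be written as a short NumPy function; the function name and the use of slogdet for the log-determinants are our own choices, not part of the original implementation.

```python
import numpy as np

def wishart_lrt_lnq(Cx, Cy, L, q=3):
    """Wishart likelihood-ratio test statistic ln Q of Equation (6).

    Cx, Cy : (q, q) complex sample covariance matrices of the two regions.
    L      : equivalent number of looks.
    Returns a value in (-inf, 0]; 0 means the two matrices are identical.
    """
    # slogdet is used for numerical stability; only the log of |.| is needed
    _, ld_x = np.linalg.slogdet(Cx)
    _, ld_y = np.linalg.slogdet(Cy)
    _, ld_xy = np.linalg.slogdet(Cx + Cy)
    return L * (2.0 * q * np.log(2.0) + ld_x + ld_y - 2.0 * ld_xy)

# Toy check: identical matrices give ln Q = 0 (up to rounding)
C = np.eye(3, dtype=complex)
print(wishart_lrt_lnq(C, C, L=4))  # ~0.0
```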
Based on Equation (6), the polarimetric CFAR edge detector [32] was proposed. It visits each pixel in the image, applying a set of filters with different orientations to each pixel. Each filter estimates the average covariance matrix on both sides of the center pixel and then calculates the Wishart likelihood ratio test statistic as a measure of the probability that the pixel is an edge pixel. Usually, to achieve better edge detection, eight filter orientations are evaluated at each pixel of the image, which increases the computational complexity of edge detection. We designed the filter setup shown in Figure 2 according to the approach in [21], which reduces the computational complexity while maintaining good accuracy. To be specific, with the LRT statistic $\ln Q$ calculated as in Equation (6) for the left and right filter configurations shown in Figure 2a, we obtain the horizontal gradient $G_h(x, y)$ of the center pixel. The size of the filter window is set by a speckle suppression parameter $\rho$, i.e., $w_f = \log(10\rho)$ and $l_f = 2 w_f + 1$. The gradient $G_v(x, y)$ along the vertical direction can be computed in the same way, as shown in Figure 2b. The gradient strength $|G_R(x, y)|$ and direction $\mathrm{ang}\left(G_R(x, y)\right)$ at a position $(x, y)$ are defined as
$$ |G_R(x, y)| = \sqrt{G_h(x, y)^2 + G_v(x, y)^2} \tag{7} $$

$$ \mathrm{ang}\left( G_R(x, y) \right) = \arctan\left( \frac{G_v(x, y)}{G_h(x, y)} \right) \tag{8} $$
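As a rough sketch of how Equations (6)–(8) combine into a gradient map, the following code averages the per-pixel covariance matrices in rectangular windows on either side of the center pixel and reuses the wishart_lrt_lnq function above. The window handling, the sign convention for the two responses, and the boundary treatment are our own simplifications; a faithful implementation would follow the filter configuration of Figure 2 exactly.

```python
import numpy as np

def cfar_gradient(C, L, wf=3):
    """Sketch of the filter-based gradient computation (Equations (6)-(8)).

    C  : (H, W, 3, 3) complex array of per-pixel covariance matrices.
    L  : equivalent number of looks.
    wf : half-width of the filter window (the window length is 2 * wf + 1).
    Returns gradient strength and direction maps; borders are left at zero.
    """
    H, W = C.shape[:2]
    strength = np.zeros((H, W))
    direction = np.zeros((H, W))
    for y in range(wf, H - wf):
        for x in range(wf, W - wf):
            # average covariance to the left/right of the center pixel (cf. Figure 2a)
            left = C[y - wf:y + wf + 1, x - wf:x].mean(axis=(0, 1))
            right = C[y - wf:y + wf + 1, x + 1:x + wf + 1].mean(axis=(0, 1))
            gh = -wishart_lrt_lnq(left, right, L)    # -ln Q >= 0: larger = stronger edge
            # average covariance above/below the center pixel (cf. Figure 2b)
            up = C[y - wf:y, x - wf:x + wf + 1].mean(axis=(0, 1))
            down = C[y + 1:y + wf + 1, x - wf:x + wf + 1].mean(axis=(0, 1))
            gv = -wishart_lrt_lnq(up, down, L)
            # assumed sign convention so that the direction covers the full circle:
            # positive if the right (resp. lower) side has the larger determinant
            if np.linalg.slogdet(right)[1] < np.linalg.slogdet(left)[1]:
                gh = -gh
            if np.linalg.slogdet(down)[1] < np.linalg.slogdet(up)[1]:
                gv = -gv
            strength[y, x] = np.hypot(gh, gv)        # Equation (7)
            direction[y, x] = np.arctan2(gv, gh)     # Equation (8)
    return strength, direction
```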

2.1.2. Statistical Region Merging Based on Gradient Strength and Direction

In optical images, the gradient map is obtained by performing a difference operation on adjacent pixels. Because the gradient at each point is only affected by its adjacent pixels, the region affected by a pixel mutation is small, i.e., the width of the region having the same gradient direction is small. Therefore, grouping adjacent pixels with similar gradient directions provides accurate line-support regions.
Nevertheless, this approach is not applicable to PolSAR images. In [28], LSDSAR replaced the finite-differences method with a ratio-based calculation method in order to obtain a more accurate gradient map. However, LSDSAR does not consider the problem that the gradient maps obtained by the ratio-based calculation method have a large width at the pixel mutations. The same problem appears in the gradient maps obtained by the CFAR edge detector. Therefore, a line-support region generation algorithm based on the gradient direction alone increases the width of the line-support region and may even change its direction, decreasing the detection accuracy of the line segments. In the proposed method, we consider both gradient strength and gradient direction to mitigate this effect. As shown in Figure 3c, the closer a pixel is to the pixel mutation, the higher its gradient strength. Hence, the utilization of gradient strength can effectively distinguish edge points from interference points. In addition, the average attributes of a region are more representative than the attributes of the pixels at its edges; thus, the Statistical Region Merging (SRM) algorithm is used. Overall, we propose a line-support region generation algorithm called SRMSD that is more suitable for PolSAR data.
Meanwhile, the selection of seed pixels for statistical region merging has an important influence on the final result. Pixels with higher gradient strength are more likely to belong to line segments, so they are selected as seed pixels first. In addition, the gradient direction of a seed pixel is assumed to be similar to the direction of the line segment. Therefore, the gradient direction of each tested pixel is compared with the gradient direction of the seed pixel, preventing line segments from appearing in regions with slowly changing gradient direction, i.e., the scene shown in Figure 4a. The effectiveness of this measure is demonstrated in Figure 4b,c.
Figure 5 illustrates the process of SRMSD. Specifically, a pixel is first selected as the seed pixel, as shown in Figure 5a. The gradient direction of the seed pixel is $\alpha_0$, and the initial region angle is set to $\alpha_0$ as well. Then, the pixels adjacent to the line-support region are tested, and each is added to the region if it satisfies the following conditions:
(1)
Gradient strength equals the average region strength up to a strength tolerance η;
(2)
Gradient direction equals the region angle up to an angle tolerance μ;
(3)
Gradient direction equals the initial region angle $\alpha_0$ up to an angle tolerance μ.
At each iteration, the region angle α is updated to
$$ \alpha = \arctan\left( \frac{\sum_{i=1}^{n} \sin\left( ang_i \right)}{\sum_{i=1}^{n} \cos\left( ang_i \right)} \right) \tag{9} $$
where n is the number of pixels in the line-support region. The process is repeated until none of the adjacent pixels satisfies the join condition. The final result of the SRMSD is obtained as shown in Figure 5c.
Moreover, Algorithm 1 provides the implementation details of SRMSD according to [28]. It is worth noting that we specify that a pixel can only be added to a unique line-support region, making the SRMSD greedy and linear in time.
Algorithm 1 SRMSD
Require: A gradient strength image G_s; a gradient direction image G_d; a starting pixel (x, y); an angle threshold μ; a strength threshold η; an image Status that records whether pixels have been added to a region.
Ensure: Line-support region: region.
1: region ← (x, y)
2: α_region ← G_d(x, y)
3: S_x ← cos(α_region)
4: S_y ← sin(α_region)
5: for pixel P(x_r, y_r) in region do
6:   for P_a(x_a, y_a) adjacent to P and Status(x_a, y_a) = Not Used do
7:     if Diff(G_d(x_a, y_a), α_region) < μ and Diff(G_d(x_a, y_a), G_d(x, y)) < μ and Diff(G_s(x_a, y_a), G_s(x_r, y_r)) < η then
8:       Add P_a to region
9:       Status(x_a, y_a) ← Used
10:      S_x ← S_x + cos(G_d(x_a, y_a))
11:      S_y ← S_y + sin(G_d(x_a, y_a))
12:      α_region ← arctan(S_y / S_x)
13:    end if
14:  end for
15: end for
16: return region
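For readers who prefer executable code, the following Python sketch mirrors Algorithm 1; the breadth-first traversal, the 8-connected neighborhood, and the angle_diff helper are our own choices and only approximate the pseudocode above.

```python
import numpy as np

def angle_diff(a, b):
    """Smallest absolute difference between two angles (radians)."""
    d = np.abs(a - b) % (2 * np.pi)
    return min(d, 2 * np.pi - d)

def srmsd(Gs, Gd, seed, mu, eta, status):
    """Sketch of Algorithm 1 (SRMSD): grow one line-support region from a seed.

    Gs, Gd : gradient strength / direction maps.
    seed   : (x, y) seed pixel, assumed to have a high gradient strength.
    mu     : angle tolerance (radians); eta : strength tolerance.
    status : boolean map, True where a pixel already belongs to some region.
    """
    H, W = Gs.shape
    region = [seed]
    status[seed] = True
    sx, sy = np.cos(Gd[seed]), np.sin(Gd[seed])
    alpha = Gd[seed]                      # current region angle
    i = 0
    while i < len(region):                # each pixel is visited once: linear time
        xr, yr = region[i]
        for xa, ya in ((xr + 1, yr), (xr - 1, yr), (xr, yr + 1), (xr, yr - 1),
                       (xr + 1, yr + 1), (xr - 1, yr - 1),
                       (xr + 1, yr - 1), (xr - 1, yr + 1)):
            if 0 <= xa < H and 0 <= ya < W and not status[xa, ya]:
                if (angle_diff(Gd[xa, ya], alpha) < mu and
                        angle_diff(Gd[xa, ya], Gd[seed]) < mu and
                        abs(Gs[xa, ya] - Gs[xr, yr]) < eta):
                    region.append((xa, ya))
                    status[xa, ya] = True          # a pixel joins only one region
                    sx += np.cos(Gd[xa, ya])
                    sy += np.sin(Gd[xa, ya])
                    alpha = np.arctan2(sy, sx)     # Equation (9)
        i += 1
    return region
```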

2.1.3. Line Segment Validation

Following the SRMSD, further validation is required for each line-support region. A line-support region is composed of a set of pixels, while a line segment is represented by a rectangle [34]. To unify them, a rectangle is used to approximate each line-support region. The rectangle is described by four parameters, i.e., its length, width, angle, and center, as shown in Figure 6. The length and width of the rectangle are defined as the length and width of the minimum bounding rectangle of the line-support region. With the gradient strength of each pixel taken as its mass, the center of the rectangle is defined as the center of mass [35], and the first inertia axis is used to determine the rectangle's angle.
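As an informal sketch of this rectangle approximation, the center of mass and the first inertia axis can be computed from the strength-weighted first and second moments of the region; the function name and the eigen-decomposition route are our own choices.

```python
import numpy as np

def region_to_rectangle(region, Gs):
    """Sketch of the rectangle approximation: center of mass and first inertia axis.

    region : list of (x, y) pixel coordinates of a line-support region.
    Gs     : gradient strength map (pixel strength used as its mass [35]).
    Returns (center_x, center_y, angle) of the approximating rectangle.
    """
    pts = np.asarray(region, dtype=float)
    w = np.array([Gs[int(x), int(y)] for x, y in region])
    cx, cy = (w[:, None] * pts).sum(axis=0) / w.sum()      # center of mass
    dx, dy = pts[:, 0] - cx, pts[:, 1] - cy
    # strength-weighted second-moment (inertia) matrix of the region
    mxx = (w * dx * dx).sum() / w.sum()
    myy = (w * dy * dy).sum() / w.sum()
    mxy = (w * dx * dy).sum() / w.sum()
    eigvals, eigvecs = np.linalg.eigh(np.array([[mxx, mxy], [mxy, myy]]))
    main_axis = eigvecs[:, np.argmax(eigvals)]             # first inertia axis
    angle = np.arctan2(main_axis[1], main_axis[0])
    return cx, cy, angle
```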
The validation of line segments using the a contrario approach is already present in several methods, such as LSD, EDLines, etc. In the a contrario approach, the detection of line segments is treated as a simplified hypothesis testing problem. It is based on the Helmholtz principle [23], i.e., no meaningful structure should happen by chance in a random configuration. A line segment, as a structure with perceptual significance, has obvious anisotropy; thus, it should not appear in the background model. The background model, as Desolneux et al. [36] state, is simply one in which all gradient directions are independent and uniformly distributed. Such a background model has good isotropy; therefore, line segments can be judged as outliers in the background model.
However, the filter-based gradient calculation method requires local averaging, resulting in gradient directions that do not satisfy the assumption of independent distribution. Fortunately, Myaskouvskey et al. [37] extended the a contrario approach by introducing Markov chains, allowing the background model to be modelled in the presence of low-order dependent events. Here, we denote the extended background model by $H_0$. A random image $I_0$, defined on the grid $\Psi = [1, N] \times [1, M]$ and obeying the $H_0$ model, should satisfy the following:
(1)
$\forall m \in \Psi$, $\mathrm{Angle}(I_0(m))$ is uniformly distributed over $[0, 2\pi)$;
(2)
The family $\{\mathrm{Angle}(I_0(m))\}_{m \in \Psi}$ follows a Markov chain of order one.
Prior to validation by the a contrario approach, the aligned pixels within the rectangle, i.e., the pixels whose gradient direction is approximately orthogonal to the rectangle's direction, need to be counted. A rectangle is considered a valid line-support region if the percentage of aligned pixels in the rectangle exceeds the density threshold D. Otherwise, we let μ = μ/2 and perform SRMSD again.
For a rectangle r with m pixels $(X_1, \ldots, X_m)$, the joint distribution of the pixels is described by a Markov chain of order one; that is, we specify that for all $1 < i \leq m$,

$$ P\left( X_i = x_i \mid X_{i-1} = x_{i-1}, \ldots, X_1 = x_1 \right) = P\left( X_i = x_i \mid X_{i-1} = x_{i-1} \right) \tag{10} $$

Therefore, the distribution of aligned pixels is characterized by $P(X_1 = x_1 \mid X_0 = x_0)$ for $x_1, x_0 \in \{0, 1\}$.
Suppose that r contains n(r) pixels and k aligned pixels, and that the number of aligned pixels of rectangle r in $I_0$ is $k_0$. The Number of False Alarms (NFA) [37] of r is denoted as

$$ \mathrm{NFA}(r) = N_R \times P_{H_0}\left( k_0 \geq k \right) \tag{11} $$

where $N_R$ denotes the number of rectangles in I, which can be approximated as $5 (M \times N)^{5/2}$ in an image of size $M \times N$ [28]. As $I_0$ follows the Markov chain assumption, $P_{H_0}(k_0 \geq k)$ can be found by

$$ P_{H_0}\left( k_0 \geq k \right) = \sum_{x_1 + \cdots + x_{n(r)} \geq k} P\left( X_1 = x_1 \right) \prod_{t=2}^{n(r)} P\left( X_t = x_t \mid X_{t-1} = x_{t-1} \right) \tag{12} $$
A straightforward computation of Equation (12) results in a heavy computational burden. Therefore, a dynamic programming algorithm [38] is employed to achieve more efficient computation. Letting $Y_t = \sum_{j=t}^{n} X_j$, it can be observed that for $t < n - 1$ we have

$$ P\left( Y_t \geq k \right) = P\left( Y_{t+1} \geq k \mid X_t = 0 \right) P\left( X_t = 0 \right) + P\left( Y_{t+1} \geq k - 1 \mid X_t = 1 \right) P\left( X_t = 1 \right) \tag{13} $$

In addition, for $x \in \{0, 1\}$ and $k \geq 1$,

$$ \begin{aligned} P\left( Y_{t+1} \geq k \mid X_t = x \right) &= \sum_{y \in \{0, 1\}} P\left( Y_{t+2} \geq k - y,\ X_{t+1} = y \mid X_t = x \right) \\ &= P\left( Y_{t+2} \geq k \mid X_{t+1} = 0 \right) P\left( X_{t+1} = 0 \mid X_t = x \right) + P\left( Y_{t+2} \geq k - 1 \mid X_{t+1} = 1 \right) P\left( X_{t+1} = 1 \mid X_t = x \right) \end{aligned} \tag{14} $$

and $P\left( Y_n \geq k \mid X_{n-1} = x \right)$ is provided by

$$ P\left( Y_n \geq k \mid X_{n-1} = x \right) = \begin{cases} 1, & k = 0, \\ P(1 \mid x), & k = 1, \\ 0, & \text{otherwise.} \end{cases} \tag{15} $$
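To make the recursion concrete, the following Python sketch evaluates the tail probability of Equations (13)–(15) for a two-state, order-one Markov chain; the homogeneity of the chain, the argument names, and the marginal p1 are assumptions on our part.

```python
import numpy as np

def tail_probability(n, k, P, p1):
    """Sketch of Equations (13)-(15): P(X_1 + ... + X_n >= k) for a 2-state,
    order-one Markov chain with transition matrix P[a][b] = P(next=b | cur=a)
    and marginal p1 = P(X_t = 1).  Used to evaluate P_H0(k0 >= k)."""
    # table[x][j] = P(Y_{t+1} >= j | X_t = x), initialized at t = n - 1 (Eq. (15))
    table = np.zeros((2, k + 1))
    for x in (0, 1):
        table[x, 0] = 1.0
        if k >= 1:
            table[x, 1] = P[x][1]
    # backward recursion of Equation (14), from t = n - 2 down to t = 1
    for _ in range(n - 2):
        new = np.zeros_like(table)
        for x in (0, 1):
            new[x, 0] = 1.0
            for j in range(1, k + 1):
                new[x, j] = table[0, j] * P[x][0] + table[1, j - 1] * P[x][1]
        table = new
    # final combination of Equation (13), using the marginal of X_1
    if k == 0:
        return 1.0
    return table[0, k] * (1.0 - p1) + table[1, k - 1] * p1

# Toy usage: independent pixels with P(aligned) = 1/8 recover a binomial tail
P = [[7/8, 1/8], [7/8, 1/8]]
print(tail_probability(n=20, k=3, P=P, p1=1/8))
```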

2.1.4. The Complete PLSD Algorithm

The complete PLSD algorithm is shown in Algorithm 2, with reference to [21,28]. The Status map ensures that each pixel is assigned to a single line-support region within SRMSD, which is the key to the algorithm completing in linear time.
Algorithm 2 PLSD
Require: An image I; the speckle suppression parameter ρ; the angle tolerance μ; the strength tolerance η; the NFA threshold ϵ; the density threshold D.
Ensure: A list of line segments L.
1: Initialization:
   • Apply the CFAR edge detector with parameter ρ to the input image to obtain the gradient strength map G_s and the gradient direction map G_d.
   • Compute OrderedList by sorting the pixels of I in descending order of gradient strength.
   • Define Status; All(Status) ← Not Used.
2: for pixel P(x, y) in OrderedList do
3:   region ← SRMSD(P(x, y), G_s, G_d, μ, η, Status)
4:   rect ← Rectangle(region)
5:   while AlignedPixelDensity(rect, μ) < D do
6:     region ← Refine(region)
7:     rect ← Rectangle(region)
8:   end while
9:   nfa ← NFA(rect)
10:  if nfa < ϵ then
11:    Add rect to L
12:    Status(region) ← Used
13:  else
14:    Status(region) ← Not Used
15:  end if
16: end for
17: return L
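To make the control flow of Algorithm 2 concrete, a compact Python sketch of the main loop is given below. It reuses the srmsd, region_to_rectangle, and tail_probability sketches above; the Refine step is omitted, an independent-pixel chain is substituted for the estimated Markov transition probabilities, and all helper names are our own, so this is an approximation rather than the paper's implementation.

```python
import numpy as np

def plsd(Gs, Gd, mu=np.deg2rad(22.5), eta=3.0, eps=1.0, D=0.4):
    """Sketch of Algorithm 2: the main PLSD loop on precomputed gradient maps."""
    H, W = Gs.shape
    N_R = 5.0 * (H * W) ** 2.5            # approximate number of candidate rectangles
    status = np.zeros((H, W), dtype=bool)
    segments = []
    # visit pixels in descending order of gradient strength (bucket sort in the paper)
    order = np.column_stack(np.unravel_index(np.argsort(-Gs, axis=None), Gs.shape))
    for x, y in order:
        if status[x, y]:
            continue
        region = srmsd(Gs, Gd, (x, y), mu, eta, status)
        cx, cy, angle = region_to_rectangle(region, Gs)
        # aligned pixels: gradient direction roughly orthogonal to the rectangle angle
        aligned = sum(abs(np.sin(Gd[px, py] - angle)) > np.cos(mu) for px, py in region)
        if aligned / len(region) < D:
            continue                       # a full implementation would halve mu and regrow (Refine)
        # NFA of Equation (11); an independent-pixel chain is used here for brevity
        p1 = mu / np.pi
        P = [[1 - p1, p1], [1 - p1, p1]]
        nfa = N_R * tail_probability(len(region), aligned, P, p1)
        if nfa < eps:
            segments.append((cx, cy, angle, len(region)))   # accepted line segment
        else:
            for px, py in region:
                status[px, py] = False                       # release pixels for reuse
    return segments
```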

2.2. PLSD-Based Airport Detection on PolSAR Images

One possible application of line segment detection is airport detection, because the edges of runways appear as line segments in the image. Airport detection is extensively used as a pre-processing step for aircraft detection [39,40]. In addition to the line segments formed by runways, the a priori knowledge that surface scattering is the dominant scattering mechanism in airport regions can help to detect airports. Based on the line segment detector and scattering characteristics, we design a new large scene airport detection method.
The airport detection method is divided into two stages. In the first stage, Yamaguchi four-component-based Fuzzy C-Means (Y-FCM) is utilized to acquire the airport Region Of Interest (ROI). In the second stage, the precise airport region is obtained by using PLSD to validate the airport ROI. Figure 7 shows the flowchart of the airport detection method.
(1) Y-FCM: We apply the Yamaguchi four-component decomposition [41] to the PolSAR data. Four scattering components are obtained, i.e., the surface scattering component ($P_S$), the double-bounce scattering component ($P_D$), the volume scattering component ($P_V$), and the helix scattering component ($P_H$). Different terrains have different polarization scattering properties. The main scattering mechanism of the runway is surface scattering, as it has a moderately rough surface. Fuzzy C-Means (FCM) [42], as a soft clustering algorithm, allows each sample to belong to multiple classes with varying degrees of membership. Moreover, as FCM is more robust to the selection of the initial clustering centers, excellent clustering results can be obtained. With $P_S$, $P_D$, $P_V$, and $P_H$ as the features of each pixel, FCM is utilized to classify the pixels with different scattering mechanisms. The Y-FCM is implemented as described below.
First, set the number of classes m and the initial cluster centers $c_j$, $j = 1, 2, \ldots, m$. To reduce the subsequent computational complexity, m can be set to a relatively large value; in our experiments, m is set to 15. The initial clustering centers are chosen at random.
Then, the degree of membership $u_{ij}$ of pixel $x_i$ with respect to the j-th class is calculated based on $P_S$, $P_D$, $P_V$, $P_H$:

$$ u_{ij} = \frac{1 / \left\| x_i - c_j \right\|_2^2}{\sum_{k=1}^{m} 1 / \left\| x_i - c_k \right\|_2^2} \tag{16} $$

After each iteration, the cluster centers need to be updated as follows:

$$ c_j = \frac{\sum_{i=1}^{N} u_{ij}^2 \, x_i}{\sum_{i=1}^{N} u_{ij}^2} \tag{17} $$
Considering that the sum of $P_D$, $P_V$, and $P_H$ is small on the runway, we take the class with the smallest sum of $P_D$, $P_V$, and $P_H$ as the ROI for subsequent detection. It is worth noting that Y-FCM performs only one iteration to obtain an accurate ROI.
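As a rough illustration of this single Y-FCM iteration (Equations (16) and (17)), the following sketch operates on a pixel-by-feature matrix of the four Yamaguchi components; the function name, the random initialization, and the way the ROI class is picked are our own simplifications.

```python
import numpy as np

def yfcm_roi(features, m=15, seed=0):
    """Sketch of Y-FCM: one fuzzy C-means iteration (Equations (16)-(17)),
    returning a mask of the class with the smallest P_D + P_V + P_H
    (the candidate runway/ROI class).

    features : (n_pixels, 4) array with columns [P_S, P_D, P_V, P_H].
    """
    rng = np.random.default_rng(seed)
    centers = features[rng.choice(len(features), m, replace=False)]   # random init
    # membership degrees, Equation (16)
    d2 = ((features[:, None, :] - centers[None, :, :]) ** 2).sum(axis=2) + 1e-12
    u = (1.0 / d2) / (1.0 / d2).sum(axis=1, keepdims=True)
    # single cluster-center update, Equation (17)
    u2 = u ** 2
    centers = (u2.T @ features) / u2.sum(axis=0)[:, None]
    labels = np.argmax(u, axis=1)
    roi_class = np.argmin(centers[:, 1:].sum(axis=1))   # smallest P_D + P_V + P_H
    return labels == roi_class
```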
(2) PLSD: Next, PLSD is performed to acquire line segments in the PolSAR image. To test the parallelism of line segments, we define the rule for parallel lines according to [43]. Suppose the line segments L and S consist of the point sets $\{L_x, x = 1, \ldots, N_L\}$ and $\{S_x, x = 1, \ldots, N_S\}$, respectively, with $D_L$ and $D_S$ being their angles. If L and S are parallel, they must first satisfy the angle similarity condition, i.e., $|D_L - D_S| < Ang_{thred}$. Furthermore, for each point $L_i$ on line segment L, we construct the line P that passes through $L_i$ and is perpendicular to L. If P intersects S at $S_i$, we compute the Euclidean distance $dis_i$ between $L_i$ and $S_i$ and obtain the average value $\overline{dis}$ of the distance set $\{dis_x, x = 1, \ldots, N_L\}$. The percentage of points in the set $\{dis_x, x = 1, \ldots, N_L\}$ with $dis_x$ smaller than $\overline{dis}$ is denoted $Per_L$; the proportion $Per_S$ is obtained in the same way. If both $Per_L$ and $Per_S$ are smaller than the threshold $Per_{thred}$, the line segments L and S are considered parallel. In our experiments, we set $Ang_{thred}$ = 3° and $Per_{thred}$ = 0.7.
Finally, the ROI is validated by line segments. A region is considered an airport region if there are parallel line segments within the ROI.
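A possible reading of this parallelism rule is sketched below; because the distance-consistency criterion is only loosely specified, the gap_fraction test here (fraction of per-point gaps below the mean gap) should be taken as an assumption rather than the paper's precise rule.

```python
import numpy as np

def are_parallel(seg_a, seg_b, ang_thred=np.deg2rad(3.0), per_thred=0.7):
    """Hedged sketch of the parallel-line rule of Section 2.2 (after [43]).

    seg_a, seg_b : each a dict with 'angle' (radians) and 'points' (N, 2) array.
    """
    da, db = seg_a['angle'], seg_b['angle']
    # angle similarity condition, modulo pi
    if abs((da - db + np.pi / 2) % np.pi - np.pi / 2) >= ang_thred:
        return False

    def gap_fraction(src, dst, angle):
        normal = np.array([-np.sin(angle), np.cos(angle)])   # perpendicular to src
        gaps = []
        for p in src['points']:
            # approximate perpendicular distance from p to the other segment
            gaps.append(np.abs((dst['points'] - p) @ normal).min())
        gaps = np.asarray(gaps)
        return np.mean(gaps < gaps.mean())   # assumed distance-consistency measure

    return (gap_fraction(seg_a, seg_b, da) < per_thred and
            gap_fraction(seg_b, seg_a, db) < per_thred)
```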

3. Results

In this section, the results of our proposed algorithms and other comparative methods are reported for PolSAR images. First, the PolSAR datasets are presented, followed by the performance comparison of different methods and in-depth analysis of SRMSD. Finally, the applicability of the line segment detection method is demonstrated based on airport detection.

3.1. Datasets

Here, we choose three pieces of PolSAR data to demonstrate the effectiveness of our proposed algorithm, covering a road, an airport, and a river, as shown in Figure 8. The PolSAR data were provided by the UAVSAR project. They were acquired in the L-band with an azimuth resolution of 4.9 m and a range resolution of 7.2 m, and are four-look GRD products. More detailed information on the data is listed in Table 1.
Figure 8a shows a short road in Fort Smith, Canada, against a simple background. Figure 8b shows the Kona airport, located on Hawaii Island, America. The difference between the airport runway and the surrounding environment forms the line segments. Figure 8c shows an area located in Calumet, Louisiana, America, containing a river with well-defined edges. The complexity of this scene poses a considerable challenge to the line segment detector.

3.2. Comparison of Line Segment Detection Algorithms

In this subsection, we compare the line segment detection results of six methods, i.e., LSD [21], EDLines [22], Linelet-LSD [24], LSDSAR [28], HT [15], and our proposed method. The parameters of the comparison methods are set to optimal values determined from ten independent experiments for each image. Note that all the methods are implemented in MATLAB on a PC with a 2.6-GHz i5 CPU.
Figure 9 presents the line segment detection results of the different methods for Figure 8a,b. The first and third rows show the detection results on the PauliRGB image, while the results on a solid color background are shown in the second and fourth rows. Figure 9a shows the results of LSD, where we can see that most of the detected line segments are located in regions of gradient variation. Nevertheless, there are missed detections, such as parts of the roads and runways. Figure 9b,e suffer from the same missed detections. In addition, they over-cut the line segments into short fragments. The results in Figure 9c demonstrate that the Linelet-LSD method can effectively detect more line segments. However, it generates more false alarms as well, and suffers from serious over-cutting of the line segments. The results of LSDSAR depicted in Figure 9d seem better than all of the former methods, with most of the line segments detected well and with fewer over-cutting errors. Nevertheless, the most serious false alarms appear in these results, and line segments even appear in the ocean. Figure 9f shows the results of our proposed method; it can be observed that it finds a better tradeoff between accurate detection of line segments and removal of false alarms.
In addition, the number of line segments and the time cost of the different methods are provided in Table 2. It can be seen that our line segment detector has the best detection performance, and the detection time is lower than most of the others.
The detection performance of LSD, Linelet-LSD, LSDSAR, and PLSD in complex scenarios is discussed below. Figure 10 shows the line segment detection results of the four methods in a complex scenario. From Figure 10a,b, it can be seen that when using LSD, many obvious line segment regions are missed in the results, although most of the detected line segments are correct (for example, the regions marked by red circles in Figure 10a,b). This indicates that the edge detection method of LSD, i.e., the difference method, is not well adapted to PolSAR images. The results of Linelet-LSD in Figure 10c,d contain more line segments. Nevertheless, the over-cutting of line segments is very serious, as Linelet-LSD detects Linelets (short line segments) in horizontal or vertical directions and forms line segments by aggregating them. Figure 10e,f shows the results of the LSDSAR method, where it can be seen that most of the line segments are correctly discriminated with less over-cutting than LSD and Linelet-LSD. However, in homogeneous regions the detection results are not satisfactory, as a large number of false alarms exist. In addition, line segments are detected in curved regions. The detection results of our proposed approach are shown in Figure 10g,h, where it can be seen that both false alarms and over-cutting are well controlled. The reason for this is that our proposed method both considers the statistical model of the distribution in PolSAR images and utilizes the a contrario approach to validate the line-support regions. More importantly, we incorporate both the direction and strength information of the gradient into the SRM, resulting in more accurate detection of line-support regions with less over-cutting than the other methods. Therefore, our method outperforms the other three methods in terms of preserving complete line segments and eliminating false alarms.

3.3. In-Depth Analysis of the SRMSD

In this subsection, we focus on the performance improvement introduced by the addition of gradient strength. Figure 11a,b shows the results of line segment detection with and without strength information, respectively. Figure 11c–f show the enlarged results of the two selected areas marked with red rectangles in Figure 11a,b. What we can observe from Figure 11a,b is that the addition of strength information removes false alarms. Most of the removed line segments lie in the interior of the river, as shown in Figure 11e,f. Because the filter-based gradient calculation generates interference points at the pixel mutations, and these points have gradient directions similar to those of the edge points, SRM based on the gradient direction alone cannot distinguish the edge points from the interference points. As a result, a large number of unwanted line segments appear inside the river, as shown in Figure 11e. In comparison, our method considers the gradient strength, separating the edge points from the interference points. Therefore, the line segments in the middle of the river are removed, as shown in Figure 11c.
To further evaluate the effect of adding gradient strength, Table 3 displays the statistics on the detection results for both cases. It can be seen that the difference in detection time between the two cases is not significant. Therefore, the incorporation of gradient strength imposes almost no additional time burden on the algorithm. Meanwhile, the addition of strength information reduces the number of incorrect line segments. Furthermore, the line segments in Figure 11a have a smaller average width than those in Figure 11b, demonstrating the superiority of our new SRMSD algorithm.
The histograms of the line widths obtained by the line segment detector with and without strength information are shown in Figure 12a,b, respectively. It can be seen that the width of the line segments is concentrated between four and five after the incorporation of the strength information. In comparison, the width of the line segments without strength information is concentrated more between five and six. Furthermore, the number of line segments in Figure 12a is lower than in Figure 12b when the width of the line segments exceeds five, demonstrating the effectiveness of the incorporated strength information.

3.4. Comparison of Airport Detection Methods

Four PolSAR images were used to demonstrate the effectiveness of our airport detection method. The images are L-band GRD products provided by the UAVSAR project. Details are shown in Table 4. The airports are located in mountainous, seaside, and urban areas.
In order to demonstrate the effectiveness of our method, we compared it with CSA-SOACM [44]. CSA-SOACM utilizes local contrast to obtain the airport ROI; the ROI is then examined by line segments. Finally, the airport contour is refined within the airport area using an Active Contour Model (ACM). Furthermore, we replaced PLSD with other line segment detectors. However, LSD and Linelet-LSD are difficult to apply to airport detection in large scenes due to their heavy time consumption. Meanwhile, owing to the poor line segment detection results of EDLines and HT, all airport targets were lost. Only LSDSAR-based airport detection (LSDSAR-AD) was able to achieve promising detection results.
Figure 13 presents the results of airport detection. The airport areas are marked with red rectangles. It can be seen that our method accurately detects the airports. CSA-SOACM shows false alarms and incomplete detection. This is because CSA-SOACM does not utilize the polarization information of the PolSAR data and its line segment detection is inaccurate. Although two airports are detected by LSDSAR-AD, a large number of false alarms occur. This can be attributed to the line segment detection results of LSDSAR, which detects a large number of line segments in homogeneous regions, leading to false alarms.
The results of airport detection prove the superiority of PLSD and demonstrate the applicability of line segment detection.

4. Discussion

This section discusses the parameter settings and algorithmic complexity of PLSD. First, the effects of different free parameter settings are analyzed and the optimal parameters determined on our dataset are provided, followed by analysis of the algorithm’s complexity.

4.1. Parameter Settings

There are five parameters in PLSD: the speckle suppression parameter ρ, the angle tolerance μ, the strength tolerance η, the NFA threshold ϵ, and the density threshold D. The parameter ρ controls the size of the filter for the gradient calculation. Increasing the value of ρ helps to suppress the speckle, providing more accurate gradient calculation results; in turn, however, it increases the number of interference points. According to our experiments, setting ρ to 4 provides the best performance in most scenarios. However, it may then not be possible to distinguish line segments that are very close to each other, such as the river branches in Figure 8c; setting ρ to a smaller value, e.g., 2, solves this problem. The tolerances μ and η are the angle tolerance and strength tolerance used in the search for line-support regions, respectively. A small value is more restrictive, leading to over-cutting of line segments, while a large value leads to the merging of unrelated pixels, resulting in overly large regions. We set μ to 22.5° according to [45] and obtained good results. For the selection of η, we analyzed the gradient strength obtained with different values of ρ, as shown in Figure 14. We chose two rows of the PauliRGB image; the strength profiles are displayed in Figure 14a. Although the value of ρ differs, the gradient strength always shows a large drop at some point, owing to the filter deviating from the pixel mutation. These points, where the gradient strength decreases sharply, are marked with green dotted lines in Figure 14b–f. With η set to 3, the edge points (points inside the dotted lines) and interference points (points outside the dotted lines) are well distinguished. As in other a contrario approaches, the NFA threshold ϵ represents an upper bound on the average number of detections allowed in a pure noise image; setting it to 1 usually yields good performance. The density threshold D is used to control the density of aligned pixels in the line-support region. During SRMSD, the angle tolerance μ decreases until the density of aligned pixels in the line-support region is greater than D. The presence of speckle leads to inaccurate gradient direction estimation; thus, overly large values of D over-cut the line segments into small subsegments, as shown in Figure 15. Therefore, we suggest a value of 0.4 for D, and its validity is confirmed in the experiments.
Unless explicitly mentioned, we use the default setting ρ = 4, μ = 22.5°, η = 3, ϵ = 1, and D = 0.4 in our proposed algorithm.
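For convenience, these defaults can be gathered into a single configuration, as in the illustrative snippet below (the key names match the plsd sketch of Section 2.1.4 and are our own):

```python
import numpy as np

# Default PLSD parameters suggested in Section 4.1 (key names are illustrative)
DEFAULT_PARAMS = {
    "rho": 4,                 # speckle suppression; use 2 for closely spaced line segments
    "mu": np.deg2rad(22.5),   # angle tolerance
    "eta": 3.0,               # strength tolerance
    "epsilon": 1.0,           # NFA threshold
    "D": 0.4,                 # aligned-pixel density threshold
}
```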

4.2. Complexity of the PLSD Algorithm

For our proposed algorithm, the computational costs mainly consist of gradient strength and direction acquisition, line-support region generation, and line segment validation. The computation of the gradient strength and direction is proportional to the number of pixels and the size of the filter. Pixel sorting is achieved by bucket sorting, an operation that can be performed in linear time. The computational time of the line-support region generation algorithm is proportional to the number of visited pixels. It is worth noting that no overlap exists between the regions; thus, the number of visited pixels is proportional to the total number of pixels in the image. The line segment validation can be divided into two tasks, i.e., counting the aligned pixels in each line-support region and calculating the NFA. The former is proportional to the total number of pixels involved in all regions, while the latter is computed once per region and is therefore proportional to the number of regions. Both the number of line-support regions and the total number of pixels involved in the line-support regions are at most equal to the number of pixels. All in all, the execution time of PLSD is proportional to the number of pixels in the image.

5. Conclusions

This paper proposes a fast line segment detection algorithm for PolSAR images. To reduce the time consumption of the CFAR edge detector, a new filter setup is designed. Then, the SRM framework is utilized to quickly find the line-support region. Considering both gradient strength and gradient direction, we design a new similarity measure that effectively suppresses the interference gradient points and makes the line-support region narrower. Finally, according to the Helmholtz principle and a contrario approach, the line-support regions are validated to remove false alarms. Meanwhile, we analyze the free parameters of PLSD and provide a default setting, which achieves satisfactory results in all our experiments. Furthermore, we propose a large scene airport detection method based on PLSD and scattering characteristics. The experimental results demonstrate that PLSD is able to detect line segments more accurately than other line segment detectors while effectively controlling time consumption in large scenes.
To a certain extent, the quality of the edge detector determines the accuracy of subsequent line segment detection. In future research, we plan to modify the filter shape and distribution probability model of the edge detection algorithm to achieve more accurate line segment detection in heterogeneous regions. Moreover, we intend to reduce the computational complexity with the adaptive removal of regions with weak gradient strength before acquiring line segment support regions.

Author Contributions

Conceptualization, D.W. and Q.L.; Methodology, D.W. and F.M.; Software, F.M.; Data curation, Q.L.; Writing—Original draft preparation, D.W. and F.M.; Visualization, D.W.; Investigation, F.M. and Q.Y.; Supervision, F.M. and Q.L.; Resources, Q.L. and Q.Y.; Validation, D.W.; Writing—Reviewing and Editing, Q.L. and F.M. All authors have read and agreed to the published version of the manuscript.

Funding

This work was funded by the National Natural Science Foundation of China (Grants 61871413, 62171016, and 62201027) and the Fundamental Research Funds for the Central Universities (XK2020-03).

Data Availability Statement

Not applicable.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Chen, L.; Weng, T.; Xing, J.; Li, Z.; Yuan, Z.; Pan, Z.; Tan, S.; Luo, R. Employing deep learning for automatic river bridge detection from SAR images based on adaptively effective feature fusion. Int. J. Appl. Earth Obs. Geoinf. 2021, 102, 102425. [Google Scholar] [CrossRef]
  2. Wei, Q.R.; Feng, D.Z.; Zheng, W.; Zheng, J.B. Rapid line-extraction method for SAR images based on edge-field features. IEEE Geosci. Remote Sens. Lett. 2017, 14, 1865–1869. [Google Scholar] [CrossRef]
  3. Zhao, M.; Wu, Y.; Pan, S.; Zhou, F.; An, B.; Kaup, A. Automatic registration of images with inconsistent content through line-support region segmentation and geometrical outlier removal. IEEE Trans. Image Process. 2018, 27, 2731–2746. [Google Scholar] [CrossRef] [PubMed]
  4. Sui, H.; Xu, C.; Liu, J.; Hua, F. Automatic optical-to-SAR image registration by iterative line extraction and Voronoi integrated spectral point matching. IEEE Trans. Geosci. Remote Sens. 2015, 53, 6058–6072. [Google Scholar] [CrossRef]
  5. Liu, N.; Cui, Z.; Cao, Z.; Pi, Y.; Dang, S. Airport detection in large-scale SAR images via line segment grouping and saliency analysis. IEEE Geosci. Remote Sens. Lett. 2018, 15, 434–438. [Google Scholar] [CrossRef]
  6. Tang, G.; Xiao, Z.; Liu, Q.; Liu, H. A novel airport detection method via line segment classification and texture classification. IEEE Geosci. Remote Sens. Lett. 2015, 12, 2408–2412. [Google Scholar] [CrossRef]
  7. Luo, Y.T.; Zhao, L.Y.; Zhang, B.; Jia, W.; Xue, F.; Lu, J.T.; Zhu, Y.H.; Xu, B.Q. Local line directional pattern for palmprint recognition. Pattern Recognit. 2016, 50, 26–44. [Google Scholar] [CrossRef]
  8. Huang, J.; Liu, Y.; Wang, M.; Zheng, Y.; Wang, J.; Ming, D. Change Detection of High Spatial Resolution Images Based on Region-Line Primitive Association Analysis and Evidence Fusion. Remote Sens. 2019, 11, 2484. [Google Scholar] [CrossRef] [Green Version]
  9. Yue, Z.; Gao, F.; Xiong, Q.; Wang, J.; Huang, T.; Yang, E.; Zhou, H. A novel semi-supervised convolutional neural network method for synthetic aperture radar image recognition. Cogn. Comput. 2021, 13, 795–806. [Google Scholar] [CrossRef] [Green Version]
  10. Yin, Q.; Hong, W.; Zhang, F.; Pottier, E. Optimal combination of polarimetric features for vegetation classification in PolSAR image. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 2019, 12, 3919–3931. [Google Scholar] [CrossRef]
  11. Li, G.; Li, Y.; Hou, Y.; Wang, X.; Wang, L. Marine oil slick detection using improved polarimetric feature parameters based on polarimetric synthetic aperture radar data. Remote Sens. 2021, 13, 1607. [Google Scholar] [CrossRef]
  12. He, J.; Wang, Y.; Liu, H.; Wang, N. PolSAR ship detection using local scattering mechanism difference based on regression kernel. IEEE Geosci. Remote Sens. Lett. 2017, 14, 1725–1729. [Google Scholar] [CrossRef]
  13. Duda, R.O.; Hart, P.E. Use of the Hough transformation to detect lines and curves in pictures. Commun. ACM 1972, 15, 11–15. [Google Scholar] [CrossRef]
  14. Samal, A.; Edwards, J. Generalized Hough transform for natural shapes. Pattern Recognit. Lett. 1997, 18, 473–480. [Google Scholar] [CrossRef]
  15. Cha, J.; Cofer, R.H.; Kozaitis, S.P. Extended Hough transform for linear feature detection. Pattern Recognit. 2006, 39, 1034–1043. [Google Scholar] [CrossRef]
  16. Fernandes, L.A.; Oliveira, M.M. Real-time line detection through an improved Hough transform voting scheme. Pattern Recognit. 2008, 41, 299–314. [Google Scholar] [CrossRef]
  17. Ma, F.; Zhang, F.; Xiang, D.; Yin, Q.; Zhou, Y. Fast task-specific region merging for SAR image segmentation. IEEE Trans. Geosci. Remote Sens. 2022, 60, 1–16. [Google Scholar] [CrossRef]
  18. Ma, F.; Zhang, F.; Yin, Q.; Xiang, D.; Zhou, Y. Fast SAR image segmentation with deep task-specific superpixel sampling and soft graph convolution. IEEE Trans. Geosci. Remote Sens. 2021, 60, 1–16. [Google Scholar] [CrossRef]
  19. Wei, Q.R.; Feng, D.Z. Extracting line features in SAR images through image edge fields. IEEE Geosci. Remote Sens. Lett. 2016, 13, 540–544. [Google Scholar] [CrossRef]
  20. Xiong, X.; Jin, G.; Xu, Q.; Zhang, H.; Xu, J. Robust line detection of synthetic aperture radar images based on vector radon transformation. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 2019, 12, 5310–5320. [Google Scholar] [CrossRef]
  21. Von Gioi, R.G.; Jakubowicz, J.; Morel, J.M.; Randall, G. LSD: A fast line segment detector with a false detection control. IEEE Trans. Pattern Anal. Mach. Intell. 2008, 32, 722–732. [Google Scholar] [CrossRef] [PubMed]
  22. Akinlar, C.; Topal, C. EDLines: A real-time line segment detector with a false detection control. Pattern Recognit. Lett. 2011, 32, 1633–1642. [Google Scholar] [CrossRef]
  23. Desolneux, A.; Moisan, L.; Morel, J.M. From Gestalt Theory to Image Analysis: A Probabilistic Approach; Springer Science & Business Media: New York, NY, USA, 2007; Volume 34. [Google Scholar]
  24. Cho, N.G.; Yuille, A.; Lee, S.W. A novel linelet-based representation for line segment detection. IEEE Trans. Pattern Anal. Mach. Intell. 2017, 40, 1195–1208. [Google Scholar] [CrossRef] [PubMed]
  25. Yang, J.Y.; Li, H.C.; Hu, W.S.; Pan, L.; Du, Q. Adaptive Cross-Attention-Driven Spatial–Spectral Graph Convolutional Network for Hyperspectral Image Classification. IEEE Geosci. Remote Sens. Lett. 2022, 19, 1–5. [Google Scholar] [CrossRef]
  26. Li, H.C.; Hu, W.S.; Li, W.; Li, J.; Du, Q.; Plaza, A. A3CLNN: Spatial, spectral and multiscale attention ConvLSTM neural network for multisource remote sensing data classification. IEEE Trans. Neural Netw. Learn. Syst. 2020, 33, 747–761. [Google Scholar] [CrossRef]
  27. Zhao, K.; Han, Q.; Zhang, C.B.; Xu, J.; Cheng, M.M. Deep hough transform for semantic line detection. IEEE Trans. Pattern Anal. Mach. Intell. 2021, 44, 4793–4806. [Google Scholar] [CrossRef]
  28. Liu, C.; Abergel, R.; Gousseau, Y.; Tupin, F. LSDSAR, a Markovian a contrario framework for line segment detection in SAR images. Pattern Recognit. 2020, 98, 107034. [Google Scholar] [CrossRef] [Green Version]
  29. Borghys, D.; Lacroix, V.; Perneel, C. Edge and line detection in polarimetric SAR images. In Proceedings of the 2002 International Conference on Pattern Recognition, Quebec City, QC, Canada, 11–15 August 2002; Volume 2, pp. 921–924. [Google Scholar]
  30. Jin, R.; Zhou, W.; Yin, J.; Yang, J. Cfar line detector for polarimetric sar images using wilks’ test statistic. IEEE Geosci. Remote Sens. Lett. 2016, 13, 711–715. [Google Scholar] [CrossRef]
  31. Zhou, G.; Cui, Y.; Chen, Y.; Yang, J.; Rashvand, H.; Yamaguchi, Y. Linear feature detection in polarimetric SAR images. IEEE Trans. Geosci. Remote Sens. 2010, 49, 1453–1463. [Google Scholar] [CrossRef]
  32. Schou, J.; Skriver, H.; Nielsen, A.A.; Conradsen, K. CFAR edge detector for polarimetric SAR images. IEEE Trans. Geosci. Remote Sens. 2003, 41, 20–32. [Google Scholar] [CrossRef]
  33. Conradsen, K.; Nielsen, A.A.; Schou, J.; Skriver, H. A test statistic in the complex Wishart distribution and its application to change detection in polarimetric SAR data. IEEE Trans. Geosci. Remote Sens. 2003, 41, 4–19. [Google Scholar] [CrossRef] [Green Version]
  34. Kahn, P.; Kitchen, L.; Riseman, E.M. Real-Time Feature Extraction: A Fast Line Finder for Vision-Guided Robot Navigation; University of Massachusetts Amherst: Amherst, MA, USA, 1987. [Google Scholar]
  35. Kahn, P.; Kitchen, L.; Riseman, E.M. A fast line finder for vision-guided robot navigation. IEEE Trans. Pattern Anal. Mach. Intell. 1990, 12, 1098–1102. [Google Scholar] [CrossRef]
  36. Desolneux, A.; Moisan, L.; Morel, J.M. Meaningful alignments. Int. J. Comput. Vis. 2000, 40, 7–23. [Google Scholar] [CrossRef]
  37. Myaskouvskey, A.; Gousseau, Y.; Lindenbaum, M. Beyond independence: An extension of the a contrario decision procedure. Int. J. Comput. Vis. 2013, 101, 22–44. [Google Scholar] [CrossRef]
  38. Leiserson, C.E.; Rivest, R.L.; Cormen, T.H.; Stein, C. Introduction to Algorithms; MIT Press: Cambridge, MA, USA, 1994; Volume 3. [Google Scholar]
  39. Chen, L.; Luo, R.; Xing, J.; Li, Z.; Yuan, Z.; Cai, X. Geospatial transformer is what you need for aircraft detection in SAR Imagery. IEEE Trans. Geosci. Remote Sens. 2022, 60, 1–15. [Google Scholar] [CrossRef]
  40. Sun, X.; Wang, P.; Yan, Z.; Xu, F.; Wang, R.; Diao, W.; Chen, J.; Li, J.; Feng, Y.; Xu, T.; et al. FAIR1M: A benchmark dataset for fine-grained object recognition in high-resolution remote sensing imagery. ISPRS J. Photogramm. Remote Sens. 2022, 184, 116–130. [Google Scholar] [CrossRef]
  41. Yamaguchi, Y.; Moriyama, T.; Ishido, M.; Yamada, H. Four-component scattering model for polarimetric SAR image decomposition. IEEE Trans. Geosci. Remote Sens. 2005, 43, 1699–1706. [Google Scholar] [CrossRef]
  42. Lin, Y.; Chen, S. A centroid auto-fused hierarchical fuzzy c-means clustering. IEEE Trans. Fuzzy Syst. 2020, 29, 2006–2017. [Google Scholar] [CrossRef]
  43. Liu, C.; Xiao, Y.; Yang, J.; Yin, J. Harbor detection in polarimetric sar images based on the characteristics of parallel curves. IEEE Geosci. Remote Sens. Lett. 2016, 13, 1400–1404. [Google Scholar] [CrossRef]
  44. Zhang, Q.; Zhang, L.; Shi, W.; Liu, Y. Airport extraction via complementary saliency analysis and saliency-oriented active contour model. IEEE Geosci. Remote Sens. Lett. 2018, 15, 1085–1089. [Google Scholar] [CrossRef]
  45. Burns, J.B.; Hanson, A.R.; Riseman, E.M. Extracting straight lines. IEEE Trans. Pattern Anal. Mach. Intell. 1986, PAMI-8, 425–455. [Google Scholar] [CrossRef]
Figure 1. Flowchart of the PLSD.
Figure 2. Edge detection filter configuration; l_f: length of the filter, w_f: width of the filter, l_f = 2 × w_f + 1. (a) Filter setting for the horizontal gradient calculation. (b) Filter setting for the vertical gradient calculation.
Figure 3. Gradient map. (a) PauliRGB map. (b) Gradient direction map. (c) Gradient strength map.
Figure 4. Effect of the seed pixel direction. (a) Scenes with slow directional changes. (b) With seed pixel direction information. (c) Without seed pixel direction information.
Figure 5. Process of SRMSD. The arrow represents the gradient direction and the length represents the gradient strength. (a) Selection of the seed pixel. (b) Iterative process of SRMSD. (c) Results of SRMSD.
Figure 6. Rectangular description of the line-support region.
Figure 7. Flowchart of the airport detection method. ⊕ denotes the intersection operation. Red rectangles mark the airport region.
Figure 8. PauliRGB images of the three PolSAR data. (a) Simple road area. (b) Kona airport. (c) Complex river area.
Figure 9. Line segment detection results of different methods for PolSAR image. (a) LSD. (b) EDLines. (c) Linelet-LSD. (d) LSDSAR. (e) HT. (f) PLSD.
Figure 10. Line segment detection results of different methods in complex scenarios. (a,b) LSD. (c,d) Linelet-LSD. (e,f) LSDSAR. (g,h) PLSD. Red circle: obvious line areas missed in the results.
Figure 11. Enhancement of gradient strength. (a) With gradient strength. (b) Without gradient strength. (c,d) Enlarged results of the red rectangular areas in (a). (e,f) Enlarged results of the red rectangular areas in (b).
Figure 12. Statistics on line segment width. (a) With strength information. (b) Without strength information.
Figure 13. Comparison of airport detection results. (a) Ours. (b) CSA-SOACM. (c) LSDSAR-AD.
Figure 14. Analytical results for η with different ρ. (a) PauliRGB image (red lines represent the pixel locations, the 50th and 150th rows, respectively). (b) ρ = 1. (c) ρ = 2. (d) ρ = 3. (e) ρ = 4. (f) ρ = 5.
Figure 15. Line segment detection results for different values of D. (a) D = 0.4. (b) D = 0.8.
Table 1. Descriptions of three PolSAR data items for line detection.

Image        Size (Pixel)    Acquisition Date    Location
Figure 8a    300 × 300       September 2017      Fort Smith, Canada
Figure 8b    700 × 700       January 2012        Hawaii Island, America
Figure 8c    2281 × 1400     May 2015            Calumet, Louisiana, America
Table 2. Line segment detection results.

Method        Image       Number of Line Segments   Time (s)
LSD           Figure 8c   319                       4145.76
              Figure 8b   26                        60.12
              Figure 8a   1                         0.85
EDLines       Figure 8c   728                       46.29
              Figure 8b   152                       8.54
              Figure 8a   9                         2.12
Linelet-LSD   Figure 8c   4782                      15,007.67
              Figure 8b   580                       211.27
              Figure 8a   33                        8.31
LSDSAR        Figure 8c   2663                      8.84
              Figure 8b   442                       1.74
              Figure 8a   52                        0.28
HT            Figure 8c   62                        0.11
              Figure 8b   69                        0.05
              Figure 8a   6                         0.04
PLSD          Figure 8c   538                       6.69
              Figure 8b   49                        1.52
              Figure 8a   2                         0.27
Table 3. Comparison of results with and without gradient strength.

                   Number of Line Segments   Average Width of Line Segments (Pixel)   Time (s)
With strength      538                       4.875                                    36.48
Without strength   575                       5.739                                    46.41
Table 4. Description of the data for airport detection.

Name                  Image Size (Pixel)   Airport Size (Pixel)   Acquisition Date   Location
Coldfoot Airport      4297 × 2697          249 × 214              10/2015            Coldfoot, America
Perales Airport       4164 × 2878          136 × 427              04/2014            Ibague, Colombia
Kona Airport          3195 × 2141          658 × 177              01/2012            Hawaii Island, America
Changuinola Airport   3086 × 2162          55 × 187               02/2010            Changuinola, Panama
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.
