Article

Coverage Path Planning with Adaptive Hyperbolic Grid for Step-Stare Imaging System

1 Changchun Institute of Optics, Fine Mechanics and Physics, Chinese Academy of Sciences, Changchun 130033, China
2 Changchun Changguang Insight Vision Optoelectronic Technology Co., Ltd., Changchun 130102, China
Drones 2024, 8(6), 242; https://doi.org/10.3390/drones8060242
Submission received: 3 April 2024 / Revised: 26 May 2024 / Accepted: 31 May 2024 / Published: 4 June 2024
(This article belongs to the Section Drone Design and Development)

Abstract

Step-stare imaging systems are widely used in aerospace optical remote sensing. To achieve fast scanning of a target region, efficient coverage path planning (CPP) is a key challenge. However, traditional CPP methods are mostly designed for fixed cameras and disregard the irregular shape of the sensor’s projection caused by the step-stare rotational motion. To address this problem, this paper proposes an efficient, seamless CPP method with an adaptive hyperbolic grid. First, we convert the coverage problem in Euclidean space to a tiling problem in spherical space. A spherical approximate tiling method based on a zonal isosceles trapezoid is developed to construct a seamless hyperbolic grid. Then, we present a dual-caliper optimization algorithm to further compress the grid and improve the coverage efficiency. Finally, both boustrophedon and branch-and-bound approaches are utilized to generate rotation paths for different scanning scenarios. Experiments were conducted on a custom dataset of 800 diverse geometric regions (2 geometry types × 10 groups × 40 samples). The proposed method achieves closed-form path lengths comparable to those of a heuristic optimization method while improving real-time performance by a factor of at least 2464. Furthermore, compared to traditional rule-based methods, our approach reduces the rotational path length by at least 27.29% and 16.71% in the circle and convex polygon groups, respectively, indicating a significant improvement in planning efficiency.

1. Introduction

The advancement of aerospace remote sensing is driven by the need for high-resolution and long-range imaging capability. In recent years, however, there has been an increasing demand for imaging systems that provide wide-area coverage. Consequently, a concept known as wide-area persistent surveillance (WAPS) has been proposed [1,2,3,4]. WAPS requires an optical imaging system that combines a wide field of view (FOV) to cover a relatively large geographic area with high resolving power to improve the detection of dim and small targets. However, due to limitations of size, weight, and power (SWaP), achieving both requirements simultaneously is challenging. In existing implementations, fisheye and reflective panoramic lenses can provide a large FOV, but the optical system usually suffers from large aberrations and distortions [5] that degrade image quality, and the narrow aperture of the entrance pupil makes it difficult to improve resolving power [6]. Spherical lenses with concentrically arranged fiber bundles or detector arrays [7,8] may result in a significant increase in volume and pose challenges for achieving a long focal length. Camera arrays can offer a balance between spatial resolution and FOV, but this also comes at the cost of SWaP [9].
A feasible solution to achieve both high resolution and wide coverage is to employ an area-scan camera with a single lens in a step-stare manner [10,11,12,13]. The main concept is to utilize a two-axis gimbal assembly to steer narrow-FOV optics to rapidly scan the region of interest, capturing a sub-region once a step and, finally, stitching the images to create an equivalent wide-FOV picture [14,15,16,17]. Compared to other implementations, this mechanism allows for the design of optical systems with larger apertures and longer focal lengths while maintaining the same SWaP. However, the step-stare scheme has a drawback in that it operates on a time-shared pattern. This leads to a low temporal resolution, making it challenging to detect moving targets in the stitched imagery. The update frequency of the panoramic image depends entirely on the scanning efficiency of the system. Therefore, it is crucial to design an efficient coverage path planning (CPP) method.
In recent years, the CPP problem has been extensively studied due to the rapid development of robotics and UAV technology [18,19,20,21]. Unlike point-to-point path planning, CPP emphasizes both completeness and efficiency. Completeness of coverage is particularly important in some applications, such as demining [22,23]. Efficiency involves satisfying constraints such as path length and energy consumption. Most traditional CPP methods assume that the camera is rigidly mounted and the sensor’s ground footprint is a fixed, rectangular cell. Nevertheless, this assumption is not applicable in the step-stare system because of the rotation movement. Furthermore, these methods often prioritize path planning over coverage planning, whereas the latter is more critical for WAPS applications.
The majority of research to date has concentrated on coverage strategies based on the translational movement of UAVs. However, without a gimbal, such systems cannot rapidly scan large ROIs, which limits their remote sensing capacity. Conversely, the step-stare mechanism provides two additional degrees of freedom to the camera, typically pitch and roll, effectively increasing the observation DOFs from four (i.e., translation and yaw) to six. Nevertheless, the coverage planning problem in rotational space has received little attention in the literature, which represents a research gap.
This study investigates the CPP problem for step-stare imaging systems. The system’s rotational motion causes the sensor’s ground footprint to change dynamically, making it difficult to analyze coverage completeness in the Euclidean plane. To address this issue, we examine such a coverage problem in a spherical space and propose an approximate tessellation to determine the orientation distribution of the sensors. In this arrangement, the sensors’ footprints form a hyperbolic grid that seamlessly covers the target region. Next, we propose a dual-caliper algorithm inspired by the rotating-caliper algorithm to optimize the grid layout. In summary, the main contributions of this paper are as follows.
  • To the best of our knowledge, we are probably the first to study the coverage planning problem under pure rotational motion. Furthermore, in contrast to traditional heuristic algorithms for solving mixed-integer programming problems, we propose an efficient approximation method and framework based on computational geometry.
  • For the first time, spherical tiling is successfully applied in the field of coverage planning. Specifically, to achieve complete coverage of the target region, we convert the coverage problem to a tiling problem on a virtual scanning sphere, then propose a spherical approximate tiling method and a corresponding hyperbolic grid of the footprint, which offers seamless coverage.
  • By fully utilizing the properties of conic and projective geometry, we propose a dual-caliper optimization method to compress the hyperbolic grid. This method employs two types of “caliper” to find the optimal cell stride by computing the supporting hypersurface of the ROI in two orthogonal directions. The experimental results demonstrate its superior performance and low computational complexity.
  • In order to enhance the heterogeneity of experimental data, we propose a bespoke dataset generation methodology for the evaluation of the CPP of the step-stare camera. Circular and convex polygonal regions with varying locations and sizes are randomly generated by means of a candidate radius sampling strategy and convex hull computation.
The main structure of this paper is as follows. Works related to coverage path planning are reviewed in Section 2. Section 3 introduces the system model of step-stare imaging and the problem formulation. Section 4 details the proposed coverage path planning method characterized by an adaptive hyperbolic grid. Section 5 presents the simulation experiments and results. Section 6 concludes and discusses directions for future work.

2. Related Works

The coverage path planning (CPP) problem originated in the field of robotics and has recently gained attention in remote sensing with the development of UAVs. The objective is to find a path that maximizes camera ground coverage of the target area while optimizing specific metrics, such as path length, travel time, energy consumption, or number of turns. According to the early taxonomy [18,19], the CPP problem can be solved through approaches with exact cellular decomposition, approximate cellular decomposition, and no decomposition.

2.1. Exact Cellular Decomposition

This methodology’s basic idea is divide and conquer. The complex target region is first divided into several disjoint sub-areas, each of which is treated as a node. This allows the target region to be modeled as an adjacency graph. Next, the optimal scanning direction is determined for each sub-area, along with the connection relationship between the entrance and exit points of neighboring sub-areas. Finally, a simple back-and-forth movement is used for each sub-area to complete the coverage. The optimization objective of this method is to mainly reduce turning maneuvers due to their significant impact on energy consumption and mission time. Research on this method primarily focuses on optimal decomposition, optimization of sweep direction, and merging of adjacent polygons.
The trapezoidal decomposition method [24] decomposes the target region into several trapezoidal regions by sweeping a vertical line through the vertices of polygonal obstacles from left to right. However, this method requires too much redundant back-and-forth motion. The boustrophedon decomposition method [25], on the other hand, improves the sub-area updating: when the vertical line’s connectivity increases, a new sub-area is created; conversely, when the connectivity decreases, the sub-areas merge. This decomposition of the target region into larger non-convex regions reduces the number of sub-areas. However, in more complex target regions or environments with wind, neither method may be optimal. Coombes et al. [26] introduced wind into the cost function, added cells outside the region of interest to facilitate finding better flight paths, and developed a dynamic programming approach to solve the cell recombination problem. Tang et al. [27] proposed an optimal region decomposition algorithm that uses a depth-first search to merge sub-areas, followed by the minimum-width method to determine the sweep angle; finally, a genetic algorithm determines the visit order between sub-areas.

2.2. Approximate Cellular Decomposition

Approximate cellular decomposition is a grid-based representation of the region of interest (ROI). The method approximates the target area with a polygon consisting of fixed-size and fixed-shape cells, usually corresponding to the camera sensor’s ground footprint. Once the UAV has traveled through the centers of all the cells, coverage is complete. If the starting and ending points of the path are not joined, the problem can be formulated as solving the Hamiltonian path problem with minimum cost; otherwise, the problem becomes a Traveling Salesman Problem (TSP).
Nam et al. [28] obtained the coverage path by using a wavefront algorithm based on gradient descending search, then further smoothed the trajectory using the cubic interpolation algorithm. In [29], Cao et al. proposed an improved approach to the traditional probabilistic roadmap algorithm by using constraints on path length and the number of turns to generate a straight path and using the endpoint as the sampling node, which resulted in an improved coverage rate and a reduced number of turns and repetition rate. Shang et al. [30] introduced a cost function to measure quality and efficiency and developed a greedy heuristic. Shao et al. [31] proposed a replanning sidewinder algorithm for three-axis steerable sensors in agile earth-observing satellites. The algorithm replans the sidewinder after each move and avoids revisiting old tiles by continuously correcting the grid origin to achieve a dynamic wavefront. Vasquez-Gomez et al. [32] used a rotating-caliper algorithm to obtain optimal edge–vertex back-and-forth paths, taking into account the starting and ending points.
Approximate cellular decomposition methods primarily focus on solving the Hamiltonian path problem or the traveling salesman problem through approximate or heuristic optimization methods. However, they lack coverage optimization for ROI.

2.3. No Decomposition

In addition to the previously mentioned algorithms, some algorithms achieve coverage planning with no decomposition. Mansouri et al. [33] modeled the coverage planning problem as a mixed-integer optimization problem, considering UAV azimuth rotation. They investigated three heuristic optimization methods, namely pattern search, genetic algorithm, and particle swarm optimization. As a result, they achieved at least 97.0% coverage in five test cases. Papaioannou et al. [34] addressed the problem of UAV coverage planning with pitch-angle gimbal control and introduced visibility constraints. However, these methods suffer from high computational complexity, which limits their application in time-critical scenarios.

3. System Model and Problem Formulation

3.1. System Model

Figure 1 depicts a schematic diagram of a step-stare imaging system. The system is mounted on an airborne platform and utilizes two-axis motors to drive an optical assembly for periodic rapid scans of single or multiple ROIs. During each step, the servos maintain a constant line-of-sight (LOS) orientation, allowing the area-scan camera to stare at the target area. At the end of each scan cycle, the images from each stare are stitched together to form a single large-FOV equivalent image. The centers of all the footprints form a grid graph. Definitions of the relevant coordinate systems can be found in Appendix A. The main notations adopted in this study are listed in Table A1.
We should mention that this paper makes the following assumptions: (a) During a scan, the center of the projection undergoes no translational motion, and the heading angle remains constant, based on the fact that step-stare optical systems are typically driven by high-torque motors, resulting in fast LOS movement. (b) The gimbal configuration of the imaging system has two axes, with the outermost axis being the roll axis and the innermost axis being the pitch axis. This configuration is commonly used for wide-area imaging systems. (c) The aircraft attitude is disregarded because gyro-attitude stabilizers can typically achieve stabilization accuracy within 0.1° [35]. (d) The shape of the ROI is a circle or convex polygon. (e) The profile of the camera sensor (such as a CCD or CMOS) is a non-square rectangle. (f) The ROI is located on a planar surface, i.e., variations in terrain undulations and the curvature of the earth are not taken into account.
Based on the given assumptions, each camera sensor’s ground footprint is a quadrilateral region created by the projection of the camera sensor onto the ground. It is the intersection of the quadrangular pyramid formed by the camera’s FOV with the ground. The closed region determined by the footprint is defined as a cell and denoted as $C_i$ ($i$ refers to the index):
$$C_i = H_{cw}(K_c, \xi_i, X_c, S)$$
where $H_{cw}(\cdot)$ denotes the projective transformation from $F_c$ to $F_w$, $S$ denotes the camera sensor’s geometry, $X_c$ denotes the coordinates of the origin of $F_c$ in $F_w$, $\xi_i$ denotes the $i$th orientation of the camera, and $K_c$ is the camera calibration matrix:
$$K_c = \begin{bmatrix} f & & p_x \\ & f & p_y \\ & & 1 \end{bmatrix}$$
where $f$ is the camera’s focal length, and $(p_x, p_y)$ represents the coordinates of the camera’s principal point. From Equation (1), we can infer that the shape of the cell correlates with the orientation of the camera at each step.
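As a concrete illustration of Equation (1), the following Python sketch projects the four sensor corners through the camera center onto a flat ground plane, yielding one cell. The function name, the simplified two-rotation gimbal model, and the downward-looking frame convention are our assumptions for illustration, not the paper's implementation.

```python
import numpy as np

def ground_footprint(f, px, py, lx, ly, pitch, roll, height):
    """Project the sensor's four corners onto a flat ground plane (Z = 0),
    yielding one quadrilateral cell C_i. Simplified illustrative model."""
    K = np.array([[f, 0.0, px],
                  [0.0, f, py],
                  [0.0, 0.0, 1.0]])          # camera calibration matrix K_c
    # Sensor corners (geometry S) in homogeneous coordinates.
    corners = np.array([[px - lx/2, px + lx/2, px + lx/2, px - lx/2],
                        [py - ly/2, py - ly/2, py + ly/2, py + ly/2],
                        [1.0, 1.0, 1.0, 1.0]])
    cp, sp = np.cos(pitch), np.sin(pitch)
    cr, sr = np.cos(roll), np.sin(roll)
    R_pitch = np.array([[1, 0, 0], [0, cp, -sp], [0, sp, cp]])
    R_roll = np.array([[cr, 0, sr], [0, 1, 0], [-sr, 0, cr]])
    R = R_roll @ R_pitch                      # gimbal orientation xi_i
    rays = R @ np.linalg.inv(K) @ corners     # viewing rays, z toward ground
    t = height / rays[2]                      # scale each ray to hit Z = 0
    return (rays[:2] * t).T                   # four ground vertices (x, y)

# Nadir view: a 0.1 m lens over a 36 mm x 24 mm sensor at 1000 m altitude.
cell = ground_footprint(f=0.1, px=0.0, py=0.0, lx=0.036, ly=0.024,
                        pitch=0.0, roll=0.0, height=1000.0)
```

With zero pitch and roll the cell is a rectangle; tilting the gimbal turns it into an irregular quadrilateral, which is exactly the dependence on orientation that Equation (1) expresses.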

3.2. Problem Formulation

Let $C_{ROI} \subset \mathbb{R}^2$ denote an ROI. The coverage path planning problem involves finding a set of camera orientations ($\Xi$) and an optimal rotation path, with the goal of minimizing the path length and maximizing the coverage completeness of $C_{ROI}$. The optimization problem can be decomposed into a path planning problem and a coverage planning problem, which are illustrated in Figure 2.

3.2.1. Path Planning Problem

The camera orientations of an ROI can be represented as an undirected weighted graph $\Gamma = (\Xi, E)$, where element $\xi_i$ of $\Xi$ is a vertex, and the path length between $\xi_i$ and $\xi_j$ is the edge weight $e_{i,j}$. The label of each $e_{i,j}$ is defined as follows:
$$a_{i,j} = \begin{cases} 1 & \text{if a path exists from vertex } i \text{ to vertex } j \\ 0 & \text{otherwise} \end{cases}$$
The goal is to minimize the path length of camera rotation:
$$\min \sum_{i=1}^{n} \sum_{j=1,\, j \neq i}^{n} e_{i,j}\, a_{i,j}$$
This is a traveling salesman problem (TSP), and there are numerous proven methods for solving it. Since the coverage area of a step-stare system increases superlinearly with the tilt angle, the observation nodes of the ROI are typically small-scale, making coverage planning more important than path planning.
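Because the orientation set is small, even exact enumeration is feasible. Below is a minimal sketch; the Euclidean distance in (pitch, roll) space is our simplifying stand-in for the rotation-path edge weight, and all names are illustrative.

```python
import itertools
import math

def path_length(order, pts):
    """Length of an open path visiting pts in the given order."""
    return sum(math.dist(pts[a], pts[b]) for a, b in zip(order, order[1:]))

def shortest_rotation_path(pts):
    """Exact minimum open Hamiltonian path by enumeration; practical only
    because step-stare orientation sets are typically small."""
    orders = itertools.permutations(range(len(pts)))
    best = min(orders, key=lambda o: path_length(o, pts))
    return list(best), path_length(best, pts)

# Four hypothetical camera orientations (pitch, roll) in radians.
orientations = [(0.0, 0.0), (0.2, 0.0), (0.0, 0.3), (0.2, 0.3)]
order, length = shortest_rotation_path(orientations)
```

For larger orientation sets, any standard TSP heuristic (nearest neighbor, 2-opt) can replace the brute-force enumeration.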

3.2.2. Coverage Planning Problem

Let $\{C_i\}$ denote a cover of the ROI; then, we have the following:
$$C_{cov} = \bigcup_{i \in [1, n]} C_i$$
We wish to optimize the approximation of $C_{cov}$ to $C_{ROI}$ while minimizing the cardinality of $\Xi$:
$$\min_{\Xi,\, n} \ \mathrm{Area}\left(C_{ROI} - C_{ROI} \cap C_{cov}\right) + \lambda n$$
$$\text{s.t.} \quad \xi_i \in \left[-\tfrac{\pi}{2}, \tfrac{\pi}{2}\right] \times \left[-\tfrac{\pi}{2}, \tfrac{\pi}{2}\right]$$
$$n \in \mathbb{N}$$
$$|\Xi| = n$$
where $\mathrm{Area}(\cdot)$ denotes the function that calculates the area, and $\lambda$ is the hyperparameter of the penalty term. Fixing $n$ relaxes this problem into a maximal coverage location problem (MCLP) [36], which is known to be NP-hard. Therefore, the coverage planning problem is NP-hard.
An NP-hard problem cannot be solved exactly in polynomial time unless P = NP. To address this, our methodology uses a low-complexity approximation to obtain a locally optimal solution. This approach provides a “good-enough” solution and may even reach the global optimum in some cases.
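To make the coverage objective above concrete, the sketch below estimates the uncovered-area term by Monte-Carlo sampling. The circular ROI, the axis-aligned rectangular cells, and all names are illustrative assumptions, not the paper's method.

```python
import math
import random

def coverage_objective(roi_circle, cells, lam, samples=20000, seed=0):
    """Monte-Carlo estimate of Area(ROI not covered) + lam * n for a
    circular ROI (cx, cy, r) and rectangular cells (x0, y0, x1, y1)."""
    cx, cy, r = roi_circle
    rng = random.Random(seed)
    uncovered = 0
    for _ in range(samples):
        # Rejection-sample a point uniformly inside the ROI circle.
        while True:
            x = cx + rng.uniform(-r, r)
            y = cy + rng.uniform(-r, r)
            if (x - cx) ** 2 + (y - cy) ** 2 <= r * r:
                break
        if not any(x0 <= x <= x1 and y0 <= y <= y1
                   for x0, y0, x1, y1 in cells):
            uncovered += 1
    roi_area = math.pi * r * r
    return roi_area * uncovered / samples + lam * len(cells)

# One big cell fully covering the unit-radius ROI: only the penalty remains.
obj = coverage_objective((0.0, 0.0, 1.0), [(-2.0, -2.0, 2.0, 2.0)], lam=0.1)
```

Such an estimator is useful for comparing candidate grids; the proposed method instead avoids evaluating coverage numerically by construction.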

4. Proposed Method

The proposed method is detailed in this section. A complete flowchart of the algorithm is depicted in Figure 3, comprising three main modules. During the initialization phase, the ROI parameters, the camera’s parameters, and the necessary rotation matrices are prepared, and the radius of the virtual scanning sphere is calculated. These parameters are then provided to the coverage planning module. To achieve complete coverage of an ROI, we present a seamless hyperbolic grid (SHG) based on zonal isosceles trapezoids. Then, we propose a novel dual-caliper optimization (DCO) algorithm for circular and polygonal regions, which yields a more compact set of camera orientations. Finally, a rotation path is determined using the branch-and-bound method and the boustrophedon method.

4.1. Seamless Hyperbolic Grid

Conventional methods for coverage planning typically assume vertical imaging situations, where the camera’s ground footprint is rectangular, similar to the frame sensor. Such a grid has a rectangular geometry and can easily be used for planar tiling. However, determining a seamless tiling grid from the perspective of Euclidean geometry is not convenient for a step-stare imaging system with rotational motion. Inspired by the tiling (tessellation) problem, the completeness of coverage is investigated from a perspective of spherical tiling. Here, the term tiling (tessellation) refers to covering of a surface by one or more geometric shapes with no gaps or overlaps.
Several studies have been conducted on spherical tiling problems in astronomy, geosciences, astronautics, and multi-camera stitching [37,38,39,40,41]. However, they usually focus on equal-area or Platonic polyhedron tiling, which do not consider the projective geometric properties of the sensor. Next, the spherical tiling problem of a rectangular sensor’s projections is discussed in detail.

4.1.1. Equivalent Spherical Tiling

Step-stare motion covers a spherical space, so a virtual scanning sphere (VSS) is defined to represent this.
Definition 1.
A virtual scanning sphere of a camera is a sphere centered at the origin of the camera coordinate system such that it touches each of the camera sensor’s vertices, regardless of the camera’s orientation. It can be expressed as follows:
$$S^2 := \left\{ (\theta, \phi, r) \,\middle|\, (\theta, \phi) \in \left[-\tfrac{\pi}{2}, \tfrac{\pi}{2}\right] \times [-\pi, \pi],\ r = \sqrt{f^2 + \left(\tfrac{l_x}{2}\right)^2 + \left(\tfrac{l_y}{2}\right)^2} \right\}$$
where $f$ is the focal length of the camera, and $l_x$ and $l_y$ are the width and height of the sensor, respectively. We denote the radius of the VSS as $r_{VSS}$.
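The radius in Definition 1 is simply the distance from the projection center to a sensor corner, e.g. (function name and sample parameters are ours):

```python
import math

def vss_radius(f, lx, ly):
    """Radius of the virtual scanning sphere (Definition 1): distance from
    the projection center to any sensor corner."""
    return math.sqrt(f ** 2 + (lx / 2) ** 2 + (ly / 2) ** 2)

# Hypothetical 0.1 m focal length, full-frame 36 mm x 24 mm sensor.
r_vss = vss_radius(f=0.1, lx=0.036, ly=0.024)
```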
A VSS of a single-lens camera is illustrated in Figure 4a. The sensor plane, the VSS, and the ROI plane are isomorphic because the central projection between them is a homography. Thus, we can convert the ROI’s complete covering problem into an equivalent spherical tiling problem, that is, finding a set of $\xi_i$ such that
$$P_{ROI} \subseteq \bigcup_i P_i$$
$$P_i = H_{cv}(K_c, \xi_i, X_c, S)$$
where $P_i$ is the sensor’s image under $H_{cv}$, and $P_{ROI}$ is the ROI’s image under $H_{wv}$. Since the sensor’s FOV can be considered a non-square rectangular pyramid, the contour of $P_i$ is a spherical quadrilateral. In particular, it has equal opposite sides, unequal adjacent sides, and equal interior angles that are greater than $\tfrac{\pi}{2}$. This shape is defined as a spherical pseudo rectangle (SPR). The following theorem holds for SPRs:
Theorem 1.
Congruent SPRs cannot achieve spherical tiling.
Proof. 
Suppose there exists a spherical region that can be tiled by a number of congruent SPRs. Let the interior angle of an SPR be $\kappa$, and let the degree of one of the vertices in this tiling pattern be $n \in \mathbb{N}$; then, $n\kappa = 2\pi$ must be satisfied. Since $\kappa \in (\tfrac{\pi}{2}, \pi)$, we get $n = 3$, i.e., the vertex is trivalent. In this case, there are only four possible edge combinations (aaaa, aaab, aabb, and aabc) for a spherical quadrilateral [42]. Since the edge combination of an SPR is abab, a contradiction arises.
Figure 5a depicts the abab edge combination of an SPR. The straight line segments ($a$ and $b$) depicted in the figure are, in fact, geodesic segments on the sphere. Figure 5b illustrates one possible SPR tiling pattern in which the longer edge serves as a common side. In this case, a third SPR ($A_1A_2A_3A_4$) has an edge combination in which two $a$ edges are adjacent. This contradicts the fact that an SPR must be of the form abab, with the two $a$ edges separated. A similar contradiction can be observed in the other SPR tiling pattern, illustrated in Figure 5c.    □
Theorem 1 claims that for a step-stare imaging system with a non-square rectangular sensor, there exists no Ξ that can make the corresponding camera footprints a tiling of the ROI’s neighborhood. Thus, alternative approximation methods need to be found to achieve seamless coverage.

4.1.2. Approximate Tiling Method

As can be seen from Figure 4a, given a VSS with a camera sensor oriented towards $(\theta, \phi)$, the corners on the sensor’s base side determine an arc of a parallel (line of latitude), while the corners on the leg determine an arc of a great circle. By referencing the definition of a spherical isosceles trapezoid [43], we define the geometry formed by these four arcs as a zonal isosceles trapezoid (ZIT).
Definition 2.
A spherical quadrilateral is called a zonal isosceles trapezoid if one pair of its opposite sides are arcs of parallels and the other pair are symmetric geodesics.
Given a ZIT, we call the arc of a parallel a base and the arc of a geodesic a leg.
Inspired by recursive zonal equal-area partition [44], we align a set of camera sensors on a parallel of a VSS, naming the set of their orientations a zonal cover (denoted by Ξ Z ). In this way, the envelope of these sensors’ corners forms a ZIT (denoted by T Z ). Let P Z be the union of all P i (the ith sensor’s projection on VSS) in a zonal cover and T Z be the maximal inscribed ZIT of P Z . The relationship between T Z , P Z , and T Z is illustrated in Figure 6.
For a ZIT, we have the following theorem:
Theorem 2.
Given a VSS and a separate ROI, there exists a set of ZITs on the VSS such that the union of the $P_{Z,i}$ contains $P_{ROI}$, namely $\bigcup_i P_{Z,i} \supseteq P_{ROI}$, where $P_{ROI}$ denotes the ROI’s image under $H_{wv}$, and $P_{Z,i}$ denotes the region enclosed by the ZIT $T_{Z,i}$.
Proof. 
$P_{ROI}$ is bounded in a hemisphere ($S_+^2$) of the VSS, i.e., $P_{ROI} \subset S_+^2$. A set of adjacent $T_{Z,i}$ is arranged along the meridional direction without gaps (as shown in Figure 4b) such that the latitude range of $\bigcup_i P_{Z,i}$ is greater than $\pi$. Then, the width of each $T_{Z,i}$ is adjusted such that the longitude range of $\bigcup_i P_{Z,i}$ is greater than $\pi$. In this way, we can make the hemisphere a subset of $\bigcup_i P_{Z,i}$, i.e., $S_+^2 \subset \bigcup_i P_{Z,i}$.    □
According to Theorem 2, we need to further investigate the following two problems:
  • How to determine each roll angle (or latitude) of a sensor in a zonal cover ( Ξ Z );
  • How to determine each pitch angle (or longitude) of the zonal cover ( Ξ Z ) in the meridional direction.

4.1.3. Zonal Arrangement

We propose arranging the zonal $\phi_i$ such that the low-latitude sides of adjacent sensors are connected side by side, as shown in Figure 6a. Then, we obtain the recursive formula for each $\phi_i$ in $\Xi_Z$ as follows:
$$\phi_{i+1} = \phi_i + \mathbb{1}_{\phi^+}(i+1) \cdot \Delta\phi$$
$$\Delta\phi = g(\theta_{base}^{l})$$
$$g(\theta) = \arccos\left(1 - \frac{l_x^2}{2\,(r_{VSS}\cos\theta)^2}\right)$$
where $\theta_{base}^{l}$ is the low latitude of the corresponding $T_Z$, which is also the latitude of a ZIT’s base, and $\mathbb{1}_{\phi^+}(\cdot)$ is an indicator function indicating that $\phi_{i+1}$ is larger than $\phi_i$.
From Equation (9c), we can deduce that $|g(\theta_{base}^{h})| > |g(\theta_{base}^{l})|$, which means that the FOVs of adjacent sensors overlap. Thus, this method provides an approximate tiling. The approximation error converges as the absolute latitude value decreases.
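The zonal recursion above can be sketched numerically: the longitude step $g(\theta)$ is the central angle subtended by a chord of length $l_x$ on the parallel at latitude $\theta$. Function names and the monotone-increasing $\phi$ sequence are illustrative assumptions.

```python
import math

def zonal_step(theta, lx, r_vss):
    """g(theta): longitude stride at which adjacent sensors touch side by
    side on the parallel of latitude theta, i.e. the central angle of a
    chord of length lx on a circle of radius r_vss * cos(theta)."""
    rho = r_vss * math.cos(theta)
    return math.acos(1.0 - lx ** 2 / (2.0 * rho ** 2))

def zonal_cover(phi0, theta_base_l, lx, r_vss, count):
    """Place `count` sensor longitudes recursively, stepping by
    g(theta_base_l), the stride at the ZIT's low-latitude base."""
    dphi = zonal_step(theta_base_l, lx, r_vss)
    return [phi0 + i * dphi for i in range(count)]

phis = zonal_cover(0.0, theta_base_l=0.3, lx=0.036, r_vss=0.1023, count=4)
```

Because $g$ grows with $|\theta|$, the stride that would be required at the high-latitude base exceeds the stride actually used, so adjacent FOVs overlap there, as noted after Equation (9c).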

4.1.4. Meridional Arrangement

The goal of the meridional arrangement is to find a recursive formula for tiling each zonal cover’s $T_Z$ along the meridional direction.
Given a zonal cover and an attached camera, for a point $A$ on a sensor’s base, we have the following equation according to Proposition A2:
$$\sin\theta_{bx} = \sin\theta_{mx} \cdot \sqrt{\frac{r_{VSS}^2 - (0.5\,l_x)^2}{r_{VSS}^2 - (0.5\,l_x)^2 + d^2}}$$
where $\theta_{bx}$ is the latitude of $A$; $\theta_{mx}$ is the latitude of the midpoint $M$ of the base; and $d$ is the Euclidean distance from $A$ to $M$, i.e., $d \in [0, 0.5\,l_x]$. When $d = 0.5\,l_x$, $|\theta_{bx}|$ reaches its minimum value; at this point, $A$ is a corner of the sensor. Let $\theta_{cx}$ denote the corner’s latitude; then, we have the following:
$$\sin\theta_{cx} = \sin\theta_{mx} \cdot \frac{\sqrt{r_{VSS}^2 - (0.5\,l_x)^2}}{r_{VSS}}$$
According to the definitions of $T_Z$ and ZIT, a sensor’s corner always lies on the base line of $T_Z$; thus, we have the following:
$$\theta_{cx} = \theta_{base}$$
where $\theta_{base}$ is the latitude of a ZIT’s base.
Let $P_{T_Z}$ denote the spherical region enclosed by $T_Z$. Then, based on Equations (11) and (12), we can obtain the latitude range of $P_{T_Z}$ (denoted by $\Theta_{T_Z}$) as follows:
$$\Theta_{T_Z} = \begin{cases} [\theta_{mx}^{S}, \theta_{base}^{N}] & \text{if } \theta_{base}^{S} > 0,\ \theta_{base}^{N} > 0 \\ [\theta_{base}^{S}, \theta_{base}^{N}] & \text{if } \theta_{base}^{S} < 0,\ \theta_{base}^{N} > 0 \\ [\theta_{base}^{S}, \theta_{mx}^{N}] & \text{if } \theta_{base}^{S} < 0,\ \theta_{base}^{N} < 0 \end{cases}$$
where the subscripts N and S denote the northern and southern part, respectively.
Combined with Equation (9c), $P_{T_Z}$ can be expressed as follows:
$$P_{T_Z} = \{ (\theta, \phi, r) \mid \theta \in \Theta_{T_Z},\ \phi \in [\phi_{min} - g(\theta), \phi_{max} + g(\theta)],\ r = r_{VSS} \}$$
where $r_{VSS}$ is the radius of the VSS, and $\phi_{min}$ and $\phi_{max}$ are the minimum and maximum longitudes of $\Xi_Z$, respectively.
The latitude $\theta_i$ of the $i$th zonal cover is defined as the latitude of its attached sensor’s center; then, we have the following:
$$\theta_i = \tfrac{1}{2}\left(\theta_{mx}^{N,i} + \theta_{mx}^{S,i}\right)$$
$$\theta_{mx}^{N,i} = \theta_i + \tfrac{1}{2}\theta_y$$
$$\theta_{mx}^{S,i} = \theta_i - \tfrac{1}{2}\theta_y$$
$$\theta_y = 2\arctan\frac{l_y/2}{f}$$
where $\theta_y$ is the sensor’s FOV along the $Y_s$ axis.
Given a pair of adjacent zonal covers aligned in the meridional direction, let $i_H$ and $i_L$ denote the indices of the relative high-latitude (closer to the poles) and relative low-latitude (closer to the equator) covers, respectively. According to Equation (13), if two $P_{T_Z}$ are to be tiled along the meridional direction, then the following equations must hold:
$$\theta_{mx}^{S,i_H} = \theta_{base}^{N,i_L} \quad \text{if } \theta_{mx}^{S,i_H} > 0$$
$$\theta_{mx}^{N,i_H} = \theta_{base}^{S,i_L} \quad \text{if } \theta_{mx}^{N,i_H} < 0$$
Therefore, a recursive formula for $\theta_i$ can be obtained by combining Equations (11), (15), and (16) as follows:
$$\theta_{i+1} = f(\theta_i) = \arcsin\left( \sin\left(\theta_i + \tfrac{1}{2}s\,\theta_y\right) \cdot \left( \frac{\sqrt{r_{VSS}^2 - (0.5\,l_x)^2}}{r_{VSS}} \right)^{k} \right) + \tfrac{1}{2}s\,\theta_y$$
$$s = \begin{cases} 1 & \text{if } \theta_{i+1} > \theta_i \\ -1 & \text{if } \theta_{i+1} < \theta_i \end{cases}$$
$$k = \begin{cases} 1 & \text{if } \mathbb{1}_{i_H}(i+1) = 1 \\ -1 & \text{if } \mathbb{1}_{i_H}(i+1) = 0 \end{cases}$$
where $\mathbb{1}_{i_H}(\cdot)$ is the indicator function that determines whether $i+1$ is $i_H$:
$$\mathbb{1}_{i_H}(i+1) = \begin{cases} 1 & \text{if } \theta_i \in \left[-\tfrac{1}{2}\theta_y, \tfrac{1}{2}\theta_y\right],\ \text{or } \theta_i < -\tfrac{1}{2}\theta_y \text{ and } \theta_{i+1} < \theta_i,\ \text{or } \theta_i > \tfrac{1}{2}\theta_y \text{ and } \theta_{i+1} > \theta_i \\ 0 & \text{otherwise} \end{cases}$$
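One step of the recursion in Equation (17) can be sketched as follows. As a simplification, $s$ and $k$ are supplied by the caller (scan direction and high-/low-latitude selector) rather than derived from the conditions above; names and parameters are illustrative.

```python
import math

def next_zonal_latitude(theta_i, s, k, theta_y, lx, r_vss):
    """Latitude of the next zonal cover per Eq. (17); s = +/-1 is the
    meridional scan direction, k = +/-1 the high/low-latitude selector."""
    ratio = math.sqrt(r_vss ** 2 - (0.5 * lx) ** 2) / r_vss
    return (math.asin(math.sin(theta_i + 0.5 * s * theta_y) * ratio ** k)
            + 0.5 * s * theta_y)

# Hypothetical camera: f = 0.1 m, 36 mm x 24 mm sensor.
f, lx, ly = 0.1, 0.036, 0.024
theta_y = 2.0 * math.atan((ly / 2.0) / f)        # sensor FOV along Y_s
r_vss = math.sqrt(f ** 2 + (lx / 2) ** 2 + (ly / 2) ** 2)
theta_next = next_zonal_latitude(0.0, s=1, k=1,
                                 theta_y=theta_y, lx=lx, r_vss=r_vss)
```

Starting from $\theta_i = 0$, the step comes out slightly smaller than the full FOV $\theta_y$, reflecting the deliberate meridional overlap between adjacent zonal covers.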

4.1.5. Hyperbolic Grid

Since a set of $T_Z$ can form a tiling on a VSS, the projection of the corresponding sensor centers on the ground forms a grid graph. Given a parallel of a VSS with a latitude of $\theta$, its conic coefficient matrix on the plane ($Z_v = r_{VSS}\sin\theta$ in $F_v$) can be expressed as follows:
$$C_s = \begin{bmatrix} 1 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & -(r_{VSS}\cos\theta)^2 \end{bmatrix}$$
Under a point transformation ($X' = H_{wv}X$), the conic $C_s$ can be transformed as follows:
$$C_s' = (H_{wv})^{T} C_s H_{wv}$$
where $C_s'$ is the projection of $C_s$ on the ground.
By considering the transformation as an imaging process, the finite projective camera model can be used to obtain $H_{wv}$ [45] as follows:
$$X' = H_{wv} X = R_v \begin{bmatrix} 1 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & Z \\ 0 & 0 & 1 \end{bmatrix} \begin{bmatrix} X \\ Y \\ 1 \end{bmatrix}$$
$$R_v = K_v R_{wv} \,[\, I \mid -\tilde{C}_w \,]$$
$$K_v = \begin{bmatrix} r_{VSS}\sin\theta & & 0 \\ & r_{VSS}\sin\theta & 0 \\ & & 1 \end{bmatrix}$$
$$R_{wv} = R_{bv} R_{wb}$$
where $Z$ is the elevation value of the ROI in $F_w$, and $\tilde{C}_w$ is the inhomogeneous coordinate of $O_v$ in $F_w$. Since $R_{wb}$ is related to the attitude of the aircraft, the theory of conic curves tells us that $C_s'$ could be any kind of conic, i.e., an ellipse, parabola, or hyperbola. However, under the assumptions presented in Section 3, we can ignore the effect of the aircraft’s attitude; thus, $C_s'$ takes the form of a hyperbola. Therefore, by using the approximate tiling method, the centers of the camera ground footprints are located on a bundle of hyperbolas, forming a seamless hyperbolic grid (SHG).
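The conic transformation above can be checked numerically: a circle (the parallel) maps to a hyperbola whenever the homography is oblique enough. The toy homography below is an arbitrary illustrative choice, not the paper's $H_{wv}$.

```python
import numpy as np

def project_parallel(theta, r_vss, H_wv):
    """Transform the conic of a VSS parallel by a homography,
    C_s' = H^T C_s H, returning the 3x3 conic matrix on the ground."""
    rho = r_vss * np.cos(theta)
    C_s = np.diag([1.0, 1.0, -rho ** 2])   # circle x^2 + y^2 = rho^2
    return H_wv.T @ C_s @ H_wv

def is_hyperbola(C):
    """Classify the conic by the sign of the determinant of its
    upper-left 2x2 block: negative means hyperbola."""
    return np.linalg.det(C[:2, :2]) < 0

# A hypothetical oblique homography mapping the parallel to a hyperbola.
H = np.array([[1.0, 0.0, 0.0],
              [0.0, 1.0, 0.0],
              [1.0, 0.0, 1.0]])
C_ground = project_parallel(0.0, 2.0, H)
```

With the identity homography the parallel stays a circle; the oblique term in the last row of `H` is what pushes part of the curve through the line at infinity and produces the hyperbolic branch.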

4.2. Dual-Caliper Optimization

As described above, a seamless hyperbolic grid ensures coverage completeness. However, to reduce the number of step-stare steps and shorten the path length, the grid layout needs further optimization. For this purpose, we propose a novel dual-caliper optimization (DCO) method. Before that, we present the necessary definitions, which are illustrated in Figure 7.
Definition 3.
A supporting hyperbola of a convex set (P) is a hyperbola with one of its branches (B) passing through a point of P so that the interior of P lies entirely on one side of B and the corresponding focus lies on the opposite side. Point C is called a support point.
Definition 4.
An extreme hyperbola is a supporting hyperbola that is a projection of the VSS’s parallel.
Definition 5.
A zonal slice of a convex set (P) is a closed region covered by the central projection of a spherical zone of a VSS.
Definition 6.
Given a zonal slice (Z) of a convex set (P), the partial boundary shared by Z and P is called the slice boundary.
Definition 7.
An extreme orientation of a zonal cover ( Ξ Z ) is an element ( θ , ϕ ) of Ξ Z such that the projection of its corresponding camera sensor’s vertical outer sideline admits a supporting line of a zonal slice formed by the extended base of T Z .
According to the definition of zonal cover, the extreme orientation has the following property:
Property 1.
Given a zonal cover ( Ξ Z ), the longitude (ϕ) of an extreme orientation is either the maximum or the minimum of all the longitudes in Ξ Z .
In this section, we first obtain the extreme hyperbola of the ROI and determine the optimal arrangement of the zonal covers based on the latitude range. Then, the extreme orientation of each zonal cover is calculated and the zonal stride is optimized. The method described above involves solving for the supporting hypersurface (i.e., supporting hyperbola and supporting line) in both meridional and zonal directions, which resembles using two types of calipers to measure the size of the ROI. Therefore, we name this method dual-caliper optimization (DCO), as shown in Figure 7a, inspired by the rotating caliper algorithm [46] in computational geometry.

4.2.1. Meridional Curved Caliper

In the first phase of DCO, we use a meridional curved caliper to determine two extreme hyperbolas of an ROI. Specific geometric algorithms differ depending on the shape of the ROI, such as circular or convex polygonal, based on the assumptions presented in Section 3. It is worth noting that a hyperbola mathematically has two branches, only one of which is the true projection of the parallel on the sphere, called the valid branch in this paper, while the other one is called the fake branch.
(1) For a circular ROI:
The conic matrix of a circular ROI on the ground can be expressed as follows:
$$C_{roi} = \begin{bmatrix} 1 & 0 & -X_{roi} \\ 0 & 1 & -Y_{roi} \\ -X_{roi} & -Y_{roi} & X_{roi}^2 + Y_{roi}^2 - r_{roi}^2 \end{bmatrix}$$
where X ˜ r o i = [ X r o i , Y r o i , Z r o i ] T is the center, and r r o i is the radius. The goal is to solve θ such that the following simultaneous equations have only one real solution:
$$X^{T} C_{roi}\, X = 0$$
$$X^{T} C_s'\, X = 0$$
where X represents the homogeneous coordinates of a point on the ground. The above system of equations means that C_s' (the image of C_s(θ) under H_vw) and C_roi are tangent; together they form a quartic equation with nonlinear terms in the unknown θ, which is difficult to solve analytically. We employ a numerical method to cope with this problem.
All linear combinations of the conics that pass through the intersections of C_s' and C_roi can be represented by a bundle of conics (C = C_s' + λ C_roi). We now search for a λ such that C is degenerate, which means that it can be factorized into a product of two linear polynomials over the complex field. The determinant of a degenerate conic C equals zero:
$$\det(C) = \det(C_s' + \lambda\, C_{roi}) = 0$$
This is essentially a cubic equation in λ with three solutions in the complex plane. Since the complex roots of a polynomial with real coefficients occur in conjugate pairs, a cubic equation has at least one real root. This means that at least one degenerate conic in the bundle (C) is real. In this way, a real line pair can be decomposed from C by a method introduced in [47]. Let C consist of two distinct lines, namely l and m, as follows:
$$C = l\, m^{T} + m\, l^{T}$$
The adjoint of C is
$$C^{*} = (l\, m^{T} + m\, l^{T})^{*} = (l \times m)(l \times m)^{T} = p\, p^{T}$$
By the definition of the cross product, p = l × m is the intersection point of lines l and m. Let p = [p_1, p_2, p_3]^T; then, p p^T can be written as follows:
$$p\, p^{T} = \begin{bmatrix} p_1^2 & p_1 p_2 & p_1 p_3 \\ p_2 p_1 & p_2^2 & p_2 p_3 \\ p_3 p_1 & p_3 p_2 & p_3^2 \end{bmatrix} = B$$
Now, we obtain p = Column(B, i)/√(B_{i,i}), where the subscript (i, i) denotes the index of any non-zero diagonal element of B, and Column(B, i) denotes the ith column of B. It can be derived that the skew-symmetric matrix (M_p) of p is m l^T − l m^T. Combining Equation (25), we have the following:
$$C + M_p = 2\, m\, l^{T}$$
where m l^T is a square matrix of rank 1. Then, by selecting any non-zero element of C + M_p, we can read off the homogeneous coordinates of l and m as the corresponding row and column of C + M_p, respectively.
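The line-pair decomposition described above can be sketched as follows. The `adjugate` helper and the sign handling are our additions: the adjoint of a rank-2 conic recovers p p^T only up to sign, and the two lines come out in arbitrary order and scale. NumPy is assumed.

```python
import numpy as np

# Sketch of decomposing a degenerate (rank-2) conic into its line pair.

def skew(p):
    """Skew-symmetric matrix [p]_x of a 3-vector p."""
    return np.array([[0.0, -p[2], p[1]],
                     [p[2], 0.0, -p[0]],
                     [-p[1], p[0], 0.0]])

def adjugate(M):
    """Classical adjoint of a 3x3 matrix via cofactors."""
    A = np.zeros((3, 3))
    for i in range(3):
        for j in range(3):
            minor = np.delete(np.delete(M, i, 0), j, 1)
            A[j, i] = (-1) ** (i + j) * np.linalg.det(minor)
    return A

def split_degenerate_conic(C):
    """Recover the two lines of C = l m^T + m l^T (up to scale and order)."""
    B = adjugate(C)                          # = +/- p p^T with p = l x m
    i = int(np.argmax(np.abs(np.diag(B))))
    if B[i, i] < 0:
        B = -B                               # fix the overall sign
    p = B[:, i] / np.sqrt(B[i, i])           # p = +/- (l x m)
    D = C + skew(p)                          # rank 1: 2 m l^T or 2 l m^T
    r, c = np.unravel_index(int(np.argmax(np.abs(D))), D.shape)
    return D[r, :], D[:, c]                  # row/column give the two lines

# Example: build a degenerate conic from two known lines and split it back.
l = np.array([1.0, 0.0, -1.0])               # line x = 1
m = np.array([0.0, 1.0, 2.0])                # line y = -2
C = np.outer(l, m) + np.outer(m, l)
l_rec, m_rec = split_degenerate_conic(C)
```

Because of the sign ambiguity in p, the recovered pair may be swapped relative to (l, m); projectively this is irrelevant, as the line pair is the same.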
To solve for the intersection point ( p ) of line m (resp. l ) and circle C r o i , we first extract two different arbitrary points ( p 1 and p 2 ) from m (resp. l ); then, p can be expressed as p 1 + k p 2 . Since C r o i is a symmetric square matrix, we obtain a quadratic equation with an unknown k, as follows:
$$\left(p_2^{T} C_{roi}\, p_2\right) k^2 + 2\left(p_1^{T} C_{roi}\, p_2\right) k + p_1^{T} C_{roi}\, p_1 = 0$$
If the above equation has two conjugate complex solutions, then there is no real intersection between line m (or l) and circle C_roi. Two distinct real solutions indicate that they intersect at two real points, and one repeated real root means they are tangent. If a real solution exists, we can calculate the corresponding latitude (θ_p) of p using Equation (A4) and compare it with the θ of C_s'. If and only if they are equal, the intersection lies on the valid branch, and the circular ROI intersects the hyperbola of θ.
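A worked instance of this quadratic, with illustrative values (a unit circle and a horizontal line, not quantities from the paper):

```python
import numpy as np

# Line-conic intersection via the quadratic in k: a point on the line is
# p = p1 + k*p2, with p1 a finite point on it and p2 the point at infinity
# in its direction. Illustrative values: unit circle and the line y = 0.5.

C_roi = np.diag([1.0, 1.0, -1.0])        # conic matrix of the unit circle
p1 = np.array([0.0, 0.5, 1.0])           # a point on the line y = 0.5
p2 = np.array([1.0, 0.0, 0.0])           # the line's direction

a = p2 @ C_roi @ p2
b = 2.0 * (p1 @ C_roi @ p2)
c = p1 @ C_roi @ p1
disc = b * b - 4.0 * a * c               # < 0: no real intersection,
                                         # = 0: tangency, > 0: two real points
points = []
if disc >= 0:
    for k in ((-b + np.sqrt(disc)) / (2 * a), (-b - np.sqrt(disc)) / (2 * a)):
        p = p1 + k * p2
        points.append(p[:2] / p[2])      # dehomogenize
print(points)   # (+/- sqrt(3)/2, 0.5), both on the unit circle
```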
By performing the above steps iteratively within a binary search, we can obtain the θ that forms an extreme hyperbola of the ROI. The pseudocode of Algorithm 1 is presented below. [θ_lower, θ_upper] represents the search interval, and the set {P_i} can be considered the "sign" indicating whether C_s' and C_roi intersect. Figure 8a provides a clear illustration of this algorithm. Since the aperture angle of the ROI (θ_ap) is less than θ_op (i.e., 2 arctan(r_roi/h)), the following propositions hold: θ_b1 and θ_b2 guarantee that C_s' and C_roi do not intersect, while θ_b0 guarantees intersection. The algorithm repeatedly bisects the interval in two directions, as illustrated by the arrows, until the interval is sufficiently small. Subsequently, we obtain the solution for θ. The red hyperbolas represent the solution's corresponding C_s'.
Algorithm 1: Meridional Curved Caliper Algorithm for a Circular ROI
(The pseudocode of Algorithm 1 appears as an image in the original article.)
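The bisection at the core of Algorithm 1 can be sketched independently of the conic machinery. The predicate below is a toy stand-in for the intersection test between the latitude-θ hyperbola and the ROI; it is assumed monotone on the search interval, as in the algorithm.

```python
# Sketch of the bisection loop of Algorithm 1 with the intersection test
# abstracted as a monotone predicate. The toy predicate (a disc crossed by
# the line y = theta) merely stands in for the paper's conic-based test.

def bisect_extreme(intersects, theta_no, theta_yes, tol=1e-9):
    """Boundary value between 'intersects' and 'does not intersect'."""
    while abs(theta_yes - theta_no) > tol:
        mid = 0.5 * (theta_no + theta_yes)
        if intersects(mid):
            theta_yes = mid
        else:
            theta_no = mid
    return 0.5 * (theta_no + theta_yes)

# Toy ROI: a disc of radius 0.3 centred at height 0.5; the line y = theta
# meets it iff |theta - 0.5| <= 0.3, so the "extreme latitude" is 0.8.
theta_north = bisect_extreme(lambda t: abs(t - 0.5) <= 0.3,
                             theta_no=1.0, theta_yes=0.5)
print(round(theta_north, 6))   # 0.8
```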
(2) For convex polygonal ROIs:
Figure 8b provides an illustration of this algorithm for convex polygonal ROIs. Given a convex polygon ( B R O I ), the procedure to determine the latitude ( θ ) of the extreme hyperbola is as follows. First, we compute the latitude of all vertices of B R O I by using Equation (A4) and select the vertex with the largest (resp. smallest) latitude ( θ 1 ), as denoted by V 1 . Next, the projection ( C 1 ) on the ground corresponding to θ 1 is obtained by Equations (19) and (20). Finally, using Equation (29), we can determine the intersection of C 1 with l 1 and l 2 , which are the adjacent line segments of V 1 . If there exists no real solution other than V 1 , then θ 1 is the latitude of the extreme hyperbola, as shown in Figure 9a. Otherwise, based on Theorem 3, we further derive two candidate θ solutions numerically such that the corresponding valid branch of the hyperbola is tangent to l 1 and l 2 (as shown in Figure 9b), respectively. Finally, the largest (resp. smallest) value of these two candidates is selected as the latitude of an extreme hyperbola.
Theorem 3.
Given a VSS, a convex polygon ( B R O I ), and a vertex (V) of B R O I with the maximum or minimum latitude, an extreme hyperbola of B R O I must either pass through V or be tangent to one of its adjacent edges, without exception.
Proof. 
Figure 9 illustrates the projection of a bundle of parallels of a VSS. The hyperbolas closer to the upper side correspond to larger latitudes (and vice versa). Let V 1 be the vertex in B R O I with the largest latitude in F v , and let V 2 be an adjacent vertex whose corresponding latitude is no greater than that of V 1 . Let V 1 V 2 ¯ be an open line segment, and let C 1 denote the hyperbola that passes through V 1 . Only two possible topological relations exist between V 1 V 2 ¯ and C 1 . The first is that C 1 and V 1 V 2 ¯ have no intersection (as shown in Figure 9a) in such a way that C 1 becomes an extreme hyperbola ( C s u p p o r t ). The second is that C 1 and V 1 V 2 ¯ have one intersection, which lies in V 1 V 2 ¯ , as shown in Figure 9b. In the latter case, there must exist a hyperbola with a greater latitude than C 1 , which is tangent to V 1 V 2 ¯ . Similarly, the above conclusion holds for the vertex with the smallest latitude. □

4.2.2. Meridional Stride Optimization

The two extreme hyperbolas of the ROI determine the minimum meridional FOV required to cover the ROI. Moreover, the latitudes of the corresponding extreme zonal covers, denoted by θ_i^{N*} and θ_i^{S*}, can be derived using the equations presented in Section 4.1.4. Since [θ_i^{N*}, θ_i^{S*}] is the optimal meridional covering range, the meridional stride between adjacent zonal covers can be further optimized.
First, we compute the size (n) of {Ξ_{Z,i}}. Starting from θ_i^{S*} and following Equation (17) based on the seamless hyperbolic grid, we obtain the latitude of each Ξ_{Z,i} recursively until it exceeds the northernmost bound (θ_i^{N*}); the iteration count is then n. By adding a refinement term (λ) to Equation (17), the optimization of the meridional stride becomes equivalent to finding a λ that satisfies the following equation:
$$\theta_n = f(\theta_{n-1}) - \lambda = f(f(\theta_{n-2}) - \lambda) - \lambda = f(\cdots f(\theta_1) - \lambda \cdots) - \lambda = \theta_i^{N*}$$
where θ_1 = θ_i^{S*}, and f(·) is the recursive operator in Equation (17). This is essentially a nested nonlinear equation. Since θ_n − θ_i^{N*} is a function of λ that is monotone over [0, ½·s·θ_y], the root λ can be found by a numerical method such as binary search.
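A sketch of this refinement with a toy recursion f (a stand-in for Equation (17), which is not reproduced here): binary search on λ until the iterated latitude lands exactly on the target bound. Any f that makes the final latitude monotone in λ works the same way.

```python
import math

# Stride refinement sketch: find lambda so that iterating
# theta_k = f(theta_{k-1}) - lambda for n-1 steps hits theta_target exactly.

def f(theta):
    return theta + 0.1 * math.cos(theta)   # toy latitude step, not Eq. (17)

def solve_lambda(theta_start, theta_target, n, lam_max, tol=1e-12):
    def overshoot(lam):
        theta = theta_start
        for _ in range(n - 1):
            theta = f(theta) - lam
        return theta - theta_target
    lo, hi = 0.0, lam_max                  # overshoot(lo) >= 0 >= overshoot(hi)
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if overshoot(mid) > 0:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

theta_s, n = 0.2, 5
theta_default = theta_s
for _ in range(n - 1):
    theta_default = f(theta_default)       # northernmost reach without refinement
theta_target = theta_default - 0.05        # desired exact northern bound
lam = solve_lambda(theta_s, theta_target, n, lam_max=0.05)
```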

4.2.3. Zonal Straight Caliper

After meridional stride optimization, we can collect a set of zonal covers with determined latitudes ( { θ i } ). Next, we aim to solve the optimal range of longitude ( ϕ ) for each zonal cover ( Ξ Z , i ). Given a zonal cover ( Ξ Z ) and a zonal slice ( C s l i c e ) enclosed by the projection of T Z ’s spherical zone, we define the zonal straight caliper of C s l i c e as a pair of supporting lines, which are the projections of the camera sensor’s outer vertical sideline on the ground, as shown in Figure 7a. In such case, the orientation of the camera is the extreme orientation (Definition 7), and the supporting line is called an extreme line. The corresponding extreme orientations of the zonal straight caliper determine the minimum zonal FOV to cover the zonal slice.
The algorithm of a zonal straight caliper consists of the following three steps: (1) Find all candidate support points on the slice boundary. (2) Select the support point from the candidates. (3) Obtain the longitude of extreme orientation based on the support point. It is worth noting that the specific methods in step 1 and step 2 depend on the ROI’s geometry.
Step 1: For a circular ROI, the slice boundary is an arc, and the candidate support point could be either the endpoint of the arc or the point at which the arc and the supporting line are tangent. For a convex polygonal ROI, the slice boundary is a polygonal chain; then, the candidate support point could be any vertex in the chain.
Step 2: For a circular ROI, according to Proposition A3, we can determine whether each endpoint (X) of the slice boundary is the support point. If neither X is the support point, binary search can be used to find φ such that the projection (H_wc(θ, φ))^T · m(X) of a camera sensor's outer vertical sideline and the slice boundary are tangent, where H_wc and m are as follows:
$$H_{wc}(\theta,\phi) = K_c\, R_{pc}\, R_{rp}(\theta)\, R_{br}(\phi)\, R_{wb}\, [\, I \mid -\tilde{C}_w \,] \begin{bmatrix} 1 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & Z_{roi} \\ 0 & 0 & 1 \end{bmatrix}$$
$$m(X) = \begin{cases} x_{-+} \times x_{--}, & \text{if } \mathbb{1}_{zonal+}(X) = 1 \\ x_{++} \times x_{+-}, & \text{if } \mathbb{1}_{zonal+}(X) = -1 \end{cases}$$
where x_{++}, x_{+−}, x_{−−}, and x_{−+} are the four corner points of the sensor in homogeneous coordinates in F_s, indicating the top-right, bottom-right, bottom-left, and top-left corners, respectively. 1_{zonal+}(·) is a sign function indicating the position of X in the zonal slice, taking the value 1 if X is on the left flank and −1 otherwise.
For a convex polygonal ROI, Property 1 is referenced as the rule used to select a support point from candidates.
Step 3: According to Equation (9c), we can calculate the longitude ( ϕ ) of an extreme orientation based on the support point ( X ) as follows:
$$\phi = \phi_p - \mathbb{1}_{zonal+}(X) \cdot \frac{1}{2} \arccos\!\left(1 - \frac{l_x^2}{2\, r_{VSS}^2 \cos^2 \theta_p}\right)$$
where ( θ p , ϕ p ) is the latitude–longitude coordinate of X in F v , which can be calculated by Equation (A4).

4.2.4. Zonal Stride Optimization

After obtaining the longitudinal bounds ({φ_i^W, φ_i^E}) for Ξ_{Z,i} (superscripts W and E denote the zonal direction of the bound), further optimization of the zonal stride is needed to make the zonal cells more compact.
When using the default longitude step (Δφ_i = g(θ_base^l)) of the seamless hyperbolic grid in Equation (9c), the number of camera sensors (n_i) in Ξ_{Z,i} is expressed as follows:
$$n_i = 1 + \left\lceil \frac{|\phi_i^E - \phi_i^W|}{\Delta\phi_i} \right\rceil$$
Thus, the optimal longitudinal step (Δφ_i^*) is expressed as follows:
$$\Delta\phi_i^* = \frac{|\phi_i^E - \phi_i^W|}{n_i - 1}$$
Then, the optimized recursive formula for the longitude of each camera in a zonal cover is as follows:
$$\phi_{i,j} = \phi_i^E - (j - 1)\,\Delta\phi_i^*; \quad j \in [1, n_i]$$
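A minimal sketch of this zonal stride optimization: the default grid step fixes the sensor count, and the same longitudinal span is then divided evenly so the outermost sensors sit exactly on the bounds. The ceiling in the count is our reading of the count formula; bounds and step values are illustrative.

```python
import math

# Zonal stride optimization sketch: keep the sensor count implied by the
# default step, then redistribute the sensors evenly between the bounds.

def zonal_longitudes(phi_e, phi_w, dphi_default):
    n = 1 + math.ceil(abs(phi_e - phi_w) / dphi_default)   # sensor count n_i
    dphi_opt = abs(phi_e - phi_w) / (n - 1)                # optimized step
    return [phi_e - (j - 1) * dphi_opt for j in range(1, n + 1)]

phis = zonal_longitudes(phi_e=0.9, phi_w=0.1, dphi_default=0.25)
print(phis)   # 5 longitudes, evenly spaced from 0.9 down to 0.1
```

The optimized step is never larger than the default, so the seamlessness of the grid is preserved while the outer sensors no longer overhang the ROI.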

4.3. Path Planning

After determining the optimized coverage of the ROI, a rotation path should be provided such that the optical system performs a “stare” at each orientation step. For scenarios requiring periodic visits, the path must be a closed loop. In this case, path planning is equivalent to the symmetric traveling salesman problem (TSP). The TSP can be characterized as a Dantzig–Fulkerson–Johnson (DFJ) model [48] as follows, where Equation (37d) stipulates that no subtour is allowed for this path.
$$\min \sum_{i=1}^{n} \sum_{j=1,\, j \neq i}^{n} c_{i,j}\, x_{i,j} \tag{37a}$$
$$\text{s.t.} \quad \sum_{i=1,\, i \neq j}^{n} x_{i,j} = 1; \quad j \in \{1, \dots, n\} \tag{37b}$$
$$\sum_{j=1,\, j \neq i}^{n} x_{i,j} = 1; \quad i \in \{1, \dots, n\} \tag{37c}$$
$$\sum_{i \in Q} \sum_{j \neq i,\, j \in Q} x_{i,j} \leq |Q| - 1; \quad \forall\, Q \subsetneq \{1, \dots, n\},\ |Q| \geq 2 \tag{37d}$$
where x i , j is a binary variable indicating whether there is a path from ξ i to ξ j , and c i , j represents the distance between ξ i and ξ j . For a step-stare system, since the motors of the pitch and roll gimbals are driven simultaneously during each step movement, we adopt the Chebyshev distance ( c i , j ) as follows:
$$c_{i,j} = \max\left(|\theta_i - \theta_j|,\ |\phi_i - \phi_j|\right) \tag{38}$$
The TSP is an integer programming problem that can be solved using exact, approximation, or heuristic algorithms. Among these, branch and bound is a widely used exact algorithm and has been adopted in many modern TSP solvers [49]. Considering that the number of steps is typically small in most scenarios, we use branch and bound as the planning method for closed paths.
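For the small instances typical of step-stare scanning, an exhaustive search over permutations returns the same exact optimum as branch and bound under the Chebyshev cost above. A self-contained sketch with toy orientations (not taken from the experiments):

```python
from itertools import permutations

def chebyshev(a, b):
    # Chebyshev step cost: pitch and roll gimbals are driven simultaneously
    return max(abs(a[0] - b[0]), abs(a[1] - b[1]))

def exact_closed_tour(orientations):
    """Exact shortest closed tour by exhaustive search over permutations.
    Fine for small step counts; a branch-and-bound solver (as used in the
    paper) returns the same optimum faster."""
    n = len(orientations)
    best_len, best_tour = float("inf"), None
    for perm in permutations(range(1, n)):      # fix the start at index 0
        tour = (0,) + perm + (0,)
        length = sum(chebyshev(orientations[tour[i]], orientations[tour[i + 1]])
                     for i in range(n))
        if length < best_len:
            best_len, best_tour = length, tour
    return best_tour, best_len

# Toy (theta, phi) orientations on a 2x3 grid of stares (illustrative).
grid = [(0.0, 0.0), (0.0, 1.0), (0.0, 2.0),
        (1.0, 0.0), (1.0, 1.0), (1.0, 2.0)]
tour, length = exact_closed_tour(grid)
print(length)   # 6.0: a boustrophedon-like loop is optimal here
```

On this grid every edge costs at least 1 under the Chebyshev metric and a closed tour of six cells has six edges, so 6.0 is provably optimal.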
In some scenarios, it may be necessary to visit multiple ROIs sequentially, leading to an open-form path for each ROI. To generate such a path, we utilize the boustrophedon method. This facilitates the fast motion of the optical servo in a single direction and simplifies the mechanism for image smear compensation.
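The open-path boustrophedon ordering can be sketched as follows; the rows stand for zonal covers, and the orientation values are illustrative rather than outputs of the optimized grid.

```python
# Boustrophedon ordering of grid cells: rows (zonal covers) are traversed
# alternately so each row is swept in a single direction, which suits fast
# single-direction servo motion and image smear compensation.

def boustrophedon(rows):
    """rows: list of per-zonal-cover orientation lists, north to south."""
    path = []
    for i, row in enumerate(rows):
        path.extend(row if i % 2 == 0 else list(reversed(row)))
    return path

rows = [[(0, 0), (0, 1), (0, 2)],
        [(1, 0), (1, 1), (1, 2)]]
print(boustrophedon(rows))
# [(0, 0), (0, 1), (0, 2), (1, 2), (1, 1), (1, 0)]
```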

5. Results

This section introduces the dataset setup and experimental evaluation metrics, followed by an ablation experiment to verify the necessity of each component in the proposed method. Finally, we evaluate several typical coverage path planning methods for performance comparison.

5.1. Experimental Setup and Dataset Generation

In order to fulfill the typical requirements of wide-area persistent surveillance, the parameters were initially set according to the specifications outlined in Table 1. The camera parameters (focal length, pixel size, and number of pixels) were selected based on off-the-shelf uncooled long-wave infrared cameras. The spatial resolution threshold was chosen as the lowest value acceptable for state-of-the-art computer vision algorithms to detect moving targets such as vessels and vehicles. In addition, the heading angle of the UAV was set to a fixed initial value of 20°, and the ground elevation was set to zero for convenience of calculation. Given that the ROIs are randomly generated, the heading angle setting does not influence the experimental conclusions.
As shown in Figure 10, the ROI (C_ROI) is limited to a circle (S_M) defined by the parameters listed in Table 1, namely S_M = {X_M : |X_M − X_c| ≤ r_M}, where X_c is the planar coordinate of O_c in F_w. The radius (r_M) can be calculated using basic trigonometry and the pinhole camera model as follows:
$$r_M = h \cdot \sqrt{\left(\frac{f \cdot SR_t}{b \cdot h}\right)^2 - 1} \tag{39}$$
where b is the pixel size, h is the relative height, S R t is the spatial resolution threshold of the LOS, and f is the camera’s focal length. As indicated by Equation (39), the variables b and h are inversely related to r M , while f and S R t are positively related to r M . It should be noted that these parameters only affect the position and geometry of the generated ROIs utilized in this experiment.
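As a numeric check: one reading of the radius formula consistent with the monotonic relations stated in the text is r_M = h·√((f·SR_t/(b·h))² − 1), i.e., the ground radius of the maximum slant range R = f·SR_t/b at which the LOS resolution b·R/f still meets the threshold. The parameter values below are illustrative, not those of Table 1.

```python
import math

# Surveillance radius sketch: r_M from the right triangle formed by the
# flight height h and the maximum acceptable slant range R = f*SR_t/b.

def r_m(b, h, f, sr_t):
    ratio = (f * sr_t) / (b * h)       # R / h
    assert ratio > 1.0, "threshold resolution unreachable even at nadir"
    return h * math.sqrt(ratio ** 2 - 1.0)

# illustrative pixel pitch [m], height [m], focal length [m], threshold [m]
b, h, f, sr_t = 12e-6, 3000.0, 0.1, 0.5
print(round(r_m(b, h, f, sr_t), 1))    # surveillance radius in meters
```

The formula reproduces the stated relations: increasing f or SR_t enlarges r_M, while increasing b or h shrinks it.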
The dataset of ROIs consists of two categories, namely circles and convex polygons, each generated randomly within S_M. For a circular ROI, we employ the candidate radius sampling strategy [50]; the process is illustrated in Figure 11a. First, the radius (r_ROI) is randomly sampled in [(b·h/f)·max(n_x, n_y), r_M] to provide scale variation, where the lower bound is designed to prevent the generation of a tiny ROI. Next, the polar coordinates of the center (r_p, α_p) are generated randomly within the range [0, r_M − r_ROI] × [0, 2π] to ensure adequate location diversity. For a convex polygonal ROI, we create a random number of points within S_M and then compute the convex hull, as shown in Figure 11b. Each convex polygon sample is determined by the set of Cartesian coordinates of its vertices.
Moreover, the ROI should be large enough to prevent the algorithms from generating insufficient cells, which would lead to ineffective performance comparisons. For this purpose, we pre-screened the synthetic samples using our proposed AHG method to discard samples with fewer than 6 cells or more than 15 cells. Finally, the ROIs in the dataset were categorized into 10 groups based on the number of cells in our proposed algorithm’s results, and 40 ROI samples were retained for each group.

5.2. Evaluation Metric

Regarding performance evaluation, we utilize the three most commonly used metrics for coverage path planning, namely path length (PL), coverage rate (CR, denoted as r C R ), and computation time (CT). Specifically, PL uses the Chebyshev distance to evaluate the angular traveling length, while CT uses CPU consumption time to evaluate the algorithm’s complexity. r C R characterizes the completeness of ROI coverage, which is defined as follows:
$$r_{CR} = \frac{\mathrm{Area}(C_{ROI} \cap C_{cov})}{\mathrm{Area}(C_{ROI})}$$
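For polygonal regions, this ratio can be evaluated exactly by clipping the ROI with the covered region and comparing areas. The sketch below uses Sutherland–Hodgman clipping for two convex, counter-clockwise polygons; the shapes are illustrative, not dataset samples.

```python
# Exact coverage-rate computation for convex polygonal regions:
# clip the ROI by the covered region and divide the areas.

def shoelace(poly):
    """Polygon area via the shoelace formula."""
    return 0.5 * abs(sum(x1 * y2 - x2 * y1
                         for (x1, y1), (x2, y2) in zip(poly, poly[1:] + poly[:1])))

def _intersect(p, q, a, b):
    """Intersection of segment p-q with the infinite line a-b."""
    x1, y1, x2, y2 = *p, *q
    x3, y3, x4, y4 = *a, *b
    d = (x1 - x2) * (y3 - y4) - (y1 - y2) * (x3 - x4)
    t = ((x1 - x3) * (y3 - y4) - (y1 - y3) * (x3 - x4)) / d
    return (x1 + t * (x2 - x1), y1 + t * (y2 - y1))

def clip(subject, clipper):
    """Sutherland-Hodgman: clip convex CCW `subject` by convex CCW `clipper`."""
    out = list(subject)
    for i in range(len(clipper)):
        a, b = clipper[i], clipper[(i + 1) % len(clipper)]
        inside = lambda p: ((b[0] - a[0]) * (p[1] - a[1])
                            - (b[1] - a[1]) * (p[0] - a[0])) >= 0
        inp, out = out, []
        for j, cur in enumerate(inp):
            prev = inp[j - 1]
            if inside(cur):
                if not inside(prev):
                    out.append(_intersect(prev, cur, a, b))
                out.append(cur)
            elif inside(prev):
                out.append(_intersect(prev, cur, a, b))
        if not out:
            return []
    return out

def coverage_rate(roi, cov):
    return shoelace(clip(roi, cov)) / shoelace(roi)

roi = [(0, 0), (2, 0), (2, 2), (0, 2)]     # 2x2 square ROI
cov = [(1, -1), (3, -1), (3, 3), (1, 3)]   # covers its right half
print(coverage_rate(roi, cov))             # 0.5
```

In practice the covered region is a union of camera footprints rather than a single convex polygon, so a polygon library or rasterized estimate would be used instead; the sketch only illustrates the metric.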
Moreover, we introduce an additional metric, namely the number of cells (NC), which also represents the number of steps. Clearly, NC affects the complexity of the servo control algorithm and the update frequency of the panorama imagery of the system.

5.3. Ablation Study

As mentioned above, this paper presents a coverage path planning method with an adaptive hyperbolic grid (AHG). It consists of two main components: the seamless hyperbolic grid (SHG) and dual-caliper optimization (DCO). SHG achieves complete coverage (i.e., CR = 1) for any bounded ROI, while DCO approximates the ROI with variable quadrilaterals and compresses the grid's scale. To evaluate the benefits of these two components, we designed an ablation experiment in which SHG was replaced with a fixed-step grid and DCO with a scanline-based flood fill. Experiments were performed on our synthetic ROI dataset. The metrics Prob(r_CR = 1) and NC are used for comparison, where Prob denotes statistical probability.
Table 2 presents the statistical results of P r o b ( r C R = 1 ) . The method with DCO alone achieves 98.0% and 97.5% complete coverage on each category, indicating its ability to provide seamless coverage in most but not all cases. On the other hand, the method with SHG consistently achieves remarkable 100% on all samples, proving its effectiveness as a means of achieving complete coverage.
The quantitative result of NC for the circle and convex polygon categories is visualized in Figure 12, with lower values indicating better results. Let the method without SHG and DCO serve as the baseline. It can be observed that the proposed AHG (with SHG and DCO) can achieve the best performance on all datasets. Compared to the baseline, using only SHG improved NC by an average of 5.1% and 1.2% for each category, respectively. In contrast, the DCO-only method significantly improves them by an average of 26.4% and 30.4%, achieving the second-best results. Notably, the performance with only DCO approaches the best level. This indicates that the DCO component can better exploit the geometry correlation between the cells, thus optimizing the grid scale more effectively.

5.4. Performance Comparison with Other Methods

This subsection evaluates the efficiency of various algorithms using our custom ROI dataset, which comprises diverse geometries. All the methods involved were implemented in Matlab R2022b, and the experiments were conducted on an off-the-shelf computer with an AMD Ryzen 5 7500F CPU.
The ROIs were utilized as inputs to the algorithms, which output LOS orientations to create closed-form and open-form paths. Each LOS corresponds to an image obtained from a single exposure. More specifically, to validate the performance of the proposed AHG method, we compared its experimental results with three typical coverage planning methods, namely flood fill (FF) [51], particle swarm optimization (PSO) [33], and replanning sidewinder (RS) [31]. The flood fill algorithm is commonly utilized for coverage path planning on a rectangular grid with uniformly fixed strides. In this experiment, we set the stride of ξ to (θ_y, θ_x). In addition, we adopt the efficient scan-line strategy and use collision detection to determine whether the camera footprint meets the boundary of the ROI. PSO is a no-decomposition method that models the coverage problem as a mixed-integer optimization problem and iteratively searches for the optimal solution using a particle swarm. This algorithm may consume a large number of iterations, especially when the number of cells to be optimized is itself a variable. In contrast to [33], which ignores this parameter, we use the result generated by our approach as the upper bound and then perform a binary search to iterate the PSO and determine the optimal value. Moreover, the cost function is defined as 1 − r_CR, and the threshold is set to 2%. Replanning sidewinder is an approximate cellular decomposition approach. Since the original article [31] mentions neither the size of the cell nor the specific numerical algorithm for optimizing the grid origin, we adopted the camera's nadir footprint and chose the commonly used golden-ratio search algorithm to remove taboo tiles and prevent revisiting of a previous cell.
For path planning, we use both the boustrophedon and branch-and-bound methods for evaluation, except for PSO, which exclusively employs the branch-and-bound method because of its metaheuristic nature and non-parameterizable grid. In particular, both opposite scanning directions are evaluated in boustrophedon, and the path with the shortest length is selected as the final result.
Figure 13a,b provide the results of the boustrophedon path length comparison among the proposed AHG method, FF, and RS. AHG achieves the best result in all data groups. In the circle category, AHG reduces the path length by an average of 43.71% and 30.41% compared to FF and RS, respectively. In the convex polygon category, the average gains are 47.31% and 26.35%, respectively. Figure 13c,d present the comparison of path lengths generated by the branch-and-bound method. Our proposed AHG method substantially outperforms FF and RS, with relative gains of 42.62% and 27.29% in the circle category and 38.61% and 16.71% in the polygon category, respectively. Compared to PSO, our method is slightly inferior by an overall average of 4.32% and 6.71% in the two categories but performs better by 6.75% and 2.3% for groups 1 and 6 in the circle category, respectively, and by 0.13% for group 1 of the polygon category. This shows that although the proposed AHG is a rule-based method, it achieves superior path length relative to heuristic optimization in a few scenarios.
Table 3 shows the comparison of coverage rates. To prevent misinterpretation due to quantization errors, the mean value was truncated to one decimal place, while the standard error was rounded up to one decimal place. Both AHG and RS achieve a 100% coverage ratio on every sample. However, AHG achieves this through a simple, seamless hyperbolic grid, while RS employs tedious one-dimensional cell searching at the expense of a higher NC. Moreover, neither the FF nor the PSO method can achieve complete coverage on every dataset. This is mainly because FF employs fixed strides that cannot ensure overlapping FOVs of adjacent sensors, while PSO optimizes coverage only to within a cost threshold, leaving coverage gaps caused by convergence residuals. Additionally, the average CR residual of PSO exceeds 2% in group 3 of the circle category, indicating that PSO fails to reach a globally optimal solution that meets the threshold requirement within the maximum number of iterations.
According to the NC comparison in Table 4, PSO outperforms all other methods on all datasets due to its global optimization mechanism. AHG is ranked first alongside PSO in two groups (1 and 6) of the circle category and second in the remaining groups. This is mainly because it is an approximation method for coverage planning, providing a locally optimal solution in a constrained search space. On the other hand, FF yields unsatisfactory results due to the lack of a grid optimization scheme, and the NC of RS is negatively affected by the disparity between the camera ground footprint and the fixed-size cell.
Table 5, Table 6 and Table 7 summarize the comparison results of coverage computation time (CCT), overall computation time with boustrophedon path planning (OCTB), and overall computation time with branch-and-bound path planning (OCTBB), respectively. Here, the overall computation time involves both coverage planning and path planning. Among the compared approaches, AHG achieves the best CCT, OCTB, and OCTBB in all categories. This is mainly because AHG only bears the computational cost of calculating the boundary of each zonal slice. In contrast, FF must compute the collision between each cell and the ROI, RS must determine taboo tiles and optimize each cell's origin upon commitment, and PSO must explore the ROI with a sizable particle swarm. Furthermore, Table 5 shows that AHG has a worst-case CCT of only 12.7 ms, indicating real-time planning capability. In contrast, PSO has a best-case CCT of 31.3 s, making it suitable only for offline scenarios.
For visual comparison, the coverage paths of four methods are shown in Figure 14, where the circular and polygonal ROIs are colored red, the cells (camera footprints) are colored blue, and the paths generated by the boustrophedon and branch-and-bound methods are highlighted with red bold lines. The figures demonstrate that the cells produced by AHG are more compact than those generated by FF and RS and comparable to those generated by PSO. Figure 14a,e clearly depict the effect of dual-caliper optimization, as the aligned zonal covers’ projections approximate the ROI in the heading direction and “clamp” each zonal slice in the cross direction. Particularly, in the circle category, our AHG method achieves the same optimal cell count as PSO but with a superior path length. Figure 14b,f demonstrate that the FF method produces redundant cells due to a lack of optimization for ROI. It also has a negative effect on the scanning rate. Figure 14c,g illustrate that the NC obtained by the PSO method is optimal. However, the non-uniform stride of the cells prevents parameterization, making PSO inapplicable for boustrophedon path planning methods. Figure 14d,h indicate that the RS method generates redundant cells due to improper approximate cellular decomposition.
This experiment validates the efficiency and low complexity of this algorithm with a customized dataset in the dimensions of ROI shape, viewing distance, and size. However, it should be noted that there are some limitations when applying our method. (1) This method is more suitable for circular ROIs and convex polygonal ROIs and does not currently support concave polygonal or annular ROIs. (2) This method is more suitable for “bird’s-eye-view” flying scenarios at relatively high altitudes. It does not, however, address the issue of occlusion that is often encountered when a UAV flies at low altitudes.

6. Conclusions

This paper proposes a novel coverage path planning method with an adaptive hyperbolic grid (AHG) for step-stare imaging systems. The key points of this method are to arrange each stare by constructing an approximate tiling in a virtual spherical space and compress the hyperbolic grid by measuring the scale with dual calipers. First of all, to address the issue of coverage completeness, we convert the coverage problem in Euclidean space to a tiling problem in spherical space. A virtual scanning sphere model is constructed, and an approximate tiling by spherical zonal isosceles trapezoid is proposed to achieve seamless coverage planning. Next, to further optimize the grid layout, we propose a dual-caliper optimization method by exploiting the geometry correlation between conic and convex polygons. Experiments based on diverse geometries and viewpoints demonstrate that the proposed AHG method can achieve complete coverage for circular and convex polygonal ROIs and exhibits competitive performance with low computational complexity. Additionally, this study provides researchers with a novel perspective for solving the coverage planning problem in the case of rotational motion. This method could be applied to many practical applications besides WAPS, including robot sensing [52], mass and slope movement monitoring [53,54], and ecological observation [55].
Although the proposed method’s effectiveness and efficiency were validated through comprehensive experiments and analysis, there are still some issues that require further discussion in the future.
  • We constrained a step-stare imaging system with basic pitch and roll axes. However, for multi-axis systems with additional gimbals or mechanical linkage, the orientation of the sensor’s footprint can be adjusted with more degrees of freedom, which can further optimize the grid layout. The coverage optimization of a multi-axis system will be studied in the future.
  • We assumed the carrier platform moves at a slow speed and the position of the camera is stationary during the scanning process. However, for vehicle carriers that exhibit high-speed maneuvering, the hyperbolic grid becomes time-varying, which may invalidate the coverage path plan. Further investigation is needed to design efficient coverage path planning methods in such scenarios.
  • In path planning, we chose the simple Chebyshev distance as a metric. But in engineering applications, the trajectory of gimbals is usually optimized as a smooth curve [56] (e.g., Bezier curve, Dubins curve, etc.). Therefore, in future studies, the curve path length should be considered as the evaluation metric.
  • The obstacle effect represents a significant factor in the planning process. When an obstacle is present in the field of view, the target area may become a concave set due to occlusion. In future work, we will incorporate obstacle effect constraints into the optimization problem and investigate the coverage planning problem in this context.
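As an illustration of how the Chebyshev metric scores a rotation path, the following sketch sums step costs over an ordered stare sequence. The gimbal-angle values are hypothetical, and this is a simplified model of the planner’s cost, not the paper’s full implementation.

```python
def chebyshev_step(a, b):
    """Chebyshev distance between two gimbal orientations (pitch, roll),
    in radians: with both axes slewing simultaneously, the step cost is
    governed by the larger of the two angular displacements."""
    return max(abs(a[0] - b[0]), abs(a[1] - b[1]))

def rotational_path_length(stares):
    """Total rotational path length of an ordered stare sequence."""
    return sum(chebyshev_step(p, q) for p, q in zip(stares, stares[1:]))

# Hypothetical boustrophedon-style sequence of (pitch, roll) orientations.
stares = [(0.1, -0.2), (0.1, 0.0), (0.1, 0.2), (0.3, 0.2), (0.3, 0.0)]
print(rotational_path_length(stares))  # ≈ 0.8 rad
```

Replacing `chebyshev_step` with a smooth-curve arc length would realize the evaluation metric proposed above for future work.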

Funding

This research received no external funding.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

The data presented in this study are available upon request from the corresponding author.

Acknowledgments

We would like to express our gratitude to Chunlei Yang for his valuable assistance in developing the prototype of the optoelectronic system.

Conflicts of Interest

Author Jiaxin Zhao was employed by the company Changchun Changguang Insight Vision Optoelectronic Technology Co., Ltd. The author declares that the research was conducted in the absence of any other commercial or financial relationships that could be construed as a potential conflict of interest.

Abbreviations

The following abbreviations are used in this manuscript:
AHG: Adaptive hyperbolic grid
CCT: Coverage computation time
CPP: Coverage path planning
CR: Coverage rate
CT: Computation time
DCO: Dual-caliper optimization
DOF: Degrees of freedom
FF: Flood fill
FOV: Field of view
LOS: Line of sight
PSO: Particle swarm optimization
PL: Path length
ROI: Region of interest
RS: Replanning sidewinder
SHG: Seamless hyperbolic grid
SPR: Spherical pseudo rectangle
TSP: Traveling salesman problem
UAV: Unmanned aerial vehicle
VSS: Virtual scanning sphere
WAPS: Wide-area persistent surveillance
ZIT: Zonal isosceles trapezoid

Appendix A. Coordinate Systems and Transformations

The coordinate systems defined in this paper are presented in this section. With the assumptions in Section 3, F b , F v , F r , F p , and F c have origins identical to that of the ideal center of projection of the optical system, as illustrated in Figure A1.
World coordinate system ( F w ) is defined as a Cartesian coordinate system for a local region of the Earth’s surface where the curvature of the Earth can be ignored. The Z w axis faces upwards. The X w and Y w axes are located on the surface of the region of interest.
A body coordinate system ( F b ) is used to represent the outer frame of the step-stare imaging system. X b represents the roll axis of the step-stare imaging system and generally runs in the flight direction. Z b represents the azimuth axis.
A virtual scanning sphere coordinate system ( F v ) is a latitude–longitude coordinate system. With reference to F b , we define X v = Z b , Y v = Y b , and Z v = X b . The intersections of the X v axis and the virtual scanning sphere are defined as the north pole and south pole such that the north pole (resp. south pole) is in the positive (resp. negative) direction of X v . The prime meridian is defined as the intersection of the O v Z v X v plane and the lower hemisphere.
A roll gimbal coordinate system ( F r ) is a coordinate system used to represent the roll-axis frame. The rotation matrix from F b to F r is expressed as follows:
$$R_{b}^{r} = \begin{bmatrix} 1 & 0 & 0 \\ 0 & \cos\phi & \sin\phi \\ 0 & -\sin\phi & \cos\phi \end{bmatrix}$$
where ϕ is the roll angle of the step-stare system.
A pitch gimbal coordinate system ( F p ) is a coordinate system used to represent the pitch-axis frame. The rotation matrix from F r to F p is expressed as follows:
$$R_{r}^{p} = \begin{bmatrix} \cos\theta & 0 & -\sin\theta \\ 0 & 1 & 0 \\ \sin\theta & 0 & \cos\theta \end{bmatrix}$$
where θ is the pitch angle of the step-stare system.
A camera coordinate system ( F c ) represents the frame of a camera. The sensor lies in the plane $Z_c = f$. The relation between F c and F p is expressed by $X_p = Y_c$, $Y_p = X_c$, and $Z_p = Z_c$. The rotation matrix from F p to F c is expressed as follows:
$$R_{p}^{c} = \begin{bmatrix} 0 & 1 & 0 \\ 1 & 0 & 0 \\ 0 & 0 & 1 \end{bmatrix}$$
A sensor coordinate system ( F s ) is a 2D coordinate frame used to define a rectangular sensor such as CCD or CMOS, where the origin is at the top-left corner of the sensor, and the X s (resp. Y s ) axis is oriented in the same direction as X c (resp. Y c ).
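As a numerical companion to the transformation chain above, this sketch composes the roll and pitch rotations to express a body-frame direction in the pitch-gimbal frame. The sign placement follows our reconstruction of the matrices (a passive-rotation convention), which is an assumption; the paper’s exact convention may differ.

```python
import math

def rot_b2r(phi):
    # Roll rotation from F_b to F_r about the X_b axis (assumed passive form).
    c, s = math.cos(phi), math.sin(phi)
    return [[1.0, 0.0, 0.0],
            [0.0, c, s],
            [0.0, -s, c]]

def rot_r2p(theta):
    # Pitch rotation from F_r to F_p about the Y_r axis (assumed passive form).
    c, s = math.cos(theta), math.sin(theta)
    return [[c, 0.0, -s],
            [0.0, 1.0, 0.0],
            [s, 0.0, c]]

def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(3)) for j in range(3)]
            for i in range(3)]

def apply(R, v):
    return [sum(R[i][k] * v[k] for k in range(3)) for i in range(3)]

# Compose R_rp * R_br to map body-frame coordinates into F_p.
phi, theta = math.radians(10.0), math.radians(25.0)
R_bp = matmul(rot_r2p(theta), rot_b2r(phi))
v_p = apply(R_bp, [0.0, 0.0, 1.0])  # a body-frame unit vector seen from F_p
```

Because both factors are rotations, the composition preserves vector norms, which provides a simple self-check on the assumed sign convention.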
Figure A1. Coordinate systems and transformations.

Appendix B. Symbol Notation

The main notations used throughout this paper are listed in Table A1.
Table A1. Symbol notation.
Symbol: Description
F: Coordinate system
H: Projective mapping between different F
H: Projective matrix between different F
R: Rotation matrix between different F
C_i: Cell, i.e., the enclosed area defined by the ith camera’s ground footprint
θ: Pitch gimbal angle or latitude
ϕ: Roll gimbal angle or longitude
f: Focal length
θ_x: Sensor’s horizontal field of view
θ_y: Sensor’s vertical field of view
l_x: The width of the sensor
l_y: The height of the sensor
ξ: Sensor’s orientation expressed by gimbal angles (θ, ϕ)
Ξ: A set of ξ
Ξ_Z: A zonal set of ξ
K: Calibration matrix
P: A given region’s image under H_v on a sphere
P_i: The ith sensor’s projection on a sphere
P_Z: The union of all P_i in a zonal cover
T_Z: The zonal isosceles trapezoid determined by the sensors’ corner envelope
T′_Z: The maximal inscribed zonal isosceles trapezoid of P_Z
X: Homogeneous coordinates of a point
X̃: Inhomogeneous coordinates of a point

Appendix C. Supplementary Propositions

Proposition A1.
Given a point $\tilde{\mathbf{X}}$ in F w , its latitude and longitude in F v can be calculated as follows:
$$\mathbf{v} = R_{wv}(\tilde{\mathbf{X}} - \tilde{\mathbf{X}}_v)$$
$$\mathbf{q} = \mathbf{v}/|\mathbf{v}| = (q_1, q_2, q_3)^{T}$$
$$\theta = \arcsin(q_1)$$
$$\phi = \arcsin\!\left(q_2/\cos\theta\right)$$
where $\tilde{\mathbf{X}}$ is the inhomogeneous coordinate of the point in F w , $\tilde{\mathbf{X}}_v$ is the inhomogeneous coordinate of O v in F w , and $R_{wv}$ is the rotation matrix from F w to F v .
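Proposition A1 translates directly into code. In this sketch the rotation R_wv, sphere center, and query point are hypothetical placeholders, and the subtraction and division by cos θ follow our reading of the formulas above.

```python
import math

def latlon_on_vss(x_w, x_v, R_wv):
    """Latitude and longitude of a world point on the virtual scanning
    sphere (Proposition A1). x_w and x_v are inhomogeneous 3D coordinates
    of the point and of O_v in F_w; R_wv is the 3x3 rotation from F_w to F_v."""
    d = [x_w[i] - x_v[i] for i in range(3)]                           # x_w - x_v
    v = [sum(R_wv[i][k] * d[k] for k in range(3)) for i in range(3)]  # rotate
    n = math.sqrt(sum(c * c for c in v))
    q = [c / n for c in v]                                            # unit vector
    theta = math.asin(q[0])                                           # latitude
    phi = math.asin(q[1] / math.cos(theta))                           # longitude
    return theta, phi

# Hypothetical example: identity R_wv and sphere center at the origin.
I3 = [[1.0, 0.0, 0.0], [0.0, 1.0, 0.0], [0.0, 0.0, 1.0]]
theta, phi = latlon_on_vss([0.0, 1.0, 0.0], [0.0, 0.0, 0.0], I3)
```

For this sample point on the Y_v axis the formulas give θ = 0 (equator) and ϕ = π/2, consistent with the latitude–longitude convention of F v.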
Proposition A2.
Given a sphere with center O, let $\overline{AB}$ be a chord of a small circle with center N, and let M be the midpoint of $\overline{AB}$, as shown in Figure A2. Then, we have the following:
$$\sin\angle OAN = \sin\angle OMN \cdot \sin\angle OAM$$
Proof. 
From Figure A2, ON is perpendicular to the plane of the small circle and OM is perpendicular to the chord, so $\sin\angle OMN = ON/OM$, $\sin\angle OAN = ON/OA$, and $\sin\angle OAM = OM/OA$. Multiplying the first and third equalities gives $\sin\angle OMN \cdot \sin\angle OAM = ON/OA = \sin\angle OAN$. □
Figure A2. Proposition A2.
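Proposition A2 can be verified numerically on a concrete configuration; the unit sphere, plane height, and chord endpoints below are hypothetical choices.

```python
import math

def angle_at(p, a, b):
    """Angle at vertex p between rays p->a and p->b."""
    u = [a[i] - p[i] for i in range(3)]
    v = [b[i] - p[i] for i in range(3)]
    dot = sum(x * y for x, y in zip(u, v))
    nu = math.sqrt(sum(x * x for x in u))
    nv = math.sqrt(sum(x * x for x in v))
    return math.acos(dot / (nu * nv))

# Unit sphere centred at O; a small circle in the plane z = h with centre N.
O, h = (0.0, 0.0, 0.0), 0.6
N = (0.0, 0.0, h)
r = math.sqrt(1.0 - h * h)
A = (r * math.cos(0.3), r * math.sin(0.3), h)   # hypothetical chord endpoints
B = (r * math.cos(1.1), r * math.sin(1.1), h)
M = tuple((a + b) / 2.0 for a, b in zip(A, B))  # midpoint of the chord AB

lhs = math.sin(angle_at(A, O, N))
rhs = math.sin(angle_at(M, O, N)) * math.sin(angle_at(A, O, M))
print(abs(lhs - rhs) < 1e-9)  # True
```

Here the left-hand side equals ON/OA = h on the unit sphere, which the triangles OAN and OAM (right-angled at N and M, respectively) predict.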
Proposition A3.
Given a VSS, a circular region ( $C_{ROI}$ ), a zonal slice ( $C_{slice}$ ), and a supporting line ( m ) of $C_{slice}$ with support point X, let X′ be the intersection point of m and the extended base of $C_{slice}$ other than X. Then, X′ must not be inside the ROI.
Proof. 
Let the projection of the corresponding spherical zone of $C_{slice}$ be $C_{zone}$; then we have the following:
$$C_{slice} = C_{zone} \cap C_{ROI}$$
Since X′ is on the supporting line of $C_{slice}$ but is not the support point, we have $X' \in \overline{C_{slice}}$. And since X′ lies on the extended base of $C_{slice}$ and $C_{slice} \subseteq C_{zone}$, we have $X' \in C_{zone}$. Now, we can deduce the following:
$$X' \in \overline{C_{slice}} \cap C_{zone} = (\overline{C_{zone}} \cup \overline{C_{ROI}}) \cap C_{zone} = \overline{C_{ROI}} \cap C_{zone}$$
so we have $X' \notin C_{ROI}$. □
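The set identity at the heart of this proof (complementing the slice and intersecting with the zone leaves exactly the part of the zone outside the ROI) can be sanity-checked on a discrete sample; the universe, zone, and ROI point sets below are hypothetical stand-ins for the continuous regions on the sphere.

```python
# Discrete sanity check of the set identity used in the proof of
# Proposition A3: complement(zone ∩ roi) ∩ zone == complement(roi) ∩ zone.
universe = set(range(20))
zone = {2, 3, 4, 5, 6, 7}   # projected spherical zone (hypothetical)
roi = {5, 6, 7, 8, 9}       # circular ROI (hypothetical)
slice_ = zone & roi         # the zonal slice C_slice

lhs = (universe - slice_) & zone
rhs = (universe - roi) & zone
print(lhs == rhs)  # True
```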

References

  1. Cobb, M.; Reisman, M.; Killam, P.; Fiore, G.; Siddiq, R.; Giap, D.; Chern, G. Wide-area motion imagery vehicle detection in adverse conditions. In Proceedings of the 2023 IEEE Applied Imagery Pattern Recognition Workshop (AIPR), Saint Louis, MO, USA, 27–29 September 2023; pp. 1–6. [Google Scholar] [CrossRef]
  2. Negin, F.; Tabejamaat, M.; Fraisse, R.; Bremond, F. Transforming temporal embeddings to keypoint heatmaps for detection of tiny Vehicles in Wide Area Motion Imagery (WAMI) sequences. In Proceedings of the 2022 IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops (CVPRW), New Orleans, LA, USA, 19–20 June 2022; pp. 1431–1440. [Google Scholar] [CrossRef]
  3. Sommer, L.; Kruger, W.; Teutsch, M. Appearance and motion based persistent multiple object tracking in Wide Area Motion Imagery. In Proceedings of the 2021 IEEE/CVF International Conference on Computer Vision Workshops (ICCVW), Montreal, BC, Canada, 11–17 October 2021; pp. 3871–3881. [Google Scholar] [CrossRef]
  4. Li, X.; He, B.; Ding, K.; Guo, W.; Huang, B.; Wu, L. Wide-Area and Real-Time Object Search System of UAV. Remote. Sens. 2022, 14, 1234. [Google Scholar] [CrossRef]
  5. Luo, X.; Zhang, F.; Pu, M.; Guo, Y.; Li, X.; Ma, X. Recent Advances of Wide-Angle Metalenses: Principle, Design, and Applications. Nanophotonics 2021, 11, 1–20. [Google Scholar] [CrossRef]
  6. Driggers, R.; Goranson, G.; Butrimas, S.; Holst, G.; Furxhi, O. Simple Target Acquisition Model Based on Fλ/d. Opt. Eng. 2021, 60, 023104. [Google Scholar] [CrossRef]
  7. Stamenov, I.; Arianpour, A.; Olivas, S.J.; Agurok, I.P.; Johnson, A.R.; Stack, R.A.; Morrison, R.L.; Ford, J.E. Panoramic Monocentric Imaging Using Fiber-Coupled Focal Planes. Opt. Express 2014, 22, 31708. [Google Scholar] [CrossRef] [PubMed]
  8. Huang, Y.; Fu, Y.; Zhang, G.; Liu, Z. Modeling and Analysis of a Monocentric Multi-Scale Optical System. Opt. Express 2020, 28, 32657. [Google Scholar] [CrossRef] [PubMed]
  9. Yuan, X.; Ji, M.; Wu, J.; Brady, D.J.; Dai, Q.; Fang, L. A Modular Hierarchical Array Camera. Light Sci. Appl. 2021, 10, 37. [Google Scholar] [CrossRef] [PubMed]
  10. Daniel, B.; Henry, D.J.; Cheng, B.T.; Wilson, M.L.; Edelberg, J.; Jensen, M.; Johnson, T.; Anderson, S. Autonomous collection of dynamically-cued multi-sensor imagery. In Proceedings of the SPIE Defense, Security, and Sensing, Orlando, FL, USA, 13 May 2011; p. 80200A. [Google Scholar]
  11. Kruer, M.R.; Lee, J.N.; Linne Von Berg, D.; Howard, J.G.; Edelberg, J. System considerations of aerial infrared imaging for wide-area persistent surveillance. In Proceedings of the SPIE Defense, Security and Sensing, Orlando, FL, USA, 13 May 2011; p. 80140J. [Google Scholar]
  12. Driggers, R.G.; Halford, C.; Theisen, M.J.; Gaudiosi, D.M.; Olson, S.C.; Tener, G.D. Staring Array Infrared Search and Track Performance with Dither and Stare Step. Opt. Eng. 2018, 57, 1. [Google Scholar] [CrossRef]
  13. Driggers, R.; Pollak, E.; Grimming, R.; Velazquez, E.; Short, R.; Holst, G.; Furxhi, O. Detection of Small Targets in the Infrared: An Infrared Search and Track Tutorial. Appl. Opt. 2021, 60, 4762. [Google Scholar] [CrossRef]
  14. Sun, J.; Ding, Y.; Zhang, H.; Yuan, G.; Zheng, Y. Conceptual Design and Image Motion Compensation Rate Analysis of Two-Axis Fast Steering Mirror for Dynamic Scan and Stare Imaging System. Sensors 2021, 21, 6441. [Google Scholar] [CrossRef]
  15. Xiu, J.; Huang, P.; Li, J.; Zhang, H.; Li, Y. Line of Sight and Image Motion Compensation for Step and Stare Imaging System. Appl. Sci. 2020, 10, 7119. [Google Scholar] [CrossRef]
  16. Fu, Q.; Zhang, X.; Zhang, J.; Shi, G.; Zhao, S.; Liu, M. Non-Rotationally Symmetric Field Mapping for Back-Scanned Step/Stare Imaging System. Appl. Sci. 2020, 10, 2399. [Google Scholar] [CrossRef]
  17. Miller, J.L.; Way, S.; Ellison, B.; Archer, C. Design Challenges Regarding High-Definition Electro-Optic/Infrared Stabilized Imaging Systems. Opt. Eng. 2013, 52, 061310. [Google Scholar] [CrossRef]
  18. Choset, H. Coverage for Robotics—A Survey of Recent Results. Ann. Math. Artif. Intell. 2001, 31, 113–126. [Google Scholar] [CrossRef]
  19. Galceran, E.; Carreras, M. A Survey on Coverage Path Planning for Robotics. Robot. Auton. Syst. 2013, 61, 1258–1276. [Google Scholar] [CrossRef]
  20. Cabreira, T.; Brisolara, L.; Ferreira, P.R., Jr. Survey on Coverage Path Planning with Unmanned Aerial Vehicles. Drones 2019, 3, 4. [Google Scholar] [CrossRef]
  21. Tan, C.S.; Mohd-Mokhtar, R.; Arshad, M.R. A Comprehensive Review of Coverage Path Planning in Robotics Using Classical and Heuristic Algorithms. IEEE Access 2021, 9, 119310–119342. [Google Scholar] [CrossRef]
  22. Đakulovic, M.; Petrovic, I. Complete Coverage Path Planning of Mobile Robots for Humanitarian Demining. Ind. Robot. Int. J. 2012, 39, 484–493. [Google Scholar] [CrossRef]
  23. Acar, E.U.; Choset, H.; Zhang, Y.; Schervish, M. Path Planning for Robotic Demining: Robust Sensor-Based Coverage of Unstructured Environments and Probabilistic Methods. Int. J. Robot. Res. 2003, 22, 441–466. [Google Scholar] [CrossRef]
  24. Latombe, J.C. Robot Motion Planning; Springer Science & Business Media: Berlin/Heidelberg, Germany, 2012; pp. 206–207. [Google Scholar]
  25. Choset, H.; Pignon, P. Coverage path planning: The boustrophedon cellular decomposition. In Field and Service Robotics; Springer: London, UK, 1998; pp. 203–209. [Google Scholar]
  26. Coombes, M.; Fletcher, T.; Chen, W.-H.; Liu, C. Optimal Polygon Decomposition for UAV Survey Coverage Path Planning in Wind. Sensors 2018, 18, 2132. [Google Scholar] [CrossRef]
  27. Tang, G.; Tang, C.; Zhou, H.; Claramunt, C.; Men, S. R-DFS: A Coverage Path Planning Approach Based on Region Optimal Decomposition. Remote Sens. 2021, 13, 1525. [Google Scholar] [CrossRef]
  28. Nam, L.; Huang, L.; Li, X.J.; Xu, J. An approach for coverage path planning for UAVs. In Proceedings of the 2016 IEEE 14th International Workshop on Advanced Motion Control (AMC), Auckland, New Zealand, 22–24 April 2016; pp. 411–416. [Google Scholar]
  29. Cao, Y.; Cheng, X.; Mu, J. Concentrated Coverage Path Planning Algorithm of UAV Formation for Aerial Photography. IEEE Sens. J. 2022, 22, 11098–11111. [Google Scholar] [CrossRef]
  30. Shang, Z.; Bradley, J.; Shen, Z. A Co-Optimal Coverage Path Planning Method for Aerial Scanning of Complex Structures. Expert Syst. Appl. 2020, 158, 113535. [Google Scholar] [CrossRef]
  31. Shao, E.; Byon, A.; Davies, C.; Davis, E.; Knight, R.; Lewellen, G.; Trowbridge, M.; Chien, S. Area coverage planning with 3-axis steerable, 2D framing sensors. In Proceedings of the Scheduling and Planning Applications Workshop, International Conference on Automated Planning and Scheduling, Delft, The Netherlands, 26 June 2018. [Google Scholar]
  32. Vasquez-Gomez, J.I.; Marciano-Melchor, M.; Valentin, L.; Herrera-Lozada, J.C. Coverage Path Planning for 2D Convex Regions. J. Intell. Robot. Syst. 2020, 97, 81–94. [Google Scholar] [CrossRef]
  33. Mansouri, S.S.; Kanellakis, C.; Georgoulas, G.; Kominiak, D.; Gustafsson, T.; Nikolakopoulos, G. 2D Visual Area Coverage and Path Planning Coupled with Camera Footprints. Control. Eng. Pract. 2018, 75, 1–16. [Google Scholar] [CrossRef]
  34. Papaioannou, S.; Kolios, P.; Theocharides, T.; Panayiotou, C.G.; Polycarpou, M.M. Integrated Guidance and Gimbal Control for Coverage Planning With Visibility Constraints. IEEE Trans. Aerosp. Electron. Syst. 2022, 59, 1–15. [Google Scholar] [CrossRef]
  35. Li, S.; Zhong, M. High-Precision Disturbance Compensation for a Three-Axis Gyro-Stabilized Camera Mount. IEEE/ASME Trans. Mechatron. 2015, 20, 3135–3147. [Google Scholar] [CrossRef]
  36. Megiddo, N.; Zemel, E.; Hakimi, S.L. The Maximum Coverage Location Problem. SIAM J. Algebr. Discret. Methods 1983, 4, 253–261. [Google Scholar] [CrossRef]
  37. Zhang, X.; Zhao, P.; Hu, Q.; Ai, M.; Hu, D.; Li, J. A UAV-Based Panoramic Oblique Photogrammetry (POP) Approach Using Spherical Projection. ISPRS J. Photogramm. Remote Sens. 2020, 159, 198–219. [Google Scholar] [CrossRef]
  38. Beckers, B.; Beckers, P. A General Rule for Disk and Hemisphere Partition into Equal-Area Cells. Comput. Geom. 2012, 45, 275–283. [Google Scholar] [CrossRef]
  39. Liang, X.; Ben, J.; Wang, R.; Liang, Q.; Huang, X.; Ding, J. Construction of Rhombic Triacontahedron Discrete Global Grid Systems. Int. J. Digit. Earth 2022, 15, 1760–1783. [Google Scholar] [CrossRef]
  40. Li, G.; Wang, L.; Zheng, R.; Yu, X.; Ma, Y.; Liu, X.; Liu, B. Research on Partitioning Algorithm Based on Dynamic Star Simulator Guide Star Catalog. IEEE Access 2021, 9, 54663–54670. [Google Scholar] [CrossRef]
  41. Kim, J.-S.; Hwangbo, M.; Kanade, T. Spherical Approximation for Multiple Cameras in Motion Estimation: Its Applicability and Advantages. Comput. Vis. Image Underst. 2010, 114, 1068–1083. [Google Scholar] [CrossRef]
  42. Ueno, Y.; Yoshio, Y. Examples of Spherical Tilings by Congruent Quadrangles. In Memoirs of the Faculty of Integrated Arts and Sciences; IV, Science Reports; Hiroshima University: Hiroshima, Japan, 2001; Volume 27, pp. 135–144. [Google Scholar]
  43. Avelino, C.P.; Santos, A.F. Spherical F-Tilings by Scalene Triangles and Isosceles Trapezoids, I. Eur. J. Comb. 2009, 30, 1221–1244. [Google Scholar] [CrossRef]
  44. Leopardi, P. A Partition of the Unit Sphere into Regions of Equal Area and Small Diameter. Electron. Trans. Numer. Anal. 2006, 25, 309–327. [Google Scholar]
  45. Hartley, R.; Zisserman, A. Multiple View Geometry in Computer Vision, 2nd ed.; Cambridge University Press: New York, NY, USA, 2003; pp. 153–157. [Google Scholar]
  46. Toussaint, G. Solving geometric problems with the rotating calipers. In Proceedings of the 1983 IEEE MELECON, Athens, Greece, 24–26 May 1983. [Google Scholar]
  47. Richter-Gebert, J. Perspectives on Projective Geometry: A Guided Tour Through Real and Complex Geometry, 1st ed.; Springer: Berlin/Heidelberg, Germany, 2011; pp. 190–193. [Google Scholar]
  48. Dantzig, G.; Fulkerson, R.; Johnson, S. Solution of a Large-Scale Traveling-Salesman Problem. J. Oper. Res. Soc. Am. 1954, 2, 393–410. [Google Scholar] [CrossRef]
  49. Sanches, D.; Whitley, D.; Tinós, R. Improving an exact solver for the traveling salesman problem using partition crossover. In Proceedings of the 2017 Genetic and Evolutionary Computation Conference, Berlin, Germany, 15–19 July 2017; pp. 337–344. [Google Scholar]
  50. Tao, R.; Gavves, E.; Smeulders, A.W.M. Siamese instance search for tracking. In Proceedings of the 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Las Vegas, NV, USA, 27–30 June 2016; pp. 1420–1429. [Google Scholar]
  51. He, Y.; Hu, T.; Zeng, D. Scan-flood fill (SCAFF): An efficient automatic precise region filling algorithm for complicated regions. In Proceedings of the 2019 IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops (CVPRW), Long Beach, CA, USA, 15–20 June 2019; pp. 761–769. [Google Scholar]
  52. Verma, V.; Carsten, J.; Ravine, M.; Kennedy, M.R.; Edgett, K.S.; Culver, A.; Ruoff, N.; Williams, N.; Beegle, L. How do we get robots to take self-portraits on Mars? Perseverance-ingenuity and curiosity selfies. In Proceedings of the 2022 IEEE Aerospace Conference (AERO), Big Sky, MT, USA, 5 March 2022; pp. 1–14. [Google Scholar]
  53. Urban, R.; Štroner, M.; Blistan, P.; Kovanič, Ľ.; Patera, M.; Jacko, S.; Ďuriška, I.; Kelemen, M.; Szabo, S. The Suitability of UAS for Mass Movement Monitoring Caused by Torrential Rainfall—A Study on the Talus Cones in the Alpine Terrain in High Tatras, Slovakia. ISPRS Int. J. Geo-Inf. 2019, 8, 317. [Google Scholar] [CrossRef]
  54. Núñez-Andrés, M.A.; Prades-Valls, A.; Matas, G.; Buill, F.; Lantada, N. New Approach for Photogrammetric Rock Slope Premonitory Movements Monitoring. Remote Sens. 2023, 15, 293. [Google Scholar] [CrossRef]
  55. Lyons, M.B.; Brandis, K.J.; Murray, N.J.; Wilshire, J.H.; McCann, J.A.; Kingsford, R.T.; Callaghan, C.T. Monitoring Large and Complex Wildlife Aggregations with Drones. Methods Ecol. Evol. 2019, 10, 1024–1035. [Google Scholar] [CrossRef]
  56. Mier, G.; Valente, J.; de Bruin, S. Fields2Cover: An Open-Source Coverage Path Planning Library for Unmanned Agricultural Vehicles. IEEE Robot. Autom. Lett. 2023, 8, 2166–2172. [Google Scholar] [CrossRef]
Figure 1. Schematic diagram of step-stare imaging system.
Figure 2. Formulation of the coverage path planning problem for step-stare imaging systems.
Figure 3. Flowchart of our proposed AHG method.
Figure 4. Virtual scanning sphere of a single-lens camera and zonal isosceles trapezoid. (a) The projection of the camera sensor forms a spherical pseudo rectangle. (b) A set of zonal isosceles trapezoids aligned in the meridional direction.
Figure 5. Spherical tiling of an SPR. (a) The edge combination of an SPR is a b a b . (b) Two SPRs are arranged with the longer edge serving as a common side. (c) Two SPRs are arranged with the shorter edge serving as a common side.
Figure 6. Zonal isosceles trapezoid ($T_Z$), the union of the sensors’ projections on a VSS ($P_Z$), and the maximal inscribed ZIT of $P_Z$ ($T'_Z$). (a) The envelope of the sensors’ corners forms a $T_Z$. (b) $T_Z$ approximates the contour of $P_Z$. (c) $T'_Z$ is the maximal inscribed ZIT of $P_Z$.
Figure 7. Definitions used in Section 4.2. (a) A circular ROI is clamped by a meridional curved caliper. A zonal slice with four corners ( X 1 , X 2 , X 3 , and X 4 ) is a part of the ROI, cut by the projection of a spherical zone. The zonal slice is clamped by a zonal straight caliper. (b) The parallels determined by a zonal set of camera sensors enclose a spherical zone. The vertical outer camera has an extreme orientation such that its outer vertical side projects to a supporting line of a zonal slice.
Figure 8. Meridional curved caliper algorithm. (a) Circular ROI. (b) Convex polygonal ROI.
Figure 9. The geometric relationship between a convex polygon and its extreme hyperbola. (a) The extreme hyperbola is the one that passes through V 1 . (b) The extreme hyperbola is tangent to an adjacent edge of V 1 .
Figure 10. The dataset is generated from S M defined by the experimental parameters.
Figure 11. Dataset generation process. (a) Circular ROI generation. First, $r_{ROI}$ is generated from $[\,b h \max(n_x, n_y)/f,\; r_M\,]$. Subsequently, $(r_p, \alpha_p)$ are randomly sampled from $[0, r_M - r_{ROI}] \times [0, 2\pi]$. (b) Polygonal ROI generated by computing the convex hull of random points within $S_M$.
Figure 12. The NC performance of the ablations on each component of DCO. A lower value is preferable. (a) Performance on the dataset with a circular ROI. (b) Performance on the dataset with a convex polygonal ROI.
Figure 13. Path length comparison results (lower values are preferable). (a) Boustrophedon path length in circle group. (b) Boustrophedon path length in convex polygon group. (c) Branch-and-bound path length in circle group. (d) Branch-and-bound path length in convex polygon group.
Figure 14. Coverage path obtained via various methods. (a–d) The coverage cells and paths for a circular ROI generated by AHG, FF, PSO, and RS, respectively. The paths are produced by the branch-and-bound method. (e–h) The coverage cells and paths (except (g)) for a convex polygonal ROI generated by AHG, FF, PSO, and RS, respectively. The paths are produced by the boustrophedon method.
Table 1. Experimental parameters.
Parameter | Value
Focal length | 50 mm
Pixel size | 12 μm
Number of pixels | 640 × 480
Relative height | 5000 m
Spatial resolution threshold * | 2 m
* This threshold indicates the least acceptable spatial resolution.
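As a sanity check on these parameters, the following sketch derives the nadir ground sample distance and footprint they imply; the flat-ground, nadir-view approximation is ours and ignores the oblique geometry of off-nadir stares.

```python
f = 50e-3            # focal length (m)
b = 12e-6            # pixel size (m)
h = 5000.0           # relative height (m)
n_x, n_y = 640, 480  # number of pixels

gsd = b * h / f                      # nadir ground sample distance (m/pixel)
footprint = (gsd * n_x, gsd * n_y)   # nadir ground footprint (m)
print(gsd, footprint)  # ≈ 1.2 m/pixel, within the 2 m resolution threshold
```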
Table 2. Performance of P r o b ( r C R = 1 ) on our synthetic dataset.
SHG | DCO | Circular ROI | Convex Polygonal ROI
69.5% | 42.0%
100.0% | 100.0%
98.0% | 97.5%
100.0% | 100.0%
Table 3. Coverage rate (CR) performance of various methods.
No. | Circle | Convex Polygon
AHG (%) | FF (%) | PSO (%) | RS (%) | AHG (%) | FF (%) | PSO (%) | RS (%)
1 100.0 ± 0.0 100.0 ± 0.0 98.8 ± 1.4 ̲ 100.0 ± 0.0 100.0 ± 0.0 99.9 ± 0.1 ̲ 99.6 ± 0.4 100.0 ± 0.0
2 100.0 ± 0.0 99.9 ± 0.1 ̲ 98.1 ± 1.6 100.0 ± 0.0 100.0 ± 0.0 99.9 ± 0.1 ̲ 99.5 ± 0.5 100.0 ± 0.0
3 100.0 ± 0.0 99.9 ± 0.1 ̲ 97.9 ± 1.6 100.0 ± 0.0 100.0 ± 0.0 99.9 ± 0.1 ̲ 99.2 ± 0.4 100.0 ± 0.0
4 100.0 ± 0.0 100.0 ± 0.0 99.0 ± 0.2 ̲ 100.0 ± 0.0 100.0 ± 0.0 99.9 ± 0.1 ̲ 99.2 ± 0.6 100.0 ± 0.0
5 100.0 ± 0.0 99.9 ± 0.1 ̲ 98.1 ± 1.5 100.0 ± 0.0 100.0 ± 0.0 99.9 ± 0.1 ̲ 99.1 ± 0.3 100.0 ± 0.0
6 100.0 ± 0.0 99.9 ± 0.1 ̲ 98.5 ± 1.2 100.0 ± 0.0 100.0 ± 0.0 99.9 ± 0.1 ̲ 98.9 ± 0.5 100.0 ± 0.0
7 100.0 ± 0.0 99.9 ± 0.1 ̲ 98.8 ± 0.8 100.0 ± 0.0 100.0 ± 0.0 99.9 ± 0.1 ̲ 99.0 ± 0.2 100.0 ± 0.0
8 100.0 ± 0.0 99.9 ± 0.1 ̲ 98.9 ± 0.6 100.0 ± 0.0 100.0 ± 0.0 99.9 ± 0.1 ̲ 99.1 ± 0.2 100.0 ± 0.0
9 100.0 ± 0.0 99.9 ± 0.1 ̲ 99.0 ± 0.3 100.0 ± 0.0 100.0 ± 0.0 99.9 ± 0.1 ̲ 99.1 ± 0.3 100.0 ± 0.0
10 100.0 ± 0.0 99.9 ± 0.1 ̲ 98.0 ± 1.0 100.0 ± 0.0 100.0 ± 0.0 99.9 ± 0.1 ̲ 99.0 ± 0.3 100.0 ± 0.0
The bold numbers represent the optimal results, while the underlined numbers represent suboptimal results.
Table 4. Number of cells (NC) for various methods.
No. | Circle | Convex Polygon
AHG | FF | PSO | RS | AHG | FF | PSO | RS
1 6.0 ± 0.0 9.1 ± 0.4 6.0 ± 0.2 11.0 ± 2.4 6.0 ± 0.0 ̲ 9.4 ± 1.1 5.8 ± 0.4 8.2 ± 1.1
2 7.0 ± 0.0 ̲ 9.3 ± 0.9 6.6 ± 0.5 10.7 ± 2.4 7.0 ± 0.0 ̲ 10.6 ± 1.1 6.3 ± 0.9 9.4 ± 1.7
3 8.0 ± 0.0 ̲ 10.6 ± 1.5 7.3 ± 1.1 13.7 ± 3.4 8.0 ± 0.0 ̲ 11.3 ± 1.6 6.5 ± 1.2 10.2 ± 1.8
4 9.0 ± 0.0 ̲ 11.8 ± 3.1 7.2 ± 1.0 12.4 ± 2.5 9.0 ± 0.0 ̲ 12.1 ± 1.9 8.0 ± 1.3 12.1 ± 2.0
5 10.0 ± 0.0 ̲ 12.9 ± 2.0 9.9 ± 0.4 16.8 ± 4.2 10.0 ± 0.0 ̲ 13.9 ± 1.3 8.2 ± 1.1 13.3 ± 1.3
6 11.0 ± 0.0 13.5 ± 1.6 ̲ 11.0 ± 0.0 18.2 ± 2.6 11.0 ± 0.0 ̲ 13.9 ± 1.6 8.8 ± 1.5 12.7 ± 1.4
7 12.0 ± 0.0 ̲ 15.0 ± 2.1 11.4 ± 0.9 16.0 ± 3.9 12.0 ± 0.0 ̲ 16.1 ± 1.9 9.8 ± 1.8 14.2 ± 1.9
8 13.0 ± 0.0 ̲ 18.9 ± 1.6 11.7 ± 1.1 21.2 ± 3.3 13.0 ± 0.0 ̲ 15.9 ± 1.8 10.7 ± 1.2 15.1 ± 1.9
9 14.0 ± 0.0 ̲ 20.1 ± 1.5 12.6 ± 1.2 20.6 ± 2.8 14.0 ± 0.0 ̲ 18.2 ± 1.7 11.6 ± 1.4 16.8 ± 2.2
10 15.0 ± 0.0 ̲ 18.1 ± 1.6 14.2 ± 1.3 23.8 ± 4.6 15.0 ± 0.0 ̲ 18.2 ± 1.8 12.9 ± 1.6 17.4 ± 2.3
The bold numbers represent the optimal results, while the underlined numbers represent suboptimal results.
Table 5. Coverage planning computation time performance of various methods.
No. | Circle | Convex Polygon
AHG (ms) | FF (ms) | PSO (s) | RS (ms) | AHG (ms) | FF (ms) | PSO (s) | RS (ms)
1 9.2 ± 4.6 16.4 ± 1.8 40.4 ± 10.2 14.8 ± 4.2 ̲ 3.7 ± 0.7 24.5 ± 2.8 31.3 ± 6.5 11.2 ± 2.4 ̲
2 7.4 ± 0.6 16.4 ± 1.6 49.0 ± 13.2 14.0 ± 4.1 ̲ 4.4 ± 1.0 28.2 ± 4.2 32.5 ± 5.2 14.2 ± 4.1 ̲
3 8.7 ± 2.3 18.9 ± 2.5 ̲ 55.6 ± 17.3 19.3 ± 6.0 5.2 ± 1.9 28.8 ± 4.2 36.6 ± 11.2 15.6 ± 6.7 ̲
4 9.0 ± 3.7 19.8 ± 4.4 46.3 ± 17.0 17.4 ± 4.5 ̲ 5.4 ± 1.5 30.5 ± 4.9 50.8 ± 15.1 19.0 ± 6.4 ̲
5 8.9 ± 0.9 22.2 ± 2.8 ̲ 78.0 ± 15.4 24.1 ± 7.3 7.7 ± 6.0 35.5 ± 4.6 52.5 ± 13.4 23.6 ± 7.4 ̲
6 8.8 ± 0.4 23.3 ± 2.4 ̲ 91.4 ± 15.6 26.4 ± 5.2 8.5 ± 7.6 35.8 ± 5.1 58.0 ± 16.2 22.8 ± 8.8 ̲
7 12.5 ± 5.7 26.4 ± 3.9 99.5 ± 27.8 24.3 ± 7.7 ̲ 6.6 ± 1.9 39.2 ± 4.9 73.0 ± 23.3 24.2 ± 7.4 ̲
8 10.4 ± 4.3 31.1 ± 3.2 ̲ 109.7 ± 40.3 32.9 ± 7.7 6.2 ± 0.9 39.3 ± 4.5 80.6 ± 21.3 28.3 ± 8.6 ̲
9 12.7 ± 9.3 32.3 ± 2.1 131.2 ± 28.5 31.2 ± 6.1 ̲ 8.5 ± 3.8 44.9 ± 4.8 99.4 ± 33.1 35.3 ± 11.9 ̲
10 11.7 ± 2.6 29.7 ± 2.3 ̲ 160.7 ± 34.1 36.8 ± 9.0 7.7 ± 2.3 45.2 ± 6.2 119.8 ± 27.9 34.7 ± 12.5 ̲
The bold numbers represent the optimal results, while the underlined numbers represent suboptimal results.
Table 6. Overall computation time (with boustrophedon) performance of various methods.
No. | Circle | Convex Polygon
AHG (ms) | FF (ms) | RS (ms) | AHG (ms) | FF (ms) | RS (ms)
1 9.5 ± 5.2 16.5 ± 1.9 15.0 ± 4.2 ̲ 3.8 ± 0.7 24.6 ± 2.8 11.3 ± 2.4 ̲
2 7.5 ± 0.6 16.5 ± 1.6 14.1 ± 4.1 ̲ 4.6 ± 1.0 28.3 ± 4.2 14.3 ± 4.1 ̲
3 8.8 ± 2.4 19.0 ± 2.5 ̲ 19.4 ± 5.9 5.3 ± 1.9 28.9 ± 4.2 15.7 ± 6.7 ̲
4 9.1 ± 3.7 19.9 ± 4.4 17.5 ± 4.5 ̲ 5.5 ± 1.5 30.6 ± 4.9 19.1 ± 6.4 ̲
5 9.0 ± 0.9 22.3 ± 2.8 ̲ 24.2 ± 7.3 7.8 ± 6.0 35.6 ± 4.6 23.8 ± 7.8 ̲
6 8.9 ± 0.4 23.4 ± 2.4 ̲ 26.5 ± 5.2 9.0 ± 8.5 36.0 ± 5.2 23.0 ± 9.1 ̲
7 12.7 ± 6.2 26.6 ± 4.1 24.4 ± 7.8 ̲ 6.7 ± 1.9 39.3 ± 4.9 24.4 ± 7.4 ̲
8 10.5 ± 4.3 31.2 ± 3.2 ̲ 33.1 ± 8.1 6.3 ± 0.9 39.4 ± 4.6 28.4 ± 8.6 ̲
9 12.8 ± 9.3 32.4 ± 2.1 31.3 ± 6.1 ̲ 8.6 ± 3.8 45.0 ± 4.8 35.4 ± 11.9 ̲
10 11.8 ± 2.6 29.8 ± 2.3 ̲ 36.9 ± 9.0 7.8 ± 2.4 45.3 ± 6.2 34.9 ± 12.5 ̲
The bold numbers represent the optimal results, while the underlined numbers represent suboptimal results.
Table 7. Overall computation time (with branch and bound) performance of various methods.
No. | Circle | Convex Polygon
AHG (ms) | FF (ms) | PSO (s) | RS (ms) | AHG (ms) | FF (ms) | PSO (s) | RS (ms)
1 13.5 ± 5.4 20.1 ± 2.3 40.4 ± 10.2 19.8 ± 5.8 ̲ 7.8 ± 1.6 28.9 ± 3.6 31.3 ± 6.5 16.0 ± 3.4 ̲
2 10.9 ± 0.6 20.4 ± 3.3 49.1 ± 13.2 18.1 ± 5.4 ̲ 10.9 ± 2.5 34.9 ± 5.2 32.5 ± 5.2 20.2 ± 5.4 ̲
3 12.3 ± 2.5 22.5 ± 2.6 ̲ 55.6 ± 17.3 23.8 ± 7.1 12.8 ± 2.9 35.8 ± 5.9 36.6 ± 11.2 23.4 ± 7.5 ̲
4 14.1 ± 5.4 26.7 ± 8.3 46.3 ± 17.0 23.4 ± 7.4 ̲ 12.1 ± 3.6 36.2 ± 4.9 50.8 ± 15.1 26.6 ± 9.9 ̲
5 12.8 ± 1.4 26.5 ± 4.0 ̲ 78.0 ± 15.4 30.6 ± 10.3 17.0 ± 7.5 44.3 ± 5.4 52.5 ± 13.4 33.2 ± 9.1 ̲
6 12.4 ± 0.7 28.0 ± 3.6 ̲ 91.4 ± 15.6 34.7 ± 8.2 18.9 ± 11.2 45.3 ± 7.8 58.0 ± 16.2 32.5 ± 10.3 ̲
7 16.6 ± 6.5 32.9 ± 7.3 99.5 ± 27.8 29.6 ± 8.8 ̲ 15.6 ± 4.7 49.1 ± 5.9 73.0 ± 23.3 32.5 ± 8.0 ̲
8 14.0 ± 4.4 43.4 ± 7.1 ̲ 109.7 ± 40.3 46.8 ± 17.5 16.8 ± 5.2 49.6 ± 6.6 80.6 ± 21.3 38.4 ± 10.7 ̲
9 16.5 ± 9.3 48.3 ± 7.8 131.2 ± 28.5 41.3 ± 17.6 ̲ 18.2 ± 5.3 58.1 ± 6.1 99.4 ± 33.1 45.4 ± 12.2 ̲
10 18.0 ± 6.5 38.4 ± 4.9 ̲ 160.7 ± 34.1 45.9 ± 12.3 18.4 ± 5.0 55.1 ± 6.8 119.8 ± 27.9 45.7 ± 14.1 ̲
The bold numbers represent the optimal results, while the underlined numbers represent suboptimal results.
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.

Share and Cite

MDPI and ACS Style

Zhao, J. Coverage Path Planning with Adaptive Hyperbolic Grid for Step-Stare Imaging System. Drones 2024, 8, 242. https://doi.org/10.3390/drones8060242

