Article

Non-Line-of-Sight Target Detection Based on Dual-View Observation with Single-Channel UWB Radar

1 School of Mechanical and Electrical Engineering, Chengdu University of Technology, Chengdu 610059, China
2 School of Information and Communication Engineering, University of Electronic Science and Technology of China, Chengdu 611731, China
* Author to whom correspondence should be addressed.
Remote Sens. 2022, 14(18), 4532; https://doi.org/10.3390/rs14184532
Submission received: 23 July 2022 / Revised: 5 September 2022 / Accepted: 8 September 2022 / Published: 11 September 2022
(This article belongs to the Section Urban Remote Sensing)

Abstract

Non-line-of-sight (NLOS) target detection utilizing multipath plays an important role in anti-terrorism, urban warfare, indoor rescue and intelligent driving. In this paper, an imaging method based on single-channel ultra-wideband (UWB) radar and dual-view observation is proposed for detecting NLOS targets. First, the cause of the ghosts produced by the single-view method of a previous study is analyzed in detail. On this basis, focusing on dual-view observation, the ghost problem is handled at two levels. At the level of the radar echo, the selected paths are partitioned according to the two views, which solves the problem of uneven energy among the multipaths; two images are then generated by multipath imaging, reducing the accumulation of ghosts. At the level of the image, owing to the distinct distributions of ghosts in the two imaging results, the ghosts at different locations are eliminated by image fusion. Experimental results demonstrate that, compared with single-view and direct multipath imaging, the approach based on dual-view observation eliminates most ghosts while retaining the targets in both single-target and double-target scenarios, which verifies the effectiveness of the method.

1. Introduction

The detection of non-line-of-sight (NLOS) targets in the urban environment has important application value in the military and social security fields, such as urban warfare, indoor rescue and disaster rescue [1,2,3,4,5,6]. In recent years, NLOS target detection has also attracted attention in intelligent driving, mobile communication and other areas of daily life [7,8,9,10,11,12]. Normally, NLOS scenarios can be divided into two types: targets inside an enclosed space, and targets hidden behind a corner. Unlike common target detection in the line-of-sight (LOS) region, the electromagnetic (EM) wave cannot reach targets in NLOS scenarios directly. However, NLOS targets can be detected by penetration or by multipath. Targets in enclosed environments are usually detected by through-the-wall (TTW) radar, which exploits the penetration of low-frequency EM waves to detect the target behind the wall [13,14]. In addition, targets behind the corner can be detected by multipath propagation, including diffraction, specular reflection and diffuse reflection.
The application of multipath in radar originated from a multipath exploitation project [15,16]. Early research mainly focused on the feasibility of obtaining NLOS target information from multipath signals [17,18,19,20]. Recently, using multipath signals to detect NLOS targets has gradually become common. According to whether a single multipath signal or multiple multipath signals are used, NLOS localization can be classified into two types. The first type utilizes only a single robust multipath echo, which is very common with millimeter wave (MMW) radar. Since the diffraction ability of the EM wave is weak in the MMW band and higher-order multipaths suffer severe attenuation, only the first-order reflection path can be utilized effectively. In [21], a localization algorithm based on phase comparison among multiple channels was proposed. The false target range was obtained via the fast Fourier transform (FFT), and the false target azimuth was derived from the phase differences between echoes of the multiple channels. Then, the true target position was inversely calculated according to the geometric symmetry between the false target and the true target. In [22], target localization was realized by a synthetic bistatic MMW radar, in which a virtual bistatic configuration was formed through the movement of the MMW radar. The approach avoided the disadvantage of inaccurate direction of arrival (DOA) estimation. Note that the above methods applied only the single-bounce multipath. The other type focuses on exploiting multiple multipath signals at low frequency. In the low-frequency band, both diffraction and reflection can be exploited, and there are more robust multipaths from which to extract target information. Some studies have been conducted to localize targets via multiple paths. In [23], a method of NLOS human target detection in a real scene via one- and two-wall reflections was proposed. In the experiments, two moving targets were detected simultaneously and separated when they kept a sufficient distance from each other. Subsequently, a moving NLOS target localization method based on numerical simulation was proposed in [24]. Specifically, a scanning narrow-beam radar system was utilized to prevent mutual interference of the various multipaths. This solution assumed that the radar can provide sufficient angular resolution to cover the monitored region. Combining the radar beam direction with the detected distance of the target, the position of the target could be obtained easily.
Different from the narrow-beam radar system, the wide-beam radar finds it difficult to obtain angle information from the paths, resulting in mutual interference of the multipaths, namely the ambiguity of the multipath propagation. This multipath ambiguity increases the positioning error. In order to deal with this problem, several methods were proposed in related works [25,26,27]. For example, a target localization method based on the diffraction path, the one-bounce reflection path and their combination was proposed in [25]. The multipath echoes were separated by a parallel threshold-based detector. After that, a global nearest-neighbor (GNN) algorithm was exploited to refine the range estimates of the returns. The obtained range estimates were assigned to the corresponding propagation paths, mitigating the ambiguities of the multipath. The experimental results demonstrated that correct location estimation was achieved with the robust multipath range estimates produced by the GNN algorithm. In [26], considering the complexity of the multipath measurement model and the nonlinear relationship between target position and multipath delay, a filter called the generalized likelihood ratio test (GLRT) particle filter was presented, which succeeded in alleviating the ambiguities and considerably improving the root-mean-square error. It is worth noting that this method depends on the assumption of an ideal signal model without clutter. Nevertheless, multipath signals can be severely interfered with by clutter in a practical environment. Two algorithms were developed in [27] to reduce ambiguities in localization by taking into account information from different paths simultaneously. First, a multipath-propagation-based matched filter was proposed. The filter jointly processed the information provided by the multipath returns to detect the NLOS target and estimate its location. Only rough information about the scenario was required as a prior. Subsequently, a square law combiner (SLC) algorithm was designed to perform the same task by computing an incoherent integration over the multipath delays. However, the two algorithms differ in their computation and memory requirements. Furthermore, owing to the existence of special paths, a path selection algorithm was designed to optimize the detection probability. The simulated and experimental results demonstrated that the two algorithms could improve detection performance and retrieve the target position with higher accuracy.
The above methods need to separate the desired multipaths precisely. However, in actual multi-target situations, the multipaths are more likely to interfere and alias with each other, which makes it difficult to extract accurately the multipaths corresponding to the targets. Different from methods based on multipath separation, our previous work [28] proposed an approach of direct multipath imaging based on the propagation characteristics of the different multipath signals. This approach avoided separating or matching the multipaths, but directly mapped the radar echo into multiple imaging trajectories according to the positions of the virtual radars. The target was finally focused by incoherent addition. The experimental results showed that the target could be located via this method even under interference. However, ghosts appeared in the results owing to the difference in energy between the multipaths, and they were more prominent in multi-target cases. As a continuation of the previous study, this paper focuses on solving the ghost problem. In order to suppress ghosts, the view dependence of the ghosts and the limitation of the single view are considered, and detection with a dual view is therefore adopted. The multipaths are strengthened, respectively, based on the dual view, and then divided into two parts corresponding to the two views for multipath imaging. Because the distribution of ghosts differs between the two views, the staggered ghosts can be eliminated by image fusion. The main contents and innovations are as follows.
  • The causes of the ghosts produced by the previous method are analyzed in detail from the perspective of the range profile, and the corresponding ghost distributions are demonstrated. Two key factors influencing ghost formation are revealed.
  • Based on the directivity of the radar beam, we propose to replace single-view detection with dual-view joint detection to achieve reinforcement of the weak paths.
  • To maximize the usage efficiency of the paths and decrease the accumulation of ghosts, the selected multipaths are first divided into two complementary parts corresponding to the two views. The two parts are then used to perform multipath imaging, respectively. By utilizing the difference in ghost distribution between the two views, multiplication fusion of the imaging results is carried out to eliminate the staggered ghosts.
  • Real experiments are conducted in both single-target and double-target scenarios, verifying the superiority of the proposed approach over direct multipath imaging.
The rest of this article is organized as follows. In Section 2, the multipath propagation model of the L-shaped corner scenario is established. In Section 3, the causes and situations of producing ghosts by the previous method [28] are analyzed, and the key factors affecting the ghosts are summarized. Section 4 proposes a novel method based on a dual-view observation to suppress ghosts. Real experimental results validate the effectiveness of the method in Section 5 and the related discussions are presented in Section 6. The paper is concluded in Section 7.

2. Multipath Propagation Model

In this section, a typical L-shaped multipath propagation model is established. In addition, to achieve more accurate positioning, six saliency multipaths are picked out from the propagation model as a preparation for the imaging approach.
The L-shaped corner scenario, a typical NLOS environment, is the multipath propagation scenario considered in this paper. By abstracting the real sheltered environment, an L-shaped scenario is established in a two-dimensional Cartesian coordinate system, as shown in Figure 1. The origin O = (0, 0) is placed at the left endpoint of the upper surface of Wall-3. Specifically, the L-shaped environment consists of four walls. Wall-1 and Wall-2 form the corner C, which is located at (x_C, y_C). Wall-3 and Wall-4 fill the exterior of the L-shaped corridor. The ordinate of the upper surface of Wall-3 is y = 0, and the abscissa of the left surface of Wall-4 is x = x_W4.
A single-channel UWB radar is used to sense the NLOS human target. To simplify the scene model, it is assumed that the EM reflection at all walls is specular, and the rough layout of the L-shaped corner scenario is known a priori. In Figure 1, the radar R = (x_R, y_R), oriented toward the corner C, is located in the corridor formed by Wall-1 and Wall-3. The whole scene is divided into the LOS region and the NLOS region by the straight line through R and C. The target P = (x_p, y_p) is located in the NLOS region of the other corridor, formed by Wall-2 and Wall-4. The signal can be directly reflected by a target in the LOS region and received by the receiver, whereas a target in the NLOS region must be detected by diffraction at the corner and reflection on the walls. A reflection path can be regarded as the direct propagation of an EM wave emitted by a virtual radar that is the mirror image of the real radar with respect to the reflective wall. To obtain salient paths, only diffraction and first-order reflection are considered at first. Three single paths can be obtained from the L-shaped environment.
Path-1: R→C→P→C→R. The EM wave transmits to the corner, diffracts to the target, and then returns along the original path. Since the position of the radar relative to the corner C remains unchanged, C can be regarded as a new virtual radar source after the diffraction.
Path-2: R→F_1→P→F_1→R. The EM wave is reflected at F_1, the reflection propagates to the target, and the echo returns along the original path. According to the reflection theorem, the path can be expressed as R_2→P→R_2, where R_2 denotes the mirror-symmetric radar of R with respect to the upper surface of Wall-3.
Path-3: R→F_2→P→F_2→R. The EM wave is reflected at F_2 and the reflection propagates to the target; the echo returns along the original path. Likewise, the path can be expressed as R_3→P→R_3, where R_3 denotes the mirror-symmetric radar of R with respect to the left surface of Wall-4.
Furthermore, high-order reflection paths are formed by continuous reflections on the walls, such as the second-order path R→F_3→F_4→P→F_4→F_3→R. It is worth noting that the signal suffers severe attenuation with each additional bounce. Because of the continuous bouncing of the signal between the walls, the length of a high-order path is highly correlated with the width of the corridor. This means that the air attenuation of high-order reflections varies in L-shaped scenarios of different widths; namely, the energy of a high-order reflection return is unstable. The experimental result in [29] shows that the energy of the EM wave attenuates significantly after reflecting twice between the walls. To ensure the salience of the paths, the second-order and higher-order reflection paths are discarded, and only the diffraction path and the first-order reflection paths are considered.
Besides the pure single paths, three new composite paths can be obtained by combining single paths.
Path-4: R→C→P→F_1→R. The EM wave transmits to the corner, diffracts to the target, and is then reflected back to the receiver at F_1.
Path-5: R→C→P→F_2→R. The EM wave propagates to the target by diffraction at the corner, and is then reflected back to the receiver at F_2.
Path-6: R→F_1→P→F_2→R. The EM wave propagates to the target by reflection at F_1, and is then reflected back to the receiver at F_2.
Thus, the six robust propagation paths described above are selected for multipath imaging. Moreover, these saliency paths are related to the virtual radar sources C, R_2 and R_3; thus, the length of each path can be calculated simply by the mirror symmetry principle, which is expressed as
$$
r_i = \begin{cases}
2\,(\|RC\| + \|CP\|), & i = 1,\\
2\,\|R_2P\|, & i = 2,\\
2\,\|R_3P\|, & i = 3,\\
\|RC\| + \|CP\| + \|PR_2\|, & i = 4,\\
\|RC\| + \|CP\| + \|PR_3\|, & i = 5,\\
\|R_2P\| + \|PR_3\|, & i = 6,
\end{cases}
$$
where ‖·‖ denotes the Euclidean distance. The coordinates of R_2 are (x_R, −y_R) and the coordinates of R_3 are (2x_W4 − x_R, y_R).
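For illustration, the sketch below evaluates these path lengths numerically. The function name, the default wall coordinates and the example layout (borrowed from the experiment in Section 5) are assumptions made for demonstration, not part of the original implementation.

```python
import numpy as np

def saliency_path_lengths(R, C, P, y_wall3=0.0, x_wall4=6.6):
    """Lengths of the six saliency paths in Equation (1).

    R, C, P are (x, y) coordinates of the radar, the corner and the target.
    The virtual radars follow the mirror-symmetry principle:
    R2 mirrors R about the upper surface of Wall-3 (y = y_wall3),
    R3 mirrors R about the left surface of Wall-4 (x = x_wall4).
    """
    R, C, P = (np.asarray(v, dtype=float) for v in (R, C, P))
    R2 = np.array([R[0], 2.0 * y_wall3 - R[1]])   # mirror about Wall-3
    R3 = np.array([2.0 * x_wall4 - R[0], R[1]])   # mirror about Wall-4
    d = lambda a, b: np.linalg.norm(a - b)        # Euclidean distance

    return np.array([
        2.0 * (d(R, C) + d(C, P)),                # Path-1: diffraction at C
        2.0 * d(R2, P),                           # Path-2: Wall-3 reflection
        2.0 * d(R3, P),                           # Path-3: Wall-4 reflection
        d(R, C) + d(C, P) + d(P, R2),             # Path-4: diffraction + Wall-3
        d(R, C) + d(C, P) + d(P, R3),             # Path-5: diffraction + Wall-4
        d(R2, P) + d(P, R3),                      # Path-6: Wall-3 + Wall-4
    ])

# Example with the layout used in Section 5 (radar, corner, target):
r = saliency_path_lengths(R=(1.23, 1.72), C=(2.10, 2.35), P=(3.30, 3.80))
```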

3. Direct Multipath Imaging and Analysis of Ghost Formation

To ascertain in detail the causes of ghosting in the previous method [28], this section first briefly describes the imaging approach. On this basis, three cases of ghosting are derived through analysis of the range profiles, and the corresponding imaging results are demonstrated. Finally, two key factors for suppressing ghosts are summarized.

3.1. Imaging Approach

If the transmitted impulse signal of the single-channel UWB radar is s(t), then the echo signal y(t) can be expressed as

$$
y(t) = \sum_{i=1}^{I} \sigma_i\, s(t - \tau_i) + \varphi(t) + n(t),
$$

where I is the number of propagation paths, meaning that the receiving antenna uniformly collects the echoes from all multipaths; σ_i denotes the attenuation coefficient of the EM wave propagating along each path; τ_i represents the propagation delay of the transmit-receive path; φ(t) represents the echo of the static walls; and n(t) denotes the environmental noise.
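As a rough numerical illustration of this signal model, the sketch below superimposes delayed and attenuated copies of an assumed Gaussian-modulated impulse; the waveform, carrier, bandwidth, delays and noise level are all illustrative values (the radar used in Section 5 actually transmits an SFCW waveform), and the static wall return φ(t) is omitted.

```python
import numpy as np

def pulse(t, fc=3.0e9, bw=1.5e9):
    """Illustrative UWB impulse s(t): a Gaussian-modulated sinusoid."""
    sigma = 1.0 / (2.0 * np.pi * bw)
    return np.exp(-t**2 / (2.0 * sigma**2)) * np.cos(2.0 * np.pi * fc * t)

def echo(t, delays, attenuations, noise_std=0.01, seed=0):
    """Equation (2): sum of attenuated, delayed copies of s(t) plus noise."""
    rng = np.random.default_rng(seed)
    y = np.zeros_like(t)
    for sigma_i, tau_i in zip(attenuations, delays):
        y += sigma_i * pulse(t - tau_i)        # one multipath contribution
    return y + noise_std * rng.standard_normal(t.shape)

# Example with three hypothetical multipath delays (tau_i = r_i / c):
t = np.arange(0.0, 100e-9, 25e-12)
y = echo(t, delays=[40e-9, 45e-9, 52e-9], attenuations=[1.0, 0.6, 0.8])
```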
Based on the radar returns, the image is generated by the back projection (BP) algorithm [30]. Different from methods that separate the desired path from the echo, the previous method maps the echo into the scene according to each selected path. Specifically, the scenario is divided into M × N pixels, where the coordinates of pixel p are expressed as
$$
Z_p = (x_{p_i}, y_{p_j}), \quad i = 1, 2, \ldots, M,\; j = 1, 2, \ldots, N.
$$
Then, each individual pixel is considered as the target. By replacing the target P in Equation (1) with pixel p in the scene, the propagation paths corresponding to each pixel, which are also called focusing paths, are obtained. The distances of the EM wave propagating along the focusing paths are denoted by
$$
d_i = \begin{cases}
2\,(\|RC\| + \|Cp\|), & i = 1,\\
2\,\|R_2p\|, & i = 2,\\
2\,\|R_3p\|, & i = 3,\\
\|RC\| + \|Cp\| + \|pR_2\|, & i = 4,\\
\|RC\| + \|Cp\| + \|pR_3\|, & i = 5,\\
\|R_2p\| + \|pR_3\|, & i = 6.
\end{cases}
$$
Thus, the focusing delay with respect to each focusing path can be calculated by
$$
\tau_i = \frac{d_i}{c},
$$
where c denotes the propagation velocity of EM wave in the air.
The amplitude of each pixel can be computed by inserting the focusing delay τ_i into Equation (2), and the operation is repeated at every pixel, which is expressed as
$$
G_i = \begin{bmatrix}
y\big(\tau_i(x_{p_1}, y_{p_1})\big) & \cdots & y\big(\tau_i(x_{p_1}, y_{p_N})\big)\\
\vdots & \ddots & \vdots\\
y\big(\tau_i(x_{p_M}, y_{p_1})\big) & \cdots & y\big(\tau_i(x_{p_M}, y_{p_N})\big)
\end{bmatrix},
$$

where G_i denotes the imaging matrix, i.e., the result of performing multipath imaging on the radar echo according to the i-th focusing path.
Finally, the incoherent addition is exploited to fuse six images to focus on the target position. Therefore, the target image is given by
$$
G = \sum_{i=1}^{6} \mathrm{abs}(G_i).
$$
As can be observed from Figure 2, the six trajectories eventually intersect at the target point. To facilitate understanding, a brief description of the previous approach, summarized in three steps, is given in Table 1.
It is worth mentioning that, to make it easier to observe the trajectories focusing on the target, the demonstration in Figure 2 is based on the assumption that only the corresponding saliency path remains in the echo when each multipath imaging is performed. However, since the echo is collected uniformly, all paths in the echo are mapped into the scene rather than only the selected saliency path. The unconsidered paths affect the imaging results and can even cause ghosts. Therefore, the actual imaging result is more complex than Figure 2.
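A minimal sketch of the direct multipath imaging described above (pixel grid, focusing-path distances, echo mapping and incoherent addition) is given below. The function name, the grid handling and the assumption that the range profile is indexed by total propagation distance are illustrative choices, not the exact implementation of [28].

```python
import numpy as np

def multipath_bp_image(profile, dist_axis, xs, ys, radar, corner,
                       y_wall3=0.0, x_wall4=6.6, paths=(1, 2, 3, 4, 5, 6)):
    """Sketch of direct multipath BP imaging over an M x N pixel grid.

    profile   : 1-D range-profile magnitudes for one slow-time period
    dist_axis : total propagation distance of each profile bin (assumed
                calibration of the profile)
    xs, ys    : pixel-centre coordinates along x and y
    paths     : indices of the focusing paths mapped into the scene
    """
    R = np.asarray(radar, dtype=float)
    C = np.asarray(corner, dtype=float)
    R2 = np.array([R[0], 2.0 * y_wall3 - R[1]])    # virtual radar (Wall-3)
    R3 = np.array([2.0 * x_wall4 - R[0], R[1]])    # virtual radar (Wall-4)

    X, Y = np.meshgrid(xs, ys, indexing="ij")      # pixel grid Z_p
    pix = np.stack([X, Y], axis=-1)
    dist_to = lambda A: np.linalg.norm(pix - A, axis=-1)
    dRC = np.linalg.norm(R - C)

    # Focusing-path distances: the target P replaced by pixel p in Equation (1).
    d = {1: 2.0 * (dRC + dist_to(C)),
         2: 2.0 * dist_to(R2),
         3: 2.0 * dist_to(R3),
         4: dRC + dist_to(C) + dist_to(R2),
         5: dRC + dist_to(C) + dist_to(R3),
         6: dist_to(R2) + dist_to(R3)}

    # Map the echo onto the scene along each focusing path and add the
    # per-path images incoherently.
    image = np.zeros(X.shape)
    for i in paths:
        image += np.abs(np.interp(d[i], dist_axis, profile))
    return image
```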
In what follows, the echo signal is analyzed together with some details of the imaging process to explore the causes of ghost generation.

3.2. Problem Analysis

In order to analyze the characteristics of the multipaths in the returns, two sets of radar echoes are collected in an L-shaped corridor, one with a single NLOS target and one with two NLOS targets. To facilitate detection, each target sways slightly while keeping its position fixed. A single-channel UWB radar transmitting an impulse signal is used to sense the micro-motion of the human. Figure 3 shows the range profiles generated by the echoes of the single target and the two targets, respectively.
The ranges of the selected paths can be calculated by Equation (1); hence, the corresponding paths can be identified in the range profile. It can be observed from Figure 3a that the six paths of the multipath propagation model are obvious in the range profile, indicating that these paths are sufficiently robust in the single-target case. Nevertheless, when two targets are detected (see Figure 3b), the ambiguity of the paths increases rapidly, making it difficult to separate the mixed multipaths generated by the two targets.
To be more specific, there are three issues worth considering in the two range profiles, which are discussed in the following content.
  • Case 1: Some of the saliency paths are feeble. The energy difference is obvious, with weak paths (such as P1 (Path-1)) and strong paths (such as P3 (Path-3)) present in both range profiles. A weak path contributes little to target localization because the corresponding trajectory generated by BP imaging is feeble. In addition, because all multipaths in the echo are involved in each BP imaging process, when the imaging result of the weak path, which still retains the strong-path trajectory, is incoherently added to the result produced by the strong path, the overlap of strong paths produces a ghost with higher energy than the target. For instance, in Figure 4a, the imaging results with respect to Path-1 and Path-3 are superimposed. Path-1 has lower energy while Path-3 is strong; thus, the ghost formed by the self-intersection of Path-3 has higher energy than the real target formed by the overlap of Path-1 and Path-3. The ghost is therefore mistaken for a target. Notably, this demonstration uses only two selected multipaths; when more strong paths participate in the BP imaging of the weak path, ghost spots are generated in larger quantities. It is difficult to extract the target if the energy of a ghost is equal to or stronger than that of the target.
  • Case 2: Path aliasing occurs in the range profile. In the single-target case, P3 and P4 (Path-4) partially overlap in the range profile, causing the two trajectories to lie close to each other after mapping. If the two paths are assumed to have equal energy, four points are generated by the superposition of the two imaging results, including three ghosts close to the target. These are prone to merging into a larger unit that is regarded as an individual target, as shown in Figure 4b. The aliasing phenomenon is more serious in the range profile of the two targets, where some paths are completely mixed. In Figure 3b, many multipaths between P13 and P17 are mixed with each other, resulting in severe attenuation of these multipaths and greatly reducing the number of effective paths for imaging.
  • Case 3: A second-order reflection path exhibits strong energy in the range profile, such as P7 in Figure 3a and P17 in Figure 3b. Similar to the first case, a strong high-order path is likely to intersect with other strong multipaths and form ghosts during the incoherent addition. Furthermore, this discarded multipath is unrelated to target localization, meaning that the emergence of a strong high-order multipath reduces the quality of the imaging result. Moreover, in the actual detection environment, it is hard to predict whether the echo of a high-order path will have strong energy. This uncertainty makes it hard to exploit strong high-order paths.
The aforementioned cases can jointly affect imaging, making the distribution of ghosts even more chaotic. Figure 5 shows the single-target and double-target results of the direct multipath imaging algorithm. In Figure 5a, the target point is formed by the intersecting trajectories of the selected paths. However, there are many ghosts in the result, and some even have stronger energy than the target point. Despite the serious interference, there are still two or three available paths with strong energy in the range profile; thus, a target point with a certain energy can still be observed in the imaging result. Different from the single-target condition, the more severe aliasing makes the saliency multipaths less useful for double-target detection, greatly reducing the quality of the imaging result. In Figure 5b, there is still a faint trace of target-1, but target-2 cannot be found at all. Combined with the above case analysis, it can be inferred that the number of strong selected paths is a key factor for highlighting the target and suppressing ghosts in NLOS detection.
Figure 4. Different cases of ghosting in L-shaped corner scenario. (a) Case 1. (b) Case 2.
Additionally, the analysis of the echo signal shows that the incoherent addition of any two imaging matrices can generate ghosts when there is an obvious difference in multipath energy. Beyond that, ghosts with distinct energies also accumulate through the incoherent summation of the six scene matrices, which indicates that ghosts are not only produced by incoherent addition but also gathered together in the final imaging result. This is obviously not conducive to target extraction. Thus, decreasing the ghost accumulation is another key to suppressing ghosts. Our approach therefore focuses on obtaining strong paths and reducing the aggregation of ghosts to optimize the imaging result.

4. Proposed Method

Given the two factors summarized above, a method based on dual-view observation and path partition is proposed to suppress ghosts. To facilitate the understanding and application of the proposed method, the flowchart of the approach is shown in Figure 6. Specifically, the limitation of the single view is discussed first, and the use of a dual view is then derived to enhance the saliency multipaths in a targeted manner. Subsequently, the paths are partitioned into two parts corresponding to the two views so as to fully utilize the enhanced paths. After a demonstration and analysis of the imaging results from the two views, image fusion is proposed to eliminate the ghosts. Finally, the algorithms of single-view multipath imaging and dual-view image fusion are described.

4.1. Energy Redistribution via Dual-View

Path energy is closely related to the radar beam. Due to the directivity of the radar beam, the energy of the emitted signal varies in different directions. Specifically, when the radar is positioned with its antenna toward the corner, as in the single view (View 1) in Figure 7a, the main lobe points toward the corner and the right wall (Wall-4), while the sidelobe points toward the lower wall (Wall-3). Hence, the original energies of the diffraction and the Wall-4 reflection are strong, but the Wall-3 reflection energy is relatively weak. In other words, the weaker Wall-3 reflection is prone to producing a weak path, which impairs the advantage of the selected saliency paths. Thus, it can be concluded that the utilization efficiency of the selected saliency paths decreases under single-view observation because of the directivity of the radar beam.
The directivity of the radar beam causes the energy of the Wall-3 reflection to be feeble, but it also strongly reinforces the path corresponding to the main lobe. Based on this, we propose to employ a new view to enhance the energy of the Wall-3 reflection associated with the sidelobe, as shown in Figure 7b. In this view, the main lobe is turned toward Wall-3 by rotating the radar, enhancing the energy of the Wall-3 reflection, which is initially weak in View 1. Notably, the reinforced path corresponding to the main lobe in one view is the weak path corresponding to the sidelobe in the other view. Accordingly, the previous view (View 1) and the additional view (View 2) are exploited jointly as a dual view rather than a single view, mutually compensating the weak path energy. Specifically, in View 1, the radar is placed in the original direction, namely toward the corner C (see Figure 7a). The radar is then rotated into View 2, namely toward Wall-3 (see Figure 7b). In View 1, the energies of the diffraction and the Wall-4 reflection, associated with the main lobe, are strong; likewise, the Wall-3 reflection energy is strong in View 2. In this way, the path energy is redistributed through the new view, so that the saliency paths are enhanced across the two views.

4.2. Path Partition

For the six saliency paths, dual-view observation does not enhance all path energies uniformly; instead, different paths are strengthened in each view. Thus, to further exploit the advantages of the dual view, a targeted partition of the paths is conducted before multipath imaging. In brief, only the enhanced paths are picked out for imaging, so that these strong paths contribute to target formation rather than producing ghosts. View 1 is a special view because it enhances both the diffraction and the Wall-4 reflection. Based on this feature, the pure paths of diffraction and Wall-4 reflection (Path-1, Path-3) and their mixed path (Path-5) are exploited as the focusing paths for imaging in View 1, and the remaining selected paths are abandoned. Similarly, the enhanced paths are utilized to perform multipath imaging in View 2. The difference is that the main lobe only strengthens the Wall-3 reflection in View 2; therefore, the enhancement is not as large as in View 1, but the paths related to the Wall-3 reflection, which are the remainder of the selected paths, are still strengthened. However, the diffracted wave in this view originates from the weaker sidelobe of the emitted signal, which means that the related path Path-4 is not robust. Hence, Path-4 is discarded; namely, the focusing paths of View 2 are eventually Path-2 and Path-6.
Since the original six-path imaging has now become three-path and two-path imaging, the aggregation of ghosts is decreased. If multipath imaging is performed with the two parts of paths, respectively, the imaging results from the two views can be demonstrated as shown in Figure 8. In each figure, the red square denotes the target point, while the black squares represent ghost spots. Note that even if only the strong saliency paths are applied for imaging, ghosts can still form. However, benefiting from the path partition, fewer ghosts are produced in each view because fewer paths overlap. Moreover, since the focusing paths of the two parts are different, the positions of most ghosts in the two images are staggered. On this basis, image multiplication fusion can be utilized to eliminate the staggered ghosts.

4.3. Single-View Multipath Imaging and Dual-View Image Fusion

Multiplication fusion is applied to fuse the images formed from the dual view. With this approach, the target points reinforce each other and are retained, while the ghost spots are multiplied with the blank portions of the other image and annihilated. In this way, numerous ghosts can be eliminated while the targets are merged. The algorithm of the image fusion is described below.
Before image fusion, BP imaging must be completed to obtain the imaging matrices of the two images. The propagation distance of each pixel according to the corresponding path can be obtained via Equation (1):
$$
R_i = \begin{bmatrix}
r_i(1, 1) & \cdots & r_i(1, N)\\
\vdots & \ddots & \vdots\\
r_i(M, 1) & \cdots & r_i(M, N)
\end{bmatrix},
$$

where R_i represents the distance matrix corresponding to the i-th path.
Based on the signal model in Equation (2), the collected radar echo is preprocessed to obtain the clutter-reduced range profile matrix. To keep the focus on the imaging algorithm, the preprocessing is not detailed here. Since the imaging result describes the localization at a certain time, only one period of the range profile matrix is needed for imaging. According to the value of each pixel in the distance matrix, the corresponding amplitude in the range profile Ψ is indexed as the image value of that pixel, which is expressed as
$$
A_i(Z_p) = \Psi\big(R_i(Z_p)\big),
$$
where A_i represents the imaging matrix corresponding to the i-th saliency path. Based on the proposed method, the saliency paths corresponding to each view are utilized to perform multipath imaging, respectively. Thus, in View 1, the image I_1 is yielded by the incoherent addition of A_i (i = 1, 3, 5). Likewise, the image I_2 is obtained by the incoherent addition of A_i (i = 2, 6) in View 2, which can be expressed as
$$
I_1 = A_1 + A_3 + A_5, \qquad I_2 = A_2 + A_6.
$$
When the two images are generated directly, ghosts exist in each of them because of the deficiency of imaging paths and the presence of non-imaging paths. However, the distribution of ghosts is different while the target position is almost invariant between the two images. Thus, we fuse the two images with the multiplication fusion method [31] to obtain the final image with suppressed ghosts, which can be calculated by
$$
I_p(Z_p) = I_1(Z_p) \odot I_2(Z_p),
$$
where ⊙ denotes the Hadamard product. On the basis of the image fusion, the final image I_p will contain a noticeable real target against a clear background.
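Under the same assumptions as the imaging sketch in Section 3.1, the dual-view processing reduces to two imaging passes with the partitioned path sets followed by the Hadamard product, as sketched below; multipath_bp_image refers to that earlier hypothetical helper.

```python
def dual_view_fusion(profile_v1, profile_v2, dist_axis, xs, ys, radar, corner,
                     **walls):
    """Path partition, per-view multipath imaging and multiplication fusion,
    reusing the multipath_bp_image() sketch from Section 3.1."""
    # View 1 enhances diffraction and the Wall-4 reflection: Paths 1, 3, 5.
    I1 = multipath_bp_image(profile_v1, dist_axis, xs, ys, radar, corner,
                            paths=(1, 3, 5), **walls)
    # View 2 enhances the Wall-3 reflection: Paths 2 and 6.
    I2 = multipath_bp_image(profile_v2, dist_axis, xs, ys, radar, corner,
                            paths=(2, 6), **walls)
    # Hadamard product: targets (present in both views) survive, while
    # staggered ghosts fall onto near-zero pixels of the other image.
    return I1 * I2
```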

5. Experimental Results

To evaluate the performance of the method in a real urban environment, single-target and double-target detection experiments are carried out in a building, as shown in Figure 9. A single-channel UWB radar transmitting a stepped-frequency continuous wave (SFCW) is employed in the experiments. The detailed parameters of the radar are given in Table 2. The experimental scenario in Figure 9 is an L-shaped corridor, and the whole scene is about 8 m × 8 m. The architectural composition can be obtained by other auxiliary tools [32]; thus, the rough layout of the environment is known as a priori information. To ensure that the scene resolution is finer than the range resolution, the scenario is divided into 512 × 512 pixels. Note that the left end of Wall-3 is the origin O, and the position of the corner C is (2.1 m, 2.35 m). The length of Wall-3 along the horizontal direction is 6.6 m. Similarly, the location of the central point of Wall-4 is (6.6 m, 4 m), and its length along the vertical direction is 8 m. The targets stand in the NLOS region. When the radar collects data, only the target personnel are present in the scene.

5.1. Preprocessing

Before imaging, the N periods of echo data collected by the radar should be preprocessed to obtain the range profile. The I and Q digital signals are first combined into complex signals. Then, to reduce energy leakage and suppress sidelobes, a Hamming window is applied to the echo signal [33]. Finally, the FFT is used for pulse compression to generate the range profile [34]. These operations are repeated for each period to obtain a two-dimensional range profile matrix with period as the row index and range as the column index.
In the actual NLOS target detection scenario, the direct coupling signal between the radar transmitting and receiving antennas and the wall echoes are strong static clutter, which covers the moving target echo and makes it difficult to recognize the human target in the range profile. Since the walls are motionless, the amplitude and range of the static clutter are stable across periods. The human target, however, is dynamic: breathing, heartbeat and other subtle body motions detectable by UWB radar make the amplitude of the multipath echo change periodically [35]. For this reason, the moving target indicator (MTI) can be used to suppress the static clutter [36]. Specifically, the range profile Ψ after filtering the static clutter can be obtained by subtracting the mean value over all periods from each period, which can be expressed as
$$
\Psi(n, m) = \Psi_0(n, m) - \frac{1}{N}\sum_{i=1}^{N}\Psi_0(i, m), \quad n = 1, \ldots, N,\; m = 1, \ldots, M,
$$

where Ψ_0 denotes the range profile matrix before clutter filtering, and N and M denote the number of acquisition periods and the number of Fourier transform points, respectively. The filtered range profile is directly utilized for multipath imaging.
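A compact sketch of this preprocessing chain is given below; the array layout, the FFT-based pulse compression and the final magnitude operation are assumptions consistent with the description above, not the radar's actual processing code.

```python
import numpy as np

def preprocess(iq):
    """Preprocessing sketch: windowing, FFT pulse compression and
    mean-subtraction MTI filtering of the static clutter.

    iq : complex array of shape (N periods, samples per period), assumed to
         hold the combined I + jQ samples of the SFCW radar.
    """
    win = np.hamming(iq.shape[1])               # suppress range sidelobes
    profile = np.fft.fft(iq * win, axis=1)      # pulse compression -> range profile
    # MTI: subtract the slow-time mean of every range bin from each period,
    # removing the static wall and antenna-coupling clutter.
    mti = profile - profile.mean(axis=0, keepdims=True)
    return np.abs(mti)                          # magnitude used for imaging
```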

5.2. Experiment in Real Urban Scenes

5.2.1. Single Micro-Motion Target

The simple single-target scenario is tested first. The radar is located at (1.23 m, 1.72 m) and the target stands at (3.30 m, 3.80 m). The radar faces the corner in View 1 and the lower wall (Wall-3) in View 2, which is obtained by rotating View 1 clockwise by 90°. Keeping the radar position unchanged, 400 periods of data are collected from each of the two views. During the acquisition, the target keeps shaking slightly, with an amplitude of 5 to 10 cm.
Figure 10 shows the range profiles of the 400 periods from the two views: Figure 10a is the range profile of View 1, and Figure 10b is the range profile of View 2. By comparing the range profiles of the two views, it can be observed that the new view does shift strong energy to the paths that correspond to the sidelobe in View 1. However, in Figure 10b, the paths in the strong-energy region are dense, and there is a certain overlap between them. Additionally, besides the first-order paths, other strong high-order paths appear in both range profiles. Although the problems of strong high-order paths and path overlap still exist, the strong paths in the two range profiles appear in accordance with our expectations: Path-1 (P1), Path-3 (P3) and Path-5 (P5) exhibit strong energy in View 1, and Path-2 (P2) and Path-6 (P6) similarly show strong energy in View 2.
Based on the range profiles from the two views, multipath images are produced using the proposed algorithm. Figure 11a,b show the imaging results of the two views, respectively, and Figure 11c is the fusion of the two. In addition, the result of direct multipath imaging is given in Figure 11d. As can be observed from Figure 11a,b, ghosts with different energies appear besides the target point, and some ghosts are even stronger than the target. However, there are fewer ghosts compared with Figure 11d. Moreover, the ghosts in Figure 11a,b mainly appear as points rather than areas, as in Figure 11d, which prevents the ghosts from overlapping. In addition, the distributions of ghost spots in Figure 11a,b are obviously different: the ghosts in Figure 11a are scattered, while the ghosts in Figure 11b are rather concentrated, and there is almost no ghost occupying the same position in both images. For this reason, the ghost spots in the L-shaped corridor are almost completely eliminated in the fusion result, and only the target remains.
Under the effect of image fusion, the final target is the intersection of the target points from the two views. The position of the maximum target energy is recorded as the target position, which is (3.21 m, 3.64 m); the corresponding localization error is 0.184 m.
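For reference, this error is simply the Euclidean distance between the true position (3.30 m, 3.80 m) and the estimated position:

$$
\sqrt{(3.30 - 3.21)^2 + (3.80 - 3.64)^2} = \sqrt{0.09^2 + 0.16^2} \approx 0.184\ \text{m}.
$$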

5.2.2. Double Micro-Motion Targets

To further inspect the performance of the dual-view imaging method in a more complex situation, an experiment is performed in a double-target scenario. The radar is located at (1.23 m, 1.72 m). The two targets are located at (3.30 m, 3.70 m) and (4.40 m, 4.30 m) in the NLOS area, respectively. The other conditions are the same as in the single-target experiment.
Figure 12 shows the range profiles of the two targets from the two views. Figure 13a–c show the imaging results of View 1 and View 2 and their fusion in the double-target case, respectively. The result of direct multipath imaging is shown in Figure 13d. Different from Figure 10, path overlap is more pronounced in Figure 12 because there are more multipaths in the double-target case. However, the path energies of both targets are enhanced by the dual-view observation. Compared with Figure 3b, the corresponding paths of target-2 are obviously highlighted, resulting in the presence of its target point in the dual-view imaging results (see Figure 13a,b). Likewise, target-1 becomes a strong point in Figure 13c, rather than the feeble spot in Figure 13d. In addition, it is observed from Figure 13a,b that the ghost distribution in each view of the double-target case is very different from that of the single-target case in Figure 11a,b. On the one hand, the added target brings more ghosts, which increases the number of ghosts in the scene. On the other hand, due to the mutual interference between the targets, the energy of the ghosts changes. Thus, some obvious ghosts in the single-target scenario are likely to disappear in the double-target scenario, and vice versa. These two effects jointly determine the appearance of the ghosts.
Meanwhile, in Figure 13a,b, some ghosts are scattered around the targets owing to the overlap of contiguous paths. However, this phenomenon is not so serious that the ghosts and the targets merge with each other. Moreover, many ghosts are staggered between the two views, so they can be successfully removed in the image fusion process. In comparison with the result of direct multipath imaging in Figure 13d, the ghosts are greatly suppressed and the previously missing target-2 reappears in Figure 13c. Although some strong ghosts remain in the LOS region because they occupy the same position in the two views, they have little influence on target extraction in the clean NLOS region.
The extracted positions of the two targets are (3.31 m, 3.35 m) and (4.33 m, 4.15 m), respectively, and the corresponding localization errors are 0.350 m and 0.166 m. The experimental results reveal that the dual-view observation is effective in eliminating ghosts and performs well even with two targets.

5.2.3. Robustness Validation

To substantiate the robustness and applicability of the proposed method, two further sets of experiments are carried out in the same environment. The radar is still placed at (1.23 m, 1.72 m).
First, the proposed method is applied to a targetless scene to examine its suppression of ghosts caused by environmental clutter. The imaging results are shown in Figure 14. In this case, the received returns mainly consist of spatial clutter and noise, whose energies differ relatively little. Accordingly, in Figure 14a,b, visible ghosts permeate the scene. However, due to the constraints of the observation view and the imaging paths, the ghosts of the two views still remain staggered to a certain extent. Hence, the ghosts can be significantly suppressed after image fusion, as shown in Figure 14c. Although some ghosts remain in certain areas, the imaging quality is improved overall.
In addition, we adjust View 2 and retest in the single-target and double-target scenarios. View 2 is replaced by View 3, which is formed by rotating View 1 by about 45°. After processing with the proposed method, the results in Figure 15 are obtained. In the single-target case, the energy of the target under View 3 in Figure 15b is weaker than under View 2, but the target is still a visible point. In the double-target case, target-2 is more energetic and target-1 is relatively weaker. It is evident that the sensitive detection area shifts with the change of observation angle. However, the ghost distributions of the two views are still very different, in both the single-target and double-target cases. Therefore, the results after image fusion remain favorable.

6. Discussion

In NLOS detection in the urban environment, multipath-based NLOS localization requires high-quality echo signals; otherwise, it is prone to generating ghosts. By using the dual-view observation method, ghosts are suppressed and a superior imaging result is obtained. In the experiments, there are two sets of comparisons to verify the effectiveness of the method: one between the single-target and double-target cases, and the other between the original method and the proposed method. The range profiles in the results (Figure 10 and Figure 12) demonstrate that the dual-view detection method strengthens some first-order paths that present weak energy in a single view. Although there are other strong paths in the range profiles, the selected paths are manifested as strong paths, even in the double-target scenario. However, the strong high-order paths in the range profile intensify ghost formation, and the multipath overlap also causes ghosts to appear around the target (Figure 11a,b and Figure 13a,b). Fortunately, the selected saliency paths are divided according to the dual view, so the targets are highlighted in the imaging results. Especially in the double-target case, the loss of target-2 caused by direct multipath imaging is remedied, and target-1 is greatly strengthened. From the imaging results of the method in the previous research [28], it can be observed that ghosts are prone to merging into a region (Figure 11d and Figure 13d). In contrast, the path partition in the proposed method reduces the accumulation of ghosts, making it easier for ghosts to stagger rather than merge with each other.
On the whole, judging from the final imaging results, our approach offers a prominent improvement in target highlighting and ghost suppression, and solves the problem of targets being covered by ghosts in the previous method. The experimental results also indicate that the method remains applicable under interference and multipath energy loss. Nevertheless, in terms of positioning accuracy, the proposed method shows a certain decline compared to the previous method. Due to the irregular shape of the human body and the roughness of the walls, a certain deviation may occur in the propagation length of the EM wave. Furthermore, the two groups of data are collected at different times, and the position of the target in the micro-motion state changes. These two factors contribute to an offset of the target point between the two views. This offset is slight in the single-target case, but the migration of target-1 is obvious in the double-target imaging results. This phenomenon can even lead to the elimination of targets during image fusion. Thus, it is necessary to compensate for it to achieve more accurate positioning in future work.
In addition, the study in this paper only considers an L-shaped scenario composed of two reflective walls. In practice, an L-shaped corner consisting of a single reflective wall is also a common NLOS scenario. For instance, if there is no lower wall (Wall-3), View 2, which depends on this reflective wall, is inapplicable, as are the corresponding multipaths. From the perspective of observation, however, a new view can still be created by changing the angle of the radar toward the remaining reflective wall. In that case, the dual-view effect mainly manifests in the energy differences of the multipaths between the two views, which means that high-order reflections and a radar with higher resolution are required to present this difference clearly. This circumstance will be taken into account in future research. Moreover, the method will be extended to further experimental analysis of other urban NLOS environments (such as T-shaped and cross-shaped scenarios).

7. Conclusions

In summary, to obtain better detection results in the NLOS region of the urban environment, an imaging method for NLOS targets based on dual-view observation has been proposed in this paper. Based on the multipath imaging algorithm and the propagation model, three phenomena of path ambiguity in the range profile and their influence on the imaging results are analyzed. The analysis reveals that increasing the number of strong selected paths and reducing the accumulation of ghosts are the two keys to suppressing ghosts. Focusing on these two critical factors, ghost suppression is implemented at two levels. At the level of the radar echo, based on the directivity of the radar beam, a new view is created by rotating the radar toward another reflective wall. The weak selected paths in the original view are highlighted in the new view; thus, the two views are jointly utilized to increase the number of strong selected paths. On the basis of the dual view, the selected paths are partitioned into distinct parts related to each view, which maximizes the utilization efficiency of the paths and reduces the aggregation of ghosts. At the imaging level, image fusion is implemented to suppress the ghosts, exploiting their different distributions in the two views. The experimental results demonstrate that, compared with direct multipath imaging under single-view observation, the proposed method can effectively eliminate most ghosts and optimize the imaging result.
There are still some limitations in the proposed approach. It can be observed from the experimental results that the position deviation of the target increases the positioning error; hence, it is necessary to compensate for this deviation. Moreover, the effective detection area of diffraction and first-order reflection is limited: when the target is in a farther region, high-order paths are required for precise positioning. In addition, more experiments on various NLOS scenarios are needed to cope with complex urban environments. Therefore, further research will focus on these problems to improve applicability and localization accuracy.

Author Contributions

Conceptualization, Y.Y. and Y.J.; methodology, Y.Y. and G.C.; software, C.C.; validation, Y.Y. and C.C.; formal analysis, Y.Y. and Y.J.; data curation, C.C.; writing—original draft preparation, Y.Y.; writing—review and editing, Y.J. and C.C.; funding acquisition, Y.J., G.C. and S.G. All authors have read and agreed to the published version of the manuscript.

Funding

This work is supported financially by Sichuan Science and Technology Program under Grants 2022YFS0531, 2021YFS0313 and 2020YFG0458, and the National Natural Science Foundation of China under Grant 61871080.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

The data presented in this study are available on request from the corresponding author.

Acknowledgments

The authors acknowledge the above funds for supporting this research and all editors and reviewers for their comments and suggestions.

Conflicts of Interest

The authors declare no conflict of interest.

Abbreviations

The following abbreviations are used in this manuscript:
NLOS	Non-line-of-sight
UWB	Ultra-wideband
LOS	Line-of-sight
EM	Electromagnetic
TTW	Through-the-wall
MMW	Millimeter wave
FFT	Fast Fourier transform
DOA	Direction of arrival
GNN	Global nearest-neighbor
GLRT	Generalized likelihood ratio test
SLC	Square law combiner
BP	Back projection
MTI	Moving target indication

References

  1. Chen, J.; Zhang, Y.; Guo, S.; Cui, G.; Wu, P.; Jia, C.; Kong, L. Joint Estimation of NLOS Building Layout and Targets via Sparsity-Driven Approach. IEEE Trans. Geosci. Remote Sens. 2022, 60, 1–13. [Google Scholar] [CrossRef]
  2. Setlur, P.; Amin, M.; Ahmad, F. Multipath model and exploitation in through-the-wall radar and urban sensing. In Proceedings of the 2011 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), Prague, Czech Republic, 22–27 May 2011; pp. 2676–2679. [Google Scholar]
  3. Wu, K.; Fan, C.; Huang, X.; Li, X. Multistatic radars locating a human being behind corners based on the method of mirror localization. In Proceedings of the 8th International Conference on Signal Processing Systems, Auckland, New Zealand, 21–24 November 2016; pp. 110–113. [Google Scholar]
  4. Vaghefi, R.M.; Buehrer, R.M. Target tracking in NLOS environments using semidefinite programming. In Proceedings of the MILCOM 2013—2013 IEEE Military Communications Conference, San Diego, CA, USA, 18–20 November 2013; pp. 169–174. [Google Scholar]
  5. Wang, L.; Chen, R.; Shen, L.; Qiu, H.; Li, M.; Zhang, P.; Pan, Y. NLOS mitigation in sparse anchor environments with the misclosure check algorithm. Remote Sens. 2019, 11, 773. [Google Scholar] [CrossRef]
  6. Liu, F.; Li, X.; Wang, J.; Zhang, J. An adaptive UWB/MEMS-IMU complementary Kalman filter for indoor location in NLOS environment. Remote Sens. 2019, 11, 2628.
  7. Solomitckii, D.; Barneto, C.B.; Turunen, M.; Allén, M.; Zhabko, G.P.; Zavjalov, S.V.; Volvenko, S.V.; Valkama, M. Millimeter-wave radar scheme with passive reflector for uncontrolled blind urban intersection. IEEE Trans. Veh. Technol. 2021, 70, 7335–7346.
  8. Zhang, S.; Yang, C.; Jiang, D.; Kui, X.; Guo, S.; Zomaya, A.Y.; Wang, J. Nothing blocks me: Precise and real-time LOS/NLOS path recognition in RFID systems. IEEE Internet Things J. 2019, 6, 5814–5824.
  9. Kong, S.H.; Jun, S.Y. Cooperative positioning technique with decentralized malicious vehicle detection. IEEE Trans. Intell. Transp. Syst. 2017, 19, 826–838.
  10. Wang, Y.; Wu, X.; Cheng, L. A novel non-line-of-sight indoor localization method for wireless sensor networks. J. Sens. 2018, 2018, 3715372.
  11. Mendrzik, R.; Wymeersch, H.; Bauch, G.; Abu-Shaban, Z. Harnessing NLOS components for position and orientation estimation in 5G millimeter wave MIMO. IEEE Trans. Wirel. Commun. 2018, 18, 93–107.
  12. Si, M.; Wang, Y.; Xu, S.; Sun, M.; Cao, H. A Wi-Fi FTM-based indoor positioning method with LOS/NLOS identification. Appl. Sci. 2020, 10, 956.
  13. Silva, B.; Hancke, G.P. Ranging error mitigation for through-the-wall non-line-of-sight conditions. IEEE Trans. Ind. Inform. 2020, 16, 6903–6911.
  14. Ding, H.; Chen, C.; Peng, S.; Li, X.; Zheng, L. Multistatic ultra-wideband localization for NLOS environments. In Proceedings of the 2012 Second International Conference on Intelligent System Design and Engineering Application, Sanya, China, 6–7 January 2012; pp. 380–384.
  15. Baranoski, E. Multipath exploitation radar industry day. In DARPA Strategic Technology Office Presentation 2007; Defense Advanced Research Projects Agency Strategic Technology Office: Arlington, VA, USA, 2007.
  16. Durek, J. Multipath Exploitation Radar Data Collection Review; Defense Advanced Research Projects Agency Strategic Technology Office: Arlington, VA, USA, 2009.
  17. Linnehan, R.; Schindler, J. Multistatic scattering from moving targets in multipath environments. In Proceedings of the 2009 IEEE Radar Conference, Pasadena, CA, USA, 4–8 May 2009; pp. 1–6.
  18. Deiana, D.; Kossen, A.; Van Rossum, W. Multipath exploitation in an urban environment using a MIMO surveillance radar. In Proceedings of the 11th International Radar Symposium, Vilnius, Lithuania, 16–18 June 2010; pp. 1–4.
  19. Sume, A.; Gustafsson, M.; Herberthson, M.; Janis, A.; Nilsson, S.; Rahm, J.; Orbom, A. Radar detection of moving targets behind corners. IEEE Trans. Geosci. Remote Sens. 2011, 49, 2259–2267.
  20. Gustafsson, M.; Andersson, Å.; Johansson, T.; Nilsson, S.; Sume, A.; Örbom, A. Extraction of human micro-Doppler signature in an urban environment using a “sensing-behind-the-corner” radar. IEEE Geosci. Remote Sens. Lett. 2015, 13, 187–191.
  21. Zhao, Q.; Cui, G.; Guo, S.; Yi, W.; Kong, L.; Yang, X. Millimeter wave radar detection of moving targets behind a corner. In Proceedings of the 2018 21st International Conference on Information Fusion (FUSION), Cambridge, UK, 10–13 July 2018; pp. 2042–2046.
  22. Du, H.; Fan, C.; Cao, C.; Xu, Z.; Huang, X. A novel NLOS target localization method with a synthetic bistatic MMW radar. In Proceedings of the 2020 IEEE 11th Sensor Array and Multichannel Signal Processing Workshop (SAM), Hangzhou, China, 8–11 June 2020; pp. 1–5.
  23. Johansson, T.; Örbom, A.; Sume, A.; Rahm, J.; Nilsson, S.; Herberthson, M.; Gustafsson, M.; Andersson, Å. Radar measurements of moving objects around corners in a realistic scene. Radar Sens. Technol. XVIII 2014, 9077, 531–540.
  24. Johansson, T.; Andersson, Å.; Gustafsson, M.; Nilsson, S. Positioning of moving non-line-of-sight targets behind a corner. In Proceedings of the 2016 European Radar Conference (EuRAD), London, UK, 5–7 October 2016; pp. 181–184.
  25. Zetik, R.; Eschrich, M.; Jovanoska, S.; Thoma, R.S. Looking behind a corner using multipath-exploiting UWB radar. IEEE Trans. Aerosp. Electron. Syst. 2015, 51, 1916–1926.
  26. Rabaste, O.; Bosse, J.; Chonavel, T.; Thai, K.-P.-H. GLRT particle filter for tracking NLOS target in around-the-corner radar. In Proceedings of the 2018 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), Calgary, AB, Canada, 15–20 April 2018; pp. 3216–3220.
  27. Rabaste, O.; Bosse, J.; Poullin, D.; Sáenz, I.D.H.; Letertre, T.; Chonavel, T. Detection–localization algorithms in the around-the-corner radar problem. IEEE Trans. Aerosp. Electron. Syst. 2019, 55, 2658–2673.
  28. Tang, Q.; Li, J.; Wang, L.; Jia, Y.; Cui, G. Multipath imaging for NLOS targets behind an L-shaped corner with single-channel UWB radar. IEEE Sens. J. 2021, 22, 1531–1540.
  29. Li, S.; Guo, S.; Chen, J.; Yang, X.; Fan, S.; Jia, C.; Cui, G.; Yang, H. Multiple targets localization behind L-shaped corner via UWB radar. IEEE Trans. Veh. Technol. 2021, 70, 3087–3100.
  30. Xu, M.; Wang, L.V. Universal back-projection algorithm for photoacoustic computed tomography. Phys. Rev. E 2005, 71, 016706.
  31. Ahmad, F.; Amin, M.G. Multi-location wideband synthetic aperture imaging for urban sensing applications. J. Frankl. Inst. 2008, 345, 618–639.
  32. Mastin, A.; Kepner, J.; Fisher, J. Automatic registration of LIDAR and optical images of urban scenes. In Proceedings of the 2009 IEEE Conference on Computer Vision and Pattern Recognition, Miami, FL, USA, 20–25 June 2009; pp. 2639–2646.
  33. Lewis, B.L. Range-time-sidelobe reduction technique for FM-derived polyphase PC codes. IEEE Trans. Aerosp. Electron. Syst. 1993, 29, 834–840.
  34. Farnett, E.C.; Stevens, G.H.; Skolnik, M. Pulse compression radar. Radar Handb. 1990, 2, 10–11.
  35. Immoreev, I.I.; Fedotov, P.D.V. Ultra wideband radar systems: Advantages and disadvantages. In Proceedings of the 2002 IEEE Conference on Ultra Wideband Systems and Technologies (IEEE Cat. No. 02EX580), Baltimore, MD, USA, 21–23 May 2002; pp. 201–205.
  36. Aubry, A.; De Maio, A.; Carotenuto, V.; Farina, A. Radar phase noise modeling and effects-part I: MTI filters. IEEE Trans. Aerosp. Electron. Syst. 2016, 52, 698–711.
Figure 1. Multipath propagation model in the L-shaped corner scenario.
Figure 2. Six paths intersecting at the target point in the L-shaped corner scenario.
Figure 3. Range profiles in the L-shaped corner scenario for (a) single target and (b) double target.
Figure 5. Imaging results of direct incoherent addition of six paths. (a) Result of single target. (b) Result of double target.
Figure 6. Flowchart of the dual-view observation method.
Figure 7. Two views in detecting the NLOS target. (a) View 1. (b) View 2.
Figure 8. The distributions of target and ghosts in two views. (a) View 1. (b) View 2.
Figure 9. Photographs of the experimental environment. (a) Single target scenario. (b) Double target scenario.
Figure 10. Dual-view range profiles for single target. (a) Range profile of View 1. (b) Range profile of View 2.
Figure 11. Multipath imaging results via dual-view in single target scenario. (a) Imaging result of Path-1, Path-3 and Path-5 through range profiles of View 1. (b) Imaging result of Path-2 and Path-6 through range profiles of View 2. (c) Fusion of (a,b). (d) Imaging result of previous algorithm.
Figure 12. Dual-view range profiles for double target. (a) Range profile of View 1. (b) Range profile of View 2.
Figure 13. Multipath imaging results via dual-view in double target scenario. (a) Imaging result of Path-1, Path-3 and Path-5 through range profiles of View 1. (b) Imaging result of Path-2 and Path-6 through range profiles of View 2. (c) Fusion of (a,b). (d) Imaging result of previous algorithm.
Figure 14. Multipath imaging results via dual-view in targetless scenario. (a) Imaging result of Path-1, Path-3 and Path-5 through range profiles of View 1. (b) Imaging result of Path-2 and Path-6 through range profiles of View 2. (c) Fusion of (a,b).
Figure 15. Multipath imaging results after replacing View 2 with View 3 in both single and double target scenarios. (a–c) Imaging result of View 1, View 3 and fusion in single target scenario. (d–f) Imaging result of View 1, View 3 and fusion in double target scenario.
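The captions of Figures 11, 13 and 14 summarize the dual-view procedure: Paths 1, 3 and 5 are imaged from the range profiles of View 1, Paths 2 and 6 from View 2, and the two images are then fused. The fragment below is only a minimal sketch of such a fusion step, assuming a pixel-wise multiplicative rule after per-image normalization; the fusion rule actually used in the paper may differ.

```python
import numpy as np

def fuse_dual_view(img_view1: np.ndarray, img_view2: np.ndarray) -> np.ndarray:
    """Illustrative dual-view fusion (assumed rule, not necessarily the paper's):
    normalize each view's multipath image and multiply pixel-wise, so only
    positions that focus in BOTH views remain strong, while ghosts that
    accumulate in a single view are attenuated."""
    a = np.abs(img_view1) / (np.max(np.abs(img_view1)) + 1e-12)
    b = np.abs(img_view2) / (np.max(np.abs(img_view2)) + 1e-12)
    return a * b
```

With such a rule, a ghost that is bright only in View 1 is multiplied by a near-zero value from View 2 and disappears from the fused image, which is the qualitative behavior illustrated in Figures 11c and 13c.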
Table 1. A brief description of the previous approach.
Our Previous Approach
Step 1: Calculate the focusing delay of EM propagation along each prepared path from the radar to every pixel in the scene.
Step 2: Use the obtained delays to compute the value of each pixel with the BP algorithm, producing six imaging matrices (one per path).
Step 3: Produce the target image by incoherent addition of the six imaging matrices.
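Table 1 can be read as a three-step back-projection (BP) recipe. The sketch below is a minimal, unoptimized rendering of those steps for a single-channel echo, not the authors' implementation: the path_delay functions (the geometric round-trip delays along the six multipaths) are placeholders that would have to be derived from the corner geometry, and nearest-sample lookup stands in for proper interpolation.

```python
import numpy as np

def backproject_one_path(echo, fast_time, path_delay, grid_x, grid_y):
    """Steps 1-2: for one multipath, evaluate the focusing delay at every pixel
    and read the echo magnitude at that delay (nearest-sample BP)."""
    image = np.zeros((grid_y.size, grid_x.size))
    for iy, y in enumerate(grid_y):
        for ix, x in enumerate(grid_x):
            tau = path_delay(x, y)                    # Step 1: focusing delay
            idx = np.argmin(np.abs(fast_time - tau))  # nearest echo sample
            image[iy, ix] = np.abs(echo[idx])         # Step 2: pixel value
    return image

def previous_approach(echo, fast_time, path_delays, grid_x, grid_y):
    """Step 3: incoherent addition of the six per-path imaging matrices."""
    images = [backproject_one_path(echo, fast_time, d, grid_x, grid_y)
              for d in path_delays]                   # six paths -> six matrices
    return np.sum(np.stack(images), axis=0)
```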
Table 2. The key parameters of the UWB radar.
Parameter                   Value
Start frequency             1.6 GHz
End frequency               2.2 GHz
Step frequency              2 MHz
Radiation power             18 dBm
Number of steps             301
Range resolution            0.25 m
Maximum unambiguous range   75 m
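For a stepped-frequency waveform, the entries of Table 2 are tied together by the standard relations ΔR = c/(2B) and R_max = c/(2Δf). The short check below, assuming a linear sweep from 1.6 GHz to 2.2 GHz over 301 frequency points, reproduces the listed 0.25 m resolution and 75 m unambiguous range and gives a frequency step of 2 MHz.

```python
C = 3e8                                   # propagation speed (m/s)
f_start, f_stop, n_steps = 1.6e9, 2.2e9, 301

bandwidth = f_stop - f_start              # 600 MHz total sweep
delta_f = bandwidth / (n_steps - 1)       # 2 MHz frequency step
range_resolution = C / (2 * bandwidth)    # 0.25 m, matches Table 2
max_unambig_range = C / (2 * delta_f)     # 75 m, matches Table 2

print(delta_f, range_resolution, max_unambig_range)  # 2000000.0 0.25 75.0
```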
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.
