Article

Differential Ultra-Wideband Microwave Imaging: Principle, Application, Challenges

1 Electronic Measurements and Signal Processing Group, Technische Universität Ilmenau, 98693 Ilmenau, Germany
2 Biosignal Processing Group, Technische Universität Ilmenau, 98693 Ilmenau, Germany
3 Faculty of Electrical Engineering, K. N. Toosi University of Technology, 16317 Tehran, Iran
* Author to whom correspondence should be addressed.
Sensors 2018, 18(7), 2136; https://doi.org/10.3390/s18072136
Submission received: 9 May 2018 / Revised: 25 June 2018 / Accepted: 30 June 2018 / Published: 3 July 2018
(This article belongs to the Special Issue Sensors for Microwave Imaging and Detection)

Abstract:
Wideband microwave imaging is of interest wherever optically opaque scenarios need to be analyzed, as these waves can penetrate biological tissues, many building materials, or industrial materials. One of the challenges of microwave imaging is the computation of the image from the measurement data because of the need to solve extensive inverse scattering problems due to the sometimes complicated wave propagation. The inversion problem simplifies if only spatially limited objects—point objects, in the simplest case—with temporally variable scattering properties are of interest. Differential imaging uses this time variance by observing the scenario under test over a certain time interval. Such problems exist in medical diagnostics, in the search for surviving earthquake victims, monitoring of the vitality of persons, detection of wood pests, control of industrial processes, and much more. This paper gives an overview of imaging methods for point-like targets and discusses the impact of target variations on the radar data. Because the target variations are very weak in many applications, a major issue of differential imaging concerns the suppression of random effects by appropriate data processing and concepts of radar hardware. The paper introduces related methods and approaches, and some applications illustrate their performance.

1. Introduction

Microwave imaging may be considered as a method to reconstruct the spatial distribution of matter (bodies, objects) based on the electromagnetic sounding of an observation space of interest. Figure 1 demonstrates the principle. The observation space may be completely or partially surrounded by antennas. The antennas emit electromagnetic fields into the observation space and receive the fields scattered at the surface of the bodies due to the different propagation parameters at both sides of the boundaries. Such propagation parameters are either the permittivity $\underline{\varepsilon}$ and the permeability $\underline{\mu}$ of the involved substances or the propagation speed $\underline{c} = 1/\sqrt{\underline{\varepsilon}\,\underline{\mu}}$ and the intrinsic impedance $\underline{Z} = \sqrt{\underline{\mu}/\underline{\varepsilon}}$. Typically, every substance is characterized by a specific set of such parameters, which makes different substances distinguishable by electromagnetic sounding.
Obviously, by referring to Figure 1, one can measure all combinations between the antenna feeding signals $a_i;\ i = 1 \dots K$ and the received signals $b_j;\ j = 1 \dots K$, where the received signals are affected by the inner structure of the observation space. Hence, from the measurement, we get a set of response functions $S_{ji};\ i,j = 1 \dots K$
$$\begin{bmatrix} \underline{b}_1 \\ \underline{b}_2 \\ \vdots \\ \underline{b}_K \end{bmatrix} = \begin{bmatrix} \underline{S}_{11} & \underline{S}_{12} & \cdots & \underline{S}_{1K} \\ \underline{S}_{21} & \underline{S}_{22} & \cdots & \underline{S}_{2K} \\ \vdots & \vdots & \ddots & \vdots \\ \underline{S}_{K1} & \underline{S}_{K2} & \cdots & \underline{S}_{KK} \end{bmatrix}\begin{bmatrix} \underline{a}_1 \\ \underline{a}_2 \\ \vdots \\ \underline{a}_K \end{bmatrix} \;\;\Leftrightarrow\;\; \underline{\mathbf{b}}(f) = \underline{\mathbf{S}}(f)\,\underline{\mathbf{a}}(f)$$
or
$$\begin{bmatrix} b_1 \\ b_2 \\ \vdots \\ b_K \end{bmatrix} = \begin{bmatrix} S_{11} & S_{12} & \cdots & S_{1K} \\ S_{21} & S_{22} & \cdots & S_{2K} \\ \vdots & \vdots & \ddots & \vdots \\ S_{K1} & S_{K2} & \cdots & S_{KK} \end{bmatrix} * \begin{bmatrix} a_1 \\ a_2 \\ \vdots \\ a_K \end{bmatrix} \;\;\Leftrightarrow\;\; \mathbf{b}(t) = \mathbf{S}(t) * \mathbf{a}(t),$$
which carry the information (encoded by Maxwell's equations and the boundary conditions) about the observation space of interest. Note that the matrices of the response functions are symmetric due to reciprocity (i.e., $\underline{\mathbf{S}} = \underline{\mathbf{S}}^T$ or $\mathbf{S} = \mathbf{S}^T$, where $T$ denotes the transpose). The underscored quantities in the upper equation refer to complex-valued frequency functions, which are gained from sinewave measurements. The lower equation deals with real-valued time functions resulting from measurements with arbitrarily shaped ultra-wideband (UWB) signals, where the symbol $*$ denotes convolution [1]. Finally, the aim of the imaging procedure is to "decode" the internal structure of the observation space of interest from the measured quantities. For that purpose, the equations describing wave propagation and scattering have to be inverted, using a method called "inverse scattering". A comprehensive introduction to this technique and an example for ground penetrating radar are, for example, given in [2,3]. The solution of inverse problems is typically ill-conditioned and numerically quite expensive. Hence, one relies in many cases on some prior knowledge about the observation space of interest and on simplifications of the inversion procedure (e.g., by omitting multipath propagation—often referred to as the Born approximation—which is however only applicable in low-contrast scenarios).
Furthermore, one should note that the material parameters $\underline{\varepsilon}, \underline{\mu}$ or $\underline{Z}, \underline{c}$ are typically frequency-dependent, and they may be complex valued (indicated by the underscore) with the imaginary part related to propagation losses. If these losses are too large in the scenario under test (SUT), the observation space becomes opaque, and we will not be able to explore its inner structure. Fortunately, most substances (with the exception of metals) are more or less transparent for electromagnetic waves within the microwave frequencies, which makes this imaging method attractive for analyzing the internal structure of optically opaque objects.
The image resolution (i.e., the ability to separate two small, identical, and closely located objects) depends basically on a spatial parameter of the sounding wave (i.e., wavelength in the case of sinewaves, or pulse length and coherence length, respectively, in the cases of short-pulse or UWB-spread spectrum signals), and on the solid angle under which the objects are illuminated and observed [1]. The last point is particularly interesting if the test area cannot be completely surrounded by the antennas, which is often the case in practice. From this observation, it becomes clear that microwave images will never have a resolution as good as optical images due to the much larger wavelength or coherence length of these waves, and it also becomes obvious that the sounding waves should operate at frequencies as high as possible. However, the upper frequency bound is often limited by the rising propagation losses with increasing frequency. In many applications, the water content of the different substances (e.g., biological tissue or soil) represents a limiting factor, because water induces losses rapidly increasing beyond 2 GHz. Newer discussions on that topic [4] suggest a usable bandwidth of up to 40 GHz, which will be, however, quite challenging to implement in practice for penetration depths larger than 1 cm.
A further important feature of image quality is the contrast that represents the ratio between the “brightest” and “darkest” image pixels. Bright points are linked to strong reflections caused by extended boundaries (i.e., large bodies) with large gradients of permittivity or permeability. Dark image pixels correspond to weak scattering objects, which are typically small and/or have low permittivity/permeability contrast with respect to their environment. The achievable contrast depends on many factors:
  • the sidelobes of the point-spread function of the imaging system, which are a matter of the antenna array structure, the number of antennas, and the bandwidth of the received signals [1,5];
  • multipath signals caused by scattering at dominant objects overpowering the reflections of weak bodies;
  • device internal clutter, caused by imperfections of the measurement equipment (which can only be suppressed up to a certain degree by appropriate device calibration);
  • time extension of the sounding waves due to the limited decay rate of the antenna impulse response; and
  • receiver noise, propagation loss, and others.
This basically prevents the detection or analysis of weak scattering objects by microwave sensing if they are embedded in a strong, multipath environment.
However, there is an exception if the weak scattering target of interest is subjected to some variations, which may be caused by the following:
  • temporal fluctuations inherently connected with the test scenario (e.g., the vital motion of inner organs of humans and animals, the breathing motion of buried survivors after an earthquake, and the motion of wood-destroying insects, as well as slowly running events, such as the putrefaction of biological substances, the healing process after a medical surgery [6], post-event monitoring of stroke [7], and many more);
  • a targeted influence of the hidden object of interest via modification of its position in space, its volume, and its permittivity or permeability (e.g., the targeting of malignant tissue by nanoparticles, permittivity variation by local heating or cooling [8,9], water accumulation in hygroscopic substances, etc.); and
  • small deviations between two largely identical SUTs (cancer in one of the two female breasts [10], foreign objects in chocolate, other identical food pieces, etc.).
In all of these examples, we are only interested in the imaging and localization, respectively, of small differences—hence, we call it differential imaging—in the scattering scenario in its different states. Basically, the imaging may be based on a tomographic approach (see e.g., [9]) or on radar image processing. Tomographic methods usually work with narrowband signals and require a dense antenna array, while radar-based techniques need wideband sounding signals but do not require a dense antenna array [5]. The aim of the paper is to introduce the reader to the concept of differential imaging and to familiarize him or her with the experimental challenges of a successful data capturing and target extraction. For the sake of introduction, we assume here that the objects of interest are relatively small compared to the size of the observation space. This allows us to deal with the simpler radar imaging approach instead of the inverse scattering applied in microwave tomography. The experimental challenges will, however, not be touched by that simplification.
In what follows, a simplified signal model to describe the scattering scenario will be introduced with emphasis on the most important aspects of differential imaging. Based on this consideration the most important parameters of UWB devices for differential imaging will be considered. Several examples and measurements are shown for illustration.

2. Signal Model for Small Time-Variant Scattering Objects

By restricting ourselves to small objects of interest, which are sufficiently far from the antennas, the set of differential equations (i.e., Maxwell equations) describing the scenario may be approximated by a simple transmission model. It will be finally based on the Friis transmission formula and the radar equation, which we extend to time-domain conditions here. In order to make that approximation already valid for short distances, the involved antennas should be quite small.
In what follows, we first introduce this transmission model for the simple case of free space propagation and summarize some methods of localizing/imaging of small, invariant objects without the need of an inversion of Maxwell’s equations. We then discuss variant objects under free space conditions and some methods are introduced to detect them under noisy conditions. For the sake of brevity, we will only refer in this connection to the time variance of those objects. But, this approach will also include the “targeted object modification” (which is typically time-dependent) and the “difference between two scenarios”-method, if we regard the related measurements as sequentially done in (observation) time T . Finally, we will extend the scenario model to strong multipath conditions (i.e., the case will be considered in which a weak time-variant object is embedded in a strong multipath environment).

2.1. Invariant Object in Free Space and Its Localization

The generic situation is depicted in Figure 2. The antennas and target are referred to as points in space assigned by their position vectors $\mathbf{r}_i$ and $\mathbf{r}_q$. We call these points radiation and scattering centers, respectively. The interaction of the involved objects—the antennas and target—with the electric field, we describe by impulse response functions (IRF) $T_i(t)$, $R_i(t)$ and $\Gamma_i(t)$, $\Lambda(t)$. Herein, $t$ refers to the propagation time (often also called fast time). $T_i(t)$ and $R_i(t)$ are the transmission IRFs of antenna $i$ if it works in transmitter and receiver mode, respectively. Both are linked via the reciprocity relation $T_i(t) = \frac{1}{2\pi c}\frac{d}{dt}R_i(t)$, where $c$ is the speed of light in the propagation medium. $\Gamma_i(t)$ and $\Lambda(t)$ represent the reflection IRFs of the antenna feeding and the target. For an introduction into the concept of impulse response functions see [1]. Any angular dependencies and polarimetric issues of $T_i$, $R_i$ and $\Lambda$, as well as the aperture reflection of the antenna, we will omit here for the sake of brevity. For such a scenario, the individual response functions in (1) can be expressed by the time domain Friis formula and radar equation by ($\delta(t)$—Dirac delta function):
$$S_{ii}(t) = \Gamma_i(t) * \delta(t - 2\tau_a) + \frac{1}{r_{iq}^2}\, T_i(t) * \Lambda(t) * R_i(t) * \delta\big(t - 2(\tau_{iq} + \tau_a)\big)$$
$$S_{ji}(t) = \frac{1}{r_{ji}}\, T_j(t) * R_i(t) * \delta\big(t - (\tau_{ji} + 2\tau_a)\big) + \frac{1}{r_{iq}\, r_{jq}}\, T_j(t) * \Lambda(t) * R_i(t) * \delta\big(t - (\tau_{iq} + \tau_{jq} + 2\tau_a)\big)$$
$$\tau_{iq} = \frac{r_{iq}}{c};\quad r_{iq} = |\mathbf{r}_q - \mathbf{r}_i|;\quad \tau_{ji} = \frac{r_{ji}}{c};\quad r_{ji} = |\mathbf{r}_i - \mathbf{r}_j|. \tag{2}$$
Herein, the left term of $S_{ii}$ represents the feed point reflection of the antenna, and the right term is a mono-static radar equation, while $S_{ji}$ is derived from the Friis formula (left term; often called crosstalk) and a bi-static radar equation (right term). Multiple reflections between antenna–antenna and antennas–target have been omitted in Equation (2). The delay times $\tau_a$, $\tau_{ji}$, $\tau_{iq}$ refer to the propagation delay between the measurement plane and the radiation center of the antennas, the propagation time between two antennas, and the propagation time between the antenna and the target, respectively.
Following (2), the response matrix S can be decomposed into two parts as follows:
$$\mathbf{S} = \mathbf{S}_0 + \mathbf{S}_\Lambda, \tag{3}$$
where S 0 only involves antenna effects (i.e., feed point reflection and crosstalk) and S Λ is called the multi-static response matrix, which covers all transmission paths including the scattering object.
Figure 3 illustrates typical signals for an idealized scenario with electrically short antennas and a point scatterer. Note that the antenna IRF in transmission and receiving mode is simply the derivative, $T_i \propto \frac{d}{dt}$, and a delta function, $R_j(t) \propto \delta(t)$, respectively. The IRF of a point scatterer leads to a second derivative, $\Lambda(t) \propto \frac{d^2}{dt^2}$ [1]. The signal $b_{ji}$ refers to the received signal of antenna $j$ if antenna $i$ is stimulated by the pulse.
For the reconstruction of the target locations (i.e., the image of the scenario) from the measurements b j i , one has several options.
Method 1: The volume to be observed is subdivided into a grid of voxels. In order to get the intensity value $I_p$ of the voxel located at position $\mathbf{r}_v$, one superimposes the signal components received by all antennas with propagation times corresponding to the related voxel–antenna distance. Note, for that purpose, the wave speed is supposed to be known. In a mathematically generalized form, inspired by the p-norm, this may be expressed in different ways, as for example by ($p = 0, 1, 2, 3, \dots$):
$$I_{1,p}(\mathbf{r}_v) = \sqrt[p]{\int w(t)\,\bigg|\sum_{i=1}^{K}\sum_{j=1}^{K}\big(h_{ji}(t)\, b_{ji}(t+\tau_v)\big)^p\bigg|\,dt}$$
$$I_{2,p}(\mathbf{r}_v) = \sqrt[p]{\int w(t)\sum_{i=1}^{K}\sum_{j=1}^{K}\big|h_{ji}(t)\, b_{ji}(t+\tau_v)\big|^p\,dt}$$
$$\tau_v = \frac{|\mathbf{r}_i - \mathbf{r}_v| + |\mathbf{r}_j - \mathbf{r}_v|}{c} + 2\tau_a. \tag{4}$$
If the voxel position coincides with the target position $\mathbf{r}_v = \mathbf{r}_q$, the signals are coherently superimposed, leading to a large intensity value. In the opposite case (i.e., $\mathbf{r}_v \neq \mathbf{r}_q$), the signals add incoherently, so the voxel intensity tends toward small values. For the suppression of measurement errors, sidelobes, or other image defects, it may be meaningful to modify every signal $b_{ji}(t)$ by an individual weighting function $h_{ji}(t)$ before summing (see e.g., [11,12,13,14,15] for examples). Furthermore, in Equation (4), $w(t)$ represents a gating function whose duration is on the order of the width of the stimulus pulse. This ensures that only the desired signal sections are added up. Equation (4) with $p = 1$ or $p = 2$ is also referred to as the delay-and-sum approach, which is widely used in the literature.
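To make Equation (4) concrete, the following Python sketch implements the simplest delay-and-sum variant ($p = 1$, $h_{ji}(t) = 1$, rectangular gate $w(t)$) on synthetic mono-static data. The array geometry, pulse shape, and sampling parameters are illustrative assumptions and are not taken from the measurement shown in Figure 4.

```python
import numpy as np

c = 3e8                                   # assumed free-space propagation speed [m/s]
fs = 20e9                                 # equivalent sampling rate [Hz]
t = np.arange(0, 20e-9, 1 / fs)           # fast-time axis

# assumed five-element linear array on the x-axis and a single point target
ant = np.stack([np.linspace(-0.2, 0.2, 5), np.zeros(5)], axis=1)
target = np.array([0.05, 0.35])

def pulse(tau, s=0.15e-9):                # generic Gaussian pulse as UWB stimulus
    return np.exp(-tau**2 / (2 * s**2))

# synthetic mono-static signals b_ii(t): one echo per antenna at its roundtrip time
B = np.stack([pulse(t - 2 * np.linalg.norm(target - a) / c) for a in ant])

# delay-and-sum image, Equation (4) with p = 1 and h_ji = 1
x = np.linspace(-0.3, 0.3, 121)
y = np.linspace(0.1, 0.5, 81)
gate = int(0.5e-9 * fs)                   # gate width w(t) on the order of the pulse width
image = np.zeros((len(y), len(x)))
for iy, yv in enumerate(y):
    for ix, xv in enumerate(x):
        voxel = np.array([xv, yv])
        acc = np.zeros(gate)
        for a, b in zip(ant, B):
            tau_v = 2 * np.linalg.norm(voxel - a) / c       # mono-static delay to the voxel
            k = int(round(tau_v * fs)) - gate // 2
            acc += b[k:k + gate]                            # b_ii(t + tau_v) inside the gate
        image[iy, ix] = np.abs(acc).sum()                   # coherent sum, then magnitude

iy, ix = np.unravel_index(image.argmax(), image.shape)
print("image maximum at x = %.2f m, y = %.2f m" % (x[ix], y[iy]))
```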
Figure 4A illustrates an intensity plot based on the delay-and-sum approach for a simple two-dimensional (2D)-scenario with a five-element linear antenna array. Obviously, the method provokes many sidelobes, which limit the contrast of the image. To improve the contrast, the number of antennas has to be increased or other methods of signal superposition have to be applied. Equations (5) and (6) depict two examples. The first method, we call the delay-and-multiply approach. A related result is demonstrated in Figure 4B. Obviously, the contrast is dramatically increased, because the sidelobes have disappeared. Nevertheless, the method needs some care, because an unwanted zero in the data, which actually should be superimposed, will “destroy” the related target.
$$I_3(\mathbf{r}_v) = \int w(t)\prod_{i,j}\big|h_{ji}(t)\, b_{ji}(t+\tau_v)\big|\,dt. \tag{5}$$
Finally, the routine in Equation (6) superimposes a set of cross-correlation functions—also referred to as delay-multiply-and-sum—determined from the captured response signals [16,17,18,19,20].
$$I_4(\mathbf{r}_v) = \Bigg|\int w(t)\sum_{\substack{i=1\\ i\neq k,l}}^{K}\sum_{\substack{j=1\\ j\neq k,l}}^{K}\sum_{k=1}^{K}\sum_{l=1}^{K} h_{jikl}(t)\, b_{ji}\big(t+\tau_v^{(ji)}\big)\, b_{lk}\big(t-\tau_v^{(lk)}\big)\,dt\Bigg|$$
$$\tau_v^{(ji)} = \frac{|\mathbf{r}_i - \mathbf{r}_v| + |\mathbf{r}_j - \mathbf{r}_v|}{c} + 2\tau_a;\qquad \tau_v^{(lk)} = \frac{|\mathbf{r}_k - \mathbf{r}_v| + |\mathbf{r}_l - \mathbf{r}_v|}{c} + 2\tau_a. \tag{6}$$
Method 2: Instead of superimposing the measured signals as demonstrated in Equations (4)–(6), one may also try to numerically solve the problem of target localization. If we consider a single measurement $b_{ji}(t)$ (compare Figure 3) or even a cross-correlation function between signals $b_{ji}(t)$ and $b_{lk}(t)$, we can identify the propagation time of arrival (ToA) and the propagation time difference of arrival (TDoA), respectively, between the involved antennas and the target. Based on the known wave speed, the propagation distance may be estimated, so a quadric surface (sphere, ellipsoid, elliptic hyperboloid) can be calculated on which the target is located. By repeating this for different antenna positions, the intersection of all quadric surfaces finally gives the actual target position.
In what follows, the localization of a single target shall be illustrated. For the sake of demonstration, we restrict ourselves to mono-static measurements $b_{ii}(t)$ only, which provide the target range via a roundtrip time measurement $r_{iq} = \tau_{iq}\,c/2$ (refer to Figure 3). Corresponding to the measurement setup in Figure 2, one finds:
$$r_{iq}^2 = (\mathbf{r}_q - \mathbf{r}_i)^T(\mathbf{r}_q - \mathbf{r}_i) = r_q^2 + r_i^2 - 2\,\mathbf{r}_i^T\mathbf{r}_q, \tag{7}$$
where $\mathbf{r}_i$ and $\mathbf{r}_q$ represent $[3,1]$ position vectors. In order to remove $r_q^2$ from the equation, the results from two different antennas are subtracted. For an antenna array of $K \geq 4$ antennas, this leads to a set of $P = K!/\big(2(K-2)!\big) \geq 6$ combinations, which are arranged in the following manner:
$$\begin{bmatrix} (\mathbf{r}_1-\mathbf{r}_2)^T \\ (\mathbf{r}_1-\mathbf{r}_3)^T \\ \vdots \\ (\mathbf{r}_i-\mathbf{r}_j)^T \end{bmatrix}\mathbf{r}_q = \frac{1}{2}\begin{bmatrix} (r_1^2-r_2^2)-(r_{1q}^2-r_{2q}^2) \\ (r_1^2-r_3^2)-(r_{1q}^2-r_{3q}^2) \\ \vdots \\ (r_i^2-r_j^2)-(r_{iq}^2-r_{jq}^2) \end{bmatrix} \;\;\Leftrightarrow\;\; \mathbf{R}\,\mathbf{r}_q = \frac{1}{2}\mathbf{Q}. \tag{8}$$
Here, $\mathbf{R}$ represents a $[P,3]$ matrix of the known antenna position vectors, $\mathbf{r}_q$ is the $[3,1]$ column vector of the wanted target position, and $\mathbf{Q}$ is a $[P,1]$ column vector built from the known antenna distances and the measured target distances. The minimum least squares solution of the overdetermined Equation (8) for the target position is given by the following:
$$\mathbf{r}_q = \frac{1}{2}\big(\mathbf{R}^T\mathbf{R}\big)^{-1}\mathbf{R}^T\mathbf{Q}. \tag{9}$$
This type of solution is optimal if the measurement errors of the involved quantities are Gaussian distributed. The squaring of $r_{iq}$ in Equation (8) leads, however, to a non-Gaussian error distribution, and Equation (9) will then provide a biased solution.
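As a small illustration of Equations (8) and (9), the following sketch builds $\mathbf{R}$ and $\mathbf{Q}$ from all antenna pairs and solves for the target position; the four-antenna geometry, the target location, and the range noise level are arbitrary assumptions for demonstration only.

```python
import numpy as np

rng = np.random.default_rng(1)
ants = np.array([[0.0, 0.0, 0.0],
                 [0.5, 0.0, 0.0],
                 [0.0, 0.5, 0.0],
                 [0.0, 0.0, 0.5]])        # K = 4 assumed antenna positions [m]
target = np.array([0.20, 0.30, 0.15])     # assumed (unknown) target position

# mono-static ranges r_iq from roundtrip-time measurements, with a little noise
r = np.linalg.norm(ants - target, axis=1) + rng.normal(0.0, 1e-3, len(ants))

# build R and Q from all P = K!/(2(K-2)!) antenna pairs, Equation (8)
rows, q = [], []
for i in range(len(ants)):
    for j in range(i + 1, len(ants)):
        rows.append(ants[i] - ants[j])
        q.append((ants[i] @ ants[i] - ants[j] @ ants[j]) - (r[i]**2 - r[j]**2))
R, Q = np.array(rows), np.array(q)

# least-squares solution of the overdetermined system, Equation (9)
r_q = 0.5 * np.linalg.solve(R.T @ R, R.T @ Q)
print("estimated target position [m]:", np.round(r_q, 4))
```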
In order to avoid the bias, a maximum likelihood approach can be applied, which aims to maximize the probability density function (PDF) in Equation (10) with respect to the target position r q :
$$p(\mathbf{r}_v \mid \mathbf{D}) = \frac{\exp\!\big(-\tfrac{1}{2}(\mathbf{D}_v - \mathbf{D})^T\boldsymbol{\Sigma}^{-1}(\mathbf{D}_v - \mathbf{D})\big)}{\sqrt{\det(2\pi\boldsymbol{\Sigma})}}$$
$$\mathbf{r}_q = \arg\max_{\mathbf{r}_v}\big\{p(\mathbf{r}_v \mid \mathbf{D})\big\} = \arg\min_{\mathbf{r}_v}\big\{(\mathbf{D}_v - \mathbf{D})^T\boldsymbol{\Sigma}^{-1}(\mathbf{D}_v - \mathbf{D})\big\}. \tag{10}$$
Here again, one subdivides the volume to be observed into voxels assigned by their position $\mathbf{r}_v$. The distances of a voxel to all antenna positions are summarized in the $[K,1]$ column vector $\mathbf{D}_v = \big[\,|\mathbf{r}_1 - \mathbf{r}_v| \;\; |\mathbf{r}_2 - \mathbf{r}_v| \;\cdots\; |\mathbf{r}_K - \mathbf{r}_v|\,\big]^T$. The actual distances between the antennas and the target gained from roundtrip time measurements are arranged in the $[K,1]$ column vector $\mathbf{D} = \big[\,r_{1q}\; r_{2q}\; \cdots\; r_{Kq}\,\big]^T$, and the uncertainties of the measurement are represented by the $[K,K]$ covariance matrix $\boldsymbol{\Sigma} = \sigma^2\mathbf{I}$, which is a diagonal matrix if the range measurements are mutually independent. $\mathbf{I}$ is the identity matrix, and $\sigma^2$ gives the variance of the range estimation, which can often be taken as equal for all measurement channels.
Figure 5 depicts a simple 2D-example of the PDF illustrating the impact of the measurement uncertainty $\sigma$ and the array structure on the target localization. In this context, it should not go unmentioned that the measurement uncertainty is not only due to random measurement errors, but also caused by the ambiguity of the distance definition of spatially extended objects [21,22]. Furthermore, the method is only able to estimate the position of a single target, so in multi-target scenarios, the roundtrip times extracted from the measurement data must be correctly assigned to the related targets beforehand.
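A minimal sketch of the grid-based maximum-likelihood search of Equation (10): with $\boldsymbol{\Sigma} = \sigma^2\mathbf{I}$, maximizing the PDF reduces to minimizing $\|\mathbf{D}_v - \mathbf{D}\|^2$ over the voxel grid. The 2D array, target position, and noise level below are assumptions chosen for illustration.

```python
import numpy as np

rng = np.random.default_rng(2)
ants = np.array([[0.0, 0.0], [0.4, 0.0], [0.0, 0.4], [0.4, 0.4]])   # assumed 2D array [m]
target = np.array([0.15, 0.25])
sigma = 2e-3                                   # assumed standard deviation of the range estimate

# measured mono-static distances D with Gaussian range errors
D = np.linalg.norm(ants - target, axis=1) + rng.normal(0.0, sigma, len(ants))

# voxel grid and cost function, Equation (10) with Sigma = sigma^2 * I
x = np.linspace(0.0, 0.4, 201)
y = np.linspace(0.0, 0.4, 201)
X, Y = np.meshgrid(x, y)
vox = np.stack([X.ravel(), Y.ravel()], axis=1)
Dv = np.linalg.norm(vox[:, None, :] - ants[None, :, :], axis=2)     # voxel-to-antenna distances
cost = ((Dv - D)**2).sum(axis=1) / sigma**2                         # (Dv - D)^T Sigma^-1 (Dv - D)

r_q = vox[cost.argmin()]                       # arg max of the PDF = arg min of the cost
print("ML estimate [m]:", r_q, " true position [m]:", target)
```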
Method 3: The last imaging approach we will mention here exploits the reciprocity of the transmission paths characterized by the multi-static response matrix $\mathbf{S}_\Lambda$. Furthermore, it is limited to a small number of point scatterers in the test scenario; hence, the method is also referred to as sparse scene imaging. For details of the method, the interested reader is referred to [1,23,24,25,26,27]. The reciprocity properties of $\mathbf{S}_\Lambda$ were first exploited by the DORT (Décomposition de l'Opérateur de Retournement Temporel) approach [28] with the aim of concentrating wave energy at the position of the strongest scatterer under multipath conditions. Related ideas may also be adapted for imaging purposes, which exploit eigenvalue or singular value decompositions and the MUSIC concept for target localization. It should be noted that this method assumes knowledge of the wave propagation speed within the scenario under test.

2.2. Time-Variant Objects and Their Emphasis from Noise

Restricting ourselves only to the multi-static transmission matrix, its components may be expressed as follows in the case of a time-variant scenario (compare also Equation (2)):
$$S_{\Lambda,ji}(t,T) = \frac{1}{r_{iq}(T)\,r_{jq}(T)}\,\Theta(t) * \Lambda(t,T) * \delta\!\left(t - \frac{r_{iq}(T) + r_{jq}(T)}{c}\right). \tag{11}$$
Here, $T$ symbolizes the observation (slow) time. Assuming antennas at fixed positions, the time variance can only affect the target reflectivity $\Lambda(t,T)$ and the target ranges $r_{iq}(T)$, $r_{jq}(T)$. Observation time-independent components in (11) are summarized by $\Theta(t) = T_j(t) * R_i(t) * \delta(t - 2\tau_a)$.
In order to capture the time variance of the scenario, it is measured at typically regular time intervals $\Delta T$, and the measurements are displayed as functions of the observation time $T$ (i.e., $\mathbf{b}(t,T) = \mathbf{S}(t,T) * \mathbf{a}(t)$). The related representation is called a radargram, as exemplified in Figure 6A for a walking person who moves toward the radar antennas and back. From these measurements, the roundtrip time $\tau_2$ has to be estimated. This task does not have an unambiguous result, because the time shape of the backscattered signal permanently changes (refer to Figure 6C). As a consequence, the precision of the target position does not only depend on the range resolution of the UWB radar, but also on the variability of the target response, which is a matter of geometric shape variations during walking. In this paper, as a kind of compromise, the energetic center $\tau_2$ is used to estimate the target distance as follows:
$$\tau_2^2(T) = \frac{\int t^2\, b^2(t,T)\,dt}{\int b^2(t,T)\,dt}\qquad r_q(T) = \frac{\tau_2(T)\,c}{2}. \tag{12}$$
By removing the range influence from the radar data via Equation (13) below, one actually may observe the variability of the backscattering while walking (Figure 6C).
$$b'(t,T) = r_q^2\, b\big(t + \tau_2(T),\, T\big). \tag{13}$$
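The following sketch applies Equations (12) and (13) to a synthetic radargram of a single moving target. The walking motion, pulse shape, and sampling parameters are assumptions; the range-compensated data of Equation (13) is called B_corr here.

```python
import numpy as np

c = 3e8
fs = 10e9
t = np.arange(0, 40e-9, 1 / fs)                 # fast (propagation) time
T = np.arange(0, 10, 0.1)                       # slow (observation) time [s]

def pulse(tau, s=0.3e-9):
    return np.exp(-tau**2 / (2 * s**2))

# assumed target range oscillating between 1 m and 3 m (a "walking" target)
r_true = 2.0 + 1.0 * np.sin(2 * np.pi * 0.1 * T)
B = np.stack([pulse(t - 2 * r / c) / r**2 for r in r_true])          # radargram b(t, T)

# Equation (12): energetic center of every impulse response
tau2 = np.sqrt((t**2 * B**2).sum(axis=1) / (B**2).sum(axis=1))
r_est = tau2 * c / 2

# Equation (13): remove the range influence by re-aligning and rescaling every IRF
B_corr = np.stack([r**2 * np.interp(t + tau, t, b, left=0.0, right=0.0)
                   for r, tau, b in zip(r_est, tau2, B)])

print("maximum range error [m]: %.4f" % np.abs(r_est - r_true).max())
```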
In order to image the target motion in space, one can follow one of the approaches discussed in Section 2.1 for every timepoint of the observation time.
In many applications, the targets are small, weakly reflecting and moving only a little. This issue, we will address below. A small-sized target leads to Rayleigh-scattering, which provokes a twofold differentiation of the incident field. Hence, in the case of a time variance, such a target may only vary the strength of the backscattering caused by variations of its permittivity, permeability, or volume. This, we will express by:
$$\Lambda(t,T) = \big(\Lambda_0 + \Delta\Lambda(T)\big)\,\frac{d^2}{dt^2}. \tag{14}$$
Here, $\Lambda_0$ represents the average reflection strength of the target, and $\Delta\Lambda$ symbolizes its variable part. If the target finally also moves a little by $\Delta r$, the receiver signals can be modeled by (15). Without loss of generality, we only refer to the mono-static components here. Joining the observation time-independent part $\Theta(t)$ with the twofold derivative in Equation (14), $\Phi(t) = d^2\big(\Theta(t) * a(t)\big)/dt^2$, and inserting Equation (14) in Equation (11), the receiving signal becomes as follows:
$$b_{ii}(t,T) = \frac{\Lambda_0 + \Delta\Lambda(T)}{\big(r_{iq,0} + \Delta r_{iq}(T)\big)^2}\,\Phi(t) * \delta\!\left(t - \frac{2\big(r_{iq,0} + \Delta r_{iq}(T)\big)}{c}\right) + \tilde\nu_{ii}(t,T)$$
$$= \frac{\Lambda_0 + \Delta\Lambda(T)}{\big(r_{iq,0} + \Delta r_{iq}(T)\big)^2}\,\Phi\!\left(\xi - \frac{2\,\Delta r_{iq}(T)}{c}\right) + \tilde\nu_{ii}(t,T). \tag{15}$$
To shorten the notation, we include the delay term in the argument of $\Phi$ and substitute $\xi = t - 2 r_{iq,0}/c$, where $r_{iq,0} = r_{iq}(T=0)$ is the range of the target at the beginning of the observation time. In order to approach realistic conditions, we have also added receiver noise, which is modeled by the random process $\tilde\nu_{ii}(t,T)$. Developing Equation (15) in a Taylor series and omitting higher-order terms and cross-terms yields the following (using $\dot\Phi = d\Phi/dt$):
$$b_{ii}(t,T) \approx \frac{\Lambda_0}{r_{iq,0}}\Bigg[\underbrace{\Phi(\xi)}_{A} + \underbrace{\bigg(\frac{\Delta\Lambda(T)}{\Lambda_0} - \frac{\Delta r_{iq}(T)}{r_{iq,0}}\bigg)\Phi(\xi)}_{B} + \underbrace{\frac{2}{c}\,\dot\Phi(\xi)\,\Delta r_{iq}(T)}_{C} + \underbrace{\frac{2}{c^2}\,\ddot\Phi(\xi)\,\big(\Delta r_{iq}(T)\big)^2}_{D} + \cdots\Bigg] + \tilde\nu_{ii}(t,T). \tag{16}$$
As seen from Equation (16), the radar response $b_{ii}(t,T)$ is composed of several components. Term A represents the observation time-independent part (i.e., the response of the target in a static state). Term B is an amplitude modulation of the channel response, where the second term $\Delta r_{iq}/r_{iq,0}$ is usually negligible compared to the effects in terms C and D. Terms C and D are caused by time-delay modulations. The modulation term C is dominant at the signal edges (large first derivative) of the received signals. It is proportional to the target motion $\Delta r_{iq}(T)$. The modulation term D mainly appears in the region of peak values of the receiving signal, because there the magnitude of the second derivative is maximum. This term is usually of minor interest, because it is less sensitive than term C and it provides only the squared target motion. Figure 7 illustrates both terms for a weak sinusoidal target motion. Figure 7A picks out the signal section that is located around the target reflection (compare Figure 3 for the complete signal). In practice, the received signal is sampled. Hence, we know it only at discrete timepoints. Five such sampling points $t_1 \dots t_5$ are selected, and their variation in observation time is emphasized. Two points at $t_1$, $t_5$ are placed on a falling signal edge, one point at $t_3$ is located on the rising edge, and two points at $t_2$, $t_4$ are found close to signal peaks. Figure 7B plots the observation time variation of the different samples. One can observe the following:
  • the largest modulation is provided by the sample located at the steepest part of the received signal;
  • the modulations at rising and falling edges are inverted; and
  • the modulation at the signal peaks has double frequency (due to the squaring) and is quite weak.
For motionless targets, where only its reflectivity Δ Λ is time variant, the modulation term B is responsible. It provides the largest contribution at the signal peaks.
In the case of properly designed measurement receivers, which are typically based on sub-sampling, the receiver noise ν ˜ i j ( t , T ) of the different measurements may be considered as Gaussian distributed with a white spectrum in both t and T , as well as uncorrelated between different response functions and measurement channels. Moreover, as will be seen in the next section, the strength of the noise will depend on the received signal itself, and its variance becomes dependent on propagation time:
$$\operatorname{var}\{\nu_{ii}(t,T)\} = \sigma_v^2(t) = \sigma_n^2 + \big(\dot b_{ii}(t)\big)^2\,\varphi_j^2, \tag{17}$$
(i.e., the noise increases at steep signal edges). Here, $\sigma_n^2$ characterizes the variance of the additive random effects such as thermal and quantization noise, while $\varphi_j^2$ refers to the variance of the sampling jitter, which can be considered as "time noise". These properties of the raw (i.e., unfiltered) data for mono- and bi-static channels are summarized in the following (Gaussian distributed; white; uncorrelated):
$$\tilde\nu_{ji}(t,T) \sim \mathcal{N}\big(0,\,\sigma_\nu^2(t)\big)$$
$$\operatorname{cov}\{\tilde\nu_{ji}(t_1,T),\,\tilde\nu_{ji}(t_2,T)\} = \begin{cases}\sigma_\nu^2(t); & t_1 = t_2 = t\\ 0; & t_1 \neq t_2\end{cases}\qquad \operatorname{cov}\{\tilde\nu_{ji}(t,T_1),\,\tilde\nu_{ji}(t,T_2)\} = \begin{cases}\sigma_\nu^2(t); & T_1 = T_2\\ 0; & T_1 \neq T_2\end{cases}$$
$$\operatorname{cov}\{\tilde\nu_{ji},\,\tilde\nu_{lk}\} = \begin{cases}\sigma_\nu^2(t); & i = k;\; j = l\\ 0; & i \neq k;\; j \neq l\end{cases} \tag{18}$$
Because in the case of weak targets the modulation effects will also be quite weak, the major challenge will be to detect the time-variable targets under noisy conditions. Many methods have been investigated with that goal [29,30,31,32,33,34,35,36,37]. Here, for the sake of brevity, we will only consider the most effective method, which is based on a matched filter concept. For that purpose, we pick out the data samples captured at an arbitrary time point $t_0$ and call the related signal $x(T)$. Removing the DC value $\overline{b_{ii}(t_0)}$, $x(T)$ may be decomposed into an observation time-dependent modulation function $\chi(T)$ and a propagation time-dependent value $x_0(t_0)$ that determines the strength of the modulation depending on the sample position as follows:
$$x(T) = x_0(t_0)\,\chi(T) + \tilde\nu_x = b_{ii}(t_0,T) - \overline{b_{ii}(t_0)} = \begin{cases}\tilde\nu_x & \text{case 1}\\ b_{ii}(t_0)\,\chi(T) + \tilde\nu_x & \text{case 2: } \chi(T) = \Delta\Lambda(T)/\Lambda_0\\ \dot b_{ii}(t_0)\,\chi(T) + \tilde\nu_x & \text{case 3: } \chi(T) = 2\,\Delta r_{iq}(T)/c\\ \ddot b_{ii}(t_0)\,\chi^2(T) + \tilde\nu_x & \text{case 4: } \chi(T) = 2\,\Delta r_{iq}(T)/c\end{cases} \tag{19}$$
In case 1, the selected time sample is located within a flat part of the receiving signal, so we get only noise ($x_0 = 0$). In case 2, the time sample is located close to a signal peak, and the time variance is caused by a variation of the target reflectivity. Cases 3 and 4 refer to a weakly moving target if the sampling time is placed on a signal edge (case 3; i.e., one of the blue signals in Figure 7B) or close to a signal peak (case 4; i.e., one of the red signals in Figure 7B). Case 4 is mostly of no interest and will not be considered further.
In order to suppress the noise, the deterministic part $x_0(t_0)\,\chi(T)$ in Equation (19) has to be emphasized. Assume we know approximately the time shape of the modulation function, except for an unknown time delay, $\zeta(T) = \chi(T - \tau_0)$. This is the case when the test scenario can be externally modulated by the operator or, in the case of heart rate and breathing detection, where one can assume an approximately sinusoidal modulation. From the assumed modulation function and the measurement data, one can establish a summation over the observation time period $T_0 = N\,\Delta T$ ($\Delta T$ = repetition interval between consecutive measurements, see Figure 7; $N$ = number of repetitions), which finally represents a cross-correlation function as follows:
$$C_{x\zeta}(k) = \sum_{n=1}^{N} x(n\Delta T)\,\zeta(n\Delta T - k\Delta T) = \sum_{n=1}^{N}\big(x_0(t_0)\,\chi(n\Delta T) + \tilde\nu_x\big)\,\zeta(n\Delta T - k\Delta T). \tag{20}$$
Using Equation (18), its expected value and variance result in the following:
$$E\{C_{x\zeta}(k)\} = x_0(t_0)\sum_{n=1}^{N}\chi(n\Delta T)\,\zeta\big((n-k)\Delta T\big)$$
$$\operatorname{var}\{C_{x\zeta}(k)\} = \sigma_v^2(t_0)\sum_{n=1}^{N}\zeta^2(n\Delta T) = N\,\sigma_v^2(t_0)\,\zeta_{rms}^2. \tag{21}$$
In the case of a perfect match between modulation and reference function, $\zeta(T) = \chi(T - \tau_0)$, the signal-to-noise ratio at a given time sample $t_0$ yields the following:
$$SNR(t_0) = \frac{\max\big(E^2\{C_{x\zeta}(k)\}\big)}{\operatorname{var}\{C_{x\zeta}(k)\}} = \frac{\big(x_0(t_0)\,N\,\zeta_{rms}^2\big)^2}{N\,\sigma_\nu^2(t_0)\,\zeta_{rms}^2} = N\,\zeta_{rms}^2\,\frac{x_0^2(t_0)}{\sigma_v^2(t_0)}. \tag{22}$$
As seen from Equation (22), the detection performance can be arbitrarily increased by extending the integration time T 0 (i.e., increasing N ). However, for vital data detection, the modulation is not stable over time. Hence, T 0 should not be selected too long in order to avoid de-correlation [38].
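A small sketch of the matched-filter idea in Equations (20)–(22): a noisy slow-time signal $x(T)$ is correlated with an assumed sinusoidal reference $\zeta(T) = \chi(T - \tau_0)$ over a scan of the unknown delay, and the predicted SNR of Equation (22) is printed for comparison. All signal and noise parameters are arbitrary assumptions.

```python
import numpy as np

rng = np.random.default_rng(3)
dT = 0.1                                     # assumed repetition interval Delta_T [s]
N = 500                                      # number of IRFs within the observation window T0
T = np.arange(N) * dT

x0 = 0.05                                    # modulation strength x0(t0) at the chosen sample
sigma = 0.2                                  # noise standard deviation at that sample
chi = np.sin(2 * np.pi * 0.25 * T)           # true modulation function (e.g., breathing)
x = x0 * chi + rng.normal(0.0, sigma, N)     # noisy slow-time signal x(T), Equation (19)

# Equation (20): correlate with the reference zeta(T) = chi(T - k*dT) for a set of trial lags
def C(k):
    zeta = np.sin(2 * np.pi * 0.25 * (T - k * dT))
    return np.sum(x * zeta)

lags = np.arange(-10, 11)
Cxz = np.array([C(k) for k in lags])

# Equation (22): predicted SNR for a perfect match
zeta_rms2 = np.mean(chi**2)
print("predicted SNR [dB]: %.1f" % (10 * np.log10(N * zeta_rms2 * x0**2 / sigma**2)))
print("estimated reference delay tau0 [s]:", lags[np.abs(Cxz).argmax()] * dT)
```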
In the simplest case of sinusoidal modulation, the correlation function (20) is implemented by a Fourier-transform in observation time direction ( ϕ —“observation time” frequency) as follows:
$$\big|\underline{B}_{ii}(t,\phi)\big| = \left|\int_{T_0} b_{ii}(t,T)\,e^{-j 2\pi \phi T}\,dT\right|. \tag{23}$$
Figure 8 depicts an example. Obviously it is hardly possible to identify the moving target in the original radar data if it is too noisy. However, already the correlation over the short duration T 0 makes the target visible, and further enlargement of the integration time increasingly suppresses the noise.
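A very short sketch of Equation (23): the FFT along the observation time of a single fast-time sample reveals an assumed weak sinusoidal modulation buried in noise.

```python
import numpy as np

rng = np.random.default_rng(4)
dT = 0.05                                     # slow-time sampling interval [s]
N = 2000
T = np.arange(N) * dT

# assumed radargram slice at one fast-time sample: weak 1 Hz modulation plus noise
b_t0 = 0.02 * np.sin(2 * np.pi * 1.0 * T) + rng.normal(0.0, 0.1, N)

# Equation (23): magnitude spectrum along the observation time
phi = np.fft.rfftfreq(N, dT)                  # "observation time" frequency axis
Bmag = np.abs(np.fft.rfft(b_t0 - b_t0.mean()))

print("dominant observation-time frequency [Hz]: %.2f" % phi[Bmag.argmax()])
```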
In many cases, however, the target modulation is not predictable in advance. Hence, the question arises where to take the reference modulation in Equation (20) from. One possible option is to illuminate the target from a second antenna at a slightly different position. The resulting radar data are $b_{jj}(t,T)$. According to Equation (19), we again pick out data samples captured at $t_1$ and call the related signal $y(T)$ as follows:
$$y(T) = b_{jj}(t_1,T) - \overline{b_{jj}(t_1)} = y_0(t_1)\,\chi(T) + \tilde\nu_y. \tag{24}$$
Because the antennas observe the same object, the modulation function $\chi(T)$ is identical in $x(T)$ and $y(T)$. Taking the cross energy of both signals, we get the following:
$$E_{yx}(t_0,t_1) = \sum_{n=1}^{N} y(n\Delta T)\,x(n\Delta T) = \sum_{n=1}^{N}\big(y_0(t_1)\,\chi(n\Delta T) + \tilde\nu_y\big)\big(x_0(t_0)\,\chi(n\Delta T) + \tilde\nu_x\big)$$
$$E\{E_{yx}(t_0,t_1)\} = x_0(t_0)\,y_0(t_1)\sum_{n=1}^{N}\chi^2(n\Delta T) = N\,x_0(t_0)\,y_0(t_1)\,\chi_{rms}^2$$
$$\operatorname{var}\{E_{yx}(t_0,t_1)\} = \sum_{n=1}^{N}\operatorname{var}\{y_0\chi\tilde\nu_x + x_0\chi\tilde\nu_y + \tilde\nu_y\tilde\nu_x\} = N\Big(\big[y_0^2(t_1)\,\sigma_x^2(t_0) + x_0^2(t_0)\,\sigma_y^2(t_1)\big]\chi_{rms}^2 + \sigma_x^2(t_0)\,\sigma_y^2(t_1)\Big). \tag{25}$$
Assuming the target modulation of both signals is largest at the sampling points $t_0$, $t_1$ and both receivers have identical noise behavior, we can state that $y_0(t_1) \approx x_0(t_0) = x_0$ and $\sigma_x^2(t_0) \approx \sigma_y^2(t_1) = \sigma^2$, so the signal-to-noise ratio becomes the following:
$$SNR(t_0,t_1) = \frac{E^2\{E_{yx}(t_0,t_1)\}}{\operatorname{var}\{E_{yx}(t_0,t_1)\}} = \frac{\big(N\,x_0^2\,\chi_{rms}^2\big)^2}{N\big(2\,x_0^2\,\chi_{rms}^2\,\sigma^2 + \sigma^4\big)} \approx \frac{1}{2}\,N\,\chi_{rms}^2\,\frac{x_0^2}{\sigma^2}, \tag{26}$$
which is an SNR only a factor of two worse than in the ideal case of Equation (22).
Because the optimum sampling positions $t_0$, $t_1$ are not known a priori, one has to run through all possible combinations, leading to the cross-energy matrix illustrated in Figure 9. To calculate the $[M,M]$ cross-energy matrix $\mathbf{E}_{ij}$, we assume that the radar data are given by two $[M,N]$ matrices $\mathbf{B}_{ii}$ and $\mathbf{B}_{jj}$ ($M$ = number of samples in propagation time; $N$ = number of impulse responses measured during the observation interval $T_0$). In a first step, the DC value of every row is removed from both matrices, leading to $\mathbf{B}'_{ii}$ and $\mathbf{B}'_{jj}$. Finally, from this, the cross-energy is calculated as follows:
$$\mathbf{E}_{ij} = \mathbf{B}'_{ii}\,\big(\mathbf{B}'_{jj}\big)^T. \tag{27}$$
The maximum position of that matrix gives the roundtrip times from both antennas. The idea behind the cross-energy is the noise independence of the merged signals, so with increasing integration time, the noise will mutually cancel out. Such noise independence is also observed in single-antenna measurements as long as the signals are not joined with themselves (i.e., the diagonal elements of the cross-energy matrix have to be set to zero):
$$\mathbf{E}_{ii} = \mathbf{B}'_{ii}\,\big(\mathbf{B}'_{ii}\big)^T;\qquad \mathbf{E}_{ii}(k,k) = 0. \tag{28}$$
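The following sketch mirrors Figure 9 and Equations (27) and (28): after removing the DC value of every row, the cross-energy matrix of two assumed noisy radargrams peaks at the fast-time bins where both antennas see the common target modulation. All sizes and signal levels are arbitrary assumptions.

```python
import numpy as np

rng = np.random.default_rng(5)
M, N = 256, 400                        # M fast-time samples, N IRFs over the observation time
T = np.arange(N)
chi = np.sin(2 * np.pi * 0.02 * T)     # common target modulation chi(T)

# assumed noisy radargrams B_ii and B_jj of two antennas observing the same target
Bii = rng.normal(0.0, 1.0, (M, N))
Bjj = rng.normal(0.0, 1.0, (M, N))
Bii[100] += 3.0 * chi                  # target modulation at fast-time bin 100 (antenna i)
Bjj[120] += 3.0 * chi                  # ... and at bin 120 (antenna j)

# Equation (27): remove the DC value of every row, then form the cross-energy matrix
Bii_p = Bii - Bii.mean(axis=1, keepdims=True)
Bjj_p = Bjj - Bjj.mean(axis=1, keepdims=True)
Eij = Bii_p @ Bjj_p.T                  # [M, M] cross-energy matrix

# Equation (28): single-antenna variant with zeroed diagonal
Eii = Bii_p @ Bii_p.T
np.fill_diagonal(Eii, 0.0)

k0, k1 = np.unravel_index(np.abs(Eij).argmax(), Eij.shape)
print("roundtrip-time bins of the target:", k0, k1)        # expected: 100 and 120
```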
Singular value decomposition is also often proposed for noise reduction purposes. It will, however, not help if the signal to be detected is already buried beneath noise.

2.3. Time-Variant Objects in Multi-Path Environment

The experimental situation is roughly illustrated in Figure 10 by two examples. The first involves mostly free-space propagation, genuinely static objects (wall, furniture), and a target that moves over distances much larger than the radar range resolution. The second example deals with the detection of minor motions (much smaller than the radar range resolution) and wave propagation in a lossy environment, which is not very stable, because a person is not able to keep his or her limbs truly motionless.
We first restrict ourselves to a stable propagation environment, as depicted in Figure 10A. If threefold and higher-order reflections are omitted, we can identify four different types of propagation paths. The first type ① refers to all paths that only involve static objects. The second ② is linked with the object of interest, and the third ③ contains a twofold scattering, one by the target and one by a static object. The fourth transmission path ④ symbolizes the transmission behavior of the wall, which affects the antenna signal (e.g., by multiple wall reflections). Merging all of this together, the received signal for a single antenna arrangement may be modeled in simplified form as:
$$b(t,T) = T(t) * \Xi(t) * \Big(\Lambda_S(t-\tau_S) + \Lambda_T\big(t-\tau_T(T),T\big) + \Lambda_S(t-\tau_{ST}) * \Lambda_T(t,T)\Big) * \Xi(t) * R(t) * a(t) + \tilde\nu(t,T)$$
$$= b_S(t-\tau_S) + b_T\big(t-\tau_T(T),T\big) + b_{ST}\big(t-\tau_{ST}(T),T\big) + \tilde\nu(t,T). \tag{29}$$
Here, we account neither for any angular dependency of the impulse response functions, nor for the range influence (i.e., wave spreading and attenuation) on the signal magnitude, nor for the polarization of the electric fields. Without loss of generality, we only involve one static and one time-variable target. The different symbols stand for the antenna impulse responses $T(t)$, $R(t)$, the wall transmission $\Xi(t)$, the scattering behavior of the static object $\Lambda_S(t)$, and the scattering of the time-variable object $\Lambda_T(t,T)$. The symbol $\tau$ indicates the related path propagation time. In Equation (2), the path delay was respected by convolution with Dirac functions. Here, it is part of the arguments of the functions to shorten the notation. In the bottom line of Equation (29), the different components of the transmission paths are merged into different functions. $b_S(t)$ symbolizes all transmission paths (including multiple reflections) that do not change in observation time. This part represents the strongest component of the measured signal. It is often orders of magnitude larger than the other components. $b_T(t,T)$ refers to the time-variable target, in which we are actually interested, and $b_{ST}(t,T)$ represents multipath components that involve the time-variant target. $b_S(t)$ and $b_{ST}(t,T)$ are often referred to as clutter. They must be removed from the measured signal.
Before we do that, the noise term in Equation (29) must be considered more seriously, because this will be important for the estimation of the clutter reduction. As already mentioned in Equation (17), the randomness of the measurement is caused by amplitude noise (additive noise) $\tilde n \sim \mathcal{N}(0, \sigma_n^2)$ and "time" noise (i.e., sampling jitter) $\Delta\tilde\tau_j \sim \mathcal{N}(0, \varphi_j^2)$. In well-designed receiver electronics, both can be assumed to be Gaussian distributed, white, independent, and ergodic. By considering both noise terms separately, Equation (29) has to be modified to the following:
$$b(t,T) = b_S(t + \Delta\tilde\tau_j) + b_T(t + \Delta\tilde\tau_j,\, T) + b_{ST}(t + \Delta\tilde\tau_j,\, T) + \tilde n(t,T). \tag{30}$$
The propagation delay is omitted in the signal components for the sake of a shorter notation. Due to the ergodicity of the additive noise, the randomness of the sampling procedure does not influence its statistical properties, which is why the jitter $\Delta\tilde\tau_j$ is omitted in the noise term $\tilde n(t,T)$.
In order to suppress the static paths, one needs to know b S ( t ) . It can either be determined from measurements where the target is still absent or it is estimated by averaging over the captured data. In both cases, one performs an integration in observation time in order to reduce the noise influence. So, we can approximately write the following:
$$\overline{b_S(t)} \approx \frac{1}{T_0}\int_{T_0} b(t,T)\,dT \approx b_S(t) * p_{\Delta\tau}(t). \tag{31}$$
Equation (31) assumes that the variable target parts cancel out due to a sufficiently long integration. Furthermore, it shows that the time shape of the static signal is slightly modified by a "low-pass filter" whose IRF is given by the PDF $p_{\Delta\tau}(t)$ of the jitter. However, this only becomes noticeable if the standard deviation $\varphi_j$ of the jitter is on the order of the rise time of the signals. The additive noise in Equation (31) is omitted, because it is largely suppressed by the averaging. In practical implementations, the integration in Equation (31) may be performed over the whole captured data set or by sliding averaging or low-pass filtering along the observation time. The signal $\overline{b_S(t)}$ is often referred to as the background.
By subtracting b S ( t ) ¯ from the measured data, the result after some manipulation is as follows:
$$c(t,T) = b(t,T) - \overline{b_S(t)}$$
$$\approx b_S(t) * \big(\delta(t) - p_{\Delta\tau}(t)\big) + \dot b_S(t)\,\Delta\tilde\tau_j + b_T(t,T) + \dot b_T(t,T)\,\Delta\tilde\tau_j + b_{ST}(t,T) + \dot b_{ST}(t,T)\,\Delta\tilde\tau_j + \tilde n$$
$$\approx \dot b_S(t)\,\varphi_j + \dot b_S(t)\,\Delta\tilde\tau_j + b_T(t,T) + b_{ST}(t,T) + \tilde n. \tag{32}$$
The second line in Equation (32) comes from a Taylor series expansion of Equation (30) and the assumption of a not significantly large jitter, so higher terms of the series may be neglected. Further on, the third line ignores the jitter-affected time-variant signals, because they are of very low magnitude, and p Δ τ ( t ) is approximated by the first element of its Taylor series.
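The background removal of Equations (31) and (32) can be sketched in a few lines. The synthetic scene below (a strong static echo plus a weak target with a sub-resolution delay modulation, with assumed amplitudes and noise level) is a toy example only; the average over the observation time serves as the background estimate.

```python
import numpy as np

rng = np.random.default_rng(6)
fs = 10e9
t = np.arange(0, 30e-9, 1 / fs)                 # fast time
T = np.arange(0, 20, 0.1)                       # observation time [s]

def pulse(tau, s=0.3e-9):
    return np.exp(-tau**2 / (2 * s**2))

# assumed scene: a strong static reflection plus a weak target with sub-resolution motion
b_static = 5.0 * pulse(t - 8e-9)
delay_T = 12e-9 + 0.05e-9 * np.sin(2 * np.pi * 0.3 * T)
B = np.stack([b_static + 0.05 * pulse(t - d) + rng.normal(0.0, 0.002, t.size)
              for d in delay_T])                # radargram b(t, T)

# Equation (31): background estimate = average over the observation time
b_bg = B.mean(axis=0)

# Equation (32): background-removed data c(t, T)
C = B - b_bg

# the weak time-variant echo near 12 ns now dominates the residual variance
var_profile = C.var(axis=0)
print("residual variance peaks at t = %.1f ns" % (t[var_profile.argmax()] * 1e9))
```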
As we can observe from Equation (32), after background removal we get the wanted signal $b_T(t,T)$, but it is still disturbed by other components. One of them is the multipath component $b_{ST}(t,T)$, which is illustrated in Figure 11. Figure 11A is based on the same scenario as already discussed in Figure 6, but now it refers to the completely captured data set. Because the person was walking in a room, he or she created a "shadow" on the wall opposite to the antenna, which may have the same strength as the wanted signal. However, the signal from the shadow has a larger propagation time compared to the direct target reflection. This gives us the opportunity to separate the signal of the shadow from the direct target reflection. This works better with a larger radar bandwidth, because due to the better range resolution, targets close to the wall can be separated from their shadows more easily.
Then, we still have the additive noise $\tilde n$ in Equation (32) affecting the detection performance. This noise has been treated in many previous papers and will hence not be considered here in detail. A more serious effect comes from the jitter, which leads to the bias term $\dot b_S(t)\,\varphi_j$ and the random term $\dot b_S(t)\,\Delta\tilde\tau_j$. The bias term is independent of the observation time and thus will be less critical for the target detection (note that $\varphi_j$ is independent of $t$ and $T$). The opposite is valid for the random term: $\Delta\tilde\tau_j$ is random in $t$ and $T$. Furthermore, its random effect on $c(t,T)$ is weighted by the first derivative $\dot b_S(t)$ of the very strong signal scattered from the static objects. Under strong multipath conditions, these signals are typically spread over the whole radar range. Figure 11B gives an example of signal spreading even under simple conditions. To get a better impression of the process of dying out, the data are logarithmically scaled while keeping the signal sign. The scaling function is given by $b_{\log} = \operatorname{sign}(b)\big(\max[\,20\lg(|b|/b_{\max}),\, -D\,] + D\big)$, where $D\,[\mathrm{dB}]$ is the dynamic range over which the data are depicted.
The strength of the background is often two to three orders of magnitude larger than the target reflections, and with increasing radar bandwidth, the differentiation leads to an additional emphasis of its influence. Hence, jitter may seriously affect the detection performance for a time-variable target under strong multipath conditions (especially if it is at the same range as a strong static object—compare Figure 10: person and file cabinet). UWB radar experiments on motion detection performed under (nearly) free-space conditions are therefore of limited significance.
So far, we have assumed that the clutter objects are static. In a situation as depicted in Figure 10B, this cannot be strictly presumed, because a living organism can never completely suppress the motion of its limbs. Hence, referring to the signal model in Equation (30), we also have to take into account a minor time variance of the signal component $b_S(t) \rightarrow b_S(t,T)$. Under the condition that all motion effects are small (i.e., the time-delay modulations are smaller than the rise time of the sounding signal), we can follow the approach introduced with respect to Equation (19). That is, we pick out data samples at different propagation timepoints $t_k$ and observe their magnitude as a function of the observation time, $b(t_k,T)$ or $c(t_k,T)$. The modulation of the majority of data samples will follow the global motion of the scenario under test (e.g., the arm), and only a few data samples will also contain a modulation caused by the target of interest. The goal is to separate both modulations and to extract the target.
Under the assumption that both modulations are independent or orthogonal, respectively, this can be done by principal component analysis (PCA), which exploits singular value decomposition (SVD) [40].
Figure 12 gives an example. Assume we want to measure the artery motion in the arm. It is approximated in our example by the sinusoidal modulation $\chi(T)$ (Figure 12A). Due to the vital motion of the body, we get an additional, irregular modulation $\zeta(T)$, which is even stronger than the signal of interest. Both modulations overlap in the radar signal, but they are connected with slightly different roundtrip times. These two signals affect the radar signal in the same way as depicted in Figure 7. Because we are interested in a periodic motion, an FFT (Fast Fourier Transform) is performed in observation time in the hope of finding the wanted signal (Figure 12B). However, we do not succeed, because the perturbing modulation $\zeta(T)$ suppresses the modulation $\chi(T)$ of interest. Applying PCA to the radargram leads to two separable principal components (Figure 12C). Because the second principal component best meets our expectation of the wanted signal, the related signal parts are extracted from the radar signal. Now, the spectrum shows the wanted sinusoidal modulation signal (Figure 12D).
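A sketch of the PCA/SVD-based separation described above, applied to a synthetic radargram in which an assumed strong irregular motion $\zeta(T)$ and a weak periodic modulation $\chi(T)$ act on neighbouring roundtrip-time bins. Removing the strongest principal component recovers the periodic modulation; all amplitudes and frequencies are arbitrary choices.

```python
import numpy as np

rng = np.random.default_rng(7)
M, N = 200, 600
dT = 0.05
T = np.arange(N) * dT                               # observation time [s]

chi = 0.2 * np.sin(2 * np.pi * 1.2 * T)             # wanted periodic modulation (e.g., artery)
zeta = np.cumsum(rng.normal(0.0, 0.1, N))           # stronger, irregular body motion

# assumed background-removed radargram: both modulations overlap at bin 95
C = rng.normal(0.0, 0.05, (M, N))
C[80] += zeta
C[95] += chi + 0.5 * zeta

# principal component analysis via singular value decomposition
Cc = C - C.mean(axis=1, keepdims=True)
U, s, Vt = np.linalg.svd(Cc, full_matrices=False)

# remove the strongest principal component (dominated by the perturbing motion)
Cc_clean = Cc - s[0] * np.outer(U[:, 0], Vt[0])

freq = np.fft.rfftfreq(N, dT)
spec_before = np.abs(np.fft.rfft(Cc[95]))
spec_after = np.abs(np.fft.rfft(Cc_clean[95]))
print("dominant frequency before PCA [Hz]: %.2f" % freq[spec_before.argmax()])
print("dominant frequency after  PCA [Hz]: %.2f" % freq[spec_after.argmax()])
```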

3. Device Requirements for Differential Imaging

This section summarizes the requirements for the most important parameters of UWB devices for differential imaging. Basically, several approaches are possible to implement UWB devices [1]. They differ mainly by the test signal in use, such as stepped sinewave, sub-nanosecond pulse, binary pseudo-random code, multi-sine, and random noise. Seen from the perspective of implementation cost, bandwidth, measurement speed, and multi-channel capability, however, only two approaches remain in the focus of interest. These are the short-pulse and pseudo-random noise excitation. The latter mostly applies M-sequences, but it is basically not restricted to that type of pseudo-random code. In what follows, we only include these two principles in our discussion.

3.1. Pulse and M-Sequence Radar

The basic structures of the pulse and M-sequence radar are depicted in Figure 13. The pulse radar (Figure 13A) is based on a pulse shaper, which periodically launches sub-nanosecond pulses triggered by a clock generator of repetition rate $\tau_R^{-1}$. The data capturing is organized via subsampling in order to reduce the sampling rate of the ADC (analog-to-digital converter). In the shown example, sequential sampling is used, by which one data sample per launched pulse is captured. By a programmable delay line, the sampling point is moved over the time interval of interest $\tau_{ROI} \leq \tau_R$. The step size of the delay variation determines the equivalent sampling rate $\Delta\tau = f_{eq}^{-1}$, which has to meet the Nyquist sampling criterion for the sounding signal. This is not required of the actual sampling rate. For modifications of the sampling principle (e.g., interleaved sampling), see [1,41]. The peak power of the transmitted signal $\hat P$, its average power $\bar P$, and the achievable bandwidth $\ddot B$ (note that $\ddot B$ is a two-sided bandwidth) are as follows:
$$\hat P \sim V_0^2;\qquad \bar P \sim \frac{\tau_0}{\tau_R}\,V_0^2;\qquad \ddot B \sim \tau_0^{-1}. \tag{33}$$
In the case of the M-sequence radar, the sounding signal is generated by a fast linear feedback shift register (LFSR) of length $m$, which is pushed by a microwave source of frequency $f_c$. It provides $N_m = 2^m - 1$ chips per period. For data gathering, an interleaved subsampling approach is used. Due to the spectrum limitation below $f_c/2$, the equivalent sampling rate is preferentially selected as $f_{eq} = f_c$ so that a binary divider can simply control the sampling [1]. Transmitter power and double-sided bandwidth are as follows:
$$\hat P \approx \bar P \sim V_0^2;\qquad \ddot B = f_c = \frac{1}{\tau_0}. \tag{34}$$
However, the received signal $b(t)$ is mostly worthless by itself, as it leaves a chaotic impression. Therefore, one has to take its circular cross-correlation $c_{bm}(t)$ with the M-sequence code $m(t)$ for further data processing [1] in order to get the wanted impulse response function as follows:
$$c_{bm}(t) = \int_0^{\tau_R} b(\xi)\, m_c(\xi + t)\, d\xi. \tag{35}$$
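To illustrate Equation (35), the sketch below generates one period of a maximal-length binary sequence with a small Fibonacci LFSR (length m = 9, with feedback taps assumed to be a standard maximal-length configuration), passes it through a toy two-echo channel, and recovers the impulse response by circular cross-correlation computed via the FFT. The lag sign is chosen here so that the echoes appear at positive delays.

```python
import numpy as np

def mlbs(m=9, taps=(9, 5)):
    """One period (+/-1 chips) of a maximal-length sequence from a Fibonacci LFSR."""
    state = [1] * m
    out = []
    for _ in range(2**m - 1):
        out.append(state[-1])
        fb = 0
        for tp in taps:                      # feedback = XOR of the tapped register bits
            fb ^= state[tp - 1]
        state = [fb] + state[:-1]
    return 2.0 * np.array(out) - 1.0

m_code = mlbs()                              # sounding M-sequence m(t), one period
Nm = m_code.size                             # 2^m - 1 chips

# assumed channel: two point echoes at chip lags 40 and 120, plus receiver noise
rng = np.random.default_rng(8)
h = np.zeros(Nm)
h[40], h[120] = 1.0, 0.3
b = np.real(np.fft.ifft(np.fft.fft(m_code) * np.fft.fft(h)))        # received signal b(t)
b += rng.normal(0.0, 0.5, Nm)

# Equation (35): circular cross-correlation of b(t) with the M-sequence code
c_bm = np.real(np.fft.ifft(np.fft.fft(b) * np.conj(np.fft.fft(m_code)))) / Nm

print("strongest echoes at chip lags:", np.sort(np.argsort(c_bm)[-2:]))
```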

3.2. Bandwidth, Measurement Rate, and Antenna Array

The bandwidth of the sounding signal determines the range resolution performance of the radar. Hence, it should be as large as possible. The bandwidth is fixed by the width of the sounding pulse or the chip width of the M-sequence, respectively. The upper-band limits are, however, often restricted by the frequency-dependent propagation losses of the test scenario. Because these losses grow with the increasing target distance, the range resolution will worsen the deeper a target is located in a lossy material.
The measurement rate $\phi_R$ has to respect the Nyquist sampling criterion for the temporal target variation. For heart rate monitoring, for example, at least several tens of impulse response functions per second have to be collected if one is also interested in the higher harmonics of the heart motion.
In the UWB case, $\lambda/2$ antenna spacing is not an issue, so a low number of antenna positions is often already sufficient for localization purposes in a scenario with a low number of targets. The antennas must be kept in stable positions, because even minor movements cause a signal modulation, which can mask the target modulation. In order to permit measurements in highly time-variant scenarios, parallel operation of all receiver channels is required. To illuminate the scenario from different aspect angles, several transmitting antennas need to be involved. Because orthogonal UWB signals are usually not available or difficult to generate, the antennas are not allowed to transmit in parallel. Consequently, the different transmitters have to be activated sequentially. This lowers the measurement rate, which is why the number of transmitters should be kept to a minimum, because the Nyquist sampling criterion for the temporal target variation applies to the whole cycle of data collection.

3.3. Unambiguity Range and Data Throughput

The sounding signals, pulse, or M-sequence, are periodically repeated with the interval τ R . All targets whose distances to the antenna are shorter than
$$R_0 = \frac{\tau_R\, c}{2} \tag{36}$$
can be unambiguously assigned to their correct range by the radar measurement. In UWB imaging scenarios, one is often only interested in a small observation area or volume, so the distances of interest are quite short. In breast cancer imaging, for example, a range of about 10 cm would already be sufficient. This may lead to the suspicion that an unambiguity range a bit larger than 10 cm could be sufficient. However, that does not take into account the unwanted transmission paths along the antenna-feeding cable, multipath propagation, or antenna back-radiation into the surrounding space. Signals that are subjected to a delay larger than $\tau_R$ are folded into the captured principal signal segment and appear as "ghosts", which are not distinguishable from signals with the correct delay. As long as the ghosts do not vary in observation time, they are not critical in differential imaging, because they will be eliminated by the background removal in Equation (32). However, because the object motions in the surroundings of an imaging experiment are mostly not under the control of the operator, they will affect sensitive measurements if they are not able to die off within the time window $\tau_R$. Figure 11C illustrates an example of a too-short time interval $\tau_R$, which did not allow the waves to die out.
The total net data throughput H [ bits / s ] of the receivers of a differential imaging system results from the following:
$$H = N_B\, N_S\, K\, L\, \phi_R, \tag{37}$$
where $N_B$ is the word length of a data sample, $N_S$ is the number of data samples per response function, $K$ and $L$ are the numbers of transmitter and receiver channels, respectively, and $\phi_R$ is the measurement rate. In order to meet the Nyquist rate of IRF acquisition, the minimum number of data samples $N_S$ to be collected is as follows ($\ddot B$ is the double-sided bandwidth):
$$N_S \geq \begin{cases} \tau_{ROI}\,\ddot B & \text{pulse radar}\\ \tau_R\, f_c = N_m & \text{M-sequence radar.}\end{cases} \tag{38}$$
While an M-sequence radar has to collect the data over the full period $\tau_R$ of the sounding signal, a pulse radar can be limited to a time window of interest $\tau_{ROI}$. Note that Equation (38) represents an absolute minimum. In the case of pulse radar, one usually records many more samples in order to get a quasi-continuous impression of the signal. If fast ADCs are applied, one is able to collect more samples than required within the time interval $\phi_R^{-1}$, which should be used to improve the noise behavior by synchronous averaging.
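A small worked example of Equations (37) and (38) may be helpful. The parameter values (word length, channel counts, measurement rate, bandwidth) are illustrative assumptions and not specifications of a particular device.

```python
# Worked example of Equations (37) and (38) with assumed (illustrative) parameters.
N_B   = 16            # word length of one data sample [bit]
K, L  = 2, 8          # number of transmitter and receiver channels
phi_R = 50            # measurement rate [IRFs per second]

# M-sequence radar: N_S = N_m = 2^m - 1 samples per impulse response
m   = 9
N_S = 2**m - 1

H = N_B * N_S * K * L * phi_R                 # Equation (37): net data throughput [bit/s]
print("net data throughput: %.2f Mbit/s" % (H / 1e6))

# pulse radar alternative: N_S >= tau_ROI * B (Equation (38))
tau_ROI = 20e-9                               # assumed time window of interest [s]
B       = 5e9                                 # assumed double-sided bandwidth [Hz]
print("minimum samples per IRF (pulse radar):", int(tau_ROI * B))
```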
Finally, it should be noted that in case of a large unambiguity range (i.e., large τ R ), the average power of a pulse radar will drastically decrease—compare with Equation (33)—while for an M-sequence radar, such an effect will not be observed.

3.4. Random Effects

As already mentioned above, UWB devices are subjected to two types of randomness. These are the uncertainties of voltage capturing due to thermal noise and quantization effects symbolized by n ˜ and the uncertainties (jitter) of the sampling time Δ τ ˜ j .
Assuming a frequency-independent noise power spectral density $\Phi_n$, the power of the additive noise $P_n = \operatorname{var}\{\tilde n\} = \sigma_n^2 = \Phi_n\,\ddot B$ will rise with the bandwidth $\ddot B$. The measures to combat the impact of this noise are to increase the average power of the sounding signal and to emphasize the deterministic measurement effects against the noise by an appropriate integration over a long observation time (e.g., synchronous averaging; matched filtering—compare Equation (20) and Figure 8 and Figure 9).
The impact of sampling jitter is illustrated in Figure 14. Sampling jitter causes data capturing at incorrect and unknown time instances. Because the captured voltage has to be assigned to the pre-defined sampling time $t_0$, the timing uncertainty transforms into amplitude noise depending on the signal slope. The integral effect of the jitter on the signal quality may be expressed by the signal-to-noise ratio relating the average signal power $P$ to the jitter-induced power $P_j$ as follows:
$$\mathrm{SNR}_j = \frac{P}{P_j} = \frac{\frac{1}{\tau_R}\int_{\tau_R} b^2(t)\,dt}{\frac{\varphi_j^2}{\tau_R}\int_{\tau_R} \dot{b}^2(t)\,dt} = \frac{\int |\underline{B}(f)|^2\,df}{4\pi^2 \varphi_j^2 \int f^2\, |\underline{B}(f)|^2\,df} \approx \frac{3}{\pi^2 \varphi_j^2 \ddot{B}^2}$$
$|\underline{B}(f)|^2$ represents the power spectrum of the sounding signal. Applying Parseval's theorem and the differentiation rule of the Fourier transform, and assuming for the sake of simplicity a constant spectrum of the sounding signal within the band limits $-\ddot{B}/2 \dots \ddot{B}/2$, one finds that the jitter effect is independent of the actual time shape and the power of the sounding signal. This result is confirmed by simulations and measurements in [42].
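The closed-form approximation can be checked numerically. The following sketch (Python; bandwidth, jitter, and signal model are illustrative assumptions) synthesizes a flat-spectrum periodic signal, samples it at jittered time instances, and compares the resulting SNR with the expression above:

```python
import numpy as np

rng = np.random.default_rng(1)

def snr_jitter_approx(phi_j, B_dd):
    """Closed-form approximation SNR_j = 3 / (pi^2 * phi_j^2 * B_dd^2) for a flat spectrum."""
    return 3.0 / (np.pi**2 * phi_j**2 * B_dd**2)

B_dd, phi_j, tau_R = 10e9, 10e-12, 100e-9                  # bandwidth, jitter std, signal period (assumed)
freqs  = np.arange(1, int(B_dd / 2 * tau_R) + 1) / tau_R   # harmonics filling the band up to B_dd/2
phases = rng.uniform(0.0, 2.0 * np.pi, freqs.size)

def b(t):
    """Periodic flat-spectrum test signal evaluated at arbitrary (jittered) time instances."""
    return np.cos(2.0 * np.pi * np.outer(t, freqs) + phases).sum(axis=1)

t   = rng.uniform(0.0, tau_R, 5000)                  # nominal sampling instances
err = b(t + rng.normal(0.0, phi_j, t.size)) - b(t)   # jitter-induced amplitude error

print(f"closed form : {10*np.log10(snr_jitter_approx(phi_j, B_dd)):.1f} dB")
print(f"Monte Carlo : {10*np.log10(np.mean(b(t)**2) / np.mean(err**2)):.1f} dB")
```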
However, this integral view obscures the actual situation. Jitter-induced noise is non-ergodic; therefore, its actual dependency on time should be considered for a better understanding of its impact. As is obvious from Figure 14, jitter develops its effect mainly at signal edges. Hence, in the case of pulse radar, the whole jitter power is concentrated at the edges of strong signal parts.
The question remains how the output signal $c_{bm}(t)$ of an M-sequence radar (see Equation (35)) is affected by jitter and noise:
$$\begin{aligned} c_{bm}(t) &= \frac{1}{\tau_R}\int_0^{\tau_R}\big(b_0(\xi) + \dot{b}_0(\xi)\,\Delta\tilde{\tau}_j + \tilde{n}\big)\, m_c(\xi + t)\, d\xi \\ \mathrm{E}\{c_{bm}(t)\} &= \frac{1}{\tau_R}\int_0^{\tau_R} b_0(\xi)\, m_c(\xi + t)\, d\xi \\ \operatorname{var}\{c_{bm}(t)\} &= \frac{1}{N_m}\Big(\sigma_n^2 + \overline{\dot{b}_0^2}\,\varphi_j^2\Big), \quad \text{with } \overline{\dot{b}_0^2} = \frac{1}{\tau_R}\int_0^{\tau_R}\dot{b}_0^2(t)\,dt \ \text{ and since } \int_0^{\tau_R} m^2(t)\,dt = 1 \end{aligned}$$
As we can observe from Equation (40), the expected value is a bias-free estimate of the wanted correlation function, and the total noise depends, due to the jitter, on the strength of the received signal. Unlike the pulse radar, the noise is not concentrated at signal edges but is evenly distributed throughout the signal (i.e., it is converted into additive noise, which depends on the signal level and, due to the differentiation, on the bandwidth). Additionally, the noise level decays with increasing length $N_m$ of the M-sequence. The maximum noise level appears if there is a direct connection between receiver and transmitter (i.e., $b_0(t) = m_b(t)$). Assuming again constant spectral power within the stimulation band $-\ddot{B}/2 \dots \ddot{B}/2$ or $-f_c/2 \dots f_c/2$, respectively, we find the following:
$$\operatorname{var}\{c_{bm}(t)\}\big|_{\max} = \frac{1}{N_m}\Big(\sigma_n^2 + \overline{\dot{m}_b^2}\,\varphi_j^2\Big) = \frac{1}{N_m}\Big(\sigma_n^2 + \varphi_j^2 \int \big|j 2\pi f\, \underline{M}_b(f)\big|^2 df\Big) \approx \frac{1}{N_m}\Big(\sigma_n^2 + \tfrac{1}{3}\big(\pi V_0 f_c \varphi_j\big)^2\Big)$$
In order to stay in the digital domain, we characterize the overall sensitivity of the receiver (i.e., ADC resolution and track-and-hold noise) by its effective number of bits $ENOB = \log_2(2V/\sigma_n)$, where $2V$ is the ADC full-scale range. From Equation (41), quantization/thermal noise and jitter-induced noise are then equal under the following condition:
$$\varphi_{j,0} = \frac{\sqrt{3}}{\pi f_c\, 2^{ENOB-1}}$$
This is an important design rule for M-sequence devices in order to balance resolution, technical effort, and data rate. Assuming an operational frequency band of 0–5 GHz ($f_c = 10\ \mathrm{GHz}$) and $ENOB = 10\ \mathrm{bit}$, the standard deviation of the jitter should not exceed $\varphi_j \approx 100\ \mathrm{fs}$. This is a challenging task. Therefore, the timing concept of the M-sequence radar is specifically designed for low jitter generation (see [21]). The time reference is provided by a low-phase-noise single-tone microwave source. Other timing-critical components, such as the LFSR, the binary divider, and the track-and-hold circuit, are monolithically integrated in a high-frequency SiGe technology providing low-noise signals with steep trigger edges. Figure 15 illustrates the noise behavior of a commercial M-sequence device [43]. Figure 15A shows the pure additive noise $\tilde{n}$, because no signal was fed into the receiver (blue spectrum). Figure 15B,C show the spectrum of the observation-time noise for the maximum input signal (red spectra). Figure 15B refers to a voltage sample placed at a horizontal signal part, and Figure 15C depicts the fluctuations at the steepest signal edge. The noise levels in Figure 15A,B are nearly identical. Hence, following Equation (42), the sampling jitter of the device is sufficiently small that it does not degrade the noise performance of the device. In Figure 15C, the noise level for frequencies above $\phi \approx 0.6\ \mathrm{Hz}$ is about the same as in the two previous cases, which also confirms Equations (41) and (42). For frequencies below $\phi < 0.6\ \mathrm{Hz}$, the noise increases inversely with frequency. The actual reason for this behavior still has to be investigated. Because the measurement was done in a non-temperature-controlled room, one reason could be minor temperature variations, which affect the propagation time of the cable connecting transmitter and receiver. Thanks to the noise suppression of the correlation (see Equation (40)), the remaining temporal fluctuation of an M-sequence device is typically in the range of a few femtoseconds. Temperature fluctuations of only one-hundredth of a degree already lead to propagation-time changes on RF cables that exceed the measurement uncertainty of the device.
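The design rule of Equation (42) is easy to evaluate; the short Python sketch below (parameter values as in the illustrative example above) reproduces the jitter budget of roughly 100 fs:

```python
import numpy as np

def jitter_budget(f_c, enob):
    """Jitter std at which jitter-induced noise equals quantization/thermal noise,
    Equation (42): phi_j0 = sqrt(3) / (pi * f_c * 2**(ENOB - 1))."""
    return np.sqrt(3.0) / (np.pi * f_c * 2.0 ** (enob - 1))

f_c = 10e9                      # clock rate for a 0-5 GHz operational band
for enob in (8, 10, 12):
    print(f"ENOB = {enob:2d} bit -> phi_j0 = {jitter_budget(f_c, enob) * 1e15:6.1f} fs")
```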
Finally, an effect should be pointed out that leads to an erroneous roundtrip-time measurement due to noise [44]. Assume the roundtrip time is determined by the first crossing of a threshold. The situation is illustrated in Figure 16. The rising edge of a signal is affected by random noise, whose standard deviation is indicated by error bars. A related presentation is depicted in Figure 14C, showing the PDFs $p_{b(i\Delta t)}(V)$ of the voltage samples collected at different sampling points $i\Delta t$.
Hence, the probability that a voltage sample captured at time position $i\Delta t$ exceeds the threshold $V_{TH}$ is as follows:
$$P(i\Delta t) = \int_{V_{TH}}^{\infty} p_{b(i\Delta t)}(V)\, dV$$
Further, the probability that the threshold is passed for the first time at sample position $k\Delta t$ (probability of first success) is as follows:
$$P_H(k\Delta t) = P(k\Delta t) \prod_{i=0}^{k-1}\big(1 - P(i\Delta t)\big); \quad k > i$$
Such a probability distribution is illustrated in Figure 16, indicating that the most probable time point for threshold crossing is located before the correct value. In order to minimize this bias error, the noise at the signal edge should be weak, and the density of the sampling points should not be much larger than required by the Nyquist sampling criterion. The determination of the correct position of the threshold crossing then, however, needs a suitable interpolation between the voltage samples lying on either side of the threshold.
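The bias can be illustrated numerically. The following sketch (Python; edge shape, noise level, and sampling interval are assumptions chosen for illustration) evaluates the per-sample exceedance probabilities and the first-success distribution of the two relations above, and shows that the expected detected crossing lies before the noise-free one when the edge is densely sampled:

```python
import numpy as np
from scipy.stats import norm

dt      = 5e-12                                        # sampling interval (densely oversampled edge)
t       = np.arange(0.0, 2e-9, dt)
edge    = 1.0 / (1.0 + np.exp(-(t - 1e-9) / 1e-10))    # smooth rising edge from 0 V to 1 V (assumed shape)
sigma_n = 0.05                                         # noise std of each voltage sample (assumed)
V_TH    = 0.5                                          # detection threshold

P   = norm.sf(V_TH, loc=edge, scale=sigma_n)                 # P(i*dt): probability that sample i exceeds V_TH
P_H = P * np.concatenate(([1.0], np.cumprod(1.0 - P)[:-1]))  # probability of first success at k*dt

t_true = t[np.argmax(edge >= V_TH)]                    # noise-free crossing instant
t_mean = np.sum(t * P_H) / np.sum(P_H)                 # expected detected crossing instant
print(f"noise-free crossing : {t_true*1e9:.3f} ns")
print(f"mean detected value : {t_mean*1e9:.3f} ns (biased towards earlier times)")
```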

4. Demonstration Examples

For illustration of the differential imaging approach, three examples are shown. Further examples can be found in [45,46,47,48,49,50,51,52]. First, we consider breast cancer imaging via modulation of targeted nanoparticles. The other two examples are restricted mainly to the detection of moving objects, because this is the key issue in differential imaging.
A major problem in microwave breast cancer detection is the low contrast of the malignant tissue and the strong clutter caused by the skin and glandular tissue. In order to circumvent such clutter, it has been proposed to target the malignant tissue with nanoparticles [53,54] and to observe the difference before and after the injection of such particles. Because their accumulation in the breast needs a longer time, the patient may not be able to stay in the measurement position. This degrades the reproducibility of the repeated measurement [55].
Another approach is based on nanoparticles that can be modulated by an external magnetic field [56,57,58,59,60]. In that case, the measurements are done after the accumulation of the nanoparticles in the malignant tissue of the breast, so that the patient may keep her position during the whole time needed for the data collection. Figure 17 illustrates the basic concept and some results [57,59]. The test setup is shown in Figure 17A. A test glass is filled with 2 mL of a liquid enriched with a certain amount of magnetite nanoparticles. This glass is placed in a block of phantom material mimicking the breast tissue. Both are finally arranged between the poles of an electromagnet, and the microwave backscattering and transmission are measured with an M-sequence device. For the shown example, the path length in both cases is about 5 cm. For the radar measurements, small active antennas are applied (see inset of Figure 17A). Figure 17B shows radargram examples of the backscattered data after static background removal for on–off keying and sinusoidal modulation, respectively. The radargrams of the transmission measurement look similar. The variation of the target response is in the range of $\pm 10^{-4}$ below the maximum signal magnitude caused by the strongest static propagation path (e.g., antenna cross talk or skin reflection). The radargram example refers to 6 mg of magnetite and a modulation strength of $H_{\max} = 60\ \mathrm{kA\,m^{-1}}$. The related signal-to-noise ratios between peak value and noise floor are about 18 dB for the pulse modulation and 20 dB for the sinusoidal variation, respectively. It is evident that the signals will be lost in noise if the number of nanoparticles is reduced. However, as discussed in connection with Figure 8, an SNR improvement can be achieved by correlating (matched filtering) the radargram in the observation-time direction with the modulation signal. This is depicted in Figure 17C for the sinusoidal modulation and matched filtering via Fourier transform over a measurement time of about 200 s. This improves the SNR value to 37 dB. The actual sensitivity of the measurement arrangement is depicted in Figure 17D. It shows the spectral peak value as a function of the magnetite mass and the modulation strength of the magnetic field. As demonstrated, the detection threshold is in the milligram range and can be further reduced by technical improvements and methods of SNR enhancement. The detectable mass corresponds to a number of nanoparticles that can physiologically be accumulated in malignant tissue. It should further be noted that electronic components of the microwave UWB radar that are placed close to the magnetic modulation field should not be composed of ferromagnetic materials, such as iron or nickel.
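The processing chain used above, static background removal followed by matched filtering along the observation time, can be sketched as follows (Python; the radargram model, modulation frequency, and target level are illustrative assumptions and do not reproduce the measured data):

```python
import numpy as np

rng = np.random.default_rng(2)

f_mod, phi_R, T_obs = 4.0, 100.0, 200.0                    # modulation frequency [Hz], IRF rate [1/s], duration [s]
obs_t   = np.arange(0.0, T_obs, 1.0 / phi_R)               # observation ("slow") time axis
n_range = 128                                              # propagation-time samples per IRF

radargram = rng.normal(0.0, 1.0, (n_range, obs_t.size))    # receiver noise
radargram[40] += 0.05 * np.sin(2 * np.pi * f_mod * obs_t)  # weak modulated target buried in noise at bin 40

diff      = radargram - radargram.mean(axis=1, keepdims=True)   # static background removal
reference = np.sin(2 * np.pi * f_mod * obs_t)                   # known modulation signal
score     = diff @ reference / np.linalg.norm(reference)        # matched filter along observation time

print("detected range bin :", int(np.argmax(np.abs(score))))
print(f"peak-to-background : {20*np.log10(np.max(np.abs(score)) / np.median(np.abs(score))):.1f} dB")
```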
The imaging concept is finally depicted in Figure 18. For the sake of demonstration, a breast mold is partially equipped with antennas. In the shown imaging example, only the upper antennas are actually used. Four of them act as receivers, and five sequentially provide the ultra-wideband sounding field. The mold is arranged between the poles of an electromagnet, and a breast phantom with a test glass of nanoparticles (WHKS 1S12, Liquids Research Limited, Bangor, UK) diluted in distilled water is placed into the mold. The liquid volume is located about 2 cm below the surface. The breast phantom consists of a 2 mm thick skin layer made of silicone mixed with carbon black [61,62], and the healthy tissue surrogate is an oil–gelatin mixture [63].
Figure 18C–E show the result of the three-dimensional (3D) imaging seen from different aspect angles. The target is represented by a blue-colored isosurface enclosing all voxels with an intensity larger than 90% of the maximum voxel intensity. The imaging algorithm is based on the delay-and-sum approach (compare Equation (4)), taking only those signal components that are modulated by the magnetic field. The imaged target creates the impression of a slightly curved rod. The different planes inserted into the breast volume nicely indicate the sidelobes of the imaging procedure, and they give an impression of the resolution. The resolution within the coronal plane is much better than in the other two planes. The reason is the antenna arrangement: the angle in the coronal plane under which the antennas “see” the target is much larger than the related angle in the parasagittal plane.
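A minimal sketch of such a delay-and-sum localization of differential data is given below (Python; geometry, propagation speed, and the idealized point responses are assumptions for illustration, not the measurement configuration of Figure 18):

```python
import numpy as np

c, fs = 2.0e8, 20e9                           # assumed propagation speed [m/s] and sampling rate [1/s]
tx = np.array([[0.00, 0.0], [0.10, 0.0]])     # transmitter positions [m]
rx = np.array([[0.02, 0.0], [0.08, 0.0]])     # receiver positions [m]
target = np.array([0.05, 0.06])               # true (unknown) target position [m]

n_t  = 512
data = np.zeros((len(tx), len(rx), n_t))      # differential (background-removed) IRFs
for k, p_t in enumerate(tx):                  # idealized point responses at the true roundtrip delays
    for l, p_r in enumerate(rx):
        tau = (np.linalg.norm(target - p_t) + np.linalg.norm(target - p_r)) / c
        data[k, l, int(round(tau * fs))] = 1.0

def delay_and_sum(data, tx, rx, grid):
    """Sum every channel at the voxel-dependent roundtrip delay (delay-and-sum beamforming)."""
    image = np.zeros(len(grid))
    for k, p_t in enumerate(tx):
        for l, p_r in enumerate(rx):
            tau = (np.linalg.norm(grid - p_t, axis=1) + np.linalg.norm(grid - p_r, axis=1)) / c
            idx = np.clip(np.round(tau * fs).astype(int), 0, data.shape[-1] - 1)
            image += data[k, l, idx]
    return image

xs, ys = np.meshgrid(np.linspace(0.0, 0.1, 51), np.linspace(0.01, 0.1, 46))
grid   = np.column_stack((xs.ravel(), ys.ravel()))
image  = delay_and_sum(data, tx, rx, grid)
print("focus voxel (within the resolution cell around the true position):", grid[np.argmax(image)])
```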
In order to select an appropriate modulation frequency for the nanoparticles, the intrinsic time variance of a female breast was investigated (see Figure 19). For that purpose, a healthy female volunteer placed her left breast in the antenna mold. The volunteer was lying in prone position and was not moving. The signals after background removal and Fourier transform in observation time are depicted in Figure 19B. Ideally, only noise should remain. Not surprisingly, however, one finds not only the breathing rate and the heart rate, but also strong random variations at the position of the skin in the data. They may be caused by minor global body motions, which the person can never suppress if not anesthetized. Fortunately, these signals die out above 2 Hz, so that a good choice for the modulation rate of the nanoparticles is beyond 2 Hz.
While in the first demonstration example the modulation rate may be appropriately selected to avoid overlap with other motions, this degree of freedom is not given in the second one. Here, the goal is to detect, for example, the motion of the carotid artery. Figure 20 depicts the measurement setup and the results. The measurements are done from a distance of about half a meter. The person is sitting quietly. Nevertheless, minor movements due to breathing or swallowing cannot be avoided by the volunteer. These are the dominant variations, as can be observed in Figure 20C. To remove their influence on the radar data, a PCA/SVD (Principal Component Analysis/Singular Value Decomposition) method may be used. The improved spectrum is shown in Figure 20D, in which the unwanted components are largely suppressed.
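The clutter suppression can be sketched as follows (Python; the radargram model with a strong slow "breathing" component and a weak faster "pulse" component is an illustrative assumption): the radargram is decomposed by an SVD, the strongest principal component is subtracted, and the spectrum of the remaining data reveals the weak motion.

```python
import numpy as np

rng = np.random.default_rng(3)

phi_R, T_obs = 100.0, 60.0
obs_t   = np.arange(0.0, T_obs, 1.0 / phi_R)
n_range = 200

radargram = 0.02 * rng.normal(size=(n_range, obs_t.size))      # receiver noise
breathing = np.sin(2 * np.pi * 0.25 * obs_t)                   # strong, slow body motion (clutter)
radargram[60:100] += np.outer(np.hanning(40), breathing)       # clutter spread over many range bins
radargram[80]     += 0.05 * np.sin(2 * np.pi * 1.2 * obs_t)    # weak pulsation hidden underneath

U, s, Vt = np.linalg.svd(radargram, full_matrices=False)       # PCA via SVD of the radargram
cleaned  = radargram - (U[:, :1] * s[:1]) @ Vt[:1, :]          # subtract the first principal component

freqs = np.fft.rfftfreq(obs_t.size, 1.0 / phi_R)
for name, d in (("before", radargram[80]), ("after ", cleaned[80])):
    spec = np.abs(np.fft.rfft(d))
    print(f"dominant frequency {name}: {freqs[np.argmax(spec[1:]) + 1]:.2f} Hz")
```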
The final example refers to a situation where the time variations of the scenario are not predictable. In the shown case (see Figure 21), the task was to decide whether or not termites are destroying valuable museum exhibits. Termites are hardly visible, because they stay inside the wood. However, they migrate through corridors that they have eaten into the wood. Hence, there is a good chance to detect their motion and to localize them by high-resolution UWB radar. The roundtrip time at which the motion artifacts appear indicates whether the motion comes from inside or outside the wood. If it comes from inside, the wood is affected by wood pests.
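A simple way to flag such unpredictable motion is a running background estimate along the observation time, as sketched below (Python; the data model and the forgetting factor are illustrative assumptions, not the processing used for the exhibits):

```python
import numpy as np

rng = np.random.default_rng(4)

phi_R, n_range, n_obs = 20.0, 256, 2000                    # IRF rate, range bins, observation samples (assumed)
obs_t = np.arange(n_obs) / phi_R

radargram = 0.01 * rng.normal(size=(n_range, n_obs))       # static scene plus receiver noise
radargram[120] += 0.05 * np.sin(2 * np.pi * 0.8 * obs_t)   # weak motion inside the wood at range bin 120

alpha = 0.05                                               # forgetting factor of the background estimate
background    = np.zeros(n_range)
motion_energy = np.zeros(n_range)
for i in range(n_obs):                                     # process the IRFs as they arrive
    diff = radargram[:, i] - background
    background += alpha * diff                             # running (exponential) background estimate
    motion_energy += diff ** 2                             # accumulate residual motion energy per range bin

print("most time-variant roundtrip-time bin:", int(np.argmax(motion_energy)))
```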

5. Conclusions

Microwave imaging is advantageous if the internal structure of optically opaque test scenarios must be investigated. However, in the case of strong multipath scenarios and weak scattering targets, microwave imaging creates challenges for the imaging algorithms and the measurement precision of the scattered field. In order to get reasonable precision, extensive calibrations typically have to be performed with the aim of reducing the systematic errors of the measurement arrangement. However, not infrequently, these calibrations have only limited success because precise calibration standards (covering also the antenna behavior) are missing and/or the reproducibility of the measurements is insufficient.
The situation relaxes if one is interested only in a small number of spatially limited targets that show some inherently or externally induced time variation of their scattering behavior. Such conditions can be found or implemented in many tasks of medical microwave imaging, non-destructive testing, law enforcement, and so forth. Due to the time variance of the targets, the perturbing strong multipath signals and the device clutter may be largely suppressed by retaining only the time-variable signal components. Furthermore, the limited spatial extension of the targets reduces the imaging problem to a localization task. Nevertheless, the localization may still be challenging, because multipath components involving the object of interest may appear as additional time-varying targets. Such signal components must be excluded prior to target localization.
Microwave localization or imaging requires the observation of the scene from several antenna positions. Due to the time variance of the scenario, the measurements have to be done in parallel and with a sufficiently high measurement rate in order to follow the temporal variations of the scenario under test. This requires UWB devices with many synchronously operating measurement channels and high measurement speed. From the perspective of system cost and measurement speed, the preferable device concepts are based on sounding signals using sub-nanosecond pulses or wideband pseudo-noise signals, such as M-sequences.
In many practical cases, the targets of interest only weakly scatter the sounding waves. Hence, the noise behavior of the measuring devices becomes a major issue. In the case of UWB devices, two types of randomness are important: the randomness of the voltage capturing, usually referred to as additive noise (thermal noise, quantization noise), and the randomness of the sampling time, also called jitter. While the influence of the additive noise on the measurement signal may be counteracted by increasing the power of the sounding signal, the jitter-induced SNR does not improve with the signal strength. Jitter provokes additional randomness at steep signal edges. Hence, strong scatterers in the SUT increase the noise level proportionally to the level of the sounding signal, and consequently the detection performance for weak targets becomes much worse. Because additive noise and jitter are mainly caused by internal imperfections of the measurement device, one should prefer a device concept that suppresses these random effects as much as possible. As shown in the paper, the pseudo-noise approach outperforms the impulse radar with regard to these aspects. Firstly, the time-extended sounding signal (e.g., M-sequence) provides high signal power for additive noise suppression, even if the signal voltage remains low. Secondly, properly designed M-sequence devices are rigidly synchronized so that jitter generation is largely reduced. Finally, the impulse compression (always required in M-sequence devices) spreads the noise provoked by the remaining jitter over the whole impulse response function. Hence, there will be no noise elevation at signal edges, so that strong static scatterers will not influence the detection of weak time-variable targets.

Author Contributions

Conceptualization, J.S. and M.H.; Funding acquisition, J.S. and M.H.; Investigation, S.L., T.J., and S.C.; Methodology, J.S.; Project administration, J.S. and M.H.; Supervision, J.S. and M.H.; Original draft preparation, J.S.; Review and editing of the final manuscript, S.L., T.J., S.C., and M.H.

Funding

This work was funded by the Deutsche Forschungsgemeinschaft in the framework of the project ultraMAMMA (HE 6015/1-1, SA 1035/5-1, HI 698/13-1), by the Deutsche Bundesstiftung Umwelt (project DBU AZ 31865-45) and the Alexander von Humboldt Foundation.

Conflicts of Interest

The authors declare no conflict of interest. The funding sponsors had no role in the design of the study; in the collection, analyses, or interpretation of data; in the writing of the manuscript; or in the decision to publish the results.

References

1. Sachs, J. Handbook of Ultra-Wideband Short-Range Sensing-Theory, Sensors, Applications; Wiley-VCH: Berlin, Germany, 2012; p. 840.
2. Nikolova, N.K. Introduction to Microwave Imaging; Cambridge University Press: Cambridge, UK, 2017.
3. Palmeri, R.; Bevacqua, M.T.; Donato, L.D.; Crocco, L.; Isernia, T. Microwave imaging of non-weak targets in stratified media via virtual experiments and compressive sensing. In Proceedings of the 11th European Conference on Antennas and Propagation (EUCAP), Paris, France, 19–24 March 2017; pp. 1711–1715.
4. Meo, S.D.; Espín-López, P.F.; Martellosio, A.; Pasian, M.; Matrone, G.; Bozzi, M.; Magenes, G.; Mazzanti, A.; Perregrini, L.; Svelto, F.; et al. On the feasibility of breast cancer imaging systems at millimeter-waves frequencies. IEEE Trans. Microw. Theory Tech. 2017, 65, 1795–1806.
5. Zhuge, X.; Yarovoy, A.G. Study on two-dimensional sparse mimo uwb arrays for high resolution near-field imaging. IEEE Trans. Antennas Propag. 2012, 60, 4173–4182.
6. Lee, D.; Velander, J.; Nowinski, D.; Augustine, R. A preliminary research on skull healing utilizing short pulsed radar technique on layered cranial surgery phantom models. Prog. Electromagn. Res. 2018, 84, 1–9.
7. Scapaticci, R.; Bucci, O.M.; Catapano, I.; Crocco, L. Differential microwave imaging for brain stroke followup. Int. J. Antennas Propag. 2014, 2014.
8. Haynes, M.; Stang, J.; Moghaddam, M. Real-time microwave imaging of differential temperature for thermal therapy monitoring. IEEE Trans. Biomed. Eng. 2014, 61, 1787–1797.
9. Scapaticci, R.; Bellizzi, G.G.; Cavagnaro, M.; Lopresto, V.; Crocco, L. Exploiting microwave imaging methods for real-time monitoring of thermal ablation. Int. J. Antennas Propag. 2017, 2017.
10. Abbosh, A.M.; Mohammed, B.; Bialkowski, K. Differential microwave imaging of breast pair for tumor detection. In Proceedings of the IEEE MTT-S 2015 International Microwave Workshop Series on RF and Wireless Technologies for Biomedical and Healthcare Applications (IMWS-BIO), Taipei, Taiwan, 21–23 September 2015; pp. 63–64.
11. Margrave, G.F. Numerical Methods of Exploration Seismology with Algorithms in MATLAB. 2001. Available online: https://www.crewes.org/ResearchLinks/FreeSoftware/NumMeth.pdf (accessed on 2 June 2018).
12. Savelyev, T.G.; Van Kempen, L.; Sahli, H. Deconvolution techniques. In Ground Penetrating Radar, 2nd ed.; Daniels, D., Ed.; Institution of Electrical Engineers: London, UK, 2004.
13. Bond, E.J.; Xu, L.; Hagness, S.C.; Van Veen, B.D. Microwave imaging via space-time beamforming for early detection of breast cancer. In Proceedings of the 2002 IEEE International Conference on Acoustics, Speech, and Signal Processing, Orlando, FL, USA, 13–17 May 2002; pp. III-2909–III-2912.
14. Jian, L.; Stoica, P.; Zhisong, W. On robust capon beamforming and diagonal loading. IEEE Trans. Signal Process. 2003, 51, 1702–1715.
15. Lorenz, R.G.; Boyd, S.P. Robust minimum variance beamforming. IEEE Trans. Signal Process. 2005, 53, 1684–1696.
16. Hooi Been, L.; Nguyen Thi Tuyet, N.; Er-Ping, L.; Nguyen Duc, T. Confocal microwave imaging for breast cancer detection: Delay-multiply-and-sum image reconstruction algorithm. IEEE Trans. Biomed. Eng. 2008, 55, 1697–1704.
17. Zetik, R.; Sachs, J.; Thoma, R. Modified cross-correlation back projection for uwb imaging: Numerical examples. In Proceedings of the IEEE International Conference on Ultra-Wideband (ICUWB), Zurich, Switzerland, 5–8 September 2005.
18. Matrone, G.; Savoia, A.S.; Caliano, G.; Magenes, G. The delay multiply and sum beamforming algorithm in ultrasound b-mode medical imaging. IEEE Trans. Med. Imaging 2015, 34, 940–949.
19. Senglee, F.; Kashyap, S. Cross-correlated back projection for uwb radar imaging. In Proceedings of the Antennas and Propagation Society International Symposium, Monterey, CA, USA, 20–25 June 2004.
20. Zhou, L.; Huang, C.; Su, Y. A fast back-projection algorithm based on cross correlation for GPR imaging. IEEE Geosci. Remote Sens. Lett. 2012, 9, 228–232.
21. Sachs, J.; Herrmann, R.; Kmec, M. Time and range accuracy of short-range ultra-wideband pseudo-noise radar. Appl. Radio Electron. 2013, 12, 105–113.
22. Sachs, J. On the range estimation by uwb-radar. In Proceedings of the IEEE International Conference on Ultra-Wideband (ICUWB), Paris, France, 1–3 September 2014.
23. Kerbrat, E.; Prada, C.; Cassereau, D.; Ing, R.K.; Fink, M. Detection and imaging in complex media with the D.O.R.T. Method. In Proceedings of the IEEE Ultrasonics Symposium, San Juan, Puerto Rico, USA, 22–25 October 2000.
24. Devaney, A.J. Time reversal imaging of obscured targets from multistatic data. IEEE Trans. Antennas Propag. 2005, 53, 1600–1610.
25. Bellomo, L.; Saillard, M.; Pioch, S.; Belkebir, K.; Chaumet, P. An ultrawideband time reversal-based radar for microwave-range imaging in cluttered media. In Proceedings of the 13th International Conference on Ground Penetrating Radar (GPR), Lecce, Italy, 21–25 June 2010.
26. Kosmas, P.; Laranjeira, S.; Dixon, J.H.; Li, X.; Chen, Y. Time reversal microwave breast imaging for contrast-enhanced tumor classification. In Proceedings of the 2010 Annual International Conference of the IEEE Engineering in Medicine and Biology Society (EMBC), Buenos Aires, Argentina, 31 August–4 September 2010.
27. Yavuz, M.E.; Teixeira, F.L. Ultrawideband microwave sensing and imaging using time-reversal techniques: A review. Remote Sens. 2009, 1, 466–495.
28. Fink, M.; Prada, C. Acoustic time-reversal mirrors. Inverse Probl. 2001, 17, R1–R38.
29. Zhen, Z.; Fang, L. Application of wavelet analysis technique in the signal denoising of life sign detection. Phys. Proced. 2012, 24, 2124–2130.
30. Li, J.; Liu, L.; Zeng, Z.; Liu, F. Advanced signal processing for vital sign extraction with applications in uwb radar detection of trapped victims in complex environments. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 2014, 7, 783–791.
31. Nezirovic, A.; Yarovoy, A.G.; Ligthart, L.P. Signal processing for improved detection of trapped victims using uwb radar. IEEE Trans. Geosci. Remote Sens. 2010, 48, 2005–2014.
32. Mabrouk, M.; Rajan, S.; Bolic, M.; Batkin, I.; Dajani, H.R.; Groza, V.Z. Detection of human targets behind the wall based on singular value decomposition and skewness variations. In Proceedings of the 2014 IEEE Radar Conference, Cincinnati, OH, USA, 19–23 May 2014; pp. 1466–1470.
33. Lazaro, A.; Girbau, D.; Villarino, R. Techniques for clutter suppression in the presence of body movements during the detection of respiratory activity through UWB radars. Sensors 2014, 14, 2595–2618.
34. Conte, E.; Filippi, A.; Tomasin, S. Ml period estimation with application to vital sign monitoring. IEEE Signal Process. Lett. 2010, 17, 905–908.
35. Lv, H.; Qi, F.; Zhang, Y.; Jiao, T.; Liang, F.; Li, Z.; Wang, J. Improved detection of human respiration using data fusion based on a multistatic uwb radar. Remote Sens. 2016, 8, 773.
36. Li, W.Z. A new method for non-line-of-sight vital sign monitoring based on developed adaptive line enhancer using low centre frequency uwb radar. Prog. Electromagn. Res. 2013, 133, 535–554.
37. Ossberger, G.; Buchegger, T.; Schimback, E.; Stelzer, A.; Weigel, R. Non-invasive respiratory movement detection and monitoring of hidden humans using ultra wideband pulse radar. In Proceedings of the 2004 International Workshop on Joint UWBST & IWUWBS, Kyoto, Japan, 18–21 May 2004.
38. Zaikov, E. M-sequence radar sensor for search and rescue of survivors beneath collapsed buildings. In Handbook of Ultra-Wideband Short-Range Sensing: Theory, Sensors, Applications; Sachs, J., Ed.; Wiley-VCH: Weinheim, Germany, 2012.
39. Sachs, J.; Helbig, M.; Herrmann, R.; Kmec, M.; Schilling, K.; Zaikov, E. Remote vital sign detection for rescue, security, and medical care by ultra-wideband pseudo-noise radar. Ad Hoc Netw. 2012, 13, 42–53.
40. Blum, A.; Hopcroft, J.; Kannan, R. Foundations of Data Science. 2018. Available online: https://www.cs.cornell.edu/jeh/book.pdf (accessed on 2 June 2018).
41. Yunqiang, Y.; Fathy, A.E. Development and implementation of a real-time see-through-wall radar system based on FPGA. IEEE Trans. Geosci. Remote Sens. 2009, 47, 1270–1280.
42. Zeng, X.; Monteith, A.; Fhager, A.; Persson, M.; Zirath, H. Noise performance comparison between two different types of time-domain systems for microwave detection. Int. J. Microw. Wirel. Technol. 2017, 9, 535–542.
43. Ilmsens. M:Explore. Available online: https://www.uwb-shop.com/products/m-explore/ (accessed on 2 June 2018).
44. Elkhouly, E.; Fathy, A.E.; Mahfouz, M.R. Signal detection and noise modeling of a 1-d pulse-based ultra-wideband ranging system and its accuracy assessment. IEEE Trans. Microw. Theory Tech. 2015, 63, 1746–1757.
45. Amin, M.G. Through-the-Wall Radar Imaging; CRC Press: Boca Raton, FL, USA, 2011.
46. Fiser; Helbig, M.; Ley, S.; Sachs, J.; Vrba, J. Feasibility study of temperature change detection in phantom using m-sequence radar. In Proceedings of the 10th European Conference on Antennas and Propagation (EuCAP), Davos, Switzerland, 10–15 April 2016; pp. 1–4.
47. Sachs, J.; Helbig, M.; Kmec, M.; Herrmann, R.; Schilling, K.; Plattes, S.; Fritsch, H.C. Remote heartbeat capturing of high yield cows by uwb radar. In Proceedings of the International Radar Symposium, Dresden, Germany, 24–26 June 2015.
48. Sachs, J.; Herrmann, R. M-sequence based ultra-wideband sensor network for vitality monitoring of elders at home. IET Radar Sonar Navig. 2015, 9, 125–137.
49. Helbig, M.; Zender, J.; Ley, S.; Sachs, J. Simultaneous electrical and mechanical heart activity registration by means of synchronized ECG and M-sequence UWB sensor. In Proceedings of the 10th European Conference on Antennas and Propagation (EuCAP), Davos, Switzerland, 10–15 April 2016; pp. 1–3.
50. Kosch, O.; Thiel, F.; Schwarz, U.; di Clemente, F.S.; Hein, M.A.; Seifert, F. UWB cardiovascular monitoring for enhanced magnetic resonance imaging. In Handbook of Ultra-Wideband Short-Range Sensing: Theory, Sensors, Applications; Sachs, J., Ed.; Wiley-VCH: Weinheim, Germany, 2012.
51. Rovňáková, J.; Kocur, D. Experimental comparison of two UWB radar systems for through-wall tracking application. Acta Electrotech. Inform. 2012, 12, 59–66.
52. Helbig, M.; Koch, J.H.; Ley, S.; Herrmann, R.; Kmec, M.; Schilling, K.; Sachs, J. Development and test of a massive MIMO system for fast medical UWB imaging. In Proceedings of the International Conference on Electromagnetics in Advanced Applications (ICEAA), Verona, Italy, 11–15 September 2017; pp. 1331–1334.
53. Klemm, M.; Leendertz, J.; Gibbins, D.; Craddock, I.J.; Preece, A.; Benjamin, R. Towards contrast enhanced breast imaging using ultra-wideband microwave radar system. In Proceedings of the IEEE Radio and Wireless Symposium (RWS), New Orleans, LA, USA, 10–14 January 2010.
54. Mashal, A.; Sitharaman, B.; Booske, J.H.; Hagness, S.C. Dielectric characterization of carbon nanotube contrast agents for microwave breast cancer detection. In Proceedings of the IEEE Antennas and Propagation Society International Symposium, Charleston, SC, USA, 1–5 June 2009.
55. Ley, S.; Helbig, M.; Sachs, J.; Faenger, B.; Hilger, I. Initial volunteer trial based on ultra-wideband pseudo-noise radar. In Proceedings of the IMBioC, Gothenburg, Sweden, 15–17 May 2017.
56. Bellizzi, G.; Bucci, O.M.; Capozzoli, A. Broadband spectroscopy of the electromagnetic properties of aqueous ferrofluids for biomedical applications. J. Magn. Magn. Mater. 2010, 322, 3004–3013.
57. Bellizzi, G.G.; Bellizzi, G.; Bucci, O.M.; Crocco, L.; Helbig, M.; Ley, S.; Sachs, J. Optimization of working conditions for magnetic nanoparticle enhanced ultra-wide band breast cancer detection. In Proceedings of the 10th European Conference on Antennas and Propagation (EuCAP), Davos, Switzerland, 10–15 April 2016; pp. 1–3.
58. Bellizzi, G.; Bellizzi, G.G.; Bucci, O.M.; Crocco, L.; Helbig, M.; Ley, S.; Sachs, J. Optimization of the working conditions for magnetic nanoparticle-enhanced microwave diagnostics of breast cancer. IEEE Trans. Biomed. Eng. 2018, 65.
59. Ley, S.; Helbig, M.; Sachs, J.; Frick, S.; Hilger, I. First trials towards contrast enhanced microwave breast cancer detection by magnetic modulated nanoparticles. In Proceedings of the 9th European Conference on Antennas and Propagation (EuCAP), Lisbon, Portugal, 13–17 April 2015.
60. Ley, S.; Helbig, M.; Sachs, J. MNP enhanced microwave breast cancer imaging based on ultra-wideband pseudo-noise sensing. In Proceedings of the 11th European Conference on Antennas and Propagation (EUCAP), Paris, France, 19–24 March 2017.
61. Helbig, M.; Dahlke, K.; Hilger, I.; Kmec, M.; Sachs, J. UWB microwave imaging of heterogeneous breast phantoms. Biomed. Eng. Tech. 2012, 57, 486–489.
62. Garrett, J.; Fear, E. A new breast phantom with a durable skin layer for microwave breast imaging. IEEE Trans. Antennas Propag. 2015, 63, 1693–1700.
63. Lazebnik, M.; Madsen, E.L.; Frank, G.R.; Hagness, S.C. Tissue-mimicking phantom materials for narrowband and ultrawideband microwave applications. Phys. Med. Biol. 2005, 50, 4245–4258.
Figure 1. Generic microwave imaging setup. Note that in some applications, the antennas may also be located inside the observation area.
Figure 2. Arbitrary distributed antenna array with single target in free space.
Figure 3. Typical signals for an idealized scenario with electrically small antennas and a point scatterer. Antenna i is stimulated with a Gaussian pulse, while all other antennas only act as receivers ($a_j = 0;\ j \neq i$).
Figure 4. Intensity plot of a two-dimensional (2D)-scenario with two point scatterers of equal reflectivity. (A) Delay-and-Sum approach. (B) Delay-and-Multiply method.
Figure 5. Simulated example of the probability density function (PDF) for a 2D-scenario. The standard deviation of the measurements was assumed to be σ = 0.3 m . (Top) the observation area is surrounded by the antenna. (Bottom) the observation area is illuminated only from one side.
Figure 6. Data of a walking person measured from a mono-static radar. (A) radargram; (B) target range; and (C) time variance of target backscattering.
Figure 7. Weakly sinusoidal moving point scatterer. (A) Section of the time-variant receiving signal for the noise-free case. The variation of some data samples captured at propagation times $t_1 \dots t_5$ is emphasized. (B) Time variance of the selected data samples (DC-value ignored).
Figure 8. Noise-affected radar data of a sinusoidal moving point target (frequency ϕ 0 ) according to Figure 7. (A) Radargram of noise-free data. (B) Radargram of noise-affected data. (C) Spectrum of noise-free radar data. Note also the quadratic term. The larger the range variation, the stronger the higher harmonics will be [39]. (D) Spectrum of noise-affected data integrated over T 0 . (E) Spectrum for integration time 100 T 0 .
Figure 9. Cross-energy matrix for two independent measurements of a single moving target.
Figure 10. Illustration of time-variant multipath scenario. (A) Through-wall radar to detect humans behind walls; (B) Medical microwave imaging to detect artery pulsation (e.g., in the upper arm of a human).
Figure 11. Illustration of multipath influence onto radar data: (A) Radar data of a moving person after background removal. The horizontal signal trace represents the third path illustrated in Figure 10: antenna to person, forward scattering of person to the back wall and back to the antenna. (B) Radar data after background removal gained from moved sheet metal. The experiment was done in an ordinary laboratory space with a 12th order M-sequence device (total length of pulse response 307 ns). (C) Measurement done with 8th order M-sequence device showing a too short unambiguity range (length of pulse response 19 ns; see also Section 3.3).
Figure 12. Separation of time-variable clutter by principal component analysis (PCA). (A) Modulation functions of the radar signal. (B) Spectral representation of radar data (see also Figure 8). (C) The first three principal components of the radargram. (D) Spectrum after removing the first principal component.
Figure 13. Block schematic of pulse radar (A) and M-sequence radar (B). Both principles are working in the baseband. For extensions to bandpass principles see [1,41]. Note that the actual sounding signal of an M-sequence radar is the bandlimited M-sequence m b ( t ) .
Figure 14. Impact of sampling jitter on data capturing. (A) Illustration of the effect at a single sampling point t 0 . (B) Measurement example. (C) Probability density functions of a rising signal edge.
Figure 15. Spectral noise power of the 12th-order M-sequence device m:explore. (A) receiver input open. (B) maximum input signal; sample taken from flat signal part. (C) maximum input signal, sample taken from steepest signal part.
Figure 16. Threshold crossing of a noisy signal edge.
Figure 17. Modulation of microwave scattering of nanoparticles by an external magnetic field. (A) Test-setup. (B) Radargram for on–off keying (above) and sinusoidal H-field modulation (below). (C) “Observation time” spectrum for sinusoidal field modulation. (D) Strength of signal variation (backscattering) as a function of nanoparticle mass and magnetic field strength.
Figure 18. Breast cancer imaging via modulated nanoparticles. (A) Breast mold partially equipped with antennas. (B) Measurement setup. (CE) Three-dimensional (3D)-images under different perspectives. For the sake of better illustration, the transversal and parasagittal plane do not cross the voxel of maximum intensity. But, the coronal plane includes that voxel.
Figure 19. Impact of vital motions onto radar data. (A) Measurement setup (courtesy B. Faenger). (B) Spectral radar data. Every line represents the spectrum of an individual sampling point of the impulse response function.
Figure 20. Measurement of carotid artery motion. (A) Anatomy of the human head (courtesy Bruce Blaus). (B) Measurement setup. (C) Spectrum of the range sample with strongest variation. (D) Motion spectrum after removal of unwanted neck motion.
Figure 21. Termite detection in exhibits (courtesy: B. Landsberger, National Museums Berlin).
