Extraction of Human Limbs Based on Micro-Doppler-Range Trajectories Using Wideband Interferometric Radar

In this paper, we propose to extract the motions of different human limbs using interferometric radar based on the micro-Doppler-Range signature (mDRS). Accurate extraction of human limbs in motion has great potential for improving radar performance in human motion detection. Because the motions of human limbs usually overlap in the time-Doppler plane, it is extremely hard to separate the limbs without additional information such as range or angle. It is also difficult to identify which part of the body each signal component belongs to. In this work, the overlaps of multiple components are resolved, and the motions of different limbs are extracted and classified based on the extracted micro-Doppler-Range trajectories (MDRTs) together with a proposed three-dimensional constant false alarm rate (3D-CFAR) detection. Three experiments on typical human motions were conducted using a 77 GHz radar board with 4 GHz bandwidth, and the results were validated against the measurements of a Kinect sensor. Each experiment was repeated with three people of different heights to test the repeatability and robustness of the proposed approach, and the results met our expectations.


Introduction
Using radar to detect human motion for classification and recognition has attracted significant attention in recent years [1][2][3]. Recent works have focused on human skeletal posture estimation [4][5][6], human parsing [7], and 3D body mesh estimation [8,9] based on millimeter-wave MIMO radar. Beyond the potential applications in human-computer interaction and gait recognition [10,11], radar offers great potential for monitoring human motions without privacy invasion for the physical health care of elderly people and patients [12,13]. Human radar micro-Doppler signatures (mDS), a significant feature for classifying and recognizing human motions, contain the time-varying velocity information of human limbs [14]. In human-computer interaction applications, the device should infer the intention of human behavior before responding. In automated driving applications, the driving-assistance system needs to judge the intention of a pedestrian before taking action. The accurate extraction of the motion data of different limbs allows for the quantitative interpretation of human motion, which helps in understanding the behavior. Compared to classification without separation via preprocessing, the separated mDS of different body parts can significantly improve the classification accuracy [15]. Traditionally, the mDS is extracted in the time-Doppler domain [16], which is inadequate for discriminating human limbs because only the velocity information is used.

In this paper, we propose an interferometric radar approach to extract human limbs from micro-Doppler-Range trajectories (MDRTs). The time-Doppler-Range characteristics are used to represent the micromotions of a moving target, aiming to solve the overlapping problem in the time-Doppler domain, after which the interferometric phase is utilized to acquire the angle information. Benefitting from the angle information, the elevation and azimuth positions can be fixed, and thus different human limbs can be classified according to the micro-Doppler-Range signatures (mDRSs), i.e., different limbs can be separated and extracted.
The remainder of this paper is arranged as follows. Section 2 briefly presents the fundamentals of mDS and radar interferometry. Based on the mDRS, the method for limb extraction using wideband interferometric radar is developed in Section 3. Practical experiments are described and the results analyzed in Section 4, and finally the paper is concluded in Section 5.

Human Micro-Doppler Signature
Micromotions of a target or a structure, such as human arms swinging when walking, induce the well-known micro-Doppler phenomenon in radar detection. The mDS reflects the motion kinematics of a target in the time-Doppler domain, which can be obtained by applying time-frequency analysis to the radar echo signals [16].
A human is one of the representative targets with micromotion signatures. When a human walks towards the radar, their limbs, such as the arms and legs, all exhibit micromotions with different velocities. The received radar signal containing the information of all micromotions can thus be expressed after demodulation as

s_r(t) = Σ_i A_i exp( j 2π ∫_0^t f_di(τ) dτ ),

where A_i is the amplitude, and f_di(t) is the micro-Doppler frequency corresponding to micromotion component (MMC) i. By conducting the Short-Time Fourier Transform (STFT) on s_r(t), we obtain

S(t, f) = ∫ s_r(τ) w(τ − t) e^(−j2πfτ) dτ,

where w(t) is the time window function; the micromotions can thus be characterized in the time-frequency plane, and the mDS can be obtained. Figure 1 shows the mDS of a human walking towards a Ku-band single-channel radar with no arm swinging. There are three major m-D components, induced by the torso, left leg, and right leg, respectively. As shown in Figure 1, the first component, induced by the torso, exhibits the strongest intensity, and its velocity exhibits a pseudo-periodic oscillation between about −0.5 m/s and −1.5 m/s. The remaining two components, induced by the legs, exhibit the largest radial velocity of about −3.5 m/s at peak.
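As an illustrative sketch of the two expressions above, the following snippet simulates a single sinusoidal micromotion component (the parameters are hypothetical, chosen only to mimic a swinging limb) and recovers its micro-Doppler trajectory with an off-the-shelf STFT:

```python
import numpy as np
from scipy.signal import stft

# Hypothetical single micromotion component: a limb swinging with a
# sinusoidal micro-Doppler frequency (peak 200 Hz, ~1 s swing cycle).
fs = 1000.0                     # slow-time sampling rate, Hz
t = np.arange(0, 2.0, 1 / fs)   # 2 s observation
fd = 200.0 * np.sin(2 * np.pi * 1.0 * t)   # micro-Doppler frequency f_d(t), Hz

# s_r(t) = A * exp(j * 2*pi * integral of f_d(tau) d tau)
phase = 2 * np.pi * np.cumsum(fd) / fs
s_r = np.exp(1j * phase)

# Sliding-window STFT with window w(t) yields the time-frequency (mDS) plane.
f, tt, S = stft(s_r, fs=fs, window="hann", nperseg=128,
                noverlap=96, return_onesided=False)
mDS = np.abs(S)

# The ridge of |S(t, f)| tracks the simulated f_d(t).
ridge = np.fft.fftshift(f)[np.argmax(np.fft.fftshift(mDS, axes=0), axis=0)]
```

The ridge of the spectrogram follows the simulated f_d(t), which is how a trajectory such as the torso oscillation in Figure 1 is read off the mDS.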

However, when the radar echo data of Figure 1 are used to extract the micromotions of limbs, the following problems are encountered.

1. The velocities of the torso, legs, and arms are hard to identify and interpret even by professionals without prior knowledge about human motions.

2. The multiple m-D components, including the torso, legs, and arms, overlap with each other in the mDS. Thus, it is hard to extract human limbs accurately based on the mDS alone. As we will show in Section 3, the overlapping problem can be solved by incorporating the range information into the mDS.

3. Although the velocity components may be separated from the mDS by signal decomposition [17,18], it is still difficult to identify which limbs induced them. As shown later, this problem can be solved by utilizing the interferometric phases obtained by interferometric radar.

Radar Interferometry
Let us consider an interferometric radar composed of two antennas as shown in Figure 2, whose positions are (−d/2, 0) and (d/2, 0), respectively, i.e., the baseline length is d. The distance between the target and the midpoint of the two antennas is R, and R ≫ d. Thus, the difference between θ1 and θ2 is negligible, i.e., θ1 ≈ θ2 = θ, and the path difference is ΔR = R1 − R2 ≈ d sin θ.

Assuming Antenna1 is used for transmitting and both antennas are used for receiving, the measured phase difference Δφ of these two antennas can be expressed as

Δφ = (2π/λ) ΔR = (2π d sin θ) / λ,

where λ is the wavelength of the central frequency of the transmitted signal. Therefore, the azimuth angle θ and position X can be estimated by

θ = arcsin( λ Δφ / (2π d) ),  X = R sin θ.

Interferometric radar can discriminate multiple targets by estimating their angles relative to the normal of the baseline according to the measured interferometric phases [33], and the angle information can then be used to extract the motion components induced by different human limbs. Let us suppose a human is walking towards a radar, with the arms and legs at different elevation positions and the left and right limbs at different azimuth positions. It is then possible to discriminate and extract the different limbs by using an interferometric radar. However, if the limbs have the same radial velocity at the same time, different m-D components, or different micromotions, will overlap with each other as shown in Figure 1. In this case, the interferometric phase alone is still insufficient to extract the MMCs accurately from the human mDSs. In the next section, we show that different MMCs can be well separated and extracted by incorporating range information into the mDS together with the interferometric information.
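This phase-to-angle inversion can be sketched numerically as follows; all values (baseline, range, angle) are hypothetical and chosen only for illustration:

```python
import numpy as np

# Hypothetical single-baseline interferometer: two receivers separated by
# d = lambda/2, one target at range R and azimuth angle theta.
lam = 3.896e-3          # wavelength at 77 GHz, m
d = lam / 2.0           # baseline length, m
R = 2.5                 # range to target, m
theta_true = np.deg2rad(7.0)

# Forward model: phase difference 2*pi*d*sin(theta)/lambda between channels.
dphi = 2 * np.pi * d * np.sin(theta_true) / lam

# Inversion: angle and cross-range position from the measured phase.
theta_est = np.arcsin(lam * dphi / (2 * np.pi * d))
X_est = R * np.sin(theta_est)
```

With d = λ/2, the measurable phase spans ±π for angles up to ±90°, which is why the half-wavelength spacing is a natural choice for an unambiguous interferometric baseline.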

Micro-Doppler-Range Trajectory Extraction
Since the human mDS provides an aggregation of the time-velocity distribution of human limbs, the precondition for separating the limbs from the mDS using interferometric radar is that the MMCs do not overlap. However, as described above, the overlapping problem is unavoidable in human mDS. To resolve the overlaps and extract human limbs accurately, we propose to use the micro-Doppler-Range signature (mDRS), which contains both range and velocity information.

Human mDRS
Just as the mDS can be presented in the time-Doppler plane, the micro-Range signature (mRS) induced by micromotions of limbs can be presented in the time-Range plane if high-resolution range information is available from wideband radar [22]. However, whether the mDS or the mRS is used, overlaps between human limbs are unavoidable.
Figure 3 shows the simulated mDS, mRS, and mDRS, and real Kinect data of a pedestrian. The simulation is conducted using the method proposed in [34]. To simplify the discussion, only the feet and torso are considered in this simulation. There are two kinds of overlap: the mDS overlap and the mRS overlap.

1. mDS overlap. In Figure 3a, the part circled in red denotes the mDS overlap, which is labeled T1. It occurs at the instants when both feet are on the ground. The red box in Figure 3c shows the corresponding diagram of the T1 state.
As the distances of the two feet from the radar are different in the T1 state, there is no overlap in the mRS in Figure 3b, as shown by the red dotted circle.

2. mRS overlap. The green solid circle marked T2 in Figure 3b denotes the mRS overlap. In this situation, both feet are at the same distance from the radar. The green box in Figure 3c shows the diagram of the T2 state.
Although the two feet have the same range in the T2 state, their velocities are different: the standing foot is at zero velocity, while the other foot is at its maximum radial velocity within the gait cycle. As shown in Figure 3a, the mDSs of the two feet do not overlap with each other.
For a pedestrian, the two feet repeatedly share either the same velocity or the same distance, so either an mDS overlap or an mRS overlap is unavoidable. However, as shown in Figure 3a,b, when the two feet have the same velocity, their distances to the radar differ, and when they have the same distance, their velocities differ. Hence, the overlapping problem can be well handled if both the velocity information and the distance information, i.e., the mDRS, are used.
Figure 3d presents the simulated mDRS of a pedestrian, where both the feet and the torso are well separated without overlapping. The same holds for swinging arms, although they are not shown here. Moreover, the real Kinect data of a pedestrian shown in Figure 3e also demonstrate that there is no overlapping problem when the range information is combined with the velocity information. Thus, good results can be expected if the mDRS is used to solve the overlapping problem.
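The T1-state claim above can be reproduced with a toy simulation: two scatterers with the same radial velocity but different ranges merge in the Doppler dimension yet remain cleanly separable in range. The FMCW parameters below are hypothetical and deliberately simplified (single-tone beat model, no range migration):

```python
import numpy as np

# Hypothetical FMCW parameters (not the AWR1843 configuration).
c = 3e8
B, Tc, fc = 4e9, 50e-6, 77e9          # sweep bandwidth, chirp time, carrier
Ns, Nc = 256, 64                       # fast-time samples, chirps
fs = Ns / Tc
slope = B / Tc
t_fast = np.arange(Ns) / fs
t_slow = np.arange(Nc) * Tc

def beat(r, v):
    """Beat signal of one point scatterer at range r, radial velocity v."""
    fb = 2 * slope * r / c             # range beat frequency
    fd = 2 * v * fc / c                # Doppler frequency
    return np.exp(1j * 2 * np.pi * (fb * t_fast[None, :]
                                    + fd * t_slow[:, None]))

# Two "feet" in the T1 state: identical velocity, ranges 2.3 m and 2.8 m.
sig = beat(2.3, 0.0) + beat(2.8, 0.0)

# Range-Doppler map: FFT over fast time (range) and slow time (Doppler).
rd = np.fft.fftshift(np.fft.fft2(sig), axes=0)
r_axis = np.arange(Ns) * c * fs / (2 * slope * Ns)

# The two strongest range cells land near 2.3 m and 2.8 m.
profile = np.abs(rd).sum(axis=0)
peaks = np.sort(r_axis[np.argsort(profile)[-2:]])
```

The two components are indistinguishable along the Doppler axis (both at zero Doppler) but resolve into distinct range cells, which is precisely what the mDRS exploits.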

Interferometric Geometry and Retrieval of Positions
In this work, the simplest and most widely used interferometric radar geometry, formed by three antennas in an L-shape [35,36], is adopted, where both the elevation interferometry and the azimuth interferometry are constructed as shown in Figure 4. The three antennas are located at (0, 0, 0), (d_a, 0, 0), and (0, 0, d_e), respectively, where Antenna1 both transmits and receives signals, while Antenna2 and Antenna3 only receive. This system constructs two orthogonal interferometric baselines, i.e., the horizontal and vertical baselines, which can be utilized to obtain the azimuth and elevation angle positions corresponding to the different limbs of the pedestrian as shown in the following. Let us use S1(f, r, t), S2(f, r, t), and S3(f, r, t) to denote the mDRSs obtained from the echoes received by the three antennas.

In real situations, the radar echoes are usually influenced by various interferences, e.g., low signal-to-noise ratio (SNR) and background clutter. Therefore, the constant false alarm rate (CFAR) [37] method is usually used to detect a moving human target in a complex environment full of interference. Different from the traditional one-dimensional or two-dimensional CFAR, here a three-dimensional CFAR (3D-CFAR) scheme is proposed to achieve better performance, i.e., a 3D-CFAR window is applied to the time-Doppler-range data cube with the threshold T given by

T = N_g ( P^(−1/N_g) − 1 ) σ_m²,

where P is the false alarm probability set as a constant, and σ_m² is the interferometric power calculated as

σ_m² = (1 / (3 N_g)) Σ_{n=1}^{3} Σ_{i=1}^{N_g} |S_n(f_i, r_i, t_i)|²,

where N_g is the number of guard cells in the CFAR window, and S_n(f_i, r_i, t_i) (n = 1, 2, 3) are the echoes received by the three antennas corresponding to the ith guard cell in the Range-Doppler-Time cube. After the echoes from all the cells have been examined, those whose powers exceed the corresponding thresholds proceed to the interferometric processing.

It is worth noting that there are two improvements in this work compared to the traditional processing flow of point cloud generation [38]. As shown in Figure 5, one is the use of the sliding-window STFT instead of the Doppler FFT in the slow-time domain, which reduces the time interval between outputs; the other is the use of 3D-CFAR instead of 2D-CFAR to realize better target detection for the 3D (Time-Doppler-Range) data.
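A simplified sketch of a 3D cell-averaging CFAR is given below. It assumes the standard CA-CFAR threshold form and estimates the local noise from the cells between an inner guard box and the outer window; this is a simplification, not the paper's exact guard-cell formulation:

```python
import numpy as np
from scipy.ndimage import uniform_filter

# Simplified 3D cell-averaging CFAR over a time-Doppler-range power cube.
def cfar_3d(power, win=5, guard=1, pfa=1e-3):
    n_total = win ** 3
    n_guard = (2 * guard + 1) ** 3
    n_train = n_total - n_guard
    # Box sums via mean filters: full window minus guard region = training sum.
    total = uniform_filter(power, size=win, mode="nearest") * n_total
    guarded = uniform_filter(power, size=2 * guard + 1, mode="nearest") * n_guard
    noise = (total - guarded) / n_train
    # CA-CFAR scaling factor for the desired false-alarm probability.
    alpha = n_train * (pfa ** (-1.0 / n_train) - 1.0)
    return power > alpha * noise

rng = np.random.default_rng(0)
cube = rng.exponential(1.0, size=(32, 32, 32))   # noise-only power cube
cube[16, 16, 16] += 200.0                        # one strong target cell
det = cfar_3d(cube)
```

The guard box keeps the target's own energy out of its noise estimate, so a strong cell is detected while the false-alarm count over the noise-only cells stays near the set probability.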
As shown in Figure 4, Antenna1 and Antenna2 form the azimuth interferometer, while Antenna1 and Antenna3 form the elevation interferometer, yielding the azimuth and elevation phase differences Δφ_a and Δφ_e. Having obtained these interferometric phases, the azimuth position X and the elevation position Z can be calculated via (5) as

X = R λ Δφ_a / (2π d_a),  Z = R λ Δφ_e / (2π d_e).

Therefore, the spatial position of every mDRS component can be obtained; their attributions can then be identified according to their spatial positions, and thus the MDRTs of human limbs can be extracted.

Extraction of the Micro-Doppler-Range Trajectory
In view of the characteristics of a pedestrian, the MMCs of the arms and legs are adequate for interpreting human motion in most cases. Based on the above analyses, we summarize the method for extracting the MDRTs by interferometric radar in the flowchart shown in Figure 5.
As shown in Figure 6, the elevation threshold th_e is set to discriminate the motions of the arms and legs according to their elevation positions, while the azimuth threshold th_a is set to classify the left and right limbs. These two thresholds are utilized together to extract the MDRTs of the limbs.

Because the torso is the strongest scatterer of the human body, its height is used to define the elevation threshold. The highest elevation positions above the torso should be the shoulders, while the lowest should be the hips. Therefore, we set the following elevation threshold to discriminate the upper body and the lower body, where Z_S and Z_G are the elevation positions of the shoulder and the ground (as shown in Figure 4), respectively.

The azimuth threshold th_a can also be obtained from the echo data. For a walking human, there is always one foot standing on the ground without introducing micro-Doppler, while the upper body keeps moving. Therefore, we take the azimuthal average of the upper body as the azimuth threshold th_a. As shown in Figure 6, th_a is the azimuth center of the torso, which can be estimated as

th_a = (1/N) Σ_{i=1}^{N} X(k_i), such that Z(k_i) > th_e, ∀ k_i ∈ (k_1, k_2, ..., k_N),

where k_1, k_2, ..., k_N are the indexes of the echo data from the upper body, i.e., whose elevation positions are higher than the elevation threshold th_e.

The procedure for setting the thresholds includes the following four steps: (1) Select the strongest scatterer of the echo data at each moment and take its highest elevation position as the shoulder position. (2) Obtain the elevation threshold th_e according to the relative elevation referring to the shoulder by (12). (3) Determine the indexes (i.e., k_1, k_2, ..., k_N) of the echo data from the upper body. (4) Take the average of the corresponding azimuth positions X(k_i) (i = 1, 2, ..., N) as the azimuth threshold th_a as conducted in (13).
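The four threshold-setting steps can be sketched on a hypothetical point cloud of detections (azimuth X, elevation Z, echo power). Since the exact form of Eq. (12) is not reproduced here, the midpoint of the shoulder and ground elevations is used as an assumed stand-in for th_e:

```python
import numpy as np

# Hypothetical point cloud: detections spread over a standing person.
rng = np.random.default_rng(1)
Z_G = -0.9                                   # ground elevation, m
n = 300
X = rng.normal(0.0, 0.15, n)                 # azimuth positions around the body
Z = rng.uniform(Z_G, 0.8, n)                 # elevations from ground to head
power = np.exp(-((Z - 0.2) ** 2) / 0.05)     # torso region strongest (assumed)

# (1) Shoulder position: highest elevation among the strongest scatterers.
strong = power > 0.5 * power.max()
Z_S = Z[strong].max()

# (2) Elevation threshold (assumed midpoint form standing in for Eq. (12)).
th_e = 0.5 * (Z_S + Z_G)

# (3) Indexes of upper-body detections.
upper = Z > th_e

# (4) Azimuth threshold: mean azimuth of the upper body, as in Eq. (13).
th_a = X[upper].mean()
```

Detections are then labeled upper/lower body by comparing Z against th_e, and left/right by comparing X against th_a, which is exactly the categorization summarized in Table 2.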
Finally, the MDRTs referring to different limbs can be categorized and extracted by using the thresholds as shown in Table 2.
We highlight the real-time implementation potential of our algorithm. As shown in Figure 5, the major time-consuming steps are the range compression and the slow-time Doppler processing, which can both be completed via the FFT. As for the CFAR step, it poses no difficulty for current DSP chips [39]. All in all, the proposed algorithm is suitable for real-time implementation. In the following, the experiments carried out to validate the proposed approach are described; the motion components can not only be separated but also identified with the corresponding limbs.

Experimental Setup
The experimental setup includes a radar demo board, the AWR1843 produced by Texas Instruments, and a Kinect sensor developed by Microsoft, as shown in Figure 7. The AWR1843 works at 77 GHz (λ0 = 3.896 mm) with a bandwidth of 4 GHz and has three transmitting antennas and four receiving antennas. Here, only two transmitting antennas and two receiving antennas are configured to form the vertical and horizontal baselines with d_a = d_e = λ0/2, which is equivalent to a one-transmitting, three-receiving (1T3R) radar configuration as shown in Figure 4. The Kinect sensor provides the motion data of human joints, which are used to validate the effectiveness of the proposed method.
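The key derived quantities of this configuration can be checked numerically, using the c = 3 × 10^8 m/s convention implied by the quoted wavelength:

```python
# Numeric check of the AWR1843 configuration used here: carrier wavelength,
# baseline spacing, and the range resolution implied by the 4 GHz sweep.
c = 3e8                  # speed of light (convention used in the paper), m/s
f0 = 77e9                # carrier frequency, Hz
B = 4e9                  # sweep bandwidth, Hz

lam0 = c / f0            # ~3.896 mm carrier wavelength
d = lam0 / 2             # baseline spacing d_a = d_e
range_res = c / (2 * B)  # ~3.75 cm range resolution
```

The 3.75 cm range resolution is what makes the range dimension fine enough to separate individual limbs, and the λ0/2 spacing keeps the interferometric phase unambiguous over the full angular field of view.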


Although the Kinect provides only the distances of the joints, their differentials can be calculated to obtain the velocities. We should mention that the output frame rate of the Kinect sensor is only 30 FPS, and the measurement is vulnerable to environmental variations (such as light intensity and temperature). In addition, the skeleton tracking at the ends of the limbs shows the greatest instability [40], especially at the hand joints [41]. Fluctuations in distance caused by skeleton tracking errors induce even more serious fluctuations in velocity. To mitigate these effects, a low-pass filter is applied to the Kinect data as preprocessing [42]. In the experimental scene, stable light and a suitable temperature are maintained to guarantee the quality of the Kinect data.
Since the radar and the Kinect are very close to each other compared with the distance to the target as shown in Figure 7, they are supposed to be situated at the origin of the coordinates system, i.e., (0, 0, 0).And the ground is situated at z = −0.9 m.We describe three experiments that were conducted, i.e., swinging hands without moving, marking time, and walking.Three volunteers participated in the experiments, whose heights are listed in Table 3.
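The differentiation-plus-filtering step applied to the Kinect data can be sketched as follows. The paper only states that a low-pass filter is used, so the exponential filter and its smoothing factor `alpha` are illustrative assumptions:

```python
import numpy as np

def joint_radial_velocity(ranges, fps=30.0, alpha=0.2):
    """Estimate a joint's radial velocity from Kinect range samples.

    ranges : 1-D array of joint-to-sensor distances (m), sampled at `fps`
             (the Kinect outputs 30 FPS).
    A first-order exponential low-pass filter (smoothing factor `alpha`,
    an assumed value) suppresses skeleton-tracking jitter before
    differentiation, since range noise is amplified by the derivative.
    """
    ranges = np.asarray(ranges, dtype=float)
    # Exponential low-pass filter as preprocessing (cf. [42]).
    smoothed = np.empty_like(ranges)
    smoothed[0] = ranges[0]
    for i in range(1, len(ranges)):
        smoothed[i] = alpha * ranges[i] + (1 - alpha) * smoothed[i - 1]
    # Central finite differences give the velocity at the frame rate.
    return np.gradient(smoothed, 1.0 / fps)
```

For a joint receding at a constant speed, the estimate converges to that speed after the filter transient, which is the behavior needed when comparing Kinect envelopes against radar Doppler.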

Experiment on Swinging Arms
In this experiment, the experimenter stood still on the ground and swung both arms with a cycle of about 1 s. The distance between the experimenter and the radar was about 2.5 m, i.e., the coordinates were (0, 2.5 m, −0.9 m). The experimental results are presented in Figure 8, where the results are grouped in different columns. Figure 8a shows the traditional mDSs of the three volunteers when swinging arms, and Figure 8b shows the mDRSs of the swinging arms with the azimuth position information presented. As can be seen from Figure 8b, the overlaps of the multiple motion components exhibited in Figure 8a have been well eliminated by using the range information provided by the wideband radar. Because only the arms were in motion in this experiment, the azimuth information is enough to discriminate between the right arm and the left arm. The azimuth position information is exhibited in Figure 8b as different colors, obtained from the azimuth interferometric phase according to (10); e.g., green approximately represents the −0.3 m azimuth position, while purple represents the 0.3 m azimuth position. The results were in accordance with the actual situation, with the right arm at the negative azimuth position and the left arm at the positive azimuth position.
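The phase-to-position mapping used for the coloring can be sketched from standard interferometric geometry; the baseline value is an assumption here, since the board's actual antenna spacing is not restated in this section:

```python
import numpy as np

WAVELENGTH = 3.896e-3  # m, 77 GHz carrier of the radar board

def azimuth_position(delta_phi, rng, baseline):
    """Map an azimuth interferometric phase difference to a cross-range
    position, in the spirit of Eq. (10) in the text.

    delta_phi : interferometric phase between the two azimuth channels (rad)
    rng       : target range (m)
    baseline  : horizontal antenna baseline (m), an assumed input here.
    Geometry: delta_phi = 2*pi*baseline*sin(theta)/lambda, and the
    azimuth offset at range `rng` is x = rng*sin(theta).
    """
    sin_theta = WAVELENGTH * delta_phi / (2 * np.pi * baseline)
    return rng * np.clip(sin_theta, -1.0, 1.0)
```

With an assumed baseline of two wavelengths, a phase difference of −0.48π rad at 2.5 m range maps to the −0.3 m azimuth position reported for the right arm.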
As shown in Figure 8b, the mDRSs of the two arms can be discriminated, so their MDRTs can be extracted separately, as shown in Figure 8c,d, respectively; finally, the motions of the left and right arms were cleanly separated for all volunteers. As mentioned above, the Kinect tracks the human joints, while the radar detects the limbs. In fact, the end of each limb has the maximum radial speed for that limb; thus, the joints extracted from the Kinect data, which correspond to the ends of the limbs, are used to match the envelopes of the corresponding limbs. For instance, the hand joints of the Kinect data were extracted to match the echo data of the arms. In this paper, the Kinect data were taken as an approximate ground truth to qualitatively evaluate the accuracy of the limbs extracted by radar, which are denoted by the red lines in Figure 8. As can be seen from Figure 8c,d, the red lines are highly consistent with the envelopes of the extracted positions.

Experiment on Marking Time
In this subsection, the experiments conducted on marking time, with both the arms and the legs in motion, are described, i.e., more micromotions are involved and extracted. In this experiment, the volunteers again stood about 2.5 m away from the radar, i.e., their positions were (0, 2.5 m, −0.9 m). The motion cycle was about 1.5 s. In particular, because the volunteers did not move forward or backward, their feet had zero radial velocity relative to both the radar and the Kinect; hence, the maximum Doppler frequencies of the legs were mainly induced by the knees.
Figure 9a presents the mDSs of the three volunteers when marking time; it clearly shows that the mDSs of the different limbs overlap. Figure 9b presents the mDRSs of marking time with the elevation position information presented, while Figure 9c presents the mDRSs with the azimuth position information presented. As can be seen from Figure 9b, the mDRSs are more complicated than those of the swinging arms, and it is a more challenging task to discriminate and extract the micromotions of the arms and legs.
As shown in Figure 9b, the light blue trajectories cover the largest elevation range, i.e., they belong to the swinging arms. The dark blue and purple trajectories correspond to the left and right legs according to their elevation positions. In Figure 9c, there are mainly two motion types distinguishable according to the azimuth position information: the purple color denotes the motions of the right limbs, including the right hand and the right leg, while the green color denotes the motions of the left limbs, including the left hand and the left leg.
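The position-based labeling described here (and summarized in Table 2) can be sketched as a simple threshold rule; the split values below are illustrative assumptions, not the paper's calibrated thresholds:

```python
def classify_limb(elevation, azimuth, elev_split=-0.3, azim_split=0.0):
    """Label a micromotion component by its interferometric positions (m).

    Arms sit above `elev_split`, legs below; the left side is at positive
    azimuth and the right side at negative azimuth, matching the color
    assignments in Figure 9b,c.  Threshold values are assumed for
    illustration only.
    """
    part = "arm" if elevation > elev_split else "leg"
    side = "left" if azimuth > azim_split else "right"
    return f"{side} {part}"
```

For example, a component at elevation 0.2 m and azimuth 0.3 m would be labeled a left arm, while one at elevation −0.7 m and azimuth −0.2 m would be labeled a right leg.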
The extracted results of the marking time experiments of the three volunteers are presented in Figure 9d-g. As shown in Figure 9d,e, the motions of the right and left arms have been perfectly separated, and the same holds for the motions of the right and left legs, as shown in Figure 9f,g. The hand and knee joints were extracted from the Kinect data and are presented as red lines in Figure 9d-g. The MDRTs of the arms and legs extracted by radar all agree very well with the trajectories of the Kinect.

Experiment on Walking
In the last experiment, the micromotions of a walking human were extracted, which are much more complicated than those of the previous two experiments because the arms and legs induce micro-Doppler frequencies that are significantly larger than before, resulting in much more serious overlapping problems. During the experiment, the volunteers walked away from the radar from about 1.5 m to 4.0 m at a speed of around 1 m/s, i.e., from (0, 1.5 m, −0.9 m) to (0, 4.0 m, −0.9 m). In this case, the speed of the feet was greater than that of the knees.
The experimental results are presented in Figure 10. As can be seen from Figure 10a, serious overlaps are exhibited in the traditional mDS images as before, although the body induced a different shape. The information on the elevation and azimuth positions corresponding to the different limbs is provided along with the mDRSs in Figure 10b,c, respectively. Compared with Figure 9b, the purple color in Figure 10b is much more prominent, and the denoted position is about −0.8 m, i.e., the corresponding velocities refer to the feet. Figure 10d-g present the extracted MDRTs of the different limbs, from which one can see that the right arms and legs and the left arms and legs have all been separated very well for all three volunteers, and all are in good accordance with the results of the Kinect. Last but not least, one may consider scenarios with multiple human targets. Generally speaking, clustering and tracking steps may be required before limb separation, and a more powerful radar with more channels can be applied to cope with this situation. We should emphasize that if multiple human targets are covered by the radar beam at the same time and they can be resolved in range, then the proposed approach can still be applied to extract the limbs of the different humans.
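The range-resolvability argument above can be sketched as a minimal clustering step; the detection-tuple format and the minimum inter-person range gap are assumptions for illustration, not part of the paper's processing chain:

```python
def split_targets_by_range(detections, gap=0.6):
    """Group detections into per-person clusters by range, assuming the
    people are resolvable in range (as argued in the text).

    detections : list of (time, doppler, range_m) tuples, e.g. from a
                 CFAR detector; the tuple layout is an assumed format.
    gap        : assumed minimum range separation (m) between two people.
    Limb extraction would then run on each cluster independently.
    """
    dets = sorted(detections, key=lambda d: d[2])
    clusters, current = [], [dets[0]]
    for d in dets[1:]:
        if d[2] - current[-1][2] <= gap:
            current.append(d)   # same person: within the range gap
        else:
            clusters.append(current)  # gap exceeded: start a new person
            current = [d]
    clusters.append(current)
    return clusters
```

Two people standing near 1.5 m and 3.0 m, for instance, would yield two clusters, each of which can then be fed to the limb-separation pipeline on its own.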

Conclusions
In this paper, a novel method is proposed for extracting the micromotions of human limbs based on the MDRTs retrieved by a wideband interferometric radar, which is configured in 1T3R mode, forming orthogonal interferometric baselines along the vertical and horizontal directions. In this approach, the range information is first incorporated into the traditional mDS to eliminate the overlaps of different limbs. Then, the azimuth and elevation positions of the limbs are determined by utilizing the interferometric phases. Three experiments on swinging arms, marking time, and walking by three different volunteers were carried out using a commercial off-the-shelf radar module. At the same time, a Kinect sensor was used to simultaneously record the micromotions to verify the experimental results. The proposed method integrates the time, Doppler, and range information to realize the time-Doppler-range 3D motion data extraction of human limbs using the obtained interferometric position information.
All the experiments demonstrated that the overlapping problems were solved very well. A wide range of applications of the proposed approach can be envisaged, e.g., surveillance of human activities, health care monitoring, human identification, and human-computer interaction. Future work will focus on human motion recognition and classification based on the extracted micromotions, on top of which adaptive joint extraction can be realized. We should point out that extracting the left leg and the right leg remains challenging if the two legs move in a line or are too close to each other in azimuth. It thus remains an open problem requiring further experiments with more powerful radar configurations; for instance, multistatic radar is expected to yield a more reliable and more robust extraction of human limbs and joints.

Figure 1.
Figure 1. mDS of a pedestrian with no arm swinging; the echo strength is represented by color, e.g., the red circled part denotes stronger echo.

Figure 3.
Figure 3. Radar simulation and real Kinect data of a pedestrian. (a) Simulated mDS. (b) Simulated mRS. (c) Two walking states. (d) Simulated mDRS. (e) Real data collected by Kinect.
1. mDS overlap. In Figure 3a, the circled part in red denotes the mDS overlap, which is labeled T1. It happens at the instants when both feet are on the ground. The red box in Figure 3c shows the corresponding diagram of the T1 state. As the distances of the two feet from the radar are different in the T1 state, there is no overlap for the mRS in Figure 3b, shown by the red dotted circle.
2. mRS overlap. The green solid circle marked T2 in Figure 3b denotes the mRS overlap. In this situation, both feet are at the same distance relative to the radar. The green box in Figure 3c shows the diagram of the T2 state. Although the two feet have the same range, their velocities are different, i.e., the standing foot is at zero velocity, while the other foot is at the maximum radial velocity within a gait cycle. As shown in Figure 3a, the mDSs of the two feet do not overlap with each other.

Figure 5.
Figure 5. Flowchart for the extraction of MDRTs using interferometric radar.

Figure 8.
Figure 8. Experiment results of the swinging arms of three volunteers. (a) mDS of swinging arms. (b) mDRS of swinging arms with azimuth positions presented. (c,d) Extracted MDRTs of the right and left arms compared with the measured trajectories by Kinect, which are denoted by red lines.

Figure 9.
Figure 9. Experiment results on marking time for three volunteers. (a) mDS. (b) mDRS with elevation positions presented. (c) mDRS with azimuthal positions presented. (d-g) Extracted MDRTs of marking time compared with the measured trajectories by Kinect, which are denoted by red lines: (d) right arm, (e) left arm, (f) right leg, (g) left leg.

Sensors 2023, 23, 7544
Figure 10.
Figure 10. Experiment results on walking for three volunteers. (a) mDS of walking. (b) mDRS with elevation positions presented. (c) mDRS with azimuthal positions presented. (d-g) Extracted MDRTs compared with the measured trajectories by Kinect, which are denoted by red lines: (d) right arm, (e) left arm, (f) right leg, (g) left leg.

Table 1.
Comparison of related works.

Table 2.
Classification of limbs by thresholds.

Table 3.
Heights of the three volunteers.