Applied Sciences
  • Article
  • Open Access

27 September 2024

End-to-End Electrocardiogram Signal Transformation from Continuous-Wave Radar Signal Using Deep Learning Model with Maximum-Overlap Discrete Wavelet Transform and Adaptive Neuro-Fuzzy Network Layers

Interdisciplinary Program in IT-Bio Convergence System, Department of Electronics Engineering, Chosun University, Gwangju 61452, Republic of Korea
Author to whom correspondence should be addressed.
This article belongs to the Special Issue Advances in Electrocardiogram (ECG) Signal Processing and Its Applications

Abstract

This paper is concerned with end-to-end electrocardiogram (ECG) signal transformation from a continuous-wave (CW) radar signal using a specialized deep learning model. For this purpose, the presented deep learning model is designed using convolutional neural networks (CNNs) and bidirectional long short-term memory (Bi-LSTM) with a maximum-overlap discrete wavelet transform (MODWT) layer and an adaptive neuro-fuzzy network (ANFN) layer. The proposed method extends existing deep networks and machine learning to reconstruct signals from CW radars, so that ECG biological information can be acquired in a non-contact manner. The fully connected (FC) layer of the CNN is replaced by an ANFN layer, which is suitable for mitigating the black-box problem and for handling complex nonlinear data. The MODWT layer performs maximum-overlap discrete wavelet frequency decomposition to extract the ECG-related frequency components from the radar signal. To evaluate the performance of the proposed model, we use a dataset of clinically recorded vital signs with synchronized reference sensor signals measured simultaneously. The performance is evaluated by the mean squared error (MSE) between the measured and reconstructed ECG signals. The experimental results reveal that the proposed model performs well in comparison to existing deep learning models. From this comparison, we confirm that the ANFN layer preserves the nonlinearity of the information produced by the model when it replaces the fully connected layer used in conventional deep learning models.

1. Introduction

The systematic management of an individual’s mental and physical health, the identification of emotional states that are not directly expressed, and the provision of corresponding conveniences and services have become goals for many people and businesses. Artificial intelligence has recently attracted attention as an essential technology for realizing these goals. Information obtained from human biological signals, facial expressions, voices, and behaviors can be used to infer unexpressed health conditions or opinions, enabling an appropriate response even when people are physically challenged or unable to express their own intentions [1]. In particular, the ECG signal generated by a person’s heart contains high-quality information that can be used to identify their health and emotional status [2]. In general, acquiring an ECG signal requires a professional procedure in which pads and devices are attached to a motionless body. Such contact-based pads and devices are therefore unsuitable for analyzing ECG signals in everyday environments [3]. Deep learning classifies and predicts using high-dimensional nonlinear features extracted from large-scale data, making it well suited to analyzing the signals of high-frequency radars that penetrate the human body and are reflected back. Reconstructing an ECG in a non-contact manner from radar signals is limited when only changes of the signal over time are examined. When a high-frequency radar signal passes through or is reflected by the human body, the received signal includes information about the physiological activity occurring inside. In this case, signal-processing technologies such as noise filtering and heart rate extraction are applied to the frequency band that contains the desired information [4,5].
In order to check the frequency which changes over time, a method of analyzing signals in a high-dimensional manner by converting a one-dimensional radar signal into a two-dimensional time–frequency image is also being studied. A signal transformed into a two-dimensional format can be treated as an image, enabling the use of image preprocessing techniques and learning through CNN-based models, which are capable of handling a vast amount of embedded information.
Existing models extract features from signals and pass them to the FC layer, where linear weights are applied to produce the output values. In contrast, fuzzy logic applies various membership functions (MFs) to the input data, expresses the degree of membership for each one, and generates rules based on these memberships, thereby creating linguistic rules that capture the relationships between feature channels. Although fuzzy systems are somewhat weaker at extracting features from large-scale data, they can generate linguistic rules, enabling transparent learning and excellent handling of the uncertain and ambiguous data acquired in the real world. In the case of the conventional fully connected (FC) layer, it is difficult to represent channel-specific relationships with a linear combination of the high-dimensional nonlinear representations obtained from large-scale input data. As a result, the nonlinear features passing through the FC layer risk being weakened or lost, and the expressive power may decrease due to the flat combination approach. The ANFN layer, by contrast, has characteristics that address these problems: it can handle complex nonlinear data, it is advantageous for identifying relationships through clustering between input channels, and, even when the delivered feature map is noisy, it provides robustness and stability in signal reconstruction.
Thus, a deep–fuzzy method is being studied, combining two different methods to simultaneously use nonlinear feature extraction capabilities for large-scale data and processing capabilities for uncertain data [6].
In this study, we propose an improvement compared to the traditional fully connected (FC) layer—a linear classification layer commonly used in deep network models—using a fuzzy classification layer. This layer emphasizes the nonlinear features extracted by the model and enhances robustness against noise by incorporating fuzzy logic. Consequently, it becomes more effective in handling uncertain and ambiguous data and is better suited for processing nonlinear features, thereby increasing the model’s robustness to heterogeneous characteristics.
We describe, in Section 2, the study of converting radar signals into ECGs and the study of fusing deep networks and fuzzy layers. Section 3 describes the MODWT theory and the algorithms needed to design the output layer using fuzzy logic. Section 4 compares general deep network performance using the MODWT layer and that of the proposed model using the fuzzy layer, and, finally, Section 5 describes the conclusions.

3. CNN and Bi-LSTM Model with MODWT and ANFN Layers

We performed the following experiments to determine whether replacing the fully connected (FC) layer with an adaptive neuro-fuzzy network (ANFN) layer, which performs nonlinear clustering, yields sufficient performance in the deep network that generates the ECG, as shown in Figure 1. This section addresses the MODWT algorithm used as a deep network layer and the algorithmic methodology for computing the membership function (MF) and applying rule weights to the values delivered from the intermediate layers to the ANFN layer.
Figure 1. Overview of the reconstruction of a CW radar signal into an ECG signal.
In the existing FC layer, the received feature values were weighted for each channel to obtain a linear sum over the desired number of output channels. While this method of applying weights to nonlinear channel information to obtain the desired output was valid, we sought higher prediction accuracy by leveraging a fuzzy layer that handles nonlinearities more appropriately. Figure 2 shows the proposed method for converting the CW signals obtained through radar into an ECG signal. Because obtaining ECG-related information from radar signals in the time domain alone was insufficient, we converted them into two-dimensional image data in the time–frequency domain to extract the ECG information contained in the low-frequency band. Afterwards, the combined CNN and Bi-LSTM model was employed. To perform inference on the nonlinear feature channel data using the fuzzy concept rather than conventional linear methods, we obtained the reconstructed ECG signals by replacing the fully connected layer with an ANFN layer. Here, the fuzzy c-means (FCM) clustering method in the ANFN layer was used to efficiently compress the large number of received channel data.
Figure 2. Overview of the ECG signal’s reconstruction process using MODWT, deep learning, and ANFN.

3.1. MODWT (Maximum-Overlap Discrete Wavelet Transform) Layer

The MODWT is an extension of the discrete wavelet transform (DWT) that can represent time-domain data in multi-resolution forms such as spectrograms and scalograms. Unlike the DWT, the MODWT can be applied without limitation on the signal length. By overlapping the signal sampling, a high-resolution time–frequency representation can be obtained. In addition, it is shift-invariant with respect to time shifts; thus, it can be used for noise removal or speech feature extraction. The MODWT is computed through the scaling filter and the wavelet filter, as shown in Equations (1)–(4) below, which depend on the scale level and the index of the filter coefficient.
$$H_{j,k} = H\!\left(2^{j-1}k \bmod N\right)\prod_{m=0}^{j-2} G\!\left(2^{m}k \bmod N\right) \quad (1)$$
$$W_{j,t} = \frac{1}{N}\sum_{k=0}^{N-1} H_{j,k}\, e^{i2\pi tk/N} \quad (2)$$
$$G_{j,k} = \prod_{m=0}^{j-1} G\!\left(2^{m}k \bmod N\right) \quad (3)$$
$$V_{j,t} = \frac{1}{N}\sum_{k=0}^{N-1} G_{j,k}\, e^{i2\pi tk/N} \quad (4)$$
The signal converted by the above method can be returned to the original signal through the inverse transformation. The standard MODWT algorithm implements direct cyclic convolution in the time domain, whereas the MODWT implementation in our experiment performed the circular convolution in the Fourier domain: the wavelet and scaling coefficients at level j were calculated by taking the inverse discrete Fourier transform (DFT) of the product of the DFT of the signal and the DFT of the level-j wavelet or scaling filter. The MODWT has fixed frequency bands, so it may not be sensitive to nonlinear changes in the signal. However, the purpose of the ECG reconstruction in this study was to extract cardiac information in the low-frequency band. For this reason, we used the MODWT, which enables multiple-resolution analysis and allows the easy separation of the desired frequency band.
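The Fourier-domain computation described above can be sketched in NumPy. This is an illustrative implementation, not the authors' code: the level-j filters are built from the DFTs of the rescaled base filters as in Equations (1) and (3), and each coefficient sequence is obtained by an inverse FFT of the filter–signal product. The Haar filter choice and the helper name `modwt_fft` are our own assumptions.

```python
import numpy as np

def modwt_fft(x, h, g, levels):
    """Sketch of a MODWT computed by circular convolution in the Fourier domain.

    x      : 1-D signal of length N.
    h, g   : base DWT wavelet/scaling filters; the function rescales them
             by 1/sqrt(2) as the MODWT requires.
    levels : number of decomposition levels J.
    Returns (W, V): wavelet coefficients per level, and the level-J scaling coefficients.
    """
    N = len(x)
    X = np.fft.fft(x)
    # Length-N DFTs of the rescaled base filters (Equations (1) and (3) build on these)
    H = np.fft.fft(h / np.sqrt(2), N)
    G = np.fft.fft(g / np.sqrt(2), N)
    k = np.arange(N)
    W = []
    Vk = X.copy()                        # running spectrum of the scaling coefficients
    for j in range(1, levels + 1):
        Hj = H[(2 ** (j - 1) * k) % N]   # upsampled wavelet filter DFT at level j
        Gj = G[(2 ** (j - 1) * k) % N]   # upsampled scaling filter DFT at level j
        W.append(np.fft.ifft(Hj * Vk).real)  # inverse DFT of filter x signal product
        Vk = Gj * Vk                     # cascade: apply the scaling filter
    V = np.fft.ifft(Vk).real
    return np.array(W), V

# Haar filters as an example basis
h = np.array([1.0, -1.0]) / np.sqrt(2)
g = np.array([1.0, 1.0]) / np.sqrt(2)
x = np.sin(2 * np.pi * 5 * np.arange(256) / 256)   # low-frequency test tone
W, V = modwt_fft(x, h, g, levels=3)
```

Because the MODWT is energy-preserving, the coefficient energies across all levels sum to the signal energy, and for this low-frequency tone most of the energy ends up in the scaling coefficients V, which is exactly the property the paper exploits to isolate the low-frequency ECG band.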

3.2. CNN and Bi-LSTM Model

The CNN is a technique for extracting features from image or time-series input data through convolutional filters. As shown in Equation (5), where X is the input data, W is the filter matrix, and b is the bias value, a window filter of the set size slides over the input to perform the convolution, and a feature map is output with as many channels as there are filters. The nonlinearity of the output feature map is then emphasized through normalization and activation functions.
$$\mathrm{Feature\ map} = (X * W)_{i,j} = \sum_{m=1}^{M}\sum_{n=1}^{N} X_{i+m-1,\,j+n-1}\cdot W_{m,n} + b \quad (5)$$
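The sliding-window operation of Equation (5) can be made concrete with a minimal NumPy sketch (the filter is applied without flipping, i.e., as a cross-correlation, which matches the index pattern in the equation; the function name `conv2d_valid` is ours):

```python
import numpy as np

def conv2d_valid(X, W, b=0.0):
    """'Valid' 2-D convolution as in Equation (5): the M x N window W slides
    over X, and each output entry is the windowed inner product plus bias b."""
    M, N = W.shape
    H, Wd = X.shape
    out = np.zeros((H - M + 1, Wd - N + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(X[i:i + M, j:j + N] * W) + b
    return out

X = np.arange(16, dtype=float).reshape(4, 4)
K = np.ones((2, 2)) / 4            # 2x2 averaging filter
F = conv2d_valid(X, K)             # 3x3 feature map; F[0,0] is the mean of the first window
```

A deep learning framework performs the same computation with many filters in parallel, producing one output channel per filter.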
We designed a hybrid network combining a convolutional autoencoder and a Bi-LSTM to reconstruct the ECG signal. The first one-dimensional (1D) convolutional layer filtered the signal; the convolutional autoencoder then removed most of the high-frequency noise and captured the high-level patterns of the entire signal. A transposed 1D convolution layer was used to upsample the 1D feature maps in the final stage of the CNN. The convolution operation downsampled the input by applying a sliding convolution filter. By flattening the input and output, the convolution could be computed as a convolution matrix and a bias vector derived from the layer weights and biases. Similarly, the transposed convolution operation upsampled the input by applying a sliding convolution filter.
The Bi-LSTM is a model that learns the bidirectional dependencies of time-series data; extended features can be obtained because the LSTM is applied in both directions. As shown in Equations (6)–(11), where W denotes the input weights, R the recurrent weights, and b the biases, the model consists of a forget gate, which determines whether to discard the cell information from the previous time step, along with an input gate and an output gate. The role of the Bi-LSTM layer in this paper is to further refine the signal details.
$$\mathrm{Input\ gate:}\quad I_t = \sigma_g\!\left(W_i x_t + R_i H_{t-1} + b_i\right) \quad (6)$$
$$\mathrm{Forget\ gate:}\quad F_t = \sigma_g\!\left(W_f x_t + R_f H_{t-1} + b_f\right) \quad (7)$$
$$\mathrm{Cell\ candidate:}\quad G_t = \sigma_c\!\left(W_g x_t + R_g H_{t-1} + b_g\right) \quad (8)$$
$$\mathrm{Output\ gate:}\quad O_t = \sigma_g\!\left(W_o x_t + R_o H_{t-1} + b_o\right) \quad (9)$$
$$C_t = F_t \cdot C_{t-1} + I_t \cdot G_t \quad (10)$$
$$H_t = O_t \cdot \sigma_c\!\left(C_t\right) \quad (11)$$
The Bi-LSTM requires the number of hidden units to be specified; eight units are used in this paper. The number of hidden units corresponds to the amount of information that the layer maintains between time steps. The hidden state can contain information from all previous time steps, regardless of the sequence length. If the number of hidden units is too large, the layer may overfit the training data. The hidden state does not limit the number of time steps that the layer processes in one iteration.
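A single LSTM time step following Equations (6)–(11) can be sketched in NumPy as follows. This is an illustrative forward pass only (no training, and the stacked-parameter layout and function names are our own convention, with σ_g the sigmoid and σ_c the tanh):

```python
import numpy as np

def sigmoid(v):
    return 1.0 / (1.0 + np.exp(-v))

def lstm_step(x_t, H_prev, C_prev, W, R, b):
    """One LSTM time step per Equations (6)-(11).

    W : (4n, d) input weights, R : (4n, n) recurrent weights, b : (4n,) biases,
    stacked in i/f/g/o order for a hidden size n and input size d.
    """
    n = H_prev.size
    z = W @ x_t + R @ H_prev + b          # stacked pre-activations, shape (4n,)
    i = sigmoid(z[0:n])                   # input gate, Eq. (6)
    f = sigmoid(z[n:2 * n])               # forget gate, Eq. (7)
    g = np.tanh(z[2 * n:3 * n])           # cell candidate, Eq. (8)
    o = sigmoid(z[3 * n:4 * n])           # output gate, Eq. (9)
    C_t = f * C_prev + i * g              # cell state update, Eq. (10)
    H_t = o * np.tanh(C_t)                # hidden state, Eq. (11)
    return H_t, C_t

rng = np.random.default_rng(0)
n_in, n_hid = 2, 3
W = rng.standard_normal((4 * n_hid, n_in))
R = rng.standard_normal((4 * n_hid, n_hid))
b = np.zeros(4 * n_hid)
H, C = lstm_step(np.ones(n_in), np.zeros(n_hid), np.zeros(n_hid), W, R, b)
```

A bidirectional layer simply runs one such recurrence forward and another backward over the sequence and concatenates the two hidden states at each time step.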

3.3. ANFN (Adaptive Neuro-Fuzzy Network) Layer

Deep learning extracts nonlinear features from large-scale input data and transforms them into high-dimensional feature channels. However, the transferred data are cumbersome to handle with conventional fuzzy inference, because their dimensionality inflates the number of rules and the amount of computation. The fuzzy c-means (FCM) clustering method can be used to estimate appropriate clusters from the transferred data and compress them to the desired number of output channels. FCM is a data clustering technique that represents the degree to which a data point belongs to each cluster as a value between 0 and 1, and it generates a rule value by combining the memberships of the configured clusters. As shown in Figure 3, clusters are generated for each input data channel using FCM clustering. Each data point expresses its per-channel membership in each cluster as a real number between 0 and 1, which indicates to which cluster the data point belongs more strongly. The membership values are then used to generate and learn the rules through rule consequents and rule weights, and the final output is calculated. The output value is compared to the actual ECG to compute the mean square error (MSE) loss, which updates the training parameters of the CNN and Bi-LSTM with the ANFN layer. The pseudo-code of the ANFN is described in Algorithm 1.
Algorithm 1. Pseudo-code of the ANFN
1:  Initialization: centers, sigma, ruleConsequents, ruleWeights
2:  X = feature data propagated by the Bi-LSTM
3:  NormalizedData = (X − min(X)) / (max(X) − min(X))
4:  for C = 1:numClusters
5:      sig = sigma(C,:), cnt = centers(C,:)
6:      Distance = NormalizedData − cnt, Membership = 0
7:      SquaredDistance = sum(Distance²), SquaredSig = sig²
8:      for K = 1:input_channel
9:          Membership = Membership + exp(−SquaredDistance / (2 × SquaredSig(K)))
10:     end
11:     MembershipValues(C) = Membership
12: end
13: ruleOutput = MembershipValues × ruleConsequents
14: weightedSum = sum(ruleOutput × ruleWeights)
15: sumWeights = sum(ruleWeights)
16: finalOutput = weightedSum / sumWeights
17: loss = mse(finalOutput, trueSignal)
18: Update the weights and biases in the CNN and Bi-LSTM, and centers, sigma, ruleConsequents, ruleWeights in the ANFN layer
Figure 3. Process procedure in the design of the ANFN.
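The ANFN forward pass of Algorithm 1 can be mirrored in a short NumPy sketch. The parameter shapes, the per-channel handling of sigma, and the function name `anfn_forward` are our assumptions about details the pseudo-code leaves open; only the inference path is shown, not the parameter updates of line 18:

```python
import numpy as np

def anfn_forward(X, centers, sigma, rule_consequents, rule_weights):
    """Forward pass of the ANFN output layer, mirroring Algorithm 1.

    X                : feature vector from the Bi-LSTM, shape (channels,)
    centers, sigma   : Gaussian MF parameters, shape (num_clusters, channels)
    rule_consequents, rule_weights : per-rule parameters, shape (num_clusters,)
    """
    # Min-max normalization of the incoming features (line 3)
    Xn = (X - X.min()) / (X.max() - X.min())
    num_clusters = centers.shape[0]
    memberships = np.zeros(num_clusters)
    for c in range(num_clusters):                  # outer loop, lines 4-12
        d2 = np.sum((Xn - centers[c]) ** 2)        # squared distance to cluster c
        # accumulate per-channel Gaussian memberships (lines 8-10)
        memberships[c] = np.sum(np.exp(-d2 / (2.0 * sigma[c] ** 2)))
    rule_output = memberships * rule_consequents   # rule firing strengths (line 13)
    # weighted rule average (lines 14-16)
    return np.sum(rule_output * rule_weights) / np.sum(rule_weights)

X = np.array([0.0, 1.0])
centers = np.array([[0.0, 1.0], [1.0, 0.0]])
sigma = np.ones((2, 2))
out = anfn_forward(X, centers, sigma, np.array([1.0, 0.0]), np.array([1.0, 1.0]))
```

In training, the loss of line 17 would be backpropagated through this computation to update both the ANFN parameters and the preceding CNN/Bi-LSTM weights.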

4. Experimental Results

In this section, we evaluate the performance of the end-to-end model designed to convert a CW (continuous-wave) radar signal recorded in a non-contact manner into an ECG through frequency decomposition and feature extraction, using the public dataset. The performance of the proposed model is compared with that of the conventional CNN with an FC (fully connected) layer through the mean square error (MSE). The compared deep learning networks generate complex nonlinear features from the input data, and their FC layers derive the final results by applying weights to these features. However, this approach does not fully reflect the interactions between the feature channels, potentially overlooking important relationships, and limits the nonlinearity. In contrast, the proposed method preserves the relationships between the input channels and improves the nonlinear data-processing power, enabling a more accurate reconstruction.
The synchronized radar–ECG dataset consists of data collected over 24 h with a CW radar and a reference device, the Task Force Monitor 3040i from CNSystems Medizintechnik GmbH (Graz, Austria) [20]. The radar operated at 24 GHz in the ISM band and was based on six-port technology, expanded into individual components to form a portable radar system. For efficient learning, a portion of the acquired data (data from 6 of the 30 healthy subjects) was used. All subjects completed a questionnaire on epidemiological data such as age, sex, weight, and medical history. In addition, the condition of each subject was briefly checked by examining their blood pressure, heart rate, and heart sounds. If all criteria were satisfied, the subject was included in the study and their measurements were made available. Figure 4 shows the CW radar signal and the corresponding ECG signal; it is almost impossible to identify any correlation between the CW radar signal and the corresponding reference ECG measurement by visual inspection.
Figure 4. Plot of demodulated radar signal and synchronized ECG: (a) CW radar samples and (b) synchronized ECG samples.
As training options, we used an Adam optimizer and shuffled the data at every epoch for 600 epochs. The first convolution1dLayer was replaced by an MODWT layer configured with the same filter size and number of output channels so as to maintain the same number of learnable parameters. Based on our previous observation, only components in a certain frequency range were kept. We replaced the conventional FC layer with an ANFN layer based on FCM clustering to effectively process the nonlinear characteristics and provide robust handling of ambiguous results. Here, FCM clustering was selected for processing the high-dimensional representation. The rules were generated from the membership values of the input data points, and the weights were then learned so that each rule could make a valid decision through the ANFN. Figure 5 shows the membership function of each input channel. We obtained a higher accuracy than conventional linear processing by learning the center, sigma, and shape of each MF through the ANFN layer.
Figure 5. Membership function for each input channel.
Figure 6 shows some samples of reconstructed ECG signals and measured ECG signals. As shown in Figure 6, it can be confirmed that the proposed method predicted the pattern of the heart rate more accurately. Figure 7 compares the ECG signal reconstructed by the proposed model from CW radar signals recorded in a non-contact manner and the measured ECG signals. As shown in Figure 7, the experimental results reveal that the proposed deep learning model showed a good reconstruction performance with a small loss.
Figure 6. Comparison between the signal predicted through MCBF-net and the true signal: (a) reconstructed ECG signal by MCBF-net and (b) actual ECG signal.
Figure 7. Prediction performance of the actual ECG signal and the reconstructed ECG signal. (Blue signal: the actual ECG; red signal: the reconstructed ECG; and green signal: the difference between the actual ECG and the reconstructed ECG.)
To evaluate the performance of the proposed method, we used three measures: the MSE, the SNR, and the PSNR. The MSE is the average of the squared differences between the original signal and the reconstructed signal; a lower MSE indicates better reconstruction performance. The SNR represents the ratio between the strength of the signal and that of the noise; a high SNR indicates a good-quality reconstructed signal, while a low SNR indicates a poor one. Finally, the PSNR is the peak signal-to-noise ratio between the original signal and the reconstructed signal. This ratio is often used as a quality measure between two signals: the higher the PSNR, the better the quality of the reconstructed signal.
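The three measures can be computed in a few lines of NumPy. The paper does not state its exact SNR/PSNR definitions, so this sketch assumes the common conventions (SNR relative to the reference signal's power, PSNR relative to the reference's peak amplitude, both in dB):

```python
import numpy as np

def reconstruction_metrics(ref, rec):
    """MSE, SNR (dB), and PSNR (dB) between a reference and a reconstructed signal."""
    err = ref - rec
    mse = np.mean(err ** 2)                                     # mean squared error
    snr = 10 * np.log10(np.sum(ref ** 2) / np.sum(err ** 2))    # signal power / error power
    psnr = 10 * np.log10(np.max(np.abs(ref)) ** 2 / mse)        # peak power / MSE
    return mse, snr, psnr

ref = np.full(4, 2.0)          # toy reference signal
rec = ref - 0.2                # reconstruction with a constant 0.2 offset
mse, snr, psnr = reconstruction_metrics(ref, rec)
```

Lower MSE and higher SNR/PSNR correspond to better reconstructions, which is the ordering used in Table 1.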
Comparing the loss values through the MSE, the conventional CNN with only the FC layer showed an average loss of 0.0138, while the proposed model with the ANFN layer recorded a lower loss of 0.010. Table 1 lists the performance of the two previous models and the proposed model. As listed in Table 1, the SNR was 7.514 and the PSNR was 18.909 for the signal reconstructed using the existing deep network with only the FC layer. Meanwhile, the deep network with the ANFN layer showed a better performance, with an SNR of 9.023 and a PSNR of 20.418.
Table 1. Performance comparison between the previous two models and the proposed model.
Figure 8 and Table 1 show the MSE histograms and the performance table of the predicted ECG signals obtained using each model, respectively. As shown in Figure 8 and Table 1, the deep learning model with an FC layer exhibited a higher error distribution than the other methods. The model combining the MODWT and deep learning with an FC layer showed an intermediate error distribution among the three methods. Finally, the proposed model with the MODWT and ANFN layers demonstrated the lowest error distribution, indicating superior reconstruction performance with less deviation from the original signal than the two previous methods.
Figure 8. Performance comparison by MSE between measured and reconstructed ECG signals.
In conclusion, to reconstruct an ECG signal from a CW radar signal, it is essential to examine changes in the frequency domain alongside the time-domain analysis. Accordingly, the reconstruction performance was lowest when the MODWT was not used. The existing FC method based on the MODWT was sufficiently effective; however, when the FC layer was replaced with an ANFN layer, we achieved reconstruction stability by preserving the nonlinear characteristics produced by the deep network. Therefore, the proposed method demonstrated that nonlinear processing through an ANFN layer is more effective than through an FC layer.

5. Conclusions

We designed a specialized deep learning model for end-to-end electrocardiogram (ECG) signal transformation from a continuous-wave (CW) radar signal. The proposed model was composed of convolutional neural networks (CNNs) and bidirectional long short-term memory (Bi-LSTM) with a maximum-overlap discrete wavelet transform (MODWT) layer and an adaptive neuro-fuzzy network (ANFN) layer. From the experimental results, we found that the FC layer linearly sums the input nonlinear data by applying channel-specific weights, but this process does not fully represent the complex nonlinear patterns extracted by the model and neglects channel-specific interactions. The ANFN layer, in contrast, has a high processing power for complex patterns and nonlinear data, and it deals with these problems more effectively because it generates linguistic rules that reflect the interactions between the feature channels. The experimental results showed that, when the deep learning model was combined with the MODWT and ANFN layers, the reconstruction effectively handled the nonlinear representation of the model. Furthermore, the overall ECG signal reconstruction was stable, showing superior performance in comparison to the other methods. However, the model required more processing time than the existing models due to the ANFN layer. In future research, we shall study ANFN transformations to make the overall model explainable when designing deep learning models.

Author Contributions

Conceptualization, T.-W.K. and K.-C.K.; methodology, T.-W.K. and K.-C.K.; software, T.-W.K. and K.-C.K.; validation, T.-W.K. and K.-C.K.; formal analysis, T.-W.K. and K.-C.K.; investigation, T.-W.K. and K.-C.K.; resources, K.-C.K.; data curation, K.-C.K.; writing—original draft preparation, T.-W.K.; writing—review and editing, K.-C.K.; visualization, T.-W.K. and K.-C.K.; supervision, K.-C.K.; project administration, K.-C.K.; and funding acquisition, K.-C.K. All authors have read and agreed to the published version of the manuscript.

Funding

This study was supported by a research fund from Chosun University (2024).

Institutional Review Board Statement

Not applicable.

Data Availability Statement

The data are available in a publicly accessible repository [20].

Conflicts of Interest

The authors declare no conflicts of interest.

References

  1. Casado, C.; Cañellas, M.L.; López, M.B. Depression Recognition Using Remote Photoplethysmography from Facial Videos. IEEE Trans. Affect. Comput. 2023, 14, 3305–3316. [Google Scholar] [CrossRef]
  2. Melzi, P.; Tolosana, R.; Vera-Rodriguez, R. ECG Biometric Recognition: Review, System Proposal, and Benchmark Evaluation. IEEE Access 2023, 11, 15555–15566. [Google Scholar] [CrossRef]
  3. Alam, A.; Ansari, A.Q.; Urooj, S. Design of Contactless Capacitive Electrocardiogram (ECG) Belt System. In Proceedings of the 2022 IEEE Delhi Section Conference (DELCON), New Delhi, India, 11–13 February 2022. [Google Scholar]
  4. Hossain, S.; Uddin, S.D.; Islam, S.M.M. Heart Rate Variability Assessment Using Single Channel CW Doppler Radar. In Proceedings of the 2023 IEEE Microwaves, Antennas, and Propagation Conference (MAPCON), Ahmedabad, India, 11–14 December 2023. [Google Scholar]
  5. Li, J.F.; Yang, C.L. High-Accuracy Cardiac Activity Extraction Using RLMD-Based Frequency Envelogram in FMCW Radar Systems. In Proceedings of the 2023 IEEE/MTT-S International Microwave Symposium—IMS 2023, San Diego, CA, USA, 11–16 June 2023. [Google Scholar]
  6. Pashikanti, R.S.; Patil, C.Y.; Shinde, A.A. Cardiac Arrhythmia Classification using Deep Convolutional Neural Network and Fuzzy Inference System. In Proceedings of the 2022 International Conference on Artificial Intelligence of Things (ICAIoT), Istanbul, Turkey, 29–30 December 2022. [Google Scholar]
  7. Sharma, A.; Gowda, D.; Anandaram, H. Extraction of Fetal ECG Using ANFIS and the Undecimated-Wavelet Transform. In Proceedings of the 2022 IEEE 3rd Global Conference for Advancement in Technology (GCAT), Bangalore, India, 7–9 October 2022. [Google Scholar]
  8. Cao, B.; Zhao, M.; Liu, B.; Ping, Q.; He, M. Research on non-contact electrocardiogram monitoring based on millimeter-wave radar and residual Unet. In Proceedings of the IET International Radar Conference (IRC 2023), Chongqing, China, 3–5 December 2023. [Google Scholar]
  9. Wu, Y.; Ni, H.; Mao, C.; Han, J. Contactless Reconstruction of ECG and Respiration Signals with mmWave Radar Based on RSSRnet. IEEE Sens. J. 2023, 24, 6358–6368. [Google Scholar] [CrossRef]
  10. Chen, J.; Zhang, D.; Wu, Z.; Zhou, F.; Sun, Q.; Chen, Y. Contactless Electrocardiogram Monitoring with Millimeter Wave Radar. IEEE Trans. Mob. Comput. 2022, 23, 270–285. [Google Scholar] [CrossRef]
  11. Abdelmadjid, M.A.; Boukadoum, M. Neural Network-Based Signal Translation with Application to the ECG. In Proceedings of the 2022 20th IEEE Interregional NEWCAS Conference (NEWCAS), Quebec City, QC, Canada, 19–22 June 2022. [Google Scholar]
  12. Toda, D.; Anzai, R.; Ichige, K.; Saito, R.; Ueki, D. ECG Signal Reconstruction Using FMCW Radar and Convolutional Neural Network. In Proceedings of the 2021 20th International Symposium on Communications and Information Technologies (ISCIT), Tottori, Japan, 19–22 October 2021. [Google Scholar]
  13. Li, H.; Liu, Y.; Zhou, M.; Cao, Z.; Zhai, X.; Zhang, Y. Non-Contact Heart Rate Detection Technology Based on Deep Learning. In Proceedings of the 2023 International Seminar on Computer Science and Engineering Technology (SCSET), New York, NY, USA, 29–30 April 2023. [Google Scholar]
  14. Jang, Y.I.; Kyu Kwon, N. Comparison of the Signal Processing Methods to Enhance the Performance of the Signal Re-Construction System with Deep Learning. In Proceedings of the 2022 13th Asian Control Conference (ASCC), Jeju, Republic of Korea, 4–7 May 2022. [Google Scholar]
  15. Yamamoto, K.; Hiromatsu, R.; Ohtsuki, T. ECG Signal Reconstruction via Doppler Sensor by Hybrid Deep Learning Model with CNN and LSTM. IEEE Access 2020, 8, 130551–130560. [Google Scholar] [CrossRef]
  16. Cerda-Dávila, D.A.; Reyes, B.A. Exploring the Reconstruction of Electrocardiograms from Photoplethysmograms via Deep Learning. In Proceedings of the 2023 IEEE EMBS R9 Conference, Guadalajara, Mexico, 5–7 October 2023. [Google Scholar]
  17. Shyu, K.K.; Chiu, L.J.; Lee, P.L.; Tung, T.H.; Yang, S.H. Detection of Breathing and Heart Rates in UWB Radar Sensor Data Using FVPIEF-Based Two-Layer EEMD. IEEE Sens. J. 2019, 19, 774–784. [Google Scholar] [CrossRef]
  18. Yang, D.; Zhu, Z.; Liang, B. Vital Sign Signal Extraction Method Based on Permutation Entropy and EEMD Algorithm for Ultra-Wideband Radar. IEEE Access 2019, 7, 178879–178890. [Google Scholar] [CrossRef]
  19. Kim, D.; Choi, J.; Yoon, J.; Cheon, S.; Kim, B. HeartBeatNet: Enhancing Fast and Accurate Heart Rate Estimation With FMCW Radar and Lightweight Deep Learning. IEEE Sens. Lett. 2024, 8, 6004004. [Google Scholar] [CrossRef]
  20. Schellenberger, S.; Shi, K.; Steigleder, T.; Malessa, A.; Michler, F.; Hameyer, L.; Neumann, N.; Lurz, F.; Weigel, R.; Ostgathe, C.; et al. A dataset of clinically recorded radar vital signs with synchronised reference sensor signals. Sci. Data 2020, 7, 297. [Google Scholar] [CrossRef] [PubMed]
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
