Deep Learning for Non-Invasive Blood Pressure Monitoring: Model Performance and Quantization Trade-Offs
Abstract
1. Introduction
- Develop accurate models for estimating mean, systolic, and diastolic blood pressure.
- Optimize these models for deployment on resource-constrained devices.
- Evaluate model performance in terms of accuracy, computational efficiency, and inference time.
- Propose a novel Edge AI-based blood pressure estimation framework.
- Demonstrate the feasibility of continuous monitoring on wearable devices without subject-specific calibration.
- Provide insights into the trade-offs between accuracy and computational efficiency in Edge AI applications for healthcare.
2. Related Works
3. Dataset Description
4. Methodologies
- Initial Convolution: A convolutional layer (64 filters, kernel size 7) extracts low-level features from raw PPG signals. Batch normalization and ReLU activation stabilize training dynamics.
- Residual Blocks: Three residual blocks progressively increase filter depth (64→128→256) [Equation (2)].
- Global Pooling: Global average pooling aggregates temporal features into a fixed-size representation.
- Dense Layers: Fully connected layers (128→64→3) refine the extracted features before predicting SBP, DBP, and MBP.
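The residual architecture described above can be sketched in Keras. The input segment length, the kernel size inside the residual blocks, and the helper names are illustrative assumptions; only the filter progression (64→128→256), the initial 64-filter/kernel-7 convolution, the global average pooling, and the 128→64→3 dense head come from the description.

```python
import tensorflow as tf
from tensorflow.keras import layers


def residual_block(x, filters):
    # Two convolutions with a projection shortcut when the channel
    # count changes (needed as filter depth grows 64 -> 128 -> 256).
    shortcut = x
    y = layers.Conv1D(filters, 3, padding="same")(x)
    y = layers.BatchNormalization()(y)
    y = layers.ReLU()(y)
    y = layers.Conv1D(filters, 3, padding="same")(y)
    y = layers.BatchNormalization()(y)
    if shortcut.shape[-1] != filters:
        shortcut = layers.Conv1D(filters, 1, padding="same")(shortcut)
    return layers.ReLU()(layers.Add()([y, shortcut]))


def build_residual_bp_net(input_len=256):
    inputs = layers.Input(shape=(input_len, 1))
    x = layers.Conv1D(64, 7, padding="same")(inputs)   # initial convolution
    x = layers.BatchNormalization()(x)
    x = layers.ReLU()(x)
    for filters in (64, 128, 256):                     # three residual blocks
        x = residual_block(x, filters)
    x = layers.GlobalAveragePooling1D()(x)             # fixed-size representation
    x = layers.Dense(128, activation="relu")(x)
    x = layers.Dense(64, activation="relu")(x)
    outputs = layers.Dense(3)(x)                       # SBP, DBP, MBP
    return tf.keras.Model(inputs, outputs)
```

The projection shortcut is the standard choice when a residual block changes channel width; the paper's exact block internals may differ.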
- Initial Feature Extraction: The input layer processes raw PPG signals through a convolutional layer (32 filters, kernel size 15), followed by batch normalization and ReLU activation. Max-pooling reduces the temporal resolution while retaining salient features.
- Hierarchical Feature Extraction: Two additional convolutional layers (64 and 128 filters, kernel size 7) extract higher-order features. Each convolutional layer is followed by batch normalization, ReLU activation, max-pooling, and spatial dropout (p = 0.1) to prevent overfitting in time-series data.
- Attention Mechanism: An attention layer computes importance weights for each time step using a dense layer with a tanh activation function. A softmax operation normalizes the weights across the time axis. The weighted feature map is computed through element-wise multiplication of the attention weights with the feature map.
- Global Pooling: Global average pooling (mean trends) and global max pooling (extreme values) are applied to the weighted feature map. The pooled outputs are concatenated to form a compact representation capturing both average and extreme signal characteristics.
- Dense Layers: Two fully connected layers (128 and 64 neurons with ReLU activation) refine the features before outputting predictions for systolic (SBP), diastolic (DBP), and mean arterial pressure (MBP).
- Loss Function: A custom loss function was designed to incorporate physiological constraints (Equation (3)):
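The attention-and-pooling stage above can be traced with plain NumPy. This is a minimal sketch of the described mechanism (tanh-scored dense layer, softmax over the time axis, element-wise reweighting, concatenated average/max pooling); the function name and parameter shapes are ours, not the paper's.

```python
import numpy as np


def temporal_attention(features, w, b):
    # features: (T, C) feature map from the convolutional stack.
    # w: (C, 1) and b: (1,) are the attention dense-layer parameters.
    scores = np.tanh(features @ w + b)            # (T, 1) importance scores
    weights = np.exp(scores - scores.max())
    weights = weights / weights.sum(axis=0)       # softmax over the time axis
    weighted = features * weights                 # element-wise reweighting
    avg_pool = weighted.mean(axis=0)              # captures mean trends
    max_pool = weighted.max(axis=0)               # captures extreme values
    return np.concatenate([avg_pool, max_pool])   # compact (2C,) representation
```

Concatenating both pooled views doubles the feature width, which is why the first dense layer (128 units) follows the pooling stage rather than the raw feature map.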
- Positional Encoding: Since transformers lack inherent temporal awareness, sinusoidal positional encodings are added to the input embeddings (Equations (4) and (5)).
- Embedding Layer: A dense layer maps the input signal into a higher-dimensional space.
- Multi-Head Self-Attention: Four transformer layers are stacked, each consisting of multi-head self-attention (8 heads) followed by a feed-forward network. Residual connections and layer normalization stabilize training.
- Global Pooling: The final transformer output is aggregated using global average pooling to reduce dimensionality while retaining salient features.
- Output Layers: Dense layers (64 neurons with ReLU activation) refine the features before predicting SBP, DBP, and MBP.
4.1. Experimental Design
4.2. Experimental Results
5. Discussion
6. Conclusions
Author Contributions
Funding
Data Availability Statement
Conflicts of Interest
References
Model | SBP MAE | SBP RMSE | SBP ME | SBP SD | SBP R2 | DBP MAE | DBP RMSE | DBP ME | DBP SD | DBP R2 | MBP MAE | MBP RMSE | MBP ME | MBP SD | MBP R2
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---
Attentive BPNet | 11.55 | 13.78 | −10.65 | 8.75 | −0.44 | 4.09 | 5.36 | 1.76 | 5.06 | 0.15 | 4.56 | 6.09 | −2.33 | 5.63 | 0.28
Attentive BPNet—Batch Size 64 | 13.27 | 16.59 | −11.21 | 12.24 | −1.08 | 5.22 | 6.86 | 1.99 | 6.56 | −0.40 | 6.25 | 8.30 | −2.37 | 7.95 | −0.34
Attentive BPNet—Batch Size 8 | 7.77 | 10.85 | −5.00 | 9.63 | 0.11 | 6.67 | 8.15 | 5.62 | 5.90 | −0.97 | 5.13 | 6.91 | 2.04 | 6.60 | 0.07
Attentive BPNet—Batch Size 4 | 7.55 | 10.44 | −5.74 | 8.75 | 0.18 | 7.88 | 8.96 | 7.31 | 5.19 | −1.39 | 5.04 | 6.47 | 2.95 | 5.76 | 0.19
Attentive BPNet—Batch Size 16 | 10.76 | 14.96 | −3.15 | 14.63 | −0.69 | 8.36 | 9.67 | 5.68 | 7.82 | −1.77 | 7.85 | 9.82 | 2.05 | 9.61 | −0.88
Attentive BPNet—Wide and Deep, Batch Size 32 | 12.57 | 16.72 | −5.10 | 15.93 | −1.12 | 14.17 | 16.02 | 12.92 | 9.47 | −6.62 | 10.86 | 12.91 | 6.34 | 11.24 | −2.24
Attentive BPNet—Wide and Deep, Batch Size 4 | 14.10 | 18.33 | −4.18 | 17.85 | −1.54 | 13.10 | 15.24 | 11.50 | 10.01 | −5.90 | 11.49 | 13.81 | 6.27 | 12.30 | −2.71
Attentive BPNet—Wide and Deep, Batch Size 256 | 17.46 | 21.88 | 4.92 | 21.31 | −2.62 | 19.98 | 22.89 | 18.98 | 12.80 | −14.56 | 18.35 | 21.92 | 15.54 | 15.46 | −8.35
TransfoRhythm | 9.39 | 12.11 | 0.37 | 12.11 | −0.11 | 9.46 | 10.64 | 8.71 | 6.11 | −2.36 | 8.05 | 9.77 | 5.79 | 7.87 | −0.86
Feature Engineering—Separate Models | 10.22 | 12.68 | 5.45 | 11.45 | −0.22 | 19.22 | 20.65 | 18.99 | 8.11 | −11.66 | 12.08 | 13.46 | 10.75 | 8.09 | −2.52
Feature Engineering—Separate Models—Iteration 2 | 10.16 | 13.97 | −6.49 | 12.37 | −0.48 | 11.05 | 12.69 | 10.07 | 7.72 | −3.78 | 10.40 | 11.75 | 8.87 | 7.70 | −1.69
Feature Engineering—Separate Models—Iteration 3 | 10.52 | 13.06 | 5.59 | 11.81 | −0.29 | 10.89 | 12.12 | 9.23 | 7.86 | −3.36 | 12.44 | 13.86 | 10.51 | 9.03 | −2.74
Feature Engineering—Separate Models—Iteration 4 | 13.38 | 15.69 | 9.63 | 12.39 | −0.86 | 9.07 | 10.62 | 6.26 | 8.58 | −2.35 | 11.10 | 12.66 | 8.61 | 9.28 | −2.12
Residual-Enhanced Convolutional Network | 6.36 | 8.93 | −1.11 | 8.86 | 0.40 | 8.35 | 9.78 | 7.81 | 5.88 | −1.84 | 6.90 | 8.40 | 5.58 | 6.28 | −0.37
Residual-Enhanced Convolutional Network—Deeper Network | 12.73 | 15.63 | 10.52 | 11.55 | −0.85 | 13.07 | 14.96 | 12.67 | 7.95 | −5.64 | 12.82 | 15.03 | 12.19 | 8.79 | −3.40
Architecture | Original Size | Post-Quantization | Reduction |
---|---|---|---|
Attentive BPNet | 5.64 MB | 0.50 MB | 91.13% |
Transformer-Based | 9.28 MB | 1.10 MB | 88.15% |
Residual-Enhanced | 1.40 MB | 0.13 MB | 90.71% |
Attentive BPNet

Metric | Original Model | Dynamic Range Quantized Model | Float16 | Full Integer Quantization | Integer-Only Quantization
---|---|---|---|---|---
Model Size (MB) | 5.64 | 0.50 | 0.94 | 0.52 | 0.52
Size Reduction (%) | - | −91.13 | −83.33 | −90.78 | −90.78
Inference Time | 30.46 | 24.78 | 25.02 | 36.63 | 37.78
MAE | 7.84 | 7.84 | 7.84 | 7.78 | 7.78
RMSE | 10.42 | 10.41 | 10.42 | 10.36 | 10.36
ME (Bias) | −0.07 | 0 | −0.07 | −0.19 | −0.19
SD | 10.42 | 10.41 | 10.42 | 10.36 | 10.36

Transformer-Based Model

Metric | Original Model | Dynamic Range Quantized Model | Float16 | Full Integer Quantization | Integer-Only Quantization
---|---|---|---|---|---
Model Size (MB) | 9.28 | 1.77 | 2.01 | 1.10 | 1.10
Size Reduction (%) | - | −80.92 | −78.34 | −88.14 | −88.14
Inference Time | 32.61 | 1793.35 | 1249.69 | 3087.91 | 3034.05
MAE | 10.87 | 10.86 | 10.87 | 8.25 | 60.65
RMSE | 12.85 | 12.84 | 12.85 | 11.25 | 62.33
ME (Bias) | 7.79 | 7.79 | 7.80 | −3.45 | −60.67
SD | 10.21 | 10.21 | 10.21 | 10.71 | 14.30

Residual-Enhanced Convolutional Network

Metric | Original Model | Dynamic Range Quantized Model | Float16 | Full Integer Quantization | Integer-Only Quantization
---|---|---|---|---|---
Model Size (MB) | 1.40 | 0.23 | 0.52 | 0.13 | 0.13
Size Reduction (%) | - | −83.57 | −62.85 | −90.71 | −90.71
Inference Time | 6.23 | 4.66 | 1.20 | 2.60 | 2.60
MAE | 7.77 | 7.75 | 7.77 | 7.65 | 7.65
RMSE | 10.85 | 10.84 | 10.85 | 10.83 | 10.83
ME (Bias) | −5.00 | −4.90 | −5.00 | −5.00 | −5.20
SD | 9.63 | 9.64 | 9.63 | 9.69 | 9.64
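The post-training quantization modes compared above map directly onto TensorFlow Lite converter settings. The sketch below shows the dynamic-range mode, which stores weights as 8-bit integers while keeping float activations; the helper name is ours, and exact size reductions depend on the model and TFLite serialization overhead.

```python
import tensorflow as tf


def quantize_dynamic_range(model):
    # Post-training dynamic-range quantization: weights become int8,
    # activations remain float at inference time. Typically shrinks
    # a float32 model by roughly 4x or more, consistent with the
    # ~80-91% reductions reported in the tables above.
    converter = tf.lite.TFLiteConverter.from_keras_model(model)
    converter.optimizations = [tf.lite.Optimize.DEFAULT]
    return converter.convert()  # serialized .tflite flatbuffer (bytes)
```

Float16 quantization adds `converter.target_spec.supported_types = [tf.float16]`, and full-integer modes additionally require a representative dataset so activation ranges can be calibrated, which is where the transformer's accuracy degraded in the integer-only setting.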
Share and Cite
Devadasan, A.V.; Sengupta, S.; Masum, M. Deep Learning for Non-Invasive Blood Pressure Monitoring: Model Performance and Quantization Trade-Offs. Electronics 2025, 14, 1300. https://doi.org/10.3390/electronics14071300