Deep Learning for Point-of-Care Ultrasound Image Quality Enhancement: A Review
Abstract
1. Introduction
2. Materials and Methods
2.1. Literature Search
2.1.1. Search Strategy
2.1.2. Eligibility Criteria
2.1.3. Selection Procedure
2.2. Categorization According to Quality Enhancement Aspects
Quality Enhancement Aspect | Definition |
---|---|
1. Spatial resolution | The ability to differentiate two adjacent structures as being distinct from one another: either parallel (axial resolution) or perpendicular (lateral resolution) to the direction of the ultrasound beam [29]. |
2. Contrast resolution | The ability to distinguish between different echo amplitudes of adjacent structures through image intensity variations [29]. |
3. Detail enhancement of structures | Enhancement of texture, edges, or boundaries between structures. |
4. Noise | Minimization of random variability that is not part of the desired signal. |
5. General quality improvement | Mapping low-quality images to high-quality reference images, where the quality disparities are inherent to differences in the capture process and not artificially induced. |
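Several of the reviewed studies operationalize the noise aspect by synthesizing low-quality inputs from clean references with a multiplicative speckle model. A minimal sketch in Python follows; the model form (Gaussian multiplicative noise) and the parameter `sigma` are illustrative assumptions, not taken from any single reviewed study:

```python
import numpy as np

def add_speckle(img, sigma=0.2, rng=None):
    """Corrupt a clean image with multiplicative speckle: y = x * n,
    where n ~ N(1, sigma^2), clipped back to the [0, 1] display range."""
    rng = rng if rng is not None else np.random.default_rng()
    noise = rng.normal(1.0, sigma, img.shape)
    return np.clip(img * noise, 0.0, 1.0)

# Example: degrade a synthetic uniform "phantom" to create a
# low-quality/high-quality training pair.
clean = np.full((128, 128), 0.5)
noisy = add_speckle(clean, sigma=0.2, rng=np.random.default_rng(0))
```

Real studies often use more physically faithful speckle statistics (e.g., Rayleigh-distributed envelopes), but the pairing principle, corrupt a reference to obtain the network input, is the same.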
2.3. Data Extraction
2.4. Statistical Analysis
3. Results
3.1. Study Selection
3.2. Quality Enhancement Aspects
3.3. Study Characteristics
3.4. Study Outcomes
Study | Aim | Dataset (Availability) | Ultrasound Specifications | Deep Learning Algorithm | Loss Function |
---|---|---|---|---|---|
Awasthi et al., 2022 [32] | Reconstruction of high-quality high-bandwidth images from low-bandwidth images. | Phantom: Five separate datasets: tissue-mimicking, commercial, and in vitro porcine carotid artery (private). | Verasonics: L11-5v transducer with PWs at range −25° to 25°. LQ: Limited bandwidth down to 20%. HQ: Full bandwidth. | Residual Encoder–Decoder Net | Scaled MSE
Gasse et al., 2017 [33] | Reconstruct high-quality US images from a small number of PW acquisitions. | In vivo: Carotid, thyroid, and liver regions of healthy subjects. Phantom: Gammex (private). | Verasonics: ATL L7-4 probe (5.2 MHz, 128 elements) with range ±15°. LQ: 3 PWs. HQ: 31 PWs. | CNN | L2 loss |
Goudarzi et al., 2020 [34] | Achieve the quality of multifocus US images by using a mapping function on a single-focus US image. | Phantom: CIRS phantom and ex vivo lamb liver. Simulation: Field II software [34] (private). | E-CUBE 12 Alpinion machine: L3-12H transducer (8.5 MHz). LQ: image with single focal point. HQ: Multi-focus image with 3 focal points. | Boundary-Seeking GAN | Binary cross-entropy (discriminator), MSE + boundary-seeking loss (generator) |
Guo et al., 2020 [35] | Improve the quality of handheld US devices using a small number of plane waves. | In vivo: Dataset provided by Zhang et al. [42] (carotid artery and brachioradialis images of healthy volunteers). Phantom: PICMUS [97] dataset, CIRS phantom. Simulation: US images from natural images using Field II software (only for pre-training LG-Unet) (private and public). | (Derived from dataset sources) In vivo: Verasonics: L10-5 probe (7.5 MHz). LQ: 3 PWs. HQ: Compounded image of 31 PWs with range −15° to 15°. Phantom data: Verasonics: L11 probe (5.2 MHz, 128 elements). | Local Global Unet (LG-Unet) + Simplified Residual Network (S_ResNet) | MSE + SSIM (LG-Unet) and L1 (S_ResNet)
Huang et al., 2018 [36] | Improve the quality of ultrasonic B-mode images reconstructed from 32 channels to that of images from 128 channels. | Simulation: Field II software (private). | Simulation dataset at 5 MHz center frequency, 0.308 mm pitch, 71% bandwidth. LQ: 32-channel image. HQ: 128-channel image. | Context Encoder Reconstruction GAN | Not reported
Khan et al., 2021 [15] | Contrast and resolution enhancement of handheld POCUS images. | In vivo: Carotid and thyroid regions. Phantom: ATS-539 phantom. Simulation: Intermediate domain images generated by downgrading the in vivo and phantom images acquired from high-end system (private). | LQ: NPUS050 portable US system was used as low-quality input. HQ: E-CUBE 12R US, L3-12 transducer. | Cascade application of unsupervised self-consistent CycleGAN + supervised super-resolution network. | Cycle consistency + adversarial loss (cycleGAN), MAE + SSIM (super-resolution network) |
Lu et al., 2020 [37] | High-quality reconstruction for DW imaging using a small number (3) of DW transmissions, competing with those obtained by compounding 31 DWs. | In vivo: Thigh muscle, finger phalanx, and liver regions. Phantom: CIRS and Gammex (private). | Verasonics: ATL P4-2 transducer. LQ: 3 DWs. HQ: Compounded image of 31 DWs. | CNN with Inception Module | MSE
Lyu et al., 2023 [38] | Reconstruct super-resolution high-quality images from single-beam plane wave images. | PICMUS 2016 dataset [97], modulated following the CUBDL guidelines, consisting of Simulation: Generated with Field II software. Phantom: CIRS. In vivo: Carotid artery of a healthy volunteer (public). | (Derived from dataset source) Verasonics: L11 probe with range −16° to 16°. LQ: Single PW image. HQ: PW images synthesized from 75 different angles using CPWC. | U-Shaped GAN based on Attention and Residual Connection (ARU-GAN) | Combination of MS-SSIM, classical adversarial, and perceptual loss
Moinuddin et al., 2022 [39] | Enhance US images using a network where the tasks of noise suppression and resolution enhancement are carried out simultaneously. | In vivo: Breast US (BUS) dataset [98], for which high-resolution and low-noise label images were generated using NLLR filtering. Simulation: Salient object detection (SOD) dataset [99], augmented using image formation physics information and divided into two datasets (public). | (Derived from dataset source) Siemens ACUSON Sequoia C512, 17L5 HD transducer (8.5 MHz). | Deep CNN | MSE
Monkam et al., 2023 [40] | Suppress speckle noise and enhance texture and fine details. | Simulation: Original low-quality US images of the HC18 Challenge fetal dataset [100], from which high-quality target images and additional low-quality images were generated (for training and testing). In vivo: Publicly available datasets: HC18 Challenge (fetal) [100], BUSI (breast), CCA (common carotid artery) (for testing) (public). | (Derived from HC18 dataset source) Voluson E8 or Voluson 730 US device. | U-Net with added feature refinement attention block (US-Net) | L1 loss
Tang et al., 2021 [41] | Reconstruct high-resolution, high-quality plane wave images from low-quality plane wave images from different angles. | PICMUS 2016 dataset [97], modulated following the CUBDL guidelines, consisting of Simulation: Generated with Field II software. Phantom: CIRS. In vivo: Carotid artery of a healthy volunteer (public). | (Derived from dataset source) Verasonics: L11 probe with range −16° to 16°. LQ: PW image using 3 angles. HQ: PW images synthesized from 75 different angles using CPWC. | Attention Mechanism and U-Net-Based GAN | Cross-entropy + MSE + perceptual loss
Zhang et al., 2018 [42] | Reconstruct high-quality US images from a small number of PWs (3). | In vivo: Carotid artery and brachioradialis of healthy volunteers. Phantom: CIRS phantom, ex vivo swine muscles (private). | Verasonics: L10-5 (7.5 MHz) with range −15° to 15°. LQ: 3 PWs. HQ: Coherent compounding using 31 PWs. | GAN with feedforward CNN as both generator and discriminator network | MSE + adversarial loss (generator), binary cross-entropy (discriminator)
Zhou et al., 2018 [43] | Improve the image quality of a single-angle PW image to that of a PW image synthesized from 75 different angles. | PICMUS 2016 dataset [97] synthesized by three different beamforming methods: In vivo: (1) thyroid gland and (2) carotid arteries of human volunteers (public). Phantom: CIRS phantom. Simulation: (1) point image and (2) cyst images generated using Field II software. | (Derived from dataset sources) Verasonics: L11 probe with range −16° to 16°. LQ: Single PW image. HQ: PW images synthesized from 75 different angles. | Multi-scaled CNN | MSE
Zhou et al., 2020 [6] | Improve quality of portable US by mapping low-quality images to corresponding high-quality images. | Single-/multi-angle PWI simulation, phantom, and in vivo data (only used for transfer learning). For training and testing: In vivo: Carotid and thyroid images of healthy volunteers. Phantom: CIRS and self-made gelatin and raw pork. Simulation: Field II software (private). | LQ: mSonics MU1, L10-5v transducer. HQ: Verasonics L11-4v transducer (phantom data) and Toshiba Aplio 500, 7.5 MHz (clinical data). | Two-stage GAN with U-Net and Gradual Learning Strategy | MSE + SSIM + Conv loss
Zhou et al., 2021 [14] | Enhance video quality of handheld US devices. | In vivo: Single- and multi-angle PW videos (only for training). Handheld and high-end images and videos of different body parts of healthy volunteers (for training and testing) (private). | PW videos: Verasonics L11-4v transducer (6.25 MHz, 128 elements) with range −16° to 16°. High-end US (HQ): Toshiba Aplio 500 device. Handheld US (LQ): mSonics MU1, L10-5 transducer. | Low-rank Representation Multi-pathway GAN | Adversarial + MSE + ultrasound-specific perceptual loss
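Several loss functions in the last column combine a pixel-wise term with a structural term (e.g., MSE + SSIM). A minimal NumPy sketch of such a composite objective is given below; it uses a simplified single-window SSIM rather than the sliding-window variant the studies employ, and the weighting `alpha` is an illustrative assumption:

```python
import numpy as np

def ssim_global(x, y, data_range=1.0):
    """Simplified SSIM computed from global image statistics
    (no sliding Gaussian window, unlike the standard definition)."""
    c1 = (0.01 * data_range) ** 2
    c2 = (0.03 * data_range) ** 2
    mu_x, mu_y = x.mean(), y.mean()
    cov = ((x - mu_x) * (y - mu_y)).mean()
    num = (2 * mu_x * mu_y + c1) * (2 * cov + c2)
    den = (mu_x ** 2 + mu_y ** 2 + c1) * (x.var() + y.var() + c2)
    return num / den

def combined_loss(pred, target, alpha=0.8):
    """Weighted mix of pixel-wise MSE and structural dissimilarity (1 - SSIM)."""
    mse = np.mean((pred - target) ** 2)
    return alpha * mse + (1 - alpha) * (1 - ssim_global(pred, target))

# A perfect reconstruction incurs (near-)zero loss.
img = np.random.default_rng(0).random((64, 64))
```

In practice such losses are written in a deep learning framework so that gradients flow to the generator; the NumPy form above only illustrates the arithmetic being balanced.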
Study | Computation Time(Source Code Availability) | Number of Images in Test Set | Performance (±SD) of Low-Quality Input Image | Performance (±SD) of Generated Image |
---|---|---|---|---|
Awasthi et al., 2022 [32] | “Light weight” (available) | Phantom: dataset 1: n = 134 dataset 2: n = 90 dataset 3: n = 31 dataset 4: n = 70 dataset 5: n = 239 | Phantom: dataset 1: PSNR = 17.049 ± 1.107, RMSE = 0.141 ± 0.016, PC = 0.788 dataset 2: PSNR = 15.768 ± 1.376, RMSE = 0.165 ± 0.026 dataset 3: PSNR = 13.885 ± 1.276, RMSE = 0.204 ± 0.032 dataset 4: PSNR = 16.297 ± 1.212, RMSE = 0.155 ± 0.021 dataset 5: PSNR = 15.487 ± 1.876, RMSE = 0.172 ± 0.040 | Phantom: dataset 1: PSNR = 20.903 ± 1.189, RMSE = 0.091 ± 0.012, PC = 0.86 dataset 2: PSNR = 20.523 ± 1.242, RMSE = 0.095 ± 0.013 dataset 3: PSNR = 13.985 ± 1.120, RMSE = 0.201 ± 0.025 dataset 4: PSNR = 21.457 ± 1.238, RMSE = 0.085 ± 0.012 dataset 5: PSNR = 17.654 ± 1.536, RMSE = 0.133 ± 0.022 |
Gasse et al., 2017 [33] | Not reported (not available) | Mixed test set of in vivo and phantom data: n = 1000 | Only graphs are given, showing the CR and LR reached by the proposed model with 3 PWs compared to standard compounding of an increasingly larger number of PWs. | -
Goudarzi et al., 2020 [34] | Not reported (available) | Phantom (CIRS): n = - Simulation: n = 360 | Phantom: FWHM = 1.52, CNR = 9.6 Simulation: SSIM = 0.622 ± 0.02, PSNR = 23.27 ± 1, FWHM = 1.3, CNR = 7.2 | Phantom: FWHM = 1.44, CNR = 11.1 Simulation: SSIM = 0.769 ± 0.017, PSNR = 25.32 ± 0.919, FWHM = 1.09, CNR = 8.02 |
Guo et al., 2020 [35] | Not reported (not available) | 225 (out of 9225) patch images from the in vivo, phantom, and simulation dataset (distribution between datasets not reported). | In vivo: PSNR = 16.04 Phantom: FWHM = 1.8 mm, CR = 0.36, CNR = 24.93 | In vivo: PSNR = 18.94 Phantom: FWHM = 1.3 mm, CR = 0.79, CNR = 32.81 |
Huang et al., 2018 [36] | Not reported (not available) | Simulation: n = 1 | Simulation: CNR: 0.939, PICMUS CNR: 2.381, FWHM: 13.34 | Simulation: CNR: 1.508, PICMUS CNR: 6.502, FWHM: 11.15 |
Khan et al., 2021 [15] | 13.18 ms (not available) | In vivo: n = 43 Phantom: n = 32 | Not reported | Gain compared to simulated intermediate quality images of in vivo and phantom data (only measuring fitness of super-resolution network): PSNR = 13.58, SSIM = 0.63 Non-reference metrics for entire proposed method for in vivo and phantom data: CR = 14.96, CNR = 2.38, GCNR = 0.8604 (21.77%, 30.06%, and 44.42% higher, respectively, than those of the low-quality input images).
Lu et al., 2020 [37] | 0.75 ± 0.03 ms (not available) | Mixed in vivo and phantom data: n = 1000 | Mixed in vivo and phantom data: PSNR = 29.24 ± 1.57, SSIM = 0.83 ± 0.15, MI = 0.51 ± 0.16 Non-reference metrics are only shown in graph form for low-quality images. | Mixed in vivo and phantom data: PSNR = 31.13 ± 1.47, SSIM = 0.93 ± 0.06, MI = 0.82 ± 0.20, CR (near field) = 19.54, CR (far field) = 14.95, CNR (near field) = 7.63, CNR (far field) = 5.21, LR (near field) = 0.90, LR (middle field) = 1.64, LR (far field) = 2.35
Lyu et al., 2023 [38] | Not reported (not available) | In vivo: n = 150 Phantom: n = 150 Simulation: n = 150 | No performance metrics available for low-quality images; only for other traditional deep learning methods for comparison. | In vivo: PSNR = 26.508, CW-SSIM = 0.876, NCC = 0.943 Phantom: FWHM = 0.424, CR = 26.900, CNR = 3.693 Simulation: FWHM = 0.277, CR = 39.472, CNR = 5.141 |
Moinuddin et al., 2022 [39] | Not reported | In vivo: n = 33 Simulation: SOD-1: n = 200 SOD-2: n = 200 Evaluated with 5-fold cross-validation approach. | In vivo: PSNR = 26.0071 ± 2.3083, SSIM = 0.7098 ± 0.0761 Simulation: SOD-1: PSNR = 12.1587 ± 0.7839, SSIM = 0.5570 ± 0.1205 SOD-2: PSNR = 12.5272 ± 0.8243, SSIM = 0.1556 ± 0.1451, GCNR = 0.9936 ± 0.0039 | In vivo: PSNR = 26.9112 ± 2.3025, SSIM = 0.7522 ± 0.0635 Simulation: SOD-1: PSNR = 25.5275 ± 2.9712, SSIM = 0.6946 ± 0.1267 SOD-2: PSNR = 32.4719 ± 2.6179, SSIM = 0.8785 ± 0.0766, GCNR = 0.9966 ± 0.0026 |
Monkam et al., 2023 [40] | 52.16 ms (not available) | In vivo: HC18: n = 30 BUSI: n = 30 CCA: n = 30 Simulation: HC18: n = 335 | No performance metrics available for low-quality images; only for other enhancement methods for comparison. | In vivo: HC18: SNR = 39.32, CNR = 1.10, AGM = 27.46, ENL = 15.71 BUSI: SNR = 34.54, CNR = 4.20, AGM = 39.88, ENL = 17.04 CCA: SNR = 40.87, CNR = 2.59, AGM = 35.92, ENL = 23.03 Simulation: HC18: SSIM = 0.9155, PSNR = 32.87, EPI = 0.6371 |
Tang et al., 2021 [41] | Not reported (not available) | n = 360 (total number of images in test set for the in vivo, phantom, and simulation datasets, distribution not reported) | Phantom: FWHM = 0.5635, CR = 8.718, CNR = 1.109, GCNR = 0.609 Simulation: FWHM = 0.2808, CR = 13.769, CNR = 1.576, GCNR = 0.735 | In vivo: PSNR = 28.278, SSIM = 0.659, MI = 0.9980, NCC = 0.963 Phantom: FWHM = 0.3556, CR = 24.571, CNR = 2.495, GCNR = 0.915 Simulation: FWHM = 0.2695, CR = 39.484, CNR = 5.617, GCNR = 0.998 |
Zhang et al., 2018 [42] | Not reported (not available) | In vivo: n = 500 Phantom: n = 30 | Mixed in vivo and phantom test set: FWHM = 0.50, CR = 10.23, CNR = 1.30 | Mixed in vivo and phantom test set: FWHM = 0.53, CR = 19.46, CNR = 2.25
Zhou et al., 2018 [43] | Not reported (not available) | In vivo: Thyroid dataset: n = 30 Simulation: Point dataset: n = 30 Cyst dataset: n = 30 Evaluated with 5-fold cross-validation approach. | In vivo: Thyroid dataset: PSNR = 14.9235, SSIM = 0.0291, MI = 0.3474 Simulation: Point dataset: PSNR = 24.1708, SSIM = 0.1962, MI = 0.4124, FWHM = 0.49 Cyst dataset: PSNR = 15.8860, SSIM = 0.5537, MI = 1.1976, CR = 137.0473 | In vivo: Thyroid dataset: PSNR = 21.7248, SSIM = 0.3034, MI = 0.8856 Simulation: Point dataset: PSNR = 36.5884, SSIM = 0.9216, MI = 0.4483, FWHM = 0.196 Cyst dataset: PSNR = 24.0167, SSIM = 0.6135, MI = 1.5622, CR = 184.0432 |
Zhou et al., 2020 [6] | Not reported (not available) | In vivo: n = 94 Phantom: n = 40 Simulation: n = 56 Evaluated with 5-fold cross-validation approach. | In vivo: PSNR = 8.65 ± 1.32, SSIM = 0.18 ± 0.04, MI = 0.22 ± 0.13, BRISQUE = 38.91 ± 4.99 Phantom: PSNR = 15.26 ± 2.91, SSIM = 0.12 ± 0.03, MI = 0.20 ± 0.11, BRISQUE = 24.61 ± 4.50 Simulation: PSNR = 16.38 ± 2.35, SSIM = 0.19 ± 0.06, MI = 0.22 ± 0.16, BRISQUE = 29.08 ± 3.45 | In vivo: PSNR = 18.08 ± 1.57, SSIM = 0.41 ± 0.05, MI = 0.68 ± 0.18, BRISQUE = 35.25 ± 4.13 Phantom: PSNR = 24.70 ± 1.11, SSIM = 0.64 ± 0.07, MI = 0.26 ± 0.09, BRISQUE = 21.68 ± 3.36 Simulation: PSNR = 28.50 ± 2.01, SSIM = 0.59 ± 0.02, MI = 0.42 ± 0.04, BRISQUE = 23.30 ± 3.09
Zhou et al., 2021 [14] | Not reported (not available) | In vivo: n = 40 videos For full-reference methods, a single frame in handheld video was used, and most-similar frame in high-end video was selected. | In vivo: PSNR = 12.68 ± 3.45, SSIM = 0.24 ± 0.06, MI = 0.71 ± 0.09, NIQE = 19.48 ± 4.66, ultrasound quality score = 0.06 ± 0.03 | In vivo: PSNR = 19.95 ± 3.24, SSIM = 0.45 ± 0.06, MI = 1.05 ± 0.07, NIQE = 6.95 ± 1.97, ultrasound quality score = 0.89 ± 0.16 |
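The full-reference (PSNR) and region-based (CNR, GCNR) metrics reported above follow standard definitions, sketched below in Python. This is a generic illustration, not the exact implementation of any reviewed study; in particular, the selection of target and background regions of interest is left to the reader:

```python
import numpy as np

def psnr(ref, img, data_range=1.0):
    """Peak signal-to-noise ratio (dB) of img against a reference."""
    mse = np.mean((ref - img) ** 2)
    return 10.0 * np.log10(data_range ** 2 / mse)

def cnr(target, background):
    """Contrast-to-noise ratio between two regions of interest."""
    return abs(target.mean() - background.mean()) / np.sqrt(
        target.var() + background.var())

def gcnr(target, background, bins=256):
    """Generalized CNR: one minus the overlap of the two ROI histograms,
    so 1 means perfectly separable regions and 0 means identical ones."""
    lo = min(target.min(), background.min())
    hi = max(target.max(), background.max())
    h1, _ = np.histogram(target, bins=bins, range=(lo, hi), density=True)
    h2, _ = np.histogram(background, bins=bins, range=(lo, hi), density=True)
    bin_width = (hi - lo) / bins
    return 1.0 - np.sum(np.minimum(h1, h2)) * bin_width

# Example: a bright lesion ROI against a darker background ROI.
rng = np.random.default_rng(1)
lesion = rng.normal(0.8, 0.05, 5000)
background = rng.normal(0.2, 0.05, 5000)
```

Note that PSNR requires an aligned high-quality reference, whereas CNR and GCNR are non-reference metrics computed from the enhanced image alone, which is why some studies in the table report only one family of metrics.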
3.5. Meta-Analysis Results
4. Discussion
5. Conclusions
Author Contributions
Funding
Institutional Review Board Statement
Informed Consent Statement
Data Availability Statement
Conflicts of Interest
Appendix A. Search Strings
References
- Hashim, A.; Tahir, M.J.; Ullah, I.; Asghar, M.S.; Siddiqi, H.; Yousaf, Z. The utility of point of care ultrasonography (POCUS). Ann. Med. Surg. 2021, 71, 102982. [Google Scholar] [CrossRef] [PubMed]
- Riley, A.; Sable, C.; Prasad, A.; Spurney, C.; Harahsheh, A.; Clauss, S.; Colyer, J.; Gierdalski, M.; Johnson, A.; Pearson, G.D.; et al. Utility of hand-held echocardiography in outpatient pediatric cardiology management. Pediatr. Cardiol. 2014, 35, 1379–1386. [Google Scholar] [CrossRef] [PubMed]
- Gilbertson, E.A.; Hatton, N.D.; Ryan, J.J. Point of care ultrasound: The next evolution of medical education. Ann. Transl. Med. 2020, 8, 846. [Google Scholar] [CrossRef] [PubMed]
- Stock, K.F.; Klein, B.; Steubl, D.; Lersch, C.; Heemann, U.; Wagenpfeil, S.; Eyer, F.; Clevert, D.A. Comparison of a pocket-size ultrasound device with a premium ultrasound machine: Diagnostic value and time required in bedside ultrasound examination. Abdom. Imaging 2015, 40, 2861–2866. [Google Scholar] [CrossRef]
- Han, P.J.; Tsai, B.T.; Martin, J.W.; Keen, W.D.; Waalen, J.; Kimura, B.J. Evidence basis for a point-of-care ultrasound examination to refine referral for outpatient echocardiography. Am. J. Med. 2019, 132, 227–233. [Google Scholar] [CrossRef] [PubMed]
- Zhou, Z.; Wang, Y.; Guo, Y.; Qi, Y.; Yu, J. Image Quality Improvement of Hand-Held Ultrasound Devices with a Two-Stage Generative Adversarial Network. IEEE Trans. Biomed. Eng. 2020, 67, 298–311. [Google Scholar] [CrossRef]
- Nelson, B.P.; Sanghvi, A. Out of hospital point of care ultrasound: Current use models and future directions. Eur. J. Trauma Emerg. Surg. 2016, 42, 139–150. [Google Scholar] [CrossRef] [PubMed]
- Kolbe, N.; Killu, K.; Coba, V.; Neri, L.; Garcia, K.M.; McCulloch, M.; Spreafico, A.; Dulchavsky, S. Point of care ultrasound (POCUS) telemedicine project in rural Nicaragua and its impact on patient management. J. Ultrasound 2015, 18, 179–185. [Google Scholar] [CrossRef]
- Stewart, K.A.; Navarro, S.M.; Kambala, S.; Tan, G.; Poondla, R.; Lederman, S.; Barbour, K.; Lavy, C. Trends in Ultrasound Use in Low and Middle Income Countries: A Systematic Review. Int. J. MCH AIDS 2020, 9, 103–120. [Google Scholar] [CrossRef]
- Becker, D.M.; Tafoya, C.A.; Becker, S.L.; Kruger, G.H.; Tafoya, M.J.; Becker, T.K. The use of portable ultrasound devices in low- and middle-income countries: A systematic review of the literature. Trop. Med. Int. Health 2016, 21, 294–311. [Google Scholar] [CrossRef]
- McBeth, P.B.; Hamilton, T.; Kirkpatrick, A.W. Cost-effective remote iPhone-teathered telementored trauma telesonography. J. Trauma Acute Care Surg. 2010, 69, 1597–1599. [Google Scholar] [CrossRef] [PubMed]
- Evangelista, A.; Galuppo, V.; Méndez, J.; Evangelista, L.; Arpal, L.; Rubio, C.; Vergara, M.; Liceran, M.; López, F.; Sales, C. Hand-held cardiac ultrasound screening performed by family doctors with remote expert support interpretation. Heart 2016, 102, 376–382. [Google Scholar] [CrossRef] [PubMed]
- Salimi, N.; Gonzalez-Fiol, A.; Yanez, N.D.; Fardelmann, K.L.; Harmon, E.; Kohari, K.; Abdel-Razeq, S.; Magriples, U.; Alian, A. Ultrasound Image Quality Comparison Between a Handheld Ultrasound Transducer and Mid-Range Ultrasound Machine. Pocus J. 2022, 7, 154–159. [Google Scholar] [CrossRef] [PubMed]
- Zhou, Z.; Guo, Y.; Wang, Y. Handheld Ultrasound Video High-Quality Reconstruction Using a Low-Rank Representation Multipathway Generative Adversarial Network. IEEE Trans. Neural Networks Learn. Syst. 2020, 32, 575–588. [Google Scholar] [CrossRef]
- Khan, S.; Huh, J.; Ye, J.C. Contrast and resolution improvement of pocus using self-consistent cyclegan. In Proceedings of the MICCAI Workshop on Domain Adaptation and Representation Transfer; Springer: Cham, Switzerland, 2021; pp. 158–167. [Google Scholar]
- Jafari, M.H.; Girgis, H.; Van Woudenberg, N.; Moulson, N.; Luong, C.; Fung, A.; Balthazaar, S.; Jue, J.; Tsang, M.; Nair, P. Cardiac point-of-care to cart-based ultrasound translation using constrained CycleGAN. Int. J. Comput. Assist. Radiol. Surg. 2020, 15, 877–886. [Google Scholar] [CrossRef] [PubMed]
- Henderson, R.; Murphy, S. Portability Enhancing Hardware for a Portable Ultrasound System. US Patent No. 9,629,606, 25 April 2017. [Google Scholar]
- Lockwood, G.R.; Talman, J.R.; Brunke, S.S. Real-time 3-D ultrasound imaging using sparse synthetic aperture beamforming. IEEE Trans. Ultrason. Ferroelectr. Freq. Control 1998, 45, 980–988. [Google Scholar] [CrossRef]
- Matrone, G.; Savoia, A.S.; Caliano, G.; Magenes, G. The delay multiply and sum beamforming algorithm in ultrasound B-mode medical imaging. IEEE Trans. Med Imaging 2014, 34, 940–949. [Google Scholar] [CrossRef]
- Ortiz, S.H.C.; Chiu, T.; Fox, M.D. Ultrasound image enhancement: A review. Biomed. Signal Process. Control 2012, 7, 419–428. [Google Scholar] [CrossRef]
- Anaya-Isaza, A.; Mera-Jiménez, L.; Zequera-Diaz, M. An overview of deep learning in medical imaging. Inform. Med. Unlocked 2021, 26, 100723. [Google Scholar] [CrossRef]
- Zhang, H.M.; Dong, B. A review on deep learning in medical image reconstruction. J. Oper. Res. Soc. China 2020, 8, 311–340. [Google Scholar] [CrossRef]
- Liu, J.; Li, K.; Dong, H.; Han, Y.; Li, R. Medical Image Processing based on Generative Adversarial Networks: A Systematic Review. Curr. Med. Imaging 2023, 20, e15734056258198. [Google Scholar] [CrossRef] [PubMed]
- Lepcha, D.C.; Goyal, B.; Dogra, A.; Sharma, K.P.; Gupta, D.N. A deep journey into image enhancement: A survey of current and emerging trends. Inf. Fusion 2023, 93, 36–76. [Google Scholar] [CrossRef]
- Page, M.J.; McKenzie, J.E.; Bossuyt, P.M.; Boutron, I.; Hoffmann, T.C.; Mulrow, C.D.; Shamseer, L.; Tetzlaff, J.M.; Akl, E.A.; Brennan, S.E.; et al. The PRISMA 2020 statement: An updated guideline for reporting systematic reviews. BMJ 2021, 372, n71. [Google Scholar] [CrossRef] [PubMed]
- Chakraborty, C.; Bhattacharya, M.; Pal, S.; Lee, S.S. From machine learning to deep learning: An advances of the recent data-driven paradigm shift in medicine and healthcare. Curr. Res. Biotechnol. 2023, 7, 100164. [Google Scholar] [CrossRef]
- Makwana, G.; Yadav, R.N.; Gupta, L. Analysis of Various Noise Reduction Techniques for Breast Ultrasound Image Enhancement. In Internet of Things and Its Applications: Select Proceedings of ICIA 2020; Springer: Berlin/Heidelberg, Germany, 2022; pp. 303–313. [Google Scholar]
- Michailovich, O.V.; Tannenbaum, A. Despeckling of medical ultrasound images. IEEE Trans. Ultrason. Ferroelectr. Freq. Control 2006, 53, 64–78. [Google Scholar] [CrossRef] [PubMed]
- Ng, A.; Swanevelder, J. Resolution in ultrasound imaging. Contin. Educ. Anaesthesia, Crit. Care Pain 2011, 11, 186–192. [Google Scholar] [CrossRef]
- Sara, U.; Akter, M.; Uddin, M.S. Image quality assessment through FSIM, SSIM, MSE and PSNR—A comparative study. J. Comput. Commun. 2019, 7, 8–18. [Google Scholar] [CrossRef]
- Hore, A.; Ziou, D. Image quality metrics: PSNR vs. SSIM. In Proceedings of the 2010 20th International Conference on Pattern Recognition, Istanbul, Turkey, 23–26 August 2010; pp. 2366–2369. [Google Scholar]
- Awasthi, N.; van Anrooij, L.; Jansen, G.; Schwab, H.M.; Pluim, J.P.; Lopata, R.G. Bandwidth Improvement in Ultrasound Image Reconstruction Using Deep Learning Techniques. Healthcare 2022, 11, 123. [Google Scholar] [CrossRef] [PubMed]
- Gasse, M.; Millioz, F.; Roux, E.; Garcia, D.; Liebgott, H.; Friboulet, D. High-quality plane wave compounding using convolutional neural networks. IEEE Trans. Ultrason. Ferroelectr. Freq. Control 2017, 64, 1637–1639. [Google Scholar] [CrossRef]
- Goudarzi, S.; Asif, A.; Rivaz, H. Fast multi-focus ultrasound image recovery using generative adversarial networks. IEEE Trans. Comput. Imaging 2020, 6, 1272–1284. [Google Scholar] [CrossRef]
- Guo, B.; Zhang, B.; Ma, Z.; Li, N.; Bao, Y.; Yu, D. High-quality plane wave compounding using deep learning for hand-held ultrasound devices. In Proceedings of the 16th International Conference on Advanced Data Mining and Applications (ADMA 2020), Foshan, China, 12–14 November 2020; Springer: Abingdon, UK; pp. 547–559. [Google Scholar]
- Huang, C.Y.; Chen, O.T.C.; Wu, G.Z.; Chang, C.C.; Hu, C.L. Ultrasound imaging improved by the context encoder reconstruction generative adversarial network. In Proceedings of the 2018 IEEE International Ultrasonics Symposium, Kobe, Japan, 22–25 October 2018; pp. 1–4. [Google Scholar]
- Lu, J.; Millioz, F.; Garcia, D.; Salles, S.; Liu, W.; Friboulet, D. Reconstruction for diverging-wave imaging using deep convolutional neural networks. IEEE Trans. Ultrason. Ferroelectr. Freq. Control 2020, 67, 2481–2492. [Google Scholar] [CrossRef] [PubMed]
- Lyu, Y.; Jiang, X.; Xu, Y.; Hou, J.; Zhao, X.; Zhu, X. ARU-GAN: U-shaped GAN based on Attention and Residual connection for super-resolution reconstruction. Comput. Biol. Med. 2023, 164, 107316. [Google Scholar] [CrossRef] [PubMed]
- Moinuddin, M.; Khan, S.; Alsaggaf, A.U.; Abdulaal, M.J.; Al-Saggaf, U.M.; Ye, J.C. Medical ultrasound image speckle reduction and resolution enhancement using texture compensated multi-resolution convolution neural network. Front. Physiol. 2022, 13, 2326. [Google Scholar] [CrossRef] [PubMed]
- Monkam, P.; Lu, W.; Jin, S.; Shan, W.; Wu, J.; Zhou, X.; Tang, B.; Zhao, H.; Zhang, H.; Ding, X. US-Net: A lightweight network for simultaneous speckle suppression and texture enhancement in ultrasound images. Comput. Biol. Med. 2023, 152, 106385. [Google Scholar] [CrossRef] [PubMed]
- Tang, J.; Zou, B.; Li, C.; Feng, S.; Peng, H. Plane-Wave Image Reconstruction via Generative Adversarial Network and Attention Mechanism. IEEE Trans. Instrum. Meas. 2021, 70, 4505115. [Google Scholar] [CrossRef]
- Zhang, X.; Li, J.; He, Q.; Zhang, H.; Luo, J. High-quality reconstruction of plane-wave imaging using generative adversarial network. In Proceedings of the 2018 IEEE International Ultrasonics Symposium, Kobe, Japan, 22–25 October 2018; pp. 1–4. [Google Scholar]
- Zhou, Z.; Wang, Y.; Yu, J.; Guo, W.; Fang, Z. Super-resolution reconstruction of plane-wave ultrasound imaging based on the improved CNN method. In Proceedings of VipIMAGE 2017: VI ECCOMAS Thematic Conference on Computational Vision and Medical Image Processing, Porto, Portugal, 18–20 October 2017; Springer: Berlin/Heidelberg, Germany; pp. 111–120. [Google Scholar]
- Zhou, Z.; Wang, Y.; Yu, J.; Guo, Y.; Guo, W.; Qi, Y. High Spatial-Temporal Resolution Reconstruction of Plane-Wave Ultrasound Images with a Multichannel Multiscale Convolutional Neural Network. IEEE Trans. Ultrason. Ferroelectr. Freq. Control 2018, 65, 1983–1996. [Google Scholar] [CrossRef] [PubMed]
- Goudarzi, S.; Asif, A.; Rivaz, H. High Frequency Ultrasound Image Recovery Using Tight Frame Generative Adversarial Networks. Annu. Int. Conf. IEEE Eng. Med. Biol. Soc. 2020, 2020, 2035–2038. [Google Scholar] [CrossRef] [PubMed]
- Goudarzi, S.; Asif, A.; Rivaz, H. Multi-focus ultrasound imaging using generative adversarial networks. In Proceedings of the 2019 IEEE 16th International Symposium on Biomedical Imaging (ISBI 2019), Venice, Italy, 8–11 April 2019; pp. 1118–1121. [Google Scholar]
- Lu, J.; Liu, W. Unsupervised super-resolution framework for medical ultrasound images using dilated convolutional neural networks. In Proceedings of the 2018 IEEE 3rd International Conference on Image, Vision and Computing (ICIVC), Chongqing, China, 27–29 June 2018; pp. 739–744. [Google Scholar]
- Li, Y.; Lu, W.; Monkam, P.; Wang, Y. IA-Noise2Noise: An Image Alignment Strategy for Echocardiography Despeckling. In Proceedings of the 2023 IEEE International Ultrasonics Symposium, Montreal, QC, Canada, 3–8 September 2023; pp. 1–3. [Google Scholar]
- Mansouri, N.J.; Khaissidi, G.; Despaux, G.; Mrabti, M.; Clézio, E.L. Attention gated encoder-decoder for ultrasonic signal denoising. IAES Int. J. Artif. Intell. 2023, 12, 1695–1703. [Google Scholar] [CrossRef]
- Basile, M.; Gibiino, F.; Cavazza, J.; Semplici, P.; Bechini, A.; Vanello, N. Blind Approach Using Convolutional Neural Networks to a New Ultrasound Image Denoising Task. In Proceedings of the 2023 IEEE International Workshop on Biomedical Applications, Technologies and Sensors, BATS 2023, Catanzaro, Italy, 28–29 September 2023; pp. 68–73. [Google Scholar] [CrossRef]
- Shen, Z.; Tang, C.; Xu, M.; Lei, Z. Removal of Speckle Noises from Ultrasound Images Using Parallel Convolutional Neural Network. Circuits Syst. Signal Process. 2023, 42, 5041–5064. [Google Scholar] [CrossRef]
- Gan, J.; Wang, L.; Liu, Z.; Wang, J. Multi-scale ultrasound image denoising algorithm based on deep learning model for super-resolution reconstruction. In Proceedings of the ACM International Conference Proceeding Series, Guangzhou, China, 25–27 August 2023; pp. 6–11. [Google Scholar] [CrossRef]
- Asgariandehkordi, H.; Goudarzi, S.; Basarab, A.; Rivaz, H. Deep Ultrasound Denoising Using Diffusion Probabilistic Models. In Proceedings of the IEEE International Ultrasonics Symposium, Montreal, QC, Canada, 3–8 September 2023. [Google Scholar] [CrossRef]
- Liu, J.; Li, C.; Liu, L.; Chen, H.; Han, H.; Zhang, B.; Zhang, Q. Speckle noise reduction for medical ultrasound images based on cycle-consistent generative adversarial network. Biomed. Signal Process. Control 2023, 86, 105150. [Google Scholar] [CrossRef]
- Mahmoudi Mehr, O.; Mohammadi, M.R.; Soryani, M. Deep Learning-Based Ultrasound Image Despeckling by Noise Model Estimation. Iran. J. Electr. Electron. Eng. 2023, 19, 1–13. [Google Scholar] [CrossRef]
- Senthamizh Selvi, R.; Suruthi, S.; Samyuktha Shrruthi, K.R.; Varsha, B.; Saranya, S.; Babu, B. Ultrasound Image Denoising Using Cascaded Median Filter and Autoencoder. In Proceedings of the 4th International Conference on Smart Electronics and Communication, ICOSEC 2023, Trichy, India, 20–22 September 2023; pp. 296–302. [Google Scholar] [CrossRef]
- Mikaeili, M.; Bilge, H.S. Evaluating Deep Neural Network Models on Ultrasound Single Image Super Resolution. In Proceedings of the TIPTEKNO 2023—Medical Technologies Congress, Famagusta, Cyprus, 10–12 November 2023. [Google Scholar] [CrossRef]
- Liu, H.; Liu, J.; Hou, S.; Tao, T.; Han, J. Perception consistency ultrasound image super-resolution via self-supervised CycleGAN. Neural Comput. Appl. 2023, 35, 12331–12341. [Google Scholar] [CrossRef]
- Vetriselvi, D.; Thenmozhi, R. Advanced Image Processing Techniques for Ultrasound Images using Multiscale Self Attention CNN. Neural Process. Lett. 2023, 55, 11945–11973. [Google Scholar] [CrossRef]
- Gomez, Y.Z.O.; Costa, E.T. Ultrasound Speckle Filtering Using Deep Learning. In Proceedings of the IFMBE Proceedings; Springer: Cham, Switzerland, 2024; Volume 99, pp. 283–289. [Google Scholar] [CrossRef]
- Li, Y.; Zeng, X.; Dong, Q.; Wang, X. RED-MAM: A residual encoder-decoder network based on multi-attention fusion for ultrasound image denoising. Biomed. Signal Process. Control 2023, 79, 104062. [Google Scholar] [CrossRef]
- Yang, T.; Wang, W.; Cheng, G.; Wei, M.; Xie, H.; Wang, F.L. FDDL-Net: Frequency domain decomposition learning for speckle reduction in ultrasound images. Multimed. Tools Appl. 2022, 81, 42769–42781. [Google Scholar] [CrossRef]
- Karaoğlu, O.; Bilge, H.S.; Uluer, I. Removal of speckle noises from ultrasound images using five different deep learning networks. Eng. Sci. Technol. Int. J. 2022, 29, 101030. [Google Scholar] [CrossRef]
- Markco, M.; Kannan, S. Texture-driven super-resolution of ultrasound images using optimized deep learning model. Imaging Sci. J. 2024, 72, 643–656. [Google Scholar] [CrossRef]
- Karthiha, G.; Allwin, S. Speckle Noise Suppression in Ultrasound Images Using Modular Neural Networks. Intell. Autom. Soft Comput. 2023, 35, 1753–1765. [Google Scholar] [CrossRef]
- Kalaiyarasi, M.; Janaki, R.; Sampath, A.; Ganage, D.; Chincholkar, Y.D.; Budaraju, S. Non-additive noise reduction in medical images using bilateral filtering and modular neural networks. Soft Comput. 2023, 1–10. [Google Scholar] [CrossRef]
- Sawant, A.; Kasar, M.; Saha, A.; Gore, S.; Birwadkar, P.; Kulkarni, S. Medical Image De-Speckling Using Fusion of Diffusion-Based Filters And CNN. In Proceedings of the 8th International Conference on Advanced Computing and Communication Systems, ICACCS 2022, Coimbatore, India, 25–26 March 2022; pp. 1197–1203. [Google Scholar] [CrossRef]
- Dutta, S.; Georgeot, B.; Kouame, D.; Garcia, D.; Basarab, A. Adaptive Contrast Enhancement of Cardiac Ultrasound Images using a Deep Unfolded Many-Body Quantum Algorithm. In Proceedings of the IEEE International Ultrasonics Symposium (IUS), Venice, Italy, 10–13 October 2022. [Google Scholar] [CrossRef]
- Sanjeevi, G.; Krishnan Pathinarupothi, R.; Uma, G.; Madathil, T. Deep Learning Pipeline for Echocardiogram Noise Reduction. In Proceedings of the 2022 IEEE 7th International Conference for Convergence in Technology, I2CT 2022, Mumbai, India, 7–9 April 2022. [Google Scholar] [CrossRef]
- Suseela, K.; Kalimuthu, K. An efficient transfer learning-based Super-Resolution model for Medical Ultrasound Image. J. Phys. Conf. Ser. 2021, 1964, 062050. [Google Scholar] [CrossRef]
- Chennakeshava, N.; Luijten, B.; Drori, O.; Mischi, M.; Eldar, Y.C.; Van Sloun, R.J.G. High resolution plane wave compounding through deep proximal learning. In Proceedings of the IEEE International Ultrasonics Symposium (IUS), Las Vegas, NV, USA, 7–11 September 2020. [Google Scholar] [CrossRef]
- Dong, G.; Ma, Y.; Basu, A. Feature-Guided CNN for Denoising Images from Portable Ultrasound Devices. IEEE Access 2021, 9, 28272–28281. [Google Scholar] [CrossRef]
- Jarosik, P.; Lewandowski, M.; Klimonda, Z.; Byra, M. Pixel-Wise Deep Reinforcement Learning Approach for Ultrasound Image Denoising. In Proceedings of the IEEE International Ultrasonics Symposium (IUS), Xi’an, China, 11–16 September 2021. [Google Scholar] [CrossRef]
- Kumar, M.; Mishra, S.K.; Joseph, J.; Jangir, S.K.; Goyal, D. Adaptive comprehensive particle swarm optimisation-based functional-link neural network filtre model for denoising ultrasound images. IET Image Process. 2021, 15, 1232–1246. [Google Scholar] [CrossRef]
- Shen, Z.; Li, W.; Han, H. Deep Learning-Based Wavelet Threshold Function Optimization on Noise Reduction in Ultrasound Images. Sci. Program. 2021, 2021, 3471327. [Google Scholar] [CrossRef]
- Kokil, P.; Sudharson, S. Despeckling of clinical ultrasound images using deep residual learning. Comput. Methods Programs Biomed. 2020, 194, 105477. [Google Scholar] [CrossRef] [PubMed]
- Feng, X.; Huang, Q.; Li, X. Ultrasound image de-speckling by a hybrid deep network with transferred filtering and structural prior. Neurocomputing 2020, 414, 346–355. [Google Scholar] [CrossRef]
- Ma, Y.; Yang, F.; Basu, A. Edge-guided CNN for denoising images from portable ultrasound devices. In Proceedings of the International Conference on Pattern Recognition, Milan, Italy, 10–15 January 2021; pp. 6826–6833. [Google Scholar] [CrossRef]
- Lan, Y.; Zhang, X. Real-time ultrasound image despeckling using mixed-attention mechanism based residual UNet. IEEE Access 2020, 8, 195327–195340. [Google Scholar] [CrossRef]
- Vasavi, G.; Jyothi, S. Noise Reduction Using OBNLM Filter and Deep Learning for Polycystic Ovary Syndrome Ultrasound Images. In Learning and Analytics in Intelligent Systems; Springer: Cham, Switzerland, 2020; Volume 16, pp. 203–212. [Google Scholar] [CrossRef]
- Shelgaonkar, S.L.; Nandgaonkar, A.B. Deep Belief Network for the Enhancement of Ultrasound Images with Pelvic Lesions. J. Intell. Syst. 2018, 27, 507–522. [Google Scholar] [CrossRef]
- Singh, P.; Mukundan, R.; De Ryke, R. Feature Enhancement of Medical Ultrasound Scans Using Multifractal Measures. In Proceedings of the 2019 IEEE International Conference on Signals and Systems, ICSigSys 2019, Bandung, Indonesia, 16–18 July 2019; pp. 85–91. [Google Scholar] [CrossRef]
- Choi, W.; Kim, M.; Lee, J.H.; Kim, J.; Ra, J.B. Deep CNN-Based Ultrasound Super-Resolution for High-Speed High-Resolution B-Mode Imaging. In Proceedings of the IEEE International Ultrasonics Symposium (IUS), Kobe, Japan, 22–25 October 2018. [Google Scholar] [CrossRef]
- Ando, K.; Nagaoka, R.; Hasegawa, H. Speckle reduction of medical ultrasound images using deep learning with fully convolutional network. Jpn. J. Appl. Phys. 2020, 59, SKKE06. [Google Scholar] [CrossRef]
- Temiz, H.; Bilge, H.S. Super Resolution of B-Mode Ultrasound Images with Deep Learning. IEEE Access 2020, 8, 78808–78820. [Google Scholar] [CrossRef]
- Liu, J.; Liu, H.; Zheng, X.; Han, J. Exploring multi-scale deep encoder-decoder and PatchGAN for perceptual ultrasound image super-resolution. In Communications in Computer and Information Science; Springer: Singapore, 2020; Volume 1265, pp. 47–59. [Google Scholar] [CrossRef]
- Mishra, D.; Chaudhury, S.; Sarkar, M.; Soin, A.S. Ultrasound image enhancement using structure oriented adversarial network. IEEE Signal Process. Lett. 2018, 25, 1349–1353. [Google Scholar] [CrossRef]
- Mishra, D.; Tyagi, S.; Chaudhury, S.; Sarkar, M.; Soin, A.S. Despeckling CNN with Ensembles of Classical Outputs. In Proceedings of the International Conference on Pattern Recognition, Beijing, China, 20–24 August 2018; Volume 2018, pp. 3802–3807. [Google Scholar] [CrossRef]
- Oliveira-Saraiva, D.; Mendes, J.; Leote, J.; Gonzalez, F.A.; Garcia, N.; Ferreira, H.A.; Matela, N. Make It Less Complex: Autoencoder for Speckle Noise Removal-Application to Breast and Lung Ultrasound. J. Imaging 2023, 9, 217. [Google Scholar] [CrossRef] [PubMed]
- Vimala, B.B.; Srinivasan, S.; Mathivanan, S.K.; Muthukumaran, V.; Babu, J.C.; Herencsar, N.; Vilcekova, L. Image Noise Removal in Ultrasound Breast Images Based on Hybrid Deep Learning Technique. Sensors 2023, 23, 1167. [Google Scholar] [CrossRef] [PubMed]
- Sineesh, A.; Shankar, M.R.; Hareendranathan, A.; Panicker, M.R.; Palanisamy, P. Single Image based Super Resolution Ultrasound Imaging Using Residual Learning of Wavelet Features. Annu. Int. Conf. IEEE Eng. Med. Biol. Soc. 2023, 2023, 10340196. [Google Scholar] [CrossRef]
- Li, X.; Wang, Y.; Zhao, Y.; Wei, Y. Fast Speckle Noise Suppression Algorithm in Breast Ultrasound Image Using Three-Dimensional Deep Learning. Front. Physiol. 2022, 13, 880966. [Google Scholar] [CrossRef]
- Tamang, L.D.; Kim, B.W. Super-Resolution Ultrasound Imaging Scheme Based on a Symmetric Series Convolutional Neural Network. Sensors 2022, 22, 3076. [Google Scholar] [CrossRef] [PubMed]
- Balamurugan, M.; Chung, K.; Kuppoor, V.; Mahapatra, S.; Pustavoitau, A.; Manbachi, A. USDL: Inexpensive Medical Imaging Using Deep Learning Techniques and Ultrasound Technology. Proc. Des. Med. Devices Conf. 2020, 2020, 5. [Google Scholar] [CrossRef]
- Yu, H.; Ding, M.; Zhang, X.; Wu, J. PCANet based nonlocal means method for speckle noise removal in ultrasound images. PLoS ONE 2018, 13, e0205390. [Google Scholar] [CrossRef] [PubMed]
- S, L.S.; M, S. Bayesian Framework-Based Adaptive Hybrid Filtering for Speckle Noise Reduction in Ultrasound Images Via Lion Plus FireFly Algorithm. J. Digit. Imaging 2021, 34, 1463–1477. [Google Scholar] [CrossRef]
- Liebgott, H.; Rodriguez-Molares, A.; Cervenansky, F.; Jensen, J.A.; Bernard, O. Plane-wave imaging challenge in medical ultrasound. In Proceedings of the 2016 IEEE International Ultrasonics Symposium (IUS), Tours, France, 18–21 September 2016; pp. 1–4. [Google Scholar]
- Yap, M.H.; Pons, G.; Marti, J.; Ganau, S.; Sentis, M.; Zwiggelaar, R.; Davison, A.K.; Marti, R. Automated breast ultrasound lesions detection using convolutional neural networks. IEEE J. Biomed. Health Inform. 2017, 22, 1218–1226. [Google Scholar] [CrossRef]
- Xia, C.; Li, J.; Chen, X.; Zheng, A.; Zhang, Y. What is and what is not a salient object? Learning salient object detector by ensembling linear exemplar regressors. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Honolulu, HI, USA, 21–26 July 2017; pp. 4142–4150. [Google Scholar]
- van den Heuvel, T.L.; de Bruijn, D.; de Korte, C.L.; van Ginneken, B. Automated measurement of fetal head circumference using 2D ultrasound images. PLoS ONE 2018, 13, e0200412. [Google Scholar] [CrossRef]
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
© 2024 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).
Share and Cite
van der Pol, H.G.A.; van Karnenbeek, L.M.; Wijkhuizen, M.; Geldof, F.; Dashtbozorg, B. Deep Learning for Point-of-Care Ultrasound Image Quality Enhancement: A Review. Appl. Sci. 2024, 14, 7132. https://doi.org/10.3390/app14167132