Editorial

AI in MRI: Computational Frameworks for a Faster, Optimized, and Automated Imaging Workflow

1 Department of Electrical Engineering and Computer Sciences, University of California, Berkeley, CA 94720, USA
2 Department of Biomedical Engineering, Tel Aviv University, Tel Aviv 6997801, Israel
3 Sagol School of Neuroscience, Tel Aviv University, Tel Aviv 6997801, Israel
* Authors to whom correspondence should be addressed.
Bioengineering 2023, 10(4), 492; https://doi.org/10.3390/bioengineering10040492
Submission received: 21 March 2023 / Revised: 12 April 2023 / Accepted: 18 April 2023 / Published: 20 April 2023
(This article belongs to the Special Issue AI in MRI: Frontiers and Applications)
Over the last decade, artificial intelligence (AI) has made an enormous impact on a wide range of fields, including science, engineering, informatics, finance, and transportation. In recent years, it has also created new opportunities for improving the quality and efficiency of patient care. As the first breakthroughs in deep learning (DL) were largely concerned with image perception, interpretation, and analysis, it is not surprising that harnessing this technology has led to enormous progress in medical imaging. For example, DL techniques have enabled state-of-the-art results in image formation and analysis, pathology detection, and protocol planning [1,2,3,4,5,6,7,8,9,10,11].
This special issue of Bioengineering, entitled “AI in MRI: Frontiers and Applications”, focuses on the application of AI in magnetic resonance imaging (MRI), an area that has recently seen a surge of activity. MRI is an imaging modality that allows for the non-invasive visualization of the body’s internal structure and function, and it is widely used for clinical applications in areas including neurology, oncology, cardiology, and orthopedics, and in adult, pediatric, and fetal imaging. MRI has several advantages over other imaging modalities, including a lack of ionizing radiation, excellent soft-tissue contrast, and the ability to acquire images in any plane, orientation, and depth. Nevertheless, MRI suffers from a few substantial limitations, primarily its relatively long scan time, which translates into high cost and increased sensitivity to motion artifacts. Recently, AI-based techniques have enabled considerable progress in addressing these limitations, providing accelerated acquisition [12,13,14,15,16] and motion robustness [17,18,19,20,21].
This special issue features seventeen papers that showcase the added value of employing AI-based solutions for a wide range of MRI-associated tasks, occurring at different points along the imaging pipeline. These original research reports can be divided into four main categories: MRI Acceleration, Image Synthesis and Parameter Quantification, Automated Segmentation, and Scan Planning.

1. MRI Acceleration

Over the last few decades, extensive research efforts have been dedicated to accelerating MRI via the development of advanced data sampling and reconstruction techniques [22,23,24,25,26,27,28,29,30,31,32,33]. Such techniques commonly involve rapid acquisition schemes that “break” the classical Nyquist sampling criterion, a process known as undersampling. As this approach leads to image-domain artifacts, carefully designed reconstruction techniques are essential for clinical-quality-preserving image recovery. Recently, DL techniques have achieved state-of-the-art results in this task, enabling high acceleration factors with excellent reconstruction quality [7,8,9,12,13,14,15,16,34,35,36,37,38,39]. Their success can be attributed to the ability to learn image priors in a data-driven manner instead of the hand-crafted manner practiced in compressed sensing and dictionary learning [25,29,35]. Furthermore, physics-guided unrolled neural networks combine the benefits of DL-based artifact-removal modules with data consistency blocks, which incorporate a physics-based model of the imaging system [9]. A large body of work has demonstrated the benefits of DL for image reconstruction in 2D MRI scans [7,8,9,34,35,36,37,38,39]. More recently, attention has shifted to harnessing DL for accelerating higher-dimensional MRI scans, such as dynamic (temporal) MRI. In this issue, Oscanoa et al. provide a comprehensive review of DL-based reconstruction methods for dynamic cardiac MRI, with connections to relevant theory [40].
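To make the undersampling-plus-data-consistency idea concrete, here is a minimal, hypothetical NumPy sketch; the disc phantom, the 4× random Cartesian mask, and the box-blur "denoiser" stand-in are illustrative choices, and real unrolled networks interleave learned artifact-removal modules with the data consistency step shown here:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy "image": a disc phantom standing in for an MRI slice.
N = 64
yy, xx = np.meshgrid(np.linspace(-1, 1, N), np.linspace(-1, 1, N), indexing="ij")
img = (xx**2 + yy**2 < 0.6).astype(float)

# Sub-Nyquist acquisition: keep only a random quarter of the k-space rows.
kspace = np.fft.fft2(img)
mask = np.zeros((N, N))
mask[rng.choice(N, size=N // 4, replace=False), :] = 1.0
measured = kspace * mask

# Zero-filled reconstruction exhibits aliasing artifacts in image domain.
zero_filled = np.fft.ifft2(measured)

def data_consistency(estimate, measured, mask):
    """Replace the estimate's k-space values at sampled locations with the
    measured data and keep the estimate elsewhere -- the data consistency
    block used inside physics-guided unrolled networks."""
    k_est = np.fft.fft2(estimate)
    return np.fft.ifft2(mask * measured + (1.0 - mask) * k_est)

# A crude stand-in for a denoiser output (3x3 box blur), pushed back onto
# the measured data by the data consistency step.
denoised = sum(np.roll(np.roll(zero_filled, di, axis=0), dj, axis=1)
               for di in (-1, 0, 1) for dj in (-1, 0, 1)) / 9.0
refined = data_consistency(denoised, measured, mask)
```

Because the sampled k-space rows are restored exactly, the refined estimate is never farther from the true image than the denoiser output it started from.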
One research direction that has seen a recent flare of activity is the development of AI techniques for the joint optimization of a non-Cartesian k-space sampling trajectory and an image-reconstruction network [41,42,43,44]. In this issue, Radhakrishna and Ciuciu [45] introduce a generic framework, dubbed PROJECTOR (projection for jointly learning non-Cartesian trajectories while optimizing the reconstructor). This framework ensures that the learned trajectories are compatible with gradient-related hardware constraints. In contrast to previous techniques that enforce such constraints via penalty terms, PROJECTOR enforces them through embedded projection steps that project the learned trajectory onto a feasible set. Retrospective experiments with 2D and 3D MRI data indicate that the PROJECTOR-generated trajectories exploit the full available range of gradients and slew rates and produce sharp images. In another work, Hossain et al. [46] propose a new sampling pattern for 2D MRI, which combines random and non-random sampling along the phase-encoding direction. The authors also introduce an advanced fully dense attention convolutional neural network (FDA-CNN), which reduces the number of redundant features using attention gates. The article by Cho et al. [47] proposes a different strategy for synergistic acquisition/reconstruction design: combining a wave-encoded sampling strategy with an unrolled neural network. Their strategy exploits the inherent similarity of images acquired with different contrasts or echo times, which can be used for accelerating quantitative MRI.
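As a toy illustration of projecting a trajectory onto a hardware-feasible set, the sketch below clips the slew rate (the second time-difference) of a 1D trajectory and reintegrates it. The function name, the single-constraint scheme, and the numeric limits are our simplifications: an exact projection that also enforces the gradient-amplitude limit, as done in the paper for multidimensional trajectories, requires a more elaborate iterative scheme.

```python
import numpy as np

def project_slew(k, smax, dt):
    """Clip the slew rate (~ second time-derivative of the k-space
    trajectory) and reintegrate. A hypothetical, simplified stand-in for
    a projection-onto-feasible-set step."""
    g = np.diff(k, prepend=k[0]) / dt     # gradient ~ k-space velocity
    s = np.diff(g, prepend=g[0]) / dt     # slew ~ k-space acceleration
    s = np.clip(s, -smax, smax)           # enforce the hardware limit
    g_new = g[0] + np.cumsum(s) * dt      # reintegrate slew -> gradient
    return k[0] + np.cumsum(g_new) * dt   # reintegrate gradient -> trajectory

rng = np.random.default_rng(0)
k = np.cumsum(rng.standard_normal(256))   # a jagged, infeasible 1D trajectory
k_proj = project_slew(k, smax=5.0, dt=0.1)
```

After the projection, every second difference of `k_proj` is bounded by `smax * dt**2`, i.e., the trajectory is realizable by gradients whose slew rate never exceeds the stated limit.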
Several papers in this issue introduce techniques for developing DL models in the face of data-related challenges. Zou et al. [48] introduce a framework for dynamic MRI reconstruction without ground-truth data, namely self-supervised collaborative learning (SelfCoLearn). This framework splits the undersampled k-space measurements into two datasets and uses them as inputs for two neural networks. The networks have the same structure but different weights, and they are trained in parallel. The authors introduce a co-training loss that promotes consistency between the predictions of the two networks. Experiments with cardiac data indicate that SelfCoLearn produces high-quality reconstructions of dynamic MRI data. Additionally, Deveshwar et al. [49] introduce a method for synthesizing multi-coil complex-valued data from magnitude-only data; this can be useful for leveraging the large number of DICOM images stored in clinical databases. Their method uses conditional generative adversarial networks (GANs) for generating synthetic-phase images and ESPIRiT [28] for generating sensitivity maps from publicly available databases. The authors demonstrate that training variational networks on the synthesized data yields results comparable to training on raw k-space data. In a different study, Levac et al. [50] address the challenge of training MRI reconstruction models on heterogeneous data across multiple clients (data sites) while keeping the storage of individual scans local. The authors investigate an adaptive federated learning approach, where a global model is first trained across multiple clients without sharing any raw data between them, and then each client uses a small number of available datasets to fine-tune the global model. Numerical experiments demonstrate that this approach can boost the performance of both under-represented clients that participated in the federated training and clients that were absent from it.
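The data splitting at the heart of such self-supervised setups can be sketched as follows. The loss form shown (held-out data terms plus a prediction-agreement term) is an illustrative assumption rather than the paper's exact objective, and the two "networks" are placeholders:

```python
import numpy as np

rng = np.random.default_rng(1)

# Undersampled single-frame k-space: acquisition mask and measured data.
N = 32
sampled = rng.random((N, N)) < 0.4                      # acquired locations
kspace = np.fft.fft2(rng.standard_normal((N, N))) * sampled

# Randomly split the sampled locations into two disjoint input subsets;
# each subset feeds one of the two parallel networks.
coin = rng.random((N, N)) < 0.5
mask_a, mask_b = sampled & coin, sampled & ~coin
input_a, input_b = kspace * mask_a, kspace * mask_b

def co_training_loss(pred_a, pred_b):
    """Hypothetical loss form: each branch is penalized against the
    measured samples it did NOT receive as input, plus a term promoting
    agreement between the two branch predictions."""
    data_a = np.abs((pred_a - kspace)[mask_b]).mean()   # held-out data term
    data_b = np.abs((pred_b - kspace)[mask_a]).mean()
    consistency = np.abs(pred_a - pred_b).mean()
    return data_a + data_b + consistency
```

Because the two input subsets are disjoint, each branch is always scored on measurements it never saw, which is what removes the need for fully sampled ground truth.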
MRI scans can also be accelerated by reducing the number of scan repetitions, which are commonly required for improving the signal-to-noise ratio (SNR). Mohammadi et al. [51] propose a DL-based method for denoising low-SNR rectal cancer diffusion-weighted imaging (DWI) data obtained with a high b-value. In their method, DWI images acquired with a low b-value (characterized by high SNR) are used for guidance. The results, ranked using blind radiologist tests, indicate that the method enables an eight-fold scan-time acceleration.
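The underlying trade-off is that signal averaging improves SNR only as the square root of the number of repetitions, which is what makes a learned denoiser attractive. A quick empirical check of this scaling (the signal and noise values are illustrative, not taken from the paper):

```python
import numpy as np

rng = np.random.default_rng(2)

signal = 1.0   # true (noise-free) voxel value
sigma = 0.5    # per-acquisition noise standard deviation

def snr_after_averaging(n_avg, n_trials=20000):
    # Average n_avg independent noisy acquisitions and estimate the
    # empirical SNR of the averaged value across many trials.
    acq = signal + sigma * rng.standard_normal((n_trials, n_avg))
    est = acq.mean(axis=1)
    return est.mean() / est.std()

snr1, snr8 = snr_after_averaging(1), snr_after_averaging(8)
# SNR grows roughly as sqrt(n_avg), so snr8 / snr1 is close to sqrt(8).
```

Cutting eight averages down to one therefore costs a factor of about 2.8 in SNR, which the denoiser must recover.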

2. Image Synthesis and Parameter Quantification

The ability to derive meaningful tissue-characterizing images from raw data is yet another appealing application of AI in MRI. In this issue, Wu et al. designed a convolutional neural network (CNN) for the synthesis of water/fat images from dual- (instead of multi-) echo images [52]. In addition to the high fidelity of the output images, the proposed method demonstrated a 10-fold acceleration in computation time and an ability to generalize to unseen organs and metal-artifact-containing images. In a different study, Zou et al. [53] proposed a manifold-learning framework that enables the reconstruction of free-breathing cardiac MRI data and the synthesis of cardiac cine movies. This framework enables the on-demand generation of synthetic breath-held cine movies, e.g., movies with different inversion contrasts. Additionally, it enables the estimation of T1 maps at specific respiratory phases.
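For context, the classical two-point Dixon relations underlie dual-echo water/fat imaging: at the in-phase echo the water and fat signals add, and at the opposed-phase echo they subtract. The idealized noise-free form, with hypothetical voxel values (a DL approach such as the one above learns a more general mapping that also copes with field inhomogeneity and artifacts):

```python
import numpy as np

# Idealized two-point Dixon signal model for a single voxel.
water_true, fat_true = 0.7, 0.3
in_phase = water_true + fat_true    # IP = W + F
opposed = water_true - fat_true     # OP = W - F

# Inverting the model recovers the water and fat components.
water = 0.5 * (in_phase + opposed)  # W = (IP + OP) / 2
fat = 0.5 * (in_phase - opposed)    # F = (IP - OP) / 2
```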
The accurate quantification of biophysical parameters is a long-sought goal in MRI, motivated by the superior reproducibility and improved diagnostic ability offered by distilled biological information. Traditionally, the derivation of tissue parameter maps required repeated acquisitions under close-to-steady-state conditions, which yielded very long acquisition times. However, recently proposed frameworks for AI-based acquisition and quantification have rendered the rapid extraction of these parameters a viable option. A few such examples include the mapping of T1 and T2 relaxation times [54,55,56,57,58], semisolid magnetization transfer (MT) and chemical exchange saturation transfer (CEST) proton volume fraction and exchange rate [59,60,61,62,63], and susceptibility [64].
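As a minimal baseline for what these AI frameworks accelerate, a classical mono-exponential T2 fit recovers the parameter from multi-echo signals, S(TE) = S0 · exp(−TE/T2), via a log-linear least-squares fit; the echo times and tissue values below are illustrative, noise-free numbers:

```python
import numpy as np

# Simulate a mono-exponential T2 decay at four echo times (ms).
TE = np.array([10.0, 20.0, 40.0, 80.0])
S0_true, T2_true = 100.0, 50.0
S = S0_true * np.exp(-TE / T2_true)

# ln S = ln S0 - TE / T2, so a linear fit of ln S against TE
# yields slope = -1/T2 and intercept = ln S0.
slope, intercept = np.polyfit(TE, np.log(S), 1)
T2_est = -1.0 / slope
S0_est = np.exp(intercept)
```

With noisy data, a weighted or nonlinear fit per voxel is preferred, and it is precisely this slow, repetition-hungry procedure that the learned quantification methods above shortcut.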
In this issue, Amer et al. combined quantitative T2 and proton density parameter maps with a multi-step classification pipeline aimed at segmenting and differentiating the various leg tissues [65]. By exploiting both fully and weakly supervised architectures, they were able to distinguish between the muscle, subcutaneous adipose, and infiltrated adipose tissues. Next, they exploited the resulting tissue areas for deriving a disease severity biomarker in muscle dystrophies. In another transverse relaxation rate quantification study, Lu et al. [66] designed and trained a cascade of two CNNs for image denoising and R2* mapping, and utilized it for iron-loaded liver relaxometry.

3. Automated Segmentation in Data-Challenging Regimes

AI techniques have recently led to state-of-the-art results in the automated segmentation of structure and pathology. For example, a significant body of work has been dedicated to the segmentation of brain tumors [67,68,69] and abdominal tissues/organs of interest [70,71,72]. Nevertheless, the development of AI techniques requires large training datasets, which are often scarce due to the high cost of data labeling. Moreover, the “off-label” use of other datasets could lead to biased results [73]. To overcome these hurdles, two papers investigated the benefits of pre-training segmentation networks on different datasets for solving different tasks. Dhaene et al. [74] proposed a method for automated segmentation of cardiac MRI (CMR) data, focusing on an MRI sequence that yields tagged MRI (which is useful for myocardial strain measurement). At present, publicly available tagged CMR datasets with myocardial annotations are scarce. The authors introduce a CycleGAN network that can transform cine data to synthetic tagged CMR data, and investigate the use of the synthetic data for training two segmentation networks. They show that pre-training the networks with the synthetic tagged-MRI data leads to faster convergence and better performance compared with training the networks from scratch. Their strategy achieves state-of-the-art results while using only a small dataset of real tagged CMR images.
Dominic et al. [75] suggest pre-training segmentation models on “pretext tasks”, where images are perturbed and the model is trained to restore them. They investigate two such tasks: context prediction, where random image pixels are set to zero, and context restoration, where image patches are randomly swapped. Their results demonstrate that pre-training increases the robustness of the segmentation models in limited labeled data regimes.
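The two pretext perturbations can be sketched directly; this is a hypothetical NumPy rendering on a small random image, whereas the real pipelines operate on MRI data and train a restoration network on the corrupted inputs:

```python
import numpy as np

rng = np.random.default_rng(3)

def context_prediction(img, frac=0.2):
    """Zero a random fraction of pixels; the pretext model learns to
    predict them from the surrounding context."""
    out = img.copy()
    drop = rng.random(img.shape) < frac
    out[drop] = 0.0
    return out, drop

def context_restoration(img, patch=4, n_swaps=5):
    """Swap random pairs of small patches; the pretext model learns to
    restore the original arrangement."""
    out = img.copy()
    h, w = img.shape
    for _ in range(n_swaps):
        y1, x1 = rng.integers(0, h - patch, size=2)
        y2, x2 = rng.integers(0, h - patch, size=2)
        tmp = out[y1:y1 + patch, x1:x1 + patch].copy()
        out[y1:y1 + patch, x1:x1 + patch] = out[y2:y2 + patch, x2:x2 + patch]
        out[y2:y2 + patch, x2:x2 + patch] = tmp
    return out

img = rng.standard_normal((16, 16))
masked, drop = context_prediction(img)
shuffled = context_restoration(img)
```

Since both corruptions are self-generated, no manual labels are needed for pre-training; the downstream segmentation model then only fine-tunes on the small labeled set.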
Another research direction that has drawn significant attention is the end-to-end design of reconstruction and segmentation techniques. Although these two tasks are often addressed separately, there could be much benefit in solving them in tandem. This special issue includes a paper that summarizes the K2S challenge, which focused on this end-to-end approach and was hosted at the 25th International Conference on Medical Image Computing and Computer-Assisted Intervention (MICCAI) (Singapore, 2022) [76]. The challenge participants were required to submit DL models that generate segmentation maps directly from 8× undersampled raw MRI measurements. The challenge organizers created a unique dataset consisting of 300 knee MRI scans, accompanied by radiologist-approved tissue segmentation labels. A total of twelve teams submitted their work, four of which obtained top performance. All the top submissions produced high-quality segmentation maps of knee cartilage and bone, which were suitable for downstream biomarker analysis. Interestingly, the organizers found no correlation between the reconstruction and segmentation metrics.

4. MRI Scan Planning

Automated scan prescription is an emerging AI application that holds new prospects for clinical workflow optimization. At present, MRI scans necessitate a time-demanding manual prescription based on human expertise. Two papers in this special issue propose novel techniques for automating this process. Lei et al. developed an automatic system for field-of-view (FOV) prescription using an intra-stack attention neural network [77]. The suggested system outperforms standard CNN models while producing prescriptions that are not significantly different from those produced by a radiologist. The method was validated using a challenging set of pediatric pelvic and abdominal images, where a typically large variance in body shape is expected. A radiologist confirmed the quality of the output segmentation maps, rating 69 of the 80 examined images as clinically acceptable. The inference time was less than 0.5 s, rendering this approach a promising tool for accelerating the clinical imaging pipeline.
Eisenstat et al. addressed the task of automated fetal MRI planning [78]. Determining the fetus’s presentation is an important element of scan planning, as it affects the mode of delivery. The authors designed a CNN-based architecture, dubbed Fet-Net, for the automatic classification of a 2D slice image into one of four presentation categories. Trained on 143 3D MRI datasets, the method outperformed alternative methods.

5. Conclusions

This special issue includes seventeen papers that showcase the recent developments in harnessing AI to improve MRI workflow. The reported techniques involve various “intervention points” along the imaging pipeline, including protocol planning, data acquisition, image reconstruction, quantitative parameter mapping, and automated segmentation.
Another medical imaging regime where AI has brought considerable benefits is automated diagnosis and prognosis. AI has been found useful, for example, for the diagnosis of breast and prostate cancer from MRI [79,80], the diagnosis of COVID-19 from medical images [81,82], and fault detection in health management [83]. Furthermore, AI-based methods have led to state-of-the-art results in lesion detection and classification [84,85,86,87,88].
Two dominant trends can be identified in the papers published in this issue. The first is the emergence of methods for addressing the lack or scarcity of open-access training data, a known obstacle for algorithm development [73]. Here, this challenge was addressed using data-style transfer [74], manifold learning directly from undersampled dynamic MRI data [53], complex-valued data synthesis with GANs [49], pre-training on “pretext tasks” [75], and federated learning [50]. The second trend is the shift towards more comprehensive AI pipelines, which aim to address more than one component of the MRI workflow. These include frameworks that jointly optimize the sampling pattern and reconstruction [45] and techniques that generate segmentations directly from undersampled raw MRI measurements, thereby conducting both reconstruction and segmentation [76].
In summary, this issue serves as further compelling evidence of the continuous contribution and promise of AI-based strategies for the MRI field. We expect that the upcoming years will see a consistent rise in the practical use of AI in medical imaging, with further impact on emerging applications such as low-field MRI [89,90] and real-time MRI for MR-guided interventions [91,92,93].

Funding

This work was supported by the Ministry of Innovation, Science and Technology, Israel, the Weizmann Institute Women’s Postdoctoral Career Development Award in Science, and a grant from the Tel Aviv University Center for AI and Data Science (TAD).

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Litjens, G.; Kooi, T.; Bejnordi, B.E.; Setio, A.A.A.; Ciompi, F.; Ghafoorian, M.; Van Der Laak, J.A.; Van Ginneken, B.; Sánchez, C.I. A survey on deep learning in medical image analysis. Med. Image Anal. 2017, 42, 60–88. [Google Scholar] [CrossRef]
  2. Panayides, A.S.; Amini, A.; Filipovic, N.D.; Sharma, A.; Tsaftaris, S.A.; Young, A.; Foran, D.; Do, N.; Golemati, S.; Kurc, T.; et al. AI in medical imaging informatics: Current challenges and future directions. IEEE J. Biomed. Health Inform. 2020, 24, 1837–1857. [Google Scholar] [CrossRef]
  3. Castiglioni, I.; Rundo, L.; Codari, M.; Di Leo, G.; Salvatore, C.; Interlenghi, M.; Gallivanone, F.; Cozzi, A.; D’Amico, N.C.; Sardanelli, F. AI applications to medical images: From machine learning to deep learning. Phys. Med. 2021, 83, 9–24. [Google Scholar] [CrossRef]
  4. Reader, A.J.; Corda, G.; Mehranian, A.; da Costa-Luis, C.; Ellis, S.; Schnabel, J.A. Deep learning for PET image reconstruction. IEEE Trans. Radiat. Plasma Med. Sci. 2020, 5, 1–25. [Google Scholar] [CrossRef]
  5. Domingues, I.; Pereira, G.; Martins, P.; Duarte, H.; Santos, J.; Abreu, P.H. Using deep learning techniques in medical imaging: A systematic review of applications on CT and PET. Artif. Intell. Rev. 2020, 53, 4093–4160. [Google Scholar] [CrossRef]
  6. Zhou, S.K.; Greenspan, H.; Davatzikos, C.; Duncan, J.S.; Van Ginneken, B.; Madabhushi, A.; Prince, J.L.; Rueckert, D.; Summers, R.M. A review of deep learning in medical imaging: Imaging traits, technology trends, case studies with progress highlights, and future promises. Proc. IEEE 2021, 109, 820–838. [Google Scholar] [CrossRef]
  7. Lundervold, A.S.; Lundervold, A. An overview of deep learning in medical imaging focusing on MRI. Z. Med. Phys. 2019, 29, 102–127. [Google Scholar] [CrossRef] [PubMed]
  8. Knoll, F.; Hammernik, K.; Zhang, C.; Moeller, S.; Pock, T.; Sodickson, D.K.; Akcakaya, M. Deep-learning methods for parallel magnetic resonance imaging reconstruction: A survey of the current approaches, trends, and issues. IEEE Signal Process. Mag. 2020, 37, 128–140. [Google Scholar] [CrossRef] [PubMed]
  9. Hammernik, K.; Küstner, T.; Yaman, B.; Huang, Z.; Rueckert, D.; Knoll, F.; Akçakaya, M. Physics-Driven Deep Learning for Computational Magnetic Resonance Imaging. arXiv 2022, arXiv:2203.12215. [Google Scholar]
  10. Dar, S.U.; Yurt, M.; Karacan, L.; Erdem, A.; Erdem, E.; Cukur, T. Image synthesis in multi-contrast MRI with conditional generative adversarial networks. IEEE Trans. Med. Imaging 2019, 38, 2375–2388. [Google Scholar] [CrossRef]
  11. Akçakaya, M.; Yaman, B.; Chung, H.; Ye, J.C. Unsupervised deep learning methods for biological image reconstruction and enhancement: An overview from a signal processing perspective. IEEE Signal Process. Mag. 2022, 39, 28–44. [Google Scholar] [CrossRef] [PubMed]
  12. Wang, S.; Su, Z.; Ying, L.; Peng, X.; Zhu, S.; Liang, F.; Feng, D.; Liang, D. Accelerating magnetic resonance imaging via deep learning. In Proceedings of the 2016 IEEE 13th International Symposium on Biomedical Imaging (ISBI), Prague, Czech Republic, 13–16 April 2016; pp. 514–517. [Google Scholar]
  13. Hammernik, K.; Klatzer, T.; Kobler, E.; Recht, M.P.; Sodickson, D.K.; Pock, T.; Knoll, F. Learning a variational network for reconstruction of accelerated MRI data. Magn. Reson. Med. 2018, 79, 3055–3071. [Google Scholar] [CrossRef] [PubMed]
  14. Zhu, B.; Liu, J.Z.; Cauley, S.F.; Rosen, B.R.; Rosen, M.S. Image reconstruction by domain-transform manifold learning. Nature 2018, 555, 487–492. [Google Scholar] [CrossRef] [PubMed]
  15. Aggarwal, H.K.; Mani, M.P.; Jacob, M. MoDL: Model-based deep learning architecture for inverse problems. IEEE Trans. Med. Imaging 2018, 38, 394–405. [Google Scholar] [CrossRef]
  16. Schlemper, J.; Caballero, J.; Hajnal, J.V.; Price, A.; Rueckert, D. A deep cascade of convolutional neural networks for MR image reconstruction. In Proceedings of the Information Processing in Medical Imaging: 25th International Conference, IPMI 2017, Boone, NC, USA, 25–30 June 2017; pp. 647–658. [Google Scholar]
  17. Johnson, P.M.; Drangova, M. Conditional generative adversarial network for 3D rigid-body motion correction in MRI. Magn. Reson. Med. 2019, 82, 901–910. [Google Scholar] [CrossRef]
  18. Küstner, T.; Fuin, N.; Hammernik, K.; Bustin, A.; Qi, H.; Hajhosseiny, R.; Masci, P.G.; Neji, R.; Rueckert, D.; Botnar, R.M.; et al. CINENet: Deep learning-based 3D cardiac CINE MRI reconstruction with multi-coil complex-valued 4D spatio-temporal convolutions. Sci. Rep. 2020, 10, 13710. [Google Scholar] [CrossRef]
  19. Oksuz, I.; Clough, J.R.; Ruijsink, B.; Anton, E.P.; Bustin, A.; Cruz, G.; Prieto, C.; King, A.P.; Schnabel, J.A. Deep learning-based detection and correction of cardiac MR motion artefacts during reconstruction for high-quality segmentation. IEEE Trans. Med. Imaging 2020, 39, 4001–4010. [Google Scholar] [CrossRef]
  20. Shimron, E.; De Goyeneche, A.; Halgaren, A.; Syed, A.B.; Vasanawala, S.; Wang, K.; Lustig, M. BladeNet: Rapid PROPELLER Acquisition and Reconstruction for High spatio-temporal Resolution Abdominal MRI. In Proceedings of the ISMRM Annual Meeting, London, UK, 7–12 May 2022. [Google Scholar]
  21. Pawar, K.; Chen, Z.; Shah, N.J.; Egan, G.F. Suppressing motion artefacts in MRI using an Inception-ResNet network with motion simulation augmentation. NMR Biomed. 2022, 35, e4225. [Google Scholar] [CrossRef]
  22. Sodickson, D.K.; Manning, W.J. Simultaneous acquisition of spatial harmonics (SMASH): Fast imaging with radiofrequency coil arrays. Magn. Reson. Med. 1997, 38, 591–603. [Google Scholar] [CrossRef]
  23. Pruessmann, K.P.; Weiger, M.; Scheidegger, M.B.; Boesiger, P. SENSE: Sensitivity encoding for fast MRI. Magn. Reson. Med. Off. J. Int. Soc. Magn. Reson. Med. 1999, 42, 952–962. [Google Scholar] [CrossRef]
  24. Griswold, M.A.; Jakob, P.M.; Heidemann, R.M.; Nittka, M.; Jellus, V.; Wang, J.; Kiefer, B.; Haase, A. Generalized autocalibrating partially parallel acquisitions (GRAPPA). Magn. Reson. Med. Off. J. Int. Soc. Magn. Reson. Med. 2002, 47, 1202–1210. [Google Scholar] [CrossRef]
  25. Lustig, M.; Donoho, D.; Pauly, J.M. Sparse MRI: The application of Compressed Sensing for rapid MR imaging. Magn. Reson. Med. 2007, 58, 1182–1195. [Google Scholar] [CrossRef]
  26. Lustig, M.; Pauly, J.M. SPIRiT: Iterative self-consistent parallel imaging reconstruction from arbitrary k-space. Magn. Reson. Med. 2010, 64, 457–471. [Google Scholar] [CrossRef]
  27. Vasanawala, S.; Murphy, M.; Alley, M.T.; Lai, P.; Keutzer, K.; Pauly, J.M.; Lustig, M. Practical parallel imaging compressed sensing MRI: Summary of two years of experience in accelerating body MRI of pediatric patients. In Proceedings of the 2011 IEEE International Symposium on Biomedical Imaging: From Nano to Macro, Chicago, IL, USA, 30 March–2 April 2011; pp. 1039–1043. [Google Scholar]
  28. Uecker, M.; Lai, P.; Murphy, M.J.; Virtue, P.; Elad, M.; Pauly, J.M.; Vasanawala, S.S.; Lustig, M. ESPIRiT—An eigenvalue approach to autocalibrating parallel MRI: Where SENSE meets GRAPPA. Magn. Reson. Med. 2014, 71, 990–1001. [Google Scholar] [CrossRef]
  29. Otazo, R.; Candes, E.; Sodickson, D.K. Low-rank plus sparse matrix decomposition for accelerated dynamic MRI with separation of background and dynamic components. Magn. Reson. Med. 2015, 73, 1125–1136. [Google Scholar] [CrossRef] [PubMed]
  30. Feng, L.; Axel, L.; Chandarana, H.; Block, K.T.; Sodickson, D.K.; Otazo, R. XD-GRASP: Golden-angle radial MRI with reconstruction of extra motion-state dimensions using compressed sensing. Magn. Reson. Med. 2016, 75, 775–788. [Google Scholar] [CrossRef] [PubMed]
  31. Feng, L.; Benkert, T.; Block, K.T.; Sodickson, D.K.; Otazo, R.; Chandarana, H. Compressed sensing for body MRI. J. Magn. Reson. Imaging 2017, 45, 966–987. [Google Scholar] [CrossRef] [PubMed]
  32. Wang, X.; Tan, Z.; Scholand, N.; Roeloffs, V.; Uecker, M. Physics-based reconstruction methods for magnetic resonance imaging. Philos. Trans. R. Soc. A 2021, 379, 20200196. [Google Scholar] [CrossRef]
  33. Shimron, E.; Grissom, W.; Azhari, H. Temporal differences (TED) compressed sensing: A method for fast MRgHIFU temperature imaging. NMR Biomed. 2020, 33, e4352. [Google Scholar] [CrossRef]
  34. Sandino, C.M.; Cheng, J.Y.; Chen, F.; Mardani, M.; Pauly, J.M.; Vasanawala, S.S. Compressed sensing: From research to clinical practice with deep neural networks: Shortening scan times for magnetic resonance imaging. IEEE Signal Process. Mag. 2020, 37, 117–127. [Google Scholar] [CrossRef]
  35. Ravishankar, S.; Ye, J.C.; Fessler, J.A. Image reconstruction: From sparsity to data-adaptive methods and machine learning. Proc. IEEE 2019, 108, 86–109. [Google Scholar] [CrossRef] [PubMed]
  36. Liang, D.; Cheng, J.; Ke, Z.; Ying, L. Deep MRI reconstruction: Unrolled optimization algorithms meet neural networks. arXiv 2019, arXiv:1907.11711. [Google Scholar]
  37. Yaman, B.; Hosseini, S.A.H.; Moeller, S.; Ellermann, J.; Uğurbil, K.; Akçakaya, M. Self-supervised learning of physics-guided reconstruction neural networks without fully sampled reference data. Magn. Reson. Med. 2020, 84, 3172–3191. [Google Scholar] [CrossRef] [PubMed]
  38. Chen, Y.; Schönlieb, C.B.; Liò, P.; Leiner, T.; Dragotti, P.L.; Wang, G.; Rueckert, D.; Firmin, D.; Yang, G. AI-based reconstruction for fast MRI—A systematic review and meta-analysis. Proc. IEEE 2022, 110, 224–245. [Google Scholar] [CrossRef]
  39. Ramzi, Z.; Chaithya, G.; Starck, J.L.; Ciuciu, P. NC-PDNet: A density-compensated unrolled network for 2D and 3D non-Cartesian MRI reconstruction. IEEE Trans. Med. Imaging 2022, 41, 1625–1638. [Google Scholar] [CrossRef]
  40. Oscanoa, J.A.; Middione, M.J.; Alkan, C.; Yurt, M.; Loecher, M.; Vasanawala, S.S.; Ennis, D.B. Deep Learning-Based Reconstruction for Cardiac MRI: A Review. Bioengineering 2023, 10, 334. [Google Scholar] [CrossRef] [PubMed]
  41. Weiss, T.; Senouf, O.; Vedula, S.; Michailovich, O.; Zibulevsky, M.; Bronstein, A. PILOT: Physics-informed learned optimized trajectories for accelerated MRI. arXiv 2019, arXiv:1909.05773. [Google Scholar]
  42. Aggarwal, H.K.; Jacob, M. J-MoDL: Joint model-based deep learning for optimized sampling and reconstruction. IEEE J. Sel. Top. Signal Process. 2020, 14, 1151–1162. [Google Scholar] [CrossRef]
  43. Wang, G.; Luo, T.; Nielsen, J.F.; Noll, D.C.; Fessler, J.A. B-spline parameterized joint optimization of reconstruction and k-space trajectories (bjork) for accelerated 2d mri. IEEE Trans. Med. Imaging 2022, 41, 2318–2330. [Google Scholar] [CrossRef]
  44. Lazarus, C.; Weiss, P.; Chauffert, N.; Mauconduit, F.; El Gueddari, L.; Destrieux, C.; Zemmoura, I.; Vignaud, A.; Ciuciu, P. SPARKLING: Variable-density k-space filling curves for accelerated T2*-weighted MRI. Magn. Reson. Med. 2019, 81, 3643–3661. [Google Scholar] [CrossRef]
  45. Radhakrishna, C.G.; Ciuciu, P. Jointly Learning Non-Cartesian k-Space Trajectories and Reconstruction Networks for 2D and 3D MR Imaging through Projection. Bioengineering 2023, 10, 158. [Google Scholar] [CrossRef] [PubMed]
  46. Hossain, M.B.; Kwon, K.C.; Imtiaz, S.M.; Nam, O.S.; Jeon, S.H.; Kim, N. De-Aliasing and Accelerated Sparse Magnetic Resonance Image Reconstruction Using Fully Dense CNN with Attention Gates. Bioengineering 2022, 10, 22. [Google Scholar] [CrossRef]
  47. Cho, J.; Gagoski, B.; Kim, T.H.; Tian, Q.; Frost, R.; Chatnuntawech, I.; Bilgic, B. Wave-Encoded Model-Based Deep Learning for Highly Accelerated Imaging with Joint Reconstruction. Bioengineering 2022, 9, 736.
  48. Zou, J.; Li, C.; Jia, S.; Wu, R.; Pei, T.; Zheng, H.; Wang, S. SelfCoLearn: Self-supervised collaborative learning for accelerating dynamic MR imaging. Bioengineering 2022, 9, 650.
  49. Deveshwar, N.; Rajagopal, A.; Sahin, S.; Shimron, E.; Larson, P.E.Z. Synthesizing Complex-Valued Multicoil MRI Data from Magnitude-Only Images. Bioengineering 2023, 10, 358.
  50. Levac, B.; Arvinte, M.; Tamir, J. Federated End-to-End Unrolled Models for Magnetic Resonance Image Reconstruction. Bioengineering 2023, 10, 364.
  51. Mohammadi, M.; Kaye, E.A.; Alus, O.; Kee, Y.; Golia Pernicka, J.S.; El Homsi, M.; Petkovska, I.; Otazo, R. Accelerated Diffusion-Weighted MRI of Rectal Cancer Using a Residual Convolutional Network. Bioengineering 2023, 10, 359.
  52. Wu, Y.; Alley, M.; Li, Z.; Datta, K.; Wen, Z.; Sandino, C.; Syed, A.; Ren, H.; Xing, L.; Lustig, M.; et al. Deep Learning-Based Water-Fat Separation from Dual-Echo Chemical Shift-Encoded Imaging. Bioengineering 2022, 9, 579.
  53. Zou, Q.; Priya, S.; Nagpal, P.; Jacob, M. Joint cardiac T1 mapping and cardiac cine using manifold modeling. Bioengineering 2023, 10, 345.
  54. Ma, D.; Gulani, V.; Seiberlich, N.; Liu, K.; Sunshine, J.L.; Duerk, J.L.; Griswold, M.A. Magnetic resonance fingerprinting. Nature 2013, 495, 187–192.
  55. Liu, F.; Feng, L.; Kijowski, R. MANTIS: Model-Augmented Neural neTwork with Incoherent k-space Sampling for efficient MR parameter mapping. Magn. Reson. Med. 2019, 82, 174–188.
  56. Cohen, O.; Zhu, B.; Rosen, M.S. MR fingerprinting deep reconstruction network (DRONE). Magn. Reson. Med. 2018, 80, 885–894.
  57. Chen, Y.; Fang, Z.; Hung, S.C.; Chang, W.T.; Shen, D.; Lin, W. High-resolution 3D MR Fingerprinting using parallel imaging and deep learning. Neuroimage 2020, 206, 116329.
  58. Feng, L.; Ma, D.; Liu, F. Rapid MR relaxometry using deep learning: An overview of current techniques and emerging trends. NMR Biomed. 2022, 35, e4416.
  59. Perlman, O.; Zhu, B.; Zaiss, M.; Rosen, M.S.; Farrar, C.T. An end-to-end AI-based framework for automated discovery of rapid CEST/MT MRI acquisition protocols and molecular parameter quantification (AutoCEST). Magn. Reson. Med. 2022, 87, 2792–2810.
  60. Chen, L.; Schär, M.; Chan, K.W.; Huang, J.; Wei, Z.; Lu, H.; Qin, Q.; Weiss, R.G.; van Zijl, P.C.; Xu, J. In vivo imaging of phosphocreatine with artificial neural networks. Nat. Commun. 2020, 11, 1072.
  61. Perlman, O.; Ito, H.; Herz, K.; Shono, N.; Nakashima, H.; Zaiss, M.; Chiocca, E.A.; Cohen, O.; Rosen, M.S.; Farrar, C.T. Quantitative imaging of apoptosis following oncolytic virotherapy by magnetic resonance fingerprinting aided by deep learning. Nat. Biomed. Eng. 2022, 6, 648–657.
  62. Perlman, O.; Farrar, C.T.; Heo, H.Y. MR fingerprinting for semisolid magnetization transfer and chemical exchange saturation transfer quantification. NMR Biomed. 2022, e4710.
  63. Weigand-Whittier, J.; Sedykh, M.; Herz, K.; Coll-Font, J.; Foster, A.N.; Gerstner, E.R.; Nguyen, C.; Zaiss, M.; Farrar, C.T.; Perlman, O. Accelerated and quantitative three-dimensional molecular MRI using a generative adversarial network. Magn. Reson. Med. 2022, 89, 1901–1914.
  64. Jung, W.; Bollmann, S.; Lee, J. Overview of quantitative susceptibility mapping using deep learning: Current status, challenges and opportunities. NMR Biomed. 2022, 35, e4292.
  65. Amer, R.; Nassar, J.; Trabelsi, A.; Bendahan, D.; Greenspan, H.; Ben-Eliezer, N. Quantification of Intra-Muscular Adipose Infiltration in Calf/Thigh MRI Using Fully and Weakly Supervised Semantic Segmentation. Bioengineering 2022, 9, 315.
  66. Lu, Q.; Wang, C.; Lian, Z.; Zhang, X.; Yang, W.; Feng, Q.; Feng, Y. Cascade of Denoising and Mapping Neural Networks for MRI R2* Relaxometry of Iron-Loaded Liver. Bioengineering 2023, 10, 209.
  67. Işın, A.; Direkoğlu, C.; Şah, M. Review of MRI-based brain tumor image segmentation using deep learning methods. Procedia Comput. Sci. 2016, 102, 317–324.
  68. Akkus, Z.; Galimzianova, A.; Hoogi, A.; Rubin, D.L.; Erickson, B.J. Deep learning for brain MRI segmentation: State of the art and future directions. J. Digit. Imaging 2017, 30, 449–459.
  69. Grøvik, E.; Yi, D.; Iv, M.; Tong, E.; Rubin, D.; Zaharchuk, G. Deep learning enables automatic detection and segmentation of brain metastases on multisequence MRI. J. Magn. Reson. Imaging 2020, 51, 175–182.
  70. Estrada, S.; Lu, R.; Conjeti, S.; Orozco-Ruiz, X.; Panos-Willuhn, J.; Breteler, M.M.; Reuter, M. FatSegNet: A fully automated deep learning pipeline for adipose tissue segmentation on abdominal dixon MRI. Magn. Reson. Med. 2020, 83, 1471–1483.
  71. Chen, Y.; Ruan, D.; Xiao, J.; Wang, L.; Sun, B.; Saouaf, R.; Yang, W.; Li, D.; Fan, Z. Fully automated multiorgan segmentation in abdominal magnetic resonance imaging with deep neural networks. Med. Phys. 2020, 47, 4971–4982.
  72. Altini, N.; Prencipe, B.; Cascarano, G.D.; Brunetti, A.; Brunetti, G.; Triggiani, V.; Carnimeo, L.; Marino, F.; Guerriero, A.; Villani, L.; et al. Liver, kidney and spleen segmentation from CT scans and MRI with deep learning: A survey. Neurocomputing 2022, 490, 30–53.
  73. Shimron, E.; Tamir, J.I.; Wang, K.; Lustig, M. Implicit data crimes: Machine learning bias arising from misuse of public data. Proc. Natl. Acad. Sci. USA 2022, 119, e2117203119.
  74. Dhaene, A.P.; Loecher, M.; Wilson, A.J.; Ennis, D.B. Myocardial Segmentation of Tagged Magnetic Resonance Images with Transfer Learning Using Generative Cine-To-Tagged Dataset Transformation. Bioengineering 2023, 10, 166.
  75. Dominic, J.; Bhaskhar, N.; Desai, A.D.; Schmidt, A.; Rubin, E.; Gunel, B.; Gold, G.E.; Hargreaves, B.A.; Lenchik, L.; Boutin, R.; et al. Improving Data-Efficiency and Robustness of Medical Imaging Segmentation Using Inpainting-Based Self-Supervised Learning. Bioengineering 2023, 10, 207.
  76. Tolpadi, A.A.; Bharadwaj, U.; Gao, K.T.; Bhattacharjee, R.; Gassert, F.G.; Luitjens, J.; Giesler, P.; Morshuis, J.N.; Fischer, P.; Hein, M.; et al. K2S Challenge: From Undersampled K-Space to Automatic Segmentation. Bioengineering 2023, 10, 267.
  77. Lei, K.; Syed, A.B.; Zhu, X.; Pauly, J.M.; Vasanawala, S.V. Automated MRI Field of View Prescription from Region of Interest Prediction by Intra-Stack Attention Neural Network. Bioengineering 2023, 10, 92.
  78. Eisenstat, J.; Wagner, M.W.; Vidarsson, L.; Ertl-Wagner, B.; Sussman, D. Fet-Net Algorithm for Automatic Detection of Fetal Orientation in Fetal MRI. Bioengineering 2023, 10, 140.
  79. Hu, Q.; Whitney, H.M.; Giger, M.L. A deep learning methodology for improved breast cancer diagnosis using multiparametric MRI. Sci. Rep. 2020, 10, 10536.
  80. Liu, S.; Zheng, H.; Feng, Y.; Li, W. Prostate cancer diagnosis using deep learning with 3D multiparametric MRI. In Medical Imaging 2017: Computer-Aided Diagnosis; SPIE: Orlando, FL, USA, 2017; Volume 10134, pp. 581–584.
  81. Song, Y.; Zheng, S.; Li, L.; Zhang, X.; Zhang, X.; Huang, Z.; Chen, J.; Wang, R.; Zhao, H.; Chong, Y.; et al. Deep learning enables accurate diagnosis of novel coronavirus (COVID-19) with CT images. IEEE/ACM Trans. Comput. Biol. Bioinform. 2021, 18, 2775–2780.
  82. Khanna, V.V.; Chadaga, K.; Sampathila, N.; Prabhu, S.; Chadaga, R.; Umakanth, S. Diagnosing COVID-19 using artificial intelligence: A comprehensive review. Netw. Model. Anal. Health Inform. Bioinform. 2022, 11, 25.
  83. Zhang, L.; Lin, J.; Liu, B.; Zhang, Z.; Yan, X.; Wei, M. A review on deep learning applications in prognostics and health management. IEEE Access 2019, 7, 162415–162438.
  84. Dalmis, M.U.; Gubern-Mérida, A.; Vreemann, S.; Bult, P.; Karssemeijer, N.; Mann, R.; Teuwen, J. Artificial intelligence-based classification of breast lesions imaged with a multiparametric breast MRI protocol with ultrafast DCE-MRI, T2, and DWI. Investig. Radiol. 2019, 54, 325–332.
  85. Vladimirov, N.; Perlman, O. Molecular MRI-Based Monitoring of Cancer Immunotherapy Treatment Response. Int. J. Mol. Sci. 2023, 24, 3151.
  86. Zhuo, Z.; Zhang, J.; Duan, Y.; Qu, L.; Feng, C.; Huang, X.; Cheng, D.; Xu, X.; Sun, T.; Li, Z.; et al. Automated Classification of Intramedullary Spinal Cord Tumors and Inflammatory Demyelinating Lesions Using Deep Learning. Radiol. Artif. Intell. 2022, 4, e210292.
  87. Rocca, M.A.; Anzalone, N.; Storelli, L.; Del Poggio, A.; Cacciaguerra, L.; Manfredi, A.A.; Meani, A.; Filippi, M. Deep learning on conventional magnetic resonance imaging improves the diagnosis of multiple sclerosis mimics. Investig. Radiol. 2021, 56, 252–260.
  88. Whitney, H.M.; Li, H.; Ji, Y.; Liu, P.; Giger, M.L. Comparison of breast MRI tumor classification using human-engineered radiomics, transfer learning from deep convolutional neural networks, and fusion methods. Proc. IEEE 2019, 108, 163–177.
  89. Arnold, T.C.; Freeman, C.W.; Litt, B.; Stein, J.M. Low-field MRI: Clinical promise and challenges. J. Magn. Reson. Imaging 2023, 57, 25–44.
  90. Koonjoo, N.; Zhu, B.; Bagnall, G.C.; Bhutto, D.; Rosen, M. Boosting the signal-to-noise of low-field MRI with deep learning image reconstruction. Sci. Rep. 2021, 11, 8248.
  91. Nayak, K.S.; Lim, Y.; Campbell-Washburn, A.E.; Steeden, J. Real-time magnetic resonance imaging. J. Magn. Reson. Imaging 2022, 55, 81–99.
  92. Goodburn, R.J.; Philippens, M.E.; Lefebvre, T.L.; Khalifa, A.; Bruijnen, T.; Freedman, J.N.; Waddington, D.E.; Younus, E.; Aliotta, E.; Meliadò, G.; et al. The future of MRI in radiation therapy: Challenges and opportunities for the MR community. Magn. Reson. Med. 2022, 88, 2592–2608.
  93. Cusumano, D.; Boldrini, L.; Dhont, J.; Fiorino, C.; Green, O.; Güngör, G.; Jornet, N.; Klüter, S.; Landry, G.; Mattiucci, G.C.; et al. Artificial Intelligence in magnetic Resonance guided Radiotherapy: Medical and physical considerations on state of art and future perspectives. Phys. Med. 2021, 85, 175–191.
Share and Cite

MDPI and ACS Style

Shimron, E.; Perlman, O. AI in MRI: Computational Frameworks for a Faster, Optimized, and Automated Imaging Workflow. Bioengineering 2023, 10, 492. https://doi.org/10.3390/bioengineering10040492
