Topic Editors

1. Department of Medical Physiology and Biophysics, Faculty of Medicine, University of Seville, 41004 Sevilla, Spain
2. National Accelerator Center, University of Seville, 41004 Sevilla, Spain

Artificial Intelligence (AI) in Medical Imaging

Abstract submission deadline: closed (30 November 2021)
Manuscript submission deadline: closed (31 January 2022)
Viewed by 25219

Topic Information

Dear Colleagues,

Artificial Intelligence (AI) and especially Machine Learning are playing an increasing role in Medical Imaging. On the one hand, AI presents solutions whose accuracy may be insufficient, or which are difficult to implement in a clinical workflow. On the other hand, physicians want insight into how a classification, suggested diagnosis, or predicted outcome was calculated, rather than just seeing an expensive AI black box. AI needs a common discussion platform where all key opinion leaders can express their concerns and results in an organized way.

In this topic, we invite AI/ML developers, computer scientists, academics, MDs (radiologists, neurologists, critical care doctors, etc.), and hospital managers to present their results of implementing AI in medical imaging of any modality, the effectiveness of this process, and the gains and losses in terms of patient outcomes and (monetary) investment. We welcome original articles, case reports, and reviews.

Prof. Dr. Marcin Balcerzyk
Topic Editor

Participating Journals

Journal              Impact Factor   CiteScore   Launched   First Decision (median)   APC
Life                 3.2             4.3         2011       18 days                   CHF 2600
Information          2.4             6.9         2010       14.9 days                 CHF 1600
AI                   3.1             7.2         2020       17.6 days                 CHF 1600
Journal of Imaging   2.7             5.9         2015       20.9 days                 CHF 1800

Preprints.org is a multidisciplinary platform providing a preprint service, dedicated to sharing your research from the start and empowering your research journey.

MDPI Topics is cooperating with Preprints.org and has built a direct connection between MDPI journals and Preprints.org. Authors are encouraged to take advantage of the benefits by posting a preprint at Preprints.org prior to publication:

  1. Immediately share your ideas ahead of publication and establish your research priority;
  2. Protect your idea with a time-stamped preprint article;
  3. Enhance the exposure and impact of your research;
  4. Receive feedback from your peers in advance;
  5. Have it indexed in Web of Science (Preprint Citation Index), Google Scholar, Crossref, SHARE, PrePubMed, Scilit and Europe PMC.

Published Papers (5 papers)

18 pages, 5321 KiB  
Article
Deep Learning Applied to Intracranial Hemorrhage Detection
by Luis Cortés-Ferre, Miguel Angel Gutiérrez-Naranjo, Juan José Egea-Guerrero, Soledad Pérez-Sánchez and Marcin Balcerzyk
J. Imaging 2023, 9(2), 37; https://doi.org/10.3390/jimaging9020037 - 7 Feb 2023
Cited by 15 | Viewed by 4549
Abstract
Intracranial hemorrhage is a serious medical problem that requires rapid and often intensive medical care. Identifying the location and type of any hemorrhage present is a critical step in the treatment of the patient. Detecting and diagnosing a hemorrhage that requires an urgent procedure is a difficult and time-consuming process for human experts. In this paper, we propose methods based on EfficientDet's deep-learning technology that can be applied to the diagnosis of hemorrhages at a patient level and which could, thus, become a decision-support system. Our proposal is two-fold. On the one hand, the proposed technique classifies slices of computed tomography scans for the presence or absence of hemorrhage and evaluates whether the patient is positive for hemorrhage, achieving 92.7% accuracy and a 0.978 ROC AUC. On the other hand, our methodology provides visual explanations of the chosen classification using the Grad-CAM methodology.
(This article belongs to the Topic Artificial Intelligence (AI) in Medical Imaging)
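
As a rough illustration of the visual-explanation step, the sketch below applies Grad-CAM to a generic convolutional classifier. The stand-in ResNet, the chosen target layer, and the random input are assumptions for the sake of a self-contained example, not the paper's EfficientDet-based model.

```python
# Minimal Grad-CAM sketch. Assumption: a generic torchvision ResNet stands in
# for the paper's EfficientDet-based classifier; target layer and input are
# illustrative only.
import torch
import torch.nn.functional as F
from torchvision import models

model = models.resnet18(weights=None).eval()
target_layer = model.layer4                      # last convolutional block
activations, gradients = {}, {}

def fwd_hook(module, inputs, output):
    activations["value"] = output                # feature maps (1, C, h, w)

def bwd_hook(module, grad_input, grad_output):
    gradients["value"] = grad_output[0]          # gradients w.r.t. feature maps

target_layer.register_forward_hook(fwd_hook)
target_layer.register_full_backward_hook(bwd_hook)

def grad_cam(x, class_idx=None):
    """Heatmap of the regions that drive the predicted (or given) class."""
    logits = model(x)
    if class_idx is None:
        class_idx = int(logits.argmax(dim=1))
    model.zero_grad()
    logits[0, class_idx].backward()
    weights = gradients["value"].mean(dim=(2, 3), keepdim=True)  # GAP of grads
    cam = F.relu((weights * activations["value"]).sum(dim=1, keepdim=True))
    cam = F.interpolate(cam, size=x.shape[2:], mode="bilinear",
                        align_corners=False)
    return (cam / cam.max().clamp(min=1e-8)).squeeze()

heatmap = grad_cam(torch.randn(1, 3, 224, 224))  # dummy stand-in for a CT slice
```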

18 pages, 1130 KiB  
Article
A Hybrid 3D-2D Image Registration Framework for Pedicle Screw Trajectory Registration between Intraoperative X-ray Image and Preoperative CT Image
by Roshan Ramakrishna Naik, Anitha Hoblidar, Shyamasunder N. Bhat, Nishanth Ampar and Raghuraj Kundangar
J. Imaging 2022, 8(7), 185; https://doi.org/10.3390/jimaging8070185 - 6 Jul 2022
Cited by 7 | Viewed by 3081
Abstract
Pedicle screw insertion is considered a complex surgery among orthopaedic surgeons. To prevent the postoperative complications associated with pedicle screw insertion, various types of image intensity registration-based navigation systems have been developed. These systems are computation-intensive, have a small capture range, and suffer from local-maxima issues. On the other hand, deep learning-based techniques lack registration generalizability and are data-dependent. To overcome these limitations, a patient-specific hybrid 3D-2D registration framework was designed to map a pedicle screw trajectory between an intraoperative X-ray image and a preoperative CT image. An anatomical landmark-based 3D-2D Iterative Control Point (ICP) registration was performed to register a pedicular marker pose between the X-ray images and the axial preoperative CT images. The registration framework was clinically validated by generating projection images possessing an optimal match with the intraoperative X-ray images at the corresponding control-point registration. The effectiveness of the registered trajectory was evaluated in terms of displacement and directional errors after reprojecting its position onto the 2D radiographic planes. The mean Euclidean distances of the head and tail ends of the reprojected trajectory from the actual trajectory in the AP and lateral planes were 0.6–0.8 mm and 0.5–1.6 mm, respectively. Similarly, the corresponding mean directional errors were found to be 4.9° and 2°. The mean trajectory-length difference between the actual and registered trajectories was 2.67 mm. The approximate time required in the intraoperative environment to axially map the marker position for a single vertebra was 3 min. Utilizing markerless registration techniques, the designed framework functions like a screw-navigation tool and assures the quality of the surgery being performed by limiting the need for postoperative CT.
(This article belongs to the Topic Artificial Intelligence (AI) in Medical Imaging)
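
For concreteness, here is one plausible reading of the reported error metrics, sketched with dummy coordinates; the function and the head/tail interpretation are assumptions for illustration, not the authors' code.

```python
# Plausible sketch (assumption, not the authors' code): displacement of the
# reprojected trajectory's head/tail points and its directional error relative
# to the actual trajectory on one 2D radiographic plane.
import numpy as np

def trajectory_errors(actual_head, actual_tail, reproj_head, reproj_tail):
    """All points are 2D coordinates (mm) on one radiographic plane."""
    ah, at = np.asarray(actual_head, float), np.asarray(actual_tail, float)
    rh, rt = np.asarray(reproj_head, float), np.asarray(reproj_tail, float)
    head_err = np.linalg.norm(ah - rh)        # Euclidean displacement, mm
    tail_err = np.linalg.norm(at - rt)
    v_act, v_reg = at - ah, rt - rh
    cos = np.dot(v_act, v_reg) / (np.linalg.norm(v_act) * np.linalg.norm(v_reg))
    angle = np.degrees(np.arccos(np.clip(cos, -1.0, 1.0)))  # directional error, deg
    length_diff = abs(np.linalg.norm(v_act) - np.linalg.norm(v_reg))  # mm
    return head_err, tail_err, angle, length_diff

# Dummy coordinates purely for illustration.
print(trajectory_errors((10, 10), (10, 50), (10.6, 10.2), (11.0, 51.0)))
```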

11 pages, 1212 KiB  
Article
Deep Neural Network for Cardiac Magnetic Resonance Image Segmentation
by David Chen, Huzefa Bhopalwala, Nakeya Dewaswala, Shivaram P. Arunachalam, Moein Enayati, Nasibeh Zanjirani Farahani, Kalyan Pasupathy, Sravani Lokineni, J. Martijn Bos, Peter A. Noseworthy, Reza Arsanjani, Bradley J. Erickson, Jeffrey B. Geske, Michael J. Ackerman, Philip A. Araoz and Adelaide M. Arruda-Olson
J. Imaging 2022, 8(5), 149; https://doi.org/10.3390/jimaging8050149 - 23 May 2022
Cited by 6 | Viewed by 4121
Abstract
The analysis and interpretation of cardiac magnetic resonance (CMR) images are often time-consuming. The automated segmentation of cardiac structures can reduce the time required for image analysis. Spatial similarities between different CMR image types were leveraged to jointly segment multiple sequences using a segmentation model termed a multi-image type UNet (MI-UNet). This model was developed from 72 exams (46% female, mean age 63 ± 11 years) performed on patients with hypertrophic cardiomyopathy. The MI-UNet for steady-state free precession (SSFP) images achieved a superior Dice similarity coefficient (DSC) of 0.92 ± 0.06, compared to 0.87 ± 0.08 for a single-image type UNet (p < 0.001). The MI-UNet for late gadolinium enhancement (LGE) images also had a superior DSC of 0.86 ± 0.11, compared to 0.78 ± 0.11 for a single-image type UNet (p = 0.001). The difference across image types was most evident for the left ventricular myocardium in SSFP images, and for both the left ventricular cavity and the left ventricular myocardium in LGE images. For the right ventricle, there were no differences in DSC when comparing the MI-UNet with single-image type UNets. The joint segmentation of multiple image types increases segmentation accuracy for CMR images of the left ventricle compared to single-image models. In clinical practice, the MI-UNet model may expedite the analysis and interpretation of CMR images of multiple types.
(This article belongs to the Topic Artificial Intelligence (AI) in Medical Imaging)
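
The Dice similarity coefficient reported above measures the overlap between a predicted mask and the ground truth. A minimal sketch for binary masks follows; the smoothing term eps is an implementation convenience, not taken from the paper.

```python
# Dice similarity coefficient (DSC) for binary segmentation masks.
# DSC = 2|A ∩ B| / (|A| + |B|); eps avoids division by zero on empty masks.
import numpy as np

def dice_coefficient(pred, target, eps=1e-7):
    pred = np.asarray(pred, dtype=bool)
    target = np.asarray(target, dtype=bool)
    intersection = np.logical_and(pred, target).sum()
    return (2.0 * intersection + eps) / (pred.sum() + target.sum() + eps)

# Toy example: masks overlapping on 3 pixels, with 4 and 3 foreground pixels.
a = np.zeros((4, 4), dtype=bool); a[1:3, 1:3] = True
b = np.zeros((4, 4), dtype=bool); b[1:3, 1] = True; b[1, 2] = True
print(round(dice_coefficient(a, b), 3))  # 2*3/(4+3) ≈ 0.857
```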

21 pages, 618 KiB  
Review
Systematic Review of Computer Vision Semantic Analysis in Socially Assistive Robotics
by Antonio Victor Alencar Lundgren, Matheus Albert Oliveira dos Santos, Byron Leite Dantas Bezerra and Carmelo José Albanez Bastos-Filho
AI 2022, 3(1), 229-249; https://doi.org/10.3390/ai3010014 - 17 Mar 2022
Cited by 10 | Viewed by 5706
Abstract
The simultaneous surges in research on socially assistive robotics and on computer vision can be seen as a result of the shifting and increasing necessities of our global population, especially towards social care, with an expanding population in need of socially assistive robotics. The merging of these fields creates demand for more complex and autonomous solutions, which often struggle with hardware limitations and with the lack of contextual understanding of tasks that semantic analysis can provide. Solving those issues can provide more comfortable and safer environments for the individuals most in need. This work aimed to understand the current scope of science in the merging fields of computer vision and semantic analysis in lightweight models for robotic assistance. Therefore, we present a systematic review of visual-semantics works concerned with assistive robotics. Furthermore, we discuss the trends and possible research gaps in those fields. We detail our research protocol, present the state of the art and future trends, and answer five pertinent research questions. Out of 459 articles, 22 works matching the defined scope were selected, rated on 8 quality criteria relevant to our search, and discussed in depth. Our results point to an emerging field of research with challenging gaps to be explored by the academic community. Data on database study collection, year of publication, and the discussion of methods and datasets are presented. We observe that the current methods regarding visual semantic analysis show two main trends: first, an abstraction of contextual data to enable an automated understanding of tasks; second, a clearer formalization of model-compaction metrics.
(This article belongs to the Topic Artificial Intelligence (AI) in Medical Imaging)

16 pages, 17025 KiB  
Article
Breast Histopathological Image Classification Method Based on Autoencoder and Siamese Framework
by Min Liu, Yu He, Minghu Wu and Chunyan Zeng
Information 2022, 13(3), 107; https://doi.org/10.3390/info13030107 - 24 Feb 2022
Cited by 20 | Viewed by 3880
Abstract
The automated classification of breast cancer histopathological images is one of the important tasks in computer-aided diagnosis (CAD) systems. Due to the small inter-class and large intra-class variances characteristic of breast cancer histopathological images, extracting discriminative features for breast cancer classification is difficult. To address this problem, an improved autoencoder (AE) network using a Siamese framework, which can learn effective features from histopathological images for CAD breast cancer classification tasks, was designed. First, the input image is processed at multiple scales using a Gaussian pyramid to obtain multi-scale features. Second, in the feature extraction stage, a Siamese framework is used to constrain the pre-trained AE so that the extracted features have smaller intra-class variance and larger inter-class variance. Experimental results show that the proposed method achieved a classification accuracy as high as 97.8% on the BreakHis dataset. Compared with the algorithms commonly used in breast cancer histopathological classification, this method offers superior and faster performance.
(This article belongs to the Topic Artificial Intelligence (AI) in Medical Imaging)
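
To make the Siamese constraint concrete, the sketch below pairs an autoencoder's reconstruction loss with a contrastive loss on the encoder's latent codes, pulling same-class pairs together and pushing different-class pairs apart. The layer sizes, margin, and loss weighting are illustrative assumptions, not the paper's configuration.

```python
# Sketch: autoencoder + Siamese contrastive constraint on latent features.
# Layer sizes, margin, and loss weighting are illustrative assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F

class SiameseAE(nn.Module):
    def __init__(self, in_dim=1024, z_dim=64):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Linear(in_dim, 256), nn.ReLU(), nn.Linear(256, z_dim))
        self.decoder = nn.Sequential(
            nn.Linear(z_dim, 256), nn.ReLU(), nn.Linear(256, in_dim))

    def forward(self, x):
        z = self.encoder(x)
        return z, self.decoder(z)

def siamese_ae_loss(model, x1, x2, same_class, margin=1.0, alpha=0.5):
    """Reconstruction loss plus a contrastive term on the latent codes:
    same-class pairs are pulled together, different-class pairs are pushed
    at least `margin` apart."""
    z1, recon1 = model(x1)
    z2, recon2 = model(x2)
    recon = F.mse_loss(recon1, x1) + F.mse_loss(recon2, x2)
    dist = F.pairwise_distance(z1, z2)
    contrastive = torch.where(
        same_class, dist.pow(2), F.relu(margin - dist).pow(2)).mean()
    return recon + alpha * contrastive

# Toy usage with random stand-ins for flattened image patches.
model = SiameseAE()
x1, x2 = torch.randn(8, 1024), torch.randn(8, 1024)
same = torch.randint(0, 2, (8,), dtype=torch.bool)
loss = siamese_ae_loss(model, x1, x2, same)
```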
