Revolutionizing Medical Image Analysis with Deep Learning

A special issue of Electronics (ISSN 2079-9292). This special issue belongs to the section "Computer Science & Engineering".

Deadline for manuscript submissions: closed (15 June 2024) | Viewed by 21,251

Special Issue Editors


Guest Editor
Department of Software Engineering, Faculty of Electrical Engineering, Computer Science and Information Technology Osijek, 31000 Osijek, Croatia
Interests: image compression; image processing; computer vision; machine learning; medical image processing and analysis; visual computing

Guest Editor
Department of Software Engineering, Faculty of Electrical Engineering, Computer Science and Information Technology Osijek, 31000 Osijek, Croatia
Interests: image processing; computer vision; deep learning; machine learning; medical image processing and analysis; visual computing

Guest Editor
Department of Electronics and Telecommunications, Polytechnic University of Turin, Turin, Italy
Interests: biomedical signal and image processing and classification; biophysical modelling; clinical studies; mathematical biology and physiology; noninvasive monitoring of the volemic status of patients; nonlinear biomedical signal processing; optimal non-uniform down-sampling; systems for human–machine interaction

Special Issue Information

Dear Colleagues,

This Special Issue of the MDPI journal Electronics, titled "Revolutionizing Medical Image Analysis with Deep Learning," focuses on the growing trend of using deep learning algorithms in the field of medical imaging. Medical imaging is an essential component in the diagnosis, treatment, and monitoring of various diseases and conditions, and deep learning has the potential to significantly improve the accuracy, efficiency, and reliability of medical image analysis.

The goal of this Special Issue is to bring together recent advances and cutting-edge research in the use of deep learning in medical image analysis. The Issue also aims to provide a comprehensive overview of the current state-of-the-art and to highlight the challenges, opportunities, and future directions of this rapidly evolving field.

The scope of the Special Issue is interdisciplinary, bringing together experts from various fields, such as computer science, engineering, medicine, and biology. The Issue is designed to be a useful resource for researchers, clinicians, and practitioners in the field of medical imaging, and to provide them with valuable insights into the latest developments and trends in deep learning.

The focus of the Special Issue is on the practical applications and novel approaches of deep learning in medical image analysis, including but not limited to:

  • Novel applications of deep learning (DL) in medical image processing and analysis;
  • DL approaches for medical image segmentation and classification (X-rays, CT, MRI, PET, ultrasound);
  • DL approaches for medical image registration, super-resolution, and resampling;
  • Unsupervised, semi-supervised, and weakly supervised learning for medical image processing and analysis;
  • Domain adaptation, transfer learning, and adversarial learning in medical imaging with DL;
  • Multi-modal medical imaging data fusion and integration with DL;
  • Joint latent space learning with DL for medical imaging and non-imaging data integration;
  • Spatiotemporal medical imaging and image analysis using DL;
  • Novel datasets, challenges, and benchmarks for the application and evaluation of DL; annotation-efficient approaches to DL;
  • Comprehensive surveys and reviews on medical image processing and analysis.

This Special Issue provides a valuable supplement to the existing literature in the field by bringing together a wide range of perspectives on the use of deep learning in medical image analysis. The Issue is an excellent resource for researchers, clinicians, and practitioners interested in exploring the potential of deep learning for medical image analysis.

Dr. Irena Galić
Dr. Marija Habijan
Dr. Antonio Lanata
Dr. Luca Mesin
Guest Editors

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to the website. Once registered, authors can proceed to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Electronics is an international peer-reviewed open access semimonthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 2400 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • medical image analysis
  • image processing
  • image classification
  • image segmentation
  • image registration
  • image reconstruction
  • X-rays
  • CT scans
  • MRI
  • PET
  • ultrasound
  • computer-aided diagnosis
  • computer-aided treatment planning
  • artificial intelligence
  • deep learning
  • machine learning
  • neural networks

Published Papers (11 papers)


Research


14 pages, 2747 KiB  
Article
Pine Wilt Disease Segmentation with Deep Metric Learning Species Classification for Early-Stage Disease and Potential False Positive Identification
by Nikhil Thapa, Ridip Khanal, Bhuwan Bhattarai and Joonwhoan Lee
Electronics 2024, 13(10), 1951; https://doi.org/10.3390/electronics13101951 - 16 May 2024
Viewed by 538
Abstract
Pine Wilt Disease poses a significant global threat to forests, necessitating swift detection methods. Conventional approaches are resource-intensive but utilizing deep learning on ortho-mapped images obtained from Unmanned Aerial Vehicles offers cost-effective and scalable solutions. This study presents a novel method for Pine Wilt Disease detection and classification using YOLOv8 for segmenting diseased areas, followed by cropping the diseased regions from the original image and applying Deep Metric Learning for classification. We trained a ResNet50 model using semi-hard triplet loss to obtain embeddings, and subsequently trained a Random Forest classifier tasked with identifying tree species and distinguishing false positives. Segmentation was favored over object detection due to its ability to provide pixel-level information, enabling the flexible extension of subsequent bounding boxes. Deep Metric Learning-based classification after segmentation was chosen for its effectiveness in handling visually similar images. The results indicate a mean Intersection over Union of 83.12% for segmentation, with classification accuracies of 98.7% and 90.7% on the validation and test sets, respectively. Full article
(This article belongs to the Special Issue Revolutionizing Medical Image Analysis with Deep Learning)
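The deep-metric-learning step described above can be illustrated with a minimal sketch of the triplet loss. This is not the authors' code: the toy 2-D vectors, the margin value, and the helper names are invented for illustration, whereas the paper trains a ResNet50 to produce high-dimensional embeddings and mines semi-hard triplets within batches.

```python
# Hedged sketch of triplet loss: pull an anchor embedding toward a same-class
# "positive" and push it away from a different-class "negative" by a margin.

def squared_distance(a, b):
    """Squared Euclidean distance between two embedding vectors."""
    return sum((x - y) ** 2 for x, y in zip(a, b))

def triplet_loss(anchor, positive, negative, margin=0.2):
    """L = max(0, d(a, p) - d(a, n) + margin)."""
    return max(0.0, squared_distance(anchor, positive)
                    - squared_distance(anchor, negative) + margin)

# A "semi-hard" negative is farther from the anchor than the positive but
# still within the margin, so the loss is positive yet moderate.
anchor, positive = [0.0, 0.0], [0.1, 0.0]
semi_hard_negative = [0.3, 0.0]  # d(a,n) = 0.09 > d(a,p) = 0.01, within margin
print(triplet_loss(anchor, positive, semi_hard_negative))
```

After training, embeddings of the same tree species cluster together, which is what makes a simple downstream classifier (here, the paper's Random Forest) effective on visually similar crops.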

21 pages, 6303 KiB  
Article
SSGNet: Selective Multi-Scale Receptive Field and Kernel Self-Attention Based on Group-Wise Modality for Brain Tumor Segmentation
by Bin Guo, Ning Cao, Peng Yang and Ruihao Zhang
Electronics 2024, 13(10), 1915; https://doi.org/10.3390/electronics13101915 - 14 May 2024
Viewed by 598
Abstract
Medical image processing has been used in medical image analysis for many years and has achieved great success. However, one challenge is that medical image processing algorithms ineffectively utilize multi-modality characteristics to further extract features. To address this issue, we propose SSGNet based on UNet, which comprises a selective multi-scale receptive field (SMRF) module, a selective kernel self-attention (SKSA) module, and a skip connection attention module (SCAM). The SMRF and SKSA modules have the same function but work in different modality groups. SMRF functions in the T1 and T1ce modality groups, while SKSA is implemented in the T2 and FLAIR modality groups. Their main tasks are to reduce the image size by half, further extract fused features within the groups, and prevent information loss during downsampling. The SCAM uses high-level features to guide the selection of low-level features in skip connections. To improve performance, SSGNet also utilizes deep supervision. Multiple experiments were conducted to evaluate the effectiveness of our model on the BraTS2018 dataset. SSGNet achieved Dice coefficient scores for the whole tumor (WT), tumor core (TC), and enhancing tumor (ET) of 91.04, 86.64, and 81.11, respectively. The results show that the proposed model achieved state-of-the-art performance compared with more than twelve benchmarks. Full article
(This article belongs to the Special Issue Revolutionizing Medical Image Analysis with Deep Learning)
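The WT/TC/ET scores above are Dice coefficients. As a hedged reference sketch of the metric itself (toy binary masks, not BraTS data; flat lists stand in for 3-D volumes):

```python
# Dice coefficient for binary segmentation masks: 2|A ∩ B| / (|A| + |B|).

def dice(pred, target):
    """Dice score for two binary masks given as flat 0/1 lists."""
    intersection = sum(p * t for p, t in zip(pred, target))
    total = sum(pred) + sum(target)
    return 2.0 * intersection / total if total else 1.0  # empty masks match

pred   = [1, 1, 0, 1, 0, 0]
target = [1, 0, 0, 1, 1, 0]
print(round(dice(pred, target), 4))  # 2*2 / (3+3) = 0.6667
```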

13 pages, 2292 KiB  
Article
Sample Size Effect on Musculoskeletal Segmentation: How Low Can We Go?
by Roel Huysentruyt, Ide Van den Borre, Srđan Lazendić, Kate Duquesne, Aline Van Oevelen, Jing Li, Arne Burssens, Aleksandra Pižurica and Emmanuel Audenaert
Electronics 2024, 13(10), 1870; https://doi.org/10.3390/electronics13101870 - 10 May 2024
Viewed by 560
Abstract
Convolutional Neural Networks have emerged as a predominant tool in musculoskeletal medical image segmentation, enabling precise delineation of bone and cartilage in medical images. Recent developments in image processing and network architecture warrant a reevaluation of the relationship between segmentation accuracy and the amount of training data. This study investigates the minimum sample size required to achieve clinically relevant accuracy in bone and cartilage segmentation using the nnU-Net methodology. In addition, the potential benefit of integrating available medical knowledge for data augmentation, a largely unexplored opportunity for data preprocessing, is investigated. The impact of sample size on the segmentation accuracy of the nnU-Net is studied using three distinct musculoskeletal datasets, including both MRI and CT, to segment bone and cartilage. Further, the use of model-informed augmentation is explored on two of the above datasets by generating new training samples with a shape-model-informed approach. Results indicate that the nnU-Net can achieve remarkable segmentation accuracy with as few as 10–15 training samples for bones and 25–30 training samples for cartilage. Model-informed augmentation did not yield relevant improvements in segmentation results. The sample size findings challenge the common notion that large datasets are necessary to obtain clinically relevant segmentation outcomes in musculoskeletal applications. Full article
(This article belongs to the Special Issue Revolutionizing Medical Image Analysis with Deep Learning)

18 pages, 9876 KiB  
Article
Classification of the Relative Position between the Third Molar and the Inferior Alveolar Nerve Using a Convolutional Neural Network Based on Transfer Learning
by Shih-Lun Chen, He-Sheng Chou, Yueh Chuo, Yuan-Jin Lin, Tzu-Hsiang Tsai, Cheng-Hao Peng, Ai-Yun Tseng, Kuo-Chen Li, Chiung-An Chen and Tsung-Yi Chen
Electronics 2024, 13(4), 702; https://doi.org/10.3390/electronics13040702 - 9 Feb 2024
Viewed by 1020
Abstract
In recent years, there has been a significant increase in collaboration between medical imaging and artificial intelligence technology. The use of automated techniques for detecting medical symptoms has become increasingly prevalent. However, there has been a lack of research on the relationship between impacted teeth and the inferior alveolar nerve (IAN) in DPR images. Severe compression of teeth against the IAN may necessitate nerve canal treatment. To reduce the occurrence of such events, this study aims to develop an auxiliary detection system capable of precisely locating the relative positions of the IAN and impacted teeth through object detection and image enhancement. This system is designed to shorten the duration of examinations for dentists while concurrently mitigating the chances of diagnostic errors. The innovations in this research are as follows: (1) using YOLO_v4 to identify impacted teeth and the IAN in DPR images achieves an accuracy of 88%, whereas the algorithm developed in this study achieves an accuracy of 93%. (2) Image enhancement is used to expand the dataset, improving disease-detection accuracy by 2–3%. (3) The segmentation technique proposed in this study surpasses previous methods, achieving 6% higher accuracy in dental diagnosis. Full article
(This article belongs to the Special Issue Revolutionizing Medical Image Analysis with Deep Learning)

16 pages, 28030 KiB  
Article
EnNuSegNet: Enhancing Weakly Supervised Nucleus Segmentation through Feature Preservation and Edge Refinement
by Xiaohui Chen, Qisheng Ruan, Lingjun Chen, Guanqun Sheng and Peng Chen
Electronics 2024, 13(3), 504; https://doi.org/10.3390/electronics13030504 - 25 Jan 2024
Viewed by 753
Abstract
Nucleus segmentation plays a crucial role in tissue pathology image analysis. Despite significant progress in cell nucleus image segmentation algorithms based on fully supervised learning, the large number and small size of cell nuclei pose a considerable challenge in terms of the substantial workload required for label annotation. This difficulty in acquiring datasets has become exceptionally challenging. This paper proposes a novel weakly supervised nucleus segmentation method that only requires point annotations of the nuclei. The technique is an encoder–decoder network which enhances the weakly supervised nucleus segmentation performance (EnNuSegNet). Firstly, we introduce the Feature Preservation Module (FPM) in both encoder and decoder, which preserves more low-level features from the shallow layers of the network during the early stages of training while enhancing the network’s expressive capability. Secondly, we incorporate a Scale-Aware Module (SAM) in the bottleneck part of the network to improve the model’s perception of cell nuclei at different scales. Lastly, we propose a training strategy for nucleus edge regression (NER), which guides the model to optimize the segmented edges during training, effectively compensating for the loss of nucleus edge information and achieving higher-quality nucleus segmentation. Experimental results on two publicly available datasets demonstrate that our proposed method outperforms state-of-the-art approaches, with improvements of 2.02%, 1.41%, and 1.59% in terms of F1 score, Dice coefficient, and Average Jaccard Index (AJI), respectively. This indicates the effectiveness of our method in improving segmentation performance. Full article
(This article belongs to the Special Issue Revolutionizing Medical Image Analysis with Deep Learning)

19 pages, 4144 KiB  
Article
Finger Vein Recognition Using DenseNet with a Channel Attention Mechanism and Hybrid Pooling
by Nikesh Devkota and Byung Wook Kim
Electronics 2024, 13(3), 501; https://doi.org/10.3390/electronics13030501 - 25 Jan 2024
Cited by 1 | Viewed by 945
Abstract
This paper proposes SE-DenseNet-HP, a novel finger vein recognition model that integrates DenseNet with a squeeze-and-excitation (SE)-based channel attention mechanism and a hybrid pooling (HP) mechanism. To distinctively separate the finger vein patterns from their background, original finger vein images are preprocessed using region-of-interest (ROI) extraction, contrast enhancement, median filtering, adaptive thresholding, and morphological operations. The preprocessed images are then fed to SE-DenseNet-HP for robust feature extraction and recognition. The DenseNet-based backbone improves information flow by enhancing feature propagation and encouraging feature reuse through feature map concatenation. The SE module utilizes a channel attention mechanism to emphasize the important features related to finger vein patterns while suppressing less important ones. HP architecture used in the transitional blocks of SE-DenseNet-HP concatenates the average pooling method with a max pooling strategy to preserve both the most discriminative and contextual information. SE-DenseNet-HP achieved recognition accuracy of 99.35% and 93.28% on the good-quality FVUSM and HKPU datasets, respectively, surpassing the performance of existing methodologies. Additionally, it demonstrated better generalization performance on the FVUSM, HKPU, UTFVP, and MMCBNU_6000 datasets, achieving remarkably low equal error rates (EERs) of 0.03%, 1.81%, 0.43%, and 1.80%, respectively. Full article
(This article belongs to the Special Issue Revolutionizing Medical Image Analysis with Deep Learning)
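The hybrid-pooling idea above can be sketched roughly as follows. This is our simplification, not the paper's implementation: a 1-D feature row with non-overlapping windows stands in for the 2-D feature maps pooled in the network's transition blocks, and the function name is invented.

```python
# Hypothetical sketch of hybrid pooling: emit both the average (contextual)
# and the maximum (most discriminative) response of each pooling window,
# rather than discarding one of them.

def hybrid_pool_1d(row, window=2):
    """For each non-overlapping window, emit (average, maximum)."""
    pooled = []
    for i in range(0, len(row) - window + 1, window):
        chunk = row[i:i + window]
        pooled.append((sum(chunk) / window, max(chunk)))
    return pooled

print(hybrid_pool_1d([1.0, 3.0, 2.0, 2.0]))  # [(2.0, 3.0), (2.0, 2.0)]
```

In the paper the two pooled maps are concatenated along the channel axis, so later layers can weigh contextual and peak responses separately.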

19 pages, 8895 KiB  
Article
Improving Generation and Evaluation of Long Image Sequences for Embryo Development Prediction
by Pedro Celard, Adrián Seara Vieira, José Manuel Sorribes-Fdez, Eva Lorenzo Iglesias and Lourdes Borrajo
Electronics 2024, 13(3), 476; https://doi.org/10.3390/electronics13030476 - 23 Jan 2024
Viewed by 732
Abstract
Generating synthetic time series data, such as videos, presents a formidable challenge as complexity increases when it is necessary to maintain a specific distribution of shown stages. One such case is embryonic development, where prediction and categorization are crucial for anticipating future outcomes. To address this challenge, we propose a Siamese architecture based on diffusion models to generate predictive long-duration embryonic development videos and an evaluation method to select the most realistic video in a non-supervised manner. We validated this model using standard metrics, such as Fréchet inception distance (FID), Fréchet video distance (FVD), structural similarity (SSIM), peak signal-to-noise ratio (PSNR), and mean squared error (MSE). The proposed model generates videos of up to 197 frames with a size of 128×128, considering real input images. Regarding the quality of the videos, all results showed improvements over the default model (FID = 129.18, FVD = 802.46, SSIM = 0.39, PSNR = 28.63, and MSE = 97.46). On the coherence of the stages, a global stage mean squared error of 9.00 was achieved versus the results of 13.31 and 59.3 for the default methods. The proposed technique produces more accurate videos and successfully removes cases that display sudden movements or changes. Full article
(This article belongs to the Special Issue Revolutionizing Medical Image Analysis with Deep Learning)
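Two of the reported video-quality metrics, MSE and PSNR, are simple enough to pin down in a short sketch. The pixel values below are invented toy data, and we assume 8-bit images (peak value 255), which is the usual convention; the paper's FID, FVD, and SSIM metrics require learned features or windowed statistics and are not reproduced here.

```python
import math

# MSE between two images (flattened pixel lists) and the derived PSNR:
# PSNR = 10 * log10(MAX^2 / MSE), with MAX = 255 for 8-bit images.

def mse(a, b):
    """Mean squared error between two equal-length pixel sequences."""
    return sum((x - y) ** 2 for x, y in zip(a, b)) / len(a)

def psnr(a, b, peak=255.0):
    """Peak signal-to-noise ratio in dB; infinite for identical inputs."""
    m = mse(a, b)
    return float("inf") if m == 0 else 10.0 * math.log10(peak ** 2 / m)

reference = [50, 60, 70, 80]
generated = [52, 58, 70, 81]
print(round(mse(reference, generated), 2), round(psnr(reference, generated), 2))
```

Higher PSNR (lower MSE) means the generated frame is pixel-wise closer to the reference, which is why the paper reports PSNR rising alongside falling MSE.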

16 pages, 3127 KiB  
Article
A Convolutional Block Base Architecture for Multiclass Brain Tumor Detection Using Magnetic Resonance Imaging
by Muneeb A. Khan and Heemin Park
Electronics 2024, 13(2), 364; https://doi.org/10.3390/electronics13020364 - 15 Jan 2024
Cited by 1 | Viewed by 1261
Abstract
In the domain of radiological diagnostics, accurately detecting and classifying brain tumors from magnetic resonance imaging (MRI) scans presents significant challenges, primarily due to the complex and diverse manifestations of tumors in these scans. In this paper, a convolutional-block-based architecture has been proposed for the detection of multiclass brain tumors using MRI scans. Leveraging the strengths of CNNs, our proposed framework demonstrates robustness and efficiency in distinguishing between different tumor types. Extensive evaluations on three diverse datasets underscore the model’s exceptional diagnostic accuracy, with an average accuracy rate of 97.52%, precision of 97.63%, recall of 97.18%, specificity of 98.32%, and F1-score of 97.36%. These results outperform contemporary methods, including state-of-the-art (SOTA) models such as VGG16, VGG19, MobileNet, EfficientNet, ResNet50, Xception, and DenseNet121. Furthermore, its adaptability across different MRI modalities underlines its potential for broad clinical application, offering a significant advancement in the field of radiological diagnostics and brain tumor detection. Full article
(This article belongs to the Special Issue Revolutionizing Medical Image Analysis with Deep Learning)
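The accuracy, precision, recall, specificity, and F1 figures quoted above all derive from confusion-matrix counts. A hedged sketch for the binary case (the counts below are invented; the paper's multiclass scores would be averaged over per-class binary computations):

```python
# Standard classification metrics from binary confusion-matrix counts:
# tp = true positives, fp = false positives, fn = false negatives,
# tn = true negatives.

def metrics(tp, fp, fn, tn):
    precision   = tp / (tp + fp)   # of predicted positives, how many are right
    recall      = tp / (tp + fn)   # of actual positives, how many are found
    specificity = tn / (tn + fp)   # of actual negatives, how many are found
    f1 = 2 * precision * recall / (precision + recall)
    return precision, recall, specificity, f1

p, r, s, f1 = metrics(tp=90, fp=10, fn=5, tn=95)
print(round(p, 3), round(r, 3), round(s, 3), round(f1, 3))
```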

14 pages, 6116 KiB  
Article
Machine and Deep Learning Algorithms for COVID-19 Mortality Prediction Using Clinical and Radiomic Features
by Laura Verzellesi, Andrea Botti, Marco Bertolini, Valeria Trojani, Gianluca Carlini, Andrea Nitrosi, Filippo Monelli, Giulia Besutti, Gastone Castellani, Daniel Remondini, Gianluca Milanese, Stefania Croci, Nicola Sverzellati, Carlo Salvarani and Mauro Iori
Electronics 2023, 12(18), 3878; https://doi.org/10.3390/electronics12183878 - 14 Sep 2023
Cited by 1 | Viewed by 1111
Abstract
Aim: Machine learning (ML) and deep learning (DL) predictive models have been employed widely in clinical settings. By providing an objective measure that can be shared among different centers, they enable the building of more robust multicentric studies. This study aimed to propose a user-friendly and low-cost tool for COVID-19 mortality prediction using both an ML and a DL approach. Method: We enrolled 2348 patients from several hospitals in the Province of Reggio Emilia. Overall, 19 clinical features were provided by the Radiology Units of Azienda USL-IRCCS of Reggio Emilia, and 5892 radiomic features were extracted from each COVID-19 patient's high-resolution computed tomography. We built and trained two classifiers to predict COVID-19 mortality: a machine learning algorithm, a support vector machine (SVM), and a deep learning model, a feedforward neural network (FNN). To evaluate the impact of the different feature sets on the final performance of the classifiers, we repeated the training session three times, first using only clinical features, then only radiomic features, and finally combining both. Results: We obtained similar performances for the machine learning and deep learning algorithms, with the best area under the receiver operating characteristic (ROC) curve, or AUC, obtained by exploiting both clinical and radiomic information: 0.803 for the machine learning model and 0.864 for the deep learning model. Conclusions: Our work, performed on large and heterogeneous datasets (i.e., data from different CT scanners), confirms the results obtained in the recent literature. Such algorithms have the potential to be included in a clinical practice framework, since they can be applied not only to COVID-19 mortality prediction but also to other classification problems such as diabetes prediction, asthma prediction, and cancer metastasis prediction. Our study shows that the lesion inhomogeneity depicted by radiomic features, combined with clinical information, is relevant for COVID-19 mortality prediction. Full article
(This article belongs to the Special Issue Revolutionizing Medical Image Analysis with Deep Learning)

Review


29 pages, 3595 KiB  
Review
Machine Learning Empowering Personalized Medicine: A Comprehensive Review of Medical Image Analysis Methods
by Irena Galić, Marija Habijan, Hrvoje Leventić and Krešimir Romić
Electronics 2023, 12(21), 4411; https://doi.org/10.3390/electronics12214411 - 25 Oct 2023
Cited by 4 | Viewed by 5295
Abstract
Artificial intelligence (AI) advancements, especially deep learning, have significantly improved medical image processing and analysis in various tasks such as disease detection, classification, and anatomical structure segmentation. This work overviews fundamental concepts, state-of-the-art models, and publicly available datasets in the field of medical imaging. First, we introduce the types of learning problems commonly employed in medical image processing and then proceed to present an overview of commonly used deep learning methods, including convolutional neural networks (CNNs), recurrent neural networks (RNNs), and generative adversarial networks (GANs), with a focus on the image analysis task they are solving, including image classification, object detection/localization, segmentation, generation, and registration. Further, we highlight studies conducted in various application areas, encompassing neurology, brain imaging, retinal analysis, pulmonary imaging, digital pathology, breast imaging, cardiac imaging, bone analysis, abdominal imaging, and musculoskeletal imaging. The strengths and limitations of each method are carefully examined, and the paper identifies pertinent challenges that still require attention, such as the limited availability of annotated data, variability in medical images, and the interpretability issues. Finally, we discuss future research directions with a particular focus on developing explainable deep learning methods and integrating multi-modal data. Full article
(This article belongs to the Special Issue Revolutionizing Medical Image Analysis with Deep Learning)

14 pages, 5640 KiB  
Review
AI-Assisted CBCT Data Management in Modern Dental Practice: Benefits, Limitations and Innovations
by Renáta Urban, Sára Haluzová, Martin Strunga, Jana Surovková, Michaela Lifková, Juraj Tomášik and Andrej Thurzo
Electronics 2023, 12(7), 1710; https://doi.org/10.3390/electronics12071710 - 4 Apr 2023
Cited by 24 | Viewed by 6706
Abstract
Within the next decade, artificial intelligence (AI) will fundamentally transform the workflow of modern dental practice. This paper reviews the innovations and new roles of dental assistants in CBCT data management with the support of AI. Cone beam computed tomography (CBCT) is, together with intraoral 3D scans and 3D facial scans, a commonly used 3D diagnostic tool in a modern digital dental practice, and its use in 3D data management brings new roles for dental assistants. This paper provides an overview of the potential benefits of AI implementation for semiautomated segmentation in standard medical diagnostic workflows in dental practice. It discusses whether AI tools can enable healthcare professionals to increase their reliability, effectiveness, and usefulness, and addresses the potential limitations and errors that may occur. The paper concludes that current AI solutions can improve existing digital workflows, including CBCT data management. Automated CBCT segmentation is one of the current trends and innovations; it can assist professionals in obtaining an accurate 3D image in less time, thus enhancing the efficiency of the whole process. The segmentation of CBCT serves as a helpful tool for treatment planning as well as for communicating the problem to the patient in an understandable way. This paper highlights a high risk of bias due to inadequate sample sizes and incomplete reporting in many studies, and proposes enhancing dental workflow efficiency and accuracy through AI-supported CBCT data management. Full article
(This article belongs to the Special Issue Revolutionizing Medical Image Analysis with Deep Learning)
