AI and Machine Learning in Medical Image Processing: Innovations and Applications

A special issue of Bioengineering (ISSN 2306-5354). This special issue belongs to the section "Biosignal Processing".

Deadline for manuscript submissions: 30 April 2026 | Viewed by 9154

Special Issue Editors


Dr. Huseyin Kusetogullari
Guest Editor
Department of Computer Science, Blekinge Institute of Technology, Karlskrona, Sweden
Interests: AI; machine learning; medical image processing and analysis

Dr. Md. Haidar Sharif
Guest Editor
Department of Mathematics and Computer Science, St. Mary's College of Maryland, St. Mary's City, MD 20686, USA
Interests: AI; machine learning; computer vision; deep learning; data science

Dr. Harisu Abdullahi Shehu
Guest Editor
School of Engineering and Computer Science, Victoria University of Wellington, Wellington 6012, New Zealand
Interests: artificial intelligence; affective computing; computer vision; machine learning; natural language processing

Special Issue Information

Dear Colleagues,

Artificial intelligence (AI) and machine learning (ML) have brought unprecedented changes to the field of medical image processing, offering new opportunities to improve diagnosis, treatment planning, and patient outcomes. This Special Issue, titled "AI and Machine Learning in Medical Image Processing: Innovations and Applications", aims to provide an overview of cutting-edge research, emerging methodologies, and impactful applications at the intersection of AI, ML, and medical imaging.

The goal of this issue is to showcase innovative approaches that address key challenges in medical image acquisition, analysis, interpretation, and clinical integration. Contributions highlight breakthroughs across a range of imaging modalities, including MRI, CT, ultrasound, and histopathology imaging, and cover topics such as segmentation, classification, detection, prediction models, and AI-augmented decision support systems.

By bringing together a diverse collection of theoretical advancements, novel algorithms, and real-world case studies, this Special Issue seeks to foster interdisciplinary collaboration among researchers, clinicians, and industry practitioners. Ultimately, it aims to illuminate how AI and ML technologies are used in medical imaging, paving the way for more accurate diagnoses, personalized therapies, and improved healthcare delivery.

We invite contributions that advance the understanding and application of AI and ML in medical imaging, address ethical and regulatory challenges, and propose visions for the future of this rapidly evolving field.

Dr. Huseyin Kusetogullari
Dr. Md. Haidar Sharif
Dr. Harisu Abdullahi Shehu
Guest Editors

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in. Once registered, proceed to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the Special Issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 250 words) can be sent to the Editorial Office for assessment.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Bioengineering is an international peer-reviewed open access monthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 2700 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • artificial intelligence (AI) in medical image processing
  • machine learning (ML) in medical image processing
  • deep learning in medical image processing
  • computer vision in medical imaging
  • medical image segmentation
  • diagnostic imaging
  • disease detection in medical images
  • clinical decision support systems
  • histopathological image analysis
  • advanced techniques for cancer detection, segmentation, and classification
  • applications in breast, prostate, brain, and other cancer imaging using AI
  • detection and pattern recognition in medical imaging
  • imaging modalities (MRI, CT, ultrasound, X-ray)
  • explainable AI (XAI) in healthcare
  • healthcare technology and innovation
  • predictive modeling in medical imaging
  • generative AI and large language models (LLMs) for medical imaging
  • big data analytics in healthcare
  • automated diagnosis systems
  • biomedical engineering applications
  • ethics and fairness in medical AI
  • systematic reviews in medical image analysis
  • medical imaging datasets and benchmarking

Benefits of Publishing in a Special Issue

  • Ease of navigation: Grouping papers by topic helps scholars navigate broad scope journals more efficiently.
  • Greater discoverability: Special Issues support the reach and impact of scientific research. Articles in Special Issues are more discoverable and cited more frequently.
  • Expansion of research network: Special Issues facilitate connections among authors, fostering scientific collaborations.
  • External promotion: Articles in Special Issues are often promoted through the journal's social media, increasing their visibility.
  • Reprint: MDPI Books provides the opportunity to republish successful Special Issues in book format, both online and in print.

Further information on MDPI's Special Issue policies is available on the MDPI website.

Published Papers (10 papers)


Research


13 pages, 6504 KB  
Article
MyoNet: Deep Learning-Based Myocardial Strain Quantification from Cine Cardiac MRI
by Dayeong An, Andrew Nencka, Patrick Clarysse, Pierre Croisille, Carmen Bergom and El-Sayed Ibrahim
Bioengineering 2026, 13(3), 310; https://doi.org/10.3390/bioengineering13030310 - 7 Mar 2026
Viewed by 111
Abstract
This study developed and assessed MyoNet, a deep learning (DL)-based network for measuring regional myocardial function from cine cardiac magnetic resonance (CMR) images, and compared its efficacy with ResMyoNet as an efficient alternative to the SinMod-derived reference. MyoNet was tested alongside ResMyoNet on datasets from Dahl salt-sensitive rat models undergoing radiation therapy (RT). Both networks were designed to extract displacement maps from cine images, were specifically optimized for detailed myocardial deformation, employed advanced convolution operations with alternating kernel sizes for spatial and temporal analysis, and used robust loss functions. MyoNet demonstrated superior performance in myocardial strain measurement, achieving high consistency with the SinMod-derived reference strains. It outperformed ResMyoNet, achieving higher performance metrics, including SSIM of 0.961 and 0.960, ICC of 0.973 and 0.975, and Pearson CC of 0.973 and 0.953 for circumferential (Ecc) and radial (Err) strains, respectively. Its accuracy and efficiency in generating strain measurements were validated through comprehensive statistical analyses. MyoNet offers a significant advancement in myocardial strain analysis from cine CMR images, potentially revolutionizing cardiac imaging in pre-clinical studies. Its ability to provide detailed and reliable measurements positions it as a valuable tool for clinical applications, particularly in monitoring the cardiac health of cancer patients.

24 pages, 4604 KB  
Article
Quantification of Craniofacial Growth Pattern Based on Deep Learning
by Ziyi Hu, Yuyanran Zhang, Ningtao Liu, Xin Gao, Ziyu Huang, Guanglin Wu, Zhiyong Zhang and Shuang Wang
Bioengineering 2026, 13(3), 277; https://doi.org/10.3390/bioengineering13030277 - 27 Feb 2026
Viewed by 231
Abstract
Background: Childhood and adolescence constitute a critical period for craniofacial growth. Understanding its developmental patterns is essential for clinical decision-making in orthodontics and maxillofacial surgery. Traditional cephalometric analysis relies on manual landmarking, which oversimplifies complex morphology and introduces subjectivity. Although deep learning, a key artificial intelligence (AI) technology, has demonstrated remarkable performance in image analysis and classification, most methods still depend on manual annotations during training, perpetuating subjectivity and limiting model generalizability and robustness on large datasets. This hinders the development of objective, comprehensive methods to quantify craniofacial growth that account for its multi-tissue complexity. Methods: To address these limitations, this study developed an end-to-end deep learning framework based on lateral cephalometric radiographs from 41,625 individuals aged 4–18 years. Without relying on manual annotations, the model is designed to autonomously extract dynamic imaging features associated with continuous age intervals in craniofacial development and further discern features related to sexual dimorphism. Gradient-weighted Class Activation Mapping (Grad-CAM) was employed to visualize the learned features, generating population-averaged saliency maps that highlight age-related and sex-related patterns. Furthermore, we introduced two novel quantitative metrics, the Age-related Saliency Index (ASI) and the Sex-related Saliency Index (SSI), to evaluate the significance of developmental and dimorphic characteristics in key craniofacial regions. Results: Age-related saliency maps extended the focus from external contours to internal anatomical details of the bones, intuitively visualizing the relative importance of multiple bone regions during dynamic development, with the ASI providing a quantitative prioritization of these regions. The SSI quantified the dynamic evolution of sexual dimorphism, demonstrating that early-stage differences were widely distributed across cranial bones and gradually became concentrated in the mandibular region by adulthood. Conclusions: This study established an end-to-end deep learning framework for analyzing large-scale lateral cephalometric radiographs. By generating age- and sex-related average saliency maps and their corresponding quantitative indices, we visualized and quantified the spatiotemporal growth dynamics and sexual dimorphism across distinct craniofacial skeletal regions throughout development. These findings not only validate established developmental theories but also provide novel insights into the coordinated growth patterns of craniofacial bones and sex-specific radiological characteristics, offering clinicians objective quantitative references for assessing developmental stages and guiding the timing of interventions targeting specific craniofacial regions.

16 pages, 1611 KB  
Article
Bridging Species with AI: A Cross-Species Deep Learning Model for Fracture Detection and Beyond
by Hanya T. Ahmed, Dagmar Berner, Qianni Zhang, Kristien Verheyen, Francisco Llabres-Diaz, Vanessa G. Peter and Yu-Mei Chang
Bioengineering 2026, 13(2), 213; https://doi.org/10.3390/bioengineering13020213 - 13 Feb 2026
Viewed by 422
Abstract
Fractures are a leading cause of morbidity and mortality in Thoroughbred racehorses, posing a significant threat to their welfare and careers. This study introduces a deep learning model specifically designed to facilitate fracture detection in equine athletes. By leveraging extensive training on human fracture data and refining the model with equine imaging, it highlights the transformative potential of transfer learning across species and medical contexts. This approach is not limited to equine fractures but could be adapted for use in detecting injuries or conditions in other veterinary species and even human healthcare applications. A comprehensive databank of radiographs, sourced from public archives and equine hospitals, was curated to encompass diverse conditions (fracture and non-fracture), ensuring robust pattern recognition. The architecture integrates a Vision Transformer for global context modelling with a ResNet backbone and a loss function designed to optimize local feature extraction and cross-species adaptability. The pipeline achieved 96.7% accuracy for modality classification, 97.2% accuracy for projection recognition, and fracture localization intersection over union values of 0.71–0.84 across equine datasets. This work bridges advancements in human and veterinary medicine, opening pathways for AI-driven solutions that extend beyond fractures, fostering improved diagnostic precision and broader applications across species (felines, canines, etc.). By integrating advanced imaging techniques with AI, this study aims to set a foundation for more comprehensive and versatile health monitoring systems.

18 pages, 7315 KB  
Article
Age Estimation of the Cervical Vertebrae Region Using Deep Learning
by Zhiyong Zhang, Ningtao Liu, Ziyi Hu, Zhang Guo, Wenfan Jin and Chunxia Yan
Bioengineering 2026, 13(1), 7; https://doi.org/10.3390/bioengineering13010007 - 22 Dec 2025
Cited by 1 | Viewed by 627
Abstract
Since skeletal development is largely completed by adulthood, it is difficult for traditional methods to capture subtle age-related structural changes in bones and surrounding tissues. Recent advances in deep learning have demonstrated remarkable potential in medical image-based age estimation. The cervical vertebrae, as captured in lateral cephalometric radiographs (LCR), have shown particular value in such tasks. To systematically investigate the contribution of different vertebral representations to age estimation, we developed four distinct input modes: (1) Contour (C); (2) Mask (M); (3) Cervical Vertebrae (CV); and (4) Cervical Vertebrae Region (SR). Using a large-scale LCR dataset of 20,174 subjects aged 4–40 years, grouped into 5-year intervals, we evaluated these modes with deep learning models. The Mean Absolute Error (MAE) was used to evaluate performance. Results indicated that the SR mode achieved the lowest overall MAE, particularly for the C1–C4 combination, followed by CV, while the C and M modes showed similar and poorer performance. For subjects younger than 25 years, MAEs for individual vertebrae (C1–2, C3, C4) were less than 5 years across all modes; however, in the 26–40 years group, MAEs for the C and M modes exceeded 10 years, whereas the CV and SR modes remained below 10 years for most combinations. Combining vertebrae consistently improved accuracy over individual ones, with continuous combinations (e.g., C1–2 + C3) outperforming discontinuous ones (e.g., C1–2 + C4). Visualization of age-related salience revealed that salient regions varied by input mode and expanded with increased information content. These findings underscore the critical importance of incorporating peripheral soft tissue and comprehensive vertebral context for accurate age estimation across a wide age spectrum.

29 pages, 1656 KB  
Article
An Empirical Evaluation of Low-Rank Adapted Vision–Language Models for Radiology Image Captioning
by Mahmudul Hoque, Raisa Nusrat Chowdhury, Md Rakibul Hasan, Ojonugwa Oluwafemi Ejiga Peter, Fahmi Khalifa and Md Mahmudur Rahman
Bioengineering 2025, 12(12), 1330; https://doi.org/10.3390/bioengineering12121330 - 5 Dec 2025
Cited by 1 | Viewed by 1410
Abstract
Rapidly growing medical imaging volumes have increased radiologist workloads, creating demand for automated tools that support interpretation and reduce reporting delays. Vision-language models (VLMs) can generate clinically relevant captions to accelerate report drafting, yet their varying parameter scales require systematic evaluation for clinical utility. This study evaluated ten multimodal models fine-tuned on the Radiology Objects in Context version 2 (ROCOv2) dataset containing 116,635 images across eight modalities. We compared four Large VLMs (LVLMs) including LLaVA variants and IDEFICS-9B against four Small VLMs (SVLMs) including MoonDream2, Qwen variants, and SmolVLM, alongside two fully fine-tuned baseline architectures (VisionGPT2 and CNN-Transformer). Low-Rank Adaptation (LoRA), applied to fewer than 1% of selected model parameters, proved optimal among adaptation strategies, outperforming broader LoRA configurations. Models were assessed on relevance (semantic similarity) and factuality (concept-level correctness) metrics. Performance showed clear stratification: LVLMs (0.273 to 0.317 overall), SVLMs (0.188 to 0.279), and baselines (0.154 to 0.177). LLaVA-Mistral-7B achieved the highest performance with relevance and factuality scores of 0.516 and 0.118, respectively, substantially exceeding the VisionGPT2 baseline (0.325, 0.028). Among the SVLMs, MoonDream2 demonstrated competitive relevance (0.466), approaching the performance of some LVLMs despite its smaller size. To investigate performance enhancement strategies for underperforming SVLMs, we prepended predicted imaging modality labels at inference time, which yielded variable results. These findings provide quantitative benchmarks for VLM selection in medical imaging, demonstrating that while model scale influences performance, architectural design and targeted adaptation enable select compact models to achieve competitive results.

23 pages, 3035 KB  
Article
Predicting Major Depressive Disorder Using Neural Networks from Spectral Measures of EEG Data
by Igor Kozulin, Ekaterina Merkulova, Vasiliy Savostyanov, Haonan Shi, Xinyi Wang, Andrey Bocharov and Alexander Savostyanov
Bioengineering 2025, 12(11), 1251; https://doi.org/10.3390/bioengineering12111251 - 16 Nov 2025
Cited by 2 | Viewed by 1022
Abstract
Processing electroencephalogram (EEG) data using neural networks is becoming increasingly important in modern medicine. This study introduces the development of a neural network method using a combination of psychological questionnaire data and spectral characteristics of resting-state EEG. The data were collected from 71 individuals: 42 healthy and 29 with major depressive disorder (MDD). We evaluated four classes of algorithms—traditional machine learning, deep learning (LSTM), ablation analysis, and feature importance analysis—for two primary tasks: binary classification (healthy vs. MDD) and regression for predicting Beck Depression Inventory (BDI) scores. Our results demonstrate that the superiority of a given method is task-dependent. For regression, an LSTM network applied to delta-rhythm EEG data achieved a breakthrough performance of R2 = 0.742 (MAE = 6.114), representing an 86% improvement over traditional Ridge regression. Ablation studies identified delta and alpha rhythms as the most informative neurophysiological biomarkers. Furthermore, feature importance analysis revealed a triad of dominant psychometric predictors: ruminative thinking (31.2%), age (27.9%), and hostility (18.5%), which collectively accounted for 75.2% of the feature importance in predicting severity. LSTM on spectral EEG data provides a superior quantitative assessment of depression severity, while Logistic Regression on psychometric or EEG data offers a highly reliable tool for screening and confirmatory diagnosis. This methodology provides a robust, objective framework for MDD diagnosis that is independent of a patient's subjective self-assessment, thus facilitating enhanced clinical decision-making and personalized treatment monitoring.

10 pages, 452 KB  
Article
Assessment of Apical Patency in Permanent First Molars Using Deep Learning on CBCT-Derived Pseudopanoramic Images: A Retrospective Study
by Suna Deniz Bostanci, Zeliha Hatipoğlu Palaz, Kevser Özdem Karaca, Muhammet Ali Akcayol and Mehmet Bani
Bioengineering 2025, 12(11), 1233; https://doi.org/10.3390/bioengineering12111233 - 11 Nov 2025
Viewed by 664
Abstract
Background: Assessment of root development and apical closure is critical in dental disciplines, including endodontics, trauma management, and age estimation. This study aims to leverage advances in deep learning Convolutional Neural Networks (CNNs) to automatically evaluate the apical region status of permanent first molars, highlighting a digital health application of AI in dentistry. Methods: In this retrospective study, 262 Cone Beam Computed Tomography (CBCT) scans were reviewed, and 147 anonymized dental images were cropped from pseudopanoramic radiographs, including standard measurements. Tooth regions were resized to 471 × 1075 pixels and split into training (80%) and test (20%) sets. CNN performance was assessed using accuracy, precision, recall, F1-score, and receiver operating characteristic (ROC) curves with area under the curve (AUC), demonstrating AI-based image analysis in a dental context. Results: Precision, recall, and F1-scores were 0.79 for open roots and 0.81 for closed roots, with a macro average of 0.80 across all metrics. The overall accuracy and AUC were also 0.80. Conclusions: These results suggest that CNNs can be effectively used to assess apical patency from ROI images derived from pseudopanoramic radiographs.

15 pages, 6292 KB  
Article
Enhanced Blood Cell Detection in YOLOv11n Using Gradient Accumulation and Loss Reweighting
by Min Feng and Juncai Xu
Bioengineering 2025, 12(11), 1188; https://doi.org/10.3390/bioengineering12111188 - 31 Oct 2025
Cited by 1 | Viewed by 1040
Abstract
Automated blood cell detection is of significant importance for the efficient and accurate diagnosis of hematological diseases, advancing clinical practice in hematology and providing patients with more timely medical intervention. In this study, the YOLOv11n model was optimized by integrating gradient accumulation and loss reweighting to improve its detection performance for blood cells in clinical images. The optimized model achieved an mAP50 of 0.9356 and an mAP50-95 of 0.6620, with precision and recall exceeding those of existing methods. It effectively addresses issues such as dense cell distribution, cell overlap, and image artifacts, making it well suited to real-time clinical applications. Ablation experiments demonstrate a synergistic effect between gradient accumulation and loss reweighting, which improves detection accuracy without increasing the computational burden. These results indicate that the optimized YOLOv11n model holds promise as an automated blood cell detection tool with the potential to integrate into clinical workflows.

27 pages, 3413 KB  
Article
DermaMamba: A Dual-Branch Vision Mamba Architecture with Linear Complexity for Efficient Skin Lesion Classification
by Zhongyu Yao, Yuxuan Yan, Zhe Liu, Tianhang Chen, Ling Cho, Yat-Wah Leung, Tianchi Lu, Wenjin Niu, Zhenyu Qiu, Yuchen Wang, Xingcheng Zhu and Ka-Chun Wong
Bioengineering 2025, 12(10), 1030; https://doi.org/10.3390/bioengineering12101030 - 26 Sep 2025
Cited by 1 | Viewed by 1531
Abstract
Accurate skin lesion classification is crucial for the early detection of malignant lesions, including melanoma, as well as improved patient outcomes. While convolutional neural networks (CNNs) excel at capturing local morphological features, they struggle with the global context modeling essential for comprehensive lesion assessment. Vision transformers address this limitation but suffer from quadratic computational complexity O(n2), hindering deployment in resource-constrained clinical environments. We propose DermaMamba, a novel dual-branch fusion architecture that integrates CNN-based local feature extraction with Vision Mamba (VMamba) for efficient global context modeling with linear complexity O(n). Our approach introduces a state space fusion mechanism with adaptive weighting that dynamically balances local and global features based on lesion characteristics. We incorporate medical domain knowledge through multi-directional scanning strategies and ABCDE (Asymmetry, Border irregularity, Color variation, Diameter, Evolution) rule feature integration. Extensive experiments on the ISIC dataset show that DermaMamba achieves 92.1% accuracy, 91.7% precision, 91.3% recall, and a 91.5% macro-F1 score, outperforming the best baseline by 2.0% accuracy with a 2.3× inference speedup and 40% memory reduction. The improvements are statistically significant based on a significance test (p < 0.001, Cohen's d > 0.8), with greater than 79% confidence also preserved on challenging boundary cases. These results establish DermaMamba as an effective solution bridging diagnostic accuracy and computational efficiency for clinical deployment.

Review


23 pages, 2696 KB  
Review
Diagnostic Imaging of the Skeletal System: Overview of Applications in Human and Veterinary Medicine
by Ana Javor, Nikola Štoković, Natalia Ivanjko, Iva Lukša, Hrvoje Capak and Zoran Vrbanac
Bioengineering 2025, 12(12), 1358; https://doi.org/10.3390/bioengineering12121358 - 13 Dec 2025
Viewed by 1049
Abstract
This paper provides a comprehensive overview of the application of various radiological modalities, with a critical comparison between human and veterinary medicine. The modalities discussed include conventional radiography, dual-energy X-ray absorptiometry (DXA), computed tomography (CT), magnetic resonance imaging (MRI), ultrasound (US), quantitative ultrasound (QUS), positron emission tomography-computed tomography (PET-CT), and micro- and nano-computed tomography (micro-CT, nano-CT) in clinical practice and basic research of the skeletal system. Radiological imaging plays a crucial role in the diagnosis, monitoring, and research of skeletal system disorders in both human and veterinary medicine. In preclinical research, advanced diagnostic imaging modalities such as micro-CT and nano-CT allow for 3D quantification of trabecular and cortical bone microarchitecture for studies in bone biology, regenerative medicine, and pharmacological research. Furthermore, the integration of artificial intelligence is advancing image interpretation, precision diagnostics, and disease tracking. Despite their broad utility, imaging modalities must be selected based on clinical indication, species, age, and anatomical region, with consideration of radiation dose, cost, and availability, especially in remote regions. For this reason, clinicians and radiologists remain an irreplaceable part of diagnostic imaging.
