Review

The Expanding Frontier: The Role of Artificial Intelligence in Pediatric Neuroradiology

1 Functional and Interventional Neuroradiology Unit, Bambino Gesù Children’s Hospital, IRCCS (Istituto di Ricovero e Cura a Carattere Scientifico), 00165 Rome, Italy
2 Neuroradiology Unit, NESMOS (Neuroscience, Mental Health and Sensory Organs) Department, Sant’Andrea Hospital, La Sapienza University, Via di Grottarossa, 1035-1039, 00189 Rome, Italy
3 Medical Physics Department, Bambino Gesù Children’s Hospital, 00165 Rome, Italy
* Author to whom correspondence should be addressed.
Children 2025, 12(9), 1127; https://doi.org/10.3390/children12091127
Submission received: 24 July 2025 / Revised: 24 August 2025 / Accepted: 26 August 2025 / Published: 27 August 2025

Abstract

Artificial intelligence (AI) is reshaping the entire landscape of medicine, and radiology occupies a privileged position in this transformation because it generates vast amounts of data in the form of images. AI implementation in radiology is continuously increasing, from automating image analysis to enhancing workflow management, and pediatric neuroradiology in particular is emerging as an expanding frontier. Pediatric neuroradiology presents unique opportunities and challenges, since the brains of neonates and small children are continuously developing, with age-specific changes in anatomy, physiology, and disease presentation. By enhancing diagnostic accuracy, reducing reporting times, and enabling earlier intervention, AI has the potential to significantly impact clinical practice and patients’ quality of life and outcomes. For instance, AI reduces MRI and CT scanner time by employing advanced deep learning (DL) algorithms to accelerate image acquisition through compressed sensing and undersampling, and to enhance image reconstruction by denoising and super-resolving low-quality datasets, thereby producing diagnostic-quality images from significantly fewer data points and in a shorter timeframe. Furthermore, as healthcare systems become increasingly burdened by rising demands and limited radiology workforce capacity, AI offers a practical solution to support clinical decision-making, particularly in institutions where pediatric neuroradiology expertise is limited. For example, the MELD (Multicenter Epilepsy Lesion Detection) algorithm is specifically designed to help radiologists find focal cortical dysplasias (FCDs), a common cause of drug-resistant epilepsy. It works by analyzing a patient’s MRI scan and comparing a wide range of features, such as cortical thickness and folding patterns, to a large database of scans from both healthy individuals and epilepsy patients.
By identifying subtle deviations from normal brain anatomy, the MELD graph algorithm can highlight potential lesions that are often missed by the human eye, a critical step in identifying patients who could benefit from life-changing epilepsy surgery. On the other hand, the integration of AI into pediatric neuroradiology faces technical and ethical challenges, such as data scarcity and ethical and legal restrictions on pediatric data sharing, that complicate the development of robust and generalizable AI models. Moreover, many radiologists remain sceptical of AI’s interpretability and reliability, and there are important medico-legal questions around responsibility and liability when AI systems are involved in clinical decision-making. Promising avenues to overcome these concerns include federated learning and collaborative AI research and development, which require technological innovation and multidisciplinary collaboration among neuroradiologists, data scientists, ethicists, and pediatricians. This paper aims to address: (1) current applications of AI in pediatric neuroradiology; (2) current challenges and ethical considerations related to AI implementation in pediatric neuroradiology; and (3) future opportunities in the clinical and educational pediatric neuroradiology field. AI in pediatric neuroradiology is not meant to replace neuroradiologists, but to amplify human intellect and extend our capacity to diagnose, prognosticate, and treat with unprecedented precision and speed.

1. Introduction

Artificial intelligence (AI), and particularly machine learning (ML) and deep learning (DL), are reshaping the entire landscape of medical imaging. In this scenario, radiology represents a privileged medical field, since it produces a significant amount of data in the form of images. AI implementation in radiology is already extensive, ranging from automated image analysis to enhanced workflow management [1], but AI-driven advances are not equally distributed among the subfields. While AI applications in adult imaging are well established, pediatric subspecialties, particularly pediatric neuroradiology, are now emerging as crucial frontiers where AI can significantly impact clinical practice [2]. A systematic review demonstrated that children represent just under 1% of the data available in public medical imaging datasets. This issue is pivotal and underscores the need for pediatric-specific big data and AI models [3].
Pediatric neuroradiology presents unique opportunities and challenges, since the brains of neonates and small children are continuously developing, with age-specific changes in anatomy, physiology, and disease presentation. Therefore, the interpretation of pediatric neuroimaging requires specific knowledge of normal brain development and its possible abnormalities. Furthermore, many pediatric neurological conditions are rare, further complicating diagnosis and management [4,5]. Despite these challenges, the potential benefits of AI in pediatric neuroradiology are substantial, and early diagnosis is paramount since it may prevent lifelong consequences for neurodevelopmental outcomes.
AI holds tremendous promise in addressing many neuroradiology challenges, such as congenital brain malformations, epilepsy, brain tumours, metabolic and genetic disorders, traumatic brain injury, and perinatal hypoxic-ischaemic injuries [6]. By enhancing diagnostic accuracy, reducing interpretation times, and enabling earlier intervention, AI has the potential to significantly improve patient outcomes. Furthermore, as healthcare systems become increasingly burdened by rising demands and limited radiology workforce capacity, AI offers a scalable solution to support clinical decision-making, particularly in underserved regions where access to specialized pediatric neuroradiologists is limited [7,8].
However, there are several limitations in the integration of AI into pediatric neuroradiology, such as the scarcity of large, high-quality, annotated pediatric imaging datasets and ethical and legal restrictions on pediatric data sharing, that complicate the development of robust and generalizable AI models. Pediatric imaging protocols vary significantly between institutions, and normative data for children are inherently more variable than for adults due to the rapid developmental changes that occur during childhood. As a result, AI tools trained on adult data often fail to generalize to pediatric populations, necessitating the creation of pediatric-specific AI frameworks [1]. Moreover, many radiologists remain sceptical of AI’s interpretability and reliability, especially when used in high-stakes diagnostic scenarios involving young children. There are also important medico-legal questions around responsibility and liability when AI systems are involved in clinical decision-making. Overcoming these concerns will require not only technological innovation but also robust validation studies, regulatory clarity, and multidisciplinary collaboration between neuroradiologists, data scientists, ethicists, and pediatricians [2].
Therefore, AI integration into pediatric neuroradiology has to be human-centred, intelligible, standardized, and supervised to effectively benefit pediatric patients and neuroradiologists and to respect the principles of diversity, equity, inclusion, and data safety.
The paper aims to address: (1) current applications of AI in pediatric neuroradiology; (2) current challenges and ethical considerations related to AI implementation in pediatric neuroradiology; (3) future opportunities in the clinical and educational pediatric neuroradiology field.

2. Current Applications of AI in Pediatric Neuroradiology

2.1. AI-Powered Workflow Management in Pediatric Neuroradiology

Workflow management is a challenging organizational field in healthcare, encompassing various tasks from pediatric patient triage to exam scheduling and report prioritization, with profound effects on institutional outcomes, workers’ wellbeing, and patients’ health. In this scenario, AI is increasingly being implemented to automate, improve, and speed up workflow management and enhance operational efficiency [1,7,8,9,10,11]. By automating routine tasks, providing intelligent clinical decision support, and optimizing resource allocation, AI can contribute to improved efficiency, diagnostic accuracy, and, ultimately, better patient outcomes [11].

2.1.1. Triage in the Emergency Setting

The emergency department is a melting pot of emergencies, urgencies, and anxious patients who must be seen, triaged, and given proper care. In this fast-paced environment, the rapid identification and prioritization of critical cases is an everyday challenge. AI can support triage staff through algorithms that automatically or semi-automatically analyze patients’ demographic, clinical, laboratory, and anamnestic data to assess disease severity and assign each patient’s priority relative to the other waiting patients [1,7,9]. Furthermore, AI can analyze a combination of factors, including vital signs, chief complaints, and medical history, to predict the need for critical care or hospitalization in pediatric patients presenting to the emergency department, outperforming conventional triage methods [7,9].
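The ranking logic described above can be sketched as a toy scoring model. This is purely illustrative, not a clinical tool or any validated triage algorithm: the feature names, thresholds, and weights are assumptions invented for the example, and a real system would learn them from data.

```python
# Toy illustration of AI-assisted triage ranking (NOT a clinical tool).
# All feature names, thresholds, and weights below are invented assumptions.

def triage_score(age_months, heart_rate, gcs, chief_complaint):
    """Return a crude urgency score; higher means more urgent."""
    score = 0.0
    if gcs < 13:                       # depressed consciousness weighs heavily
        score += 5.0
    if age_months < 12:                # infants are prioritized
        score += 2.0
    if heart_rate > 160:               # tachycardia flag (age-naive threshold)
        score += 1.5
    if chief_complaint in {"seizure", "head trauma"}:
        score += 3.0
    return score

def prioritize(patients):
    """Sort a list of patient dicts by descending urgency score."""
    return sorted(patients, key=lambda p: triage_score(**p), reverse=True)

waiting = [
    {"age_months": 36, "heart_rate": 110, "gcs": 15, "chief_complaint": "headache"},
    {"age_months": 8,  "heart_rate": 170, "gcs": 12, "chief_complaint": "seizure"},
]
ranked = prioritize(waiting)  # the infant with a seizure is ranked first
```

In a deployed system the hand-written rules would be replaced by a model trained on historical triage outcomes; the output, an ordered queue, is the same.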

2.1.2. Exam Scheduling

Efficient exam scheduling optimizes resources and minimizes patient waiting times in pediatric neuroradiology departments. AI-powered tools analyze patients’ demographic, imaging, clinical, and laboratory data to produce an efficient worklist. Moreover, AI algorithms have shown high accuracy in predicting patient flow patterns and the likelihood of missed appointments, allowing proactive interventions to avoid no-shows and improving overall scheduling efficiency. In particular, AI-supported notification systems include voice- or text-based reminders and appointment confirmations 24 h before the appointment, as well as real-time chat boxes offering the flexibility to reschedule imaging exams in case of cancellations or delays, thereby enhancing patients’ and families’ healthcare experience [1].

2.1.3. Imaging Protocol Optimization, Image Enhancement, and Synthetic Imaging

Optimizing the allocation of imaging scanners and the selection of the optimal imaging protocol for pediatric patients is critical to ensure high-quality exams, reduce scanner time, and minimize radiation exposure [1,9,12]. Based on patient-specific characteristics, such as age, size, and clinical history, AI algorithms can suggest the most appropriate protocol. CT and X-ray protocols are specifically tailored to pediatric patients’ demographic, clinical, and laboratory data to reduce radiation exposure, and most MRI protocols are built to reduce scanning time and potentially avoid sedation. In fact, young pediatric patients, neonates, and less-cooperative patients cannot stay still during time-consuming exams, such as brain and spine MRIs; it is therefore crucial to choose the key sequences to avoid or reduce the duration of sedation/anesthesia and the impact of artefacts [13]. Furthermore, AI-supported guidance on patient positioning, contrast dosing, and image sequencing can improve overall image quality, potentially decreasing the need for repeat scans and further limiting radiation exposure [1,9,12]. Moreover, AI-based post-processing algorithms can analyze and increase the quality of ultra-low-dose CT and low-quality MRI exams in order to offer high-quality diagnostic images to neuroradiologists (Figure 1). In pediatric CT, deep learning models and convolutional neural networks (CNNs) have shown substantial improvements in noise reduction and artefact removal, enabling radiation-dose reductions of 36–70% while preserving diagnostic image quality [1,14,15]. AI can also be used to boost contrast in low-iodine-dose CT protocols, which is particularly beneficial in children to minimize the risk of contrast-induced nephropathy [16].
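The denoising objective behind these post-processing algorithms can be illustrated with a deliberately simple stand-in: the cited studies use trained CNNs, whereas the sketch below, under that stated simplification, applies a naive 3×3 mean filter to a synthetic noisy "phantom" just to show the goal of recovering a cleaner image from a noisy low-dose acquisition.

```python
import numpy as np

# Toy stand-in for DL-based denoising: real systems use trained CNNs; here a
# naive 3x3 mean filter illustrates the objective of reducing low-dose noise.
rng = np.random.default_rng(0)
clean = np.zeros((64, 64))
clean[16:48, 16:48] = 1.0                                 # synthetic "phantom"
noisy = clean + rng.normal(scale=0.3, size=clean.shape)   # simulated dose noise

def mean_filter3(img):
    """Average each interior pixel with its 8 neighbours (edges untouched)."""
    out = img.copy()
    out[1:-1, 1:-1] = sum(
        img[1 + di:img.shape[0] - 1 + di, 1 + dj:img.shape[1] - 1 + dj]
        for di in (-1, 0, 1) for dj in (-1, 0, 1)
    ) / 9.0
    return out

denoised = mean_filter3(noisy)
mse = lambda a, b: float(np.mean((a - b) ** 2))
# mse(denoised, clean) is substantially lower than mse(noisy, clean)
```

A trained denoiser would also preserve edges and fine structure, which this mean filter blurs; that gap is exactly what the deep learning models cited above are designed to close.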
The use of contrast agents in pediatric patients raises particular concerns, including potential toxicity, especially with repeated exposure, and the deposition of agents such as gadolinium in the brain and soft tissues. Moreover, contrast agents may be contraindicated in cases of severe allergy or impaired renal function. AI-driven strategies that avoid or drastically reduce gadolinium administration while maintaining the diagnostic value of the images are therefore crucial. Specifically, encoder–decoder DL models can synthesize full-contrast MRI images from pre-contrast images acquired with only 10% of the standard gadolinium dose [17]. In MRI, AI methodologies also enhance ultra-low-field MRI quality by applying deep learning-based reconstruction schemes to fully sampled or undersampled k-space data, resulting in improved or preserved image quality at equivalent or reduced acquisition times, potentially increasing access to this lower-cost imaging modality [18,19,20].
Synthetic imaging is a novel frontier, characterized by the possibility of creating neuroradiological images from a limited set of acquired data, or even from data acquired by a different imaging modality. The most promising application is synthetic MRI, in which a standardized set of sequences is obtained from a single rapid acquisition, significantly reducing scan times [21]. Moreover, intermodality synthesis allows AI to generate synthetic CT images from MRI data, which can be used for dose calculation in radiotherapy, avoiding additional radiation exposure from a dedicated CT scan [22]. Similarly, AI models can generate synthetic PET images from contrast-enhanced MRI, showing a strong correlation with real PET images for glioma grading and prognostication [23].
Figure 1. Comparison of an axial DTI sequence (A) with the same sequence after the application of a deep convolutional neural network for denoising [24]. The AI-enhanced DTI (B) shows improved quality and reduced noise artefacts (required processing time: 1.1056 s). The network was applied to the data and the results were visualized using MATLAB version 24.1 (R2024a) (The MathWorks Inc., Natick, MA, USA; https://www.mathworks.com (accessed on 20 July 2025)). DTI: Diffusion Tensor Imaging.

2.1.4. Exam Prioritization and Report Generation

Exam prioritization refers to AI-powered systems that analyze acquired X-ray/CT/MRI images, flag studies with potentially life-threatening findings, such as intracranial hemorrhage, and elevate them to the top of the radiologist’s worklist for immediate review, facilitating timely clinical decision-making and intervention [1,7]. For instance, AI algorithms can detect high-density lesions suggestive of intracranial hemorrhage on CT scans with high sensitivity (up to 95%) and specificity (up to 94%), allowing these scans to be prioritized and reducing the time to diagnosis. This intelligent exam prioritization ensures that pediatric neuroradiologists focus their attention on the most urgent cases, potentially leading to earlier interventions and improved patient outcomes [7]. To further speed up the healthcare process, AI-based algorithms can analyze images and generate preliminary reports, including clinically relevant labels and automated impressions of findings, which can then be reviewed and finalized by the radiologist, saving valuable time and reducing cognitive burden [1,7]. Moreover, AI can support the standardization of reports by offering neuroradiologists a standard template in which to fill in the required data and suggest a diagnosis. Beyond reducing reporting time, this opportunity is paramount because it paves the way to a global, standardized, readable, and understandable way of reporting imaging exams that avoids misunderstandings and simplifies subsequent treatment management. Continuous AI-driven worklist re-analysis and re-prioritization dynamically shape exam and reporting priorities, which has been demonstrated to significantly reduce report turnaround times for critical conditions [7].
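The worklist mechanics behind this re-prioritization can be sketched with a priority queue. This is a minimal sketch, assuming the per-study criticality probability has already been produced by an upstream image-analysis model (the study names and probabilities below are invented for illustration).

```python
import heapq

# Minimal sketch of an AI-prioritized reading worklist: studies flagged with a
# high probability of critical findings (e.g., suspected hemorrhage) jump ahead
# of routine exams. The probabilities are assumed to come from an upstream model.

class Worklist:
    def __init__(self):
        self._heap = []
        self._counter = 0  # tie-breaker preserving arrival order

    def add(self, study_id, critical_prob):
        # Higher critical_prob -> smaller sort key -> read sooner.
        heapq.heappush(self._heap, (-critical_prob, self._counter, study_id))
        self._counter += 1

    def next_study(self):
        """Pop the most urgent study for the radiologist to read."""
        return heapq.heappop(self._heap)[2]

wl = Worklist()
wl.add("routine_follow_up", 0.02)
wl.add("suspected_hemorrhage", 0.95)
wl.add("headache_mri", 0.10)
# Reading order: suspected_hemorrhage, headache_mri, routine_follow_up
```

Continuous re-prioritization corresponds to calling `add` whenever a new study is flagged; the heap keeps the most urgent exam at the front at all times.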

2.1.5. Communication of Urgent and Unexpected Findings

Perfectly scheduled exams, optimized imaging protocols, exam and report prioritization, and identification of the proper diagnosis are links of a chain whose final link is the communication of the diagnosis to the referring physician and to patients and families [25]. Communication in healthcare is key to optimal workflow management, and the effective and timely communication of urgent and unexpected findings is paramount in pediatric neuroradiology to ensure appropriate and prompt patient management [26,27,28]. AI plays a vital role in workflow management by ensuring that radiology reports highlight critical findings and are delivered to the relevant referring physicians without delay. AI-driven communication platforms facilitate seamless information sharing and collaboration among the multidisciplinary team involved in the care of pediatric patients. Machine learning approaches can identify and flag radiology reports containing urgent findings that require prompt communication to referring physicians, further enhancing patient safety [1]. Although AI-powered tools play a significant role in accelerating communication, they cannot take over the essential role of the neuroradiologist. The communication of urgent and unexpected findings to pediatric patients and families cannot dispense with the human touch. The fear, discouragement, and loss of hope that patients may experience on receiving a severe diagnosis must be addressed with human courage, professional assistance, and an unwavering effort to explain the diagnosis, predict the prognosis, and offer the best treatment to our young patients.

2.1.6. Workflow Organization and Workload Distribution

A pediatric neuroradiology department is a kaleidoscopic microcosm of procedures, rules, and delicate balances that allow optimal workflow organization and proper workload distribution, with the final goal of offering the best healthcare to pediatric patients. Recently, most pediatric neuroradiology departments worldwide have suffered from a shortage of neuroradiologists and overwhelming workloads that risk impairing the quality of service and leading to staff burnout [29]. AI has demonstrated significant accuracy in supporting departmental workflow organization, management, and workload distribution, with a reduction in management costs [30]. In particular, AI models are capable of creating dynamic shift plans based on department needs, patient availability, and neuroradiologists’ expertise. The tailored assignment of specific neuroradiological exams to neuroradiologists with specific experience in the field not only increases diagnostic accuracy and efficiency but also promotes a workload distribution based on principles of equity, fostering a pleasant workplace and actively fighting staff burnout. Moreover, an AI-powered, automated algorithm can intelligently modulate the number and type of shifts and workloads based on multiple variables, including staff illness, maternity leave, and paid time off, ensuring continuous coverage and supporting work–life balance for pediatric neuroradiologists, which in turn benefits pediatric patients.

2.2. Current Clinical Applications in Pediatric Neuroradiology

2.2.1. AI Implementation in Pediatric MRI Image Acquisition and Reconstruction, and Artefact Correction

The role of AI in image acquisition and reconstruction is continuously expanding. It aims to optimize imaging protocols to reduce radiation exposure, improve image quality, minimize the impact of artefacts, and reduce acquisition time, which is paramount given children’s limited tolerance of imaging; it is also cost-effective, since it can support productivity and shorten imaging waiting lists.
MRI Image Acquisition and Reconstruction
MRI acquisition and the subsequent reconstruction are extremely time-consuming and may delay crucial diagnoses, with dire consequences for pediatric patients. Recent DL models, and particularly convolutional neural networks (CNNs), have proved promising in image reconstruction, since CNNs can learn complex mappings between undersampled k-space data and high-quality images, enabling faster acquisition times. In particular, zero-filling and super-resolution methods can improve image resolution and quality from undersampled data, reducing the acquisition time [31,32,33,34,35] (Figure 2). Indeed, obtaining high-resolution images by adjusting scanner protocol parameters during acquisition can result in prolonged scan times, which are often not acceptable in clinical practice. In contrast, deep learning-based post-processing methods allow the enhancement of standard-resolution images, significantly reducing the required acquisition time (Figure 3) [36]. Combining compressed sensing and DL has also accelerated MRI acquisition, since DL algorithms can learn optimal regularization terms, improving image quality and reducing artefacts compared to traditional compressed sensing methods [37,38,39,40,41,42]. These models are unified by the common intent of reducing acquisition time while still offering high-quality, diagnostic MRI images. They have been applied to most MRI sequences, including DWI/ADC (Diffusion-Weighted Imaging/Apparent Diffusion Coefficient), T1WI, and T2WI, demonstrating improved image quality and reduced scan times [43,44,45].
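The zero-filling baseline mentioned above, which DL reconstruction methods improve upon, can be demonstrated in a few lines: acquire only the central k-space lines, fill the rest with zeros, and inverse-transform. This is a minimal sketch on a synthetic phantom; real pipelines operate on multi-coil scanner raw data.

```python
import numpy as np

# Sketch of retrospective k-space undersampling with zero-filled reconstruction,
# the classical baseline that DL reconstruction improves upon. The "anatomy" is
# a synthetic rectangular phantom, used here purely for illustration.
img = np.zeros((128, 128))
img[32:96, 48:80] = 1.0                                  # toy anatomy

kspace = np.fft.fftshift(np.fft.fft2(img))               # fully sampled k-space

# Keep only the central 25% of phase-encode columns: low spatial frequencies
# carry most image contrast, so the scan could stop after acquiring them.
mask = np.zeros_like(kspace, dtype=bool)
mask[:, 48:80] = True
undersampled = np.where(mask, kspace, 0)                 # zero-fill the rest

recon = np.abs(np.fft.ifft2(np.fft.ifftshift(undersampled)))
# recon is blurred along the undersampled axis but preserves gross structure;
# a DL reconstruction network would learn to restore the missing detail.
```

The residual blur and ringing in `recon` are exactly the artefacts that learned reconstruction and compressed-sensing regularization are trained to remove, at the same 4x notional acceleration.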
Motion Artefact Correction
Motion artefacts are among the most common and challenging artefacts in the pediatric population, especially in neonates. AI-based motion correction techniques reduce the need for sedation thanks to automatic and robust algorithms. In particular, DL algorithms allow the detection and quantification of patient motion, supporting the selection of optimal sequences [46,47,48,49] and the proper planning of motion correction, which can be retrospective or prospective. Retrospective motion correction protocols are commonly based on DL algorithms that improve the quality of already-acquired images [50,51,52]. Prospective motion correction protocols, on the other hand, enable the prediction and compensation of motion before and during acquisition, reducing the need for repeat scans and ensuring optimal images [46,53,54]. Prospective and retrospective motion correction can be combined to achieve exceptional MRI image quality [52]. Although AI-based motion artefact correction algorithms are applied across all fields of radiology, they are paramount in pediatric neuroradiology, where the detection of subtle morphological and functional changes may be severely impaired by motion artefacts [55,56].

2.2.2. Disease Classification, Prognostication, and Treatment Response Prediction

AI is a critical decision-support tool for pediatric neuroradiologists, offering valuable assistance in classifying lesions, predicting prognosis and treatment response, and scheduling optimal patient follow-up [12,27,57]. AI-based algorithms have demonstrated accuracy comparable or superior to that of clinical experts in basic imaging analysis. Moreover, AI algorithms have been used to classify pathologies and flag more complex cases for subspecialty consultation [58]. For example, Forestieri et al. applied machine learning and deep learning algorithms to whole-body MRI acquisitions, successfully distinguishing chronic nonbacterial osteomyelitis lesions from normal growth-related bone marrow changes [59]. By integrating demographic, laboratory, imaging, and genetic data, AI models can predict patient prognosis and treatment response and offer insights into overall survival, progression-free survival, and neurodevelopmental outcomes [60,61]. In this regard, radiogenomics offers the pediatric neuroradiologist a compelling correlation between imaging and genetics for different types of pediatric brain tumours. These data support the differential diagnosis, suggest prognostication, and predict treatment response, contributing to the landscape of personalized medicine [1]. Treatment response prediction is key in pediatric neuroradiology, since it may help shape patient management and the optimal timing and type of therapy. In addition, AI systems have been trained to automate patient follow-up for significant incidental findings, ensuring that the necessary steps are taken to address these issues, thereby improving patient outcomes and reducing potential liability [4]. Since the applications in neuroradiology are constantly expanding, we will focus on the most common ones.
Anatomical Segmentation and Quantitative Assessment in Pediatric Neuroradiology
Accurate segmentation and quantitative assessment of anatomical and pathological structures are essential to enhance precision and consistency in evaluating brain structures and, therefore, to ensure optimal comparative analyses and serial disease characterization over time. These advancements are particularly significant in monitoring conditions such as hydrocephalus and in assessing the impact of treatments on brain development. Manual segmentation is a time-consuming and bias-prone task, and pediatric brain MRI presents unique challenges for segmentation due to the developmental variability of anatomical structures and the presence of motion artefacts [62,63] (Figure 4). Therefore, automated AI-supported segmentation represents the future in this field. DL-based segmentation through CNNs, particularly U-Net and V-Net architectures, has achieved state-of-the-art performance in medical image segmentation [64,65,66,67]. For instance, segmentation may be applied to anatomical structures, such as specific brain regions, and to pathological lesions, such as brain tumours, making it possible to compare lesion volumes, and therefore progression or regression after therapy, and to plan surgery [68,69,70,71]. Moreover, Grimm et al. demonstrated the effectiveness of a CNN in segmenting cerebrospinal fluid and brain volumes in pediatric patients affected by hydrocephalus, achieving a Dice coefficient of 0.86, indicating high accuracy in segmentation tasks [72].
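The Dice coefficient cited for the hydrocephalus CNN is the standard overlap metric for segmentation, defined as 2|A∩B|/(|A|+|B|). A minimal implementation on synthetic masks:

```python
import numpy as np

def dice(pred, truth):
    """Dice similarity coefficient between two binary masks."""
    pred, truth = pred.astype(bool), truth.astype(bool)
    inter = np.logical_and(pred, truth).sum()
    denom = pred.sum() + truth.sum()
    return 2.0 * inter / denom if denom else 1.0  # two empty masks match

# Synthetic example: ground truth is a 6x6 square (36 voxels); the predicted
# mask misses one row (30 voxels, all inside the truth).
truth = np.zeros((10, 10), dtype=bool); truth[2:8, 2:8] = True
pred  = np.zeros((10, 10), dtype=bool); pred[3:8, 2:8] = True
# dice(pred, truth) = 2*30 / (30 + 36) = 60/66 ≈ 0.909
```

A score of 1.0 means perfect voxel-wise agreement and 0.0 means no overlap, so the 0.86 reported by Grimm et al. indicates that the automated and reference masks agree on the large majority of voxels.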
Tumour Detection, Characterization, and Evolution
The 2021 World Health Organisation (WHO) classification of brain tumours formally separated adult and pediatric tumours, since pediatric brain cancers exhibit a distinctive and wide range of molecular and genetic subtypes with different pathogenesis and neuroradiological presentations at MRI [74,75]. This distinction underscores the need for specialized diagnostic approaches in pediatric neuro-oncology [76]. AI, and specifically DL, has emerged as a pivotal tool in the detection, characterization, and monitoring of pediatric brain tumours, offering advances in diagnostic accuracy, treatment planning, and prognostication. DL models have shown promise in the differential diagnosis of pediatric brain tumours, such as ependymoma, pilocytic astrocytoma, and medulloblastoma [77,78,79]. For instance, Das et al. used textural analysis to classify childhood medulloblastoma into WHO-defined subtypes, achieving high accuracy rates (>90%) [80]. Similarly, Li et al. developed a DL algorithm, named iGNet, to differentiate pediatric intracranial germ cell tumour subtypes and predict survival outcomes based on MRI data, achieving high diagnostic performance, with area under the curve (AUC) values between 0.869 and 0.950 [81]. Voicu et al. adopted a machine learning algorithm in combination with Diffusion Kurtosis Imaging (DKI) to discriminate pediatric posterior fossa tumour types, improving diagnostic accuracy and informing clinical decision-making [82]. Di Giannatale et al. adopted AI-based radiomics to non-invasively characterize neuroblastoma through the CT-based prediction of MYCN amplification status, a marker linked to prognosis and tumour behaviour [83]. These studies highlight the potential of AI to enhance the precision of tumour classification, which is crucial for determining appropriate therapeutic strategies.
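The "textural analysis" underpinning studies like that of Das et al. starts from quantitative features computed on a tumour region of interest. The sketch below computes a few first-order radiomic features on synthetic intensity patches; it is an illustrative assumption of a typical feature set, not the actual pipeline of any cited study, which would add higher-order (e.g., grey-level co-occurrence) features and a trained classifier.

```python
import numpy as np

# First-order "radiomic" features of the kind textural-analysis pipelines feed
# to a classifier. Illustrative only: real studies add higher-order features
# (e.g., GLCM) and validated models.
def first_order_features(roi):
    roi = roi.ravel().astype(float)
    hist, _ = np.histogram(roi, bins=32, density=True)
    p = hist[hist > 0] / hist[hist > 0].sum()            # normalized histogram
    return {
        "mean": float(roi.mean()),
        "std": float(roi.std()),
        "skewness": float(((roi - roi.mean()) ** 3).mean()
                          / (roi.std() ** 3 + 1e-12)),
        "entropy": float(-(p * np.log2(p)).sum()),       # histogram entropy
    }

rng = np.random.default_rng(2)
homogeneous = rng.normal(100, 2, size=(16, 16))     # uniform-appearing tissue
heterogeneous = rng.normal(100, 25, size=(16, 16))  # mixed/necrotic pattern
f_hom = first_order_features(homogeneous)
f_het = first_order_features(heterogeneous)
# The heterogeneous patch shows a markedly larger standard deviation.
```

Feature vectors like these, computed per tumour, become the rows of the training matrix for the subtype classifier.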
Moreover, AI-based automated brain tumour segmentation has proven to be time-saving compared with manual segmentation, and accurate for volumetric analysis and treatment monitoring, providing consistent and reproducible results for assessing tumour burden, planning surgical interventions, and monitoring treatment response [64,69,84,85]. Kazerooni et al. conducted a multi-institutional study demonstrating the effectiveness of DL models in automated tumour segmentation and brain tissue extraction from multiparametric MRI scans of pediatric brain tumours [85]. Differentiating tumour subtypes and precisely defining evolution and regression carry significant value given the clinical and treatment implications for pediatric patients’ quality of life and overall survival [86].
Traumatic Brain Injury
Pediatric traumatic brain injury (TBI) is often an emergency and, overall, a severe condition that requires early diagnosis and guided clinical or surgical management. Proper and repeated neuroimaging is key to identifying primary injuries, such as fractures, hemorrhage, and contusions, and secondary injuries, encompassing brain edema and herniation, which can evolve over time [87]. While some injuries, like hemorrhages or contusions, are easily detectable on standard CT and MRI, subtle yet severe pathologies, like diffuse axonal injury (DAI), are challenging entities that require advanced MRI sequences and may be overlooked in an emergency context, in which time is limited, resources may not be immediately available, and the workload is constantly increasing worldwide [87]. ML and DL algorithms can automatically identify abnormalities, quantify lesion volumes, and detect patterns associated with TBI, with accuracy similar to or higher than that of pediatric neuroradiologists [88,89]. AI-based prognostication relies on the combined analysis of clinical and laboratory data and multimodal neuroimaging to predict long-term neurodevelopmental outcomes and suggest proper treatment and rehabilitation planning [88,90,91,92,93].
Congenital Brain Malformations
Congenital brain malformations (CBMs) encompass multiple structural abnormalities secondary to disruptions in normal brain development during pregnancy, ranging from neuronal migration disorders to midline anomalies like the Chiari and Dandy–Walker malformations, which can result in significant neurological deficits, developmental delays, and life-threatening complications [94]. The complexity of the CBM differential diagnosis stems from the wide variability in neuroradiological presentation, imaging phenotypes that evolve over time, and overlaps between different malformation subtypes. Standard and advanced MRI exams are interpreted by highly skilled pediatric neuroradiologists, who are frequently lacking in non-specialized institutions, and inter-observer variability is high [95,96]. DL algorithms have been applied to fetal and neonatal US and MRI exams, showing optimal accuracy, consistency, and efficiency in automatically detecting and classifying CBMs [97,98]. In particular, CNNs have been trained to differentiate between normal and abnormal brain structures, to identify specific malformations such as Chiari II malformation and the Dandy–Walker complex, and to automatically measure brain structures to longitudinally monitor disease progression or post-surgical outcomes [99,100,101,102]. Finally, AI-based CBM prognostication can be crucial in prenatal counselling, postnatal care planning, and treatment strategies [103].
Epilepsy Detection and Pre-Surgical Planning
Pediatric epilepsy affects 0.5–1% of children worldwide, with significant impacts on their neurological development and quality of life. Identifying the underlying cause is therefore essential for specific medical or surgical treatment [104]. Unfortunately, localizing epileptogenic foci, usually performed with MRI, may be challenging, as lesions such as focal cortical dysplasia (FCD) or hippocampal sclerosis can be subtle [105]. CNNs and U-Net architectures have been trained on large, annotated datasets to automatically localize epileptogenic lesions in pediatric populations [106,107,108,109]. In particular, Ganji et al. trained an ML algorithm to identify FCD type IIb, which is a common cause of drug-resistant epilepsy in children [110]. The automated ML algorithm showed optimal sensitivity (96.7%), specificity (100%), and accuracy (98.6%) in FCD type IIb identification and diagnosis, demonstrating its value in presurgical assessment and in improving postsurgical outcomes [110]. The Multicenter Epilepsy Lesion Detection (MELD) project (https://github.com/MELDProject (accessed on 20 July 2025)) developed a robust and interpretable deep-learning algorithm for the detection of FCD on a large multicentre MRI cohort of patients [111,112,113]. The MELD algorithm enhances sensitivity for subtle cortical abnormalities, which are often missed by conventional imaging but are relevant in clinical workflows (Figure 5). Moreover, pre-surgical planning is an emerging application of AI, which can provide quantitative lesion maps and spatial localisation of epileptogenic zones that can be co-registered with electroencephalography (EEG) and magnetoencephalography (MEG) findings [114].
For instance, DL tools have been used to detect hippocampal sclerosis through automated volumetric and texture analyses [115,116] and to predict seizure and surgical outcomes, with optimal results [109,117,118,119].
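The normative-comparison idea underlying surface-based lesion detection, i.e., comparing a patient's per-vertex features such as cortical thickness against a database of healthy controls and flagging strong deviations, can be illustrated with a toy sketch. This uses synthetic data and a simple z-score rule; it is not the actual MELD pipeline, which combines many features in a trained neural network:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical normative database: cortical thickness (mm) at each of
# 1000 surface vertices, measured in 50 healthy controls.
controls = rng.normal(2.7, 0.25, size=(50, 1000))
mu, sigma = controls.mean(axis=0), controls.std(axis=0)

# Simulated patient: normal cortex everywhere except a focal patch of
# abnormally thick cortex (a crude stand-in for an FCD).
patient = rng.normal(2.7, 0.25, size=1000)
patient[400:420] += 1.5

z = (patient - mu) / sigma             # per-vertex z-score vs. controls
suspect = np.where(np.abs(z) > 4)[0]   # flag strongly deviant vertices
print(suspect)  # most flagged indices fall inside the simulated patch
```

Real systems replace the single z-score with multivariate classifiers and careful harmonization across scanners, but the principle of "deviation from a normative model" is the same.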
White Matter Disorders
Pediatric white matter disorders encompass a broad pathology spectrum, including complex conditions like pediatric-onset multiple sclerosis (POMS) and metabolic leukodystrophies. AI-powered models allow for qualitative pattern recognition and quantitative biomarker discovery. For instance, AI-based MRI analysis of subtle changes and patterns is highly relevant to the early detection and monitoring of the white matter lesions characteristic of multiple sclerosis (MS) [2]. ML algorithms applied to MRI tractography facilitated automated white matter connectivity analysis and were able to identify and characterize subtle abnormalities in fibre tracts [122], while DL models allowed automatic detection of white matter injuries and punctate white matter lesions in preterm infants [123,124]. In particular, Zhu et al. presented an ultrasound data-driven diagnostic system for white matter injury in preterm infants, which combined multi-task DL with traditional radiomics features to automatically detect white matter regions, and fused DL features with manual radiomics features to obtain stable and efficient diagnostic performance [123]. The ultrasound radiomics diagnostic system achieved an AUC of 0.845 in the testing set, while the multi-task deep learning model showed a Dice coefficient of 0.78 in white matter segmentation and an AUC of 0.863 for predicting white matter injury risk in the testing cohort [123]. An interactome-driven prioritization algorithm applied to whole-exome and whole-genome sequencing data has achieved a high diagnostic yield, even identifying novel disease-causing genes and phenotypes for heterogeneous genetic white matter disorders (GWMDs) [125]. DL models have also been applied to prognostication, particularly to forecast disease progression and severity in children affected by MS and metabolic leukodystrophies [2].
Finally, AI supports personalized medicine by integrating patients’ clinical, laboratory, and imaging data to suggest optimal patient management and tailored therapies [2].
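The fusion strategy reported by Zhu et al., combining scores from a deep-learning branch and a radiomics branch and evaluating with AUC, can be sketched in a few lines. The scores below are toy values of our own, not the authors' data, and the fusion shown is simple probability averaging rather than their specific method:

```python
import numpy as np

def auc(scores, labels):
    """Area under the ROC curve via the rank (Mann-Whitney) identity:
    the fraction of (positive, negative) pairs ranked correctly
    (ties ignored for brevity)."""
    scores, labels = np.asarray(scores, float), np.asarray(labels, bool)
    pos, neg = scores[labels], scores[~labels]
    return (pos[:, None] > neg[None, :]).mean()

# Hypothetical held-out scores from two branches of a classifier:
labels = np.array([1, 1, 1, 0, 0, 0, 1, 0])
deep  = np.array([.9, .6, .4, .5, .2, .3, .8, .1])    # DL-branch prob.
radio = np.array([.7, .8, .55, .4, .1, .6, .9, .2])   # radiomics prob.
fused = 0.5 * (deep + radio)                          # simple late fusion

print(auc(deep, labels), auc(radio, labels), auc(fused, labels))
```

On this toy example, each branch alone misranks one pair, while the averaged scores rank every positive above every negative, illustrating why complementary feature streams can stabilize diagnostic performance.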
Neurodevelopmental Disorders
Neurodevelopmental disorders in pediatric neuroradiology include autism spectrum disorder (ASD), attention-deficit/hyperactivity disorder (ADHD), and developmental delay, which are often associated with subtle and complex alterations in brain structure and connectivity [126,127,128]. ML and DL algorithms analyze structural and functional MRI data to search for associated abnormalities, such as atypical cortical thickness, altered white matter integrity, or disrupted functional connectivity in children with ASD or ADHD [129]. Furthermore, AI-based approaches facilitate the integration of neuroimaging with genetic, behavioural, and clinical data, providing a more comprehensive understanding of the underlying biology of neurodevelopmental disorders and supporting personalized treatment planning. Finally, AI can also be used to evaluate the progression of these disorders and to assess the effectiveness of interventions [130,131,132,133,134].

3. Current Challenges and Ethical Considerations

The integration of AI into pediatric neuroradiology offers huge potential but also raises technical and ethical challenges that must be carefully addressed. These considerations span data privacy, AI model transparency and generalizability, regulatory and implementation hurdles, and legal responsibility.

3.1. Data Privacy and Security

The use of pediatric patients’ medical data requires high standards of privacy and confidentiality. AI models, especially those using large-scale neuroimaging datasets, strongly depend on the collection, storage, and sharing of sensitive patient information, with the constant risks of data breaches and unauthorized access [135]. Robust data anonymization, differential privacy techniques, and independent oversight are essential for protecting pediatric patients’ data [136].
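Two of the safeguards mentioned above can be made concrete in a short sketch (our own naming and parameters, not a production pipeline): one-way pseudonymization of patient identifiers, and a differentially private aggregate statistic in which Laplace noise bounds what any single record can reveal:

```python
import hashlib
import numpy as np

def pseudonymize(patient_id: str, site_salt: str) -> str:
    """One-way pseudonym: the original identifier cannot be recovered
    from the hash, and different site salts give unlinkable pseudonyms."""
    return hashlib.sha256((site_salt + patient_id).encode()).hexdigest()[:12]

def dp_mean(values, lo, hi, epsilon, rng):
    """Differentially private mean: clip each value to [lo, hi], then add
    Laplace noise calibrated to the sensitivity of the mean query."""
    x = np.clip(np.asarray(values, float), lo, hi)
    sensitivity = (hi - lo) / len(x)
    return x.mean() + rng.laplace(0.0, sensitivity / epsilon)

print(pseudonymize("MRN-0042", "site-A-salt"))
# Mean patient age (years) released with privacy budget epsilon = 1.
print(dp_mean([2, 5, 9, 12], 0, 18, 1.0, np.random.default_rng(7)))
```

Smaller epsilon means stronger privacy and noisier answers; real deployments track the cumulative budget across all released statistics.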

3.2. Informed Consent and Pediatric Autonomy

Ethical challenges also include how to ensure proper, informed consent for the use of pediatric patients’ data in AI-based research. Commonly, parents consent to diagnostic and therapeutic procedures on behalf of pediatric patients; older children, however, may wish to assent or dissent to the use of their imaging data. Finally, retrospective use of previously acquired MRI scans for AI model training constitutes secondary use without explicit consent, which must be ethically justified and approved by institutional review boards [137].

3.3. Algorithmic Transparency and Explainability

AI models should be intelligible to pediatric neuroradiologists, allowing them to understand and supervise how data are analyzed and results are obtained. The “black box” nature of some DL models raises accountability and trustworthiness concerns [138]. Recent studies advocate for the development of intelligible AI systems through techniques like saliency maps, feature attribution, and model-agnostic methods that provide pediatric neuroradiologists with clear explanations [139]. It is paramount that AI models offer accurate, consistent, and repeatable results, obtained through explainable paths and always under the supervision of pediatric neuroradiologists, who can actively shape the models through feedback or changes to the data or algorithms. Beyond any specific technique or system, diagnostic, prognostic, and surgical decisions in pediatric neuroradiology should be supported by AI while preserving the human touch.
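For a model with a known functional form, a gradient saliency map is simply the derivative of the output with respect to each input feature. A minimal sketch for a logistic classifier (illustrative weights and features of our own, standing in for the deep networks discussed above):

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def saliency(w, x):
    """Gradient of the predicted probability w.r.t. each input feature:
    for p = sigmoid(w @ x), dp/dx_i = p * (1 - p) * w_i."""
    p = sigmoid(w @ x)
    return np.abs(p * (1 - p) * w)

# Hypothetical 5-feature classifier in which feature 2 dominates.
w = np.array([0.1, -0.2, 2.5, 0.05, -0.1])
x = np.array([1.0, 1.0, 1.0, 1.0, 1.0])

s = saliency(w, x)
print(s.argmax())  # -> 2: the decision hinges on feature 2
```

In deep networks the same derivative is obtained by backpropagation and rendered as a heatmap over the image, showing the radiologist which regions drove the prediction.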

3.4. Bias and Representativeness, Robustness and Generalizability of AI Algorithms

Data scarcity and inhomogeneity, together with the absence of large, well-annotated datasets, are major hurdles to AI implementation in pediatric neuroradiology. Pediatric neuroimaging data are limited due to disease rarity, ethical concerns, and the challenges of acquiring high-quality exams in pediatric patients. The lack of standardized imaging protocols and of specific, univocal annotation guidelines leads to dataset heterogeneity, deeply impacting AI model performance and limiting the generalizability of results. These issues hamper the creation of AI algorithms tailored to pediatric patients [140]. Moreover, pediatric patients range from neonates to teenagers, who exhibit a wide range of developmental stages, leading to significant variability in brain anatomy and pathology presentation. ML and DL models trained on datasets that underrepresent certain populations, based on age, ethnicity, disease type, or imaging modality, demonstrate reduced performance when applied to data acquired in different clinical environments. Possible solutions include transfer learning, image-to-image translation, and AI-based dataset augmentation. However, in pediatric neuroradiology, where diseases may manifest differently than in adults, transfer learning from adult-trained models can lead to inaccurate or missed diagnoses due to the different MRI characteristics of pediatric and adult populations [141,142]. Image-to-image translation has shown good results, with AI able to generate synthetic CT images from MRI data [20] and synthetic PET images from contrast-enhanced MRI [24]. Dataset augmentation with advanced generative adversarial networks has proved promising in tackling the small-dataset problem in pediatrics [143]. On the other hand, synthetic imaging still faces regulatory hurdles and concerns among clinicians, which should be promptly addressed.
In conclusion, to ensure fairness, equity and inclusivity, datasets must be diverse, well-labelled, and representative of the full pediatric population.
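Transfer learning, the first of the mitigation strategies above, typically keeps a feature extractor pretrained on a large source cohort frozen and retrains only a small task-specific head on the scarce target data. A self-contained numerical sketch (synthetic data, with a fixed NumPy projection standing in for a pretrained deep backbone):

```python
import numpy as np

rng = np.random.default_rng(0)

# "Pretrained backbone": a frozen non-linear projection standing in for
# features learned on a large source cohort; its weights are NOT updated.
W_backbone = rng.normal(size=(16, 8)) / 3.0
def backbone(x):                       # (n, 8) inputs -> (n, 16) features
    return np.tanh(x @ W_backbone.T)

# Small labelled target-domain dataset (synthetic stand-in).
X = rng.normal(size=(200, 8))
y = (X[:, 0] + 0.1 * rng.normal(size=200) > 0).astype(float)

# Transfer learning: fit ONLY a new classification head on frozen features.
F = backbone(X)
w, b = np.zeros(16), 0.0
for _ in range(1000):                  # plain gradient descent on log-loss
    p = 1.0 / (1.0 + np.exp(-(F @ w + b)))
    w -= 0.5 * F.T @ (p - y) / len(y)
    b -= 0.5 * (p - y).mean()

acc = ((1.0 / (1.0 + np.exp(-(F @ w + b))) > 0.5) == (y > 0.5)).mean()
print(f"training accuracy of fine-tuned head: {acc:.2f}")
```

The caveat in the text applies directly: if the frozen backbone was trained on adult anatomy, its features may encode the wrong invariances for a developing brain, which is why pediatric-specific fine-tuning data and validation remain essential.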

3.5. Regulatory and PACS Implementation Hurdles

The regulatory landscape for AI in healthcare is constantly evolving, and obtaining approval for AI tools requires compliance with rigorous standards to ensure patient safety and efficacy [144]. Moreover, implementing AI solutions necessitates seamless integration with Picture Archiving and Communication Systems (PACS) and Electronic Health Records (EHRs), which represents a technical challenge and requires workflow adjustments [144]. In this regard, AI vendors are called upon to develop user-friendly interfaces that communicate easily with PACS. Finally, the financial implications of developing, implementing, and maintaining AI systems are substantial. Healthcare institutions should invest in AI tools with an eye to key parameters such as improved diagnostic accuracy, workflow efficiency, and patient outcomes [144].

3.6. Clinical Responsibility and Human Oversight

AI should be viewed as an augmented intelligence tool, enhancing the pediatric neuroradiologist’s decision-making rather than replacing it. AI’s role must remain assistive, with ultimate responsibility resting with pediatric neuroradiologists, who should be involved in the design, validation, and deployment of AI models to ensure clinical relevance and safety [145,146].

4. Future Perspectives

4.1. Federated Learning

Federated learning (FL) is a pivotal opportunity to address pediatric patients’ data scarcity while supporting collaborative AI model training. FL allows AI models to learn from heterogeneous data across multiple institutions without compromising patients’ privacy, enhancing AI model robustness and generalizability in pediatric neuroradiology. FL has demonstrated its efficiency in multi-institutional brain imaging analyses, underscoring its potential in pediatric settings. In particular, Raggio et al. introduced FedSynthCT-Brain, which employs a cross-silo horizontal FL approach that allows multiple centres to collaboratively train a U-Net-based deep learning model to obtain synthetic brain CT images from MRI images [147]. The integration of multi-modal data, such as neuroimaging, clinical records, and molecular and genetic information, offers a wide perspective on pediatric neurological conditions. Such integration enhances diagnostic accuracy and supports patient-tailored treatment planning, as demonstrated by recent literature employing multi-modal approaches in neurodevelopmental disorder assessments [148]. Real-world applications of FL in pediatric neuroradiology are still limited, but FL may address the need for pediatric-specific big data and AI models.
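At its core, one round of the canonical FedAvg aggregation scheme averages locally trained model parameters weighted by each site's dataset size, so only parameters, never patient images, leave an institution. A minimal sketch with toy two-parameter models (hospital names and sizes are illustrative):

```python
import numpy as np

def fedavg(client_weights, client_sizes):
    """One FedAvg aggregation round: average client model parameters,
    weighted by local dataset size; raw patient data never leaves a site."""
    total = sum(client_sizes)
    return sum(w * (n / total) for w, n in zip(client_weights, client_sizes))

# Three hospitals hold models trained locally on 100, 50, and 250 exams.
w_a = np.array([1.0, 0.0])
w_b = np.array([0.6, 0.4])
w_c = np.array([0.2, 0.8])

global_w = fedavg([w_a, w_b, w_c], [100, 50, 250])
print(global_w)  # -> [0.45 0.55]
```

In a full FL system this aggregation alternates with local training rounds at each site, and the privacy guarantee can be further hardened with secure aggregation or differential privacy.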

4.2. Collaborative Research and AI Development

Collaboration among pediatric neuroradiologists, neuroscientists, and physicians is paramount for the successful and efficient implementation of AI in pediatric neuroradiology. AI models applied to clinical routine should be user-friendly, intelligible, and aimed at patients’ benefit [149]. Standardized protocols and evaluation metrics are crucial for assessing AI model performance, refining model deficiencies, increasing algorithm generalizability, and facilitating regulatory approval. Creating representative datasets and validation frameworks is necessary to promote consistency and reliability in AI applications [150]. In conclusion, the future of AI in pediatric neuroradiology is promising, and ongoing research, technological innovation, and collaborative efforts are key to its efficient implementation.

5. Conclusions

AI is ushering in a new era in pediatric neuroradiology: it has been shown to augment human expertise, redefine diagnostic precision, and potentially transform the way pediatric neurological disorders are diagnosed, prognosticated, and treated. Effective integration of AI into the everyday routine of pediatric neuroradiology will require strategic investments in research, education, and infrastructure, as well as a strong commitment to ethical, equitable, and patient-centred implementation. Artificial intelligence in pediatric neuroradiology does not replace the human touch; it amplifies human intellect and extends our capacity to diagnose, prognosticate, and treat with unprecedented precision and speed.

Author Contributions

Conceptualization, A.G. and D.L.; methodology, A.G., F.L. and A.N.; software, F.M., F.L. and A.N.; validation, D.L., A.B. and A.R.; formal analysis, A.G., F.L. and A.N.; investigation, A.G., F.L. and A.N.; data curation, A.G., F.M., F.L. and A.N.; writing—original draft preparation, A.G.; writing—review and editing, F.M., F.L., A.N., M.C.R.-E., C.G., A.R., A.B. and D.L.; visualization, A.G. and D.L.; supervision, C.G., A.R., D.L., A.B. and A.N. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

Not applicable.

Conflicts of Interest

The authors declare no conflicts of interest.

Abbreviations

AI (artificial intelligence), DL (deep learning), ML (machine learning), CNN (convolutional neural network), WI (weighted image), DWI (diffusion weighted image), ADC (apparent diffusion coefficient), MRI (magnetic resonance imaging), CT (computed tomography), US (ultrasound), FL (federated learning), WHO (World Health Organisation), AUC (area under curve), TBI (traumatic brain injury), DAI (diffuse axonal injury), CBM (congenital brain malformation), FCD (focal cortical dysplasia), MELD (Multicenter Epilepsy Lesion Detection), EEG (electroencephalography), MEG (magnetoencephalography), ASD (autism spectrum disorder), ADHD (attention-deficit/hyperactivity disorder), POMS (pediatric-onset multiple sclerosis), MS (multiple sclerosis), GWMDs (genetic white matter disorders), PACS (Picture Archiving and Communication Systems), EHRs (Electronic Health Records), FLAIR (fluid-attenuated inversion recovery), MPRAGE (Magnetization Prepared Rapid Gradient Echo Imaging).

References

  1. Bhatia, A.; Khalvati, F.; Ertl-Wagner, B.B. Artificial Intelligence in the Future Landscape of Pediatric Neuroradiology: Opportunities and Challenges. AJNR Am. J. Neuroradiol. 2024, 45, 549–553. [Google Scholar] [CrossRef]
  2. Pringle, C.; Kilday, J.-P.; Kamaly-Asl, I.; Stivaros, S.M. The Role of Artificial Intelligence in Paediatric Neuroradiology. Pediatr. Radiol. 2022, 52, 2159–2172. [Google Scholar] [CrossRef]
  3. Hua, S.B.Z.; Heller, N.; He, P.; Towbin, A.J.; Chen, I.Y.; Lu, A.X.; Erdman, L. Lack of Children in Public Medical Imaging Data Points to Growing Age Bias in Biomedical AI. medRxiv 2025. [Google Scholar] [CrossRef] [PubMed]
  4. Martinelli, D.; Catesini, G.; Greco, B.; Guarnera, A.; Parrillo, C.; Maines, E.; Longo, D.; Napolitano, A.; De Nictolis, F.; Cairoli, S.; et al. Neurologic Outcome Following Liver Transplantation for Methylmalonic Aciduria. J. Inherit. Metab. Dis. 2023, 46, 450–465. [Google Scholar] [CrossRef] [PubMed]
  5. Siri, B.; Greco, B.; Martinelli, D.; Cairoli, S.; Guarnera, A.; Longo, D.; Napolitano, A.; Parrillo, C.; Ravà, L.; Simeoli, R.; et al. Positive Clinical, Neuropsychological, and Metabolic Impact of Liver Transplantation in Patients with Argininosuccinate Lyase Deficiency. J. Inherit. Metab. Dis. 2025, 48, e12843. [Google Scholar] [CrossRef] [PubMed]
  6. Straus Takahashi, M.; Donnelly, L.F.; Siala, S. Artificial Intelligence: A Primer for Pediatric Radiologists. Pediatr. Radiol. 2024, 54, 2127–2142. [Google Scholar] [CrossRef]
  7. Ranschaert, E.; Topff, L.; Pianykh, O. Optimization of Radiology Workflow with Artificial Intelligence. Radiol. Clin. N. Am. 2021, 59, 955–966. [Google Scholar] [CrossRef]
  8. Hosny, A.; Parmar, C.; Quackenbush, J.; Schwartz, L.H.; Aerts, H.J.W.L. Artificial Intelligence in Radiology. Nat. Rev. Cancer 2018, 18, 500–510. [Google Scholar] [CrossRef]
  9. Tejani, A.S.; Cook, T.S.; Hussain, M.; Sippel Schmidt, T.; O’Donnell, K.P. Integrating and Adopting AI in the Radiology Workflow: A Primer for Standards and Integrating the Healthcare Enterprise (IHE) Profiles. Radiology 2024, 311, e232653. [Google Scholar] [CrossRef]
  10. Bizzo, B.C.; Almeida, R.R.; Alkasab, T.K. Artificial Intelligence Enabling Radiology Reporting. Radiol. Clin. N. Am. 2021, 59, 1045–1052. [Google Scholar] [CrossRef]
  11. Davendralingam, N.; Sebire, N.J.; Arthurs, O.J.; Shelmerdine, S.C. Artificial Intelligence in Paediatric Radiology: Future Opportunities. Br. J. Radiol. 2021, 94, 20200975. [Google Scholar] [CrossRef]
  12. Sammer, M.B.K.; Akbari, Y.S.; Barth, R.A.; Blumer, S.L.; Dillman, J.R.; Farmakis, S.G.; Frush, D.P.; Gokli, A.; Halabi, S.S.; Iyer, R.; et al. Use of Artificial Intelligence in Radiology: Impact on Pediatric Patients, a White Paper From the ACR Pediatric AI Workgroup. J. Am. Coll. Radiol. 2023, 20, 730–737. [Google Scholar] [CrossRef]
  13. Moltoni, G.; Lucignani, G.; Sgrò, S.; Guarnera, A.; Rossi Espagnet, M.C.; Dellepiane, F.; Carducci, C.; Liberi, S.; Iacoella, E.; Evangelisti, G.; et al. MRI Scan with the “Feed and Wrap” Technique and with an Optimized Anesthesia Protocol: A Retrospective Analysis of a Single-Center Experience. Front. Pediatr. 2024, 12, 1415603. [Google Scholar] [CrossRef]
  14. Brendlin, A.S.; Schmid, U.; Plajer, D.; Chaika, M.; Mader, M.; Wrazidlo, R.; Männlin, S.; Spogis, J.; Estler, A.; Esser, M.; et al. AI Denoising Improves Image Quality and Radiological Workflows in Pediatric Ultra-Low-Dose Thorax Computed Tomography Scans. Tomography 2022, 8, 1678–1689. [Google Scholar] [CrossRef]
  15. Ng, C.K.C. Artificial Intelligence for Radiation Dose Optimization in Pediatric Radiology: A Systematic Review. Children 2022, 9, 1044. [Google Scholar] [CrossRef] [PubMed]
  16. Shin, D.-J.; Choi, Y.H.; Lee, S.B.; Cho, Y.J.; Lee, S.; Cheon, J.-E. Low-Iodine-Dose Computed Tomography Coupled with an Artificial Intelligence-Based Contrast-Boosting Technique in Children: A Retrospective Study on Comparison with Conventional-Iodine-Dose Computed Tomography. Pediatr. Radiol. 2024, 54, 1315–1324. [Google Scholar] [CrossRef]
  17. Gong, E.; Pauly, J.M.; Wintermark, M.; Zaharchuk, G. Deep Learning Enables Reduced Gadolinium Dose for Contrast-Enhanced Brain MRI. J. Magn. Reson. Imaging 2018, 48, 330–340. [Google Scholar] [CrossRef] [PubMed]
  18. Borisch, E.A.; Froemming, A.T.; Grimm, R.C.; Kawashima, A.; Trzasko, J.D.; Riederer, S.J. Model-Based Image Reconstruction with Wavelet Sparsity Regularization for through-Plane Resolution Restoration in T -Weighted Spin-Echo Prostate MRI. Magn. Reson. Med. 2023, 89, 454–468. [Google Scholar] [CrossRef] [PubMed]
  19. Li, Y.; Yang, M.; Bian, T.; Wu, H. MRI Super-Resolution Analysis via MRISR: Deep Learning for Low-Field Imaging. Information 2024, 15, 655. [Google Scholar] [CrossRef]
  20. Hewlett, M.; Petrov, I.; Johnson, P.M.; Drangova, M. Deep-Learning-Based Motion Correction Using Multichannel MRI Data: A Study Using Simulated Artifacts in the fastMRI Dataset. NMR Biomed. 2024, 37, e5179. [Google Scholar] [CrossRef]
  21. Lucas, A.; Campbell Arnold, T.; Okar, S.V.; Vadali, C.; Kawatra, K.D.; Ren, Z.; Cao, Q.; Shinohara, R.T.; Schindler, M.K.; Davis, K.A.; et al. Multi-Contrast High-Field Quality Image Synthesis for Portable Low-Field MRI Using Generative Adversarial Networks and Paired Data. medRxiv 2023. [Google Scholar] [CrossRef]
  22. Bahloul, M.A.; Jabeen, S.; Benoumhani, S.; Alsaleh, H.A.; Belkhatir, Z.; Al-Wabil, A. Advancements in Synthetic CT Generation from MRI: A Review of Techniques, and Trends in Radiation Therapy Planning. J. Appl. Clin. Med. Phys. 2024, 25, e14499. [Google Scholar] [CrossRef] [PubMed]
  23. Takita, H.; Matsumoto, T.; Tatekawa, H.; Katayama, Y.; Nakajo, K.; Uda, T.; Mitsuyama, Y.; Walston, S.L.; Miki, Y.; Ueda, D. AI-Based Virtual Synthesis of Methionine PET from Contrast-Enhanced MRI: Development and External Validation Study. Radiology 2023, 308, e223016. [Google Scholar] [CrossRef] [PubMed]
  24. Zhang, K.; Zuo, W.; Chen, Y.; Meng, D.; Zhang, L. Beyond a Gaussian Denoiser: Residual Learning of Deep CNN for Image Denoising. IEEE Trans. Image Process. 2017, 26, 3142–3155. [Google Scholar] [CrossRef] [PubMed]
  25. Guarnera, A.; Moltoni, G.; Dellepiane, F.; Lucignani, G.; Rossi-Espagnet, M.C.; Campi, F.; Auriti, C.; Longo, D. Bacterial Meningoencephalitis in Newborns. Biomedicines 2024, 12, 2490. [Google Scholar] [CrossRef]
  26. Filippi, C.G.; Stein, J.M.; Wang, Z.; Bakas, S.; Liu, Y.; Chang, P.D.; Lui, Y.; Hess, C.; Barboriak, D.P.; Flanders, A.E.; et al. Ethical Considerations and Fairness in the Use of Artificial Intelligence for Neuroradiology. AJNR Am. J. Neuroradiol. 2023, 44, 1242–1248. [Google Scholar] [CrossRef]
  27. Martin, D.; Tong, E.; Kelly, B.; Yeom, K.; Yedavalli, V. Current Perspectives of Artificial Intelligence in Pediatric Neuroradiology: An Overview. Front. Radiol. 2021, 1, 713681. [Google Scholar] [CrossRef]
  28. Guarnera, A.; Valente, P.; Pasquini, L.; Moltoni, G.; Randisi, F.; Carducci, C.; Carboni, A.; Lucignani, G.; Napolitano, A.; Romanzo, A.; et al. Congenital Malformations of the Eye: A Pictorial Review and Clinico-Radiological Correlations. J. Ophthalmol. 2024, 2024, 5993083. [Google Scholar] [CrossRef]
  29. Bailey, C.R.; Bailey, A.M.; McKenney, A.S.; Weiss, C.R. Understanding and Appreciating Burnout in Radiologists. Radiographics 2022, 42, E137–E139. [Google Scholar] [CrossRef]
  30. Nair, A.; Ong, W.; Lee, A.; Leow, N.W.; Makmur, A.; Ting, Y.H.; Lee, Y.J.; Ong, S.J.; Tan, J.J.H.; Kumar, N.; et al. Enhancing Radiologist Productivity with Artificial Intelligence in Magnetic Resonance Imaging (MRI): A Narrative Review. Diagnostics 2025, 15, 1146. [Google Scholar] [CrossRef]
  31. Xiao, H.; Yang, Z.; Liu, T.; Liu, S.; Huang, X.; Dai, J. Deep Learning for Medical Imaging Super-Resolution: A Comprehensive Review. Neurocomputing 2025, 630, 129667. [Google Scholar] [CrossRef]
  32. Molina-Maza, J.M.; Galiana-Bordera, A.; Jimenez, M.; Malpica, N.; Torrado-Carvajal, A. Development of a Super-Resolution Scheme for Pediatric Magnetic Resonance Brain Imaging Through Convolutional Neural Networks. Front. Neurosci. 2022, 16, 830143. [Google Scholar] [CrossRef] [PubMed]
  33. Zhou, Z.; Ma, A.; Feng, Q.; Wang, R.; Cheng, L.; Chen, X.; Yang, X.; Liao, K.; Miao, Y.; Qiu, Y. Super-Resolution of Brain Tumor MRI Images Based on Deep Learning. J. Appl. Clin. Med. Phys. 2022, 23, e13758. [Google Scholar] [CrossRef] [PubMed]
  34. Bernstein, M.A.; Fain, S.B.; Riederer, S.J. Effect of Windowing and Zero-Filled Reconstruction of MRI Data on Spatial Resolution and Acquisition Strategy. J. Magn. Reson. Imaging 2001, 14, 270–280. [Google Scholar] [CrossRef]
  35. Ebel, A.; Dreher, W.; Leibfritz, D. Effects of Zero-Filling and Apodization on Spectral Integrals in Discrete Fourier-Transform Spectroscopy of Noisy Data. J. Magn. Reson. 2006, 182, 330–338. [Google Scholar] [CrossRef]
  36. Yoon, J.H.; Nickel, M.D.; Peeters, J.M.; Lee, J.M. Rapid Imaging: Recent Advances in Abdominal MRI for Reducing Acquisition Time and Its Clinical Applications. Korean J. Radiol. 2019, 20, 1597–1615. [Google Scholar] [CrossRef]
  37. Schlemper, J.; Caballero, J.; Hajnal, J.V.; Price, A.N.; Rueckert, D. A Deep Cascade of Convolutional Neural Networks for Dynamic MR Image Reconstruction. IEEE Trans. Med. Imaging 2018, 37, 491–503. [Google Scholar] [CrossRef]
  38. Ye, J.C.; Eldar, Y.C.; Unser, M. Deep Learning for Biomedical Image Reconstruction; Cambridge University Press: Cambridge, UK, 2023; ISBN 9781316517512. [Google Scholar]
  39. Lee, D.; Yoo, J.; Tak, S.; Ye, J.C. Deep Residual Learning for Accelerated MRI Using Magnitude and Phase Networks. IEEE Trans. Biomed. Eng. 2018, 65, 1985–1995. [Google Scholar] [CrossRef]
  40. Lustig, M.; Donoho, D.; Pauly, J.M. Sparse MRI: The Application of Compressed Sensing for Rapid MR Imaging. Magn. Reson. Med. 2007, 58, 1182–1195. [Google Scholar] [CrossRef]
  41. Aggarwal, H.K.; Mani, M.P.; Jacob, M. MoDL: Model-Based Deep Learning Architecture for Inverse Problems. IEEE Trans. Med. Imaging 2019, 38, 394–405. [Google Scholar] [CrossRef]
  42. Zhang, S.; Zhong, M.; Shenliu, H.; Wang, N.; Hu, S.; Lu, X.; Lin, L.; Zhang, H.; Zhao, Y.; Yang, C.; et al. Deep Learning-Based Super-Resolution Reconstruction on Undersampled Brain Diffusion-Weighted MRI for Infarction Stroke: A Comparison to Conventional Iterative Reconstruction. AJNR Am. J. Neuroradiol. 2025, 46, 41–48. [Google Scholar] [CrossRef]
  43. Matsuo, K.; Nakaura, T.; Morita, K.; Uetani, H.; Nagayama, Y.; Kidoh, M.; Hokamura, M.; Yamashita, Y.; Shinoda, K.; Ueda, M.; et al. Feasibility Study of Super-Resolution Deep Learning-Based Reconstruction Using K-Space Data in Brain Diffusion-Weighted Images. Neuroradiology 2023, 65, 1619–1629. [Google Scholar] [CrossRef]
  44. Cole, J.H.; Poudel, R.P.K.; Tsagkrasoulis, D.; Caan, M.W.A.; Steves, C.; Spector, T.D.; Montana, G. Predicting Brain Age with Deep Learning from Raw Imaging Data Results in a Reliable and Heritable Biomarker. Neuroimage 2017, 163, 115–124. [Google Scholar] [CrossRef]
  45. Behl, N. Deep Resolve—Mobilizing the Power of Networks. In MAGNETOM Flash; Siemens Healthineers: Erlangen, Germany, 2021; Volume 78. [Google Scholar]
  46. Cordero-Grande, L.; Christiaens, D.; Hutter, J.; Price, A.N.; Hajnal, J.V. Complex Diffusion-Weighted Image Estimation via Matrix Recovery under General Noise Models. Neuroimage 2019, 200, 391–404. [Google Scholar] [CrossRef] [PubMed]
  47. Singh, R.; Singh, N.; Kaur, L. Deep Learning Methods for 3D Magnetic Resonance Image Denoising, Bias Field and Motion Artifact Correction: A Comprehensive Review. Phys. Med. Biol. 2024, 69, 23TR01. [Google Scholar] [CrossRef] [PubMed]
  48. Zhang, M.; Xu, J.; Turk, E.A.; Grant, P.E.; Golland, P.; Adalsteinsson, E. Enhanced Detection of Fetal Pose in 3D MRI by Deep Reinforcement Learning with Physical Structure Priors on Anatomy. Med. Image Comput. Comput. Assist. Interv. 2020, 12266, 396–405. [Google Scholar]
  49. Kim, S.-H.; Choi, Y.H.; Lee, J.S.; Lee, S.B.; Cho, Y.J.; Lee, S.H.; Shin, S.-M.; Cheon, J.-E. Deep Learning Reconstruction in Pediatric Brain MRI: Comparison of Image Quality with Conventional T2-Weighted MRI. Neuroradiology 2023, 65, 207–214. [Google Scholar] [CrossRef] [PubMed]
  50. Usman, M.; Latif, S.; Asim, M.; Lee, B.-D.; Qadir, J. Retrospective Motion Correction in Multishot MRI Using Generative Adversarial Network. Sci. Rep. 2020, 10, 4786. [Google Scholar] [CrossRef]
  51. Chen, Z.; Pawar, K.; Ekanayake, M.; Pain, C.; Zhong, S.; Egan, G.F. Deep Learning for Image Enhancement and Correction in Magnetic Resonance Imaging-State-of-the-Art and Challenges. J. Digit. Imaging 2023, 36, 204–230. [Google Scholar] [CrossRef]
  52. Chang, Y.; Li, Z.; Saju, G.; Mao, H.; Liu, T. Deep Learning-Based Rigid Motion Correction for Magnetic Resonance Imaging: A Survey. Meta-Radiology 2023, 1, 100001. [Google Scholar] [CrossRef]
  53. Maclaren, J.; Herbst, M.; Speck, O.; Zaitsev, M. Prospective Motion Correction in Brain Imaging: A Review. Magn. Reson. Med. 2013, 69, 621–636. [Google Scholar] [CrossRef]
  54. Zaitsev, M.; Akin, B.; LeVan, P.; Knowles, B.R. Prospective Motion Correction in Functional MRI. Neuroimage 2017, 154, 33–42. [Google Scholar] [CrossRef]
  55. Power, J.D.; Barnes, K.A.; Snyder, A.Z.; Schlaggar, B.L.; Petersen, S.E. Spurious but Systematic Correlations in Functional Connectivity MRI Networks Arise from Subject Motion. Neuroimage 2012, 59, 2142–2154. [Google Scholar] [CrossRef] [PubMed]
  56. Satterthwaite, T.D.; Ciric, R.; Roalf, D.R.; Davatzikos, C.; Bassett, D.S.; Wolf, D.H. Motion Artifact in Studies of Functional Connectivity: Characteristics and Mitigation Strategies. Hum. Brain Mapp. 2019, 40, 2033–2051. [Google Scholar] [CrossRef] [PubMed]
  57. Alkhulaifat, D.; Rafful, P.; Khalkhali, V.; Welsh, M.; Sotardi, S.T. Implications of Pediatric Artificial Intelligence Challenges for Artificial Intelligence Education and Curriculum Development. J. Am. Coll. Radiol. 2023, 20, 724–729. [Google Scholar] [CrossRef]
  58. Pacchiano, F.; Tortora, M.; Doneda, C.; Izzo, G.; Arrigoni, F.; Ugga, L.; Cuocolo, R.; Parazzini, C.; Righini, A.; Brunetti, A. Radiomics and Artificial Intelligence Applications in Pediatric Brain Tumors. World J. Pediatr. 2024, 20, 747–763. [Google Scholar] [CrossRef]
  59. Forestieri, M.; Napolitano, A.; Tomà, P.; Bascetta, S.; Cirillo, M.; Tagliente, E.; Fracassi, D.; D’Angelo, P.; Casazza, I. Machine Learning Algorithm: Texture Analysis in CNO and Application in Distinguishing CNO and Bone Marrow Growth-Related Changes on Whole-Body MRI. Diagnostics 2023, 14, 61. [Google Scholar] [CrossRef]
  60. Wagner, M.W.; Hainc, N.; Khalvati, F.; Namdar, K.; Figueiredo, L.; Sheng, M.; Laughlin, S.; Shroff, M.M.; Bouffet, E.; Tabori, U.; et al. Radiomics of Pediatric Low-Grade Gliomas: Toward a Pretherapeutic Differentiation of BRAF-Mutated and BRAF-Fused Tumors. AJNR Am. J. Neuroradiol. 2021, 42, 759–765. [Google Scholar] [CrossRef]
  61. Wagner, M.W.; Namdar, K.; Napoleone, M.; Hainc, N.; Amirabadi, A.; Fonseca, A.; Laughlin, S.; Shroff, M.M.; Bouffet, E.; Hawkins, C.; et al. Radiomic Features Based on MRI Predict Progression-Free Survival in Pediatric Diffuse Midline Glioma/Diffuse Intrinsic Pontine Glioma. Can. Assoc. Radiol. J. 2023, 74, 119–126. [Google Scholar] [CrossRef]
  62. Cardoso, M.J.; Modat, M.; Wolz, R.; Melbourne, A.; Cash, D.; Rueckert, D.; Ourselin, S. Geodesic Information Flows: Spatially-Variant Graphs and Their Application to Segmentation and Fusion. IEEE Trans. Med. Imaging 2015, 34, 1976–1988. [Google Scholar] [CrossRef]
  63. Reuter, M.; Schmansky, N.J.; Rosas, H.D.; Fischl, B. Within-Subject Template Estimation for Unbiased Longitudinal Image Analysis. Neuroimage 2012, 61, 1402–1418. [Google Scholar] [CrossRef] [PubMed]
  64. Shaikh, A.; Amin, S.; Zeb, M.A.; Sulaiman, A.; Al Reshan, M.S.; Alshahrani, H. Enhanced Brain Tumor Detection and Segmentation Using Densely Connected Convolutional Networks with Stacking Ensemble Learning. Comput. Biol. Med. 2025, 186, 109703. [Google Scholar] [CrossRef] [PubMed]
  65. Ronneberger, O.; Fischer, P.; Brox, T. U-Net: Convolutional Networks for Biomedical Image Segmentation. In Medical Image Computing and Computer-Assisted Intervention—MICCAI 2015; Lecture Notes in Computer Science; Springer International Publishing: Cham, Switzerland, 2015; pp. 234–241. ISBN 9783319245737. [Google Scholar]
  66. Milletari, F.; Navab, N.; Ahmadi, S.-A. V-Net: Fully Convolutional Neural Networks for Volumetric Medical Image Segmentation. arXiv 2016, arXiv:1606.04797. [Google Scholar]
  67. Zhou, Z.; Siddiquee, M.M.R.; Tajbakhsh, N.; Liang, J. UNet++: Redesigning Skip Connections to Exploit Multiscale Features in Image Segmentation. IEEE Trans. Med. Imaging 2020, 39, 1856–1867. [Google Scholar] [CrossRef]
  68. Isensee, F.; Jaeger, P.F.; Kohl, S.A.A.; Petersen, J.; Maier-Hein, K.H. nnU-Net: A Self-Configuring Method for Deep Learning-Based Biomedical Image Segmentation. Nat. Methods 2021, 18, 203–211. [Google Scholar] [CrossRef]
  69. Havaei, M.; Davy, A.; Warde-Farley, D.; Biard, A.; Courville, A.; Bengio, Y.; Pal, C.; Jodoin, P.-M.; Larochelle, H. Brain Tumor Segmentation with Deep Neural Networks. Med. Image Anal. 2017, 35, 18–31. [Google Scholar] [CrossRef]
  70. Litjens, G.; Kooi, T.; Bejnordi, B.E.; Setio, A.A.A.; Ciompi, F.; Ghafoorian, M.; van der Laak, J.A.W.M.; van Ginneken, B.; Sánchez, C.I. A Survey on Deep Learning in Medical Image Analysis. Med. Image Anal. 2017, 42, 60–88. [Google Scholar] [CrossRef]
  71. Girum, K.B.; Créhange, G.; Hussain, R.; Lalande, A. Fast Interactive Medical Image Segmentation with Weakly Supervised Deep Learning Method. Int. J. Comput. Assist. Radiol. Surg. 2020, 15, 1437–1444. [Google Scholar] [CrossRef]
  72. Grimm, F.; Edl, F.; Kerscher, S.R.; Nieselt, K.; Gugel, I.; Schuhmann, M.U. Semantic Segmentation of Cerebrospinal Fluid and Brain Volume with a Convolutional Neural Network in Pediatric Hydrocephalus-Transfer Learning from Existing Algorithms. Acta Neurochir. 2020, 162, 2463–2474. [Google Scholar] [CrossRef]
  73. Billot, B.; Greve, D.N.; Puonti, O.; Thielscher, A.; Van Leemput, K.; Fischl, B.; Dalca, A.V.; Iglesias, J.E. SynthSeg: Segmentation of Brain MRI Scans of Any Contrast and Resolution without Retraining. arXiv 2021, arXiv:2107.09559. [Google Scholar] [CrossRef]
  74. WHO Classification of Tumours Editorial Board. Central Nervous System Tumours; International Agency for Research on Cancer: Lyon, France, 2022; ISBN 9789283245087. [Google Scholar]
  75. Guarnera, A.; Ius, T.; Romano, A.; Bagatto, D.; Denaro, L.; Aiudi, D.; Iacoangeli, M.; Palmieri, M.; Frati, A.; Santoro, A.; et al. Advanced MRI, Radiomics and Radiogenomics in Unravelling Incidental Glioma Grading and Genetic Status: Where Are We? Medicina 2025, 61, 1453. [Google Scholar] [CrossRef]
  76. Guarnera, A.; Romano, A.; Moltoni, G.; Ius, T.; Palizzi, S.; Romano, A.; Bagatto, D.; Minniti, G.; Bozzao, A. The Role of Advanced MRI Sequences in the Diagnosis and Follow-up of Adult Brainstem Gliomas: A Neuroradiological Review. Tomography 2023, 9, 1526–1537. [Google Scholar] [CrossRef] [PubMed]
  77. Tampu, I.E.; Bianchessi, T.; Blystad, I.; Lundberg, P.; Nyman, P.; Eklund, A.; Haj-Hosseini, N. Pediatric Brain Tumor Classification Using Deep Learning on MR Images with Age Fusion. Neurooncol. Adv. 2025, 7, vdae205. [Google Scholar] [CrossRef] [PubMed]
  78. Aamir, M.; Rahman, Z.; Dayo, Z.A.; Abro, W.A.; Uddin, M.I.; Khan, I.; Imran, A.S.; Ali, Z.; Ishfaq, M.; Guan, Y.; et al. A Deep Learning Approach for Brain Tumor Classification Using MRI Images. Comput. Electr. Eng. 2022, 101, 108105. [Google Scholar] [CrossRef]
  79. Chakrabarty, S.; Sotiras, A.; Milchenko, M.; LaMontagne, P.; Hileman, M.; Marcus, D. MRI-Based Identification and Classification of Major Intracranial Tumor Types by Using a 3D Convolutional Neural Network: A Retrospective Multi-Institutional Analysis. Radiol. Artif. Intell. 2021, 3, e200301. [Google Scholar] [CrossRef]
  80. Das, D.; Mahanta, L.B.; Ahmed, S.; Baishya, B.K. Classification of Childhood Medulloblastoma into WHO-Defined Multiple Subtypes Based on Textural Analysis. J. Microsc. 2020, 279, 26–38. [Google Scholar] [CrossRef]
  81. Li, Y.; Zhuo, Z.; Weng, J.; Haller, S.; Bai, H.X.; Li, B.; Liu, X.; Zhu, M.; Wang, Z.; Li, J.; et al. A Deep Learning Model for Differentiating Paediatric Intracranial Germ Cell Tumour Subtypes and Predicting Survival with MRI: A Multicentre Prospective Study. BMC Med. 2024, 22, 375. [Google Scholar] [CrossRef]
  82. Voicu, I.P.; Dotta, F.; Napolitano, A.; Caulo, M.; Piccirilli, E.; D’Orazio, C.; Carai, A.; Miele, E.; Vinci, M.; Rossi, S.; et al. Machine Learning Analysis in Diffusion Kurtosis Imaging for Discriminating Pediatric Posterior Fossa Tumors: A Repeatability and Accuracy Pilot Study. Cancers 2024, 16, 2578. [Google Scholar] [CrossRef]
  83. Di Giannatale, A.; Di Paolo, P.L.; Curione, D.; Lenkowicz, J.; Napolitano, A.; Secinaro, A.; Tomà, P.; Locatelli, F.; Castellano, A.; Boldrini, L. Radiogenomics Prediction for MYCN Amplification in Neuroblastoma: A Hypothesis Generating Study. Pediatr. Blood Cancer 2021, 68, e29110. [Google Scholar] [CrossRef]
  84. Pereira, S.; Pinto, A.; Alves, V.; Silva, C.A. Brain Tumor Segmentation Using Convolutional Neural Networks in MRI Images. IEEE Trans. Med. Imaging 2016, 35, 1240–1251. [Google Scholar] [CrossRef]
  85. Fathi Kazerooni, A.; Arif, S.; Madhogarhia, R.; Khalili, N.; Haldar, D.; Bagheri, S.; Familiar, A.M.; Anderson, H.; Haldar, S.; Tu, W.; et al. Automated Tumor Segmentation and Brain Tissue Extraction from Multiparametric MRI of Pediatric Brain Tumors: A Multi-Institutional Study. Neurooncol. Adv. 2023, 5, vdad027. [Google Scholar] [CrossRef] [PubMed]
  86. Familiar, A.M.; Fathi Kazerooni, A.; Vossough, A.; Ware, J.B.; Bagheri, S.; Khalili, N.; Anderson, H.; Haldar, D.; Storm, P.B.; Resnick, A.C.; et al. Towards Consistency in Pediatric Brain Tumor Measurements: Challenges, Solutions, and the Role of Artificial Intelligence-Based Segmentation. Neuro Oncol. 2024, 26, 1557–1571. [Google Scholar] [CrossRef] [PubMed]
  87. Capriotti, G.; Guarnera, A.; Romano, A.; Moltoni, G.; Granese, G.; Bozzao, A.; Signore, A. Neuroimaging of Mild Traumatic Injury. Semin. Nucl. Med. 2025, 55, 512–525. [Google Scholar] [CrossRef] [PubMed]
  88. Lampros, M.; Symeou, S.; Vlachos, N.; Gkampenis, A.; Zigouris, A.; Voulgaris, S.; Alexiou, G.A. Applications of Machine Learning in Pediatric Traumatic Brain Injury (pTBI): A Systematic Review of the Literature. Neurosurg. Rev. 2024, 47, 737. [Google Scholar] [CrossRef]
  89. Pierre, K.; Turetsky, J.; Raviprasad, A.; Sadat Razavi, S.M.; Mathelier, M.; Patel, A.; Lucke-Wold, B. Machine Learning in Neuroimaging of Traumatic Brain Injury: Current Landscape, Research Gaps, and Future Directions. Trauma Care 2024, 4, 31–43. [Google Scholar] [CrossRef]
  90. Tunthanathip, T.; Oearsakul, T. Application of Machine Learning to Predict the Outcome of Pediatric Traumatic Brain Injury. Chin. J. Traumatol. 2021, 24, 350–355. [Google Scholar] [CrossRef]
  91. Chong, S.-L.; Liu, N.; Barbier, S.; Ong, M.E.H. Predictive Modeling in Pediatric Traumatic Brain Injury Using Machine Learning. BMC Med. Res. Methodol. 2015, 15, 22. [Google Scholar] [CrossRef]
  92. Daley, M.; Cameron, S.; Ganesan, S.L.; Patel, M.A.; Stewart, T.C.; Miller, M.R.; Alharfi, I.; Fraser, D.D. Pediatric Severe Traumatic Brain Injury Mortality Prediction Determined with Machine Learning-Based Modeling. Injury 2022, 53, 992–998. [Google Scholar] [CrossRef]
  93. Huth, S.F.; Slater, A.; Waak, M.; Barlow, K.; Raman, S. Predicting Neurological Recovery after Traumatic Brain Injury in Children: A Systematic Review of Prognostic Models. J. Neurotrauma 2020, 37, 2141–2149. [Google Scholar] [CrossRef]
  94. Barkovich, A.J.; Guerrini, R.; Kuzniecky, R.I.; Jackson, G.D.; Dobyns, W.B. A Developmental and Genetic Classification for Malformations of Cortical Development: Update 2012. Brain 2012, 135, 1348–1369. [Google Scholar] [CrossRef]
  95. Guarnera, A.; Lucignani, G.; Parrillo, C.; Rossi-Espagnet, M.C.; Carducci, C.; Moltoni, G.; Savarese, I.; Campi, F.; Dotta, A.; Milo, F.; et al. Predictive Value of MRI in Hypoxic-Ischemic Encephalopathy Treated with Therapeutic Hypothermia. Children 2023, 10, 446. [Google Scholar] [CrossRef] [PubMed]
  96. Guarnera, A.; Bottino, F.; Napolitano, A.; Sforza, G.; Cappa, M.; Chioma, L.; Pasquini, L.; Rossi-Espagnet, M.C.; Lucignani, G.; Figà-Talamanca, L.; et al. Early Alterations of Cortical Thickness and Gyrification in Migraine without Aura: A Retrospective MRI Study in Pediatric Patients. J. Headache Pain 2021, 22, 79. [Google Scholar] [CrossRef] [PubMed]
  97. Xie, H.N.; Wang, N.; He, M.; Zhang, L.H.; Cai, H.M.; Xian, J.B.; Lin, M.F.; Zheng, J.; Yang, Y.Z. Using Deep-Learning Algorithms to Classify Fetal Brain Ultrasound Images as Normal or Abnormal. Ultrasound Obstet. Gynecol. 2020, 56, 579–587. [Google Scholar] [CrossRef]
  98. Vahedifard, F.; Adepoju, J.O.; Supanich, M.; Ai, H.A.; Liu, X.; Kocak, M.; Marathu, K.K.; Byrd, S.E. Review of Deep Learning and Artificial Intelligence Models in Fetal Brain Magnetic Resonance Imaging. World J. Clin. Cases 2023, 11, 3725–3735. [Google Scholar] [CrossRef]
  99. Attallah, O.; Sharkas, M.A.; Gadelkarim, H. Deep Learning Techniques for Automatic Detection of Embryonic Neurodevelopmental Disorders. Diagnostics 2020, 10, 27. [Google Scholar] [CrossRef]
  100. Lin, M.; Zhou, Q.; Lei, T.; Shang, N.; Zheng, Q.; He, X.; Wang, N.; Xie, H. Deep Learning System Improved Detection Efficacy of Fetal Intracranial Malformations in a Randomized Controlled Trial. NPJ Digit. Med. 2023, 6, 191. [Google Scholar] [CrossRef]
  101. Priya, M.; Nandhini, M. Detection of Fetal Brain Abnormalities Using Data Augmentation and Convolutional Neural Network in Internet of Things. Measur. Sens. 2023, 28, 100808. [Google Scholar] [CrossRef]
  102. Zhao, L.; Asis-Cruz, J.D.; Feng, X.; Wu, Y.; Kapse, K.; Largent, A.; Quistorff, J.; Lopez, C.; Wu, D.; Qing, K.; et al. Automated 3D Fetal Brain Segmentation Using an Optimized Deep Learning Approach. AJNR Am. J. Neuroradiol. 2022, 43, 448–454. [Google Scholar] [CrossRef]
  103. Nosarti, C.; Murray, R.M.; Hack, M. Neurodevelopmental Outcomes of Preterm Birth: From Childhood to Adult Life; Cambridge University Press: Cambridge, UK, 2010; ISBN 9781139487146. [Google Scholar]
  104. Urbańska, S.M.; Leśniewski, M.; Welian-Polus, I.; Witas, A.; Szukała, K.; Chrościńska-Krawczyk, M. Epilepsy Diagnosis and Treatment in Children—New Hopes and Challenges—Literature Review. J. Pre-Clin. Clin. Res. 2024, 1, 40–49. [Google Scholar] [CrossRef]
  105. Colombo, N.; Tassi, L.; Galli, C.; Citterio, A.; Lo Russo, G.; Scialfa, G.; Spreafico, R. Focal Cortical Dysplasias: MR Imaging, Histopathologic, and Clinical Correlations in Surgically Treated Patients with Epilepsy. AJNR Am. J. Neuroradiol. 2003, 24, 724–733. [Google Scholar]
  106. Gill, R.S.; Lee, H.-M.; Caldairou, B.; Hong, S.-J.; Barba, C.; Deleo, F.; D’Incerti, L.; Mendes Coelho, V.C.; Lenge, M.; Semmelroch, M.; et al. Multicenter Validation of a Deep Learning Detection Algorithm for Focal Cortical Dysplasia. Neurology 2021, 97, e1571–e1582. [Google Scholar] [CrossRef]
  107. Adler, S.; Wagstyl, K.; Gunny, R.; Ronan, L.; Carmichael, D.; Cross, J.H.; Fletcher, P.C.; Baldeweg, T. Novel Surface Features for Automated Detection of Focal Cortical Dysplasias in Paediatric Epilepsy. Neuroimage Clin. 2017, 14, 18–27. [Google Scholar] [CrossRef] [PubMed]
  108. Dell’Isola, G.B.; Fattorusso, A.; Villano, G.; Ferrara, P.; Verrotti, A. Innovating Pediatric Epilepsy: Transforming Diagnosis and Treatment with AI. World J. Pediatr. 2025, 21, 333–337. [Google Scholar] [CrossRef] [PubMed]
  109. Jin, B.; Krishnan, B.; Adler, S.; Wagstyl, K.; Hu, W.; Jones, S.; Najm, I.; Alexopoulos, A.; Zhang, K.; Zhang, J.; et al. Automated Detection of Focal Cortical Dysplasia Type II with Surface-Based Magnetic Resonance Imaging Postprocessing and Machine Learning. Epilepsia 2018, 59, 982–992. [Google Scholar] [CrossRef] [PubMed]
  110. Ganji, Z.; Hakak, M.A.; Zamanpour, S.A.; Zare, H. Automatic Detection of Focal Cortical Dysplasia Type II in MRI: Is the Application of Surface-Based Morphometry and Machine Learning Promising? Front. Hum. Neurosci. 2021, 15, 608285. [Google Scholar] [CrossRef]
  111. Spitzer, H.; Ripart, M.; Whitaker, K.; Napolitano, A.; De Palma, L.; De Benedictis, A.; Foldes, S.; Humphreys, Z.; Zhang, K.; Hu, W.; et al. Interpretable Surface-Based Detection of Focal Cortical Dysplasias: A MELD Study. medRxiv 2021. [Google Scholar] [CrossRef]
  112. Cohen, N.T.; You, X.; Krishnamurthy, M.; Sepeta, L.N.; Zhang, A.; Oluigbo, C.; Whitehead, M.T.; Gholipour, T.; Baldeweg, T.; Wagstyl, K.; et al. Networks Underlie Temporal Onset of Dysplasia-Related Epilepsy: A MELD Study. Ann. Neurol. 2022, 92, 503–511. [Google Scholar] [CrossRef]
  113. Ripart, M.; Spitzer, H.; Williams, L.Z.J.; Walger, L.; Chen, A.; Napolitano, A.; Rossi-Espagnet, C.; Foldes, S.T.; Hu, W.; Mo, J.; et al. Detection of Epileptogenic Focal Cortical Dysplasia Using Graph Neural Networks: A MELD Study. JAMA Neurol. 2025, 82, 397–406. [Google Scholar] [CrossRef]
  114. Hirano, R.; Asai, M.; Nakasato, N.; Kanno, A.; Uda, T.; Tsuyuguchi, N.; Yoshimura, M.; Shigihara, Y.; Okada, T.; Hirata, M. Deep Learning Based Automatic Detection and Dipole Estimation of Epileptic Discharges in MEG: A Multi-Center Study. Sci. Rep. 2024, 14, 24574. [Google Scholar] [CrossRef]
  115. Gleichgerrcht, E.; Munsell, B.C.; Alhusaini, S.; Alvim, M.K.M.; Bargalló, N.; Bender, B.; Bernasconi, A.; Bernasconi, N.; Bernhardt, B.; Blackmon, K.; et al. Artificial Intelligence for Classification of Temporal Lobe Epilepsy with ROI-Level MRI Data: A Worldwide ENIGMA-Epilepsy Study. Neuroimage Clin. 2021, 31, 102765. [Google Scholar] [CrossRef]
  116. Thom, M. Review: Hippocampal Sclerosis in Epilepsy: A Neuropathology Review. Neuropathol. Appl. Neurobiol. 2014, 40, 520–543. [Google Scholar] [CrossRef]
  117. Jiménez-Murillo, D.; Castro-Ospina, A.E.; Duque-Muñoz, L.; Martínez-Vargas, J.D.; Suárez-Revelo, J.X.; Vélez-Arango, J.M.; de la Iglesia-Vayá, M. Automatic Detection of Focal Cortical Dysplasia Using MRI: A Systematic Review. Sensors 2023, 23, 7072. [Google Scholar] [CrossRef]
  118. Zhang, S.; Zhuang, Y.; Luo, Y.; Zhu, F.; Zhao, W.; Zeng, H. Deep Learning-Based Automated Lesion Segmentation on Pediatric Focal Cortical Dysplasia II Preoperative MRI: A Reliable Approach. Insights Imaging 2024, 15, 71. [Google Scholar] [CrossRef] [PubMed]
  119. Wang, H.; Ahmed, S.N.; Mandal, M. Automated Detection of Focal Cortical Dysplasia Using a Deep Convolutional Neural Network. Comput. Med. Imaging Graph. 2020, 79, 101662. [Google Scholar] [CrossRef] [PubMed]
  120. Fischl, B. FreeSurfer. Neuroimage 2012, 62, 774–781. [Google Scholar] [CrossRef] [PubMed]
  121. Spitzer, H.; Ripart, M.; Whitaker, K.; D’Arco, F.; Mankad, K.; Chen, A.A.; Napolitano, A.; De Palma, L.; De Benedictis, A.; Foldes, S.; et al. Interpretable Surface-Based Detection of Focal Cortical Dysplasias: A Multi-Centre Epilepsy Lesion Detection Study. Brain 2022, 145, 3859–3871. [Google Scholar] [CrossRef]
  122. Zhang, F.; Savadjiev, P.; Cai, W.; Song, Y.; Rathi, Y.; Tunç, B.; Parker, D.; Kapur, T.; Schultz, R.T.; Makris, N.; et al. Whole Brain White Matter Connectivity Analysis Using Machine Learning: An Application to Autism. Neuroimage 2018, 172, 826–837. [Google Scholar] [CrossRef]
  123. Zhu, J.; Yao, S.; Yao, Z.; Yu, J.; Qian, Z.; Chen, P. White Matter Injury Detection Based on Preterm Infant Cranial Ultrasound Images. Front. Pediatr. 2023, 11, 1144952. [Google Scholar] [CrossRef]
  124. Sun, X.; Niwa, T.; Okazaki, T.; Kameda, S.; Shibukawa, S.; Horie, T.; Kazama, T.; Uchiyama, A.; Hashimoto, J. Automatic Detection of Punctate White Matter Lesions in Infants Using Deep Learning of Composite Images from Two Cases. Sci. Rep. 2023, 13, 4426. [Google Scholar] [CrossRef]
  125. Schlüter, A.; Rodríguez-Palmero, A.; Verdura, E.; Vélez-Santamaría, V.; Ruiz, M.; Fourcade, S.; Planas-Serra, L.; Martínez, J.J.; Guilera, C.; Girós, M.; et al. Diagnosis of Genetic White Matter Disorders by Singleton Whole-Exome and Genome Sequencing Using Interactome-Driven Prioritization. Neurology 2022, 98, e912–e923. [Google Scholar] [CrossRef]
  126. Ecker, C.; Bookheimer, S.Y.; Murphy, D.G.M. Neuroimaging in Autism Spectrum Disorder: Brain Structure and Function across the Lifespan. Lancet Neurol. 2015, 14, 1121–1134. [Google Scholar] [CrossRef]
  127. Mous, S.E.; Muetzel, R.L.; El Marroun, H.; Polderman, T.J.C.; van der Lugt, A.; Jaddoe, V.W.; Hofman, A.; Verhulst, F.C.; Tiemeier, H.; Posthuma, D.; et al. Cortical Thickness and Inattention/hyperactivity Symptoms in Young Children: A Population-Based Study. Psychol. Med. 2014, 44, 3203–3213. [Google Scholar] [CrossRef]
  128. Castellanos, F.X.; Tannock, R. Neuroscience of Attention-Deficit/hyperactivity Disorder: The Search for Endophenotypes. Nat. Rev. Neurosci. 2002, 3, 617–628. [Google Scholar] [CrossRef]
  129. Heinsfeld, A.S.; Franco, A.R.; Craddock, R.C.; Buchweitz, A.; Meneguzzi, F. Identification of Autism Spectrum Disorder Using Deep Learning and the ABIDE Dataset. Neuroimage Clin. 2018, 17, 16–23. [Google Scholar] [CrossRef] [PubMed]
  130. Eslami, T.; Almuqhim, F.; Raiker, J.S.; Saeed, F. Machine Learning Methods for Diagnosing Autism Spectrum Disorder and Attention-Deficit/Hyperactivity Disorder Using Functional and Structural MRI: A Survey. Front. Neuroinform. 2020, 14, 575999. [Google Scholar] [CrossRef] [PubMed]
  131. Lohani, D.C.; Rana, B. ADHD Diagnosis Using Structural Brain MRI and Personal Characteristic Data with Machine Learning Framework. Psychiatry Res. Neuroimaging 2023, 334, 111689. [Google Scholar] [CrossRef] [PubMed]
  132. Sen, B.; Borle, N.C.; Greiner, R.; Brown, M.R.G. A General Prediction Model for the Detection of ADHD and Autism Using Structural and Functional MRI. PLoS ONE 2018, 13, e0194856. [Google Scholar] [CrossRef]
  133. Bahathiq, R.A.; Banjar, H.; Bamaga, A.K.; Jarraya, S.K. Machine Learning for Autism Spectrum Disorder Diagnosis Using Structural Magnetic Resonance Imaging: Promising but Challenging. Front. Neuroinform. 2022, 16, 949926. [Google Scholar] [CrossRef]
  134. Moridian, P.; Ghassemi, N.; Jafari, M.; Salloum-Asfar, S.; Sadeghi, D.; Khodatars, M.; Shoeibi, A.; Khosravi, A.; Ling, S.H.; Subasi, A.; et al. Automatic Autism Spectrum Disorder Detection Using Artificial Intelligence Methods with MRI Neuroimaging: A Review. Front. Mol. Neurosci. 2022, 15, 999605. [Google Scholar] [CrossRef]
  135. Geis, J.R.; Brady, A.P.; Wu, C.C.; Spencer, J.; Ranschaert, E.; Jaremko, J.L.; Langer, S.G.; Kitts, A.B.; Birch, J.; Shields, W.F.; et al. Ethics of Artificial Intelligence in Radiology: Summary of the Joint European and North American Multisociety Statement. Can. Assoc. Radiol. J. 2019, 70, 329–334. [Google Scholar] [CrossRef]
  136. Cohen, I.G.; Amarasingham, R.; Shah, A.; Xie, B.; Lo, B. The Legal and Ethical Concerns That Arise from Using Complex Predictive Analytics in Health Care. Health Aff. 2014, 33, 1139–1147. [Google Scholar] [CrossRef]
  137. Grote, T.; Berens, P. On the Ethics of Algorithmic Decision-Making in Healthcare. J. Med. Ethics 2020, 46, 205–211. [Google Scholar] [CrossRef]
  138. Kelly, C.J.; Karthikesalingam, A.; Suleyman, M.; Corrado, G.; King, D. Key Challenges for Delivering Clinical Impact with Artificial Intelligence. BMC Med. 2019, 17, 195. [Google Scholar] [CrossRef] [PubMed]
  139. Sun, Q.; Akman, A.; Schuller, B.W. Explainable Artificial Intelligence for Medical Applications: A Review. ACM Trans. Comput. Healthc. 2025, 6, 1–31. [Google Scholar] [CrossRef]
  140. Dalboni da Rocha, J.L.; Lai, J.; Pandey, P.; Myat, P.S.M.; Loschinskey, Z.; Bag, A.K.; Sitaram, R. Artificial Intelligence for Neuroimaging in Pediatric Cancer. Cancers 2025, 17, 622. [Google Scholar] [CrossRef] [PubMed]
  141. Larrazabal, A.J.; Nieto, N.; Peterson, V.; Milone, D.H.; Ferrante, E. Gender Imbalance in Medical Imaging Datasets Produces Biased Classifiers for Computer-Aided Diagnosis. Proc. Natl. Acad. Sci. USA 2020, 117, 12592–12594. [Google Scholar] [CrossRef]
  142. Vafaeikia, P.; Wagner, M.W.; Hawkins, C.; Tabori, U.; Ertl-Wagner, B.B.; Khalvati, F. MRI-Based End-To-End Pediatric Low-Grade Glioma Segmentation and Classification. Can. Assoc. Radiol. J. 2024, 75, 153–160. [Google Scholar] [CrossRef]
  143. Motamed, S.; Rogalla, P.; Khalvati, F. Data Augmentation Using Generative Adversarial Networks (GANs) for GAN-Based Detection of Pneumonia and COVID-19 in Chest X-Ray Images. Inf. Med. Unlocked 2021, 27, 100779. [Google Scholar] [CrossRef]
  144. Hedderich, D.M.; Weisstanner, C.; Van Cauter, S.; Federau, C.; Edjlali, M.; Radbruch, A.; Gerke, S.; Haller, S. Artificial Intelligence Tools in Clinical Neuroradiology: Essential Medico-Legal Aspects. Neuroradiology 2023, 65, 1091–1099. [Google Scholar] [CrossRef]
  145. Recht, M.P.; Dewey, M.; Dreyer, K.; Langlotz, C.; Niessen, W.; Prainsack, B.; Smith, J.J. Integrating Artificial Intelligence into the Clinical Practice of Radiology: Challenges and Recommendations. Eur. Radiol. 2020, 30, 3576–3584. [Google Scholar] [CrossRef]
  146. Topol, E.J. High-Performance Medicine: The Convergence of Human and Artificial Intelligence. Nat. Med. 2019, 25, 44–56. [Google Scholar] [CrossRef]
  147. Raggio, C.B.; Zabaleta, M.K.; Skupien, N.; Blanck, O.; Cicone, F.; Cascini, G.L.; Zaffino, P.; Migliorelli, L.; Spadea, M.F. FedSynthCT-Brain: A Federated Learning Framework for Multi-Institutional Brain MRI-to-CT Synthesis. arXiv 2024, arXiv:2412.06690. [Google Scholar] [CrossRef]
  148. Bauer, A.; Bosl, W.; Aalami, O.; Schmiedmayer, P. Toward Scalable Access to Neurodevelopmental Screening: Insights, Implementation, and Challenges. arXiv 2025, arXiv:2503.13472. [Google Scholar]
  149. Orchard, C.; King, G.; Tryphonopoulos, P.; Gorman, E.; Ugirase, S.; Lising, D.; Fung, K. Interprofessional Team Conflict Resolution: A Critical Literature Review. J. Contin. Educ. Health Prof. 2024, 44, 203–210. [Google Scholar] [CrossRef]
  150. Guan, H.; Yap, P.-T.; Bozoki, A.; Liu, M. Federated Learning for Medical Image Analysis: A Survey. arXiv 2023, arXiv:2306.05980. [Google Scholar] [CrossRef]
Figure 2. Comparison of an axial ASL-derived CBF map (A) with the same CBF map after application of a super-resolution convolutional neural network (B) (required time: 0.5192 s), showing a marked improvement in image resolution (https://github.com/onnx/models (accessed on 20 July 2025)). The network was applied to the data and the results were visualized in MATLAB R2024a (The MathWorks Inc., Natick, MA, USA; https://www.mathworks.com (accessed on 20 July 2025)). ASL (Arterial Spin Labelling); CBF (Cerebral Blood Flow).
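To make the resolution step in Figure 2 concrete, the sketch below upsamples a toy CBF map with classical bilinear interpolation in pure NumPy. This is only a fixed-kernel baseline for illustration: a super-resolution CNN such as the ONNX model referenced in the caption replaces this hand-written interpolation with filters learned from paired low/high-resolution data, recovering detail that interpolation cannot. The array sizes and CBF value range are illustrative, not taken from the study.

```python
import numpy as np

def bilinear_upscale(img: np.ndarray, factor: int) -> np.ndarray:
    """Upscale a 2D map by `factor` with bilinear interpolation.

    A classical baseline: a super-resolution CNN replaces this fixed
    interpolation kernel with learned filters.
    """
    h, w = img.shape
    # Target grid coordinates mapped back into source index space.
    ys = np.linspace(0, h - 1, h * factor)
    xs = np.linspace(0, w - 1, w * factor)
    y0 = np.floor(ys).astype(int); y1 = np.minimum(y0 + 1, h - 1)
    x0 = np.floor(xs).astype(int); x1 = np.minimum(x0 + 1, w - 1)
    wy = (ys - y0)[:, None]
    wx = (xs - x0)[None, :]
    top = img[np.ix_(y0, x0)] * (1 - wx) + img[np.ix_(y0, x1)] * wx
    bot = img[np.ix_(y1, x0)] * (1 - wx) + img[np.ix_(y1, x1)] * wx
    return top * (1 - wy) + bot * wy

# Synthetic 64x64 "CBF map" (values in an illustrative ml/100 g/min range).
low_res = np.random.default_rng(0).uniform(20, 80, size=(64, 64))
high_res = bilinear_upscale(low_res, factor=2)
print(high_res.shape)  # (128, 128)
```

In practice the learned model would be exported to ONNX and run through an inference runtime (or imported into MATLAB, as in the figure), but the input/output geometry is exactly this: a small matrix in, a larger diagnostic-quality matrix out.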
Figure 3. Resolution improvement of a low-resolution brain T2WI acquisition (A) using Deep Resolve (B), an AI-powered image reconstruction technology based on convolutional neural networks [45], compared with a conventional high-resolution brain T2WI (C). The low-resolution T2WI (166 × 208 pixels) was acquired in 58 s and retrospectively enhanced by the Deep Resolve algorithm, which yields a high-resolution image (333 × 416 pixels) without extending acquisition time, thereby optimizing the clinical workflow. In contrast, the conventional high-resolution brain T2WI (350 × 350 pixels) shows similar image quality but requires a longer acquisition time (5 min 56 s). WI (Weighted Imaging).
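The scan-time saving implied by the Figure 3 example is easy to quantify from the numbers in the caption (58 s accelerated acquisition versus 5 min 56 s conventional acquisition):

```python
# Acquisition times from the Figure 3 example.
low_res_time_s = 58                     # accelerated, DL-enhanced scan
high_res_time_s = 5 * 60 + 56           # conventional scan: 356 s

speedup = high_res_time_s / low_res_time_s
print(f"scan-time reduction: {speedup:.1f}x")  # scan-time reduction: 6.1x
```

A roughly six-fold reduction per sequence is clinically meaningful in pediatrics, where shorter protocols reduce motion artifacts and the need for sedation.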
Figure 4. Segmentation of brain tissues by SynthSeg, a pre-trained U-Net-based convolutional neural network, on a coronal 3D T1 MPRAGE sequence (A,D) (required time: 10.35 s) [73]. The segmented grey matter (B) and white matter (E) may be superimposed on the 3D T1 MPRAGE image (grey matter in yellow in (C), white matter in green in (F)). The network was applied to the data and the results were visualized in MATLAB R2024a (The MathWorks Inc., Natick, MA, USA; https://www.mathworks.com (accessed on 20 July 2025)). MPRAGE (Magnetisation Prepared Rapid Gradient Echo Imaging).
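Once a network such as SynthSeg has produced a label map, downstream quantification is straightforward. The sketch below converts a toy 3D label volume into per-tissue volumes in millilitres; the label IDs here (0 background, 1 grey matter, 2 white matter) are a hypothetical convention for illustration — SynthSeg itself emits FreeSurfer label IDs.

```python
import numpy as np

def tissue_volumes_ml(labels: np.ndarray, voxel_mm3: float) -> dict[int, float]:
    """Convert a segmentation label map into per-label volumes (ml).

    Counts voxels per label and scales by the voxel volume in mm^3
    (1000 mm^3 = 1 ml).
    """
    ids, counts = np.unique(labels, return_counts=True)
    return {int(i): float(c) * voxel_mm3 / 1000.0 for i, c in zip(ids, counts)}

# Toy 10x10x10 label map at 1 mm isotropic resolution.
rng = np.random.default_rng(1)
seg = rng.integers(0, 3, size=(10, 10, 10))      # labels 0, 1, 2
vols = tissue_volumes_ml(seg, voxel_mm3=1.0)
```

The same per-label counting is how automated grey-/white-matter volumetry reports are generated from the overlays shown in panels (C) and (F).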
Figure 5. The patient’s MRI exam was processed with the MELD surface-based FCD detection algorithm, which detected a large FCD cluster in the right temporal lobe. Specifically, the 3D T1 MPRAGE and FLAIR sequences were processed with FreeSurfer [120] to extract the following features: grey–white matter intensity contrast, cortical thickness, sulcal depth, intrinsic curvature, mean curvature, and FLAIR intensity sampled at different intracortical and subcortical depths. To ensure robustness, these features underwent several pre-processing steps: harmonization to correct for site- and scanner-related differences, normalization to account for intra- and inter-subject variability, and asymmetry analysis to enhance detection of inter-hemispheric differences. Prior to application to the patient, the MELD Graph U-Net model was trained on a large multi-centre dataset and benchmarked against an existing algorithm [121], enabling precise identification of the lesion location, size, characteristics, and feature saliency, namely the relative importance of the MRI features (https://github.com/MELDProject (accessed on 20 July 2025)). MELD (Multi-centre Epilepsy Lesion Detection); FCD (Focal Cortical Dysplasia); MPRAGE (Magnetisation Prepared Rapid Gradient Echo Imaging); FLAIR (Fluid-Attenuated Inversion Recovery).
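Two of the pre-processing steps named in the Figure 5 caption — inter-hemispheric asymmetry analysis and within-subject normalization — can be sketched as below. These are common formulations of the two operations; the exact transforms used by the MELD pipeline are described in the cited papers, so the function names and formulas here should be read as illustrative, not as the MELD implementation.

```python
import numpy as np

def interhemispheric_asymmetry(left: np.ndarray, right: np.ndarray) -> np.ndarray:
    """Asymmetry index per paired vertex: (L - R) / ((L + R) / 2).

    Positive values flag features larger on the left hemisphere;
    assumes no vertex pair sums to zero.
    """
    return (left - right) / ((left + right) / 2.0)

def zscore(feature: np.ndarray) -> np.ndarray:
    """Within-subject normalization of one surface feature map."""
    return (feature - feature.mean()) / feature.std()

# Toy per-vertex cortical-thickness values (mm) for paired vertices.
rng = np.random.default_rng(2)
left_thick = rng.normal(2.5, 0.3, size=1000)
right_thick = rng.normal(2.5, 0.3, size=1000)
asym = interhemispheric_asymmetry(left_thick, right_thick)
asym_z = zscore(asym)
```

Feeding z-scored asymmetry maps (rather than raw thickness) to the classifier is what lets a lesion stand out against each patient's own contralateral anatomy.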
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.

Guarnera, A.; Napolitano, A.; Liporace, F.; Marconi, F.; Rossi-Espagnet, M.C.; Gandolfo, C.; Romano, A.; Bozzao, A.; Longo, D. The Expanding Frontier: The Role of Artificial Intelligence in Pediatric Neuroradiology. Children 2025, 12, 1127. https://doi.org/10.3390/children12091127

