Review

Machine Learning for Chronic Kidney Disease Detection from Planar and SPECT Scintigraphy: A Scoping Review

by
Dunja Vrbaški
1,*,
Boban Vesin
2 and
Katerina Mangaroska
2
1
Faculty of Technical Sciences, University of Novi Sad, 21000 Novi Sad, Serbia
2
USN Business School, University of South-Eastern Norway, 3184 Borre, Norway
*
Author to whom correspondence should be addressed.
Appl. Sci. 2025, 15(12), 6841; https://doi.org/10.3390/app15126841
Submission received: 23 April 2025 / Revised: 19 May 2025 / Accepted: 23 May 2025 / Published: 18 June 2025
(This article belongs to the Special Issue Applications of Computer Vision and Image Processing in Medicine)

Abstract:
Chronic kidney disease (CKD) is a progressive condition affecting over 800 million people worldwide (more than 10% of the general population) and is a major contributor to morbidity and mortality. Early detection is critical, yet current diagnostic methods (e.g., computed tomography or magnetic resonance imaging) do not focus on functional impairments, which begin long before structural damage becomes evident, limiting timely and accurate assessment. Nuclear medicine imaging, particularly planar scintigraphy and single-photon emission computed tomography (SPECT), offers a non-invasive evaluation of renal function, but its clinical use is hindered by interpretive complexity and variability. Machine learning (ML) holds promise for enhancing image analysis and supporting early CKD diagnosis. This study presents a scoping review of ML applications in CKD detection and monitoring using renal scintigraphy. Following the PRISMA framework, the literature was systematically identified and screened in two phases: one targeting ML methods applied specifically to renal scintigraphy, and another encompassing broader ML use in scintigraphic imaging. The results reveal a notable lack of studies integrating advanced ML techniques, especially deep learning, with renal scintigraphy, despite their potential. Key challenges include limited annotated datasets, inconsistent imaging protocols, and insufficient validation. This review synthesizes current trends, identifies methodological gaps, and highlights opportunities for developing reliable, interpretable ML tools to improve nuclear imaging-based diagnostics and support personalized management of CKD.

1. Introduction

Although it is largely preventable and treatable, chronic kidney disease (CKD) has continued to rise among the leading causes of morbidity and death worldwide [1]. Moreover, because it remains under-emphasized in global health policy, CKD significantly impacts global health, both as a direct contributor to morbidity and mortality and as a critical risk factor for cardiovascular disease [2]. Early identification of high-risk groups prone to kidney disease progression could not only mitigate the economic burden on both patients and healthcare systems but also reduce the risk of comorbidities and associated health complications for many patients, highlighting the need for robust and accurate disease prediction models [3]. Such models could serve as a valuable tool for medical professionals, enabling timely interventions to prevent disease progression, improve patient outcomes, and optimize resource allocation. By facilitating early detection and prevention strategies, predictive models have the potential to enhance clinical decision making, support the development of learning and decision-making skills in medical students, and contribute significantly to personalized care [4].
CKD is marked by a gradual and irreversible decline in renal function, including a reduced glomerular filtration rate (GFR), accompanied by progressive structural damage to the kidneys that accelerates over time. These changes impair the kidneys' ability to filter blood, ultimately leading to end-stage renal disease (ESRD), at which point dialysis or a kidney transplant becomes the only viable option to sustain life [4]. Therefore, the accurate assessment of renal function and kidney structure is crucial for detecting and monitoring CKD. Many advanced imaging techniques, such as magnetic resonance imaging (MRI), ultrasound elastography (UE), and computed tomography (CT), are used today in CKD diagnosis and prognosis; however, nuclear techniques such as planar scintigraphy, PET (positron emission tomography), and SPECT (single-photon emission computed tomography) offer unique advantages that set them apart from other imaging modalities [5]. As CKD begins with functional impairments before structural damage becomes evident, structural imaging techniques like CT or MRI may not reveal abnormalities until substantial tissue damage has occurred. Scintigraphy, in contrast, provides real-time functional and molecular data, allowing early functional changes and disease progression to be detected and enabling timely diagnosis and preventive treatment. Both planar and SPECT images are visual representations of quantitative data on radiotracer uptake, where each pixel (in planar) or voxel (in SPECT) encodes a value proportional to tracer accumulation. These values are visualized as grayscale or color images using different reconstruction algorithms to enable clinical interpretation. For example, Figure 1 shows multiple visualizations of one dynamic planar renal scan [6]. Figure 2 compares planar scans with coronal simulated-planar images derived from SPECT scans, where the SPECT data are visualized as grayscale images to enable comparison with planar scans [7].
Machine learning (ML) has emerged as a vital tool in medical image analysis, playing a crucial role in tasks like segmentation, classification, and ultimately the automation of renal function diagnosis [8]. Traditional machine learning models, such as decision trees (DT), random forests (RF), and simple neural networks (NNs), are extensively used in research on electronic health record (EHR) tabular data, such as demographic, biological, or laboratory data. In image analysis, due to the lack of annotated data, researchers often move beyond traditional supervised learning toward approaches with less supervision, such as semi-supervised learning, multiple instance learning, deep learning, and transfer learning [9].
Preliminary literature screening reveals that scintigraphy images are not widely utilized in medical image analysis. PET has attracted more extensive research interest lately, but SPECT and planar scans are typically more affordable and more widely available in clinical practice, especially in smaller hospitals or regions with limited resources. Moreover, when the clinical question involves functional assessment specifically (such as evaluating kidney filtration or perfusion), SPECT and planar imaging are sufficient and potentially more suitable.
Therefore, the objective of this scoping review is to provide an overview of published research and to:
  • Assess ML applications for renal SPECT and planar scintigraphy;
  • Identify gaps and challenges in ML applications for CKD detection and prognosis;
  • Explore related research efforts in ML applications to other scintigraphy domains;
  • Set grounds for potential future research pathways in medical image analysis of SPECT and planar scintigraphy images.
To achieve our objectives, we proposed the following research questions (RQs) that guide our scoping review:
  • RQ1: What ML methods are currently utilized for detecting, predicting, and diagnosing CKD using PLANAR and SPECT images in renal scintigraphy?
  • RQ2: What ML methods are currently utilized for processing PLANAR and SPECT images across various scintigraphy domains?
  • RQ3: What are the challenges and limitations associated with applying ML methods to PLANAR and SPECT images in scintigraphy?
We begin by reviewing the use of ML methods for evaluating renal dysfunction, with a particular focus on CKD and scintigraphy through techniques like PLANAR and SPECT. Then, we explore the use of ML methods across various recent applications in scintigraphy for images of all organs and structures. Finally, we aim to discuss the potential and the limitations of ML and artificial intelligence (AI) in detecting, diagnosing, and predicting CKD and high-risk groups with potential renal dysfunctions.

2. Background

Despite the growing adoption of machine learning in medical imaging, its application to renal scintigraphy, particularly in the context of PLANAR and SPECT imaging for CKD detection, remains largely unexplored. While prior research has extensively examined ML applications in nephrology and medical imaging, few studies have focused on the unique advantages of nuclear imaging techniques such as scintigraphy. Given the functional nature of scintigraphy, leveraging ML for CKD diagnosis holds significant promise, yet existing reviews do not comprehensively assess its potential in this area.
One notable review by Decuyper et al. [10] highlights that while AI has been successfully applied to MRI and CT, deep learning models for SPECT and PET imaging are still in early development. The complexity of 3D nuclear imaging data, together with noise reduction, segmentation, and annotation challenges, has hindered progress. Moreover, most AI studies focus on structural imaging rather than functional modalities like scintigraphy, which require tailored ML approaches. This gap underscores the necessity of this review, which systematically examines the current state of ML applications in PLANAR and SPECT renal scintigraphy for CKD assessment.
In contrast, several systematic reviews have explored ML applications in CKD prediction, yet they primarily focus on structured data such as demographic or laboratory data. For example, Lei et al. [3] reviewed 15 studies in which ML models were employed for kidney disease prediction, including CKD and immunoglobulin A nephropathy, with an emphasis on accuracy metrics and model validation. However, these studies relied exclusively on tabular data, without incorporating imaging data, thereby limiting insights into ML-based image analysis. Despite its relevance, this review does not address ML applications to scintigraphy.
Expanding the scope further, Magherini et al. [11] conducted a broader review of ML applications in nephrology, analyzing various pathologies, including kidney masses, acute kidney injury, CKD, kidney stones, glomerular disease, and kidney transplants. The study categorized ML tasks into three groups: segmentation of medical images, classification of disease severity, and prediction of treatment outcomes. While it acknowledged the role of ML in CKD diagnosis, most studies reviewed focused on the widely used UCI chronic kidney disease tabular dataset [12], with only one paper addressing CKD prediction using ultrasound images. These findings reinforce the absence of ML research specifically targeting scintigraphy-based CKD diagnosis.
A more recent systematic review by Sanmarchi et al. [13] examined ML models for CKD prediction, diagnosis, and treatment, identifying key datasets and features used in 68 studies. Despite mentioning five studies that applied ML to medical imaging, it did not analyze them separately. This review highlighted common limitations in existing works, including dataset biases, lack of external validation, and challenges in generalizing ML models. These insights are relevant for future applications of ML methods in CKD diagnosis but do not address the specific potential of scintigraphy imaging.
Deep learning applications in kidney disease imaging have also been reviewed by Zhang et al. [14], with a focus on segmentation and volumetric evaluation techniques for renal tumors, renal calculi, and other pathologies. While CKD was considered in this review, all related works focused on ultrasound and CT imaging, omitting nuclear imaging modalities.
Moreover, Alnazer et al. [8] reviewed medical imaging techniques for CKD monitoring, emphasizing MRI-based renal function evaluation. The study categorized ML applications into three main areas: texture analysis and traditional ML, deep learning for renal function evaluation, and automatic renal segmentation. Notably, only two studies [15,16] investigated scintigraphy imaging, both employing traditional ML techniques rather than deep learning. This further underscores the research gap in applying advanced ML techniques to PLANAR and SPECT scintigraphy for CKD assessment.
While ML has demonstrated success in MRI and CT-based CKD diagnosis, its application to scintigraphy remains limited. Unlike structural imaging, scintigraphy enables functional assessment, offering potential advantages for early CKD detection before structural damage becomes evident. As a result, ML-based analysis of PLANAR and SPECT images could facilitate earlier diagnosis and more effective intervention strategies. However, challenges such as limited annotated datasets, variations in imaging protocols, and the need for specialized pre-processing techniques must be addressed to advance research in this area.
Thus, this scoping review aims to bridge the knowledge gap by systematically analyzing the existing literature on ML applications in renal scintigraphy. By identifying key trends, challenges, and research opportunities, this study seeks to establish a foundation for future advancements in ML-based CKD diagnosis using nuclear imaging.

3. Methodology

To quickly assess the feasibility of and justification for future research, while also outlining a potential research framework, we performed a rapid scoping review. A scoping review (SR) is an appropriate method in the context of emerging and interdisciplinary research areas like ML and AI applications in medical imaging. The following reasons support this methodological choice: (1) an SR allows for the exploration and assessment of the breadth of available research evidence, without being restricted to CKD-specific studies; (2) it accommodates limited existing research by considering a wider variety of sources; (3) it helps to define conceptual boundaries and quickly identifies research gaps; and (4) it ensures that emerging applications of ML in scintigraphy imaging are captured holistically without the restrictive criteria of a systematic review. To perform the scoping review, we selected and adapted the PRISMA methodology [17,18].
Given that ML applications in medical imaging span multiple disciplines, we performed the scoping review in two phases to ensure that insights from different fields were not excluded. The first phase addressed RQ1 and encompassed screening the literature for ML methods used in renal scintigraphy, particularly planar and SPECT nuclear techniques, for predicting and classifying stages of CKD. The second phase addressed RQ2 and RQ3 and broadened the scope to examine the overall use of ML methods in processing planar and SPECT images, regardless of the anatomical focus. Both phases aimed to identify and categorize the ML techniques applied.

3.1. Search Strategy

For both phases, we selected the databases, defined the keywords, and outlined the inclusion/exclusion criteria.

3.1.1. Preliminary Screening: Addressing RQ1

Search string. To identify relevant research, comprehensive search queries were constructed using the following search keywords: SPECT, single-photon emission, scintigraphy, renal, kidney, CKD, GFR, glomerular filtration rate, chronic kidney disease, machine learning, artificial intelligence, and deep learning. The keywords “GFR” and “glomerular filtration rate” were selected due to their strong connection with CKD. As for the techniques used, we opted for broad keywords, such as “machine learning”, to avoid limiting the search to known, popular, or expected ML methods. To enhance the specificity of the search and reduce irrelevant results, the keywords were combined using the Boolean operators AND and OR. This Boolean search strategy ensured that only articles containing at least one term from each keyword group were retrieved. For each of the selected databases, the search queries were constructed similarly, following the respective syntax rules. For example, the WoS search query used was: “TS = ((SPECT OR “single-photon emission” OR “scintigraphy”) AND (renal OR kidney OR CKD OR GFR OR “glomerular filtration rate” OR “chronic kidney disease”) AND (“machine learning” OR “artificial intelligence” OR “deep learning”))”.
Databases. The search was performed in three databases: Web of Science (WoS), Scopus, and PubMed. PubMed is a well-known archive of biomedical and life sciences journal literature that covers extensive medical and clinical research. Scopus is a large, multidisciplinary database of peer-reviewed literature that provides a comprehensive overview of the world’s research output in the fields of science, technology, medicine, social science, and arts and humanities. Finally, WoS is a platform that indexes research from premier scholarly and scientific venues. The last search was performed in December 2024.
Inclusion/exclusion criteria. In all three databases, searches included all collections, document types, and publication years, with results limited to English language publications. For the WoS database, the “exact search” option was not selected to allow for a broader search. These settings were chosen to maximize the number of relevant studies, given the anticipated limited volume of literature in this domain. All returned papers were screened by title and abstract and filtered by the following exclusion criteria:
  • Not related to planar scintigraphy or SPECT image modalities;
  • Not related to CKD prediction;
  • Not related to this scoping review;
  • Not written in English language.
Exclusion criterion one filters out papers that consider dual modalities, such as SPECT/CT or SPECT combined with ultrasound. Exclusion criterion two filters out papers on deep learning applications to planar and SPECT images that are not specific to renal imaging, as well as those focused on renal imaging but unrelated to CKD; such publications were manually added to the broader screening. Finally, exclusion criterion three filters out studies unrelated to the research scope, such as those on radiopharmaceutical design or dosage.

3.1.2. Broader Screening: Addressing the RQ2 and RQ3

Search string. To identify relevant research, comprehensive search queries were constructed using the following search keywords: SPECT, single-photon emission, scintigraphy, machine learning, artificial intelligence, and deep learning. For the techniques used, we opted for broad keywords, such as “machine learning”, to avoid limiting the search to known, popular, or expected ML methods. To enhance the specificity of the search and reduce irrelevant results, the keywords were combined using the Boolean operators AND, OR, and NOT. The NOT operator was used to omit results already covered by the phase-one search, i.e., those connected to renal scintigraphy and CKD. For example, the WoS search query used was: “TS = ((SPECT OR “single-photon emission” OR “scintigraphy”) AND (“machine learning” OR “artificial intelligence” OR “deep learning”) NOT (renal OR kidney OR CKD OR GFR OR “glomerular filtration rate” OR “chronic kidney disease”))”.
Databases. The search was performed in two databases, WoS and Scopus. The search in PubMed was not performed due to the high number of duplicates for WoS and Scopus results. The last search was performed in January 2025.
Inclusion/exclusion criteria. Since the second phase broadened the scope to examine the overall use of ML methods regardless of the anatomical focus, the type of study/quality of source was used as an inclusion criterion to limit the search to peer-reviewed journals and conference proceedings, excluding journal reviews, books, book chapters, reports, etc. Moreover, due to rapid changes in emerging and developing fields like ML and AI, we limited the search to work published between 2020 and 2024, to capture the most recent advancements and current state-of-the-art methods. Considering the research domain, for WoS we considered the following categories: multidisciplinary sciences, computer science artificial intelligence, computer science interdisciplinary applications, computer science information systems, computer science theory methods, computer science software engineering, medical informatics, and engineering multidisciplinary; for Scopus, we considered computer science and multidisciplinary sciences. All returned papers were screened by title and abstract and filtered by the following exclusion criteria:
  • Not related to any research question;
  • Considers other image modalities;
  • Considers dual or multi-modal approach;
  • Not written in the English language.
Exclusion criterion one helps filter out papers that are not related to this research domain or to our research questions, such as studies in seismography or agriculture. Exclusion criterion two helps filter out research that is related to other imaging modalities, such as MRI or CT. Exclusion criterion three filters out studies that explore multiple image modalities, separately or via image fusion.
Data charting process. A data charting form was developed as a Google Sheets document, with access granted to all three authors. The charting process was guided by the research questions and was iteratively refined during the initial phases of data extraction. The final form captured details such as publication year, target organ and disease, study objective, image modality, dataset characteristics, ML approach (e.g., classical ML or deep learning), performance metrics, results, technology details (e.g., programming languages, libraries, and models), preprocessing details, data augmentation details, validation methods, and reported problems and limitations. Two reviewers independently charted data from the eligible studies. Discrepancies were resolved through discussion. No automation tools were used for the data extraction.
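For illustration, the charted fields could be represented as a simple record structure; the following minimal Python sketch mirrors the list of fields above, while the class name and types are illustrative rather than part of the review protocol:

from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class ChartedStudy:
    # Illustrative record mirroring the data charting form described above
    publication_year: int
    target_organ: str
    target_disease: str
    study_objective: str
    image_modality: str                        # "planar", "SPECT", ...
    dataset_characteristics: str               # source, size, open vs. in-house
    ml_approach: str                           # e.g., "classical ML" or "deep learning"
    performance_metrics: List[str] = field(default_factory=list)
    results: Optional[str] = None
    technology_details: Optional[str] = None   # programming languages, libraries, models
    preprocessing: Optional[str] = None
    augmentation: Optional[str] = None
    validation: Optional[str] = None
    limitations: Optional[str] = None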

4. Results

The study selection process is summarized in Figure 3, which presents the PRISMA flowchart describing the identification, screening, and inclusion of studies.

4.1. Preliminary Screening: Addressing RQ1

The first search returned 44, 76, and 28 published works in the WoS, Scopus, and PubMed databases, respectively. After removing duplicates, 88 records were screened, and only 2 papers directly addressed CKD and renal scintigraphy. Of those two papers, one is a review paper [8], which we already addressed in Section 2. The second paper is a conference paper, published in proceedings, which we were not able to retrieve [19]. From the abstract, it was evident that the authors explored CKD classification using renogram curve data as input for classifiers such as support vector machines (SVM), k-Nearest Neighbors (KNNs), and decision trees (DT). While they concluded that these methods yield satisfactory results, they did not incorporate more advanced techniques, such as deep learning-based imaging methods. Nonetheless, this approach presents an interesting avenue for further exploration in CKD diagnosis, particularly as a supplementary and interpretable model.

4.2. Broader Screening: Addressing RQ2 and RQ3

The second search returned 941 published works in WoS and 1643 in Scopus. After filtering for recency, research domain, and type of work (i.e., peer-reviewed journal papers), we ended up with 95 and 154 identified works in the WoS and Scopus databases, respectively. After removing duplicates and applying the elimination criteria, another 181 records were removed, leaving 68 for retrieval. Full texts could not be obtained for 9 records, resulting in 59 full-text articles.
Certain research details become evident only upon full-text examination. For instance, some abstracts did not explicitly indicate that the study focused on image fusion between MRI and SPECT, which became clear only upon reviewing the complete text. In this way, 8 reports were found ineligible for analysis, leaving 51 full-text reports. Additionally, as noted earlier, some records from the first phase were manually added: a total of 20 records were identified; however, 6 could not be retrieved and 7 were found ineligible, so only 7 studies were added to the pool of records eligible for analysis. In total, 58 full-text studies were charted, reviewed, and analyzed: 51 from the broader search and 7 from the first phase.

4.3. Categorization of the Results

The charted data were handled descriptively and summarized through tabular presentation and narrative synthesis. Summary tables were used to highlight the distribution of studies across different categories, enabling the identification of key trends and methodological patterns. No formal statistical analysis or meta-synthesis was conducted. In the following paragraphs, the quantitative and qualitative results from the selected 58 research papers will be presented according to various criteria.

4.3.1. Year of Publishing

Database screening was conducted for research studies published from 2020 to the end of 2024. As shown in Figure 4, the overall increase in record volume suggests a growing interest in exploring these emerging technologies as they continue to advance.

4.3.2. Type of Research

Table 1 presents a breakdown of publications according to the type of research and organ investigated.
Most publications (45) focused on diagnostic methods targeting specific diseases or conditions, primarily neurological disorders such as Parkinson’s disease and cardiovascular conditions such as coronary artery disease (CAD). General imaging methods (13) were developed and/or tested with images of a specific organ; however, these methods were designed to enhance image manipulation processes and, in general, are not tied to a specific diagnostic workflow. Therefore, they could also be applied to other organs and diseases. One method, denoted in Table 1 as “N/A”, uses a Monte Carlo simulated dataset to generate random, not organ-specific, SPECT projections [77].
Across the 58 publications, both open and in-house datasets were utilized. The number of datasets, categorized by country, is presented in Table 2. The most frequently used open dataset is the PPMI dataset, a well-recognized, multi-center resource for Parkinson’s disease research [78]. Additionally, the widely known UCI SPECT Heart dataset appears in two publications [79], while two more datasets, created in-house during the respective studies, were later made publicly available as open data [80,81].
A total of 38 datasets were produced and used in the studies covered in this review, with China being the most common source, followed by Japan and Taiwan. Four publications do not explicitly state the medical centers where data were collected, though the authors’ affiliations may provide some indication. Notably, publication [35] presents a large-scale international study involving multiple clinical centers across Austria, Italy, the UK, and China at different research phases, making it unique in this review. Additionally, two studies did not use patient data: Chrysostomou et al. employed software-generated phantom data [73], while Leube et al. generated synthetic SPECT data [77]. Considering the image modality, planar scintigraphy images are used in only six studies [35,58,59,60,64,74], while the remaining 52 studies used SPECT images. Moreover, only seven studies include additional data on top of the imaging data, such as biomarker data and/or clinical and demographic data [29,33,42,43,51,82,83]. Finally, it is worth noting that datasets differ in the number of patients from whom data are collected. In-house datasets typically consist of several hundred patients, though some studies explore data from fewer than 100 individuals. For the PPMI dataset, the number of included patients depends on the specific research criteria defined by the authors.
The datasets used in these studies vary in scale, structure, and preprocessing approaches, which are relevant factors to be considered, as dataset selection directly impacts model choice. Nuclear imaging data are not always collected through identical processes, nor are the final datasets uniform. For example, patient data may be recorded under both stress and rest conditions or only at a single time point. Additionally, DICOM files can contain varying numbers of slices, and researchers may choose to use all or only a subset of them. The final content and structure of the data are influenced by multiple factors, including clinical protocols, imaging equipment, and research objectives. Although the final datasets are primarily composed of reconstructed nuclear images and/or calculated radiomic features, given the differences mentioned above, datasets cannot be directly compared at a broad review level.
Studies using reconstructed images often apply preprocessing techniques, though some omit them entirely or do not specify their preprocessing steps. The most common preprocessing techniques include normalization (e.g., zero-centering or min–max scaling), cropping or extracting the region of interest, and additional steps such as resizing, standardization, or noise reduction. Some researchers also introduced novel preprocessing strategies. For example, Huang et al. [47] converted original 3D stereo images into 2D polar images, while Kusumoto et al. [33] merged two cubic images into a dual channel representation, differentiating between stress and rest conditions.
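To make these steps concrete, the following minimal sketch (NumPy-based, with hypothetical array shapes and parameter values) illustrates min–max normalization, zero-centering, and region-of-interest cropping on a reconstructed volume; it is an illustration of the commonly reported steps, not the pipeline of any particular study:

import numpy as np

def min_max_normalize(volume: np.ndarray) -> np.ndarray:
    # Scale voxel intensities to the [0, 1] range
    vmin, vmax = volume.min(), volume.max()
    return (volume - vmin) / (vmax - vmin + 1e-8)

def zero_center(volume: np.ndarray) -> np.ndarray:
    # Subtract the mean intensity (zero-centering)
    return volume - volume.mean()

def crop_roi(volume: np.ndarray, center: tuple, size: tuple) -> np.ndarray:
    # Extract a fixed-size region of interest around a given center voxel
    slices = tuple(slice(max(c - s // 2, 0), c + s // 2) for c, s in zip(center, size))
    return volume[slices]

# Hypothetical reconstructed SPECT volume (depth, height, width)
volume = np.random.rand(64, 128, 128).astype(np.float32)
roi = crop_roi(zero_center(min_max_normalize(volume)), center=(32, 64, 64), size=(32, 64, 64))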
Despite the widespread use of preprocessing, data augmentation is reported in only 19 studies, with common techniques including rotation, flipping, and translation. Some studies apply discrete transformations, such as 90-degree rotations or axis-aligned flips, while others introduce variability through random Gaussian noise, blurring, or affine transformations with dynamically adjusted parameters; a minimal sketch of such transformations is given below. Two studies specifically use augmentation for numerical data rather than reconstructed images, applying the synthetic minority oversampling technique (SMOTE) to balance datasets [29,47]. In renal imaging, since each patient has two kidneys, some authors treat them as separate images, effectively increasing the number of records used for model training and testing [60,61]. As a result, the number of dataset records in these studies often exceeds the actual number of patients.
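The sketch below illustrates these commonly reported augmentation operations on a single 2D slice (NumPy-based; the probabilities and noise level are hypothetical), with SMOTE shown in comments since it applies to tabular features rather than images:

import numpy as np

rng = np.random.default_rng(42)

def augment_slice(img: np.ndarray) -> np.ndarray:
    # Random 90-degree rotation (discrete transformation)
    img = np.rot90(img, k=int(rng.integers(0, 4)))
    # Random axis-aligned flips
    if rng.random() < 0.5:
        img = np.flip(img, axis=0)
    if rng.random() < 0.5:
        img = np.flip(img, axis=1)
    # Additive random Gaussian noise
    return img + rng.normal(0.0, 0.01, size=img.shape)

augmented = augment_slice(np.random.rand(128, 128))

# For tabular or radiomic features, class balancing with SMOTE (imbalanced-learn):
# from imblearn.over_sampling import SMOTE
# X_resampled, y_resampled = SMOTE(random_state=0).fit_resample(X, y)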
Lastly, it should be noted that some authors use multimodal SPECT/CT datasets but employ only the SPECT data for model implementation. When CT data are included, they serve primarily as the ground truth for attenuation correction [66,68,69,75] or for automated localization [64].

4.3.3. Research Methods

In general, studies provide varying levels of detail on the design and implementation of methods; however, most employ either custom architectures and model implementations or well-known existing models. Table 3 presents a breakdown of publications according to the research task, image modality (planar and SPECT), and models used for diagnostic methods. Similarly, Table 4 presents publications categorized under general imaging methods.
Research task. Of the 45 publications exploring novel diagnostic methods, 34 focus on disease classification, 10 on image segmentation, and 1 on both segmentation and classification. Classification studies primarily aim to distinguish between healthy and diseased patients, while segmentation studies focus on detecting lesions or regions of interest (ROI). Publications categorized under general imaging methods explore techniques that can be adapted for various diagnostic applications and organs. These studies address image enhancement and reconstruction processes. One publication [66] examined three aspects of image enhancement (de-noising, angle reconstruction, and attenuation correction), which is denoted as “mixed techniques” in the table.
Image modality. Methods utilizing planar scintigraphy images are significantly fewer (n = 6) compared to those using SPECT images (n = 52). Specifically, two studies employ planar scans for classification tasks [35,58], two focus on segmentation tasks [59,64], one addresses both classification and segmentation [60], and one explores image reconstruction [74].
Models used. Differences in approach and model choice primarily stem from data gathering, selection, and preprocessing, which determine the input format for 1D, 2D, or 3D networks. Aggarwal et al. [38] employed a 1D CNN, as their input data consist of imaging-derived radiomic and biological features, both numerical. Whole-body scans are typically obtained via 2D SPECT imaging, leading to the use of 2D networks in studies such as [53]. Lin et al. [61] explored different input dimensions for diagnosing abnormal kidneys using 3D SPECT. They trained three models: one with 3D images, another with 2D maximum intensity projections, and a third, termed ‘2.5D’, which integrates features from three networks using 2D maximum intensity projections (coronal, sagittal, and transverse). Similarly, Huang et al. [84] and Magesh et al. [39] reduced 3D input to 2D by selecting a single representative slice from 3D stereo SPECT images.
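The maximum intensity projection step underlying such ‘2.5D’ strategies can be sketched in a few lines of NumPy (the axis convention and array shape are hypothetical and depend on the acquisition and reconstruction; the downstream multi-network fusion is omitted):

import numpy as np

def maximum_intensity_projections(volume: np.ndarray):
    # Collapse the 3D volume along each anatomical axis, keeping the brightest voxel
    transverse = volume.max(axis=0)
    coronal = volume.max(axis=1)
    sagittal = volume.max(axis=2)
    return transverse, coronal, sagittal

# Hypothetical reconstructed SPECT volume (transverse, coronal, sagittal ordering assumed)
volume = np.random.rand(64, 128, 128).astype(np.float32)
mips = maximum_intensity_projections(volume)
# Each 2D projection can then be fed to its own 2D network, whose features are later fused.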
For classification tasks, which are the focus of most studies, classic custom-built CNNs are frequently used. Additionally, transfer learning methods are often applied, with visual geometry group (VGG) architecture being preferred in most cases. Although other well-known models are occasionally employed, they are typically used in benchmark studies comparing multiple models’ performance. Classic machine learning models such as support vector machine (SVM), decision tree (DT), and random forest (RF) are used when numerical features serve as input. These features may include calculated values like specific binding ratio (SBR) [42,43,45,48,50], handcrafted filter-based or statistical features [31], or flattened image data transformed into one-dimensional vectors, often with dimensionality reduction techniques such as principal component analysis (PCA) [47]. Performance metrics used in classification tasks include accuracy, sensitivity, specificity, recall, precision, F1-score, AUC, and ROC.
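As an illustration of the classic pipeline described above, the following sketch (scikit-learn, with randomly generated stand-in data) flattens images into one-dimensional vectors, reduces dimensionality with PCA, and trains an SVM classifier evaluated with accuracy and AUC; it is a generic example, not the setup of any cited study:

import numpy as np
from sklearn.decomposition import PCA
from sklearn.metrics import accuracy_score, roc_auc_score
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

# Stand-in data: 200 flattened 64x64 scintigraphy slices with binary labels
X = np.random.rand(200, 64 * 64)
y = np.random.randint(0, 2, size=200)

X_train, X_test, y_train, y_test = train_test_split(X, y, stratify=y, random_state=0)

model = make_pipeline(StandardScaler(), PCA(n_components=30), SVC(probability=True))
model.fit(X_train, y_train)

print("accuracy:", accuracy_score(y_test, model.predict(X_test)))
print("AUC:", roc_auc_score(y_test, model.predict_proba(X_test)[:, 1]))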
Segmentation is less commonly studied than classification. For segmentation tasks, U-Net is the preferred model, though some studies employ V-Net, Mask R-CNN, or custom architectures. For example, Ryden et al. [60] integrated segmentation and classification by using Mask R-CNN outputs as input to a fully convolutional network or a set of fully connected layers for each task. Performance metrics for segmentation include the Dice coefficient, precision, recall, Hausdorff distance, class pixel accuracy, and mean intersection over union (MIoU).
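For reference, the two most common overlap metrics can be computed from binary masks as in the following minimal NumPy sketch (the rectangular masks are hypothetical; real evaluations are run against expert-annotated ground truth):

import numpy as np

def dice_coefficient(pred: np.ndarray, target: np.ndarray, eps: float = 1e-8) -> float:
    # Dice similarity coefficient between two binary masks
    intersection = np.logical_and(pred, target).sum()
    return (2.0 * intersection + eps) / (pred.sum() + target.sum() + eps)

def intersection_over_union(pred: np.ndarray, target: np.ndarray, eps: float = 1e-8) -> float:
    # IoU (Jaccard index) between two binary masks
    intersection = np.logical_and(pred, target).sum()
    union = np.logical_or(pred, target).sum()
    return (intersection + eps) / (union + eps)

# Hypothetical predicted and ground-truth masks
pred = np.zeros((128, 128), dtype=bool)
target = np.zeros((128, 128), dtype=bool)
pred[30:90, 40:100] = True
target[35:95, 45:105] = True
print(dice_coefficient(pred, target), intersection_over_union(pred, target))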
Considering that general imaging methods are applied to different tasks, variations in technology selection can be observed. Deep learning models, such as CNNs, GANs, and U-Net, are predominantly used, both independently and as a foundation for the development of new models. One paper addresses the necessity of normalization in SPECT image utilization from a formal rather than an applied perspective, providing proof that self-normalization can be effective and eliminating the need for region-wise normalization [72]. In addition, for feature selection, Nadimi-Shahraki et al. explored a binary variant of the quantum-based avian navigation optimizer [65].
Many publications lack clear information regarding the technological tools used for model development. Out of 58 publications, only 26 specify the programming language employed for implementation: Python (n = 22), R (n = 3), and Matlab script (n = 1). Some mention frameworks such as TensorFlow, Keras, or Matlab without providing specific details on the programming language or libraries used (n = 16). Additionally, 16 publications do not disclose the technology utilized for model development. Among the studies that report how image annotation was performed, all state that the LabelMe software was used for this task [85].
Considering interpretability, the most popular approach is the gradient-weighted class activation mapping (Grad-CAM) method [22,28,33,35]. Other methods employed include local interpretable model-agnostic explanations (LIME) [39], layer-wise relevance propagation [40], and class activation mapping (CAM) [32]. One study explored several methods for interpretable AI, including saliency maps, guided backpropagation, Grad-CAM, guided Grad-CAM, deep learning important features (DeepLIFT), and Shapley additive explanations (SHAP) [48].
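To illustrate the core idea of Grad-CAM, the following minimal PyTorch sketch computes a class activation heatmap from the convolutional feature maps of a VGG16 backbone; the model, input, and layer choice are hypothetical and untrained, and the cited studies use their own trained networks and implementations:

import torch
import torch.nn.functional as F
from torchvision import models

model = models.vgg16(weights=None).eval()      # hypothetical, untrained classifier
x = torch.rand(1, 3, 224, 224)                 # hypothetical preprocessed scintigraphy slice

feats = model.features(x)                      # last convolutional feature maps, shape (1, 512, 7, 7)
feats.retain_grad()
logits = model.classifier(torch.flatten(model.avgpool(feats), 1))
logits[0, logits.argmax()].backward()          # back-propagate the score of the predicted class

weights = feats.grad.mean(dim=(2, 3), keepdim=True)        # channel weights = pooled gradients
cam = F.relu((weights * feats).sum(dim=1, keepdim=True))   # weighted sum of feature maps
cam = F.interpolate(cam, size=x.shape[2:], mode="bilinear", align_corners=False)
cam = (cam - cam.min()) / (cam.max() - cam.min() + 1e-8)   # heatmap in [0, 1] for overlay on the input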
Finally, regarding validation, 27 publications (46.5%) did not report any validation processes during model development, making it unclear whether the authors employed any validation strategies for model training and/or testing. Among the publications that describe their experimental design, k-fold cross-validation (with k = 1, 5, 10, 25, and 100) is commonly used during hyperparameter tuning and model performance assessment. Most of these studies use a fixed test set, though some authors apply nested cross-validation for more robust evaluation [30,47,48,50,51,57]. Only a few studies report whether stratification was considered during the data split, while external validation is mentioned in only three studies [35,64,69].
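As a point of reference for these validation practices, the sketch below (scikit-learn, with stand-in data) combines a held-out stratified test set, stratified k-fold cross-validation for hyperparameter tuning, and an outer nested cross-validation loop; the estimator and parameter grid are illustrative only:

import numpy as np
from sklearn.model_selection import GridSearchCV, StratifiedKFold, cross_val_score, train_test_split
from sklearn.svm import SVC

X = np.random.rand(200, 50)               # stand-in feature vectors
y = np.random.randint(0, 2, size=200)     # stand-in binary labels

# Fixed, stratified test set held out from all tuning
X_dev, X_test, y_dev, y_test = train_test_split(X, y, test_size=0.2, stratify=y, random_state=0)

# Inner loop: hyperparameter tuning with stratified 5-fold cross-validation
inner_cv = StratifiedKFold(n_splits=5, shuffle=True, random_state=0)
search = GridSearchCV(SVC(), {"C": [0.1, 1, 10]}, cv=inner_cv)

# Outer loop (nested cross-validation): less biased estimate of the tuned model's performance
outer_cv = StratifiedKFold(n_splits=5, shuffle=True, random_state=1)
print("nested CV accuracy:", cross_val_score(search, X_dev, y_dev, cv=outer_cv).mean())

# Final evaluation on the untouched test set
print("test accuracy:", search.fit(X_dev, y_dev).score(X_test, y_test))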

5. Discussion

5.1. Key Insights

From the preliminary screening, it is evident that there is a notable lack of research focusing on the utilization of machine learning models for chronic kidney disease diagnosis and monitoring using planar scintigraphy and SPECT. The broader screening reveals that classic and novel deep learning methods have been widely explored in other nuclear imaging applications, such as cardiac and neurological SPECT images and whole-body planar scans. The limited research addressing renal scans and CKD diagnosis indicates an underexplored opportunity for leveraging deep learning techniques, such as convolutional neural networks, to enhance image quality, automate kidney segmentation, improve functional parameter estimation, and refine CKD classification. Moreover, the emerging development of new models capable of generating synthetic data and employing transfer learning enables the expansion of studies on previously underutilized datasets constrained by older methodologies.

5.2. Data, Data Sources, and Data Utilization

Studies employing planar or SPECT imaging techniques offer valuable insights into various strategies for handling such data. One key consideration is whether to use a single slice or multiple slices, either separately or combined into a single image. For example, Khachnaoui et al. [44] selected eight slices (slices 37–41 out of 91) to generate three types of 3D inputs. Their methodology involved training different models using three distinct approaches: one based on the individual slices, another on a single image created by merging the slices, and a third where models were trained eight times with the final decision determined by an adaptive boosting algorithm. Although all combinations of models and datasets achieved high accuracy, variations in image organization were observed to have an impact on the performance of different models. Kikuchi et al. [20] also utilized a multi-slice approach for segmentation, noting that a single-image method fails to capture the relationships between adjacent slices. By employing dice coefficients at both the slice and pixel levels, they identified the optimal number of slices for effective segmentation of myocardial SPECT images.
While image data form the basis for most research, as shown in the reviewed studies, other types of data should also be considered in future research. Tabular data (e.g., blood work markers), textual data (e.g., patient history), or demographic data, in addition to the images, can enhance diagnostic accuracy and clinical decision making. For instance, results from deep learning image-based models can be compared with radiomics-based classic machine learning models, such as SVM, which can be a beneficial strategy when an interpretable model is being developed [48]. Furthermore, efforts to enhance models by incorporating radiomics or numerical features into pre-trained deep learning image-based models have shown potential to improve overall accuracy [62]. Another approach integrates deep radiomics features with clinical data to estimate cognitive decline in patients with Parkinson’s disease [51]; the authors demonstrated that deep radiomics features alone outperformed both clinical data alone and the combination of clinical and deep radiomics data. Finally, a noteworthy approach by Xie et al. [70] involved using porcine studies for model training. The use of an animal model represents a divergence from the standard reliance on human clinical data, which can offer potential advantages when acquiring human data is challenging. If this approach is feasible within an institution, researchers might consider adopting it as an alternative or supplementary data source.
A critical limitation identified in the reviewed studies is the lack of attention to data heterogeneity and its impact on model generalization. Variability in imaging protocols, such as differences in scanners, radiotracer types, or image resolution, can cause ML models to learn features specific to a particular site (e.g., brightness, noise level, or image textures, as cues for classification or prediction), rather than clinically meaningful patterns related to kidney pathology. This reduces the ability of the model to generalize across institutions and limits its clinical applicability. Moreover, the use of different data sources, such as publicly available datasets and institution-specific (internal) datasets, can affect model performance due to differences in data quality, patient demographics, and labeling practices. Models trained solely on internal datasets may overfit local characteristics and fail to generalize to external cohorts, while open datasets may lack clinical richness or standardization. Population differences, including age, comorbidities, and disease prevalence, can also cause models trained in homogeneous cohorts to perform poorly when applied to diverse populations. Class imbalance, particularly the underrepresentation of early-stage or rare CKD cases, may bias models toward more prevalent conditions, limiting clinical sensitivity. In summary, these factors highlight the need for harmonized imaging protocols, multi-institutional datasets, and robust validation practices to ensure the development of generalizable and clinically applicable ML models in renal scintigraphy.

5.3. Research Methods

The primary application of machine learning in SPECT and planar nuclear imaging is automated classification. Typically, deep learning models are used when working with image data, while classic machine learning methods are employed with numerical data. Deep learning, particularly CNN-based models, has demonstrated high accuracy in classification tasks. However, many studies also explore innovative approaches that could be considered for future research in CKD diagnosis using SPECT or planar scintigraphy images. For example, Berkaya et al. [25] used SVM, a traditional classifier, with deep and shallow features extracted from various pre-trained deep neural networks (DNNs) to identify myocardial ischemia and infarction; this deep learning-based model was then compared to a knowledge-based classification model, demonstrating performance comparable to expert analysis. Apostolopoulos et al. [28] enhanced network performance by integrating feature-fusion and attention modules that improve the localization abilities of the base VGG19 model. Lin et al. [55] proposed custom VGG7, VGG21, and VGG24 models, derived from the architectures of the official VGG16 and VGG19 models, to examine the impact of network depth on accuracy in a thoracic metastasis classification task. Their results indicate that the new models perform comparably to established models, with the custom VGG21 model achieving the best results when using augmented images with normalization. Khachnaoui et al. [46] proposed unsupervised clustering methods (i.e., density-based spatial clustering, k-means, and hierarchical clustering) for classifying healthy individuals and PD patients within the SWEDD group (i.e., patients exhibiting Parkinsonian symptoms despite normal dopamine imaging scans). Although clustering is not commonly applied to classification tasks, the authors demonstrated its effectiveness in distinguishing patients when labeled data are limited. Finally, to reduce computational cost, Ding et al. [41] described a model selection framework that incorporates diffusion maps and linear discriminant analysis, outperforming deep learning models such as AlexNet, VGG16, and VGG19.
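The “deep features fed to a traditional classifier” idea can be sketched as follows (PyTorch and scikit-learn, with stand-in images and labels; the backbone, pooling choice, and classifier settings are illustrative and not taken from the cited studies):

import numpy as np
import torch
from sklearn.svm import SVC
from torchvision import models

# ImageNet-pretrained VGG16 used as a frozen feature extractor
backbone = models.vgg16(weights=models.VGG16_Weights.IMAGENET1K_V1).features.eval()

def deep_features(batch: torch.Tensor) -> np.ndarray:
    # Extract convolutional feature maps and pool them into one vector per image
    with torch.no_grad():
        maps = backbone(batch)                    # (N, 512, 7, 7)
        return maps.mean(dim=(2, 3)).numpy()      # global average pooling -> (N, 512)

# Stand-in scintigraphy slices replicated to 3 channels, with binary labels
images = torch.rand(40, 3, 224, 224)
labels = np.random.randint(0, 2, size=40)

clf = SVC(kernel="rbf").fit(deep_features(images), labels)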
Another important application of machine learning in SPECT and planar nuclear imaging is segmentation, where ML models are used to delineate organs and extract quantitative metrics or to identify specific regions of interest in images, such as lesions. Segmentation studies are less prevalent, which is somewhat expected, as nuclear imaging primarily focuses on functionality rather than morphology. Additionally, nuclear images often have low contrast and high noise, posing significant challenges for precise segmentation. To address these issues, innovative approaches have been developed. Zhu et al. [21] combined deep learning with a module that incorporates shape priors generated via dynamic programming, enabling automatic left ventricular segmentation that closely matches manually annotated ground truth images. Similarly, Cao et al. [57] developed a custom CNN model that employs view aggregation (i.e., a pixel-wise addition operation) to enhance regions with high uptake in planar whole-body scans. Their model performed well against expert annotations, with the authors suggesting further improvements to the process. Gao et al. [53] improved baseline U-Net performance in lung SPECT scan segmentation by adding residual structures and an attention mechanism. These segmentation methods, originally developed for other organs, offer promising potential for adaptation in systems that automatically generate renogram curves or GFR calculations, thereby improving and accelerating clinical processes in CKD diagnosis.
General imaging methods are yet another category where machine learning models play a crucial role in nuclear renal imaging. In this area, deep learning models, both generative and convolutional, are instrumental in image reconstruction and enhancement.
One prominent application of machine learning is synthetic data generation. Small in-house datasets are a common challenge, and data augmentation techniques are often used to mitigate this issue [14]. These methods take existing data and apply different transformations, such as rotation, scaling, and translation, to create new instances. However, data augmentation is bounded by the available data and can only introduce variations rather than new representations. When data are very limited or ground truth annotations are unavailable, augmentation may not suffice, which poses a significant challenge for the exploration, development, and validation of novel and innovative models [77]. Synthetic data generation using deep learning methods offers a solution by creating entirely new data instances. Leube et al. [77] presented a comprehensive analysis of SPECT projection generation using large simulated datasets. They evaluated the effects of different parameters (e.g., noise, dataset size, number of input projections and rotation angle, and detector orbit) on model performance. The authors reported strong agreement between simulated and measured projections (Jaszczak and NEMA phantoms), showing that despite being trained on a completely simulated SPECT dataset, such models can generalize well to synthetic projections derived from physical phantom measurements. In another study, Anbarasu et al. [67] introduced a deep ensemble model based on GANs to predict labels for unannotated data. The authors evaluated their model on three datasets, including the UCI SPECT Heart dataset, and compared the results with other existing algorithms. Their method outperformed the others, highlighting its potential to increase the quantity of annotated data when labeled data are scarce. Similarly, Werner et al. [71] proposed a GAN-based model for synthetic brain SPECT scans. The authors developed a model that creates scans indistinguishable from those of real patients and minimizes training data requirements for rare diseases like Creutzfeldt–Jakob disease, albeit requiring at least three anatomical compartments to achieve accurate results.
Another key challenge is reducing radiation dose and scan time by reconstructing some slices from existing ones. Ichikawa et al. [74] applied deep learning methods to reconstruct full-acquisition-time images from short-acquisition-time pediatric renal scintigraphy images, demonstrating that acquisition time could be reduced to one-fifth while maintaining image quality and renal uptake measurement accuracy. Similarly, Lin et al. [76] successfully reconstructed full renal SPECT images from half-acquisition-time scans, showing the potential to reduce scan time in the classification of renal cortical defects.
Reconstruction methods also extend to enhancing 3D SPECT images from a limited number of projection scans. Xie et al. [70] proposed a novel 3D transfer network for high-quality reconstruction from few-view cardiac SPECT scanners. Their approach enhanced cardiac defect contrast, a result validated by nuclear cardiologists and FDA-cleared clinical software.
For centers without SPECT/CT scanners, automatic attenuation map generation methods have been explored to eliminate the need for additional CT scans [68,69,75]. As deep learning architectures continue to improve and evolve, there is potential for developing a single model capable of achieving multiple outcomes simultaneously. Chet et al. [66] developed a cross-domain iterative network that simultaneously denoises, reconstructs, and corrects attenuation in SPECT images. This approach could improve clinical practice where cost-effectiveness is a critical consideration.

5.4. Diagnosis of Renal Pathologies

The broader screening identified several anatomical organs that have been studied using ML methods applied to planar and SPECT scans. Although CKD diagnosis was not represented in the selected papers, special attention should be given to six studies addressing renal pathologies in general, as the underlying imaging is the same.
Lin et al. [61], as mentioned in Section 4.3.3, investigated deep learning methods for differentiating between normal and scarred kidneys in pediatric patients using three different interpretations of data from the same SPECT scans. The model that used multiple 2D slices (the 2.5D model) showed the best results, with an accuracy of 92% in classifying healthy and diseased kidneys.
Ji et al. [59] automatically delineated renal ROIs and calculated renal function from dynamic renal scintigraphy scans by employing a custom deep learning method based on the Swin-Unet model. The fundamental unit of this model is the Swin Transformer block, which is used for local self-attention. The model achieved very good segmentation performance, with an intersection over union (IoU) of 0.83 and a Dice similarity coefficient (DSC) of 0.91 compared to references from human experts.
Ryden et al. proposed a Mask R-CNN model that simultaneously performs segmentation of renal images and classification of acute pyelonephritis [60]. The first component of the model is a ResNet backbone, the second is the region proposal network, and the last component comprises two networks: a fully convolutional network for segmentation and fully connected layers for classification. The model achieved an IoU of 90.3 for segmentation and an accuracy of 89% for the classification task. Notably, the authors discuss misclassified cases in order to explain errors and suggest future improvements, which is not commonly reported in similar studies.
Kwon et al. [75] presented a novel approach to generate synthetic attenuation maps directly from kidney SPECT data, eliminating the need for accompanying CT scans. The authors employed a custom 3D U-Net model, replacing transposed convolution with nearest-neighbor interpolation, effectively eliminating the checkerboard artifacts commonly seen in up-sampling processes. The proposed approach has the potential to reduce patient radiation exposure by 45.3% to 787.8%.
The methods previously discussed in Section 4.3.3 for reducing radiation dosage by reconstructing full-acquisition-time images were both applied to renal images. Ichikawa et al. [74] evaluated the performance of deep learning models in predicting full-acquisition-time images from pediatric planar images acquired at only one-fifth of the standard acquisition time. Using three deep learning methods, the authors demonstrated that acquisition time reduction in pediatric planar scans is feasible, as the predicted images were comparable in quality to those acquired with full acquisition times. Among the tested models, ResUnet achieved the best performance, with a normalized mean squared error of 0.4%, a peak signal-to-noise ratio of 55.4 dB, and a structural similarity index of 0.997. Lin et al. [76] assessed reconstruction from half-acquisition-time images using deep learning, also reporting comparable quality of the reconstructed images and achieving an accuracy of 91.7%. Both findings suggest that deep learning techniques can effectively reduce scan times while minimizing patient discomfort, particularly for pediatric patients, without compromising diagnostic reliability.
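For orientation, the image quality metrics reported in these reconstruction studies can be computed as in the following minimal sketch (NumPy and scikit-image, with stand-in images; exact metric definitions, e.g., the normalization used in NMSE, vary between studies):

import numpy as np
from skimage.metrics import peak_signal_noise_ratio, structural_similarity

def normalized_mse(pred: np.ndarray, ref: np.ndarray) -> float:
    # Mean squared error normalized by the energy of the reference image
    return float(np.sum((pred - ref) ** 2) / np.sum(ref ** 2))

ref = np.random.rand(128, 128).astype(np.float32)                        # stand-in full-acquisition image
pred = ref + np.random.normal(0, 0.01, ref.shape).astype(np.float32)     # stand-in reconstructed image

print("NMSE:", normalized_mse(pred, ref))
print("PSNR (dB):", peak_signal_noise_ratio(ref, pred, data_range=1.0))
print("SSIM:", structural_similarity(ref, pred, data_range=1.0))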

5.5. Advancing Future Research

This review shows that recent developments in deep learning have produced promising results across various anatomical regions using planar and SPECT scintigraphy images. Many approaches have adapted standard deep learning architectures to address the low-resolution, low-count nature of nuclear medicine images. Existing models can provide a valuable foundation, but task-specific adaptations have proven effective in dealing with the distinct characteristics of nuclear medicine images. Given the extremely rapid development of the field, the latest deep learning techniques should also be considered. For example, attention mechanisms have demonstrated powerful performance in medical image analysis [86], yet this review identified only three studies that employ them for scintigraphy images [28,53,59].
While the studies included in this review are limited to scintigraphy images, future research on CKD diagnosis should consider incorporating additional imaging modalities when available, as already noted in Section 5.2. Moreover, a multimodal approach should extend beyond the combination of different image types to encompass the integration of diverse and complementary data sources. As Zhao et al. [87] have demonstrated in the oncology domain, deep learning-based fusion strategies that integrate heterogeneous data types hold significant promise for enhancing diagnostic accuracy. In particular, vision–language models [88], which combine image and text data, may prove valuable in addressing the inherent limitations of scintigraphy images, such as low resolution. The integration of clinical text data or large language model capabilities may offer new opportunities for improving the performance and interpretability of existing ML models. These research directions should be systematically explored and adapted to support research on CKD diagnosis using renal planar and SPECT scintigraphy.

5.6. Towards Robust and Trustworthy Research

Despite recent advances, several challenges remain in the development and application of new imaging methods. Concerning data, key challenges include the limited availability of large, annotated datasets and variability in imaging protocols across institutions. Future development efforts should focus on building open multi-center datasets to enhance generalizability. At the same time, data privacy must be secured to ensure ethical compliance at all stages of research. Addressing this concern requires the adoption of strict data governance protocols for all parties involved in the research. Furthermore, fairness-oriented methods can help mitigate bias and maintain good performance for underrepresented conditions such as rare kidney diseases [89,90,91]. In addition, privacy-preserving approaches, such as federated learning, can support transparent yet responsible development and application in medical research [92].
In terms of study quality and reproducibility, researchers should consider following guidelines such as the “Transparent Reporting of a Multivariable Prediction Model for Individual Prognosis or Diagnosis (TRIPOD)” [93] and the “Checklist for Artificial Intelligence in Medical Imaging (CLAIM)” [94], as exemplified by Spielvogel et al. [35]. In the future, novel and updated policies and guidelines can be expected at institutional, local, and global levels, and researchers developing ML and AI models should stay informed about these developments to ensure compliance and methodological rigor. For example, the updated versions of both CLAIM (2024 update [95]) and TRIPOD (TRIPOD+AI [96]) offer revised recommendations for reporting clinical prediction models that use regression or machine learning methods. Furthermore, studies should carefully report preprocessing steps, data augmentation methods, hyperparameter tuning strategies and selected values, validation practices, and the specific methods used. In particular, technological details should be reported, including the programming languages, frameworks, and libraries used, and researchers should evaluate specialized deep learning frameworks optimized for medical imaging applications. Most of the studies in this review either did not utilize or failed to report the use of such libraries. The only exceptions were Spielvogel et al. [35], who employed MONAI, a PyTorch-based framework, and Mostafapour et al. [69], who utilized NiftyNet, a TensorFlow-based platform. Finally, when possible, open-source code should be made available.
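For illustration, a reportable MONAI-based setup might look like the sketch below; the transform choices, input size, and network configuration are assumptions made for this example and do not reproduce the pipelines of the cited studies.

```python
# Minimal MONAI-based sketch of a reportable preprocessing and model setup for
# 2D renal scintigraphy segmentation; the transforms, input size, and network
# configuration are illustrative assumptions, not the cited studies' setups.
import torch
from monai.transforms import Compose, EnsureChannelFirst, LoadImage, Resize, ScaleIntensity
from monai.networks.nets import UNet
from monai.losses import DiceLoss

# Preprocessing pipeline that can be reported verbatim in a methods section.
preprocess = Compose([
    LoadImage(image_only=True),       # read a scintigraphy frame from disk
    EnsureChannelFirst(),             # (H, W) -> (1, H, W)
    ScaleIntensity(),                 # min-max normalize counts to [0, 1]
    Resize((128, 128)),               # resample to a common input size
])

model = UNet(
    spatial_dims=2,
    in_channels=1,
    out_channels=2,
    channels=(16, 32, 64, 128),
    strides=(2, 2, 2),
    num_res_units=2,
)
loss_fn = DiceLoss(to_onehot_y=True, softmax=True)

# Smoke test with a random tensor standing in for a preprocessed image batch.
prediction = model(torch.randn(2, 1, 128, 128))
print(prediction.shape)  # torch.Size([2, 2, 128, 128])
```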
Ultimately, the use of interpretable AI methods can improve the trust, fairness, and explainability of the models currently used in medical research and clinical applications [97]. These methods may also help address current limitations in dealing with SPECT and planar scintigraphy modalities. If the application of deep learning models yields limited success and negative results remain unreported, interpretable AI methods could help identify the underlying limitations, thereby guiding further model refinement and advancement.
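As one example of such methods, the sketch below implements a basic Grad-CAM heatmap for a hypothetical small CNN; it illustrates class activation mapping in general and is not tied to any model reviewed here.

```python
# Minimal Grad-CAM sketch for a small CNN classifier, illustrating class
# activation mapping as one interpretability technique; the toy network and
# inputs are hypothetical.
import torch
import torch.nn as nn
import torch.nn.functional as F

class TinyCNN(nn.Module):
    def __init__(self, n_classes: int = 2):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 8, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(8, 16, 3, padding=1), nn.ReLU(),
        )
        self.classifier = nn.Linear(16, n_classes)

    def forward(self, x):
        fmap = self.features(x)                            # keep feature maps for Grad-CAM
        logits = self.classifier(F.adaptive_avg_pool2d(fmap, 1).flatten(1))
        return logits, fmap

def grad_cam(model: TinyCNN, image: torch.Tensor, target_class: int) -> torch.Tensor:
    """Heatmap of image regions supporting the target-class decision."""
    logits, fmap = model(image)
    fmap.retain_grad()                                     # keep gradients of the feature maps
    logits[0, target_class].backward()
    weights = fmap.grad.mean(dim=(2, 3), keepdim=True)     # channel importance
    cam = F.relu((weights * fmap).sum(dim=1))              # weighted sum of feature maps
    return cam / (cam.max() + 1e-8)                        # normalize to [0, 1]

model = TinyCNN()
heatmap = grad_cam(model, torch.randn(1, 1, 64, 64), target_class=1)
print(heatmap.shape)  # torch.Size([1, 32, 32])
```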

5.7. Limitations

This scoping review did not include gray literature and was limited to three databases, specific research categories, and, for the broader screening, studies published between 2020 and 2024. Future research assessing the intersection of CKD diagnosis, scintigraphy, and machine learning should consider additional sources. Given the rapid advances in machine learning, at least newly published conference abstracts and preprints from repositories such as arXiv should be incorporated to capture cutting-edge, ongoing research trends. Also, given the fast-paced nature of deep learning research, some newer studies may not have been included in this review.

6. Conclusions

This scoping review aimed to provide an overview of available research on the evaluation of chronic kidney disease from SPECT and planar scintigraphy utilizing machine learning approaches. Regarding the first research question (RQ1), preliminary screening identified limited research focused specifically on this application. For the second research question (RQ2), a broader screening of ML applications across these two imaging modalities revealed significant variability among the methods applied. Classical approaches typically involve extracting handcrafted features (e.g., uptake ratios, curve slopes) from scintigraphy images and feeding them into algorithms such as artificial neural networks or support vector machines. In contrast, more recent studies apply deep learning methods directly to the image data. When datasets are limited, data augmentation and transfer learning are commonly used. Overall, the reviewed studies indicate that ML models can produce accurate results or effectively support human decision-making processes. Regarding the last research question (RQ3), several recurring challenges were identified: the scarcity of large, annotated datasets, limited clinical validation of ML models, insufficient reporting of technological details, and the need for explainable AI methods. Given the limited number of studies specifically targeting CKD diagnosis from renal SPECT and planar scintigraphy using ML, and in light of recent ML advancements in addressing other organs and diseases, there is a clear opportunity to further explore emerging deep learning methods. Along with advanced data augmentation techniques and transfer learning approaches, such methods could produce a promising toolkit for CKD diagnosis and functional evaluation of kidney health.

Author Contributions

Conceptualization, D.V. and K.M.; methodology, D.V.; validation, B.V. and K.M.; investigation, D.V.; data curation, D.V. and K.M.; writing—original draft preparation, D.V.; writing—review and editing, B.V. and K.M.; visualization, B.V. All authors have read and agreed to the published version of the manuscript.

Funding

This research has been supported by the Ministry of Science, Technological Development and Innovation (Contract No. 451-03-137/2025-03/200156) and the Faculty of Technical Sciences, University of Novi Sad through project “Scientific and Artistic Research Work of Researchers in Teaching and Associate Positions at the Faculty of Technical Sciences, University of Novi Sad 2025” (No. 01-50/295).

Data Availability Statement

The charted data used in this scoping review were extracted from publicly available published studies, which are cited throughout the article.

Conflicts of Interest

The authors declare no conflicts of interest.

Abbreviations

The following abbreviations are used in this manuscript:
AB: Adaptive Boosting
AE: Autoencoders
AI: Artificial intelligence
BC: Bagging Classifier
BT: Boosted Tree
CAD: Coronary artery disease
CAM: Class activation mapping
CKD: Chronic kidney disease
CNN: Convolutional Neural Network
CT: Computed tomography
DM: Diffusion Maps
DT: Decision tree
EHR: Electronic health record
ESRD: End-stage renal disease
FNN: Feed-forward NN
GAN: Generative Adversarial Network
GB: Gradient Boosting
GFR: Glomerular filtration rate
GO: Growth Optimizer
KNN: K-Nearest Neighbors
LDA: Linear Discriminant Analysis
LIME: Local interpretable model-agnostic explanations
LR: Logistic Regression
ML: Machine learning
MLP: Multilayer Perceptron
MRI: Magnetic resonance imaging
NB: Naive Bayes
NN: Neural network
PCA: Principal component analysis
PET: Positron emission tomography
RF: Random Forest
SGD: Stochastic Gradient Descent
SHAP: Shapley additive explanations
SMOTE: Synthetic minority oversampling technique
SPECT: Single photon emission computed tomography
SR: Scoping review
SVM: Support vector machine
UE: Ultrasound elastography
VGG: Visual geometry group
XGB: Extreme Gradient Boosting

References

  1. World Health Statistics 2019: Monitoring Health for the SDGs, Sustainable Development Goals; World Health Organization: Geneva, Switzerland, 2019.
  2. Bikbov, B.; Purcell, C.A.; Levey, A.S.; Smith, M.; Abdoli, A.; Abebe, M.; Adebayo, O.M.; Afarideh, M.; Agarwal, S.K.; Agudelo-Botero, M.; et al. Global, regional, and national burden of chronic kidney disease, 1990–2017: A systematic analysis for the Global Burden of Disease Study 2017. Lancet 2020, 395, 709–733. [Google Scholar] [CrossRef] [PubMed]
  3. Lei, N.; Zhang, X.; Wei, M.; Lao, B.; Xu, X.; Zhang, M.; Chen, H.; Xu, Y.; Xia, B.; Zhang, D.; et al. Machine learning algorithms’ accuracy in predicting kidney disease progression: A systematic review and meta-analysis. BMC Med. Inform. Decis. Mak. 2022, 22, 205. [Google Scholar] [CrossRef] [PubMed]
  4. Jiang, K.; Lerman, L.O. Prediction of chronic kidney disease progression by magnetic resonance imaging: Where are we? Am. J. Nephrol. 2019, 49, 111–113. [Google Scholar] [CrossRef] [PubMed]
  5. Fried, J.G.; Morgan, M.A. Renal Imaging: Core Curriculum 2019. Am. J. Kidney Dis. 2019, 73, 552–565. [Google Scholar] [CrossRef]
  6. Database of Dynamic Renal Scintigraphy. Available online: https://dynamicrenalstudy.org/pages/about-project.html (accessed on 17 May 2025).
  7. Dietz, M.; Jacquet-Francillon, N.; Sadr, A.B.; Collette, B.; Mure, P.Y.; Demède, D.; Pina-Jomir, G.; Moreau-Triby, C.; Grégoire, B.; Mouriquand, P.; et al. Ultrafast cadmium-zinc-telluride-based renal single-photon emission computed tomography: Clinical validation. Pediatr. Radiol. 2023, 53, 1911–1918. [Google Scholar] [CrossRef]
  8. Alnazer, I.; Bourdon, P.; Urruty, T.; Falou, O.; Khalil, M.; Shahin, A.; Fernandez-Maloigne, C. Recent advances in medical image processing for the evaluation of chronic kidney disease. Med. Image Anal. 2021, 69, 101960. [Google Scholar] [CrossRef]
  9. Cheplygina, V.; De Bruijne, M.; Pluim, J.P. Not-so-supervised: A survey of semi-supervised, multi-instance, and transfer learning in medical image analysis. Med. Image Anal. 2019, 54, 280–296. [Google Scholar] [CrossRef]
  10. Decuyper, M.; Maebe, J.; Van Holen, R.; Vandenberghe, S. Artificial intelligence with deep learning in nuclear medicine and radiology. EJNMMI Phys. 2021, 8, 81. [Google Scholar] [CrossRef]
  11. Magherini, R.; Mussi, E.; Volpe, Y.; Furferi, R.; Buonamici, F.; Servi, M. Machine learning for renal pathologies: An updated survey. Sensors 2022, 22, 4989. [Google Scholar] [CrossRef]
  12. Rubini, L.; Soundarapandian, P.; Eswaran, P. Chronic Kidney Disease. UCI Machine Learning Repository. 2015. Available online: https://archive.ics.uci.edu/dataset/336/chronic+kidney+disease (accessed on 23 February 2025).
  13. Sanmarchi, F.; Fanconi, C.; Golinelli, D.; Gori, D.; Hernandez-Boussard, T.; Capodici, A. Predict, diagnose, and treat chronic kidney disease with machine learning: A systematic literature review. J. Nephrol. 2023, 36, 1101–1117. [Google Scholar] [CrossRef]
  14. Zhang, M.; Ye, Z.; Yuan, E.; Lv, X.; Zhang, Y.; Tan, Y.; Xia, C.; Tang, J.; Huang, J.; Li, Z. Imaging-based deep learning in kidney diseases: Recent progress and future prospects. Insights Imaging 2024, 15, 50. [Google Scholar] [CrossRef] [PubMed]
  15. Rebouças Filho, P.P.; da Silva, S.P.P.; Almeida, J.S.; Ohata, E.F.; Alves, S.S.A.; Silva, F.d.S.H. An Approach to Classify Chronic Kidney Diseases Using Scintigraphy Images. In Proceedings of the XXXII Conference on Graphics, Patterns and Images (SIBGRAPI 2019), Rio de Janeiro, Brazil, 28–31 October 2019; pp. 156–159. [Google Scholar]
  16. Ardakani, A.A.; Hekmat, S.; Abolghasemi, J.; Reiazi, R. Scintigraphic texture analysis for assessment of renal allograft function. Pol. J. Radiol. 2018, 83, e1. [Google Scholar] [CrossRef] [PubMed]
  17. Page, M.J.; McKenzie, J.E.; Bossuyt, P.M.; Boutron, I.; Hoffmann, T.C.; Mulrow, C.D.; Shamseer, L.; Tetzlaff, J.M.; Akl, E.A.; Brennan, S.E.; et al. The PRISMA 2020 statement: An updated guideline for reporting systematic reviews. PLoS Med. 2021, 18, e1003583. [Google Scholar] [CrossRef] [PubMed]
  18. Tricco, A.C.; Lillie, E.; Zarin, W.; O’Brien, K.K.; Colquhoun, H.; Levac, D.; Moher, D.; Peters, M.D.J.; Horsley, T.; Weeks, L.; et al. PRISMA extension for scoping reviews (PRISMA-ScR): Checklist and explanation. Ann. Intern. Med. 2018, 169, 467–473. [Google Scholar] [CrossRef]
  19. Alexandria, A.R.D.; Ferreira, M.C.; Ohata, E.F.; Cavalcante, T.D.S.; Mota, F.A.X.D.; Nogueira, I.C.; Albuquerque, V.H.C.; Gondim, V.J.T.; Neto, E.C. Automated Classification of Dynamic Renal Scintigraphy Exams to Determine the Stage of Chronic Kidney Disease: An Investigation. In Proceedings of the 2021 3rd International Conference on Research and Academic Community Services (ICRACOS 2021), Surabaya, Indonesia, 9–10 October 2021; pp. 305–310. [Google Scholar] [CrossRef]
  20. Kikuchi, A.; Wada, N.; Kawakami, T.; Nakajima, K.; Yoneyama, H. A myocardial extraction method using deep learning for 99mTc myocardial perfusion SPECT images: A basic study to reduce the effects of extra-myocardial activity. Comput. Biol. Med. 2022, 141, 105164. [Google Scholar] [CrossRef]
  21. Zhu, F.B.; Li, L.X.; Zhao, J.Y.; Zhao, C.; Tang, S.J.; Nan, J.F.; Li, Y.T.; Zhao, Z.Q.; Shi, J.Z.; Chen, Z.H.; et al. A new method incorporating deep learning with shape priors for left ventricular segmentation in myocardial perfusion SPECT images. Comput. Biol. Med. 2023, 160, 106954. [Google Scholar] [CrossRef]
  22. Papandrianos, N.I.; Feleki, A.; Moustakidis, S.; Papageorgiou, E.; Apostolopoulos, I.D.; Apostolopoulos, D.J. An Explainable Classification Method of SPECT Myocardial Perfusion Images in Nuclear Cardiology Using Deep Learning and Grad-CAM. Appl. Sci. 2022, 12, 7592. [Google Scholar] [CrossRef]
  23. Wen, H.; Wei, Q.; Huang, J.L.; Tsai, S.C.; Wang, C.Y.; Chiang, K.F.; Deng, Y.; Cui, X.; Gao, R.; Zhou, W.; et al. Analysis on SPECT myocardial perfusion imaging with a tool derived from dynamic programming to deep learning. Optik 2021, 240, 166842. [Google Scholar] [CrossRef]
  24. Papandrianos, N.; Papageorgiou, E. Automatic Diagnosis of Coronary Artery Disease in SPECT Myocardial Perfusion Imaging Employing Deep Learning. Appl. Sci. 2021, 11, 6362. [Google Scholar] [CrossRef]
  25. Berkaya, S.K.; Sivrikoz, I.A.; Gunal, S. Classification models for SPECT myocardial perfusion imaging. Comput. Biol. Med. 2020, 123, 103893. [Google Scholar] [CrossRef]
  26. Chen, J.J.; Su, T.Y.; Chen, W.S.; Chang, Y.H.; Lu, H.H.S. Convolutional Neural Network in the Evaluation of Myocardial Ischemia from CZT SPECT Myocardial Perfusion Imaging: Comparison to Automated Quantification. Appl. Sci. 2021, 11, 514. [Google Scholar] [CrossRef]
  27. Abdi, M.E.H.; Naili, Q.; Habbache, M.; Said, B.; Boumenir, A.; Douibi, T.; Djermane, D.; Berrani, S.A. Effectively Detecting Left Bundle Branch Block False Defects in Myocardial Perfusion Imaging (MPI) with a Convolutional Neural Network (CNN); Studies in Health Technology and Informatics; IOS Press: Amsterdam, The Netherlands, 2022. [Google Scholar] [CrossRef]
  28. Apostolopoulos, I.D.; Papathanasiou, N.D.; Papandrianos, N.; Papageorgiou, E.; Apostolopoulos, D.J. Innovative Attention-Based Explainable Feature-Fusion VGG19 Network for Characterising Myocardial Perfusion Imaging SPECT Polar Maps in Patients with Suspected Coronary Artery Disease. Appl. Sci. 2023, 13, 8839. [Google Scholar] [CrossRef]
  29. Amini, M.; Pursamimi, M.; Hajianfar, G.; Salimi, Y.; Saberi, A.; Mehri-Kakavand, G.; Nazari, M.; Ghorbani, M.; Shalbaf, A.; Shiri, I.; et al. Machine learning-based diagnosis and risk classification of coronary artery disease using myocardial perfusion imaging SPECT: A radiomics study. Sci. Rep. 2023, 13, 14920. [Google Scholar] [CrossRef] [PubMed]
  30. Sabouri, M.; Hajianfar, G.; Hosseini, Z.; Amini, M.; Mohebi, M.; Ghaedian, T.; Madadi, S.; Rastgou, F.; Oveisi, M.; Rajabi, A.B.; et al. Myocardial Perfusion SPECT Imaging Radiomic Features and Machine Learning Algorithms for Cardiac Contractile Pattern Recognition. J. Digit. Imaging 2023, 36, 497–509. [Google Scholar] [CrossRef]
  31. Mohebi, M.; Amini, M.; Alemzadeh-Ansari, M.J.; Alizadehasl, A.; Rajabi, A.B.; Shiri, I.; Zaidi, H.; Orooji, M. Post-revascularization Ejection Fraction Prediction for Patients Undergoing Percutaneous Coronary Intervention Based on Myocardial Perfusion SPECT Imaging Radiomics: A Preliminary Machine Learning Study. J. Digit. Imaging 2023, 36, 1348–1363. [Google Scholar] [CrossRef]
  32. Hai, P.N.; Thanh, N.C.; Trung, N.T.; Kien, T.T. Transfer Learning for Disease Diagnosis from Myocardial Perfusion SPECT Imaging. CMC—Comput. Mater. Contin. 2022, 73, 5925–5941. [Google Scholar] [CrossRef]
  33. Kusumoto, D.; Akiyama, T.; Hashimoto, M.; Iwabuchi, Y.; Katsuki, T.; Kimura, M.; Akiba, Y.; Sawada, H.; Inohara, T.; Yuasa, S.; et al. A deep learning-based automated diagnosis system for SPECT myocardial perfusion imaging. Sci. Rep. 2024, 14, 13583. [Google Scholar] [CrossRef]
  34. Chen, J.J.; Su, T.Y.; Huang, C.C.; Yang, T.H.; Chang, Y.H.; Lu, H.H.S. Classification of coronary artery disease severity based on SPECT MPI polarmap images and deep learning: A study on multi-vessel disease prediction. Digit. Health 2024, 10, 20552076241288430. [Google Scholar] [CrossRef]
  35. Spielvogel, C.P.; Haberl, D.; Mascherbauer, K.; Ning, J.; Kluge, K.; Traub-Weidinger, T.; Davies, R.H.; Pierce, I.; Patel, K.; Nakuz, T.; et al. Diagnosis and prognosis of abnormal cardiac scintigraphy uptake suggestive of cardiac amyloidosis using artificial intelligence: A retrospective, international, multicentre, cross-tracer development and validation study. Lancet Digit. Health 2024, 6, e251–e260. [Google Scholar] [CrossRef]
  36. Kiso, K.; Nakajima, K.; Nimura, Y.; Nishimura, T. A novel algorithm developed using machine learning and a J-ACCESS database can estimate defect scores from myocardial perfusion single-photon emission tomography images. Ann. Nucl. Med. 2024, 38, 980–988. [Google Scholar] [CrossRef]
  37. Tufail, A.B.; Ma, Y.K.; Zhang, Q.N.; Khan, A.; Zhao, L.; Yang, Q.; Adeel, M.; Khan, R.; Ullah, I. 3D convolutional neural networks-based multiclass classification of Alzheimer’s and Parkinson’s diseases using PET and SPECT neuroimaging modalities. Brain Inform. 2021, 8, 23. [Google Scholar] [CrossRef] [PubMed]
  38. Aggarwal, N.; Saini, B.S.; Gupta, S. A deep 1-D CNN learning approach with data augmentation for classification of Parkinson’s disease and scans without evidence of dopamine deficit (SWEDD). Biomed. Signal Process. Control. 2024, 91, 106008. [Google Scholar] [CrossRef]
  39. Magesh, P.R.; Myloth, R.D.; Tom, R.J. An Explainable Machine Learning Model for Early Detection of Parkinson’s Disease using LIME on DaTSCAN Imagery. Comput. Biol. Med. 2020, 126, 104041. [Google Scholar] [CrossRef] [PubMed]
  40. Nazari, M.; Kluge, A.; Apostolova, I.; Klutmann, S.; Kimiaei, S.; Schroeder, M.; Buchert, R. Data-driven identification of diagnostically useful extrastriatal signal in dopamine transporter SPECT using explainable AI. Sci. Rep. 2021, 11, 22932. [Google Scholar] [CrossRef]
  41. Ding, J.E.; Chu, C.H.; Huang, M.N.L.; Hsu, C.C. Dopamine Transporter SPECT Image Classification for Neurodegenerative Parkinsonism via Diffusion Maps and Machine Learning Classifiers. In Proceedings of the 25th Conference on Medical Image Understanding and Analysis (MIUA 2021), Oxford, UK, 12–14 July 2021; Volume 12722, pp. 377–393. [Google Scholar] [CrossRef]
  42. Pahuja, G.; Nagabhushan, T.N.; Prasad, B. Early Detection of Parkinson’s Disease by Using SPECT Imaging and Biomarkers. J. Intell. Syst. 2020, 29, 1329–1344. [Google Scholar] [CrossRef]
  43. Wang, W.; Lee, J.; Harrou, F.; Sun, Y. Early Detection of Parkinson’s Disease Using Deep Learning and Machine Learning. IEEE Access 2020, 8, 147635–147646. [Google Scholar] [CrossRef]
  44. Khachnaoui, H.; Chikhaoui, B.; Khlifa, N.; Mabrouk, R. Enhanced Parkinson’s Disease Diagnosis Through Convolutional Neural Network Models Applied to SPECT DaTSCAN Images. IEEE Access 2023, 11, 91157–91172. [Google Scholar] [CrossRef]
  45. Shiiba, T.; Arimura, Y.; Nagano, M.; Takahashi, T.; Takaki, A. Improvement of classification performance of Parkinson’s disease using shape features for machine learning on dopamine transporter single photon emission computed tomography. PLoS ONE 2020, 15, e0228289. [Google Scholar] [CrossRef]
  46. Khachnaoui, H.; Khlifa, N.; Mabrouk, R. Machine Learning for Early Parkinson’s Disease Identification within SWEDD Group Using Clinical and DaTSCAN SPECT Imaging Features. J. Imaging 2022, 8, 97. [Google Scholar] [CrossRef]
  47. Huang, G.H.; Lin, C.H.; Cai, Y.R.; Chen, T.B.; Hsu, S.Y.; Lu, N.H.; Chen, H.Y.; Wu, Y.C. Multiclass machine learning classification of functional brain images for Parkinson’s disease stage prediction. Stat. Anal. Data Min. 2020, 13, 508–523. [Google Scholar] [CrossRef]
  48. Pianpanit, T.; Lolak, S.; Sawangjai, P.; Sudhawiyangkul, T.; Wilaiprasitporn, T. Parkinson’s Disease Recognition Using SPECT Image and Interpretable AI: A Tutorial. IEEE Sensors J. 2021, 21, 22304–22316. [Google Scholar] [CrossRef]
  49. Paranjothi, K.; Ghouse, F.; Vaithiyanathan, R. Detection of Parkinson’s Disease on DaTSCAN Image Using Multi-kernel Support Vector Machine. Int. J. Intell. Eng. Syst. 2024, 17, 45–55. [Google Scholar] [CrossRef]
  50. Aggarwal, N.; Saini, B.S.; Gupta, S. Feature engineering-based analysis of DaTSCAN-SPECT imaging-derived features in the detection of SWEDD and Parkinson’s disease. Comput. Electr. Eng. 2024, 117, 109241. [Google Scholar] [CrossRef]
  51. Gorji, A.; Jouzdani, A.F. Machine learning for predicting cognitive decline within five years in Parkinson’s disease: Comparing cognitive assessment scales with DAT SPECT and clinical biomarkers. PLoS ONE 2024, 19, e0304355. [Google Scholar] [CrossRef]
  52. Lin, Q.; Man, Z.X.; Cao, Y.C.; Wang, H.J. Automated Classification of Whole-Body SPECT Bone Scan Images with VGG-Based Deep Networks. Int. Arab J. Inf. Technol. 2023, 20, 1–8. [Google Scholar] [CrossRef]
  53. Gao, R.; Lin, Q.; Man, Z.; Cao, Y. Automatic Lesion Segmentation of Metastases in SPECT Images Using U-Net-Based Model. In Proceedings of the 2nd International Conference on Signal Image Processing and Communication (ICSIPC 2022), Qingdao, China, 20–22 May 2022. [Google Scholar] [CrossRef]
  54. Man, Z.; Lin, Q.; Cao, Y. CNN-Based Automated Classification of SPECT Bone Scan Images. In Proceedings of the International Conference on Neural Networks, Information, and Communication Engineering (NNICE 2022), Qingdao, China, 25–27 March 2022. [Google Scholar] [CrossRef]
  55. Lin, Q.; Li, T.T.; Cao, C.G.; Cao, Y.C.; Man, Z.X.; Wang, H.J. Deep learning based automated diagnosis of bone metastases with SPECT thoracic bone images. Sci. Rep. 2021, 11, 4223. [Google Scholar] [CrossRef]
  56. Lin, Q.; Luo, M.Y.; Gao, R.T.; Li, T.T.; Man, Z.X.; Cao, Y.C.; Wang, H.J. Deep learning based automatic segmentation of metastasis hotspots in thorax bone SPECT images. PLoS ONE 2020, 15, e0243253. [Google Scholar] [CrossRef]
  57. Cao, Y.; Liu, L.; Chen, X.; Man, Z.; Lin, Q.; Zeng, X.; Huang, X. Segmentation of lung cancer-caused metastatic lesions in bone scan images using self-defined model with deep supervision. Biomed. Signal Process. Control. 2023, 79, 104068. [Google Scholar] [CrossRef]
  58. Magdy, O.; Elaziz, M.A.; Dahou, A.; Ewees, A.A.; Elgarayhi, A.; Sallah, M. Bone scintigraphy based on deep learning model and modified growth optimizer. Sci. Rep. 2024, 14, 25627. [Google Scholar] [CrossRef]
  59. Ji, X.; Zhu, G.; Gou, J.; Chen, S.; Zhao, W.; Sun, Z.; Fu, H.; Wang, H. A fully automatic deep learning-based method for segmenting regions of interest and predicting renal function in pediatric dynamic renal scintigraphy. Ann. Nucl. Med. 2024, 38, 382–390. [Google Scholar] [CrossRef]
  60. Ryden, T.; Essen, M.V.; Marin, I.; Svensson, J.; Bernhardt, P. Simultaneous segmentation and classification of 99mTc-DMSA renal scintigraphic images with a deep learning approach. J. Nucl. Med. 2021, 62, 528–535. [Google Scholar] [CrossRef] [PubMed]
  61. Lin, C.; Chang, Y.C.; Chiu, H.Y.; Cheng, C.H.; Huang, H.M. Differentiation between normal and abnormal kidneys using 99mTc-DMSA SPECT with deep learning in paediatric patients. Clin. Radiol. 2023, 78, 584–589. [Google Scholar] [CrossRef] [PubMed]
  62. Phu, M.L.; Pham, T.V.; Duc, T.P.; Thanh, T.N.; Quoc, L.T.; Minh, D.C.; Ngoc, H.L.; Hong, S.M.; Thi, P.N.; Thi, N.N.; et al. RR-HCL-SVM: A two-stage framework for assessing remaining thyroid tissue post-thyroidectomy in SPECT images. Int. J. Imaging Syst. Technol. 2024, 34, e23066. [Google Scholar] [CrossRef]
  63. Masud, M.A.; Ngali, M.Z.; Othman, S.A.; Taib, I.; Osman, K.; Salleh, S.M.; Khudzari, A.Z.M.; Ali, N.S. Variation Segmentation Layer in Deep Learning Network for SPECT Images Lesion Segmentation. J. Adv. Res. Appl. Sci. Eng. Technol. 2024, 36, 83–92. [Google Scholar] [CrossRef]
  64. Kavitha, M.; Lee, C.H.; Shibudas, K.; Kurita, T.; Ahn, B.C. Deep learning enables automated localization of the metastatic lymph node for thyroid cancer on 131I post-ablation whole-body planar scans. Sci. Rep. 2020, 10, 7738. [Google Scholar] [CrossRef]
  65. Nadimi-Shahraki, M.H.; Fatahi, A.; Zamani, H.; Mirjalili, S. Binary Approaches of Quantum-Based Avian Navigation Optimizer to Select Effective Features from High-Dimensional Medical Data. Mathematics 2022, 10, 2770. [Google Scholar] [CrossRef]
  66. Chen, X.; Zhou, B.; Xie, H.; Guo, X.; Liu, Q.; Sinusas, A.J.; Liu, C. Cross-Domain Iterative Network for Simultaneous Denoising, Limited-Angle Reconstruction, and Attenuation Correction of Cardiac SPECT. In Proceedings of the 14th International Workshop, MLMI 2023, Vancouver, BC, Canada, 8 October 2023; Volume 14348, pp. 12–22. [Google Scholar] [CrossRef]
  67. Anbarasu, P.N.; Suruli, T.M. Deep Ensemble Learning with GAN-based Semi-Supervised Training Algorithm for Medical Decision Support System in Healthcare Applications. Int. J. Intell. Eng. Syst. 2022, 15, 1–12. [Google Scholar] [CrossRef]
  68. Huxohl, T.; Patel, G.; Zabel, R.; Burchert, W. Deep learning approximation of attenuation maps for myocardial perfusion SPECT with an IQ · SPECT collimator. EJNMMI Phys. 2023, 10, 49. [Google Scholar] [CrossRef]
  69. Mostafapour, S.; Gholamiankhah, F.; Maroufpour, S.; Momennezhad, M.; Asadinezhad, M.; Zakavi, S.R.; Arabi, H.; Zaidi, H. Deep learning-guided attenuation correction in the image domain for myocardial perfusion SPECT imaging. J. Comput. Des. Eng. 2022, 9, 434–447. [Google Scholar] [CrossRef]
  70. Xie, H.; Zhou, B.; Chen, X.; Guo, X.; Thorn, S.; Liu, Y.H.; Wang, G.; Sinusas, A.; Liu, C. Transformer-Based Dual-Domain Network for Few-View Dedicated Cardiac SPECT Image Reconstructions. In Proceedings of the International Conference on Medical Image Computing and Computer Assisted Intervention (MICCAI 2023), Vancouver, BC, Canada, 8–12 October 2023; Volume 14229, pp. 163–172. [Google Scholar] [CrossRef]
  71. Werner, R.A.; Higuchi, T.; Nose, N.; Toriumi, F.; Matsusaka, Y.; Kuji, I.; Kazuhiro, K. Generative adversarial network-created brain SPECTs of cerebral ischemia are indistinguishable to scans from real patients. Sci. Rep. 2022, 12, 18787. [Google Scholar] [CrossRef]
  72. Zhou, Y.; Tagare, H.D. Self-Normalized Classification of Parkinson’s Disease DaTscan Images. In Proceedings of the 2021 IEEE International Conference on Bioinformatics and Biomedicine (BIBM 2021), Virtual, 9–12 December 2021. [Google Scholar] [CrossRef]
  73. Chrysostomou, C.; Koutsantonis, L.; Lemesios, C.; Papanicolas, C.N. SPECT Angle Interpolation Based on Deep Learning Methodologies. In Proceedings of the 2020 IEEE Nuclear Science Symposium and Medical Imaging Conference (NSS/MIC 2020), Virtual, 31 October–7 November 2020. [Google Scholar] [CrossRef]
  74. Ichikawa, S.; Sugimori, H.; Ichijiri, K.; Yoshimura, T.; Nagaki, A. Acquisition time reduction in pediatric 99mTc-DMSA planar imaging using deep learning. J. Appl. Clin. Med. Phys. 2023, 24, e13978. [Google Scholar] [CrossRef] [PubMed]
  75. Kwon, K.; Oh, D.; Kim, J.H.; Yoo, J.; Lee, W.W. Deep-learning-based attenuation map generation in kidney single photon emission computed tomography. EJNMMI Phys. 2024, 11, 84. [Google Scholar] [CrossRef]
  76. Lin, C.; Chang, Y.C.; Chiu, H.Y.; Cheng, C.H.; Huang, H.M. Reducing scan time of paediatric 99mTc-DMSA SPECT via deep learning. Clin. Radiol. 2021, 76, 315.e13–315.e20. [Google Scholar] [CrossRef] [PubMed]
  77. Leube, J.; Gustafsson, J.; Lassmann, M.; Salas-Ramirez, M.; Tran-Gia, J. Analysis of a deep learning-based method for generation of SPECT projections based on a large Monte Carlo simulated dataset. EJNMMI Phys. 2022, 9, 47. [Google Scholar] [CrossRef] [PubMed]
  78. Marek, K.; Jennings, D.; Lasch, S.; Siderowf, A.; Tanner, C.; Simuni, T.; Coffey, C.; Kieburtz, K.; Flagg, E.; Chowdhury, S. Parkinson Progression Marker Initiative. The Parkinson Progression Marker Initiative (PPMI). Prog. Neurobiol. 2011, 95, 629–635. [Google Scholar] [CrossRef]
  79. Cios, K.; Kurgan, L.; Goodenday, L. SPECT Heart. UCI Machine Learning Repository. 2001. Available online: https://archive.ics.uci.edu/dataset/95/spect+heart (accessed on 23 February 2025).
  80. Medical Imaging Informatics Lab. SPECTMPISeg: SPECT for Left Ventricular Segmentation. 2024. Available online: https://github.com/MIILab-MTU/SPECTMPISeg (accessed on 23 February 2025).
  81. Kaplan, S. SPECT MPI Dataset. 2025. Available online: https://www.kaggle.com/datasets/selcankaplan/spect-mpi (accessed on 9 February 2025).
  82. Aggarwal, N.; Saini, B.S.; Gupta, S. Role of Artificial Intelligence Techniques and Neuroimaging Modalities in Detection of Parkinson’s Disease: A Systematic Review. Cogn. Comput. 2023, 16, 2078–2115. [Google Scholar] [CrossRef]
  83. Khachnaoui, H.; Mabrouk, R.; Khlifa, N. Machine learning and deep learning for clinical data and PET/SPECT imaging in Parkinson’s disease: A review. IET Image Process. 2020, 14, 4013–4026. [Google Scholar] [CrossRef]
  84. Hu, X.; Zhang, H.; Caobelli, F.; Huang, Y.; Li, Y.; Zhang, J.; Shi, K.; Yu, F. The role of deep learning in myocardial perfusion imaging for diagnosis and prognosis: A systematic review. iScience 2024, 27, 111374. [Google Scholar] [CrossRef]
  85. Russell, B.C.; Torralba, A.; Murphy, K.P.; Freeman, W.T. LabelMe: A Database and Web-Based Tool for Image Annotation. Int. J. Comput. Vis. 2008, 77, 157–173. [Google Scholar] [CrossRef]
  86. Li, X.; Li, M.; Yan, P.; Li, G.; Jiang, Y.; Luo, H.; Yin, S. Deep Learning Attention Mechanism in Medical Image Analysis: Basics and Beyonds. Int. J. Netw. Dyn. Intell. 2023, 2, 93–116. [Google Scholar] [CrossRef]
  87. Zhao, Y.; Li, X.; Zhou, C.; Pen, H.; Zheng, Z.; Chen, J.; Ding, W. A review of cancer data fusion methods based on deep learning. Inf. Fusion 2024, 108, 102361. [Google Scholar] [CrossRef]
  88. Li, X.; Li, L.; Jiang, Y.; Wang, H.; Qiao, X.; Feng, T.; Luo, H.; Zhao, Y. Vision-Language Models in medical image analysis: From simple fusion to general large models. Inf. Fusion 2025, 118, 102995. [Google Scholar] [CrossRef]
  89. Topol, E. High-performance medicine: The convergence of human and artificial intelligence. Nat. Med. 2019, 25, 44–56. [Google Scholar] [CrossRef] [PubMed]
  90. Morley, J.; Machado, C.C.V.; Burr, C.; Cowls, J.; Joshi, I.; Taddeo, M.; Floridi, L. The ethics of AI in health care: A mapping review. Soc. Sci. Med. 2020, 260, 113172. [Google Scholar] [CrossRef]
  91. Mennella, C.; Maniscalco, U.; Pietro, G.D.; Esposito, M. Ethical and regulatory challenges of AI technologies in healthcare: A narrative review. Heliyon 2024, 10, e26297. [Google Scholar] [CrossRef]
  92. Rieke, N.; Hancox, J.; Li, W.; Milletarì, F.; Roth, H.R.; Albarqouni, S.; Bakas, S.; Galtier, M.N.; Landman, B.A.; Maier-Hein, K.; et al. The future of digital health with federated learning. npj Digit. Med. 2020, 3, 119. [Google Scholar] [CrossRef]
  93. Collins, G.S.; Reitsma, J.B.; Altman, D.G.; Moons, K.G.M. Transparent Reporting of a multivariable prediction model for Individual Prognosis Or Diagnosis (TRIPOD): The TRIPOD Statement. Ann. Intern. Med. 2015, 162, 55–63. [Google Scholar] [CrossRef]
  94. Mongan, J.; Chen, M.Z.; Halabi, S.; Kalpathy-Cramer, J.; Langlotz, M.P.; Langlotz, C.P.; Kohli, M.D.; Erickson, B.J.; Kim, W.; Auffermann, W.; et al. Checklist for Artificial Intelligence in Medical Imaging (CLAIM): A Guide for Authors and Reviewers. Radiol. Artif. Intell. 2020, 2, e200029. [Google Scholar] [CrossRef]
  95. Tejani, A.S.; Klontzas, M.E.; Gatti, A.A.; Mongan, J.T.; Moy, L.; Park, S.H.; Kahn, C.E., Jr. Checklist for Artificial Intelligence in Medical Imaging (CLAIM): 2024 Update. Radiol. Artif. Intell. 2024, 6, e240300. [Google Scholar] [CrossRef]
  96. Collins, G.S.; Moons, K.G.M.; Dhiman, P.; Riley, R.D.; Beam, A.L.; Calster, B.V.; Ghassemi, M.; Liu, X.; Reitsma, J.B.; van Smeden, M.; et al. TRIPOD+AI statement: Updated guidance for reporting clinical prediction models that use regression or machine learning methods. BMJ 2024, 385, e078378. [Google Scholar] [CrossRef]
  97. Sadeghi, Z.; Alizadehsani, R.; CIFCI, M.A.; Kausar, S.; Rehman, R.; Mahanta, P.; Bora, P.K.; Almasri, A.; Alkhawaldeh, R.S.; Hussain, S.; et al. A review of Explainable Artificial Intelligence in healthcare. Comput. Electr. Eng. 2024, 118, 109370. [Google Scholar] [CrossRef]
Figure 1. Visualizations of one renal scan for a patient from the dynamic renal scintigraphy study database [6].
Figure 2. Comparison of renal planar (a–f) and SPECT (g–l) images [7].
Figure 3. PRISMA flowchart for the study selection process [18].
Figure 4. Publishing years for the reviewed studies.
Table 1. Type of research categorized by method and organ.

Method | Organ | N | Publications
Diagnostic methods (N = 45) | Heart | 17 | [20,21,22,23,24,25,26,27,28,29,30,31,32,33,34,35,36]
 | Brain | 15 | [37,38,39,40,41,42,43,44,45,46,47,48,49,50,51]
 | Bones | 7 | [52,53,54,55,56,57,58]
 | Kidney | 3 | [59,60,61]
 | Thyroid glands | 2 | [62,63]
 | Lymph nodes | 1 | [64]
General imaging methods (N = 13) | Heart | 6 | [65,66,67,68,69,70]
 | Brain | 3 | [71,72,73]
 | Kidney | 3 | [74,75,76]
 | N/A * | 1 | [77]
* The authors used a Monte Carlo simulated dataset to generate SPECT projections that are not organ-specific [77].
Table 2. Data sources.

Data | Name | Country | N
Open dataset (N = 18) | PPMI | Multicenter | 14
 | UCI SPECT Heart | USA | 2
 | SPECTMPISeg | China | 1
 | SPECT MPI | Turkey | 1
In house (N = 38) | | China | 9
 | | Japan | 5
 | | Taiwan | 5
 | | Greece | 3
 | | Iran | 3
 | | South Korea | 2
 | | Vietnam | 2
 | | Algeria | 1
 | | Egypt | 1
 | | Germany | 1
 | | USA | 1
 | | Multicenter | 1
Not reported | | | 4
Software phantoms | | | 1
None | | | 1
Table 3. Research tasks and methods: diagnostic methods (N = 45).

Classification (N = 34)
Planar:
  DenseNet21 [35]
  MobileViT + GO [58]
SPECT, CNNs:
  Custom CNN [22,26,33,37,38,40,61]
  Custom CNN, VGG16, DenseNet, MobileNet, Inception [24]
  Custom CNN, EfficientNet-B0, MobileNet-V2 [44]
  VGG16 [39,52]
  VGG16 from scratch and with transfer learning [54]
  Custom feature-fusion VGG19 [28]
  ResNet50V2 [27]
  EfficientNet V2 [34]
  VGG16, VGG19, DenseNet, AlexNet, GoogleNet, NASNet-Large, ResNet [25]
  VGG16, VGG19, DenseNet, ResNet and custom VGG7, VGG21 and VGG24 [55]
  VGG, Xception, MobileNet, EfficientNet, Inception, DenseNet, ResNet [32]
  VGG16, LDA, SVM, DT, MLP, RF, AB [47]
  VGG16, AlexNet + Multi-kernel SVM [49]
  PD Net (previously published model) [48]
  DETR (previously published model) [62]
SPECT, other deep learning:
  AE + AB, SVM, KNN, RF, GB, BC, MLP, DT, LR [51]
  Stacked AE [42]
  FNN, DT, RF, LR, KNN, SVM, LDA [43]
SPECT, other machine learning:
  DM + LDA [41]
  Density-Based Spatial, K-means and Hierarchical Clustering [46]
  SVM [45]
  SVM, KNN, DT, BT, RF [31]
  SVM, DT, RF, LR, MLP, GB, XGB [30]
  SVM, KNN, DT, RF, LR, NB, AB, GB, SGD [50]
  SVM, KNN, DT, RF, LR, MLP, NB, GB, XGB [29]

Segmentation (N = 10)
Planar:
  FNN [64]
  Custom Model (Swin-Unet + DeepLab) [59]
SPECT:
  Custom CNN [57]
  U-net [20,23,36,53,63]
  V-net + dynamic programming [21]
  U-net, Mask R-CNN [56]

Classification and Segmentation (N = 1)
Planar:
  Mask R-CNN [60]
SPECT: -

“+” denotes a combination of methods, “,” denotes a comparison of methods, and “()” denotes base models. CNN—convolutional neural network, LDA—linear discriminant analysis, GAN—generative adversarial network, SVM—support vector machine, NN—neural network, DT—decision tree, MLP—multilayer perceptron, RF—random forest, AB—adaptive boosting, DM—diffusion maps, AE—autoencoder, FNN—feed-forward NN, LR—logistic regression, KNN—k-nearest neighbors, XGB—extreme gradient boosting, GB—gradient boosting, BT—boosted tree, GO—growth optimizer, NB—naive Bayes, SGD—stochastic gradient descent, BC—bagging classifier.
Table 4. Research tasks and methods: general imaging methods (N = 13).

Synthetic data
  Planar: -
  SPECT: U-net [77]; GAN [67,71]
Reconstruction
  Planar: DnCNN, Win5RB, ResUnet [74]
  SPECT: Custom Transformer-based Dual-domain Network [70]; Custom Residual Network [76]
Attenuation generation or enhancement
  Planar: -
  SPECT: cGAN (U-net + PatchGAN) [68]; ResNet, U-net [69]; U-net [75]
Normalization
  Planar: -
  SPECT: Self-normalization via a projection of voxels [72]; U-net [73]
Feature selection
  Planar: -
  SPECT: Quantum-Based Avian Navigation Optimizer [65]
Mixed
  Planar: -
  SPECT: Custom Cross-domain Iterative Network (U-net) [66]
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
