Systematic Review

Systematic Review: AI Applications in Liver Imaging with a Focus on Segmentation and Detection

by Mihai Dan Pomohaci 1,2, Mugur Cristian Grasu 1,2,*, Alexandru-Ştefan Băicoianu-Nițescu 1,2, Robert Mihai Enache 2 and Ioana Gabriela Lupescu 1,2,*
1 Department 8: Radiology, Discipline of Radiology, Medical Imaging and Interventional Radiology I, University of Medicine and Pharmacy “Carol Davila”, 050474 Bucharest, Romania
2 Department of Radiology and Medical Imaging, Fundeni Clinical Institute, 022328 Bucharest, Romania
* Authors to whom correspondence should be addressed.
Life 2025, 15(2), 258; https://doi.org/10.3390/life15020258
Submission received: 29 December 2024 / Revised: 2 February 2025 / Accepted: 5 February 2025 / Published: 8 February 2025
(This article belongs to the Special Issue Current Progress in Medical Image Segmentation)

Abstract:
The liver is a frequent focus in radiology due to its diverse pathology, and artificial intelligence (AI) could improve diagnosis and management. This systematic review aimed to assess and categorize research studies on AI applications in liver radiology from 2018 to 2024, classifying them according to areas of interest (AOIs), AI task and imaging modality used. We excluded reviews and non-liver and non-radiology studies. Using the PRISMA guidelines, we identified 6680 articles from the PubMed/Medline, Scopus and Web of Science databases; 1232 were found to be eligible. A further analysis of a subgroup of 329 studies focused on detection and/or segmentation tasks was performed. Liver lesions were the main AOI and CT was the most popular modality, while classification was the predominant AI task. Most detection and/or segmentation studies (48.02%) used only public datasets, and 27.65% used only one public dataset. Code sharing was practiced by 10.94% of these articles. This review highlights the predominance of classification tasks, especially applied to liver lesion imaging, most often using CT imaging. Detection and/or segmentation tasks relied mostly on public datasets, while external testing and code sharing were lacking. Future research should explore multi-task models and improve dataset availability to enhance AI’s clinical impact in liver imaging.

1. Introduction

The liver is the largest organ in the abdomen, normally positioned in the upper-right quadrant, acting as a biofilter, with multiple metabolic tasks including both exocrine and endocrine functions [1]. Its unique dual blood supply, from both the hepatic artery and the portal vein, reflects its complex role in maintaining homeostasis; the hepatic veins collect blood from the liver and deliver it to the inferior vena cava [2]. The impact of chronic liver disease, cirrhosis and its complications is extensive, with a need for better prevention and surveillance methods [3]. This need is further emphasized by the increasing prevalence of metabolic dysfunction-associated fatty liver disease (MAFLD), estimated to grow by 21% from 2015 to 2030 [4]. Metabolic dysfunction-associated steatohepatitis (MASH) is part of MAFLD and is characterized by fat accumulation, inflammation and fibrosis, often progressing to cirrhosis [5]. Advanced liver imaging provides a non-invasive assessment of these changes, reducing the need for procedures like biopsy. Transient elastography and shear wave elastography (SWE) can evaluate liver stiffness, aiding in the staging of fibrosis [6]. Magnetic resonance elastography (MRE) has emerged as a highly accurate modality for detecting fibrosis, with improved reproducibility over ultrasound-based methods [7]. Steatosis can also be diagnosed with ultrasound (US) imaging, but US offers no precise method of non-invasive quantification. With the advent of the MRI proton density fat fraction (PDFF), a more precise quantification of hepatic steatosis can be performed [8].
Primary liver cancer frequently develops in the setting of chronic liver disease, represented mainly by hepatocellular carcinoma (HCC) but also by cholangiocellular carcinoma and other rare entities [9]. In 2020, the Global Cancer Observatory classified primary hepatic cancer as the third most common cause of death, ranking it as the sixth most frequently diagnosed type of cancer [9]. The liver is also a common site of metastasis, with up to 50% of patients presenting with liver metastasis or developing them during their oncologic disease, particularly from colorectal and pancreatic cancer [10]. Computed tomography (CT) and magnetic resonance imaging (MRI) are crucial in diagnosing and monitoring these patients. Similarly, contrast-enhanced ultrasound (CEUS) is a key technique that can provide additional real-time assessment of liver lesions.
Artificial intelligence (AI) is a growing field of study in the context of an increase in the amount of available data and computational power. Machine learning (ML) and deep learning (DL) are two nested subfamilies of AI, capable of extracting patterns from data without explicit programming [11]. Convolutional neural networks (CNNs) are a type of DL inspired by the function of neurons and synapses in the human cortex; they extract patterns of features from images during the training phase and use them to produce an output during the testing phase [12]. CNNs, compared to other ML subtypes, do not require hand-crafted features or manual segmentation, so minimal human intervention is required. However, they demand large amounts of data and advanced graphical processing units [12]. A simplified hierarchical representation of the relationship between these AI subcategories is shown in Figure 1. Radiology and diagnostic imaging are major areas of research for DL and ML applications [13,14], as the data are stored in a picture archiving and communication system (PACS) for multiple years and can be retrospectively processed. Even though the number of commercially available AI applications in radiology is increasing, the abdominal region is lagging behind in the implementation of these technologies. One meta-analysis of 100 commercially available applications from 2020 showed that only 2% focused on the liver, compared to 38% on neuro-imaging and 31% on chest imaging; additionally, these two liver applications were specifically designed for iron and fat quantification. In an analysis of the trends of applications for DL networks in medical imaging [15], the abdominal region ranked third between 2012 and 2020, behind neuro- and thoracic imaging. One potential explanation for this paucity of liver applications is the complexity of triple-phase contrast scans, with arterial, porto-venous and delayed/equilibrium phases, which adds the difficulty of registration.
Additionally, the liver is more prone to changes in orientation or artifacts secondary to respiratory movement and diaphragmatic excursions.
To ensure reproducibility and transparency, guidelines for medical imaging AI model development have been published, such as Checklist for Artificial Intelligence in Medical Imaging (CLAIM) [16] or MINimum Information for Medical AI Reporting (MINIMAR) [17]. A comprehensive list of guidelines for developing AI tools has been outlined by Klontzas et al. [18]. Code sharing plays a crucial role in the reproducibility and validation of AI models in medical imaging. This allows researchers to verify, refine and build upon existing algorithms, fostering collaboration and accelerating innovation. Similarly, prospective studies are essential for the correct validation of AI models.
The main AI tasks in radiology are detection, segmentation, classification/regression and image optimization/reconstruction. A detection model identifies a structure, an organ or a lesion, most often using a bounding box. DL models, especially CNNs, can be used to identify liver lesions, assisting radiologists and potentially reducing the number of overlooked lesions [19]. Segmentation models create a precise delineation of the pixels representing a structure in an image [20], outputting a mask. They can help automate processes like CT or MRI liver volume assessment for transplant patients, fat quantification in MAFLD patients using MRI-PDFF and the evaluation of fibrotic changes in patients with chronic liver disease using MRE. Similarly, lesions can be precisely delineated, providing 3D measurements for diagnosis and follow-up. A classification task categorizes an image into a variable number of categories [20] (e.g., hepatocellular carcinoma, cholangiocarcinoma or hemangioma), while a regression task uses an image or a set of images to output a continuous value [21] (e.g., predicted survival = 1.2 years). All these models can improve the daily workflow of radiologists, providing precise, reproducible measurements and novel biomarkers in liver imaging.
The rationale for this systematic review lies in the increasing interest in AI-based applications for liver imaging in radiology in the context of a growing global burden of liver diseases and the parallel advancements in AI technologies. The objective of this review was to systematically assess and classify research studies on AI applications in liver radiology from 2018 to 2024. Specifically, our research questions were the following: Considering the complexity of hepatic imaging, what are the main anatomical areas of interest (AOIs) in liver studies developing AI models? Considering the great potential and multitude of AI tasks in liver imaging, what is the prevalence of classification, detection, segmentation and image optimization models in the evaluated studies? What imaging modalities are most frequently used in liver AI research? How has the distribution of AOI, AI task and modality changed over time (2018–2024)? A more detailed analysis was performed on detection and segmentation studies, with the following questions: What are the most common AOIs specifically for this task? What percentage of studies rely on public, private, or a combination of both datasets? What are the most commonly used public datasets? To what extent is external validation applied in liver detection and segmentation studies? What percentage of studies provide publicly available code and how does the lack of code sharing impact reproducibility and transparency? Are AI-based liver imaging studies predominantly retrospective or prospective?

2. Materials and Methods

This study adhered to the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) guidelines [22].
A systematic search was performed on the PubMed/Medline, Scopus and Web of Science databases, including articles published between 01.01.2018 and 29.10.2024. The following keywords were used in combination with Boolean operators according to each database’s specific search queries: “Liver”, “Hepatic”, “Liver Metastasis”, “Hepatocarcinoma”, “Cholangiocarcinoma”, “Radiology”, “Diagnostic Imaging”, “Magnetic Resonance Imaging”, “MRI”, “computed tomography”, “CT”, “Ultrasonography”, “Ultrasound”, “Artificial Intelligence”, “Machine Learning”, “Deep Learning”, “Radiomics” and “neural network”. Detailed information on the search queries for each database is provided in the Supplementary Data (Table S1).
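The exact per-database queries are given in Table S1; purely as an illustration (a hypothetical simplification, not the query actually used), a PubMed-style Boolean combination of these keyword groups could look like:

```text
("Liver" OR "Hepatic" OR "Hepatocarcinoma" OR "Cholangiocarcinoma")
AND ("Radiology" OR "Diagnostic Imaging" OR "MRI" OR "CT"
     OR "Ultrasound" OR "Ultrasonography")
AND ("Artificial Intelligence" OR "Machine Learning" OR "Deep Learning"
     OR "Radiomics" OR "neural network")
```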
The extracted records included 6680 articles, which were uploaded to the Rayyan application (https://www.rayyan.ai/): PubMed—2321; Scopus—1656; and Web of Science—2703. Using the dedicated functionality, duplicates were detected and removed, leaving 3830 articles for the screening phase. Two radiologists (R.E., 3rd-year resident; M.P., radiologist) independently screened the titles using Rayyan labels. The inclusion criteria were original research articles, publication dates of 1 January 2018–29 October 2024, English language, papers applying to humans, the use of AI and radiological images (CT, MRI, US, PET-CT, etc.) and a focus on the liver. A total of 2045 articles were removed based on the following exclusion criteria: conference papers (530), a focus on other organs (466), reviews (420), histopathology data (236), animal or cadaveric experiments (178), phantom studies (119), non-English studies (57), retracted publications (10), preprints (14), editorials (6), comments (6) and case reports (3). For the remaining 1785 articles, the titles and abstracts were analyzed by three radiologist reviewers (R.E., M.P., A.S.B.N.), and 553 articles were excluded for the following reasons: they were studies with no imaging data (radiology reports, lab data, genetics, etc.), dose predictions, technical MRI/CT component analyses or abdominal body composition studies. There were 379 conflicts, which were resolved by consensus agreement. The PRISMA flow diagram of our work is represented in Figure 2.
The remaining 1232 articles were classified according to the following:
  • The area of interest (AOI): liver parenchyma, lesions, vascularity, bile ducts or complex models (applied to >1 area, e.g., liver parenchyma and lesions).
  • The AI task: detection and/or segmentation, classification/regression, image optimization (registration/synthesis/reconstruction) and multi-task (performing >1 task, e.g., segmentation and classification).
  • The modality: US, CT, MRI, nuclear medicine, multi-modality (using > 1 modality, e.g., CT and MRI).
We performed a more in-depth analysis of the full manuscripts for articles that focused on detection and/or segmentation (329/1232), extracting information on the following:
  • The datasets used: public, private or both.
  • The usage of an external dataset.
  • The availability of the code used for model development.
  • The prospective or retrospective nature of the study.
Data from individual studies were tabulated according to the abovementioned criteria to summarize key study characteristics. Visual representations included hierarchical tree maps (Figures 3, 5 and 7) for the AOI, modality and AI task; line charts (Figures 4, 6, 8 and 9) for temporal trends; and stacked bar charts in Appendix A (Figure A1, Figure A2 and Figure A3). The number of articles for which the respective data were unavailable is mentioned in the results.

3. Results

3.1. Area of Interest (AOI)

Data regarding the AOI are represented in Table 1, with all five main categories in the left section and a subgroup analysis of complex AOIs in the right section. No. represents the number of articles that researched that AOI. The tree chart in Figure 3 helps visualize the relationship between the five main areas of interest in a hierarchical configuration. The yearly trends of the same five main AOIs from 2018 to 2024 are represented in Figure 4 (the values for 2024 have been estimated).

3.2. Modality

The results on the modality used are found in Table 2, with the main six types in the left section and a more detailed analysis of multi-modality subgroups in the right section. No. represents the number of articles that used that modality. The tree chart in Figure 5 was generated to help visualize the hierarchy between the six main categories. Figure A1 in Appendix A is a stacked bar chart representing the modalities across the AOIs. The yearly trends from 2018 to 2024 are shown in Figure 6 (the values for 2024 have been estimated). For six articles, information on the modality used was not found; these were not added to the table or figures.

3.3. AI Task

Data on the four main AI tasks are presented in Table 3 in the left section, with a further analysis of AI multi-task subcategories in the right section. No. represents the number of articles that applied the AI task. A visual representation of the hierarchical structure of the four main AI tasks is given in Figure 7. Stacked bar charts representing the relationship of AI task across AOI and AI task across modality are shown in Figure A2 and Figure A3 in Appendix A. The yearly trends from 2018 to 2024 are represented in Figure 8 (the values for 2024 have been estimated).

3.4. Detection and/or Segmentation Studies

3.4.1. Detection and/or Segmentation AOIs

An in-depth analysis of the 329 articles that researched detection and/or segmentation tasks was performed. The main AOIs studied in these papers are represented in Table 4 in the left section, with a subcategory analysis of complex AOIs in the right section. The yearly trends from 2018 to 2024 are represented in Figure 9 (the values for 2024 have been estimated).

3.4.2. Detection and/or Segmentation (D&S) Datasets

The distribution of public and private datasets is shown in the left section of Table 5, with the top four public datasets used represented in the right section. No. represents the number of articles that used that particular dataset or dataset type. For 17 articles, information regarding datasets was not found; these were not added to the tables. A list of all the public datasets found in the evaluated papers can be found in Appendix A, Table A1.

4. Discussion

Several reviews have previously analyzed the role of AI in liver imaging, systematically or by summarizing the state of the art. Nam et al. investigated publications in hepatology with a broader area of research, including studies using radiology, histopathology and clinical data [27]. Their analysis also suggested that CT was the most widely used modality and that diagnosis and prognosis were the most common functions, followed by segmentation. Unlike Nam et al., who emphasized the potential of AI in different data types, our review focused on radiology data only and had a systematic approach. Furthermore, we categorized research based on imaging modalities, AI tasks and areas of interest, with a detailed focus on detection and segmentation database usage and code-sharing practices. Radiya et al. systematically analyzed 191 studies and focused specifically on machine learning applications in CT imaging. Our review expanded this scope by incorporating other types of radiology data (MRI, US, multi-modality, nuclear medicine) and providing temporal trends from 2018 to 2024 [28]. Their study also analyzed dataset types, with public datasets being the most common and LiTS the most widely used. Additionally, we provided a list of all the public datasets found in detection and segmentation studies and performed an analysis of the data described in private datasets.
In our systematic review, liver lesions were the most researched AOI, explored in 60.30% of studies. This superiority was maintained across the years (Figure 4), across imaging modalities (Figure A1) and across AI tasks (Figure A2). The number of articles that handled complex AOIs was low (6.25%) but showed a slow and steady increase, shown in Figure 4 (a peak in 2021 with 21 studies). The majority of complex AOI studies (93.50%) combined an analysis of liver parenchyma and lesions. There was only one article that handled more than two AOIs, a model developed by Oh et al. that segmented parenchyma, lesions, vessels and bile ducts in the MRI hepatobiliary phase [29]. Furthermore, when performing a cross analysis of complex AOI articles and AI tasks, we noticed that most of them (92.20%) involved detection and/or segmentation (Figure A2). These data might suggest that a comprehensive approach that integrates all liver structures into one AI model is still a very difficult task.
CT was the primary imaging modality, used in 51.42% of papers, almost twice as often as MRI (27.19%) and more than three times as often as US (15.34%). This preference can be explained by multiple factors. CT uses Hounsfield units (HU) to measure voxel values, which provides a standardized and reproducible measure across different scanners. In contrast, MRI lacks a similar quantitative standard, and the signal intensity can vary depending on the scanner and imaging protocol. This variability introduces an additional preprocessing step to normalize the data, an essential step in developing MRI AI models, which lacks uniform guidelines [30]. MRI public datasets are also lacking; we found only six in the analyzed D&S group of articles, namely CHAOS [26], AMOS22 [31], ATLAS [32], DLDS [33], LiverHccSeg [34] and TCIA [35]. Ultrasound imaging is more user-dependent, and the windows used to capture liver images are not standardized. Another factor could be the scarcity of public US datasets; in our D&S group of articles, we found only one, MICCAI CLUST [36,37]. The list of all the public datasets found in the evaluated papers is presented in Appendix A, Table A1.
Analyzing the trends in modality use (Figure 6), we can see an increase in the use of multi-modality data, from 5 studies in 2021 and 3 studies in 2022 to 17 in 2023. One explanation could be the rise of foundation models, whose popularity surged in 2023, stimulating interest in combining data from multiple modalities for advanced analysis and applications. In addressing the need for collaborative frameworks and improved dataset diversity, the CHAIMELEON project focuses on developing a standardized, multi-modal imaging repository for AI tool validation across Europe. Similarly, the European Federation for Cancer Images (EUCAIM) project aims to establish a federated infrastructure for secure, cross-border data sharing. By leveraging these frameworks, future research can bridge the gap between AI development and clinical application, ultimately enhancing AI’s impact in liver imaging and beyond.
Classification and/or regression were the most researched tasks in our study on liver imaging. The purpose of such models can range from distinguishing benign vs. malignant lesions [38,39] to more specific differential diagnoses like HCC vs. combined hepatocellular-cholangiocarcinoma [40,41]. They can also be developed to subtype a tumor according to histopathological (HP) features, like predicting microvascular invasion in HCC [42,43], or to predict response to treatment in cholangiocarcinoma [44,45] or survival in HCC [46,47]. The multitude of options that this task can encompass, beyond just an increased interest, could also explain these superior numbers.
The distribution of AI tasks across modalities highlights key differences in research focus and modality preferences (depicted in Figure A3 in Appendix A). In publications that used CT, classification tasks represented 46.92% of CT studies, while detection and segmentation (D&S) represented 38.89%. The CT D&S studies followed the same pattern of AOI distribution as other modalities, most focusing on liver lesion segmentation, with liver parenchyma segmentation as the second most common AOI. CT classification studies had diverse objectives, with most focusing on liver parenchyma (e.g., fibrosis staging [48], NASH diagnosis [49], etc.) or liver lesions (e.g., prediction of HCC microvascular invasion [50]). Conversely, in publications that used MRI, classification tasks prevailed, representing 71.04% of MRI studies, with detection and segmentation accounting for 13.43%. The MRI classification studies were also very diverse in purpose, most focusing on liver parenchyma (e.g., fibrosis evaluation on MRI ADC maps [51]) or liver lesions (e.g., predicting HCC recurrence after ablation [52]). This distribution suggests that CT is widely used for both lesion characterization and segmentation, while MRI plays an essential role mainly in AI lesion characterization or AI-based predictions and less in detection and segmentation. This can be explained by the superior complexity of MRI liver imaging, often used as an additional diagnostic tool when US and CT imaging cannot provide a diagnosis. The superior contrast resolution and multiple types of acquisition allow more information to be extracted by AI models in order to reach a complex diagnosis like microvascular invasion in HCC [43], which normally requires a histopathological diagnosis.
With the increasing need to extract more complex imaging biomarkers, there is also a need for automated processes that output at least a rudimentary delineation of the area of interest, if not precise 3D volumes. Manual segmentations, although considered a “gold standard”, are prone to inter- and intra-reader variability and are also time-consuming [53]. In a study conducted on 105 patients, implementing an automatic DL model reduced the processing time for liver segmentation from an average of 169.8 s per case for manual contouring to 1.7 s [54]. This was our motivation to perform a more in-depth analysis of detection and/or segmentation studies.
An analysis of the trends for segmentation and detection (Figure 9) showed that in 2018–2020, the number of articles focusing on liver parenchyma was greater than the number focusing on lesions. After 2021, the ratio reversed or equalized (2022). One possible explanation is that liver parenchyma segmentation achieved very good results in the LiTS competitions in 2017 and 2018 [23], with most of the teams obtaining Dice scores higher than 0.920. Another factor could be that many studies now focus on multi-organ segmentation, with competitions like the Medical Segmentation Decathlon (MSD) [55] assessing segmentation performance for 10 organs in total. More recently, TotalSegmentator has been publicly released, which provides segmentations for 104 structures [56], including the liver, and it has been implemented in open-source applications like 3D Slicer 5.0 (https://www.slicer.org).
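The Dice score used throughout these competitions is a simple overlap ratio between a predicted and a reference mask. A minimal illustrative sketch (not taken from any of the reviewed studies) is:

```python
import numpy as np

def dice_score(pred: np.ndarray, truth: np.ndarray) -> float:
    """Dice similarity coefficient between two binary segmentation masks."""
    pred = pred.astype(bool)
    truth = truth.astype(bool)
    intersection = np.logical_and(pred, truth).sum()
    total = pred.sum() + truth.sum()
    if total == 0:
        return 1.0  # both masks empty: treat as perfect agreement
    return 2.0 * intersection / total

# Toy 2D example: two overlapping square "liver" masks
a = np.zeros((8, 8), dtype=np.uint8); a[2:6, 2:6] = 1  # 16 pixels
b = np.zeros((8, 8), dtype=np.uint8); b[3:7, 3:7] = 1  # 16 pixels, 9 shared
print(dice_score(a, b))  # 2*9 / (16+16) = 0.5625
```

The score ranges from 0 (no overlap) to 1 (identical masks), which is why values above 0.920 for parenchyma indicate near-complete agreement with the reference contour.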
The good performance of DL models for liver parenchyma segmentation is also reflected in studies analyzing clinical impact, with graft volume estimations performed by DL models closely matching the actual graft weight on both CT [57,58,59,60,61] and MRI [62]. Radiomics features extracted from automated hepatic parenchyma segmentations have also been shown to be more reproducible than those from manual contours in portal-phase MRI [63]. These data could suggest that AI liver parenchyma segmentation might be a solved problem from a technical performance perspective and should be ready for clinical implementation.
Studies have shown that a unidimensional diameter does not always correlate with the actual tumor size and volume [64,65,66] and that it increases inter-reader variability [67]. In a study by Joskowicz et al. [68], when radiologists had access to quantitative AI data for liver metastasis evaluation, they changed and improved their oncological disease status decision in one third of cases. Similarly, in a study by Wesdorp et al. [69], total tumor volume quantification for colorectal liver metastasis demonstrated prognostic potential in response evaluation to systemic treatment compared to unidimensional measurements. These studies underline the need for AI-assisted quantification in liver oncologic studies and the need to move beyond unidimensional measurements. Results from the LiTS competitions [23] for liver lesion detection and segmentation showed a maximum Dice score of 0.702 in 2017 and 0.739 in 2018, and the best detection performance was 0.479 in 2017 and 0.554 in 2018. Although these competitions provide a common set of rules for participation, which ensures transparency, there is less information on patient history or multiphase scan integration. Liver imaging is very complex and clinical data are essential, as reflected in the 81 defined terms for image interpretation in the Liver Imaging Reporting and Data System (LI-RADS) Lexicon [70].
A closer look into detection and segmentation (D&S) articles showed that LiTS [23] (used in 41.64% of studies) and 3DIRCADb [24] (used in 29.48%) were the most popular public datasets, used alone or in combination with other datasets. More than a quarter of D&S studies (27.65%) used only one public dataset for model development, and only 10% of articles explicitly mentioned using an external dataset for model testing. Current guidelines like CLAIM [16] or MINIMAR [17] emphasize the use of external data in model development in medical imaging. The most common combination of two datasets was LiTS [23] and 3DIRCADb [24], used in 35 studies (10.63%). When combining these datasets, it is important to keep in mind that 3DIRCADb is already contained within LiTS and to avoid using it as a test set, which would falsely inflate the reported performance.
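The dataset-overlap caveat above can be enforced mechanically: before training, the case identifiers of the two splits can be checked for disjointness. A minimal sketch (the identifiers below are hypothetical placeholders, not actual LiTS/3DIRCADb file names):

```python
def check_disjoint(train_ids, test_ids):
    """Raise if any case appears in both splits, e.g., a 3DIRCADb volume
    that is also contained within the LiTS training data."""
    leakage = sorted(set(train_ids) & set(test_ids))
    if leakage:
        raise ValueError(f"Train/test overlap detected: {leakage}")

# Hypothetical identifiers for illustration only
check_disjoint(["case_027", "case_048", "ircad_02"],
               ["case_101", "case_115"])  # passes: splits are disjoint
```

Running such a check at the start of every experiment is a cheap safeguard against the inflated test scores described above.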
We found 155 studies that used private datasets (alone or combined with public ones). For 52.25% of them, information on contrast acquisition or the MRI sequence was not found. Among the papers that described their data, the largest share (29.67%) used complex data, including multiphasic CT/MRI, multiple non-contrast MRI acquisitions or a combination of both. A complete description of the imaging data found in private datasets is presented in Appendix A in Table A2. Using multiphasic imaging is strongly recommended for liver tumor imaging in clinical practice, especially if the etiology is unknown [71]. Using more than one phase for DL segmentation has been shown to improve accuracy and reduce the number of false negative predictions for hepatocellular carcinoma [72]. However, this adds complexity, especially with liver registration between acquisitions, which has been shown to be a source of false positives or false negatives for DL models [73]. Understanding the contexts where these AI models perform best or worst with regard to data type could help us better integrate them into clinical practice.
Most detection and/or segmentation studies (74.77%) were described as retrospective, while only 0.91% were prospective; the rest of the studies (23.31%) had no information on the retrospective or prospective nature of data collection. Studies using only public datasets were considered retrospective. Prospective studies are essential for AI model development, as they allow for real-time validation in clinical settings, reducing the risk of dataset bias and overfitting. Code sharing is recommended and mentioned as a checklist item in the CLAIM guidelines [16]; it ensures reproducibility, transparency and collaboration in AI model development. A multi-society statement by the ACR, ESR and other leading radiological organizations emphasized the ethical responsibility of AI developers to promote openness and equitable access to AI tools [74]. These ethical principles align with the need for code sharing, as it allows for critical scrutiny, validation and continuous improvement by the global research community. Despite being included as a strong recommendation or as a mandatory part of scientific articles in most journals, sharing practices in medical sciences remain low [75]. In our study, code links were shared by a small number of D&S papers, 36/329 (10.94%); the full list of these articles is provided in the Supplementary Materials in Table S2 [54,59,63,73,76,77,78,79,80,81,82,83,84,85,86,87,88,89,90,91,92,93,94,95,96,97,98,99,100,101,102,103,104,105,106,107].
Given the importance of classification and/or regression models in the literature, we plan to perform a similar in-depth analysis of these types of studies in the future.

5. Limitations

By combining detection and segmentation in our AI task assessment, we might have lost some of the granularity in liver task evaluation. However, these terms are sometimes used interchangeably, and even when both are mentioned, sometimes metrics are present for only one of the tasks. Our decision to group them together was made to maintain consistency in data reporting. However, future studies could benefit from distinguishing these tasks more clearly and incorporating standardized evaluation metrics to improve comparability across studies.
The number of articles regarding biliary imaging might have been underrepresented as no specific keywords for biliary structures were included in the search. This limitation could impact our findings by underestimating the role of AI in evaluating biliary pathologies. Future research could refine keyword selection to include terms related explicitly to the biliary system, ensuring a more comprehensive review of AI applications in this area. Similarly, image synthesis, reconstruction or reproducibility, represented in our study by the “Image quality” group, might also not be well represented, since these models are frequently applied to multiple regions of the body and the ones focused on liver do not reflect the global impact of these types of studies. This limitation suggests that our findings may not capture the full scope of AI-driven image optimization methods. Future studies could consider a more extensive review of multi-organ AI applications and their liver-specific implications.
Our data for 2024 were incomplete (collected up to October 2024); therefore, in all trend figures, we used linear regression on the complete years to estimate the 2024 values and avoid an artificially downward slope. While this approach provided a reasonable estimate, it introduced uncertainty into the 2024 projections; future studies incorporating complete data for 2024 will be necessary to validate our trend observations.
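The extrapolation step described above can be sketched as follows; the yearly article counts are illustrative placeholders, not the review's actual data:

```python
import numpy as np

# Hypothetical yearly article counts for the complete years (illustrative only).
years = np.array([2018, 2019, 2020, 2021, 2022, 2023])
counts = np.array([20, 35, 55, 80, 110, 140])

# Fit a first-degree polynomial (ordinary least-squares line) to the complete years.
slope, intercept = np.polyfit(years, counts, 1)

# Use the fitted line to estimate the full-year 2024 value,
# rather than plotting the partial count and producing a false downward slope.
estimate_2024 = slope * 2024 + intercept
print(round(estimate_2024, 1))
```

Plotting the fitted estimate in place of the partial 2024 count keeps the trend line consistent with the complete years.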

6. Conclusions

This systematic review highlights the main areas of research for liver AI applications. Liver lesions emerged as the primary area of interest (60.30%), while complex models addressing multiple liver structures remain scarce. CT was the most widely used imaging modality (51.54%), benefiting from greater dataset availability, while MRI and ultrasound faced challenges due to variability and limited datasets. CT was widely used for both classification and segmentation studies, whereas MRI was mostly used for classification tasks. For detection and/or segmentation studies, public datasets such as LiTS and 3DIRCAD were the most popular for AI model development; however, their limited diversity and the low rate of external testing (10%) may limit generalizability. Most studies were retrospective (74.77%), with minimal code sharing (10.94%), factors that may reduce reproducibility and hinder clinical adoption. Complex models that integrate multiple AOIs and tasks are still lacking. Future research should prioritize the development of diverse datasets, robust external validation and prospective studies to bridge existing gaps. Greater transparency through open-access code sharing and adherence to reporting guidelines will further support the integration of AI into clinical practice.

Supplementary Materials

The following supporting information can be downloaded at https://www.mdpi.com/article/10.3390/life15020258/s1: Table S1: Literature search strategy; Table S2: Articles with public script.

Author Contributions

Conceptualization: M.D.P.; resources: M.C.G., A.-Ş.B.-N. and R.M.E.; writing—original draft preparation, M.D.P.; supervision, I.G.L. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Data Availability Statement

The data extracted from the included studies are available from the corresponding authors on reasonable request.

Conflicts of Interest

The authors declare no conflicts of interest.

Abbreviations

The following abbreviations are used in this manuscript:
PRISMA: Preferred Reporting Items for Systematic Reviews and Meta-Analyses
CT: Computed tomography
MRI: Magnetic resonance imaging
US: Ultrasound
NAFLD: Non-alcoholic fatty liver disease
HCC: Hepatocellular carcinoma
AI: Artificial intelligence
ML: Machine learning
DL: Deep learning
CNN: Convolutional neural network
PACS: Picture archiving and communication system
CLAIM: Checklist for Artificial Intelligence in Medical Imaging
MINIMAR: MINimum Information for Medical AI Reporting
AOI: Area of interest
NM: Nuclear medicine
M-M: Multi-modality
LiTS: Liver Tumor Segmentation
3DIRCAD: 3D image reconstruction for comparison of algorithm database
SLIVER: Segmentation of the Liver
CHAOS: Challenge-combined healthy abdominal organ segmentation
MSD: Medical Segmentation Decathlon
CLUST: Challenge on Liver Ultrasound Tracking
AMOS: Abdominal multi-organ segmentation
ATLAS: A Tumor and Liver Automatic Segmentation
DLDS: Duke liver dataset
ACT-1K: Abdomen-CT1k
BTCV: Beyond the cranial vault

Appendix A

Figure A1. Stacked bar chart representing modalities across AOIs.
Figure A2. Stacked bar chart representing AI tasks across AOIs.
Figure A3. Stacked bar chart representing AI tasks across modality.
Table A1. List of public datasets found in the detection and/or segmentation articles.
Dataset | Used by No. of Articles | Modality
Liver Tumor Segmentation (LiTS) [23] | 137 | CT
3D image reconstruction for comparison of algorithm database (3DIRCAD) [24] | 97 | CT
Segmentation of the Liver (SLIVER) [25] | 25 | CT
Challenge-combined healthy abdominal organ segmentation (CHAOS) [26] | 23 | CT and MRI
Medical Segmentation Decathlon (MSD) [55] | 13 | CT
The Cancer Imaging Archive (TCIA) [35] | 4 | CT and MRI
Challenge on Liver Ultrasound Tracking (CLUST) [36,37] | 3 | US
Abdominal multi-organ segmentation (AMOS) [31] | 2 | CT and MRI
A Tumor and Liver Automatic Segmentation (ATLAS) [32] | 2 | MRI
Duke liver dataset (DLDS) [33] | 2 | MRI
LIVERHCCSEG [34] | 2 | MRI
Abdomen-CT1k (ACT-1K) [108] | 2 | CT
ISICDM [109] | 1 | CT
Beyond the cranial vault (BTCV) [110] | 1 | CT
KAGGLE zxcv2022 [111] | 1 | CT
Multi-organ Abdominal CT Reference Standard Segmentations [112] | 1 | CT
VISCERALAnatomy [113] | 1 | CT
Table A2. List of main acquisition phases and MRI sequences used in the detection and/or segmentation papers using private datasets.
Dataset Type Used | Used by No. of Articles
No information | 81 (52.25%)
Multiphasic CT/MRI | 33 (21.29%)
Single-phase CT/MRI (venous) | 13 (8.38%)
Multiphase multiparametric MRI | 8 (5.16%)
Single-phase MRI (hepatobiliary) | 6 (3.87%)
Multiparametric MRI (non-contrast) | 5 (3.22%)
Single-sequence MRI T1 (non-contrast) | 3 (1.93%)
Single-phase CT/MRI (arterial) | 2 (1.29%)
Single-phase CT/MRI (delayed) | 2 (1.29%)
Single-sequence MRI T2 | 1 (0.64%)
Single-sequence MRI PDFF | 1 (0.64%)

References

  1. Arias, I.M.; Alter, H.J.; Boyer, J.L.; Cohen, D.E.; Shafritz, D.A.; Thorgeirsson, S.S.; Wolkoff, A.W. The Liver: Biology and Pathobiology; John Wiley & Sons: Hoboken, NJ, USA, 2020. [Google Scholar]
  2. Lorente, S.; Hautefeuille, M.; Sanchez-Cedillo, A. The liver, a functionalized vascular structure. Sci. Rep. 2020, 10, 16194. [Google Scholar] [CrossRef]
  3. Moon, A.M.; Singal, A.G.; Tapper, E.B. Contemporary epidemiology of chronic liver disease and cirrhosis. Clin. Gastroenterol. Hepatol. 2020, 18, 2650–2666. [Google Scholar] [CrossRef]
  4. Estes, C.; Razavi, H.; Loomba, R.; Younossi, Z.; Sanyal, A.J. Modeling the epidemic of nonalcoholic fatty liver disease demonstrates an exponential increase in burden of disease. Hepatology 2018, 67, 123–133. [Google Scholar] [CrossRef]
  5. Caussy, C.; Reeder, S.B.; Sirlin, C.B.; Loomba, R. Noninvasive, Quantitative Assessment of Liver Fat by MRI-PDFF as an Endpoint in NASH Trials. Hepatology 2018, 68, 763–772. [Google Scholar] [CrossRef] [PubMed]
  6. Ferraioli, G.; Soares Monteiro, L.B. Ultrasound-based techniques for the diagnosis of liver steatosis. World J. Gastroenterol. 2019, 25, 6053–6062. [Google Scholar] [CrossRef] [PubMed]
  7. Venkatesh, S.K.; Yin, M.; Ehman, R.L. Magnetic resonance elastography of liver: Technique, analysis, and clinical applications. J. Magn. Reson. Imaging 2013, 37, 544–555. [Google Scholar] [CrossRef] [PubMed]
  8. Gu, J.; Liu, S.; Du, S.; Zhang, Q.; Xiao, J.; Dong, Q.; Xin, Y. Diagnostic value of MRI-PDFF for hepatic steatosis in patients with non-alcoholic fatty liver disease: A meta-analysis. Eur. Radiol. 2019, 29, 3564–3573. [Google Scholar] [CrossRef] [PubMed]
  9. Sung, H.; Ferlay, J.; Siegel, R.L.; Laversanne, M.; Soerjomataram, I.; Jemal, A.; Bray, F. Global cancer statistics 2020: GLOBOCAN estimates of incidence and mortality worldwide for 36 cancers in 185 countries. CA Cancer J. Clin. 2021, 71, 209–249. [Google Scholar] [CrossRef] [PubMed]
  10. Tsilimigras, D.I.; Brodt, P.; Clavien, P.-A.; Muschel, R.J.; D’Angelica, M.I.; Endo, I.; Parks, R.W.; Doyle, M.; de Santibañes, E.; Pawlik, T.M. Liver metastases. Nat. Rev. Dis. Primers 2021, 7, 27. [Google Scholar] [CrossRef]
  11. Barragán-Montero, A.; Javaid, U.; Valdés, G.; Nguyen, D.; Desbordes, P.; Macq, B.; Willems, S.; Vandewinckele, L.; Holmström, M.; Löfman, F. Artificial intelligence and machine learning for medical imaging: A technology review. Phys. Medica 2021, 83, 242–256. [Google Scholar] [CrossRef] [PubMed]
  12. Yamashita, R.; Nishio, M.; Do, R.K.G.; Togashi, K. Convolutional neural networks: An overview and application in radiology. Insights Imaging 2018, 9, 611–629. [Google Scholar] [CrossRef]
  13. Recht, M.P.; Dewey, M.; Dreyer, K.; Langlotz, C.; Niessen, W.; Prainsack, B.; Smith, J.J. Integrating artificial intelligence into the clinical practice of radiology: Challenges and recommendations. Eur. Radiol. 2020, 30, 3576–3584. [Google Scholar] [CrossRef] [PubMed]
  14. Jiang, F.; Jiang, Y.; Zhi, H.; Dong, Y.; Li, H.; Ma, S.; Wang, Y.; Dong, Q.; Shen, H.; Wang, Y. Artificial intelligence in healthcare: Past, present and future. Stroke Vasc. Neurol. 2017, 2. [Google Scholar] [CrossRef] [PubMed]
  15. Wang, L.; Wang, H.; Huang, Y.; Yan, B.; Chang, Z.; Liu, Z.; Zhao, M.; Cui, L.; Song, J.; Li, F. Trends in the application of deep learning networks in medical image analysis: Evolution between 2012 and 2020. Eur. J. Radiol. 2022, 146, 110069. [Google Scholar] [CrossRef]
  16. Tejani, A.S.; Klontzas, M.E.; Gatti, A.A.; Mongan, J.T.; Moy, L.; Park, S.H.; Kahn, C.E., Jr.; Panel, C.U. Checklist for Artificial Intelligence in Medical Imaging (CLAIM): 2024 Update. Radiol. Artif. Intell. 2024, 6, e240300. [Google Scholar] [CrossRef] [PubMed]
  17. Hernandez-Boussard, T.; Bozkurt, S.; Ioannidis, J.P.; Shah, N.H. MINIMAR (MINimum Information for Medical AI Reporting): Developing reporting standards for artificial intelligence in health care. J. Am. Med. Inform. Assoc. 2020, 27, 2011–2015. [Google Scholar] [CrossRef]
  18. Klontzas, M.E.; Gatti, A.A.; Tejani, A.S.; Kahn, C.E., Jr. AI Reporting Guidelines: How to Select the Best One for Your Research. Radiol. Artif. Intell. 2023, 5, e230055. [Google Scholar] [CrossRef] [PubMed]
  19. Nakai, H.; Sakamoto, R.; Kakigi, T.; Coeur, C.; Isoda, H.; Nakamoto, Y. Artificial intelligence-powered software detected more than half of the liver metastases overlooked by radiologists on contrast-enhanced CT. Eur. J. Radiol. 2023, 163, 110823. [Google Scholar] [CrossRef]
  20. Cheng, P.M.; Montagnon, E.; Yamashita, R.; Pan, I.; Cadrin-Chenevert, A.; Perdigon Romero, F.; Chartrand, G.; Kadoury, S.; Tang, A. Deep Learning: An Update for Radiologists. Radiographics 2021, 41, 1427–1445. [Google Scholar] [CrossRef]
  21. Dai, W.; Li, X.; Chiu, W.H.K.; Kuo, M.D.; Cheng, K.-T. Adaptive contrast for image regression in computer-aided disease assessment. IEEE Trans. Med. Imaging 2021, 41, 1255–1268. [Google Scholar] [CrossRef] [PubMed]
  22. Moher, D.; Liberati, A.; Tetzlaff, J.; Altman, D.G.; The PRISMA Group. Preferred reporting items for systematic reviews and meta-analyses: The PRISMA statement. Ann. Intern. Med. 2009, 151, 264–269. [Google Scholar] [CrossRef] [PubMed]
  23. Bilic, P.; Christ, P.; Li, H.B.; Vorontsov, E.; Ben-Cohen, A.; Kaissis, G.; Szeskin, A.; Jacobs, C.; Mamani, G.E.H.; Chartrand, G. The liver tumor segmentation benchmark (lits). Med. Image Anal. 2023, 84, 102680. [Google Scholar] [CrossRef] [PubMed]
  24. Soler, L.; Hostettler, A.; Agnus, V.; Charnoz, A.; Fasquel, J.-B.; Moreau, J.; Osswald, A.-B.; Bouhadjar, M.; Marescaux, J. 3D Image Reconstruction for Comparison of Algorithm Database. 2010. Available online: https://www.ircad.fr/research/data-sets/liver-segmentation-3d-ircadb-01 (accessed on 20 October 2024).
  25. Heimann, T.; Van Ginneken, B.; Styner, M.A.; Arzhaeva, Y.; Aurich, V.; Bauer, C.; Beck, A.; Becker, C.; Beichel, R.; Bekes, G. Comparison and evaluation of methods for liver segmentation from CT datasets. IEEE Trans. Med. Imaging 2009, 28, 1251–1265. [Google Scholar] [CrossRef] [PubMed]
  26. Kavur, A.E.; Gezer, N.S.; Barış, M.; Aslan, S.; Conze, P.-H.; Groza, V.; Pham, D.D.; Chatterjee, S.; Ernst, P.; Özkan, S. CHAOS challenge-combined (CT-MR) healthy abdominal organ segmentation. Med. Image Anal. 2021, 69, 101950. [Google Scholar] [CrossRef]
  27. Nam, D.; Chapiro, J.; Paradis, V.; Seraphin, T.P.; Kather, J.N. Artificial intelligence in liver diseases: Improving diagnostics, prognostics and response prediction. JHEP Rep. 2022, 4, 100443. [Google Scholar] [CrossRef] [PubMed]
  28. Radiya, K.; Joakimsen, H.L.; Mikalsen, K.O.; Aahlin, E.K.; Lindsetmo, R.O.; Mortensen, K.E. Performance and clinical applicability of machine learning in liver computed tomography imaging: A systematic review. Eur. Radiol. 2023, 33, 6689–6717. [Google Scholar] [CrossRef] [PubMed]
  29. Oh, N.; Kim, J.H.; Rhu, J.; Jeong, W.K.; Choi, G.S.; Kim, J.M.; Joh, J.W. Automated 3D liver segmentation from hepatobiliary phase MRI for enhanced preoperative planning. Sci. Rep. 2023, 13, 17605. [Google Scholar] [CrossRef]
  30. Panic, J.; Defeudis, A.; Balestra, G.; Giannini, V.; Rosati, S. Normalization Strategies in Multi-Center Radiomics Abdominal MRI: Systematic Review and Meta-Analyses. IEEE Open J. Eng. Med. Biol. 2023, 4, 67–76. [Google Scholar] [CrossRef] [PubMed]
  31. Ji, Y.; Bai, H.; Ge, C.; Yang, J.; Zhu, Y.; Zhang, R.; Li, Z.; Zhanng, L.; Ma, W.; Wan, X. Amos: A large-scale abdominal multi-organ benchmark for versatile medical image segmentation. Adv. Neural Inf. Process. Syst. 2022, 35, 36722–36732. [Google Scholar]
  32. Quinton, F.; Popoff, R.; Presles, B.; Leclerc, S.; Meriaudeau, F.; Nodari, G.; Lopez, O.; Pellegrinelli, J.; Chevallier, O.; Ginhac, D. A tumour and liver automatic segmentation (atlas) dataset on contrast-enhanced magnetic resonance imaging for hepatocellular carcinoma. Data 2023, 8, 79. [Google Scholar] [CrossRef]
  33. Macdonald, J.A.; Zhu, Z.; Konkel, B.; Mazurowski, M.A.; Wiggins, W.F.; Bashir, M.R. Duke Liver Dataset: A publicly available liver MRI dataset with liver segmentation masks and series labels. Radiol. Artif. Intell. 2023, 5, e220275. [Google Scholar] [CrossRef] [PubMed]
  34. Gross, M.; Arora, S.; Huber, S.; Kücükkaya, A.S.; Onofrey, J.A. LiverHccSeg: A publicly available multiphasic MRI dataset with liver and HCC tumor segmentations and inter-rater agreement analysis. Data Brief 2023, 51, 109662. [Google Scholar] [CrossRef] [PubMed]
  35. Clark, K.; Vendt, B.; Smith, K.; Freymann, J.; Kirby, J.; Koppel, P.; Moore, S.; Phillips, S.; Maffitt, D.; Pringle, M. The Cancer Imaging Archive (TCIA): Maintaining and operating a public information repository. J. Digit. Imaging 2013, 26, 1045–1057. [Google Scholar] [CrossRef] [PubMed]
  36. De Luca, V.; Banerjee, J.; Hallack, A.; Kondo, S.; Makhinya, M.; Nouri, D.; Royer, L.; Cifor, A.; Dardenne, G.; Goksel, O. Evaluation of 2D and 3D ultrasound tracking algorithms and impact on ultrasound-guided liver radiotherapy margins. Med. Phys. 2018, 45, 4986–5003. [Google Scholar] [CrossRef]
  37. De Luca, V.; Benz, T.; Kondo, S.; König, L.; Lübke, D.; Rothlübbers, S.; Somphone, O.; Allaire, S.; Bell, M.L.; Chung, D. The 2014 liver ultrasound tracking benchmark. Phys. Med. Biol. 2015, 60, 5571. [Google Scholar] [CrossRef] [PubMed]
  38. Starmans, M.P.; Miclea, R.L.; Vilgrain, V.; Ronot, M.; Purcell, Y.; Verbeek, J.; Niessen, W.J.; Ijzermans, J.N.; de Man, R.A.; Doukas, M. Automated assessment of T2-Weighted MRI to differentiate malignant and benign primary solid liver lesions in noncirrhotic livers using radiomics. Acad. Radiol. 2024, 31, 870–879. [Google Scholar] [CrossRef] [PubMed]
  39. Urhuț, M.-C.; Săndulescu, L.D.; Streba, C.T.; Mămuleanu, M.; Ciocâlteu, A.; Cazacu, S.M.; Dănoiu, S. Diagnostic Performance of an Artificial Intelligence Model Based on Contrast-Enhanced Ultrasound in Patients with Liver Lesions: A Comparative Study with Clinicians. Diagnostics 2023, 13, 3387. [Google Scholar] [CrossRef]
  40. Deng, X.; Liao, Z. A machine-learning model based on dynamic contrast-enhanced MRI for preoperative differentiation between hepatocellular carcinoma and combined hepatocellular–cholangiocarcinoma. Clin. Radiol. 2024, 79, e817–e825. [Google Scholar] [CrossRef]
  41. Li, C.-Q.; Zheng, X.; Guo, H.-L.; Cheng, M.-Q.; Huang, Y.; Xie, X.-Y.; Lu, M.-D.; Kuang, M.; Wang, W.; Chen, L.-D. Differentiation between combined hepatocellular cholangiocarcinoma and hepatocellular carcinoma: Comparison of diagnostic performance between ultrasomics-based model and CEUS LI-RADS v2017. BMC Med. Imaging 2022, 22, 36. [Google Scholar]
  42. Lei, Y.; Feng, B.; Wan, M.; Xu, K.; Cui, J.; Ma, C.; Sun, J.; Yao, C.; Gan, S.; Shi, J. Predicting microvascular invasion in hepatocellular carcinoma with a CT-and MRI-based multimodal deep learning model. Abdom. Radiol. 2024, 49, 1397–1410. [Google Scholar] [CrossRef]
  43. Liu, J.; Cheng, D.; Liao, Y.; Luo, C.; Lei, Q.; Zhang, X.; Wang, L.; Wen, Z.; Gao, M. Development of a magnetic resonance imaging-derived radiomics model to predict microvascular invasion in patients with hepatocellular carcinoma. Quant. Imaging Med. Surg. 2023, 13, 3948. [Google Scholar] [CrossRef] [PubMed]
  44. Mosconi, C.; Cucchetti, A.; Bruno, A.; Cappelli, A.; Bargellini, I.; De Benedittis, C.; Lorenzoni, G.; Gramenzi, A.; Tarantino, F.P.; Parini, L. Radiomics of cholangiocarcinoma on pretreatment CT can identify patients who would best respond to radioembolisation. Eur. Radiol. 2020, 30, 4534–4544. [Google Scholar] [CrossRef] [PubMed]
  45. Ballı, H.T.; Pişkin, F.C.; Yücel, S.P.; Sözütok, S.; Özgül, D.; Aikimbaev, K. Predictability of the radiological response to Yttrium-90 transarterial radioembolization by dynamic magnetic resonance imaging-based radiomics analysis in patients with intrahepatic cholangiocarcinoma. Diagn. Interv. Radiol. 2024, 30, 193. [Google Scholar] [CrossRef] [PubMed]
  46. He, Y.; Hu, B.; Zhu, C.; Xu, W.; Ge, Y.; Hao, X.; Dong, B.; Chen, X.; Dong, Q.; Zhou, X. A novel multimodal radiomics model for predicting prognosis of resected hepatocellular carcinoma. Front. Oncol. 2022, 12, 745258. [Google Scholar] [CrossRef] [PubMed]
  47. Wang, L.; Yan, D.; Shen, L.; Xie, Y.; Yan, S. Prognostic Value of a CT Radiomics-Based Nomogram for the Overall Survival of Patients with Nonmetastatic BCLC Stage C Hepatocellular Carcinoma after Stereotactic Body Radiotherapy. J. Oncol. 2023, 2023, 1554599. [Google Scholar] [CrossRef] [PubMed]
  48. Yasaka, K.; Akai, H.; Kunimatsu, A.; Abe, O.; Kiryu, S. Deep learning for staging liver fibrosis on CT: A pilot study. Eur. Radiol. 2018, 28, 4578–4585. [Google Scholar] [CrossRef]
  49. Naganawa, S.; Enooku, K.; Tateishi, R.; Akai, H.; Yasaka, K.; Shibahara, J.; Ushiku, T.; Abe, O.; Ohtomo, K.; Kiryu, S. Imaging prediction of nonalcoholic steatohepatitis using computed tomography texture analysis. Eur. Radiol. 2018, 28, 3050–3058. [Google Scholar] [CrossRef]
  50. Zhou, Z.; Xia, T.; Zhang, T.; Du, M.; Zhong, J.; Huang, Y.; Xuan, K.; Xu, G.; Wan, Z.; Ju, S. Prediction of preoperative microvascular invasion by dynamic radiomic analysis based on contrast-enhanced computed tomography. Abdom. Radiol. 2024, 49, 611–624. [Google Scholar] [CrossRef]
  51. Zhu, Z.; Lv, D.; Zhang, X.; Wang, S.-H.; Zhu, G. Deep learning in the classification of stage of liver fibrosis in chronic hepatitis b with magnetic resonance ADC images. Contrast Media Mol. Imaging 2021, 2021, 2015780. [Google Scholar] [CrossRef] [PubMed]
  52. Zhang, L.; Cai, P.; Hou, J.; Luo, M.; Li, Y.; Jiang, X. Radiomics model based on gadoxetic acid disodium-enhanced MR imaging to predict hepatocellular carcinoma recurrence after curative ablation. Cancer Manag. Res. 2021, 13, 2785–2796. [Google Scholar] [CrossRef]
  53. Gotra, A.; Sivakumaran, L.; Chartrand, G.; Vu, K.N.; Vandenbroucke-Menu, F.; Kauffmann, C.; Kadoury, S.; Gallix, B.; de Guise, J.A.; Tang, A. Liver segmentation: Indications, techniques and future directions. Insights Imaging 2017, 8, 377–392. [Google Scholar] [CrossRef]
  54. Wang, J.; Peng, Y.; Jing, S.; Han, L.; Li, T.; Luo, J. A deep-learning approach for segmentation of liver tumors in magnetic resonance imaging using UNet+. BMC Cancer 2023, 23, 1060. [Google Scholar] [CrossRef] [PubMed]
  55. Antonelli, M.; Reinke, A.; Bakas, S.; Farahani, K.; Kopp-Schneider, A.; Landman, B.A.; Litjens, G.; Menze, B.; Ronneberger, O.; Summers, R.M.; et al. The Medical Segmentation Decathlon. Nat. Commun. 2022, 13, 4128. [Google Scholar] [CrossRef] [PubMed]
  56. Wasserthal, J.; Breit, H.-C.; Meyer, M.T.; Pradella, M.; Hinck, D.; Sauter, A.W.; Heye, T.; Boll, D.T.; Cyriac, J.; Yang, S. TotalSegmentator: Robust segmentation of 104 anatomic structures in CT images. Radiol. Artif. Intell. 2023, 5, e230024. [Google Scholar] [CrossRef]
  57. Oh, N.; Kim, J.-H.; Rhu, J.; Jeong, W.K.; Choi, G.-S.; Kim, J.; Joh, J.-W. Comprehensive deep learning-based assessment of living liver donor CT angiography: From vascular segmentation to volumetric analysis. Int. J. Surg. 2024, 110, 6551–6557. [Google Scholar] [CrossRef]
  58. Jeong, J.G.; Choi, S.; Kim, Y.J.; Lee, W.-S.; Kim, K.G. Deep 3D attention CLSTM U-Net based automated liver segmentation and volumetry for the liver transplantation in abdominal CT volumes. Sci. Rep. 2022, 12, 6370. [Google Scholar] [CrossRef]
  59. Park, R.; Lee, S.; Sung, Y.; Yoon, J.; Suk, H.-I.; Kim, H.; Choi, S. Accuracy and efficiency of right-lobe graft weight estimation using deep-learning-assisted CT volumetry for living-donor liver transplantation. Diagnostics 2022, 12, 590. [Google Scholar] [CrossRef] [PubMed]
  60. Koitka, S.; Gudlin, P.; Theysohn, J.M.; Oezcelik, A.; Hoyer, D.P.; Dayangac, M.; Hosch, R.; Haubold, J.; Flaschel, N.; Nensa, F. Fully automated preoperative liver volumetry incorporating the anatomical location of the central hepatic vein. Sci. Rep. 2022, 12, 16479. [Google Scholar] [CrossRef]
  61. Winkel, D.J.; Weikert, T.J.; Breit, H.-C.; Chabin, G.; Gibson, E.; Heye, T.J.; Comaniciu, D.; Boll, D.T. Validation of a fully automated liver segmentation algorithm using multi-scale deep reinforcement learning and comparison versus manual segmentation. Eur. J. Radiol. 2020, 126, 108918. [Google Scholar] [CrossRef]
  62. Zbinden, L.; Catucci, D.; Suter, Y.; Hulbert, L.; Berzigotti, A.; Brönnimann, M.; Ebner, L.; Christe, A.; Obmann, V.C.; Sznitman, R. Automated liver segmental volume ratio quantification on non-contrast T1–Vibe Dixon liver MRI using deep learning. Eur. J. Radiol. 2023, 167, 111047. [Google Scholar] [CrossRef] [PubMed]
  63. Gross, M.; Huber, S.; Arora, S.; Ze’evi, T.; Haider, S.P.; Kucukkaya, A.S.; Iseke, S.; Kuhn, T.N.; Gebauer, B.; Michallek, F. Automated MRI liver segmentation for anatomical segmentation, liver volumetry, and the extraction of radiomics. Eur. Radiol. 2024, 34, 5056–5065. [Google Scholar] [CrossRef]
  64. Sorace, A.G.; Elkassem, A.A.; Galgano, S.J.; Lapi, S.E.; Larimer, B.M.; Partridge, S.C.; Quarles, C.C.; Reeves, K.; Napier, T.S.; Song, P.N.; et al. Imaging for Response Assessment in Cancer Clinical Trials. Semin. Nucl. Med. 2020, 50, 488–504. [Google Scholar] [CrossRef] [PubMed]
  65. Frenette, A.; Morrell, J.; Bjella, K.; Fogarty, E.; Beal, J.; Chaudhary, V. Do diametric measurements provide sufficient and reliable tumor assessment? An evaluation of diametric, areametric, and volumetric variability of lung lesion measurements on computerized tomography scans. J. Oncol. 2015, 2015, 632943. [Google Scholar] [CrossRef]
  66. Schiavon, G.; Ruggiero, A.; Bekers, D.J.; Barry, P.A.; Sleijfer, S.; Kloth, J.; Krestin, G.P.; Schöffski, P.; Verweij, J.; Mathijssen, R.H. The effect of baseline morphology and its change during treatment on the accuracy of Response Evaluation Criteria in Solid Tumours in assessment of liver metastases. Eur. J. Cancer 2014, 50, 972–980. [Google Scholar] [CrossRef] [PubMed]
  67. Suzuki, C.; Torkzad, M.R.; Jacobsson, H.; Åström, G.; Sundin, A.; Hatschek, T.; Fujii, H.; Blomqvist, L. Interobserver and intraobserver variability in the response evaluation of cancer therapy according to RECIST and WHO-criteria. Acta Oncol. 2010, 49, 509–514. [Google Scholar] [CrossRef]
  68. Joskowicz, L.; Szeskin, A.; Rochman, S.; Dodi, A.; Lederman, R.; Fruchtman-Brot, H.; Azraq, Y.; Sosna, J. Follow-up of liver metastases: A comparison of deep learning and RECIST 1.1. Eur. Radiol. 2023, 33, 9320–9327. [Google Scholar] [CrossRef] [PubMed]
  69. Wesdorp, N.J.; Bolhuis, K.; Roor, J.; van Waesberghe, J.-H.T.; van Dieren, S.; van Amerongen, M.J.; Chapelle, T.; Dejong, C.H.; Engelbrecht, M.R.; Gerhards, M.F. The prognostic value of total tumor volume response compared with RECIST1. 1 in patients with initially unresectable colorectal liver metastases undergoing systemic treatment. Ann. Surg. Open 2021, 2, e103. [Google Scholar] [CrossRef] [PubMed]
  70. Fowler, K.J.; Bashir, M.R.; Fetzer, D.T.; Kitao, A.; Lee, J.M.; Jiang, H.; Kielar, A.Z.; Ronot, M.; Kamaya, A.; Marks, R.M. Universal liver imaging lexicon: Imaging atlas for research and clinical practice. Radiographics 2022, 43, e220066. [Google Scholar] [CrossRef]
  71. Frenette, C.; Mendiratta-Lala, M.; Salgia, R.; Wong, R.J.; Sauer, B.G.; Pillai, A. ACG clinical guideline: Focal liver lesions. Off. J. Am. Coll. Gastroenterol. ACG 2024, 119, 1235–1271. [Google Scholar] [CrossRef]
  72. Ye, Y.; Zhang, N.; Wu, D.; Huang, B.; Cai, X.; Ruan, X.; Chen, L.; Huang, K.; Li, Z.-P.; Wu, P.-M. Deep Learning Combined with Radiologist’s Intervention Achieves Accurate Segmentation of Hepatocellular Carcinoma in Dual-Phase Magnetic Resonance Images. BioMed Res. Int. 2024, 2024, 9267554. [Google Scholar] [CrossRef]
  73. Kim, D.W.; Lee, G.; Kim, S.Y.; Ahn, G.; Lee, J.-G.; Lee, S.S.; Kim, K.W.; Park, S.H.; Lee, Y.J.; Kim, N. Deep learning–based algorithm to detect primary hepatic malignancy in multiphase CT of patients at high risk for HCC. Eur. Radiol. 2021, 31, 7047–7057. [Google Scholar] [CrossRef] [PubMed]
  74. Geis, J.R.; Brady, A.P.; Wu, C.C.; Spencer, J.; Ranschaert, E.; Jaremko, J.L.; Langer, S.G.; Borondy Kitts, A.; Birch, J.; Shields, W.F.; et al. Ethics of Artificial Intelligence in Radiology: Summary of the Joint European and North American Multisociety Statement. Radiology 2019, 293, 436–440. [Google Scholar] [CrossRef] [PubMed]
  75. Hamilton, D.G.; Hong, K.; Fraser, H.; Rowhani-Farid, A.; Fidler, F.; Page, M.J. Prevalence and predictors of data and code sharing in the medical and health sciences: Systematic review with meta-analysis of individual participant data. BMJ 2023, 382, e075767. [Google Scholar] [CrossRef] [PubMed]
  76. Qin, W.; Wu, J.; Han, F.; Yuan, Y.; Zhao, W.; Ibragimov, B.; Gu, J.; Xing, L. Superpixel-based and boundary-sensitive convolutional neural network for automated liver segmentation. Phys. Med. Biol. 2018, 63, 095017. [Google Scholar] [CrossRef] [PubMed]
  77. Jin, Q.; Meng, Z.; Sun, C.; Cui, H.; Su, R. RA-UNet: A hybrid deep attention-aware network to extract liver and tumor in CT scans. Front. Bioeng. Biotechnol. 2020, 8, 605132. [Google Scholar] [CrossRef]
  78. Li, C.; Yao, G.; Xu, X.; Yang, L.; Zhang, Y.; Wu, T.; Sun, J. DCSegNet: Deep learning framework based on divide-and-conquer method for liver segmentation. IEEE Access 2020, 8, 146838–146846. [Google Scholar] [CrossRef]
  79. Kim, K.; Kim, S.; Han, K.; Bae, H.; Shin, J.; Lim, J.S. Diagnostic performance of deep learning-based lesion detection algorithm in CT for detecting hepatic metastasis from colorectal cancer. Korean J. Radiol. 2021, 22, 912. [Google Scholar] [CrossRef] [PubMed]
  80. Fehrenbach, U.; Xin, S.; Hartenstein, A.; Auer, T.A.; Dräger, F.; Froböse, K.; Jann, H.; Mogl, M.; Amthauer, H.; Geisel, D. Automatized hepatic tumor volume analysis of neuroendocrine liver metastases by gd-eob mri—A deep-learning model to support multidisciplinary cancer conference decision-making. Cancers 2021, 13, 2726. [Google Scholar] [CrossRef] [PubMed]
  81. Gross, M.; Spektor, M.; Jaffe, A.; Kucukkaya, A.S.; Iseke, S.; Haider, S.P.; Strazzabosco, M.; Chapiro, J.; Onofrey, J.A. Improved performance and consistency of deep learning 3D liver segmentation with heterogeneous cancer stages in magnetic resonance imaging. PLoS ONE 2021, 16, e0260630. [Google Scholar] [CrossRef]
  82. Wang, M.; Fu, F.; Zheng, B.; Bai, Y.; Wu, Q.; Wu, J.; Sun, L.; Liu, Q.; Liu, M.; Yang, Y. Development of an AI system for accurately diagnose hepatocellular carcinoma from computed tomography imaging data. Br. J. Cancer 2021, 125, 1111–1121. [Google Scholar] [CrossRef]
  83. Xue, Z.; Li, P.; Zhang, L.; Lu, X.; Zhu, G.; Shen, P.; Shah, S.A.A.; Bennamoun, M. Multi-modal co-learning for liver lesion segmentation on PET-CT images. IEEE Trans. Med. Imaging 2021, 40, 3531–3542. [Google Scholar] [CrossRef] [PubMed]
  84. Pang, S.; Du, A.; Orgun, M.A.; Wang, Y.; Yu, Z. Tumor attention networks: Better feature selection, better tumor segmentation. Neural Netw. 2021, 140, 203–222. [Google Scholar] [CrossRef] [PubMed]
  85. Han, L.; Chen, Y.; Li, J.; Zhong, B.; Lei, Y.; Sun, M. Liver segmentation with 2.5 D perpendicular UNets. Comput. Electr. Eng. 2021, 91, 107118. [Google Scholar] [CrossRef]
  86. Perez, A.A.; Noe-Kim, V.; Lubner, M.G.; Graffy, P.M.; Garrett, J.W.; Elton, D.C.; Summers, R.M.; Pickhardt, P.J. Deep learning CT-based quantitative visualization tool for liver volume estimation: Defining normal and hepatomegaly. Radiology 2022, 302, 336–342. [Google Scholar] [CrossRef]
  87. Senthilvelan, J.; Jamshidi, N. A pipeline for automated deep learning liver segmentation (PADLLS) from contrast enhanced CT exams. Sci. Rep. 2022, 12, 15794. [Google Scholar] [CrossRef]
  88. Barash, Y.; Klang, E.; Lux, A.; Konen, E.; Horesh, N.; Pery, R.; Zilka, N.; Eshkenazy, R.; Nachmany, I.; Pencovich, N. Artificial intelligence for identification of focal lesions in intraoperative liver ultrasonography. Langenbeck’s Arch. Surg. 2022, 407, 3553–3560. [Google Scholar] [CrossRef]
  89. Zhang, F.; Yan, S.; Zhao, Y.; Gao, Y.; Li, Z.; Lu, X. Iterative convolutional encoder-decoder network with multi-scale context learning for liver segmentation. Appl. Artif. Intell. 2022, 36, 2151186. [Google Scholar] [CrossRef]
  90. Wu, S.; Yu, H.; Li, C.; Zheng, R.; Xia, X.; Wang, C.; Wang, H. A Coarse-to-Fine Fusion Network for Small Liver Tumor Detection and Segmentation: A Real-World Study. Diagnostics 2023, 13, 2504. [Google Scholar] [CrossRef] [PubMed]
  91. Kazami, Y.; Kaneko, J.; Keshwani, D.; Kitamura, Y.; Takahashi, R.; Mihara, Y.; Ichida, A.; Kawaguchi, Y.; Akamatsu, N.; Hasegawa, K. Two-step artificial intelligence algorithm for liver segmentation automates anatomic virtual hepatectomy. J. Hepato-Biliary-Pancreat. Sci. 2023, 30, 1205–1217. [Google Scholar] [CrossRef] [PubMed]
  92. Özcan, F.; Uçan, O.N.; Karaçam, S.; Tunçman, D. Fully automatic liver and tumor segmentation from CT image using an AIM-Unet. Bioengineering 2023, 10, 215. [Google Scholar] [CrossRef]
  93. Fogarollo, S.; Bale, R.; Harders, M. Towards liver segmentation in the wild via contrastive distillation. Int. J. Comput. Assist. Radiol. Surg. 2023, 18, 1143–1149. [Google Scholar] [CrossRef]
  94. Gao, Z.; Zong, Q.; Wang, Y.; Yan, Y.; Wang, Y.; Zhu, N.; Zhang, J.; Wang, Y.; Zhao, L. Laplacian salience-gated feature pyramid network for accurate liver vessel segmentation. IEEE Trans. Med. Imaging 2023, 42, 3059–3068. [Google Scholar] [CrossRef] [PubMed]
  95. Wang, Q.; Chen, A.; Xue, Y. Liver CT Image Recognition Method Based on Capsule Network. Information 2023, 14, 183. [Google Scholar] [CrossRef]
  96. He, Q.; Duan, Y.; Yang, Z.; Wang, Y.; Yang, L.; Bai, L.; Zhao, L. Context-aware augmentation for liver lesion segmentation: Shape uniformity, expansion limit and fusion strategy. Quant. Imaging Med. Surg. 2023, 13, 5043. [Google Scholar] [CrossRef]
  97. Liu, H.; Yang, J.; Jiang, C.; He, S.; Fu, Y.; Zhang, S.; Hu, X.; Fang, J.; Ji, W. S2DA-Net: Spatial and spectral-learning double-branch aggregation network for liver tumor segmentation in CT images. Comput. Biol. Med. 2024, 174, 108400. [Google Scholar] [CrossRef] [PubMed]
  98. Shao, J.; Luan, S.; Ding, Y.; Xue, X.; Zhu, B.; Wei, W. Attention Connect Network for Liver Tumor Segmentation from CT and MRI Images. Technol. Cancer Res. Treat. 2024, 23, 15330338231219366. [Google Scholar] [CrossRef]
  99. Ou, J.; Jiang, L.; Bai, T.; Zhan, P.; Liu, R.; Xiao, H. ResTransUnet: An effective network combined with Transformer and U-Net for liver segmentation in CT scans. Comput. Biol. Med. 2024, 177, 108625. [Google Scholar] [CrossRef]
  100. Zhou, J.; Xia, Y.; Xun, X.; Yu, Z. Deep Learning-Based Detect-Then-Track Pipeline for Treatment Outcome Assessments in Immunotherapy-Treated Liver Cancer. J. Imaging Inform. Med. 2024, 1–14. [Google Scholar] [CrossRef] [PubMed]
  101. Yu, W.; Wang, M.; Zhang, Y.; Zhao, L. Reciprocal cross-modal guidance for liver lesion segmentation from multiple phases under incomplete overlap. Biomed. Signal Process. Control 2024, 88, 105561. [Google Scholar] [CrossRef]
  102. Patel, N.; Celaya, A.; Eltaher, M.; Glenn, R.; Savannah, K.B.; Brock, K.K.; Sanchez, J.I.; Calderone, T.L.; Cleere, D.; Elsaiey, A. Training robust T1-weighted magnetic resonance imaging liver segmentation models using ensembles of datasets with different contrast protocols and liver disease etiologies. Sci. Rep. 2024, 14, 20988. [Google Scholar] [CrossRef]
  103. Le, Q.A.; Pham, X.L.; van Walsum, T.; Dao, V.H.; Le, T.L.; Franklin, D.; Moelker, A.; Le, V.H.; Trung, N.L.; Luu, M.H. Precise ablation zone segmentation on CT images after liver cancer ablation using semi-automatic CNN-based segmentation. Med. Phys. 2024, 51, 8882–8899. [Google Scholar] [CrossRef] [PubMed]
  104. Chen, W.; Zhao, L.; Bian, R.; Li, Q.; Zhao, X.; Zhang, M. Compensation of small data with large filters for accurate liver vessel segmentation from contrast-enhanced CT images. BMC Med. Imaging 2024, 24, 129. [Google Scholar] [CrossRef]
  105. Quinton, F.; Presles, B.; Leclerc, S.; Nodari, G.; Lopez, O.; Chevallier, O.; Pellegrinelli, J.; Vrigneaud, J.-M.; Popoff, R.; Meriaudeau, F. Navigating the nuances: Comparative analysis and hyperparameter optimisation of neural architectures on contrast-enhanced MRI for liver and liver tumour segmentation. Sci. Rep. 2024, 14, 3522. [Google Scholar] [CrossRef] [PubMed]
  106. Cheng, D.; Zhou, Z.; Zhang, J. EG-UNETR: An edge-guided liver tumor segmentation network based on cross-level interactive transformer. Biomed. Signal Process. Control 2024, 97, 106739. [Google Scholar] [CrossRef]
  107. Zhou, G.-Q.; Zhao, F.; Yang, Q.-H.; Wang, K.-N.; Li, S.; Zhou, S.; Lu, J.; Chen, Y. Tagnet: A transformer-based axial guided network for bile duct segmentation. Biomed. Signal Process. Control 2023, 86, 105244. [Google Scholar] [CrossRef]
108. Ma, J.; Zhang, Y.; Gu, S.; Zhu, C.; Ge, C.; Zhang, Y.; An, X.; Wang, C.; Wang, Q.; Liu, X. AbdomenCT-1K: Is abdominal organ segmentation a solved problem? IEEE Trans. Pattern Anal. Mach. Intell. 2021, 44, 6695–6714. [Google Scholar] [CrossRef] [PubMed]
  109. ISICDM2024. Available online: https://www.imagecomputing.org/isicdm2024/index.html#/ (accessed on 4 February 2025).
110. Landman, B.; Xu, Z.; Iglesias, J.; Styner, M.; Langerak, T.; Klein, A. MICCAI multi-atlas labeling beyond the cranial vault—workshop and challenge. In Proceedings of the MICCAI Multi-Atlas Labeling Beyond Cranial Vault—Workshop and Challenge, Munich, Germany, 2015; p. 12. [Google Scholar]
  111. Zxcv2022. CT Liver Liver Segmentation Dataset for Small Sample Examples. Available online: https://www.kaggle.com/datasets/zxcv2022/digital-medical-images-for--download-resource/data (accessed on 21 December 2024).
  112. Gibson, E.; Giganti, F.; Hu, Y.; Bonmati, E.; Bandula, S.; Gurusamy, K.; Davidson, B.; Pereira, S.P.; Clarkson, M.J.; Barratt, D.C. Automatic multi-organ segmentation on abdominal CT with dense V-networks. IEEE Trans. Med. Imaging 2018, 37, 1822–1834. [Google Scholar] [CrossRef]
  113. Langs, G.; Hanbury, A.; Menze, B.; Müller, H. VISCERAL: Towards large data in medical imaging—Challenges and directions. In Proceedings of the Medical Content-Based Retrieval for Clinical Decision Support: Third MICCAI International Workshop, MCBR-CDS 2012, Nice, France, 1 October 2012; pp. 92–98. [Google Scholar]
Figure 1. Simplified hierarchical representation of AI subcategories.
Figure 2. PRISMA flow chart.
Figure 3. Tree chart representing AOIs.
Figure 4. Line chart representing AOIs in 2018–2024.
Figure 5. Tree map representing modality (M-M = multi-modality, NM = nuclear medicine).
Figure 6. Line chart representing modality use, 2018–2024.
Figure 7. Tree map representing AI tasks.
Figure 8. Line chart representing AI tasks, 2018–2024.
Figure 9. Line chart representing detection and/or segmentation AI tasks, 2018–2024.
Table 1. Numbers of articles by AOI and by the complex-AOI subcategory.

| AOI | No. | % of Total | Complex AOI | No. | % of Complex | % of Total |
|---|---|---|---|---|---|---|
| Lesions | 743 | 60.30 | Parenchyma and lesion | 72 | 93.50 | 5.84 |
| Parenchyma | 372 | 30.19 | Parenchyma and vessels | 3 | 3.89 | 0.24 |
| Complex | 77 | 6.25 | Lesions and vessels | 1 | 1.29 | 0.08 |
| Vessels | 26 | 2.11 | Parenchyma, lesion, vessels and biliary | 1 | 1.29 | 0.08 |
| Biliary | 14 | 1.13 | | | | |
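The two percentage columns in Tables 1–4 use different denominators: "% of Complex" is taken over the 77 complex-AOI studies, while "% of Total" is taken over all 1232 eligible articles. As a worked check, the following sketch recomputes the complex-AOI rows of Table 1 from the raw counts (the counts themselves come from the table; only the recomputation is ours):

```python
# Recompute the "% of Complex" and "% of Total" columns of Table 1.
# TOTAL_STUDIES = 1232 is the number of eligible articles in the review.
TOTAL_STUDIES = 1232

complex_aoi = {
    "Parenchyma and lesion": 72,
    "Parenchyma and vessels": 3,
    "Lesions and vessels": 1,
    "Parenchyma, lesion, vessels and biliary": 1,
}
complex_total = sum(complex_aoi.values())  # 77, matching the "Complex" row

for name, n in complex_aoi.items():
    pct_of_complex = 100 * n / complex_total
    pct_of_total = 100 * n / TOTAL_STUDIES
    print(f"{name}: {pct_of_complex:.2f}% of complex, {pct_of_total:.2f}% of total")
```

Note that 72/77 evaluates to 93.506…%, so the table's 93.50 (and similarly 3.89 and 1.29) appears to truncate rather than round the second decimal.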
Table 2. Numbers of articles by modality and by the multi-modality subcategory.

| Modality | No. | % of Total | Multi-Modality | No. | % of Multi-Modality | % of Total |
|---|---|---|---|---|---|---|
| CT | 635 | 51.54 | CT and MRI | 36 | 87.80 | 2.92 |
| MRI | 335 | 27.19 | CT and US | 2 | 4.87 | 0.16 |
| US | 189 | 15.34 | CT, US and MRI | 2 | 4.87 | 0.16 |
| Multi-Modality | 41 | 3.32 | MRI and US | 1 | 2.43 | 0.08 |
| Nuclear Medicine | 21 | 1.70 | | | | |
| X-Ray | 5 | 0.40 | | | | |
Table 3. Numbers of articles by AI task and by the AI multi-task subcategory.

| AI Task | No. | % of Total | AI Multi-Task | No. | % of Multi-Task | % of Total |
|---|---|---|---|---|---|---|
| Classification | 723 | 58.68 | Detection and/or segmentation and classification | 61 | 93.84 | 4.95 |
| Detection and/or segmentation | 329 | 26.70 | Detection and classification | 3 | 4.61 | 0.24 |
| Image optimization | 115 | 9.33 | Detection, segmentation and classification | 1 | 1.53 | 0.08 |
| Multi-task | 65 | 5.27 | | | | |
Table 4. Numbers of detection and/or segmentation articles by AOI and by the complex-AOI subcategory.

| Detection and/or Segmentation AOI | No. | % of D&S Studies | % of Total | Detection and/or Segmentation Complex AOI | No. | % of Complex | % of Total |
|---|---|---|---|---|---|---|---|
| Lesions | 128 | 38.90 | 10.38 | Parenchyma and lesions | 66 | 92.95 | 5.35 |
| Parenchyma | 104 | 31.61 | 8.44 | Parenchyma and vessels | 3 | 4.22 | 0.24 |
| Complex | 71 | 21.58 | 5.76 | Parenchyma, lesions, vessels and biliary | 1 | 1.40 | 0.08 |
| Vessels | 25 | 7.59 | 2.02 | Lesions and vessels | 1 | 1.40 | 0.08 |
| Biliary | 1 | 0.30 | 0.08 | | | | |
Table 5. Detection and/or segmentation dataset types and the most commonly used public datasets.

| Dataset Type | No. | % of D&S Studies | Main Public Datasets | No. | % of D&S Studies |
|---|---|---|---|---|---|
| Public | 158 | 48.02 | LiTS [23] | 137 | 41.64 |
| Private | 99 | 30.09 | 3DIRCADb [24] | 97 | 29.48 |
| Public and private | 56 | 17.02 | SLIVER07 [25] | 25 | 7.59 |
| | | | CHAOS [26] | 23 | 6.99 |
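The "Dataset Type" split in Table 5 assigns each of the 329 detection and/or segmentation studies to exactly one of three mutually exclusive categories, based on the provenance of all datasets the study used. A minimal sketch of that categorization logic follows; the study records are hypothetical and only the three-way classification mirrors the table:

```python
# Illustrative tally of the Table 5 "Dataset Type" split. Each study is
# classified by the provenance of every dataset it used; the example
# records below are invented for demonstration.
from collections import Counter

def dataset_type(sources):
    """Map a study's dataset provenance flags to one Table 5 category."""
    kinds = set(sources)
    if kinds == {"public"}:
        return "Public"
    if kinds == {"private"}:
        return "Private"
    return "Public and private"

studies = [
    ["public"],             # e.g. trained and tested on LiTS only
    ["public", "public"],   # e.g. LiTS plus 3DIRCADb
    ["private"],            # single-institution cohort
    ["public", "private"],  # public training set, in-house test set
]
counts = Counter(dataset_type(s) for s in studies)
print(counts)  # 2 public-only, 1 private-only, 1 mixed
```

Under this scheme the abstract's figures follow directly: 158/329 public-only studies give 48.02%, and the 91 studies (27.65%) that used only one public dataset are a subset of that group.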