Applied Sciences
  • Article
  • Open Access

12 February 2024

Adaptive Method for Exploring Deep Learning Techniques for Subtyping and Prediction of Liver Disease

1 Department of Radiology, College of Medicine, Jazan University, Jazan 45142, Saudi Arabia
2 Department of Computer Science, College of Computer Science & Information Technology, Jazan University, Jazan 45142, Saudi Arabia
3 Diagnostic Radiography Technology Department, Faculty of Nursing and Health Sciences, Jazan University, Jazan 45142, Saudi Arabia
4 Department of Artificial Intelligence & Data Science, AISSMS Institute of Information Technology, Pune 411001, Maharashtra, India

Abstract

The term “liver disease” refers to a broad category of disorders affecting the liver, including common ailments such as hepatitis, cirrhosis, and liver cancer. Accurate and early diagnosis is an urgent need for the prediction and management of liver disease. Conventional diagnostic techniques, such as radiological imaging, CT scans, and liver function tests, are often time-consuming and prone to inaccuracies in several cases. Machine learning (ML) and deep learning (DL) techniques offer an efficient approach to diagnosing diseases across a wide range of medical fields. These methods can handle tasks such as image recognition, analysis, and classification because they are trained on large datasets and learn to identify patterns that might not be perceived by humans. This paper presents an evaluation of the performance of various DL models on the subtyping and prognosis of liver disease. We propose a novel approach, termed CNN+LSTM, which integrates a convolutional neural network (CNN) with a long short-term memory (LSTM) network. The results of the study show that ML and DL can be used to improve the diagnosis and prognosis of liver disease. The CNN+LSTM model achieves a higher accuracy of 98.73% compared to other models such as the CNN, recurrent neural network (RNN), and LSTM. The proposed CNN+LSTM model yields an accuracy of 98.73%, a precision of 99%, a recall of 98%, an F1 score of 98%, and an AUC-ROC (Area Under the Receiver Operating Characteristic Curve) of 99%. The CNN+LSTM model therefore shows robustness in predicting liver ailments with an accurate diagnosis and prognosis.

1. Introduction

Liver disease is an umbrella term for a number of conditions that can damage the liver, one of the body’s most vital organs; these include hepatitis, cirrhosis, and cancer. These illnesses place a significant burden on the healthcare system and on the lives of millions of individuals, as shown in Figure 1. Early diagnosis is essential to ensure that patients receive the most effective treatment possible [].
Figure 1. Healthy and unhealthy liver affected with disease.
The liver is a vital organ involved in many metabolic processes, such as bile production and protein synthesis. Other types of liver diseases include non-alcoholic fatty liver disease (NAFLD), alcoholic liver disease, hemochromatosis, and autoimmune liver diseases. These conditions can lead to significant morbidity and mortality []. The increasing number of liver diseases has been attributed to a range of environmental, genetic, and lifestyle factors. One contributing factor is the rising prevalence of excessive alcohol consumption and poor dietary habits, and the interaction of environmental, genetic, and lifestyle factors worsens these disorders. In addition to posing a direct danger to individuals’ health, liver diseases may also have significant economic consequences. The financial and therapeutic costs associated with these diseases are substantial, straining the healthcare system and burdening families and individuals. Furthermore, there are additional effects, such as reduced productivity, increased disability, and reduced quality of life, which may have significant consequences for both society and the individuals involved [,].
Developing effective methods for diagnosing, preventing, and treating liver diseases is necessary, since these methods are crucial elements of an integrated approach [] to managing liver disease. Modifications to an individual’s lifestyle, together with public health measures such as hepatitis B and C vaccinations, may help decrease the severity and incidence of the disease [].
Liver diseases have become a serious global health issue due to their increasing frequency as well as the severity of their symptoms. The objective of this examination is to provide a comprehensive analysis of the impact caused by these ailments by summarizing the research conducted by scholars in the work []. By learning more about the numerous risk factors and epidemiology of liver disorders, as mentioned by the researchers [], there is scope to develop effective therapies that will aid in reducing the impact of these illnesses and improving the quality of life for those who suffer from them.
In most cases, a combination of imaging and histology tests can confirm a diagnosis of liver disease, and this approach is widely used. However, these procedures are time-consuming and can be inaccurate. This is why new approaches are so important: they can improve the precision of these procedures, which in turn aids prognosis prediction for patients.
ML and DL have gained immense popularity in the field of healthcare []. These methods rely on artificial neural networks, which are specifically designed to learn from large datasets. DL, in particular, is highly beneficial for complex tasks such as predicting time series and analyzing images. In the context of liver disease diagnosis and prognosis, DL can be used to improve accuracy by identifying intricate patterns within the data []. This research aims to investigate the application of DL techniques in diagnosing and predicting liver disease.
To achieve better results, we propose CNN+LSTM, a novel deep learning model that combines CNN and LSTM networks; these two networks handle image and sequential data, respectively. The goal of the CNN+LSTM model is to enhance the accuracy and predictive capability of liver disease analysis by capturing temporal and spatial dependencies in the data. The CNN+LSTM model is evaluated on a well-defined set of liver disease cases using several metrics: accuracy, precision, recall, F1 score, and AUC-ROC. AUC-ROC is widely used as an evaluation metric in ML, especially for binary classification tasks, and it can also be extended to multi-class classification problems. The evaluation results are used to determine the model’s effectiveness in predicting patient prognosis and subtyping liver disease.
This study’s results may have profound implications for the future of liver disease treatment. If the model proves to be useful, the process of diagnosis will improve both its efficiency and accuracy. A precise prediction of a patient’s future course of disease is essential for clinicians to develop customized treatment plans and improve patient outcomes. This study utilizes the advanced capabilities of DL to improve the field of liver disease diagnosis, prognosis, and treatment, resulting in benefits for patients, society, and healthcare professionals.

3. Proposed Method

To effectively use ML and DL for complicated medical and diagnostic tasks, we have formulated a comprehensive approach to handle the complexities of medical imaging and diagnostic processes, specifically for liver disease. This section is arranged into the following sub-sections: dataset used, pre-processing, applied algorithms, the proposed (CNN+LSTM) model, and model training details including hyperparameter tuning.

3.1. Applied Dataset

The Kaggle database of liver disease patients provides valuable information for developers and researchers working on the condition. It contains ten variables, such as total bilirubin, age, gender, albumin, total proteins, direct bilirubin, SGPT, alkaline phosphatase, and the A/G ratio. These variables provide essential details on many aspects of the liver’s condition, functioning, and possible abnormalities. Through the analysis of these data using machine learning or statistical methodologies, researchers and developers can identify patterns, construct predictive models, and make valuable contributions to the diagnosis, treatment, and understanding of liver disorders. In addition to these ten variables, we evaluated our deep learning methods with metrics such as accuracy, precision, recall, F1 score, and AUC-ROC, and we performed hyperparameter tuning over the optimizer, learning rate, batch size, number of epochs, CNN architecture, number of LSTM units, dropout rates of the CNN and LSTM, and loss function. The data were gathered from 30,000 individuals, and experts were notified of the information []. The information collected from the Kaggle database was utilized to train DL systems to identify individuals with liver disease. The dataset’s size and well-labeled nature make it an ideal resource for developers and researchers: it holds information on 30,000 individuals, which is large enough to be statistically meaningful.
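A minimal loading-and-inspection sketch for a dataset of this kind is shown below. The file name and exact column labels are assumptions standing in for the actual Kaggle download; the code only illustrates how the variables described above could be examined with pandas.

```python
import pandas as pd

# Load the Kaggle liver disease dataset (file name is an assumption; adjust it
# to the CSV actually downloaded from Kaggle).
df = pd.read_csv("liver_disease_patients.csv")

# Inspect the clinical variables described above (column names may differ
# slightly in the actual file).
print(df.shape)                      # expected on the order of 30,000 rows
print(df.columns.tolist())           # total bilirubin, age, gender, albumin, ...
print(df.describe(include="all"))    # summary statistics per variable
```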

3.2. Data Pre-Processing

Missing Value Imputation: The method of imputing missing values involves replacing the absent data points with approximated values. The dataset includes a feature referred to as “Total Bilirubin”, which is absent for certain cases. The process of imputing missing values in this feature involves utilizing “mean imputation”, whereby the missing values are substituted with the mean value derived from the available non-missing values.
Normalization of Data: Data normalization is a method applied to rescale the data so that all features lie in a comparable range and magnitude. Several approaches are available for this purpose, including min-max normalization. The dataset includes the variable “Age”, which has a distinctly different magnitude from the remaining features in the dataset; this feature was therefore normalized using min-max normalization.
Selection of Features: The process of identifying the most significant characteristics for a given assignment involves the utilization of a technique known as “recursive feature elimination”. The dataset includes the variable “Gender”. There is no observed correlation between this attribute and the target variable. Therefore, the aforementioned feature was excluded from the dataset in order to improve the performance of the proposed model.
Data Transformation: This method involves converting the data into a format that is better suited to deep learning algorithms. The proposed model employs discretization for this purpose. The dataset includes the variable “Total Bilirubin”, which is continuous; this feature was therefore discretized into a predetermined number of bins.
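The four pre-processing steps above can be illustrated with a short scikit-learn sketch. The file name, the target column name (“Target”), and the numbers of selected features and bins are assumptions for illustration only; the steps themselves (mean imputation, min-max normalization, recursive feature elimination, and discretization) follow the description above.

```python
import pandas as pd
from sklearn.impute import SimpleImputer
from sklearn.preprocessing import MinMaxScaler, KBinsDiscretizer
from sklearn.feature_selection import RFE
from sklearn.linear_model import LogisticRegression

# Assumed file and column names standing in for the actual Kaggle dataset fields.
df = pd.read_csv("liver_disease_patients.csv")

# 1. Mean imputation of missing "Total Bilirubin" values.
imputer = SimpleImputer(strategy="mean")
df[["Total Bilirubin"]] = imputer.fit_transform(df[["Total Bilirubin"]])

# 2. Min-max normalization of "Age" to the [0, 1] range.
scaler = MinMaxScaler()
df[["Age"]] = scaler.fit_transform(df[["Age"]])

# 3. Recursive feature elimination to rank features; weakly related attributes
#    such as "Gender" can then be dropped.
X = df.drop(columns=["Target"]).select_dtypes("number")
X = X.fillna(X.mean())                      # RFE requires complete numeric inputs
y = df["Target"]
selector = RFE(LogisticRegression(max_iter=1000), n_features_to_select=5)
selector.fit(X, y)
selected_features = X.columns[selector.support_].tolist()

# 4. Discretize the continuous "Total Bilirubin" feature into a fixed number of bins.
binner = KBinsDiscretizer(n_bins=5, encode="ordinal", strategy="uniform")
df["Total Bilirubin (binned)"] = binner.fit_transform(df[["Total Bilirubin"]])
```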

3.3. Applied Algorithm

Convolutional Neural Networks (CNNs): CNNs are mostly employed in the field of computer vision, namely for tasks involving image and video recognition. However, they may also be effectively utilized for sequential data analysis, including text processing. CNNs are specifically built to learn and analyze spatial hierarchies of features from input data, which enables them to efficiently capture and represent patterns and structures within the data.
The essential components of CNNs encompass three types of layers, namely convolutional, pooling, and fully connected layers. Convolutional layers apply filters to the input data to extract local features and generate feature maps. The pooling layers reduce the spatial dimensions of the feature maps while retaining crucial information. The fully connected layers combine the extracted features and produce predictions based on the learned representations. The output of a convolutional layer is obtained by the convolution operation,
$H_{ij} = (K * X)_{ij} = \sum_{m=1}^{M} \sum_{n=1}^{N} K_{m,n} \, X_{i+m-1,\, j+n-1}$
where $H_{ij}$ is the element at position $(i, j)$ of the output feature map, $K$ is the CNN kernel, $X$ is the input feature map, and $M$ and $N$ are the dimensions of the kernel.
The output of the convolution is then passed through an activation function called ReLU (Rectified Linear Unit), which introduces non-linearity,
$Y = \mathrm{ReLU}(H) = \max(0, H)$
where $Y$ is the output after the activation function is applied to the feature map $H$.
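A minimal NumPy sketch of the convolution and ReLU operations defined above is given here for illustration; it is not the library implementation used in the study.

```python
import numpy as np

def conv2d(X, K):
    """Valid 2D convolution (cross-correlation form) of input X with kernel K,
    following the formula above: H[i, j] = sum_m sum_n K[m, n] * X[i+m, j+n]."""
    M, N = K.shape
    rows, cols = X.shape[0] - M + 1, X.shape[1] - N + 1
    H = np.zeros((rows, cols))
    for i in range(rows):
        for j in range(cols):
            H[i, j] = np.sum(K * X[i:i + M, j:j + N])
    return H

def relu(H):
    """Element-wise ReLU activation: max(0, H)."""
    return np.maximum(0, H)

# Toy example: a 5x5 input feature map and a 3x3 edge-like kernel.
X = np.arange(25, dtype=float).reshape(5, 5)
K = np.array([[1.0, 0.0, -1.0]] * 3)
Y = relu(conv2d(X, K))
print(Y)
```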
Recurrent Neural Networks (RNNs): RNNs are designed to efficiently handle sequential data by capturing temporal dependencies and maintaining information across time steps. These techniques are particularly advantageous in tasks that require the examination of sequential patterns, such as natural language processing (NLP), speech recognition, and time series analysis. Recurrent neural networks are characterized by the presence of feedback connections, which transmit information from one time step to the next; this stands in contrast to the unidirectional flow of information in feed-forward neural networks. This architectural design allows RNNs to maintain an internal memory, enhancing their ability to capture and represent long-term dependencies in the data.
The essential component of an RNN is the recurrent hidden layer, responsible for processing sequential data. Furthermore, each hidden unit within the recurrent layer is equipped with a recurrent connection, enabling it to receive its own prior output as an input. This facilitates the network’s ability to retain and integrate knowledge from preceding iterations, hence enhancing its computational process in the present iteration. The computation within an RNN can be denoted by the subsequent mathematical expressions.
$h_t = f(W_x x_t + W_h h_{t-1} + b_h)$
$y_t = f(W_y h_t + b_y)$
where $h_t$ is the hidden state at time $t$, $x_t$ is the input at time $t$, $W_x$ is the weight matrix for the input connections, $W_h$ is the weight matrix for the recurrent connections, $W_y$ is the weight matrix for the output connections, $b_h$ and $b_y$ are bias terms, and $f$ is the activation function.
RNNs are suitable for emotion recognition tasks that involve sequential data, such as analyzing temporal patterns in speech or text data. However, standard RNNs suffer from the vanishing gradient problem, which limits their ability to capture long-term dependencies.
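A minimal NumPy sketch of one recurrent step, following the two equations above, is shown here; the dimensions and random weights are purely illustrative.

```python
import numpy as np

def rnn_step(x_t, h_prev, W_x, W_h, W_y, b_h, b_y):
    """One recurrent step: h_t = f(W_x x_t + W_h h_{t-1} + b_h), y_t = f(W_y h_t + b_y),
    with tanh used as the activation f for illustration."""
    h_t = np.tanh(W_x @ x_t + W_h @ h_prev + b_h)
    y_t = np.tanh(W_y @ h_t + b_y)
    return h_t, y_t

# Toy dimensions: 4 input features, 8 hidden units, 2 outputs.
rng = np.random.default_rng(0)
W_x, W_h = rng.normal(size=(8, 4)), rng.normal(size=(8, 8))
W_y = rng.normal(size=(2, 8))
b_h, b_y = np.zeros(8), np.zeros(2)

h = np.zeros(8)                          # initial hidden state
for x in rng.normal(size=(5, 4)):        # a sequence of 5 time steps
    h, y = rnn_step(x, h, W_x, W_h, W_y, b_h, b_y)
print(y)
```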
Long Short-Term Memory (LSTM): LSTM networks are a type of recurrent neural network that can capture dependencies over time in serial data and solve the issue of vanishing gradients. Long short-term memory networks add a memory cell and several gating mechanisms to regulate the flow of data through the network. The LSTM architecture consists of three main gates: input, forget, and output. They regulate the information flow by updating or discarding the contents of the memory cell depending on the current and previous inputs. This permits LSTMs to learn long-term dependencies by effectively retaining relevant information over many time steps.
$f_t = \sigma(W_{xf} x_t + W_{hf} h_{t-1} + b_f)$
$i_t = \sigma(W_{xi} x_t + W_{hi} h_{t-1} + b_i)$
$o_t = \sigma(W_{xo} x_t + W_{ho} h_{t-1} + b_o)$
$c_t = f_t \odot c_{t-1} + i_t \odot \tanh(W_{xc} x_t + W_{hc} h_{t-1} + b_c)$
$h_t = o_t \odot \tanh(c_t)$
where $f_t$, $i_t$, and $o_t$ are the forget, input, and output gates, respectively; $c_t$ is the updated state of the memory cell and $h_t$ is the hidden state at time $t$; $x_t$ is the input at time $t$; $W_{xf}$, $W_{hf}$, $W_{xi}$, $W_{hi}$, $W_{xo}$, $W_{ho}$, $W_{xc}$, and $W_{hc}$ are the weight matrices; $b_f$, $b_i$, $b_o$, and $b_c$ are the biases; and $\sigma$ denotes the sigmoid activation function. The LSTM network forms another part of the DL model and is composed of memory blocks, i.e., collections of recurrently connected subnets. Memory blocks consist of an input gate, a forget gate, an output gate, and a memory cell. In contrast to the conventional recurrent unit, which overwrites its contents at each iteration, the LSTM unit uses these gates to determine whether or not to retain the existing memory. LSTM thus explicitly avoids the long-term dependency problem. In contrast to the single neural network layer found in simple recurrent neural networks, the LSTM architecture comprises four interacting layers, wherein each connection carries a complete vector from the output of one node to the inputs of others.
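The gate equations above can be made concrete with a minimal NumPy sketch of a single LSTM step. The dictionary layout of the weights and the toy dimensions are assumptions for illustration only.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def lstm_step(x_t, h_prev, c_prev, W, b):
    """One LSTM step following the gate equations above. W and b are dictionaries
    of weight matrices and biases keyed by gate name (an illustrative layout)."""
    f_t = sigmoid(W["xf"] @ x_t + W["hf"] @ h_prev + b["f"])   # forget gate
    i_t = sigmoid(W["xi"] @ x_t + W["hi"] @ h_prev + b["i"])   # input gate
    o_t = sigmoid(W["xo"] @ x_t + W["ho"] @ h_prev + b["o"])   # output gate
    c_t = f_t * c_prev + i_t * np.tanh(W["xc"] @ x_t + W["hc"] @ h_prev + b["c"])
    h_t = o_t * np.tanh(c_t)                                    # hidden state
    return h_t, c_t

# Toy dimensions: 4 input features, 8 hidden units.
rng = np.random.default_rng(0)
W = {k: rng.normal(size=(8, 4)) if k.startswith("x") else rng.normal(size=(8, 8))
     for k in ["xf", "hf", "xi", "hi", "xo", "ho", "xc", "hc"]}
b = {k: np.zeros(8) for k in ["f", "i", "o", "c"]}

h, c = np.zeros(8), np.zeros(8)
h, c = lstm_step(rng.normal(size=4), h, c, W, b)
print(h)
```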
LSTM layers are especially designed to capture and represent long-term relationships in sequential data. They can retain knowledge from previous inputs and use it to generate estimates for future inputs. Unlike conventional feed-forward neural networks, which require fixed-size inputs, LSTM layers can handle input sequences of varying lengths and automatically adjust to and learn from them. This mechanism facilitates the propagation of significant gradient information over many time steps, which is crucial for the successful training of deep recurrent neural networks. LSTMs can effectively acquire and represent information across many time scales when modelling sequential data. The gating mechanism inherent in LSTM layers enables them to flexibly regulate how much past information they keep and how much new information they admit, successfully capturing both short-term and long-term dependencies in the data. This ability is crucial in tasks where the significance of contextual information varies across time periods. LSTM layers are suitable for tasks involving multi-dimensional input data, such as disease identification, image recognition, video analysis, and audio processing. DL models can readily capture temporal relationships in the input data by using LSTM layers together with convolutional or fully connected layers.
LSTMs have been applied in various NLP tasks successfully, including sentiment analysis and recognition. Their ability to capture long-term dependencies makes them suitable for analyzing sequential data and extracting meaningful features related to emotions.
Proposed (CNN and LSTM) Model: The proposed approach involves the utilization of a CNN+LSTM model, which integrates the functionalities of both CNN and LSTM networks, for the purpose of liver disease diagnosis. The utilization of temporal and medical data facilitates the provision of a diagnostic with enhanced accuracy. The CNN is employed as the primary extraction component of the model. The system undertakes a range of functions, including the acquisition of spatial data and the identification of localized patterns within medical images. The resulting output maps are subsequently subjected to down sampling in order to preserve their distinctive properties. The LSTM component subsequently analyses the data obtained from the CNN. The machine learning algorithm has the capability to acquire knowledge on the patterns and temporal relationships within the data, enabling the identification of distinct indicators associated with liver disease.
The schematic diagram in Figure 2 represents the proposed model based on CNN and LSTM. We have incorporated the diagrammatic representation through Figure 3 for representing and demonstrating the LSTM model with dense level 1, and Figure 4 demonstrates the details of the proposed CNN and LSTM models. The suggested model integrates the capabilities of LSTM and CNN in order to effectively identify liver illness. By virtue of its training on the Kaggle dataset, the system is capable of generating accurate and prompt diagnoses. Additionally, it aids in improving the overall quality and effectively managing other components associated with the system.
Figure 2. Proposed model based on CNN and LSTM.
Figure 3. Used LSTM model with dense level 1.
Figure 4. LSTM and CNN dense model in details.
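The paper does not list the exact layer configuration, so the following Keras sketch shows one plausible way to combine CNN-based feature extraction with an LSTM and a dense classifier, as described above. All layer sizes, dropout rates, and the framing of the pre-processed features as a short sequence are assumptions, not the authors' exact design.

```python
from tensorflow import keras
from tensorflow.keras import layers

def build_cnn_lstm(n_timesteps, n_features, n_classes=2):
    """One plausible CNN+LSTM stack: convolution extracts local patterns,
    pooling and dropout down-sample and regularize, and the LSTM captures
    temporal dependencies before the dense classification head."""
    inputs = keras.Input(shape=(n_timesteps, n_features))
    x = layers.Conv1D(32, kernel_size=3, padding="same", activation="relu")(inputs)
    x = layers.MaxPooling1D(pool_size=2)(x)      # down-sample the feature maps
    x = layers.Dropout(0.3)(x)
    x = layers.LSTM(64)(x)                       # capture temporal dependencies
    x = layers.Dropout(0.3)(x)
    outputs = layers.Dense(n_classes, activation="softmax")(x)
    model = keras.Model(inputs, outputs)
    model.compile(optimizer="adam",
                  loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])
    return model

model = build_cnn_lstm(n_timesteps=10, n_features=1)
model.summary()   # compare with the layer/parameter summary reported in Table 2
```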

4. Experimental Outcomes and Evaluations

The primary objective of our study is to examine different deep learning architectures for detecting liver illness, utilizing a substantial dataset. The evaluation encompassed four models: CNN, RNN, LSTM, and the proposed CNN+LSTM, as indicated in Table 1. Subsequently, the performance metrics of each model were analyzed to ascertain their efficacy in detecting liver disease cases. The findings of our study demonstrate strong performance in both quantitative and qualitative terms.
Table 1. Model training and Hyperparameter tuning values.
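An illustrative hyperparameter search of the kind summarized in Table 1 is sketched below. The search space and values shown are placeholders, not the values actually selected in the study; build_cnn_lstm, X_train, and y_train are assumed to come from the earlier sketches.

```python
from itertools import product
from tensorflow import keras

# Placeholder grid over learning rate and batch size; the actual search space
# and chosen values are those listed in Table 1 and are not reproduced here.
best = None
for lr, bs in product([1e-3, 1e-4], [32, 64]):
    model = build_cnn_lstm(n_timesteps=10, n_features=1)
    model.compile(optimizer=keras.optimizers.Adam(learning_rate=lr),
                  loss="sparse_categorical_crossentropy", metrics=["accuracy"])
    hist = model.fit(X_train, y_train, validation_split=0.2,
                     epochs=10, batch_size=bs, verbose=0)
    val_acc = max(hist.history["val_accuracy"])
    if best is None or val_acc > best[0]:
        best = (val_acc, lr, bs)
print("Best (val_accuracy, learning_rate, batch_size):", best)
```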

4.1. Graphical Representation of Accurateness of Epochs and Loss

In machine learning, an epoch denotes a whole iteration over the entire training dataset during the training procedure. The training data are often partitioned into smaller batches, each of which is used to update the model’s parameters; an epoch is considered complete after all batches have been processed. During each epoch, the model continually adjusts its internal parameters based on the input data and the corresponding target labels to minimize the error or loss. The number of epochs defines the total number of iterations the model performs over the training dataset. The accuracy and loss obtained on the training and testing datasets are shown in Figure 5.
Figure 5. Testing and training accuracy graph of epochs and loss.
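A plot like Figure 5 can be produced from the per-epoch metrics that Keras records during training. In the sketch below, `history` is assumed to be the object returned by a `model.fit(...)` call with validation data, such as in the training sketch above.

```python
import matplotlib.pyplot as plt

# Assumed: history = model.fit(..., validation_data=(X_test, y_test), ...)
fig, (ax_acc, ax_loss) = plt.subplots(1, 2, figsize=(10, 4))

ax_acc.plot(history.history["accuracy"], label="training")
ax_acc.plot(history.history["val_accuracy"], label="testing")
ax_acc.set_xlabel("Epoch"); ax_acc.set_ylabel("Accuracy"); ax_acc.legend()

ax_loss.plot(history.history["loss"], label="training")
ax_loss.plot(history.history["val_loss"], label="testing")
ax_loss.set_xlabel("Epoch"); ax_loss.set_ylabel("Loss"); ax_loss.legend()

plt.tight_layout()
plt.show()
```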

4.2. Heatmap

Heat maps are valuable tools for visually representing patterns, correlations, and trends throughout extensive datasets. Creating a heat map involves mapping the dataset’s values to a color gradient, where each color represents a different value or level of data intensity. The color spectrum normally spans from cold hues, such as blue, for lower values, to warm hues, such as red, for higher values. The heat map is shown in Figure 6. The heat map visually emphasizes regions within the dataset that exhibit higher or lower values, facilitating the identification of discernible patterns and trends. Typically, darker or more saturated colors depict high values, whereas lighter or fainter colors represent low values. This color encoding enables the easy identification of different values or data intensities. Heat maps are often used when analyzing extensive information to detect and ascertain patterns or relationships. In this study, we used heat maps to examine the projected values of liver disorders.
Figure 6. Heat map of dataset.
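One common way to produce a heat map like Figure 6 is to plot the correlation matrix of the numeric variables with seaborn, as sketched below; `df` is assumed to be the pre-processed dataframe from the earlier sketches, and the exact content of Figure 6 may differ.

```python
import seaborn as sns
import matplotlib.pyplot as plt

# Correlation heat map of the numeric features in the (assumed) dataframe df.
corr = df.select_dtypes("number").corr()
plt.figure(figsize=(8, 6))
sns.heatmap(corr, cmap="coolwarm", annot=False)   # cool-to-warm gradient as described
plt.title("Correlation heat map of the liver disease dataset")
plt.tight_layout()
plt.show()
```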

4.3. Model Summary

Table 2 displays each layer, its output shape, and its parameter count. The model comprises LSTM, dense, and lambda layers. After training, the total and trainable parameter counts are both 60,591, and there are no non-trainable parameters.
Table 2. Model summary of proposed model. Model: “sequential”.

4.4. Evaluation Parameters

We conducted an in-depth investigation of the relevant methods of multiple scholars and evaluated their performance using machine learning techniques, including an illustration of the architecture used. Finally, we summarized the benefits and limits of each of these methodologies to determine the potential for further research. The information is presented in Table 3, as seen below.
Table 3. Performance analysis of different researchers with their advantageous techniques and limitations.
In our evaluation process, as shown in Table 4, we used parameters such as accuracy, precision, recall, F1 score, and AUC-ROC. The applied models are CNN, RNN, LSTM, and our proposed model, CNN+LSTM. Accuracy, precision, recall, F1 score, and AUC-ROC are widely used assessment metrics in machine learning and are very appropriate for evaluating the performance of classification models. The predicted results are shown in Table 4. The proposed CNN+LSTM model achieved the best results, with an accuracy of 98.73%, a precision of 99%, a recall of 98%, an F1 score of 98%, and an AUC-ROC of 99%.
Table 4. Applied model with parameters and accuracy statistics.
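The metrics reported in Table 4 can be computed with scikit-learn as sketched below. Here `y_test` denotes the true test labels and `y_prob` a NumPy array of predicted positive-class probabilities from any of the trained models (e.g., from `model.predict` on the test set); both names are assumptions, not variables defined in the paper.

```python
from sklearn.metrics import (accuracy_score, precision_score, recall_score,
                             f1_score, roc_auc_score)

# y_test: true labels; y_prob: predicted probabilities for the positive class.
y_pred = (y_prob >= 0.5).astype(int)

print("Accuracy :", accuracy_score(y_test, y_pred))
print("Precision:", precision_score(y_test, y_pred))
print("Recall   :", recall_score(y_test, y_pred))
print("F1 score :", f1_score(y_test, y_pred))
print("AUC-ROC  :", roc_auc_score(y_test, y_prob))
```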
The CNN+LSTM model, which is a hybrid architecture that leverages the respective advantages of the LSTM and CNN, exhibited superior performance, with an accuracy rate of 98.73% as shown in Table 4. Figure 5 depicts the graph illustrating the relationship between accuracy and loss over the course of multiple epochs. Figure 6 displays a heat map, while Table 3 provides advantages and shortcomings along with the applied and suggested DL or ML method used by different researchers. The graphical representation of the comparison of various DL models with the proposed model is shown in Figure 7.
Figure 7. Comparison of various DL models with proposed model.

5. Conclusions

The purpose of this research was to compare the efficacy of several neural network models for identifying liver illnesses, namely CNN, RNN, and LSTM. The outcomes of the study show that the CNN+LSTM model achieves superior performance compared to the other models in terms of recall, precision, AUC-ROC, and F1 score. The results indicate that the integration of LSTM and CNN models has the potential to enhance the precision and resilience of liver disease diagnosis. In the future, further studies on the CNN+LSTM framework can examine its generalizability and suitability for analyzing diverse liver disease datasets, which will allow its potential applications in different clinical scenarios to be identified. The development of interpretability methods for the CNN+LSTM model can help clarify its decision-making process, allowing researchers and clinicians to gain a deeper understanding of liver disease; by unraveling the model’s temporal patterns and features, they may learn more about the underlying mechanisms of the condition. In addition, the model can be further expanded and its performance improved by incorporating multi-modal information sources, such as genetic data and laboratory test results. This enhancement would enable the system to achieve higher levels of accuracy and efficiency in the detection of liver disorders. We performed a comprehensive study of the tools and techniques applied by several researchers [,,,,,,,,,,,,,] and assessed their performance using machine learning methodologies, and we summarized the advantages and shortcomings of each of these techniques, as shown in Table 3. The proposed CNN+LSTM model exhibits enhanced performance in liver disease recognition when compared with alternative models. The study’s results indicate that the use of deep learning methods can improve both the accuracy and effectiveness of medical diagnostics in detecting liver illnesses. These advancements have the potential to enhance patient outcomes and facilitate the development of personalized treatment plans.

Author Contributions

Conceptualization, M.A.H.; Methodology, M.A.H. and S.L.; Validation, M.R.; Formal analysis, A.M.H., M.A.H., N.A.M., S.L., B.M.E. and M.R.; Investigation, A.M.H., M.A.H., N.A.M., S.L., B.M.E. and M.R.; Resources, N.A.M.; Writing—original draft, M.A.H.; Writing—review & editing, A.M.H., M.A.H., S.L., B.M.E. and M.R.; Funding acquisition, A.M.H. All authors have read and agreed to the published version of the manuscript.

Funding

The authors extend their appreciation to the Deanship of Research and Innovation, Jazan University, Kingdom of Saudi Arabia, for supporting and funding this project (Grant Number ISP23-99).

Institutional Review Board Statement

Not applicable.

Data Availability Statement

Publicly available datasets were analyzed in this study. This data can be found here: https://www.kaggle.com/datasets?search=liver.

Conflicts of Interest

The authors declare no conflicts of interest.

References

  1. Asrani, S.K.; Devarbhavi, H.; Eaton, J.; Kamath, P.S. Burden of liver diseases in the world. J. Hepatol. 2019, 70, 151–171. [Google Scholar] [CrossRef]
  2. Park, I.; Kim, N.; Lee, S.; Park, K.; Son, M.-Y.; Cho, H.-S.; Kim, D.-S. Characterization of signature trends across the spectrum of non-alcoholic fatty liver disease using deep learning method. Life Sci. 2023, 314, 121195. [Google Scholar] [CrossRef]
  3. Survarachakan, S.; Prasad, P.J.R.; Naseem, R.; de Frutos, J.P.; Kumar, R.P.; Langø, T.; Cheikh, F.A.; Elle, O.J.; Lindseth, F. Deep learning for image-based liver analysis—A comprehensive review focusing on malignant lesions. Artif. Intell. Med. 2022, 130, 102331. [Google Scholar] [CrossRef]
  4. Khan, R.A.; Luo, Y.; Wu, F.-X. Machine learning based liver disease diagnosis: A systematic review. Neurocomputing 2022, 468, 492–509. [Google Scholar] [CrossRef]
  5. Al-Kababji, A.; Bensaali, F.; Dakua, S.P.; Himeur, Y. Automated liver tissues delineation techniques: A systematic survey on machine learning current trends and future orientations. Eng. Appl. Artif. Intell. 2023, 117, 105532. [Google Scholar] [CrossRef]
  6. Ahn, J.C.; Connell, A.; Simonetto, D.A.; Hughes, C.; Shah, V.H. Application of Artificial Intelligence for the Diagnosis and Treatment of Liver Diseases. Hepatology 2020, 73, 2546–2563. [Google Scholar] [CrossRef]
  7. Masmali, I.; Kanwal, M.T.A.; Jamil, M.K.; Ahmad, A.; Azeem, M.; Koam, A.N.A. COVID antiviral drug structures and their edge metric dimension. Mol. Phys. 2023, e2259508. [Google Scholar] [CrossRef]
  8. Bhat, M.; Rabindranath, M.; Chara, B.S.; Simonetto, D.A. Artificial intelligence, machine learning, and deep learning in liver transplantation. J. Hepatol. 2023, 78, 1216–1233. [Google Scholar] [CrossRef]
  9. Huang, Q.; Khalil, A.; Ali, D.A.; Ahmad, A.; Luo, R.; Azeem, M. Breast cancer chemical structures and their partition resolvability. Math. Biosci. Eng. 2022, 20, 3838–3853. [Google Scholar] [CrossRef] [PubMed]
  10. Yin, Y.; Yakar, D.; Dierckx, R.A.J.O.; Mouridsen, K.B.; Kwee, T.C.; de Haas, R.J. Liver fibrosis staging by deep learning: A visual-based explanation of diagnostic decisions of the model. Eur. Radiol. 2021, 31, 9620–9627. [Google Scholar] [CrossRef] [PubMed]
  11. Koam, A.N.; Ahmad, A.; Azeem, M.; Hakami, K.H.; Elahi, K. Some stable and closed-shell structures of anticancer drugs by graph theoretical parameters. Heliyon 2023, 9, e17122. [Google Scholar] [CrossRef]
  12. Manjunath, R.V.; Kwadiki, K. Automatic liver and tumour segmentation from CT images using Deep learning algorithm. Results Control Optim. 2021, 6, 100087. [Google Scholar] [CrossRef]
  13. Yang, Y.; Liu, J.; Sun, C.; Shi, Y.; Hsing, J.C.; Kamya, A.; Keller, C.A.; Antil, N.; Rubin, D.; Wang, H.; et al. Nonalcoholic fatty liver disease (NAFLD) detection and deep learning in a Chinese community-based population. Eur. Radiol. 2023, 33, 5894–5906. [Google Scholar] [CrossRef]
  14. Hamid, K.; Asif, A.; Abbasi, W.; Sabih, D.; Minhas, F.U.A.A. Machine Learning with Abstention for Automated Liver Disease Diagnosis. In Proceedings of the 2017 International Conference on Frontiers of Information Technology, FIT 2017, Islamabad, Pakistan, 18–20 December 2017; Volume 2017, pp. 356–361. [Google Scholar] [CrossRef]
  15. Naeem, S.; Ali, A.; Qadri, S.; Mashwani, W.K.; Tairan, N.; Shah, H.; Fayaz, M.; Jamal, F.; Chesneau, C.; Anam, S. Machine-learning based hybrid-feature analysis for liver cancer classification using fused (MR and CT) images. Appl. Sci. 2020, 10, 3134. [Google Scholar] [CrossRef]
  16. Assiri, B. A Modified and Effective Blockchain Model for E-Healthcare Systems. Appl. Sci. 2023, 13, 12630. [Google Scholar] [CrossRef]
  17. Wu, C.-C.; Yeh, W.-C.; Hsu, W.-D.; Islam, M.; Nguyen, P.A.; Poly, T.N.; Wang, Y.-C.; Yang, H.-C.; Li, Y.-C. Prediction of fatty liver disease using machine learning algorithms. Comput. Methods Programs Biomed. 2019, 170, 23–29. [Google Scholar] [CrossRef] [PubMed]
  18. Yao, Z.; Li, J.; Guan, Z.; Ye, Y.; Chen, Y. Liver disease screening based on densely connected deep neural networks. Neural Netw. 2020, 123, 299–304. [Google Scholar] [CrossRef] [PubMed]
  19. Wu, B.; Moeckel, G. Application of digital pathology and machine learning in the liver, kidney and lung diseases. J. Pathol. Informatics 2023, 14, 100184. [Google Scholar] [CrossRef] [PubMed]
  20. Tian, Y.; Liu, M.; Sun, Y.; Fu, S. When liver disease diagnosis encounters deep learning: Analysis, challenges, and prospects. iLIVER 2023, 2, 73–87. [Google Scholar] [CrossRef]
  21. Han, Y.; Akhtar, J.; Liu, G.; Li, C.; Wang, G. Early warning and diagnosis of liver cancer based on dynamic network biomarker and deep learning. Comput. Struct. Biotechnol. J. 2023, 21, 3478–3489. [Google Scholar] [CrossRef] [PubMed]
  22. Bakrania, A.; Joshi, N.; Zhao, X.; Zheng, G.; Bhat, M. Artificial intelligence in liver cancers: Decoding the impact of machine learning models in clinical diagnosis of primary liver cancers and liver cancer metastases. Pharmacol. Res. 2023, 189, 106706. [Google Scholar] [CrossRef]
  23. Takahashi, Y.; Dungubat, E.; Kusano, H.; Fukusato, T. Artificial intelligence and deep learning: New tools for histopathological diagnosis of nonalcoholic fatty liver disease/nonalcoholic steatohepatitis. Comput. Struct. Biotechnol. J. 2023, 21, 2495–2501. [Google Scholar] [CrossRef]
  24. Md, A.Q.; Kulkarni, S.; Joshua, C.J.; Vaichole, T.; Mohan, S.; Iwendi, C. Enhanced Preprocessing Approach Using Ensemble Machine Learning Algorithms for Detecting Liver Disease. Biomedicines 2023, 11, 581. [Google Scholar] [CrossRef]
  25. Refaee, E.A.; Hossain, M.A.; Soundrapandiyan, R.; Karuppiah, M. Biomedical image retrieval using adaptive neuro-fuzzy optimized classifier system. Math. Biosc. Eng. 2022, 19, 8132–8151. [Google Scholar] [CrossRef]
  26. Ahmad, G.N.; Ullah, S.; Algethami, A.; Fatima, H.; Akhter, S.M.H. Comparative Study of Optimum Medical Diagnosis of Human Heart Disease Using Machine Learning Technique with and Without Sequential Feature Selection. IEEE Access 2022, 10, 23808–23828. [Google Scholar] [CrossRef]
  27. Vyas, S.; Seal, A. A comparative study of different feature extraction techniques for identifying COVID-19 patients using chest X-rays images. In Proceedings of the 2020 International Conference on Decision Aid Sciences and Application (DASA), Sakheer, Bahrain, 8–9 November 2020; pp. 209–213. [Google Scholar] [CrossRef]
  28. Ghazal, T.M.; Rehman, A.U.; Saleem, M.; Ahmad, M.; Ahmad, S.; Mehmood, F. Intelligent Model to Predict Early Liver Disease using Machine Learning Technique. In Proceedings of the 2022 International Conference on Business Analytics for Technology and Security (ICBATS), Dubai, United Arab Emirates, 16–17 February 2022. [Google Scholar] [CrossRef]
  29. Shrivastava, A. Liver Disease Patient Dataset 30 K Train Data_Kaggle. Available online: https://www.kaggle.com/datasets/abhi8923shriv/liver-disease-patient-dataset (accessed on 6 February 2024).
  30. Özcan, F.; Uçan, O.N.; Karaçam, S.; Tunçman, D. Fully Automatic Liver and Tumor Segmentation from CT Image Using an AIM-Unet. Bioengineering 2023, 10, 215. [Google Scholar] [CrossRef]
  31. Khoshkhabar, M.; Meshgini, S.; Afrouzian, R.; Danishvar, S. Automatic Liver Tumor Segmentation from CT Images Using Graph Convolutional Network. Sensors 2023, 23, 7561. [Google Scholar] [CrossRef]
  32. Cervantes-Sanchez, F.; Maktabi, M.; Köhler, H.; Sucher, R.; Rayes, N.; Avina-Cervantes, J.G.; Cruz-Aceves, I.; Chalopin, C. Automatic tissue segmentation of hyperspectral images in liver and head neck surgeries using machine learning. Art Int. Surg. 2021, 1, 22–37. [Google Scholar] [CrossRef]
  33. Wei, X.; Chen, X.; Lai, C.; Zhu, Y.; Yang, H.; Du, Y. Automatic Liver Segmentation in CT Images with Enhanced GAN and Mask Region-Based CNN Architectures. BioMed Res. Int. 2021, 2021, 9956983. [Google Scholar] [CrossRef] [PubMed]
  34. Rahman, H.; Bukht, T.F.N.; Imran, A.; Tariq, J.; Tu, S.; Alzahrani, A. A Deep Learning Approach for Liver and Tumor Segmentation in CT Images Using ResUNet. Bioengineering 2022, 9, 368. [Google Scholar] [CrossRef] [PubMed]
  35. Saha, R.S.; Roy, S.; Mukherjee, P.; Halder, R.A. An automated liver tumour segmentation and classification model by deep learning based approaches. Comput. Methods Biomech. Biomed. Eng. Imaging Vis. 2022, 11, 638–650. [Google Scholar] [CrossRef]
  36. Khan, R.A.; Luo, Y.; Wu, F.-X. Multi-level GAN based enhanced CT scans for liver cancer diagnosis. Biomed. Signal Process. Control 2023, 81, 104450. [Google Scholar] [CrossRef]
