Application of Artificial Intelligence in Medical Assisted Decision System

A special issue of Processes (ISSN 2227-9717). This special issue belongs to the section "Pharmaceutical Processes".

Deadline for manuscript submissions: closed (1 November 2024) | Viewed by 8049

Special Issue Editor


Prof. Dr. Lei Wang
Guest Editor
Suzhou Institute of Biomedical Engineering and Technology, Chinese Academy of Sciences, Suzhou 215163, China
Interests: medical laboratory technology based on machine vision and image processing; medical Internet of Things and big data applications; intelligent in vitro diagnostic instruments

Special Issue Information

Dear Colleagues,

Artificial intelligence (AI) is beginning to be adopted across many medical fields and specialties. AI, machine learning, natural language processing, and deep learning enable medical professionals to identify clinical needs and solutions quickly and accurately, and to make medical or business decisions rapidly on the basis of data models. AI can analyze the large volumes of data stored by smart medical applications in the form of images, clinical research trials, and medical claims, and deep learning allows these data to be analyzed and interpreted at a scale beyond manual review. The impact of these tools is considerable, and AI is already assisting many stakeholders in the field of intelligent medicine:

  1. Clinicians, researchers, and data management teams participating in clinical trials can speed up the search for and confirmation of medical codes, which is crucial for the progress and completion of clinical research.
  2. Patients can personalize their health plans through conversational AI, which connects virtual agents with members interested in customized health solutions.
  3. Clinicians can predict or diagnose diseases faster by combining medical data, improving and personalizing patient care.

This Special Issue on the “Application of Artificial Intelligence in Medical Assisted Decision System” aims to present applications of AI technology in the medical field and to address the challenges that the adoption of AI in smart medicine still faces. Topics include, but are not limited to:

  • Intelligent recognition and analysis of medical images;
  • Intelligent auxiliary diagnosis of diseases;
  • Medical robotics;
  • Intelligent drug research and development;
  • Intelligent health management;
  • Unstructured medical data analysis;
  • Intelligent discovery and development of genetic medicine.

Prof. Dr. Lei Wang
Guest Editor

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to the website. Once registered, authors can proceed to the online submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers are published continuously in the journal (as soon as accepted) and are listed together on the Special Issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Processes is an international peer-reviewed open access monthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 2400 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • artificial intelligence
  • medical care
  • auxiliary diagnosis
  • data analysis
  • decision-making system
  • medical imaging
  • medical robot

Benefits of Publishing in a Special Issue

  • Ease of navigation: Grouping papers by topic helps scholars navigate broad scope journals more efficiently.
  • Greater discoverability: Special Issues support the reach and impact of scientific research. Articles in Special Issues are more discoverable and cited more frequently.
  • Expansion of research network: Special Issues facilitate connections among authors, fostering scientific collaborations.
  • External promotion: Articles in Special Issues are often promoted through the journal's social media, increasing their visibility.
  • e-Book format: Special Issues with more than 10 articles can be published as dedicated e-books, ensuring wide and rapid dissemination.

Further information on MDPI's Special Issue policies can be found on the MDPI website.

Published Papers (5 papers)


Research

20 pages, 1816 KiB  
Article
Improving Polyp Segmentation with Boundary-Assisted Guidance and Cross-Scale Interaction Fusion Transformer Network
by Lincen Jiang, Yan Hui, Yuan Fei, Yimu Ji and Tao Zeng
Processes 2024, 12(5), 1030; https://doi.org/10.3390/pr12051030 - 19 May 2024
Viewed by 1013
Abstract
Efficient and precise colorectal polyp segmentation has significant implications for screening colorectal polyps. Although network variants derived from the Transformer network have high accuracy in segmenting colorectal polyps with complex shapes, they have two main shortcomings: (1) multi-level semantic information at the output of the encoder may result in information loss during the fusion process and (2) failure to adequately suppress background noise during segmentation. To address these challenges, we propose a cross-scale interaction fusion transformer for polyp segmentation (CIFFormer). Firstly, a novel feature supplement module (FSM) supplements the missing details and explores potential features to enhance the feature representations. Additionally, to mitigate the interference of background noise, we designed a cross-scale interactive fusion module (CIFM) that combines feature information between different layers to obtain more multi-scale and discriminative representative features. Furthermore, a boundary-assisted guidance module (BGM) is proposed to help the segmentation network obtain boundary-enhanced details. Extensive experiments on five typical datasets have demonstrated that CIFFormer has an obvious advantage in segmenting polyps. Specifically, CIFFormer achieved an mDice of 0.925 and an mIoU of 0.875 on the Kvasir-SEG dataset, achieving superior segmentation accuracy to competing methods.
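
As a rough illustration of the cross-scale fusion idea described in this abstract, the following PyTorch sketch combines a coarse, low-resolution feature map with a fine, high-resolution one through a simple sigmoid gate. The module name, channel sizes, and gating scheme are assumptions for exposition only and do not reproduce the paper's FSM, CIFM, or BGM modules.

```python
# Illustrative sketch only: a generic cross-scale feature-fusion block in PyTorch.
import torch
import torch.nn as nn
import torch.nn.functional as F

class CrossScaleFusion(nn.Module):
    """Fuses a coarse (low-resolution) and a fine (high-resolution) feature map."""
    def __init__(self, coarse_ch: int, fine_ch: int, out_ch: int):
        super().__init__()
        self.align_coarse = nn.Conv2d(coarse_ch, out_ch, kernel_size=1)
        self.align_fine = nn.Conv2d(fine_ch, out_ch, kernel_size=1)
        # A simple sigmoid gate that lets coarse context suppress background noise.
        self.gate = nn.Sequential(nn.Conv2d(out_ch, out_ch, kernel_size=3, padding=1),
                                  nn.Sigmoid())
        self.fuse = nn.Conv2d(2 * out_ch, out_ch, kernel_size=3, padding=1)

    def forward(self, coarse: torch.Tensor, fine: torch.Tensor) -> torch.Tensor:
        # Upsample coarse features to the fine spatial resolution before mixing.
        coarse = F.interpolate(self.align_coarse(coarse), size=fine.shape[-2:],
                               mode="bilinear", align_corners=False)
        fine = self.align_fine(fine)
        gated = fine * self.gate(coarse)          # coarse context modulates fine detail
        return self.fuse(torch.cat([coarse, gated], dim=1))

# Example: fuse a 1/16-scale map with a 1/4-scale map.
fusion = CrossScaleFusion(coarse_ch=256, fine_ch=64, out_ch=128)
out = fusion(torch.randn(1, 256, 14, 14), torch.randn(1, 64, 56, 56))
print(out.shape)  # torch.Size([1, 128, 56, 56])
```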

16 pages, 6594 KiB  
Article
Optimizing Pneumonia Diagnosis Using RCGAN-CTL: A Strategy for Small or Limited Imaging Datasets
by Ke Han, Shuai He and Yue Yu
Processes 2024, 12(3), 548; https://doi.org/10.3390/pr12030548 - 11 Mar 2024
Viewed by 1111
Abstract
In response to the urgent need for efficient pneumonia diagnosis—a significant health challenge that has been intensified during the COVID-19 era—this study introduces the RCGAN-CTL model. This innovative approach combines a coupled generative adversarial network (GAN) with relativistic and conditional discriminators to optimize performance in contexts with limited data resources. It significantly enhances the efficacy of small or incomplete datasets through the integration of synthetic images generated by an advanced RCGAN. Rigorous evaluations using a wide range of lung X-ray images validate the model’s effectiveness. In binary classification tasks that differentiate between normal and pneumonia cases, RCGAN-CTL demonstrates exceptional accuracy, exceeding 99%, with an area under the curve (AUC) of around 95%. Its capabilities extend to a complex triple classification task, accurately distinguishing between normal, viral pneumonia, and bacterial pneumonia, with precision scores of 89.9%, 95.5%, and 90.5%, respectively. A notable improvement in sensitivity further evidences the model’s robustness. Comprehensive validation underscores RCGAN-CTL’s superior accuracy and reliability in both binary and triple classification scenarios. This advancement is pivotal for enhancing deep learning applications in medical diagnostics, presenting a significant tool in addressing the challenges of pneumonia diagnosis, a key concern in contemporary healthcare.
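
For readers unfamiliar with relativistic discriminators, the sketch below shows one common formulation of the relativistic average GAN losses in PyTorch. It assumes a discriminator that outputs raw logits and does not reproduce the paper's coupled, conditional architecture.

```python
# Illustrative sketch only: relativistic average GAN losses on discriminator logits.
import torch
import torch.nn.functional as F

def relativistic_d_loss(d_real: torch.Tensor, d_fake: torch.Tensor) -> torch.Tensor:
    """Discriminator loss: real samples should look 'more real' than the average fake."""
    ones = torch.ones_like(d_real)
    zeros = torch.zeros_like(d_fake)
    loss_real = F.binary_cross_entropy_with_logits(d_real - d_fake.mean(), ones)
    loss_fake = F.binary_cross_entropy_with_logits(d_fake - d_real.mean(), zeros)
    return loss_real + loss_fake

def relativistic_g_loss(d_real: torch.Tensor, d_fake: torch.Tensor) -> torch.Tensor:
    """Generator loss: symmetric form that pushes fake logits above the average real."""
    ones = torch.ones_like(d_fake)
    zeros = torch.zeros_like(d_real)
    loss_fake = F.binary_cross_entropy_with_logits(d_fake - d_real.mean(), ones)
    loss_real = F.binary_cross_entropy_with_logits(d_real - d_fake.mean(), zeros)
    return loss_real + loss_fake

# Example with dummy logits from a discriminator:
d_real, d_fake = torch.randn(8, 1), torch.randn(8, 1)
print(relativistic_d_loss(d_real, d_fake).item(),
      relativistic_g_loss(d_real, d_fake).item())
```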

17 pages, 9658 KiB  
Article
Transfer Learning and Interpretable Analysis-Based Quality Assessment of Synthetic Optical Coherence Tomography Images by CGAN Model for Retinal Diseases
by Ke Han, Yue Yu and Tao Lu
Processes 2024, 12(1), 182; https://doi.org/10.3390/pr12010182 - 13 Jan 2024
Cited by 4 | Viewed by 1343
Abstract
This study investigates the effectiveness of using conditional generative adversarial networks (CGAN) to synthesize Optical Coherence Tomography (OCT) images for medical diagnosis. Specifically, the CGAN model is trained to generate images representing various eye conditions, including normal retina, vitreous warts (DRUSEN), choroidal neovascularization (CNV), and diabetic macular edema (DME), creating a dataset of 102,400 synthetic images per condition. The quality of these images is evaluated using two methods. First, 18 transfer-learning neural networks (including AlexNet, VGGNet16, GoogleNet) assess image quality through model-scoring metrics, resulting in an accuracy rate of 97.4% to 99.9% and an F1 Score of 95.3% to 100% across conditions. Second, interpretative analysis techniques (GRAD-CAM, occlusion sensitivity, LIME) compare the decision score distribution of real and synthetic images, further validating the CGAN network’s performance. The results indicate that CGAN-generated OCT images closely resemble real images and could significantly contribute to medical datasets.
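
The transfer-learning part of this evaluation can be sketched as fine-tuning a pretrained classifier on real OCT scans and then scoring the CGAN-generated images with it. The folder paths, the four-class label set, and the single AlexNet backbone below are assumptions for exposition (the paper uses 18 transfer-learning networks); torchvision 0.13 or later is assumed for weights="DEFAULT".

```python
# Illustrative sketch only: score synthetic OCT images with a fine-tuned classifier.
import torch
import torch.nn as nn
from torch.utils.data import DataLoader
from torchvision import datasets, models, transforms

classes = ["NORMAL", "DRUSEN", "CNV", "DME"]                  # assumed label set
tfm = transforms.Compose([transforms.Resize((224, 224)),
                          transforms.Grayscale(num_output_channels=3),
                          transforms.ToTensor()])

# Pretrained backbone with the final layer replaced for four OCT classes.
net = models.alexnet(weights="DEFAULT")
net.classifier[6] = nn.Linear(net.classifier[6].in_features, len(classes))

# Hypothetical folders: real scans for fine-tuning, synthetic images for scoring.
real = DataLoader(datasets.ImageFolder("oct/real", transform=tfm),
                  batch_size=32, shuffle=True)
synthetic = DataLoader(datasets.ImageFolder("oct/synthetic", transform=tfm),
                       batch_size=32)

opt = torch.optim.Adam(net.parameters(), lr=1e-4)
loss_fn = nn.CrossEntropyLoss()

net.train()
for x, y in real:                                             # one epoch, for brevity
    opt.zero_grad()
    loss_fn(net(x), y).backward()
    opt.step()

net.eval()
correct = total = 0
with torch.no_grad():
    for x, y in synthetic:
        correct += (net(x).argmax(dim=1) == y).sum().item()
        total += y.numel()
print(f"Accuracy on synthetic images: {correct / total:.3f}")
```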

18 pages, 2274 KiB  
Article
Gout Staging Diagnosis Method Based on Deep Reinforcement Learning
by Chao Ma, Changgang Pan, Zi Ye, Hanbin Ren, Hai Huang and Jiaxing Qu
Processes 2023, 11(8), 2450; https://doi.org/10.3390/pr11082450 - 15 Aug 2023
Cited by 2 | Viewed by 1334
Abstract
In clinical practice, diseases with a prolonged course are often classified into specific stages according to their characteristics at the time of diagnosis. The precision of disease staging significantly affects therapeutic outcomes, and staging multi-stage diseases from electronic medical records remains an open research problem. Gout is one such multi-stage disease. This paper proposes a staging diagnosis method for gout based on deep reinforcement learning. The method first uses a library of candidate binary classification models to diagnose gout, then corrects the binary results with predefined medical rules for gout diagnosis, and finally applies a machine learning model to the corrected records to determine the disease stage. Deep reinforcement learning is introduced to tune the hyperparameters of the staging model. In experiments on 24,872 electronic medical records, the accuracy of gout diagnosis reached 90.03%, and the accuracy of staging reached 86.85%. These results can assist clinicians in the accurate diagnosis and staging of gout, and the improvement in accuracy validates the effectiveness and feasibility of the method.
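
The classify-then-correct-then-stage flow can be illustrated with a toy sketch. The features, the uric-acid rule, the labels, and the gradient-boosting models below are hypothetical placeholders; neither the paper's actual medical rules nor its reinforcement-learning-based hyperparameter tuning is reproduced here.

```python
# Illustrative sketch only: binary diagnosis -> rule correction -> stage prediction.
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier

def rule_correct(has_gout: np.ndarray, serum_uric_acid: np.ndarray) -> np.ndarray:
    """Override the learned binary diagnosis with a simple (hypothetical) medical rule."""
    # Example rule: very high serum uric acid (umol/L) is treated as gout-positive.
    return np.where(serum_uric_acid > 540, 1, has_gout)

binary_clf = GradientBoostingClassifier()   # stage 1: gout vs. non-gout
stage_clf = GradientBoostingClassifier()    # stage 2: gout stage, trained on gout records

def diagnose(X: np.ndarray, uric_acid: np.ndarray) -> np.ndarray:
    """Return -1 for records judged non-gout, otherwise the predicted gout stage."""
    has_gout = rule_correct(binary_clf.predict(X), uric_acid)
    stages = np.full(len(X), -1)
    if has_gout.any():
        stages[has_gout == 1] = stage_clf.predict(X[has_gout == 1])
    return stages

# Toy synthetic records for demonstration only.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 8))                          # placeholder clinical features
uric = rng.uniform(200.0, 700.0, size=200)             # placeholder serum uric acid
y_gout = (uric > 420).astype(int)                      # hypothetical binary labels
y_stage = rng.integers(0, 4, size=200)                 # hypothetical stage labels (0-3)

binary_clf.fit(X, y_gout)
stage_clf.fit(X[y_gout == 1], y_stage[y_gout == 1])
print(diagnose(X[:5], uric[:5]))
```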

14 pages, 3960 KiB  
Article
GCCSwin-UNet: Global Context and Cross-Shaped Windows Vision Transformer Network for Polyp Segmentation
by Jianbo Zhu, Mingfeng Ge, Zhimin Chang and Wenfei Dong
Processes 2023, 11(4), 1035; https://doi.org/10.3390/pr11041035 - 29 Mar 2023
Cited by 1 | Viewed by 2471
Abstract
Accurate polyp segmentation is of great importance for the diagnosis and treatment of colon cancer. Convolutional neural networks (CNNs) have made significant strides in medical image processing in recent years, but despite their remarkable performance, the local nature of convolutional operations prevents them from adequately modeling global and long-range semantic interactions. This study therefore proposes the GCCSwin-UNet framework. The model adopts an encoder–decoder structure, using a patch-embedding layer for feature downsampling and CSwin Transformer blocks as the encoder for contextual feature extraction. A symmetric decoder with a patch-expansion layer restores the spatial resolution of the feature maps during upsampling. To help the backbone learn better features, a global context module (GCM) and a local position-enhanced module (LPEM) are also introduced. Extensive experiments on the Kvasir-SEG and CVC-ClinicDB datasets compare GCCSwin-UNet with existing methods: it achieves Dice and mIoU of 86.37% and 83.19% on Kvasir-SEG, and 91.26% and 84.65% on CVC-ClinicDB, respectively. Quantitative analysis and statistical tests further demonstrate the validity of the method.
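
For context, the following PyTorch sketch shows generic patch-embedding (downsampling) and patch-expansion (upsampling) layers of the kind used in Swin-style encoder–decoder segmenters. Channel and patch sizes are assumptions, and the CSwin blocks, GCM, and LPEM of GCCSwin-UNet are not reproduced here.

```python
# Illustrative sketch only: generic patch-embedding and patch-expansion layers.
import torch
import torch.nn as nn

class PatchEmbed(nn.Module):
    """Split the image into non-overlapping patches and project them to embed_dim."""
    def __init__(self, in_ch=3, embed_dim=96, patch_size=4):
        super().__init__()
        self.proj = nn.Conv2d(in_ch, embed_dim, kernel_size=patch_size, stride=patch_size)

    def forward(self, x):                       # (B, C, H, W) -> (B, D, H/p, W/p)
        return self.proj(x)

class PatchExpand(nn.Module):
    """Double the spatial resolution and halve the channels via pixel shuffle."""
    def __init__(self, dim):
        super().__init__()
        self.expand = nn.Conv2d(dim, 2 * dim, kernel_size=1)   # 2*dim = (dim/2) * 2**2
        self.shuffle = nn.PixelShuffle(upscale_factor=2)

    def forward(self, x):                       # (B, D, H, W) -> (B, D/2, 2H, 2W)
        return self.shuffle(self.expand(x))

x = torch.randn(1, 3, 224, 224)
feat = PatchEmbed()(x)                          # (1, 96, 56, 56)
up = PatchExpand(96)(feat)                      # (1, 48, 112, 112)
print(feat.shape, up.shape)
```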