
Sensor Data Fusion Based on Deep Learning for Computer Vision and Medical Applications II

A special issue of Sensors (ISSN 1424-8220). This special issue belongs to the section "Biomedical Sensors".

Deadline for manuscript submissions: closed (30 November 2023) | Viewed by 16904

Special Issue Editors


Guest Editor
Department of Intelligent Mechatronics Engineering, Sejong University, Seoul 05006, Republic of Korea
Interests: computer vision; human–computer interaction; biometrics; medical image processing and understanding; artificial intelligence; deep learning

Guest Editor
Division of Electronics and Electrical Engineering, Dongguk University, 30 Pildong-ro 1-gil, Jung-gu, Seoul 04620, Republic of Korea
Interests: deep learning; semantic segmentation; image classification; medical image analysis; computer-aided diagnosis (CAD); biometrics (finger vein and iris segmentation)

Guest Editor
Department of Computer Science, University of Warwick, Coventry CV4 7AL, UK
Interests: medical image analysis; weakly supervised learning; reinforcement learning; computer aided diagnosis (CAD)

Guest Editor
School of Computer Science and Engineering, University of New South Wales, Sydney, Australia
Interests: image segmentation; image classification; medical image analysis; biometrics (fingerprints and iris segmentation); deep learning

Guest Editor
School of Computer Science and Engineering, University of New South Wales, Sydney, Australia
Interests: database usability; advanced data analytics; graph data management

Special Issue Information

Dear Colleagues,

It is our pleasure to invite submissions to this Special Issue on "Sensor Data Fusion Based on Deep Learning for Computer Vision and Medical Applications II".

Recent advancements have led to the extensive use of various sensors, such as visible light, near-infrared (NIR), and thermal camera sensors, fundus cameras, scanners for H&E-stained slides, endoscopes, optical coherence tomography (OCT) cameras, and magnetic resonance imaging (MRI) sensors, in a wide range of applications in computer vision, biometrics, video surveillance, image compression and restoration, medical image analysis, computer-aided diagnosis, etc. Research on sensor and data fusion, information processing and merging, and fusion architectures for cooperative perception and risk assessment is needed for computer vision and medical applications. Indeed, before a high level of accuracy can be ensured in deployed computer vision and deep learning applications, high-quality, real-time perception mechanisms must be guaranteed. Although computer vision technology has matured, its performance is still affected by various environmental factors, and recent approaches have attempted to fuse data from multiple sensors using deep learning techniques to achieve higher accuracy. The objective of this Special Issue is to invite high-quality, state-of-the-art research papers that address challenging issues in deep-learning-based computer vision and medical applications. We solicit original papers reporting completed, unpublished research that is not currently under review by any other conference, magazine, or journal. Topics of interest include, but are not limited to, the following:

  • Computer vision by various camera sensors;
  • Biometrics and spoof detection by various camera sensors;
  • Image classification using various camera sensors, such as NIR and visible-light (VL) cameras;
  • Deep-learning-based detection and localization using various camera sensors;
  • Deep-learning-based object segmentation/instance segmentation by media sensors;
  • Medical image processing and analysis by various camera sensors;
  • Deep learning by various camera sensors;
  • Multiple-approach fusion that combines deep learning techniques and conventional methods on images obtained by various camera sensors.

Dr. Rizwan Ali Naqvi
Dr. Muhammad Arsalan
Dr. Talha Qaiser
Dr. Tariq Mahmood Khan
Dr. Imran Razzak
Guest Editors

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to the website. Once registered, go to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Sensors is an international peer-reviewed open access semimonthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 2600 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • sensor data fusion
  • image processing
  • deep feature fusion
  • image/video-based classification
  • semantic segmentation/instance segmentation
  • medical image analysis
  • computer-aided diagnosis
  • computer vision
  • fusion for biometrics
  • fusion for medical applications
  • fusion for semantic information
  • smart sensors

Published Papers (6 papers)


Research


18 pages, 11160 KiB  
Article
mid-DeepLabv3+: A Novel Approach for Image Semantic Segmentation Applied to African Food Dietary Assessments
by Thierry Roland Baban A Erep and Lotfi Chaari
Sensors 2024, 24(1), 209; https://doi.org/10.3390/s24010209 - 29 Dec 2023
Viewed by 896
Abstract
Recent decades have witnessed the development of vision-based dietary assessment (VBDA) systems. These systems generally consist of three main stages: food image analysis, portion estimation, and nutrient derivation. The effectiveness of the initial step is highly dependent on the use of accurate segmentation and image recognition models and the availability of high-quality training datasets. Food image segmentation still faces various challenges, and most existing research focuses mainly on Asian and Western food images. For this reason, this study is based on food images from sub-Saharan Africa, which pose their own problems, such as inter-class similarity and dishes with mixed-class food. This work focuses on the first stage of VBDAs, where we introduce two notable contributions. Firstly, we propose mid-DeepLabv3+, an enhanced food image segmentation model based on DeepLabv3+ with a ResNet50 backbone. Our approach involves adding a middle layer in the decoder path and SimAM after each extracted backbone feature layer. Secondly, we present CamerFood10, the first food image dataset specifically designed for sub-Saharan African food segmentation. It includes 10 classes of the most consumed food items in Cameroon. On our dataset, mid-DeepLabv3+ outperforms benchmark convolutional neural network models for semantic image segmentation, with an mIoU (mean Intersection over Union) of 65.20%, representing a +10.74% improvement over DeepLabv3+ with the same backbone.
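The SimAM module the authors insert after each backbone feature layer is a parameter-free attention mechanism: each activation is reweighted by a sigmoid of an energy term computed from its deviation from the channel mean. The following NumPy sketch illustrates that operation under our reading of the SimAM formulation; it is not the authors' code, and `lam` is the small regularizer from the original SimAM paper.

```python
import numpy as np

def simam(x, lam=1e-4):
    """Parameter-free SimAM-style attention over a feature map x of shape (C, H, W).

    Illustrative sketch: each activation is scaled by a sigmoid of its
    per-channel energy, computed from the squared deviation from the
    channel mean. No learnable parameters are involved.
    """
    c, h, w = x.shape
    n = h * w - 1                                   # normalizer from the SimAM energy
    mu = x.mean(axis=(1, 2), keepdims=True)         # per-channel spatial mean
    d = (x - mu) ** 2                               # squared deviation per position
    var = d.sum(axis=(1, 2), keepdims=True) / n     # per-channel variance estimate
    e_inv = d / (4 * (var + lam)) + 0.5             # inverse energy per activation
    return x * (1.0 / (1.0 + np.exp(-e_inv)))       # scale by sigmoid(e_inv)
```

Because the module adds no parameters, it can be dropped after any backbone stage (as the paper does for each extracted feature layer) without changing the model's parameter count.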

20 pages, 3611 KiB  
Article
Monkeypox Detection Using CNN with Transfer Learning
by Murat Altun, Hüseyin Gürüler, Osman Özkaraca, Faheem Khan, Jawad Khan and Youngmoon Lee
Sensors 2023, 23(4), 1783; https://doi.org/10.3390/s23041783 - 5 Feb 2023
Cited by 38 | Viewed by 4729
Abstract
Monkeypox disease is caused by a virus that causes lesions on the skin and has been observed on the African continent in past years. The fatal consequences of virus infections after the COVID pandemic have caused fear and panic among the public. As COVID reached pandemic dimensions, the development and implementation of rapid detection methods became important. In this context, our study aims to detect monkeypox disease through skin lesions, in case of a possible pandemic, with deep-learning methods in a fast and safe way. Deep-learning methods were supported with transfer learning tools, and hyperparameter optimization was applied. In the CNN structure, a hybrid function learning model was developed by customizing the transfer learning model together with hyperparameters. The approach was implemented on the custom MobileNetV3-s model as well as the EfficientNetV2, ResNet50, VGG19, DenseNet121, and Xception models. In our study, AUC, accuracy, recall, loss, and F1-score metrics were used for evaluation and comparison. The optimized hybrid MobileNetV3-s model achieved the best score, with an average F1-score of 0.98, AUC of 0.99, accuracy of 0.96, and recall of 0.97. In this study, convolutional neural networks were used in conjunction with hyperparameter optimization and a customized hybrid-function transfer learning model to achieve striking results with a custom CNN model. The proposed custom CNN model design demonstrates how successfully and quickly deep-learning methods can achieve results in classification and discrimination.
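The accuracy, recall, and F1-score figures reported above all derive from the binary confusion counts. As a quick reference for how such scores are computed, here is a small self-contained sketch of those formulas (a hypothetical helper, not the authors' evaluation code):

```python
import numpy as np

def classification_metrics(y_true, y_pred):
    """Accuracy, recall, precision, and F1 from binary labels and predictions."""
    y_true = np.asarray(y_true)
    y_pred = np.asarray(y_pred)
    tp = int(np.sum((y_true == 1) & (y_pred == 1)))  # true positives
    tn = int(np.sum((y_true == 0) & (y_pred == 0)))  # true negatives
    fp = int(np.sum((y_true == 0) & (y_pred == 1)))  # false positives
    fn = int(np.sum((y_true == 1) & (y_pred == 0)))  # false negatives
    accuracy = (tp + tn) / y_true.size
    recall = tp / (tp + fn)            # fraction of actual positives detected
    precision = tp / (tp + fp)         # fraction of predicted positives that are correct
    f1 = 2 * precision * recall / (precision + recall)  # harmonic mean of P and R
    return accuracy, recall, precision, f1
```

The F1-score's harmonic mean penalizes a model that trades precision for recall (or vice versa), which is why it is reported alongside plain accuracy for medical screening tasks with class imbalance.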

21 pages, 1649 KiB  
Article
A Machine Learning-Based Applied Prediction Model for Identification of Acute Coronary Syndrome (ACS) Outcomes and Mortality in Patients during the Hospital Stay
by Syed Waseem Abbas Sherazi, Huilin Zheng and Jong Yun Lee
Sensors 2023, 23(3), 1351; https://doi.org/10.3390/s23031351 - 25 Jan 2023
Cited by 6 | Viewed by 2473
Abstract
Nowadays, machine learning (ML) is a revolutionary and cutting-edge technology widely used in the medical domain and health informatics, especially in the diagnosis and prognosis of cardiovascular diseases. Therefore, we propose an ML-based soft-voting ensemble classifier (SVEC) for the predictive modeling of acute coronary syndrome (ACS) outcomes such as STEMI and NSTEMI, discharge reasons for patients admitted to hospitals, and death types for affected patients during the hospital stay. We used the Korea Acute Myocardial Infarction Registry (KAMIR-NIH) dataset, which contains data on 13,104 patients with 551 features. After data extraction and preprocessing, we used 125 useful features and applied the SMOTETomek hybrid sampling technique to address the data imbalance of the minority classes. Our proposed SVEC combined three ML algorithms, namely random forest, extra trees, and the gradient boosting machine, for predictive modeling of our target variables and was compared with the performance of all base classifiers. The experiments showed that the SVEC outperformed other ML-based predictive models in accuracy (99.0733%), precision (99.0742%), recall (99.0734%), F1-score (99.9719%), and area under the ROC curve (AUC) (99.9702%). Overall, the performance of the SVEC was better than that of the other applied models, although its AUC was slightly lower than that of the extra trees classifier for the predictive modeling of ACS outcomes. Since the proposed predictive model outperformed other ML-based models, it can be used practically in hospitals for the diagnosis and prediction of heart problems, enabling timely selection of proper treatments and more accurate prediction of disease occurrence.
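A soft-voting ensemble averages the predicted class probabilities of its base learners rather than counting their hard votes. The toy scikit-learn sketch below illustrates the SVEC idea with the same three base learners on synthetic data; it is illustrative only (the study's pipeline additionally applies SMOTETomek resampling and works on the KAMIR-NIH registry, both omitted here):

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import (ExtraTreesClassifier, GradientBoostingClassifier,
                              RandomForestClassifier, VotingClassifier)
from sklearn.model_selection import train_test_split

# Synthetic stand-in for the tabular clinical features.
X, y = make_classification(n_samples=400, n_features=20, random_state=42)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=42)

svec = VotingClassifier(
    estimators=[
        ("rf", RandomForestClassifier(n_estimators=50, random_state=42)),
        ("et", ExtraTreesClassifier(n_estimators=50, random_state=42)),
        ("gbm", GradientBoostingClassifier(random_state=42)),
    ],
    voting="soft",  # average predict_proba outputs across the three learners
)
svec.fit(X_tr, y_tr)
acc = svec.score(X_te, y_te)
```

Soft voting requires every base estimator to expose `predict_proba`; the averaged probabilities let a confident learner outweigh two lukewarm ones, which hard majority voting cannot do.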

23 pages, 6118 KiB  
Article
DMFL_Net: A Federated Learning-Based Framework for the Classification of COVID-19 from Multiple Chest Diseases Using X-rays
by Hassaan Malik, Ahmad Naeem, Rizwan Ali Naqvi and Woong-Kee Loh
Sensors 2023, 23(2), 743; https://doi.org/10.3390/s23020743 - 9 Jan 2023
Cited by 28 | Viewed by 3461
Abstract
Coronavirus Disease 2019 (COVID-19) is still a threat to global health and safety, and it is anticipated that deep learning (DL) will be the most effective way of detecting COVID-19 and other chest diseases such as lung cancer (LC), tuberculosis (TB), pneumothorax (PneuTh), and pneumonia (Pneu). However, data sharing across hospitals is hampered by patients' right to privacy, leading to unexpected results from deep neural network (DNN) models. Federated learning (FL) is a game-changing concept, since it allows clients to train models together without sharing their source data with anybody else. Few studies, however, focus on improving the model's accuracy and stability, whereas most existing FL-based COVID-19 detection techniques aim to optimize secondary objectives such as latency, energy usage, and privacy. In this work, we design a novel model named the decision-making-based federated learning network (DMFL_Net) for medical diagnostic image analysis to distinguish COVID-19 from four distinct chest disorders: LC, TB, PneuTh, and Pneu. The proposed DMFL_Net model gathers data from a variety of hospitals, constructs the model using DenseNet-169, and produces accurate predictions from information that is kept secure and released only to authorized individuals. Extensive experiments were carried out with chest X-rays (CXR), and the performance of the proposed model was compared with two transfer learning (TL) models, i.e., VGG-19 and VGG-16, in terms of accuracy (ACC), precision (PRE), recall (REC), specificity (SPF), and F1-measure. The DMFL_Net model was also compared with the default FL configurations. The proposed DMFL_Net + DenseNet-169 model achieves an accuracy of 98.45%, outperforms other approaches in classifying COVID-19 from the four chest diseases, and successfully protects the privacy of the data among diverse clients.
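In federated learning, each hospital keeps its images local and shares only model parameters, which a central server aggregates into a global model. The NumPy sketch below shows FedAvg-style weighted aggregation as a minimal baseline for that server step; it is illustrative only, and DMFL_Net's decision-making aggregation rule is more involved than this plain weighted average:

```python
import numpy as np

def fed_avg(client_params, client_sizes):
    """Aggregate per-client parameter lists into one global parameter list.

    client_params: list of clients, each a list of per-layer weight arrays.
    client_sizes:  local training-sample count per client; larger hospitals
                   contribute proportionally more to the average.
    """
    total = float(sum(client_sizes))
    n_layers = len(client_params[0])
    return [
        sum((n / total) * client[i] for client, n in zip(client_params, client_sizes))
        for i in range(n_layers)
    ]
```

Only these aggregated parameters ever leave the server, which is what lets the framework train across hospitals without exchanging the underlying chest X-rays.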

17 pages, 2638 KiB  
Article
Detection of COVID-19 in X-ray Images Using Densely Connected Squeeze Convolutional Neural Network (DCSCNN): Focusing on Interpretability and Explainability of the Black Box Model
by Sikandar Ali, Ali Hussain, Subrata Bhattacharjee, Ali Athar, Abdullah and Hee-Cheol Kim
Sensors 2022, 22(24), 9983; https://doi.org/10.3390/s22249983 - 18 Dec 2022
Cited by 3 | Viewed by 2190
Abstract
The novel coronavirus (COVID-19), which emerged as a pandemic, has engulfed so many lives and affected millions of people across the world since December 2019. Although the disease is largely under control nowadays, it still affects people in many countries. The traditional way of diagnosis is time-consuming, less efficient, and has a low rate of detection of this disease. Therefore, there is a need for an automatic system that expedites the diagnosis process while retaining its performance and accuracy. Artificial intelligence (AI) technologies such as machine learning (ML) and deep learning (DL) potentially provide powerful solutions to address this problem. In this study, a state-of-the-art CNN model, the densely connected squeeze convolutional neural network (DCSCNN), was developed for the classification of X-ray images of COVID-19, pneumonia, normal, and lung opacity patients. Data were collected from different sources. We applied different preprocessing techniques to enhance the quality of the images so that our model could learn accurately and give optimal performance. Moreover, the attention regions and decisions of the AI model were visualized using the Grad-CAM and LIME methods. The DCSCNN combines the strengths of the Dense and Squeeze networks. In our experiment, seven kinds of classification were performed: six binary classifications (COVID vs. normal, COVID vs. lung opacity, lung opacity vs. normal, COVID vs. pneumonia, pneumonia vs. lung opacity, pneumonia vs. normal) and one multiclass classification (COVID vs. pneumonia vs. lung opacity vs. normal). The main contributions of this paper are as follows. First, the development of the DCSCNN model, which is capable of performing binary as well as multiclass classification with excellent accuracy. Second, to ensure trust, transparency, and explainability of the model, we applied two popular explainable AI (XAI) techniques, i.e., Grad-CAM and LIME. These techniques helped to address the black-box nature of the model while improving its trust, transparency, and explainability. Our proposed DCSCNN model achieved an accuracy of 98.8% for the classification of COVID-19 vs. normal, followed by COVID-19 vs. lung opacity (98.2%), lung opacity vs. normal (97.2%), COVID-19 vs. pneumonia (96.4%), pneumonia vs. lung opacity (95.8%), pneumonia vs. normal (97.4%), and, lastly, the multiclass classification of all four classes, i.e., COVID vs. pneumonia vs. lung opacity vs. normal (94.7%). The DCSCNN model provides excellent classification performance, consequently helping doctors to diagnose diseases quickly and efficiently.
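The seven classification tasks follow directly from the four classes: every unordered pair of classes yields one binary problem (C(4, 2) = 6), plus the single four-way multiclass problem. A short sketch of that enumeration:

```python
from itertools import combinations

classes = ["COVID-19", "pneumonia", "lung opacity", "normal"]

# Every unordered pair of classes defines one one-vs-one binary task.
binary_tasks = list(combinations(classes, 2))

# The full four-way problem is the single multiclass task.
tasks = binary_tasks + [tuple(classes)]
```

This pairwise structure is why the paper reports six separate binary accuracies alongside one multiclass accuracy.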

Review


27 pages, 5251 KiB  
Review
Federated and Transfer Learning Methods for the Classification of Melanoma and Nonmelanoma Skin Cancers: A Prospective Study
by Shafia Riaz, Ahmad Naeem, Hassaan Malik, Rizwan Ali Naqvi and Woong-Kee Loh
Sensors 2023, 23(20), 8457; https://doi.org/10.3390/s23208457 - 13 Oct 2023
Cited by 2 | Viewed by 1545
Abstract
Skin cancer is considered a dangerous type of cancer with a high global mortality rate. Manual skin cancer diagnosis is a challenging and time-consuming method due to the complexity of the disease. Recently, deep learning and transfer learning have been the most effective methods for diagnosing this deadly cancer. To aid dermatologists and other healthcare professionals in classifying images into melanoma and nonmelanoma cancer and enabling the treatment of patients at an early stage, this systematic literature review (SLR) presents various federated learning (FL) and transfer learning (TL) techniques that have been widely applied. This study explores the FL and TL classifiers by evaluating them in terms of the performance metrics reported in research studies, which include true positive rate (TPR), true negative rate (TNR), area under the curve (AUC), and accuracy (ACC). This study was assembled and systemized by reviewing well-reputed studies published in eminent fora between January 2018 and July 2023. The existing literature was compiled through a systematic search of seven well-reputed databases. A total of 86 articles were included in this SLR. This SLR contains the most recent research on FL and TL algorithms for classifying malignant skin cancer. In addition, a taxonomy is presented that summarizes the many malignant and non-malignant cancer classes. The results of this SLR highlight the limitations and challenges of recent research. Consequently, future directions of work and opportunities are established for interested researchers, helping them in the automated classification of melanoma and nonmelanoma skin cancers.
