Machine and Deep Learning in Sensing and Imaging

A special issue of Sensors (ISSN 1424-8220). This special issue belongs to the section "Intelligent Sensors".

Deadline for manuscript submissions: 30 April 2025 | Viewed by 18873

Special Issue Editor


Dr. Kate Saenko
Guest Editor
Department of Computer Science, Boston University, Boston, MA 02215, USA
Interests: computer intelligence; image processing; machine learning; artificial intelligence

Special Issue Information

Dear Colleagues,

The application of machine and deep learning methods in sensing and imaging can have a profound impact on the analysis and treatment of the human body and on therapeutic decisions, and may ultimately improve outcomes for patients. A wide range of machine and deep learning methods have been applied to analyze and interpret data of various kinds, whether from sensors embedded in tools and devices or from portable sensor devices. Advances in network design, processing power, the availability of easy-to-use software packages, and the scale of available medical image databases have accelerated developments in this exciting field. Nevertheless, studies evaluating the potential of machine and deep learning methods for the detection, lesion segmentation, therapeutic decision-making, and prognosis of diseases of the human body remain relatively sparse.

This Special Issue encourages authors from academia and industry to submit new research results regarding methods and applications in this field. We welcome high-quality original research or review articles relating to the application of current machine and deep learning methods to the human body, including clinical applications, methods, data augmentation, machine learning interpretation, and new algorithm design. The Special Issue topics include, but are not limited to:

  • Medical imaging
  • Biomedical engineering
  • Data fusion techniques
  • Human body imaging and therapy
  • Imaging modality
  • Decision support algorithms
  • Predictive modelling of treatment efficacy
  • Multi-parametric study

Dr. Kate Saenko
Guest Editor

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once registered, proceed to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Sensors is an international peer-reviewed open access semimonthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 2600 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • medical imaging
  • biomedical engineering
  • data fusion techniques
  • human body imaging and therapy
  • imaging modality
  • decision support algorithms
  • predictive modelling of treatment efficacy

Benefits of Publishing in a Special Issue

  • Ease of navigation: Grouping papers by topic helps scholars navigate broad scope journals more efficiently.
  • Greater discoverability: Special Issues support the reach and impact of scientific research. Articles in Special Issues are more discoverable and cited more frequently.
  • Expansion of research network: Special Issues facilitate connections among authors, fostering scientific collaborations.
  • External promotion: Articles in Special Issues are often promoted through the journal's social media, increasing their visibility.
  • e-Book format: Special Issues with more than 10 articles can be published as dedicated e-books, ensuring wide and rapid dissemination.

Further information on MDPI's Special Issue policies is available on the MDPI website.

Published Papers (5 papers)


Research

24 pages, 6411 KiB  
Article
CNN-Based Kidney Segmentation Using a Modified CLAHE Algorithm
by Abror Shavkatovich Buriboev, Ahmadjon Khashimov, Akmal Abduvaitov and Heung Seok Jeon
Sensors 2024, 24(23), 7703; https://doi.org/10.3390/s24237703 - 2 Dec 2024
Cited by 1 | Viewed by 1183
Abstract
This paper presents an enhanced approach to kidney segmentation using a modified CLAHE preprocessing method, aimed at improving image clarity and CNN performance on the KiTS19 dataset. To assess the impact of the modified CLAHE method, we conducted quality evaluations using the BRISQUE metric, comparing the original, standard CLAHE and modified CLAHE versions of the dataset. The BRISQUE score decreased from 28.8 in the original dataset to 21.1 with the modified CLAHE method, indicating a significant improvement in image quality. Furthermore, CNN segmentation accuracy rose from 0.951 with the original dataset to 0.996 with the modified CLAHE method, outperforming the accuracy achieved with standard CLAHE preprocessing (0.969). These results highlight the benefits of the modified CLAHE method in refining image quality and enhancing segmentation performance. This study highlights the value of adaptive preprocessing in medical imaging workflows and shows that CNN-based kidney segmentation accuracy may be greatly increased by altering conventional CLAHE. Our method provides insightful information on optimizing preprocessing for medical imaging applications, leading to more accurate and dependable segmentation results for better clinical diagnosis. Full article
(This article belongs to the Special Issue Machine and Deep Learning in Sensing and Imaging)
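The paper's modified CLAHE algorithm is not reproduced on this page, but the contrast-limiting idea behind standard CLAHE can be sketched in a few lines of NumPy. This is a single-tile simplification for illustration only: real CLAHE applies the step per tile with bilinear interpolation between tiles, and the authors' modification differs further.

```python
import numpy as np

def clip_limited_equalize(img, clip_limit=0.05, n_bins=256):
    """Contrast-limited histogram equalization (single-tile sketch).

    Clipping the histogram before building the CDF caps how much any
    one intensity range can be stretched, which is the core idea
    behind CLAHE and the reason it amplifies noise less than plain
    histogram equalization."""
    hist, _ = np.histogram(img, bins=n_bins, range=(0, 256))
    limit = max(int(clip_limit * img.size), 1)
    excess = np.maximum(hist - limit, 0).sum()
    # Clip each bin and redistribute the clipped mass uniformly.
    hist = np.minimum(hist, limit) + excess // n_bins
    cdf = hist.cumsum().astype(np.float64)
    cdf = (cdf - cdf.min()) / max(cdf.max() - cdf.min(), 1) * 255.0
    return cdf[np.clip(img, 0, 255).astype(np.uint8)].astype(np.uint8)
```

In a full CLAHE implementation this mapping is computed per tile (e.g. 8x8 tiles) and blended between neighbouring tiles to avoid block artifacts.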

23 pages, 4902 KiB  
Article
Concatenated CNN-Based Pneumonia Detection Using a Fuzzy-Enhanced Dataset
by Abror Shavkatovich Buriboev, Dilnoz Muhamediyeva, Holida Primova, Djamshid Sultanov, Komil Tashev and Heung Seok Jeon
Sensors 2024, 24(20), 6750; https://doi.org/10.3390/s24206750 - 21 Oct 2024
Cited by 4 | Viewed by 1875
Abstract
Pneumonia is a form of acute respiratory infection affecting the lungs. The symptoms of viral and bacterial pneumonia are similar, and rapid diagnosis of the disease is difficult: polymerase chain reaction-based methods, which have the greatest reliability, provide results only after a few hours while also demanding strict compliance with the analysis protocol and a high level of professionalism from the personnel. This study proposed a Concatenated CNN (CCNN) model for pneumonia detection combined with a fuzzy logic-based image improvement method. The fuzzy logic-based enhancement process is based on a new fuzzification refinement algorithm, which significantly improves image quality and feature extraction for the CCNN model. Four datasets (the original images and versions enhanced using fuzzy entropy, standard deviation, and histogram equalization) were used to train the model. The CCNN's performance was significantly improved by the enhanced datasets, with the fuzzy entropy-enhanced dataset producing the best results. The proposed CCNN attained remarkable classification metrics, including 98.9% accuracy, 99.3% precision, 99.8% F1-score, and 99.6% recall. Experimental comparisons showed that the fuzzy logic-based enhancement worked significantly better than traditional image enhancement methods, resulting in higher diagnostic precision. This study demonstrates how well deep learning models and sophisticated image enhancement techniques work together to analyze medical images. Full article
(This article belongs to the Special Issue Machine and Deep Learning in Sensing and Imaging)
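The paper's fuzzification refinement algorithm is its own contribution and is not shown here, but a classic fuzzy-logic enhancement in the same spirit is the Pal-King-style intensification (INT) operator, sketched below. The fixed 0.5 crossover and single pass are illustrative assumptions, not the authors' parameters.

```python
import numpy as np

def fuzzy_intensify(img):
    """Fuzzy INT-operator contrast enhancement (illustrative sketch).

    Pixels are fuzzified into membership values in [0, 1], pushed away
    from the 0.5 crossover point by the intensification operator, then
    defuzzified back to 8-bit intensities."""
    mu = np.clip(img.astype(np.float64) / 255.0, 0.0, 1.0)
    low = mu < 0.5
    # INT operator: darken below the crossover, brighten above it.
    mu = np.where(low, 2.0 * mu**2, 1.0 - 2.0 * (1.0 - mu)**2)
    return (mu * 255.0).astype(np.uint8)
```

Applying the operator repeatedly sharpens contrast further; entropy-based variants like the one used in the paper instead adapt the membership function to the image's histogram.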

18 pages, 9570 KiB  
Article
A Depth Awareness and Learnable Feature Fusion Network for Enhanced Geometric Perception in Semantic Correspondence
by Fazeng Li, Chunlong Zou, Juntong Yun, Li Huang, Ying Liu, Bo Tao and Yuanmin Xie
Sensors 2024, 24(20), 6680; https://doi.org/10.3390/s24206680 - 17 Oct 2024
Viewed by 1308
Abstract
Deep learning is becoming the most widely used technology for multi-sensor data fusion. Semantic correspondence has recently emerged as a foundational task, enabling a range of downstream applications, such as style or appearance transfer, robot manipulation, and pose estimation, through its ability to provide robust correspondence in RGB images with semantic information. However, current representations generated by self-supervised learning and generative models are often limited in their ability to capture and understand the geometric structure of objects, which is significant for matching the correct details in applications of semantic correspondence. Furthermore, efficiently fusing these two types of features presents an interesting challenge, and achieving their harmonious integration is crucial for improving the expressive power of models in various tasks. To tackle these issues, our key idea is to integrate depth information from depth estimation or depth sensors into feature maps and to leverage learnable weights for feature fusion. First, depth information is used to model pixel-wise depth distributions, assigning relative depth weights to feature maps so that an object's structural information can be perceived. Then, based on a contrastive learning optimization objective, a series of weights is optimized to leverage feature maps from self-supervised learning and generative models. Depth features are naturally embedded into the feature maps, guiding the network to learn geometric structure information about objects and alleviating depth ambiguity issues. Experiments on the SPair-71K and AP-10K datasets show that the proposed method achieves scores of 81.8 and 83.3, respectively, on the percentage of correct keypoints (PCK) at the 0.1 level. Our approach not only demonstrates significant advantages in experimental results but also introduces a depth awareness module and a learnable feature fusion module, which enhance the understanding of object structures through depth information and fully utilize features from various pre-trained models, offering new possibilities for the application of deep learning in RGB and depth data fusion. We will also continue to focus on accelerating model inference and making the model more lightweight so that it can run faster. Full article
(This article belongs to the Special Issue Machine and Deep Learning in Sensing and Imaging)
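The PCK@0.1 metric reported above is simple to compute; a minimal NumPy sketch follows, assuming the common convention of thresholding the keypoint error by alpha times the larger bounding-box side (conventions vary between benchmarks, so this may differ in detail from the paper's evaluation code).

```python
import numpy as np

def pck(pred, gt, bbox_sizes, alpha=0.1):
    """Percentage of Correct Keypoints.

    A predicted keypoint counts as correct when its Euclidean distance
    to the ground truth is within alpha * max(bbox width, bbox height).
    pred, gt: (N, 2) arrays of (x, y); bbox_sizes: (N, 2) of (w, h)."""
    dist = np.linalg.norm(pred - gt, axis=1)
    thresh = alpha * bbox_sizes.max(axis=1)
    return 100.0 * (dist <= thresh).mean()
```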

18 pages, 3601 KiB  
Article
Using Deep Learning Architectures for Detection and Classification of Diabetic Retinopathy
by Cheena Mohanty, Sakuntala Mahapatra, Biswaranjan Acharya, Fotis Kokkoras, Vassilis C. Gerogiannis, Ioannis Karamitsos and Andreas Kanavos
Sensors 2023, 23(12), 5726; https://doi.org/10.3390/s23125726 - 19 Jun 2023
Cited by 69 | Viewed by 9646
Abstract
Diabetic retinopathy (DR) is a common complication of long-term diabetes, affecting the human eye and potentially leading to permanent blindness. The early detection of DR is crucial for effective treatment, as symptoms often manifest in later stages. The manual grading of retinal images is time-consuming, prone to errors, and lacks patient-friendliness. In this study, we propose two deep learning (DL) architectures, a hybrid network combining VGG16 and XGBoost Classifier, and the DenseNet 121 network, for DR detection and classification. To evaluate the two DL models, we preprocessed a collection of retinal images obtained from the APTOS 2019 Blindness Detection Kaggle Dataset. This dataset exhibits an imbalanced image class distribution, which we addressed through appropriate balancing techniques. The performance of the considered models was assessed in terms of accuracy. The results showed that the hybrid network achieved an accuracy of 79.50%, while the DenseNet 121 model achieved an accuracy of 97.30%. Furthermore, a comparative analysis with existing methods utilizing the same dataset revealed the superior performance of the DenseNet 121 network. The findings of this study demonstrate the potential of DL architectures for the early detection and classification of DR. The superior performance of the DenseNet 121 model highlights its effectiveness in this domain. The implementation of such automated methods can significantly improve the efficiency and accuracy of DR diagnosis, benefiting both healthcare providers and patients. Full article
(This article belongs to the Special Issue Machine and Deep Learning in Sensing and Imaging)
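The abstract notes that the APTOS 2019 dataset is class-imbalanced and was balanced before training. One common remedy (not necessarily the authors' exact technique) is inverse-frequency class weighting, shown here following the n_samples / (n_classes * count) convention popularized by scikit-learn's "balanced" mode.

```python
import numpy as np

def balanced_class_weights(labels):
    """Inverse-frequency class weights for an imbalanced label vector.

    Rare classes receive proportionally larger weights, so a weighted
    loss penalizes their misclassification more heavily."""
    classes, counts = np.unique(labels, return_counts=True)
    weights = labels.size / (classes.size * counts.astype(np.float64))
    return dict(zip(classes.tolist(), weights.tolist()))
```

The resulting dictionary can be passed to most training frameworks as per-class loss weights; oversampling the minority classes is the main alternative.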

17 pages, 3617 KiB  
Article
Saliency Map and Deep Learning in Binary Classification of Brain Tumours
by Wojciech Chmiel, Joanna Kwiecień and Kacper Motyka
Sensors 2023, 23(9), 4543; https://doi.org/10.3390/s23094543 - 7 May 2023
Cited by 4 | Viewed by 3620
Abstract
The paper is devoted to the application of saliency analysis methods in the performance analysis of deep neural networks used for the binary classification of brain tumours. We have presented the basic issues related to deep learning techniques. A significant challenge in using deep learning methods is the ability to explain the decision-making process of the network. To ensure accurate results, the deep network being used must undergo extensive training to produce high-quality predictions. There are various network architectures that differ in their properties and number of parameters. Consequently, an intriguing question is how these different networks arrive at similar or distinct decisions based on the same set of prerequisites. Therefore, three widely used deep convolutional networks, namely VGG16, ResNet50 and EfficientNetB7, have been discussed and used as backbone models. We have customized the output layer of these pre-trained models with a softmax layer. In addition, an additional network used to assess the obtained saliency areas has been described. For each of the above networks, many tests have been performed using key metrics, including a statistical evaluation of the impact of class activation mapping (CAM) and gradient-weighted class activation mapping (Grad-CAM) on network performance, on a publicly available dataset of brain tumour X-ray images. Full article
(This article belongs to the Special Issue Machine and Deep Learning in Sensing and Imaging)
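Grad-CAM's core computation, once a convolutional layer's activations and the gradients of the class score with respect to them have been extracted from the network, is a gradient-weighted channel sum followed by a ReLU. A framework-agnostic NumPy sketch of that final step:

```python
import numpy as np

def grad_cam(activations, gradients):
    """Grad-CAM heatmap from one conv layer.

    activations, gradients: (C, H, W) arrays, where gradients holds the
    derivative of the target class score w.r.t. each activation.
    Channel weights are the spatially averaged gradients; the map is
    the ReLU of the weighted activation sum, normalized to [0, 1]."""
    weights = gradients.mean(axis=(1, 2))             # (C,)
    cam = np.tensordot(weights, activations, axes=1)  # (H, W)
    cam = np.maximum(cam, 0.0)                        # keep positive evidence
    if cam.max() > 0:
        cam /= cam.max()
    return cam
```

The (H, W) map is then upsampled to the input resolution and overlaid on the image; plain CAM is the special case where the weights come from a global-average-pooling classifier head rather than gradients.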