
Special Issue "Sensors and Deep Learning for Digital Image Processing"

A special issue of Sensors (ISSN 1424-8220). This special issue belongs to the section "Intelligent Sensors".

Deadline for manuscript submissions: closed (31 March 2021) | Viewed by 6346

Special Issue Editors

Prof. Dr. Bogdan Smolka
Guest Editor
Department of Automatic Control, Silesian University of Technology, Akademicka 16, 44-100 Gliwice, Poland
Interests: color image enhancement; noise reduction; medical image analysis; facial expression recognition
Prof. Dr. M. Emre Celebi
Guest Editor
Department of Computer Science, University of Central Arkansas, 201 Donaghey Ave., Conway, AR 72035, USA
Interests: image processing; medical image analysis; data mining; partitional clustering
Prof. Dr. Takahiko Horiuchi
Guest Editor
Department of Imaging Sciences, Graduate School of Engineering, Chiba University, 1-33, Yayoi-cho, Inage-ku, Chiba 263-8522, Japan
Interests: multispectral imaging; image analysis; 3D measurement; material appearance
Prof. Dr. Gianluigi Ciocca
Guest Editor
Department of Informatics, Systems and Communication, University of Milano-Bicocca, Viale Sarca 336, 20126 Milano, Italy
Interests: image understanding; video analysis; machine learning; computer vision

Special Issue Information

Dear Colleagues,

Deep learning has recently triggered a revolution in image processing and computer vision: computational models built from multiple layers learn representations of data in a way loosely inspired by how the brain perceives and integrates multimodal information. In recent years, deep learning methods have outperformed conventional machine learning techniques in many application areas, including image enhancement, image segmentation, object detection, object recognition, scene understanding, image synthesis, and healthcare, among many others.

We would like to invite the academic and industrial research community to submit original research as well as review articles to this Special Issue. Topics of interest include:

  • Learning and Adaptive Sensor Fusion
  • Multisensor Data Fusion
  • Emerging Trends in Deep Learning Techniques
  • Intelligent Measurement Systems
  • Analysis of Image Sensor Data
  • Data Augmentation Techniques 
  • Image Classification, Image Clustering, Object Detection, Object Localization, Image Segmentation, Image Compression
  • Interpolation, Denoising, Deblurring, Dehazing, Inpainting and Super-Resolution
  • Deep Learning Architectures for Remote Sensing
  • Image Quality Assessment
  • Deep Learning-Based Biometrics 
  • Human/Machine Smart Interfaces
  • Industrial Applications
  • 3D Point Cloud Measurement and Processing
  • Image Synthesis

Prof. Dr. Bogdan Smolka
Prof. Dr. M. Emre Celebi
Prof. Dr. Takahiko Horiuchi
Prof. Dr. Gianluigi Ciocca
Guest Editors

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once you are registered, click here to go to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Sensors is an international peer-reviewed open access semimonthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 2400 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • sensors
  • image processing
  • machine learning
  • data mining
  • pattern recognition
  • deep learning
  • convolutional neural networks
  • image enhancement
  • object detection and recognition
  • biometrics
  • information processing

Published Papers (5 papers)


Research


Article
Zero-Shot Action Recognition with Three-Stream Graph Convolutional Networks
Sensors 2021, 21(11), 3793; https://doi.org/10.3390/s21113793 - 30 May 2021
Cited by 1 | Viewed by 1112
Abstract
Large datasets are often used to improve the accuracy of action recognition. However, very large datasets are problematic because, for example, their annotation is labor-intensive. This has encouraged research in zero-shot action recognition (ZSAR). Presently, most ZSAR methods recognize actions from individual video frames. These methods are affected by lighting, camera angle, and background, and most cannot process time-series data, which reduces their accuracy. In this paper, to solve these problems, we propose a three-stream graph convolutional network that processes both types of data. Our model has two parts: one processes RGB data, which contains extensive useful information; the other processes skeleton data, which is unaffected by lighting and background. By combining the two outputs with a weighted sum, our model predicts the final results for ZSAR. Experiments on three datasets demonstrate that our model achieves greater accuracy than a baseline model. Moreover, we show that our model can learn from human experience, which further improves its accuracy.
(This article belongs to the Special Issue Sensors and Deep Learning for Digital Image Processing)
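As a rough illustration of the weighted-sum fusion described in the abstract, the sketch below combines per-class scores from an RGB stream and a skeleton stream. The weighting value, class count, and logits are hypothetical stand-ins, not the paper's actual configuration:

```python
import numpy as np

def softmax(x):
    """Numerically stable softmax over the last axis."""
    e = np.exp(x - np.max(x, axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

def fuse_streams(rgb_logits, skeleton_logits, alpha=0.6):
    """Weighted sum of per-class scores from two streams.

    alpha weighs the RGB stream; (1 - alpha) weighs the skeleton stream.
    The value 0.6 is an illustrative choice, not taken from the paper.
    """
    return alpha * softmax(rgb_logits) + (1 - alpha) * softmax(skeleton_logits)

# Toy example: 3 candidate action classes.
rgb = np.array([2.0, 0.5, 0.1])    # RGB stream favors class 0
skel = np.array([0.2, 0.1, 1.8])   # skeleton stream favors class 2
scores = fuse_streams(rgb, skel, alpha=0.6)
pred = int(np.argmax(scores))      # class with the highest fused score
```

Because the streams disagree here, the fused prediction follows whichever stream the weighting favors; tuning `alpha` trades off the two modalities.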

Communication
Classification of Diffuse Glioma Subtype from Clinical-Grade Pathological Images Using Deep Transfer Learning
Sensors 2021, 21(10), 3500; https://doi.org/10.3390/s21103500 - 17 May 2021
Cited by 3 | Viewed by 1268
Abstract
Diffuse gliomas are the most common primary brain tumors, and they vary considerably in their morphology, location, genetic alterations, and response to therapy. In 2016, the World Health Organization (WHO) provided new guidelines for an integrated diagnosis of diffuse gliomas that incorporates both morphologic and molecular features. In this study, we demonstrate how deep learning approaches can be used for automatic classification of glioma subtypes and grading using whole-slide images obtained from routine clinical practice. A deep transfer learning method using the ResNet50V2 model was trained to classify subtypes and grades of diffuse gliomas according to the WHO's new 2016 classification. The balanced accuracy of the diffuse glioma subtype classification model with majority voting was 0.8727. These results highlight an emerging role for deep learning in the future practice of pathologic diagnosis.
(This article belongs to the Special Issue Sensors and Deep Learning for Digital Image Processing)
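The majority voting mentioned in the abstract aggregates many patch-level predictions from a whole-slide image into one slide-level label. A minimal sketch (the subtype names are illustrative placeholders, not the paper's label set):

```python
from collections import Counter

def slide_prediction(patch_predictions):
    """Aggregate patch-level subtype predictions into a single
    slide-level label by majority vote (ties broken by first-seen
    label, which is Counter's documented behavior)."""
    return Counter(patch_predictions).most_common(1)[0][0]

# Hypothetical per-patch classifier outputs for one slide.
patches = ["astrocytoma", "oligodendroglioma", "astrocytoma", "astrocytoma"]
label = slide_prediction(patches)
```

Voting over patches makes the slide-level call robust to a minority of misclassified regions within the slide.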

Article
Automatic vs. Human Recognition of Pain Intensity from Facial Expression on the X-ITE Pain Database
Sensors 2021, 21(9), 3273; https://doi.org/10.3390/s21093273 - 10 May 2021
Cited by 3 | Viewed by 991
Abstract
Prior work on automated methods demonstrated that it is possible to recognize pain intensity from frontal faces in videos, while there is an assumption that humans are very adept at this task compared to machines. In this paper, we investigate whether this assumption is correct by comparing the results achieved by two human observers with those achieved by a Random Forest classifier (RFc) baseline model (called RFc-BL) and by three proposed automated models. The first proposed model is a Random Forest classifying descriptors of Action Unit (AU) time series; the second is a modified MobileNetV2 CNN classifying face images that combine three points in time; and the third is a custom deep network combining two CNN branches using the same input as MobileNetV2 plus knowledge of the RFc. We conduct experiments with the X-ITE phasic pain database, which comprises videotaped responses to heat and electrical pain stimuli, each of three intensities. Distinguishing these six stimulation types plus no stimulation was the main 7-class classification task for the human observers and automated approaches. Further, we conducted reduced 5-class and 3-class classification experiments, applied multi-task learning, and tested a newly suggested sample weighting method. Experimental results show that the pain assessments of the human observers are significantly better than guessing and outperform the automatic baseline approach (RFc-BL) by about 1%; however, human performance is quite poor because pain that can ethically be induced in experimental studies often does not show up in the facial reaction. We discovered that downweighting those samples during training improves the performance for all samples. The proposed RFc and two-CNNs models (using the proposed sample weighting) significantly outperformed the human observers by about 6% and 7%, respectively.
(This article belongs to the Special Issue Sensors and Deep Learning for Digital Image Processing)
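The sample-downweighting idea (stimulated samples with little visible facial reaction contribute less to training) could be sketched as below. The response-strength scores and the `floor` parameter are hypothetical stand-ins for illustration, not the paper's actual weighting formula:

```python
import numpy as np

def sample_weights(facial_response_strength, floor=0.2):
    """Map per-sample facial-response scores in [0, 1] to training
    weights in [floor, 1]. Samples with weak responses are
    downweighted toward `floor` rather than dropped entirely.

    Both the scoring and the linear mapping are illustrative
    assumptions, not the method published in the paper."""
    s = np.clip(np.asarray(facial_response_strength, dtype=float), 0.0, 1.0)
    return floor + (1.0 - floor) * s

# Hypothetical scores: no reaction, moderate reaction, strong reaction.
w = sample_weights([0.0, 0.5, 1.0])
```

Keeping a nonzero floor retains some gradient signal from hard samples while letting clearly expressive samples dominate training.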

Article
Irradiance Restoration Based Shadow Compensation Approach for High Resolution Multispectral Satellite Remote Sensing Images
Sensors 2020, 20(21), 6053; https://doi.org/10.3390/s20216053 - 24 Oct 2020
Cited by 1 | Viewed by 813
Abstract
Numerous applications, such as image classification, target recognition, and change detection, are hindered by shadows in high resolution satellite remote sensing images. To improve remote sensing image utilization, it is important to restore surface-feature information in shadow regions. Current shadow compensation methods for high resolution multispectral satellite remote sensing images suffer from problems such as color distortion in compensated shadows and interference with non-shadow regions. In this study, to address these problems, we analyzed the surface irradiance of both shadow and non-shadow areas based on the satellite sensor imaging mechanism and radiative transfer theory, and developed an irradiance restoration based (IRB) shadow compensation approach under the assumption that a shadow area has the same irradiance as a nearby non-shadow area containing the same type of features. To validate the performance of the proposed IRB approach, we tested numerous WorldView-2 and WorldView-3 images acquired at different sites and times. In particular, we evaluated the shadow compensation performance of the proposed IRB approach by qualitative visual comparison and quantitative assessment on two WorldView-3 test images of Tripoli, Libya. The resulting images, produced automatically by our IRB method, deliver a good visual impression and relatively low relative root mean square error (rRMSE) values. Experimental results show that the proposed IRB shadow compensation approach not only compensates surface-feature information in shadow areas effectively and automatically, but also preserves information of objects in non-shadow regions of high resolution multispectral satellite remote sensing images.
(This article belongs to the Special Issue Sensors and Deep Learning for Digital Image Processing)
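The core assumption above (a shadow area has the same irradiance as a nearby non-shadow area of the same feature type) can be caricatured with a simple per-band statistics-matching sketch. This is a simplified stand-in for the paper's radiative-transfer-based derivation, not the published IRB method; the masks and pixel values are toy inputs:

```python
import numpy as np

def compensate_shadow(image, shadow_mask, lit_mask):
    """Per-band linear mapping that transfers the mean and spread of a
    nearby lit (non-shadow) region onto the shadow region.

    image: H x W x B float array; masks: H x W boolean arrays."""
    out = image.astype(float).copy()
    for b in range(image.shape[-1]):
        band = out[..., b]
        mu_s, sd_s = band[shadow_mask].mean(), band[shadow_mask].std() + 1e-6
        mu_l, sd_l = band[lit_mask].mean(), band[lit_mask].std() + 1e-6
        # Rescale shadow pixels so their statistics match the lit region.
        band[shadow_mask] = (band[shadow_mask] - mu_s) * (sd_l / sd_s) + mu_l
    return out

# Toy 1-band scene: the shadow row is a darkened copy of the lit row.
img = np.zeros((2, 4, 1))
img[..., 0] = [[40.0, 42.0, 44.0, 46.0],   # shadow row
               [80.0, 84.0, 88.0, 92.0]]   # lit row
shadow = np.zeros((2, 4), dtype=bool)
shadow[0] = True
out = compensate_shadow(img, shadow, ~shadow)
```

Non-shadow pixels pass through unchanged, which mirrors the abstract's claim that information in non-shadow regions is preserved.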

Other


Letter
Gated Dehazing Network via Least Square Adversarial Learning
Sensors 2020, 20(21), 6311; https://doi.org/10.3390/s20216311 - 05 Nov 2020
Cited by 4 | Viewed by 950
Abstract
In a hazy environment, visibility is reduced and objects are difficult to identify, so many dehazing techniques have been proposed to remove haze. In particular, methods based on estimating an atmospheric scattering model produce distortion when the model is estimated inaccurately. We present a novel residual-based dehazing network model to overcome this performance limitation. More specifically, the proposed model adopts a gate fusion network that generates the dehazed results using a residual operator. To further reduce the divergence between clean and dehazed images, the proposed discriminator distinguishes dehazed results from clean images and then reduces the statistical difference via adversarial learning. To verify each element of the proposed model, we hierarchically performed the haze removal process in an ablation study. Experimental results show that the proposed method outperformed state-of-the-art approaches in terms of peak signal-to-noise ratio (PSNR), structural similarity index measure (SSIM), CIE Delta E 2000 color difference (CIEDE2000), and mean squared error (MSE). It also produces subjectively high-quality images without color distortion or undesired artifacts for both synthetic and real-world hazy images.
(This article belongs to the Special Issue Sensors and Deep Learning for Digital Image Processing)
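The "least square adversarial learning" in the title points to a least-squares GAN objective. Assuming the standard LSGAN form (this sketch is not the paper's exact loss, which may add further terms), the discriminator and generator terms look like:

```python
import numpy as np

def lsgan_d_loss(d_real, d_fake):
    """Least-squares discriminator loss: push scores on clean (real)
    images toward 1 and scores on dehazed (fake) images toward 0."""
    return 0.5 * np.mean((d_real - 1.0) ** 2) + 0.5 * np.mean(d_fake ** 2)

def lsgan_g_loss(d_fake):
    """Generator term: push discriminator scores on dehazed images
    toward 1, shrinking the statistical gap to clean images."""
    return 0.5 * np.mean((d_fake - 1.0) ** 2)

# A perfectly fooled-nothing discriminator incurs zero loss:
d_perfect = lsgan_d_loss(np.ones(4), np.zeros(4))
# A generator whose outputs all score 0 pays the maximum penalty of 0.5:
g_worst = lsgan_g_loss(np.zeros(4))
```

Compared with the original cross-entropy GAN loss, the quadratic penalty also punishes samples that are classified correctly but lie far from the decision boundary, which tends to stabilize training.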
