Sensors
  • Editorial
  • Open Access

21 October 2022

Sensor Data Fusion Based on Deep Learning for Computer Vision Applications and Medical Applications

1 School of Intelligent Mechatronics Engineering, Sejong University, Seoul 05006, Korea
2 Division of Electronics and Electrical Engineering, Dongguk University, 30 Pildong-ro 1-gil, Jung-gu, Seoul 04620, Korea
3 Department of Computer Science, University of Warwick, Coventry CV4 7AL, UK
4 School of Computer Science and Engineering, University of New South Wales, Sydney 1466, Australia
This article belongs to the Special Issue Sensor Data Fusion Based on Deep Learning for Computer Vision and Medical Applications
Sensor fusion is the process of merging data from multiple sources, such as radar, lidar, and camera sensors, to provide information with less uncertainty than can be collected from a single source. Data fusion, in turn, is a process in which multiple data sources increase measurement reliability, range, and accuracy; different measuring principles are also used to confirm detected objects. The combined term sensor data fusion denotes the gathering of data that individual sensors functioning independently cannot provide; it combines the advantages of many sensors and measurement techniques in an efficient manner. The extensive use of various sensors, including visible light sensors, near-infrared (NIR) sensors, thermal camera sensors, fundus cameras, H&E stains, endoscopes, OCT cameras, and magnetic resonance imaging sensors, has given rise to a wide range of emerging applications in computer vision, biometrics, video surveillance, image compression and restoration, medical image analysis, and computer-aided diagnosis. Sensor fusion-based methods allow us to exploit these sensor data and adjust industrial strategies to improve operations while increasing efficiency. High-quality, real-time perception mechanisms are necessary to obtain high accuracy when deploying computer vision and deep learning applications. However, the literature contains few studies on information processing, sensor and data fusion, and fusion architectures for cooperative perception and risk assessment in computer vision and medical applications.
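As a concrete illustration of how fusing independent measurements reduces uncertainty, the sketch below applies classical inverse-variance weighting to two noisy range readings. The sensor names and noise figures are purely hypothetical, not drawn from any study in this Issue:

```python
def fuse_inverse_variance(readings, variances):
    """Fuse independent noisy measurements of the same quantity by
    inverse-variance weighting: the more certain a sensor, the more
    weight its reading receives. The fused variance is always smaller
    than that of any single sensor."""
    weights = [1.0 / v for v in variances]
    total = sum(weights)
    fused_estimate = sum(w * z for w, z in zip(weights, readings)) / total
    fused_variance = 1.0 / total
    return fused_estimate, fused_variance

# Hypothetical range readings (metres) for one object:
# the radar reading is noisier (variance 0.5) than the lidar one (0.25).
estimate, variance = fuse_inverse_variance([2.0, 2.4], [0.5, 0.25])
```

Here the fused estimate lies between the two readings but closer to the lidar's, and the fused variance (1/6) is below both input variances, which is precisely the "less uncertain information" that fusion promises.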
The performance of computer vision technology still faces challenges due to various external environmental factors, and further challenges in this area remain to be solved or improved. To achieve greater accuracy, current systems seek to combine data from numerous sensors using deep learning techniques. This Special Issue of Sensors, entitled Sensor Data Fusion Based on Deep Learning for Computer Vision and Medical Applications, aims to explore high-caliber, cutting-edge research in areas including multiple-approach fusion, spoof detection, image detection, localization, classification, and segmentation by deep learning, in order to tackle challenging problems in computer vision and medical applications. Numerous manuscripts were received for consideration, but only 10 high-caliber, original manuscripts were selected after an extensive peer-review process. Although some provide interesting suggestions for computer vision and medical applications, most of the methodologies discussed in this Special Issue focus on applications of sensor data fusion.
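One common way deep learning systems combine data from several sensors is decision-level (late) fusion: each sensor's model produces class probabilities, which are then averaged. The sketch below is a minimal illustration only; the "visible" and "thermal" softmax outputs are invented stand-ins for real model predictions:

```python
def late_fusion(prob_dists, weights=None):
    """Decision-level (late) fusion: combine per-sensor class-probability
    vectors into a single distribution by (optionally weighted) averaging.
    Each input must already sum to 1, so the fused vector does too."""
    n = len(prob_dists)
    if weights is None:
        weights = [1.0 / n] * n  # uniform weighting by default
    num_classes = len(prob_dists[0])
    return [sum(w * p[c] for w, p in zip(weights, prob_dists))
            for c in range(num_classes)]

# Hypothetical softmax outputs over three classes from a visible-light
# model and a thermal-camera model observing the same scene.
visible = [0.7, 0.2, 0.1]
thermal = [0.4, 0.5, 0.1]
fused = late_fusion([visible, thermal])  # roughly [0.55, 0.35, 0.1]
```

Because fusion happens after each model's prediction, this scheme needs no shared feature space between modalities; feature-level (early) fusion, by contrast, concatenates intermediate representations before classification.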

Funding

This research received no external funding.

Acknowledgments

The Guest Editors are very grateful to all the authors for publishing their valuable research, which led to the success of this Special Issue.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Muhammad, S.; Kim, D.; Cha, J.; Lee, C.; Lee, S.; Baek, S. 3DMesh-GAR: 3D Human Body Mesh-Based Method for Group Activity Recognition. Sensors 2022, 22, 1464. [Google Scholar] [CrossRef] [PubMed]
  2. Hong, S.; Mehdi, S.R.; Zhang, Y.; Shentu, Y.; Wan, Q.; Wang, W.; Raza, K.; Huang, H. Development of coral investigation system based on semantic segmentation of single-channel images. Sensors 2021, 21, 1848. [Google Scholar] [CrossRef] [PubMed]
  3. Muhammad, S.A.; Ahn, H.; Choi, Y.B. Human sentiment and activity recognition in disaster situations using social media images based on deep learning. Sensors 2020, 20, 7115. [Google Scholar] [CrossRef] [PubMed]
  4. Jaiteg, S.; Thakur, D.; Ali, F.; Gera, T.; Kwak, K.S. Deep feature extraction and classification of android malware images. Sensors 2020, 20, 7013. [Google Scholar] [CrossRef] [PubMed]
  5. Sadiq, A.M.; Yasir, S.M.; Ahn, H. Recognition of pashto handwritten characters based on deep learning. Sensors 2020, 20, 5884. [Google Scholar] [CrossRef] [PubMed]
  6. Iftikhar, N.; Akram, S.; Masood, T.; Jaffar, A.; Khan, M.A.; Mosavi, A. Performance Analysis of State-of-the-Art CNN Architectures for LUNA16. Sensors 2022, 22, 4426. [Google Scholar] [CrossRef] [PubMed]
  7. Anza, A.; Hassan, A.; Khan, M.A.; Rehman, S.; Tariq, U.; Kadry, S.; Majumdar, A.; Thinnukool, O. A Long Short-Term Memory Biomarker-Based Prediction Framework for Alzheimer’s Disease. Sensors 2022, 22, 1475. [Google Scholar]
  8. Farhat, A.; Sharif, M.; Khan, M.A.; Tariq, U.; Yong, H.; Cha, J. Multiclass skin lesion classification using hybrid deep features selection and extreme learning machine. Sensors 2022, 22, 799. [Google Scholar] [CrossRef] [PubMed]
  9. Muhammad, M.; Malik, T.S.; Hayat, S.; Hameed, M.; Sun, S.; Wen, J. Deep Learning Approach for Automatic Microaneurysms Detection. Sensors 2022, 22, 542. [Google Scholar] [CrossRef] [PubMed]
  10. Jeza, A.A.; Shuja, J.; Alasmary, W.; Alashaikh, A. Evaluating the dynamics of Bluetooth low energy based COVID-19 risk estimation for educational institutes. Sensors 2021, 21, 6667. [Google Scholar] [CrossRef] [PubMed]
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.
