
Machine Learning and Image-Based Smart Sensing and Applications

A special issue of Sensors (ISSN 1424-8220). This special issue belongs to the section "Sensing and Imaging".

Deadline for manuscript submissions: 31 July 2025

Special Issue Editors


Prof. Dr. Ran-Zan Wang
Guest Editor
Department of Computer Science and Engineering, Yuan Ze University, Taoyuan City 32003, Taiwan
Interests: image processing; deep learning

Dr. Shang-Kuan Chen
Guest Editor
Department of Computer Science and Engineering, Yuan Ze University, Taoyuan City 32003, Taiwan
Interests: image encoding; machine learning

Special Issue Information

Dear Colleagues,

In recent years, the integration of machine learning techniques with image-based smart sensing has emerged as a powerful paradigm, reshaping the landscape of computer vision technologies and their applications. This convergence has been fueled by remarkable advancements in both machine learning algorithms and image processing methods, unlocking unprecedented capabilities in perception, understanding, and decision making. From healthcare to environmental monitoring, from autonomous systems to augmented reality, the synergy between machine learning and image-based sensing has paved the way for innovative solutions to complex real-world problems. This Special Issue seeks to explore the latest developments and trends in this exciting field, providing a platform for researchers and practitioners to exchange ideas, share insights, and push the boundaries of what is possible with machine learning and image-based smart sensing.

Topics of interest include, but are not limited to, the following:

  • Deep learning algorithms for image-based smart sensing applications.
  • Machine learning approaches for medical imaging and healthcare diagnostics.
  • Computer vision techniques for image analysis, recognition, and understanding.
  • Security and surveillance systems utilizing image analysis and computer vision.
  • Multi-modal sensor fusion incorporating visual data for enhanced perception.
  • Environmental monitoring and sustainability solutions leveraging image-based sensing.

Prof. Dr. Ran-Zan Wang
Dr. Shang-Kuan Chen
Guest Editors

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once you are registered, click here to go to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Sensors is an international peer-reviewed open access semimonthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 2600 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • deep learning
  • smart sensing
  • image processing
  • computer vision
  • medical imaging
  • healthcare diagnostics
  • security and surveillance systems
  • multi-modal sensor fusion
  • environmental monitoring and sustainability solutions

Benefits of Publishing in a Special Issue

  • Ease of navigation: Grouping papers by topic helps scholars navigate broad scope journals more efficiently.
  • Greater discoverability: Special Issues support the reach and impact of scientific research. Articles in Special Issues are more discoverable and cited more frequently.
  • Expansion of research network: Special Issues facilitate connections among authors, fostering scientific collaborations.
  • External promotion: Articles in Special Issues are often promoted through the journal's social media, increasing their visibility.
  • e-Book format: Special Issues with more than 10 articles can be published as dedicated e-books, ensuring wide and rapid dissemination.

Further information on MDPI's Special Issue policies can be found here.

Published Papers (2 papers)


Research

22 pages, 25110 KiB  
Article
Depth-Based Intervention Detection in the Neonatal Intensive Care Unit Using Vision Transformers
by Zein Hajj-Ali, Yasmina Souley Dosso, Kim Greenwood, JoAnn Harrold and James R. Green
Sensors 2024, 24(23), 7753; https://doi.org/10.3390/s24237753 - 4 Dec 2024
Cited by 1
Abstract
Depth cameras can provide an effective, noncontact, and privacy-preserving means to monitor patients in the Neonatal Intensive Care Unit (NICU). Clinical interventions and routine care events can disrupt video-based patient monitoring. Automatically detecting these periods can decrease the time required for hand-annotating recordings, which is needed for system development. Moreover, the automatic detection can be used in the future for real-time or retrospective intervention event classification. An intervention detection method based solely on depth data was developed using a vision transformer (ViT) model trained on real-world data from patients in the NICU. Multiple design parameters were investigated, including the encoding of depth data and a perspective transform to account for nonoptimal camera placement. The best-performing model used ∼85 M trainable parameters, leveraged both the perspective transform and HHA (Horizontal disparity, Height above ground, and Angle with gravity) encoding, and achieved a sensitivity of 85.6%, a precision of 89.8%, and an F1-score of 87.6%.
(This article belongs to the Special Issue Machine Learning and Image-Based Smart Sensing and Applications)
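
The HHA encoding named in this abstract converts a single-channel depth frame into a three-channel image (horizontal disparity, height above ground, angle with gravity) so that vision backbones pretrained on RGB inputs can be applied. The sketch below is a simplified illustration of that idea, not the authors' implementation; the focal length, camera height, and the shortcut of treating the image's vertical axis as aligned with gravity are all assumptions.

```python
import numpy as np

def hha_encode(depth_m, fy=575.0, cam_height_m=1.2):
    """Map a depth image in meters to a rough 3-channel HHA-style image:
    Horizontal disparity, Height above ground, Angle with gravity."""
    h, w = depth_m.shape
    d = np.clip(depth_m, 0.1, 10.0)  # clamp to a plausible working range

    # Channel 1: horizontal disparity (inverse depth), rescaled to [0, 255].
    disparity = 1.0 / d
    disp_ch = 255.0 * (disparity - disparity.min()) / (np.ptp(disparity) + 1e-8)

    # Channel 2: height above ground from a pinhole-camera approximation,
    # assuming a known camera height and gravity along the image's vertical
    # axis (a simplification of full gravity-direction estimation).
    rows = np.arange(h).reshape(-1, 1) - h / 2.0
    height = cam_height_m - d * rows / fy
    height_ch = 255.0 * np.clip(height, 0.0, 2.5) / 2.5

    # Channel 3: angle of the local surface normal (from depth gradients)
    # relative to the viewing axis, in degrees, rescaled to [0, 255].
    gy, gx = np.gradient(d)
    normal_z = 1.0 / np.sqrt(gx**2 + gy**2 + 1.0)
    angle_ch = 255.0 * np.degrees(np.arccos(normal_z)) / 90.0

    return np.stack([disp_ch, height_ch, angle_ch], axis=-1).astype(np.uint8)
```

The resulting three-channel frame can then be fed to a ViT in place of an RGB image; the paper's perspective transform would presumably be applied to the depth frame before this encoding step.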

16 pages, 4191 KiB  
Article
Respiratory Rate Estimation from Thermal Video Data Using Spatio-Temporal Deep Learning
by Mohsen Mozafari, Andrew J. Law, Rafik A. Goubran and James R. Green
Sensors 2024, 24(19), 6386; https://doi.org/10.3390/s24196386 - 2 Oct 2024
Cited by 1
Abstract
Thermal videos provide a privacy-preserving yet information-rich data source for remote health monitoring, especially for respiration rate (RR) estimation. This paper introduces an end-to-end deep learning approach to RR measurement using thermal video data. A detection transformer (DeTr) first finds the subject's facial region of interest in each thermal frame. A respiratory signal is then estimated from the dynamically cropped thermal video using 3D convolutional neural network and bidirectional long short-term memory stages. To account for the expected phase shift between respiration measured with a respiratory effort belt versus a facial video, a novel loss function based on negative maximum cross-correlation and absolute frequency peak difference was introduced. Thermal recordings from 22 subjects, with simultaneous gold-standard respiratory effort measurements, were collected while subjects sat or stood, with and without a face mask. The RR estimation results showed that the proposed method outperformed existing models, achieving an error of only 1.6 breaths per minute across the four conditions. The proposed method sets a new state of the art for RR estimation accuracy while still permitting real-time RR estimation.
(This article belongs to the Special Issue Machine Learning and Image-Based Smart Sensing and Applications)
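
The loss described in this abstract combines a negative maximum cross-correlation term, which tolerates a constant phase shift between the belt and video signals, with a penalty on the difference between dominant spectral frequencies. Below is an illustrative PyTorch sketch of that idea, not the paper's implementation; the lag window, the softmax-weighted spectral peak (swapped in to keep the frequency term differentiable), and the weights `alpha` and `beta` are our assumptions.

```python
import torch

def rr_loss(pred, target, fs=30.0, max_lag=60, alpha=1.0, beta=10.0):
    """pred, target: (batch, time) respiratory waveforms sampled at fs Hz."""
    # Zero-mean, unit-variance normalization so correlation is scale-free.
    pred = (pred - pred.mean(1, keepdim=True)) / (pred.std(1, keepdim=True) + 1e-8)
    target = (target - target.mean(1, keepdim=True)) / (target.std(1, keepdim=True) + 1e-8)

    # Maximum normalized cross-correlation over a window of lags, so a
    # constant belt-vs-video phase shift is not penalized.
    T = pred.shape[1]
    corrs = []
    for lag in range(-max_lag, max_lag + 1):
        if lag >= 0:
            c = (pred[:, lag:] * target[:, : T - lag]).mean(1)
        else:
            c = (pred[:, : T + lag] * target[:, -lag:]).mean(1)
        corrs.append(c)
    max_corr = torch.stack(corrs, 1).max(1).values

    # Dominant respiratory frequency via a softmax-weighted spectral peak
    # ("soft-argmax"), which stays differentiable, unlike a hard argmax
    # over FFT bins.
    freqs = torch.fft.rfftfreq(T, d=1.0 / fs, device=pred.device)

    def soft_peak(x):
        mag = torch.fft.rfft(x, dim=1).abs()
        return (torch.softmax(beta * mag, dim=1) * freqs).sum(1)

    freq_diff = (soft_peak(pred) - soft_peak(target)).abs()
    return (-max_corr + alpha * freq_diff).mean()
```

Minimizing the negative correlation term drives the predicted waveform toward the belt signal up to a shift, while the frequency term anchors the estimated breathing rate itself.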
