
Feature Papers in Sensing and Imaging 2024

A special issue of Sensors (ISSN 1424-8220). This special issue belongs to the section "Sensing and Imaging".

Deadline for manuscript submissions: closed (31 December 2024) | Viewed by 14681

Special Issue Editors


Prof. Dr. Sylvain Girard
Guest Editor
Laboratoire Hubert Curien, CNRS UMR 5516, Université de Lyon, 42000 Saint-Étienne, France
Interests: fiber sensors; optical sensors; image sensors; optical materials; radiation effects

Prof. Dr. Christoph M. Friedrich
Guest Editor
1. Department of Computer Science, University of Applied Sciences and Arts Dortmund (FH Dortmund), 44227 Dortmund, Germany
2. Institute for Medical Informatics, Biometry and Epidemiology (IMIBE), University Hospital Essen, 45122 Essen, Germany
Interests: machine learning; computational intelligence; biomedical applications; interpretable machine learning; natural language processing (NLP); computer vision; augmented reality; information extraction; information retrieval; image processing; biostatistics; bioinformatics; mathematics for computer science

Special Issue Information

Dear Colleagues,

We are pleased to announce that the Sensors Section ‘Sensing and Imaging’ is now compiling a collection of papers submitted by the Editorial Board Members (EBMs) of our section and outstanding scholars in this research field. We welcome contributions and recommendations from the EBMs.

We are seeking original papers and review articles that showcase state-of-the-art theoretical and applicative advances, new experimental discoveries, and novel technological improvements regarding sensing and imaging. We expect these papers to be widely read and highly influential within the field. All papers in this Special Issue will be well promoted.

We would also like to take this opportunity to call on more experienced scholars to join the Section ‘Sensing and Imaging’ so that we can work together to further develop this exciting field of research.

Prof. Dr. Sylvain Girard
Prof. Dr. Christoph M. Friedrich
Guest Editors

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once you are registered, click here to go to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Sensors is an international peer-reviewed open access semimonthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 2600 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • imaging systems
  • sensors
  • camera
  • radar and sonar
  • probes

Benefits of Publishing in a Special Issue

  • Ease of navigation: Grouping papers by topic helps scholars navigate broad scope journals more efficiently.
  • Greater discoverability: Special Issues support the reach and impact of scientific research. Articles in Special Issues are more discoverable and cited more frequently.
  • Expansion of research network: Special Issues facilitate connections among authors, fostering scientific collaborations.
  • External promotion: Articles in Special Issues are often promoted through the journal's social media, increasing their visibility.
  • e-Book format: Special Issues with more than 10 articles can be published as dedicated e-books, ensuring wide and rapid dissemination.

Further information on MDPI's Special Issue policies can be found here.

Published Papers (10 papers)


Research

Jump to: Review

21 pages, 4789 KiB  
Article
Machine-Learning-Based Activity Tracking for Individual Pig Monitoring in Experimental Facilities for Improved Animal Welfare in Research
by Frederik Deutch, Marc Gjern Weiss, Stefan Rahr Wagner, Lars Schmidt Hansen, Frederik Larsen, Constanca Figueiredo, Cyril Moers and Anna Krarup Keller
Sensors 2025, 25(3), 785; https://doi.org/10.3390/s25030785 - 28 Jan 2025
Cited by 1 | Viewed by 1065
Abstract
In experimental research, animal welfare should always be of the highest priority. Currently, physical in-person observations are the standard; these are time-consuming, and the results are subjective. Video-based machine learning models for monitoring experimental pigs provide a continuous and objective observation method for detecting animals that are failing to thrive. The aim of this study was to develop and validate a pig-tracking technology that uses video data in a machine learning model to analyze the posture and activity level of experimental pigs living in single-pig pens. A research prototype was created using a microcomputer and a ceiling-mounted camera for live recording in the experimental facility, and a detection model based on Ultralytics YOLOv8n was trained on the obtained images. As a second step, the Lucas–Kanade sparse optical flow technique was applied for movement detection. The resulting model successfully classified whether individual pigs were lying, standing, or walking. The validation test showed an accuracy of 90.66%, precision of 90.91%, recall of 90.66%, and a correlation coefficient of 84.53% compared with the observed ground truth. In conclusion, the model demonstrates how machine learning can be used to monitor experimental animals and potentially improve animal welfare. Full article
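The second stage of the pipeline described above, Lucas–Kanade sparse optical flow, can be illustrated with a minimal sketch. This is not the authors' code: the single-point Lucas–Kanade solver below and the activity thresholds in `classify_activity` are illustrative assumptions.

```python
import numpy as np

def lucas_kanade_step(prev, curr, y, x, win=7):
    """One Lucas-Kanade step: estimate (dy, dx) motion of the point (y, x)
    between two grayscale frames from a small window of image gradients."""
    h = win // 2
    p = prev[y - h - 1:y + h + 2, x - h - 1:x + h + 2].astype(float)
    c = curr[y - h - 1:y + h + 2, x - h - 1:x + h + 2].astype(float)
    Iy, Ix = np.gradient(p)                 # central-difference spatial gradients
    It = c - p                              # temporal gradient
    Ix, Iy, It = (a[1:-1, 1:-1].ravel() for a in (Ix, Iy, It))
    A = np.stack([Ix, Iy], axis=1)
    v, *_ = np.linalg.lstsq(A, -It, rcond=None)  # least-squares flow
    return v[1], v[0]                       # (dy, dx)

def classify_activity(flow_magnitudes, still=0.1, slow=0.8):
    """Map a mean optical-flow magnitude to a coarse activity label.
    The thresholds are illustrative placeholders, not the paper's values."""
    m = float(np.mean(flow_magnitudes))
    return "lying" if m < still else "standing" if m < slow else "walking"
```

In the paper's setting, such flow magnitudes would be computed at points inside each YOLOv8 bounding box and aggregated before classification.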
(This article belongs to the Special Issue Feature Papers in Sensing and Imaging 2024)

16 pages, 1164 KiB  
Article
Real-Time Quantification of Gas Leaks Using a Snapshot Infrared Spectral Imager
by Nathan Hagen
Sensors 2025, 25(2), 538; https://doi.org/10.3390/s25020538 - 17 Jan 2025
Viewed by 636
Abstract
We describe the various steps of a gas imaging algorithm developed for detecting, identifying, and quantifying gas leaks using data from a snapshot infrared spectral imager. The spectral video stream delivered by the hardware allows the system to combine spatial, spectral, and temporal correlations into the gas detection algorithm, which significantly improves its measurement sensitivity in comparison to non-spectral video, and also in comparison to scanning spectral imaging. After describing the special calibration needs of the hardware, we show how to regularize the gas detection/identification for optimal performance, provide example SNR spectral images, and discuss the effects of humidity and absorption nonlinearity on detection and quantification. Full article
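A common building block for this kind of spectral gas detection is the whitened matched filter, which scores each pixel's spectrum against a known gas absorption signature after whitening by background statistics. The sketch below is a generic illustration, not the calibrated, regularised algorithm described in the paper.

```python
import numpy as np

def matched_filter(cube, sig):
    """Whitened matched filter: score each pixel of an (H, W, B) spectral cube
    against a known target signature `sig`, using background statistics
    estimated from the cube itself."""
    H, W, B = cube.shape
    X = cube.reshape(-1, B).astype(float)
    mu = X.mean(axis=0)
    C = np.cov(X, rowvar=False) + 1e-6 * np.eye(B)   # regularised covariance
    Ci = np.linalg.inv(C)
    scores = (X - mu) @ Ci @ sig / np.sqrt(sig @ Ci @ sig)
    return scores.reshape(H, W)
```

Temporal correlation, as exploited by the paper's video algorithm, would then accumulate such scores across frames.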

14 pages, 10327 KiB  
Article
High-Impact Polystyrene Structured Light Components for Terahertz Imaging Applications
by Kasparas Stanaitis, Vladislovas Čižas, Augustė Bielevičiūtė, Ignas Grigelionis and Linas Minkevičius
Sensors 2025, 25(1), 131; https://doi.org/10.3390/s25010131 - 28 Dec 2024
Viewed by 960
Abstract
Terahertz frequency range imaging has become increasingly attractive for a wide range of practical applications; however, further component optimization is still required. The presented research introduces 3D-printed high-impact polystyrene (HIPS) beam-shaping components for the terahertz range. Gaussian, Bessel, and Airy beam-shaping structures are fabricated, and different combinations are employed to evaluate imaging system performance. The combination of a Gaussian element for focusing and a Bessel element for collection proves similarly efficient to, and less sensitive to misalignment than, the classical Gaussian–Gaussian setup. The presented research paves the way for introducing cost-effective structured light beam-shaping elements into THz imaging systems. Full article
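As a rough illustration of how such a beam-shaping element can be designed for 3D printing, the sketch below computes the wrapped surface-relief profile of an axicon, the classic Bessel-beam former. The HIPS refractive index value is an assumption, and the paper's actual component designs may differ.

```python
import numpy as np

def axicon_relief(radius_mm, step_mm, wavelength_mm, cone_angle_rad, n_material=1.54):
    """Wrapped (kinoform) surface-relief height map, in mm, of an axicon that
    turns a collimated beam into a Bessel-like beam. n_material is an assumed
    refractive index for HIPS in the THz range."""
    ax = np.arange(-radius_mm, radius_mm + step_mm / 2, step_mm)
    X, Y = np.meshgrid(ax, ax)
    r = np.hypot(X, Y)
    phase = (2 * np.pi / wavelength_mm) * r * np.sin(cone_angle_rad)
    # Wrap to one 2*pi zone; a full zone is a relief step of lambda / (n - 1)
    return np.mod(phase, 2 * np.pi) * wavelength_mm / (2 * np.pi * (n_material - 1))
```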

24 pages, 5554 KiB  
Article
Unsupervised Deep Learning for Synthetic CT Generation from CBCT Images for Proton and Carbon Ion Therapy for Paediatric Patients
by Matteo Pepa, Siavash Taleghani, Giulia Sellaro, Alfredo Mirandola, Francesca Colombo, Sabina Vennarini, Mario Ciocca, Chiara Paganelli, Ester Orlandi, Guido Baroni and Andrea Pella
Sensors 2024, 24(23), 7460; https://doi.org/10.3390/s24237460 - 22 Nov 2024
Viewed by 880
Abstract
Image-guided treatment adaptation is a game changer in oncological particle therapy (PT), especially for younger patients. The purpose of this study is to present a cycle generative adversarial network (CycleGAN)-based method for synthetic computed tomography (sCT) generation from cone beam CT (CBCT) towards adaptive PT (APT) of paediatric patients. Firstly, 44 CBCTs of 15 young pelvic patients were pre-processed to reduce ring artefacts, rigidly registered on same-day CT scans (i.e., verification CT scans, vCT scans), and then input to the CycleGAN network (employing either Res-Net or U-Net generators) to synthesise sCT. In particular, 36 and 8 volumes were used for training and testing, respectively. Image quality was evaluated qualitatively and quantitatively using the structural similarity index metric (SSIM) and the peak signal-to-noise ratio (PSNR) between registered CBCT (rCBCT) and vCT and between sCT and vCT to evaluate the improvements brought by CycleGAN. Despite limitations due to the sub-optimal input image quality and the small field of view (FOV), the quality of sCT was found to be overall satisfactory from a quantitative and qualitative perspective. Our findings indicate that CycleGAN is promising for producing sCT scans with acceptable CT-like image texture in paediatric settings, even when CBCTs with narrow FOVs are employed. Full article
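The two image-quality metrics used in the study, PSNR and SSIM, can be sketched as follows. Note that `ssim_global` is a simplified single-window form; the standard metric averages the same expression over local windows.

```python
import numpy as np

def psnr(ref, img, data_range=1.0):
    """Peak signal-to-noise ratio in dB between a reference and a test image."""
    mse = np.mean((ref.astype(float) - img.astype(float)) ** 2)
    return float("inf") if mse == 0 else 10 * np.log10(data_range ** 2 / mse)

def ssim_global(x, y, data_range=1.0):
    """Single-window (global) SSIM with the usual stabilising constants."""
    c1, c2 = (0.01 * data_range) ** 2, (0.03 * data_range) ** 2
    mx, my = x.mean(), y.mean()
    vx, vy = x.var(), y.var()
    cov = ((x - mx) * (y - my)).mean()
    return ((2 * mx * my + c1) * (2 * cov + c2)) / ((mx ** 2 + my ** 2 + c1) * (vx + vy + c2))
```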

32 pages, 21133 KiB  
Article
An Automated Feature-Based Image Registration Strategy for Tool Condition Monitoring in CNC Machine Applications
by Eden Lazar, Kristin S. Bennett, Andres Hurtado Carreon and Stephen C. Veldhuis
Sensors 2024, 24(23), 7458; https://doi.org/10.3390/s24237458 - 22 Nov 2024
Cited by 1 | Viewed by 1016
Abstract
The implementation of Machine Vision (MV) systems for Tool Condition Monitoring (TCM) plays a critical role in reducing the total cost of operation in manufacturing while expediting tool wear testing in research settings. However, conventional MV-TCM edge detection strategies process each image independently to infer edge positions, rendering them susceptible to inaccuracies when tool edges are compromised by material adhesion or chipping, resulting in imprecise wear measurements. In this study, an MV system is developed alongside an automated, feature-based image registration strategy to spatially align tool wear images, enabling a more consistent and accurate detection of tool edge position. The MV system was shown to be robust to the machining environment, versatile across both turning and milling machining centers, and capable of reducing tool wear image capture time by up to 85% relative to standard approaches. A comparison of feature detector-descriptor algorithms found SIFT, KAZE, and ORB to be the most suitable for MV-TCM registration, with KAZE presenting the highest accuracy and ORB being the most computationally efficient. The automated registration algorithm was shown to be efficient, performing registrations in 1.3 s on average, and effective across a wide range of tool geometries and coating variations. The proposed tool reference line detection strategy, based on spatially aligned tool wear images, outperformed standard methods, resulting in average tool wear measurement errors of 2.5% and 4.5% in the turning and milling tests, respectively. Such a system allows machine tool operators to more efficiently capture cutting tool images while ensuring more reliable tool wear measurements. Full article
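Once feature correspondences are available from a detector-descriptor pair such as ORB or KAZE, spatial alignment reduces to estimating a transform from matched points. A minimal least-squares rigid (rotation plus translation) estimator, shown as a simplified stand-in for whatever transform model the authors fit, looks like:

```python
import numpy as np

def estimate_rigid(src, dst):
    """Least-squares rigid transform (R, t) mapping (N, 2) src points onto dst,
    via the Kabsch/SVD construction used after feature matching."""
    cs, cd = src.mean(axis=0), dst.mean(axis=0)
    H = (src - cs).T @ (dst - cd)                         # 2x2 cross-covariance
    U, _, Vt = np.linalg.svd(H)
    D = np.diag([1.0, np.sign(np.linalg.det(Vt.T @ U.T))])  # guard against reflection
    R = Vt.T @ D @ U.T
    t = cd - R @ cs
    return R, t
```

In practice this would be wrapped in RANSAC to reject mismatched feature pairs.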

27 pages, 2390 KiB  
Article
Visualizing Plant Responses: Novel Insights Possible Through Affordable Imaging Techniques in the Greenhouse
by Matthew M. Conley, Reagan W. Hejl, Desalegn D. Serba and Clinton F. Williams
Sensors 2024, 24(20), 6676; https://doi.org/10.3390/s24206676 - 17 Oct 2024
Viewed by 1165
Abstract
Efficient and affordable plant phenotyping methods are an essential response to global climatic pressures. This study demonstrates the continued potential of consumer-grade photography to capture plant phenotypic traits in turfgrass and derive new calculations. Yet the effects of image corrections on individual calculations are often unreported. Turfgrass lysimeters were photographed over 8 weeks using a custom lightbox and consumer-grade camera. Subsequent imagery was analyzed for area of cover, color metrics, and sensitivity to image corrections. Findings were compared to active spectral reflectance data and previously reported measurements of visual quality, productivity, and water use. Results confirm that Red–Green–Blue imagery effectively measures plant treatment effects. Notable correlations were observed for corrected imagery, including between yellow fractional area with human visual quality ratings (r = −0.89), dark green color index with clipping productivity (r = 0.61), and an index combination term with water use (r = −0.60). The calculation of green fractional area correlated with Normalized Difference Vegetation Index (r = 0.91), and its RED reflectance spectra (r = −0.87). A new chromatic ratio correlated with Normalized Difference Red-Edge index (r = 0.90) and its Red-Edge reflectance spectra (r = −0.74), while a new calculation correlated strongest to Near-Infrared (r = 0.90). Additionally, the combined index term significantly differentiated between the treatment effects of date, mowing height, deficit irrigation, and their interactions (p < 0.001). Sensitivity and statistical analyses of typical image file formats and corrections that included JPEG, TIFF, geometric lens distortion correction, and color correction were conducted. Findings highlight the need for more standardization in image corrections and to determine the biological relevance of the new image data calculations. Full article
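Two of the image calculations mentioned above, the dark green color index (DGCI) and a green fractional area, can be sketched from standard definitions. The DGCI formula follows the widely used Karcher-Richardson form; the hue band taken as "green" is an illustrative assumption, not the study's exact threshold.

```python
import colorsys

def dark_green_color_index(rgb):
    """Dark Green Color Index of one RGB pixel with channels in [0, 1]
    (Karcher-Richardson form); higher values mean darker green turf."""
    h, s, v = colorsys.rgb_to_hsv(*rgb)
    return ((h * 360 - 60) / 60 + (1 - s) + (1 - v)) / 3

def green_fraction(pixels, hue_lo=60.0, hue_hi=180.0):
    """Fraction of pixels whose hue falls inside an (assumed) green band."""
    hues = [colorsys.rgb_to_hsv(*p)[0] * 360 for p in pixels]
    return sum(hue_lo <= h <= hue_hi for h in hues) / len(hues)
```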

14 pages, 2913 KiB  
Article
Photobleaching Effect on the Sensitivity Calibration at 638 nm of a Phosphorus-Doped Single-Mode Optical Fiber Dosimeter
by Fiammetta Fricano, Adriana Morana, Martin Roche, Alberto Facchini, Gilles Mélin, Florence Clément, Nicolas Balcon, Julien Mekki, Emmanuel Marin, Youcef Ouerdane, Aziz Boukenter, Thierry Robin and Sylvain Girard
Sensors 2024, 24(17), 5547; https://doi.org/10.3390/s24175547 - 27 Aug 2024
Viewed by 876
Abstract
We investigated the influence of the photobleaching (PB) effect on the dosimetry performance of a phosphosilicate single-mode optical fiber (core diameter of 6.6 µm) operated at 638 nm, within the framework of the LUMINA project. Different irradiation tests were performed under X-rays with a mean energy of ~40 keV at a 530 µGy(SiO2)/s dose rate to measure in situ the radiation-induced attenuation (RIA) growth and decay kinetics while injecting a 638 nm laser diode source with powers varying from 500 nW to 1 mW. For injected continuous power values under 1 µW, we did not measure any relevant influence of the photobleaching effect on the fiber radiation sensitivity coefficient of ~140 dB km−1 Gy−1 up to ~30 Gy. Above 1 µW, the fiber radiation sensitivity is significantly reduced due to the PB associated with the signal and can decrease to ~80 dB km−1 Gy−1 at 1 mW, strongly affecting the capability of this fiber to serve as a dosimeter-sensitive element. Higher power values up to 50 µW can still be used by properly choosing a pulsed regime with periodic injection cycles that reduces the PB efficiency and maintains the dosimetry properties. Based on the acquired data, a simple model of the photobleaching effect on a coil of the investigated fiber is proposed in order to estimate the evolution of its sensitivity coefficient as a function of the cumulated dose, the fiber length, and the injected laser power. Further studies are needed to investigate the influence of temperature and dose rate on the PB effects, since these parameters were fixed during all the reported acquisitions. Full article
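The headline numbers above suggest a simple dose-retrieval sketch: RIA grows linearly with dose and fiber length, with a sensitivity coefficient that degrades with injected power. Only the ~140 and ~80 dB km−1 Gy−1 endpoints come from the abstract; the interpolation shape below is an assumption for illustration, not the paper's model.

```python
import numpy as np

def sensitivity_db_per_km_gy(power_w):
    """Illustrative photobleaching model: sensitivity falls from ~140 dB/(km Gy)
    at <= 1 uW injected power to ~80 dB/(km Gy) at 1 mW, interpolated linearly
    in log10(power)."""
    p = np.clip(np.log10(power_w), -6.0, -3.0)
    return 140.0 + (80.0 - 140.0) * (p + 6.0) / 3.0

def dose_from_ria(ria_db, fiber_km, power_w):
    """Invert RIA = sensitivity * length * dose to estimate cumulated dose (Gy)."""
    return ria_db / (sensitivity_db_per_km_gy(power_w) * fiber_km)
```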

29 pages, 9384 KiB  
Article
Occupancy Estimation from Blurred Video: A Multifaceted Approach with Privacy Consideration
by Md Sakib Galib Sourav, Ehsan Yavari, Xiaomeng Gao, James Maskrey, Yao Zheng, Victor M. Lubecke and Olga Boric-Lubecke
Sensors 2024, 24(12), 3739; https://doi.org/10.3390/s24123739 - 8 Jun 2024
Cited by 2 | Viewed by 1156
Abstract
Building occupancy information is significant for a variety of reasons, from allocation of resources in smart buildings to responding during emergency situations. As most people spend more than 90% of their time indoors, a comfortable indoor environment is crucial. To ensure comfort, traditional HVAC systems condition rooms assuming maximum occupancy, accounting for more than 50% of buildings’ energy budgets in the US. Occupancy level is a key factor in ensuring energy efficiency, as occupancy-controlled HVAC systems can reduce energy waste by conditioning rooms based on actual usage. Numerous studies have focused on developing occupancy estimation models leveraging existing sensors, with camera-based methods gaining popularity due to their high precision and widespread availability. However, the main concern with using cameras for occupancy estimation is the potential violation of occupants’ privacy. Unlike previous video-/image-based occupancy estimation methods, we addressed the issue of occupants’ privacy in this work by proposing and investigating both motion-based and motion-independent occupancy counting methods on intentionally blurred video frames. Our proposed approach included the development of a motion-based technique that inherently preserves privacy, as well as motion-independent techniques such as detection-based and density-estimation-based methods. To improve the accuracy of the motion-independent approaches, we utilized deblurring methods: an iterative statistical technique and a deep-learning-based method. Furthermore, we conducted an analysis of the privacy implications of our motion-independent occupancy counting system by comparing the original, blurred, and deblurred frames using different image quality assessment metrics. This analysis provided insights into the trade-off between occupancy estimation accuracy and the preservation of occupants’ visual privacy. The combination of iterative statistical deblurring and density estimation achieved a 16.29% counting error, outperforming our other proposed approaches while preserving occupants’ visual privacy to a certain extent. Our multifaceted approach aims to contribute to the field of occupancy estimation by proposing a solution that seeks to balance the trade-off between accuracy and privacy. While further research is needed to fully address this complex issue, our work provides insights and a step towards a more privacy-aware occupancy estimation system. Full article
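Iterative statistical deblurring of the kind mentioned above is commonly implemented as Richardson-Lucy deconvolution; the minimal 1-D version below is an illustration, not necessarily the paper's exact algorithm.

```python
import numpy as np

def richardson_lucy(blurred, psf, iters=200):
    """Minimal 1-D Richardson-Lucy deconvolution with circular boundaries.
    `psf` is a full-length, non-negative kernel centred on index 0 and
    symmetric, so the adjoint blur equals the forward blur."""
    F = np.fft.fft(psf)
    blur = lambda a: np.real(np.fft.ifft(np.fft.fft(a) * F))  # circular conv
    est = np.full(len(blurred), max(float(blurred.mean()), 1e-12))
    for _ in range(iters):
        ratio = blurred / np.maximum(blur(est), 1e-12)
        est = est * blur(ratio)   # multiplicative, non-negativity preserving
    return est
```

The 2-D image case is identical in structure, with 2-D FFTs and a point-spread function estimated from the intentional blur.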

16 pages, 6341 KiB  
Article
Spectral Reconstruction from RGB Imagery: A Potential Option for Infinite Spectral Data?
by Abdelhamid N. Fsian, Jean-Baptiste Thomas, Jon Y. Hardeberg and Pierre Gouton
Sensors 2024, 24(11), 3666; https://doi.org/10.3390/s24113666 - 5 Jun 2024
Cited by 4 | Viewed by 3512
Abstract
Spectral imaging has revolutionised various fields by capturing detailed spatial and spectral information. However, its high cost and complexity limit the acquisition of a large amount of data to generalise processes and methods, thus limiting widespread adoption. To overcome this issue, a body of the literature investigates how to reconstruct spectral information from RGB images, with recent methods achieving fairly low reconstruction errors. This article explores the modification of information in the case of RGB-to-spectral reconstruction beyond reconstruction metrics, with a focus on assessing the accuracy of the reconstruction process and its ability to replicate full spectral information. In addition, we conduct a colorimetric relighting analysis based on the reconstructed spectra. We investigate the information representation by principal component analysis and demonstrate that, while the reconstruction error of the state-of-the-art reconstruction method is low, the nature of the reconstructed information is different. While the use in colour imaging appears to handle illumination with very good performance, the distribution of information differences between the measured and estimated spectra suggests that caution should be exercised before generalising the use of this approach. Full article
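The classic baseline in this literature is a linear least-squares map from RGB triplets to spectra (state-of-the-art methods are deep networks), and principal component analysis is the tool used above to probe the information content. Both can be sketched as:

```python
import numpy as np

def fit_linear_reconstructor(rgb, spectra):
    """Least-squares linear map W (3 x B) from (N, 3) RGB to (N, B) spectra."""
    W, *_ = np.linalg.lstsq(rgb, spectra, rcond=None)
    return W

def pca_explained_variance(spectra, k):
    """Fraction of spectral variance captured by the top-k principal components."""
    X = spectra - spectra.mean(axis=0)
    s = np.linalg.svd(X, compute_uv=False)
    return float((s[:k] ** 2).sum() / (s ** 2).sum())
```

The article's point is visible in this framing: any reconstruction driven by three channels can only ever span a three-dimensional subspace of the true spectral variability.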

Review

Jump to: Research

21 pages, 449 KiB  
Review
Three-Dimensional Dense Reconstruction: A Review of Algorithms and Datasets
by Yangming Lee
Sensors 2024, 24(18), 5861; https://doi.org/10.3390/s24185861 - 10 Sep 2024
Cited by 2 | Viewed by 2875
Abstract
Three-dimensional dense reconstruction involves extracting the full shape and texture details of three-dimensional objects from two-dimensional images. Although 3D reconstruction is a crucial and well-researched area, it remains an unsolved challenge in dynamic or complex environments. This work provides a comprehensive overview of classical 3D dense reconstruction techniques, including those based on geometric and optical models, as well as approaches leveraging deep learning. It also discusses the datasets used for deep learning and evaluates the performance and the strengths and limitations of deep learning methods on these datasets. Full article
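At the core of the geometric reconstruction methods the review covers is multi-view triangulation: recovering a 3-D point from its projections in calibrated views. A minimal linear (DLT) two-view triangulator can be sketched as:

```python
import numpy as np

def triangulate(P1, P2, x1, x2):
    """Linear (DLT) triangulation of one 3-D point from its normalised image
    projections x1, x2 in two views with 3x4 camera matrices P1, P2."""
    A = np.stack([
        x1[0] * P1[2] - P1[0],
        x1[1] * P1[2] - P1[1],
        x2[0] * P2[2] - P2[0],
        x2[1] * P2[2] - P2[1],
    ])
    _, _, Vt = np.linalg.svd(A)     # homogeneous point spans the null space of A
    X = Vt[-1]
    return X[:3] / X[3]
```

Dense pipelines repeat this (or a cost-volume equivalent) for every pixel, which is where the surveyed deep-learning methods come in.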
