Advances in Optical Sensing, Instrumentation and Systems: 2nd Edition

A special issue of Sensors (ISSN 1424-8220). This special issue belongs to the section "Optical Sensors".

Deadline for manuscript submissions: 30 December 2025 | Viewed by 19073

Special Issue Editors


Prof. Dr. Xin Zhu
Guest Editor
Department of AI Technology Development, M&D Data Science Center, Institute of Integrated Research, Institute of Science Tokyo, Tokyo, Japan
Interests: medical image and signal processing; cardiac modeling and simulation; medical instrumentation

Prof. Dr. Zhenhe Ma
Guest Editor
School of Control Engineering, Northeastern University at Qinhuangdao, Qinhuangdao 066004, China
Interests: optical detection and imaging

Special Issue Information

Dear Colleagues,

The goal of this Special Issue is to introduce recent advances in optical sensing, instrumentation, and systems. These advances span medical imaging, virtual reality, 3D reconstruction, autonomous driving devices, optical system optimization, the Internet of Things, security facilities, navigation systems, computer vision devices, optical materials, optical batteries, and related areas.

In this Special Issue entitled “Advances in Optical Sensing, Instrumentation and Systems: 2nd Edition”, we aim to publish papers with theoretical and practical novelties in optical sensing, instrumentation, and systems involving medical imaging, computer vision, machine learning, nature-inspired optimization, 3D reconstruction, and any other possible applications.

Topics of interest include, but are not limited to, the following:

  • Optical coherence tomography in biometrics and diagnosis;
  • The implementation of deep learning in optical systems;
  • The optimization of optical systems using nature-inspired optimization methods;
  • 3D reconstruction with uncalibrated visual systems in arbitrary scenes;
  • Image and signal processing in optical sensing, instrumentation, and systems;
  • Advanced laser technology;
  • Optical networks;
  • Optical communication;
  • Optical sensors;
  • Optical materials;
  • Optical devices;
  • Photoelectric sensing;
  • Optical navigation;
  • Nano-optics technology;
  • Optical sensing and diagnosis;
  • Endoscopic microscopy;
  • Optical imaging;
  • Visual sensing;
  • Computer vision;
  • Optical measurement;
  • AI applications in optical sensing;
  • Spectrum detection and analysis;
  • Fiber sensors;
  • Surface plasmon resonance technology.

Prof. Dr. Xin Zhu
Prof. Dr. Zhenhe Ma
Guest Editors

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once you are registered, click here to go to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles as well as short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Sensors is an international peer-reviewed open access semimonthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 2600 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • optical coherence tomography (OCT)
  • image processing
  • deep learning
  • binocular vision
  • 3D reconstruction
  • optimization
  • spectral analysis
  • computer vision
  • stereo vision
  • surface plasmon resonance

Benefits of Publishing in a Special Issue

  • Ease of navigation: Grouping papers by topic helps scholars navigate broad scope journals more efficiently.
  • Greater discoverability: Special Issues support the reach and impact of scientific research. Articles in Special Issues are more discoverable and cited more frequently.
  • Expansion of research network: Special Issues facilitate connections among authors, fostering scientific collaborations.
  • External promotion: Articles in Special Issues are often promoted through the journal's social media, increasing their visibility.
  • Reprint: MDPI Books provides the opportunity to republish successful Special Issues in book format, both online and in print.

Further information on MDPI's Special Issue policies can be found here.


Published Papers (19 papers)


Research

22 pages, 3466 KiB  
Article
Hardware-Efficient Phase Demodulation for Digital ϕ-OTDR Receivers with Baseband and Analytic Signal Processing
by Shangming Du, Tianwei Chen, Can Guo, Yuxing Duan, Song Wu and Lei Liang
Sensors 2025, 25(10), 3218; https://doi.org/10.3390/s25103218 - 20 May 2025
Viewed by 281
Abstract
This paper presents hardware-efficient phase demodulation schemes for FPGA-based digital phase-sensitive optical time-domain reflectometry (ϕ-OTDR) receivers. We first derive a signal model for the heterodyne ϕ-OTDR frontend, then propose and analyze three demodulation methods: (1) a baseband reconstruction approach via zero-IF downconversion, (2) an analytic signal generation technique using the Hilbert transform (HT), and (3) a wavelet transform (WT)-based alternative for analytic signal extraction. Algorithm-hardware co-design implementations are detailed for both RFSoC and conventional FPGA platforms, with resource utilization comparisons. Additionally, we introduce an incremental DC-rejected phase unwrapper (IDRPU) algorithm to jointly address phase unwrapping and DC drift removal, minimizing computational overhead while avoiding numerical overflow. Experiments on simulated and real-world ϕ-OTDR data show that the HT method matches the performance of zero-IF demodulation with simpler hardware and lower resource usage, while the WT method offers enhanced robustness against fading noise (3.35–22.47 dB SNR improvement in fading conditions), albeit with slightly ambiguous event boundaries and higher hardware utilization. These findings provide actionable insights for demodulator design in distributed acoustic sensing (DAS) applications and advance the development of single-chip DAS systems. Full article
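
For orientation, a minimal numerical sketch of the Hilbert-transform (analytic-signal) demodulation route described above is given below; the sample rate, beat frequency, and the simple mean subtraction used in place of the paper's IDRPU unwrapper are illustrative assumptions, not the authors' FPGA implementation.

```python
# Hilbert-transform phase demodulation of a synthetic heterodyne trace (illustrative).
import numpy as np
from scipy.signal import hilbert

fs = 250e6                                   # sample rate (assumed)
f_if = 80e6                                  # intermediate (beat) frequency (assumed)
t = np.arange(4096) / fs

# Synthetic heterodyne trace: an IF carrier modulated by a slow acoustic phase.
true_phase = 0.8 * np.sin(2 * np.pi * 2e5 * t)
noise = 0.01 * np.random.default_rng(0).standard_normal(t.size)
trace = np.cos(2 * np.pi * f_if * t + true_phase) + noise

analytic = hilbert(trace)                    # analytic signal via the Hilbert transform
inst_phase = np.unwrap(np.angle(analytic))   # unwrapped instantaneous phase
demod = inst_phase - 2 * np.pi * f_if * t    # remove the IF carrier ramp
demod -= demod.mean()                        # crude DC-drift rejection (IDRPU stand-in)

print("RMS phase error [rad]:", np.sqrt(np.mean((demod - true_phase) ** 2)))
```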

13 pages, 8736 KiB  
Article
Software-Defined Optical Coherence Measurement of Seawater Refractive Index Variations
by Jiaxin Zhao, Xinyi Zhang, Qi Wang, Liyan Li, Songtao Fan, Yongjie Wang and Yan Zhou
Sensors 2025, 25(10), 3119; https://doi.org/10.3390/s25103119 - 15 May 2025
Viewed by 224
Abstract
The seawater refractive index is an important parameter in marine environments, with its variations depending on the specific environmental conditions. During practical applications, modulation parameters such as the sampling rate, bandwidth, and filters directly affect the signal-to-noise ratio (SNR) and need to be adjusted in real-time according to the characteristics of the target signal. Low-cost software-defined radio (SDR) offers significant advantages in this regard. This paper proposes an optical coherence measurement method for seawater refractive index changes based on orthogonal demodulation using SDR along with simulation calculations, and the results demonstrate that the resolution of the refractive index change rate is 3.165×10−9 RIU/s, corresponding to a refractive index change resolution of 10−10 RIU (frequency range 1 Hz–100 Hz, measurement range 0.1 m). By adopting SDR as the implementation platform for the demodulation algorithm and using a radio-frequency source to simulate interference signals for demodulating the refractive index variation, the results show that the relative error of the SDR demodulation results is below 0.3%. Additionally, this study developed a software-defined optical coherence measurement system for the seawater refractive index and measured the refractive index changes in deionized water during heating. The experimental results showed that the root mean square error (RMSE) of the refractive index changes obtained through SDR demodulation was 5.68×10−6 RIU. This research provides a novel demodulation method for high-precision measurements of seawater refractive index changes under different marine environments. Full article
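
A minimal off-line sketch of the orthogonal (I/Q) demodulation idea behind the SDR scheme is shown below; the carrier frequency, sample rate, and filter are illustrative, and converting the recovered phase to a refractive-index change would additionally need the wavelength and measurement path length, which are not fixed here.

```python
# Quadrature (I/Q) demodulation of an interference carrier (illustrative parameters).
import numpy as np
from scipy.signal import butter, filtfilt

fs, fc = 1e6, 100e3                           # sample rate and carrier (assumed)
t = np.arange(100_000) / fs
phi = 2 * np.pi * 0.5 * t                     # slow phase drift from an index change
x = np.cos(2 * np.pi * fc * t + phi)          # interference signal riding on the carrier

i_mix = x * np.cos(2 * np.pi * fc * t)        # in-phase mixing
q_mix = -x * np.sin(2 * np.pi * fc * t)       # quadrature mixing
b, a = butter(4, 0.02)                        # low-pass keeps only the baseband terms
I, Q = filtfilt(b, a, i_mix), filtfilt(b, a, q_mix)

phase = np.unwrap(np.arctan2(Q, I))           # recovered interference phase
print("max phase deviation from truth [rad]:", np.max(np.abs(phase - phi)))
```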

17 pages, 14682 KiB  
Article
Research on Space Targets Simulation Modulation Algorithm Combined Global–Local Multi-Spectral Radiation Features
by Yu Zhang, Songzhou Yang, Zhipeng Wei, Jian Zhang, Bin Zhao, Dianwu Ren, Jingrui Sun, Lu Wang, Taiyang Ren, Dongpeng Yang and Guoyu Zhang
Sensors 2025, 25(9), 2702; https://doi.org/10.3390/s25092702 - 24 Apr 2025
Viewed by 260
Abstract
To solve the international problem of global–local radiation features simulation of multi-spectral space targets, this paper proposes a multi-spectral space target simulation modulation algorithm that can combine global–local spectral radiation features. An overall architecture of a series-parallel multi-source information fusion space target simulation system (MITS) is constructed, and a global–local multi-spectral radiation feature modulation link is built. A multi-spectral feature modulation algorithm consisting of three modules, including optical engine non-uniformity compensation, global spectral radiant energy modulation, and local radiant grayscale modulation, is designed, and an experimental platform is built to verify the correctness and advancement of the proposed algorithm. The results indicate that the non-uniformity is better than 3.78%, the global simulation error is better than −4.56%, and the local simulation error is better than 4.25%. It is one of the few multi-spectral target simulation modulation algorithms worldwide that can combine the global whole and local details. It supports the performance test and technology iteration of multi-spectral optical loads. It helps to supplement the theoretical system of multi-spectral space target simulation and enhance the ground-based semi-physical simulation link of optical loads. Full article

18 pages, 5582 KiB  
Article
Extending Sensing Range by Physics Constraints in Multiband-Multiline Absorption Spectroscopy for Flame Measurement
by Tengfei Jiao, Sheng Kou, Liuhao Ma, Kin-Pang Cheong and Wei Ren
Sensors 2025, 25(7), 2317; https://doi.org/10.3390/s25072317 - 5 Apr 2025
Viewed by 335
Abstract
The present numerical study proposes a technique to extend the sensing range of tunable diode laser absorption spectroscopy (TDLAS) for flame measurement by involving physics constraints on both gas condition and spectroscopic parameters in the interpretation of spectra from multiple bands. A total of 24 major spectral lines for 2 spectral segments 4029–4031 cm−1 and 7185–7186 cm−1 are determined by specially designed detection function and contribution filtering. Numerical tests on uniform and complicated combustion fields prove the high accuracy, strong robustness to noise, wide sensing range, and good compatibility with tomography. The present study provides a strong technique for future complex combustion detection with advanced laser sources of broad spectrum. Full article

14 pages, 4060 KiB  
Article
Real-Time Pupil Localization Algorithm for Blurred Images Based on Double Constraints
by Shufang Qiu, Yi Wang, Zeyuan Liu, Huaiyu Cai and Xiaodong Chen
Sensors 2025, 25(6), 1749; https://doi.org/10.3390/s25061749 - 12 Mar 2025
Viewed by 410
Abstract
Accurate pupil localization is crucial for the eye-tracking technology used in monitoring driver fatigue. However, factors such as poor road conditions may result in blurred eye images being captured by eye-tracking devices, affecting the accuracy of pupil localization. To address the above problems, we propose a real-time pupil localization algorithm for blurred images based on double constraints. The algorithm is divided into three stages: extracting the rough pupil area based on grayscale constraints, refining the pupil region based on geometric constraints, and determining the pupil center according to geometric moments. First, the rough pupil area is adaptively extracted from the input image based on grayscale constraints. Then, the designed pupil shape index is used to refine the pupil area based on geometric constraints. Finally, the geometric moments are calculated to quickly locate the pupil center. The experimental results demonstrate that the algorithm exhibits superior localization performance in both blurred and clear images, with a localization error within 6 pixels, an accuracy exceeding 97%, and real-time performance of up to 85 fps. The proposed algorithm provides an efficient and precise solution for pupil localization, demonstrating practical applicability in the monitoring of real-world driver fatigue. Full article
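
A rough sketch of the double-constraint idea follows: a grayscale (darkness) constraint extracts a candidate pupil region, a geometric (circularity) constraint refines it, and geometric moments give the center. The thresholds and the shape criterion are illustrative and are not the paper's exact pupil shape index.

```python
import cv2
import numpy as np

def locate_pupil(gray: np.ndarray):
    """gray: single-channel uint8 eye image; returns (cx, cy) or None."""
    # Grayscale constraint: the pupil lies among the darkest pixels.
    thr = np.percentile(gray, 5)
    _, mask = cv2.threshold(gray, thr, 255, cv2.THRESH_BINARY_INV)

    contours, _ = cv2.findContours(mask.astype(np.uint8),
                                   cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    best, best_score = None, 0.0
    for c in contours:
        area, perim = cv2.contourArea(c), cv2.arcLength(c, True)
        if area < 50 or perim == 0:
            continue
        circularity = 4 * np.pi * area / perim ** 2   # geometric constraint
        if circularity > best_score:
            best, best_score = c, circularity

    if best is None or best_score < 0.6:
        return None
    m = cv2.moments(best)                             # geometric moments give the center
    return m["m10"] / m["m00"], m["m01"] / m["m00"]
```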

12 pages, 2136 KiB  
Article
Innovative Binocular Vision Testing for Phoria and Vergence Ranges Using Automatic Dual Rotational Risley Prisms
by Hui-Rong Su, Yu-Jung Chen, Yun-Shao Hu, Chi-Hung Lee, Shang-Min Yeh and Shuan-Yu Huang
Sensors 2025, 25(5), 1604; https://doi.org/10.3390/s25051604 - 5 Mar 2025
Viewed by 775
Abstract
This study evaluated binocular visual function using automatic dual rotational Risley prisms (ADRRPs) to measure phoria and vergence ranges. Thirty-nine (mean age: 21.82 ± 1.10 years; age range: 20–24 years) healthy adults with normal binocular vision participated. Each underwent baseline refraction exams followed by phoria and vergence tests conducted using both a phoropter with Maddox rods and the ADRRPs. The results revealed a strong positive correlation between the two instruments for distance phoria (r = 0.959, p < 0.001) and near-phoria measurements (r = 0.968, p < 0.001). For vergence testing, positive fusional vergence (PFV) at distance showed a moderate-to-strong correlation for break points (r = 0.758, p < 0.001) and a moderate correlation for recovery points (r = 0.452, p < 0.001). Negative fusional vergence (NFV) at distance demonstrated a strong correlation for break points (r = 0.863, p < 0.001) and a moderate correlation for recovery points (r = 0.458, p < 0.01). Near-vergence testing showed moderate-to-strong correlations for break points (r = 0.777, p < 0.001) and recovery points (r = 0.623, p < 0.001). The inclusion of Bland–Altman analysis provides a more comprehensive evaluation of agreement between ADRRPs and the phoropter. While strong correlations were observed, systematic bias and LoA indicate that these methods are not perfectly interchangeable. The ADRRPs demonstrated potential for binocular vision assessment but require further validation for clinical application. Full article

19 pages, 9180 KiB  
Article
Accurate Real-Time Live Face Detection Using Snapshot Spectral Imaging Method
by Zhihai Wang, Shuai Wang, Weixing Yu, Bo Gao, Chenxi Li and Tianxin Wang
Sensors 2025, 25(3), 952; https://doi.org/10.3390/s25030952 - 5 Feb 2025
Cited by 1 | Viewed by 1060
Abstract
Traditional facial recognition is realized by facial recognition algorithms based on 2D or 3D digital images and has been well developed and has found wide applications in areas related to identification verification. In this work, we propose a novel live face detection (LFD) method by utilizing snapshot spectral imaging technology, which takes advantage of the distinctive reflected spectra from human faces. By employing a computational spectral reconstruction algorithm based on Tikhonov regularization, a rapid and precise spectral reconstruction with a fidelity of over 99% for the color checkers and various types of “face” samples has been achieved. The flat face areas were extracted exactly from the “face” images with Dlib face detection and Euclidean distance selection algorithms. A large quantity of spectra were rapidly reconstructed from the selected areas and compiled into an extensive database. The convolutional neural network model trained on this database demonstrates an excellent capability for predicting different types of “faces” with an accuracy exceeding 98%, and, according to a series of evaluations, the system’s detection time consistently remained under one second, much faster than other spectral imaging LFD methods. Moreover, a pixel-level liveness detection test system is developed and a LFD experiment shows good agreement with theoretical results, which demonstrates the potential of our method to be applied in other recognition fields. The superior performance and compatibility of our method provide an alternative solution for accurate, highly integrated video LFD applications. Full article
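
A minimal sketch of Tikhonov-regularized spectral reconstruction, the core of the computational step above, is given below; the sensing matrix, band count, and regularization weight are placeholders rather than the paper's calibrated values.

```python
# Recover a spectrum s from a few filtered channel readings m = A @ s + noise.
import numpy as np

rng = np.random.default_rng(0)
n_bands, n_channels = 31, 9                      # assumed spectrum length / filter channels
A = rng.random((n_channels, n_bands))            # stand-in for calibrated filter responses
s_true = np.exp(-0.5 * ((np.arange(n_bands) - 15) / 5.0) ** 2)   # smooth test spectrum
m = A @ s_true + 1e-3 * rng.standard_normal(n_channels)

lam = 1e-2                                        # Tikhonov regularization weight
s_hat = np.linalg.solve(A.T @ A + lam * np.eye(n_bands), A.T @ m)

fidelity = s_hat @ s_true / (np.linalg.norm(s_hat) * np.linalg.norm(s_true))
print("cosine fidelity of the reconstruction:", fidelity)
```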

16 pages, 2668 KiB  
Article
Localization of Capsule Endoscope in Alimentary Tract by Computer-Aided Analysis of Endoscopic Images
by Ruiyao Zhang, Boyuan Peng, Yiyang Liu, Xinkai Liu, Jie Huang, Kohei Suzuki, Yuki Nakajima, Daiki Nemoto, Kazutomo Togashi and Xin Zhu
Sensors 2025, 25(3), 746; https://doi.org/10.3390/s25030746 - 26 Jan 2025
Viewed by 969
Abstract
Capsule endoscopy is a common method for detecting digestive diseases. The location of a capsule endoscope should be constantly monitored through a visual inspection of the endoscopic images by medical staff to confirm the examination’s progress. In this study, we proposed a computer-aided analysis (CADx) method for the localization of a capsule endoscope. At first, a classifier based on a Swin Transformer was proposed to classify each frame of the capsule endoscopy videos into images of the stomach, small intestine, and large intestine, respectively. Then, a K-means algorithm was used to correct outliers in the classification results. Finally, a localization algorithm was proposed to determine the position of the capsule endoscope in the alimentary tract. The proposed method was developed and validated using videos of 204 consecutive cases. The proposed CADx, based on a Swin Transformer, showed a precision of 93.46%, 97.28%, and 98.68% for the classification of endoscopic images recorded in the stomach, small intestine, and large intestine, respectively. Compared with the landmarks identified by endoscopists, the proposed method demonstrated an average transition time error of 16.2 s to locate the intersection of the stomach and small intestine, as well as 13.5 s to locate that of the small intestine and the large intestine, based on the 20 validation videos with an average length of 3261.8 s. The proposed method accurately localizes the capsule endoscope in the alimentary tract and may replace the laborious real-time visual inspection in capsule endoscopic examinations. Full article
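
The post-classification step can be pictured with the sketch below: smooth noisy per-frame organ labels and read off the stomach-to-small-intestine and small-to-large-intestine transitions. A median filter stands in here for the paper's K-means outlier correction; the labels, frame rate, and class encoding are illustrative.

```python
import numpy as np
from scipy.ndimage import median_filter

# 0 = stomach, 1 = small intestine, 2 = large intestine (assumed encoding)
labels = np.array([0] * 300 + [1] * 900 + [2] * 600)
rng = np.random.default_rng(1)
labels[::97] = rng.integers(0, 3, labels[::97].shape)        # sprinkle classifier outliers

smoothed = median_filter(labels, size=31, mode="nearest")    # suppress isolated outliers
changes = np.flatnonzero(np.diff(smoothed) != 0) + 1         # frames where the organ changes

fps = 2.0                                                     # capsule frame rate (assumed)
for idx in changes:
    print(f"transition to class {smoothed[idx]} at ~{idx / fps:.0f} s")
```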

14 pages, 11582 KiB  
Article
Channeled Polarimetry for Magnetic Field/Current Detection
by Georgi Dyankov, Petar Kolev, Tinko A. Eftimov, Evdokiya O. Hikova and Hristo Kisov
Sensors 2025, 25(2), 466; https://doi.org/10.3390/s25020466 - 15 Jan 2025
Cited by 1 | Viewed by 666
Abstract
Magneto-optical magnetic field/current sensors are based on the Faraday effect, which involves changing the polarized state of light. Polarimetric methods are therefore used for measuring polarization characteristics. Channeled polarimetry allows polarization information to be obtained from the analysis of the spectral domain. Although this allows the characterization of Faraday materials, the method has not yet been used for detection in magneto-optical sensors. This paper reports experimental results for magnetic field/current detection using the channeled polarimetry method. It is shown that in contrast to other methods, this method allows the detection of the phase shift caused by Faraday rotation alone, making the detection independent of temperature. Although an increase in measurement accuracy is required for practical applications by refining the data processing, the experimental results obtained show that this method offers a new approach to improving the performance of magneto-optical current sensors. Full article

13 pages, 7014 KiB  
Article
Displacement Measurement Based on the Missing-Order Talbot Effect
by Liuxing Song, Kailun Zhao, Xiaoyong Wang, Jinping He, Guoliang Tian, Shihua Yang and Yaning Li
Sensors 2025, 25(1), 292; https://doi.org/10.3390/s25010292 - 6 Jan 2025
Cited by 1 | Viewed by 1390
Abstract
Displacement measurement is a crucial application, with laser-based methods offering high precision and being well established in commercial settings. However, these methods often come with the drawbacks of significant size and exorbitant costs. We introduce a novel displacement measurement method that utilizes the missing-order Talbot effect. This approach circumvents the need to measure contrast in the Talbot diffraction field, opting instead to leverage the displacement within the missing-order Talbot diffraction pattern. Our method only requires parallel light, an amplitude grating, and a detector to achieve displacement measurement. The measurement dynamic range can be adjusted by altering the grating period and the wavelength of the incident light. Through careful simulation and experimental validation, our method exhibits a correlation coefficient R surpassing 0.999 across a 30 mm dynamic range and achieves a precision superior to 3 μm. Full article

16 pages, 9195 KiB  
Article
Simulating and Verifying a 2D/3D Laser Line Sensor Measurement Algorithm on CAD Models and Real Objects
by Rok Belšak, Janez Gotlih and Timi Karner
Sensors 2024, 24(22), 7396; https://doi.org/10.3390/s24227396 - 20 Nov 2024
Cited by 1 | Viewed by 1199
Abstract
The increasing adoption of 2D/3D laser line sensors in industrial and research applications necessitates accurate and efficient simulation tools for tasks such as surface inspection, dimensional verification, and quality control. This paper presents a novel algorithm developed in MATLAB for simulating the measurements of any 2D/3D laser line sensor on STL CAD models. The algorithm uses a modified fast-ray triangular intersection method, addressing challenges such as overlapping triangles in assembly models and incorporating sensor resolution to ensure realistic simulations. Quantitative analysis shows a significant reduction in computation time, enhancing the practical utility of the algorithm. The simulation results exhibit a mean deviation of 0.42 mm when compared to real-world measurements. Notably, the algorithm effectively handles complex geometric features, such as holes and grooves, and offers flexibility in generating point cloud data in both local and global coordinate systems. This work not only reduces the need for physical prototyping, thereby contributing to sustainability, but also supports AI training by generating accurate synthetic data. Future work should aim to further optimize the simulation speed and explore noise modeling to enhance the realism of simulated measurements. Full article
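
For context, a plain Möller–Trumbore ray/triangle intersection, the usual starting point for casting simulated laser rays against STL triangles, is sketched below; the paper's modifications for overlapping triangles and sensor resolution are not reproduced.

```python
import numpy as np

def ray_triangle_intersect(origin, direction, v0, v1, v2, eps=1e-9):
    """Return the ray parameter t of the hit, or None if the ray misses."""
    e1, e2 = v1 - v0, v2 - v0
    p = np.cross(direction, e2)
    det = np.dot(e1, p)
    if abs(det) < eps:                 # ray parallel to the triangle plane
        return None
    inv_det = 1.0 / det
    s = origin - v0
    u = np.dot(s, p) * inv_det
    if u < 0.0 or u > 1.0:
        return None
    q = np.cross(s, e1)
    v = np.dot(direction, q) * inv_det
    if v < 0.0 or u + v > 1.0:
        return None
    t = np.dot(e2, q) * inv_det
    return t if t > eps else None

# One simulated laser ray against one triangle of an STL mesh.
t = ray_triangle_intersect(np.array([0.0, 0.0, 5.0]), np.array([0.0, 0.0, -1.0]),
                           np.array([-1.0, -1.0, 0.0]), np.array([1.0, -1.0, 0.0]),
                           np.array([0.0, 1.0, 0.0]))
print("hit distance:", t)    # expected 5.0 for this geometry
```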

20 pages, 7281 KiB  
Article
Enhancing Open-Space Gas Detection Limit: A Novel Environmentally Adaptive Infrared Temperature Prediction Method for Uncooled Spectroscopy
by Guoliang Tang, Fang Ding, Dunping Li, Bangjian Zhao, Chunlai Li and Jianyu Wang
Sensors 2024, 24(22), 7173; https://doi.org/10.3390/s24227173 - 8 Nov 2024
Viewed by 1190
Abstract
Gas cloud imaging with uncooled infrared spectroscopy is influenced by ambient temperature, complicating the quantitative detection of gas concentrations in open environments. To solve the aforementioned challenges, the paper analyzes the main factors influencing detection errors in uncooled infrared spectroscopy gas cloud imaging and proposes a temperature correction method to address them. Firstly, to mitigate the environmental effects on the radiative temperature output of uncooled infrared detectors, a snapshot-based, multi-band infrared temperature compensation algorithm incorporating environmental awareness was developed. This algorithm enables precise infrared radiation prediction across a wide operating temperature range. Validation tests conducted over the full temperature range of 0 °C to 80 °C demonstrated that the prediction error was maintained within ±0.96 °C. Subsequently, temperature compensation techniques were integrated, resulting in the development of a comprehensive uncooled infrared spectroscopy gas cloud imaging detection method. Ultimately, the detection limits for SF6, ethylene, cyclohexane, and ammonia were enhanced by 50%, 33%, 25%, and 67%, respectively. Full article

14 pages, 692 KiB  
Article
Auto Aligning, Error-Compensated Broadband Collimated Transmission Spectroscopy
by Karsten Pink, Alwin Kienle and Florian Foschum
Sensors 2024, 24(21), 6993; https://doi.org/10.3390/s24216993 - 30 Oct 2024
Cited by 3 | Viewed by 677
Abstract
Broadband spectral measurements of the ballistic transmission of scattering samples are challenging. The presented work shows an approach that includes a broadband system and an automated adjustment unit for compensation of angular distortions caused by non-plane-parallel samples. The limits of the system in terms of optimal transmission and detected forward scattering influenced by the scattering phase function are investigated. We built and validated a setup that measures the collimated transmission signal in a spectral range from 300 nm to 2150 nm. The system was validated using polystyrene spheres and Mie calculations. The limits of the system in terms of optimal transmission and detected forward scattering were researched. The optimal working parameters of the system, analyzed by simulations using the Monte Carlo method, show that the transmission should be larger than 10% and less than 90% to allow for a reliable measurement with acceptable errors caused by noise and systematic errors of the system. The optimal transmission range is between 25% and 50%. We show that the phase function is important when considering the accuracy of the measurement. For strongly forward-scattering samples, errors of up to 80% can be observed, even for a very small numerical aperture of 6.6·10−4, as used in our experimental system. We also show that errors increase with optical thickness as the ballistic transmission decreases and the multiscattered fraction increases. In addition, errors caused by multiple reflections in the sample layer were analyzed and also classified as relevant for classical absorption spectroscopy. Full article

18 pages, 11414 KiB  
Article
Analysis of Field of View for a Moon-Based Earth Observation Multispectral Camera
by Zhitong Yu, Hanlin Ye, Mengxiong Zhou, Feifei Li, Yin Jin, Chunlai Li, Guang Liu and Huadong Guo
Sensors 2024, 24(21), 6962; https://doi.org/10.3390/s24216962 - 30 Oct 2024
Cited by 1 | Viewed by 1047
Abstract
A Moon-based Earth observation multispectral camera provides a unique perspective for observing large-scale Earth phenomena. This study focuses on the analysis of the field of view (FOV) for such a sensor. Unlike space-borne sensors, the analysis of the FOV for a Moon-based sensor takes into account not only Earth’s maximum apparent diameter as seen from the lunar surface but also the Earth’s and the solar trajectory in the lunar sky, as well as the pointing accuracy and pointing adjustment temporal intervals of the turntable. Three critical issues are analyzed: (1) The relationship between the Earth’s apparent diameter and the Earth’s phase angle is revealed. It is found that the Earth’s maximum apparent diameter encompasses the Earth’s full phase, suggesting the FOV should exceed this maximum. (2) Regardless of the location on the lunar surface, a sensor will suffer from solar intrusion every orbital period. Although the Earth’s trajectory forms an envelope during an 18.6-year cycle, the FOV should not be excessively large. (3) To design a reasonable FOV, it is necessary to consider both the pointing accuracy and pointing adjustment temporal interval comprehensively. All these insights will guide future Moon-based Earth observation multispectral camera design. Full article

18 pages, 10601 KiB  
Article
The Zero-Velocity Correction Method for Pipe Jacking Automatic Guidance System Based on Fiber Optic Gyroscope
by Wenbo Zhang, Lu Wang and Yutong Zu
Sensors 2024, 24(18), 5911; https://doi.org/10.3390/s24185911 - 12 Sep 2024
Cited by 1 | Viewed by 1141
Abstract
The pipe jacking guidance system based on a fiber optic gyroscope (FOG) has gained extensive attention due to its high degree of safety and autonomy. However, all inertial guidance systems have accumulative errors over time. The zero-velocity update (ZUPT) algorithm is an effective error compensation method, but accurately distinguishing between moving and stationary states in slow pipe jacking operations is a major challenge. To address this challenge, a “MV + ARE + SHOE” three-conditional zero-velocity detection (TCZVD) algorithm for the fiber optic gyroscope inertial navigation system (FOG-INS) is designed. Firstly, a Kalman filter model based on ZUPT is established. Secondly, the TCZVD algorithm, which combines the moving variance of acceleration (MV), angular rate energy (ARE), and stance hypothesis optimal estimation (SHOE), is proposed. Finally, experiments are conducted, and the results indicate that the proposed algorithm achieves a zero-velocity detection accuracy of 99.18% and can reduce positioning error to less than 2% of the total distance. Furthermore, the applicability of the proposed algorithm in the practical working environment is confirmed through on-site experiments. The results demonstrate that this method can effectively suppress the accumulated error of the inertial guidance system and improve the positioning accuracy of pipe jacking. It provides a robust and reliable solution for practical engineering challenges. Therefore, this study will contribute to the development of pipe jacking automatic guidance technology. Full article
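
A schematic three-condition zero-velocity detector over a sliding window is sketched below, combining the moving variance of acceleration (MV), angular-rate energy (ARE), and a SHOE-style statistic; the window length, thresholds, and noise scales are illustrative only and are not the paper's tuned values.

```python
import numpy as np

G = 9.81

def zero_velocity_flags(acc, gyro, win=50, mv_thr=0.02, are_thr=0.01,
                        shoe_thr=100.0, sigma_a=0.05, sigma_w=0.01):
    """acc, gyro: (N, 3) arrays in m/s^2 and rad/s; returns a boolean flag per sample."""
    n = acc.shape[0]
    flags = np.zeros(n, dtype=bool)
    for k in range(win, n):
        a, w = acc[k - win:k], gyro[k - win:k]
        mv = np.mean(np.var(a, axis=0))                      # condition 1: MV
        are = np.mean(np.sum(w ** 2, axis=1))                # condition 2: ARE
        g_dir = a.mean(axis=0)
        g_dir = g_dir / np.linalg.norm(g_dir)                # estimated gravity direction
        shoe = np.mean(np.sum((a - G * g_dir) ** 2, axis=1) / sigma_a ** 2
                       + np.sum(w ** 2, axis=1) / sigma_w ** 2)  # condition 3: SHOE
        flags[k] = (mv < mv_thr) and (are < are_thr) and (shoe < shoe_thr)
    return flags

rng = np.random.default_rng(0)
acc = np.array([0.0, 0.0, G]) + 0.02 * rng.standard_normal((500, 3))
gyro = 0.002 * rng.standard_normal((500, 3))
print("fraction flagged stationary:", zero_velocity_flags(acc, gyro).mean())
```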

17 pages, 5234 KiB  
Article
Full-Automatic High-Efficiency Mueller Matrix Microscopy Imaging for Tissue Microarray Inspection
by Hanyue Wei, Yifu Zhou, Feiya Ma, Rui Yang, Jian Liang and Liyong Ren
Sensors 2024, 24(14), 4703; https://doi.org/10.3390/s24144703 - 20 Jul 2024
Cited by 1 | Viewed by 1313
Abstract
This paper proposes a full-automatic high-efficiency Mueller matrix microscopic imaging (MMMI) system based on the tissue microarray (TMA) for cancer inspection for the first time. By performing a polar decomposition on the sample’s Mueller matrix (MM) obtained by a transmissive MMMI system we established, the linear phase retardance equivalent waveplate fast-axis azimuth and the linear phase retardance are obtained for distinguishing the cancerous tissues from the normal ones based on the differences in their polarization characteristics, where three analysis methods including statistical analysis, the gray-level co-occurrence matrix analysis (GLCM) and the Tamura image processing method (TIPM) are used. Previous MMMI medical diagnostics typically utilized discrete slices for inspection under a high-magnification objective (20×–50×) with a small field of view, while we use the TMA under a low-magnification objective (5×) with a large field of view. Experimental results indicate that MMMI based on TMA can effectively analyze the pathological variations in biological tissues, inspect cancerous cervical tissues, and thus contribute to the diagnosis of postoperative cancer biopsies. Such an inspection method, using a large number of samples within a TMA, is beneficial for obtaining consistent findings and good reproducibility. Full article

20 pages, 36354 KiB  
Article
Optoelectronic Strain-Measurement System Demonstrated on Scaled-Down Flywheels
by Matthias Franz Rath, Christof Birgel, Armin Buchroithner, Bernhard Schweighofer and Hannes Wegleiter
Sensors 2024, 24(13), 4292; https://doi.org/10.3390/s24134292 - 1 Jul 2024
Viewed by 1496
Abstract
Monitoring the strain in the rotating flywheel in a kinetic energy storage system is important for safe operation and for the investigation of long-term effects in composite materials like carbon-fiber-reinforced plastics. An optoelectronic strain-measurement system for contactless deformation and position monitoring of a flywheel was investigated. The system consists of multiple optical sensors measuring the local relative in-plane displacement of the flywheel rotor. A special reflective pattern, which is necessary to interact with the sensors, was applied to the surface of the rotor. Combining the measurements from multiple sensors makes it possible to distinguish between the deformation and in-plane displacement of the flywheel. The sensor system was evaluated using a low-speed steel rotor for single-sensor performance investigation as well as a scaled-down high-speed rotor made from PVC plastic. The PVC rotor exhibits more deformation due to centrifugal stresses than a steel or aluminum rotor of the same dimensions, which allows experimental measurements at a smaller flywheel scale as well as a lower rotation speed. Deformation measurements were compared to expected deformation from calculations. The influence of sensor distance was investigated. Deformation and position measurements as well as derived imbalance measurements were demonstrated. Full article

16 pages, 4031 KiB  
Article
Self-Calibration for Star Sensors
by Jingneng Fu, Ling Lin and Qiang Li
Sensors 2024, 24(11), 3698; https://doi.org/10.3390/s24113698 - 6 Jun 2024
Viewed by 1708
Abstract
Aiming to address the chicken-and-egg problem in star identification and the intrinsic parameter determination processes of on-orbit star sensors, this study proposes an on-orbit self-calibration method for star sensors that does not depend on star identification. First, the self-calibration equations of a star sensor are derived based on the invariance of the interstar angle of a star pair between image frames, without any requirements for the true value of the interstar angle of the star pair. Then, a constant constraint of the optical path from the star spot to the center of the star sensor optical system is defined to reduce the biased estimation in self-calibration. Finally, a scaled nonlinear least square method is developed to solve the self-calibration equations, thus accelerating iteration convergence. Our simulation and analysis results show that the bias of the focal length estimation in on-orbit self-calibration with a constraint is two orders of magnitude smaller than that in on-orbit self-calibration without a constraint. In addition, it is shown that convergence can be achieved in 10 iterations when the scaled nonlinear least square method is used to solve the self-calibration equations. The calibrated intrinsic parameters obtained by the proposed method can be directly used in traditional star map identification methods. Full article

14 pages, 4294 KiB  
Article
Pointing Error Correction for Vehicle-Mounted Single-Photon Ranging Theodolite Using a Piecewise Linear Regression Model
by Qingjia Gao, Chong Wang, Xiaoming Wang, Zhenyu Liu, Yanjun Liu, Qianglong Wang and Wenda Niu
Sensors 2024, 24(10), 3192; https://doi.org/10.3390/s24103192 - 17 May 2024
Cited by 3 | Viewed by 1282
Abstract
Pointing error is a critical performance metric for vehicle-mounted single-photon ranging theodolites (VSRTs). Achieving high-precision pointing through processing and adjustment can incur significant costs. In this study, we propose a cost-effective digital correction method based on a piecewise linear regression model to mitigate this issue. Firstly, we introduce the structure of a VSRT and conduct a comprehensive analysis of the factors influencing its pointing error. Subsequently, we develop a physically meaningful piecewise linear regression model that is both physically meaningful and capable of accurately estimating the pointing error. We then calculate and evaluate the regression equation to ensure its effectiveness. Finally, we successfully apply the proposed method to correct the pointing error. The efficacy of our approach has been substantiated through dynamic accuracy testing of a 450 mm optical aperture VSRT. The findings illustrate that our regression model diminishes the root mean square (RMS) value of VSRT’s pointing error from 17″ to below 5″. Following correction utilizing this regression model, the pointing error of VSRT can be notably enhanced to the arc-second precision level. Full article
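
A toy piecewise-linear fit of pointing error versus azimuth, solved by ordinary least squares on hinge functions, is sketched below; the breakpoints and the synthetic error curve are illustrative, and the paper's calibrated model and error sources are not reproduced.

```python
import numpy as np

az = np.linspace(0.0, 360.0, 181)                               # azimuth samples [deg]
rng = np.random.default_rng(2)
err = (12 + 0.04 * az - 0.09 * np.maximum(az - 120, 0)
       + 0.06 * np.maximum(az - 240, 0) + rng.normal(0, 0.5, az.size))  # synthetic error [arcsec]

breaks = [120.0, 240.0]                                          # assumed segment boundaries
X = np.column_stack([np.ones_like(az), az] + [np.maximum(az - b, 0) for b in breaks])
coef, *_ = np.linalg.lstsq(X, err, rcond=None)                   # piecewise-linear coefficients

residual = err - X @ coef                                        # error left after correction
print("RMS about mean before: %.2f arcsec, residual after: %.2f arcsec"
      % (err.std(), residual.std()))
```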
