Search Results (8)

Search Parameters:
Keywords = FLIR video

21 pages, 10628 KiB  
Article
Thermal Video Enhancement Mamba: A Novel Approach to Thermal Video Enhancement for Real-World Applications
by Sargis Hovhannisyan, Sos Agaian, Karen Panetta and Artyom Grigoryan
Information 2025, 16(2), 125; https://doi.org/10.3390/info16020125 - 9 Feb 2025
Viewed by 1513
Abstract
Object tracking in thermal video is challenging due to noise, blur, and low contrast. We present TVEMamba, a Mamba-based enhancement framework with near-linear complexity that improves tracking in these conditions. Our approach uses a State Space 2D (SS2D) module integrated with Convolutional Neural Networks (CNNs) to filter, sharpen, and highlight important details. Key components include (i) a denoising module to reduce background noise and enhance image clarity, (ii) an optical flow attention module to handle complex motion and reduce blur, and (iii) entropy-based labeling to create a fully labeled thermal dataset for training and evaluation. TVEMamba outperforms existing methods (DCRGC, RLBHE, IE-CGAN, BBCNN) across multiple datasets (BIRDSAI, FLIR, CAMEL, Autonomous Vehicles, Solar Panels) and achieves higher scores on standard quality metrics (EME, BDIM, DMTE, MDIMTE, LGTA). Extensive tests, including ablation studies and convergence analysis, confirm its robustness. Real-world examples, such as tracking humans, animals, and moving objects for self-driving vehicles and remote sensing, demonstrate the practical value of TVEMamba.
(This article belongs to the Special Issue Emerging Research in Object Tracking and Image Segmentation)
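The entropy-based labeling step described above can be illustrated with a minimal pure-Python sketch. The threshold and the "informative"/"flat" labels are hypothetical placeholders, not the paper's actual labeling rule:

```python
import math
from collections import Counter

def frame_entropy(pixels):
    """Shannon entropy (bits) of a frame's intensity histogram."""
    counts = Counter(pixels)
    n = len(pixels)
    return -sum((c / n) * math.log2(c / n) for c in counts.values())

def label_frames(frames, threshold=3.0):
    """Label each frame by its entropy (hypothetical rule and threshold)."""
    return ["informative" if frame_entropy(f) >= threshold else "flat"
            for f in frames]
```

A uniform frame has zero entropy and would be labeled "flat", while a frame with a spread of intensities scores high and would be labeled "informative".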

17 pages, 3042 KiB  
Article
Multimodality Video Acquisition System for the Assessment of Vital Distress in Children
by Vincent Boivin, Mana Shahriari, Gaspar Faure, Simon Mellul, Edem Donatien Tiassou, Philippe Jouvet and Rita Noumeir
Sensors 2023, 23(11), 5293; https://doi.org/10.3390/s23115293 - 2 Jun 2023
Cited by 2 | Viewed by 2557
Abstract
In children, vital distress events, particularly respiratory, often go unrecognized. To develop a standard model for automated assessment of vital distress in children, we aimed to construct a prospective high-quality video database for critically ill children in a pediatric intensive care unit (PICU) setting. The videos were acquired automatically through a secure web application with an application programming interface (API). The purpose of this article is to describe the data acquisition process from each PICU room to the research electronic database. Using an Azure Kinect DK and a FLIR Lepton 3.5 LWIR camera attached to a Jetson Xavier NX board, together with the network architecture of our PICU, we have implemented an ongoing, high-fidelity, prospectively collected video database for research, monitoring, and diagnostic purposes. This infrastructure offers the opportunity to develop algorithms (including computational models) to quantify vital distress in order to evaluate vital distress events. More than 290 RGB, thermographic, and point-cloud videos of 30 s each have been recorded in the database. Each recording is linked to the patient's numerical phenotype, i.e., the electronic medical health record and high-resolution medical database of our research center. The ultimate goal is to develop and validate algorithms to detect vital distress in real time, both for inpatient care and outpatient management.

14 pages, 2850 KiB  
Technical Note
An Unpaired Thermal Infrared Image Translation Method Using GMA-CycleGAN
by Shihao Yang, Min Sun, Xiayin Lou, Hanjun Yang and Hang Zhou
Remote Sens. 2023, 15(3), 663; https://doi.org/10.3390/rs15030663 - 22 Jan 2023
Cited by 18 | Viewed by 4940
Abstract
Automatically translating chromaticity-free thermal infrared (TIR) images into realistic color visible (CV) images is of great significance for autonomous vehicles, emergency rescue, robot navigation, nighttime video surveillance, and many other fields. Most recent designs use end-to-end neural networks to translate TIR directly to CV; however, TIR images have low contrast and unclear texture compared to CV images. Directly translating the single-channel TIR temperature value to the three-channel RGB color value, without adding constraints or semantic information, therefore handles the one-to-three cross-domain mapping problem poorly, leaving the translated CV images with blurred edges and color confusion. Since the key step in TIR-to-CV translation is mapping information from the temperature domain into the color domain, an improved CycleGAN (GMA-CycleGAN) is proposed in this work to first translate TIR images into grayscale visible (GV) images. Although the two domains have different properties, the numerical mapping is one-to-one, which reduces the color confusion caused by the one-to-three mapping of direct TIR-to-CV translation. A GV-to-CV translation network is then applied to obtain CV images; since GV-to-CV translation takes place within the same visible domain, edge blurring can be avoided. To enhance the boundary gradient between objects (pedestrians and vehicles) and the background, a mask attention module based on the TIR temperature mask and the CV semantic mask is designed without increasing the network parameters, and it is added to the feature encoding and decoding convolution layers of the CycleGAN generator. Moreover, a perceptual loss term is added to the original CycleGAN loss function to bring the translated images closer to the real images in feature space. To verify the effectiveness of the proposed method, experiments are conducted on the FLIR dataset; compared to the state-of-the-art model, the proposed method yields translated CV images of better subjective quality, reducing the FID (Fréchet inception distance) by 2.42 and improving the PSNR (peak signal-to-noise ratio) by 1.43.
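The idea of a parameter-free mask attention driven by a TIR temperature mask can be sketched in a few lines of pure Python. The threshold and the scaling factor are hypothetical, and this stands in for, rather than reproduces, the paper's module:

```python
def temperature_mask(tir, thresh):
    """Binary mask: 1 where the TIR value exceeds a (hypothetical) object threshold."""
    return [[1 if v > thresh else 0 for v in row] for row in tir]

def mask_attention(features, mask, boost=2.0):
    """Parameter-free attention: amplify feature responses inside the mask,
    leaving background responses unchanged."""
    return [[f * (boost if m else 1.0) for f, m in zip(frow, mrow)]
            for frow, mrow in zip(features, mask)]
```

Because the mask is computed from the input rather than learned, the attention adds no trainable parameters, mirroring the design choice described in the abstract.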

19 pages, 2937 KiB  
Article
Thermal Infrared Imaging to Evaluate Emotional Competences in Nursing Students: A First Approach through a Case Study
by Pilar Marqués-Sánchez, Cristina Liébana-Presa, José Alberto Benítez-Andrades, Raquel Gundín-Gallego, Lorena Álvarez-Barrio and Pablo Rodríguez-Gonzálvez
Sensors 2020, 20(9), 2502; https://doi.org/10.3390/s20092502 - 28 Apr 2020
Cited by 10 | Viewed by 6262
Abstract
During university studies of nursing, it is important to develop emotional skills because of their impact on academic performance and the quality of patient care. Thermography is a technology that could be applied during nursing training to evaluate emotional skills. The objective is to evaluate, through a case study, the effect of thermography as a tool for monitoring and improving emotional skills in student nurses. The student was subjected to different emotions; the stimuli applied were video and music. The process consisted of measuring facial temperatures during each emotion and stimulus in three phases: acclimatization, stimulus, and response. Thermographic data acquisition was performed with a FLIR E6 camera, and the analysis was complemented with environmental data (temperature and humidity). With the video stimulus, the start and final forehead temperatures in the test phases showed different behavior between the positive (joy: 34.5 °C–34.5 °C) and negative (anger: 36.1 °C–35.1 °C) emotions during the acclimatization phase, in contrast to the increase experienced in the stimulus (joy: 34.7 °C–35.0 °C; anger: 35.0 °C–35.0 °C) and response phases (joy: 35.0 °C–35.0 °C; anger: 34.8 °C–35.0 °C). With the music stimulus, the emotions showed different patterns in each phase (joy: 34.2 °C–33.9 °C–33.4 °C; anger: 33.8 °C–33.4 °C–33.8 °C). Whenever the subject is exposed to a stimulus, there is a thermal bodily response. All facial areas follow a common thermal pattern in response to the stimulus, with the exception of the nose. Thermography is a suitable technique for stimulation practice in emotional skills, given that it is non-invasive, quantifiable, and easy to access.
(This article belongs to the Section Optical Sensors)
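The per-phase temperature comparison reported above reduces to simple start/end differences. A minimal sketch, assuming the paired start–end values are stored per phase (the data layout and function name are hypothetical):

```python
def phase_deltas(readings):
    """Temperature change (deg C) across the three experimental phases.

    `readings` maps each phase name to a (start_temp, end_temp) pair,
    mirroring the paired values quoted in the abstract.
    """
    return {phase: round(end - start, 2) for phase, (start, end) in readings.items()}

# Joy / video-stimulus values quoted in the abstract
joy_video = {"acclimatization": (34.5, 34.5),
             "stimulus": (34.7, 35.0),
             "response": (35.0, 35.0)}
```

Applied to the joy/video readings, only the stimulus phase shows a nonzero rise (+0.3 °C), which matches the pattern the authors describe.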

16 pages, 11400 KiB  
Article
The Use of Thermal Infra-Red Imagery to Elucidate the Dynamics and Processes Occurring in Fog
by Jeremy Price and Kristian Stokkereit
Atmosphere 2020, 11(3), 240; https://doi.org/10.3390/atmos11030240 - 29 Feb 2020
Cited by 7 | Viewed by 3057
Abstract
Improving our ability to predict fog accurately is currently a high priority for Numerical Weather Prediction models. Such an endeavour requires numerous types of observations of real fog as a means to both better understand it and also provide an assessment of model performance. We consider the use of thermal infra-red imagery, used in conjunction with other meteorological observations, for the purposes of studying fog. Two cameras were used—a FLIR Systems Inc. A655sc and a FLIR Systems Inc. A65sc—which were set up to capture one image per minute. Images were then combined to provide video footage of nocturnal fog events. Results show that the imagery from such cameras can provide great insight into fog processes and dynamics, identifying interesting features not previously seen. Furthermore, comparison of imagery with conventional meteorological observations showed that the observations were often not capable of being used to delineate all of the processes affecting fog, due to their incomplete and local nature.
(This article belongs to the Special Issue Observation, Simulation and Predictability of Fog)

16 pages, 26716 KiB  
Article
Sensor Pods: Multi-Resolution Surveys from a Light Aircraft
by Conor Cahalane, Daire Walsh, Aidan Magee, Sean Mannion, Paul Lewis and Tim McCarthy
Inventions 2017, 2(1), 2; https://doi.org/10.3390/inventions2010002 - 7 Feb 2017
Cited by 5 | Viewed by 10041
Abstract
Airborne remote sensing, whether performed from conventional aerial survey platforms such as light aircraft or from the more recent Remotely Piloted Airborne Systems (RPAS), has the ability to complement mapping generated using earth-orbiting satellites, particularly for areas that may experience prolonged cloud cover. Traditional aerial platforms are costly but capture high-resolution imagery over large areas; RPAS are relatively low-cost and provide very-high-resolution imagery, but only over small areas. We believe that we are the first group to retrofit these new, low-cost, lightweight sensors in a traditional aircraft. Unlike RPAS surveys, which have a limited payload, this is the first time that a method has been designed to operate four distinct RPAS sensors simultaneously: hyperspectral, thermal, multispectral, and RGB/video. This means that imagery covering a broad range of the spectrum, captured during a single survey through different image-capture techniques (frame, pushbroom, video), can be applied to investigate multiple aspects of the surrounding environment, such as soil moisture, vegetation vitality, topography, or drainage. In this paper, we present the initial results validating our innovative hybrid system, which adapts dedicated RPAS sensors to a light-aircraft sensor pod, thereby providing the benefits of both methodologies. Simultaneous image capture with a Nikon D800E SLR and a series of dedicated RPAS sensors, including a FLIR thermal imager, a four-band multispectral camera, and a 100-band hyperspectral imager, was enabled by integration in a single sensor pod operating from a Cessna C172. However, to enable accurate sensor fusion for image analysis, each sensor must first be placed in a common vehicle coordinate system, and a method for triggering, time-stamping, and calculating the position/pose of each sensor at the time of image capture must be devised.
Initial tests were carried out over agricultural regions, with geometric tests designed to assess the spatial accuracy of the fused imagery in terms of its absolute and relative accuracy. The results demonstrate that, using our innovative system, images captured simultaneously by the four sensors could be geometrically corrected successfully and then co-registered and fused, exhibiting a root-mean-square error (RMSE) of approximately 10 m independent of inertial measurements and ground control.
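The time-stamping step above implies looking up the aircraft's position at each sensor's capture instant. A minimal sketch using linear interpolation between navigation fixes; the data layout and function name are hypothetical simplifications of the paper's triggering/pose method:

```python
from bisect import bisect_left

def pose_at(track, t):
    """Linearly interpolate aircraft position (lat, lon) at capture time t.

    `track` is a time-sorted list of (timestamp, lat, lon) navigation fixes.
    Times outside the track are clamped to the nearest endpoint.
    """
    times = [p[0] for p in track]
    i = bisect_left(times, t)
    if i == 0:
        return track[0][1:]
    if i == len(track):
        return track[-1][1:]
    (t0, la0, lo0), (t1, la1, lo1) = track[i - 1], track[i]
    w = (t - t0) / (t1 - t0)
    return (la0 + w * (la1 - la0), lo0 + w * (lo1 - lo0))
```

In practice each sensor's trigger time would be matched against such a track to assign a position/pose to every frame before co-registration.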

15 pages, 6942 KiB  
Article
Robust Pedestrian Tracking and Recognition from FLIR Video: A Unified Approach via Sparse Coding
by Xin Li, Rui Guo and Chao Chen
Sensors 2014, 14(6), 11245-11259; https://doi.org/10.3390/s140611245 - 24 Jun 2014
Cited by 19 | Viewed by 7841
Abstract
Sparse coding is an emerging method that has been successfully applied to both robust object tracking and recognition in the vision literature. In this paper, we propose a sparse coding-based approach toward joint object tracking-and-recognition and explore its potential in the analysis of forward-looking infrared (FLIR) video to support nighttime machine vision systems. A key technical contribution of this work is to unify existing sparse coding-based approaches to tracking and recognition under the same framework, so that they can benefit from each other in a closed loop. On the one hand, tracking the same object through temporal frames allows us to achieve improved recognition performance through dynamic updating of the template/dictionary and the combination of multiple recognition results; on the other hand, the recognition of individual objects facilitates the tracking of multiple objects (i.e., walking pedestrians), especially in the presence of occlusion within a crowded environment. We report experimental results on both the CASIA Pedestrian Database and our own collected FLIR video database to demonstrate the effectiveness of the proposed joint tracking-and-recognition approach.
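The core of sparse coding-based recognition is assigning a target patch to the class whose dictionary reconstructs it with the smallest residual. A minimal sketch using a one-atom-per-class dictionary (a deliberately simplified, hypothetical stand-in for the paper's full sparse-coding framework):

```python
def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def residual(x, d):
    """Squared residual of the best 1-sparse fit x ~ c*d (least-squares in c)."""
    c = dot(x, d) / dot(d, d)
    return sum((xi - c * di) ** 2 for xi, di in zip(x, d))

def classify(x, dictionary):
    """Assign the class whose template reconstructs x with the smallest residual."""
    return min(dictionary, key=lambda label: residual(x, dictionary[label]))
```

In the closed-loop scheme the abstract describes, templates confirmed by tracking would be fed back into the dictionary, and per-frame classifications would be accumulated over the track.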

11 pages, 420 KiB  
Article
Thermal-Infrared Pedestrian ROI Extraction through Thermal and Motion Information Fusion
by Antonio Fernández-Caballero, María T. López and Juan Serrano-Cuerda
Sensors 2014, 14(4), 6666-6676; https://doi.org/10.3390/s140406666 - 10 Apr 2014
Cited by 35 | Viewed by 10026
Abstract
This paper investigates the robustness of a new thermal-infrared pedestrian detection system under different outdoor environmental conditions. First, the algorithm for pedestrian ROI extraction in thermal-infrared video, based on both thermal and motion information, is introduced. The evaluation of the proposal is then detailed, after a description of the complete thermal and motion information fusion. The environment chosen for evaluation is described, and the twelve test sequences are specified. For each sequence, captured with a forward-looking infrared FLIR A-320 camera, the paper explains the weather and light conditions under which it was recorded. The results allow us to draw firm conclusions about the conditions under which our thermal-infrared proposal can be used efficiently and robustly to extract human ROIs.
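The thermal-and-motion fusion idea can be sketched at the pixel level: a candidate pedestrian pixel is one that is both warm (thermal cue) and changing between frames (motion cue). The thresholds and function name here are hypothetical illustrations, not the paper's actual parameters:

```python
def fuse_rois(thermal, prev, curr, t_thresh, m_thresh):
    """Pixel-level fusion of thermal and motion cues.

    A pixel is marked 1 (candidate) only when its thermal value exceeds
    t_thresh AND its inter-frame intensity change exceeds m_thresh.
    """
    return [[1 if th > t_thresh and abs(c - p) > m_thresh else 0
             for th, p, c in zip(trow, prow, crow)]
            for trow, prow, crow in zip(thermal, prev, curr)]
```

Requiring both cues suppresses warm static objects (e.g. sunlit surfaces) as well as cold moving clutter, which is the motivation for fusing the two information sources.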
