Search Results (5)

Search Parameters:
Keywords = far-infrared video

15 pages, 3147 KiB  
Article
A Two-Stage Registration Strategy for Thermal–Visible Images in Substations
by Wanfeng Sun, Haibo Gao and Cheng Li
Appl. Sci. 2024, 14(3), 1158; https://doi.org/10.3390/app14031158 - 30 Jan 2024
Cited by 2 | Viewed by 1817
Abstract
The analysis of infrared video images is becoming one of the methods used to detect thermal hazards in many large-scale engineering sites. The fusion of infrared thermal imaging and visible image data in the target area can help people to identify and locate the fault points of thermal hazards. A very important step in this process is the registration of the thermal and visible images. However, the direct registration of images with large-scale differences may lead to large registration errors or even failure. This paper presents a novel two-stage thermal–visible-image registration strategy specifically designed for exceptional scenes, such as a substation. Firstly, the binarized original image pairs are quickly and coarsely registered. Secondly, the adaptive downsampling unit partial-intensity invariant feature descriptor (ADU-PIIFD) algorithm is proposed to correct the small-scale differences in details and achieve finer registration. Experiments are conducted on 30 data sets containing complex power station scenes and compared with several other methods. The results show that the proposed method exhibits excellent and stable performance in thermal–visible-image registration, with a registration error within five pixels across the entire data set. Especially for multimodal images with poor image quality and many detailed features, the robustness of the proposed method is far better than that of other methods, which provides a more reliable image registration scheme for the field of fire safety.
(This article belongs to the Special Issue Advanced Methodology and Analysis in Fire Protection Science)
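
As a rough illustration of the coarse-to-fine idea described in the abstract, the sketch below registers a thermal image to a visible one in two stages with generic OpenCV tools: a coarse translation estimated by phase correlation on binarized, downsampled images, followed by a finer ORB-plus-RANSAC affine refinement. This is only an illustrative stand-in, not the ADU-PIIFD algorithm from the paper; the file names, thresholds, and scale factor are assumptions.

```python
# Coarse-to-fine thermal-visible registration sketch (illustrative only;
# not the paper's ADU-PIIFD pipeline). Requires OpenCV and NumPy.
import cv2
import numpy as np

def coarse_translation(thermal, visible, scale=0.25):
    """Stage 1: rough translation from binarized, downsampled images."""
    def prep(img):
        small = cv2.resize(img, None, fx=scale, fy=scale)
        _, binary = cv2.threshold(small, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
        return binary.astype(np.float32)
    # Note: the sign of the shift depends on which image is taken as the reference.
    (dx, dy), _ = cv2.phaseCorrelate(prep(thermal), prep(visible))
    return dx / scale, dy / scale  # shift in full-resolution pixels

def fine_affine(thermal, visible, max_matches=200):
    """Stage 2: refine with ORB features and a RANSAC-fitted affine transform."""
    orb = cv2.ORB_create(nfeatures=2000)
    kp1, des1 = orb.detectAndCompute(thermal, None)
    kp2, des2 = orb.detectAndCompute(visible, None)
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    matches = sorted(matcher.match(des1, des2), key=lambda m: m.distance)[:max_matches]
    src = np.float32([kp1[m.queryIdx].pt for m in matches]).reshape(-1, 1, 2)
    dst = np.float32([kp2[m.trainIdx].pt for m in matches]).reshape(-1, 1, 2)
    affine, _ = cv2.estimateAffinePartial2D(src, dst, method=cv2.RANSAC)
    return affine

thermal = cv2.imread("thermal.png", cv2.IMREAD_GRAYSCALE)  # hypothetical inputs
visible = cv2.imread("visible.png", cv2.IMREAD_GRAYSCALE)
h, w = visible.shape
dx, dy = coarse_translation(thermal, visible)
shifted = cv2.warpAffine(thermal, np.float32([[1, 0, dx], [0, 1, dy]]), (w, h))
registered = cv2.warpAffine(shifted, fine_affine(shifted, visible), (w, h))
```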

20 pages, 5767 KiB  
Article
Image Region Prediction from Thermal Videos Based on Image Prediction Generative Adversarial Network
by Ganbayar Batchuluun, Ja Hyung Koo, Yu Hwan Kim and Kang Ryoung Park
Mathematics 2021, 9(9), 1053; https://doi.org/10.3390/math9091053 - 7 May 2021
Cited by 6 | Viewed by 2619
Abstract
Various studies have been conducted on object detection, tracking, and action recognition based on thermal images. However, errors occur during object detection, tracking, and action recognition when a moving object leaves the field of view (FOV) of the camera and part of the object becomes invisible, and no studies have examined this issue so far. Therefore, this article proposes a method for widening the FOV of the current image by predicting the regions outside the camera's FOV from the current image and the previous sequential images. In the proposed method, the original one-channel thermal image is converted into a three-channel thermal image so that image prediction can be performed with an image prediction generative adversarial network. When image prediction and object detection experiments were conducted using the marathon sub-dataset of the Boston University thermal infrared video (BU-TIV) benchmark open dataset, the proposed method achieved higher accuracy for image prediction (structural similarity index measure (SSIM) of 0.9839) and object detection (F1 score of 0.882, accuracy (ACC) of 0.983, and intersection over union (IoU) of 0.791) than state-of-the-art methods.
(This article belongs to the Special Issue Computer Graphics, Image Processing and Artificial Intelligence)
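
The accuracy figures quoted above use standard evaluation metrics. As a reminder of how such numbers are typically computed for bounding boxes, here is a small generic sketch of IoU and F1 (not the authors' evaluation code); the example boxes and counts are made up.

```python
# Generic detection-metric sketch (illustrative; not the paper's evaluation code).
def iou(box_a, box_b):
    """Intersection over union of two axis-aligned (x1, y1, x2, y2) boxes."""
    x1, y1 = max(box_a[0], box_b[0]), max(box_a[1], box_b[1])
    x2, y2 = min(box_a[2], box_b[2]), min(box_a[3], box_b[3])
    inter = max(0, x2 - x1) * max(0, y2 - y1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    return inter / float(area_a + area_b - inter)

def f1_from_counts(tp, fp, fn):
    """F1 score from true-positive, false-positive and false-negative counts."""
    precision = tp / float(tp + fp) if tp + fp else 0.0
    recall = tp / float(tp + fn) if tp + fn else 0.0
    return 2 * precision * recall / (precision + recall) if precision + recall else 0.0

print(iou((10, 10, 50, 50), (20, 20, 60, 60)))  # ~0.39
print(f1_from_counts(tp=88, fp=12, fn=12))      # 0.88
```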

26 pages, 5742 KiB  
Article
Convolutional Neural Network-Based Human Detection in Nighttime Images Using Visible Light Camera Sensors
by Jong Hyun Kim, Hyung Gil Hong and Kang Ryoung Park
Sensors 2017, 17(5), 1065; https://doi.org/10.3390/s17051065 - 8 May 2017
Cited by 68 | Viewed by 8146
Abstract
Because intelligent surveillance systems have recently undergone rapid growth, research on accurately detecting humans in videos captured at a long distance is growing in importance. The existing research using visible light cameras has mainly focused on human detection during daytime hours, when there is outside light; human detection during nighttime hours, when there is no outside light, is difficult. Thus, methods that employ additional near-infrared (NIR) illuminators and NIR cameras, or thermal cameras, have been used. However, NIR illuminators are limited in terms of illumination angle and distance, and their power must be adaptively adjusted depending on whether the object is close or far away. Thermal cameras remain expensive, which makes it difficult to install and use them in a variety of places. For these reasons, research has been conducted on nighttime human detection using visible light cameras, but it has focused on objects at a short distance in indoor environments or on video-based methods that capture and process multiple images, which increases the processing time. To resolve these problems, this paper presents a method that uses a single image captured at night by a visible light camera to detect humans in a variety of environments based on a convolutional neural network. Experimental results on a self-constructed Dongguk night-time human detection database (DNHD-DB1) and two open databases (the Korea Advanced Institute of Science and Technology (KAIST) and Computer Vision Center (CVC) databases) show that the method achieves high-accuracy human detection in a variety of environments and outperforms existing methods.
(This article belongs to the Section Physical Sensors)
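
As a rough sketch of the kind of single-image CNN classifier the abstract describes (an illustrative model, not the network architecture actually used in the paper), a small PyTorch network that scores a grayscale candidate patch as human or background might look like the following; the layer sizes and the 64x32 patch size are assumptions.

```python
# Minimal CNN human/background classifier sketch (illustrative; not the paper's network).
import torch
import torch.nn as nn

class NightHumanCNN(nn.Module):
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(32, 64, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        )
        # Candidate patches assumed to be 1 x 64 x 32; after three 2x poolings: 64 x 8 x 4.
        self.classifier = nn.Sequential(
            nn.Flatten(), nn.Linear(64 * 8 * 4, 128), nn.ReLU(),
            nn.Linear(128, 2),  # logits: background vs. human
        )

    def forward(self, x):
        return self.classifier(self.features(x))

model = NightHumanCNN()
patch = torch.randn(1, 1, 64, 32)           # one hypothetical candidate patch
probs = torch.softmax(model(patch), dim=1)  # class probabilities
```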

15 pages, 8218 KiB  
Article
Discharge Measurements of Snowmelt Flood by Space-Time Image Velocimetry during the Night Using Far-Infrared Camera
by Ichiro Fujita
Water 2017, 9(4), 269; https://doi.org/10.3390/w9040269 - 11 Apr 2017
Cited by 36 | Viewed by 8104
Abstract
The space-time image velocimetry (STIV) technique is presented and shown to be a useful tool for extracting river flow information non-intrusively, simply by taking surface video images. The technique is applied to measure surface velocity distributions on the Uono River on Honshu Island, Japan. At the site, various measurement methods, such as a radio-wave velocity meter, an acoustic Doppler current profiler (ADCP), and imaging techniques, were implemented. The performance of STIV was examined in various respects, including a night measurement using a far-infrared (FIR) camera and a comparison with ADCP data to check measurement accuracy. All the results showed that STIV is capable of providing reliable surface velocity and water discharge data that agree fairly well with ADCP data. In particular, it was demonstrated that measurements during the night can be conducted without any difficulty using an FIR camera together with the STIV technique; moreover, using the FIR camera, STIV can capture water surface features better than conventional cameras, even at low resolution.
(This article belongs to the Special Issue Advances in Hydro-Meteorological Monitoring)
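
The core of STIV is to stack the intensities along a search line over time into a space-time image and infer the slope of the resulting streaks, which is proportional to the surface velocity. The numpy sketch below (a simplified illustration, not the author's implementation) estimates that slope with a least-squares fit to the image gradients; the pixel size and frame interval are assumed values, and the sign convention depends on how the axes are laid out.

```python
# Simplified STIV sketch (illustrative; not the paper's implementation).
# sti[t, x]: intensity along the search line (columns) stacked over time (rows).
import numpy as np

def stiv_surface_velocity(sti, pixel_size_m, frame_dt_s):
    """Estimate surface velocity from the streak slope of a space-time image.

    A feature advected at c pixels/frame gives sti[t, x] ~ f(x - c*t), so the
    gradients satisfy g_t = -c * g_x; c is recovered by a least-squares fit.
    """
    sti = sti.astype(np.float64)
    g_t = np.gradient(sti, axis=0)  # temporal gradient (down the rows)
    g_x = np.gradient(sti, axis=1)  # spatial gradient (along the line)
    c = -np.sum(g_x * g_t) / np.sum(g_x * g_x)  # streak slope, pixels per frame
    return c * pixel_size_m / frame_dt_s        # metres per second

# Synthetic check: a sinusoidal pattern moving 2 pixels per frame.
t, x = np.meshgrid(np.arange(200), np.arange(400), indexing="ij")
sti = np.sin(0.2 * (x - 2.0 * t))
print(stiv_surface_velocity(sti, pixel_size_m=0.05, frame_dt_s=1 / 30))  # ~3.0 m/s
```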

25 pages, 8698 KiB  
Article
Far-Infrared Based Pedestrian Detection for Driver-Assistance Systems Based on Candidate Filters, Gradient-Based Feature and Multi-Frame Approval Matching
by Guohua Wang and Qiong Liu
Sensors 2015, 15(12), 32188-32212; https://doi.org/10.3390/s151229874 - 21 Dec 2015
Cited by 19 | Viewed by 8847
Abstract
Far-infrared pedestrian detection approaches for advanced driver-assistance systems based on high-dimensional features fail to simultaneously achieve robust and real-time detection. We propose a robust and real-time pedestrian detection system characterized by novel candidate filters, novel pedestrian features, and multi-frame approval matching in a coarse-to-fine fashion. Firstly, after applying a pedestrian segmentation algorithm, we design two filters based on the pedestrians' heads and the road to select candidates and reduce false alarms. Secondly, we propose a novel feature encapsulating both the relationship of the oriented gradient distribution and the code of the oriented gradient to deal with the enormous variance in pedestrians' size and appearance. Thirdly, we introduce a multi-frame approval matching approach that utilizes the spatiotemporal continuity of pedestrians to increase the detection rate. Large-scale experiments indicate that the system works in real time and that its accuracy is improved by about 9% compared with approaches based on high-dimensional features only.
(This article belongs to the Special Issue Sensors in New Road Vehicles)
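
The multi-frame approval step, which confirms a candidate only when it reappears consistently across recent frames, can be sketched generically as follows. This is an illustrative stand-in rather than the authors' matching algorithm: the centre-distance matching criterion and the "k of the last n frames" rule are assumptions.

```python
# Generic multi-frame approval sketch (illustrative; not the paper's matching algorithm).
from collections import deque

def centers_match(a, b, max_dist=20.0):
    """Treat two (x1, y1, x2, y2) boxes as the same object if their centres are close."""
    ax, ay = (a[0] + a[2]) / 2.0, (a[1] + a[3]) / 2.0
    bx, by = (b[0] + b[2]) / 2.0, (b[1] + b[3]) / 2.0
    return ((ax - bx) ** 2 + (ay - by) ** 2) ** 0.5 <= max_dist

class MultiFrameApproval:
    """Approve a candidate only if it is matched in at least k of the last n frames."""
    def __init__(self, n_frames=5, k_required=3):
        self.history = deque(maxlen=n_frames)
        self.k_required = k_required

    def update(self, detections):
        """detections: list of (x1, y1, x2, y2) boxes from the current frame.
        Returns the subset approved by the temporal-consistency check."""
        approved = []
        for box in detections:
            hits = sum(
                any(centers_match(box, prev) for prev in frame)
                for frame in self.history
            )
            if hits + 1 >= self.k_required:  # +1 counts the current frame itself
                approved.append(box)
        self.history.append(detections)
        return approved
```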
