Search Results (18)

Search Parameters:
Keywords = thermal–visible-image registration

28 pages, 70926 KiB  
Article
Fusion of Visible and Infrared Aerial Images from Uncalibrated Sensors Using Wavelet Decomposition and Deep Learning
by Chandrakanth Vipparla, Timothy Krock, Koundinya Nouduri, Joshua Fraser, Hadi AliAkbarpour, Vasit Sagan, Jing-Ru C. Cheng and Palaniappan Kannappan
Sensors 2024, 24(24), 8217; https://doi.org/10.3390/s24248217 - 23 Dec 2024
Cited by 2 | Viewed by 2012
Abstract
Multi-modal systems extract information about the environment using specialized sensors that are optimized based on the wavelength of the phenomenology and material interactions. To maximize the entropy, complementary systems operating in regions of non-overlapping wavelengths are optimal. VIS-IR (Visible-Infrared) systems have been at the forefront of multi-modal fusion research and are used extensively to represent information in all-day, all-weather applications. Prior to image fusion, the image pairs have to be properly registered and mapped to a common resolution palette. However, due to differences in the device physics of image capture, information from VIS-IR sensors cannot be directly correlated, which is a major bottleneck for this area of research. In the absence of camera metadata, image registration is performed manually, which is not practical for large datasets. Most of the work published in this area assumes calibrated sensors and the availability of camera metadata providing registered image pairs, which limits the generalization capability of these systems. In this work, we propose a novel end-to-end pipeline termed DeepFusion for image registration and fusion. First, we design a recursive crop-and-scale wavelet spectral decomposition (WSD) algorithm for automatically extracting the patch of visible data representing the thermal information. After data extraction, both images are registered to a common resolution palette and forwarded to the DNN for image fusion. The fusion performance of the proposed pipeline is compared and quantified against state-of-the-art classical and DNN architectures on open-source and custom datasets, demonstrating the efficacy of the pipeline. Furthermore, we propose a novel keypoint-based metric for quantifying the quality of the fused output. Full article
(This article belongs to the Section Physical Sensors)
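The wavelet spectral decomposition (WSD) step this abstract describes can be illustrated with a minimal single-level 2-D Haar-style decomposition. The sketch below is illustrative only: the function name `haar_decompose` and the simple averaging normalization are assumptions, and the paper's recursive crop-and-scale algorithm is considerably more involved.

```python
import numpy as np

def haar_decompose(img):
    """Single-level 2-D Haar-style wavelet decomposition.

    Returns the approximation (LL) and detail (LH, HL, HH) subbands,
    each half the input size. A sketch of the kind of wavelet spectral
    decomposition step used for multi-scale analysis; an averaging
    normalization is used here for simplicity.
    """
    a = img[0::2, 0::2].astype(float)  # top-left of each 2x2 block
    b = img[0::2, 1::2].astype(float)  # top-right
    c = img[1::2, 0::2].astype(float)  # bottom-left
    d = img[1::2, 1::2].astype(float)  # bottom-right
    ll = (a + b + c + d) / 4.0   # low-pass approximation
    lh = (a + b - c - d) / 4.0   # horizontal detail
    hl = (a - b + c - d) / 4.0   # vertical detail
    hh = (a - b - c + d) / 4.0   # diagonal detail
    return ll, lh, hl, hh
```

Applied recursively to the LL band, this yields the usual multi-resolution pyramid.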

15 pages, 3147 KiB  
Article
A Two-Stage Registration Strategy for Thermal–Visible Images in Substations
by Wanfeng Sun, Haibo Gao and Cheng Li
Appl. Sci. 2024, 14(3), 1158; https://doi.org/10.3390/app14031158 - 30 Jan 2024
Cited by 2 | Viewed by 1742
Abstract
The analysis of infrared video images is becoming one of the methods used to detect thermal hazards in many large-scale engineering sites. The fusion of infrared thermal imaging and visible image data in the target area can help people to identify and locate the fault points of thermal hazards. A very important step in this process is the registration of thermal–visible images. However, the direct registration of images with large-scale differences may lead to large registration errors or even failure. This paper presents a novel two-stage thermal–visible-image registration strategy specifically designed for challenging scenes such as substations. Firstly, the original image pairs are binarized and then quickly and roughly registered. Secondly, the adaptive downsampling unit partial-intensity invariant feature descriptor (ADU-PIIFD) algorithm is proposed to correct the small-scale differences in details and achieve finer registration. Experiments are conducted on 30 datasets containing complex power station scenes and compared with several other methods. The results show that the proposed method exhibits excellent and stable performance in thermal–visible-image registration, with a registration error within five pixels on the entire dataset. Especially for multimodal images with poor image quality and many detailed features, the robustness of the proposed method is far better than that of other methods, which provides a more reliable image registration scheme for the field of fire safety. Full article
(This article belongs to the Special Issue Advanced Methodology and Analysis in Fire Protection Science)
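The "binarize, then roughly register" first stage can be sketched as aligning the foreground centroids of the two binarized images. This is a hedged illustration of the idea only: the function name `coarse_offset` and the mean-value threshold are assumptions, and the paper's actual coarse stage and ADU-PIIFD refinement are more sophisticated.

```python
import numpy as np

def coarse_offset(fixed, moving, thresh_f=None, thresh_m=None):
    """Estimate a coarse (dy, dx) translation between two images by
    binarizing both and aligning the centroids of their foreground
    pixels. A minimal sketch of a rough first-stage registration;
    thresholds default to each image's mean intensity.
    """
    tf = fixed.mean() if thresh_f is None else thresh_f
    tm = moving.mean() if thresh_m is None else thresh_m
    yf, xf = np.nonzero(fixed > tf)    # foreground of the fixed image
    ym, xm = np.nonzero(moving > tm)   # foreground of the moving image
    return yf.mean() - ym.mean(), xf.mean() - xm.mean()
```

A fine-registration stage would then correct the residual small-scale differences that such a crude translation model cannot capture.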

25 pages, 25613 KiB  
Article
Orthomosaicking Thermal Drone Images of Forests via Simultaneously Acquired RGB Images
by Rudraksh Kapil, Guillermo Castilla, Seyed Mojtaba Marvasti-Zadeh, Devin Goodsman, Nadir Erbilgin and Nilanjan Ray
Remote Sens. 2023, 15(10), 2653; https://doi.org/10.3390/rs15102653 - 19 May 2023
Cited by 9 | Viewed by 6290
Abstract
Operational forest monitoring often requires fine-detail information in the form of an orthomosaic, created by stitching overlapping nadir images captured by aerial platforms such as drones. RGB drone sensors are commonly used for low-cost, high-resolution imaging that is conducive to effective orthomosaicking, but only capture visible light. Thermal sensors, on the other hand, capture long-wave infrared radiation, which is useful for early pest detection among other applications. However, these lower-resolution images suffer from reduced contrast and lack of descriptive features for successful orthomosaicking, leading to gaps or swirling artifacts in the orthomosaic. To tackle this, we propose a thermal orthomosaicking workflow that leverages simultaneously acquired RGB images. The latter are used for producing a surface mesh via structure from motion, while thermal images are only used to texture this mesh and yield a thermal orthomosaic. Prior to texturing, RGB-thermal image pairs are co-registered using an affine transformation derived from a machine learning technique. On average, the individual RGB and thermal images achieve a mutual information of 0.2787 after co-registration using our technique, compared to 0.0591 before co-registration, and 0.1934 using manual co-registration. We show that the thermal orthomosaic generated from our workflow (1) is of better quality than other existing methods, (2) is geometrically aligned with the RGB orthomosaic, (3) preserves radiometric information (i.e., surface temperatures) from the original thermal imagery, and (4) enables easy transfer of downstream tasks—such as tree crown detection from the RGB to the thermal orthomosaic. We also provide an open-source tool that implements our workflow to facilitate usage and further development. Full article
(This article belongs to the Collection Feature Paper Special Issue on Forest Remote Sensing)
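The mutual-information figures quoted above (0.2787 after co-registration versus 0.0591 before) can be computed in spirit with a standard histogram-based estimator. The sketch below is a generic implementation, not the authors' evaluation code; the bin count is an assumption.

```python
import numpy as np

def mutual_information(a, b, bins=32):
    """Histogram-based mutual information between two images.

    Higher values indicate stronger statistical dependence between the
    intensity patterns, which is why MI is a common alignment metric
    for multi-modal (e.g. RGB-thermal) image pairs.
    """
    hist, _, _ = np.histogram2d(a.ravel(), b.ravel(), bins=bins)
    pxy = hist / hist.sum()              # joint distribution
    px = pxy.sum(axis=1)                 # marginal of a
    py = pxy.sum(axis=0)                 # marginal of b
    nz = pxy > 0                         # avoid log(0)
    return float(np.sum(pxy[nz] * np.log(pxy[nz] / (px[:, None] * py[None, :])[nz])))
```

An image pair that is well co-registered scores noticeably higher than the same pair misaligned, which is exactly the improvement the workflow reports.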

15 pages, 3431 KiB  
Article
A Method of Aerial Multi-Modal Image Registration for a Low-Visibility Approach Based on Virtual Reality Fusion
by Yuezhou Wu and Changjiang Liu
Appl. Sci. 2023, 13(6), 3396; https://doi.org/10.3390/app13063396 - 7 Mar 2023
Cited by 3 | Viewed by 2047
Abstract
Aiming at the approach and landing of an aircraft under low visibility, this paper studies the use of an infrared heat-transfer imaging camera and a visible-light camera to obtain dynamic hyperspectral images of flight approach scenes from the perspective of enhancing pilot vision. To address the problems of affine deformation, difficulty in extracting similar geometric features, thermal shadows, light shadows, and other issues in heterogeneous infrared and visible-light image registration, a multi-modal image registration method based on RoI driving in a virtual scene, RoI feature extraction, and virtual-reality-fusion-based contour angle orientation is proposed, which reduces the area to be registered, reduces the amount of computation, and improves real-time registration accuracy. To handle the differences between multi-modal images in resolution, contrast, color channels, and color information strength, the contour angle orientation preserves the geometry of multi-source images well, and the virtual reality fusion technology effectively removes incorrectly matched point pairs. By integrating redundant and complementary information from multi-modal images, the visual perception abilities of pilots during the approach process are enhanced as a whole. Full article
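Registration methods of this kind ultimately estimate a geometric transform from matched keypoint pairs, after incorrect matches have been pruned. As a hedged sketch (not the paper's method; the function name and interface are illustrative), a least-squares affine fit over point correspondences looks like:

```python
import numpy as np

def fit_affine(src, dst):
    """Least-squares affine transform mapping src points onto dst points.

    src, dst: (N, 2) arrays of matched (x, y) keypoints, N >= 3.
    Returns a 2x3 matrix A such that dst is approximately [x, y, 1] @ A.T.
    In practice an outlier-rejection wrapper (e.g. RANSAC-style pruning of
    mismatched pairs) would surround a fit like this.
    """
    src = np.asarray(src, float)
    dst = np.asarray(dst, float)
    ones = np.ones((len(src), 1))
    X = np.hstack([src, ones])                    # (N, 3) homogeneous points
    A, *_ = np.linalg.lstsq(X, dst, rcond=None)   # solve X @ A = dst
    return A.T                                    # (2, 3) affine matrix
```

With exact correspondences the fit recovers the true affine deformation; with noisy matches it returns the least-squares compromise.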

22 pages, 5942 KiB  
Article
Feather Damage Monitoring System Using RGB-Depth-Thermal Model for Chickens
by Xiaomin Zhang, Yanning Zhang, Jinfeng Geng, Jinming Pan, Xinyao Huang and Xiuqin Rao
Animals 2023, 13(1), 126; https://doi.org/10.3390/ani13010126 - 28 Dec 2022
Cited by 12 | Viewed by 3276
Abstract
Feather damage is a continuous health and welfare challenge among laying hens. Infrared thermography is a tool that can evaluate the changes in the surface temperature, derived from an inflammatory process that would make it possible to objectively determine the depth of the damage to the dermis. Therefore, the objective of this article was to develop an approach to feather damage assessment based on visible light and infrared thermography. Fusing information obtained from these two bands can highlight their strengths, which is more evident in the assessment of feather damage. A novel pipeline was proposed to reconstruct the RGB-Depth-Thermal maps of the chicken using binocular color cameras and a thermal infrared camera. The process of stereo matching based on binocular color images allowed for a depth image to be obtained. Then, a heterogeneous image registration method was presented to achieve image alignment between thermal infrared and color images so that the thermal infrared image was also aligned with the depth image. The chicken image was segmented from the background using a deep learning-based network based on the color and depth images. Four kinds of images, namely, color, depth, thermal and mask, were utilized as inputs to reconstruct the 3D model of a chicken with RGB-Depth-Thermal maps. The depth of feather damage can be better assessed with the proposed model compared to the 2D thermal infrared image or color image during both day and night, which provided a reference for further research in poultry farming. Full article
(This article belongs to the Special Issue Indicators and Assessment Methods of Poultry Welfare)

21 pages, 6867 KiB  
Article
TISD: A Three Bands Thermal Infrared Dataset for All Day Ship Detection in Spaceborne Imagery
by Liyuan Li, Jianing Yu and Fansheng Chen
Remote Sens. 2022, 14(21), 5297; https://doi.org/10.3390/rs14215297 - 23 Oct 2022
Cited by 17 | Viewed by 4732
Abstract
The development of infrared remote sensing technology improves the ability to observe targets at night, and thermal imaging systems (TIS) play a key role in the military field. Ship detection using thermal infrared (TI) remote sensing images (RSIs) has aroused great interest for fishery supervision, port management, and maritime safety. However, due to the high secrecy level of infrared data, thermal infrared ship datasets are lacking. In this paper, a new three-band thermal infrared ship dataset (TISD) is proposed to evaluate all-day ship target detection algorithms. All images are real-world three-band RSIs from the SDGSAT-1 satellite TIS. Based on the TISD, we use a state-of-the-art algorithm as a baseline to do the following. (1) Common ship detection methods and existing ship datasets from synthetic aperture radar, visible, and infrared images are briefly summarized. (2) The standard deviation of each single band, the correlation coefficients of band combinations, and the optimum index factor of the three-band dataset are analyzed. Combined with this theoretical analysis, the influence of the input band information on the detection accuracy of a neural network model is explored. (3) We construct a lightweight network based on Yolov5 to reduce the number of floating-point operations, which helps reduce inference time. (4) By utilizing up-sampling and registration pre-processing methods, TI images are fused with glimmer RSIs to verify the detection accuracy at night. In practice, the proposed dataset is expected to promote the research and application of all-day ship detection. Full article
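The band statistics analyzed in item (2) have standard definitions; in particular, the optimum index factor (OIF) for a band combination is the sum of the band standard deviations divided by the sum of the absolute pairwise correlation coefficients. The sketch below follows that textbook formula; the function name is illustrative and this is not the authors' code.

```python
import numpy as np

def optimum_index_factor(bands):
    """Optimum index factor (OIF) for a three-band combination.

    bands: list of three 2-D arrays. Higher OIF indicates bands with
    high individual variance (information) and low mutual correlation
    (little redundancy), i.e. a more informative combination.
    """
    stds = [b.std() for b in bands]                 # per-band information
    corrs = []
    for i in range(3):
        for j in range(i + 1, 3):                   # pairwise redundancy
            r = np.corrcoef(bands[i].ravel(), bands[j].ravel())[0, 1]
            corrs.append(abs(r))
    return sum(stds) / sum(corrs)
```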

18 pages, 6323 KiB  
Technical Note
Landsat 9 Geometric Characteristics Using Underfly Data
by Michael J. Choate, Rajagopalan Rengarajan, James C. Storey and Mark Lubke
Remote Sens. 2022, 14(15), 3781; https://doi.org/10.3390/rs14153781 - 6 Aug 2022
Cited by 18 | Viewed by 3764
Abstract
The Landsat program has a long history of providing remotely sensed data to the user community. This history is being extended with the addition of the Landsat 9 satellite, which closely mimics the Landsat 8 satellite and its instruments. These satellites each carry two instruments, the Operational Land Imager (OLI) and the Thermal Infrared Sensor (TIRS). OLI is a push-broom sensor that collects visible and near-infrared (VNIR) and short-wave infrared (SWIR) wavelengths at 30 m ground sample distance, along with a panchromatic 15 m band. The TIRS sensor contains two long-wave thermal spectral channels centered at 10.9 and 12 µm. The data from these two instruments, on both satellites, are combined into a single Landsat product. The Landsat 5–9 satellites follow a 16 day repeat cycle designated as the Worldwide Reference System (WRS-2), which provides a global notional gridded mapping for identifying individual Landsat scenes. The Landsat 8 and 9 satellites are flown such that their orbital tracks are separated by 8 days in this 16 day cycle. During the commissioning period of Landsat 9, and during its ascent to its operational WRS-2 orbit, the Landsat 9 satellite's orbital track went under and crossed over the orbital track of the Landsat 8 satellite. This produced a unique situation in which nearly time-coincident imagery could be obtained from the instruments of the two spacecraft. From a radiometric standpoint, this allowed near-time cross-calibration between the instruments to be performed. From a geometry perspective, calibration is achieved through high-resolution reference imagery over specific ground locations, ensuring that the instruments are individually calibrated and well cross-calibrated geometrically. Although these underfly data do not provide calibration of the instruments between the platforms from a geometric perspective, they allow for the verification of the calibration steps involving the instruments and spacecraft.
This paper discusses the co-registration of this unique set of data, along with other geometric aspects of these data, by comparing the differences in sensor viewing and sun angles associated with collections from the two platforms for imagery obtained over common geographic locations. The image-to-image comparisons between Landsat 8 and 9 coincident pairs, where both datasets are precision terrain products, are registered to within 2.2 m with respect to their root-mean-squared radial error (RMSEr). The 2.2 m represents less than 0.1 of a 30 m multispectral pixel in misregistration between the L9 and L8 underfly products that will be available to the user community. This unique dataset will provide well-registered, near-coincident image acquisitions between the two platforms that can be key to any calibration or application comparisons. The paper also shows that, for pairs in which one image failed precision correction and became a terrain-corrected-only product, a co-registration RMSEr of 8–14 m could be expected, while, in cases where both images failed the precision correction step and became terrain-corrected-only products, a 14 m RMSEr could be expected. Full article
(This article belongs to the Special Issue Environmental Monitoring Using Satellite Remote Sensing)
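The RMSEr values quoted above follow the usual definition: the square root of the mean squared radial residual over the measured tie points. A minimal sketch (the function name is illustrative, not from the paper):

```python
import numpy as np

def rmse_radial(dx, dy):
    """Root-mean-squared radial error (RMSEr) of registration residuals.

    dx, dy: per-point misregistration components, in metres or pixels.
    Each point's radial error is sqrt(dx^2 + dy^2); RMSEr is the root
    of the mean of those squared radial errors.
    """
    dx = np.asarray(dx, float)
    dy = np.asarray(dy, float)
    return float(np.sqrt(np.mean(dx**2 + dy**2)))
```

For example, residuals of 2.2 m RMSEr correspond to under a tenth of a 30 m multispectral pixel, as the abstract notes.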

21 pages, 10303 KiB  
Article
A TIR-Visible Automatic Registration and Geometric Correction Method for SDGSAT-1 Thermal Infrared Image Based on Modified RIFT
by Jinfen Chen, Bo Cheng, Xiaoping Zhang, Tengfei Long, Bo Chen, Guizhou Wang and Degang Zhang
Remote Sens. 2022, 14(6), 1393; https://doi.org/10.3390/rs14061393 - 14 Mar 2022
Cited by 22 | Viewed by 4362
Abstract
High-resolution thermal infrared (TIR) remote sensing images can more accurately retrieve land surface temperature and describe the spatial pattern of the urban thermal environment. The Thermal Infrared Spectrometer (TIS), which combines high spatial resolution among current spaceborne thermal infrared sensors with global data acquisition capability, is one of the sensors carried on SDGSAT-1 and is an important complement to the existing international mainstream satellites. In order to produce standard data products rapidly and accurately, an automatic registration and geometric correction method needs to be developed. Unlike visible–visible image registration, thermal infrared images are blurred in edge details and have obvious non-linear radiometric differences from visible images, which makes TIR-visible image registration challenging. To address these problems, homomorphic filtering is employed to enhance TIR image details, and a modified RIFT algorithm is proposed to achieve TIR-visible image registration. Whereas RIFT uses the MIM for feature description, the modified RIFT uses a novel binary pattern string for descriptor construction. With sufficient and uniformly distributed ground control points, a two-step orthorectification framework, from the SDGSAT-1 TIS L1A image to the L4 orthoimage, is proposed in this study. The first experiment, with six TIR-visible image pairs captured over different landforms, is performed to verify the registration performance, and the result indicates that the homomorphic filtering and modified RIFT greatly increase the number of corresponding points. The second experiment, with one scene of an SDGSAT-1 TIS image, is executed to test the proposed orthorectification framework. Subsequently, 52 GCPs are selected manually to evaluate the orthorectification accuracy. The result indicates that the proposed orthorectification framework helps improve the geometric accuracy and provides a guarantee for subsequent thermal infrared applications. Full article
(This article belongs to the Section Earth Observation Data)

13 pages, 5490 KiB  
Article
Band-to-Band Registration of FY-1C/D Visible-IR Scanning Radiometer High-Resolution Picture Transmission Data
by Hongbo Pan, Jia Tian, Taoyang Wang, Jing Wang, Chengbao Liu and Lei Yang
Remote Sens. 2022, 14(2), 411; https://doi.org/10.3390/rs14020411 - 17 Jan 2022
Cited by 3 | Viewed by 2850
Abstract
The visible-IR scanning radiometer (VIRR) of the FY-1C/D meteorological satellites consists of 10 bands with 4 different focal plane assemblies (FPAs). However, there are significant band-to-band registration (BBR) errors between different bands, which cannot be compensated for by a simple shift in the along-scan direction. A rigorous BBR framework was proposed to analyze the sources of misregistration in the whisk-broom camera. According to this theory, the 45° scanning mirror introduces tangent-function-style misregistration in the along-track direction and secant-function-style misregistration in the across-track direction between different bands if the bands are not on the same optical axis. As proven by experiments on both FY-1C and FY-1D, the image rotation caused by the 45° scanning mirrors plays a major role in the misregistration. However, misregistration between different FPAs does not strictly adhere to this theory. Therefore, a polynomial-based co-registration method was proposed to model the BBR errors for the VIRR. To achieve 0.1 pixel accuracy, a fourth-degree polynomial was used for BBR in the along-scan direction, and a fifth-degree polynomial was used for the along-track direction. For the reflective bands, the root-mean-square errors (RMSEs) of misregistration could be improved from 3 pixels to 0.11 pixels. Limited by matching accuracy, the RMSEs of misregistration between thermal bands and reflective bands were approximately 0.2 to 0.4 pixels, depending on the signal-to-noise ratio. Full article
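The polynomial co-registration idea (fitting fourth- and fifth-degree polynomials to the measured misregistration as a function of scan position) can be sketched with a plain least-squares fit. This is illustrative only; the function name is assumed, and the paper's model and its matching procedure are more involved.

```python
import numpy as np

def fit_bbr_polynomial(scan_pos, misreg, degree):
    """Fit a 1-D polynomial model of band-to-band misregistration as a
    function of scan position, returning a callable model.

    scan_pos, misreg: 1-D arrays of positions and measured offsets
    (e.g. in pixels). The paper uses degree 4 along-scan and degree 5
    along-track; coefficients here come from plain least squares.
    """
    coeffs = np.polyfit(scan_pos, misreg, degree)
    return np.poly1d(coeffs)
```

Evaluating the fitted model at each pixel's scan position gives the correction to apply for co-registration.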

23 pages, 4788 KiB  
Article
A Rapid Beam Pointing Determination and Beam-Pointing Error Analysis Method for a Geostationary Orbiting Microwave Radiometer Antenna in Consideration of Antenna Thermal Distortions
by Hualong Hu, Xiaochong Tong and He Li
Sensors 2021, 21(17), 5943; https://doi.org/10.3390/s21175943 - 4 Sep 2021
Viewed by 2628
Abstract
When observing the Earth’s radiation signal with a geostationary orbiting (GEO) mechanically scanned microwave radiometer, it is necessary to correct the antenna beam pointing (ABP) in real time for the deviation caused by thermal distortions of the antenna reflectors, with the help of the on-board Image Navigation and Registration (INR) system, during scanning of the Earth. The traditional ABP determination and beam-pointing error (BPE) analysis method is based on the electromechanical coupling principle, which consumes considerable time and computing resources and thus cannot meet the requirement for the frequent real-time on-board INR operations needed by the GEO microwave radiometer. For this reason, matrix optics (MO), which is widely used in characterizing the optical path of visible/infrared sensors, is extended in this study so that it can be applied to model the equivalent optical path of a microwave antenna with a much more complicated configuration. Based on the extended MO method, the ideal ABP determination model and the model for determining the actual ABP affected by reflector thermal distortions are deduced for China’s future GEO radiometer, and an MO-based BPE computing method, which establishes a direct connection between the reflector thermal distortion errors (TDEs) and the thermally induced BPE, is defined. To verify the overall performance of the extended MO method for rapid ABP determination, the outputs from the ideal ABP determination model were compared to calculations from the GRASP 10.3 software. The experimental results show that the MO-based ABP determination model achieves the same results as the GRASP software with a significant advantage in computational efficiency (e.g., at the lowest frequency band of 54 GHz, our MO-based model ran approximately 4,730,000 times faster than the GRASP software).
After validating the correctness of the extended MO method, the impacts of the reflector TDEs on the BPE were quantified on a case-by-case basis with the help of the defined BPE computing method, and the TDEs that had a significant impact on the BPE were thereby identified. The methods and results presented in this study are expected to set the basis for the further development of on-board INR systems to be used in China’s future GEO microwave radiometer and to benefit the ABP determination and BPE analysis of other antenna configurations to a certain extent. Full article
(This article belongs to the Section Remote Sensors)

30 pages, 18767 KiB  
Article
Geometric- and Optimization-Based Registration Methods for Long-Wave Infrared Hyperspectral Images
by Alper Koz and Ufuk Efe
Remote Sens. 2021, 13(13), 2465; https://doi.org/10.3390/rs13132465 - 24 Jun 2021
Cited by 3 | Viewed by 2457
Abstract
Registration of long-wave infrared (LWIR) hyperspectral images, with their thermal and emissivity components, has until now received comparatively little attention relative to visible, near-infrared, and short-wave infrared hyperspectral images. In this paper, the registration of LWIR hyperspectral images is investigated to enhance applications of LWIR images such as change detection, temperature and emissivity separation, and target detection. The proposed approach first searches for the best features of hyperspectral image pixels for extraction and matching in the LWIR range and then performs a global registration over two-dimensional maps of the three-dimensional hyperspectral cubes. The performances of temperature and emissivity features in the thermal domain, along with the average energy and principal components of the spectral radiance, are investigated. The global registration performed over whole 2D maps is further improved by blockwise local refinements. Of the two proposed refinements, the geometric refinement seeks the best keypoint combination in the neighborhood of each block to estimate the transformation for that block, while the alternative optimization-based refinement iteratively finds the best transformation by maximizing the similarity of the reference and transformed blocks. Possible blocking artifacts due to blockwise mapping are finally eliminated by pixelwise refinement. The experiments are evaluated with respect to the (i) utilized similarity metrics in the LWIR range between transformed and reference blocks, (ii) proposed geometric- and optimization-based methods, and (iii) image pairs captured on the same and different days. The better performance of the proposed approach compared to manual, GPU-IMU-based, and state-of-the-art image registration methods is verified. Full article
(This article belongs to the Section Remote Sensing Image Processing)
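One of the candidate 2-D summaries the abstract mentions, the principal components of the spectral radiance, amounts to projecting the hyperspectral cube onto its first principal component to obtain a single map suitable for feature extraction. The sketch below is a generic PCA projection under that interpretation, not the authors' implementation; the function name is assumed.

```python
import numpy as np

def first_pc_map(cube):
    """Project a hyperspectral cube of shape (H, W, B) onto its first
    principal component across bands, yielding one 2-D map.

    The per-pixel spectra are mean-centered, the band covariance is
    eigendecomposed, and each pixel is projected onto the dominant
    eigenvector.
    """
    h, w, b = cube.shape
    X = cube.reshape(-1, b).astype(float)
    X -= X.mean(axis=0)                       # center each band
    cov = X.T @ X / (X.shape[0] - 1)          # band covariance (B x B)
    vals, vecs = np.linalg.eigh(cov)          # eigenvalues ascending
    pc1 = vecs[:, -1]                         # dominant eigenvector
    return (X @ pc1).reshape(h, w)
```

Keypoints are then extracted and matched on this map instead of on any single noisy band.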

16 pages, 2365 KiB  
Article
Assessment of Registration Methods for Thermal Infrared and Visible Images for Diabetic Foot Monitoring
by Sara González-Pérez, Daniel Perea Ström, Natalia Arteaga-Marrero, Carlos Luque, Ignacio Sidrach-Cardona, Enrique Villa and Juan Ruiz-Alzola
Sensors 2021, 21(7), 2264; https://doi.org/10.3390/s21072264 - 24 Mar 2021
Cited by 10 | Viewed by 3262
Abstract
This work presents a revision of four different registration methods for thermal infrared and visible images captured by a camera-based prototype for the remote monitoring of the diabetic foot. This prototype uses low-cost, off-the-shelf sensors in the thermal infrared and visible spectra. Four different methods (Geometric Optical Translation, Homography, Iterative Closest Point, and Affine transform with Gradient Descent) have been implemented and analyzed for the registration of images obtained from both sensors. All four algorithms’ performances were evaluated using the Simultaneous Truth and Performance Level Estimation (STAPLE) together with several overlap benchmarks such as the Dice coefficient and the Jaccard index. The performance of the four methods has been analyzed with the subject at a fixed focal plane and also in the vicinity of this plane. The four registration algorithms provide suitable results both at the focal plane and outside of it within a 50 mm margin. The obtained Dice coefficients are greater than 0.950 in all scenarios, well within the margins required for the application at hand. A discussion of the obtained results under different distances is presented, along with an evaluation of their robustness under changing conditions. Full article
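The overlap benchmarks cited above have standard definitions: the Dice coefficient is twice the intersection over the sum of the mask sizes, and the Jaccard index is intersection over union. A minimal implementation for binary masks (the function name is illustrative):

```python
import numpy as np

def dice_jaccard(mask_a, mask_b):
    """Dice coefficient and Jaccard index between two binary masks.

    Both range over [0, 1], with 1 meaning perfect overlap; they are
    related by Dice = 2J / (1 + J).
    """
    a = np.asarray(mask_a, bool)
    b = np.asarray(mask_b, bool)
    inter = np.logical_and(a, b).sum()
    dice = 2.0 * inter / (a.sum() + b.sum())
    jacc = inter / np.logical_or(a, b).sum()
    return float(dice), float(jacc)
```

A reported Dice above 0.950 thus corresponds to a Jaccard index above roughly 0.905.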

16 pages, 3239 KiB  
Article
Multi-Sensor Face Registration Based on Global and Local Structures
by Wei Li, Mingli Dong, Naiguang Lu, Xiaoping Lou and Wanyong Zhou
Appl. Sci. 2019, 9(21), 4623; https://doi.org/10.3390/app9214623 - 30 Oct 2019
Cited by 6 | Viewed by 3156
Abstract
The work reported in this paper aims at utilizing the global geometrical relationship and local shape feature to register multi-spectral images for fusion-based face recognition. We first propose a multi-spectral face image registration method based on both the global and local structures of feature point sets, combining the global geometrical relationship and local shape feature in a new Student’s t mixture probabilistic model framework. On the one hand, we use the inner-distance shape context as the local shape descriptor of the feature point sets. On the other hand, we formulate the feature point set registration of the multi-spectral face images as Student’s t mixture probabilistic model estimation, and local shape descriptors are used to replace the mixing proportions of the prior Student’s t mixture model. Furthermore, in order to improve the anti-interference performance of face recognition techniques, a guided filtering and gradient preserving image fusion strategy is used to fuse the registered multi-spectral face images. This allows the fused multi-spectral image to retain the apparent details of the visible image and the thermal radiation information of the infrared image. Subjective and objective registration experiments are conducted with manually selected landmarks and real multi-spectral face images. The qualitative and quantitative comparisons with state-of-the-art methods demonstrate the accuracy and robustness of our proposed method in solving the multi-spectral face image registration problem. Full article
(This article belongs to the Special Issue Intelligent Processing on Image and Optical Information)
5 pages, 2713 KiB  
Proceeding Paper
A Preliminary Study on Non Contact Thermal Monitoring of Microwave Photonic Systems
by Bushra Jalil, Bilal Hussain, Maria Antonietta Pascali, Giovanni Serafino, Davide Moroni and Paolo Ghelfi
Proceedings 2019, 27(1), 19; https://doi.org/10.3390/proceedings2019027019 - 23 Sep 2019
Viewed by 1561
Abstract
Microwave photonic systems are susceptible to thermal fluctuations due to the thermo-optic effect. To stabilize the performance of photonic components, thermal monitoring is typically achieved with thermistors placed at arbitrary locations along the component. This work presents non-contact thermography of a fully functional microwave photonic system. The temperature profiles of the printed circuit board (PCB) and the photonic integrated circuit (PIC) are obtained using a Fluke FLIR (A65) camera. We performed Otsu's thresholding to segment the heat centers located across the PCB as well as the PIC. The infrared and visible cameras used in this work have different fields of view; therefore, after applying morphological methods, we performed image registration to align the visible and thermal images. We demonstrate this method on a circuit board with active electrical/photonic elements and were able to observe the thermal profile of these components. Full article
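Otsu's thresholding, used above to segment heat centers in the thermal image, picks the gray level that maximizes the between-class variance of the resulting foreground/background split. A self-contained NumPy sketch (the function name is ours; libraries such as OpenCV provide this built in):

```python
import numpy as np

def otsu_threshold(img):
    """Return the 8-bit threshold maximizing between-class variance (Otsu).

    img : integer array with values in [0, 255], e.g. a thermal frame
          quantized to 8 bits.
    """
    hist = np.bincount(img.ravel(), minlength=256).astype(float)
    p = hist / hist.sum()                  # gray-level probabilities
    omega = np.cumsum(p)                   # class-0 probability up to t
    mu = np.cumsum(p * np.arange(256))     # cumulative mean up to t
    mu_total = mu[-1]                      # global mean
    # Between-class variance for every candidate threshold t.
    with np.errstate(divide="ignore", invalid="ignore"):
        sigma_b = (mu_total * omega - mu) ** 2 / (omega * (1.0 - omega))
    sigma_b = np.nan_to_num(sigma_b)       # undefined where a class is empty
    return int(np.argmax(sigma_b))
```

Pixels above the returned threshold (`img > t`) would correspond to the heat centers on the PCB/PIC before the morphological clean-up step.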
18 pages, 7849 KiB  
Article
Automated Attitude Determination for Pushbroom Sensors Based on Robust Image Matching
by Ryu Sugimoto, Toru Kouyama, Atsunori Kanemura, Soushi Kato, Nevrez Imamoglu and Ryosuke Nakamura
Remote Sens. 2018, 10(10), 1629; https://doi.org/10.3390/rs10101629 - 13 Oct 2018
Cited by 5 | Viewed by 4916
Abstract
Accurate attitude information from a satellite image sensor is essential for accurate map projection and for reducing the computational cost of post-processing image registration, which enhances image usability for tasks such as change detection. We propose a robust attitude-determination method for pushbroom sensors onboard spacecraft that matches land features between well-registered base-map images and observed images, extending a current method that derives satellite attitude from an image taken with a 2-D image sensor. Unlike a 2-D image sensor, a pushbroom sensor observes the ground while changing its position and attitude along the trajectory of the satellite. To address pushbroom-sensor observation, the proposed method traces the temporal variation of the sensor attitude by combining the robust matching technique for a 2-D image sensor with a non-linear least squares approach that expresses the gradual time evolution of the sensor attitude. Experimental results, using images taken by the visible and near-infrared pushbroom sensor of the Advanced Spaceborne Thermal Emission and Reflection Radiometer (ASTER) onboard Terra as test images and Landsat-8/OLI images as the base map, show that the proposed method can determine satellite attitude with an accuracy of 0.003° (corresponding to the 2-pixel scale of ASTER) in the roll and pitch angles, even for a scene containing many cloud patches, whereas the accuracy remains 0.05° in the yaw angle, which affects the accuracy of image registration less than the other two axes. In addition to achieving attitude accuracy better than that of star trackers (0.01°) in the roll and pitch angles, the proposed method does not require any attitude information from onboard sensors.
Therefore, the proposed method may contribute to validating and calibrating attitude sensors in space, while the improved accuracy reduces the computational cost of post-processing for image registration. Full article
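The "gradual time evolution of the sensor attitude" mentioned above can be modeled as a smooth, low-order function of time fitted by least squares to the per-line attitude estimates obtained from image matching. The sketch below fits a single axis with an ordinary polynomial; this is a deliberate simplification, since the paper solves all three axes jointly with non-linear least squares, and the function name and degree are our own illustrative choices.

```python
import numpy as np

def fit_attitude_series(t, roll_obs, deg=2):
    """Fit a low-order polynomial to per-scan-line roll-angle estimates.

    t        : (L,) observation time of each pushbroom scan line.
    roll_obs : (L,) roll angle recovered from image matching per line.
    deg      : polynomial degree modeling the gradual attitude drift.
    Returns a callable giving the smoothed roll angle at any time,
    which can then be used to re-project the image lines.
    """
    coeffs = np.polyfit(t, roll_obs, deg)
    return np.poly1d(coeffs)
```

Because each scan line is acquired at a different time, evaluating the fitted polynomial at a line's timestamp yields a per-line attitude, which is exactly what a pushbroom geometry model needs instead of a single per-frame attitude.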
(This article belongs to the Section Remote Sensing Image Processing)