Search Results (45)

Search Parameters:
Keywords = thermal pedestrian images

17 pages, 5954 KiB  
Article
Research on the Coupling Relationship Between Street Built Environment and Thermal Comfort Based on Deep Learning of Street View Images: A Case Study of Chaowai Block in Beijing
by Xin Yang, Haocheng Li, Xin Ma and Bo Zhang
Buildings 2025, 15(9), 1449; https://doi.org/10.3390/buildings15091449 - 24 Apr 2025
Viewed by 466
Abstract
As global climate change receives widespread attention, local microclimate environments have become a key focus of climate research and are of great significance for improving the quality of urban living environments. This study explored the quantitative coupling relationship between the built environment and the thermal comfort of complex streets, using the Chaowai Block in Beijing as an example. By applying deep learning to street view images of an arterial road, we quantitatively analyzed road environmental elements at three road levels, simulated the block's thermal comfort, numerically extracted the built environment factors, and derived a regression equation for thermal comfort. The research results show that the UTCI value range of the Chaowai Block is between 28.15 °C and 47.11 °C, corresponding to human thermal sensations from slightly warm to very hot. The green rate, expressways, road width, spacious surroundings, sky, traffic, and ancillary facilities significantly affected the thermal comfort. The regression results show that the thermal comfort of each road level is affected by multiple street built environment factors and that these influencing factors differ between road levels. Based on the regression results, corresponding optimization strategies were proposed to improve the thermal environment of urban streets and enhance the thermal comfort of pedestrians. Full article
(This article belongs to the Section Building Energy, Physics, Environment, and Systems)
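A minimal sketch of the regression step described in this abstract: ordinary least squares relating street-view-derived built environment factors to a thermal comfort (UTCI) response. The data and variable names (green view ratio, sky view factor, road width) are illustrative assumptions, not the paper's variables.

```python
# Illustrative regression of a simulated UTCI response on street-view-derived
# built environment factors. All data below are synthetic.
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)
n = 120  # hypothetical number of street-view sampling points

X = np.column_stack([
    rng.uniform(0.0, 0.6, n),   # green view ratio
    rng.uniform(0.1, 0.9, n),   # sky view factor
    rng.uniform(6.0, 40.0, n),  # road width (m)
])
# Synthetic UTCI (degrees C): cooler with more greenery, hotter with open sky
utci = 38 - 8.0 * X[:, 0] + 6.0 * X[:, 1] + 0.05 * X[:, 2] + rng.normal(0, 1.0, n)

model = LinearRegression().fit(X, utci)
print("coefficients:", model.coef_)   # per-factor effect on UTCI
print("intercept:", model.intercept_)
print("R^2:", model.score(X, utci))
```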

21 pages, 9099 KiB  
Article
Urban Street Greening and Resident Comfort: An Integrated Approach Based on High-Precision Shadow Distribution and Facade Visual Assessment
by Yuting Ni, Liqun Lin, Huiqiong Xia and Xiajun Wang
Sustainability 2025, 17(3), 1026; https://doi.org/10.3390/su17031026 - 27 Jan 2025
Viewed by 1296
Abstract
With the acceleration of global climate change and urbanization, the urban heat island effect has significantly impacted the quality of life of urban residents. Although numerous studies have focused on macro-scale factors such as air temperature, surface albedo, and green space coverage, relatively little attention has been paid to micro-scale factors, such as shading provided by building facades and tree canopy coverage. However, these micro-scale factors play a significant role in enhancing pedestrian thermal comfort. This study focuses on a city community in China, aiming to assess the thermal comfort of urban streets during the summer. Utilizing high-resolution 3D geographic data and street view images extracted from drone data, this study comprehensively considers the mechanisms affecting the urban street thermal environment and the human comfort requirements for shading and greening. By proposing quantitative indicators from multiple scales and dimensions, this study thoroughly quantifies the impact of the surrounding environment, greening, shading effects, buildings, and road design on the thermal comfort of summer streets. The results show that increasing tree canopy coverage by 10 m can significantly reduce the surrounding temperature, and a building layout extending 200 m can regulate temperature. The distribution of shadows at different times significantly affects thermal comfort, while the sky view factor negatively correlates with thermal comfort. Environments with a high green view index enhance visual comfort. This study reveals the specific contributions of different environmental characteristics to street thermal comfort and identifies factors that significantly impact thermal comfort. This provides a scientific basis for urban green space planning and thermal comfort improvement, holding substantial practical significance. Full article
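A minimal sketch of one indicator mentioned above, the green view index, computed here as the share of vegetation-labelled pixels in a street-view segmentation mask. The vegetation class id is an assumption; real label maps differ between segmentation models.

```python
import numpy as np

VEGETATION_CLASS = 8  # hypothetical label id for "vegetation"

def green_view_index(label_map: np.ndarray) -> float:
    """Fraction of pixels labelled as vegetation (0.0-1.0)."""
    return float(np.mean(label_map == VEGETATION_CLASS))

# Toy 4x4 label map standing in for a segmented street-view image
labels = np.array([
    [8, 8, 1, 1],
    [8, 2, 1, 1],
    [8, 8, 3, 3],
    [0, 0, 3, 3],
])
print(f"green view index: {green_view_index(labels):.2f}")  # 0.31
```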

19 pages, 7121 KiB  
Article
Sensor-Fused Nighttime System for Enhanced Pedestrian Detection in ADAS and Autonomous Vehicles
by Jungme Park, Bharath Kumar Thota and Karthik Somashekar
Sensors 2024, 24(14), 4755; https://doi.org/10.3390/s24144755 - 22 Jul 2024
Cited by 6 | Viewed by 3061
Abstract
Ensuring a safe nighttime environmental perception system relies on the early detection of vulnerable road users with minimal delay and high precision. This paper presents a sensor-fused nighttime environmental perception system by integrating data from thermal and RGB cameras. A new alignment algorithm is proposed to fuse the data from the two camera sensors. The proposed alignment procedure is crucial for effective sensor fusion. To develop a robust Deep Neural Network (DNN) system, nighttime thermal and RGB images were collected under various scenarios, creating a labeled dataset of 32,000 image pairs. Three fusion techniques were explored using transfer learning, alongside two single-sensor models using only RGB or thermal data. Five DNN models were developed and evaluated, with experimental results showing superior performance of fused models over non-fusion counterparts. The late-fusion system was selected for its optimal balance of accuracy and response time. For real-time inferencing, the best model was further optimized, achieving 33 fps on the embedded edge computing device, an 83.33% improvement in inference speed over the system without optimization. These findings are valuable for advancing Advanced Driver Assistance Systems (ADASs) and autonomous vehicle technologies, enhancing pedestrian detection during nighttime to improve road safety and reduce accidents. Full article
(This article belongs to the Special Issue Sensors and Sensor Fusion in Autonomous Vehicles)
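A generic decision-level (late) fusion sketch, not the authors' pipeline: detections from an RGB detector and a thermal detector are pooled and merged with greedy non-maximum suppression. The box coordinates and scores are made up for illustration.

```python
import numpy as np

def iou(a: np.ndarray, b: np.ndarray) -> float:
    """Intersection over union of two [x1, y1, x2, y2] boxes."""
    x1, y1 = max(a[0], b[0]), max(a[1], b[1])
    x2, y2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, x2 - x1) * max(0.0, y2 - y1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter + 1e-9)

def fuse_detections(boxes, scores, iou_thr=0.5):
    """Greedy NMS over the pooled detections of both modalities."""
    keep = []
    for i in np.argsort(scores)[::-1]:  # highest score first
        if all(iou(boxes[i], boxes[j]) < iou_thr for j in keep):
            keep.append(i)
    return keep

# Toy case: one pedestrian seen by both sensors, one seen only in thermal
rgb_boxes = np.array([[100, 80, 140, 200]])
rgb_scores = np.array([0.72])
thermal_boxes = np.array([[102, 82, 141, 198], [300, 90, 335, 210]])
thermal_scores = np.array([0.91, 0.65])

boxes = np.vstack([rgb_boxes, thermal_boxes])
scores = np.concatenate([rgb_scores, thermal_scores])
print("kept indices:", fuse_detections(boxes, scores))  # the thermal box wins the overlap
```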

15 pages, 2391 KiB  
Article
Multispectral Pedestrian Detection Based on Prior-Saliency Attention and Image Fusion
by Jiaren Guo, Zihao Huang and Yanyun Tao
Electronics 2024, 13(9), 1770; https://doi.org/10.3390/electronics13091770 - 3 May 2024
Cited by 2 | Viewed by 1629
Abstract
Detecting pedestrians in varying illumination conditions poses a significant challenge, necessitating the development of innovative solutions. In response to this, we introduce Prior-AttentionNet, a pedestrian detection model featuring a Prior-Attention mechanism. This model leverages the stark contrast between thermal objects and their backgrounds in far-infrared (FIR) images by employing saliency attention derived from FIR images via UNet. However, extracting salient regions of diverse scales from FIR images poses a challenge for saliency attention. To address this, we integrate Simple Linear Iterative Clustering (SLIC) superpixel segmentation, embedding the segmentation feature map as prior knowledge into UNet’s decoding stage for comprehensive end-to-end training and detection. This integration enhances the extraction of focused attention regions, with the synergy of segmentation prior and saliency attention forming the core of Prior-AttentionNet. Moreover, to enrich pedestrian details and contour visibility in low-light conditions, we implement multispectral image fusion. Experimental evaluations were conducted on the KAIST and OTCBVS datasets. Applying Prior-Attention mode to FIR-RGB images significantly improves the delineation and focus on multi-scale pedestrians. Prior-AttentionNet’s general detector demonstrates the capability of detecting pedestrians with minimal computational resources. The ablation studies indicate that the FIR-RGB+ Prior-Attention mode markedly enhances detection robustness over other modes. When compared to conventional multispectral pedestrian detection models, Prior-AttentionNet consistently surpasses them by achieving higher mean average precision and lower miss rates in diverse scenarios, during both day and night. Full article
(This article belongs to the Section Computer Science & Engineering)
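A minimal sketch of the superpixel-prior idea using scikit-image's SLIC implementation. The sample image stands in for a far-infrared frame, and the segment count and compactness are arbitrary placeholder values, not the paper's settings.

```python
import numpy as np
from skimage import data
from skimage.segmentation import slic

image = data.astronaut()                     # placeholder for an FIR frame
segments = slic(image, n_segments=200, compactness=10, start_label=0)

# Normalised per-pixel superpixel map that a decoder stage could take as a
# prior channel alongside learned feature maps.
prior = segments.astype(np.float32) / segments.max()
print(segments.shape, int(segments.max()) + 1, "superpixels")
```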

20 pages, 11394 KiB  
Article
Bio-Inspired Dark Adaptive Nighttime Object Detection
by Kuo-Feng Hung and Kang-Ping Lin
Biomimetics 2024, 9(3), 158; https://doi.org/10.3390/biomimetics9030158 - 3 Mar 2024
Cited by 5 | Viewed by 2620
Abstract
Nighttime object detection is challenging due to dim, uneven lighting. The IIHS research conducted in 2022 shows that pedestrian anti-collision systems are less effective at night. Common solutions utilize costly sensors, such as thermal imaging and LiDAR, aiming for highly accurate detection. Conversely, this study employs a low-cost 2D image approach to address the problem by drawing inspiration from biological dark adaptation mechanisms, simulating functions like pupils and photoreceptor cells. Instead of relying on extensive machine learning with day-to-night image conversions, it focuses on image fusion and gamma correction to train deep neural networks for dark adaptation. This research also involves creating a simulated environment ranging from 0 lux to high brightness, testing the limits of object detection, and offering a high dynamic range testing method. Results indicate that the dark adaptation model developed in this study improves the mean average precision (mAP) by 1.5−6% compared to traditional models. Our model is capable of functioning in both twilight and night, showcasing academic novelty. Future developments could include using virtual light in specific image areas or integrating with smart car lighting to enhance detection accuracy, thereby improving safety for pedestrians and drivers. Full article
(This article belongs to the Special Issue Biomimetic and Bioinspired Computer Vision and Image Processing)
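A minimal sketch of the gamma-correction-plus-fusion idea behind dark adaptation: brighten a low-light frame with gamma correction and blend it with the original. The gamma value and blend weight are illustrative choices, not values from the paper.

```python
import numpy as np

def gamma_correct(img: np.ndarray, gamma: float) -> np.ndarray:
    """img: float array in [0, 1]; gamma < 1 brightens dark regions."""
    return np.clip(img, 0.0, 1.0) ** gamma

def dark_adapt(img: np.ndarray, gamma: float = 0.4, alpha: float = 0.6) -> np.ndarray:
    """Blend the gamma-brightened frame with the original frame."""
    return alpha * gamma_correct(img, gamma) + (1.0 - alpha) * img

# Toy low-light frame (values near 0 are almost black)
frame = np.random.default_rng(1).uniform(0.0, 0.15, size=(480, 640))
adapted = dark_adapt(frame)
print("mean before:", round(frame.mean(), 3), "after:", round(adapted.mean(), 3))
```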

21 pages, 2839 KiB  
Article
Assessing the Relationship between Land Surface Temperature and Composition Elements of Urban Green Spaces during Heat Waves Episodes in Mediterranean Cities
by Manuel José Delgado-Capel, Paloma Egea-Cariñanos and Paloma Cariñanos
Forests 2024, 15(3), 463; https://doi.org/10.3390/f15030463 - 1 Mar 2024
Cited by 6 | Viewed by 2693
Abstract
In the context of escalating global temperatures and intensified heat waves, the Mediterranean region emerges as a noteworthy hotspot, experiencing a surge in the frequency and intensity of these extreme heat events. Nature-based solutions, particularly management of urban green infrastructure (UGI) areas, have shown promising outcomes in adapting urban areas to the challenges posed by heat waves. The objective of the current study is twofold: firstly, to identify the compositional patterns of strategically distributed small public green spaces, demonstrating their enhanced capacity to mitigate the impact of heat waves in the Mediterranean region; secondly, to assess the association, direction, and explanatory strength of the relationship between the composition elements of the UGI areas and area typology, specifically focusing on the variation in land surface temperature (LST) values during heat wave episodes spanning from 2017 to 2023. The methodology involved obtaining land surface temperature (LST) values from satellite images and classifying green areas based on composition, orientation, and typology. Ordinal multiple regressions were conducted to analyze the relationship between the considered variables and LST ranges during heat wave episodes that occurred from 2017 to 2023. The findings indicate an increase in LST ranges across many areas, emphasizing heightened thermal stress in a Mediterranean medium-sized compact city, Granada (in the southeast of the Iberian Peninsula). Traditional squares, pocket parks and gardens, and pedestrian areas with trees and impervious surfaces performed better in reducing the probability of exceeding LST values above 41 °C compared to other vegetated patches mainly occupied by herbaceous vegetation and grass. The study concludes by advocating for the strategic incorporation of vegetation, especially trees, along with traditional squares featuring semipermeable pavement with trees and shrubbery, as a potential effective strategy for enhancing resilience against extreme heat events. Overall, this research enhances our understanding of LST dynamics during heat waves and offers guidance for bolstering the resilience of urban green spaces in the Mediterranean region. Full article
(This article belongs to the Section Urban Forestry)
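A minimal sketch of an ordinal regression of an LST range class on green-space composition variables, using synthetic data. It assumes statsmodels (>= 0.13) provides OrderedModel; the variables and class boundaries are invented for illustration.

```python
import numpy as np
import pandas as pd
from statsmodels.miscmodels.ordinal_model import OrderedModel

rng = np.random.default_rng(0)
n = 300
df = pd.DataFrame({
    "tree_cover": rng.uniform(0, 1, n),   # fraction of canopy cover
    "impervious": rng.uniform(0, 1, n),   # fraction of sealed surface
})
# Latent heat load: less canopy and more sealed surface -> hotter class
latent = -2.0 * df["tree_cover"] + 2.5 * df["impervious"] + rng.normal(0, 0.5, n)
df["lst_class"] = pd.cut(latent, bins=3, labels=False)  # ordered classes 0..2

model = OrderedModel(df["lst_class"], df[["tree_cover", "impervious"]], distr="logit")
result = model.fit(method="bfgs", disp=False)
print(result.params)  # a positive coefficient raises the odds of a hotter LST band
```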

17 pages, 6317 KiB  
Article
INSANet: INtra-INter Spectral Attention Network for Effective Feature Fusion of Multispectral Pedestrian Detection
by Sangin Lee, Taejoo Kim, Jeongmin Shin, Namil Kim and Yukyung Choi
Sensors 2024, 24(4), 1168; https://doi.org/10.3390/s24041168 - 10 Feb 2024
Cited by 14 | Viewed by 2615
Abstract
Pedestrian detection is a critical task for safety-critical systems, but detecting pedestrians is challenging in low-light and adverse weather conditions. Thermal images can be used to improve robustness by providing complementary information to RGB images. Previous studies have shown that multi-modal feature fusion using convolution operation can be effective, but such methods rely solely on local feature correlations, which can degrade the performance capabilities. To address this issue, we propose an attention-based novel fusion network, referred to as INSANet (INtra-INter Spectral Attention Network), that captures global intra- and inter-information. It consists of intra- and inter-spectral attention blocks that allow the model to learn mutual spectral relationships. Additionally, we identified an imbalance in the multispectral dataset caused by several factors and designed an augmentation strategy that mitigates concentrated distributions and enables the model to learn the diverse locations of pedestrians. Extensive experiments demonstrate the effectiveness of the proposed methods, which achieve state-of-the-art performance on the KAIST dataset and LLVIP dataset. Finally, we conduct a regional performance evaluation to demonstrate the effectiveness of our proposed network in various regions. Full article
(This article belongs to the Section Optical Sensors)
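A rough sketch of cross-spectral attention between RGB and thermal feature tokens in PyTorch. This is a generic multi-head attention exchange, not the INSANet architecture; the token and channel sizes are arbitrary.

```python
import torch
import torch.nn as nn

class CrossSpectralAttention(nn.Module):
    def __init__(self, dim: int = 256, heads: int = 8):
        super().__init__()
        self.intra_rgb = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.intra_thermal = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.inter = nn.MultiheadAttention(dim, heads, batch_first=True)

    def forward(self, rgb_tokens, thermal_tokens):
        # Intra-spectral step: self-attention within each modality
        rgb, _ = self.intra_rgb(rgb_tokens, rgb_tokens, rgb_tokens)
        thermal, _ = self.intra_thermal(thermal_tokens, thermal_tokens, thermal_tokens)
        # Inter-spectral step: RGB queries attend to thermal keys/values
        fused, _ = self.inter(rgb, thermal, thermal)
        return fused

rgb_tokens = torch.randn(2, 64, 256)      # (batch, tokens, channels)
thermal_tokens = torch.randn(2, 64, 256)
print(CrossSpectralAttention()(rgb_tokens, thermal_tokens).shape)  # [2, 64, 256]
```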

27 pages, 1122 KiB  
Article
Evaluation of Preferences for a Thermal-Camera-Based Abnormal Situation Detection Service via the Integrated Fuzzy AHP/TOPSIS Model
by Woochul Choi, Bongjoo Jang, Intaek Jung, Hongki Sung and Younmi Jang
Appl. Sci. 2023, 13(20), 11591; https://doi.org/10.3390/app132011591 - 23 Oct 2023
Cited by 3 | Viewed by 2114
Abstract
Research related to thermal cameras, which are major control measures, is increasing to overcome the limitations of closed-circuit television (CCTV) images. Thermal cameras have the advantage of easily detecting objects at night and of being able to identify initial signs of dangerous situations owing to changes in temperature. However, research on thermal cameras from a comprehensive perspective for practical urban control is insufficient. Accordingly, this study presents a thermal camera-based abnormal-situation detection service that can supplement/replace CCTV image analysis and evaluate service preferences. We suggested an integrated Fuzzy AHP/TOPSIS model, which induces a more reasonable selection to support the decision-making of the demand for introducing thermography cameras. We found that developers highly evaluated services that can identify early signs of dangerous situations by detecting temperature changes in heat, which is the core principle of thermography cameras (e.g., pre-fire phenomenon), while local governments highly evaluated control services related to citizen safety (e.g., pedestrian detection at night). Clearly, while selecting an effective service model, the opinions of experts with a high understanding of the technology itself and operators who actually manage services should be appropriately reflected. This study contributes to the literature and provides the basic foundation for the development of services utilizing thermography cameras by presenting a thermography camera-based abnormal situation detection service and selection methods and joint decision-making engagement between developers and operators. Full article
(This article belongs to the Section Civil Engineering)
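A minimal TOPSIS ranking sketch over a toy decision matrix of candidate services and criteria. The criteria weights, which in the paper come from the fuzzy AHP stage, are placeholders here, and all criteria are treated as benefit criteria.

```python
import numpy as np

scores = np.array([             # rows: candidate services, columns: criteria
    [0.8, 0.6, 0.7],
    [0.5, 0.9, 0.6],
    [0.7, 0.7, 0.9],
])
weights = np.array([0.5, 0.3, 0.2])   # placeholder weights (e.g. from AHP)

norm = scores / np.linalg.norm(scores, axis=0)   # vector normalisation
weighted = norm * weights
ideal, anti_ideal = weighted.max(axis=0), weighted.min(axis=0)
d_pos = np.linalg.norm(weighted - ideal, axis=1)
d_neg = np.linalg.norm(weighted - anti_ideal, axis=1)
closeness = d_neg / (d_pos + d_neg)

print("closeness:", closeness.round(3))
print("ranking (best first):", np.argsort(-closeness))
```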

19 pages, 2662 KiB  
Article
Pedestrian Detection Method Based on Two-Stage Fusion of Visible Light Image and Thermal Infrared Image
by Yugui Zhang, Bo Zhai, Gang Wang and Jianchu Lin
Electronics 2023, 12(14), 3171; https://doi.org/10.3390/electronics12143171 - 21 Jul 2023
Cited by 8 | Viewed by 2007
Abstract
Pedestrian detection has important research value and practical significance. It has been used in intelligent monitoring, intelligent transportation, intelligent therapy, and automatic driving. However, in the pixel-level fusion and the feature-level fusion of visible light images and thermal infrared images under shadows during the daytime or under low illumination at night in actual surveillance, missed and false pedestrian detection always occurs. To solve this problem, an algorithm for pedestrian detection based on the two-stage fusion of visible light images and thermal infrared images is proposed. In this algorithm, in view of the difference and complementarity of visible light images and thermal infrared images, these two types of images are subjected to pixel-level fusion and feature-level fusion according to the varying daytime conditions. In the pixel-level fusion stage, the thermal infrared image, after being brightness enhanced, is fused with the visible image. The obtained pixel-level fusion image contains the information critical for accurate pedestrian detection. In the feature-level fusion stage, in the daytime, the previous pixel-level fusion image is fused with the visible light image; meanwhile, under low illumination at night, the previous pixel-level fusion image is fused with the thermal infrared image. According to the experimental results, the proposed algorithm accurately detects pedestrians under shadows during the daytime and under low illumination at night, thereby improving the accuracy of pedestrian detection and reducing the miss rate and false rate in the detection of pedestrians. Full article
(This article belongs to the Section Artificial Intelligence)
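A minimal sketch of the pixel-level fusion stage described above: brighten the thermal frame and blend it with the registered visible frame. The brightness gain and blend weight are illustrative; in the two-stage scheme, the later feature-level stage would then combine this result with either the visible or the thermal image depending on lighting.

```python
import numpy as np

def brighten(thermal: np.ndarray, gain: float = 1.4) -> np.ndarray:
    """Simple brightness enhancement of a float image in [0, 1]."""
    return np.clip(thermal * gain, 0.0, 1.0)

def pixel_level_fusion(visible: np.ndarray, thermal: np.ndarray,
                       alpha: float = 0.5) -> np.ndarray:
    """Weighted blend of the visible frame and the brightened thermal frame."""
    return alpha * visible + (1.0 - alpha) * brighten(thermal)

rng = np.random.default_rng(0)
visible = rng.uniform(0, 1, (480, 640))    # registered grayscale visible frame
thermal = rng.uniform(0, 0.5, (480, 640))  # co-registered thermal frame
fused = pixel_level_fusion(visible, thermal, alpha=0.6)  # daytime: favour visible
print(fused.shape, round(fused.min(), 2), round(fused.max(), 2))
```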

13 pages, 2343 KiB  
Article
All-Weather Pedestrian Detection Based on Double-Stream Multispectral Network
by Chih-Hsien Hsia, Hsiao-Chu Peng and Hung-Tse Chan
Electronics 2023, 12(10), 2312; https://doi.org/10.3390/electronics12102312 - 20 May 2023
Cited by 6 | Viewed by 2244
Abstract
Recently, advanced driver assistance systems (ADAS) have attracted wide attention in pedestrian detection for using the multi-spectrum generated by multi-sensors. However, it is quite challenging for image-based sensors to perform their tasks due to instabilities such as light changes, object shading, or weather conditions. Considering all the above, based on different spectral information of RGB and thermal images, this study proposed a deep learning (DL) framework to improve the problem of confusing light sources and extract highly differentiated multimodal features through multispectral fusion. Pedestrian detection methods, including a double-stream multispectral network (DSMN), were used to extract a multispectral fusion and double-stream detector with Yolo-based (MFDs-Yolo) information. Moreover, a self-adaptive multispectral weight adjustment method, the improved illumination-aware network (i-IAN), was used for the late-fusion strategy, making the different modalities complementary. According to the experimental results, the good performance of this detection method was demonstrated on the public KAIST dataset and the multispectral pedestrian detection dataset FLIR, and it even performed better than the most advanced method under the miss rate (MR) (IoU@0.75) evaluation. Full article
(This article belongs to the Special Issue New Trends in Deep Learning for Computer Vision)
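A generic illumination-aware weighting sketch in the spirit of the late-fusion strategy above: scene brightness estimated from the RGB frame sets the weight given to the RGB versus thermal stream. The sigmoid parameters and detection scores are made up for illustration.

```python
import numpy as np

def illumination_weight(rgb: np.ndarray, midpoint: float = 0.35,
                        steepness: float = 10.0) -> float:
    """Weight in (0, 1) for the RGB stream; bright scenes give values near 1."""
    brightness = float(rgb.mean())  # rgb values assumed in [0, 1]
    return 1.0 / (1.0 + np.exp(-steepness * (brightness - midpoint)))

def fuse_scores(score_rgb: float, score_thermal: float, w_rgb: float) -> float:
    """Convex combination of the two streams' detection confidences."""
    return w_rgb * score_rgb + (1.0 - w_rgb) * score_thermal

night_frame = np.full((480, 640, 3), 0.08)   # very dark RGB frame
w = illumination_weight(night_frame)
print(f"RGB weight {w:.2f}; fused score {fuse_scores(0.35, 0.88, w):.2f}")
```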

18 pages, 12133 KiB  
Article
HAFNet: Hierarchical Attentive Fusion Network for Multispectral Pedestrian Detection
by Peiran Peng, Tingfa Xu, Bo Huang and Jianan Li
Remote Sens. 2023, 15(8), 2041; https://doi.org/10.3390/rs15082041 - 12 Apr 2023
Cited by 10 | Viewed by 3157
Abstract
Multispectral pedestrian detection via visible and thermal image pairs has received widespread attention in recent years. It provides a promising multi-modality solution to address the challenges of pedestrian detection in low-light environments and occlusion situations. Most existing methods directly blend the results of the two modalities or combine the visible and thermal features via a linear interpolation. However, such fusion strategies tend to extract coarser features corresponding to the positions of different modalities, which may lead to degraded detection performance. To mitigate this, this paper proposes a novel and adaptive cross-modality fusion framework, named Hierarchical Attentive Fusion Network (HAFNet), which fully exploits the multispectral attention knowledge to inspire pedestrian detection in the decision-making process. Concretely, we introduce a Hierarchical Content-dependent Attentive Fusion (HCAF) module to extract top-level features as a guide to pixel-wise blending features of two modalities to enhance the quality of the feature representation and a plug-in multi-modality feature alignment (MFA) block to fine-tune the feature alignment of two modalities. Experiments on the challenging KAIST and CVC-14 datasets demonstrate the superior performance of our method with satisfactory speed. Full article
(This article belongs to the Special Issue Data Fusion for Urban Applications)

27 pages, 5384 KiB  
Article
Fused Thermal and RGB Imagery for Robust Detection and Classification of Dynamic Objects in Mixed Datasets via Pre-Trained High-Level CNN
by Ravit Ben-Shoushan and Anna Brook
Remote Sens. 2023, 15(3), 723; https://doi.org/10.3390/rs15030723 - 26 Jan 2023
Cited by 4 | Viewed by 5537
Abstract
Smart vehicles with embedded Autonomous Vehicle (AV) technologies are currently equipped with different types of mounted sensors, aiming to ensure safe movement for both passengers and other road users. The sensors’ ability to capture and gather data to be synchronically interpreted by neural networks for a clear understanding of the surroundings is influenced by lighting conditions, such as natural lighting levels, artificial lighting effects, time of day, and various weather conditions, such as rain, fog, haze, and extreme temperatures. Such changing environmental conditions are also known as complex environments. In addition, the appearance of other road users is varied and relative to the vehicle’s perspective; thus, the identification of features in a complex background is still a challenge. This paper presents a pre-processing method using multi-sensorial RGB and thermal camera data. The aim is to handle issues arising from the combined inputs of multiple sensors, such as data registration and value unification. Foreground refinement, followed by a novel statistical anomaly-based feature extraction prior to image fusion, is presented. The results met the AV challenges in CNN’s classification. The reduction of the collected data and its variation level was achieved. The unified physical value contributed to the robustness of input data, providing a better perception of the surroundings under varied environmental conditions in mixed datasets for day and night images. The method presented uses fused images, robustly enriched with texture and feature depth and reduced dependency on lighting or environmental conditions, as an input for a CNN. The CNN was capable of extracting and classifying dynamic objects as vehicles and pedestrians from the complex background in both daylight and nightlight images. Full article

14 pages, 2850 KiB  
Technical Note
An Unpaired Thermal Infrared Image Translation Method Using GMA-CycleGAN
by Shihao Yang, Min Sun, Xiayin Lou, Hanjun Yang and Hang Zhou
Remote Sens. 2023, 15(3), 663; https://doi.org/10.3390/rs15030663 - 22 Jan 2023
Cited by 18 | Viewed by 4945
Abstract
Automatically translating chromaticity-free thermal infrared (TIR) images into realistic color visible (CV) images is of great significance for autonomous vehicles, emergency rescue, robot navigation, nighttime video surveillance, and many other fields. Most recent designs use end-to-end neural networks to translate TIR directly to CV; however, compared to these networks, TIR has low contrast and an unclear texture for CV translation. Thus, directly translating the TIR temperature value of only one channel to the RGB color value of three channels without adding additional constraints or semantic information does not handle the one-to-three mapping problem between different domains in a good way, causing the translated CV images not only to have blurred edges but also color confusion. As for the methodology of the work, considering that in the translation from TIR to CV the most important process is to map information from the temperature domain into the color domain, an improved CycleGAN (GMA-CycleGAN) is proposed in this work in order to translate TIR images to grayscale visible (GV) images. Although the two domains have different properties, the numerical mapping is one-to-one, which reduces the color confusion caused by one-to-three mapping when translating TIR to CV. Then, a GV-CV translation network is applied to obtain CV images. Since the process of decomposing GV images into CV images is carried out in the same domain, edge blurring can be avoided. To enhance the boundary gradient between the object (pedestrian and vehicle) and the background, a mask attention module based on the TIR temperature mask and the CV semantic mask is designed without increasing the network parameters, and it is added to the feature encoding and decoding convolution layers of the CycleGAN generator. Moreover, a perceptual loss term is applied to the original CycleGAN loss function to bring the translated images closer to the real images regarding the space feature. In order to verify the effectiveness of the proposed method, the FLIR dataset is used for experiments, and the obtained results show that, compared to the state-of-the-art model, the subjective quality of the translated CV images obtained by the proposed method is better, as the objective evaluation metric FID (Fréchet inception distance) is reduced by 2.42 and the PSNR (peak signal-to-noise ratio) is improved by 1.43. Full article
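The PSNR gain quoted above follows the standard formula PSNR = 10 * log10(MAX^2 / MSE). A small sketch of that computation on a synthetic image pair (not the paper's data):

```python
import numpy as np

def psnr(reference: np.ndarray, test: np.ndarray, max_value: float = 255.0) -> float:
    """Peak signal-to-noise ratio in dB between two images of equal shape."""
    mse = np.mean((reference.astype(np.float64) - test.astype(np.float64)) ** 2)
    return float("inf") if mse == 0 else 10.0 * np.log10(max_value ** 2 / mse)

rng = np.random.default_rng(0)
real_cv = rng.integers(0, 256, (256, 256, 3), dtype=np.uint8)  # stand-in "real" CV image
translated = np.clip(real_cv + rng.normal(0, 8, real_cv.shape), 0, 255).astype(np.uint8)
print(f"PSNR: {psnr(real_cv, translated):.2f} dB")
```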

17 pages, 981 KiB  
Article
Probabilistic Fusion for Pedestrian Detection from Thermal and Colour Images
by Zuhaib Ahmed Shaikh, David Van Hamme, Peter Veelaert and Wilfried Philips
Sensors 2022, 22(22), 8637; https://doi.org/10.3390/s22228637 - 9 Nov 2022
Cited by 5 | Viewed by 2086
Abstract
Pedestrian detection is an important research domain due to its relevance for autonomous and assisted driving, as well as its applications in security and industrial automation. Often, more than one type of sensor is used to cover a broader range of operating conditions than a single-sensor system would allow. However, it remains difficult to make pedestrian detection systems perform well in highly dynamic environments, often requiring extensive retraining of the algorithms for specific conditions to reach satisfactory accuracy, which, in turn, requires large, annotated datasets captured in these conditions. In this paper, we propose a probabilistic decision-level sensor fusion method based on naive Bayes to improve the efficiency of the system by combining the output of available pedestrian detectors for colour and thermal images without retraining. The results in this paper, obtained through long-term experiments, demonstrate the efficacy of our technique, its ability to work with non-registered images, and its adaptability to cope with situations when one of the sensors fails. The results also show that our proposed technique improves the overall accuracy of the system and could be very useful in several applications. Full article
(This article belongs to the Section Sensing and Imaging)
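A minimal naive Bayes decision-level fusion sketch: the two detectors' confidences are treated as conditionally independent evidence and combined through likelihood ratios. The Gaussian likelihood models and the prior are invented for illustration; in practice they would be estimated from validation data.

```python
import numpy as np

def likelihood_ratio(score: float, mean_ped: float, mean_bg: float,
                     std: float = 0.15) -> float:
    """Gaussian P(score | pedestrian) / P(score | background)."""
    def gauss(x, mu):
        return np.exp(-0.5 * ((x - mu) / std) ** 2)
    return gauss(score, mean_ped) / (gauss(score, mean_bg) + 1e-12)

def fuse(score_colour: float, score_thermal: float, prior_ped: float = 0.1) -> float:
    """Posterior probability of a pedestrian given both detector scores."""
    odds = (prior_ped / (1 - prior_ped)
            * likelihood_ratio(score_colour, mean_ped=0.80, mean_bg=0.20)
            * likelihood_ratio(score_thermal, mean_ped=0.75, mean_bg=0.25))
    return odds / (1.0 + odds)

print(f"both detectors agree: {fuse(0.85, 0.80):.3f}")
print(f"thermal only:         {fuse(0.30, 0.80):.3f}")
print(f"neither fires:        {fuse(0.20, 0.25):.3f}")
```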

27 pages, 10384 KiB  
Article
Analysis of Thermal Imaging Performance under Extreme Foggy Conditions: Applications to Autonomous Driving
by Josué Manuel Rivera Velázquez, Louahdi Khoudour, Guillaume Saint Pierre, Pierre Duthon, Sébastien Liandrat, Frédéric Bernardin, Sharon Fiss, Igor Ivanov and Raz Peleg
J. Imaging 2022, 8(11), 306; https://doi.org/10.3390/jimaging8110306 - 9 Nov 2022
Cited by 15 | Viewed by 4508
Abstract
Object detection is recognized as one of the most critical research areas for the perception of self-driving cars. Current vision systems combine visible imaging, LIDAR, and/or RADAR technology, allowing perception of the vehicle’s surroundings. However, harsh weather conditions mitigate the performances of these systems. Under these circumstances, thermal imaging becomes the complementary solution to current systems not only because it makes it possible to detect and recognize the environment in the most extreme conditions, but also because thermal images are compatible with detection and recognition algorithms, such as those based on artificial neural networks. In this paper, an analysis of the resilience of thermal sensors in very unfavorable fog conditions is presented. The goal was to study the operational limits, i.e., the very degraded fog situation beyond which a thermal camera becomes unreliable. For the analysis, the mean pixel intensity and the contrast were used as indicators. Results showed that the angle of view (AOV) of a thermal camera is a determining parameter for object detection in foggy conditions. Additionally, results show that cameras with AOVs 18° and 30° are suitable for object detection, even under thick fog conditions (from 13 m meteorological optical range). These results were extended using object detection software, with which it is shown that, for the pedestrian, a detection rate ≥90% was achieved using the images from the 18° and 30° cameras. Full article
(This article belongs to the Section Computer Vision and Pattern Recognition)
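A minimal sketch of the two indicators named above, mean pixel intensity and a simple RMS contrast, computed on a synthetic frame; real use would apply them to calibrated thermal images captured at different fog densities.

```python
import numpy as np

def mean_intensity(frame: np.ndarray) -> float:
    return float(frame.mean())

def rms_contrast(frame: np.ndarray) -> float:
    """Standard deviation of intensities; it drops as fog flattens the scene."""
    return float(frame.std())

rng = np.random.default_rng(0)
clear_frame = rng.uniform(0, 255, (512, 640))    # high-contrast scene
foggy_frame = 120 + 0.1 * (clear_frame - 120)    # same scene, washed out
for name, frame in [("clear", clear_frame), ("foggy", foggy_frame)]:
    print(name, round(mean_intensity(frame), 1), round(rms_contrast(frame), 1))
```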
