Search Results (1,616)

Search Parameters:
Keywords = infrared camera

12 pages, 1433 KB  
Article
Imaging Through Scattering Tissue Using Near Infra-Red and a Convolutional Autoencoder
by Alon Silberschein, Amir Shemer, Chanan Berkovits, Yair Engler, Ariel Schwarz, Eliran Talker and Yossef Danan
Sensors 2026, 26(8), 2507; https://doi.org/10.3390/s26082507 - 18 Apr 2026
Abstract
Accurate delineation of tumor margins is critical for complete resection and minimizing recurrence, yet existing imaging modalities such as MRI, CT, and fluorescence imaging suffer from limitations including high cost, limited accessibility, and intraoperative constraints. In this study, we propose a low-cost, non-invasive approach for subsurface imaging based on near-infrared (NIR) illumination combined with deep learning. A controlled experimental setup was developed in which structured patterns displayed on an electronic paper screen were concealed beneath a tissue-mimicking chicken phantom and imaged using a NIR-sensitive camera under halogen illumination. A convolutional autoencoder based on a U-Net architecture was trained on approximately 10,000 paired samples to reconstruct hidden structures from highly scattered surface images. The proposed method achieved strong reconstruction performance, with the best model reaching a peak signal-to-noise ratio (PSNR) of 20.14 dB, structural similarity index (SSIM) of 0.92, and feature similarity index (FSIM) of 0.94, significantly outperforming conventional Wiener filtering. Qualitative results demonstrated accurate recovery of subsurface shapes with minor smoothing artifacts. While generalization to out-of-distribution samples remains limited, the findings highlight the potential of combining NIR imaging and deep learning for safe, rapid, and cost-effective subsurface visualization. This work establishes a foundation for future development toward clinically relevant tumor margin detection.
(This article belongs to the Special Issue Spectral Detection Technology, Sensors and Instruments, 3rd Edition)
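The first result reports reconstruction quality as PSNR (20.14 dB), SSIM, and FSIM. As a reminder of what the headline PSNR figure measures, here is a minimal sketch of PSNR computed from mean squared error over flattened pixel values; the function name and sample values are illustrative, not taken from the paper:

```python
import math

def psnr(ref, test, max_val=255.0):
    """Peak signal-to-noise ratio: 10*log10(MAX^2 / MSE) over paired pixels."""
    mse = sum((a - b) ** 2 for a, b in zip(ref, test)) / len(ref)
    if mse == 0:
        return float("inf")  # identical images
    return 10.0 * math.log10(max_val ** 2 / mse)

# small per-pixel errors on an 8-bit scale give a PSNR in the high 30s of dB;
# heavier scattering noise pushes the value down toward the ~20 dB range
quality = psnr([0, 128, 255, 64], [2, 126, 250, 60])
```

Higher is better; in practice SSIM and FSIM are computed with library implementations rather than by hand, since they involve local windows and gradient/phase features.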

23 pages, 16273 KB  
Article
Design of a High Dynamic Range Acquisition System for Airborne VNIR Push-Broom Hyperspectral Camera
by Haoyang Feng, Yueming Wang, Daogang He, Changxing Zhang and Chunlai Li
Sensors 2026, 26(8), 2474; https://doi.org/10.3390/s26082474 - 17 Apr 2026
Abstract
Achieving a high frame rate and high dynamic range (HDR) under complex illumination remains a significant challenge for airborne push-broom visible-near-infrared (VNIR) hyperspectral cameras. Problematic scenarios typically include high-contrast scenes, such as ocean whitecaps alongside deep water or concurrently sunlit and shadowed urban surfaces. To address this, a real-time HDR acquisition system based on a dual-gain complementary metal–oxide–semiconductor (CMOS) image sensor is proposed. Specifically, a four-pixel HDR fusion method is developed, utilizing an optical calibration setup to accurately determine the fusion parameters and configure the spectral region of interest (ROI) for reduced data volume. The complete workflow, encompassing spectral–spatial four-pixel binning and piecewise dual-gain fusion, is implemented on a field-programmable gate array (FPGA) using a dual-port RAM-based buffering strategy and a low-latency five-stage pipeline. Experimental results demonstrate a minimal processing latency of 0.0183 ms and a maximum frame rate of 290 frames/s. By extending the output bit depth from 11 to 15 bits, the system achieves a final-output digital dynamic range of 2.03 × 10⁴:1, representing a 9.58-fold improvement over the original low-gain data. The fused HDR data maintain high linearity and good spectral fidelity, with spectral angle mapper (SAM) values at the 10⁻³ level. Featuring a compact and low-power design, this system provides a practical engineering solution for efficient airborne VNIR hyperspectral acquisition.
(This article belongs to the Section Sensing and Imaging)
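The dual-gain HDR idea in the entry above — combining a high-gain and a low-gain readout of the same scene — can be sketched as a piecewise fusion: keep the high-gain sample where it is unsaturated, otherwise scale the low-gain sample by the gain ratio. The threshold, gain ratio, and function name below are illustrative assumptions, not the paper's calibrated parameters:

```python
def fuse_dual_gain(high, low, gain_ratio=16.0, sat_threshold=2000):
    """Piecewise dual-gain HDR fusion (illustrative): prefer the low-noise
    high-gain sample; where it saturates, fall back to the scaled low-gain one."""
    return [h if h < sat_threshold else l * gain_ratio
            for h, l in zip(high, low)]

# 11-bit inputs (0-2047) combined with a gain ratio near 16 span roughly
# 15 bits after fusion, matching the 11-to-15-bit extension described above
fused = fuse_dual_gain([100, 2047], [10, 1500])
```

In a real sensor the fusion parameters come from radiometric calibration (as the abstract notes), since the two gain paths must be matched in offset and linearity before scaling.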
26 pages, 6550 KB  
Article
Clinical Thermography of the Diabetic Foot Using a Low-Cost Thermal Camera: Processing and Instrumental Framework
by Vanéva Chingan-Martino, Mériem Allali, Stéphane Henri, El Hadji Mama Guène, Dominique Gibert and Antoine Chéret
Sensors 2026, 26(8), 2438; https://doi.org/10.3390/s26082438 - 16 Apr 2026
Abstract
Infrared thermography is a non-contact tool for monitoring inflammatory processes in the diabetic foot, but quantitative bedside use remains challenging with low-cost thermal infrared cameras due to radiometric drift, non-uniformity (vignetting), geometric distortions, and visible–thermal parallax. This paper presents an end-to-end clinical and instrumental framework built around a cheap thermal camera to ensure reproducible acquisition and physically consistent temperature estimation. The approach combines a standardized mobile acquisition setup and measurement protocol, extraction of embedded radiometric data from raw images, radiometric inversion with atmospheric correction, vignette correction performed in the radiometric domain, and geometric calibration of both visible and infrared sensors using dedicated (thermal) calibration targets. Accurate visible–infrared registration is obtained from hybrid heated markers, enabling reliable overlay and downstream analysis. The full processing chain yields quantitative thermograms with radiometric errors below 0.15 °C and sub-pixel multimodal alignment, supporting the detection of clinically relevant plantar temperature asymmetries and paving the way for routine calibrated low-cost thermography in diabetic foot care.
(This article belongs to the Collection Biomedical Imaging and Sensing)
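Vignette (flat-field) correction of the kind mentioned above is typically a per-pixel division by the normalized response to a uniform scene, applied in the radiometric domain rather than on raw counts. A minimal sketch under that assumption — the function name and sample values are ours, not the paper's:

```python
def correct_vignetting(radiance, flat):
    """Flat-field correction: divide each radiometric sample by the
    normalized response measured on a uniform reference scene."""
    mean_flat = sum(flat) / len(flat)
    return [r * mean_flat / f for r, f in zip(radiance, flat)]

# a pixel attenuated to 80% response by vignetting is rescaled back
# to the same level as an unattenuated pixel of the same scene radiance
corrected = correct_vignetting([10.0, 8.0], [1.0, 0.8])
```

Performing this division on calibrated radiance (after radiometric inversion) rather than on raw sensor counts is what keeps the subsequent temperature estimates physically consistent.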

38 pages, 1822 KB  
Review
UAV-Based Infrared Thermography for Qualitative and Quantitative Building Energy Assessment: A Review
by Seyed Amirhossein Saei Marand, Milad Mahmoodzadeh and Phalguni Mukhopadhyaya
Energies 2026, 19(7), 1776; https://doi.org/10.3390/en19071776 - 4 Apr 2026
Abstract
The growing demand for energy-efficient buildings and the urgent need to retrofit aging infrastructure have driven increased interest in advanced diagnostic technologies. Among these, unmanned aerial vehicle (UAV)-based infrared thermography (IRT) has emerged as a promising non-destructive technique for assessing the thermal performance of building envelopes. This review examines recent developments and applications of dynamic IRT in the building sector for both qualitative and quantitative thermal assessment, based on previously conducted studies. It highlights the increasing adoption of integrated UAV-based IRT for building inspection and diagnostics, and critically reviews the operational, technical, and methodological advancements in dynamic thermography achieved over the past decade. Furthermore, the review presents a comprehensive framework for operational planning, encompassing environmental conditions, infrared camera configuration, and optimal UAV flight parameters. The key findings identify major challenges associated with dynamic IRT applications, particularly those related to measurement accuracy that currently limit its use for quantitative assessments, and synthesize proposed methodologies to address these limitations. The review also highlights the absence of standardized procedures for determining emissivity and reflected apparent temperature in dynamic measurement setups and discusses potential approaches to overcome these gaps. Finally, it outlines priority directions for future research to support the reliable and consistent application of dynamic IRT in quantitative analysis and provides a reference for energy auditors and thermography practitioners to inform the selection of appropriate procedures for accurately quantifying heat loss in building envelopes.

21 pages, 5940 KB  
Article
Feasibility Study for Determining the Coating State of ISIComp Material with Thermographic Techniques
by Giovanni Santonicola, Francesca Di Carolo, Davide Palumbo, Tiziana Matarrese, Ester D'Accardi, Mario De Cesare, Mario De Stefano Fumo, Cinzia Toscano and Umberto Galietti
Appl. Sci. 2026, 16(7), 3498; https://doi.org/10.3390/app16073498 - 3 Apr 2026
Abstract
This work investigates the feasibility of using thermographic techniques to identify the three possible states of a silicon-based coating on a carbon–silicon matrix (ISiComp). Experimental tests were therefore carried out on specimens prepared in three different conditions: uncoated, coated, and coated then oxidized. The study compares lock-in thermography and pulsed thermography using both a cooled mid-wave infrared (MWIR) camera and an uncooled long-wave infrared (LWIR) microbolometric camera. The main objective is to distinguish coated from uncoated conditions and oxidized from non-oxidized conditions, while recognizing that the coated and oxidized states cannot coexist simultaneously on the same specimen. The results show that thermographic techniques, when supported by appropriate post-processing, are promising for this purpose. In particular, the uncooled LWIR camera provided better results than the cooled MWIR camera, whereas the current approach did not allow a robust distinction between the pristine-coated and oxidized-coated states. At the same time, the study highlights limitations related to specimen size and to the additional treatments applied to reproduce the different surface states. Future work will address larger specimens and real components, together with the implementation of advanced AI-based classification algorithms to overcome the current limitations of the proposed approach.

45 pages, 7679 KB  
Article
Conquering the Urban Firefighting Challenge: A Deep Q-Network Approach for Autonomous UAV Navigation
by Shafiqul Alam Khan, Damian Valles, Marcelo M. Carvalho and Wenquan Dong
Inventions 2026, 11(2), 35; https://doi.org/10.3390/inventions11020035 - 2 Apr 2026
Abstract
Firefighters must locate victims reliably to carry out rescue operations within burning structures during urban firefighting events. Low visibility, reduced oxygen levels, weakened structural rigidity, and dense smoke make it difficult to locate victims. In addition to these challenges, victims may be unconscious and unable to report their locations to firefighters. This research work explores the Double Deep Q-Network (Double DQN), Dueling Deep Q-Network (Dueling DQN), and Dueling Double Deep Q-Network (D3QN) agents for an unmanned aerial vehicle (UAV) to navigate around a structure and locate trapped victims within it. The UAV’s position, Light Detection and Ranging (LiDAR), and infrared camera data are utilized as inputs for the Deep Q-Networks. Prioritized experience replay (PER) is used to store transitions and sample them according to priority for training. Python’s Pygame library is used in this research to create a simulated environment in which infrared camera and LiDAR data are simulated. The performance of the UAV agent is evaluated using cumulative maximum reward, reward distribution histogram, Temporal Difference (TD) error over time, and number of successful episodes. Among the three DQN UAV agents, the Dueling DQN and Double DQN have potential for real-world applications in firefighting.
(This article belongs to the Special Issue Unmanned Aerial Vehicles (UAVs): Innovations and Applications)

24 pages, 6716 KB  
Article
In-Situ Infrared Camera Monitoring for Defect and Anomaly Detection in Laser Powder Bed Fusion: Calibration, Data Mapping, and Feature Extraction
by Shawn Hinnebusch, David Anderson, Berkay Bostan and Albert C. To
Appl. Sci. 2026, 16(7), 3378; https://doi.org/10.3390/app16073378 - 31 Mar 2026
Abstract
Laser powder bed fusion (LPBF) is susceptible to defects arising from melt pool instabilities, spatter, heat accumulation, and powder spreading anomalies. In situ infrared (IR) monitoring can detect these issues; however, it typically generates large volumes of data that are costly to store and analyze. This work proposes a projection-based framework that directly maps in situ thermal measurements onto a three-dimensional (3D) voxelized part geometry, substantially reducing storage requirements while preserving spatial fidelity. In addition, several IR-derived features are incorporated into a practical workflow for defect detection and process model calibration, including laser scan order, local pre-deposition temperature, maximum pre-scan temperature, and spatter generation and landing locations. For completeness, commonly used metrics such as interpass temperature, heat intensity, cooling rate, and relative melt pool area are extracted within the same unified processing pipeline. All features are computed using a consistent, reproducible Python-based implementation to streamline integration into routine monitoring and analysis tasks. Multiple parts are fabricated, monitored, and characterized to evaluate the proposed framework, demonstrating that the extracted features reliably identify process anomalies and correlate with observed defects.

16 pages, 2379 KB  
Article
An Integrated 60 GHz Radar and AI-Guided Infrared System for Non-Contact Heart Rate and Body Temperature Monitoring
by Sangwook Sim and Changgyun Kim
Appl. Sci. 2026, 16(7), 3272; https://doi.org/10.3390/app16073272 - 27 Mar 2026
Abstract
The growing need for remote patient monitoring, accelerated by the global pandemic and an aging population, necessitates the development of advanced non-contact technologies for measuring vital signs. In this study, an integrated, non-contact system for accurately measuring heart rate (HR) and body temperature (BT) is developed and validated. The proposed system combines a 60 GHz radar sensor and infrared (IR) sensor for HR and BT measurements, respectively, enhanced with advanced signal processing and an AI-based computer vision algorithm. A Window Filter and a Peak Uniformity algorithm were applied to the raw radar signal to mitigate noise and motion artifacts. For BT measurement, an IR sensor with a narrow five-degree field of view (FOV) was integrated with a YOLO Pose-based tracking system using a camera and servo motors to automatically orient the sensor towards the user’s face. The system was validated with 30 healthy adult participants, benchmarked against a MAX30102 PPG sensor and Braun ThermoScan 7 for HR and BT measurements, respectively. The advanced signal processing reduced the HR Mean Absolute Error from 13.73 BPM to 5.28 BPM (p = 0.002), while the AI-guided IR sensor reduced the BT MAE from 4.10 °C to 1.64 °C (p < 0.001). These findings demonstrate that integrating 60 GHz radar with AI-driven tracking provides a promising approach for home-based trend monitoring.
(This article belongs to the Special Issue AI-Based Biomedical Signal Processing—2nd Edition)

22 pages, 2650 KB  
Article
Design and Implementation of an Eyewear-Integrated Infrared Eye-Tracking System
by Carlo Pezzoli, Marco Brando Mario Paracchini, Daniele Maria Crafa, Marco Carminati, Luca Merigo, Tommaso Ongarello and Marco Marcon
Sensors 2026, 26(7), 2065; https://doi.org/10.3390/s26072065 - 26 Mar 2026
Abstract
Eye-tracking is a key enabling technology for smart eyewear, supporting hands-free interaction, accessibility, and context-aware human–machine interfaces under strict constraints on size, power consumption, and computational complexity. While camera-based solutions provide high accuracy, their integration into lightweight and low-power wearable platforms remains challenging. This paper is a feasibility study for the design, simulation, and experimental evaluation of a photosensor oculography (PSOG) eye-tracking system that is fully integrated into an eyewear frame, based on near-infrared (NIR) emitters and photodiodes. The proposed approach combines simulation-driven optimization of the optical constellation, a multi-frequency modulation and demodulation scheme enabling parallel source discrimination and robust ambient-light rejection, and a resource-efficient signal acquisition pipeline suitable for embedded implementation. Eye rotations in azimuth and elevation are inferred from differential reflectance patterns of ocular regions (sclera, iris, and pupil) using lightweight regression techniques, including shallow neural networks and Gaussian process regression, selected to balance estimation accuracy with computational and power constraints. System performance is evaluated using a controllable artificial-eye platform under defined geometric and illumination conditions, enabling repeatable assessment of gaze-estimation accuracy and algorithmic behavior. Sub-degree errors are achieved in this controlled setting, demonstrating the feasibility and potential effectiveness of the proposed architecture. Practical considerations for translation to real-world smart eyewear, including human-subject validation, anatomical variability, calibration strategies, and embedded deployment, are discussed and identified as directions for future work. By detailing the optical design methodology, modulation strategy, and algorithmic trade-offs, this work clarifies the distinct contributions of the proposed PSOG system relative to existing frame-integrated and camera-free eye-tracking approaches, and provides a foundation for further development toward wearable and augmented-reality applications.

27 pages, 8337 KB  
Article
VNIR/SWIR Multispectral Polarimetric Imager for Polymer Discrimination and Identification
by Ramon Prats Consola and Adriano Camps
Sensors 2026, 26(7), 2040; https://doi.org/10.3390/s26072040 - 25 Mar 2026
Abstract
This work presents a portable polarimetric multispectral imaging (PMSI) system operating in the visible to shortwave infrared range (VNIR–SWIR: 400–1700 nm) and its application to target detection, discrimination from aquatic backgrounds, and polymer identification. The instrument integrates two synchronized cameras with motorized bandpass filters and piezoelectric polarization control, enabling the acquisition of 48 wavelength–polarization measurements per capture. This configuration allows the extraction of both intensity-based and polarimetric features, including the degree of linear polarization (DoLP). A complete radiometric and polarimetric calibration framework is implemented, encompassing system response characterization, polarization-dependent gain correction, and reflectance normalization under variable illumination. Experiments conducted on a representative set of 16 polymer materials show that polarimetric information consistently improves class separability compared to intensity-only features, with a mean gain of 6.9 (95% CI: 6.35–8.47). Although the correlation between intensity- and DoLP-based separability is moderate (r = 0.44), the results indicate complementary identification capability. Material recoverability was further evaluated using spectral unmixing techniques (VCA, N-FINDR, and PPI), with VCA offering the best accuracy–complexity trade-off on the calibrated Stokes reflectance dataset. Despite these gains, identification among chemically similar polyethylene variants remains challenging due to limited spectral and polarimetric contrast. An underwater detectability study under natural illumination reveals strong wavelength-dependent constraints: SWIR penetration is limited to 4 cm, whereas VNIR bands (430–550 nm) preserve detectability up to 20 cm, with DoLP enhancing edge visibility. These results motivate future validation in more complex aquatic conditions and with increased spectral dimensionality.
(This article belongs to the Special Issue Hyperspectral Imaging for Environmental Monitoring)

22 pages, 3135 KB  
Article
Computational Imaging Method for Thermal Infrared Hyperspectral Imaging Based on a Snapshot Divided-Aperture System
by Tianzhen Ma, Zhijing He, Bin Wu, Yutian Lei, Yijie Wang, Xinze Liu, Bingmei Guo, Jiawei Lu, Bo Cheng, Shikai Zan, Chunlai Li and Liyin Yuan
Sensors 2026, 26(6), 1982; https://doi.org/10.3390/s26061982 - 22 Mar 2026
Abstract
To address the technical challenge of simultaneously achieving snapshot imaging capability and high spectral resolution in thermal infrared spectral imaging, this paper proposes a computational imaging method based on a snapshot divided-aperture imaging system. In this method, a self-developed divided-aperture snapshot multispectral camera is utilized to simultaneously capture nine low-spectral-resolution images in a single exposure. The precise registration of the sub-channel images is accomplished via a star-point array calibration method. To construct the spectral reconstruction dataset, a Fourier-transform infrared hyperspectral camera (FTIR HCam) is employed to simultaneously acquire hyperspectral data from real-world scenes. Based on this, a neural network model is applied to reconstruct 127-channel hyperspectral information from the low-dimensional multispectral measurements. Experimental results demonstrate that the proposed method effectively achieves hyperspectral reconstruction while maintaining system compactness and snapshot imaging capability, thus providing a viable technical approach for hyperspectral sensing in dynamic thermal infrared scenarios.
(This article belongs to the Section Sensing and Imaging)

15 pages, 6624 KB  
Article
Impacts of Climate Change and Inter-Specific Competition on the Spatial Distribution of Elliot’s Pheasant (Syrmaticus ellioti, Swinhoe, 1872) in Huzhou City, China
by Yongxiang Zhao, Xiaofan Jiang, Min Jiang, Yongqiang Qin, Yue Song, Yujie Zhang, Ke He and Liqiong Peng
Biology 2026, 15(6), 480; https://doi.org/10.3390/biology15060480 - 18 Mar 2026
Abstract
Ground-dwelling pheasants are vital indicators of forest ecosystem health. Understanding their distribution and response to climate change is crucial for regional biodiversity conservation. Based on 97,000 camera-days of infrared monitoring from 2019 to 2022 in Huzhou, China, we analyzed the spatial patterns and niche overlap of five pheasant species, including the first-class nationally protected Elliot’s Pheasant (Syrmaticus ellioti), using MaxEnt modeling and Schoener’s D index. Results showed the following: (1) Pheasants in Huzhou exhibited distinct vertical gradients, with Elliot’s Pheasant restricted primarily to mid-mountain forests (200–600 m) in western Anji. (2) Isothermality and winter thermal limits were the primary drivers of its distribution. (3) Niche analysis revealed intense competitive pressure; Elliot’s Pheasant habitat was largely encompassed by dominant species like the Silver Pheasant (Lophura nycthemera), showing a high overlap (D = 0.642) with the Koklass Pheasant (Pucrasia macrolopha). (4) By 2050, its suitable habitat is projected to shrink by 84.6% (from 1085.7 to 118.8 km²) and shift eastward. These findings highlight the high climate sensitivity and competitive vulnerability of Elliot’s Pheasant. We recommend prioritizing micro-habitat maintenance in mid-mountain zones and proactively establishing ecological corridors between Anji and Deqing to mitigate habitat loss and displacement.
(This article belongs to the Special Issue Bird Biology and Conservation)

24 pages, 6973 KB  
Article
Enhancing Wildlife Monitoring: An Advanced AI Approach for Accurate Giant Panda Behavior Detection and Conservation Insights
by Jin Hou, Chaoyu Liu, Dan Liu, Vanessa Hull, Yutong Wang, Xinyi Zhao, Yingchun Tan, Xiaogang Shi, Yuehong Cheng, Zhuo Tang, Desheng Li, Jifeng Ning and Jindong Zhang
Animals 2026, 16(6), 943; https://doi.org/10.3390/ani16060943 - 17 Mar 2026
Abstract
As global demands for nature reserve management intensify, intelligent monitoring has become a pivotal trend. Integrating artificial intelligence with infrared camera traps enables automated analysis of endangered species behavior, providing timely insights for conservation. However, complex habitats often degrade the performance of existing detection technologies. Focusing on the giant panda—a flagship conservation species—we constructed a novel dataset from long-term field monitoring videos and developed an improved PandaSlowFast network. Our model employs channel attention to enhance temporal features, uses small-kernel depth-wise convolutions and dilated convolutions to expand receptive fields for spatial feature extraction, and introduces the Adaptive SwisH activation function to improve adaptability and training stability. The results show that PandaSlowFast achieves 85.38% mean average precision (mAP), outperforming existing methods. An FP16-quantized version maintains comparable accuracy (85.16% mAP) while running at 3.2 frames per second on a Raspberry Pi 4, demonstrating practical deployability for on-site monitoring. This work provides technical support for intelligent panda behavior analysis and offers a transferable methodology for monitoring other rare species, contributing to biodiversity conservation.
(This article belongs to the Section Ecology and Conservation)

23 pages, 2962 KB  
Article
Feasibility of Infrared-Based Pedestrian Detectability in Unlit Urban and Rural Road Sections Using Consumer Thermal Cameras
by Yordan Stoyanov, Atanasi Tashev and Penko Mitev
Vehicles 2026, 8(3), 61; https://doi.org/10.3390/vehicles8030061 - 16 Mar 2026
Abstract
This study evaluates the feasibility of using two affordable thermal cameras (UNI-T UTi260M and UTi260T), which are not designed as automotive sensors, for observing pedestrians and warm objects during night-time driving under low-illumination conditions. The experimental setup includes mounting the camera on the vehicle body (e.g., side mirror area/roof), recording road scenes in urban and rural environments, and selecting representative frames for qualitative and quantitative analysis. The study assesses: (i) observable pedestrian detectability in unlit road sections and under oncoming headlight glare, where visible cameras often lose contrast; (ii) the influence of low ambient temperature and strong cold wind on image appearance (including “whitening”/contrast shifts); and (iii) workflow differences, where UTi260M relies on a smartphone application for streaming/recording, while UTi260T supports PC-based image analysis and temperature-profile visualization. In addition, a calibration-based geometric method is proposed for approximate pedestrian distance estimation from single frames using silhouette pixel height and a regression model based on 1/h_px, valid for a specific mounting configuration and a known subject height. Results indicate that both cameras can highlight warm objects relative to the background and support visual pedestrian identification at low illumination, including in the presence of oncoming headlights, with UTi260M showing more stable behavior in parts of the tests. This work is a feasibility study and does not claim Advanced Driver Assist Systems (ADAS) functionality; it outlines limitations, repeatability considerations, and a minimal set of metrics and procedures for future extension. All quantitative indicators derived from exported frames are explicitly treated as image-level proxy metrics, not as physical sensor characteristics.
(This article belongs to the Special Issue Novel Solutions for Transportation Safety, 2nd Edition)
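The calibration-based distance model described in the entry above — distance regressed on the reciprocal of silhouette pixel height — follows from the pinhole relation d ≈ f·H/h_px for a known subject height H. A least-squares fit of d = a·(1/h_px) + b can be sketched as follows; the sample data and names are synthetic, not the paper's calibration:

```python
def fit_inverse_height(samples):
    """Least-squares fit of d = a*(1/h_px) + b from (pixel_height, distance) pairs."""
    xs = [1.0 / h for h, _ in samples]
    ys = [d for _, d in samples]
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    a = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / \
        sum((x - mx) ** 2 for x in xs)
    return a, my - a * mx  # slope a (~f*H) and intercept b

# synthetic calibration data lying exactly on d = 1200 / h_px
a, b = fit_inverse_height([(120, 10.0), (60, 20.0), (40, 30.0), (24, 50.0)])
```

As the abstract stresses, such a fit is only valid for one mounting geometry and one assumed subject height; changing either invalidates the learned coefficients.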

20 pages, 2991 KB  
Article
Advancing Defect Detection in Laser Welding: A Machine Learning Approach Based on Spatter Feature Analysis
by Gleb Solovev, Evgenii Klokov, Dmitrii Krasnov and Mikhail Sokolov
Sensors 2026, 26(6), 1825; https://doi.org/10.3390/s26061825 - 13 Mar 2026
Abstract
Full-penetration laser welding (FPLW) is increasingly adopted in manufacturing pipelines, yet its industrial scalability is constrained by in-process defect formation, particularly incomplete penetration. To address this, we propose a sensor-driven framework for non-destructive monitoring and automated defect detection that uses infrared (IR) thermography as the primary in situ sensing modality and applies deep learning to the acquired thermal signals. High-speed IR camera recordings were processed to track spatter and the weld zone, yielding a time series of physically interpretable spatiotemporal features (mean spatter area, mean spatter temperature, number of spatters, and mean welding zone temperature). Defect recognition is formulated as a multi-label classification problem targeting incomplete penetration, sagging, shrinkage groove, and linear misalignment, and multiple temporal models were evaluated on the same sensor-derived feature sequences. Experimental validation on 09G2S pipeline steel demonstrates that the proposed time series pipeline based on a hybrid CNN–transformer achieves a mean Average Precision (mAP) of 0.85 while preserving near-real-time inference on a CPU. The results indicate that IR thermography-based spatter dynamics provide actionable sensing signatures for automated defect prediction and can serve as a foundation for closed-loop quality control in industrial laser pipeline welding.
(This article belongs to the Special Issue Sensing Technologies in Industrial Defect Detection)
