Search Results (970)

Search Parameters:
Keywords = infrared remote sensing images

25 pages, 16046 KB  
Article
UAV-Based Multimodal Monitoring of Tea Anthracnose with Temporal Standardization
by Qimeng Yu, Jingcheng Zhang, Lin Yuan, Xin Li, Fanguo Zeng, Ke Xu, Wenjiang Huang and Zhongting Shen
Agriculture 2025, 15(21), 2270; https://doi.org/10.3390/agriculture15212270 - 31 Oct 2025
Abstract
Tea Anthracnose (TA), caused by fungi of the genus Colletotrichum, is one of the major threats to global tea production. UAV remote sensing has been explored for non-destructive and high-efficiency monitoring of diseases in tea plantations. However, variations in illumination, background, and meteorological factors undermine the stability of cross-temporal data. Data processing and modeling complexity further limits model generalizability and practical application. This study introduced a cross-temporal, generalizable disease monitoring approach based on UAV multimodal data coupled with relative-difference standardization. In an experimental tea garden, we collected multispectral, thermal infrared, and RGB images and extracted four classes of features: spectral (Sp), thermal (Th), texture (Te), and color (Co). The Normalized Difference Vegetation Index (NDVI) was used to identify reference areas and standardize features, which significantly reduced the relative differences in cross-temporal features. Additionally, we developed a vegetation–soil relative temperature (VSRT) index, which exhibits higher temporal-phase consistency than the conventional normalized relative canopy temperature (NRCT). A multimodal optimal feature set was constructed through sensitivity analysis based on the four feature categories. For different modality combinations (single and fused), three machine learning algorithms, K-Nearest Neighbors (KNN), Support Vector Machine (SVM), and Multi-layer Perceptron (MLP), were selected to evaluate disease classification performance due to their low computational burden and ease of deployment. Results indicate that the “Sp + Th” combination achieved the highest accuracy (95.51%), with KNN (95.51%) outperforming SVM (94.23%) and MLP (92.95%). Moreover, under the optimal feature combination and KNN algorithm, the model achieved high generalizability (86.41%) on independent temporal data. 
This study demonstrates that fusing spectral and thermal features with temporal standardization, combined with the simple and effective KNN algorithm, achieves accurate and robust tea anthracnose monitoring, providing a practical solution for efficient and generalizable disease management in tea plantations. Full article
(This article belongs to the Section Crop Protection, Diseases, Pests and Weeds)
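The first abstract above rests on two simple building blocks: NDVI-based reference standardization of features and a KNN classifier. As an illustrative sketch only (the paper's actual feature pipeline is richer; function names and the NDVI threshold here are assumptions), the core steps might look like:

```python
import numpy as np

def ndvi(nir, red):
    # Normalized Difference Vegetation Index, element-wise;
    # the small epsilon guards against division by zero
    return (nir - red) / (nir + red + 1e-9)

def standardize_by_reference(feature, ndvi_map, ndvi_thresh=0.8):
    # Express each feature relative to its mean over a stable,
    # high-NDVI reference area, damping cross-date illumination drift
    ref = feature[ndvi_map > ndvi_thresh].mean()
    return feature / ref

def knn_predict(train_X, train_y, query, k=3):
    # Plain k-nearest-neighbours majority vote (Euclidean distance)
    d = np.linalg.norm(train_X - query, axis=1)
    votes = train_y[np.argsort(d)[:k]]
    return np.bincount(votes).argmax()
```

The appeal of KNN in this setting, per the abstract, is its low computational burden and ease of deployment rather than raw modeling power.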

18 pages, 2981 KB  
Article
Multispectral and Colorimetric Approaches for Non-Destructive Maturity Assessment of Specialty Arabica Coffee
by Seily Cuchca Ramos, Jaris Veneros, Carlos Bolaños-Carriel, Grobert A. Guadalupe, Marilu Mestanza, Heyton Garcia, Segundo G. Chavez and Ligia Garcia
Foods 2025, 14(21), 3644; https://doi.org/10.3390/foods14213644 - 25 Oct 2025
Viewed by 225
Abstract
This study evaluated the integration of non-invasive remote sensing and colorimetry to classify the maturity stages of Coffea arabica fruits across four varieties: Caturra Amarillo, Excelencia, Milenio, and Típica. Multispectral signatures were captured using a Parrot Sequoia camera at wavelengths of 550 nm, 660 nm, 735 nm, and 790 nm, while colorimetric parameters L*, a*, and b* were measured with a high-precision colorimeter. We conducted multivariate analyses, including Principal Component Analysis (PCA) and multiple linear regression (MLR), to identify color patterns and develop predictors for fruit maturity. Spectral curve analysis revealed consistent changes related to ripening: a decrease in reflectance in the green band (550 nm), a progressive increase in the red band (660 nm), and relative stability in the RedEdge and near-infrared regions (735–790 nm). Colorimetric analysis confirmed systematic trends, indicating that the a* component (green to red) was the most reliable indicator of ripeness. Additionally, L* (lightness) decreased with maturity, and the b* component (yellowness to blue) showed varying importance depending on the variety. PCA accounted for over 98% of the variability across all varieties, demonstrating that these three parameters effectively characterize maturity. MLR models exhibited strong predictive performance, with adjusted R2 values ranging between 0.789 and 0.877. Excelencia achieved the highest predictive accuracy, while Milenio demonstrated the lowest, highlighting varietal differences in pigmentation dynamics. These findings show that combining multispectral imaging, colorimetry, and statistical modeling offers a non-destructive, accessible, and cost-effective method for objectively classifying coffee maturity. 
Integrating this approach into computer vision or remote sensing systems could enhance harvest planning, reduce variability in specialty coffee lots, and improve competitiveness by ensuring greater consistency in cup quality. Full article
(This article belongs to the Special Issue Coffee Science: Innovations Across the Production-to-Consumer Chain)
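The coffee-maturity study predicts ripeness from L*, a*, and b* via multiple linear regression (MLR). A minimal ordinary-least-squares sketch of that model class (not the authors' code; variable names are illustrative):

```python
import numpy as np

def fit_mlr(X, y):
    # Ordinary least squares with an intercept column;
    # returns beta = [intercept, coefficients...]
    A = np.column_stack([np.ones(len(X)), X])
    beta, *_ = np.linalg.lstsq(A, y, rcond=None)
    return beta

def predict_mlr(beta, X):
    # Apply the fitted linear model to new colorimetric measurements
    return np.column_stack([np.ones(len(X)), X]) @ beta
```

With X holding the three colorimetric parameters per fruit, the adjusted R² values the abstract reports (0.789–0.877) would be computed on the predictions of such a model.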

21 pages, 2767 KB  
Article
Semi-Automated Extraction of Active Fire Edges from Tactical Infrared Observations of Wildfires
by Christopher C. Giesige, Eric Goldbeck-Dimon, Andrew Klofas and Mario Miguel Valero
Remote Sens. 2025, 17(21), 3525; https://doi.org/10.3390/rs17213525 - 24 Oct 2025
Viewed by 243
Abstract
Remote sensing of wildland fires has become an integral part of fire science. Airborne sensors provide high spatial resolution and can provide high temporal resolution, enabling fire behavior monitoring at fine scales. Fire agencies frequently use airborne long-wave infrared (LWIR) imagery for fire monitoring and to aid in operational decision-making. While tactical remote sensing systems may differ from scientific instruments, our objective is to illustrate that operational support data has the capacity to aid scientific fire behavior studies and to facilitate the data analysis. We present an image processing algorithm that automatically delineates active fire edges in tactical LWIR orthomosaics. Several thresholding and edge detection methodologies were investigated and combined into a new algorithm. Our proposed method was tested on tactical LWIR imagery acquired during several fires in California in 2020 and compared to manually annotated mosaics. Jaccard index values ranged from 0.725 to 0.928. The semi-automated algorithm successfully extracted active fire edges over a wide range of image complexity. These results contribute to the integration of infrared fire observations captured during firefighting operations into scientific studies of fire spread and support landscape-scale fire behavior modeling efforts. Full article
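The fire-edge algorithm above is scored with the Jaccard index (0.725–0.928) against manually annotated mosaics. The metric itself is simple to state in code; this is a generic sketch, not the paper's implementation:

```python
import numpy as np

def jaccard(mask_a, mask_b):
    # Intersection over union of two boolean fire-edge masks;
    # two empty masks are treated as a perfect match
    inter = np.logical_and(mask_a, mask_b).sum()
    union = np.logical_or(mask_a, mask_b).sum()
    return inter / union if union else 1.0
```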

9 pages, 2395 KB  
Article
A Wide Field of View and Broadband Infrared Imaging System Integrating a Dispersion-Engineered Metasurface
by Bo Liu, Yunqiang Zhang, Zhu Li, Xuetao Gan and Xin Xie
Photonics 2025, 12(10), 1033; https://doi.org/10.3390/photonics12101033 - 19 Oct 2025
Viewed by 326
Abstract
We present a compact hybrid imaging system operating in the 3–5 μm spectral band that combines refractive optics with a dispersion-engineered metasurface to overcome the longstanding trade-off between wide field of view (FOV), system size, and thermal stability. The system achieves an ultra-wide 178° FOV within a total track length of only 28.25 mm, employing just three refractive lenses and one metasurface. Through co-optimization of material selection and system architecture, it keeps the modulation transfer function (MTF) above 0.54 at 33 lp/mm and the geometric (GEO) spot radius below 15 μm across an extended operational temperature range from –40 °C to 60 °C. The metasurface is designed using a propagation phase approach with cylindrical unit cells to ensure polarization-insensitive behavior, and its broadband dispersion-free phase profile is optimized via a particle swarm algorithm. The results indicate that phase-matching errors remain small at all wavelengths, with a mean value of 0.11068. This design provides an environmentally resilient solution for lightweight applications, including automotive infrared night vision and unmanned aerial vehicle remote sensing. Full article
(This article belongs to the Special Issue Optical Metasurfaces: Applications and Trends)
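A common target for a broadband, dispersion-free focusing element is the hyperbolic phase profile. The sketch below assumes that form as an illustration; the paper's actual profile, optimized by particle swarm over the 3–5 μm band, will differ in detail:

```python
import numpy as np

def hyperbolic_phase(r, wavelength, focal_length):
    # Ideal aberration-free focusing phase for a flat lens at radius r;
    # a dispersion-engineered metasurface approximates such a target
    # across the whole band rather than at one design wavelength
    return -2 * np.pi / wavelength * (
        np.sqrt(r**2 + focal_length**2) - focal_length)
```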

20 pages, 1288 KB  
Article
Spatio-Temporal Residual Attention Network for Satellite-Based Infrared Small Target Detection
by Yan Chang, Decao Ma, Qisong Yang, Shaopeng Li and Daqiao Zhang
Remote Sens. 2025, 17(20), 3457; https://doi.org/10.3390/rs17203457 - 16 Oct 2025
Viewed by 312
Abstract
With the development of infrared remote sensing technology and the deployment of satellite constellations, infrared video from orbital platforms is playing an increasingly important role in airborne target surveillance. However, due to the limitations of remote sensing imaging, the aerial targets in such videos are often small in scale, low in contrast, and slow in movement, making them difficult to detect in complex backgrounds. In this paper, we propose a novel detection network that integrates inter-frame residual guidance with spatio-temporal feature enhancement to address the challenge of small object detection in infrared satellite video. The method first extracts residual features to highlight motion-sensitive regions, then uses a dual-branch structure to encode spatial semantics and temporal evolution, and finally fuses them deeply through a multi-scale feature enhancement module. Extensive experiments show that the method outperforms mainstream approaches on various infrared small-target video datasets and remains robust under low-signal-to-noise-ratio conditions. Full article
(This article belongs to the Section AI Remote Sensing)
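The detector's first stage extracts inter-frame residuals to emphasize motion. A minimal version of that preprocessing step (an assumed formulation for registered frames; the network's actual residual guidance is learned):

```python
import numpy as np

def residual_frames(video):
    # video: (T, H, W) stack of registered infrared frames; the
    # absolute difference between consecutive frames suppresses the
    # static background and highlights slow, low-contrast movers
    return np.abs(np.diff(video.astype(np.float32), axis=0))
```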

18 pages, 112460 KB  
Article
Gradient Boosting for the Spectral Super-Resolution of Ocean Color Sensor Data
by Brittney Slocum, Jason Jolliff, Sherwin Ladner, Adam Lawson, Mark David Lewis and Sean McCarthy
Sensors 2025, 25(20), 6389; https://doi.org/10.3390/s25206389 - 16 Oct 2025
Viewed by 675
Abstract
We present a gradient boosting framework for reconstructing hyperspectral signatures in the visible spectrum (400–700 nm) of satellite-based ocean scenes from limited multispectral inputs. Hyperspectral data comprise many narrow wavelength bands, typically more than 100, across the electromagnetic spectrum. While hyperspectral data can offer reflectance values at every nanometer, multispectral sensors typically provide only 3 to 11 discrete bands, undersampling the visible color space. Our approach is applied to remote sensing reflectance (Rrs) measurements from a set of ocean color sensors, including the Suomi National Polar-orbiting Partnership (SNPP) Visible Infrared Imaging Radiometer Suite (VIIRS), the Ocean and Land Colour Instrument (OLCI), the Hyperspectral Imager for the Coastal Ocean (HICO), and NASA’s Plankton, Aerosol, Cloud, Ocean Ecosystem Ocean Color Instrument (PACE OCI), as well as in situ Rrs data from National Oceanic and Atmospheric Administration (NOAA) calibration and validation cruises. By leveraging these datasets, we demonstrate the feasibility of transforming low-spectral-resolution imagery into high-fidelity hyperspectral products. This capability is particularly valuable given the increasing availability of low-cost platforms equipped with RGB or multispectral imaging systems. Our results underscore the potential of hyperspectral enhancement for advancing ocean color monitoring and enabling broader access to high-resolution spectral data for scientific and environmental applications. Full article
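Gradient boosting fits each new weak learner to the residual of the running ensemble. A self-contained stump-boosting sketch of that principle (this is the generic technique, not the authors' model, which maps multispectral bands to hyperspectral Rrs):

```python
import numpy as np

class Stump:
    # Depth-1 regression tree: one feature, one threshold, two leaf means
    def fit(self, X, r):
        best_err = np.inf
        for j in range(X.shape[1]):
            for t in np.unique(X[:, j]):
                left = X[:, j] <= t
                if left.all() or not left.any():
                    continue
                pred = np.where(left, r[left].mean(), r[~left].mean())
                err = ((r - pred) ** 2).sum()
                if err < best_err:
                    best_err = err
                    self.j, self.t = j, t
                    self.lo, self.hi = r[left].mean(), r[~left].mean()
        return self

    def predict(self, X):
        return np.where(X[:, self.j] <= self.t, self.lo, self.hi)

def gradient_boost(X, y, n_rounds=60, lr=0.1):
    # Squared-error gradient boosting: each stump fits the residual
    # of the current ensemble; a small learning rate damps each step
    pred = np.full(len(y), y.mean())
    stumps = []
    for _ in range(n_rounds):
        stump = Stump().fit(X, y - pred)
        pred = pred + lr * stump.predict(X)
        stumps.append(stump)
    return stumps, pred
```

In practice a library implementation would be used; the loop above only shows why boosting can interpolate a smooth band-to-band mapping from few inputs.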

20 pages, 8158 KB  
Article
Reconstructing Global Chlorophyll-a Concentration for the COCTS Aboard Chinese Ocean Color Satellites via the DINEOF Method
by Xiaomin Ye, Mingsen Lin, Bin Zou, Xiaomei Wang and Zhijia Lin
Remote Sens. 2025, 17(20), 3433; https://doi.org/10.3390/rs17203433 - 15 Oct 2025
Viewed by 398
Abstract
The chlorophyll-a (Chl-a) concentration, a critical parameter for characterizing marine primary productivity and ecological health, plays a vital role in ecological environment monitoring and climate change assessment while serving as a core retrieval product in ocean color remote sensing. Currently, more than ten ocean color satellites operate globally, including China’s HY-1C, HY-1D and HY-1E satellites. However, significant spatial gaps exist in satellite Chl-a retrievals because of cloud cover, sun glint, and the limited sensor swath. This study aimed to systematically enhance the spatiotemporal integrity of ocean monitoring data through multisource data merging and reconstruction techniques. We integrated Chl-a concentration datasets from four major sensor types—Moderate Resolution Imaging Spectroradiometer (MODIS), Visible Infrared Imaging Radiometer Suite (VIIRS), Ocean and Land Color Instrument (OLCI), and Chinese Ocean Color and Temperature Scanner (COCTS)—and quantitatively evaluated their global coverage performance under different payload combinations. The key findings revealed that single-sensor 4-day continuous observation achieved effective coverage of only 10.45–26.1%, while multi-sensor merging substantially increased coverage: homogeneous payload merging provided 25.7% coverage for two MODIS satellites, 41.1% for three VIIRS satellites, 24.8% for two OLCI satellites, and 37.1% for three COCTS satellites, with 10-payload merging raising the coverage rate to 55.4%. Employing the Data Interpolating Empirical Orthogonal Functions (DINEOF) algorithm, we successfully reconstructed data for China’s ocean color satellites.
Validation against VIIRS reconstructions indicated high consistency (a mean relative error of 26% and a linear correlation coefficient of 0.93), whereas self-verification yielded a mean relative error of 27% and a linear correlation coefficient of 0.90. Case studies in Chinese offshore and adjacent waters, waters east of Mindanao Island and north of New Guinea, demonstrated the successful reconstruction of spatiotemporal Chl-a dynamics. The results demonstrated that China’s HY-1C, HY-1D, and HY-1E satellites enable daily global-scale Chl-a reconstruction. Full article
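DINEOF fills gaps by iterating a truncated-EOF (SVD) reconstruction over the missing points. A minimal sketch of that loop (the operational algorithm also cross-validates the number of retained modes, which is omitted here):

```python
import numpy as np

def dineof_fill(field, n_modes=1, n_iter=200):
    # Minimal DINEOF-style gap filling: seed gaps with the field mean,
    # then repeatedly rebuild the matrix from its leading EOF modes,
    # updating only the missing points until they converge
    filled = field.copy()
    missing = np.isnan(field)
    filled[missing] = np.nanmean(field)
    for _ in range(n_iter):
        U, s, Vt = np.linalg.svd(filled, full_matrices=False)
        recon = (U[:, :n_modes] * s[:n_modes]) @ Vt[:n_modes]
        filled[missing] = recon[missing]
    return filled
```

Observed values are never altered; only the cloud- or swath-induced gaps inherit the low-rank structure of the rest of the field.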

17 pages, 7451 KB  
Article
An Off-Axis Catadioptric Division of Aperture Optical System for Multi-Channel Infrared Imaging
by Jie Chen, Tong Yang, Hongbo Xie and Lei Yang
Photonics 2025, 12(10), 1008; https://doi.org/10.3390/photonics12101008 - 13 Oct 2025
Viewed by 251
Abstract
Multi-channel optical systems can provide more feature information compared to single-channel systems, making them valuable for optical remote sensing, target identification, and other applications. The division of aperture polarization imaging modality allows for the simultaneous imaging of targets in the same field of view with a single detector. To overcome the limitations of conventional refractive aperture-divided systems for miniaturization, this work proposes an off-axis catadioptric aperture-divided technique for polarization imaging. First, the design method of the off-axis reflective telescope structure is discussed. The relationship between optical parameters such as magnification, surface coefficient, and primary aberration is studied. Second, by establishing the division of the aperture optical model, the method of maximizing the field of view and aperture is determined. Finally, an off-axis catadioptric cooled aperture-divided infrared optical system with a single aperture focal length of 60 mm is shown as a specific design example. Each channel can achieve 100% cold shield efficiency, and the overall length of the telescope module can be decreased significantly. The image quality of each imaging channel is close to the diffraction limit, verifying the effectiveness and feasibility of the method. The proposed off-axis catadioptric aperture-divided design method holds potential applications in simultaneous infrared polarization imaging. Full article
(This article belongs to the Section Optical Interaction Science)

8 pages, 2675 KB  
Proceeding Paper
Enhancing Tetracorder Mineral Classification with Random Forest Modeling
by Hideki Tsubomatsu and Hideyuki Tonooka
Eng. Proc. 2025, 94(1), 25; https://doi.org/10.3390/engproc2025094025 - 10 Oct 2025
Viewed by 227
Abstract
Hyperspectral (HS) remote sensing is a valuable tool for geological surveys and mineral classification. However, mineral maps derived from HS data can exhibit inconsistencies across different imaging times or sensors due to complex factors. In this study, we propose a novel method to enhance the robustness and temporal consistency of mineral mapping. The method combines the spectral identification capabilities of the Tetracorder expert system, developed by the United States Geological Survey (USGS), with a data-driven classification model: Tetracorder is applied to high-purity pixels identified through pixel purity index (PPI) analysis to generate reliable training labels. These labels, along with hyperspectral bands transformed by the minimum noise fraction (MNF), are used to train a random forest classifier. The methodology was evaluated using multi-temporal images of the Airborne Visible/Infrared Imaging Spectrometer (AVIRIS), acquired over Cuprite, Nevada, between 2011 and 2013. The results demonstrate that the proposed method achieves accuracy comparable to Tetracorder while improving map consistency and reducing inter-annual mapping errors by approximately 30%. Full article

21 pages, 14964 KB  
Article
An Automated Framework for Abnormal Target Segmentation in Levee Scenarios Using Fusion of UAV-Based Infrared and Visible Imagery
by Jiyuan Zhang, Zhonggen Wang, Jing Chen, Fei Wang and Lyuzhou Gao
Remote Sens. 2025, 17(20), 3398; https://doi.org/10.3390/rs17203398 - 10 Oct 2025
Viewed by 382
Abstract
Levees are critical for flood defence, but their integrity is threatened by hazards such as piping and seepage, especially during high-water-level periods. Traditional manual inspections for these hazards and associated emergency response elements, such as personnel and assets, are inefficient and often impractical. While UAV-based remote sensing offers a promising alternative, the effective fusion of multi-modal data and the scarcity of labelled data for supervised model training remain significant challenges. To overcome these limitations, this paper reframes levee monitoring as an unsupervised anomaly detection task. We propose a novel, fully automated framework that unifies geophysical hazards and emergency response elements into a single analytical category of “abnormal targets” for comprehensive situational awareness. The framework consists of three key modules: (1) a state-of-the-art registration algorithm to precisely align infrared and visible images; (2) a generative adversarial network to fuse the thermal information from IR images with the textural details from visible images; and (3) an adaptive, unsupervised segmentation module where a mean-shift clustering algorithm, with its hyperparameters automatically tuned by Bayesian optimization, delineates the targets. We validated our framework on a real-world dataset collected from a levee on the Pajiang River, China. The proposed method demonstrates superior performance over all baselines, achieving an Intersection over Union of 0.348 and a macro F1-Score of 0.479. This work provides a practical, training-free solution for comprehensive levee monitoring and demonstrates the synergistic potential of multi-modal fusion and automated machine learning for disaster management. Full article
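The segmentation module clusters fused pixel values with mean shift before Bayesian hyperparameter tuning. A 1-D flat-kernel sketch of the mode-seeking update (illustrative only; the paper operates on image feature vectors, and its bandwidth is tuned automatically):

```python
import numpy as np

def mean_shift_1d(points, bandwidth, n_iter=30):
    # Flat-kernel mean shift: each seed moves to the mean of the data
    # points within the bandwidth; seeds settle on density modes,
    # so no cluster count is specified in advance
    modes = points.astype(float).copy()
    for _ in range(n_iter):
        for i in range(len(modes)):
            nbrs = points[np.abs(points - modes[i]) <= bandwidth]
            modes[i] = nbrs.mean()
    return modes
```

That mean shift needs only a bandwidth, not a class count, is what makes it a natural fit for unsupervised "abnormal target" delineation.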

31 pages, 3160 KB  
Article
Multimodal Image Segmentation with Dynamic Adaptive Window and Cross-Scale Fusion for Heterogeneous Data Environments
by Qianping He, Meng Wu, Pengchang Zhang, Lu Wang and Quanbin Shi
Appl. Sci. 2025, 15(19), 10813; https://doi.org/10.3390/app151910813 - 8 Oct 2025
Viewed by 570
Abstract
Multi-modal image segmentation is a key task in various fields such as urban planning, infrastructure monitoring, and environmental analysis. However, it remains challenging due to complex scenes, varying object scales, and the integration of heterogeneous data sources (such as RGB, depth maps, and infrared). To address these challenges, we proposed a novel multi-modal segmentation framework, DyFuseNet, which features dynamic adaptive windows and cross-scale feature fusion capabilities. This framework consists of three key components: (1) Dynamic Window Module (DWM), which uses dynamic partitioning and continuous position bias to adaptively adjust window sizes, thereby improving the representation of irregular and fine-grained objects; (2) Scale Context Attention (SCA), a hierarchical mechanism that associates local details with global semantics in a coarse-to-fine manner, enhancing segmentation accuracy in low-texture or occluded regions; and (3) Hierarchical Adaptive Fusion Architecture (HAFA), which aligns and fuses features from multiple modalities through shallow synchronization and deep channel attention, effectively balancing complementarity and redundancy. Evaluated on benchmark datasets (such as ISPRS Vaihingen and Potsdam), DyFuseNet achieved state-of-the-art performance, with mean Intersection over Union (mIoU) scores of 80.40% and 80.85%, surpassing MFTransNet by 1.91% and 1.77%, respectively. The model also demonstrated strong robustness in challenging scenes (such as building edges and shadowed objects), achieving an average F1 score of 85% while maintaining high efficiency (26.19 GFLOPs, 30.09 FPS), making it suitable for real-time deployment. This work presents a practical, versatile, and computationally efficient solution for multi-modal image analysis, with potential applications beyond remote sensing, including smart monitoring, industrial inspection, and multi-source data fusion tasks. Full article
(This article belongs to the Special Issue Signal and Image Processing: From Theory to Applications: 2nd Edition)

22 pages, 8737 KB  
Article
UAV-Based Multispectral Imagery for Area-Wide Sustainable Tree Risk Management
by Kinga Mazurek, Łukasz Zając, Marzena Suchocka, Tomasz Jelonek, Adam Juźwiak and Marcin Kubus
Sustainability 2025, 17(19), 8908; https://doi.org/10.3390/su17198908 - 7 Oct 2025
Viewed by 686
Abstract
The responsibility for risk assessment and user safety in forested and recreational areas lies with the property owner. This study shows that unmanned aerial vehicles (UAVs), combined with remote sensing and GIS analysis, effectively support the identification of high-risk trees, particularly those with reduced structural stability. UAV-based surveys detected 78% of the dead or declining trees identified during ground inspections, while significantly reducing labor and enabling large-area assessments within a short timeframe. The study covered an area of 6.69 ha with 51 reference trees assessed on the ground. Although the multispectral camera also recorded the red-edge band, it was not included in the present analysis. Compared to traditional ground-based surveys, the UAV-based approach reduced fieldwork time by approximately 20–30% and labor costs by approximately 15–20%. Orthomosaics generated from images captured by commercial multispectral drones (e.g., DJI Mavic 3 Multispectral) provide essential information on tree condition, especially mortality indicators. UAV data collection is fast and relatively low-cost but requires equipment capable of capturing high-resolution imagery in specific spectral bands, particularly near-infrared (NIR). The findings suggest that UAV-based monitoring can enhance the efficiency of large-scale inspections. However, ground-based verification remains necessary in high-traffic areas where safety is critical. Integrating UAV technologies with GIS supports the development of risk management strategies aligned with the principles of precision forestry, enabling sustainable, more proactive and efficient monitoring of tree-related hazards. Full article
(This article belongs to the Section Sustainable Forestry)

17 pages, 2923 KB  
Article
TY-SpectralNet: An Interpretable Adaptive Network for the Pattern of Multimode Fiber Spectral Analysis
by Yuzhe Wang, Songlu Lin, Fudong Zhang and Zhihong Wang
Appl. Sci. 2025, 15(19), 10606; https://doi.org/10.3390/app151910606 - 30 Sep 2025
Viewed by 238
Abstract
Background: The high-precision analysis of multimode fibers (MMFs) is a critical task in numerous applications, including remote sensing, medical imaging, and environmental monitoring. In this study, we propose a novel deep interpretable network approach to reconstruct spectral images captured using CCD sensors. Methods: Our model leverages a Tiny-YOLO-inspired convolutional neural network architecture, specifically designed for spectral wavelength prediction tasks. A total of 1880 CCD interference images were acquired across a broad near-infrared range from 1527.7 to 1565.3 nm. To ensure precise predictions, we introduce a dynamic factor α and design a dynamic adaptive loss function based on Huber loss and Log-Cosh loss. Results: Experimental evaluation with five-fold cross-validation demonstrates the robustness of the proposed method, achieving an average validation MSE of 0.0149, an R2 score of 0.9994, and a normalized error (μ) of 0.0005 in single MMF wavelength prediction, confirming its strong generalization capability across unseen data. The reconstructed outputs are further visualized as smooth spectral curves, providing interpretable insights into the model’s decision-making process. Conclusions: This study highlights the potential of deep learning-based interpretable networks in reconstructing high-fidelity spectral images from CCD sensors, paving the way for advancements in spectral imaging technology. Full article
(This article belongs to the Special Issue Advanced Optical Fiber Sensors: Applications and Technology)
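The TY-SpectralNet loss blends Huber and Log-Cosh terms under a dynamic factor α. The two base losses are standard; the blend below is a hypothetical weighting, since the abstract does not give the paper's exact scheme:

```python
import numpy as np

def huber(e, delta=1.0):
    # Quadratic near zero, linear in the tails (robust to outliers)
    a = np.abs(e)
    return np.where(a <= delta, 0.5 * e**2, delta * (a - 0.5 * delta))

def log_cosh(e):
    # Smooth everywhere; behaves like e**2 / 2 near zero, |e| far out
    return np.log(np.cosh(e))

def adaptive_loss(e, alpha):
    # Hypothetical convex blend driven by a dynamic factor alpha;
    # the paper's actual update rule for alpha is not specified here
    return alpha * huber(e) + (1 - alpha) * log_cosh(e)
```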

29 pages, 19475 KB  
Article
Fine-Scale Grassland Classification Using UAV-Based Multi-Sensor Image Fusion and Deep Learning
by Zhongquan Cai, Changji Wen, Lun Bao, Hongyuan Ma, Zhuoran Yan, Jiaxuan Li, Xiaohong Gao and Lingxue Yu
Remote Sens. 2025, 17(18), 3190; https://doi.org/10.3390/rs17183190 - 15 Sep 2025
Cited by 1 | Viewed by 845
Abstract
Grassland classification via remote sensing is essential for ecosystem monitoring and precision management, yet conventional satellite-based approaches are fundamentally constrained by coarse spatial resolution. To overcome this limitation, we harness high-resolution UAV multi-sensor data, integrating multi-scale image fusion with deep learning to achieve fine-scale grassland classification that satellites cannot provide. First, four categories of UAV data, including RGB, multispectral, thermal infrared, and LiDAR point cloud, were collected, and a fused image tensor consisting of 10 channels (NDVI, VCI, CHM, etc.) was constructed through orthorectification and resampling. For feature-level fusion, four deep fusion networks were designed. Among them, the MultiScale Pyramid Fusion Network, utilizing a pyramid pooling module, effectively integrated spectral and structural features, achieving optimal performance in all six image fusion evaluation metrics, including information entropy (6.84), spatial frequency (15.56), and mean gradient (12.54). Subsequently, training and validation datasets were constructed by integrating visual interpretation samples. Four backbone networks, including UNet++, DeepLabV3+, PSPNet, and FPN, were employed, and attention modules (SE, ECA, and CBAM) were introduced separately to form 12 model combinations. Results indicated that the UNet++ network combined with the SE attention module achieved the best segmentation performance on the validation set, with a mean Intersection over Union (mIoU) of 77.68%, overall accuracy (OA) of 86.98%, F1-score of 81.48%, and Kappa coefficient of 0.82. In the categories of Leymus chinensis and Puccinellia distans, producer’s accuracy (PA)/user’s accuracy (UA) reached 86.46%/82.30% and 82.40%/77.68%, respectively. Whole-image prediction validated the model’s coherent identification capability for patch boundaries. 
In conclusion, this study provides a systematic approach for integrating multi-source UAV remote sensing data and intelligent grassland interpretation, offering technical support for grassland ecological monitoring and resource assessment. Full article
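Mean Intersection over Union (mIoU), the headline metric of the grassland study, averages per-class IoU. A generic implementation (not tied to the authors' evaluation code):

```python
import numpy as np

def mean_iou(pred, gt, n_classes):
    # Per-class intersection-over-union, averaged over the classes
    # that appear in either the prediction or the ground truth
    ious = []
    for c in range(n_classes):
        inter = np.logical_and(pred == c, gt == c).sum()
        union = np.logical_or(pred == c, gt == c).sum()
        if union:
            ious.append(inter / union)
    return float(np.mean(ious))
```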

4 pages, 1052 KB  
Abstract
LWIR InAs/InAsSb Superlattice Detector for Cooled FPA
by Małgorzata Kopytko, Grzegorz Kołodziej, Piotr Baranowski, Krzysztof Murawski, Łukasz Kubiszyn, Krystian Michalczewski, Bartłomiej Seredyński, Kamil Szlachetko, Jarosław Jureńczyk and Waldemar Gawron
Proceedings 2025, 129(1), 28; https://doi.org/10.3390/proceedings2025129028 - 12 Sep 2025
Viewed by 303
Abstract
Long-wavelength infrared (LWIR) focal plane arrays (FPAs) are of particular importance in thermal imaging, remote sensing, and defense applications due to their ability to detect thermal signatures in the 8–12 μm spectral range [...] Full article
