Search Results (9)

Search Parameters:
Keywords = vehicle-borne infrared

29 pages, 7269 KB  
Article
MSLCP-DETR: A Multi-Scale Linear Attention and Sparse Fusion Framework for Infrared Small Target Detection in Vehicle-Mounted Systems
by Fu Li, Meimei Zhu, Ming Zhao, Yuxin Sun and Wangyu Wu
Mathematics 2026, 14(1), 67; https://doi.org/10.3390/math14010067 - 24 Dec 2025
Abstract
Detecting small infrared targets in vehicle-mounted systems remains challenging due to weak thermal radiation, cross-scale feature loss, and dynamic background interference. To address these issues, this paper proposes MSLCP-DETR, an enhanced RT-DETR-based framework that integrates multi-scale linear attention and sparse fusion mechanisms. The model introduces three novel components: a Multi-Scale Linear Attention Encoder (MSLA-AIFI), which combines multi-branch depth-wise convolution with linear attention to efficiently capture cross-scale features while reducing computational complexity; a Cross-Scale Small Object Feature Optimization module (CSOFO), which enhances the localization of small targets in dense scenes through spatial rearrangement and dynamic modeling; and a Pyramid Sparse Transformer (PST), which replaces traditional dense fusion with a dual-branch sparse attention mechanism to improve both accuracy and real-time performance. Extensive experiments on the M3FD and FLIR datasets demonstrate that MSLCP-DETR achieves an excellent balance between accuracy and efficiency, with its precision, mAP@50, and mAP@50:95 reaching 90.3%, 79.5%, and 86.0%, respectively. Ablation studies and visual analysis further validate the effectiveness of the proposed modules and the overall design strategy. Full article
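The linear-attention idea underlying MSLA-AIFI can be illustrated generically: applying a positive feature map φ to queries and keys and reordering the matrix products reduces attention cost from O(N²) to O(N) in sequence length. The sketch below is a minimal generic linear attention, not the paper's MSLA-AIFI module; the ReLU-based feature map and all shapes are illustrative assumptions.

```python
import numpy as np

def linear_attention(Q, K, V, eps=1e-6):
    """O(N) attention sketch: phi(Q) @ (phi(K)^T @ V) with a positive feature
    map phi, instead of softmax(Q K^T) V which costs O(N^2)."""
    phi = lambda x: np.maximum(x, 0.0) + 1.0   # positive feature map (assumed)
    Qp, Kp = phi(Q), phi(K)                    # (N, d) each
    KV = Kp.T @ V                              # (d, d), computed once
    Z = Qp @ Kp.sum(axis=0)                    # per-row normalizer, shape (N,)
    return (Qp @ KV) / (Z[:, None] + eps)

rng = np.random.default_rng(0)
N, d = 8, 4
Q, K, V = rng.normal(size=(3, N, d))
out = linear_attention(Q, K, V)
print(out.shape)  # (8, 4)
```

Because `KV` has shape (d, d), the cost scales linearly with N rather than quadratically, which is what makes such encoders attractive for real-time vehicle-mounted detection.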
13 pages, 14213 KB  
Article
All-Weather Drone Vision: Passive SWIR Imaging in Fog and Rain
by Alexander Bessonov, Aleksei Rozanov, Richard White, Galih Suwito, Ivonne Medina-Salazar, Marat Lutfullin, Dmitrii Gusev and Ilya Shikov
Drones 2025, 9(8), 553; https://doi.org/10.3390/drones9080553 - 7 Aug 2025
Cited by 1 | Viewed by 3148
Abstract
Short-wave-infrared (SWIR) imaging can extend drone operations into fog and rain, yet the optimum spectral strategy remains unclear. We evaluated a drone-borne quantum-dot SWIR camera inside a climate-controlled tunnel that generated calibrated advection fog, radiation fog, and rain. Images were captured with a broadband 400–1700 nm setting and three sub-band filters, each at four lens apertures (f/1.8–5.6). Entropy, structural-similarity index (SSIM), and peak signal-to-noise ratio (PSNR) were computed for every weather–aperture–filter combination. Broadband SWIR consistently outperformed all filtered configurations. The gain stems from higher photon throughput, which outweighs the modest scattering reduction offered by narrowband selection. Under passive illumination, broadband SWIR therefore represents the most robust single-camera choice for unmanned aerial vehicles (UAVs), enhancing situational awareness and flight safety in fog and rain. Full article
(This article belongs to the Section Drone Design and Development)
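The per-image quality metrics used in the tunnel comparison follow standard definitions; the sketch below computes Shannon entropy and PSNR on a toy pixel list (SSIM, which requires local windowed statistics, is omitted). The images and values are illustrative, not the study's data.

```python
import math

def entropy(img, levels=256):
    """Shannon entropy (bits) of a grayscale image given as a flat list of 0..levels-1 ints."""
    hist = [0] * levels
    for px in img:
        hist[px] += 1
    n = len(img)
    return -sum((c / n) * math.log2(c / n) for c in hist if c)

def psnr(ref, test, max_val=255):
    """Peak signal-to-noise ratio in dB between two equal-length pixel lists."""
    mse = sum((a - b) ** 2 for a, b in zip(ref, test)) / len(ref)
    return float("inf") if mse == 0 else 10 * math.log10(max_val ** 2 / mse)

flat = [0, 0, 255, 255]       # toy 2x2 image: half black, half white
noisy = [0, 5, 250, 255]      # same image with slight degradation
print(entropy(flat))          # 1.0 bit
print(round(psnr(flat, noisy), 1))
```

Higher entropy indicates more retained scene detail, and higher PSNR indicates less degradation relative to a reference, which is how the broadband-versus-filtered comparison is scored.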
26 pages, 9416 KB  
Article
Multi-Component Remote Sensing for Mapping Buried Water Pipelines
by John Lioumbas, Thomas Spahos, Aikaterini Christodoulou, Ioannis Mitzias, Panagiota Stournara, Ioannis Kavouras, Alexandros Mentes, Nopi Theodoridou and Agis Papadopoulos
Remote Sens. 2025, 17(12), 2109; https://doi.org/10.3390/rs17122109 - 19 Jun 2025
Viewed by 1848
Abstract
Accurate localization of buried water pipelines in rural areas is crucial for maintenance and leak management but is often hindered by outdated maps and the limitations of traditional geophysical methods. This study aimed to develop and validate a multi-source remote-sensing workflow, integrating UAV (unmanned aerial vehicle)-borne near-infrared (NIR) surveys, multi-temporal Sentinel-2 imagery, and historical Google Earth orthophotos to precisely map pipeline locations and establish a surface baseline for future monitoring. Each dataset was processed within a unified least-squares framework to delineate pipeline axes from surface anomalies (vegetation stress, soil discoloration, and proxies) and rigorously quantify positional uncertainty, with findings validated against RTK-GNSS (Real-Time Kinematic—Global Navigation Satellite System) surveys of an excavated trench. The combined approach yielded sub-meter accuracy (±0.3 m) with UAV data, meter-scale precision (≈±1 m) with Google Earth, and precision up to several meters (±13.0 m) with Sentinel-2, significantly improving upon inaccurate legacy maps (up to a 300 m divergence) and successfully guiding excavation to locate a pipeline segment. The methodology demonstrated seasonal variability in detection capabilities, with optimal UAV-based identification occurring during early-vegetation growth phases (NDVI, Normalized Difference Vegetation Index ≈ 0.30–0.45) and post-harvest periods. A Sentinel-2 analysis of 221 cloud-free scenes revealed persistent soil discoloration patterns spanning 15–30 m in width, while Google Earth historical imagery provided crucial bridging data with intermediate spatial and temporal resolution. Ground-truth validation confirmed the pipeline location within 0.4 m of the Google Earth-derived position. 
This integrated, cost-effective workflow provides a transferable methodology for enhanced pipeline mapping and establishes a vital baseline of surface signatures, enabling more effective future monitoring and proactive maintenance to detect leaks or structural failures. This methodology is particularly valuable for water utility companies, municipal infrastructure managers, consulting engineers specializing in buried utilities, and remote-sensing practitioners working in pipeline detection and monitoring applications. Full article
(This article belongs to the Special Issue Remote Sensing Applications for Infrastructures)
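The NDVI window reported for optimal UAV-based detection is straightforward to compute: the sketch below derives per-pixel NDVI from NIR and red reflectance and flags pixels inside the 0.30–0.45 early-growth range. The reflectance pairs are hypothetical.

```python
def ndvi(nir, red, eps=1e-9):
    """Normalized Difference Vegetation Index from per-pixel NIR and red reflectance."""
    return (nir - red) / (nir + red + eps)

def in_survey_window(value, lo=0.30, hi=0.45):
    """Flag pixels inside the early-growth NDVI window the study found optimal."""
    return lo <= value <= hi

pixels = [(0.45, 0.20), (0.60, 0.10), (0.30, 0.25)]  # hypothetical (NIR, red) pairs
values = [ndvi(n, r) for n, r in pixels]
flags = [in_survey_window(v) for v in values]
print([round(v, 2) for v in values], flags)
```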
23 pages, 452 KB  
Review
Technology and Data Fusion Methods to Enhance Site-Specific Crop Monitoring
by Uzair Ahmad, Abozar Nasirahmadi, Oliver Hensel and Stefano Marino
Agronomy 2022, 12(3), 555; https://doi.org/10.3390/agronomy12030555 - 23 Feb 2022
Cited by 51 | Viewed by 8187
Abstract
The digital farming approach merges new technologies and sensor data to optimize the quality of crop monitoring in agriculture. Successful fusion of technology and data depends on parameter collection, model adoption, and technology integration being implemented accurately according to the specific needs of the farm. This fusion technique has not yet been widely adopted due to several challenges; this study therefore reviews current methods and applications for fusing technologies and data. First, the study highlights different sensors that can be merged with other systems to develop fusion methods, such as optical, thermal infrared, multispectral, hyperspectral, light detection and ranging (LiDAR), and radar sensors. Second, data fusion using the Internet of Things is reviewed. Third, the study describes different platforms that can serve as sources for the fusion of technologies: ground-based (tractors and robots), space-borne (satellites), and aerial (unmanned aerial vehicle) monitoring platforms. Finally, the study presents data fusion methods for site-specific monitoring of crop parameters such as nitrogen, chlorophyll, leaf area index, and aboveground biomass, and shows how the fusion of technologies and data can improve the monitoring of these parameters. The study further reveals limitations of previous technologies and provides recommendations on how to improve their fusion with the best available sensors. Among the data fusion methods, sensors, and technologies reviewed, the fusion of airborne and terrestrial LiDAR for crop, canopy, and ground measurements emerges as an easy-to-use, low-cost solution for enhancing site-specific monitoring of crop parameters. Full article
(This article belongs to the Special Issue Crop Yield Prediction in Precision Agriculture)
12 pages, 1999 KB  
Article
Precise Monitoring of Soil Salinity in China’s Yellow River Delta Using UAV-Borne Multispectral Imagery and a Soil Salinity Retrieval Index
by Xinyang Yu, Chunyan Chang, Jiaxuan Song, Yuping Zhuge and Ailing Wang
Sensors 2022, 22(2), 546; https://doi.org/10.3390/s22020546 - 11 Jan 2022
Cited by 31 | Viewed by 10433
Abstract
Monitoring the salinity of salinized soil efficiently and precisely using unmanned aerial vehicles (UAVs) is critical for the rational use and sustainable development of arable land resources. However, the parameters most sensitive to soil salinity, and a precise retrieval method, remain unknown. This study set out to identify sensitive parameters and construct an optimal method for retrieving soil salinity. A UAV-borne multispectral image of China's Yellow River Delta was acquired to extract band reflectance and compute vegetation and soil salinity indices. Soil samples collected from 120 study sites were used for laboratory salt content measurements. Grey correlation analysis and Pearson correlation coefficients were employed to screen sensitive band reflectances and indices. A new soil salinity retrieval index (SSRI) was then proposed based on the screened sensitive reflectances. Partial Least Squares Regression (PLSR), Multivariable Linear Regression (MLR), Back Propagation Neural Network (BPNN), Support Vector Machine (SVM), and Random Forest (RF) methods were employed to construct retrieval models based on the sensitive indices. The results show that the green, red, and near-infrared (NIR) bands were sensitive to soil salinity and can be used to build the SSRI. The SSRI-based RF model retrieved soil salinity most accurately: its modeling determination coefficient (R2) and Root Mean Square Error (RMSE) were 0.724 and 1.764, respectively, and the validation R2, RMSE, and Residual Predictive Deviation (RPD) were 0.745, 1.879, and 2.211. Full article
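The validation statistics reported here (R2, RMSE, RPD) follow standard definitions; the sketch below computes them on hypothetical observed and predicted salt contents.

```python
import math

def r2(obs, pred):
    """Coefficient of determination: 1 - SS_res / SS_tot."""
    m = sum(obs) / len(obs)
    ss_res = sum((o - p) ** 2 for o, p in zip(obs, pred))
    ss_tot = sum((o - m) ** 2 for o in obs)
    return 1 - ss_res / ss_tot

def rmse(obs, pred):
    """Root mean square error of predictions."""
    return math.sqrt(sum((o - p) ** 2 for o, p in zip(obs, pred)) / len(obs))

def rpd(obs, pred):
    """Residual predictive deviation: sample SD of observations over RMSE
    (RPD > 2 is commonly read as a reliable retrieval model)."""
    m = sum(obs) / len(obs)
    sd = math.sqrt(sum((o - m) ** 2 for o in obs) / (len(obs) - 1))
    return sd / rmse(obs, pred)

obs = [2.0, 4.0, 6.0, 8.0]   # hypothetical measured salt contents
pred = [2.5, 3.5, 6.5, 7.5]  # hypothetical model predictions
print(round(r2(obs, pred), 3), rmse(obs, pred), round(rpd(obs, pred), 2))
```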
28 pages, 7403 KB  
Article
Assessment of Classifiers and Remote Sensing Features of Hyperspectral Imagery and Stereo-Photogrammetric Point Clouds for Recognition of Tree Species in a Forest Area of High Species Diversity
by Sakari Tuominen, Roope Näsi, Eija Honkavaara, Andras Balazs, Teemu Hakala, Niko Viljanen, Ilkka Pölönen, Heikki Saari and Harri Ojanen
Remote Sens. 2018, 10(5), 714; https://doi.org/10.3390/rs10050714 - 5 May 2018
Cited by 68 | Viewed by 10558
Abstract
Recognition of tree species and geospatial information on tree species composition are essential for forest management. In this study, tree species recognition was examined using hyperspectral imagery from visible-to-near-infrared (VNIR) and short-wave infrared (SWIR) camera sensors in combination with a 3D photogrammetric canopy surface model based on RGB camera stereo-imagery. An arboretum with a diverse selection of 26 tree species from 14 genera was used as the test area. Aerial hyperspectral imagery and high-spatial-resolution photogrammetric color imagery were acquired from the test area using unmanned aerial vehicle (UAV)-borne sensors. The hyperspectral imagery was processed into calibrated reflectance mosaics, which were tested alongside mosaics based on the original image digital number (DN) values. Two alternative classifiers, a k-nearest-neighbor (k-nn) method combined with a genetic algorithm, and a random forest method, were tested for predicting tree species and genus, as well as for selecting an optimal set of remote sensing features for this task. The combination of VNIR, SWIR, and 3D features performed better than any of the data sets individually, and the calibrated reflectance values outperformed the uncorrected DN values; these trends held for both classifiers. Of the two classifiers, the k-nn combined with the genetic algorithm consistently outperformed the random forest algorithm. The best result was achieved using calibrated reflectance features from VNIR and SWIR imagery together with 3D point cloud features: the proportion of correctly classified trees was 0.823 for tree species and 0.869 for tree genus. Full article
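The k-nn stage of such a classifier is easy to sketch as a majority vote over nearest feature vectors; note that the study additionally couples k-nn with a genetic algorithm for feature selection, which is omitted here, and the two-feature "spectra" and genus labels below are invented for illustration.

```python
from collections import Counter
import math

def knn_predict(train, query, k=3):
    """Minimal k-nearest-neighbor majority vote over (feature_vector, label) pairs."""
    ranked = sorted(train, key=lambda fl: math.dist(fl[0], query))
    votes = Counter(label for _, label in ranked[:k])
    return votes.most_common(1)[0][0]

# hypothetical 2-feature spectra: (VNIR mean, SWIR mean) -> genus
train = [((0.80, 0.30), "Pinus"), ((0.70, 0.35), "Pinus"),
         ((0.40, 0.60), "Betula"), ((0.35, 0.65), "Betula")]
print(knn_predict(train, (0.75, 0.32)))  # Pinus
```

In the study's setup the feature vectors would instead hold the GA-selected subset of VNIR/SWIR reflectance and 3D point-cloud features.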
14 pages, 32606 KB  
Article
Use of UAV-Borne Spectrometer for Land Cover Classification
by Sowmya Natesan, Costas Armenakis, Guy Benari and Regina Lee
Drones 2018, 2(2), 16; https://doi.org/10.3390/drones2020016 - 20 Apr 2018
Cited by 44 | Viewed by 11938
Abstract
Unmanned aerial vehicles (UAVs) are being used for low-altitude remote sensing and thematic land classification with visible-light and multi-spectral sensors. The objective of this work was to investigate the use of a UAV equipped with a compact spectrometer for land cover classification. The UAV platform was a DJI Flamewheel F550 hexacopter equipped with GPS and Inertial Measurement Unit (IMU) navigation sensors and a Raspberry Pi processor and camera module. The spectrometer was the FLAME-NIR, a near-infrared spectrometer for hyperspectral measurements. RGB images and spectrometer data were captured simultaneously. As the spectrometer data do not provide continuous terrain coverage, the locations of their elliptical ground footprints were determined from the bundle adjustment solution of the captured images. For each spectrometer ground ellipse, the land cover signature at the footprint location was determined to enable the characterization, identification, and classification of land cover elements. To obtain a continuous land cover classification map, spatial interpolation was carried out from the irregularly distributed labeled spectrometer points. Classification accuracy was assessed by spatial intersection with an object-based image classification performed on the RGB images. Results show that for homogeneous land cover such as water, classification accuracy is 78%, while for mixed classes such as grass, trees, and man-made features, the average accuracy is 50%, indicating that hyperspectral measurements from low-altitude UAV-borne spectrometers can contribute to improved land cover classification. Full article
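The final step, turning irregularly distributed labeled footprints into a continuous map, can be approximated with nearest-neighbor assignment; the abstract does not specify the interpolation scheme, so this choice and the footprint coordinates below are assumptions.

```python
import math

def nearest_label(points, x, y):
    """Assign location (x, y) the class of the nearest labeled spectrometer footprint."""
    return min(points, key=lambda p: math.hypot(p[0] - x, p[1] - y))[2]

# hypothetical footprint centers: (x, y, land-cover class)
pts = [(0, 0, "water"), (10, 0, "grass"), (0, 10, "trees")]
print(nearest_label(pts, 1, 2))  # water
print(nearest_label(pts, 9, 1))  # grass
```

Evaluating `nearest_label` over a regular grid of (x, y) locations yields the continuous classification map described in the abstract.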
17 pages, 17821 KB  
Article
Automatic Coregistration Algorithm to Remove Canopy Shaded Pixels in UAV-Borne Thermal Images to Improve the Estimation of Crop Water Stress Index of a Drip-Irrigated Cabernet Sauvignon Vineyard
by Tomas Poblete, Samuel Ortega-Farías and Dongryeol Ryu
Sensors 2018, 18(2), 397; https://doi.org/10.3390/s18020397 - 30 Jan 2018
Cited by 66 | Viewed by 7805
Abstract
Water stress caused by water scarcity has a negative impact on the wine industry. Several strategies have been implemented for optimizing water application in vineyards. In this regard, midday stem water potential (SWP) and thermal infrared (TIR) imaging for the crop water stress index (CWSI) have been used to assess plant water stress on a vine-by-vine basis, without considering spatial variability. Unmanned Aerial Vehicle (UAV)-borne TIR images can capture the canopy temperature variability within vineyards, which relates to vine water status. Nevertheless, when aerial TIR images are captured over the canopy, internally shaded canopy pixels cannot be detected, leading to mixed information that degrades the relationship between CWSI and SWP. This study proposes a methodology for automatic coregistration of thermal and multispectral images (490–900 nm) obtained from a UAV to remove shaded canopy pixels, using a modified scale-invariant feature transform (SIFT) computer vision algorithm and K-means++ clustering. Our results indicate that the proposed methodology improves the relationship between CWSI and SWP when shaded canopy pixels are removed from a drip-irrigated Cabernet Sauvignon vineyard: the coefficient of determination (R2) increased from 0.64 to 0.77, while the root mean square error (RMSE) and standard error (SE) decreased from 0.2 to 0.1 MPa and from 0.24 to 0.16 MPa, respectively. Finally, this study shows that the negative effect of shaded canopy pixels was greater in water-stressed vines than in well-watered vines. Full article
(This article belongs to the Special Issue UAV or Drones for Remote Sensing Applications)
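The CWSI itself has a standard normalized form; the sketch below uses the common wet/dry-reference formulation (the paper's exact baselines may differ), with hypothetical temperatures.

```python
def cwsi(t_canopy, t_wet, t_dry):
    """Crop water stress index from canopy temperature and wet/dry reference
    temperatures: 0 ~ well watered, 1 ~ fully stressed."""
    if t_dry <= t_wet:
        raise ValueError("t_dry must exceed t_wet")
    return (t_canopy - t_wet) / (t_dry - t_wet)

# hypothetical temperatures (deg C) for one vine after shaded pixels are removed
print(cwsi(t_canopy=31.0, t_wet=25.0, t_dry=35.0))  # 0.6
```

Removing shaded pixels before averaging `t_canopy` is exactly what tightens the CWSI-to-SWP relationship reported above, since shaded pixels bias the canopy temperature downward.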
36 pages, 6569 KB  
Article
Intercomparison of Unmanned Aerial Vehicle and Ground-Based Narrow Band Spectrometers Applied to Crop Trait Monitoring in Organic Potato Production
by Marston Héracles Domingues Franceschini, Harm Bartholomeus, Dirk Van Apeldoorn, Juha Suomalainen and Lammert Kooistra
Sensors 2017, 17(6), 1428; https://doi.org/10.3390/s17061428 - 18 Jun 2017
Cited by 55 | Viewed by 16614 | Correction
Abstract
Vegetation properties can be estimated using optical sensors acquiring data on board different platforms. For instance, ground-based and Unmanned Aerial Vehicle (UAV)-borne spectrometers can measure reflectance in narrow spectral bands, while different modelling approaches, such as regressions fitted to vegetation indices, can relate spectra to crop traits. Although monitoring frameworks using multiple sensors can be more flexible, they may be less accurate due to differences in sensor characteristics, which can affect information sampling. Organic production systems can also benefit from continuous monitoring focused on crop management and stress detection, but few studies have evaluated applications with this objective. In this study, ground-based and UAV spectrometers were compared in the context of organic potato cultivation. Relatively accurate estimates were obtained for leaf chlorophyll (RMSE = 6.07 µg·cm⁻²), leaf area index (RMSE = 0.67 m²·m⁻²), canopy chlorophyll (RMSE = 0.24 g·m⁻²), and ground cover (RMSE = 5.5%) using five UAV-based data acquisitions from 43 to 99 days after planting. These retrievals are slightly better than those derived from ground-based measurements (RMSE = 7.25 µg·cm⁻², 0.85 m²·m⁻², 0.28 g·m⁻², and 6.8%, respectively) for the same period. Excluding observations from the first acquisition, when vegetation cover was still relatively low, increased retrieval accuracy and made outputs more comparable between sensors. An intercomparison of vegetation indices indicated that indices based on the contrast between visible and near-infrared spectral bands, such as OSAVI, MCARI2, and CIg, provided, to a certain extent, robust outputs that could be transferred between sensors. Information sampling at plot level by both sensing solutions yielded comparable discriminative power for advanced stages of late blight incidence. These results indicate that optical sensors, and their integration, have great potential for monitoring this specific organic cropping system. Full article
(This article belongs to the Special Issue Precision Agriculture and Remote Sensing Data Fusion)
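Two of the transferable indices named above have compact closed forms; the sketch below uses the common Rondeaux formulation of OSAVI and the green chlorophyll index (CIg) on hypothetical reflectances (MCARI2's longer expression is omitted).

```python
def osavi(nir, red):
    """Optimized Soil-Adjusted Vegetation Index, common Rondeaux-style form:
    (NIR - red) / (NIR + red + 0.16)."""
    return (nir - red) / (nir + red + 0.16)

def ci_green(nir, green):
    """Green chlorophyll index: NIR over green reflectance, minus one."""
    return nir / green - 1.0

# hypothetical canopy reflectances
nir, red, green = 0.50, 0.08, 0.10
print(round(osavi(nir, red), 3), round(ci_green(nir, green), 1))
```

The 0.16 soil-adjustment constant in OSAVI damps soil-background effects at low cover, one reason such indices transfer between ground-based and UAV-borne sensors better than raw band ratios.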