Search Results (3,259)

Search Parameters:
Keywords = high-resolution remote-sensing images

23 pages, 10676 KB  
Article
Hourly and 0.5-Meter Green Space Exposure Mapping and Its Impacts on the Urban Built Environment
by Yan Wu, Weizhong Su, Yingbao Yang and Jia Hu
Remote Sens. 2025, 17(21), 3531; https://doi.org/10.3390/rs17213531 - 24 Oct 2025
Abstract
Accurately mapping urban residents’ exposure to green space at high spatiotemporal resolutions is essential for assessing disparities and equality across blocks and enhancing urban environment planning. In this study, we developed a framework to generate hourly green space exposure maps at 0.5 m resolution using multiple sources of remote sensing data and an Object-Based Image Classification with Graph Convolutional Network (OBIC-GCN) model. Taking the main urban area of Nanjing, China, as the study area, we proposed a Dynamic Residential Green Space Exposure (DRGE) metric to reveal disparities in green space access across four housing price blocks. The Palma ratio was employed to characterize the inequity of DRGE, while XGBoost (eXtreme Gradient Boosting) and SHAP (SHapley Additive exPlanations) methods were utilized to explore the impacts of built environment factors on DRGE. We found that the difference between daytime and nighttime DRGE values was significant, with DRGE values higher after 6:00 than at night. Mean DRGE on weekends was about 1.5 times higher than on workdays, and the DRGE in high-priced blocks was about twice that in low-priced blocks. More than 68% of residents in high-priced blocks experienced over 8 h of green space exposure during weekend nighttime (especially around 19:00), much more than in low-priced blocks. Moreover, spatial inequality in residents’ green space exposure was more pronounced on weekends than on workdays, with lower-priced blocks exhibiting greater inequality (Palma ratio: 0.445 vs. 0.385). Furthermore, green space morphology, quantity, and population density were identified as the critical factors affecting DRGE. The optimal threshold for Percent of Landscape (PLAND) was 25–70%, while building density, building height, and Sky View Factor (SVF) were negatively correlated with DRGE. These findings address current research gaps by considering population mobility, capturing green space supply and demand inequities, and providing scientific decision-making support for future urban green space equality and planning.
(This article belongs to the Special Issue Remote Sensing Applications in Urban Environment and Climate)
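As an illustrative aside: the Palma ratio used above is conventionally the share of the total held by the top 10% divided by the share held by the bottom 40%. A minimal numpy sketch of that computation, using synthetic DRGE values rather than the authors' data:

```python
import numpy as np

def palma_ratio(values: np.ndarray) -> float:
    """Palma ratio: share of the total held by the top 10%
    divided by the share held by the bottom 40%."""
    v = np.sort(values)
    n = len(v)
    bottom_40 = v[: int(0.4 * n)].sum()
    top_10 = v[int(0.9 * n):].sum()
    return top_10 / bottom_40

# Hypothetical DRGE values for residents of one block type
rng = np.random.default_rng(0)
drge = rng.lognormal(mean=1.0, sigma=0.6, size=1000)
print(f"Palma ratio: {palma_ratio(drge):.3f}")
```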

21 pages, 2767 KB  
Article
Semi-Automated Extraction of Active Fire Edges from Tactical Infrared Observations of Wildfires
by Christopher C. Giesige, Eric Goldbeck-Dimon, Andrew Klofas and Mario Miguel Valero
Remote Sens. 2025, 17(21), 3525; https://doi.org/10.3390/rs17213525 - 24 Oct 2025
Abstract
Remote sensing of wildland fires has become an integral part of fire science. Airborne sensors provide high spatial resolution and can achieve high temporal resolution, enabling fire behavior monitoring at fine scales. Fire agencies frequently use airborne long-wave infrared (LWIR) imagery for fire monitoring and to aid operational decision-making. While tactical remote sensing systems may differ from scientific instruments, our objective is to illustrate that operational support data can aid scientific fire behavior studies and facilitate data analysis. We present an image processing algorithm that automatically delineates active fire edges in tactical LWIR orthomosaics. Several thresholding and edge detection methodologies were investigated and combined into a new algorithm. Our proposed method was tested on tactical LWIR imagery acquired during several fires in California in 2020 and compared to manually annotated mosaics. Jaccard index values ranged from 0.725 to 0.928. The semi-automated algorithm successfully extracted active fire edges over a wide range of image complexity. These results contribute to the integration of infrared fire observations captured during firefighting operations into scientific studies of fire spread and support landscape-scale fire behavior modeling efforts.
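The Jaccard index reported above is the intersection-over-union of the extracted and manually annotated fire areas. A minimal sketch with toy binary masks (not the authors' pipeline):

```python
import numpy as np

def jaccard_index(pred: np.ndarray, truth: np.ndarray) -> float:
    """Intersection-over-union of two boolean masks."""
    pred, truth = pred.astype(bool), truth.astype(bool)
    union = np.logical_or(pred, truth).sum()
    if union == 0:
        return 1.0  # both masks empty: perfect agreement
    return np.logical_and(pred, truth).sum() / union

# Toy 2D masks standing in for delineated active-fire areas
pred = np.zeros((100, 100), dtype=bool)
truth = np.zeros((100, 100), dtype=bool)
pred[20:70, 20:70] = True
truth[25:75, 25:75] = True
print(f"Jaccard index: {jaccard_index(pred, truth):.3f}")  # ~0.681
```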

20 pages, 9075 KB  
Article
CatBoost Improves Inversion Accuracy of Plant Water Status in Winter Wheat Using Ratio Vegetation Index
by Bingyan Dong, Shouchen Ma, Zhenhao Gao and Anzhen Qin
Appl. Sci. 2025, 15(21), 11363; https://doi.org/10.3390/app152111363 - 23 Oct 2025
Abstract
The accurate monitoring of crop water status is critical for optimizing irrigation strategies in winter wheat. Compared with satellite remote sensing, unmanned aerial vehicle (UAV) technology offers superior spatial resolution, temporal flexibility, and controllable data acquisition, making it an ideal choice for the small-scale monitoring of crop water status. During 2023–2025, field experiments were conducted to predict crop water status using UAV images in the North China Plain (NCP). Thirteen vegetation indices were calculated and their correlations with observed crop water content (CWC) and equivalent water thickness (EWT) were analyzed. Four machine learning (ML) models, namely, random forest (RF), decision tree (DT), LightGBM, and CatBoost, were evaluated for their inversion accuracy with regard to CWC and EWT in the 2024–2025 growing season of winter wheat. The results show that the ratio vegetation index (RVI, NIR/R) exhibited the strongest correlation with CWC (R = 0.97) during critical growth stages. Among the ML models, CatBoost demonstrated superior performance, achieving R² values of 0.992 (CWC) and 0.962 (EWT) in training datasets, with corresponding RMSE values of 0.012% and 0.1907 g cm⁻², respectively. The model maintained robust performance in testing (R² = 0.893 for CWC, and R² = 0.961 for EWT), outperforming conventional approaches like RF and DT. High-resolution (5 cm) inversion maps successfully identified spatial variability in crop water status across experimental plots. The CatBoost-RVI framework proved particularly effective during the booting and flowering stages, providing reliable references for precision irrigation management in the NCP.
(This article belongs to the Special Issue Advanced Plant Biotechnology in Sustainable Agriculture—2nd Edition)
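As a sketch of the RVI-plus-CatBoost inversion idea, assuming the open-source catboost package and synthetic reflectance values (hyperparameters and data are illustrative, not the paper's configuration):

```python
import numpy as np
from catboost import CatBoostRegressor

def rvi(nir: np.ndarray, red: np.ndarray) -> np.ndarray:
    """Ratio vegetation index: NIR / red reflectance."""
    return nir / np.clip(red, 1e-6, None)

# Synthetic per-plot reflectance and observed crop water content (CWC, %)
rng = np.random.default_rng(1)
nir = rng.uniform(0.2, 0.6, 200)
red = rng.uniform(0.03, 0.15, 200)
X = rvi(nir, red).reshape(-1, 1)
cwc = 40 + 2.5 * X[:, 0] + rng.normal(0, 1.0, 200)

model = CatBoostRegressor(iterations=300, depth=4, verbose=0)
model.fit(X, cwc)
print(model.predict(X[:3]))  # predicted CWC for the first three plots
```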

24 pages, 1777 KB  
Systematic Review
Monitoring Biodiversity and Ecosystem Services Using L-Band Synthetic Aperture Radar Satellite Data
by Brian Alan Johnson, Chisa Umemiya, Koji Miwa, Takeo Tadono, Ko Hamamoto, Yasuo Takahashi, Mariko Harada and Osamu Ochiai
Remote Sens. 2025, 17(20), 3489; https://doi.org/10.3390/rs17203489 - 20 Oct 2025
Abstract
Over the last decade, L-band synthetic aperture radar (SAR) satellite data has become more widely available globally, providing new opportunities for biodiversity and ecosystem services (BES) monitoring. To better understand these opportunities, we conducted a systematic scoping review of articles that utilized L-band SAR satellite data for BES monitoring. We found that the data have mainly been analyzed using image classification and regression methods, with classification methods attempting to understand how the extent, spatial distribution, and/or changes in different types of land use/land cover affect BES, and regression methods attempting to generate spatially explicit maps of important BES-related indicators such as species richness or vegetation above-ground biomass. Random forest classification and regression algorithms, in particular, were used frequently and found to be promising in many recent studies. Deep learning algorithms, while also promising, have seen relatively little usage thus far. PALSAR-1/-2 annual mosaic data was by far the most frequently used dataset. Although free, this data is limited by its low temporal resolution. To help overcome this and other limitations of existing L-band SAR datasets, 64% of studies combined them with other types of remote sensing data (most commonly, optical multispectral data). Study sites were mainly subnational in scale and located in countries with high species richness. Future research opportunities include investigating the benefits of new free, high-temporal-resolution L-band SAR datasets (e.g., PALSAR-2 ScanSAR data), combining L-band SAR with new sources of SAR data (e.g., P-band SAR data from the “Biomass” satellite), and further exploring the potential of deep learning techniques.
(This article belongs to the Special Issue Global Biospheric Monitoring with Remote Sensing (2nd Edition))
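A minimal sketch of the random-forest regression workflow the review found common, using synthetic HH/HV backscatter as stand-ins for L-band SAR features (not tied to any reviewed study):

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import train_test_split

# Synthetic HH/HV backscatter (dB) standing in for L-band SAR pixels
rng = np.random.default_rng(2)
X = rng.uniform(-25, -5, size=(500, 2))            # columns: HH, HV
agb = 200 + 8 * X[:, 1] + rng.normal(0, 10, 500)   # toy biomass (t/ha)

X_tr, X_te, y_tr, y_te = train_test_split(X, agb, random_state=0)
rf = RandomForestRegressor(n_estimators=200, random_state=0).fit(X_tr, y_tr)
print(f"R^2 on held-out pixels: {rf.score(X_te, y_te):.2f}")
```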

18 pages, 112460 KB  
Article
Gradient Boosting for the Spectral Super-Resolution of Ocean Color Sensor Data
by Brittney Slocum, Jason Jolliff, Sherwin Ladner, Adam Lawson, Mark David Lewis and Sean McCarthy
Sensors 2025, 25(20), 6389; https://doi.org/10.3390/s25206389 - 16 Oct 2025
Abstract
We present a gradient boosting framework for reconstructing hyperspectral signatures in the visible spectrum (400–700 nm) of satellite-based ocean scenes from limited multispectral inputs. Hyperspectral data comprise many narrow wavelength bands, typically more than 100, across the electromagnetic spectrum. While hyperspectral data can offer reflectance values at every nanometer, multispectral sensors typically provide only 3 to 11 discrete bands, undersampling the visible color space. Our approach is applied to remote sensing reflectance (Rrs) measurements from a set of ocean color sensors, including the Suomi National Polar-orbiting Partnership (SNPP) Visible Infrared Imaging Radiometer Suite (VIIRS), the Ocean and Land Colour Instrument (OLCI), the Hyperspectral Imager for the Coastal Ocean (HICO), and NASA’s Plankton, Aerosol, Cloud, ocean Ecosystem (PACE) Ocean Color Instrument (OCI), as well as in situ Rrs data from National Oceanic and Atmospheric Administration (NOAA) calibration and validation cruises. By leveraging these datasets, we demonstrate the feasibility of transforming low-spectral-resolution imagery into high-fidelity hyperspectral products. This capability is particularly valuable given the increasing availability of low-cost platforms equipped with RGB or multispectral imaging systems. Our results underscore the potential of hyperspectral enhancement for advancing ocean color monitoring and enabling broader access to high-resolution spectral data for scientific and environmental applications.
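One plausible realization of the band-wise mapping is a separate boosted regressor per output wavelength; a sketch under that assumption, with synthetic spectra and scikit-learn standing in for the authors' implementation:

```python
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.multioutput import MultiOutputRegressor

# Synthetic Rrs: 5 multispectral bands in, 31 hyperspectral bands out
# (400-700 nm at 10 nm spacing here; real targets approach 1 nm spacing)
rng = np.random.default_rng(3)
X = rng.uniform(0.0, 0.02, size=(300, 5))
W = rng.uniform(0.2, 1.0, size=(5, 31))
Y = X @ W + rng.normal(0, 1e-4, size=(300, 31))

# One boosted regressor per output wavelength
model = MultiOutputRegressor(GradientBoostingRegressor(n_estimators=50))
model.fit(X, Y)
print(model.predict(X[:1]).shape)  # (1, 31): one reconstructed spectrum
```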

22 pages, 6497 KB  
Article
Semantic Segmentation of High-Resolution Remote Sensing Images Based on RS3Mamba: An Investigation of the Extraction Algorithm for Rural Compound Utilization Status
by Xinyu Fang, Zhenbo Liu, Su’an Xie and Yunjian Ge
Remote Sens. 2025, 17(20), 3443; https://doi.org/10.3390/rs17203443 - 15 Oct 2025
Abstract
In this study, we utilize Gaofen-2 satellite remote sensing images to optimize and enhance the extraction of feature information from rural compounds, addressing a key challenge in high-resolution remote sensing analysis: traditional methods struggle to capture the long-distance spatial dependencies of scattered rural compounds. To this end, we implement the RS3Mamba+ deep learning model, which introduces the Mamba state space model (SSM) into its auxiliary branch, leveraging Mamba’s sequence modeling strengths to efficiently capture long-range spatial correlations of rural compounds, a critical capability for analyzing sparse rural buildings. This Mamba-assisted branch, combined with multi-directional selective scanning (SS2D) and an enhanced STEM network framework (replacing a single 7 × 7 convolution with two-stage 3 × 3 convolutions to reduce information loss), works synergistically with a ResNet-based main branch for local feature extraction. We further introduce a multiscale attention feature fusion mechanism that optimizes feature extraction and fusion, improves edge contour extraction accuracy for courtyards, and strengthens the recognition and differentiation of courtyards from regions with complex textures. The feature information of courtyard utilization status is finally extracted using empirical methods. A typical rural area in Weifang City, Shandong Province, is selected as the experimental sample area. Results show that the extraction accuracy reaches a mean intersection over union (mIoU) of 79.64% and a Kappa coefficient of 0.7889, improving the F1 score by at least 8.12% and mIoU by 4.83% compared with models such as DeepLabv3+ and Transformer-based networks. The algorithm is particularly effective at mitigating false alarms triggered by shadows and intricate textures, underscoring its potential as a tool for extracting rural vacancy rates.
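The STEM modification described above (a single 7 × 7 convolution replaced by two 3 × 3 stages) can be sketched in PyTorch as follows; layer widths are illustrative:

```python
import torch
import torch.nn as nn

# Single 7x7 stem vs. the two-stage 3x3 variant described above
stem_7x7 = nn.Sequential(
    nn.Conv2d(3, 64, kernel_size=7, stride=2, padding=3),
    nn.BatchNorm2d(64), nn.ReLU(inplace=True),
)
stem_3x3 = nn.Sequential(
    nn.Conv2d(3, 32, kernel_size=3, stride=2, padding=1),
    nn.BatchNorm2d(32), nn.ReLU(inplace=True),
    nn.Conv2d(32, 64, kernel_size=3, stride=1, padding=1),
    nn.BatchNorm2d(64), nn.ReLU(inplace=True),
)

x = torch.randn(1, 3, 512, 512)
print(stem_7x7(x).shape, stem_3x3(x).shape)  # same spatial downsampling
```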

20 pages, 8158 KB  
Article
Reconstructing Global Chlorophyll-a Concentration for the COCTS Aboard Chinese Ocean Color Satellites via the DINEOF Method
by Xiaomin Ye, Mingsen Lin, Bin Zou, Xiaomei Wang and Zhijia Lin
Remote Sens. 2025, 17(20), 3433; https://doi.org/10.3390/rs17203433 - 15 Oct 2025
Abstract
The chlorophyll-a (Chl-a) concentration, a critical parameter for characterizing marine primary productivity and ecological health, plays a vital role in ecological environment monitoring and climate change assessment while serving as a core retrieval product in ocean color remote sensing. Currently, more than ten ocean color satellites operate globally, including China’s HY-1C, HY-1D and HY-1E satellites. However, significant spatial gaps exist in satellite-retrieved Chl-a concentration data because of cloud cover, sun glint, and limitations of sensor swath. This study aimed to systematically enhance the spatiotemporal integrity of ocean monitoring data through multisource data merging and reconstruction techniques. We integrated Chl-a concentration datasets from four major sensor types—Moderate Resolution Imaging Spectroradiometer (MODIS), Visible Infrared Imaging Radiometer Suite (VIIRS), Ocean and Land Color Instrument (OLCI), and Chinese Ocean Color and Temperature Scanner (COCTS)—and quantitatively evaluated their global coverage performance under different payload combinations. The key findings revealed that single-sensor 4-day continuous observation achieved effective coverage of only 10.45–26.1%, while multi-sensor merging substantially increased coverage: homogeneous payload merging provided 25.7% coverage for two MODIS satellites, 41.1% for three VIIRS satellites, 24.8% for two OLCI satellites, and 37.1% for three COCTS satellites, with 10-payload merging increasing the coverage rate to 55.4%. Employing the Data Interpolating Empirical Orthogonal Functions (DINEOF) algorithm, we successfully reconstructed data for China’s ocean color satellites. Validation against VIIRS reconstructions indicated high consistency (a mean relative error of 26% and a linear correlation coefficient of 0.93), whereas self-verification yielded a mean relative error of 27% and a linear correlation coefficient of 0.90. Case studies in Chinese offshore and adjacent waters, waters east of Mindanao Island and north of New Guinea, demonstrated the successful reconstruction of spatiotemporal Chl-a dynamics. The results demonstrated that China’s HY-1C, HY-1D, and HY-1E satellites enable daily global-scale Chl-a reconstruction.
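DINEOF fills gaps by iterating a truncated EOF (SVD) reconstruction over the missing entries until convergence. A bare-bones numpy sketch with a fixed mode count and a toy space × time field (real implementations cross-validate the number of modes):

```python
import numpy as np

def dineof(data: np.ndarray, mask: np.ndarray, n_modes=3, n_iter=50):
    """Fill gaps (mask==True) by iterative truncated-SVD reconstruction.
    data: space x time matrix, e.g. flattened Chl-a maps per day."""
    X = data.copy()
    X[mask] = np.nanmean(data[~mask])        # first guess for the gaps
    for _ in range(n_iter):
        U, s, Vt = np.linalg.svd(X, full_matrices=False)
        recon = (U[:, :n_modes] * s[:n_modes]) @ Vt[:n_modes]
        X[mask] = recon[mask]                # update only the gaps
    return X

rng = np.random.default_rng(4)
truth = np.outer(rng.random(200), rng.random(60))   # toy space x time field
mask = rng.random(truth.shape) < 0.4                # 40% missing (clouds)
filled = dineof(np.where(mask, np.nan, truth), mask)
print(f"gap RMSE: {np.sqrt(np.mean((filled[mask] - truth[mask])**2)):.4f}")
```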

31 pages, 9234 KB  
Article
A Dual-Branch Framework Integrating the Segment Anything Model and Semantic-Aware Network for High-Resolution Cropland Extraction
by Dujuan Zhang, Yiping Li, Yucai Shen, Hengliang Guo, Haitao Wei, Jian Cui, Gang Wu, Tian He, Lingling Wang, Xiangdong Liu and Shan Zhao
Remote Sens. 2025, 17(20), 3424; https://doi.org/10.3390/rs17203424 - 13 Oct 2025
Abstract
Accurate spatial information of cropland is crucial for precision agricultural management and ensuring national food security. High-resolution remote sensing imagery combined with deep learning algorithms provides a promising approach for extracting detailed cropland information. However, due to the diverse morphological characteristics of croplands across different agricultural landscapes, existing deep learning methods encounter challenges in precise boundary localization. The advancement of large-scale vision models has led to the emergence of the Segment Anything Model (SAM), which has demonstrated remarkable performance on natural images and attracted considerable attention in the field of remote sensing image segmentation. However, when applied to high-resolution cropland extraction, SAM faces limitations in semantic expressiveness and cross-domain adaptability. To address these issues, this study proposes a dual-branch framework integrating SAM and a semantic-aware network (SAM-SANet) for high-resolution cropland extraction. Specifically, a semantic-aware branch based on a semantic segmentation network identifies cropland areas, complemented by a boundary-constrained SAM branch that directs the model’s attention to boundary information and enhances cropland extraction performance. Additionally, a boundary-aware feature fusion module and a prompt generation and selection module are incorporated into the SAM branch for precise cropland boundary localization. The former aggregates multi-scale edge information to enhance boundary representation, while the latter generates prompts with high relevance to the boundary. To evaluate the effectiveness of the proposed approach, we constructed three cropland datasets, named GID-CD, JY-CD and QX-CD. Experimental results on these datasets demonstrated that SAM-SANet achieved mIoU scores of 87.58%, 91.17% and 71.39%, along with mF1 scores of 93.54%, 95.35% and 82.21%, respectively. Comparative experiments with mainstream semantic segmentation models further confirmed the superior performance of SAM-SANet in high-resolution cropland extraction.
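The mIoU and mF1 scores reported above can be computed from a per-class confusion matrix; a short sketch:

```python
import numpy as np

def miou_mf1(conf: np.ndarray):
    """Mean IoU and mean F1 from an (n_class x n_class) confusion matrix
    whose rows are ground truth and columns are predictions."""
    tp = np.diag(conf).astype(float)
    fp = conf.sum(axis=0) - tp
    fn = conf.sum(axis=1) - tp
    iou = tp / (tp + fp + fn)
    f1 = 2 * tp / (2 * tp + fp + fn)
    return iou.mean(), f1.mean()

# Toy 2-class (background / cropland) confusion matrix
conf = np.array([[900, 50],
                 [40, 800]])
print("mIoU %.3f, mF1 %.3f" % miou_mf1(conf))
```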

22 pages, 7596 KB  
Article
Orthographic Video Map Generation Considering 3D GIS View Matching
by Xingguo Zhang, Xiangfei Meng, Li Zhang, Xianguo Ling and Sen Yang
ISPRS Int. J. Geo-Inf. 2025, 14(10), 398; https://doi.org/10.3390/ijgi14100398 - 13 Oct 2025
Abstract
Converting tower-mounted videos from perspective to orthographic view is beneficial for their integration with maps and remote sensing images and can provide a clearer and more real-time data source for earth observation. This paper addresses the issue of low geometric accuracy in orthographic video generation by proposing a method that incorporates 3D GIS view matching. Firstly, a geometric alignment model between video frames and 3D GIS views is established through camera parameter mapping. Then, feature point detection and matching algorithms are employed to associate image coordinates with corresponding 3D spatial coordinates. Finally, an orthographic video map is generated based on the colored point cloud. The results show that (1) for tower-based video, a 3D GIS constructed from publicly available DEMs and high-resolution remote sensing imagery can meet the spatialization needs of large-scale tower-mounted video data; (2) the deep-learning-based feature point matching algorithm achieves accurate matching between video frames and 3D GIS views; and (3) compared with traditional approaches such as the camera-parameter method, the orthographic video map generated by this method offers better geometric mapping accuracy and visualization quality. In the mountainous area, the RMSE of the control points is reduced from 137.70 m to 7.72 m; in the flat area, it is reduced from 13.52 m to 8.10 m. The proposed method can provide a near-real-time orthographic video map for smart cities, natural resource monitoring, emergency rescue, and other fields.
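The paper matches features with a deep-learning method; as a rough stand-in, a classical OpenCV ORB + brute-force matching sketch on synthetic images illustrates the frame-to-view correspondence step:

```python
import cv2
import numpy as np

# Synthetic stand-ins: a rendered "3D GIS view" and a shifted "video frame"
rng = np.random.default_rng(6)
gis_view = np.zeros((480, 640), dtype=np.uint8)
for _ in range(40):  # scatter bright blocks to give ORB corners to find
    x, y = int(rng.integers(0, 560)), int(rng.integers(0, 400))
    cv2.rectangle(gis_view, (x, y), (x + 50, y + 50),
                  int(rng.integers(80, 255)), -1)
frame = np.roll(gis_view, shift=(12, -20), axis=(0, 1))  # simulated offset

orb = cv2.ORB_create(nfeatures=1000)
kp1, des1 = orb.detectAndCompute(frame, None)
kp2, des2 = orb.detectAndCompute(gis_view, None)

matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
matches = sorted(matcher.match(des1, des2), key=lambda m: m.distance)

# Each match links frame pixel coordinates to view coordinates, which the
# 3D GIS can in turn map to ground (3D spatial) coordinates
pts_frame = np.float32([kp1[m.queryIdx].pt for m in matches[:50]])
pts_view = np.float32([kp2[m.trainIdx].pt for m in matches[:50]])
print(f"{len(matches)} candidate correspondences")
```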

27 pages, 6909 KB  
Article
Comparative Analysis of Deep Learning and Traditional Methods for High-Resolution Cropland Extraction with Different Training Data Characteristics
by Dujuan Zhang, Xiufang Zhu, Yaozhong Pan, Hengliang Guo, Qiannan Li and Haitao Wei
Land 2025, 14(10), 2038; https://doi.org/10.3390/land14102038 - 13 Oct 2025
Abstract
High-resolution remote sensing (HRRS) imagery enables the extraction of cropland information with high levels of detail, especially when combined with the impressive performance of deep convolutional neural networks (DCNNs) in understanding these images. Comprehending the factors influencing DCNNs’ performance in HRRS cropland extraction is of considerable importance for practical agricultural monitoring applications. This study investigates the impact of classifier selection and different training data characteristics on HRRS cropland classification outcomes. Specifically, Gaofen-1 composite images with 2 m spatial resolution are employed for HRRS cropland extraction, and two county-wide regions with distinct agricultural landscapes in Shandong Province, China, are selected as the study areas. The performance of two deep learning (DL) algorithms (UNet and DeepLabv3+) and a traditional classification algorithm, Object-Based Image Analysis with Random Forest (OBIA-RF), is compared. Additionally, the effects of different band combinations, crop growth stages, and class mislabeling on classification accuracy are evaluated. The results demonstrated that the UNet and DeepLabv3+ models outperformed OBIA-RF in both simple and complex agricultural landscapes and were insensitive to changes in band combinations, indicating their ability to learn abstract features and contextual semantic information for HRRS cropland extraction. Moreover, compared with the DL models, OBIA-RF was more sensitive to changes in temporal characteristics. The performance of all three models was unaffected when the mislabeling error ratio remained below 5%. Beyond this threshold, the performance of all models decreased, with UNet and DeepLabv3+ showing similar decline trends and OBIA-RF suffering a more drastic reduction. Furthermore, the DL models exhibited relatively low sensitivity to the patch size of sample blocks and to data augmentation. These findings can facilitate the design of operational implementations for practical applications.
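The mislabeling experiment can be emulated by flipping a fraction of training labels before fitting; a sketch with synthetic data and a random-forest stand-in for the classifiers compared above:

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=2000, n_features=10, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

rng = np.random.default_rng(0)
for ratio in [0.0, 0.05, 0.10, 0.20]:          # mislabeling error ratios
    y_noisy = y_tr.copy()
    flip = rng.random(len(y_noisy)) < ratio    # flip this fraction of labels
    y_noisy[flip] = 1 - y_noisy[flip]
    acc = RandomForestClassifier(random_state=0).fit(X_tr, y_noisy).score(X_te, y_te)
    print(f"noise {ratio:.0%}: accuracy {acc:.3f}")
```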

20 pages, 5086 KB  
Article
A Multi-Modal Attention Fusion Framework for Road Connectivity Enhancement in Remote Sensing Imagery
by Yongqi Yuan, Yong Cheng, Bo Pan, Ge Jin, De Yu, Mengjie Ye and Qian Zhang
Mathematics 2025, 13(20), 3266; https://doi.org/10.3390/math13203266 - 13 Oct 2025
Abstract
Ensuring the structural continuity and completeness of road networks in high-resolution remote sensing imagery remains a major challenge for current deep learning methods, especially under conditions of occlusion caused by vegetation, buildings, or shadows. To address this, we propose a novel post-processing enhancement framework that improves the connectivity and accuracy of initial road extraction results produced by any segmentation model. The method employs a dual-stream encoder architecture, which jointly processes RGB images and preliminary road masks to obtain complementary spatial and semantic information. A core component is the MAF (Multi-Modal Attention Fusion) module, designed to capture fine-grained, long-range, and cross-scale dependencies between image and mask features. This fusion leads to the restoration of fragmented road segments, the suppression of noise, and overall improvement in road completeness. Experiments on benchmark datasets (DeepGlobe and Massachusetts) demonstrate substantial gains in precision, recall, F1-score, and mIoU, confirming the framework’s effectiveness and generalization ability in real-world scenarios.
(This article belongs to the Special Issue Mathematical Methods for Machine Learning and Computer Vision)
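A schematic PyTorch sketch of the dual-stream idea, with a simple attention-weighted fusion standing in for the MAF module (the actual module is considerably more elaborate):

```python
import torch
import torch.nn as nn

class DualStreamFusion(nn.Module):
    """Toy dual-stream encoder: RGB image + preliminary road mask."""
    def __init__(self, ch=32):
        super().__init__()
        self.rgb_enc = nn.Sequential(nn.Conv2d(3, ch, 3, padding=1), nn.ReLU())
        self.mask_enc = nn.Sequential(nn.Conv2d(1, ch, 3, padding=1), nn.ReLU())
        self.attn = nn.Sequential(nn.Conv2d(2 * ch, ch, 1), nn.Sigmoid())
        self.head = nn.Conv2d(ch, 1, 1)              # refined road probability

    def forward(self, rgb, mask):
        f_rgb, f_mask = self.rgb_enc(rgb), self.mask_enc(mask)
        gate = self.attn(torch.cat([f_rgb, f_mask], dim=1))
        fused = gate * f_rgb + (1 - gate) * f_mask    # attention-weighted mix
        return torch.sigmoid(self.head(fused))

net = DualStreamFusion()
out = net(torch.randn(1, 3, 256, 256), torch.randn(1, 1, 256, 256))
print(out.shape)  # torch.Size([1, 1, 256, 256])
```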

22 pages, 8737 KB  
Article
UAV-Based Multispectral Imagery for Area-Wide Sustainable Tree Risk Management
by Kinga Mazurek, Łukasz Zając, Marzena Suchocka, Tomasz Jelonek, Adam Juźwiak and Marcin Kubus
Sustainability 2025, 17(19), 8908; https://doi.org/10.3390/su17198908 - 7 Oct 2025
Abstract
The responsibility for risk assessment and user safety in forested and recreational areas lies with the property owner. This study shows that unmanned aerial vehicles (UAVs), combined with remote sensing and GIS analysis, effectively support the identification of high-risk trees, particularly those with reduced structural stability. UAV-based surveys successfully detected 78% of dead or declining trees identified during ground inspections, while significantly reducing labor and enabling large-area assessments within a short timeframe. The study covered an area of 6.69 ha with 51 reference trees assessed on the ground. Although the multispectral camera also recorded the red-edge band, it was not included in the present analysis. Compared to traditional ground-based surveys, the UAV-based approach reduced fieldwork time by approx. 20–30% and labor costs by approx. 15–20%. Orthomosaics generated from images captured by commercial multispectral drones (e.g., DJI Mavic 3 Multispectral) provide essential information on tree condition, especially mortality indicators. UAV data collection is fast and relatively low-cost but requires equipment capable of capturing high-resolution imagery in specific spectral bands, particularly near-infrared (NIR). The findings suggest that UAV-based monitoring can enhance the efficiency of large-scale inspections. However, ground-based verification remains necessary in high-traffic areas where safety is critical. Integrating UAV technologies with GIS supports the development of risk management strategies aligned with the principles of precision forestry, enabling sustainable, more proactive and efficient monitoring of tree-related hazards.
(This article belongs to the Section Sustainable Forestry)
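A common proxy for the mortality indicators mentioned above is low NDVI computed from the red and NIR bands; a minimal sketch with toy reflectance tiles (the 0.3 threshold is illustrative only):

```python
import numpy as np

def ndvi(nir: np.ndarray, red: np.ndarray) -> np.ndarray:
    """Normalized difference vegetation index."""
    return (nir - red) / np.clip(nir + red, 1e-6, None)

# Toy reflectance tiles standing in for orthomosaic bands
rng = np.random.default_rng(5)
red = rng.uniform(0.02, 0.15, size=(100, 100))
nir = rng.uniform(0.05, 0.50, size=(100, 100))

index = ndvi(nir, red)
declining = index < 0.3          # illustrative "dead/declining" threshold
print(f"{declining.mean():.1%} of pixels flagged for ground inspection")
```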

23 pages, 18068 KB  
Article
Vegetation Classification and Extraction of Urban Green Spaces Within the Fifth Ring Road of Beijing Based on YOLO v8
by Bin Li, Xiaotian Xu, Yingrui Duan, Hongyu Wang, Xu Liu, Yuxiao Sun, Na Zhao, Shaoning Li and Shaowei Lu
Land 2025, 14(10), 2005; https://doi.org/10.3390/land14102005 - 6 Oct 2025
Abstract
Real-time, accurate and detailed monitoring of urban green space is of great significance for constructing the urban ecological environment and maximizing ecological benefits. Although high-resolution remote sensing technology provides rich ground object information, it also makes the surface information of urban green spaces more complex. Existing classification methods often struggle to meet the requirements of classification accuracy and the automation demands of high-resolution images. This study utilized GF-7 remote sensing imagery to construct an urban green space classification method for Beijing. Using the YOLO v8 model as the framework, the study conducted a fine classification of urban green spaces within the Fifth Ring Road of Beijing, distinguishing between evergreen trees, deciduous trees, shrubs and grasslands. The aims were to address the limitations of insufficient model fit and coarse-grained classifications in existing studies, and to improve vegetation extraction accuracy for green spaces in northern temperate cities (with Beijing as a typical example). The results show that the overall classification accuracy of the trained YOLO v8 model is 89.60%, which is 25.3% and 28.8% higher than that of traditional machine learning methods such as Maximum Likelihood and Support Vector Machine, respectively. The model achieved extraction accuracies of 92.92%, 93.40%, 87.67%, and 93.34% for evergreen trees, deciduous trees, shrubs, and grasslands, respectively. This result confirms that the combination of deep learning and high-resolution remote sensing images can effectively enhance the classification and extraction of urban green space vegetation, providing technical support and a data foundation for the refined management of green spaces and “garden cities” in megacities such as Beijing.
(This article belongs to the Special Issue Vegetation Cover Changes Monitoring Using Remote Sensing Data)
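With the open-source ultralytics package, fine-tuning a YOLO v8 segmentation model on a custom green-space dataset looks roughly like this; the dataset YAML, checkpoint choice, and hyperparameters are placeholders, not the study's configuration:

```python
from ultralytics import YOLO

# Start from a pretrained YOLO v8 segmentation checkpoint
model = YOLO("yolov8n-seg.pt")

# greenspace.yaml (hypothetical) would list image paths and the four
# classes: evergreen trees, deciduous trees, shrubs, grasslands
model.train(data="greenspace.yaml", epochs=100, imgsz=640)

metrics = model.val()                    # per-class precision/recall, mAP
results = model.predict("gf7_tile.tif")  # inference on an image tile
```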

22 pages, 5361 KB  
Article
LMVMamba: A Hybrid U-Shape Mamba for Remote Sensing Segmentation with Adaptation Fine-Tuning
by Fan Li, Xiao Wang, Haochen Wang, Hamed Karimian, Juan Shi and Guozhen Zha
Remote Sens. 2025, 17(19), 3367; https://doi.org/10.3390/rs17193367 - 5 Oct 2025
Abstract
High-precision semantic segmentation of remote sensing imagery is crucial in geospatial analysis and plays an indispensable role in fields such as urban governance, environmental monitoring, and natural resource management. However, when confronted with complex objects (such as winding roads and dispersed buildings), existing semantic segmentation methods still suffer from inadequate target recognition and multi-scale representation. This paper proposes a neural network model, LMVMamba (LoRA Multi-scale Vision Mamba), for semantic segmentation of remote sensing images. The model integrates the advantages of convolutional neural networks (CNNs), Transformers, and state-space models (Mamba) with a multi-scale feature fusion strategy, simultaneously capturing global contextual information and fine-grained local features. Specifically, in the encoder stage, the ResT Transformer serves as the backbone network, employing a LoRA fine-tuning strategy that effectively enhances model accuracy by training only the introduced low-rank matrix pairs. The extracted features are then passed to the decoder, where a U-shaped Mamba decoder is designed. In this stage, a Multi-Scale Post-processing Block (MPB) is introduced, consisting of depthwise separable convolutions and residual concatenation. This block effectively extracts multi-scale features and enhances local detail extraction after the VSS block. Additionally, a Local Enhancement and Fusion Attention Module (LAS) is added at the end of each decoder block. LAS integrates the SimAM attention mechanism, further enhancing the model’s multi-scale feature fusion and local detail segmentation capabilities. Extensive comparative experiments show that LMVMamba achieves superior performance on the OpenEarthMap (mIoU 52.3%, OA 69.8%, mF1 68.0%) and LoveDA (mIoU 67.9%, OA 80.3%, mF1 80.5%) datasets. Ablation experiments validated the effectiveness of each module. The final results indicate that this model is highly suitable for high-precision land-cover classification tasks in remote sensing imagery. LMVMamba provides an effective solution for precise semantic segmentation of high-resolution remote sensing imagery.
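The LoRA strategy mentioned above trains only a low-rank update on top of frozen pretrained weights; a minimal PyTorch sketch for a single linear layer (rank and scaling are illustrative):

```python
import torch
import torch.nn as nn

class LoRALinear(nn.Module):
    """Frozen linear layer plus a trainable low-rank update (LoRA)."""
    def __init__(self, base: nn.Linear, rank=8, alpha=16):
        super().__init__()
        self.base = base
        for p in self.base.parameters():
            p.requires_grad = False          # freeze pretrained weights
        self.A = nn.Parameter(torch.randn(rank, base.in_features) * 0.01)
        self.B = nn.Parameter(torch.zeros(base.out_features, rank))
        self.scale = alpha / rank

    def forward(self, x):
        return self.base(x) + self.scale * (x @ self.A.T @ self.B.T)

layer = LoRALinear(nn.Linear(256, 256))
out = layer(torch.randn(4, 256))             # same interface as nn.Linear
trainable = sum(p.numel() for p in layer.parameters() if p.requires_grad)
print(out.shape, f"trainable params: {trainable}")  # 2 * 8 * 256 = 4096
```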

27 pages, 5542 KB  
Article
ILF-BDSNet: A Compressed Network for SAR-to-Optical Image Translation Based on Intermediate-Layer Features and Bio-Inspired Dynamic Search
by Yingying Kong and Cheng Xu
Remote Sens. 2025, 17(19), 3351; https://doi.org/10.3390/rs17193351 - 1 Oct 2025
Abstract
Synthetic aperture radar (SAR) exhibits all-day and all-weather imaging capabilities, making it of significant value in remote sensing applications. However, interpreting SAR images requires extensive expertise, making SAR-to-optical remote sensing image translation a crucial research direction. While conditional generative adversarial networks (CGANs) have demonstrated exceptional performance in image translation tasks, their massive number of parameters poses substantial challenges. Therefore, this paper proposes ILF-BDSNet, a compressed network for SAR-to-optical image translation. Specifically, standard convolutions in the feature-transformation module of the teacher network are first replaced with depthwise separable convolutions to construct the student network, and a dual-resolution collaborative discriminator based on PatchGAN is proposed. Next, knowledge distillation based on intermediate-layer features and channel pruning via weight sharing are designed to train the student network. Then, the bio-inspired dynamic search of channel configuration (BDSCC) algorithm is proposed to efficiently select the optimal subnet. Meanwhile, a pixel-semantic dual-domain alignment loss function is designed, whose feature-matching loss establishes an alignment mechanism based on intermediate-layer features from the discriminator. Extensive experiments demonstrate the superiority of ILF-BDSNet, which significantly reduces the number of parameters and computational complexity while still generating high-quality optical images, providing an efficient solution for SAR image translation in resource-constrained environments.
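The intermediate-layer feature distillation can be sketched as an MSE between matched teacher and student feature maps, with a 1 × 1 convolution adapting channel widths; dimensions are illustrative:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

# Toy intermediate feature maps from teacher and (narrower) student
teacher_feat = torch.randn(2, 256, 64, 64)   # frozen teacher activations
student_feat = torch.randn(2, 128, 64, 64, requires_grad=True)

# 1x1 conv projects student channels to the teacher's width
adapter = nn.Conv2d(128, 256, kernel_size=1)

distill_loss = F.mse_loss(adapter(student_feat), teacher_feat.detach())
distill_loss.backward()                      # gradients flow to student only
print(f"feature-matching loss: {distill_loss.item():.4f}")
```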
