Search Results (181)

Search Parameters:
Keywords = VHR imagery

19 pages, 9284 KiB  
Article
UAV-YOLO12: A Multi-Scale Road Segmentation Model for UAV Remote Sensing Imagery
by Bingyan Cui, Zhen Liu and Qifeng Yang
Drones 2025, 9(8), 533; https://doi.org/10.3390/drones9080533 - 29 Jul 2025
Viewed by 413
Abstract
Unmanned aerial vehicles (UAVs) are increasingly used for road infrastructure inspection and monitoring. However, challenges such as scale variation, complex background interference, and the scarcity of annotated UAV datasets limit the performance of traditional segmentation models. To address these challenges, this study proposes UAV-YOLOv12, a multi-scale segmentation model specifically designed for UAV-based road imagery analysis. The proposed model builds on the YOLOv12 architecture by adding two key modules. It uses a Selective Kernel Network (SKNet) to adjust receptive fields dynamically and a Partial Convolution (PConv) module to improve spatial focus and robustness in occluded regions. These enhancements help the model better detect small and irregular road features in complex aerial scenes. Experimental results on a custom UAV dataset collected from national highways in Wuxi, China, show that UAV-YOLOv12 achieves F1-scores of 0.902 for highways (road-H) and 0.825 for paths (road-P), outperforming the original YOLOv12 by 5% and 3.2%, respectively. Inference speed is maintained at 11.1 ms per image, supporting near real-time performance. Moreover, comparative evaluations show that UAV-YOLOv12 improves on U-Net by 7.1% and 9.5%, respectively. The model also exhibits strong generalization ability, achieving F1-scores above 0.87 on public datasets such as VHR-10 and the Drone Vehicle dataset. These results demonstrate that the proposed UAV-YOLOv12 achieves high accuracy and robustness across diverse road environments and object scales.
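The abstract does not spell out the module internals, but the selective-kernel idea it builds on can be illustrated with a minimal PyTorch sketch: parallel convolutions with different kernel sizes whose outputs are re-weighted by a learned, softmax-normalized attention vector. Layer sizes, the two-branch choice, and the demo shapes below are illustrative assumptions, not the authors' code.

```python
# Minimal selective-kernel fusion block, loosely following the SKNet design.
# Illustrative sketch only; not the UAV-YOLOv12 implementation.
import torch
import torch.nn as nn


class SelectiveKernelBlock(nn.Module):
    def __init__(self, channels: int, reduction: int = 8):
        super().__init__()
        # Two parallel branches with different receptive fields (3x3 and 5x5).
        self.branch3 = nn.Conv2d(channels, channels, 3, padding=1)
        self.branch5 = nn.Conv2d(channels, channels, 5, padding=2)
        hidden = max(channels // reduction, 8)
        # Squeeze: global context -> compact descriptor.
        self.fc = nn.Sequential(nn.Linear(channels, hidden), nn.ReLU(inplace=True))
        # One attention head per branch; softmax over branches selects kernels.
        self.attn3 = nn.Linear(hidden, channels)
        self.attn5 = nn.Linear(hidden, channels)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        u3, u5 = self.branch3(x), self.branch5(x)
        s = (u3 + u5).mean(dim=(2, 3))                           # global average pooling
        z = self.fc(s)
        a = torch.stack([self.attn3(z), self.attn5(z)], dim=1)   # (B, 2, C)
        a = torch.softmax(a, dim=1)[..., None, None]             # per-channel branch weights
        return a[:, 0] * u3 + a[:, 1] * u5


if __name__ == "__main__":
    feat = torch.randn(2, 64, 80, 80)                # dummy UAV feature map
    print(SelectiveKernelBlock(64)(feat).shape)      # torch.Size([2, 64, 80, 80])
```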

20 pages, 6074 KiB  
Article
Remote Sensing Archaeology of the Xixia Imperial Tombs: Analyzing Burial Landscapes and Geomantic Layouts
by Wei Ji, Li Li, Jia Yang, Yuqi Hao and Lei Luo
Remote Sens. 2025, 17(14), 2395; https://doi.org/10.3390/rs17142395 - 11 Jul 2025
Viewed by 545
Abstract
The Xixia Imperial Tombs (XITs) represent a crucial, yet still largely mysterious, component of the Tangut civilization’s legacy. Located in northwestern China, this extensive necropolis offers invaluable insights into the Tangut state, culture, and burial practices. This study employs an integrated approach utilizing multi-resolution and multi-temporal satellite remote sensing data, including Gaofen-2 (GF-2), Landsat-8 OLI, declassified GAMBIT imagery, and Google Earth, combined with deep learning techniques, to conduct a comprehensive archaeological investigation of the XITs’ burial landscape. We performed geomorphological analysis of the surrounding environment and automated identification and mapping of burial mounds and mausoleum features using YOLOv5, complemented by manual interpretation of very-high-resolution (VHR) satellite imagery. Spectral indices and image fusion techniques were applied to enhance the detection of archaeological features. Our findings demonstrated the efficacy of this combined methodology for archaeological prospection, providing valuable insights into the spatial layout, geomantic considerations, and preservation status of the XITs. Notably, the analysis of declassified GAMBIT imagery facilitated the identification of a suspected true location for the ninth imperial tomb (M9), a significant contribution to understanding Xixia history through remote sensing archaeology. This research provides a replicable framework for the detection and preservation of archaeological sites using readily available satellite data, underscoring the power of advanced remote sensing and machine learning in heritage studies.
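The abstract names spectral indices and image fusion without specifying the algorithms; one common fusion option, a Brovey-style pan-sharpening of a resampled multispectral stack with a co-registered panchromatic band, can be sketched as follows in NumPy. The band count and array shapes are assumptions.

```python
# Brovey-transform pan-sharpening sketch (one of many possible fusion methods;
# the paper does not state which technique was used).
import numpy as np


def brovey_pansharpen(ms: np.ndarray, pan: np.ndarray, eps: float = 1e-6) -> np.ndarray:
    """ms: (bands, H, W) multispectral resampled to the pan grid; pan: (H, W)."""
    intensity = ms.mean(axis=0)                       # simple intensity estimate
    ratio = pan / (intensity + eps)                   # per-pixel sharpening ratio
    return ms * ratio[None, :, :]                     # rescale every band


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    ms = rng.random((4, 128, 128))                    # e.g. blue/green/red/NIR bands
    pan = rng.random((128, 128))                      # co-registered panchromatic band
    fused = brovey_pansharpen(ms, pan)
    print(fused.shape)                                # (4, 128, 128)
```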

20 pages, 13476 KiB  
Article
Monitoring Pine Wilt Disease Using High-Resolution Satellite Remote Sensing at the Single-Tree Scale with Integrated Self-Attention
by Wenhao Lv, Junhao Zhao and Jixia Huang
Remote Sens. 2025, 17(13), 2197; https://doi.org/10.3390/rs17132197 - 26 Jun 2025
Viewed by 381
Abstract
Pine wilt disease has caused severe damage to China’s forest ecosystems. Utilizing the rich information from very-high-resolution (VHR) satellite imagery for large-scale and accurate monitoring of pine wilt disease is a crucial approach to curbing its spread. However, current research on identifying infected trees using VHR satellite imagery and deep learning remains extremely limited. This study introduces several advanced self-attention algorithms into the task of satellite-based monitoring of pine wilt disease to enhance detection performance. We constructed a dataset of discolored pine trees affected by pine wilt disease using imagery from the Gaofen-2 and Gaofen-7 satellites. Within the unified semantic segmentation framework MMSegmentation, we implemented four single-head attention models—NLNet, CCNet, DANet, and GCNet—and two multi-head attention models—Swin Transformer and SegFormer—for the accurate semantic segmentation of infected trees. The model predictions were further analyzed through visualization. The results demonstrate that introducing appropriate self-attention algorithms significantly improves detection accuracy for pine wilt disease. Among the single-head attention models, DANet achieved the highest accuracy, reaching 73.35%. The multi-head attention models exhibited excellent performance, with SegFormer-b2 achieving an accuracy of 76.39%, learning the features of discolored pine trees at the earliest stage and converging faster. The visualization of model inference results indicates that DANet, which integrates convolutional neural networks (CNNs) with self-attention mechanisms, achieved the highest overall accuracy at 94.43%. The use of self-attention algorithms enables models to extract more precise morphological features of discolored pine trees, enhancing user's accuracy while potentially reducing producer's accuracy.
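None of the attention modules are detailed in the abstract, but the single-head spatial self-attention that networks such as NLNet and DANet insert into a CNN backbone can be sketched roughly as below in PyTorch; the channel-reduction factor and residual weighting are assumptions.

```python
# Rough sketch of single-head spatial self-attention (position-attention style),
# the kind of module NLNet/DANet-type networks add to a CNN backbone.
import torch
import torch.nn as nn


class SpatialSelfAttention(nn.Module):
    def __init__(self, channels: int, reduction: int = 8):
        super().__init__()
        inner = max(channels // reduction, 1)
        self.query = nn.Conv2d(channels, inner, 1)
        self.key = nn.Conv2d(channels, inner, 1)
        self.value = nn.Conv2d(channels, channels, 1)
        self.gamma = nn.Parameter(torch.zeros(1))     # learnable residual weight

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        b, c, h, w = x.shape
        q = self.query(x).flatten(2).transpose(1, 2)  # (B, HW, C')
        k = self.key(x).flatten(2)                    # (B, C', HW)
        attn = torch.softmax(q @ k, dim=-1)           # (B, HW, HW) pairwise weights
        v = self.value(x).flatten(2)                  # (B, C, HW)
        out = (v @ attn.transpose(1, 2)).view(b, c, h, w)
        return self.gamma * out + x                   # residual connection


if __name__ == "__main__":
    x = torch.randn(1, 32, 64, 64)                    # dummy satellite feature map
    print(SpatialSelfAttention(32)(x).shape)          # torch.Size([1, 32, 64, 64])
```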

18 pages, 4309 KiB  
Article
OMRoadNet: A Self-Training-Based UDA Framework for Open-Pit Mine Haul Road Extraction from VHR Imagery
by Suchuan Tian, Zili Ren, Xingliang Xu, Zhengxiang He, Wanan Lai, Zihan Li and Yuhang Shi
Appl. Sci. 2025, 15(12), 6823; https://doi.org/10.3390/app15126823 - 17 Jun 2025
Viewed by 384
Abstract
Accurate extraction of dynamically evolving haul roads in open-pit mines from very-high-resolution (VHR) satellite imagery remains a critical challenge due to domain gaps between urban and mining environments, prohibitive annotation costs, and morphological irregularities. This paper introduces OMRoadNet, an unsupervised domain adaptation (UDA) framework for open-pit mine road extraction, which synergizes self-training, attention-based feature disentanglement, and morphology-aware augmentation to address these challenges. The framework employs a cyclic GAN (generative adversarial network) architecture with bidirectional translation pathways, integrating pseudo-label refinement through confidence thresholds and geometric rules (eight-neighborhood connectivity and adaptive kernel resizing) to resolve domain shifts. A novel exponential moving average unit (EMAU) enhances feature robustness by adaptively weighting historical states, while morphology-aware augmentation simulates variable road widths and spectral noise. Evaluations on cross-domain datasets demonstrate state-of-the-art performance with 92.16% precision, 80.77% F1-score, and 67.75% IoU (intersection over union), outperforming baseline models by 4.3% in precision and reducing annotation dependency by 94.6%. By reducing per-kilometer operational costs by 78% relative to LiDAR (Light Detection and Ranging) alternatives, OMRoadNet establishes a practical solution for intelligent mining infrastructure mapping, bridging the critical gap between structured urban datasets and unstructured mining environments. Full article
(This article belongs to the Special Issue Novel Technologies in Intelligent Coal Mining)
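The pseudo-label refinement step (confidence thresholding plus an eight-neighborhood connectivity rule) can be approximated with a short NumPy/SciPy sketch; the threshold and minimum component size below are placeholder values, not the paper's settings.

```python
# Sketch of pseudo-label refinement: keep only confident road pixels and drop
# small components using 8-neighborhood connectivity (values are placeholders).
import numpy as np
from scipy import ndimage


def refine_pseudo_labels(prob: np.ndarray, conf_thresh: float = 0.9,
                         min_pixels: int = 64) -> np.ndarray:
    """prob: (H, W) road probability map from the source-trained model."""
    mask = prob >= conf_thresh                         # confidence thresholding
    eight_conn = np.ones((3, 3), dtype=int)            # 8-neighborhood structure
    labels, n = ndimage.label(mask, structure=eight_conn)
    if n == 0:
        return mask
    sizes = ndimage.sum(mask, labels, index=np.arange(1, n + 1))
    keep = np.isin(labels, np.flatnonzero(sizes >= min_pixels) + 1)
    return keep                                        # refined binary pseudo-label


if __name__ == "__main__":
    rng = np.random.default_rng(1)
    prob = rng.random((256, 256))
    print(refine_pseudo_labels(prob).sum(), "pixels kept")
```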

21 pages, 8280 KiB  
Article
Segmentation of Multitemporal PlanetScope Data to Improve the Land Parcel Identification System (LPIS)
by Marco Obialero and Piero Boccardo
Remote Sens. 2025, 17(12), 1962; https://doi.org/10.3390/rs17121962 - 6 Jun 2025
Viewed by 725
Abstract
The 1992 reform of the European Common Agricultural Policy (CAP) introduced the Land Parcel Identification System (LPIS), a geodatabase of land parcels used to monitor and regulate agricultural subsidies. Traditionally, the LPIS has relied on high-resolution aerial orthophotos; however, recent advancements in very-high-resolution (VHR) satellite imagery present new opportunities to enhance its effectiveness. This study explores the feasibility of utilizing PlanetScope, a commercial VHR optical satellite constellation, to map agricultural parcels within the LPIS. A test was conducted in Umbria, Italy, integrating existing datasets with a series of PlanetScope images from 2023. A segmentation workflow was designed, employing the Normalized difference Vegetation Index (NDVI) alongside the Edge segmentation method with varying sensitivity thresholds. An accuracy evaluation based on geometric metrics, comparing detected parcels with cadastral references, revealed that a 30% scale threshold yielded the most reliable results, achieving an accuracy rate of 83.3%. The results indicate that the short revisit time of PlanetScope compensates for its lower spatial resolution compared to traditional orthophotos, allowing accurate delineation of parcels. However, challenges remain in automating parcel matching and integrating alternative methods for accuracy assessment. Further research should focus on refining segmentation parameters and optimizing PlanetScope’s temporal and spectral resolution to strengthen LPIS performance, ultimately fostering more sustainable and data-driven agricultural management. Full article
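For reference, the NDVI used in the segmentation workflow is the standard red/near-infrared band ratio; a minimal NumPy version is sketched below, with dummy arrays standing in for the actual PlanetScope surface-reflectance bands.

```python
# Standard NDVI = (NIR - Red) / (NIR + Red), computed per pixel.
import numpy as np


def ndvi(nir: np.ndarray, red: np.ndarray, eps: float = 1e-6) -> np.ndarray:
    nir = nir.astype(np.float64)
    red = red.astype(np.float64)
    return (nir - red) / (nir + red + eps)


if __name__ == "__main__":
    rng = np.random.default_rng(2)
    # Dummy reflectance bands; with real PlanetScope data these would be read
    # from the NIR and red bands of the surface-reflectance product.
    nir, red = rng.random((512, 512)), rng.random((512, 512))
    index = ndvi(nir, red)
    print(float(index.min()), float(index.max()))     # values fall in roughly [-1, 1]
```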

28 pages, 2816 KiB  
Article
Enhancing Urban Understanding Through Fine-Grained Segmentation of Very-High-Resolution Aerial Imagery
by Umamaheswaran Raman Kumar, Toon Goedemé and Patrick Vandewalle
Remote Sens. 2025, 17(10), 1771; https://doi.org/10.3390/rs17101771 - 19 May 2025
Viewed by 726
Abstract
Despite the growing availability of very-high-resolution (VHR) remote sensing imagery, extracting fine-grained urban features and materials remains a complex task. Land use/land cover (LULC) maps generated from satellite imagery often fall short in providing the resolution needed for detailed urban studies. While hyperspectral imagery offers rich spectral information ideal for material classification, its complex acquisition process limits its use on aerial platforms such as manned aircraft and unmanned aerial vehicles (UAVs), reducing its feasibility for large-scale urban mapping. This study explores the potential of using only RGB and LiDAR data from VHR aerial imagery as an alternative for urban material classification. We introduce an end-to-end workflow that leverages a multi-head segmentation network to jointly classify roof and ground materials while also segmenting individual roof components. The workflow includes a multi-offset self-ensemble inference strategy optimized for aerial data and a post-processing step based on digital elevation models (DEMs). In addition, we present a systematic method for extracting roof parts as polygons enriched with material attributes. The study is conducted on six cities in Flanders, Belgium, covering 18 material classes—including rare categories such as green roofs, wood, and glass. The results show a 9.88% improvement in mean intersection over union (mIOU) for building and ground segmentation, and a 3.66% increase in mIOU for material segmentation compared to a baseline pyramid attention network (PAN). These findings demonstrate the potential of RGB and LiDAR data for high-resolution material segmentation in urban analysis. Full article
(This article belongs to the Special Issue Applications of AI and Remote Sensing in Urban Systems II)
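The multi-offset self-ensemble inference described above amounts to predicting on several shifted copies of a tile, re-aligning the outputs, and averaging them; a simplified sketch is shown below, with torch.roll and a single convolution standing in for the real offsetting logic and segmentation network.

```python
# Simplified multi-offset self-ensemble: predict on shifted copies of the input,
# shift the predictions back, and average. Offsets and the model are placeholders,
# and torch.roll (which wraps at the edges) only approximates true offset cropping.
import torch
import torch.nn as nn


def multi_offset_inference(model: nn.Module, image: torch.Tensor,
                           offsets=((0, 0), (16, 0), (0, 16), (16, 16))) -> torch.Tensor:
    """image: (B, C, H, W); returns averaged class probabilities."""
    model.eval()
    preds = []
    with torch.no_grad():
        for dy, dx in offsets:
            shifted = torch.roll(image, shifts=(dy, dx), dims=(2, 3))
            logits = model(shifted)
            # Undo the shift so predictions align with the original pixels.
            preds.append(torch.roll(logits.softmax(dim=1), shifts=(-dy, -dx), dims=(2, 3)))
    return torch.stack(preds).mean(dim=0)


if __name__ == "__main__":
    toy_model = nn.Conv2d(4, 18, kernel_size=3, padding=1)     # stand-in for the real network
    rgb_lidar = torch.randn(1, 4, 128, 128)                    # e.g. RGB + normalized DSM
    print(multi_offset_inference(toy_model, rgb_lidar).shape)  # torch.Size([1, 18, 128, 128])
```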

18 pages, 3261 KiB  
Article
Exploring Burnt Area Delineation with Cross-Resolution Mapping: A Case Study of Very High and Medium-Resolution Data
by Sai Balakavi, Vineet Vadrevu and Kristofer Lasko
Sensors 2025, 25(10), 3009; https://doi.org/10.3390/s25103009 - 10 May 2025
Viewed by 543
Abstract
Remote sensing is essential for mapping and monitoring burnt areas. Integrating Very High-Resolution (VHR) data with medium-resolution datasets like Landsat and deep learning algorithms can enhance mapping accuracy. This study employs two deep learning algorithms, UNET and Gated Recurrent Unit (GRU), to classify burnt areas in the Bandipur Forest, Karnataka, India. We explore using VHR imagery with limited samples to train models on Landsat imagery for burnt area delineation. Four models were analyzed: (a) custom UNET with Landsat labels, (b) custom UNET with PlanetScope-labeled data on Landsat, (c) custom UNET-GRU with Landsat labels, and (d) custom UNET-GRU with PlanetScope-labeled data on Landsat. Custom UNET with Landsat labels achieved the best performance, excelling in precision (0.89), accuracy (0.98), and segmentation quality (Mean IOU: 0.65, Dice Coefficient: 0.78). Using PlanetScope labels resulted in slightly lower performance, but the high recall (0.87 for UNET-GRU) demonstrates their potential for identifying positive instances. In this study, we highlight the potential and limitations of integrating VHR with medium-resolution satellite data for burnt area delineation using deep learning.
(This article belongs to the Section Environmental Sensing)
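The segmentation-quality metrics quoted above (mean IoU and Dice coefficient) follow the usual binary definitions; a small NumPy helper that computes both for a burnt-area mask is sketched below.

```python
# Standard binary IoU and Dice coefficient for a predicted burnt-area mask.
import numpy as np


def iou_and_dice(pred: np.ndarray, truth: np.ndarray, eps: float = 1e-6):
    pred, truth = pred.astype(bool), truth.astype(bool)
    inter = np.logical_and(pred, truth).sum()
    union = np.logical_or(pred, truth).sum()
    iou = inter / (union + eps)
    dice = 2 * inter / (pred.sum() + truth.sum() + eps)
    return float(iou), float(dice)


if __name__ == "__main__":
    rng = np.random.default_rng(3)
    pred = rng.random((256, 256)) > 0.5
    truth = rng.random((256, 256)) > 0.5
    print(iou_and_dice(pred, truth))   # roughly (0.33, 0.50) for independent random masks
```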

22 pages, 8296 KiB  
Article
Urban Sprawl Monitoring by VHR Images Using Active Contour Loss and Improved U-Net with Mix Transformer Encoders
by Miguel Chicchon, Francesca Colosi, Eva Savina Malinverni and Francisco James León Trujillo
Remote Sens. 2025, 17(9), 1593; https://doi.org/10.3390/rs17091593 - 30 Apr 2025
Viewed by 557
Abstract
Monitoring the variation of urban expansion is crucial for sustainable urban planning and cultural heritage management. This paper proposes an approach for the semantic segmentation of very-high-resolution (VHR) satellite imagery to detect the changes in urban sprawl in the surroundings of Chan Chan, a UNESCO World Heritage Site in Peru. This study explores the effectiveness of combining Mix Transformer encoders with U-Net architectures to improve feature extraction and spatial context understanding in VHR satellite imagery. The integration of active contour loss functions further enhances the model’s ability to delineate complex urban boundaries, addressing the challenges posed by the heterogeneous landscape surrounding the archaeological complex of Chan Chan. The results demonstrate that the proposed approach achieves accurate semantic segmentation on images of the study area from different years. Quantitative results showed that the U-Net-scse model with an MiTB5 encoder achieved the best performance with respect to SegFormer and FT-UNet-Former, with IoU scores of 0.8288 on OpenEarthMap and 0.6743 on Chan Chan images. Qualitative analysis revealed the model’s effectiveness in segmenting buildings across diverse urban and rural environments in Peru. Utilizing this approach for monitoring urban expansion over time can enable managers to make informed decisions aimed at preserving cultural heritage and promoting sustainable urban development. Full article
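The active contour loss referenced above combines a contour-length penalty on the predicted mask with a region term against the ground truth; a simplified PyTorch sketch in that spirit is given below (the exact terms and weights used by the authors are not stated in the abstract).

```python
# Simplified active-contour-style segmentation loss: a length (smoothness) term on
# the predicted probability map plus a region term against the binary ground truth.
import torch


def active_contour_loss(pred: torch.Tensor, target: torch.Tensor,
                        region_weight: float = 1.0, eps: float = 1e-8) -> torch.Tensor:
    """pred: (B, 1, H, W) probabilities in [0, 1]; target: same shape, values in {0, 1}."""
    # Length term: total variation of the predicted contour.
    dy = pred[:, :, 1:, :] - pred[:, :, :-1, :]
    dx = pred[:, :, :, 1:] - pred[:, :, :, :-1]
    length = torch.sqrt(dy[:, :, :, :-1] ** 2 + dx[:, :, :-1, :] ** 2 + eps).mean()
    # Region terms with constants c_in = 1, c_out = 0: penalize predicted foreground
    # outside the target region and missing foreground inside it.
    region_in = (pred * (target - 1.0) ** 2).mean()
    region_out = ((1.0 - pred) * (target - 0.0) ** 2).mean()
    return length + region_weight * (region_in + region_out)


if __name__ == "__main__":
    pred = torch.rand(2, 1, 64, 64)
    target = (torch.rand(2, 1, 64, 64) > 0.5).float()
    print(float(active_contour_loss(pred, target)))
```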

25 pages, 7199 KiB  
Article
A Progressive Semantic-Aware Fusion Network for Remote Sensing Object Detection
by Lerong Li, Jiayang Wang, Yue Liao and Wenbin Qian
Appl. Sci. 2025, 15(8), 4422; https://doi.org/10.3390/app15084422 - 17 Apr 2025
Viewed by 653
Abstract
Object detection in remote sensing images has gained prominence alongside advancements in sensor technology and earth observation systems. Although current detection frameworks demonstrate remarkable achievements in natural imagery analysis, their performance degrades when applied to remote imaging scenarios due to two inherent limitations: (1) complex background interference, which causes object features to be easily obscured by noise, leading to reduced detection accuracy; (2) the variation in object scales leads to a decrease in the model’s generalization ability. To address these issues, we propose a progressive semantic-aware fusion network (ProSAF-Net). First, we design a shallow detail aggregation module (SDAM), which adaptively integrates features across different channels and scales in the early Neck stage through dynamically adjusted fusion weights, fully exploiting shallow detail information to refine object edge and texture representation. Second, to effectively integrate shallow detail information and high-level semantic abstractions, we propose a deep semantic fusion module (DSFM), which employs a progressive feature fusion mechanism to incrementally integrate deep semantic information, strengthening the global representation of objects while effectively complementing the rich shallow details extracted by SDAM, enhancing the model’s capability in distinguishing objects and refining spatial localization. Furthermore, we develop a spatial context-aware module (SCAM) to fully exploit both global and local contextual information, effectively distinguishing foreground from background and suppressing interference, thus improving detection robustness. Finally, we propose auxiliary dynamic loss (ADL), which adaptively adjusts loss weights based on object scales and utilizes supplementary anchor priors to expedite parameter convergence during coordinate regression, thereby improving the model’s positioning accuracy for targets. Extensive experiments on the RSOD, DIOR, and NWPU VHR-10 datasets demonstrate that our method outperforms other state-of-the-art methods. Full article
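The abstract does not give the ADL formulation, but the general idea of scale-dependent loss weighting can be illustrated with the toy sketch below, in which smaller boxes receive larger regression weights; the weighting rule is invented purely for illustration.

```python
# Toy illustration of scale-dependent loss weighting: smaller boxes get larger
# weights so small objects contribute more to the regression loss. This is not
# the ADL formulation from the paper, only an illustration of the idea.
import torch


def scale_weights(box_wh: torch.Tensor, image_size: float = 1024.0) -> torch.Tensor:
    """box_wh: (N, 2) box widths and heights in pixels; returns (N,) weights."""
    rel_area = (box_wh[:, 0] * box_wh[:, 1]) / (image_size ** 2)
    return 1.0 / torch.sqrt(rel_area.clamp(min=1e-6))   # small objects -> big weights


def weighted_l1_box_loss(pred: torch.Tensor, target: torch.Tensor) -> torch.Tensor:
    """pred/target: (N, 4) boxes as (cx, cy, w, h) in pixels."""
    w = scale_weights(target[:, 2:])
    w = w / w.mean()                                     # keep the overall loss scale stable
    return (w[:, None] * (pred - target).abs()).mean()


if __name__ == "__main__":
    target = torch.tensor([[100., 100., 16., 16.], [500., 500., 256., 256.]])
    pred = target + 4.0                                  # same 4-pixel error on both boxes
    print(float(weighted_l1_box_loss(pred, target)))     # the small box dominates the loss
```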

23 pages, 7802 KiB  
Article
Can Separation Enhance Fusion? An Efficient Framework for Target Detection in Multimodal Remote Sensing Imagery
by Yong Wang, Jiexuan Jia, Rui Liu, Qiusheng Cao, Jie Feng, Danping Li and Lei Wang
Remote Sens. 2025, 17(8), 1350; https://doi.org/10.3390/rs17081350 - 10 Apr 2025
Viewed by 611
Abstract
Target detection in remote sensing images has garnered significant attention due to its wide range of applications. Many traditional methods primarily rely on unimodal data, which often struggle to address the complexities of remote sensing environments. Furthermore, small-target detection remains a critical challenge in remote sensing image analysis, as small targets occupy only a few pixels, making feature extraction difficult and prone to errors. To address these challenges, this paper revisits the existing multimodal fusion methodologies and proposes a novel framework of separation before fusion (SBF). Leveraging this framework, we present Sep-Fusion—an efficient target detection approach tailored for multimodal remote sensing aerial imagery. Within the modality separation module (MSM), the method separates the three RGB channels of visible light images into independent modalities aligned with infrared image channels. Each channel undergoes independent feature extraction through the unimodal block (UB) to effectively capture modality-specific features. The extracted features are then fused using the feature attention fusion (FAF) module, which integrates channel attention and spatial attention mechanisms to enhance multimodal feature interaction. To improve the detection of small targets, an image regeneration module is exploited during the training stage. It incorporates the super-resolution strategy with attention mechanisms to further optimize high-resolution feature representations for subsequent positioning and detection. Sep-Fusion is currently developed on the YOLO series to make itself a potential real-time detector. Its lightweight architecture enables the model to achieve high computational efficiency while maintaining the desired detection accuracy. Experimental results on the multimodal VEDAI dataset show that Sep-Fusion achieves 77.9% mAP50, surpassing many state-of-the-art models. Ablation experiments further illustrate the respective contribution of modality separation and attention fusion. The adaptation of our multimodal method to unimodal target detection is also verified on NWPU VHR-10 and DIOR datasets, which proves Sep-Fusion to be a suitable alternative to current detectors in various remote sensing scenarios. Full article
(This article belongs to the Special Issue Remote Sensing Image Thorough Analysis by Advanced Machine Learning)
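The feature attention fusion (FAF) module is described only as combining channel and spatial attention; a generic channel-then-spatial attention block applied to concatenated RGB and infrared features, in the spirit of CBAM, is sketched below as a rough stand-in rather than the authors' implementation.

```python
# Generic channel + spatial attention applied to fused RGB/IR features.
# A rough stand-in for the FAF module; not the authors' implementation.
import torch
import torch.nn as nn


class ChannelSpatialAttention(nn.Module):
    def __init__(self, channels: int, reduction: int = 8):
        super().__init__()
        hidden = max(channels // reduction, 8)
        self.channel_mlp = nn.Sequential(
            nn.Linear(channels, hidden), nn.ReLU(inplace=True), nn.Linear(hidden, channels)
        )
        self.spatial_conv = nn.Conv2d(2, 1, kernel_size=7, padding=3)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Channel attention from global average- and max-pooled descriptors.
        avg = self.channel_mlp(x.mean(dim=(2, 3)))
        mx = self.channel_mlp(x.amax(dim=(2, 3)))
        x = x * torch.sigmoid(avg + mx)[:, :, None, None]
        # Spatial attention from channel-wise average and max maps.
        smap = torch.cat([x.mean(dim=1, keepdim=True), x.amax(dim=1, keepdim=True)], dim=1)
        return x * torch.sigmoid(self.spatial_conv(smap))


if __name__ == "__main__":
    rgb_feat = torch.randn(1, 64, 64, 64)              # features from a visible-light branch
    ir_feat = torch.randn(1, 64, 64, 64)               # features from an infrared branch
    fused = ChannelSpatialAttention(128)(torch.cat([rgb_feat, ir_feat], dim=1))
    print(fused.shape)                                  # torch.Size([1, 128, 64, 64])
```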

18 pages, 1795 KiB  
Article
Impact of UAV-Derived RTK/PPK Products on Geometric Correction of VHR Satellite Imagery
by Muhammed Enes Atik, Mehmet Arkali and Saziye Ozge Atik
Drones 2025, 9(4), 291; https://doi.org/10.3390/drones9040291 - 9 Apr 2025
Cited by 1 | Viewed by 1159
Abstract
Satellite imagery is a widely used source of spatial information in many applications, such as land use/land cover, object detection, agricultural monitoring, and urban area monitoring. Numerous factors, including projection, tilt angle, scanner, atmospheric conditions, terrain curvature, and fluctuations, can cause satellite images to become distorted. Eliminating systematic errors caused by the sensor and platform is a crucial step to obtaining reliable information from satellite images. To utilize satellite images directly in applications requiring high accuracy, the errors in the images should be removed by geometric correction. In this study, geometric correction was applied to the Pléiades 1A (PHR) image using non-parametric methods, and the effects of different transformation models and digital elevation models (DEMs) were investigated. Ground control points (GCPs) were obtained from orthophotos created by the photogrammetric method using precise positioning. The effect of photogrammetric DEMs with various spatial resolutions on geometric correction was investigated. Additionally, the effect of DEMs obtained using the photogrammetric method was compared with those from open-source DEMs, including SRTM, ASTER GDEM, COP30, AW3D30, and NASADEM. Two-dimensional polynomial transformation, the thin plate spline (TPS), and the rational function model (RFM) were applied as transformation methods. Our results showed that a higher-accuracy geometric correction process could be achieved with orthophotos and DEMs created using precise positioning techniques such as RTK and PPK. According to the results obtained, an RMSE of 0.633 m was achieved with RFM using RTK-DEM, while an RMSE of 0.615 m was achieved with RFM using PPK-DEM. Full article
(This article belongs to the Special Issue Applications of UVs in Digital Photogrammetry and Image Processing)
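As background for the transformation models compared above, a first-order 2D polynomial correction fitted to ground control points by least squares, with RMSE reported on held-out check points, can be sketched as follows; all coordinates in the example are synthetic.

```python
# First-order 2D polynomial geometric correction fitted to GCPs by least squares,
# with RMSE evaluated on independent check points (all coordinates synthetic).
import numpy as np


def fit_polynomial_1st(src: np.ndarray, dst: np.ndarray) -> np.ndarray:
    """src: (N, 2) image coords; dst: (N, 2) map coords; returns (3, 2) coefficients."""
    design = np.column_stack([np.ones(len(src)), src])        # design matrix [1, x, y]
    coeffs, *_ = np.linalg.lstsq(design, dst, rcond=None)
    return coeffs


def apply_polynomial(coeffs: np.ndarray, pts: np.ndarray) -> np.ndarray:
    return np.column_stack([np.ones(len(pts)), pts]) @ coeffs


if __name__ == "__main__":
    rng = np.random.default_rng(4)
    img = rng.uniform(0, 1000, size=(20, 2))                  # pixel coordinates of GCPs
    true_affine = np.array([[500000.0, 4500000.0], [0.5, 0.01], [-0.01, -0.5]])
    map_xy = apply_polynomial(true_affine, img) + rng.normal(0, 0.3, size=(20, 2))
    coeffs = fit_polynomial_1st(img[:15], map_xy[:15])        # fit on 15 GCPs
    resid = apply_polynomial(coeffs, img[15:]) - map_xy[15:]  # 5 check points
    rmse = np.sqrt((resid ** 2).sum(axis=1).mean())
    print(f"check-point RMSE: {rmse:.3f} m")
```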

22 pages, 11865 KiB  
Article
Detection and Optimization of Photovoltaic Arrays’ Tilt Angles Using Remote Sensing Data
by Niko Lukač, Sebastijan Seme, Klemen Sredenšek, Gorazd Štumberger, Domen Mongus, Borut Žalik and Marko Bizjak
Appl. Sci. 2025, 15(7), 3598; https://doi.org/10.3390/app15073598 - 25 Mar 2025
Viewed by 692
Abstract
Maximizing the energy output of photovoltaic (PV) systems is becoming increasingly important. Consequently, numerous approaches have been developed over the past few years that utilize remote sensing data to predict or map solar potential. However, they primarily address hypothetical scenarios, and few focus on improving existing installations. This paper presents a novel method for optimizing the tilt angles of existing PV arrays by integrating Very High Resolution (VHR) satellite imagery and airborne Light Detection and Ranging (LiDAR) data. At first, semantic segmentation of VHR imagery using a deep learning model is performed in order to detect PV modules. The segmentation is refined using a Fine Optimization Module (FOM). LiDAR data are used to construct a 2.5D grid to estimate the modules’ tilt (inclination) and aspect (orientation) angles. The modules are grouped into arrays, and tilt angles are optimized using a Simulated Annealing (SA) algorithm, which maximizes simulated solar irradiance while accounting for shadowing, direct, and anisotropic diffuse irradiances. The method was validated using PV systems in Maribor, Slovenia, achieving a 0.952 F1-score for module detection (using FT-UnetFormer with SwinTransformer backbone) and an estimated electricity production error of below 6.7%. Optimization results showed potential energy gains of up to 4.9%. Full article
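The simulated annealing step can be illustrated with a toy version in which the annual-irradiance objective is replaced by a simple placeholder function of tilt angle; the cooling schedule, step size, and objective below are assumptions and ignore the shadowing and diffuse-irradiance terms the paper simulates.

```python
# Toy simulated annealing over a PV array tilt angle. The irradiance objective is
# a crude placeholder (peaking near 30 degrees); the paper's simulation accounts
# for shadowing and anisotropic diffuse irradiance, which is not modeled here.
import math
import random


def placeholder_irradiance(tilt_deg: float) -> float:
    return math.cos(math.radians(tilt_deg - 30.0))        # fake objective, maximum at 30 deg


def anneal_tilt(start: float = 5.0, t0: float = 1.0, cooling: float = 0.95,
                steps: int = 500, seed: int = 0) -> float:
    rng = random.Random(seed)
    tilt, best = start, start
    temp = t0
    for _ in range(steps):
        candidate = min(max(tilt + rng.gauss(0.0, 2.0), 0.0), 90.0)
        delta = placeholder_irradiance(candidate) - placeholder_irradiance(tilt)
        if delta > 0 or rng.random() < math.exp(delta / max(temp, 1e-9)):
            tilt = candidate                               # accept uphill moves, downhill with prob.
        if placeholder_irradiance(tilt) > placeholder_irradiance(best):
            best = tilt
        temp *= cooling                                    # geometric cooling schedule
    return best


if __name__ == "__main__":
    print(f"optimized tilt: {anneal_tilt():.1f} degrees")  # converges near 30
```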

21 pages, 14494 KiB  
Article
GIS-Based Approach for Estimating Olive Tree Heights Using High-Resolution Satellite Imagery and Shadow Analysis
by Raffaella Brigante, Valerio Baiocchi, Roberto Calisti, Laura Marconi, Primo Proietti, Fabio Radicioni, Luca Regni and Alessandra Vinci
Appl. Sci. 2025, 15(6), 3066; https://doi.org/10.3390/app15063066 - 12 Mar 2025
Cited by 2 | Viewed by 812
Abstract
Measuring tree heights is a critical step for assessing ecological and agricultural parameters, including biomass, carbon stock, and canopy volume. In extensive areas exceeding a few hectares, traditional terrestrial measurement methods are often prohibitively expensive in terms of time and cost. This study introduces a GIS-based methodology for estimating olive tree (Olea europaea L.) heights using very-high-resolution (VHR) satellite imagery. The approach integrates a mathematical model that incorporates slope and aspect information derived in a GIS environment from a large-scale Digital Elevation Model. By leveraging sun position data embedded in satellite image metadata, a dedicated geometric model was developed to calculate tree heights. Comparative analyses with a drone-based 3D model demonstrated the statistical reliability of the proposed methodology. While this study focuses on olive trees due to their unique canopy structure, the method could also be applied to other tree species or even to buildings and other vertically developed structures on the ground. Future developments aim to enhance efficiency and usability through the creation of a specialized GIS tool, making it a valuable resource for environmental monitoring, sustainable agricultural management, and broader spatial analysis applications. Full article
(This article belongs to the Special Issue GIS-Based Spatial Analysis for Environmental Applications)
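The core geometric relation behind the method (tree height from the measured shadow length and the sun elevation stored in the image metadata, with a first-order correction for ground slope along the shadow direction) can be sketched as follows; the slope handling is a simplification of the paper's full model.

```python
# Tree height from planimetric shadow length and sun elevation, with a first-order
# correction for terrain slope along the shadow direction (simplified geometry;
# the sun elevation comes from the VHR image metadata).
import math


def tree_height(shadow_len_m: float, sun_elev_deg: float,
                slope_deg_along_shadow: float = 0.0) -> float:
    """shadow_len_m: horizontal (map) length of the shadow;
    slope is positive when the ground rises in the shadow direction."""
    alpha = math.radians(sun_elev_deg)
    beta = math.radians(slope_deg_along_shadow)
    return shadow_len_m * (math.tan(alpha) + math.tan(beta))


if __name__ == "__main__":
    # Example: 6.2 m shadow, sun elevation 40 deg, shadow falling on a 5 deg downslope.
    print(f"flat ground:     {tree_height(6.2, 40.0):.2f} m")        # ~5.20 m
    print(f"5 deg downslope: {tree_height(6.2, 40.0, -5.0):.2f} m")  # ~4.66 m
```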

20 pages, 42010 KiB  
Article
Coastline and Riverbed Change Detection in the Broader Area of the City of Patras Using Very High-Resolution Multi-Temporal Imagery
by Spiros Papadopoulos, Vassilis Anastassopoulos and Georgia Koukiou
Electronics 2025, 14(6), 1096; https://doi.org/10.3390/electronics14061096 - 11 Mar 2025
Viewed by 708
Abstract
Accurate and robust information on land cover changes in urban and coastal areas is essential for effective urban land management, ecosystem monitoring, and urban planning. This paper details the methodology and results of a pixel-level classification and change detection analysis, leveraging 1945 Royal Air Force (RAF) aerial imagery and 2011 Very High-Resolution (VHR) multispectral WorldView-2 satellite imagery from the broader area of Patras, Greece. Our attention is mainly focused on changes along the coastline running northeast from the city of Patras and along the two major rivers, Charadros and Selemnos. The methodology involves preprocessing steps such as registration, denoising, and resolution adjustments to ensure computational feasibility for both coastal and riverbed change detection procedures while maintaining critical spatial features. For coastal change detection over time, the Normalized Difference Water Index (NDWI) was applied to the 2011 imagery to mask out the sea along the coastline, while the sea was masked manually in the 1945 archive imagery. To determine the differences in the coastline between 1945 and 2011, image differencing was performed by subtracting the 1945 image from the 2011 image, highlighting the areas where changes have occurred over time. To conduct riverbed change detection, feature extraction using the Gray-Level Co-occurrence Matrix (GLCM) was applied to capture spatial characteristics. A Support Vector Machine (SVM) classification model was trained to distinguish river pixels from non-river pixels, enabling the identification of changes in riverbeds and achieving 92.6% and 92.5% accuracy for new and old imagery, respectively. Post-classification processing included classification maps to enhance the visualization of the detected changes. This approach highlights the potential of combining historical and modern imagery with supervised machine learning methods to effectively assess coastal erosion and riverbed alterations.
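The riverbed classification step (GLCM texture features feeding an SVM) can be outlined as below; the window size, GLCM settings, and synthetic patches are placeholders, and recent scikit-image versions expose these functions as graycomatrix/graycoprops.

```python
# Sketch of GLCM texture features + SVM pixel classification, in the spirit of the
# riverbed workflow above. Window size, GLCM settings, and data are placeholders.
import numpy as np
from skimage.feature import graycomatrix, graycoprops
from sklearn.svm import SVC


def glcm_features(window: np.ndarray, levels: int = 32) -> np.ndarray:
    """window: small 2-D uint8 patch already quantized to `levels` gray levels."""
    glcm = graycomatrix(window, distances=[1], angles=[0, np.pi / 2],
                        levels=levels, symmetric=True, normed=True)
    props = ["contrast", "homogeneity", "energy", "correlation"]
    return np.array([graycoprops(glcm, p).mean() for p in props])


if __name__ == "__main__":
    rng = np.random.default_rng(5)
    # Synthetic "river" (smooth) vs "non-river" (textured) patches, 16x16, 32 gray levels.
    smooth = [rng.integers(10, 14, (16, 16), dtype=np.uint8) for _ in range(40)]
    rough = [rng.integers(0, 32, (16, 16), dtype=np.uint8) for _ in range(40)]
    X = np.array([glcm_features(p) for p in smooth + rough])
    y = np.array([1] * 40 + [0] * 40)                  # 1 = river, 0 = non-river
    clf = SVC(kernel="rbf", gamma="scale").fit(X[::2], y[::2])
    print("held-out accuracy:", clf.score(X[1::2], y[1::2]))
```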

20 pages, 12082 KiB  
Article
Mapping Habitat Structures of Endangered Open Grassland Species (E. aurinia) Using a Biotope Classification Based on Very High-Resolution Imagery
by Steffen Dietenberger, Marlin M. Mueller, Andreas Henkel, Clémence Dubois, Christian Thiel and Sören Hese
Remote Sens. 2025, 17(1), 149; https://doi.org/10.3390/rs17010149 - 4 Jan 2025
Cited by 1 | Viewed by 1410
Abstract
Analyzing habitat conditions and mapping habitat structures are crucial for monitoring ecosystems and implementing effective conservation measures, especially in the context of declining open grassland ecosystems in Europe. The marsh fritillary (Euphydryas aurinia), an endangered butterfly species, depends heavily on specific habitat conditions found in these grasslands, making it vulnerable to environmental changes. To address this, we conducted a comprehensive habitat suitability analysis within the Hainich National Park in Thuringia, Germany, leveraging very high-resolution (VHR) airborne, red-green-blue (RGB), and color-infrared (CIR) remote sensing data and deep learning techniques. We generated habitat suitability models (HSM) to gain insights into the spatial factors influencing the occurrence of E. aurinia and to predict potential habitat suitability for the whole study site. Through a deep learning classification technique, we conducted biotope mapping and generated fine-scale spatial variables to model habitat suitability. By employing various modeling techniques, including Generalized Additive Models (GAM), Generalized Linear Models (GLM), and Random Forest (RF), we assessed the influence of different modeling parameters and pseudo-absence (PA) data generation on model performance. The biotope mapping achieved an overall accuracy of 81.8%, while the subsequent HSMs yielded accuracies ranging from 0.69 to 0.75, with RF showing slightly better performance. The models agree that homogeneous grasslands, paths, hedges, and areas with dense bush encroachment are unsuitable habitats, but they differ in their identification of high-suitability areas. Shrub proximity and density were identified as important factors influencing the occurrence of E. aurinia. Our findings underscore the critical role of human intervention in preserving habitat suitability, particularly in mitigating the adverse effects of natural succession dominated by shrubs and trees. Furthermore, our approach demonstrates the potential of VHR remote sensing data in mapping small-scale butterfly habitats, offering applicability to habitat mapping for various other species. Full article
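A minimal version of the random-forest habitat suitability modelling step, with randomly generated pseudo-absences and synthetic covariates, might look like the sketch below; the covariate choices and the presence-to-pseudo-absence ratio are assumptions.

```python
# Minimal habitat-suitability sketch: presence points plus random pseudo-absences,
# synthetic covariates (e.g. shrub density, distance to shrubs), random forest model.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(6)

n_presence, n_pa = 120, 240                             # 1:2 presence/pseudo-absence ratio (assumed)
# Synthetic covariates: presences cluster at moderate shrub density, short shrub distance.
presence = np.column_stack([rng.normal(0.3, 0.1, n_presence),    # shrub density (fraction)
                            rng.normal(20.0, 10.0, n_presence)]) # distance to shrubs (m)
pseudo_abs = np.column_stack([rng.uniform(0.0, 1.0, n_pa),
                              rng.uniform(0.0, 200.0, n_pa)])

X = np.vstack([presence, pseudo_abs])
y = np.concatenate([np.ones(n_presence), np.zeros(n_pa)])

rf = RandomForestClassifier(n_estimators=300, random_state=0)
print("mean CV accuracy:", cross_val_score(rf, X, y, cv=5).mean())
rf.fit(X, y)
print("suitability at (0.32, 18 m):", rf.predict_proba([[0.32, 18.0]])[0, 1])
```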
