Article

Evaluating Two Approaches for Mapping Solar Installations to Support Sustainable Land Monitoring: Semantic Segmentation on Orthophotos vs. Multitemporal Sentinel-2 Classification

by Adolfo Lozano-Tello *, Andrés Caballero-Mancera, Jorge Luceño and Pedro J. Clemente
Quercus Software Engineering Group, Universidad de Extremadura, 10003 Cáceres, Spain
* Author to whom correspondence should be addressed.
Sustainability 2025, 17(19), 8628; https://doi.org/10.3390/su17198628
Submission received: 28 August 2025 / Revised: 16 September 2025 / Accepted: 23 September 2025 / Published: 25 September 2025

Abstract

This study evaluates two approaches for detecting solar photovoltaic (PV) installations across agricultural areas, emphasizing their role in supporting sustainable energy monitoring, land management, and planning. Accurate PV mapping is essential for tracking renewable energy deployment, guiding infrastructure development, assessing land-use impacts, and informing policy decisions aimed at reducing carbon emissions and fostering climate resilience. The first approach applies deep learning-based semantic segmentation to high-resolution RGB orthophotos, using the pretrained “Solar PV Segmentation” model, which achieves an F1-score of 95.27% and an IoU of 91.04%, providing highly reliable PV identification. The second approach employs multitemporal pixel-wise spectral classification using Sentinel-2 imagery, where the best-performing neural network achieved a precision of 99.22%, a recall of 96.69%, and an overall accuracy of 98.22%. Both approaches coincided in detecting 86.79% of the identified parcels, with an average surface difference of approximately 6.7 hectares per parcel. The Sentinel-2 method leverages its multispectral bands and frequent revisit rate, enabling timely detection of new or evolving installations. The proposed methodology supports the sustainable management of land resources by enabling automated, scalable, and cost-effective monitoring of solar infrastructures using open-access satellite data. This contributes directly to the goals of climate action and sustainable land-use planning and provides a replicable framework for assessing human-induced changes in land cover at regional and national scales.

1. Introduction

The transition towards a sustainable energy model has positioned solar photovoltaic (PV) power as a cornerstone of the climate strategy in many countries. In Europe, the Green Deal sets the ambitious target of reducing net greenhouse gas emissions by at least 55% by 2030 and achieving climate neutrality by 2050 [1]. To achieve this, the deployment of renewable sources and the transformation of the energy system towards greater efficiency, resilience, and sustainability have been prioritized. Within this framework, solar PV has experienced consistent growth in recent years, both in installed capacity and electricity generation. According to Eurostat, in 2022, 23.1% of the European Union’s gross final energy consumption came from renewable sources, compared to just 8.3% two decades earlier [2]. Among these, solar PV stands out for its modularity, fast deployment, and decreasing cost per installed watt. In Spain, growth has been particularly significant, both in large-scale solar farms and in self-consumption systems. In 2022 alone, more than 5300 MW of new PV capacity were installed, contributing to nearly 7000 GWh of electricity production in certain regions [3]. This accelerated deployment generates increasing demand for spatially explicit, up-to-date, and scalable tools to monitor land use transformation, support decision-making, and foster sustainable land resource management.
The systematic detection of photovoltaic installations is essential in various fields: regulatory compliance, environmental impact assessment, cadastral updating, grid management, land use analysis, and the smart and efficient installation of solar panels [4,5,6]. Accurate land mapping of PV infrastructures is also critical to assess land occupation trends, plan sustainable energy corridors, and anticipate potential land-use conflicts, especially in rural and peri-urban areas. However, traditional detection tools—based on orthophotos, administrative records, or field surveys—suffer from limitations in coverage, update frequency, and operational cost. In this context, satellite remote sensing emerges as a promising alternative, enabling automated, scalable, and accurate mapping of solar infrastructures.
Many recent studies have approached PV detection using very high-resolution (VHR) imagery acquired by satellites, aircraft, or Unmanned Aerial Vehicles (UAVs). These methods, usually based on semantic segmentation with convolutional neural networks (CNNs), deliver highly detailed results in urban and industrial environments. In studies such as [7], high-resolution color satellite orthophotos with a spatial resolution of 0.3 m per pixel, provided by the US Geological Survey, were employed. The authors implemented a region-based object detection framework combined with a Support Vector Machine (SVM) classifier trained on candidate region features such as color, shape, and texture. Their system achieved a detection rate of 94%, correctly identifying 50 out of 53 PV installations with only 4 false positives, as evaluated through ROC analysis. However, this approach depends heavily on up-to-date RGB imagery of consistent quality and very high resolution, which restricts its scalability for large-scale or rural studies. Importantly, the method is object-based—focusing on detecting entire regions or objects—rather than operating at the pixel level. Relying exclusively on single-date RGB imagery can limit PV detection, as some modules may remain undetected due to strong sunlight reflections, which significantly alter their apparent spectral response compared to their typical signatures. In [8], the authors introduce Deep Solar PV Refiner, a detail-oriented deep learning network that enhances PV segmentation from satellite imagery by addressing challenges such as small PV areas. Their network incorporates a Split-Attention Network, combines a Dual-Attention Network with Atrous Spatial Pyramid Pooling, and integrates a PointRend Network to refine PV boundary prediction, achieving significant improvements and reaching an F-score of 90.91%.
More recently, multi-source fusion strategies have been explored to improve detection accuracy by leveraging both spatial detail and spectral richness. For instance, Wang et al. [9] proposed FusionPV, a deep learning framework that jointly processes RGB orthophotos and Sentinel-2 imagery. Their results showed improved performance, especially in heterogeneous landscapes, but also revealed the need for precise spatial alignment and increased computational demand. Similarly, Zhao et al. [10] introduced PV-Unet, a network specifically designed to extract PV arrays from RGB and multispectral images with different resolutions. While their method achieved strong results in urban settings, it relied on manually curated datasets and complex fusion schemes.
Compared to these works, our study takes a more modular approach by assessing each data source (orthophotos and Sentinel-2) independently over the same study area and parcel baseline. This design allows us to explicitly quantify their relative strengths, divergences, and complementarities. While our orthophoto-based results align with previous literature in terms of high precision in structured environments [7,10], we also report a significant rate of false positives due to visual similarities with hedgerow plantations or water bodies—an issue only briefly noted in earlier works. On the other hand, our Sentinel-2 model, trained on multitemporal spectral profiles, successfully detected solar installations absent from the orthophotos due to acquisition date, highlighting a key advantage over VHR-only methods. This reinforces findings from Wang et al. [9] regarding the utility of temporal signals, even when spatial resolution is limited.
Sentinel-2, part of the European Copernicus program, offers multitemporal multispectral imagery with a revisit frequency of five days in Europe and spatial resolutions of 10 to 60 m. With 13 spectral bands covering visible, near-infrared (NIR), and short-wave infrared (SWIR) wavelengths, Sentinel-2 captures subtle spectral differences that can distinguish between land cover types. This makes it particularly suitable for identifying artificial surfaces, including rooftops and solar panels, especially in contexts where high-resolution data are unavailable or impractical.
Unlike orthophotos, which provide very high spatial resolution but typically only RGB information and limited temporal coverage [11], Sentinel-2 imagery offers coarser spatial resolution but richer spectral information and frequent revisits [12]. Orthophotos allow for detailed visual analysis and accurate shape delineation of objects, whereas Sentinel-2 enables multi-temporal, multispectral analysis, capturing spectral signatures that can help discriminate between materials and land cover types over time. This contrast makes the combined use of both data sources highly valuable for mapping solar infrastructure and understanding its spatial distribution over time, particularly in the context of sustainable land development and energy planning. In summary, our contribution is twofold: (i) we provide a structured comparison between two widely used data modalities in solar infrastructure mapping, and (ii) we empirically validate their performance on a real-world, parcel-level dataset—highlighting where each method reinforces or departs from existing literature. This comparative design supports decision-making in operational contexts where resource availability, update frequency, and spatial coverage differ.
Over the past few years, two main methodological paths have been explored in the literature for PV detection: (i) semantic segmentation of VHR imagery and (ii) pixel-wise classification using satellite multispectral data. While segmentation with deep CNNs like U-Net or DeepLab has proven effective in VHR datasets, it faces challenges of cost, availability, and generalization. In contrast, pixel-wise approaches offer better scalability and have shown promising results when using multispectral imagery with carefully selected features.
Some works, such as Zhang et al. [13], applied a pixel-based Random Forest (RF) classifier with Landsat 8 imagery (30 m resolution) to map photovoltaic power plants across China. By generating autumn 2020 composites to minimize cloud interference and incorporating terrain constraints to filter unsuitable areas, the authors reported very high classification performance, achieving an overall accuracy (OA) above 95%.
Recent studies support the potential of Sentinel-2 for PV detection even without VHR or radar data. Zixuan et al. [14] employed E-UNet for segmenting Sentinel-2 images, achieving 98.9% accuracy. Other studies have combined Sentinel-1 and Sentinel-2 with models such as YOLO [15,16] or Random Forest [17] to detect large PV plants, leveraging textural and structural cues [18]; in the latter case, the models achieved over 90% accuracy in detecting large-scale PV panels from Sentinel-1 synthetic aperture radar (SAR) images using YOLO techniques.
In parallel, other approaches have combined multispectral and radar sources, such as Sentinel-1 and Sentinel-2, to improve robustness by fusing optical and microwave information. Zhang et al. [19], for instance, used a Random Forest classifier for mapping photovoltaic (PV) panels with multitemporal Sentinel-1 and Sentinel-2 data, reaching 94.3% accuracy in coastal China in 2021. Wang et al. [20] integrated nighttime light data (VIIRS) to improve urban-rural discrimination, achieving 98.9% overall accuracy. Other pixel-wise studies [21] have proposed an Enhanced PV Index (EPVI) based on Sentinel-2 imagery to improve the spectral discrimination of photovoltaic panels from surrounding land covers. By applying this index, the spatial distribution of China’s PV power stations in 2020 was mapped, achieving an overall accuracy of about 97.6%. These studies highlight the value of multitemporal and multisource data fusion for enhancing detection accuracy at regional scales. However, most rely on heterogeneous data sources—combining radar, multispectral, or nighttime imagery—or focus exclusively on large PV plants visible in coarse-resolution composites. In contrast, our study focuses solely on multitemporal Sentinel-2 imagery and applies the analysis at the individual parcel level, enabling fine-grained identification of both large and medium-scale installations over agricultural zones.
Spectral classification based solely on Sentinel-2 has proven to be a viable alternative when sufficient temporal resolution is available and the data is properly prepared. The key to these approaches lies in the multitemporal exploitation of spectral signatures, as photovoltaic installations exhibit specific and temporally stable reflective patterns that differ from those of vegetation or natural surfaces. While the EPVI approach [21] and Random Forest models [19] achieved high performance by designing custom spectral indices or combining features, they did not conduct a systematic comparison with VHR orthophoto-based methods. Moreover, they generally operate at the pixel or region level without using a parcel-based framework, which limits their application to cadastral or regulatory needs. Our methodology bridges this gap by combining a pixel-wise deep learning classifier trained with temporal profiles and applying it over 227,000 georeferenced parcels—offering direct applicability to land management and planning systems.
As summarized in Table 1, although these works report promising accuracies, they differ substantially in terms of spatial resolution, input data requirements, and study contexts. Most are limited to urban or industrial areas, focus on large-scale solar plants, or depend on costly or auxiliary datasets. Therefore, their findings are not directly comparable to the approach proposed here, which uniquely explores the exclusive use of multitemporal Sentinel-2 imagery for parcel-level photovoltaic detection over extensive agricultural areas.
To address this gap, we propose a fully automated methodology based exclusively on multitemporal Sentinel-2 imagery, without requiring orthophotos, radar data, or additional external sources. We develop and validate a dense neural network trained on pixel-level multitemporal features over more than 227,000 georeferenced agricultural parcels, optimized with Grid Search techniques [22]. As a reference, we compare our method with a segmentation-based approach using the pretrained “Solar PV Segmentation” model by Kleebauer et al. [23], based on DeepLabV3 with ResNet101 and evaluated over RGB orthophotos with 25 cm resolution. The study focuses on Extremadura (Spain), a region of strategic importance for solar energy. In 2022, Extremadura accounted for 31% of new PV capacity installed in Spain (1467 MW), reaching a total of 5348 MW and producing 6953.8 GWh of PV electricity—68% of its total renewable generation [3]. These figures make it an ideal testbed for validating solar detection methods under diverse landscape and installation conditions.
This study presents a direct comparison between two approaches for photovoltaic (PV) detection applied to the same set of agricultural parcels: high-resolution RGB orthophoto segmentation and pixel-wise multitemporal spectral classification using Sentinel-2 imagery. Despite its coarser 10 m spatial resolution, Sentinel-2 proves effective for PV detection by exploiting its multispectral bands and frequent five-day revisit cycle. Its multitemporal nature allows the identification of installations that may be missed in orthophotos due to acquisition timing and offers the possibility of tracking the construction and expansion of solar plants, supporting planning and management needs. Furthermore, the Sentinel-2–based workflow is fully automated and computationally efficient, in contrast to orthophoto-based methods, which demand more intensive computation and manual preprocessing. By explicitly comparing results from both methods—pixel-wise and object-based—on the same geographical and administrative base, this study contributes to ongoing research on multi-source PV detection, while reinforcing the practicality and scalability of Sentinel-2–based approaches when aligned with cadastral units.
The remainder of the paper is organized as follows. Section 2 focuses exclusively on describing the data and algorithms used, without presenting numerical results or performance evaluations; Section 3 presents the results together with a detailed analysis of model behavior and limitations; and Section 4 summarizes the conclusions. The methodology is structured into four subsections. First, the study area and the reference datasets used for training and validation are described. Then, the segmentation procedure for rooftop solar panels from orthophotos is detailed. Next, the Sentinel-2 imagery and multitemporal preprocessing are introduced. Finally, the pixel-based spectral classification model is presented, including its structure, training process, and evaluation criteria.

2. Materials and Methods

This study proposes a system for the automated detection of photovoltaic installations based exclusively on multitemporal imagery from the Sentinel-2 satellite, using a pixel-level spectral classification model. The approach is designed to be practical, replicable, and scalable, without relying on orthophotos or other external data sources.
As a comparative baseline, a segmentation method based on high-resolution RGB aerial orthophotos was also implemented, using the Deepness plugin for QGIS [24]. This plugin enables the application of deep learning models directly on georeferenced imagery. Specifically, the pre-trained Solar PV Segmentation model developed by Kleebauer et al. [23] was used. This model is based on the DeepLabV3 architecture with a ResNet101 backbone and is specifically designed for rooftop solar panel detection. It achieves an F-score of 95.27%. This second approach serves as a reference baseline against which the performance of the proposed system is evaluated.

2.1. Study Area and Reference Data

The study was conducted in the region of Extremadura, located in southwestern Spain (Figure 1b). This predominantly rural region is characterized by vast agricultural and livestock areas, low population density, and high levels of solar radiation. These conditions make it a strategic area for the deployment of photovoltaic solar energy systems, both as ground-mounted solar farms and rooftop installations on industrial or agricultural buildings. Extremadura spans 41,635 km2, has an average annual temperature of 17.9 °C, an average annual precipitation of 456 mm, and an average relative humidity of 59.08%. Elevation across the study area ranges from 150 to 1112 m above sea level. The central geographical coordinates of the study are approximately 38°40′0″ N, 6°10′0″ W (Figure 1c).
Extremadura was selected as the study area both for its strategic role in Spain’s solar energy development and for the availability of reliable reference data provided by the regional government (“Junta de Extremadura”) through its technical departments. Data collection was carried out using the official geographic delimitations from the SIGPAC system (Spanish Land Parcel Identification System), ensuring accurate identification and classification of the analyzed parcels.
For the model training phase, parcels located in the Cáceres province of Extremadura were used (Figure 2a). This area includes a total of 953,322 agricultural parcels over a surface of 19,868 km2. Positive instances (i.e., parcels with visible solar panel installations) were validated by experts from the Junta de Extremadura through manual inspection of RGB orthophotos from the Spanish National Aerial Orthophotography Plan (PNOA). Only parcels showing clear visual evidence of photovoltaic structures were selected, resulting in 1150 positive parcels distributed across 113 municipalities. A balanced dataset was created by randomly selecting 1150 negative parcels (without any visual signs of solar installations) with similar land use characteristics.
For the testing and evaluation phase, an area of approximately 21,767 km2 was used. Specifically, four counties in Extremadura were selected: Tierra de Barros, Zafra-Río Bodión, Campiña Sur, and Tentudía (Figure 2b). These areas present notable geographic and socioeconomic diversity, enabling a robust assessment of model performance under varying conditions. In total, 227,121 parcels were analyzed during this phase, covering a combined area of 594,510.12 hectares (Figure 2c). Importantly, this same set of parcels was used as the reference dataset for both methodological approaches (Sentinel-2 pixel-wise classification and orthophoto-based semantic segmentation), ensuring a fair and consistent basis for comparison despite the differences in model architectures. These parcels were automatically classified as either containing or not containing solar installations using the methods described in the following sections. The coordinates and geographic boundaries of the study parcels are available in the dataset at https://doi.org/10.5281/zenodo.16261223 (accessed on 24 August 2025).
For image preprocessing, georeferencing, model training, and inference, a workstation was used equipped with Ubuntu 24.04.2 LTS operating system, an Intel Core i7-14700KF processor (20 cores, up to 5.6 GHz), an NVIDIA GeForce RTX 4070 Ti graphics card (12 GB), and 32 GB of DDR4 RAM. The software stack included Python 3.10 for implementing models and tools, and SNAP 11.0.1 for Sentinel-2 image processing.

2.2. Detection of Photovoltaic Installations Through Segmentation on Orthophotos

This section describes the procedure used for detecting photovoltaic installations from high-resolution aerial imagery. This approach is proposed as a comparative baseline against the method introduced in this work, which is based on multitemporal Sentinel-2 images.
The segmentation was performed using RGB orthophotos from the 2022 National Aerial Orthophotography Plan (PNOA), available for the autonomous community of Extremadura. This year was chosen as a reference because there are currently no more recent orthophotos available for the region.
Specifically, the segmentation was carried out using the pretrained “Solar PV Segmentation” model developed by Kleebauer et al. [23], based on the DeepLabV3 architecture. This model has been specifically designed for detecting solar panels in high-resolution aerial images and has demonstrated a strong generalization capacity. According to the authors, it achieves an F1-score of 95.27% and an IoU of 91.04%, making it one of the most accurate and robust publicly available solutions for this task. Additionally, the DeepLabV3-based architecture is widely recognized for its effectiveness in segmenting objects with clearly defined edges [25]. Due to this choice, the technical details of the segmentation process are relatively brief compared to those provided in Section 2.3. The main focus here is on the preparation and preprocessing of the orthophotos for model input, as well as their adaptation to the specific characteristics of the study area. Rather than developing a model from scratch, this study adapts the existing pretrained model for use under comparable experimental conditions.
The photovoltaic installation detection process through segmentation involves the following sequential steps: (a) downloading high-resolution aerial orthophotos, (b) geospatial preprocessing including parcel clipping, (c) application of the “Solar PV Segmentation” model, specifically trained to identify morphological configurations and spectral patterns associated with PV installations, and (d) large-scale segmentation automation, which enabled this analysis to be conducted over a total of 227,121 parcels in the study area. The following subsections describe these subprocesses in detail.

2.2.1. High-Resolution Image Download

A total of 435 orthophotos from 2022 were used, captured on different days between June and July of that year, in GeoTIFF format with a spatial resolution of 0.25 m per pixel and approximate dimensions of 60,000 × 40,000 pixels each, resulting in a total volume of 923.7 GB. These images were downloaded from the website of the Spanish National Center for Geographic Information (CNIG) [26].
To reduce processing volume and focus the analysis on a representative region, four counties within Extremadura were selected, as described in Section 2.1. This reduced the dataset to 92 images, covering a total of 227,121 parcels and 594,510.12 hectares.

2.2.2. Parcel Clipping and Preprocessing

Once the study area was defined, it was necessary to ensure that the orthophotos were properly clipped and aligned with the 227,121 parcels contained in the region. To achieve this, the corresponding georeferenced graphical declarations were used, provided by the Junta de Extremadura in GeoJSON format, which precisely delineates the geometry of each parcel.
Based on these geometries, an automated process was developed to identify, for each parcel, the orthophotos that contained it either fully or partially. This is particularly important since a single parcel can span multiple images when located at their edges.
Once the images associated with each parcel were identified, the first step consisted of reprojecting them to match the spatial reference system used by the parcel geometry—either EPSG:25829 or EPSG:25830. This process was carried out using the OSGeo/GDAL library for Python. The result of each reprojection was saved as a temporary numbered TIFF file.
After ensuring that all images associated with a parcel were in the same coordinate system, they were merged into a single virtual mosaic, creating a VRT (Virtual Raster) file that serves as a logical representation of the combined images without physically duplicating the data. This VRT was then converted into a real disk image, producing a single GeoTIFF file that consolidated the necessary spatial information for further analysis of the parcel. An example of the result of this image fusion is shown in Figure 3.
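For illustration, the reprojection and mosaicking step can be sketched with the OSGeo/GDAL Python bindings roughly as follows; the file names and the chosen target EPSG code are placeholders rather than the exact scripts used in this work.

    from osgeo import gdal

    # Orthophoto tiles that intersect the parcel (illustrative file names)
    source_tiles = ["pnoa_tile_a.tif", "pnoa_tile_b.tif"]
    target_srs = "EPSG:25830"  # or EPSG:25829, depending on the parcel geometry

    # 1. Reproject each tile to the coordinate system of the parcel geometry
    reprojected = []
    for i, path in enumerate(source_tiles):
        out_path = f"reprojected_{i}.tif"
        gdal.Warp(out_path, path, dstSRS=target_srs)
        reprojected.append(out_path)

    # 2. Build a virtual mosaic (VRT) referencing the reprojected tiles
    #    without physically duplicating the pixel data
    vrt = gdal.BuildVRT("parcel_mosaic.vrt", reprojected)

    # 3. Materialize the mosaic as a single GeoTIFF for the clipping step
    gdal.Translate("parcel_mosaic.tif", vrt)
    vrt = None  # close the dataset so the files are flushed to disk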
The next step involved the individualized geospatial clipping of each parcel from the fused image. To do this, it was first verified that both the image and the parcel geometry shared the same spatial reference system. If not, the parcel geometry was reprojected to the corresponding system (EPSG:25829 or EPSG:25830). This ensured perfect spatial alignment between the vector polygon and the raster image containing it.
Once both the image and the parcel geometry shared the same coordinate system, a rasterized mask was generated from the parcel polygon. This step is essential because it translates a vector geometry (composed of coordinates and vertices) into a discrete matrix structure that can be directly applied to orthophotos in pixel form. First, an in-memory vector layer was created using OGR, based on the reprojected parcel geometry. Then, that layer was converted into a binary mask in which the pixels inside the polygon perimeter were assigned a value of 1, and the rest a value of 0 (outside the clipping area). This rasterized matrix has the same resolution and dimensions as the original image, allowing it to be overlaid without alignment errors. The main reason for creating this raster mask is that orthophotos consist of pixel matrices, and clipping must occur at the pixel level. Since vector geometries do not naturally align with rows and columns, a rasterized mask acts as a bridge between both formats, enabling the clipping operation to be formulated as an efficient and precise matrix operation.
Once the mask was generated, it was applied to each spectral band of the original image (red, green, and blue) using filtering operations. In this process, only the pixel values corresponding to the interior of the parcel were retained, and exterior values were set to null or zero, ensuring a clean crop.
Finally, a new final image was reconstructed, the dimensions of which were adjusted to fit the cropped plot, thus ensuring correct georeferencing of the result, preserving both its spatial location and reference system. The final image resulting from a cropped plot is shown in Figure 4.
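A minimal sketch of this mask-based clipping, assuming the mosaic produced in the previous step and an already reprojected parcel polygon, could look as follows (paths and driver choices are illustrative):

    import numpy as np
    from osgeo import gdal, ogr

    mosaic = gdal.Open("parcel_mosaic.tif")
    cols, rows = mosaic.RasterXSize, mosaic.RasterYSize

    # In-memory raster sharing the mosaic's grid, used to hold the binary mask
    mask_ds = gdal.GetDriverByName("MEM").Create("", cols, rows, 1, gdal.GDT_Byte)
    mask_ds.SetGeoTransform(mosaic.GetGeoTransform())
    mask_ds.SetProjection(mosaic.GetProjection())

    # Vector layer with the reprojected parcel polygon (illustrative path)
    parcel_ds = ogr.Open("parcel.geojson")
    layer = parcel_ds.GetLayer()

    # Burn value 1 inside the parcel perimeter; pixels outside remain 0
    gdal.RasterizeLayer(mask_ds, [1], layer, burn_values=[1])
    mask = mask_ds.GetRasterBand(1).ReadAsArray().astype(bool)

    # Apply the mask to the red, green, and blue bands: keep interior pixels,
    # set exterior pixels to zero
    clipped_rgb = np.stack([
        np.where(mask, mosaic.GetRasterBand(b).ReadAsArray(), 0)
        for b in (1, 2, 3)
    ])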

2.2.3. Segmentation Using Deepness and the Solar PV Segmentation Model

This subsection describes the open-access Solar PV Segmentation model, used for the segmentation of photovoltaic installations in the RGB orthophotos of the previously processed parcels, as outlined in the preceding subsection. It also details the parameter tuning carried out to adapt the model to the specific case study. To perform the segmentation and detection process with this model, the Deepness plugin for QGIS [24] was used. This plugin specializes in semantic segmentation via deep learning and supports the integration of pretrained models.
Specifically, the pretrained “Solar PV Segmentation” model developed by Kleebauer et al. [23] was used. It is based on the DeepLabV3 architecture with a ResNet101 backbone. This deep neural network architecture employs atrous (dilated) convolutions to capture context at multiple scales, enabling the detection of objects of various sizes with higher precision. It also incorporates an Atrous Spatial Pyramid Pooling (ASPP) mechanism, which improves segmentation by processing image features at multiple sampling rates before applying convolution.
The model was trained on a diverse dataset, including UAV data, aerial and satellite imagery, with resolutions ranging from 0.1 m to 3.2 m, as well as images from different countries such as France, Germany, and China. Its ability to handle multiple image sources and resolutions made it highly suitable for the present study, as the images used had a resolution of 0.25 m/pixel. The model demonstrated superior performance compared to those trained exclusively on a single-resolution image type, achieving an F1-score of 95.27% and an IoU of 91.04%. The training was performed on a system equipped with an NVIDIA Tesla A100-SXM4 GPU with 40 GB of VRAM and 512 GB of RAM.
This model was selected for several reasons: first, its excellent performance in key semantic segmentation metrics, and second, its availability as a pretrained model, which allows seamless integration into analysis workflows without the need for training from scratch. Additionally, its DeepLabV3-based architecture is widely recognized for its effectiveness in segmenting objects with well-defined edges, such as photovoltaic panels. These features make it especially well-suited for this study’s objective—direct comparison with pixel-by-pixel detection of solar panels using Sentinel-2 images—allowing technical efforts to focus on tuning the model’s inference parameters and on designing a preprocessing pipeline to adapt RGB orthophotos to the model’s input requirements.
Although the model initially yielded good results, adjustments were necessary to adapt it to the study images and minimize false positives and negatives in solar panel detection. Specifically, three key tunable parameters were modified: segmentation resolution, confidence threshold, and small-segment filtering.
  • The segmentation resolution defines the granularity level for object detection; if too low, segmentation may be incomplete, whereas excessively high resolution may cause unnecessary fragmentation.
  • The confidence threshold controls the model’s sensitivity in classifying a segment as a solar panel; higher values reduce the likelihood of false positives but may exclude structures that should be correctly classified.
  • Lastly, small-segment removal filters out very small detections that might represent image noise, preventing misclassification of irrelevant elements like roads or dense vegetation.
After multiple tests and adjustments, the optimal values were set as follows: a segmentation resolution of 25 cm/pixel (aligned with the orthophoto resolution), a confidence threshold of 0.95, and a minimum segment size of 20 pixels. The use of a 0.95 confidence threshold minimized false positives by ensuring that only highly confident structures were selected, thereby reducing erroneous inclusion of elements like trees or water bodies. Meanwhile, the 20-pixel filtering threshold eliminated small, non-relevant areas unrelated to photovoltaic installations. An example of the segmentation and detection output is shown in Figure 5.
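The thresholding and small-segment filtering described above are applied inside Deepness; purely as an illustration of the same post-processing logic, a generic NumPy/SciPy sketch operating on a hypothetical per-pixel probability map might look as follows:

    import numpy as np
    from scipy import ndimage

    def filter_pv_mask(prob_map: np.ndarray,
                       threshold: float = 0.95,
                       min_pixels: int = 20) -> np.ndarray:
        """Binarize a PV probability map and drop segments smaller than min_pixels."""
        mask = prob_map >= threshold              # confidence threshold (0.95)
        labels, n_segments = ndimage.label(mask)  # connected-component labelling
        if n_segments == 0:
            return mask
        # Pixel count of each labelled segment
        sizes = ndimage.sum(mask, labels, index=range(1, n_segments + 1))
        kept_labels = np.flatnonzero(sizes >= min_pixels) + 1
        return np.isin(labels, kept_labels)       # small-segment filtering (20 px)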

2.2.4. Tiling and Automation

Once the parcel images had been processed and the model selected and fine-tuned, the large-scale automated detection of solar panels was carried out. The Solar PV Segmentation model requires inputs of 256 × 256 pixels, which necessitated adapting the parcel images to this size. Therefore, each image was divided into tiles of 256 × 256 pixels. This tiling process can lead to fragmentation of detected objects and loss of spatial continuity, especially in the case of large installations spanning multiple tiles. After the individual segmentation and detection of each tile, a final reconstruction was required for each parcel, involving the post-processed merging of all generated tiles.
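As an illustration of this tiling step, a simple sketch for splitting a clipped parcel image into zero-padded 256 × 256 tiles and reassembling the per-tile masks afterwards is shown below; array layouts and function names are assumptions, not the exact implementation used here.

    import numpy as np

    TILE = 256  # input size expected by the Solar PV Segmentation model

    def tile_image(image: np.ndarray):
        """Split an (H, W, 3) parcel image into zero-padded 256x256 tiles."""
        h, w = image.shape[:2]
        pad_h, pad_w = (-h) % TILE, (-w) % TILE
        padded = np.pad(image, ((0, pad_h), (0, pad_w), (0, 0)))
        tiles, positions = [], []
        for r in range(0, padded.shape[0], TILE):
            for c in range(0, padded.shape[1], TILE):
                tiles.append(padded[r:r + TILE, c:c + TILE])
                positions.append((r, c))
        return tiles, positions, (h, w)

    def merge_masks(tile_masks, positions, original_shape):
        """Reassemble per-tile binary masks into a single parcel-level mask."""
        h, w = original_shape
        full = np.zeros((h + (-h) % TILE, w + (-w) % TILE), dtype=np.uint8)
        for mask, (r, c) in zip(tile_masks, positions):
            full[r:r + TILE, c:c + TILE] = mask
        return full[:h, :w]  # crop the padding back off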
All the aforementioned processing was performed using Python scripts and was completed after 874.4 h of computation. The total computation time of 874.4 h reflects the cumulative duration required to process more than 227,000 parcels, including image preprocessing, tiling into 256 × 256 patches, inference, post-processing, and PV detection. With parallelization on multiple machines or high-performance computing infrastructure, this runtime could be substantially reduced, underscoring the scalability of the proposed approach.
This segmentation and identification process led to the detection of a total of 101 parcels, covering an area of 4081.38 hectares that contained solar panels. Specifically, considering only the positively segmented zones, the total area of solar panels detected amounted to 1325.82 hectares. These results provide a detailed map of the distribution of photovoltaic infrastructures in the study region, ensuring high accuracy in the delineation of each installation.

2.3. Detection of Photovoltaic Panels Using Sentinel-2

This section presents in detail the methodology proposed in this study for the detection of photovoltaic installations, based on the use of multispectral images from the Sentinel-2 satellite. Unlike other approaches, such as the use of RGB orthophotos described in the previous section, this method leverages the multitemporal and spectral nature of Sentinel-2 data to address the problem from a more generalizable and automatable perspective. As this is the core methodological contribution of the work, the section is developed in greater depth, describing each stage of the process—from data collection and preprocessing to model training and parcel classification—through an applied analysis of 227,121 land parcels in the study region.
The workflow (summarized in Figure 6) begins with the download and preprocessing of Sentinel-2 images corresponding to the period from 1 June to 31 July 2022, over the study area. Simultaneously, the geographic identification of the pixels associated with each of the study parcels is carried out. Once both processes are completed, synthetic images are generated, the model is trained, and finally, pixel-level identification of solar panels is performed on a large scale. These procedures are detailed in the following subsections.

2.3.1. Image Download and Processing

A total of 121 Sentinel-2 images were downloaded between 1 June and 31 July 2022. This period was used to match the acquisition timeframe of the orthophotos, which are collected over multiple dates. This approach ensures temporal consistency between datasets and allows the detection algorithm to leverage spectral information from several time points, improving robustness and comparability with high-resolution aerial imagery. These images were retrieved using the official Copernicus API based on OData (Open Data Protocol), a standard approved by both ISO/IEC and OASIS, which provides a RESTful HTTPS-based API [27].
Since the proposed method requires multiple images to analyze the temporal evolution of each pixel, using full-sized original Sentinel-2 images could lead to RAM overload and internal storage issues. Therefore, memory optimization became a priority. To address this, once the original images were downloaded, they were segmented into more manageable areas, reducing their size from 110 × 110 km to 5 × 5 km (5000 × 5000 m) tiles. This segmentation used the same coordinate reference system (CRS) as Sentinel-2, WGS 84/UTM (EPSG:32630 for zone 30N and EPSG:32629 for zone 29N). Taking this division into account, a total of 331 zones were analyzed (Figure 7).
For the automated and large-scale image clipping process, we used the Python interface of SNAP (available at https://step.esa.int/main/download/snap-download/ accessed 17 July 2025), known as esa_snappy. After collecting and clipping the images, data from each spectral band were extracted, rescaled to a 10 m resolution, and normalized within a range of 0 to 1 to avoid anomalies in the original values. Subsequently, anomalous values were removed using ESA-provided cloud, water, and shadow detection models, complemented with our own outlier detection algorithms and interpolation techniques [28], enabling both anomaly filtering and smoothing of the data distribution.
The spatial boundaries of each parcel were defined using georeferenced polygons provided by administrative databases and the GIS system of the Junta de Extremadura. These polygons were used to map Sentinel-2 image pixels to the corresponding interior pixels of each parcel. However, since parcel boundaries may include pixels that partially fall outside the defined area, those pixels were excluded from the analysis to prevent spectral information from neighboring areas from contaminating the results.
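A minimal sketch of this filtering step, assuming the parcel polygon is available as a GeoJSON geometry and the Sentinel-2 pixel-grid origin is known in the same coordinate system, could be written with Shapely as follows (function and variable names are illustrative):

    from shapely.geometry import box, shape

    PIXEL = 10.0  # Sentinel-2 pixel size in metres

    def interior_pixels(parcel_geom_geojson, x0, y0, n_cols, n_rows):
        """Return (col, row) indices of 10 m pixels fully inside the parcel.

        (x0, y0) is the upper-left corner of the pixel grid in the parcel CRS;
        rows grow downwards, as in raster space.
        """
        parcel = shape(parcel_geom_geojson)
        selected = []
        for row in range(n_rows):
            for col in range(n_cols):
                pixel_footprint = box(x0 + col * PIXEL, y0 - (row + 1) * PIXEL,
                                      x0 + (col + 1) * PIXEL, y0 - row * PIXEL)
                if parcel.contains(pixel_footprint):  # discard boundary pixels
                    selected.append((col, row))
        return selected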
Once the pixels corresponding to each parcel were extracted and stored in the database, we prepared the input data for the training model. To this end, synthetic images were generated for each of the 331 study zones, following the methodology proposed in [29]. These synthetic images integrate the multispectral information from all 12 Sentinel-2 bands corresponding to the selected dates (1 June to 31 July 2022, 12 pentads, 5-day intervals) and are organized in a format directly interpretable by the neural network. For each pixel, spectral data from multiple acquisition dates were combined into a matrix capturing both spectral and temporal information. This two-dimensional representation (Figure 8), where one dimension represents the temporal sequence and the other represents the spectral bands, reduces the complexity of the original data and enhances the network’s ability to distinguish between classes during training and detection. Each pixel’s temporal evolution across the period is, thus, explicitly encoded and leveraged in the classification process. The resulting images formed the dataset used for the training, validation, and testing phases of the model, as well as for its direct application during the operational deployment of the automatic photovoltaic installation detection system. The entire process, from image download to the generation of synthetic images, took a total of 96.1 h.
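For clarity, the construction of this per-pixel representation can be sketched as follows, assuming each pixel's cleaned band values are available per pentad; the 12 × 12 layout follows the description above, while the flattening into a feature vector for the classifier is an assumption of this sketch.

    import numpy as np

    N_PENTADS = 12  # 1 June - 31 July 2022, 5-day intervals
    N_BANDS = 12    # Sentinel-2 bands used (rescaled to 10 m, normalized to [0, 1])

    def pixel_matrix(band_series):
        """Build the (time x band) matrix for one pixel.

        band_series[t] is the length-12 vector of band reflectances at pentad t,
        already cloud/shadow-filtered and gap-interpolated.
        """
        matrix = np.zeros((N_PENTADS, N_BANDS), dtype=np.float32)
        for t in range(N_PENTADS):
            matrix[t, :] = band_series[t]
        return matrix

    # Flattened profile (12 x 12 = 144 values) used as the pixel's feature vector:
    # features = pixel_matrix(band_series).ravel()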

2.3.2. Neural Network Training and Hyperparameter Tuning

To evaluate the performance of the solar panel detection model, the dataset was divided into two subsets: 70% for training and 30% for validation. This ratio is widely used in machine learning as it allows the model to learn general patterns from the training set, while the validation set is used to tune hyperparameters and prevent overfitting. Finally, a test set is used to assess the model’s final performance on unseen data. This strategy is based on previous deep learning optimization studies, such as those presented in [30,31].
The solar panel detection model was implemented using the Keras library with TensorFlow in the Python programming language. A sequential dense neural network was trained using pixels from 227,121 parcels as inputs, with positive examples corresponding to solar panels and negative examples including vegetation, buildings, and water bodies. Hyperparameters were optimized via a grid search [22] over the study period (1 June–31 July 2022) to ensure temporal consistency with the orthophotos.
Table 2 presents the 10 best-performing models obtained after testing 73 combinations through the grid search procedure, where the architecture parameters (number of neurons per layer), batch size, and learning rate were varied. The selected models are ranked by their F-score, a metric that combines precision and recall into a single measure, offering a balanced and robust evaluation of the model’s performance in classification tasks. The full list of the 73 evaluated models is included in Appendix A, Table A1.
The best-performing model, with two hidden layers of 64 and 32 neurons, a batch size of 128, and a learning rate of 0.0005, achieved an F-score of 0.9794, precision of 0.9922, recall of 0.9669, overall accuracy of 0.9822, and a low error rate (1.56% false negatives, 0.25% false positives). This model was selected for its reliability and robustness for large-scale, pixel-wise detection of photovoltaic installations.
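A minimal Keras sketch of a network with this best-performing configuration (two hidden layers of 64 and 32 neurons, batch size 128, learning rate 0.0005) is given below; the input dimensionality, activation functions, optimizer, and number of epochs are assumptions of this sketch, as they are not fixed by the description above.

    from sklearn.model_selection import train_test_split
    from tensorflow import keras

    N_FEATURES = 144  # assumed: 12 bands x 12 pentads per pixel

    def build_model(n_features: int = N_FEATURES) -> keras.Model:
        model = keras.Sequential([
            keras.layers.Input(shape=(n_features,)),
            keras.layers.Dense(64, activation="relu"),
            keras.layers.Dense(32, activation="relu"),
            keras.layers.Dense(1, activation="sigmoid"),  # PV / non-PV probability
        ])
        model.compile(
            optimizer=keras.optimizers.Adam(learning_rate=0.0005),
            loss="binary_crossentropy",
            metrics=["accuracy",
                     keras.metrics.Precision(name="precision"),
                     keras.metrics.Recall(name="recall")],
        )
        return model

    # X: per-pixel multitemporal features, y: 1 for PV pixels, 0 otherwise
    # X_train, X_val, y_train, y_val = train_test_split(X, y, test_size=0.3)
    # model = build_model()
    # model.fit(X_train, y_train, validation_data=(X_val, y_val),
    #           epochs=50, batch_size=128)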
Using this model—hereafter referred to as the optimal model—a pixel-by-pixel prediction was carried out over the 227,121 agricultural plots (an example is shown in Figure 9), with a total area of 594,510.12 hectares. When processing the Sentinel-2 pixels (10 m × 10 m), the total area analyzed amounted to 494,073.12 hectares. This difference is due to the exclusion of pixels that were not entirely within the geometric boundaries of each plot, thereby ensuring that only spectral data from within each parcel were analyzed. As a result, plots that were too small (e.g., 0.01 ha) or had irregular shapes in which no full pixels could fit were discarded from the analysis.
The detection results using the optimal model identified 97 plots containing solar panels, covering a total of 3577.95 hectares. Considering only those pixels classified as photovoltaic panels, the actual area of solar panels amounted to 1366.59 hectares. The time required for parcel-level identification was 75.2 h. When adding the total time spent from image download to identification, the overall processing time was 171.4 h.

3. Results and Discussion

This study addressed the detection of photovoltaic infrastructures using two distinct methodological approaches: semantic segmentation on high-resolution RGB orthophotos, and pixel-wise spectral classification based on multitemporal Sentinel-2 imagery. Both methods were applied over the same study area, allowing for a direct quantitative and qualitative comparison of the results. The full analysis of both segmentation results and pixel-wise classification is available in the dataset: https://doi.org/10.5281/zenodo.16261507 (accessed on 24 September 2025).

3.1. Comparative Analysis of Detection Performance

The pretrained “Solar PV Segmentation” model [23] achieved an F1-score of 95.27% and an IoU of 91.04% across orthophotos (Section 2.2). For pixel-wise classification (Section 2.3), the best-performing neural network reached a precision of 0.9922, a recall of 0.9669, an overall accuracy of 0.9822, and a loss of 0.0420. Importantly, the selection also considered misclassification rates: this model achieved the lowest false negative rate (1.56%) and false positive rate (0.25%), ensuring robust detection while minimizing both overlooked photovoltaic plots and incorrect identifications.
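These figures follow the standard confusion-matrix definitions; the small helper below (with hypothetical counts as arguments) makes the relationships explicit and reproduces the reported F-score from the published precision and recall.

    def classification_metrics(tp: int, fp: int, tn: int, fn: int) -> dict:
        """Standard pixel-wise metrics used to rank the candidate models."""
        precision = tp / (tp + fp)
        recall = tp / (tp + fn)
        f_score = 2 * precision * recall / (precision + recall)
        overall_accuracy = (tp + tn) / (tp + fp + tn + fn)
        return {"precision": precision, "recall": recall,
                "f_score": f_score, "overall_accuracy": overall_accuracy}

    # Check against the reported values: precision 0.9922 and recall 0.9669 give
    # F-score = 2 * 0.9922 * 0.9669 / (0.9922 + 0.9669) ~= 0.9794.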
Orthophoto segmentation identified 101 parcels totaling 4081.38 hectares (1325.82 hectares effectively covered by PV), whereas Sentinel-2 classification detected 97 parcels covering 3577.95 hectares (1366.59 hectares of PV).
Figure 10 and Figure 11 illustrate representative parcels from the four counties of Badajoz, including both those containing photovoltaic panels and those without installations. Figure 10 shows positive cases where PV systems were successfully detected by both the segmentation and pixel-wise classification approaches, while Figure 11 presents negative cases corresponding to parcels without PV installations.

3.2. Analysis of Error Patterns and Methodological Limitations

Both approaches coincided in the detection of 92 parcels, representing 86.79% of the total unique parcels detected (106 in total). The average absolute difference in surface area detected per parcel between the two methods was 6.67 hectares, demonstrating a high level of agreement, though not without deviations attributable to the specific characteristics of each technique.
Differences between the two approaches were evident in error types. For orthophoto segmentation:
  • Hedgerow olive groves: These plantations exhibit a regular and densely aligned pattern that visually resembles photovoltaic installations. The model frequently confused these crops with solar panels due to the similarity in texture and layout (Figure 12). Five parcels containing hedgerow olive trees were falsely detected by the segmentation approach, whereas none were detected by the pixel-wise approach.
  • Water bodies: Reservoirs, rivers, and ponds caused specular reflections in the RGB orthophotos, which led the model to mistakenly label them as solar installations (Figure 13). Three water-body parcels were falsely detected by the segmentation approach, whereas none were detected by the pixel-wise approach.
  • Image perspective: In some orthophotos, solar panels appeared almost vertically relative to the image capture angle, making them harder to detect visually (Figure 14a). These installations were partially missed by the segmentation model but correctly identified by Sentinel-2 thanks to their distinctive spectral signature (Figure 14b). Both approaches detected the two affected parcels, but the pixel-by-pixel approach delineated them much more accurately.
  • Temporal obsolescence: Some installations built between June and July 2022 were not yet visible in the orthophotos used. In contrast, the Sentinel-2 images, with their high temporal frequency, allowed the model to capture the spectral changes caused by the newly installed panels, enabling their detection (Figure 15). One such parcel was detected only by the pixel-wise Sentinel-2 approach.
Although the Sentinel-2–based detection approach demonstrated strong overall performance, two notable limitations were identified:
  • Small and dispersed installations: The 10 m spatial resolution did not provide sufficient spectral detail, resulting in missed detections of smaller or irregularly distributed photovoltaic systems (Figure 16). Three dispersed installations were detected with both techniques, but the segmentation approach achieved greater accuracy.
  • Urban environments: While effective in identifying large, clustered installations, the method frequently overestimated the surface area due to its limited capacity to delineate individual rooftop systems with precision (Figure 17). Two urban environments were detected with both techniques, but the segmentation approach achieved greater accuracy.

3.3. Trade-Offs: Spatial vs. Temporal-Spectral Resolution

The comparison between orthophoto segmentation and Sentinel-2 pixel-wise classification highlights fundamental trade-offs between spatial and temporal-spectral resolution. High-resolution RGB orthophotos provide fine spatial detail, enabling precise delineation of PV panels, even for small installations or complex urban rooftops. However, this precision comes at the cost of large file sizes, extensive preprocessing (cropping, mosaicking, georeferencing), and high computational demands. Moreover, orthophotos are usually acquired on specific dates, which may not coincide with the construction of new PV installations, potentially missing recently built infrastructure.
Sentinel-2 imagery, in contrast, offers a coarser spatial resolution (10 m) but compensates with multispectral data across all its bands and a frequent revisit cycle (every 5 days in Europe). This temporal richness allows multitemporal detection of PV panels, capturing spectral changes over time and enabling identification of newly installed or partially shaded panels that might be missed in single-date orthophotos. Additionally, the fully automated pipeline for Sentinel-2 reduces human intervention and computational load, making it more scalable for large-area monitoring.
These differences imply that the choice of methodology depends on the study objectives. For applications requiring parcel-level accuracy and precise boundary delineation (e.g., cadastral updates or detailed infrastructure surveys), orthophoto segmentation is preferable. Conversely, for regional or national-scale monitoring where automation, up-to-date information, and repeated observations are critical, Sentinel-2 offers a more practical solution. This trade-off between precision and temporal-spectral richness is central to designing efficient PV monitoring systems.

3.4. Implications for Large-Scale PV Monitoring and Policy Making

The findings of this study have direct implications for sustainable energy management, regulatory monitoring, and policy formulation. Sentinel-2’s temporal and multispectral capabilities enable automated, frequent, and cost-effective monitoring of PV installations, providing decision-makers with up-to-date information on renewable energy infrastructure. This supports strategic planning for grid integration, identification of underperforming or newly installed PV plants, and evaluation of regional energy targets.
By integrating multitemporal satellite observations, authorities can track construction timelines and infrastructure expansion, which is valuable for both operational management and long-term energy planning. The approach also demonstrates the potential for combining open-access satellite data with machine learning models to reduce reliance on costly, high-resolution aerial imagery, aligning with principles of sustainable resource management and energy monitoring.
Orthophoto-based segmentation, while less scalable, remains important for validating large-scale remote sensing results and for applications that demand high spatial accuracy, such as precise assessment of PV coverage per parcel or urban-scale energy auditing. Together, these complementary methods provide a framework that balances scalability, precision, and temporal relevance, informing both scientific research and practical decision-making in sustainable energy deployment.
At the same time, we acknowledge several methodological and contextual limitations. The study is geographically focused on Extremadura, Spain. Expanding the methodology to other regions, incorporating higher temporal resolution imagery, or combining data sources could further improve robustness and applicability. This positions the study as a foundation for scalable, replicable systems that monitor solar energy infrastructure in support of sustainability goals.
Furthermore, several of the photovoltaic installations identified in this study via orthophoto segmentation have been the subject of field visits by technicians from the Agricultural Department of the Regional Government of Extremadura. These visits, conducted as part of their routine inspection tasks, provided indirect yet valuable confirmation of the detected sites, adding an empirical layer of validation to the results and reinforcing the robustness of the methodological comparison.
Future work could explore the integration of additional open-access satellite data sources, multi-sensor fusion, or larger and more diverse datasets to extend the generalizability of our findings and improve PV detection across different environmental and geographic contexts.

4. Conclusions

This study compared two approaches for detecting photovoltaic (PV) installations: semantic segmentation of high-resolution RGB orthophotos and pixel-wise multitemporal spectral classification using Sentinel-2 imagery. Both methods were applied to the same set of agricultural parcels in Extremadura to evaluate whether Sentinel-2 alone can reliably complement or replace orthophoto-based approaches.
Results show that Sentinel-2 provides competitive detection performance despite its coarser resolution (10 m), leveraging its multispectral bands and frequent revisits to capture multitemporal patterns. Both methods coincided in detecting 86.79% of the parcels, with an average surface difference of approximately 6.7 hectares per parcel, reflecting strong agreement. Orthophoto segmentation offers high spatial precision but requires substantial computational resources and manual preprocessing and is prone to false positives in visually complex areas. Sentinel-2 enables fully automated, scalable detection with lower computational demands, though small or scattered installations may be missed. Moreover, several of the installations detected through orthophoto segmentation were indirectly confirmed during field inspections by technicians from the Agricultural Department of the Regional Government of Extremadura, providing additional empirical support to the results.
This study demonstrates a practical, transferable, and replicable methodology using open-access data for large-scale PV monitoring. The approach supports sustainable land management, energy planning, environmental assessment, and policy development, particularly in regions with limited access to high-resolution imagery. Future work could expand the methodology to other regions, integrate additional satellite data sources, or combine data for improved detection robustness.

Author Contributions

A.L.-T., P.J.C. and A.C.-M. conceived and designed the framework of the study. A.C.-M. and J.L. completed the data collection and processing. A.L.-T., P.J.C. and A.C.-M. completed the algorithm design and the data analysis and were the lead authors of the manuscript. All authors have read and agreed to the published version of the manuscript.

Funding

This work was partially funded by the European Union’s Next Generation funds, coordinated at the Spanish level by the Recovery, Transformation and Resilience Plan. Also, this work has been co-funded by the European Regional Development Fund and the Regional Government of Extremadura (Ministry of Education, Science and Vocational Training) through grant GR24099.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

The original data presented in this study are openly available on the Zenodo platform. The coordinates of the parcels in the Badajoz region used for the 2022 solar panel identification with both the segmentation and pixel-by-pixel approaches are available at https://doi.org/10.5281/zenodo.16261223 (accessed on 24 September 2025). The results of the photovoltaic infrastructure detection (Badajoz, Spain, 2022) are available at https://doi.org/10.5281/zenodo.16261507 (accessed on 24 September 2025).

Acknowledgments

We thank the Regional Government of Extremadura (Junta de Extremadura, Consejería de Economía e Infraestructuras), Spain, for providing the data used in this study. We are also grateful to its technical staff for their collaboration in supervising the analysis process and for carrying out field visits that indirectly validated several of the detected photovoltaic installations.

Conflicts of Interest

The authors declare no conflicts of interest.

Appendix A

Table A1. Resulting values from the Grid Search performed for the period between 1 June 2022 and 31 July 2022. The best model found achieved an accuracy of 0.982226, a precision of 0.9922, a loss of 0.04204, a recall of 0.96695, and an F-score of 0.9794. Additionally, it reported a false negative rate of 1.56% and a false positive rate of 0.25%. This model is listed in the first row of the table.
Neurons per Layer | Batch Size | Learning Rate | Loss | Overall Accuracy | Average Accuracy | Precision | Recall | F-Score | False Negatives | False Positives
64-32 | 128 | 0.0005 | 0.042 | 0.9899 | 0.9822 | 0.9923 | 0.967 | 0.9795 | 15,699 | 0.2504
128-64 | 128 | 0.0005 | 0.0408 | 0.9879 | 0.9773 | 0.9952 | 0.9561 | 0.9752 | 43,923 | 0.1535
16-8 | 64 | 0.0005 | 0.103 | 0.9841 | 0.9706 | 0.9926 | 0.9434 | 0.9674 | 56,558 | 0.2341
32-16 | 32 | 0.001 | 0.0803 | 0.9835 | 0.9714 | 0.9864 | 0.9472 | 0.9664 | 52,788 | 0.4346
64-32-16 | 64 | 0.0001 | 0.0725 | 0.9836 | 0.9694 | 0.9928 | 0.9411 | 0.9663 | 58,890 | 0.2274
32-16 | 32 | 0.0005 | 0.0938 | 0.9822 | 0.9665 | 0.9931 | 0.9352 | 0.9633 | 64,790 | 0.2178
64-32-16 | 32 | 0.0001 | 0.0596 | 0.9821 | 0.9677 | 0.9887 | 0.939 | 0.9632 | 60,991 | 0.3588
32-16 | 64 | 0.0005 | 0.1032 | 0.9821 | 0.965 | 0.9974 | 0.9309 | 0.963 | 69,107 | 0.0816
16-8 | 64 | 0.001 | 0.0931 | 0.9817 | 0.9671 | 0.9884 | 0.9379 | 0.9625 | 62,056 | 0.3665
128-64-32 | 128 | 0.0005 | 0.0776 | 0.9814 | 0.9638 | 0.9971 | 0.9285 | 0.9616 | 71,468 | 0.0911
64-32 | 64 | 0.0001 | 0.1009 | 0.9809 | 0.9628 | 0.9968 | 0.9266 | 0.9604 | 73,396 | 0.0988
64-32 | 64 | 0.001 | 0.1438 | 0.9805 | 0.9622 | 0.996 | 0.9256 | 0.9595 | 74,403 | 0.1238
16-8 | 128 | 0.001 | 0.125 | 0.9803 | 0.9618 | 0.9961 | 0.9248 | 0.9591 | 75,238 | 0.1218
16-8 | 32 | 0.0001 | 0.0816 | 0.9801 | 0.9612 | 0.9966 | 0.9234 | 0.9586 | 76,562 | 0.1055
128-64 | 128 | 0.0001 | 0.0573 | 0.9798 | 0.9605 | 0.9972 | 0.9219 | 0.9581 | 78,145 | 0.0854
16-8-4 | 64 | 0.001 | 0.065 | 0.9795 | 0.9624 | 0.9889 | 0.9283 | 0.9576 | 71,698 | 0.3483
128-64 | 64 | 0.0001 | 0.0786 | 0.9795 | 0.96 | 0.9969 | 0.921 | 0.9575 | 78,951 | 0.0959
64-32 | 128 | 0.0001 | 0.1537 | 0.9794 | 0.9598 | 0.9969 | 0.9205 | 0.9572 | 79,469 | 0.094
16-8 | 128 | 0.0005 | 0.0908 | 0.9794 | 0.9596 | 0.9974 | 0.9201 | 0.9572 | 79,930 | 0.0796
128-64 | 32 | 0.0001 | 0.1123 | 0.9792 | 0.9607 | 0.9928 | 0.9236 | 0.957 | 76,361 | 0.2235
64-32 | 32 | 0.0001 | 0.1038 | 0.9788 | 0.9585 | 0.9973 | 0.9178 | 0.9559 | 82,204 | 0.0825
64-32 | 64 | 0.0005 | 0.113 | 0.9787 | 0.9583 | 0.9974 | 0.9174 | 0.9557 | 82,607 | 0.0806
128-64 | 64 | 0.0005 | 0.1647 | 0.9787 | 0.9577 | 0.9992 | 0.9156 | 0.9555 | 84,449 | 0.0249
64-32-16 | 128 | 0.0001 | 0.1331 | 0.9782 | 0.9578 | 0.9957 | 0.917 | 0.9547 | 83,038 | 0.1324
32-16 | 64 | 0.001 | 0.1275 | 0.9782 | 0.9583 | 0.9936 | 0.9186 | 0.9546 | 81,398 | 0.1967
16-8 | 64 | 0.0001 | 0.0706 | 0.9773 | 0.9651 | 0.9676 | 0.9407 | 0.954 | 59,321 | 10,487
8-4 | 64 | 0.001 | 0.1169 | 0.9777 | 0.9567 | 0.9959 | 0.9146 | 0.9535 | 85,370 | 0.1257
128-64-32 | 32 | 0.0001 | 0.1054 | 0.9776 | 0.9564 | 0.9961 | 0.9139 | 0.9532 | 86,089 | 0.1199
8-4 | 128 | 0.001 | 0.1091 | 0.9776 | 0.9559 | 0.9976 | 0.9124 | 0.9531 | 87,557 | 0.072
128-64-32 | 32 | 0.001 | 0.1618 | 0.9773 | 0.955 | 0.9985 | 0.9105 | 0.9525 | 89,486 | 0.0461
128-64-32 | 32 | 0.0005 | 0.1038 | 0.9772 | 0.9556 | 0.996 | 0.9123 | 0.9523 | 87,672 | 0.1218
128-64 | 32 | 0.001 | 0.1067 | 0.9769 | 0.9544 | 0.9982 | 0.9093 | 0.9517 | 90,723 | 0.0547
128-64-32 | 64 | 0.0001 | 0.107 | 0.9768 | 0.9556 | 0.9934 | 0.9132 | 0.9516 | 86,809 | 0.2034
16-8-4 | 128 | 0.0005 | 0.063 | 0.9766 | 0.9545 | 0.9962 | 0.9101 | 0.9512 | 89,889 | 0.117
32-16 | 64 | 0.0001 | 0.0978 | 0.9762 | 0.9574 | 0.9838 | 0.9199 | 0.9508 | 80,131 | 0.5056
64-32 | 32 | 0.001 | 0.1276 | 0.9756 | 0.9588 | 0.9761 | 0.925 | 0.9499 | 74,950 | 0.7541
16-8-4 | 32 | 0.0001 | 0.2235 | 0.9753 | 0.9592 | 0.9726 | 0.9272 | 0.9494 | 72,792 | 0.8712
64-32 | 128 | 0.001 | 0.196 | 0.9757 | 0.9518 | 0.999 | 0.9038 | 0.949 | 96,192 | 0.0297
32-16 | 128 | 0.0001 | 0.0818 | 0.9739 | 0.9725 | 0.9292 | 0.9696 | 0.9489 | 30,423 | 24,638
64-32-16 | 32 | 0.001 | 0.0907 | 0.9757 | 0.9516 | 0.9992 | 0.9035 | 0.9489 | 96,509 | 0.024
64-32-16 | 128 | 0.0005 | 0.1641 | 0.9756 | 0.952 | 0.9978 | 0.9046 | 0.9489 | 95,415 | 0.0662
16-8 | 32 | 0.001 | 0.1804 | 0.9754 | 0.951 | 0.9993 | 0.9022 | 0.9483 | 97,804 | 0.0211
8-4 | 32 | 0.001 | 0.0848 | 0.9742 | 0.9544 | 0.9808 | 0.9149 | 0.9467 | 85,139 | 0.5968
64-32-16 | 64 | 0.001 | 0.1185 | 0.9745 | 0.9504 | 0.9956 | 0.9022 | 0.9466 | 97,833 | 0.1334
128-64-32 | 128 | 0.0001 | 0.1326 | 0.9744 | 0.9496 | 0.9974 | 0.9 | 0.9462 | 99,963 | 0.0796
128-64-32 | 64 | 0.0005 | 0.1346 | 0.974 | 0.9488 | 0.9972 | 0.8985 | 0.9453 | 101,517 | 0.0835
16-8 | 128 | 0.0001 | 0.0816 | 0.9712 | 0.9678 | 0.9264 | 0.9611 | 0.9434 | 38,914 | 25,444
64-32-16 | 64 | 0.0005 | 0.2151 | 0.9731 | 0.9467 | 0.9988 | 0.8937 | 0.9433 | 106,324 | 0.0365
32-16 | 32 | 0.0001 | 0.1785 | 0.973 | 0.9466 | 0.9983 | 0.8936 | 0.9431 | 106,352 | 0.0508
128-64 | 64 | 0.001 | 0.1551 | 0.973 | 0.9464 | 0.9989 | 0.893 | 0.943 | 106,957 | 0.0336
8-4 | 128 | 0.0001 | 0.0776 | 0.9727 | 0.9464 | 0.9968 | 0.8937 | 0.9425 | 106,266 | 0.095
16-8-4 | 32 | 0.001 | 0.1162 | 0.9722 | 0.9447 | 0.999 | 0.8896 | 0.9411 | 110,382 | 0.0307
128-64 | 128 | 0.001 | 0.2221 | 0.9721 | 0.9446 | 0.9991 | 0.8894 | 0.9411 | 110,612 | 0.0269
8-4 | 32 | 0.0001 | 0.1152 | 0.9719 | 0.9464 | 0.9916 | 0.8952 | 0.941 | 104,769 | 0.2514
128-64-32 | 128 | 0.001 | 0.1669 | 0.972 | 0.9446 | 0.998 | 0.8898 | 0.9408 | 110,209 | 0.0585
16-8-4 | 32 | 0.0005 | 0.1101 | 0.9718 | 0.9445 | 0.9969 | 0.8898 | 0.9403 | 110,152 | 0.0931
8-4 | 64 | 0.0001 | 0.0902 | 0.9716 | 0.9442 | 0.9964 | 0.8894 | 0.9399 | 110,555 | 0.1065
128-64 | 32 | 0.0005 | 0.2121 | 0.9711 | 0.9427 | 0.9982 | 0.886 | 0.9387 | 113,980 | 0.0547
8-4 | 128 | 0.0005 | 0.1357 | 0.971 | 0.9425 | 0.9981 | 0.8855 | 0.9384 | 114,469 | 0.0566
128-64-32 | 64 | 0.001 | 0.2014 | 0.9705 | 0.9412 | 0.9995 | 0.8826 | 0.9374 | 117,434 | 0.0144
16-8-4 | 64 | 0.0001 | 0.139 | 0.9702 | 0.941 | 0.9981 | 0.8825 | 0.9368 | 117,520 | 0.0547
16-8-4 | 128 | 0.0001 | 0.1168 | 0.9699 | 0.94 | 0.999 | 0.8803 | 0.9359 | 119,650 | 0.0307
64-32-16 | 32 | 0.0005 | 0.2001 | 0.9698 | 0.9398 | 0.9991 | 0.8799 | 0.9357 | 120,082 | 0.0259
16-8-4 | 64 | 0.0005 | 0.0854 | 0.9693 | 0.9393 | 0.998 | 0.8791 | 0.9348 | 120,888 | 0.0576
64-32 | 32 | 0.0005 | 0.261 | 0.9692 | 0.9385 | 0.9995 | 0.8772 | 0.9344 | 122,787 | 0.0144
16-8 | 32 | 0.0005 | 0.1879 | 0.9686 | 0.9373 | 0.9994 | 0.8749 | 0.933 | 125,148 | 0.0173
32-16 | 128 | 0.001 | 0.2254 | 0.9672 | 0.9344 | 0.9998 | 0.869 | 0.9298 | 131,048 | 0.0067
8-4 | 64 | 0.0005 | 0.112 | 0.9661 | 0.9328 | 0.9982 | 0.8661 | 0.9275 | 133,926 | 0.0508
32-16 | 128 | 0.0005 | 0.1501 | 0.9616 | 0.9588 | 0.8993 | 0.9532 | 0.9255 | 46,830 | 35,566
8-4 | 32 | 0.0005 | 0.1584 | 0.965 | 0.9305 | 0.9985 | 0.8613 | 0.9249 | 138,675 | 0.0422
64-32-16 | 128 | 0.001 | 0.224 | 0.9642 | 0.9285 | 1.0000 | 0.857 | 0.923 | 143,022 | 0
16-8-4 | 128 | 0.001 | 0.1157 | 0.953 | 0.9416 | 0.8956 | 0.919 | 0.9072 | 80,995 | 35,710
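For illustration, the grid search summarized in Table A1 can be sketched as follows. This is not the authors' implementation: it assumes Keras/TensorFlow, fully connected networks over 144 per-pixel features (12 bands × 12 pentads), and uses random placeholder data in place of the real training and validation sets.

```python
# Minimal grid-search sketch over layer sizes, batch size, and learning rate (cf. Table A1).
# Placeholder random data stands in for the real 12-band x 12-pentad pixel features.
import itertools
import numpy as np
import tensorflow as tf
from sklearn.metrics import accuracy_score, f1_score, precision_score, recall_score

rng = np.random.default_rng(0)
X_train, y_train = rng.random((2000, 144)), rng.integers(0, 2, 2000)  # placeholder features/labels
X_val, y_val = rng.random((500, 144)), rng.integers(0, 2, 500)

layer_configs = [(64, 32), (128, 64), (16, 8), (32, 16),
                 (64, 32, 16), (128, 64, 32), (16, 8, 4), (8, 4)]
batch_sizes = [32, 64, 128]
learning_rates = [1e-3, 5e-4, 1e-4]

def build_model(hidden, lr, n_features=144):
    model = tf.keras.Sequential([tf.keras.layers.Input(shape=(n_features,))])
    for units in hidden:
        model.add(tf.keras.layers.Dense(units, activation="relu"))
    model.add(tf.keras.layers.Dense(1, activation="sigmoid"))  # binary output: PV / no PV
    model.compile(optimizer=tf.keras.optimizers.Adam(learning_rate=lr),
                  loss="binary_crossentropy", metrics=["accuracy"])
    return model

results = []
for hidden, batch, lr in itertools.product(layer_configs, batch_sizes, learning_rates):
    model = build_model(hidden, lr)
    model.fit(X_train, y_train, epochs=5, batch_size=batch, verbose=0)  # few epochs: sketch only
    y_pred = (model.predict(X_val, verbose=0).ravel() >= 0.5).astype(int)
    results.append({
        "neurons": hidden, "batch": batch, "lr": lr,
        "overall_accuracy": accuracy_score(y_val, y_pred),
        "precision": precision_score(y_val, y_pred, zero_division=0),
        "recall": recall_score(y_val, y_pred, zero_division=0),
        "f_score": f1_score(y_val, y_pred, zero_division=0),
    })

best = max(results, key=lambda r: r["f_score"])
print(best)
```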

References

1. European Commission. The European Green Deal. Available online: https://ec.europa.eu/info/publications/communication-european-green-deal_en (accessed on 10 July 2025).
2. Eurostat. Renewable Energy Statistics. Available online: https://ec.europa.eu/eurostat/statistics-explained/index.php?title=Renewable_energy_statistics (accessed on 10 July 2025).
3. Red Eléctrica Española (REE). API de Datos del Sistema Eléctrico. Available online: https://www.ree.es/en/datos/apidata (accessed on 10 July 2025).
4. Sun, T.; Shan, M.; Rong, X.; Yang, X. Estimating the spatial distribution of solar photovoltaic power generation potential on different types of rural rooftops using a deep learning network applied to satellite images. Appl. Energy 2022, 315, 119025.
5. Tiwari, A.; Meir, I.A.; Karnieli, A. Object-Based Image Procedures for Assessing the Solar Energy Photovoltaic Potential of Heterogeneous Rooftops Using Airborne LiDAR and Orthophoto. Remote Sens. 2020, 12, 223.
6. Kouyama, T.; Imamoglu, N.; Imai, M.; Nakamura, R. Verifying Rapid Increasing of Mega-Solar PV Power Plants in Japan by Applying a CNN-Based Classification Method to Satellite Images. In Proceedings of the IEEE International Geoscience and Remote Sensing Symposium, Waikoloa, HI, USA, 26 September–2 October 2020; pp. 4104–4107.
7. Malof, J.M.; Hou, R.; Collins, L.M.; Bradbury, K.; Newell, R. Automatic solar photovoltaic panel detection in satellite imagery. In Proceedings of the 2015 International Conference on Renewable Energy Research and Applications (ICRERA), Palermo, Italy, 22–25 November 2015; pp. 1428–1431.
8. Zhu, R.; Guo, D.; Wong, M.S.; Qian, Z.; Chen, M.; Yang, B.; Chen, B.; Zhang, H.; You, L.; Heo, J.; et al. Deep solar PV refiner: A detail-oriented deep learning network for refined segmentation of photovoltaic areas from satellite imagery. Int. J. Appl. Earth Obs. Geoinf. 2023, 116, 103134.
9. Wang, S.; Cai, B.; Hou, D.; Liu, Q.; Zheng, X.; Wang, J.; Shao, Z. Uncovering the location of photovoltaic power plants using heterogeneous remote sensing imagery. Energy AI 2025, 21, 100527.
10. Zhao, Z.; Chen, Y.; Li, K.; Ji, W.; Sun, H. Extracting Photovoltaic Panels From Heterogeneous Remote Sensing Images With Spatial and Spectral Differences. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 2024, 17, 5553–5564.
11. Schlosser, A.; Szabó, G.; Bertalan, L.; Varga, Z.; Enyedi, P.; Szabó, S. Building Extraction Using Orthophotos and Dense Point Cloud Derived from Visual Band Aerial Imagery Based on Machine Learning and Segmentation. Remote Sens. 2020, 12, 2397.
12. Li, J.; Roy, D. A Global Analysis of Sentinel-2A, Sentinel-2B and Landsat-8 Data Revisit Intervals and Implications for Terrestrial Monitoring. Remote Sens. 2017, 9, 902.
13. Zhang, X.; Xu, M.; Wang, S.; Huang, Y.; Xie, Z. Mapping photovoltaic power plants in China using Landsat, random forest, and Google Earth Engine. Earth Syst. Sci. Data 2022, 14, 3743–3755.
14. Dui, Z.; Huang, Y.; Jin, J.; Gu, Q. Automatic detection of photovoltaic facilities from Sentinel-2 observations by the enhanced U-Net method. J. Appl. Rem. Sens. 2023, 17, 014516.
15. Redmon, J.; Divvala, S.; Girshick, R.; Farhadi, A. You Only Look Once: Unified, Real-Time Object Detection. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Las Vegas, NV, USA, 27–30 June 2016; pp. 779–788.
16. Wang, C.Y.; Bochkovskiy, A.; Liao, H.Y.M. YOLOv7: Trainable Bag-of-Freebies Sets New State-of-the-Art for Real-Time Object Detectors. In Proceedings of the 2023 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), Vancouver, BC, Canada, 17–24 June 2023; pp. 7464–7475.
17. Breiman, L. Random Forests. Mach. Learn. 2001, 45, 5–32.
18. Lee, S.-H.; Yoon, D.-H.; Lee, S.-K.; Oh, K.-Y.; Lee, M.-J. Development of a Technique for Classifying Photovoltaic Panels Using Sentinel-1 and Machine Learning. J. Sens. 2022, 2022, 1121971.
19. Zhang, H.; Tian, P.; Zhong, J.; Liu, Y.; Li, J. Mapping Photovoltaic Panels in Coastal China Using Sentinel-1 and Sentinel-2 Images and Google Earth Engine. Remote Sens. 2023, 15, 3712.
20. Wang, J.; Liu, J.; Li, L. Detecting Photovoltaic Installations in Diverse Landscapes Using Open Multi-Source Remote Sensing Data. Remote Sens. 2022, 14, 6296.
21. Wang, J.; Chen, X.; Shi, T.; Hu, L.; Shi, W.; Du, Z.; Zhang, X.; Zhang, H.; Zeng, Y.; Hua, L.; et al. Mapping national-scale photovoltaic power stations using a novel enhanced photovoltaic index and evaluating carbon reduction benefits. Energy Convers. Manag. 2024, 318, 118894.
22. Bergstra, J.; Bengio, Y. Random search for hyper-parameter optimization. J. Mach. Learn. Res. 2012, 13, 281–305.
23. Kleebauer, M.; Marz, C.; Reudenbach, C.; Braun, M. Multi-Resolution Segmentation of Solar Photovoltaic Systems Using Deep Learning. Remote Sens. 2023, 15, 5687.
24. PUTvision. QGIS-Plugin-Deepness. GitHub. 2025. Available online: https://github.com/PUTvision/qgis-plugin-deepness (accessed on 21 July 2025).
25. Chen, L.-C.; Papandreou, G.; Schroff, F.; Adam, H. Rethinking Atrous Convolution for Semantic Image Segmentation. arXiv 2017, arXiv:1706.05587.
26. Centro Nacional de Información Geográfica (CNIG). Ortofoto PNOA Máxima Actualidad. Available online: https://centrodedescargas.cnig.es/CentroDescargas/ortofoto-pnoa-maxima-actualidad (accessed on 21 July 2025).
27. Copernicus Data Space Ecosystem. OData API Documentation. Available online: https://documentation.dataspace.copernicus.eu/APIs/OData.html (accessed on 17 July 2025).
28. Lozano-Tello, A.; Fernández-Sellers, M.; Quirós, E.; Fragoso-Campón, L.; García-Martín, A.; Gutiérrez Gallego, J.A.; Muñoz, P. Crop identification by massive processing of multiannual satellite imagery for EU common agriculture policy subsidy control. Eur. J. Remote Sens. 2020, 54, 1–12.
29. Siesto, G.; Fernández-Sellers, M.; Lozano-Tello, A. Crop Classification of Satellite Imagery Using Synthetic Multitemporal and Multispectral Images in Convolutional Neural Networks. Remote Sens. 2021, 13, 3378.
30. Nguyen, Q.H.; Ly, H.-B.; Ho, L.S.; Al-Ansari, N.; Le, H.V.; Tran, V.Q.; Prakash, I.; Pham, B.T. Influence of Data Splitting on Performance of Machine Learning Models in Prediction of Shear Strength of Soil. Math. Probl. Eng. 2021, 2021, 4832864.
31. Pham, B.T.; Prakash, I.; Jaafari, A.; Bui, D.T. Spatial Prediction of Rainfall-Induced Landslides Using Aggregating One-Dependence Estimators Classifier. J. Indian Soc. Remote Sens. 2018, 46, 1457–1470.
Figure 1. Study area and its geographical context. (a) Global map with the Iberian Peninsula highlighted (40°12′30.6″ N, 3°42′46.8″ W); (b) detail of the Peninsula with Spain outlined in white and the region of Extremadura highlighted in yellow; (c) close-up of the Extremadura region.
Figure 2. Study area and its geographical context. (a) Provincial division of Extremadura into Cáceres (blue) and Badajoz (green); (b) boundaries of the four Badajoz counties used in the testing phase (orange); and (c) visualization of the analyzed parcels (in red).
Figure 3. Example of a resulting image from the fusion of two GeoTIFFs (squared in blue) corresponding to a selected parcel, outlined in red. The fusion was carried out by generating a virtual mosaic and then converting it into a georeferenced GeoTIFF image.
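The fusion step described in Figure 3 can be reproduced with GDAL. The sketch below is not the exact script used in this work; the tile file names are placeholders.

```python
# Minimal sketch (assumed tile names): build a virtual mosaic of adjacent orthophoto tiles
# and materialize it as a single georeferenced GeoTIFF.
from osgeo import gdal

tiles = ["ortho_tile_a.tif", "ortho_tile_b.tif"]           # placeholder PNOA tile names
vrt = gdal.BuildVRT("parcel_mosaic.vrt", tiles)            # virtual mosaic (no pixels duplicated)
gdal.Translate("parcel_mosaic.tif", vrt, format="GTiff")   # convert the mosaic to GeoTIFF
vrt = None                                                 # close/flush the dataset
```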
Figure 4. GeoTIFF image resulting from the geospatial clipping of a parcel, obtained after the previous fusion and reprojection process of the orthophotos. This image represents the exact area of the parcel, ready for segmentation and solar panel identification.
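The clipping illustrated in Figure 4 can likewise be sketched with gdal.Warp, using an assumed parcel vector file, a hypothetical parcel_id filter, and an assumed target CRS; none of these names come from the original workflow.

```python
# Minimal sketch (assumed file names, hypothetical attribute filter and CRS):
# reproject the orthophoto mosaic and crop it to a single parcel geometry used as a cutline.
from osgeo import gdal

gdal.Warp(
    "parcel_clip.tif",
    "parcel_mosaic.tif",
    dstSRS="EPSG:25829",               # assumed ETRS89 / UTM zone; adjust to the orthophoto zone
    cutlineDSName="parcels.gpkg",      # assumed vector layer with parcel boundaries
    cutlineWhere="parcel_id = '123'",  # hypothetical filter selecting one parcel
    cropToCutline=True,
    dstNodata=0,
)
```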
Figure 5. Example result of the segmentation and detection of photovoltaic installations on a clipped parcel derived from fused and cropped orthophotos. Solar panels identified by the model are shown in red, overlaid on the clipped image of the parcel.
Figure 6. Workflow for pixel-level detection of photovoltaic installations using Sentinel-2 multispectral imagery. The process begins with image download and preprocessing (1 June–31 July 2022), followed by pixel–parcel association, generation of synthetic images, model training, and finally large-scale solar panel identification.
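As a hedged illustration of the first step of this workflow, the Copernicus Data Space OData catalogue [27] can be queried for Sentinel-2 L2A products over the study period; the point geometry and filter below are examples, not the exact query used in this study.

```python
# Minimal sketch of an OData catalogue query (no authentication needed for catalogue search).
import requests

BASE = "https://catalogue.dataspace.copernicus.eu/odata/v1/Products"
aoi = "POINT(-6.2 38.6)"  # hypothetical location inside the Badajoz study area (lon lat)
query = (
    "Collection/Name eq 'SENTINEL-2'"
    f" and OData.CSC.Intersects(area=geography'SRID=4326;{aoi}')"
    " and ContentDate/Start ge 2022-06-01T00:00:00.000Z"
    " and ContentDate/Start lt 2022-08-01T00:00:00.000Z"
    " and contains(Name,'L2A')"
)
resp = requests.get(BASE, params={"$filter": query, "$top": "50"}, timeout=60)
resp.raise_for_status()
for product in resp.json().get("value", []):
    print(product.get("Name"), product.get("ContentDate", {}).get("Start"))
```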
Figure 7. Overlay of the 331 processed zones (gray squares) covering the study area boundaries of the four Badajoz counties used in the testing phase (orange).
Figure 8. Example of synthetic image generation for model training. For each parcel, multispectral data from 12 Sentinel-2 bands and 12 pentads (1 June–31 July 2022) were combined into a two-dimensional matrix, capturing both spectral and temporal information in a format directly interpretable by the neural network.
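A minimal sketch of this per-pixel synthetic image is shown below; the band ordering is hypothetical, and random reflectances stand in for the real per-pentad composites.

```python
# Minimal sketch: stack 12 band values x 12 pentads (1 June-31 July 2022) into a 12 x 12 matrix.
import numpy as np

BANDS = ["B01", "B02", "B03", "B04", "B05", "B06",
         "B07", "B08", "B8A", "B09", "B11", "B12"]  # assumed 12-band ordering
N_PENTADS = 12

def synthetic_image(pixel_series):
    """pixel_series: list of N_PENTADS dicts mapping band name -> reflectance for one pixel."""
    img = np.zeros((N_PENTADS, len(BANDS)), dtype=np.float32)
    for t in range(N_PENTADS):
        for b, band in enumerate(BANDS):
            img[t, b] = pixel_series[t].get(band, 0.0)  # missing observations (e.g., clouds) left at 0
    return img

# Example with random reflectances standing in for real per-pentad composites.
rng = np.random.default_rng(0)
fake_series = [{band: float(rng.uniform(0, 1)) for band in BANDS} for _ in range(N_PENTADS)]
print(synthetic_image(fake_series).shape)  # -> (12, 12)
```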
Figure 9. (a) Image generated with Sentinel-2 bands 4, 3, 2 (RGB) showing an example plot containing photovoltaic panels. (b) Example plot showing the pixels identified as solar panels (in green) and those not identified (in red).
Figure 10. (a) Cropped high-resolution image of a parcel containing PV. (b) Pixel-wise prediction showing detected PV (green pixels) and areas with no PV detected (red pixels). (c) Identification by orthophoto segmentation, with the PV installation shown in red.
Figure 11. (a) Cropped high-resolution image of a parcel without PV. (b) Pixel-wise prediction showing the absence of PV (red pixels) in the parcel (outlined in blue). (c) Identification by orthophoto segmentation confirming that no PV is present.
Figure 12. (a) Image generated with Sentinel-2 bands 4, 3, 2 (RGB) showing a parcel with hedgerow olive groves. (b) Pixel-wise prediction showing no detection of solar panels (red pixels). (c) Identification by orthophoto segmentation showing the presence of hedgerow olive groves.
Figure 13. (a) Sentinel-2 image (bands 4, 3, 2 RGB) of a graphic declaration labeled as “water surface.” (b) Pixel-wise detection for the corresponding parcel, showing no solar panels (red pixels). (c) False positive detection using orthophoto segmentation.
Figure 14. (a) Sentinel-2 image (bands 4, 3, 2 RGB) showing an example parcel. (b) Pixel-wise detection identifying photovoltaic panels (green) contrasted with the rest of the parcel (red). (c) Orthophoto of the same parcel showing a vertical orientation of the panels, making them difficult to spot from a top-down view.
Figure 15. (a) Sentinel-2 images (bands 4, 3, 2 RGB) showing the progressive installation of solar panels on a specific parcel. (b) Pixel-wise detection identifying solar panels (green) and the rest of the parcel (red). (c) Orthophoto taken before the panel installation, where the panels are not yet visible due to the image’s earlier date.
Figure 16. (a) Sentinel-2 image (bands 4, 3, 2 RGB) showing photovoltaic panels scattered irregularly across the terrain. (b) Pixel-wise detection results obtained with Sentinel-2, overlapped on the corresponding orthophoto to facilitate interpretation. Areas marked in green represent pixels classified as photovoltaic (PV) installations, while areas in red correspond to pixels where no PV was detected. This visualization highlights the challenge of mapping PV systems at Sentinel-2’s spatial resolution. (c) Orthophoto-based segmentation showing parcel identification, highlighted in red.
Figure 17. (a) Orthophoto showing correct segmentation-based detection of very small panels on a rooftop in an urban area (highlighted in red). (b) Pixel-wise detection results obtained with Sentinel-2, overlapped on the corresponding orthophoto to facilitate interpretation. Areas marked in green represent pixels classified as PV installations, while areas in red correspond to pixels where no PV was detected. This visualization highlights the challenge of mapping PV systems at Sentinel-2’s spatial resolution.
Table 1. Summary of recent studies on photovoltaic (PV) detection using remote sensing data.
Author | Data | Method | Study Area | Metrics | Limitations
Malof et al. [7] | RGB high-resolution imagery (0.3 m/px) | Feature extraction + SVM | 100 rooftops in Lemoore (USA) | Detection rate: 94% | Reliance on updated VHR images; not scalable to large areas; sensitive to solar glare.
Zhu et al. [8] | RGB Google Earth satellite imagery (0.15 m/px) | Deep Solar PV Refiner | Heilbronn (Germany), 115.6 km2 | F1-score: 90.91% | Over-smoothed PV boundaries; misclassification due to similar spatial textures between PV modules and background; reflections and sunlight; projection of tilted PV panels.
Wang et al. [9] | GF-2 (1 m/px) + Sentinel-2 (10, 20, 60 m/px) | FusionPV framework | Hubei Province (China), 185,900 km2 | Kappa: 92.38% | Spectral and spatial resolution discrepancies between datasets; environmental diversity and background complexity in large-scale PV mapping; temporal and environmental factors.
Zhao et al. [10] | GF-2 (1 m/px) + Sentinel-2 (10, 20, 60 m/px) | PV-Unet | Ordos (Inner Mongolia, China), 87,000 km2 | F1-score: 98.04% | Image conditions; dependency on high-quality, cloud-free images; need for precise annotation and preprocessing; the model may require adaptation for other regions.
Schlosser et al. [11] | RGB high-resolution imagery (0.3 m/px) | RF & SVM | Debrecen (Hungary), 76 ha | OA: 82% | Difficulty detecting smaller or partially obscured buildings; limited spectral information; accuracy varies with the complexity of the urban environment and image quality; availability is not always guaranteed.
Zhang et al. [13] | Landsat 8 (30 m/px) | RF | China | OA: 95% | Low spatial resolution; possible confusion in dispersed urban environments.
Dui et al. [14] | Sentinel-2 (10, 20, 60 m/px) | E-Unet | Diverse environments (deserts, mountains, lakes, and coastal regions across different latitudes and topographies) | OA: 98.90% | PV panels and background features with similar spectral or spatial textures lead to misdetections; large-scale PV facilities; the datasets do not fully capture all environmental variability; scalability to larger, diverse global datasets requires further validation.
Lee et al. [18] | Sentinel-1 (10 m/px) | YOLOv3 & YOLOv5 | 570 PV plants in South Korea (~250 km2) | OA: 93.17% | Focused on large-scale PV panels; limited to object detection; detection accuracy depends on topographic and weather conditions.
Zhang et al. [19] | Sentinel-1 (10 m/px) + Sentinel-2 (10, 20, 60 m/px) | RF | Coastal regions of China, 1.34 million km2 | OA: 95.07% | Sensitivity to weather conditions; salt-and-pepper noise leads to potential spatial misclassification; temporal mismatch between images.
Wang et al. [20] | Sentinel-1 (10 m/px) + Sentinel-2 (10, 20, 60 m/px) + VIIRS (500 m/px) | RF | Gansu and Zhejiang Provinces (China) | OA: 98.9% | Natural landscape features cause false positives; reflective surfaces lead to misclassification; less effective in some landscapes; VIIRS post-processing accuracy depends on the timeliness and resolution of nighttime light data.
Wang et al. [21] | Sentinel-1 (10 m/px) + Sentinel-2 (10, 20, 60 m/px) | EPVI + RF | China | OA: 97.6% | PV tilt angles; EPVI faces challenges in densely vegetated or complex land covers where PV signatures are less distinctive.
Table 2. Top 10 results obtained, sorted by F-score, after applying a GridSearch over different neural network configurations (combinations of number of layers, neurons per layer, batch size, and learning rate).
Neurons per Layer | Batch Size | Learning Rate | Loss | Overall Accuracy | Average Accuracy | Precision | Recall | F-Score | False Negatives
64-32 | 128 | 0.0005 | 0.042 | 0.9899 | 0.9822 | 0.9923 | 0.967 | 0.9795 | 15,699
128-64 | 128 | 0.0005 | 0.0408 | 0.9879 | 0.9773 | 0.9952 | 0.9561 | 0.9752 | 43,923
16-8 | 64 | 0.0005 | 0.103 | 0.9841 | 0.9706 | 0.9926 | 0.9434 | 0.9674 | 56,558
32-16 | 32 | 0.001 | 0.0803 | 0.9835 | 0.9714 | 0.9864 | 0.9472 | 0.9664 | 52,788
64-32-16 | 64 | 0.0001 | 0.0725 | 0.9836 | 0.9694 | 0.9928 | 0.9411 | 0.9663 | 58,890
32-16 | 32 | 0.0005 | 0.0938 | 0.9822 | 0.9665 | 0.9931 | 0.9352 | 0.9633 | 64,790
64-32-16 | 32 | 0.0001 | 0.0596 | 0.9821 | 0.9677 | 0.9887 | 0.939 | 0.9632 | 60,991
32-16 | 64 | 0.0005 | 0.1032 | 0.9821 | 0.965 | 0.9974 | 0.9309 | 0.963 | 69,107
16-8 | 64 | 0.001 | 0.0931 | 0.9817 | 0.9671 | 0.9884 | 0.9379 | 0.9625 | 62,056
128-64-32 | 128 | 0.0005 | 0.0776 | 0.9814 | 0.9638 | 0.9971 | 0.9285 | 0.9616 | 71,468
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
