Article

Utilizing Geospatial Data for Assessing Energy Security: Mapping Small Solar Home Systems Using Unmanned Aerial Vehicles and Deep Learning

1 Department of Electrical and Computer Engineering, Duke University, Durham, NC 27705, USA
2 Nicholas Institute for Environmental Policy Solutions, Duke University, Durham, NC 27705, USA
3 RTI International, Research Triangle Park, NC 27709, USA
4 Energy Initiative, Duke University, Durham, NC 27705, USA
* Author to whom correspondence should be addressed.
ISPRS Int. J. Geo-Inf. 2022, 11(4), 222; https://doi.org/10.3390/ijgi11040222
Submission received: 14 January 2022 / Revised: 16 March 2022 / Accepted: 20 March 2022 / Published: 24 March 2022
(This article belongs to the Special Issue Geospatial Electrification and Energy Access Planning)

Abstract

Solar home systems (SHS) are small solar panels and associated equipment that provide power to a single household; they are a cost-effective solution for rural communities far from the grid in developing countries. A crucial resource for targeting further investment of public and private resources, as well as for tracking progress toward universal electrification goals, is shared access to high-quality data on individual SHS installations, including information such as location and power capacity. Although recent studies have used satellite imagery and machine learning to detect solar panels, they struggle to accurately locate many SHS due to limited image resolution (some small solar panels occupy only a few pixels in satellite imagery). In this work, we explore the viability and cost-performance tradeoff of automatic SHS detection in unmanned aerial vehicle (UAV) imagery as an alternative to satellite imagery. More specifically, we explore three questions: (i) what is the detection performance of SHS using drone imagery; (ii) how expensive is drone data collection compared to satellite imagery; and (iii) how well does drone-based SHS detection perform in real-world scenarios? To examine these questions, we collect and publicly release a dataset of high-resolution drone imagery of SHS under a variety of real-world conditions, and we use this dataset, together with a dataset of imagery from Rwanda, to evaluate the ability of deep learning models to recognize SHS, including those too small to be reliably recognized in satellite imagery. The results suggest that UAV imagery may be a viable alternative for identifying very small SHS in terms of both detection accuracy and the financial costs of data collection. UAV-based data collection may be a practical option for supporting electricity access planning strategies for achieving sustainable development goals and for monitoring progress toward those goals.

1. Introduction

Energy access is a global issue. Sustainable development goal (SDG) 7.1, outlined by the United Nations [1], proposes to achieve “access to affordable, reliable, sustainable, and modern energy for all” by 2030. However, as the latest progress report in 2020 [2] pointed out, extrapolating from current progress towards electrification, 620 million people would still lack basic access to electricity in 2030. Although the great majority of the global population has grid-connected electricity access, the high costs of traversing challenging topographies make grid extension to remote, rural communities less affordable and less efficient [3].
In attempting to reach SDG 7.1, solar home systems (SHS) have become a promising alternative to grid extension [4]. SHS are solar-photovoltaic-based systems that provide electricity to individual homes. In contrast to solar farms, where large solar arrays are installed and the generated power is transmitted and regulated by large power grids, an SHS is typically a small solar panel fixed to the top of a household roof, together with associated equipment, and is not connected to the electric grid. Off-grid solar systems are often cost-effective solutions for rural communities far from the grid.
A crucial resource for targeting further investment of public and private resources, as well as for tracking progress toward universal electrification goals, is shared access to high-quality data on individual SHS installations, including information such as location and capacity. With this information, decision-makers can make more informed choices among electrification options, such as grid extensions, mini/micro grids, and stand-alone systems [5]. Despite the importance of such data, this information is unfortunately of limited availability, accessible only through expensive and time-consuming surveys [6] or incomplete self-reports [7].
To address this widespread obstacle to energy access tracking and planning [8], three approaches to gathering data on small SHS are generally available: (1) on-the-ground surveys (people going door-to-door to collect information), (2) information from retailers of solar panels or government agencies, or (3) remotely sensed data using overhead photography (satellites, aircraft, or UAVs). Ground surveys are usually too expensive (discussed in Section 6), and gathering data from governments or utilities is often hindered either by the fact that such data are not collected or by the lack of willingness or incentive to share them. This leaves remote sensing as a promising path to data collection. Previous studies have recognized that SHS are visible in overhead imagery, and combining remote sensing data with machine learning tools may create a suitable approach to data collection for some SHS. Using high-resolution satellite images, it has been shown that medium to large installations of solar panels can be identified with high accuracy [9,10,11,12]. However, significantly smaller SHS are used in rural areas of low- and middle-income countries: the majority of the global population using off-grid solutions relies on SHS under 50 W [13], which typically occupy an area of 0.3 m² [14]. This is a challenge even at the highest resolution of commercially available satellite imagery (typically around 0.3 m/pixel [15]) and is a major limitation for collecting location and capacity data on very small SHS. Unmanned aerial vehicles (UAVs) offer a higher-resolution alternative to satellite platforms and are a potential solution to filling this information gap. A more detailed discussion of the existing literature on solar panel detection using remote sensing data is presented in Section 2.

Contributions of This Work

In this paper, we critically analyze the viability of using drone-based aerial imagery to detect common, but physically small, energy infrastructure as an alternative to satellite imagery or manual surveys from both an assessment accuracy and cost effectiveness perspective. Our contributions closely follow the three research questions that we propose to answer: (i) what is the detection performance of SHS using UAV imagery; (ii) how expensive is the drone data collection, compared to satellite imagery; and (iii) how well does drone-based SHS detection perform in real-world scenarios?
To investigate (i), the detection performance of SHS using UAV imagery, since no public dataset exists to directly answer this question, we develop a drone-based solar panel detection dataset covering various flying altitudes and speeds and train a deep learning algorithm to detect the SHS. We then evaluate the performance and robustness of this algorithm. For research question (ii), regarding the comparative cost of UAV-based data collection, we conduct a cost/benefit analysis of UAV-based SHS data collection and compare it to common alternatives such as satellite imagery and aerial photography. For research question (iii), regarding the performance of UAV-based data analysis for SHS assessment under real-world conditions, we annotate and evaluate SHS detection performance on UAV imagery from Rwanda to quantify performance in a real-world setting.
The main contributions of this work are summarized as follows:
  • The first publicly available dataset of UAV imagery of very small (less than 100 W) SHS (Section 3). We collected, annotated, and openly shared the first UAV-based dataset of very small solar panels with precise ground sampling distance and flight altitude. The dataset contains 423 images, 60 videos, and 2019 annotated solar panel instances, with annotations suitable for training object detection or segmentation models.
  • Evaluating the robustness and detection performance of deep learning object detection for solar PV in UAV data (Section 5). We evaluate SHS detection performance with a U-Net architecture with a pre-trained ResNet50 backbone. We controlled for data collection resolution (inversely related to altitude), sampling every 10 m of altitude across an interval from 50 m to 120 m, and we controlled for panel size by using 5 diverse solar PV panel sizes.
  • Cost/benefit analysis of UAV- and satellite-based solar PV mapping (Section 6). We estimate a cost-performance curve for comparing remote sensing based data collection for both UAV and satellite systems for direct comparison. We demonstrate that using the highest resolution satellite imagery currently available, very small SHS are hardly detectable; thus, even the highest-resolution commercially available satellite imagery does not present a viable solution for assessment of very small (less than 100 Watt) solar panel deployments.
  • Case study in Rwanda illustrating the potential of drone-based solar panel detection for very small SHS installations (Section 7). By applying our models to drone data collected in the field in Rwanda, we demonstrate the practical performance of using UAV imagery for solar panel detection. Comparing these results to our experiments on data collected under controlled conditions, we identify the two largest obstacles to improved performance: the resolution of the imagery and the diversity of the training data.

2. Related Work

Larger solar PV arrays, especially as compared to very small SHS, have been shown to be detectable in satellite imagery. Recent work demonstrates the potential of automatically mapping solar PV arrays using remote sensing and machine learning, from individual household SHS (5–10 kW [6,9,10,16,17]) to utility-scale (>10 MW [12,18]) installations. However, no past studies have focused on the smaller solar arrays being deployed for those transitioning to electricity access for the first time, which can be 100 W or less, an order of magnitude smaller than in all past studies.
The earliest work on solar panel segmentation used traditional machine learning methods such as support vector machines (SVM), decision trees, and random forests, with manually engineered features such as the mean, variance, textons, and local color statistics of pixels [6,9,19]. Yuan et al. [16] applied deep neural networks (DNN) to solar panel detection in aerial imagery. Advances in convolutional neural networks (CNNs) [20,21] on large-scale image datasets like ImageNet [22] have also propelled the field of solar panel segmentation (i.e., pixel-wise classification) forward. Image classification CNNs were the first to be used for a coarse form of solar panel segmentation [11,16,23]. True segmentation CNNs for solar PV identification soon followed, including SegNet [24] and activation-map-based methods [10]. U-Net structures [25] were also quickly adopted in this field, further increasing detection performance [7,26].
UAVs have been used for solar PV monitoring and management on solar farms. However, this has typically been limited to situations where the locations of the solar PV are already known, such as panel monitoring in large solar farms. UAV-based solar array segmentation has been used for inspecting the arrays of solar farms, using texture features and a clustering algorithm for the analysis [27]. Other research on drone-based solar panel detection used a thermal camera to identify potential defects in solar arrays at the site [28,29]. There have also been techniques for monitoring solar PV farms using satellite imagery, such as tracking particulate matter deposition and its effects on generation efficiency [30]; however, this is the exception, since solar PV condition monitoring typically requires UAV data due to high image resolution requirements. In these settings, both thermal and optical cameras have been used: thermal cameras to identify abnormal temperature profiles that may indicate malfunctioning solar arrays [31,32,33], and optical UAV imagery to identify damaged or dust-covered solar cells at solar farms [34,35,36,37,38]. None of these UAV-based studies released their imagery datasets.
Access to energy at a larger scale has also been explored using night-time lights imagery. This 750 m-resolution data source has been used extensively to estimate which communities are grid-connected and even to explore the reliability of those grid connections [39,40,41]. However, this approach cannot investigate small-scale off-grid systems, as some of these SHS are less than a meter in diameter. In this work, we do not focus on identifying grid-connected communities, but on finding communities and individual households that receive their power through off-grid solutions.

3. The SHS Drone Imagery Dataset

To our knowledge, no drone datasets containing annotated images of solar panels, and especially very small (≤100 W) solar panels, exist. We summarize the most relevant publicly available UAV-based datasets in the appendix; however, none of these datasets contain solar panel annotations, making them unsuitable for training automated techniques to identify solar PV cells in UAV imagery.
Since no existing public data were available for our study, we collected a new dataset (DOI: https://doi.org/10.6084/m9.figshare.18093890) specifically for exploring the performance of UAVs (also known as drones) under controlled conditions for identifying very small solar PV. We describe our dataset, test sites, and equipment below, along with our data design considerations:
  • Adequate ground sampling distance (GSD) range and granularity. Due to variations in factors such as hardware and elevation change, the GSD of drone imagery can vary significantly in practice. Therefore, we want our dataset to contain imagery with a range of image GSDs (shown in Table 1) that are sufficient to represent a variety of real-world conditions, as well as detect SHS.
  • Diverse and representative solar panels. As solar panels can have different configurations affecting the visual appearance (polycrystal or monocrystal, size, aspect ratio), we chose our solar panels carefully so that they form a diverse and representative (in terms of power capacity) set (Table A1) of actual solar panels that would be deployed in developing countries.
  • Fixed camera angle of 90 degrees and different flying speeds: To investigate the robustness of solar panel detection as well as data collection cost (that is correlated with flying speed), we want our dataset to have more than one flight speed.

Data Collection Process

UAV. A DJI Mini 2 was used to collect all our data except for the Rwanda case study (in which we used previously published data [42,43]). The DJI Mini 2 was selected for its flexible operational modes, high camera quality, and low cost. It has 3 flight modes, with a maximum speed of 16 m/s, and can hover nearly stationary, yielding high image quality in low-wind conditions. It uses a 1/2.3″ CMOS sensor (4 k × 3 k pixels) with a wide range of possible shutter speeds and shoots 4 k video (3840 × 2160) at up to 30 fps. We fixed the gimbal (camera angle) to point vertically downward (90 degrees).
Solar panels. We purchased 5 solar panels in total with various physical sizes and PV materials to be representative of the types of SHS that are typically installed for off-grid solar projects in communities transitioning to electricity and include the low end of feasible use (around 10 W). More detailed specifications of our solar panels are presented in the appendix.
Flight plan. Imagery data were collected in the flight zone within Duke Forest in Durham, NC. Under Part 107 of the Federal Aviation Administration (FAA) regulations, we were allowed to operate in class G airspace, which includes altitudes lower than 120 m; the height of all data collection was therefore capped at 120 m. Additionally, to prevent collisions with trees, we set the lower limit of our flight height to 50 m.
Annotation. The data were manually annotated with polygons drawn around the solar panel by a human annotator. These polygons were converted to pixel maps for network training. All stationary images were labeled and a small subset of the videos were annotated.
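The polygon-to-pixel-map conversion described above can be sketched as follows. This is a minimal example using Pillow with a hypothetical rectangular annotation; it is not the project's actual annotation tooling:

```python
from PIL import Image, ImageDraw
import numpy as np

def polygons_to_mask(polygons, height, width):
    """Rasterize a list of polygons (each a list of (x, y) vertices)
    into a binary pixel mask matching the image size."""
    mask = Image.new("L", (width, height), 0)
    draw = ImageDraw.Draw(mask)
    for poly in polygons:
        draw.polygon([tuple(p) for p in poly], outline=1, fill=1)
    return np.array(mask, dtype=np.uint8)

# Hypothetical example: one rectangular panel annotation in a 100 x 100 image
panel = [(10, 10), (30, 10), (30, 25), (10, 25)]
mask = polygons_to_mask([panel], height=100, width=100)
```

Masks produced this way can be consumed directly as pixel-wise training targets for a segmentation network.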
We present examples of our collected dataset here for reference in Figure 1.

4. Post-Processing and Metrics

For the evaluation metrics, we use standard object-wise detection precision and recall, plus summary metrics, as described below. Our models are U-Net-based segmentation models (code available at https://github.com/BensonRen/Drone_based_solar_PV_detection) that produce pixel-wise confidence scores. We aggregate the pixel-wise predictions into objects by thresholding the confidence scores into binary images, grouping the binary pixels into connected components (objects), and morphologically dilating those objects to merge neighboring ones, a process illustrated in Figure 2. The resulting set of objects and corresponding confidence scores then serves as input to a scoring function that compares each detected object to ground truth to determine whether it is a true positive, false positive, or false negative (illustrated in appendix Figure A2). We consider a predicted object a true positive if its intersection over union (IoU) with a ground truth object is 0.2 or greater, where IoU is the ratio of the area of intersection of two objects to the area of their union.
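The aggregation step can be sketched with SciPy's morphology tools as follows. The threshold, dilation amount, and per-object scoring rule here are illustrative assumptions, not the exact parameters of the released code:

```python
import numpy as np
from scipy import ndimage

def confidence_map_to_objects(conf_map, pixel_threshold=0.5, dilation_iters=2):
    """Threshold a pixel-wise confidence map, morphologically dilate to merge
    neighboring fragments, and return (mask, score) pairs per object."""
    binary = conf_map >= pixel_threshold
    dilated = ndimage.binary_dilation(binary, iterations=dilation_iters)
    labels, n_objects = ndimage.label(dilated)
    objects = []
    for obj_id in range(1, n_objects + 1):
        obj_mask = labels == obj_id
        # Score each object by the mean confidence of its above-threshold pixels
        score = conf_map[obj_mask & binary].mean()
        objects.append((obj_mask, float(score)))
    return objects

def iou(mask_a, mask_b):
    """Intersection over union of two binary masks."""
    inter = np.logical_and(mask_a, mask_b).sum()
    union = np.logical_or(mask_a, mask_b).sum()
    return inter / union if union else 0.0
```

Each detected object's score can then be swept against a threshold, and its mask compared to ground-truth masks via `iou` with the 0.2 criterion, to build the PR curve.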
While all PR-curves can be found in the appendix, we present summary statistics including the maximum F 1 score and average precision (AP). F 1 is the harmonic mean of precision and recall and is frequently used as a measure of the accuracy of the detection/classification where an F 1 score of 1 implies perfect precision and perfect recall. Average precision is a summary statistic representing the area under the PR-curve. An average precision of 1 signifies perfect precision and recall as well. They are defined as follows:
$F_{1}^{max} = \max_{\tau} \frac{2 \times P(\tau) \times R(\tau)}{P(\tau) + R(\tau)}$
$AP = \int_{0}^{R_{max}} P \, dr$
Here, P is precision, R is recall, R_max is the maximum recall the model can achieve (usually less than 1 due to object-wise and IoU limitations), and τ is the operating confidence threshold for object-wise scoring that we sweep to produce the PR curve. The F1 score uses the harmonic mean to penalize low values in either precision or recall: for example, if a detector achieves 90% precision and 10% recall, the F1 score is 18% (a simple mean would yield 50%).
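A minimal sketch of both summary metrics, computed from precision/recall values swept over the threshold τ (the trapezoidal integration used for AP here is one common convention; implementations differ):

```python
import numpy as np

def f1_max(precision, recall):
    """Maximum F1 over the swept confidence thresholds."""
    p = np.asarray(precision, dtype=float)
    r = np.asarray(recall, dtype=float)
    f1 = np.where(p + r > 0, 2 * p * r / (p + r), 0.0)
    return float(f1.max())

def average_precision(precision, recall):
    """Area under the PR curve up to R_max, via trapezoidal integration
    over recall (one simple convention among several)."""
    order = np.argsort(recall)
    r = np.asarray(recall, dtype=float)[order]
    p = np.asarray(precision, dtype=float)[order]
    return float(np.sum((r[1:] - r[:-1]) * (p[1:] + p[:-1]) / 2))

# The example from the text: 90% precision and 10% recall at one threshold
print(round(f1_max([0.9], [0.1]), 2))  # 0.18, vs. a simple mean of 0.5
```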

5. Experiment #1: Solar Panel Detection Performance Using UAV Imagery

As we describe the three experiments included in this study, recall that we are investigating three goals: (i) the detection performance of SHS using UAV imagery and computer vision techniques; (ii) the cost of drone data collection compared to satellite imagery and other data collection approaches; and (iii) performance in a real-world scenario using data from Rwanda. This section investigates the first (i) of these three questions, and subsequent sections explore the remaining two.
The primary goal of this first set of experiments is to estimate the achievable accuracy of small SHS detection models when they are applied to UAV imagery. We also evaluate how their accuracy varies with respect to two operational settings: the speed of the UAV, and the ground sampling distance (or resolution) of the UAV’s imagery. We hypothesize that these two factors will have a significant impact on the achievable accuracy of recognition techniques. Low image resolution can reduce detection accuracy and higher UAV speed can result in motion blur in the imagery, which can also negatively impact detection accuracy. These two factors are likely to vary in practice, making them an important consideration for practitioners and researchers who wish to employ automated recognition models.
To estimate the accuracy of contemporary PV detection models on UAV imagery, we applied a proven segmentation model, the U-Net [25], to the UAV solar PV dataset we created for this experiment. The U-Net has been employed in many recent studies involving object recognition in overhead imagery, including solar PV mapping [26]. Due to the relatively small size of our UAV dataset, we begin by further training our ImageNet-pretrained U-Net model on a satellite-imagery solar PV panel dataset [23]. We then fine-tune the model on our UAV data (more details in the appendix). We evaluated the performance of this fine-tuned detection model as we varied both the resolution of the imagery and the flight speed at which it was collected. To evaluate the model’s performance, we split our data into separate training, validation, and testing sets, a widely used approach to obtain unbiased estimates of the generalization performance of machine learning models [44]. Full details of the training and data handling procedure can be found in Appendix A.
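The data split mentioned above can be sketched as follows; the split fractions and seed are illustrative placeholders (the paper's exact split is described in its appendix):

```python
import random

def train_val_test_split(items, val_frac=0.15, test_frac=0.15, seed=42):
    """Reproducible random split into train/validation/test subsets.
    The fractions here are assumptions for illustration only."""
    items = list(items)
    random.Random(seed).shuffle(items)
    n = len(items)
    n_test = int(n * test_frac)
    n_val = int(n * val_frac)
    return items[n_test + n_val:], items[n_test:n_test + n_val], items[:n_test]

# The dataset contains 423 still images
train, val, test = train_val_test_split(range(423))
```

Fixing the seed makes the split reproducible across training runs, which matters when comparing models trained at different resolutions.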

5.1. Detection Performance Comparison over Imagery Resolution

To investigate the performance as resolution varies, we train and test image detection models multiple times, varying the resolution of the image for each scenario. Note that we define image resolution as the GSD of the UAV imagery (other optics-focused definitions exist [45], but are less relevant for this study).
To evaluate the performance change due to changes in image resolution, we fly our drone at various heights from 50 m to 120 m (the maximum legally allowed height in our jurisdiction), resulting in an image resolution range of 1.6 cm to 4.5 cm per pixel. We divide our flying heights (altitudes) into 20 m-interval groups for which we evaluate performance, and we train and test at each resulting image resolution to evaluate the performance impact of flying at different altitudes.
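The altitude-to-GSD relationship underlying this experiment can be sketched with the standard pinhole approximation. The sensor width and focal length below are nominal 1/2.3-inch-sensor values assumed for illustration, not figures taken from the paper:

```python
def ground_sampling_distance(altitude_m, sensor_width_mm=6.17,
                             focal_length_mm=4.49, image_width_px=4000):
    """Nominal GSD (m/pixel) for a nadir-pointing camera.
    Default optics are approximate 1/2.3-inch-sensor values (assumed)."""
    return (altitude_m * sensor_width_mm) / (focal_length_mm * image_width_px)

for alt in (50, 120):
    # Roughly 1.7 and 4.1 cm/pixel with these nominal optics
    print(f"{alt} m -> {100 * ground_sampling_distance(alt):.1f} cm/pixel")
```

Doubling the altitude doubles the GSD, which is why the 50–120 m flight band maps directly onto the resolution range studied here.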
Additionally, we compare the performance across UAV image resolution to common satellite imagery resolutions, which are a potential alternative approach to collecting data on very small scale solar PV. We simulate the highest commercially available satellite imagery resolution (30 cm) as well as high resolution aerial imagery resolution (7.5 cm and 15 cm) to provide a thorough comparison to UAV performance. To simulate these alternative image resolutions, we resize the high-resolution UAV imagery to the desired lower resolution (Details in Appendix A.3 list 3).
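The resolution simulation can be sketched as follows. Resampling back to the original pixel grid after downsampling is one common convention (an assumption here, not necessarily the paper's exact procedure) so that the degraded image keeps the same size:

```python
from PIL import Image

def simulate_resolution(img, native_gsd_cm, target_gsd_cm):
    """Downsample imagery to mimic a coarser sensor (e.g., 30 cm satellite
    GSD), then upsample back to the original pixel grid so the effective
    resolution is degraded while the image dimensions are unchanged."""
    scale = native_gsd_cm / target_gsd_cm
    low_w = max(1, round(img.width * scale))
    low_h = max(1, round(img.height * scale))
    low = img.resize((low_w, low_h), Image.BILINEAR)
    return low.resize((img.width, img.height), Image.BILINEAR)

# Hypothetical example: degrade a 3 cm/pixel UAV frame to a 30 cm satellite GSD
uav_frame = Image.new("RGB", (400, 300))
degraded = simulate_resolution(uav_frame, native_gsd_cm=3.0, target_gsd_cm=30.0)
```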

Results

Even at the highest flying altitude (4.5 cm ground sampling distance), as shown in Figure 3b, UAV-based solar PV detection achieved an average precision of 0.94. This high level of performance is reasonable given that the UAV resolution is high enough to capture multiple pixels of solar panel content for each SHS, and it far exceeds what is possible at satellite imagery resolutions. As the resolution decreases from UAV to satellite levels, detection performance drops monotonically. At 7.5 cm (high-resolution aerial imagery), performance remains high. However, at resolutions of 15 cm and above, the detection capability of our neural network degrades heavily, with only about half of the panels being detected (the maximum recall is 0.6), and at the typical highest satellite resolution (30 cm), performance drops to near zero. We therefore conclude that at the best commercially available satellite resolution (30 cm), detecting very small SHS PV panels is not feasible; UAVs, like high-resolution aerial imagery, offer a feasible alternative for small solar PV detection.

5.2. Detection Performance with Respect to Resolution Mismatch

As demonstrated in the previous section, very small solar PV detection performance depends greatly on resolution, but across UAV resolutions (1.7 cm to 4.3 cm per pixel in our data collection), performance was less variable. In practice, the resolution may change for multiple reasons: the elevation of the terrain may change, the slope of the landscape and the angle of the image may vary over the target area, and UAVs may be flown at varying heights (particularly when data are collected by different operators). Therefore, the resolution may not always be identical, and in this section we investigate the impact of a mismatch between training and test image resolution.
To accomplish this, we divide our flying height (altitude) into 20 m-interval groups for which we evaluate performance. We trained and tested all the possible pairwise combinations of imagery resolution to evaluate the performance impact of a mismatch between training and testing altitude.

5.2.1. Results

The results are shown in Figure 4, which presents the algorithm’s performance for different pairings of training and testing image resolution. When the training and test resolutions matched, the average precision was around 90%, visible as the diagonal pattern (with the only exception being training at 1.6–2.2 cm ground sampling distance). Off-diagonal performance (where there was considerable mismatch between training and test resolution) was generally lower, illustrating the importance of matching resolutions between model training and testing.

5.3. Solar Panel Detection Performance with Respect to Flight Speed

Apart from resolution, another important controllable variable for drone imagery operation is the speed at which to collect the imagery. The faster the drone flies, the shorter the time span, and hence the lower the cost of data collection. However, greater speed may also introduce additional noise into the data due to motion blur [46], impacts on the stability of the flight, reduced exposure time, and forcing the use of a higher ISO. Each of these may lower the performance of solar panel detection. Here, we aim to provide a controlled experiment to investigate the change in performance if we fly at different flight speeds and compare how flight speed impacts detection performance.
The UAV we used for these experiments, the DJI Mini 2, has 2 operational flight speeds: “Normal” mode, with an average speed of 8 m/s, and “Sport” mode, with an average speed of 14 m/s. Continuous videos were taken, and individual frames were cut from the video and labeled to evaluate the performance change as flight speed changes (models operate on individual image frames rather than continuous video).
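The expected effect of flight speed can be illustrated with a simple motion-blur model: the ground distance covered during the exposure, divided by the GSD, gives the blur extent in pixels. The exposure times and the uniform 1-D kernel below are illustrative assumptions, not measured camera parameters:

```python
import numpy as np

def motion_blur_kernel_length(speed_m_s, exposure_s, gsd_m):
    """Approximate motion-blur extent in pixels for a forward-flying UAV:
    ground distance covered during the exposure divided by the GSD."""
    return speed_m_s * exposure_s / gsd_m

def apply_horizontal_motion_blur(image, length_px):
    """Convolve each row with a uniform 1-D kernel of the given length
    (a minimal blur model; real blur also depends on gimbal stability)."""
    k = max(1, int(round(length_px)))
    kernel = np.ones(k) / k
    return np.apply_along_axis(
        lambda row: np.convolve(row, kernel, mode="same"), 1, image.astype(float))
```

For example, at 14 m/s and a 2 cm GSD, a 1/1000 s exposure smears under one pixel, while a 1/240 s exposure smears nearly three pixels, consistent with faster flight degrading detection.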

Results

Figure 5 shows the degradation in detection performance with respect to flying speed. The detection performance does drop with flying speed, as all performance statistics (AP, F1max, and Rmax) are worse for Sport mode than for Normal mode. Moreover, the performance drop grows monotonically as the resolution becomes lower. We also note that at the Normal (slow) flying speed, detection performance remains the same as when hovering stationary.

6. Experiment #2: Cost Analysis of UAV-Based Solar Panel Detection and Comparison to Satellite Data

6.1. Cost Analysis: Methods

Having demonstrated that UAVs can accurately identify very small solar PV panels under controlled conditions, a remaining question is whether using UAVs for this purpose is cost-effective. In this section, we estimate the costs of using UAVs for small solar panel detection and compare them with those of commercial satellite data and aerial photography. Although we present the cost-performance trade-off for small solar PV detection specifically, these cost estimates may be relevant to a variety of similar analyses of optical overhead UAV imagery.
To estimate the cost of UAV mapping (in USD), we first identify two key characteristics of the final product: (1) image resolution and (2) the size of the area imaged. Nearly all our cost estimates are a function of these two product specifications (e.g., cost per km² at a resolution of 0.03 m). With a fixed resolution and total area, the total amount of work required (total flight time) can be determined. Although we split our cost estimation into the major categories listed in Figure 6, not all of them depend on image resolution or area (e.g., legal costs). We made a few assumptions regarding operational parameters in order to provide representative cost estimates for a generic case:
  • The UAV is operated five days each week, six hours per day (assuming an eight-hour work day, and allowing two hours for local transportation and drone site setup).
  • Each UAV operator rents one car and one UAV; the upfront cost of the UAV is amortized over the expected useful life of the UAV.
  • Total UAV image collection time is capped at three months, but multiple pilots (each with their own UAV) may be hired if necessary to complete the collection.
  • UAV lifetime is assumed to be 800 flight hours (estimate from consultation with a UAV manufacturer).
  • A sufficient quantity of UAV batteries is purchased for operating for a full day.
  • The probability of inclement weather is fixed at 20%, and no operation is carried out under those conditions.
Using these assumptions, we can translate total required flight time for a given data collection into total working hours, the number of pilots needed and the number of UAVs needed. Together, this information can be combined to estimate each individual cost type we enumerate in Figure 6.
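The translation from area and resolution to total cost can be sketched as follows. The structure mirrors the assumptions listed above (6 h/day, 5 d/week, 3-month cap, 20% weather downtime, 800 flight-hour UAV lifetime), but all unit prices are illustrative placeholders, not the paper's figures (which appear in Table 2):

```python
import math

def uav_mapping_cost(area_km2, km2_per_flight_hour,
                     hours_per_day=6, days_per_week=5, weeks_available=13,
                     weather_downtime=0.20,
                     pilot_cost_per_day=400.0, uav_unit_cost=10000.0,
                     uav_lifetime_hours=800.0, fixed_cost=2000.0):
    """Rough UAV survey cost model. Unit prices are hypothetical placeholders;
    only the operational assumptions follow the text above."""
    flight_hours = area_km2 / km2_per_flight_hour
    # Hours one pilot can fly within the 3-month cap, net of weather downtime
    effective_hours_per_pilot = (hours_per_day * days_per_week
                                 * weeks_available * (1 - weather_downtime))
    pilots = math.ceil(flight_hours / effective_hours_per_pilot)
    working_days = math.ceil(
        flight_hours / ((1 - weather_downtime) * hours_per_day) / pilots)
    labor = pilots * working_days * pilot_cost_per_day
    # Amortize each UAV's purchase price over its expected flight-hour lifetime
    uav_cost = pilots * uav_unit_cost * min(
        1.0, flight_hours / (pilots * uav_lifetime_hours))
    return fixed_cost + labor + uav_cost
```

Because the fixed costs are spread over the total area, cost per km² falls with area before the relationship becomes nearly linear, matching the shape described for Figure 7.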
We distinguish four categories of costs, as illustrated in Figure 6, for UAV data collection:
  • Legal and permit: The legal and permitting cost of getting the credentials for flying in a certain country or region. As an example, in the US, although state laws may vary, at a federal level, flying for non-hobbyist purposes (class G airspace, below 120 m) requires the drone pilot to have a Part 107 permit, which requires payment of a fee as well as successful completion of a knowledge test. The legal and permitting costs are inherently location-dependent, and cost variation may be large.
  • Transportation: The total transportation cost for the drone operator. For the purposes of our estimate here, we assume one drone pilot (thus, total data collection time is a linear function of area covered). Note that this category includes travel to and from the data collection location, which is assumed to include air travel, local car rental, car insurance, fuel costs, and (when the operational crew is foreign to the language) a translation service.
  • Labor-related expenses: Umbrella category of all labor-related costs including wages and fringe benefits or overhead paid to the drone pilot, as well as boarding and hotel costs.
  • Drone-related expenses: Umbrella category of all drone-related costs including purchase of the drone, batteries, and camera (if not included with the drone).
The details of the price assumptions are outlined in Table 2. In each row, we specify the major cost item and the unit price. Additional details are provided in Appendix A.

6.2. Cost Analysis: Result

We present our cost estimate in Figure 7. Three types of drones were used in [42]; we use the "eBee Plus" to estimate our total drone operation cost because it can map the largest area per flight (or per unit time) among the three. From Figure 7, we see that the relationship between total cost and total area of interest becomes nearly linear beyond some threshold (i.e., as the fixed cost is averaged out over a larger area). We also see that the overall cost of using a UAV to map high-resolution imagery is non-trivial: mapping an area equivalent to the size of the Federal Capital Territory (7000–8000 km²) around and including Abuja, the capital of Nigeria, would cost around 6 million dollars in total.
One assumption of our price estimation is that the pilots are not within day-trip distance of the area of interest and need lodging during the operation. Local pilots are also an option and are increasingly available, which would yield savings on lodging and other travel-related expenses. At 0.03 m resolution, the lodging fee represents at most about 15% of the total operation cost, which would not change our conclusions or the relative cost ranking of UAV, satellite, and aerial imagery. More information regarding lodging fees is provided in Appendix A.
Apart from the drone operation cost, the core question we aim to answer concerns the cost and performance trade-offs between drone-based high-resolution imagery and satellite imagery for a given setting or application. Therefore, we combine these two elements in Figure 3 for direct reference. Note that the prices for commercial offerings are estimated from online sources, government agency reports, and expert consultation; we believe these estimates are reasonable for the purpose of this study, but they are certainly imperfect. The exact price would vary based on negotiated rates, changing business models, and the specific parameters of the desired data collection, including the region of interest, business status (e.g., nonprofit or for-profit use), and size of the imagery order. Currently, the highest-resolution (0.3 m) commercially available satellite imagery is provided by private companies. We collected estimates of satellite imagery cost per unit area (USD per km²) and compared these with our estimated UAV average cost per km² at the same resolution.
From Figure 3, we draw several conclusions: (1) Satellite imagery costs are typically much lower than UAV imagery costs, although satellite imagery also has significantly lower resolution, which results in extremely poor performance in very small SHS detection. (2) The unit price of UAV imagery decreases both as the total area increases sufficiently (to the point where the average fixed cost per unit area falls to essentially zero and the relationship between total cost and coverage area becomes essentially linear) and as the resolution requirement drops. (3) The cost of UAV imagery is comparable to satellite imagery when operating over a large area (>50 km²) at lower image resolution (4–7.5 cm), which still achieves excellent performance according to the performance plot.
In addition to the cost and performance of satellites and UAVs, Figure 3 also provides the same information for one more approach: imagery collection from piloted aircraft. To estimate the costs of the piloted aircraft approach, we collected data from two government reports [51,52] and from information on the Hexagon (HxGN) Imagery Program. These sources indicate that the cost is $100–200 per km² and the resolution ranges from 7.5 cm to 30 cm. However, we note that these cost and resolution figures are based on historical data collection in the US, conducted for government agencies over large coverage areas; average costs may therefore differ in other settings, such as South Asia or sub-Saharan Africa. The Hexagon Imagery Program's price reflects only archival imagery collected in previous piloted missions, which is limited in geospatial and temporal coverage compared with satellite imagery.
For context, we also provide cost data points for ground-based surveys. Since ground surveys are usually conducted one household at a time (rather than by unit area, km², as in aerial surveys), their costs are typically calculated per household. Researchers and organizations conducting such surveys [53,54,55] report that ground survey costs vary widely, between $20 and $422 per household, depending on the level of detail of the survey. Comparing those estimates with our UAV cost estimates, as long as the drone survey area has an average density of more than 5 households per km², UAV-based data collection would be cheaper than ground surveys (assuming 3 cm GSD and 50 km² total area for the UAV data collection). For comparison, the average household density in Rwanda is around 120 households per km², and over 90% of the population lives in areas where household density is greater than 78 households per km² (details can be found in Appendix A.4), which is much higher than our 5 households per km² price parity point for UAV and ground-based surveys. Piloted flights (as shown in Figure 3) are also a potentially cost-effective data collection methodology and are less expensive than ground surveys for regions with a density of 15 households per km² or higher. Although these estimates suggest UAV costs are significantly lower than ground surveys, ground surveys can provide a much larger variety of information, such as economic and demographic data, which would be nearly impossible to collect from raw overhead imagery alone.
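The price-parity comparison above reduces to a one-line calculation; a minimal sketch (with hypothetical cost inputs, not figures from Table 2) might look like:

```python
def breakeven_density(uav_cost_per_km2, survey_cost_per_household):
    """Household density (households/km^2) above which aerial mapping is
    cheaper than a door-to-door survey: the aerial cost is per km^2 and
    the ground survey cost is per household, so the two approaches cost
    the same when density = uav_cost_per_km2 / survey_cost_per_household."""
    return uav_cost_per_km2 / survey_cost_per_household

# Illustrative only: $2,000/km^2 aerial cost and $400/household ground
# survey cost give parity at 5 households/km^2.
print(breakeven_density(2000, 400))
```

Any region denser than the returned value favors the aerial survey on cost alone (ignoring the richer information a ground survey provides).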
In conclusion, although satellite imagery is cheaper and logistically easier to access, it simply cannot adequately capture small solar PV, as Section 5 shows. UAV imagery has the advantage of unmatched high resolution and therefore, even at somewhat higher cost, is far more reliable for detecting very small SHS. While tasks other than small SHS detection would likely show different performance with respect to image resolution, the cost trends shown here may be similar for other applications (such as agriculture [42], wildlife conservation [56], emergency response [57], and wildfire monitoring [58], among others). Additionally, we found the biggest cost element to be human operator-related costs, which are currently indispensable due to the legal requirement of human-guided operation within the line of sight [59]. However, if automated UAV flights become possible (or no longer require on-site operators), the cost of drone-based operations would drop significantly, potentially giving UAVs a large cost advantage over satellites.
The cost estimates provided here were collected regarding data collection in the US as the numerous cost estimates and underlying operational assumptions that go into such estimates are readily available. This may be an optimistic estimate when considering the heterogeneity of applications globally due to differences in operational expenses, transportation time, and local regulations. Additionally, weather conditions and flight performance at different altitudes [60] may affect the flight speed and hence impact costs. We encourage the use of the calculations presented in Appendix A.5 to create estimates for a specific data collection process.

7. Experiment #3: Case Study: Rwanda SHS Detection Using Drone Imagery

The last of our three questions was to evaluate how well UAV-based SHS detection performs in real-world scenarios. To test the feasibility and robustness of using UAV imagery to detect small SHS in rural areas, we conduct a real-life case study detecting the SHS that villagers in developing countries are currently using, based on UAV imagery of Rwanda provided by RTI [42,43]. As this imagery was collected from multiple rural agricultural areas in Rwanda representing different agroecological zones, without any prior information on the prevalence or locations of SHS, the model performance achieved is expected to be more representative of the actual performance when this method is applied to a new location of interest. The drone was flown at a fixed height yielding a GSD of 0.03 m/pixel, which also falls within the range of our controlled experiments above. We are also making our annotations of these Rwanda data public for reproducibility.
We manually labeled the solar panels in the Rwanda imagery. We found 214 solar panels in total and split them into a training set of 114 panels and a validation set of 100 panels. We limited the fraction of images without SHS to 10% of the total available to maintain class balance. The dataset contains around 21,000 image patches of 512 × 512 pixels, of which 1% contain solar panels. In total, the model required 2 days to train on the 10% background data and the 114 solar panels using an NVIDIA GTX 2080. We trained for 120 epochs to ensure convergence, which corresponds to 0.25 million images passing through our network.

Case Study: Result

Figure 8 shows randomly selected examples of our model predictions. Qualitatively, the majority of the SHS were found, with few missed SHS and few false positives. Of the nine solar panels present in the sample imagery, eight are correctly detected and one is missed. The examples also show that the network can reliably find panels on the two distinct roof types present in the Rwanda data.
Quantitatively, the PR curve in Figure 9 shows that the model achieves a maximum F1 score of 0.79, an average precision of 0.8, and a maximum recall of around 0.9. By properly thresholding the confidence score, we can achieve a recall of 0.89 (detecting 89% of the solar home systems) with a precision of 41%. Lower precision means more false positives, but in a post-processing step the detections could be reviewed quickly and most false positives eliminated with minimal manual effort. Alternatively, recall can be traded for precision, reducing the number of false positives (potentially dramatically) at the cost of correctly identifying fewer solar panels. This amounts to moving along the PR curve in Figure 9 to find the appropriate balance for a given application.
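Selecting an operating point on the PR curve, as described above, can be sketched as follows; the arrays and the 0.89 recall target are illustrative stand-ins for the values read off Figure 9, not the study's actual curve:

```python
import numpy as np

def operating_point(precision, recall, thresholds, min_recall=0.89):
    """Among confidence thresholds whose recall meets a minimum target,
    pick the one with the highest precision (i.e., fewest false positives
    while still detecting the required fraction of SHS)."""
    precision = np.asarray(precision, dtype=float)
    recall = np.asarray(recall, dtype=float)
    ok = recall >= min_recall
    if not ok.any():
        raise ValueError("no threshold achieves the requested recall")
    # Mask out thresholds that miss the recall target, then take the best.
    best = int(np.argmax(np.where(ok, precision, -1.0)))
    return thresholds[best], float(precision[best]), float(recall[best])
```

Sweeping `min_recall` traces out exactly the precision/recall trade-off discussed above: a lower recall target frees the model to operate at higher precision.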
Although there is a performance gap between the controlled experiment and the Rwanda case study, it is encouraging that recall, the fraction of very small SHS successfully identified, remains high in both cases, suggesting that UAV-based data collection may be a viable approach for providing information to decision makers working in sustainable development.
While these results are encouraging, several limitations and challenges remain, including regulatory requirements [59] to maintain visual line of sight, which necessitate a trained drone operator; limitations in battery technology that bottleneck the overall flight time of each mission [61]; and limited autonomous navigation and control systems [62]. More specifically, in sub-Saharan African countries (where the majority of people without access to electricity are located), Washington [63] states that the lack of trained operators, the lack of government regulations, and privacy concerns are the major challenges to be anticipated before drone technologies can be widely and safely applied to benefit wider communities. However, this has been changing in recent years, as more countries in sub-Saharan Africa have developed standard processes for obtaining flight permits, and local capacity for operating drones to collect and process imagery for analytical purposes has been expanding rapidly in many countries.

8. Conclusions

Small solar home systems that provide transitional electricity access to communities are too small for any commercially available satellite imagery to capture. We demonstrate that UAV imagery is a viable alternative for mapping these small solar home systems to provide critical information to stakeholders working to improve electrification in developing countries, including the identification of potential markets and information to track progress toward electrification sustainable development goals. Through controlled experiments examining the impact of altitude and speed, we tested the technological and financial viability of UAVs for this purpose. We investigated the robustness of drone-based small SHS detection with respect to both resolution (detection performance changes minimally within typical UAV operational altitudes) and flying speed (detection performance drops at higher speeds), estimated the cost of operation, and found costs comparable to satellite imagery given a sufficiently large region to map. We also evaluated UAV small solar detection performance in a case study in Rwanda, which demonstrated that drone-based small SHS detection is a viable approach to supply crucial information about local SHS conditions for energy access projects, successfully identifying nearly 90% of solar panels with a moderate false positive rate that could be reduced through post-processing.
The evidence from this study suggests that UAVs are a technically viable and financially reasonable approach to collecting data on small energy infrastructure like solar home systems. The information about the location and characteristics of small SHS in developing countries collected from drones may provide evidence for decision-making around energy access planning toward reaching SDG 7.1 of universal access to electricity by 2030.
Future work: There are at least two potential future directions for this work. The first is developing a larger, more diverse collection of UAV data on small energy system objects such as solar PV, diesel generators, electric water pumps for irrigation, and distribution poles and lines. The dataset we share controlled for a large number of possible experimental variables to form fundamental conclusions; a larger and more diverse set of imagery could provide an application-focused benchmark for future algorithm development. The second direction is a larger set of case studies, preferably with even more diverse geographies beyond the regions of Rwanda that were available for this work, to explore algorithm robustness.

Author Contributions

Conceptualization, Jordan Malof, Rob Fetter and Kyle Bradbury; methodology, Jordan Malof, Rob Fetter, Robert Beach, Jay Rineer and Kyle Bradbury; software, Simiao Ren; validation, Rob Fetter, Robert Beach and Jay Rineer; investigation, Simiao Ren; resources, Jordan Malof and Kyle Bradbury; data curation, Simiao Ren; writing—original draft preparation, Simiao Ren; writing—review and editing, Simiao Ren, Jordan Malof, Rob Fetter, Robert Beach, Jay Rineer and Kyle Bradbury; visualization, Simiao Ren; supervision, Jordan Malof and Kyle Bradbury; project administration, Kyle Bradbury; funding acquisition, Rob Fetter and Kyle Bradbury. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by the Catalyst program at the Nicholas Institute for Environmental Policy Solutions, and from the Alfred P. Sloan Foundation Grant G-2020-13922 through the Duke University Energy Data Analytics Ph.D. Student Fellowship.

Data Availability Statement

Code for our work is published at https://github.com/BensonRen/Drone_based_solar_PV_detection. The data presented in this study are openly available in Figshare. The UAV small solar PV data collection: https://doi.org/10.6084/m9.figshare.18093890; Rwanda solar PV annotations: https://doi.org/10.6084/m9.figshare.18094043. Publicly available datasets were analyzed in this study: https://doi.org/10.34911/rdnt.r4p1fr (accessed on 1 June 2021).

Acknowledgments

We would like to express special thanks to Leslie Collins for providing useful feedback and discussion. We also thank Bohao Huang for his MRS framework code, Wei (Wayne) Hu for his help in developing the code and for discussion, Trey Gowdy for his helpful discussions and expertise in energy data, and the other energy fellows and mentors in the Duke University Energy Data Analytics Ph.D. Student Fellowship Program for their suggestions, questions, and comments. We also thank the Duke Forest for the use of their UAV flight zone for data collection.

Conflicts of Interest

The authors declare no conflict of interest.

Abbreviations

The following abbreviations are used in this manuscript:
SDG: Sustainable Development Goal
UAV: Unmanned Aerial Vehicle
US: United States of America
AP: Average Precision
IoU: Intersection over Union
CNN: Convolutional Neural Network
SHS: Solar Home System
GSD: Ground Sampling Distance

Appendix A

Appendix A.1. Data Collection Pipeline and Detailed Specifications of Solar Panels Used

To collect our UAV imagery dataset, (1) the solar panels were randomly distributed at the data collection site; (2) the UAV pilot operated the drone to reach each required altitude (every 10 m) and hovered to take stationary images; (3) the UAV was flown at an approximately constant speed (in both a lower and a higher speed setting) across the drone site, collecting video of the solar panels while the UAV was in motion; (4) this process was repeated on different dates, drone sites, times of day, and orientations of the drone flight path; and (5) the SHS in the data were manually annotated by drawing polygons to indicate the presence of each solar panel.
The list of solar panels we used is provided in Table A1. To ensure our experimental results are representative of practical applications of small SHS detection, we chose a diverse set of solar panels from three different manufacturers, included both monocrystalline (black) and polycrystalline (blue) compositions, and varied the physical dimensions in total area and aspect ratio. The majority (4/5) are below 50 Watts, and we include one larger panel with a 100 Watt rated power capacity as well (which we imagine to be the upper limit of small SHS).
Table A1. Details of the solar panels used in the experiments. X-crystalline (cell composition): Polycrystalline or Monocrystalline, which have slightly different colors. L: Length. W: Width. T: Thickness.
Brand | X-Crystalline | L (mm) | W (mm) | Aspect Ratio | Area (dm²) | T (mm) | Power (W) | Voltage (V)
ECO | Poly | 520 | 365 | 1.43 | 19 | 18 | 25 | 18
ECO | Mono | 830 | 340 | 2.45 | 28.3 | 30 | 50 | 5
Rich solar | Poly | 624 | 360.8 | 1.73 | 22.6 | 25.4 | 30 | 12
Newpowa | Poly | 345 | 240 | 1.44 | 8.3 | 18 | 10 | 12
Newpowa | Poly | 910 | 675 | 1.35 | 61.5 | 30 | 100 | 12

Appendix A.2. Satellite View for Small SHS

Here, we present a simulated (via downsampling) satellite-view of the SHS we investigated in this work to demonstrate the difference in the quality of visibility of the solar arrays at a satellite imagery resolution as compared to UAV image resolution. From Figure A1 we see that it is almost impossible (for average human eyes) to detect the presence of solar panels in the image at satellite resolution. We see that the near-zero performance in Section 5.1 is reasonable at satellite imagery resolution, as there is very limited visibility of these very small SHS. Therefore, UAV imagery resolution is necessary to successfully extract information on the location and size of small SHS.
Figure A1. Satellite resolution view of SHS compared with a UAV example. (a) Original UAV imagery with GSD of 2 cm. (b) Human labeled ground truth of the SHS. (c) Simulated Satellite imagery with GSD = 30 cm. (d) Simulated ground truth of SHS at the resolution of satellite imagery.

Appendix A.3. Algorithm and Performance Details

  • Pretraining: As labeled drone datasets, especially ones including solar panels, are extremely scarce, we use satellite imagery containing solar panels (the same target as our task, but larger in size) to pre-train our network before fine-tuning it with the UAV imagery data we collected. This practice increased performance over fine-tuning from ImageNet pre-trained weights alone (IoU improved from 48% to 70%).
  • Scoring pipeline: We illustrate the scoring process in Figure A2. Note that in detection problems, the concept of true negatives is not defined. This is also why precision and recall (and therefore precision-recall curves) are used for performance evaluation rather than ROC curves.
  • Simulating satellite resolution imagery: In Section 5, we downsampled our UAV imagery to simulate satellite imagery resolution. To make sure the imagery has an effective resolution that is the same as satellite imagery, while keeping the same overall image dimensions so that our model has the same number of parameters, we follow the downsampling process with an up-sampling procedure using bi-linear interpolation (using OpenCV’s resizing function). The effective resolution remains at the satellite imagery level (30 cm/pixel), but the input size of each image into the convolutional neural network remains the same.
  • Hyper-parameter tuning: Across the different resolutions of training data, we kept all hyperparameters constant except for the class weight of the positive class (due to the largely uneven distribution of solar panels and background imagery across changes in GSD). After tuning the other hyperparameters like learning rate and the model architecture once for all flight heights, we tuned the positive class weight individually for each of our image resolution groups due to the inherent difference in the ratio of number of solar panel pixels within each image.
  • Precision-recall curves for Section 5.2.1: As only aggregate statistics were presented in Section 5.2.1, we present all relevant precision-recall curves here (Figure A3) for reference.
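The down-/up-sampling step described above can be sketched as follows. This is an illustrative stand-in: the paper uses OpenCV's bilinear resizing, while this sketch uses simple block averaging to the same end (destroying detail finer than the simulated satellite pixel while preserving the input dimensions):

```python
import numpy as np

def simulate_satellite(img, factor=10):
    """Down- then up-sample so the effective GSD coarsens by `factor`
    (e.g., 3 cm UAV -> 30 cm satellite) while the pixel dimensions, and
    hence the CNN input size, stay the same. Expects a channel-last array."""
    h, w = img.shape[:2]
    img = img[:h - h % factor, :w - w % factor]  # crop to a multiple of factor
    # Average each factor x factor block to simulate the coarse sensor...
    coarse = img.reshape(img.shape[0] // factor, factor,
                         img.shape[1] // factor, factor, -1).mean(axis=(1, 3))
    # ...then repeat each coarse pixel to restore the original dimensions.
    return np.repeat(np.repeat(coarse, factor, axis=0), factor, axis=1)
```

Any structure smaller than one coarse pixel, such as a very small SHS at 30 cm GSD, is averaged into its surroundings and becomes effectively invisible to the model.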

Appendix A.4. Household Density Estimation

To estimate household density, we used the average population density [64] divided by the average household size [65]. For a further point of reference, we calculated the population density level at or above which 90 percent of the population of Rwanda lives. To do so, we collected high-resolution population data from 2020 [66], sorted areas from highest to lowest population density, and determined the density level that encompassed 90% of the population. We divided that estimate (inhabitants per square kilometer) by the average household size (4.3 inhabitants per household [65]) to obtain the average number of households per square kilometer at that level of population density.
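The percentile calculation above can be sketched as follows; the toy cell values and the integer `household_size` in the usage example are illustrative, with only the 4.3 inhabitants/household figure taken from the text:

```python
import numpy as np

def households_per_km2_at_coverage(cell_density, household_size=4.3, coverage=0.90):
    """Density level (converted to households/km^2) such that `coverage`
    of the total population lives in grid cells at least that dense.
    Cells are assumed equal-area (1 km^2), so density equals population."""
    d = np.sort(np.asarray(cell_density, dtype=float))[::-1]  # densest first
    cum = np.cumsum(d) / d.sum()                              # population share covered so far
    level = d[np.searchsorted(cum, coverage)]                 # density at the coverage point
    return float(level / household_size)

# Toy example: four 1 km^2 cells; 90% of the population lives in cells
# of density >= 200 inhabitants/km^2, i.e., 50 households/km^2 at 4/household.
print(households_per_km2_at_coverage([400, 300, 200, 100], household_size=4))
```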
Figure A2. Scoring process. (GT: ground truth). After post-processing, the candidate object groups are matched with the ground truth labels. The confidence score of each object is denoted as the average confidence value of all the pixels associated with that object.
Figure A3. All precision recall curves for the summary metrics shown in Figure 4.

Appendix A.5. Cost Estimation Calculations and Assumptions

Here, we present the details of the calculations and assumptions behind the cost estimates used in this work for UAV operation.
The area that a drone can map in a day, A_day, can be calculated from the manufacturer's specifications for the drone's imagery resolution and the battery life of a single flight. We also assume a 6 h daily flight limit on drone operation.
A_day = (A_1flight(res) / T_battery) × 6 h
The area that a drone can map in a single flight depends on its resolution, and we assume this relationship to be linear (assuming a constant flight speed), while the area of the field of view is quadratic with respect to ground sampling distance. Here, we use the manufacturer's specification that flying at a height of 120 m produces 3 cm ground sampling distance imagery.
A_1flight(res) = (res / 0.03) × 120
The required number of days (workdays + weekends) for a specific mission can be calculated using the above quantities. One factor here is that not all days are suitable for a drone flight (e.g., inclement weather); therefore, we factor in the probability of sunny (or otherwise mild) days and assume it to be 80%. This could be replaced with the weather conditions of any region of interest. Additionally, since we use this quantity to calculate the pilot's compensation, we assume payment during weekends as well (although pilots are not assumed to work on weekends) and therefore multiply the number of workdays by 7/5 to obtain the actual elapsed time. Note that we account only for weekends (no holidays) for simplicity.
Day_tot = (A_tot / A_day) × (7 / 5) / %sunny
Next, we calculate the number of pilots required for the mission from our assumption that the mission should take less than 3 months (90 days). Since pilots are paid for their time, increasing the number of pilots (and thereby reducing each pilot's time on the mission) does not substantially change the overall cost.
N_pilots = ceil(Day_tot / 90)
Using the total number of pilots, the fixed cost for the mission can be calculated. The fixed cost consists of pilot training, the pilot certificate exam, drone permit/registration, and transportation (assumed to be airfare) to the mission site. The flight cost assumption was estimated from standard international ticket fares and assumed to cover up to two round trips, based on the assumption of up to 3 months of work time. The training costs were estimated from the costs of online preparatory courses for a Part 107 certificate.
$_fixed = ($_flight + $_training + $_exam + $_permit) × N_pilots
The bulk of the operational costs comes from human labor, lodging, and transportation. Wage information is taken as an average of median wages reported for drone operators across various online salary tables. The hotel cost assumptions are average U.S. daily rates from a variety of leading hotel retailers. The car and insurance cost assumptions come from monthly rental quotes from leading car rental companies.
$_human = ($_wage + $_benefit + $_hotel + $_car + $_car_insurance + $_translator) × Day_tot × N_pilots
Another cost is the amortized cost of the UAV. We assume the total lifetime of a drone and all associated equipment is 800 flight-hours. The price assumptions for drones come directly from drone retailers; some retailers include the camera in the drone price while others do not. For battery costs, we assume enough batteries are purchased to support a full day's operation for each pilot team.
$_drone = (($_buy_drone + $_camera + $_batteries) / T_lifetime × T_operation) × N_pilots
We assume pilots rent cars to commute locally during the mission; therefore, fuel also needs to be factored into the calculation. Although we retrospectively found this portion small enough to neglect, we include it for completeness. To calculate the total driving distance, we assume the mission maps a large contiguous square area. As the maximum communication distance between drone and controller is 7 km, in principle the pilot needs to drive at least the total area divided by twice the communication distance (14 km) to complete the mission. The fuel economy of the car is assumed to be 25 miles per gallon (MPG), equivalent to about 40 km per gallon (KMPG).
$_fuel = $_gallon × (A_tot / 14) / KMPG
Another cost is data storage. Assuming 8-bit color imagery (1 Byte/pixel/channel) and four stored channels (red, green, blue, and one for GIS post-processing), the number of pixels can be calculated for a given area and image resolution. The total cost of data storage is the number of hard drives (HDDs) needed to store all the data, i.e., the total number of bytes divided by the HDD capacity, times the HDD unit price (we assume physical hard drives are needed in case access to cloud services is unavailable).
$_DataStorage = 1 (Byte/pixel/channel) × 4 (channels) × area (km²) / res² (m²) × 10^6 × $_HDD / Cap_HDD
The total cost of the mission is calculated by adding all of the above costs together:
$_tot = $_fixed + $_human + $_drone + $_fuel + $_DataStorage
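The chain of formulas above can be assembled into a single cost function. This is a sketch only: every price and capability parameter below is a placeholder default, not a value from Table 2, and should be replaced with region-specific figures before use.

```python
import math

def uav_mission_cost(area_km2, res_m=0.03, *,
                     area_per_flight_3cm_km2=2.0,  # placeholder: km^2 mapped per flight at 3 cm GSD
                     battery_life_h=1.0,           # flight time per battery charge
                     pct_sunny=0.8,                # share of flyable days
                     fixed_per_pilot=3000.0,       # airfare + training + exam + permit (placeholder)
                     daily_per_pilot=600.0,        # wage, benefits, hotel, car, insurance (placeholder)
                     drone_kit_price=30000.0,      # drone + camera + batteries (placeholder)
                     drone_lifetime_h=800.0,       # assumed equipment lifetime in flight-hours
                     fuel_per_gallon=4.0, kmpg=40.0,
                     hdd_price=100.0, hdd_capacity_b=4e12):
    # Area per flight scales linearly with GSD; a day allows 6 flight hours.
    a_flight = area_per_flight_3cm_km2 * (res_m / 0.03)
    a_day = a_flight / battery_life_h * 6
    # Calendar days: pad workdays by 7/5 for weekends and by 1/%sunny for weather.
    day_tot = area_km2 / a_day * (7 / 5) / pct_sunny
    n_pilots = math.ceil(day_tot / 90)            # cap each pilot at ~3 months
    flight_hours = area_km2 / a_flight * battery_life_h
    cost = {
        "fixed": fixed_per_pilot * n_pilots,
        # Total pilot-days equal day_tot regardless of how pilots split the work.
        "human": daily_per_pilot * day_tot,
        # Amortize equipment over its lifetime across total flight hours.
        "drone": drone_kit_price / drone_lifetime_h * flight_hours,
        # Drive ~area/14 km (area over twice the 7 km controller range).
        "fuel": fuel_per_gallon * (area_km2 / 14) / kmpg,
        # 4 channels x 1 byte/pixel; pixels = area (m^2) / res^2.
        "storage": 4 * area_km2 * 1e6 / res_m**2 * hdd_price / hdd_capacity_b,
    }
    cost["total"] = sum(cost.values())
    return cost
```

Because the per-pilot fixed cost is the only step-wise term, the per-km² cost flattens out as the area grows, reproducing the near-linear total-cost behavior discussed in Section 6.2.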

Appendix A.6. Lodging Cost Ratio

One of our assumptions is the absence of local pilots, i.e., that pilots are on travel status and incur hotel costs. We recognize this may become less justified as more certified pilots become available across the globe, and we therefore provide the ratio of the lodging fee to the total cost as a function of total area.
Figure A4. Hotel cost as a percentage of total cost with respect to the total area of the mission at 0.03 m resolution.
As Figure A4 shows, the hotel cost share gradually ramps up as the total mission area increases and asymptotes to 15%. This means that, at a resolution of 0.03 m, if local pilots can be found within day-trip distance of the area of interest, up to 15% of the total cost can be saved.

References

  1. United Nations. Goal 7|Department of Economic and Social Affairs. Available online: https://sdgs.un.org/goals/goal7 (accessed on 1 September 2021).
  2. Martin. Sustainable Development Goals Report. Available online: https://www.un.org/sustainabledevelopment/progress-report/ (accessed on 1 September 2021).
  3. Bisaga, I.; Parikh, P.; Tomei, J.; To, L.S. Mapping synergies and trade-offs between energy and the sustainable development goals: A case study of off-grid solar energy in Rwanda. Energy Policy 2021, 149, 112028.
  4. Bandi, V.; Sahrakorpi, T.; Paatero, J.; Lahdelma, R. Touching the invisible: Exploring the nexus of energy access, entrepreneurship, and solar homes systems in India. Energy Res. Soc. Sci. 2020, 69, 101767.
  5. Watson, A.C.; Jacobson, M.D.; Cox, S.L. Renewable Energy Data, Analysis, and Decisions Viewed through a Case Study in Bangladesh; Technical Report; National Renewable Energy Lab. (NREL): Golden, CO, USA, 2019.
  6. Malof, J.M.; Hou, R.; Collins, L.M.; Bradbury, K.; Newell, R. Automatic solar photovoltaic panel detection in satellite imagery. In Proceedings of the 2015 International Conference on Renewable Energy Research and Applications (ICRERA), Palermo, Italy, 22–25 November 2015; pp. 1428–1431.
  7. Castello, R.; Roquette, S.; Esguerra, M.; Guerra, A.; Scartezzini, J.L. Deep learning in the built environment: Automatic detection of rooftop solar panels using Convolutional Neural Networks. J. Phys. Conf. Ser. 2019, 1343, 012034.
  8. Bhatia, M.; Angelou, N. Beyond Connections; World Bank: Washington, DC, USA, 2015.
  9. Malof, J.M.; Bradbury, K.; Collins, L.M.; Newell, R.G. Automatic detection of solar photovoltaic arrays in high resolution aerial imagery. Appl. Energy 2016, 183, 229–240.
  10. Yu, J.; Wang, Z.; Majumdar, A.; Rajagopal, R. DeepSolar: A machine learning framework to efficiently construct a solar deployment database in the United States. Joule 2018, 2, 2605–2617.
  11. Malof, J.; Collins, L.; Bradbury, K.; Newell, R. A deep convolutional neural network and a random forest classifier for solar photovoltaic array detection in aerial imagery. In Proceedings of the 2016 IEEE International Conference on Renewable Energy Research and Applications (ICRERA), Birmingham, UK, 20–23 November 2016; pp. 650–654.
  12. Kruitwagen, L.; Story, K.; Friedrich, J.; Byers, L.; Skillman, S.; Hepburn, C. A global inventory of photovoltaic solar energy generating units. Nature 2021, 598, 604–610.
  13. IRENA. Off-Grid Renewable Energy Statistics 2020; International Renewable Energy Agency: Abu Dhabi, United Arab Emirates, 2020.
  14. What Is the Standard Size of a Solar Panel? Available online: https://www.thesolarnerd.com/blog/solar-panel-dimensions/ (accessed on 1 September 2021).
  15. Liang, S.; Wang, J. (Eds.) Chapter 1—A systematic view of remote sensing. In Advanced Remote Sensing, 2nd ed.; Academic Press: Cambridge, MA, USA, 2020; pp. 1–57.
  16. Yuan, J.; Yang, H.H.L.; Omitaomu, O.A.; Bhaduri, B.L. Large-scale solar panel mapping from aerial images using deep convolutional networks. In Proceedings of the 2016 IEEE International Conference on Big Data (Big Data), Washington, DC, USA, 5–8 December 2016; pp. 2703–2708.
  17. Bradbury, K.; Saboo, R.; Johnson, T.L.; Malof, J.M.; Devarajan, A.; Zhang, W.; Collins, L.M.; Newell, R.G. Distributed solar photovoltaic array location and extent dataset for remote sensing object identification. Sci. Data 2016, 3, 160106. [Google Scholar] [CrossRef] [Green Version]
  18. Ishii, T.; Simo-Serra, E.; Iizuka, S.; Mochizuki, Y.; Sugimoto, A.; Ishikawa, H.; Nakamura, R. Detection by classification of buildings in multispectral satellite imagery. In Proceedings of the 2016 23rd International Conference on Pattern Recognition (ICPR), Cancun, Mexico, 4–8 December 2016; pp. 3344–3349. [Google Scholar]
  19. Malof, J.M.; Bradbury, K.; Collins, L.; Newell, R. Image features for pixel-wise detection of solar photovoltaic arrays in aerial imagery using a random forest classifier. In Proceedings of the 5th International Conference on Renewable Energy Research and Applications, Birmingham, UK, 20–23 November 2016; Volume 5, pp. 799–803. [Google Scholar]
  20. Krizhevsky, A.; Sutskever, I.; Hinton, G.E. Imagenet classification with deep convolutional neural networks. Adv. Neural Inf. Process. Syst. 2012, 25, 1097–1105. [Google Scholar] [CrossRef]
  21. Long, J.; Shelhamer, E.; Darrell, T. Fully convolutional networks for semantic segmentation. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Boston, MA, USA, 7–12 June 2015; pp. 3431–3440. [Google Scholar]
  22. Deng, J.; Dong, W.; Socher, R.; Li, L.J.; Li, K.; Li, F. Imagenet: A large-scale hierarchical image database. In Proceedings of the 2009 IEEE Conference on Computer Vision and Pattern Recognition, Miami, FL, USA, 20–25 June 2009; pp. 248–255. [Google Scholar]
  23. Malof, J.M.; Li, B.; Huang, B.; Bradbury, K.; Stretslov, A. Mapping solar array location, size, and capacity using deep learning and overhead imagery. arXiv 2019, arXiv:1902.10895. [Google Scholar]
  24. Camilo, J.; Wang, R.; Collins, L.M.; Bradbury, K.; Malof, J.M. Application of a semantic segmentation convolutional neural network for accurate automatic detection and mapping of solar photovoltaic arrays in aerial imagery. arXiv 2018, arXiv:1801.04018. [Google Scholar]
  25. Ronneberger, O.; Fischer, P.; Brox, T. U-net: Convolutional networks for biomedical image segmentation. In Proceedings of the International Conference on Medical image Computing and Computer-Assisted Intervention, Munich, Germany, 5–9 October 2015; pp. 234–241. [Google Scholar]
  26. Hou, X.; Wang, B.; Hu, W.; Yin, L.; Wu, H. SolarNet: A Deep Learning Framework to Map Solar Power Plants in China From Satellite Imagery. arXiv 2019, arXiv:1912.03685. [Google Scholar]
  27. Zhang, D.; Wu, F.; Li, X.; Luo, X.; Wang, J.; Yan, W.; Chen, Z.; Yang, Q. Aerial image analysis based on improved adaptive clustering for photovoltaic module inspection. In Proceedings of the 2017 International Smart Cities Conference (ISC2), Wuxi, China, 14–17 September 2017. [Google Scholar] [CrossRef]
  28. Ismail, H.; Chikte, R.; Bandyopadhyay, A.; Al Jasmi, N. Autonomous detection of PV panels using a drone. In Proceedings of the ASME 2019 International Mechanical Engineering Congress and Exposition, Salt Lake City, UT, USA, 11–14 November 2019; Volume 59414, p. V004T05A051. [Google Scholar]
  29. Vega Díaz, J.J.; Vlaminck, M.; Lefkaditis, D.; Orjuela Vargas, S.A.; Luong, H. Solar panel detection within complex backgrounds using thermal images acquired by UAVs. Sensors 2020, 20, 6219. [Google Scholar] [CrossRef]
  30. Zheng, T.; Bergin, M.H.; Hu, S.; Miller, J.; Carlson, D.E. Estimating ground-level PM2.5 using micro-satellite images by a convolutional neural network and random forest approach. Atmos. Environ. 2020, 230, 117451. [Google Scholar] [CrossRef]
  31. Pierdicca, R.; Malinverni, E.S.; Piccinini, F.; Paolanti, M.; Felicetti, A.; Zingaretti, P. Deep convolutional neural network for automatic detection of damaged photovoltaic cells. Int. Arch. Photogramm. Remote Sens. Spat. Inf. Sci.—ISPRS Arch. 2018, 42, 893–900. [Google Scholar] [CrossRef] [Green Version]
  32. Xie, X.; Wei, X.; Wang, X.; Guo, X.; Li, J.; Cheng, Z. Photovoltaic panel anomaly detection system based on Unmanned Aerial Vehicle platform. IOP Conf. Ser. Mater. Sci. Eng. 2020, 768, 072061. [Google Scholar] [CrossRef]
  33. Herraiz, A.; Marugan, A.; Marquez, F. Optimal Productivity in Solar Power Plants Based on Machine Learning and Engineering Management. In Proceedings of the Twelfth International Conference on Management Science and Engineering Management, Melbourne, Australia, 1–4 August 2018; Springer International Publishing: Berlin/Heidelberg, Germany, 2018; pp. 901–912. [Google Scholar] [CrossRef]
  34. Li, X.; Yang, Q.; Lou, Z.; Yan, W. Deep Learning Based Module Defect Analysis for Large-Scale Photovoltaic Farms. IEEE Trans. Energy Convers. 2019, 34, 520–529. [Google Scholar] [CrossRef]
  35. Ding, S.; Yang, Q.; Li, X.; Yan, W.; Ruan, W. Transfer Learning based Photovoltaic Module Defect Diagnosis using Aerial Images. In Proceedings of the 2018 International Conference on Power System Technology (POWERCON), Guangzhou, China, 6–8 November 2018; pp. 4245–4250. [Google Scholar] [CrossRef]
  36. Li, X.; Yang, Q.; Wang, J.; Chen, Z.; Yan, W. Intelligent fault pattern recognition of aerial photovoltaic module images based on deep learning technique. In Proceedings of the 9th International Multi-Conference on Complexity, Informatics and Cybernetics (IMCIC 2018), Orlando, FL, USA, 13–16 March 2018; Volume 1, pp. 22–27. [Google Scholar]
  37. Li, X.; Li, W.; Yang, Q.; Yan, W.; Zomaya, A.Y. An Unmanned Inspection System for Multiple Defects Detection in Photovoltaic Plants. IEEE J. Photovolt. 2020, 10, 568–576. [Google Scholar] [CrossRef]
  38. Hanafy, W.A.; Pina, A.; Salem, S.A. Machine learning approach for photovoltaic panels cleanliness detection. In Proceedings of the 2019 15th International Computer Engineering Conference (ICENCO), Cairo, Egypt, 29–30 December 2019; p. 77. [Google Scholar] [CrossRef]
  39. Correa, S.; Shah, Z.; Taneja, J. This Little Light of Mine: Electricity Access Mapping Using Night-time Light Data. In Proceedings of the Twelfth ACM International Conference on Future Energy Systems, Online, 28 June–2 July 2021; pp. 254–258. [Google Scholar]
  40. Falchetta, G.; Pachauri, S.; Parkinson, S.; Byers, E. A high-resolution gridded dataset to assess electrification in sub-Saharan Africa. Sci. Data 2019, 6, 110. [Google Scholar] [CrossRef] [Green Version]
  41. Min, B.; Gaba, K.M.; Sarr, O.F.; Agalassou, A. Detection of rural electrification in Africa using DMSP-OLS night lights imagery. Int. J. Remote Sens. 2013, 34, 8118–8141. [Google Scholar] [CrossRef]
  42. Chew, R.; Rineer, J.; Beach, R.; O’Neil, M.; Ujeneza, N.; Lapidus, D.; Miano, T.; Hegarty-Craver, M.; Polly, J.; Temple, D.S. Deep Neural Networks and Transfer Learning for Food Crop Identification in UAV Images. Drones 2020, 4, 7. [Google Scholar] [CrossRef] [Green Version]
  43. Drone Imagery Classification Training Dataset for Crop Types in Rwanda. 2021. Available online: https://doi.org/10.34911/rdnt.r4p1fr (accessed on 1 June 2021).
  44. Bishop, C.M. Pattern Recognition and Machine Learning; Springer: Berlin/Heidelberg, Germany, 2006; Volume 128. [Google Scholar]
  45. Orych, A. Review of methods for determining the spatial resolution of UAV Sensors. Int. Arch. Photogramm. Remote Sens. Spat. Inf. Sci. 2015, 40, 391–395. [Google Scholar] [CrossRef] [Green Version]
  46. Şimşek, B.; Bilge, H.Ş. A Novel Motion Blur Resistant vSLAM Framework for Micro/Nano-UAVs. Drones 2021, 5, 121. [Google Scholar] [CrossRef]
  47. Federal Aviation Administration. Become a Drone Pilot; Federal Aviation Administration: Washington, DC, USA, 2021.
  48. Federal Aviation Administration. How to Register Your Drone; Federal Aviation Administration: Washington, DC, USA, 2021.
  49. U.S. Energy Information Administration. Gasoline and Diesel Fuel Update; U.S. Energy Information Administration: Washington, DC, USA, 2022.
  50. U.S. Department of Labor, Bureau of Labor Statistics. News Release. Available online: https://www.bls.gov/news.release/pdf/ecec.pdf (accessed on 1 January 2022).
  51. Geographic Information Coordinating Council, NC. Business Plan for Orthoimagery in North Carolina. Available online: https://files.nc.gov/ncdit/documents/files/OrthoImageryBusinessPlan-NC-20101029.pdf (accessed on 1 September 2021).
  52. State of Connecticut. Connecticut’s Plan for The American Rescue Plan Act of 2021. Available online: https://portal.ct.gov/-/media/Office-of-the-Governor/News/2021/20210426-Governor-Lamont-ARPA-allocation-plan.pdf (accessed on 1 September 2021).
  53. Lietz, H.; Lingani, M.; Sie, A.; Sauerborn, R.; Souares, A.; Tozan, Y. Measuring population health: Costs of alternative survey approaches in the Nouna Health and Demographic Surveillance System in rural Burkina Faso. Glob. Health Action 2015, 8, 28330. [Google Scholar] [CrossRef]
  54. Fuller, A.T.; Butler, E.K.; Tran, T.M.; Makumbi, F.; Luboga, S.; Muhumza, C.; Chipman, J.G.; Groen, R.S.; Gupta, S.; Kushner, A.L.; et al. Surgeons overseas assessment of surgical need (SOSAS) Uganda: Update for Household Survey. World J. Surg. 2015, 39, 2900–2907. [Google Scholar] [CrossRef] [PubMed]
  55. Gertler, P.J.; Martinez, S.; Premand, P.; Rawlings, L.B.; Vermeersch, C.M. Impact Evaluation in Practice; World Bank Publications: Washington, DC, USA, 2016. [Google Scholar]
  56. Ivosevic, B.; Han, Y.G.; Cho, Y.; Kwon, O. The use of conservation drones in ecology and wildlife research. J. Ecol. Environ. 2015, 38, 113–118. [Google Scholar] [CrossRef] [Green Version]
  57. Rabta, B.; Wankmüller, C.; Reiner, G. A drone fleet model for last-mile distribution in disaster relief operations. Int. J. Disaster Risk Reduct. 2018, 28, 107–112. [Google Scholar] [CrossRef]
  58. Afghah, F.; Razi, A.; Chakareski, J.; Ashdown, J. Wildfire monitoring in remote areas using autonomous unmanned aerial vehicles. In Proceedings of the IEEE INFOCOM 2019—IEEE Conference on Computer Communications Workshops (INFOCOM WKSHPS), Paris, France, 29 April–2 May 2019; pp. 835–840. [Google Scholar]
  59. Floreano, D.; Wood, R.J. Science, technology and the future of small autonomous drones. Nature 2015, 521, 460–466. [Google Scholar] [CrossRef] [Green Version]
  60. Ramsankaran, R.; Navinkumar, P.; Dashora, A.; Kulkarni, A.V. UAV-based survey of glaciers in himalayas: Challenges and recommendations. J. Indian Soc. Remote Sens. 2021, 49, 1171–1187. [Google Scholar] [CrossRef]
  61. Hassanalian, M.; Abdelkefi, A. Classifications, applications, and design challenges of drones: A review. Prog. Aerosp. Sci. 2017, 91, 99–131. [Google Scholar] [CrossRef]
  62. Azar, A.T.; Koubaa, A.; Ali Mohamed, N.; Ibrahim, H.A.; Ibrahim, Z.F.; Kazim, M.; Ammar, A.; Benjdira, B.; Khamis, A.M.; Hameed, I.A.; et al. Drone Deep Reinforcement Learning: A Review. Electronics 2021, 10, 999. [Google Scholar] [CrossRef]
  63. Washington, A.N. A Survey of Drone Use for Socially Relevant Problems: Lessons from Africa. Afr. J. Comput. ICT 2018, 11, 1–11. [Google Scholar]
  64. The World Bank. Population Density (People per Sq. Km of Land Area)—Rwanda. Available online: https://data.worldbank.org/indicator/EN.POP.DNST?locations=RW (accessed on 1 March 2022).
  65. National Institute of Statistics of Rwanda (NISR), Ministry of Health and ICF International. The Rwanda Demographic and Health Survey 2014–15. Available online: https://www.dhsprogram.com/pubs/pdf/SR229/SR229.pdf (accessed on 1 March 2022).
  66. WorldPop and CIESIN, Columbia University. The Spatial Distribution of Population Density in 2020, Rwanda. Available online: https://dx.doi.org/10.5258/SOTON/WP00675 (accessed on 1 March 2022).
Figure 1. Example imagery from our collected dataset. The first row contains imagery taken at Blackwood Field; the second row, at Couch Field. The left column has a GSD of 2 cm, the middle column 3 cm, and the right column 4 cm.
Figure 2. Post-processing diagram. (a) Original RGB drone imagery. The post-processing step takes as input the prediction confidence map (b) from the model output and generates candidate objects through thresholding, grouping, and dilating. (c–h) are products of the following steps: Step 1 (S1) thresholds the confidence at 0.5, eliminating the least-confident detections. Step 2 (S2) matches connected pixels into groups of pixels (groups shown in different colors). Step 3 (S3) eliminates the groups of pixels that are too small (likely noise). Up to this point, some pixels that correspond to the same solar panel appear disconnected and therefore belong to different groups. Steps 4 and 5 address this issue by dilating the proposal pixels (S4) and grouping them (S5). To ensure the dilation does not change the overall area of prediction, we assign the groups based on the pre-dilation map, but with the dilated grouping, by label point-wise multiplication.
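The five post-processing steps in the Figure 2 caption can be sketched with standard connected-component tools. A minimal version, where the minimum group size and dilation amount are illustrative rather than the paper's exact settings:

```python
# Sketch of the S1-S5 post-processing pipeline for a 2-D confidence map
# in [0, 1]. min_size and dilate_iter are illustrative placeholders.
import numpy as np
from scipy import ndimage

def postprocess(conf_map, threshold=0.5, min_size=20, dilate_iter=3):
    # S1: threshold out low-confidence pixels
    mask = conf_map >= threshold
    # S2: group connected pixels
    labels, n = ndimage.label(mask)
    # S3: drop groups that are too small (likely noise)
    sizes = ndimage.sum(mask, labels, range(1, n + 1))
    keep = np.isin(labels, 1 + np.flatnonzero(sizes >= min_size))
    # S4: dilate the surviving pixels so fragments of one panel touch
    dilated = ndimage.binary_dilation(keep, iterations=dilate_iter)
    # S5: regroup on the dilated map, then restrict the labels to the
    # pre-dilation pixels so the predicted area is unchanged
    dil_labels, _ = ndimage.label(dilated)
    return dil_labels * keep  # point-wise multiplication with the binary map
```

Two nearby fragments of the same panel thus receive one label after S5, while the total number of predicted pixels is exactly that of the pre-dilation map.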
Figure 3. Cost and performance tradeoff with UAV and satellite imagery. (a) Cost per km² in USD versus resolution for UAV, satellite, and aerial imagery. Hexagon (HxGN*) provides archival piloted aerial imagery, which is limited in coverage compared to satellite coverage. Pilot-{CT, NC}* are costs of new piloted aerial imagery collection, estimated from government documents of Connecticut and North Carolina, USA. Note that unit costs vary with different target area sizes for drone operation. (b) Detection performance vs. resolution up to satellite resolutions, with typical resolutions of each platform annotated. The performance drops significantly, down to effectively zero, from typical UAV resolutions to typical satellite resolutions.
Figure 4. Average precision score for the training and testing data at various resolutions. Rows correspond to training resolutions and columns to testing resolutions. The bins are grouped so that they correspond to 20 m intervals in flight altitude.
Figure 5. Detection performance degradation due to flying faster at various altitudes. Shown is the absolute difference between each performance metric at a slow flying speed and the same metric at a faster flying speed.
Figure 6. Major categories of our UAV operation cost estimate structure. Legal and permitting cost is highly region-dependent and is usually a fixed cost, independent of the resolution and area covered. All other categories depend strongly on both resolution and area.
Figure 7. Total cost of UAV mapping with respect to total area mapped at a resolution of 0.03 m. The x axis is the area in km² on a log scale, and the y axis is the total cost in USD, also log-scaled. We provide reference area sizes with their populations: (1) O.R. Tambo International Airport of South Africa, the busiest airport in Africa; (2) N’Djamena, capital city of Chad; (3) Kigali, capital and largest city of Rwanda; (4) Manus Province, the smallest province of Papua New Guinea; and (5) Federal Capital Territory, capital area of Nigeria. All descriptions and population information are from Wikipedia.
Figure 8. Sample predictions from the Rwanda dataset. Left columns show the imagery patches and right columns show the model predictions. Green represents true positives: correctly labeled solar panel pixels. Red represents false positives, which occur when the algorithm predicts a solar panel where there is none. Orange represents false negatives: actual solar panels that were not detected.
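The color coding described in the Figure 8 caption can be produced directly from binary prediction and label masks. A minimal sketch; the exact RGB values are illustrative, not the paper's:

```python
# Build a green/red/orange error map from binary prediction and label masks.
import numpy as np

def error_map(pred, label):
    """Return an RGB image: green=TP, red=FP, orange=FN, white=TN."""
    pred, label = pred.astype(bool), label.astype(bool)
    rgb = np.ones(pred.shape + (3,))        # white background (true negatives)
    rgb[pred & label] = (0.0, 0.8, 0.0)     # true positive: green
    rgb[pred & ~label] = (0.9, 0.0, 0.0)    # false positive: red
    rgb[~pred & label] = (1.0, 0.6, 0.0)    # false negative: orange
    return rgb
```

Each pixel falls in exactly one of the four cases, so the three assignments never overwrite each other.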
Figure 9. Precision-recall curve of the case study for small solar home systems in the Rwanda RTI imagery.
Table 1. Details of the dataset collected. Altitude and corresponding GSD are listed. Img: Images. Vid: Videos. In total there are 423 images, 60 videos, and 2019 annotated solar panels.
| Altitude | GSD    | # Img | # Vid | # Annotated PV |
|----------|--------|-------|-------|----------------|
| 50 m     | 1.7 cm | 58    | 6     | 248            |
| 60 m     | 2.1 cm | 63    | 7     | 289            |
| 70 m     | 2.5 cm | 47    | 8     | 227            |
| 80 m     | 2.8 cm | 60    | 8     | 295            |
| 90 m     | 3.2 cm | 44    | 8     | 214            |
| 100 m    | 3.5 cm | 47    | 9     | 230            |
| 110 m    | 3.9 cm | 56    | 4     | 278            |
| 120 m    | 4.3 cm | 48    | 10    | 238            |
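The altitude-to-GSD mapping in Table 1 follows the standard pinhole relation GSD = altitude × sensor width / (focal length × image width). A sketch with Phantom 4 Pro-like camera parameters as placeholders; these are not the paper's platform, so they do not exactly reproduce the table's values:

```python
# Ground sampling distance from flight altitude via the pinhole camera model.
# The default sensor width, focal length, and image width are hypothetical
# (Phantom 4 Pro-like), not the camera used in the paper.

def gsd_cm(altitude_m, sensor_width_mm=13.2, focal_length_mm=8.8,
           image_width_px=5472):
    """Ground sampling distance in cm per pixel at a given flight altitude."""
    gsd_m = altitude_m * (sensor_width_mm / 1000) \
        / ((focal_length_mm / 1000) * image_width_px)
    return gsd_m * 100
```

GSD is linear in altitude, which is why each 10 m step in Table 1 adds a near-constant increment (about 0.35 cm for the paper's camera).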
Table 2. Table with detailed breakdown of the cost assumptions in each category for US-based operation. References embedded with hyperlinks. Note that nearly all of the unit prices in this chart vary greatly with the operational location of interest, and therefore anyone referencing this cost estimate framework should adjust according to local conditions and prices. A detailed calculation and explanation of our cost assumptions are given in the appendix and can be found at our code repository.
| Category         | Item                    | Unit Cost  | Unit          |
|------------------|-------------------------|------------|---------------|
| Legal and permit | Part 107 certificate    | $150 [47]  | /pilot        |
|                  | Pilot training for exam | $300       | /pilot        |
|                  | Drone registration fee  | $5 [48]    | /drone × year |
| Transportation   | Car rental              | $1700      | /month        |
|                  | Car insurance           | $400       | /month        |
|                  | Fuel                    | $3 [49]    | /gallon       |
|                  | Flight ticket           | $2000      | /pilot        |
|                  | Driver/translator       | $0 in US   | /pilot        |
| Labor related    | Wage                    | $40        | /hour × pilot |
|                  | Benefit                 | $20 [50]   | /hour × pilot |
|                  | Hotel                   | $125       | /night        |
| Drone related    | Drone                   | $27,000    | /drone        |
|                  | Camera                  | $0         | /drone        |
|                  | Battery                 | $3000      | /drone        |
|                  | Data storage            | $130       | /5 TB         |
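A roll-up of these unit costs into a total mission cost might look like the following sketch. The unit prices come from Table 2; the inputs (field days, fuel, 8-hour workdays, monthly rental proration) are hypothetical placeholders to be replaced with mission-specific figures, and this is not the paper's exact cost model:

```python
# Illustrative mission cost roll-up over the Table 2 categories.
# Unit prices are from Table 2 (US-based); all scheduling assumptions
# (8 h workdays, pro-rated monthly rental) are hypothetical.

def mission_cost(n_pilots, n_drones, field_days, fuel_gallons):
    months = field_days / 30  # pro-rate the monthly car rental
    legal = n_pilots * (150 + 300) + n_drones * 5
    transport = months * (1700 + 400) + fuel_gallons * 3 + n_pilots * 2000
    labor = field_days * 8 * n_pilots * (40 + 20) \
        + field_days * n_pilots * 125  # wage + benefit, plus hotel nights
    equipment = n_drones * (27000 + 3000) + 130  # drone + batteries + 5 TB
    return legal + transport + labor + equipment
```

Because the drone and battery purchases dominate at small areas, a real deployment would likely amortize them over the equipment's lifetime rather than charge them to a single mission, as the appendix's fixed-vs-variable discussion suggests.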
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Share and Cite

MDPI and ACS Style

Ren, S.; Malof, J.; Fetter, R.; Beach, R.; Rineer, J.; Bradbury, K. Utilizing Geospatial Data for Assessing Energy Security: Mapping Small Solar Home Systems Using Unmanned Aerial Vehicles and Deep Learning. ISPRS Int. J. Geo-Inf. 2022, 11, 222. https://doi.org/10.3390/ijgi11040222
