Article

Combining Unmanned Aerial Systems and Sensor Networks for Earth Observation

1 Robotics Vision and Control Group, University of Sevilla, Escuela Superior de Ingenieros, c/Camino de los Descubrimientos s/n, 41092 Seville, Spain
2 Center for Advanced Aerospace Technologies (CATEC), Aerospace Technology Park of Andalusia, 41309 La Rinconada, Sevilla, Spain
* Author to whom correspondence should be addressed.
Remote Sens. 2017, 9(4), 336; https://doi.org/10.3390/rs9040336
Submission received: 30 December 2016 / Revised: 22 March 2017 / Accepted: 27 March 2017 / Published: 1 April 2017
(This article belongs to the Special Issue Recent Trends in UAV Remote Sensing)

Abstract
The combination of remote sensing and sensor network technologies can provide unprecedented earth observation capabilities and has attracted high R&D interest in recent years. However, the procedures and tools used for deployment, georeferencing and collection of logged measurements in the case of traditional environmental monitoring stations are not suitable when dealing with hundreds or thousands of sensor nodes deployed in an environment of tens of hectares. This paper presents a scheme based on Unmanned Aerial Systems that aims to take a step forward in the use of sensor networks for environment observation. The presented scheme includes methods, tools and technologies to solve sensor node deployment, localization and collection of measurements. The presented scheme is scalable—it is suitable for medium–large environments with a high number of sensor nodes—and highly autonomous—it is operated with very low human intervention. This paper presents the scheme, including its main components, techniques and technologies, and describes its implementation and evaluation in field experiments.

Graphical Abstract

1. Introduction

The need for a better understanding of the complex interactions between the atmosphere, oceans, land surfaces and the population with its activities has motivated intense research and technological development focused on improving the spatial, temporal and spectral resolutions in remote sensing. New-generation satellites are revolutionizing remote sensing with unprecedented resolutions and accuracies. The need for high resolutions has also motivated the use of aircraft in earth observation. Unmanned aerial systems (UAS), with lower costs and fewer take-off and landing infrastructure requirements than manned aircraft, are being used in a growing number of remote sensing applications.
In the last few years, the advances and miniaturization of sensors, computing and communication devices have driven the development of wireless sensor technologies. Sensor networks are comprised of embedded nodes with sensing, computation and communication capabilities that can autonomously organize into networks [1]. Sensor nodes are designed to be low-cost and to have low energy consumption, which imposes tight constraints on their computational and transmission capacities. Sensor nodes are small, lightweight, easy to install and functionally autonomous. They are equipped with their own energy supply, sensors, and processing and data logging capacity. Sensor networks have been widely employed in environmental monitoring [2,3,4,5]. Having hundreds or thousands of small, inexpensive sensor nodes deployed in the environment, gathering high-resolution measurements for months or even years, opens many possibilities in a wide range of applications.
Measuring environmental variables at appropriate temporal and spatial scales remains an important challenge in earth sciences research. The integration of sensor networks in remote sensing can provide unprecedented observation capabilities and has attracted high interest in recent years [6,7,8,9,10]. However, using sensor network technologies for large-scale environmental monitoring requires solving a number of issues. Tasks such as sensor node installation and deployment, georeferencing or collection of measurements can be very complex when dealing with hundreds or thousands of small, inexpensive sensor nodes deployed in an environment of tens or hundreds of hectares. The procedures and technologies adopted for traditional environmental monitoring stations involve intensive human labor and are not suitable for this problem.
In this paper, Unmanned Aerial Systems (UAS) are proposed as tools for solving the main difficulties of employing sensor network (SN) technologies in medium–large scale environmental monitoring. UAS can autonomously transport and deploy sensor nodes in areas with difficult access. In addition, UAS are suitable for estimating the locations of deployed sensor nodes using the strength of the radio signals they receive from them, and these locations can then be used for georeferencing their measurements. UAS can also communicate with the deployed sensor nodes and collect their measurements in an energy-efficient way in order to extend the battery lifetimes of the deployed sensor nodes. Both UAS and sensor nodes are suitable for rapid and effort-efficient deployment.
This paper presents an integral UAS-based scheme that aims to advance the use of sensor nodes in remote sensing. The scheme is suitable for medium–large environments with a high number of sensor nodes. It is designed for operators without specific knowledge of the involved technologies and methods: operators only have to provide high-level commands, and the scheme performs the mission in a highly automated way. We believe that the main strength of the proposed work is not the individual methods and technologies for sensor node deployment, localization and measurement collection, but the integration of these methods and technologies into a fully functional scheme. To the best of our knowledge, the presented scheme is the first integral tool that addresses UAS-sensor node cooperation at different levels for environmental monitoring. This paper presents the proposed scheme, its main components, technologies and methods, and describes their implementation and evaluation in realistic field experiments.
This paper is structured as follows. Section 1.1 summarizes the related work. The main equipment and methods are presented in Section 2. This section also describes how the methods are harmonized into one integrated scheme. The field experimentation of the scheme and methods is briefly presented in Section 3. Some discussions of the presented methods and experimental results can be found in Section 4. The conclusions are summarized in Section 5.

1.1. Related Work

The potential and maturity of sensor network technologies have attracted very high interest in environmental monitoring applications. A large number of systems, methods and experiences have been reported in the literature. For brevity, only a few of them are cited. Sensor networks have been proposed for pollution and noise monitoring [11], earth observation [6] and for predicting the occurrence of natural hazards [12], among many others. Good reviews of sensor networks for environmental monitoring can be found in [2,5].
The idea of combining sensor network measurements of very high spatial and temporal resolutions with observations obtained using traditional remote sensing tools, such as satellites, is very appealing and has attracted high interest in recent years (see e.g., [6,7,8]). Sensor node measurements can be used to significantly improve the interpretation, configuration and calibration of remote sensing models and algorithms. In addition, they can also be useful to enhance or complement remote sensing measurements of insufficient temporal or spatial resolution. Moreover, sensor nodes can be active for months or years, enabling long-term observations [1]. Some authors have even proposed integrating sensor networks with other earth observation sensors and tools by adopting the Internet of Things paradigm [9].
The cooperation of Unmanned Aerial Systems (UAS) with sensor networks (SN) provides very interesting synergies in many monitoring problems. UAS–SN cooperation has been proposed in disaster monitoring applications (see [13,14,15], among others). Furthermore, it has also received special attention in precision agriculture applications. For instance, in [16], the cooperation of UAS and sensor networks is exploited to optimize the application of pesticides and fertilizers in crops. In [17], the wind speed and direction estimated by the deployed sensor nodes are used to adjust the routes of the UAS in pesticide spraying. It should be noted that most reported works do not consider large-scale deployments and do not include tools or mechanisms for automatic sensor node deployment, localization and measurement collection.
The use of UAS for automatic deployment of sensor nodes was initially proposed in [18]. Its objective was to repair the connectivity of a sensor network deployed in an environment. UAS that automatically deploy sensor nodes have also been proposed in many monitoring applications, including disaster monitoring [13,19], border surveillance [20] and military surveillance [21], among others. Some of the above works describe theoretical designs. In those that include experimental results, sensor node deployment is performed using UAS with hovering capabilities—mainly helicopters—which are not suitable for operation in large environments due to flight time and range constraints.
Using UAS for the collection of sensor measurements has been an attractive research topic. Most works have proposed schemes, architectures and protocols from a theoretical perspective. A centralized medium access control method for aerial platforms has been proposed in [22]. The architecture of a middleware for the integration of sensor networks and UAS is presented in [23]. Measurement collection using UAS with a directional antenna is proposed in [24]. A method for efficient collection is presented in [25]. These works do not include evaluation in field experiments, which is critical due to the high level of uncertainty. Very few works have been evaluated in realistic conditions. In the simplest data collection approach, the UAS “visits”—and communicates with—every deployed sensor node in order to collect its measurements. Experiments with this approach can be found in [26,27]. In [28,29], we devised and preliminarily tested a measurement collection method based on grouping the deployed sensor nodes. In this paper, that method is improved, combined and extended into an integral UAS–SN scheme and extensively evaluated in environmentally relevant scenarios.
Recent Low-Power Wide-Area Network (LPWAN) technologies, such as LoRa (Long Range), NB-IoT (Narrowband Internet of Things) or SigFox, are designed to enable long-range communications with low bit rates in order to reduce energy consumption. These networks can connect thousands of devices located at distances from 5 km to 30 km, but they are still at experimental stages. We believe that our scheme and LPWAN technologies are complementary rather than alternative solutions. The presented scheme provides autonomous node deployment and localization capabilities, improving flexibility and greatly reducing human effort. Using LPWAN nodes in our scheme would enable creating much larger clusters, which simplifies UAS flight planning and enables covering even larger areas.

2. Equipment and Method

In this work, UAS are employed for automatic sensor node deployment, localization and measurement collection. Two main types of hardware components are involved: sensor nodes and UAS. For clarity, the methods are described considering only one UAS, but they can be straightforwardly extended to several UAS.

2.1. Components and Equipment

Figure 1 shows the main modules of the presented scheme. UAS-SN Cooperation Control is the core component. It receives the missions and commands given by the operator and coordinates all of the modules to accomplish the mission. UAS Control acts as the interface between the UAS and the rest of the modules. It is implemented by the UAS Ground Control Station (GCS). The GCS receives commands from UAS-SN Cooperation Control, and transmits them to the computer on board the UAS. The GCS also receives the UAS telemetry from its computer and reports to the UAS-SN Cooperation Control module. SN Control acts as the interface between the sensor nodes deployed in the environment and the rest of the modules. It is implemented by a sensor node on board the UAS that is connected to the UAS computer. The on-board sensor node transmits messages to coordinate the operation of the deployed nodes and also collects their measurements.
We employed TelosB ultra low-power Wireless Sensor Network (WSN) nodes, see Figure 2. The TelosB node can put several of its electronic components into low-energy modes, including the radio transceiver—enabling low-energy transmissions—and the microcontroller—enabling the so-called “sleep” mode. TelosB nodes are equipped with built-in sensors suitable for environmental monitoring, such as a light detector in the chlorophyll absorption spectrum (Hamamatsu S1087, Hamamatsu Photonics K.K., Japan), a light detector in the visible to IR band (Hamamatsu S1087-01) and a relative humidity and temperature sensor (Sensirion SHT11). TelosB nodes provide flexible interconnection with a large variety of peripherals and sensors. In [30], we connected them with sensors to measure concentrations of CO2, CO and hydrocarbon gases.
UAS are suitable for transporting and deploying sensor nodes in areas with difficult access. We preferred fixed-wing UAS over rotary-wing UAS due to their longer flight endurance and flight range [31]. However, fixed-wing UAS have constrained maneuverability—they cannot hover and have flight kinematic constraints, which must be taken into account when designing the methods. The UAS employed in the field experiments was the ELIMCO UAV (Unmanned Aerial Vehicle) E-300 aerial vehicle [32] (see Figure 3). The UAV E-300 has an electric engine that provides a flight endurance of 1 h and has a payload of 2 kg. It takes off from a catapult and lands on its belly: it does not need external take-off and landing infrastructure; hence, it is suitable for quick deployment. It has on-board autonomous flight control capability—it uses the Piccolo autopilot—and its maximum flight range is 40 km, limited by the radio-link range. Its payload includes an on-board BeagleBone embedded computer, an Inertial Measurement Unit (IMU), an RTK (Real Time Kinematic) GPS receiver and the Piccolo autopilot controller. The UAV E-300 is commanded by its Ground Control Station (GCS). The on-board computer gathers measurements from on-board sensors, logs them locally, and transmits them to the GCS. Two different radio-links were used to connect the aerial vehicle and the GCS: one for flight navigation and the other for payload management.
The UAS is also equipped with an on-board TelosB node that is connected to the on-board computer through USB (see Figure 4). The on-board node communicates with the deployed sensor nodes and acts as their interface with the rest of the components. The GCS transmits messages to the UAS, and the on-board computer forwards the messages to the on-board sensor node. The on-board sensor node is equipped with an external 9 dB antenna instead of the built-in micro-strip antenna of TelosB nodes. The antenna points downwards and is rigidly attached to the bottom of the UAS fuselage in order to prevent shadowing. With this setting, we experimentally checked that the on-board node can communicate with sensor nodes deployed on the ground when the UAS flies at altitudes of up to 450 m.
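This figure is consistent with a rough free-space link-budget check. The sketch below is only an order-of-magnitude estimate: the assumed 0 dBm transmit power and roughly −95 dBm sensitivity are typical datasheet values for the TelosB's 2.4 GHz radio, not measurements reported in this work.

```python
import math

def fspl_db(distance_m: float, freq_hz: float = 2.4e9) -> float:
    """Free-space path loss in dB at the given distance and carrier frequency."""
    c = 3.0e8
    return 20.0 * math.log10(4.0 * math.pi * distance_m * freq_hz / c)

tx_dbm = 0.0             # assumed TelosB transmit power
ant_gain_db = 9.0        # external antenna on the on-board node
sensitivity_dbm = -95.0  # assumed receiver sensitivity

rx_dbm = tx_dbm + ant_gain_db - fspl_db(450.0)  # ideal line-of-sight at 450 m
print(f"received ~{rx_dbm:.1f} dBm, margin ~{rx_dbm - sensitivity_dbm:.1f} dB")
```

With these assumptions, the received power at 450 m is around −84 dBm, roughly 10 dB above the assumed sensitivity, which makes the observed communication range plausible under line-of-sight conditions.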

2.2. Sensor Node Deployment

UAS are used to deploy sensor nodes with very low human intervention. Fixed-wing UAS have significant flight endurance, which enables deploying sensor nodes at locations far away from the landing/take-off areas. However, they cannot hover, and sensor node deployment is performed by dropping the nodes while the UAS is in motion. First, the UAS should be equipped with specific mechanisms to transport and deploy sensor nodes. In addition, the UAS should include techniques to automatically trigger the deployment mechanism so that the sensor nodes fall at the specified locations. Figure 5 shows the sensor node transportation mechanism during deployment.
In the Sensor Node Deployment mission, the operator specifies the number of sensor nodes to be deployed and the locations where they should be deployed. The UAS-SN Cooperation Control module communicates with the GCS, which computes a suitable deployment plan that includes: (1) the flight plan and (2) the UAS locations where the deployment mechanism should be triggered so that the sensor nodes fall at the specified locations.
Assume that the UAS is flying horizontally at altitude $z(t) = H$ with constant speed $v_x(t) = V$ and $v_y(t) = 0$. At $t = 0$, the UAS is at location $x(0) = 0$, $y(0) = 0$ and $z(0) = H$, and sensor node $i$ is dropped from the UAS. The node is assumed to be protected by an enclosure, which causes aerodynamic effects. We adopt a simple model that considers air resistance as a force proportional to the object's speed. In this case, the free-fall trajectory of the sensor node is given by the following expression [33]:
$$z(t) = H + \delta \left[ \frac{x(t)}{\beta \delta} + \ln\left(1 - \frac{x(t)}{\beta \delta}\right) \right], \qquad (1)$$
where $\delta = g m^2 / k_w^2$ and $\beta = V k_w / (m g)$. In these expressions, $k_w$ is the proportionality constant of the air resistance, $m$ is the mass of the sensor node and $g$ is the acceleration due to gravity. From (1) and using a Digital Terrain Model (DTM) of the environment, it is simple to efficiently predict the location where sensor node $i$ will fall. It is also possible to compute $DT_i$, the UAS location where the deployment mechanism should be triggered so that node $i$ falls at the desired location. The adopted method is also valid for deployment on non-flat areas. Of course, errors in the parameters of the free-fall trajectory model, wind gusts or obstacles not present in the DTM can cause deployment location errors. In order to reduce deployment errors, the UAS velocity and orientation are kept constant for some seconds before the activation of the deployment mechanism.
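As an illustration of how $DT_i$ can be obtained from Equation (1), the following sketch finds the horizontal displacement of the falling node by bisection and shifts the target location backwards along the flight direction. It assumes flat terrain and a straight, level pass; the parameter values in the example (node mass, drag constant) are hypothetical and not taken from this work.

```python
import math

def fall_displacement(H, V, m, kw, g=9.81, tol=1e-3):
    """Horizontal distance travelled by a dropped node before reaching the ground,
    according to the linear-drag free-fall model of Equation (1).

    H : drop altitude above the terrain [m];  V : UAS speed at the drop instant [m/s]
    m : node mass [kg];  kw : air-resistance proportionality constant [kg/s]
    """
    delta = g * m ** 2 / kw ** 2      # delta = g m^2 / k_w^2
    beta = V * kw / (m * g)           # beta  = V k_w / (m g)
    span = beta * delta               # asymptotic horizontal displacement

    def height(x):
        u = x / span
        return H + delta * (u + math.log(1.0 - u))   # Equation (1)

    # height(x) decreases monotonically from H; bisect for height(x) = 0.
    lo, hi = 0.0, span * (1.0 - 1e-9)
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if height(mid) > 0.0:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

def trigger_point(target_xy, heading_rad, H, V, m, kw):
    """DT_i: UAS position at which to release node i so that it lands at target_xy."""
    d = fall_displacement(H, V, m, kw)
    return (target_xy[0] - d * math.cos(heading_rad),
            target_xy[1] - d * math.sin(heading_rad))

# Hypothetical example: drop from 240 m at 22 m/s, 0.1 kg node, kw = 0.05 kg/s
print(trigger_point((0.0, 0.0), 0.0, 240.0, 22.0, 0.1, 0.05))
```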
Next, the GCS sends to the UAS on-board computer the deployment plan, consisting of the flight plan and the list of $DT_i$. During the flight, the on-board computer activates the deployment mechanism for node $i$ when $|UAS_t - DT_i| < \epsilon$, where $UAS_t$ is the UAS location at time $t$ and $\epsilon$ is a location error tolerance. When all the nodes have been deployed, the on-board computer informs the GCS, which informs the UAS-SN Cooperation Control. The computation of $DT_i$ is performed by the GCS and has very low computational and memory requirements. The on-board computer only has to monitor the UAS telemetry and trigger the dropping mechanism when $|UAS_t - DT_i| < \epsilon$, avoiding delays that could cause deployment location errors.
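A minimal sketch of this on-board trigger loop is given below; the function and object names are illustrative and do not correspond to the actual on-board software. The tolerance of 9 m is the value used in the field experiments (Section 3).

```python
import math

EPSILON = 9.0  # location error tolerance [m], value used in the field experiments

def monitor_and_release(telemetry_stream, drop_points, release_node):
    """Release node i as soon as the UAS is within EPSILON of DT_i."""
    pending = dict(enumerate(drop_points))        # node index -> (x, y) of DT_i
    for uas_x, uas_y in telemetry_stream:         # periodic telemetry samples
        for i, (dt_x, dt_y) in list(pending.items()):
            if math.hypot(uas_x - dt_x, uas_y - dt_y) < EPSILON:
                release_node(i)                   # trigger the dropping mechanism
                del pending[i]
        if not pending:
            break                                 # all nodes deployed; report to the GCS
```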
Sensor node deployment—and also sensor localization and measurement collection—requires UAS flight planning and control methods. None of these missions requires specific flight control strategies. The performance of the flight control methods implemented in the Piccolo autopilot of the UAV E-300 was found suitable in preliminary tests, and they were adopted. They are not described in this paper for brevity. The optimal planning of UAS trajectories is a very complex problem—it has been mathematically proven to be NP-hard [34]. In this problem, we prefer a method that efficiently finds a suitable trajectory rather than a method that provides an optimal solution but requires an excessive computational burden. The adopted flight planning method is based on that described in [29]. It is a heuristic-based trajectory planning method that combines a rectilinear motion model for the initial evaluation of trajectories with a realistic UAS motion model for the final evaluation and selection of the trajectory. For brevity and to better focus on the main UAS-SN cooperation methods of the scheme, this method is not described in this paper.

2.3. Sensor Node Localization

Knowing the actual sensor node locations is critical for measurement georeferencing and for computing the UAS flight plan in measurement collection. In addition, it is useful for collecting the sensor nodes after the environment observation campaign. Deployment of sensor nodes using fixed-wing UAS is not very accurate. The presented scheme includes a method that estimates the actual locations of the sensor nodes using the RSSI (Received Signal Strength Indicator) of the packets they interchange with the on-board sensor node. Notice that most radio modules of existing sensor nodes can measure the RSSI of incoming messages with negligible delay, energy or computational cost.
In the Sensor Node Localization mission, the UAS-SN Cooperation Control module communicates with the GCS, which commands the UAS to fly over the sensor deployment areas while its on-board sensor node broadcasts LocalizationReq messages. When deployed sensor node $i$ (located at $L^i = [x^i, y^i, z^i]$) receives a LocalizationReq message, it transmits in response a LocalizationResp message with its identifier. If the on-board sensor node receives the LocalizationResp message, it measures the RSSI of the message and transmits it to the UAS on-board computer. This measurement is denoted $RSSI_j^i$, where the subscript $j$ indexes the different RSSI measurements from node $i$. The LocalizationResp message was received when the UAS was at location $L_j^{UAS}$, which is assumed known since the UAS is equipped with a GPS receiver. It is well known that the RSSI between an emitter and a receiver depends on the distance between them. The most widely-used RSSI-range model adopts the following expression [35,36]:
$$RSSI_j^i = a \log d_j^i + b, \qquad (2)$$
where $a$ and $b$ are model parameters and $d_j^i = |L^i - L_j^{UAS}|$ is the distance between sensor node $i$ and the UAS when it was at location $L_j^{UAS}$.
Many RSSI-based localization methods have been proposed. Some methods, such as Least Squares [37], although optimal under Gaussian noise, do not perform well in the presence of high levels of RSSI noise. In this work, we adopted Weighted Centroid Localization (WCL) [38], a widely-used method that exhibits significant robustness to noise. In WCL, the location of sensor node $i$ is computed using the expression:
$$L^i = \frac{\sum_{j=1}^{n} \omega_{ij}\, L_j^{UAS}}{\sum_{j=1}^{n} \omega_{ij}}, \qquad (3)$$
where $n$ is the number of RSSI measurements from node $i$ and $\omega_{ij}$ are weighting factors that depend on the distance as follows: $\omega_{ij} = 1/(d_j^i)^p$, where $p$ is an exponent that accounts for the influence of distance.
If the scenario is flat—or can be approximated as flat when compared to the UAS altitude—and the nodes have been deployed on the ground, the localization algorithm can be refined as follows. If a DTM is available, $z^i$, the location of node $i$ along the vertical axis, can be assumed known. The altitude of the UAS at $L_j^{UAS}$ is also known. Thus, $dz_j^i$, the vertical distance between $L^i$ and $L_j^{UAS}$, is known. Using (2), it is easy to compute $d_j^i$, and the horizontal distance between $L^i$ and $L_j^{UAS}$ is $dh_j^i = \sqrt{(d_j^i)^2 - (dz_j^i)^2}$. Hence, the localization problem can be solved using a 2D version of the WCL algorithm:
$$Lh^i = \frac{\sum_{j=1}^{n} \omega h_{ij}\, Lh_j^{UAS}}{\sum_{j=1}^{n} \omega h_{ij}}, \qquad (4)$$
where $Lh^i$ is the location of sensor node $i$ on the horizontal plane, $Lh_j^{UAS}$ is the horizontal location of the UAS when it received the LocalizationResp message and $\omega h_{ij} = 1/(dh_j^i)^p$ are weighting factors.
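A compact sketch of this 2D variant is given below: RSSI samples are converted to slant ranges with Equation (2), projected onto the horizontal plane, and combined with Equation (4). The model parameters $a$ and $b$ and the sample values are hypothetical; in practice they must be calibrated for the actual radios and antennas.

```python
import math

def rssi_to_distance(rssi, a, b):
    """Invert the RSSI-range model of Equation (2): RSSI = a * log10(d) + b."""
    return 10.0 ** ((rssi - b) / a)

def wcl_2d(samples, a, b, node_z, p=2.0):
    """2D Weighted Centroid Localization (Equations (2) and (4)).

    samples : list of (uas_x, uas_y, uas_z, rssi) tuples gathered for one ground node
    a, b    : RSSI-range model parameters
    node_z  : terrain height at the node (from the DTM)
    p       : distance-weighting exponent
    """
    wx = wy = wsum = 0.0
    for ux, uy, uz, rssi in samples:
        d = rssi_to_distance(rssi, a, b)             # slant range from Equation (2)
        dz = uz - node_z                             # known vertical distance
        dh = math.sqrt(max(d * d - dz * dz, 1e-6))   # horizontal distance dh_j^i
        w = 1.0 / dh ** p                            # weighting factor
        wx += w * ux
        wy += w * uy
        wsum += w
    return wx / wsum, wy / wsum

# Hypothetical samples: (UAS x [m], y [m], altitude [m], RSSI [dBm])
samples = [(0.0, 0.0, 210.0, -88.0), (200.0, 0.0, 210.0, -90.0), (100.0, 150.0, 210.0, -92.0)]
print(wcl_2d(samples, a=-20.0, b=-40.0, node_z=0.0))
```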
The GCS computes the locations of the discovered ground sensor nodes with the data transmitted by the on-board computer. When all the discovered sensor nodes have been localized, the GCS transmits their locations to the UAS-SN Cooperation Control, where they can be used for operator visualization, among other purposes: the node localization mission is then accomplished. The presented method involves negligible computational burden on standard computers, and the only significant memory requirement is the storage of the DTM.

2.4. Measurement Collection

In traditional environmental monitoring stations, measurements are registered in data-loggers and are collected manually or transmitted to a base station using radio links. These approaches are unsuitable when hundreds of sensor nodes are scattered in a medium-size environment. In this work, UAS are employed for collecting the measurements gathered and logged by the deployed sensor nodes. Schemes in which the UAS collects measurements by asking each of the deployed nodes one by one are inefficient. The adopted approach is based on grouping the deployed sensor nodes into single-hop clusters. Each cluster has a cluster head (CH), which is responsible for coordinating the operation of the cluster. All the sensor nodes in the cluster gather measurements periodically, and the non-CH nodes transmit their measurements to their CH. The CH logs the measurements it receives. Only CH nodes communicate with the UAS. The UAS flies over—“visits”—the CH of each cluster in order to collect the measurements of all deployed nodes, simplifying UAS trajectories. Another advantage of the cluster-based approach is energy efficiency, which is critical to extend the battery lifetimes of the deployed nodes or to reduce their energy supply requirements.
The following assumptions are adopted: (1) each sensor node can measure the remaining energy in its batteries, and (2) each sensor node can enter and exit its sleep mode by itself. These assumptions do not involve practical constraints and are satisfied by most COTS (Commercial off-the-shelf) sensor node models. Below, the main stages in measurement collection are summarized.
First, the deployed sensor nodes autonomously organize into clusters and select the CH of each cluster. A cluster formation method based on [39] is adopted. During the Cluster Formation mission, the UAS is commanded to fly over the sensor node deployment area following a zig-zag pattern while the sensor node on board the UAS broadcasts ClusterFormation messages. Sensor node $i$ is activated when it receives a ClusterFormation message. If, after a period of time, node $i$ does not detect activity from its CH, it broadcasts a CHProposal message proposing itself as a CH candidate. Each sensor node $j$ receiving a CHProposal message evaluates how well node $i$ could perform the CH role as follows:
$$L_i = E_i \frac{P_R^{i,j}}{P_{Tx}^i}, \qquad (5)$$
where $E_i$ is the remaining energy in the batteries of node $i$ and $P_{Tx}^i$ is the transmission power of node $i$. Both $E_i$ and $P_{Tx}^i$ are included in the CHProposal message. $P_R^{i,j}$ is the signal power that node $j$ receives from node $i$. If several CH candidates have similar values of $P_R^{i,j}/P_{Tx}^i$, $E_i$ favors the selection of the candidate with higher remaining energy. In the case of candidates with similar $E_i$, $P_R^{i,j}/P_{Tx}^i$ favors the selection of the CH candidate with better link quality, which reduces the consumption of node $j$. Node $j$ evaluates each candidate, selects the most suitable one and sends an informing message to the winning candidate. The candidate with the most votes becomes the CH and broadcasts ClusterFormed messages with the list of the identifiers of the sensor nodes in its cluster. When the on-board sensor node receives a ClusterFormed message, it forwards it to the on-board computer, which transmits it to the GCS. When the GCS detects that a sufficiently high percentage—80% in the experiments—of the deployed nodes has been assigned to a cluster, cluster formation is considered accomplished, and the GCS informs the UAS-SN Cooperation Control.
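A minimal sketch of this candidate evaluation is shown below; the identifiers and energy/power figures are hypothetical and only illustrate how Equation (5) ranks candidates.

```python
def ch_score(energy_j, p_rx_w, p_tx_w):
    """Cluster-head suitability metric of Equation (5): L_i = E_i * P_R^{i,j} / P_Tx^i."""
    return energy_j * (p_rx_w / p_tx_w)

def vote_for_ch(candidates):
    """Node j votes for the candidate with the highest score.

    candidates: dict mapping candidate id -> (E_i [J], P_R^{i,j} [W], P_Tx^i [W]),
    where E_i and P_Tx^i come from the CHProposal message and P_R^{i,j} is measured locally.
    """
    return max(candidates, key=lambda cid: ch_score(*candidates[cid]))

# Hypothetical candidates observed by node j
candidates = {
    7:  (18000.0, 2.0e-9, 1.0e-3),   # medium remaining energy, strong link
    12: (24000.0, 1.1e-9, 1.0e-3),   # more energy, weaker link
}
print(vote_for_ch(candidates))       # -> 7 in this example
```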
Communication within the cluster is performed using Time Division Multiple Access (TDMA). TDMA avoids many of the problems typical in medium access in sensor networks including message collisions, overhearing and idle listening. In addition, it enables data aggregation. Next, the CH starts the operation of the TDMA-based protocol. One slot in the TDMA frame is reserved for the transmissions from the CH to the non-CH sensor nodes. The rest of the slots are devoted to the transmissions of measurements from each non-CH sensor node to the CH. During the measurement collection stage, each sensor node periodically gathers measurements from its sensors and transmits them to its CH in its TDMA frame slots. The CH node receives the measurements and logs them.
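The slot layout can be summarized with the simple sketch below, assuming, as an illustration rather than a description of the actual firmware, that the CH downlink occupies the first slot and each cluster member owns one uplink slot.

```python
from dataclasses import dataclass
from typing import List

@dataclass
class TdmaFrame:
    """One intra-cluster TDMA frame: slot 0 is the CH downlink, the remaining
    slots carry measurements from each non-CH node to the CH."""
    ch_id: int
    member_ids: List[int]

    def slot_owner(self, slot: int) -> int:
        if slot == 0:
            return self.ch_id                 # CH -> members (schedule, control)
        return self.member_ids[slot - 1]      # member -> CH (measurements)

    def num_slots(self) -> int:
        return 1 + len(self.member_ids)

frame = TdmaFrame(ch_id=42, member_ids=[3, 8, 15])
print([frame.slot_owner(s) for s in range(frame.num_slots())])  # [42, 3, 8, 15]
```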
During the Measurement Collection mission, the GCS computes a flight plan that passes over all CHs. When the UAS visits CH node $i$, the on-board sensor node transmits DataCollectionReq messages to CH $i$. When CH $i$ receives the message, it responds with one or more DataCollectionResp messages containing the logged measurements. Each of these messages includes Seq, a sequence number. The UAS on-board node sends in its DataCollectionReq messages the Seq of the last DataCollectionResp message it received from that CH, so that each CH knows which messages have been successfully collected. The on-board sensor node forwards the measurements to the UAS on-board computer, which transmits them to the GCS using the UAS radio link. An example of the message exchange in Measurement Collection is shown in Figure 6. In each TDMA cycle, non-CH sensor node $j$ transmits to its CH node $i$ a message with its measurements. In the experiments performed, the UAS speed was around 20–25 m/s. Assuming that the area in which the UAS and a CH can communicate is a circle of 50 m in diameter, every time the UAS visits the CH they can interchange messages for around 2 s, enough for transmitting tens of DataCollectionResp messages. This prediction was confirmed experimentally.
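The following sketch illustrates one UAS pass over a cluster head and the role of the Seq acknowledgment; the radio interface (`onboard_radio`) and the message fields are illustrative abstractions, not the actual protocol implementation.

```python
def collect_from_ch(onboard_radio, ch_id, last_seq):
    """One UAS pass over cluster head ch_id (roughly 2 s of contact time).

    The DataCollectionReq carries the Seq of the last DataCollectionResp already
    received, so the CH can discard acknowledged measurements and resend the rest.
    """
    collected = []
    while onboard_radio.in_range(ch_id):
        onboard_radio.send(ch_id, {"type": "DataCollectionReq", "ack_seq": last_seq})
        resp = onboard_radio.receive(timeout=0.1)
        if (resp and resp.get("type") == "DataCollectionResp"
                and resp.get("src") == ch_id and resp["seq"] == last_seq + 1):
            collected.append(resp["measurements"])   # in-order message accepted
            last_seq = resp["seq"]                   # acknowledged in the next request
    return collected, last_seq
```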
The adopted measurement collection scheme is energy efficient. Non-CH nodes are in sleep mode most of the time and only wake up to gather measurements and to communicate with their CH. The CH nodes also communicate with the UAS; hence, they consume more energy than non-CH nodes. Each CH periodically checks whether it can keep the CH role, and when it finds that its remaining energy is below a threshold, it triggers CH Selection, which is very similar to Cluster Formation—it is not presented for brevity. The role of the CHs is critical in the cluster. All non-CH sensor nodes monitor the activity of the CH of their cluster. If a non-CH node does not detect activity from its CH during a predefined interval, it triggers CH Selection in order to select a new CH.
The presented method involves very low computational burden and memory footprint and can be implemented in regular COTS WSN nodes. During measurement collection, no control message is exchanged. In cluster formation and CH selection, each node transmits two or three control messages.

2.5. Integration of Methods

The role of the UAS-SN Cooperation Control module is to harmoniously integrate the methods of the scheme in order to accomplish the full environmental monitoring mission. The UAS-SN Cooperation Control module was implemented as a finite-state machine. The flexibility of the missions enables adaptation to different types of monitoring and scenarios, and to potential technological constraints of the hardware employed. For instance, the aforementioned missions can be executed one after the other in an initialization phase (see Figure 7A), and also in a run-time phase performing on-demand measurement collection, in which the UAS takes off, collects the logged measurements and lands (see Figure 7B). The frequency of on-demand collection mainly depends on the specific application, the measurement gathering rate and the storage capacity of the sensor nodes.
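A simplified sketch of such a finite-state machine is given below. The mission names and their order are a plausible reading of Figure 7, not a transcription of the actual implementation.

```python
from enum import Enum, auto

class Mission(Enum):
    IDLE = auto()
    TAKE_OFF = auto()
    NODE_DEPLOYMENT = auto()
    NODE_LOCALIZATION = auto()
    CLUSTER_FORMATION = auto()
    MEASUREMENT_COLLECTION = auto()
    LAND = auto()

# Initialization phase (Figure 7A) and on-demand collection phase (Figure 7B)
INIT_SEQUENCE = [Mission.TAKE_OFF, Mission.NODE_DEPLOYMENT, Mission.NODE_LOCALIZATION,
                 Mission.CLUSTER_FORMATION, Mission.MEASUREMENT_COLLECTION, Mission.LAND]
ON_DEMAND_SEQUENCE = [Mission.TAKE_OFF, Mission.MEASUREMENT_COLLECTION, Mission.LAND]

def next_mission(current: Mission, sequence: list) -> Mission:
    """Advance the state machine when the GCS reports the current mission accomplished."""
    idx = sequence.index(current)
    return sequence[idx + 1] if idx + 1 < len(sequence) else Mission.IDLE

print(next_mission(Mission.TAKE_OFF, ON_DEMAND_SEQUENCE))  # Mission.MEASUREMENT_COLLECTION
```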
The operation of the UAS-SN Cooperation Control module involves very low computational burden and memory footprint. The coordination of missions in a modular and distributed scheme requires exchanging frequent messages between the components, including a few control messages. The messages exchanged in the Take-off, NodeDeployment, NodeLocalization and ClusterFormation missions are shown in Figure 8. The communication between the modules was implemented using DDS (Data Distribution Service), which enables scalable, real-time, dependable and interoperable data communication.

3. Field Experiments

The described system and missions were tested at two main locations: an airfield, where the techniques were debugged and preliminarily tested, and the Doñana Biological Reserve in southern Spain, where they were evaluated in realistic conditions. The main details of these experiments are given in Table 1. A total of 24 different experiments were performed, 16 of them in the biological reserve. The experiments described below were performed in the biological reserve in April 2014, the last experiment campaign. Great care and effort were devoted to these experiments since they were used for the final validation of the methods.
A sensor node deployment experiment is described in the following. This experiment was performed on 8 April 2014. For experiment analysis, all relevant information and measurements were logged, including all messages exchanged, the GNSS (Global Navigation Satellite System) locations of sensor nodes and the UAS trajectory, among others. The operator commanded a Sensor Node Deployment mission at the locations shown in Table 2. The ground in the deployment area was flat. The UAS-SN Cooperation Control commanded the GCS, which determined the deployment plan as follows:
  • computed the UAS flight plan between the deployment locations,
  • computed $DT_i$, the UAS locations where the deployment mechanism should be triggered, using the method described in Section 2.2. The resulting $DT_i$ locations are shown in Table 2.
First, the GCS commanded UAS take-off. After the UAS took off, the GCS sent the deployment plan to the UAS on-board computer. During the flight, the deployment mechanism was activated when $|UAS_t - DT_i| < \epsilon$. The value $\epsilon = 9$ m was selected, high enough to absorb the flight control and planning errors. Figure 9 shows the results of the experiment. The UAS direction along the trajectory is represented with an arrow. The GNSS positions of the desired node deployment locations are represented with circles in Figure 9. The actual sensor node locations after deployment were measured manually with GPS. They are represented with dark stars in Figure 9 and are also shown in Table 2. The deployment of Node4 was affected by higher errors, which was attributed to the curved trajectory described by the UAS during its deployment. In this experiment, the UAS flew at an altitude of 240 m, and deployment was significantly perturbed by wind. The mean deployment error, defined as the distance between the desired deployment position and the actual position after deployment, both measured with GNSS, was 29.2 m. In other experiments, the UAS altitude was reduced to 100 m, and the resulting error, also deploying over flat ground, was 13.6 m.
To the best of our knowledge, no work in the literature has reported experimental results of fixed-wing UAS deploying sensor nodes. A few works have reported deployment using rotary-wing UAS. In [13], sensor node deployment was performed with helicopter UAS, which have hovering capabilities. In that work, the helicopter hovers over the desired deployment location, reduces altitude and, when it is 2 m above the ground, drops the sensor node. The deployment errors achieved in [13] were lower than 1 m. However, helicopter UAS have significant flight endurance and range constraints and are unsuitable for our problem.
After accomplishing the node deployment mission, while the UAS was still flying, the operator commanded a Sensor Node Localization mission in the rectangular area between positions (N 37.131258°/E −6.465607°) and (N 37.118601°/E −6.441192°). The UAS-SN Cooperation Control sent a NodeLocalization message to the GCS, and the GCS performed as follows: (1) computed a zig-zag flight plan over the area; (2) transmitted the flight plan to the on-board computer; and (3) commanded the on-board node to start transmitting LocalizationReq messages. The ground was flat and the 2D version of WCL was executed. The estimated sensor node locations were compared to the actual locations. The location errors in this experiment were $e_1 = 12.4$ m, $e_2 = 18.6$ m, $e_3 = 17.1$ m and $e_4 = 14.3$ m, which were considered sufficient for georeferencing purposes.
Several works reporting experimental results of WSN node localization using helicopter UAS and quadrotors have been published. However, these results cannot be used for comparison since these UAS have hovering capacity, move at lower speeds and gather RSSI measurements from sensor nodes at much shorter ranges than in the case of fixed-wing aerial vehicles, resulting in different localization accuracies. We could not find works reporting experimental results of WSN sensor node localization using fixed-wing UAS. Cuesta et al. [40] report experiments in which a WiMAX base station is localized using a fixed-wing UAS. When the UAS flies at 1.5 km from the base station, the localization error was between 10 and 100 m. Considering the distance between the UAS and the base station, these results are outstanding, but they were obtained using WiMAX technology, which cannot be integrated in low-cost, small, lightweight devices.
In the following, a measurement collection experiment is summarized. The experiment was performed on 9 April 2014. A total of 196 sensor nodes were manually deployed in a rectangular sensor deployment area between (N 37.119°/E −6.430°) and (N 37.110°/E −6.422°). The locations of the nodes were measured using GNSS (see Figure 10). All relevant information and measurements were logged, including all messages exchanged, the GNSS locations of the sensor nodes and the UAS trajectory. A sniffer WSN node was placed at each cluster in order to log all the messages interchanged in the cluster. We preferred to concentrate on the evaluation of the measurement collection method rather than on devoting high effort to equipping the nodes with sensors of higher environmental monitoring value. The TelosB nodes used in this experiment were programmed to gather measurements from their three built-in sensors. First, the operator commanded a Cluster Formation mission in the deployment area. The UAS-SN Cooperation Control sent a ClusterFormation message to the GCS. The GCS performed as follows:
  • computed a zig-zag flight plan on top of the sensor deployment area,
  • commanded UAS take-off,
  • after the UAS took off, the GCS sent the flight plan to the UAS on-board computer,
  • the GCS commanded the on-board sensor node to start transmitting ClusterFormation messages.
The deployed nodes performed as described in Section 2.4 and organized into a total of 17 clusters. The CH of each cluster transmitted ClusterFormed messages, which included the identifiers of the CH and of the sensor nodes belonging to that cluster. The on-board sensor node forwarded to the on-board computer the ClusterFormed messages it received, and the on-board computer transmitted them to the GCS. When the GCS detected that 80% of the deployed nodes were assigned to a cluster, cluster formation was considered accomplished, and the GCS transmitted a message informing the UAS-SN Cooperation Control. Figure 10 shows the locations of the sensor nodes deployed in this experiment. The deployment area used for UAS trajectory planning is shown with a rectangle. The clusters resulting from cluster formation are represented with ellipses.
In this experiment, 13 sensor nodes were not assigned to any cluster: they had bad radio link quality with the CHs of nearby clusters. Each of these nodes could have created a cluster of only one node. However, in order to prevent excessive network partitioning, we established a lower bound on the number of sensor nodes in a cluster, and clusters with a low number of nodes were not considered in measurement collection. The UAS collects only the measurements from the nodes that belong to a valid cluster, 183 nodes in this experiment.
Next, the operator commanded a Measurement Collection mission. The UAS-SN Cooperation Control sent a Measurement Collection message to the GCS, which performed as follows: (1) computed a flight plan to fly over the 17 CHs (each waypoint had the horizontal location of a CH with a desired flight altitude of 210 m); (2) commanded UAS take-off; and (3) after the UAS took off, the GCS sent the flight plan to the UAS on-board computer. When the UAS flew over a CH node, the on-board sensor node transmitted DataCollectionReq messages to that CH. The CH and the on-board node exchanged messages as described in Section 2.4. Figure 11 shows the trajectory followed by the UAS in one measurement collection lap. The locations of the CHs are represented with stars, and the point marked with “1” is the take-off location. In this experiment, an additional constraint was imposed on the flight planning method: the UAS trajectory length between two consecutive CHs should be greater than 100 m.
It can be noticed in Figure 11 that the UAS flew over each of the CHs. In the following, the flight control and planning errors are evaluated by analyzing the distance between the UAS trajectory and the waypoints given to the UAS flight control and planning system. The distribution of this distance in all the performed field experiments was computed. Figure 12 shows the probability that this distance is lower than a certain value. The average distance error—probability of 0.5—in the experiments was 3.16 m. This distance error was lower than 5.9 m in 90% of the cases. This result validates the adopted flight control and planning methods.
In this experiment, the measurements of 183 sensor nodes were collected by the UAS. The UAS performed 29 measurement collection laps—visiting each CH in each lap—during 51 min. The UAS on-board sensor node received an average of 9.7 errorless DataCollectionResp messages from each CH in each lap. The average Packet Reception Rate (PRR) in this experiment (flight at 210 m of altitude) was 86%. In total, the UAS collected 9236 different measurements from each sensor node, i.e., roughly 3078 measurements from each of the three sensors of each of the 183 nodes.
After the experiment, the sniffer nodes placed at each cluster for logging purposes were analyzed. The communications between the CH and non-CH nodes had a mean PRR of 94%. The PRR between the on-board node and the CHs was not symmetric, which was attributed mainly to the differences in the antennas. The PRR of the messages from the on-board node to the ground nodes was 95% on average, whereas the PRR from the ground nodes to the on-board node was 81%.
The battery lifetimes of the deployed sensor nodes in this experiment were simulated using the energy consumption characteristics in the TelosB data-sheet. Each node was assumed to be equipped with two standard AA batteries of 2400 mAh; hence, the initial energy stored in each node was 25,920 J. Figure 13 shows the number of active sensor nodes—sensor nodes with remaining energy—over time. The adopted cluster-based collection is compared to the scheme used in [26], in which the UAS collects the measurements of the sensor nodes one by one. The presented cluster-based collection provides significantly longer battery lifetimes. In one-by-one collection, the batteries of all the nodes are exhausted after t = 21,000 s. In cluster-based collection, all nodes are active until t = 64,000 s.
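For reference, this initial energy value follows directly from the battery capacity, assuming a nominal AA cell voltage of 1.5 V (the cell voltage is not stated explicitly in the text):

$$E_0 = 2 \times 2.4\ \text{Ah} \times 1.5\ \text{V} \times 3600\ \tfrac{\text{s}}{\text{h}} = 25{,}920\ \text{J}.$$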

4. Discussion

Scalability is a main concern in a scheme designed for monitoring medium–large environments. The proposed scheme scales well with the size of the scenario. First, fixed-wing UAS have high endurance and flight range. Second, larger scenarios require higher numbers of sensor nodes and UAS. As shown below, the proposed methods scale well with both.
Sensor node localization is not significantly influenced by the number of nodes. If the node density is not so high as to cause excessive message collisions, one UAS requires roughly the same time to localize one sensor node as to localize many nodes. Deploying more nodes with the same number of UAS involves UAS trajectories of greater length and duration. Of course, deployment times can be shortened by using a higher number of UAS. Measurement collection also scales well with the number of sensor nodes due to its cluster-based approach. In addition, if several UAS are employed, different UAS can be assigned to collect the measurements from different clusters, reducing collection times. It is easy to see that the computational burden, memory footprint and protocol message overhead of the proposed methods scale linearly with the number of sensor nodes.
The proposed methods also scale linearly with the number of UAS. Using several UAS can reduce the times required for measurement collection roughly proportionally. An illustrative example is shown in Figure 14a–c, which shows the UAS trajectories for visiting waypoints 2–10 using, respectively, one, two and three UAS. Point 1 represents the take-off position and is not used for computing the flight plan. The adopted flight planning method divides the waypoints between the available UAS in order to homogenize flight duration. This analysis was performed with simulations of 16 scenarios, each with 24 randomly placed waypoints. The total time in minutes required by different numbers of UAS to visit all of the waypoints is shown in Figure 14d. In each scenario, the time required by several UAS to visit the waypoints is roughly inversely proportional to the number of UAS. Hence, the measurement collection times—and also the node deployment times—can be substantially reduced when using several UAS. The adopted heuristic-based flight planning is very efficient, and its computational burden and memory footprint are almost linear with the number of UAS. Moreover, sensor node localization can achieve more accurate estimations when RSSI measurements from the same sensor node are received by different UAS.
Flight altitude is also a critical issue in the presented methods. Node deployment errors—the distance between desired deployment positions and actual locations after deployment—tend to be lower when deploying from lower altitudes. First, deployment is less influenced by errors in the parameters of the free-fall trajectory model or by wind gusts. In addition, the falling sensor nodes have lower kinetic energy and are less prone to displacements after the impact. The sensor node enclosures should be designed to absorb impact energy. In fact, in the first experiments, 13% of the nodes malfunctioned after deployment. In addition, low UAS altitudes also result in better link quality between the on-board node and the deployed sensor nodes, improving the performance of sensor node localization and measurement collection.
In the above measurement collection experiment, the average PRR between the on-board node and the sensor nodes deployed on the ground was 86%. However, the on-board node is equipped with a directional antenna rigidly attached to the UAS fuselage. The projection on the ground of the transmission lobe of the antenna depends on the UAS location and orientation along its flight. If the antenna projection at time t is near a deployed sensor node, at that time that node will have higher link quality with the on-board node than another sensor node deployed far from the antenna projection. In our problem, describing the PRR between the on-board node and the deployed sensor nodes with a single mean value is not accurate. If the UAS flies at a constant altitude, it is more consistent to describe the PRR as a distribution in which ground locations at different horizontal distances from the antenna projection have different PRR values. This analysis and its experimental evaluation are the object of current research.
The operation of the scheme and methods was rather robust, and they performed satisfactorily in each of the 16 experiments except one, which was performed in the 2011 experiment campaign. In that experiment, too many deployed sensor nodes were not assigned to any cluster. The cluster formation mission was not considered accomplished, and the measurement collection mission did not start. The issue was solved by adding a timeout. Although the scheme performed consistently in the experiments, more experiments are necessary to fully evaluate its performance under the wide range of conditions that can be found in environmental monitoring. For instance, all of the node deployment experiments were performed on fairly flat ground. The deployment method is also valid for non-flat ground, but confirmation and performance evaluation in field experiments are necessary.
Finally, the presented scheme makes intensive use of radio communications between the UAS and static sensor nodes. However, in the many experiments performed, we did not notice significant degradation in the radio link performance that could be attributed to the UAS motion speed. The reasons for this can be the relatively low UAS speed (25–30 m/s in the experiments performed) or the rather simple message exchange between the UAS and the deployed sensor nodes.

5. Conclusions

In the last few years, the need for higher spatial, temporal and spectral resolutions in earth observation has motivated the development of new-generation satellites and the employment of technologies such as aircraft and UAS in remote sensing. Sensor network technologies, in which hundreds of small, inexpensive and functionally-autonomous sensor nodes can be deployed in the environment to gather high-resolution measurements for months, provide unprecedented observation capabilities and have attracted significant interest.
Earth observation using sensor nodes requires procedures for sensor deployment/installation, georeferencing and collection of measurements. The procedures and tools adopted for traditional environmental monitoring stations involve intensive human labor and cannot be used when hundreds of small sensor nodes are deployed in an environment of tens or hundreds of hectares.
This paper presented an integral scheme based on the cooperation of UAS and sensor networks for environmental monitoring that aims to take a step forward in the use of sensor networks in earth observation. The presented scheme includes methods for autonomous sensor node deployment, localization and measurement collection. It can be used in medium–large environments with high numbers of sensor nodes, and its operation requires low human intervention. The presented scheme was implemented and satisfactorily evaluated in field experiments performed in environmentally relevant scenarios.
This work opens a wide field for research. The presented scheme was evaluated in realistic conditions but still requires significant effort to be transformed into a product. The experiments were performed with IEEE 802.15.4-based nodes gathering measurements from their built-in sensors. A real application would require sensor nodes designed for the specific application. For instance, using Low-Power Wide-Area Network (LPWAN) nodes in our scheme would facilitate forming larger clusters, simplifying UAS flight planning and enabling coverage of larger areas. In addition, further research is necessary to integrate satellite data with sensor node measurements, for instance to improve the calibration of remote sensing models or to fuse satellite data with sensor node measurements of significantly higher temporal and spatial resolution.

Acknowledgments

This work was developed mainly in the context of EU Project PLANET (FP7-2010-257649). Partial funding was obtained from EU Project AEROARMS Ref. H2020-ICT-2014-1-644271 and the AEROMAIN project funded by the Spanish R&D plan (DPI2014-59383-C2-1-R). A.T.-G. thanks the Ministerio de Educación y Deportes FPU Program. J.R. Martínez-de Dios acknowledges the EU Project AEROBI funded under contract H2020-ICT-2015-1-687384.

Author Contributions

J.R.M.-d.D. was the main designer of the scheme and algorithms that led to this paper, coordinated the experiments and elaborated the major part of the manuscript. A.d.S.B. implemented the algorithms and participated in the real experiments dealing with sensor networks. A.V. coordinated the developments of tasks and implementations related to the UAS. A.T.-G. collaborated in the development of the localization methods. A.O. provided suggestions and corrections during the preparation of the manuscript.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Marrón, P.J.; Karnouskos, S.; Minder, D.; Ollero, A. The Emerging Domain of Cooperating Objects; Springer Science & Business Media: Berlin, Germany, 2011. [Google Scholar]
  2. Mainwaring, A.; Culler, D.; Polastre, J.; Szewczyk, R.; Anderson, J. Wireless sensor networks for habitat monitoring. In Proceedings of the ACM International Workshop on Wireless Sensor Networks and Applications, Atlanta, GA, USA, 28 September 2002; pp. 88–97. [Google Scholar]
  3. Collins, S.L.; Bettencourt, L.M.A.; Hagberg, A.; Brown, R.F.; Moore, D.I.; Bonito, G.; Delin, K.A.; Jackson, S.P.; Johnson, D.W.; Burleigh, S.C.; et al. New opportunities in ecological sensing using wireless sensor networks. Front. Ecol. Environ. 2006, 4, 402–407. [Google Scholar] [CrossRef]
  4. Hamilton, M.; Graham, E.A.; Rundel, P.W.; Allen, M. New Approaches in Embedded Networked Sensing for Terrestrial Ecological Observatories. Environ. Eng. Sci. 2007, 24, 192–204. [Google Scholar] [CrossRef]
  5. Othman, M.F.; Shazali, K. Wireless Sensor Network Applications: A Study in Environment Monitoring System. Proc. Eng. 2012, 41, 1204–1210. [Google Scholar] [CrossRef]
  6. Hart, J.K.; Martinez, K. Environmental Sensor Networks: A revolution in the earth system science? Earth Sci. Rev. 2006, 78, 177–191. [Google Scholar] [CrossRef]
  7. Bailey, S.W.; Werdell, P.J. A multi-sensor approach for the on-orbit validation of ocean color satellite data products. Remote Sens. Environ. 2006, 1–2, 12–23. [Google Scholar] [CrossRef]
  8. Handcock, R.N.; Swain, D.L.; Bishop-Hurley, G.J.; Patison, K.P.; Wark, T.; Valencia, P.; Corke, P.; O’Neill, C.J. Monitoring Animal Behaviour and Environmental Interactions Using Wireless Sensor Networks, GPS Collars and Satellite Remote Sensing. Sensors 2009, 9, 3586–3603. [Google Scholar] [CrossRef] [PubMed]
  9. Hart, J.K.; Martinez, K. Toward an environmental Internet of Things. Earth Space Sci. 2015, 5, 194–200. [Google Scholar] [CrossRef]
  10. Thomas, T.; Gray, H. Green Falcon IV: An Unmanned Aerial System and an integrated Wireless Sensor Network for remote sensing tasks. Available online: http://eprints.qut.edu.au/94753/ (accessed on 28 March 2017).
  11. Santini, S.; Ostermaier, B.; Vitaletti, A. First experiences using wireless sensor networks for noise pollution monitoring. In Proceedings of the Workshop on Real-World Wireless Sensor Networks, Glasgow, Scotland, 31 March–4 April 2008. [Google Scholar]
  12. Lehning, M.; Bavay, M.; Loewe, H.; Parlange, M.; Aberer, K. Das swiss experiment und die zukunft der vorhersage von alpinen naturgefahren. Available online: http://www.wslf.ch/dienstleistungen/publikationen/pdf/8232.pdf#page=39 (accessed on 28 March 2017).
  13. Maza, I.; Caballero, F.; Capitan, J.; Martinez-de Dios, J.R.; Ollero, A. A Distributed Architecture for a Robotic Platform With Aerial Sensor Transportation and Self-Deployment Capabilities. J. Field Robot. 2011, 28, 303–328. [Google Scholar] [CrossRef]
  14. Gomez, A.; Lagadec, M.F.; Magno, M.; Benini, L. Self-powered wireless sensor nodes for monitoring radioactivity in contaminated areas using unmanned aerial vehicles. In Proceedings of the IEEE Sensors Applications Symposium, Zadar, Croatia, 13–15 April 2015. [Google Scholar]
  15. Capitan Fernandez, J.; Martinez-de-Dios, J.R.; Maza, I.; Ramon, F.F.; Ollero, A. Ten years of cooperation between mobile robots and sensor networks. Int. J. Adv. Robot. Syst. 2015, 12, 70. [Google Scholar] [CrossRef]
  16. Costa, F.G.; Ueyamaa, J.; Braun, T.; Pessin, G.; Osório, F.S.; Vargas, P.A. The use of unmanned aerial vehicles and wireless sensor network in agricultural applications. In Proceedings of the IEEE International Conference on Geoscience and Remote Sensing Symposium, Munich, Germany, 22–27 July 2012. [Google Scholar]
  17. Faiçal, B.S.; Costa, F.G.; Pessin, G.; Ueyama, J.; Freitas, H.; Colombo, A.; Fini, P.H.; Villas, L.; Osório, F.S.; Vargas, P.A.; et al. The use of unmanned aerial vehicles and wireless sensor networks for spraying pesticides. J. Syst. Archit. 2014, 4, 393–404. [Google Scholar] [CrossRef]
  18. Corke, P.; Hrabar, S.; Peterson, R.; Rus, D.; Saripalli, S.; Sukhatme, G. Autonomous deployment and repair of a sensor network using an unmanned aerial vehicle. In Proceedings of the IEEE International Conference on Robotics and Automation, New Orleans, LA, USA, 31 May–7 June 2004; pp. 3602–3608. [Google Scholar]
  19. Suzuki, T.; Sugizaki, R.; Kawabata, K.; Hada, Y.; Tobe, Y. Autonomous Deployment and Restoration of Sensor Network using Mobile Robots. Int. J. Adv. Robot. Syst. 2010, 2, 105–114. [Google Scholar] [CrossRef]
  20. Sun, Z.; Wang, P.; Vuran, M.C.; Al-Rodhaan, M.A.; Al-Dhelaan, A.M.; Akyildiz, I.F. BorderSense: Border patrol through advanced wireless sensor networks. Ad Hoc Netw. 2011, 9, 468–477. [Google Scholar] [CrossRef]
  21. Diamond, S.M.; Ceruti, M.G. Application of Wireless Sensor Network to Military Information Integration. In Proceedings of the IEEE International Conferenc on Industrial Informatics, Vigo, Spain, 4–7 June 2007. [Google Scholar]
  22. Mitchell, P.D.; Qiua, J.; Lia, H. Use of aerial platforms for energy efficient medium access control in wireless sensor networks. Comput. Commun. 2010, 4, 500–512. [Google Scholar] [CrossRef]
  23. Pignaton, E.; Morado, A.; Pereira, C.E.; Larsson, T. Middleware support in unmanned aerial vehicles and wireless sensor networks for surveillance applications. Stud. Comput. Intell. 2009, 237, 289–296. [Google Scholar]
  24. Ho, T.D.; Park, J.; Shimamoto, S. Power and performance tradeoff of MAC protocol for wireless sensor network employing UAV. In Proceedings of the IEEE International Conference on Advanced Technologies for Communications, Waltham, MA, USA, 8–10 November 2010; pp. 23–28. [Google Scholar]
  25. Wang, C.; Ma, F.; Yan, J.; De, D.; Das, S.K. Efficient Aerial Data Collection with UAV in Large-Scale Wireless Sensor Networks. Available online: http://journals.sagepub.com/doi/full/10.1155/2015/286080 (accessed on 28 March 2017).
  26. Teh, S.K.; Mejias, L.; Corke, P.; Hu, W. Experiments in integrating autonomous uninhabited aerial vehicles (UAVs) and wireless sensor networks. In Proceedings of the Australasian Conference on Robotics and Automation, Canberra, Australia, 3–5 December 2008. [Google Scholar]
  27. Valente, J.; Sanz, D.; Barrientos, A.; del Cerro, J.; Ribeiro, A.; Rossi, C. An air-ground wireless sensor network for crop monitoring. Sensors 2011, 6, 6088–6108. [Google Scholar] [CrossRef]
  28. Cobano, J.A.; Martínez-de Dios, J.R.; Conde, R.; Sánchez-Matamoros, J.M.; Ollero, A. Data retrieving from heterogeneous wireless sensor network nodes using UAVs. J. Intell. Robot. Syst. 2010, 60, 133–151. [Google Scholar] [CrossRef]
  29. Martinez de Dios, J.R.; Lferd, K.; De San Bernabe, A.; Torres-Gonzalez, A.; Ollero, A. Cooperation Between UAS and Wireless Sensor Networks for Efficient Data Collection in Large Environments. J. Intell. Robot. Syst. 2013, 1–4, 491–508. [Google Scholar] [CrossRef]
  30. De San Bernabe, A.; Martínez-de Dios, J.R.; Ollero, A. A WSN-Based Tool for Urban and Industrial Fire-Fighting. Sensors 2012, 12, 15009–15035. [Google Scholar]
  31. Ollero, A.; Merino, L. Control and perception techniques for aerial robotics. Ann. Rev. Control 2004, 28, 167–178. [Google Scholar] [CrossRef]
  32. Elimco. Available online: http://www.elimco.com (accessed on 30 March 2017).
  33. Resnick, R.; Halliday, D.; Krane, K.S. Physics; John Wiley & Sons: New York, NY, USA, 2001. [Google Scholar]
  34. Guilmore, J.F. Autonomous vehicle planning analysis methodology. In Proceedings of the AIAAA Guidance Navigation Control Conference, Chicago, IL, USA, 10–13 August 1991; pp. 2000–4370. [Google Scholar]
  35. Benkic, K.; Malajner, M.; Planinsic, P.; Cucej, Z. Using RSSI value for distance estimation in wireless sensor networks based on ZigBee. In Proceedings of the International Conference on Systems, Signals and Image Processing, Rhodes, Greece, 25–28 June 2008; pp. 303–306. [Google Scholar]
  36. Jianwu, Z.; Lu, Z. Research on distance measurement based on RSSI of ZigBee. In Proceedings of the International Conference on Computing, Communication, Control, and Management, Sanya, China, 8–9 August 2009; Volume 3, pp. 210–212. [Google Scholar]
  37. Savvides, A.; Han, C.; Strivastava, M. Dynamic fine-grained localization in ad-hoc networks of sensors. In Proceedings of the ACM 7th Annual International Conference on Mobile Computing and Networking, Rome, Italy, 16–21 July 2001; pp. 166–179. [Google Scholar]
  38. Blumenthal, J.; Grossmann, R.; Golatowski, F.; Timmermann, D. Weighted centroid localization in Zigbee-based sensor networks. In Proceedings of the IEEE International Symposium on Intelligent Signal Processing, Alcala de Henares, Spain, 3–5 October 2007; pp. 1–6. [Google Scholar]
  39. Gamwarige, S.; Kulasekere, C. An Algorithm for Energy Driven Cluster Head Rotation in a Distributed Wireless Sensor Network. In Proceedings of the International Conference on Information and Automation, Colombo, Sri Lanka, 15–18 December 2005. [Google Scholar]
  40. Cuesta, F.; Cordero, M.; Díaz, L.; Viguria, A.; Ollero, A. A particle filter-based method for ground-based WSN localization using an Aerial Robot. In Cooperative Robots and Sensor Networks 2015. Studies in Computational Intelligence; Springer: Berlin, Germany, 2015; Volume 604. [Google Scholar]
Figure 1. Main modules of the presented scheme.
Figure 2. Some TelosB WSN nodes deployed in the field experiments. For better visualization in the picture, the nodes were put on top of plastic boxes.
Figure 3. Picture of the UAV E-300 used in the field experiments.
Figure 4. Pictures of the TelosB node on board the UAS: (a) detail; and (b) mounted on the UAS.
Figure 5. Pictures taken during the activation of the sensor node deployment mechanism. The sensor node, protected with a blue enclosure, is marked in the images.
Figure 6. Example of message interchange during measurement collection.
Figure 7. (A) Sequence of missions in the initialization phase of measurement collection. (B) Sequence of missions in the run-time phase that perform on-demand measurement collection, in which the UAS take off, collect the measurements and land.
Figure 8. Messages interchanged in the Take-off, NodeDeployment, NodeLocalization and ClusterFormation missions.
Figure 9. Performance of sensor node deployment in one field experiment. The desired locations are represented with circles and the actual locations after deployment with dark stars.
Figure 10. Deployed sensor nodes and the resulting clusters (ellipses) in the experiment.
Figure 11. Trajectory performed by the UAS in one lap of this measurement collection experiment.
Figure 12. Flight control and planning error expressed as the probability that the distance between the UAS trajectory and the waypoints is lower than a certain value (a computation sketch is given after this figure list).
Figure 13. Number of sensor nodes with remaining battery energy along the simulation when using the presented cluster-based collection and one-by-one collection.
Figure 14. Example of flight planning using one (a), two (b) and three (c) UAS. (d) Time required by different numbers of UAS to visit 24 randomly placed waypoints in different scenarios.
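
The error measure shown in Figure 12 is, in effect, an empirical cumulative distribution of waypoint-tracking errors. The following minimal Python sketch is our illustration (it is not code from the paper) and assumes the logged UAS trajectory and the commanded waypoints are available as local x–y coordinates in metres; the function name tracking_error_cdf is hypothetical.

import numpy as np

def tracking_error_cdf(trajectory_xy, waypoints_xy):
    # trajectory_xy: (M, 2) logged UAS positions; waypoints_xy: (N, 2) commanded waypoints.
    trajectory_xy = np.asarray(trajectory_xy, dtype=float)
    waypoints_xy = np.asarray(waypoints_xy, dtype=float)
    # Distance from every waypoint to every trajectory sample; keep the closest approach per waypoint.
    d = np.linalg.norm(trajectory_xy[None, :, :] - waypoints_xy[:, None, :], axis=2)
    errors = np.sort(d.min(axis=1))
    # Empirical cumulative probability: fraction of waypoints whose error is below each sorted value.
    prob = np.arange(1, errors.size + 1) / errors.size
    return errors, prob

# Usage: errors, prob = tracking_error_cdf(logged_xy, waypoint_xy)
# P(error < e) is read off as prob[i] for the largest errors[i] <= e.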
Table 1. Field experiments of the presented scheme.

Location 1: Aeroclub Volar al Sur
  Deployment site: N 36.5232°/W 5.3892°
  Deployment area: 180 hectares
  No. of experiments: 8
  Experiment campaigns: July 2012, July 2013

Location 2: Doñana Biological Reserve
  Deployment site: N 37.7276°/W 6.2673°
  Deployment area: 900 hectares
  No. of experiments: 16
  Experiment campaigns: October 2011, September 2013, April 2014
Table 2. Sensor node deployment locations commanded by the operator, values of DT_i computed for each node, and actual locations of the nodes after deployment.

Node ID | Deployment Location           | DT_i                        | Actual Location after Deployment
Node 1  | N 37.120029°/E −6.452538°     | N 37.120101°/E −6.451519°   | N 37.120091°/E −6.453184°
Node 2  | N 37.121043°/E −6.444912°     | N 37.121179°/E −6.443858°   | N 37.120760°/E −6.445820°
Node 3  | N 37.130860°/E −6.452872°     | N 37.130809°/E −6.452272°   | N 37.130746°/E −6.453996°
Node 4  | N 37.126961°/E −6.462332°     | N 37.127646°/E −6.461624°   | N 37.126991°/E −6.463370°
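
To put the coordinates of Table 2 into perspective, the following minimal Python sketch (our illustration, not code from the paper; the haversine_m helper name and the 6371 km mean Earth radius are assumptions) computes the great-circle distance between a commanded deployment location and the corresponding actual location after deployment.

import math

def haversine_m(lat1, lon1, lat2, lon2):
    # Great-circle distance in metres between two latitude/longitude points given in degrees.
    r = 6371000.0  # assumed mean Earth radius in metres
    phi1, phi2 = math.radians(lat1), math.radians(lat2)
    dphi = math.radians(lat2 - lat1)
    dlmb = math.radians(lon2 - lon1)
    a = math.sin(dphi / 2) ** 2 + math.cos(phi1) * math.cos(phi2) * math.sin(dlmb / 2) ** 2
    return 2.0 * r * math.asin(math.sqrt(a))

# Node 1 of Table 2: commanded deployment location vs. actual location after deployment.
commanded = (37.120029, -6.452538)
actual = (37.120091, -6.453184)
print("Node 1 deployment error: %.1f m" % haversine_m(commanded[0], commanded[1], actual[0], actual[1]))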
