
This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution license.

Distributed sensing, computing and communication capabilities of wireless sensor networks require, in most situations, an efficient node localization procedure. In the case of random deployments in harsh or hostile environments, a general localization process within global coordinates is based on a set of anchor nodes able to determine their own position using GPS receivers. In this paper we propose an alternative anchor node localization technique that can be used when GPS devices cannot accomplish their mission or are considered too expensive. This novel technique is based on the fusion of video and compass data acquired by the anchor nodes and is especially suitable for video- or multimedia-based wireless sensor networks. For these types of wireless networks the presence of video cameras is intrinsic, while the presence of digital compasses is also required for identifying the cameras' orientations.

A wireless sensor network (WSN) is a collection of small and inexpensive embedded devices, with limited sensing, computing and communication capabilities, acting together to provide measurements of physical parameters or to identify events in known or unknown environments. Their real-life applications are rapidly emerging in a wide variety of domains, ranging from smart battlefields to natural habitat monitoring, precision agriculture, industrial process control or intelligent traffic management.

A large majority of the algorithms developed for WSNs are based on the assumption that all sensor nodes are aware of their position and, furthermore, of the position of nearby neighbor nodes. Since every measurement provided is strictly linked with the sensor node position in the field, a localization process with respect to a local/global coordinate system for each node must be carefully performed. Moreover, some other wireless sensor network related issues (e.g., geographic routing, sensing coverage estimation or nodes' sleep/wake-up procedures) might increase the need to accomplish nodes' localization by relying on location information.

The aim of localization is to supply the physical coordinates for all sensor nodes [

In order to identify their exact location, an almost general solution is to equip the beacon nodes with a Global Positioning System (GPS) receiver. This approach, even though it is based on a mature technology, has some drawbacks that make it impractical for many applications involving random deployments: (a) GPS receivers are relatively expensive, energy demanding and bulky; (b) GPS receivers may be confused by environmental obstacles, tall buildings, dense foliage,

In this paper we propose a new method to obtain the anchor node location that uses a digital compass (magnetometer), an image taken by a video camera and the exact location data for some geographically-located referential objects (e.g., solitary trees, electricity transmission towers, furnace chimneys,

The remainder of the paper is organized as follows: Section 2 is a brief overview of anchor localization techniques. In Section 3 we describe the three-step methodology underlining the types of information needed. Section 4 presents the kernel of our approach—the triangulation-based procedure that fuses a valid photo image with the related compass information to obtain the beacon node position within global coordinates. In Section 5, an illustrative case study is depicted, while the conclusions are drawn in the last section.

For cost and power consumption reasons, not every node of a randomly deployed WSN can be equipped with localization components; only a small number of them, named anchors, are. The anchor node localization procedure is basically done through triangulation or trilateration based on a number of known locations. The classical solution used in practice involves GPS receivers that use satellites to obtain the anchors' positions, but this approach is not generally viable due to some known shortcomings of GPS [

Reference [

Another solution that uses prior known positions for anchor nodes is presented in [

The anchor localization solution described in this paper does not rely on GPS receivers and does not require manual pre-calculated node placement. Instead, it uses a video camera and a compass to estimate the anchor position based on images captured by the camera and the compass information. It is very cost-effective, especially in the case of video- or multimedia-based wireless sensor networks, where only a compass is needed as supplementary node equipment.

Anchor node localization is a key prerequisite for every wireless sensor network localization technique within global coordinates. Our method for determining the exact position of this special type of node is an alternative to using GPS devices in areas where the Earth's magnetic field is not disturbed by structures containing ferrous metals or by electronic interference. It requires information captured by two sensors that equip the node (video and compass), the exact positions of a few reference objects in the deployment area and some constructive parameters of the mentioned sensors, as follows (

Based on the information described above, a three-step methodology must be performed after the random deployment in the field:

After deployment, each anchor node gathers a startup image with its video sensor and sends it to the central point, where an automatic shape recognition algorithm or even a human operator may identify objects from a given reference object set ℜ:

This set is created prior to the sensor deployment and is based on an analysis of known information about the deployment field. It contains the reference objects RO_i together with their exact positions.

All objects that belong to ℜ will be marked with a reference point calculated as the median point of the lower side of the bounding box framing the object. The position of the reference point is chosen so that it is invariant to the height of the objects. However, if the dimensions of some of the objects from ℜ are known, this information can be used to estimate the distance between the camera and these objects and therefore to validate, or slightly correct, the result after the main localization algorithm has ended. Examples for choosing the reference point are depicted in
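As a minimal sketch of this selection rule, assuming axis-aligned bounding boxes given as (x_min, y_min, x_max, y_max) in pixel coordinates with y growing downward (an illustrative convention, not specified in the text):

```python
# Reference point of a detected object: the midpoint of the lower edge of its
# bounding box, which makes it invariant to the object's height.
def reference_point(bbox):
    """Return (u, v): the median point of the bounding box's lower side."""
    x_min, y_min, x_max, y_max = bbox
    return ((x_min + x_max) / 2.0, y_max)  # y_max is the bottom edge

# Example: a tall, thin object (e.g., a chimney) framed at columns 410..430
print(reference_point((410, 120, 430, 600)))  # -> (420.0, 600)
```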

During setup processing, all reference objects from ℜ that could be identified in the setup images gathered from the anchors' sensors should be marked. Considering K deployed anchors, this procedure results in a set Γ_k of identified reference objects for each anchor k. An anchor is validated as a candidate only if card(Γ_k) ≥ 2, since the subsequent triangulation step requires at least two reference points in the anchor's image.
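The validation rule recoverable from the text can be sketched as follows; the dictionary representation of the sets Γ_k is an assumption for illustration:

```python
# An anchor k is a valid candidate for triangulation only if the set Γ_k of
# reference objects identified in its setup image has at least two elements.
def valid_candidates(gamma):
    """gamma maps anchor id -> set of identified reference object ids."""
    return {k for k, objs in gamma.items() if len(objs) >= 2}

gamma = {1: {"RO1", "RO2"}, 2: {"RO3"}, 3: {"RO1", "RO3", "RO4"}}
print(sorted(valid_candidates(gamma)))  # -> [1, 3]
```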

The next step of the localization algorithm is based on the well-known triangulation method [

The first step consists in determining the angles of the vertices formed by the camera and the two reference points. For this we consider the camera model presented in the corresponding figure, characterized by its horizontal field of view delimited by the angles φ_min and φ_max.
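Under a standard pinhole-camera assumption (not detailed in the text), the angle between the camera axis and the ray through a given pixel column can be sketched as:

```python
import math

# For a pinhole camera with horizontal field of view `fov_deg` and image width
# `width` (pixels), the focal length in pixels follows from the field of view;
# the angle of a pixel column u relative to the camera axis is then its
# arctangent. Objects left of center (counterclockwise, as in the text's
# convention) get positive angles.
def pixel_to_angle(u, width, fov_deg):
    f = (width / 2.0) / math.tan(math.radians(fov_deg) / 2.0)  # focal length, px
    return math.degrees(math.atan((width / 2.0 - u) / f))

# A point on the central vertical line of the image has angle 0.
print(pixel_to_angle(320, 640, 60.0))  # -> 0.0
```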

To determine the absolute camera position using two reference objects we first define a Cartesian reference system. Considering the map of the deployment area, we choose and mark an arbitrary point situated near the lower left corner of the area. Then we measure on the map the position of this point in geographic coordinates as a (latitude, longitude) pair. Next, we consider this point as the origin of a Cartesian referential system having the y-axis oriented along the North direction on the map, and the x-axis pointing toward East.

Then we can express the absolute reference object locations in this referential system. For this, we use the haversine formula, which gives the great-circle distance between two points with geographic coordinates (lat_1, lon_1) and (lat_2, lon_2).
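A minimal implementation of the haversine formula, using the mean Earth radius discussed later in the paper, might look like:

```python
import math

EARTH_RADIUS_M = 6_371_000.0  # mean Earth radius, metres

def haversine(lat1, lon1, lat2, lon2):
    """Great-circle distance in metres between two (lat, lon) points (degrees)."""
    phi1, phi2 = math.radians(lat1), math.radians(lat2)
    dphi = math.radians(lat2 - lat1)
    dlmb = math.radians(lon2 - lon1)
    a = math.sin(dphi / 2) ** 2 + math.cos(phi1) * math.cos(phi2) * math.sin(dlmb / 2) ** 2
    return 2 * EARTH_RADIUS_M * math.asin(math.sqrt(a))

# One degree of longitude along the equator is roughly 111.2 km:
print(round(haversine(0.0, 0.0, 0.0, 1.0)))
```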

Therefore, to calculate the absolute coordinates of the reference objects in the proposed Cartesian reference system, we can consider the distances between the origin point and the projections of the corresponding objects' reference points on the two axes. These projections have coordinate differences relative to the origin (lat_o, lon_o), from which the position of the first reference point (x_RP1, y_RP1) is obtained.

The position of the second reference point (x_RP2, y_RP2) is then obtained in the same manner.
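The projection onto the two axes can be sketched as follows, where each coordinate is obtained by applying the haversine formula while varying only one of the geographic coordinates; the helper function and its conventions are illustrative assumptions:

```python
import math

R = 6_371_000.0  # mean Earth radius, metres

def _haversine(lat1, lon1, lat2, lon2):
    p1, p2 = math.radians(lat1), math.radians(lat2)
    a = (math.sin(math.radians(lat2 - lat1) / 2) ** 2
         + math.cos(p1) * math.cos(p2) * math.sin(math.radians(lon2 - lon1) / 2) ** 2)
    return 2 * R * math.asin(math.sqrt(a))

def to_local_xy(origin, point):
    """Map a (lat, lon) point to the local x-East / y-North system with the
    given origin: each axis offset is a haversine distance along that axis."""
    (lat0, lon0), (lat, lon) = origin, point
    x = _haversine(lat0, lon0, lat0, lon)  # East offset: vary longitude only
    y = _haversine(lat0, lon0, lat, lon0)  # North offset: vary latitude only
    return (math.copysign(x, lon - lon0), math.copysign(y, lat - lat0))
```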

To finalize the first step we use the image captured by the sensor camera in which the two reference objects were identified. The aim is to determine the angles α′ and β′ formed by the camera axis and the two reference points. A vertical line drawn through the center of the image, and therefore aligned with the camera's direction of sight, is used as reference. Angles are measured counterclockwise relative to this direction, as depicted in

Starting from these values, the next step consists in computing the values of the reference angles relative to geographic North. First we use the compass measurement to obtain the orientation of the camera axis, denoted here by the angle γ. In addition we should consider the magnetic declination at the geographical position of the deployment field. The magnetic declination represents the angle between geographic and magnetic North at a specific location and can be positive or negative [
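A hedged sketch of this conversion, with the sign conventions (declination positive toward East, image angles positive counterclockwise, bearings growing clockwise from North) assumed for illustration:

```python
# The compass gives the magnetic heading `gamma_magnetic` of the camera axis;
# adding the local magnetic declination yields the true heading. Since
# bearings grow clockwise while image angles are measured counterclockwise,
# the image angle is subtracted.
def true_bearing(gamma_magnetic, declination, image_angle_ccw):
    return (gamma_magnetic + declination - image_angle_ccw) % 360.0

# Camera pointing magnetic 90° with +4° declination; object 10° left of axis:
print(true_bearing(90.0, 4.0, 10.0))  # -> 84.0
```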

The last step is the triangulation process itself, presented in the corresponding figure, which yields the camera position (x_c, y_c).

Therefore, the position is calculated as:
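In the local x-East / y-North system, this intersection computation can be sketched as follows (the clockwise-from-North bearing convention is an assumption for illustration):

```python
import math

# The camera position is the intersection of the two rays that start at the
# known reference points and run back along the measured bearings. A bearing θ
# (clockwise from North) corresponds to the unit direction (sin θ, cos θ).
def triangulate(rp1, rp2, bearing1, bearing2):
    (x1, y1), (x2, y2) = rp1, rp2
    d1x, d1y = math.sin(math.radians(bearing1)), math.cos(math.radians(bearing1))
    d2x, d2y = math.sin(math.radians(bearing2)), math.cos(math.radians(bearing2))
    # Camera C satisfies C + t1*d1 = rp1 and C + t2*d2 = rp2; eliminating C
    # gives t1*d1 - t2*d2 = rp1 - rp2, a 2x2 linear system in t1, t2.
    det = d1x * (-d2y) - (-d2x) * d1y
    if abs(det) < 1e-12:
        raise ValueError("bearings are parallel; no unique intersection")
    bx, by = x1 - x2, y1 - y2
    t1 = (bx * (-d2y) - (-d2x) * by) / det
    return (x1 - t1 * d1x, y1 - t1 * d1y)

# A camera at the origin sees one point due North and one due East:
print(triangulate((0.0, 10.0), (10.0, 0.0), 0.0, 90.0))  # -> (0.0, 0.0)
```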

The final result of the entire localization process is expressed by the triplet {x_c, y_c, γ}: the anchor coordinates and the camera orientation, which, together with φ_min and φ_max, also determine the covered sensing sector. The accuracy of this result is affected by the following factors:

The approximation of the Earth's radius by its mean value. The variation from this average radius to the meridional (6,367.45 km) or equatorial (6,378.14 km) value is, however, less than 0.08%.

The precision of the localization of the reference objects. If a GPS is used for this estimation, a precision of around 10 m can be relied on, depending on the number and position of the available satellites [

The precision of digital compass measurements, which in general is better than 1.0 degree. For example, the three-axis HMR3000 Digital Compass Module (Honeywell, Plymouth, MN, USA) provides 0.5° accuracy [

The precision of the reference points' angle estimation, which depends on the method used to identify the reference objects in the setup images. However, by favoring very thin objects during the creation of ℜ, this accuracy can be considerably increased.

To demonstrate the effectiveness of the presented algorithm we consider six anchor nodes, deployed in an area near km 2 of route RO DN59A. The deployment configuration is presented in

The chosen origin

In the following, we present in detail the localization steps for the first node, S_1, the results for all nodes being summarized in the overall results table; the reference objects identified in its setup image are RO_1 and RO_2.

For S_1, the coordinates of the two reference points RP_1 and RP_2 were first expressed in the Cartesian reference system as (x_RP1, y_RP1) and (x_RP2, y_RP2).

Using

In order to estimate the error we used a supplementary GPS measurement of sensor S_1.

The localization error was computed as the Euclidean distance between the GPS-measured position (x_GPS, y_GPS) and the computed position (x_C, y_C): ε = √((x_GPS − x_C)² + (y_GPS − y_C)²).
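As a check, this Euclidean error metric reproduces the last column of the overall results table, for example for node 1:

```python
import math

# Localization error: Euclidean distance between the GPS-measured and the
# computed positions, in the local Cartesian system (metres).
def localization_error(gps, computed):
    return math.hypot(gps[0] - computed[0], gps[1] - computed[1])

# Node 1 from the results table: (277.18, 139.14) vs (276.55, 131.65).
err = localization_error((277.18, 139.14), (276.55, 131.65))
print(round(err, 2))  # -> 7.52
```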

As can be seen from

Localization is a crucial procedure for randomly deployed wireless sensor networks. It is generally based on anchor nodes with efficient capabilities to automatically acquire their position in global coordinates. In this paper, we proposed a new anchor node localization technique meant for special cases where GPS receivers are unavailable or too expensive. Using a triangulation technique based on a video image and compass information, our method achieves reasonable precision. Moreover, it is especially appropriate for non-isotropic sensor networks, where magnetometers providing the orientation of the sensor cones are mandatory.

The authors declare no conflict of interest.

Information sources used in our beacon node localization method.

Examples of reference point selection for various objects.

The camera model.

Reference system defined for network deployment area.

Computation of reference points bearing.

Position estimation using triangulation.

Network deployment area with highlighted reference objects on a Google map.

Reference point angle measurements for S_1.

Overall localization results for all six deployed nodes.

Node | x_GPS (m) | y_GPS (m) | x_C (m) | y_C (m) | Error (m)
---|---|---|---|---|---
1 | 277.18 | 139.14 | 276.55 | 131.65 | 7.52
2 | 296.43 | 189.31 | 298.63 | 192.47 | 3.85
3 | 310.16 | 238.80 | 299.87 | 234.54 | 11.14
4 | 258.95 | 131.58 | 261.49 | 137.25 | 6.21
5 | 336.31 | 156.49 | 332.28 | 152.99 | 5.34
6 | 319.71 | 124.90 | 312.85 | 119.61 | 8.66