Sensors · Article · Open Access · 3 November 2017

A Middleware Solution for Wireless IoT Applications in Sparse Smart Cities

Paolo Bellavista, Carlo Giannelli, Stefano Lanzone, Giulio Riberto, Cesare Stefanelli and Mauro Tortonesi
1 DISI—Department of Computer Science and Engineering, University of Bologna, Bologna 40136, Italy
2 DMI—Department of Mathematics and Computer Science, University of Ferrara, Ferrara 44122, Italy
3 DE—Engineering Department, University of Ferrara, Ferrara 44122, Italy
* Author to whom correspondence should be addressed.
This article belongs to the Special Issue Next Generation Wireless Technologies for Internet of Things

Abstract

The spread of off-the-shelf mobile devices equipped with multiple wireless interfaces and sophisticated sensors is paving the way to novel wireless Internet of Things (IoT) environments, characterized by multi-hop infrastructure-less wireless networks where devices carried by users act as sensors/actuators as well as network nodes. In particular, the paper presents the Real Ad-hoc Multi-hop Peer-to-peer Wireless IoT Application (RAMP-WIA) middleware, a novel solution that facilitates the development, deployment, and management of applications in sparse Smart City environments, characterized by users willing to collaborate by allowing new applications to be deployed on their smartphones to remotely monitor and control fixed/mobile devices. RAMP-WIA allows users to dynamically configure single-hop wireless links, to opportunistically manage multi-hop packet dispatching considering that the network topology (together with the availability of sensors and actuators) may abruptly change, to reliably actuate sensor nodes specifically considering that only part of them may actually be reachable in a timely manner, and to dynamically upgrade nodes through over-the-air distribution of new software components. The paper also reports the performance of RAMP-WIA in simple but realistic small-scale deployment scenarios with off-the-shelf Android smartphones and Raspberry Pi devices; these results show not only the feasibility and soundness of the proposed approach, but also the efficiency of the implemented middleware when deployed on real testbeds.

1. Introduction

Today, it is estimated that some 15 billion devices are connected, and this number is set to explode to 50 billion by 2020, particularly in and around urban centers [1]. These devices are often organized in systems or networks of advanced sensors, which capture and disseminate multimedia data sets, often of considerably large size. In addition, a steadily growing selection of devices (not only smartphones, tablets, and laptops, but also wearable devices, smart TVs, smart electric appliances, and any kind of gadget ready for the Internet of Things) equipped with multiple network interfaces (LTE/4G, WiMax, WiFi, Bluetooth, Ethernet, etc.) and sophisticated sensing capabilities are entering the market, thus increasing even further the amount of sensing data generated. For instance, on top of at least two cameras and a microphone, modern smartphones feature an iris scanner, a pressure sensor, a fingerprint reader, an accelerometer, a gyroscope, a barometer, a proximity sensor, a compass, a heart rate sensor, a light sensor, and a magnetometer.
In particular, Smart Cities worldwide are embracing Internet of Things (IoT) technologies to streamline their operations and meet the growing expectations of their citizens. Today, citizens in the most vibrant cities are already seeing many initiatives designed to make urban services smarter, whether for transportation, parking, lighting, traffic and waste management, safety, or law enforcement. Urban services powered by the Internet will certainly enhance citizens' quality of life, but developing this new generation of services requires addressing challenging aspects, such as efficient packet dispatching in greatly dynamic and possibly partitioned wireless networks.
The paper has the specific goal of supporting Wireless IoT Applications in sparse Smart City environments by considering the many issues that the target scenario raises at different and heterogeneous levels, ranging from single- and multi-hop communication and inter-node coordination (also considering dynamic network partitioning), to the management of distributed applications' remote deployment and upgrades. These applications are expected to implement smart information dissemination functions on top of a distributed architecture of software components running on fixed sensor systems, on sparse mobile nodes nomadically roaming and opportunistically interacting with one another, on edge devices located in proximity of either raw data sources or information consumers, and in the Cloud, reached via Internet-enabled mobile devices sharing their connectivity to the Web.
The raw (sensing) data collection and processing functions implemented by Wireless IoT Applications are significantly more complex than those usually found in Wireless Sensor Network (WSN) applications. In fact, WSNs are usually designed and deployed to achieve very specific goals that are well-defined at service implementation time, and are thus unlikely to be exploited for different goals, either at the same time or in different time periods. For this reason, nodes of WSNs are usually configured in a specialized fashion, with limited requirements to drastically modify their behavior (even if configurability and upgradeability of software components may be considered as a plus). Instead, Wireless IoT Applications in sparse Smart Cities have a much more heterogeneous, dynamic, and general-purpose nature; they typically implement many different but concurrent data collection and processing tasks, and provide a set of services that can significantly change over their lifetime.
We envision that smartphones will play a key enabling role in Wireless IoT Applications, as they will be opportunistically adopted to dynamically extend and enhance Smart City environments, by dispatching data as well as by behaving as IoT devices that can monitor/control the surrounding environment. In particular, by taking advantage of their pervasive availability and increased software/hardware capabilities, smartphones will become an integrated part of sparse Smart Cities and will collaborate to support multi-hop connectivity, by dynamically and directly interconnecting one another (i) to create single-hop links in a peer-to-peer fashion and (ii) to collaboratively dispatch packets by acting as intermediary nodes between senders and receivers (in addition to exploiting the “more traditional” but sporadic availability of infrastructure connectivity, e.g., based on IEEE 802.11 Access Points).
To facilitate a full understanding of the proposal, the manuscript also illustrates how our solution enables and supports the implementation of a sample Wireless IoT Application, characterized by a pool of Raspberry Pi devices acting as sensors that periodically gather raw data (pictures via webcams), process them (by comparing current and previous pictures), and send alert messages if given conditions apply (the two pictures differ by more than a threshold). Internet-enabled Android smartphones, widely available in the envisioned sparse Smart City scenarios, act as controllers located in the proximity of sensor nodes, dispatching information (e.g., alert messages), commands (e.g., configuration parameters to modify the image comparison threshold), and novel software components (e.g., containing a different algorithm for image comparison) between a (logically) centralized cloud application and the sensor nodes themselves.
The paper takes advantage of our previous experience in the spontaneous networking research field [2,3] by originally providing novel solutions to support Wireless IoT Applications in sparse Smart City scenarios, addressing the many issues that developing and deploying applications in such environments raises. More specifically, the proposed solution provides effective mechanisms to manage single subnets, to dispatch packets at multi-hop distances between subnets managed in an uncoordinated way, to detect packet dispatching failures and take proper countermeasures based on store-and-forward policies, to coordinate/manage controller and sensor nodes by specifically taking into consideration that the whole network could become partitioned at provisioning time, and to ease the development/deployment of novel services by providing general-purpose controller/sensor software components as the basis of new and specialized applications. In particular, the paper presents the Real Ad-hoc Multi-hop Peer-to-peer Wireless IoT Application (RAMP-WIA) middleware, which extends our previous solution along four primary guidelines: (i) proper management of single-hop wireless links among sensors, controllers, and other nodes of the network to increase the dynamicity of the overall network (Section 4); (ii) effective multi-hop packet management, also considering that sparse networks are likely to be partitioned and thus packets dispatched among sensors and controllers should be properly managed by intermediary nodes (Section 5); (iii) reliable and resilient actuation of sensor nodes, specifically considering that controllers could be interested in issuing commands only if (most of the) sensor nodes are actually reachable, otherwise the command should be safely aborted (Section 6); and (iv) dynamic upgrade of sensor nodes by supporting the distribution and deployment of new software components on-the-fly (Section 7).

3. The RAMP Middleware for Wireless IoT Applications

We have identified three primary features that should be supported to foster the development of Wireless IoT Applications in sparse Smart Cities: single-hop connectivity management to ease the dynamic creation of device-to-device connections, opportunistic multi-hop packet management to properly dispatch packets at multi-hop distance also in case of temporarily partitioned wireless networks, and application coordination tools to facilitate the dissemination and deployment of Wireless IoT Applications. The rest of the section presents a three-layer reference architecture supporting the required features (see Figure 1) and outlines the main functionalities that each layer should support. This architecture is implemented in our novel RAMP-WIA solution (based on and enhancing our RAMP middleware), which is described in the following sections and which provides novel tools addressing the specific issues of monitoring and controlling Wireless IoT Applications in sparse Smart City environments.
Figure 1. High-level overview of the three-layer reference architecture.
Starting from the bottom layer of the proposed reference architecture, the main objective of the Single-Hop Connectivity Layer is to support the creation of and access to IP subnets based on wireless protocols widely available on off-the-shelf mobile nodes. In particular, the layer should provide the capability of:
  • manually configuring IP connectivity. In this case, developers can optimize networks to achieve specific goals based on the full knowledge of the environment, that is, node location and movements. However, since nodes may have unpredictable movement patterns, it may be unfeasible (or very complex with high costs in terms of information dissemination and centralized control) to manually configure each and every network interface of nodes;
  • managing IP connectivity in a completely autonomous manner. The layer should be able to modify network configurations to take advantage of nodes possibly available in the same location, both accessing and providing connectivity in a peer-to-peer fashion. In this manner, it is possible to achieve a high degree of dynamicity while completely hiding any link connectivity complexity.
In both cases, the layer should provide an easy-to-use Application Programming Interface (API) to: (i) create new IP subnets and access already available ones, by hiding (whenever possible) the complexity of managing heterogeneous wireless protocols; (ii) finely tune the degree of dynamicity that the network should have in case of automatic configuration; and (iii) notify link connectivity modifications to efficiently make other components aware of new connectivity opportunities. Practical examples of how a middleware implementation of this layer can work, and with which advantages, are reported in Section 4; in particular, our Link Connectivity Component (LCC) (inspired by WiFi-Opp [35]) implements this layer for both IEEE 802.11 and Bluetooth, in manual as well as automatic manner, on top of off-the-shelf Android devices.
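As a purely illustrative sketch, a Java interface exposing the three capabilities listed above could look as follows; all type, method, and listener names are assumptions and do not correspond to the actual LCC API (only the lazy/hectic policies of Section 4 are taken from the paper).

```java
// Hypothetical sketch of a Single-Hop Connectivity Layer API; names are illustrative,
// not the actual Link Connectivity Component (LCC) interface.
public interface SingleHopConnectivity {

    enum WirelessTechnology { IEEE_802_11, BLUETOOTH }
    enum DynamicityPolicy { MANUAL, LAZY, HECTIC }

    // (i) create a new IP subnet (e.g., a Wi-Fi hotspot or Bluetooth PAN) or join an
    // already available one, hiding the heterogeneity of the underlying wireless protocols.
    void createSubnet(WirelessTechnology technology);
    void joinSubnet(String subnetId);

    // (ii) tune how aggressively the layer reconfigures interfaces when working automatically.
    void setDynamicityPolicy(DynamicityPolicy policy);

    // (iii) let upper layers (e.g., RON) be notified of new connectivity opportunities.
    void addLinkChangeListener(LinkChangeListener listener);

    interface LinkChangeListener {
        void onLinkUp(String subnetId);
        void onLinkDown(String subnetId);
    }
}
```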
The goal of the Multi-hop Networking Overlay Layer is to support node-by-node packet delivery among nodes via multiple single-hop links provided by the layer below. To this purpose, we identify two primary features this layer should support:
  • it should make it easy to exploit the IP subnets provided by the Single-hop Connectivity Layer by dispatching packets among nodes in a transparent manner, with no need to manually configure the operating system routing tables to correctly deliver information to the receiver. To this purpose, we exploit the RAMP middleware, which supports multi-hop collaborative communications independently of how the underlying (possibly heterogeneous) links/IP sub-networks have been created. RAMP nodes cooperate at the middleware layer to dispatch packets, with no need to modify routing tables at the operating-system level, thus achieving the degree of dynamicity needed in sparse Smart City scenarios [2,3,36];
  • it should efficiently manage network failures and partitioning, which have to be considered frequent events in sparse Smart City scenarios. To this purpose, we have significantly extended our RAMP middleware by designing and implementing the RAMP Opportunistic Networking (RON) component, transparently supporting the efficient management of packets in case the destination node is temporarily unreachable. In case of packet dispatching failure, RON persists packets and then automatically retries to transmit them to the destination upon link connectivity modifications (even duplicating packets to maximize delivery probability), for example because a node has moved to another location providing access to new IP subnets or because the Single-hop Connectivity Layer has modified the configuration of a network interface.
The API at the Multi-hop Networking Overlay Layer allows tuning the behavior of nodes to achieve a proper tradeoff between resource sharing to support packet management and resource preservation to avoid excessive overhead (additional details on our implementation are given in Section 5).
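As an example of the tuning knobs such an API could expose, the sketch below groups a few hypothetical parameters; the names and the storage quota value are assumptions, although the other default values mirror those reported for RON later in the paper (10 s retry period, 3 spray copies).

```java
// Hypothetical tuning parameters for the Multi-hop Networking Overlay Layer (illustrative names).
public final class OverlayConfig {
    // Storage quota for opportunistically persisted packets (the value here is an arbitrary example).
    private long maxPersistedBytes = 64L * 1024 * 1024;
    // How often persisted packets are re-tried towards their destination.
    private int retryPeriodSeconds = 10;
    // How many copies are handed to one-hop neighbors when no path is found (spray & wait).
    private int sprayCopies = 3;
    // Whether this node shares its resources to relay packets on behalf of other nodes.
    private boolean relayForOthers = true;

    public OverlayConfig maxPersistedBytes(long bytes)   { this.maxPersistedBytes = bytes; return this; }
    public OverlayConfig retryPeriodSeconds(int seconds) { this.retryPeriodSeconds = seconds; return this; }
    public OverlayConfig sprayCopies(int copies)         { this.sprayCopies = copies; return this; }
    public OverlayConfig relayForOthers(boolean relay)   { this.relayForOthers = relay; return this; }
}
```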
Finally, the Coordination Wireless IoT Application Layer provides tools to simplify the development of sensor and controller nodes. This layer hides the complexity of node coordination, for example by enabling the transparent dispatching of commands and updated software components (additional insights in Section 6 and Section 7, respectively) among sensors and controllers related to the same Wireless IoT Application. At the same time, it supports the tuning of packet management to achieve a proper tradeoff between packet delivery probability and packet management cost. For instance, when sending a message, sensor nodes can specify a proper expiry value after which messages should be discarded even if not yet correctly delivered to the destination by the Multi-hop Networking Overlay Layer. From an architectural point of view, it exploits primitives provided by the layer below to support the dispatching of messages with end-to-end visibility by adopting four different semantics (a minimal API sketch follows the list):
  • one-to-one, to send packets from a sender to a given receiver, for example, to send a document via a file sharing application;
  • one-to-any, to send packets from a sender to any node of a set of receivers, for example, to send sensed data to one of the available sinks in a sparse Smart City environment;
  • one-to-many, to send packets from a sender to every node of a set of receivers, for example, to send a message to a group of a chat application;
  • one-to-all, to send a packet to every node of the network, for example, an alert message in case of fire hazard.
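A minimal sketch of how the four semantics could be exposed to applications is shown below; only sendUnicast(…) is documented by the paper (Section 5), while the other method names and all signatures are assumptions made for illustration.

```java
// Illustrative sketch of the four dispatching semantics offered by the Coordination layer.
// Only sendUnicast(...) appears in the paper; the other names/signatures are assumptions.
public interface MessageDispatcher {
    // one-to-one: deliver to a single receiver (e.g., a document sent via a file sharing application).
    void sendUnicast(String receiverNodeId, byte[] payload, int expirySeconds);

    // one-to-any: deliver to any single member of a group (e.g., any available sink).
    void sendAnycast(String groupName, byte[] payload, int expirySeconds);

    // one-to-many: deliver to every member of a group (e.g., a chat group).
    void sendMulticast(String groupName, byte[] payload, int expirySeconds);

    // one-to-all: deliver to every node in the network (e.g., a fire-hazard alert).
    void sendBroadcast(byte[] payload, int expirySeconds);
}
```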

5. The RON Component for Networking Overlay

The RAMP Opportunistic Networking (RON) component (Figure 3) extends the RAMP middleware by enhancing features already provided by the Continuity Manager (CM) component [3]. In particular, CM is in charge of detecting path disconnections and packet dispatching failures (notified by the Dispatcher RAMP component) and of taking proper countermeasures. CM waits for a short time interval, interacts with the Resolver component in charge of looking for alternative paths to the destination, and sends the packet along the (possibly discovered) new path. It is worth noting that CM only keeps packets in volatile memory, periodically looks for other paths only for brief intervals (typically in the range of a few seconds), and forwards packets only if a path to the destination is found. If a path is not found, the packet is dropped.
Figure 3. Real Ad-hoc Multi-hop Peer-to peer (RAMP) Opportunistic Networking (RON) architecture.
RON interacts with CM to intercept packets that would otherwise be dropped, and adopts opportunistic networking techniques [37] to maximize packet delivery probability. To this purpose, we have extended the header of RAMP packets with two new metadata fields (a sketch of the extended header follows the list):
  • an Expiry value in seconds (4 bytes). The sender application exploits the novel sendUnicast(…) API to (optionally) specify the Expiry value. Then the RON component exploits this value to understand whether, and for how long, the packet should be managed opportunistically;
  • a unique Packet Id (P-ID) that remains valid until the packet itself expires. To this purpose, on the sender side, the RAMP middleware transparently adds a random P-ID (4 bytes) to the header. This solution is envisioned as a proper tradeoff between simplicity of implementation and soundness of the solution, since it is very unlikely that two nodes in the same opportunistic network create the same random packet id, also considering that in most actual scenarios the packet expiry value is typically in the range of hours.
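A minimal sketch of the two header fields is given below; the 4-byte sizes and the random P-ID generation come from the text, while the class structure and method names are assumptions.

```java
import java.util.concurrent.ThreadLocalRandom;

// Sketch of the two RON header fields (4 bytes each); only the field semantics come
// from the text, while class and method names are illustrative.
public final class RonHeader {
    private int expirySeconds;   // remaining lifetime; a non-positive value means "drop the packet"
    private final int packetId;  // random P-ID, valid until the packet itself expires

    public RonHeader(int expirySeconds) {
        this.expirySeconds = expirySeconds;
        // A random 4-byte identifier: collisions between two nodes of the same
        // opportunistic network are very unlikely within typical expiry windows.
        this.packetId = ThreadLocalRandom.current().nextInt();
    }

    public boolean isExpired() { return expirySeconds <= 0; }
    public void decreaseExpiry(int elapsedSeconds) { expirySeconds -= elapsedSeconds; }
    public int packetId() { return packetId; }
}
```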
In particular, the primary activities of RON are the following (a simplified sketch of this processing loop is reported after the list):
  • RON drops the packet if it is expired (Expiry ≤ 0) or if it is a duplicate (by exploiting a volatile database containing the set of P-IDs of already managed packets);
  • if the packet is still valid, RON saves it in persistent memory, possibly removing other persisted packets when the amount of dedicated storage (configured by the user, mainly depending on the storage capacity of the node) is full, for example by adopting traditional “remove oldest/biggest” policies (out of the scope of this paper);
  • then, it periodically deserializes packets from persistent memory (typically every 10 s, user configurable) and whenever the CC layer of LCC notifies a network modification. For each deserialized packet, it appropriately decreases the Expiry value and (if the packet is not expired) exploits the Resolver component to look for a path towards the destination:
    • if a path is found, the packet is sent to the destination and removed from local memory;
    • if a path is not found, the packet is duplicated based on the well-known spray & wait [38] policy (different, more articulated policies can be adopted, but they are out of the scope of this paper) and sent to N one-hop neighbor nodes (N set by the user, default value 3), with the main purpose of maximizing the probability of correctly dispatching the packet to the destination. To limit networking overhead, nodes avoid sending the same packet to the same one-hop neighbor more than once. In addition, the destination node should be aware that the same packet could be received more than once, for example dispatched via different multi-hop paths, and takes advantage of the P-ID value to discard duplicated packets.
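The following simplified sketch summarizes the activities above (dropping of expired/duplicate packets, persistence, periodic retry, spray & wait). The helper interfaces and names are assumptions introduced only to make the sketch self-contained; it is an illustration rather than the actual RON code.

```java
import java.util.HashSet;
import java.util.List;
import java.util.Set;

// Simplified sketch of RON packet handling (illustrative helper interfaces, not the actual implementation).
public final class RonSketch {

    static final class Packet {
        int expirySeconds;        // remaining lifetime (Expiry header field)
        final int packetId;       // random P-ID header field
        final String destination;
        Packet(int expirySeconds, int packetId, String destination) {
            this.expirySeconds = expirySeconds; this.packetId = packetId; this.destination = destination;
        }
    }

    interface Store      { void save(Packet p); void remove(Packet p); List<Packet> loadAll(); }
    interface Resolver   { String findPath(String destination); } // null when no path is currently known
    interface Dispatcher { void send(Packet p, String path); void sendToNeighbor(Packet p, String neighbor); }
    interface Neighbors  { List<String> pickUpTo(int copies, Packet p); } // skips neighbors already served

    private final Set<Integer> seenPacketIds = new HashSet<>();
    private final Store store; private final Resolver resolver;
    private final Dispatcher dispatcher; private final Neighbors neighbors;
    private static final int SPRAY_COPIES = 3; // default N

    RonSketch(Store s, Resolver r, Dispatcher d, Neighbors n) {
        store = s; resolver = r; dispatcher = d; neighbors = n;
    }

    // Called by CM when the Dispatcher reports a delivery failure for a packet.
    void onUndeliverablePacket(Packet p) {
        if (p.expirySeconds <= 0 || !seenPacketIds.add(p.packetId)) return; // drop expired or duplicate packets
        store.save(p); // may evict oldest/biggest packets when the storage quota is full
    }

    // Invoked every retry period (default 10 s) and whenever LCC signals a network modification.
    void retryPersistedPackets(int secondsSinceLastAttempt) {
        for (Packet p : store.loadAll()) {
            p.expirySeconds -= secondsSinceLastAttempt;
            if (p.expirySeconds <= 0) { store.remove(p); continue; }   // expired: discard
            String path = resolver.findPath(p.destination);
            if (path != null) {                                        // path found: deliver and forget
                dispatcher.send(p, path);
                store.remove(p);
            } else {                                                   // no path: spray & wait copies
                for (String neighbor : neighbors.pickUpTo(SPRAY_COPIES, p)) {
                    dispatcher.sendToNeighbor(p, neighbor);
                }
            }
        }
    }
}
```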
The source code of the RAMP middleware enhanced with the novel RON component can be found at https://github.com/DSG-UniFE/ramp while the Android porting of RAMP can be found at https://github.com/DSG-UniFE/ramp-android.

6. Quorum-Based Distributed Actuation and Resiliency Policy

As described in Section 3, the purpose of the Coordination Wireless IoT Application Layer is to easily coordinate nodes of Wireless IoT Applications by transparently dispatching commands and software components among sensors and controllers. This section addresses the issues raised by the dynamic nature of sparse Smart City environments, which may prevent controllers from communicating with sensors in a timely manner in case of network partitioning. To this purpose, we propose a quorum-based distributed actuation solution coupled with a resilience behavior policy. In particular, we consider the general case of Internet-enabled controller nodes receiving commands from a cloud-based application and forwarding them to a group of sensor devices to tune their behaviors. Controller nodes are aware of the likelihood that only part of the devices to be controlled could be reachable, while sensor nodes take into consideration the possibility that they may miss commands sent by controllers since the network could be partitioned.
The quorum-based distributed actuation solution defines how controller nodes send control packets to sensor nodes. As a preliminary phase, controller nodes offer a group chat-like service providing a different group for each supported cloud-based application, such as “webcam-based presence detection”. Then, sensor nodes perform a discovery procedure to identify the nodes available in the network behaving as controllers for the desired application, select one of the discovered controller nodes supporting the required application, and register with it. The selection is based on QoS parameters, such as path length or round-trip time, or on more sophisticated mobility-based solutions [39,40].
Whenever controller nodes want to send a command packet to sensor nodes of a given application, they exploit the following protocol (a controller-side sketch is reported after the list):
  • controller nodes use the one-to-many semantics to dispatch a pre-command packet to the sensor nodes that have previously joined an application;
  • sensor nodes receiving the pre-command packet reply to the controller node with an availability here-I-am packet to confirm their reachability;
  • the controller node, based on the ratio of received to expected here-I-am packets and on a threshold provided by the cloud application, determines whether the command should be issued, sending the actual command packet only if the threshold is reached.
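A minimal controller-side sketch of this quorum check follows; the Transport interface stands for assumed primitives built on the one-to-many semantics of Section 3 and is not the actual Controller component API.

```java
import java.util.List;

// Controller-side sketch of the quorum-based actuation protocol (illustrative helper interface).
public final class QuorumController {
    interface Transport {
        List<String> registeredSensors(String appName);           // sensors that joined this application
        void sendPreCommand(String appName);                      // one-to-many pre-command packet
        int awaitHereIAm(String appName, long timeToWaitMs);      // here-I-am replies received within the timeout
        void sendCommand(String appName, byte[] command);         // actual command packet
    }

    private final Transport transport;
    QuorumController(Transport transport) { this.transport = transport; }

    // Returns true if the quorum was reached and the command was actually issued.
    boolean issueCommand(String appName, byte[] command, long timeToWaitMs, double threshold) {
        int expected = transport.registeredSensors(appName).size();
        transport.sendPreCommand(appName);
        int replies = transport.awaitHereIAm(appName, timeToWaitMs);
        if (expected > 0 && (double) replies / expected >= threshold) {
            transport.sendCommand(appName, command);               // quorum reached: issue the command
            return true;
        }
        return false;                                              // quorum not reached: safely abort
    }
}
```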
To improve the resiliency of the network, sensor nodes exploit pre-command packets to be informed that the controller node is still reachable, regardless of whether an actual command packet is then issued. If pre-command packets are not received for a time period specified by the controller during the join procedure, sensor nodes assume that the controller node is not reachable anymore, e.g., because it has left the network or the network is partitioned. Then, they restart the discovery of controller nodes supporting their applications. If no controller node is available, they apply the following resilience policy (a sensor-side sketch of the resulting state switch is reported after the list):
  • sensor nodes equipped with Internet-enabled long range wireless technologies (and thus able to behave as controller nodes by interacting with the cloud-based application) start a simple leader-election algorithm (out of the scope of the paper) based on packet flooding to identify a time-limited controller node. Sensor nodes spontaneously participating in the leader-election algorithm are willing to behave as controllers, but only for a brief period of time, to limit power consumption or reduce costs to dispatch the packet to the Internet:
    • in case a new controller node becomes available, the sensor node elected as time-limited controller becomes again a sensor node only;
    • in case an actual controller is not available for a long time period, the current time-limited controller node stops behaving as controller, to push other Internet-enabled sensor nodes to start the leader-election algorithm again;
  • regular sensor nodes (not able to behave as time-limited controller nodes) keep looking for a new controller for a while, that is, they wait for the election of a time-limited controller node. If a new controller is not discovered, they switch from the “remotely controlled” to the “resilience” state. While in the latter state, they exploit resilience values (provided via previously received command packets) as configuration parameters. In fact, since in this case sensor nodes cannot receive command packets from any controller node, they cannot determine whether the active parameter is still the one currently set in the cloud application. The availability of a resilience parameter previously issued via controller nodes allows sensor nodes, also in case of network partitioning or detachment from the network, to switch to a safe default behavior, for example one achieving a proper tradeoff between metering frequency and power consumption.
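On the sensor side, the switch between the “remotely controlled” and “resilience” states can be sketched as follows; the state names come from the text, while the class structure, the method names, and the use of the image-difference threshold (the running example of Section 8) as the controlled parameter are illustrative assumptions.

```java
// Sensor-side sketch of the resilience policy (illustrative; timers and controller discovery omitted).
public final class SensorStateSketch {
    enum State { REMOTELY_CONTROLLED, RESILIENCE }

    private State state = State.REMOTELY_CONTROLLED;
    private double activeThreshold = 0.40;                 // default image-difference threshold (40%)
    private double resilienceThreshold = activeThreshold;  // fallback value, updated by received commands

    // Called when a command packet is received from the controller.
    void onCommand(double newThreshold, double newResilienceThreshold) {
        state = State.REMOTELY_CONTROLLED;
        activeThreshold = newThreshold;
        resilienceThreshold = newResilienceThreshold;
    }

    // Called when pre-command packets have not been received for the agreed period
    // and no new controller (or time-limited controller) could be discovered.
    void onControllerLost() {
        state = State.RESILIENCE;
        activeThreshold = resilienceThreshold;  // fall back to the last resilience value received
    }

    double currentThreshold() { return activeThreshold; }
    State currentState()      { return state; }
}
```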
Figure 4 presents the Sensor and Controller components, which implement the quorum-based distributed actuation and resiliency policy solution. The Sensor component offers the following API (Figure 4, left):
Figure 4. Sensor and Controller components.
  • joinApp(appName, listener), allowing to register a listener interested in:
    • receiving commands related to a given application;
    • being notified of controller unavailability to activate the resilience policy;
  • leaveApp(appName), allowing to leave an application and unregister the listener.
The Controller component offers the following API (Figure 4, right):
  • registerApp(appName), to add a local application;
  • removeApp(appName), to remove a local application;
  • sendCommand(appName, command, timeToWait, threshold), to send a command to sensor nodes of a given application: it sends a pre-command, waits for here-I-am messages for timeToWait, and finally sends the actual command message only if at least “threshold” sensor nodes have replied with here-I-am messages.
Pre-command, here-I-am and command messages are dispatched taking advantage of the Multi-hop Networking Overlay Layer, thus transparently achieving multi-hop and opportunistic networking capabilities. As better detailed in Section 8, PresenceSensor and PresenceController components take advantage of the Sensor and Controller ones respectively to transparently register, look for, and join controller nodes, send and receive commands, and notify resilience policy activations in an event-driven fashion.
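The following sketch illustrates how an application can exploit these APIs, mimicking the PresenceSensor/PresenceController interplay of Section 8; only joinApp, leaveApp, registerApp, removeApp, and sendCommand come from the component descriptions above, while the listener shape, the parameter encodings, and the boolean return value of sendCommand are assumptions.

```java
// Illustrative usage of the Sensor and Controller APIs (interfaces and listener shape are assumptions).
public final class PresenceExample {

    // Assumed listener interface: receives commands and controller-unavailability events.
    interface SensorListener {
        void onCommand(byte[] command);
        void onControllerUnavailable();   // triggers the resilience policy
    }
    interface Sensor {
        void joinApp(String appName, SensorListener listener);
        void leaveApp(String appName);
    }
    interface Controller {
        void registerApp(String appName);
        void removeApp(String appName);
        boolean sendCommand(String appName, byte[] command, long timeToWaitMs, double threshold);
    }

    static void controllerSide(Controller controller) {
        controller.registerApp("presence");
        // Pre-command, wait 3 s for here-I-am replies, issue only if 100% of sensors replied (Section 8).
        controller.sendCommand("presence", "imageDiffThreshold=0.20".getBytes(), 3000, 1.0);
    }

    static void sensorSide(Sensor sensor) {
        sensor.joinApp("presence", new SensorListener() {
            @Override public void onCommand(byte[] command) { /* apply the new configuration */ }
            @Override public void onControllerUnavailable() { /* activate the resilience policy */ }
        });
    }
}
```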

7. Dynamic Processing Update

New services provided to (and possibly also by) citizens will require the dynamic deployment of new Wireless IoT Applications, by disseminating novel software components or upgrading already deployed ones to take advantage of and integrate with additional hardware systems. This observation calls for a solution enhancing the sparse Smart City scenario by allowing the dynamic distribution, deployment, configuration, exploitation, and upgrade of code interacting with sensors and processing gathered data on the nodes composing the network itself. To this purpose, we identify three primary solutions:
  • dynamic code distribution. The behavior specifying how nodes should interact with sensors and process acquired data is distributed to nodes as pre-compiled/script files. In this manner, the amount of code distributed via RAMP packets is minimized, thus increasing the efficiency of behavior change procedures. However, in this case the RAMP middleware is in charge of managing both the distribution and the execution of the code;
  • adoption of the Kura [41] IoT gateway. The adoption of the well-established and maintained open source Kura project can improve the reliability of the solution while minimizing the required maintenance effort. In fact, Kura is a Java-based Open Services Gateway initiative (OSGi) enabled framework that already supports, among its many features, the easy management of software components from remote locations;
  • containerization based on Docker [42]. To further support the dynamic configuration and upgrading of nodes, it is also possible to exploit Docker to distribute among nodes not only code, but also the software environment the code should run in. In fact, Docker allows embedding in a single container all the software packages required for deployment, thus greatly simplifying the deployment (and upgrade) of new software on heterogeneous devices.
The three solutions above present very different granularity levels; moreover, they present clear pros and cons in terms of standardization, generality, efficiency, and applicability. First of all, Kura and Docker represent open-source solutions gaining momentum in the scientific and industrial community, currently under development and with continuous improvements and upgrades. In terms of generality, Docker is certainly the best choice, since it can be adopted on top of the most widespread operating systems, ranging from Windows to many flavors of Linux. In addition, it can contain any type of software environment, thus allowing one to distribute and deploy (almost) any solution. However, Docker containers are usually of relatively large size (likely greater than 100 MB) and thus do not fit the strict efficiency requirements, in terms of networking traffic, of sparse Smart City scenarios, whose nodes are willing to collaborate but may not be available to dispatch huge amounts of data. Moreover, Docker cannot be easily deployed on off-the-shelf smartphones, currently requiring rooted Android devices and custom kernels. Kura is already exploited on many devices and its features have been demonstrated to be well suited to devices with limited hardware resources. However, like Docker, Kura is not yet available for mobile environments such as Android, thus limiting its applicability. In addition, Kura applies to scenarios where nodes are Internet-enabled and thus it is not suitable for sensor nodes reachable only via multi-hop paths.
Based on these considerations, we have opted for a solution based on dynamic code distribution, since it provides larger applicability (it can be easily adopted on any node running the Java-based RAMP middleware) while ensuring limited overhead (only the code implementing the new behavior needs to be moved). In the future, we will further investigate the feasibility of adopting Docker in sparse Smart City scenarios, possibly exploiting RAMP to dispatch containers while deploying them only on suitable devices, such as Raspberry Pis.
In particular, we propose a solution based on the dispatching and deployment of Java classes and script files, for example containing Python code. Java classes are distributed as class files and then instantiated by interacting with the class loader, while scripts are executed as external processes providing their results via the standard output stream or via a support output file. The typical usage scenario consists of a Java class and a script, the former in charge of processing data and the latter of interacting with the sensors. For instance, a Java class periodically activates a Python script, gathers the pictures it produces, and compares them, sending an alert only if they differ by more than a threshold; the Python script, in turn, interacts with webcams to take pictures and then provides them via the standard output stream.
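A minimal sketch of this pattern is reported below: a class received as raw bytes is loaded at runtime through a custom class loader, and a received script is executed as an external process whose standard output carries its results. All names are illustrative and the error handling is simplified; this is not the actual RAMP-WIA code.

```java
import java.io.BufferedReader;
import java.io.InputStreamReader;
import java.nio.charset.StandardCharsets;

// Illustrative sketch of dynamic code distribution: loading a received Java class
// and running a received script as an external process (names are assumptions).
public final class DynamicCodeRunner {

    // Loads a class received as raw bytes (e.g., carried in a command packet).
    static Class<?> loadReceivedClass(String className, byte[] classBytes) {
        ClassLoader loader = new ClassLoader(DynamicCodeRunner.class.getClassLoader()) {
            @Override protected Class<?> findClass(String name) {
                return defineClass(name, classBytes, 0, classBytes.length);
            }
        };
        try {
            return loader.loadClass(className);
        } catch (ClassNotFoundException e) {
            throw new IllegalArgumentException("Received class cannot be loaded: " + className, e);
        }
    }

    // Runs a received script (e.g., Python code saved to scriptPath) and returns its standard output,
    // which the processing class can then parse (e.g., the path of a webcam picture or raw data).
    static String runScript(String interpreter, String scriptPath) throws Exception {
        Process process = new ProcessBuilder(interpreter, scriptPath).start();
        StringBuilder output = new StringBuilder();
        try (BufferedReader reader = new BufferedReader(
                new InputStreamReader(process.getInputStream(), StandardCharsets.UTF_8))) {
            String line;
            while ((line = reader.readLine()) != null) output.append(line).append('\n');
        }
        process.waitFor();
        return output.toString();
    }
}
```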
The Dynamic Processing Update exploits the one-to-many communication semantics similarly to the quorum-based actuation schema presented in Section 6. Controller nodes behave as software maintainers and suppliers by issuing command packets containing Java classes and/or Python scripts. Sensor nodes interested in receiving up-to-date code related to a specific application specify their interest by joining the related group; then, they receive the code on demand, for example when joining the group, and whenever the controller node distributes an updated version of the software.

8. Experimental Results

We have thoroughly tested our prototype to quantitatively evaluate effectiveness and responsiveness along two primary guidelines:
  • evaluation of the RON component while dispatching packets in a sparse Smart City scenario exploiting our LCC and RAMP-WIA. In this case, the main purpose is to quantitatively demonstrate that the proposed solution effectively dispatches packets at multi-hop distance also in a sparse wireless network composed of off-the-shelf devices;
  • evaluation of the quorum-based distributed actuation and resilience policy solution. In this case, the goal is to test the effectiveness of the proposed solution by showing how sensor nodes dynamically discover/join controller nodes and adopt the resilience policy in case a controller node is not available anymore.
In order to evaluate the RON component behavior, we have deployed our middleware in a multi-hop spontaneous environment with: Android smartphones with a 2.1 GHz Cortex-A53 processor and 3 GB of RAM, running Android 7.0; Raspberry Pi 3 Single Board Computers (SBC) with a 1.2 GHz ARM processor and 1 GB of RAM, running Linux Raspbian 8.0; laptops with a 2.70 GHz Intel Core i7-7500U CPU and 16 GB of RAM, running Ubuntu 17.04 Zesty Zapus; laptops with a 2.70 GHz Intel Core i7 CPU and 12 GB of RAM, running Ubuntu 16.04.02 Xenial Xerus. Let us stress that the main goal of this section is to verify and assess the effectiveness of the proposed approach by focusing on a few key aspects of the implemented RAMP-WIA middleware. In particular, the performance results reported below are based on experimentation over real and off-the-shelf devices, with the main purpose of testing the technical soundness and practical feasibility of the communication and coordination mechanisms used in RAMP-WIA. More extensive and comprehensive performance results, based on a city-wide large-scale testbed, are out of the scope of this paper and one of the most relevant foci of our future research work.
First of all, we have tested the two-hop environment in Figure 5, formed by a Bluetooth hop between an Android device (A1) and a laptop (L1) and an IEEE 802.11 hop between L1 and a Raspberry Pi 3 (RPi), performing the following steps:
Figure 5. Two-hop deployment scenario.
  • t0: A1 sends a packet to RPi but the Bluetooth link is not available, and the packet is persisted by RON;
  • t1: A1 (manual configuration) creates a Bluetooth link with L1;
  • t2: after a while RON on A1 retrieves the persisted packet and looks for the destination node RPi;
  • t3: RON on A1 does not find any available path towards RPi since the IEEE 802.11 link is not available. Therefore, it sends the packet to the neighbor node L1;
  • t4: L1 receives the packet and tries to dispatch it to RPi. However, since there is no path currently available, RON on L1 persists the packet;
  • t5: L1 creates an IEEE 802.11 link with RPi (manual configuration);
  • t6: after a while RON on L1 retrieves the persisted packet and looks for the destination node RPi;
  • t7: RON on L1 finds a path towards RPi via the IEEE 802.11 link and thus it sends the packet to RPi.
We have tested the sequence above 10 times, achieving an average end-to-end packet dispatching time of 115.8 s with a standard deviation of 4.8 s, demonstrating that the proposed solution can dispatch packets towards the destination in environments characterized by frequent network partitions, by fully exploiting any connectivity opportunity. Note that the previous values also include the time required to manually configure the network, which comprises the time between t0 and t1 and the time between t4 and t5, estimated at 55 s per hop. In other words, measured values include, on the one hand, the time to wait for manual network configuration and, on the other hand, the time required for RON to periodically retry packet transmissions (period of 10 s). However, it takes only 5.8 s (115.8 s − 55 s × 2) to dispatch packets when not considering the time required to manually reconfigure the network. In particular, when a node activates a new link with another RAMP node, it spends about 2.9 s to forward packets due to RAMP neighbor discovery and RON procedures (comprising periodic waiting time, packet restoring, and packet forwarding to new neighbors).
Then, we have extended the previous scenario with additional devices between A1 (the sender) and RPi (the receiver) and by making LCC work automatically with the lazy and hectic policies (see Figure 6). This scenario also includes an always-on Ethernet link, while IEEE 802.11 and Bluetooth links are periodically activated and deactivated by LCC (at startup, wireless links are not available). Furthermore, LCC on node A2 is in a client-only role, switching between the two IEEE 802.11 networks towards L2 and RPi.
Figure 6. Four-hop deployment scenario.
When adopting the lazy and hectic policies, the end-to-end packet dispatching takes on average 365.4 s and 158.9 s, respectively. Therefore, it is possible to see that the hectic policy significantly lowers the required time thanks to reduced HC, RS, and RW values. However, the required time is still relatively high due to two main issues. First of all, when the Bluetooth link between A1 and L2 is used, LCC requires about 93 s to actually activate IP connectivity (considering Bluetooth connection and the DHCP protocol), while RON takes about 10 s to become aware that there is no path towards RPi (7.5 s to retry 3 times to send the packet towards the previously known path and 2.5 s to unsuccessfully look for an alternative path). Secondly, when node A2 receives the packet from L2, it has just switched networks and thus has to wait for the following hotspot change period. It is worth noting that whenever LCC modifies network configurations it sends a message to the local RON, thus activating the latter as soon as additional connectivity opportunities arise and improving the efficiency of packet delivery. Considering only RON tasks and packet transmission, the delay of the first three hops is about 2.7 s per hop, of which 2.5 s to look for RPi (always unsuccessfully, since there is no multi-hop path towards RPi) and the rest to transmit the packet to the newly discovered one-hop neighbor (since RPi is not available). Instead, in the last hop A2 successfully looks for RPi (the final destination of the packet) and sends the packet in about 0.2 s.
To evaluate the quorum-based distributed actuation and resilience policy solution, we have developed a PresenceSensor Java application (based on the Sensor component) running on two webcam-enabled Raspberry Pis and a laptop (nodes RPix, RPiy, and L2, Figure 7), periodically acquiring images via webcam and sending alert messages only if the previous and current images differ by more than a threshold. In addition, a PresenceController application (based on the Controller component) runs on a laptop (node L1, Figure 7). Delving into finer details:
Figure 7. PresenceSensor application testing deployment.
  • PresenceController on node L1 registers itself as a controller node offering the “presence” application;
  • PresenceSensors on nodes RPix and RPiy look for a controller node offering the “presence” application, and then join it;
  • PresenceSensors on nodes RPix, RPiy, and L2 periodically take pictures and broadcast alert messages whenever images differ by more than 40% (default value);
  • PresenceController on node L1 periodically sends pre-command messages to sensor nodes in the “presence” group;
  • PresenceSensors on nodes RPix, RPiy, and L2 reply with a here-I-am message to node L1.
After a while, the PresenceController sends a command to sensor nodes of the “presence” group with the following values:
  • sensor node rate: 100%, that is, the command message is sent only if every sensor node is available;
  • current image difference value: 20%, that is, alert messages are sent if images differ by more than 20%;
  • resilience image difference value: 50%, that is, in case of controller unavailability, alert messages are sent whenever images differ by more than 50%.
To this purpose, the PresenceController on node L1 exploits the Controller component to send a “pre-command” message to sensor nodes:
  • if every sensor node replies with a “here-I-am message” (default wait timeout of 3 s), node L1 actually sends the command. When sensor nodes receive the command, Sensor components exploit the registered listener to forward the command to PresenceSensor instances;
  • otherwise, controller node L1 does not send the command, since the sensor node rate requirement is not fulfilled.
We have tested the above scenario in two cases: (i) the three sensor nodes are available; and (ii) only RPix is available. In the former case, after 900 ms on average, the controller node receives the here-I-am messages and sends back the actual command message. PresenceSensor components on sensor nodes receive the command and start sending alert images with image threshold at 20%. In the latter case, the controller node receives the here-I-am message only from node RPix and, after a waiting time of 2 s (configurable parameter), aborts the command procedure.
Then, we have removed the controller node L1 from the network. After 5 s, the Sensor components on sensor nodes perceive that there is no controller node anymore (since they do not receive periodic pre-command messages) and start a discovery procedure to look for a new one (since the three nodes are not Internet-enabled, they do not start the leader-election algorithm). After 20 s (configurable parameter), the Sensor components realize that there is no controller in the network and thus exploit the listener-based architecture to send an “activate resilience policy” event to PresenceSensor instances.

9. Conclusions

The paper presents a novel solution supporting the development and dynamic deployment of applications in infrastructure-less wireless environments, specifically considering sparse Smart City environments where mobile devices carried by users collaborate to support the remote monitoring and control of sensor nodes. In particular, the proposed RAMP-WIA middleware allows the dynamic management of mobile nodes, not only at the communication level, by creating new single-hop links and opportunistically dispatching packets at multi-hop distance, but also at the application level, by allowing software components on mobile nodes to be deployed and upgraded on-the-fly. The RAMP-WIA middleware also takes into consideration that the greatly dynamic nature of sparse Smart City scenarios may not allow commands to be dispatched to sensor nodes in a timely manner. For this reason, it dispatches commands only if a quorum of sensor nodes is currently reachable and supports a resiliency policy that allows sensor nodes detached from the network to properly reconfigure themselves while waiting to restore connectivity with a controller.
The encouraging results achieved based on the working implementation of the RAMP-WIA middleware are stimulating our ongoing research work. We are mainly working on the containerization of the implemented middleware as well as of specific sensor/controller components. In addition, we are investigating the adoption of Software Defined Networking (SDN) based approaches to improve the QoS by appropriately managing different traffic flows, by avoiding interferences, and by considering network capabilities/conditions combined with application-level requirements. Finally, we intend to perform wider and more in-depth performance analyses based on a real-world testbed with the scale expected for the targeted smart city execution environments, that is, including thousands of off-the-shelf devices.

Author Contributions

All the authors have contributed equally to the manuscript, with a strong focus of Paolo Bellavista, Carlo Giannelli, Cesare Stefanelli and Mauro Tortonesi on architectural and design choices, and a prevalent contribution of Stefano Lanzone and Giulio Riberto in terms of experimental evaluation.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Cisco Systems Inc. Cisco Global Cloud Index: Forecast and Methodology, 2015–2020. Available online: http://www.cisco.com/c/en/us/solutions/collateral/service-provider/global-cloud-index-gci/Cloud_Index_White_Paper.html (accessed on 29 August 2017).
  2. Bellavista, P.; Corradi, A.; Giannelli, C. The Real Ad-hoc Multi-hop Peer-to-peer (RAMP) Middleware: An Easy-to-use Support for Spontaneous Networking. In Proceedings of the 15th IEEE Symposium on Computers and Communications (ISCC’10), Riccione-Rimini, Italy, 22–25 June 2010. [Google Scholar]
  3. Bellavista, P.; Corradi, A.; Giannelli, C. Middleware for Differentiated Quality in Spontaneous Networks. IEEE Pervasive Comput. 2012, 11, 64–75. [Google Scholar]
  4. Rahmani, A.M.; Gia, T.N.; Negash, B.; Anzanpour, A.; Azimi, I.; Jiang, M.; Liljeberg, P. Exploiting smart e-Health gateways at the edge of healthcare Internet-of-Things: A fog computing approach. Future Gener. Comput. Syst. 2018, 78, 641–658. [Google Scholar] [CrossRef]
  5. Rashid, B.; Rehmani, M.H. Applications of wireless sensor networks for urban areas: A survey. J. Netw. Comput. Appl. 2016, 60, 192–219. [Google Scholar] [CrossRef]
  6. Saleem, Y.; Crespi, N.; Rehmani, M.H.; Copeland, R.; Hussein, D.; Bertin, E. Exploitation of social IoT for Recommendation Services. In Proceedings of the IEEE 3rd World Forum on Internet of Things (WF-IoT), Reston, VA, USA, 12–14 December 2016; pp. 359–364. [Google Scholar]
  7. Govoni, M.; Michaelis, J.; Morelli, A.; Suri, N.; Tortonesi, M. Enabling Social- and Location-Aware IoT Applications in Smart Cities. In Proceedings of the 2nd EAI International Conference on Smart Objects and Technologies for Social Good, Venice, Italy, 30 November–1 December 2016. [Google Scholar]
  8. Adeleke, J.A.; Moodley, D.; Rens, G.; Adewumi, A.O. Integrating Statistical Machine Learning in a Semantic Sensor Web for Proactive Monitoring and Control. Sensors 2017, 17, 807. [Google Scholar] [CrossRef] [PubMed]
  9. Xia, X.; Chen, Z.; Liu, H.; Wang, H.; Zeng, F. A Routing Protocol for Multisink Wireless Sensor Networks in Underground Coalmine Tunnels. Sensors 2016, 16, 2032. [Google Scholar] [CrossRef] [PubMed]
  10. Caballero-Águila, R.; Hermoso-Carazo, A.; Linares-Pérez, J. Optimal Fusion Estimation with Multi-Step Random Delays and Losses in Transmission. Sensors 2017, 17, 1151. [Google Scholar] [CrossRef] [PubMed]
  11. Guijarro, L.; Pla, V.; Vidal, J.R.; Naldi, M.; Mahmoodi, T. Wireless Sensor Network-Based Service Provisioning by a Brokering Platform. Sensors 2017, 17, 1115. [Google Scholar] [CrossRef] [PubMed]
  12. Wu, Z.; Xu, Y.; Yang, Y.; Zhang, C.; Zhu, X.; Ji, Y. Towards a Semantic Web of Things: A Hybrid Semantic Annotation, Extraction, and Reasoning Framework for Cyber-Physical System. Sensors 2017, 17, 403. [Google Scholar] [CrossRef] [PubMed]
  13. Choi, H.-S.; Rhee, W.-S. IoT-Based User-Driven Service Modeling Environment for a Smart Space Management System. Sensors 2014, 14, 22039–22064. [Google Scholar] [CrossRef] [PubMed]
  14. Floris, A.; Atzori, L. Managing the Quality of Experience in the Multimedia Internet of Things: A Layered-Based Approach. Sensors 2016, 16, 2057. [Google Scholar]
  15. Bisdikian, C.; Kaplan, L.; Srivastava, M. On the quality and value of information in sensor networks. ACM Trans. Sens. Netw. 2013, 9, 48. [Google Scholar] [CrossRef]
  16. Suri, N.; Benincasa, G.; Lenzi, R.; Tortonesi, M.; Stefanelli, C.; Sadler, L. Exploring Value of Information-based Approaches to Support Effective Communications in Tactical Networks. IEEE Commun. Mag. 2015, 53, 39–45. [Google Scholar] [CrossRef]
  17. Liu, J.; Hu, Y.; Wu, B.; Wang, Y.; Xie, F. A Hybrid Generalized Hidden Markov Model-Based Condition Monitoring Approach for Rolling Bearings. Sensors 2017, 17, 1143. [Google Scholar] [CrossRef] [PubMed]
  18. Guo, Y.; Chen, X.; Wang, S.; Sun, R.; Zhao, Z. Wind Turbine Diagnosis under Variable Speed Conditions Using a Single Sensor Based on the Synchrosqueezing Transform Method. Sensors 2017, 17, 1149. [Google Scholar]
  19. Nie, P.; Dong, T.; He, Y.; Qu, F. Detection of Soil Nitrogen Using Near Infrared Sensors Based on Soil Pretreatment and Algorithms. Sensors 2017, 17, 1102. [Google Scholar] [CrossRef] [PubMed]
  20. Bellavista, P.; Giannelli, C.; Zamagna, R. The PeRvasive Environment Sensing and Sharing Solution. Sustainability 2017, 9, 585. [Google Scholar] [CrossRef]
  21. Hu, Q.; Wang, S.; Bie, R.; Cheng, X. Social Welfare Control in Mobile Crowdsensing Using Zero-Determinant Strategy. Sensors 2017, 17, 1012. [Google Scholar] [CrossRef] [PubMed]
  22. Longo, F.; Bruneo, D.; Distefano, S.; Merlino, G.; Puliafito, A. Stack4Things: A sensing-and-actuation-as-a-service framework for IoT and cloud integration. Ann. Telecommun. 2017, 72, 53–70. [Google Scholar] [CrossRef]
  23. Manzalini, A.; Crespi, N. An edge operating system enabling anything-as-a-service. IEEE Commun. Mag. 2016, 54, 62–67. [Google Scholar] [CrossRef]
  24. Amadeo, M.; Molinaro, A. CHANET: A content-centric architecture for IEEE 802.11 MANETs. In Proceedings of the International Conference on the Network of the Future, Paris, France, 28–30 November 2011. [Google Scholar]
  25. Mendes, P. Combining Data Naming and Context Awareness for Pervasive Networks. J. Netw. Comput. Appl. 2015, 50, 114–125. [Google Scholar] [CrossRef]
  26. Delmastro, F.; Arnaboldi, V.; Conti, M. People-Centric Computing and Communications in Smart Cities. IEEE Commun. Mag. 2016, 54, 122–128. [Google Scholar] [CrossRef]
  27. Tian, X.; Zhu, Y.H.; Chi, K.; Liu, J.; Zhang, D. Reliable and Energy-Efficient Data Forwarding in Industrial Wireless Sensor Networks. IEEE Syst. J. 2017, 11, 1424–1434. [Google Scholar] [CrossRef]
  28. Chen, P.Y.; Cheng, S.M.; Hsu, H.Y. Analysis of Information Delivery Dynamics in Cognitive Sensor Networks Using Epidemic Models. IEEE Internet Things J. 2017. [Google Scholar] [CrossRef]
  29. Al-Turjman, F. Energy-Aware Data Delivery Framework for Safety-Oriented Mobile IoT. IEEE Sens. J. 2017. [Google Scholar] [CrossRef]
  30. Hamrioui, S.; Lorenz, P. Bio Inspired Routing Algorithm and Efficient Communications within IoT. IEEE Netw. 2017, 31, 74–79. [Google Scholar] [CrossRef]
  31. Pilloni, V.; Atzori, L.; Mallus, M. Dynamic Involvement of Real World Objects in the IoT: A Consensus-Based Cooperation Approach. Sensors 2017, 17, 484. [Google Scholar] [CrossRef] [PubMed]
  32. Cao, Y.; Yu, W.; Ren, W.; Chen, G. An Overview of Recent Progress in the Study of Distributed Multi-Agent Coordination. IEEE Trans. Ind. Inform. 2013, 9, 427–438. [Google Scholar] [CrossRef]
  33. Caraguay, Á.L.V.; Villalba, L.J.G. Monitoring and Discovery for Self-Organized Network Management in Virtualized and Software Defined Networks. Sensors 2017, 17, 731. [Google Scholar] [CrossRef] [PubMed]
  34. Sotres, P.; Santana, J.R.; Sánchez, L.; Lanza, J.; Muñoz, L. Practical Lessons From the Deployment and Management of a Smart City Internet-of-Things Infrastructure: The SmartSantander Testbed Case. IEEE Access 2017, 5, 14309–14322. [Google Scholar] [CrossRef]
  35. Trifunovic, S.; Distl, B.; Schatzmann, D.; Legendre, F. WiFi-Opp: Adhoc-less opportunistic networking. In Proceedings of the 6th ACM Workshop on Challenged Networks (CHANTS ‘11), Las Vegas, NV, USA, 19–23 September 2011; pp. 37–42. [Google Scholar]
  36. Bellavista, P.; Giannelli, C.; Iannario, L.; Goix, L.W.; Venezia, C. Peer-to-peer Content Sharing Based on Social Identities and Relationships. IEEE Internet Comput. 2014, 18, 55–63. [Google Scholar] [CrossRef]
  37. Trifunovic, S.; Kouyoumdjieva, S.T.; Distl, B.; Pajevic, L.; Karlsson, G.; Plattner, B. A Decade of Research in Opportunistic Networks: Challenges, Relevance, and Future Directions. IEEE Commun. Mag. 2017, 55, 168–173. [Google Scholar] [CrossRef]
  38. Spyropoulos, T.; Psounis, K.; Raghavendra, C.S. Spray and wait: An efficient routing scheme for intermittently connected mobile networks. In Proceedings of the ACM SIGCOMM 2005, Workshop on Delay-Tolerant Networking (WDTN ‘05), Philadelphia, PA, USA, 22–26 August 2005; pp. 252–259. [Google Scholar]
  39. Bellavista, P.; Corradi, A.; Giannelli, C. Differentiated Management Strategies for Multi-hop Multi-Path Heterogeneous Connectivity in Mobile Environments. IEEE Trans. Netw. Serv. Manag. 2011, 8, 190–204. [Google Scholar] [CrossRef]
  40. Bellavista, P.; Corradi, A.; Giannelli, C. Evaluating Filtering Strategies for Decentralized Handover Prediction in the Wireless Internet. In Proceedings of the 11th IEEE Symposium on Computers and Communications (ISCC’06), Pula-Cagliari, Italy, 26–29 June 2006. [Google Scholar]
  41. Kura. Available online: http://www.eclipse.org/kura/ (accessed on 29 August 2017).
  42. Docker. Available online: https://www.docker.com/ (accessed on 29 August 2017).
