Article

Wireless Sensor Network Operating System Design Rules Based on Real-World Deployment Survey

1 Institute of Electronics and Computer Science, 14 Dzerbenes Street, Riga, LV-1006, Latvia
2 Faculty of Computing, University of Latvia, 19 Raina Blvd., Riga, LV-1586, Latvia
* Author to whom correspondence should be addressed.
J. Sens. Actuator Netw. 2013, 2(3), 509-556; https://doi.org/10.3390/jsan2030509
Submission received: 1 June 2013 / Revised: 13 July 2013 / Accepted: 24 July 2013 / Published: 16 August 2013
(This article belongs to the Special Issue Advances in Sensor Network Operating Systems)

Abstract: Wireless sensor networks (WSNs) have been a widely researched field since the beginning of the 21st century. The field is already maturing, and TinyOS has established itself as the de facto standard WSN Operating System (OS). However, the WSN researcher community is still active in building more flexible, efficient and user-friendly WSN operating systems. Often, WSN OS design is based either on the practical requirements of a particular research project or group, or on theoretical assumptions widespread in the WSN community. The goal of this paper is to propose WSN OS design rules that are based on a thorough survey of 40 WSN deployments. The survey unveils trends of WSN applications and provides empirical substantiation to support widely usable and flexible WSN operating system design.

1. Introduction

Wireless sensor networks are a relatively new field in computer science and engineering. Although the first systems that could be called WSNs were used as early as 1951, during the Cold War [1], the real WSN revolution started at the beginning of the 21st century, with the rapid advancement of micro-electro-mechanical systems (MEMS). New hardware platforms [2,3], operating systems [4,5], middleware [6,7], networking [8], time synchronization [9], localization [10] and other protocols have been proposed by the research community. The gathered knowledge has been used in numerous deployments [11,12]. TinyOS [4] has been the de facto standard operating system in the community since 2002. However, as the survey will reveal, customized platforms and operating systems are often used, emphasizing the WSN users' continuing need for a flexible and easily usable OS.
The goal of this paper is to summarize WSN deployment surveys and analyze the collected data in the OS context, clarifying typical deployment parameters that are important in WSN OS design.

2. Methodology

Research papers presenting deployments are selected based on multiple criteria:
  • The years 2002 through 2011 have been reviewed uniformly, without emphasis on any particular year. Deployments before the year 2002 are not considered, as early sensor network research projects used custom hardware that differs significantly from modern embedded systems.
  • Articles have been searched using the Association for Computing Machinery (ACM) Digital Library (http://dl.acm.org/), the Institute of Electrical and Electronics Engineers (IEEE) Xplore Digital Library (http://ieeexplore.ieee.org/), Elsevier ScienceDirect and SpringerLink databases. Several articles have been found as external references from the aforementioned databases.
  • Deployments are selected to cover a wide WSN application range, including environmental monitoring, animal monitoring, human-centric applications, infrastructure monitoring, smart buildings and military applications.
WSN deployment surveys can be found in the literature [13,14,15,16,17,18]. This survey provides a more thorough and detailed review of the aspects important for WSN OS design. It also includes deployments in the prototyping phase, for two reasons. First, rapid prototyping and experimentation is a significant part of sensor network application development. Second, many research projects develop a prototype, while stable deployments are created later as commercial products, without publishing technical details at academic conferences and in journals. Therefore, software tools must support experimentation and prototyping of sensor networks, and the requirements of these development phases must be taken into account.
Multiple parameters are analyzed for each of the considered WSN deployments. For presentation simplification, these parameters are grouped, and each group is presented as a separate subsection.
For each deployment, the best possible parameter extraction was performed. Part of the information was explicitly stated in the analyzed papers and web pages, and part of it was acquired by making a rational guess or approximation. Such approximated values are marked with a question mark right after the value.

3. Survey Results

The following subsections describe parameter values extracted in the process of deployment article analysis. General deployment attributes are shown in Table 1. Each deployment has a codename assigned, which will be used to identify each article in the following tables. Design rules are listed in the text right after the conclusions substantiating them.
The extracted design rules should be considered as WSN deployment trends that suggest particular design choices to OS architects. There is no strict evidence that any particular deployment trend must be implemented in an operating system at all costs. These design rules sketch likely choices of WSN users that should be considered.
Table 1. Deployments: general information.
Nr | Codename | Year | Title | Class | Description
1 | Habitats [11] | 2002 | Wireless Sensor Networks for Habitat Monitoring | Habitat and weather monitoring | One of the first sensor network deployments, designed for bird nest monitoring on a remote island
2 | Minefield [12] | 2003 | Collaborative Networking Requirements for Unattended Ground Sensor Systems | Opposing force investigation | Unattended ground sensor system for self-healing minefield application
3 | Battlefield [19] | 2004 | Energy-Efficient Surveillance System Using Wireless Sensor Networks | Battlefield surveillance | System for tracking the position of moving targets in an energy-efficient and stealthy manner
4 | Line in the sand [20] | 2004 | A Line in the Sand: A Wireless Sensor Network for Target Detection, Classification, and Tracking | Battlefield surveillance | System for intrusion detection, target classification and tracking
5 | Counter-sniper [21] | 2004 | Sensor Network-Based Countersniper System | Opposing force investigation | An ad hoc wireless sensor network-based system that detects and accurately locates shooters, even in urban environments
6 | Electro-shepherd [22] | 2004 | Electronic Shepherd—A Low-Cost, Low-Bandwidth, Wireless Network System | Domestic animal monitoring and control | Experiments with sheep GPS and sensor tracking
7 | Virtual fences [23] | 2004 | Virtual Fences for Controlling Cows | Domestic animal monitoring and control | Experiments with a virtual fence for domestic animal control
8 | Oil tanker [24] | 2005 | Design and Deployment of Industrial Sensor Networks: Experiences from a Semiconductor Plant and the North Sea | Industrial equipment monitoring and control | Sensor network for industrial machinery monitoring, using Intel motes with Bluetooth and high-frequency sampling
9 | Enemy vehicles [25] | 2005 | Design and Implementation of a Sensor Network System for Vehicle Tracking and Autonomous Interception | Opposing force investigation | A networked system of distributed sensor nodes that detects an evader and aids a pursuer in capturing the evader
10 | Trove game [26] | 2005 | Trove: A Physical Game Running on an Ad hoc Wireless Sensor Network | Child education and sensor games | Physical multiplayer real-time game, using collaborative sensor nodes
11 | Elder RFID [27] | 2005 | A Prototype on RFID and Sensor Networks for Elder Healthcare: Progress Report | Medication intake accounting | In-home elder healthcare system integrating sensor networks and RFID technologies for medication intake monitoring
12 | Murphy potatoes [28] | 2006 | Murphy Loves Potatoes: Experiences from a Pilot Sensor Network Deployment in Precision Agriculture | Precision agriculture | A rather unsuccessful sensor network pilot deployment for precision agriculture, demonstrating valuable lessons learned
13 | Firewxnet [29] | 2006 | FireWxNet: A Multi-Tiered Portable Wireless System for Monitoring Weather Conditions in Wildland Fire Environments | Forest fire detection | A multi-tier WSN for safe and easy monitoring of fire and weather conditions over a wide range of locations and elevations within forest fires
14 | AlarmNet [30] | 2006 | ALARM-NET: Wireless Sensor Networks for Assisted-Living and Residential Monitoring | Human health telemonitoring | Wireless sensor network for assisted-living and residential monitoring, integrating environmental and physiological sensors and providing end-to-end secure communication and sensitive medical data protection
15 | Ecuador Volcano [31] | 2006 | Fidelity and Yield in a Volcano Monitoring Sensor Network | Volcano monitoring | Sensor network for volcano seismic activity monitoring, using high-frequency sampling and distributed event detection
16 | Pet game [32] | 2006 | Wireless Sensor Network-Based Mobile Pet Game | Child education and sensor games | Augmenting a mobile pet game with physical sensing capabilities: sensor nodes act as eyes, ears and skin
17 | Plug [33] | 2007 | A Platform for Ubiquitous Sensor Deployment in Occupational and Domestic Environments | Smart energy usage | Wireless sensor network for human activity logging in offices; sensor nodes implemented as power strips
18 | B-Live [34] | 2007 | B-Live—A Home Automation System for Disabled and Elderly People | Home/office automation | Home automation for disabled and elderly people integrating heterogeneous wired and wireless sensor and actuator modules
19 | Biomotion [35] | 2007 | A Compact, High-Speed, Wearable Sensor Network for Biomotion Capture and Interactive Media | Smart user interfaces and art | Wireless sensor platform designed for processing multipoint human motion with low latency and high resolution. Example application: interactive dance, where movements of multiple dancers are translated into real-time audio or video
20 | AID-N [36] | 2007 | The Advanced Health and Disaster Aid Network: A Light-Weight Wireless Medical System for Triage | Human health telemonitoring | Lightweight medical system to help emergency service providers in mass casualty incidents
21 | Firefighting [37] | 2007 | A Wireless Sensor Network and Incident Command Interface for Urban Firefighting | Human-centric applications | Wireless sensor network and incident command interface for firefighting and emergency response, especially in large and complex buildings. During a fire accident, fire spread is tracked, and firefighter position and health status are monitored
22 | Rehabil [38] | 2007 | Ubiquitous Rehabilitation Center: An Implementation of a Wireless Sensor Network-Based Rehabilitation Management System | Human indoor tracking | ZigBee sensor network-based ubiquitous rehabilitation center for patient and rehabilitation machine monitoring
23 | CargoNet [39] | 2007 | CargoNet: A Low-Cost Micropower Sensor Node Exploiting Quasi-Passive Wake Up for Adaptive Asynchronous Monitoring of Exceptional Events | Goods and daily object tracking | System of low-cost, micropower active sensor tags for environmental monitoring at the crate and case level for supply-chain management and asset security
24 | Fence monitor [40] | 2007 | Fence Monitoring—Experimental Evaluation of a Use Case for Wireless Sensor Networks | Security systems | Sensor nodes attached to a fence for collaborative intrusion detection
25 | BikeNet [41] | 2007 | The BikeNet Mobile Sensing System for Cyclist Experience Mapping | City environment monitoring | Extensible mobile sensing system for cyclist experience (personal, bicycle and environmental sensing) mapping, leveraging opportunistic networking principles
26 | BriMon [42] | 2008 | BriMon: A Sensor Network System for Railway Bridge Monitoring | Bridge monitoring | Delay-tolerant network for bridge vibration monitoring using accelerometers. A gateway mote collects data and forwards it opportunistically to a mobile base station attached to a passing train
27 | IP net [43] | 2008 | Experiences from Two Sensor Network Deployments—Self-Monitoring and Self-Configuration Keys to Success | Battlefield surveillance | Indoor and outdoor surveillance network for detecting troop movement
28 | Smart home [44] | 2008 | The Design and Implementation of Smart Sensor-Based Home Networks | Home/office automation | Wireless sensor network deployed in a miniature model house, which controls different household equipment: window curtains, gas valves, electric outlets, TV, refrigerator and door locks
29 | SVATS [45] | 2008 | SVATS: A Sensor-Network-Based Vehicle Anti-Theft System | Anti-theft systems | Low-cost, reliable, sensor-network-based, distributed vehicle anti-theft system with a low false-alarm rate
30 | Hitchhiker [46] | 2008 | The Hitchhiker's Guide to Successful Wireless Sensor Network Deployments | Flood and glacier detection | Multiple real-world sensor network deployments performed, including glacier detection; experience and suggestions reported
31 | Daily morning [47] | 2008 | Detection of Early Morning Daily Activities with Static Home and Wearable Wireless Sensors | Daily activity recognition | Flexible, cost-effective, wireless in-home activity monitoring system integrating static and mobile body sensors for assisting patients with cognitive impairments
32 | Heritage [48] | 2009 | Monitoring Heritage Buildings with Wireless Sensor Networks: The Torre Aquila Deployment | Heritage building and site monitoring | Three different motes (sensing temperature, vibrations and deformation) deployed in a historical tower to monitor its health and identify potential damage risks
33 | AC meter [49] | 2009 | Design and Implementation of a High-Fidelity AC Metering Network | Smart energy usage | AC outlet power consumption measurement devices, which are powered from the same AC line, but communicate wirelessly to an IPv6 router
34 | Coal mine [50] | 2009 | Underground Coal Mine Monitoring with Wireless Sensor Networks | Coal mine monitoring | Self-adaptive coal mine wireless sensor network (WSN) system for rapid detection of structure variations caused by underground collapses
35 | ITS [51] | 2009 | Wireless Sensor Networks for Intelligent Transportation Systems | Vehicle tracking and traffic monitoring | Traffic monitoring system implemented through WSN technology within the SAFESPOT Project
36 | Underwater [52] | 2010 | Adaptive Decentralized Control of Underwater Sensor Networks for Modeling Underwater Phenomena | Underwater networks | Measurement of the dynamics of underwater bodies and their impact on the global environment, using sensor networks with nodes adapting their depth dynamically
37 | PipeProbe [53] | 2010 | PipeProbe: A Mobile Sensor Droplet for Mapping Hidden Pipeline | Power line and water pipe monitoring | Mobile sensor system for determining the spatial topology of hidden water pipelines behind walls
38 | Badgers [54] | 2010 | Evolution and Sustainability of a Wildlife Monitoring Sensor Network | Wild animal monitoring | Badger monitoring in a forest
39 | Helens volcano [55] | 2011 | Real-World Sensor Network for Long-Term Volcano Monitoring: Design and Findings | Volcano monitoring | Robust and fault-tolerant WSN for active volcano monitoring
40 | Tunnels [56] | 2011 | Is There Light at the Ends of the Tunnel? Wireless Sensor Networks for Adaptive Lighting in Road Tunnels | Tunnel monitoring | Closed-loop wireless sensor and actuator system for adaptive lighting control in operational tunnels

3.1. Deployment State and Attributes

Table 2 describes the deployment state and used sensor node (mote) characteristics. SVATS, sensor-network-based vehicle anti-theft system.
Table 2. Deployments: deployment state and attributes.
Nr | Codename | Deployment state | Mote count | Heterog. motes | Base stations | Base station hardware
1 | Habitats | pilot | 32 | n | 1 | Mote + PC with satellite link to Internet
2 | Minefield | pilot | 20 | n | 0 | All motes capable of connecting to a PC via Ethernet
3 | Battlefield | prototype | 70 | y (soft, by role) | 1 | Mote + PC
4 | Line in the sand | pilot | 90 | n | 1 | Root connects to long-range radio relay
5 | Counter-sniper | prototype | 56 | n | 1 | Mote + PC
6 | Electro-shepherd | pilot | 180 | y | 1+ | Mobile mote
7 | Virtual fences | prototype | 8 | n | 1 | Laptop
8 | Oil tanker | pilot | 26 | n | 4 | Stargate gateway + Intel mote, wall powered
9 | Enemy vehicles | pilot | 100 | y | 1 | Mobile power motes - laptop on wheels
10 | Trove game | pilot | 10 | n | 1 | Mote + PC
11 | Elder RFID | prototype | 3 | n | 1 | Mote + PC
12 | Murphy potatoes | pilot | 109 | n | 1 | Stargate gateway + TNOde, solar panel
13 | Firewxnet | pilot | 13 | n | 1 Base Station (BS) + 5 gateways | Gateway: Soekris net4801 with Gentoo Linux and Trango Access5830 long-range 10 Mbps wireless; BS: PC with satellite link 512/128 kbps
14 | AlarmNet | prototype | 15 | y | varies | Stargate gateway with MicaZ, wall powered
15 | Ecuador Volcano | pilot | 19 | y | 1 | Mote + PC
16 | Pet game | prototype | ? | n | 1+ | Mote + MIB510 board + PC
17 | Plug | pilot | 35 | n | 1 | Mote + PC
18 | B-Live | pilot | 10+ | y | 1 | B-Live modules connected to PC, wheelchair computer, etc.
19 | Biomotion | pilot | 25 | n | 1 | Mote + PC
20 | AID-N | pilot | 10 | y | 1+ | Mote + PC
21 | Firefighting | prototype | 20 | y | 1+ | ?
22 | Rehabil | prototype | ? | y | 1 | Mote + PC
23 | CargoNet | pilot | <10 | n | 1+ | Mote + PC?
24 | Fence monitor | prototype | 10 | n | 1 | Mote + PC?
25 | BikeNet | prototype | 5 | n | 7+ | 802.15.4/Bluetooth bridge + Nokia N80 OR mote + Aruba AP-70 embedded PC
26 | BriMon | prototype | 12 | n | 1 | Mobile train TMote, static bridge TMotes
27 | IP net | pilot | 25 | n | 1 | Mote + PC?
28 | Smart home | prototype | 12 | y | 1 | Embedded PC with touchscreen, internet, wall powered
29 | SVATS | prototype | 6 | n | 1 | ?
30 | Hitchhiker | pilot | ? | 16 | 1 | ?
31 | Daily morning | prototype | 1 | n | 1 | Mote + MIB510 board + PC
32 | Heritage | stable | 17 | y | 1 | 3Mate mote + Gumstix embedded PC with SD card and WiFi
33 | AC meter | pilot | 49 | n | 2+ | Meraki Mini and the OpenMesh Mini-Router wired together with radio
34 | Coal mine | prototype | 27 | n | 1 | ?
35 | ITS | prototype | 8 | n | 1 | ?
36 | Underwater | prototype | 4 | n | 0 | -
37 | PipeProbe | prototype | 1 | n | 1 | Mote + PC
38 | Badgers | stable | 74 mobile + 26? static | y | 1+ | Mote
39 | Helens volcano | pilot | 13 | n | 1 | ?
40 | Tunnels | pilot | 40 | n | 2 | Mote + Gumstix Verdex Pro
Deployment state represents the maturity of the application: whether it is a prototype, a pilot test-run in a real environment, or a system that has been running in a stable state for a while. As can be seen, only a few deployments are in a stable state; the majority are prototypes and pilot studies. Therefore, it is important to support fast prototyping and effective debugging mechanisms for these phases.
Despite theoretical assumptions about huge networks consisting of thousands of nodes, only a few deployments contain more than 100 nodes. Eighty percent of the listed deployments contain 50 or fewer nodes, and 34% contain ten or fewer (Figure 1). The most active period of large-scale WSN deployment seems to have been the years 2004–2006, with networks consisting of 100 and more nodes (Figure 2).
Figure 1. Distribution function of mote count in surveyed deployments—Eighty percent of deployments contain less than 50 motes; 50%: less than 20 motes; and 34%: ten or less.
Design rule 1:
The communication stack included in the default OS libraries should concentrate on usability, simplicity and resource efficiency, rather than providing complex and resource-intensive, scalable protocols for thousands of nodes.
Another theoretical assumption, which is only partially true, is a heterogeneous network. The majority of deployments (70%) are built on homogeneous networks with identical nodes. However, a significant share of deployments contains heterogeneous nodes, and that must be taken into account in remote reprogramming design. Remote reprogramming is essential, as manually programming even more than five nodes is very time-intensive and difficult. Additionally, nodes often need many reprogramming iterations after the initial setup at the deployment site. Users must be able to select subsets of network nodes to reprogram. Different node hardware must be supported in a single network.
Figure 2. Maximum mote count in surveyed deployments, in each year— peak size in the years 2004–2006; over 100 motes used.
Although remote reprogramming is a low-level function, it can be considered a debug-phase feature, and external tools, such as QDiff [57], can be used to offload this responsibility from the operating system.
Almost all (95%) networks have a sink node or base station, collecting the data. A significant part of deployments use multiple sinks.
Design rule 2
Sink-oriented protocols must be provided and, optionally, multiple sink support.
Almost half of deployments use a regular mote connected to a PC (usually a laptop) as a base station hardware solution.
Design rule 3
The OS toolset must include a default solution for base station application, which is easily extensible to user specific needs.
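Design rule 3 can be satisfied with very little code. The following C sketch shows a minimal default base station application: a mote that forwards every received radio packet over the serial line to the attached PC. The radio_receive() and uart_write() calls are hypothetical placeholders for the corresponding OS driver functions, not an existing API.

```c
#include <stdint.h>
#include <stddef.h>

/* Hypothetical driver hooks; every WSN OS provides equivalents. */
extern int  radio_receive(uint8_t *buf, size_t maxlen);  /* blocks; returns length or -1 */
extern void uart_write(const uint8_t *buf, size_t len);  /* serial link to the attached PC */

/* Base station main loop: forward every radio packet to the PC,
 * prefixed with a length byte so the PC side can re-frame the stream. */
void base_station_run(void)
{
    uint8_t packet[128];
    for (;;) {
        int len = radio_receive(packet, sizeof(packet));
        if (len <= 0)
            continue;                  /* reception error: drop and keep listening */
        uint8_t hdr = (uint8_t)len;
        uart_write(&hdr, 1);           /* length prefix */
        uart_write(packet, (size_t)len);
    }
}
```

Because the application is so small, extending it to user-specific needs (filtering, timestamping, alternative PC links) amounts to editing a single loop.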

3.2. Sensing

Table 3 lists the sensing subsystem and sampling characteristics used in deployments.
Table 3. Deployments: sensing.
Nr | Codename | Sensors | Sampling rate, Hz | GPS used
1 | Habitats | temperature, light, barometric pressure, humidity and passive infrared | 0.0166667 | n
2 | Minefield | sound, magnetometer, accelerometers, voltage and imaging | ? | y
3 | Battlefield | magnetometer, acoustic and light | 10 | n
4 | Line in the sand | magnetometer and radar | ? | n
5 | Counter-sniper | sound | 1,000,000 | n
6 | Electro-shepherd | temperature | ? | y
7 | Virtual fences | - | ? | y
8 | Oil tanker | accelerometer | 19,200 | n
9 | Enemy vehicles | magnetometer and ultrasound transceiver | ? | y, on powered nodes
10 | Trove game | accelerometers and light | ? | n
11 | Elder RFID | RFID reader | 1 | n
12 | Murphy potatoes | temperature and humidity | 0.0166667 | n
13 | Firewxnet | temperature, humidity, wind speed and direction | 0.8333333 | n
14 | AlarmNet | motion, blood pressure, body scale, dust, temperature and light | ≤1 | n
15 | Ecuador Volcano | seismometers and acoustic | 100 | y, on BS
16 | Pet game | temperature, light and sound | configurable | n
17 | Plug | sound, light, electric current, voltage, vibration, motion and temperature | 8,000 | n
18 | B-Live | light, electric current and switches | ? | n
19 | Biomotion | accelerometer, gyroscope and capacitive distance sensor | 100 | n
20 | AID-N | pulse oximeter, Electrocardiogram (ECG), blood pressure and heart beat | depends on queries | n
21 | Firefighting | temperature | ? | n
22 | Rehabil | temperature, humidity and light | ? | n
23 | CargoNet | shock, light, magnetic switch, sound, tilt, temperature and humidity | 0.0166667 | n
24 | Fence monitor | accelerometer | 10 | n
25 | BikeNet | magnetometer, pedal speed, inclinometer, lateral tilt, Galvanic Skin Response (GSR) stress, speedometer, CO2, sound and GPS | configurable | y
26 | BriMon | accelerometer | 0.6666667 | n
27 | IP net | temperature, luminosity, vibration, microphone and movement detector | ? | n
28 | Smart home | light, temperature, humidity, air pressure, acceleration, gas leak and motion | ? | n
29 | SVATS | radio Received Signal Strength Indicator (RSSI) | ? | n
30 | Hitchhiker | air temperature and humidity, surface temperature, solar radiation, wind speed and direction, soil water content and suction and precipitation | ? | n
31 | Daily morning | accelerometer | 50 | n
32 | Heritage | fiber optic deformation, accelerometers and analog temperature | 200 | n
33 | AC meter | current | ≤14,000 | n
34 | Coal mine | - (sense radio neighbors only) | - | n
35 | ITS | anisotropic magneto-resistive and pyroelectric | varies | n
36 | Underwater | pressure, temperature, CDOM, salinity, dissolved oxygen and cameras; motor actuator | ≤1 | n
37 | PipeProbe | gyroscope and pressure | 33 | n
38 | Badgers | humidity and temperature | ? | n
39 | Helens volcano | geophone and accelerometer | 100,000? | y
40 | Tunnels | light, temperature and voltage | 0.0333333 | n
The most popular sensors are temperature, light and accelerometer sensors (Figure 3).
Design rule 4
The WSN operating system should include an Application Programming Interface (API) for temperature, light and acceleration sensors in the default library set.
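A possible shape for such a default sensor API is sketched below in C; the names and unit conventions are illustrative assumptions, not taken from any existing OS.

```c
#include <stdint.h>
#include <stdbool.h>

/* Illustrative default sensor API. The driver converts raw ADC readings to
 * physical units, so application code stays portable across mote hardware. */
typedef enum {
    SENSOR_TEMPERATURE,    /* centi-degrees Celsius */
    SENSOR_LIGHT,          /* lux                   */
    SENSOR_ACCEL_X,        /* milli-g               */
    SENSOR_ACCEL_Y,
    SENSOR_ACCEL_Z
} sensor_id_t;

extern bool sensor_read(sensor_id_t id, int32_t *value);  /* single conversion */

/* Example use: react when the temperature exceeds 30.00 degrees C. */
void check_temperature(void)
{
    int32_t t;
    if (sensor_read(SENSOR_TEMPERATURE, &t) && t > 3000) {
        /* report an event to the sink */
    }
}
```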
Figure 3. Sensors used in deployments—Temperature, light and acceleration sensors are the most popular: each of them used in more than 20% of analyzed deployments.
When considering the sensor sampling rate, a pattern can be observed (Figure 4). Most deployments are low sampling rate examples, where the mote has a very low duty cycle and the sampling rate is below 1 Hz. Other, less popular application classes use sampling in the ranges of 10–100 Hz and 10–100 kHz. The former class involves accelerometer data processing, while the latter mainly represents audio and high-sensitivity vibration processing. A significant part of applications have a variable sampling rate, configurable at run time.
Figure 4. Sensor sampling rate used in deployments—Low duty cycle applications with sampling rate below 1 Hz are the most popular; however, high-frequency sampling is also used; the ranges 10–100 Hz and 10–100 kHz are popular.
Design rule 5
The operating system must set effective low-frequency, low duty-cycle sampling as the first priority. High performance for sophisticated audio signal processing and other high-frequency sampling applications is secondary, yet required.
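A sketch of the dominant usage pattern, low-frequency sampling with deep sleep in between, is given below; sensor_read_temperature(), send_to_sink() and sleep_seconds() are hypothetical primitives, and the 60-second period mirrors the once-per-minute sampling used, for example, in the Habitats deployment.

```c
#include <stdint.h>

extern int32_t sensor_read_temperature(void);           /* hypothetical driver call   */
extern void    send_to_sink(const void *data, int len);
extern void    sleep_seconds(uint32_t s);               /* MCU and radio in low power */

/* Low duty-cycle sampling: the mote is awake for milliseconds per cycle
 * and asleep for the rest of the minute, giving a duty cycle below 1%. */
void sampling_task(void)
{
    for (;;) {
        int32_t t = sensor_read_temperature();
        send_to_sink(&t, sizeof(t));
        sleep_seconds(60);
    }
}
```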
GPS localization is a widely used technology in general; however, it is not very popular in sensor networks, mainly due to unreasonably high power consumption. It is used in less than 18% of deployments. A GPS module should not be considered a default component.

3.3. Lifetime and Energy

Table 4 describes energy usage and the target lifetime of the analyzed deployments.
Table 4. Deployments: lifetime and energy.
Nr | Codename | Lifetime, days | Energy source | Sleep time, sec | Duty cycle, % | Powered motes present
1 | Habitats | 270 | battery | 60 | ? | yes, gateways
2 | Minefield | ? | battery | ? | ? | yes, all
3 | Battlefield | 5–50 | battery | varies | varies | yes, base station
4 | Line in the sand | ? | battery and solar | ? | ? | yes, root
5 | Counter-sniper | ? | battery | 0 | 100 | no
6 | Electro-shepherd | 50 | battery | ? | <1 | no
7 | Virtual fences | 2 h 40 min | battery | 0 | 100 | no
8 | Oil tanker | 82 | battery | 64,800 | <1 | yes, gateways
9 | Enemy vehicles | ? | battery | ? | ? | yes, mobile nodes
10 | Trove game | ? | battery | ? | ? | yes, base station
11 | Elder RFID | ? | battery | 0? | 100? | yes, base station
12 | Murphy potatoes | 21 | battery | 60 | 11 | yes, base station
13 | Firewxnet | 21 | battery | 840 | 6.67 | yes, gateways
14 | AlarmNet | ? | battery | ? | configurable | yes, base stations
15 | Ecuador Volcano | 19 | battery | 0 | 100 | yes, base station
16 | Pet game | ? | battery | ? | ? | yes, base station
17 | Plug | - | power-net | 0 | 100 | yes, all
18 | B-Live | - | battery | 0 | 100 | yes, all
19 | Biomotion | 5 h | battery | 0 | 100 | yes, base stations
20 | AID-N | 6 | battery | 0 | 100 | yes, base station
21 | Firefighting | 4+ | battery | 0 | 100 | yes, infrastructure motes
22 | Rehabil | ? | battery | ? | ? | yes, base station
23 | CargoNet | 1825 | battery | varies | 0.001 | no
24 | Fence monitor | ? | battery | 1 | ? | yes, base station
25 | BikeNet | ? | battery | ? | ? | yes, gateways
26 | BriMon | 625 | battery | 0.5 | 5 | no
27 | IP net | ? | battery | ? | 20 | yes, base station
28 | Smart home | ? | battery | ? | ? | yes
29 | SVATS | unlimited | power-net | not implemented | - | yes, all
30 | Hitchhiker | 60 | battery and solar | 5 | 10 | yes, base station
31 | Daily morning | ? | battery | 0? | 100? | yes, base station
32 | Heritage | 525 | battery | 0.57 | 0.05 | yes, base station
33 | AC meter | ? | power-net | ? | ? | yes, gateways
34 | Coal mine | ? | battery | ? | ? | yes, base station?
35 | ITS | ? | power-net? | 0? | 100? | yes, all
36 | Underwater | ? | battery | ? | ? | no
37 | PipeProbe | 4 h | battery | 0 | 100 | yes, base station
38 | Badgers | 7 | battery | ? | 0.05 | no
39 | Helens volcano | 400 | battery | 0? | 100? | yes, all
40 | Tunnels | 480 | battery | 0.25 | ? | yes, base stations
Target lifetime varies greatly among applications, from several hours to several years. Long-living deployments use a duty cycle below 1%, meaning that sleep mode is used more than 99% of the time. Both very short and very long sleep periods are used: from 250 milliseconds up to 24 hours.
Operating systems should provide effective routines for duty-cycling and have low computational overhead.
A significant part of deployments (more than 30%), especially in the prototyping phase, do not concentrate on energy efficiency and use a 100% duty cycle.
Design rule 6
The option, “automatically activate sleep mode whenever possible”, would decrease the complexity and increase the lifetime for deployments in the prototyping phase and also help beginner sensor network programmers.
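Inside a cooperative scheduler, such an option can amount to a few lines in the idle loop. A minimal sketch, assuming hypothetical scheduler and power-management primitives:

```c
#include <stdbool.h>

extern bool task_queue_empty(void);           /* hypothetical scheduler internals     */
extern void run_next_task(void);
extern long time_until_next_timer_ms(void);   /* delay until the next scheduled event */
extern void mcu_sleep_ms(long ms);            /* deep sleep, woken by timer interrupt */

#ifndef AUTO_SLEEP
#define AUTO_SLEEP 1   /* "automatically activate sleep mode whenever possible" */
#endif

void scheduler_loop(void)
{
    for (;;) {
        while (!task_queue_empty())
            run_next_task();
        if (AUTO_SLEEP) {
            long ms = time_until_next_timer_ms();
            if (ms > 0)
                mcu_sleep_ms(ms);             /* sleep instead of busy-waiting */
        }
    }
}
```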
Although energy harvesting is envisioned as the only way to build sustainable sensing systems [58], power sources other than batteries or static power networks are rarely used (5% of the analyzed deployments). Harvesting module support at the operating system level has, therefore, not been an essential part of deployments to date. However, harvesting popularity may increase in future deployments, and support for it at the OS level could be a valuable research direction.
More than 80% of deployments have powered motes present in the network: at least one node has an increased energy budget. Usually, these motes are capable of running at 100% duty cycle, without sleep mode activation.
Design rule 7
Powered mote availability should be considered when designing a default networking protocol library.

3.4. Sensor Mote

Table 5 lists used motes, radio (or other communication media) chips and protocols.
Table 5. Deployments: used motes and radio chips.
Nr | Codename | Mote | Ready or custom | Mote motivation | Radio chip | Radio protocol
1 | Habitats | Mica | adapted | custom Mica weather board and packaging | RF Monolithics TR1001 | ?
2 | Minefield | WINS NG 2.0 [59] | custom | need for high performance | ? | ?
3 | Battlefield | Mica2 | adapted | energy and bandwidth efficient; simple and flexible | Chipcon CC1000 | SmartRF
4 | Line in the sand | Mica2 | adapted | ? | Chipcon CC1000 | SmartRF
5 | Counter-sniper | Mica2 | adapted | ? | Chipcon CC1000 | SmartRF
6 | Electro-shepherd | Custom + Active RFID tags | custom | packaging adapted to sheep habits | unnamed Ultra High Frequency (UHF) transceiver | ?
7 | Virtual fences | Zaurus PDA | ready | off-the-shelf | unnamed WiFi | 802.11
8 | Oil tanker | Intel Mote | adapted | ? | Zeevo TC2001P | Bluetooth 1.1
9 | Enemy vehicles | Mica2Dot | adapted | ? | Chipcon CC1000 | SmartRF
10 | Trove game | Mica2 | ready | off-the-shelf | Chipcon CC1000 | SmartRF
11 | Elder RFID | Mica2 | adapted | off-the-shelf; RFID reader added | Chipcon CC1000 + RFID | SmartRF + RFID
12 | Murphy potatoes | TNOde, Mica2-like | custom | packaging + sensing | Chipcon CC1000 | SmartRF
13 | Firewxnet | Mica2 | adapted | Mantis OS [60] support, AA batteries, extensible | Chipcon CC1000 | SmartRF
14 | AlarmNet | Mica2 + TMote Sky | adapted | off-the-shelf; extensible | Chipcon CC1000 | SmartRF
15 | Ecuador Volcano | Tmote Sky | adapted | off-the-shelf | Chipcon CC2420 | 802.15.4
16 | Pet game | MicaZ | ready | off-the-shelf | Chipcon CC2420 | 802.15.4
17 | Plug | Plug Mote | custom | specific sensing + packaging | Chipcon CC2500 | ?
18 | B-Live | B-Live module | custom | custom modular system | ? | ?
19 | Biomotion | custom | custom | size constraints | Nordic nRF2401A | -
20 | AID-N | TMote Sky + MicaZ | adapted | off-the-shelf; extensible | Chipcon CC2420 | 802.15.4
21 | Firefighting | TMote Sky | adapted | off-the-shelf; easy prototyping | Chipcon CC2420 | 802.15.4
22 | Rehabil | Maxfor TIP 7xxCM: TelosB-compatible | ready | off-the-shelf | Chipcon CC2420 | 802.15.4
23 | CargoNet | CargoNet mote | custom | low power; low cost components | Chipcon CC2500 | -
24 | Fence monitor | Scatterweb ESB [61] | ready | off-the-shelf | Chipcon CC1020 | ?
25 | BikeNet | TMote Invent | adapted | off-the-shelf mote providing required connectivity | Chipcon CC2420 | 802.15.4
26 | BriMon | Tmote Sky | adapted | off-the-shelf | Chipcon CC2420 | 802.15.4
27 | IP net | Scatterweb ESB | adapted | necessary sensors on board | TR1001 | ?
28 | Smart home | ZigbeX | custom | specific sensor, size and power constraints | Chipcon CC2420 | 802.15.4
29 | SVATS | Mica2 | ready | off-the-shelf | Chipcon CC1000 | SmartRF
30 | Hitchhiker | TinyNode | adapted | long-range communication | Semtech XE1205 | ?
31 | Daily morning | MicaZ | ready | off-the-shelf | Chipcon CC2420 | 802.15.4
32 | Heritage | 3Mate! | adapted | TinyOS supported mote with custom sensors | Chipcon CC2420 | 802.15.4
33 | AC meter | ACme (Epic core) | adapted | modular; convenient prototyping | Chipcon CC2420 | 802.15.4
34 | Coal mine | Mica2 | ready | off-the-shelf | Chipcon CC1000 | SmartRF
35 | ITS | Custom | custom | specific sensing needs | Chipcon CC2420 | 802.15.4
36 | Underwater | AquaNode | custom | specific packaging, sensor and actuator needs | custom | -
37 | PipeProbe | Eco mote | adapted | size and energy constraints | Nordic nRF24E1 | ?
38 | Badgers | V1: Tmote Sky + external board; V2: custom | v1: adapted; v2: custom | v1: off-the-shelf; v2: optimizations | Atmel AT86RF230 | 802.15.4
39 | Helens volcano | custom | custom | specific computational, sensing and packaging needs | Chipcon CC2420 | 802.15.4
40 | Tunnels | TRITon mote [62]: TelosB-like | custom | reuse and custom packaging | Chipcon CC2420 | 802.15.4
Mica2 [64] and MicaZ [3] platforms were very popular in early deployments. TelosB-compatible platforms (TMote Sky and others) [2,65] have been the most popular in recent years.
Design rule 8
TelosB platform support is essential.
MicaZ support is optional, yet suggested, as sensor network research laboratories might use previously obtained MicaZ motes, especially for student projects.
Almost half of the deployments (47%) use adapted versions of off-the-shelf motes, adding customized sensors, actuators and packaging (Figure 5). Almost one third (32%) use custom motes, combining different microchips. Often, these platforms are either compatible with or similar to commercial platforms (for example, TelosB) and use the same microcontrollers (MCUs) and radio chips. Only 20% use off-the-shelf motes with default sensor modules.
Design rule 9
The WSN OS must support the implementation of additional sensor drivers for existing commercial motes.
Design rule 10
Development of completely new platforms must be simple enough, and highly reusable code should be contained in the OS.
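Both rules become cheap to follow if the OS defines a small, uniform driver interface that each new sensor board implements. A hypothetical sketch of such an interface and its registration call:

```c
#include <stdint.h>
#include <stdbool.h>

/* Hypothetical uniform driver interface: supporting a new sensor board
 * means filling in one structure per sensor and registering it. */
typedef struct {
    const char *name;
    bool (*init)(void);
    bool (*read)(int32_t *value);   /* result in physical units    */
    void (*sleep)(void);            /* put the chip into low power */
} sensor_driver_t;

#define MAX_DRIVERS 8
static const sensor_driver_t *drivers[MAX_DRIVERS];
static int driver_count;

bool sensor_register(const sensor_driver_t *drv)
{
    if (driver_count >= MAX_DRIVERS || !drv->init())
        return false;
    drivers[driver_count++] = drv;
    return true;
}
```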
Figure 5. Custom, adapted and off-the-shelf mote usage in deployments—Almost half of deployments adapt off-the-shelf motes by custom sensing and packaging hardware, 32% use custom platforms and only 20% use commercial motes with default sensing modules.
The most popular reason for building a customized mote is specific sensing and packaging constraints. The application range is very wide; there will always be applications with specific requirements.
On the other hand, some sensor network users are beginners in the field and do not have the resources to develop a new platform to assess an idea in real-world settings. Off-the-shelf commercial platforms, a simple programming interface, default settings and demo applications are required for this user class.
The Chipcon CC1000 [66] radio was popular in early deployments; however, the Chipcon CC2420 [67] has been the most popular in recent years. IEEE 802.15.4 is currently the most popular radio transmission protocol (used in the CC2420 and other radio chips).
Design rule 11
Driver support for CC2420 radio is essential.
More radio chips and system-on-chip solutions using the IEEE 802.15.4 protocol can be expected in the coming years.

3.5. Sensor Mote: Microcontroller

Used microcontrollers are listed in Table 6.
Table 6. Deployments: used microcontrollers (MCUs).
Nr | Codename | MCU count | MCU name | Architecture, bits | MHz | RAM, KB | Program memory, KB
1 | Habitats | 1 | Atmel ATMega103L | 8 | 4 | 4 | 128
2 | Minefield | 1 | Hitachi SH4 7751 | 32 | 167 | 64,000 | 0
3 | Battlefield | 1 | Atmel ATMega128 | 8 | 7.3 | 4 | 128
4 | Line in the sand | 1 | Atmel ATMega128 | 8 | 4 | 4 | 128
5 | Counter-sniper | 1 + Field-Programmable Gate Array (FPGA) | Atmel ATMega128L | 8 | 7.3 | 4 | 128
6 | Electro-shepherd | 1 | Atmel ATMega128 | 8 | 7.3 | 4 | 128
7 | Virtual fences | 1 | Intel StrongArm | 32 | 206 | 65,536 | ?
8 | Oil tanker | 1 | Zeevo ARM7TDMI | 32 | 12 | 64 | 512
9 | Enemy vehicles | 1 | Atmel ATMega128L | 8 | 4 | 4 | 128
10 | Trove game | 1 | Atmel ATMega128 | 8 | 7.3 | 4 | 128
11 | Elder RFID | 1 | Atmel ATMega128 | 8 | 7.3 | 4 | 128
12 | Murphy potatoes | 1 | Atmel ATMega128L | 8 | 8 | 4 | 128
13 | Firewxnet | 1 | Atmel ATMega128L | 8 | 7.3 | 4 | 128
14 | AlarmNet | 1 | Atmel ATMega128L | 8 | 7.3 | 4 | 128
15 | Ecuador Volcano | 1 | Texas Instruments (TI) MSP430F1611 | 16 | 8 | 10 | 48
16 | Pet game | 1 | Atmel ATMega128 | 8 | 7.3 | 4 | 128
17 | Plug | 1 | Atmel AT91SAM7S64 | 32 | 48 | 16 | 64
18 | B-Live | 2 | Microchip PIC18F2580 | 8 | 40 | 1.5 | 32
19 | Biomotion | 1 | TI MSP430F149 | 16 | 8 | 2 | 60
20 | AID-N | 1 | TI MSP430F1611 | 16 | 8 | 10 | 48
21 | Firefighting | 1 | TI MSP430F1611 | 16 | 8 | 10 | 48
22 | Rehabil | 1 | TI MSP430F1611 | 16 | 8 | 10 | 48
23 | CargoNet | 1 | TI MSP430F135 | 16 | 8? | 0.512 | 16
24 | Fence monitor | 1 | TI MSP430F1612 | 16 | 7.3 | 5 | 55
25 | BikeNet | 1 | TI MSP430F1611 | 16 | 8 | 10 | 48
26 | BriMon | 1 | TI MSP430F1611 | 16 | 8 | 10 | 48
27 | IP net | 1 | TI MSP430F149 | 16 | 8 | 2 | 60
28 | Smart home | 1 | Atmel ATMega128 | 8 | 8 | 4 | 128
29 | SVATS | 1 | Atmel ATMega128L | 8 | 7.3 | 4 | 128
30 | Hitchhiker | 1 | TI MSP430F1611 | 16 | 8 | 10 | 48
31 | Daily morning | 1 | Atmel ATMega128 | 8 | 7.3 | 4 | 128
32 | Heritage | 1 | TI MSP430F1611 | 16 | 8 | 10 | 48
33 | AC meter | 1 | TI MSP430F1611 | 16 | 8 | 10 | 48
34 | Coal mine | 1 | Atmel ATMega128 | 8 | 7.3 | 4 | 128
35 | ITS | 2 | ARM7 + MSP430F1611 | 32 + 16 | ? + 8 | 64 + 10 | ? + 48
36 | Underwater | 1 | NXP LPC2148 ARM7TDMI | 32 | 60 | 40 | 512
37 | PipeProbe | 1 | Nordic nRF24E1 DW8051 | 8 | 16 | 4.25 | 32
38 | Badgers | 1 | Atmel ATMega128V | 8 | 8 | 8 | 128
39 | Helens volcano | 1 | Intel XScale PXA271 | 32 | 13 (624 max) | 256 | 32,768
40 | Tunnels | 1 | TI MSP430F1611 | 16 | 8 | 10 | 48
Only a few deployments use motes with more than one MCU. Therefore, OS support for multi-MCU platforms is an interesting option; however, the potential usage is limited. Multi-MCU motes are a future research area for applications running simple tasks routinely and requiring extra processing power sporadically. The Gumsense mote is an example of this approach [68].
The most popular MCUs belong to Atmel ATMega AVR architecture [69] and Texas Instruments MSP430 families. The former is used in Mica-family motes, while the latter is the core of the TelosB platform, which has been widely used recently.
Design rule 12
Support for Atmel AVR and Texas Instruments MSP430 MCU architectures is essential for sensor network operating systems.
Sensor network motes use eight-bit or 16-bit architectures, with a few 32-bit ARM-family exceptions. Typical CPU frequencies are around 8 MHz; RAM amounts are 4–10 KB; program memory is 48–128 KB. It must be noted that program memory size is always larger than RAM, sometimes even by a factor of 32. Therefore, effective usage of RAM is more important, and a reasonable amount of program memory can be sacrificed to achieve it.
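On AVR-based motes, for example, avr-libc's PROGMEM facility allows constant data to live in the comparatively plentiful flash rather than in scarce RAM; a small example of the trade-off described above:

```c
#include <avr/pgmspace.h>   /* avr-libc: place data in flash instead of RAM */
#include <stdint.h>

/* This 256-entry table costs 256 B of the ATMega128's 128 KB flash,
 * not 256 B of its 4 KB RAM. */
static const uint8_t calib_table[256] PROGMEM = { 0 /* values elided */ };

uint8_t calibrate(uint8_t raw)
{
    return pgm_read_byte(&calib_table[raw]);   /* explicit flash read on AVR */
}
```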

3.6. Sensor Mote: External Memory

Used external memory characteristics are described in Table 7. While external memory of several megabytes is available on most sensor motes, it is actually seldom used (only in 25% of deployments). Motes often perform either simple decision tasks or forward all the collected data without caching. However, these 25% of deployments are still too many to be completely discarded.
Table 7. Deployments: external memory.
Nr | Codename | Available external memory, KB | Secure Digital (SD) | External memory used | File system used
1 | Habitats | 512 | n | y | n
2 | Minefield | 16,000 | n | y | y
3 | Battlefield | 512 | n | n | n
4 | Line in the sand | 512 | n | n | ?
5 | Counter-sniper | 512 | n | n | n
6 | Electro-shepherd | 512 | n | y | n
7 | Virtual fences | ? | y | y | y
8 | Oil tanker | 0 | n | n | n
9 | Enemy vehicles | 512 | n | n | n
10 | Trove game | 512 | n | n | n
11 | Elder RFID | 512 | n | n | n
12 | Murphy potatoes | 512 | n | n | n
13 | Firewxnet | 512 | n | n | n
14 | AlarmNet | 512 | n | n | n
15 | Ecuador Volcano | 1,024 | n | y | n
16 | Pet game | 512 | n | n | n
17 | Plug | 0 | n | n | n
18 | B-Live | 0 | n | n | n
19 | Biomotion | 0 | n | n | n
20 | AID-N | 1,024 | n | n | n
21 | Firefighting | 1,024 | n | n | n
22 | Rehabil | 1,024 | n | n | n
23 | CargoNet | 1,024 | n | y | n
24 | Fence monitor | 0 | n | n | n
25 | BikeNet | 1,024 | n | y? | n
26 | BriMon | 1,024 | n | y | n
27 | IP net | 1,024 | n | n | n
28 | Smart home | 512 | n | ? | n
29 | SVATS | 512 | n | n | n
30 | Hitchhiker | 1,024 | n | n | n
31 | Daily morning | 512 | n | n | n
32 | Heritage | 1,024 | n | n | n
33 | AC meter | 2,048 | n | y | n
34 | Coal mine | 512 | n | n | n
35 | ITS | ? | n? | ? | n?
36 | Underwater | 2,097,152 | y | ? | n
37 | PipeProbe | 0 | n | n | n
38 | Badgers | 2,097,152 | y | y | n
39 | Helens volcano | 0 | n | n | n
40 | Tunnels | 1,024 | n | n | n
Design rule 13
External memory support for user data storage at the OS level is optional; yet, it should be provided.
Although they are very popular consumer products, Secure Digital/MultiMediaCard (SD/MMC) cards are used even less frequently (in less than 10% of deployments). The situation is even worse with filesystem use: despite multiple sensor network filesystems having been proposed [70,71], they are seldom used. There is probably a connection between the lack of external memory usage and filesystem usage: external memories are rarely used because there is no simple and efficient filesystem for these devices.
Design rule 14
A convenient filesystem interface should be provided by the operating system, so that sensor network users can use it without extra complexity.
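One low-friction option is a deliberately POSIX-like interface, so that C programmers face no new paradigm. The sketch below is illustrative; the function names and the FS_APPEND flag are assumptions, not an existing API.

```c
#include <stdint.h>

/* Illustrative minimal filesystem API for external flash. */
#define FS_APPEND 1

extern int  fs_open(const char *name, int flags);   /* descriptor or -1 */
extern int  fs_write(int fd, const void *buf, int len);
extern int  fs_read(int fd, void *buf, int len);
extern void fs_close(int fd);

/* Example: append one sample record to a log file. */
void log_sample(int32_t sample)
{
    int fd = fs_open("samples.log", FS_APPEND);
    if (fd >= 0) {
        fs_write(fd, &sample, sizeof(sample));
        fs_close(fd);
    }
}
```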

3.7. Communication

Table 8 lists deployment communication characteristics.
Table 8. Deployments: communication.
Nr | Codename | Report rate, 1/h | Payload size, B | Radio range, m | Speed, kbps | Connectivity type
1 | Habitats | 60 | ? | 200 (1,200 with Yagi 12 dBi) | 40 | connected
2 | Minefield | ? | ? | ? | ? | connected
3 | Battlefield | ? | ? | 300 | 38.4 | intermittent
4 | Line in the sand | ? | 1 | 300 | 38.4 | connected
5 | Counter-sniper | ? | ? | 60 | 38.4 | connected
6 | Electro-shepherd | 0.33 | 7+ | 150–200 | ? | connected
7 | Virtual fences | 1,800 | 8? | ? | 54,000 | connected
8 | Oil tanker | 0.049 | ? | 30 | 750 | connected
9 | Enemy vehicles | 1,800 | ? | 30 | 38.4 | connected
10 | Trove game | ? | ? | ? | 38.4 | connected
11 | Elder RFID | ? | 19 | ? | 38.4 | connected
12 | Murphy potatoes | 6 | 2 | 2 | 76.8 | connected
13 | Firewxnet | 200 | ? | 400 | 38.4 | intermittent
14 | AlarmNet | configurable | 29 | ? | 38.4 | connected
15 | Ecuador Volcano | depends on events | 16 | 1,000 | 250 | connected
16 | Pet game | configurable | ? | 100 | 250 | connected
17 | Plug | 720 | 21 | ? | ? | connected
18 | B-Live | - | ? | ? | ? | connected
19 | Biomotion | 360,000 | 16 | 15 | 1,000 | connected
20 | AID-N | depends on queries | ? | 66 | 250 | connected
21 | Firefighting | ? | ? | 20 | 250 | connected
22 | Rehabil | ? | 12 | 30 | 250 | connected
23 | CargoNet | depends on events | ? | ? | 250 | sporadic
24 | Fence monitor | ? | ? | 300 | 76.8 | connected
25 | BikeNet | opportunistic | ? | 20 | 250 | sporadic
26 | BriMon | 62 | 116 | 125 | 250 | sporadic
27 | IP net | ? | ? | 300 | 19.2 | connected
28 | Smart home | ? | ? | 75–100 outdoor / 20–30 indoor | 250 | connected
29 | SVATS | ? | ? | 400 | 38.4 | connected
30 | Hitchhiker | ? | 24 | 500 | 76.8 | connected
31 | Daily morning | 180,000 | 2? | 100 | 250 | connected
32 | Heritage | 6 | ? | 125 | 250 | intermittent
33 | AC meter | 60 default (configurable) | ? | 125 | 250 | connected
34 | Coal mine | ? | 7 | 4 m forced, 20 m max | 38.4 | intermittent
35 | ITS | varies | 5*n | ? | 250 | connected
36 | Underwater | 900 | 11 | ? | 0.3 | intermittent
37 | PipeProbe | 72,000 | ? | 10 | 1,000 | connected
38 | Badgers | 2,380+ | 10 | 1,000 | 250 | connected
39 | Helens volcano | configurable | ? | 9,600 | 250 | connected
40 | Tunnels | 120 | ? | ? | 250 | connected
The data report rate varies significantly: some applications report once a day, while others perform real-time reporting at 100 Hz. If we look for connections between Table 3 and Table 8, two conclusions can be drawn: a low report rate is associated with a low duty cycle; yet a low report rate does not necessarily imply a low sampling rate, as high-frequency sampling applications with a low report rate do exist [24,48,49].
Typical data payload size is in the range of 10–30 bytes. However, larger packets are used in some deployments.
Design rule 15
The default packet size provided by the operating system should be at least 30 bytes, with an option to change this constant easily, when required.
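In C-based systems, this can be a single compile-time constant with a sensible default that the application can override from its makefile; a minimal sketch:

```c
/* Default payload covers the typical 10-30 B case with some headroom;
 * applications needing larger frames override it at compile time,
 * e.g. with -DRADIO_PAYLOAD_SIZE=96. */
#ifndef RADIO_PAYLOAD_SIZE
#define RADIO_PAYLOAD_SIZE 32
#endif

typedef struct {
    unsigned char length;                    /* used bytes in data[] */
    unsigned char data[RADIO_PAYLOAD_SIZE];
} radio_packet_t;
```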
Typical radio transmission ranges are on the order of a few hundred meters. Some deployments use long-range links with more than a 1-km connectivity range.
Design rule 16
The option to change radio transmission power (if provided by radio chip) is a valuable option for collision avoidance and energy efficiency.
Design rule 17
Data transmission speed is usually below 1 Mbit/s in theory and even lower in practice. This must be taken into account when designing a communication protocol stack.
Eighty percent of deployments consider the network to be connected without interruptions (Figure 6): any node can communicate with other nodes at any time (not counting delays imposed by Media Access Control (MAC) protocols). Only 12% experience interruptions, and 8% of networks have only opportunistic connectivity.
Default networking protocols should support connected networks. Opportunistic connection support is optional.
Figure 6. Deployment network connectivity—Eighty percent of deployments consider a network to be continuously connected, while only 12% experience significant disconnections and 8% use opportunistic communication.

3.8. Communication Media

Used communication media characteristics are listed in Table 9.
Table 9. Deployments: communication media.
Nr | Codename | Communication media | Used channels | Directionality used
1 | Habitats | radio over air | 1 | n
2 | Minefield | radio over air + sound over air | ? | n
3 | Battlefield | radio over air | 1 | n
4 | Line in the sand | radio over air | 1 | n
5 | Counter-sniper | radio over air | 1 | n
6 | Electro-shepherd | radio over air | ? | n
7 | Virtual fences | radio over air | 2 | y
8 | Oil tanker | radio over air | 79 | n
9 | Enemy vehicles | radio over air | 1 | n
10 | Trove game | radio over air | 1 | n
11 | Elder RFID | radio over air | 1 | n
12 | Murphy potatoes | radio over air | 1 | n
13 | Firewxnet | radio over air | 1 | y, gateways
14 | AlarmNet | radio over air | 1 | n
15 | Ecuador Volcano | radio over air | 1 | y
16 | Pet game | radio over air | 1 | n
17 | Plug | radio over air | ? | n
18 | B-Live | wire mixed with radio over air | ? | n
19 | Biomotion | radio over air | 1 | n
20 | AID-N | radio over air | 1 | n
21 | Firefighting | radio over air | 4 | n
22 | Rehabil | radio over air | 1? | n
23 | CargoNet | radio over air | 1 | n
24 | Fence monitor | radio over air | 1 | n
25 | BikeNet | radio over air | 1 | n
26 | BriMon | radio over air | 16 | n
27 | IP net | radio over air | 1 | n
28 | Smart home | radio over air | 16 | ?
29 | SVATS | radio over air | ? | n
30 | Hitchhiker | radio over air | 1 | n
31 | Daily morning | radio over air | 1 | n
32 | Heritage | radio over air | 1 | n
33 | AC meter | radio over air | 1 | n
34 | Coal mine | radio over air | 1 | n
35 | ITS | radio over air | 1 | n
36 | Underwater | ultra-sound over water | 1 | n
37 | PipeProbe | radio over air and water | 1 | n
38 | Badgers | radio over air | ? | n
39 | Helens volcano | radio over air | 1? | y
40 | Tunnels | radio over air | 2 | n
With few exceptions, communication is performed by transmitting radio signals over the air. Ultrasound is used as an alternative. Some networks may use available wired infrastructure.
Eighty-five percent of applications use one static radio channel; the remaining 15% switch between multiple alternative channels. If radio channel switching is complex and code-consuming, it should be optional at the OS level.
While directionality usage for extended coverage and energy efficiency has been a widely discussed topic, the ideas are seldom used in practice. Only 10% of deployments use radio directionality benefits, and none of these deployments utilize electronically switchable antennas capable of adjusting directionality in real time [72]. A directionality switching interface is optional; users may implement it in the application layer as needed.

3.9. Network

Deployment networking is summarized in Table 10.
Table 10. Deployments: network.
Nr | Codename | Network topology | Mobile motes | Deployment area | Max hop count | Randomly deployed
1 | Habitats | multi-one-hop | n | 1,000 × 1,000 m | 1 | n
2 | Minefield | multi-one-hop | y | 30 × 40 m | ? | y
3 | Battlefield | multi-one-hop | n | 85 m long road | ? | y
4 | Line in the sand | mesh | n | 18 × 8 m | ? | n
5 | Counter-sniper | multi-one-hop | n | 30 × 15 m | 11 | y
6 | Electro-shepherd | one-hop | y | ? | 1 | y (attached to animals)
7 | Virtual fences | mesh | y | 300 × 300 m | 5 | y (attached to animals)
8 | Oil tanker | multi-one-hop | n | 150 × 100 m | 1 | n
9 | Enemy vehicles | mesh | y, power node | 20 × 20 m | 6 | n
10 | Trove game | one-hop | y | ? | 1 | y, attached to users
11 | Elder RFID | one-hop | n (mobile RFID tags) | <10 m² | 1 | n
12 | Murphy potatoes | mesh | n | 10,00 × 1,000 m | 10 | n
13 | Firewxnet | multi-mesh | n | 160 km² | 4? | n
14 | AlarmNet | mesh | y, mobile body motes | apartment | ? | n
15 | Ecuador Volcano | mesh | n | 8,000 × 1,000 m | 6 | n
16 | Pet game | mesh | y | ? | ? | y
17 | Plug | mesh | n | 40 × 40 | ? | n
18 | B-Live | multi-one-hop | n | house | 2 | n
19 | Biomotion | one-hop | y, mobile body motes | room | 1 | n (attached to predefined body parts)
20 | AID-N | mesh | y | ? | 1+ | y, attached to users
21 | Firefighting | predefined tree | y, human mote | 3,200 m² | ? | n
22 | Rehabil | one-hop | y, human motes | gymnastics room | 1 | y, attached to patients and training machines
23 | CargoNet | one-hop | y | truck, ship or plane | 1 | n
24 | Fence monitor | one-hop? | n | 35 × 2 m | 1? | n
25 | BikeNet | mesh | y | 5 km long track | ? | y (attached to bicycles)
26 | BriMon | multi-mesh | y, mobile BS | 2,000 × 1 m | 4 | n
27 | IP net | multi-one-hop | n | 250 × 25 m; 3-story building + mock-up town 500 m² | ? | n
28 | Smart home | one-hop | n | ? | ? | n
29 | SVATS | mesh | y, motes in cars | parking place | ? | n
30 | Hitchhiker | mesh | n | 500 × 500 m | 2? | n
31 | Daily morning | one-hop | y, body mote | house | 1 | n (attached to human)
32 | Heritage | mesh | n | 7.8 × 4.5 × 26 m | 6 | n (initial deployment static, but can be moved later)
33 | AC meter | mesh | n | building | ? | y (given to users who plug in power outlets of their choice)
34 | Coal mine | multi-path mesh | n | 8 × 4 × ? m | ? | n
35 | ITS | mesh | n | 140 m long road | 7? | n
36 | Underwater | mesh | y | ? | 1 | n
37 | PipeProbe | one-hop | y | 0.18 × 1.40 × 3.45 m | 1 | n
38 | Badgers | mesh | y | 1,000 × 2,000 m? | ? | y (attached to animals)
39 | Helens volcano | mesh | n | ? | 1+? | n
40 | Tunnels | multi-mesh | n | 230 m long tunnel | 4 | n
A mesh, multi-hop network is the most popular network topology, used in 47% of the analyzed cases (Figure 7). The second most popular topology is a simple one-hop network: 25%. Multiple such one-hop networks are used in 15% of deployments. Altogether, routing is used in 57% of cases. The maximum hop count does not exceed 11 in the surveyed deployments. A rather surprising finding is that almost half of the deployments (47%) have at least one mobile node in the network (while maintaining a connected network).
Design rule 18
Multi-hop routing is required as a default component, which can be turned off, if one-hop topology is used. Topology changes must be expected; at least 11 hops should be supported.
Figure 7. Deployment network topologies—Almost half (47%) use a multi-hop mesh network. One-hop networks are used in 25% of cases; 15% use multiple one-hop networks.
Additionally, 30% have random initial node deployment, increasing the need for a neighbor discovery protocol. Neighbor discovery protocols (either explicit or built into routing) should be provided by the OS.
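The simplest form of such a service is a periodic hello beacon feeding a neighbor table; a sketch with hypothetical radio primitives:

```c
#include <stdint.h>

extern uint16_t my_address(void);                           /* hypothetical */
extern void     radio_broadcast(const void *data, int len); /* hypothetical */

#define MAX_NEIGHBORS 16
static uint16_t neighbors[MAX_NEIGHBORS];
static int      neighbor_count;

/* Run on a timer, e.g. once per minute: announce this node. */
void beacon_send(void)
{
    uint16_t addr = my_address();
    radio_broadcast(&addr, sizeof(addr));
}

/* Called by the radio stack for every received beacon. */
void beacon_received(uint16_t sender)
{
    for (int i = 0; i < neighbor_count; i++)
        if (neighbors[i] == sender)
            return;                          /* already known */
    if (neighbor_count < MAX_NEIGHBORS)
        neighbors[neighbor_count++] = sender;
}
```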

3.10. In-Network Processing

In-network preprocessing, aggregation and distributed algorithm usage are shown in Table 11 and visualized in Figure 8. Application-level aggregation is considered here: data averaging and other compression techniques with the goal of reducing the amount of data to be sent.
Table 11. Deployments: in-network processing.
Nr | Codename | Raw data preprocess | Advanced distributed algorithms | In-network aggregation
1 | Habitats | n | n | n
2 | Minefield | y | y | ?
3 | Battlefield | y | n | y
4 | Line in the sand | y | y | ?
5 | Counter-sniper | y | n | y
6 | Electro-shepherd | n | n | n
7 | Virtual fences | n | n | n
8 | Oil tanker | n | n | n
9 | Enemy vehicles | y | y | y
10 | Trove game | n | y | n
11 | Elder RFID | n | n | n
12 | Murphy potatoes | n | n | n
13 | Firewxnet | n | n | n
14 | AlarmNet | y | n | n
15 | Ecuador Volcano | y | y | n
16 | Pet game | n | n | n
17 | Plug | y | n | n
18 | B-Live | y | n | n
19 | Biomotion | n | n | n
20 | AID-N | y | n | n
21 | Firefighting | n | n | n
22 | Rehabil | n | n | n
23 | CargoNet | n | n | n
24 | Fence monitor | y | y | n
25 | BikeNet | n | n | n
26 | BriMon | n | n | n
27 | IP net | y | n | n
28 | Smart home | y | n | ?
29 | SVATS | y | y | n
30 | Hitchhiker | n | n | n
31 | Daily morning | n | n | n
32 | Heritage | y | n | n
33 | AC meter | y | n | n
34 | Coal mine | y | y | y
35 | ITS | y | n | n
36 | Underwater | n | y | n
37 | PipeProbe | y | n | n
38 | Badgers | y | n | n
39 | Helens volcano | y | n | n
40 | Tunnels | n | n | n
As the results show, raw data preprocessing is used in 52% of deployments, i.e., every second deployment reports raw data without processing it locally. The situation is even more pronounced for advanced distributed algorithms (voting, distributed motor control, etc.) and data aggregation: they are used in only 20% and 10% of cases, respectively. Therefore, the theoretical sensor network assumptions that smart devices take in-network distributed decisions and that aggregation is used to save communication bandwidth prove not to be true in reality. Raw data preprocessing and distributed decision-making are performed at the application layer; no responsibility is imposed on the operating system. Aggregation could be performed at the operating system service level; however, it seems that such an additional service is not required for most applications. Data packet aggregation is optional and should not be included at the OS level.
Figure 8. Deployment in-network processing—Raw data preprocessing is used in half of deployments; distributed algorithms and aggregation are seldom used.

3.11. Networking Stack

The networking protocol stack is summarized in Table 12.
Table 12. Deployments: networking protocol stack.
Nr | Codename | Custom MAC | Channel access method | Routing used | Custom routing | Reactive or proactive routing | IPv6 used | Safe delivery | Data priorities
1 | Habitats | n | Carrier Sense Multiple Access (CSMA) | n | - | - | n | n | n
2 | Minefield | ? | ? | ? | ? | ? | ? | ? | ?
3 | Battlefield | y | CSMA | y | y | proactive | n | y | ?
4 | Line in the sand | y | CSMA | y | y | proactive | n | y | n
5 | Counter-sniper | n | CSMA | y | y | proactive | n | n | -
6 | Electro-shepherd | y | CSMA | - | - | - | n | n | n
7 | Virtual fences | n | CSMA | - | - | - | IPv4? | n | n
8 | Oil tanker | n | CSMA | - | n | - | n | y | n
9 | Enemy vehicles | y | CSMA | y | y | proactive | n | n | -
10 | Trove game | n | CSMA | n | - | - | n | n | n
11 | Elder RFID | n | CSMA | n | - | - | n | n | n
12 | Murphy potatoes | y | CSMA | y | n | proactive | n | n | n
13 | Firewxnet | y | CSMA | y | y | proactive | n | y | n
14 | AlarmNet | y | CSMA | y | n | ? | n | y | y
15 | Ecuador Volcano | n | CSMA | y | y | proactive | n | y | n
16 | Pet game | n | CSMA | y | n | ? | n | n | n
17 | Plug | y | CSMA | y | y | ? | n | n | n
18 | B-Live | ? | ? | n | - | - | n | ? | ?
19 | Biomotion | y | Time Division Multiple Access (TDMA) | n | - | - | n | n | n
20 | AID-N | ? | ? | y | n | proactive | n | y | n
21 | Firefighting | n | CSMA | y, static | n | proactive | n | n | n
22 | Rehabil | n | CSMA | n | - | - | n | n | n
23 | CargoNet | y | CSMA | n | - | - | n | n | n
24 | Fence monitor | n | CSMA? | y | y | proactive? | n | n | n
25 | BikeNet | y | CSMA | y | y | reactive | n | y | n
26 | BriMon | y | TDMA | y | y | proactive | n | y | n
27 | IP net | n | CSMA | y | y | proactive | ? | ? | ?
28 | Smart home | ? | ? | y | ? | ? | n | ? | ?
29 | SVATS | n | CSMA | y | n | ? | n | n | n
30 | Hitchhiker | y | TDMA | y | y | reactive | n | y | n
31 | Daily morning | n | CSMA | n | - | - | n | n | n
32 | Heritage | y | TDMA | y | y | proactive | n | y | y
33 | AC meter | n | ? | y | n | proactive | y | y | n
34 | Coal mine | n | CSMA | y | y | proactive | n | y | n
35 | ITS | y? | CSMA? | y | y | reactive | n | y | n
36 | Underwater | y | TDMA | n | - | - | n | n | n
37 | PipeProbe | n | ? | n | - | - | n | n | n
38 | Badgers | n | CSMA | y | y | proactive | y | n | y
39 | Helens volcano | y | TDMA | y | ? | ? | n | y | y
40 | Tunnels | n | CSMA | y | y | proactive | n | n | n
Forty-three percent of deployments use custom MAC protocols, suggesting that either data link layer problems really are very application-specific or system developers are unwilling to study the large body of published MAC-layer work.
The most commonly used MAC protocols can be divided into two classes: CSMA-based (Carrier Sense Multiple Access) and TDMA-based (Time Division Multiple Access). The former class represents protocols that check media availability shortly before transmission, while in the latter case, all communication participants agree on a common transmission schedule.
Seventy percent use CSMA-based MAC protocols and 15% use TDMA; for the remaining 15%, the protocol class is unclear. CSMA MACs are often used because TDMA implementation is too complex: it requires master node election and time synchronization.
Design rule 19
The operating system should provide a simple, effective and generic CSMA-based MAC protocol by default.
The TDMA MAC option would be a nice feature for the WSN OS, as TDMA protocols are more effective in many cases.
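The CSMA idea itself, sense the channel shortly before transmitting and back off when it is busy, fits in a few lines. A minimal unslotted-CSMA sketch, with hypothetical radio primitives:

```c
#include <stdbool.h>
#include <stdlib.h>

extern bool radio_channel_clear(void);             /* clear channel assessment */
extern void radio_transmit(const void *p, int n);  /* raw frame transmission   */
extern void delay_ms(int ms);

#define CSMA_MAX_ATTEMPTS 5

/* Unslotted CSMA: transmit if the channel is clear, otherwise wait a
 * random, exponentially growing backoff and retry. */
bool csma_send(const void *packet, int len)
{
    int window = 8;                                /* initial backoff window, ms */
    for (int attempt = 0; attempt < CSMA_MAX_ATTEMPTS; attempt++) {
        if (radio_channel_clear()) {
            radio_transmit(packet, len);
            return true;
        }
        delay_ms(1 + rand() % window);
        window *= 2;                               /* exponential backoff */
    }
    return false;                                  /* channel persistently busy */
}
```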
Routing is used in 65% of applications. However, no single best routing protocol has emerged: among the analyzed deployments, no two applications used the same routing protocol. Forty-three percent of deployments used custom routing protocols, not published before.
Routing can be proactive, where routing tables are prepared and maintained beforehand, or reactive, where the routing table is constructed only upon need. The proactive approach is used in 85% of the cases; the remaining 15% use reactive route discovery.
As already mentioned above, the operating system must provide a simple, yet efficient, routing protocol that performs well enough in most cases. A proactive protocol is preferred.
Design rule 20
The interface for custom MAC and routing protocol substitution must be provided.
Although Internet Protocol version 6 (IPv6) is a widely discussed protocol for the Internet of Things and modifications for resource-constrained devices (such as 6lowpan [73]) have been developed, the protocol is still very new and not yet widely used: only 5% of the surveyed deployments use it. However, this number can be expected to increase in the coming years. TinyOS [4] and Contiki OS [5] have already included 6lowpan as one of the main networking alternatives.
Design rule 21
It is wise to include an IPv6 (6lowpan) networking stack in the operating system to increase interoperability.
Reliable data delivery is used by 43% of deployments, showing that reliable communication at the transport layer is a significant requirement for some application classes. Another quality-of-service option, data stream prioritization, is rarely used, though (only 10% of cases).
Design rule 22
Simple transport layer delivery acknowledgment mechanisms should be provided by the operating system.
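The simplest such mechanism is stop-and-wait with retransmissions; a sketch, assuming hypothetical send_packet() and wait_for_ack() primitives:

```c
#include <stdbool.h>
#include <stdint.h>

extern void send_packet(uint16_t dst, const void *p, int n, uint8_t seq);
extern bool wait_for_ack(uint8_t seq, int timeout_ms);      /* hypothetical */

#define MAX_RETRIES    3
#define ACK_TIMEOUT_MS 100

/* Stop-and-wait reliable delivery: retransmit until the matching
 * acknowledgment arrives or the retry budget is exhausted. */
bool reliable_send(uint16_t dst, const void *data, int len)
{
    static uint8_t seq;
    seq++;
    for (int i = 0; i <= MAX_RETRIES; i++) {
        send_packet(dst, data, len, seq);
        if (wait_for_ack(seq, ACK_TIMEOUT_MS))
            return true;
    }
    return false;
}
```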

3.12. Operating System and Middleware

Used operating systems and middleware are listed in Table 13.
Table 13. Deployments: used operating system (OS) and middleware.
Nr | Codename | OS used | Self-made OS | Middleware used
1 | Habitats | TinyOS | n |
2 | Minefield | customized Linux | n |
3 | Battlefield | TinyOS | n |
4 | Line in the sand | TinyOS | n |
5 | Counter-sniper | TinyOS | n |
6 | Electro-shepherd | ? | y |
7 | Virtual fences | Linux | n |
8 | Oil tanker | ? | n |
9 | Enemy vehicles | TinyOS | n |
10 | Trove game | TinyOS | n |
11 | Elder RFID | TinyOS | n |
12 | Murphy potatoes | TinyOS | n |
13 | Firewxnet | Mantis OS [60] | y |
14 | AlarmNet | TinyOS | n |
15 | Ecuador Volcano | TinyOS | n | Deluge [74]
16 | Pet game | TinyOS | n | Mate Virtual Machine + TinyScript [75]
17 | Plug | custom | y |
18 | B-Live | custom | y |
19 | Biomotion | no OS | y |
20 | AID-N | ? | ? |
21 | Firefighting | TinyOS | n | Deluge [74]?
22 | Rehabil | TinyOS | n |
23 | CargoNet | custom | y |
24 | Fence monitor | ScatterWeb | y | FACTS [76]
25 | BikeNet | TinyOS | n |
26 | BriMon | TinyOS | n |
27 | IP net | Contiki | n |
28 | Smart home | TinyOS | n |
29 | SVATS | TinyOS? | n |
30 | Hitchhiker | TinyOS | n |
31 | Daily morning | TinyOS | n |
32 | Heritage | TinyOS | n | TeenyLIME [77]
33 | AC meter | TinyOS | n |
34 | Coal mine | TinyOS | n |
35 | ITS | custom? | y? |
36 | Underwater | custom | y |
37 | PipeProbe | custom | y |
38 | Badgers | Contiki | n |
39 | Helens volcano | TinyOS | n | customized Deluge [74], remote procedure calls
40 | Tunnels | TinyOS | n | TeenyLIME [77]
TinyOS [4] is the de facto operating system for wireless sensor networks, as Figure 9 clearly shows: 60% of deployments use it. There are multiple reasons behind that. First, TinyOS has a large community supporting it; therefore, device drivers and protocols are well tested. Second, having reached critical mass, TinyOS is the first choice for new sensor network designers: it is taught at universities, it is easy to install, and it has well-developed documentation and even books on how to program in TinyOS [78].
Figure 9. Operating systems used in analyzed deployments. Sixty percent of deployments use the de facto standard, TinyOS; 17% use self-made or customized OSs.
At the same time, many C and Unix programmers would like to use their existing skills and knowledge to program sensor networks without learning new paradigms, the nesC language (used by TinyOS), component wiring, etc. One piece of evidence for this is that new operating systems for sensor network programming are still being developed [5,71,79,80], despite the fact that TinyOS has been available for more than 10 years. Another piece of evidence: in 17% of cases, a self-made or customized OS is used; users either want to apply their existing knowledge, or they have specific hardware not supported by TinyOS and consider porting TinyOS to new hardware too complex.
Deluge [74] and TeenyLIME [77] middleware are each used in more than one deployment. Deluge is a remote reprogramming add-on for TinyOS. TeenyLIME is middleware that provides a different level of programming abstraction and is also implemented on top of TinyOS.
Conclusion: middleware usage is not very popular in sensor networks. Therefore, there is open space for research to develop easy-to-use, yet powerful, middleware that is generic enough to be used in a wide application range.

3.13. Software Level Tasks

User and kernel level tasks and services are described in Table 14. The task counts and objectives are estimates made by the authors of this deployment survey, based on the information available in the research articles. Networking, time synchronization and remote reprogramming protocols are considered kernel services, unless stated otherwise.
Table 14. Deployments: software level tasks.
Nr | Codename | Kernel service count | Kernel services | App-level task count | App-level tasks
1 | Habitats | 0 | - | 1 | sensing + caching to flash + data transfer
2 | Minefield | ? | Linux services | 11 | -
3 | Battlefield | 2 | MAC, routing | 2 + 4 | entity tracking, status, middleware (time sync, group management, sentry service, dynamic configuration)
4 | Line in the sand | ? | ? | ? | ?
5 | Counter-sniper | ? | ? | ? | ?
6 | Electro-shepherd | ? | - | - | sense and send
7 | Virtual fences | ? | MAC | 1 | sense and issue warning (play sound file)
8 | Oil tanker | 0 | - | 4 | cluster formation and time sync, sensing, data transfer
9 | Enemy vehicles | ? | ? | ? | ?
10 | Trove game | 1 | MAC | 3 | sense and send, receive, buzz
11 | Elder RFID | 1 | MAC | 2 | query RFID, report
12 | Murphy potatoes | 2 | MAC, routing | 1 | sense and send
13 | Firewxnet | 2 | MAC, routing | 2 | sensing and sending, reception and time-sync
14 | AlarmNet | ? | ? | 3 | query processing, sensing, report sending
15 | Ecuador Volcano | 3 | time sync, remote reprogram, routing | 3 | sense, detect events, process queries
16 | Pet game | 2 | MAC, routing | ? | sense and send, receive configuration
17 | Plug | 2 | MAC, routing, radio listen | 2 | sensing and statistics and report, radio RX
18 | B-Live | ? | ? | 3 | sensing, actuation, data transfer
19 | Biomotion | 2 | MAC, time sync | 1 | sense and send
20 | AID-N | 3 | MAC, routing, transport | 3 | query processing, sensing, report sending
21 | Firefighting | 1 | routing | 2 | sensing and sending, user input processing
22 | Rehabil | 0 | ?? | 1 | sense and send
23 | CargoNet | 0 | ?? | 1 | sense and send
24 | Fence monitor | 2 | MAC, routing | 4 | sense, preprocess, report, receive neighbor response
25 | BikeNet | 1 | MAC | 5 | hello broadcast, neighbor discovery and task reception, sensing, data download, data upload
26 | BriMon | 3 | time sync, MAC, routing | 3 | sensing, flash storage, sending
27 | IP net | ? | ? | ? | ?
28 | Smart home | ? | ? | ? | ?
29 | SVATS | 2 | MAC, time sync | 2 | listen, decide
30 | Hitchhiker | 4 | MAC, routing, transport, timesync | 1 | sense and send
31 | Daily morning | 1 | MAC | 1 | sense and send
32 | Heritage | ? | ? | - | -
33 | AC meter | ? | ? | 2 | sampling, routing
34 | Coal mine | 2 | MAC, routing | 2 | receive beacons, send beacon and update neighbor map and report accidents
35 | ITS | 2 | MAC, routing | 1 | listen for queries and sample and process and report
36 | Underwater | 2 | MAC, timesync | 3 | sensing + sending, reception, motor control
37 | PipeProbe | 0? | - | 1 | sense and send
38 | Badgers | 3 | MAC, routing, User Datagram Protocol (UDP) connection establishment | 1 | sense and send
39 | Helens volcano | 5 | MAC, routing, transport, time sync, remote reprogram | 5 | sense, detect events, compress, Remote Procedure Call (RPC) response, data report
40 | Tunnels | 2 | MAC, routing | 1 | sense and send
Most deployments (55%) use no more than two kernel services (Figure 10). Some deployments use up to five kernel services. The maximum service count must be taken into account when designing a task scheduler: if static service maps are used, they must contain enough entries to support all kernel services.
In the application layer, often just one task is used, typically sense and send (33% of cases) (Figure 11). Up to six tasks are used in more complex applications.
Design rule 23
The OS task scheduler should support up to five kernel services and up to six user-level tasks. An alternative configuration providing a single user task might also be useful: it simplifies the programming approach and provides maximum resource efficiency, which might be important for the most resource-constrained platforms.
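As one possible realization of this rule, the scheduler could keep statically sized task maps, as in the sketch below; the structure layout and field names are simplified assumptions, not taken from a specific scheduler.

```c
#include <stddef.h>
#include <stdint.h>

#define MAX_KERNEL_SERVICES 5   /* per design rule 23 */
#define MAX_USER_TASKS      6

typedef void (*task_fn_t)(void);

typedef struct {
    task_fn_t run;              /* entry point; NULL marks an unused slot */
    uint16_t  period_ms;        /* 0 = run on every scheduler pass */
} task_entry_t;

/* Static service maps: no dynamic allocation, bounded RAM usage. */
static task_entry_t kernel_services[MAX_KERNEL_SERVICES];
static task_entry_t user_tasks[MAX_USER_TASKS];
```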
Figure 10. The number of kernel-level software services used in deployments. Fifty-five percent of deployments use two or fewer kernel services; for 28%, the kernel service count is unknown.
Figure 11. The number of application-layer software tasks used in deployments. Thirty-three percent of deployments use just one task; however, up to six tasks are used in more complex cases. The task count is unknown in 18% of deployments.

3.14. Task Scheduling

Table 15 describes deployment task scheduling attributes: time sensitivity and the need for preemptive task scheduling.
Table 15. Deployments: task scheduling.
Nr | Codename | Time-sensitive app-level tasks | Preemptive scheduling needed | Task comments
1 | Habitats | 0 | n | sense + cache + send in every period
2 | Minefield | 7+ | y | complicated localization, network awareness and cooperation
3 | Battlefield | 0 | n | -
4 | Line in the sand | 1? | n | -
5 | Counter-sniper | 3? | n | localization, synchronization, blast detection
6 | Electro-shepherd | ? | ? | -
7 | Virtual fences | ? | n | -
8 | Oil tanker | 1 | y | user-space cluster node discovery and sync are time-critical
9 | Enemy vehicles | 0 | n | -
10 | Trove game | 0 | n | -
11 | Elder RFID | 0 | n | -
12 | Murphy potatoes | 0 | n | -
13 | Firewxnet | 1 | y | sensing can take up to 200 ms; should be preemptive
14 | AlarmNet | 0 | n | -
15 | Ecuador Volcano | 1 | y | sensing is time-critical, but it is stopped when a query is received
16 | Pet game | 0 | n | -
17 | Plug | 0 | n | -
18 | B-Live | 0 | y | -
19 | Biomotion | 0 | y | preemption needed for time sync and TDMA MAC
20 | AID-N | 0 | n | -
21 | Firefighting | 0 | n | -
22 | Rehabil | ? | ? | -
23 | CargoNet | 0 | n | wake up on external interrupts; process them; return to sleep mode
24 | Fence monitor | 0 | n | if preprocessing is time-consuming, preemptive scheduling is needed
25 | BikeNet | 1 | y | sensing realized as an app-level TDMA schedule and is time-critical; data upload may be time-consuming; therefore, preemptive scheduling may be required
26 | BriMon | 0 | n | sending is time-critical, but in the MAC layer
27 | IP net | 0 | ? | -
28 | Smart home | ? | ? | -
29 | SVATS | 0 | y | preemption needed for time sync and MAC
30 | Hitchhiker | 0 | y | preemption needed for time sync and MAC
31 | Daily morning | 0 | n | -
32 | Heritage | 1 | y | preemptive scheduling needed for time sync?
33 | AC meter | 0 | n | -
34 | Coal mine | 0 | n | preemptive scheduling needed, if the neighbor update is time-consuming
35 | ITS | 0 | n | -
36 | Underwater | 0 | y | preemption needed for time sync and TDMA MAC
37 | PipeProbe | 0 | n | no MAC; just send
38 | Badgers | 0 | n | -
39 | Helens volcano | 0 | y | preemption needed for time sync and MAC
40 | Tunnels | 0 | n | -
Two basic scheduling approaches exist: cooperative and preemptive. In the former, the switch between tasks is explicit: one task yields the processor to another, and a switch can occur only at predefined places in the code. In the latter, the scheduler can preempt any task at any time and give the CPU to another task; a switch can occur anywhere in the code.
The main advantage of cooperative scheduling is resource efficiency: no CPU time and memory are wasted to perform periodic switches between concurrent tasks, which could be executed serially without any problem.
The main advantage of preemptive scheduling is that users do not have to worry about task switching—it is performed automatically. Even if the user has created an infinite loop in one task, other tasks will have access to the CPU and will be able to execute.
Preemptive scheduling can introduce new bugs, though: it requires context switching, including the management of multiple stacks. Memory checking and overflow control are much harder for multiple stacks, compared to cooperative approaches with a single stack.
If we assume that the user written code is correct, preemptive scheduling is required only in cases where at least one task is time-sensitive and at least one other task is time-intensive (it can execute for a relatively long period of time). The latter may disturb the former from handling all important incoming events.
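To make the contrast concrete, a cooperative kernel is essentially a loop over run-to-completion tasks, as in the following deliberately minimal sketch (an assumption of how such a loop looks, not a specific OS implementation):

```c
#include <stddef.h>

typedef void (*task_fn_t)(void);

/* Cooperative scheduling: each task runs to completion and yields by
 * returning. A misbehaving task (e.g., an infinite loop) starves all
 * the others, which is exactly the risk preemption removes. */
void cooperative_scheduler(task_fn_t *tasks, int count)
{
    for (;;) {
        for (int i = 0; i < count; i++) {
            if (tasks[i] != NULL) {
                tasks[i]();
            }
        }
        /* A preemptive scheduler would instead let a timer interrupt
         * save the running task's context (registers, stack pointer)
         * and switch to another task at any point in the code. */
    }
}
```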
Twenty percent of the analyzed deployments have at least one time-sensitive application-layer task (most of them have exactly one), while 30% of deployments require preemptive scheduling. In some cases (10%), even though no time-sensitive user-space tasks exist, preemption may still be required by kernel-level services: MAC protocols and time synchronization.
Design rule 24
The operating system should provide both cooperative and preemptive scheduling, which are switchable as needed.

3.15. Time Synchronization

Time synchronization has been addressed as one of the core challenges of sensor networks. Therefore, its use in deployments is analyzed and statistics are shown in Table 16.
Table 16. Deployments: time synchronization.
Nr | Codename | Time-sync used | Accuracy, μsec | Advanced time-sync | Self-made time-sync
1 | Habitats | n | - | - | -
2 | Minefield | y | 1000 | ? | ?
3 | Battlefield | y | ? | n | y
4 | Line in the sand | y | 110 | n | y
5 | Counter-sniper | y | 17.2 (1.6 per hop) | y | y
6 | Electro-shepherd | n | - | - | -
7 | Virtual fences | n | - | - | -
8 | Oil tanker | y | ? | n | y
9 | Enemy vehicles | n | - | - | -
10 | Trove game | n | - | - | -
11 | Elder RFID | n | - | - | -
12 | Murphy potatoes | n | - | - | -
13 | Firewxnet | y | >1000 | n | y
14 | AlarmNet | n | - | - | -
15 | Ecuador Volcano | y | 6800 | y | n
16 | Pet game | n | - | - | -
17 | Plug | n | - | - | -
18 | B-Live | n | - | - | -
19 | Biomotion | y | ? | n | y
20 | AID-N | n | - | - | -
21 | Firefighting | n | - | - | -
22 | Rehabil | n | - | - | -
23 | CargoNet | n | - | - | -
24 | Fence monitor | n | - | - | -
25 | BikeNet | y | 1 ms? | n, GPS | n
26 | BriMon | y | 180 | n | y
27 | IP net | ? | ? | ? | ?
28 | Smart home | ? | ? | ? | ?
29 | SVATS | y, not implemented | - | - | -
30 | Hitchhiker | y | ? | n | y
31 | Daily morning | n | - | - | -
32 | Heritage | y | 732 | y | y
33 | AC meter | n | - | - | -
34 | Coal mine | n | - | - | -
35 | ITS | n | - | - | -
36 | Underwater | y | ? | ? | y
37 | PipeProbe | n | - | - | -
38 | Badgers | n | - | - | -
39 | Helens volcano | y | 1 ms? | n, GPS | n
40 | Tunnels | n | - | - | -
Reliable routing is possible if at least one of two requirements holds:
  • A 100% duty cycle is used: all network nodes functioning as data routers stay awake, never switching to sleep mode.
  • Network nodes agree on a cooperative schedule for packet forwarding; time synchronization is required.
Therefore, effective duty cycling combined with multi-hop routing is impossible without time synchronization.
Time synchronization is used in 38% of deployments, while multi-hop routing is used in 57% of cases; the remaining 19% use no duty cycling.
Although very accurate time synchronization protocols do exist [81], simple methods, including GPS, are used most of the time, offering accuracy in the millisecond rather than the microsecond range.
Only one deployment used a previously published time synchronization approach (not counting GPS usage in two other deployments); all the others used custom methods. The reason is that, despite many published theoretical protocols, no operating system provides an automated and easy way to “switch on” time synchronization.
Design rule 25
Time synchronization provided by the operating system would be of a high value, saving sensor network designers time and effort for custom synchronization development.
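The simple, millisecond-grade synchronization that most deployments implemented amounts to little more than maintaining an offset to the sink's clock. Below is a minimal sketch under that assumption; local_time_ms() is a hypothetical local clock, and one-way propagation delay is ignored, which is acceptable at millisecond accuracy over a few hops.

```c
#include <stdint.h>

uint32_t local_time_ms(void);          /* hypothetical local system clock */

static int32_t clock_offset_ms;        /* network time minus local time */

/* Called when a time beacon from the sink (or a GPS reading) arrives. */
void on_time_beacon(uint32_t sink_time_ms)
{
    clock_offset_ms = (int32_t)(sink_time_ms - local_time_ms());
}

/* Translate local time into network-wide time for duty cycling
 * schedules and data time stamping. */
uint32_t network_time_ms(void)
{
    return local_time_ms() + (uint32_t)clock_offset_ms;
}
```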

3.16. Localization

Localization is another of the most frequently addressed sensor network problems; deployment statistics are shown in Table 17.
Table 17. Deployments: localization.
Nr | Codename | Localization used | Localization accuracy, cm | Advanced localization | Self-made localization
1 | Habitats | n | - | - | -
2 | Minefield | y | +/−25 | y | y
3 | Battlefield | y | couple feet | n | y
4 | Line in the sand | n | - | - | -
5 | Counter-sniper | y | 11 | y | y
6 | Electro-shepherd | y, GPS | >1 m | n | n
7 | Virtual fences | y, GPS | >1 m | n | n
8 | Oil tanker | n | - | - | -
9 | Enemy vehicles | y | ? | n | y
10 | Trove game | n | - | - | -
11 | Elder RFID | n | - | - | -
12 | Murphy potatoes | n | - | - | -
13 | Firewxnet | n | - | - | -
14 | AlarmNet | y | room | n, motion sensor in rooms | y
15 | Ecuador Volcano | n | - | - | -
16 | Pet game | n | - | - | -
17 | Plug | n | - | - | -
18 | B-Live | n | - | - | -
19 | Biomotion | n | - | - | -
20 | AID-N | n | - | - | -
21 | Firefighting | y | <5 m? | n | y
22 | Rehabil | n | - | - | -
23 | CargoNet | n | - | - | -
24 | Fence monitor | n | - | - | -
25 | BikeNet | y, GPS | >1 m | n | n
26 | BriMon | n | - | - | -
27 | IP net | n | - | - | -
28 | Smart home | n | - | - | -
29 | SVATS | y | ? | n, RSSI | y
30 | Hitchhiker | n | - | - | -
31 | Daily morning | y | room | n | y
32 | Heritage | n | - | - | -
33 | AC meter | n | - | - | -
34 | Coal mine | y | ? | n, static | y
35 | ITS | y, static | ? | n | n
36 | Underwater | y | ? | n | y
37 | PipeProbe | y | 8 cm | y | y
38 | Badgers | n | - | - | -
39 | Helens volcano | n | - | - | -
40 | Tunnels | n | - | - | -
Localization is used in 38% of deployments: 8% use GPS and 30% use other methods. In contrast to time synchronization, the localization problem is very application-specific. Required localization granularity, environment, meta-information and infrastructure vary tremendously: in one case, centimeter-scale localization must be achieved; in another, the room containing a moving object must be found; in yet another, GPS is used outdoors. In 73% of the cases where localization is used, it is custom-built for the application. It is therefore not possible for an operating system to provide a generic localization method for a wide application class. A neighbor discovery service could be useful, however, as it helps to solve both localization and routing problems.

4. A Typical Wireless Sensor Network

In this section, we present a synthetic example of an average sensor network, based on the most common properties and trends found in the deployment analysis. This example can be used to describe wireless sensor networks to people new to the WSN field.
A typical wireless sensor network:
  • is used as a prototyping tool to test new concepts and approaches for monitoring specific environments
  • is developed and deployed incrementally in multiple iterations and, therefore, needs effective debugging mechanisms
  • contains 10–50 sensor nodes and one or several base stations (a sensor node connected to a personal computer) that act as data collection sinks
  • uses temperature, light and accelerometer sensors
  • uses low frequency sensor sampling with less than one sample per second, on average, in most cases; some sensors (accelerometers) require sampling in the range 10–100 Hz, and some scenarios (seismic or audio sensing) use high frequency sampling with a sampling rate above 10 kHz
  • has a desired lifetime, varying from several hours (short trials) to several years; relatively often, the desired final lifetime is specified; yet, a significantly shorter lifetime is used in the first proof-of-concept trials with a 100% duty cycle (no sleep mode used)
  • has at least one sensor node with increased energy budget—either connected to a static power network or a battery with significantly larger capacity
  • has specific sensing and packaging constraints; therefore, packaging and hardware selection are important problems in WSN design
  • uses either an adapted version (custom sensors added) of a TelosB-compatible [2] or a MicaZ sensor node [3]; also, fully custom-built motes are popular
  • contains MSP430 or AVR architecture microcontrollers on the sensor nodes, typically with eight-bit or 16-bit architecture, 8 MHz CPU frequency, 4–10 KB RAM, 48–128 KB program memory and 512–1,024 KB external memory
  • communicates using the IEEE 802.15.4 protocol; the TI CC2420 is an example of a widely used wireless communication chip [67]
  • sends data packets with a size of 10–30 bytes; the report rate varies significantly—for some scenarios, only one packet per day is sent; for others, each sensor sample is sent at 100 Hz
  • uses omnidirectional communication in the range of 100–300 m (each hop) with a transmission speed less than 256 Kbps and uses a single communication channel that can lead to collisions
  • considers constant multi-hop connectivity available (with up to 11 hops on the longest route), with possible topology changes, due to mobile nodes or other environmental changes in the sensing region
  • has either a previously specified or at least a known sensor node placement (not random)
  • is likely to use at least primitive raw data preprocessing before reporting results
  • uses CSMA-based MAC protocol and proactive routing, often adapted or completely custom-developed for the particular sensing task
  • uses some form of reliable data delivery with acknowledgment reception mechanisms
  • has been programmed using the TinyOS operating system
  • uses multiple semantically simultaneous application-level tasks, with multiple kernel services running in the background, creating the need for effective scheduling mechanisms in the operating system and careful programming of the applications (a minimal single-task example is sketched after this list); cooperative scheduling (each task voluntarily yields the CPU to other tasks) is enough in most cases, yet it demands additional care from the programmers
  • requires at least simple time synchronization with millisecond accuracy for common duty cycle management or data time stamping
  • may require some form of node localization; yet, the environments pose very specific constraints: indoor/outdoor, required accuracy, update rate, infrastructure availability and many other factors
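To make the typical application concrete, the following sketch shows the dominant sense-and-send task under the assumptions listed above; all called functions are hypothetical OS primitives, not the API of any particular system.

```c
#include <stdint.h>

/* Hypothetical OS primitives (assumptions): */
uint16_t read_temperature(void);
uint16_t read_light(void);
void send_to_sink(const uint8_t *data, uint8_t len);  /* multi-hop delivery */
void sleep_ms(uint32_t ms);                           /* node duty-cycles here */

#define SAMPLE_PERIOD_MS 60000UL   /* low-rate sampling: once per minute */

void sense_and_send_task(void)
{
    for (;;) {
        uint16_t t = read_temperature();
        uint16_t l = read_light();
        uint8_t packet[4] = {
            (uint8_t)(t >> 8), (uint8_t)t,    /* small payloads (10-30 bytes) */
            (uint8_t)(l >> 8), (uint8_t)l     /* are typical in deployments   */
        };
        send_to_sink(packet, sizeof(packet));
        sleep_ms(SAMPLE_PERIOD_MS);
    }
}
```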

5. OS Conformance

This section analyzes the conformance of existing WSN operating systems to the design rules discussed in this paper. Four operating systems are analyzed here:
  • TinyOS [4]—de facto standard in the WSN community. Specific environment: event driven programming in nesC language.
  • Contiki [5]—more common environment with sequential programming (proto-threads [82]) in American National Standards Institute (ANSI) C programming language
  • LiteOS [71]—a WSN OS providing a Unix-like programming interface
  • MansOS [84]—a portable, C-based operating system that conforms to most of the design rules described in this paper
The conformance to the design rules is summarized in Table 18. The following subsections discuss the conformance of the listed operating systems, without describing their structure in detail, as it is already covered in other publications [4,5,71,84].
As Table 18 reveals, the listed operating systems cover most of the design rules. Exceptions are discussed here.
Table 18. Existing OS conformance to proposed design rules.
# | Rule | TinyOS | Contiki | LiteOS | MansOS
General
1 | Simple, efficient networking protocols | + | + | ± | +
2 | Sink-oriented protocols | + | + | − | +
3 | Base station example | + | − | + | +
Sensing
4 | Temperature, light, acceleration API | − | − | ± | +
5 | Low duty cycle sampling | + | + | + | +
Lifetime and energy
6 | Auto sleep mode | + | + | + | +
7 | Powered mode in protocol design | + | + | − | +
Sensor mote
8 | TelosB support | + | + | − | +
9 | Rapid driver development | + | + | − | +
10 | Rapid platform definition | − | ± | − | +
11 | CC2420 radio chip driver | + | + | + | +
12 | AVR and MSP430 architecture support | + | + | ± | +
13 | External storage support | + | + | + | +
14 | Simple file system | − | + | + | +
Communication
15 | Configurable packet payload (default: 30 bytes) | + | + | + | +
16 | Configurable transmission power | + | + | + | +
17 | Protocols for ≤ 1 Mbps bandwidth | + | + | + | +
18 | Simple proactive routing | + | + | ± | +
19 | Simple CSMA MAC | + | + | − | +
20 | Custom MAC and routing API | + | + | − | +
21 | IPv6 support | + | + | − | −
22 | Simple reception acknowledgment | + | + | − | +
Tasks and scheduling
23 | Five kernel and six user task support | + | + | ± | ±
24 | Cooperative and preemptive scheduling | + | + | − | +
25 | Simple time synchronization | − | + | − | +

5.1. TinyOS

TinyOS conforms to the majority of the design rules, but not all of them. The most significant drawback is the complexity of the TinyOS architecture. Although TinyOS is portable (the wide range of supported platforms proves it), its code readability and simplicity are doubtful. The main reasons for TinyOS complexity are:
  • The event-driven nature: while event handlers impose less overhead compared to sequential programming, with blocking calls and polling, it is more complex for programmers to design and keep in mind the state machine for split-phase operation of the application
  • Modular component architecture: a high degree of modularity and code reuse leads to program logic distribution into many components. Each new functionality may require modification in multiple locations, requiring deep knowledge of internal system structure
  • nesC language peculiarities: confusion of interfaces and components, component composition and nesting and specific requirements for variable definitions are examples of language aspects interfering with the creativity of novice WSN programmers
These limitations are at the system design level, and there is no quick fix available. The most convenient alternative is to implement middleware on top of TinyOS that offers simplified access to non-expert WSN programmers. The TinyOS architecture is too specific and complex to introduce groundbreaking readability improvements while maintaining backwards compatibility for existing applications.
There are multiple TinyOS inconsistencies with the proposed design rules, which can be corrected by implementing missing features:
  • TinyOS provides an interface for writing data and debug logs to external storage devices; yet, no file system is available. Third-party external storage file system implementations do exist, such as TinyOS FAT16 support for SD cards [85].
  • TinyOS contains the Flooding Time Synchronization Protocol (FTSP) [9] in its libraries. However, deep understanding of clock skew issues and FTSP protocol operation is required for it to be useful
  • A temperature, light, acceleration, sound and humidity sensing API is not provided

5.2. Contiki

Contiki is one of the most successful examples regarding conformance to the design rules proposed in this paper.
Contiki does not provide a platform-independent API for popular sensors (temperature, light, sound) or analog-to-digital converter (ADC) access. The reason is that Contiki is not dedicated specifically to sensor networks, but rather to networked embedded device programming in general. Some of the supported platforms (such as the Apple II) may have no sensors or ADC available; therefore, the API is not enforced for all platforms.
Surprisingly, no base station application template is included. Contiki-collect is provided as an alternative: a complete and configurable sense-and-send network toolset for the quick setup of simple sensor network applications.
Portability to new platforms is partially effective. MCU architecture code may be reused. However, the existing practice in Contiki is to copy and duplicate files, even between platforms with a common code base (such as TelosB and Zolertia Z1 [63]). Portability could be improved by architecture and design guidelines under which a common code base is shared and reused among platforms.

5.3. LiteOS

LiteOS conforms to the proposed design rules only partially.
The LiteOS operating system does not include a networking stack at the OS level. Instead, example routing protocols are implemented at the user level, as application examples. No MAC protocol is available in LiteOS, nor is a unified API for custom MAC and routing protocol development present. The provided routing implements geographic forwarding, without any consideration of a powered sink node. No IPv6 support or packet reception acknowledgment mechanisms are provided.
A temperature and light sensor reading API is present in LiteOS; acceleration sensor support must be implemented by users.
Only AVR-based hardware platforms are supported; there is no TelosB support, and the source code is not optimized for porting to new hardware platforms.
Only preemptive multithreading is available in LiteOS; no cooperative scheduling is provided. By default, a maximum of eight simultaneous threads is allowed, although this constant can be changed in the source files. However, each thread requires a separate stack, and running more than eight parallel threads on a platform with 4 KB of RAM is rather dangerous: it can lead to stack overflows and hard-to-trace errors. Executing many parallel tasks is therefore realistic only with scheduling mechanisms that share stack space between multiple threads.
No time synchronization is included in the LiteOS code base.

5.4. MansOS

MansOS [83] is a portable and easy-to-use WSN operating system with a smooth learning curve for users with C and Unix programming experience; it is described in more detail in [84]. One of the main assumptions in the MansOS design was the need to adapt it to many different platforms. As the deployment survey shows, this is a very important requirement.
MansOS satisfies all design rules, with two exceptions:
  • IPv6 support is not built into the MansOS core; it must be implemented at a different level
  • MansOS provides both scheduling techniques: preemptive and cooperative. In the preemptive case, only one kernel thread and several user threads are allowed; multiple kernel tasks must share the single kernel thread. With the cooperative scheduler (protothreads, adopted from Contiki [82]), any number of simultaneous threads is allowed, and they all share the same stack space; therefore, the probability of stack overflow is significantly lower compared to LiteOS.
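For illustration, the protothread style adopted from Contiki uses the published PT_* macros [82]; in the example below, sample_ready() and send_sample() are hypothetical application functions, not part of the protothreads library.

```c
#include "pt.h"   /* protothreads, as published by Dunkels et al. [82] */

int sample_ready(void);      /* hypothetical application-level condition */
void send_sample(void);

static struct pt sense_pt;

/* All protothreads share a single stack; PT_WAIT_UNTIL() is the explicit
 * cooperative blocking point, implemented as a stored continuation. */
static PT_THREAD(sense_thread(struct pt *pt))
{
    PT_BEGIN(pt);
    for (;;) {
        PT_WAIT_UNTIL(pt, sample_ready());
        send_sample();
    }
    PT_END(pt);
}

/* A scheduler loop simply re-invokes the protothread; each call resumes
 * execution at the last wait point. */
void scheduler_loop(void)
{
    PT_INIT(&sense_pt);
    for (;;) {
        sense_thread(&sense_pt);
    }
}
```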

5.5. Summary

The examined WSN operating systems (TinyOS, Contiki, LiteOS and MansOS) conform to the majority of the proposed design rules. However, there is room for improvement in every OS. Some of the drawbacks can be overcome by straightforward implementation of missing functionality; in other cases, a significant OS redesign is required.

6. Conclusions

This paper surveys 40 wireless sensor network deployments described in the research literature. Based on a thorough analysis of the collected data, design rules for WSN operating system design are proposed. The rules include suggestions related to the task scheduler, networking protocols and other aspects of OS design. Some of the most important concluding design rules are:
  • In many cases, customized commercial sensor nodes or fully custom-built motes are used. Therefore, OS portability and code reuse are very important.
  • Simplicity and extensibility should be preferred over scalability, as existing sensor networks rarely contain more than 100 nodes.
  • Both preemptive and cooperative task schedulers should be included in the OS.
  • Default networking protocols should be sink-oriented and use CSMA-based MAC and proactive routing protocols. WSN researchers should be able to easily replace default networking protocols with their own to evaluate their performance.
  • Simple time synchronization with millisecond (instead of microsecond) accuracy is sufficient for most deployments.
The authors believe that these design rules will foster more efficient, portable and easy-to-use WSN operating system and middleware design.
Another overall conclusion based on the analyzed data: existing deployments are rather simple and limited. There is still a need to test larger, more complex and heterogeneous networks in real-world settings. The creation of hybrid networks and “networks of networks” remains an open research topic.

Acknowledgments

The authors would like to thank Viesturs Silins for the help in analyzing deployment data and Modris Greitans for providing feedback during the research.
This work has been supported by the European Social Fund, grant Nr. 2009/0138/1DP/1.1.2.1.2/09/IPIA/VIAA/004 “Support for Doctoral Studies at the University of Latvia” and the Latvian National Research Program “Development of innovative multi-functional material, signal processing and information technologies for competitive and research intensive products”.

References

  1. GlobalSecurity.org. Sound Surveillance System (SOSUS). Available online: http://www.globalsecurity.org/intell/systems/sosus.htm (accessed on 8 August 2013).
  2. Polastre, J.; Szewczyk, R.; Culler, D. Telos: Enabling Ultra-low Power Wireless Research. In Proceedings of the 4th International Symposium on Information Processing in Sensor Networks, (IPSN’05), UCLA, Los Angeles, CA, USA, 25–27 April 2005.
  3. Crossbow Technology. MicaZ mote datasheet. Available online: http://www.openautomation.net/uploadsproductos/micaz_datasheet.pdf (accessed on 8 August 2013).
  4. Levis, P.; Madden, S.; Polastre, J.; Szewczyk, R.; Whitehouse, K.; Woo, A.; Gay, D.; Hill, J.; Welsh, M.; Brewer, E.; et al. TinyOS: An operating system for sensor networks. Ambient Intell. 2005, 35, 115–148. [Google Scholar]
  5. Dunkels, A.; Gronvall, B.; Voigt, T. Contiki: A Lightweight and Flexible Operating System for Tiny Networked Sensors. In Proceedings of the Annual IEEE Conference on Local Computer Networks, Tampa, FL, USA, 17–18 April 2004; pp. 455–462.
  6. Madden, S.; Franklin, M.; Hellerstein, J.; Hong, W. TinyDB: An acquisitional query processing system for sensor networks. ACM Trans. Database Syst. (TODS) 2005, 30, 122–173. [Google Scholar] [CrossRef]
  7. Muller, R.; Alonso, G.; Kossmann, D. A Virtual Machine for Sensor Networks. ACM SIGOPS Operat. Syst. Rev. 2007, 41.3, 145–158. [Google Scholar] [CrossRef]
  8. Demirkol, I.; Ersoy, C.; Alagoz, F. MAC protocols for wireless sensor networks: A survey. IEEE Commun. Mag. 2006, 44, 115–121. [Google Scholar] [CrossRef]
  9. Maróti, M.; Kusy, B.; Simon, G.; Lédeczi, Á. The Flooding Time Synchronization Protocol. In Proceedings of the 2nd International Conference on Embedded Networked Sensor Systems, (Sensys’04), Baltimore, MD, USA, 3–5 November 2004; pp. 39–49.
  10. Mao, G.; Fidan, B.; Anderson, B. Wireless sensor network localization techniques. Comput. Netw. 2007, 51, 2529–2553. [Google Scholar] [CrossRef]
  11. Mainwaring, A.; Culler, D.; Polastre, J.; Szewczyk, R.; Anderson, J. Wireless Sensor Networks for Habitat Monitoring. In Proceedings of the 1st ACM International Workshop on Wireless Sensor Networks and Applications, (WSNA’02), Atlanta, GA, USA, 28 September 2002; pp. 88–97.
  12. Merrill, W.; Newberg, F.; Sohrabi, K.; Kaiser, W.; Pottie, G. Collaborative Networking Requirements for Unattended Ground Sensor Systems. In Proceedings of IEEE Aerospace Conference, Big Sky, MT, USA, 8–15 March 2003; pp. 2153–2165.
  13. Lynch, J.; Loh, K. A summary review of wireless sensors and sensor networks for structural health monitoring. Shock Vib. Digest 2006, 38, 91–130. [Google Scholar] [CrossRef]
  14. Dunkels, A.; Eriksson, J.; Mottola, L.; Voigt, T.; Oppermann, F.J.; Römer, K.; Casati, F.; Daniel, F.; Picco, G.P.; Soi, S.; et al. Application and Programming Survey; Technical report, EU FP7 Project makeSense; Swedish Institute of Computer Science: Kista, Sweden, 2010. [Google Scholar]
  15. Mottola, L.; Picco, G.P. Programming wireless sensor networks: Fundamental concepts and state of the art. ACM Comput. Surv. 2011, 43, 19:1–19:51. [Google Scholar] [CrossRef] [Green Version]
  16. Bri, D.; Garcia, M.; Lloret, J.; Dini, P. Real Deployments of Wireless Sensor Networks. In Proceedings of SENSORCOMM’09, Athens/Glyfada, Greece, 18–23 June 2009; pp. 415–423.
  17. Yick, J.; Mukherjee, B.; Ghosal, D. Wireless sensor network survey. Comput. Netw. 2008, 52, 2292–2330. [Google Scholar] [CrossRef]
  18. Latré, B.; Braem, B.; Moerman, I.; Blondia, C.; Demeester, P. A survey on wireless body area networks. Wirel. Netw. 2011, 17, 1–18. [Google Scholar] [CrossRef]
  19. He, T.; Krishnamurthy, S.; Stankovic, J.A.; Abdelzaher, T.; Luo, L.; Stoleru, R.; Yan, T.; Gu, L.; Hui, J.; Krogh, B. Energy-efficient Surveillance System Using Wireless Sensor Networks. In Proceedings of the 2nd International Conference on Mobile Systems, Applications, and Services, (MobiSys’04), Boston, MA, USA, 6–9 June 2004; pp. 270–283.
  20. Arora, A.; Dutta, P.; Bapat, S.; Kulathumani, V.; Zhang, H.; Naik, V.; Mittal, V.; Cao, H.; Demirbas, M.; Gouda, M.; et al. A line in the sand: A wireless sensor network for target detection, classification, and tracking. Comput. Netw. 2004, 46, 605–634. [Google Scholar] [CrossRef]
  21. Simon, G.; Maróti, M.; Lédeczi, A.; Balogh, G.; Kusy, B.; Nádas, A.; Pap, G.; Sallai, J.; Frampton, K. Sensor Network-based Countersniper System. In Proceedings of the 2nd International Conference on Embedded Networked Sensor Systems, (SenSys’04), Baltimore, MD, USA, 3–5 November 2004; pp. 1–12.
  22. Thorstensen, B.; Syversen, T.; Bjørnvold, T.A.; Walseth, T. Electronic Shepherd-a Low-cost, Low-bandwidth, Wireless Network System. In Proceedings of the 2nd International Conference on Mobile Systems, Applications, and Services, (MobiSys’04), Boston, MA, USA, 6–9 June 2004; pp. 245–255.
  23. Butler, Z.; Corke, P.; Peterson, R.; Rus, D. Virtual Fences for Controlling Cows. In Proceedings of the 2004 IEEE International Conference on Robotics and Automation, (ICRA’04), Barcelona, Spain, 18–22 April 2004; Volume 5, pp. 4429–4436.
  24. Krishnamurthy, L.; Adler, R.; Buonadonna, P.; Chhabra, J.; Flanigan, M.; Kushalnagar, N.; Nachman, L.; Yarvis, M. Design and Deployment of Industrial Sensor Networks: Experiences from a Semiconductor Plant and the North Sea. In Proceedings of the 3rd International Conference on Embedded Networked Sensor Systems, (SenSys’05), San Diego, CA, USA, 2–4 November 2005; pp. 64–75.
  25. Sharp, C.; Schaffert, S.; Woo, A.; Sastry, N.; Karlof, C.; Sastry, S.; Culler, D. Design and Implementation of a Sensor Network System for Vehicle Tracking and Autonomous Interception. In Proceedings of the Second European Workshop on Wireless Sensor Networks, Istanbul, Turkey, 31 January–2 February 2005; pp. 93–107.
  26. Mount, S.; Gaura, E.; Newman, R.M.; Beresford, A.R.; Dolan, S.R.; Allen, M. Trove: A Physical Game Running on an Ad-hoc Wireless Sensor Network. In Proceedings of the 2005 Joint Conference on Smart Objects and Ambient Intelligence: Innovative Context-Aware Services: Usages and Technologies, (sOc-EUSAI’05), Grenoble, France, 12–14 October 2005; pp. 235–239.
  27. Ho, L.; Moh, M.; Walker, Z.; Hamada, T.; Su, C.F. A Prototype on RFID and Sensor Networks for Elder Healthcare: Progress Report. In Proceedings of the 2005 ACM SIGCOMM Workshop on Experimental Approaches to Wireless Network Design and Analysis, (E-WIND’05), Philadelphia, PA, USA, 22 August 2005; pp. 70–75.
  28. Langendoen, K.; Baggio, A.; Visser, O. Murphy Loves Potatoes: Experiences from a Pilot Sensor Network Deployment in Precision Agriculture. In Proceedings of the 20th International IEEE Parallel and Distributed Processing Symposium, (IPDPS 2006), Rhodes Island, Greece, 25–29 April 2006; pp. 1–8.
  29. Hartung, C.; Han, R.; Seielstad, C.; Holbrook, S. FireWxNet: A Multi-tiered Portable Wireless System for Monitoring Weather Conditions in Wildland Fire Environments. In Proceedings of the 4th International Conference on Mobile Systems, Applications and Services, (MobiSys’06), Uppsala, Sweden, 19–22 June 2006; pp. 28–41.
  30. Wood, A.; Virone, G.; Doan, T.; Cao, Q.; Selavo, L.; Wu, Y.; Fang, L.; He, Z.; Lin, S.; Stankovic, J. ALARM-NET: Wireless Sensor Networks for Assisted-Living and Residential Monitoring; Technical Report; University of Virginia Computer Science Department: Charlottesville, VA, USA, 2006. [Google Scholar]
  31. Werner-Allen, G.; Lorincz, K.; Johnson, J.; Lees, J.; Welsh, M. Fidelity and Yield in a Volcano Monitoring Sensor Network. In Proceedings of the 7th Symposium on Operating Systems Design and Implementation, (OSDI’06), Seattle, WA, USA, 6–8 November 2006; pp. 381–396.
  32. Liu, L.; Ma, H. Wireless Sensor Network Based Mobile Pet Game. In Proceedings of 5th ACM SIGCOMM Workshop on Network and System Support for Games, (NetGames’06), Singapore, Singapore, 30–31 October 2006.
  33. Lifton, J.; Feldmeier, M.; Ono, Y.; Lewis, C.; Paradiso, J.A. A Platform for Ubiquitous Sensor Deployment in Occupational and Domestic Environments. In Proceedings of the 6th International Conference on Information Processing in Sensor Networks, (IPSN’07), Cambridge, MA, USA, 25–27 April 2007; pp. 119–127.
  34. Santos, V.; Bartolomeu, P.; Fonseca, J.; Mota, A. B-Live: A Home Automation System for Disabled and Elderly People. In Proceedings of the International Symposium on Industrial Embedded Systems, (SIES’07), Lisbon, Portugal, 4–6 July 2007; pp. 333–336.
  35. Aylward, R.; Paradiso, J.A. A Compact, High-speed, Wearable Sensor Network for Biomotion Capture and Interactive Media. In Proceedings of the 6th International Conference on Information Processing in Sensor Networks, (IPSN’07), Cambridge, MA, USA, 25–27 April 2007; pp. 380–389.
  36. Gao, T.; Massey, T.; Selavo, L.; Crawford, D.; Chen, B.; Lorincz, K.; Shnayder, V.; Hauenstein, L.; Dabiri, F.; Jeng, J.; et al. The advanced health and disaster aid network: A light-weight wireless medical system for triage. IEEE Trans. Biomed. Circuits Syst. 2007, 1, 203–216. [Google Scholar] [CrossRef] [PubMed]
  37. Wilson, J.; Bhargava, V.; Redfern, A.; Wright, P. A Wireless Sensor Network and Incident Command Interface for Urban Firefighting. In Proceedings of the 4th Annual International Conference on Mobile and Ubiquitous Systems: Networking Services, (MobiQuitous’07), Philadelphia, PA, USA, 6–10 August 2007; pp. 1–7.
  38. Jarochowski, B.; Shin, S.; Ryu, D.; Kim, H. Ubiquitous Rehabilitation Center: An Implementation of a Wireless Sensor Network Based Rehabilitation Management System. In Proceedings of the International Conference on Convergence Information Technology, (ICCIT 2007), Gyeongju, Korea, 21–23 November 2007; pp. 2349–2358.
  39. Malinowski, M.; Moskwa, M.; Feldmeier, M.; Laibowitz, M.; Paradiso, J.A. CargoNet: A Low-cost Micropower Sensor Node Exploiting Quasi-passive Wakeup for Adaptive Asynchronous Monitoring of Exceptional Events. In Proceedings of the 5th International Conference on Embedded Networked Sensor Systems, (SenSys’07), Sydney, Australia, 6–9 November 2007; pp. 145–159.
  40. Wittenburg, G.; Terfloth, K.; Villafuerte, F.L.; Naumowicz, T.; Ritter, H.; Schiller, J. Fence Monitoring: Experimental Evaluation of a Use Case for Wireless Sensor Networks. In Proceedings of the 4th European Conference on Wireless Sensor Networks, (EWSN’07), Delft, The Netherlands, 29–31 January 2007; pp. 163–178.
  41. Eisenman, S.B.; Miluzzo, E.; Lane, N.D.; Peterson, R.A.; Ahn, G.S.; Campbell, A.T. BikeNet: A mobile sensing system for cyclist experience mapping. ACM Trans. Sen. Netw. 2010, 6, 1–39. [Google Scholar] [CrossRef]
  42. Chebrolu, K.; Raman, B.; Mishra, N.; Valiveti, P.; Kumar, R. Brimon: A Sensor Network System for Railway Bridge Monitoring. In Proceedings of the 6th International Conference on Mobile Systems, Applications, and Services (MobiSys’08), Breckenridge, CO, USA, 17–20 June 2008; pp. 2–14.
  43. Finne, N.; Eriksson, J.; Dunkels, A.; Voigt, T. Experiences from Two Sensor Network Deployments: Self-monitoring and Self-configuration Keys to Success. In Proceedings of the 6th International Conference on Wired/wireless Internet Communications, (WWIC’08), Tampere, Finland, 28–30 May 2008; pp. 189–200.
  44. Suh, C.; Ko, Y.B.; Lee, C.H.; Kim, H.J. The Design and Implementation of Smart Sensor-based Home Networks. In Proceedings of the International Symposium on Ubiquitous Computing Systems, (UCS’06), Seoul, Korea, 11–13 November 2006; p. 10.
  45. Song, H.; Zhu, S.; Cao, G. SVATS: A Sensor-Network-Based Vehicle Anti-Theft System. In Proceedings of the 27th Conference on Computer Communications, (INFOCOM 2008), Phoenix, AZ, USA, 15–17 April 2008; pp. 2128–2136.
  46. Barrenetxea, G.; Ingelrest, F.; Schaefer, G.; Vetterli, M. The Hitchhiker’s Guide to Successful Wireless Sensor Network Deployments. In Proceedings of the 6th ACM Conference on Embedded Network Sensor Systems, (SenSys’08), Raleigh, NC, USA, 5–7 November 2008; pp. 43–56.
  47. Ince, N.F.; Min, C.H.; Tewfik, A.; Vanderpool, D. Detection of early morning daily activities with static home and wearable wireless sensors. EURASIP J. Adv. Signal Process. 2008. [Google Scholar] [CrossRef]
  48. Ceriotti, M.; Mottola, L.; Picco, G.P.; Murphy, A.L.; Guna, S.; Corra, M.; Pozzi, M.; Zonta, D.; Zanon, P. Monitoring Heritage Buildings with Wireless Sensor Networks: The Torre Aquila Deployment. In Proceedings of the 2009 International Conference on Information Processing in Sensor Networks, (IPSN’09), San Francisco, CA, USA, 13–16 April 2009; pp. 277–288.
  49. Jiang, X.; Dawson-Haggerty, S.; Dutta, P.; Culler, D. Design and Implementation of a High-fidelity AC Metering Network. In Proceedings of the 2009 International Conference on Information Processing in Sensor Networks, (IPSN’09), San Francisco, CA, USA, 13–16 April 2009; pp. 253–264.
  50. Li, M.; Liu, Y. Underground coal mine monitoring with wireless sensor networks. ACM Trans. Sens. Netw. (TOSN) 2009, 5, 10:1–10:29. [Google Scholar] [CrossRef]
  51. Franceschinis, M.; Gioanola, L.; Messere, M.; Tomasi, R.; Spirito, M.; Civera, P. Wireless Sensor Networks for Intelligent Transportation Systems. In Proceedings of the IEEE 69th Vehicular Technology Conference, VTC Spring 2009, Barcelona, Spain, 26–29 April 2009; pp. 1–5.
  52. Detweiler, C.; Doniec, M.; Jiang, M.; Schwager, M.; Chen, R.; Rus, D. Adaptive Decentralized Control of Underwater Sensor Networks for Modeling Underwater Phenomena. In Proceedings of the 8th ACM Conference on Embedded Networked Sensor Systems, (SenSys’10), Zurich, Switzerland, 3–5 November 2010; pp. 253–266.
  53. Lai, T.T.T.; Chen, Y.H.T.; Huang, P.; Chu, H.H. PipeProbe: A Mobile Sensor Droplet for Mapping Hidden Pipeline. In Proceedings of the 8th ACM Conference on Embedded Networked Sensor Systems, (SenSys’10), Zurich, Switzerland, 3–5 November 2010; pp. 113–126.
  54. Dyo, V.; Ellwood, S.A.; Macdonald, D.W.; Markham, A.; Mascolo, C.; Pásztor, B.; Scellato, S.; Trigoni, N.; Wohlers, R.; Yousef, K. Evolution and Sustainability of a Wildlife Monitoring Sensor Network. In Proceedings of the 8th ACM Conference on Embedded Networked Sensor Systems, (SenSys’10), Zurich, Switzerland, 3–5 November 2010; pp. 127–140.
  55. Huang, R.; Song, W.Z.; Xu, M.; Peterson, N.; Shirazi, B.; LaHusen, R. Real-world sensor network for long-term volcano monitoring: Design and findings. IEEE Trans. Parallel Distrib. Syst. 2012, 23, 321–329. [Google Scholar] [CrossRef]
  56. Ceriotti, M.; Corrà, M.; D’Orazio, L.; Doriguzzi, R.; Facchin, D.; Guna, S.; Jesi, G.; Cigno, R.; Mottola, L.; Murphy, A.; et al. Is There Light at the Ends of the Tunnel? Wireless Sensor Networks for Adaptive Lighting in Road Tunnels. In Proceedings of the 10th ACM/IEEE International Conference on Information Processing in Sensor Networks (IPSN/SPOTS), Chicago, IL, USA, 12–14 April 2011; pp. 187–198.
  57. Shafi, N.B. Efficient Over-the-Air Remote Reprogramming of Wireless Sensor Networks. M.Sc. Thesis, Queen’s University, Kingston, ON, Canada, 2011. [Google Scholar]
  58. Dutta, P. Sustainable sensing for a smarter planet. XRDS 2011, 17, 14–20. [Google Scholar] [CrossRef]
  59. Sensoria. Wireless Integrated Network Sensors (WINS) Next Generation. Technical Report, Defense Advanced Research Projects Agency (DARPA). 2004. Available online: http://www.trabucayre.com/page-tinyos.html (accessed on 8 August 2013).
  60. Bhatti, S.; Carlson, J.; Dai, H.; Deng, J.; Rose, J.; Sheth, A.; Shucker, B.; Gruenwald, C.; Torgerson, A.; Han, R. MANTIS OS: An embedded multithreaded operating system for wireless micro sensor platforms. Mobile Netw. Appl. 2005, 10, 563–579. [Google Scholar] [CrossRef]
  61. TU Harburg Institute of Telematics. Embedded Sensor Board. Available online: http://wiki.ti5.tu-harburg.de/wsn/scatterweb/esb (accessed on 8 August 2013).
  62. Picco, G.P. TRITon: Trentino Research and Innovation for Tunnel Monitoring. Available online: http://triton.disi.unitn.it/ (accessed on 8 August 2013).
  63. Zolertia. Z1 Platform. Available online: http://www.zolertia.com/ti (accessed on 8 August 2013).
  64. Crossbow Technology. MICA2 Wireless Measurement System datasheet. Available online: http://bullseye.xbow.com:81/Products/Product_pdf_files/Wireless_pdf/MICA2_Datasheet.pdf (accessed on 8 August 2013).
  65. Lo, B.; Thiemjarus, S.; King, R.; Yang, G. Body Sensor Network: A Wireless Sensor Platform for Pervasive Healthcare Monitoring. In Proceedings of the 3rd International Conference on Pervasive Computing, Munich, Germany, 8–13 May 2005; Volume 191, pp. 77–80.
  66. Texas Instruments. CC1000: Single Chip Very Low Power RF Transceiver. Available online: http://www.ti.com/lit/gpn/cc1000 (accessed on 8 August 2013).
  67. Texas Instruments. CC2420: 2.4 GHz IEEE 802.15.4 / ZigBee-ready RF Transceiver. Available online: http://www.ti.com/lit/gpn/cc2420 (accessed on 8 August 2013).
  68. Martinez, K.; Basford, P.; Ellul, J.; Spanton, R. Gumsense: A High Power Low Power Sensor Node. In Proceedings of the 6th European Conference on Wireless Sensor Networks, (EWSN’09), Cork, Ireland, 11–13 February 2009.
  69. Atmel Corporation. AVR 8-bit and 32-bit Microcontroller. Available online: http://www.atmel.com/products/microcontrollers/avr/default.aspx (accessed on 8 August 2013).
  70. Hill, J.; Szewczyk, R.; Woo, A.; Hollar, S.; Culler, D.; Pister, K. System architecture directions for networked sensors. ACM Sigplan Not. 2000, 35, 93–104. [Google Scholar] [CrossRef]
  71. Cao, Q.; Abdelzaher, T.; Stankovic, J.; He, T. The LiteOS Operating System: Towards Unix-Like Abstractions for Wireless Sensor Networks. In Proceedings of the 7th International Conference on Information Processing in Sensor Networks, (IPSN’08), St. Louis, MO, USA, 22–24 April 2008; pp. 233–244.
  72. Prieditis, K.; Drikis, I.; Selavo, L. SAntArray: Passive Element Array Antenna for Wireless Sensor Networks. In Proceedings of the 8th ACM Conference on Embedded Networked Sensor Systems, (SenSys’10), Zurich, Switzerland, 3–5 November 2010; pp. 433–434.
  73. Shelby, Z.; Bormann, C. 6LoWPAN: The Wireless Embedded Internet; Wiley Publishing: Chippenham, Wiltshire, UK, 2010. [Google Scholar]
  74. Hui, J.W.; Culler, D. The Dynamic Behavior of a Data Dissemination Protocol for Network Programming at Scale. In Proceedings of the 2nd International Conference on Embedded Networked Sensor Systems, (SenSys’04), Baltimore, MD, USA, 3–5 November 2004; pp. 81–94.
  75. Levis, P.; Culler, D. Mate: A tiny virtual machine for sensor networks. Sigplan Not. 2002, 37, 85–95. [Google Scholar] [CrossRef]
  76. Terfloth, K.; Wittenburg, G.; Schiller, J. FACTS: A Rule-Based Middleware Architecture for Wireless Sensor Networks. In Proceedings of the 1st International Conference on Communication System Software and Middleware (COMSWARE), New Delhi, India, 8–12 January 2006.
  77. Costa, P.; Mottola, L.; Murphy, A.L.; Picco, G.P. TeenyLIME: Transiently Shared Tuple Space Middleware for Wireless Sensor Networks. In Proceedings of the International Workshop on Middleware for Sensor Networks, (MidSens’06), Melbourne, Australia, 28 November 2006; pp. 43–48.
  78. Levis, P.; Gay, D. TinyOS Programming, 1st ed.; Cambridge University Press: New York, NY, USA, 2009. [Google Scholar]
  79. Saruwatari, S.; Suzuki, M.; Morikawa, H. A Compact Hard Real-time Operating System for Wireless Sensor Nodes. In Proceedings of the 2009 Sixth International Conference on Networked Sensing Systems, (INSS’09), Pittsburgh, PA, USA, 17–19 June 2009; pp. 1–8.
  80. Eswaran, A.; Rowe, A.; Rajkumar, R. Nano-RK: An Energy-aware Resource-centric RTOS for Sensor Networks. In Proceedings of the 26th IEEE International Real-Time Systems Symposium, (RTSS 2005), Miami, FL, USA, 6–8 December 2005; pp. 265–274.
  81. Ganeriwal, S.; Kumar, R.; Srivastava, M.B. Timing-sync Protocol for Sensor Networks. In Proceedings of the 1st International Conference on Embedded Networked Sensor Systems, (SenSys’03), Los Angeles, CA, USA, 5–7 November 2003; pp. 138–149.
  82. Dunkels, A.; Schmidt, O.; Voigt, T.; Ali, M. Protothreads: Simplifying Event-Driven Programming of Memory-Constrained Embedded Systems. In Proceedings of SenSys’06, Boulder, CO, USA, 31 October–3 November 2006; pp. 29–42.
  83. MansOS—Portable and easy-to-use WSN operating system. Available online: http://mansos.net (accessed on 8 August 2013).
  84. Elsts, A.; Strazdins, G.; Vihrov, A.; Selavo, L. Design and Implementation of MansOS: A Wireless Sensor Network Operating System. In Scientific Papers; University of Latvia: Riga, Latvia, 2012; Volume 787, pp. 79–105. [Google Scholar]
  85. Goavec-Merou, G. SDCard and FAT16 File System Implementation for TinyOS. Available online: http://www.trabucayre.com/page-tinyos.html (accessed on 8 August 2013).
