Energy Efficiency in Short and Wide-Area IoT Technologies—A Survey

In recent years, the Internet of Things (IoT) has emerged as a key application context in the design and evolution of technologies in the transition toward a 5G ecosystem. More and more IoT technologies have entered the market and represent important enablers in the deployment of networks of interconnected devices. As network and spatial device densities grow, energy efficiency and consumption have become important aspects in analyzing the performance and suitability of different technologies. In this framework, this survey presents an extensive review of IoT technologies, including both Low-Power Short-Area Networks (LPSANs) and Low-Power Wide-Area Networks (LPWANs), from the perspective of energy efficiency and power consumption. Existing consumption models and energy efficiency mechanisms are categorized, analyzed and discussed, in order to highlight the main trends proposed in the literature and standards toward achieving energy-efficient IoT networks. Current limitations and open challenges are also discussed, aiming at highlighting possible new research directions.


Introduction
The Internet of Things (IoT) paradigm was introduced over two decades ago, and its large-scale deployment has been ongoing for almost a decade. In its most general definition, IoT is a network of devices, that is, the things, which gather and exchange data, possibly over the Internet. The ultimate goal of IoT is to enhance existing services and applications, or deliver new ones to users, with little to no human intervention [1,2].
The extreme heterogeneity of application domains and involved devices has led to different requirements and expectations. Therefore, a large variety of wireless communication technologies has gradually emerged for enabling IoT, and is expected to connect up to 75 billion devices by 2025, with an economic impact of around $11.1 trillion per year [3,4].
Considering the first important aspect in IoT systems, that is, the coverage area of the adopted technologies, a rough taxonomy in short-range vs. wide-range systems can be identified. Moreover, by taking into account energy efficiency and power consumption aspects, the two above categories identify so-called Low-Power Short-Area Networks (LPSAN) and Low-Power Wide-Area Networks (LPWAN) [5].
LPSAN technologies target cost/energy-efficient short coverage, aiming at wirelessly solving the so-called "last 100 m connectivity", that is, the connection of local networks to the Internet through a more structured Wide-Area Network (WAN). The latter is usually accessed via dedicated and conveniently placed gateways [6]. A plethora of different LPSAN standards and proprietary systems have thus been proposed over the years, mostly originating from networking schemes referred to as ad-hoc networks or Wireless Sensor Networks (WSNs). As said before, such schemes are gateway-terminated toward the Internet for IoT-specific purposes; moreover, the technologies they are based on mainly work in the unlicensed spectrum. Besides extremely short-range Radio Frequency Identification (RFID) and Near Field Communication (NFC) systems [7,8], further standards and systems are currently adopted as LPSANs, such as Zigbee, derived from the IEEE 802.15.4 standard suite and deployed by the Zigbee Alliance [9]; Bluetooth and its Low Energy extension (Bluetooth LE, or BLE), defined within the Bluetooth Special Interest Group (Bluetooth SIG) [10]; and Z-Wave, a proprietary solution developed by Zensys and managed by the Z-Wave Alliance [11]. In order to overcome heterogeneity, and thus possible interoperability issues when moving ad-hoc deployments, for example, those based on Zigbee, to IP-based networks, the IPv6 over Low-Power Wireless Personal Area Networks (6LoWPAN) standard has recently been proposed, as a result of the activities carried out within a dedicated IETF Working Group (WG). 6LoWPAN focuses on IoT delivery over IPv6, thus allowing seamless integration with IP-based systems [12]. In particular, 6LoWPAN has been the basis for another technology, named Thread, which is natively IP-addressable, unlike the previous technologies, which instead require dedicated extensions [13].
LPWAN systems have emerged more recently than LPSANs, and play a key role in delivering specific IoT services, generally referred to as massive Machine Type Communications (mMTC) [14]. When compared to LPSANs, LPWANs target wide-area coverage, that is, up to several kilometers, in a low cost/power consumption manner [15]. Two main LPWAN groups can be identified, that is, proprietary systems working in the unlicensed spectrum and 3GPP-standardized technologies, the latter exploiting the licensed cellular spectrum and architecture. The first group is well represented by the Long Range (LoRa) technology, a physical layer standard owned by Semtech and exploiting the LoRaWAN network protocol stack designed by the LoRa Alliance [16]. Moreover, it also includes Sigfox, developed by the French operator of the same name and globally provided by several associated partners [17], and the Weightless technology [18]. The second group includes 3GPP standards introduced in the 2016 Release 13 (Rel-13) and enhanced in the 2017 Release 14 (Rel-14), such as NarrowBand IoT (NB-IoT) [19], Long Term Evolution for Machines (LTE-M) [20], and Extended Coverage GSM IoT (EC-GSM-IoT) [21]. These technologies enable mMTC scenarios over the cellular architecture, toward the full deployment of the so-called cellular IoT (cIoT), which is a relevant use case for 5G and beyond cellular systems [22].

Related Work and Contribution
A primary target for all the above technologies is to deliver their services while taking into account energy efficiency and power consumption aspects, at both device and network levels. These aspects are extremely important for IoT development, and for this reason recent years have seen a rising research-and-development interest toward (a) the design and implementation of techniques and mechanisms for energy efficiency, including power saving modes at the device level and cooperation schemes across the network, and (b) the derivation of theoretical and empirical models for the power consumption and battery lifetime of the above classes of devices. The peculiar constraints and requirements of IoT systems and scenarios require significant extensions of mechanisms and theoretical analyses already implemented and derived for non-IoT wireless communication technologies, which nevertheless provide a reliable and valid starting point, as will also be discussed in the following [23][24][25][26][27][28].
This paper presents an extensive literature survey on the methodologies and approaches used and proposed to increase the energy efficiency, and thus lower the power consumption, of the most widespread IoT technologies. Moreover, the paper provides a deep analysis of proposed energy and power consumption models for the same technologies, and a classification of said methods. This categorization of techniques complements the various existing works that study each IoT technology as a singular entity, while at the same time attempting to create a full picture that includes all the technologies as part of one big IoT family. It also creates a landmark for future work aiming to study or improve energy efficiency, giving clear indications of previous work done in that context that can serve as a starting point for new research.
The first aspect has been already addressed, to some extent, by several studies. Some of the most notable ones are briefly discussed in the following, in order to highlight the main differences and clarify the motivation of the present work.
One of the first surveys targeting energy-related aspects was presented in [23], with a focus on WSNs and the main techniques to reduce the energy consumption of sensors. A classification of such techniques was also proposed, comprising three main categories:
• Duty cycling approaches, based on the idea that the radio transceiver should be turned off, entering a status generically identified as sleep mode, whenever it has no more data to send and/or receive (in contrast to the so-called active mode). Sensor nodes therefore alternate between active and sleep modes to conserve energy, with the current state of a node determined by network activity. This process is called duty cycling, where the duty cycle is the fraction of time that nodes spend in the active state during their lifetime;
• Data-driven approaches, designed to reduce the amount of data exchanged by the sensors while maintaining the level of sensing accuracy required by the application the sensors are used for. These methods reduce the energy consumption of the nodes in two ways: first, unneeded sampled data gathered by the nodes are not communicated to the network side; second, the amount of data to be sampled by the sensing subsystem is reduced;
• Mobility approaches, tailored to mobile WSNs and aiming at prolonging their lifetime. These methods rely on mobile nodes in the network to collect information from static nodes: the latter wait for a mobile device to pass nearby, and route the messages and data they have gathered to it. Communication thus happens between nodes that are very close to each other, saving the energy that would otherwise be spent sending messages over long distances.
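The impact of duty cycling on battery lifetime can be made concrete with a small sketch. The currents and battery capacity below are illustrative assumptions, not values from any specific sensor datasheet:

```python
# Hedged sketch: how the duty cycle drives average current and battery
# lifetime. All numbers (currents, battery capacity) are illustrative
# assumptions, not taken from any specific sensor datasheet.

def avg_current_ma(i_active_ma: float, i_sleep_ma: float, duty_cycle: float) -> float:
    """Average current of a node alternating between active and sleep modes.

    duty_cycle is the fraction of time spent in active mode (0..1).
    """
    return duty_cycle * i_active_ma + (1.0 - duty_cycle) * i_sleep_ma

def lifetime_hours(battery_mah: float, i_avg_ma: float) -> float:
    """Ideal battery lifetime, ignoring self-discharge and regulator losses."""
    return battery_mah / i_avg_ma

# Example: 20 mA active, 5 uA sleep, 1% duty cycle, 2400 mAh battery.
i_avg = avg_current_ma(20.0, 0.005, 0.01)   # ~0.205 mA on average
print(f"average current: {i_avg:.3f} mA")
print(f"lifetime: {lifetime_hours(2400, i_avg) / 24:.0f} days")
```

Even this simple model shows why duty cycling dominates the design space: cutting the duty cycle from 100% to 1% improves lifetime by roughly two orders of magnitude.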
Apart from the above classification, open challenges and issues related to each approach were also highlighted. Energy efficiency in WSNs was also discussed in [24], which extended [23] by providing a description of the metrics used to evaluate energy efficiency. In particular, energy-efficient Radio Resource Management (RRM) methods were analyzed, focusing on energy saving mechanisms in case of low traffic and making use of application-specific Quality of Service (QoS) requirements. Another aspect considered in [24] is energy efficiency for heterogeneous networks (i.e., networks of devices produced by different manufacturers, or of computers running different operating systems) and relay communications, which had not been widely studied before. State-of-the-art schemes for addressing energy efficiency in networks adopting MIMO and OFDM schemes were also discussed.
Energy efficiency techniques taking into account application-level requirements were also surveyed in [25], where an in-depth description of the main applications for WSNs was introduced first, before presenting standards and energy efficiency mechanisms for them. The mechanisms were classified into the following categories:
• Radio optimization approaches, focused on reducing power consumption at the radio module;
• Data reduction approaches, proposing techniques for reducing the amount of exchanged data;
• Sleep/wake-up approaches, aiming to optimize the sleep and active modes of the devices;
• Energy-efficient routing approaches, targeting energy savings by optimizing the adopted routing paradigms;
• Charging approaches, related to the use of energy harvesting and wireless charging techniques to boost battery charging.
Energy efficiency was also studied in [26] in the context of smart grids, highlighting mechanisms making use of renewable energy, in order to improve operational practices and deploy so-called green data centers. Being related to smart grids, the work covered aspects related to wireless, wired, and optical communications.
Electric Power and Energy Systems (EPES) were analyzed in [27], where a description of EPES was given first, before reviewing IoT-based EPES applications and services. In this context, a technical description and assessment of smart home applications was given, also highlighting the impact that IoT-based EPESs have on economy, society and environment. Challenges and possible solutions for IoT-based EPESs were also discussed.
Energy efficiency vs. coverage trade-offs for 5G cellular networks were discussed in [28]. After providing a definition of green communications, several schemes for energy efficiency were reviewed and classified into four categories:
• Resource allocation, aiming at improving energy efficiency through careful allocation of radio resources;
• Network planning and deployment, aiming at deploying infrastructure nodes that can maximize energy efficiency in the covered area;
• Energy harvesting and transfer, which focuses on harvesting energy from the environment and using it to operate the communication systems;
• Hardware solutions, which focus on the development and design of hardware that explicitly accounts for energy consumption.
In particular, the proposals include cooperative management between LTE and 5G systems, resulting in intelligent and dynamic on/off switching of 5G base stations, aiming to decrease the power consumption at the infrastructure side.
In this paper, we follow the line drawn by the above works, offering a novel and in-depth investigation in the context of short-area and wide-area IoT technologies. In particular, the present work provides:
• A review of the metrics used to evaluate energy efficiency;
• A taxonomy of methods and techniques addressing energy efficiency challenges for LPSANs and LPWANs, and the corresponding technologies;
• A review of power and energy consumption models proposed for LPSANs and LPWANs, and the corresponding technologies;
• A discussion of limitations and open issues for the aforementioned methods.
A comparison between existing reviews/surveys and this work is provided in Table 1.

Structure
The rest of the paper is organized as follows. Section 2 gives a description of IoT technologies and their classification as LPSANs and LPWANs, where the main technologies of each category are studied in more detail. Section 3 describes the metrics and parameters used when studying energy efficiency and energy consumption in general, and then introduces the energy and power consumption models used to evaluate the consumption of IoT technologies. Section 4 provides a classification and detailed description of the energy efficiency methods proposed and/or used for each technology. Section 5 highlights open challenges and issues for each method used in each technology. Conclusions are finally drawn in Section 6.

IoT Technologies
The focus of this section is to provide an overview of the LPSAN and LPWAN technologies considered in this work, starting from their main functionalities, features and characteristics. Energy efficiency and power consumption, being the focus of this work, are analyzed in detail in Sections 3 and 4. On the one hand, RFID (and NFC), Zigbee, and Bluetooth (and BLE) are reported as representative LPSAN technologies. On the other hand, Sigfox and LoRa (and LoRaWAN) are mentioned as proprietary LPWAN solutions, while NB-IoT, LTE-M, and EC-GSM-IoT are taken into account as 3GPP-standardized LPWAN technologies. A classification of the main IoT technologies studied in this paper is represented in Figure 1, while a comparison of the technologies described in the following, based on their main characteristics, is provided in Table 2.

Low-Power Short-Area Networks (LPSANs)
LPSAN technologies have found vast application in several IoT scenarios, such as emergency communications, industrial monitoring and maintenance, and smart environments, using lightweight protocols in order to reduce hardware complexity and device costs.

RFID and NFC
Taking its roots from bar-coding mechanisms, the RFID technology allows small amounts of data, for example, Identification (ID) signals, to be sent from a tag to a reader over very short distances, without in principle requiring Line-of-Sight (LoS) propagation. RFID systems operate at Low Frequency (LF), within the 125-134 kHz and 140-148.5 kHz ranges, High Frequency (HF), at 13.56 MHz, and Ultra-High Frequency (UHF), at 868 MHz and 915 MHz in Europe and the US, respectively. Microwave RFID can also operate at 2.4 GHz and higher frequencies [29].
RFID tags can be grouped into three categories [29]:
• Active tags, embedded with an on-board battery, which periodically transmit an ID signal;
• Battery-assisted tags, equipped with an on-board battery like active tags, but whose transmission is triggered only when they are in the coverage range of a reader;
• Passive tags, which are not equipped with an on-board battery and use the energy transmitted by the reader as a communication enabler. Passive tags need to be illuminated with a power level almost a thousand times stronger than the power needed for signal transmission in the case of active tags.
Based on the above categories, three main transmission mechanisms can be identified [29]:
• Active Reader Passive Tag (ARPT), where the tag does not emit radio signals, and the information is exchanged using the energy emitted by the reader. This solution is cost-effective but limits the system coverage to a few meters in terms of tag-reader distance;
• Passive Reader Active Tag (PRAT), where the tag periodically emits signals toward the reader(s) falling in its coverage area;
• Active Reader Active Tag (ARAT), where a bidirectional data exchange is established between tags and readers, for example, for transmitting information with acknowledgment.
In order to extend RFID functionalities, such systems have been more recently embedded with advanced communication and networking protocols, leading to so-called NFC technologies [30]. In the most common NFC setup, one of the two paired devices is also connected to the Internet, so that the other device can access and exchange data toward Internet-based services. NFC links always include a so-called initiator and a set of target devices, equipped with both passive and active communication modes [8].
NFC can operate at distances between devices of up to tens of centimeters within the unlicensed industrial, scientific and medical (ISM) band, and particularly in the HF spectrum around 13.56 MHz, offering data transmission rates of about 106, 212, 424 and 848 kbps, depending on the adopted combination of modulation and coding techniques. With regard to modulation, amplitude On/Off Keying (OOK) and Binary Phase Shift Keying (BPSK) schemes are mostly adopted; moreover, Amplitude Shift Keying (ASK) and Non-Return-to-Zero Level (NRZ-L) are also used, along with Manchester and Miller coding [31].
NFC devices can operate in three modes:
• Card Emulation: the NFC-enabled device acts like a smart card, allowing users to perform data transactions;
• Reader/Writer: the device reads the information stored on tags, which can be embedded in items of different nature (e.g., labels and posters); depending on the usage, a device can also write information on tags;
• Peer-to-Peer: a pair of NFC devices exchange information in both directions.

Zigbee
The Zigbee technology is based on the IEEE 802.15.4 standard suite, and it is adopted to create personal area networks for IoT applications, such as home automation, medical device data transmissions, and other services requiring low power and bandwidth consumption. Zigbee covers a distance of up to one hundred meters in LoS [32], offering data rates up to 250 kbps. It mainly operates in the 2.4 GHz ISM band, but also works at 784 MHz in China, 868 MHz in Europe, and 915 MHz in the US and Australia [33]. In the ISM band, a Zigbee system allocates a total of sixteen 2 MHz channels, with a spacing of 5 MHz, using Direct-Sequence Spread Spectrum (DSSS). In the 868 and 915 MHz bands, signals are modulated using BPSK or Offset Quadrature Phase-Shift Keying (OQPSK).
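The 2.4 GHz channel plan mentioned above (sixteen channels, 5 MHz apart) follows the IEEE 802.15.4 numbering, where the 2.4 GHz channels are indexed 11 to 26; a minimal sketch of the mapping:

```python
# Sketch of the IEEE 802.15.4 channel plan Zigbee uses in the 2.4 GHz ISM
# band: sixteen channels, numbered 11..26, spaced 5 MHz apart starting at
# 2405 MHz (per the IEEE 802.15.4 standard).

def zigbee_channel_mhz(channel: int) -> float:
    """Center frequency in MHz for 2.4 GHz channels 11..26."""
    if not 11 <= channel <= 26:
        raise ValueError("2.4 GHz Zigbee channels are numbered 11..26")
    return 2405.0 + 5.0 * (channel - 11)

channels = {ch: zigbee_channel_mhz(ch) for ch in range(11, 27)}
print(channels[11], channels[26])  # 2405.0 2480.0
```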
The Zigbee protocol layers are based on the Open System Interconnect (OSI) reference model [34], implementing only the layers needed for low-power, low-data-rate networking. The Zigbee standard defines the networking, application and security layers of the protocol [35], and adopts the physical and MAC layers from the IEEE 802.15.4 standard [36]. As shown in Figure 2, Zigbee defines three functional roles [37]:

• Coordinator, the most important element of the network, which is based on a tree topology. There is only one coordinator per network, initiating the network and selecting network configurations;
• Router, acting as an intermediate node that relays data from and to other devices;
• End Device, typically battery-operated, which collects information from sensors and transmits it to the coordinator. End devices can only communicate with the router and the coordinator, and can enter a sleep mode when there is no information to relay.
A Zigbee network is composed of two types of nodes, classified based on their processing capabilities:
• Full Function Device (FFD): a node that can play all functional roles;
• Reduced Function Device (RFD): a node that can only act as an end device.
In terms of network organization, Zigbee mostly uses a mesh topology, where the nodes connect directly to as many other devices as possible, and cooperate to efficiently route data in the network, aiming at low latency and high robustness. As reported in the latest specification, named Zigbee 2012, a Zigbee mesh network can provide access to more than 64,000 devices [38].
The communication protocols support both non-beacon and beacon-enabled networks. In a beacon-enabled network, nodes can transmit only in time slots predetermined by the coordinator, which allocates guaranteed time slots to each device and synchronizes transmissions using beacon signals that tell each device when it is its turn to transmit. In a non-beacon network, devices have no guaranteed time slots, are not synchronized when transmitting, and access the channel using unslotted Carrier Sense Multiple Access with Collision Avoidance (CSMA/CA). Zigbee can also use a star topology, consisting of one coordinator and many end devices, in which case it adopts a master-slave network model: the master is the coordinator, always an FFD node, while the slaves can be either FFD or RFD devices. In the star topology, nodes can only communicate with the coordinator.
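The beacon-enabled mode above is what lets end devices sleep between beacons. A hedged sketch of the timing, using the IEEE 802.15.4 Beacon Order (BO) parameter and the standard 2.4 GHz constants (16 µs symbols, 960-symbol base superframe); these constants come from the 802.15.4 standard, not from the text above:

```python
# Hedged sketch of beacon-enabled timing in IEEE 802.15.4 (inherited by
# Zigbee): the coordinator emits a beacon every Beacon Interval (BI), and
# devices may sleep in between. Constants are the standard 2.4 GHz values.

SYMBOL_S = 16e-6                 # symbol duration at 2.4 GHz
BASE_SUPERFRAME_SYMBOLS = 960    # aBaseSuperframeDuration

def beacon_interval_s(beacon_order: int) -> float:
    """BI = aBaseSuperframeDuration * 2^BO symbols, with 0 <= BO <= 14."""
    if not 0 <= beacon_order <= 14:
        raise ValueError("beacon order must be in 0..14")
    return BASE_SUPERFRAME_SYMBOLS * (2 ** beacon_order) * SYMBOL_S

print(f"BO=0  -> {beacon_interval_s(0) * 1000:.2f} ms")   # 15.36 ms
print(f"BO=14 -> {beacon_interval_s(14):.1f} s")          # ~251.7 s
```

Larger beacon orders stretch the interval between mandatory wake-ups, trading latency for a lower duty cycle.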

Bluetooth and BLE
Bluetooth is one of the most widespread technologies for data transmission over short distances, and can also be adopted in IoT scenarios, such as medical data transmissions and real-time location systems (RTLS), among others [39].
Bluetooth uses short-wavelength UHF radio waves in the 2.4 GHz ISM band, and its physical layer is based on specifications defined in the IEEE 802.15.1 standard [40]. In the initial design, Bluetooth used Gaussian Frequency Shift Keying (GFSK) modulation, leading to the so-called Basic Rate (BR) mode, with a bit rate of 1 Mbps; the introduction of Bluetooth 2.0+Enhanced Data Rate (EDR) led to data rates up to 3 Mbps. Bluetooth adopts Frequency Hopping (FH), with 1600 hops per second, over 79 carriers spaced 1 MHz apart, to guarantee coexistence with other technologies operating in the ISM band. Time Division Duplexing (TDD) is adopted within a Bluetooth network, with data transmission in the uplink and downlink directions taking place alternately in time [41,42].
A Bluetooth network, referred to as piconet, consists of one master and up to seven slave devices. Within a piconet, the master selects the hopping sequence and the transmission times to be used by slave devices. Multiple piconets can be coordinated to form a so-called scatternet, as shown in Figure 3.
Bluetooth devices can operate in one of four modes [41]:
• Active mode: the regular connected mode, where the device is transmitting or receiving data;
• Sniff mode: the slave device listens only on specified slots for messages that are meant for it;
• Hold mode: the device does not transmit data for a long time;
• Park mode: the device is temporarily deactivated, in order to allow its active member address to be re-assigned.
The design and deployment of Bluetooth Low Energy (BLE) was initiated as part of the Bluetooth 4.0 Core specification [43]. BLE inherits all the main features and functionalities of Bluetooth, such as the same pairing, authentication and encryption functionalities, as well as the master-slave model and the use of Frequency Hopping for coexistence. Unlike Bluetooth, BLE devices operate in sleep mode and awake only when a connection is initiated. Therefore, BLE offers a power consumption in the order of 5-10% of that of the original Bluetooth [44,45]. BLE operates in the 2.4 GHz spectrum like Bluetooth, but divides the available spectrum into 40 channels, with a 2 MHz spacing. Of these channels, three are the so-called Advertising Channels, used for device discovery, connection establishment, and broadcast transmissions; the remaining channels are Data Channels, used for bidirectional data exchange [46,47]. Similarly to Bluetooth, GFSK is used for signal modulation, achieving a transmission rate of about 1 Mbps [48]. Bluetooth and BLE are in general used for different purposes: Bluetooth is more suitable for applications requiring a relatively high throughput, while BLE is tailored to IoT applications requiring periodic data transmission, for example, building automation and lighting.
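The BLE channel plan above (40 channels, 2 MHz apart, three advertising channels) can be sketched as follows; the RF-channel numbering and the advertising-channel frequencies (2402, 2426, 2480 MHz, chosen to avoid the busiest Wi-Fi channels) follow the Bluetooth Core Specification:

```python
# Sketch of the BLE channel plan: 40 RF channels, 2 MHz apart, starting at
# 2402 MHz. Link-layer channels 37, 38, 39 are the advertising channels,
# placed to avoid the most heavily used Wi-Fi channels in the 2.4 GHz band.

ADVERTISING_FREQ_MHZ = {37: 2402, 38: 2426, 39: 2480}

def ble_rf_channel_mhz(rf_channel: int) -> int:
    """Center frequency of BLE RF channel 0..39."""
    if not 0 <= rf_channel <= 39:
        raise ValueError("BLE RF channels are numbered 0..39")
    return 2402 + 2 * rf_channel

assert ble_rf_channel_mhz(0) == 2402 and ble_rf_channel_mhz(39) == 2480
data_channels = 40 - len(ADVERTISING_FREQ_MHZ)
print(f"{data_channels} data channels, 3 advertising channels")
```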

Low-Power Wide-Area Networks (LPWANs)
LPWANs allow long range communications at a low bit rate among connected things, such as battery-operated sensors. Therefore, they require far fewer Access Points (APs) than LPSANs. Most of the LPWAN technologies reported in this paper made their first appearance around 2015, with LoRa/LoRaWAN gaining more and more support, followed by LTE-M and NB-IoT cellular technologies.

Proprietary Technologies in Unlicensed Spectrum

LoRa/LoRaWAN
LoRa is a long-range technology enabling IoT services in heterogeneous scenarios, for example, rural, dense urban, and deep indoor environments. It operates within the sub-Gigahertz unlicensed spectrum, including the 169 MHz, 433 MHz, 868 MHz (in Europe), and 915 MHz (in North America) frequency bands.
In particular, the term LoRa identifies a specific physical layer patented by Semtech, adopting a Chirp Spread Spectrum (CSS) modulation technique, where the signal frequency varies linearly over the transmission time interval T in a frequency range [f_0, f_1] [49]. The CSS modulation technique is based on chirp signals, using fixed-amplitude frequency modulation. The data rate and symbol rate of LoRa depend on the Spreading Factor (SF), which determines the duration of the chirp, and on the bandwidth that is used. A LoRa symbol is composed of 2^SF chirps that cover the entire frequency band. The symbol rate [symbol/s] is R_s = BW/2^SF, where BW = f_1 − f_0 is the bandwidth [50,51].
From the above formula it can be seen that the symbol time is increased (and the symbol rate decreased) by increasing the SF. Taking also the Coding Rate (CR) into account, the data rate [b/s] can be found as R_b = SF · (BW/2^SF) · CR. LoRaWAN does not allow device-to-device communications, and mainly works in uplink. However, if needed and required by specific applications, network servers can send downlink data and control packets to end devices. Different functionalities for bidirectional communications lead to the definition of three main classes of LoRaWAN devices [52]:
• Class A: each uplink transmission is immediately followed by two receive windows, in which the device can receive downlink information from the network servers. Class A devices only receive data as a consequence of their own transmissions;
• Class B: with respect to Class A, the devices reserve additional time slots for downlink communications, adopting the time synchronization provided by dedicated beacon messages from the gateways;
• Class C: with respect to the previous classes, these devices are always available to receive downlink messages, except when they are transmitting.
As specified in the standards, Class A is mandatory for each end device, while Classes B and C are optional.
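The symbol- and data-rate relations above can be sketched as follows, with the coding rate expressed as CR = 4/(4+n) for n = 1..4 redundancy bits:

```python
# Sketch of the LoRa rate relations: Rs = BW / 2^SF symbols/s, and
# Rb = SF * Rs * CR bits/s, with coding rate CR = 4/(4+n), n = 1..4.

def lora_symbol_rate(bw_hz: float, sf: int) -> float:
    """Symbol rate in symbols/s for bandwidth bw_hz and spreading factor sf."""
    return bw_hz / (2 ** sf)

def lora_bit_rate(bw_hz: float, sf: int, cr_denominator: int = 5) -> float:
    """Bit rate for coding rate 4/cr_denominator (cr_denominator in 5..8)."""
    return sf * lora_symbol_rate(bw_hz, sf) * (4.0 / cr_denominator)

# EU868-style examples: 125 kHz bandwidth, CR = 4/5.
for sf in (7, 12):
    print(f"SF{sf}: {lora_bit_rate(125e3, sf):.0f} b/s")
# SF7 -> ~5469 b/s, SF12 -> ~293 b/s
```

The two printed values illustrate the trade-off the text describes: each SF step roughly halves the data rate while extending range and time on air.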
The LoRaWAN network is composed of a network-side server connected to many LoRaWAN gateways, that act as APs for the end devices. End devices must be activated if they want to participate in a LoRaWAN network, via either Over-The-Air Activation (OTAA) or Activation By Personalization (ABP).
Aiming at leveraging performance vs. energy efficiency trade-offs, LoRaWAN provides the so-called Adaptive Data Rate (ADR) mechanism. ADR works by adjusting the SF, and thus the data rate, according to the estimated distance between a node and a gateway: nodes closer to the gateway use a higher data rate. ADR uses two parameters, ADR_ACK_LIMIT and ADR_ACK_DELAY, to monitor and control the number of unacknowledged uplink messages [51]. If no downlink response is detected within a certain number of uplinks, connectivity is assumed lost, and the end device must increase either the SF or the transmission power in order to re-establish a more reliable connection with the network.
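The device-side backoff just described can be sketched as a small counter-based state machine. The defaults of 64 and 32 follow the common LoRaWAN regional defaults for ADR_ACK_LIMIT and ADR_ACK_DELAY; the class below is an illustration of the idea, not the normative state machine from the specification:

```python
# Hedged sketch of device-side ADR backoff: count unacknowledged uplinks;
# once ADR_ACK_LIMIT is exceeded, every ADR_ACK_DELAY further silent uplinks
# step back to a more robust (higher-SF) data rate. Illustrative only.

ADR_ACK_LIMIT = 64
ADR_ACK_DELAY = 32
MAX_SF = 12

class AdrDevice:
    def __init__(self, sf: int = 7):
        self.sf = sf
        self.adr_ack_cnt = 0

    def on_uplink_no_downlink(self) -> None:
        """Called after each uplink that received no downlink response."""
        self.adr_ack_cnt += 1
        over = self.adr_ack_cnt - ADR_ACK_LIMIT
        if over > 0 and over % ADR_ACK_DELAY == 0 and self.sf < MAX_SF:
            self.sf += 1  # fall back to a slower, longer-range data rate

    def on_downlink(self) -> None:
        self.adr_ack_cnt = 0  # any downlink proves connectivity is alive

dev = AdrDevice(sf=7)
for _ in range(ADR_ACK_LIMIT + ADR_ACK_DELAY):
    dev.on_uplink_no_downlink()
print(dev.sf)  # 8: one step back after LIMIT + DELAY silent uplinks
```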

Sigfox
Similar to LoRa (LoRaWAN), the Sigfox technology enables IoT long-range applications, with low-cost end devices exchanging small amounts of data with network applications, particularly in uplink [53].
Sigfox operates in the ISM spectrum, centered at 868 MHz in Europe and 902 MHz in the US. Sigfox signals are characterized by Ultra NarrowBand (UNB) spectrum occupation, that is, 100 Hz, DBPSK modulation and around 100 bps in uplink, and 1.5 kHz, GFSK modulation and around 600 bps in downlink [53].
Sigfox devices emit in the available frequency band, and the closest base station detects, decodes and forwards the signal to the network back-end. Bidirectional communication is possible but, similarly to Class A LoRaWAN, the communication is always initiated in uplink.
Sigfox uses the so-called Random Frequency and Time Division Multiple Access (RFTDMA), which allows active nodes to randomly access time and frequency resources. Uplink transmissions are scheduled by adopting a so-called Fire and Forget model [54]: in order to increase transmission reliability, packets are sent by the device on three different frequencies at three different times, and correct reception is not acknowledged. The MAC layer adds device identification/authentication (HMAC) and error correcting code (ECC), but no signaling and control mechanisms are provided. A unique device ID is used for authentication and correct data routing.
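The cost of the Fire and Forget model can be estimated with a rough sketch: at 100 b/s in uplink, each logical message is on air three times. The 26-byte total frame size for a 12-byte payload and the 25 mA at 3.3 V transmit figures below are illustrative assumptions, not values from the Sigfox specification:

```python
# Rough, hedged estimate of Sigfox uplink time-on-air and energy per message.
# The frame size and transmit current/voltage are illustrative assumptions.

UPLINK_BPS = 100   # uplink bit rate from the text
REPETITIONS = 3    # Fire and Forget: each frame sent on 3 frequencies

def uplink_airtime_s(frame_bytes: int) -> float:
    """Total on-air time for one logical uplink, including repetitions."""
    return REPETITIONS * frame_bytes * 8 / UPLINK_BPS

def uplink_energy_j(frame_bytes: int, tx_current_a: float, supply_v: float) -> float:
    """Transmit-only energy estimate, ignoring MCU and standby consumption."""
    return uplink_airtime_s(frame_bytes) * tx_current_a * supply_v

airtime = uplink_airtime_s(26)            # 6.24 s on air per message
energy = uplink_energy_j(26, 0.025, 3.3)  # ~0.51 J per message
print(f"airtime: {airtime:.2f} s, energy: {energy:.3f} J")
```

Even this crude estimate shows why Sigfox caps the number of daily uplinks: several seconds of transmission per message dominates a device's energy budget.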

Cellular IoT
Cellular IoT (cIoT) technologies constitute one of the two categories of LPWAN technologies. Cellular LPWANs extend the existing cellular systems, allowing the reuse of the same architecture for long-range IoT scenarios and applications, and in particular mMTC. They operate in the licensed spectrum, ensuring reliability while using a consistent and standardized infrastructure. There are three types of cIoT technologies, that is, NB-IoT, LTE-M and the less used EC-GSM-IoT. All three technologies derive from cellular networks and share some common characteristics. The first is the presence of extended coverage mechanisms, in compliance with 3GPP TR 45.820 [64], defining coverage classes related to the quality of the network signal experienced by the IoT device. A second common characteristic is the presence of two common Radio Resource Control (RRC) modes, that is, Idle (the UE is not connected and there is no radio link, but the UE is registered to the network and can be reached if needed) and Connected (the UE is connected and the network can transmit/receive data to/from it). Finally, cIoT technologies employ similar operations when a device requests to initiate a data transfer. This process happens in three phases [56]: (1) Idle mode operation; (2) the system access procedure, with the device initiating a connection to the network; and (3) resource assignment by the network to the device, concluding with the device entering Connected mode. While in Idle mode, several procedures can take place that must be taken into account when analyzing and modeling energy consumption:

• Cell Selection: the device performs a full scan of the supported frequency bands and collects the information needed to identify the most suitable cell with respect to a set of criteria set by both device and network;
• Cell Reselection: during cell selection, the device selects a suitable cell and camps on it; at a later time, the device may need to leave the cell it is connected to and select a new one;
• System Information Acquisition: after selecting and camping on a suitable cell, the device needs to acquire the full set of System Information (SI), grouped into messages called System Information Blocks (SIBs) [56]; in LTE-M and NB-IoT, SIB1 and SIB2 are the SIBs containing the most important SI;
• Paging: a procedure the network uses to determine the location of the subscriber before a data exchange can be established. It is also used to alert the device of an incoming data transmission. If the device is in Idle mode and a connection must be re-established, the base station broadcasts a paging message over the device tracking area, which can include several cells. Since the base station pages several users at a time, the paging message contains a set of different IDs; when the device decodes its own ID, it starts moving from Idle to Connected mode;
• Random Access Procedure: allows a device to identify itself to the network and establish a connection; in essence, this procedure allows the device to transit from Idle mode to Connected mode;
• Access Control: a mechanism used to protect cellular networks from overuse; it is initiated in special situations, for example, during network power outages.
Beyond Idle and Connected modes, devices can move to other modes that are particularly relevant from the point of view of energy efficiency. In particular, a device can enter the Power Saving Mode (PSM), during which it does not apply any of the Idle mode operations, e.g., it does not transmit or monitor paging messages and, in general, it uses the smallest possible amount of energy. PSM is discussed in detail in Section 4.
Although the three cIoT technologies share the characteristics discussed above, they also possess different capabilities and provide specific features. Therefore, we briefly discuss them in the following dedicated paragraphs.

LTE-M
LTE-M is a 3GPP-standardized LPWAN technology, allowing a full reuse of the cellular infrastructure. LTE-M coexists with 2G, 3G, and 4G mobile networks, and benefits from all LTE-native security and privacy features, such as support for user and mobile equipment authentication and confidentiality, and data integrity [57,65]. LTE-M devices are best suited to mission-critical applications requiring real-time data transfer, for example, emergency communications in smart cities.
LTE-M supports both TDD and Frequency Division Duplex (FDD) modes, using a subframe structure of 1 ms, making it possible to achieve low-latency data exchange. LTE-M offers extended and improved coverage, also for indoor scenarios, and is able to support Voice over LTE with device costs comparable to GSM, ultimately being attractive for different IoT applications. The LTE-M signal bandwidth was initially limited to 1.4 MHz, and the main supported features were frequency hopping and subframe repetitions. Release 14 introduced a 5 MHz bandwidth option, offering peak data rates of 384 kbps for downlink, and up to 1 Mbps for uplink, with a latency between 50 and 100 ms.
Being an LTE extension, several LTE features are inherited in LTE-M, for example, Orthogonal Frequency Division Multiple Access (OFDMA) as downlink and Single Carrier Frequency Division Multiple Access (SC-FDMA) as uplink access schemes, with equivalent settings in terms of subcarrier spacing, cyclic prefix (CP) lengths, resource grid, and frame structure, to mention a few [58]. This results in LTE-M and LTE transmissions coexisting in the same LTE cell, on the same LTE carrier, and a possible dynamic sharing of resources. LTE-M uses the same channels as LTE and LTE-M physical channels and signals are transmitted using six Physical Resource Blocks (PRBs). The LTE-M frame consists of one hyperframe cycle, which has 1024 hyperframes, each hyperframe containing 1024 frames. One frame consists of 10 subframes, each divided into two slots of 0.5 ms, where each slot is divided into 7 OFDM symbols when we have a normal CP length, and 6 symbols in the case of extended CP length [56].
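The frame-structure durations implied by these numbers can be checked with a quick calculation:

```python
# Durations implied by the LTE-M frame structure described above.
SLOT_MS = 0.5
SLOTS_PER_SUBFRAME = 2
SUBFRAMES_PER_FRAME = 10
FRAMES_PER_HYPERFRAME = 1024
HYPERFRAMES_PER_CYCLE = 1024

frame_ms = SUBFRAMES_PER_FRAME * SLOTS_PER_SUBFRAME * SLOT_MS   # 10 ms per frame
hyperframe_s = FRAMES_PER_HYPERFRAME * frame_ms / 1000.0        # 10.24 s per hyperframe
cycle_s = HYPERFRAMES_PER_CYCLE * hyperframe_s                  # ~10,485.76 s per cycle

print(frame_ms, hyperframe_s, cycle_s)
```

One full hyperframe cycle thus spans roughly 2.9 hours, which is what allows cIoT devices to schedule very long sleep periods against the frame timeline.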
The physical layer provides services for data transport to the higher layers, using the transport channels via the MAC layer, which in turn transports data through the use of logical channels.

NB-IoT
NB-IoT is another 3GPP-standardized technology for cellular LPWAN, mainly devoted to uplink transmission of small amounts of infrequent information by a massive amount of devices, for example, for smart agriculture and smart city scenarios [59]. Differently from LTE-M, NB-IoT is a brand new radio technology, and for this reason it is not fully backward compatible with existing LTE devices, even if it is designed to achieve high coexistence with LTE, GSM, and General Packet Radio Service (GPRS) technologies. With respect to LTE, new physical layer signals and channels are designed, in order to meet the demanding requirements in terms of extended coverage and low device complexity [57,60,61].
NB-IoT signals occupy a bandwidth of 180 kHz, that is a single LTE PRB, adopting a QPSK modulation over OFDMA, with 15 kHz sub-carrier spacing, in downlink, and BPSK or QPSK modulation over SC-FDMA in uplink, with configurable sub-carrier spacing of either 3.75 kHz or 15 kHz.
NB-IoT supports FDD transmission mode since Rel-13, with TDD introduced in Rel-14. With respect to the spectrum occupied within an LTE carrier, three different deployments are possible:
• Stand-alone: one or more GSM carriers are used to carry NB-IoT traffic, making it possible for operators to ensure a smooth transition for mMTC services;
• Guard-band: NB-IoT devices occupy the guard bands of LTE carriers, thus avoiding interference and coexistence issues;
• In-band: NB-IoT devices operate within a single dedicated PRB of an LTE carrier, thus requiring coordination with LTE.
In order to exchange data with the network, an NB-IoT device first performs the random access procedure, usually denoted as RACH, which includes the RRC connection setup. When a device requires an uplink/downlink service, it first listens to cell information, and then sends a random access request to the base station. The base station responds by sending a random access response (RAR), indicating the resources reserved for that device. As a result of the random access, the device enters the RRC Connected mode, where it receives on the narrowband physical downlink control channel (NPDCCH) both the information on the resources allocated for data reception and the data packets.

EC-GSM-IoT
EC-GSM-IoT is another cellular LPWAN technology. It is built upon GSM networks and existing cellular base stations with a simple software update [56]. This allows EC-GSM-IoT networks to coexist with 2G, 3G, and 4G mobile networks. EC-GSM-IoT also benefits from all the security and privacy features of traditional cellular networks, such as support for user identity confidentiality, entity authentication, confidentiality, data integrity, and mobile equipment identification. GSM uses a combination of FDMA and TDMA as multiple access schemes [56]. The channels occupy 200 kHz and have an absolute placement, called channel raster, which is also defined in steps of 200 kHz. Following GSM, EC-GSM-IoT has TDMA frames that are divided into eight time slots, with such frames grouped in hierarchical structures denoted as multiframe, superframe and hyperframe. EC-GSM-IoT inherits from GSM the burst as the basic transmission unit, with several burst types defined. EC-GSM-IoT has the same symbol rate as GSM of roughly 270.83 ksymbols/s, and uses 8PSK modulation, among other possible solutions. EC-GSM also uses a blind transmission technique, where the transmitter performs a predefined number of transmissions without needing any feedback from the receiving devices [63].

Energy Efficiency and Consumption Metrics and Models
In this section, we provide a list of the metrics commonly used to evaluate energy efficiency and consumption of IoT technologies. Next, we discuss relevant literature proposing power/energy consumption models for the set of IoT technologies under study. The methods proposed for improving and enhancing the energy efficiency are thus categorized and discussed throughout Section 4.

Energy Efficiency and Consumption Metrics
In this section, we describe the main metrics that are commonly used to study and evaluate energy efficiency and consumption in IoT devices and networks. Such metrics can be categorized as follows [66]:
• Energy per correctly received bit: the amount of energy spent to deliver one bit of information from a source (e.g., an IoT device) to a destination (e.g., another device or the network);
• Energy per reported event: the energy needed to report an event happening in the network. For example, if one device receives information deemed urgent, its transmission takes precedence over the information of other devices; transmitting it requires extra energy, since the transmission is unexpected. In parallel, the transmissions of the other devices are halted until the urgent data is fully transmitted, so the normal flow of transmissions is interrupted and resumes only afterwards, with the other devices kept in a waiting state. This metric is particularly relevant in specific IoT applications, for example, health care services, transportation systems (e.g., when information on accidents must be transmitted), and monitoring and alerting systems;
• Delay/energy trade-off: the time it takes to report an urgent event happening in the network/system. As explained for the previous metric, when an event happens, the network has to prioritize its data over other data, introducing a delay both in transmitting the other data and in returning the network to normal operation;
• Bits-per-Joule system capacity: commonly used to measure the throughput of the system per unit of consumed energy [24]. The bits-per-Joule capacity increases as the number of nodes in the network increases, as shown in [67]; this metric is studied in more detail in [68,69,70,71], among others;
• Battery lifetime: a device-centric metric, inversely proportional to the device energy consumption;
• Network lifetime: a global network characterization. It can be defined as the operational time during which the network is able to perform its dedicated task(s), or as the time until the first device or group of devices in the network runs out of energy. A short network lifetime implies the need to replace (or recharge, if possible) IoT devices more often, leading to higher network management costs;
• Duty cycle: the fraction of time in which a node is active, which makes it an important metric in studying energy efficiency. A network is usually duty-cycled to ensure long node and network lifetime, with nodes in sleep mode most of the time and their radios turned off. When a transmission occurs, the nodes have to wake up and get ready to receive it, consuming energy for the time it takes to wake up, and then return to sleep mode after the transmission has ended. As the duty cycle of a node increases, its energy consumption also increases, decreasing the node's lifetime.
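As a toy illustration, the sketch below computes three of these metrics for a single device: energy per bit, bits-per-Joule capacity, and the average power of a duty-cycled node. All parameter values are illustrative, not taken from any surveyed technology:

```python
# Per-device versions of three of the metrics listed above.
def energy_per_bit(tx_power_w, bit_rate_bps):
    """Energy (J) spent per transmitted bit."""
    return tx_power_w / bit_rate_bps

def bits_per_joule(throughput_bps, power_w):
    """System throughput per unit of consumed energy (bit/J)."""
    return throughput_bps / power_w

def avg_power(duty_cycle, p_active_w, p_sleep_w):
    """Average power of a duty-cycled node (W)."""
    return duty_cycle * p_active_w + (1 - duty_cycle) * p_sleep_w

e_bit = energy_per_bit(0.1, 50_000)    # 100 mW radio at 50 kbps -> 2 uJ/bit
cap = bits_per_joule(50_000, 0.1)      # 500,000 bit/J
p = avg_power(0.01, 0.1, 1e-6)         # 1% duty cycle: dominated by active power
```

The last line makes the duty-cycle point from the text concrete: even at a 1% duty cycle, the active radio dominates the average power, since the sleep current is orders of magnitude smaller.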

Energy and Power Consumption Models
In this section, we provide a description of models proposed for estimating the energy/power consumption of (some of) the IoT technologies described in Section 2. In particular, we first focus on LPSANs, reporting the models proposed for Zigbee, Bluetooth and BLE; we then move on LPWANs, and provide a review of consumption models for LoRaWAN, Sigfox, LTE, and NB-IoT.
As analyzed in [72], a generic IoT device switches back and forth across five main operation modes, each with a different power consumption level:
• From Sleep to Wake-up: the time it takes for an IoT device to switch from sleep mode, in which it is neither transmitting nor receiving, to Wake-up mode;
• Wake-up: the device is connected to the network and able to perform data exchange with the IoT network infrastructure and/or other devices;
• Sensing: the device scans its surroundings for possible data exchange;
• Processing: the sensors embedded in the IoT device collect and possibly (pre)process the gathered data, in order to later initiate a transmission toward the network infrastructure and/or other devices nearby;
• Transmission: the device transmits gathered and processed data.
A preliminary analysis of the energy consumption of the above modes is presented in [73]. It is shown that Sensing and Processing modes have the lowest and highest energy consumption, respectively, while the consumption during the Transmission mode depends on the specific applications and services delivered by the device.

LPSANs Zigbee
A model for the energy consumption of a Zigbee node is presented in [74]. It assumes that the network operates with one-hop transmissions between two nodes. Hence, the node consumption for communicating k information bits, accounting for both the transmitting and the receiving node, is modeled as:

E(k) = V_r · (I_t + I_r) · k / R_b,

where I_t and I_r are the transmit and receive currents, V_r is the transceiver supply voltage, R_b is the bit rate, and k is the number of transmitted bits. Based on the above formula, the total energy consumption for transmitting a data packet through a wireless link, including a Header (H) and a Frame Check Sequence (FCS), is given by:

E_pkt = V_r · (I_t + I_r) · (k + H + FCS) / R_b.

The energy efficiency can then be evaluated as:

η = r · k / (k + H + FCS),

where r represents the reliability of the wireless link in terms of packet acceptance rate.
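The one-hop model can be sketched numerically as follows; the currents, supply voltage, and packet sizes below are illustrative values, not those used in [74]:

```python
# Sketch of the one-hop Zigbee consumption model: energy scales with the
# total number of bits on air divided by the bit rate.
def link_energy(k_bits, i_t, i_r, v_r, r_b):
    """Energy (J) to send and receive k bits over one hop."""
    return v_r * (i_t + i_r) * k_bits / r_b

def packet_energy(k_bits, h_bits, fcs_bits, i_t, i_r, v_r, r_b):
    """Total energy for a packet including header and FCS overhead."""
    return link_energy(k_bits + h_bits + fcs_bits, i_t, i_r, v_r, r_b)

def energy_efficiency(k_bits, h_bits, fcs_bits, reliability):
    """Reliability-weighted fraction of the packet carrying payload bits."""
    return reliability * k_bits / (k_bits + h_bits + fcs_bits)

# 100-byte payload, 12-byte header, 2-byte FCS, 250 kbps PHY:
e = packet_energy(800, 96, 16, i_t=0.030, i_r=0.027, v_r=3.0, r_b=250_000)
eff = energy_efficiency(800, 96, 16, reliability=0.95)
```

The sketch shows why small payloads are inefficient in this model: header and FCS overhead is fixed, so the efficiency term k/(k + H + FCS) shrinks as k decreases.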

Bluetooth and BLE
A model that evaluates the energy consumption of a Bluetooth network is given in [75], where a Markov chain is used for the analysis. A later work in [76] provides instead a model taking into account Bluetooth-specific hardware characteristics. This model has been further extended in [77], where the Bluetooth network composition in terms of scatternets is taken into account. In [77], the power model for Bluetooth assumes that each node in a scatternet can be in active or low-power sniff modes (as mentioned in Section 2, see Bluetooth subsection). The authors validate the model over different scatternets and real devices. Hold and Park modes are however not included in the model.
As regards BLE, an empirical model based on real measurements is proposed in [78]. In particular, a power monitor is used to measure the power consumption of BLE devices in two scenarios: (1) before a connection is formed between slave and master nodes, that is, taking into account the energy consumption of these two separately; and (2) once the master/slave connection is established. Measurements confirm that BLE has a rather low consumption and fulfills its low-consumption goals. In the same work, the consumption of a BLE network is modeled in order to evaluate the trade-off between energy consumption, latency, and piconet size, taking into account the following parameters: (1) connSlaveLatency, that is, the number of consecutive connection events during which a slave is not required to listen to the master and can keep its radio off; (2) connInterval, that is, the time between the beginning of two consecutive connection events; and (3) scanWindow, that is, the time during which a device (master or slave) scans a data exchange channel. The model allows the authors in [78] to conclude that bit errors significantly affect the energy consumption. As a matter of fact, a slave receives a packet from the master at the beginning of a connection event. When this does not happen (due to bit errors), the slave is forced to keep on listening until the packet is received correctly, which ultimately causes a consumption increase.
Based on [78], a BLE energy consumption model is also defined in [79], focusing in particular on the impact of device discovery. The model considers (1) the energy consumption caused by the mechanism adopted for Advertisement events [80], during which a BLE device transmits the same packet on three advertising channels in a row in order to increase the probability of being discovered by a nearby device; (2) the total energy consumption in the Transmitter and Receiver devices during data exchange; and (3) the energy consumption required to switch across channels. Results show that the model can accurately predict energy consumption due to advertisement events, and provide a detailed analysis of the impact of settings adopted in advertisement procedures on energy consumption levels.
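As a rough illustration of the advertisement-event consumption modeled in [79], the sketch below sums the cost of three back-to-back channel transmissions plus the two channel switches between them; all timing and current values are assumptions, not measurements from [79]:

```python
# Energy of one BLE advertising event: the same packet is transmitted on
# the three advertising channels in a row, with a channel switch between
# consecutive transmissions.
def adv_event_energy(v=3.0, i_tx=0.008, i_switch=0.005,
                     t_tx_s=376e-6, t_switch_s=150e-6, n_channels=3):
    """Energy (J) of one advertising event across n_channels channels."""
    tx = n_channels * i_tx * t_tx_s               # three packet transmissions
    switch = (n_channels - 1) * i_switch * t_switch_s  # channel switching cost
    return v * (tx + switch)

e_event = adv_event_energy()
e_day = e_event * 86_400   # daily cost if advertising once per second
```

Even with these toy numbers, the structure matches the model's three components: per-channel transmission energy, channel-switch energy, and the multiplying effect of the advertising rate.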

LPWANs
A study on the energy consumption of LPWANs is presented in [81], using the first-order radio energy dissipation model introduced in [82], and also applied later in [83] and [84]. The battery life of transceivers is estimated using the E91-AA alkaline battery model. It is stated that most of the device energy is consumed to transmit and receive k-bit messages over a distance d, but also for signal processing operations, including (de)modulation, (de)spreading, and (de)coding. Moreover, the power amplifier is another element dissipating significant energy amounts, depending on environmental factors such as humidity, fog, rain, and transmitter/receiver distance. The following equation is used to evaluate the energy consumed by the transmitter:

E_Tx(k, d) = E_Tx-elec · k + E_Tx-amp · k · d²,

where E_Tx-elec and E_Tx-amp denote the per-bit radio circuitry and amplifier consumption, respectively. E_Tx-elec depends on the techniques used to process the signal, such as modulation, spreading and coding, while E_Tx-amp depends on the environmental factors, one of which is the distance between transmitter and receiver.
At the receiver, the consumption is due to the radio operations for detecting the signal. Hence, it can be expressed as a function of the number of bits in the k-bit messages and the consumption of the radio circuitry E_elec:

E_Rx(k) = E_elec · k.

Different technologies, such as LoRa, LTE-M, NB-IoT, and Sigfox, were tested in order to compare energy consumption and battery lifetime. The results showed that LTE-M has the highest energy consumption. For distances less than 2 km, the energy consumption of the technologies is approximately the same, while at distances d ≥ 2 km, significant gaps appear between technologies, with LTE-M and LoRa resulting as the highest and lowest energy-consuming technologies, respectively.
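The first-order radio model above can be sketched directly; the electronics and amplifier coefficients below are typical textbook values for this model, not the ones used in the surveyed papers:

```python
# First-order radio energy dissipation model: transmit energy has a fixed
# per-bit electronics term plus an amplifier term growing with distance^2.
E_ELEC = 50e-9    # J/bit, radio circuitry (same for TX and RX)
E_AMP = 100e-12   # J/bit/m^2, free-space amplifier coefficient

def e_tx(k_bits, d_m):
    """Energy (J) to transmit a k-bit message over distance d."""
    return E_ELEC * k_bits + E_AMP * k_bits * d_m ** 2

def e_rx(k_bits):
    """Energy (J) to receive a k-bit message."""
    return E_ELEC * k_bits

short = e_tx(1000, 100)    # amplifier term still small at 100 m
long = e_tx(1000, 2000)    # amplifier term dominates at 2 km
```

The quadratic distance term explains the result reported above: below roughly 2 km the electronics term dominates and the technologies look similar, while at longer ranges the amplifier term drives the gaps apart.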

LoRaWAN
A LoRaWAN energy consumption model for Class A devices is given in [85], taking into account settings such as data rate, payload size and the impact of channel conditions as measured by BER. The model has roots in two previous works on LoRaWAN energy consumption, that is, (1) the work in [86], which focuses on modeling the consumption of sleep, transmit, and receive states, and evaluates the daily battery consumed by a device; and (2) the work in [87], which presents an evaluation of the energy consumed during the activation of a LoRaWAN node. The model was introduced to assess the difference in energy consumption between acknowledged vs. unacknowledged transmissions, taking into account the consumption during transmission, reception and sleep periods.
A LoRaWAN testbed was used to test the model, using the MultiConnect mDot platform from Multitech [88]. Results show an almost exact match between the analytical model and the testbed measurements, and highlight the strong impact of the acknowledgment mechanisms in reducing energy consumption, as explicit acknowledgment by the receiver allows a transmitter to enter a sleep mode and save energy.

Sigfox
A model for the current consumption of a Sigfox device is proposed in [89], based on measurements from a real Sigfox device. It is assumed that the device periodically transmits uplink data messages, and the derived model also takes into account data frame losses. First, the model focuses on unidirectional communication. The average current consumption is modeled over one period, since it is assumed that the device periodically performs an information exchange. This means that each period consists of the operations performed by the device to transmit one data message, including its replicas. In this period, the device transits over several states, similarly to the modeling approach adopted in [85] for a LoRaWAN device. The model evaluates the average current consumption of the device as follows:

I_avg-uni = (1 / T_Period) · Σ_i n_i · T_i · I_i, with i = 1, …, N_states-uni,

where T_Period is the time between two consecutive unidirectional communications (also, transactions), and T_i and I_i are the duration and current consumption of state i. Moreover, n_i is the number of times state i is present in the uplink data frame transmission, and N_states-uni is the total number of states, that is, 5 in this case. Using the average current consumption I_avg-uni and T_Period, the model can evaluate the energy consumed during the unidirectional communication, which can be obtained as follows:

E_uni = V · I_avg-uni · T_Period / E[I_delivery],

where V is the voltage of the battery, while E[I_delivery] represents the amount of data that is expected to be delivered by the device. Periodic bidirectional communications (in which data is both transmitted and received by the device) are also studied. A bidirectional transaction consists of 9 states and extends the unidirectional case. The average current consumption is given as follows:

I_avg-bi = (1 / T_Period) · Σ_i n_i · T_i · I_i, with i = 1, …, N_states-bi,

where the variables have definitions similar to the previous case.
The results of the analysis show that the average current consumption decreases with the transaction period, with an asymptotic value equal to the current consumption in the sleep state. Unidirectional communication outperforms bidirectional communication in terms of current consumption. This is related to the fact that a unidirectional transaction involves fewer states than a bidirectional one, and its communication period is lower.
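The per-period averaging idea of [89] can be sketched as follows; the state list, durations, and currents are illustrative assumptions, not the measured Sigfox values:

```python
# Average current over one Sigfox-style transaction period: each state i
# contributes n_i occurrences of duration T_i at current I_i.
def avg_current(states, t_period_s):
    """states: list of (n_i, T_i_seconds, I_i_amps); returns I_avg in A."""
    return sum(n * t * i for n, t, i in states) / t_period_s

def energy_per_period(v, i_avg, t_period_s):
    """Battery energy (J) drawn over one period at voltage v."""
    return v * i_avg * t_period_s

T_PERIOD = 600.0   # one uplink message every 10 minutes
states = [
    (1, 0.05, 0.010),              # wake-up
    (3, 2.0, 0.050),               # three uplink frame replicas
    (1, T_PERIOD - 6.05, 2e-6),    # sleep for the rest of the period
]
i_avg = avg_current(states, T_PERIOD)
e = energy_per_period(3.3, i_avg, T_PERIOD)
```

Stretching T_PERIOD in this sketch reproduces the reported trend: as the period grows, the fixed transmission cost is averaged over more sleep time, and I_avg approaches the sleep current.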

Cellular IoT
An energy consumption model for mMTC devices is presented in [90], considering a single base station providing access to a large amount of uniformly distributed, machine-type nodes. The model uses the network lifetime as its consumption metric. The uplink packet generation is modeled as a Poisson process. For the ith node, the average payload size is D_i, and the power consumption in transmission mode is ξP_i + P_c, with P_c the power consumed by the electronic circuits, ξ the inverse power amplifier efficiency, and P_i the transmit power. Then, the average energy consumption per reporting period, that is, the period of time an mMTC device is transmitting data, is evaluated as follows:

E_i = (ξ P_i + P_c) · D_i / R_i,

where R_i is the average transmission rate.
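The per-report energy E_i follows directly from its definition, since D_i / R_i is the average time on air per report; the parameter values below are illustrative:

```python
# Average energy per reporting period for the i-th mMTC node:
# E_i = (xi * P_i + P_c) * D_i / R_i.
def energy_per_report(d_bits, r_bps, p_tx_w, p_circuit_w, xi):
    """Energy (J) consumed per report of d_bits at rate r_bps."""
    t_on_air = d_bits / r_bps                 # average transmission time (s)
    return (xi * p_tx_w + p_circuit_w) * t_on_air

e_i = energy_per_report(d_bits=1000, r_bps=10_000,
                        p_tx_w=0.2, p_circuit_w=0.05, xi=2.0)
```

The formula makes the scheduler's lever visible: raising R_i (e.g., by allocating better resources to a node) shortens the time on air and cuts E_i proportionally, which is what lifetime-aware scheduling exploits.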
In [90] an energy-efficient scheduling algorithm, referred to as ExpAlg is also proposed. The algorithm separately addresses two specific subproblems: (1) to satisfy the minimum resource requirement for the set of high-priority nodes to be scheduled at time t and (2) to provide resources to all nodes, taking into account the impact on the network lifetime. The goal of the ExpAlg algorithm is to find the nodes that have the highest impact on the network lifetime. Once they are identified, the scheduler allocates contiguous resource elements to them, in order to maximize the network lifetime. Simulation results showed that the proposed scheme significantly extends network lifetime, at the same time taking into consideration the remaining battery lifetime of the nodes, the priority class of traffic, and transmission-dependent and independent energy sources.

LTE-M
As mentioned in Section 2, LTE-M is the cellular IoT technology most similar to LTE under several characteristics, including energy consumption aspects. For this reason, and also considering the lack of LTE-M specific models in current literature, we briefly address in the following one of the most significant works on power consumption modeling for LTE User Equipment (UE).
In [91], LTE UE physical layer components are analyzed, and their impact on the total power consumption is derived. The proposed model takes into account transmit/receive power levels, uplink and downlink data rates, consumption of RRC modes (Idle vs. Connected, see Section 2.2.2) and consumption due to baseband (BB) operations, that is, transmit/receive BB phases, which focus on turbo-encoding and decoding of the UE data.

NB-IoT
A model for NB-IoT energy consumption was presented in [92], taking into account the impact of the Power Saving Mode (PSM, mentioned in Section 2). Uplink and Downlink packet arrival rates are modeled according to a Poisson model. The model was developed to analyze the power consumption when the device moves between RRC Idle, Connected and PSM, during three typical operative cycles:
• Cycle 1: Idle → Connected → Idle, modeling the transmission or reception of a packet when the device is initially in Idle;
• Cycle 2: Idle → PSM → Connected → Idle, modeling a transmission or reception scheduled after the expiration of the timer that moves the device from Idle to PSM; the transmission/reception then takes place as soon as the device leaves PSM;
• Cycle 3: Idle → PSM → Idle, modeling the behavior of a device when no transmission or reception takes place.
The average total energy consumption is measured as the sum of the energy consumption of all three cycles weighted by their probabilities.
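This weighted average can be sketched as follows; the cycle probabilities and per-cycle energies are placeholders, not values from [92]:

```python
# Average total energy over the three operative cycles, weighted by the
# probability of each cycle occurring.
def avg_cycle_energy(cycles):
    """cycles: list of (probability, energy_J); probabilities must sum to 1."""
    assert abs(sum(p for p, _ in cycles) - 1.0) < 1e-9
    return sum(p * e for p, e in cycles)

cycles = [
    (0.30, 0.50),   # Cycle 1: Idle -> Connected -> Idle
    (0.10, 0.65),   # Cycle 2: Idle -> PSM -> Connected -> Idle
    (0.60, 0.02),   # Cycle 3: Idle -> PSM -> Idle
]
e_avg = avg_cycle_energy(cycles)
```

With these placeholder numbers, the mostly-idle Cycle 3 dominates in frequency but barely contributes to the average, which is exactly the consumption profile PSM is designed to produce.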
The proposed model was tested by introducing the PSM in the NS3-based NB-IoT simulator provided in [93], which is an adaptation of the LTE physical layer to NB-IoT. Results show a high accuracy in estimating the energy consumption, with the analytical model providing energy consumption estimates that closely match simulation results.
An NB-IoT power consumption model was also proposed in [94], aiming at estimating the device battery lifetime. A traffic profile that resembles the behavior of sensor devices was applied, with data transmitted periodically within a predefined interval. The model estimates the energy consumption during four phases:
• Phase 1: the UE wakes up and establishes the connection;
• Phase 2: uplink data transmission;
• Phase 3: the UE disconnects and returns to Idle mode;
• Phase 4: the UE remains in Idle mode until the next transmission period begins.
The device power consumption was estimated according to the model as a function of system parameters including transmit power and uplink and downlink data rates, under several test cases involving PSM and Idle-eDRX (I-eDRX), where the device monitors the paging signal for only 1 ms per discontinuous reception (DRX) period (more details are given in Section 4.2.2). Results showed that, compared to the 3GPP consumption estimates [64], the model is more accurate for PSM and less accurate for I-eDRX.
An additional model was proposed in [95] in order to evaluate the impact on energy consumption of collisions between uplink NB-IoT transmissions. In the model, the Connected mode is divided into random access (RACH) and transmission states. These latter states, together with PSM and Idle states, are used to formulate a semi-Markov chain to model the behavior of periodic transmissions. In particular, the model accounts for the following operations:
• During PSM, the device is unreachable until a corresponding timer, denoted as T_PSM, expires;
• During RACH, the device periodically transmits random access requests to the base station. The device transmits a request for each cycle T_r, monitoring the downlink channel for the base station reply;
• During Transmission, the device periodically transfers data to the base station in cycles of duration T_ACK, and monitors the narrowband physical downlink shared channel (NPDSCH) to receive a reply, for a maximum of N_max transmissions. The device switches to PSM if a data acknowledgement (ACK) is received from the base station within a cycle, otherwise it goes to Idle;
• During Idle, the device releases its allocated resources, while starting an Idle timer T_IDLE and monitoring the NPDSCH channel for an ACK response. If it receives the ACK before the timer expires, it switches to PSM, otherwise it switches back to a RACH state.
The total energy consumed when a transmission occurs over a duration L can be evaluated as follows:

E_tot(L) = Σ_i E_ACT(i) + E_PSM,

where E_ACT(i) is the energy consumed during the active period of one transmission in state i, the sum covers the states visited during the specific duration L in which the transmission occurs, and E_PSM is the energy consumed during the PSM state. Simulation results show that the energy consumption decreases as T_IDLE increases, at the cost of increased communication delays. No experimental validation was, however, provided for these models.

Classification of Energy Efficiency Methods
Section 3 served as a starting point for the study of IoT technologies in terms of energy and power consumption, where various energy and power consumption models were presented for each IoT technology. The focus of this section is to present a classification of the proposed techniques and methods aiming at increasing the energy efficiency of IoT technologies. In particular, we give insights on methods common to different technologies, while also highlighting how the same methods are tailored to specific LPSAN and LPWAN technologies.
Overall, we categorize the energy efficiency methods as follows.
Sleep/wake-up techniques. These techniques are based on the ability of an IoT node to adaptively switch from sleep to active modes, and vice versa, in order to efficiently conserve energy. When there is no information to exchange, the node goes into sleep mode. In this state, it continues to be registered to the network but does not monitor the channel to check whether there is information to be received. These techniques are used by various IoT technologies. On the one hand, in the context of LPSANs, they have often been proposed for RFID systems, and in particular for an energy-efficient functioning of active tags. On the other hand, in the context of LPWANs, these techniques find large usage in cIoT systems. In particular, LTE-M inherits the so-called Discontinuous Reception (DRX) method from LTE/LTE-A, as also mentioned in Section 3, while NB-IoT and EC-GSM-IoT propose further enhancements, such as PSM and extended Discontinuous Reception (eDRX), which are further detailed later.
Data Reduction techniques. The main focus of these techniques is to achieve energy efficiency by reducing the transmitted data. As discussed later in detail, these techniques have found a large application for improving the efficiency of NB-IoT systems, compared to the other technologies. It is worth mentioning that the use of such techniques also has a significant impact on the Quality of Service (QoS) provided by the system, for example, in terms of latency.
Network techniques. These are among the most widely used techniques in IoT technologies, as they can be deployed on top of most technologies. Network techniques can be further divided into (a) clustering techniques, which use clusters (groups) of nodes and aim at selecting and using appropriate cluster heads (CHs) in order to receive/transmit data, ultimately impacting scalability and robustness of the network, and (b) routing techniques, aiming at transferring data while minimizing system overhead and congestion. A mapping between the above categories of energy efficiency techniques and IoT technologies is provided in Figure 4.

Energy Efficiency in LPSANs
This subsection provides a description of energy efficiency methods for LPSANs. In particular, we describe the techniques proposed in the literature, and the improvements achieved when they are adopted in place of standard methods. Some previous works provided a comparison between two or more LPSAN technologies, as detailed in Table 3. An RFID tag must operate in an energy-efficient way, whether in or out of the range of a reader. Moreover, an RFID reader has to deal with energy constraints as well, and thus also requires energy-efficient operation. As a result, Sleep/wake-up and Routing techniques have been widely proposed for RFID readers and active/passive tags, as detailed in the following dedicated sections. The surveyed literature is classified under such categories in Figure 5.

Mechanisms for RFID Readers
In order to minimize the energy consumption and thus increase the energy efficiency of RFID readers, two main techniques have been proposed and used, that is, network techniques based on clustering, and resource allocation techniques based on time-division mechanisms.
Network techniques. Most of the network techniques proposed for RFID readers are based on the concept of clustering. Clustering is a way to resolve possible congestion issues in a network, arising from the fact that multiple devices may require resources for data transmission in a rather uncoordinated manner. By forming clusters, the devices also select CHs; then, all the devices in a cluster send their information to the CH, which in turn transmits it toward the network, for example, to the base station. This method helps save energy, since it reduces data congestion and collisions.
For RFID, an energy-efficient clustering protocol was presented in [101]. It adopts the Dragonfly algorithm [102], and it is also based on previous works, which aim at solving different issues of RFID systems via clustering. In particular:

• Pulse and Geometric Distribution Reader Anti-collision (GDRA) algorithms [103,104]: These algorithms address collisions between readers, which happen when signals from different readers overlap. The Pulse protocol has a reader transmit a beacon signal while it is reading a tag, in order to stop other readers from transmitting at the same time. GDRA instead minimizes reader collisions by using the Sift geometric probability distribution, which minimizes the collision probability among readers, and in turn maximizes the probability that a single reader can transmit. GDRA provides high throughput in dense reader environments;
• Distance-based clustering [100]: The RFID system is composed of the reader and n tags, with the reader able to communicate with all the tags in its interrogation zone. The interrogation zone proposed in [100] is further divided into k equal-sized clusters, where tags belonging to different clusters are contacted and interrogated separately from each other. Simulation results show that this method helps reduce energy consumption and collisions between readers.

The protocol proposed in [101] builds on the above works and focuses on mobile scenarios, where the readers with higher energy and similar mobility patterns are likely to be selected as CHs. The scheme consists of two phases: in the first, some readers are chosen as potential CHs, based on their mobility and energy level; in the second, each reader decides whether another node should be a CH or not. The readers that are not chosen as CH join the best CH, based on link connection time and energy. The nodes do not send their information individually to the base station, which would consume large amounts of energy; instead, they send it to the CH, which in turn forwards it to the base station.
Using this scheme, the nodes (readers) avoid wasting energy, as well as retransmissions in the case of collisions.
As mentioned above, the selection of CHs is addressed through the Dragonfly algorithm [102], which follows three main principles:
• Selection, to avoid collisions between neighboring readers, by evaluating the distance between eligible CHs and other readers;
• Alignment, to match eligible CHs to other neighboring readers, based on their mobility;
• Cohesion, so that neighboring readers are aware of each eligible CH in the network. This means that neighboring readers know which readers have an energy level above a predetermined threshold, making them eligible to be selected as CH.
Simulation results showed that the proposed clustered network significantly improves the energy efficiency with respect to its non-clustered counterpart. The use of the Dragonfly algorithm leads to significant energy savings, since it helps reduce the breakage of clusters, by selecting as CHs the readers having the highest residual energy and similar mobility. At the same time, it also eliminates redundant data, by sending aggregated data to the CHs.
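The two-phase, energy- and mobility-aware CH selection described above can be sketched as follows. This is a hedged illustration, not the exact formulation of [101]: the eligibility threshold, scoring weights, reader attributes, and similarity function are all assumptions introduced here.

```python
# Hedged sketch of energy- and mobility-aware cluster-head selection in the
# spirit of the Dragonfly-based scheme. The threshold, weights, and reader
# attributes are illustrative assumptions, not the paper's exact formulation.

def select_cluster_heads(readers, energy_threshold, similarity, w_energy=0.7, w_mobility=0.3):
    """Phase 1: readers above the energy threshold become eligible CHs.
    Phase 2: each eligible reader is scored on residual energy and on how
    similar its mobility is to its neighbors' (alignment/cohesion)."""
    eligible = [r for r in readers if r["energy"] >= energy_threshold]
    scored = []
    for r in eligible:
        neighbors = [n for n in readers if n["id"] != r["id"]]
        align = sum(similarity(r, n) for n in neighbors) / max(len(neighbors), 1)
        scored.append((w_energy * r["energy"] + w_mobility * align, r["id"]))
    scored.sort(reverse=True)
    return [rid for _, rid in scored]

readers = [
    {"id": "A", "energy": 0.9, "speed": 1.0},
    {"id": "B", "energy": 0.4, "speed": 1.1},   # below threshold, never a CH
    {"id": "C", "energy": 0.8, "speed": 5.0},   # high energy, dissimilar mobility
]
sim = lambda a, b: 1.0 / (1.0 + abs(a["speed"] - b["speed"]))
chs = select_cluster_heads(readers, energy_threshold=0.5, similarity=sim)
print(chs)
```

Readers below the energy threshold are excluded outright, matching the idea that only high-energy, mobility-aligned readers should carry the CH burden.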
Resource Allocation techniques. Resource allocation techniques for RFID readers typically rely on time-division approaches, identified as an effective way to reduce energy consumption. Each reader is assigned a time slot dedicated to its transmissions, leading to lower energy consumption thanks to reduced interference and fewer collisions between readers.
A time-slotted reader transmission scheme was proposed in [105]. The slotting algorithm, referred to as SELECT, assigns neighboring readers to different time slots, so that they take turns propagating the signal. In order to function properly, the slotting algorithm needs to know the positions of all adjacent readers. SELECT tries to minimize the number of time slots, in order to avoid the risk that a transmission cannot be read: if a reader takes too long to complete its transmission, a tag may leave the system before being detected. The performance analysis was done by placing a number of fixed readers in an interrogation area, where the readers and a large number of tags are in close proximity to each other, and evaluating the energy saved when assigning the readers to different time slots. Compared to non-slotted transmission, the adoption of SELECT leads to energy savings and reduced energy consumption, even in scenarios with a high number of tags.
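Assigning neighboring readers to distinct slots while keeping the slot count small is essentially a graph coloring problem. The sketch below captures that idea with a greedy coloring; the actual SELECT algorithm of [105] may differ in ordering and tie-breaking, so this is an illustration of the principle, not the paper's procedure.

```python
# Hedged sketch of SELECT-like time-slot assignment: neighboring readers must
# not share a slot, and the number of slots should stay small so tags are
# detected before leaving the system. Greedy graph coloring captures both goals.

def assign_slots(neighbors):
    """neighbors maps each reader to the set of readers in its vicinity.
    Returns a reader -> slot index assignment using the smallest free slot,
    processing high-degree (most-constrained) readers first."""
    slots = {}
    for reader in sorted(neighbors, key=lambda r: len(neighbors[r]), reverse=True):
        taken = {slots[n] for n in neighbors[reader] if n in slots}
        slot = 0
        while slot in taken:
            slot += 1
        slots[reader] = slot
    return slots

# Four readers in a line: R0-R1-R2-R3; only adjacent readers interfere.
topology = {"R0": {"R1"}, "R1": {"R0", "R2"}, "R2": {"R1", "R3"}, "R3": {"R2"}}
slots = assign_slots(topology)
print(slots)  # two slots suffice for a line topology
```

Fewer slots means each reader transmits more often per unit time, reducing the chance that a moving tag escapes undetected.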
Mechanisms for RFID Active Tags
RFID active tags switch between sleep and wake-up modes; in the latter, the tags exchange data with the readers, and wait for acknowledgments (ACKs) of correct reception before entering sleep mode again. When it sleeps, the tag is inactive and consumes very little energy.
Sleep/wake-up techniques. The Free2move protocol was proposed in [106], and works in both slotted/synchronized and asynchronous ALOHA RFID systems. The slotted ALOHA RFID system, also referred to as the Reader Talks First (RTF) mode, is a mode where the reader transmits beacon signals to which the tags react. Time is divided into slots, and each tag randomly selects a slot in which to transmit its data, making the communication between reader and tag synchronous; the tags transmit their IDs in these synchronous time slots. In the asynchronous ALOHA system, called Tag Talks First (TTF) mode, data transmissions are not coordinated. In the Free2move protocol, in TTF mode, the tags act independently of the readers, waking up randomly to deliver information to the reader. In RTF mode, the reader creates a slotted scheme by continuously sending beacon signals, and listens for answers from the tags between the beacon signals. If two tags are synchronized with the reader beacon signal, the reader can listen to both tags, as it is able to switch and operate on two different channels. In TTF mode, a tag uses four possible frequencies, grouped into two groups, so that two tags can deliver information at the same time, one on each frequency group. Tags in this system do not need to synchronize before transmitting, and the system works on a first come, first served basis: the first tag that the system recognizes is the tag that is read by the reader. Simulations and measurements were performed to evaluate this method, with the results showing that tags using Free2move have lower power consumption in the Transmit state than in the Receive state.
The Enhanced protocol, also proposed in [106], improves on the Free2move protocol with two main modifications:
• The introduction of a deep-sleep parameter in the ACKs sent by the reader, which specifies a variable time for the tag to stay in deep-sleep mode;
• An enhanced use of the radio channel in RTF mode, where all the available slots are used for beacon signals, which is not the case in the Free2move protocol.
In the RTF scenario, the energy consumption is evaluated by taking into account the energy spent for detecting the reader beacon, together with the energy spent in the Transmit, Receive and Sleep states, as follows:

E_RTF = Ē_Beacon + E_TX + E_RX + E_ACK + E_Sleep1

where Ē_Beacon is the energy consumed to receive a beacon signal from an available reader, E_TX and E_RX are the energies consumed in the Transmit and Receive states, respectively, E_Sleep1 is the energy consumed during the sleep period after the tag has communicated with the reader, and E_ACK is the energy consumed to receive an ACK.
When there is no reader available, the Free2move consumption only includes the energy spent while listening for a reader, which is the same for the Enhanced protocol. In this case, the energy consumption for Free2move and Enhanced is therefore the same:

E_NoReader = Ê_Beacon + E_Sleep2

where Ê_Beacon is the energy consumed when there is no reader available and the tag listens for beacon signals, and E_Sleep2 is the energy consumed during the sleep period after trying to receive a beacon signal. After successfully delivering payload packets, a tag using the Enhanced protocol enters deep-sleep mode. A variable in the ACK specifies how many cycles the tag should stay in the deep-sleep state after a successful transmission; in this way, the tag does not consume energy waking up periodically to check for a beacon signal, but wakes up after the time indicated by the ACK variable. In the TTF scenario, the consumption analysis does not take into account the energy for detecting the beacon, resulting in better performance than the RTF counterpart. Performance analysis showed that the energy consumption of the Enhanced protocol decreases by about 90% when a reader is available and the tag enters deep-sleep mode for a period of time, and by about 34% when no reader is available, since in this case the tag still consumes energy listening for beacon signals to discover an available reader.
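The structural origin of the reported saving can be reproduced numerically. The per-state energies below are invented placeholder values (in microjoules), not measurements from [106]; the point is that deep-sleep cycles replace full wake/beacon-check cycles.

```python
# Hedged numerical sketch of the Free2move vs. Enhanced comparison in RTF mode.
# Per-state energy values (uJ) are illustrative assumptions; the structural
# saving comes from skipping beacon detection during deep-sleep cycles.

E_BEACON, E_TX, E_RX, E_ACK, E_SLEEP = 20.0, 50.0, 30.0, 10.0, 0.5  # uJ, assumed

def free2move_cycle():
    # Every cycle the tag wakes, detects a beacon, exchanges data, sleeps.
    return E_BEACON + E_TX + E_RX + E_ACK + E_SLEEP

def enhanced_total(cycles, deep_sleep_cycles):
    """After a successful exchange, the tag deep-sleeps for the number of
    cycles indicated in the ACK, skipping beacon detection entirely."""
    total = 0.0
    c = 0
    while c < cycles:
        total += free2move_cycle()        # one active exchange
        skipped = min(deep_sleep_cycles, cycles - c - 1)
        total += skipped * E_SLEEP        # deep-sleep cycles cost only sleep energy
        c += 1 + skipped
    return total

cycles = 100
base = cycles * free2move_cycle()
enh = enhanced_total(cycles, deep_sleep_cycles=9)
print(f"Free2move: {base:.0f} uJ, Enhanced: {enh:.0f} uJ "
      f"({100 * (1 - enh / base):.0f}% saved)")
```

With these assumed values, deep-sleeping for 9 out of every 10 cycles yields a saving of around 90%, in line with the order of magnitude reported above.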
Resource Allocation techniques. An algorithm that aims at reducing the energy consumption of active RFID tags using polling techniques is presented in [107]. The protocol performs a sequence of polling exchanges.
Polling is a technique that enables a continuous interrogation of the devices (tags) by a central entity, in order to check if they have data to transmit. The reader sends out polling requests, and allows the tags to respond within a time frame. The polling request carries a contention probability p, with 0 < p ≤ 1, and the size of the frame f. A tag participates in the polling with probability p; if it decides to participate, it randomly picks a slot in the frame and transmits a response string in that slot. A slot is labeled as empty, if no tag transmits in it, singleton, if one tag transmits, and collision, if multiple tags transmit concurrently. Singleton and collision slots are also called nonempty.
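The slot taxonomy above is easy to simulate. The sketch below draws one polling frame under the stated model; the tag count, probability, frame size, and random seed are arbitrary assumptions chosen for illustration.

```python
# Hedged simulation of one polling frame: each tag participates with
# probability p and picks a uniform slot; slots are then labeled empty,
# singleton, or collision, following the description above.
import random
from collections import Counter

def poll_frame(n_tags, p, f, rng):
    """Return the counts of empty, singleton, and collision slots in a frame
    of size f, polled over n_tags tags with contention probability p."""
    occupancy = Counter()
    for _ in range(n_tags):
        if rng.random() < p:                # tag joins this polling round
            occupancy[rng.randrange(f)] += 1
    singleton = sum(1 for c in occupancy.values() if c == 1)
    collision = sum(1 for c in occupancy.values() if c > 1)
    empty = f - singleton - collision
    return empty, singleton, collision

rng = random.Random(7)
empty, single, coll = poll_frame(n_tags=50, p=0.4, f=32, rng=rng)
print(f"empty={empty} singleton={single} collision={coll}")
```

Such frame statistics are exactly what estimation algorithms like GMLE (below) exploit to infer the tag population.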
The first algorithm proposed in [107] is the Generalized Maximum Likelihood Estimation (GMLE) algorithm. It is able to use all the information gathered from different polling exchanges, in order to minimize the total number of exchanges needed. GMLE records whether each slot in each polling is empty or nonempty. The protocol consists of two phases:
• Initialization Phase, during which the number of tags is estimated;
• Iterative Phase, during which the actual number of tags is evaluated.
To complete these two phases, the GMLE protocol uses a parameter ω, which determines the number of polling exchanges that can be performed and the number of tag responses to each polling.
Simulation results showed that as the parameter ω decreases, the energy spent decreases as well, because fewer tags respond to the polling. In the GMLE protocol, this is best seen by considering two scenarios that generate the same information, one in which a single tag responds and one in which two tags respond: the energy cost of the second scenario is higher than that of the first, meaning that the energy cost does not depend on the generated information, but on the number of tags that respond to a poll.
An enhancement of the GMLE algorithm, called enhanced GMLE (EGMLE), is proposed to address this issue. EGMLE makes use of the history information gathered from previous polling exchanges and uses the maximum likelihood method to estimate the number of tags that will respond during the polling, thereby computing the number of responses in each polling, extracting more information, and achieving better energy efficiency than GMLE. EGMLE and GMLE are similar in that both use the same polling protocol; the difference lies in the frame size used. The frame size used by EGMLE is larger than that of GMLE, with the intention of reducing the probability of collisions. The results showed that EGMLE is able to achieve better energy efficiency, requiring fewer responses.
Another set of polling-based methods was proposed in [108], aiming to minimize the average amount of energy consumed by the tags. The methods are based on the observation that energy consumption can be divided into two main components, that is, the energy for data transmissions toward the reader, and the energy for the reception of polling requests and other information from the reader. Three polling protocols are then introduced and compared:
• Basic Polling Protocol (BP): The reader broadcasts the tag IDs one by one, and waits for the tags to transmit during the correct timeslots. The tags listen to the channel to detect the list of IDs, transmit their data when they detect their own ID, and then go into sleep mode. BP requires each tag to spend a large amount of energy to continuously listen to the channel, compare the detected IDs, and finally transmit its data;
• Coded Polling Protocol (CP): This protocol exploits the fact that each tag ID carries two elements: an identification number and a cyclic redundancy code (CRC) for error detection. The reader arranges the tag IDs in pairs, and broadcasts the code of each pair one by one. This pairing of IDs halves the number of polling exchanges with respect to BP, since in CP a polling detects the code of a pair of IDs, while in BP a polling detects each ID separately. As a result, CP reduces the amount of data each tag has to receive, conserving the energy of the tags;
• Tag-Ordering Polling Protocol (TOP): The reader broadcasts a reporting-order vector V, instead of tag IDs. Each ID is mapped into a bit in V using a hash function, so each tag needs to check only its specific bit in V, located via the hash of its own ID. This bit is the representative bit of the tag: it is one if the tag is polled by the reader for data transmission, and zero otherwise. Information about the order in which the polled tags transmit is kept in the vector V, which fits into a timeslot of total length t_tag; if V cannot fit in one timeslot, it is divided into segments, each broadcast in a timeslot of length t_tag. The reader also broadcasts the size of V, which helps a tag hash its ID and find the location of its representative bit in V.
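TOP's representative-bit lookup can be sketched concretely. The hash function choice, tag IDs, and vector size below are illustrative assumptions, not details from [108].

```python
# Hedged sketch of TOP's reporting-order vector: each tag hashes its own ID
# to locate its representative bit in V; a set bit means "you are polled",
# and the tag's transmission order is the number of set bits before its own.
import hashlib

def bit_index(tag_id, v_size):
    """Map a tag ID to a bit position in the vector V of size v_size."""
    digest = hashlib.sha256(tag_id.encode()).digest()
    return int.from_bytes(digest[:4], "big") % v_size

def check_polled(tag_id, v, v_size):
    """Return (polled?, transmission order) for a tag, given V as the set
    of set-bit positions broadcast by the reader."""
    i = bit_index(tag_id, v_size)
    if i not in v:
        return False, None
    order = sum(1 for j in v if j < i)   # set bits before ours fix our turn
    return True, order

v_size = 64
polled_ids = ["tag-03", "tag-17"]
v = {bit_index(t, v_size) for t in polled_ids}

for t in ["tag-03", "tag-17", "tag-99"]:
    print(t, check_polled(t, v, v_size))
```

The energy advantage is that a tag inspects a single bit position rather than receiving and comparing full ID lists as in BP.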
Simulation results showed that the CP protocol outperforms BP in terms of energy consumption and execution time, while TOP outperforms CP in terms of energy consumption, at the cost of a higher average execution time.
The Energy-Efficient Tag Searching in Multiple reader RFID systems (ESiM) protocol is another polling protocol, proposed in [109]. ESiM is based on the interrogation range (also called read zone), that is, the range within which the reader sends polling requests. The RFID reader, also called the interrogator, is able to read data from and write data to tags. Interrogators are also able to communicate with and control nearby sensors in the interrogation zone. The capability of an RFID interrogator to communicate successfully with a tag heavily depends on two factors, that is, the distance between the interrogator and the tag, and the tag's dwell time (the time the tag spends in the interrogator's range). The ESiM protocol first establishes whether a specific tag is in the reader's interrogation range. If a tag is not part of any interrogation range, it is not taken into consideration; the remaining tags constitute the final group of tags the protocol works with. In this way, the ESiM protocol works only with tags that are part of the interrogation range and have IDs known to the reader. ESiM achieves optimal energy efficiency when all the tags are wanted tags, that is, tags the reader was specifically searching for, rather than random tags that simply happen to be in the interrogation range.

Mechanisms for RFID Passive Tags
Resource Allocation techniques. Compared to their active counterpart, passive tags are cost-efficient but also more prone to collisions. Several protocols have thus been proposed to avoid collisions, such as three versions of ALOHA, referred to as Frame Slotted ALOHA (FSA), Dynamic Frame Slotted ALOHA (DFSA), and Multiframe Maximum Likelihood DFSA (MFML-DFSA) [110-112,114], as well as the so-called Query Tree and Binary Tree schemes.

• FSA is a protocol derived from Slotted ALOHA, where the timeslots are grouped into frames, and a user (tag) can transmit only a single packet per frame, in a randomly chosen timeslot;
• DFSA is an anti-collision algorithm that helps avoid collisions between responses sent by different tags at the same time, by dynamically changing the frame length based on the feedback from the last collision. If the frame size used for transmission is too small, many tags contend for the same slots, causing a large number of collisions and wasting the bandwidth assigned for transmission; if it is too large, many slots remain idle, again wasting bandwidth. To avoid these situations, the reader should be able to dynamically adjust its frame size according to the number of unidentified tags;
• MFML-DFSA is based on DFSA, and uses statistical information from previous frames to implement a maximum likelihood estimator that computes the number of tags expected to compete for transmission.
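DFSA's frame-size adaptation can be sketched as follows. The backlog estimator used here (twice the number of collision slots, a classic lower-bound rule) is one common choice, not necessarily the one adopted in [110-112]; the bounds are illustrative assumptions.

```python
# Hedged sketch of DFSA's core loop: after each frame, estimate the number of
# still-unidentified tags from the observed slot outcomes and size the next
# frame accordingly.

def estimate_backlog(singleton, collision):
    # Each collision slot hides at least two tags; 2*collisions is the
    # standard lower-bound estimate of the remaining tag population.
    return 2 * collision

def next_frame_size(singleton, collision, min_frame=4, max_frame=256):
    """DFSA rule of thumb: throughput peaks when frame size ~ backlog,
    so the next frame is sized to the estimated number of remaining tags."""
    backlog = estimate_backlog(singleton, collision)
    return max(min_frame, min(max_frame, backlog))

# A collision-heavy frame grows the next frame; a clean frame shrinks it.
print(next_frame_size(singleton=5, collision=20))   # crowded frame
print(next_frame_size(singleton=6, collision=1))    # nearly done
```

Sizing the frame to the backlog keeps the fraction of wasted (empty or collided) slots low, which is exactly the bandwidth/energy trade-off described above.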
In [113], a combination of the Query Tree and DFSA algorithms, referred to as energy-efficient Query-DFSA (EEQDFSA), was proposed to improve energy efficiency. In Query Tree, the reader transmits a query prefix to the tags, to be matched against their IDs. If the query prefix matches a tag's ID, a new round of identification starts, with a longer prefix than the previous one. This process is called tag identification, and helps the reader identify all the tags that are part of its query tree. EEQDFSA optimizes Query Tree by minimizing the number of identification queries used to identify the tags, thus lowering the energy consumed by the identification process. Moreover, it classifies the timeslots into three categories, denoted as Successful, Empty, and Collision slots; the latter two are undesirable and lower the energy efficiency. EEQDFSA does not use the entire tag ID, but only a part of it, called the serial number, which is unique; the algorithm matches serial numbers against the prefix transmitted by the reader. This process is more energy-efficient, as transmitting the serial number takes less energy than sending the entire tag ID.
In Query Tree, the energy consumption depends on the number of tree levels, where each level consists of different readers and connected tags: the more levels, the higher the complexity of the tree, and therefore the higher the consumed energy. Another important element in Query Trees is the maximum number of prefixes transmitted by the reader when there are N tags in total and the Query Tree is composed of b levels: the higher the number of transmitted prefixes, the higher the consumed energy. In [113], the energy efficiency is evaluated by introducing an energy cost E_C, defined as a function of B_t, the total number of transmitted bits, and D, the delay introduced by the tag identification process.
Simulation results showed that EEQDFSA outperforms other existing algorithms in terms of energy cost and energy consumption, with minimum bits transmitted and lower delay, achieving higher energy efficiency.

Zigbee
Zigbee is one of the main IoT standards for the transmission of small-sized data with low power consumption [115], and for this reason many works have focused on the analysis of its energy efficiency [97,98]. Most solutions fall into the Resource Allocation and Network categories, as shown in Figure 6.

Resource Allocation techniques.
In a Zigbee network, the majority of the network energy is consumed by the sensor nodes in the Transmit and Receive modes. In [116], it is observed that other sources of energy consumption also need to be considered, using the network lifetime as a metric. The following additional energy consumption sources are highlighted:

• Energy consumption in the Power-up mode, for both nodes and routers;
• Energy consumption of reporting data with application acknowledgment requirements, that is, the energy spent by the coordinator node or the access point when sending back an ACK for the successful reception of sensor packets. This only happens when a particular application requires ACK responses; in this case, waiting for ACKs also requires energy, shortening the overall network lifetime;
• Energy consumption of routers, which directly impacts the network lifetime and functioning, since many sensors will not be able to transmit data to the coordinator if a router runs out of battery.
To make the network more efficient, two resource allocation techniques were also proposed:
• Energy-saving Access Control, which employs a control method to shorten the access time of sensor nodes;
• Energy-saving Sleeping Schedule, which allows a node to enter sleep mode when an ACK has been received and no other transmissions are needed.
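The Energy-saving Sleeping Schedule can be pictured as a small state machine. The state names and event interface below are illustrative assumptions, not the design of [116]: the point is only that a node stays awake while it has unacknowledged data and sleeps otherwise.

```python
# Hedged sketch of an ACK-driven sleeping schedule: a sensor node stays awake
# only while it has unacknowledged data, and enters sleep mode as soon as the
# ACK arrives and its transmit queue is empty.

class SensorNode:
    def __init__(self):
        self.state = "sleep"
        self.pending = 0          # packets awaiting an ACK

    def on_sensor_data(self):
        self.state = "awake"      # wake up to transmit
        self.pending += 1

    def on_ack(self):
        self.pending = max(0, self.pending - 1)
        if self.pending == 0:
            self.state = "sleep"  # nothing left to send: save energy

node = SensorNode()
node.on_sensor_data()
node.on_sensor_data()
node.on_ack()                     # one packet still unacknowledged: stay awake
print(node.state)
node.on_ack()                     # all packets acknowledged: go to sleep
print(node.state)
```

Tying the sleep transition to the ACK avoids both premature sleep (losing retransmission opportunities) and needless idle listening.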

Network techniques.
Zigbee-based LPSANs, as mentioned in Section 2, mostly use mesh topologies for data exchange [35]. Thus, in order to be bandwidth- and energy-efficient, the usage of proper routing protocols is of paramount importance, and most of the network techniques proposed for Zigbee are indeed based on routing. One of the most used routing protocols in Zigbee is a modified version of the Ad Hoc On Demand Distance Vector (AODV) protocol, called AODVjr [117].
AODV compares different routing paths by employing a path cost metric. In particular, each link in a path is associated with a link cost, also called cost value [35]. The cost values are summed up to evaluate the cost of the whole path; in a mesh topology, this results in the shortest paths usually being the optimal paths. AODVjr uses a route discovery technique to establish the route for the transmission of the information. The route discovery consists of a source node broadcasting a Route Request (RREQ) for the destination. Every intermediate node that receives the RREQ for the first time obviously does not yet have a route to the destination, and thus rebroadcasts the RREQ. When the destination node receives the RREQ, it sends back a Route Reply (RREP); the RREP then back-propagates, and each intermediate node along the way becomes part of the final source-destination route. This method reduces transmission delays, because the route is pre-established, but is also energy-consuming, because of the RREQ rebroadcasting.
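The RREQ flood and RREP back-propagation can be sketched over a toy mesh. This is a simplified model (a BFS-style flood over an abstract graph), not an implementation of AODVjr's actual packet formats or timers.

```python
# Hedged sketch of AODVjr-style route discovery: the source floods an RREQ
# (each node rebroadcasts it once), and the route is recovered by walking
# recorded predecessors back from the destination, as an RREP would.
from collections import deque

def discover_route(graph, src, dst):
    """Flood an RREQ from src; return the src->dst route the RREP confirms."""
    parent = {src: None}
    frontier = deque([src])
    while frontier:
        node = frontier.popleft()
        if node == dst:
            break
        for neigh in graph[node]:
            if neigh not in parent:     # first RREQ copy wins; later ones dropped
                parent[neigh] = node
                frontier.append(neigh)
    if dst not in parent:
        return None                     # destination unreachable
    route, hop = [], dst
    while hop is not None:              # RREP walks predecessors back to src
        route.append(hop)
        hop = parent[hop]
    return route[::-1]

mesh = {"A": ["B", "C"], "B": ["A", "D"], "C": ["A", "D"],
        "D": ["B", "C", "E"], "E": ["D"]}
print(discover_route(mesh, "A", "E"))
```

The "rebroadcast once" rule is what makes the flood terminate, but every rebroadcast costs energy, which is precisely the overhead that E-AODVjr and the clustering variants below try to contain.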
To address this shortcoming, the E-AODVjr routing algorithm was also proposed in [117]; it takes into account the residual energy of the nodes to determine which of them can be part of an active route. E-AODVjr requires each node to monitor its energy consumption, so as to estimate the consumption in the next ∆T seconds. The scenario is modeled as a network of nodes, where each node N_i monitors its energy consumption, denoted as E_i(∆T_j), j ∈ {0, 1, 2, . . .}, and updates its estimate as

Ē_i(∆T_j) = α E_i(∆T_{j−1}) + (1 − α) Ē_i(∆T_{j−1})

where 0 < α < 1, and E_i(∆T_{j−1}) and Ē_i(∆T_j) are the previous and new estimated values, respectively. This equation evaluates the energy consumption of a single node; the total energy of the network is then obtained by considering the total energy consumption over all possible routes. Denoting by L = {L_i | i = 1, . . . , m} the set of all possible routes, the average total energy for the entire network, including all paths, is derived as

Ē = (1/m) Σ_{i=1}^{m} E_total(L_i)

where E_total(L_i) is the total transmission energy for the i-th path L_i, and m is the total number of paths. Simulation results showed that the residual energy with E-AODVjr is higher than with AODV, thus extending the network lifetime.

The Energy Aware Multi-Tree Routing (EAMTR) algorithm is a routing algorithm proposed in [118]. It is based on clustering, and aims at building multiple routing trees in a sink-based network (a sink node represents a base station). The data collected by the nodes of a cluster are transmitted to the CH, which in turn transmits them to the sink node (base station). The EAMTR algorithm operates in four phases:

• Initiation Phase: The sink node starts to accept nodes, thus creating the first tree. In this phase, the total number of nodes that are part of the tree is calculated, with the sink node as a starting point, after which addresses are assigned to the nodes;
• Tree Selection Phase: It proceeds from the sink (root) node toward the leaf nodes, beginning with the first-level nodes (the nodes one hop away from the sink). The nodes in the tree form parent-child relationships, with some nodes acting as parents, when other nodes depend on them, and the remaining nodes acting as children;
• Normal Phase: Each node knows all the nodes it is connected to, and which of them act as parents to other nodes. The sink node can be used as a router for the nodes furthest from it, and helps redirect packets using each node's specified tree;
• Recovery Phase: This phase provides route redundancy, since each node knows the route to use to send packets to the sink. The network is thus more resilient to node failures: if a node fails, a new node simply replaces it in the tree.
For each node, EAMTR selects the least congested route, thus balancing the energy consumption of the nodes and, at the same time, extending the network lifetime. The Improved Routing Protocol presented in [119] is another version of a tree clustering algorithm that focuses on the residual energy of the nodes, and helps select the optimal nodes so as to avoid those with low residual energy.
In [120], the Energy-Efficient Shortcut Tree Routing (ESTR) protocol was proposed to find the best next-hop node, based on load balancing across links and nodes, and on shortest-path selection. It provides the network with a low-delay route and an energy-balancing mechanism. ESTR assumes that every node knows its neighbor table, which contains information about the relationships between nodes (which node is a parent and which is a child) and link-state information about neighboring devices, updated whenever a node receives a frame from a neighbor. The table also includes further information on the neighbors, such as:
• the number of hops the neighbor node needs to reach the destination;
• Device Type (DT), which indicates whether the node has routing capability;
• Link Quality Indicator (LQI), which reports the quality of the received signal on the link with the neighbor;
• Node Cost (NC), which counts the number of packets sent or received by a neighbor node;
• Tree Index (TI), which represents the path that data travel from the coordinator to the node under consideration.
ESTR uses TI to calculate, for each neighbor, the number of hops that a data packet would take to reach the destination. Then, it uses NC to estimate the congestion level of its neighbors, and LQI to find the most reliable links. In this way, ESTR balances network traffic, distributing the workload on intermediate, less-congested nodes. Simulation results show that ESTR performs better than EAMTR and the Improved Routing Protocol, in terms of energy efficiency and network lifetime, since it balances the consumption across the nodes in the network.
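An ESTR-style next-hop choice combining the neighbor-table fields can be sketched as follows. The weighted score is an illustrative combination introduced here, not the exact metric of [120], and the table values are invented.

```python
# Hedged sketch of ESTR-style next-hop selection: among the neighbors in the
# table, prefer short remaining paths (hop count via TI), reliable links
# (high LQI), and lightly loaded nodes (low NC).

def pick_next_hop(neighbor_table, w_hops=0.5, w_lqi=0.3, w_load=0.2):
    best, best_score = None, float("-inf")
    for name, entry in neighbor_table.items():
        if not entry["router"]:           # Device Type: skip non-routing nodes
            continue
        score = (-w_hops * entry["hops"]          # fewer hops is better
                 + w_lqi * entry["lqi"]           # stronger link is better
                 - w_load * entry["node_cost"])   # less congested is better
        if score > best_score:
            best, best_score = name, score
    return best

table = {
    "N1": {"router": True,  "hops": 2, "lqi": 0.9, "node_cost": 1},
    "N2": {"router": True,  "hops": 1, "lqi": 0.4, "node_cost": 6},  # congested
    "N3": {"router": False, "hops": 1, "lqi": 0.8, "node_cost": 0},  # end device
}
print(pick_next_hop(table))
```

Note how the congested one-hop neighbor loses to a two-hop but lightly loaded one: this is the load-balancing behavior that lets ESTR spread consumption across the network.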
Another algorithm rooted in AODV was presented in [121], where the energy of the nodes is classified into enough, low, and critical levels. Nodes with enough energy choose the routing algorithm they will use; nodes with low energy use the same mechanism as AODV to send flood packets (a large number of packets at the same time); critical nodes only respond to the nodes that have their address as destination.
A Zigbee network generally adopts a cluster-tree hierarchy for organizing the nodes [121]. In a cluster-tree hierarchy, the nodes are grouped in a hierarchy of clusters, with the clusters represented in a tree structure. The network is composed of a coordinator node and children nodes. The nodes are divided into three categories: RN+, RN−, and RFD. RN+ and RN− nodes are fully functioning nodes, while RFD nodes act as leaf nodes, which forward information to their parent nodes. RN− nodes can only use the static cluster-tree algorithm, relying on an already established route, while RN+ nodes use AODVjr to search for the best possible route. Hence, RN+ nodes consume more energy, since they are responsible for collecting information, route discovery, and data forwarding. A disadvantage is that such nodes are more prone to failure than the other nodes; when an RN+ node fails, the network partitions, because RN+ nodes are in charge of routing traffic to the destination node. The algorithm in [121] focuses on reducing the load of the RN+ nodes. Simulations show significant improvements in terms of node and network lifetime and power consumption, while also reducing the percentage of nodes that fail to work properly.
Several other works proposed clustering schemes for Zigbee networks. An algorithm called CLZBR was proposed in [122], and was later outperformed in terms of efficiency by another routing algorithm, called NCLZHR [123]. In a Zigbee network that employs NCLZHR, the nodes are grouped into clusters and fall into three types: CH, gateway node (GW, a node selected to serve as a router), and cluster member. NCLZHR adds a node cluster label, making it possible for a node to know whether the destination node is in the same cluster or not, in order to optimize data forwarding. NCLZHR also reduces redundant RREQ packets, since CHs and GWs are the only nodes in the network allowed to broadcast RREQs. Cluster members can only transmit data according to the tree routing algorithm, which is based on the neighbor tables. Simulations showed that CLZBR requires a large amount of broadcasting during the clustering process, whereas NCLZHR significantly reduces it, and in turn the energy consumption of the network.
Another algorithm based on AODVjr is presented in [124]. The algorithm improves the original Cluster-Tree algorithm, used to merge data at the CH, reducing in this way the information redundancy and the transmission power needed at the source node. The improved Cluster-Tree algorithm uses the following parameters to evaluate the energy: P_min, the minimum energy the node needs to run normally; d_i, the network depth (the number of levels in the tree); and K_node, a coefficient used to adjust the energy of the CH. Performance analysis showed that the algorithm outperforms the original Cluster-Tree, being able to select a routing path with lower energy consumption.

Bluetooth and BLE

Bluetooth
Several works highlighted the role of network organization and configuration on Bluetooth energy efficiency. Lee et al. in [99] were among the first to analyze the power consumption of Bluetooth networks; other relevant works include [96][97][98]. As a result, several approaches for energy efficiency in Bluetooth based on network techniques and resource allocation were proposed, as summarized in Figure 7 and described in the following.

Figure 7. Energy efficiency approaches in Bluetooth: Network techniques [125][126][127][128][129] and Resource Allocation techniques [130][131][132].

Network techniques. Similarly to other technologies, Bluetooth devices spend most of their energy during data exchange, while the consumption decreases when the device is in the hold state. As also analyzed in [125], the creation of energy-aware scatternets, able to monitor and reduce energy consumption, is very important in Bluetooth networks. This aspect has several requirements:

• The role of devices in a scatternet should be assigned with respect to their residual energy;
• Short links between devices should be preferred [125];
• The number of piconets in a scatternet should be minimized, in order to preserve scatternet performance.
Network techniques based on clustering, in particular, have found an important niche in their application to Bluetooth systems.
An adaptive clustering algorithm was proposed in [126], using fuzzy logic to select the CHs among devices, and aiming at creating a network able to flexibly share the power consumption and the traffic load with the CH. In order to evaluate the power consumption, a network composed of m clusters is considered, where each cluster has n nodes, including the CH. Among these nodes there are l so-called bridge nodes, that is, devices that are part of more than one piconet, possibly with different roles in each; for example, a bridge node can at the same time be a slave in one piconet and a master in another. The CH consumes more power than normal nodes, since it is in charge of intra-cluster and inter-cluster communications. For this reason, the proposed method generates CH candidates and uses fuzzy logic to select the CHs, according to the residual energy and traffic load of each candidate.
The first phase of the algorithm consists of topology discovery, where each node has to find its 2-hop neighbors. In the second phase, the CH candidates are selected, and in the third phase, fuzzy logic is applied to choose a CH. Being an iterative process, CH switches can happen, so that a new CH replaces an old one depending on residual energy and traffic load. When this happens, the old CH informs all the nodes about the new one, so that they can properly react to the connection change. The method was compared to more traditional, cluster-free methods in a scatternet scenario, and was found to achieve higher energy saving and longer network lifetime, despite the additional power consumption caused by CH switching.
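A minimal sketch of such a fuzzy CH election follows; the membership functions and the min-based (fuzzy AND) aggregation are assumptions made for illustration, not the exact rules of [126]:

```python
# Fuzzy-logic CH selection sketch: candidates with high residual energy
# AND low traffic load obtain a high suitability degree; the candidate
# with the highest suitability becomes CH. Membership shapes are assumed.

def mu_high_energy(e):
    """Degree (0..1) to which normalized residual energy e is 'high'."""
    return max(0.0, min(1.0, (e - 0.2) / 0.6))

def mu_low_load(load):
    """Degree (0..1) to which normalized traffic load is 'low'."""
    return max(0.0, min(1.0, (0.8 - load) / 0.6))

def suitability(candidate):
    # Fuzzy AND (min) of the two membership degrees.
    return min(mu_high_energy(candidate["energy"]), mu_low_load(candidate["load"]))

def elect_cluster_head(candidates):
    return max(candidates, key=suitability)

candidates = [
    {"id": 1, "energy": 0.9, "load": 0.7},  # energetic but heavily loaded
    {"id": 2, "energy": 0.7, "load": 0.2},  # balanced energy and load
]
ch = elect_cluster_head(candidates)
```

Here candidate 2 is elected: its lower load outweighs candidate 1's higher energy, mirroring the trade-off the fuzzy rules encode.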
Distributed Energy-Aware Multi-hop Tree Scatternet (EMTS) was proposed in [127] for enhancing energy efficiency in multi-hop scatternet networks, by introducing an energy-aware formation process and a reorganization technique aiming at extending scatternet lifetime. EMTS is based on previous work on scatternets, mostly focused on the formation of 1-hop scatternets, where all the devices are within each other's transmission range. In particular, EMTS takes root from:
• SF-Devil [128], a scatternet formation method which shortens the communication links in order to increase the network lifetime. SF-Devil assigns the devices with the highest energy to be masters, but it incurs a high formation delay and does not consider the network diameter;
• ACB-Tree Scatternet Formation (ATSF) [129], proposed in order to create energy-efficient binary tree scatternets (ACB-trees) in 1-hop networks, while favoring the most energy-efficient devices. In the ACB-tree, the root node is the CH while all the other nodes are leaf nodes.
The formation process proposed in EMTS consists of three phases:
• Neighbor Discovery, which ensures the connectivity of the generated scatternet;
• Piconet Formation, during which the devices are grouped into piconets, and the ones with the highest residual energy are selected to be masters, with the purpose of creating as few piconets as possible. The master devices perform a PAGE operation toward their neighbors to build the piconet. All the other devices perform a PAGE SCAN, and become slaves of the master that contacted them first. Each piconet is then formed of five selected slaves, with two free slots for future connections with other piconets;
• ACB-Tree Growing, which connects the piconets into a grow-aware ACB-tree, using ACB-Tree combinations [129].
The reorganization technique is instead divided in two phases:
• Local Reorganization: This phase relieves the workload of energy-depleting devices, which could be pure slaves, slave/slave bridges, or masters. When the energy level of a node drops below a pre-defined threshold, the node is substituted by one of its child nodes;
• Device Reorganization: This phase is separated in two sub-phases:
1. Device Arrival, during which the slaves of each leaf node periodically perform an INQUIRY SCAN procedure to discover new devices. Any new device that has performed an INQUIRY procedure to enter the existing scatternet is absorbed into it;
2. Device Departure, during which a slave leaves the scatternet without affecting the overall scatternet composition.
EMTS analysis was performed via the Blueware 1.0 simulator [133], built on top of Network Simulator v2 (NS-2) [134]. Results showed a significant scatternet lifetime increase, thanks to a beneficial shortening of the scatternet diameter.
Resource Allocation techniques. Two methods for energy saving were proposed in [130], where the network lifetime was adopted as performance indicator. Both methods make use of the RSP option proposed in [125], coupled with the ability to control the transmitted power of devices, as provided in the Bluetooth standard [40].
The first method assumes that the master node is responsible for all packet transmissions to/from the slaves, so the piconet masters drain their batteries faster than the slaves. The RSP option allows each device to become a piconet master, thus making it possible to balance the energy consumption across devices. The current master periodically monitors both its own energy and the energy of its slaves. If its own energy is less than a fraction X (0 < X < 1) of the maximum available energy among all slaves, the RSP option is activated, and the slave having the maximum residual energy becomes the new master. A disadvantage of this method is that continuous RSP activation may degrade the overall system performance. Simulation results showed that when the value of X increases, the number of master/slave switches also increases, leading to network throughput degradation.
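The switching rule can be sketched as follows (function names, the sample energy values, and X = 0.5 are illustrative assumptions):

```python
# Sketch of the role-switch rule of the first method in [130]: the master
# hands over its role, via RSP, when its residual energy falls below a
# fraction X of the best slave's residual energy.

def should_switch(master_energy, slave_energies, x=0.5):
    """Return the index of the new master slave, or None to keep the current one."""
    best = max(range(len(slave_energies)), key=lambda i: slave_energies[i])
    if master_energy < x * slave_energies[best]:
        return best  # RSP: the richest slave becomes the new master
    return None

# Master at 0.3 while the best slave holds 0.9 -> switch, since 0.3 < 0.5 * 0.9.
new_master = should_switch(0.3, [0.4, 0.9, 0.6], x=0.5)
```

Raising X makes the condition trigger more often, which reproduces the trade-off noted above: better energy balance, but more frequent switches and thus throughput degradation.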
The second method, referred to as Distance Based Power Control (DBPC), takes root from the Bluetooth specification on transmission power control. Three power classes are defined, each one with power ranges from 2 to 8 dBm. Assuming that the distance d between a master and a slave is known at both sides, DBPC chooses the transmission power of both master and slave by evaluating the power loss due to their distance, which is proportional to d^η [131]. DBPC can significantly improve the network lifetime, although the effect decreases as the packet arrival rate increases, due to the higher load on the devices and thus higher energy consumption.
Adaptive Share Polling (ASP) was proposed in [132], in order to reduce the power consumption of Bluetooth networks where the traffic is composed of short packets transmitted at a constant bit rate. An ideal polling scheme would keep the slaves in a sleep state and wake them up via dedicated polling packets from the master only if there is data for them. In real cases, the arrival times of data packets cannot be predicted exactly, due to traffic load fluctuations. The rate of transmission of polling packets is thus the result of a trade-off between data delivery latency and energy consumption: the aim of ASP is indeed to optimize this trade-off by specifically addressing two sources of inefficiency: (a) unsuccessful polling procedures, where the slave replies to polling packets with a NULL packet, in order to maintain a constant bit rate, and (b) useless receptions, in which the slaves receive many access codes and headers for packets that are not meant for them.
In ASP, the master schedules the traffic so as to keep the successful transmission exchange rate between each slave and the master within a predefined range. The master also maintains a timer for each slave, in order to poll the slave at the optimal moment in terms of data delivery latency. ASP performance was evaluated using IBM's BlueHoc Bluetooth extension for the NS-2 simulator [134]. Simulations were executed under the assumption that no packets were lost and the traffic was unidirectional (slave-to-master). Results for a single-slave piconet (i.e., a piconet with one master and one slave) were used as a baseline to evaluate the power consumption of both master and slave. The analysis was then carried out for piconets including up to seven slaves. Results showed that the energy consumption of the ASP algorithm depends on the data rate: when both master and slaves use low data rates, the energy consumption is low; if the data rate of the slaves increases, the energy consumption also increases.

Bluetooth Low Energy (BLE)
A method for reducing BLE energy consumption was proposed in [135]. It focuses on reducing the average transmission current I_average by changing the size M of the data packets for a given sampling frequency f_sample, leading to a data transmission period T between two consecutive connection events given by:

T = M / f_sample.

A data unit of 18 Bytes was used in order to measure the energy consumption in real scenarios, taking into account the limitation of 20 Bytes for a one-time data transmission, as specified in [43]. It follows that the average transmission current in one data transmission can be evaluated as:

I_average = Q_sum / T,

where Q_sum, the total electrical charge consumption, is evaluated as:

Q_sum = Q_read + Q_notify + Q_sleep,

where Q_read is the charge consumed by the sensor in charge of receiving the information, Q_notify is the charge consumed for transmitting data, and Q_sleep is the charge consumed while sleeping. By combining the above equations, the average transmission current can be expressed as a function of the data packet size M, involving the sleeping current I_sleep and a predetermined polynomial C. Simulation results showed that the average current of data transmission can be reduced by varying the size of the data packets, reducing in this way the total energy consumption of BLE devices, and thus extending network lifetime.
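A numeric sketch of this model follows, assuming T = M/f_sample, Q_sum = Q_read + Q_notify + Q_sleep and I_average = Q_sum/T, with purely illustrative charge values:

```python
# BLE average-current sketch: packing M samples into one notification
# stretches the transmission period T, so the fixed per-connection-event
# radio overhead is amortized over more data and the average current drops.
# All charge/current values below are illustrative assumptions.

def average_current(m, f_sample, q_read, q_notify_fixed, q_notify_per_unit, i_sleep):
    t = m / f_sample                                   # period between events [s]
    q_notify = q_notify_fixed + q_notify_per_unit * m  # radio charge per event [C]
    q_sleep = i_sleep * t                              # charge spent sleeping [C]
    q_sum = q_read * m + q_notify + q_sleep
    return q_sum / t                                   # average current [A]

# Larger packets -> lower average current (illustrative parameters).
i1 = average_current(m=1, f_sample=10, q_read=2e-6, q_notify_fixed=50e-6,
                     q_notify_per_unit=5e-6, i_sleep=1e-6)
i4 = average_current(m=4, f_sample=10, q_read=2e-6, q_notify_fixed=50e-6,
                     q_notify_per_unit=5e-6, i_sleep=1e-6)
```

The fixed notification overhead q_notify_fixed is the term that makes larger M beneficial, which is the effect the simulations in [135] report.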

Energy Efficiency in LPWANs
In the following subsections we move on to LPWAN technologies, reporting in-depth descriptions of proposed energy efficiency methods. Technology-agnostic energy efficiency methods are described first; then, works on specific technologies are analyzed, ranging from LoRa/LoRaWAN to 3GPP cIoT standards, including EC-GSM-IoT, LTE-M and NB-IoT.
A general approach for evaluating the energy consumption of LPWAN technologies is presented in [136]. It is based on the observation that the total energy consumption is the sum of the energy consumed by the individual consuming elements. Therefore, in order to properly calculate the overall consumption, it is key to calculate the energy consumed by each element, starting from the observation that the sensor unit operates in two modes: a Non-Active Period, which consists of sleep/idle mode, and an Active Period, which includes information processing and data exchange operations (i.e., Connected mode). Reference [136] verified the validity of this approach on different LPWAN devices using a Keysight N6705B DC Power Analyzer [137] and the Joulescope Precision DC Energy Analyzer [138]. Experiments show that LoRaWAN has higher energy efficiency and longer battery lifetime compared to NB-IoT and Sigfox devices. A few other works in the literature provided a comparison between two or more LPWAN technologies in terms of energy efficiency; details are provided in Table 4.

Table 4. Existing comparative analyses of Low-Power Wide-Area Networks (LPWANs) in terms of energy efficiency and consumption.

| Paper(s) | Technologies | Methods | Results/Conclusions |
|---|---|---|---|
| [81] | Sigfox, LoRaWAN, NB-IoT, LTE-M | Simulations | LTE-M has the highest energy consumed per transmitted message. For distances below 2 km, the technologies have similar energy consumption. For distances above 2 km, the energy consumption depends on temperature, payload size, and other parameters. |
| [93] | LTE-M, NB-IoT | Simulations | NB-IoT has better energy efficiency for small data lengths. LTE-M offers better energy efficiency and longer lifetime for large data lengths and good coverage. NB-IoT is a better fit for simple sensors and low data rate applications in medium to poor coverage. |
| [136] | LoRaWAN, Sigfox, NB-IoT | Experiments | LoRaWAN has lower energy consumption compared to Sigfox and NB-IoT. Sigfox is better in terms of coverage but has the highest energy consumption. |
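The two-mode consumption model of [136] can be illustrated with a short battery-lifetime estimate; all currents, durations, and the battery capacity below are illustrative assumptions:

```python
# Two-mode LPWAN energy model sketch: energy per reporting cycle is the
# sum of the Active Period (processing + data exchange) and the
# Non-Active Period (sleep/idle), from which a lifetime can be estimated.
# All numeric values are illustrative assumptions.

def cycle_energy(v, i_active, t_active, i_sleep, t_sleep):
    """Energy per reporting cycle in joules (E = V * I * t per mode)."""
    return v * (i_active * t_active + i_sleep * t_sleep)

def lifetime_years(battery_mah, v, e_cycle, cycle_s):
    battery_j = battery_mah * 1e-3 * 3600 * v  # capacity in joules
    cycles = battery_j / e_cycle
    return cycles * cycle_s / (365 * 24 * 3600)

# One uplink report per hour: 2 s active at 40 mA, rest asleep at 5 uA, 3.6 V.
e = cycle_energy(v=3.6, i_active=0.040, t_active=2.0, i_sleep=5e-6, t_sleep=3598.0)
years = lifetime_years(battery_mah=2400, v=3.6, e_cycle=e, cycle_s=3600.0)
```

With these assumed figures the device lasts on the order of ten years, and the split between active and sleep charge makes clear why duty cycle and sleep current dominate LPWAN lifetime comparisons.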

LoRa and LoRaWAN
A LoRa/LoRaWAN network formed by end devices (EDs) equipped with wake-up receivers (WuRx) was analyzed in [139]. The network is organized as a star topology with a Central Node (CN). The CN can wake up one or more EDs via a Wake-up Beacon (WuB). EDs spend most of their time in the sleep state, and wake up only when they receive WuB signals. The proposed scheme uses bidirectional long-range communication between the CN and the gateway, and unidirectional communication between the EDs and the gateway. It also uses an Ultra-Low Power (ULP) WuRx to achieve purely asynchronous communication between the CN and the EDs. With this method, the EDs do not have to listen to the channel periodically (as Class B devices do) or continuously (as Class C devices do) in order to receive data from the gateway. Simulation results show that the power consumption when using the WuRx scheme is significantly lower (up to 3000 times), with respect to legacy LoRaWAN Class A, B and C devices.
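The magnitude of the WuRx gain can be illustrated with a back-of-the-envelope comparison; all power figures and the listening schedule are assumptions made for this sketch, not the paper's values:

```python
# Average-power comparison sketch: an always-on ultra-low-power wake-up
# receiver vs. a device that periodically wakes its main radio to listen
# for downlink data. All figures are illustrative assumptions.

def avg_power_wurx(p_wurx):
    # Main radio stays off; only the ULP WuRx draws power continuously.
    return p_wurx

def avg_power_periodic(p_listen, t_listen, period, p_sleep):
    # Duty-cycled listening with the main radio.
    duty = t_listen / period
    return duty * p_listen + (1 - duty) * p_sleep

p_wurx = avg_power_wurx(p_wurx=1e-6)                    # ~1 uW wake-up receiver
p_duty = avg_power_periodic(p_listen=40e-3, t_listen=0.05,
                            period=1.0, p_sleep=5e-6)   # 50 ms listen each second
ratio = p_duty / p_wurx
```

Even this rough model yields a ratio in the thousands, consistent in order of magnitude with the "up to 3000 times" reduction reported in [139].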
Another analysis of LoRa energy efficiency was proposed in [140]. Normally, before transmitting, a LoRa device switches into Channel Activity Detection (CAD) to verify that the channel is not occupied by another device. The model uses a frame which includes the CAD interval and the time it takes to transmit a packet, followed at the end of the frame by one second in standby. Only after this procedure is complete is a window opened for the downlink. From the moment a preamble is detected until the entire packet is received, the device remains in reception mode; otherwise the device stays in standby.
In order to correctly evaluate the energy consumption in each mode, the method uses the current levels I [mA] provided in [141]. When considering uplink transmissions, the energy consumption is evaluated by taking into account the consumption in CAD and transmission modes. The model in [139] gives a more complete account of the energy expenditure. The energy consumed during transmission can be evaluated as follows:

E_TX = V · I_TX · T_a,

where E_TX is the energy spent in transmission mode, V is the voltage supply, I_TX is the transmission current, and T_a is the time on air for the uplink transmission.
In [140], the energy efficiency for a single link is calculated, defined as the ratio between the number of payload bits successfully transmitted and the total energy spent to make the transmission possible, measured in bits per Joule per Hertz. In order to calculate the energy efficiency, the authors model the probability of success for a single link, and then use the energy consumed during each mode. The energy efficiency can thus be evaluated as follows:

η = (PL · P_s) / (BW · V · I · T),

where PL is the packet length in bits, BW is the bandwidth, I is the current, V is the voltage, P_s is the probability of a successful transmission, and T is the activity interval.
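A numeric sketch of this metric follows; the symbol-based airtime model and the success probabilities are illustrative assumptions, not values from [140]:

```python
# Single-link efficiency sketch: eta = PL * Ps / (BW * V * I * T), in
# bits per joule per hertz. Each SF step roughly doubles the time on air
# (T_sym = 2**SF / BW), but higher SF can raise the success probability
# at long range. Symbol count and Ps values are assumptions.

def time_on_air(sf, bw, n_sym=60):
    """Rough airtime: assumed 60 symbols of duration 2**SF / BW each."""
    return n_sym * (2 ** sf) / bw

def energy_efficiency(pl_bits, sf, bw, v, i_tx, p_s):
    t = time_on_air(sf, bw)
    return (pl_bits * p_s) / (bw * v * i_tx * t)

# Long link: SF7 almost never succeeds, SF12 usually does (assumed values).
eta_sf7 = energy_efficiency(pl_bits=400, sf=7, bw=125e3, v=3.3, i_tx=0.044, p_s=0.01)
eta_sf12 = energy_efficiency(pl_bits=400, sf=12, bw=125e3, v=3.3, i_tx=0.044, p_s=0.95)
```

With these assumed success probabilities, SF12 is the more efficient choice at long range despite its 32x longer airtime, which is the trend the numerical analysis reports; at short range, where both SFs succeed, the inequality reverses.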
Results of the numerical analysis show that, at long distances, higher SF values result in higher energy efficiency. On the contrary, lower SF values give better performance at short distances.

LoRaWAN energy efficiency is also analyzed in [142]. The study considers a fixed transmit power and the Okumura-Hata radio propagation model. Two types of energy dissipation are considered in order to measure the energy consumption: energy dissipation in the transmitter, E_tx, and in the receiver, E_rx. The energy consumption is then evaluated by considering several factors, including SF, transmission power, bandwidth, and star vs. mesh topologies. Results show that, in order to achieve low energy consumption, the transmitted power needs to be adapted to the SF in use. Moreover, it is also observed that signal bandwidth does not have a significant impact on energy consumption.

Cellular IoT (cIoT)
cIoT enables mMTC over the cellular system. The latter is currently transitioning from the fourth to the fifth generation, which is envisioned to fully support mMTC scenarios, providing connectivity to a large number of IoT devices. Large numbers of devices will thus be connected through the 4G/5G cellular architecture, interacting with heterogeneous users and providing several services, particularly in the context of smart environments and industry automation [143]. To achieve this goal, the capacity and usage of the cellular system will significantly increase, possibly leading to high energy consumption and operational costs (OPEX), and in turn to increased environmental concerns, if appropriate mechanisms for energy efficiency are not adopted [144].
Within this context, an aspect jointly highlighted by the academic and industry communities is that the capacity increment targeted by next-generation systems has to be matched by an appropriate power consumption balance [145,146].

LTE-M
This section focuses on techniques that try to optimize LTE-M energy consumption. The main methods are Discontinuous Reception (DRX), as part of Sleep/wake-up techniques, and methods based on random access procedure optimization, as part of Resource Allocation techniques.
Sleep/wake-up techniques. Methods belonging to this category typically take advantage of the DRX feature that LTE-M inherits from LTE. In general, even when there is no traffic exchange between the network and the UE, the UE has to keep listening to the network, in order to decode the Physical Downlink Control Channel (PDCCH), which carries scheduling assignments and other control information [147]. By doing so, the UE remains active (ON) all the time, draining the battery. A solution to decrease the consumption is for the UE to go into a sleep state, that is, RRC Idle mode (OFF), for a period of time, and wake up sporadically to check whether there is any relevant data. If not, the UE goes back into sleep mode and wakes up again later, after another full inactivity period, repeating this cycle over and over. DRX regulates this periodic cycle using the parameters listed in Table 5 [148].
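The benefit of DRX can be illustrated with a simple duty-cycle estimate; the currents and timings below are illustrative assumptions, not standardized values:

```python
# DRX saving sketch: instead of monitoring the PDCCH continuously, the UE
# wakes only for a short on-duration within each DRX cycle. All currents
# and timings are illustrative assumptions.

def avg_current_always_on(i_rx):
    """UE decodes the PDCCH continuously."""
    return i_rx

def avg_current_drx(i_rx, i_sleep, on_duration, cycle):
    """UE wakes for on_duration seconds once per DRX cycle."""
    duty = on_duration / cycle
    return duty * i_rx + (1 - duty) * i_sleep

i_on = avg_current_always_on(i_rx=0.030)               # 30 mA receive current
i_drx = avg_current_drx(i_rx=0.030, i_sleep=20e-6,
                        on_duration=0.010, cycle=1.28)  # 10 ms every 1.28 s
saving = 1 - i_drx / i_on
```

Even a modest cycle length cuts the average current by two orders of magnitude in this sketch, at the cost of the added paging latency discussed for the schemes below.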
LTE-M energy efficiency was analyzed in conjunction with DRX in [149], which proposed a cross-layer scheme for packet scheduling using a memetic-based algorithm. The scheme exploits the fact that memetic algorithms require fewer computations, thus leading to significant power savings. DRX is used to further improve battery lifetime, by switching the LTE-M devices into Idle mode every time they enter an inactivity period. The scheme utilizes a scheduling algorithm that selects LTE-M devices based on channel quality and energy, thus optimizing frequency/time resource allocation while reducing energy consumption. Before this process happens, the base station identifies the active nodes in the cell, by checking whether each LTE-M device has data ready for transmission; every device with no data for transmission then applies DRX.

Each node is assigned an urgency weight, which determines when the device transmits and is compared to a threshold value. If the urgency weight is higher than the threshold, the node is considered an urgent node, unable to wait for its turn to transmit; the algorithm keeps this node in Connected mode, and it will be the first to be allocated in the next Transmission Time Interval (TTI). If the urgency value is lower than the threshold, the node is considered non-urgent. Defining the length of the DRX cycle is critical, because a long DRX cycle can increase data latency. After the configuration of the DRX cycle has finished, the nodes remain in sleep mode for one DRX cycle. If a node reaches its delay threshold, the DRX cycle is reset to its initial value while the node remains in Connected mode; otherwise, the DRX cycle is configured with a new updated value and the node switches from the Connected state to Idle. An NS3-based simulator was used to evaluate the performance of the proposed algorithm.
In order to evaluate the performance of the proposed algorithm, the authors also performed measurements in real scenarios. Results showed that DRX allows devices to be safely powered down, thus leading to significant energy savings, with the proposed algorithm providing lower energy consumption.

Table 5. DRX parameters.

| Parameter | Description |
|---|---|
| shortDRX-Cycle | The first DRX cycle that the UE enters after the inactivity timer expires. The UE remains in short DRX cycles as long as the drxShortCycleTimer is running. |
| drxShortCycleTimer | The number of consecutive subframes for which the UE remains in the short DRX cycle after the DRX Inactivity Timer has expired. |

Resource Allocation techniques.
An enhanced random access algorithm was introduced in [150], in order to reduce the power consumption of devices. LTE-M defines two access modes:
• Contention-based Mode, where devices attempt to access the channel at the same time, leading to competition for the channel resources;
• Contention-free Mode, where the base station allocates the channel resources for all the access requests that are received.
The contention-based random access procedure presented in [150] is based on time-slotted ALOHA. The energy consumption is divided into three parts: receiving, sending (non-sleep), and sleep energy consumption. Following slotted ALOHA, time is divided into slots: if a data frame arrives within a time slot, it can be (re)transmitted only at the beginning of the next slot. The scheme proposed in [150] is based on the principle that, when a new slot starts, a device that has initiated access receives a broadcast signal containing the success rate of the last time slot. If the last slot had a low access success rate, the device selects another access slot with a higher success probability. Simulations showed that, when the network starts to congest, the proposed algorithm helps improve the success rate of devices accessing the network. An increase in the number of devices results in a decrease in the success rate; the proposed waiting scheme helps reduce energy consumption.
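The access scheme can be illustrated with a toy slotted-ALOHA simulation; the specific backoff rule driven by the broadcast success rate, and all parameters, are assumptions for illustration, not the exact algorithm of [150]:

```python
# Toy slotted-ALOHA sketch: a slot succeeds for a device only if it is
# the sole transmitter. In the adaptive variant, devices lower their
# transmit probability after the base station broadcasts a poor success
# rate for the last slot. The update rule and parameters are assumptions.

import random

def run_slots(n_devices, n_slots, adaptive, seed=0):
    rng = random.Random(seed)
    p_tx, successes, attempts = 1.0, 0, 0
    backlog = n_devices
    for _ in range(n_slots):
        txs = [d for d in range(backlog) if rng.random() < p_tx]
        attempts += len(txs)
        if len(txs) == 1:
            successes += 1
            backlog -= 1
        if adaptive:
            # Broadcast feedback: low success in the last slot -> defer more.
            rate = 1.0 if len(txs) == 1 else 0.0
            p_tx = max(0.05, 0.5 * p_tx + 0.5 * rate) if backlog else 0.0
    return successes, attempts

s_plain, _ = run_slots(n_devices=20, n_slots=100, adaptive=False)
s_adapt, _ = run_slots(n_devices=20, n_slots=100, adaptive=True)
```

Without feedback every slot collides under congestion, while the adaptive variant drains the backlog, mirroring the reported improvement in access success rate; fewer failed attempts also translate into lower non-sleep energy.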

EC-GSM-IoT
An approach aiming at decreasing the power consumption of EC-GSM-IoT devices was proposed in [151]. In particular, it focuses on the use of the paging operation (described in Section 2) during DRX mode to efficiently save battery. The method makes use of dummy bits [56], which are used in three different ways:

• Paging Indicator, indicating whether the paging block contains a valid Paging Indication message (bit combination "11") or not (bit combination "00"). The paging indication message is a short indicator that tells the devices that there is a paging message on an associated paging channel.

Simulation results showed that a receiver using the proposed paging indicator consumes considerably less power (about 40% less) to decode a paging block, compared to a receiver not using the paging indicator.

NB-IoT
NB-IoT is arguably the most important cIoT technology, as demonstrated by the large number of works studying its energy efficiency and consumption aspects. Techniques belonging to all four categories defined at the beginning of this section have been proposed, as shown in Figure 8.

Figure 8. Energy efficiency approaches in cIoT: LTE-M—Sleep/wake-up techniques [149] and Resource Allocation techniques [150]; EC-GSM-IoT—Sleep/wake-up techniques [151]; NB-IoT—Sleep/wake-up techniques [152][153][154][155][156][157], Data Reduction techniques [158], Network techniques [159], and Resource Allocation techniques [160,161].

Sleep/wake-up techniques. NB-IoT uses two advanced mechanisms, that is, PSM and eDRX. PSM makes it possible for the device to enter a deep sleep mode within the RRC Idle state, so as to minimize its consumption. In this situation, the device is still registered to the network but cannot be reached. If the Mobility Management Entity (MME) receives a downlink data notification, it notifies the serving base station to cache the data and delay the paging request. The device goes back to Connected mode only when it has data to transmit in uplink [59]. It starts an Active Timer (AT) when it transitions from Connected to Idle, and enters PSM when the AT expires. More details on the time structure enabling the PSM and eDRX mechanisms are provided in Figure 9.
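The PSM timeline can be illustrated with a simple reachability computation. Timer and current values are illustrative assumptions; T3324 is the Active Timer named in the text, and T3412 denotes the 3GPP periodic tracking area update timer that bounds the deep-sleep interval:

```python
# PSM reachability sketch: after a transmission the device stays
# reachable in Idle for the Active Timer T3324, then sleeps in PSM until
# the periodic TAU timer T3412 expires; while in PSM it cannot be paged.
# All numeric values are illustrative assumptions.

def reachable_fraction(t3324, t3412):
    """Fraction of the Idle period during which the device can be paged."""
    return t3324 / t3412

def psm_avg_current(i_idle, i_psm, t3324, t3412):
    f = reachable_fraction(t3324, t3412)
    return f * i_idle + (1 - f) * i_psm

# Reachable for 60 s after each transmission, TAU every 12 hours.
frac = reachable_fraction(t3324=60.0, t3412=12 * 3600.0)
i_avg = psm_avg_current(i_idle=1.5e-3, i_psm=3e-6,
                        t3324=60.0, t3412=12 * 3600.0)
```

The sketch makes the PSM trade-off explicit: the device is reachable for only a tiny fraction of the cycle, which is what pushes the average Idle-period current down to a few microamperes.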
eDRX was introduced in Rel-13 in order to further extend the sleep time during Idle mode. It is an evolution of the DRX technique, already described for LTE-M, and works in both Idle and Connected modes. In Idle, eDRX extends by up to 3 hours, compared to DRX, the time a device is not actively monitoring paging messages [152]. A set of timers, defined in Table 6, controls the eDRX processes; in particular, the Active Timer (T3324) controls the time lapse for which the device is reachable by the network while in Idle. In Connected mode, the device alternates between listening for paging occasions and sleep periods. eDRX can be used with or without PSM, and is responsible for extending the time interval during which a device is not listening to the network. eDRX-specific cycles are reported in Table 6.

A technique aiming to group and synchronize NB-IoT devices for multicast transmissions, in order to improve energy efficiency, was proposed in [153]. The method is slightly different from clustering, since the devices are grouped with no CH selection. The grouped devices share a common paging message within the inactivity timer, which regulates the time a device is inactive, and are able to receive multicast data in a single transmission, since they go back to sleep mode only when all of them have been paged, and not before.
Three different grouping and synchronization mechanisms are proposed. One mechanism modifies the paging protocol to notify the devices that a multicast transmission is about to start; adopting DR-SI, the devices can retain their DRX cycles, as in DR-SC. Devices using DR-SI have lower energy consumption, since they do not receive any paging in order to receive downlink data, and thus do not need to wake up and connect to the network. This method is, however, not compliant with the NB-IoT standard.
A contribution to the empirical study of energy and power consumption in NB-IoT using sleep/wake-up techniques is given in [154]. It includes measurements conducted over a period of several months, from October 2018 to October 2019, and evaluates the power saving methods supported by the available NB-IoT networks, taking into account different network configurations. The experimental setup consists of two NB-IoT modules, the SARA-N211-O2B [155] and the Quectel BC95. Coverage results showed that SNR distributions overlap between good and bad locations, while RSRP distributions are more representative of the different locations. It is also observed that energy consumption differs between the Extended Coverage Levels (ECLs) an NB-IoT device can operate in, that is, ECL0, ECL1, and ECL2. In particular, the consumption increases when moving from ECL0 to ECL1 and ECL2, since in the latter two, NB-IoT devices adopt a larger number of repetitions and the maximum transmission power, in order to increase service reliability.
Among several insights, it is observed that, in Connected mode, the energy consumption depends on several factors, including (a) the device (the SARA-N211 module has a higher power consumption than the Quectel BC95), (b) the network operator (operators might use different configurations and timers, directly affecting device operations and consumption), (c) the device configuration (different RAI modes lead to different consumption), and (d) the service (an increase in packet size causes an increase in energy consumption).
In [157], the first NB-IoT diagnostic tool, referred to as NB-Scope, is presented, able to support fine-grained collection of power consumption and protocol message traces. NB-Scope enables in-depth diagnoses of the NB-IoT network in terms of energy efficiency and, at the same time, supports heterogeneous NB-IoT modules while requiring minimal system modifications. NB-Scope has three key design aspects:
• Layered Hardware Architecture, which partitions the hardware system into layers;
• Software Abstraction, which introduces Application Programming Interfaces (APIs) for letting developers and users access and control different NB-IoT modules;
• Trace Fusion of Power Consumption and Radio Access, which allows power consumption traces and radio access logs to be collected and synchronized.
In real-time benchmark mode, NB-Scope allows real-time interaction with and analysis of the energy performance of NB-IoT modules. In field test mode, it allows the study of energy performance in real-world deployments of NB-IoT modules. The measurements presented in [157] were executed in more than 1200 locations, in two major cities in China and one city in the USA; the analysis focuses on five NB-IoT applications—outdoor parking, indoor parking, smart door locks, smoke detection and smart water/electricity metering. Measurement results showed that indoor parking nodes consume about 2.7x more energy to transmit one packet compared to water metering nodes. Moreover, the energy consumption of the nodes in the US is lower than that in China, with the Quectel BC26 and BC66 NB-IoT modules being used in China and the US, respectively. Furthermore, it is also shown that the energy consumption per packet increases as the device moves farther from the base station.
The authors also study the impact of the Inactivity Period. This period is configured in the base station and used to trigger a transition from Connected to Idle mode if the device exchanges no data during it. One way to avoid wasting energy is to skip this period entirely, resulting in a faster transition to Idle mode. The optimization proposed in [157] consists in reducing the timer to 5 s, which leads to an earlier release of the connection and to a 10% energy saving.
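The effect of shortening the Inactivity Period can be sketched with a toy energy budget for one reporting cycle. All currents, durations and the reporting period below are illustrative assumptions, not measurements from [157]; in this simplified model the saving comes out much larger than the ~10% reported there, since real devices also spend energy on paging, synchronization and eDRX that the sketch omits:

```python
def cycle_energy_mj(inactivity_s, period_s=600.0, active_s=2.0,
                    i_connected_ma=220.0, i_sleep_ma=0.015, v=3.6):
    """Energy (mJ) per reporting cycle: the device stays in Connected mode
    for the data exchange plus the Inactivity Period, then sleeps (PSM)
    until the next report. All numbers are illustrative assumptions."""
    connected_s = active_s + inactivity_s
    sleep_s = period_s - connected_s
    return v * (i_connected_ma * connected_s + i_sleep_ma * sleep_s)

default = cycle_energy_mj(inactivity_s=20.0)   # hypothetical operator timer
reduced = cycle_energy_mj(inactivity_s=5.0)    # timer reduction as in [157]
print(f"default: {default:.0f} mJ, reduced: {reduced:.0f} mJ")
print(f"saving: {1 - reduced / default:.0%}")
```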
Data reduction techniques. Energy consumption can also be improved by reducing signaling and control transmissions, as well as by optimizing the procedures for efficient data exchange. As discussed in [158], the NB-IoT standard introduces two procedures for the transmission of small amounts of data with reduced signaling, that is, Control Plane (CP) cIoT Evolved Packet System (EPS) optimization, and User Plane (UP) cIoT EPS optimization.
On the one hand, CP optimization makes it possible to use the control plane of the cellular architecture for forwarding data packets, after encapsulating them in Non Access Stratum (NAS) signaling messages directed to the MME. NAS messages include a RAI field that allows the device to notify the MME when no more uplink or downlink data are expected. On the other hand, UP optimization improves the establishment of the initial RRC connection between device and base station, which configures the radio bearers and the Access Stratum (AS) security context, and enables secure transmissions. When the device transitions to Idle mode, the Connection Suspend procedure comes into effect, enabling the device to retain its context. In the case of a new data exchange, the device resumes the connection using the Connection Resume procedure. This is done by providing a Resume ID to the base station, so that the latter can access the device-specific context.
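The Suspend/Resume idea behind UP optimization can be sketched as a toy state machine. Class and field names are illustrative; the actual procedure is defined in the 3GPP specifications and involves the base station and the MME:

```python
class Network:
    """Toy network side: stores suspended contexts keyed by Resume ID."""
    def __init__(self):
        self._store = {}
        self._next_id = 0

    def setup_context(self):
        # Full RRC setup: radio bearers + AS security from scratch (costly)
        return {"bearers": "configured", "as_security": "established"}

    def store_context(self, ctx):
        self._next_id += 1
        self._store[self._next_id] = ctx
        return self._next_id          # Resume ID handed to the device

    def fetch_context(self, resume_id):
        return self._store[resume_id]

class NbIotDevice:
    """Toy model of the UP cIoT EPS optimization (Suspend/Resume)."""
    def __init__(self):
        self.state = "IDLE"
        self.context = None
        self.resume_id = None

    def connect(self, network):
        self.context = network.setup_context()
        self.state = "CONNECTED"

    def suspend(self, network):
        # Transition to Idle while both sides retain the context
        self.resume_id = network.store_context(self.context)
        self.state = "IDLE"

    def resume(self, network):
        # Cheap reconnection: context restored via the Resume ID,
        # skipping the full bearer/security setup
        self.context = network.fetch_context(self.resume_id)
        self.state = "CONNECTED"

net, ue = Network(), NbIotDevice()
ue.connect(net)     # initial full setup
ue.suspend(net)     # to Idle, context retained
ue.resume(net)      # reconnect via Resume ID
print(ue.state)     # CONNECTED
```

The energy benefit comes from the `resume` path replacing the full `connect` path on every new data exchange.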
The above procedures were tested, for uplink and downlink, with and without acknowledgment, while also using PSM. Results showed that the battery lifetime decreases significantly for short Inter-Arrival Times (IATs). When the device is performing DRX, the total period consists of the Inactivity and Activity timers. The longer a device stays in DRX, the lower the probability of reestablishing a connection after it comes out of DRX. CP optimization makes it possible for the MME to know whether traffic is still ongoing, using the information gathered from the RAI notification, thus eliminating the need for DRX.
Network techniques. A routing-based method for boosting the energy efficiency of an NB-IoT system was proposed in [159], using cooperative relaying. In this scheme, NB-IoT devices help each other in communicating towards the base station, acting as relays and thus establishing device-to-device (D2D) links. This would also ultimately lead to infrastructure cost reductions for network operators. The analyzed scenario consists of a single base station, with N devices uniformly distributed in its coverage area. Unlike in the NB-IoT standard, devices in Idle mode act as relays for the uplink (UL) communications of active peers.
Given the above scenario, an Optimal relay selection approach and a Greedy approach were proposed. The first approach tries to minimize the energy consumption of the entire relay system, referred to as E_TOT, assuming the presence of N_a active and N_i idle devices, respectively. In order to evaluate E_TOT, the energy required on each link between the devices and the base station has to be evaluated. Therefore, the following parameters are used:
• Transmission power P, assumed common for each link;
• Resource Unit (RU) allocated to the ith device relaying to the jth device or directly transmitting to the base station, denoted as R_{i,j} or R_{i,0}, respectively. The RU is a basic unit allocated over the 180 kHz Physical Uplink Shared Channel (PUSCH);
• Slot duration for transmission, t_s, assumed common for each link.
The energy spent on each D2D link between devices i and j can then be formulated as E_{i,j} = P · R_{i,j} · t_s (and, similarly, E_{i,0} = P · R_{i,0} · t_s for the direct link toward the base station). Finally, the total energy consumed by the relay system is

E_TOT = Σ_{i=1}^{N_a} [ α_i E_{i,0} + Σ_{j=1}^{N_i} β_{i,j} (E_{i,j} + E_{j,0}) ], (27)

where α_i and β_{i,j} are binary coefficients. In particular, α_i = 1 when the ith device transmits the gathered data directly to the base station, while β_{i,j} = 1 when the ith device employs the jth device as a relay node. A drawback of the optimal relay selection defined on the basis of Equation (27) is that, whenever a relay is selected, no direct link between node i and the base station is allowed, and, at the same time, only devices in Idle mode can be selected as relays. Another drawback is that it requires a high computational effort. To overcome these drawbacks, a greedy algorithm is also proposed. This algorithm assumes that the device positions are known, and that the devices located at the cell edge try to save energy by using relay nodes to reach the base station. The algorithm sorts active devices by decreasing distance from the base station. First, it calculates the amount of energy needed to send data on the direct link towards the base station; then, it searches for the most suitable relay node among the Idle devices. An idle device is selected as a candidate relay if its distance to the active device is smaller than the distance between the active device and the base station. Among all possible relays, the device with the largest amount of remaining energy is selected. However, if the amount of energy needed for the 2-hop transmission involving such a relay is estimated to be higher than the direct link consumption, the relay is not used and a direct transmission towards the base station is performed.
This procedure is repeated for each active user. Simulation results show that the total amount of energy consumed by the network increases with the number of active devices. When using relay nodes, the energy saving varies from 12% to 30%, depending on the adopted transmission configuration, for example, whether the 15 kHz or 3.75 kHz subcarrier spacing is used.
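The greedy procedure described above can be sketched as follows. The link-energy model, positions and residual-energy values are illustrative assumptions; [159] derives link energies from the RU allocation rather than the simple distance-based cost used here:

```python
import math

def dist(a, b):
    return math.hypot(a[0] - b[0], a[1] - b[1])

def link_energy(d, p=0.2, t_s=0.004):
    """Toy link-energy model: cost grows with distance (illustrative only)."""
    return p * t_s * (1 + d ** 2)

def greedy_relay_selection(active, idle, bs=(0.0, 0.0)):
    """Greedy relay assignment sketched from [159]:
    process active devices farthest from the base station first;
    candidate relays are idle devices closer to the active device than
    the base station is; among candidates, pick the one with the most
    residual energy, and use it only if the 2-hop cost beats the direct link."""
    plan = {}
    for a in sorted(active, key=lambda p: -dist(p, bs)):
        direct = link_energy(dist(a, bs))
        candidates = [r for r in idle if dist(a, r["pos"]) < dist(a, bs)]
        if candidates:
            relay = max(candidates, key=lambda r: r["energy"])
            two_hop = (link_energy(dist(a, relay["pos"]))
                       + link_energy(dist(relay["pos"], bs)))
            if two_hop < direct:
                plan[a] = ("relay", relay["pos"])
                continue
        plan[a] = ("direct", bs)
    return plan

active = [(10.0, 0.0), (2.0, 0.0)]                       # cell-edge + near node
idle = [{"pos": (5.0, 0.0), "energy": 0.9},
        {"pos": (9.0, 3.0), "energy": 0.4}]
print(greedy_relay_selection(active, idle))
```

In this toy run the cell-edge device at (10, 0) relays through the idle device at (5, 0), while the device at (2, 0) transmits directly, mirroring the intuition that relaying pays off mainly at the cell edge.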
Resource allocation techniques. A resource allocation prediction-based energy saving mechanism was proposed in [160]. According to the standard, after entering the RRC Connected state, the device has to go through a scheduling request procedure in order to transmit in uplink. This is done by each device during the random access procedure. Since NB-IoT does not have dedicated radio resources for access requests, requests could collide, forcing devices to reattempt the access procedure. As also mentioned above, NB-IoT devices can also repeat access requests, as well as data transmissions, several times, depending on their coverage enhancement level, leading to an increase in energy consumption.
In order to reduce the energy consumption caused by this procedure, the Prediction-Based Energy Saving Mechanism (PBESM) is proposed. The mechanism aims to predict the resources allocated in uplink, as well as the time needed to process each occurrence. Hence, PBESM pre-assigns radio resources without the need for a scheduling request procedure. PBESM adds two new functional blocks to the standard NB-IoT architecture: the Packet Inspection Entity (PIE) and the Packet Prediction Entity (PPE). The PIE, located in the MME, determines the protocol type and the IP address from the packet header, and predicts whether an uplink response message, possibly an RRC message, will follow. At the same time, it measures the response time for each occurrence and sends this information to the PPE, which operates in the base station. The PPE uses the data received from the PIE, together with a prediction algorithm, to estimate the time delay between the downlink transmission and the generation of the uplink packet. When this procedure is completed, the base station generates a pre-scheduling command containing the time of the uplink transmission and the modulation scheme. The pre-scheduling command is then sent to the device. Simulation results show that PBESM outperforms the standard NB-IoT procedure, which requires an RRC Connection procedure, achieving up to a 34% decrease in energy consumption. When compared to an ideal prediction scheme, with no error in the prediction of the processing delay, PBESM showed a reasonable energy consumption increase of about 5%.
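The PIE/PPE interplay can be sketched as follows. Class names mirror the entities described in [160], but the internals (a per-protocol average of observed response delays) are a deliberately simplified stand-in for the actual prediction algorithm:

```python
from statistics import mean

class PacketInspectionEntity:
    """Toy PIE: tracks, per protocol type, the observed delay between a
    downlink packet and the uplink response it triggered."""
    def __init__(self):
        self.history = {}

    def observe(self, proto, delay_ms):
        self.history.setdefault(proto, []).append(delay_ms)

    def report(self):
        # Per-protocol average response delay, forwarded to the PPE
        return {p: mean(d) for p, d in self.history.items()}

class PacketPredictionEntity:
    """Toy PPE: turns the PIE report into a pre-scheduling command, so the
    device gets an uplink grant without a scheduling request procedure."""
    def __init__(self, pie):
        self.pie = pie

    def preschedule(self, proto, downlink_time_ms, mcs="QPSK"):
        predicted = self.pie.report()[proto]
        return {"ul_grant_time_ms": downlink_time_ms + predicted, "mcs": mcs}

pie = PacketInspectionEntity()
for d in (40, 44, 42):                # hypothetical past CoAP exchanges
    pie.observe("coap", d)
ppe = PacketPredictionEntity(pie)
cmd = ppe.preschedule("coap", downlink_time_ms=1000)
print(cmd)   # uplink grant pre-assigned ~42 ms after the downlink packet
```

A prediction error here simply shifts `ul_grant_time_ms`, which is exactly why [160] compares PBESM against an ideal, error-free predictor.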
Another method for energy-efficient uplink resource scheduling was proposed in [161], considering 3.75 kHz subcarrier spacing, single-tone transmissions, and the modulation and coding scheme (MCS). A decision problem, called Energy-efficient Ultra-reliable Scheduling Decision (EUSD), is formulated, aiming to find the number of repetitions that lets devices save the most energy while ensuring reliable transmissions. EUSD is composed of two phases:
• The first phase makes use of the minimal energy cost, which quantifies the energy consumed by each device. This phase determines the default parameters for each UE, such as the number of resource units, the optimal number of repetitions, and the transmission parameters, so that QoS and transmission reliability requirements can be met. Once the consumed energy of each device is evaluated, the device with the minimal energy cost is selected;
• The second phase is called Weighting Based Flexible Scheduling, and it is based on a score function used to determine the optimal scheduling of the requests from the devices. The score function evaluates which uplink transmission requests are the most urgent and lists them in descending order. After this step, the scheme defines a Waste function, which quantifies the possibility that a device may waste the resource units it has been allocated. Finally, the so-called cost ratio is used to determine whether the resource units and the MCS of a device satisfy the delay deadline without incurring extra energy consumption. During this phase, the emphasis is on balancing the resource units and the MCS while keeping the energy consumption within a predefined minimal cost.
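The score-driven ordering of the second phase can be sketched with a toy scheduler. The score function below (inverse of the slack to the delay deadline) and the request parameters are illustrative assumptions; [161] defines its own score, Waste and cost-ratio formulations:

```python
def score(req, now_ms):
    """Toy urgency score: the smaller the slack to the deadline, the higher
    the score (illustrative stand-in for the score function in [161])."""
    slack = req["deadline_ms"] - now_ms
    return 1.0 / max(slack, 1)

def schedule(requests, capacity_ru, now_ms=0):
    """Grant resource units (RUs) to the most urgent requests first."""
    granted = []
    for req in sorted(requests, key=lambda r: score(r, now_ms), reverse=True):
        if req["ru"] <= capacity_ru:
            capacity_ru -= req["ru"]
            granted.append(req["id"])
    return granted

reqs = [
    {"id": "ue1", "deadline_ms": 500, "ru": 4},
    {"id": "ue2", "deadline_ms": 100, "ru": 4},
    {"id": "ue3", "deadline_ms": 900, "ru": 4},
]
print(schedule(reqs, capacity_ru=8))   # the two tightest deadlines win
```

The full scheme additionally penalizes requests likely to waste their RUs and checks the cost ratio against a minimal energy cost, which this sketch omits.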
Simulation results show that the energy consumption increases with the number of requests, with the proposed scheme outperforming the standard NB-IoT random access procedure.

Open Challenges
This section presents open challenges and possible future work toward the design and implementation of mechanisms for energy efficiency in LPSANs and LPWANs. Although the following subsections focus on challenges specific to energy efficiency in LPSANs and LPWANs, it is worth noting that, in the future, energy efficiency in IoT will have to be analyzed in the wider context of energy-aware verticals and applications supported by IoT, such as smart energy management in buildings [162], as well as of new technologies that can lead to a deeper integration between IoT and the surrounding environment, such as artificial materials with tunable radio properties [163].

Open Challenges for LPSANs
In the context of LPSANs, the literature related to energy efficiency mechanisms for RFID is split between solutions for readers and solutions for active or passive tags.
For the readers, clustering-based methods often assume mobile devices, thus neglecting stationary situations. Moreover, these schemes require the readers to have high initial energy levels and similar mobility patterns, which could be limiting assumptions compared to real scenarios. Clustering-based solutions may also lead to latency increases, since readers could wait a large amount of time before exchanging data with the CH (e.g., see GDRA algorithm [103,104]).
For active tags, most of the methods are based on the Free2move protocol [106], where the tags wait for ACKs from the readers before entering sleep mode. Hence, the main drawbacks in energy efficiency arise when the ACK is received with large delays or not received at all by the tags. In parallel, the improvements obtained by polling-based techniques, such as the GMLE protocol [107], depend on how the polling is executed and regulated, particularly in terms of the interrogation range of the readers [107], thus requiring further analysis to optimize this procedure.
Finally, most methods for passive tags are based on the use of Query Trees [113], as mentioned in Section 4. Here, the energy consumption depends on several factors, including the number of identification queries and tree levels.
Moving on to Zigbee, the proposed solutions mostly rely on routing protocols, for example, AODVjr [117]. Since the AODVjr route establishment process is highly energy consuming, other protocols delegate the cluster-tree formation to specific nodes. This causes higher energy consumption for such nodes, making them prone to failure and causing unwanted network partitions. This is an important issue to study, in order to achieve a better balance in creating a cluster-tree network while enhancing energy efficiency, so that an increase in the number of clusters does not translate into an increase in energy consumption.
As regards Bluetooth, the literature shows how important it is to create energy-aware scatternets. All the solutions focusing on this aspect adopt the cluster-tree approach, where a CH can be replaced by another master/slave node when its energy level falls below a predefined threshold. However, substituting a CH requires a further exchange of control messages and, in turn, an increase in energy consumption. The multi-hop scatternets proposed in [126] aim at dealing with such an issue; however, the consumption decrease comes with a latency increase, which is often the most challenging trade-off to handle in Bluetooth networks.
Polling schemes are also proposed for Bluetooth networks. They are mostly evaluated under the assumption that the slaves in the scatternet go from sleep to wake-up mode thanks to a dedicated polling. However, in real scenarios data packets have no fixed arrival time, making it difficult to predict when the slave device should wake up. The master node must therefore be able to detect traffic load fluctuations, for example by transmitting polls at different data rates. In scatternet-based methods, moreover, the energy consumption depends on the data rate: if the data rate increases, so does the energy consumption, calling for further proposals to better handle this limiting trade-off.

Open Challenges for LPWANs
In the LPWAN context, and specifically for LoRa/LoRaWAN, methods relying on Channel Activity Detection show how the SF plays an important role in energy consumption. The correct assignment of SF values is thus key to the energy efficiency of LoRaWAN systems, and requires further investigation towards improved solutions.
In the context of cIoT technologies, the literature shows that the main methods used to increase energy efficiency in LTE-M are based on DRX and on random access enhancements.
In the DRX case, several methods propose assigning an urgency weight to each node, which determines the transmission order across nodes. A drawback of the urgency weight is that a node could be wrongly assigned a value lower than the threshold. Hence, that node is marked as non-urgent, while it might actually have urgent information to transmit and should thus have been labeled as urgent. This causes the node to lose the opportunity to transmit before the other nodes, increasing its delay and energy consumption. Another important open point, requiring further study, is the definition of the DRX cycle length, since too long DRX cycles have a negative effect on the delay and consumption of LTE-M devices.
Considering the techniques based on random access enhancements, simulation results show that energy efficiency can be improved, although the gains depend strongly on how the procedure is configured. Therefore, the optimization of the random access procedure plays a crucial role towards energy-efficient LTE-M networks.
Sleep/wake-up methods based on PSM/(e)DRX mechanisms have been largely proposed for NB-IoT. They lead to beneficial effects in NB-IoT energy efficiency; however, several aspects require further investigations.
While using (e)DRX-based algorithms, the usage of the DR-SC mechanism [153] negatively affects the network performance during multicast transmissions, while the DR-SI scheme is not compliant with the NB-IoT standards.
The majority of the routing techniques are based on optimal relay selection. A drawback is that the process of finding optimal relays requires a high computational effort from the network and thus increases energy consumption. Therefore, it is important to find novel solutions allowing for direct communication between devices and the base station, while also making possible the selection of relays among devices already in Connected mode.
In the case of the resource allocation techniques, the proposed methods are based on the prediction of the resources allocated in uplink, in order to pre-assign the required radio resources and eliminate the scheduling request procedure. Wrong predictions result in wrong resource assignments and increased energy waste. Hence, it is important to propose novel methods for accurate prediction, in order to optimally exploit these methods and achieve higher energy efficiency.

Conclusions
In this paper, we have surveyed and analyzed LPSAN and LPWAN technologies in terms of energy efficiency and consumption aspects. We have conducted an in-depth study on the characteristics of the main IoT technologies belonging to these two groups. Then, we have provided a classification of the metrics used for evaluating energy and power efficiency in IoT networks, as well as a review of the energy and consumption models proposed in the literature. Finally, the most prominent methods for enhancing energy efficiency have been classified into main categories and reviewed across technologies. The proposed classification and analysis provide a full overview of existing and proposed energy efficiency methods, highlighting issues as well as open challenges for these methods, and ultimately providing a basis for future work and further analysis.