Special Issue "Challenges and Opportunities of IoT Deployments—Avoiding the Internet of Junk"

A special issue of Electronics (ISSN 2079-9292). This special issue belongs to the section "Computer Science & Engineering".

Deadline for manuscript submissions: closed (31 August 2019).

Special Issue Editors

Dr. Steven J. Johnston
Guest Editor
Faculty of Engineering and Physical Sciences, University of Southampton, University Road, Southampton SO17 1BJ, UK
Interests: Internet of Things; smart city; air quality; cloud computing; embedded devices and sensors
Prof. Dr. Simon J. Cox
Guest Editor
School of Engineering, University of Southampton, SO17 1BJ, UK
Interests: Internet of Things; high-performance computing; data science; algorithms

Special Issue Information

Dear Colleagues,

The price, size and power requirements of computing hardware are falling drastically, opening up new opportunities and driving revolutions such as the Internet of Things (IoT) and smart cities. This prolific embedding of computing power into almost everything addresses many of the Global Goals for Sustainable Development, improving our understanding of cities, industry and the environment.

This gives rise to a series of benefits, but also brings a new paradigm of management and product-lifecycle issues, especially as we scale to billions of devices. In this Special Issue, we are particularly interested in papers that describe, define and quantify the potential problems with IoT deployments; that examine solutions, prototypes and demonstrators addressing all or some aspects of the IoT lifecycle and its deployments; and that present mitigation techniques to extend deployed hardware lifespans, promote hardware sharing and prevent the creation of huge volumes of e-waste.

Dr. Steven J. Johnston
Prof. Dr. Simon J. Cox
Guest Editors

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to the website; once registered, go to the submission form. Manuscripts can be submitted until the deadline. All papers will be peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Electronics is an international peer-reviewed open access monthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 1400 CHF (Swiss francs). Submitted papers should be well formatted and written in good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • The full IoT lifecycle
  • Waste mitigation techniques
  • Environmental impact of the IoT
  • Disposable compute: applications, implementations and consequences
  • Smart city deployments
  • Software architecture
  • Edge compute/Internet of Things
  • IoT power sources and efficient usage
  • Remote sensing applications
  • Fault detection/repair of IoT systems
  • IoT commissioning
  • Single-board computers applied to the IoT
  • Responsible hardware deployments
  • Low-power wireless networks for IoT (e.g., Sigfox/LoRa)
  • Portable clusters
  • Remote compute/clusters

Published Papers (8 papers)


Research

Open Access | Feature Paper | Article
Analysis of Energy Consumption and Optimization Techniques for Writing Energy-Efficient Code
Electronics 2019, 8(10), 1192; https://doi.org/10.3390/electronics8101192 - 19 Oct 2019
Abstract
The unprecedented growth of connected devices, together with the remarkable convergence of a wide variety of technologies, have led to an exponential increase in the services that the internet of things (IoT) can offer, all aimed at improving quality of life. Consequently, in order to meet the numerous challenges this produces, the IoT has become a major subject of research. One of these challenges is the reduction of energy consumption given the significant limitations of some devices. In addition, although the search for energy efficiency was initially focused on hardware, it has become a concern for software developers too. In fact, it has become an intense area of research with the principal objective of analyzing and optimizing the energy consumption of software systems. This research analyzes the energy saving that can be achieved when using a broad set of techniques for writing energy-efficient code for Raspberry Pi devices. It also demonstrates that programmers can save more energy if they apply the proposed techniques manually than when relying on other automatic optimization options offered by the GNU Compiler Collection (GCC). Thus, it is important that programmers are aware of the significant impact these techniques can have on an application’s energy consumption.
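To give a flavor of the kind of source-level technique this abstract refers to, the sketch below hand-hoists a loop-invariant expression out of a loop, a classic transformation a programmer can apply manually rather than relying on compiler optimization levels. The example is illustrative only (it is not taken from the paper), and execution time stands in for energy, since on a fixed device the two are closely correlated.

```python
import timeit

def naive(data, scale):
    # Recomputes the loop-invariant factor on every iteration.
    out = []
    for x in data:
        out.append(x * (scale ** 2 / 2))
    return out

def hoisted(data, scale):
    # Manual loop-invariant code motion: compute the factor once.
    factor = scale ** 2 / 2
    return [x * factor for x in data]

data = list(range(1000))
t_naive = timeit.timeit(lambda: naive(data, 3.0), number=200)
t_hoisted = timeit.timeit(lambda: hoisted(data, 3.0), number=200)
# hoisted() does the same work with fewer operations per element,
# which on a fixed device translates into less energy per run.
```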

Open Access | Feature Paper | Article
IoToF: A Long-Reach Fully Passive Low-Rate Upstream PHY for IoT over Fiber
Electronics 2019, 8(3), 359; https://doi.org/10.3390/electronics8030359 - 25 Mar 2019
Cited by 2
Abstract
Internet of things (IoT) devices rely heavily on wireless connectivity, yet wireless solutions carry intrinsic, often overlooked limitations in reach, availability, security and safety. Wired connectivity is the alternative for tackling those issues, and optical fibers directly connecting IoT devices could provide unique features such as huge bandwidth, long reach, signal integrity and a high security grade for the transmitted information. Nevertheless, optical connectivity can be prohibitive for IoT devices, since electrical-to-optical conversions are power hungry and costly. In this paper, a niche is first identified for IoT over fiber (IoToF), based on fully passive optical solutions for long-reach upstream of low-data-rate optical connectivity over dark fibers. We then propose, implement and characterize a prototype physical layer (PHY) based on low-cost fiber Bragg grating (FBG) acousto-optic modulation at the IoT devices and optical edge-filtering as a wavelength discriminator at the receiver. Finally, we perform an experimental demonstration of upstream data communication based on simple M-ary frequency-shift keying (FSK) at 300 bps over a 30 km range. In terms of data rate and reach for niche applications, IoToF can outperform traditional wireless technologies such as Sigfox or LoRa. IoToF will enable monitoring of urban areas with scarce and polluted spectrum, industrial areas requiring intrinsic safety, and upstreaming of data from IoT devices in remote locations with unfavorable wireless propagation but with dark fibers available.
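The M-ary FSK signaling the abstract mentions can be sketched as a simple symbol-to-tone mapping. The sketch below is not the paper's acousto-optic implementation; the tone frequencies, spacing and sample rate are hypothetical placeholders, with only the 300 bps symbol rate taken from the abstract.

```python
import math

def mfsk_modulate(bits, m=4, f0=1000.0, df=100.0, baud=300.0, fs=8000.0):
    """Map groups of log2(m) bits to one of m tones (illustrative 4-FSK).

    All carrier parameters here are hypothetical; only the low symbol
    rate mirrors the scenario described in the abstract.
    """
    k = int(math.log2(m))          # bits per symbol
    n = int(fs / baud)             # samples per symbol
    samples = []
    for i in range(0, len(bits), k):
        sym = int("".join(str(b) for b in bits[i:i + k]), 2)
        f = f0 + sym * df          # tone assigned to this symbol
        samples.extend(math.sin(2 * math.pi * f * t / fs) for t in range(n))
    return samples
```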

Open Access | Feature Paper | Article
Distributed Reprogramming on the Edge: A New Collaborative Code Dissemination Strategy for IoT
Electronics 2019, 8(3), 267; https://doi.org/10.3390/electronics8030267 - 28 Feb 2019
Cited by 2
Abstract
The ongoing era of the Internet of Things is opening up new opportunities for the integration and interoperation of heterogeneous technologies at different abstraction layers, from so-called Edge Computing up to the Cloud and IoT data analytics. With this evolution, the issue of efficient remote reprogramming of Edge and Extreme Edge deployments is becoming accentuated, as the number and diversity of embedded sensing platforms grows. To take advantage of the participation of heterogeneous devices and their in-field dynamic collaboration, this work proposes a new distributed code dissemination strategy for Edge node reprogramming, so as to efficiently support the functional reconfiguration, optimization and updating of sensor devices. It combines a partial reprogramming engine, integrated into a modular sensor node architecture, with a smart IoT wearable platform implementing the in-field collaborative framework. Results show that the proposed solution outperforms traditional centralized dissemination strategies, particularly as the diversity and scale of network reprogramming grow, an increasingly common feature of IoT device deployment and maintenance.

Open Access | Feature Paper | Article
Pre- and Post-Processing Algorithms with Deep Learning Classifier for Wi-Fi Fingerprint-Based Indoor Positioning
Electronics 2019, 8(2), 195; https://doi.org/10.3390/electronics8020195 - 08 Feb 2019
Cited by 7
Abstract
To accommodate the rapidly increasing demand for connected infrastructure, automation of industrial sites and the building of smart cities, the development of Internet of Things (IoT)-based solutions is considered one of the major trends of the modern-day industrial revolution. In particular, providing high-precision indoor positioning services for such applications is a key challenge. Wi-Fi fingerprint-based indoor positioning systems have been adopted as promising candidates for such applications. The performance of such systems degrades drastically due to several impairments: noisy datasets, high variation in Wi-Fi signals over time, fading of Wi-Fi signals due to multipath propagation caused by obstacles and by people walking in the area under consideration, and the addition/removal of Wi-Fi access points (APs). In this paper, we propose data pre- and post-processing algorithms with deep learning classifiers for Wi-Fi fingerprint-based indoor positioning, in order to provide immunity against limitations in the database and the indoor environment. In addition, we investigate the performance of the proposed system through simulation as well as extensive experiments. The results demonstrate that the pre-processing algorithm can efficiently fill in missing Wi-Fi received signal strength fingerprints in the database, resulting in a success rate of 88.96% in simulation and 86.61% in a real-time experiment. The post-processing algorithm improves the results by 9.05–10.94% in the conducted experiments, giving the highest success rate of 95.94% with a precision of 4 m for Wi-Fi fingerprint-based indoor positioning.
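Filling in missing received-signal-strength entries, as the pre-processing step above does, can be illustrated with a deliberately simple mean-imputation sketch. This is a stand-in, not the paper's algorithm; the `-100` dBm floor for never-heard access points is an assumption.

```python
def fill_missing_rssi(db, floor=-100.0):
    """Fill missing RSSI readings (None) per access point with the mean of
    the fingerprints where that AP was heard; `floor` (an assumed value)
    is used for APs that were never heard at all."""
    filled = [row[:] for row in db]   # don't mutate the caller's database
    n_aps = len(db[0])
    for j in range(n_aps):
        seen = [row[j] for row in db if row[j] is not None]
        mean = sum(seen) / len(seen) if seen else floor
        for row in filled:
            if row[j] is None:
                row[j] = mean
    return filled
```

The real pre-processing in the paper is more elaborate, but the shape of the problem is the same: each fingerprint is a vector of per-AP RSSI values with gaps that must be completed before a classifier can use it.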

Open Access | Article
On Energy Efficiency and Performance Evaluation of Single Board Computer Based Clusters: A Hadoop Case Study
Electronics 2019, 8(2), 182; https://doi.org/10.3390/electronics8020182 - 04 Feb 2019
Cited by 3
Abstract
Energy efficiency in data centers is a challenge and has garnered researchers' interest. In this study, we addressed the energy efficiency of a small-scale data center by utilizing Single Board Computer (SBC)-based clusters. A compact layout was designed to build two clusters of 20 nodes each. Extensive testing was carried out to analyze the performance of these clusters using popular benchmarks for task execution time, memory/storage utilization, network throughput and energy consumption. Further, we investigated the cost of operating SBC-based clusters by correlating energy utilization with the execution time of various benchmarks at different workload sizes. Results show that, although the low cost of a cluster built with ARM-based SBCs is desirable, these clusters yield comparatively low performance and energy efficiency due to limited onboard capabilities. It is, however, possible to tweak Hadoop configuration parameters for an ARM-based SBC cluster to utilize resources efficiently. We present a discussion of the effectiveness of SBC-based clusters as a testbed for inexpensive and green cloud computing research.
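The cost correlation described above reduces to energy = average power × execution time. The sketch below shows the arithmetic; the power draw, job duration and electricity price are hypothetical placeholders, not figures from the paper.

```python
def energy_kwh(power_watts, seconds):
    """Energy drawn by a run at a given average power, in kilowatt-hours."""
    return power_watts * seconds / 3_600_000.0

def cost_of_run(power_watts, seconds, price_per_kwh):
    """Electricity cost of one benchmark run."""
    return energy_kwh(power_watts, seconds) * price_per_kwh

# Hypothetical figures: a 20-node SBC cluster drawing ~60 W in total
# for a 30-minute Hadoop job, at 0.20 per kWh.
job_cost = cost_of_run(60.0, 30 * 60, 0.20)
```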

Open Access | Article
A Fine Frequency Estimation Algorithm Based on Fast Orthogonal Search (FOS) for Base Station Positioning Receivers
Electronics 2018, 7(12), 376; https://doi.org/10.3390/electronics7120376 - 03 Dec 2018
Cited by 1
Abstract
Base station signals have been widely studied as promising navigation and positioning signals. The time and code division-orthogonal frequency division multiplexing (TC-OFDM) signal is a novel communication and navigation fusion signal that can simultaneously implement communication and positioning services. The TC-OFDM signal multiplexes the pseudorandom noise (PRN) code, called the positioning code, and the Chinese mobile multimedia broadcasting (CMMB) signal in the same frequency band. For positioning, a TC-OFDM receiver must acquire and track the PRN code phase and the carrier frequency. Tracking performance is directly influenced by the accuracy of signal acquisition, especially the accuracy of the acquired carrier frequency. This paper focuses on fine frequency acquisition in TC-OFDM receivers and proposes a novel fine frequency estimation algorithm that uses a non-linear modelling method, fast orthogonal search (FOS), to improve frequency acquisition accuracy. With this algorithm, the PRN code is first stripped off at the coarse code phase. Then, candidate functions, consisting of pairs of sine and cosine terms, are generated at each frequency of interest. Finally, the FOS algorithm is used to detect the carrier frequency. Simulation and experimental results show that, compared with current carrier frequency estimation algorithms, the proposed algorithm effectively improves carrier frequency estimation accuracy and thus reduces the time to first fix.
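The candidate-function step above can be illustrated with a toy search: project the signal onto a sine/cosine pair at each candidate frequency and keep the frequency with the largest projection energy. This is a simplified stand-in for FOS (which orthogonalizes candidates against those already selected), and the candidate grid is an assumption.

```python
import math

def fine_freq_estimate(samples, fs, f_candidates):
    """Return the candidate frequency whose sine/cosine pair best fits
    the signal -- a toy stand-in for fast orthogonal search."""
    n = len(samples)
    best_f, best_power = None, -1.0
    for f in f_candidates:
        c = sum(x * math.cos(2 * math.pi * f * t / fs)
                for t, x in enumerate(samples))
        s = sum(x * math.sin(2 * math.pi * f * t / fs)
                for t, x in enumerate(samples))
        power = (c * c + s * s) / n   # projection energy onto this pair
        if power > best_power:
            best_f, best_power = f, power
    return best_f
```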

Open Access | Article
Optimal Planning of Dynamic Thermal Management for NANS (N-App N-Screen) Services
Electronics 2018, 7(11), 311; https://doi.org/10.3390/electronics7110311 - 08 Nov 2018
Abstract
Existing multi-screening technologies have been limited to mirroring the current screen of a smartphone onto all connected external display devices. In contrast, NANS (N-App N-Screen) technology can display different applications (N-App) on multiple display devices (N-Screen) using only a smartphone. For such NANS services, this paper empirically shows that the thermal violation constraint is more critical than the battery life constraint. Existing DTM (Dynamic Thermal Management) techniques cannot be used to prevent thermal violations, since they treat thermal violations as abnormal and prevent them by severely throttling CPU frequencies, resulting in serious QoS degradation. In NANS service scenarios, it is normal to operate in high temperature ranges in order to continue services with acceptable QoS. Targeting such scenarios, we first propose a novel thermal prediction method specially designed for NANS services. Based on it, we then propose a novel DTM technique called "thermal planning" to provide sustainable NANS services with sufficiently high QoS and without thermal violations.

Open Access | Feature Paper | Article
Hardware Acceleration for RLNC: A Case Study Based on the Xtensa Processor with the Tensilica Instruction-Set Extension
Electronics 2018, 7(9), 180; https://doi.org/10.3390/electronics7090180 - 08 Sep 2018
Cited by 3
Abstract
Random linear network coding (RLNC) can greatly aid data transmission in lossy wireless networks. However, RLNC requires computationally complex matrix multiplications and inversions in finite fields (Galois fields). These computations are highly demanding for energy-constrained mobile devices. The presented case study evaluates hardware acceleration strategies for RLNC in the context of the Tensilica Xtensa LX5 processor with the Tensilica instruction-set extension (TIE). More specifically, we develop TIEs for multiply-accumulate (MAC) operations to accelerate matrix multiplications in Galois fields, single-instruction multiple-data (SIMD) instructions operating on consecutive memory locations, and the flexible-length instruction extension (FLIX). We evaluate the number of clock cycles required for RLNC encoding and decoding with and without the MAC, SIMD and FLIX acceleration strategies. We also evaluate the RLNC encoding and decoding throughput and energy consumption for a range of RLNC generation and code word sizes. We find that for GF(2^8) and GF(2^16) RLNC encoding, the SIMD and FLIX acceleration strategies achieve speedups of approximately four hundred fold compared to a benchmark C code implementation without TIE. We also find that the unicore Xtensa LX5 with SIMD has seven to thirty times higher RLNC encoding and decoding throughput than the state-of-the-art ODROID XU3 system-on-a-chip (SoC) operating with a single core; the Xtensa LX5 with FLIX, in turn, increases throughput by roughly 25% compared to utilizing only SIMD. Furthermore, the Xtensa LX5 with FLIX consumes roughly three orders of magnitude less energy than the ODROID XU3 SoC.
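The Galois-field MAC primitive the abstract describes can be sketched in a few lines. The sketch below uses the common AES reduction polynomial x^8 + x^4 + x^3 + x + 1 (0x11B) for GF(2^8); this polynomial choice is an assumption, as the paper may use a different one, and the TIEs in the paper implement the same operation in hardware rather than bit-by-bit.

```python
def gf256_mul(a, b):
    """Carry-less multiply of two bytes modulo 0x11B (assumed polynomial)."""
    p = 0
    for _ in range(8):
        if b & 1:
            p ^= a                 # addition in GF(2^8) is XOR
        b >>= 1
        carry = a & 0x80
        a = (a << 1) & 0xFF
        if carry:
            a ^= 0x1B              # reduce by x^8 + x^4 + x^3 + x + 1
    return p

def gf256_mac(acc, a, b):
    """Multiply-accumulate in GF(2^8): acc + a*b, with XOR as addition.
    This is the inner-loop operation of an RLNC matrix multiplication."""
    return acc ^ gf256_mul(a, b)
```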
