Special Issue "Selected Papers from INTESA Workshop 2018"

A special issue of Future Internet (ISSN 1999-5903).

Deadline for manuscript submissions: closed (15 November 2018)

Special Issue Editors

Guest Editor
Prof. William Fornaciari

Politecnico di Milano, Milano, Italy
Website | E-Mail
Guest Editor
Prof. Maurizio Martina

Politecnico di Torino, Torino, Italy
Website | E-Mail

Special Issue Information

Dear Colleagues,

We have organized a Special Issue of Future Internet devoted to the INTESA 2018 workshop.

The main purpose of this Special Issue is to publish extended versions of papers presented at INTESA 2018. Papers from attendees of Embedded Systems Week are also welcome. Submissions are expected to make significant contributions to key areas, ranging from architectures and design methodologies that support embedded intelligence to best practices and software support for embedded intelligence.

The Special Issue focuses especially on works related to the journal's topics.

Possible topics include, but are not limited to:

  • Special purpose hardware to support deep learning in embedded architectures
  • Edge computing for smart embedded systems: hardware and software aspects
  • Run-time resource management for smart IoT/Edge Computing systems
  • HW/SW codesign of Cyber Physical Systems
  • Programming models for IoT/Edge computing applications
  • Applications and case studies of intelligent embedded systems
  • Design methodologies and platforms for wearable computing
  • In-memory computing for unsupervised learning

Prof. William Fornaciari
Prof. Maurizio Martina
Guest Editors

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to the website, then proceeding to the submission form. Manuscripts can be submitted until the deadline. All papers will be peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Future Internet is an international peer-reviewed open access monthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 1000 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • embedded intelligence
  • deep learning
  • edge computing

Published Papers (2 papers)


Research

Open Access Article: Fog vs. Cloud Computing: Should I Stay or Should I Go?
Future Internet 2019, 11(2), 34; https://doi.org/10.3390/fi11020034
Received: 1 December 2018 / Revised: 31 December 2018 / Accepted: 11 January 2019 / Published: 2 February 2019
Abstract
In this article, we work toward an answer to the question "is it worth processing a data stream on the device that collected it, or should we send it somewhere else?". As is often the case in computer science, the answer is "it depends". To find out in which cases it is more profitable to stay on the device (which is part of the fog) or to go to a different one (for example, a device in the cloud), we propose two models that help the user evaluate the cost of performing a certain computation in the fog or sending all the data to be handled by the cloud. In our generic mathematical model, the user can define a cost type (e.g., number of instructions, execution time, energy consumption) and plug in values to analyze test cases. Since filters play a very important role in the future of the Internet of Things and can be implemented as lightweight programs capable of running on resource-constrained devices, this kind of procedure is the main focus of our study. Furthermore, our visual model guides the user's decision by aiding the visualization of the proposed linear equations and their slopes, which allows them to determine whether fog or cloud computing is more profitable for their specific scenario. We validated our models by analyzing four benchmark instances (two applications, each using two different sets of parameters) executed on five datasets, using execution time and energy consumption as the cost types.
(This article belongs to the Special Issue Selected Papers from INTESA Workshop 2018)
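The decision rule described in the abstract, choosing between fog and cloud via linear cost models, can be illustrated with a small sketch. All function names, coefficients, and numbers below are hypothetical placeholders, not values from the paper: fog and cloud costs are each modeled as a linear function of the input size, and the intersection of the two lines gives the break-even point.

```python
# Hypothetical sketch of a linear fog-vs-cloud cost comparison.
# All coefficients are illustrative, not taken from the paper.

def fog_cost(n, per_item=5.0, fixed=10.0):
    """Cost of processing n items locally on the fog device."""
    return fixed + per_item * n

def cloud_cost(n, per_item=1.0, transfer=2.5, fixed=100.0):
    """Cost of shipping n items to the cloud and processing them there."""
    return fixed + (per_item + transfer) * n

def break_even(fog_fixed=10.0, fog_rate=5.0, cloud_fixed=100.0, cloud_rate=3.5):
    """Input size at which the two linear cost models intersect."""
    return (cloud_fixed - fog_fixed) / (fog_rate - cloud_rate)

if __name__ == "__main__":
    print(f"break-even at n = {break_even():.1f} items")
    for n in (10, 60, 120):
        choice = "fog" if fog_cost(n) < cloud_cost(n) else "cloud"
        print(f"n={n}: fog={fog_cost(n):.0f}, cloud={cloud_cost(n):.0f} -> {choice}")
```

Below the break-even point the fog side wins thanks to its lower fixed cost; above it, the cloud's lower per-item slope dominates, which mirrors the slope comparison the visual model is built on.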

Open Access Article: Layer-Wise Compressive Training for Convolutional Neural Networks
Future Internet 2019, 11(1), 7; https://doi.org/10.3390/fi11010007
Received: 30 November 2018 / Revised: 17 December 2018 / Accepted: 22 December 2018 / Published: 28 December 2018
Abstract
Convolutional Neural Networks (CNNs) are brain-inspired computational models designed to recognize patterns. Recent advances demonstrate that CNNs are able to match, and often exceed, human capabilities in many application domains. Made of several million parameters, even the simplest CNN has a large model size. This characteristic is a serious concern for deployment on resource-constrained embedded systems, where compression stages are needed to meet stringent hardware constraints. In this paper, we introduce a novel accuracy-driven compressive training algorithm. It consists of a two-stage flow: first, layers are sorted by means of heuristic rules according to their significance; second, a modified stochastic gradient descent optimization is applied to the less significant layers so that their representation collapses into a constrained subspace. Experimental results demonstrate that our approach achieves remarkable compression rates with low accuracy loss (<1%).
(This article belongs to the Special Issue Selected Papers from INTESA Workshop 2018)
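The two-stage flow in the abstract can be sketched roughly as follows. This is a hypothetical illustration only: the significance heuristic (mean absolute weight) and the subspace projection (1-D k-means weight sharing) are stand-ins for the paper's actual rules, and all function names are invented for the example.

```python
import numpy as np

def layer_significance(weights):
    """Illustrative heuristic: layers with lower mean |weight| are
    treated as less significant (the paper uses its own rules)."""
    return np.mean(np.abs(weights))

def collapse_to_subspace(weights, n_levels=4, iters=20):
    """Project a layer's weights onto n_levels shared values via a
    simple 1-D k-means, standing in for the constrained subspace."""
    w = weights.ravel()
    centroids = np.linspace(w.min(), w.max(), n_levels)
    for _ in range(iters):
        assign = np.argmin(np.abs(w[:, None] - centroids[None, :]), axis=1)
        for k in range(n_levels):
            if np.any(assign == k):
                centroids[k] = w[assign == k].mean()
    return centroids[assign].reshape(weights.shape)

def compress_least_significant(layers, fraction=0.5, n_levels=4):
    """Stage 1: sort layers by significance.
    Stage 2: collapse the least significant fraction of them."""
    order = sorted(range(len(layers)), key=lambda i: layer_significance(layers[i]))
    n_compress = int(len(layers) * fraction)
    out = [w.copy() for w in layers]
    for i in order[:n_compress]:
        out[i] = collapse_to_subspace(out[i], n_levels=n_levels)
    return out
```

After compression, each collapsed layer stores at most `n_levels` distinct weight values, which is what enables the compact encodings the paper targets.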

Future Internet EISSN 1999-5903. Published by MDPI AG, Basel, Switzerland.