Special Issue "Distributed Computing and Storage"

A special issue of Electronics (ISSN 2079-9292). This special issue belongs to the section "Computer Science & Engineering".

Deadline for manuscript submissions: 31 March 2019

Special Issue Editor

Guest Editor
Dr. Mingyue Ji

Communication and Computing (C^3) Lab, Department of Electrical and Computer Engineering, College of Engineering, University of Utah, Salt Lake City, UT 84112, USA
Interests: information theory; caching networks; storage; communication theory and signal processing; distributed computing

Special Issue Information

Dear Colleagues,

The field of Distributed Computing and Storage (DCS) covers all aspects of computing and data access across multiple processing units connected by any form of communication network. In recent years, DCS systems have emerged as a revolutionary paradigm offering greater flexibility, higher reliability, lower cost, and more computational power than centralized computing and storage systems. Moreover, with the development of wireless edge networks, edge DCS systems, which provide real-time or near-real-time data analysis, lower operating costs, reduced network traffic load, and better security and privacy, have also attracted significant attention.

This Special Issue seeks papers presenting seminal work that establishes fundamental limits, evaluates emerging trends and current technological developments, and discusses future design principles of general DCS and edge DCS systems. Topics of interest include, but are not limited to, the following:

  • Fundamental limits in DCS systems
  • New design in coded DCS systems
  • Networking in DCS systems
  • Performance analysis and optimizations in DCS systems
  • System software and middleware design of DCS systems
  • Network function virtualization for large scale DCS systems
  • Big data analytics in DCS systems
  • Wireless (edge) distributed computing applications
  • Incentive models or techniques for the applications of edge DCS systems
  • Security and privacy issues in DCS systems
  • Simulation and emulation platform for edge DCS systems
  • Algorithms and techniques for computation offloading in edge DCS paradigm
  • Physical layer communications and networking in edge DCS systems
  • Machine learning and optimization in DCS systems
  • Distributed and parallel computing in modeling and simulation

Dr. Mingyue Ji
Guest Editor

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to the website and then proceeding to the submission form. Manuscripts can be submitted until the deadline. All papers will be peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Electronics is an international peer-reviewed open access monthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 850 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Published Papers (3 papers)


Research


Open Access Article: Access Adaptive and Thread-Aware Cache Partitioning in Multicore Systems
Electronics 2018, 7(9), 172; https://doi.org/10.3390/electronics7090172
Received: 20 July 2018 / Revised: 12 August 2018 / Accepted: 27 August 2018 / Published: 1 September 2018
Abstract
Cache partitioning is a successful technique for saving energy in a shared cache, and all existing studies focus on multi-program workloads running in multicore systems. In this paper, we are motivated by the fact that a multi-thread application generally executes faster than its single-thread counterpart and that its cache-accessing behavior is quite different. Based on this observation, we study applications running in multi-thread mode and classify the data of multi-thread applications into shared and private categories, which helps reduce the interference between shared and private data and contributes to constructing a more efficient cache partitioning scheme. We also propose a hardware structure to support these operations. Then, an access adaptive and thread-aware cache partitioning (ATCP) scheme is proposed, which assigns separate cache portions to shared and private data to avoid the evictions caused by conflicts between the data of different categories in the shared cache. The proposed ATCP achieves lower energy consumption while improving application performance compared with the least recently used (LRU) managed, core-based evenly partitioning (EVEN) and utility-based cache partitioning (UCP) schemes. The experimental results show that ATCP can achieve 29.6% and 19.9% average energy savings compared with the LRU and UCP schemes in a quad-core system. Moreover, the average speedup of multi-thread ATCP with respect to single-thread LRU is 1.89.
(This article belongs to the Special Issue Distributed Computing and Storage)
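The abstract above describes assigning shared and private data to separate cache portions so that one category cannot evict the other. As a purely illustrative sketch of this general idea (the class, names, and way counts below are hypothetical and not the paper's hardware design), way-partitioning can be modeled as restricting each category's replacement victims to its own set of cache ways:

```python
# Illustrative sketch (not ATCP's actual implementation): partition the ways
# of a shared set-associative cache between shared and private data, so a
# replacement victim is only ever chosen from the requesting category's ways.

class WayPartitionedCache:
    def __init__(self, num_ways, shared_ways):
        assert 0 < shared_ways < num_ways
        # Ways [0, shared_ways) hold shared data; the remainder hold private data.
        self.partitions = {
            "shared": list(range(shared_ways)),
            "private": list(range(shared_ways, num_ways)),
        }

    def candidate_ways(self, category):
        # Eviction candidates are drawn only from the category's own ways,
        # so shared/private interference is eliminated by construction.
        return self.partitions[category]

# Hypothetical 8-way cache with 3 ways reserved for shared data.
cache = WayPartitionedCache(num_ways=8, shared_ways=3)
print(cache.candidate_ways("shared"))   # [0, 1, 2]
print(cache.candidate_ways("private"))  # [3, 4, 5, 6, 7]
```

In the paper, the partition sizes are chosen adaptively at runtime rather than fixed as in this toy example.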

Open Access Article: Evaluating the Impact of Optical Interconnects on a Multi-Chip Machine-Learning Architecture
Electronics 2018, 7(8), 130; https://doi.org/10.3390/electronics7080130
Received: 30 June 2018 / Revised: 18 July 2018 / Accepted: 25 July 2018 / Published: 27 July 2018
Abstract
Following trends that emphasize neural networks for machine learning, many studies of computing systems have focused on accelerating deep neural networks. These studies often propose utilizing accelerators specialized for neural networks and cluster architectures composed of interconnected accelerator chips. We observed that inter-accelerator communication within a cluster has a significant impact on the training time of a neural network. In this paper, we show the advantages of optical interconnects for multi-chip machine-learning architectures by demonstrating the performance improvements obtained by replacing electrical interconnects with optical ones in an existing multi-chip system. We propose a highly practical optical interconnect implementation and devise an arithmetic performance model to fairly assess the impact of optical interconnects on a machine-learning accelerator platform. In our evaluation of nine Convolutional Neural Networks with various input sizes, 100 and 400 Gbps optical interconnects reduce the training time by an average of 20.6% and 35.6%, respectively, compared to a baseline system with 25.6 Gbps electrical interconnects.
(This article belongs to the Special Issue Distributed Computing and Storage)
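The paper's arithmetic performance model is not reproduced here, but its core intuition can be illustrated with a first-order sketch (the formula, workload numbers, and function names below are assumptions for illustration, not the authors' model): per-iteration training time splits into compute time plus communication time, with the latter scaling inversely with link bandwidth.

```python
# Illustrative first-order model: per-iteration time = compute time +
# communication time, where communication time = bytes / link bandwidth.
# All numbers are hypothetical; the bandwidths match those in the abstract.

def iteration_time(compute_s, comm_bytes, link_gbps):
    # Convert the link rate from Gbit/s to bytes/s before dividing.
    comm_s = comm_bytes / (link_gbps * 1e9 / 8)
    return compute_s + comm_s

# Hypothetical workload: 10 ms of compute, 50 MB exchanged per iteration.
baseline = iteration_time(0.010, 50e6, 25.6)   # 25.6 Gbps electrical
optical_100 = iteration_time(0.010, 50e6, 100)
optical_400 = iteration_time(0.010, 50e6, 400)

for name, t in [("25.6G", baseline), ("100G", optical_100), ("400G", optical_400)]:
    print(f"{name}: {t * 1000:.2f} ms per iteration "
          f"({(1 - t / baseline) * 100:.1f}% faster than baseline)")
```

Under such a model, faster links shrink only the communication term, which is why the measured gains in the abstract depend on each network's compute-to-communication ratio.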

Review


Open Access Review: Moving to the Edge-Cloud-of-Things: Recent Advances and Future Research Directions
Electronics 2018, 7(11), 309; https://doi.org/10.3390/electronics7110309
Received: 4 October 2018 / Revised: 19 October 2018 / Accepted: 29 October 2018 / Published: 8 November 2018
Abstract
Cloud computing has significantly enhanced the growth of the Internet of Things (IoT) by ensuring and supporting the Quality of Service (QoS) of IoT applications. However, cloud services remain far from IoT devices; notably, the transmission of IoT data experiences network issues such as high latency. In this case, cloud platforms cannot satisfy IoT applications that require real-time response, and the location of cloud services is thus one of the challenges encountered in the evolution of the IoT paradigm. Recently, edge cloud computing has been proposed to bring cloud services closer to IoT end-users, becoming a promising paradigm whose pitfalls and challenges are not yet well understood. This paper presents recent advances in edge computing concerning the movement of services from centralized cloud platforms to decentralized platforms, and examines the issues and challenges introduced by these highly distributed environments, to support engineers and researchers who may benefit from this transition.
(This article belongs to the Special Issue Distributed Computing and Storage)
