Special Issue "Distributed Computing and Storage"

A special issue of Electronics (ISSN 2079-9292). This special issue belongs to the section "Computer Science & Engineering".

Deadline for manuscript submissions: closed (31 March 2019).

Special Issue Editor

Dr. Mingyue Ji
Website
Guest Editor
Communication and Computing (C^3) Lab, Department of Electrical and Computer Engineering, College of Engineering, University of Utah, Salt Lake City, UT 84112, USA
Interests: information theory; caching networks; storage; communication theory and signal processing; distributed computing

Special Issue Information

Dear Colleagues,

The field of Distributed Computing and Storage (DCS) covers all aspects of computing and data access across multiple processing units connected by any form of communication network. In recent years, DCS systems have emerged as a revolutionary paradigm, offering greater flexibility, reliability, cost efficiency, and computational power than centralized computing and storage systems. Moreover, with the development of wireless edge networks, edge DCS systems, which provide real-time or near-real-time data analysis, lower operating costs, reduced network traffic load, and better security and privacy, have also attracted significant attention.

This Special Issue seeks papers on seminal work that establishes fundamental limits, evaluates emerging trends and current technological developments, and discusses future design principles of general DCS and edge DCS systems. Topics of interest for the Special Issue include, but are not limited to, the following:

  • Fundamental limits in DCS systems
  • New design in coded DCS systems
  • Networking in DCS systems
  • Performance analysis and optimizations in DCS systems
  • System software and middleware design of DCS systems
  • Network function virtualization for large scale DCS systems
  • Big data analytics in DCS systems
  • Wireless (edge) distributed computing applications
  • Incentive models or techniques for the applications of edge DCS systems
  • Security and privacy issues in DCS systems
  • Simulation and emulation platform for edge DCS systems
  • Algorithms and techniques for computation offloading in edge DCS paradigm
  • Physical layer communications and networking in edge DCS systems
  • Machine learning and optimization in DCS systems
  • Distributed and parallel computing in modeling and simulation

Dr. Mingyue Ji
Guest Editor

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to the website and then proceeding to the submission form. Manuscripts can be submitted until the deadline. All papers will be peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Electronics is an international peer-reviewed open access monthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 1500 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Published Papers (6 papers)


Research

Open Access Article
Distributed Meta-Brokering P2P Overlay for Scheduling in Cloud Federation
Electronics 2019, 8(8), 852; https://doi.org/10.3390/electronics8080852 - 31 Jul 2019
Cited by 2
Abstract
The interconnected cloud (Intercloud) federation is an emerging paradigm that revolutionizes the scalable service provision of geographically distributed resources. Large-scale distributed resources require well-coordinated and automated frameworks to facilitate service provision in a seamless and systematic manner. Unquestionably, standalone service providers must communicate and federate their cloud sites with other vendors to enable the infinite pooling of resources. The pooling of these resources provides uninterrupted services to an ever-growing population of cloud users more efficiently and ensures improved Service Level Agreements (SLAs). However, research on Intercloud resource management is still in its infancy; standard interfaces, protocols, and uniform architectural components therefore need to be developed for seamless interaction among federated clouds. In this study, we propose a distributed meta-brokering-enabled scheduling framework for the provision of user application services in a federated cloud environment. The modularized architecture of the proposed system, with a uniform configuration across participating resource sites, orchestrates the critical operations of resource management effectively and forms the federation schema. Overlaid meta-brokering instances are implemented on top of local resource brokers to keep the global functionality isolated. These instances communicate in an overlay topology in a P2P manner to maintain decentralization, high scalability, and load manageability. The proposed framework has been implemented and evaluated by extending the Java-based CloudSim 3.0.3 simulation application programming interfaces (APIs). The presented results validate the proposed model and its efficiency in facilitating user application execution with the desired QoS parameters. Full article
(This article belongs to the Special Issue Distributed Computing and Storage)

Open Access Article
Challenges in NoSQL-Based Distributed Data Storage: A Systematic Literature Review
Electronics 2019, 8(5), 488; https://doi.org/10.3390/electronics8050488 - 30 Apr 2019
Cited by 2
Abstract
Key-Value stores (KVSs) are the simplest and most flexible model of NoSQL databases, and they have become highly popular over the last few years due to salient features such as availability, portability, reliability, and low operational cost. From the perspective of software engineering, the chief obstacle for KVSs is to achieve the software quality attributes (consistency, throughput, latency, security, performance, load balancing, and query processing) that ensure quality. The presented research is a Systematic Literature Review (SLR) that surveys the state of the art in the KVS domain and, in doing so, identifies the major challenges and solutions. This work reviews 45 papers published between 2010 and 2018 that were found to be closely relevant to our study area. The results show that performance is addressed in 31% of the studies, consistency in 20%, latency and throughput in 16%, query processing in 13%, security in 11%, and load balancing in 9%. Different techniques are used in the reviewed solutions: indexing in 20% of the studies, hashing in 13%, caching and security techniques together in 9%, batching in 5%, encoding and Paxos techniques together in 4%, and other techniques in the remaining 36%. This systematic review will enable researchers to design key-value stores as efficient storage systems. Regarding future work, trust and privacy are quality attributes that remain to be addressed; KVS is an emerging facet of storage due to its widespread popularity, opening the way to deploying it with proper protection. Full article

Open Access Article
Simulation Model for Blockchain Systems Using Queuing Theory
Electronics 2019, 8(2), 234; https://doi.org/10.3390/electronics8020234 - 19 Feb 2019
Cited by 11
Abstract
In recent years, blockchains have attracted considerable attention from researchers, engineers, and institutions, and their implementation has started to drive a large number of applications ranging from e-finance and e-healthcare to smart homes, the Internet of Things, social security, and logistics. In the blockchain literature, most articles focus on engineering implementation, while little attention has been devoted to the theoretical aspects of these systems; the existing theoretical work is limited to modeling the mining process only. In this paper, a queuing-theory-based model is proposed for understanding the working and theoretical aspects of blockchains. We validate the proposed model against the actual statistics of two popular cryptocurrencies, Bitcoin and Ethereum, by running simulations over two months of transactions. The obtained performance measures include the number of transactions per block, the mining time of each block, system throughput, memory-pool count, waiting time in the memory pool, the number of unconfirmed transactions in the whole system, the total number of transactions, and the number of generated blocks; these values are compared with actual statistics and are found to be in good agreement. Although the simulation in this paper models blockchain-based cryptocurrencies only, the proposed model can represent a wide range of blockchain-based systems. Full article
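The kind of queuing model this abstract describes can be illustrated with a toy discrete-event simulation (a sketch, not the authors' actual model): transactions arrive as a Poisson process into a memory pool, and each mined block drains the pool in a capacity-limited batch at exponentially distributed intervals. The rates, block capacity, and the `simulate_mempool` helper below are illustrative assumptions.

```python
import random

def simulate_mempool(arrival_rate, block_interval, block_capacity,
                     horizon, seed=42):
    """Toy model: Poisson transaction arrivals feed a memory pool that
    is drained in capacity-limited batches at each block event."""
    rng = random.Random(seed)
    t = 0.0
    next_tx = rng.expovariate(arrival_rate)           # next arrival time
    next_block = rng.expovariate(1.0 / block_interval)  # next block time
    mempool = []            # arrival times of unconfirmed transactions
    waits, blocks = [], 0
    while t < horizon:
        if next_tx < next_block:        # a transaction arrives
            t = next_tx
            mempool.append(t)
            next_tx = t + rng.expovariate(arrival_rate)
        else:                           # a block is mined
            t = next_block
            batch, mempool = mempool[:block_capacity], mempool[block_capacity:]
            waits.extend(t - a for a in batch)   # confirmation delays
            blocks += 1
            next_block = t + rng.expovariate(1.0 / block_interval)
    mean_wait = sum(waits) / len(waits) if waits else 0.0
    return {"blocks": blocks, "confirmed": len(waits),
            "unconfirmed": len(mempool), "mean_wait": mean_wait}

# One simulated day with Bitcoin-like parameters (illustrative values):
# ~4 tx/s arriving, one ~2500-transaction block every ~600 s on average.
stats = simulate_mempool(arrival_rate=4.0, block_interval=600.0,
                         block_capacity=2500, horizon=86400.0)
print(stats)
```

From a run like this one can read off the same kinds of measures the paper compares against chain statistics: blocks generated, confirmed and unconfirmed transaction counts, and mean waiting time in the memory pool.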

Open Access Article
Access Adaptive and Thread-Aware Cache Partitioning in Multicore Systems
Electronics 2018, 7(9), 172; https://doi.org/10.3390/electronics7090172 - 01 Sep 2018
Cited by 2
Abstract
Cache partitioning is a successful technique for saving energy in a shared cache, and existing studies focus on multi-program workloads running in multicore systems. In this paper, we are motivated by the fact that a multi-thread application generally executes faster than its single-thread counterpart and that its cache-access behavior is quite different. Based on this observation, we study applications running in multi-thread mode and classify the data of multi-thread applications into shared and private categories, which helps reduce the interference between shared and private data and contributes to constructing a more efficient cache-partitioning scheme. We also propose a hardware structure to support these operations. We then propose an access-adaptive and thread-aware cache partitioning (ATCP) scheme, which assigns separate cache portions to shared and private data to avoid the evictions caused by conflicts between data of different categories in the shared cache. The proposed ATCP achieves lower energy consumption while improving application performance compared with the least-recently-used (LRU) managed, core-based evenly partitioned (EVEN), and utility-based cache partitioning (UCP) schemes. The experimental results show that ATCP achieves 29.6% and 19.9% average energy savings compared with the LRU and UCP schemes, respectively, in a quad-core system. Moreover, the average speedup of multi-thread ATCP with respect to single-thread LRU is 1.89. Full article
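The utility-based baseline (UCP) mentioned in the abstract can be sketched as a greedy way-allocation loop: each cache way goes to whichever data class currently gains the most hits from one extra way. The cumulative-hit curves and the `partition_ways`/`marginal` helpers below are made-up illustrations, not the paper's ATCP algorithm.

```python
def marginal(curve, have):
    """Extra hits gained by granting one more way; curve[i] is the
    cumulative hit count achieved with i+1 ways."""
    if have >= len(curve):
        return 0
    prev = curve[have - 1] if have else 0
    return curve[have] - prev

def partition_ways(total_ways, utility):
    """Greedily hand each cache way to the data class whose marginal
    hit gain is currently largest (utility-based partitioning)."""
    alloc = {cls: 0 for cls in utility}
    for _ in range(total_ways):
        best = max(utility, key=lambda c: marginal(utility[c], alloc[c]))
        alloc[best] += 1
    return alloc

# Hypothetical cumulative-hit curves for shared vs. private data in an
# 8-way shared cache; shared data saturates quickly, private data
# keeps benefiting from extra ways.
curves = {"shared": [100, 150, 170, 175],
          "private": [80, 140, 190, 220, 240]}
print(partition_ways(8, curves))  # → {'shared': 3, 'private': 5}
```

Separating the shared and private utility curves, as ATCP's classification enables, is what lets a partitioner like this avoid cross-category evictions.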

Open Access Article
Evaluating the Impact of Optical Interconnects on a Multi-Chip Machine-Learning Architecture
Electronics 2018, 7(8), 130; https://doi.org/10.3390/electronics7080130 - 27 Jul 2018
Cited by 1
Abstract
Following trends that emphasize neural networks for machine learning, many studies of computing systems have focused on accelerating deep neural networks. These studies often propose using an accelerator specialized for neural networks and a cluster architecture composed of interconnected accelerator chips. We observed that inter-accelerator communication within a cluster has a significant impact on neural-network training time. In this paper, we show the advantages of optical interconnects for multi-chip machine-learning architectures by demonstrating the performance improvements obtained by replacing electrical interconnects with optical ones in an existing multi-chip system. We propose a highly practical optical interconnect implementation and devise an arithmetic performance model to fairly assess the impact of optical interconnects on a machine-learning accelerator platform. In our evaluation of nine convolutional neural networks with various input sizes, 100 and 400 Gbps optical interconnects reduce the training time by an average of 20.6% and 35.6%, respectively, compared to the baseline system with 25.6 Gbps electrical ones. Full article
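A back-of-the-envelope version of such an arithmetic performance model simply sums per-step compute time with the inter-chip transfer time implied by the link bandwidth. This is a sketch, not the authors' model: the compute time, payload size, and link-efficiency factor in `step_time` are assumptions chosen for illustration.

```python
def step_time(compute_s, payload_bits, link_gbps, efficiency=0.8):
    """Per-training-step time as an overlap-free sum of compute time
    and inter-chip transfer time, with an assumed link efficiency."""
    transfer_s = payload_bits / (link_gbps * 1e9 * efficiency)
    return compute_s + transfer_s

# Illustrative step: 10 ms of compute plus 4e9 bits (500 MB) of
# activations/gradients exchanged between accelerator chips.
base = step_time(0.010, 4e9, 25.6)    # 25.6 Gbps electrical baseline
opt100 = step_time(0.010, 4e9, 100)   # 100 Gbps optical
opt400 = step_time(0.010, 4e9, 400)   # 400 Gbps optical
for label, t in [("25.6G", base), ("100G", opt100), ("400G", opt400)]:
    print(f"{label}: {t * 1e3:.1f} ms/step, {base / t:.2f}x vs baseline")
```

The point of such a model is the trend it exposes, not the exact numbers: the more communication-bound a step is, the more a faster interconnect shortens it, which is why the measured savings in the paper grow from 100 to 400 Gbps.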

Review

Open Access Review
Moving to the Edge-Cloud-of-Things: Recent Advances and Future Research Directions
Electronics 2018, 7(11), 309; https://doi.org/10.3390/electronics7110309 - 08 Nov 2018
Cited by 17
Abstract
Cloud computing has significantly enhanced the growth of the Internet of Things (IoT) by ensuring and supporting the Quality of Service (QoS) of IoT applications. However, cloud services are still far from IoT devices, and the transmission of IoT data consequently suffers from network issues such as high latency. In this case, cloud platforms cannot satisfy IoT applications that require real-time response; indeed, the location of cloud services is one of the challenges encountered in the evolution of the IoT paradigm. Recently, edge cloud computing has been proposed to bring cloud services closer to IoT end-users; it is a promising paradigm whose pitfalls and challenges are not yet well understood. This paper presents leading-edge research on the movement of services from centralized cloud platforms to decentralized platforms, and examines the issues and challenges introduced by these highly distributed environments, to support engineers and researchers who may benefit from this transition. Full article
