Editorial

Editorial of Energy-Efficient and Reliable Information Processing: Computing and Storage

Yongjune Kim
951 Sandisk Dr, Milpitas, CA 95035, USA
Electronics 2019, 8(9), 914; https://doi.org/10.3390/electronics8090914
Submission received: 8 August 2019 / Accepted: 12 August 2019 / Published: 21 August 2019

1. Introduction

Recently, artificial intelligence (AI) systems have begun to approach, and in some cases exceed, human performance on many intelligent tasks: AlexNet [1] and ResNet [2] achieve human-level accuracy on recognition tasks, and AlphaGo [3] beat human champions at Go. These acclaimed successes of AI rest mainly on computations over massive amounts of data [4]. The two main pillars of modern AI systems are therefore computation and data: AI systems must process (i.e., compute on) massive data efficiently and reliably, and the data themselves must be stored and managed efficiently and reliably to reduce storage cost.

2. The Present Issue

This special issue consists of five papers covering important topics in the field of energy-efficient and reliable computing and storage systems.
Brain-inspired neuromorphic computing is an attractive research field as an alternative to conventional von Neumann computing. Among the techniques for realizing neuromorphic computing, oscillatory neural networks (ONNs) are an interesting architecture inspired by observations of oscillatory behavior in the brain [5,6]. In [7], the authors investigate a new ONN model based on high-order harmonic synchronization. They study multi-level neurons instead of bi-stable neurons to reduce the number of output neurons and improve the efficiency of ONNs on pattern recognition tasks.
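As a rough illustration of the oscillatory-computing idea, the sketch below simulates a small network of Kuramoto-style coupled phase oscillators. This is a generic toy model chosen for clarity, not the high-order harmonic synchronization model of [7]; all parameter values are illustrative assumptions.

```python
# Toy oscillatory network: coupled phase oscillators that synchronize.
# Illustrative only; not the multi-level-neuron ONN model of [7].
import numpy as np

def simulate_onn(theta, omega, K, dt=0.01, steps=2000):
    """Integrate d(theta_i)/dt = omega_i + sum_j K_ij * sin(theta_j - theta_i)."""
    for _ in range(steps):
        phase_diff = theta[None, :] - theta[:, None]   # entry (i, j) = theta_j - theta_i
        theta = theta + dt * (omega + (K * np.sin(phase_diff)).sum(axis=1))
    return np.mod(theta, 2 * np.pi)

n = 8
rng = np.random.default_rng(0)
theta0 = rng.uniform(0, 2 * np.pi, n)   # random initial phases
omega = np.ones(n)                      # identical natural frequencies
K = np.full((n, n), 0.5 / n)            # weak all-to-all coupling
print(simulate_onn(theta0, omega, K))   # phases converge toward a common value
```

With all-to-all coupling the phases lock together; in ONN-based pattern recognition, it is such synchronization patterns that encode the computed result.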
Dimension reduction techniques such as principal component analysis (PCA) and linear discriminant analysis (LDA) are important information processing techniques that reduce the total amount of data and improve computational efficiency. In [8], the authors propose a new dimension reduction technique that accounts for both the local structure and the global distribution of the data. The proposed discriminative sparsity graph embedding (DSGE) technique outperforms prior dimension reduction techniques on face recognition tasks.
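For readers unfamiliar with dimension reduction, the following minimal NumPy sketch implements classical PCA, the baseline named above. It is not the DSGE method of [8]; the data here are random placeholders.

```python
# Minimal PCA sketch (classical baseline, not the DSGE method of [8]).
import numpy as np

def pca(X, n_components):
    """Project rows of X (samples x features) onto the top principal components."""
    X_centered = X - X.mean(axis=0)            # center each feature
    cov = np.cov(X_centered, rowvar=False)     # feature covariance matrix
    eigvals, eigvecs = np.linalg.eigh(cov)     # eigenvalues in ascending order
    top = eigvecs[:, ::-1][:, :n_components]   # top components by variance
    return X_centered @ top

X = np.random.randn(200, 64)   # placeholder data: 200 samples, 64 features
X_reduced = pca(X, 8)
print(X_reduced.shape)          # (200, 8)
```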
Distributed storage codes use coding theory to handle big data efficiently in cloud systems. Regenerating codes reduce the communication bandwidth required to repair a single failed node [9], while locally repairable codes (LRCs) reduce the number of nodes that must be accessed during such a repair [10]. LRCs over the binary field admit efficient hardware implementation and computation, which makes binary LRCs (BLRCs) an attractive way to handle massive data. In [11], the authors review recent research on BLRCs with a focus on code construction methods, and compare the code parameters of various BLRCs.
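The sketch below illustrates the locality idea in its simplest binary form: each local group of r data bits carries one XOR parity, so a single erased symbol is repaired by reading only the r other symbols of its group. This is a toy construction for intuition, not one of the BLRC constructions surveyed in [11].

```python
# Toy binary code with locality r: one XOR parity per group of r data bits.
# A single erasure is repaired from the other r symbols of its local group.
def encode(data_bits, r):
    """Append one XOR parity after each group of r bits (r must divide len(data_bits))."""
    codeword = []
    for i in range(0, len(data_bits), r):
        group = data_bits[i:i + r]
        parity = 0
        for b in group:
            parity ^= b
        codeword.extend(group + [parity])
    return codeword

def repair(codeword, erased_idx, r):
    """Recover one erased symbol by XORing the rest of its local group."""
    start = (erased_idx // (r + 1)) * (r + 1)
    recovered = 0
    for j in range(start, start + r + 1):
        if j != erased_idx:
            recovered ^= codeword[j]
    return recovered

cw = encode([1, 0, 1, 1], r=2)                 # (n=6, k=4) code with locality 2
assert repair(cw, erased_idx=1, r=2) == cw[1]  # repair touches only 2 other symbols
```

Real BLRC constructions trade off code rate, minimum distance, and locality far more carefully; see [11] for the parameter comparisons.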
Optimizing data center resources is important for managing vast amounts of data and reducing power consumption. In [12,13], the authors optimize virtual machine placement (VMP) using the grey wolf optimization (GWO) metaheuristic.
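For intuition on the metaheuristic itself, here is a minimal GWO sketch that minimizes a toy sphere function. The objective and parameters are placeholders, not the multi-objective VMP formulations of [12,13].

```python
# Minimal grey wolf optimization (GWO) sketch; toy objective, not a VMP model.
import numpy as np

def gwo(cost, dim, n_wolves=20, iters=200, lb=-5.0, ub=5.0, seed=0):
    rng = np.random.default_rng(seed)
    X = rng.uniform(lb, ub, (n_wolves, dim))             # wolf positions
    for t in range(iters):
        fitness = np.apply_along_axis(cost, 1, X)
        alpha, beta, delta = X[np.argsort(fitness)[:3]]  # three best wolves
        a = 2.0 * (1 - t / iters)                        # decreases from 2 to 0
        X_new = np.zeros_like(X)
        for leader in (alpha, beta, delta):
            A = 2 * a * rng.random(X.shape) - a          # exploration/exploitation
            C = 2 * rng.random(X.shape)
            D = np.abs(C * leader - X)                   # distance to leader
            X_new += (leader - A * D) / 3.0              # average of 3 pulls
        X = np.clip(X_new, lb, ub)
    return X[np.argmin(np.apply_along_axis(cost, 1, X))]

best = gwo(lambda x: np.sum(x ** 2), dim=4)   # minimize the sphere function
print(best)                                    # close to the zero vector
```

In the VMP setting of [12,13], each wolf position instead encodes a candidate assignment of virtual machines to physical hosts, and the cost captures objectives such as power consumption and resource utilization.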

3. Future

Because of the exponential growth of data, it is imperative to provide energy-efficient and reliable data storage systems. We must also build efficient and reliable computing systems that process massive data to extract meaningful information and perform intelligent tasks. Future innovations in computing and storage will provide a path to overcoming the challenges of realizing AI systems.

Conflicts of Interest

The author declares no conflict of interest.

References

  1. Krizhevsky, A.; Sutskever, I.; Hinton, G.E. ImageNet classification with deep convolutional neural networks. In Proceedings of the 2012 Annual Conference on Neural Information Processing Systems, Lake Tahoe, NV, USA, 3–6 December 2012; pp. 1097–1105.
  2. He, K.; Zhang, X.; Ren, S.; Sun, J. Deep residual learning for image recognition. In Proceedings of the 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Las Vegas, NV, USA, 27–30 June 2016; pp. 770–778.
  3. Silver, D.; Huang, A.; Maddison, C.J.; Guez, A.; Sifre, L.; van den Driessche, G.; Schrittwieser, J.; Antonoglou, I.; Panneershelvam, V.; Lanctot, M.; et al. Mastering the game of Go with deep neural networks and tree search. Nature 2016, 529, 484–489.
  4. Shanbhag, N.R.; Verma, N.; Kim, Y.; Patil, A.D.; Varshney, L.R. Shannon-inspired statistical computing for the nanoscale era. Proc. IEEE 2019, 107, 90–107.
  5. Gray, C.M. Synchronous oscillations in neuronal systems: Mechanisms and functions. J. Comput. Neurosci. 1994, 1, 11–38.
  6. Jackson, T.C.; Sharma, A.A.; Bain, J.A.; Weldon, J.A.; Pileggi, L. Oscillatory neural networks based on TMO nano-oscillators and multi-level RRAM cells. IEEE J. Emerg. Sel. Top. Circuits Syst. 2015, 5, 230–241.
  7. Velichko, A.; Belyaev, M.; Boriskov, P.A. A model of an oscillatory neural network with multilevel neurons for pattern recognition and computing. Electronics 2019, 8, 75.
  8. Tong, Y.; Zhang, J.; Chen, R. Discriminative sparsity graph embedding for unconstrained face recognition. Electronics 2019, 8, 503.
  9. Dimakis, A.G.; Godfrey, P.B.; Wu, Y.; Wainwright, M.J.; Ramchandran, K. Network coding for distributed storage systems. IEEE Trans. Inf. Theory 2010, 56, 4539–4551.
  10. Gopalan, P.; Huang, C.; Simitci, H.; Yekhanin, S. On the locality of codeword symbols. IEEE Trans. Inf. Theory 2012, 58, 6925–6934.
  11. Kim, Y.-S.; Kim, C.; No, J.-S. Overview of binary locally repairable codes for distributed storage systems. Electronics 2019, 8, 596.
  12. Fatima, A.; Javaid, N.; Butt, A.A.; Sultana, T.; Hussain, W.; Bilal, M.; Hashmi, M.A.U.R.; Akbar, M.; Ilahi, M. An enhanced multi-objective gray wolf optimization for virtual machine placement in cloud data centers. Electronics 2019, 8, 218.
  13. Al-Moalmi, A.; Luo, J.; Salah, A.; Li, K. Optimal virtual machine placement based on grey wolf optimization. Electronics 2019, 8, 283.
