1. Introduction
Recently, artificial intelligence (AI) systems have begun to approach and even exceed human performance in many intelligent tasks: AlexNet [1] and ResNet [2] achieved human-level accuracy in recognition tasks, and AlphaGo [3] beat human champions in Go [4]. These acclaimed successes of AI are mainly based on computations over massive amounts of data. The two main pillars of modern AI systems are computation and data: AI systems must process (i.e., compute on) massive data efficiently and reliably, and the data must be stored and managed efficiently and reliably to reduce storage cost.
2. The Present Issue
This special issue consists of five papers covering important topics in the field of energy-efficient and reliable computing and storage systems.
Brain-inspired neuromorphic computing is an attractive research field as an alternative to conventional von Neumann computing. Among the several techniques for realizing neuromorphic computing, oscillatory neural networks (ONNs) are an interesting architecture inspired by the observation of oscillatory behavior in the brain [5,6]. In [7], the authors investigate a new model of ONNs based on high-order harmonics synchronization. They study multi-level neurons instead of bi-stable neurons to reduce the number of output neurons and improve the efficiency of ONNs on pattern recognition tasks.
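As a rough, generic illustration of the phase-synchronization behavior that ONNs build on (and not of the high-order harmonics model studied in [7]), the following sketch simulates a small set of coupled phase oscillators; the oscillator count, coupling strength, and natural frequencies are arbitrary assumptions chosen for the example.

```python
import numpy as np

# Minimal coupled-oscillator (Kuramoto-type) simulation: with sufficient
# coupling, the oscillators' phases lock, which is the basic synchronization
# behavior that oscillatory neural networks exploit.
rng = np.random.default_rng(0)
n, K, dt, steps = 8, 2.0, 0.01, 2000      # arbitrary example parameters
theta = rng.uniform(0, 2 * np.pi, n)       # initial phases
omega = rng.normal(1.0, 0.1, n)            # natural frequencies

for _ in range(steps):
    # Each oscillator is pulled toward the phases of all the others.
    coupling = np.sin(theta[None, :] - theta[:, None]).sum(axis=1)
    theta += dt * (omega + (K / n) * coupling)

# Order parameter r in [0, 1]: r close to 1 indicates phase synchronization.
r = np.abs(np.exp(1j * theta).mean())
print(f"order parameter r = {r:.3f}")
```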
Dimension reduction techniques such as principal component analysis (PCA) and linear discriminant analysis (LDA) are important information processing techniques used to reduce the total amount of data and improve computational efficiency. In [8], the authors propose a new dimension reduction technique that takes into account both the local structure and the global distribution of the data. The proposed discriminative sparse graph embedding (DSGE) technique outperforms prior dimension reduction techniques on face recognition tasks.
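As a minimal sketch of classical PCA-style dimension reduction (not the DSGE method of [8]), the following snippet projects data onto its top-k principal components; the data shape and the value of k are arbitrary choices for the example.

```python
import numpy as np

def pca_reduce(X, k):
    """Project n samples of dimension d onto the top-k principal components."""
    # Center the data so that principal directions are found around the mean.
    X_centered = X - X.mean(axis=0)
    # SVD of the centered data: right singular vectors are the principal axes.
    _, _, Vt = np.linalg.svd(X_centered, full_matrices=False)
    components = Vt[:k]                 # shape (k, d)
    return X_centered @ components.T    # shape (n, k)

# Toy usage: reduce 100 samples from 64 dimensions down to 8.
rng = np.random.default_rng(0)
X = rng.normal(size=(100, 64))
print(pca_reduce(X, k=8).shape)  # (100, 8)
```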
Distributed storage codes aim to handle big data efficiently in cloud systems via coding theory. Regenerating codes attempt to reduce the communication bandwidth required to repair a single node [9]. Locally repairable codes (LRCs) are used to reduce the number of nodes that must be accessed to repair a single failed node [10]. LRCs over the binary field enable efficient hardware implementation and computation, and this computational efficiency makes binary LRCs (BLRCs) an attractive way to handle massive data. In [11], the authors review recent research on BLRCs, focusing on code construction methods, and compare the code parameters of various BLRCs.
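The following toy sketch illustrates only the locality idea behind LRCs, not any particular construction surveyed in [11]: data blocks are split into local groups, each group keeps a single binary (XOR) parity, and a failed block is rebuilt by reading its local group alone. The group sizes and block length are arbitrary assumptions for the example.

```python
import numpy as np

# 6 data blocks in two local groups of 3; each group stores one XOR parity.
rng = np.random.default_rng(1)
data = [rng.integers(0, 2, size=16, dtype=np.uint8) for _ in range(6)]
groups = [[0, 1, 2], [3, 4, 5]]
parities = [np.bitwise_xor.reduce([data[i] for i in g]) for g in groups]

def repair(failed, data, groups, parities):
    """Rebuild a failed block from its local group only (3 reads, not 6)."""
    gid = next(i for i, g in enumerate(groups) if failed in g)
    survivors = [data[i] for i in groups[gid] if i != failed]
    return np.bitwise_xor.reduce(survivors + [parities[gid]])

# Simulate losing block 4 and repairing it from its local group.
recovered = repair(4, data, groups, parities)
print(np.array_equal(recovered, data[4]))  # True
```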
In [12,13], the authors optimize virtual machine placement (VMP) using the Grey Wolf optimization technique. Optimizing data center resources in this way is important for managing vast amounts of data and reducing power consumption.
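As a generic sketch of the Grey Wolf Optimizer on a toy continuous objective (not the VMP formulation of [12,13]), the following snippet implements the standard alpha/beta/delta update rule; the pack size, iteration count, and test function are arbitrary assumptions for the example.

```python
import numpy as np

def grey_wolf_optimize(f, dim, bounds, n_wolves=20, iters=200, seed=0):
    """Minimal Grey Wolf Optimizer for minimizing a continuous objective f."""
    rng = np.random.default_rng(seed)
    lo, hi = bounds
    wolves = rng.uniform(lo, hi, size=(n_wolves, dim))
    for t in range(iters):
        # The three best wolves (alpha, beta, delta) guide the rest of the pack.
        order = np.argsort([f(w) for w in wolves])
        alpha, beta, delta = wolves[order[:3]]
        a = 2 - 2 * t / iters  # exploration factor decreases linearly to 0
        for i, x in enumerate(wolves):
            candidates = []
            for leader in (alpha, beta, delta):
                r1, r2 = rng.random(dim), rng.random(dim)
                A, C = 2 * a * r1 - a, 2 * r2
                candidates.append(leader - A * np.abs(C * leader - x))
            wolves[i] = np.clip(np.mean(candidates, axis=0), lo, hi)
    best = min(wolves, key=f)
    return best, f(best)

# Toy usage: minimize the sphere function in 5 dimensions.
best, val = grey_wolf_optimize(lambda x: np.sum(x**2), dim=5, bounds=(-10, 10))
print(val)  # close to 0
```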
3. Future
Because of the exponential growth of data, it is imperative to provide energy-efficient and reliable data storage systems. We must also build efficient and reliable computing systems that process massive data to obtain meaningful information and perform intelligent tasks. Future innovations in computing and storage will provide a path to overcoming the challenges of realizing AI systems.