Abstract
Inspired by the possibility that generative models based on quantum circuits can provide a useful inductive bias for sequence modeling tasks, we propose an efficient training algorithm for a subset of classically simulable quantum circuit models. The gradient-free algorithm, presented as a sequence of exactly solvable effective models, is a modification of the density matrix renormalization group procedure adapted for learning a probability distribution. The conclusion that circuit-based models offer a useful inductive bias for classical datasets is supported by experimental results on the parity learning problem.
1. Introduction
The possibility of exponential speedups for certain linear algebra operations has inspired a wave of research into quantum algorithms for machine learning purposes [1]. Many of these exponential speedups hinge on assumptions of fault tolerant quantum devices and efficient data preparation, which are unlikely to be realized in the near future. Focus has thus shifted to hybrid quantum-classical algorithms which involve optimizing the parameters of a variational quantum circuit to prepare a desired quantum state and have the potential to be implemented on near-term intermediate scale quantum devices [2].
Hybrid quantum-classical algorithms have been found to solve difficult eigenvalue problems [3] and to perform hard combinatorial optimization [4]. A number of recent works consider unsupervised learning within the hybrid quantum-classical framework [5,6,7,8,9].
In the context of machine learning, as emphasized in [2], it is less clear if variational hybrid quantum-classical algorithms offer advantages over existing purely classical algorithms. Density estimation, which attempts to learn a probability distribution from training data, has been suggested as an area to look for advantages [7] because a quantum advantage has been identified in the ability of quantum circuits to sample from certain probability distributions that are hard to sample classically [10]. In high-dimensional density estimation relevant to machine learning, however, expressive power is only part of the story; algorithms in the high-dimensional regime rely crucially on their inductive bias. Do the highly expressive probability distributions implied by quantum circuits offer a useful inductive bias for modeling high-dimensional classical data? We address this question in this paper.
We work within the confines of a classically tractable subset of quantum states modeled by tensor networks, which may be thought of as those states that can be prepared by shallow quantum circuits. Even more narrowly, we restrict to matrix product states akin to one-dimensional shallow circuits. Mathematically, tensor networks are a graphical calculus for describing interrelated matrix factorizations for which there exist polylogarithmic algorithms for a restricted set of linear algebra computations. We propose an unsupervised training algorithm for a generative model inspired by the density matrix renormalization group (DMRG) procedure. The training dynamics take place on the unit sphere of a Hilbert space, where in contrast to many variational methods, a state is modified in a sequence of deterministic steps that do not involve gradients. The efficient access to certain vector operations afforded by the tensor network ansatz allows us to implement our algorithm in a purely classical fashion.
We experimentally probe the inductive bias of the model by training on the dataset consisting of bitstrings of length 20 having an even number of 1 bits. The algorithm rapidly learns the uniform distribution on this set to high precision, indicating that the tensor network quantum circuit model provides a useful inductive bias for this classical dataset; the resulting trained model is small, with only 336 parameters. The dataset can be frustrating for other models to learn, such as restricted Boltzmann machines (RBMs) trained with gradient-based methods. The difficulty of training RBMs to learn parity with contrastive divergence and related training algorithms is noted in [11]. The difficulty for other gradient-based deep-learning methods on parity problems has been studied in [12]. To put the work in this paper in context, we note that generative modeling using tensor networks has been considered for several datasets for which classical neural models trained with gradient-based methods are successful [13,14]. We also note that shallow quantum circuits have already been successful for a related supervised parity classification problem [15].
In an effort to improve accessibility, we avoid the language of quantum-many body physics and quantum information and explain the algorithm and results in terms of elementary linear algebra and statistics. While this means some motivational material is omitted, we believe it sharpens the exposition. One exception is the visual language of tensor networks where the benefits of simplifying tensor contractions outweigh the costs of using elementary, but cumbersome, notation. We refer readers unfamiliar with tensor network notation to [16,17,18,19] or to the many other surveys.
The organization of the paper is as follows. In Section 2 we state the optimization problem at the population level and propose a finite-sample estimator. In Section 3 and Section 4 we describe an abstract discrete-time dynamical system evolving on the unit sphere of Hilbert space which optimizes our empirical objective by exactly solving an effective problem in a sequence of isometrically embedded Hilbert subspaces. In Section 5 we provide a concrete realization of this dynamical system for a class of tensor networks called matrix product states. Section 6 outlines experiments demonstrating that the proposed iterative solver successfully learns the parity language using limited data.
2. The Problem Formulation
Recall that a unit vector ψ in a finite-dimensional Hilbert space defines a probability distribution on any orthonormal basis by setting the probability of each basis vector e to be

π(e) = |⟨e, ψ⟩|².   (1)

We refer to the probability distribution in Equation (1) as the Born distribution induced by ψ.

Let π be a probability distribution on a finite set X and fix a field of scalars, either ℝ or ℂ. Let H be the free vector space on the set X. Use |x⟩ to denote the vector in H corresponding to the element x ∈ X. The space H has a natural inner product defined by declaring the vectors {|x⟩ : x ∈ X} to be an orthonormal basis.

Define a unit vector ψ ∈ H by

ψ = Σ_{x ∈ X} √π(x) |x⟩.   (2)

Notice that ψ realizes π as a Born distribution:

π(x) = |⟨x, ψ⟩|².
The formula for ψ as written in Equation (2) involves perfect knowledge of π and unrestricted access to the Hilbert space H. This paper is concerned with situations when knowledge about π is limited to a finite number of training examples, and access is restricted to some tractable subset M of the unit sphere.

At the population level, the problem to be solved is to find the closest approximation to ψ within M,

argmin_{φ ∈ M} ‖ψ − φ‖.

We assume access to a sequence of samples x₁, …, xₙ drawn independently from π, giving rise to the associated empirical distribution

π̂(x) = |{i : xᵢ = x}| / n.

It is natural to define the following estimator whose Born distribution coincides with the empirical distribution:

ψ̂ = Σ_{x ∈ X} √π̂(x) |x⟩.
We are thus led to consider the following optimization problem.
Problem 1.

Given a sequence of i.i.d. samples x₁, …, xₙ drawn from π and a subset M of the unit sphere in H, find

argmin_{φ ∈ M} ‖ψ̂ − φ‖.
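For intuition, the estimator defined above can be computed explicitly when the outcome space is small enough to enumerate. The following sketch (function and variable names are ours, not from the paper's code) builds the empirical Born vector and checks that squaring its amplitudes recovers the empirical distribution:

```python
import numpy as np

def empirical_born_vector(samples, outcomes):
    """Unit vector whose Born distribution equals the empirical distribution."""
    index = {x: i for i, x in enumerate(outcomes)}
    counts = np.zeros(len(outcomes))
    for x in samples:
        counts[index[x]] += 1.0
    pi_hat = counts / counts.sum()   # empirical distribution
    return np.sqrt(pi_hat)           # amplitudes sqrt(pi_hat(x)) in the basis |x>

outcomes = ["00", "01", "10", "11"]
samples = ["00", "11", "00", "11", "11", "00"]
psi_hat = empirical_born_vector(samples, outcomes)
born = psi_hat ** 2                  # Born probabilities |<x, psi_hat>|^2
```

For sequence data the outcome space is exponentially large, which is why the paper restricts attention to matrix product states below.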
Our proposal differs from the existing literature on Born machines, which has employed log-likelihood objective functions minimized by gradient descent (see [20] for a review). As we will see, choosing the norm as the loss function allows analytical updates with guaranteed improvement. This should be contrasted with the log-likelihood objective, for which no such guarantee exists and gradient descent may diverge if the learning rate is not chosen appropriately.
Although the problem formulation contains no explicit regularization term, regularization is achieved implicitly by controlling the complexity of the model class M. In the experiments section, the model hypothesis class is defined by a small integer hyperparameter called bond dimension. We solve the problem for several choices of bond dimension, using a held-out test set to measure overfitting and generalization. In the case where X consists of strings, the associated Hilbert space has a dimension that is exponential in the string length. The model hypothesis class should be chosen so that the induced Born distribution offers a useful inductive bias for modeling high-dimensional probability distributions over the space of sequences. We note, as an aside, that the plug-in estimator ψ̂ is a biased estimator of the population objective ψ.
3. Outline of Our Approach to Solving the Problem
We present an algorithm that, given a fixed realization of data and an initial state ψ₀, produces a deterministic sequence of unit vectors ψ₀, ψ₁, ψ₂, … in H. The algorithm is a variation of the density matrix renormalization group (DMRG) procedure, which we call exact single-site DMRG, in which each step produces a vector closer to ψ̂. The sequence is defined inductively as follows: given ψₜ, the inductive step defines a subspace Hₜ of H, which also contains ψₜ. Then ψₜ₊₁ is defined to be the unit vector in Hₜ closest to ψ̂. Inspired by ideas from the renormalization group, we provide an analytic formula for ψₜ₊₁. The fact that the distance to the target vector decreases after each iteration follows as a simple consequence of two facts: ψₜ is a unit vector in Hₜ, and ψₜ₊₁ is by definition the closest unit vector in Hₜ to ψ̂, so ‖ψ̂ − ψₜ₊₁‖ ≤ ‖ψ̂ − ψₜ‖.
See Figure 1.
Figure 1.
A bird’s eye view of the training dynamics of exact single-site DMRG on the unit sphere. (a) The initial vector ψ₀ and the target vector ψ̂ lie in the unit sphere of H. (b) The vector ψ₀ is used to define the subspace H₀. The unit vectors in H₀ define a lower dimensional sphere (in blue). The vector ψ₁ is the vector in that sphere that is closest to ψ̂. (c) The vector ψ₁ is used to define the subspace H₁. The unit sphere in H₁ (in blue) contains ψ₁ but does not contain ψ₀. The vector ψ₂ is the unit vector in H₁ closest to ψ̂. (d) The vector ψ₂ is used to define the subspace H₂. The vector ψ₃ is the unit vector in H₂ closest to ψ̂. And so on.
4. Effective Versions of the Problem
Each proposal subspace mentioned in the previous section will be defined as the image of an “effective” space. We begin with a general description of an effective space.
Let ι : H_eff → H be an isometric embedding of a Hilbert space H_eff into H. We refer to H_eff as the effective Hilbert space. The isometry ι and its adjoint map ι† are summarized by the following diagram,

The composition ι†ι is the identity on H_eff. The composition in the other order is an orthogonal projection onto ι(H_eff), which is a subspace of H isometrically isomorphic to H_eff. Call this orthogonal projection P = ιι†.

The effective version of the problem formulated in Section 2 is to find the unit vector in the image of the effective Hilbert space that is closest to ψ̂. This effective problem is solved exactly in two simple steps. The first step is orthogonal projection: Pψ̂ is the vector in ι(H_eff) closest to ψ̂. The second step is to normalize Pψ̂, which may not be a unit vector, to obtain the unit vector in ι(H_eff) closest to ψ̂.

Therefore, the analytic solution to the effective problem is

φ = ι(u/‖u‖), where u = ι†ψ̂.   (10)
In the exact single-site DMRG algorithm, the subspace ι(H_eff) is contained within our model hypothesis class M. We also offer a multi-site DMRG algorithm in Appendix A. In this multi-site algorithm, the analytic solution to the effective problem does not lie in M, so it needs to undergo an additional “model repair” step.
Before going on to the details of the algorithm, it might be helpful to look more closely at the solution to the effective problem. For each training example xᵢ, call the vector ι†|xᵢ⟩ an effective data point. Then, the argument of ι in (10) becomes the weighted sum of effective data

u = ι†ψ̂ = Σ_{x ∈ X} √π̂(x) ι†|x⟩.   (11)

The effective data are not necessarily mutually orthogonal and so the vector in (11) will not in general be a unit vector. One may normalize u to obtain a unit vector in H_eff and then apply ι to obtain the analytic solution to the effective problem. Normalizing in H_eff and then applying ι is the same as applying ι and then normalizing in H, since ι is an isometry.
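In finite dimensions, where the isometry can be stored as a matrix with orthonormal columns, the two-step solution (project with the adjoint, then normalize) is a few lines of linear algebra. This is an illustrative sketch under our own naming, not the paper's code:

```python
import numpy as np

rng = np.random.default_rng(0)

# An isometric embedding iota : R^k -> R^D, represented by a matrix with
# orthonormal columns, and a target unit vector psi_hat in the big space.
D, k = 8, 3
iota, _ = np.linalg.qr(rng.standard_normal((D, k)))  # iota.T @ iota = I_k
psi_hat = rng.standard_normal(D)
psi_hat /= np.linalg.norm(psi_hat)

# Exact solution of the effective problem, Equation (10): apply the adjoint,
# normalize in the effective space, then embed back with iota.
u = iota.T @ psi_hat                   # effective coordinates of the projection
phi = iota @ (u / np.linalg.norm(u))   # closest unit vector in iota(H_eff)
```

A quick sanity check is that no other unit vector in the image of the isometry is closer to the target than the vector computed this way.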
5. The Exact Single-Site DMRG Algorithm
Now specialize to the case that π is a probability distribution on a set of sequences. Suppose that X consists of sequences of length N in a fixed alphabet A of size d. The Hilbert space H, defined as the free Hilbert space on X, has a natural tensor product structure H = V^⊗N, where V is the free Hilbert space on the alphabet A. We refer to V as the site space. So in this situation, the vectors |a⟩ for a ∈ A are an orthonormal basis for the d-dimensional site space V and the vectors

|a₁⟩ ⊗ ⋯ ⊗ |a_N⟩ for (a₁, …, a_N) ∈ X

are an orthonormal basis for the d^N-dimensional space H. We choose as model hypothesis class the subset M consisting of normalized elements in H that have a low rank matrix product state (MPS) factorization. Vectors in this model hypothesis class have efficient representations, even in cases where the Hilbert space is of exponentially high dimension. For simplicity of presentation, we consider matrix product states with a single fixed bond space W, although everything that follows could be adapted to work with tensor networks without loops having arbitrary bond spaces.
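As a concrete illustration of the model class (our own sketch, not the paper's code), the parity language discussed in the experiments admits an exact MPS representation with bond dimension 2: the bond index simply carries the running parity of the bits seen so far.

```python
import numpy as np

def mps_amplitude(cores, bits):
    """Contract an open-boundary MPS against a bitstring to get its amplitude.

    cores[j] has shape (D_left, 2, D_right), with D_left = 1 at the first site
    and D_right = 1 at the last site."""
    v = np.ones(1)
    for core, b in zip(cores, bits):
        v = v @ core[:, b, :]   # select the physical index, multiply bond matrices
    return v.item()

def parity_mps(N):
    """Bond-dimension-2 MPS for the uniform superposition of even-parity strings."""
    I, X = np.eye(2), np.array([[0.0, 1.0], [1.0, 0.0]])
    first = np.zeros((1, 2, 2))
    first[0, 0], first[0, 1] = [1, 0], [0, 1]   # running parity = first bit
    mid = np.stack([I, X], axis=1)              # bit 0 keeps parity, bit 1 flips it
    last = np.zeros((2, 2, 1))
    last[0, 0, 0] = last[1, 1, 0] = 1           # accept iff the total parity is even
    first /= np.sqrt(2.0 ** (N - 1))            # normalize: 2^(N-1) even strings
    return [first] + [mid] * (N - 2) + [last]
```

The amplitude of any length-N bitstring is recovered by N small matrix products, even though the ambient space has dimension 2^N.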
The exact single-site DMRG algorithm begins with an initial vector ψ₀ ∈ M and produces ψₜ₊₁ from ψₜ inductively by solving an effective problem in a subspace ι(H_eff) ⊆ H, which we now describe. Let us drop the subscript t from the isometry ι and the effective Hilbert space in the relevant effective problem; just be aware that the embedding ι : H_eff → H will change from step to step. The map ι is defined using an MPS factorization of ψₜ in mixed canonical form relative to a fixed site, which varies at each step according to a predetermined schedule. For the purposes of illustration, the third site is the fixed site in the pictures below.
The effective space is H_eff = W ⊗ V ⊗ W, and the isometric embedding ι is defined for any v ∈ H_eff by replacing the tensor at the fixed site of ψₜ with v:

To see that ι is an isometry, use the gauge condition that the MPS factorization of ψₜ is in mixed canonical form relative to the fixed site, as illustrated below:
The adjoint map ι† has a clean pictorial depiction as well.

To see that the map pictured above is, in fact, the adjoint of ι, note that for any v ∈ H_eff and any φ ∈ H, both ⟨ιv, φ⟩ and ⟨v, ι†φ⟩ result in the same tensor contraction:

In the picture above, begin with the blue tensors. Contracting with the yellow tensor gives ιv, and then contracting with the red tensor gives ⟨ιv, φ⟩. On the other hand, first contracting with the red tensor yields ι†φ, resulting in ⟨v, ι†φ⟩ after contracting with the yellow tensor.
Now, Equation (10) describes an analytic solution for the vector in ι(H_eff) closest to ψ̂. Namely, φ = ι(u/‖u‖) where u = ι†ψ̂.

For each sample x, the effective data point ι†|x⟩ is given by the contraction

Once the effective form of each distinct training example has been computed, weighted by π̂, summed, and normalized, one obtains an expression for the unit vector u/‖u‖, depicted as follows,

Finally, apply the map ι to get ψₜ₊₁:
To complete the description of the exact single-site DMRG algorithm, we need to choose a schedule in which to update the tensors. We use a schedule for the fixed site, organized into back-and-forth sweeps across the chain: 1, 2, …, N, N − 1, …, 2, 1, 2, …, and so on.
A schedule that proceeds by moving the fixed site one position at a time allows us to take advantage of two efficiencies, resulting in an algorithm that is linear in both the number of training examples n and the number of sites N. One efficiency is that most of the calculations of the effective data in Equation (21) used to compute one iterate can be reused when computing the next. The second efficiency is that the updated tensor in Equation (22) can be inserted so that the resulting MPS factorization of ψₜ₊₁ as pictured in Equation (23) is in mixed canonical form relative to a site adjacent to the updated tensor, which avoids a costly gauge fixing step.
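The prefix-caching idea behind the first efficiency can be sketched in a few lines (a toy sketch under our own naming, using plain matrices in place of MPS cores):

```python
import numpy as np

def left_environments(mats, boundary):
    """Prefix contractions: l[j] = boundary @ mats[0] @ ... @ mats[j-1].

    mats[j] plays the role of the bond matrix selected at site j by one
    training string; the effective data point at site j needs l[j] together
    with the analogous suffix contraction r[j]."""
    l = [boundary]
    for M in mats[:-1]:
        l.append(l[-1] @ M)
    return l

rng = np.random.default_rng(1)
D, N = 3, 6
mats = [rng.standard_normal((D, D)) for _ in range(N)]
boundary = rng.standard_normal(D)
l = left_environments(mats, boundary)

# Moving the fixed site from j to j + 1 reuses l[j]: one matrix product,
# rather than recontracting the whole left half of the chain.
j = 3
step = l[j] @ mats[j]
```

The same reuse applies on the right, which is what makes a full sweep linear in N.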
6. Experiments
This section considers the problem of unsupervised learning of probability distributions on bitstrings of fixed length (code available online: https://github.com/TunnelTechnologies/dmrg-exact). The first problem we consider is the parity language P, which consists of bitstrings of length N containing an even number of 1 bits. The goal of this task is to learn the probability distribution p which assigns uniform mass to each bitstring in P and zero elsewhere. More explicitly,

p(x) = 2^−(N−1) · 1_P(x),

where 1_P denotes the indicator function of the subset P. The above unsupervised learning problem is harder than the parity classification problem considered in [12] because the training signal does not exploit data labels. Of the 2^(N−1) such bitstrings, we reserved random disjoint subsets for training, cross-validation, and testing purposes. A negative log-likelihood (NLL, base 2) of N − 1 corresponds to the entropy of the uniform distribution on the parity language. If the model memorizes the training set, it will assign to the training data an NLL corresponding to the entropy of the uniform distribution on the training data. An NLL of N corresponds to the entropy of the uniform distribution on all bitstrings of length N. The measure of generalization performance is the gap between the NLL of the training and testing data. We performed exact single-site DMRG over the real number field using the dataset for different choices of bond dimension, which refers to the dimensionality of the bond space W in the effective Hilbert space. Training was terminated according to an early stopping criterion determined by the distance between the MPS state and the state of the cross-validation sample. Since the bond dimension controls the complexity of the model class, and since matrix product states are universal approximators, we expect overfitting to occur for sufficiently large bond dimension. Indeed, the NLL as a function of bond dimension reported in Figure 2 displays the expected bias-variance tradeoff, with optimal model complexity occurring at bond dimension 3 and a small corresponding generalization gap.
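The dataset construction and the three NLL reference points can be reproduced at a smaller scale; the sketch below uses N = 8 and an arbitrary 20% training split, since the paper's exact split sizes are not reproduced here:

```python
import itertools
import math
import random

N = 8  # the paper uses N = 20
parity_language = [b for b in itertools.product((0, 1), repeat=N)
                   if sum(b) % 2 == 0]
assert len(parity_language) == 2 ** (N - 1)

random.seed(0)
train = random.sample(parity_language, len(parity_language) // 5)

# The three NLL (base 2) reference points discussed in the text:
nll_target = math.log2(len(parity_language))  # uniform on the parity language: N - 1
nll_memorized = math.log2(len(train))         # uniform on the training set
nll_uniform_all = N                           # uniform on all 2^N bitstrings
```

A trained model that generalizes well should approach the first value on held-out data rather than the second.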
Figure 2.
A representative bias-variance tradeoff curve showing negative log-likelihood (base 2) as a function of bond dimension for exact single-site DMRG on the dataset. For bond dimension 3, the generalization gap is approximately . For reference, the uniform distribution on bitstrings has NLL of 20. Memorizing the training data would yield a NLL of approximately .
The second problem we consider is unsupervised learning of the divisible-by-7 language, which consists of the binary representations of integers divisible by 7. The dataset was constructed using the first 149,797 such integers, namely those whose length-20 binary representations lie in the range [0, 2^20). We trained a length-20 MPS to learn the uniform distribution on the divisible-by-7 language as we did for the parity language, except utilizing subsets of size 10% each for training, testing, and cross-validation. Figure 3 illustrates that the model trained with exact single-site DMRG at bond dimension 8 learns the DIV7 dataset with nearly perfect accuracy, producing a model with a small generalization gap.
Figure 3.
A representative bias-variance tradeoff curve showing negative log-likelihood (base 2) as a function of bond dimension for exact single-site DMRG on the div7 dataset. For bond dimension 8, the generalization gap is approximately . For reference, the uniform distribution on bitstrings has NLL of 20, the target distribution has a NLL of , and memorizing the training data would yield a NLL of approximately .
7. Discussion
A number of recent works have explored the parity dataset using restricted Boltzmann machines (RBMs) and found it to be difficult to learn, even in experiments that train using the entire dataset [11,21]. Recall that an RBM is a universal approximator of distributions on binary strings, given sufficiently many hidden units. Ref. [21] proved a bound on the number of hidden units sufficient to approximate any probability distribution on binary strings to within a given KL-divergence; for the parity distribution on length-20 bitstrings this bound works out to a large number of hidden nodes. It would be interesting to know whether it could be learned with significantly fewer.
It is not difficult to train a feedforward neural network to classify bitstrings by parity using labelled data, but we do not know if there are unsupervised generative neural models that do well learning the parity distribution. Additionally, quantum circuits can be trained to classify labelled data [15]. It is reasonable to expect that recurrent models whose training involves conditional probabilities might be frustrated by the parity language, since the conditional distributions contain no information: any bitstring of length less than N has the same number of completions in the language as not in it.
The reader may be interested in [22,23], where quantum models are used to learn classical data. Those works considered quantum Boltzmann machines (QBMs), which were shown to learn the distribution more effectively than their classical counterparts using the same dataset. The complexity of classically simulating a QBM scales exponentially with the number of sites, in contrast to the tensor network algorithms presented here, which scale linearly in the number of sites (for fixed bond dimension).
The main goal of this paper is to demonstrate the existence of classical datasets for which tensor network models trained via DMRG learn more effectively than generative neural models. It will be interesting to understand better how and why [24].
8. Conclusions and Outlook
The essence of DMRG in the quantum physics literature is to solve an eigenvalue problem in a high-dimensional Hilbert space by iteratively solving an effective eigenvalue problem in an isometrically embedded Hilbert subspace. In this paper we have shown how similar reasoning allows us to solve a high-dimensional distribution estimation problem by iteratively solving a related linear algebra problem in an effective Hilbert space. The proposed algorithm offers a number of advantages over existing gradient-based techniques, including a guaranteed improvement theorem, and empirically performs well on tasks for which gradient-based methods are known to fail.
Author Contributions
Formal analysis, J.S. and J.T.; Software, J.S. and J.T. All authors have read and agreed to the published version of the manuscript.
Funding
This research was funded by Tunnel.
Acknowledgments
The authors thank Tai-Danae Bradley, Giuseppe Carleo, Joseph Hirsh, Maxim Kontsevich, Jackie Shadlen, Miles Stoudenmire, and Yiannis Vlassopoulos for many helpful conversations.
Conflicts of Interest
The authors declare no conflict of interest.
Appendix A. Multi-Site DMRG
For completeness we now describe a related multi-site DMRG algorithm. The model class now consists of normalized vectors with matrix product factorizations, with possibly different bond spaces having dimension less than a fixed upper bound. The algorithm begins with an initial vector ψ₀ and produces ψₜ₊₁ inductively. The inductive step is similar in that we solve an effective problem in the image of an effective Hilbert space
to find the unit vector in ι(H_eff) that is closest to the target state ψ̂, which we now denote with a tilde:

In multi-site DMRG, as opposed to single-site DMRG, the image of the effective space is not contained in the MPS model hypothesis class M. So, the solution to the effective problem must undergo a “model repair” step

to produce a vector ψₜ₊₁ ∈ M. In summary:

- Use ψₜ to define an isometric embedding ι : H_eff → H.
- Let ψ̃ₜ₊₁ be the unit vector in ι(H_eff) closest to ψ̂.
- Perform a model repair of ψ̃ₜ₊₁ to obtain a vector ψₜ₊₁ ∈ M. There are multiple ways to do the model repair.
In order to define the effective problem in the inductive step of multi-site DMRG, one uses an MPS factorization of in mixed canonical gauge relative to an interval of r-sites. In the picture below, the interval consists of the two sites 3 and 4.
The effective Hilbert space is W_L ⊗ V^⊗r ⊗ W_R, where W_L and W_R are the bond spaces to the left and right of the fixed interval of sites, and r is the length of the chosen interval. The map ι is given by replacing the interval of sites and contracting
The map ι and its adjoint are described by, and have properties proved by, pictures completely analogous to those detailed for single-site DMRG in Section 5. The effective problem is also solved the same way. What is not the same is that the vector in ι(H_eff) which solves the effective problem is outside of the model class M, and so one performs a model repair step, pictured graphically by:
One way to perform the model repair is to choose
but the flexibility of the model repair step allows for other possibilities. One can use the model repair to implement a dynamic tradeoff between proximity to ψ̂ and other constraints of interest, such as bond dimension. Many of these implementations have good algorithms arising from singular value decompositions, which are manageable in the effective Hilbert space. Let us denote such a model repair choice as in Equation (A7). Be aware that even if ψₜ₊₁ is the vector in M nearest to ψ̃ₜ₊₁, there is no guarantee that ψₜ₊₁ will be nearer to ψ̂ than the previous iterate. In fact, we have experimentally observed the sequence obtained by this kind of model repair to move away from ψ̂. See Figure A1 for an illustration of this possibility.
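One standard realization of such an SVD-based repair (our own sketch; the paper leaves the specific choice open) splits the updated two-site tensor into two cores, truncating to the allowed bond dimension:

```python
import numpy as np

def svd_model_repair(theta, max_bond):
    """Split a two-site tensor theta of shape (Dl, d, d, Dr) into MPS cores
    of shape (Dl, d, m) and (m, d, Dr), with m <= max_bond, via truncated SVD.
    Keeping the m largest singular values gives the best Frobenius-norm
    approximation of theta at that bond size."""
    Dl, d, _, Dr = theta.shape
    U, s, Vt = np.linalg.svd(theta.reshape(Dl * d, d * Dr), full_matrices=False)
    m = min(max_bond, len(s))
    A = U[:, :m].reshape(Dl, d, m)
    B = (s[:m, None] * Vt[:m]).reshape(m, d, Dr)
    return A, B

# With no truncation the factorization is exact; with truncation the error
# equals the norm of the discarded singular values.
rng = np.random.default_rng(2)
theta = rng.standard_normal((2, 2, 2, 3))
A, B = svd_model_repair(theta, max_bond=10)
```

Truncation gives the nearest product of that bond size, though, as noted above, the repaired state need not be closer to the target than the previous iterate and is generally no longer normalized.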
Figure A1.
The shaded region represents the model class M. The red points all lie in M. The vector ψ̃ₜ₊₁ is defined to be the unit vector in ι(H_eff) closest to the target ψ̂. Note that ψ̃ₜ₊₁ does not lie in M. The vector ψₜ₊₁ is defined to be the vector in M closest to ψ̃ₜ₊₁. There may be a point, such as the one labelled in the figure, which lies in M and is closer to ψ̂ than ψₜ₊₁, notwithstanding the fact that it is further from ψ̃ₜ₊₁. This figure, to scale, depicts a scenario in which the relevant pairwise distances are 0.09, 0.10, 0.07, 0.06, 0.07, and 0.08.
One might hope to improve the model repair step, say by pre-conditioning the singular value decomposition in a way that is knowledgeable about the target ψ̂. For the experiments reported in this paper, single-site DMRG consistently outperformed multi-site DMRG for several choices of model repair step, and we include multi-site DMRG only for pedagogical reasons. The adaptability of the bond dimension afforded by the multi-site DMRG algorithm could provide benefits that outweigh the challenges of good model repair in some situations.
References
- Biamonte, J.; Wittek, P.; Pancotti, N.; Rebentrost, P.; Wiebe, N.; Lloyd, S. Quantum machine learning. Nature 2017, 549, 195.
- Preskill, J. Quantum Computing in the NISQ era and beyond. Quantum 2018, 2, 79.
- Peruzzo, A.; McClean, J.; Shadbolt, P.; Yung, M.H.; Zhou, X.Q.; Love, P.J.; Aspuru-Guzik, A.; O’Brien, J.L. A variational eigenvalue solver on a photonic quantum processor. Nat. Commun. 2014, 5, 4213.
- Farhi, E.; Goldstone, J.; Gutmann, S. A quantum approximate optimization algorithm. arXiv 2014, arXiv:1411.4028.
- Huggins, W.; Patil, P.; Mitchell, B.; Whaley, K.B.; Stoudenmire, E.M. Towards quantum machine learning with tensor networks. Quantum Sci. Technol. 2019, 4, 024001.
- Liu, J.G.; Wang, L. Differentiable learning of quantum circuit Born machines. Phys. Rev. A 2018, 98, 062324.
- Benedetti, M.; Garcia-Pintos, D.; Perdomo, O.; Leyton-Ortega, V.; Nam, Y.; Perdomo-Ortiz, A. A generative modeling approach for benchmarking and training shallow quantum circuits. npj Quantum Inf. 2019, 5, 45.
- Du, Y.; Hsieh, M.H.; Liu, T.; Tao, D. The expressive power of parameterized quantum circuits. arXiv 2018, arXiv:1810.11922.
- Killoran, N.; Bromley, T.R.; Arrazola, J.M.; Schuld, M.; Quesada, N.; Lloyd, S. Continuous-variable quantum neural networks. Phys. Rev. Res. 2019, 1, 033063.
- Shepherd, D.; Bremner, M.J. Temporally unstructured quantum computation. Proc. R. Soc. A Math. Phys. Eng. Sci. 2009, 465, 1413–1439.
- Romero, E.; Mazzanti Castrillejo, F.; Delgado, J.; Buchaca, D. Weighted Contrastive Divergence. Neural Netw. 2018.
- Shalev-Shwartz, S.; Shamir, O.; Shammah, S. Failures of Gradient-Based Deep Learning. In Proceedings of the 34th International Conference on Machine Learning (ICML 2017), Sydney, Australia, 6–11 August 2017; pp. 3067–3075.
- Han, Z.Y.; Wang, J.; Fan, H.; Wang, L.; Zhang, P. Unsupervised generative modeling using matrix product states. Phys. Rev. X 2018, 8, 031012.
- Cheng, S.; Wang, L.; Xiang, T.; Zhang, P. Tree tensor networks for generative modeling. Phys. Rev. B 2019, 99, 155131.
- Farhi, E.; Neven, H. Classification with quantum neural networks on near term processors. arXiv 2018, arXiv:1802.06002.
- Stoudenmire, E.M. The Tensor Network. 2019. Available online: https://tensornetwork.org (accessed on 13 February 2019).
- Schollwöck, U. The density-matrix renormalization group in the age of matrix product states. Ann. Phys. 2011, 326, 96–192.
- Bridgeman, J.C.; Chubb, C.T. Hand-waving and interpretive dance: An introductory course on tensor networks. J. Phys. A Math. Theor. 2017, 50, 223001.
- Orús, R. A practical introduction to tensor networks: Matrix product states and projected entangled pair states. Ann. Phys. 2014, 349, 117–158.
- Glasser, I.; Sweke, R.; Pancotti, N.; Eisert, J.; Cirac, J.I. Expressive power of tensor-network factorizations for probabilistic modeling, with applications from hidden Markov models to quantum machine learning. arXiv 2019, arXiv:1907.03741.
- Montúfar, G.F.; Rauh, J.; Ay, N. Expressive Power and Approximation Errors of Restricted Boltzmann Machines. In Proceedings of the 24th International Conference on Neural Information Processing Systems (NIPS’11), Granada, Spain, 12–15 December 2011; Curran Associates Inc.: Red Hook, NY, USA, 2011; pp. 415–423.
- Amin, M.H.; Andriyash, E.; Rolfe, J.; Kulchytskyy, B.; Melko, R. Quantum Boltzmann machine. Phys. Rev. X 2018, 8, 021050.
- Kappen, H.J. Learning quantum models from quantum or classical data. arXiv 2018, arXiv:1803.11278.
- Bradley, T.D.; Stoudenmire, E.M.; Terilla, J. Modeling Sequences with Quantum States: A Look Under the Hood. arXiv 2019, arXiv:1910.07425.
© 2019 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (http://creativecommons.org/licenses/by/4.0/).


