Article

A Comparison between Invariant and Equivariant Classical and Quantum Graph Neural Networks

1 Institute for Fundamental Theory, Physics Department, University of Florida, Gainesville, FL 32611, USA
2 Department of Signal Theory and Communications, Polytechnic University of Catalonia, 08034 Barcelona, Spain
3 Indian Institute of Technology Bhilai, Kutelabhata, Khapri, District-Durg, Chhattisgarh 491001, India
4 Department of Physics & Astronomy, University of Kansas, Lawrence, KS 66045, USA
5 Department of Physics & Astronomy, University of Alabama, Tuscaloosa, AL 35487, USA
6 Software Engineering Institute, Carnegie Mellon University, 4500 Fifth Avenue, Pittsburgh, PA 15213, USA
7 Physik-Department, Technische Universität München, James-Franck-Str. 1, 85748 Garching, Germany
* Author to whom correspondence should be addressed.
Axioms 2024, 13(3), 160; https://doi.org/10.3390/axioms13030160
Submission received: 25 January 2024 / Revised: 25 February 2024 / Accepted: 25 February 2024 / Published: 29 February 2024
(This article belongs to the Special Issue Computational Aspects of Machine Learning and Quantum Computing)

Abstract:
Machine learning algorithms are heavily relied on to understand the vast amounts of data from high-energy particle collisions at the CERN Large Hadron Collider (LHC). The data from such collision events can naturally be represented with graph structures. Therefore, deep geometric methods, such as graph neural networks (GNNs), have been leveraged for various data analysis tasks in high-energy physics. One typical task is jet tagging, where jets are viewed as point clouds with distinct features and edge connections between their constituent particles. The increasing size and complexity of the LHC particle datasets, as well as the computational models used for their analysis, have greatly motivated the development of alternative fast and efficient computational paradigms such as quantum computation. In addition, to enhance the validity and robustness of deep networks, we can leverage the fundamental symmetries present in the data through the use of invariant inputs and equivariant layers. In this paper, we provide a fair and comprehensive comparison of classical graph neural networks (GNNs) and equivariant graph neural networks (EGNNs) and their quantum counterparts: quantum graph neural networks (QGNNs) and equivariant quantum graph neural networks (EQGNNs). The four architectures were benchmarked on a binary classification task to classify the parton-level particle initiating the jet. Based on their area under the curve (AUC) scores, the quantum networks were found to outperform the classical networks. However, seeing the computational advantage of quantum networks in practice may have to wait for the further development of quantum technology and its associated application programming interfaces (APIs).

1. Introduction

Through the measurement of the byproducts of particle collisions, the Large Hadron Collider (LHC) collects a substantial amount of information about fundamental particles and their interactions. The data produced from these collisions can be analyzed using various supervised and unsupervised machine learning methods [1,2,3,4,5]. Jet tagging is a key task in high-energy physics, which seeks to identify the likely parton-level particle from which the jet originated. By viewing individual jets as point clouds with distinct features and edge connections between their constituent particles, a graph neural network (GNN) is considered a well-suited architecture for jet tagging [2,3].
Classified as deep geometric networks, GNNs have the potential to draw inferences about a graph structure, including the interactions among the elements in the graph [6,7]. Graph neural networks are typically thought of as generalizations of convolutional neural networks (CNNs), which are predominantly used for image recognition, pattern recognition, and computer vision [8,9]. This can be attributed to the fact that in an image, each pixel is connected to its nearest neighboring pixels, whereas in a general dataset, one would ideally like to construct an arbitrary graph structure among the samples. Many instances in nature can be described well in terms of graphs, including molecules, maps, social networks, and the brain. For example, in molecules, the nodal data can be attributed to the atoms, the edges can be characterized as the strength of the bond between atoms, and the features embedded within each node can be the atom’s characteristics, such as reactivity.
Generally, graphically structured problems involve unordered sets of elements with a learnable embedding of the input features. Useful information can be extracted from such graphically structured data by embedding them within GNNs. Many subsequent developments have been made to GNNs since their first implementation in 2005. These developments have included graph convolutional, recurrent, message passing, graph attention, and graph transformer architectures [2,6,10,11].
To enhance the validity and robustness of deep networks, invariant and equivariant networks have been constructed to learn the symmetries embedded within a dataset by preserving an oracle in the former and by enforcing weight sharing across filter orientations in the latter [12,13]. Utilizing analytical invariant quantities characteristic of physical symmetry representations, computational methods have successfully rediscovered fundamental Lie group structures, such as the SO(n), SO(1,3), and U(n) groups [14,15,16,17]. Nonlinear symmetry discovery methods have also been applied to classification tasks in data domains [18]. The simplest and most useful embedded symmetry transformations include translations, rotations, and reflections, which have been the primary focus in invariant (IGNN) and equivariant (EGNN) graph neural networks [19,20,21].
The learned representations from the collection of these network components can be used to understand unobservable causal factors, uncover fundamental physical principles governing these processes, and possibly even discover statistically significant hidden anomalies. However, with increasing amounts of available data and the computational cost of these deep learning networks, large computing resources will be required to efficiently run these machine learning algorithms. The extension of classical networks, which rely on bit-wise computation, to quantum networks, which rely on qubit-wise computation, is already underway as a solution to this complexity problem. Due to superposition and entanglement among qubits, quantum networks are able to store the equivalent of $2^n$ characteristics from n two-dimensional complex vectors. In other words, while the expressivity of a classical network scales linearly, that of a quantum network scales exponentially with the number of qubits n [22]. Many APIs, including Xanadu's PennyLane, Google's Cirq, and IBM's Qiskit, have been developed to allow for the testing of quantum circuits and quantum machine learning algorithms on quantum devices.
In the quantum graph structure, classical nodes can be mapped to the quantum states of the qubits, real-valued features to the complex-valued entries of the states, edges to the interactions between states, and edge attributes to the strength of the interactions between the quantum states. Through a well-defined Hamiltonian operator, the larger structure of a classical model can then be embedded into the quantum model. The unitary operator constructed from this parameterized Hamiltonian determines the temporal evolution of the quantum system by acting on the fully entangled quantum state of the graph. Following several layers of application, a final state measurement of the quantum system can then be made to reach a final prediction. The theory and application of unsupervised and supervised learning tasks involving quantum graph neural networks (QGNNs), quantum graph recurrent neural networks (QGRNNs), and quantum graph convolutional neural networks (QGCNNs) have already been developed [23,24]. These models have been extended to arbitrarily sized graphs with the implementation of ego-graph-based quantum graph neural networks (egoQGNNs) [25]. Quantum analogs of other advanced classical architectures, including generative adversarial networks (GANs), transformers, natural language processors (NLPs), and equivariant networks, have also been proposed [23,26,27,28,29,30,31,32].
With the rapid development of quantum deep learning, this paper intends to offer a fair and comprehensive comparison between classical GNNs and their quantum counterparts. To classify whether a particle jet originated from a quark or a gluon, a binary classification task was carried out using four different architectures: a GNN, an SE(2) EGNN, a QGNN, and a permutation-equivariant EQGNN. Each quantum model was fine-tuned to have an analogous structure to its classical form. In order to provide a fair comparison, all models used similar hyperparameters as well as a similar number of total trainable parameters. The final results for each architecture were recorded using identical training, validation, and testing sets. We found that the QGNN and EQGNN outperformed their classical analogs on the particular binary classification task described above. Although these results seem promising for the future of quantum computing, further development of quantum APIs is required to allow for more general implementations of quantum architectures.

2. Data

The jet tagging binary classification task is illustrated with the high-energy physics (HEP) dataset Pythia8 Quark and Gluon Jets for Energy Flow [33]. This dataset contains data from two million particle collision jets, split equally into one million jets that originated from a quark and one million jets that originated from a gluon. These jets resulted from LHC collisions with a total center-of-mass energy of $\sqrt{s} = 14$ TeV and were selected to have transverse momenta $p_T^{\text{jet}}$ between 500 and 550 GeV and rapidities $|y^{\text{jet}}| < 1.7$. The jet kinematic distributions are shown in Figure 1. For each jet $\alpha$, the classification label is provided as either a quark with $y_\alpha = 1$ or a gluon with $y_\alpha = 0$. Each particle i within the jet is listed with its transverse momentum $p_{T,\alpha}^{(i)}$, rapidity $y_\alpha^{(i)}$, azimuthal angle $\phi_\alpha^{(i)}$, and PDG ID $I_\alpha^{(i)}$.

2.1. Graphically Structured Data

A graph G is typically defined as a set of nodes V and edges E, i.e., $G = \{V, E\}$. Each node $v^{(i)} \in V$ is connected to its neighboring nodes $v^{(j)}$ via edges $e^{(ij)} \in E$. In our case, each jet $\alpha$ can be considered as a graph $J_\alpha$ composed of the jet's constituent particles as the nodes $v_\alpha^{(i)}$, with node features $h_\alpha^{(il)}$ and edge connections $e_\alpha^{(ij)}$ between the nodes in $J_\alpha$ with edge features $a_\alpha^{(ij)}$. It should be noted that the number of nodes within a graph can vary. This is especially true for the case of particle jets, where the number of particles within each jet can vary greatly. Each jet $J_\alpha$ can be considered as a collection of $m_\alpha$ particles with l distinct features per particle. An illustration of graphically structured data and an example jet in the $(\phi, y)$ plane are shown in Figure 2.

2.2. Feature Engineering

We used the Particle package [34] to find the particle masses $m_\alpha^{(i)}$ from the respective particle IDs $I_\alpha^{(i)}$. From the available kinematic information for each particle i, we constructed new physically meaningful kinematic variables [35], which were used as additional features in the analysis. In particular, we considered the transverse mass $m_{T,\alpha}^{(i)}$, the energy $E_\alpha^{(i)}$, and the Cartesian momentum components $p_{x,\alpha}^{(i)}$, $p_{y,\alpha}^{(i)}$, and $p_{z,\alpha}^{(i)}$, defined, respectively, as
$$m_{T,\alpha}^{(i)} = \sqrt{\big(m_\alpha^{(i)}\big)^2 + \big(p_{T,\alpha}^{(i)}\big)^2}, \quad E_\alpha^{(i)} = m_{T,\alpha}^{(i)} \cosh\big(y_\alpha^{(i)}\big), \quad p_{x,\alpha}^{(i)} = p_{T,\alpha}^{(i)} \cos\big(\phi_\alpha^{(i)}\big), \quad p_{y,\alpha}^{(i)} = p_{T,\alpha}^{(i)} \sin\big(\phi_\alpha^{(i)}\big), \quad p_{z,\alpha}^{(i)} = m_{T,\alpha}^{(i)} \sinh\big(y_\alpha^{(i)}\big). \qquad (1)$$
The original kinematic information in the dataset was then combined with the additional kinematic variables (1) into a feature set $h_\alpha^{(il)}$, $l = 0, 1, 2, \ldots, 7$, as follows:
$$h_\alpha^{(il)} \in \Big\{ p_{T,\alpha}^{(i)},\; y_\alpha^{(i)},\; \phi_\alpha^{(i)},\; m_{T,\alpha}^{(i)},\; E_\alpha^{(i)},\; p_{x,\alpha}^{(i)},\; p_{y,\alpha}^{(i)},\; p_{z,\alpha}^{(i)} \Big\}. \qquad (2)$$
These features were then max-scaled by their maximum value across all jets $\alpha$ and particles i, i.e., $h_\alpha^{(il)} \to h_\alpha^{(il)} / \max_{\alpha,i}\big(h_\alpha^{(il)}\big)$.
Edge connections are formed via the Euclidean distance $\Delta R = \sqrt{\Delta\phi^2 + \Delta y^2}$ between one particle and its neighbor in $(\phi, y)$ space. Therefore, the edge attribute matrix for each jet can be expressed as
$$a_\alpha^{(ij)} \equiv \Delta R_\alpha^{(ij)} = \sqrt{\big(\phi_\alpha^{(i)} - \phi_\alpha^{(j)}\big)^2 + \big(y_\alpha^{(i)} - y_\alpha^{(j)}\big)^2}. \qquad (3)$$
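For concreteness, the following minimal NumPy sketch (ours, not the authors' code; the function names and the toy jet values are illustrative assumptions) builds the eight per-particle features of Equation (2) and the pairwise $\Delta R$ edge-attribute matrix of Equation (3) for a three-particle jet.

```python
import numpy as np

def build_features(pt, y, phi, m):
    """Return the 8 per-particle features of Eq. (2) for one jet."""
    mT = np.sqrt(m**2 + pt**2)            # transverse mass, Eq. (1)
    E  = mT * np.cosh(y)                  # energy
    px = pt * np.cos(phi)                 # Cartesian momentum components
    py = pt * np.sin(phi)
    pz = mT * np.sinh(y)
    return np.stack([pt, y, phi, mT, E, px, py, pz], axis=-1)

def edge_attributes(y, phi):
    """Pairwise Delta R matrix of Eq. (3) in the (phi, y) plane."""
    dphi = phi[:, None] - phi[None, :]
    dy   = y[:, None] - y[None, :]
    return np.sqrt(dphi**2 + dy**2)

# toy 3-particle jet (illustrative values only)
pt  = np.array([500., 30., 20.])
y   = np.array([0.1, 0.3, -0.2])
phi = np.array([1.0, 1.2, 0.8])
m   = np.array([0.14, 0.14, 0.0])

h = build_features(pt, y, phi, m)         # shape (3, 8), cf. Eq. (2)
a = edge_attributes(y, phi)               # shape (3, 3), cf. Eq. (3)
```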

2.3. Training, Validation, and Testing Sets

We considered jets with at least 10 particles. This left us with N = 1,997,445 jets, 997,805 of which were quark jets. While the classical GNN is more flexible in terms of its hidden features, the size of the quantum state and the Hamiltonian scale as $2^n$, where n is the number of qubits. As we shall see, the number of qubits is given by the number of nodes $n_\alpha$ in the graph, i.e., the number of particles in the jet. Therefore, jets with large particle multiplicity require prohibitively complex quantum networks. Thus, we limited ourselves to the case of $n_\alpha = 3$ particles per jet by only considering the three highest-$p_T$ particles within each jet. In other words, each graph contained the set $h_\alpha = \big(h_\alpha^{(1)}, h_\alpha^{(2)}, h_\alpha^{(3)}\big)$, where each $h_\alpha^{(i)} \in \mathbb{R}^8$ and $h_\alpha \in \mathbb{R}^{3\times 8}$. We then randomly picked 12,500 jets and used the first 10,000 for training, the next 1250 for validation, and the last 1250 for testing. These sets happened to contain 4982, 658, and 583 quark jets, respectively.
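A minimal sketch of this selection and splitting procedure is given below; the toy `jets` and `labels` arrays stand in for the real dataset, and all variable names are our assumptions, with only the cuts and split sizes following the text above.

```python
import numpy as np

rng = np.random.default_rng(0)
# toy stand-in: each jet is an (n_particles, 8) feature array, column 0 = pT
jets = [rng.exponential(size=(rng.integers(5, 40), 8)) for _ in range(20_000)]
labels = rng.integers(0, 2, size=len(jets))

def top3_by_pt(jet):
    """Keep the three highest-pT particles (pT is feature column 0)."""
    order = np.argsort(jet[:, 0])[::-1][:3]
    return jet[order]

# keep jets with at least 10 particles, then truncate each to 3 particles
keep = np.array([len(j) >= 10 for j in jets])
jets3 = np.stack([top3_by_pt(j) for j, k in zip(jets, keep) if k])   # (N, 3, 8)
y = labels[keep]

# random subset of 12,500 jets: 10,000 train / 1,250 validation / 1,250 test
idx = rng.permutation(len(jets3))[:12_500]
train_idx, val_idx, test_idx = idx[:10_000], idx[10_000:11_250], idx[11_250:]
```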

3. Models

The four different models described below, namely a GNN, an EGNN, a QGNN, and an EQGNN, were constructed to perform graph classification. The binary classification task was to determine whether a jet $J_\alpha$ originated from a quark or a gluon.

3.1. Invariance and Equivariance

By making a network invariant or equivariant to particular symmetries within a dataset, a more robust architecture can be developed. In order to introduce invariance and equivariance, one must assume or learn a certain symmetry group G of transformations on the dataset. A function $\varphi : X \to Y$ is equivariant with respect to a set of group transformations $T_g : X \to X$, $g \in G$, acting on the input vector space X, if there exists a set of transformations $S_g : Y \to Y$ that similarly transform the output space Y, i.e.,
$$\varphi(T_g x) = S_g\, \varphi(x). \qquad (4)$$
A model is said to be invariant when, for all $g \in G$, $S_g$ becomes the set containing only the trivial mapping, i.e., $S_g = \{I_G\}$, where $I_G \in G$ is the identity element of the group G [12,36].
Invariance performs better as an input embedding, whereas equivariance can be more easily incorporated into the model layers. For each model, the invariant component corresponds to the translationally and rotationally invariant embedding distance $\varphi \equiv \Delta R_\alpha^{(ij)}$. Here, the function $\varphi : \mathbb{R}^2 \times \mathbb{R}^2 \to \mathbb{R}$ makes up the edge attribute matrix $a_\alpha^{(ij)}$, as defined in Equation (3). This distance is used as opposed to solely incorporating the raw coordinates. Equivariance has been accomplished through the use of simple nontrivial functions, along with higher-order methods involving spherical harmonics, to embed the equivariance within the network [37,38]. Equivariance takes a different form in each model presented here.
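As a quick numerical illustration (our check, not part of the paper), one can verify that the pairwise $\Delta R$ matrix of Equation (3) is unchanged by a rigid rotation and translation of the $(\phi, y)$ coordinates.

```python
import numpy as np

x = np.array([[1.0, 0.1], [1.2, 0.3], [0.8, -0.2]])    # (phi, y) per particle

def delta_r(x):
    d = x[:, None, :] - x[None, :, :]
    return np.linalg.norm(d, axis=-1)                   # pairwise distances

theta = 0.7
R = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])         # rotation
T = np.array([0.5, -1.3])                               # translation

assert np.allclose(delta_r(x), delta_r(x @ R.T + T))    # embedding unchanged
```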

3.2. Graph Neural Network

Classical GNNs take in a collection of graphs $\{G_\alpha\}$, each with nodes $v_\alpha^{(i)} \in V_\alpha$ and edges $e_\alpha^{(ij)} \in E_\alpha$, where each graph $G_\alpha = \{V_\alpha, E_\alpha\}$ is the set of its corresponding nodes and edges. Each node $v_\alpha^{(i)}$ has an associated feature vector $h_\alpha^{(il)}$, and the entire graph has an associated edge attribute tensor $a_\alpha^{(ijr)}$ describing r different relationships between node $v_\alpha^{(i)}$ and its neighbors $v_\alpha^{(j)}$. Here, we can define $N(i)$ as the set of neighbors of node $v_\alpha^{(i)}$ and take $r = 1$, as we only consider one edge attribute. In other words, the edge attribute tensor $a_\alpha^{(ijr)} \to a_\alpha^{(ij)}$ becomes a matrix. The edge attributes are typically formed from the features corresponding to each node and its neighbors.
In the layer structure of a GNN, multilayer perceptrons (MLPs) are used to update the node features and edge attributes before aggregating, or mean pooling, the updated node features for each graph to make a final prediction. To simplify notation, we omit the graph index $\alpha$, lower the node index i, and introduce a model layer index l. The first MLP is the edge MLP $\phi_e$, which, at each layer l, collects the features $h_i^l$, the neighboring features $h_j^l$, and the edge attribute $a_{ij}$ corresponding to the pair. Once the new edge matrix $m_{ij}$ is formed, we sum along the neighbor dimension j to create a new node feature $m_i$. This extra feature is then combined with the original node features $h_i^l$ before being input into a second, node-updating MLP $\phi_h$ to form the new node features $h_i^{l+1}$ [8,10,21]. Therefore, a GNN is defined as
$$m_{ij} = \phi_e\big(h_i^l, h_j^l, a_{ij}\big), \qquad m_i = \sum_{j \in N(i)} m_{ij}, \qquad h_i^{l+1} = \phi_h\big(h_i^l, m_i\big). \qquad (5)$$
Here, $h_i^l \in \mathbb{R}^k$ is the k-dimensional embedding of node $v_i$ at layer l, and $m_{ij}$ is typically referred to as the message-passing function. Once the data are sent through the P graph layers of the GNN, the updated nodes $h_i^P$ are aggregated via mean pooling for each graph to form a set of final features $\frac{1}{n_\alpha}\sum_{i=1}^{n_\alpha} h_i^P$. These final features are sent through a fully connected neural network (NN) to output the predictions. Typically, a fixed number of hidden features $k = N_h$ is defined for both the edge and node MLPs. The described GNN architecture is pictorially shown in the left panel in Figure 3.
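To make Equation (5) concrete, the following PyTorch sketch implements one such graph layer together with the mean-pooling readout. The MLP depths, activation functions, and class names are illustrative assumptions rather than the exact implementation used in the paper; the hidden width $N_h = 10$ and $P = 5$ layers simply mirror the GNN entry of Table 1.

```python
import torch
import torch.nn as nn

class GNNLayer(nn.Module):
    def __init__(self, n_hidden):
        super().__init__()
        # edge MLP phi_e: (h_i, h_j, a_ij) -> message m_ij
        self.phi_e = nn.Sequential(nn.Linear(2 * n_hidden + 1, n_hidden),
                                   nn.SiLU(), nn.Linear(n_hidden, n_hidden))
        # node MLP phi_h: (h_i, m_i) -> h_i^{l+1}
        self.phi_h = nn.Sequential(nn.Linear(2 * n_hidden, n_hidden),
                                   nn.SiLU(), nn.Linear(n_hidden, n_hidden))

    def forward(self, h, a):
        """h: (n, N_h) node features, a: (n, n) edge attributes for one jet."""
        n = h.shape[0]
        hi = h.unsqueeze(1).expand(n, n, -1)            # h_i
        hj = h.unsqueeze(0).expand(n, n, -1)            # h_j
        m_ij = self.phi_e(torch.cat([hi, hj, a.unsqueeze(-1)], dim=-1))
        # sum over j; the fully connected jet graph makes every node a neighbor
        m_i = m_ij.sum(dim=1)
        return self.phi_h(torch.cat([h, m_i], dim=-1))

class GNN(nn.Module):
    def __init__(self, n_feat=8, n_hidden=10, n_layers=5):
        super().__init__()
        self.embed = nn.Linear(n_feat, n_hidden)
        self.layers = nn.ModuleList([GNNLayer(n_hidden) for _ in range(n_layers)])
        self.decoder = nn.Linear(n_hidden, 2)            # quark/gluon logits

    def forward(self, h, a):
        h = self.embed(h)
        for layer in self.layers:
            h = layer(h, a)
        return self.decoder(h.mean(dim=0))               # mean pool over nodes
```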

3.3. SE(2) Equivariant Graph Neural Network

For the classical EGNN, the approach used here was informed by the successful implementation of SE(3) (rotational, translational, and permutational) equivariance on dynamical systems and the QM9 molecular dataset [21]. It should be noted that GNNs are naturally permutation-equivariant, in particular invariant, when averaging over the final node feature outputs of the graph layers [39]. An SE(2) EGNN takes the same form as a GNN; however, the coordinates are equivariantly updated within each graph layer, i.e., $x_i \to x_i^l$, where $x_i = (\phi_i, y_i)$ in our case. The new form of the network becomes
$$m_{ij} = \phi_e\big(h_i^l, h_j^l, a_{ij}, |x_i^l - x_j^l|\big), \qquad m_i = \sum_{j \in N(i)} m_{ij}, \qquad x_i^{l+1} = x_i^l + C \sum_{j \ne i} \big(x_i^l - x_j^l\big)\, \phi_x(m_{ij}), \qquad h_i^{l+1} = \phi_h\big(h_i^l, m_i\big). \qquad (6)$$
Since the coordinates $x_i^l$ of each node $v_i$ are equivariantly evolving, we also introduce a second invariant embedding, $|x_i^l - x_j^l|$, based on the equivariant coordinates into the edge MLP $\phi_e$. The coordinates $x_i$ are updated by adding the summed difference between the coordinates of $x_i$ and its neighbors $x_j$. This added term is suppressed by a factor of C, which we take to be $C(n_\alpha) = \frac{1}{\ln(2 n_\alpha)}$. The term is further multiplied by a coordinate MLP $\phi_x$, which takes as input the message-passing function $m_{ij}$ between node i and its neighbors j. For a proof of the rotational and translational equivariance of $x_i^{l+1}$, see Appendix A. The described EGNN architecture is pictorially shown in the right panel in Figure 3.
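A minimal PyTorch sketch of the equivariant coordinate update in Equation (6) is shown below; the feature update is identical to the GNN layer above and is omitted, and the module structure is an illustrative assumption rather than the paper's exact code.

```python
import math
import torch
import torch.nn as nn

class EGNNCoordinateUpdate(nn.Module):
    def __init__(self, n_hidden):
        super().__init__()
        # coordinate MLP phi_x: message m_ij -> scalar weight
        self.phi_x = nn.Sequential(nn.Linear(n_hidden, n_hidden),
                                   nn.SiLU(), nn.Linear(n_hidden, 1))

    def forward(self, x, m_ij):
        """x: (n, 2) coordinates (phi, y); m_ij: (n, n, n_hidden) messages."""
        n = x.shape[0]
        C = 1.0 / math.log(2 * n)                      # suppression factor C(n_alpha)
        diff = x.unsqueeze(1) - x.unsqueeze(0)         # x_i - x_j, shape (n, n, 2)
        w = self.phi_x(m_ij)                           # phi_x(m_ij), shape (n, n, 1)
        mask = 1.0 - torch.eye(n).unsqueeze(-1)        # exclude the j = i self term
        return x + C * (diff * w * mask).sum(dim=1)    # Eq. (6) coordinate update
```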

3.4. Quantum Graph Neural Network

For a QGNN, the input data, as a collection of graphs $\{G_\alpha\}$, are the same as described above. We fix the number of qubits n to be the number of nodes $n_\alpha$ in each graph. For the quantum algorithm, we first form a normalized qubit product state from an embedding MLP $\phi_{|\psi\rangle}^{0}$, which takes in the features $h_i$ of each node $v_i$ and reduces each of them to a qubit state $|\phi_{|\psi\rangle}^{0}(h_i)\rangle \in \mathbb{C}^2$ [40]. The initial product state describing the system then becomes $|\psi_\alpha^0\rangle = \bigotimes_{i=1}^{n} |\phi_{|\psi\rangle}^{0}(h_i)\rangle \in \mathbb{C}^{2^n}$, which we normalize by $\sqrt{\langle \psi_\alpha^0 | \psi_\alpha^0 \rangle}$.
A fully parameterized Hamiltonian can then be constructed based on the adjacency matrix $a_{ij}$, or trainable interaction weights $W_{ij}$, and node features $h_i$, or trainable feature weights $M_i$ [24]. Here, for the coupling term $H_C$ of the Hamiltonian, we utilize the edge matrix $a_{ij}$ connected to two coupled Pauli-Z operators, $\sigma_i^z$ and $\sigma_j^z$, which act on the Hilbert spaces of qubits i and j, respectively. Since we embed the quantum state $|\psi^0\rangle$ based on the node features $h_i$, we omit the self-interaction term, which utilizes the chosen features applied to the Pauli-Z operator $\sigma_i^z$ acting on the Hilbert space of qubit i. We also introduce a transverse term $H_T$ to each node in the form of a Pauli-X operator, $\sigma_i^x$, with a fixed or learnable constant coefficient $Q_0$, which we take to be $Q_0 = 1$. Note that the Hamiltonian H contains $\mathcal{O}(2^n \times 2^n)$ entries due to the Kronecker products between Pauli operators. To best express the properties of each graph, we take the Hamiltonian of the form
$$H(a_{ij}) = \underbrace{\sum_{(i,j)\in E} a_{ij}\left(\frac{\hat I_i - \sigma_i^z}{2} - \frac{\hat I_j - \sigma_j^z}{2}\right)^{2}}_{H_C} + \underbrace{\sum_{i\in V} \sigma_i^x}_{H_T}, \qquad (7)$$
where the $8 \times 8$ matrix representations of $H_C$ and $H_T$ are shown in Figure 4. We generate the unitary form of the operator via complex exponentiation of the Hamiltonian with real learnable coefficients $\gamma_{lq} \in \mathbb{R}^{P \times Q}$, which can be thought of as infinitesimal parameters running over the $Q = 2$ Hamiltonian terms and the P layers of the network. Therefore, the QGNN can be defined as
$$U_{ij} = \phi_u(a_{ij}) = e^{\,i\sum_{q=1}^{Q}\gamma_{lq} H_q(a_{ij})}, \qquad (8)$$
$$|\psi^{l+1}\rangle = \phi_{|\psi\rangle}\big(|\psi^{l}\rangle, U_{ij}\big) = U_{\theta}^{l}\, U_{ij}\, U_{\theta}^{l}\, |\psi^{l}\rangle, \qquad (9)$$
where $U_{\theta}^{l} = (\theta - iI)(\theta + iI)^{-1}$ is a parameterized unitary Cayley transformation in which we force $\theta \to \theta + \theta^{\dagger}$ to be self-adjoint, i.e., $\theta = \theta^{\dagger}$, with $\theta \in \mathbb{C}^{2^n \times 2^n}$ as the trainable parameters. The QGNN evolves the quantum state $|\psi^0\rangle$ by applying unitarily transformed ansatz Hamiltonians with Q terms to the state over P layers.
The final product state $|\psi^P\rangle \in \mathbb{C}^{2^n}$ is measured for output, which is sent to a classical fully connected NN to make a prediction. The analogy between the quantum unitary interaction function $\phi_u$ and the classical edge MLP $\phi_e$, as well as between the quantum unitary state update function $\phi_{|\psi\rangle}$ and the classical node update function $\phi_h$, should be clear. For a reduction of the coupling Hamiltonian $H_C$ in Equation (7), see Appendix B. The described QGNN architecture is pictorially shown in the left panel in Figure 5.
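As an illustration of how the Hamiltonian of Equation (7) can be assembled for a three-node graph, the following NumPy sketch builds the $8 \times 8$ matrix from Kronecker products of Pauli operators (with $Q_0 = 1$); it is a sketch of the ansatz only, not of the trained network, and the function names are ours.

```python
import numpy as np

I2 = np.eye(2)
Z = np.diag([1.0, -1.0])                     # Pauli-Z
X = np.array([[0.0, 1.0], [1.0, 0.0]])       # Pauli-X

def embed(op, site, n):
    """Place a single-qubit operator on `site` of an n-qubit register."""
    mats = [op if k == site else I2 for k in range(n)]
    out = mats[0]
    for m in mats[1:]:
        out = np.kron(out, m)
    return out

def hamiltonian(a, n=3):
    """H = H_C + H_T of Eq. (7) for the edge-attribute matrix a."""
    H = np.zeros((2**n, 2**n))
    for i in range(n):
        for j in range(i + 1, n):
            Pi = embed((I2 - Z) / 2, i, n)   # (I - sigma_z)/2 on qubit i
            Pj = embed((I2 - Z) / 2, j, n)   # (I - sigma_z)/2 on qubit j
            D = Pi - Pj
            H += a[i, j] * D @ D             # coupling term H_C
        H += embed(X, i, n)                  # transverse term H_T (Q_0 = 1)
    return H

# example with the toy Delta R matrix from Section 2.2
a = np.array([[0.0, 0.28, 0.36], [0.28, 0.0, 0.64], [0.36, 0.64, 0.0]])
print(hamiltonian(a).shape)                  # (8, 8), cf. Figure 4
```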

3.5. Permutation Equivariant Quantum Graph Neural Network

The EQGNN takes the same form as the QGNN; however, we aggregate the elements of the final product state via mean pooling, $\frac{1}{2^n}\sum_{k=1}^{2^n} \psi_k^P$, before sending this complex value to a fully connected NN [31,40,41]. See Appendix C for a proof of the permutation equivariance of the quantum product state with respect to the sum of its elements. The resulting EQGNN architecture is shown in the right panel in Figure 5.
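A small sketch of this readout is given below; the two-qubit swap simply illustrates the permutation invariance of the pooled value, and any decoder details beyond the mean pooling are assumptions rather than the paper's implementation.

```python
import torch

def equivariant_readout(psi):
    """Mean-pool the 2**n complex amplitudes of the final state."""
    return psi.mean()

# toy 2-qubit product state |a> (x) |b>
a = torch.randn(2, dtype=torch.cfloat)
b = torch.randn(2, dtype=torch.cfloat)
psi_ab = torch.kron(a, b)
psi_ba = torch.kron(b, a)                    # same state with the qubits swapped

# the pooled value is unchanged by the qubit permutation (cf. Appendix C)
assert (equivariant_readout(psi_ab) - equivariant_readout(psi_ba)).abs() < 1e-6
```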

4. Results and Analysis

For each model, a range of total parameter counts was tested; however, the overall comparison was conducted using the largest optimal number of total parameters for each network. A feed-forward NN was used to reduce each network's graph-layered output to a binary one, followed by the softmax activation function to obtain the logits in the classical case and the norm of the complex values to obtain the logits in the quantum case. Each model was trained for 20 epochs with the Adam optimizer, a learning rate of $\eta = 10^{-3}$, and a cross-entropy loss function. The classical networks were trained with a batch size of 64 and the quantum networks with a batch size of one, due to the limited capabilities of broadcasting unitary operators in the quantum APIs. After epoch 15, the best model weights were saved when the validation AUC of the true positive rate (TPR) as a function of the false positive rate (FPR) was maximized. The results of the training for the largest optimal total number of parameters, $|\Theta| \approx 5100$, are shown in Figure 6, with more details listed in Table 1.
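For reference, the ROC curve and the AUC used for model selection can be computed with scikit-learn as in the following sketch; the label and score arrays shown are toy values, not results from the paper.

```python
import numpy as np
from sklearn.metrics import roc_curve, roc_auc_score

y_true  = np.array([1, 0, 1, 1, 0, 0, 1, 0])             # quark = 1, gluon = 0
y_score = np.array([0.9, 0.2, 0.7, 0.4, 0.3, 0.6, 0.8, 0.1])

fpr, tpr, thresholds = roc_curve(y_true, y_score)         # TPR as a function of FPR
print("AUC =", roc_auc_score(y_true, y_score))            # area under that curve
```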
Recall that for each model, we varied the number of hidden features $N_h$ in the P graph layers. Since we fixed the number of nodes at $n_\alpha = 3$ per jet, the hidden feature number was fixed at $N_h = 2^3 = 8$ in the quantum case, and, therefore, we also varied the parameters of the encoder $\phi_{|\psi\rangle}^{0}$ and the decoder NN in the quantum algorithms.
Based on the AUC scores, the EGNN outperformed both the classical and quantum GNNs; however, it was in turn outperformed by the EQGNN, whose AUC was higher by 7.29%, demonstrating the strength of the EQGNN. Although the GNN outperformed the QGNN in the final parameter test by 1.93%, the QGNN performed more consistently and outperformed the GNN in the mid-parameter range $|\Theta| \in (1500, 4000)$. The AUC scores were recorded as the total number of parameters was varied over $|\Theta| \in \{500, 1200, 1600, 2800, 3500, 5100\}$, as shown in the right panel in Figure 7.

5. Conclusions

Through several computational experiments, the quantum GNNs seemed to exhibit enhanced classification performance compared with their classical counterparts, as measured by the best test AUC scores obtained after training models with a similar number of parameters, similar hyperparameters, and similar structures. These results seem promising for a quantum advantage over classical models. Despite this result, the quantum algorithms took over 100 times longer to train than the classical networks. This was primarily because we ran our quantum simulations on classical computers rather than on actual quantum hardware. In addition, we were hindered by the limited capabilities of the quantum APIs, where the inability to train with broadcastable unitary operators forced the quantum models to use a batch size of one, i.e., to train on a single graph at a time.
The action of the equivariance in the EGNN and EQGNN could be further explored and developed. This is especially true in the quantum case where more general permutation and SU(2) equivariance have been explored [40,41,42,43]. Expanding the flexibility of the networks to an arbitrary number of nodes per graph should also offer increased robustness; however, this may continue to pose challenges in the quantum case due to the current limited flexibility of quantum software. A variety of different forms of the networks can also be explored. Potential ideas for this include introducing attention components and altering the structure of the quantum graph layers to achieve enhanced performance of both classical and quantum GNNs. In particular, one can further parameterize the quantum graph layer structure to better align with the total number of parameters used in the classical structures. These avenues will be explored in future work.

6. Software and Code

PyTorch and Pennylane were the primary APIs used in the formation and testing of these algorithms. The code corresponding to this study can be found at https://github.com/royforestano/2023_gsoc_ml4sci_qmlhep_gnn (accessed on 5 February 2024).

Author Contributions

Conceptualization, R.T.F.; methodology, M.C.C., G.R.D., Z.D., R.T.F., S.G., D.J., K.K., T.M., K.T.M., K.M. and E.B.U.; software, R.T.F.; validation, M.C.C., G.R.D., Z.D., R.T.F., T.M. and E.B.U.; formal analysis, R.T.F.; investigation, M.C.C., G.R.D., Z.D., R.T.F., T.M. and E.B.U.; resources, R.T.F., K.T.M. and K.M.; data curation, G.R.D., S.G. and T.M.; writing—original draft preparation, R.T.F.; writing—review and editing, S.G., D.J., K.K., K.T.M. and K.M.; visualization, R.T.F.; supervision, S.G., D.J., K.K., K.T.M. and K.M.; project administration, S.G., D.J., K.K., K.T.M. and K.M.; funding acquisition, S.G. All authors have read and agreed to the published version of the manuscript.

Funding

This study used resources of the National Energy Research Scientific Computing Center, a DOE Office of Science User Facility supported by the Office of Science of the U.S. Department of Energy under Contract No. DE-AC02-05CH11231 using NERSC award NERSC DDR-ERCAP0025759. S.G. was supported in part by the U.S. Department of Energy (DOE) under Award No. DE-SC0012447. K.M. was supported in part by the U.S. DOE award number DE-SC0022148. K.K. was supported in part by US DOE DE-SC0024407. Z.D. was supported in part by College of Liberal Arts and Sciences Research Fund at the University of Kansas. Z.D., R.T.F., E.B.U., M.C.C., and T.M. were participants in the 2023 Google Summer of Code.

Institutional Review Board Statement

Not applicable.

Data Availability Statement

The high-energy physics (HEP) dataset Pythia8 Quark and Gluon Jets for Energy Flow [33] was used in this analysis.

Conflicts of Interest

The authors declare no conflicts of interest. The funders had no role in the design of the study; in the collection, analyses, or interpretation of data; in the writing of the manuscript; or in the decision to publish the results.

Abbreviations

The following abbreviations are used in this manuscript:
API: Application Programming Interface
AUC: Area Under the Curve
CNN: Convolutional Neural Network
EGNN: Equivariant Graph Neural Network
EQGNN: Equivariant Quantum Graph Neural Network
FPR: False Positive Rate
GAN: Generative Adversarial Network
GNN: Graph Neural Network
LHC: Large Hadron Collider
MDPI: Multidisciplinary Digital Publishing Institute
MLP: Multilayer Perceptron
NLP: Natural Language Processor
NN: Neural Network
QGCNN: Quantum Graph Convolutional Neural Network
QGNN: Quantum Graph Neural Network
QGRNN: Quantum Graph Recurrent Neural Network
TPR: True Positive Rate

Appendix A. Equivariant Coordinate Update Function

Let $T_g : X \to X$ be the set of translational and rotational group transformations with elements $g \in T_g \subseteq G$ that act on the vector space X. The function $\varphi : X \to X$ defined by
$$\varphi(x) = x_i + C \sum_{j \ne i} \big(x_i - x_j\big) \qquad (A1)$$
is equivariant with respect to $T_g$.
Proof. 
Let a general transformation $g \in T_g$ act on X by $gX = RX + T$, where $R \in T_g$ denotes a general rotation and $T \in T_g$ denotes a general translation. Then, under the transformation g on X of the function $\varphi$ defined above, we have
$$\begin{aligned}
\varphi(gx) &= (g x_i) + C\sum_{j\ne i}\big(g x_i - g x_j\big) = (R x_i + T) + C\sum_{j\ne i}\big(R x_i + T - R x_j - T\big) \\
&= (R x_i + T) + C\sum_{j\ne i}\big(R x_i - R x_j\big) = R x_i + C\sum_{j\ne i} R\,(x_i - x_j) + T \\
&= R\Big[x_i + C\sum_{j\ne i}(x_i - x_j)\Big] + T = g\,\varphi(x), \qquad (A2)
\end{aligned}$$
where $\varphi(gx) = g\,\varphi(x)$ shows that $\varphi$ transforms equivariantly under transformations $g \in T_g$ acting on X. □

Appendix B. Coupling Hamiltonian Simplification

The reduction in the coupling Hamiltonian becomes
$$\begin{aligned}
\hat H_C &= \frac{1}{2}\sum_{(j,k)\in E} \Lambda_{jk}\left(\frac{\hat I_j - \sigma_j^z}{2} - \frac{\hat I_k - \sigma_k^z}{2}\right)^{2}
= \frac{1}{8}\sum_{(j,k)\in E} \Lambda_{jk}\left(\hat I_j - \sigma_j^z - \hat I_k + \sigma_k^z\right)^{2} \\
&= \frac{1}{8}\sum_{(j,k)\in E} \Lambda_{jk}\left[\big(\hat I_j - \sigma_j^z\big)^2 - \big(\hat I_j - \sigma_j^z\big)\big(\hat I_k - \sigma_k^z\big) - \big(\hat I_k - \sigma_k^z\big)\big(\hat I_j - \sigma_j^z\big) + \big(\hat I_k - \sigma_k^z\big)^2\right] \\
&= \frac{1}{8}\sum_{(j,k)\in E} \Lambda_{jk}\Big[\hat I_j\hat I_j - \hat I_j\sigma_j^z - \sigma_j^z\hat I_j + \sigma_j^z\sigma_j^z - \hat I_j\hat I_k + \hat I_j\sigma_k^z + \sigma_j^z\hat I_k - \sigma_j^z\sigma_k^z \\
&\qquad\qquad\qquad - \hat I_k\hat I_j + \hat I_k\sigma_j^z + \sigma_k^z\hat I_j - \sigma_k^z\sigma_j^z + \hat I_k\hat I_k - \hat I_k\sigma_k^z - \sigma_k^z\hat I_k + \sigma_k^z\sigma_k^z\Big] \\
&= \frac{1}{8}\sum_{(j,k)\in E} \Lambda_{jk}\left[\hat I_j\hat I_j - 2\hat I_j\sigma_j^z + \sigma_j^z\sigma_j^z - 2\hat I_j\hat I_k + 2\hat I_j\sigma_k^z + 2\sigma_j^z\hat I_k - 2\sigma_j^z\sigma_k^z + \hat I_k\hat I_k - 2\hat I_k\sigma_k^z + \sigma_k^z\sigma_k^z\right] \\
&= \frac{1}{8}\sum_{(j,k)\in E} \Lambda_{jk}\left[\hat I_j - 2\sigma_j^z + \big(\sigma_j^z\big)^2 - 2\hat I_j\hat I_k + 2\hat I_j\sigma_k^z + 2\sigma_j^z\hat I_k - 2\sigma_j^z\sigma_k^z + \hat I_k - 2\sigma_k^z + \big(\sigma_k^z\big)^2\right] \\
&= \frac{1}{8}\sum_{(j,k)\in E} \Lambda_{jk}\left[2\hat I_j - 2\sigma_j^z - 2\hat I_j\hat I_k + 2\hat I_j\sigma_k^z + 2\sigma_j^z\hat I_k - 2\sigma_j^z\sigma_k^z + 2\hat I_k - 2\sigma_k^z\right] \\
&= \frac{1}{4}\sum_{(j,k)\in E} \Lambda_{jk}\left[\hat I_j - \sigma_j^z - \hat I_j\hat I_k + \hat I_j\sigma_k^z + \sigma_j^z\hat I_k - \sigma_j^z\sigma_k^z + \hat I_k - \sigma_k^z\right],
\end{aligned}$$
and multiplying on the left by $\hat I_j$ and on the right by $\hat I_k$ produces
$$\hat H_C = \frac{1}{4}\sum_{(j,k)\in E} \Lambda_{jk}\left[\hat I_j\hat I_k - \sigma_j^z\hat I_k + \hat I_j\sigma_k^z + \sigma_j^z\hat I_k - \sigma_j^z\sigma_k^z - \hat I_j\sigma_k^z\right] = \frac{1}{4}\sum_{(j,k)\in E} \Lambda_{jk}\left(\hat I_j\hat I_k - \sigma_j^z\sigma_k^z\right).$$

Appendix C. Quantum Product State Permutation Equivariance

For V, a commutable vector space, the product state $\bigotimes_{i=1}^{m} v_i : V^n \times \cdots \times V^n \to V^{n^m}$ is permutation-equivariant with respect to the sum of its entries. We prove the $n = 2$ case for all $m \in \mathbb{Z}_{>0}$.
Proof. 
(By Induction) Assuming we have an $n = 1$ final qubit state,
$$|\psi_1\rangle = \bigotimes_{i=1}^{1} \begin{pmatrix} v_1^1 \\ v_1^2 \end{pmatrix} = \begin{pmatrix} v_1^1 \\ v_1^2 \end{pmatrix},$$
the sum of the product state elements is trivially equivariant with respect to similar graphs. If we have $n = 2$ final qubit states, the product state is
$$|\psi_i\rangle = \bigotimes_{i=\{1,2\}} \begin{pmatrix} v_i^1 \\ v_i^2 \end{pmatrix} = \begin{pmatrix} v_1^1 \\ v_1^2 \end{pmatrix} \otimes \begin{pmatrix} v_2^1 \\ v_2^2 \end{pmatrix} = \begin{pmatrix} v_1^1 v_2^1 \\ v_1^2 v_2^1 \\ v_1^1 v_2^2 \\ v_1^2 v_2^2 \end{pmatrix},$$
where the sum of the elements becomes
$$v_1^1 v_2^1 + v_1^2 v_2^1 + v_1^1 v_2^2 + v_1^2 v_2^2 = v_2^1 v_1^1 + v_2^1 v_1^2 + v_2^2 v_1^1 + v_2^2 v_1^2 = v_2^1 v_1^1 + v_2^2 v_1^1 + v_2^1 v_1^2 + v_2^2 v_1^2,$$
which is equivalent to the sum of the elements
$$\begin{pmatrix} v_2^1 v_1^1 \\ v_2^2 v_1^1 \\ v_2^1 v_1^2 \\ v_2^2 v_1^2 \end{pmatrix} = \begin{pmatrix} v_2^1 \\ v_2^2 \end{pmatrix} \otimes \begin{pmatrix} v_1^1 \\ v_1^2 \end{pmatrix} = \bigotimes_{i=\{2,1\}} \begin{pmatrix} v_i^1 \\ v_i^2 \end{pmatrix}$$
for commutative spaces where $v_i^j \in \mathbb{C}$ and $\{1,2\}$, $\{2,1\}$ should be regarded as ordered sets, which again shows the sum of the state elements remaining unchanged when the qubit states switch positions in the product. We now assume the statement is true for $n = N$ final qubit states and proceed to show that the $N + 1$ case is true. The quantum product state over N elements becomes
$$\bigotimes_{i=1}^{N}|\psi_i\rangle = \bigotimes_i \begin{pmatrix} v_i^1 \\ v_i^2 \end{pmatrix} = \begin{pmatrix} v_1^1 \\ v_1^2 \end{pmatrix} \otimes \begin{pmatrix} v_2^1 \\ v_2^2 \end{pmatrix} \otimes \cdots \otimes \begin{pmatrix} v_N^1 \\ v_N^2 \end{pmatrix},$$
which we assume to be permutation-equivariant over the sum of its elements. We can rewrite the form of this state as
$$\bigotimes_{i=1}^{N}|\psi_i\rangle = \begin{pmatrix} A_1 \\ A_2 \\ \vdots \\ A_{2^N} \end{pmatrix},$$
where the $A_j$ define the $2^N$ terms in the final product state. Replacing the $(i+1)$-th entry of the Kronecker product above with a new $(N+1)$-th state, we have
$$\bigotimes_{i=1}^{N+1}|\psi_i\rangle = \bigotimes_i \begin{pmatrix} v_i^1 \\ v_i^2 \end{pmatrix} = \underbrace{\begin{pmatrix} v_1^1 \\ v_1^2 \end{pmatrix} \otimes \begin{pmatrix} v_2^1 \\ v_2^2 \end{pmatrix} \otimes \cdots \otimes \begin{pmatrix} v_i^1 \\ v_i^2 \end{pmatrix} \otimes \begin{pmatrix} v_{N+1}^1 \\ v_{N+1}^2 \end{pmatrix} \otimes \begin{pmatrix} v_{i+1}^1 \\ v_{i+1}^2 \end{pmatrix} \otimes \cdots \otimes \begin{pmatrix} v_N^1 \\ v_N^2 \end{pmatrix}}_{N+1 \text{ terms}}.$$
When this occurs, this new state, consisting of $2^{N+1}$ elements with the $(N+1)$-th state in the $(i+1)$-th entry of the product, can be written in terms of the old state with groupings of the new elements in $2^{N+1-i}$ batches of $2^i$ elements, i.e.,
$$\bigotimes_{i=1}^{N+1}|\psi_i\rangle = \begin{pmatrix} B_1 = A_1\, v_{N+1}^1 \\ B_2 = A_2\, v_{N+1}^1 \\ \vdots \\ B_{2^i} = A_{2^i}\, v_{N+1}^1 \\ B_{2^i+1} = A_1\, v_{N+1}^2 \\ \vdots \\ B_{2^{i+1}} = A_{2^i}\, v_{N+1}^2 \\ \vdots \\ B_{2^{N+1}-2^{i+1}+1} = A_{2^N-2^i+1}\, v_{N+1}^1 \\ \vdots \\ B_{2^{N+1}-2^i} = A_{2^N}\, v_{N+1}^1 \\ B_{2^{N+1}-2^i+1} = A_{2^N-2^i+1}\, v_{N+1}^2 \\ \vdots \\ B_{2^{N+1}} = A_{2^N}\, v_{N+1}^2 \end{pmatrix},$$
which, when summed, becomes
$$\sum_{k=1}^{2^{N+1}} B_k = \sum_{j=1}^{2^N} A_j\, v_{N+1}^1 + \sum_{j=1}^{2^N} A_j\, v_{N+1}^2 = \big(v_{N+1}^1 + v_{N+1}^2\big)\sum_{j=1}^{2^N} A_j.$$
However, the $(i+1)$-th entry is arbitrary, and, due to the summation permutation equivariance of the initial state $\bigotimes_{i=1}^{N}|\psi_i\rangle$, the sum $\sum_{j=1}^{2^N} A_j$ is equivariant, in fact invariant, under all reorderings of the elements $|\psi_i\rangle$ in the product $\bigotimes_{i=1}^{N}|\psi_i\rangle$. Therefore, we conclude that $\bigotimes_{i=1}^{N+1}|\psi_i\rangle$ is permutation-equivariant with respect to the sum of its elements. □
To show a simple illustration of why (A5) is true, let us take two initial states and see what happens when we insert a new state between them, i.e., in the 2nd entry of the product. This should lead to $2^{(2+1)-1} = 2^2 = 4$ groupings of $2^1 = 2$ elements. To begin, we have
$$\begin{pmatrix} v_1^1 \\ v_1^2 \end{pmatrix} \otimes \begin{pmatrix} v_2^1 \\ v_2^2 \end{pmatrix} = \begin{pmatrix} v_1^1 v_2^1 \\ v_1^2 v_2^1 \\ v_1^1 v_2^2 \\ v_1^2 v_2^2 \end{pmatrix} = \begin{pmatrix} A_1 \\ A_2 \\ A_3 \\ A_4 \end{pmatrix},$$
and when we insert the new third state into the 2nd entry of the product above, we have
$$\begin{pmatrix} v_1^1 \\ v_1^2 \end{pmatrix} \otimes \begin{pmatrix} v_3^1 \\ v_3^2 \end{pmatrix} \otimes \begin{pmatrix} v_2^1 \\ v_2^2 \end{pmatrix} = \begin{pmatrix} v_1^1 v_3^1 v_2^1 \\ v_1^2 v_3^1 v_2^1 \\ v_1^1 v_3^2 v_2^1 \\ v_1^2 v_3^2 v_2^1 \\ v_1^1 v_3^1 v_2^2 \\ v_1^2 v_3^1 v_2^2 \\ v_1^1 v_3^2 v_2^2 \\ v_1^2 v_3^2 v_2^2 \end{pmatrix} = \begin{pmatrix} A_1 v_3^1 \\ A_2 v_3^1 \\ A_1 v_3^2 \\ A_2 v_3^2 \\ A_3 v_3^1 \\ A_4 v_3^1 \\ A_3 v_3^2 \\ A_4 v_3^2 \end{pmatrix},$$
which sums to
$$\big(A_1 + A_2 + A_3 + A_4\big) v_3^1 + \big(A_1 + A_2 + A_3 + A_4\big) v_3^2 = \big(v_3^1 + v_3^2\big)\sum_{j=1}^{2^N = 2^2 = 4} A_j.$$
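The same statement can be spot-checked numerically (our check, not part of the paper): inserting a new factor at any position of the Kronecker product leaves the sum of the entries equal to $(v_{N+1}^1 + v_{N+1}^2)\sum_j A_j$, independent of the insertion point.

```python
import numpy as np

rng = np.random.default_rng(1)
states = [rng.normal(size=2) + 1j * rng.normal(size=2) for _ in range(3)]
new = rng.normal(size=2) + 1j * rng.normal(size=2)     # the (N+1)-th state

def kron_all(factors):
    out = factors[0]
    for v in factors[1:]:
        out = np.kron(out, v)
    return out

# insert the new factor at every possible position and sum the entries
sums = [kron_all(states[:k] + [new] + states[k:]).sum() for k in range(4)]
assert np.allclose(sums, sums[0])                      # same sum every time
assert np.isclose(sums[0], new.sum() * kron_all(states).sum())
```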

References

  1. Andreassen, A.; Feige, I.; Frye, C.; Schwartz, M.D. JUNIPR: A framework for unsupervised machine learning in particle physics. Eur. Phys. J. C 2019, 79, 102. [Google Scholar] [CrossRef]
  2. Shlomi, J.; Battaglia, P.; Vlimant, J.R. Graph neural networks in particle physics. Mach. Learn. Sci. Technol. 2020, 2, 021001. [Google Scholar] [CrossRef]
  3. Mikuni, V.; Canelli, F. Point cloud transformers applied to collider physics. Mach. Learn. Sci. Technol. 2021, 2, 035027. [Google Scholar] [CrossRef]
  4. Mokhtar, F.; Kansal, R.; Duarte, J. Do graph neural networks learn traditional jet substructure? arXiv 2022, arXiv:2211.09912. [Google Scholar]
  5. Mikuni, V.; Canelli, F. ABCNet: An attention-based method for particle tagging. Eur. Phys. J. Plus 2020, 135, 463. [Google Scholar] [CrossRef] [PubMed]
  6. Veličković, P. Everything is connected: Graph neural networks. Curr. Opin. Struct. Biol. 2023, 79, 102538. [Google Scholar] [CrossRef] [PubMed]
  7. Zhou, J.; Cui, G.; Hu, S.; Zhang, Z.; Yang, C.; Liu, Z.; Wang, L.; Li, C.; Sun, M. Graph neural networks: A review of methods and applications. AI Open 2020, 1, 57–81. [Google Scholar] [CrossRef]
  8. Kipf, T.N.; Welling, M. Semi-Supervised Classification with Graph Convolutional Networks. arXiv 2016, arXiv:1609.02907. [Google Scholar]
  9. Kipf, T.N.; Welling, M. Semi-Supervised Classification with Graph Convolutional Networks. In Proceedings of the 5th International Conference on Learning Representations (ICLR ’17), Toulon, France, 24–26 April 2017. [Google Scholar]
  10. Gilmer, J.; Schoenholz, S.S.; Riley, P.F.; Vinyals, O.; Dahl, G.E. Neural Message Passing for Quantum Chemistry. In Proceedings of the 34th International Conference on Machine Learning, Sydney, Australia, 6–11 August 2017; Volume 70, pp. 1263–1272. [Google Scholar]
  11. Veličković, P.; Cucurull, G.; Casanova, A.; Romero, A.; Liò, P.; Bengio, Y. Graph Attention Networks. In Proceedings of the International Conference on Learning Representations, Vancouver, BC, Canada, 30 April–3 May 2018. [Google Scholar]
  12. Lim, L.; Nelson, B.J. What is an equivariant neural network? arXiv 2022, arXiv:2205.07362. [Google Scholar] [CrossRef]
  13. Ecker, A.S.; Sinz, F.H.; Froudarakis, E.; Fahey, P.G.; Cadena, S.A.; Walker, E.Y.; Cobos, E.; Reimer, J.; Tolias, A.S.; Bethge, M. A rotation-equivariant convolutional neural network model of primary visual cortex. In Proceedings of the International Conference on Learning Representations, New Orleans, LA, USA, 6–9 May 2019. [Google Scholar]
  14. Forestano, R.T.; Matchev, K.T.; Matcheva, K.; Roman, A.; Unlu, E.B.; Verner, S. Deep learning symmetries and their Lie groups, algebras, and subalgebras from first principles. Mach. Learn. Sci. Tech. 2023, 4, 025027. [Google Scholar] [CrossRef]
  15. Forestano, R.T.; Matchev, K.T.; Matcheva, K.; Roman, A.; Unlu, E.B.; Verner, S. Discovering Sparse Representations of Lie Groups with Machine Learning. Phys. Lett. B 2023, 844, 138086. [Google Scholar] [CrossRef]
  16. Forestano, R.T.; Matchev, K.T.; Matcheva, K.; Roman, A.; Unlu, E.B.; Verner, S. Accelerated Discovery of Machine-Learned Symmetries: Deriving the Exceptional Lie Groups G2, F4 and E6. Phys. Lett. B 2023, 847, 138266. [Google Scholar] [CrossRef]
  17. Forestano, R.T.; Matchev, K.T.; Matcheva, K.; Roman, A.; Unlu, E.B.; Verner, S. Identifying the Group-Theoretic Structure of Machine-Learned Symmetries. Phys. Lett. B 2023, 847, 138306. [Google Scholar] [CrossRef]
  18. Roman, A.; Forestano, R.T.; Matchev, K.T.; Matcheva, K.; Unlu, E.B. Oracle-Preserving Latent Flows. Symmetry 2023, 15, 1352. [Google Scholar] [CrossRef]
  19. Maron, H.; Ben-Hamu, H.; Shamir, N.; Lipman, Y. Invariant and Equivariant Graph Networks. In Proceedings of the International Conference on Learning Representations, New Orleans, LA, USA, 6–9 May 2019. [Google Scholar]
  20. Gong, S.; Meng, Q.; Zhang, J.; Qu, H.; Li, C.; Qian, S.; Du, W.; Ma, Z.M.; Liu, T.Y. An efficient Lorentz equivariant graph neural network for jet tagging. J. High Energy Phys. 2022, 7, 030. [Google Scholar] [CrossRef]
  21. Satorras, V.G.; Hoogeboom, E.; Welling, M. E(n) Equivariant Graph Neural Networks. arXiv 2021, arXiv:2102.09844. [Google Scholar]
  22. Preskill, J. Quantum Computing in the NISQ era and beyond. Quantum 2018, 2, 79. [Google Scholar] [CrossRef]
  23. Beer, K.; Khosla, M.; Köhler, J.; Osborne, T.J.; Zhao, T. Quantum machine learning of graph-structured data. Phys. Rev. A 2023, 108, 012410. [Google Scholar] [CrossRef]
  24. Verdon, G.; Mccourt, T.; Luzhnica, E.; Singh, V.; Leichenauer, S.; Hidary, J.D. Quantum Graph Neural Networks. arXiv 2019, arXiv:1909.12264. [Google Scholar]
  25. Ai, X.; Zhang, Z.; Sun, L.; Yan, J.; Hancock, E.R. Decompositional Quantum Graph Neural Network. arXiv 2022, arXiv:2201.05158. [Google Scholar]
  26. Niu, M.Y.; Zlokapa, A.; Broughton, M.; Boixo, S.; Mohseni, M.; Smelyanskyi, V.; Neven, H. Entangling Quantum Generative Adversarial Networks. Phys. Rev. Lett. 2022, 128, 220505. [Google Scholar] [CrossRef]
  27. Chu, C.; Skipper, G.; Swany, M.; Chen, F. IQGAN: Robust Quantum Generative Adversarial Network for Image Synthesis On NISQ Devices. In Proceedings of the ICASSP 2023—2023 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), Rhodes Island, Greece, 4–10 June 2023; pp. 1–5. [Google Scholar] [CrossRef]
  28. Sipio, R.D.; Huang, J.H.; Chen, S.Y.C.; Mangini, S.; Worring, M. The Dawn of Quantum Natural Language Processing. In Proceedings of the ICASSP 2022—2022 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), Virtual, 7–13 May 2021; pp. 8612–8616. [Google Scholar]
  29. Cherrat, E.A.; Kerenidis, I.; Mathur, N.; Landman, J.; Strahm, M.C.; Li, Y.Y. Quantum Vision Transformers. arXiv 2023, arXiv:2209.08167. [Google Scholar] [CrossRef]
  30. Meyer, J.J.; Mularski, M.; Gil-Fuster, E.; Mele, A.A.; Arzani, F.; Wilms, A.; Eisert, J. Exploiting Symmetry in Variational Quantum Machine Learning. PRX Quantum 2023, 4, 010328. [Google Scholar] [CrossRef]
  31. Nguyen, Q.T.; Schatzki, L.; Braccia, P.; Ragone, M.; Coles, P.J.; Sauvage, F.; Larocca, M.; Cerezo, M. Theory for Equivariant Quantum Neural Networks. arXiv 2022, arXiv:2210.08566. [Google Scholar]
  32. Schatzki, L.; Larocca, M.; Nguyen, Q.T.; Sauvage, F.; Cerezo, M. Theoretical Guarantees for Permutation-Equivariant Quantum Neural Networks. arXiv 2022, arXiv:2210.09974. [Google Scholar] [CrossRef]
  33. Komiske, P.T.; Metodiev, E.M.; Thaler, J. Energy flow networks: Deep sets for particle jets. J. High Energy Phys. 2019, 2019, 121. [Google Scholar] [CrossRef]
  34. Rodrigues, E.; Schreiner, H. Scikit-Hep/Particle: Version 0.23.0; Zenodo: Geneva, Switzerland, 2023. [Google Scholar] [CrossRef]
  35. Franceschini, R.; Kim, D.; Kong, K.; Matchev, K.T.; Park, M.; Shyamsundar, P. Kinematic Variables and Feature Engineering for Particle Phenomenology. arXiv 2022, arXiv:2206.13431. [Google Scholar] [CrossRef]
  36. Esteves, C. Theoretical Aspects of Group Equivariant Neural Networks. arXiv 2020, arXiv:2004.05154. [Google Scholar]
  37. Murnane, D.; Thais, S.; Thete, A. Equivariant Graph Neural Networks for Charged Particle Tracking. arXiv 2023, arXiv:2304.05293. [Google Scholar]
  38. Worrall, D.E.; Garbin, S.J.; Turmukhambetov, D.; Brostow, G.J. Harmonic Networks: Deep Translation and Rotation Equivariance. In Proceedings of the 2017 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Los Alamitos, CA, USA, 21–26 July 2017; pp. 7168–7177. [Google Scholar] [CrossRef]
  39. Thiede, E.H.; Hy, T.S.; Kondor, R. The general theory of permutation equivariant neural networks and higher order graph variational encoders. arXiv 2020, arXiv:2004.03990. [Google Scholar]
  40. Mernyei, P.; Meichanetzidis, K.; Ceylan, İ.İ. Equivariant Quantum Graph Circuits. arXiv 2022, arXiv:2112.05261. [Google Scholar]
  41. Skolik, A.; Cattelan, M.; Yarkoni, S.; Bäck, T.; Dunjko, V. Equivariant quantum circuits for learning on weighted graphs. Npj Quantum Inf. 2023, 9, 47. [Google Scholar] [CrossRef]
  42. East, R.D.P.; Alonso-Linaje, G.; Park, C.Y. All you need is spin: SU(2) equivariant variational quantum circuits based on spin networks. arXiv 2023, arXiv:2309.07250. [Google Scholar]
  43. Zheng, H.; Kang, C.; Ravi, G.S.; Wang, H.; Setia, K.; Chong, F.T.; Liu, J. SnCQA: A hardware-efficient equivariant quantum convolutional circuit architecture. arXiv 2023, arXiv:2211.12711. [Google Scholar]
Figure 1. Distributions of the jet transverse momenta $p_T$, total momenta p, and energies E.
Figure 2. A visualization of graphically structured data (left) and a sample jet shown in the $(\phi, y)$ plane (right), with each particle color-coded by its transverse momentum $p_{T,\alpha}^{(i)}$.
Figure 3. Graph neural network (GNN, left) and equivariant graph neural network (EGNN, right) schematic diagrams.
Figure 4. The 8 × 8 matrix representations of the coupling and transverse Hamiltonians defined in Equation (7).
Figure 5. Quantum graph neural network (QGNN, left) and equivariant quantum graph neural network (EQGNN, right) schematic diagrams.
Figure 6. (a) GNN, (b) EGNN, (c) QGNN, and (d) EQGNN training history plots.
Figure 7. Model ROC curves (left) and AUC scores as a function of parameters (right).
Table 1. Metric comparison between the classical and quantum graph models 1.

Model | $|\Theta|$ | $N_h$ | P | Train ACC | Val ACC | Test AUC
------|-----------|-------|---|-----------|---------|---------
GNN   | 5122 | 10 | 5 | 74.25% | 74.80% | 63.36%
EGNN  | 5252 | 10 | 4 | 73.66% | 74.08% | 67.88%
QGNN  | 5156 | 8  | 6 | 74.00% | 73.28% | 61.43%
EQGNN | 5140 | 8  | 6 | 74.42% | 72.56% | 75.17%
1 The pink color represents the GNN results. The red color represents the EGNN results. The blue color represents the QGNN results. The cyan color represents the EQGNN results. This representation extends to Figure 7.