Article

A Non-Self-Referential Characterization of the Gram–Schmidt Process via Computational Induction

1 Department of Mathematical Sciences, College of Science, Mathematics and Technology, Wenzhou-Kean University, Wenzhou 325060, China
2 Department of Mathematical Sciences, College of Science, Mathematics and Technology, Kean University, 1000 Morris Avenue, Union, NJ 07083, USA
Mathematics 2025, 13(5), 768; https://doi.org/10.3390/math13050768
Submission received: 18 January 2025 / Revised: 19 February 2025 / Accepted: 25 February 2025 / Published: 26 February 2025
(This article belongs to the Special Issue Mathematics and Applications)

Abstract: The Gram–Schmidt process (GSP) plays an important role in algebra. It provides a theoretical and practical approach for generating an orthonormal basis, QR decomposition, unitary matrices, etc. It also facilitates applications in the fields of communication, machine learning, feature extraction, etc. The typical GSP is self-referential, while the non-self-referential GSP is based on the Gram determinant, which has exponential complexity. The motivation for this article is to find a way to convert a set of linearly independent vectors $\{u_j\}_{j=1}^{n}$ into a set of orthogonal vectors $\{v_j\}_{j=1}^{n}$ via a non-self-referential GSP (NsrGSP). The approach we use is to derive a method that utilizes the recursive property of the standard GSP to retrieve a NsrGSP. The individual orthogonal vectors we obtain have the form $v_k=\sum_{j=1}^{k}\beta_{[k\to j]}u_j$, and the collective orthogonal vectors, in matrix form, are $V_k=U_k(B_{\Delta k}^{+})^t$. This approach reduces the exponential computational complexity to a polynomial one, and it also has a neat representation. To this end, we also apply our approach to a classification problem based on real data. The experimental results show that our method is much more persuasive than other familiar methods.

1. Introduction

The Gram–Schmidt method plays an important role in both theoretical and practical aspects. It has been applied in constructing orthogonal elements in some algebraic systems [1], in signal processing [2], and in QR decomposition [3,4] (although there are other approaches to the QR problem [5]). In addition, it can also be applied to machine learning and feature extraction [6,7] via a modified Gram–Schmidt method [8]. However, one would like to have a non-self-referential form of the GSP; the typical one is a formal determinant form, whose computational complexity is exponentially high and whose form is not succinctly presented. There is a research gap in bridging the typical (self-referential) and the non-self-referential GSP. This motivates our research on finding a non-self-referential method that reduces the complexity and enhances the presentation. The novelty of this article lies in finding the internal recursive relation between the input vectors $U$ and the converted orthogonal vectors $V$, as well as a hypothesis of the non-self-referential form and a proof that the hypothesis is correct. Let $\mathbb{R}$ denote the set of real numbers. Let $(W,\circ)$ denote an inner product space, where the inner product $\circ: W\times W\to\mathbb{R}$ satisfies $u\circ v=v\circ u$, $u\circ u\ge 0$, $u\circ u=0\Leftrightarrow u=\mathbf{0}_W$, and $(\alpha u+\beta v)\circ w=\alpha\cdot(u\circ w)+\beta\cdot(v\circ w)$ for all $\alpha,\beta\in\mathbb{R}$ and all $u,v,w\in W$. Let $U=\{u_1,u_2,\ldots,u_m\}\subseteq W$ be a set of linearly independent vectors, i.e., $U$ is a basis of its span. Then, one associates, by the Gram–Schmidt method, the following:
  • $v_1=u_1$; $w_1=\frac{v_1}{\|v_1\|}$;
  • $v_2=u_2-(u_2\circ w_1)w_1$; $w_2=\frac{v_2}{\|v_2\|}$;
  • $v_3=u_3-(u_3\circ w_1)w_1-(u_3\circ w_2)w_2$; $w_3=\frac{v_3}{\|v_3\|}$;
  • $v_k=u_k-\sum_{i=1}^{k-1}(u_k\circ w_i)w_i$; $w_k=\frac{v_k}{\|v_k\|}$;
  • $v_{k+1}=u_{k+1}-\sum_{i=1}^{k}(u_{k+1}\circ w_i)w_i$; $w_{k+1}=\frac{v_{k+1}}{\|v_{k+1}\|}$;
  • $v_m=u_m-\sum_{j=1}^{m-1}(u_m\circ w_j)w_j$; $w_m=\frac{v_m}{\|v_m\|}$.
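The self-referential recursion above can be sketched in a few lines. The helper names `dot` and `classical_gsp` below are our own (the authors' implementation is in R), and the Euclidean inner product stands in for the abstract product $\circ$:

```python
def dot(x, y):
    """Euclidean inner product, standing in for the abstract product u ∘ v."""
    return sum(a * b for a, b in zip(x, y))

def classical_gsp(us):
    """Classical self-referential Gram-Schmidt: each v_k subtracts the
    projections of u_k onto all previously computed w_1..w_{k-1}."""
    vs, ws = [], []
    for u in us:
        v = list(u)
        for w in ws:
            c = dot(u, w)                      # u_k ∘ w_i
            v = [vi - c * wi for vi, wi in zip(v, w)]
        n = dot(v, v) ** 0.5
        vs.append(v)
        ws.append([vi / n for vi in v])        # w_k = v_k / ||v_k||
    return vs, ws
```

Note how each `v` is built from the previously produced `ws`: this feedback is exactly the self-reference that the rest of the paper removes.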
This can also be represented in matrix form:
$$[v_1\ v_2\ v_3\ \cdots\ v_m] = [u_1\ u_2\ u_3\ \cdots\ u_m] - [w_1\ w_2\ \cdots\ w_{m-1}]\begin{pmatrix} 0 & u_2\circ w_1 & u_3\circ w_1 & \cdots & u_m\circ w_1\\ 0 & 0 & u_3\circ w_2 & \cdots & u_m\circ w_2\\ \vdots & & & \ddots & \vdots\\ 0 & 0 & \cdots & 0 & u_m\circ w_{m-1} \end{pmatrix}.$$
This is the typical self-referential version (where $v_k$ depends explicitly on $\{v_j\}_{j=1}^{k-1}$) of the Gram–Schmidt process (GSP). For the non-self-referential version of the Gram–Schmidt process (see [9], Chapter 7), one often resorts to the Gram determinant in the following formula:
$$v_k = \mathrm{FDET}\begin{pmatrix} u_1\circ u_1 & u_1\circ u_2 & \cdots & u_1\circ u_k\\ u_2\circ u_1 & u_2\circ u_2 & \cdots & u_2\circ u_k\\ \vdots & \vdots & & \vdots\\ u_{k-1}\circ u_1 & u_{k-1}\circ u_2 & \cdots & u_{k-1}\circ u_k\\ u_1 & u_2 & \cdots & u_k \end{pmatrix}, \qquad (1)$$
where $\mathrm{FDET}$ stands for the formal determinant, defined by $v_k=\sum_{j=1}^{k}(-1)^{k+j}\det(U_j)u_j$, where $U_j=[u_p\circ u_q]_{1\le p\le k-1,\ 1\le q\le k,\ q\ne j}$ is a $(k-1)$-by-$(k-1)$ matrix and $\det(U_j)$ is the determinant of $U_j$. For example, suppose $u_1=(-1,2,3)^t$, $u_2=(2,-9,5)^t$, $u_3=(6,-3,2)^t$. Then, the normalized $v_1$ is $\frac{u_1}{\sqrt{u_1\circ u_1}}=\frac{1}{\sqrt{14}}(-1,2,3)^t$; the normalized $v_2$ is $\frac{(u_1\circ u_1)\cdot u_2-(u_1\circ u_2)\cdot u_1}{\|(u_1\circ u_1)\cdot u_2-(u_1\circ u_2)\cdot u_1\|}=\frac{1}{\sqrt{23^2+(-116)^2+85^2}}(23,-116,85)^t\approx(0.1579,-0.7965,0.5836)^t$; and the normalized $v_3$ is the normalization of the formal determinant
$$\begin{vmatrix} u_1\circ u_1 & u_1\circ u_2 & u_1\circ u_3\\ u_2\circ u_1 & u_2\circ u_2 & u_2\circ u_3\\ u_1 & u_2 & u_3 \end{vmatrix} = \begin{vmatrix} u_1\circ u_2 & u_1\circ u_3\\ u_2\circ u_2 & u_2\circ u_3 \end{vmatrix}u_1 - \begin{vmatrix} u_1\circ u_1 & u_1\circ u_3\\ u_2\circ u_1 & u_2\circ u_3 \end{vmatrix}u_2 + \begin{vmatrix} u_1\circ u_1 & u_1\circ u_2\\ u_2\circ u_1 & u_2\circ u_2 \end{vmatrix}u_3 = 415\cdot u_1 - 656\cdot u_2 + 1515\cdot u_3 = (7363,\,2189,\,995)^t,$$
i.e., the normalized $v_3$ is approximately $(0.9506, 0.2826, 0.1285)^t$. This computation, however, has exponential complexity. In order to reduce such complexity and to enhance comprehension, we kept track of all the indexes of the typical GSP and found the pattern for an inductive computation. Our approach, the NsrGSP (non-self-referential Gram–Schmidt process), shows (see Theorems 1 and 3 for more details) the following: $V_k=U_k(B_{\Delta k}^{+})^t$, where $V_k\equiv[v_1\ v_2\ \cdots\ v_k]$, $U_k\equiv[u_1\ u_2\ \cdots\ u_k]$, and
$$(B_{\Delta k}^{+})^t=\begin{pmatrix} B_{\Delta k-1}^{+} & 0_{(k-1)\times 1}\\ -\overline{U}_{k-1}(B_{\Delta k-1}^{+})^t\,\mathrm{diag}\big(1\oslash\big([U_{k-1}(B_{\Delta k-1}^{+})^t]\odot_c[U_{k-1}(B_{\Delta k-1}^{+})^t]\big)\big)\,B_{\Delta k-1}^{+} & 1 \end{pmatrix}^t,$$
where $B_{\Delta 1}^{+}=[1]$, $k\ge 2$, $\overline{U}_{k-1}\equiv[u_k\circ u_1\ \cdots\ u_k\circ u_{k-1}]$, and the operations $\odot_c$ (column-wise inner product), $\oslash$ (elementwise division), and $\mathrm{diag}$ are given in Definition 7.
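The formal-determinant computation can be transcribed literally; the naive Laplace expansion below makes the exponential cost visible. Function names are ours, and the test vectors reproduce the coefficients $415$, $-656$, $1515$ of the worked example:

```python
def dot(x, y):
    return sum(a * b for a, b in zip(x, y))

def det(m):
    """Naive Laplace expansion along the first row -- O(n!) terms,
    mirroring the exponential cost of the formal determinant."""
    if len(m) == 1:
        return m[0][0]
    return sum((-1) ** j * m[0][j] * det([row[:j] + row[j + 1:] for row in m[1:]])
               for j in range(len(m)))

def fdet_gsp(us, k):
    """v_k via the Gram (formal) determinant, expanded along the last row,
    whose entries are the vectors u_1..u_k themselves."""
    if k == 1:
        return list(us[0]), [1]
    gram = [[dot(us[p], us[q]) for q in range(k)] for p in range(k - 1)]
    coeffs = [(-1) ** (k - 1 + j) * det([row[:j] + row[j + 1:] for row in gram])
              for j in range(k)]
    return [sum(c * u[i] for c, u in zip(coeffs, us)) for i in range(len(us[0]))], coeffs
```

Each call to `fdet_gsp` rebuilds and re-expands $k$ minors from scratch, which is precisely the redundancy the NsrGSP recursion removes.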

2. Theories and Methods

To show that $v_i\circ v_j=0$ for all $i\ne j$, it suffices to show $v_i\circ w_j=0$.
Proposition 1. 
For all $1\le i\ne j\le m$, $v_i\circ v_j=0$.
Proof. 
If $j=2$, then $v_1\circ v_2=0$ by construction. Assume $v_i\circ v_j=0$ for all $1\le i\ne j\le k$. Now, let $i,j$ be arbitrary such that $1\le i\ne j\le k+1$. If $i,j<k+1$, then the claim holds by the assumption. Now, let $i=k+1$ and $j<k+1$. Then, $v_{k+1}\circ w_j=u_{k+1}\circ w_j-\sum_{i=1}^{k}\big[(u_{k+1}\circ w_i)(w_i\circ w_j)\big]=u_{k+1}\circ w_j-u_{k+1}\circ w_j=0$, since $w_i\circ w_j=0$ for all $1\le i\ne j\le k$ by the induction assumption and $w_j\circ w_j=1$.    □
Since $v_k=u_k-\sum_{i=1}^{k-1}(u_k\circ w_i)w_i=u_k+\sum_{i=1}^{k-1}\big[-\frac{1}{\|v_i\|^2}(u_k\circ v_i)\big]v_i$, we can focus on finding the pattern of the latter part. An official definition for this part is given in Definition 1. Thus, it can be written as
$$v_k=u_k+\sum_{i=1}^{k-1}\beta_{ki}v_i. \qquad (2)$$
Let $[p\to q]$ denote the set of all connected paths from $p$ to $q$ whose nodes are natural numbers and whose edges are strictly decreasing, where $p,q\in\mathbb{N}$ and $p>q$. For example, $[4\to 1]=\{PATH_1=\{(4,1)\},\ PATH_2=\{(4,2),(2,1)\},\ PATH_3=\{(4,3),(3,1)\},\ PATH_4=\{(4,3),(3,2),(2,1)\}\}$. Indeed, $[p\to q]$ is unique, which can be verified by the following mathematical induction: suppose $[p\to q]$ is unique; then $[p+1\to q]$ consists of two parts: all the paths from $p+1$ to $q$ that do not go through the node $p$, and all the paths from $p+1$ to $q$ that go via the node $p$. Since the two sets are disjoint and each is unique by the induction assumption, $[p+1\to q]$ is unique, and the size satisfies $|[p+1\to q]|=2\cdot|[p\to q]|$. Let $\beta_{[4\to 1]}$ denote the sum of the $[4\to 1]$-indexed $\beta$ multiplicative values, i.e.,
$$\beta_{[4\to 1]}=\beta_{41}+\beta_{42}\cdot\beta_{21}+\beta_{43}\cdot\beta_{31}+\beta_{43}\cdot\beta_{32}\cdot\beta_{21}.$$
This definition is generalized in Definition 2.
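The path sets $[p\to q]$ are easy to enumerate directly; `paths` below is our own helper name. For $[4\to 1]$ it returns exactly the four paths listed above, and the count doubles at each increment of $p$:

```python
def paths(p, q):
    """All strictly decreasing paths from node p to node q,
    each path given as its list of edges (a, b) with a > b."""
    if p == q:
        return [[]]                # the empty path from a node to itself
    result = []
    for j in range(q, p):          # first edge (p, j) for every q <= j < p
        for tail in paths(j, q):
            result.append([(p, j)] + tail)
    return result
```

The doubling $|[p+1\to q]|=2\cdot|[p\to q]|$, and hence $|[n\to q]|=2^{n-q-1}$, can be checked by counting the returned lists.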
Definition 1. 
(Projection value)
$$\beta_{k,i}:=-\frac{1}{\|v_i\|^2}\cdot(u_k\circ v_i).$$
If the meaning is clear from the context, then $\beta_{k,i}$ is written as $\beta_{ki}$.
Definition 2. 
(β multiplicative value) The β multiplicative value from node k to node i is defined by
$$\beta_{[k\to i]}:=\sum_{j=i}^{k-1}\big(\beta_{kj}\cdot\beta_{[j\to i]}\big)=[\beta_{k,i},\ \beta_{k,i+1},\ \ldots,\ \beta_{k,k-1}]\begin{pmatrix}\beta_{[i\to i]}\\ \beta_{[(i+1)\to i]}\\ \beta_{[(i+2)\to i]}\\ \vdots\\ \beta_{[(k-1)\to i]}\end{pmatrix}, \quad\text{if }k>i,$$
and $\beta_{[k\to i]}:=1$ if $k=i$.
A tree-like depiction of these two definitions is shown in Figure 1 and Figure 2.
Definition 3. 
(β multiplicative vector) The vector of β multiplicative values from node k to nodes $\{1,2,\ldots,k\}$ is defined by $B_k^{+}:=[\beta_{[k\to 1]},\ \beta_{[k\to 2]},\ \beta_{[k\to 3]},\ \ldots,\ \beta_{[k\to(k-1)]},\ \beta_{[k\to k]}]$.
Definition 4. 
(Projection vector) $B_k:=[\beta_{k,1},\ \beta_{k,2},\ \beta_{k,3},\ \ldots,\ \beta_{k,k-1},\ 1]$.
Lemma 1. 
$$B_k^{+}=B_k\begin{pmatrix} B_{\Delta k-1}^{+} & 0_{(k-1)\times 1}\\ 0_{1\times(k-1)} & 1\end{pmatrix}.$$
Proof. 
From Definitions 2 and 3, we have
$$(B_k^{+})^t=\begin{pmatrix}\beta_{[1\to1]} & \beta_{[2\to1]} & \cdots & \beta_{[(k-1)\to1]} & 0\\ 0 & \beta_{[2\to2]} & \cdots & \beta_{[(k-1)\to2]} & 0\\ \vdots & & \ddots & \vdots & \vdots\\ 0 & 0 & \cdots & \beta_{[(k-1)\to(k-1)]} & 0\\ 0 & 0 & \cdots & 0 & \beta_{[k\to k]}\end{pmatrix}\begin{pmatrix}\beta_{k1}\\ \beta_{k2}\\ \vdots\\ \beta_{k,k-1}\\ 1\end{pmatrix}=\begin{pmatrix}(B_{\Delta k-1}^{+})^t & 0_{(k-1)\times 1}\\ 0_{1\times(k-1)} & 1\end{pmatrix}(B_k)^t.\qquad\square$$
From this definition, we also have the following matrix representation:
$$\begin{pmatrix}\beta_{[i\to i]}\\ \beta_{[(i+1)\to i]}\\ \beta_{[(i+2)\to i]}\\ \vdots\\ \beta_{[(k-1)\to i]}\\ \beta_{[k\to i]}\end{pmatrix}=\begin{pmatrix}1 & 0 & 0 & \cdots & 0\\ \beta_{i+1,i} & 0 & 0 & \cdots & 0\\ \beta_{i+2,i} & \beta_{i+2,i+1} & 0 & \cdots & 0\\ \vdots & & \ddots & & \vdots\\ \beta_{k-1,i} & \beta_{k-1,i+1} & \cdots & \beta_{k-1,k-2} & 0\\ \beta_{k,i} & \beta_{k,i+1} & \cdots & \beta_{k,k-2} & \beta_{k,k-1}\end{pmatrix}\begin{pmatrix}\beta_{[i\to i]}\\ \beta_{[(i+1)\to i]}\\ \beta_{[(i+2)\to i]}\\ \vdots\\ \beta_{[(k-2)\to i]}\\ \beta_{[(k-1)\to i]}\end{pmatrix}.$$
Let $a=(a(1),a(2),\ldots,a(n))$ be a vector of length $n$, and let $\beta_a=(\beta_{n,a(1)},\beta_{n,a(2)},\ldots,\beta_{n,a(n)})$. Let $\mathbf{1}=(1)$, $\mathbf{2}=(1,\ 1{\to}1)$, $\mathbf{3}=(1,\ 1{\to}1,\ 2{\to}1,\ 2{\to}2)$, $\ldots$, $\mathbf{k}=(1,\ 1{\to}1,\ 2{\to}1,\ 2{\to}2,\ 3{\to}1,\ 3{\to}2,\ 3{\to}3,\ \ldots,\ h{\to}1,\ h{\to}2,\ \ldots,\ \ldots,\ k{\to}1,\ k{\to}2,\ \ldots)$. Let us investigate some terms of $v_k$ and then find their general non-self-referential expressions.
  • $k=1$: $v_1=u_1=\beta_{[1\to1]}u_1$;
  • $k=2$: $v_2=u_2+\beta_{21}v_1=\beta_{[2\to1]}u_1+\beta_{[2\to2]}u_2$;
  • $k=3$: $v_3=u_3+\beta_{31}v_1+\beta_{32}v_2=u_3+\beta_{31}v_1+\beta_{32}[u_2+\beta_{21}v_1]=[\beta_{31}+\beta_{32}\beta_{21}]u_1+[\beta_{32}]u_2+u_3=\beta_{[3\to1]}u_1+\beta_{[3\to2]}u_2+\beta_{[3\to3]}u_3$;
  • $k=4$: $v_4=u_4+\beta_{41}v_1+\beta_{42}v_2+\beta_{43}v_3=u_4+\beta_{41}v_1+\beta_{42}[u_2+\beta_{21}v_1]+\beta_{43}[(\beta_{31}+\beta_{32}\beta_{21})u_1+\beta_{32}u_2+u_3]=[\beta_{41}+\beta_{42}\beta_{21}+\beta_{43}\beta_{31}+\beta_{43}\beta_{32}\cdot\beta_{21}]u_1+[\beta_{42}+\beta_{43}\beta_{32}]u_2+[\beta_{43}]u_3+u_4=\beta_{[4\to1]}u_1+\beta_{[4\to2]}u_2+\beta_{[4\to3]}u_3+\beta_{[4\to4]}u_4$.
This process can be continued indefinitely. The main purpose of this article is to find the pattern for arbitrary terms and put them into succinct forms.
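The expansions above can be checked numerically: run the classical process to obtain the $v_i$, define the $\beta$'s per Definition 1, and confirm that the recursive β multiplicative values reproduce $v_4$. All function names are our own, and the Euclidean inner product plays the role of $\circ$:

```python
def dot(x, y):
    return sum(a * b for a, b in zip(x, y))

def gsp(us):
    """Classical GSP with unnormalized v's (for reference values only)."""
    vs = []
    for u in us:
        v = list(u)
        for w in vs:
            v = [vi - dot(u, w) / dot(w, w) * wi for vi, wi in zip(v, w)]
        vs.append(v)
    return vs

def beta(us, vs, k, i):
    """Projection value beta_{k,i} = -(u_k ∘ v_i)/||v_i||^2 (1-based indices)."""
    return -dot(us[k - 1], vs[i - 1]) / dot(vs[i - 1], vs[i - 1])

def beta_path(us, vs, k, i):
    """beta_[k->i]: the recursion of Definition 2 (sum over decreasing paths)."""
    if k == i:
        return 1.0
    return sum(beta(us, vs, k, j) * beta_path(us, vs, j, i) for j in range(i, k))
```

Reconstructing $v_4=\sum_{j=1}^{4}\beta_{[4\to j]}u_j$ from these values and comparing it with the classical $v_4$ confirms the pattern.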
Definition 5 
(Multiplicative path). Suppose the $j$'th path is denoted by $PATH_j=\{(\alpha_1^j,\alpha_2^j),(\alpha_2^j,\alpha_3^j),\ldots,(\alpha_{k-2}^j,\alpha_{k-1}^j),(\alpha_{k-1}^j,\alpha_k^j)\}$. Define the beta value associated with the $j$'th path by $\beta(PATH_j)\equiv\beta_{PATH_j}:=\prod_{i=1}^{k-1}\beta_{\alpha_i^j,\alpha_{i+1}^j}$.
Definition 6. 
Suppose the set of all paths is denoted by $PATH=\{PATH_j\}_{j=1}^{q}$. Define its associated beta value by $\beta(PATH)\equiv\beta_{PATH}:=\sum_{j=1}^{q}\beta_{PATH_j}$.
Next, we extend these cases and form the following claims and lemmas:
Proposition 2. 
$[(n+1)\to i]=\{\{(n+1,i)\}\}\cup\bigcup_{j=i+1}^{n}\big\{\{(n+1,j)\}\cup P:\ P\in[j\to i]\big\}$.
Proof. 
The set of all the paths from node $n+1$ to node $i$ consists of the one-edge path $(n+1,i)$ together with, for each intermediate node $j\in\{i+1,\ldots,n\}$, the paths that first move from $n+1$ to $j$ and then follow an existing path from $j$ to $i$.    □
Corollary 1. 
The number of paths from node $n$ to node $i$ is $|[n\to i]|=2^{n-i-1}$ for all $n>i$, i.e., there are $2^{n-i-1}$ paths from node $n$ to node $i$.
Lemma 2. 
(Non-self-referential representation) $v_k=\sum_{j=1}^{k}\beta_{[k\to j]}u_j\equiv U_k(B_k^{+})^t$, where $U_k=[u_1\ u_2\ \cdots\ u_k]$.
Proof. 
Let us show this by mathematical induction. Assume $v_p=\sum_{j=1}^{p}\beta_{[p\to j]}u_j$ is true for all $1\le p\le k-1$. By Equation (2) and the preceding assumption, one has
$$v_k=u_k+\sum_{j=1}^{k-1}\beta_{kj}v_j=u_k+\sum_{j=1}^{k-1}\beta_{kj}\Big[\sum_{q=1}^{j}\beta_{[j\to q]}u_q\Big]=u_k+\sum_{j=1}^{k-1}(\beta_{kj}\cdot\beta_{[j\to 1]})u_1+\sum_{j=2}^{k-1}(\beta_{kj}\cdot\beta_{[j\to 2]})u_2+\cdots+\sum_{j=k-1}^{k-1}(\beta_{kj}\cdot\beta_{[j\to(k-1)]})u_{k-1}.$$
Then, the result follows immediately from Definition 2.    □
Definition 7. 
1. $B_{\Delta k}^{+}:=\begin{pmatrix}\beta_{[1\to1]} & 0 & \cdots & 0 & 0\\ \beta_{[2\to1]} & \beta_{[2\to2]} & \cdots & 0 & 0\\ \vdots & & \ddots & & \vdots\\ \beta_{[(k-1)\to1]} & \beta_{[(k-1)\to2]} & \cdots & \beta_{[(k-1)\to(k-1)]} & 0\\ \beta_{[k\to1]} & \beta_{[k\to2]} & \cdots & \beta_{[k\to(k-1)]} & \beta_{[k\to k]}\end{pmatrix}$;
2. $M_1\odot_c M_2:=[M_1[\cdot,1]\circ M_2[\cdot,1],\ \ldots,\ M_1[\cdot,q-1]\circ M_2[\cdot,q-1],\ M_1[\cdot,q]\circ M_2[\cdot,q]]$, where $M_1$ and $M_2$ are both $p$-by-$q$ matrices and where $M_i[\cdot,j]$ denotes the $j$'th column of matrix $M_i$;
3. $a\oslash M:=[a/M_{ij}]_{i,j=1}^{q}$, where $a\in\mathbb{R}$ and the square matrix $M=[M_{ij}]_{i,j=1}^{q}$ has all non-zero elements;
4. For any vector $q=(q_1,q_2,\ldots,q_r)$, we use $\mathrm{diag}(q)$ to denote $\begin{pmatrix}q_1 & 0 & \cdots & 0\\ 0 & q_2 & \cdots & 0\\ \vdots & & \ddots & \vdots\\ 0 & 0 & \cdots & q_r\end{pmatrix}$.
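These three operations can be realized in a few lines (our own helper names; the elementwise division is shown acting on a vector of column norms, which is how the paper applies it):

```python
def col_dot(m1, m2):
    """M1 (column-wise inner product) M2: one scalar per column pair."""
    p, q = len(m1), len(m1[0])
    return [sum(m1[r][c] * m2[r][c] for r in range(p)) for c in range(q)]

def scalar_over(a, xs):
    """a divided elementwise by the (non-zero) entries of xs."""
    return [a / x for x in xs]

def diag(qs):
    """Square matrix with the entries of qs on the diagonal."""
    r = len(qs)
    return [[qs[i] if i == j else 0.0 for j in range(r)] for i in range(r)]
```

Chaining them, `diag(scalar_over(1.0, col_dot(V, V)))` produces the diagonal matrix of reciprocal squared column norms used in Lemma 5.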
Theorem 1. 
$$V_k\equiv[v_1\ v_2\ \cdots\ v_{k-1}\ v_k]=\Big[\textstyle\sum_{j=1}^{1}\beta_{[1\to j]}u_j\ \ \sum_{j=1}^{2}\beta_{[2\to j]}u_j\ \ \cdots\ \ \sum_{j=1}^{k-1}\beta_{[(k-1)\to j]}u_j\ \ \sum_{j=1}^{k}\beta_{[k\to j]}u_j\Big]=[u_1\ u_2\ \cdots\ u_{k-1}\ u_k]\begin{pmatrix}\beta_{[1\to1]} & \beta_{[2\to1]} & \cdots & \beta_{[(k-1)\to1]} & \beta_{[k\to1]}\\ 0 & \beta_{[2\to2]} & \cdots & \beta_{[(k-1)\to2]} & \beta_{[k\to2]}\\ \vdots & & \ddots & & \vdots\\ 0 & 0 & \cdots & \beta_{[(k-1)\to(k-1)]} & \beta_{[k\to(k-1)]}\\ 0 & 0 & \cdots & 0 & \beta_{[k\to k]}\end{pmatrix}\equiv U_k(B_{\Delta k}^{+})^t.$$
Proof. 
It follows immediately from Lemma 2 and Definition 7.    □
Lemma 3. 
The $k$-by-$k$ matrix $B_{\Delta k}^{+}$ satisfies $B_{\Delta k}^{+}=\begin{pmatrix}[\,B_{\Delta k-1}^{+}\ \ 0_{(k-1)\times1}\,]\\ B_k^{+}\end{pmatrix}$, i.e., its first $k-1$ rows are $[B_{\Delta k-1}^{+}\ \ 0_{(k-1)\times1}]$ and its last row is $B_k^{+}$.
For example, $[5\to1]$ consists of $\{(5,1)\}$ together with the paths obtained by prepending $(5,2)$, $(5,3)$, or $(5,4)$ to the paths of $[2\to1]$, $[3\to1]$, and $[4\to1]$, respectively: $[5\to1]=\{PATH_1=\{(5,1)\},\ PATH_2=\{(5,2),(2,1)\},\ PATH_3=\{(5,3),(3,1)\},\ PATH_4=\{(5,3),(3,2),(2,1)\},\ PATH_5=\{(5,4),(4,1)\},\ PATH_6=\{(5,4),(4,3),(3,1)\},\ PATH_7=\{(5,4),(4,3),(3,2),(2,1)\},\ PATH_8=\{(5,4),(4,2),(2,1)\}\}$. Moreover, $\beta_{[5\to1]}=(\beta_{51},\beta_{52},\beta_{53},\beta_{54})\cdot(\beta_{[1\to1]},\beta_{[2\to1]},\beta_{[3\to1]},\beta_{[4\to1]})^t$. The tree structure is depicted in Figure 1.
Proposition 3. 
$$\beta_{ki}=-\frac{1}{\|v_i\|^2}\cdot\big[(\beta_{[i\to1]},\ \beta_{[i\to2]},\ \ldots,\ \beta_{[i\to(i-1)]},\ \beta_{[i\to i]})\cdot(u_k\circ u_1,\ u_k\circ u_2,\ \ldots,\ u_k\circ u_{i-1},\ u_k\circ u_i)^t\big]=-\frac{1}{\|v_i\|^2}\cdot\Big[\sum_{q=1}^{i}\beta_{[i\to q]}(u_k\circ u_q)\Big].$$
Proof. 
By Definition 1 and Lemma 2, we have $\beta_{ki}=-\frac{1}{\|v_i\|^2}\cdot(u_k\circ v_i)=-\frac{1}{\|v_i\|^2}\cdot u_k\circ\big[\sum_{q=1}^{i}\beta_{[i\to q]}u_q\big]=-\frac{1}{\|v_i\|^2}\cdot\big[\sum_{q=1}^{i}\beta_{[i\to q]}(u_k\circ u_q)\big]$.    □
The visual depiction of this proposition is also presented in the top-left figure in Figure 2.
Corollary 2. 
1. $\beta_{k1}=-\frac{1}{\|v_1\|^2}\cdot\beta_{[1\to1]}(u_k\circ u_1)=-\frac{1}{\|v_1\|^2}\cdot(u_k\circ u_1)$;
2. $\beta_{k2}=-\frac{1}{\|v_2\|^2}\cdot[\beta_{[2\to1]}\cdot(u_k\circ u_1)+\beta_{[2\to2]}\cdot(u_k\circ u_2)]$;
3. $\beta_{k,k-1}=-\frac{1}{\|v_{k-1}\|^2}\cdot\sum_{i=1}^{k-1}[\beta_{[(k-1)\to i]}\cdot(u_k\circ u_i)]$;
4. $\beta_{k+1,k}=-\frac{1}{\|v_k\|^2}\cdot\sum_{i=1}^{k}[\beta_{[k\to i]}\cdot(u_{k+1}\circ u_i)]$.
The interaction between them could be seen in Figure 3.
Lemma 4. 
$$B_k=-\,[\,u_k\circ u_1,\ u_k\circ u_2,\ \ldots,\ u_k\circ u_{k-1},\ -1\,]\begin{pmatrix}(B_{\Delta k-1}^{+})^t & 0_{(k-1)\times1}\\ 0_{1\times(k-1)} & \beta_{[k\to k]}\end{pmatrix}N(V_{k-1})\equiv-\,[\,\overline{U}_{k-1}\ \ {-1}\,]\begin{pmatrix}(B_{\Delta k-1}^{+})^t & 0_{(k-1)\times1}\\ 0_{1\times(k-1)} & \beta_{[k\to k]}\end{pmatrix}N(V_{k-1}),$$
where $\overline{U}_{k-1}=[u_k\circ u_1,\ u_k\circ u_2,\ \ldots,\ u_k\circ u_{k-1}]$, $N(V_{k-1})=\begin{pmatrix}V_{k-1}^{\Delta} & 0\\ 0 & 1\end{pmatrix}$, and $V_{k-1}^{\Delta}=\begin{pmatrix}\frac{1}{\|v_1\|^2} & 0 & \cdots & 0\\ 0 & \frac{1}{\|v_2\|^2} & \cdots & 0\\ \vdots & & \ddots & \vdots\\ 0 & 0 & \cdots & \frac{1}{\|v_{k-1}\|^2}\end{pmatrix}$.
Proof. 
In general, one has the following:
$$[\beta_{k+1,i},\ \beta_{k+1,i+1},\ \ldots,\ \beta_{k+1,k}]=\Big[-\frac{1}{\|v_i\|^2}\sum_{p=1}^{i}\beta_{[i\to p]}(u_{k+1}\circ u_p),\ -\frac{1}{\|v_{i+1}\|^2}\sum_{p=1}^{i+1}\beta_{[(i+1)\to p]}(u_{k+1}\circ u_p),\ \ldots,\ -\frac{1}{\|v_k\|^2}\sum_{p=1}^{k}\beta_{[k\to p]}(u_{k+1}\circ u_p)\Big]=-\Big[\sum_{p=1}^{i}\beta_{[i\to p]}(u_{k+1}\circ u_p),\ \ldots,\ \sum_{p=1}^{k}\beta_{[k\to p]}(u_{k+1}\circ u_p)\Big]\,\mathrm{diag}\Big(\frac{1}{\|v_i\|^2},\ \frac{1}{\|v_{i+1}\|^2},\ \ldots,\ \frac{1}{\|v_k\|^2}\Big).$$
   □
Lemma 5. 
$$V_{k-1}^{\Delta}=\mathrm{diag}\Big(1\oslash\big([U_{k-1}(B_{\Delta k-1}^{+})^t]\odot_c[U_{k-1}(B_{\Delta k-1}^{+})^t]\big)\Big).$$
Proof. 
From the definition of $V_{k-1}^{\Delta}$ and Definition 7, one has
$$V_{k-1}^{\Delta}=\begin{pmatrix}\frac{1}{\|v_1\|^2} & 0 & \cdots & 0\\ 0 & \frac{1}{\|v_2\|^2} & \cdots & 0\\ \vdots & & \ddots & \vdots\\ 0 & 0 & \cdots & \frac{1}{\|v_{k-1}\|^2}\end{pmatrix}=\mathrm{diag}\big(1\oslash(V_{k-1}\odot_c V_{k-1})\big).$$
Then, by Theorem 1 ($V_{k-1}=U_{k-1}(B_{\Delta k-1}^{+})^t$), the result follows immediately.    □
Theorem 2. 
$$B_{\Delta k}^{+}=\begin{pmatrix}B_{\Delta k-1}^{+} & 0_{(k-1)\times1}\\ -\overline{U}_{k-1}(B_{\Delta k-1}^{+})^t\,\mathrm{diag}\Big(1\oslash\big([U_{k-1}(B_{\Delta k-1}^{+})^t]\odot_c[U_{k-1}(B_{\Delta k-1}^{+})^t]\big)\Big)\,B_{\Delta k-1}^{+} & 1\end{pmatrix},$$
where $\overline{U}_{k-1}=[u_k\circ u_1,\ \ldots,\ u_k\circ u_{k-1}]$.
Proof. 
By Lemmas 1, 3, 4 and 5, we have the following:
$$B_{\Delta k}^{+}=\begin{pmatrix}[\,B_{\Delta k-1}^{+}\ \ 0_{(k-1)\times1}\,]\\ B_k^{+}\end{pmatrix}=\begin{pmatrix}[\,B_{\Delta k-1}^{+}\ \ 0_{(k-1)\times1}\,]\\ B_k\begin{pmatrix}B_{\Delta k-1}^{+} & 0_{(k-1)\times1}\\ 0_{1\times(k-1)} & 1\end{pmatrix}\end{pmatrix}=\begin{pmatrix}[\,B_{\Delta k-1}^{+}\ \ 0_{(k-1)\times1}\,]\\ -[\,\overline{U}_{k-1}\ \ {-1}\,]\begin{pmatrix}(B_{\Delta k-1}^{+})^t & 0\\ 0 & 1\end{pmatrix}\begin{pmatrix}V_{k-1}^{\Delta} & 0\\ 0 & 1\end{pmatrix}\begin{pmatrix}B_{\Delta k-1}^{+} & 0\\ 0 & 1\end{pmatrix}\end{pmatrix}=\begin{pmatrix}B_{\Delta k-1}^{+} & 0_{(k-1)\times1}\\ -\overline{U}_{k-1}(B_{\Delta k-1}^{+})^t\,V_{k-1}^{\Delta}\,B_{\Delta k-1}^{+} & 1\end{pmatrix}=\begin{pmatrix}B_{\Delta k-1}^{+} & 0_{(k-1)\times1}\\ -\overline{U}_{k-1}(B_{\Delta k-1}^{+})^t\,\mathrm{diag}\big(1\oslash([U_{k-1}(B_{\Delta k-1}^{+})^t]\odot_c[U_{k-1}(B_{\Delta k-1}^{+})^t])\big)\,B_{\Delta k-1}^{+} & 1\end{pmatrix}.\qquad\square$$
Observe that $B_{\Delta k}^{+}$ is a $k$-by-$k$ square matrix.
Theorem 3. 
(Non-self-referential GSP recursion) The set of GSP orthogonal vectors is calculated by
$$V_k=U_k\begin{pmatrix}B_{\Delta k-1}^{+} & 0_{(k-1)\times1}\\ -\overline{U}_{k-1}(B_{\Delta k-1}^{+})^t\,\mathrm{diag}\Big(1\oslash\big([U_{k-1}(B_{\Delta k-1}^{+})^t]\odot_c[U_{k-1}(B_{\Delta k-1}^{+})^t]\big)\Big)\,B_{\Delta k-1}^{+} & 1\end{pmatrix}^t,$$
where $k\ge2$, $B_{\Delta1}^{+}=[1]$, and $\overline{U}_{k-1}=[u_k\circ u_1,\ \ldots,\ u_k\circ u_{k-1}]$.
Proof. 
This follows immediately from Theorems 1 and 2.    □
The inductive steps are illustrated in Figure 4.
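A compact sketch of this recursion follows (our own names; plain Python lists rather than the authors' R implementation). The norms $\|v_j\|^2$ are obtained from $V_{k-1}=U_{k-1}(B_{\Delta k-1}^{+})^t$, so no previously computed $v_j$ is ever referenced in the update itself, only inner products of the original inputs:

```python
def dot(x, y):
    return sum(a * b for a, b in zip(x, y))

def nsr_gsp(us):
    """Non-self-referential GSP sketch: grow B_{Δk}^+ one row at a time
    from inner products of the ORIGINAL vectors u_i, then return the
    columns of V = U (B_{Δk}^+)^t."""
    n = len(us)
    B = [[1.0]]                                          # B_{Δ1}^+ = [1]
    for k in range(2, n + 1):
        # V_{k-1} = U_{k-1} (B_{Δk-1}^+)^t, needed only for ||v_j||^2
        vs = [[sum(B[j][i] * us[i][c] for i in range(k - 1))
               for c in range(len(us[0]))] for j in range(k - 1)]
        ubar = [dot(us[k - 1], us[i]) for i in range(k - 1)]   # u_k ∘ u_i
        t = [dot(ubar, B[j]) / dot(vs[j], vs[j]) for j in range(k - 1)]
        new_row = [-sum(t[j] * B[j][i] for j in range(k - 1)) for i in range(k - 1)]
        B = [row + [0.0] for row in B] + [new_row + [1.0]]     # Lemma 3 stacking
    return [[sum(B[j][i] * us[i][c] for i in range(n))
             for c in range(len(us[0]))] for j in range(n)]
```

Each step costs a handful of $O(k)$-length matrix-vector products, which is the polynomial complexity claimed for the NsrGSP.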
Example 1. 
Suppose $[u_1\ u_2\ u_3\ u_4\ u_5]=\begin{pmatrix}2 & 7 & 10 & 2 & -4\\ 5 & 1 & 0 & -5 & 3\\ 1 & 1 & 3 & -1 & 11\\ 0 & 2 & 2 & 7 & 0\\ 4 & 4 & -1 & 1 & 6\end{pmatrix}$.
Then, $\beta_{1,1}=\beta_{[1\to1]}=1$ and $v_1=u_1$. The other terms are found in the following inductive steps:
1. $k=2$: $[\beta_{2,1},\ \beta_{2,2}]=[-\frac{u_2\circ v_1}{v_1\circ v_1},\ 1]=[-0.783,\ 1]$; $[\beta_{[2\to1]},\ \beta_{[2\to2]}]=[\beta_{2,1},\ \beta_{2,2}]=[-0.783,\ 1]$; $v_2=\beta_{[2\to1]}\cdot u_1+\beta_{[2\to2]}\cdot u_2=(5.43,\ -2.91,\ 0.22,\ 2.00,\ 0.87)^t$;
$$V_2=[v_1\ v_2]=[u_1\ u_2](B_{\Delta2}^{+})^t=\begin{pmatrix}2 & 7\\ 5 & 1\\ 1 & 1\\ 0 & 2\\ 4 & 4\end{pmatrix}\begin{pmatrix}1 & -0.783\\ 0 & 1\end{pmatrix}=\begin{pmatrix}2 & 5.43\\ 5 & -2.91\\ 1 & 0.22\\ 0 & 2.00\\ 4 & 0.87\end{pmatrix};$$
2. $k=3$: $[\beta_{3,1},\ \beta_{3,2},\ \beta_{3,3}]=[-\frac{u_3\circ v_1}{v_1\circ v_1},\ -\frac{u_3\circ v_2}{v_2\circ v_2},\ 1]=[-0.41,\ -1.36,\ 1]$; $[\beta_{[3\to1]},\ \beta_{[3\to2]},\ \beta_{[3\to3]}]=[\beta_{31}\cdot\beta_{[1\to1]}+\beta_{32}\cdot\beta_{[2\to1]},\ \beta_{32},\ 1]=[0.65,\ -1.36,\ 1]$; $v_3=\beta_{[3\to1]}\cdot u_1+\beta_{[3\to2]}\cdot u_2+\beta_{[3\to3]}\cdot u_3=(1.80,\ 1.89,\ 2.29,\ -0.71,\ -3.83)^t$;
$$V_3=[v_1\ v_2\ v_3]=[u_1\ u_2\ u_3](B_{\Delta3}^{+})^t=[u_1\ u_2\ u_3]\begin{pmatrix}1 & -0.783 & 0.65\\ 0 & 1 & -1.36\\ 0 & 0 & 1\end{pmatrix};$$
3. $k=4$: $[\beta_{4,1},\ \beta_{4,2},\ \beta_{4,3},\ \beta_{4,4}]=[-\frac{u_4\circ v_1}{v_1\circ v_1},\ -\frac{u_4\circ v_2}{v_2\circ v_2},\ -\frac{u_4\circ v_3}{v_3\circ v_3},\ 1]=[0.39,\ -0.94,\ 0.62,\ 1]$; $[\beta_{[4\to1]},\ \beta_{[4\to2]},\ \beta_{[4\to3]},\ \beta_{[4\to4]}]=[1.53,\ -1.78,\ 0.62,\ 1]$; $v_4=\sum_{j=1}^{4}\beta_{[4\to j]}u_j=(-1.18,\ 0.86,\ 0.62,\ 4.68,\ -0.64)^t$;
4. $k=5$: $[\beta_{5,1},\ \beta_{5,2},\ \beta_{5,3},\ \beta_{5,4},\ \beta_{5,5}]=[-0.91,\ 0.53,\ -0.03,\ -0.41,\ 1]$; $[\beta_{[5\to1]},\ \beta_{[5\to2]},\ \beta_{[5\to3]},\ \beta_{[5\to4]},\ \beta_{[5\to5]}]=[-1.98,\ 1.31,\ -0.28,\ -0.41,\ 1]$; $v_5=\sum_{j=1}^{5}\beta_{[5\to j]}u_j=(-2.48,\ -3.52,\ 9.89,\ -0.85,\ 3.17)^t$.
Observe that $v_i\circ v_j=0$ for all $1\le i\ne j\le5$.

3. Application One

Since the obtained orthogonal bases $V$ depend completely on the order of the given input vectors $U$, we could reduce such dependence. Let $\mathbf{U}^*$ denote the set of all permutations of the unit vectors, i.e., $\mathbf{U}^*=\{U_j^*\}_{j=1}^{m!}$, where each $U_j^*=[u_{j_1}^*\ u_{j_2}^*\ \cdots\ u_{j_m}^*]\equiv P_j(U^*)$ denotes the $j$'th permutation of $U^*$ and each $u_k^*$ denotes the unit vector of $u_k$. Let $\mathbf{V}^*$ denote the set of all corresponding orthogonal bases with respect to $\mathbf{U}^*$, i.e., $\mathbf{V}^*=\{V_i^*\}_{i=1}^{m!}$, where $V_i^*=[v_{i_1}^*\ v_{i_2}^*\ \cdots\ v_{i_m}^*]$ denotes the basis obtained from $U_i^*$, i.e., $V_i^*=GSP(P_i(U^*))$. Let us represent $V_i^*$ and $U_j^*$ by tensor products, i.e., $V_i^*\equiv v_{i_1}^*\otimes v_{i_2}^*\otimes\cdots\otimes v_{i_m}^*$ and $U_j^*\equiv u_{j_1}^*\otimes u_{j_2}^*\otimes\cdots\otimes u_{j_m}^*$. We could set up criteria to locate some of the optimal bases among $\mathbf{V}^*$, such as, for example, a natural one based on distances to the unordered $\mathbf{U}^*$. Thus, the candidate basis is
$$V_k^*=\arg\min_{V_i^*\in\mathbf{V}^*}\Big\{\sum_{j=1}^{m!}\|V_i^*-U_j^*\|^2:\ U_j^*\in\mathbf{U}^*\Big\}, \qquad (4)$$
where $\mathbf{V}^*=\{V_i^*\}_{i=1}^{m!}$ and $\|V_i^*-U_j^*\|^2$ is defined via the Euclidean norm, i.e.,
$$\|V_i^*-U_j^*\|^2:=\sum_{k=1}^{m}\|v_{i_k}^*-u_{j_k}^*\|^2.$$
Observe that $\|V_i^*-U_j^*\|^2=\sum_{k=1}^{m}\|v_{i_k}^*-u_{j_k}^*\|^2=\sum_{k=1}^{m}\langle v_{i_k}^*-u_{j_k}^*,\ v_{i_k}^*-u_{j_k}^*\rangle=\sum_{k=1}^{m}\langle v_{i_k}^*,v_{i_k}^*\rangle-2\cdot\sum_{k=1}^{m}\langle v_{i_k}^*,u_{j_k}^*\rangle+\sum_{k=1}^{m}\langle u_{j_k}^*,u_{j_k}^*\rangle\equiv\langle V_i^*,V_i^*\rangle-2\cdot\langle V_i^*,U_j^*\rangle+\langle U_j^*,U_j^*\rangle\equiv\langle V_i^*-U_j^*,\ V_i^*-U_j^*\rangle$.
Let us take $m=2$, for example, and render a precise decision procedure. Let $U^*=\{u_1^*,u_2^*\}$; $U_1^*=P_1(U^*)=[u_1^*,u_2^*]$; and $U_2^*=P_2(U^*)=[u_2^*,u_1^*]$ (or $U_1^*=u_1^*\otimes u_2^*$, $U_2^*=u_2^*\otimes u_1^*$). Then, the following applies:
$$V_1^*=v_{11}^*\otimes v_{12}^*=u_1^*\otimes\Big({-\frac{u_2^*\circ u_1^*}{\|u_1^*\|^2}\cdot u_1^*}+u_2^*\Big);$$
$$V_2^*=v_{21}^*\otimes v_{22}^*=u_2^*\otimes\Big({-\frac{u_1^*\circ u_2^*}{\|u_2^*\|^2}\cdot u_2^*}+u_1^*\Big).$$
Next,
  • $\|V_1^*-U_1^*\|^2=\langle V_1^*-U_1^*,V_1^*-U_1^*\rangle=\langle V_1^*,V_1^*\rangle-2\cdot\langle V_1^*,U_1^*\rangle+\langle U_1^*,U_1^*\rangle$;
  • $\|V_1^*-U_2^*\|^2=\langle V_1^*-U_2^*,V_1^*-U_2^*\rangle=\langle V_1^*,V_1^*\rangle-2\cdot\langle V_1^*,U_2^*\rangle+\langle U_2^*,U_2^*\rangle$;
  • $\|V_2^*-U_1^*\|^2=\langle V_2^*-U_1^*,V_2^*-U_1^*\rangle=\langle V_2^*,V_2^*\rangle-2\cdot\langle V_2^*,U_1^*\rangle+\langle U_1^*,U_1^*\rangle$;
  • $\|V_2^*-U_2^*\|^2=\langle V_2^*-U_2^*,V_2^*-U_2^*\rangle=\langle V_2^*,V_2^*\rangle-2\cdot\langle V_2^*,U_2^*\rangle+\langle U_2^*,U_2^*\rangle$.
In order to decide which GSP result is more acceptable under the uncertainty (or permutations), it suffices to calculate the value of
$$\alpha=\|V_1^*-U_1^*\|^2+\|V_1^*-U_2^*\|^2-\|V_2^*-U_1^*\|^2-\|V_2^*-U_2^*\|^2.$$
If $\alpha\ge0$, then we should choose the optimal GSP result to be $V_2^*$, and, if $\alpha<0$, then we should choose $V_1^*$ to be the optimal one.
$$\alpha=2\cdot[\langle V_1^*,V_1^*\rangle-\langle V_2^*,V_2^*\rangle]+2\cdot[\langle V_2^*,U_1^*\rangle+\langle V_2^*,U_2^*\rangle-\langle V_1^*,U_1^*\rangle-\langle V_1^*,U_2^*\rangle]=2\cdot[\|v_{11}^*\|^2\cdot\|v_{12}^*\|^2-\|v_{21}^*\|^2\cdot\|v_{22}^*\|^2+\langle V_2^*,U_2^*\rangle-\langle V_1^*,U_1^*\rangle],$$
since $\langle V_1^*,V_1^*\rangle=\langle v_{11}^*,v_{11}^*\rangle\cdot\langle v_{12}^*,v_{12}^*\rangle$; $\langle V_2^*,V_2^*\rangle=\langle v_{21}^*,v_{21}^*\rangle\cdot\langle v_{22}^*,v_{22}^*\rangle$;
$$\langle V_2^*,U_1^*\rangle=\langle v_{21}^*\otimes v_{22}^*,\ u_1^*\otimes u_2^*\rangle=\langle v_{21}^*,u_1^*\rangle\cdot\langle v_{22}^*,u_2^*\rangle=\langle u_2^*,u_1^*\rangle\cdot\Big\langle{-\frac{u_1^*\circ u_2^*}{\|u_2^*\|^2}\cdot u_2^*}+u_1^*,\ u_2^*\Big\rangle=0;$$
$$\langle V_1^*,U_2^*\rangle=\langle v_{11}^*\otimes v_{12}^*,\ u_2^*\otimes u_1^*\rangle=\langle v_{11}^*,u_2^*\rangle\cdot\langle v_{12}^*,u_1^*\rangle=\langle u_1^*,u_2^*\rangle\cdot\Big\langle{-\frac{u_1^*\circ u_2^*}{\|u_1^*\|^2}\cdot u_1^*}+u_2^*,\ u_1^*\Big\rangle=0.$$
Moreover,
$$\langle V_2^*,U_2^*\rangle=\langle v_{21}^*,u_2^*\rangle\cdot\langle v_{22}^*,u_1^*\rangle=\langle u_2^*,u_2^*\rangle\cdot\Big\langle{-\frac{u_1^*\circ u_2^*}{\|u_2^*\|^2}\cdot u_2^*}+u_1^*,\ u_1^*\Big\rangle=-(u_1^*\circ u_2^*)^2+\|u_1^*\|^2\cdot\|u_2^*\|^2;$$
$$\langle V_1^*,U_1^*\rangle=\langle v_{11}^*,u_1^*\rangle\cdot\langle v_{12}^*,u_2^*\rangle=\langle u_1^*,u_1^*\rangle\cdot\Big\langle{-\frac{u_1^*\circ u_2^*}{\|u_1^*\|^2}\cdot u_1^*}+u_2^*,\ u_2^*\Big\rangle=-(u_1^*\circ u_2^*)^2+\|u_1^*\|^2\cdot\|u_2^*\|^2.$$
Hence, $\langle V_2^*,U_2^*\rangle=\langle V_1^*,U_1^*\rangle$ and $\alpha=2\cdot[\|v_{11}^*\|^2\cdot\|v_{12}^*\|^2-\|v_{21}^*\|^2\cdot\|v_{22}^*\|^2]$, i.e., the sign of $\alpha$ is decided by the sign of the determinant $\begin{vmatrix}\|v_{11}^*\| & \|v_{21}^*\|\\ \|v_{22}^*\| & \|v_{12}^*\|\end{vmatrix}$.
This result can be captured by the following statement:
$$\arg\min\Big\{\sum_{j=1}^{2}\|V^*-U_j^*\|^2:\ U_j^*\in\mathbf{U}^*\equiv\{U_1^*,U_2^*\},\ V^*\in\mathbf{V}^*\Big\}=\begin{cases}V_1^*, & \text{if }\begin{vmatrix}\|v_{11}^*\| & \|v_{21}^*\|\\ \|v_{22}^*\| & \|v_{12}^*\|\end{vmatrix}<0,\\[4pt] V_2^*, & \text{if }\begin{vmatrix}\|v_{11}^*\| & \|v_{21}^*\|\\ \|v_{22}^*\| & \|v_{12}^*\|\end{vmatrix}\ge0,\end{cases}$$
where $\mathbf{V}^*\equiv\{V_1^*,V_2^*\}$.
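For $m=2$ the whole decision procedure fits in a few lines. The names are ours, the inputs are normalized internally, and `tdot` implements the tensor-product inner product $\langle x_1\otimes x_2,\ y_1\otimes y_2\rangle=\langle x_1,y_1\rangle\cdot\langle x_2,y_2\rangle$:

```python
def dot(x, y):
    return sum(a * b for a, b in zip(x, y))

def tdot(X, Y):
    """<x1 (x) x2, y1 (x) y2> = <x1, y1> * <x2, y2>."""
    p = 1.0
    for x, y in zip(X, Y):
        p *= dot(x, y)
    return p

def sqdist(X, Y):
    """||X - Y||^2 = <X, X> - 2<X, Y> + <Y, Y> on tensor pairs."""
    return tdot(X, X) - 2 * tdot(X, Y) + tdot(Y, Y)

def pick_basis(u1, u2):
    """m = 2 selection rule: alpha >= 0 -> V2*, alpha < 0 -> V1*."""
    unit = lambda x: [a / dot(x, x) ** 0.5 for a in x]
    a, b = unit(u1), unit(u2)
    gsp2 = lambda p, q: [p, [qi - dot(q, p) * pi for pi, qi in zip(p, q)]]  # p is unit
    U1, U2 = [a, b], [b, a]
    V1, V2 = gsp2(a, b), gsp2(b, a)
    alpha = sqdist(V1, U1) + sqdist(V1, U2) - sqdist(V2, U1) - sqdist(V2, U2)
    return (V2 if alpha >= 0 else V1), alpha
```

For larger $m$, the same `sqdist` scores feed the arg-min over all $m!$ permutations in Equation (4).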
Let us take the U in Example 1 and demonstrate how to find the optimal basis (or bases).
$$\mathbf{U}^*=\mathrm{perm}\begin{pmatrix}0.295 & 0.831 & 0.937 & 0.224 & -0.296\\ 0.737 & 0.119 & 0.000 & -0.559 & 0.222\\ 0.147 & 0.119 & 0.281 & -0.112 & 0.815\\ 0.000 & 0.237 & 0.187 & 0.783 & 0.000\\ 0.590 & 0.475 & -0.094 & 0.112 & 0.445\end{pmatrix}=\{U_j^*\}_{j=1}^{120}.$$
After a sequence of numerical computations via R 4.2.2 programming (https://github.com/raymingchen/Gram-Schimdt-Process.git (accessed on 10 February 2025)), we obtain the results according to Equation (4): $\arg\min_{V_i^*\in\mathbf{V}^*}\{\sum_{j=1}^{120}\|V_i^*-U_j^*\|^2:U_j^*\in\mathbf{U}^*\}$ (shown in Figure 5) $=V_{83}^*$, with entries of magnitudes
$$V_{83}^*=\begin{pmatrix}0.831 & 0.449 & 0.256 & 0.068 & 0.194\\ 0.119 & 0.165 & 0.174 & 0.937 & 0.226\\ 0.119 & 0.316 & 0.932 & 0.124 & 0.036\\ 0.237 & 0.009 & 0.024 & 0.253 & 0.937\\ 0.475 & 0.819 & 0.184 & 0.196 & 0.176\end{pmatrix},$$
and its corresponding input (permuted) matrix is
$$U_{83}^*=[u_2^*\ u_3^*\ u_5^*\ u_1^*\ u_4^*]=\begin{pmatrix}0.831 & 0.937 & -0.296 & 0.295 & 0.224\\ 0.119 & 0.000 & 0.222 & 0.737 & -0.559\\ 0.119 & 0.281 & 0.815 & 0.147 & -0.112\\ 0.237 & 0.187 & 0.000 & 0.000 & 0.783\\ 0.475 & -0.094 & 0.445 & 0.590 & 0.112\end{pmatrix}.$$
Figure 5. Computed results of { j = 1 120 | | V i * U j * | | 2 : U j * U * } i = 1 120 . The minimal deviation lies in the 83rd permutation, and the value is 905.4479 , i.e., the optimal GSP result is V 83 * = G S P ( U 83 * ) .
A somewhat related optimization problem is to minimize the distance between the input vectors and the converted orthogonal vectors. Readers interested in extending our method to such a problem could also refer to LLL, a lattice basis reduction algorithm [10,11].

4. Application Two

In this section, we show how to apply our non-self-referential GSP approach to a classification problem. The targets are countries, and the criteria are governance indicators. The goal is to classify countries with similar governance systems into the same category. In our case, there are 6 categories for the countries to be classified into. The procedure goes as follows:
  • Collect time-series data (from Year 2013 to Year 2023) for 202 countries (see Numbered 202 countries.pdf in https://github.com/raymingchen/Gram-Schimdt-Process/blob/ae4e4e7d98059e4dc1cd08b3591b884d4acdc13a/Numbered%20202%20countries.pdf (accessed on 10 February 2025)) from Data Bank. The source data are further processed to fit our purpose and saved as data.csv (https://github.com/raymingchen/Gram-Schimdt-Process/blob/920de0ef528c06058cf0178b2b1718d99862c254/data.csv (accessed on 10 February 2025)). There are six indicators to be studied:
    • ( I D 1 ) Control of Corruption: Estimate;
    • ( I D 2 ) Government Effectiveness: Estimate;
    • ( I D 3 ) Political Stability and Absence of Violence/Terrorism: Estimate;
    • ( I D 4 ) Regulatory Quality: Estimate;
    • ( I D 5 ) Rule of Law: Estimate;
    • ( I D 6 ) Voice and Accountability: Estimate.
    For the source data, the meaning of the indicators, and how they are collected/computed in the database, please refer to the Worldwide Governance Indicators, in particular its Metadata, or https://databank.worldbank.org/source/worldwide-governance-indicators?l=en# (accessed on 10 February 2025). We use the set of column vectors $\{w_{t,j}:\ t=2013,\ldots,2023;\ j=1,\ldots,202\}$ to denote the set of all the time-series data collected, where each $w_{t,j}$ is a 6-by-1 column vector (matrix) recording the values of $ID_1, ID_2, \ldots, ID_6$ for country $j$ at Year $t$.
  • Single out 6 representative countries as benchmarks: China (38), France (63), Germany (68), India (82), Russian Federation (151), USA (193); the exact content can be found in 6 representatives.pdf (https://github.com/raymingchen/Gram-Schimdt-Process/blob/920de0ef528c06058cf0178b2b1718d99862c254/6%20representatives.pdf (accessed on 10 February 2025)). Let the column vectors $c_t, f_t, g_t, i_t, r_t, u_t$ denote their values of $ID_1,\ldots,ID_6$ at Year $t$, and define the 6-by-6 matrix $U_t=[c_t\ f_t\ g_t\ i_t\ r_t\ u_t]\equiv[u_t^1\ u_t^2\ u_t^3\ u_t^4\ u_t^5\ u_t^6]$; for example, $U_{2013}$ is the matrix form of the tabulated data of Table 1.
  • For each Year $t$, define a 202-by-6 matrix $Y_t=\begin{pmatrix}(w_{t,1})^T\\ (w_{t,2})^T\\ \vdots\\ (w_{t,202})^T\end{pmatrix}$, where $T$ denotes the transpose. For example (rounded to 2 decimal places),
$$Y_{2013}=\begin{pmatrix}(w_{2013,1})^T\\ (w_{2013,2})^T\\ \vdots\\ (w_{2013,202})^T\end{pmatrix}=\begin{pmatrix}1.45 & 1.4 & 2.52 & 1.19 & 1.61 & 1.24\\ 0.75 & 0.32 & 0.09 & 0.25 & 0.52 & 0.05\\ \vdots & & & & & \vdots\\ 1.42 & 1.31 & 0.67 & 1.85 & 1.59 & 1.39\end{pmatrix}$$
    and implement the matrix multiplication $Y_tU_t$ (regarded as a set of projections from the vectors in $Y_t$ onto the vectors in $U_t$) to form a 202-by-6 matrix $D_t$ that records the projection values of all 202 countries onto the 6 representatives $U_t$ at Year $t$. We partially demonstrate (rounded to 2 decimal places) the 202-by-6 matrix
$$[Y_{2013}]U_{2013}=D_{2013}=\begin{pmatrix}d_{2013,1}\\ d_{2013,2}\\ \vdots\\ d_{2013,202}\end{pmatrix}=\begin{pmatrix}0.87 & 2.26 & 2.9 & 0.69 & 1.32 & 2.41\\ 0.1 & 0.4 & 0.49 & 0.06 & 0.3 & 0.4\\ \vdots & & & & & \vdots\\ 0.91 & 2.52 & 3.21 & 0.65 & 1.33 & 2.68\end{pmatrix},$$
    where each row vector $d_{t,j}$ represents the projection of vector $w_{t,j}$ onto the representative vectors $U_t$. For the full matrix, one can refer to the file named Y_2013ontoU2013.pdf (https://github.com/raymingchen/Gram-Schimdt-Process/blob/3443a9e08ab603e7754d959acfd1189f1e30e248/Y_2013ontoU2013.pdf (accessed on 10 February 2025)). We skip the presentations of the other $[Y_t]U_t$;
  • For each Year $t$, find the set of orthogonal vectors $V_t=[v_t^1\ v_t^2\ v_t^3\ v_t^4\ v_t^5\ v_t^6]$ of the 6 representatives $U_t$ to yield the maximally independent vectors (or hidden features) in terms of the 6 representatives $U_t$ via the non-self-referential representation presented in this article. The implementation of this conversion can be found in Non_self_referential GSP.R in https://github.com/raymingchen/Gram-Schimdt-Process/blob/3d28a5349bf514770c23a5df8950f8fc674aa9d7/Non_self_referential%20GSP.R (accessed on 10 February 2025). For the whole results from $V_{2013}$ to $V_{2023}$, please refer to https://github.com/raymingchen/Gram-Schimdt-Process/blob/e6f2c27c9ab16a17e0370d1e1c7bd0382f49aeef/Vk_2013to2023.pdf (accessed on 10 February 2025). For example (rounded to 2 decimal places),
$$V_{2013}=[v_{2013}^1\ v_{2013}^2\ v_{2013}^3\ v_{2013}^4\ v_{2013}^5\ v_{2013}^6]=\begin{pmatrix}0.36 & 0.91 & 0.23 & 0.28 & 0.2 & 0.06\\ 0.01 & 1.48 & 0.23 & 0.31 & 0.01 & 0.04\\ 0.54 & 0.15 & 0.35 & 0.42 & 0.04 & 0.02\\ 0.33 & 0.78 & 0.16 & 0.12 & 0.35 & 0\\ 0.53 & 0.82 & 0.06 & 0.08 & 0.09 & 0.13\\ 1.63 & 0.58 & 0.18 & 0.03 & 0.02 & 0.03\end{pmatrix};$$
  • Since $v_k=\sum_{j=1}^{k}\beta_{[k\to j]}u_j$ (by Lemma 2), we compute $[V_t]U_t$; indeed, we can recursively compute these values directly by evaluating each $\beta_{[k\to j]}$ via our method (this is implemented in the file named code and data for application Two_19.R in https://github.com/raymingchen/Gram-Schimdt-Process/blob/5c8327b7782a16fcd06092814b9192a1fdf8182e/code%20and%20data%20for%20application%20Two_19.R (accessed on 10 February 2025)). For example (rounded to 2 decimal places and read as row vectors),
$$[V_{2013}]U_{2013}=\begin{pmatrix}b_{2013,1}\\ b_{2013,2}\\ b_{2013,3}\\ b_{2013,4}\\ b_{2013,5}\\ b_{2013,6}\end{pmatrix}=\begin{pmatrix}1 & 0 & 0 & 0 & 0 & 0\\ 1.1 & 1 & 0 & 0 & 0 & 0\\ 0.09 & 1.18 & 1 & 0 & 0 & 0\\ 0.4 & 2.43 & 2.28 & 1 & 0 & 0\\ 0.38 & 0.29 & 0.57 & 0.07 & 1 & 0\\ 0.02 & 1.12 & 0.06 & 0.19 & 0.09 & 1\end{pmatrix},$$
    where each $b_{t,j}$ is the projection of the vector $v_t^j$ onto $U_t$. For a complete computed result of $\{[V_t]U_t\}_{t=2013}^{2023}$, please refer to the file Vk_2013to2023.pdf in https://github.com/raymingchen/Gram-Schimdt-Process/blob/8edd29d1b755559b5755a491422f534dd30dd8fa/Vk_2013to2023.pdf (accessed on 10 February 2025);
  • For each Year $t$, compute the set of cosine values $Cos_t\equiv\big\{\frac{d_{t,i}\circ b_{t,j}}{\|d_{t,i}\|\cdot\|b_{t,j}\|}\big\}_{i=1:202,\ j=1:6}$. For a partial demonstration (rounded to 2 decimal places), $Cos_{2013}=\begin{pmatrix}cos_{2013,1}\\ cos_{2013,2}\\ \vdots\\ cos_{2013,202}\end{pmatrix}=\begin{pmatrix}0.18 & 0.18 & 0.02 & 0.01 & 0.01 & 0\\ 0.13 & 0.23 & 0.01 & 0.02 & 0.09 & 0\\ \vdots & & & & & \vdots\\ 0.18 & 0.2 & 0.02 & 0.01 & 0.03 & 0\end{pmatrix}$, where $cos_{t,j}$ reflects the similarity of the governance system of country $j$ to the 6 orthogonal features.
  • Assign a category (from 1 to 6) to each country by its maximal cosine value among the 6 features. For each Year $t$, find the positions (among columns) that yield the maximal values of $Cos_t$, row by row; let us name the categorization at Year $t$ by $Cat_t$. For example, in the previous case, $Cat_{2013}=(cat_{2013,1},\ cat_{2013,2},\ \ldots,\ cat_{2013,202})=(1,\ 1,\ \ldots,\ 1)$. The detailed categorization for all $Cat_{2013}, Cat_{2014},\ldots,Cat_{2023}$ is presented in Appendix A, Listing A1. The categorization regarding the 6 representative countries is singled out in Table 2. As for the statistics of the categorization distribution for the other countries, they are revealed in Figure 6.
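The final two steps of the pipeline (cosine matrix and 1-based arg-max) can be sketched as follows. The authors' full pipeline is in R at the links above, so the Python below, with our own names and toy data in the test, is only illustrative:

```python
def dot(x, y):
    return sum(a * b for a, b in zip(x, y))

def categorize(D, B):
    """Rows of D: projected country vectors d_{t,i}; rows of B: projected
    orthogonal features b_{t,j}. Returns, per country, the 1-based index
    of the feature with the largest cosine similarity."""
    def cos(x, y):
        return dot(x, y) / (dot(x, x) ** 0.5 * dot(y, y) ** 0.5)
    return [1 + max(range(len(B)), key=lambda j: cos(d, B[j])) for d in D]
```

In the application, `D` would be the 202-by-6 matrix $D_t$ and `B` the 6-by-6 matrix $[V_t]U_t$, yielding the vector $Cat_t$ of 202 category labels.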
Now let us evaluate and compare various methods for this classification problem based on cosine values and different bases. To this end, together with our method (NsrGSP), we add two other approaches: one without any GSP involved (NonGSP) and the other based on the classical GSP (CGSP). The results for NonGSP are shown in Figure 7: the 3rd (Germany) and 5th (Russia) style governance systems dominate the world, and CGSP behaves likewise (the results are shown in Figure 8). Obviously, this classification is not exactly what we anticipate; we would like the classification to preserve some distinct features that differentiate one category from another. This is achieved by NsrGSP, since the distinct features are now related to China and America, a result that polarizes the classification more.

5. Conclusions

In this paper, we derived a non-self-referential inductive method for the Gram–Schmidt process based on the typical Gram–Schmidt process (TGSP). The main idea is to keep track of all the indices of the TGSP and find the patterns of change between these indices. The results show that we can largely reduce the computational complexity of the GSP while keeping the theoretical flavor that all of the generated orthogonal vectors are non-self-referential. This gives another perspective on the GSP. The results show that
$$V_k = U_k \begin{bmatrix} B_{\Delta_{k-1}}^{+} & 0_{(k-1)\times 1} \\[4pt] U_k^{1}\,(B_{\Delta_{k-1}}^{+})^{t}\left(\dfrac{1}{\big[U_k^{1}(B_{\Delta_{k-1}}^{+})^{t}\big]^{c}\,\big[U_k^{1}(B_{\Delta_{k-1}}^{+})^{t}\big]}\right) B_{\Delta_{k-1}}^{+} & 1 \end{bmatrix}^{t},$$
where $k \geq 2$ and $B_{\Delta_1}^{+} = [1]$. Unlike the other non-self-referential GSP, FDET (see Equation (1)), whose complexity is exponential due to the permutations, our method uses only polynomial complexity. This brings some theoretical and practical advantages; for example, in Application Two, we showed that it provides a much more persuasive classification while maintaining polynomial complexity. There are also some disadvantages to this approach. First of all, although the presentation of our method is succinct, the computation of the coefficients still needs to go through a set of recursive procedures. Secondly, we derived our method from the classical GSP, which tends to cause computational error [12,13]. For better performance, readers could derive a new version of our method based on the modified GSP [14,15]. Furthermore, more experiments and research are still needed on its applicability to other real-world problems.
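To make the recursion concrete, here is a minimal numerical sketch in Python (rather than the paper's R code); the function name and the sign convention for $\beta_{k,j}$ are our own assumptions. It accumulates the coefficients $\beta_{[k\,j]}$ purely from the Gram matrix of the input vectors, so each $v_k = \sum_j \beta_{[k\,j]} u_j$ is expressed without referencing previously computed orthogonal vectors — only previously computed coefficients.

```python
import numpy as np

def nsr_gsp_coeffs(U):
    """Non-self-referential Gram-Schmidt sketch (illustrative, not the paper's code).

    U: (n, k) matrix with linearly independent columns u_1..u_k.
    Returns the lower-triangular B with B[k-1, j-1] = beta_{[k j]}, so that
    V = U @ B.T has mutually orthogonal columns v_k = sum_j beta_{[k j]} u_j.
    Only the Gram matrix G[p, q] = u_p . u_q of the inputs is ever used.
    """
    k = U.shape[1]
    G = U.T @ U                    # all inner products of the original vectors
    B = np.eye(k)                  # beta_{[k k]} = 1 on the diagonal
    for m in range(1, k):          # build the coefficient row of v_{m+1}
        for i in range(m):
            c = B[i, :i + 1]       # final coefficient row of v_{i+1}
            # ||v_i||^2 and u_m . v_i recovered from known coefficients and G:
            vi_norm2 = c @ G[:i + 1, :i + 1] @ c
            um_dot_vi = c @ G[:i + 1, m]
            beta_mi = -um_dot_vi / vi_norm2           # edge weight (assumed sign)
            B[m, :i + 1] += beta_mi * B[i, :i + 1]    # accumulate beta_{[m j]}
    return B

U = np.array([[1.0, 1.0, 0.0],
              [0.0, 1.0, 1.0],
              [1.0, 0.0, 1.0],
              [0.0, 0.0, 1.0]])
B = nsr_gsp_coeffs(U)
V = U @ B.T
print(np.round(V.T @ V, 10))   # off-diagonal entries are 0: the v_k are orthogonal
```

Because only inner products of the $u_j$ enter, the whole computation stays polynomial in $k$, in contrast to the determinant-based (FDET) representation.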

Funding

This research was funded by the Internal (Faculty/Staff) Start-Up Research Grant of Wenzhou-Kean University (Project No. ISRG2023029) and the 2023 Student Partnering with Faculty/Staff Research Program (Project No. WKUSPF2023035).

Data Availability Statement

The original contributions presented in this study are included in the article.

Conflicts of Interest

The author declares that he has no known competing financial interests or personal relationships that could have appeared to influence the work reported in this paper.

Appendix A. Categorization of the 202 Countries Year by Year: 2013–2023

Listing A1. The categories assigned to the 202 countries from Year 2013 (cat2013) to 2023 (cat2023). In total, there were six categories—1, 2, 3, 4, 5, and 6—for the 202 countries (in which square brackets are used to identify the order of these countries).
> cat2013: [1] 1 1 1 2 1 2 1 1 2 2 2 1 2 2 1 2 1 2 1 1 1 1 1 2 5 2 5 1 1 2 1 1 2 2 [35] 1 1 2 1 1 1 1 1 2 1 2 1 2 2 2 1 2 1 1 1 1 1 1 2 1 1 1 2 2 2 1 1 2 2 [69] 5 2 2 2 1 1 1 1 1 1 2 2 2 1 1 1 1 2 2 2 5 2 2 2 1 1 6 1 2 1 1 1 1 2 [103] 1 1 1 1 2 2 2 2 1 1 2 1 1 2 6 1 2 5 6 1 1 5 1 1 1 2 6 1 2 2 1 1 1 5 [137] 2 2 1 6 5 1 1 1 1 2 2 2 2 2 1 1 2 1 1 1 1 2 1 2 2 2 1 1 2 1 2 1 2 2 [171] 2 1 1 2 2 1 2 1 1 1 1 1 1 5 1 2 1 6 1 1 2 2 2 2 1 6 1 1 1 1 1 1.
> cat2014: [1] 1 3 1 2 1 2 1 1 2 2 2 1 2 2 1 2 1 2 1 1 2 1 1 2 5 2 2 1 1 2 1 1 2 2 [35] 1 1 2 1 5 1 1 1 2 1 2 1 2 2 2 1 2 1 1 1 5 1 1 2 1 1 1 2 2 2 1 1 2 2 [69] 6 2 2 2 1 1 1 1 1 1 2 2 2 1 1 1 1 2 2 2 5 2 2 2 1 1 6 1 2 1 1 1 1 2 [103] 1 1 1 1 2 2 2 2 1 1 2 1 1 2 6 1 2 1 6 1 1 2 1 1 1 2 1 1 2 2 1 1 1 2 [137] 2 2 1 6 2 1 1 5 1 2 2 2 2 2 1 2 2 1 1 6 3 2 1 2 2 2 1 1 2 1 2 1 3 2 [171] 2 1 1 2 2 1 2 1 1 1 1 1 6 5 1 2 1 6 1 1 2 2 2 2 1 6 1 1 1 1 1 1.
> cat2015: [1] 1 3 1 2 1 2 1 1 2 2 2 1 2 2 1 2 1 2 1 1 2 1 1 2 1 2 2 1 1 2 1 1 2 2 [35] 1 1 2 1 2 1 1 1 2 1 2 1 2 2 2 1 2 1 1 1 1 1 1 2 1 1 1 2 2 2 1 1 2 2 [69] 6 2 2 2 1 1 1 1 1 1 2 2 2 1 1 1 1 2 2 2 4 2 2 2 1 1 5 1 2 1 1 1 1 2 [103] 1 1 1 1 2 2 2 2 1 1 2 1 1 2 5 1 2 1 5 1 1 2 1 1 1 2 1 1 2 2 1 1 1 2 [137] 2 2 1 2 2 1 1 3 1 2 2 2 2 2 1 2 2 1 1 5 3 2 1 2 2 2 1 1 2 1 2 1 2 2 [171] 2 1 1 2 2 1 2 1 1 1 1 1 6 4 1 2 1 6 1 1 2 2 2 2 1 6 1 1 1 1 1 1.
> cat2016: [1] 1 3 1 2 1 2 1 1 2 2 2 1 2 2 1 2 1 2 1 1 2 1 1 2 1 2 2 1 1 2 1 1 2 2 [35] 1 1 2 1 4 1 1 1 2 1 2 1 2 2 2 1 2 1 1 1 1 1 1 2 1 1 5 2 2 2 1 1 2 2 [69] 6 2 2 2 1 1 1 1 1 1 2 2 2 1 1 1 1 2 2 2 4 2 2 2 1 1 5 1 2 1 1 1 1 2 [103] 1 1 1 1 2 2 2 2 1 1 2 1 1 2 5 1 2 1 5 1 4 2 1 1 1 2 1 1 2 2 1 1 1 3 [137] 2 2 1 2 3 1 1 3 1 2 2 2 2 2 1 2 2 1 1 5 4 2 1 2 2 2 1 1 2 1 2 1 2 2 [171] 2 1 1 2 2 1 2 1 1 1 1 1 6 4 1 1 1 6 1 1 2 2 2 2 1 6 1 1 1 1 1 1.
> cat2017: [1] 1 3 1 2 1 2 1 1 2 2 2 1 2 2 1 2 1 2 1 1 2 1 1 2 1 2 2 1 1 2 1 1 2 2 [35] 1 1 2 1 4 1 1 1 2 1 2 1 2 2 2 1 2 1 1 1 1 1 1 2 1 1 2 2 2 2 1 1 2 2 [69] 5 2 2 2 1 1 1 1 1 1 2 2 2 1 1 1 1 2 2 2 2 2 2 2 1 1 5 1 2 1 1 1 1 2 [103] 1 1 1 1 2 2 2 2 1 1 2 1 1 2 5 1 2 1 1 1 1 2 1 1 1 2 1 1 2 2 1 1 1 2 [137] 2 2 1 2 2 1 1 3 1 2 2 2 2 2 1 2 2 1 1 1 1 2 1 2 2 2 1 1 2 1 2 1 2 2 [171] 2 1 1 2 2 1 2 1 1 1 1 1 5 5 1 1 1 5 1 1 2 2 2 2 1 6 1 1 1 1 1 1.
> cat2018: [1] 1 5 1 2 1 2 5 3 2 2 2 1 2 2 1 2 1 2 1 1 2 1 1 2 1 2 2 1 1 2 1 1 2 2 [35] 1 1 2 1 4 1 1 1 2 1 2 1 2 2 2 1 2 1 1 1 1 1 1 2 1 1 2 2 2 2 1 1 2 2 [69] 5 2 2 3 1 1 1 1 1 1 2 2 2 1 1 1 1 2 2 2 2 2 2 2 1 1 5 1 2 1 1 1 1 2 [103] 1 1 1 1 2 2 2 2 1 1 2 1 1 2 5 1 2 1 5 1 5 2 1 1 1 2 2 1 2 2 1 1 1 2 [137] 2 2 1 2 5 1 1 3 1 2 2 2 2 2 1 2 2 1 1 4 1 2 1 2 2 2 1 1 5 1 2 1 2 2 [171] 2 1 1 2 2 1 2 1 1 1 1 1 5 5 1 1 1 6 1 1 2 2 2 2 1 5 1 1 1 1 1 1.
> cat2019: [1] 1 5 1 2 1 2 1 2 2 2 2 1 2 2 1 2 1 2 1 1 2 1 1 2 1 2 2 1 1 2 1 1 2 2 [35] 1 1 2 1 2 1 1 1 2 1 2 1 2 2 2 1 2 1 1 1 1 1 1 2 1 1 2 2 2 2 1 1 2 2 [69] 4 2 2 3 1 1 1 1 1 1 2 2 2 1 1 1 1 2 2 2 2 2 2 1 1 1 3 1 2 1 1 1 1 2 [103] 1 1 1 1 2 2 2 2 1 1 2 1 1 2 3 1 2 1 3 1 5 2 1 1 1 2 3 1 2 2 1 1 1 2 [137] 2 2 1 2 2 1 1 2 1 2 2 2 2 2 1 1 2 1 1 1 1 2 1 2 2 2 1 1 5 1 2 1 2 2 [171] 2 1 1 2 2 1 2 1 1 1 1 1 6 4 1 1 1 3 1 1 2 2 2 2 1 4 1 1 1 1 1 1.
> cat2020: [1] 1 5 1 2 1 2 4 2 2 2 2 1 2 2 1 2 1 2 4 1 2 1 1 2 4 2 2 1 1 2 1 1 2 2 [35] 1 1 2 1 5 1 1 1 2 1 2 1 2 2 2 1 2 5 1 1 5 1 1 2 1 1 2 2 2 2 1 1 2 2 [69] 4 2 2 2 1 1 1 1 1 1 2 2 2 1 5 1 1 2 2 2 2 2 2 2 1 1 2 1 2 1 2 1 1 2 [103] 1 1 1 1 2 2 2 2 1 1 2 1 1 2 4 1 2 1 4 5 5 2 1 1 1 2 2 1 2 2 1 1 1 2 [137] 2 2 1 2 2 1 5 2 1 2 2 2 2 2 1 1 2 1 1 4 5 2 1 2 2 2 4 1 5 1 2 1 2 2 [171] 2 1 1 2 2 1 2 1 1 1 4 1 6 4 4 1 1 3 1 1 2 2 2 2 1 4 1 1 1 1 1 1.
> cat2021: [1] 1 5 1 2 1 2 1 2 2 2 2 1 2 2 1 2 1 2 1 1 1 1 1 2 4 2 2 1 1 2 1 1 2 2 [35] 1 1 2 1 5 1 1 1 2 1 2 1 2 2 2 1 2 5 1 1 1 1 1 2 1 1 2 2 2 2 1 1 2 2 [69] 4 2 2 2 1 1 1 1 1 1 2 2 2 1 2 1 1 2 2 2 2 2 2 2 1 1 3 1 2 1 2 1 1 2 [103] 1 1 1 1 2 2 2 2 1 1 2 1 1 2 1 1 2 1 1 5 4 2 1 1 1 2 2 1 2 2 1 1 1 2 [137] 2 2 1 2 2 1 1 5 1 2 2 2 2 2 1 2 2 1 2 1 5 2 1 2 2 2 1 1 4 1 2 1 2 2 [171] 2 1 1 2 2 1 2 1 1 1 1 1 6 5 1 1 1 3 1 1 2 2 2 2 1 6 1 1 1 1 1 1.
> cat2022: [1] 1 2 1 2 1 2 1 4 2 2 2 1 2 2 1 2 1 2 1 1 2 1 1 2 1 2 2 1 1 2 1 1 2 2 [35] 1 1 2 1 6 1 1 1 2 1 2 1 2 2 2 1 2 6 1 1 1 1 1 2 1 1 2 2 2 2 1 1 2 2 [69] 1 2 2 2 1 1 1 1 1 1 2 2 2 1 2 1 1 2 2 2 2 2 2 2 1 1 3 1 2 1 2 1 1 2 [103] 1 1 1 1 2 2 2 2 1 1 2 1 1 2 3 1 2 1 1 6 1 2 1 1 1 2 2 1 2 2 1 1 1 2 [137] 2 2 1 2 6 1 1 6 6 2 2 2 2 2 1 2 2 1 2 1 6 2 1 2 2 2 1 1 4 1 2 1 2 2 [171] 2 1 1 2 2 1 2 1 1 2 1 1 4 4 1 1 1 2 1 1 2 2 2 2 1 3 1 1 1 1 1 1.
> cat2023: [1] 1 2 1 2 1 2 1 5 2 2 2 1 2 2 1 2 1 2 4 1 2 1 1 2 4 2 2 1 1 2 1 1 2 2 [35] 1 1 2 1 6 1 1 1 2 1 2 1 2 2 2 1 2 2 1 1 1 1 1 2 1 1 2 2 2 2 1 1 2 2 [69] 4 2 2 2 1 1 1 1 1 1 2 2 2 1 2 1 1 2 2 2 2 2 2 2 1 1 3 1 2 1 2 1 1 2 [103] 1 1 1 1 2 2 2 2 1 1 2 1 1 2 4 1 2 1 3 6 4 2 1 1 1 2 2 1 2 2 1 1 1 2 [137] 2 2 1 2 6 1 6 6 6 2 2 2 2 2 1 2 2 1 2 1 6 2 1 2 2 2 1 1 4 1 2 1 2 2 [171] 2 1 1 2 2 1 2 1 1 6 1 1 4 4 1 1 1 2 1 1 2 2 2 2 1 3 1 1 1 1 1 1.

Appendix B. Raw Categorization of 202 Countries Year by Year: 2013–2023

> cat2013_raw
   [1]  5 5 5 3 5 3 5 5 3 3 3 5 3 1 5 3 1 3 4 5 3 5 5 3 4 6 3 5 5 3 5 5 3 3
  [35] 5 5 3 1 4 5 5 5 3 5 6 1 2 6 3 5 3 5 5 5 5 5 5 3 1 5 1 3 2 2 5 5 2 3
  [69] 3 2 3 3 5 5 5 5 5 5 6 3 3 4 5 5 5 3 2 3 2 3 3 1 5 5 3 5 6 5 1 5 5 6
[103] 5 5 5 5 3 3 3 6 5 5 6 5 4 3 4 5 6 4 3 5 5 6 5 5 5 3 4 5 3 3 5 5 5 5
[137] 3 1 5 3 4 5 5 4 4 3 6 3 6 6 5 1 3 5 1 5 4 3 5 6 3 3 5 5 2 5 2 5 3 3
[171] 3 5 4 3 3 5 6 5 5 4 5 5 3 2 4 4 5 3 5 5 6 3 6 3 5 3 5 1 5 5 5 5
> cat2014_raw
   [1]  5 5 5 6 5 3 4 5 3 3 3 5 3 1 5 3 1 3 4 4 3 5 5 3 4 3 3 5 5 2 5 5 3 3
  [35] 5 5 3 1 4 5 5 5 3 5 3 1 3 3 3 5 3 5 5 5 5 5 5 3 1 5 1 3 2 3 5 5 2 3
  [69] 4 2 3 3 5 5 5 5 5 5 3 3 3 4 4 5 5 3 2 3 4 6 3 1 1 4 4 5 6 5 1 5 1 3
[103] 5 4 5 5 3 3 3 3 5 5 6 5 4 3 4 5 3 5 3 5 5 3 5 5 5 3 4 5 3 3 5 4 5 3
[137] 3 3 5 4 3 5 5 4 4 3 3 3 6 3 5 1 3 4 1 4 3 6 5 6 3 3 4 5 2 5 2 1 3 3
[171] 3 5 4 3 3 5 3 5 5 5 5 5 3 2 4 5 5 3 5 4 6 6 6 3 5 3 5 1 5 5 5 5
> cat2015_raw
   [1]  5 5 5 6 5 6 4 5 3 3 6 5 6 1 5 6 5 3 4 5 6 5 5 6 4 6 3 5 5 6 5 5 6 6
  [35] 5 5 3 1 4 5 5 5 3 5 6 1 3 6 3 5 6 5 5 5 5 5 5 3 1 5 3 3 2 3 5 5 2 3
  [69] 4 2 6 3 5 5 5 5 5 5 6 6 6 4 4 5 5 3 2 3 4 6 6 1 5 5 6 5 2 5 1 5 5 3
[103] 5 5 5 5 6 6 6 6 5 5 6 5 5 6 6 5 6 5 6 5 5 3 5 5 5 6 6 5 3 6 5 5 5 5
[137] 6 1 5 6 3 5 5 5 4 6 6 6 6 3 5 1 6 5 1 4 6 6 5 6 6 6 5 5 2 5 2 1 3 6
[171] 6 5 4 3 6 5 6 5 5 1 5 5 4 6 4 4 5 6 5 5 6 3 6 3 5 4 5 1 5 5 5 5
> cat2016_raw
   [1]  5 5 5 3 5 3 4 5 3 3 3 5 3 1 5 3 5 3 4 5 3 5 5 3 4 3 3 4 5 3 5 5 3 3
  [35] 5 5 3 1 4 5 5 5 3 5 3 1 3 3 3 5 3 5 5 5 5 5 5 3 5 5 3 3 2 6 5 5 2 3
  [69] 4 2 3 3 5 5 5 4 5 5 6 3 3 4 4 5 5 3 2 3 3 3 3 1 5 4 3 5 6 5 1 5 5 6
[103] 5 5 5 5 3 3 3 3 5 5 6 5 5 3 3 5 3 5 3 5 3 3 5 5 5 3 3 5 3 3 5 5 5 5
[137] 3 1 5 3 6 5 5 4 4 3 3 6 3 6 5 1 3 5 1 4 4 3 5 3 3 3 5 5 2 5 6 5 3 3
[171] 3 5 4 3 3 5 3 5 5 1 5 5 3 3 4 5 5 3 5 4 3 3 6 3 5 3 5 1 5 5 5 5
> cat2017_raw
   [1]  5 5 5 3 5 3 4 5 3 3 3 5 3 1 5 3 5 3 5 5 3 5 5 3 4 6 3 4 5 3 5 5 3 3
  [35] 5 5 3 1 4 5 5 5 3 5 3 1 3 3 3 5 3 5 5 5 5 5 5 3 1 5 3 3 2 2 5 5 6 3
  [69] 4 2 3 3 5 5 5 4 5 5 6 3 3 4 4 5 5 3 6 3 2 3 2 1 1 5 2 5 6 5 1 5 5 6
[103] 5 5 5 5 3 3 3 6 5 5 6 5 4 3 2 5 3 5 2 5 5 6 5 5 5 2 3 5 3 3 5 5 5 5
[137] 3 1 5 2 3 5 5 4 4 3 3 3 1 2 5 1 2 5 1 4 5 3 5 6 3 2 4 5 4 5 2 5 3 3
[171] 3 5 4 3 3 5 3 5 5 1 5 5 2 4 4 5 5 2 5 4 6 3 6 3 5 2 5 1 5 5 5 5
> cat2018_raw
   [1]  5 5 5 6 5 3 4 5 3 3 3 5 3 1 5 3 5 3 5 5 3 5 5 3 4 6 6 4 5 3 5 5 3 6
  [35] 5 5 3 1 4 5 5 5 3 5 6 1 3 6 3 5 3 5 5 5 5 5 5 3 5 5 3 3 2 6 5 5 6 3
  [69] 4 3 3 3 5 5 5 5 5 5 6 6 3 4 4 5 5 3 2 3 6 6 3 1 1 5 3 5 6 5 1 5 5 6
[103] 5 5 5 5 3 6 3 6 5 5 6 5 5 3 3 5 6 5 3 5 5 6 5 5 5 3 3 5 3 3 5 5 5 5
[137] 3 1 5 3 6 5 5 5 4 3 3 6 6 6 5 1 3 5 1 4 5 3 5 6 3 3 5 5 4 5 6 5 3 3
[171] 3 5 5 3 3 5 6 5 5 1 5 5 3 4 4 5 5 3 5 4 6 3 6 3 5 3 5 1 5 5 5 5
> cat2019_raw
   [1]  5 5 5 3 5 3 4 4 3 3 3 5 3 1 5 3 1 3 5 5 3 5 5 3 4 6 2 4 5 3 5 5 3 3
  [35] 5 5 2 1 4 5 5 5 3 5 2 1 2 2 3 5 3 5 5 5 5 5 5 3 5 5 3 3 2 2 5 5 6 3
  [69] 4 2 3 3 5 5 5 4 5 5 6 2 3 4 4 5 5 3 6 2 2 3 3 1 1 5 3 5 6 5 1 5 5 2
[103] 5 5 5 5 3 2 3 3 5 5 6 5 4 2 3 5 2 4 3 5 5 2 5 5 5 3 3 5 3 3 5 5 5 5
[137] 3 1 5 3 2 5 5 4 4 3 3 2 6 2 5 1 3 5 1 4 5 3 5 3 2 3 5 5 4 5 2 5 3 3
[171] 3 5 4 3 3 5 2 5 5 1 5 5 3 4 4 5 5 3 5 4 6 3 6 3 5 3 5 1 5 5 5 5
> cat2020_raw
   [1]  5 5 5 3 5 3 4 4 3 3 2 5 3 1 5 3 5 3 5 5 3 5 5 3 4 3 3 4 5 3 5 5 3 3
  [35] 5 5 3 1 4 5 5 5 3 5 3 5 2 2 3 5 3 5 5 5 5 5 5 3 5 5 3 3 2 2 5 5 6 3
  [69] 3 2 3 3 5 5 5 5 5 5 6 3 3 4 4 5 5 3 6 3 2 3 3 1 1 5 3 5 2 5 1 5 5 2
[103] 5 5 5 5 3 3 3 3 5 5 6 5 5 2 3 5 2 5 3 5 5 6 5 5 5 3 3 5 3 3 5 5 5 5
[137] 3 1 5 3 2 5 5 4 5 3 3 3 6 3 5 1 3 5 1 4 5 3 5 3 3 2 5 5 4 5 2 5 3 3
[171] 3 5 5 3 3 5 2 5 5 1 5 5 2 4 4 5 5 3 5 5 6 3 6 3 5 3 5 1 5 5 5 5
> cat2021_raw
   [1]  5 5 5 3 5 3 5 4 3 3 2 5 3 1 5 3 5 3 5 5 3 5 5 3 4 3 3 4 5 3 5 5 3 3
  [35] 5 5 3 1 4 5 5 5 3 5 3 5 2 3 3 5 3 5 5 5 5 5 5 3 5 5 3 3 2 2 5 5 6 3
  [69] 4 2 3 3 5 5 5 5 5 5 6 3 3 4 4 5 5 3 6 3 2 3 3 1 5 5 3 5 2 5 1 5 5 3
[103] 5 5 5 5 3 3 3 6 5 5 6 1 5 3 3 5 3 5 3 5 5 6 5 5 5 3 3 5 3 3 5 5 5 4
[137] 3 1 5 3 4 5 5 5 4 3 3 3 6 3 5 1 3 5 1 4 5 3 5 3 3 2 5 5 4 5 2 5 3 3
[171] 3 5 5 3 3 5 2 5 5 1 5 5 3 2 4 5 5 3 5 4 6 3 6 3 5 3 5 1 5 5 5 5
> cat2022_raw
   [1]  5 4 5 3 5 3 5 5 3 3 2 5 3 1 5 3 5 3 5 5 3 5 5 3 5 3 3 5 5 3 5 5 2 2
  [35] 5 5 3 1 4 5 5 5 3 5 2 5 2 2 3 5 3 4 5 5 5 5 5 3 5 5 3 3 2 2 5 5 6 3
  [69] 4 2 3 3 5 5 5 5 5 5 6 2 3 4 4 5 5 2 6 3 2 2 3 1 1 5 3 5 2 5 1 5 5 2
[103] 5 5 5 5 3 2 2 6 5 5 6 5 5 2 3 5 2 5 3 5 5 6 5 5 5 3 3 5 3 3 5 5 5 6
[137] 2 1 5 3 5 5 5 5 5 3 2 3 6 3 5 1 3 5 1 5 4 3 5 3 2 2 5 5 4 5 2 5 3 3
[171] 3 5 5 3 2 5 2 5 5 1 5 5 3 2 5 5 5 3 5 5 6 3 6 3 5 3 5 1 5 5 5 5
> cat2023_raw
   [1]  5 4 5 3 5 3 5 4 3 3 2 5 3 1 5 3 5 3 5 5 3 5 5 3 5 3 3 5 5 3 5 5 3 2
  [35] 5 5 2 1 5 5 5 5 3 5 2 5 2 2 2 5 3 4 5 5 5 5 5 3 5 5 3 3 2 2 5 5 6 3
  [69] 4 3 3 3 5 5 5 5 5 5 6 3 3 4 4 5 5 2 6 3 2 2 3 1 1 5 3 5 2 5 1 5 5 3
[103] 5 5 5 5 3 2 2 6 5 5 6 5 5 3 3 5 3 5 3 4 5 2 5 5 5 3 3 5 2 3 5 5 5 2
[137] 3 1 5 3 5 5 5 5 4 3 2 3 6 3 5 1 3 5 1 5 5 3 5 6 3 2 5 5 4 5 2 5 3 3
[171] 3 5 5 3 2 5 2 5 5 1 5 5 3 2 5 5 5 3 5 5 6 3 6 3 5 3 5 1 5 5 5 5

Appendix C. V-Space Categorization of 202 Countries Year by Year: 2013–2023

> cat2013_Vspace;
   [1]  5 5 5 3 5 3 5 1 3 3 3 5 3 1 5 3 5 3 4 5 3
  [22] 5 5 3 2 3 3 5 5 3 5 5 3 3 5 5 6 1 4 5 5 5
  [43] 3 5 3 1 6 3 3 5 3 5 5 5 5 5 5 3 1 5 5 3 2
  [64] 2 5 5 2 3 2 2 3 3 5 5 5 5 5 5 6 3 3 4 5 5
  [85] 5 3 2 3 3 3 3 1 5 5 4 5 2 5 1 5 5 3 5 5 5
[106] 5 3 3 3 6 5 5 6 5 5 3 4 5 3 4 3 5 5 3 5 5
[127] 5 3 4 5 3 3 5 5 5 5 3 6 5 3 2 5 5 5 4 3 3
[148] 2 3 2 5 1 3 5 1 5 4 3 5 6 3 3 5 5 2 5 2 5
[169] 3 3 3 5 4 3 3 5 3 5 5 5 5 5 5 2 4 4 5 4 5
[190] 5 6 6 6 3 5 3 5 1 5 5 5 5
> cat2014_Vspace;
   [1]  5 5 5 3 5 3 5 5 3 3 3 5 3 1 5 3 5 3 4 5 3
  [22] 5 5 3 4 6 3 5 5 3 5 5 3 3 5 5 3 1 4 5 5 5
  [43] 3 5 3 5 3 3 3 5 3 5 5 5 5 5 5 3 5 5 5 3 2
  [64] 3 5 5 2 3 4 2 3 3 5 5 5 5 5 5 6 3 3 4 4 5
  [85] 5 3 2 3 2 6 3 1 5 5 3 5 2 5 1 5 5 3 5 4 5
[106] 5 3 3 3 6 5 5 6 5 5 3 4 5 3 5 3 5 5 3 5 5
[127] 5 3 4 5 3 3 5 5 5 3 3 6 5 3 2 5 5 4 4 3 3
[148] 3 6 2 5 1 3 5 1 4 3 6 5 6 3 3 5 5 2 5 2 1
[169] 3 3 3 5 4 3 3 5 6 5 5 1 5 5 3 2 4 1 5 3 5
[190] 5 6 6 6 3 5 3 5 5 5 5 5 5
> cat2015_Vspace;
   [1]  5 5 5 6 5 3 5 5 3 3 3 5 3 1 5 3 5 3 5 5 6
  [22] 5 5 6 4 6 2 5 5 3 5 5 6 6 5 5 3 1 4 5 5 5
  [43] 3 5 3 5 3 3 6 5 3 5 5 5 5 5 5 3 5 5 5 6 2
  [64] 3 5 5 2 3 4 2 3 3 5 5 5 5 5 5 6 3 3 4 5 5
  [85] 5 6 2 3 2 6 3 1 5 5 3 5 2 5 1 5 5 3 5 5 5
[106] 5 6 3 6 6 5 5 6 5 5 3 3 5 6 5 3 5 5 3 5 5
[127] 5 3 3 5 3 6 5 5 5 5 3 6 5 3 3 5 5 5 4 3 3
[148] 3 6 3 5 1 3 5 1 5 3 6 5 6 3 3 5 5 2 5 2 5
[169] 3 3 3 5 5 3 6 5 6 5 5 1 5 5 5 3 4 1 5 3 5
[190] 5 6 6 6 3 5 4 5 1 5 5 5 5
> cat2016_Vspace;
   [1]  5 5 5 3 5 3 4 5 3 3 3 5 3 1 5 3 5 3 5 5 3
  [22] 5 5 3 4 3 3 5 5 3 5 5 3 3 5 5 3 1 4 5 5 5
  [43] 3 5 3 5 3 3 3 5 3 5 5 5 5 5 5 3 5 5 3 3 2
  [64] 3 5 5 2 3 4 2 3 3 5 5 5 5 5 5 6 3 3 4 5 5
  [85] 5 3 2 3 3 3 3 1 5 5 3 5 6 5 5 5 5 3 5 5 5
[106] 5 3 3 3 3 5 5 6 5 5 3 3 5 3 5 3 5 5 3 5 5
[127] 5 3 3 5 3 3 5 5 5 5 3 3 5 3 3 5 5 5 4 3 3
[148] 3 3 3 5 1 3 5 1 5 5 3 5 3 3 3 5 5 2 5 3 5
[169] 3 3 3 5 5 3 3 5 3 5 5 1 5 5 3 3 4 1 5 3 5
[190] 5 6 6 6 3 5 4 5 1 5 5 5 5
> cat2017_Vspace;
   [1]  5 5 5 3 5 3 4 5 3 3 3 5 3 1 5 3 5 2 5 5 3
  [22] 5 5 3 4 6 3 5 5 3 5 5 3 3 5 5 3 1 5 5 5 5
  [43] 3 5 3 5 3 3 3 5 3 5 5 5 5 5 5 3 5 5 3 3 2
  [64] 2 5 5 6 3 4 2 3 3 5 5 5 5 5 5 6 3 3 4 5 5
  [85] 5 3 6 2 2 3 3 1 5 5 3 5 6 5 5 5 5 3 5 5 5
[106] 5 3 3 3 6 5 5 6 5 5 3 3 5 3 5 3 5 5 6 5 5
[127] 5 3 3 5 3 3 5 5 5 5 3 6 5 3 2 5 5 5 5 3 3
[148] 3 6 2 5 1 3 5 1 4 5 3 5 6 3 3 5 5 2 5 2 5
[169] 3 3 3 5 5 3 3 5 3 5 5 1 5 5 3 2 4 5 5 3 5
[190] 5 6 3 6 3 5 3 5 5 5 5 5 5
> cat2018_Vspace;
   [1]  5 5 5 3 5 3 4 5 3 3 3 5 3 1 5 3 5 3 5 5 3
  [22] 5 5 3 5 6 3 5 5 3 5 5 3 6 5 5 3 1 5 5 5 5
  [43] 3 5 3 5 3 3 3 5 3 5 5 5 5 5 5 3 5 5 6 3 2
  [64] 3 5 5 2 3 4 3 3 3 5 5 5 5 5 5 6 3 3 4 5 5
  [85] 5 3 2 3 3 3 3 1 5 5 3 5 6 5 1 5 5 6 5 5 5
[106] 5 3 3 3 6 5 5 6 5 5 3 3 5 3 5 3 5 5 6 5 5
[127] 5 3 3 5 3 3 5 5 5 1 3 6 5 3 3 5 5 5 5 3 3
[148] 3 6 3 5 1 3 5 1 5 5 3 5 6 3 3 5 5 2 5 3 5
[169] 3 3 3 5 5 3 3 5 3 5 5 1 5 5 3 3 5 5 5 3 5
[190] 5 6 3 6 3 5 3 5 5 5 5 5 5
> cat2019_Vspace;
   [1]  5 5 5 3 5 3 4 4 3 3 3 5 3 1 5 3 5 3 5 5 3
  [22] 5 5 3 4 6 3 5 5 3 5 5 3 3 5 5 2 1 4 5 5 5
  [43] 3 5 3 5 3 3 3 5 3 5 5 5 5 5 5 3 5 5 6 3 2
  [64] 2 5 5 6 3 4 3 3 3 5 5 5 5 5 5 6 3 3 4 5 5
  [85] 5 3 6 3 3 3 3 1 1 5 3 5 6 5 1 5 5 2 5 5 5
[106] 5 3 3 3 6 5 5 6 5 5 3 3 5 3 5 3 5 5 6 5 5
[127] 5 3 3 5 3 3 5 5 5 5 3 6 5 3 3 5 5 4 5 3 3
[148] 3 6 3 5 1 3 5 1 5 5 3 5 6 3 3 5 5 2 5 3 5
[169] 3 3 3 5 5 3 3 5 3 5 5 1 5 5 3 4 4 5 5 3 5
[190] 5 6 3 6 3 5 5 5 5 5 5 5 5
> cat2020_Vspace;
   [1]  5 5 5 3 5 3 5 4 3 3 3 5 3 1 5 3 5 3 5 5 3
  [22] 5 5 3 5 6 3 5 5 3 5 5 3 3 5 5 3 1 5 5 5 5
  [43] 3 5 3 5 2 3 3 5 3 5 5 5 5 5 5 3 5 5 3 3 2
  [64] 2 5 5 6 3 3 3 3 3 5 5 5 5 5 5 6 3 3 4 4 5
  [85] 5 3 6 3 3 3 3 1 5 5 3 5 2 5 1 5 5 3 5 5 5
[106] 5 3 3 3 6 5 5 6 5 5 3 3 5 3 5 3 5 5 3 5 5
[127] 5 3 3 5 3 3 5 5 5 5 3 6 5 3 3 5 5 5 5 3 3
[148] 3 6 3 5 1 3 5 1 5 5 3 5 6 3 3 5 5 3 5 3 5
[169] 3 3 3 5 5 3 3 5 2 5 5 1 5 5 3 3 5 5 5 3 5
[190] 5 6 3 6 3 5 3 5 5 5 5 5 5
> cat2021_Vspace;
   [1]  5 5 5 3 5 3 5 4 3 3 3 5 3 1 5 3 5 3 5 5 3
  [22] 5 5 3 5 6 3 5 5 3 5 5 3 3 5 5 3 1 4 5 5 5
  [43] 3 5 3 5 3 3 2 5 3 5 5 5 5 5 5 3 5 5 6 2 2
  [64] 2 5 5 6 3 5 3 3 3 5 5 5 5 5 5 6 3 3 4 4 5
  [85] 5 3 6 3 3 3 3 1 5 5 3 5 2 5 1 5 5 3 5 5 5
[106] 5 3 3 3 6 5 5 6 5 5 3 3 5 3 5 3 5 5 3 5 5
[127] 5 3 3 5 3 3 5 5 5 5 3 1 5 3 3 5 5 5 5 3 3
[148] 3 6 3 5 1 3 5 1 5 5 3 5 6 3 3 5 5 3 5 3 5
[169] 3 3 3 5 5 3 2 5 2 5 5 1 5 5 3 3 5 5 5 3 5
[190] 5 6 3 6 3 5 3 5 5 5 5 5 5
> cat2022_Vspace;
   [1]  5 5 5 3 5 3 5 5 3 3 2 5 3 6 5 3 5 3 5 5 3
  [22] 5 5 3 5 6 3 5 5 3 5 5 3 3 5 5 3 1 5 5 5 5
  [43] 3 5 3 5 3 3 2 5 3 5 5 5 5 5 5 3 5 5 6 3 2
  [64] 2 5 5 6 3 5 3 3 3 5 5 5 5 5 5 6 3 3 4 4 5
  [85] 5 3 6 3 3 2 3 1 1 5 3 5 2 5 6 5 5 3 5 5 5
[106] 5 3 3 3 6 5 5 6 5 5 3 3 5 3 5 3 5 5 3 5 5
[127] 5 3 3 5 3 3 5 5 5 3 3 1 5 3 5 5 5 5 5 3 3
[148] 3 6 3 5 6 3 5 4 5 5 3 5 6 3 3 5 5 5 5 3 5
[169] 3 3 3 5 5 3 3 5 2 5 5 1 5 5 3 3 5 5 5 3 5
[190] 5 6 2 6 3 5 3 5 1 5 5 5 5
> cat2023_Vspace;
   [1]  5 3 5 3 5 3 5 5 3 3 3 5 3 6 5 3 5 3 5 5 3
  [22] 5 5 3 5 6 3 5 5 3 5 5 3 3 5 5 3 1 5 5 5 5
  [43] 3 5 3 5 3 3 2 5 3 3 5 5 5 5 5 3 5 5 3 2 2
  [64] 2 5 5 6 3 5 3 3 3 5 5 5 5 5 5 6 3 3 4 4 5
  [85] 5 3 6 3 3 2 3 4 5 5 3 5 2 5 6 5 5 3 5 5 5
[106] 5 3 3 3 6 5 5 6 5 5 3 3 5 3 5 3 5 5 3 5 5
[127] 5 3 3 5 2 3 5 5 5 3 3 6 5 3 5 5 5 5 5 3 3
[148] 3 6 3 5 6 3 5 6 5 5 3 5 6 3 3 5 5 5 5 3 5
[169] 3 3 3 5 5 3 2 5 2 5 5 1 5 5 3 5 5 5 5 3 5
[190] 5 6 3 6 3 5 3 5 5 5 5 5 5

References

  1. Deng, Y. On p-adic Gram–Schmidt Orthogonalization Process. Front. Math. 2024. [Google Scholar] [CrossRef]
  2. Huang, X.; Caron, M.; Hindson, D. A recursive Gram-Schmidt orthonormalization procedure and its application to communications. In Proceedings of the 2001 IEEE Third Workshop on Signal Processing Advances in Wireless Communications (SPAWC’01), Taiwan, China, 20–23 March 2001; Workshop Proceedings (Cat. No.01EX471). pp. 340–343. [Google Scholar]
  3. Balabanov, O.; Grigori, L. Randomized Gram-Schmidt process with application to GMRES. SIAM J. Sci. Comput. 2022, 44, A1450–A1474. [Google Scholar] [CrossRef]
  4. Ford, W. Numerical Linear Algebra with Applications: Using MATLAB and Octave; Academic Press: Cambridge, MA, USA, 2014. [Google Scholar]
  5. Morrison, D.D. Remarks on the Unitary Triangularization of a Nonsymmetric Matrix. J. ACM 1960, 7, 185–186. [Google Scholar] [CrossRef]
  6. Skogholt, J.; Lil, K.H.; Næs, T.; Smilde, A.K.; Indahl, U.G. Selection of principal variables through a modified Gram–Schmidt process with and without supervision. J. Chemom. 2023, 37, e3510. [Google Scholar] [CrossRef]
  7. Robinson, P.J.; Saranraj, A. Intuitionistic Fuzzy Gram-Schmidt Orthogonalized Artificial Neural Network for Solving MAGDM Problems. Indian J. Sci. Technol. 2024, 17, 2529–2537. [Google Scholar] [CrossRef]
  8. Dax, A. A modified Gram–Schmidt algorithm with iterative orthogonalization and column pivoting. Linear Algebra Its Appl. 2000, 310, 25–42. [Google Scholar] [CrossRef]
  9. Trefethen, L.N.; Bau, D. Numerical Linear Algebra; SIAM: New Delhi, India, 1997. [Google Scholar]
  10. Bremner, M.R. Lattice Basis Reduction: An Introduction to the LLL Algorithm and Its Applications; CRC Press: Boca Raton, FL, USA, 2012. [Google Scholar]
  11. Available online: http://www.noahsd.com/mini_lattices/02__GS_and_LLL.pdf (accessed on 10 February 2025).
  12. Giraud, L.; Langou, J.; Rozložník, M.; Eshof, J.V.D. Rounding error analysis of the classical Gram–Schmidt orthogonalization process. Numer. Math. 2005, 101, 87–100. [Google Scholar] [CrossRef]
  13. Giraud, L.; Langou, J.; Rozloznik, M. The loss of orthogonality in the Gram–Schmidt orthogonalization process. Comput. Math. Appl. 2005, 50, 1069–1075. [Google Scholar] [CrossRef]
  14. Imakura, A.; Yamamoto, Y. Efficient implementations of the modified Gram–Schmidt orthogonalization with a non-standard inner product. Jpn. J. Indust. Appl. Math. 2019, 36, 619–641. [Google Scholar] [CrossRef]
  15. Sreedharan, V.P. A note on the modified Gram-Schmidt process. Int. J. Comput. Math. 1988, 24, 277–290. [Google Scholar] [CrossRef]
Figure 1. Beta directional tree: we name this structure a beta directional tree in the sense that all the directed edges are only feasible from a higher-indexed node to a lower-indexed node. There are $k-i+1$ nodes, namely $\{i, i+1, i+2, \dots, k\} \subseteq \mathbb{N}$. Each node p is associated with two quantities: $[p\,i]$, the feasible paths from p to i; and $\beta_{[p\,i]}$, the multiplicative values of all the $[p\,q]$-indexed $\beta$ values. The edge from p to q ($p > q$) is assigned the weight $\beta_{p,q}$, which is defined as $\beta_{p,q} = \frac{1}{\|v_q\|^{2}}\,u_p \cdot v_q$.
Figure 2. Structures of $\beta_{k\,i}$ and $\beta_{[k\,i]}$: the (top-left) figure is a visual description of Proposition 3, $\beta_{k\,i} = \frac{1}{\|v_i\|^{2}} \big[ \sum_{q=1}^{i} \beta_{[i\,q]} (u_k \cdot u_q) \big]$. The (top-right) figure depicts the matrix multiplication in Definition 2: $\beta_{[k\,i]} := \sum_{j=i}^{k-1} \big( \beta_{k,j} \cdot \beta_{[j\,i]} \big) = [\beta_{k,i},\ \beta_{k,i+1},\ \dots,\ \beta_{k,k-1}]\,[\beta_{[i\,i]},\ \beta_{[(i+1)\,i]},\ \dots,\ \beta_{[(k-1)\,i]}]^{t}$. The (bottom) figures depict the result from Lemma 2: $v_{k+1} = \sum_{j=1}^{k+1} \beta_{[(k+1)\,j]} u_j$.
Figure 3. The inductive Gram–Schmidt table for coefficients: given the premises $\{\beta_{i,j}\}_{1\leq j<i\leq k}$, $\{\beta_{[i\,j]}\}_{1\leq j<i\leq k}$, and $\{v_i\}_{i=1}^{k}$, this table tabulates how to inductively find the results $\{\beta_{k+1,j}\}_{1\leq j\leq k}$, $\{\beta_{[(k+1)\,j]}\}_{1\leq j\leq k}$, and $v_{k+1}$. Inductively, the following then applies: $v_{k+1} = \sum_{i=1}^{k+1} \beta_{[(k+1)\,i]} u_i = \beta_{[(k+1)\,1]} u_1 + \beta_{[(k+1)\,2]} u_2 + \dots + \beta_{[(k+1)\,k]} u_k + \beta_{[(k+1)\,(k+1)]} u_{k+1}$. In the table, these are represented by the notation ⊙. Here, let us demonstrate how to resolve some of the terms of the coefficients: $\beta_{[(k+1)\,1]} = [\beta_{k+1,1}, \beta_{k+1,2}, \dots, \beta_{k+1,k}] \cdot [\beta_{[1\,1]}, \beta_{[2\,1]}, \dots, \beta_{[k\,1]}]$ (refers to the red dashed lines); $\beta_{[(k+1)\,2]} = [\beta_{k+1,2}, \beta_{k+1,3}, \dots, \beta_{k+1,k}] \cdot [\beta_{[2\,2]}, \beta_{[3\,2]}, \dots, \beta_{[k\,2]}]$ (refers to the blue dashed lines); $\beta_{[(k+1)\,3]} = [\beta_{k+1,3}, \beta_{k+1,4}, \dots, \beta_{k+1,k}] \cdot [\beta_{[3\,3]}, \beta_{[4\,3]}, \dots, \beta_{[k\,3]}]$ (refers to the green dashed lines).
Figure 4. Inductive procedures for calculating $V_h \equiv [v_1\ v_2\ \cdots\ v_{h-1}\ v_h]$: this figure demonstrates the inductive output of the non-self-referential representations of the orthogonal vectors $V_h$, given the input vector-based matrix $U_h = [u_1\ u_2\ \cdots\ u_{h-1}\ u_h]$. In the figure, $U_h^{1} = [u_h \cdot u_1,\ u_h \cdot u_2,\ \dots,\ u_h \cdot u_{h-1}]$; $N(V_{h-1}) = \begin{bmatrix} V_{h-1}^{\Delta} & 0 \\ 0 & 1 \end{bmatrix}$; $B_h = [\beta_{h,1},\ \beta_{h,2},\ \dots,\ \beta_{h,h-2},\ \beta_{h,h-1}] = U_h^{1} \begin{bmatrix} B_{\Delta_{h-1}}^{+} & 0_{(h-1)\times 1} \\ 0_{1\times(h-1)} & \beta_{[h\,h]} \end{bmatrix}^{t} N(V_{h-1})$; $B_h^{+} = [\beta_{[h\,1]},\ \beta_{[h\,2]},\ \dots,\ \beta_{[h\,(h-2)]},\ \beta_{[h\,(h-1)]}] = B_h \begin{bmatrix} B_{\Delta_{h-1}}^{+} & 0_{(h-1)\times 1} \\ 0_{1\times(h-1)} & 1 \end{bmatrix}$; $B_{\Delta_k}^{+} = \begin{bmatrix} B_{\Delta_{k-1}}^{+} & 0_{(k-1)\times 1} \\ U_k^{1}\,(B_{\Delta_{k-1}}^{+})^{t}\left(\frac{1}{[U_k^{1}(B_{\Delta_{k-1}}^{+})^{t}]^{c}\,[U_k^{1}(B_{\Delta_{k-1}}^{+})^{t}]}\right) B_{\Delta_{k-1}}^{+} & 1 \end{bmatrix}$.
Figure 6. Frequency distribution of the six categories from Year 2013 to 2023 by NsrGSP: These frequencies represent the relative proportion of countries that lie in each category. In total, there are six categories: 1, 2, 3, 4, 5, and 6. Each of the 202 countries was assigned a distinct category, and the overall distribution is captured by these histograms. The underlying method (NsrGSP) is described as an algorithm in Section Application Two. The features for the six representative countries (China, France, Germany, India, Russia, and the USA) were further analyzed via another six orthogonal (hidden) features; one could regard these six new features as six new dummy countries. By associating (see Table 2) these six dummy countries with the original six chosen representative countries, some of the dummy countries become immediately explainable: Dummy Country 1 is associated with China, and Dummy Country 2 is associated with the USA. This is desirable, since we can find the dominating factors/features/countries to represent the clustering/classification. These temporal histograms, named the Histogram of catxxxx, show that the China-style and USA-style governance systems dominate this world, with no distinct difference in their dominance.
Figure 7. Frequency distribution of the six categories from Year 2013 to 2023 by the nonGSP: These frequencies represent the relative proportion of countries that lie in each category. In total, there are six categories: 1, 2, 3, 4, 5, and 6. Each of the 202 countries was assigned a distinct category, and the overall distribution is captured by these histograms. The underlying method, named nonGSP analysis (or nonGSP), is the usual approach of calculating cosine values to analyze the similarities between vectors/data. In our case, the representative vectors were six subjectively chosen countries (China, France, Germany, India, Russia, and the USA). We calculated the cosine values of the 202 countries' IDs with these six representative IDs and then picked the most similar country (among the six representative countries) for each of the 202 countries. The results were then plotted as a histogram of $cat_{xxx\_raw}$, where raw means no further processing was applied to the given IDs of the representative countries. From the temporal histograms, one can observe that Germany and Russia dominate as the main representatives, since their systems mix various factors that are also shared with other countries. Since the IDs were not further processed, such a classification is less informative; we could not easily perceive the underlying differences.
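The nonGSP classification described in the caption can be sketched as follows. This is a minimal illustration, not the authors' code; the function name `classify_nongsp` and the toy data shapes are our own assumptions. Each country's ID vector is compared, by cosine similarity, against the six representative ID vectors, and the country is assigned the category of the most similar representative.

```python
import numpy as np

def cosine(u, v):
    # Cosine similarity between two indicator vectors.
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))

def classify_nongsp(ids, reps):
    """Assign each country the category of its most similar representative.

    ids  : (n_countries, dim) array of raw governance-indicator vectors
    reps : (n_reps, dim) array, one row per representative country
    Returns an array of category labels in 1..n_reps.
    """
    sims = np.array([[cosine(x, r) for r in reps] for x in ids])
    return sims.argmax(axis=1) + 1  # categories are numbered from 1
```

With the six representative IDs of Table 1 as `reps` and the 202 countries' IDs as `ids`, this yields the raw category assignment plotted in the histograms.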
Figure 8. Frequency distribution of the six categories from Year 2013 to 2023 by CGSP: These frequencies represent the relative proportion of countries that lie in each category. In total, there were six categories: 1, 2, 3, 4, 5, and 6. Each of the 202 countries was assigned a distinct category, and the overall distribution is captured by these histograms. The underlying method, named Classical GSP analysis (or CGSP), projects all the original IDs onto the new (sub-)space spanned by the orthogonal vectors derived from the self-referential (or classical) GSP. The cosine values for the similarity computation are based on the coefficient vectors in this newly constructed space. Theoretically, this is appealing; experimentally, however, it yields much the same classification results as the nonGSP. It is also much more time consuming, because the IDs of all 202 countries must be projected onto the newly constructed space.
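The CGSP variant in the caption can be sketched along the same lines. Again this is an illustrative sketch under our own naming (`gram_schmidt`, `classify_cgsp`), not the paper's implementation: the six representative IDs are orthonormalized by the classical (self-referential) GSP, every ID is expressed as a coefficient vector in that basis, and the cosine comparison is carried out on the coefficient vectors.

```python
import numpy as np

def gram_schmidt(U):
    # Classical (self-referential) GSP: orthonormalize the columns of U.
    Q = np.zeros_like(U, dtype=float)
    for k in range(U.shape[1]):
        v = U[:, k].astype(float)
        for j in range(k):
            v -= (Q[:, j] @ U[:, k]) * Q[:, j]   # subtract projections onto earlier q's
        Q[:, k] = v / np.linalg.norm(v)
    return Q

def classify_cgsp(ids, reps):
    # Project every ID onto the space spanned by the orthonormalized
    # representative vectors; compare the coefficient vectors by cosine.
    Q = gram_schmidt(reps.T)            # columns of reps.T span the new (sub-)space
    coeff = ids @ Q                      # coefficients of each country ID in the new basis
    rep_coeff = reps @ Q                 # coefficients of the representatives themselves
    sims = coeff @ rep_coeff.T / (
        np.linalg.norm(coeff, axis=1, keepdims=True)
        * np.linalg.norm(rep_coeff, axis=1))
    return sims.argmax(axis=1) + 1
```

Because the basis is orthonormal, when the representatives span the full space the coefficient-space cosines coincide with the raw cosines, which is consistent with the caption's observation that CGSP classifies much like the nonGSP while costing an extra projection for every one of the 202 countries.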
Table 1. The values of the 6 governance indicators for China, France, Germany, India, Russia, and the USA in Year 2013.
| Indicators | c2013 | f2013 | g2013 | i2013 | r2013 | u2013 |
|------------|-------|-------|-------|-------|-------|-------|
| ID1        | 0.36  | 1.3   | 1.8   | −0.52 | 1.02  | 1.31  |
| ID2        | 0.01  | 1.48  | 1.51  | 0.16  | 0.45  | 1.52  |
| ID3        | 0.54  | 0.45  | 0.93  | 1.23  | 0.74  | 0.64  |
| ID4        | 0.33  | 1.15  | 1.54  | 0.48  | 0.36  | 1.26  |
| ID5        | 0.53  | 1.4   | 1.64  | 0.05  | 0.82  | 1.55  |
| ID6        | 1.63  | 1.22  | 1.41  | 0.43  | 1.02  | 1.10  |
Table 2. The orthogonal features associated with each representative country from 2013 to 2023: The orthogonal features associated with each country are extremely stable. This indicates that the governance systems of the representative countries are consistent, and that their representations are reliable for the classification problem.
| Year | China | France | Germany | India | Russia | USA |
|------|-------|--------|---------|-------|--------|-----|
| 2013 | 1     | 2      | 2       | 1     | 1      | 2   |
| 2014 | 1     | 2      | 2       | 1     | 1      | 2   |
| 2015 | 1     | 2      | 2       | 1     | 1      | 2   |
| 2016 | 1     | 2      | 2       | 1     | 1      | 2   |
| 2017 | 1     | 2      | 2       | 1     | 1      | 2   |
| 2018 | 1     | 2      | 2       | 1     | 1      | 2   |
| 2019 | 1     | 2      | 2       | 1     | 1      | 2   |
| 2020 | 1     | 2      | 2       | 1     | 1      | 2   |
| 2021 | 1     | 2      | 2       | 1     | 1      | 2   |
| 2022 | 1     | 2      | 2       | 1     | 1      | 2   |
| 2023 | 1     | 2      | 2       | 1     | 1      | 2   |
Share and Cite

MDPI and ACS Style

Chen, R.-M. A Non-Self-Referential Characterization of the Gram–Schmidt Process via Computational Induction. Mathematics 2025, 13, 768. https://doi.org/10.3390/math13050768