Identities Generalizing the Theorems of Pappus and Desargues

: The Theorems of Pappus and Desargues (for the projective plane over a field) are generalized here by two identities involving determinants and cross products. These identities are proved to hold in the three-dimensional vector space over a field. They are closely related to the Arguesian identity in lattice theory and to Cayley-Grassmann identities in invariant theory.

The identities in lattice theory use join and meet and apply to lattices. The identities in invariant theory use the wedge (exterior) product (corresponding to join), the bracket (a multilinear alternating operation like the determinant), and an operation corresponding to meet. The identities apply to Cayley-Grassmann algebras (exterior algebras with a bracket operation). The connections between the two kinds of identities are not straightforward and are studied in several of the papers listed above.
The identities presented in this paper have a more mundane origin. In the spring semester of 1997, the author taught the second semester of college geometry out of the text by Ryan [40]. The text's analytical approach included the fact that three points (or lines) in the projective plane over a field are collinear (or concurrent) if and only if the determinant of their homogeneous coordinates is zero; see (6) below. Pappus and Desargues both assert that if certain collinearities or concurrences hold, then others do, that is, if some numbers are zero, then so are others. Clearly, then, there must be algebraic formulas that assert such dependences between determinants and from which the two theorems can be deduced. Such formulas were found and included in the course notes [41]; parts of those notes appear in this paper. Section 2 presents the two identities along with definitions of their operations. They use cross products and determinants rather than the joins, meets, and brackets of invariant theory [24]. Section 3 is a review of the projective plane over a field. Section 4 derives the theorems of Pappus and Desargues from the identities. Section 5 shows how cross product relates to join and meet, followed by a Discussion, an Acknowledgement, and an Appendix A containing proofs of the identities. Good survey papers on the theorems of Pappus and Desargues include [42,43].

Cross Products and Determinants
Let F be a field. The elements of F³ are written as column vectors. For a = (x₀, x₁, x₂)ᵀ and b = (y₀, y₁, y₂)ᵀ in F³, let

    a × b = (x₁y₂ − x₂y₁, x₂y₀ − x₀y₂, x₀y₁ − x₁y₀)ᵀ

be their cross product, let ⟨a, b⟩ = x₀y₀ + x₁y₁ + x₂y₂ be their inner product, and let det[a, b, c] denote the determinant of the 3 × 3 matrix whose columns are a, b, c.
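These operations can be modeled with exact integer arithmetic for quick sanity checks. The helper names `cross`, `det3`, and `dot` below are ours, introduced only for illustration.

```python
# Exact cross product, determinant, and inner product on F^3
# (here F = Q, using Python ints). Helper names are ours.
def cross(u, v):
    return [u[1]*v[2] - u[2]*v[1],
            u[2]*v[0] - u[0]*v[2],
            u[0]*v[1] - u[1]*v[0]]

def det3(u, v, w):
    # determinant of the 3x3 matrix with columns u, v, w
    return (u[0]*(v[1]*w[2] - v[2]*w[1])
          - v[0]*(u[1]*w[2] - u[2]*w[1])
          + w[0]*(u[1]*v[2] - u[2]*v[1]))

def dot(u, v):
    return sum(x*y for x, y in zip(u, v))

a, b, c = [1, 2, 3], [4, 5, 7], [2, 0, 5]
assert det3(a, b, c) == dot(a, cross(b, c))   # box product: <a, b x c> = det[a, b, c]
assert dot(a, cross(a, b)) == 0               # a x b is perpendicular to a
```

Using plain Python integers keeps every check exact, avoiding the rounding questions a floating-point determinant would raise.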

The Projective Plane over F
The points A, B, . . . of the projective plane over F are the one-dimensional subspaces of the three-dimensional vector space F³ over F, and its lines λ, µ, . . . are the two-dimensional subspaces of F³. A point A is on a line λ if A is a subset of λ.

Every non-zero vector in F³ determines both a point and a line. Suppose 0 ≠ a ∈ F³. Then A = {ra : r ∈ F} is the 1-dimensional subspace generated by a, so A is a point in the projective plane over F, and λ = {x : ⟨x, a⟩ = 0} is the 2-dimensional subspace of vectors perpendicular to a, so λ is a line in the projective plane over F. The vector a is referred to as homogeneous coordinates for the point A and the line λ. The incidence relation between a point and a line holds when their homogeneous coordinates are perpendicular, for if 0 ≠ a, b ∈ F³, then the point with coordinates a is on the line with coordinates b if and only if ⟨a, b⟩ = 0.

An axiom of projective geometry (every two points are on a unique line) takes the form, in the projective plane over a field, that every two 1-dimensional subspaces are contained in a unique 2-dimensional subspace. The 2-dimensional subspace containing two 1-dimensional subspaces is the set of sums of two vectors, one from each subspace. To notate this, we extend the notion of addition from individual vectors to sets of vectors. For arbitrary sets of vectors A, B ⊆ F³, let A + B be the set of their sums:

    A + B = {a + b : a ∈ A, b ∈ B}.

If A and B are distinct points, then there are linearly independent non-zero vectors a, b ∈ F³ such that A = {ra : r ∈ F} and B = {sb : s ∈ F}. The unique line containing both A and B is the 2-dimensional subspace generated by a and b, consisting of all their linear combinations:

    A + B = {ra + sb : r, s ∈ F}.

The cross product a × b is perpendicular to both a and b, so any vector perpendicular to a × b is in the unique line A + B containing A and B (and conversely), that is,

    A + B = {x : ⟨x, a × b⟩ = 0}.

Thus, points with homogeneous coordinates a and b determine a line with homogeneous coordinates a × b.
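The description of the join A + B can be checked numerically: every combination ra + sb is perpendicular to a × b. The helper names below are ours, for illustration only.

```python
# Every vector of the join A + B is perpendicular to a x b,
# the homogeneous coordinates of the line. Helper names are ours.
def cross(u, v):
    return [u[1]*v[2] - u[2]*v[1],
            u[2]*v[0] - u[0]*v[2],
            u[0]*v[1] - u[1]*v[0]]

def dot(u, v):
    return sum(x*y for x, y in zip(u, v))

a, b = [1, 2, 3], [4, 5, 7]     # coordinates of distinct points A and B
line_ab = cross(a, b)           # homogeneous coordinates of the line A + B
for r, s in [(1, 0), (0, 1), (2, -3), (5, 4)]:
    x = [r*ai + s*bi for ai, bi in zip(a, b)]   # a vector of A + B ...
    assert dot(x, line_ab) == 0                 # ... lies on the line
```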
Suppose two lines, λ and µ, have homogeneous coordinates a, b ∈ F³, respectively:

    λ = {x : ⟨x, a⟩ = 0},    µ = {x : ⟨x, b⟩ = 0}.

Note that a and b are linearly independent, since otherwise λ = µ (and there is only one line, not two, as postulated). An axiom for projective planes is that distinct lines meet at a unique point. In the projective plane over a field, this axiom asserts the fact that distinct 2-dimensional subspaces contain a unique 1-dimensional subspace, namely, their intersection. Thus, the intersection λ ∩ µ of the lines λ and µ is the unique point that occurs on both lines. When λ and µ have coordinates a and b, their intersection is the 1-dimensional subspace of F³ generated by any non-zero vector perpendicular to both a and b, such as a × b:

    λ ∩ µ = {r(a × b) : r ∈ F}.

Thus, two lines with homogeneous coordinates a and b meet at a point with homogeneous coordinates a × b.
Suppose 0 ≠ a, b, c ∈ F³. Let A, B, and C be the points, and λ, µ, and ν the lines, with homogeneous coordinates a, b, and c, respectively. By definition, A, B, C are collinear iff A + B = A + C = B + C, and λ, µ, ν are concurrent iff λ ∩ µ ∩ ν is a point. Collinearity and concurrency are equivalent to the homogeneous coordinates having determinant zero:

    A, B, C are collinear  ⟺  det[a, b, c] = 0  ⟺  λ, µ, ν are concurrent.    (6)
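The determinant test (6) is easy to exercise with exact integers; the helper name `det3` is ours, for illustration.

```python
# Collinearity test (6): three points are collinear iff the determinant
# of their homogeneous coordinates vanishes. Helper name is ours.
def det3(u, v, w):
    return (u[0]*(v[1]*w[2] - v[2]*w[1])
          - v[0]*(u[1]*w[2] - u[2]*w[1])
          + w[0]*(u[1]*v[2] - u[2]*v[1]))

a, b = [1, 2, 3], [4, 5, 7]
c = [ai + 2*bi for ai, bi in zip(a, b)]   # c = a + 2b, so C lies on A + B
assert det3(a, b, c) == 0                 # collinear
assert det3(a, b, [1, 0, 0]) == -1        # a generic third point is not
```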

The Theorems of Pappus and Desargues
It is convenient to state the Theorems of Pappus and Desargues together. Between them they deal with the fifteen lines that pass through six points A, B, C, A′, B′, C′. Each pair of points determines a line. Twelve of these lines are intersected in pairs to produce six more constructed points P, Q, R, P′, Q′, R′, three for Pappus and the other three for Desargues, while the remaining three lines are involved in the conclusion of Desargues. The hypotheses also fit together nicely. For Pappus the assumption is that the triples A, B, C and A′, B′, C′ are both collinear, while for Desargues the assumption is that neither triple is collinear. A difference arises in the conclusion, which for Pappus is that its three constructed points P, Q, R are collinear, while for Desargues the conclusion is that the three remaining lines are concurrent iff its three constructed points P′, Q′, R′ are collinear.
Theorem 2 (Pappus-Desargues). Let F be a field and A, B, C, A′, B′, C′ be six points in the projective plane over F. Define six more points

    P = (A + B′) ∩ (A′ + B),  Q = (B + C′) ∩ (B′ + C),  R = (C + A′) ∩ (C′ + A),
    P′ = (A + B) ∩ (A′ + B′),  Q′ = (B + C) ∩ (B′ + C′),  R′ = (C + A) ∩ (C′ + A′).

(Pap) If A, B, C are collinear and A′, B′, C′ are collinear, then P, Q, R are collinear.
(Des) If neither A, B, C nor A′, B′, C′ are collinear, then the lines A + A′, B + B′, C + C′ are concurrent iff P′, Q′, R′ are collinear.

Proof. Choose homogeneous coordinates 0 ≠ a, b, c, a′, b′, c′ ∈ F³ for the points A, B, C, A′, B′, C′. From the definitions of P, Q, R, P′, Q′, R′ we get their homogeneous coordinates as cross products of cross products:

    p = (a × b′) × (a′ × b),  q = (b × c′) × (b′ × c),  r = (c × a′) × (c′ × a),
    p′ = (a × b) × (a′ × b′),  q′ = (b × c) × (b′ × c′),  r′ = (c × a) × (c′ × a′).

For (Pap), assume A, B, C are collinear and A′, B′, C′ are collinear. By (6), det[a, b, c] = 0 and det[a′, b′, c′] = 0, so identity (1) gives det[p, q, r] = 0; hence P, Q, R are collinear by (6).
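A numerical spot-check of (Pap), taking the constructed points to be P = (A + B′) ∩ (A′ + B), Q = (B + C′) ∩ (B′ + C), R = (C + A′) ∩ (C′ + A) (one plausible reading of the construction); the helper names are ours.

```python
# Spot-check of Pappus: two collinear triples yield collinear P, Q, R.
# Helper names (cross, det3) are ours, for illustration.
def cross(u, v):
    return [u[1]*v[2] - u[2]*v[1],
            u[2]*v[0] - u[0]*v[2],
            u[0]*v[1] - u[1]*v[0]]

def det3(u, v, w):
    return (u[0]*(v[1]*w[2] - v[2]*w[1])
          - v[0]*(u[1]*w[2] - u[2]*w[1])
          + w[0]*(u[1]*v[2] - u[2]*v[1]))

# two collinear triples: c = a + b lies on A + B, c' = a' + b' on A' + B'
a, b = [1, 0, 1], [0, 1, 1]
c = [1, 1, 2]
a_, b_ = [1, 2, 0], [3, 1, 1]
c_ = [4, 3, 1]
assert det3(a, b, c) == 0 and det3(a_, b_, c_) == 0

p = cross(cross(a, b_), cross(a_, b))   # P = (A + B') meet (A' + B)
q = cross(cross(b, c_), cross(b_, c))   # Q = (B + C') meet (B' + C)
r = cross(cross(c, a_), cross(c_, a))   # R = (C + A') meet (C' + A)
assert det3(p, q, r) == 0               # P, Q, R are collinear
```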
For (Des), assume that A, B, C are non-collinear and A′, B′, C′ are non-collinear. By (6),

    det[a, b, c] ≠ 0  and  det[a′, b′, c′] ≠ 0.    (7)

We wish to show that A + A′, B + B′, C + C′ are concurrent iff P′, Q′, R′ are collinear. The lines A + A′, B + B′, C + C′ have homogeneous coordinates a × a′, b × b′, c × c′, so by (6), these two statements are equivalent to

    det[a × a′, b × b′, c × c′] = 0    (8)

and

    det[p′, q′, r′] = 0,    (9)

respectively.
The equivalence of (8) and (9) follows from identity (2), written here with assumption (7):

    det[p′, q′, r′] = det[a, b, c] · det[a′, b′, c′] · det[a × a′, b × b′, c × c′],  where det[a, b, c] ≠ 0 ≠ det[a′, b′, c′].

Notice that (8) implies (9) without assuming (7), but to get (8) from (9) requires knowing that if a product of three numbers is zero and two of them are not zero, then the third one must be zero. Here the assumption (7) is needed; divide both sides of the equation by the non-zero factors. For example, if A′, B′, C′ are collinear, then P′, Q′, R′ are also collinear because they lie on the same line as A′, B′, C′, but the lines A + A′, B + B′, and C + C′ can be non-concurrent.
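Identity (2), read as det[p′, q′, r′] = det[a, b, c] · det[a′, b′, c′] · det[a × a′, b × b′, c × c′] with the constructed points paired as above, can be tested exhaustively on random integer vectors; the helper names are ours.

```python
# Random spot-checks of identity (2) with exact integer arithmetic.
# Helper names (cross, det3) are ours, for illustration.
import random

def cross(u, v):
    return [u[1]*v[2] - u[2]*v[1],
            u[2]*v[0] - u[0]*v[2],
            u[0]*v[1] - u[1]*v[0]]

def det3(u, v, w):
    return (u[0]*(v[1]*w[2] - v[2]*w[1])
          - v[0]*(u[1]*w[2] - u[2]*w[1])
          + w[0]*(u[1]*v[2] - u[2]*v[1]))

random.seed(1)
for _ in range(200):
    a, b, c, a_, b_, c_ = ([random.randint(-9, 9) for _ in range(3)]
                           for _ in range(6))
    p_ = cross(cross(a, b), cross(a_, b_))   # P' = (A+B) meet (A'+B')
    q_ = cross(cross(b, c), cross(b_, c_))   # Q' = (B+C) meet (B'+C')
    r_ = cross(cross(c, a), cross(c_, a_))   # R' = (C+A) meet (C'+A')
    lhs = det3(p_, q_, r_)
    rhs = (det3(a, b, c) * det3(a_, b_, c_)
           * det3(cross(a, a_), cross(b, b_), cross(c, c_)))
    assert lhs == rhs
```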

Connection with Invariant Theory
The identities (1) and (2) have the same form as the identities in [23,24], but the operations are different. Some concepts are reviewed here to explain the difference.

Let V and W be vector spaces (over a common field), with a ∈ V and b ∈ W. The tensor product of vectors a and b is a ⊗ b, where ⊗ is the generic bilinear operation from V × W to another vector space (over the common field), that is, ⊗ obeys no laws other than those that are universally true for bilinear operations, namely, for all vectors a, b, c and every scalar r,

    (a + b) ⊗ c = a ⊗ c + b ⊗ c,  a ⊗ (b + c) = a ⊗ b + a ⊗ c,  (ra) ⊗ b = r(a ⊗ b) = a ⊗ (rb).

The tensor product V ⊗ W of V and W is the vector space spanned by the tensor products of vectors in the two spaces. Given a single vector space V, the exterior (wedge, antisymmetric tensor, or alternating) product V ∧ V = ∧²V of V with itself is the subspace of the tensor square V ⊗ V spanned by the wedge products a ∧ b = a ⊗ b − b ⊗ a of vectors in V.

Suppose e₀, . . . , e_{n−1} is a basis for V. The set of wedge products of these vectors contains pairs, a ∧ b and b ∧ a, that are linearly dependent because their sum is zero, but otherwise the set of wedge products of basis elements is linearly independent. The number of linearly independent vectors obtained as wedge products of basis vectors is therefore n(n − 1)/2. When n = 3, the dimension of the exterior product is 3, but for n = 4, 5, 6, . . . the dimension of ∧²V is 6, 10, 15, . . . . In three dimensions only, the wedge product "is" the cross product. Suppose a = ∑ᵢ xᵢeᵢ and b = ∑ⱼ yⱼeⱼ. Simple calculations using the rules above show that, in the 3-dimensional case, the wedge product is

    a ∧ b = (x₁y₂ − x₂y₁)(e₁ ∧ e₂) + (x₂y₀ − x₀y₂)(e₂ ∧ e₀) + (x₀y₁ − x₁y₀)(e₀ ∧ e₁).

The cross product is a × b = (x₁y₂ − x₂y₁)e₀ + (x₂y₀ − x₀y₂)e₁ + (x₀y₁ − x₁y₀)e₂, so a ∧ b = a × b follows by setting e₀ = e₁ ∧ e₂, e₁ = e₂ ∧ e₀, e₂ = e₀ ∧ e₁.
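The identification of wedge and cross product can be made concrete: representing a ∧ b = a ⊗ b − b ⊗ a as a 3 × 3 antisymmetric matrix, its three independent entries are exactly the components of a × b. The helper names below are ours, for illustration.

```python
# a wedge b = a (x) b - b (x) a, stored as a 3x3 antisymmetric matrix,
# carries the cross-product components in its upper triangle.
# Helper names (outer, wedge, cross) are ours.
def outer(u, v):
    return [[ui*vj for vj in v] for ui in u]

def wedge(u, v):
    m, n = outer(u, v), outer(v, u)
    return [[m[i][j] - n[i][j] for j in range(3)] for i in range(3)]

def cross(u, v):
    return [u[1]*v[2] - u[2]*v[1],
            u[2]*v[0] - u[0]*v[2],
            u[0]*v[1] - u[1]*v[0]]

a, b = [1, 2, 3], [4, 5, 7]
w = wedge(a, b)
# entries (1,2), (2,0), (0,1) correspond to e1^e2, e2^e0, e0^e1
assert [w[1][2], w[2][0], w[0][1]] == cross(a, b)
```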
This identification is not possible in dimensions other than 3. The setting must move from undergraduate to graduate mathematics (from 3-dimensional linear algebra to exterior products). The identities as presented here can be understood as the 3-dimensional case of identities arising in invariant theory. In three dimensions, both join and meet are realized through the cross product.

Discussion
This elementary presentation may serve as an invitation and introduction to the relevant literature of invariant theory cited in the Introduction, illustrating the concepts in the special case of three dimensions. A key problem is the relationship between the identities of linear lattices and those of Cayley-Grassmann algebras. A linear lattice is a lattice of commuting equivalence relations [13]. Jónsson's theorem [2] that lattices of commuting equivalence relations are Arguesian is a special case of a similar result: every algebra of binary relations (a set of binary relations closed under intersection, composition of relations, and the formation of converses) satisfies Jónsson's [22] infinite set of axioms for such algebras, and the Arguesian identity is one of the three simplest of Jónsson's axioms. It may well be fruitful to explore the connection between this more general setting (of binary relations, not specialized to commuting equivalence relations) and the identities of Cayley-Grassmann algebras.
Funding: This research received no external funding.
Acknowledgments: It was obvious that (1) and (2) must occur in some form somewhere in the literature. Thanks are due to Victor Pambuccian, who responded to an inquiry in 2020 by referring the author to [24,26], which led to [23].

Conflicts of Interest:
The author declares no conflict of interest.

Appendix A. Proof of Theorem 1
The basic identities used in the proofs of (1) and (2) are (A1)-(A11). Cross products are anticommutative and perpendicular to their factors:

    a × b = −(b × a),  ⟨a, a × b⟩ = 0,  ⟨b, a × b⟩ = 0.

The scalar triple product (box product) identity:

    ⟨a, b × c⟩ = det[a, b, c].

Determinants are unchanged by transposition:

    det M = det Mᵀ for every 3 × 3 matrix M.

Scalars move in and out of determinants:

    det[ra, b, c] = r det[a, b, c], and similarly in the other arguments.

Determinants distribute over vector addition:

    det[a + a′, b, c] = det[a, b, c] + det[a′, b, c], and similarly in the other arguments.

A product of determinants is the determinant of inner products:

    det[a, b, c] · det[d, e, f] is the determinant of the 3 × 3 matrix whose (i, j) entry is the inner product of the i-th of a, b, c with the j-th of d, e, f.

The vector triple product identity:

    a × (b × c) = ⟨a, c⟩b − ⟨a, b⟩c.

The vector quadruple product identity:

    (a × b) × (c × d) = det[a, c, d]b − det[b, c, d]a.    (A11)

Proof of (A11). By anticommutativity, the triple product identity, and the box product identity,

    (a × b) × (c × d) = −(c × d) × (a × b) = −(⟨c × d, b⟩a − ⟨c × d, a⟩b) = det[a, c, d]b − det[b, c, d]a.
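The quadruple product identity (A11), read as (a × b) × (c × d) = det[a, c, d]b − det[b, c, d]a, can be spot-checked on random integer vectors; the helper names are ours.

```python
# Random spot-checks of the quadruple product identity (A11).
# Helper names (cross, det3) are ours, for illustration.
import random

def cross(u, v):
    return [u[1]*v[2] - u[2]*v[1],
            u[2]*v[0] - u[0]*v[2],
            u[0]*v[1] - u[1]*v[0]]

def det3(u, v, w):
    return (u[0]*(v[1]*w[2] - v[2]*w[1])
          - v[0]*(u[1]*w[2] - u[2]*w[1])
          + w[0]*(u[1]*v[2] - u[2]*v[1]))

random.seed(0)
for _ in range(200):
    a, b, c, d = ([random.randint(-9, 9) for _ in range(3)] for _ in range(4))
    lhs = cross(cross(a, b), cross(c, d))
    rhs = [det3(a, c, d)*bi - det3(b, c, d)*ai for ai, bi in zip(a, b)]
    assert lhs == rhs
```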
The proofs of (1) and (2) start similarly. In both cases three vectors are defined as cross products of cross products of pairs of vectors drawn from a, b, c, a′, b′, c′. For (1), each is a cross product of two mixed factors, each factor crossing one of a, b, c with one of a′, b′, c′. For (2), each is a cross product of a cross product of two of a, b, c with a cross product of two of a′, b′, c′. The first step in both proofs is to apply the distributive law to the determinant of these three vectors (each a sum of two vectors, by (A11)) and get a sum of eight terms. For (1), all but two of these terms cancel and the proof ends. For (2), more steps are needed.
Proof of (1). By (A11),

    p = (a × b′) × (a′ × b) = det[b, a, a′]b′ − det[a′, b, b′]a,
    q = (b × c′) × (b′ × c) = det[c, b, b′]c′ + det[b′, c′, c]b,
    r = (c × a′) × (c′ × a) = det[a, c, c′]a′ + det[c′, a′, a]c.

Then the left side of (1), det[p, q, r], is the sum of eight determinants by the distributive law (A8). Move all the scalars (the box products) out by (A7), obtaining another sum of eight terms, each a product of three box products with a determinant det[u, v, w] whose arguments u, v, w are drawn from a, b, c, a′, b′, c′. Six of these eight terms, named r₁, . . . , r₆, cancel with each other: r₂ = −r₆, r₄ = −r₅, and r₁ = −r₃ by (A4) and (A5). What remains are the two terms on the right side of (1).
Proof of (2). Define three vectors and use (A11):

    p′ = (a × b) × (a′ × b′) = det[a′, b, b′]a + det[b′, a, a′]b,
    q′ = (b × c) × (b′ × c′) = det[b′, c, c′]b + det[c′, b, b′]c,
    r′ = (c × a) × (c′ × a′) = det[c′, a, a′]c + det[a′, c, c′]a.

The left side of (2) is det[p′, q′, r′], equal by (A8) to the sum of eight determinants. Six of them are zero by (A3) because two of their argument vectors are proportional; for example,

    det[det[b′, a, a′]b, det[b′, c, c′]b, det[a′, c, c′]a] = 0.

Move the determinants out of the remaining sum of two terms by (A7), obtaining

    det[p′, q′, r′] = det[a′, b, b′] det[b′, c, c′] det[c′, a, a′] det[a, b, c] + det[b′, a, a′] det[c′, b, b′] det[a′, c, c′] det[b, c, a].

These share the determinant det[a, b, c] = det[b, c, a], which can be factored out. The remaining factor is the sum of two products of three determinants. This factor can be recast first as the determinant of a matrix with zeroes on the diagonal,

    det ⎡ 0               det[b′, a, a′]   det[c′, a, a′] ⎤
        ⎢ det[a′, b, b′]  0                det[c′, b, b′] ⎥
        ⎣ det[a′, c, c′]  det[b′, c, c′]   0              ⎦,

then as a determinant of inner products (the (i, j) entry is the inner product of the i-th of a × a′, b × b′, c × c′ with the j-th of a′, b′, c′), and finally, by (A9), as the product of the determinant det[a′, b′, c′] with the determinant of three cross products, as required to leave the desired right-hand side of (2):

    det[p′, q′, r′] = det[a, b, c] · det[a′, b′, c′] · det[a × a′, b × b′, c × c′].