Abstract
In this paper, the notion of strongly typed language will be borrowed from the field of computer programming to introduce a calculational framework for linear algebra and tensor calculus for the purpose of detecting errors resulting from inherent misuse of objects and for finding natural formulations of various objects. A tensor bundle formalism, crucially relying on the notion of pullback bundle, will be used to create a rich type system with which to distinguish objects. The type system and relevant notation are designed to “telescope” to accommodate a level of detail appropriate to a set of calculations. Various techniques using this formalism will be developed and demonstrated with the goal of providing a relatively complete and uniform method of coordinate-free computation. The calculus of variations pertaining to maps between Riemannian manifolds will be formulated using the strongly typed tensor formalism and associated techniques. Energy functionals defined in terms of first-order Lagrangians are the focus of the second half of this paper, in which the first variation, the Euler–Lagrange equations, and the second variation of such functionals will be derived.
Keywords:
Riemannian manifolds; tensor and tensor field type system; calculus of variations; Euler–Lagrange equations MSC:
49-02; 49Q10; 53C99; 58C99
1. Introduction
Many important differential equations have a variational origin, being derived as the Euler–Lagrange equations for a particular functional on some space of functions. The variational approach lends itself particularly to physics, in which conservation of energy or minimization of action is a central concept. The naturality of such formulations cannot be overstated, as solutions to such problems often depend critically on the inherent geometry of the underlying objects. For example, solutions to Laplace’s equation for a real-valued function (e.g., modeling steady-state heat flow) on a Riemannian manifold depend qualitatively on the topology of the manifold (e.g., harmonic functions on a closed Riemannian manifold are necessarily constant, which makes sense geometrically because there is no boundary through which heat can escape).
A central concept in the field of software design is that of information hiding [1], in which a computer program is organized into modules, each presenting an abstract public interface. Other parts of the program can interact only through the presented interface, and the details of how each module works are hidden, thereby preventing interference in the implementation details which are not required by the inherent structure of the module. This concept has clear usefulness in the field of mathematics as well. For example, there are several formulations of the real numbers (e.g., equivalence classes of Cauchy sequences of rational numbers, Dedekind cuts, decimal expansions, etc.), but their particulars are instances of what are known as implementation details, and the details of each particular implementation are irrelevant in most areas of mathematics, which only use the inherent properties of the real numbers as a complete, totally ordered field. Of course, at certain levels, it is useful or necessary to “open up the box” [go past the public interface] and work with a particular representation of the real numbers.
Information hiding is characteristic of abstract mathematics, in which general results are proved about abstract mathematical objects without using any particular implementation of said objects. These results can then be used modularly in other proofs, just as the functionality of a computer program is organized into modularized objects and functions. For example, a fixed point theorem for contractive mappings on closed sets in Banach spaces can be proved in the abstract setting, but a particular application of this theorem yields an existence and uniqueness theorem for first-order ODEs ([2], pp. 59, 62).
A loose conceptual analogy for modularity is that of diagonalizing a linear operator. A basis of eigenvectors is chosen so that the action of the linear operator on each eigenspace has a particularly simple expression, and distinct eigenspaces do not interact with respect to the operator’s action. In this analogy, the eigenvectors then correspond to individual lemmas, and the linear operator corresponds to a large theorem which uses each lemma. Decomposing the proof of the main result in terms of non-interacting lemmas simplifies the proof considerably, just as diagonalization simplifies the analysis of the linear operator. The term “orthogonal” has been borrowed by software design to describe two program modules whose functionality is independent ([3], Chapter 4, Section 2). Orthogonality in software design is highly desirable as it generally eases program implementation and program correctness verification, as the human designers are only capable of keeping track of a certain finite number of details simultaneously [4]. The scope of each detail level of the design is limited in complexity, making the overall design easier to comprehend.
This technique in software design carries over directly to proof design, where it is desirable (elegant) to write proofs and do calculations without introducing extraneous details, such as choice of bases in vector spaces or local coordinates in manifolds. Because such choices are generally non-unique, they can often obscure the inherent structure of the relevant objects by introducing artifacts arising from properties of the particular details used to implement said objects. For example, the choice of a particular local coordinate chart on a manifold artificially imposes an additive structure on a neighborhood of the manifold, but such a structure has nothing to do with the inherent geometry of the manifold. Furthermore, the descent to this “lower level” of calculation discards some type information, representing points in a manifold as Euclidean vectors, thereby losing the ability to distinguish points from different manifolds, or even different localities in the same manifold.
This paper places particular emphasis on natural formulations and calculations in order to expose the underlying geometric structures rather than relying on coordinate-based expressions. The constructions of the “full” direct sum and “full” tensor product bundles are used in combination with induced covariant derivatives to this end.
For more on relevant introductory theory on manifolds, bundles and Riemannian geometry, see [5,6,7,8].
2. Notation and Conventions
Let all vector spaces, manifolds and [fiber] bundles be real and finite-dimensional unless otherwise noted (this allows the canonical identification V ≅ V** for a vector space or vector bundle V), and let all tensor products be over ℝ. The unqualified term “bundle” will mean “fiber bundle”. The Einstein summation convention will be assumed in situations when indexed tensors are used for computation.
Unary operators are understood to have higher binding precedence than binary operators, and super and subscripts are understood to have the highest binding precedence. For example, the expression would be parenthesized as .
Apart from the obvious purpose of providing a concise and central reference for the notation in this paper, the following notation index serves to illustrate the use of telescoping notation (see Section 3.2). The high-level (terse notation which requires the reader to do more work in type inference but is more agile), mid-level, and low-level (completely type-specified, requiring little work on the part of the reader) notations are presented side-by-side with their definitions.
Let be a neighborhood of 0, let each be coordinates on I, let be sets, let be manifolds, let , let and and be vector bundles, where , let be vector spaces, and let such that
is a vector bundle isomorphism.
Table 1 and Table 2 are references which enumerate the notation used in this paper in its high-, mid-, and low-level forms.
Table 1.
Notational Reference Part 1.
Table 2.
Notational Reference Part 2.
3. Mathematical Setting
3.1. Using Strong Typing to Error-Check Calculations
Linear algebra is an excellent setting for discussion of the strong typing [9] of a language, a concept used in the design of computer programming languages. The idea is that when the human-readable source code of a program is compiled (translated into machine-readable instructions), the compiler (the program which performs this translation) or runtime (the software which executes the code) verifies that the program objects are being used in a well-defined way, producing an error for each operation that is not well-defined. For example, a vector-type value would not be allowed to be added to a permutation-type value, even though tuples of unsigned integers (i.e., bytes) are used by the computer to represent both, and the computer’s processing unit could add together their byte-valued representations. However, such an operation would be meaningless with respect to the types of the operands. The result of the operation would depend on the non-canonical choice of representation for each object. Strong type checking has the advantage of catching many programming errors, including most importantly those resulting from an inherent misuse of the program’s objects. Within this paper, certain type-explicit notations will be used to provide forms of type awareness conducive to error-checking.
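As an illustrative aside (not part of the paper's development), the following toy Python sketch mimics strong type checking: a vector and a permutation are both represented by tuples of integers, yet the typed addition operator rejects their sum as meaningless. The class names and representations are hypothetical.

```python
# Toy illustration of strong typing: both a vector and a permutation are
# stored as tuples of integers, but the typed addition refuses to combine
# them, since the operation is meaningless for the types involved.

class Vector:
    def __init__(self, *components):
        self.components = tuple(components)

    def __add__(self, other):
        # Addition is only well-defined between two Vector operands.
        if not isinstance(other, Vector):
            raise TypeError("cannot add Vector and " + type(other).__name__)
        return Vector(*(a + b for a, b in zip(self.components, other.components)))

class Permutation:
    def __init__(self, *images):
        self.images = tuple(images)  # images of 0, 1, ..., n-1

v = Vector(1, 2, 3)
p = Permutation(2, 0, 1)

# The underlying representations are the same kind of data...
assert isinstance(v.components, tuple) and isinstance(p.images, tuple)

# ...but the typed operation rejects the meaningless sum.
try:
    v + p
    caught = False
except TypeError:
    caught = True
assert caught
```

A weakly typed implementation could happily add the two tuples componentwise; the result would depend on the non-canonical representations, which is exactly the class of error strong typing catches.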
An important example of semi-strong typing in math is Penrose’s abstract index notation [10], modeled on Einstein’s summation convention, in which linear algebra and tensor calculus are implemented using indexed objects (tensors) having a certain number and order of “up” and “down” indices (an abstraction of the genuine basis/coordinate expressions in which the indexed objects are arrays of scalars/functions). A non-indexed tensor is a scalar value, a tensor having a single up or down index is a vector or covector value respectively, a tensor having an up and a down index is an endomorphism, and so forth. The tensors are contracted by pairing a certain number of up indices with the same number of down indices, resulting in an object having as indices the uncontracted indices.
For example, given a finite-dimensional inner product space (V, g), where g is a (0,2)-tensor (having the form g_{ij}, i.e., two down indices), a vector v is a (1,0)-tensor v^i, and the length of v is (g_{ij} v^i v^j)^{1/2}. If dim V ≥ 2, then the orthogonal complement v^⊥ has positive dimension, its vectors each being (1,0)-tensors, and the restriction of g is an inner product on v^⊥ (which must be a (0,2)-tensor in order to contract with two (1,0)-tensors).
Certain type errors are detected by use of abstract index notation in the form of index mismatch. For example, with g as above, if v_j := g_{ij} v^i, then v_j is a (0,1)-tensor. Because of the repeated j down indices, an expression such as v_j w_j typically indicates a type error; v_j cannot contract with w_j because of incompatible valence (valence being the number of up and down indices). Furthermore, multiplying a (0,1)-tensor with a (0,2)-tensor without contraction should result in a (0,3)-tensor, which should be denoted using three indices, as in v_i g_{jk}.
The only explicit type information provided by abstract index notation is that of valence. The “semi” qualifier mentioned earlier is earned by the lack of distinction between the different spaces in which the tensors reside. For example, if V, W, X, Y are finite-dimensional vector spaces, then linear maps A : V → W and B : X → Y can be written as (1,1)-tensors A^i_j and B^k_l, and a composition is written as the tensor contraction A^i_k B^k_j. However, while the expression A^i_k B^k_j makes sense in terms of valence compatibility (i.e., grammatically), the composition “A ∘ B” that it should represent is not well-defined unless the codomain of B is the domain of A. Thus this form of type error is not caught by abstract index notation, since the domains/codomains of the linear maps must be checked separately.
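The gap between valence compatibility and domain/codomain compatibility can be seen numerically. The following sketch (an aside; the dimensions are chosen arbitrarily) uses NumPy's einsum, which checks that contracted axes agree in dimension:

```python
import numpy as np

# Hypothetical spaces U, V, W of dimensions 2, 3, 4.
# A: U -> V is a (1,1)-tensor A^i_j; B: V -> W is B^k_l.
A = np.ones((3, 2))   # rows index V ("up"), columns index U ("down")
B = np.ones((4, 3))   # rows index W ("up"), columns index V ("down")

# The composition B ∘ A is the contraction B^k_i A^i_j -- well-defined.
BA = np.einsum('ki,ij->kj', B, A)
assert BA.shape == (4, 2)

# A ∘ B has compatible *valence* (one up and one down index each), so the
# abstract index expression A^i_k B^k_j is grammatical, but the actual
# contraction fails: it would pair a dimension-2 axis with a dimension-4 axis.
try:
    np.einsum('ik,kj->ij', A, B)
    caught = False
except ValueError:
    caught = True
assert caught
```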
The use of dimensional analysis (the abstract use of units such as kilograms, seconds, etc.) in Physics is an important precedent of strong typing. Each quantity has an associated “dimension” (this is a different meaning from the “dimension” of linear algebra) which is expressed as a fraction of powers of formal symbols. The ordinary algebraic rules for fractions and formal symbols are used for the dimensional components, with the further requirement that addition and equality may only occur between quantities having the same dimension.
For example, if E, M and C represent the dimensions of energy, mass and cost, respectively, and if the energy storage density of a battery manufacturing process is known (having dimensions energy per mass) and the manufacturing weight yield of the battery is known (having dimensions mass per cost), then under the algebraic conventions of dimensional analysis, calculating the energy storage per cost (which should have dimensions energy per cost) is simple;
(the M symbols cancel in the fraction). Here, both the density and the yield w are real numbers, and besides using the well-definedness of real multiplication, no type-checking is done in forming their product.
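A minimal dimension-tracking sketch of this bookkeeping (an aside; the class and symbol names are illustrative, not from the paper) might look like:

```python
# A Quantity carries a real value together with a dict of integer exponents
# for the formal dimension symbols E (energy), M (mass), C (cost).

class Quantity:
    def __init__(self, value, dims):
        self.value = value
        # drop zero exponents so that cancelled symbols disappear
        self.dims = {k: v for k, v in dims.items() if v != 0}

    def __mul__(self, other):
        # multiplication adds the exponents of the formal symbols
        dims = dict(self.dims)
        for k, v in other.dims.items():
            dims[k] = dims.get(k, 0) + v
        return Quantity(self.value * other.value, dims)

    def __add__(self, other):
        # addition is only defined between quantities of the same dimension
        if self.dims != other.dims:
            raise TypeError("dimension mismatch")
        return Quantity(self.value + other.value, self.dims)

rho = Quantity(250.0, {'E': 1, 'M': -1})  # storage density: energy per mass
w   = Quantity(2.0,   {'M': 1, 'C': -1})  # weight yield: mass per cost

energy_per_cost = rho * w
assert energy_per_cost.dims == {'E': 1, 'C': -1}   # the M symbols cancel
assert energy_per_cost.value == 500.0
```

Attempting `rho + w` raises a TypeError, mirroring the rule that addition may only occur between quantities of the same dimension.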
A contrasting example is the quantity , having dimensions . However, these dimensions may be considered to be meaningless in the given context. The quantity’s type adds meaning to the real-valued quantity, and while the quantity is well-defined as a real number, the uselessness of the type may indicate that an error has been made in the calculations. For example, a type mismatch between the two sides of an equation is a strong indication of error.
This is also a convenient way to think about the chain rule of calculus. If z, y, and x measure real-valued quantities, with z a function of y and y a function of x, then dz/dx measures the quantity z with respect to the quantity x. Using Z, Y and X for the dimensions of the quantities z, y and x respectively, the derivative dz/dx = (dz/dy)(dy/dx) has units Z/X = (Z/Y)(Y/X). When worked out, the dimensions for the quantities on either side of the equation will match exactly, having a non-coincidental similarity to the calculation in the battery product example.
3.2. Telescoping Notation (Also Known as Do Not Fear the Verbosity)
Many of the computations developed in this paper will appear to be overly pedantic, owing to the decoration-heavy notation that will be introduced in Section 3.3. This decoration is largely for the purpose of tracking the myriad of types in the type system and to assist the human reader or writer in making sense of and error-checking the expressions involved. The pedantry in this paper plays the role of introducing the technique. The notation is designed to telescope (Credit for the notion of telescoping notation is due in part to David DeConde, during one of many enjoyable and insightful conversations.), meaning that there is a spectrum of notational decoration; from
- pedantically type-specified, verbose, and decoration-heavy, where [almost] no types must be inferred from context and there is little work or expertise required on the part of the reader, to
- somewhat decorated but more compact, where the reader must do a little bit of thinking to infer some types, all the way to
- tersely notated with minimal type decoration, where [almost] all types must be inferred from context and the reader must either do a lot of thinking or be relatively experienced.
Additionally, some of the chosen symbols are meant to obey the same telescoping range of specificity. For example, compare n-fold tensor contraction with type-specified as discussed in Section 3.3, or the symbols ∇, , and as discussed in Section 3.10. Tersely notated computations can be seen in Section 3.10, while fully-verbose computations abound in the careful exposition of Section 4.
3.3. Strongly-Typed Linear Algebra via Tensor Products
A fully strongly typed formulation of linear algebra will now be developed which enjoys a level of abstraction and flexibility similar to that of Penrose’s abstract index notation. Emphasis will be placed on notational and conceptual regularity via a tensor formalism, coupled with a notion of “untangled” expression which exploits and notationally depicts the associativity of linear composition.
If V denotes a finite-dimensional vector space, then let
denote the natural pairing on V, and denote using the infix notation . The natural pairing is a nondegenerate bilinear form and its bilinearity gives the expression multiplicative semantics (distributivity and commutativity with scalar multiplication), thereby justifying the use of the infix · operator normally reserved for multiplication. The natural pairing subscript V is seemingly pedantic, but will prove to be an invaluable tool for articulating and navigating the rich type system of the linear algebraic and vector bundle constructions used in this paper. When clear from context, the subscript V may be omitted.
Because V is finite-dimensional, it is reflexive (i.e., the canonical injection is a linear isomorphism). Thus the natural pairing on can be written naturally as
Note that . Though subtle, the distinction between and is important within the type system used in this paper.
Through a universal mapping property of multilinear maps, the bilinear forms and descend to the natural trace maps
each extended linearly to non-simple tensors. These operations can also be called tensor contraction. Noting that and are canonically isomorphic to and respectively, then for each and , it follows that and .
Definition 1
(Linear maps as tensors). Let V and W be finite-dimensional vector spaces, and let denote the space of vector space morphisms from V to W (i.e., linear maps). The linear isomorphism
(extended linearly to general tensors) will play a central conceptual role in the calculations employed in this paper, as it will facilitate constructions which would otherwise be awkward or difficult to express. Linear maps and appropriately typed tensor products will be identified via this isomorphism.
Given bases and , and dual bases and , a linear map can be written under the identification in (1) as
where , and in fact is the matrix representation of A with respect to the bases and , noting that the i and j indices denote the “output” and “input” components of A respectively. Tensors are therefore the strongly typed analog of matrices, where the type information is carried by the component. One particular example is the identity map on V, which has type and is expressed simply as (equivalently, ). Throughout this paper, the identity map on V will be referred to as the identity tensor on V, or just the identity tensor if V is clear from context, and will be denoted as .
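The identification of a linear map with its tensor of components can be checked directly in coordinates. The following sketch (a numerical aside; dimensions and values are arbitrary) reassembles a map from its components A^i_j against the standard bases:

```python
import numpy as np

# Coordinate sketch of Definition 1: a linear map A: V -> W, written in
# bases, is the 2-tensor sum_{i,j} A^i_j (f_i ⊗ e^j), whose coefficient
# array is exactly the matrix of A.
e = np.eye(3)  # standard basis of V = R^3 (rows), self-dual here
f = np.eye(2)  # standard basis of W = R^2

A_matrix = np.array([[1.0, 2.0, 3.0],
                     [4.0, 5.0, 6.0]])

# Reassemble A from its tensor components A^i_j f_i ⊗ e^j:
A_tensor = sum(A_matrix[i, j] * np.outer(f[i], e[j])
               for i in range(2) for j in range(3))
assert np.allclose(A_tensor, A_matrix)

# The identity tensor on V has components delta^i_j:
identity_tensor = sum(np.outer(e[i], e[i]) for i in range(3))
assert np.allclose(identity_tensor, np.eye(3))
```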
One clarifying example of the tensor formulation is the adjoint operation of the natural pairing, also known as forming the dual of a linear map. It is straightforward to show that
(where the map is extended linearly to general tensors). This is literally the tensor abstraction of the matrix transpose operation; if , then the dual of A is . The matrix of is precisely the transpose of the matrix of A with respect to the relevant bases. The map * itself can be written as a 4-tensor , where .
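The defining property of the dual map, (A* β) · v = β · (A v), can be verified numerically in coordinates (an aside; the values below are arbitrary):

```python
import numpy as np

# The dual (adjoint with respect to the natural pairing) of A: V -> W is
# the map A*: W* -> V* whose matrix is the transpose of the matrix of A.
A = np.array([[1.0, 2.0],
              [3.0, 4.0],
              [5.0, 6.0]])           # A: R^2 -> R^3

beta = np.array([1.0, 0.0, -1.0])    # element of W* = (R^3)*
v = np.array([2.0, 1.0])             # element of V = R^2

# Defining property of the dual: (A* beta) . v == beta . (A v)
assert np.isclose((A.T @ beta) @ v, beta @ (A @ v))
```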
There is a notion of the natural pairing of tensor products, which implements composition and evaluation of linear maps, and can be thought of as a natural generalization of scalar multiplication in a field. If and W are each finite-dimensional vector spaces, then the bilinear form
will be denoted also by the infix notation (i.e., ). If V itself is a tensor product of n factors which are clear from context, then may be denoted by (think an n-fold tensor contraction). If , then typically : is used in place of . For example, from above, .
Given a permutation , define a right-action by , mapping elements in the obvious way. For example, acting on puts the second factor in the third position, the third factor in the fourth position, and the fourth factor in the second, giving . This permutation is itself a linear map and of course can be written as a tensor. However, because it is defined in terms of a right action, the “domain factors” will come on the left. Thus is written as a tensor of the form (i.e., as a -tensor). Certain tensor constructions are conducive to using such permutations. In the above example, * can be written as .
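In coordinates, the right-action on tensor factors is an axis permutation of the component array. The following sketch (a numerical aside; array dimensions are arbitrary) uses the example permutation above, which moves the second factor to the third position, the third to the fourth, and the fourth to the second:

```python
import numpy as np

# Coordinate sketch of the permutation right-action on the factors of a
# 4-tensor T: factor 2 -> slot 3, factor 3 -> slot 4, factor 4 -> slot 2.
rng = np.random.default_rng(0)
T = rng.standard_normal((2, 3, 4, 5))

# After the action, slot 1 holds old factor 1, slot 2 holds old factor 4,
# slot 3 holds old factor 2, and slot 4 holds old factor 3; in NumPy this
# is an axis transposition.
T_sigma = np.transpose(T, (0, 3, 1, 2))

assert T_sigma.shape == (2, 5, 3, 4)
# Componentwise: (T.sigma)[i, l, j, k] == T[i, j, k, l]
assert T_sigma[1, 4, 2, 3] == T[1, 2, 3, 4]
```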
The permutation right-action also works naturally when notated using superscripts. For example, if , then
and so
When multiplying the permutations and in the third line, it is important to note that they are read left-to-right, since they are acting on B on the right.
The inline cycle notation is somewhat ambiguous in isolation because the number of factors in the domain/codomain is not specified, let alone their types. This information can sometimes be inferred from context, such as from the natural pairing subscripts, as in the following examples.
Example 1

(Linearizing the inversion map). Let , i.e., the linear map inversion operator, where is an open submanifold of via the isomorphism . Its linearization (derivative) at in the direction is
(ϵ is taken arbitrarily small due to the derivative being evaluated in an arbitrarily small neighborhood of ϵ = 0).
In order to “move” the B parameter out so that it plays the same syntactical role as in the original expression , via adjacent natural pairing, some simple tensor manipulations can be done. The process is easily and accurately expressed via diagram. The following sequence of diagrams is a sequence of equalities. The diagram should be self-explanatory, but for reference, the number of boxes for a particular label denotes the rank of the tensor, with each box labeled with its type. The lines connecting various boxes are natural pairings, and the circles represent the unpaired “slots”, which comprise the type of the resulting expression.


The following step is nothing but moving the boxes for B out; the natural pairings still apply to the same slots, hence the cables dangling below.

In this setting, a tensor product amounts to flippantly gluing boxes together.

In order for B to be naturally paired in the same adjacent manner as in the original expression , the slots of must be permuted; the second moves to the third, the third to the fourth, and the fourth to the second.

The first diagram equals the last one, thus , and by the nondegeneracy of the natural pairing on , this implies that , noting that the statement of this expression does not require the direction vector B. The permutation exponent can be calculated easily using simple tensors, if not by the above diagrammatic manipulations;
Here, the expression represents the expression .
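The closed-form derivative of the inversion map derived in this example, namely that the derivative of inversion at A in the direction B is -A⁻¹ B A⁻¹, can be checked against a finite difference (a numerical aside; the matrices below are arbitrary):

```python
import numpy as np

# Numerical check of Example 1: d/de (A + e B)^{-1} at e = 0 equals
# -A^{-1} B A^{-1}.
rng = np.random.default_rng(1)
A = np.eye(3) + 0.1 * rng.standard_normal((3, 3))  # near identity, so invertible
B = rng.standard_normal((3, 3))

eps = 1e-6
finite_diff = (np.linalg.inv(A + eps * B) - np.linalg.inv(A - eps * B)) / (2 * eps)
closed_form = -np.linalg.inv(A) @ B @ np.linalg.inv(A)

assert np.allclose(finite_diff, closed_form, atol=1e-6)
```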
The next example will later be extended to the setting of Riemannian manifolds and their metric tensor fields, and put to use to formulate what are known as harmonic maps (see (3)). However, first, a new tensor operation must be defined.
Definition 2
(Parallel tensor product). If are vector spaces and and , then define their parallel tensor product by
The parentheses in the type specification are unnecessary, but hint at what the tensor decomposition for the quantity should be, if used as an operand to ⊠ again (see below).
If A and B represent linear maps, then represents their tensor product as linear maps (the parentheses are unnecessary but hint at what the domain and codomain are, and at the use of the result as an operand in another parallel tensor product), which is a “parallel” composition; if and , then .
There is a slight ambiguity in the notation coming from a lack of specification on how the tensor product of the operands is decomposed in the case when there is more than one such decomposition. Notation explicitly resolving this ambiguity will not be needed in this paper as the relevant tensor product is usually clear from context.
The parallel tensor product is associative; if Y and Z are also vector spaces and , then
allowing multiply-parallel tensor products.
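In coordinates, the parallel tensor product of linear maps is the Kronecker product of their matrices, so both the parallel-composition identity (A ⊠ B)(v ⊗ w) = (A v) ⊗ (B w) and associativity can be checked numerically (an aside; the matrices below are arbitrary):

```python
import numpy as np

A = np.array([[1.0, 2.0],
              [3.0, 4.0]])          # A: R^2 -> R^2
B = np.array([[0.0, 1.0, 2.0],
              [1.0, 0.0, 1.0]])     # B: R^3 -> R^2
C = np.array([[1.0],
              [2.0]])               # C: R^1 -> R^2

v = np.array([1.0, -1.0])
w = np.array([2.0, 0.0, 1.0])

# Parallel composition: (A ⊠ B)(v ⊗ w) = (A v) ⊗ (B w).
lhs = np.kron(A, B) @ np.kron(v, w)
rhs = np.kron(A @ v, B @ w)
assert np.allclose(lhs, rhs)

# Associativity of ⊠, as associativity of the Kronecker product.
assert np.allclose(np.kron(np.kron(A, B), C), np.kron(A, np.kron(B, C)))
```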
Example 2
(Tensor product of inner product spaces). If and are inner product spaces (noting that and are symmetric, i.e., literally invariant under ), then is an inner product space having induced inner product . Here, the “inputs” of A and B (the factors) are being paired using , while the “outputs” (the W factors) are being paired using , and the trace is used to “complete the cycle” by plugging the output into the input, thereby producing a real number. The expression can be written in a more natural way, which takes advantage of the linear composition, as (or, pedantically, ), instead of the more common but awkward trace expression mentioned earlier. In the tensor formalism, the inner product k should have type . Permuting the middle two components of the 4-tensor gives the correct type. In fact, . A further advantage to this formulation is that if any or all of are functions, there is a clear product rule for derivatives of the expression . This is something that is used critically in Riemannian geometry in the form of covariant derivatives of tensor fields (see (4)).
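With Euclidean metrics on both factors, the induced inner product of Example 2 reduces in coordinates to the trace form tr(AᵀB). The following sketch (a numerical aside; identity metrics and arbitrary values are assumed) checks this against the factor-by-factor pairing:

```python
import numpy as np

# Coordinate sketch of Example 2 with Euclidean metrics: for A, B in
# Hom(V, W), identified with tensors, the induced inner product pairs the
# "input" factors with g and the "output" factors with h; with g and h
# both identities this is the trace form tr(A^T B).
rng = np.random.default_rng(2)
A = rng.standard_normal((4, 3))   # A: R^3 -> R^4
B = rng.standard_normal((4, 3))

# trace form of the induced inner product
ip_trace = np.trace(A.T @ B)

# the same pairing written factor-by-factor: sum_{i,j} A_ij B_ij
ip_sum = np.einsum('ij,ij->', A, B)

assert np.isclose(ip_trace, ip_sum)
```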
In this paper, the main use of the tensor formulation of linear maps is twofold: to facilitate linear algebraic constructions which would otherwise be difficult or awkward (this includes the ability to express derivatives of [possibly vector or manifold-valued] maps without needing to “plug in” the derivative’s directional argument), and to make clear the product-rule behavior of many important differentiable constructions.
3.4. Bundle Constructions
In order to use the calculus of variations involving Lagrangians depending on tangent maps of maps between smooth manifolds, it suffices to consider Lagrangians defined on smooth vector bundle morphisms. Continuing in the style of the previous section, a “full” tensor product of smooth vector bundles (4) will be formulated which will then allow expression of smooth vector bundle morphisms as tensor fields, sometimes called two-point tensor fields ([11], p. 70). The full arsenal of tensor calculus can then be used to considerable advantage.
First, some definitions and simpler bundle constructions will be introduced. A smooth [fiber] bundle (hereafter referred to simply as a smooth bundle) is a 4-tuple where , E and N are smooth manifolds and is locally trivial, i.e., N is covered by open sets such that as smooth manifolds. The manifolds , E and N are called the typical fiber, the total space, and the base space respectively. The map is called the bundle projection. The full 4-tuple specifying a bundle can be recovered from the bundle projection map, so a locally trivial smooth map can be said to define a smooth bundle. The dimension of the typical fiber of a bundle will be called its rank, and will be denoted by or when the bundle is understood from context.
The space of smooth sections of a smooth bundle defined by is
and may also be denoted by , if the bundle is clear from context. If nonempty, is generally an infinite-dimensional manifold (the exception being when the base space N is finite) [12].
Proposition 1
(Trivial bundle). Let M and N be smooth manifolds. With and
defines a smooth bundle , called a trivial bundle. Similarly, with and , is a trivial bundle.
No proof is deemed necessary for (1), as each bundle projection trivializes globally in the obvious way. The symbol is a composite of × (indicating direct product) and → or ← (indicating the base space).
If M and N are smooth manifolds as in (1), then there are two particularly useful natural identifications.
These identifications can be thought of as identifying a map with its graph in and respectively. Furthermore, this allows bundle theory to be applied to reasoning about spaces of maps. The symbols and now carry a significant amount of meaning. Generally will be used in this paper, for consistency with the convention discussed in Section 3.3. The symbols and are examples of telescoping notation, as they are built notationally on ×, and conceptually on the direct product, which is what is denoted by ×. The arrow portion of the symbols can be discarded when type-specificity is not needed.
Proposition 2
(Direct product bundle). Let and be smooth bundles. Then
defines a smooth bundle . This bundle is called the direct product of and , and is not necessarily a trivial bundle.
Proof.
Let and trivialize and over open sets and respectively. Then
has inverse . Note that
and that
defines a diffeomorphism. Then
defines a diffeomorphism, and
showing that trivializes over . Since can be covered by such trivializing sets, this establishes that defines a smooth bundle. The typical fiber of is . □
A smooth vector bundle is a fiber bundle whose typical fiber is a vector space and whose local trivializations are linear isomorphisms when restricted to each fiber. If is a smooth vector bundle, then its dual vector bundle is a smooth vector bundle defined in the following way.
Because is a vector space, the notation is already defined. In analogy with Section 3.3, there are natural pairings on a vector bundle and its dual, defined simply by evaluation. If , and , then and . Both expressions evaluate to . Natural traces and n-fold tensor contraction can be defined analogously. Again, while seemingly pedantic, the subscripted natural pairing notation will prove to be a valuable tool in articulating and error-checking calculations involving vector bundles. To generalize the rest of Section 3.3 will require the definition of additional structures.
For the remainder of this section, let and now be smooth vector bundles. The following construction is essentially an alternate notation for , but is one that takes advantage of the fact that and are vector bundles, and encodes in the notation the fact that the resulting construction is also a vector bundle. This is analogous to how is a vector space with a natural structure if V and W are vector spaces, except that this is usually denoted by .
Proposition 3
(“Full” direct sum vector bundle). If
Then
defines a smooth vector bundle , called the full direct sum of and .
For each , the vector space structure on is given in the following way. Let and . Then
It is critical to see (1) for remarks on notation.
Proof.
Let U, V, , , P, , and be as in the proof of (2), and define . Because is a smooth bundle isomorphism over , to show that it is a linear isomorphism in each fiber, it suffices to show that it is linear in each fiber. Let , and . Then
Thus is linear in each fiber, and because it is invertible, it is a linear isomorphism in each fiber. In particular, is a smooth vector bundle isomorphism over . Applying to the above equation gives
as desired. □
This construction differs from the Whitney sum of two vector bundles, as the base spaces of the bundles are kept separate, and are not even required to be the same. This allows the identification of as , which may be done without comment later in this paper. Some important related structures are and , where .
The next construction is what will be used in the implementation of smooth vector bundle morphisms as tensor fields.
Proposition 4
(“Full” tensor product bundle). If
Then
defines a smooth vector bundle , called the full tensor product (This construction is alluded to in ([13], p. 121), but is not defined or discussed.) of and .
It is critical to see (1) for remarks on notation.
Proof.
Since the argument in the definition of is not necessarily unique, the well-definedness of must be shown. Let . Then in particular, for some , and therefore and for each index i and j. Thus and , so the expression defining is well-defined.
The set does not have an a priori global smooth manifold structure, as it is defined as the disjoint union of vector spaces. A smooth manifold structure compatible with that of the constituent vector spaces will now be defined.
Let and trivialize and over open sets and respectively, such that and are each linear in each fiber. Define
with element mapping
The map is well-defined and smooth in each fiber by construction, since for each ,
is a linear isomorphism by construction. Additionally, has been constructed so that
on . Define the smooth structure on by declaring to be a diffeomorphism. The map is trivialized over . The set can be covered by such trivializing open sets. Thus has been shown to be locally diffeomorphic to the direct product of smooth manifolds, and therefore it has been shown to be a smooth manifold. With respect to the smooth structure on , the map is smooth, and has therefore been shown to define a smooth vector bundle. □
Remark 1
(Notation regarding base space). The “full” direct sum (3) and “full” tensor product (4) bundle constructions allow direct sums and tensor products to be taken of vector bundles when the base spaces differ. If the base spaces are the same, then the construction “joins” them, producing a vector bundle over that shared base space. For example, if E and F are vector bundles over M, then has base space , while has base space M. The base space can be specified in either case as a notational aid; the latter example would be written as . If no subscript is provided on the ⊗ symbol, then the base spaces are “joined” if possible (if they are the same space), otherwise they are kept separate, as in the “full” tensor product construction. This notational convention conforms to the standard Whitney sum and tensor product bundle notation, and uses the notion of telescoping notation to provide more specificity when necessary.
Given a fiber bundle, a natural vector bundle can be constructed “on top” of it, essentially quantifying the variations of bundle elements along each fiber. This is known as the vertical [tangent] bundle ([12], p. 43), and it plays a critical role in the development of Ehresmann connections, which provide the “horizontal complement” to the vertical bundle.
Proposition 5
(Vertical bundle). Let define a smooth [fiber] bundle. If , then defines a smooth vector bundle subbundle of , called the vertical bundle over E. Furthermore, the fiber over is .
Proof.
Because is a smooth surjective submersion, is a subbundle of having corank and therefore rank equal to that of E. Furthermore, if and , then represents an arbitrary element of , and , showing that , and therefore that . This shows that . Because , this shows that . □
Given the extra structure that a vector bundle provides over a [fiber] bundle, there is a canonical smooth vector bundle isomorphism which adds significant value to the pullback bundle formalism used throughout this paper. This is put to greatest use in Section 4, for example in the development of the first variation (see (1)).
Proposition 6
(Vertical bundle as pullback). If defines a smooth vector bundle, then
is a smooth vector bundle isomorphism over , called the vertical lift, having inverse
where, without loss of generality, is an E-valued variation which lies entirely in a single fiber.
Proof.
It is clear that is linear and injective on each fiber. By a dimension counting argument, it is therefore an isomorphism on each fiber. Because it preserves the basepoint, it is a vector bundle isomorphism over . Because the map is smooth, so is the defining expression for , thereby establishing smoothness. That inverts is a trivial calculation. □
3.5. Strongly-Typed Tensor Field Operations
Because vector bundles and the related operations can be thought of conceptually as “sheaves of linear algebra”, the constructions in Section 3.3, generalized earlier in this section, can be further generalized to the setting of sections of vector bundles.
If are smooth vector bundles over M, then define the natural pairing of a tensor field with a vector:
extending linearly to general tensor fields. Further, define the natural pairing of tensor fields:
extending linearly to general tensor fields. This multiple use of the symbol is a concept known as operator overloading in computer programming. No ambiguity is caused by this overloading, as the particular use can be inferred from the types of the operands. As before, the subscript F may be optionally omitted when clear from context.
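The programming analogy can be made concrete. The following minimal Python sketch (the class names `CovectorField` and `VectorField` are hypothetical illustrations, not part of any library or of this paper) shows how a single overloaded symbol is resolved by the types of its operands, just as described above:

```python
class VectorField:
    """Toy stand-in for a vector field (components in a fixed local frame)."""
    def __init__(self, comps):
        self.comps = comps

class CovectorField:
    """Toy stand-in for a covector field; '*' is overloaded for pairings."""
    def __init__(self, comps):
        self.comps = comps

    def __mul__(self, other):
        # One symbol, several meanings: Python dispatches on the operand's
        # type, mirroring the overloaded natural pairing in the text.
        if isinstance(other, VectorField):
            # natural pairing <omega, X> -> scalar
            return sum(a * b for a, b in zip(self.comps, other.comps))
        if isinstance(other, (int, float)):
            # scalar multiplication -> covector field
            return CovectorField([a * other for a in self.comps])
        return NotImplemented

omega = CovectorField([1.0, 2.0])
X = VectorField([3.0, 4.0])
print(omega * X)            # natural pairing: 1*3 + 2*4 = 11.0
print((omega * 2.0).comps)  # scalar multiple: [2.0, 4.0]
```

As in the text, no ambiguity arises: the particular use of the symbol is inferred from the types of the operands.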
The permutations defined in Section 3.3 are generalized as tensor fields. If are smooth vector bundles over M, and is a permutation, then can act on by permuting its factors, and therefore can be identified with a tensor field
defined by
An important feature of such permutation tensor fields is that they are parallel with respect to covariant derivatives on the factors (see (2) for more on this).
3.6. Pullback Bundles
The pullback bundle, defined below, is a crucial building block for many important bundle constructions, as it enriches the type system dramatically, and allows the tensor formulation of linear algebra to be extended to the vector bundle setting. In particular, the abstract, global formulation of the space of smooth vector bundle morphisms over a map is achieved quite cleanly using a pullback bundle. Furthermore, the use of pullback bundles and pullback covariant derivatives simplifies what would otherwise be local coordinate calculations, thereby giving more insight into the geometric structure of the problem.
For the duration of this section, let be a smooth bundle having rank r.
Proposition 7
(Pullback bundle). Let M and N be smooth manifolds and let be smooth. If
and
then defines a smooth bundle. In particular, is a smooth manifold having dimension . The bundle defined by is called the pullback of by .
Proof.
Recalling that denotes the typical fiber of , let trivialize over open set . Define
and
Claim (1): and are smooth. Proof: , and is clearly smooth as a map defined on the larger manifold. Therefore it restricts to a smooth map on . An analogous argument shows that is smooth. Claim (1) proved.
Claim (2): inverts . Proof: Let . Then
With ,
proving Claim (2).
Claim (3): trivializes over . Proof: Let . Then
and by claims (1) and (2), is a diffeomorphism, so trivializes over . Claim (3) proved.
Since M can be covered with sets as in claim (3) and since the typical fiber of is diffeomorphic to , this shows that defines a smooth bundle . Because is locally diffeomorphic to the product of an open subset of M with , has been shown to be a smooth manifold having dimension . □
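For reference, a standard presentation of the underlying set of the pullback bundle (in notation that may differ from the author's elided formulas) is:

```latex
\phi^{*}F \;=\; \{\, (p,\, f) \in M \times F \;:\; \phi(p) = \pi_{F}(f) \,\},
\qquad
\pi_{\phi^{*}F}(p,\, f) \;=\; p .
```

This exhibits the pullback as a submanifold of the direct product, as used in the proof above.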
While the pullback bundle is constructed as a submanifold of a direct product, there is a natural bundle morphism into the pulled-back bundle, which serves as an interface to maps defined on the pulled-back bundle. Usually this morphism is notationally suppressed, just as naturally isomorphic spaces can be identified without explicit notation.
Corollary 1
(Pullback fiber projection bundle morphism). If is smooth, then
is a smooth bundle morphism over ϕ which is an isomorphism when restricted to any fiber of .
Because is the projection , its tangent map is also just the projection .
Proposition 8
(Bundle pullback is a contravariant functor). The map of categories
is a contravariant functor. Here, naturally isomorphic bundles in , for each manifold M, are identified (along with the corresponding morphisms).
Proof.
Noting that
and that
it follows that , i.e., satisfies the identity axiom of functoriality.
For the contravariance axiom, let and be smooth manifold morphisms and let be a smooth bundle. Then
and
showing that , and therefore
establishing as a contravariant functor. □
The space of sections of a pullback bundle is easily quantified.
This space will be central in the theory developed in the rest of this paper. Furthermore, it is naturally identified with the space of sections along the pullback map;
These spaces are naturally isomorphic to one another, and therefore an identification can be made when convenient. While the former space is more correct from a strongly typed standpoint, the latter space is a convenient and intuitive representational form. The particular correspondence depends heavily on the fact that is a submanifold of .
Furthermore, if , then . Note that it is not true that every can be written as for some ; for example, this fails when there exist distinct such that and . Moreover, the representation is generally non-unique: for example, when is not surjective, sections which differ only away from the image of will still give . Before developing the notion of a linear connection on a pullback bundle, it will be necessary to address these features which, while inconvenient, provide the strength of the pullback bundle and pullback covariant derivative (see (5)).
Lemma 1
(Local representation of elements). Recall that r denotes the rank of smooth bundle F. If then each point has some neighborhood U in which σ can be written locally as , where is a frame for , and are defined by .
Proof.
Let , let be a neighborhood of over which is trivial, and let , so that U is a neighborhood of p. Let be a frame for (i.e., ), and let be the corresponding coframe (i.e., the unique such that for each ). Define by . Then
as desired. □
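For the reader's convenience, the lemma's conclusion can be written in standard frame notation (which may differ from the author's conventions): with a local frame $(s_i)_{i=1}^{r}$ for $F$ over a trivializing set and its dual coframe $(s^i)$,

```latex
\sigma\big|_{U} \;=\; \sigma^{i}\,(s_{i} \circ \phi),
\qquad
\sigma^{i} \;:=\; (s^{i} \circ \phi)(\sigma) \;\in\; C^{\infty}(U).
```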
Some literature uses expressions of the form along with an implicit use of the section-identifying isomorphism to write down particular sections of pullback bundles. In most cases, this tacit identification of spaces is harmless, but certain highly involved calculations may suffer from it. The section that corresponds to under said isomorphism is . However, this expression is unwieldy, and therefore a more compact and contextually meaningful expression is called for.
Definition 3
(Pullback section). If and is smooth, then define
This is known as a pullback section.
The pullback section is deservedly named. If and are smooth, then in the sense of the proof of (8).
Proposition 9
(Bundle pullback commutes with tensor product). If E and F are smooth vector bundles over manifold N and is smooth, then the map
(extended linearly to general tensors) is a smooth vector bundle isomorphism.
Proof.
Let c denote the above map. The well-definedness of c comes from the universal mapping property on multilinear forms which induces a linear map on a corresponding tensor product. If , then which implies that or , and therefore that . Because there exists a basis for consisting only of simple tensors, this implies that c is injective, and by a dimensionality argument, that c is an isomorphism. The map is clearly smooth and respects the fiber structures of its domain and codomain. Thus c is a smooth vector bundle isomorphism. □
The contravariance of pullback and its naturality with respect to tensor product are two essential properties which provide some of the flexibility and precision of the strongly typed tensor formalism described in this paper. This will become quite apparent in Section 4.
Remark 2
(Tensor field formulation of smooth vector bundle morphisms). A particularly useful application of pullback bundles is in forming a rich type system for smooth vector bundle morphisms. This approach was inspired by ([14], p. 11). Let and be smooth vector bundles, and let be smooth. Consider , i.e., the space of smooth vector bundle morphisms over the map ϕ. There is a natural identification with another space which lets the base map ϕ play a more direct role in the space’s type. In particular,
This particular identification of smooth vector bundle morphisms over ϕ can now be directly translated into the tensor field formalism, analogously to (1).
The inverse image of is given locally; let and denote local frames for E and F in neighborhoods and respectively, with , and let and denote their dual coframes. Then the tensor field corresponding to B is given locally in U by , where .
Quantifying smooth vector bundle morphisms as tensor fields lends itself naturally to doing calculus on vector and tensor bundles, as the relevant derivatives (covariant derivatives) take the form of tensor fields. The type information for a particular vector bundle morphism is encoded in the relevant tensor bundle.
3.7. Tangent Map as a Tensor Field
This section deals specifically with the tangent map operator by using concepts from Section 3.5 and Section 3.6 to place it in a strongly typed setting and to prepare to unify a few seemingly disparate concepts and notation for some tangible benefit (in particular, see Section 3.10).
Given a smooth map , its tangent map is a smooth vector bundle morphism over , so by (2), is naturally identified with a tensor field
which may be denoted by where type pedantry is deemed unnecessary. This construction is known as a two-point tensor field ([11], p. 70). In general, if , then and have distinct types and respectively, and therefore have no well-defined sum. Thus is a nonlinear derivative. The inscribed ∘ symbol within the symbol is meant to denote that nonlinearity, in particular distinguishing it from a linear covariant derivative.
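In one standard notation for this identification (the symbols and factor ordering are chosen here for illustration and may differ from the author's elided expressions), the tangent map of a smooth map $\phi \colon M \to N$ is regarded as a section of a tensor bundle built from a pullback:

```latex
T\phi \;\in\; \Gamma\big(\phi^{*}TN \otimes T^{*}M\big),
\qquad
T\phi\big|_{p} \colon T_{p}M \to T_{\phi(p)}N
\ \text{ linear for each } p \in M .
```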
Remark 3
(Generalized covariant derivative). The well-known one-to-one correspondence between linear connections and linear covariant derivatives ([6], p. 520) generalizes to a one-to-one correspondence between Ehresmann connections and a generalized notion of covariant derivative. To give a partial definition for the purposes of utility, a generalized covariant derivative on a smooth [fiber] bundle is a map ∇ on such that for each . The space of maps is naturally identified as , and there is a natural Ehresmann connection on the bundle , whose corresponding covariant derivative is the tangent map operator. This is the subject of another of the author’s papers and will not be discussed here further. This is mentioned here to incorporate linear covariant derivatives (to be introduced and discussed in Section 3.8) and the tangent map operator (a nonlinear covariant derivative) under the single category “covariant derivative”.
There is a subtle issue regarding construction of the cotangent map of which is handled easily by the tensor field construction. In particular, while the cotangent map is the pointwise adjoint of the tangent map , i.e., for each , is linear and is the adjoint of , it does not follow that , being some sort of “total adjoint” of . The obstruction is due to the fact that may not be surjective, so there may be some fiber that is not of the form , and therefore the domain could not be all of . Furthermore, even if were surjective, if it were not also injective, say for some distinct , then , and , so the action on the fiber is not well-defined.
In the tensor field parlance, the cotangent map simply takes the form
The permutation superscript is used here instead of * to distinguish it notationally from pullback notation, which will be necessary in later calculations. The key concept is that the tensor field encodes the base map ; the basepoint is part of the domain itself.
The chain rule in the tensor field formalism makes use of the bundle pullback. If is smooth, then
Because , to form a well-defined natural pairing, the use of the pullback
is necessary (instead of just ).
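As a hedged illustration of this chain rule (with hypothetical maps $\phi \colon M \to N$ and $\psi \colon N \to P$, not taken verbatim from the elided text):

```latex
T(\psi \circ \phi) \;=\; \phi^{*}(T\psi) \cdot T\phi ,
```

where the natural pairing contracts the pulled-back $T^{*}N$ factor of $\phi^{*}(T\psi)$ against the $\phi^{*}TN$ factor of $T\phi$, which is precisely why the pullback is needed to make the pairing well-defined.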
Sometimes it is useful to discard some type information and write
i.e., such that . This is easily done by the canonical fiber projection available to all pullback bundle constructions; , and the canonical fiber projection is
as defined in (1). The granularity of the type system should reflect the weight of the calculations being performed. For demonstration of contrasting situations, see the discussion at the beginning of Section 3.8 and the computation of the first variation in (1).
It is important to have notation which makes the distinction between the smooth vector bundle morphism formalism and the tensor field formalism, because it may sometimes be necessary to mix the two, though this paper will not need to do so. An added benefit to the tensor field formulation of tangent maps is that certain notions regarding derivatives can be conceptually and notationally combined, for example in Section 3.10.
3.8. Linear Covariant Derivatives
As will be shown in the following discussion, a linear covariant derivative (commonly referred to in the standard literature without the “linear” qualifier) provides a way to generalize the notion in elementary calculus of the differential of a vector-valued function. The linear covariant derivative interacts naturally with the notion of the pullback bundle, and this interaction leads naturally to what could be called a covariant derivative chain rule, which provides a crucial tool for the tensor calculus computations seen later.
Let V and W be finite-dimensional vector spaces, let be open, and let be differentiable. Recall from elementary calculus the differential (essentially matrix-valued). There is no base map information encoded in (i.e., cannot be recovered from alone); it contains only derivative information. The vector space structure of V and W allows the trivializations and , where the first factors are the base spaces and the second factors are the fibers (see (1)). The tangent map (see Section 3.7) has a codomain that can be trivialized similarly;
Because , as a set, is a direct product, it can be decomposed into two factors. Letting and be the projections onto the first and second factors respectively,
The map is the element of identified with the base map itself; . This base map information is discarded in defining the differential of as ; the fiber portion of . This construction relies critically on the natural isomorphism for a vector space W.
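Concretely, for a differentiable $f \colon U \subseteq V \to W$ under the trivializations $TU \cong U \times V$ and $TW \cong W \times W$ (symbols assumed here for illustration of the elided formulas), the tangent map and its two factors read:

```latex
Tf \colon U \times V \to W \times W,
\qquad
Tf(p,\, v) \;=\; \big(f(p),\; Df(p)\,v\big),
```

so that the first factor recovers the base map $f$ and the second factor is the fiber portion, i.e., the classical differential $Df$.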
An analogous construction shows that the differential of a map is well-defined even when its domain is a manifold. However, when the codomain of a map is only a manifold, there does not in general exist a natural trivialization of its tangent bundle (in contrast to the vector space case), and therefore cannot be defined without additional structure. A linear covariant derivative provides the missing structure.
For the remainder of this section, let define a smooth vector bundle having rank r.
A linear covariant derivative on E provides a means of taking derivatives of sections of E (i.e., maps such that ) without passing to a higher tangent bundle as would happen under the tangent map functor (i.e., if then and ). A linear covariant derivative provides an effective “trivialization” of analogous to the trivialization as discussed above, discarding all but the “fiber” portion of the derivative, allowing the construction of an object known as the total linear covariant derivative analogous to the differential as discussed above.
The notion of a linear covariant derivative on a vector bundle is arguably the crucial element of differential geometry (The Fundamental Lemma of Riemannian Geometry establishes the existence of the Levi-Civita connection ([8], p. 68), which is a linear covariant derivative satisfying certain naturality properties.). In particular, this operator implements the product rule property common to anything that can be called a derivation—a property which is particularly conducive to the operation of tensor calculus. The total linear covariant derivative of a vector field (i.e., section of a vector bundle) allows the generalization of many constructions in elementary calculus to the setting of smooth vector bundles equipped with linear covariant derivatives. For example, the divergence of a vector field X on generalizes to the divergence of a vector field X on N, which has an analogous divergence theorem among other qualitative similarities.
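For instance, in one standard formulation (not taken verbatim from the elided text), given a linear covariant derivative $\nabla$ on $TN$, the divergence of a vector field $X$ on $N$ may be written as the trace of the total covariant derivative:

```latex
\operatorname{div} X \;=\; \operatorname{tr}\big(\nabla X\big) \;\in\; C^{\infty}(N),
```

generalizing the elementary formula $\operatorname{div} X = \partial_{i} X^{i}$ on Euclidean space.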
Remark 4
(Natural linear covariant derivative on trivial line bundle). Before making the general definition for the linear covariant derivative, a natural linear covariant derivative will be introduced. With N denoting a smooth manifold as before, if , then is the differential of f. Let
Because is naturally identified with , this is essentially the natural linear covariant derivative on the trivial line bundle . Note that there is an associated product rule; if , then , and
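In standard notation (assuming the elided formulas take their usual form), the natural derivative on the trivial line bundle and its associated product rule read:

```latex
\nabla f \;:=\; df,
\qquad
d(fg) \;=\; g\, df \;+\; f\, dg,
\qquad
f,\, g \in C^{\infty}(N).
```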
When clear from context, the superscript decoration can be omitted and the derivative denoted as .
Definition 4
(Linear covariant derivative). A linear covariant derivative on a vector bundle defined by is an -linear map satisfying the product rule
where and . The switch in order in the first term of the expression is necessary to form a tensor field of the correct type, . If , then the expression is known as the total [linear] covariant derivative of σ. If [in a subset ], then σ is said to be parallel [on U]. The “linear” qualifier is implied in standard literature and is therefore often omitted.
The inscribed | in is to indicate that the covariant derivative is linear, and can be omitted when clear from context, or when it is unnecessary to distinguish it from the nonlinear tangent map operator whose decorated symbol is . For the remainder of this section, this distinction will not be necessary, so an undecorated ∇ will be used.
For , it is customary to denote by , where V indicates the “directional” component of the derivative. Following this convention, the product rule can be written in a form where its structure is more obvious;
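In commonly used notation (which may differ in factor ordering from the author's elided expressions), the total and directional forms of the product rule are:

```latex
\nabla(f\sigma) \;=\; \sigma \otimes df \;+\; f\,\nabla\sigma,
\qquad
\nabla_{V}(f\sigma) \;=\; V(f)\,\sigma \;+\; f\,\nabla_{V}\sigma,
```

for $f \in C^{\infty}(N)$, $\sigma \in \Gamma(E)$, and $V \in \Gamma(TN)$; the first term of the total form shows the order switch needed to produce a tensor field of the correct type.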
Given a linear covariant derivative on E, there is a naturally induced linear covariant derivative on satisfying the product rule for the natural pairing on E, namely,
where , , and .
A covariant derivative is a local operator with respect to the base space N; if , then depends only on the restriction of to an arbitrarily small neighborhood of p ([8], p. 50), and therefore the restriction makes sense, allowing calculations using local expressions. Furthermore, a covariant derivative can be constructed locally and glued together under certain conditions. See ([6], p. 503) for more on this, and as a reference for general theory on bundles, covariant derivatives, and connections.
Linear covariant derivatives on several vector bundle constructions will now be developed. In analogy to defining a linear map by its action on a generating subset (e.g., a basis or a dense subspace) and then extending using the linear structure, Lemma (3) allows a covariant derivative to be defined on a generating subset (which can be chosen to make the defining expression particularly natural) and then extended. In this case, the relevant space is the space of sections of the vector bundle, which is a module over the ring of smooth functions on a manifold, and the extension process is done via linearity and the product rule (see (4)). This approach will allow the local trivialization implementation details to be hidden within the proof of Lemma (3)—an example of information hiding—so that constructions of covariant derivatives can proceed clearly by focusing only on the natural properties of the relevant objects and then invoking the lemma to do the “dirty” work (see (10) and (11)).
A bit of useful notation will be introduced to simplify the next definition. If is a subset of a -module whose elements are functions on N (and therefore have a notion of restriction to a subset) and is open, then let denote the set of restrictions of the elements of G to the set U. Note that by construction.
Definition 5
(Finitely generating subset). Say that a subset of a module finitely generates the module if the subset contains a finite set of generators for the module.
Definition 6
(Locally finitely generating subset). If Γ is a -module and , then G is said to be a locally finitely generating subset of if each point has a neighborhood for which finitely generates .
The space of sections of a vector bundle is the archetype for the above definition. The locally trivial nature of allows local frames to be chosen in a neighborhood of each point of N, from which global smooth sections (though not necessarily a global frame) can be made using a partition of unity subordinate to the trivializing neighborhoods. The set of such global sections forms a locally finitely generating subset of .
Lemma 2.
If G is a locally finitely generating subset of , then each point in N has a neighborhood and such that forms a frame for . In other words, a local frame can be chosen out of G near each point in N.
Proof.
Let and let be a neighborhood of q for which finitely generates (here, , recalling that ). Without loss of generality, let be linearly independent (this is possible because spans the vector space ). Because is continuous for each i and the linear independence of the sections is an open condition (defined by where ), there is a neighborhood of q for which is a linearly independent set for each . Finally, letting for , the sections form a frame for . □
The following lemma shows that defining a covariant derivative on a locally finitely generating subset of the space of sections of a vector bundle is sufficient to uniquely define a covariant derivative on the whole space. The particular generating subset can be chosen so the covariant derivative has a particularly natural expression within that subset.
Lemma 3
(Linear covariant derivative construction). Let G be a locally finitely generating subset of . If satisfies the linear covariant derivative axioms (What is meant by this is that the product rule need only be satisfied on if , where and .), then there is a unique linear covariant derivative whose restriction to G is .
Proof.
If , then by (2) there exists a neighborhood of q for which there are forming a frame for . If , then for some (specifically, , where denotes the dual coframe of ). Define locally on so as to satisfy the product rule
To show well-definedness, let be another frame for . Then for some . Let be the unique smooth vector bundle isomorphism such that . Writing and with respect to the frame as and respectively, it follows that and . Then
The last equality follows because , which is a constant function, so
Thus the expression defining does not depend on the choice of local frame. This establishes the well-definedness of .
Clearly the restriction of to G is . This establishes the claim of existence. Uniqueness follows from the fact that is defined in terms of the maps and . □
Lemma (3) is used in the proof of the following proposition to allow a natural formulation of the pullback covariant derivative with respect to a locally finitely generating subset of , in which the relevant derivative has a natural chain rule.
Proposition 10
(Pullback covariant derivative). If is smooth and is a covariant derivative on E, then there is a unique covariant derivative on satisfying the chain rule
for all .
Proof.
Let
noting that a local frame over open set induces a local frame , so G is a locally finitely generating subset of . Define
The well-definedness and -linearity of comes from that of . For the product rule, if and , then the product is an element of G if and only if for some , in which case, . Then it follows that
which is exactly the required product rule. By (3), there exists a unique covariant derivative on whose restriction to G is . □
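In one standard notation (the symbols $\phi \colon M \to N$, $\sigma \in \Gamma(E)$, $V \in \Gamma(TM)$ are chosen here for illustration and may differ from the author's elided expressions), the defining chain rule of the pullback covariant derivative reads:

```latex
\nabla^{\phi^{*}E}_{V}\big(\phi^{*}\sigma\big)
\;=\;
\phi^{*}\!\big(\nabla^{E}\sigma\big) \cdot \big(T\phi \cdot V\big),
```

pairing the pulled-back total covariant derivative with the vector $T\phi \cdot V \in \Gamma(\phi^{*}TN)$.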
The full notation is often cumbersome, so it may be denoted by when the pulled-back bundle is clear from context.
Remark 5.
There is an important feature of a pullback covariant derivative in the case that the pullback map is not an immersion; the pullback covariant derivative may be nonzero even where the pullback map is singular. This fact can be obscured by a certain abuse of notation which often arises in the expression of the geodesic equations in differential geometry (see (4)). An example will illustrate this point.
Let be a covariant derivative on . Let be a unit-length vector field which describes the location of a person (the basepoint) and the direction s/he is looking (the fiber portion) with respect to time (let have standard coordinate t). Define by , so that θ is the base map of Θ, i.e., θ has discarded the direction information and only encodes the location information. Say that for some closed interval , is identically zero (and so is not an immersion), but that is nonvanishing; see Figure 1. Mathematically, this means that during this time, Θ is varying only within a single fiber of . Physically, this means that during this time, the person is standing still but the direction s/he is looking is changing. Passing to a higher tangent space is often undesirable (note that takes values in ), so to avoid this, a covariant derivative is used. In order to be meaningful, the covariant derivative must capture this fiber-only variation.
Figure 1.
A picture of the manifold M, path , and vector fields and . The blue dots represent at certain points , while the green and red arrows represent and at at these points respectively. Note that is a unit-length vector field along and varies within I, whereas is a vector field along that vanishes within I.
Because Θ is a vector field along θ, it can be written as , and the covariant derivative on induces a pullback covariant derivative on , which has base space . In other words, is parameterized by time. Then is the desired covariant derivative of Θ with respect to time. A coordinate-based calculation will be made to make it completely obvious why this pullback covariant derivative captures the desired information. Let be local coordinates on M and, for simplicity, assume that the image of θ lies entirely within this coordinate chart. Because is a local frame for , is a local frame for by (1), and can be written locally as for some functions . Then
Note that . Within the interval I, vanishes, so the second term vanishes on I. However, because Θ is varying in a fiber-only direction within I, the basepoint is not changing and can be identified with an elementary vector space derivative (the fiber is a vector space and so an elementary derivative is well-defined there). This fiber-direction derivative is nonvanishing by assumption, so is nonvanishing on I as desired.
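In the chart, with Christoffel symbols $\Gamma^{k}_{ij}$ of the covariant derivative on $TM$ (standard notation, assumed here for illustration of the elided computation), this reads:

```latex
\nabla^{\theta^{*}TM}_{\partial_{t}}\,\Theta
\;=\;
\left(
\frac{d\Theta^{k}}{dt}
\;+\;
\Gamma^{k}_{ij}\,\frac{d\theta^{i}}{dt}\,\Theta^{j}
\right)
\theta^{*}\partial_{k} ,
```

so that on the interval $I$, where $\tfrac{d\theta^{i}}{dt} = 0$, only the fiber derivative $\tfrac{d\Theta^{k}}{dt}$ survives, which is exactly the fiber-only variation described above.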
Introducing a bit of natural notation which will be helpful for the next result, if and , then define and by
for each .
Proposition 11
(Induced covariant derivatives on and ). If and are covariant derivatives on E and F respectively, then there are unique covariant derivatives
and
on and respectively, satisfying the sum rule
and the product rule
respectively, where , , and . Here, (and its dual) is used instead of the isomorphic vector bundle (and its dual).
Proof.
Suppressing the pedantic use of the subscript to avoid unnecessary notational overload, the set is a locally finitely generating subset of , since local frames for take the form , where and are local frames for E and F respectively. Define
This map is well-defined and -linear by construction, since the connections and are well-defined and -linear. If , , and , then the product is in G (i.e., has the form for some and ) if and only if is constant. Thus the product rule (restricted to elements of G) reduces to -linearity, which is already satisfied. By (3), there exists a unique connection on whose restriction to G is .
Similarly, the set is a locally finitely generating subset of , since local frames for take the form , where and are local frames for E and F respectively. Define
This map is well-defined and -linear by construction, since the connections and are well-defined and -linear. For the product rule, with , , and , the product is in H if and only if there exist and such that (noting that then ). In this case, with ,
which is exactly the required product rule. By (3), there exists a unique connection on whose restriction to H is . □
Remark 6
(Naturality of the covariant derivatives on and ). Letting () for brevity, the maps
and
each extended linearly to the rest of their domains, are easily shown to be smooth vector bundle isomorphisms over . Then
and
for all , , and , showing that the connections on and are ξ and ψ-related to the naturally induced connections on and respectively, and are therefore in this sense natural. The sum and product correspond to and under ξ and ψ respectively.
Many important tensor constructions involve permutations. An extremely useful property of these permutations is that they commute with the covariant derivatives induced by the covariant derivatives on the tensor bundle factors, making them natural operators in the setting of covariant tensor calculus.
Proposition 12
(Transposition tensor fields are parallel). Let be smooth vector bundles over M having covariant derivatives respectively, let and , and let and denote the induced covariant derivatives.
If denotes the tensor field which maps to (i.e., transposes the second and third factors), then is a parallel tensor field with respect to the covariant derivative induced on the vector bundle , i.e., .
Proof.
Let . Then
Because X is arbitrary, this shows that . This extends linearly to general tensors, so , as desired. □
The fact that all transposition tensor fields are parallel implies that all permutation tensor fields are parallel, since every permutation is a product of transpositions. This gives as an easy corollary that a covariant derivative operation commutes with a permutation operation, which has quite a succinct statement using the permutation superscript notation.
Corollary 2
(Permutation tensor fields are parallel). Let be smooth vector bundles over M each having a covariant derivative, and let and . If is interpreted as the tensor field in which maps to , then σ is a parallel tensor field. Stated using the superscript notation, with and ,
Proof.
This follows from the fact that can be written as a product of transpositions, together with the product rule and the fact that each transposition is parallel. The claim regarding commutation with the superscript permutation follows easily from its definition,
using the fact that , since is a parallel tensor field. □
Finally, any smooth vector bundle has a canonical identity tensor field acting as the identity on , i.e., for all . Given a local frame over open set , it has the expression . The identity tensor field is an invaluable tool in forming tensor field expressions and in phrasing other naturality conditions regarding covariant derivatives.
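The local expression for the identity tensor field can be checked in a finite-dimensional toy model (an illustrative sketch under the simplifying assumption of an orthonormal frame, not part of the paper's formalism): summing the outer products of a frame with its dual coframe yields the identity matrix of components, which acts as the identity on every vector.

```python
# A minimal finite-dimensional sketch: in a frame (e_i) with dual coframe (e^i),
# the identity tensor field is sum_i e_i (x) e^i, whose matrix of components is
# the identity matrix, so it acts as the identity on every vector.
n = 3
e  = [[1.0 if i == j else 0.0 for j in range(n)] for i in range(n)]  # frame
de = e                                   # dual coframe (orthonormal assumption)

# components of sum_k e_k (x) e^k
I = [[sum(e[k][i] * de[k][j] for k in range(n)) for j in range(n)] for i in range(n)]
assert I == [[1.0 if i == j else 0.0 for j in range(n)] for i in range(n)]

v = [2.0, -1.0, 5.0]
Iv = [sum(I[i][j] * v[j] for j in range(n)) for i in range(n)]
assert Iv == v      # the identity tensor acts as the identity
```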
Proposition 13
(Identity tensor field is parallel). Let be a smooth vector bundle with a linear covariant derivative . Then is parallel with respect to , i.e., .
Proof.
Let . Then by definition, . Taking the covariant derivative of both sides with respect to ,
Because is arbitrary, this implies that . Because X is arbitrary, this implies that . □
3.9. Decomposition of
In using the calculus of variations on a manifold M where the Lagrangian is a function of (this form of Lagrangian is ubiquitous in mechanics), taking the first variation involves passing to . Without a way to decompose variations into more tractable components, the standard integration-by-parts trick ([15], p. 16) cannot be applied. The notion of a local trivialization of via choice of coordinates on M is one way to provide such a decomposition. A coordinate chart on M establishes a locally trivializing diffeomorphism . However, such a trivialization imposes an artificial additive structure on depending on the [non-canonical] choice of coordinates, only gives a local formulation of the relevant objects, and the ensuing coordinate calculations do not give clear insight into the geometric structure of the problem. The notion of the linear connection remedies this ([16]).
A linear connection on the vector bundle is a subbundle of such that and for all and , where is the scalar multiplication action of a on E ([6], p. 512). The bundle may also be called a horizontal space of the vector bundle (“a” is used instead of “the” because a choice of is generally non-unique). For convenience, define noting then that .
A linear connection can equivalently be specified by what is known as a connection map; essentially a projection onto the vertical bundle. This is a slightly more active formulation than just the specification of a horizontal space, as a covariant derivative can be defined directly in terms of the connection map—see ([6], p. 518), ([17], p. 128), ([18], p. 173), and ([7], p. 208).
Proposition 14
(Connection map formulation of a linear connection). If (i.e., is a smooth vector bundle morphism over π) is a left-inverse for that is equivariant with respect to and (i.e., ) ([7], p. 245), then defines a linear connection on the vector bundle . Such a map v is called the connection map associated to H. Conversely, given a linear connection H, there is exactly one connection map defining H in the stated sense.
Proof.
That v is a left-inverse for implies that v has full rank, so defines a subbundle of having the same rank as . Because v is smooth, H is a smooth subbundle. Furthermore, the condition implies that for each , and therefore by a rank-counting argument.
If and , then , which equals zero if and only if , i.e., if and only if . Thus . This establishes as a linear connection.
Conversely, if H is a linear connection and and are connection maps for H, then . Then because the image of is all of , it follows that . Since by definition, and since , this shows that . Uniqueness of connection maps has been established. To show existence, define , where is the canonical projection, recalling that . It is easily shown that v is a connection map for H. □
Proposition 15
(Decomposing ). If is a connection map, then
is a smooth vector bundle isomorphism over . See Figure 2.
Figure 2.
A diagram representing the decomposition of into horizontal and vertical subbundles. The vertical lines represent individual fibers of E, while , , denotes the zero vector of , and denotes the zero subbundle of E; . By the equivariance property of the linear connection, is a submanifold of E which is entirely horizontal (its tangent space is entirely composed of horizontal vectors). The tangent spaces and are drawn; green arrows representing the vertical subspaces (“along” the fibers), red arrows representing the horizontal subspaces. Finally, c is a horizontal curve passing through .
Proof.
Because , and and , the fiber-wise restriction
is a linear isomorphism for each . The map is a smooth vector bundle morphism over by construction. It is therefore a smooth vector bundle isomorphism over . □
Remark 7
(Linear connection/covariant derivative correspondence). Given a covariant derivative on a smooth vector bundle , there is a naturally induced linear connection, defined via the connection map
where is a variation of . Here, denotes the pullback of the covariant derivative through the map (see (10)). Conceptually, all v does is replace an ordinary derivative () with the corresponding covariant one ().
Conversely, given a connection map for a linear connection , there is a naturally induced covariant derivative on the smooth vector bundle , defined by
The scaling equivariance of v is critical for showing that this map actually defines a covariant derivative. Full type safety should be observed here; by the contravariance of the pullback of bundles (see (8)), , so
and therefore as desired. This connection map construction of a covariant derivative gives (10) as an immediate consequence via the chain rule for the tangent map.
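The slogan that the connection map "replaces an ordinary derivative with the corresponding covariant one" can be illustrated in coordinates (a hedged sketch using the standard polar-coordinate Christoffel symbols on the plane; the specific curve and field are chosen for illustration only): along a circle, the covariant derivative of a field is its ordinary componentwise derivative plus a Christoffel correction, and for a field that is constant in Euclidean terms the two contributions cancel exactly.

```python
# A concrete sketch of "replace the ordinary derivative with the covariant one":
# along the circle c(t) = (r, theta) = (2, t) in polar coordinates on R^2, the
# covariant derivative of V is (D_t V)^k = dV^k/dt + Gamma^k_{ij} c'^i V^j.
# The constant Euclidean field d/dx has polar components (cos t, -sin t / r),
# so its covariant derivative along c must vanish.
import math

R = 2.0
def V(t):                       # polar components of the constant field d/dx
    return (math.cos(t), -math.sin(t) / R)

def covariant_derivative(t, h=1e-6):
    dV = [(V(t+h)[k] - V(t-h)[k]) / (2*h) for k in range(2)]
    r_dot, th_dot = 0.0, 1.0    # velocity of the curve c(t) = (2, t)
    Vr, Vth = V(t)
    # nonzero Christoffel symbols in polar coordinates:
    #   Gamma^r_{theta theta} = -r,  Gamma^theta_{r theta} = 1/r
    Dr  = dV[0] + (-R) * th_dot * Vth
    Dth = dV[1] + (1.0/R) * (r_dot * Vth + th_dot * Vr)
    return (Dr, Dth)

D = covariant_derivative(0.9)
assert abs(D[0]) < 1e-6 and abs(D[1]) < 1e-6   # covariantly constant field
```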
The following construction is an abstraction of taking partial derivatives of a function, inspired by ([11], p. 277). Instead of taking partial derivatives with respect to individual coordinates, partial covariant derivatives along distributions over the base manifold are formed, where the distributions (subbundles) decompose the base manifold’s tangent bundle into a direct sum. Such a construction conveniently captures the geometry of a map with respect to the geometry of its domain.
Proposition 16
(Partial covariant derivatives). Let , and for each let be a smooth vector bundle. If, for each , such that is a smooth vector bundle isomorphism over , then there exist unique sections for each such that
This decomposition of provides what will be called partial covariant derivatives of L (with respect to the given decomposition).
Proof.
The following equivalences provide a formula for directly defining .
Existence and uniqueness are therefore proven. □
Corollary 3
(Horizontal/vertical derivatives). Let as before. If is a connection map, and if is smooth, then there exist unique and such that .
It should be noted that the basepoint-preserving issue discussed in Section 3.7 plays a role in choosing to use the tensor field formulation of and . In particular, without preserving the basepoint (via the -pullback of and E to form and ), the map would not be a smooth bundle isomorphism, and the horizontal and vertical derivatives would be maps of the form and which, critically, are not sections of smooth vector bundles and can only claim to be smooth [fiber] bundle morphisms. Derivative trivializations will be central in calculating the first and second variations of an energy functional having Lagrangian L (see (1) and (2)).
3.10. Curvature and Commutation of Derivatives
A ubiquitous consideration in mathematics is to determine when two operations commute. In the setting of tensor calculus, this often manifests itself in determining the commutativity (or lack thereof) of two covariant derivatives. Here, “covariant derivatives” may refer to both linear covariant derivatives and the tangent map operator (see (3)). This unified categorization of derivatives will now be leveraged to show that certain fiber bundles are flat (in a sense analogous to the vanishing of a curvature endomorphism) with respect to particular covariant derivatives. This reduces the work often done showing commutativity of derivatives in the derivation of the first variation of a function in the calculus of variations to the simple statement that a particular tensor field is symmetric, which comes as a corollary to the aforementioned flatness.
In this section, the symbol ∇ may denote or , depending on context. This eases the expression of repeated covariant derivatives, such as the covariant Hessian of a section (see below), and is an example of telescoping notation as discussed in Section 3.3.
If defines a smooth [fiber] bundle whose space of sections has two repeated covariant derivatives defined, if , and if is a symmetric linear covariant derivative (meaning for ), then the tensor contraction
is an expression measuring the non-commutativity of the X and Y derivatives of . The quantity will be called the covariant Hessian of , because it generalizes the Hessian of elementary calculus; it contains only second-derivative information, and in the special case seen below, it is symmetric in the argument components. It should be noted that if is the vector bundle such that , then . Intentionally leaving the ∇ and · symbols undecorated in preference of contextual interpretation, unwinding the expression above gives
which is syntactically identical to the common definition for the [Riemannian] curvature endomorphism . In the traditional setting, where is a linear covariant derivative on vector bundle E, the curvature endomorphism takes the form of a tensor field . In this setting however, because may be nonlinear (for example, when M and S are manifolds), such a tensorial formulation does not generally exist. Instead,
defines a second-order covariant differential operator (“covariant” meaning tensorial in the X and Y components). Put differently,
which will be called the (possibly nonlinear) curvature operator, which in particular measures the non-commutativity of the X and Y derivatives of . If is identically zero, then the bundle E is said to be flat with respect to the relevant connections/covariant derivatives.
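As an illustrative coordinate aside (a hedged sketch in the classical linear setting, with the standard Christoffel symbols of the unit sphere assumed; this is not the paper's operator-level formulation), the non-commutativity measured by the curvature can be computed from Christoffel symbols via the classical component formula, recovering the well-known value on the sphere.

```python
# A coordinate sketch of curvature as non-commutativity on the unit sphere
# (theta, phi): with Gamma^theta_{phi phi} = -sin(theta)cos(theta) and
# Gamma^phi_{theta phi} = cot(theta), the classical component formula
#   R^l_{ijk} = d_i Gamma^l_{jk} - d_j Gamma^l_{ik}
#             + Gamma^l_{im} Gamma^m_{jk} - Gamma^l_{jm} Gamma^m_{ik}
# recovers R^theta_{theta phi phi} = sin^2(theta).
import math

def Gamma(theta):
    G = [[[0.0, 0.0], [0.0, 0.0]] for _ in range(2)]  # G[l][i][j]; 0=theta, 1=phi
    G[0][1][1] = -math.sin(theta) * math.cos(theta)
    G[1][0][1] = G[1][1][0] = math.cos(theta) / math.sin(theta)
    return G

def curvature(theta, l, i, j, k, h=1e-6):
    # only the theta-derivative is nonzero: nothing depends on phi
    dG_theta = [[[(Gamma(theta+h)[a][b][c] - Gamma(theta-h)[a][b][c]) / (2*h)
                  for c in range(2)] for b in range(2)] for a in range(2)]
    dG = [dG_theta, [[[0.0]*2 for _ in range(2)] for _ in range(2)]]
    G = Gamma(theta)
    return (dG[i][l][j][k] - dG[j][l][i][k]
            + sum(G[l][i][m]*G[m][j][k] - G[l][j][m]*G[m][i][k] for m in range(2)))

th = 1.1
assert abs(curvature(th, 0, 0, 1, 1) - math.sin(th)**2) < 1e-5
```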
There are two particularly important instances of flat bundles. The first is the trivial line bundle defined by (whose space of smooth sections, as discussed in Section 3.4, is naturally identified with ; S is a smooth manifold). In this case, and
is the object referred to in most literature as the covariant Hessian of f. Here, is a real-valued function on S.
Proposition 17
(Symmetry of covariant Hessian on functions). Let S be a smooth manifold and let be a symmetric covariant derivative. If , then is a symmetric tensor field (i.e., it has a symmetry). Here, the covariant derivative on is as defined above.
Proof.
Let . Recall that . Then
Because is pointwise-arbitrary in , this shows that is symmetric. Equivalently stated, is identically zero, and therefore the relevant bundle is flat. □
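The symmetry asserted by the proposition can be witnessed numerically in a simple coordinate model (a hedged sketch; the flat metric in polar coordinates and the test function are illustrative assumptions): each entry of the covariant Hessian is computed asymmetrically, as an i-derivative of a j-derivative minus a Christoffel correction, yet the resulting matrix is symmetric because the connection is symmetric.

```python
# A coordinate sketch of the symmetric covariant Hessian: in polar coordinates
# (r, theta) with the Levi-Civita connection of the flat metric,
#   H_ij = d_i d_j f - Gamma^k_{ij} d_k f
# is symmetric for any smooth f, even though each entry is computed
# asymmetrically (an i-derivative of a j-derivative).
import math

def f(r, th):
    return r**2 * math.cos(th) + math.sin(r * th)

def d(F, x, i, h=1e-5):
    xp, xm = list(x), list(x)
    xp[i] += h; xm[i] -= h
    return (F(*xp) - F(*xm)) / (2*h)

def hessian(x):
    r = x[0]
    # polar Christoffel symbols: Gamma^r_{th th} = -r, Gamma^th_{r th} = 1/r
    Gamma = [[[0.0, 0.0], [0.0, -r]], [[0.0, 1.0/r], [1.0/r, 0.0]]]
    grad = [d(f, x, k) for k in range(2)]
    return [[d(lambda r_, th_: d(f, (r_, th_), j), x, i)
             - sum(Gamma[k][i][j] * grad[k] for k in range(2))
             for j in range(2)] for i in range(2)]

H = hessian((1.3, 0.4))
assert abs(H[0][1] - H[1][0]) < 1e-4    # symmetry of the covariant Hessian
```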
The second important case involves the nonlinear covariant derivative on . Here, if , then
so .
Proposition 18
(Symmetry of covariant Hessian on maps). Let M and S be smooth manifolds and let and be symmetric covariant derivatives. If , then is a tensor field which is symmetric in the two components (i.e., it has a symmetry). Here, the covariant derivative on is as defined above.
Proof.
Let and , so that . Then
By definition, , which cancels out the other term. By (17), is symmetric, so the final term is zero. Because is pointwise-arbitrary in and X and Y are pointwise-arbitrary in , this shows that is identically zero, so the bundle defined by , whose space of sections is identified with , is flat, and therefore is symmetric in its two components. □
The construction used in (16) can be applied to nonlinear as well as linear covariant derivatives to considerable advantage. For example, if , where are smooth manifolds and and , then define and by
This gives a convenient way to express partial covariant derivatives, which will be used heavily in Section 4 in calculating the first and second variations of an energy functional. Note that in this parlance, is the full tangent map .
Defining second partial covariant derivatives , , and by
the symmetry of the covariant Hessian of can be used to show the various symmetries of these second derivatives.
Proposition 19
(Symmetries of partial covariant derivatives). With ψ and its second partial covariant derivatives as above,
and (having analogous type) are -symmetric (i.e., and ) and the mixed, second partial covariant derivatives
are mutually -symmetric (i.e., ).
Proof.
Let . If and , then
Because and are pointwise-arbitrary in and respectively, this implies that .
Analogous calculations (setting and and then separately setting and ) show that and . □
There are two final results regarding the second covariant derivative that will be especially useful in the calculation of the first and second variations of an energy functional (see (1) and (3)).
Proposition 20
(Chain rule for covariant Hessian). Let define a bundle having a first and second covariant derivative (i.e., a section of E can be covariantly differentiated twice). If and , then
Proof.
Let . Then
Because X is pointwise-arbitrary in , this establishes the desired equality. □
Proposition 21
(Pullback curvature endomorphism). Let define a vector bundle having first and second covariant derivatives. If , then
Proof.
Note that . Let and let , so that . Then
and because and are pointwise-arbitrary in their respective spaces, this establishes the desired equality. □
A common operation is to evaluate a covariant derivative along a single tangent vector. One can express a single tangent vector as a section of a particular pullback bundle, the map being the constant map evaluating to the basepoint of the vector. This allows the richly typed formalism of pullback bundles to be used to evaluate derivatives at a point, particularly noting that this safely deals with the overloading of the natural pairing operator · (see Section 3.5).
Proposition 22
(Evaluation commutes with non-involved derivatives). Let A and B be smooth manifolds and let for some smooth bundle having a covariant derivative . If and the map represents evaluation at b, then
i.e., evaluation in B commutes with a derivative along A.
Proof.
Let , and let and . Then
and because X is pointwise-arbitrary in , this implies that as desired. □
Proposition 23.
Let be smooth manifolds, let be smooth, let and , and let . If and , then
Proof.
The conditions and imply that in the product covariant derivative. Then since , it follows that
where denotes the identity tensor field on , and therefore
For the main calculation,
as desired. □
4. Riemannian Calculus of Variations
The use of the Calculus of Variations in the Riemannian setting to develop the geodesic equations and to study harmonic maps is quite well-established. A more general formulation is required for more specialized applications, such as continuum mechanics in Riemannian manifolds. The tools developed in Section 3 will now be used to formulate the first and second variations and Euler–Lagrange equations of an energy functional corresponding to a first-order Lagrangian. In particular, the bundle decomposition discussed in Section 3.9 will be needed to employ the standard integration-by-parts trick seen in the formulation of the analogous parts of the elementary Calculus of Variations. The seemingly heavy and pedantic formalism built up thus far will now show its usefulness.
In this part, let and be Riemannian manifolds with M compact. Calculations will be done formally in the space , noting that its completion under various norms will give various Sobolev spaces of maps from M to S, which are ultimately the spaces which must be considered when finding critical points of the relevant energy functionals. See [17,18] for details on the analytical issues. Let denote the Riemannian volume form corresponding to metric g, and let be the induced volume form on . Let be the inclusion, and let be the unit normal covector field on . Let and , making a vector bundle.
The energy functionals in this section will be assumed to have the form
where , referred to as the Lagrangian of the functional, is smooth. Here, could be understood to take values either in or . In the former case, the composition is literal, while in the latter case, there is an implicit conversion from to via a fiber projection bundle morphism (see (1)). Either way, . Let and denote the respective Levi-Civita connections, which induce a covariant derivative on E (see (11)). Define the connection map using as in (3). For convenience, the subscript will be suppressed on the “full” tensor product defining E from here forward.
4.1. Critical Points and Variations
One of the most pertinent properties of an energy functional is its set of critical points. Often, the solution to a problem in physics will take the form of minimizing a particular energy functional. Lagrangian mechanics is the quintessential example of this. This section will deal with some of the main considerations regarding such critical points.
Because the domain of a [real-valued] functional may be a nonlinear space, the relevant first derivative is the [real-valued] differential , which is paired with the linearized variation of a map . In particular, a one-parameter variation of is a smooth map , where the I component is the variational parameter. Letting i denote the standard coordinate on I, the linearized variation is then , recalling that . Because , it follows that , i.e., is a vector field along . The object will be called a linearized variation. Call the elements of linear variations.
Proposition 24
(Each linear variation is a linearized variation). Let denote the exponential map associated to , where is a neighborhood of the zero bundle in on which exp is defined, and let denote the scalar multiplication structure on . If and if is defined by , then . In other words, every vector field over ϕ is realized as the linearization of a one-parameter variation of ϕ.
Proof.
The map is well-defined and smooth by construction. Let . Then
where denotes the zero subbundle of . The last equality follows from a naturality property of the exponential map ([5], p. 523). □
Thus each linear variation is a linearized variation, establishing a natural identification of with , which will be useful when calculating the differential of a functional on . In fact, the exponential map construction in (24) is a way to construct charts for the infinite dimensional manifold ([18], Theorem 5.2).
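The exponential-map construction of (24) can be checked numerically on the unit sphere, where the exponential map has a closed form (a hedged sketch; the basepoint and tangent vector below are illustrative choices): the one-parameter variation built from a tangent vector linearizes back to that tangent vector.

```python
# A numerical sketch of "each linear variation is a linearized variation" on
# the unit sphere S^2 in R^3, where exp_p(v) = cos|v| p + sin|v| v/|v|:
# the variation Phi(t) = exp_phi(t A) satisfies dPhi/dt|_{t=0} = A.
import math

p = (1.0, 0.0, 0.0)                 # basepoint phi
A = (0.0, 0.3, -0.4)                # tangent vector at p (orthogonal to p)

def Phi(t):
    v = tuple(t * a for a in A)
    n = math.sqrt(sum(c*c for c in v))
    if n == 0.0:
        return p
    return tuple(math.cos(n) * pc + math.sin(n) * vc / n for pc, vc in zip(p, v))

h = 1e-6
lin = tuple((x - y) / (2*h) for x, y in zip(Phi(h), Phi(-h)))  # d/dt at t = 0
assert all(abs(l - a) < 1e-6 for l, a in zip(lin, A))
```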
4.2. First Variation
This section is devoted to calculating the first variation of the previously defined energy functional. Here the full richness of the type system developed earlier in the paper will really show its power (and, arguably, its necessity). While the type-specifying notation may appear overly decorated and pedantic, subtle usage errors can be detected and avoided by keeping track of the many types of the relevant objects through the sub/superscripts on covariant derivatives and natural pairings; extremely complex constructions can be made and navigated without much trouble. By contrast, performing the ensuing calculations in coordinate trivializations would result in an intractable proliferation of Christoffel symbols and indexed expressions which would prove difficult to read and would be highly prone to error.
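The kind of error detection the type system provides can be illustrated with a toy programming analogue (entirely hypothetical code, not part of the paper's formalism): tagging each tensor with its basepoint makes an ill-typed natural pairing fail loudly instead of silently producing a meaningless number.

```python
# A toy illustration of strongly typed tensor pairing: tangent and cotangent
# vectors carry their basepoint as part of their type, and the natural pairing
# refuses operands over distinct basepoints.
from dataclasses import dataclass

@dataclass(frozen=True)
class Tangent:                 # an element of T_p M, tagged with basepoint p
    basepoint: tuple
    components: tuple

@dataclass(frozen=True)
class Cotangent:               # an element of T*_p M
    basepoint: tuple
    components: tuple

def pair(omega: Cotangent, X: Tangent) -> float:
    if omega.basepoint != X.basepoint:
        raise TypeError(f"pairing over distinct basepoints "
                        f"{omega.basepoint} and {X.basepoint}")
    return sum(w * x for w, x in zip(omega.components, X.components))

w = Cotangent((0.0, 0.0), (1.0, 2.0))
X = Tangent((0.0, 0.0), (3.0, -1.0))
Y = Tangent((1.0, 0.0), (3.0, -1.0))   # same components, different basepoint
assert pair(w, X) == 1.0
try:
    pair(w, Y)
    raised = False
except TypeError:
    raised = True
assert raised                           # the type error is caught, not ignored
```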
Because the Lagrangian is defined on a vector bundle over the product space , the decomposition in (3) can be slightly refined. The projection can be decomposed into the factors and , so that . Then . Let
The letters sigma and mu have been chosen to reflect the fact that and give the “S component” (spatial) and “M component” (material) of the derivative . The connection map v will be retained as is, giving , the “E component” (fiber) of . See (8) for a discussion of how the quantities generalize the analogous structures in the elementary treatment of the calculus of variations.
Because a one-parameter variation of has the form but the energy functional involves only the M derivative of its argument, the partial tangent map must be used here. For the purposes of calculating the first and second variations, must be written as
Theorem 1
(First variation of ). Let , L, σ, μ, v and ν all be defined as above. If and , then
The expression above is often called the first variation of . A type analysis here gives and . Recall that because the domain of is M, it follows that .
Proof.
Supporting calculations will be made below in lemmas. Let be as in (24), so that . For tidiness, let and . Then
as desired.
As for the types of and , the contravariance of bundle pullback allows significant simplification. Because and ,
The supporting calculations follow. Define for purposes of evaluation of via precomposition as in (22). Then is a section of a pullback bundle; . It should be noted that by definition, and that by (22). □
Lemma 4.
Let L, Φ, A, σ, and v be as in Theorem 1. The variational derivative of decomposes in terms of the partial covariant derivatives and and the linearized variation A;
The integration-by-parts trick as in the derivation of the first variation in elementary calculus of variations generalizes to the covariant setting;
Proof.
A wonderful string of equalities follows.
Note that by (8), , and . Replacing with A gives
establishing the first equality.
For the second,
Note that , so and . □
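The integration-by-parts step in the lemma has an exact discrete shadow (a hedged sketch with arbitrary illustrative sequences): summation by parts moves the "derivative" off of one factor and onto the other, at the cost of boundary terms.

```python
# A discrete sketch of the integration-by-parts trick: for sequences f, g,
#   sum_k f_k (g_{k+1}-g_k) = f_n g_n - f_0 g_0 - sum_k g_{k+1} (f_{k+1}-f_k),
# i.e., the difference operator flips across the pairing up to boundary terms.
import math

n = 200
f = [math.sin(0.05 * k) for k in range(n + 1)]
g = [math.exp(-0.01 * k) for k in range(n + 1)]

lhs = sum(f[k] * (g[k+1] - g[k]) for k in range(n))
rhs = f[n]*g[n] - f[0]*g[0] - sum(g[k+1] * (f[k+1] - f[k]) for k in range(n))
assert abs(lhs - rhs) < 1e-12
```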
Lemma 5.
The variation decomposes as follows.
Proof.
This calculation determines the component of .
This calculation determines the component of .
The last equality follows from the fact that does not depend on the i coordinate.
This calculation determines the v component of . Let and . The left-hand side of the third equality claimed in the lemma will be examined before evaluating at ;
Let , noting that . Then
Recall that and that the pullback of bundles is contravariant. Then evaluating at via pullback by z renders
The last equality is because , which is a bundle over M, and therefore is the total covariant derivative. Because Y is pointwise-arbitrary in , this implies that , i.e., the variational derivative commutes with the first material derivative, just as in the analogous situation in elementary calculus of variations. □
Corollary 4
(Euler–Lagrange equations). If is a critical point of (i.e., if for all ), then
These are called the Euler–Lagrange equations for the energy functional . Recall that because the domain of ϕ is M, .
Proof.
This follows trivially from (1) and the Fundamental Lemma of the Calculus of Variations ([15], p. 16). □
It should be noted that the boundary Euler–Lagrange equation is due to the fact that the admissible variations are entirely unrestricted. If, for example, the class of maps being considered had fixed boundary data, then any variation would vanish at the boundary, and there would be no boundary Euler–Lagrange equation; this is typically how geodesics and harmonic maps are formulated.
Remark 8
(Analogs in elementary calculus of variations). The quantities generalize the quantities respectively of the elementary treatment of the calculus of variations for energy functional
where is compact and is the Lagrangian. Here, and decompose the total derivative and are defined by the relation
for and . The Euler–Lagrange equation in this setting is
noting that the left hand side of the equation takes values in .
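The elementary Euler–Lagrange equation above can be verified numerically for a model Lagrangian (a hedged sketch; the harmonic-oscillator Lagrangian and its solution are standard illustrative choices, not taken from the paper):

```python
# A finite-difference check of the elementary Euler-Lagrange equation for
# L(x, v) = v^2/2 - x^2/2: along the solution x(t) = cos t,
#   d/dt [ dL/dv (x, x') ] - dL/dx (x, x') = 0.
import math

def L(x, v):
    return 0.5 * v*v - 0.5 * x*x

def x(t):
    return math.cos(t)

h = 1e-5
def v(t):                                        # x'(t) by central difference
    return (x(t + h) - x(t - h)) / (2*h)

def dL_dv(t):
    return (L(x(t), v(t) + h) - L(x(t), v(t) - h)) / (2*h)

def dL_dx(t):
    return (L(x(t) + h, v(t)) - L(x(t) - h, v(t))) / (2*h)

t0 = 0.8
euler_lagrange = (dL_dv(t0 + h) - dL_dv(t0 - h)) / (2*h) - dL_dx(t0)
assert abs(euler_lagrange) < 1e-3    # the solution satisfies the equation
```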
In most situations involving simpler calculations, it is desirable and acceptable to dispense with the highly decorated notation and use trimmed-down, context-dependent notation, leaving off type-specifying sub/superscripts when clear from context.
Proposition 25
(Conserved quantity). If M is a real interval, satisfies the Euler–Lagrange equation, and , then
is constant. If L is kinetic minus potential energy, then H is kinetic plus potential energy (the total energy), and is referred to as the Hamiltonian.
Proof.
Let t be the standard real coordinate. Note that because M is a real interval, it follows that . Terms appearing in the derivative of H can be simplified as follows. Note the repeated derivatives; but lands in a higher tangent space.
Again, because M is a real interval, the divergence is just the derivative, so the Euler–Lagrange equation is
and therefore . Thus
which is zero because satisfies the Euler–Lagrange equation. This shows that H is constant along solutions of the Euler–Lagrange equation, and is therefore a conserved quantity. It should be noted that this proof relies on the fact that the divergence takes a particularly simple form when the domain M is a real interval; the result does not necessarily hold for a general choice of M. □
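The conserved quantity of Proposition 25 can be observed numerically for a model Lagrangian (a hedged sketch; the harmonic oscillator is an illustrative choice): along a solution of the Euler–Lagrange equation, the Hamiltonian is constant.

```python
# A numerical sketch of the conserved Hamiltonian for L(x, v) = v^2/2 - x^2/2:
# along the solution x(t) = cos t, H = v * dL/dv - L = v^2/2 + x^2/2
# (kinetic plus potential) is constant.
import math

def H(t):
    x, v = math.cos(t), -math.sin(t)
    dL_dv = v                          # for L = v^2/2 - x^2/2
    L = 0.5*v*v - 0.5*x*x
    return v * dL_dv - L

values = [H(0.1 * k) for k in range(50)]
assert all(abs(val - values[0]) < 1e-12 for val in values)
```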
Example 3
(Harmonic maps). Define a metric
in a manner analogous to that in (2);
To clarify, , so permuting the middle two components (as in the definition of ) gives the correct type, including the necessary metric symmetry condition. If , then is the quantity obtained by raising/lowering the indices of A and pairing it naturally with A. A useful fact is that ; if , then permutation commutativity (2) and the product rule gives
which equals zero because h and are parallel with respect to and respectively.
With Lagrangian
and energy functional
( is called the energy of ϕ), the resulting Euler–Lagrange equations can be written down after calculating and . It is worthwhile to note that L is a quadratic form on E, which will automatically imply that . However, the calculation showing this will be carried out for demonstration purposes.
Let . Then is a vertical variation of A, since , so
The product rule gives three terms. The middle term is zero because , and therefore does not depend on ϵ. The basepoint evaluation notation for will be suppressed for brevity (see Section 3.5). Thus
where the last equality results from the symmetry of k. By the nondegeneracy of the natural pairing on (which is denoted here by :), this implies that .
To calculate , it is sufficient (and can be easier) to calculate , as , , so . Let be a horizontal curve in ; this means that . Recall that is defined by . Then
As before, the product rule gives three terms. Using the contravariance of bundle pullback, the middle term is
which equals zero because . Thus
which equals zero because . The quantity can take any value in , showing that . Finally, implies that and . This can be understood from the fact that L depends only on the fiber values of A, and has no explicit dependence on the basepoint; this relies crucially on the fact that .
Finally, the Euler–Lagrange equations can be written down. Recalling that the natural trace of a tensor (used in the divergence term in the Euler–Lagrange equation) is contraction with the appropriate identity tensor, let be a local frame for and let be its dual coframe, so that is a local expression (It should be noted that while is being written as the local expression , no inherently local property is being used; this tensor decomposition is only used so that the product rule can be used in the following calculations in a clear way.) for . The type-subscripted notation will be minimized except to help clarify. On M:
The second term vanishes because . Unraveling the definition of k gives . Contracting both sides of the above equation with gives
The quantity is the g-trace of the covariant Hessian of ϕ and can rightfully be called the covariant Laplacian of ϕ and denoted by (this is also referred to as the tension field of ϕ in other literature ([14], p. 13), which is denoted ). Note that is a vector field along ϕ. This makes sense because ϕ is not necessarily a scalar function; it takes values in S. In the case , is the ordinary covariant Laplacian on scalar functions.
A harmonic map is defined as a critical point of the energy functional
Assuming a fixed boundary (so that the variations vanish on the boundary) eliminates the boundary Euler–Lagrange equation; the remaining equation is
which is the generalization of Laplace’s equation. Satisfying Laplace’s equation is a sufficient condition for a map to be a critical point of the energy functional. There is an abundance of literature concerning harmonic maps and the analysis thereof [14,15,19,20].
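In the flat, scalar-valued special case the harmonic map equation reduces to the classical Laplace equation, which can be illustrated discretely (a hedged sketch; the harmonic function and grid are illustrative choices):

```python
# A flat, scalar-valued sketch of the harmonic map equation: for target S = R
# it reduces to Laplace's equation, and the harmonic function
# u(x, y) = x^2 - y^2 (the real part of z^2) has vanishing 5-point discrete
# Laplacian everywhere.
def u(x, y):
    return x*x - y*y

h = 0.01
def discrete_laplacian(x, y):
    return (u(x+h, y) + u(x-h, y) + u(x, y+h) + u(x, y-h) - 4*u(x, y)) / (h*h)

samples = [(-0.5, 0.3), (0.2, 0.9), (1.1, -0.7)]
assert all(abs(discrete_laplacian(x, y)) < 1e-8 for x, y in samples)
```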
Example 4
(The geodesic equation). A fundamental problem in differential geometry is determining length-minimizing curves between given points. If M is a bounded, real interval, and t denotes the standard real coordinate, then the length functional on curves is . A topological metric on M can be defined as
It can be shown that the length functional and the energy functional have identical minimizers. Note that . It is therefore sufficient to consider the analytically preferable energy functional.
In this case, the metric g on M is just scalar multiplication on . Because M is one-dimensional and t is the standard real coordinate, is a global, parallel orthonormal frame for , and the g-trace of (i.e., ) has a single term. The Euler–Lagrange equation, on the interior of M, is
However, and , giving the geodesic equation
This is the covariant way to state that the acceleration of ϕ is identically zero. The geodesic equation is commonly notated as , though such notation is inaccurate because is not a vector field on S, but a vector field along ϕ, and therefore use of the pullback covariant derivative is correct (see (5)).
While formulated using fixed boundary conditions (ϕ has p and q as its endpoints), the geodesic equation is a second order ODE for which initial tangent vector conditions are sufficient to uniquely determine a solution.
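The geodesic equation can be checked numerically in the extrinsic picture (a hedged sketch; the extrinsic characterization of sphere geodesics, acceleration purely normal to the sphere, is standard but is an assumption relative to the intrinsic formulation above):

```python
# A numerical sketch of the geodesic equation on the unit sphere, written
# extrinsically: a curve c on S^2 is a geodesic iff its acceleration is purely
# normal, c'' = -|c'|^2 c. The great circle c(t) = (cos t, sin t, 0) satisfies
# this, i.e., its covariant (tangential) acceleration vanishes.
import math

def c(t):
    return (math.cos(t), math.sin(t), 0.0)

h = 1e-4
def second_derivative(t):
    return tuple((a - 2*b + d) / (h*h) for a, b, d in zip(c(t+h), c(t), c(t-h)))

t0 = 0.6
acc = second_derivative(t0)
speed2 = sum(((a - b) / (2*h))**2 for a, b in zip(c(t0+h), c(t0-h)))  # |c'|^2
residual = [a + speed2 * p for a, p in zip(acc, c(t0))]
assert all(abs(r) < 1e-4 for r in residual)   # zero covariant acceleration
```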
4.3. Second Variation
A further consideration after finding critical points of the energy functional is determining which critical points are extrema. This will involve calculating the second derivative of . Let , noting that for . The first derivative of is , as seen in the previous section. The second derivative is the covariant Hessian , where the covariant derivative is induced by ([18], Theorem 5.4).
For the remainder of this section, let be neighborhoods of zero, let i and j be their respective standard coordinates, and extend the existing -style derivative-at-a-point notation by defining , , and evaluation map . Then and ; these will be used as in the calculation of the first variation.
Theorem 2
(Second variation of ). Let , L, σ, μ, v and ν all be defined as above. If is a critical point of and , then the covariant Hessian of is
This is often called the second variation of . Here, denotes the Riemannian curvature endomorphism tensor for the Levi-Civita connection on .
Proof.
Let be a two-parameter variation such that and (e.g., ). The variation can be naturally identified with a variation which is more conducive to the use of C as a manifold. The tensor products in the generally infinite-dimensional are taken formally. Let .
By (20), taking the algebra formally in the case of infinite-rank vector bundles,
so
Supporting calculations follow.
Calculation (1): Abbreviate by . By (20),
Note that , and since is a critical point of ,
Thus
Calculation (2):
Calculation (3): As calculated in the proof of (1),
Furthermore, letting for brevity and noting that ,
Note that
and therefore
so it suffices to examine its natural pairing with elements. Let , noting that and that . Then
where the last equality follows from Calculations (4) and (5). Because X is pointwise-arbitrary in , this shows that
Calculation (4):
Calculation (5):
□
Theorem 3
(Second variation of (alternate form)). Let , L, σ, μ, v and ν all be defined as above. If is a critical point of and , then
Proof.
This result follows essentially from (2) via several instances of integration by parts to express the integrand(s) entirely in terms of A and not its covariant derivatives. Abbreviate by . Then, integrating by parts allows the covariant derivatives of A to be flipped across the natural pairings over .
Similarly,
Together with (2), this gives the desired result. □
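The integration-by-parts step used above can be illustrated schematically. The following identity is a standard consequence of the divergence theorem, stated here under assumptions matching the setting of the proof: a closed Riemannian manifold with volume measure $\mu$, a vector field $V$, a fiber metric $\langle\cdot,\cdot\rangle$ on a vector bundle, a metric-compatible covariant derivative $\nabla$, and sections $A$, $B$ of that bundle.

```latex
\int_M \langle \nabla_V A,\, B \rangle \, d\mu
\;=\;
-\int_M \langle A,\, \nabla_V B \rangle \, d\mu
\;-\;
\int_M (\operatorname{div} V)\,\langle A,\, B \rangle \, d\mu .
```

This follows by applying $\int_M V\!\left(\langle A, B\rangle\right) d\mu = -\int_M (\operatorname{div} V)\,\langle A, B\rangle\, d\mu$ (no boundary term, since $M$ is closed) together with metric compatibility, $V\langle A,B\rangle = \langle \nabla_V A, B\rangle + \langle A, \nabla_V B\rangle$. This is the sense in which covariant derivatives are "flipped across" the natural pairing.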
5. Discussion
This paper is a first pass at the development of a strongly typed tensor calculus formalism. The details of its workings are by no means complete or fully polished, and its landscape is riddled with many tempting rabbit holes that would likely yield useful results upon exploration, but which were beyond the scope of a first exposition. Here is a list of some topics which the author considers worthwhile to pursue, and which will likely be the subject of his future work. Hopefully some of these topics will be inspiring to other mathematicians, and ideally will start a conversation on the subject.
- There are refinements to be made to the type system used in this paper in order to achieve better error-checking and possibly more insight into the relevant objects. There are still implicit type identifications being done (mostly the canonical identifications between different pullback bundles).
- The calculations done in this paper are not in an optimally polished and refined state. With experience, certain common operations can be identified, abstract computational rules generated for these operations, and the relevant calculations simplified.
- The language of Category Theory can be used to address the implicit/explicit handling of natural type identifications, for example, the identification used in showing the contravariance of bundle pullback; .
- The details of the particular implementation of the pullback bundle as a submanifold of the direct product are used in this paper, but there is no reason to “open up the box” like this. For most purposes, the categorical definition of pullback bundle suffices; the pullback bundle can be worked with exclusively using its projection maps and . Using this abstract interface often cleans up calculations involving pullback bundles significantly.
- The type system used for any particular problem or calculation can be enriched or simplified to adjust to the level of detail appropriate for the situation. For example, if , then , but if t is the standard coordinate on , then , where is given by . This “primed” derivative has a simpler type than the total derivative, and would presumably lead to simpler calculations (e.g., in (25)). This “primed” derivative could also be used in the derivation of the first and second variations. While this would simplify the type system, it would diversify the notation and make the computational system less regularized. However, some situations may benefit overall from this.
- The notion of strong typing comes from computer programming languages. The human-driven type-checking which is facilitated by the pedantically decorated notation in this paper can be done by computer by implementing the objects and operations of this tensor calculus formalism in a strongly typed language such as Haskell. This would be a step toward automated calculation checking, and could be considered a step toward automated proof checking from the top down (as opposed to from the bottom up, using a system such as the Coq Proof Assistant). Furthermore, a computer could display tensor expressions with whatever level of detail the user desires, showing low-, mid-, or high-level notation, or showing or suppressing identification isomorphisms.
- Is there some sort of completeness result about the calculational tools and type system in this paper? In other words, is it possible to accomplish “everything” in a global, coordinate-free way using a certain set of tools, such as pullback bundles, covariant derivatives, chain rules, permutations, evaluation-by-pullback?
- The alternate form of the second variation (see (3)) can be used to form a generalized Jacobi field equation for a particular energy functional. Analysis of this equation and its solutions may give insights analogous to the standard (geodesic-based) Jacobi field equation.
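The strongly typed implementation proposed in the list above can be given a minimal sketch. The following Haskell fragment is purely illustrative (the types `Variance`, `Tensor1`, and `pair` are hypothetical names invented here, not part of any existing library): a phantom type parameter tracks the variance of a rank-1 tensor, so that the natural pairing of a covector with a vector type-checks, while pairing two vectors is rejected at compile time — the compiler performs the kind of type-checking that the decorated notation in this paper asks the human reader to perform.

```haskell
{-# LANGUAGE DataKinds #-}
{-# LANGUAGE KindSignatures #-}

-- A minimal sketch of variance-tracking via phantom types.
-- 'Up' marks a contravariant slot (vector), 'Down' a covariant one (covector).
data Variance = Up | Down

-- A rank-1 tensor: a list of components tagged (at the type level only)
-- with its variance. The tag has no runtime cost.
newtype Tensor1 (v :: Variance) a = Tensor1 [a]

-- The natural pairing accepts only a covector against a vector;
-- any other combination is a compile-time type error.
pair :: Num a => Tensor1 'Down a -> Tensor1 'Up a -> a
pair (Tensor1 xs) (Tensor1 ys) = sum (zipWith (*) xs ys)

main :: IO ()
main = do
  let omega = Tensor1 [1, 2, 3] :: Tensor1 'Down Int
      v     = Tensor1 [4, 5, 6] :: Tensor1 'Up Int
  print (pair omega v)   -- prints 32
  -- pair v v            -- rejected by the type checker: variance mismatch
```

A full implementation would track the base point and bundle (e.g., pullback bundles as distinct types) in further phantom parameters, which is exactly where the canonical-identification isomorphisms mentioned earlier would have to be made explicit.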
Funding
This research was funded in part by a 2011–2012 ARCS (Achievement Rewards for College Scientists) Fellowship.
Acknowledgments
I would like to thank the ARCS Foundation for having granted me an ARCS Fellowship, and for their generous efforts to promote excellence in young scientists. I would like to thank my advisor Debra Lewis for trusting in my abilities and providing me with the freedom in which the creative endeavor that this paper required could flourish. I would like to thank David DeConde for the invaluable conversations at the Octagon in which imagination, creativity, and exploration were gladly fostered. Thanks to Chris Shelley for showing me how to create the tensor diagrams using TikZ. Finally, I would like to thank both Debra and David for their help in editing this paper.
Conflicts of Interest
The author declares no conflict of interest. The funders had no role in the research, writing, or publication of the manuscript, nor in the decision to publish the results.
References
- Parnas, D. On the Criteria to Be Used in Decomposing Systems Into Modules. Commun. ACM 1972, 15, 1053–1058. [Google Scholar] [CrossRef]
- Walter, W. Ordinary Differential Equations; Springer: Berlin/Heidelberg, Germany, 1998; Volume 182. [Google Scholar]
- Raymond, E.S. The Art of Unix Programming; Pearson Education, Inc.: Canada, 2003; Available online: http://www.catb.org/~esr/writings/taoup/html/ (accessed on 20 July 2022).
- Miller, G.A. The Magical Number Seven, Plus or Minus Two: Some Limits on Our Capacity for Processing Information. Psychol. Rev. 1956, 63, 81–97. [Google Scholar]
- Lee, J.M. Introduction to Smooth Manifolds; Springer: Berlin/Heidelberg, Germany, 2006; Volume 218. [Google Scholar]
- Lee, J.M. Manifolds and Differential Geometry; American Mathematical Society, 2009; Volume 107, Available online: https://bookstore.ams.org/gsm-107 (accessed on 25 August 2022).
- Michor, P.W. Topics in Differential Geometry; American Mathematical Society, 2008; Volume 93, Available online: https://www.mat.univie.ac.at/~michor/dgbook.pdf (accessed on 25 August 2022).
- Lee, J.M. Riemannian Manifolds: An Introduction to Curvature; Springer: Berlin/Heidelberg, Germany, 1997; Volume 176. [Google Scholar]
- Cardelli, L. Typeful Programming. 1991. (Revised 1993). Available online: http://www.lucacardelli.name/Papers/TypefulProg.pdf (accessed on 20 July 2022).
- Penrose, R. The Road to Reality; Vintage Books: UK, 2004; Available online: https://en.wikipedia.org/wiki/The_Road_to_Reality (accessed on 25 August 2022).
- Marsden, J.E.; Hughes, T.J.R. Mathematical Foundations of Elasticity; Prentice Hall, Inc.: New York, NY, USA, 1983; Available online: https://www.amazon.co.jp/Mathematical-Foundations-Elasticity-Mechanical-Engineering/dp/0486678652 (accessed on 25 August 2022).
- Palais, R.S. Foundations of Global Non-Linear Analysis; W.A. Benjamin, Inc., 1968; Available online: https://vmm.math.uci.edu/PalaisPapers/FoundationsOfGlobalNonlinearAnalysis.pdf (accessed on 25 August 2022).
- Kolář, I.; Michor, P.W.; Slovák, J. Natural Operations in Differential Geometry; Springer: Berlin/Heidelberg, Germany, 1993; Available online: http://www.mat.univie.ac.at/~michor/listpubl.html (accessed on 20 July 2022).
- Xin, Y. Geometry of Harmonic Maps; Birkhäuser, 1996; Volume 23, Available online: https://link.springer.com/chapter/10.1007/978-1-4612-4084-6_4 (accessed on 25 August 2022).
- Giaquinta, M.; Hildebrandt, S. Calculus of Variations I; Springer: Berlin/Heidelberg, Germany, 1996. [Google Scholar]
- Dodson, C.; Radivoiovici, M. Second-Order Tangent Structures. Int. J. Theor. Phys. 1982, 21, 151–161. [Google Scholar] [CrossRef]
- Ebin, D.G.; Marsden, J.E. Groups of Diffeomorphisms and the Motion of an Incompressible Fluid. Ann. Math. Second. Ser. 1970, 92, 102–163. [Google Scholar] [CrossRef]
- Eliasson, H.I. Geometry of Manifolds of Maps. J. Differ. Geom. 1967, 1, 169–194. [Google Scholar] [CrossRef]
- Eells, J.; Lemaire, L. Selected Topics in Harmonic Maps; CBMS Regional Conference Series in Mathematics, No. 50; American Mathematical Society: Providence, RI, USA, 1983; Available online: https://bookstore.ams.org/cdn-1655676954536/cbms-50/5 (accessed on 25 August 2022).
- Nishikawa, S. Variational Problems in Geometry; American Mathematical Society, 2002; Volume 205, Available online: https://www.semanticscholar.org/paper/Variational-problems-in-geometry-Nishikawa/f493efd6d5b84cff294179450e85b358b766d265 (accessed on 25 August 2022).
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.
© 2022 by the author. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).