Article

Riemannian Calculus of Variations Using Strongly Typed Tensor Calculus

Independent Researcher, Seattle, WA 98106, USA
Homepage: https://thedods.com/victor.
Website: https://itdont.work.
Mathematics 2022, 10(18), 3231; https://doi.org/10.3390/math10183231
Submission received: 23 July 2022 / Revised: 23 August 2022 / Accepted: 26 August 2022 / Published: 6 September 2022
(This article belongs to the Special Issue Variational Methods on Riemannian Manifolds: Theory and Applications)

Abstract:
In this paper, the notion of strongly typed language will be borrowed from the field of computer programming to introduce a calculational framework for linear algebra and tensor calculus for the purpose of detecting errors resulting from inherent misuse of objects and for finding natural formulations of various objects. A tensor bundle formalism, crucially relying on the notion of pullback bundle, will be used to create a rich type system with which to distinguish objects. The type system and relevant notation is designed to “telescope” to accommodate a level of detail appropriate to a set of calculations. Various techniques using this formalism will be developed and demonstrated with the goal of providing a relatively complete and uniform method of coordinate-free computation. The calculus of variations pertaining to maps between Riemannian manifolds will be formulated using the strongly typed tensor formalism and associated techniques. Energy functionals defined in terms of first-order Lagrangians are the focus of the second half of this paper, in which the first variation, the Euler–Lagrange equations, and the second variation of such functionals will be derived.

1. Introduction

Many important differential equations have a variational origin, being derived as the Euler–Lagrange equations for a particular functional on some space of functions. The variational approach lends itself particularly to physics, in which conservation of energy or minimization of action is a central concept. The naturality of such formulations cannot be overstated, as solutions to such problems often depend critically on the inherent geometry of the underlying objects. For example, solutions to Laplace’s equation for a real-valued function (e.g., modeling steady-state heat flow) on a Riemannian manifold depend qualitatively on the topology of the manifold (e.g., harmonic functions on a closed Riemannian manifold are necessarily constant, which makes sense geometrically because there is no boundary through which heat can escape).
A central concept in the field of software design is that of information hiding [1], in which a computer program is organized into modules, each presenting an abstract public interface. Other parts of the program can interact only through the presented interface, and the details of how each module works are hidden, thereby preventing interference in the implementation details which are not required by the inherent structure of the module. This concept has clear usefulness in the field of mathematics as well. For example, there are several formulations of the real numbers (e.g., equivalence classes of Cauchy sequences of rational numbers, Dedekind cuts, decimal expansions, etc.), but their particulars are instances of what are known as implementation details, and the details of each particular implementation are irrelevant in most areas of mathematics, which only use the inherent properties of the real numbers as a complete, totally ordered field. Of course, at certain levels, it is useful or necessary to “open up the box” [go past the public interface] and work with a particular representation of the real numbers.
Information hiding is characteristic of abstract mathematics, in which general results are proved about abstract mathematical objects without using any particular implementation of said objects. These results can then be used modularly in other proofs, just as the functionality of a computer program is organized into modularized objects and functions. For example, a fixed point theorem can be proved for contractive mappings on closed sets in Banach spaces, and a particular application of this theorem then renders an existence and uniqueness theorem for first-order ODEs ([2], pp. 59, 62).
A loose conceptual analogy for modularity is that of diagonalizing a linear operator. A basis of eigenvectors is chosen so that the action of the linear operator on each eigenspace has a particularly simple expression, and distinct eigenspaces do not interact with respect to the operator’s action. In this analogy, the eigenvectors then correspond to individual lemmas, and the linear operator corresponds to a large theorem which uses each lemma. Decomposing the proof of the main result in terms of non-interacting lemmas simplifies the proof considerably, just as it simplifies the quantification of the linear operator. The term “orthogonal” has been borrowed by software design to describe two program modules whose functionality is independent ([3], Chapter 4, Section 2). Orthogonality in software design is highly desirable as it generally eases program implementation and program correctness verification, as the human designers are only capable of keeping track of a certain finite number of details simultaneously [4]. The scope of each detail level of the design is limited in complexity, making the overall design easier to comprehend.
This technique in software design carries over directly to proof design, where it is desirable (elegant) to write proofs and do calculations without introducing extraneous details, such as choice of bases in vector spaces or local coordinates in manifolds. Because such choices are generally non-unique, they can often obscure the inherent structure of the relevant objects by introducing artifacts arising from properties of the particular details used to implement said objects. For example, the choice of a particular local coordinate chart on a manifold artificially imposes an additive structure on a neighborhood of the manifold, but such a structure has nothing to do with the inherent geometry of the manifold. Furthermore, the descent to this “lower level” of calculation discards some type information, representing points in a manifold as Euclidean vectors, thereby losing the ability to distinguish points from different manifolds, or even different localities in the same manifold.
This paper makes a particular emphasis on natural formulations and calculations in order to expose the underlying geometric structures rather than relying on coordinate-based expressions. The construction of the “full” direct sum and “full” tensor product bundles are used in combination with induced covariant derivatives to this end.
For more on relevant introductory theory on manifolds, bundles and Riemannian geometry, see [5,6,7,8].

2. Notation and Conventions

Let all vector spaces, manifolds and [fiber] bundles be real and finite-dimensional unless otherwise noted (this allows the canonical identification $V^{**} \cong V$ for a vector space or vector bundle $V$), and let all tensor products be over $\mathbb{R}$. The unqualified term “bundle” will mean “fiber bundle”. The Einstein summation convention will be assumed in situations when indexed tensors are used for computation.
Unary operators are understood to have higher binding precedence than binary operators, and super and subscripts are understood to have the highest binding precedence. For example, the expression X , M ϕ would be parenthesized as X , M ϕ .
Apart from the obvious purpose of providing a concise and central reference for the notation in this paper, the following notation index serves to illustrate the use of telescoping notation (see Section 3.2). The high-level (terse notation which requires the reader to do more work in type inference but is more agile), mid-level, and low-level (completely type-specified, requiring little work on the part of the reader) notations are presented side-by-side with their definitions.
Let $I \subseteq \mathbb{R}$ be a neighborhood of 0, let $\epsilon, i$ each be coordinates on $I$, let $A, A_1, \dots, A_n, B$ be sets, let $M, N$ be manifolds, let $\phi \in C^\infty(M, N)$, let $\pi^A_M : A \to M$ and $\pi^H_N : H \to N$ be vector bundles, where $A \in \{E, F, F_1, \dots, F_n, G\}$, let $U, V, V_1, \dots, V_n, W$ be vector spaces, and let $c_i \in \Gamma(F_i \otimes_M T^* M)$ be such that
$c_1 \oplus_M \cdots \oplus_M c_n \in \Gamma\big((F_1 \oplus_M \cdots \oplus_M F_n) \otimes_M T^* M\big)$
is a vector bundle isomorphism.
Table 1 and Table 2 are references which enumerate the notation used in this paper in its high-, mid-, and low-level forms.

3. Mathematical Setting

3.1. Using Strong Typing to Error-Check Calculations

Linear algebra is an excellent setting for discussion of the strong typing [9] of a language, a concept used in the design of computer programming languages. The idea is that when the human-readable source code of a program is compiled (translated into machine-readable instructions), the compiler (the program which performs this translation) or runtime (the software which executes the code) verifies that the program objects are being used in a well-defined way, producing an error for each operation that is not well-defined. For example, a vector-type value would not be allowed to be added to a permutation-type value, even though tuples of unsigned integers (i.e., bytes) are used by the computer to represent both, and the computer’s processing unit could add together their byte-valued representations. However, such an operation would be meaningless with respect to the types of the operands. The result of the operation would depend on the non-canonical choice of representation for each object. Strong type checking has the advantage of catching many programming errors, including most importantly those resulting from an inherent misuse of the program’s objects. Within this paper, certain type-explicit notations will be used to provide forms of type awareness conducive to error-checking.
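As a loose illustration (not from the paper), the following Python sketch mimics a strongly typed runtime: a vector and a permutation are both stored as tuples of numbers, but addition is only defined where it is meaningful, and a type mismatch raises an error rather than silently adding byte-level representations. The class names are hypothetical.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Vector:
    components: tuple  # e.g., (1.0, 2.0, 3.0)

    def __add__(self, other):
        # Addition is only meaningful between two Vectors of equal dimension.
        if not isinstance(other, Vector) or len(other.components) != len(self.components):
            raise TypeError("can only add a Vector of the same dimension")
        return Vector(tuple(a + b for a, b in zip(self.components, other.components)))

@dataclass(frozen=True)
class Permutation:
    images: tuple  # e.g., (2, 0, 1) sends 0->2, 1->0, 2->1
    # No __add__ is defined: adding a Permutation to anything is a type error.

v = Vector((1.0, 2.0, 3.0))
p = Permutation((2, 0, 1))
print(v + Vector((0.5, 0.5, 0.5)))   # well-defined
try:
    v + p   # both are "tuples of numbers", but the operation is meaningless
except TypeError as e:
    print("caught:", e)
```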
An important example of semi-strong typing in math is Penrose’s abstract index notation [10], modeled on Einstein’s summation convention, in which linear algebra and tensor calculus are implemented using indexed objects (tensors) having a certain number and order of “up” and “down” indices (an abstraction of the genuine basis/coordinate expressions in which the indexed objects are arrays of scalars/functions). A non-indexed tensor is a scalar value, a tensor having a single up or down index is a vector or covector value respectively, a tensor having an up and a down index is an endomorphism, and so forth. The tensors are contracted by pairing a certain number of up indices with the same number of down indices, resulting in an object having as indices the uncontracted indices.
For example, given a finite-dimensional inner product space $(V, g)$, where $g$ is a $\binom{0}{2}$-tensor (having the form $g_{ij}$, i.e., two down indices), a vector $v \in V$ is a $\binom{1}{0}$-tensor, and the length of $v$ is $\sqrt{v^i g_{ij} v^j}$. If $\dim V > 1$, then $\wedge^2 V$ has positive dimension, its vectors each being $\binom{2}{0}$-tensors, and $G_{ijk\ell} := g_{ik} g_{j\ell} - g_{i\ell} g_{jk}$ is an inner product on $\wedge^2 V$ (which must be a $\binom{0}{4}$-tensor in order to contract with two $\binom{2}{0}$-tensors).
Certain type errors are detected by use of abstract index notation in the form of index mismatch. For example, with $(V, g)$ as above, if $\alpha \in V^*$, then $\alpha$ is a $\binom{0}{1}$-tensor. Because of the repeated $j$ down indices, the expression $g_{ij} \alpha_j$ typically indicates a type error; $g_{ij}$ cannot contract with $\alpha_j$ because of incompatible valence (valence being the number of up and down indices). Furthermore, multiplying a $\binom{0}{2}$-tensor with a $\binom{0}{1}$-tensor without contraction should result in a $\binom{0}{3}$-tensor, which should be denoted using three indices, as in $g_{ij} \alpha_k$.
The only explicit type information provided by abstract index notation is that of valence. The “semi” qualifier mentioned earlier is earned by the lack of distinction between the different spaces in which the tensors reside. For example, if $U, V, W$ are finite-dimensional vector spaces, then linear maps $A : U \to V$ and $B : V \to W$ can be written as $\binom{1}{1}$-tensors, and their composition $B \circ A : U \to W$ is written as the tensor contraction $(BA)^i_j = B^i_k A^k_j$. However, while the expression $A^i_k B^k_j$ makes sense in terms of valence compatibility (i.e., grammatically), the composition “$A \circ B$” that it should represent is not well-defined. Thus this form of type error is not caught by abstract index notation, since the domains/codomains of the linear maps must be checked separately.
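To make the “semi” qualifier concrete, here is a hypothetical sketch (not from the paper) of a linear-map type that carries its domain and codomain as labels in addition to its matrix of components; composition checks the labels, catching the error that valence bookkeeping alone misses.

```python
import numpy as np

class LinearMap:
    """A (1,1)-tensor tagged with named domain and codomain spaces."""
    def __init__(self, matrix, domain, codomain):
        self.matrix = np.asarray(matrix)
        self.domain, self.codomain = domain, codomain

    def compose(self, other):
        # B.compose(A) represents B∘A and requires codomain(A) == domain(B).
        if other.codomain != self.domain:
            raise TypeError(f"cannot compose: {other.codomain} != {self.domain}")
        return LinearMap(self.matrix @ other.matrix, other.domain, self.codomain)

A = LinearMap(np.ones((3, 2)), domain="U", codomain="V")   # A : U -> V
B = LinearMap(np.ones((4, 3)), domain="V", codomain="W")   # B : V -> W
BA = B.compose(A)                                          # fine: a W x U matrix
try:
    A.compose(B)   # valence-compatible as indexed arrays, but A∘B is not defined
except TypeError as e:
    print("caught:", e)
```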
The use of dimensional analysis (the abstract use of units such as kilograms, seconds, etc.) in Physics is an important precedent of strong typing. Each quantity has an associated “dimension” (this is a different meaning from the “dimension” of linear algebra) which is expressed as a fraction of powers of formal symbols. The ordinary algebraic rules for fractions and formal symbols are used for the dimensional components, with the further requirement that addition and equality may only occur between quantities having the same dimension.
For example, if $E$, $M$ and $C$ represent the dimensions of energy, mass and cost, respectively, and if the energy storage density $\rho$ [$E/M$] of a battery manufacturing process is known (having dimensions energy per mass) and the manufacturing weight yield $w$ [$M/C$] of the battery is known (having dimensions mass per cost), then under the algebraic conventions of dimensional analysis, calculating the energy storage per cost (which should have dimensions energy per cost) is simple;
$\rho\,\frac{E}{M} \cdot w\,\frac{M}{C} = \rho w\,\frac{E}{M}\,\frac{M}{C} = \rho w\,\frac{E}{C}$
(the $M$ symbols cancel in the fraction). Here, both $\rho$ and $w$ are real numbers, and besides using the well-definedness of real multiplication, no type-checking is done in the expression $\rho w$.
A contrasting example is the quantity $\rho / w$, having dimensions $EC/M^2$. However, these dimensions may be considered to be meaningless in the given context. The quantity’s type adds meaning to the real-valued quantity, and while the quantity is well-defined as a real number, the uselessness of the type may indicate that an error has been made in the calculations. For example, a type mismatch between the two sides of an equation is a strong indication of error.
This is also a convenient way to think about the chain rule of calculus. If $z(y)$, $y(x)$, and $x$ measure real-valued quantities, then $z(y(x))$ measures the quantity $z$ with respect to the quantity $x$. Using $Z$, $Y$ and $X$ for the dimensions of the quantities $z$, $y$ and $x$ respectively, the derivative $\frac{dz}{dx}$ has units $Z/X$. When worked out, the dimensions for the quantities on either side of the equation $\frac{dz}{dx} = \frac{dz}{dy}\,\frac{dy}{dx}$ will match exactly, having a non-coincidental similarity to the calculation in the battery product example.
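A minimal sketch (assuming nothing beyond the rules stated above, and purely illustrative) of dimensional analysis as strong typing: a quantity carries a record of formal symbol exponents, multiplication adds exponents, and addition requires identical dimensions.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Quantity:
    value: float
    dims: tuple  # sorted tuple of (symbol, exponent) pairs, e.g. (("E", 1), ("M", -1))

    def __mul__(self, other):
        # Multiply values and add exponents of the formal dimension symbols.
        d = dict(self.dims)
        for sym, exp in other.dims:
            d[sym] = d.get(sym, 0) + exp
        dims = tuple(sorted((s, e) for s, e in d.items() if e != 0))
        return Quantity(self.value * other.value, dims)

    def __add__(self, other):
        # Addition is only defined between quantities of identical dimension.
        if self.dims != other.dims:
            raise TypeError(f"dimension mismatch: {self.dims} vs {other.dims}")
        return Quantity(self.value + other.value, self.dims)

rho = Quantity(200.0, (("E", 1), ("M", -1)))   # energy per mass
w   = Quantity(0.05, (("C", -1), ("M", 1)))    # mass per cost
print(rho * w)        # energy per cost: dims (("C", -1), ("E", 1))
try:
    rho + w           # meaningless sum; the type system flags it
except TypeError as e:
    print("caught:", e)
```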

3.2. Telescoping Notation (Also Known as Do Not Fear the Verbosity)

Many of the computations developed in this paper will appear to be overly pedantic, owing to the decoration-heavy notation that will be introduced in Section 3.3. This decoration is largely for the purpose of tracking the myriad of types in the type system and to assist the human reader or writer in making sense of and error-checking the expressions involved. The pedantry in this paper plays the role of introducing the technique. The notation is designed to telescope (Credit for the notion of telescoping notation is due in part to David DeConde, during one of many enjoyable and insightful conversations.), meaning that there is a spectrum of notational decoration; from
  • pedantically type-specified, verbose, and decoration-heavy, where [almost] no types must be inferred from context and there is little work or expertise required on the part of the reader, to
  • somewhat decorated but more compact, where the reader must do a little bit of thinking to infer some types, all the way to
  • tersely notated with minimal type decoration, where [almost] all types must be inferred from context and the reader must either do a lot of thinking or be relatively experienced.
Additionally, some of the chosen symbols are meant to obey the same telescoping range of specificity. For example, compare the $n$-fold tensor contraction symbol $\cdot_n$ with the fully type-specified $\cdot_{V_1 \otimes \cdots \otimes V_n}$ as discussed in Section 3.3, or the variants of the covariant derivative symbol $\nabla$ as discussed in Section 3.10. Tersely notated computations can be seen in Section 3.10, while fully-verbose computations abound in the careful exposition of Section 4.

3.3. Strongly-Typed Linear Algebra via Tensor Products

A fully strongly typed formulation of linear algebra will now be developed which enjoys a level of abstraction and flexibility similar to that of Penrose’s abstract index notation. Emphasis will be placed on notational and conceptual regularity via a tensor formalism, coupled with a notion of “untangled” expression which exploits and notationally depicts the associativity of linear composition.
If V denotes a finite-dimensional vector space, then let
$\cdot_V : V^* \times V \to \mathbb{R}, \quad (\alpha, v) \mapsto \alpha(v)$
denote the natural pairing on $V$, and denote $\cdot_V(\alpha, v)$ using the infix notation $\alpha \cdot_V v$. The natural pairing is a nondegenerate bilinear form and its bilinearity gives the expression $\alpha \cdot_V v$ multiplicative semantics (distributivity and commutativity with scalar multiplication), thereby justifying the use of the infix $\cdot$ operator normally reserved for multiplication. The natural pairing subscript $V$ is seemingly pedantic, but will prove to be an invaluable tool for articulating and navigating the rich type system of the linear algebraic and vector bundle constructions used in this paper. When clear from context, the subscript $V$ may be omitted.
Because $V$ is finite-dimensional, it is reflexive (i.e., the canonical injection $V \to V^{**}$, $v \mapsto (\alpha \mapsto \alpha(v))$ is a linear isomorphism). Thus the natural pairing $\cdot_{V^*}$ on $V^*$ can be written naturally as
$\cdot_{V^*} : V \times V^* \to \mathbb{R}, \quad (v, \alpha) \mapsto \alpha(v).$
Note that $\alpha \cdot_V v = v \cdot_{V^*} \alpha$. Though subtle, the distinction between $\cdot_V$ and $\cdot_{V^*}$ is important within the type system used in this paper.
Through a universal mapping property of multilinear maps, the bilinear forms $\cdot_V$ and $\cdot_{V^*}$ descend to the natural trace maps
$\operatorname{tr}_V : V^* \otimes V \to \mathbb{R}, \quad \alpha \otimes v \mapsto \alpha(v), \qquad \text{and} \qquad \operatorname{tr}_{V^*} : V \otimes V^* \to \mathbb{R}, \quad v \otimes \alpha \mapsto \alpha(v),$
each extended linearly to non-simple tensors. These operations can also be called tensor contraction. Noting that $(V^* \otimes V)^*$ and $(V \otimes V^*)^*$ are canonically isomorphic to $V \otimes V^*$ and $V^* \otimes V$ respectively, then for each $A \in V^* \otimes V$ and $B \in V \otimes V^*$, it follows that $\operatorname{tr}_V(A) = I_{V^*} \cdot_{V^* \otimes V} A$ and $\operatorname{tr}_{V^*}(B) = I_V \cdot_{V \otimes V^*} B$.
Definition 1
(Linear maps as tensors). Let $V$ and $W$ be finite-dimensional vector spaces, and let $\operatorname{Hom}(V, W)$ denote the space of vector space morphisms from $V$ to $W$ (i.e., linear maps). The linear isomorphism
$W \otimes V^* \to \operatorname{Hom}(V, W), \quad w \otimes \alpha \mapsto \big(V \to W, \ v \mapsto w\,(\alpha \cdot_V v)\big)$
(extended linearly to general tensors) will play a central conceptual role in the calculations employed in this paper, as it will facilitate constructions which would otherwise be awkward or difficult to express. Linear maps and appropriately typed tensor products will be identified via this isomorphism.
Given bases $v_1, \dots, v_m \in V$ and $w_1, \dots, w_n \in W$, and dual bases $v^1, \dots, v^m \in V^*$ and $w^1, \dots, w^n \in W^*$, a linear map $A : V \to W$ can be written under the identification in (1) as
$A = A^i_j\, w_i \otimes v^j,$
where $A^i_j = w^i \cdot_W A \cdot_V v_j \in \mathbb{R}$, and in fact $(A^i_j) \in M_{n \times m}(\mathbb{R})$ is the matrix representation of $A$ with respect to the bases $v_1, \dots, v_m \in V$ and $w_1, \dots, w_n \in W$, noting that the $i$ and $j$ indices denote the “output” and “input” components of $A$ respectively. Tensors are therefore the strongly typed analog of matrices, where the $W \otimes V^*$ type information is carried by the $w_i \otimes v^j$ component. One particular example is the identity map on $V$, which has type $V \otimes V^*$ and is expressed simply as $v_i \otimes v^i$ (equivalently, $\delta^i_j\, v_i \otimes v^j$). Throughout this paper, the identity map on $V$ will be referred to as the identity tensor on $V$, or just the identity tensor if $V$ is clear from context, and will be denoted by $I_V$.
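The following numpy sketch (illustrative only, not taken from the paper) realizes the identification $W \otimes V^* \cong \operatorname{Hom}(V, W)$ for the standard bases: the components $A^i_j = w^i \cdot_W A \cdot_V v_j$ are recovered by pairing with basis and dual-basis vectors, and they coincide with the ordinary matrix entries.

```python
import numpy as np

m, n = 3, 2                      # dim V = 3, dim W = 2
A = np.array([[1., 2., 3.],      # a linear map V -> W as an n x m matrix,
              [4., 5., 6.]])     # i.e., a tensor in W (x) V*

v_basis = np.eye(m)              # v_1, ..., v_m   (columns of the identity)
w_dual  = np.eye(n)              # w^1, ..., w^n   (rows of the identity)

# A^i_j = w^i ._W A ._V v_j : pair the output slot with w^i and the input slot with v_j.
components = np.array([[w_dual[i] @ A @ v_basis[:, j] for j in range(m)]
                       for i in range(n)])
assert np.allclose(components, A)   # the strongly typed components are the matrix entries

I_V = sum(np.outer(v_basis[:, i], v_basis[:, i]) for i in range(m))
assert np.allclose(I_V, np.eye(m))  # the identity tensor v_i (x) v^i is the identity map
```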
One clarifying example of the tensor formulation is the adjoint operation of the natural pairing, also known as forming the dual of a linear map. It is straightforward to show that
$* : W \otimes V^* \to V^* \otimes W, \quad w \otimes \alpha \mapsto \alpha \otimes w,$
(where the map is extended linearly to general tensors). This is literally the tensor abstraction of the matrix transpose operation; if $A = A^i_j\, w_i \otimes \alpha^j$, then the dual of $A$ is $A^* = A^j_i\, \alpha^i \otimes w_j$. The matrix of $A^*$ is precisely the transpose of the matrix of $A$ with respect to the relevant bases. The map $*$ itself can be written as a 4-tensor $* \in V^* \otimes W \otimes W^* \otimes V$, where $A^* = * \cdot_{W \otimes V^*} A$.
There is a notion of the natural pairing of tensor products, which implements composition and evaluation of linear maps, and can be thought of as a natural generalization of scalar multiplication in a field. If U , V , and W are each finite-dimensional vector spaces, then the bilinear form
$(U \otimes V^*) \times (V \otimes W) \to U \otimes \mathbb{R} \otimes W \cong U \otimes W, \quad (u \otimes \alpha, v \otimes w) \mapsto u \otimes (\alpha \cdot_V v) \otimes w = (\alpha \cdot_V v)\, u \otimes w$
will be denoted also by the infix notation $\cdot_V$ (i.e., $(u \otimes \alpha) \cdot_V (v \otimes w) = (\alpha \cdot_V v)\, u \otimes w$). If $V$ itself is a tensor product of $n$ factors which are clear from context, then $\cdot_V$ may be denoted by $\cdot_n$ (think an $n$-fold tensor contraction). If $n = 2$, then typically $:$ is used in place of $\cdot_2$. For example, from above, $A^* = * \cdot_{W \otimes V^*} A = * : A$.
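As a hypothetical numerical illustration of the natural pairing of tensor products, np.einsum can play the role of $\cdot_V$ and of the 2-fold contraction $:$, with component arrays standing in for the tensors; for matrices the single contraction is ordinary composition.

```python
import numpy as np

dim_u, dim_v, dim_w = 2, 3, 4
T1 = np.random.rand(dim_u, dim_v)      # an element of U (x) V*
T2 = np.random.rand(dim_v, dim_w)      # an element of V (x) W

# T1 ._V T2: contract the V* slot of T1 against the V slot of T2, leaving U (x) W.
pairing = np.einsum('uv,vw->uw', T1, T2)
assert np.allclose(pairing, T1 @ T2)   # for matrices this is just composition

# A 2-fold contraction ":" pairs the last two slots of the left tensor
# with the two slots of the right tensor, position-wise.
K = np.random.rand(dim_u, dim_w, dim_v, dim_w)   # element of U (x) W (x) V* (x) W*
B = np.random.rand(dim_v, dim_w)                 # element of V (x) W
double = np.einsum('uwab,ab->uw', K, B)          # K : B, an element of U (x) W
print(double.shape)                              # (2, 4)
```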
Given a permutation $\sigma \in S_n$, define a right-action by $\sigma : V_1 \otimes \cdots \otimes V_n \to V_{\sigma^{-1}(1)} \otimes \cdots \otimes V_{\sigma^{-1}(n)}$, mapping elements in the obvious way. For example, $(2\ 3\ 4)$ acting on $v_1 \otimes v_2 \otimes v_3 \otimes v_4$ puts the second factor in the third position, the third factor in the fourth position, and the fourth factor in the second, giving $v_1 \otimes v_4 \otimes v_2 \otimes v_3$. This permutation is itself a linear map and of course can be written as a tensor. However, because it is defined in terms of a right action, the “domain factors” will come on the left. Thus $\sigma$ is written as a tensor of the form $V_1^* \otimes \cdots \otimes V_n^* \otimes V_{\sigma^{-1}(1)} \otimes \cdots \otimes V_{\sigma^{-1}(n)}$ (i.e., as a $2n$-tensor). Certain tensor constructions are conducive to using such permutations. In the above example, $*$ can be written as $(1\ 2) \in W^* \otimes V \otimes V^* \otimes W$.
The permutation right-action also works naturally when notated using superscripts. For example, if $B \in U \otimes V \otimes W$, then
$B^{(1\ 2)} := B \cdot_{U^* \otimes V^* \otimes W^*} (1\ 2) \in V \otimes U \otimes W$
and so
$\big(B^{(1\ 2)}\big)^{(2\ 3)} = \big(B \cdot_{U^* \otimes V^* \otimes W^*} (1\ 2)\big) \cdot_{V^* \otimes U^* \otimes W^*} (2\ 3) = B \cdot_{U^* \otimes V^* \otimes W^*} \big((1\ 2)\,(2\ 3)\big) = B \cdot_{U^* \otimes V^* \otimes W^*} (1\ 3\ 2) \in V \otimes W \otimes U.$
When multiplying the permutations $(1\ 2)$ and $(2\ 3)$ in the third expression, it is important to note that they are read left-to-right, since they are acting on $B$ on the right.
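In coordinates, the permutation right-action is just an axis permutation of the component array; here is a hypothetical numpy check of the $(1\ 2)(2\ 3) = (1\ 3\ 2)$ computation above.

```python
import numpy as np

dim_u, dim_v, dim_w = 2, 3, 4
B = np.random.rand(dim_u, dim_v, dim_w)          # an element of U (x) V (x) W

B_12 = np.transpose(B, (1, 0, 2))                # B^(1 2)         in V (x) U (x) W
B_12_23 = np.transpose(B_12, (0, 2, 1))          # (B^(1 2))^(2 3) in V (x) W (x) U
B_132 = np.transpose(B, (1, 2, 0))               # B^(1 3 2)       in V (x) W (x) U

# (1 2) followed by (2 3), read left-to-right, equals the cycle (1 3 2).
assert np.allclose(B_12_23, B_132)
print(B_132.shape)   # (3, 4, 2)
```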
The inline cycle notation is somewhat ambiguous in isolation because the number of factors in the domain/codomain is not specified, let alone their types. This information can sometimes be inferred from context, such as from the natural pairing subscripts, as in the following examples.
Example 1
(Linearizing the inversion map). Let $i : GL(V) \to GL(V)$, $A \mapsto A^{-1}$, i.e., the linear map inversion operator, where $GL(V)$ is an open submanifold of $V \otimes V^*$ via the isomorphism $V \otimes V^* \cong \operatorname{Hom}(V, V)$. Its linearization (derivative) $Di : GL(V) \to (V \otimes V^*) \otimes (V \otimes V^*)^* \cong V \otimes V^* \otimes V^* \otimes V$ at $A \in GL(V)$ in the direction $B \in T_A GL(V) \cong V \otimes V^*$ is
$Di(A) \cdot_{V \otimes V^*} B = Di \cdot_{V \otimes V^*} \delta(A + \epsilon B) = \delta\, i(A + \epsilon B) = \delta\, (A + \epsilon B)^{-1} = \delta\, \big((1 + \epsilon B A^{-1}) A\big)^{-1} = \delta\, \Big(A^{-1} \big(1 + \epsilon B A^{-1}\big)^{-1}\Big) = \delta\, \Big(A^{-1} \sum_{n=0}^{\infty} \big({-\epsilon B A^{-1}}\big)^n\Big) = \delta\, \big(A^{-1} - \epsilon A^{-1} B A^{-1} + O(\epsilon^2)\big) = -A^{-1} \cdot_V B \cdot_V A^{-1}.$
($\epsilon B A^{-1}$ is taken arbitrarily small due to the derivative $\delta := \frac{d}{d\epsilon}\big|_{\epsilon = 0}$ being evaluated in an arbitrarily small neighborhood of $\epsilon = 0$.)
In order to “move” the B parameter out so that it plays the same syntactical role as in the original expression D i A · B , via adjacent natural pairing, some simple tensor manipulations can be done. The process is easily and accurately expressed via diagram. The following sequence of diagrams is a sequence of equalities. The diagram should be self-explanatory, but for reference, the number of boxes for a particular label denotes the rank of the tensor, with each box labeled with its type. The lines connecting various boxes are natural pairings, and the circles represent the unpaired “slots”, which comprise the type of the resulting expression.
[Tensor diagrams (Mathematics 10 03231 i001 and i002 in the published version)]
The following step is nothing but moving the boxes for B out; the natural pairings still apply to the same slots, hence the cables dangling below.
[Tensor diagram (Mathematics 10 03231 i003 in the published version)]
In this setting, a tensor product amounts to flippantly gluing boxes together.
[Tensor diagram (Mathematics 10 03231 i004 in the published version)]
In order for B to be naturally paired in the same adjacent manner as in the original expression D i A · B , the slots of A 1 A 1 must be permuted; the second moves to the third, the third to the fourth, and the fourth to the second.
[Tensor diagram (Mathematics 10 03231 i005 in the published version)]
The first diagram equals the last one, thus $Di(A) \cdot_{V \otimes V^*} B = -\big(A^{-1} \otimes A^{-1}\big)^{(2\ 3\ 4)} \cdot_{V \otimes V^*} B$, and by the nondegeneracy of the natural pairing on $V \otimes V^*$, this implies that $Di(A) = -\big(A^{-1} \otimes A^{-1}\big)^{(2\ 3\ 4)}$, noting that the statement of this expression does not require the direction vector $B$. The permutation exponent $(2\ 3\ 4)$ can be calculated easily using simple tensors, if not by the above diagrammatic manipulations;
$(a_1 \otimes a_2) \cdot (b_1 \otimes b_2) \cdot (a_3 \otimes a_4) = (a_1 \otimes a_4 \otimes a_2 \otimes a_3) : (b_1 \otimes b_2) = (a_1 \otimes a_2 \otimes a_3 \otimes a_4)^{(2\ 3\ 4)} : (b_1 \otimes b_2).$
Here, the expression $(a_1 \otimes a_2) \cdot (b_1 \otimes b_2) \cdot (a_3 \otimes a_4)$ represents the expression $A^{-1} \cdot B \cdot A^{-1}$.
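A quick numerical sanity check (illustrative only) of Example 1: the directional derivative of matrix inversion at $A$ in direction $B$ agrees with $-A^{-1} B A^{-1}$ to first order.

```python
import numpy as np

rng = np.random.default_rng(0)
A = np.eye(3) + 0.1 * rng.standard_normal((3, 3))   # a matrix safely inside GL(V)
B = rng.standard_normal((3, 3))                     # an arbitrary direction in V (x) V*
eps = 1e-6

finite_difference = (np.linalg.inv(A + eps * B) - np.linalg.inv(A)) / eps
closed_form = -np.linalg.inv(A) @ B @ np.linalg.inv(A)   # Di(A) . B = -A^{-1} B A^{-1}
print(np.max(np.abs(finite_difference - closed_form)))   # small, on the order of eps
```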
The next example will later be extended to the setting of Riemannian manifolds and their metric tensor fields, and put to use to formulate what are known as harmonic maps (see (3)). However, first, a new tensor operation must be defined.
Definition 2
(Parallel tensor product). If $U, V, W, X$ are vector spaces and $A \in U \otimes V$ and $B \in W \otimes X$, then define their parallel tensor product $A \boxtimes B$ by
$A \boxtimes B := (A \otimes B)^{(2\ 3)} \in (U \otimes W) \otimes (V \otimes X).$
The parentheses in the type specification are unnecessary, but hint at what the tensor decomposition for the quantity $A \boxtimes B$ should be, if used as an operand to $\boxtimes$ again (see below).
If $A$ and $B$ represent linear maps, then $A \boxtimes B \in (U \otimes W) \otimes (V \otimes X)$ represents their tensor product as linear maps (the parentheses are unnecessary but hint at what the domain and codomain are, and for use of $A \boxtimes B$ as an operand in another parallel tensor product), which is a “parallel” composition; if $\alpha \in V^*$ and $\beta \in X^*$, then $(A \boxtimes B) \cdot_{V^* \otimes X^*} (\alpha \otimes \beta) = (A \cdot_{V^*} \alpha) \otimes (B \cdot_{X^*} \beta)$.
There is a slight ambiguity in the notation coming from a lack of specification on how the tensor product of the operands is decomposed in the case when there is more than one such decomposition. Notation explicitly resolving this ambiguity will not be needed in this paper as the relevant tensor product is usually clear from context.
The parallel tensor product is associative; if $Y$ and $Z$ are also vector spaces and $C \in Y \otimes Z$, then
$(A \boxtimes B) \boxtimes C = A \boxtimes (B \boxtimes C) \in (U \otimes W \otimes Y) \otimes (V \otimes X \otimes Z),$
allowing multiply-parallel tensor products.
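Numerically (a sketch under the assumption that components are stored with the $(U \otimes W) \otimes (V \otimes X)$ axis order), the parallel tensor product is an axis-reordered outer product, and it acts on $\alpha \otimes \beta$ factor-by-factor, as claimed above.

```python
import numpy as np

rng = np.random.default_rng(1)
A = rng.standard_normal((2, 3))       # element of U (x) V
Bt = rng.standard_normal((4, 5))      # element of W (x) X
alpha = rng.standard_normal(3)        # element of V*
beta = rng.standard_normal(5)         # element of X*

# A (x) B has axes (U, V, W, X); the permutation (2 3) swaps the middle axes,
# giving the parallel tensor product with axes (U, W, V, X).
A_par_B = np.transpose(np.multiply.outer(A, Bt), (0, 2, 1, 3))

lhs = np.einsum('uwvx,v,x->uw', A_par_B, alpha, beta)   # (A ⊠ B) . (alpha (x) beta)
rhs = np.multiply.outer(A @ alpha, Bt @ beta)           # (A . alpha) (x) (B . beta)
assert np.allclose(lhs, rhs)
```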
Example 2
(Tensor product of inner product spaces). If $(V, g)$ and $(W, h)$ are inner product spaces (noting that $g \in V^* \otimes V^*$ and $h \in W^* \otimes W^*$ are symmetric, i.e., literally invariant under $(1\ 2)$), then $W \otimes V^*$ is an inner product space having induced inner product $k(A, B) := \operatorname{tr}_V\big(g^{-1} \cdot_{V^*} A^* \cdot_{W^*} h \cdot_W B\big)$. Here, the “inputs” of $A$ and $B$ (the $V^*$ factors) are being paired using $g^{-1} \in V \otimes V$, while the “outputs” (the $W$ factors) are being paired using $h \in W^* \otimes W^*$, and the trace is used to “complete the cycle” by plugging the output into the input, thereby producing a real number. The expression $k(A, B)$ can be written in a more natural way, which takes advantage of the linear composition, as $A : k : B$ (or, pedantically, $A \cdot_{W^* \otimes V} k \cdot_{W \otimes V^*} B$), instead of the more common but awkward trace expression mentioned earlier. In the tensor formalism, the inner product $k$ should have type $(W^* \otimes V) \otimes (W^* \otimes V)$. Permuting the middle two components of the 4-tensor $h \otimes g^{-1} \in W^* \otimes W^* \otimes V \otimes V$ gives the correct type. In fact, $k = h \boxtimes g^{-1}$. A further advantage to this formulation is that if any or all of $A, k, B$ are functions, there is a clear product rule for derivatives of the expression $A : k : B$. This is something that is used critically in Riemannian geometry in the form of covariant derivatives of tensor fields (see (4)).
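A numerical sketch of Example 2 (illustrative assumptions: $g$ and $h$ are stored as symmetric positive-definite matrices, and $A$, $B$ as $\dim W \times \dim V$ arrays), checking that $A : k : B$ with $k = h \boxtimes g^{-1}$ reproduces the trace expression $\operatorname{tr}(g^{-1} A^{\mathsf T} h B)$.

```python
import numpy as np

rng = np.random.default_rng(2)
dim_v, dim_w = 3, 4

def random_spd(n):
    # A random symmetric positive-definite matrix, used as an inner product.
    X = rng.standard_normal((n, n))
    return X @ X.T + n * np.eye(n)

g, h = random_spd(dim_v), random_spd(dim_w)       # inner products on V and W
g_inv = np.linalg.inv(g)
A = rng.standard_normal((dim_w, dim_v))           # elements of W (x) V*
B = rng.standard_normal((dim_w, dim_v))

# k = h ⊠ g^{-1}: form h (x) g^{-1} with axes (W*, W*, V, V), then swap the middle axes.
k = np.transpose(np.multiply.outer(h, g_inv), (0, 2, 1, 3))

A_k_B = np.einsum('ia,iajb,jb->', A, k, B)        # A : k : B
trace_form = np.trace(g_inv @ A.T @ h @ B)        # tr(g^{-1} A^T h B)
assert np.isclose(A_k_B, trace_form)
```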
In this paper, the main use of the tensor formulation of linear maps is twofold: to facilitate linear algebraic constructions which would otherwise be difficult or awkward (this includes the ability to express derivatives of [possibly vector or manifold-valued] maps without needing to “plug in” the derivative’s directional argument), and to make clear the product-rule behavior of many important differentiable constructions.

3.4. Bundle Constructions

In order to use the calculus of variations involving Lagrangians depending on tangent maps of maps between smooth manifolds, it suffices to consider Lagrangians defined on smooth vector bundle morphisms. Continuing in the style of the previous section, a “full” tensor product of smooth vector bundles (4) will be formulated which will then allow expression of smooth vector bundle morphisms as tensor fields, sometimes called two-point tensor fields ([11], p. 70). The full arsenal of tensor calculus can then be used to considerable advantage.
First, some definitions and simpler bundle constructions will be introduced. A smooth [fiber] bundle (hereafter referred to simply as a smooth bundle) is a 4-tuple $(\mathcal{E}, E, \pi, N)$ where $\mathcal{E}$, $E$ and $N$ are smooth manifolds and $\pi : E \to N$ is locally trivial, i.e., $N$ is covered by open sets $U_\alpha$ such that $\pi^{-1}(U_\alpha) \cong U_\alpha \times \mathcal{E}$ as smooth manifolds. The manifolds $\mathcal{E}$, $E$ and $N$ are called the typical fiber, the total space, and the base space respectively. The map $\pi$ is called the bundle projection. The full 4-tuple specifying a bundle can be recovered from the bundle projection map, so a locally trivial smooth map can be said to define a smooth bundle. The dimension of the typical fiber of a bundle will be called its rank, and will be denoted by $\operatorname{rank}(\pi)$ or $\operatorname{rank}(E)$ when the bundle is understood from context.
The space of smooth sections of a smooth bundle defined by $\pi : E \to N$ is
$\Gamma(\pi) := \{\sigma \in C^\infty(N, E) \mid \pi \circ \sigma = \operatorname{Id}_N\},$
and may also be denoted by $\Gamma(E)$, if the bundle is clear from context. If nonempty, $\Gamma(\pi)$ is generally an infinite-dimensional manifold (the exception being when the base space $N$ is finite) [12].
Proposition 1
(Trivial bundle). Let M and N be smooth manifolds. With M N : = M × N and
π M N : = pr 2 M × N : M N N
defines a smooth bundle M , M N , π M N , N , called a trivial bundle. Similarly, with M N : = M × N and π M N : = pr 1 M × N : M N M , N , M N , π M N , M is a trivial bundle.
No proof is deemed necessary for (1), as each bundle projection trivializes globally in the obvious way. The symbol is a composite of × (indicating direct product) and → or ← (indicating the base space).
If M and N are smooth manifolds as in (1), then there are two particularly useful natural identifications.
C M , N Γ M N C M , N Γ N M ϕ Id M × M ϕ ϕ ϕ × M Id M pr 2 M × N Φ Φ pr 1 N × M Φ Φ
These identifications can be thought of identifying a map ϕ C M , N with its graph in M × N and N × M respectively. Furthermore, this allows bundle theory to be applied to reasoning about spaces of maps. The symbols M N and N M now carry a significant amount of meaning. Generally N M will be used in this paper, for consistency with the Hom V , W W V * convention discussed in Section 3.3. The symbols and are examples of telescoping notation, as they are built notationally on ×, and conceptually on the direct product, which is what is denoted by ×. The arrow portion of the symbols can be discarded when type-specificity is not needed.
Proposition 2
(Direct product bundle). Let $(\mathcal{E}, E, \pi_E, M)$ and $(\mathcal{F}, F, \pi_F, N)$ be smooth bundles. Then
$\pi_E \times \pi_F : E \times F \to M \times N, \quad (e, f) \mapsto (\pi_E(e), \pi_F(f))$
defines a smooth bundle $(\mathcal{E} \times \mathcal{F}, E \times F, \pi_E \times \pi_F, M \times N)$. This bundle is called the direct product of $\pi_E$ and $\pi_F$, and is not necessarily a trivial bundle.
Proof. 
Let Ψ E : π E 1 U U × E and Ψ F : π F 1 V V × F trivialize π E and π F over open sets U M and V N respectively. Then
Ψ E × Ψ F : π E 1 U × π F 1 V U × E × V × F
has inverse Ψ E 1 × Ψ F 1 . Note that
π E 1 U × π F 1 V = e , f E × F π E e U , π F f V = e , f E × F π E × π F e , f U × V = π E × π F 1 U × V ,
and that
P : U × E × V × F U × V × E × F , u , e , v , f u , v , e , f
defines a diffeomorphism. Then
Ψ E × F : = P Ψ E × Ψ F : π E × π F 1 U × V U × V × E × F
defines a diffeomorphism, and
pr 1 U × V × E × F Ψ E × F e , f = pr 1 U × V × E × F P Ψ E × Ψ F e , f = pr 1 U × V × E × F P Ψ E e , Ψ F f = pr 1 U × V × E × F P Ψ E e , Ψ F f = pr 1 U × V × E × F P pr 1 U × E Ψ E e , pr 2 U × E Ψ E e , pr 1 V × F Ψ F f , pr 2 V × F Ψ F f = pr 1 U × V × E × F pr 1 U × E Ψ E e , pr 1 V × F Ψ F f , pr 2 U × E Ψ E e , pr 2 V × F Ψ F f = pr 1 U × E Ψ E e , pr 1 V × F Ψ F f = π E e , π F f = π E × π F e , f ,
showing that Ψ E × F trivializes π E × π F over U × V M × N . Since M × N can be covered by such trivializing sets, this establishes that π E × π F defines a smooth bundle. The typical fiber of π E × π F is E × F . □
A smooth vector bundle is a fiber bundle whose typical fiber is a vector space and whose local trivializations are linear isomorphisms when restricted to each fiber. If $(\mathcal{E}, E, \pi, M)$ is a smooth vector bundle, then its dual vector bundle $(\mathcal{E}^*, E^*, \pi^*, M)$ is a smooth vector bundle defined in the following way.
$E^* := \bigsqcup_{p \in M} E_p^*, \qquad \pi^* : E^* \to M, \quad \eta \in E_p^* \mapsto p.$
Because $\mathcal{E}$ is a vector space, the notation $\mathcal{E}^*$ is already defined. In analogy with Section 3.3, there are natural pairings on a vector bundle and its dual, defined simply by evaluation. If $p \in M$, $\eta \in E_p^*$ and $e \in E_p$, then $\eta \cdot_E e := \eta \cdot_{E_p} e$ and $e \cdot_{E^*} \eta := e \cdot_{E_p^*} \eta$. Both expressions evaluate to $\eta(e)$. Natural traces and $n$-fold tensor contraction can be defined analogously. Again, while seemingly pedantic, the subscripted natural pairing notation will prove to be a valuable tool in articulating and error-checking calculations involving vector bundles. To generalize the rest of Section 3.3 will require the definition of additional structures.
For the remainder of this section, let E , E , π E , M and F , F , π F , N now be smooth vector bundles. The following construction is essentially an alternate notation for π E × π F : E × F M × N , but is one that takes advantage of the fact that π E and π F are vector bundles, and encodes in the notation the fact that the resulting construction is also a vector bundle. This is analogous to how V × W is a vector space with a natural structure if V and W are vector spaces, except that this is usually denoted by V W .
Proposition 3
(“Full” direct sum vector bundle). If
$E \oplus_{M \times N} F := E \times F,$
then
$\pi_E \oplus_{M \times N} \pi_F := \pi_E \times \pi_F : E \oplus_{M \times N} F \to M \times N$
defines a smooth vector bundle $(\mathcal{E} \oplus \mathcal{F}, E \oplus_{M \times N} F, \pi_E \oplus_{M \times N} \pi_F, M \times N)$, called the full direct sum of $\pi_E$ and $\pi_F$.
For each $(p, q) \in M \times N$, the vector space structure on $(\pi_E \oplus_{M \times N} \pi_F)^{-1}(p, q)$ is given in the following way. Let $\alpha \in \mathbb{R}$ and $(e_1, f_1), (e_2, f_2) \in (\pi_E \oplus_{M \times N} \pi_F)^{-1}(p, q)$. Then
$\alpha (e_1, f_1) + (e_2, f_2) = (\alpha e_1 + e_2, \alpha f_1 + f_2).$
It is critical to see (1) for remarks on notation.
Proof. 
Let U, V, E , F , P, Ψ E , Ψ F and Ψ E × F be as in the proof of (2), and define Ψ E M × N F : = Ψ E × F . Noting that Ψ E M × N F is a smooth bundle isomorphism over Id U × V , so to show that Ψ E M × N F is a linear isomorphism in each fiber, it suffices to show that it is linear in each fiber. Let α R , p , q U × V and e 1 , f 1 , e 2 , f 2 π E M × N π F 1 p , q . Then
Ψ E M × N F α e 1 + e 2 , α f 1 + f 2 = P Ψ E × Ψ F α e 1 + e 2 , α f 1 + f 2 = P Ψ E α e 1 + e 2 , Ψ F α f 1 + f 2 = P α Ψ E e 1 + Ψ E e 2 , α Ψ F f 1 + Ψ F f 2 ( by trivial vector bundle structures on   U × E   and   V × F ) = α P Ψ E e 1 , Ψ F f 1 + P Ψ E e 2 , Ψ F f 2 ( by trivial vector bundle structure on U × V × E × F ) = α P Ψ E × Ψ F e 1 , f 1 + P Ψ E × Ψ F e 2 , f 2 = α Ψ E M × N F e 1 , f 1 + Ψ E M × N F e 2 , f 2 .
Thus Ψ E M × N F is linear in each fiber, and because it is invertible, it is a linear isomorphism in each fiber. In particular, Ψ E M × N F is a smooth vector bundle isomorphism over Id U × V . Applying Ψ E M × N F 1 to the above equation gives
α e 1 + e 2 , α f 1 + f 2 = α e 1 , f 1 + e 2 , f 2 ,
as desired. □
This construction differs from the Whitney sum of two vector bundles, as the base spaces of the bundles are kept separate, and are not even required to be the same. This allows the identification of $T(M \times N) \to M \times N$ as $TM \oplus_{M \times N} TN \to M \times N$, which may be done without comment later in this paper. Some important related structures are $\operatorname{pr}_1^* \pi_M^{TM} : \operatorname{pr}_1^* TM \to M \times N$ and $\operatorname{pr}_2^* \pi_N^{TN} : \operatorname{pr}_2^* TN \to M \times N$, where $\operatorname{pr}_i := \operatorname{pr}_i|_{M \times N}$.
The next construction is what will be used in the implementation of smooth vector bundle morphisms as tensor fields.
Proposition 4
(“Full” tensor product bundle). If
$E \otimes_{M \times N} F := \bigsqcup_{(p, q) \in M \times N} E_p \otimes F_q \quad (\text{disjoint union}),$
then
$\pi_E \otimes_{M \times N} \pi_F : E \otimes_{M \times N} F \to M \times N, \quad \alpha^{ij}\, e_i \otimes f_j \mapsto (\pi_E(e_1), \pi_F(f_1)) \quad (\text{here, } \alpha^{ij} \in \mathbb{R})$
defines a smooth vector bundle $(\mathcal{E} \otimes \mathcal{F}, E \otimes_{M \times N} F, \pi_E \otimes_{M \times N} \pi_F, M \times N)$, called the full tensor product (this construction is alluded to in ([13], p. 121), but is not defined or discussed) of $\pi_E$ and $\pi_F$.
It is critical to see (1) for remarks on notation.
Proof. 
Since the argument α i j e i f j in the definition of π E M × N π F is not necessarily unique, the well-definedness of π E M × N π F must be shown. Let α i j e i 1 f j 1 = β i j e i 2 f j 2 . Then in particular, α i j e i 1 f j 1 , β i j e i 2 f j 2 E p F q for some p , q M × N , and therefore e i 1 , e i 2 E p and f j 1 , f j 2 F q for each index i and j. Thus π E e 1 1 = p = π E e 1 2 and π F f 1 1 = q = π F f 1 2 , so the expression defining π E M × N π F is well-defined.
The set E M × N F does not have an a priori global smooth manifold structure, as it is defined as the disjoint union of vector spaces. A smooth manifold structure compatible with that of the constituent vector spaces will now be defined.
Let Ψ E : π E 1 U U × E and Ψ F : π F 1 V V × F trivialize π E and π F over open sets U M and V N respectively, such that Ψ E and Ψ F are each linear in each fiber. Define
Ψ E M × N F : π E M × N π F 1 U × V U × V × E F ,
with element mapping
X π E M × N π F X , pr 2 U × E Ψ E pr 2 V × F Ψ F X .
The map Ψ E M × N F is well-defined and smooth in each fiber by construction, since for each p , q U × V ,
pr 2 U × E Ψ E pr 2 V × F Ψ F E p E q : E p E q E F
is a linear isomorphism by construction. Additionally, Ψ E M × N F has been constructed so that
pr 1 U × V × E M × N F Ψ E M × N F = π E M × N π F
on π E M × N π F 1 U × V . Define the smooth structure on π E M × N π F 1 U × V E M × N F by declaring Ψ E M × N F to be a diffeomorphism. The map π E M × N π F is trivialized over U × V . The set E M × N F can be covered by such trivializing open sets. Thus E M × N F has been shown to be locally diffeomorphic to the direct product of smooth manifolds, and therefore it has been shown to be a smooth manifold. With respect to the smooth structure on E M × N F , the map π E M × N π F is smooth, and has therefore been shown to define a smooth vector bundle. □
Remark 1
(Notation regarding base space). The “full” direct sum (3) and “full” tensor product (4) bundle constructions allow direct sums and tensor products to be taken of vector bundles even when the base spaces differ. If the base spaces are the same, then the construction “joins” them, producing a vector bundle over that shared base space. For example, if $E$ and $F$ are vector bundles over $M$, then $E \otimes_{M \times M} F$ has base space $M \times M$, while $E \otimes F$ has base space $M$. The base space can be specified in either case as a notational aid; the latter example would be written as $E \otimes_M F$. If no subscript is provided on the $\oplus$ or $\otimes$ symbol, then the base spaces are “joined” if possible (if they are the same space), otherwise they are kept separate, as in the “full” tensor product construction. This notational convention conforms to the standard Whitney sum and tensor product bundle notation, and uses the notion of telescoping notation to provide more specificity when necessary.
Given a fiber bundle, a natural vector bundle can be constructed “on top” of it, essentially quantifying the variations of bundle elements along each fiber. This is known as the vertical [tangent] bundle ([12], p. 43), and it plays a critical role in the development of Ehresmann connections, which provide the “horizontal complement” to the vertical bundle.
Proposition 5
(Vertical bundle). Let $\pi_E : E \to M$ define a smooth [fiber] bundle. If $VE := \ker T\pi_E \subseteq TE$, then $\pi_E^{VE} := \pi_E^{TE}|_{VE} : VE \to E$ defines a smooth vector subbundle of $\pi_E^{TE} : TE \to E$, called the vertical bundle over $E$. Furthermore, the fiber over $e \in E$ is $V_e E = T_e E_{\pi_E(e)} \subseteq T_e E$.
Proof. 
Because π E is a smooth surjective submersion, V E E is a subbundle of T E E having corank dim M and therefore rank equal to that of E. Furthermore, if e E and ϵ e ϵ E π E e , then δ e ϵ represents an arbitrary element of T e E π E e , and T π E δ e ϵ = δ π E e ϵ = δ π e = 0 , showing that δ e ϵ ker T π E , and therefore that δ e ϵ V e E . This shows that T e E π E e V e E . Because dim T e E π E e = rank E , this shows that T e E π E e = V e E . □
Given the extra structure that a vector bundle provides over a [fiber] bundle, there is a canonical smooth vector bundle isomorphism which adds significant value to the pullback bundle formalism used throughout this paper. This can be seen put to greatest use in Section 4, for example, in development of the first variation (see (1)).
Proposition 6
(Vertical bundle as pullback). If $\pi : E \to M$ defines a smooth vector bundle, then
$\iota_{VE}^{\pi^* E} : \pi^* E \to VE, \quad (x, y) \mapsto \delta_\epsilon (x + \epsilon y)$
is a smooth vector bundle isomorphism over $\operatorname{Id}_E$, called the vertical lift, having inverse
$\iota_{\pi^* E}^{VE} : \delta e_\epsilon \mapsto \Big(e_0, \lim_{\epsilon \to 0} \frac{e_\epsilon - e_0}{\epsilon}\Big),$
where, without loss of generality, $e_\epsilon$ is an $E$-valued variation which lies entirely in a single fiber.
Proof. 
It is clear that ι V E π * E is linear and injective on each fiber. By a dimension counting argument, it is therefore an isomorphism on each fiber. Because it preserves the basepoint, it is a vector bundle isomorphism over Id E . Because the map x , y , ϵ x + ϵ y is smooth, so is the defining expression for ι V E π * E , thereby establishing smoothness. That ι π * E V E inverts ι V E π * E is a trivial calculation. □

3.5. Strongly-Typed Tensor Field Operations

Because vector bundles and the related operations can be thought of conceptually as “sheaves of linear algebra”, the constructions in Section 3.3, generalized earlier in this section, can be further generalized to the setting of sections of vector bundles.
If E , F , G are smooth vector bundles over M, then define the natural pairing of a tensor field with a vector:
$\cdot_F : \Gamma(E \otimes_M F^*) \times F \to E, \quad (e \otimes_M \phi,\ f) \mapsto e(\pi_F(f))\, \big(\phi(\pi_F(f)) \cdot_F f\big),$
extending linearly to general tensor fields. Further, define the natural pairing of tensor fields:
$\cdot_F : \Gamma(E \otimes_M F^*) \times \Gamma(F \otimes_M G) \to \Gamma(E \otimes_M G), \quad (e \otimes_M \phi,\ f \otimes_M g) \mapsto \Big(p \mapsto e(p) \otimes \big(\phi(p) \cdot_{F_p} f(p)\big) \otimes g(p) = \big(\phi(p) \cdot_{F_p} f(p)\big)\,(e \otimes_M g)(p)\Big),$
extending linearly to general tensor fields. This multiple use of the · F symbol is a concept known as operator overloading in computer programming. No ambiguity is caused by this overloading, as the particular use can be inferred from the types of the operands. As before, the subscript F may be optionally omitted when clear from context.
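The programming notion referenced here can be sketched (purely hypothetically; none of these classes come from the paper) by overloading a single `pair` operation whose meaning is dispatched on the operand's type, while still refusing mismatched bundles or base manifolds; components are omitted so only the type bookkeeping is shown.

```python
class TensorField:
    """A section of E (x) F* over a named base manifold (components omitted)."""
    def __init__(self, base, left, right_dual):
        self.base, self.left, self.right_dual = base, left, right_dual

    def pair(self, other):
        # Overloaded natural pairing: the meaning is inferred from the operand's type.
        if isinstance(other, FiberVector):
            if other.bundle != self.right_dual:
                raise TypeError("tensor field cannot pair with a vector of this bundle")
            return f"element of {self.left} over the basepoint of the vector"
        if isinstance(other, TensorField):
            if other.base != self.base or other.left != self.right_dual:
                raise TypeError("incompatible bundles or base manifolds")
            return TensorField(self.base, self.left, other.right_dual)
        raise TypeError("unsupported operand")

class FiberVector:
    """A single element of some fiber of a bundle."""
    def __init__(self, bundle):
        self.bundle = bundle

T = TensorField("M", left="E", right_dual="F")   # section of E (x) F*
S = TensorField("M", left="F", right_dual="G")   # section of F (x) G*
print(T.pair(FiberVector("F")))                  # vector use of the pairing
print(T.pair(S).right_dual)                      # tensor-field use: a section of E (x) G*
```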
The permutations defined in Section 3.3 are generalized as tensor fields. If $F_1, \dots, F_n$ are smooth vector bundles over $M$, and $\sigma \in S_n$ is a permutation, then $\sigma$ can act on $F_1 \otimes_M \cdots \otimes_M F_n$ by permuting its factors, and therefore can be identified with a tensor field
$\sigma \in \Gamma\big(F_1^* \otimes_M \cdots \otimes_M F_n^* \otimes_M F_{\sigma^{-1}(1)} \otimes_M \cdots \otimes_M F_{\sigma^{-1}(n)}\big)$
defined by
$(f_1 \otimes_M \cdots \otimes_M f_n) \cdot_{F_1^* \otimes_M \cdots \otimes_M F_n^*} \sigma := f_{\sigma^{-1}(1)} \otimes_M \cdots \otimes_M f_{\sigma^{-1}(n)}.$
An important feature of such permutation tensor fields is that they are parallel with respect to covariant derivatives on the factors F 1 , , F n (see (2) for more on this).

3.6. Pullback Bundles

The pullback bundle, defined below, is a crucial building block for many important bundle constructions, as it enriches the type system dramatically, and allows the tensor formulation of linear algebra to be extended to the vector bundle setting. In particular, the abstract, global formulation of the space of smooth vector bundle morphisms over a map ϕ : M N is achieved quite cleanly using a pullback bundle. Furthermore, the use of pullback bundles and pullback covariant derivatives simplifies what would otherwise be local coordinate calculations, thereby giving more insight into the geometric structure of the problem.
For the duration of this section, let $(\mathcal{F}, F, \pi, N)$ be a smooth bundle having rank $r$.
Proposition 7
(Pullback bundle). Let $M$ and $N$ be smooth manifolds and let $\phi : M \to N$ be smooth. If
$\phi^* F := \{(m, f) \in M \times F \mid \phi(m) = \pi(f)\},$
and
$\pi_{\phi^* F} := \operatorname{pr}_1|_{M \times F}\,\big|_{\phi^* F} : \phi^* F \to M, \quad (m, f) \mapsto m,$
then $(\mathcal{F}, \phi^* F, \pi_{\phi^* F}, M)$ defines a smooth bundle. In particular, $\phi^* F$ is a smooth manifold having dimension $\dim M + \operatorname{rank}(\pi)$. The bundle defined by $\pi_{\phi^* F}$ is called the pullback of $\pi$ by $\phi$.
Proof. 
Recalling that F denotes the typical fiber of π , let Ψ : π 1 U U × F trivialize π over open set U N . Define
Ψ ϕ : ϕ * π 1 U ϕ 1 U × F , m , f m , pr 2 U × F Ψ f
and
Ψ ϕ 1 : ϕ 1 U × F ϕ * π 1 U , m , f m , Ψ 1 ϕ m , f .
Claim (1): Ψ ϕ and Ψ ϕ 1 are smooth. Proof: ϕ * π 1 U ϕ 1 U × π 1 U , and Ψ ϕ is clearly smooth as a map defined on the larger manifold. Therefore it restricts to a smooth map on ϕ * π 1 U . An analogous argument shows that Ψ ϕ 1 is smooth. Claim (1) proved.
Claim (2): Ψ ϕ 1 inverts Ψ ϕ . Proof: Let m , f ϕ * π 1 U . Then
Ψ ϕ 1 Ψ ϕ m , f = Ψ ϕ 1 m , pr 2 U × F Ψ f = m , Ψ 1 ϕ m , pr 2 U × F Ψ f = m , Ψ 1 π f , pr 2 U × F Ψ f ( since   ϕ m = π f ) = m , Ψ 1 pr 1 U × F Ψ f , pr 2 U × F Ψ f = m , Ψ 1 Ψ f = m , f .
With g F ,
Ψ ϕ Ψ ϕ 1 m , g = Ψ ϕ m , Ψ 1 ϕ m , g = m , pr 2 U × F Ψ Ψ 1 ϕ m , g = m , pr 2 U × F ϕ m , g = m , g ,
proving Claim (2).
Claim (3): Ψ ϕ trivializes π ϕ * F over ϕ 1 U M . Proof: Let m , f ϕ * π 1 U . Then
pr 1 ϕ 1 U × F Ψ ϕ m , f = pr 1 ϕ 1 U × F m , pr 2 U × F Ψ f = m = π ϕ * F m , f ,
and by claims (1) and (2), Ψ ϕ is a diffeomorphism, so Ψ ϕ trivializes π ϕ * F over ϕ 1 U M . Claim (3) proved.
Since M can be covered with sets as in claim (3) and since the typical fiber of π ϕ * F is diffeomorphic to F , this shows that π ϕ * F defines a smooth bundle F , ϕ * F , π ϕ * F , M . Because ϕ * F is locally diffeomorphic to the product of an open subset of M with F , ϕ * F has been shown to be a smooth manifold having dimension dim M + dim F = dim M + rank π . □
While the pullback bundle is constructed as a submanifold of a direct product, there is a natural bundle morphism into the pulled-back bundle, which serves as an interface to maps defined on the pulled-back bundle. Usually this morphism is notationally suppressed, just as naturally isomorphic spaces can be identified without explicit notation.
Corollary 1
(Pullback fiber projection bundle morphism). If $\phi : M \to N$ is smooth, then
$\rho_F^{\phi^* F} : \phi^* F \to F, \quad (m, f) \mapsto f$
is a smooth bundle morphism over $\phi$ which is an isomorphism when restricted to any fiber of $\phi^* F$.
Because $\rho_F^{\phi^* F}$ is the projection $\operatorname{pr}_F|_{M \times F}\,\big|_{\phi^* F}$, its tangent map is also just the projection $\operatorname{pr}_{TF}|_{TM \times TF}\,\big|_{T(\phi^* F)}$.
Proposition 8
(Bundle pullback is a contravariant functor). The map of categories
$\mathrm{Pullback} : \mathrm{Manifold} \to \mathrm{Bundle}, \quad M \in \mathrm{Manifold} \mapsto \mathrm{Bundle}(M), \quad \big(\phi : M \to N\big) \mapsto \Big(\mathrm{Bundle}(N) \to \mathrm{Bundle}(M), \ (\mathcal{F}, F, \pi, N) \mapsto (\mathcal{F}, \phi^* F, \pi_{\phi^* F}, M)\Big)$
is a contravariant functor. Here, naturally isomorphic bundles in $\mathrm{Bundle}(M)$, for each manifold $M$, are identified (along with the corresponding morphisms).
Proof. 
Noting that
Id N * F = n , f N × F Id N n = π f F
and that
Id N * π n , f = pr 1 N × F Id N * F n , f = n = π f Id N * π π ,
it follows that Pullback Id N = Id Bundle N = Id Pullback N , i.e., Pullback satisfies the identity axiom of functoriality.
For the contravariance axiom, let ϕ : M N and ψ : L M be smooth manifold morphisms and let F , F , π , N be a smooth bundle. Then
ψ * ϕ * F = , p L × ϕ * F ψ = π ϕ * F p = , m , f L × M × F ψ = π ϕ * F m , f   and   ϕ m = π f = , m , f L × M × F ψ = m   and   ϕ m = π f , f L × F ϕ ψ = π f = ϕ ψ * F
and
π ψ * ϕ * F , m , f = pr 1 L × ϕ * F ψ * ϕ * F , m , f =   and π ϕ ψ * F , f = pr 1 L × F ϕ ψ * F , f = ,
showing that π ψ * ϕ * F π ϕ ψ * F , and therefore
Pullback ψ Pullback ϕ = Pullback ϕ ψ ,
establishing Pullback as a contravariant functor. □
The space of sections of a pullback bundle is easily quantified.
$\Gamma(\phi^* F) = \{\sigma \in C^\infty(M, \phi^* F) \mid \pi_{\phi^* F} \circ \sigma = \operatorname{Id}_M\}.$
This space will be central in the theory developed in the rest of this paper. Furthermore, it is naturally identified with the space of sections along the pullback map;
$\Gamma_\phi(F) := \{\Sigma \in C^\infty(M, F) \mid \pi_F \circ \Sigma = \phi\}.$
These spaces are naturally isomorphic to one another, and therefore an identification can be made when convenient. While the former space is more correct from a strongly typed standpoint, the latter space is a convenient and intuitive representational form. The particular correspondence depends heavily on the fact that ϕ * F is a submanifold of M × F .
$\Gamma(\phi^* F) \cong \Gamma_\phi(F), \quad \sigma \mapsto \operatorname{pr}_2|_{M \times F} \circ \sigma, \quad \operatorname{Id}_M \times_M \Sigma \mapsfrom \Sigma.$
Furthermore, if $f \in \Gamma(F)$, then $f \circ \phi \in \Gamma_\phi(F)$. Note that it is not true that every $\sigma \in \Gamma_\phi(F)$ can be written as $f \circ \phi$ for some $f \in \Gamma(F)$, for example when there exist distinct $p, q \in M$ such that $\phi(p) = \phi(q)$ and $\sigma(p) \neq \sigma(q)$. Furthermore, the representation $f \circ \phi$ is generally non-unique; for example, when $\phi$ is not surjective, sections $f_1, f_2 \in \Gamma(F)$ which differ only away from the image of $\phi$ will still give $f_1 \circ \phi = f_2 \circ \phi$. Before developing the notion of a linear connection on a pullback bundle, it will be necessary to address these features which, while inconvenient, provide the strength of the pullback bundle and pullback covariant derivative (see (5)).
Lemma 1
(Local representation of $\Gamma_\phi(F)$ elements). Recall that $r$ denotes the rank of the smooth bundle $F$. If $\sigma \in \Gamma_\phi(F)$ then each point $p \in M$ has some neighborhood $U$ in which $\sigma$ can be written locally as $\sigma|_U = \sigma^i\, f_i \circ \phi|_U$, where $f_1, \dots, f_r \in \Gamma(F|_{\phi(U)})$ is a frame for $F|_{\phi(U)}$, and $\sigma^1, \dots, \sigma^r \in C^\infty(U, \mathbb{R})$ are defined by $\sigma^i = (f^i \circ \phi|_U) \cdot_F \sigma|_U$.
Proof. 
Let p M , let V N be a neighborhood of ϕ p over which F V is trivial, and let U = ϕ 1 V , so that U is a neighborhood of p. Let f 1 , , f r Γ F V be a frame for F V (i.e., F ϕ U ), and let f 1 , , f r Γ F V * be the corresponding coframe (i.e., the unique f 1 , , f r such that f i · F f j = δ j i for each i , j ). Define σ i C M , R by σ i = f i ϕ U · F σ U . Then
σ i f i ϕ U = f i ϕ U · F σ U f i ϕ U = f i ϕ U U f i ϕ U · F σ U = f i V f i ϕ U · F σ U = I F V ϕ U · F σ U = σ U ,
as desired. □
Some literature uses expressions of the form $f \circ \phi \in \Gamma_\phi(F)$ along with an implicit use of the section-identifying isomorphism to write down particular sections of pullback bundles. In most cases, this tacit identification of spaces is harmless, but certain highly involved calculations may suffer from it. The section that $f \circ \phi$ corresponds to under said isomorphism is $\operatorname{Id}_M \times_M (f \circ \phi) \in \Gamma(\phi^* F)$. However, this expression is unwieldy, and therefore a more compact and contextually meaningful expression is called for.
Definition 3
(Pullback section). If $f \in \Gamma(F)$ and $\phi : M \to N$ is smooth, then define
$\phi^* f := \operatorname{Id}_M \times_M (f \circ \phi) \in \Gamma(\phi^* F).$
This is known as a pullback section.
The pullback section is deservedly named. If $\phi : M \to N$ and $\psi : L \to M$ are smooth, then $\psi^* \phi^* f \cong (\phi \circ \psi)^* f$ in the sense of the proof of (8).
Proposition 9
(Bundle pullback commutes with tensor product). If $E$ and $F$ are smooth vector bundles over the manifold $N$ and $\phi : M \to N$ is smooth, then the map
$\phi^* E \otimes_M \phi^* F \to \phi^* (E \otimes_N F), \quad (m, e) \otimes_M (m, f) \mapsto (m, e \otimes_N f)$
(extended linearly to general tensors) is a smooth vector bundle isomorphism.
Proof. 
Let c denote the above map. The well-definedness of c comes from the universal mapping property on multilinear forms which induces a linear map on a corresponding tensor product. If c m , e M m , f = 0 , then e N f = 0 , which implies that e = 0 or f = 0 , and therefore that m , e M m , f = 0 . Because there exists a basis for ϕ * E M ϕ * F m consisting only of simple tensors, this implies that c is injective, and by a dimensionality argument, that c is an isomorphism. The map is clearly smooth and respects the fiber structures of its domain and codomain. Thus c is a smooth vector bundle isomorphism. □
The contravariance of pullback and its naturality with respect to tensor product are two essential properties which provide some of the flexibility and precision of the strongly typed tensor formalism described in this paper. This will become quite apparent in Section 4.
Remark 2
(Tensor field formulation of smooth vector bundle morphisms). A particularly useful application of pullback bundles is in forming a rich type system for smooth vector bundle morphisms. This approach was inspired by ([14], p. 11). Let $\pi_E : E \to M$ and $\pi_F : F \to N$ be smooth vector bundles, and let $\phi : M \to N$ be smooth. Consider $\operatorname{Hom}_\phi(E, F)$, i.e., the space of smooth vector bundle morphisms over the map $\phi$. There is a natural identification with another space which lets the base map $\phi$ play a more direct role in the space’s type. In particular,
$\operatorname{Hom}_\phi(E, F) \cong \operatorname{Hom}_{\operatorname{Id}_M}(E, \phi^* F), \quad A \mapsto \pi_E \times_E A, \quad \operatorname{pr}_2|_{M \times F} \circ B \mapsfrom B.$
This particular identification of smooth vector bundle morphisms over ϕ can now be directly translated into the tensor field formalism, analogously to (1).
$\Gamma(\phi^* F \otimes_M E^*) \cong \operatorname{Hom}_{\operatorname{Id}_M}(E, \phi^* F), \quad A \mapsto (e \mapsto A \cdot_E e).$
The inverse image of $B \in \operatorname{Hom}_{\operatorname{Id}_M}(E, \phi^* F)$ is given locally; let $e_i$ and $f_i$ denote local frames for $E$ and $F$ in neighborhoods $U \subseteq M$ and $V \subseteq N$ respectively, with $\phi(U) \subseteq V$, and let $e^i$ and $f^i$ denote their dual coframes. Then the tensor field corresponding to $B$ is given locally in $U$ by $B^i_j\, \phi^* f_i \otimes_M e^j$, where $B^i_j := \phi^* f^i \cdot_{\phi^* F} B \cdot_E e_j \in C^\infty(U, \mathbb{R})$.
Quantifying smooth vector bundle morphisms as tensor fields lends itself naturally to doing calculus on vector and tensor bundles, as the relevant derivatives (covariant derivatives) take the form of tensor fields. The type information for a particular vector bundle morphism is encoded in the relevant tensor bundle.

3.7. Tangent Map as a Tensor Field

This section deals specifically with the tangent map operator by using concepts from Section 3.5 and Section 3.6 to place it in a strongly typed setting and to prepare to unify a few seemingly disparate concepts and notation for some tangible benefit (in particular, see Section 3.10).
Given a smooth map $\phi : M \to N$, its tangent map $T\phi : TM \to TN$ is a smooth vector bundle morphism over $\phi$, so by (2), is naturally identified with a tensor field
$\mathring{\nabla}_M^N \phi \in \Gamma(\phi^* TN \otimes_M T^* M),$
which may be denoted by $\mathring{\nabla} \phi$ where type pedantry is deemed unnecessary. This construction is known as a two-point tensor field ([11], p. 70). In general, if $\psi : M \to N$, then $\mathring{\nabla} \phi$ and $\mathring{\nabla} \psi$ have distinct types $\Gamma(\phi^* TN \otimes_M T^* M)$ and $\Gamma(\psi^* TN \otimes_M T^* M)$ respectively, and therefore have no well-defined sum. Thus $\mathring{\nabla}$ is a nonlinear derivative. The inscribed ∘ symbol within the $\mathring{\nabla}$ symbol is meant to denote that nonlinearity, in particular distinguishing it from a linear covariant derivative.
Remark 3
(Generalized covariant derivative). The well-known one-to-one correspondence between linear connections and linear covariant derivatives ([6], p. 520) generalizes to a one-to-one correspondence between Ehresmann connections and a generalized notion of covariant derivative. To give a partial definition for the purposes of utility, a generalized covariant derivative on a smooth [fiber] bundle F N is a map ∇ on Γ F such that σ Γ σ * V F N T * N for each σ Γ F . The space of maps C M , N is naturally identified as Γ N M , and there is a natural Ehresmann connection on the bundle N M , whose corresponding covariant derivative is the tangent map operator. This is the subject of another of the author’s papers and will not be discussed here further. This is mentioned here to incorporate linear covariant derivatives (to be introduced and discussed in Section 3.8) and the tangent map operator (a nonlinear covariant derivative) under the single category “covariant derivative”.
There is a subtle issue regarding construction of the cotangent map of $\phi$ which is handled easily by the tensor field construction. In particular, while the cotangent map $T^* \phi$ is the pointwise adjoint of the tangent map $T\phi$, i.e., for each $p \in M$, $T_p \phi : T_p M \to T_{\phi(p)} N$ is linear and $T_p^* \phi : T_{\phi(p)}^* N \to T_p^* M$ is the adjoint of $T_p \phi$, it does not follow that $T^* \phi \in \operatorname{Hom}(T^* N, T^* M)$, being some sort of “total adjoint” of $T\phi \in \operatorname{Hom}(TM, TN)$. The obstruction is due to the fact that $\phi$ may not be surjective, so there may be some fiber $T_q^* N$ that is not of the form $T_{\phi(p)}^* N$, and therefore the domain could not be all of $T^* N$. Furthermore, even if $\phi$ were surjective, if it were not also injective, say $\phi(p_0) = \phi(p_1)$ for some distinct $p_0, p_1 \in M$, then $T_{\phi(p_0)}^* N = T_{\phi(p_1)}^* N$, and $T_{p_0} M \neq T_{p_1} M$, so the action on the fiber $T_{\phi(p_0)}^* N$ is not well-defined.
In the tensor field parlance, the cotangent map T * ϕ simply takes the form
ϕ 1 2 Γ T * M M ϕ * T N .
The permutation superscript 1 2 is used here instead of * to distinguish it notationally from pullback notation, which will be necessary in later calculations. The key concept is that the tensor field ϕ 1 2 encodes the base map ϕ ; the basepoint p M is part of the domain ϕ * T * N itself.
The chain rule in the tensor field formalism makes use of the bundle pullback. If ψ : L M is smooth, then
L N ϕ ψ = ψ * M N ϕ · ψ * T M L M ψ .
Because ψ Γ ψ * T M L T * L , to form a well-defined natural pairing, the use of the pullback
ψ * ϕ Γ ψ * ϕ * T N M T * M = Γ ψ * ϕ * T N L ψ * T * M = Γ ϕ ψ * T N L ψ * T * M
is necessary (instead of just ϕ Γ ϕ * T N M T * M ).
Sometimes it is useful to discard some type information and write
ϕ Γ ϕ × M Id M T N N × M T * M ,
i.e., ϕ : M T N N × M T * M such that π N T N N × M π M T * M ϕ = ϕ × M Id M . This is easily done by the canonical fiber projection available to all pullback bundle constructions; ϕ * T N M T * M ϕ × M Id M * T N N × M T * M , and the canonical fiber projection is
ρ T N N × M T * M ϕ × M Id M * T N N × M T * M : ϕ × M Id M * T N N × M T * M T N N × M T * M ,
as defined in (1). The granularity of the type system should reflect the weight of the calculations being performed. For demonstration of contrasting situations, see the discussion at the beginning of Section 3.8 and the computation of the first variation in (1).
It is important to have notation which makes the distinction between the smooth vector bundle morphism formalism and the tensor field formalism, because it may sometimes be necessary to mix the two, though this paper will not need this. An added benefit to the tensor field formulation of tangent maps is that certain notions regarding derivatives can be conceptually and notationally combined, for example in Section 3.10.

3.8. Linear Covariant Derivatives

As will be shown in the following discussion, a linear covariant derivative (commonly referred to in the standard literature without the “linear” qualifier) provides a way to generalize the notion in elementary calculus of the differential of a vector-valued function. The linear covariant derivative interacts naturally with the notion of the pullback bundle, and this interaction leads naturally to what could be called a covariant derivative chain rule, which provides a crucial tool for the tensor calculus computations seen later.
Let V and W be finite-dimensional vector spaces, let U ⊆ V be open, and let ϕ : U W be differentiable. Recall from elementary calculus the differential D ϕ : U W V * (essentially matrix-valued). There is no base map information encoded in D ϕ (i.e., ϕ cannot be recovered from D ϕ alone); it contains only derivative information. The vector space structure of V and W allows the trivializations T U V U and T W W W , where the first factors are the base spaces and the second factors are the fibers (see (1)). The tangent map U W ϕ : U T W W × U T * U (see Section 3.7) has a codomain that can be trivialized similarly;
T W W × U T * U W W W × U V * U W V * W × U .
Because W V * W × U , as a set, is a direct product, it can be decomposed into two factors. Letting pr 1 and pr 2 be the projections onto the first and second factors respectively,
pr 1 ϕ : U W V *   and   pr 2 ϕ : U W × U .
The map pr 2 ϕ is the element of Γ W U identified with the base map ϕ itself; pr W W × U pr 2 ϕ = ϕ . This base map information is discarded in defining the differential of ϕ as D ϕ : = pr 1 ϕ ; the fiber portion of ϕ . This construction relies critically on the natural isomorphism T W W W for a vector space W.
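To keep the vector-space picture concrete, here is a small numerical sketch (the map phi, the point, and the step size are arbitrary choices made only for illustration) that computes the two factors of the trivialized tangent map: the base-map value given by pr 2 and the matrix-valued differential given by pr 1, the latter approximated by central finite differences.
```python
import numpy as np

def phi(p):
    """An arbitrary smooth map phi : R^2 -> R^2 used only for illustration."""
    x, y = p
    return np.array([x * y, np.sin(x) + y**2])

def tangent_map(phi, p, h=1e-6):
    """Return (base-map value, finite-difference Jacobian) at the point p.

    This is the vector-space trivialization T(R^2) ~ R^2 x R^2: the tangent map
    splits into the base map phi(p) and the purely fiberwise derivative D phi(p).
    """
    base = phi(p)                                    # pr2 component: phi itself
    jac = np.column_stack([                          # pr1 component: D phi(p)
        (phi(p + h * e) - phi(p - h * e)) / (2 * h)
        for e in np.eye(len(p))
    ])
    return base, jac

p = np.array([0.3, -1.2])
base, jac = tangent_map(phi, p)
print(base)   # phi(p) -- the base map information
print(jac)    # D phi(p) -- derivative information only, with no base map encoded
```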
An analogous construction shows that the differential D ϕ of a map ϕ is well-defined even when its domain is a manifold. However, when the codomain of a map ϕ is only a manifold, there does not in general exist a natural trivialization of its tangent bundle (in contrast to the vector space case), and therefore D ϕ cannot be defined without additional structure. A linear covariant derivative provides the missing structure.
For the remainder of this section, let π : E N define a smooth vector bundle having rank r.
A linear covariant derivative on E provides a means of taking derivatives of sections of E (i.e., maps σ : N E such that π σ = Id N ) without passing to a higher tangent bundle as would happen under the tangent map functor (i.e., if σ Γ E then T σ :   T N T E and N E σ : N T E E × N T * N ). A linear covariant derivative provides an effective “trivialization” of T E analogous to the trivialization T W W W as discussed above, discarding all but the “fiber” portion of the derivative, allowing the construction of an object known as the total linear covariant derivative analogous to the differential D ϕ as discussed above.
The notion of a linear covariant derivative on a vector bundle is arguably the crucial element of differential geometry (The Fundamental Lemma of Riemannian Geometry establishes the existence of the Levi-Civita connection ([8], p. 68), which is a linear covariant derivative satisfying certain naturality properties.). In particular, this operator implements the product rule property common to anything that can be called a derivation—a property which is particularly conducive to the operation of tensor calculus. The total linear covariant derivative of a vector field (i.e., section of a vector bundle) allows the generalization of many constructions in elementary calculus to the setting of smooth vector bundles equipped with linear covariant derivatives. For example, the divergence div X : = tr D X of a vector field X on R n generalizes to the divergence div X : = tr X of a vector field X on N, which has an analogous divergence theorem among other qualitative similarities.
Remark 4
(Natural linear covariant derivative on trivial line bundle). Before making the general definition for the linear covariant derivative, a natural linear covariant derivative will be introduced. With N denoting a smooth manifold as before, if f C N , R , then d f Γ T * N is the differential of f. Let
| N R f : = d f .
Because C N , R is naturally identified with Γ R N , this is essentially the natural linear covariant derivative on the trivial line bundle R N . Note that there is an associated product rule; if f , g C N , R , then f g C N , R , and
| N R f g = d f g = g d f + f d g = g | N R f + f | N R g .
When clear from context, the superscript decoration can be omitted and the derivative denoted as | f .
Definition 4
(Linear covariant derivative). A linear covariant derivative on a vector bundle defined by π : E N is an R -linear map | E : Γ E Γ E N T * N satisfying the product rule
| E f N σ = σ N | N R f + f N | E σ ,
where f C N , R and σ Γ E . The switch in order in the first term of the expression is necessary to form a tensor field of the correct type, Γ E N T * N . If σ Γ E , then the expression | E σ is known as the total [linear] covariant derivative of σ. If | E σ = 0 [in a subset U N ], then σ is said to be parallel [on U]. The “linear” qualifier is implied in standard literature and is therefore often omitted.
The inscribed | in the symbol ∇ is to indicate that the covariant derivative is linear, and can be omitted when clear from context, or when it is unnecessary to distinguish it from the nonlinear tangent map operator, whose decoration is the inscribed ∘. For the remainder of this section, this distinction will not be necessary, so an undecorated ∇ will be used.
For V Γ T N , it is customary to denote E σ · V by V E σ , where V indicates the “directional” component of the derivative. Following this convention, the product rule can be written in a form where its structure is more obvious;
V E f N σ = V N R f N σ + f N V E σ .
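For readers who prefer to see the axioms in a chart, the following symbolic sketch (assuming a trivialized rank-2 bundle over R² with arbitrarily chosen connection coefficients; all names are hypothetical) implements the local formula ∇_V σ = V(σ) + ω(V)σ and verifies the product rule of Definition 4 with sympy.
```python
import sympy as sp

x, y = sp.symbols('x y')
coords = (x, y)

# Hypothetical connection coefficients: omega[k] is the matrix paired with d/dx_k.
omega = [sp.Matrix([[y, 0], [1, x]]), sp.Matrix([[0, x*y], [sp.sin(x), 0]])]

def nabla(V, sigma):
    """Directional covariant derivative in a local frame:
    (nabla_V sigma)^i = V(sigma^i) + (omega(V) sigma)^i."""
    directional = sp.Matrix([sum(V[k] * sp.diff(s, coords[k]) for k in range(2))
                             for s in sigma])
    correction = sum((V[k] * omega[k] for k in range(2)), sp.zeros(2, 2)) * sigma
    return sp.simplify(directional + correction)

V = sp.Matrix([x, sp.exp(y)])              # an arbitrary vector field
sigma = sp.Matrix([sp.cos(x*y), x + y])    # an arbitrary local section
f = x**2 * y                               # an arbitrary smooth function

lhs = nabla(V, f * sigma)
rhs = sum(V[k] * sp.diff(f, coords[k]) for k in range(2)) * sigma + f * nabla(V, sigma)
print(sp.simplify(lhs - rhs))              # zero matrix: the product rule holds
```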
Given a linear covariant derivative E on E, there is a naturally induced linear covariant derivative E * on E * satisfying the product rule for the natural pairing on E, namely,
X N R α · E σ = X E * α · E σ + α · E X E σ ,
where X Γ T N , α Γ E * , and σ Γ E .
A covariant derivative is a local operator with respect to the base space N; if p N , then E σ p depends only on the restriction of σ to an arbitrarily small neighborhood of p ([8], p. 50), and therefore the restriction E U :   Γ U E Γ U E N T * N makes sense, allowing calculations using local expressions. Furthermore, a covariant derivative can be constructed locally and glued together under certain conditions. See ([6], p. 503) for more on this, and as a reference for general theory on bundles, covariant derivatives, and connections.
Linear covariant derivatives on several vector bundle constructions will now be developed. In analogy to defining a linear map by its action on a generating subset (e.g., a basis or a dense subspace) and then extending using the linear structure, Lemma (3) allows a covariant derivative to be defined on a generating subset (which can be chosen to make the defining expression particularly natural) and then extending. In this case, the relevant space is the space of sections of the vector bundle, which is a module over the ring of smooth functions on a manifold, and the extension process is done via linearity and the product rule (see (4)). This approach will allow the local trivialization implementation details to be hidden within the proof of Lemma (3)—an example of information hiding—so that constructions of covariant derivatives can proceed clearly by focusing only on the natural properties of the relevant objects and then invoking the lemma to do the “dirty” work (see (10) and (11)).
A bit of useful notation will be introduced to simplify the next definition. If G Γ is a subset of a C N , R -module Γ whose elements are functions on N (and therefore have a notion of restriction to a subset) and U N is open, then let G U denote the set of restrictions of the elements of G to the set U. Note that G U Γ U by construction.
Definition 5
(Finitely generating subset). Say that a subset of a module finitely generates the module if the subset contains a finite set of generators for the module.
Definition 6
(Locally finitely generating subset). If Γ is a C N , R -module and G Γ , then G is said to be a locally finitely generating subset of Γ if each point q N has a neighborhood U N for which G U finitely generates Γ U .
The space of sections of a vector bundle is the archetype for the above definition. The locally trivial nature of π : E N allows local frames to be chosen in a neighborhood of each point of N, from which global smooth sections (though not necessarily a global frame) can be made using a partition of unity subordinate to the trivializing neighborhoods. The set of such global sections forms a locally finite generating subset of Γ E .
Lemma 2.
If G is a locally finitely generating subset of Γ E , then each point in N has a neighborhood U N and e 1 , , e r G U such that e 1 , , e r forms a frame for Γ U E . In other words, a local frame can be chosen out of G near each point in N.
Proof. 
Let q N and let V N be a neighborhood of q for which G V = g 1 , , g ℓ finitely generates Γ V E (here, ℓ ≥ r , recalling that r = rank E ). Without loss of generality, let g 1 q , , g r q be linearly independent (this is possible because g 1 q , , g ℓ q spans the vector space E q ). Because g i is continuous for each i and the linear independence of the sections g 1 , , g r is an open condition (defined by L 1 R \ 0 where L : N r E q , p g 1 p g r p ), there is a neighborhood U V of q for which g 1 p , , g r p is a linearly independent set for each p U . Finally, letting e i : = g i U for i 1 , , r , the sections e 1 , , e r G U form a frame for Γ U E . □
The following lemma shows that defining a covariant derivative on a locally finitely generating subset of the space of sections of a vector bundle is sufficient to uniquely define a covariant derivative on the whole space. The particular generating subset can be chosen so the covariant derivative has a particularly natural expression within that subset.
Lemma 3
(Linear covariant derivative construction). Let G be a locally finite generating subset of Γ E . If G : G Γ E N T * N satisfies the linear covariant derivative axioms (What is meant by this is that the product rule must only be satisfied on λ N g if λ g G , where λ C N , R and g G .), then there is a unique linear covariant derivative E : Γ E Γ E N T * N whose restriction to G is G .
Proof. 
If q N , then by (2) there exists a neighborhood U N of q for which there are e 1 , , e r G U forming a frame for E U . If σ Γ E , then σ U   = σ i e i for some σ 1 , , σ r C U , R (specifically, σ i = e i · E σ U , where e 1 , , e r Γ U E * denotes the dual coframe of e 1 , , e r ). Define E : Γ E Γ E N T * N locally on Γ U E so as to satisfy the product rule
E σ U : = e i N N R σ i + σ i N G e i .
To show well-definedness, let f 1 , , f r G U be another frame for E U . Then σ = τ i f i for some τ 1 , , τ r C U , R . Let Ψ : Γ U E Γ U E be the unique smooth vector bundle isomorphism such that f i = Ψ · E e i . Writing Ψ and Ψ 1 with respect to the frame e i as Ψ j i e i e j and Ψ 1 j i e i e j respectively, it follows that f i = Ψ i j e j and τ i = σ j Ψ 1 j i . Then
E τ i f i = f i N N R τ i + τ i N G f i = Ψ i j e j N N R σ k Ψ 1 k i + σ j Ψ 1 j i N G Ψ i k e k = Ψ i j e j Ψ 1 k i N N R σ k + Ψ i j e j σ k N N × R Ψ 1 k i + σ j Ψ 1 j i e k N N R Ψ i k + σ j Ψ 1 j i Ψ i k G e k = δ k j e j N N R σ k + σ j δ j k G e k + σ e k N N R Ψ i k Ψ 1 i = E σ i e i .
The last equality follows because Ψ i k Ψ 1 i = δ k , which is a constant function, so
N R Ψ i k Ψ 1 i = 0 .
Thus the expression defining E does not depend on the choice of local frame. This establishes the well-definedness of E .
Clearly the restriction of E to G is G . This establishes the claim of existence. Uniqueness follows from the fact that E is defined in terms of the maps N R and G . □
Lemma (3) is used in the proof of the following proposition to allow a natural formulation of the pullback covariant derivative with respect to a natural locally finite generating subset of Γ ϕ * E , in which the relevant derivative has a natural chain rule.
Proposition 10
(Pullback covariant derivative). If ϕ : M N is smooth and E is a covariant derivative on E, then there is a unique covariant derivative ϕ * E on ϕ * E satisfying the chain rule
ϕ * E ϕ * e = ϕ * E e · ϕ * T N M N ϕ
for all e Γ E .
Proof. 
Let
G : = σ Γ ϕ * E σ = ϕ * e   for some   e Γ E ,
noting that a local frame e 1 , , e rank E Γ U E over open set U N induces a local frame ϕ * e 1 , , ϕ * e rank E Γ ϕ 1 U ϕ * E , so G is a locally finite generating subset of Γ ϕ * E . Define
G : G Γ ϕ * E M T * M , ϕ * e ϕ * E e · ϕ * T N M N ϕ .
The well-definedness and R -linearity of G comes from that of E . For the product rule, if λ C M , R and e Γ E , then the product λ M ϕ * e is an element of G if and only if λ = ϕ * μ for some μ C N , R , in which case, λ M ϕ * e   = ϕ * μ M ϕ * e = ϕ * μ N e . Then it follows that
G λ M ϕ * e = G ϕ * μ N e = ϕ * E μ N e · ϕ * T N M N ϕ = ϕ * e N N R μ + μ N E e · ϕ * T N M N ϕ = ϕ * e N N R μ · ϕ * T N M N ϕ + ϕ * μ N E e · ϕ * T N M N ϕ = ϕ * e M ϕ * N R μ · ϕ * T N M N ϕ + ϕ * μ M ϕ * E e · ϕ * T N M N ϕ = ϕ * e M M R ϕ * μ + ϕ * μ M G ϕ * e = ϕ * e M M R λ + λ M G ϕ * e ,
which is exactly the required product rule. By (3), there exists a unique covariant derivative ϕ * E on ϕ * E whose restriction to G is G . □
The full notation ϕ * E is often cumbersome, so it may be denoted by ϕ when the pulled-back bundle is clear from context.
Remark 5.
There is an important feature of a pullback covariant derivative in the case that pullback map is not an immersion; the pullback covariant derivative may be nonzero even where the pullback map is singular. This fact can be obscured by a certain abuse of notation which often comes in the expression of the geodesic equations in differential geometry (see (4)). An example will illustrate this point.
Let T M be a covariant derivative on π M T M : T M M . Let Θ : R T M be a unit-length vector field which describes the location of a person (the basepoint) and direction s/he is looking (the fiber portion) with respect to time (let R have standard coordinate t). Define θ : R M by θ : = π M T M Θ , so that θ is the base map of Θ, i.e., θ has discarded the direction information and only encodes the location information. Say that for some closed interval I R , d θ d t I is identically zero (and so is not an immersion), but that d Θ d t I is nonvanishing; see Figure 1. Mathematically, this means that during this time, Θ is varying only within a single fiber of T M . Physically, this means that during this time, the person is standing still but the direction s/he is looking is changing. Passing to a higher tangent space is often undesirable (note that d Θ d t takes values in T T M ), so to avoid this, a covariant derivative is used. In order to be meaningful, the covariant derivative must capture this fiber-only variation.
Because Θ is a vector field along θ, it can be written as Θ Γ θ * T M , and the covariant derivative on T M induces a pullback covariant derivative on θ * T M , which has base space R . In other words, θ * T M is parameterized by time. Then d d t θ * T M Θ Γ θ * T M is the desired covariant derivative of Θ with respect to time. A coordinate-based calculation will be made to make completely obvious why this pullback covariant derivative captures the desired information. Let x i be local coordinates on M and, for simplicity, assume that the image of θ lies entirely within this coordinate chart. Because i is a local frame for T M , θ * i is a local frame for θ * T M , by (1) and Θ Γ θ * T M can be written locally as Θ t = Θ i t θ * i t for some functions Θ i : R R . Then
d d t θ * T M Θ = d d t θ * T M Θ i θ * i = d d t Θ i θ * i + Θ i d d t θ * T M θ * i = d Θ i d t θ * i + Θ i θ * T M i · θ * T M R M θ · T R d d t = d Θ i d t θ * i + Θ i θ * T M i · θ * T M d θ d t .
Note that R M θ Γ θ * T M . Within the interval I, d θ d t vanishes, so the second term vanishes on I. However, because Θ is varying in a fiber-only direction within I, the basepoint is not changing and d Θ i d t θ * i can be identified with an elementary vector space derivative (the fiber is a vector space and so an elementary derivative is well-defined there). This fiber-direction derivative is nonvanishing by assumption, so d d t θ * T M Θ is nonvanishing on I as desired.
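The following sympy sketch replays the example of this remark in coordinates on the round 2-sphere (the Christoffel symbols are the standard ones for the metric du² + sin²(u) dv², stated without derivation; the specific curve and vector field are arbitrary choices): the base curve is constant, so its velocity vanishes identically, yet the pullback covariant derivative of the rotating vector field along it is nonzero, because it still sees the fiber-only variation.
```python
import sympy as sp

t = sp.symbols('t')

# Christoffel symbols of the round 2-sphere in coordinates (u, v),
# metric du^2 + sin(u)^2 dv^2 (standard values, quoted without proof).
def Gamma(u, v):
    G = [[[0, 0], [0, 0]], [[0, 0], [0, 0]]]
    G[0][1][1] = -sp.sin(u) * sp.cos(u)                # Gamma^u_{vv}
    G[1][0][1] = G[1][1][0] = sp.cos(u) / sp.sin(u)    # Gamma^v_{uv} = Gamma^v_{vu}
    return G

# A base curve theta(t) that stands still, and a unit-length vector field Theta(t)
# along it that rotates within the single fiber over that point.
theta = (sp.pi / 3, sp.Integer(0))                     # constant base map: d theta/dt = 0
Theta = [sp.cos(t), sp.sin(t) / sp.sin(theta[0])]      # spins in the fiber over theta

def covariant_derivative_along_curve(theta, Theta):
    """(D Theta / dt)^k = d Theta^k/dt + Gamma^k_{ij} (d theta^i/dt) Theta^j."""
    G = Gamma(*theta)
    dtheta = [sp.diff(c, t) for c in theta]            # identically zero here
    return [sp.diff(Theta[k], t)
            + sum(G[k][i][j] * dtheta[i] * Theta[j] for i in range(2) for j in range(2))
            for k in range(2)]

print(covariant_derivative_along_curve(theta, Theta))
# Nonzero (it reduces to the plain fiberwise derivative d Theta^k/dt), even though
# the base curve is constant and hence nowhere an immersion.
```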
Introducing a bit of natural notation which will be helpful for the next result, if X Γ E and Y Γ F , then define X Y X M × N Y Γ E M × N F and X Y X M × N Y Γ E M × N F by
X M × N Y p , q : = X p Y q and X M × N Y p , q : = X p Y q
for each p , q M × N .
Proposition 11
(Induced covariant derivatives on E M × N F and E M × N F ). If E and F are covariant derivatives on E and F respectively, then there are unique covariant derivatives
E M × N F : Γ E M × N F Γ E M × N F M × N T * M M × N T * N
and
E M × N F : Γ E M × N F Γ E M × N F M × N T * M M × N T * N
on E F and E F respectively, satisfying the sum rule
u v E F X Y = u E X v F Y
and the product rule
u v E F X Y = u E X Y + X v F Y ,
respectively, where X Γ E , Y Γ F , and u v T M T N . Here, T M T N M × N (and its dual) is used instead of the isomorphic vector bundle T M × N M × N (and its dual).
Proof. 
Suppressing the pedantic use of the M × N subscript to avoid unnecessary notational overload, the set G : = e f e Γ E , f Γ F is a locally finite generator of Γ E F , since local frames for E F take the form e i 0 , 0 f j , where e i and f j are local frames for E and F respectively. Define
G : G Γ E F T * M T * N , X Y u v u E X v F Y ,   where   u v T M T N .
This map is well-defined and R -linear by construction, since the connections E and F are well-defined and R -linear. If λ C M × N , R , X Γ E , and Y Γ F , then the product λ X Y is in G (i.e., has the form X ¯ Y ¯ for some X ¯ Γ E and Y ¯ Γ F ) if and only if λ is constant. Thus the product rule (restricted to elements of G) reduces to R -linearity, which is already satisfied. By (3), there exists a unique connection E F on E F whose restriction to G is G .
Similarly, the set H : = e f e Γ E , f Γ F is a locally finite generator of Γ E F , since local frames for E F take the form e i f j , where e i and f j are local frames for E and F respectively. Define
H : H Γ E F T * M T * N , X Y u v u E X Y + X v F Y ,   where   u v T M T N .
This map is well-defined and R -linear by construction, since the connections E and F are well-defined and R -linear. For the product rule, with λ C M × N , R , X Γ E , and Y Γ F , the product λ X Y is in H if and only if there exist μ C M , R and ν C N , R such that λ = μ ν R M R N (noting that then λ M × N X Y   = μ ν M × N X Y   = μ M X ν N Y ). In this case, with u v T M T N ,
u v H λ M × N X Y = u v H μ ν M × N X Y = u v H μ M X ν N Y = u E μ M X ν N Y + μ M X v F ν N Y = u M R μ M X ν N Y + μ M u E X ν N Y + μ M X v N R ν N Y + μ M X ν N v F Y = u M R μ ν + μ v N R ν M × N X Y + λ M × N u E X Y + X v F Y = u v M × N R λ M × N X Y + λ M × N u v H X Y ,
which is exactly the required product rule. By (3), there exists a unique connection E F on E F whose restriction to H is H . □
Remark 6
(Naturality of the covariant derivatives on E M × N F and E M × N F ). Letting pr i : = pr i M × N ( i 1 , 2 ) for brevity, the maps
ξ : E M × N F pr 1 * E M × N pr 2 * F , e f π E π F e f , e M × N π E π F e f , f
and
ψ : E M × N F pr 1 * E M × N pr 2 * F , e f π E π F e f , e M × N π E π F e f , f ,
each extended linearly to the rest of their domains, are easily shown to be smooth vector bundle isomorphisms over Id M × N . Then
z E F X Y = ξ 1 z pr 1 * E M × N pr 2 * F ξ X Y
and
z E F X Y = ψ 1 z pr 1 * E M × N pr 2 * F ψ X Y
for all X Γ E , Y Γ F , and z T M × N , showing that the connections on E F and E F are ξ and ψ-related to the naturally induced connections on pr 1 * E pr 2 * F and pr 1 * E M × N pr 2 * F respectively, and are therefore in this sense natural. The sum X Y Γ E F and product X Y Γ E F correspond to pr 1 * X M × N pr 2 * Y and pr 1 * X M × N pr 2 * Y Γ pr 1 * E M × N pr 2 * F under ξ and ψ respectively.
Many important tensor constructions involve permutations. An extremely useful property of these permutations is that they commute with the covariant derivatives induced by the covariant derivatives on the tensor bundle factors, making them natural operators in the setting of covariant tensor calculus.
Proposition 12
(Transposition tensor fields are parallel). Let E 1 , E 2 , E 3 , E 4 be smooth vector bundles over M having covariant derivatives E 1 , E 2 , E 3 , E 4 respectively, let A : = E 1 M E 2 M E 3 M E 4 and B : = E 1 M E 3 M E 2 M E 4 , and let A and B denote the induced covariant derivatives.
If 2 3 Γ A * M B denotes the tensor field which maps e 1 M e 2 M e 3 M e 4 A to e 1 M e 3 M e 2 M e 4 B (i.e., 2 3 transposes the second and third factors), then 2 3 is a parallel tensor field with respect to the covariant derivative induced on the vector bundle A * M B M , i.e., A * M B 2 3 = 0 .
Proof. 
Let X Γ T M . Then
e 1 M e 2 M e 3 M e 4 · A * X A * M B 2 3 = X B e 1 M e 2 M e 3 M e 4 · A * 2 3 X A e 1 M e 2 M e 3 M e 4 · A * 2 3 = X B e 1 M e 3 M e 3 M e 4 X E 1 e 1 M e 3 M e 2 M e 4 e 1 M X E 3 e 3 M e 2 M e 4 e 1 M e 3 M X E 2 e 2 M e 4 e 1 M e 3 M e 2 M X E 4 e 4 = X B e 1 M e 3 M e 3 M e 4 X B e 1 M e 3 M e 3 M e 4 = 0 .
Because X is arbitrary, this shows that e 1 M e 2 M e 3 M e 4 · A * A * M B 2 3 = 0 . This extends linearly to general tensors, so A * M B 2 3 = 0 , as desired. □
The fact that all transposition tensor fields are parallel implies that all permutation tensor fields are parallel, since every permutation is just the product of transpositions. This gives as an easy corollary that a covariant derivative operation commutes with a permutation operation, which has quite a succinct statement using the permutation superscript notation.
Corollary 2
(Permutation tensor fields are parallel). Let E 1 , , E k be smooth vector bundles over M each having a covariant derivative, and let A : = E 1 M M E k and B : = E σ 1 1 M M E σ 1 k . If σ S k is interpreted as the tensor field in Γ A * M B which maps e 1 M M e k to e σ 1 1 M M e σ 1 k , then σ is a parallel tensor field. Stated using the superscript notation, with X Γ T M and a Γ A ,
X B a σ = X A a σ .
Proof. 
This follows from the fact that σ can be written as the product of transpositions; X σ = 0 because of the product rule and because each transposition is parallel. The claim regarding commutation with the superscript permutation follows easily from its definition.
X B a σ = X B a · A * σ = a · A * X A * M B σ + X A a · A * σ = X A a σ ,
using the fact that X A * M B σ = 0 , since σ is a parallel tensor field. □
Finally, any smooth vector bundle E M has a canonical identity tensor field  I E Γ E M E * acting as the identity on Γ E , i.e., I E · E σ = σ for all σ Γ E . Given a local frame e 1 , , e r Γ U E over open set U M , it has the expression I E = e i e i . The identity tensor field is an invaluable tool in forming tensor field expressions and in phrasing other naturality conditions regarding covariant derivatives.
Proposition 13
(Identity tensor field is parallel). Let E M be a smooth vector bundle with a linear covariant derivative E . Then I E is parallel with respect to E M E * , i.e., E M E * I E = 0 .
Proof. 
Let σ Γ E . Then by definition, 0 = I E · E σ σ . Taking the covariant derivative of both sides with respect to X Γ T M ,
0 = X E I E · E σ σ = X E I E · E σ X E σ = X E M E * I E · E σ + I E · E X E σ X E σ = X E M E * I E · E σ + X σ X σ = X E M E * I E · E σ = E M E * I E · T M X · σ .
Because σ is arbitrary, this implies that E M E * I E · T M X = 0 . Because X is arbitrary, this implies that E M E * I E = 0 . □

3.9. Decomposition of π E T E : T E E

In using the calculus of variations on a manifold M where the Lagrangian is a function of T M (this form of Lagrangian is ubiquitous in mechanics), taking the first variation involves passing to T T M . Without a way to decompose variations into more tractable components, the standard integration-by-parts trick ([15], p. 16) cannot be applied. The notion of a local trivialization of T T M via choice of coordinates on M is one way to provide such a decomposition. A coordinate chart U , ϕ : U R n on M establishes a locally trivializing diffeomorphism T T U ϕ U × R n × R n × R n . However such a trivialization imposes an artificial additive structure on T T U depending on the [non-canonical] choice of coordinates, only gives a local formulation of the relevant objects, and the ensuing coordinate calculations do not give clear insight into the geometric structure of the problem. The notion of the linear connection remedies this ([16]).
A linear connection on the vector bundle π : E M is a subbundle H E of π E T E : T E E such that T E = H E V E and T λ a · H x = H a x for all a R \ 0 and x E , where λ a : E E , e a e is the scalar multiplication action of a on E ([6], p. 512). The bundle H E may also be called a horizontal space of the vector bundle π E T E : T E E (“a” is used instead of “the” because a choice of H E is generally non-unique). For convenience, define h : = ∇ π Γ π * T M E T * E , noting then that V E = ker h .
A linear connection can equivalently be specified by what is known as a connection map; essentially a projection onto the vertical bundle. This is a slightly more active formulation than just the specification of a horizontal space, as a covariant derivative can be defined directly in terms of the connection map—see ([6], p. 518), ([17], p. 128), ([18], p. 173), and ([7], p. 208).
Proposition 14
(Connection map formulation of a linear connection). If v Γ π * E E T * E (i.e., v : T E E is a smooth vector bundle morphism over π) is a left-inverse for ι V E π * E Γ V E E π * E * that is equivariant with respect to T λ a and λ a (i.e., v · T λ a = π * λ a · v ) ([7], p. 245), then H : = ker v T E defines a linear connection on the vector bundle π : E M . Such a map v is called the connection map associated to H. Conversely, given a linear connection H, there is exactly one connection map defining H in the stated sense.
Proof. 
That v is a left-inverse for ι V E π * E implies that v has full rank, so H : = ker v defines a subbundle of π E T E : T E E having the same rank as T M . Because v is smooth, H is a smooth subbundle. Furthermore, the condition implies that V e E H e = 0 for each e E , and therefore T E = H E V E by a rank-counting argument.
If x T E and a R \ 0 , then v · T λ a · x = π * λ a · v · x , which equals zero if and only if v · x = 0 , i.e., if and only if x H . Thus T λ a · H = H . This establishes H E as a linear connection.
Conversely, if H is a linear connection and v 1 and v 2 are connection maps for H, then v 1 · T E ι V E π * E = Id π * E = v 2 · T E ι V E π * E . Then because the image of ι V E π * E is all of V E , it follows that v 1 V E = v 2 V E . Since v 1 H = 0 = v 2 H by definition, and since T E = H E V E , this shows that v 1 = v 2 . Uniqueness of connection maps has been established. To show existence, define v : = ι π * E V E · V E pr V E Γ π * E E T * E , where pr V E : H E V E V E is the canonical projection, recalling that H E V E = T E . It is easily shown that v is a connection map for H. □
Proposition 15
(Decomposing π E T E : T E E ). If v Γ π * E E T * E is a connection map, then
h E v : T E π * T M E π * E
is a smooth vector bundle isomorphism over Id E . See Figure 2.
Proof. 
Because T E = H E V E , and H = ker v and V E = ker h , the fiber-wise restriction
h E v T e E : T e E π * T M E π * E e T π e M E π e
is a linear isomorphism for each e E . The map is a smooth vector bundle morphism over Id E by construction. It is therefore a smooth vector bundle isomorphism over Id E . □
Remark 7
(Linear connection/covariant derivative correspondence). Given a covariant derivative E on a smooth vector bundle π : E M , there is a naturally induced linear connection, defined via the connection map
v : T E E , δ ϵ Θ δ ϵ π Θ * E Θ ,
where Θ : I E is a variation of θ E . Here, π Θ * E denotes the pullback of the covariant derivative E through the map π Θ (see (10)). Conceptually, all v does is replace an ordinary derivative ( δ ϵ ) with the corresponding covariant one ( δ ϵ π Θ * E ).
Conversely, given a connection map v Γ π * E E T * E for a linear connection H E , there is a naturally induced covariant derivative E on the smooth vector bundle π : E M , defined by
E : Γ E Γ E M T * M , σ σ * v · σ * T E M E σ .
The scaling equivariance of v is critical for showing that this map actually defines a covariant derivative. Full type safety should be observed here; by the contravariance of the pullback of bundles (see (8)), σ * π * E π σ * E = Id M * E E , so
σ * v Γ σ * π * E E T * E Γ σ * π * E M σ * T * E Γ E M σ * T * E ,
and therefore σ * v · σ Γ E M T * M as desired. This connection map construction of a covariant derivative gives (10) as an immediate consequence via the chain rule for the tangent map.
The following construction is an abstraction of taking partial derivatives of a function, inspired by ([11], p. 277). Instead of taking partial derivatives with respect to individual coordinates, partial covariant derivatives along distributions over the base manifold are formed, where the distributions (subbundles) decompose the base manifold’s tangent bundle into a direct sum. Such a construction conveniently captures the geometry of a map with respect to the geometry of its domain.
Proposition 16
(Partial covariant derivatives). Let L C M , R , and for each i 1 , , n let F i M be a smooth vector bundle. If, for each i 1 , , n , c i Γ F i M T * M is such that c 1 M M c n Γ F 1 M M F n M T * M is a smooth vector bundle isomorphism over Id M , then there exist unique sections L , c i Γ F i * for each i 1 , , n such that
M R L = L , c 1 · F 1 c 1 + + L , c n · F n c n .
This decomposition of L provides what will be called partial covariant derivatives of L (with respect to the given decomposition).
Proof. 
The following equivalences provide a formula for directly defining L , c 1 , , L , c n .
L = L , c 1 · F 1 c 1 + + L , c n · F n c n L = L , c 1 M M L , c n · F 1 M M F n c 1 M M c n L · T M c 1 M M c n 1 = L , c 1 M M L , c n .
Existence and uniqueness is therefore proven. □
Corollary 3
(Horizontal/vertical derivatives). Let h : = ∇ π Γ π * T M E T * E as before. If v Γ π * E E T * E is a connection map, and if L : E R is smooth, then there exist unique L , h Γ π * T * M and L , v Γ π * E * such that ∇ L = L , h · π * T M h + L , v · π * E v .
It should be noted that the basepoint-preserving issue discussed in Section 3.7 plays a role in choosing to use the tensor field formulation of h : T E T M and v : T E E . In particular, without preserving the basepoint (via the π -pullback of T M and E to form h Γ π * T M E T * E and v Γ π * E E T * E ), the map h E v would not be a smooth bundle isomorphism, and the horizontal and vertical derivatives would be maps of the form L , h : E T * M and L , v : E E * which, critically, are not sections of smooth vector bundles and can only claim to be smooth [fiber] bundle morphisms. Derivative trivializations will be central in calculating the first and second variations of an energy functional having Lagrangian L (see (1) and (2)).
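In the flat, trivialized case (a line bundle R × R → R with the standard flat connection, and an arbitrarily chosen Lagrangian), the horizontal and vertical derivatives of Corollary 3 reduce to ordinary partial derivatives in the base and fiber variables, anticipating the quantities L_x, L_z, L_p of Remark 8. The short sympy sketch below checks the resulting decomposition of the derivative of L along a section.
```python
import sympy as sp

# Trivial line bundle R x R -> R with base coordinate x and fiber coordinate u.
x, u = sp.symbols('x u')
L = sp.Rational(1, 2) * sp.exp(x) * u**2      # an arbitrary Lagrangian L : E -> R

L_h = sp.diff(L, x)      # horizontal (base) derivative, paired with h
L_v = sp.diff(L, u)      # vertical (fiber) derivative, paired with v

# Along a section sigma : x -> (x, sigma(x)), the derivative of L decomposes into
# the horizontal part plus the vertical part paired with the fiber velocity.
sigma = sp.sin(x)                              # an arbitrary section
total = sp.diff(L.subs(u, sigma), x)
decomposed = (L_h + L_v * sp.diff(sigma, x)).subs(u, sigma)
print(sp.simplify(total - decomposed))         # 0: the decomposition is exact
```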

3.10. Curvature and Commutation of Derivatives

A ubiquitous consideration in mathematics is to determine when two operations commute. In the setting of tensor calculus, this often manifests itself in determining the commutativity (or lack thereof) of two covariant derivatives. Here, “covariant derivatives” may refer to both linear covariant derivatives and the tangent map operator (see (3)). This unified categorization of derivatives will now be leveraged to show that certain fiber bundles are flat (in a sense analogous to the vanishing of a curvature endomorphism) with respect to particular covariant derivatives. This reduces the work often done showing commutativity of derivatives in the derivation of the first variation of a function in the calculus of variations to the simple statement that a particular tensor field is symmetric, which comes as a corollary to the aforementioned flatness.
In this section, the undecorated symbol ∇ may denote either the nonlinear tangent map operator or a linear covariant derivative, depending on context. This eases the expression of repeated covariant derivatives, such as the covariant Hessian of a section (see below), and is an example of telescoping notation as discussed in Section 3.3.
If π : E M defines a smooth [fiber] bundle whose space of sections Γ E has two repeated covariant derivatives defined, if σ Γ E , and if T M is a symmetric linear covariant derivative (meaning X Y Y X = X , Y for X , Y Γ T M ), then the tensor contraction
2 σ : X M Y Y M X
is an expression measuring the non-commutativity of the X and Y derivatives of σ . The quantity 2 σ will be called the covariant Hessian of σ , because it generalizes the Hessian of elementary calculus; it contains only second-derivative information, and in the special case seen below, it is symmetric in the argument components. It should be noted that if F M is the vector bundle such that σ Γ F M T * M , then 2 σ Γ F M T * M M T * M . Intentionally leaving the ∇ and · symbols undecorated in preference of contextual interpretation, unwinding the expression above gives
2 σ : X M Y Y M X = Y σ · X X σ · Y = Y X σ σ · Y X X Y σ + σ · X Y = X Y σ + Y X σ + σ · X , Y = X Y σ + Y X σ + X , Y σ ,
which is syntactically identical to the common definition for the [Riemannian] curvature endomorphism R X , Y σ . In the traditional setting, where ∇ E is a linear covariant derivative on the vector bundle E, the curvature endomorphism takes the form of a tensor field R E Γ E M E * M T * M M T * M . In this setting, however, because ∇ E may be nonlinear (for example, ∇ M S when M and S are manifolds), such a tensorial formulation does not generally exist. Instead,
R E X , Y : = X Y E + Y X E + X , Y E
defines a second-order covariant differential operator (“covariant” meaning tensorial in the X and Y components). Put differently,
R E X , Y σ = 2 σ : X M Y Y M X ,
which will be called the (possibly nonlinear) curvature operator, which in particular measures the non-commutativity of the X and Y derivatives of σ . If R E is identically zero, then the bundle E is said to be flat with respect to the relevant connections/covariant derivatives.
There are two particularly important instances of flat bundles. The first is the trivial line bundle defined by π R S (whose space of smooth sections, as discussed in Section 3.4, is naturally identified with C S , R ; S is a smooth manifold). In this case, | S R f Γ T * S , and
2 f | T * S | S R f Γ T * S S T * S
is the object referred to in most literature as the covariant Hessian of f. Here, R S R X , Y f is a real-valued function on S.
Proposition 17
(Symmetry of covariant Hessian on functions). Let S be a smooth manifold and let T S be a symmetric covariant derivative. If f C S , R , then 2 f Γ T * S S T * S is a symmetric tensor field (i.e., it has a 1 2 symmetry). Here, the covariant derivative on C S , R is S R as defined above.
Proof. 
Let X , Y Γ T S . Recall that f d f Γ T * S . Then
2 f : X S Y Y S X = Y f · X X f · Y = Y f · X f · Y X X f · Y + f · X Y = f · X · Y f · Y · X + f · X , Y ( by symmetry of   T S ) = f · X , Y + f · X , Y ( by symmetry of   X , Y ) = 0 .
Because X S Y is pointwise-arbitrary in T S S T S , this shows that 2 f is symmetric. Equivalently stated, R S R is identically zero, and therefore the relevant bundle is flat. □
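As a coordinate sanity check of this proposition (using the round 2-sphere, whose Christoffel symbols are quoted without derivation, an arbitrary test function, and the standard coordinate formula (∇²f)_{ij} = ∂_i∂_j f − Γ^k_{ij} ∂_k f, which is an assumption of this sketch rather than a formula taken from the text), the following sympy fragment verifies that the covariant Hessian matrix is symmetric.
```python
import sympy as sp

u, v = sp.symbols('u v')
coords = (u, v)

# Christoffel symbols of the round 2-sphere, metric du^2 + sin(u)^2 dv^2.
G = [[[0, 0], [0, 0]], [[0, 0], [0, 0]]]
G[0][1][1] = -sp.sin(u) * sp.cos(u)                  # Gamma^u_{vv}
G[1][0][1] = G[1][1][0] = sp.cos(u) / sp.sin(u)      # Gamma^v_{uv} = Gamma^v_{vu}

f = sp.sin(u) * sp.cos(v) + u * v                    # an arbitrary smooth function

# Covariant Hessian in coordinates: (nabla^2 f)_{ij} = d_i d_j f - Gamma^k_{ij} d_k f.
hess = sp.Matrix(2, 2, lambda i, j:
                 sp.diff(f, coords[i], coords[j])
                 - sum(G[k][i][j] * sp.diff(f, coords[k]) for k in range(2)))

print(sp.simplify(hess - hess.T))                    # zero matrix: the Hessian is symmetric
```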
The second important case involves the nonlinear covariant derivative M S on C M , S . Here, if ϕ C M , S , then
2 ϕ | ϕ * T S M T * M M S ϕ Γ ϕ * T S M T * M M T * M ,
so R M S X , Y ϕ Γ ϕ * T S .
Proposition 18
(Symmetry of covariant Hessian on maps). Let M and S be smooth manifolds and let T M and T S be symmetric covariant derivatives. If ϕ C M , S , then 2 ϕ Γ ϕ * T S M T * M M T * M is a tensor field which is symmetric in the two T * M components (i.e., it has a 2 3 symmetry). Here, the covariant derivative on C M , S is M S as defined above.
Proof. 
Let X , Y Γ T M and f C S , R , so that ϕ * f Γ ϕ * T S . Then
ϕ * f · ϕ * T S R M S X , Y ϕ = ϕ * f · X Y ϕ + Y X ϕ + X , Y ϕ = X ϕ * f · ϕ · Y + X ϕ * f · ϕ · Y + Y ϕ * f · ϕ · X Y ϕ * f · ϕ · X + ϕ * f · ϕ · X , Y = X ϕ * f · Y + Y ϕ * f · X + ϕ * f · X , Y + ϕ * 2 f · ϕ · X · ϕ · Y ϕ * 2 f · ϕ · Y · ϕ · X = ϕ * f · Y · X + ϕ * f · X · Y + ϕ * f · X , Y ϕ * 2 f : ϕ · X M ϕ · Y ϕ · Y M ϕ · X .
By definition, ϕ * f · Y · X + ϕ * f · X · Y = ϕ * f · X , Y , which cancels out the other term. By (17), 2 f is symmetric, so the final term is zero. Because ϕ * f is pointwise-arbitrary in ϕ * T * S and X and Y are pointwise-arbitrary in T M , this shows that R M S is identically zero, so the bundle defined by π M S × M : S × M M , whose space of sections is identified with C M , S , is flat, and therefore 2 ϕ is symmetric in its two T * M components. □
The construction used in (16) can be applied to nonlinear as well as linear covariant derivatives to considerable advantage. For example, if ψ : M × N L , where M , N , L are smooth manifolds and p M : = pr 1 M × N and p N : = pr 2 M × N , then define ψ , M Γ ψ * T L M × N p M * T * M and ψ , N Γ ψ * T L M × N p N * T * N by
ψ = M × N L ψ = ψ , M · p M * T M p M + ψ , N · p N * T N p N .
This gives a convenient way to express partial covariant derivatives, which will be used heavily in Section 4 in calculating the first and second variations of an energy functional. Note that in this parlance, ψ , M × N is the full tangent map ψ .
Defining second partial covariant derivatives ψ , M M , ψ , M N , ψ , N M and ψ , N N by
ψ , M = ψ , M M · p M + ψ , M N · p N   and   ψ , N = ψ , N M · p M + ψ , N N · p N ,
the symmetry of the covariant Hessian of ψ can be used to show the various symmetries of these second derivatives.
Proposition 19
(Symmetries of partial covariant derivatives). With ψ and its second partial covariant derivatives as above,
ψ , M M Γ ψ * T L M × N p M * T * M M × N p M * T * M
and ψ , N N (having analogous type) are 2 3 -symmetric (i.e., ψ , M M 2 3 = ψ , M M and ψ , N N 2 3 = ψ , N N ) and the mixed, second partial covariant derivatives
ψ , M N Γ ψ * T L M × N p M * T * N M × N p N * T * N and ψ , N M Γ ψ * T L M × N p N * T * N M × N p M * T * M
are mutually 2 3 -symmetric (i.e., ψ , M N = ψ , N M 2 3 ).
Proof. 
Let X , Y Γ T M T N . If T p N · X = 0 and T p M · Y = 0 , then
0 = 2 ψ : X M × N Y Y M × N X ( by   ( 18 ) ) = ψ , M M : p M · X M × N p M · Y p M · Y M × N p M · X + ψ , M N : p M · X M × N p N · Y p M · Y M × N p N · X + ψ , N M : p N · X M × N p M · Y p N · Y M × N p M · X + ψ , N N : p N · X M × N p N · Y p N · Y M × N p N · X = ψ , M N : p M · X M × N p N · Y p N · Y M × N p M · X = ψ , M N ψ , N M 2 3 : p M · X M × N p N · Y .
Because p M · X and p N · Y are pointwise-arbitrary in p M * T M and p N * T N respectively, this implies that ψ , M N = ψ , N M 2 3 .
Analogous calculations (setting p M · X = 0 and p M · Y = 0 and then separately setting p N · X = 0 and p N · Y = 0 ) show that ψ , M M = ψ , M M 2 3 and ψ , N N = ψ , N N 2 3 . □
There are two final results regarding the second covariant derivative that will be especially useful in the calculation of the first and second variations of an energy functional (see (1) and (3)).
Proposition 20
(Chain rule for covariant Hessian). Let π : E N define a bundle having a first and second covariant derivative (i.e., a section of E can be covariantly differentiated twice). If ϕ : M N and e Γ E , then
2 ϕ * e = ϕ * 2 e : ϕ * T N ϕ M ϕ + ϕ * e · ϕ * T N ϕ .
Proof. 
Let X Γ T M . Then
2 ϕ * e · X = X ϕ * E ϕ * e = X ϕ * E e · ϕ * T N ϕ = X ϕ * e · ϕ * T N ϕ + ϕ * e · ϕ * T N X ϕ = ϕ * 2 e · ϕ * T N ϕ · X · ϕ * T N ϕ + ϕ * e · ϕ * T N X ϕ = ϕ * 2 e : ϕ * T N ϕ M ϕ + ϕ * e · ϕ * T N ϕ · X .
Because X is pointwise-arbitrary in T M , this establishes the desired equality. □
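For orientation, in the flat case (Euclidean domains and the trivial bundle with its standard flat connection) Proposition 20 reduces to the familiar second-order chain rule of elementary calculus, recorded here as a hedged special case: for smooth maps ϕ : R^m → R^n and e : R^n → R^k and vectors X, Y ∈ R^m,
$$
D^2(e \circ \phi)\,(X, Y) \;=\; D^2 e\,\bigl(D\phi \cdot X,\; D\phi \cdot Y\bigr) \;+\; De \cdot D^2\phi\,(X, Y).
$$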
Proposition 21
(Pullback curvature endomorphism). Let π : E N define a vector bundle having first and second covariant derivatives. If ϕ : M N , then
R ϕ * T N = ϕ * R T N : ϕ * T N ϕ M ϕ .
Proof. 
Note that R ϕ * T N Γ ϕ * T N M ϕ * T * N M T * M M T * M . Let X , Y Γ T M and let Z Γ T N , so that ϕ * Z Γ ϕ * T N . Then
I ϕ * T N M ϕ * Z · ϕ * T N M ϕ * T * N R ϕ * T N : T M X M Y = R ϕ * T N X , Y ϕ * Z = 2 ϕ * Z : T M X M Y = ϕ * 2 Z : ϕ * T N ϕ M ϕ : T M X M Y + ϕ * Z · ϕ * T N ϕ : T M X M Y ( by   ( 20 ) ) = ϕ * 2 Z : ϕ * T N ϕ · X M ϕ · Y + ϕ * Z · ϕ * T N 0 ( by symmetry of   ϕ ) = ϕ * I T N N Z · T N N T * N R T N : ϕ * T N ϕ · X M ϕ · Y = I ϕ * T N M ϕ * Z · ϕ * T N M ϕ * T * N ϕ * R T N : ϕ * T N ϕ M ϕ : T M X M Y ,
and because X , Y and ϕ * Z are pointwise-arbitrary in their respective spaces, this establishes the desired equality. □
A common operation is to evaluate a covariant derivative along a single tangent vector. One can express a single tangent vector as a section of a particular pullback bundle, the map being the constant map evaluating to the basepoint of the vector. This allows the richly typed formalism of pullback bundles to be used to evaluate derivatives at a point, particularly noting that this safely deals with the overloading of the natural pairing operator · (see Section 3.5).
Proposition 22
(Evaluation commutes with non-involved derivatives). Let A and B be smooth manifolds and let σ Γ E for some smooth bundle E A × B having a covariant derivative E . If b B and the map z : A A × B , a a , b represents evaluation at b, then
( z * σ ) , A = z * ( σ , A ) ,
i.e., evaluation in B commutes with a derivative along A.
Proof. 
Let X Γ T A , and let p A : = pr 1 A × B and p B : = pr 2 A × B . Then
z * σ , A · X = z * E z * σ · X = z * E σ · z * T A T B z · T A X = z * E σ · T A T B p A * X = z * σ , A · p A * T A p A · T A T B p A * X ( since   p B · T A T B p A * X = 0 ) = z * σ , A · T A X ( since   z * p A * X = p A z * X = Id A * X = X ) ,
and because X is pointwise-arbitrary in T A , this implies that ( z * σ ) , A = z * ( σ , A ) as desired. □
Proposition 23.
Let A , B , C be smooth manifolds, let ψ : A × B C be smooth, let p A : = pr 1 A × B and p B : = pr 2 A × B , and let X , Y Γ T A T B . If p B · X = 0 and p A · Y = 0 , then
ψ , A B : p A · X A × B p B · Y = Y ψ * T C X A × B C ψ .
Proof. 
The conditions p B · X = 0 and p A · Y = 0 imply that Y X = 0 in the product covariant derivative. Then since p A × A × B p B = Id A × B , it follows that
p A A × B p B = p A A × B p B = p A × A × B p B = Id T A T B = I T A T B = 0 ,
where I T A T B Γ T A T B T A T B * denotes the identity tensor field on T A T B , and therefore
Y p A · X = Y p A · X + p A · Y X = 0 · X + p A · 0 = 0 .
For the main calculation,
ψ , A B : p A · X A × B p B · Y = ψ , A B · p B * T B p B · T A T B Y · p A * T A p A · T A T B X = Y ψ × A × B p A ψ , A · p A * T A p A · T A T B X ( since   p A · Y = 0 ) = Y ψ ψ , A · p A · X ψ , A · Y p A p A · X ( by reverse product rule ) = Y ψ * T C X A × B C ψ ( since   Y p A · X = 0 ) ,
as desired. □

4. Riemannian Calculus of Variations

The use of the Calculus of Variations in the Riemannian setting to develop the geodesic equations and to study harmonic maps is quite well-established. A more general formulation is required for more specific applications, such as continuum mechanics in Riemannian manifolds. The tools developed in Section 3 will now be used to formulate the first and second variations and Euler–Lagrange equations of an energy functional corresponding to a first-order Lagrangian. In particular, the bundle decomposition discussed in Section 3.9 will be needed to employ the standard integration-by-parts trick seen in the formulation of the analogous parts of the elementary Calculus of Variations. The seemingly heavy and pedantic formalism built up thus far will now show its usefulness.
In this part, let M , g and S , h be Riemannian manifolds with M compact. Calculations will be done formally in the space C M , S , noting that its completion under various norms will give various Sobolev spaces of maps from M to S, which are ultimately the spaces which must be considered when finding critical points of the relevant energy functionals. See [17,18] for details on the analytical issues. Let d V g denote the Riemannian volume form corresponding to metric g, and let d V ¯ g be the induced volume form on the boundary ∂ M . Let ι : ∂ M → M be the inclusion, and let ν Γ ι * T * M be the unit normal covector field on ∂ M . Let E : = T S S × M T * M and π : = π S T S S × M π M T * M , making π : E S × M a vector bundle.
The energy functionals in this section will be assumed to have the form
L : C M , S → R , ϕ ↦ ∫ M L ∘ ∇ ϕ d V g ,
where L : E → R , referred to as the Lagrangian of the functional, is smooth. Here, ∇ ϕ could be understood to take values either in E = T S S × M T * M or ϕ * T S M T * M . In the former case, the composition L ∘ ∇ ϕ is literal, while in the latter case, there is an implicit conversion from ϕ * T S M T * M to T S S × M T * M via a fiber projection bundle morphism (see (1)). Either way, L ∘ ∇ ϕ : M → R . Let ∇ T S and ∇ T M denote the respective Levi-Civita connections, which induce a covariant derivative ∇ E on E (see (11)). Define the connection map v Γ π * E E T * E using ∇ E as in (3). For convenience, the S × M subscript will be suppressed on the “full” tensor product defining E from here forward.
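A guiding special case, recorded here only for orientation and not analyzed further, is the Dirichlet Lagrangian built from the metrics h and g (writing the functional as $\mathcal{L}$ and the Lagrangian as $L$ to keep the two apart):
$$
L(A) \;:=\; \tfrac{1}{2}\,\langle A, A\rangle_{h \otimes g^{-1}}, \qquad A \in E = TS \otimes T^{*}M,
$$
so that $\mathcal{L}(\phi)$ is half the integrated squared norm of the tangent map of ϕ, and its critical points are the harmonic maps mentioned at the start of this section.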

4.1. Critical Points and Variations

One of the most pertinent properties of an energy functional is its set of critical points. Often, the solution to a problem in physics will take the form of minimizing a particular energy functional. Lagrangian mechanics is the quintessential example of this. This section will deal with some of the main considerations regarding such critical points.
Because the domain of a [real-valued] functional L may be a nonlinear space, the relevant first derivative is the [real-valued] differential d L , which is paired with the linearized variation of a map ϕ C M , S . In particular, a one-parameter variation of ϕ is a smooth map Φ : M × I S , where the I component is the variational parameter. Letting i denote the standard coordinate on I, the linearized variation is then δ i Φ : M T S , recalling that δ i : = ∂ i | i = 0 . Because π S T S δ i Φ = ϕ , it follows that δ i Φ Γ ϕ * T S , i.e., δ i Φ is a vector field along ϕ . The object δ i Φ will be called a linearized variation. Call the elements of Γ ϕ * T S linear variations.
Proposition 24
(Each linear variation is a linearized variation). Let exp : U S denote the exponential map associated to T S , where U T S is a neighborhood of the zero bundle in T S on which exp is defined, and let λ : T S × R T S , s , ϵ ϵ s denote the scalar multiplication structure on T S . If A Γ ϕ * T S and if Φ : U S is defined by Φ : = exp λ A × Id I U , then δ i Φ = A . In other words, every vector field over ϕ is realized as the linearization of a one-parameter variation of ϕ.
Proof. 
The map Φ is well-defined and smooth by construction. Let p M . Then
δ i Φ p = δ i Φ p , i = δ i exp λ A × Id I p , i = exp · δ i λ A p , i = exp · δ i i A p = exp · ι V E π * E Z π * E · A p = A p ,
where Z π * E denotes the zero subbundle of π * E . The last equality follows from a naturality property of the exponential map ([5], p. 523). □
Thus each linear variation is a linearized variation, establishing a natural identification of T ϕ C M , S with Γ ϕ * T S , which will be useful when calculating the differential of a functional on C M , S . In fact, the exponential map construction in (24) is a way to construct charts for the infinite dimensional manifold C M , S ([18], Theorem 5.2).
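The pointwise content of this construction can be checked numerically. The sketch below (a hedged example on the unit round sphere S² ⊂ R³, using its explicit exponential map; the point p and tangent vector A are arbitrary choices) builds the one-parameter variation Φ(ε) = exp_p(εA) of Proposition 24 and confirms that its linearization at ε = 0 recovers A.
```python
import numpy as np

def exp_sphere(p, v):
    """Exponential map of the unit round sphere S^2 in R^3 at p applied to v in T_p S^2."""
    norm = np.linalg.norm(v)
    if norm < 1e-14:
        return p
    return np.cos(norm) * p + np.sin(norm) * v / norm

# A point p, a tangent vector A at p (A . p = 0), and the one-parameter variation
# Phi(eps) = exp_p(eps * A), as in the exponential-map construction above.
p = np.array([1.0, 0.0, 0.0])
A = np.array([0.0, 0.7, -0.2])          # tangent to the sphere at p

eps = 1e-6
linearized = (exp_sphere(p, eps * A) - exp_sphere(p, -eps * A)) / (2 * eps)
print(np.allclose(linearized, A, atol=1e-8))   # True: the linearized variation is A
```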

4.2. First Variation

This section is devoted to calculating the first variation of the previously defined energy functional. Here is where the full richness of the type system of the objects developed earlier in the paper will really show its power (and arguably, necessity). While the type-specifying notation may appear overly decorated and pedantic, subtle usage errors can be detected and avoided by keeping track of the myriad of types of the relevant objects through the sub/superscripts on covariant derivatives and natural pairings; extremely complex constructions can be made and navigated without much trouble. By contrast, performing the ensuing calculations in coordinate trivializations would result in an intractable proliferation of Christoffel symbols and indexed expressions which would prove difficult to read and would be highly prone to error.
Because the Lagrangian L : E R is defined on a vector bundle π : E S × M over the product space S × M , the decomposition in (3) can be slightly refined. The projection π can be decomposed into the factors π S : = pr S S × M π and π M : = pr M S × M π , so that π = π S × E π M . Then h = π = π S E π M . Let
σ : = π S Γ π S * T S E T * E and μ : = π M Γ π M * T M E T * E .
The letters sigma and mu have been chosen to reflect the fact that L , σ Γ π S * T * S and L , μ Γ π M * T * M give the “S component” (spatial) and “M component” (material) of the derivative E R L Γ T * E . The connection map v will be retained as is, giving L , v Γ π * E * , the “E component” (fiber) of E R L . See (8) for a discussion of how the quantities L , μ , L , σ , L , v generalize the analogous structures in the elementary treatment of the calculus of variations.
Because a one-parameter variation of ϕ C M , S has the form Φ : M × I S but the energy functional L involves only the M derivative of its argument, the partial tangent map must be used here. For the purposes of calculating the first and second variations, L must be written as
L ϕ : = ∫ M L ∘ ϕ , M d V g .
Theorem 1
(First variation of L ). Let L , L, σ, μ, v and ν all be defined as above. If ϕ C M , S and A Γ ϕ * T S , then
d L ϕ · A = ∫ M A · ϕ * T * S ϕ , M * L , σ − div M ϕ , M * L , v d V g + ∫ ∂ M A · ϕ * T * S ϕ , M * L , v · T * M ν d V ¯ g .
The expression above is often called the first variation of L . A type analysis here gives ϕ , M * L , σ Γ ϕ * T * S and ϕ , M * L , v Γ ϕ * T * S M T M . Recall that because the domain of ϕ is M, it follows that ∇ ϕ coincides with ϕ , M .
Proof. 
Supporting calculations will be made below in lemmas. Let Φ : M × I S be as in (24), so that δ i Φ = A . For tidiness, let L , σ : = ϕ , M * L , σ and L , v : = ϕ , M * L , v . Then
d L ϕ · A = d L ϕ · δ i Φ = δ i L Φ = M δ i L Φ , M d V g = M L , σ · ϕ * T S A + L , v · ϕ * T S M T * M ϕ * T S A d V g ( by   ( 4 ) ) = M A · ϕ * T * S L , σ div M L , V + div M A · ϕ * T * S L , V d V g ( by   ( 4 ) ) = M A · ϕ * T * S L , σ div M L , V d V g + M A · ϕ * T * S L , V · T * M ν d V ¯ g ( divergence theorem ) ,
as desired.
As for the types of ϕ , M * L , σ and ϕ , M * L , v , the contravariance of bundle pullback allows significant simplification. Because L , σ Γ π S * T * S and L , v Γ π * E * ,
ϕ , M * L , σ Γ ϕ , M * π S * T * S = Γ π S ϕ , M * T * S = Γ ϕ * T * S and ϕ , M * L , v Γ ϕ , M * π * E = Γ π ϕ , M * T * S T M = Γ ϕ * T * S M T M .
The supporting calculations follow. Define z : M M × I , m m , 0 for purposes of evaluation at i = 0 via precomposition as in (22). Then δ i is a section of a pullback bundle; δ i = z * i Γ z * T M T I . It should be noted that Φ ∘ z = ϕ by definition, and that z * ( Φ , M ) = ( Φ ∘ z ) , M = ϕ , M by (22). □
Lemma 4.
Let L, Φ, A, σ, and v be as in Theorem 1. The variational derivative of L Φ , M decomposes in terms of the partial covariant derivatives L , σ and L , v and the linearized variation A;
δ i L Φ , M = ϕ , M * L , σ · ϕ * T S δ i Φ + ϕ , M * L , v · ϕ × M Id M * E ϕ δ i Φ .
The integration-by-parts trick as in the derivation of the first variation in elementary calculus of variations generalizes to the covariant setting;
L , σ · ϕ * T S A + L , v · ϕ * T S M T * M ϕ A = A · ϕ * T * S L , σ div M L , v + div M A · ϕ * T * S L , v .
Proof. 
A wonderful string of equalities follows.
δ i L Φ , M = z * M × I R L Φ , M · z * T M T I δ i ( here , δ i = z * i ) = z * Φ , M * E R L · z * Φ , M * T E z * M × I E Φ , M · z * T M T I δ i ( chain   rule ) = ϕ , M * L , σ · π S * T S σ + L , μ · π M * T M μ + L , v · π * E v · ϕ , M * T E δ i Φ , M ( by   ( 16 )   and because   Φ , M z = ϕ , M ) = ϕ , M * L , σ · ϕ , M * π S * T S ϕ , M * σ · ϕ , M * T E δ i Φ , M + ϕ , M * L , μ · ϕ , M * π M * T M ϕ , M * μ · ϕ , M * T E δ i Φ , M + ϕ , M * L , v · ϕ , M * π * E ϕ , M * v · ϕ , M * T E δ i Φ , M = ϕ , M * L , σ · ϕ * T S δ i Φ + ϕ , M * L , v · ϕ × M Id M * E ϕ δ i Φ ( by   ( 5 ) )
Note that by (8), Φ , M * π S * T S = π S Φ , M * T S = Φ * T S , Φ , M * π M * T M = π M Φ , M * T M = pr M M × I * T M and Φ , M * π * E = π Φ , M * E = Φ × M × I pr M M × I * E . Replacing δ i Φ with A gives
δ i L Φ , M = L , σ · ϕ * T S A + L , v · ϕ * T S M T * M ϕ A ,
establishing the first equality.
For the second,
L , σ · ϕ * T S A + L , v · ϕ * T S M T * M ϕ A = L , σ · ϕ * T S A + tr T * M L , v · ϕ * T S ϕ A ( tracing   T M   separately ) = A · ϕ * T * S L , σ + tr T * M L , v · ϕ * T S A ϕ × M Id M L , v · ϕ * T S A ( reverse   product   rule ) = A · ϕ * T * S L , σ A · ϕ * T * S tr T * M ϕ × M Id M L , v + tr T * M A · ϕ * T * S L , v ( · ϕ * T S commutes   with   tr T * M ) = A · ϕ * T * S L , σ div M L , v + div M A · ϕ * T * S L , v ( definition   of   div M ) .
Note that L , v Γ ϕ * T * S M T M , so div M L , v Γ ϕ * T * S and A · ϕ * T * S L , v Γ T M . □
Lemma 5.
The variation δ i Φ , M decomposes as follows.
ϕ , M * σ · ϕ , M * T E δ i Φ , M = δ i Φ Γ ϕ * T S , ϕ , M * μ · ϕ , M * T E δ i Φ , M = 0 Γ T M , ϕ , M * v · ϕ , M * T E δ i Φ , M = ϕ * T S δ i Φ Γ ϕ * T S M T * M .
Proof. 
This calculation determines the σ component of δ i Φ , M .
ϕ , M * σ · ϕ , M * T E δ i Φ , M = ϕ , M * π S · ϕ , M * T E δ i Φ , M = δ i π S Φ , M = δ i pr S S × M π Φ , M = δ i Φ Γ z * Φ * T S Γ ϕ * T S .
This calculation determines the μ component of δ i Φ , M .
ϕ , M * μ · ϕ , M * T E δ i Φ , M = ϕ , M * π M · ϕ , M * T E δ i Φ , M = δ i π M Φ , M = δ i pr M S × M π Φ , M = δ i pr M M × I = 0 Γ z * pr M M × I * T M Γ T M .
The last equality follows from the fact that pr M M × I does not depend on the i coordinate.
This calculation determines the v component of δ i Φ , M . Let p M : = pr M M × I and p I : = pr I M × I . The left-hand side of the third equality claimed in the lemma will be examined before evaluating at i = 0 ;
Φ , M * v · Φ , M * T E i Φ , M = p I * i π Φ , M * E Φ , M = p I * i Φ × M × I p M * T S T * M Φ , M Γ Φ * T S M × I p M * T * M .
Let Y Γ T M , noting that p M * Y Γ p M * T M . Then
Φ , M * v · Φ , M * T E i Φ , M · p M * T M p M * Y = p I * i Φ * T S M × I p M * T * M Φ , M · p M * T M p M * Y = Φ , M I · p I * T I p I * i · p M * T M p M * Y = Φ , I M · p M * T M p M * Y · p I * T I p I * i ( by   ( 19 ) ) = Φ , I · p I * T I p I * i , M · p M * T M p M * Y Φ , I · p I * T I p I * i , M · p M * T M p M * Y = i Φ , M · p M * T M p M * Y ( since   p I * i does   not   depend   on   M ) .
Recall that Id M = p M ∘ z and that the pullback of bundles is contravariant. Then evaluating at i = 0 via pullback by z renders
ϕ , M * v · ϕ , M * T E δ i Φ , M · T M Y = Φ , M z * v · Φ , M z * T E z * i Φ , M · p M z * T M p M z * Y = z * Φ , M * v · z * Φ , M * T E z * i Φ , M · z * p M * T M z * p M * Y = z * Φ , M * v · Φ , M * T E i Φ , M · p M * T M p M * Y = z * i Φ , M · p M * T M p M * Y = z * i Φ , M · z * p M * T M z * p M * Y = z * i Φ , M · p M z * T M p M z * Y ( by   ( 22 ) ) = δ i Φ , M · T M Y = ϕ * T S δ i Φ · T M Y
The last equality is because δ i Φ ∈ Γ ( ϕ * T S ) , which is a bundle over M, and therefore δ i Φ , M is the total covariant derivative. Because Y is pointwise-arbitrary in T M , this implies that ϕ , M * v · ϕ , M * T E δ i Φ , M = ∇ ϕ * T S δ i Φ , i.e., the variational derivative δ i commutes with the first material derivative, just as in the analogous situation in elementary calculus of variations. □
Corollary 4
(Euler–Lagrange equations). If ϕ C M , S is a critical point of L (i.e., if d L ϕ · A = 0 for all A Γ ϕ * T S ), then
ϕ , M * L , σ − div M ϕ , M * L , v = 0 on M , ϕ , M * L , v · T M ν = 0 on ∂ M .
These are called the Euler–Lagrange equations for the energy functional L . Recall that because the domain of ϕ is M, ∇ ϕ = ϕ , M .
Proof. 
This follows trivially from (1) and the Fundamental Lemma of the Calculus of Variations ([15], p. 16). □
It should be noted that the boundary Euler–Lagrange equation is due to the fact that the admissible variations are entirely unrestricted. If, for example, the class of maps being considered had fixed boundary data, then any variation would vanish at the boundary, and there would be no boundary Euler–Lagrange equation; this is typically how geodesics and harmonic maps are formulated.
Remark 8
(Analogs in elementary calculus of variations). The quantities L , μ , L , σ , L , v generalize the quantities L x , L z , L p respectively of the elementary treatment of the calculus of variations for the energy functional
f : U → R n ↦ ∫ U L ( x , f ( x ) , D f ( x ) ) d x ,
where U ⊆ R m is compact and U × R n × R n × m ∋ ( x , z , p ) ↦ L ( x , z , p ) is the Lagrangian. Here, L x : U × R n × R n × m → R m , L z : U × R n × R n × m → R n , and L p : U × R n × R n × m → R n × m decompose the total derivative d L and are defined by the relation
d L ( x , z , p ) · ( u , v , w ) = L x ( x , z , p ) · u + L z ( x , z , p ) · v + L p ( x , z , p ) : w
for u ∈ R m , v ∈ R n , and w ∈ R n × m . The Euler–Lagrange equation in this setting is
( L z − div U L p ) ( x , f ( x ) , D f ( x ) ) = 0 for x ∈ U ,
noting that the left-hand side of the equation takes values in R n .
In most situations involving simpler calculations, it is desirable and acceptable to dispense with the highly decorated notation and use trimmed-down, context-dependent notation, leaving off type-specifying sub/superscripts when they are clear from context.
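As an illustration of the elementary Euler–Lagrange equation quoted in Remark 8, here is a short symbolic derivation for a hypothetical one-dimensional Lagrangian L(x, z, p) = p²/2 − V(z); the potential V and all names are placeholders chosen for the example, not objects from the paper.

```python
import sympy as sp

x, z, p = sp.symbols('x z p')
V = sp.Function('V')          # an arbitrary potential, kept symbolic
f = sp.Function('f')(x)

L = p**2/2 - V(z)             # first-order Lagrangian L(x, z, p)
Lz = sp.diff(L, z).subs({z: f, p: f.diff(x)})
Lp = sp.diff(L, p).subs({z: f, p: f.diff(x)})

# L_z - div_U L_p = 0; here U is 1-dimensional, so the divergence is d/dx.
euler_lagrange = sp.Eq(Lz - Lp.diff(x), 0)
print(euler_lagrange)         # equivalent to f''(x) = -V'(f(x))
```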
Proposition 25
(Conserved quantity). If M is a real interval, ϕ C M , S satisfies the Euler–Lagrange equation, and L , μ = 0 , then
H : = ϕ * L , v · ϕ * T S ⊗ M T * M ∇ ϕ − ϕ * L ∈ C ∞ ( M , R )
is constant. If L is kinetic minus potential energy, then H is kinetic plus potential energy (the total energy), and is referred to as the Hamiltonian.
Proof. 
Let t be the standard real coordinate. Note that because M is a real interval, it follows that ∇ ϕ = ϕ ′ ⊗ M d t . Terms appearing in the derivative of H can be simplified as follows. Note the repeated derivatives; ϕ : M ϕ * T S M T * M but ϕ : M ϕ * T ϕ * T S M T * M lands in a higher tangent space.
ϕ * σ · ϕ · d d t = ϕ * π S · ϕ · d d t = d d t π S ϕ = ϕ , ϕ * v · ϕ · d d t = ϕ * v · d d t ϕ = d d t ϕ , d d t ϕ * L = ϕ * L · ϕ · d d t = ϕ * L , σ · ϕ + ϕ * L , v · d d t ϕ , d d t ϕ * L , v : ϕ = d d t ϕ * L , v : ϕ M d t + ϕ * L , v : d d t ϕ = d d t ϕ * L , v · d t · ϕ + ϕ * L , v : d d t ϕ .
Again, because M is a real interval, the divergence is just the derivative, so the Euler–Lagrange equation is
0 = ϕ * L , σ − div M ϕ * L , v = ϕ * L , σ − ∇ ( ϕ * L , v ) : ( d t ⊗ M d d t ) = ϕ * L , σ − d d t ( ϕ * L , v · d t ) ,
and therefore d d t ϕ * L , v · d t = ϕ * L , σ . Thus
d d t H = d d t ( ϕ * L , v : ∇ ϕ − ϕ * L ) = ( d d t ( ϕ * L , v · d t ) − ϕ * L , σ ) · ϕ ′
which is zero because ϕ satisfies the Euler–Lagrange equation. This shows that H is constant along solutions of the Euler–Lagrange equation, and is therefore a conserved quantity. It should be noted that this proof relies on the fact that the divergence takes a particularly simple form when the domain M is a real interval; the result does not necessarily hold for a general choice of M. □
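A quick numerical illustration of the conserved quantity (not taken from the paper): for a Lagrangian of the form kinetic minus potential on a real interval, the Hamiltonian H = L_{,v} · ϕ′ − L is kinetic plus potential, and it should remain constant along solutions of the Euler–Lagrange equation q″ = −V′(q). The pendulum-like potential, the step size, and the velocity-Verlet integrator below are arbitrary choices for the demonstration.

```python
import math

def V(q):  return 1.0 - math.cos(q)     # a sample potential (pendulum-like)
def dV(q): return math.sin(q)

q, v, dt = 1.2, 0.0, 1e-3
H0 = 0.5*v*v + V(q)                     # H = kinetic + potential

for _ in range(200_000):                # velocity-Verlet integration of q'' = -V'(q)
    v_half = v + 0.5*dt*(-dV(q))
    q = q + dt*v_half
    v = v_half + 0.5*dt*(-dV(q))

H = 0.5*v*v + V(q)
print(f"H(0) = {H0:.8f}   H(T) = {H:.8f}   drift = {abs(H - H0):.2e}")
```

The observed drift stays tiny over long integration times, consistent with H being a conserved quantity for solutions of the Euler–Lagrange equation on a real interval.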
Example 3
(Harmonic maps). Define a metric
k ∈ Γ E * S × M E * ≅ Γ T S T * M S × M T S T * M
in a manner analogous to that in (2);
k : = h g 1 .
To clarify, h ⊗ g − 1 ∈ Γ T * S ⊗ S T * S ⊗ T M ⊗ M T M , so permuting the middle two components (as in the definition of h ⊗ g − 1 ) gives the correct type, including the necessary metric symmetry condition. If A ∈ E , then ‖ A ‖ k 2 is the quantity obtained by raising/lowering the indices of A and pairing it naturally with A. A useful fact is that ∇ k = 0 ; if u ⊗ v ∈ T S ⊗ T M , then permutation commutativity (2) and the product rule give
∇ u ⊗ v k = ∇ u ⊗ v ( h ⊗ g − 1 ) = ( ∇ u h ) ⊗ g − 1 + h ⊗ ( ∇ v g − 1 ) ,
which equals zero because h and g − 1 are parallel with respect to ∇ T S and ∇ T * M respectively.
With Lagrangian
L : E → R , A ↦ 1 2 ‖ A ‖ k 2
and energy functional
E ( ϕ ) : = ∫ M L ( ∇ ϕ ) d V g
( E ( ϕ ) is called the energy of ϕ), the resulting Euler–Lagrange equations can be written down after calculating L , σ and L , v . It is worthwhile to note that L is a quadratic form A ↦ A : 1 2 k : A on E, which will automatically imply that L , v ( A ) = A : k . However, the calculation showing this will be carried out for demonstration purposes.
Let A , B ∈ T S ⊗ T * M . Then ϵ ↦ A + ϵ B is a vertical variation of A, since h · δ ϵ ( A + ϵ B ) = 0 , so
L , v A : B = L , v A : v · δ ϵ A + ϵ B = δ ϵ L A + ϵ B = δ ϵ A + ϵ B : 1 2 π * k A + ϵ B : A + ϵ B .
The product rule gives three terms. The middle term is zero because π A + ϵ B = π A , and therefore does not depend on ϵ. The basepoint evaluation notation for π * k A will be suppressed for brevity (see Section 3.5). Thus
L , v A : B = B : 1 2 k : A + A : 1 2 k : B = A : k : B ,
where the last equality results from the symmetry of k. By the nondegeneracy of the natural pairing on T S T * M (which is denoted here by :), this implies that L , v A = A : k .
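The claim L , v ( A ) = A : k can be spot-checked at the level of a single fiber, where E is just a finite-dimensional vector space and k is a symmetric positive-definite bilinear form. The sketch below identifies the fiber with R^n, uses an arbitrary matrix K as a stand-in for k, and compares a central finite difference of L(A) = ½ A : k : A with A : k : B; all names and dimensions are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 6                                   # stand-in fiber dimension
M = rng.standard_normal((n, n))
K = M @ M.T + n*np.eye(n)               # symmetric positive-definite stand-in for k
A = rng.standard_normal(n)
B = rng.standard_normal(n)

L = lambda X: 0.5 * X @ K @ X           # L(A) = (1/2) A : k : A in this fiber

eps = 1e-6
fd = (L(A + eps*B) - L(A - eps*B)) / (2*eps)   # directional derivative of L at A toward B
assert abs(fd - A @ K @ B) < 1e-6               # matches A : k : B, i.e. (k : A) paired with B
```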
To calculate L , σ , it is sufficient (and can be easier) to calculate L , h , as h = π , π = π S × E π M , so h = σ E μ . Let A ϵ be a horizontal curve in E = T S T * M ; this means that v · d d ϵ A = 0 . Recall that v · d d ϵ A is defined by d d ϵ π A * E A . Then
L , h A · π * T S T M h · T E δ ϵ A = L , h · π * T S T M h + L , v · π * E v · T E δ ϵ A = L · T E δ ϵ A = δ ϵ L A = δ ϵ A : 1 2 A * π * k : A .
As before, the product rule gives three terms. Using the contravariance of bundle pullback, the middle term is
1 2 δ ϵ π A * E * S × M E * π A * k = 1 2 π A * E * S × M E * k · δ ϵ π A ,
which equals zero because k = 0 . Thus
L , h A · h · δ ϵ A = δ ϵ A : 1 2 k : A + A : 1 2 k : δ ϵ A ,
which equals zero because δ ϵ A = v · δ ϵ A = 0 . The quantity h · δ ϵ A can take any value in π * T S T M , showing that L , h = 0 . Finally, h = σ E μ implies that L , σ = 0 and L , μ = 0 . This can be understood from the fact that L depends only on the fiber values of A, and has no explicit dependence on the basepoint; this relies crucially on the fact that k = 0 .
Finally, the Euler–Lagrange equations can be written down. Recalling that the natural trace of a tensor (used in the divergence term in the Euler–Lagrange equation) is contraction with the appropriate identity tensor, let e i be a local frame for T M and let e i be its dual coframe, so that e i M e i is a local expression (It should be noted that while I T M is being written as the local expression e i M e i , no inherently local property is being used; this tensor decomposition is only used so that the product rule can be used in the following calculations in a clear way.) for I T M Γ T M M T * M . The type-subscripted notation will be minimized except to help clarify. On M:
0 = ϕ * L , σ div M ϕ * L , v = tr ϕ : k = e i ϕ : k · T * M e i = e i ϕ : k · e i ϕ : e i k · e i .
The second term vanishes because k = 0 . Unraveling the definition of k gives e i ϕ : k = ϕ * h · e i ϕ · g 1 . Contracting both sides of the above equation with ϕ * h 1 gives
0 = e i ϕ · g 1 · e i = e i ϕ · e i = tr g 2 ϕ Γ ϕ * T S .
The quantity tr g ∇ 2 ϕ is the g-trace of the covariant Hessian of ϕ and can rightfully be called the covariant Laplacian of ϕ and denoted by Δ g ϕ (this is also referred to as the tension field of ϕ in other literature ([14], p. 13), which is denoted τ ( ϕ ) ). Note that Δ g ϕ is a vector field along ϕ. This makes sense because ϕ is not necessarily a scalar function; it takes values in S. In the case S = R , Δ g ϕ is the ordinary covariant Laplacian on scalar functions.
A harmonic map is defined as a critical point of the energy functional
E ( ϕ ) : = ∫ M 1 2 ‖ ∇ ϕ ‖ k 2 d V M .
Assuming a fixed boundary (so that the variations vanish on the boundary) eliminates the boundary Euler–Lagrange equation; the remaining equation is
Δ g ϕ = 0 on the interior of M ,
which is the generalization of Laplace’s equation. Satisfying Laplace’s equation is a sufficient condition for a map to be a critical point of the energy functional. There is an abundance of literature concerning harmonic maps and the analysis thereof [14,15,19,20].
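In the simplest special case, S = R and M a flat square with the Euclidean metric, the harmonic map equation Δ g ϕ = 0 is the ordinary Laplace equation, and a discrete solution with fixed boundary data can be approximated by Jacobi iteration. The grid size, boundary values, and iteration count below are arbitrary; this is only a sketch of the flat scalar case, not the general Riemannian setting.

```python
import numpy as np

n = 64
phi = np.zeros((n, n))
phi[0, :] = 1.0                          # Dirichlet data: top edge held at 1, the rest at 0

for _ in range(5000):                    # Jacobi iteration toward the discrete harmonic map
    phi[1:-1, 1:-1] = 0.25*(phi[:-2, 1:-1] + phi[2:, 1:-1]
                            + phi[1:-1, :-2] + phi[1:-1, 2:])

residual = (phi[:-2, 1:-1] + phi[2:, 1:-1] + phi[1:-1, :-2] + phi[1:-1, 2:]
            - 4.0*phi[1:-1, 1:-1])
print("max |discrete Laplacian| on the interior:", np.abs(residual).max())
```

The residual shrinks toward zero as the iteration converges, reflecting that the limit is (discretely) harmonic in the interior with the prescribed boundary values.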
Example 4
(The geodesic equation). A fundamental problem in differential geometry is determining length-minimizing curves between given points. If M is a bounded, real interval and t denotes the standard real coordinate, then the length functional on curves ϕ : M → S is L ( ϕ ) : = ∫ M ‖ ϕ ′ ‖ h d t . A topological metric d : S × S → R on S can be defined as
d ( p , q ) : = inf { L ( ϕ ) ∣ ϕ joins p to q } .
It can be shown that the length functional L ( ϕ ) : = ∫ M ‖ ϕ ′ ‖ h d t and the energy functional E ( ϕ ) : = ∫ M 1 2 ‖ ϕ ′ ‖ h 2 d t have identical minimizers. Note that ϕ ′ ∈ Γ ( ϕ * T S ) . It is therefore sufficient to consider the analytically preferable energy functional.
In this case, the metric g on M is just scalar multiplication on R . Because M is one-dimensional and t is the standard real coordinate, d d t is a global, parallel orthonormal frame for T M , and the g-trace of ∇ 2 ϕ (i.e., Δ g ϕ ) has a single term. The Euler–Lagrange equation, on the interior of M, is
0 = Δ g ϕ = tr g 2 ϕ = ϕ : d d t , d d t = d d t ϕ · d d t = d d t ϕ · d d t ϕ · d d t d d t .
However, ∇ ϕ · d d t = ϕ ′ and ∇ d d t d d t = 0 , giving the geodesic equation
∇ d d t ϕ * T S ϕ ′ = 0 on the interior of M .
This is the covariant way to state that the acceleration of ϕ is identically zero. The geodesic equation is commonly notated as 0 = ∇ ϕ ′ ϕ ′ , though such notation is inaccurate because ϕ ′ is not a vector field on S, but a vector field along ϕ, and therefore use of the pullback covariant derivative ∇ ϕ * T S is correct (see (5)).
While formulated using fixed boundary conditions (ϕ has p and q as its endpoints), the geodesic equation is a second-order ODE for which initial tangent vector conditions are sufficient to uniquely determine a solution.
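The geodesic equation can be integrated numerically once a concrete manifold is fixed. The sketch below uses the unit sphere S² ⊂ R³ in its extrinsic description, where requiring the acceleration to have no tangential component gives x″ = −|x′|² x; this is an equivalent chart-free form for this particular example, not the paper's formalism. The initial data, step size, and RK4 integrator are arbitrary choices.

```python
import numpy as np

def acc(x, v):
    # On the unit sphere, geodesics satisfy x'' = -|x'|^2 x (zero tangential acceleration).
    return -(v @ v) * x

x = np.array([1.0, 0.0, 0.0])           # initial point on S^2
v = np.array([0.0, 0.7, 0.0])           # initial velocity tangent to S^2 at x
dt, steps = 1e-3, 20_000

for _ in range(steps):                  # classical RK4 on the first-order system (x, v)
    k1x, k1v = v,                acc(x, v)
    k2x, k2v = v + 0.5*dt*k1v,   acc(x + 0.5*dt*k1x, v + 0.5*dt*k1v)
    k3x, k3v = v + 0.5*dt*k2v,   acc(x + 0.5*dt*k2x, v + 0.5*dt*k2v)
    k4x, k4v = v + dt*k3v,       acc(x + dt*k3x, v + dt*k3v)
    x = x + (dt/6)*(k1x + 2*k2x + 2*k3x + k4x)
    v = v + (dt/6)*(k1v + 2*k2v + 2*k3v + k4v)

print("|x| =", np.linalg.norm(x), "  speed =", np.linalg.norm(v))
```

The printed norm of x and the speed both remain numerically constant, reflecting that the solutions are great circles traversed at constant speed, consistent with the zero-covariant-acceleration statement above.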

4.3. Second Variation

A further consideration after finding critical points of the energy functional L is determining which critical points are extrema. This will involve calculating the second derivative of L . Let C : = C ∞ ( M , S ) , noting that T ϕ C ≅ Γ ( ϕ * T S ) for ϕ ∈ C . The first derivative of L is ∇ C , R L : = d L , as seen in the previous section. The second derivative is the covariant Hessian ∇ T * C ∇ C , R L , where the covariant derivative ∇ T * C is induced by ∇ T S ([18], Theorem 5.4).
For the remainder of this section, let I , J ⊆ R be neighborhoods of zero, let i and j be their respective standard coordinates, and extend the existing δ-style derivative-at-a-point notation by defining δ i : = ∂ i | i = j = 0 , δ j : = ∂ j | i = j = 0 , and the evaluation map z : M → M × I × J , m ↦ ( m , 0 , 0 ) . Then δ i = z * ∂ i and δ j = z * ∂ j ; these will be used as in the calculation of the first variation.
Theorem 2
(Second variation of L ). Let L , L, σ, μ, v and ν all be defined as above. If ϕ ∈ C ∞ ( M , S ) is a critical point of L and A , B ∈ T ϕ C ≅ Γ ( ϕ * T S ) , then the covariant Hessian of L is
2 L ϕ : T ϕ C A B = M A · ϕ * T * S ϕ , M * L , σ σ · ϕ * T S B + A · ϕ * T * S ϕ , M * L , σ v · ϕ * T S M T * M ϕ * T S B + ϕ * T S A · ϕ * T * S M T M ϕ , M * L , v σ · ϕ * T S B + ϕ * T S A · ϕ * T * S M T M ϕ , M * L , v v · ϕ * T S M T * M ϕ * T S B A · ϕ * T * S ϕ , M * L , v · ϕ * T S M T * M ϕ * R T S · ϕ * T S ϕ , M · ϕ * T S B d V g .
This is often called the second variation of L . Here, R T S ∈ Γ ( T S ⊗ S T * S ⊗ S T * S ⊗ S T * S ) denotes the Riemannian curvature endomorphism tensor for the Levi-Civita connection on T S .
Proof. 
Let Φ : M × I × J → S be a two-parameter variation such that δ i Φ = A and δ j Φ = B (e.g., Φ ( m , i , j ) : = exp ( i A m + j B m ) ). The variation Φ can be naturally identified with a variation Φ ¯ : I × J → C , ( i , j ) ↦ ( m ↦ Φ ( m , i , j ) ) , which is more conducive to the use of C as a manifold. The tensor products in the generally infinite-dimensional T C are taken formally. Let z ¯ : = ( 0 , 0 ) ∈ I × J .
By (20), taking the algebra formally in the case of infinite-rank vector bundles,
2 L Φ ¯ = Φ ¯ * 2 L : Φ ¯ * T C Φ ¯ I × J Φ ¯ + Φ ¯ * L · Φ ¯ * T C Φ ¯ ,
so
2 L C ϕ : T ϕ C A B = 2 L C ϕ : T ϕ C δ i Φ ¯ δ j Φ ¯ + L C ϕ · T ϕ C δ j Φ ¯ * T C i Φ ¯ ( since   L C ϕ = 0 ) = 2 L C Φ ¯ I × J z ¯ : z ¯ * Φ ¯ * T C δ i Φ ¯ δ j Φ ¯ + L C Φ ¯ I × J z ¯ · z ¯ * Φ ¯ * T C δ j Φ ¯ * T C i Φ ¯ = 2 L C Φ ¯ : T I T J δ i I × J δ j ( by   above ) = δ j i L C Φ ¯ = M δ j i L Φ , M d V g = M 2 L Φ , M : T M T I T J δ i M × I × J δ j d V g = M A · ϕ * T * S ϕ , M * L , σ σ · ϕ * T S B + A · ϕ * T * S ϕ , M * L , σ v · ϕ * T S M T * M ϕ * T S B + ϕ * T S A · ϕ * T * S M T M ϕ , M * L , v σ · ϕ * T S B + ϕ * T S A · ϕ * T * S M T M ϕ , M * L , v v · ϕ * T S M T * M ϕ * T S B A · ϕ * T * S ϕ , M * L , v · ϕ * T S M T * M ϕ * R T S · ϕ * T S ϕ , M · ϕ * T S B d V g ( by   Calculation   ( 1 ) ) .
Supporting calculations follow.
Calculation (1): Abbreviate ϕ , M * L , x y by L , x y . By (20),
2 L Φ , M : T M T I T J δ i M × I × J δ j = Φ , M * 2 L : Φ , M * T E Φ , M M × I × J Φ , M + Φ , M * L · Φ , M * T E Φ , M z : z * T M T I T J δ i M δ j = z * Φ , M * 2 L : z * Φ , M * T E δ i Φ , M M δ j Φ , M + z * Φ , M * L · z * Φ , M * T E δ j Φ , M * T E i Φ , M ( by   Calculation   ( 2 ) ) = L , σ σ : ϕ , M * π S * T S δ i Φ M δ j Φ + L , σ v · ϕ , M * π S * T S M ϕ , M * π * E δ i Φ M ϕ * T S δ j Φ + L , v σ · ϕ , M * π * E M ϕ , M * π S * T S ϕ * T S δ i Φ M δ j Φ + L , v v : ϕ , M * π * E ϕ * T S δ i Φ M ϕ * T S δ j Φ + L , σ · ϕ , M * π S * T S δ j Φ * T S i Φ + L , v · ϕ , M * π * E ϕ * T S δ j Φ * T S i Φ + L , v · ϕ , M * π * E I ϕ * T S M δ i Φ · ϕ * T S M ϕ * T * S ϕ * R T S · ϕ * T S δ j Φ · ϕ * T S ϕ , M ( by   Calculation   ( 3 ) ) .
Note that δ j Φ * T S i Φ Γ ϕ * T S , and since ϕ is a critical point of L ,
M L , σ · ϕ , M * π S * T S δ j Φ * T S i Φ + L , v · ϕ , M * π * E ϕ * T S δ j Φ * T S i Φ d V g = 0 .
Thus
M 2 L Φ , M : T M T I T J δ i M × I × J δ j d V g = M L , σ σ : ϕ , M * π S * T S δ i Φ M δ j Φ + L , σ v · ϕ , M * π S * T S M ϕ , M * π * E δ i Φ M ϕ * T S δ j Φ + L , v σ · ϕ , M * π * E M ϕ , M * π S * T S ϕ * T S δ i Φ M δ j Φ + L , v v : ϕ , M * π * E ϕ * T S δ i Φ M ϕ * T S δ j Φ + δ i Φ · ϕ * T * S L , v · ϕ , M * π * E ϕ * R T S · ϕ * T S δ j Φ · ϕ * T S ϕ , M d V g = M A · ϕ * T * S L , σ σ · ϕ * T S B + A · ϕ * T * S L , σ v · ϕ * T S M T * M ϕ * T S B + ϕ * T S A · ϕ * T * S M T M L , v σ · ϕ * T S B + ϕ * T S A · ϕ * T * S M T M L , v v · ϕ * T S M T * M ϕ * T S B A · ϕ * T * S L , v · ϕ * T S M T * M ϕ * R T S · ϕ * T S ϕ , M · ϕ * T S B d V g ( by   antisymmetry   of   curvature   tensor ) .
Calculation (2):
z * Φ , M : z * T M T I T J δ i M δ j = z * Φ , M * T E M × I × J T * M T * I T * J Φ , M : z * T M T I T J z * i M × I × J j = z * Φ , M * T E M × I × J T * M T * I T * J Φ , M : T M T I T J i M × I × J j = z * j Φ , M * T E M × I × J T * M T * I T * J Φ , M · T M T I T J i = z * j Φ , M * T E Φ , M · T M T I T J i ( since   j T M T I T J i = 0 ) = δ j Φ , M * T E i Φ , M .
Calculation (3): As calculated in the proof of (1),
ϕ , M * σ · ϕ , M * T E δ i Φ , M = δ i Φ Γ ϕ * T S , ϕ , M * μ · ϕ , M * T E δ i Φ , M = 0 Γ T M , ϕ , M * v · ϕ , M * T E δ i Φ , M = ϕ * T S δ i Φ Γ ϕ * T S M T * M .
Furthermore, letting P : = pr M M × I × J for brevity and noting that P z = Id M ,
ϕ , M * σ · ϕ , M * T E δ j Φ , M * T E i Φ , M = z * j Φ , M * π S * T S Φ , M * σ · Φ , M * T E i Φ , M ( since   σ = 0 ) = z * j π S Φ , M * T S i Φ ( using   calculation   from   ( 1 ) ) = δ j Φ * T S i Φ Γ z * Φ * T S Γ ϕ * T S ,
ϕ , M * μ · ϕ , M * T E δ j Φ , M * T E i Φ , M = z * j Φ , M * π M * T M Φ , M * μ · Φ , M * T E i Φ , M ( since   μ = 0 ) = z * j π M Φ , M * T M 0 ( using   calculation   from   ( 1 ) ) = 0 Γ z * π M Φ , M * T M Γ z * P * T M Γ T M ,
ϕ , M * v · ϕ , M * T E δ j Φ , M * T E i Φ , M = z * j Φ , M * π * E Φ , M * v · Φ , M * T E i Φ , M ( since   v = 0 ) = z * j π Φ , M * E i Φ , M ( using   calculation   from   ( 1 ) ) .
Note that
ϕ , M * v Γ ϕ , M * π * E M ϕ , M * T * E Γ ϕ × M Id M * E M ϕ , M * T * E ,
and therefore
ϕ , M * v · ϕ , M * T E δ j Φ , M * T E i Φ , M Γ ϕ × M Id M * E Γ ϕ * T S M T * M ,
so it suffices to examine its natural pairing with T M elements. Let X Γ T M , noting that X = Id M * X = z * P * X and that P * X = T P · X 0 T I 0 T J Γ P * T M . Then
ϕ , M * v · ϕ , M * T E δ j Φ , M * T E i Φ , M · T M X = z * j Φ * T S M × I × J P * T * M i Φ , M · z * P * T M z * P * X = z * j Φ * T S i Φ , M · P * T M P · T M T I T J X 0 T I 0 T J z * i Φ , M · j P * T M P · T M T I T J · X 0 T I 0 T J = z * j Φ * T S X 0 T I 0 T J Φ * T S i Φ i Φ , M · 0 P * T M = z * X 0 T I 0 T J Φ * T S j Φ * T S i Φ + j , X 0 T I 0 T J Φ * T S i Φ R Φ * T S j , X 0 T I 0 T J i Φ = z * X 0 T I 0 T J Φ * T S j Φ * T S i Φ + 0 Φ * T S i Φ z * I Φ * T S M × I × J i Φ · Φ * T S M × I × J Φ * T * S R Φ * T S : T M T I T J j M × I × J X 0 T I 0 T J = ϕ * T S δ j Φ * T S i Φ + I ϕ * T S M δ i Φ · ϕ * T S M ϕ * T * S ϕ * R T S · ϕ * T S δ j Φ · ϕ * T S ϕ , M · T M X ,
where the last equality follows from Calculations (4) and (5). Because X is pointwise-arbitrary in T M , this shows that
ϕ , M * v · ϕ , M * T E δ j Φ , M * T E i Φ , M = δ j Φ * T S i Φ , M + I ϕ * T S M δ i Φ · ϕ * T S M ϕ * T * S ϕ * R T S · ϕ * T S δ j Φ · ϕ * T S ϕ , M .
Calculation (4):
z * X 0 T I 0 T J Φ * T S j Φ * T S i Φ + 0 Φ * T S i Φ = z * j Φ * T S i Φ , M · z * P * T M z * P * X = δ j Φ * T S i Φ , M · T M X ( by   ( 22 ) ) = ϕ * T S δ j Φ * T S i Φ · T M X ( because δ j Φ * T S i Φ Γ z * Φ * T S Γ ϕ * T S ) .
Calculation (5):
z * R Φ * T S : T M T I T J j M × I × J X 0 T I 0 T J = z * R Φ * T S : T M T I T J X 0 T I 0 T J M × I × J j ( antisymmetry   of   R Φ * T S ) = z * Φ * R T S : Φ * T S Φ M × I × J Φ : T M T I T J X 0 T I 0 T J M × I × J j ( by   ( 21 ) ) = z * Φ * R T S : Φ * T S Φ , M · P * T M P * X M × I × J j Φ = z * Φ * R T S · Φ * T S j Φ · Φ * T S Φ , M · P * T M P * X = z * Φ * R T S · z * Φ * T S z * j Φ · z * Φ * T S z * Φ , M · z * P * T M z * P * X = ϕ * R T S · ϕ * T S δ j Φ · ϕ * T S ϕ , M · T M X .
 □
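To connect the second variation with something computable, recall the classical special case: for the geodesic energy on a unit-speed geodesic of length ℓ in the unit sphere, and a variation field A normal to the geodesic that vanishes at the endpoints, the formula of Theorem 2 reduces (this standard reduction is assumed here, not rederived) to the index form I(A, A) = ∫₀^ℓ (|A′|² − |A|²) dt, the curvature term contributing −|A|² because the sectional curvature is 1. The sketch below evaluates I for the hypothetical variation field A(t) = sin(πt/ℓ) e(t) with e parallel; the result is positive for ℓ < π and negative for ℓ > π, past the first conjugate point.

```python
import numpy as np

def index_form(ell, n=200_001):
    # I(A, A) = int_0^ell (|A'|^2 - |A|^2) dt for A(t) = sin(pi t/ell) e(t), e parallel,
    # along a unit-speed geodesic in the unit sphere (sectional curvature 1).
    t = np.linspace(0.0, ell, n)
    A = np.sin(np.pi*t/ell)
    Ap = (np.pi/ell)*np.cos(np.pi*t/ell)
    vals = Ap**2 - A**2
    dt = t[1] - t[0]
    return dt*(vals.sum() - 0.5*(vals[0] + vals[-1]))    # trapezoid rule

for ell in (0.5*np.pi, np.pi, 1.5*np.pi):
    print(f"length {ell:.3f}:  I(A, A) = {index_form(ell):+.4f}")
```

A negative value of the second variation means the corresponding geodesic, while a critical point, is not a local minimizer of the energy, which is the kind of information the general formula above is meant to provide.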
Theorem 3
(Second variation of L (alternate form)). Let L , L, σ, μ, v and ν all be defined as above. If ϕ ∈ C ∞ ( M , S ) is a critical point of L and A , B ∈ Γ ( ϕ * T S ) , then
2 L ϕ : T ϕ C A B = M A · ϕ * T * S L , σ σ · ϕ * T S B + A · ϕ * T * S L , σ v · ϕ * T S M T * M ϕ * T S B A · ϕ * T * S div M L , v σ · ϕ * T S B A · ϕ * T * S L , v σ · T * M M ϕ * T S ϕ * T S B 1 2 A · ϕ * T * S div M L , v v · ϕ * T S M T * M ϕ * T S B A · ϕ * T * S L , v v · T * M M ϕ * T S M T * M ϕ * T S M T * M ϕ * T S B 1 2 3 A · ϕ * T * S L , v · ϕ * T S M T * M ϕ * R T S · ϕ * T S ϕ , M · ϕ * T S B d V g + M A · ϕ * T * S L , v σ · ϕ * T S B · T * M ν + A · ϕ * T * S L , v v · ϕ * T S M T * M ϕ * T S B · T * M ν d V ¯ g
Proof. 
This result follows essentially from (2) via several instances of integration by parts to express the integrand(s) entirely in terms of A and not its covariant derivatives. Abbreviate ϕ , M * L , x y by L , x y . Then, integrating by parts allows the covariant derivatives of A to be flipped across the natural pairings over ϕ * T S .
M ϕ * T S A · ϕ * T * S M T M L , v σ · ϕ * T S B d V g = M tr T M ϕ * T S A 1 2 · ϕ * T S L , v σ · ϕ * T S B d V g ( T M   trace   is   taken   separately ) = M tr T M T M A · ϕ * T * S L , v σ · ϕ * T S B tr T M A · ϕ * T * S ϕ * T * S M T M M ϕ * T S L , v σ · ϕ * T S B tr T M A · ϕ * T * S L , v σ · ϕ * T S ϕ * T S B d V g ( reverse   product   rule ) = M A · ϕ * T * S div M L , v σ · ϕ * T S B ( definition   of   divergence ) A · ϕ * T * S L , v σ · T * M M ϕ * T S ϕ * T S B 1 2 d V g + M A · ϕ * T * S L , v σ · ϕ * T S B · T * M ν d V ¯ g ( divergence   theorem ) .
Similarly,
M ϕ * T S A · ϕ * T * S M T M L , v v · ϕ * T S M T * M ϕ * T S B d V g = M tr T M ϕ * T S A 1 2 · ϕ * T * S L , v v · ϕ * T S M T * M ϕ * T S B d V g = M tr T M T M A · ϕ * T * S L , v v · ϕ * T S M T * M ϕ * T S B tr T M A · ϕ * T * S ϕ * T * S M T M M ϕ * T * S M T M L , v v · ϕ * T S M T * M ϕ * T S B tr T M A · ϕ * T * S L , v v · ϕ * T S M T * M ϕ * T S M T * M ϕ * T S B d V g = M A · ϕ * T * S div M L , v v · ϕ * T S M T * M ϕ * T S B A · ϕ * T * S L , v v · T * M M ϕ * T S M T * M ϕ * T S M T * M ϕ * T S B 1 2 3 d V g + M A · ϕ * T * S L , v v · ϕ * T S M T * M ϕ * T S B · T * M ν d V ¯ g .
Together with (2), this gives the desired result. □

5. Discussion

This paper is a first pass at the development of a strongly typed tensor calculus formalism. The details of its workings are by no means complete or fully polished, and its landscape is riddled with many tempting rabbit holes which would certainly produce useful results upon exploration, but which were out of the scope of a first exposition. Here is a list of some topics which the author considers worthwhile to pursue, and which will likely be the subject of his future work. Hopefully some of these topics will be inspiring to other mathematicians, and ideally will start a conversation on the subject.
  • There are refinements to be made to the type system used in this paper in order to achieve better error-checking and possibly more insight into the relevant objects. There are still implicit type identifications being done (mostly the canonical identifications between different pullback bundles).
  • The calculations done in this paper are not in an optimally polished and refined state. With experience, certain common operations can be identified, abstract computational rules generated for these operations, and the relevant calculations simplified.
  • The language of Category Theory can be used to address the implicit/explicit handling of natural type identifications, for example, the identification used in showing the contravariance of bundle pullback; ψ * ϕ * F ≅ ( ϕ ∘ ψ ) * F .
  • The details of the particular implementation of the pullback bundle ϕ * F as a submanifold of the direct product M × F are used in this paper, but there is no reason to “open up the box” like this. For most purposes, the categorical definition of the pullback bundle suffices; the pullback bundle can be worked with exclusively through its projection maps π M ϕ * F and ρ F ϕ * F . Using this abstract interface often cleans up calculations involving pullback bundles significantly.
  • The type system used for any particular problem or calculation can be enriched or simplified to adjust to the level of detail appropriate for the situation. For example, if γ ∈ C ∞ ( R , M ) , then ∇ γ ∈ Γ ( γ * T M ⊗ R T * R ) , but if t is the standard coordinate on R , then ∇ γ = γ ′ ⊗ R d t , where γ ′ ∈ Γ ( γ * T M ) is given by ∇ γ · d d t . This “primed” derivative has a simpler type than the total derivative, and would presumably lead to simpler calculations (e.g., in (25)). This “primed” derivative could also be used in the derivation of the first and second variations. While this would simplify the type system, it would diversify the notation and make the computational system less regularized. However, some situations may benefit overall from this.
  • The notion of strong typing comes from computer programming languages. The human-driven type-checking which is facilitated by the pedantically decorated notation in this paper can be done by computer by implementing the objects and operations of this tensor calculus formalism in a strongly typed language such as Haskell (a rough sketch of the idea appears after this list). This would be a step toward automated calculation checking, and could be considered a step toward automated proof checking from the top down (as opposed to from the bottom up, using a system such as the Coq Proof Assistant). Furthermore, a computer could display tensor expressions with whatever level of detail the user desires, showing low-, mid-, or high-level notation, or showing or suppressing identification isomorphisms.
  • Is there some sort of completeness result about the calculational tools and type system in this paper? In other words, is it possible to accomplish “everything” in a global, coordinate-free way using a certain set of tools, such as pullback bundles, covariant derivatives, chain rules, permutations, evaluation-by-pullback?
  • The alternate form of the second variation (see (3)) can be used to form a generalized Jacobi field equation for a particular energy functional. Analysis of this equation and its solutions may give insights analogous to the standard (geodesic-based) Jacobi field equation.
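As a small illustration of the strong-typing item above: the text suggests a language such as Haskell, but even a rough Python sketch with phantom type parameters conveys the idea that pairings over mismatched spaces should be rejected before any computation happens. The class names, tag types, and numeric payloads below are invented for the illustration, and it is a static checker such as mypy (not the Python runtime) that performs the type-checking.

```python
import numpy as np
from dataclasses import dataclass
from typing import Generic, TypeVar

V = TypeVar("V")          # phantom tag: which space a tensor lives over

class TS: ...             # tag for a fiber of TS
class TM: ...             # tag for a fiber of TM

@dataclass
class Vector(Generic[V]):
    components: np.ndarray

@dataclass
class Covector(Generic[V]):
    components: np.ndarray

def pair(alpha: Covector[V], v: Vector[V]) -> float:
    """Natural pairing alpha . v, only defined over a single space V."""
    return float(alpha.components @ v.components)

u = Vector[TS](np.array([1.0, 2.0]))
a = Covector[TS](np.array([3.0, 4.0]))
w = Vector[TM](np.array([5.0, 6.0, 7.0]))

print(pair(a, u))   # well-typed: both live over TS
# pair(a, w)        # rejected by a static type checker: Covector[TS] vs Vector[TM]
```

A language with a richer type system could additionally encode basepoints, pullbacks, and tensor factors in the types, which is the direction the bullet above has in mind.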

Funding

This research was funded in part by a 2011–2012 ARCS (Achievement Rewards for College Scientists) Fellowship.

Acknowledgments

I would like to thank the ARCS Foundation for having granted me an ARCS Fellowship, and for their generous efforts to promote excellence in young scientists. I would like to thank my advisor Debra Lewis for trusting in my abilities and providing me with the freedom in which the creative endeavor that this paper required could flourish. I would like to thank David DeConde for the invaluable conversations at the Octagon in which imagination, creativity, and exploration were gladly fostered. Thanks to Chris Shelley for showing me how to create the tensor diagrams using Tikz. Finally, I would like to thank both Debra and David for their help in editing this paper.

Conflicts of Interest

The author declares no conflict of interest. The funders had no role in the research, writing, or publication of the manuscript, nor in the decision to publish the results.

References

  1. Parnas, D. On the Criteria to Be Used in Decomposing Systems Into Modules. Commun. ACM 1972, 15, 1053–1058. [Google Scholar] [CrossRef]
  2. Walter, W. Ordinary Differential Equations; Springer: Berlin/Heidelberg, Germany, 1998; Volume 182. [Google Scholar]
  3. Raymond, E.S. The Art of Unix Programming; Pearson Education, Inc.: Canada, 2003; Available online: http://www.catb.org/~esr/writings/taoup/html/ (accessed on 20 July 2022).
  4. Miller, G.A. The Magical Number Seven, Plus or Minus Two: Some Limits on Our Capacity for Processing Information. Am. Psychol. Assoc. 1955, 101, 343–352. [Google Scholar]
  5. Lee, J.M. Introduction to Smooth Manifolds; Springer: Berlin/Heidelberg, Germany, 2006; Volume 218. [Google Scholar]
  6. Lee, J.M. Manifolds and Differential Geometry; American Mathematical Society, 2009; Volume 107, Available online: https://bookstore.ams.org/gsm-107 (accessed on 25 August 2022).
  7. Michor, P.W. Topics in Differential Geometry; American Mathematical Society, 2008; Volume 93, Available online: https://www.mat.univie.ac.at/~michor/dgbook.pdf (accessed on 25 August 2022).
  8. Lee, J.M. Riemannian Manifolds: An Introduction to Curvature; Springer: Berlin/Heidelberg, Germany, 1997; Volume 176. [Google Scholar]
  9. Cardelli, L. Typeful Programming. 1991. (Revised 1993). Available online: http://www.lucacardelli.name/Papers/TypefulProg.pdf (accessed on 20 July 2022).
  10. Penrose, R. The Road to Reality; Vintage Books: UK, 2004; Available online: https://en.wikipedia.org/wiki/The_Road_to_Reality (accessed on 25 August 2022).
  11. Marsden, J.E.; Hughes, T.J.R. Mathematical Foundations of Elasticity; Prentice Hall, Inc.: New York, NY, USA, 1983; Available online: https://www.amazon.co.jp/Mathematical-Foundations-Elasticity-Mechanical-Engineering/dp/0486678652 (accessed on 25 August 2022).
  12. Palais, R.S. Foundations of Global Non-Linear Analysis; W.A. Benjamin, Inc., 1968; Available online: https://vmm.math.uci.edu/PalaisPapers/FoundationsOfGlobalNonlinearAnalysis.pdf (accessed on 25 August 2022).
  13. Kolár, I.; Michor, P.W.; Slovák, J. Natural Operations in Differential Geometry; Springer: Berlin/Heidelberg, Germany, 1993; Volume 434, Available online: http://www.mat.univie.ac.at/~michor/listpubl.html (accessed on 20 July 2022).
  14. Xin, Y. Geometry of Harmonic Maps; Birkhäuser, 1996; Volume 23, Available online: https://link.springer.com/chapter/10.1007/978-1-4612-4084-6_4 (accessed on 25 August 2022).
  15. Giaquinta, M.; Hildebrandt, S. Calculus of Variations I; Springer: Berlin/Heidelberg, Germany, 1996. [Google Scholar]
  16. Dodson, C.; Radivoiovici, M. Second-Order Tangent Structures. Int. J. Theor. Phys. 1982, 21, 151–161. [Google Scholar] [CrossRef]
  17. Ebin, D.G.; Marsden, J.E. Groups of Diffeomorphisms and the Motion of an Incompressible Fluid. Ann. Math. Second. Ser. 1970, 92, 102–163. [Google Scholar] [CrossRef]
  18. Eliasson, H.I. Geometry of Manifolds of Maps. J. Differ. Geom. 1967, 1, 169–194. [Google Scholar] [CrossRef]
  19. Conference Board of the Mathematical Sciences Regional Conference Series in Mathematics. Selected Topics in Harmonic Maps; American Mathematical Society, 1983; Number 50; Available online: https://bookstore.ams.org/cdn-1655676954536/cbms-50/5 (accessed on 25 August 2022).
  20. Nishikawa, S. Variational Problems in Geometry; American Mathematical Society, 2002; Volume 205, Available online: https://www.semanticscholar.org/paper/Variational-problems-in-geometry-Nishikawa/f493efd6d5b84cff294179450e85b358b766d265 (accessed on 25 August 2022).
Figure 1. A picture of the manifold M, path θ , and vector fields θ ′ and Θ . The blue dots represent θ ( t ) at certain points t ∈ R , while the green and red arrows represent θ ′ ( t ) and Θ ( t ) at these points respectively. Note that Θ is a unit-length vector field along θ and varies within I, whereas θ ′ is a vector field along θ that vanishes within I.
Figure 2. A diagram representing the decomposition of T E E into horizontal and vertical subbundles. The vertical lines represent individual fibers of E, while p M , e p E p , 0 p E p denotes the zero vector of E p , and Z E denotes the zero subbundle of E; Z E M . By the equivariance property of the linear connection, Z E is a submanifold of E which is entirely horizontal (its tangent space is entirely composed of horizontal vectors). The tangent spaces T 0 p E and T e p E are drawn; green arrows representing the vertical subspaces (“along” the fibers), red arrows representing the horizontal subspaces. Finally, c is a horizontal curve passing through e p .
Table 1. Notational Reference Part 1.
High-/Mid-/Low-Level Notation | Description
Variations; variational derivatives; tangent vectors.
m ϵ mVariation of a point in M; I ϵ m ϵ M ; m : I M .
δ δ ϵ Variational derivative; δ ϵ : = ϵ ϵ = 0 .
δ m ϵ δ ϵ m Tangent vector; linearization of a variation;
δ m ϵ T m 0 M ; δ ϵ m T m 0 M .
Projection maps; canonical isomorphisms; bundle-related maps and spaces.
pr pr i pr i A 1 × × A n Set-theoretic projection onto ith factor or named factor;
pr A i pr A i A 1 × × A n pr i A 1 × × A n : A 1 × × A n A i .
ι ι B , ι A ι B A Canonical isomorphism; ι B A : A B ; ι A B : = ι B A 1 .
π π M , π F π M F Bundle projection map; π M F : F M .
ρ ρ H , ρ ϕ * H ρ H ϕ * H Pullback bundle fiber projection map; ρ H ϕ * H : ϕ * H H .
I I V Identity tensor on V; I V V V * .
I I E Identity tensor field on E; I E Γ E E * .
Trivial bundle constructions and projection maps.
M × N N M N N Trivial bundle over N; M N : = M × N ;
π N M N : M N N , m , n n .
M × N M M N M Trivial bundle over M; M N : = M × N ;
π M M N : M N M , m , n m .
Shared base-space bundle constructions and projection maps.
E × F M E × M F M Direct product; E × M F : = m M E m × F m ;
π M E × M F e , f : = π M E e π M F f .
E F M E M F M Whitney sum; E M F : = m M E m F m ;
π M E M F e f : = π M E e π M F f .
E F M E M F M Tensor product; E M F : = m M E m F m ;
π M E M F c i j e i f j : = π M E e k π M F f (for any k , ).
Separate base-space bundle constructions and projection maps.
E × H M × N E × M × N H M × N Direct product; E × M × N H : = m , n M × N E m × H n .
π M × N E × M × N H e , h : = π M E e , π N H h .
E H M × N E M × N H M × N Whitney sum; E M × N H : = m , n M × N E m H n .
π M × N E M × N H e h : = π M E e , π N H h .
E H M × N E M × N H M × N Tensor product; E M × N H : = m , n M × N E m H n .
π M × N E M × N H c i j e i h j : = π M E e k , π N H h (for any k , ).
Trace; natural pairing; tensor/tensor field contraction. Simple tensor expressions are extended linearly.
tr tr V Trace on V; tr V : V * V R , α v α v .
α · v α · V v Natural pairing; · V : V * × V R , α , v α v .
A · B A · V B Tensor contraction; · V : U V * × V W U W ,
u α · V v w : = u α · V v w α v u w .
S · n T S · V 1 V n T Alternate for · V , where V = V 1 V n .
S : T , S · 2 T S · V 1 V 2 T Special notation for n = 2 .
tr tr F Trace on F M ; tr F : Γ F * M F C M , R ,
tr F σ M f m : = σ m · F m f m for m M .
σ · f σ · F f Natural pairing; · F : Γ F * × Γ F C M , R ,
σ · F f m : = σ m · F m f m for m M .
A · f A · F f Natural pairing; · F : Γ E M F * × F E ,
e M σ · F f : = e m σ m · F m f E m ; m : = π M F f .
S · T S · F T Tensor field contraction; pointwise tensor contraction;
· F : Γ E M F * × Γ F M G Γ E M G ,
e σ · F f g m : = σ m · F m f m e m g m .
S · n T S · F 1 M M F n T Alternate for · F , where F = F 1 M M F n .
S : T , S · 2 T S · F 1 M F 2 T Special notation for n = 2 .
Table 2. Notational Reference Part 2.
High-/Mid-/Low-Level Notation | Description
Permutations of tensors and tensor fields.
A σ , A · n σ A · V 1 * V n * σ Right-action of permutations on n-tensors/n-tensor fields;
v 1 v n σ : = v σ 1 1 v σ 1 n ; A σ τ = A σ τ .
Spaces of sections of bundles.
Γ H , Γ π N H Space of smooth sections of the bundle π N H ;
Γ H : = h C N , H π N H h = Id N .
Γ ϕ H , Γ ϕ π N H Space of smooth sections of π N H along ϕ ;
Γ ϕ H : = h C M , H π N H h = ϕ .
Vertical bundle, pullback bundle, projection maps, pullback of sections.
V E E Vertical bundle over E M ; V E : = ker T π M E T E .
projection map π E V E : = π E T E V E .
ϕ * H M Pullback bundle; ϕ * H : = m , h M × H ϕ m = π h .
π M ϕ * H m , h : = m ; ρ H ϕ * H m , h : = h .
ϕ * h Pullback of section h Γ H ; ϕ * h Γ ϕ * H
defined by ρ H ϕ * H ϕ * h = h ; h Γ H .
Covariant derivatives; partial covariant derivatives.
L | L | M R L Natural linear covariant derivative; differential of functions;
M R L | M R L : = d L Γ T * M , where L C M , R .
X | X | E X Linear covariant derivative on vector bundle E M ;
E X E X Γ E M T * M , where X Γ E .
α | α | E * α Linear covariant derivative on dual vector bundle E * M ;
E * α E * α Γ E * M T * M , where α Γ E * .
ϕ ϕ M N ϕ Tangent map as tensor field;
M N ϕ M N ϕ Γ ϕ * T N M T * M , where ϕ C M , N .
σ ϕ σ ϕ * H σ Pullback covariant derivative; σ Γ ϕ * H ;
defined by ϕ * H ϕ * h = ϕ * H h · ϕ * T N M N ϕ ; h Γ H .
L , c 1 , , L , c n Partial differential of functions;
L , c i Γ F i * , defined by | M R L = i = 1 n L , c i · F i c i .
X , c 1 , , X , c n Partial linear covariant derivative;
X , c i Γ E M F i * , defined by | E X = i = 1 n X , c i · F i c i .
ϕ , M 1 , , ϕ , M n Partial derivative decomposition of tangent map;
ϕ , M i Γ ϕ * T N M pr i * T * M i ,
where M = M 1 × × M n , pr i : = pr i M , and
M N ϕ = i = 1 n ϕ , M i · pr i * T M i M M i pr i .
Covariant Hessians.
2 L T * M M R L Covariant Hessian of functions;
| | L | T * M | M R L 2 L Γ T * M T * M ; L C M , R .
2 X E T * M E X Covariant Hessian on vector bundle E M ;
| | X | E M T * M | E X 2 X Γ E T * M T * M ; X Γ E .
2 ϕ ϕ * T N T * M M N ϕ Covariant Hessian of maps;
| ϕ | ϕ * T N M T * M M N ϕ 2 ϕ Γ ϕ * T N T * M T * M . ϕ C M , N .
Derivative conventions.
X e Directional derivative notation; X e : = e · T M X .
n e · n X 1 X n 1 X n Iterated covariant derivative convention;
defined by X n n 1 e · n 1 X 1 X n 1 .
R X , Y : = X Y + Y X + X , Y Curvature operator; R X , Y e = 2 e : X Y Y X .
z : M → M × I , m ↦ ( m , 0 ) Evaluation-at-zero map.
z * ∂ i = δ i Pullback formulation of derivative-at-zero.