A Bayesian Reflection on Surfaces

The topic of this paper is a novel Bayesian continuous-basis field representation and inference framework. Several problems are solved within this paper: the maximally informative inference of continuous-basis fields, that is, fields whose basis is itself a continuous object and not representable in a finite manner; the tradeoff between accuracy of representation, in terms of information learned, and memory or storage capacity, in bits; the approximation of probability distributions so that a maximal amount of information about the object being inferred is preserved; and an information-theoretic justification for multigrid methodology. The maximally informative field inference framework is described in full generality and named the Generalized Kalman Filter. The Generalized Kalman Filter allows the update of field knowledge from previous knowledge at any scale, together with new data, to new knowledge at any other scale. An example application, the inference of continuous surfaces from measurements (for example, camera image data), is presented.


Overview
The paper begins by reviewing traditional approaches to surface representation and inference. Then the new field representation and inference paradigm is introduced within the context of maximally informative (MI) inference [5], early ideas appearing in [4]. The knowledge representation distribution is introduced and discussed in the context of MI inference. Then, using the MI inference approach, the here-named Generalized Kalman Filter (GKF) equations are derived for a specific example instance of inferring a surface height field. The GKF equations motivate a location-dependent adaptive scale or multigrid approach to the MI inference of continuous-basis fields.

Traditional methods
Many methods for representing surfaces have been utilized previously; however, these methods involve representing the surface by a discrete-basis field, perhaps with a deterministic interpolation defined (bi-linear, tensor B-splines, etc.) to provide a definition for the surface at points intermediate to the discrete field. Probability distributions or densities of these discrete fields then often take the form of normalized exponentials of sums of clique energy functions, and produce a construct commonly known as a Markov Random Field. (See Geman [2] for an often-cited example.) There are several immediate observations on these approaches:
• The surface remains unspecified at points intermediate to the discrete field, except by the often undefined notion of interpolation.
• When interpolation is not defined, the discrete field probability distribution says nothing about the probability distribution of surface at points intermediate to the discrete field points.
• When interpolation is defined then, given a value of the discrete field, there is no uncertainty in the surface intermediate to the discrete field points. There is a deterministic mapping from any given discrete field to the corresponding continuous surface. In particular, when the discrete field basis covers a fixed grid on the (x, y) plane with z heights at each grid point, known here as a height field, all sampling of the surface intermediate to the fixed grid is determined at the scale of the fixed grid. This is generally not physical, see next.
• The surface distribution is not an intrinsic property of any physical surface, rather a post-hoc imposition of the analyst attempting a useful regularization. For instance, necessary scaling properties are ignored: Moving a camera closer to the surface, for example, so that the density of sample points on the physical surface increases, is not properly represented in the fixed basis of the discrete field distribution; there is no consistency imposed that requires a subsampled set of points to have the same probability density that one would find by marginalizing the surface distribution over the sample points not in the subsampling.

Scaling consistency
The consistency condition mentioned in the last section, which must be imposed on probability distributions for continuous fields, is:

Scaling of sample points consistency: For S ⊂ A indices of discrete field variables,

    P(h_S | θ) = ∫ P(h_A | θ) dh_{A\S}.    (1)

Note that equation 1 is a condition which must be imposed on the distributions which any modelling system learns wherever it is sensible to supersample or subsample the field arbitrarily, as in the continuous-field-basis case.
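For the multinormal case this condition can be verified numerically. The sketch below (illustrative numbers, not taken from the paper) checks that restricting samples of a discrete height distribution to a subset S of its points reproduces the marginal obtained by dropping the omitted rows and columns of the mean and covariance:

```python
import numpy as np

rng = np.random.default_rng(0)

# A multinormal over heights at 5 discrete points (a stand-in for a field
# distribution restricted to a finite basis; the numbers are illustrative).
mu = np.array([0.0, 0.5, 1.0, 0.5, 0.0])
A = rng.normal(size=(5, 5))
Sigma = A @ A.T + 5 * np.eye(5)   # a valid (positive definite) covariance

# Subsample S = {0, 2, 4}: for a multinormal, marginalizing over the omitted
# points is exactly dropping the corresponding rows and columns.
S = [0, 2, 4]
mu_S, Sigma_S = mu[S], Sigma[np.ix_(S, S)]

# Monte-Carlo check of equation 1: samples of the full field, restricted to S,
# reproduce the marginal's moments.
samples = rng.multivariate_normal(mu, Sigma, size=200_000)
emp_mu = samples[:, S].mean(axis=0)
emp_Sigma = np.cov(samples[:, S].T)
print(np.max(np.abs(emp_mu - mu_S)), np.max(np.abs(emp_Sigma - Sigma_S)))
```

For a multinormal the consistency is automatic; the point of equation 1 is that it must be imposed on whatever distribution family a modelling system actually learns.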

Elements of the paradigm
The rest of this paper discusses an approach to continuous field inference which corrects the deficiencies of traditional discrete-basis approaches, including the intermediate-value and scaling problems just described. The new approach is here named the Generalized Kalman Filter.
There are four central objects of importance within the inference approach described in this paper, one of which is a new object to Bayesian inference:
• The prior distribution for the field. The prior holds all information about fields before any data is observed.
• The likelihood distribution. The likelihood is predictive for data, given the field. It incorporates all of the physics of the measurement process.
• The posterior distribution. The posterior distribution summarizes everything knowable about the field given assumptions of likelihood form, the prior knowledge, and all data.
• The knowledge-representation (KR) distribution. Within the usual Bayesian point of view, the KR distribution is the new mathematical object. In the paradigm described in this paper the KR distribution is the object updated when new data arrives. The KR distribution is parameterized by maximally informative statistics (see [5]) for the learned field knowledge. Note that because the KR distribution has a finite number-of-values limitation, the KR distribution is not necessarily able to represent what could have been learned from data about the (continuous) field. Generally, the prior distribution and the KR distribution determine an approximation (possibly exact) to the field posterior distribution. It should be noted that modern computer architecture (memory and space-time) constraints appear to be the fundamental physical drivers for the utilization of the KR distribution, simply because storing the exact posterior generally requires an infinite amount of memory.
In the height field inference application discussed later the KR distribution is parameterized by heights at a set of discrete basis points, but holds knowledge about a continuous basis height field. However, generally, the KR distribution may use an arbitrary set of basis functions.
One advance of the GKF is that the KR distribution is naturally adaptive in both dimension and scale, allowing the learning of continuous-basis field information at the appropriate scale, where appropriate.
Benefits of the approach described in this paper are that it has these information-theoretically optimal features:
1. A location-dependent adaptive and scalable multigrid-like algorithm, so that only the bytes necessary to represent the learned information are stored, leading to a style of maximally sparse representation of surface knowledge;
2. A recursive updating algorithm.
It will become clear that the Bayesian GKF field inference paradigm also has these properties:
• It is the information learned about the field (the KR distribution) which takes the form of a distribution over discrete values. In the surface inference example these discrete values are heights at discrete basis points.
• The prior distribution for fields, in conjunction with the learned knowledge of the field held within the KR distribution, determines a well-defined posterior distribution over continuous fields.
• The field posterior distribution is always a well defined quantity everywhere. In the surface inference example discussed later, this continuity is at points intermediate to the discrete height field basis points of the KR distribution.
• The scaling condition equation 1 is automatically imposed because the posterior distribution is a distribution over fields.
As an example consider the inference of continuous surfaces. While it may seem obvious, in the case of continuous surface inference, that what one is actually representing with a discrete set of values in memory is only a part of the information which helps to determine the surface posterior distribution, it is unusual not to be discussing the height field as the primary representation of the surface. It is the inherently discrete nature of the storage of information in machines which forces us into this stance: generally it is impossible to represent an arbitrary continuous field with a finite set of discrete values, so one must also have another object from which to compute the intermediate values of the field. (Another way to look at the disparity between the current proposal for field inference and traditional proposals is that the traditional approaches are sufficient only for band-limited fields.)
In section 3 the GKF is specialized to height fields, where an example of the GKF paradigm, surface representation and learning, is described. (The approach taken in this section is to specialize to a case that is then easily seen to generalize to the general continuous-basis field inference paradigm.) The next section continues with observations on the update scheme. Further sections continue with the example special case for surface distributions with particularly tractable mathematics, and final sections provide explicit forms for the general GKF equations, a discussion of their relationship to the standard Kalman filter, a discussion of the amount of information learned at each update, and a search heuristic. Extensive appendices provide supporting mathematics for the derivations.

Surface representation and inference
In this section the main ideas of the Bayesian surface representation and inference paradigm presented in this paper are given. The technique is general, though: section 4 discusses the extension to an arbitrary-basis, arbitrary-dimension field.

Surface distributions
The surface and height field distributions (the prior, likelihood, and posterior surface and height field distributions) are discussed in this section.

Surface and height field prior distributions
Consider a set S of surfaces where each element s ∈ S is a height field, i.e. such that s = s(x, y) is a real function of two variables. Write the prior probability distribution for surfaces in S, given the parameters θ which determine the prior distribution, as P(s | θ).
Consider a vector v = (v_1, . . . , v_n) of discrete (x, y) points, v_i = (x_i, y_i). For any given surface s denote the associated vector of heights by h(s, v) = (h_1(s, v), . . . , h_n(s, v)). Write the prior distribution of the surface heights at the chosen points v as P(h_v | θ). This discrete height distribution may be found by marginalizing over the surfaces consistent with the heights,

    P(h_v | θ) = ∫ δ(h_v − h(s, v)) P(s | θ) ds,    (5)

where the vector delta-function is defined as

    δ(h_v − h(s, v)) = ∏_{i=1}^{n} δ(h_i − h_i(s, v)).

Now, given that what is known is the surface heights h_v at a vector v of discrete (x, y) points, the posterior distribution of surfaces is found from Bayes' theorem as

    P(s | h_v, θ) = δ(h_v − h(s, v)) P(s | θ) / P(h_v | θ),    (9)

where the denominator distribution was found in equation 5.
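When the prior over surfaces is Gaussian, the integral above can be done in closed form: the delta-function simply evaluates the field at the basis points, so P(h_v | θ) is multinormal with mean and covariance read off at those points. A minimal sketch, using a squared-exponential covariance as a hypothetical stand-in for the prior constructed in appendix 12.1:

```python
import numpy as np

# Hypothetical smooth-field prior: zero mean, squared-exponential covariance.
# (The paper's actual prior form is constructed in its appendix 12.1; this
# kernel is only an illustrative stand-in.)
def k(p, q, ell=0.5):
    d2 = np.sum((np.asarray(p) - np.asarray(q)) ** 2)
    return np.exp(-d2 / (2.0 * ell ** 2))

# Discrete basis points v = (v_1, ..., v_n), v_i = (x_i, y_i).
v = [(0.0, 0.0), (0.3, 0.1), (1.0, 1.0)]
n = len(v)

# The height marginal P(h_v | theta) is then multinormal with these moments:
mu_v = np.zeros(n)
Sigma_v = np.array([[k(v[i], v[j]) for j in range(n)] for i in range(n)])
print(np.round(Sigma_v, 3))
```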

Measurements: The Likelihood
In general, a surface s and some other parameters φ not dependent upon s (e.g. camera point spread function, camera position and direction, lighting position and direction, etc.) specify the probability distribution for data (the likelihood),

    P(x | s, φ, θ) = P(x | s, φ),

where the data distribution is independent of θ once s is known.

Conditioning on data: Surface and height field posterior distributions
Given data x, the surface posterior distribution is inferred using Bayes' theorem as

    P(s | x, φ, θ) = P(x | s, φ) P(s | θ) / P(x | φ, θ).

The distribution of the surface posterior marginalized to a set of discrete points may be written, using equations 11-12 and doing steps similar to those taken in equations 3-5, as

    P(h_v | x, φ, θ) = ∫ δ(h_v − h(s, v)) P(s | x, φ, θ) ds.

In steps similar to equations 7-9, the surface posterior when a height field is also known is given by

    P(s | h_v, x, φ, θ) = δ(h_v − h(s, v)) P(s | x, φ, θ) / P(h_v | x, φ, θ),

where we used the facts that, given a surface, the data and the surface heights are independent, and the surface distribution is independent of the camera and lighting parameters φ.

Approximating the posterior
One motivation for approximating the surface distribution is that generally a surface is an uncountably infinite, continuous entity, and therefore there is little else which can be done to represent it exactly other than to go into, literally, infinite detail (requiring an infinite supply of memory). It is therefore useful to have an approximation scheme which, although finite, captures the relevant information provided by data. Another excellent reason for developing an approximation is mathematical tractability. Having a representation scheme which allows a tractable calculation of the posterior is a huge benefit for both computation and communication. Finally, it is of great interest to not waste computational resources while representing learned surface information. The solution to the surface representation problem presented here addresses the competition for representational resources (memory) issue in a unique manner.

The knowledge representation distribution
The full posterior may be written in the form

    P(s | x, φ, θ) = ∫ P(s | h_v, x, φ, θ) P(h_v | x, φ, θ) dh_v,    (19)

where the distributions inside the integral appear in equations 13-18. The issue of generating a finite representation is not yet resolved via equation 19, however, since storing information sufficient to determine the distributions P(s | x, φ, θ) and P(s | h_v, x, φ, θ) generally requires storing an infinite set of values in a finite amount of memory, or requires that all data be stored, disallowing any discarding of data and the incremental updating of the representation. Instead, consider the following approximation,

    P(s | x, φ, θ) ≈ ∫ P(s | h_v, θ) P̂(h_v | x, φ, θ) dh_v,    (20)

where the prior conditioned on a set of heights, along with a new distribution, the knowledge representation distribution P̂(h_v | x, φ, θ), are substituted for the distributions inside the integral of equation 19.
It is important to note at this point that any suitable surface distribution may be substituted into the right-hand side of equation 20 for P(s | h_v, θ), since it is important only that the resulting integral be capable of making a good approximation to the true posterior. Further, it is not necessary to restrict the basis v to discrete height field basis points; any suitable basis may be taken, for instance Fourier components. Although all of the calculations of this paper are carried through with the form of equation 20, other forms may prove more convenient, and it is not difficult to suggest others. In particular, since equation 20 will be used in an iterative update loop later, updates that take for the right-hand side prior term the last posterior term appear quite reasonable (the corresponding GKF update equations may be found immediately from those presented later).
Although conditioning on the KR distribution P̂(h_v | x, φ, θ) may seem strange, a good way to understand the meaning is that it is the KR distribution which is being used as a statistic for the learned surface information.
The key thing to notice in equation 20 is that, with reasonable regularity conditions, by choosing the points of v sufficiently dense the approximation to the full posterior may become arbitrarily good. The trick will be to choose v appropriately, properly weighting the competing need to approximate arbitrarily well everywhere against the limited resources that are imposed when a finite amount of storage is available, i.e. when the dimensionality of v is fixed. This will be addressed in the next section. In the case of simple imaging systems, the point spread function and pixel diameter are good indicators of the necessary sampling scale for v. In the super-resolved case, the resolution expected to be available from the data is the appropriate scale for v.
The approximation to the posterior of equation 20 has several properties which make it valuable:
• The prior distribution P(s | h_v, θ), which supplies the uncertainties associated with points of the surface not in the vector v, may be chosen to have a simple form (see appendix 12.1) that is easily encoded algorithmically in finite memory.
• There is a clear separation between what was already known (the prior P(s | h_v, θ)) and what has been learned (the KR distribution P̂(h_v | x, φ, θ)).
• There is a clear description of the scale at which information has been acquired, in terms of the density and uncertainties associated with the points (v, h(s, v)) on the surface, and in terms of the uncertainties of their positions as encoded in the KR distribution.
In practice, it is useful to take a multinormal distribution over the discrete-point height field as the KR distribution. Let the parameterization of the KR distribution be Θ_v. For example, if the KR is taken to be multinormal then the parameters of that distribution are the mean and covariance matrix of the multinormal,

    Θ_v = (μ_v(x), Σ_v(x)),

where the functional dependence on x indicates a data dependency through the update procedure, and the subscript v indicates that the parameters parameterize a distribution of heights at points v. Because the KR distribution and its parameters are related by a one-to-one mapping, re-write equation 20 as

    P(s | x, φ, θ) ≈ ∫ P(s | h_v, θ) P̂(h_v | Θ_v) dh_v.    (22)

In summary, we have arrived at an approximation to the surface posterior distribution, via the KR distribution, parameterized by Θ_v.

Updating the knowledge representation
Now we discuss updating Θ v when new data are acquired. Temporarily restrict attention to the fixed v case. During this and the next sections refer to figure 1 for a flowchart of the general GKF update process.

Bayes' theorem
Having acquired Θ_v^n = Θ_v(x^n) from previously seen data x^n = (x_1, . . . , x_n), and upon seeing new data x_{n+1}, the goal is to find Θ_v^{n+1} such that the surface distribution given Θ_v^{n+1} approximates the surface distribution given x_{n+1} and Θ_v^n. Given new data x_{n+1} in the context of the previously seen data x^n summarized by Θ_v^n, our updated surface distribution is found via Bayes' theorem,

    P̂(s | Θ_v^n, x_{n+1}, φ, θ) = P(x_{n+1} | s, φ) P̂(s | Θ_v^n, φ, θ) / P̂(x_{n+1} | Θ_v^n, φ, θ),    (23)

where we defined

    P̂(s | Θ_v^n, φ, θ) = ∫ P(s | h_v, θ) P̂(h_v | Θ_v^n) dh_v.

The updated posterior P̂(s | Θ_v^n, x_{n+1}, φ, θ) will be approximated by the distribution given the new parameters, P(s | Θ_v^{n+1}, θ). The approximation condition for determining Θ_v^{n+1} is then written

    P(s | Θ_v^{n+1}, θ) ≈ P̂(s | Θ_v^n, x_{n+1}, φ, θ).    (26)

Equation 26 suggests we try to minimize various measures of the closeness of the two distributions. For example, one measure is the average square difference of the two distributions, but there is (apparently) no good first-principles reason to use this form.
In the next section we discuss the measure of distance which leads to the maximally informative choice of Θ n+1 v .

Maximally informative inference
The measure of distance which leads to the Θ_v^{n+1} providing the most information about the surface distribution is the maximally informative choice for the statistic Θ_v^{n+1}. The condition for being maximally informative, see [5], is that the Kullback-Leibler distance D(P_1(s), P_2(s)) is minimized, where

    D(P_1, P_2) = ∫ P_1(s) log ( P_1(s) / P_2(s) ) ds,    (28)

and where the P's above are the posterior distributions of the field, that is,

    P_1(s) = P̂(s | Θ_v^n, x_{n+1}, φ, θ),    P_2(s) = P(s | Θ_v^{n+1}, θ).

That is, Θ_v^{n+1} is chosen to satisfy the derivative condition ∂D/∂Θ_v^{n+1} = 0, while at the Θ_v^{n+1} satisfying the derivative condition the hessian of the information is negative definite and the extremum is a local maximum of the information. If possible, choose the global maximum. Note that the Kullback-Leibler distance is asymmetric. Generally, it is highly relevant which distribution contains the prior information and which distribution is being updated. Maximum entropy techniques reverse the roles of P_1 and P_2 which appear here. For a detailed explanation see [5].
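When both posteriors are multinormal, the Kullback-Leibler distance has a standard closed form, and the minimizing P_2 matches the moments of P_1 (the moment-matching fact exploited in appendix 12.6). A small numerical sketch with invented numbers:

```python
import numpy as np

def kl_mvn(mu1, S1, mu2, S2):
    """Closed-form D( N(mu1, S1) || N(mu2, S2) ) between multinormals."""
    d = len(mu1)
    S2inv = np.linalg.inv(S2)
    dm = mu2 - mu1
    return 0.5 * (np.trace(S2inv @ S1) + dm @ S2inv @ dm - d
                  + np.log(np.linalg.det(S2) / np.linalg.det(S1)))

mu1 = np.array([0.0, 1.0])
S1 = np.array([[2.0, 0.3],
               [0.3, 1.0]])

# Moment matching attains the minimum D = 0 ...
assert abs(kl_mvn(mu1, S1, mu1, S1)) < 1e-12
# ... and perturbed candidates do strictly worse.
assert kl_mvn(mu1, S1, mu1 + 0.1, S1) > 0.0
assert kl_mvn(mu1, S1, mu1, 1.5 * S1) > 0.0
print("moment matching minimizes the KL distance")
```

Note the asymmetry discussed in the text: kl_mvn(mu1, S1, mu2, S2) and kl_mvn(mu2, S2, mu1, S1) generally differ, which is why the roles of P_1 and P_2 matter.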
In the following section are some observations on the approach taken to maximally informative surface inference. Section 5 then briefly makes explicit the specific distribution forms which are assumed. The Generalized Kalman Filter update equations for the surface inference example which follow from this approach are then presented in section 6, completing the derivation of the maximally informative approach.

Observations on the update scheme
Note the following: • The updating scheme described here is a maximally informative update scheme and is related to the Kalman filter. The Kalman filter is a minimum variance filtering scheme applicable in the case of fixed representation dimension. The crucial step which has been taken in the current work is the step of allowing the representation scheme to be adaptable. We have adopted the label "Generalized Kalman Filter" (GKF) to describe the idea represented here. The GKF equations are presented in section 6.
• To this point we have only optimized over Θ_v. It is clear that we may also vary the number of vertices |v| of the representation, allowing optimization over the number of vertices. Varying the number of vertices of the representation is absolutely necessary if surface knowledge is ever to accumulate at scales smaller than those the current set of vertices represents. In section 6 the GKF update equations are derived assuming that the number of vertices in the representation basis vertex set is arbitrary at each update.
• Beyond allowing the number of vertices to vary, the positions of the vertices may be allowed to vary. In section 6 the GKF update equations are derived assuming that the representation basis vertex set positions are arbitrary.
• Detecting when and where new vertices are necessary is a matter of observing directly in equations 28 or 31 when new data produces a lower surface uncertainty over a region, and when having smaller uncertainty at neighboring vertices is not sufficient to represent this lower uncertainty over the region.
• The vertex representation for the surface knowledge is convenient, but not necessary. For example it is possible to extend a height field to a height-and-reflectance field or "arbitrary dimension field", where the reflectance lies within a many-dimensional space. Reasonable structures for the covariance matrix allow differing correlations between reflectance values and between height values. It will be seen in section 6 that the GKF update equations are easily used in the "arbitrary dimension field" context.
• In its most abstract form, instead of having a "field", there is simply a set of objects, while for each "object" there is an associated vector of properties, where some of the components of the property vector may be considered a location in space. In this fairly abstracted setting, the collection of objects has an associated joint probability distribution which describes the probability distribution over configurations of objects. It will be seen in section 6 that the GKF update equations are easily understood in the "object" context.
• Equation 31, which defines the quantity to be minimized, may be augmented with a penalty term which indicates how many bits of hardware are available in trade for each bit of information learned from data. For example, one might penalize the KL distance by 1/10th the number of bytes it takes to represent the new information gained by extending the number of points represented. The exact form of the information learned about the surface distribution contained in the KR distribution is found in section 8, where the dimensionality of the representation enters directly, and where bits-used penalty terms may be introduced.
• The previous note points out how a minimum description length method fails for this problem. It is certainly the case that our update scheme may require much more memory (in bits) to represent the information learned than the information learned (in bits). At some point, if information at small enough scales is desired, MDL would truncate and stop. Clearly, applying MDL would then be a disaster. On the other hand, what seems to work here may be called an adaptive MDL approach.
• Note that a method like maximum entropy is entirely deficient for providing distributions of surfaces given the constraints implied by the knowledge of the distribution of the heights at discrete points: maximum entropy ignores correlations between nearby surface points no matter how close, an entirely ludicrous situation. On the other hand, a method like relative maximum entropy, based on inverting the roles of the distributions in equation 28, claims to provide the least informative inference relative to the prior information, a heuristic that is difficult to justify at best. Further, such approaches are typically based on likelihood distributions, rather than the posteriors that appear in equation 28.

Prior
For simplicity of mathematical presentation only, the prior in our surface inference example is taken multinormal over continuous, smooth height fields. One particular, conveniently chosen representation of the prior distribution is constructed in appendix 12.1. This prior may be written in the shorthand

    P(s | θ) = N(μ_s, Σ_s),

where θ = (μ_s, Σ_s) is the parameter vector. The density of the height field determined by the prior is then

    P(h_v | θ) = N(A_vs μ_s, A_vs Σ_s A_vs^T),    (37)

where the projection onto the height field is given by A_vs. Note that equation 37 implies that the surface density covariance is represented differently than a discrete surface distribution covariance matrix. Specifically, the projection matrix A_vs is a delta-function-like operator, and Σ_s is a continuous function of two positions. In appendix 12.1 we show that the surface density has a compact continuous power-spectrum representation, and there give the explicit form of that representation. Thus the notation of equation 37 must be considered a shorthand for the underlying continuous construct.

Likelihood
When measurement is modelled as a linear process corrupted by gaussian noise we have

    x = M s + ε,    ε ~ N(0, Σ_ε),

or

    P(x | s, φ) = N(M s, Σ_ε),

where φ = (M, Σ_ε) is the parameter vector.
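As an illustration of this measurement model, the toy sketch below observes a discretized height field through a hypothetical operator M that averages neighboring heights (M and the noise level are invented for illustration, not taken from the paper):

```python
import numpy as np

rng = np.random.default_rng(1)

# Linear measurement x = M s + eps, eps ~ N(0, Sigma_eps): a toy "camera"
# whose rows average pairs of neighboring heights.
s = np.array([0.0, 1.0, 2.0, 1.0])          # a discretized height field
M = 0.5 * np.array([[1, 1, 0, 0],
                    [0, 1, 1, 0],
                    [0, 0, 1, 1.0]])
Sigma_eps = 0.01 * np.eye(3)                # small measurement noise
x = M @ s + rng.multivariate_normal(np.zeros(3), Sigma_eps)
print(x)   # near the noiseless projection [0.5, 1.5, 1.5]
```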
The Generalized Kalman Filter equations
In this section a concise derivation of the Generalized Kalman Filter update equations, specialized to the discrete-basis multinormal KR distribution of equation 22, is given. The updated KR need not have the same basis dimension nor position as the previous KR basis, solving the problem of how to allow updates from one representation to the next, same, finer, or coarser, representation. Proceeding, the KR distribution in terms of the parameterized height field of equation 22 is

    P̂(h_v | Θ_v^n) = N(μ_v^n, Σ_v^n).

The distribution of surface given the height field, from equation 9, is

    P(s | h_v, θ) = δ(h_v − h(s, v)) P(s | θ) / P(h_v | θ).

Simplify the integral of the KR distribution to find

    P̂(s | Θ_v^n, φ, θ) = ∫ P(s | h_v, θ) P̂(h_v | Θ_v^n) dh_v = P(s | θ) P̂(h(s, v) | Θ_v^n) / P(h(s, v) | θ).    (42)

Note how the full surface distribution is simply modified by the ratio P̂(h(s, v) | Θ_v^n) / P(h(s, v) | θ). From equation 23 the Bayesian update of the KR distribution is

    P̂(s | Θ_v^n, x_{n+1}, φ, θ) = P(x_{n+1} | s, φ) P̂(s | Θ_v^n, φ, θ) / P̂(x_{n+1} | Θ_v^n, φ, θ).

Rewriting the updated distribution using equation 42 yields

    P̂(s | Θ_v^n, x_{n+1}, φ, θ) ∝ P(x_{n+1} | s, φ) P(s | θ) P̂(h(s, v) | Θ_v^n) / P(h(s, v) | θ).    (45)

For maximally informative inference of the new KR we minimize, from equation 28,

    D( P̂(s | Θ_v^n, x_{n+1}, φ, θ), P(s | Θ_v̄^{n+1}, θ) ).

Note that it is not assumed here that v and v̄ have the same dimension. Expanding the probability distributions within the logarithm appearing above yields six terms, each having the form of an information (or uncertainty). Together the six terms paint a descriptive picture of how information is acquired by the maximally informative update when taken as three groups of two terms: denote by "new KR" the two terms with v̄ and Θ_v̄^{n+1}, by "previous KR" the two terms with v and Θ_v^n and no data, and by "new data" the two terms with data dependency. Now, noting the signs on these quantities, because D is positive, the whole point of choosing a good Θ_v̄^{n+1} approximation by minimizing D is that

Expected information in new KR ≃ (Expected information in previous KR + Expected information in new data),
or, in very rough terms, we may see the update as capturing the sum total of the available knowledge:

Total knowledge = Prior knowledge + New knowledge from data.

Because only terms depending upon the update parameters v̄ and Θ_v̄^{n+1} are needed to perform the minimization, we drop the other terms at this point; after making the multinormal substitutions for the distributions in the above we obtain the reduced objective, equation 50. To simplify the P̂'s appearing in equation 50, the distribution of surface given old knowledge and new data, marginalized to the height field v̄, is useful, as is seen by observing equations 47 and 50. Thus, consider

    P̂(h_v̄ | Θ_v^n, x_{n+1}, φ, θ),

found by making substitutions into equation 45 for the assumed distributions. Since it is not necessarily the case that v̄_i ∈ {v_j} or that v_i ∈ {v̄_j}, proceed by marginalizing to the union of the components of v̄ and v, which we denote v̄ ∪ v, and then to the v̄ components. Let A_{v̄∪v,s} denote the projection from v_s to v̄ ∪ v, A_{v̄,v̄∪v} the projection from v̄ ∪ v to v̄, and A_{v̄,v} the projection from v to v̄. Performing the two projections (from v_s to v̄ ∪ v, and then from v̄ ∪ v to v̄) in order, we find (not necessarily in most simple form), using the results of appendices 12.2-12.5, the mean and covariance of this marginal; these are the quantities of equations 53. Using the results of appendix 12.6, the quantities of equations 53 correspond to the values of the mean and covariance parameters of the new KR found at the minimum Kullback-Leibler distance, i.e. the minimization is immediately apparent from those results. Thus equations 53 are the Generalized Kalman Filter (GKF) update equations for the surface inference example, yet they are quite a bit more general (the necessary change of variables needed when the forward projection is nonlinear appears in appendix 12.10). Having these update equations allows one to consider updating a representation of any dimension relative to the original representation. Thus:
• knowledge may be represented in finer detail, corresponding to the old representation being contained in the new;
• knowledge may be represented in the same detail, corresponding to the case when the new representation is the same as the old representation; or
• knowledge may be tossed, corresponding to the case when the new representation does not contain the old representation.
The maximally informative inference approach, and its result of the Kullback-Leibler distance on conditional posteriors, led directly here to deriving the GKF and to the solution of the problem of storing knowledge at scales adaptive to the actual needs of the data driving the update. The standard KF is discussed in [1].

Specializing the GKF
When the surface of interest is itself a discrete height field, and the KR representation basis never changes in dimension nor position from that height field's basis, then all projections appearing in equations 53 and following are identities, and the update equations simplify to the standard Kalman filter equations, in effect equations 55 only, given suitable identification of the variables.
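In that fixed-basis special case the update takes the familiar measurement-update form. The sketch below is the standard Kalman measurement update written in the notation of the likelihood section; it is this special case only, with all GKF projections equal to the identity:

```python
import numpy as np

def kf_update(mu, Sigma, x, M, Sigma_eps):
    """Multinormal measurement update for x = M h + eps, eps ~ N(0, Sigma_eps).
    This is the fixed-basis special case in which all GKF projections are
    identities: the standard Kalman measurement update."""
    S = M @ Sigma @ M.T + Sigma_eps          # innovation covariance
    K = Sigma @ M.T @ np.linalg.inv(S)       # gain
    mu_new = mu + K @ (x - M @ mu)
    Sigma_new = Sigma - K @ M @ Sigma
    return mu_new, Sigma_new

# Two heights, one noisy observation of their difference.
mu = np.zeros(2)
Sigma = np.eye(2)
M = np.array([[1.0, -1.0]])
x = np.array([0.8])
mu1, Sigma1 = kf_update(mu, Sigma, x, M, np.array([[0.1]]))
print(mu1, Sigma1)
```

A single observation of the difference of two heights tightens the posterior and, notably, correlates the two sites even though the prior was independent.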

Information learned
Once a new set of parameters has been chosen, and for the purpose of evaluating the new update in the context of other possible updates at different scales using different representational bases, it is useful to have the quantity of information about the surface distribution that is contained in the KR at the maximally informative update. Using the results of appendix 12.6 in equation 50 gives this information, up to a constant. Note that the d's (representation basis dimensions) from the d·log(2π)'s of equation 94 have cancelled. However, the d's remain hidden within the terms as matrix dimensions. When considering optimizing learned information against storage resources, one must weigh a separate cost in bits for the memory used against the bits learned, the expression above. Note also, interestingly, that the expression above contains a BIC-like log(d) dependence term.
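One convenient bookkeeping quantity with the same flavor is the KL distance between the updated and previous multinormal KR distributions, expressed in bits. This is a sketch of the accounting only, not the paper's exact section 8 expression:

```python
import numpy as np

def info_gained_bits(mu0, S0, mu1, S1):
    """Bits by which the update moved the height marginal, measured as
    D( new || old ) between the two multinormal KR distributions.
    (A sketch of the bookkeeping, not the paper's exact expression.)"""
    d = len(mu0)
    S0inv = np.linalg.inv(S0)
    dm = mu1 - mu0
    nats = 0.5 * (np.trace(S0inv @ S1) + dm @ S0inv @ dm - d
                  + np.log(np.linalg.det(S0) / np.linalg.det(S1)))
    return nats / np.log(2)

# Shrinking the variance by 4x at one of two sites:
bits = info_gained_bits(np.zeros(2), np.eye(2),
                        np.zeros(2), np.diag([0.25, 1.0]))
print(bits)
```

A bits-used penalty, as discussed in the observations section, would then be subtracted from this quantity when comparing candidate representations.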

Search for update parameters
Now that we know what the update equations for the updating of the KR distribution look like, it is worthwhile considering how an updating scheme might be implemented to acquire information at the appropriate scale. First, we dismiss the notion that we will ever be using the continuous height field v_s (the support of s) at any time: none of the update equations force that to happen! Second, since we have concluded that computationally v_s is a discrete set, and since there will always be pathological cases where the surface is much rougher than we care to represent, we acknowledge that fact and proceed by presenting a useful algorithm which allows the updating of the KR while maintaining the ability to explore a large range of scales. The following multigrid-style algorithm provides the general flavor:
• Choose v_s denser by several orders of scale than the current representation, using other criteria associated with the knowledge of the data acquisition system (see below).
• Choose v at regular scales intermediate between v s and the old KR on v, compute the updates on all v chosen at these scales.
• Compute the information learned at each scale.
• Plot the information learned as a function of increasing density (decreasing scale).
• Choose, based on exploration of the plot and on the costs associated with storing the learned information, whether to explore other octaves of scale. If the choice is to explore, repeat the above procedure.
• If the choice is to settle on an informationally and storage-attractive KR, select it and update the representation accordingly.
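The loop above can be sketched on a toy 1D problem. Everything here is an illustrative assumption rather than the paper's construction: an exponential prior covariance on a grid of m points, measurements that are pixel averages at a fixed resolution, and a hypothetical linear bit cost per stored component. Information learned at each scale is the gaussian entropy decrease, and the selected scale maximizes information minus storage cost.

```python
import numpy as np

def prior_cov(m, ell=0.2):
    """Exponential covariance exp(-|x-x'|/ell) on an m-point grid over [0,1]."""
    x = np.linspace(0.0, 1.0, m)
    return np.exp(-np.abs(x[:, None] - x[None, :]) / ell) + 1e-9 * np.eye(m)

def info_learned(m, n_pix=8, noise=0.01):
    """Bits learned when an m-point grid is updated by n_pix pixel averages."""
    C = prior_cov(m)
    H = np.zeros((n_pix, m))
    for i in range(n_pix):           # pixel i averages the grid points it covers
        idx = np.arange(m)[(np.arange(m) * n_pix) // m == i]
        H[i, idx] = 1.0 / len(idx)
    post = np.linalg.inv(np.linalg.inv(C) + H.T @ H / noise)
    return 0.5 * (np.linalg.slogdet(C)[1] - np.linalg.slogdet(post)[1]) / np.log(2)

scales = [8, 16, 32, 64]                         # candidate grid densities
infos = {m: info_learned(m) for m in scales}     # the "learning curve"
bit_cost = 0.05                                  # assumed storage cost per component
best_m = max(scales, key=lambda m: infos[m] - bit_cost * m)
```

In practice the learning curve flattens once the grid out-resolves the measurements, which is exactly the behavior the plot-and-choose step exploits.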
In the surface reconstruction problem, data often comes in the form of images. The images may come from devices with vastly different resolutions, and the known parameters of pixel size, point spread function, and geometry determine the appropriate reconstruction scale. Finally, adapting the surface to resolve sub-pixel scales requires a memory-aggressive approach which extends the exploration farther out on the learning curve, towards smaller, denser representation scales.

Conclusion
Field inference has been generalized from the typical discrete fixed-basis setting to a continuous-basis setting. The problem of surface inference was solved in the context of continuous field inference. Using the approach of acquiring the maximally informative KR distribution, the GKF equations were found. The GKF allows the updated KR parameters to be found at any scale and/or "positions" (abstractly, basis components). The approach allows the learning of information at the relevant scales desired. It provides an information-theoretic justification for location-dependent adaptive multi-grid inference. It also effectively provides similar justification for a scale-adaptive MDL method. This is apparently the first time that the maximally informative inference of continuous-basis objects and the multigrid approach have been rigorously justified.

Acknowledgements
I thank the members of the Ames Data Understanding group for their interest and comments, especially the invaluable, valiant contributions of Dr. Robin D. Morris, who thoughtfully, carefully, and painstakingly spent a week in agony checking the maths (any remaining mistakes are fully mine, however), and Drs. Vadim Smelyanskiy and David Maluf for their comments. Finally, immense thanks go to Dr. Peter Cheeseman for comments and support. This project was partially supported by the NASA Ames Center for Excellence in Information Technology contract NAS-214217.

Construction of a 2D surface prior
In this appendix we first introduce the reader to the fourier representation of a gaussian process, then, using the notions developed, find the representation for a 2D gaussian process over the plane, where the correlations of the process at points x and y are proportional to exp(−k|x − y|), k > 0, a simple translation-invariant choice for the form of the correlation structure of the probability density of surfaces having the plane as support. The utility for the GKF of having this process is that it serves as a simply computed algorithmic representation of the prior for surfaces having the plane as support.

Consider the real process given by the fourier series

f(n) = Σ_k c_k e^{ikn},    (61)

where the coefficients c = (c_k) are constrained by f ∈ R so that c_k = c*_{−k}, and the n and k range over Z_N. Let the coefficients be random variables, c_k = x_k + i y_k, with x_k ∼ N(0, σ_k) and y_k ∼ N(0, σ_k) both gaussian distributed random variables with mean 0 and standard deviation σ_k. Now, dropping the k's, the joint density of (x, y) is given by

P(x, y) = (1/(2πσ²)) exp(−(x² + y²)/(2σ²)).

From this, the joint density of (r, θ), where r = √(x² + y²) and θ = arctan(y/x), is given by

P(r, θ) = (r/(2πσ²)) exp(−r²/(2σ²)).

The density of r is given directly by integrating over θ,

P(r) = (r/σ²) exp(−r²/(2σ²)),

while the density of θ is given directly by integrating over r,

P(θ) = 1/(2π).

Making a change of variables, the density of cc* = x² + y² = r² is given by the exponential distribution

P(cc*) = (1/(2σ²)) exp(−cc*/(2σ²)).    (66)

The distribution of c_k + c_{−k} = 2Re[c_k] = 2x_k, k > 0, is of interest because the process is real.
This sum is just a gaussian with zero mean but twice the variance of the components x and y of c (equation 67). Note that the actual coefficients of equation 61, c_k e^{ikn} + c_{−k} e^{−ikn} = 2Re[c_k e^{ikn}], also have the distribution of equation 67, since the phase of c_k is uniformly distributed in [0, 2π].

Now, given a set of integers ζ ⊂ Z_N, we may ask for the density of the sampled values of the process f at ζ = (n_1, n_2, …, n_m), where m = |ζ|, n_i ∈ Z_N, i = 1, …, m. Define f(ζ) := (f(n_1), …, f(n_m)). The probability density which describes the sampled values is multivariate gaussian, since the representation of f(ζ, c) as a fourier series shows that it is the sum of gaussian random vectors with components 2Re[c_k e^{ikn}].

The covariances of the process are found as

Σ_{mn} = E[f(m) f(n)] = Σ_{k,l} E[c_k c*_l] e^{i(km − ln)} = Σ_k E[c_k c*_k] e^{ik(m − n)},

where we used the fact that coefficients of different frequency are uncorrelated, i.e. E[c_k c*_l] = 0 for k ≠ l. Define the power spectrum R(k) as

R(k) := E[c_k c*_k] = 2σ_k².

Then the covariance is given by the fourier transform of the power spectrum,

Σ_{mn} = Σ_k R(k) e^{ik(m − n)},

where we have acknowledged that the covariance structure depends only upon the difference m − n. From this we see that the inverse fourier transform of the covariance is the power spectrum. Finally, note that the density of c_k c*_k given by equation 66 allows us to infer the parameters σ_k, the standard deviations of the gaussian processes x_k and y_k underlying the coefficients c_k. In the next section the basis for gaussian processes developed here is extended to the continuous 2D case to compute the power spectrum of a process specified by a continuous-basis covariance structure.
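The construction above can be checked by direct simulation: draw the coefficients c_k as described, synthesize the real process, and compare the empirical covariance against Σ_k R(k) e^{ik(m−n)}. The spectrum σ_k = 1/(1+k) and the choice of zeroing the k = 0 and k = N/2 modes are arbitrary simplifications for the sketch.

```python
import numpy as np

rng = np.random.default_rng(1)
N, S = 16, 20000                     # process length, number of sample paths
ks = np.arange(1, N // 2)            # k = 0 and k = N/2 modes set to zero
sigma = 1.0 / (1.0 + ks)             # assumed spectrum for the sketch

# c_k = x_k + i y_k with x_k, y_k ~ N(0, sigma_k); c_{-k} = conj(c_k).
x = rng.normal(0.0, sigma, size=(S, len(ks)))
y = rng.normal(0.0, sigma, size=(S, len(ks)))
n = np.arange(N)
phase = 2.0 * np.pi * ks[None, :] * n[:, None] / N          # shape (N, K)

# f(n) = sum_k 2 Re[c_k e^{i phase}] = sum_k 2 (x cos(phase) - y sin(phase))
f = 2.0 * (np.cos(phase) @ x.T - np.sin(phase) @ y.T).T     # shape (S, N)

emp_cov = f.T @ f / S                                       # empirical E[f(m) f(n)]
R = 2.0 * sigma**2                                          # power spectrum
theory = np.array([[np.sum(2 * R * np.cos(2 * np.pi * ks * (m - nn) / N))
                    for nn in range(N)] for m in range(N)])
```

The factor 2 in `theory` accounts for the ±k pairs; the empirical and theoretical covariances agree to Monte Carlo accuracy.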

The continuous-basis 2D process
Similar to the development of the last section, in two dimensions, given the continuous-basis covariance Σ_x = exp(−k|x|), k > 0, the power spectrum is found as the inverse fourier transform of the covariance, i.e.

R(u) = ∫ exp(−k|x|) e^{−i u·x} d²x = 2πk / (k² + |u|²)^{3/2}.

Note that we have neglected the proportionality constant 1/2π in the fourier transform, amounting to normalizing the delta function to 2π, and have scaled u to units of cycles per 2π. Note also that both the covariance of the process and the power spectrum scale with the same proportionality constant. Harmonic analysis is discussed in [3].
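The closed form above (reconstructed here from the standard 2D fourier pair for an exponential) can be verified by brute-force numerical integration; the grid extent and spacing below are accuracy/memory tradeoffs, and k = 1 is an arbitrary choice.

```python
import numpy as np

# Verify: integral over R^2 of exp(-k|x|) e^{-i u.x} d^2x = 2*pi*k / (k^2 + |u|^2)^{3/2}
k = 1.0
h = 0.05                                     # grid spacing
ax = np.arange(-30.0, 30.0 + h / 2, h)       # exp(-30) makes truncation negligible
X, Y = np.meshgrid(ax, ax)
E = np.exp(-k * np.sqrt(X**2 + Y**2))        # the covariance kernel on the grid

def spectrum_numeric(u):
    # By radial symmetry the imaginary part vanishes; integrate the real part.
    return np.sum(E * np.cos(u * X)) * h * h

def spectrum_closed(u):
    return 2.0 * np.pi * k / (k**2 + u**2) ** 1.5

vals = [(spectrum_numeric(u), spectrum_closed(u)) for u in (0.0, 1.0, 2.0)]
```

At u = 0 the integral is exactly 2π for k = 1, a convenient sanity anchor.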

Multinormal density MGF
The moment generating function (MGF) for a probability distribution f is defined as the functional

M_f(λ) := E_f[e^{λ·x}],

from which holds the property that the moments are found as derivatives of the MGF with respect to the parameter λ at λ = 0; writing U(y, z) for the matrix U = [U_ij] with U_ij(y, z) := y_i z_j,

E_f[x] = ∂M/∂λ |_{λ=0},   E_f[U(x, x)] = ∂²M/∂λ∂λ |_{λ=0}.

Take the multinormal density function for x,

N(Θ)(x) = (2π)^{−d/2} |Σ|^{−1/2} exp(−(x − µ)^T Σ^{−1} (x − µ)/2),

where U(y) is defined such that U_ij(y) := U_ij(y, y) and d = Dim(x). Minus twice the exponent of the integral defining the MGF of N(Θ)(x) may be written as

(x − µ − Σλ)^T Σ^{−1} (x − µ − Σλ) − 2λ^T µ − λ^T Σλ,

from which the moment generating function is immediately found as

M(λ) = exp(λ^T µ + λ^T Σλ/2).

From the above we have

E[x] = µ,   E[U(x, x)] = Σ + U(µ),

which agrees with the calculation of appendix 12.2. Two things to note: 1. The inverse of Σ is assumed to exist. 2. All moments are determined by simple products and sums of the parameters (µ, Σ).
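Both the MGF closed form and the second-moment identity E[U(x, x)] = Σ + U(µ) can be checked by Monte Carlo; the values of µ, Σ, and λ below are arbitrary examples.

```python
import numpy as np

rng = np.random.default_rng(2)
mu = np.array([1.0, -1.0])
Sigma = np.array([[2.0, 0.5],
                  [0.5, 1.0]])
lam = np.array([0.3, 0.2])

xs = rng.multivariate_normal(mu, Sigma, size=200_000)

# MGF: sample mean of exp(lam . x) vs exp(lam.mu + lam.Sigma.lam / 2).
mgf_mc = np.mean(np.exp(xs @ lam))
mgf_closed = np.exp(lam @ mu + 0.5 * lam @ Sigma @ lam)

# Second moments: E[U(x, x)] = Sigma + U(mu) = Sigma + mu mu^T.
second_moment_mc = xs.T @ xs / len(xs)
second_moment_closed = Sigma + np.outer(mu, mu)
```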

Multinormal linear change of variables
Letting y = Ax be the change of variables, where P(x | Θ) = N(Θ)(x), the MGF of the density P(y | Θ) is found from the MGF of the density P(x | Θ) in a straightforward manner as

M_y(λ) = E_y[e^{λ·y}] = E_x[e^{λ·Ax}] = M_x(A^T λ) = exp(λ^T Aµ + λ^T AΣA^T λ/2).    (88)

Note that the dropped subscripts x and y of the Θ and λ are easily determined by the context, and that the density used to take the expectation naturally changed in equation 88 from P(y | Θ) to P(x | Θ) without confusion. With this result, and referring to equation 86 and preceding, we find that the density for y is multinormal with

µ_y = Aµ,   Σ_y = AΣA^T.

Note that nowhere was any condition on A mentioned or assumed; thus A may be a rectangular matrix or otherwise not of full rank.
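A quick Monte Carlo check of µ_y = Aµ and Σ_y = AΣAᵀ, using a deliberately rectangular, rank-deficient A to illustrate the final remark; all parameter values are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(3)
mu = np.array([0.5, -1.0, 2.0])
Sigma = np.array([[2.0, 0.3, 0.1],
                  [0.3, 1.0, 0.2],
                  [0.1, 0.2, 1.5]])
A = np.array([[1.0, 2.0, 0.0],
              [2.0, 4.0, 0.0]])      # rank 1: full rank is not required

xs = rng.multivariate_normal(mu, Sigma, size=200_000)
ys = xs @ A.T                        # samples of y = A x

mean_mc, mean_closed = ys.mean(axis=0), A @ mu
cov_mc, cov_closed = np.cov(ys.T), A @ Sigma @ A.T
```

The resulting Σ_y is singular (rank 1), yet the moment formulas hold without modification, as claimed.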

Multinormal projections
Another useful operation is projection onto a subset of the components of the argument of the multinormal distribution. Projections may be trivially represented as a linear operation, where the "projection matrix" is typically a rectangular matrix having a single element of value 1 in each row, at most one in each column, and zeroes elsewhere. Finding the distribution of the projected variables is equivalent to marginalizing over the components not in the projection. Let A be the projection matrix selecting a subset of the variables of x as y = Ax. Then, using the result of section 12.3, we immediately find that such marginalization integrals yield the multinormal distribution with parameters (Aµ, AΣA^T). Both the vector Aµ and the matrix AΣA^T are now just appropriately rearranged pieces of the original vector µ and matrix Σ: specifically, if y_k = x_{i_k}, then [AΣA^T]_{pq} = Σ_{i_p i_q}.
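The "rearranged pieces" statement is directly checkable: for a 0/1 selection matrix, Aµ and AΣAᵀ are just index lookups into µ and Σ. The particular components selected here are an arbitrary example.

```python
import numpy as np

mu = np.array([1.0, 2.0, 3.0])
Sigma = np.array([[4.0, 1.0, 0.5],
                  [1.0, 3.0, 0.2],
                  [0.5, 0.2, 2.0]])

A = np.array([[0.0, 0.0, 1.0],       # row selecting x_2
              [1.0, 0.0, 0.0]])      # row selecting x_0
idx = [2, 0]                          # the same selection as index lists

proj_mu = A @ mu                      # equals mu[idx]
proj_Sigma = A @ Sigma @ A.T          # equals Sigma[np.ix_(idx, idx)]
```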

Multinormal multiplication
One operation which frequently occurs in Bayesian inference is taking the product of two multinormal distributions of the same variable and normalizing that product to find a new distribution. Finding the new Θ = (µ, Σ) amounts to completing the square, but it is useful to state the result, and we do that here. Let Θ_1 = (µ_1, Σ_1) and Θ_2 = (µ_2, Σ_2) be the parameters of the multinormal distributions in the product. Then

Σ = (Σ_1^{−1} + Σ_2^{−1})^{−1},   µ = Σ (Σ_1^{−1} µ_1 + Σ_2^{−1} µ_2).
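This standard product rule can be verified pointwise: the ratio of N(Θ_1)(x)·N(Θ_2)(x) to N(Θ)(x) must be a constant independent of x. The parameter values below are arbitrary examples.

```python
import numpy as np

def gauss(x, mu, S):
    """Multinormal density N((mu, S))(x)."""
    d = len(mu)
    z = x - mu
    return np.exp(-0.5 * z @ np.linalg.inv(S) @ z) / np.sqrt(
        (2 * np.pi) ** d * np.linalg.det(S))

mu1, S1 = np.array([0.0, 1.0]), np.array([[2.0, 0.3], [0.3, 1.0]])
mu2, S2 = np.array([1.0, -1.0]), np.array([[1.0, -0.2], [-0.2, 1.5]])

# Product parameters by the stated rule.
S = np.linalg.inv(np.linalg.inv(S1) + np.linalg.inv(S2))
mu = S @ (np.linalg.inv(S1) @ mu1 + np.linalg.inv(S2) @ mu2)

pts = [np.array([0.0, 0.0]), np.array([1.0, 2.0]), np.array([-2.0, 0.5])]
ratios = [gauss(x, mu1, S1) * gauss(x, mu2, S2) / gauss(x, mu, S) for x in pts]
```

A constant ratio across the test points confirms the product is proportional to N((µ, Σ)).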

Expected uncertainty in multinormals
It is useful to know the expected uncertainty of one gaussian distribution in the context of another. Consider the quantity

−∫ N(Θ_1)(v) log N(Θ_2)(v) dv,

which occurs in similar form in the development of the Generalized Kalman Filter (section 6) and represents the expected uncertainty, or entropy, of the surface representation in the context of the updated surface distribution. The value of this integral is found straightforwardly, using the results mentioned in appendix 12.2, as

(1/2) [ d log(2π) + log|Σ_2| + tr(Σ_2^{−1} Σ_1) + (µ_1 − µ_2)^T Σ_2^{−1} (µ_1 − µ_2) ].
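The closed form (the gaussian cross-entropy, as reconstructed above) can be checked by Monte Carlo, along with the fact that it is never smaller than the value at Θ_2 = Θ_1; parameter values are arbitrary examples.

```python
import numpy as np

rng = np.random.default_rng(4)
d = 2
mu1, S1 = np.array([0.0, 0.0]), np.array([[1.5, 0.4], [0.4, 1.0]])
mu2, S2 = np.array([1.0, -0.5]), np.array([[2.0, 0.0], [0.0, 0.5]])

def cross_entropy(mu_a, S_a, mu_b, S_b):
    """-E_{N(mu_a,S_a)}[log N(mu_b,S_b)] in closed form."""
    dm = mu_a - mu_b
    S_bi = np.linalg.inv(S_b)
    return 0.5 * (d * np.log(2 * np.pi) + np.linalg.slogdet(S_b)[1]
                  + np.trace(S_bi @ S_a) + dm @ S_bi @ dm)

# Monte Carlo estimate of the same expectation.
xs = rng.multivariate_normal(mu1, S1, size=200_000)
z = xs - mu2
S2i = np.linalg.inv(S2)
mc = np.mean(0.5 * (d * np.log(2 * np.pi) + np.linalg.slogdet(S2)[1]
                    + np.einsum('ij,jk,ik->i', z, S2i, z)))

closed = cross_entropy(mu1, S1, mu2, S2)
self_entropy = cross_entropy(mu1, S1, mu1, S1)   # value at Theta_2 = Theta_1
```

The gap `closed - self_entropy` is the KL divergence, which is strictly positive here since Θ_2 ≠ Θ_1.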

Maximizing the expected information
Varying Θ_2, the minimum value of the uncertainty above occurs when Θ_2 = Θ_1. That this is true for the µ component of Θ_2 is immediate from the positive definite quadratic nature of that term. For the Σ component, the following fact, which follows from the properties of determinants and matrix inverses, facilitates the result: for positive definite M, log|M| ≤ tr(M) − d, with equality if and only if M = I (applied with M = Σ_2^{−1} Σ_1).

Notes on matrix inverses and submatrices
Given the invertible matrix V, composed of submatrices V_11, V_12, V_21, V_22, with its inverse similarly composed of submatrices V^11, V^12, V^21, V^22, it is immediate that the following relationships hold among the submatrices:

V^11 V_11 + V^12 V_21 = I,   V^11 V_12 + V^12 V_22 = N,
V^21 V_11 + V^22 V_21 = N,   V^21 V_12 + V^22 V_22 = I,    (98)

where I and N represent the identity and zero matrices respectively. Any quadratic operator x^T Q x may be decomposed using projection matrices A and Ā, where these are diagonal matrices with one and zero entries only, and where A + Ā = I, in the following manner:

x^T Q x = x_A^T Q_AA x_A + x_A^T Q_AĀ x_Ā + x_Ā^T Q_ĀA x_A + x_Ā^T Q_ĀĀ x_Ā,    (100)

where x_A = Ax and x_Ā = Āx. Now assume Q is symmetric and that it, Q_AA, and Q_ĀĀ are invertible, and rewrite this form as the sum of two terms by completing the square:

x^T Q x = (x_A + α)^T Q_AA (x_A + α) + x_Ā^T (Q_ĀĀ − Q_ĀA (Q_AA)^{−1} Q_AĀ) x_Ā,

where α = (Q_AA)^{−1} Q_AĀ x_Ā. Applying the identities of equation 98 to Q and its inverse then yields

Q_ĀĀ − Q_ĀA (Q_AA)^{−1} Q_AĀ = (Q^ĀĀ)^{−1},    (104)

which immediately provides an alternate method for marginalizing gaussian distributions.
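The marginalization consequence can be verified numerically: for N(0, Q⁻¹), the marginal covariance of the A-block equals both the A-block of Q⁻¹ and the inverse of the Schur complement of the Ā-block. Q below is an arbitrary symmetric positive definite example.

```python
import numpy as np

rng = np.random.default_rng(5)
M = rng.normal(size=(5, 5))
Q = M @ M.T + 5 * np.eye(5)          # symmetric positive definite precision
a, b = slice(0, 2), slice(2, 5)      # A block = first two components

Sigma = np.linalg.inv(Q)             # covariance of N(0, Q^{-1})

# Schur complement of the Abar block of Q.
schur = Q[a, a] - Q[a, b] @ np.linalg.inv(Q[b, b]) @ Q[b, a]
marginal_cov = np.linalg.inv(schur)  # should equal Sigma[A, A]
```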

Alternate inverse forms
In the GKF update equations, expressions occur for updating inverse matrices in terms of sums of other inverse matrices. Because one of the summand matrices may not be well-conditioned, it is of interest to find an expression for the updated matrix in terms of the other matrices which explicitly is not a function of their inverses. Thus, let P, Q, R be invertible matrices such that R^{−1} = P^{−1} + Q^{−1}. Then we find

R = P (P + Q)^{−1} Q = Q (P + Q)^{−1} P.
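The identity (reconstructed above; both orderings are standard) can be confirmed numerically. Note the right-hand forms invert only the sum P + Q, never P or Q individually; P and Q below are arbitrary SPD examples.

```python
import numpy as np

rng = np.random.default_rng(6)
A1 = rng.normal(size=(4, 4))
A2 = rng.normal(size=(4, 4))
P = A1 @ A1.T + np.eye(4)            # symmetric positive definite
Q = A2 @ A2.T + np.eye(4)

R_direct = np.linalg.inv(np.linalg.inv(P) + np.linalg.inv(Q))
R_alt1 = P @ np.linalg.inv(P + Q) @ Q    # no inverse of P or Q alone
R_alt2 = Q @ np.linalg.inv(P + Q) @ P
```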
MaxInfo Approx.
The elements going into the approximation are the prior, restricted to some knowledge about the field (in the main text example, this is the set of known surface height field values), and the Knowledge Representation (KR) distribution, which is the learned knowledge about the specifics of the surface at the n'th iteration of the GKF.
These form the approximate posterior, given by the integral over the known field values of the product of the KR distribution and the prior distribution conditioned on those values (equation 1). At update n + 1, the new data and the approximate posterior from iteration n are incorporated using the likelihood and Bayes' theorem to produce the data-dependent posterior. Capturing an approximation to this exact posterior in a new KR, using (1) above, via maximally informative statistical inference completes the GKF loop.