Approximations of Metric Graphs by Thick Graphs and Their Laplacians

The main purpose of this article is two-fold: first, to justify, in a self-contained presentation, the choice of Kirchhoff vertex conditions on a metric graph, as they appear naturally as a limit of Neumann Laplacians on a family of open sets shrinking to the metric graph ("thick graphs"); second, to show that the metric graph example is close to a physically more realistic model where the edges have a thin, but positive thickness. The tool used is a generalization of norm resolvent convergence to the case when the underlying spaces vary. Finally, we give some hints about how to extend these convergence results to some mild non-linear operators.


Introduction
The study of operators on metric graphs has been an ongoing and active area of research for at least two decades. Several natural questions arise in the study of Laplacians on metric graphs: First, as there is some freedom in defining a self-adjoint Laplacian on a metric graph due to the vertex conditions (see, e.g., [1] and the references therein), can one justify a certain choice of such vertex conditions? Second, in a realistic physical model (a thick graph), the wires have a thickness of order ε, but in the metric graph model, this is simplified to ε = 0: Can one justify some sort of limit of a Laplacian on the network with thickness ε > 0 as ε → 0?
The aim of this article is to give an answer to both questions. We show that the Neumann Laplacian on the ε-neighborhood of the metric graph (embedded in some ambient space R^{m+1}) converges to the Kirchhoff Laplacian on the metric graph. This gives answers to both questions above: First, the "natural" vertex conditions are the so-called Kirchhoff conditions; see Equations (3) and (4). Second, the limit problem is a good approximation to a realistic physical model on a thick graph as ε → 0. Note that the problem simplifies significantly in the limit, as we only have to consider a system of ODEs instead of a PDE on a complicated and ε-dependent space. Moreover, the problem on the metric graph can often be solved explicitly.
A technical difficulty is that the Laplacian on the thick graph and the one on the metric graph live on different spaces. We therefore generalize the notion of norm resolvent convergence to this case; this was first done in [2]; see also the monograph [3] for a history of the problem and [4] for a recent list of references. Convergence of the (discrete) spectrum for the Neumann Laplacian on a thick graph converging to a compact metric graph has already been established by variational methods in [5–7].
The aim of this article is also to provide an almost self-contained presentation of the results for linear operators on thick and metric graphs to the "non-linear" community, and to give some ideas of how they can be extended to some mild non-linear operators.

Metric Graphs and Their Laplacians
For a detailed presentation of metric graphs and their Laplacians, we refer to [1,3] and the references therein. Let X_0 denote a metric graph given by the data (V, E, ℓ), where V and E are the (at most countable) sets of vertices and edges, respectively, and where ℓ : E −→ (0, ∞), e ↦ ℓ_e, denotes the length of the edge e ∈ E; a metric edge will be the interval I_e := [0, ℓ_e]. The metric graph X_0 is now the disjoint union of all metric edges ⨆_{e∈E} I_e after identifying the endpoints ∂I_e with the corresponding vertices. A metric graph is a metric space using the intrinsic metric (i.e., d(s, s̃) is the length of the shortest path in X_0 between s and s̃). Moreover, there is a natural measure on X_0 given by the sum of the Lebesgue measures on each metric edge I_e.
As the Hilbert space on X_0, we choose

  H_0 := L²(X_0) = ⊕_{e∈E} L²(I_e),

where we write f ∈ L²(X_0) as a family (f_e)_{e∈E} with f_e ∈ L²(I_e); moreover, ⊕_{e∈E} L²(I_e) denotes the Hilbert orthogonal sum with f being in it if its squared norm

  ‖f‖²_{L²(X_0)} := ∑_{e∈E} ‖f_e‖²_{L²(I_e)}

is finite. Similarly, we define the decoupled Sobolev spaces H^k_dec(X_0) := ⊕_{e∈E} H^k(I_e). The label "dec" refers to the fact that for k ≥ 1, there is no relation between the (well-defined) values of f_e and its derivatives at a vertex v for different e ∈ E_v. Here, E_v denotes the set of edges that are adjacent to the vertex v ∈ V. Recall that functions in H¹(I_e) are continuous; using a suitable cut-off function, we conclude the Sobolev trace estimate

  |f_e(v)|² ≤ C_e ‖f_e‖²_{H¹(I_e)}   (1)

with C_e = 2/min{1, ℓ_e}, where f_e(v) denotes the evaluation of f_e at the endpoint of I_e corresponding to v ∈ V. In particular, we assume that

  ℓ_0 := inf_{e∈E} ℓ_e > 0,   (2)

so that the constants C_e are uniformly bounded. From (1) and (2), we then conclude that the subspace

  H¹(X_0) := { f ∈ H¹_dec(X_0) : f_e(v) = f_{e'}(v) for all e, e' ∈ E_v and all v ∈ V }   (3)

is closed in H¹_dec(X_0). We denote by f(v) := f_e(v) the common value of f at the vertex v. It follows that

  l_0(f) := ∑_{e∈E} ‖f_e'‖²_{L²(I_e)},  dom l_0 := H¹(X_0),

defines a closed, non-negative quadratic form in H_0 = L²(X_0). The associated self-adjoint and non-negative operator L_0 is given by

  (L_0 f)_e = −f_e'',  dom L_0 = { f ∈ H¹(X_0) ∩ H²_dec(X_0) : ∑_{e∈E_v} f_e'(v) = 0 for all v ∈ V }.   (4)

Here, f_e'(v) denotes the (weak) derivative of f_e along e towards the vertex v. The operator L_0 is sometimes referred to as the (generalized) Neumann Laplacian or Kirchhoff Laplacian (the latter because of the flux condition ∑_{e∈E_v} f_e'(v) = 0 on the derivatives). Note that for vertices of degree one, the vertex condition is just the usual Neumann boundary condition f_e'(v) = 0, and for vertices of degree two, we have f_{e₁}(v) = f_{e₂}(v) and f_{e₁}'(v) + f_{e₂}'(v) = 0, i.e., the continuity of f and of its derivative across v (recall that f_e'(v) denotes the derivative towards the vertex v).
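To see the Kirchhoff conditions at work, here is a small numerical sketch (our own illustration; the helper name `star_kirchhoff_eigs` is ours, not from the text). It discretizes the Kirchhoff Laplacian on an equilateral star with three unit edges by linear finite elements with lumped mass: continuity at the central vertex is built in by sharing one node, and the Kirchhoff flux condition arises automatically as the natural condition of the quadratic form. For this graph, the spectrum can be computed by hand: 0, then (π/2)² with multiplicity two, then π².

```python
import numpy as np

def star_kirchhoff_eigs(num_edges=3, n=200, k=4):
    """Lowest k eigenvalues of the Kirchhoff Laplacian on an equilateral
    star graph (unit edge lengths, Neumann conditions at the outer ends),
    discretized by linear finite elements with lumped mass."""
    h = 1.0 / n
    N = 1 + num_edges * n              # shared center node + n nodes per edge
    A = np.zeros((N, N))               # stiffness matrix (quadratic form)
    M = np.zeros(N)                    # lumped mass matrix (diagonal)

    def couple(a, b):                  # one interval of length h between nodes a, b
        A[a, a] += 1 / h; A[b, b] += 1 / h
        A[a, b] -= 1 / h; A[b, a] -= 1 / h
        M[a] += h / 2;    M[b] += h / 2

    for e in range(num_edges):
        first = 1 + e * n
        couple(0, first)               # center -- first node of edge e
        for i in range(n - 1):
            couple(first + i, first + i + 1)

    # generalized eigenproblem A u = lambda M u, symmetrized via M^{-1/2}
    d = 1.0 / np.sqrt(M)
    return np.linalg.eigvalsh(d[:, None] * A * d[None, :])[:k]

eigs = star_kirchhoff_eigs()
```

With n = 200 nodes per edge, the four lowest discrete eigenvalues reproduce 0, (π/2)², (π/2)², π² to several digits; refining n exhibits the expected roughly quadratic convergence in h.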

Thick Graphs and Their Laplacians
We assume first that the metric graph X_0 is embedded in some space R^{m+1} (m ≥ 1) such that all edges are straight line segments in R^{m+1}. For ε > 0, denote by

  X̆_ε := { x ∈ R^{m+1} : d(x, X_0) < ε/ω_m }

the open ε/ω_m-neighborhood of X_0. Here, ω_m is the mth root of the volume of the unit Euclidean ball in R^m, i.e., ω_m^m = vol_m B_1(0). We say that X̆_ε is a graph-like space or a thick graph constructed from the metric graph X_0 if there is ε_0 > 0 such that

  X̆_ε = ⋃_{v∈V} X̆_ε,v ∪ ⋃_{e∈E} X̆_ε,e   (5)

(up to a set of measure zero) for all ε ∈ (0, ε_0] (cf. Figure 1), where X̆_ε,v and X̆_ε,e are open and pairwise disjoint subsets of R^{m+1} such that the so-called vertex and edge neighborhoods fulfil

  X̆_ε,v ≅ εX_v  and  X̆_ε,e ≅ Ĭ_ε,e × εB,

i.e., X̆_ε,v is isometric to the ε-scaled version of an open subset X_v, X̆_ε,e is isometric with the product of an interval Ĭ_ε,e of length ℓ_e − 2a_e ε and the ε-scaled version of a ball B ⊂ R^m of radius 1/ω_m, having m-dimensional volume one by the definition of the scaling factor ω_m. Moreover, 2a_e ε is the sum of the lengths of the two parts of the metric edge inside the vertex neighbourhoods. For finite graphs, the existence of ε_0 > 0 is no restriction, but for infinite graphs with arbitrarily large vertex degree, this might be a restriction on the embedding and the edge lengths. More details on spaces constructed according to a graph (so-called "graph-like spaces") can be found in the monograph [3]; see also the references therein.
Figure 1. The decomposition of a graph-like space of thickness of order ε into vertex neighborhoods X̆_ε,v (dark grey) and edge neighborhoods X̆_ε,e (light grey) according to a metric graph X_0 embedded in R².
As the Hilbert space, we set H̆_ε := L²(X̆_ε). As the operator, we use the (non-negative) Neumann Laplacian L̆_ε defined as the self-adjoint and non-negative operator associated with the closed and non-negative quadratic form given by

  l̆_ε(u) := ‖∇u‖²_{L²(X̆_ε)},  dom l̆_ε := H¹(X̆_ε).

In our calculations later, it is more convenient to work with edge neighborhoods X_ε,e that are isometric with the product of the original edge I_e times the ε-scaled ball B, i.e.,

  X_ε,e ≅ I_e × εB.

We then construct X_ε as the space obtained from gluing the building blocks X_ε,v := εX_v and X_ε,e such that a decomposition similar to (5) holds, now without the breve label, namely

  X_ε = ⋃_{v∈V} X_ε,v ∪ ⋃_{e∈E} X_ε,e   (6)

(again up to a set of measure zero). Note that X_ε is defined as an abstract flat manifold with boundary and might not be embeddable into R^{m+1} any longer. We also call X_ε a graph-like space or thick graph. We state that the Neumann Laplacians on X_ε and X̆_ε are "close to each other" in Lemma 4.
Due to a decomposition of X_ε into its building blocks similar to (5) and the scaling behavior, the norm in the Hilbert space H_ε := L²(X_ε) reads as

  ‖u‖²_{H_ε} = ∑_{v∈V} ε^{m+1} ‖u_v‖²_{L²(X_v)} + ∑_{e∈E} ε^m ‖u_e‖²_{L²(I_e×B)},

where u_v and u_e denote the restrictions of u onto the ε-independent building blocks X_v and X_e = I_e × B. Note that with this notation, we have put all ε-dependencies into the norm (and later also into the quadratic form).
As the operator, we use the (non-negative) Neumann Laplacian L_ε defined as the self-adjoint and non-negative operator associated with the closed and non-negative quadratic form given by

  l_ε(u) := ∑_{v∈V} ε^{m−1} ‖∇u_v‖²_{L²(X_v)} + ∑_{e∈E} ε^m ( ‖u_e'‖²_{L²(I_e×B)} + ε^{−2} ‖∇_B u_e‖²_{L²(I_e×B)} ),

using the scaling behavior of the building blocks. Here, u_e' denotes the derivative with respect to the longitudinal (first) variable s, and ∇_B denotes the derivative with respect to the second variable y ∈ B.

Convergence of the Resolvents
How can we now compare the two Laplacians L_0 and L_ε (resp. L̆_ε)? The idea is first to consider the resolvents

  R_0 := (L_0 + 1)^{−1} and R_ε := (L_ε + 1)^{−1}

in H_0, resp. H_ε, since they are bounded operators. In order to define a norm difference of these resolvents, we need a so-called identification operator J_ε : H_0 −→ H_ε, in our situation given by

  (J_ε f)_v := 0 and (J_ε f)_e(s, y) := ε^{−m/2} f_e(s),

i.e., we set J_ε f to zero on the vertex neighborhoods and transversally constant on the edge neighborhoods, together with an appropriate rescaling constant. As the identification operator in the opposite direction, we use J_ε* : H_ε −→ H_0, where an easy calculation shows that

  (J_ε* u)_e(s) = ε^{m/2} ∫_B u_e(s, y) dy.
It is easy to see that J * ε J ε f = f , i.e., J ε is an isometry.
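A discrete caricature of the identification operators may help (all names and grid parameters here are ours): sample an edge neighborhood on an s-grid times a transverse grid of `ny` points with weights 1/ny, so that the discrete volume of B is one, mimicking vol_m B = 1. Then J_ε is the transversally constant extension scaled by ε^{−m/2}, J_ε* is the scaled transverse mean, and both J_ε* J_ε = id and the isometry ‖J_ε f‖ = ‖f‖ (in the ε-weighted norm) hold exactly.

```python
import numpy as np

eps, m = 0.05, 2               # thickness parameter and transverse dimension
ns, ny, hs = 40, 16, 1.0 / 40  # grid sizes and longitudinal step

def J(f):                      # transversally constant extension, factor eps^(-m/2)
    return eps ** (-m / 2) * np.repeat(f[:, None], ny, axis=1)

def Jstar(u):                  # transverse mean (weights 1/ny, discrete vol B = 1),
    return eps ** (m / 2) * u.mean(axis=1)   # factor eps^(m/2)

f = np.sin(np.linspace(0.0, np.pi, ns))      # a function on the metric edge
g = Jstar(J(f))                              # reproduces f: J*J = id

# weighted norms: ||u||^2 = eps^m * hs * sum_s mean_y |u|^2 on the edge block
norm_Jf = eps ** m * hs * (J(f) ** 2).mean(axis=1).sum()
norm_f = hs * (f ** 2).sum()
```

The two scaling factors cancel exactly, which is the discrete counterpart of J_ε being an isometry.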
We now compare the two resolvents, sandwiched with J_ε. Let

  D_ε := R_ε J_ε − J_ε R_0.

What does D_ε look like? The best way to deal with it is to consider ⟨D_ε g, w⟩_{H_ε} for g ∈ H_0 and w ∈ H_ε. Setting f := R_0 g and u := R_ε w, we have

  ⟨D_ε g, w⟩_{H_ε} = ⟨J_ε g, u⟩_{H_ε} − ⟨J_ε f, w⟩_{H_ε} = ⟨J_ε L_0 f, u⟩_{H_ε} − ⟨J_ε f, L_ε u⟩_{H_ε},

where, on each edge neighborhood,

  (L_ε u)_e = −u_e'' + (1/ε²) L_B u_e

and L_B is (minus) the Neumann Laplacian on B acting on the second variable y ∈ B. In particular, we conclude

  ⟨D_ε g, w⟩_{H_ε}
   = ε^{m/2} ∑_{e∈E} ∫_{I_e} ∫_B ( −f_e''(s) u_e(s,y) − f_e(s) ( −u_e'' + (1/ε²) L_B u_e )(s,y) ) dy ds
   = ε^{m/2} ∑_{e∈E} ∑_{v∈∂I_e} ( −f_e'(v) ∫_B u_e(v,y) dy + f_e(v) ∫_B u_e'(v,y) dy )
   = ε^{m/2} ∑_{v∈V} ∑_{e∈E_v} ( −f_e'(v) ∫_B u_e(v,y) dy + f(v) ∫_B u_e'(v,y) dy ),

where we used partial integration and the fact that L_B is a self-adjoint operator in L²(B) and L_B f_e = 0 (as f_e is independent of the second variable y) for the second equality, and a reordering argument in the third equality. Moreover, plugging v into s means evaluation at s = 0, resp. s = ℓ_e, if v corresponds to 0, resp. ℓ_e; for the longitudinal derivative, we use u_e'(v,y) = −u_e'(0,y), resp. u_e'(v,y) = u_e'(ℓ_e,y), if v corresponds to 0, resp. ℓ_e (derivative towards the vertex). We now use the fact that f ∈ dom L_0: First, note that ∑_{e∈E_v} f_e'(v) = 0, so that we can smuggle in a constant C_v u into the first summand, namely

  ∑_{e∈E_v} f_e'(v) ∫_B u_e(v,y) dy = ∑_{e∈E_v} f_e'(v) ( ∫_B u_e(v,y) dy − C_v u ).

We specify C_v u in a moment. For the second summand, we use the fact that f_e(v) = f(v) is independent of e ∈ E_v, and we have

  ∑_{e∈E_v} f(v) ∫_B u_e'(v,y) dy = −f(v) ∫_{∂X_v} ∂_n u_v dσ = −f(v) ∫_{X_v} Δu_v dx.

For the first equality, we used the fact that B at s = v corresponds to the subset ∂_e X_v of ∂X_v where the edge neighborhood is attached and that the normal derivative (pointing outwards) of u vanishes on ∂X_ε,v ∩ ∂X_ε due to the Neumann conditions. For the last equality, we used the Gauss–Green formula (write ∫_{X_v} Δu_v dx = ∫_{∂X_v} ∂_n u_v dσ). As u ∈ dom L_ε, we expect that the average ∫_B u_e(v,y) dy of u over the boundary component ∂_e X_v ≅ B is close to the average of u over X_v itself (recall that vol_m B = 1); hence, we set

  C_v u := (1/vol X_v) ∫_{X_v} u_v dx.

Defining G := ℓ²(V, deg), i.e., the space of families φ = (φ(v))_{v∈V} with the weighted norm given by ‖φ‖²_G := ∑_{v∈V} deg v |φ(v)|², where deg v denotes the degree of v (i.e., the number of elements in E_v), we have thus shown that

  ⟨D_ε g, w⟩_{H_ε} = ε^{m/2} ∑_{v∈V} ( −∑_{e∈E_v} f_e'(v) ( ∫_B u_e(v,y) dy − C_v u ) − f(v) ∫_{X_v} Δu_v dx ).   (7)

In operator notation, (7) says that D_ε factorizes through the space G into auxiliary operators: one pair maps f = R_0 g to its vertex data (the derivatives f_e'(v), resp. the values f(v)), and the other pair maps u = R_ε w to the boundary averages ∫_B u_e(v,·) dy − C_v u, resp. to the integrals ∫_{X_v} Δu_v dx. Let us now estimate the norms of these auxiliary operators; this also explains why we work with the weighted space ℓ²(V, deg):

Lemma 1. Assume that (2) holds; then the vertex data of f = R_0 g are bounded by the input, i.e.,

  ∑_{v∈V} ∑_{e∈E_v} |f_e'(v)|² ≤ C ‖g‖²_{H_0}  and  ∑_{v∈V} deg v |f(v)|² ≤ C ‖g‖²_{H_0}

with a constant C depending only on ℓ_0.

Proof. From (1) and (2), for each f_e, the fact that f(v) = f_e(v), and summing over v ∈ V, we conclude

  ∑_{v∈V} deg v |f(v)|² = ∑_{v∈V} ∑_{e∈E_v} |f_e(v)|² ≤ C_0 ∑_{v∈V} ∑_{e∈E_v} ‖f_e‖²_{H¹(I_e)} = 2C_0 ‖f‖²_{H¹(X_0)},

where f = R_0 g and C_0 := 2/min{1, ℓ_0} (each edge is counted once per endpoint). Now, the last sum equals

  2C_0 ( ‖f‖² + l_0(f) ) = 2C_0 ⟨(L_0 + 1)f, f⟩ = 2C_0 ⟨g, R_0 g⟩ ≤ 2C_0 ‖g‖²_{H_0};

hence, the second norm estimate holds. For the first one, we argue similarly, applying (1) to f_e' ∈ H¹(I_e); the resulting terms ‖f''‖ = ‖L_0 R_0 g‖ ≤ ‖g‖ and l_0(f) ≤ ‖g‖² are bounded by the spectral calculus, and the first norm estimate follows.
More importantly, we now show that the ε-dependent auxiliary operators actually have a norm converging to zero as ε → 0:

Lemma 2. Assume that (2) holds and that there are constants d_0 < ∞, c_0 < ∞, and λ_0 > 0 such that

  deg v ≤ d_0,  vol X_v ≤ c_0,  and  λ_2(X_v) ≥ λ_0   (8)

for all v ∈ V, where λ_2(X_v) is the second (first non-zero) Neumann eigenvalue of X_v; then the auxiliary operators associated with u = R_ε w in (7) have norm of order ε^{1/2}. (By some modifications in the decomposition (6) (namely, one uses edge neighborhoods X_ε,e ≅ (εa_e, ℓ_e − εa_e) × εB for some appropriate a_e > 0), one can avoid a direct upper bound d_0 on the vertex degrees, but then a_e has to be large if deg v is large; also, a high degree makes vol X_v larger in order to have enough space to attach all the edge neighborhoods; see also the discussion in ([2], Section 3.1).)

Proof. We need the following vector-valued version of (1):

  ‖u_e(v, ·)‖²_{L²(B)} ≤ 2 ( ‖u_v‖²_{L²(X_v)} + ‖∇u_v‖²_{L²(X_v)} )

(actually, we apply (1) to u(·, y) for each y ∈ B along a line of length one starting at y ∈ ∂_e X_v ≅ B, perpendicular to ∂_e X_v into X_v, and then integrate over y ∈ B). We then have

  | ∫_B u_e(v,y) dy − C_v u |² ≤ ‖u_e(v,·) − C_v u‖²_{L²(B)} ≤ 2 ( ‖u_v − C_v u‖²_{L²(X_v)} + ‖∇u_v‖²_{L²(X_v)} )

(recall that ∫_B dy = 1). Now, u_v − C_v u is the projection of u_v onto the subspace spanned by all eigenfunctions of the Neumann problem on X_v orthogonal to the constants; hence, we have

  ‖u_v − C_v u‖²_{L²(X_v)} ≤ (1/λ_2(X_v)) ‖∇u_v‖²_{L²(X_v)} ≤ (1/λ_0) ‖∇u_v‖²_{L²(X_v)}

by the variational characterization of eigenvalues. In particular, we have

  | ∫_B u_e(v,y) dy − C_v u |² ≤ 2 (1 + 1/λ_0) ‖∇u_v‖²_{L²(X_v)}.

Now, letting u = R_ε w, we have l_ε(u) ≤ l_ε(u) + ‖u‖²_{H_ε} = ⟨w, u⟩_{H_ε} ≤ ‖w‖²_{H_ε}, and the gradient terms on the vertex neighborhoods enter the quadratic form with the weight ε^{m−1}; summing over v ∈ V and e ∈ E_v and using deg v ≤ d_0, the first summand in (7), together with its prefactor ε^{m/2} and Lemma 1, is bounded by C ε^{m/2} ε^{−(m−1)/2} ‖g‖_{H_0} ‖w‖_{H_ε} = C ε^{1/2} ‖g‖_{H_0} ‖w‖_{H_ε}. Moreover, for the second norm estimate, we have −Δu_v = ε² (L_ε u)_v = ε² (w_v − u_v) on the vertex neighborhoods by the scaling behavior; hence,

  | ∫_{X_v} Δu_v dx |² ≤ ε⁴ vol X_v ‖w_v − u_v‖²_{L²(X_v)} ≤ 2 c_0 ε⁴ ( ‖w_v‖²_{L²(X_v)} + ‖u_v‖²_{L²(X_v)} ),

and, since the vertex contributions to the norm of H_ε carry the weight ε^{m+1}, the second summand in (7) is even of order ε^{3/2} ‖g‖_{H_0} ‖w‖_{H_ε}. From the calculation of D_ε in (7) and Lemmas 1 and 2, we conclude:

Theorem 1. Under the uniformity assumptions (2) and (8), the operator norm of

  D_ε = R_ε J_ε − J_ε R_0 : H_0 −→ H_ε

is of order ε^{1/2} as ε → 0. (Recall from the previous section that J_ε is an isometry; this holds without any assumption.)

Generalized Norm Resolvent Convergence
Let L_ε be a family of self-adjoint and non-negative operators (ε ≥ 0) acting in an ε-independent Hilbert space H. We say that L_ε converges to L_0 in the norm resolvent sense if

  ‖(L_ε + 1)^{−1} − (L_0 + 1)^{−1}‖ → 0 as ε → 0.

As a consequence, operator functions of L_ε also converge in norm; e.g., for the semigroups, we have

  ‖e^{−tL_ε} − e^{−tL_0}‖ → 0

for each t > 0. Moreover, the spectra converge uniformly on bounded intervals. In particular, if the L_ε all have a purely discrete spectrum, then λ_k(L_ε) → λ_k(L_0), where λ_k(·) denotes the kth eigenvalue, ordered increasingly and repeated with respect to multiplicity.
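The implication from resolvent to semigroup convergence can already be watched in finite dimensions, where no identification operators are needed. The toy example below is entirely ours: it perturbs a fixed non-negative symmetric matrix L_0 by εA and records the operator norms of the resolvent and semigroup differences as ε decreases.

```python
import numpy as np

rng = np.random.default_rng(1)
B = rng.standard_normal((6, 6))
L0 = B @ B.T                          # fixed non-negative self-adjoint operator
C = rng.standard_normal((6, 6))
A = C @ C.T                           # non-negative perturbation

def opnorm(M):
    return np.linalg.norm(M, 2)       # spectral (operator) norm

def resolvent(L):
    return np.linalg.inv(L + np.eye(6))

def semigroup(L, t):                  # e^{-tL} via the spectral theorem
    lam, V = np.linalg.eigh(L)
    return (V * np.exp(-t * lam)) @ V.T

t = 1.0
res_diffs, sg_diffs = [], []
for eps in (0.1, 0.01, 0.001):
    Leps = L0 + eps * A               # L_eps -> L0 in norm resolvent sense
    res_diffs.append(opnorm(resolvent(Leps) - resolvent(L0)))
    sg_diffs.append(opnorm(semigroup(Leps, t) - semigroup(L0, t)))
```

Both sequences of norms decay (roughly linearly in ε for this smooth perturbation), illustrating that semigroup convergence comes for free once the resolvents converge in norm.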
We now want to extend these results to operators acting in different Hilbert spaces.
Definition 1. For ε ≥ 0, let L_ε be a self-adjoint and non-negative operator acting in a Hilbert space H_ε. We say that L_ε converges to L_0 in the generalized norm resolvent sense if there is a family of bounded operators J_ε : H_0 −→ H_ε such that

  ‖R_ε J_ε − J_ε R_0‖ → 0,  J_ε* J_ε = id_{H_0},  ‖(id_{H_ε} − J_ε J_ε*) R_ε‖ → 0   (10)

as ε → 0, where R_ε := (L_ε + 1)^{−1} denotes the resolvent.
There are actually more general versions of generalized norm resolvent convergence; see, e.g., [2,3] or also [4] and the references therein.We can also specify the convergence speed as the maximum of the two norm estimates.
Moreover, almost all conclusions that hold for norm resolvent convergence remain true here, e.g., the convergence of eigenvalues or of the spectrum. Moreover, if L_ε converges to L_0 in the generalized norm resolvent sense with convergence speed δ_ε → 0, then the corresponding semigroups converge, i.e., we have, e.g.,

  ‖e^{−tL_ε} J_ε − J_ε e^{−tL_0}‖ ≤ C_t δ_ε.

One can even control the dependency on t (C_t = O(1/t) as t → 0); see ([4], Ex. 1.10 (ii)) for details.
As an application, we show that the corresponding solutions of the heat equations converge: denote by u_t, resp. f_t, the solutions of

  ∂_t u_t = −L_ε u_t and ∂_t f_t = −L_0 f_t

with initial data u_0 ∈ H_ε and f_0 = J_ε* u_0 at t = 0; then we have

  ‖u_t − J_ε f_t‖_{H_ε} ≤ C_t δ_ε ‖u_0‖_{H_ε},   (11)

i.e., the approximate solution J_ε f_t converges to the proper solution u_t of the more complicated problem on H_ε uniformly with respect to the initial data u_0.
We have already shown the first norm convergence and the equality in (10) in the previous section (cf. Theorem 1); but we even have:

Theorem 2. Under the uniformity assumptions (2) and (8), the Neumann Laplacians L_ε on the graph-like space X_ε converge to the Kirchhoff Laplacian L_0 on the underlying metric graph X_0 in the generalized norm resolvent sense.
Proof. It remains to show the last limit in (10). We have

  ‖(id − J_ε J_ε*) u‖²_{H_ε} = ∑_{v∈V} ε^{m+1} ‖u_v‖²_{L²(X_v)} + ∑_{e∈E} ε^m ∫_{I_e} ∫_B | u_e(s,y) − ∫_B u_e(s,y') dy' |² dy ds.

The integrand in the second sum can be estimated by

  ∫_B | u_e(s,y) − ∫_B u_e(s,y') dy' |² dy ≤ (1/λ_2(B)) ‖∇_B u_e(s,·)‖²_{L²(B)},

using again the variational characterization of eigenvalues (λ_2(B) being the second Neumann eigenvalue of B). In particular, the second sum can be estimated by (ε²/λ_2(B)) l_ε(u). The first sum is also small, as functions with bounded energy do not concentrate on the vertex neighborhoods X_ε,v. The arguments to show this (actually, ∑_{v∈V} ε^{m+1} ‖u_v‖²_{L²(X_v)} ≤ O(ε) l_ε(u)) are very similar to the ones used in the proof of Lemma 2. For u = R_ε w, we have l_ε(u) ≤ ‖w‖²_{H_ε}, and the last limit in (10) follows with speed O(ε^{1/2}). Details can be found, e.g., in ([3], Section 6.3).
Note that, once the generalized norm resolvent convergence is proven with an error term of order ε^{1/2}, we can approximately solve the heat equation on X_ε as in (11). On a metric graph, one might even find explicit formulas for the solutions f_t of the heat equation, at least for simple metric graphs; hence, one automatically obtains approximate solutions of the corresponding heat equation on the more complicated space X_ε.
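How explicit the limit problem is can be seen numerically; the sketch below is our own (a finite-element discretization of a three-star with unit edges, lumped mass). It computes e^{−tL_0}u_0 by the spectral theorem; the Kirchhoff conditions are of Neumann type, so the total heat is conserved and u_t tends to the constant equilibrium.

```python
import numpy as np

num_edges, n = 3, 60
h = 1.0 / n
N = 1 + num_edges * n                 # shared center node + n nodes per edge
A = np.zeros((N, N))                  # stiffness matrix
M = np.zeros(N)                       # lumped mass (diagonal)

def couple(a, b):                     # one interval of length h between a, b
    A[a, a] += 1 / h; A[b, b] += 1 / h
    A[a, b] -= 1 / h; A[b, a] -= 1 / h
    M[a] += h / 2;    M[b] += h / 2

for e in range(num_edges):
    first = 1 + e * n
    couple(0, first)
    for i in range(n - 1):
        couple(first + i, first + i + 1)

d = 1.0 / np.sqrt(M)
lam, V = np.linalg.eigh(d[:, None] * A * d[None, :])

def heat(u0, t):
    """e^{-t L_0} u0 on the star graph via the spectral theorem."""
    v0 = np.sqrt(M) * u0
    vt = (V * np.exp(-t * lam)) @ (V.T @ v0)
    return vt / np.sqrt(M)

u0 = np.zeros(N)
u0[1:1 + n] = 1.0                     # heat initially on the first edge only
mass0 = np.sum(M * u0)                # discrete total heat
u1 = heat(u0, 0.5)
mass1 = np.sum(M * u1)
equilibrium = mass0 / M.sum()         # constant long-time limit
```

Conservation of ∑ M u_t follows since the constant function lies in the kernel of the stiffness matrix, mirroring the Kirchhoff flux condition on the metric graph.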
Let us now come back to the original thick graph given by X̆_ε, where the edge neighborhoods have slightly shorter edge lengths.
We say that two operators L_ε and L̆_ε are asymptotically close in the generalized norm resolvent sense if (10) holds with R_0 replaced by R̆_ε := (L̆_ε + 1)^{−1} (and with identification operators between H_ε and H̆_ε). We have the following result (for the proof, see, e.g., ([3], Prop. 4.2.5)):

Lemma 3. If L_ε converges to L_0 and if L_ε and L̆_ε are asymptotically close, both in the generalized norm resolvent sense, then L̆_ε converges to L_0 in the generalized norm resolvent sense.

Now, in our concrete example with the slightly shortened edges, we have (for a proof, see ([3], Prop. 5.3.7)):

Lemma 4. Assume that L_ε and L̆_ε are given as in Section 3; then L_ε and L̆_ε are asymptotically close in the generalized norm resolvent sense.
We then immediately conclude from Theorem 2:

Corollary 1. Under the uniformity assumptions (2) and (8), the Neumann Laplacians L̆_ε on the ε/ω_m-neighborhood X̆_ε of an embedded metric graph X_0 ⊂ R^{m+1} converge to the Kirchhoff Laplacian on X_0 in the generalized norm resolvent sense.

Outlook
The author is currently working on extending this result to some mildly non-linear equations, with Claudio Cacciapuoti and with Michael Hinz and Jan Simmer, in two different settings. Probably the first systematic treatment of (non-linear) partial differential operators on thin domains was given in the nice overview of Geneviève Raugel [8], combining some abstract results with concrete examples; but to the best of our knowledge, no thick graph domain and its limit were considered there explicitly. For Neumann Laplacians on thick graphs, there were actually results about the convergence of certain non-linear problems in [9,10], but Kosugi's papers did not contain an abstract approach using identification operators as we do.
At the conference, Jean-Guy Caputo also presented results on non-linear waves in networks and thick graphs, justifying at least numerically the Kirchhoff vertex conditions; see [11,12]. There is another interesting application of the concept of generalized norm resolvent convergence: Berkolaiko et al. [13] studied the behavior of Laplacians on metric graphs if some edge lengths shrink to zero. A similar result (a compact part of the metric graph shrinks to a point), using different methods, was presented by Cacciapuoti [14] at the conference. A general convergence scheme also for some mildly non-linear equations would allow extending their analysis to non-linear problems.
We have the following type of equations in mind. Let

  ∂_t u_t = −L_ε u_t − F_ε(u_t)

for ε > 0 and

  ∂_t f_t = −L_0 f_t − F_0(f_t).
As the non-linearity, we think of F_ε(ψ) = α_ε |ψ|^{2µ} ψ for some µ > 0 and α_ε > 0. For the solution, we make the ansatz

  u_t = e^{−tL_ε} u_0 − ∫_0^t e^{−(t−s)L_ε} F_ε(u_s) ds,

and similarly for f_t. The non-linearity and the identification operators have to fulfil some compatibility conditions; namely, F_ε ∘ J_ε − J_ε ∘ F_0 has to be small in some sense. One might use an iteration procedure in order to obtain a sequence of functions converging to the solution. If F_ε(ψ) = α_ε |ψ|^{2µ} ψ in our example of thick graphs converging to metric graphs, then we must have α_ε = ε^{mµ} α_0.
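The iteration procedure can be sketched numerically on the simplest metric graph, a single edge with two degree-one (Neumann) vertices. Everything below is our own toy discretization: finite differences for the Laplacian, the spectral theorem for the semigroup, and the trapezoidal rule for the Duhamel integral, with the minus sign in front of the integral matching the ansatz above. Successive Picard iterates contract for small final time T.

```python
import numpy as np

n, T, steps, K = 60, 0.1, 40, 8      # space grid, final time, time steps, iterations
alpha, mu = 1.0, 1.0
h, dt = 1.0 / n, T / steps
x = np.linspace(0.0, 1.0, n + 1)

# symmetric Neumann finite-difference Laplacian (non-negative)
L = (np.diag(np.full(n + 1, 2.0))
     - np.diag(np.ones(n), 1) - np.diag(np.ones(n), -1))
L[0, 0] = L[n, n] = 1.0
L /= h * h

lam, V = np.linalg.eigh(L)
E = [(V * np.exp(-j * dt * lam)) @ V.T for j in range(steps + 1)]  # e^{-j dt L}

F = lambda u: alpha * np.abs(u) ** (2 * mu) * u
u0 = np.cos(np.pi * x)

free = np.array([E[j] @ u0 for j in range(steps + 1)])   # e^{-tL} u0
u = free.copy()
diffs = []                            # sup-norm distance of successive iterates
for _ in range(K):
    new = free.copy()
    for j in range(1, steps + 1):     # trapezoidal rule for the Duhamel integral
        w = np.full(j + 1, dt)
        w[0] = w[-1] = dt / 2
        integral = sum(w[i] * (E[j - i] @ F(u[i])) for i in range(j + 1))
        new[j] -= integral
    diffs.append(np.max(np.abs(new - u)))
    u = new
```

The list `diffs` decays roughly geometrically, as expected from the contraction property of the Duhamel map on a short time interval.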
If one wants to consider the non-linear Schrödinger equation i∂_t u_t = L_ε u_t + F_ε(u_t), one faces the additional problem that the (generalized) norm resolvent convergence does not imply norm convergence of the unitary group e^{itL_ε} for general initial data u_0; if one restricts u_0 to the range of the spectral projection 1_{[0,λ_0]}(L_ε) for some λ_0 > 0, then there are still some operator norm estimates; see ([3], Thm. 4.2.16) for details. Nevertheless, one also has to make sure that F_ε(u_0) still remains in the range of 1_{[0,λ_0]}(L_ε), which is probably too restrictive.