Article

Modified Asymptotic Solutions and Application to Asymptotic Expansions of Indicator Functions in Mixed-Type Media

by
Mishio Kawashita
1,* and
Wakako Kawashita
2,*
1
Mathematics Program, Graduate School of Advanced Science and Engineering, Hiroshima University, Higashihiroshima 739-8526, Japan
2
Electrical, Systems, and Control Engineering Program, Graduate School of Advanced Science and Engineering, Hiroshima University, Higashihiroshima 739-8527, Japan
*
Authors to whom correspondence should be addressed.
Mathematics 2026, 14(7), 1210; https://doi.org/10.3390/math14071210
Submission received: 11 March 2026 / Revised: 31 March 2026 / Accepted: 31 March 2026 / Published: 3 April 2026
(This article belongs to the Section C: Mathematical Analysis)

Abstract

Asymptotic solutions that can describe the incidence and reflection of waves have been used in various situations. They can also be applied to inverse problems and provide useful information in situations where a precise evaluation is required. However, the construction of standard asymptotic solutions requires higher regularity with respect to the boundaries of the observation target. This article proposes a “modified asymptotic solution” to overcome this weakness. To demonstrate its usefulness, it is applied to the analysis of the indicator function in the enclosure method for the inverse problem of the wave equation in a mixed-type medium.

1. Introduction

In many cases, it is difficult to obtain a concrete expression for the solution of a partial differential equation. Therefore, it is common to construct an approximate solution that approximates the true solution and investigate its properties. One method for constructing approximate solutions is to utilize asymptotic solutions, which appear in geometric optics. This is a classical method; however, it is effective in situations that require detailed analysis because it provides a concrete representation. In fact, asymptotic solutions have been used in a variety of situations, not only in numerical methods (e.g., [1]) but also to clarify the theoretical background of differential equations. For example, in [2,3], Ikawa successfully investigated the local energy decay estimate of the solution in the exterior of two convex obstacles and the distribution of poles of the scattering matrix by representing waves reflected repeatedly between two obstacles using asymptotic solutions. More detailed results on the distribution of the poles have been obtained by Gérard [4], but even there, something equivalent to an asymptotic solution is used.
In inverse problems as well, it is expected that detailed analysis will become possible through the introduction of asymptotic solutions. For example, for wave equations there is the problem of estimating internal cavities by measuring the reflected waves caused by incident waves, which is classified as an active sonar-type problem. Typical examples include marine exploration and breast cancer screening. As a formulation of the active sonar-type problem, a method called the "enclosure method" is effective, and has been actively studied, mainly by Ikehata (cf. [5,6,7] and [8] for a survey). It is important to investigate whether the asymptotic solution method can be applied to the enclosure method; Ref. [9] showed that the asymptotic solution method is also effective for the enclosure method for the wave equation.
In [9], the authors constructed an approximation for reflected waves using asymptotic solutions to the boundary value problem of the modified Helmholtz equation $(\gamma_0\Delta-\tau^2)w=0$. The asymptotic solution here is an approximate solution that satisfies the following equation in the neighborhood of the boundary where the reflection is considered:
$$ (\gamma_0\Delta-\tau^2)\,e^{-\tau\phi(x)}\sum_{j=0}^{N}(-\tau)^{-j}b_j(x)=O(\tau^{-N})\qquad(\tau\gg1). \qquad (1) $$
As in the case of the Helmholtz equation (i.e., where $\tau=-ik$, $k\gg1$), the eikonal (phase function) $\phi(x)$ satisfies the eikonal equation, and the first amplitude function $b_0(x)$ is constructed such that it satisfies the transport equation
$$ \big(2(\gamma_0\nabla_x\phi)\cdot\nabla_x+\gamma_0\Delta\phi\big)\,b_0(x)=0. \qquad (2) $$
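For the reader's convenience, the way these two equations arise can be seen by applying the modified Helmholtz operator to the leading term of the ansatz and collecting powers of $\tau$ (a standard computation; the display below is our sketch, not quoted from [9]):

```latex
% Applying (\gamma_0\Delta-\tau^2) to e^{-\tau\phi(x)}b_0(x) and ordering by powers of \tau:
\begin{aligned}
(\gamma_0\Delta-\tau^{2})\,e^{-\tau\phi}b_0
  ={}& e^{-\tau\phi}\Big[\tau^{2}\big(\gamma_0|\nabla_x\phi|^{2}-1\big)b_0
      -\tau\big(2\gamma_0\nabla_x\phi\cdot\nabla_x b_0+\gamma_0(\Delta\phi)\,b_0\big)
      +\gamma_0\Delta b_0\Big].
\end{aligned}
```

Requiring the $\tau^2$ term to vanish gives the eikonal equation $\gamma_0|\nabla_x\phi|^2=1$, requiring the $\tau^1$ term to vanish gives the transport equation, and the remaining term $\gamma_0\Delta b_0$ produces the $O(\tau^{-N})$ error in (1) (with $N=0$).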
The problem with asymptotic solutions is that they require high regularity assumptions. Even if $N=0$, $\phi$ must be of class $C^2$. Considering that (1) must be satisfied, $b_0$ must also be of class $C^2$. As explained in Section 2, this would require the boundary to be at least of class $C^4$ (for details, see [9]). Given its application in inverse problems, this assumption is not favorable.
In this paper, we propose a method to overcome this weakness of regularity by introducing a "modified asymptotic solution", which is an improved version of the asymptotic solution. When constructing a standard asymptotic solution, if the regularity of the cavity boundary is reduced, the amplitude function loses the regularity required for the solution approximation. Therefore, in our modified asymptotic solution, we apply a mollifier only to the amplitude function that lacks regularity. On the other hand, the eikonal $\phi(x)$ is not modified and retains its original properties. This is a crucial point for the application to the inverse problem we examine, because in the enclosure method, extracting the "shortest length" from the phase function is an essential and delicate task. For more information, see also the last paragraph of Section 2.1.
The use of a mollifier is a basic method to improve situations with low regularity. Salo [10] gave an example of using a mollifier in inverse problems in a way that differs from ours. He constructed complex geometrical optics solutions for magnetic Schrödinger operators with low-regularity magnetic potentials, using a mollifier to smooth the magnetic potential. In contrast, we construct a modified asymptotic solution by applying a mollifier to the asymptotic solution itself. Therefore, our way of using a mollifier is essentially different from that in [10], and is appropriate for the purpose of our study.
The following three sections (Section 2, Section 3 and Section 4) describe the introduction of the modified asymptotic solution and its estimation that is required to accompany it. As an application of the modified asymptotic solution, we consider the inverse problems in “mixed-type media” in Section 5. Therefore, it is necessary to construct a modified asymptotic solution for the modified Helmholtz equation under the appropriate settings. The settings are explained in Section 2. Next, we confirm that the modified asymptotic solution captures the main part of the reflected wave. This means estimating the error terms between the true solution and the modified asymptotic solution. In preparation, in Section 3, we show some estimates of the operators including the mollifier. In Section 4, we estimate the remainder terms of the approximate solution, using the results presented in Section 3.
In Section 5, an example of the use of the modified asymptotic solution is described. We are conducting research on inverse problems for mixed-type media containing multiple types of heterogeneous parts, and we show that modified asymptotic solutions provide good results. In [9], the use of asymptotic solutions made it possible to clarify the relationship between the type and curvature of the cavity located at the shortest distance from the observation site and the representation of the leading terms of the indicator function. It is also expected that asymptotic solutions will provide a method for treating these problems in a unified manner. Furthermore, by introducing modified asymptotic solutions, we can overcome the weaknesses of standard asymptotic solutions and proceed with our analysis under appropriate regularity assumptions. In fact, as discussed in Section 5 below, by using the modified asymptotic solution, the regularity assumption for the cavity boundary required in [9] can be relaxed to the more appropriate class $C^{2,\theta}$. Problems involving mathematical reconstruction in inverse problems can generally be classified into two broad categories: problems that aim to detect characteristic features such as corners on the boundary, and problems that assume smoothness of the boundary. Our results can be regarded as attaining an optimal smoothness assumption in the latter formulation. Thus, the modified asymptotic solution improves the constraints regarding regularity and broadens the scope of applications for asymptotic solutions.

2. Approximate Solutions for Modified Helmholtz Equations via Modified Asymptotic Solutions

2.1. Problem Setting and Standard Asymptotic Solutions

Let $\Omega$ be the exterior domain in $\mathbb{R}^3$ of the closure of a bounded open set $D\subset\mathbb{R}^3$. Assume that $D$ has $C^2$ boundary $\partial D$, and that $D$ consists of $D_n$ and $D_d\subset\mathbb{R}^3$, which are bounded open sets satisfying $\overline{D_n}\cap\overline{D_d}=\emptyset$. Here, each $D_\alpha$ ($\alpha\in\{n,d\}$) is not necessarily connected and may consist of multiple parts. Take a bounded open set $B$ satisfying $\overline{B}\cap\overline{D}=\emptyset$. For any $f\in L^2(\mathbb{R}^3)$ with $\operatorname{supp}f\subset\overline{B}$, we consider the following boundary value problem for the modified Helmholtz equation with a large parameter $\tau\ge1$:
$$ (\gamma_0\Delta-\tau^2)w(x;\tau)+f(x)=0\ \text{in}\ \Omega,\qquad B_\tau^n w(x;\tau)=0\ \text{on}\ \partial D_n,\qquad w(x;\tau)=0\ \text{on}\ \partial D_d. \qquad (3) $$
In (3), $\gamma_0>0$ is a positive constant, $B_\tau^n w=\gamma_0\partial_{\nu_x}w-\lambda(x;\tau)w$, $\lambda(x;\tau)=\lambda_1(x)\tau+\lambda_0(x)$ and $\partial_{\nu_x}=\sum_{k=1}^3(\nu_x)_k\partial_{x_k}$, where $\nu_x=((\nu_x)_1,(\nu_x)_2,(\nu_x)_3)^t$ is the unit outer normal of $\partial D$ from the $D$-side and $\partial_{x_k}=\partial/\partial x_k$. We denote by $H^1_{0,\partial D_d}(\Omega)$ the Sobolev space consisting of all functions $\varphi\in H^1(\Omega)$ with $\varphi=0$ on $\partial D_d$ in the trace sense. Typical elliptic theory implies that the weak solution $w\in H^1_{0,\partial D_d}(\Omega)$ of (3) has the conormal derivative $\partial_{\nu_x}w\in H^{-1/2}(\partial\Omega)$ in the sense of the dual form (see, e.g., Lemma 4.3 and Theorem 4.4 in [11]).
In Section 2, Section 3 and Section 4, we construct an approximate solution of w ( x ; τ ) in (3) in the form of
$$ w(x;\tau)=v(x;\tau)|_{\Omega}+\tilde w(x;\tau), $$
where v ( x ; τ ) | Ω is the incident wave and w ˜ ( x ; τ ) denotes the reflected waves. The incident wave is produced by f, and is represented using the solution of
$$ (\gamma_0\Delta-\tau^2)v(x;\tau)+f(x)=0\quad\text{in}\ \mathbb{R}^3. \qquad (4) $$
Because for any $f\in L^2(\mathbb{R}^3)$ the solution $v(x;\tau)$ of (4) belongs to $H^2(\mathbb{R}^3)$ and has the kernel representation
$$ v(x;\tau)=\int_{\Omega}\Phi_\tau(x,y)f(y)\,dy\quad\text{with}\quad \Phi_\tau(x,y)=\frac{1}{4\pi\gamma_0}\,\frac{e^{-\tau|x-y|/\sqrt{\gamma_0}}}{|x-y|}, $$
we construct the reflected wave $\tilde w(x;\tau)$ of the form
$$ \tilde w(x;\tau)=\int_{\Omega}\tilde\Psi_\tau(x,y)f(y)\,dy. $$
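As a quick, purely numerical sanity check (not part of the paper's argument), one can verify by finite differences that the kernel $\Phi_\tau$ annihilates the modified Helmholtz operator away from the source point; the values of $\gamma_0$, $\tau$ and the test point below are our own illustrative choices:

```python
import numpy as np

# Finite-difference check that Phi_tau(x, y) = exp(-tau|x-y|/sqrt(gamma0)) / (4 pi gamma0 |x-y|)
# satisfies (gamma0 * Laplacian - tau^2) Phi = 0 for x != y.
# gamma0, tau and the evaluation point are arbitrary illustrative choices.
gamma0, tau = 2.0, 5.0
y = np.zeros(3)

def Phi(x):
    r = np.linalg.norm(x - y)
    return np.exp(-tau * r / np.sqrt(gamma0)) / (4.0 * np.pi * gamma0 * r)

def residual(x, h=1e-3):
    """(gamma0 * Delta - tau^2) Phi at x, with the Laplacian by central differences."""
    lap = 0.0
    for k in range(3):
        e = np.zeros(3)
        e[k] = h
        lap += (Phi(x + e) - 2.0 * Phi(x) + Phi(x - e)) / h**2
    return gamma0 * lap - tau**2 * Phi(x)

x0 = np.array([0.5, 0.2, 0.1])
rel = abs(residual(x0)) / (tau**2 * Phi(x0))   # relative size of the residual
```

The residual is of the order of the discretization error only, i.e., many orders of magnitude smaller than $\tau^2\Phi_\tau$ itself.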
Consequently, the task is reduced to constructing an approximate solution to the subsequent stationary problem
$$ (\gamma_0\Delta-\tau^2)\tilde\Psi_\tau(x,y)=0\ \text{in}\ \Omega,\qquad B_\tau^n\tilde\Psi_\tau(x,y)=-B_\tau^n\Phi_\tau(x,y)\ \text{on}\ \partial D_n,\qquad \tilde\Psi_\tau(x,y)=-\Phi_\tau(x,y)\ \text{on}\ \partial D_d. \qquad (5) $$
One traditional approach to construct approximate solutions for differential equations with a large parameter is to use asymptotic solutions of the form
$$ \Psi_{\tau,N}(x,y)=e^{-\tau\phi(x,y)}\sum_{j=0}^{N}(-\tau)^{-j}b_j(x,y), \qquad (6) $$
which we call the (standard) asymptotic solution of $N$-th order (see, e.g., [9,12]). As the integral kernel $\Phi_\tau(x,y)$ for the incident wave $v(x;\tau)$ has the exponential decay term $e^{-\tau|x-y|/\sqrt{\gamma_0}}$ as its leading part, we set the reflected wave to have a similar term $e^{-\tau\phi(x,y)}$ in its main part as in (6).
Let $l_0=\operatorname{dist}(B,D)$ be the "shortest distance", which is the minimum distance $|x-y|$ between $x\in\overline{D}$ and $y\in\overline{B}$. We set $E_0=\{(x_0,y_0)\in\overline{D}\times\overline{B} : |x_0-y_0|=l_0\}$ and $\Gamma=\{x_0\in\overline{D} : |x_0-y_0|=l_0\ \text{for some}\ y_0\in\overline{B}\}$. Due to the law of reflection on the boundary, which is given by the boundary conditions described in (5), $\phi(x,y)$ must satisfy $\phi(x,y)=|x-y|/\sqrt{\gamma_0}$ ($x\in\partial D$, $y\in\overline{B}$). Hence, we can intuitively expect that the reflected wave $\tilde\Psi_\tau(x,y)$ satisfies
$$ |\tilde\Psi_\tau(x,y)|\le C\,e^{-\tau l_0/\sqrt{\gamma_0}}\qquad(x\in\partial D,\ y\in B). \qquad (7) $$
This estimate suggests that it suffices to construct the asymptotic solution $\Psi_{\tau,N}(x,y)$ only near $\Gamma\subset\partial D$ for any $y\in B$. For convenience, we divide $E_0$ and $\Gamma$ as $E_0=\bigcup_{\alpha\in\{n,d\}}E_0^\alpha$ and $\Gamma=\bigcup_{\alpha\in\{n,d\}}\Gamma^\alpha$, where $E_0^\alpha=\{(x_0,y_0)\in E_0 : x_0\in\partial D_\alpha\}$ and $\Gamma^\alpha=\{x_0\in\partial D_\alpha : (x_0,y_0)\in E_0^\alpha\ \text{for some}\ y_0\in B\}$ ($\alpha\in\{n,d\}$).
We denote by $B_r(x)$ the open ball centered at $x$ with radius $r$, and set $L_0(x,y)=|x-y|$. In this article, we consider the case in which $\Gamma$ is composed of isolated points. The non-degenerate condition described below is one of the sufficient conditions for this case.
Definition 1 
(Non-degenerate condition). We say that $B$ and $D$ satisfy the non-degenerate condition for $L_0(x,y)$ if every $(x_0,y_0)\in E_0$ is a non-degenerate critical point of $L_0(x,y)$; that is, there exist constants $c_0>0$ and $\delta>0$ satisfying $L_0(x,y)\ge l_0+c_0(|x-x_0|^2+|y-y_0|^2)$ for $(x,y)\in(\partial D\cap B_\delta(x_0))\times(\partial B\cap B_\delta(y_0))$.
Under the non-degenerate condition, $E_0^\alpha$ consists of only a finite number ($=M_\alpha$) of isolated points, so we assume that it can be expressed as $E_0^\alpha=\{(x_j^\alpha,y_j^\alpha) : j=1,2,\dots,M_\alpha\}$ ($\alpha\in\{n,d\}$, if $E_0^\alpha\neq\emptyset$). The non-degenerate condition is also used to obtain the remainder estimates between the true solution $w(x;\tau)$ and an approximate solution, which is discussed in Section 4. Because we only construct the asymptotic solution (6) near the points $x_j^\alpha$, we take their neighborhoods to fix the regions of the asymptotic solutions.
For each $\alpha\in\{n,d\}$, we set $U_k^\alpha=\bigcup_{j=1}^{M_\alpha}B_{kr_0}(x_j^\alpha)$ ($k=1,\dots,4$) and $U_k=\bigcup_{\alpha\in\{n,d\}}U_k^\alpha$ ($k=1,\dots,4$). Note that $\overline{U_k^\alpha}\subset U_{k+1}^\alpha$ ($k=1,\dots,3$) and
$$ |x-y|\ge l_0+c_0\qquad(x\in\overline{D}\setminus U_1,\ y\in\overline{B}) $$
for some constant $c_0>0$ because $E_0\subset U_1\times\overline{B}$. We also choose $r_0>0$ to be small enough so that $U_5^\alpha\cap U_5^{\alpha'}=\emptyset$ if $\alpha\neq\alpha'$. In what follows, if necessary, we can make this $r_0>0$ smaller because $\Gamma$ is a finite set. For another condition on $r_0>0$, see (24) below.
If we construct the asymptotic solution $\Psi_{\tau,N}(x,y)$ of (6) in $x\in U_4\cap\overline{\Omega}$, $y\in\overline{B}$, we should determine the eikonal (or phase function) $\phi(x,y)$ and the amplitude functions $b_j(x,y)$ ($j=-1,0,\dots,N$, with $b_{-1}=0$). Here, we summarize the notation for constructing standard asymptotic solutions.
We set $B_0^n w=\gamma_0\partial_{\nu_x}w-\lambda_0(x)w$,
$$ a_0=-\frac{1}{4\pi|x-y|}\left(\frac{\nu_x\cdot(x-y)}{\sqrt{\gamma_0}\,|x-y|}+\frac{\lambda_1(x)}{\gamma_0}\right),\qquad a_1=\frac{1}{4\pi|x-y|}\left(\frac{\nu_x\cdot(x-y)}{|x-y|^2}+\frac{\lambda_0(x)}{\gamma_0}\right) $$
and $\tilde a_0(x,y)=\dfrac{1}{4\pi\gamma_0|x-y|}$. Once we obtain the eikonal $\phi(x,y)$, we define the transport operator $T_\phi$ by
$$ T_\phi=2(\gamma_0\nabla_x\phi)\cdot\nabla_x+\gamma_0\Delta\phi. $$
From (4.4) and (4.5) of [9], we can find that the eikonal ϕ and amplitude functions b j are governed by the following equations:
$$ \gamma_0|\nabla_x\phi|^2=1\ \text{in}\ U_4,\qquad \phi(x,y)=|x-y|/\sqrt{\gamma_0}\ \text{on}\ \partial D\cap U_4,\qquad \partial_{\nu_x}\phi(x,y)>0\ \text{on}\ \partial D\cap U_4, \qquad (9) $$
$$ \begin{array}{l} T_\phi b_j(x,y)+\gamma_0\Delta b_{j-1}(x,y)=0\ \text{in}\ U_4,\\[2pt] \big(\gamma_0\partial_{\nu_x}\phi+\lambda_1(x)\big)b_j(x,y)+B_0^n b_{j-1}(x,y)=a_j(x,y)\ \text{on}\ \partial D_n\cap U_4,\\[2pt] b_j(x,y)=-\delta_{0,j}\,\tilde a_0(x,y)\ \text{on}\ \partial D_d\cap U_4, \end{array} \qquad (10) $$
where $\delta_{i,j}$ is Kronecker's delta. We call the equations for $\phi$ and $b_j$ in (9) and (10) the eikonal equation and the transport equations, respectively. Then, as per (4.8) in [9], we obtain the following equations for $\Psi_{\tau,N}(x,y)$:
$$ \begin{array}{l} (\gamma_0\Delta-\tau^2)\Psi_{\tau,N}(x,y)=(-\tau)^{-N}e^{-\tau\phi(x,y)}\gamma_0\Delta b_N(x,y)\ \text{in}\ \Omega\cap U_4,\\[2pt] B_\tau^n\Psi_{\tau,N}(x,y)=-B_\tau^n\Phi_\tau(x,y)+(-\tau)^{-N}\big(-\delta_{0,N}a_1(x,y)+B_0^n b_N(x,y)\big)e^{-\tau\phi(x,y)}\ \text{on}\ \partial D_n\cap U_4,\\[2pt] \Psi_{\tau,N}(x,y)=-\Phi_\tau(x,y)\ \text{on}\ \partial D_d\cap U_4, \end{array} \qquad (11) $$
which approximates the solution $\tilde\Psi_\tau(x,y)$ of (5), and we can construct the main part of the solution $w$ of (3) using $\Psi_{\tau,N}(x,y)$ and $\Phi_\tau(x,y)$. Therefore, we successfully extracted important information in the inverse problem in [9]. However, an assumption of regularity is necessary for the construction of the asymptotic solution $\Psi_{\tau,N}(x,y)$. To justify Equation (11) for $\Psi_{\tau,N}(x,y)$, we need to have $b_N\in C^2(\Omega\cap U_4)$ at least. For simplicity, we consider the case $N=0$. From the transport equation $T_\phi b_0=0$, usually we need $\Delta\phi\in C^2(\Omega\cap U_4)$, which means that we should obtain the solution of (9) with $\phi\in C^4(\Omega\cap U_4)$. Thus, for the regularity of $\partial D$, we should assume that $\partial D$ is of class $C^4$. Similarly, if we want to obtain $\Psi_{\tau,N}$, $\partial D$ should be of class $C^{2N+4}$ (for details, cf. Section 4 in [9]).
The issue here is how much regularity of $\partial D$, $\lambda_1$ and $\lambda_0$ is required when constructing the asymptotic solution $\Psi_{\tau,N}(x,y)$. To fix the terminology, in this article, we use the following:
Definition 2. 
For $m\in\mathbb{N}$, $0\le\theta\le1$, we say that $(D,\lambda)$ is of $(m,\theta)$-class if $\partial D$ is of class $C^{2m+4,\theta}$, $\lambda_1\in C^{2m+2,\theta}(\partial D_n)$ and $\lambda_0\in C^{2m,\theta}(\partial D_n)$; for $m\in\{-1,0\}$, we say that $(D,\lambda)$ is of $(m,\theta)$-class if $\partial D$ is of class $C^{2m+4,\theta}$, $\lambda_1\in C^{2m+2,\theta}(\partial D_n)$ and $\lambda_0\in L^\infty(\partial D_n)$.
Note that $(D,\lambda)$ is of $(m,0)$-class if and only if $(D,\lambda)$ is of $m$-class as in Definition 4.1 of [9]. Thus, the regularity considered in Definition 2 is a finer setting than that of Definition 4.1 in [9].
To derive the asymptotic solution $\Psi_{\tau,N}(x,y)$ as in [9], it is necessary that $(D,\lambda)$ belongs to the $(N,0)$-class. We call this standard asymptotic solution $\Psi_{\tau,N}(x,y)$ the asymptotic solution of $N$-th order. As noted above, even if we take $N=0$, the boundary $\partial D$ must be of class $C^4$. In contrast, to obtain the eikonal $\phi$ using the usual approach of solving first-order partial differential equations (cf. [12,13]), we consume only $C^2$ regularity of the boundary $\partial D$. Furthermore, we can also obtain the amplitude function $b_0$ satisfying the transport equation. Hence, if we can use $\phi$ and $b_0$ to approximate the reflected wave $\tilde\Psi_\tau(x,y)$ by another method, we have a chance to reduce the differentiability of the boundary required above. This reduction problem is important because it is closely connected with providing prior information for some inverse problems. Therefore, it is necessary to improve the approximation method. In the following section, we introduce "modified asymptotic solutions" constructed by modifying the highest-order term $b_N$ using a mollifier.
In our modified asymptotic solution, we have not modified the eikonal itself. In this problem, the eikonal represents the shortest time (also known as the optical distance) for a wave departing from point y to reach x. Modifying the eikonal alters this distance, which leads to a distortion of the geometric structure inherent to this problem. Furthermore, a change in the optical distance is expected to alter the rate of exponential decay of the reflected wave described in (7). Consequently, it is unclear whether the leading part can be correctly extracted. This is the reason why we use the original eikonal in the modified asymptotic solution proposed here.

2.2. Modified Asymptotic Solutions

Take $\rho\in C_0^\infty(\mathbb{R}^3)$ with $\rho\ge0$, $\operatorname{supp}\rho\subset\{x\in\mathbb{R}^3 : |x|<1\}$ and $\int_{\mathbb{R}^3}\rho(x)\,dx=1$, and put $\rho_\varepsilon(x)=\varepsilon^{-3}\rho(x/\varepsilon)$. Then, we define the mollifier of $b\in B^0(U_3\times\overline{B})$ as
$$ (\rho_\varepsilon*b(\cdot,y))(x)=\int_{\mathbb{R}^3}\rho_\varepsilon(x-z)b(z,y)\,dz\qquad(0<\varepsilon<r_0,\ x\in U_2,\ y\in\overline{B}). $$
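The following small numerical experiment (ours, in one dimension for simplicity) illustrates the typical behavior exploited below: for a Hölder-continuous function, here $a(x)=|x|^{1/2}$ with exponent $\theta=1/2$, the mollification error at the worst point scales like $\varepsilon^{\theta}$, so shrinking $\varepsilon$ by a factor of $4$ halves it:

```python
import numpy as np

# 1D illustration: with the standard bump mollifier rho and a(x) = |x|^(1/2)
# (Hoelder exponent theta = 1/2), the error (rho_eps * a)(0) - a(0) behaves
# like C * eps^(1/2).  The bump, the function a and the evaluation point are
# our own illustrative choices, not taken from the paper.
u = np.linspace(-0.999, 0.999, 4001)
du = u[1] - u[0]
rho = np.exp(-1.0 / (1.0 - u**2))   # supp rho contained in {|u| < 1}
rho /= np.sum(rho) * du             # normalize so that int rho = 1

def mollification_error(eps):
    # (rho_eps * a)(0) - a(0) = int rho(u) |eps * u|^(1/2) du   (since a(0) = 0)
    return np.sum(rho * np.abs(eps * u) ** 0.5) * du

ratio = mollification_error(0.1) / mollification_error(0.1 / 4.0)
# eps -> eps/4 should halve the error, i.e. ratio should be close to 2
```

The ratio is exactly $2=(4)^{1/2}$ here because the error at $x=0$ equals $\varepsilon^{1/2}\int\rho(u)|u|^{1/2}\,du$.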
Under the condition that $(D,\lambda)$ is of $(N-1,\theta)$-class with some $0<\theta\le1$, we introduce a modified asymptotic solution of $N$-th order $\Psi_{\tau,N,\varepsilon}$ as
$$ \Psi_{\tau,N,\varepsilon}(x,y)=e^{-\tau\phi(x,y)}\Big(\sum_{j=0}^{N-1}(-\tau)^{-j}b_j(x,y)+(-\tau)^{-N}(\rho_\varepsilon*b_N(\cdot,y))(x)\Big) \qquad (12) $$
for $x\in U_3\cap\overline{\Omega}$ and $y\in\overline{B}$. In (12), we set $b_{-1}(x,y)=0$ if it appears in the following argument.
To ensure that the modified asymptotic solution is well-defined in this class, we use the same notation as [9]: $\phi\in\tilde C^k((\overline{\Omega}\cap U_4)\times\tilde B)$ if and only if $\phi(x,y)$ satisfies $\partial_x^\alpha\partial_y^\beta\phi\in C((\overline{\Omega}\cap U_4)\times\tilde B)$ for $|\alpha|\le k$ and $|\beta|\ge0$, where $\tilde B$ is an open set satisfying $\tilde B\supset\overline{B}$ and $\tilde B\cap\overline{D}=\emptyset$. Then, since $e^{-\tau\phi(x,y)}\sum_{j=0}^{N-1}(-\tau)^{-j}b_j(x,y)$ is an asymptotic solution of $(N-1)$-th order, we can see that $\phi\in\tilde C^{2N+2,\theta}((\overline{\Omega}\cap U_4)\times\tilde B)$ and $b_j\in\tilde C^{2(N-j),\theta}((\overline{\Omega}\cap U_4)\times\tilde B)$ ($j=0,1,\dots,N-1$) for some open set $\tilde B\supset\overline{B}$, and $\Delta b_{N-1}\in C^{0,\theta}((\overline{\Omega}\cap U_4)\times\tilde B)$. Thus, from (10) it follows that $b_N\in C^{0,\theta}((\overline{\Omega}\cap U_4)\times\tilde B)$, $b_N(\cdot,y)\in B^{0,\theta}(U_3)$, $(\nabla_x\phi\cdot\nabla_x b_N)(\cdot,y)\in B^{0,\theta}(U_3)$, and $\rho_\varepsilon*b_N$ for $0<\varepsilon<r_0$, $x\in U_2$ is well-defined. Therefore, we can construct the modified asymptotic solution of $N$-th order $\Psi_{\tau,N,\varepsilon}$.
When discussing modified asymptotic solutions, specific function spaces and operators are frequently used. Here, we summarize those that are commonly used.
(i)
The function spaces mainly used to estimate amplitude functions: For $m\in\mathbb{N}\cup\{0\}$, $0\le\theta\le1$ and an open set $U\subset\mathbb{R}^3$, we set $B^m(U)=\{w\in C^m(U) : \|w\|_{B^m(U)}<\infty\}$ and $B^{m,\theta}(U)=\{w\in B^m(U) : \|w\|_{B^{m,\theta}(U)}<\infty\}$, where
$$ \|w\|_{B^m(U)}=\max_{|\alpha|\le m}\sup_{x\in U}|\partial_x^\alpha w(x)|,\qquad |w|_{B^{m,\theta}(U)}=\max_{|\alpha|=m}\sup_{x,y\in U,\ x\ne y}\frac{|\partial_x^\alpha w(x)-\partial_x^\alpha w(y)|}{|x-y|^\theta}, $$
$$ \|w\|_{B^{m,0}(U)}=\|w\|_{B^m(U)},\qquad \|w\|_{B^{m,\theta}(U)}=\|w\|_{B^m(U)}+|w|_{B^{m,\theta}(U)}\quad(0<\theta\le1). $$
For later use, we define a subset of $B^0(U_3\times\overline{B})$ to be
$$ C(\overline{B};B^{m,\theta}(U_3))=\{a\in B^0(U_3\times\overline{B}) : a(\cdot,y)\in B^{m,\theta}(U_3)\ (y\in\overline{B})\} $$
with the norm
$$ |||a|||_{m,\theta}=\sup_{y\in\overline{B}}\|a(\cdot,y)\|_{B^{m,\theta}(U_3)}\qquad(a\in C(\overline{B};B^{m,\theta}(U_3))). $$
(ii)
The function spaces mainly used to estimate the remainder term: For $\tau\ge1$, the function spaces $H^1_\tau(\Omega)=H^1(\Omega)$ and $H^{1/2}_\tau(\partial\Omega)=H^{1/2}(\partial\Omega)$ are defined with the norms
$$ \|\varphi\|_{H^1_\tau(\Omega)}=\big\{\|\nabla\varphi\|_{L^2(\Omega)}^2+\tau^2\|\varphi\|_{L^2(\Omega)}^2\big\}^{1/2} $$
and
$$ \|\varphi\|_{H^{1/2}_\tau(\partial D)}=\big\{\|\varphi\|_{H^{1/2}(\partial D)}^2+\tau\|\varphi\|_{L^2(\partial D)}^2\big\}^{1/2}, $$
respectively. Since $\partial D=\partial\Omega$ is of class $C^2$, we have the following estimate for the trace of functions in $H^1(\Omega)$:
$$ \|\varphi\|_{H^{1/2}_\tau(\partial D)}\le C\|\varphi\|_{H^1_\tau(\Omega)}\qquad(\varphi\in H^1_\tau(\Omega),\ \tau\ge1), $$
(see, e.g., Lemma 6.1 of [9]).
(iii)
The operators used to define and estimate the modified asymptotic solution: For $T_\phi$, we set
$$ (T_\phi^\varepsilon w)(x)=T_\phi(\rho_\varepsilon*w)(x)-(\rho_\varepsilon*(T_\phi w))(x). $$
For $a\in B^0(U_3)$, we define $W^\varepsilon a$ as
$$ (W^\varepsilon a(\cdot,y))(x)=(\rho_\varepsilon*a(\cdot,y))(x)-a(x,y)\qquad(0<\varepsilon<r_0,\ x\in U_2) $$
and
$$ (V_a^\varepsilon w)(x)=a(x)(\rho_\varepsilon*w)(x)-(\rho_\varepsilon*(aw))(x)\qquad(w\in L^1_{loc}(U_3),\ 0<\varepsilon<r_0,\ x\in U_2). $$
Note that for $a\in B^{0,\theta}(U_3)$ with some $0<\theta\le1$, we obtain
$$ \|V_a^\varepsilon w\|_{B^0(U_2)}\le\varepsilon^\theta\,|a|_{B^{0,\theta}(U_3)}\|w\|_{B^0(U_3)}\qquad(0<\varepsilon<r_0,\ w\in B^{0,\theta}(U_3)). $$
Indeed, from the fact that $\operatorname{supp}\rho\subset\{x\in\mathbb{R}^3 : |x|<1\}$, it follows that
$$ |(V_a^\varepsilon w)(x)|=\Big|\int_{\mathbb{R}^3}\rho(z)\big(a(x)-a(x-\varepsilon z)\big)w(x-\varepsilon z)\,dz\Big|\le\varepsilon^\theta|a|_{B^{0,\theta}(U_3)}\int_{\mathbb{R}^3}|z|^\theta\rho(z)|w(x-\varepsilon z)|\,dz\le\varepsilon^\theta|a|_{B^{0,\theta}(U_3)}\|w\|_{B^0(U_3)}\int_{\mathbb{R}^3}|z|^\theta\rho(z)\,dz\qquad(x\in U_2,\ 0<\varepsilon<r_0). $$
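Again as a numerical illustration only (in one dimension, with our own hypothetical choices $a(x)=|x|^{1/2}$, so $\theta=1/2$ and $|a|_{B^{0,1/2}}\le1$, and $w=\cos$), the commutator $V_a^\varepsilon$ is indeed small of order $\varepsilon^\theta$:

```python
import numpy as np

# 1D check of the commutator bound |V_a^eps w| <= eps^theta * |a|_{0,theta} * ||w||_inf,
# where (V_a^eps w)(x) = a(x)(rho_eps * w)(x) - (rho_eps * (a w))(x)
#                     = int rho(z) (a(x) - a(x - eps z)) w(x - eps z) dz.
# The choices of a, w, eps and the evaluation points are ours, for illustration.
u = np.linspace(-0.999, 0.999, 4001)
du = u[1] - u[0]
rho = np.exp(-1.0 / (1.0 - u**2))
rho /= np.sum(rho) * du             # int rho = 1

a = lambda x: np.abs(x) ** 0.5      # Hoelder: |a|_{B^{0,1/2}} <= 1
w = lambda x: np.cos(x)             # ||w||_inf <= 1 on the region used

def V(x, eps):
    return np.sum(rho * (a(x) - a(x - eps * u)) * w(x - eps * u)) * du

eps = 0.01
sup_V = max(abs(V(x, eps)) for x in np.linspace(-0.5, 0.5, 101))
bound = eps ** 0.5                  # eps^theta * |a|_{0,theta} * ||w||_inf
```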
The key to constructing the modified asymptotic solution lies in obtaining an estimate for the operator T ϕ ε described in Lemma 2, which is discussed in the next section.
Now, we check which equations the modified asymptotic solution $\Psi_{\tau,N,\varepsilon}(x,y)$ satisfies. Note that $(\rho_\varepsilon*a)(x)=0$ ($x\in B_{2r_0}(x_0^\alpha)$) for $0<\varepsilon<r_0$ if $a(x)=0$ ($x\in B_{3r_0}(x_0^\alpha)$). Then,
$$ T_\phi(\rho_\varepsilon*b_N)(x,y)+\gamma_0\Delta b_{N-1}(x,y)=(T_\phi^\varepsilon b_N(\cdot,y))(x)-\gamma_0(W^\varepsilon\Delta b_{N-1}(\cdot,y))(x) $$
for $x\in U_2$, $0<\varepsilon<r_0$, since $T_\phi b_N+\gamma_0\Delta b_{N-1}=0$ in $U_3$ from (10). Therefore, we obtain
$$ (\gamma_0\Delta-\tau^2)\Psi_{\tau,N,\varepsilon}(x,y)=e^{-\tau\phi(x,y)}\Big\{(-\tau)^{1-N}\big[(T_\phi^\varepsilon b_N(\cdot,y))(x)-\gamma_0(W^\varepsilon\Delta b_{N-1}(\cdot,y))(x)\big]+(-\tau)^{-N}\gamma_0(\Delta\rho_\varepsilon*b_N(\cdot,y))(x)\Big\}\qquad(x\in U_2). $$
Similarly, we have
$$ B_\tau^n\Psi_{\tau,N,\varepsilon}(x,y)=-B_\tau^n\Phi_\tau(x,y)-e^{-\tau\phi(x,y)}\delta_{0,N}a_1(x,y)+e^{-\tau\phi(x,y)}\Big\{(-\tau)^{1-N}\big(\gamma_0\partial_{\nu_x}\phi+\lambda_1(x)\big)(W^\varepsilon b_N(\cdot,y))(x)+(-\tau)^{-N}(B_0^n\rho_\varepsilon*b_N(\cdot,y))(x)\Big\}\quad\text{on}\ \partial D_n\cap U_2. $$
Since $b_N=0$ on $\partial D_d\cap U_3$ if $N\ge1$ and $e^{-\tau\phi}b_0=-e^{-\tau\phi}\tilde a_0(x,y)=-\Phi_\tau(x,y)$ on $\partial D_d\cap U_3$ if $N=0$, we have
$$ \Psi_{\tau,N,\varepsilon}(x,y)=e^{-\tau\phi(x,y)}\Big\{-(1-\delta_{0,N})\tilde a_0(x,y)+(-\tau)^{-N}(\rho_\varepsilon*b_N(\cdot,y))(x)\Big\}=-\Phi_\tau(x,y)+(-\tau)^{-N}e^{-\tau\phi(x,y)}(W^\varepsilon b_N(\cdot,y))(x)\qquad(x\in\partial D_d\cap U_2). $$
Using these equations for $\Psi_{\tau,N,\varepsilon}(x,y)$, we introduce the remainder term $\Psi_{\tau,N,\varepsilon}^r(x,y)$ as follows: Choose a cut-off function $\chi\in C_0^\infty(\mathbb{R}^3)$ with $0\le\chi\le1$, $\chi(x)=1$ for $x\in U_1$ and $\operatorname{supp}\chi\subset U_2$. Set
$$ \Psi_{\tau,N,\varepsilon}^r(x,y)=\tilde\Psi_\tau(x,y)-\chi(x)\Psi_{\tau,N,\varepsilon}(x,y). $$
Then we have
$$ (\gamma_0\Delta-\tau^2)\Psi_{\tau,N,\varepsilon}^r(x,y)=F_{\tau,N,\varepsilon}(x,y)\ \text{in}\ \Omega,\qquad B_\tau^n\Psi_{\tau,N,\varepsilon}^r(x,y)=G_{\tau,N,\varepsilon}^n(x,y)\ \text{on}\ \partial D_n,\qquad \Psi_{\tau,N,\varepsilon}^r(x,y)=H_{\tau,N,\varepsilon}^d(x,y)\ \text{on}\ \partial D_d, $$
where $F_{\tau,N,\varepsilon}$, $G_{\tau,N,\varepsilon}^n$ and $H_{\tau,N,\varepsilon}^d$ are given by
$$ F_{\tau,N,\varepsilon}(x,y)=-\Big\{(-\tau)^{1-N}e^{-\tau\phi(x,y)}\chi(x)\big[(T_\phi^\varepsilon b_N(\cdot,y))(x)-\gamma_0(W^\varepsilon\Delta b_{N-1}(\cdot,y))(x)\big]+(-\tau)^{-N}\chi(x)e^{-\tau\phi(x,y)}\gamma_0(\Delta\rho_\varepsilon*b_N(\cdot,y))(x)+\gamma_0[\Delta,\chi]\Psi_{\tau,N,\varepsilon}(x,y)\Big\}, $$
$$ G_{\tau,N,\varepsilon}^n(x,y)=(\chi(x)-1)B_\tau^n\Phi_\tau(x,y)+\chi(x)e^{-\tau\phi(x,y)}\delta_{0,N}a_1(x,y)-e^{-\tau\phi(x,y)}\Big\{(-\tau)^{1-N}\chi(x)\big(\gamma_0\partial_{\nu_x}\phi+\lambda_1(x)\big)(W^\varepsilon b_N(\cdot,y))(x)+(-\tau)^{-N}\chi(x)(B_0^n\rho_\varepsilon*b_N(\cdot,y))(x)\Big\}-\gamma_0(\partial_{\nu_x}\chi(x))\Psi_{\tau,N,\varepsilon}(x,y), $$
$$ H_{\tau,N,\varepsilon}^d(x,y)=(\chi(x)-1)\Phi_\tau(x,y)-(-\tau)^{-N}e^{-\tau\phi(x,y)}\chi(x)(W^\varepsilon b_N(\cdot,y))(x). $$
To summarize the arguments mentioned so far, $w(x;\tau)$ is given by $w(x;\tau)=w_{N,\varepsilon}(x;\tau)+w_{N,\varepsilon}^r(x;\tau)$, where
$$ w_{N,\varepsilon}(x;\tau)=\int_B\big(\Phi_\tau(x,\tilde y)+\chi(x)\Psi_{\tau,N,\varepsilon}(x,\tilde y)\big)f(\tilde y)\,d\tilde y,\qquad w_{N,\varepsilon}^r(x;\tau)=\int_B\Psi_{\tau,N,\varepsilon}^r(x,\tilde y)f(\tilde y)\,d\tilde y. $$
Here, w N , ε ( x ; τ ) is the main term of the approximation and w N , ε r ( x ; τ ) is the remainder term. To show that the approximation method using this modified asymptotic solution works, it is necessary to confirm that w N , ε r ( x ; τ ) is actually a remainder term. This is confirmed by the following proposition:
Proposition 1. 
Assume that $(D,\lambda)$ is of $(N-1,\theta)$-class with some $0<\theta\le1$ and $N\in\{0\}\cup\mathbb{N}$, and that $B$ and $D$ satisfy the non-degenerate condition for $L_0(x,y)$. Further, assume that $B$ is a convex set with $C^2$ boundary. Then, there exist constants $C_N>0$, $\tau_0\ge1$ and $r_0>0$ such that
$$ \|w_{N,\varepsilon}^r\|_{H^1_\tau(\Omega)}+\tau\|w_{N,\varepsilon}^r\|_{L^2(\partial D_n)}\le C_N\,m(\tau,\varepsilon)\,\tau^{-N-3}e^{-\frac{\tau}{\sqrt{\gamma_0}}l_0}\qquad(\tau\ge\tau_0,\ 0<\varepsilon<r_0), \qquad (19) $$
where $m(\tau,\varepsilon)=\varepsilon^{\theta}+\tau^{-1}\varepsilon^{-2+\theta}+\tau\varepsilon^{\theta}+\varepsilon^{1+\theta}+1$ and the constant $C_N$ is determined by $|||b_j|||_{2,\theta}$, $j=0,1,\dots,N-1$, $|||b_N|||_{0,\theta}$ and $|||\phi|||_{2,\theta}$.
Remark 1. 
In [9], if $(D,\lambda)$ is of $(N,0)$-class, the remainder term $w_N^r$ can be introduced using the standard asymptotic solution, and can be estimated as
$$ \|w_N^r\|_{H^1_\tau(\Omega)}+\tau\|w_N^r\|_{L^2(\partial D_n)}\le C_N\,\tau^{-N-3}e^{-\frac{\tau}{\sqrt{\gamma_0}}l_0}\qquad(\tau\ge\tau_0). \qquad (20) $$
Because $\partial D$ has lower regularity in (19), the factor $m(\tau,\varepsilon)$ appears in (19) in contrast to (20). For the incident wave $v(x;\tau)$, as in the proof of Lemma 6.3 of [9], we have
$$ \|v\|_{H^1_\tau(\Omega)}+\tau\|v\|_{L^2(\partial D_n)}\le C\,\tau^{-2}e^{-\frac{\tau}{\sqrt{\gamma_0}}l_0}\qquad(\tau\ge\tau_0). $$
Thus, even for the case $N=0$, if we can show that $m(\tau,\varepsilon)/\tau\to0$ as $\tau\to\infty$ (and $\varepsilon\to0$), we expect that $w_{N,\varepsilon}^r$ behaves as a remainder term. For example, it suffices to take $\varepsilon=\tau^{-1}$, which is used in Section 5.
If the boundary $\partial D$ is only of class $C^2$, i.e., $\theta=0$, then we have $m(\tau,\varepsilon)/\tau\ge1$ for $\tau\ge1$ and $0<\varepsilon\le1$. This fact suggests that the assumption on the regularity of $\partial D$ is crucial for the use of the modified asymptotic solution (12).
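A quick numerical reading of this remark (our own check, with the terms of $m(\tau,\varepsilon)$ taken from Proposition 1): under the choice $\varepsilon=\tau^{-1}$, the quotient $m(\tau,\varepsilon)/\tau$ decays for $\theta>0$ and stays bounded below by $1$ for $\theta=0$:

```python
# Behavior of m(tau, eps)/tau for eps = 1/tau, with
# m(tau, eps) = eps^theta + eps^(theta-2)/tau + tau*eps^theta + eps^(1+theta) + 1
# (the expression from Proposition 1).
def m(tau, eps, theta):
    return (eps**theta + eps**(theta - 2.0) / tau
            + tau * eps**theta + eps**(1.0 + theta) + 1.0)

def quotient(tau, theta):
    return m(tau, 1.0 / tau, theta) / tau

decaying = [quotient(t, 0.5) for t in (10.0, 100.0, 1000.0)]  # theta = 1/2: tends to 0
stuck = [quotient(t, 0.0) for t in (10.0, 100.0, 1000.0)]     # theta = 0: stays >= 1
```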
In the following section, we estimate $T_\phi^\varepsilon$ and $W^\varepsilon$, which appear when constructing $w_{N,\varepsilon}^r$. Using these results, we prove Proposition 1 in Section 4.

3. Estimates of $T_\phi^\varepsilon$ and $W^\varepsilon$

To estimate the remainder terms, we prepare estimates for $T_\phi^\varepsilon$ and $W^\varepsilon$.
Lemma 1. 
We have the following:
(1) 
$|(\rho_\varepsilon*a(\cdot,y))(x)|\le|||a|||_{0,0}$ ($a\in C(\overline{B};B^{0,0}(U_3))$, $x\in U_2$, $y\in\overline{B}$, $0<\varepsilon<r_0$);
(2) 
For $a\in C(\overline{B};B^{0,\theta}(U_3))$, there exists a constant $C>0$ such that
$$ \|W^\varepsilon a(\cdot,y)\|_{B^0(U_2)}\le C\,|||a|||_{0,\theta}\,\varepsilon^\theta\qquad(y\in\overline{B},\ 0<\varepsilon<r_0); $$
(3) 
For $a\in C(\overline{B};B^{1,\theta}(U_3))$, we have $W^\varepsilon a(\cdot,y)\in C(\overline{B};B^{1,\theta}(U_2))$ and
$$ \|\nabla_x W^\varepsilon a(\cdot,y)\|_{B^0(U_2)}\le C\,|||a|||_{1,\theta}\,\varepsilon^\theta\qquad(y\in\overline{B},\ 0<\varepsilon<r_0); $$
(4) 
For $a\in C(\overline{B};B^{0,\theta}(U_3))$, there exists a constant $C_\beta>0$ for any $|\beta|\ge1$ such that
$$ |\partial_x^\beta(\rho_\varepsilon*a(\cdot,y))(x)|=|((\partial_x^\beta\rho_\varepsilon)*a(\cdot,y))(x)|\le C_\beta\,|||a|||_{0,\theta}\,\varepsilon^{-|\beta|+\theta}\qquad(x\in U_2,\ y\in\overline{B},\ 0<\varepsilon<r_0). $$
Proof. 
For $a\in C(\overline{B};B^{0,0}(U_3))$ and $x\in U_2$, we have
$$ |(\rho_\varepsilon*a(\cdot,y))(x)|\le\int_{\mathbb{R}^3}\rho_\varepsilon(z)|a(x-z,y)|\,dz\le\int_{\mathbb{R}^3}\rho_\varepsilon(z)\,dz\;|||a|||_{0,0}, $$
which implies (1). For (2), we note
$$ |(W^\varepsilon a(\cdot,y))(x)|=|(\rho_\varepsilon*a(\cdot,y))(x)-a(x,y)|\le\int_{\mathbb{R}^3}\rho_\varepsilon(z)|a(x-z,y)-a(x,y)|\,dz=\int_{\mathbb{R}^3}\rho(z)|a(x-\varepsilon z,y)-a(x,y)|\,dz. $$
Since we have
$$ |a(x-\varepsilon z,y)-a(x,y)|\le|a(\cdot,y)|_{B^{0,\theta}(U_3)}(\varepsilon|z|)^\theta\qquad(x\in U_2,\ |z|\le1,\ y\in\overline{B},\ 0<\varepsilon<r_0) $$
and $\operatorname{supp}\rho\subset\{x\in\mathbb{R}^3 : |x|<1\}$, then
$$ |(W^\varepsilon a(\cdot,y))(x)|\le\varepsilon^\theta|a(\cdot,y)|_{B^{0,\theta}(U_3)}\int_{\mathbb{R}^3}\rho(z)|z|^\theta\,dz\le|a(\cdot,y)|_{B^{0,\theta}(U_3)}\,\varepsilon^\theta\qquad(x\in U_2,\ y\in\overline{B},\ 0<\varepsilon<r_0), $$
which implies (2). For (3), if we note that
$$ \nabla_x(W^\varepsilon a(\cdot,y))(x)=\int_{\mathbb{R}^3}\rho_\varepsilon(z)\nabla_x a(x-z,y)\,dz-\nabla_x a(x,y)=(W^\varepsilon\nabla_x a(\cdot,y))(x), $$
then (2) implies (3). To prove (4), we note that $\int_{\mathbb{R}^3}\partial_z^\beta\rho(z)\,dz=0$ for $|\beta|\ge1$. Since
$$ \partial_x^\beta(\rho_\varepsilon*a(\cdot,y))(x)=((\partial_x^\beta\rho_\varepsilon)*a(\cdot,y))(x)=\varepsilon^{-|\beta|}\int_{\mathbb{R}^3}(\partial_z^\beta\rho)(z)\big(a(x-\varepsilon z)-a(x)\big)\,dz, $$
we have (4) in the same way as (2). □
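Item (4) can also be observed numerically (in one dimension, with our own illustrative choices as before): for $a(x)=|x|^{1/2}$, the first derivative of the mollification grows like $\varepsilon^{-|\beta|+\theta}=\varepsilon^{-1/2}$, so shrinking $\varepsilon$ by a factor of $4$ doubles its sup:

```python
import numpy as np

# 1D illustration of Lemma 1 (4): for a(x) = |x|^(1/2) (theta = 1/2),
# sup |d/dx (rho_eps * a)| ~ C * eps^(-1/2).  All concrete choices are ours.
u = np.linspace(-0.999, 0.999, 4001)
du = u[1] - u[0]
rho = np.exp(-1.0 / (1.0 - u**2))
rho /= np.sum(rho) * du

def moll(x, eps):
    # (rho_eps * a)(x) = int rho(u) |x - eps*u|^(1/2) du
    return np.sum(rho * np.abs(x - eps * u) ** 0.5) * du

def sup_derivative(eps):
    # sup of |d/dx (rho_eps * a)| over x near the non-smooth point x = 0,
    # by central differences with a step proportional to eps
    xs = np.linspace(-eps, eps, 81)
    h = 1e-3 * eps
    return max(abs(moll(x + h, eps) - moll(x - h, eps)) / (2.0 * h) for x in xs)

growth = sup_derivative(0.1 / 4.0) / sup_derivative(0.1)
# eps -> eps/4 should double the sup of the derivative (theta = 1/2)
```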
Lemma 2. 
If $\partial D$ is of class $C^{2,\theta}$, then for the solution $\phi(\cdot,y)\in B^{2,\theta}(U_3)$ of (9) and $w(\cdot,y)\in C(\overline{B},B^{0,\theta}(U_3))$ satisfying $\nabla_x\phi\cdot\nabla_x w\in C(\overline{B},B^{0,\theta}(U_3))$, we have
$$ \|T_{\phi(\cdot,y)}^\varepsilon w(\cdot,y)\|_{B^0(U_2)}\le C\,\varepsilon^\theta\,|||\phi|||_{2,\theta}\,|||w|||_{0,\theta}\qquad(y\in\overline{B},\ 0<\varepsilon<r_0). $$
Proof. 
Let $\phi_y(x)=\phi(x,y)$ and $w_y(x)=w(x,y)$. If there is no confusion, the variable $y$ in $\phi_y$ and $w_y$ is omitted. For $w\in B^0(U_3)$ such that $\nabla_x\phi\cdot\nabla_x w\in B^0(U_3)$, note that $T_\phi^\varepsilon$ is expressed as
$$ (T_\phi^\varepsilon w)(x)=2\gamma_0\big\{\nabla_x\phi(x)\cdot\nabla_x(\rho_\varepsilon*w)(x)-\rho_\varepsilon*(\nabla_x\phi\cdot\nabla_x w)(x)\big\}+\gamma_0 V_{\Delta\phi}^\varepsilon w(x)\qquad(x\in U_2,\ 0<\varepsilon<r_0). \qquad (21) $$
For this $w$, we can take a sequence $\{w_k\}$ with $w_k\in B^\infty(U_3)$ ($k=1,2,\dots$) and $\lim_{k\to\infty}w_k=w$ in $B^0(U_3)$. Then, for $x\in U_2$ and $0<\varepsilon<r_0$, we have
$$ \rho_\varepsilon*(\nabla_x\phi\cdot\nabla_x w)(x)=\lim_{k\to\infty}\rho_\varepsilon*(\nabla_x\phi\cdot\nabla_x w_k)(x). \qquad (22) $$
To show (22), we go back to how to solve (9) and (10). As in Section 3.2 of [12] or Chap. 3 of [13], ϕ and b j ( j = 0 , 1 , , N ) are given by the method of characteristics.
For each x j α Γ α , we can take a local coordinate system:
U x j α σ = ( σ 1 , σ 2 ) s ( σ ) D α V x j α
for an open neighborhood U x j α of ( 0 , 0 ) in R 2 with s ( 0 , 0 ) = x j α and an open neighborhood V x j α of x j α in R 3 . By the method of characteristics (cf. [12] or [13]), if we take V x j α sufficiently small, we can obtain the eikonal ϕ satisfying ϕ C ˜ 2 ( V x j α × B ˜ ) for some B ˜ with B ¯ B ˜ . We introduce X ( r , σ ) = ( X 1 ( r , σ ) , X 2 ( r , σ ) , X 3 ( r , σ ) ) as the solution of
r X ( r , σ ) = x ϕ ( X ( r , σ ) ) ( | r | < 2 r 0 ˜ , σ U ˜ x j α ) , X ( 0 , σ ) = s ( σ ) ( σ U ˜ x j α ) ,
where U ˜ x j α is an open set satisfying U ˜ x j α ¯ U x j α . Then, there exists a constant r ˜ 0 > 0 such that
X : ( 2 r ˜ 0 , 2 r ˜ 0 ) × U ˜ x j α ( r , σ ) X ( r , σ ) V x j α
is a C 1 -diffeomorphism from ( 2 r ˜ 0 , 2 r ˜ 0 ) × U ˜ x j α to the image X ( ( 2 r ˜ 0 , 2 r ˜ 0 ) × U x j α ) , and we have
r w ( X ( r , σ ) ) = x ϕ · x w ( X ( r , σ ) ) ( ( r , σ ) ( 2 r ˜ 0 , 2 r ˜ 0 ) × U ˜ x j α ) .
For these X, r ˜ 0 > 0 and U ˜ x j α , we take r 0 > 0 satisfying
B 4 r 0 ( x j α ) ¯ X ( ( r ˜ 0 , r ˜ 0 ) × U ˜ x j α ) .
Because Γ α is a finite set, r 0 > 0 can be chosen independently of x j α . We also note that there exists a constant 0 < r ˜ 1 ( x j α ) < r ˜ 0 such that
B 3 r 0 ( x j α ) ¯ X ( ( r ˜ 1 ( x j α ) , r ˜ 1 ( x j α ) ) × U ˜ x j α ) .
Thus, from now on, the constant r 0 > 0 used to define U k = j = 1 M α B k r 0 ( x j α ) also satisfies (24) and (25).
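To make the construction concrete, the following sketch integrates the characteristic system (23) numerically for a model eikonal ϕ ( x ) = | x y | / γ 0 (a hypothetical choice matching the phase of Φ τ ; the eikonal in the text is only given locally and abstractly). Its characteristics are straight rays emanating from y, traversed at speed 1 / γ 0 .

```python
import numpy as np

# Hypothetical model eikonal phi(x) = |x - y| / sqrt(gamma0); purely
# illustrative, since the eikonal in the text is only known abstractly.
gamma0 = 2.0
y = np.zeros(3)

def grad_phi(x):
    r = np.linalg.norm(x - y)
    return (x - y) / (r * np.sqrt(gamma0))

def characteristic(s, r_max=1.0, n=10000):
    """Explicit Euler for dX/dr = grad_x phi(X), X(0) = s (cf. (23))."""
    X, h = s.astype(float), r_max / n
    for _ in range(n):
        X = X + h * grad_phi(X)
    return X

s = np.array([1.0, 0.0, 0.0])        # a point s(sigma) on an initial surface
X_end = characteristic(s)

# The ray stays on the straight line through y and s, and |X| grows
# linearly in r at speed 1/sqrt(gamma0):
print(X_end / np.linalg.norm(X_end))
print(np.linalg.norm(X_end))
```

For this model phase the map (r, σ) ↦ X(r, σ) is exactly the radial flow, which is the simplest instance of the C 1 -diffeomorphism used above.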
Now we are in a position to show (22). Take x U 2 and 0 < ε < r 0 arbitrarily. Then, there exists a unique x j α such that x B 2 r 0 ( x j α ) and ρ ε ( x · ) C 0 ( B 3 r 0 ( x j α ) ) . Because X C 1 ( [ r ˜ 0 , r ˜ 0 ] × U ˜ x j α ) , it follows that x ϕ ( X ( r , σ ) ) C 1 ( [ r ˜ 0 , r ˜ 0 ] × U ˜ x j α ) . This fact and (23) yield r X C 1 ( [ r ˜ 0 , r ˜ 0 ] × U ˜ x j α ) . Thus, we obtain the existence and continuity of r det X ( r , σ ) in [ r ˜ 0 , r ˜ 0 ] × U ˜ x j α . From (25), it follows that ρ ε ( x X ( ± r ˜ 0 , σ ) ) = 0 for σ U ˜ x j α . Hence, the change-of-variables formula and (24) yield
ρ ε ( x ϕ · x w ) ( x ) = B 3 r 0 ( x j α ) ρ ε ( x z ) x ϕ ( z ) · x w ( z ) d z = r ˜ 0 r ˜ 0 U ˜ x j α ρ ε ( x X ( r , σ ) ) r w ( X ( r , σ ) ) det X ( r , σ ) d r d σ = r ˜ 0 r ˜ 0 U ˜ x j α r ρ ε ( x X ( r , σ ) ) det X ( r , σ ) w ( X ( r , σ ) ) d r d σ = lim k r ˜ 0 r ˜ 0 U ˜ x j α r ρ ε ( x X ( r , σ ) ) det X ( r , σ ) w k ( X ( r , σ ) ) d r d σ .
Retracing the integration of (26) backward, we see that
r ˜ 0 r ˜ 0 U ˜ x j α r ρ ε ( x X ( r , σ ) ) det X ( r , σ ) w k ( X ( r , σ ) ) d r d σ = r ˜ 0 r ˜ 0 U ˜ x j α ρ ε ( x X ( r , σ ) ) r w k ( X ( r , σ ) ) det X ( r , σ ) d r d σ = ρ ε ( x ϕ · x w k ) ( x ) .
Hence, the last formula of (26) is equal to lim k ρ ε ( x ϕ · x w k ) ( x ) . Thus, we obtain (22). From (22), it follows that
ρ ε ( x ϕ · x w ) ( x ) = lim k U 3 ρ ε ( x z ) x ϕ ( z ) · x w k ( z ) d z = lim k U 3 w k ( z ) div z ( ρ ε ( x z ) z ϕ ( z ) ) d z = lim k U 3 x ρ ε ( x z ) · z ϕ ( z ) w k ( z ) + ρ ε ( x z ) Δ ϕ ( z ) w k ( z ) d z = R 3 x ρ ε ( x z ) · z ϕ ( z ) w ( z ) d z ρ ε ( ( Δ ϕ ) w ) ( x ) ( x U 2 , 0 < ε < r 0 ) .
Since we know
x ϕ · x ( ρ ε w ) ( x ) = R 3 x ϕ ( x ) · x ρ ε ( x z ) w ( z ) d z ,
from (21) and (27) we have
( T ϕ ε w ) ( x ) = 2 γ 0 R 3 x ρ ε ( x z ) · ( x ϕ ( x ) z ϕ ( z ) ) w ( z ) d z + 2 γ 0 ρ ε ( ( Δ ϕ ) w ) ( x ) + γ 0 V Δ ϕ ε w ( x ) ( x U 2 , 0 < ε < r 0 ) .
Noting that
R 3 x ρ ε ( x z ) · x ϕ ( x ) z ϕ ( z ) d z = R 3 z ρ ε ( x z ) · x ϕ ( x ) z ϕ ( z ) d z = ρ ε ( Δ ϕ ) ( x ) ,
we have
( T ϕ ε w ) ( x ) = 2 γ 0 R 3 x ρ ε ( x z ) · ( x ϕ ( x ) z ϕ ( z ) ) ( w ( z ) w ( x ) ) d z + 2 γ 0 ρ ε ( ( Δ ϕ ) w ) ( x ) w ( x ) ρ ε ( Δ ϕ ) ( x ) + γ 0 V Δ ϕ ε w ( x ) = 2 γ 0 I ( x ) 2 γ 0 ( V w ε Δ ϕ ) ( x ) + γ 0 ( V Δ ϕ ε w ) ( x ) ( x U 2 , 0 < ε < r 0 ) ,
where
I ( x ) = R 3 x ρ ε ( x z ) · ( x ϕ ( x ) z ϕ ( z ) ) ( w ( z ) w ( x ) ) d z .
We have for x U 2
| x ρ ε ( x z ) · ( x ϕ ( x ) z ϕ ( z ) ) | 1 ε 3 ϕ B 2 ( U 3 ) | x z | ε | x ρ ( x z ε ) | 1 ε 3 ϕ B 2 ( U 3 ) x ρ L ( R 3 ) χ 1 ( x z ε ) ,
where χ 1 ( x ) = 1 ( | x | < 1 ), χ 1 ( x ) = 0 ( | x | 1 ). We set χ 1 , ε ( z ) = ε 3 χ 1 ( ε 1 z ) . Hence, it follows that
| I ( x ) | ϕ B 2 ( U 3 ) x ρ L ( R 3 ) R 3 χ 1 , ε ( x z ) | w ( z ) w ( x ) | d z for x U 2 .
In the same way as (2) of Lemma 1, we have
I B 0 ( U 2 ) C ε θ ϕ B 2 ( U 3 ) w B 0 , θ ( U 3 ) .
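The mechanism behind this bound can be checked in a one-dimensional analogue: for a θ-Hölder function w, the average of | w ( z ) w ( x ) | over an ε-window around x is O ( ε θ ) . The function and parameters below are illustrative choices, not objects from the text.

```python
import numpy as np

# 1D analogue of the bound |I(x)| <= C eps^theta ||w||_{0,theta}:
# the eps-window average of |w(z) - w(x)| is O(eps^theta) for Hoelder w.
theta = 0.5
w = lambda t: np.abs(t) ** theta          # Hoelder exponent theta at t = 0

def window_avg(x, eps, n=100001):
    z = np.linspace(x - eps, x + eps, n)
    return np.mean(np.abs(w(z) - w(x)))

ratios = [window_avg(0.0, eps) / eps**theta for eps in (1e-1, 1e-2, 1e-3)]
print(ratios)    # the ratio stays bounded (here it is about 1/(1+theta))
```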
Since (14) implies
V w ε Δ ϕ B 0 ( U 2 ) + V Δ ϕ ε w B 0 ( U 2 ) C ε θ ϕ B 2 , θ ( U 3 ) w B 0 , θ ( U 3 ) ( 0 < ε < r 0 ) ,
the above together yields
T ϕ ( · , y ) ε w ( · , y ) B 0 ( U 2 ) C ε θ ϕ ( · , y ) B 2 , θ ( U 3 ) w ( · , y ) B 0 , θ ( U 3 ) ( y B ¯ , 0 < ε < r 0 ) .
This completes the proof of Lemma 2. □

4. Estimates of the Remainder Term w N , ε r ( x ; τ )

In this section, we show Proposition 1. We set
f τ , N , ε ( x ) = B F τ , N , ε ( x , y ) f ( y ) d y ( x Ω ) , g τ , N , ε n ( x ) = B G τ , N , ε n ( x , y ) f ( y ) d y ( x D n ) , h τ , N , ε d ( x ) = B H τ , N , ε d ( x , y ) f ( y ) d y ( x D d ) .
From (15), we can see that w N , ε r is a weak solution of
( γ 0 Δ τ 2 ) w N , ε r ( x ; τ ) = f τ , N , ε ( x ) in Ω , B τ n w N , ε r ( x ; τ ) = g τ , N , ε n ( x ) on D n , w N , ε r ( x ; τ ) = h τ , N , ε d ( x ) on D d .
Hence, as in (6.1) and (6.2) in [9], there exist constants τ 0 1 and C > 0 such that
w N , ε r H τ 1 ( Ω ) C τ 1 { f τ , N , ε L 2 ( Ω ) + τ g τ , N , ε n L 2 ( D n ) + τ h τ , N , ε d H τ 1 / 2 ( D d ) } , w N , ε r L 2 ( D n ) C τ 3 / 2 { f τ , N , ε L 2 ( Ω ) + τ g τ , N , ε n L 2 ( D n ) + τ h τ , N , ε d H τ 1 / 2 ( D d ) }
for any τ τ 0 . Thus, we can obtain Proposition 1 if we show the following estimates:
Lemma 3. 
Under the same assumptions as Proposition 1, there exist C N > 0 and τ 0 1 such that
f τ , N , ε L 2 ( Ω ) + τ g τ , N , ε n L 2 ( D n ) + τ h τ , N , ε d H τ 1 / 2 ( D d ) C N m ( τ , ε ) τ N 2 e τ γ 0 l 0 ( τ τ 0 , 0 < ε < r 0 ) ,
where m ( τ , ε ) is the function defined in Proposition 1 and C N > 0 is a constant determined by | | | b j | | | 2 , θ , j = 0 , 1 , , N 1 , | | | b N | | | 0 , θ and | | | ϕ | | | 2 , θ .
In the remainder of this section, we show Lemma 3. Take χ ˜ C 0 ( U 3 ) with 0 χ ˜ 1 and χ ˜ = 1 in U 2 . From Remark 5.1 and Lemma 6.2 of [9], if B and D satisfy all the assumptions of Proposition 1, there exists a constant C > 0 such that
B × B d y d y ˜ D e τ L ( x , y , y ˜ ) d S x C τ 5 e 2 τ γ 0 l 0 ( τ 1 ) ,
B × B d y d y ˜ Ω χ ˜ ( x ) e τ ( ϕ ( x , y ) + ϕ ( x , y ˜ ) ) d x C τ 6 e 2 τ γ 0 l 0 ( τ 1 ) ,
which are used to show Lemma 3.
Proof of Lemma 3. 
We give estimates of f τ , N , ε , g τ , N , ε n and h τ , N , ε d individually.
(i)
Estimate of f τ , N , ε L 2 ( Ω ) : From the definition in (16), it follows that F τ , N , ε ( x , y ) = F τ , N , ε , 0 ( x , y ) + F τ , N , ε , ( x , y ) , where
F τ , N , ε , 0 ( x , y ) = e τ ϕ ( x , y ) ( τ ) N χ ( x ) { τ ( T ϕ ε b N ( · , y ) ) ( x ) γ 0 ( W ε Δ b N 1 ( · , y ) ) ( x ) + γ 0 ( Δ ρ ε b N ( · , y ) ) ( x ) } , F τ , N , ε , ( x , y ) = γ 0 [ Δ , χ ] Ψ τ , N , ε ( x , y ) .
Since ϕ ( x , y ) > l 0 / γ 0 for ( x , y ) ( ( U 3 U 1 ) Ω ) × B ¯ , there exists a constant c 1 > 0 such that
γ 0 ϕ ( x , y ) l 0 + 2 c 1 for ( x , y ) ( ( U 3 U 1 ) Ω ) × B ¯ .
From (1) and (4) of Lemma 1, (12) and (31), we have
| F τ , N , ε , ( x , y ) | C χ ˜ ( x ) ( ε 1 + θ | | | b N | | | 0 , θ + j = 0 N 1 | | | b j | | | 1 , 0 ) e c 1 τ e τ γ 0 l 0 ( ( x , y ) Ω × B ¯ ) .
Let us estimate F τ , N , ε , 0 . By (2) and (4) of Lemma 1 and Lemma 2, we have
| F τ , N , ε , 0 ( x , y ) | e τ ϕ ( x , y ) τ N χ ( x ) { τ | ( T ϕ ε b N ( · , y ) ) ( x ) | + γ 0 | ( W ε Δ b N 1 ( · , y ) ) ( x ) | + γ 0 | ( Δ ρ ε b N ( · , y ) ) ( x ) | } C e τ ϕ ( x , y ) τ N χ ( x ) { ( τ ε θ | | | ϕ | | | 2 , θ + ε 2 + θ ) | | | b N | | | 0 , θ + τ ε θ | | | b N 1 | | | 2 , θ } ( ( x , y ) Ω × B ¯ , 0 < ε < r 0 ) .
Hence, it follows that
f τ , N , ε L 2 ( Ω ) 2 = B × B f ( y ) f ( y ˜ ) Ω F τ , N , ε ( x , y ) F τ , N , ε ( x , y ˜ ) d x d y d y ˜ C N ( m ( τ , ε ) ) 2 τ 2 N { τ 2 B × B Ω χ ˜ ( x ) e τ ( ϕ ( x , y ) + ϕ ( x , y ˜ ) ) d x d y d y ˜ + e c 1 τ e 2 τ γ 0 l 0 } .
This estimate and (30) yield that f τ , N , ε L 2 ( Ω ) C N τ N 2 m ( τ , ε ) e τ γ 0 l 0 , which is the estimate for f τ , N , ε in Lemma 3.
(ii)
Estimate of g τ , N , ε n L 2 ( D n ) : From the definition in (17), it follows that G τ , N , ε n ( x , y ) = G τ , N , ε , 0 n ( x , y ) + G τ , N , ε , n ( x , y ) , where
G τ , N , ε , 0 n ( x , y ) = e τ ϕ ( x , y ) χ ( x ) { ( τ ) 1 N ( γ 0 ν x ϕ ) + λ 1 ( x ) ( W ε b N ( · , y ) ) ( x ) + ( τ ) N B 0 n ( ρ ε b N ( · , y ) ) ( x ) + δ 0 , N a 1 ( x , y ) } , G τ , N , ε , n ( x , y ) = ( χ ( x ) 1 ) B τ n Φ τ ( x , y ) γ 0 ν x χ ( x ) Ψ τ , N , ε ( x , y ) .
From (1) of Lemma 1, (8) and (12), we have
| G τ , N , ε , n ( x , y ) | C j = 0 N | | | b j | | | 0 , 0 + 1 e c 0 τ γ 0 e τ γ 0 l 0 ( ( x , y ) D × B ¯ , 0 < ε < r 0 ) .
From the boundary condition of ϕ , it follows that
| G τ , N , ε , 0 n ( x , y ) | C χ ( x ) { δ 0 , N + τ N { τ | ( W ε b N ( · , y ) ) ( x ) | + | x ( ρ ε b N ( · , y ) ) ( x ) | + λ 0 L ( D n ) | ( ρ ε b N ( · , y ) ) ( x ) | } } e τ γ 0 | x y | ( x D n , y B ¯ , 0 < ε < r 0 ) ,
where the constant C depends on | | | ϕ | | | 1 , 0 . Hence, the estimates in Lemma 1 imply
| G τ , N , ε , 0 n ( x , y ) | C χ ( x ) τ N τ ε θ + ε 1 + θ | | | b N | | | 0 , θ + δ 0 , N e τ γ 0 | x y | ( x D n , y B ¯ , 0 < ε < r 0 ) .
Consequently, we obtain
g τ , N , ε n L 2 ( D n ) 2 = B × B d y d y ˜ f ( y ) f ( y ˜ ) D n G τ , N , ε n ( x , y ) G τ , N , ε n ( x , y ˜ ) d S x C N ( m ( τ , ε ) ) 2 τ 2 N { B × B D n χ ˜ ( x ) e τ ( ϕ ( x , y ) + ϕ ( x , y ˜ ) ) d S x d y d y ˜ + e c 0 τ e 2 τ γ 0 l 0 } .
This estimate and (29) yield that g τ , N , ε n L 2 ( D n ) C N τ N 5 / 2 m ( τ , ε ) e τ γ 0 l 0 , which is the estimate for g τ , N , ε n in Lemma 3.
(iii)
Estimate of h τ , N , ε d H τ 1 / 2 ( D d ) : From the definition of h τ , N , ε d and (18), we have
h τ , N , ε d ( x ) = h τ , N , ε , 0 d ( x ) + h τ , N , ε , d ( x ) for x D d , h τ , N , ε , 0 d ( x ) = ( τ ) N B e τ ϕ ( x , y ) χ ( x ) ( W ε b N ( · , y ) ) ( x ) f ( y ) d y , h τ , N , ε , d ( x ) = B ( χ ( x ) 1 ) Φ τ ( x , y ) f ( y ) d y .
For h τ , N , ε , d , from (8) and (13), it follows that
h τ , N , ε , d H τ 1 / 2 ( D d ) C x h τ , N , ε , d L 2 ( D d ) + τ h τ , N , ε , d L 2 ( D d ) C τ e τ γ 0 ( l 0 + c 0 ) ( 0 < ε < r 0 , τ 1 ) .
Hence, the main argument for estimating h τ , N , ε d H τ 1 / 2 ( D d ) is devoted to obtaining an estimate of h τ , N , ε , 0 d H τ 1 / 2 ( D d ) .
Before going to the estimation, we prepare the following lemma:
Lemma 4. 
If a C ( B ¯ ; B 0 , θ ( U 3 ) ) satisfies a ( x , y ) = 0 for ( x , y ) D d × B ¯ , then we have
| x β ( ρ ε a ( · , y ) ( x ) ) | C β ε θ | β | | | | a | | | 0 , θ ( x U 2 d ( ε ) Ω ¯ , y B ¯ , 0 < ε r 0 , | β | 0 ) .
Proof. 
To prove (33), we note that for x U 2 d ( ε ) there exist x ˜ D d and l 0 such that x = x ˜ + l ν x ˜ ( 0 l ε ). Because a ( x ˜ , y ) = 0 , we have
x β ρ ε a ( · , y ) ( x ) = ε | β | R 3 z β ρ ( z ) a ( x ε z , y ) d z = ε | β | R 3 z β ρ ( z ) a ( x ε z , y ) a ( x ˜ , y ) d z .
From the above, it follows that
| x β ( ρ ε a ( · , y ) ( x ) ) | ε | β | R 3 | z β ρ ( z ) a ( x ε z , y ) a ( x ˜ , y ) | d z ε | β | R 3 | z β ρ ( z ) | | a ( · , y ) | B 0 , θ ( U 3 d ) | l ν x ε z | θ d z C β | a ( · , y ) | B 0 , θ ( U 3 d ) ( l + ε ) θ ε | β | .
Since we know 0 l ε , there exists a constant C > 0 such that
| x β ρ ε a ( · , y ) ( x ) | C β ε θ | β | | | | a | | | 0 , θ ( x U 2 d ( ε ) Ω ¯ , y B ¯ , 0 < ε r 0 ) ,
which completes the proof of (33). □
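A one-dimensional illustration of Lemma 4 may help: take a hypothetical θ-Hölder function a vanishing at the boundary point 0 and a standard bump mollifier; the mollification near the boundary is then O ( ε θ ) , as in (33). All choices below are illustrative.

```python
import numpy as np

# 1D illustration of Lemma 4: a is theta-Hoelder with a(0) = 0, so the
# mollification (rho_eps * a)(x) at the boundary point is O(eps^theta).
theta = 0.5
a = lambda t: np.abs(t) ** theta           # a(0) = 0, Hoelder exponent theta

z = np.linspace(-1.0, 1.0, 4001)
dz = z[1] - z[0]
rho = np.exp(-1.0 / np.maximum(1.0 - z**2, 1e-12))  # bump supported in |z| <= 1
rho /= np.sum(rho) * dz                             # normalize: int rho = 1

def mollified(x, eps):
    return np.sum(rho * a(x - eps * z)) * dz        # (rho_eps * a)(x)

ratios = [mollified(0.0, eps) / eps**theta for eps in (1e-1, 1e-2, 1e-3)]
print(ratios)    # essentially constant: the O(eps^theta) bound of (33) in 1D
```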
Because b N ( · , y ) B 0 , θ ( U 3 ) for some 0 < θ < 1 , we can extend h τ , N , ε , 0 d to a function that is Hölder continuous on { x Ω dist ( x , D d ) < 2 r 0 } with supp h τ , N , ε , 0 d U 2 d , and we use the same notation for the extension. For any 0 < δ r 0 , we choose χ δ C 0 ( R 3 ) such that χ δ ( x ) = 1 for dist ( x , D d ¯ ) < δ / 2 , χ δ ( x ) = 0 for dist ( x , D d ¯ ) > 3 δ / 4 and | x β χ δ ( x ) | C δ | β | ( x R 3 ). By (13), for 0 < δ r 0 , 0 < ε < r 0 , τ 1 , we have
h τ , N , ε , 0 d H τ 1 / 2 ( D d ) = χ δ h τ , N , ε , 0 d H τ 1 / 2 ( D d ) C x ( χ δ h τ , N , ε , 0 d ) L 2 ( Ω U 2 d ( δ ) ) + τ χ δ h τ , N , ε , 0 d L 2 ( Ω U 2 d ( δ ) ) C x h τ , N , ε , 0 d L 2 ( Ω U 2 d ( δ ) ) + ( τ + δ 1 ) h τ , N , ε , 0 d L 2 ( Ω U 2 d ( δ ) ) ,
where U 2 d ( δ ) = { x U 2 d dist ( x , D d ¯ ) < δ } . Thus, we can reduce the original problem to estimating the right side of (34), choosing δ appropriately. Now, we divide the cases into N 1 and N = 0 .
  • Case of N 1 : Since b N = 0 on D d U 3 for N 1 , we have for x D d U 2 ,
h τ , N , ε , 0 d ( x ) = ( τ ) N B e τ ϕ ( x , y ) χ ( x ) ( ρ ε b N ( · , y ) ) ( x ) f ( y ) d y .
When extending h τ , N , ε , 0 d to a function of L 2 ( Ω U 2 d ) , we extend it based on (35). To estimate x h τ , N , ε , 0 d L 2 ( Ω U 2 d ( δ ) ) , we note that
x ( e τ ϕ ( x , y ) χ ( x ) ( ρ ε b N ( · , y ) ) ( x ) ) = e τ ϕ ( x , y ) { τ x ϕ ( x , y ) χ ( x ) ρ ε b N ( · , y ) ( x ) + χ ( x ) ( x ρ ε b N ( · , y ) ) ( x ) + x χ ( x ) ( ρ ε b N ( · , y ) ) ( x ) } for x Ω U 2 d ( δ ) .
For the first and second terms of (36), we use (33). If we set δ = ε , then from (1) of Lemma 1, (31), (35), (36) and (33) with a = b N , it follows that
x h τ , N , ε , 0 d L 2 ( Ω U 2 d ( ε ) ) 2 C τ 2 N B × B d y d y ˜ f ( y ) f ( y ˜ ) Ω U 2 d ( ε ) { e τ ( ϕ ( x , y ) + ϕ ( x , y ˜ ) ) ( χ ( x ) ) 2 ( τ ε θ + ε 1 + θ ) 2 | | | b N | | | 0 , θ 2 + e c 1 τ e 2 τ γ 0 l 0 | | | b N | | | 0 , 0 2 } d x .
Combining with (30), we have
x h τ , N , ε , 0 d L 2 ( Ω U 2 d ( ε ) ) C τ N 3 e τ γ 0 l 0 { τ ε θ + ε 1 + θ | | | b N | | | 0 , θ + | | | b N | | | 0 , 0 } ( 0 < ε < r 0 , τ 1 ) .
Since b N ( · , y ) B 0 , θ ( U 3 ) , in the same manner as above, the following is obtained:
h τ , N , ε , 0 d L 2 ( Ω U 2 d ( ε ) ) C τ N 3 e τ γ 0 l 0 ε θ | | | b N | | | 0 , θ ( 0 < ε < r 0 , τ 1 ) .
Then from (34) and the above estimates, we have the following.
h τ , N , ε , 0 d H τ 1 / 2 ( D d ) C τ N 3 e τ γ 0 l 0 τ ε θ + ε 1 + θ | | | b N | | | 0 , θ + | | | b N | | | 0 , 0 C N τ ( N + 3 ) e τ γ 0 l 0 m ( τ , ε ) ( τ 1 , 0 < ε < r 0 ) .
  • Case of N = 0 : Since b 0 = a ˜ 0 on D d U 3 for N = 0 , we put p ( x , y ) = b 0 ( x , y ) + a ˜ 0 ( x , y ) . Then, for x D d U 3 we have p ( x , y ) = 0 and
( W ε b 0 ( · , y ) ) ( x ) = ( ρ ε b 0 ( · , y ) ) ( x ) b 0 ( x , y ) = ( ρ ε b 0 ( · , y ) ) ( x ) + a ˜ 0 ( x , y ) = ( ρ ε ( b 0 ( · , y ) + a ˜ 0 ( · , y ) ) ) ( x ) ( ρ ε a ˜ 0 ( · , y ) ) ( x ) a ˜ 0 ( x , y ) = ( ρ ε p ( · , y ) ) ( x ) ( W ε a ˜ 0 ( · , y ) ) ( x ) .
Using (38), we decompose h τ , 0 , ε , 0 d as h τ , 0 , ε , 0 d ( x ) = h τ , 0 , ε , 0 d , 0 ( x ) + h τ , 0 , ε , 0 d , 1 ( x ) , where
h τ , 0 , ε , 0 d , 0 ( x ) = B e τ ϕ ( x , y ) χ ( x ) ( ρ ε p ( · , y ) ) ( x ) f ( y ) d y , h τ , 0 , ε , 0 d , 1 ( x ) = B e τ ϕ ( x , y ) χ ( x ) ( W ε a ˜ 0 ( · , y ) ) ( x ) f ( y ) d y .
For h τ , 0 , ε , 0 d , 1 , since a ˜ 0 C ( B ¯ , B 1 , θ ( U 3 ) ) , (2) and (3) of Lemma 1 imply
x k W ε a ˜ 0 ( · , y ) B 0 ( U 2 ) C | | | a ˜ 0 | | | k , θ ε θ ( y B ¯ , 0 < ε < r 0 , k = 0 , 1 ) .
In the same way as in the case of N 1 , the following is obtained:
h τ , 0 , ε , 0 d , 1 H τ 1 / 2 ( D d ) C τ 3 e τ γ 0 l 0 τ ε θ | | | a ˜ 0 | | | 0 , θ + ε θ | | | a ˜ 0 | | | 1 , θ ( 0 < ε < r 0 , τ 1 ) .
For h τ , 0 , ε , 0 d , 0 , we have
x ( e τ ϕ ( x , y ) χ ( x ) ( ρ ε p ( · , y ) ) ( x ) ) = e τ ϕ ( x , y ) { τ x ϕ ( x , y ) χ ( x ) ( ρ ε p ( · , y ) ) ( x ) + χ ( x ) ( x ρ ε p ( · , y ) ) ( x ) + x χ ( x ) ( ρ ε p ( · , y ) ) ( x ) } .
Because p ( x , y ) = 0 for x D d U 3 d and y B ¯ , (33) can be used with a = p . Thus, applying (33) and (31) to the following equality:
x h τ , 0 , ε , 0 d , 0 L 2 ( Ω U 2 d ( ε ) ) 2 = B × B d y d y ˜ f ( y ) f ( y ˜ ) Ω U 2 d ( ε ) x ( e τ ϕ ( x , y ) χ ( x ) ( ρ ε p ( · , y ) ) ( x ) ) · x ( e τ ϕ ( x , y ˜ ) χ ( x ) ( ρ ε p ( · , y ˜ ) ) ( x ) ) d x ,
we have
x h τ , 0 , ε , 0 d , 0 L 2 ( Ω U 2 d ( ε ) ) 2 C B × B d y d y ˜ f ( y ) f ( y ˜ ) Ω U 2 d ( ε ) { e τ ( ϕ ( x , y ) + ϕ ( x , y ˜ ) ) ( χ ( x ) ) 2 ( τ ε θ + ε 1 + θ ) 2 | | | p | | | 0 , θ 2 + e c 1 τ γ 0 e 2 τ γ 0 l 0 | | | p | | | 0 , 0 2 } d x .
In the same way as for N 1 , using (30), we obtain the following:
x h τ , 0 , ε , 0 d , 0 L 2 ( Ω U 2 d ( ε ) ) C τ 3 e τ γ 0 l 0 { τ ε θ + ε 1 + θ | | | p | | | 0 , θ + | | | p | | | 0 , 0 } , h τ , 0 , ε , 0 d , 0 L 2 ( Ω U 2 d ( ε ) ) C τ 3 e τ γ 0 l 0 ε θ | | | p | | | 0 , θ ( 0 < ε < r 0 , τ 1 ) .
From the above estimates and (34), with δ = ε , it follows that
h τ , 0 , ε , 0 d , 0 H τ 1 / 2 ( D d ) C τ 3 e τ γ 0 l 0 { τ ε θ + ε 1 + θ | | | p | | | 0 , θ + | | | p | | | 0 , 0 } ( 0 < ε r 0 , τ 1 ) .
Combining (39) and (40), for N = 0 we obtain
h τ , 0 , ε , 0 d H τ 1 / 2 ( D d ) C τ 3 e τ γ 0 l 0 { ( τ ε θ + ε 1 + θ ) ( | | | b 0 | | | 0 , θ + | | | a ˜ 0 | | | 0 , θ ) + ε θ | | | a ˜ 0 | | | 1 , θ + | | | b 0 | | | 0 , 0 + | | | a ˜ 0 | | | 0 , 0 } C 0 τ 3 e τ γ 0 l 0 m ( τ , ε ) ( τ 1 , 0 < ε < r 0 ) ,
where the constant C 0 is determined by | | | b 0 | | | 0 , θ and | | | ϕ | | | 1 , θ .
From (32), (37) and (41), we obtain h τ , N , ε d H τ 1 / 2 ( D d ) C N τ N 3 m ( τ , ε ) e τ γ 0 l 0 , which is the estimate of h τ , N , ε d in Lemma 3.
Combining all estimates obtained for f τ , N , ε , g τ , N , ε n and h τ , N , ε d , we finish the proof of Lemma 3. □

5. Application to Inverse Problems for Mixed-Type Media

5.1. A Formulation of Active Sonar-Type Inverse Problems Using the Enclosure Method

This section deals with an active sonar-type inverse problem for the detection of mixed-type cavities. Let D be the union of cavities D n and D d as described in Section 2. Take T > 0 , and consider the following problem:
( t 2 γ 0 Δ ) u ( t , x ) = 0 in ( 0 , T ) × Ω , ( γ 0 ν x λ 1 ( x ) t λ 0 ( x ) ) u ( t , x ) = 0 on ( 0 , T ) × D n , u ( t , x ) = 0 on ( 0 , T ) × D d , u ( 0 , x ) = 0 , t u ( 0 , x ) = f ( x ) on Ω ,
where γ 0 > 0 is a constant, as in (3).
To detect cavities D, consider the method of “emitting waves from site B apart from D and observing the wave returning to the same site B by reflection”. Therefore, we choose the open set B to be B ¯ Ω . To ensure that the incident wave is emitted from B, suppose that the initial data f of (42) satisfies the emission condition from B, that is,
f L 2 ( Ω ) with supp f B ¯ and there exists a constant c 1 > 0 such that f ( x ) c 1 ( x B ) or f ( x ) c 1 ( x B ) .
We denote by γ D n : H 1 ( Ω ) H 1 / 2 ( D n ) the trace operator on D n , and set ψ , φ Ω = ψ , φ ( H 0 , D d 1 ( Ω ) ) × H 0 , D d 1 ( Ω ) . It is well known that for any f L 2 ( Ω ) , there exists a unique weak solution u L 2 ( 0 , T ; H 0 , D d 1 ( Ω ) ) of (42) with t u L 2 ( 0 , T ; L 2 ( Ω ) ) and t 2 u L 2 ( 0 , T ; ( H 1 ( Ω ) ) ) satisfying
t 2 u ( t , · ) , φ Ω + Ω γ 0 x u ( t , x ) · x φ ( x ) d x + d d t D n λ 1 ( x ) γ D n u ( t , x ) φ ( x ) d S x + D n λ 0 ( x ) γ D n u ( t , x ) φ ( x ) d S x = 0 a . e . t ( 0 , T )
for all φ H 0 , D d 1 ( Ω ) , and the initial condition in the usual sense (cf., for example, [14] Chap. 18, Sections 5 and 6).
Consider the following inverse problem: For the initial data f, measure u ( t , x ) in the range 0 t T and x B . From these data, we consider the problem of extracting information about the cavity D by means of the indicator function defined as follows:
I τ = Ω f ( x ) ( L T u ( x ; τ ) v ( x ; τ ) ) d x ,
where
L T u ( x ; τ ) = 0 T e τ t u ( t , x ) d t ( x Ω ) .
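Numerically, L T is simply a truncated Laplace transform in time. A minimal quadrature sketch, applied to a hypothetical signal u (not a solution of (42)), is the following:

```python
import numpy as np

# Truncated Laplace transform (L_T u)(tau) = int_0^T e^{-tau t} u(t) dt,
# approximated by the trapezoidal rule. The signal u below is a hypothetical
# stand-in for the measured wave, not a solution of (42).
def L_T(u, tau, T=5.0, n=200001):
    t = np.linspace(0.0, T, n)
    h = t[1] - t[0]
    v = np.exp(-tau * t) * u(t)
    return h * (np.sum(v) - 0.5 * (v[0] + v[-1]))

u = lambda t: np.sin(t)
tau, T = 3.0, 5.0
approx = L_T(u, tau, T=T)
# Closed form of int_0^T e^{-tau t} sin(t) dt for comparison:
exact = (1.0 - np.exp(-tau * T) * (np.cos(T) + tau * np.sin(T))) / (tau**2 + 1.0)
print(approx, exact)
```

In practice, L T u is evaluated at each observation point x B for a range of large τ, which is all the enclosure method requires from the data.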
The study of a reconstruction procedure based on such a formulation is called the “enclosure method”. The enclosure method was originally introduced by Ikehata [15,16] for elliptic inverse problems, and has been shown to be effective for inverse problems for time-dependent problems such as the wave equation (e.g., [5,6,7]).
In previous studies, the asymptotic behavior of I τ was analyzed to derive the following equation:
lim τ γ 0 2 τ log | I τ | = l 0 ,
where the limit l 0 in (43) is closely connected with the “shortest lengths” corresponding to the problems. In this case, l 0 = dist ( D , B ) . If we can obtain l 0 , then by repeatedly repositioning B elsewhere, we can enclose D from the outside and obtain location information for the cavity. This is the basic idea behind the enclosure method.
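The extraction of l 0 from (43) can be sketched with a synthetic indicator whose decay mimics the leading behavior derived below; the constants are hypothetical, and the logarithm is evaluated in closed form to avoid floating-point underflow for large τ.

```python
import numpy as np

# Synthetic indicator I_tau = C * tau^{-4} * exp(-2 tau l0 / sqrt(gamma0)),
# mimicking the leading behavior in Theorem 1; C, l0, gamma0 are hypothetical.
gamma0, l0, C = 1.0, 2.0, 0.7

def log_abs_I(tau):
    return np.log(C) - 4.0 * np.log(tau) - 2.0 * tau * l0 / np.sqrt(gamma0)

ests = []
for tau in (1e1, 1e3, 1e6):
    est = -np.sqrt(gamma0) / (2.0 * tau) * log_abs_I(tau)
    ests.append(est)
    print(tau, est)    # tends to l0 = 2, slowly, like log(tau)/tau
```

The polynomial prefactor τ ⁻⁴ only perturbs the estimate by O ( log τ / τ ) , which is why the limit in (43) recovers l 0 regardless of the algebraic order of the leading term.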
There are already many results on the active sonar-type inverse problem for the wave equation using the enclosure method (see, e.g., the review paper [8]). However, in these studies, the boundaries of objects in the medium were limited to the same type. In such cases, when the parameter τ is sufficiently large, the sign of the indicator function does not change. For example, the indicator function is negative for cavities with Dirichlet boundary conditions (negative cavities) and positive for cavities with Neumann boundary conditions (positive cavities). Therefore, it is relatively easy to analyze the indicator function for cavities of the same type. We say that the “monotonicity condition” is satisfied when every cavity contributes the same sign to the indicator function, even when there are two or more cavities.
When the types of objects to be detected include both those that make the sign of the indicator function positive and those that make it negative, the inverse problem is referred to as a “mixed-type inverse problem”. Analyzing the indicator function in the mixed type is not easy, but it is essential to perform a mathematical analysis of the inverse problem in a way that is close to the actual observation conditions.
In our mixed-type inverse problem, let l 0 + be the shortest distance between the positive cavity and B, and l 0 be the shortest distance between the negative cavity and B. In this case, the following two cases are possible:
  • Separated: the case l 0 + = l 0 is not allowed to occur ( l 0 + l 0 is known in advance);
  • Non-separated: the case l 0 + = l 0 is allowed to occur.
For the separated case, as in the case of single-type cavities, it is possible to use the “elliptic estimate method”. In fact, it has been shown that the shortest distance can be detected for at least C 1 boundaries [17]. The non-separated cases are more realistic but difficult to handle, and the elliptic estimate method is not effective. Paper [18] proposes a method that combines the enclosure method with neural network learning to estimate the convex hull of inclusions from EIT (electrical impedance tomography) boundary measurement data. While [18] addresses cases where the monotonicity condition is not satisfied, it notes that accuracy decreases under non-separated situations. This suggests that further theoretical investigations are needed regarding the detectability using the indicator function.
To analyze the indicator function in the non-separated case, in [9], we used the “asymptotic solution”, which is a traditional method based on the observations of wave incidence and reflection. The asymptotic solution method allows the main part of the reflected wave to be extracted, enabling a detailed analysis following the idea of active sonar-type problems. This makes it possible to address non-separated cases. However, as stated in [9], the use of the standard asymptotic solution requires the assumption of C 4 regularity, which is a higher regularity than that required for the case of a single type of cavities. For example, in [6,7], Ikehata derived the asymptotic behavior of I τ under the assumption that the cavity boundary belongs to the class C 3 in the case where the monotonicity condition is satisfied (e.g., for cavities with Dirichlet boundary conditions).
To approach a more practical setting, we aim to weaken the regularity assumption required when using the asymptotic solution. For this purpose, we use the modified asymptotic solution constructed in Section 2. The purpose of this section is to show that, under the assumption that the cavity boundaries belong to class C 2 , θ , the shortest distance can be detected using (43). As a by-product, we obtain the asymptotic behavior of the indicator function under weaker regularity assumptions than in previous studies with the monotonicity condition. This is an advantage of introducing the modified asymptotic solutions.

5.2. Analysis Using the Modified Asymptotic Solution

In the following, we assume that ( D , λ ) is of ( N 1 , θ ) -class for some N 0 . Further, as for the setting of the inverse problem, let D n consist of two types of parts, D n + and D n , satisfying D n = D n + D n , D n + ¯ D n ¯ = and let there exist a constant μ 1 > 0 such that
0 λ 1 ( x ) < γ 0 μ 1 on D n + , 0 < γ 0 + μ 1 < λ 1 ( x ) on D n .
Note that D n + is a positive cavity and D n is a negative cavity (see, e.g., [9]).
Similar to the procedure on pages 8 and 9 in [9], we can decompose the indicator function I τ as I τ = J τ + O ( τ 1 e τ T ) ( τ ) and J τ = J τ n + + J τ n + J τ d , where:
J τ n ± = D n ± B τ n v ( x ; τ ) w ( x ; τ ) d S x , J τ d = γ 0 D d ν x w ( x ; τ ) v ( x ; τ ) d S x ,
with the solution w ( x ; τ ) of (3). Each J τ α is an integral on the boundary D α , corresponding to each type of cavity: α { n + , n , d } .
Thus, it is sufficient to investigate the asymptotic behavior of J τ α ( α { n + , n , d } ) above. Here, we use an approximation based on the modified asymptotic solution constructed in Section 2. We analyze the indicator function using the modified asymptotic solution w N , ε . For α { n + , n } , we divide J τ α into two parts: J τ α = J τ , N , ε α + J τ , N , ε α , r , where
J τ , N , ε α = D α B τ n v ( x ; τ ) w N , ε ( x ; τ ) d S x , J τ , N , ε α , r = D α B τ n v ( x ; τ ) w N , ε r ( x ; τ ) d S x .
Similarly, we divide J τ d into J τ d = J τ , N , ε d + J τ , N , ε d , r , where
J τ , N , ε d = γ 0 D d ν x w N , ε ( x ; τ ) v ( x ; τ ) d S x , J τ , N , ε d , r = γ 0 D d ν x w N , ε r ( x ; τ ) v ( x ; τ ) d S x .
First, we confirm that J τ , N , ε α , r ( α { n + , n , d } ) is a remainder term.
Lemma 5. 
Assume that ( D , λ ) is of ( N 1 , θ ) -class with some 0 < θ 1 and N { 0 } N , B is a convex set with C 2 boundary, and D and B satisfy the non-degenerate condition. Then, there exist constants C N > 0 and τ 0 1 such that
α { n + , n , d } | J τ , N , ε α , r | C N τ N 5 m ( τ , ε ) e 2 τ l 0 / γ 0 ( τ τ 0 , 0 < ε < r 0 ) ,
where m ( τ , ε ) is the function defined in Proposition 1 and C N > 0 is a constant determined by | | | b j | | | 2 , θ , j = 0 , 1 , , N 1 , | | | b N | | | 0 , θ and | | | ϕ | | | 2 , θ .
Proof. 
Using the results obtained in Section 4, we can prove this in the same way as in the proof of Lemma 4.2 of [9]. Because B is a convex set with C 2 boundary and D is at least of class C 2 , in the same way as in Lemma 6.3 of [9], we can obtain
ν x v ( · ; τ ) L 2 ( D ) + τ v ( · ; τ ) L 2 ( D ) C τ 3 / 2 e τ γ 0 l 0 ( τ 1 ) ,
v ( · ; τ ) H τ 1 / 2 ( D ) C τ 2 e τ γ 0 l 0 ( τ 1 ) .
For J τ , N , ε α , r ( α { n + , n } ), it follows that
α { n + , n } | J τ , N , ε α , r | α { n + , n } | D α B τ n v ( x ; τ ) w N , ε r ( x ; τ ) d S x | C ( ν x v ( · ; τ ) L 2 ( D n ) + τ v ( · ; τ ) L 2 ( D n ) ) w N , ε r ( · ; τ ) L 2 ( D n ) .
Hence, Proposition 1 and (44) yield the estimate in Lemma 5 for α { n + , n } .
For J τ , N , ε d , r , we use a duality argument as in the proof of Lemma 4.2 of [9] (see also Lemma 4.3 of [11]). In the same way as in the derivation of (6.18) and (6.19) of [9], from (28) and Lemma 3, it follows that
| D d γ 0 ν x w N , ε r ( x ; τ ) g ( x ) d S x | C N τ N 3 m ( τ , ε ) e τ l 0 / γ 0 g H τ 1 / 2 ( D d ) ( g H τ 1 / 2 ( D d ) ) .
Substituting g = v ( · ; τ ) in the above estimate and noting (45), we obtain the estimate in Lemma 5 for α = d . This completes the proof of Lemma 5. □
Next, we consider the terms J τ , N , ε α ( α { n + , n , d } ) containing the main term. For the standard asymptotic solution Ψ τ , N 1 ( x , y ) of order N 1 , which is defined by (6), replacing N with N 1 , we set
w N 1 ( x ; τ ) = B Φ τ ( x , y ˜ ) + χ ( x ) Ψ τ , N 1 ( x , y ˜ ) f ( y ˜ ) d y ˜ .
There are two representations of w N , ε ( x ; τ ) :
w N , ε ( x ; τ ) = w N 1 ( x ; τ ) + ( τ ) N B χ ( x ) e τ ϕ ( x , y ˜ ) ( ρ ε b N ( · , y ˜ ) ) ( x ) f ( y ˜ ) d y ˜
= w N ( x ; τ ) + ( τ ) N B χ ( x ) e τ ϕ ( x , y ˜ ) ( W ε b N ( · , y ˜ ) ) ( x ) f ( y ˜ ) d y ˜ ,
and we use (46) for J τ , N , ε d and (47) for J τ , N , ε α ( α { n + , n } ). Using the expression (47) for w N , ε ( x ; τ ) and B τ n Φ τ ( x , y ) = ( τ a 0 ( x , y ) + a 1 ( x , y ) ) e τ | x y | / γ 0 , similar to Section 5 of [9], we can expand J τ , N , ε α with α { n + , n } as
J τ , N , ε α = k = 1 N τ k K τ , k α + τ ( N 1 ) L τ , ( N 1 ) , ε α + τ N L τ , N , ε α ( α { n + , n } ) ,
where K τ , k α are the same as in Section 5 of [9], and L τ , ( N 1 ) , ε α and L τ , N , ε α are defined by
L τ , ( N j ) , ε α = D α × B × B f ( y ) f ( y ˜ ) e τ L ( x , y , y ˜ ) γ 0 ( 1 ) j N χ ( x ) a 1 j ( x , y ) ( W ε b N ( · , y ˜ ) ) ( x ) d S x d y d y ˜
( j = 0 , 1 ). Here, L ( x , y , y ˜ ) = | x y | + | x y ˜ | . For J τ , N , ε d , using (46) and ( ρ ε b N ( · , y ˜ ) ) ( x ) = ( W ε b N ( · , y ˜ ) ) ( x ) for x D d and N 1 , we have
J τ , N , ε d = k = 1 N 1 τ k K τ , k d τ ( N 1 ) L τ , ( N 1 ) , ε d τ N L τ , N , ε d ,
where K τ , k d are the same as in Section 5 of [9], and L τ , ( N 1 ) , ε d and L τ , N , ε d are defined by
L τ , k , ε d = D d × B × B f ( y ) f ( y ˜ ) e τ L ( x , y , y ˜ ) γ 0 κ k , ε d ( x , y , y ˜ ) d S x d y d y ˜ , κ ( N 1 ) , ε d ( x , y , y ˜ ) = ( 1 ) ( N 1 ) χ ( x ) ν x ϕ ( x , y ˜ ) 4 π | x y | ( W ε b N ( · , y ˜ ) ) ( x ) , κ N , ε d ( x , y , y ˜ ) = ( 1 ) N 4 π γ 0 | x y | ( δ 0 , N ν x · x y ˜ | x y ˜ | 4 π | x y ˜ | 2 + γ 0 ν x χ ( x ) ( ρ ε b N ( · , y ˜ ) ) ( x ) + γ 0 χ ( x ) ν x ( ρ ε b N ( · , y ˜ ) ) ( x ) ) .
Note that we have
K τ , k α = D α × B × B f ( y ) f ( y ˜ ) e τ L ( x , y , y ˜ ) γ 0 κ k α ( x , y , y ˜ ) d S x d y d y ˜ ( α { n + , n , d } ) ,
where for any k = 1 , 0 , , N and α { n + , n } or, for any k = 1 , 0 , , N 1 and α = d , κ k α ( x , y , y ˜ ) is a continuous function in D α × B ¯ × B ¯ as in Section 5 of [9]. Furthermore, we have the exact forms of κ 1 α ( x , y , y ˜ ) . Hence, we can deduce the leading part of each K τ , 1 α .
Using the asymptotic solution, it is possible to obtain a higher-order asymptotic expansion of the indicator function by assuming regularity corresponding to the desired order of approximation. Furthermore, using Lemma 5, we can show that the modified asymptotic solution reduces the required regularity as follows.
Proposition 2. 
Assume that ( D , λ ) is of ( N 1 , θ ) -class with some 0 < θ 1 and N { 0 } N , B is a convex set with a C 2 boundary, and B and D satisfy the non-degenerate condition for L 0 ( x , y ) . Furthermore, we assume that f L ( B ) . Then, there exist constants C > 0 and τ 1 1 such that
| I τ α { n + , n } k = 1 N 1 τ k K τ , k α + k = 1 N 1 τ k K τ , k d | C e 2 τ l 0 γ 0 ( τ N 4 τ θ + τ 1 e τ T ) ( τ τ 1 ) .
Proof. 
First, we show the estimates of K τ , k α for any α { n + , n , d } and k = 1 , 0 , , N . From (48) it follows that
| K τ , k α | C B × B d y d y ˜ D e τ L ( x , y , y ˜ ) d S x d y d y ˜ .
Hence, (29) yields that there exists a constant C k > 0 such that
| K τ , k α | C k τ 5 e 2 τ γ 0 l 0 ( τ 1 ) .
For L τ , N , ε d , by (1) and (4) of Lemma 1 we have a constant C > 0 such that
| L τ , N , ε d | C ( ε 1 + θ + 1 ) D d × B × B e τ L ( x , y , y ˜ ) / γ 0 d S x d y d y ˜ ( 0 < ε < r 0 ) .
Then, by (29), we obtain
| L τ , N , ε d | C ( ε 1 + θ + 1 ) τ 5 e τ γ 0 2 l 0 ( 0 < ε < r 0 ) .
For L τ , k , ε α ( k = N 1 , N and α { n + , n } ) and L τ , ( N 1 ) , ε , by (2) of Lemma 1 we have
| L τ , ( N 1 ) , ε d | + α { n + , n } k { N 1 , N } | L τ , k , ε α | C D α × B × B f ( y ) f ( y ˜ ) e τ L ( x , y , y ˜ ) / γ 0 χ ( x ) | ( W ε b N ( · , y ˜ ) ) ( x ) | d S x d y d y ˜ C ε θ | | | b N | | | 0 , θ D α × B × B e τ L ( x , y , y ˜ ) / γ 0 χ ( x ) d S x d y d y ˜ .
Hence, (29) implies
τ | L τ , ( N 1 ) , ε d | + τ α { n + , n } k { N 1 , N } | L τ , k , ε α | C ε θ τ 4 e 2 τ γ 0 l 0 ( 0 < ε < r 0 , τ 1 ) .
From all of the above, J τ α = J τ , N , ε α + J τ , N , ε α , r and Lemma 5, we obtain
| J τ α { n + , n } k = 1 N 1 τ k K τ , k α + k = 1 N 1 τ k K τ , k d | C e 2 τ l 0 γ 0 τ N 4 { M ( τ , ε ) + τ 1 m ( τ , ε ) }
where M ( τ , ε ) satisfies
| M ( τ , ε ) | C τ 1 + τ 1 ε θ 1 + ε θ ( τ 1 , 0 < ε < r 0 ) .
If we set ε = τ 1 ( τ max { τ 0 , 2 r 0 1 } ), it follows that
| M ( τ , ε ) | C τ θ , m ( τ , ε ) 5 τ 1 θ ( τ 1 ) .
These estimates yield the estimate described in Proposition 2 with τ 1 = max { τ 0 , 2 r 0 1 } . □
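The choice ε = τ ⁻¹ made at the end of the proof balances the competing powers in the bound on M ( τ , ε ) ; a quick numerical check of this balancing, with an illustrative value of θ, is the following:

```python
# Balancing the terms in the bound |M(tau, eps)| <= C (tau^{-1}
# + tau^{-1} eps^{theta-1} + eps^theta): with eps = 1/tau every term
# is O(tau^{-theta}), which is the rate appearing in Proposition 2.
theta = 0.5

def M_bound(tau, eps):
    return tau**-1.0 + tau**-1.0 * eps**(theta - 1.0) + eps**theta

for tau in (1e2, 1e4, 1e6):
    eps = 1.0 / tau
    print(tau, M_bound(tau, eps) * tau**theta)   # ratio tends to 2
```

Taking ε much larger or much smaller than τ ⁻¹ lets one of the last two terms dominate, so ε ∼ τ ⁻¹ is the natural scaling for the mollification width.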
Thus, if we increase N as necessary and assume regularity, we can obtain a more detailed asymptotic expansion for I τ as shown above. Although it is not easy to obtain the concrete form of K τ , k α for k 0 , in [9], we were able to derive the concrete form of the highest-order term K τ , 1 α .
We show a simple case where B is a ball with radius a > 0 , which corresponds to Theorem 3.3 in [9]. For this purpose, we need to introduce some notation in addition to that in Section 2. We define E 0 n + and E 0 n by E 0 n ± = { ( x , y ) E 0 n x D n ± } . Thus, we can divide the set E 0 introduced in Section 2 into E 0 = α { n + , n , d } E 0 α .
If $B$ and $D$ satisfy the non-degenerate condition for $L_0(x,y)$, then each $E_0^{\alpha}$ consists of only a finite number ($= M_\alpha$) of isolated points. Hence, we can assume that $E_0^{\alpha}$ is expressed as $E_0^{\alpha} = \{(x_j^{\alpha}, y_j^{\alpha}) \mid j = 1,2,\ldots,M_\alpha\}$ for $\alpha \in \{n_+,n_-,d\}$ if $E_0^{\alpha} \ne \emptyset$. Note that $E_0^{n} = E_0^{n_+} \cup E_0^{n_-}$, $E_0^{n_+} \cap E_0^{n_-} = \emptyset$ and $M_n = M_{n_+} + M_{n_-}$. In accordance with the above notation, the points $(x_j^{\alpha}, y_j^{\alpha})$ are relabeled.
We denote by $\kappa_{1,\alpha}(x)$ and $\kappa_{2,\alpha}(x)$ the principal curvatures of $\partial D_\alpha$ at $x \in \partial D_\alpha$, with $\kappa_{1,\alpha}(x) \ge \kappa_{2,\alpha}(x)$, for $\alpha \in \{n_+,n_-,d\}$. Set
$$A_\alpha(x) = \prod_{j=1}^{2} \Bigl( \kappa_{j,\alpha}(x) + \frac{1}{l_0 + a} \Bigr) \qquad (\alpha \in \{n_+,n_-,d\}).$$
From the non-degenerate condition, it follows that $A_\alpha(x) > 0$ for $(x,y) \in E_0^{\alpha}$, $\alpha \in \{n_+,n_-,d\}$ (see Section 2 in [9]).
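As a simple illustration (assuming the orientation convention under which both principal curvatures of a sphere of radius $R$ equal $1/R$), if $\partial D_\alpha$ is a sphere of radius $R$, then the quantity $A_\alpha(x) = \prod_{j=1}^{2}(\kappa_{j,\alpha}(x) + (l_0+a)^{-1})$ reduces to
$$A_\alpha(x) = \Bigl( \frac{1}{R} + \frac{1}{l_0+a} \Bigr)^2 > 0 \qquad (x \in \partial D_\alpha),$$
so in this special case the positivity of $A_\alpha$ holds automatically.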
Theorem 1. 
Assume that $\partial D_n$ and $\partial D_d$ are of class $C^{2,\theta}$ with some $0 < \theta \le 1$, $B$ is a ball with radius $a > 0$, $D$ and $B$ satisfy the non-degenerate condition for $L_0(x,y)$, $\lambda_0 \in L^{\infty}(\partial D_n)$, $\lambda_1 \in C^{0,\theta}(\partial D_n)$ and $f \in C^{1}(\overline{B})$. Then, there exists $\delta > 0$ such that
$$I_\tau = \pi \gamma_0 \tau^{-4} e^{-2\tau l_0/\gamma_0} \{ T_0 + O(\tau^{-\theta/2}) \} + O(e^{-\tau(2l_0+\delta)/\gamma_0}) + O(\tau^{-1} e^{-\tau T}) \qquad (\tau \to \infty),$$
where $T_0 = \sum_{\alpha \in \{n_+,n_-,d\}} T_0^{\alpha}$,
$$T_0^{\alpha} = \sum_{j=1}^{M_\alpha} \frac{a^2}{2(l_0+a)^2}\, \frac{(f(y_j^{\alpha}))^2}{A_\alpha(x_j^{\alpha})}\, b_\alpha(x_j^{\alpha}), \qquad b_\alpha(x) = \frac{\gamma_0 - \lambda_1(x)}{\gamma_0 + \lambda_1(x)} \quad (\alpha \in \{n_+,n_-\}), \qquad b_d(x) = -1.$$
The Gaussian curvatures appear in the denominators of the leading terms through $A_\alpha$. It can be observed that the smaller the curvature of the cavity boundary, the easier it is to detect the cavity (for details, see [9]). Because at least $C^2$ regularity is required to define the Gaussian curvature, the current regularity assumption is nearly optimal.
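Expanding the product $A_\alpha(x) = \prod_{j=1}^{2}(\kappa_{j,\alpha}(x) + (l_0+a)^{-1})$ makes the role of the curvatures explicit. Writing $K_\alpha = \kappa_{1,\alpha}\kappa_{2,\alpha}$ for the Gaussian curvature and $H_\alpha = (\kappa_{1,\alpha}+\kappa_{2,\alpha})/2$ for the mean curvature, we have
$$A_\alpha(x) = K_\alpha(x) + \frac{2H_\alpha(x)}{l_0+a} + \frac{1}{(l_0+a)^2},$$
so a flatter boundary (smaller $K_\alpha$ and $H_\alpha$) yields a smaller $A_\alpha$ and hence a larger coefficient $(f(y_j^{\alpha}))^2 / A_\alpha(x_j^{\alpha})$ in the leading term.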
Proof of Theorem 1. 
Under the assumptions of Theorem 1, $(D,\lambda)$ belongs to the $(1,\theta)$ class. From (49) and Proposition 2 with $N = 0$, it follows that
$$\Bigl| I_\tau - \sum_{\alpha \in \{n_+,n_-\}} \tau K_{\tau,-1}^{\alpha} - \tau K_{\tau,-1}^{d} \Bigr| \le C e^{-2\tau l_0/\gamma_0} \bigl( \tau^{-4-\theta} + \tau^{-1} e^{-\tau T} \bigr) \qquad (\tau \ge \tau_1).$$
Next, we consider $K_{\tau,-1}^{\alpha}$, which contains the leading terms. The only difference from our previous result, Theorem 3.3 in [9], is the regularity of $\kappa_{-1}^{\alpha}$ in the expression (48) for $k = -1$, which belongs to $C(\overline{B}, B^{0,\theta}(U_3 \cap \partial D))$. By the usual Laplace method (see Lemma 4.3 of [9]), we can obtain the same expansion as in Section 5 of [9] if we replace $O(\tau^{-1/2})$ by $O(\tau^{-\theta/2})$. Then, it follows that there exists $\delta > 0$ such that
$$\tau K_{\tau,-1}^{\alpha} = \pi \gamma_0 \tau^{-4} e^{-2\tau l_0/\gamma_0}\, T_0^{\alpha} + e^{-2\tau l_0/\gamma_0}\, O(\tau^{-4-\theta/2}) + O(e^{-\tau(2l_0+\delta)/\gamma_0})$$
as $\tau \to \infty$ for $\alpha \in \{n_+,n_-,d\}$. Combining these results, we obtain Theorem 1. □
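The Hölder-type remainder $O(\tau^{-\theta/2})$ produced by the Laplace method can be illustrated numerically on a one-dimensional model integral $\int g(s) e^{-\tau s^2}\, ds$ whose amplitude $g$ is merely $C^{0,\theta}$ at the critical point. The script below is an illustrative sketch only (the function name and the choice $g(s) = 1 + |s|^{\theta}$ are ours, not from the proof): it checks that the relative error of the leading term $\sqrt{\pi/\tau}\, g(0)$ decays like $\tau^{-\theta/2}$.

```python
import numpy as np

def laplace_error(tau, theta=0.5):
    """Relative error of the leading Laplace term sqrt(pi/tau)*g(0)
    for I(tau) = int g(s) exp(-tau*s^2) ds with g(s) = 1 + |s|**theta,
    which is only C^{0,theta} at the critical point s = 0."""
    s = np.linspace(-5.0, 5.0, 400001)        # fine uniform grid around s = 0
    ds = s[1] - s[0]
    g = 1.0 + np.abs(s) ** theta
    integral = np.sum(g * np.exp(-tau * s**2)) * ds   # Riemann sum
    leading = np.sqrt(np.pi / tau)            # = sqrt(pi/tau) * g(0)
    return abs(integral / leading - 1.0)

# Multiplying tau by 100 should shrink the error by a factor of about
# 100**(-theta/2), i.e. the remainder is O(tau**(-theta/2)).
e1, e2 = laplace_error(1.0e2), laplace_error(1.0e4)
print(e1, e2, e2 / e1)
```

For $\theta = 1/2$ the exact relative error is $\Gamma(3/4)\,\tau^{-1/4}/\sqrt{\pi}$, so the printed ratio should be close to $100^{-1/4} \approx 0.316$, confirming the $\tau^{-\theta/2}$ rate.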
Once Theorem 1 is obtained, the shortest distance $l_0$ can be recovered from this asymptotic expansion of the indicator function, and it can be determined whether it equals $l_0^+$ or $l_0^-$. It can also be seen that $l_0$ can be detected in the case $l_0^+ = l_0^-$, depending on the curvature of the cavity boundary and the value of $\lambda_1$ at the boundary; see Section 3 of [9] for further details. Here, we relaxed the conditions of [9], assuming only boundary regularity of class $C^{2,\theta}$, and still obtained the asymptotic expansion of the indicator function.

Author Contributions

M.K.: conceptualization; methodology; investigation; validation; formal analysis; supervision; funding acquisition; writing—original draft; project administration. W.K.: methodology; investigation; validation; funding acquisition; visualization; writing—review and editing. All authors have read and agreed to the published version of the manuscript.

Funding

This research was partly supported by JSPS KAKENHI Grant Number JP23K03184 and partly supported by JSPS KAKENHI Grant Number JP20K03684.

Data Availability Statement

No new data were created or analyzed in this study. Data sharing is not applicable to this article.

Conflicts of Interest

The authors declare that they are not associated with or involved in any organization with a financial interest in the subject matter discussed in this publication.

References

  1. Runborg, O. Mathematical models and numerical methods for high frequency waves. Commun. Comput. Phys. 2007, 2, 827–880. [Google Scholar]
  2. Ikawa, M. Decay of solutions of the wave equation in the exterior of two convex obstacles. Osaka J. Math. 1982, 19, 459–509. [Google Scholar] [CrossRef]
  3. Ikawa, M. On the poles of the scattering matrix for two strictly convex obstacles. J. Math. Kyoto Univ. 1983, 23, 127–194. [Google Scholar] [CrossRef]
  4. Gérard, C. Asymptotique des pôles de la matrice de scattering pour deux obstacles strictement convexes. In Mémoires de la S. M. F., 2nd ed.; Société Mathématique de France (SMF): Paris, France, 1988; Volume 31. [Google Scholar]
  5. Ikehata, M. The enclosure method for inverse obstacle scattering problems with dynamical data over a finite time interval: II. Obstacles with a dissipative boundary or finite refractive index and back-scattering data. Inverse Prob. 2012, 28, 045010. [Google Scholar] [CrossRef]
  6. Ikehata, M. The enclosure method for inverse obstacle scattering problems with dynamical data over a finite time interval: III. Sound-soft obstacle and bistatic data. Inverse Prob. 2013, 29, 085013. [Google Scholar] [CrossRef]
  7. Ikehata, M. A remark on finding the coefficient of the dissipative boundary condition via the enclosure method in the time domain. Math. Meth. Appl. Sci. 2017, 40, 915–927. [Google Scholar] [CrossRef]
  8. Ikehata, M. Extracting discontinuity using the probe and enclosure methods. J. Inverse Ill-Posed Probl. 2023, 31, 487–575. [Google Scholar] [CrossRef]
  9. Kawashita, M.; Kawashita, W. Asymptotic behavior of the indicator function in the inverse problem of the wave equation for media with multiple types of cavities. Phys. Scr. 2024, 99, 115251. [Google Scholar] [CrossRef]
  10. Salo, M. Semiclassical pseudodifferential calculus and the reconstruction of a magnetic field. Commun. Partial. Differ. Equ. 2006, 31, 1639–1666. [Google Scholar] [CrossRef]
  11. McLean, W. Strongly Elliptic Systems and Boundary Integral Equations; Cambridge University Press: Cambridge, UK, 2000. [Google Scholar]
  12. Ikawa, M. Hyperbolic Partial Differential Equations and Wave Phenomena; Translations of Mathematical Monographs, 189; AMS: Providence, RI, USA, 2000. [Google Scholar]
  13. Evans, L.C. Partial Differential Equations; AMS: Providence, RI, USA, 1998. [Google Scholar]
  14. Dautray, R.; Lions, J.-L. Mathematical Analysis and Numerical Methods for Sciences and Technology, Evolution Problems I; Springer: Berlin/Heidelberg, Germany, 1992; Volume 5. [Google Scholar]
  15. Ikehata, M. Enclosing a polygonal cavity in a two-dimensional bounded domain from Cauchy data. Inverse Prob. 1999, 15, 1231–1241. [Google Scholar] [CrossRef]
  16. Ikehata, M. Reconstruction of the support function for inclusion from boundary measurements. J. Inverse Ill-Posed Probl. 2000, 8, 367–378. [Google Scholar] [CrossRef]
  17. Kawashita, M.; Kawashita, W. Inverse problems of the wave equation for media with mixed but separated heterogeneous parts. Math. Meth. Appl. Sci. 2025, 48, 4144–4172. [Google Scholar] [CrossRef]
  18. Sippola, S.; Rautio, S.; Hauptmann, A.; Ide, T.; Siltanen, S. Learned enclosure method for experimental EIT data. Appl. Math. Mod. Chall. 2025, 4, 47–65. [Google Scholar] [CrossRef]
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
