Article

Refraction Laws in Temporal Media

by Cristian E. Gutiérrez 1 and Eric Stachura 2,*
1 Department of Mathematics, Temple University, Philadelphia, PA 19122, USA
2 Department of Mathematics, Kennesaw State University, Marietta, GA 30060, USA
* Author to whom correspondence should be addressed.
Mathematics 2025, 13(17), 2777; https://doi.org/10.3390/math13172777
Submission received: 1 August 2025 / Revised: 20 August 2025 / Accepted: 26 August 2025 / Published: 29 August 2025
(This article belongs to the Special Issue Mathematical Analysis: Theory, Methods and Applications)

Abstract

The time-dependent Maxwell system is considered, in the sense of distributions, in the context of temporal interfaces. Just as at spatial interfaces, electromagnetic waves scatter at temporal interfaces, creating a transmitted and a reflected wave. A rigorous derivation of the boundary conditions for the electric and magnetic fields at temporal interfaces is provided under precise assumptions on the material parameters. In turn, this is used to obtain a general Snell's Law at such interfaces, from which explicit formulas for the reflection and transmission coefficients are obtained. Unlike in previous works, no simplifying ansatz is made on the solution to the Maxwell system, nor is it assumed that the fields are smooth. Material parameters which are not necessarily constant on either side of the temporal interface are also considered.

1. Introduction

The area of time-varying materials has received great attention lately, with the idea that time can be used to achieve additional control of electromagnetic waves. Many applications, especially related to temporal modulations of metamaterials [1], have been seen so far, and include frequency conversion, wave amplification, and the foundations of photonic time crystals, among many others. Photonic time crystals are systems which undergo periodic temporal variations while maintaining spatial uniformity [2].
While the applications are vast, the mathematical analysis related to the Maxwell equations, in particular, in such time-varying media is lacking in comparison (in no small part due to the mathematical challenges of time-varying material parameters). For mathematical results in the time domain with time-varying material parameters, we make particular mention of [3] for numerical investigations using a so-called Magnus expansion and [4] (Section 5) which considers weak solutions for the time-dependent Maxwell system with both space and time-varying permittivity. We also mention various works that consider time dependence through temperature-dependent material parameters; see, in particular, Refs. [5,6,7] and the references therein.
In this paper, we focus on the particular case of temporal interfaces. A temporal interface occurs [8] when a material parameter is changed in time rapidly while an optical wave is present in the material. One way this can be obtained is through strong optical nonlinearity and an ultrafast (on the order of 5–10 femtoseconds) laser pulse [9].
At a temporal interface, forward and backward waves in time (waves that propagate along the same and opposite directions of the original wave) are known to result [10,11,12]. This is the temporal analog of classical reflected and transmitted waves at a spatial interface, which are known to occur when a plane wave is incident upon such an interface [13]. We continue to refer to these as reflected, transmitted, and incident waves in this paper.
See Figure 1 for a graphical illustration of this phenomenon. In this figure, $n^-(t)$ and $n^+(t)$ denote the refractive indices on either side of the temporal interface.
For a temporal interface, the wavelength of the forward and backward waves is the same as that of the original wave, but the frequency changes due to the change in wave velocity. Moreover, at a temporal interface (unlike the classical spatial interface), the energy needed to change the material can affect the electromagnetic wave energy, so energy is not conserved (it can increase or decrease). We will derive these results from a rigorous mathematical perspective and, in particular, quantify the deviation from conservation of energy; see Equations (35) and (36), which give the reflection and transmission coefficients in terms of the material parameters on either side of the temporal interface.
The objective of this paper is to derive the Snell Law at temporal interfaces from fundamental principles. To achieve this, we will utilize the Maxwell equations. Given that the waves being considered are generally discontinuous, it is advantageous to approach this derivation from the perspective of distributions or generalized functions, thereby facilitating a comprehensive understanding of the Maxwell system. A similar distributional analysis of Maxwell equations in the context of spatial metasurfaces has been conducted in [14].
This work considers more general time-dependent material parameters than have been previously considered. In particular, this work sets a mathematical foundation to study general time-varying media and even more complicated materials, such as those with space–time interfaces [12]. This framework is useful to precisely understand how light propagates across non-standard interfaces.
The outline of this paper is as follows. In Section 2, we analyze the Maxwell system in the sense of space–time distributions. Under certain assumptions on fields which are discontinuous at a certain time, we derive a formula for the distributional time derivative of such a field; see Equation (12). Then in Section 3, under particular assumptions of the material parameters ε , μ , we provide a rigorous derivation of the boundary conditions satisfied by the electric and magnetic fields at a temporal interface. Again, the time-dependent Maxwell system is understood in the distributional sense here. In Section 4 we use these boundary conditions to derive a fully rigorous Snell’s Law for reflection and refraction at a temporal interface in the case when ε , μ vary in time but not in space (Proposition 2). From here, in Section 4.1, we consider the simplified case when ε , μ jump from one value to another (a common assumption in practice) and rigorously obtain formulas for the amplitudes (Proposition 3), and for the reflection and transmission coefficients in Section 4.1. We also note that the amplitudes of the incident, transmitted, and reflected waves are generally not arbitrary, but rather subject to the divergence equations in the Maxwell system—see Section 4.2.

2. Maxwell Equations in the Sense of Distributions

Recall the Maxwell system of equations (in CGS units)
$$\nabla \cdot \mathbf{D} = 4\pi\rho, \tag{1}$$
$$\nabla \cdot \mathbf{B} = 0, \tag{2}$$
$$\nabla \times \mathbf{E} = -\frac{1}{c}\,\frac{\partial \mathbf{B}}{\partial t}, \tag{3}$$
$$\nabla \times \mathbf{H} = \frac{4\pi}{c}\,\mathbf{J} + \frac{1}{c}\,\frac{\partial \mathbf{D}}{\partial t}, \tag{4}$$
where $\mathbf{E}(x,t)$ denotes the electric field, $\mathbf{H}(x,t)$ the magnetic field, $\mathbf{D}(x,t)$ the electric flux density, $\mathbf{B}(x,t)$ the magnetic flux density, and $c$ the speed of light; see [13] (Sections 1.1–1.2). We assume the charge density $\rho = 0$ and the current density $\mathbf{J} = 0$, and consider the constitutive equations
$$\mathbf{D} = \varepsilon \mathbf{E}, \qquad \mathbf{B} = \mu \mathbf{H}, \tag{5}$$
where we assume that $\varepsilon = \varepsilon(x,t)$ and $\mu = \mu(x,t)$ are positive functions of $x \in \mathbb{R}^3$ and $t \in \mathbb{R}$, $t \neq t_0$ for some $t_0$, that satisfy appropriate piecewise smoothness assumptions described in a moment.
We begin by recalling the notions needed to analyze the Maxwell system in a distributional sense. Let $\Omega \subseteq \mathbb{R}^3$ be an open domain, which could be the whole space, and let $(a,b) \subseteq \mathbb{R}$ be an open interval that could be infinite. A generalized function, or distribution, defined in $\Omega \times (a,b)$ is a complex-valued continuous linear functional on the class of test functions $\mathcal{D}(\Omega \times (a,b)) = C_0^\infty(\Omega \times (a,b))$, that is, the functions that are infinitely differentiable and have compact support in $\Omega \times (a,b)$. More precisely, $g$ is a distribution in $\Omega \times (a,b)$ if $g : \mathcal{D}(\Omega \times (a,b)) \to \mathbb{C}$ is a linear function such that for each compact $K \subseteq \Omega \times (a,b)$ there exist a constant $C$ and an integer $k$ such that
$$|g(\phi)| \leq C \sum_{|\alpha| \leq k} \sup_{x \in K} |D^\alpha \phi(x)|$$
for each $\phi \in \mathcal{D}(\Omega \times (a,b))$ with support in $K$. As customary, $\mathcal{D}'(\Omega \times (a,b))$ denotes the class of distributions in $\Omega \times (a,b)$.
Numerous references are available for distributions; however, we only cite the seminal work by L. Schwartz, [15], and also [16].
If $g \in \mathcal{D}'(\Omega \times (a,b))$, then $\langle g, \phi \rangle$ denotes the value of the distribution $g$ on the test function $\phi \in \mathcal{D}(\Omega \times (a,b))$. If $g$ is a locally integrable function in $\Omega \times (a,b)$, then $g$ gives rise to a distribution defined by
$$\langle g, \phi \rangle = \int_{\Omega \times (a,b)} g(x,t)\, \phi(x,t)\, dx\, dt,$$
for each $\phi \in \mathcal{D}(\Omega \times (a,b))$.
We say that $\mathbf{G} = (G_1, G_2, G_3)$ is a vector-valued distribution in $\Omega \times (a,b)$ if each component $G_i \in \mathcal{D}'(\Omega \times (a,b))$, $1 \leq i \leq 3$. The divergence of $\mathbf{G}$ with respect to $x$ is the scalar distribution defined by
$$\langle \nabla \cdot \mathbf{G}, \phi \rangle = -\sum_{i=1}^3 \left\langle G_i, \frac{\partial \phi}{\partial x_i} \right\rangle, \tag{6}$$
and the curl of $\mathbf{G}$ is the vector-valued distribution in $\Omega \times (a,b)$ defined by
$$\langle \nabla \times \mathbf{G}, \phi \rangle = \left( \left\langle G_2, \frac{\partial \phi}{\partial x_3} \right\rangle - \left\langle G_3, \frac{\partial \phi}{\partial x_2} \right\rangle \right) \mathbf{i} - \left( \left\langle G_1, \frac{\partial \phi}{\partial x_3} \right\rangle - \left\langle G_3, \frac{\partial \phi}{\partial x_1} \right\rangle \right) \mathbf{j} + \left( \left\langle G_1, \frac{\partial \phi}{\partial x_2} \right\rangle - \left\langle G_2, \frac{\partial \phi}{\partial x_1} \right\rangle \right) \mathbf{k}. \tag{7}$$
Then it follows that
$$\nabla \cdot (\nabla \times \mathbf{G}) = 0$$
in the sense of distributions. When the distribution $\mathbf{G} = (G_1, G_2, G_3)$ is given by a locally integrable function in $\Omega \times (a,b)$, we obtain from (6) and (7) that
$$\langle \nabla \cdot \mathbf{G}, \phi \rangle = -\int_{\Omega \times (a,b)} \mathbf{G} \cdot \nabla_x \phi\, dx\, dt, \qquad \langle \nabla \times \mathbf{G}, \phi \rangle = \int_{\Omega \times (a,b)} \mathbf{G} \times \nabla_x \phi\, dx\, dt.$$
The derivative of $\mathbf{G}$ with respect to $t$ in the sense of distributions is by definition the distribution $\partial \mathbf{G} / \partial t$ defined by
$$\left\langle \frac{\partial \mathbf{G}}{\partial t}, \phi \right\rangle = -\left\langle \mathbf{G}, \frac{\partial \phi}{\partial t} \right\rangle,$$
for each $\phi \in \mathcal{D}(\Omega \times (a,b))$.
Therefore, the fields D , B , E , and H solve the Maxwell system in the sense of distributions if D , B , E , and H are vector-valued distributions in Ω × ( a , b ) that satisfy Equations (1)–(4) in the sense of distributions.

General Formulas

Since our application to temporal interfaces involves distributions represented by discontinuous functions, we collect here the representation formulas that will be used in Section 3 to prove the boundary conditions.
Let $\Omega \subseteq \mathbb{R}^3$ be an open domain, which could be all of $\mathbb{R}^3$, let $t_0 \in (a,b) \subseteq \mathbb{R}$, where $(a,b)$ could be infinite, and suppose we have a field $\mathbf{G}(x,t)$ defined for $x \in \Omega$ and $t \neq t_0$ as follows:
$$\mathbf{G}(x,t) = \begin{cases} \mathbf{G}^-(x,t) & \text{for } x \in \Omega \text{ and } t < t_0 \\ \mathbf{G}^+(x,t) & \text{for } x \in \Omega \text{ and } t > t_0 \end{cases}$$
and satisfying:
$$\mathbf{G}^- \in C^1\big(\Omega \times (a,t_0)\big), \quad \mathbf{G}^+ \in C^1\big(\Omega \times (t_0,b)\big), \quad \text{and} \quad \mathbf{G} \in L^1_{\mathrm{loc}}\big(\Omega \times (a,b)\big); \tag{9}$$
the limits
$$\lim_{t \to t_0^-} \mathbf{G}^-(x,t) := \mathbf{G}^-(x,t_0), \qquad \lim_{t \to t_0^+} \mathbf{G}^+(x,t) := \mathbf{G}^+(x,t_0) \tag{10}$$
exist and are finite for all $x \in \Omega$;
$$\mathbf{G}_t \text{ and } \nabla_x \mathbf{G} \text{ are locally integrable in } \Omega \times (a,b). \tag{11}$$
Note that $L^1_{\mathrm{loc}}\big(\Omega \times (a,t_0)\big)$ denotes the class of complex-valued functions that are locally Lebesgue-integrable in $\Omega \times (a,t_0)$.
Given a test function $\phi \in C_0^\infty\big(\Omega \times (a,b)\big)$, the linear functional
$$\langle \mathbf{G}, \phi \rangle = \int_\Omega \int_a^{t_0} \mathbf{G}^-(x,t)\, \phi(x,t)\, dt\, dx + \int_\Omega \int_{t_0}^b \mathbf{G}^+(x,t)\, \phi(x,t)\, dt\, dx$$
defines a distribution in $\Omega \times (a,b)$. Define
$$[[\mathbf{G}]] = \mathbf{G}^+(x,t_0) - \mathbf{G}^-(x,t_0)$$
to be the jump of the field at $t = t_0$.
Proposition 1. 
Under Assumptions (9)–(11) above on $\mathbf{G}$, we have the formula
$$\left\langle \frac{\partial \mathbf{G}}{\partial t}, \phi \right\rangle = \int_\Omega [[\mathbf{G}(x,t_0)]]\, \phi(x,t_0)\, dx + \int_\Omega \int_a^{t_0} \frac{\partial \mathbf{G}^-}{\partial t}(x,t)\, \phi(x,t)\, dt\, dx + \int_\Omega \int_{t_0}^b \frac{\partial \mathbf{G}^+}{\partial t}(x,t)\, \phi(x,t)\, dt\, dx, \tag{12}$$
for all $\phi \in \mathcal{D}(\Omega \times (a,b))$.
Proof. 
Let $\phi \in \mathcal{D}(\Omega \times (a,b))$. We have
$$\left\langle \frac{\partial \mathbf{G}}{\partial t}, \phi \right\rangle = -\int_\Omega \int_a^{t_0} \mathbf{G}^-(x,t)\, \phi_t(x,t)\, dt\, dx - \int_\Omega \int_{t_0}^b \mathbf{G}^+(x,t)\, \phi_t(x,t)\, dt\, dx = -\int_\Omega \left( \int_a^{t_0} \mathbf{G}^- \phi_t\, dt + \int_{t_0}^b \mathbf{G}^+ \phi_t\, dt \right) dx.$$
Integrating by parts with respect to $t$, and using that $\phi(x,a) = \phi(x,b) = 0$, yields
$$\int_a^{t_0} \mathbf{G}^-(x,t)\, \phi_t(x,t)\, dt = \Big[\mathbf{G}^-(x,t)\, \phi(x,t)\Big]_{t=a}^{t_0} - \int_a^{t_0} \frac{\partial \mathbf{G}^-}{\partial t}\, \phi\, dt = \mathbf{G}^-(x,t_0)\, \phi(x,t_0) - \int_a^{t_0} \frac{\partial \mathbf{G}^-}{\partial t}\, \phi\, dt$$
and
$$\int_{t_0}^b \mathbf{G}^+(x,t)\, \phi_t(x,t)\, dt = \Big[\mathbf{G}^+(x,t)\, \phi(x,t)\Big]_{t=t_0}^{b} - \int_{t_0}^b \frac{\partial \mathbf{G}^+}{\partial t}\, \phi\, dt = -\mathbf{G}^+(x,t_0)\, \phi(x,t_0) - \int_{t_0}^b \frac{\partial \mathbf{G}^+}{\partial t}\, \phi\, dt$$
for each $x \in \Omega$. Inserting these into the first equation yields the desired formula. □
From Assumptions (9)–(11) above, it follows that (6) takes the form
$$\langle \nabla_x \cdot \mathbf{G}, \phi \rangle = -\sum_{i=1}^3 \int_\Omega \int_a^b G_i(x,t)\, \frac{\partial \phi}{\partial x_i}(x,t)\, dt\, dx = -\int_\Omega \int_a^{t_0} \mathbf{G}^- \cdot \nabla_x \phi\, dt\, dx - \int_\Omega \int_{t_0}^b \mathbf{G}^+ \cdot \nabla_x \phi\, dt\, dx = -\int_a^{t_0} \int_\Omega \mathbf{G}^- \cdot \nabla_x \phi\, dx\, dt - \int_{t_0}^b \int_\Omega \mathbf{G}^+ \cdot \nabla_x \phi\, dx\, dt.$$
Since $\mathbf{G}^\pm$ are differentiable in $x$ for $t \neq t_0$, we have
$$\int_\Omega \mathbf{G}^\pm \cdot \nabla_x \phi\, dx = \int_\Omega \big( \nabla_x \cdot (\phi\, \mathbf{G}^\pm) - \phi\, \nabla_x \cdot \mathbf{G}^\pm \big)\, dx = \int_{\partial\Omega} \phi\, \mathbf{G}^\pm \cdot \mathbf{n}\, d\sigma - \int_\Omega \phi\, \nabla_x \cdot \mathbf{G}^\pm\, dx = -\int_\Omega \phi\, \nabla_x \cdot \mathbf{G}^\pm\, dx$$
by the divergence theorem and since $\phi$ has compact support in $\Omega \times (a,b)$. Above, $d\sigma$ denotes the surface measure on $\partial\Omega$. If $\Omega$ is unbounded, the boundary term above also vanishes since $\phi$ has compact support. Hence we obtain the formula
$$\langle \nabla_x \cdot \mathbf{G}, \phi \rangle = \int_\Omega \int_a^{t_0} \phi(x,t)\, \nabla_x \cdot \mathbf{G}^-(x,t)\, dt\, dx + \int_\Omega \int_{t_0}^b \phi(x,t)\, \nabla_x \cdot \mathbf{G}^+(x,t)\, dt\, dx$$
for all $\phi \in C_0^\infty\big(\Omega \times (a,b)\big)$. If $\mathbf{G}$ satisfies $\nabla_x \cdot \mathbf{G} = 0$ in the sense of distributions, then $\langle \nabla_x \cdot \mathbf{G}, \phi \rangle = 0$, and in particular
$$\int_\Omega \int_a^{t_0} \phi(x,t)\, \nabla_x \cdot \mathbf{G}^-(x,t)\, dt\, dx = 0$$
for all $\phi \in C_0^\infty\big(\Omega \times (a,t_0)\big)$, and so $\nabla_x \cdot \mathbf{G}^-(x,t) = 0$ for all $x \in \Omega$ and $a < t < t_0$. Similarly, we obtain $\nabla_x \cdot \mathbf{G}^+(x,t) = 0$ for all $x \in \Omega$ and $t_0 < t < b$.
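As a one-dimensional sanity check of the jump formula (12), one can let sympy differentiate a field that switches at $t = t_0$ through the Heaviside step; the singular part of the derivative is exactly the jump times a Dirac delta. The functions $g^-$, $g^+$ below are our own illustrative choices, not from the paper.

```python
# A one-dimensional sanity check of the jump formula (12): differentiate a
# field that switches from g_minus to g_plus at t = t0 through the Heaviside
# step.  The concrete functions below are illustrative choices.
import sympy as sp

t, t0 = sp.symbols('t t0', real=True)
g_minus = sp.cos(t)      # smooth field for t < t0
g_plus = sp.exp(-t)      # smooth field for t > t0

g = g_minus + sp.Heaviside(t - t0) * (g_plus - g_minus)
dg = sp.diff(g, t).expand()

# The distributional derivative contains the singular term [[g]]*delta(t - t0)
# in addition to the classical derivatives away from t = t0.
jump = g_plus - g_minus
singular = dg.coeff(sp.DiracDelta(t - t0))
assert sp.simplify(singular - jump) == 0
```

The coefficient of the Dirac delta is exactly the jump $[[g]]$, mirroring the first term on the right-hand side of (12).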

3. Boundary Conditions

In this section, we assume the material parameters $\varepsilon(x,t)$, $\mu(x,t)$ are defined for $x \in \Omega$ and $t \neq t_0$ as follows:
$$\varepsilon(x,t) = \begin{cases} \varepsilon^-(x,t) & \text{for } t < t_0 \\ \varepsilon^+(x,t) & \text{for } t > t_0 \end{cases}, \qquad \mu(x,t) = \begin{cases} \mu^-(x,t) & \text{for } t < t_0 \\ \mu^+(x,t) & \text{for } t > t_0 \end{cases};$$
and satisfy
(H1) 
the limits
$$\lim_{t \to t_0^-} \varepsilon^-(x,t) := \varepsilon^-(x,t_0), \quad \lim_{t \to t_0^+} \varepsilon^+(x,t) := \varepsilon^+(x,t_0), \quad \lim_{t \to t_0^-} \mu^-(x,t) := \mu^-(x,t_0), \quad \lim_{t \to t_0^+} \mu^+(x,t) := \mu^+(x,t_0)$$
all exist and are finite.
(H2) 
$\varepsilon$ and $\mu$ are differentiable in $t$ for $t \neq t_0$ with bounded derivatives.
(H3) 
$\varepsilon$ and $\mu$ are differentiable in $x$ for all $t \neq t_0$ with derivatives in $x$ bounded in $\Omega$.

3.1. Boundary Condition for the Electric Field

Suppose $\varepsilon = \varepsilon(x,t)$ satisfies (H1), (H2), and (H3), and interpret the Maxwell equation resulting from (4) and (5),
$$\nabla \times \mathbf{H} = \frac{1}{c}\, \frac{\partial}{\partial t}(\varepsilon \mathbf{E}), \tag{13}$$
in the sense of distributions; i.e., if $\phi = \phi(x,t)$ is a smooth function with compact support in $\Omega \times (a,b)$, then
$$\langle \nabla \times \mathbf{H}, \phi \rangle = \frac{1}{c} \left\langle \frac{\partial}{\partial t}(\varepsilon \mathbf{E}), \phi \right\rangle.$$
From (H1) we set
$$\lim_{t \to t_0^-} \varepsilon(x,t) = \varepsilon^-(x,t_0), \qquad \lim_{t \to t_0^+} \varepsilon(x,t) = \varepsilon^+(x,t_0).$$
We assume the field $\mathbf{E}$ has the form
$$\mathbf{E} = \begin{cases} \mathbf{E}^- & \text{for } t < t_0 \\ \mathbf{E}^+ & \text{for } t > t_0 \end{cases}$$
and satisfies conditions (9)–(11) from Section 2; we set
$$\lim_{t \to t_0^-} \mathbf{E}(x,t) = \mathbf{E}^-(x,t_0), \qquad \lim_{t \to t_0^+} \mathbf{E}(x,t) = \mathbf{E}^+(x,t_0).$$
Next, let
$$\mathbf{G} = \begin{cases} \mathbf{G}^-(x,t) = \varepsilon^-(x,t)\, \mathbf{E}^-(x,t) & \text{for } x \in \Omega \text{ and } a < t < t_0 \\ \mathbf{G}^+(x,t) = \varepsilon^+(x,t)\, \mathbf{E}^+(x,t) & \text{for } x \in \Omega \text{ and } t_0 < t < b. \end{cases}$$
Since $\varepsilon$ satisfies (H1), (H2), and (H3), it follows that $\mathbf{G} = \varepsilon \mathbf{E}$ satisfies (9)–(11), and so we can apply (12) to obtain
$$\left\langle \frac{\partial}{\partial t}(\varepsilon \mathbf{E}), \phi \right\rangle = \int_\Omega [[\varepsilon(x,t_0)\, \mathbf{E}(x,t_0)]]\, \phi(x,t_0)\, dx + \int_\Omega \int_a^{t_0} \frac{\partial (\varepsilon^- \mathbf{E}^-)}{\partial t}(x,t)\, \phi(x,t)\, dt\, dx + \int_\Omega \int_{t_0}^b \frac{\partial (\varepsilon^+ \mathbf{E}^+)}{\partial t}(x,t)\, \phi(x,t)\, dt\, dx, \tag{14}$$
with
$$[[\varepsilon(x,t_0)\, \mathbf{E}(x,t_0)]] = \varepsilon^+(x,t_0)\, \mathbf{E}^+(x,t_0) - \varepsilon^-(x,t_0)\, \mathbf{E}^-(x,t_0).$$
Assuming also that $\mathbf{H}$ satisfies conditions (9)–(11) from Section 2, we will show that the Maxwell Equation (13) is satisfied pointwise, that is,
$$\frac{1}{c}\, \frac{\partial \big( \varepsilon(x,t)\, \mathbf{E}(x,t) \big)}{\partial t} = \nabla \times \mathbf{H}(x,t)$$
for all $x$ and $t \neq t_0$. Let us prove this claim. Let $\phi \in C_0^\infty\big(\Omega \times (a,t_0)\big)$. From Assumptions (9) and (11), $\mathbf{H}(x,t)$ is differentiable both in $x$ and $t$ for $t \neq t_0$, and it follows that
$$\langle \nabla \times \mathbf{H}, \phi \rangle = \int_\Omega \int_a^b \mathbf{H}(x,t) \times \nabla \phi(x,t)\, dt\, dx = \int_\Omega \int_a^{t_0} \mathbf{H} \times \nabla \phi\, dt\, dx = \int_\Omega \int_a^{t_0} \phi\, \nabla \times \mathbf{H}\, dt\, dx = \frac{1}{c} \int_\Omega \int_a^{t_0} \frac{\partial (\varepsilon^- \mathbf{E}^-)}{\partial t}(x,t)\, \phi(x,t)\, dt\, dx,$$
where the last equality follows from (14) since $\phi$ has compact support in $\Omega \times (a,t_0)$. Hence
$$\nabla \times \mathbf{H}(x,t) = \frac{1}{c}\, \frac{\partial \big( \varepsilon^-(x,t)\, \mathbf{E}^-(x,t) \big)}{\partial t} \quad \text{for all } x \in \Omega \text{ and } a < t < t_0.$$
Similarly, taking $\phi \in C_0^\infty\big(\Omega \times (t_0,b)\big)$, we obtain the same pointwise identity with $\varepsilon^-$ replaced by $\varepsilon^+$ for $x \in \Omega$ and $t_0 < t < b$.
Therefore, from the pointwise Maxwell Equation (13) we obtain
$$\left\langle \frac{\partial}{\partial t}(\varepsilon \mathbf{E}), \phi \right\rangle = c\, \langle \nabla \times \mathbf{H}, \phi \rangle = c \int_\Omega \int_a^b \nabla \times \mathbf{H}(x,t)\, \phi(x,t)\, dt\, dx = \int_\Omega \int_a^b \frac{\partial (\varepsilon \mathbf{E})}{\partial t}(x,t)\, \phi(x,t)\, dt\, dx,$$
and so from (14) we obtain
$$\int_\Omega [[\varepsilon(x,t_0)\, \mathbf{E}(x,t_0)]]\, \phi(x,t_0)\, dx = 0$$
for all $\phi$. As a result, we obtain the boundary condition for the electric field:
$$[[\varepsilon(x,t_0)\, \mathbf{E}(x,t_0)]] = 0 \tag{15}$$
for all $x$.

3.2. Boundary Condition for the Magnetic Field

We next consider the Maxwell equation resulting from (3) and (5),
$$\nabla \times \mathbf{E} = -\frac{1}{c}\, \frac{\partial}{\partial t}(\mu \mathbf{H}), \tag{16}$$
in the sense of distributions; i.e., if $\phi = \phi(x,t) \in C_0^\infty\big(\Omega \times (a,b)\big)$, then
$$\langle \nabla \times \mathbf{E}, \phi \rangle = -\frac{1}{c} \left\langle \frac{\partial}{\partial t}(\mu \mathbf{H}), \phi \right\rangle.$$
We will proceed in the same way as in the previous section to obtain a boundary condition for $\mathbf{H}$. Since $\mu$ satisfies (H1), (H2), and (H3), we set
$$\lim_{t \to t_0^-} \mu(x,t) = \mu^-(x,t_0), \qquad \lim_{t \to t_0^+} \mu(x,t) = \mu^+(x,t_0).$$
Assuming also that $\mathbf{H}$ satisfies (9)–(11), we set
$$\lim_{t \to t_0^-} \mathbf{H}(x,t) = \mathbf{H}^-(x,t_0), \qquad \lim_{t \to t_0^+} \mathbf{H}(x,t) = \mathbf{H}^+(x,t_0).$$
If we let
$$\mathbf{G} = \begin{cases} \mathbf{G}^-(x,t) = \mu^-(x,t)\, \mathbf{H}^-(x,t) & \text{for } x \in \Omega \text{ and } a < t < t_0 \\ \mathbf{G}^+(x,t) = \mu^+(x,t)\, \mathbf{H}^+(x,t) & \text{for } x \in \Omega \text{ and } t_0 < t < b, \end{cases}$$
then $\mathbf{G} = \mu \mathbf{H}$ satisfies (9)–(11), and so applying (12) yields
$$\left\langle \frac{\partial}{\partial t}(\mu \mathbf{H}), \phi \right\rangle = \int_\Omega [[\mu(x,t_0)\, \mathbf{H}(x,t_0)]]\, \phi(x,t_0)\, dx + \int_\Omega \int_a^{t_0} \frac{\partial (\mu^- \mathbf{H}^-)}{\partial t}(x,t)\, \phi(x,t)\, dt\, dx + \int_\Omega \int_{t_0}^b \frac{\partial (\mu^+ \mathbf{H}^+)}{\partial t}(x,t)\, \phi(x,t)\, dt\, dx, \tag{17}$$
where
$$[[\mu(x,t_0)\, \mathbf{H}(x,t_0)]] = \mu^+(x,t_0)\, \mathbf{H}^+(x,t_0) - \mu^-(x,t_0)\, \mathbf{H}^-(x,t_0).$$
On the other hand, assuming $\mathbf{E}$ satisfies (9)–(11) and proceeding as in Section 3.1, it follows that the Maxwell Equation (16) is satisfied pointwise, that is,
$$-\frac{1}{c}\, \frac{\partial \big( \mu(x,t)\, \mathbf{H}(x,t) \big)}{\partial t} = \nabla \times \mathbf{E}(x,t)$$
for all $x$ and $t \neq t_0$. Therefore, we obtain
$$\left\langle \frac{\partial}{\partial t}(\mu \mathbf{H}), \phi \right\rangle = -c\, \langle \nabla \times \mathbf{E}, \phi \rangle = -c \int_\Omega \int_a^b \nabla \times \mathbf{E}(x,t)\, \phi(x,t)\, dt\, dx = \int_\Omega \int_a^b \frac{\partial (\mu \mathbf{H})}{\partial t}(x,t)\, \phi(x,t)\, dt\, dx,$$
and so from (17) we obtain
$$\int_\Omega [[\mu(x,t_0)\, \mathbf{H}(x,t_0)]]\, \phi(x,t_0)\, dx = 0$$
for all $\phi$. This yields the boundary condition for the magnetic field:
$$[[\mu(x,t_0)\, \mathbf{H}(x,t_0)]] = 0. \tag{18}$$
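The two boundary conditions (15) and (18) say that $\mathbf{D} = \varepsilon \mathbf{E}$ and $\mathbf{B} = \mu \mathbf{H}$ pass through the switching time unchanged, so the total fields just after $t_0$ are fixed by the ratios of the material parameters. A minimal numeric illustration (all parameter and field values below are our own illustrative choices):

```python
# Continuity of D = eps*E and B = mu*H across the temporal interface, per
# the boundary conditions (15) and (18).  Values are illustrative only.
eps_minus, eps_plus = 1.0, 4.0   # permittivity before/after the switch
mu_minus, mu_plus = 1.0, 1.0     # permeability before/after the switch

E_minus, H_minus = 2.0, 2.0      # (scalar stand-ins for) fields just before t0

# The boundary conditions determine the *total* fields just after t0
# (the sum of the transmitted and reflected parts):
E_plus = eps_minus / eps_plus * E_minus   # from [[eps E]] = 0
H_plus = mu_minus / mu_plus * H_minus     # from [[mu H]] = 0

assert abs(eps_plus * E_plus - eps_minus * E_minus) < 1e-12
assert abs(mu_plus * H_plus - mu_minus * H_minus) < 1e-12
print(E_plus, H_plus)  # 0.5 2.0
```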

4. Snell’s Law for Time-Varying Media

The organization of this section is as follows.
  • We make an ansatz for the electric and magnetic fields and obtain a generalized Snell’s Law in Proposition 2.
  • Remark 2 is a formulation of this law in the scalar case.
  • In Section 4.1, we assume that ε , μ jump between two constant values, and we calculate the corresponding magnetic fields.
  • Next, we compute the amplitudes of the reflected and transmitted waves in terms of the incident wave amplitude in Proposition 3.
  • In Section 4.1, from the previous formulas, we deduce the reflection and transmission coefficients.
  • In Section 4.2, we verify the divergence equations in the Maxwell system, which were used in the calculation of the amplitudes.
  • In Section 4.3, we prove a Lemma on exponentials which is used in the calculation of the amplitudes.
In this section, we assume that the material parameters $\varepsilon$ and $\mu$ vary in time but not in space. Suppose that time $t = t_0$ corresponds to a temporal interface, and that $\varepsilon^\pm, \mu^\pm$ are as in the previous section, now independent of space. Define the velocities
$$v^-(t) = \frac{1}{\sqrt{\varepsilon^-(t)\, \mu^-(t)}}, \quad t < t_0,$$
and
$$v^+(t) = \frac{1}{\sqrt{\varepsilon^+(t)\, \mu^+(t)}}, \quad t > t_0.$$
We further assume that the derivatives $(v^\pm)'(t)$ are integrable in time so that condition (11) is verified. This means that we need to assume that the quantities
$$\frac{\dfrac{d}{dt}(\varepsilon^- \mu^-)}{(\varepsilon^- \mu^-)^{3/2}}, \qquad \frac{\dfrac{d}{dt}(\varepsilon^+ \mu^+)}{(\varepsilon^+ \mu^+)^{3/2}},$$
i.e., the $(v^\pm)'(t)$ up to a constant factor, are integrable in time. Notice that if $\varepsilon^\pm, \mu^\pm$ are bounded below by a positive constant, then the integrability follows from (H2). Clearly this condition is trivially satisfied if $\varepsilon^\pm$ and $\mu^\pm$ are positive constants in time.
Let us make the ansatz for the incident field
$$\mathbf{E}_i(x,t) = \mathbf{A}_i\, e^{i \omega_1 \left( \frac{\mathbf{k}_i \cdot x}{v^-(t)} - t \right)}, \quad \text{for } t < t_0,$$
where the amplitude $\mathbf{A}_i$ is a non-zero three-dimensional vector with complex components, $\omega_1 > 0$, and $\mathbf{k}_i$ is a unit vector. Suppose that the wave for $t > t_0$ has a part with direction $\mathbf{k}_t$ and a part with direction $\mathbf{k}_r$, given as follows. For the transmitted field, we make the ansatz
$$\mathbf{E}_t(x,t) = \mathbf{A}_t\, e^{i \omega_3 \left( \frac{\mathbf{k}_t \cdot x}{v^+(t)} - t \right)}, \quad \text{for } t > t_0;$$
and for the reflected field,
$$\mathbf{E}_r(x,t) = \mathbf{A}_r\, e^{i \omega_2 \left( \frac{\mathbf{k}_r \cdot x}{v^+(t)} - t \right)}, \quad \text{for } t > t_0;$$
where, once again, $\mathbf{A}_r, \mathbf{A}_t$ are non-zero complex vectors and $\mathbf{k}_t, \mathbf{k}_r$ are unit vectors. Here, $\omega_2$ and $\omega_3$ are real numbers different from zero, and we will show that their signs determine the directions of the wave vectors. If $\mathbf{E}^- = \mathbf{E}_i$ and $\mathbf{E}^+ = \mathbf{E}_t + \mathbf{E}_r$, then the jump of these fields at $t = t_0$ is given by
$$[[\mathbf{E}(x,t_0)]] = \lim_{t \to t_0^+} \big( \mathbf{E}_t(x,t) + \mathbf{E}_r(x,t) \big) - \lim_{t \to t_0^-} \mathbf{E}_i(x,t).$$
Remark 1. 
Compare this with the case of a spatial interface. In that situation, since the incident and reflected waves are on the same side of the interface, one instead takes $\mathbf{E}^- = \mathbf{E}_i + \mathbf{E}_r$ and $\mathbf{E}^+ = \mathbf{E}_t$. See, for instance, [17].
Now, if the field
$$\mathbf{E} = \begin{cases} \mathbf{E}^+ & \text{for } t > t_0 \\ \mathbf{E}^- & \text{for } t < t_0 \end{cases}$$
satisfies the Maxwell Equation (13) in the sense of distributions, then, since the form of the fields $\mathbf{E}_i$, $\mathbf{E}_r$, and $\mathbf{E}_t$ implies that they satisfy (9)–(11), the boundary condition (15) is applicable, and we obtain
$$0 = [[\varepsilon(t_0)\, \mathbf{E}(x,t_0)]] = \varepsilon^+(t_0)\, \mathbf{E}^+(x,t_0) - \varepsilon^-(t_0)\, \mathbf{E}^-(x,t_0) = \varepsilon^+(t_0)\, \mathbf{A}_t\, e^{i \omega_3 \left( \frac{\mathbf{k}_t \cdot x}{v^+(t_0)} - t_0 \right)} + \varepsilon^+(t_0)\, \mathbf{A}_r\, e^{i \omega_2 \left( \frac{\mathbf{k}_r \cdot x}{v^+(t_0)} - t_0 \right)} - \varepsilon^-(t_0)\, \mathbf{A}_i\, e^{i \omega_1 \left( \frac{\mathbf{k}_i \cdot x}{v^-(t_0)} - t_0 \right)}$$
for all $x = (x_1, x_2, x_3)$.
If we define
$$\mathbf{m}_i = \frac{\omega_1 \mathbf{k}_i}{v^-(t_0)}, \qquad \mathbf{m}_r = \frac{\omega_2 \mathbf{k}_r}{v^+(t_0)}, \qquad \mathbf{m}_t = \frac{\omega_3 \mathbf{k}_t}{v^+(t_0)},$$
with $v^-(t_0) = \lim_{t \to t_0^-} v^-(t)$ and $v^+(t_0) = \lim_{t \to t_0^+} v^+(t)$, then we obtain an equation of the form
$$\mathbf{B}_t\, e^{i \mathbf{m}_t \cdot x} + \mathbf{B}_r\, e^{i \mathbf{m}_r \cdot x} - \mathbf{B}_i\, e^{i \mathbf{m}_i \cdot x} = 0 \quad \text{for all } x = (x_1, x_2, x_3), \tag{19}$$
where
$$\mathbf{B}_t = \varepsilon^+(t_0)\, \mathbf{A}_t\, e^{-i \omega_3 t_0}, \qquad \mathbf{B}_i = \varepsilon^-(t_0)\, \mathbf{A}_i\, e^{-i \omega_1 t_0}, \qquad \mathbf{B}_r = \varepsilon^+(t_0)\, \mathbf{A}_r\, e^{-i \omega_2 t_0}.$$
Now, from Lemma 1, Equation (19) implies that $\mathbf{m}_i = \mathbf{m}_t = \mathbf{m}_r$, i.e.,
$$\frac{\omega_1 \mathbf{k}_i}{v^-(t_0)} = \frac{\omega_2 \mathbf{k}_r}{v^+(t_0)} = \frac{\omega_3 \mathbf{k}_t}{v^+(t_0)}. \tag{20}$$
This gives the following proposition showing general relationships between the wave vectors k r , k t , and k i .
Proposition 2. 
Under the previous assumptions, we have
$$\mathbf{k}_t = \frac{\omega_1}{\omega_3}\, \frac{v^+(t_0)}{v^-(t_0)}\, \mathbf{k}_i, \qquad \mathbf{k}_r = \frac{\omega_1}{\omega_2}\, \frac{v^+(t_0)}{v^-(t_0)}\, \mathbf{k}_i. \tag{21}$$
Since the wave vectors are all unit vectors, taking absolute values in (21) yields
$$\left| \frac{\omega_1}{\omega_3} \right| \frac{v^+(t_0)}{v^-(t_0)} = \left| \frac{\omega_1}{\omega_2} \right| \frac{v^+(t_0)}{v^-(t_0)} = 1,$$
which implies
$$\frac{\omega_1}{\omega_3} = \pm \frac{v^-(t_0)}{v^+(t_0)}, \qquad \frac{\omega_1}{\omega_2} = \pm \frac{v^-(t_0)}{v^+(t_0)}.$$
We have assumed at the outset that $\omega_1 > 0$. So, if $\omega_3 > 0$, then $\mathbf{k}_t = \mathbf{k}_i$, which is consistent with [18] (Equation (5)). Moreover, the fact that $\omega_3 = \dfrac{v^+(t_0)}{v^-(t_0)}\, \omega_1$ is precisely the Snell Law for the transmitted wave in [11] (Equations (13) and (14)). On the other hand, if $\omega_3 < 0$, we obtain $\mathbf{k}_t = -\mathbf{k}_i$. A similar analysis with $\omega_2$ yields $\mathbf{k}_r = \mathbf{k}_i$ if $\omega_2 > 0$, and $\mathbf{k}_r = -\mathbf{k}_i$ if $\omega_2 < 0$. This agrees with [1] (Equation (4)) and [11] (Equation (13)).
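The relations (20)–(21) can be checked numerically: the spatial wave vector $\mathbf{m} = \omega \mathbf{k} / v$ is the conserved quantity, and dividing it by the new velocity fixes the new frequencies and directions. The numbers below are our own illustrative choices.

```python
# Numeric check of the temporal Snell law (20)-(21): the spatial wave
# vector m = omega * k / v is preserved across the interface, so the
# frequencies rescale by the velocity ratio.  Values are illustrative.
import numpy as np

v_minus, v_plus = 1.0, 0.5          # wave speeds before/after t0
omega1 = 2.0                        # incident frequency (omega1 > 0)
k_i = np.array([0.0, 0.0, 1.0])     # unit incident direction

m = omega1 * k_i / v_minus          # conserved spatial wave vector

# Transmitted wave: omega3 > 0 gives k_t = k_i and omega3 = (v+/v-)*omega1
omega3 = v_plus / v_minus * omega1
k_t = v_plus / omega3 * m
assert np.allclose(k_t, k_i) and np.isclose(np.linalg.norm(k_t), 1.0)

# Reflected wave: omega2 < 0 gives k_r = -k_i
omega2 = -v_plus / v_minus * omega1
k_r = v_plus / omega2 * m
assert np.allclose(k_r, -k_i)

print(omega3, omega2)  # 1.0 -1.0
```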
Remark 2. 
As mentioned above, Equation (21) can be seen as a generalized Snell Law at a temporal interface which does not require the velocities on either side of the interface to be constant. Indeed, it can be written as
$$\omega_3\, n^+(t_0)\, \mathbf{k}_t = \omega_1\, n^-(t_0)\, \mathbf{k}_i \tag{22}$$
and
$$\omega_2\, n^+(t_0)\, \mathbf{k}_r = \omega_1\, n^-(t_0)\, \mathbf{k}_i, \tag{23}$$
where $n^\pm(t_0) = c / v^\pm(t_0)$. In the plane $(x, ct)$, one can define an angle $\alpha$ similar to the incidence angle $\theta$ in the $(x,y)$ plane for spatial interfaces, such that $\tan(\alpha^\pm) = 1/n^\pm$. This angle can be called the angle of temporal incidence [19]. Then, from (22) and (23) we obtain the scalar law
$$|\omega_2| \tan(\alpha^-) = \omega_1 \tan(\alpha^+),$$
which agrees with the scalar law in Equation (15) of [11]. Since $\tan$ can take on any real value, we see that there is no notion of total internal reflection as in the case of classical spatial interfaces; that is, there is no wave that propagates backwards in time. Finally, the material parameters $\varepsilon$ and $\mu$ should be differentiable in time away from the interface $t = t_0$ and could in principle also vary in space. However, with spatially varying velocities, it is not clear whether Lemma 1 applies, so we have not considered this case.

4.1. Case When ε and μ Are Jump Functions and Calculation of the Amplitudes

Now consider the case when
$$\varepsilon(t) = \begin{cases} \varepsilon^- & \text{for } a < t < t_0 \\ \varepsilon^+ & \text{for } t_0 < t < b \end{cases}, \qquad \mu(t) = \begin{cases} \mu^- & \text{for } a < t < t_0 \\ \mu^+ & \text{for } t_0 < t < b \end{cases}$$
with $\mu^\pm, \varepsilon^\pm$ positive constants, so that $v^-(t) = v^-$ for $a < t < t_0$ and $v^+(t) = v^+$ for $t_0 < t < b$ are both constant. This is a common assumption in practice; see, e.g., Ref. [18] and the references therein. This form of the velocities also enables us to compute the associated magnetic fields and establish the boundary conditions for them. Together with the boundary conditions already obtained for the electric fields, we will then be able to calculate the amplitudes of the transmitted and reflected waves.
In fact, let us first calculate the magnetic fields. Consider the Maxwell Equation (3) for the incident field when $t < t_0$: seek $\mathbf{H}_i$ satisfying
$$\nabla \times \mathbf{E}_i = -\frac{\mu^-}{c}\, \frac{\partial \mathbf{H}_i}{\partial t}.$$
Since
$$\mathbf{E}_i(x,t) = \mathbf{A}_i\, e^{i \omega_1 \left( \frac{\mathbf{k}_i \cdot x}{v^-} - t \right)},$$
we see that
$$\nabla \times \mathbf{E}_i = -i \omega_1\, \frac{\mathbf{A}_i \times \mathbf{k}_i}{v^-}\, e^{i \omega_1 \left( \frac{\mathbf{k}_i \cdot x}{v^-} - t \right)},$$
and so integrating in time yields
$$\mathbf{H}_i = -\frac{c}{\mu^-} \int \nabla \times \mathbf{E}_i\, dt = -\frac{c}{\mu^-}\, \frac{\mathbf{E}_i \times \mathbf{k}_i}{v^-}, \tag{24}$$
plus a field depending only on $x$, which is assumed to be zero. Now suppose $t > t_0$. Since
$$\mathbf{E}_r(x,t) = \mathbf{A}_r\, e^{i \omega_2 \left( \frac{\mathbf{k}_r \cdot x}{v^+} - t \right)},$$
we similarly find that
$$\mathbf{H}_r = -\frac{c}{\mu^+}\, \frac{\mathbf{E}_r \times \mathbf{k}_r}{v^+},$$
plus a field depending only on $x$, which is also assumed to be zero. Finally, since
$$\mathbf{E}_t(x,t) = \mathbf{A}_t\, e^{i \omega_3 \left( \frac{\mathbf{k}_t \cdot x}{v^+} - t \right)},$$
we find that
$$\mathbf{H}_t = -\frac{c}{\mu^+}\, \frac{\mathbf{E}_t \times \mathbf{k}_t}{v^+}.$$
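As a symbolic cross-check of the formula for $\mathbf{H}_i$ (and, by the same computation, $\mathbf{H}_r$ and $\mathbf{H}_t$), one can verify with sympy that the pair $(\mathbf{E}_i, \mathbf{H}_i)$ satisfies Faraday's law (3) for a concrete transverse plane wave. The choices of $\mathbf{k}_i$ and $\mathbf{A}_i$ below are our own illustrative ones.

```python
# Symbolic check that H = -(c/mu) * (E x k)/v satisfies Faraday's law (3),
# curl E = -(mu/c) dH/dt, for a plane wave.  Choices of k and A illustrative.
import sympy as sp

x1, x2, x3, t = sp.symbols('x1 x2 x3 t', real=True)
w1, v, c, mu = sp.symbols('omega1 v c mu', positive=True)

k = sp.Matrix([0, 0, 1])          # unit propagation direction (choice)
A = sp.Matrix([1, sp.I, 0])       # transverse amplitude, A . k = 0 (choice)
X = sp.Matrix([x1, x2, x3])
phase = sp.exp(sp.I * w1 * (k.dot(X) / v - t))

E = A * phase
H = -(c / mu) * E.cross(k) / v    # the formula derived above

def curl(F):
    return sp.Matrix([
        sp.diff(F[2], x2) - sp.diff(F[1], x3),
        sp.diff(F[0], x3) - sp.diff(F[2], x1),
        sp.diff(F[1], x1) - sp.diff(F[0], x2)])

# Faraday's law (3): curl E + (mu/c) dH/dt should vanish identically.
residual = curl(E) + (mu / c) * sp.diff(H, t)
assert sp.simplify(residual) == sp.zeros(3, 1)
```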
Now we may consider the magnetic boundary condition (18). The jump of $\mu \mathbf{H}$ at $t = t_0$ is by definition
$$[[\mu(x,t_0)\, \mathbf{H}(x,t_0)]] = \mu^+ \mathbf{H}^+(t_0) - \mu^- \mathbf{H}^-(t_0) = -c \left( \frac{\mathbf{A}_t \times \mathbf{k}_t}{v^+}\, e^{i \omega_3 \left( \frac{\mathbf{k}_t \cdot x}{v^+} - t_0 \right)} + \frac{\mathbf{A}_r \times \mathbf{k}_r}{v^+}\, e^{i \omega_2 \left( \frac{\mathbf{k}_r \cdot x}{v^+} - t_0 \right)} \right) + c\, \frac{\mathbf{A}_i \times \mathbf{k}_i}{v^-}\, e^{i \omega_1 \left( \frac{\mathbf{k}_i \cdot x}{v^-} - t_0 \right)}$$
for all $x$, where $\mathbf{H}^+ = \mathbf{H}_r + \mathbf{H}_t$. The boundary condition (18) then implies that
$$-c \left( \frac{\mathbf{A}_t \times \mathbf{k}_t}{v^+}\, e^{i \omega_3 \left( \frac{\mathbf{k}_t \cdot x}{v^+} - t_0 \right)} + \frac{\mathbf{A}_r \times \mathbf{k}_r}{v^+}\, e^{i \omega_2 \left( \frac{\mathbf{k}_r \cdot x}{v^+} - t_0 \right)} \right) + c\, \frac{\mathbf{A}_i \times \mathbf{k}_i}{v^-}\, e^{i \omega_1 \left( \frac{\mathbf{k}_i \cdot x}{v^-} - t_0 \right)} = 0$$
for all $x$.
But since we know from (20) that the exponentials in $x$ are all equal, we obtain
$$-\mathbf{A}_t\, e^{-i \omega_3 t_0} \times \frac{\mathbf{k}_t}{v^+} - \mathbf{A}_r\, e^{-i \omega_2 t_0} \times \frac{\mathbf{k}_r}{v^+} + \mathbf{A}_i\, e^{-i \omega_1 t_0} \times \frac{\mathbf{k}_i}{v^-} = 0,$$
that is, using (20),
$$-\mathbf{A}_t\, e^{-i \omega_3 t_0} \times \frac{\omega_1 \mathbf{k}_i}{v^- \omega_3} - \mathbf{A}_r\, e^{-i \omega_2 t_0} \times \frac{\omega_1 \mathbf{k}_i}{v^- \omega_2} + \mathbf{A}_i\, e^{-i \omega_1 t_0} \times \frac{\mathbf{k}_i}{v^-} = 0;$$
cancelling $v^-$ yields
$$-\frac{\omega_1}{\omega_3}\, \mathbf{A}_t\, e^{-i \omega_3 t_0} \times \mathbf{k}_i - \frac{\omega_1}{\omega_2}\, \mathbf{A}_r\, e^{-i \omega_2 t_0} \times \mathbf{k}_i + \mathbf{A}_i\, e^{-i \omega_1 t_0} \times \mathbf{k}_i = 0,$$
so
$$\left( -\frac{\omega_1}{\omega_3}\, \mathbf{A}_t\, e^{-i \omega_3 t_0} - \frac{\omega_1}{\omega_2}\, \mathbf{A}_r\, e^{-i \omega_2 t_0} + \mathbf{A}_i\, e^{-i \omega_1 t_0} \right) \times \mathbf{k}_i = 0.$$
From (20), and letting $x = 0$ in (19), we obtain
$$\varepsilon^+ \mathbf{A}_t\, e^{-i \omega_3 t_0} + \varepsilon^+ \mathbf{A}_r\, e^{-i \omega_2 t_0} - \varepsilon^- \mathbf{A}_i\, e^{-i \omega_1 t_0} = 0.$$
If we set
$$\mathbf{B}_i = \mathbf{A}_i\, e^{-i \omega_1 t_0}, \qquad \mathbf{B}_r = \mathbf{A}_r\, e^{-i \omega_2 t_0}, \qquad \mathbf{B}_t = \mathbf{A}_t\, e^{-i \omega_3 t_0}, \qquad \mathbf{k}_i = (k_1^i, k_2^i, k_3^i),$$
$$\mathbf{B}_i = (B_1^i, B_2^i, B_3^i), \qquad \mathbf{B}_r = (B_1^r, B_2^r, B_3^r), \qquad \mathbf{B}_t = (B_1^t, B_2^t, B_3^t),$$
then we need to solve for $\mathbf{B}_r$ and $\mathbf{B}_t$ in the system of equations
$$\varepsilon^+ \mathbf{B}_t + \varepsilon^+ \mathbf{B}_r - \varepsilon^- \mathbf{B}_i = 0, \tag{29}$$
$$\left( -\frac{\omega_1}{\omega_3}\, \mathbf{B}_t - \frac{\omega_1}{\omega_2}\, \mathbf{B}_r + \mathbf{B}_i \right) \times \mathbf{k}_i = 0, \tag{30}$$
which can be written as
$$\mathbf{B}_r + \mathbf{B}_t = (\varepsilon^-/\varepsilon^+)\, \mathbf{B}_i, \qquad \left( \frac{\omega_1}{\omega_2}\, \mathbf{B}_r + \frac{\omega_1}{\omega_3}\, \mathbf{B}_t \right) \times \mathbf{k}_i = \mathbf{B}_i \times \mathbf{k}_i.$$
We now proceed to find B r and B t from this system of equations. We shall prove the following.
Proposition 3. 
If $\omega_2 \neq \omega_3$, then
$$\mathbf{B}_r = \frac{1 - (\omega_1/\omega_3)(\varepsilon^-/\varepsilon^+)}{(\omega_1/\omega_2) - (\omega_1/\omega_3)}\, \mathbf{B}_i, \tag{31}$$
and
$$\mathbf{B}_t = \frac{(\omega_1/\omega_2)(\varepsilon^-/\varepsilon^+) - 1}{(\omega_1/\omega_2) - (\omega_1/\omega_3)}\, \mathbf{B}_i. \tag{32}$$
If $\omega_2 = \omega_3$, then
$$(\mathbf{A}_t + \mathbf{A}_r)\, e^{-i \omega_2 t_0} = (\varepsilon^-/\varepsilon^+)\, \mathbf{B}_i. \tag{33}$$
Proof. 
The augmented matrix of the system (29) and (30) is
$$M = \left( \begin{array}{cccccc|c} 1 & 0 & 0 & 1 & 0 & 0 & B_1^i\, \varepsilon^-/\varepsilon^+ \\ 0 & 1 & 0 & 0 & 1 & 0 & B_2^i\, \varepsilon^-/\varepsilon^+ \\ 0 & 0 & 1 & 0 & 0 & 1 & B_3^i\, \varepsilon^-/\varepsilon^+ \\ 0 & k_3^i\, \frac{\omega_1}{\omega_2} & -k_2^i\, \frac{\omega_1}{\omega_2} & 0 & k_3^i\, \frac{\omega_1}{\omega_3} & -k_2^i\, \frac{\omega_1}{\omega_3} & B_2^i k_3^i - B_3^i k_2^i \\ -k_3^i\, \frac{\omega_1}{\omega_2} & 0 & k_1^i\, \frac{\omega_1}{\omega_2} & -k_3^i\, \frac{\omega_1}{\omega_3} & 0 & k_1^i\, \frac{\omega_1}{\omega_3} & B_3^i k_1^i - B_1^i k_3^i \\ k_2^i\, \frac{\omega_1}{\omega_2} & -k_1^i\, \frac{\omega_1}{\omega_2} & 0 & k_2^i\, \frac{\omega_1}{\omega_3} & -k_1^i\, \frac{\omega_1}{\omega_3} & 0 & B_1^i k_2^i - B_2^i k_1^i \end{array} \right).$$
If we let
$$N = \begin{pmatrix} 0 & k_3^i & -k_2^i \\ -k_3^i & 0 & k_1^i \\ k_2^i & -k_1^i & 0 \end{pmatrix}$$
(notice $\det N = 0$), then the matrix $M$ without the last column equals
$$\begin{pmatrix} \mathrm{Id} & \mathrm{Id} \\ (\omega_1/\omega_2)\, N & (\omega_1/\omega_3)\, N \end{pmatrix}.$$
The system of equations can then be written as
$$\begin{pmatrix} \mathrm{Id} & \mathrm{Id} \\ (\omega_1/\omega_2)\, N & (\omega_1/\omega_3)\, N \end{pmatrix} \begin{pmatrix} \mathbf{B}_r \\ \mathbf{B}_t \end{pmatrix} = \begin{pmatrix} (\varepsilon^-/\varepsilon^+)\, \mathbf{B}_i \\ \mathbf{B}_i \times \mathbf{k}_i \end{pmatrix},$$
where the unknowns are $\mathbf{B}_r, \mathbf{B}_t$. This means
$$\mathbf{B}_r + \mathbf{B}_t = (\varepsilon^-/\varepsilon^+)\, \mathbf{B}_i, \qquad N \big( (\omega_1/\omega_2)\, \mathbf{B}_r + (\omega_1/\omega_3)\, \mathbf{B}_t \big) = \mathbf{B}_i \times \mathbf{k}_i.$$
From the first equation, $\mathbf{B}_t = (\varepsilon^-/\varepsilon^+)\, \mathbf{B}_i - \mathbf{B}_r$, and substituting into the second equation yields
$$N \Big( (\omega_1/\omega_2)\, \mathbf{B}_r + (\omega_1/\omega_3) \big( (\varepsilon^-/\varepsilon^+)\, \mathbf{B}_i - \mathbf{B}_r \big) \Big) = \mathbf{B}_i \times \mathbf{k}_i,$$
which gives
$$N \Big( \big( (\omega_1/\omega_2) - (\omega_1/\omega_3) \big)\, \mathbf{B}_r \Big) = \mathbf{B}_i \times \mathbf{k}_i - N \big( (\omega_1/\omega_3)(\varepsilon^-/\varepsilon^+)\, \mathbf{B}_i \big) = \big( 1 - (\omega_1/\omega_3)(\varepsilon^-/\varepsilon^+) \big)\, \mathbf{B}_i \times \mathbf{k}_i.$$
Case when $\omega_2 \neq \omega_3$. Then $\mathbf{B}_r$ must solve
$$N \mathbf{B}_r = \frac{1 - (\omega_1/\omega_3)(\varepsilon^-/\varepsilon^+)}{(\omega_1/\omega_2) - (\omega_1/\omega_3)}\, \mathbf{B}_i \times \mathbf{k}_i,$$
which means that, to have a solution $\mathbf{B}_r$, the vector $\dfrac{1 - (\omega_1/\omega_3)(\varepsilon^-/\varepsilon^+)}{(\omega_1/\omega_2) - (\omega_1/\omega_3)}\, \mathbf{B}_i \times \mathbf{k}_i$ must be in the image of $N$. Given a three-dimensional vector $X$, we have $N X = X \times \mathbf{k}_i$. So the vector $\mathbf{B}_r$ must satisfy the equation
$$\left( \mathbf{B}_r - \frac{1 - (\omega_1/\omega_3)(\varepsilon^-/\varepsilon^+)}{(\omega_1/\omega_2) - (\omega_1/\omega_3)}\, \mathbf{B}_i \right) \times \mathbf{k}_i = 0,$$
which means the vector $\mathbf{B}_r - \dfrac{1 - (\omega_1/\omega_3)(\varepsilon^-/\varepsilon^+)}{(\omega_1/\omega_2) - (\omega_1/\omega_3)}\, \mathbf{B}_i$ must be parallel to the vector $\mathbf{k}_i$, that is,
$$\mathbf{B}_r - \frac{1 - (\omega_1/\omega_3)(\varepsilon^-/\varepsilon^+)}{(\omega_1/\omega_2) - (\omega_1/\omega_3)}\, \mathbf{B}_i = \lambda\, \mathbf{k}_i;$$
but since $\omega_2 \neq \omega_3$, dotting the last equation with $\mathbf{k}_i$ and using (42) below together with (21) yields $\lambda = 0$, and we then obtain (31). We also obtain from this that
$$\mathbf{B}_t = (\varepsilon^-/\varepsilon^+)\, \mathbf{B}_i - \mathbf{B}_r = \left( \frac{\varepsilon^-}{\varepsilon^+} - \frac{1 - (\omega_1/\omega_3)(\varepsilon^-/\varepsilon^+)}{(\omega_1/\omega_2) - (\omega_1/\omega_3)} \right) \mathbf{B}_i = \frac{(\omega_1/\omega_2)(\varepsilon^-/\varepsilon^+) - 1}{(\omega_1/\omega_2) - (\omega_1/\omega_3)}\, \mathbf{B}_i,$$
which is (32).
Case when $\omega_2 = \omega_3$. We have
$$N (\mathbf{B}_r + \mathbf{B}_t) = (\omega_2/\omega_1)\, \mathbf{B}_i \times \mathbf{k}_i$$
from (30), and since $\mathbf{B}_r + \mathbf{B}_t = (\varepsilon^-/\varepsilon^+)\, \mathbf{B}_i$, if there is a solution $\mathbf{B}_r, \mathbf{B}_t$, then the vector $\mathbf{B}_i$ must satisfy
$$N \big( (\varepsilon^-/\varepsilon^+)\, \mathbf{B}_i \big) = (\varepsilon^-/\varepsilon^+)\, \mathbf{B}_i \times \mathbf{k}_i = (\omega_2/\omega_1)\, \mathbf{B}_i \times \mathbf{k}_i,$$
which is equivalent to
$$\varepsilon^- \omega_1 = \varepsilon^+ \omega_2.$$
In this case, (29) and (30) are the same equation, and so we get $\mathbf{B}_r + \mathbf{B}_t = (\varepsilon^-/\varepsilon^+)\, \mathbf{B}_i$, which has infinitely many solutions $\mathbf{B}_r, \mathbf{B}_t$. In this case, from (21), we have $\mathbf{k}_r = \mathbf{k}_t$, and therefore the electric field
$$\mathbf{E}^+ = \mathbf{A}_t\, e^{i \omega_3 \left( \frac{\mathbf{k}_t \cdot x}{v^+(t)} - t \right)} + \mathbf{A}_r\, e^{i \omega_2 \left( \frac{\mathbf{k}_r \cdot x}{v^+(t)} - t \right)} = (\mathbf{A}_t + \mathbf{A}_r)\, e^{i \omega_3 \left( \frac{\mathbf{k}_t \cdot x}{v^+(t)} - t \right)},$$
showing that there are no distinct transmitted and reflected waves. This means the amplitudes of the transmitted and reflected waves satisfy
$$(\mathbf{A}_t + \mathbf{A}_r)\, e^{-i \omega_2 t_0} = (\varepsilon^-/\varepsilon^+)\, \mathbf{B}_i.$$
Note that (43) must hold in this case, so the amplitudes $\mathbf{A}_t, \mathbf{A}_r$ cannot be arbitrary. □
Also note that when $\omega_2 = \omega_3$, we have $\mathbf{k}_t = \mathbf{k}_r$, but the relation to $\mathbf{k}_i$ depends on the sign of the frequencies. Indeed, if $\omega_3 > 0$ then $\mathbf{k}_t = \mathbf{k}_r = \mathbf{k}_i$, but if $\omega_3 < 0$ then $\mathbf{k}_t = \mathbf{k}_r = -\mathbf{k}_i$.
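The closed forms (31) and (32) can be cross-checked by solving the 6×6 system (29)–(30) numerically. Since $\det N = 0$, the block matrix is singular along $\mathbf{k}_i$; the minimum-norm least-squares solution is the one orthogonal to $\mathbf{k}_i$, which is the relevant one by the divergence constraints of Section 4.2. The parameter values below are our own illustrative choices.

```python
# Solve the 6x6 linear system (29)-(30) for B_r, B_t numerically and
# compare with the closed forms (31)-(32).  Parameter values illustrative.
import numpy as np

eps_m, eps_p = 1.0, 4.0
w1 = 2.0
w3 = 1.0           # transmitted frequency (here omega3 = (v+/v-)*omega1, v+/v- = 1/2)
w2 = -1.0          # reflected frequency, omega2 = -omega3 in this example
k = np.array([0.0, 0.0, 1.0])
B_i = np.array([1.0, 2.0, 0.0])      # must satisfy B_i . k = 0

N = np.array([[0, k[2], -k[1]],
              [-k[2], 0, k[0]],
              [k[1], -k[0], 0]])     # N @ X == np.cross(X, k)

I = np.eye(3)
M = np.block([[I, I],
              [w1 / w2 * N, w1 / w3 * N]])
rhs = np.concatenate([eps_m / eps_p * B_i, np.cross(B_i, k)])

# M is singular along k; lstsq returns the minimum-norm solution, which is
# orthogonal to the nullspace span{(k, -k)} and hence the physical one.
sol, *_ = np.linalg.lstsq(M, rhs, rcond=None)
B_r, B_t = sol[:3], sol[3:]

r = eps_m / eps_p
B_r_closed = (1 - (w1 / w3) * r) / ((w1 / w2) - (w1 / w3)) * B_i
B_t_closed = ((w1 / w2) * r - 1) / ((w1 / w2) - (w1 / w3)) * B_i
assert np.allclose(B_r, B_r_closed) and np.allclose(B_t, B_t_closed)
```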

Reflection and Transmission Coefficients

Defining the reflection coefficient $\mathcal{R}$ as $|\mathbf{B}_r| / |\mathbf{B}_i|$, we see that, if $\omega_2 = -\omega_3$ and $\omega_3 > 0$, then from (31) we have
$$\mathbf{B}_r = \frac{1}{2} \left( \frac{\varepsilon^-}{\varepsilon^+} - \frac{\omega_3}{\omega_1} \right) \mathbf{B}_i = \frac{1}{2} \left( \frac{\varepsilon^-}{\varepsilon^+} - \sqrt{\frac{\varepsilon^- \mu^-}{\varepsilon^+ \mu^+}} \right) \mathbf{B}_i,$$
since $\omega_1/\omega_3 = \pm\, v^-/v^+$, $\omega_1/\omega_2 = \pm\, v^-/v^+$, and $\omega_1 > 0$. Since $\mathbf{B}_t = (\varepsilon^-/\varepsilon^+)\, \mathbf{B}_i - \mathbf{B}_r$, in this case we obtain
$$\mathbf{B}_t = \frac{\varepsilon^-}{\varepsilon^+}\, \mathbf{B}_i - \frac{1}{2} \left( \frac{\varepsilon^-}{\varepsilon^+} - \sqrt{\frac{\varepsilon^- \mu^-}{\varepsilon^+ \mu^+}} \right) \mathbf{B}_i = \frac{1}{2} \left( \frac{\varepsilon^-}{\varepsilon^+} + \sqrt{\frac{\varepsilon^- \mu^-}{\varepsilon^+ \mu^+}} \right) \mathbf{B}_i.$$
That is, we have shown that the reflection coefficient $\mathcal{R}$ is given by
$$\mathcal{R} = \frac{|\mathbf{B}_r|}{|\mathbf{B}_i|} = \frac{1}{2} \left| \frac{\varepsilon^-}{\varepsilon^+} - \sqrt{\frac{\varepsilon^- \mu^-}{\varepsilon^+ \mu^+}} \right| = \frac{1}{2} \sqrt{\frac{\varepsilon^-}{\varepsilon^+}} \left| \sqrt{\frac{\varepsilon^-}{\varepsilon^+}} - \sqrt{\frac{\mu^-}{\mu^+}} \right|, \tag{35}$$
which agrees with [20] (Equation (4)). Note that the quantity inside the absolute value can be positive or negative depending on the relationship between the impedances. We also see that the transmission coefficient $\mathcal{T}$, defined by $|\mathbf{B}_t| / |\mathbf{B}_i|$, is given by
$$\mathcal{T} = \frac{|\mathbf{B}_t|}{|\mathbf{B}_i|} = \frac{1}{2} \left( \frac{\varepsilon^-}{\varepsilon^+} + \sqrt{\frac{\varepsilon^- \mu^-}{\varepsilon^+ \mu^+}} \right) = \frac{1}{2} \sqrt{\frac{\varepsilon^-}{\varepsilon^+}} \left( \sqrt{\frac{\varepsilon^-}{\varepsilon^+}} + \sqrt{\frac{\mu^-}{\mu^+}} \right). \tag{36}$$
This agrees with [20] (Equation (5)). Note that we do not have conservation of energy in this case; rather, $R + T$ depends on the impedances on either side of the interface. Indeed, let $Z_1 = \sqrt{\mu^-/\varepsilon^-}$ and $Z_2 = \sqrt{\mu^+/\varepsilon^+}$. If $Z_1 < Z_2$, then we have
$$R + T = \frac{\varepsilon^-}{\varepsilon^+}.$$
If $Z_1 > Z_2$, then we have
$$R + T = \sqrt{\frac{\varepsilon^-\mu^-}{\varepsilon^+\mu^+}} = \frac{n^-}{n^+}.$$
Now, if $\omega_2 = -\omega_3$ and $\omega_3 < 0$ while $\omega_1 > 0$, then we find that
$$R = \frac{1}{2}\left(\frac{\varepsilon^-}{\varepsilon^+} + \sqrt{\frac{\varepsilon^-\mu^-}{\varepsilon^+\mu^+}}\right)$$
and
$$T = \frac{1}{2}\left|\frac{\varepsilon^-}{\varepsilon^+} - \sqrt{\frac{\varepsilon^-\mu^-}{\varepsilon^+\mu^+}}\right|.$$
That is, the transmission and reflection coefficients have switched roles in this case.
Finally, note that negative time refraction is also possible [21,22,23]. So now suppose that $n^- > 0$ but $n^+ < 0$ with $\varepsilon^+, \mu^+ < 0$, so that $n^+ = -\sqrt{\varepsilon^+\mu^+}$. Then in particular we have $v^+ < 0$. The previous analysis up to Equation (20) does not involve the signs of these material parameters, so it still holds. Now, (21) then gives
$$k_t = \frac{\omega_1}{\omega_3}\,\frac{v^+}{v^-}\,k_i,$$
and so, again after taking absolute values, if $\omega_1, \omega_3$ have the same sign then we obtain
$$k_t = -k_i,$$
because now, in this case, $\omega_3 = -(v^+/v^-)\,\omega_1$. Similarly, if $\omega_1, \omega_3$ have different signs, then we obtain $k_t = k_i$. And if $\omega_2 < 0$ (i.e., $\omega_1, \omega_2$ have different signs), then again from (20) we find that $\omega_2 = (v^+/v^-)\,\omega_1$ and hence
$$k_r = k_i.$$
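The sign bookkeeping in the negative-refraction case reduces to a few lines of arithmetic. The snippet below, using illustrative values that are not from the paper, applies the relation $k_t = (\omega_1/\omega_3)(v^+/v^-)\,k_i$ with $v^+ < 0$:

```python
# Negative time refraction: the second medium has n+ < 0, hence v+ < 0.
v_minus, v_plus = 1.0, -0.5   # illustrative propagation speeds
omega_1, k_i = 1.0, 2.0       # illustrative incident frequency and wave number

# If omega_1 and omega_3 have the same sign, then omega_3 = -(v+/v-) omega_1 ...
omega_3 = -(v_plus / v_minus) * omega_1
k_t = (omega_1 / omega_3) * (v_plus / v_minus) * k_i
assert k_t == -k_i            # ... and the transmitted wave vector flips

# If omega_1 and omega_3 have different signs, then omega_3 = (v+/v-) omega_1 ...
omega_3 = (v_plus / v_minus) * omega_1
k_t = (omega_1 / omega_3) * (v_plus / v_minus) * k_i
assert k_t == k_i             # ... and the transmitted wave vector is unchanged
```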

4.2. Divergence-Free Conditions

Recall that we have defined
$$E^+ = E_t + E_r, \qquad E^- = E_i.$$
Assuming zero charge density, we also have the following divergence equations from the Maxwell system:
$$\nabla\cdot\left(\varepsilon^\pm E^\pm\right) = 0 \qquad (39)$$
and
$$\nabla\cdot\left(\mu^\pm H^\pm\right) = 0, \qquad (40)$$
recalling that we associate $\varepsilon^+, \mu^+$ with $E^+$ and $\varepsilon^-, \mu^-$ with $E^-$. Again, let us assume that each of $\varepsilon^\pm, \mu^\pm$ is a positive constant. First consider $E^-$. In light of (39), we find that
$$A_i \cdot \frac{\omega_1}{v^-}\,k_i = 0.$$
We also need to verify (40) with $H_i$ given by (24). Note that, since $A_i$ is constant,
$$\nabla\times E_i = e^{i\omega_1\Phi_i}\,\nabla\times A_i + \nabla e^{i\omega_1\Phi_i}\times A_i = e^{i\omega_1\Phi_i}\,\nabla\times A_i + i\omega_1\,\frac{k_i}{v^-}\,e^{i\omega_1\Phi_i}\times A_i = i\omega_1\,\frac{k_i}{v^-}\,e^{i\omega_1\Phi_i}\times A_i,$$
where
$$\Phi_i \equiv \frac{k_i\cdot x}{v^-} - t.$$
We find that
$$\nabla\cdot\left(\mu^- H_i\right) = \nabla\cdot\left(c\,E_i\times\frac{k_i}{v^-}\right) = \frac{k_i}{v^-}\cdot\left(\nabla\times (c\,E_i)\right) - c\,E_i\cdot\left(\nabla\times\frac{k_i}{v^-}\right).$$
The second term vanishes because $k_i/v^-$ is constant, and by the previous calculation of $\nabla\times E_i$, the factor $k_i/v^-$ appears twice in the term $\frac{k_i}{v^-}\cdot\left(\nabla\times(c\,E_i)\right)$ (once as the prefactor and once inside the cross product), so this term vanishes as well. Hence $\nabla\cdot(\mu^- H_i) = 0$, and (40) is satisfied for $H^-$.
Consider now the total field
$$E^+(t) = A_t\,e^{i\omega_3\left(\frac{k_t\cdot x}{v^+}-t\right)} + A_r\,e^{i\omega_2\left(\frac{k_r\cdot x}{v^+}-t\right)}.$$
In light of (39), we find that we must have
$$A_t\cdot\frac{\omega_3}{v^+}\,k_t\,e^{-i\omega_3 t} + A_r\cdot\frac{\omega_2}{v^+}\,k_r\,e^{-i\omega_2 t} = 0, \qquad t > t_0.$$
Now, if $\omega_2 \neq \omega_3$, this yields
$$A_t\cdot\frac{\omega_3}{v^+}\,k_t = 0 \qquad\text{and}\qquad A_r\cdot\frac{\omega_2}{v^+}\,k_r = 0.$$
If $\omega_2 = \omega_3$, we have
$$\left(A_t + A_r\right)\cdot\frac{\omega_3}{v^+}\,k_t = \left(A_t + A_r\right)\cdot\frac{\omega_3}{v^+}\,k_r = 0.$$
We also need to verify (40) with $H^+$. Let us compute $H^+$. Since
$$E^+(t) = A_t\,e^{i\omega_3\left(\frac{k_t\cdot x}{v^+}-t\right)} + A_r\,e^{i\omega_2\left(\frac{k_r\cdot x}{v^+}-t\right)},$$
we see that
$$\nabla\times E^+ = -\,i\omega_3\,A_t\times\frac{k_t}{v^+}\,e^{i\omega_3\left(\frac{k_t\cdot x}{v^+}-t\right)} - i\omega_2\,A_r\times\frac{k_r}{v^+}\,e^{i\omega_2\left(\frac{k_r\cdot x}{v^+}-t\right)},$$
and so integrating in time yields
$$H^+ = \frac{c}{\mu^+}\int \nabla\times E^+\,dt = \frac{c}{\mu^+}\,E_t\times\frac{k_t}{v^+} + \frac{c}{\mu^+}\,E_r\times\frac{k_r}{v^+},$$
plus a field depending only on $x$, which is assumed to be zero.
Similar to the above, we find that
$$\nabla\cdot\left(\mu^+ H^+\right) = \nabla\cdot\left(c\,E_t\times\frac{k_t}{v^+}\right) + \nabla\cdot\left(c\,E_r\times\frac{k_r}{v^+}\right),$$
and both cross products end up being zero. Hence (40) is satisfied with $H^+$ as well.
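The cross-product cancellations in this subsection all reduce to the scalar triple product identity $k\cdot(k\times A) = 0$. As an independent check, not part of the paper, the following SymPy snippet verifies symbolically that $\nabla\cdot(E\times k) = 0$ for a plane-wave field $E = A\,e^{i k\cdot x}$ with constant $A$ and $k$:

```python
import sympy as sp
from sympy.vector import CoordSys3D, divergence

C = CoordSys3D('C')
# Constant symbolic amplitude A and wave vector k (omega/v factors absorbed into k)
Ax, Ay, Az, kx, ky, kz = sp.symbols('A_x A_y A_z k_x k_y k_z', real=True)

phase = sp.exp(sp.I * (kx * C.x + ky * C.y + kz * C.z))
E = (Ax * C.i + Ay * C.j + Az * C.k) * phase   # plane-wave field E = A e^{i k.x}
k = kx * C.i + ky * C.j + kz * C.k

# div(E x k) = k . curl(E) = i (k . (k x A)) e^{i k.x}, which vanishes identically
div_cross = divergence(E.cross(k))
assert sp.simplify(div_cross) == 0
```

Note that no transversality assumption on $A$ is needed here: the divergence vanishes purely because $k$ appears twice in the scalar triple product.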

4.3. Exponential Lemma

In our previous analysis, we required the following exponential lemma:
Lemma 1. 
Let $A_1, \dots, A_N \in \mathbb{C}^n \setminus \{0\}$ and $\omega_1, \dots, \omega_N \in \mathbb{R}$. If
$$\sum_{j=1}^N A_j\,e^{i\omega_j x} = 0 \quad \text{for all } x \in \mathbb{R}, \qquad (45)$$
then $\omega_1 = \cdots = \omega_N$.
Proof. 
Suppose first that $n = 1$. Differentiating (45) $k$ times with respect to $x$ yields $\sum_{j=1}^N (i\omega_j)^k A_j\,e^{i\omega_j x} = 0$. If we let
$$B = B(x) = \begin{pmatrix} e^{i\omega_1 x} & e^{i\omega_2 x} & \cdots & e^{i\omega_N x}\\ i\omega_1\,e^{i\omega_1 x} & i\omega_2\,e^{i\omega_2 x} & \cdots & i\omega_N\,e^{i\omega_N x}\\ (i\omega_1)^2\,e^{i\omega_1 x} & (i\omega_2)^2\,e^{i\omega_2 x} & \cdots & (i\omega_N)^2\,e^{i\omega_N x}\\ \vdots & \vdots & & \vdots\\ (i\omega_1)^{N-1}\,e^{i\omega_1 x} & (i\omega_2)^{N-1}\,e^{i\omega_2 x} & \cdots & (i\omega_N)^{N-1}\,e^{i\omega_N x} \end{pmatrix},$$
then the vector $A = (A_1, \dots, A_N) \neq 0$ satisfies the system of equations $B A^t = 0$, and therefore the determinant
$$\det B = e^{i\omega_1 x}\cdots e^{i\omega_N x}\,\det\begin{pmatrix} 1 & 1 & \cdots & 1\\ i\omega_1 & i\omega_2 & \cdots & i\omega_N\\ (i\omega_1)^2 & (i\omega_2)^2 & \cdots & (i\omega_N)^2\\ \vdots & \vdots & & \vdots\\ (i\omega_1)^{N-1} & (i\omega_2)^{N-1} & \cdots & (i\omega_N)^{N-1} \end{pmatrix} = e^{i\omega_1 x}\cdots e^{i\omega_N x} \prod_{1\le k < l \le N} \left(i\omega_l - i\omega_k\right) = 0$$
by the Vandermonde determinant formula. Hence the $\omega_j$ cannot be pairwise distinct; grouping terms with equal frequencies and repeating the argument with the distinct values shows that all $\omega_j$ are equal, and so the lemma follows for $n = 1$.
Suppose next that $n > 1$. Write $A_j = (a_{1j}, \dots, a_{nj})$, $1 \le j \le N$. Then (45) implies
$$\sum_{j=1}^N a_{kj}\,e^{i\omega_j x} = 0 \quad \text{for all } x \in \mathbb{R}$$
for $1 \le k \le n$. Suppose, by contradiction, that not all $\omega_j$ are equal. Then there exists $1 < m \le N$ such that, after relabeling the $\omega_j$, we can write $\omega_1 < \omega_2 < \cdots < \omega_m$ and $\omega_j = \omega_m$ for $m \le j \le N$. Hence we can write
$$\sum_{j=1}^{m-1} A_j\,e^{i\omega_j x} + \left(\sum_{j=m}^{N} A_j\right) e^{i\omega_m x} = 0$$
for all $x \in \mathbb{R}$, which, written in components, means
$$\sum_{j=1}^{m-1} a_{kj}\,e^{i\omega_j x} + \left(\sum_{j=m}^{N} a_{kj}\right) e^{i\omega_m x} = 0$$
for $1 \le k \le n$. From the case $n = 1$, we obtain
$$a_{kj} = 0 \ \text{ for } 1 \le j \le m-1 \qquad\text{and}\qquad \sum_{j=m}^{N} a_{kj} = 0$$
for $1 \le k \le n$. Consequently, the vectors $A_j = (a_{1j}, \dots, a_{nj}) = 0$ for $1 \le j \le m-1$, which leads to a contradiction. □
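The mechanism in the $n = 1$ case can also be illustrated numerically: for pairwise-distinct frequencies the matrix $B(0)$ is invertible, so $B A^t = 0$ forces $A = 0$. The snippet below, with illustrative frequencies, checks $\det B(0)$ against the Vandermonde product from the proof:

```python
import numpy as np
from itertools import combinations

omegas = [0.5, 1.0, 2.0, 3.5]   # illustrative pairwise-distinct frequencies
z = [1j * w for w in omegas]
N = len(z)

# B(0): row k holds (i w_j)^k, matching the matrix B(x) in the proof at x = 0
B0 = np.array([[zj ** k for zj in z] for k in range(N)])

det_numeric = np.linalg.det(B0)
det_formula = np.prod([z[l] - z[k] for k, l in combinations(range(N), 2)])

assert abs(det_numeric - det_formula) < 1e-9   # Vandermonde product formula
assert det_numeric != 0                        # invertible, so only A = 0 solves B A^t = 0
```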

5. Conclusions

This paper carries out a rigorous mathematical analysis of the generalized Snell's Law at temporal interfaces from the perspective of Maxwell's equations in the sense of distributions. The material parameters considered can depend on time, and no a priori ansatz on the type of solution to Maxwell's equations is made in deriving the general temporal boundary conditions. In addition, a careful analysis of the signs of the frequencies of the incident, reflected, and transmitted waves is carried out, and precise formulas for the reflection and transmission coefficients are obtained.

Author Contributions

Conceptualization, C.E.G. and E.S.; Methodology, C.E.G. and E.S.; Formal analysis, C.E.G. and E.S.; Investigation, C.E.G. and E.S.; Writing—review & editing, C.E.G. and E.S. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Data Availability Statement

No new data were created or analyzed in this study. Data sharing is not applicable to this article.

Conflicts of Interest

The authors declare no conflicts of interest.

References

1. Mostafa, M.H.; Mirmoosa, M.S.; Sidorenko, M.S.; Asadchy, V.S.; Tretyakov, S.A. Temporal interfaces in complex electromagnetic materials: An overview. Opt. Mater. Express 2024, 14, 1103–1127.
2. Lustig, E.; Segal, O.; Saha, S.; Fruhling, C.; Shalaev, V.M.; Boltasseva, A.; Segev, M. Photonic time-crystals: Fundamental concepts. Opt. Express 2023, 31, 9165–9170.
3. Faragó, I.; Havasi, Á.; Horváth, R. Numerical solution of the Maxwell equations in time-varying media using Magnus expansion. Cent. Eur. J. Math. 2012, 10, 137–149.
4. Stachura, E.; Wellander, N.; Cherkaev, E. Quantitative analysis of passive intermodulation and surface roughness. Stud. Appl. Math. 2024, 153, e12688.
5. Alam, T.M.; Paquet, L. The coupled heat Maxwell equations with temperature-dependent permittivity. J. Math. Anal. Appl. 2022, 505, 125472.
6. Yin, H.M. Regularity of weak solution to Maxwell's equations and applications to microwave heating. J. Differ. Equ. 2004, 200, 137–161.
7. Yin, H.M. Global solutions of Maxwell's equations in an electromagnetic field with a temperature-dependent electrical conductivity. Eur. J. Appl. Math. 1994, 5, 57–64.
8. Engheta, N. Four-dimensional optics using time-varying metamaterials. Science 2023, 379, 1190–1191.
9. Tirole, R.; Vezzoli, S.; Saxena, D.; Yang, S.; Raziman, T.; Galiffi, E.; Maier, S.A.; Pendry, J.B.; Sapienza, R. Second harmonic generation at a time-varying interface. Nat. Commun. 2024, 15, 7752.
10. Morgenthaler, F.R. Velocity modulation of electromagnetic waves. IRE Trans. Microw. Theory Tech. 1958, 6, 167–172.
11. Mendonça, J.; Shukla, P. Time refraction and time reflection: Two basic concepts. Phys. Scr. 2002, 65, 160.
12. Mendonça, J.T. Time refraction and spacetime optics. Symmetry 2024, 16, 1548.
13. Born, M.; Wolf, E. Principles of Optics: Electromagnetic Theory of Propagation, Interference and Diffraction of Light, 6th ed.; Elsevier: Amsterdam, The Netherlands, 1980.
14. Gutiérrez, C.E.; Sabra, A. Generalized Snell's Law and Maxwell Equations. Ver. 2. Available online: https://arxiv.org/abs/2305.01081 (accessed on 31 July 2025).
15. Schwartz, L. Théorie des Distributions; Publications de l'Institut de Mathématique de l'Université de Strasbourg: Strasbourg, France, 1950; Volume I.
16. Folland, G.B. Real Analysis: Modern Techniques and Their Applications; John Wiley & Sons: Hoboken, NJ, USA, 1999.
17. Gutiérrez, C.E. Refraction problems in geometric optics. In Fully Nonlinear PDEs in Real and Complex Geometry and Optics: Cetraro, Italy 2012; Gutiérrez, C.E., Lanconelli, E., Eds.; Springer: Berlin/Heidelberg, Germany, 2013; pp. 95–150.
18. Galiffi, E.; Tirole, R.; Yin, S.; Li, H.; Vezzoli, S.; Huidobro, P.A.; Silveirinha, M.G.; Sapienza, R.; Alù, A.; Pendry, J.B. Photonics of time-varying media. Adv. Photonics 2022, 4, 014002.
19. Mendonça, J.T. Theory of Photon Acceleration; CRC Press: Boca Raton, FL, USA, 2000.
20. Xiao, Y.; Maywar, D.N.; Agrawal, G.P. Reflection and transmission of electromagnetic waves at a temporal boundary. Opt. Lett. 2014, 39, 574–577.
21. Bruno, V.; Vezzoli, S.; DeVault, C.; Carnemolla, E.; Ferrera, M.; Boltasseva, A.; Shalaev, V.M.; Faccio, D.; Clerici, M. Broad frequency shift of parametric processes in epsilon-near-zero time-varying media. Appl. Sci. 2020, 10, 1318.
22. Vezzoli, S.; Bruno, V.; DeVault, C.; Roger, T.; Shalaev, V.M.; Boltasseva, A.; Ferrera, M.; Clerici, M.; Dubietis, A.; Faccio, D. Optical time reversal from time-dependent epsilon-near-zero media. Phys. Rev. Lett. 2018, 120, 043902.
23. Pendry, J.B. Time reversal and negative refraction. Science 2008, 322, 71–73.
Figure 1. A temporal interface where the time-varying refractive index is $n^-(t)$ on one side of the interface and $n^+(t)$ on the other side.