Current Trends in Random Walks on Random Lattices

Abstract: In a classical random walk model, a walker moves through a deterministic d-dimensional integer lattice, one step at a time, without drifting in any direction. In a more advanced setting, a walker randomly moves over a randomly configured (non-equidistant) lattice, jumping a random number of steps. In some further variants, there is only limited access to the walker's moves. That is, the walker's movements are not available in real time; instead, the observations are limited to some random epochs, resulting in delayed information about the real-time position of the walker, its escape time, and its location outside a bounded subset of the real space. In this case we target the virtual first passage (or escape) time. Thus, unlike standard random walk problems, rather than crossing the boundary, we deal with the walker's escape location arbitrarily distant from the boundary. In this paper, we give a short historical background on random walks, discuss various directions in the development of random walk theory, and survey most of our results obtained in the last 25-30 years, including the very recent ones dated 2020-21. Among different applications of such random walks, we discuss stock markets, stochastic networks, games, and queueing.

In a classical random walk model, a particle or walker moves through a deterministic d-dimensional integer lattice. The walk is random, without drifting in any direction. The particle's steps are also associated with time units, as in the setting that leads to Brownian motion. Of interest is the first passage time, that is, the time when the particle escapes from a bounded set.
There have been many variants of the random walk in the literature. The one we introduce is with a walker randomly moving over a lattice with a random real-valued (non-equidistant) configuration formed at random times. The first passage time of such a walker and its location upon escape are our focus.
In a further embellishment, we allow the particle to move through a random lattice not one step at a time, as in the general setting, but to jump a random number of steps. In some other variants, we also allow only limited access to the moves of the walker. That is, the walker's movements are not available in real time. Instead, the observations are limited to some random epochs τ_1, τ_2, τ_3, .... Consequently, we deal with delayed information on the real-time position A(t) of the particle and upon its escape at τ_ν (ν being the escape index), that is, the virtual first passage time and the virtual escape location A(τ_ν), which may end up arbitrarily distant from the boundary of the underlying set. Obviously, the virtual first passage time τ_ν is delayed compared to the real first passage time.
On the other hand, in most of our settings, we restrict the walker's moves to positive directions only. Additionally, the set the particle is to escape from is a d-dimensional rectangle, rather than an arbitrary manifold.
We note that our work on random walk models pertains to two distinct problems. In the first one, we work on the joint distribution of the first passage time t_ν and the first escape location A(t_ν), where t_1, t_2, ... are the real-time epochs of the walker's jumps, with no relationship between the time t_ν and a deterministic time interval [0, t]; this problem is thus referred to as time insensitive. In the second problem, the first passage time t_ν (or virtual first passage time τ_ν) can be placed inside or outside the interval [0, t] and considered along with the real-time position A(t) of the particle at time t. The second problem is more complex, and it is called time sensitive.
We give a short historical background on random walk, discuss various directions in the development of random walk theory, and survey most of our results obtained in the last 25-30 years, including the very recent ones dated 2020-21. Among different applications of such random walks, we discuss stock markets, stochastic networks, games, and queueing.

Introduction
The term "random walk" was first introduced by Karl Pearson in 1905 [1]. It is generally understood as a recurrent process S_n = X_1 + ... + X_n made of a sequence of iid Z^d-valued r.v.'s X = {X_1, X_2, ...}. In its simple form, X_i ∈ [X] and X is uniformly distributed over the 2d unit coordinate vectors ±e_1, ..., ±e_d, such that if S_n = x ∈ Z^d, the particle or walker moves from state x to a neighboring state y, equally likely in any direction with probability 1/(2d). So here the walker moves randomly within the d-dimensional integer lattice. One of the key objectives is to find the probability distribution of the r.v.'s ν = inf{n : S_n ∈ A^c} and S_ν, where A is a bounded subset of Z^d; that is, the position of the walker when it escapes from the set A.
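The escape index ν and escape position S_ν are easy to sample. The following minimal Monte Carlo sketch (the dimension, box size, and seed are arbitrary illustrative choices) runs a simple symmetric walk on Z^d until it leaves a bounded box A:

```python
import random

def escape_from_box(d=2, M=5, seed=42):
    """Run a simple symmetric random walk on Z^d, started at the origin,
    until it leaves A = {-M, ..., M}^d; return the escape index nu and
    the escape position S_nu."""
    rng = random.Random(seed)
    pos = [0] * d
    nu = 0
    while all(abs(c) <= M for c in pos):      # still inside A
        axis = rng.randrange(d)               # each of the 2d unit moves
        pos[axis] += rng.choice((-1, 1))      # has probability 1/(2d)
        nu += 1
    return nu, pos

nu, s_nu = escape_from_box()
```

Since each step has unit length, the walker lands exactly one site outside the box, i.e., max_i |S_ν^(i)| = M + 1; for walks with larger jumps the overshoot can be arbitrary, which is the phenomenon this survey emphasizes.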
If we allow X to be R^d-valued and arbitrarily distributed, with the entries (X^1, ..., X^d) of X not independent as assumed above, then such a random walk is largely embellished. Here we can think of a randomly generated grid replacing Z^d, so that if S_n = x ∈ R^d, the walker moves from state x to state y according to a random increment X_n ∈ [X] (the equivalence class of all r.v.'s a.s. equal to X), and the grid along which the particle moves is randomly generated each time the walker lands at some x. Furthermore, we take into account the time the walker takes to move from x to y in one step, thereby forming a point process T = {t_1, t_2, ...} and the associated marked point process S = ∑_{n=0}^∞ X_n ε_{t_n} (ε_a is the unit measure at point a). The escape parameters of such a random walk now require more specification. If A is a bounded subset of R^d, then ν = inf{n : S_n ∈ A^c} is the exit index, t_ν is the first passage time or escape time, and S_ν is the position of the walker on its escape (the escape position).
It is a challenge to find the distributions (joint or marginal) of the above random entities in a closed form. The assumption that X is R^d_+-valued is helpful and still practical enough. The random walk terminology is appropriate to describe the physical motion of a walker regardless of where the walker moves, although other descriptive terms, like a marked point process, marked random measure, recurrent process, or renewal process, are also common in the literature. Furthermore, the additive components or jumps X_i of S need not be iid and can form Markov or semi-Markov processes, although these classes of S are outside the scope and interest of this paper, which targets only cases that lead to analytically closed forms.
Besides the terms walker or particle applied to a moving object, some authors also speak of the random walk itself as the object that walks. We note that X can also be integer-valued, while we retain all other assumptions on S. Then, if the walker at step n lands at some S_n = X_0 + ... + X_n at time t_n, then at time t_{n+1} the walker moves to state S_{n+1} = S_n + X_{n+1}, where X_{n+1} is a Z^d_+-valued r.v. that generates a path on Z^d_+ running in all non-negative directions from S_n.
There is a way to at least partially circumvent the obstacle of mixed jumps (or increments), rather than just non-negative ones, which we intend to discuss a little later in this paper, but for now we stay with the above assumptions. With non-negative increments, the representation S = ∑_{n=0}^∞ X_n ε_{t_n} of an underlying random walk is an atomic random measure, which is often a convenient alternative interpretation.

Related Literature
There are myriads of papers on random walks and applications that would make a very long list. As the result of such wealth and of efforts by many from different branches of mathematics and other disciplines, there is no unity about similar notions and notations. First of all, there is an ambiguity about what a random walk is as opposed to what is not. This is because of various embellishments to the original notion of a random walk as a recurrent process, that is, a sequence {S_n} of partial sums of a sequence {X_0, X_1, ...} of iid r.v.'s. Note that if the X_i's are non-negative, then {S_n} is referred to as a renewal process. If the X_i's are real-valued, {S_n} is recurrent (cf. Takács [2]). Now and then we read about constructions like semi-Markov random walks, that is, {S_n} is a semi-Markov process (cf. Unver et al. [3]). Another embellishment, which we believe is fully legitimate, is when the walker's jumps occur at random points {t_n}, which makes the analysis of escape parameters more challenging. In addition, the jump lengths X_n can be position dependent, that is, X_n depends on t_n − t_{n−1} only, n = 1, 2, ..., i.e., on the time since the previous jump. Now, because one is very often concerned with the escape of a random walk from a bounded set, the underlying analysis of the escape is referred to as fluctuations of sums of random variables. However, the "sums" are not always a traditional random walk with independent jumps (cf. Andersen [4,5]). Besides, fluctuations are also mentioned in reference to processes with continuous paths, like Brownian motion, where the escape from a set A means crossing its boundary with a location next to A. The latter differs from leaving A and landing at a point distant from A, as takes place under a non-simple random walk. Hence, in deciding which work from the literature to include, we are guided by common sense and space constraints.
The first mention of a random walk was made by Pearson [1] in 1905, characterizing the distribution of the distance traveled in an N-step random walk in the plane. The walk starts at (0, 0) and involves N steps of unit length, each taken in an equally likely random direction. In response to Pearson, the problem was addressed by Rayleigh [6], also in 1905, who claimed that the random walk problem proposed by Pearson was the same as that of the composition of N isoperiodic vibrations of unit amplitude and of phases distributed at random, studied in his earlier papers.
There are two seminal articles by Andersen [4,5] that belong to the literature on fluctuations, but they deal with sums of r.v.'s that are not independent; yet they are worth including in the reference list. Takács [2,7,8] has been a key and prolific contributor to the fluctuations of sums of random variables, some of which are traditional random walks and some embellished variants; cf. Dshalalow and Syski [9] about Takács' work. Some random walk problems pertain to exit from and return to a fixed set. Van den Berg [10] obtained estimates for the "average probability" that a simple random walk in Z^d starting at a point x ∈ V exits V and then returns to x, the average being taken over all points x ∈ V. Paper [11] studied the asymptotic behavior of the probability P{ν = n}, as n → ∞, where ν = inf{k > 0 : S_k ≥ y} for some y ≥ 0 and S_k = ∑_{j=1}^k X_j is a recurrent process. Becker and König [12] studied a random walk in Z^d targeting local times, defined as l(n, x) = ∑_{k=0}^n 1_{S_k = x}, n ∈ N, giving the number of visits to x ∈ Z^d by step n, and the large-n asymptotics of their functionals. Csáki, Földes, and Révész [13] studied the maximal local time l(n) = max{l(n, x) : x ∈ Z^d} in a simple symmetric walk in Z^d, i.e., with P{X_1 = e_i} = P{X_1 = −e_i} = (2d)^{−1}. Gluck [14] studied a random walk on a finite group G based on a generating set that is a union of conjugacy classes. Let the non-negative integer-valued random variable T denote the first time the walk arrives at the identity element 1 of G, if the starting point of the walk is uniformly distributed on G. Under suitable hypotheses, the author shows that the distribution function F of T is almost exponential. Other work on random walks on groups is by Fayolle, Iasnogorodski, and Malyshev [15], Gluck [14], Hildebrand [16], and Takács [7].
A continuous time random walk (CTRW) process was introduced in a 1965 paper by the physicists Montroll and Weiss [17]. A CTRW can be defined as follows. Let S = ∑_{n=0}^∞ X_n ε_{t_n} (ε_a is the unit measure at point a) be a marked signed random measure. Suppose X_0, X_1, X_2, ... are independent and, for n = 1, 2, ..., identically distributed r.v.'s valued in R^d, while T = ∑_{n=0}^∞ ε_{t_n} is the associated support counting measure. Thus, N_t = T[0, t] is the associated counting process. Then, the CTRW is S(t) = S[0, t]. The inter-renewal times t_1 − t_0, t_2 − t_1, ... are referred to as waiting times. If S is with position independent marking, then S(t) is called decoupled. A coupled CTRW is an S(t) such that S is with position dependent marking, that is, X_n depends on t_n − t_{n−1}. The marks X_n are called jumps, and in physics they represent instantaneous jumps of a diffusing walker. (A so-called CTRW characteristic function pertains to fractional diffusion equations.) CTRWs find applications in physics, insurance, and finance. The literature on CTRWs uses its own terminology, distinct from that on random walks, and it takes some scrutiny to recognize one and the same notions. See the interesting surveys in Kutner and Masoliver [18] and Scalas [19], and the paper by Balakrishnan and Khantha [20] about the first passage time in CTRWs.
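A coupled CTRW is easy to simulate. The sketch below uses exponential waiting times and a Gaussian jump whose scale equals the waiting time; both are purely illustrative assumptions, chosen only so that X_n depends on t_n − t_{n−1} as the coupled definition requires. It builds one path of S and evaluates S(t) = S[0, t] at a deterministic time:

```python
import bisect
import random

def simulate_ctrw(t_max=10.0, lam=1.0, seed=1):
    """Sample one path of a coupled CTRW on [0, t_max]: exponential
    waiting times; each jump X_n is Gaussian with standard deviation
    equal to the waiting time (a hypothetical coupling)."""
    rng = random.Random(seed)
    times, values = [0.0], [0.0]
    t = 0.0
    while True:
        w = rng.expovariate(lam)        # waiting time t_n - t_{n-1}
        t += w
        if t > t_max:
            break
        times.append(t)
        values.append(values[-1] + rng.gauss(0.0, w))  # coupled jump
    return times, values

def ctrw_at(times, values, t):
    """S(t) = S[0, t]: the value of the CTRW at deterministic time t."""
    i = bisect.bisect_right(times, t) - 1
    return values[i]

times, values = simulate_ctrw()
```

Replacing the coupled jump with one drawn independently of `w` yields a decoupled CTRW.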
We only briefly mention random walks on graphs. In their basic form, finite Markov chains are random walks on weighted directed graphs with possible loops. An electrical network is a multigraph G = (V, E) with a weight function r : E → R_+ representing the resistance of the edges. Other notable applications of random walks on graphs are random walks on social graphs. See related work in Blanchard and Volchenkov [21], Brémaud [22], Fujie and Zhang [23], Sarkar and Moore [24], Shi [25], Takács [8], and Telcs [26].
Random walks in queueing form a very notable subset of the entire literature on random walks. They picked up their notions very early on and found very close connections to various processes in queueing systems, including queue length, waiting times, departures, and other processes. The interest in random walks in queueing surged in the 80's and 90's and has ever since led to independent developments. Back then, very popular queues were those with N-, D-, and T-policies that carried random walk problems, later joined by maintenance and vacation disciplines. Most of them required closed form expressions that led to novel analyses of random walks. For example, in queues under the N-policy, when the queue is exhausted, the server rests until the number of new customers joining the waiting room crosses a positive level N. The problem becomes less trivial if the input stream is bulk (that is, when it is a marked point process). If the server goes on maintenance (also referred to as vacations), he is absent from the system and cannot resume his service immediately once the queue crosses N, because he cannot interrupt any individual vacation segment; so he does it at the first opportune time. The problem of finding the first passage time (in this case, when the server resumes his service) and the queue level accumulated by then became a target of numerous works, including Abolnikov and Dshalalow [27-30], Abolnikov, Dshalalow, and Agarwal [31], Abolnikov, Dshalalow, and Dukhovny [32-34], Dshalalow [35-40], Dshalalow and Motir [41], Dshalalow and Russell [42], and Dshalalow and Yellen [43]. With the D-policy, in turn, the server resumes his service when the total service needed to process a certain amount of jobs exceeds a positive real level D; see Agarwal and Dshalalow [44]. In all these papers, explicit joint functionals of the first passage time and the position of the queue upon the passage were obtained.
The cited results in closed forms were possible through the introduction and implementation of the so-called D-operator (see Section 2), specifically designed to deal with escape parameters of random walks.
A further embellishment of the above-named queues is those with hysteretic control. This is when the server suspends his service upon one of the service completions, once the queue level drops below some r, and resumes his service when the queue accumulates to N or more customers; here r ≤ N. During his primary inactivity, the server may rest or go on multiple vacations, or combine a single vacation followed by a rest if the queue has not reached N upon his return. A closed form expression for the joint distribution of the first passage time and the queue level at the passage was obtained in Abolnikov, Dshalalow, and Treerattrakoon [45], Dshalalow [46], Dshalalow and Dikong [47,48], and Dshalalow, Kim, and Tadj [49] in different variants of the hysteretic control policy. In some cases, batch (group) service took place, which added to the existing complexity of the underlying random walk problems.
Bacot and Dshalalow [50] in 2001 considered a further embellishment of the hysteretic control random walks by including a so-called gated service. This was a bulk-input, batch-service queue with a multiple vacation policy and hysteretic control. The gated service refers to the policy where the service consists of two phases. The server takes a batch of customers during the first phase, and if all available customers joined a batch of a size smaller than the server's capacity, then newly arriving customers can join the first batch (not in excess of the server's capacity); during the second phase, however, such an option is no longer honored even if the server's capacity has not been filled. All related random walk problems pertaining to the joint distribution of all key escape parameters in the context of the queueing process were solved. In this particular problem, the authors chained the results obtained during vacations and the first phase.
The utility and versatility of the D-operator enabled us to enlarge classes of random walk problems that could be identified in stochastic games, stochastic networks, queueing, and economics. In queueing, we take advantage of multidimensional versions of the D-operator to analyze queues with parallel queueing stations or servicing facilities where one server can perform simultaneous and yet asynchronous work on more than one task at the same time, as per studies in Abolnikov, Dshalalow, and Agarwal [31], Dshalalow and Merie [51], and Dshalalow, Merie, and White [52].
Further utility of the D-operator in random walks is found in its chaining property between different modes, which may include multistage vacations, followed by rests, and several service phases, as in Abolnikov, Dshalalow, and Treerattrakoon [45], Dshalalow [46], Dshalalow, Kim, and Tadj [49], and Dshalalow and Merie [51] in the context of queueing, and Dshalalow and Huang [53-55] in the context of stochastic games.
Multidimensional Lévy walks with competing components have been found to model games of several players under hostile action. The game is over when one of the players is ruined, that is, when one or more competing components cross their respective thresholds. Here again we see the escape of the walk from a set. The model is definitely not a stochastic game in the traditional sense, but it serves the purposes of a game-related setting and as such works very well to model wars, economic competition, and stock and stock option trading, to name a few. Dshalalow [56,57] and Dshalalow and Liew [58-60] studied applications of random walk fluctuations to finance, while Dshalalow and Huang [53-55], Dshalalow and Iwezulu [61], Dshalalow and Ke [62,63], and Dshalalow and Treerattrakoon [64] studied exclusively antagonistic games, in the latter case with three players, two of whom can team up against the third. Dshalalow and White [65,66] focused on random walk applications to stochastic networks. Additionally, Dshalalow and Iwezulu [61] considered applications to cancer research.
Most of the work mentioned above is about random walks in R^d_+. Random walks that move in all directions are analytically more challenging and, unfortunately, they do not end up with closed forms for their principal functionals. There has been a way to circumvent this obstacle by introducing so-called auxiliary active components. For example, if the underlying random walk's components are not monotone increasing but fluctuate, the true escape scenario is difficult to model without a cost to analyticity, but appending auxiliary active components can alleviate the predicament, because they can point to the moments when the nonmonotone components rise, dive, or spike, once or more times (see Section 4). The traditional notion of escape is thereby modified, but it still gives us an ample amount of explicit information about, say, the behavior of financial instruments. There are various alternatives that can predict the future of a stock portfolio when it comes to options trading strategies, such as buying underlying contracts long or short. (See a discussion in Section 4 and in Dshalalow and Liew [59].) One simple example of an antagonistic game in finance concerns the best time to exercise a stock option just before the stock plunges or prior to its maturity, whichever comes first.
Note that since the escape of random walks offers predictive tools for the outcomes of games such as those occurring in finance, it stands to reason to refine the information that leads to ruin. One such effort was undertaken in Dshalalow and Ke [62,63] by introducing a smaller subset of A that the walk escapes from before it escapes from A, which gives us an extra layer of security. Another way to refine the information is to allow access to the underlying walk at any epoch of time, thereby making it time dependent. Until now, we meant only walks whose escape parameters were not related to any deterministic times. Such a refinement allows us to have the first passage time and the position of the walker upon its escape team up with the continuous time parameter version S(t) of the walk, and still attempt to yield tame analytical expressions. We call such an approach time sensitive analysis. It was introduced in Dshalalow [37], further refined in [67], and then picked up in a series of papers; we mention just some: Agarwal, Dshalalow, and O'Regan [68], Al-Matar and Dshalalow [69], Dshalalow [70,71], Dshalalow and Bacot [72], Dshalalow and Nandyose [73,74], Dshalalow, Nandyose, and White [75], and Dshalalow and White [76]. See further discussions in Sections 5 and 6.
Among other random walk applications, Antal and Redner [77] studied first passage time properties of a discrete time random walk in which the length of each step is uniformly distributed on the interval [−a, a]. The walker starts at an arbitrary point x ∈ [0, 1], with both end points absorbing. The idea comes from the problem of DNA sequence recognition by a mobile protein.
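The Antal-Redner setting is simple to reproduce numerically. The sketch below (parameter values and the number of paths are arbitrary choices) estimates the probability that the walker is absorbed at the right end point:

```python
import random

def exit_right_probability(x0=0.5, a=0.3, n_paths=2000, seed=7):
    """Monte Carlo estimate of the probability that a walk started at
    x0 in [0, 1], with steps uniform on [-a, a], is absorbed at the
    right end (position >= 1) rather than the left (position <= 0)."""
    rng = random.Random(seed)
    right = 0
    for _ in range(n_paths):
        x = x0
        while 0.0 < x < 1.0:          # both end points absorbing
            x += rng.uniform(-a, a)   # uniform step on [-a, a]
        if x >= 1.0:
            right += 1
    return right / n_paths

p = exit_right_probability()
```

By symmetry, a walker started at x_0 = 1/2 is absorbed at either end with probability 1/2, which the estimate should reflect up to Monte Carlo error.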
Hughes, in his book [78], discusses various modifications of random walks, such as random walks on triangular lattices and on fractals as well as "self-avoiding walks" in which the walker does not visit the same point more than once. Among various applications, self-avoiding walks can model long-chain polymers in dilute solutions.

The Operational Calculus of One-Dimensional Random Walks
All processes are defined on a probability space (Ω, F, P). Our work on random walks with non-negative integer jumps dates back to the 1990s [35,37-40,105], about S = ∑_{n=0}^∞ X_n ε_{t_n} with position dependent marking, that is, when X_n depends on t_n − t_{n−1}, but not on any other components of the support counting measure T = ∑_{n=0}^∞ ε_{t_n}. More specifically, the sequence {(X_0, t_0), (X_1, t_1), ...} is a delayed renewal process.
We further assume that S is a Lévy process, which, in particular, warrants against clustering of the t_n's. With A = [0, M), assuming M ∈ N, we are interested in the time and position of S upon its escape from A. Thus we have: ν = inf{m : S_m = X_0 + ... + X_m ∈ A^c}, the exit or escape index; t_ν, the exit time or first passage time; and S_ν, the position of the walker at t_ν (the first excess value over M).
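Before turning to transforms, the escape data themselves are straightforward to sample. The sketch below, with geometric jumps and exponential inter-renewal times as illustrative assumptions (not the general model), produces one realization of the exit index ν, the pre-exit pair (t_{ν−1}, S_{ν−1}), and the exit pair (t_ν, S_ν):

```python
import random

def escape_parameters(M=10, p_jump=0.4, seed=5):
    """Simulate S = sum X_n eps_{t_n} with iid geometric jumps and
    exponential inter-renewal times until the walk leaves A = [0, M).
    Returns the escape index nu, the pre-exit pair (t_{nu-1}, S_{nu-1})
    and the exit pair (t_nu, S_nu)."""
    rng = random.Random(seed)
    s_prev, t_prev = 0, 0.0
    s, t, nu = 0, 0.0, 0
    while s < M:                      # still inside A = [0, M)
        s_prev, t_prev = s, t         # last position/time seen in A
        nu += 1
        t += rng.expovariate(1.0)     # inter-renewal time
        jump = 0                      # geometric jump on {0, 1, 2, ...}
        while rng.random() < p_jump:
            jump += 1
        s += jump
    return nu, (t_prev, s_prev), (t, s)

nu, (t_pre, s_pre), (t_nu, s_nu) = escape_parameters()
```

Note that S_ν may exceed M by more than one unit: the overshoot S_ν − M is exactly the "excess value" that the transform methods below are designed to capture in closed form.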

The transform of interest is Φ_ν = E[x^ν u^{S_{ν−1}} v^{S_ν} e^{−ϑ t_{ν−1} − θ t_ν}], the joint transform of the distribution of ν, S_ν, t_ν, and of two more useful pre-escape quantities, S_{ν−1} and t_{ν−1} (the pre-exit time), representing the position and the time at which the walker was last seen in set A before its escape. Note that, because the jumps X_n are valued in N, the walker at time t_ν is at S_ν, which can be positioned arbitrarily far away from A at its escape time.
We claim that Φ_ν can be expressed in a closed form. First we define the D-operator, for x ∈ B(0, 1) and Re y ≥ 0, as D_x^p[ϕ(x, y)] = lim_{x→0} (1/p!) (∂^p/∂x^p) [ϕ(x, y)/(1 − x)], p = 0, 1, ..., labeled (2), where ϕ is a function analytic at zero in the first variable. Suppose the joint transforms γ_0(z, θ) = E[z^{X_0} e^{−θ t_0}] and γ(z, θ) = E[z^{X_1} e^{−θ(t_1 − t_0)}], with z ∈ B(0, 1) and Re θ ≥ 0, are known. Then, the following theorem holds.
Theorem 1. The following formula holds.
where x ∈ B(0, 1) and the rest of domains are specified in (1).

Proof. (i) Introduce the transformation applied to function
It can be readily shown that the inverse operator (2) can restore f , if we apply it for every k:

Further, introduce the auxiliary family of exit indices
for p = 0, 1, ..., along with the family of functionals Φ_ν(p), p = 0, 1, ..., noticing that ν = ν(M − 1). So, if we apply D_p to Φ_ν(p), we can then restore Φ_ν. (ii) Before we apply D_p to Φ_ν(p), we notice that {A_j} is a monotone, nondecreasing sequence of partial sums. The rest is obvious.
Then, applying D_p to Φ_ν(p) and using Fubini's theorem, we arrive at the result. It can be shown that γ(xuv, ϑ + θ) < 1 if x < 1, due to the above assumption (proven below in part (iv)), which warrants the convergence of the geometric series. The rest is obvious.
The properties of D listed below support our claim that the expression in (4) is tractable.
(i) D^k is a linear functional on the space of all functions analytic at zero.
Proof. Use the Leibniz formula with F(x) = x^j and G(x) = g(x)/(1 − x). Hence, when applying D^k, we obtain formulas (7) and (8), which enable one to calculate partial sums of a power series.
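The partial-sum property is easy to check with symbolic differentiation. The sketch below assumes the definition D^k f = lim_{x→0} (1/k!) (∂^k/∂x^k)[f(x)/(1−x)] commonly used in this line of work (treat it as an assumption here) and verifies that D^k extracts partial sums of power-series coefficients:

```python
import sympy as sp

x = sp.symbols('x')

def D(f, k):
    """D-operator on a function f analytic at zero (assumed definition):
    D^k f = lim_{x->0} (1/k!) d^k/dx^k [ f(x) / (1 - x) ]."""
    return sp.limit(sp.diff(f / (1 - x), x, k) / sp.factorial(k), x, 0)

# D^k x^j = 1 if j <= k and 0 otherwise, so D picks out partial sums:
assert D(x**2, 3) == 1 and D(x**5, 3) == 0

# Applied to a power series, D^k returns the sum of the first k+1 coefficients.
f = 1 + 2*x + 3*x**2 + 4*x**3
assert D(f, 2) == 1 + 2 + 3
```

The second check is the partial-sum property in action: dividing by (1 − x) accumulates the coefficients, and the k-th Taylor coefficient of the quotient is their k-th partial sum.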

Remark 1. Formulas
(vii) For any real number a and any positive integer n, except for a = n = 1, it holds true that (viii) For any two real numbers a, b and two positive integers n and r (except for a = n = 1 and b = r = 1), it holds that

Proof. By the proof of property (iii), then, interchanging the operator and the series and using properties (iii) and (vii), this simplifies to

(ix) If X is an integer-valued non-negative r.v. with h(z) = E z^X and k is a positive integer, then (x) If X is an integer-valued non-negative r.v. with h(z) = E z^X and k is a positive integer, then

Formula (4) of Theorem 1 is largely simplified when only a marginal transform is needed. In some applications, t_0 = 0 and X_0 = i (≥ 0), so that γ_0(xv, θ) = (xv)^i. Then, using property (iii) of D, (4) reduces to

Example 1. To see the utility of the above expressions, consider the classical queueing system M^X/G^N/MV/1/∞, that is, with marked Poisson input, N-policy general service, and multiple vacations. The server goes on maintenance (known as vacations) that starts when the queue is exhausted and consists of a series of random segments, none of which can be interrupted. The primary service resumes when the queue has accumulated at least N units by the end of one of the maintenance segments. Besides the usual routine with the Pollaczek-Khintchine formula for the pgf of the equilibrium distribution of the queue length upon departures, there is a need for the contents of the queue at the end of maintenance, when the server returns to the system and finds a line of units most likely in excess of N. In other words, one needs the distribution of the number of units entering the queue during the maintenance period, whose length is implicitly controlled by N.
Here we are under the following specifications: t_0 = X_0 = 0, that is, the server starts its maintenance immediately after the queue drops to zero, implying that γ_0(z, θ) = 1. Then, where γ(θ) = E e^{−θ t_1} is the LST of a maintenance segment and a(z) is the pgf of the batch size of the input (which is marked Poisson with rate λ of its support counting measure). It is thus obvious that the queueing process during the maintenance sequence is a random walk S observed upon the successive ends t_1, t_2, ... of the maintenance segments. (We are not concerned with the status of the process upon each arrival.) So S = ∑_{n=0}^∞ X_n ε_{t_n} is with position dependent marking, characterized by the functional γ(z, θ) in (15). Combining the special case of (14) (with i = 0) and (15) gives the joint transform of the maintenance length and the number of units accumulated during maintenance upon the server's return.
To further illustrate the use of the D-operator, consider the special case of the system under the assumptions that the input is ordinary, i.e., a(z) = z, and the maintenance segments are a.s. of constant length c. Hence, γ(θ) = e^{−cθ}. The latter condition is met when |x| < 1, which is sufficient when using the D-operator in (16). It is readily seen that 1/(1 − e^{−cλ(1−zx)}) is analytic when x Re z < 1, which we can easily satisfy with x small, without forcing z ∈ B(0, 1). Finally, after using formula (7), the marginal pgf of S_ν (the number of units in the system accumulated upon the server's return) follows.
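This special case is also easy to validate by simulation. The following Monte Carlo sketch, under the stated assumptions (ordinary Poisson input of rate λ, constant maintenance segments of length c, N-policy), samples the escape index, the first passage time, and S_ν:

```python
import math
import random

def maintenance_escape(N=5, lam=2.0, c=1.0, seed=3):
    """One realization of the random walk observed at the ends of
    maintenance segments: constant segment length c, ordinary Poisson
    input of rate lam, N-policy.  Returns (nu, t_nu, S_nu)."""
    rng = random.Random(seed)

    def poisson(mu):
        # Knuth's method: Poisson(mu) via products of uniforms
        n, p, limit = 0, 1.0, math.exp(-mu)
        while True:
            p *= rng.random()
            if p <= limit:
                return n
            n += 1

    s, nu = 0, 0
    while s < N:                  # server returns at the first segment
        nu += 1                   # end with at least N units present
        s += poisson(lam * c)     # arrivals during one segment
    return nu, nu * c, s          # escape index, first passage time, S_nu

nu, t_nu, s_nu = maintenance_escape()
```

Averaging `s_nu` over many runs gives an empirical check of the pgf of S_ν obtained from (7); the first passage time here is deterministic given ν, namely t_ν = cν.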

Random Walks on Infinite Graphs and Cybersecurity: A Bivariate Model
Consider an infinite weighted graph in which weights are associated with nodes rather than edges. (There are no infinite graphs in the real world, rather large graphs representing large-scale networks; see Dshalalow and White [65,66].) Assume that during a series t_0, t_1, ... of cyberattacks, successive batches of nodes are incapacitated at random time increments. Associated with each node is a random weight representing its value to the health of the network. We assume the network enters a critical state, wherein it may become dysfunctional, if the number of nodes incapacitated by hostile attacks exceeds a fixed integer threshold M or the magnitude of the weights associated with the compromised nodes exceeds a fixed real threshold V. We proceed with more formalism of the model. Let (Ω, F(Ω), P) be a probability space and let η = ∑_{k=1}^∞ (n_k, w_k) ε_{t_k}, where ε_a is a Dirac point measure, be a marked Poisson random measure on this probability space describing the evolution of the damage done to the network, where n_k nodes are destroyed at time t_k, k = 1, 2, ..., w_k = ∑_{j=1}^{n_k} w_jk is the non-negative real weight associated with the n_k nodes, and the underlying support counting measure ∑_{k=1}^∞ ε_{t_k} is Poisson of rate λ, directed by λ|·|, where |·| is the Borel-Lebesgue measure on B(R_+).
We assume that the n_k's are iid (and independent of the w_jk's) with common marginal pgf g(z), and the w_jk's are iid with common LST l(u), for j, k ∈ N.
Obviously, η is a bivariate Poisson random walk on a two-dimensional random grid generated at the times t_k, in such a way that if the underlying walker is located at the point (∑_{k=0}^m n_k, ∑_{k=0}^m w_k) at time t_m, it moves to (∑_{k=0}^{m+1} n_k, ∑_{k=0}^{m+1} w_k) by time t_{m+1}, driven by the jump (n_{m+1}, w_{m+1}) that goes to the right or upward.
We have the following representation for η as the transform of its dependent components N and W.
with Re(u) ≥ 0, where T is a Borel subset of R_+. Now, suppose the random walk η is observed by a delayed renewal process τ_0 < τ_1 < ..., such that the increments ∆_n = τ_n − τ_{n−1}, n ≥ 1, are iid and independent of ∆_0 = τ_0; the LST of ∆_0 = τ_0 and the common LST of the ∆_n for n ∈ N, each with Re(θ) ≥ 0, are assumed known. Then, the functionals describing the total number of lost nodes and their associated weights observed within the time intervals [0, τ_0] and (τ_0, τ_1], respectively, are assumed known or readily obtainable.
is the bivariate random walk with mutually dependent marks. The random walk X ⊗ Y ⊗ T describes the path of the walker, which moves on the associated random grid updated upon the walker's moves at T .
where M ∈ N, V ∈ R + , we are interested in the escape parameters upon the walker's exit from set A. Namely, these are the exit indices. We would say that component X of random walk X ⊗ Y ⊗ T is terminated at time τ µ , and component Y is terminated at time τ ν , if X and Y acted alone; however, we seek the time at which the first of them terminates. If the original marked Poisson walk η is observed by T , then the embedded process will exhibit (mutually dependent) increments X n and Y n as the marks in the process X ⊗ Y ⊗ T . The motion of the walker represented by the walk X ⊗ Y ⊗ T is observed upon T and gives us the escape time from set A, which takes place at τ ρ = τ µ ∧ τ ν , where ρ = µ ∧ ν, the first observed passage time. This is delayed information regarding the actual real-time crossing (which occurred earlier).
The target functional is where α 0 , α ∈ B(0, 1), Re(β 0 ) ≥ 0, Re(β) ≥ 0, Re(h 0 ) ≥ 0, and Re(h) ≥ 0. This functional includes all relevant virtual escape and pre-escape parameters including the first passage time, pre-first passage time and locations of the walker upon its virtual escape from set A and the location in set A as seen prior to the escape.
The primary tool we use is the composition of the familiar D-operator of (2) and the inverse LC −1 of the Laplace-Carson transform defined as with Re(w) > 0, with the inverse where L −1 w is the inverse of the Laplace transform. Then, the composition reads The key result lies in the following theorem [66] (see the results combined in Theorem 2.1 through Corollary 2.6 in that paper).
Note that formula (33) resembles that of (4) in Theorem 1, and for a good reason. (We explain the similarity in forthcoming results.) We skip the discussion of the analytical tractability of formula (33), which is by all means verifiable, and instead move to a noteworthy embellishment of Theorem 3.1 in Section 3 of the paper [66]. Namely, to refine information on the nature of the random walk's escape from set A, or equivalently, the severe loss of nodes in the network under attack, we would like to add one more control level, referred to as an auxiliary threshold. The latter serves to offset the inevitable crudeness caused by the delay due to restricted observations of the walk around the first passage time.
Let M 1 < M. Introduce the auxiliary index. Our attention is now on the confined sub-σ-algebra F ∩ {µ 1 < (µ ∧ ν)} and the associated functional. A realization of the process X ⊗ Y ⊗ T of losses (defined in (26) above), shown in Figure 1, illustrates how it operates with respect to the introduced main and auxiliary thresholds. We can regard X ⊗ Y ⊗ T as a two-dimensional random walk on a random grid (rather than a traditional lattice). We have a rectangular region formed of rectangular sectors in white, green, and red. In real time, the walker attempts to escape the white-green area at the first opportune time, when the cumulative loss of nodes exceeds M or the cumulative weight loss exceeds V, whichever comes first. It leaves a polygonal path in blue and a cruder, observed, path in green. The walker enters the green area when the lower threshold M 1 is crossed while neither M nor V has been. In reality, the green area can be empty with positive probability.
In Figure 1, the underlying real-time process (the blue dots) represents the real-time incoming damage, which is observed only upon the τ k 's (depicted by the green dots), where the observed M 1 -crossing occurs before the first observed passage time (FOPT) of M or V (i.e., there is an observation in the green area), at which the components of the process may or may not coincide with their values at the real-time FPT (first passage time). The following assertion on the functional Φ µ 1 <(µ∧ν) about the escape parameters of random walk X ⊗ Y ⊗ T defined in (26) is an embellishment of the random walk model of Theorem 2, verbalized in the context of cyberattacks on a network; it can be found in [66] (Theorem 3.5).
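The probability of the event {µ 1 < (µ ∧ ν)} that the confined sub-σ-algebra singles out can be estimated by direct simulation. A minimal sketch, under our own assumptions (geometric batch sizes, exponential node weights, and deterministic equally spaced observation epochs in place of a general delayed renewal process):

```python
import random

def observed_indices(lam, p, w_rate, delta, M1, M, V, rng):
    """One realization of the delayed-observation scheme: the attack
    process (Poisson attacks of rate lam, geometric(p) batches,
    exponential(w_rate) node weights -- hypothetical choices) is inspected
    only at observation epochs tau_k = k*delta.  Returns the first
    observation indices (mu1, mu, nu) at which cumulative nodes exceed M1,
    exceed M, and cumulative weight exceeds V, respectively."""
    t_next = rng.expovariate(lam)
    nodes, weight = 0, 0.0
    mu1 = mu = nu = None
    k = 0
    while mu1 is None or mu is None or nu is None:
        k += 1
        tau = k * delta
        while t_next <= tau:              # attacks arriving before tau_k
            n_k = 1
            while rng.random() > p:       # geometric batch size on {1,2,...}
                n_k += 1
            nodes += n_k
            weight += sum(rng.expovariate(w_rate) for _ in range(n_k))
            t_next += rng.expovariate(lam)
        if mu1 is None and nodes > M1:
            mu1 = k
        if mu is None and nodes > M:
            mu = k
        if nu is None and weight > V:
            nu = k
    return mu1, mu, nu

rng = random.Random(1)
trials = 2000
hits = 0
for _ in range(trials):
    mu1, mu, nu = observed_indices(2.0, 0.5, 1.0, 1.0, 5, 20, 20.0, rng)
    if mu1 < min(mu, nu):
        hits += 1
p_aux = hits / trials                     # estimate of P{mu1 < mu ^ nu}
```

With M 1 well below M and V, the auxiliary crossing is observed strictly before the main escape in most realizations, illustrating why the green area of Figure 1 is typically non-empty.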
Example 2. In this example we present fully explicit probabilistic results for a special case of the process with the M 1 -auxiliary threshold, under the following five assumptions made in the context of network security.

5. The initial functional γ 0 = 1 (i.e., zero initial damage). We note that deterministic observations present more of a challenge than many random observations. The assumption that the number of nodes destroyed in a single strike is bounded by R is analytically convenient but not too restrictive, because R can be made arbitrarily large. The gamma distribution for the weight of a single node is also quite general.
Let E u N µ 1 e −vW µ 1 e −θτ µ 1 1 {µ 1 <(µ∧ν)} be the Φ µ 1 <(µ∧ν) -marginal functional of the walk's position upon the passage of threshold M 1 (see the green area in the figure above), before the main escape from set A has occurred. Then the following holds, formulated in the context of the cyberattack.
Under Assumptions 1-5, the joint transform of the number of lost nodes, their cumulative weight, and the first passage time of the crossing of M 1 preceding the first crossing of M or V (i.e., on the sub-σ-algebra F (Ω) ∩ {µ 1 < (µ ∧ ν)}) satisfies the following formula [66]: where [R] = (1, ..., R), δ = (δ 1 , ..., δ R ) ∈ N R 0 (δ j ≤ R for each j), Li s (z) = ∑ ∞ k=1 z k k −s is the polylogarithm, which is numerically tractable on our domain {e −w : Re(w) > 0} with s ∈ Z ≤0 , P(x, y) = 1 − Γ(x,y) Γ(x) is the lower regularized gamma function, Γ(x, y) is the incomplete gamma function, and Γ(x) is the gamma function. Some current work by White [106] goes further and considers large networks with nodes connected by edges, where the edges, rather than the nodes, carry weights. Here, successive attacks take out a random number of nodes, which removes a random number of connected edges, each with a weight indicating its value to the network. The network enters a critical state if losses of any of these three types accumulate beyond some pre-specified thresholds. This is modeled through a Poisson random measure where n k nodes are incapacitated in the kth attack at time t k , the jth node lost in the kth attack has e jk incident edges that go down, and the ith edge from the jth node lost in the kth attack has weight w ijk , so we have the following. This process is studied as above under delayed observation, so we consider the following random measure, similar to (26).
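The polylogarithm Li s (z) appearing in the formula of Example 2 is indeed numerically tractable on the domain {e −w : Re(w) > 0}: for integer s ≤ 0 and |z| < 1 the defining series converges geometrically. A minimal sketch, checked against the closed forms Li 0 (z) = z/(1 − z) and Li −1 (z) = z/(1 − z) 2 :

```python
def polylog_nonpos(s, z, tol=1e-15):
    """Li_s(z) = sum_{k>=1} k^(-s) z^k for integer s <= 0 and |z| < 1.
    For nonpositive s the terms are k^|s| * z^k, which still decay
    geometrically on the domain {e^{-w} : Re(w) > 0} used in the text."""
    assert s <= 0 and abs(z) < 1
    total, k = 0.0, 1
    while True:
        term = (k ** (-s)) * z ** k
        total += term
        if abs(term) < tol * max(1.0, abs(total)):
            return total
        k += 1

# closed-form checks: Li_0(1/2) = 1, Li_{-1}(1/2) = 2
li0 = polylog_nonpos(0, 0.5)
li1 = polylog_nonpos(-1, 0.5)
```

In practice one would use an arbitrary-precision library for extreme parameters, but direct summation suffices on this domain.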
which is a three-dimensional random walk with mutually dependent marks (X n , Y n , Z n ) : Ω → N × N × R + , and we define the corresponding functionals. Here, there are three thresholds, M n , M e , M w , for losses of nodes, edges, and weights, respectively, and three corresponding exit indices ρ n , ρ e , ρ w with ρ = ρ n ∧ ρ e ∧ ρ w . We seek the functional that is much like functional (29) from Theorem 2, except that it includes extra terms for the edge losses upon the exit, E ρ−1 and E ρ , and also a term ξ ρ corresponding to a probability-generating function of the number of observations before the network enters its critical state. The functional has been derived through a procedure similar to Theorem 2 above, although it is somewhat more difficult, since this problem is three-dimensional and the weight lost per attack is more complex. To accomplish this, we can use the operator Theorem 4. The functional Φ satisfies the formula where It is easy to draw many parallels between the functionals in Theorem 2 [65] and Theorem 4 [106], which suggests some structure that can extend to higher-dimensional results, as has recently been shown and will be outlined in Section 5 below.

Time Insensitive Random Walk and Applications
In summary, the random walk analysis surveyed in Sections 2 and 3 is referred to as time insensitive analysis, and the associated random walk is called time insensitive. See Agarwal, Dshalalow, and O'Regan [107], Agarwal and Dshalalow [108], Dshalalow [56,57], and Dshalalow and Liew [58-60]. The term refers to the fact that the random walk process analyzed so far cannot be associated with continuous-time information, say S(t), giving us the status of the walk at any time t simultaneously with S ν , τ ν , and the other escape parameters within the interval [0, t]. Of course, we can extract some probabilistic information about the location of τ ν , such as P{τ ν ≤ t} or P{τ ν ∈ B}, and likewise P{S ν ∈ R} or, for that matter, the finite-dimensional distribution of S ν . However, this still falls short of the continuous-time information S(t) on the walk, which is very important in control theory. Clearly, with our efforts to embed an auxiliary control level M 1 prior to the main escape, we partially make up for the lack of S(t). In the forthcoming sections we address this issue and present time sensitive walks, which provide a wealth of additional information, but at a cost, because the insensitive analysis is tamer.
We return to this topic later, but for now we would like to present some applications of the insensitive walks beyond the cyberattack models in Section 3.
Consider the random walk, where the above is the position of the walker on the associated random grid. From this position, the walker jumps to position (A n+1 , P n+1 ). The objective is to investigate the time and position of the walker when its component A escapes. Often of main interest is the position of the walker in R l . Component A of S is called active, while P is called passive, implying that in our case only A is confined, while P is unrestricted. Note that, unlike under the previous assumptions of Sections 2 and 3, the walk in R l runs in all directions along a randomly generated grid. However, the escape is determined by the projection (N d × R l , N d , π) of the position of the walker relative to set A. Here, π is the projection map from N d × R l to N d .
In a nutshell, the escape coordinates in N d × R l are determined by the time and location of the active components A of the walker upon their first crossing of A. Obviously, the exit from A occurs when at least one of the active entries α nj crosses the respective R j of rectangle R.
The exit index is defined as follows, implying that t ρ is the first passage time and S ρ = (A ρ , P ρ ) is the global location of the walker in A × R l upon the active component A's escape from set A. Before we continue with more formalism, let us bring up a situation that led to the above model.

Example 3.
Suppose an agent decides whether or not to short an option on some stock S 1 that he does not own. If he shorts the option, he wonders whether he will have to acquire the stock, depending on the stock's chance of hitting the strike price. If the agent does not own the stock while it hits the strike price and the option holder exercises the contract, the agent will have to deliver the stock and thus buy it at the market price. This particular example does not demonstrate how to find the probability that the stock, described by a random walk process, will cross the threshold determined by the strike price; rather, it gives the functional of the first passage time when the stock drops for the first time or when its increment rises above some level M 1 . This can serve as initial information for the stock's further path. It can also be used whenever one decides whether or not to buy a volatile stock. Now, the prediction of the first drop or sharp increase can be refined by adjoining to stock S 1 another stock, say S 2 , that has proved to be well correlated with S 1 . Then, instead of scrutinizing stock S 1 alone, we can mix S 1 and S 2 to see whichever of the two is the first to drop or rise. The prediction can then be more accurate.
In a similar situation, suppose the owner of a stock portfolio is interested in extending or updating it with more stocks. Diversification is a common strategy. Another strategy is to mix a portfolio with longs and shorts. Suppose the agent wants to know whether to long or short just two stocks, depending on what direction they may take. For instance, if a sharp drop occurs, it could signal a price decline; if a significant rise takes place without any economic reason, it could indicate overpricing, which would present another risk to the stock owner. In this case, the owner would also like to predict the moment at which either the first drop or a significant rise happens (which we associate with the first passage or exit time). Then, shorting the two stocks if necessary, the agent would be able to minimize risk, optimize the portfolio's performance, and thus attain a higher return on his investment.
In the context of the above notation, π n1 and π n2 give the prices of the two stocks at reference times t n , n = 1, 2, . . . , so that P n1 and P n2 are the increments of the stock prices over the respective subintervals (t n−1 , t n ], where t 0 = 0. We emphasize that because the stock prices periodically change direction, the named increments are real-valued r.v.'s rather than positive ones, and thus the stock prices are not monotone, in contrast with the monotone components of Sections 2 and 3.
Introduce four auxiliary active components that follow the evolution of the two questionable stocks' prices. Since the stock price changes are not monotone sequences, we associate them with passive components, whereas the auxiliary components of (64)-(67) are monotone. While stock S i 's price appreciates, A ki = 0, k = 1, 2, . . . , i = 1, 2, resulting in (α n1 , α n2 ) = (0, 0), n = 1, 2, . . . . When, at some t n , at least one of the two stock prices drops for the first time, we will see (α n1 , α n2 ) = (0, 1), (1, 0), or (1, 1). The other two active components of A similarly watch the rising trends of the two stocks as per (66) and (67).
Thus, setting the rectangle R = {R 1 , . . . , R 4 }, we can predict the trend of the two stocks to appreciate or dive and suggest a longing or shorting strategy for acquiring the stock portfolio. For example, crossing R 1 or R 2 at t ρ points to the respective stock price changing direction from rising to dropping. On the other hand, crossing R 3 or R 4 points to a solid gain in prices, which suggests longing rather than shorting the stocks. A mixed trend speaks of unwanted volatility.
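The exit mechanism described above can be sketched in a few lines. The Normal price increments below are purely a hypothetical choice for illustration; the model itself only requires real-valued increments.

```python
import random

def first_exit(rng, mu=0.05, sigma=0.5, M=(1.2, 1.2), n_max=10_000):
    """Exit index rho for the two-stock model: at each reference time t_n
    the price increments (P_n1, P_n2) are drawn i.i.d. Normal(mu, sigma)
    (a hypothetical choice).  The walk exits when some increment is
    negative (a first drop) or exceeds its level M_i (a spike), i.e.,
    when one of the four active indicator components first turns 1."""
    for n in range(1, n_max + 1):
        p1 = rng.gauss(mu, sigma)
        p2 = rng.gauss(mu, sigma)
        alpha = (p1 < 0, p2 < 0, p1 >= M[0], p2 >= M[1])
        if any(alpha):                 # some alpha_nj crossed its R_j
            return n, alpha
    return n_max, (False,) * 4

rng = random.Random(42)
rho, alpha = first_exit(rng)
```

Which entries of alpha are set at the exit tells the agent whether the signal was a drop (R 1 , R 2 ) or a spike (R 3 , R 4 ), exactly as in the trend discussion above.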
Back to the general case, we introduce the functional, where j = 1, . . . , d, η, ξ ∈ R l , and ϑ, θ ∈ C + (i.e., Re(ϑ) ≥ 0 and Re(θ) ≥ 0). This functional includes the familiar escape and pre-escape parameters. One utility of the pre-escape parameters, at least in the context of stock prices or option trading, is to predict the highest stock price before a drop, a second drop, or a sharp drop.
Next, we have the multivariate version of the D-operator, defined for j = 1, . . . , d, where ϕ is analytic at 0 with respect to each of x 1 , . . . , x d .

Theorem 5.
The functional Φ ρ satisfies the following formula, where u•v is the Hadamard product of vectors u and v. We note that the functionals γ and γ 0 are assumed to be known or obtainable. We now revisit Example 3 on stock trading.

Example 4.
In the context of Example 3, consider a special case in which an agent observes two stocks with initial constant prices P 01 , P 02 , under the assumption that t 0 = 0. Because P 01 and P 02 are positive, A 01 = A 02 = 0. It makes sense to set P 01 < M 1 and P 02 < M 2 , thereby setting A 03 = A 04 = 0 as well. Thus we have u = 1, so γ 0 (1; η, 0) = e i(η 1 P 01 +η 2 P 02 ) = g 0 (η) (73) is a fixed constant. Next, the agent wants to predict the first instant t ρ at which at least one of four events takes place: the price of S 1 or S 2 drops for the first time after appreciating, or the increments P = (P 1 , P 2 ) spike above M 1 or M 2 . If any of these events occurs at time t ρ , it will turn P ρ1 or P ρ2 negative and thus A ρ1 or A ρ2 = 1, or P ρ1 ≥ M 1 or P ρ2 ≥ M 2 and thus A ρ3 or A ρ4 = 1. Therefore, A = {0, 1} 4 and R = (1, 1, 1, 1).
We need to figure out γ(u; η, ϑ) = E u A 11 1 · · · u A 14 d e iη·P 1 −ϑt 1 where M = (M 1 , M 2 ). This can be calculated dependent on the choice of distributions of P and t 1 .
(With position independent marking, that is, assuming P and t 1 independent, the computation can be straightforward.) Now, applying Theorem 5, we have the following. For example, the most explicit functional is the marginal distribution of the exit index ρ (the predicted observation number among 1, 2, . . . at which the above-mentioned events take place). Thus, the pgf of ρ reads, where a = γ(0, 0, 0) = P{0 ≤ P 1 < M} (77). In particular, the mean of ρ follows. Next, the marginal LST of t ρ−1 is Φ ρ (1, 1, 1, 0, 0, . . .), where (with position independent marking) the terms simplify. Thus, under the position independent marking assumption (that is, price variations are independent of the time increments), the marginal transform of the highest portfolio price before at least one of the two stocks drops or spikes follows, implying the mean time prior to one of these events.
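With position independent marking the increments are i.i.d., so each observation independently stays in [0, M) with probability a = γ(0, 0, 0) = P{0 ≤ P 1 < M}; consequently ρ is geometric on {1, 2, . . .} with success probability 1 − a and mean 1/(1 − a). A quick simulation check (the value a = 0.8 is an arbitrary illustration):

```python
import random

def mean_exit_index(a, trials, rng):
    """Each observation independently stays inside [0, M) with probability
    a, so rho (the first observation at which an increment drops below 0
    or spikes above M) is geometric with success probability 1 - a and
    E[rho] = 1/(1 - a).  Estimate the mean by simulation."""
    total = 0
    for _ in range(trials):
        n = 1
        while rng.random() < a:    # observation stays inside with prob a
            n += 1
        total += n
    return total / trials

rng = random.Random(0)
est = mean_exit_index(a=0.8, trials=200_000, rng=rng)   # E[rho] = 1/0.2 = 5
```

The estimate should match the pgf-derived mean to within Monte Carlo error.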

Higher Dimensional Random Walks
Consider a random measure (S, T ) = ∑ ∞ n=0 X n ε τ n , where X n = (X n , Y n ) ∈ R k + × R m , and the corresponding delayed renewal process where A n = A 1 n , ..., A k n and P n = P 1 n , ..., P m n . Unlike any model considered previously, some components of the jumps, the passive components Y n , are permitted to be negative.
Given the rectangle A = [0, L 1 ] × · · · × [0, L k ] × R m , where L = (L 1 , ..., L k ) ∈ R k + , we are interested in the escape parameters upon the walker's exit from set A. Namely, these are the exit indices, and we focus on the first exit index. As seen in a prior section, Dshalalow and Liew [59,60] derived a functional containing the pre-exit and post-exit non-negative active components A ρ−1 and A ρ as well as real-valued passive components P ρ−1 and P ρ , where ξ ∈ B(0, 1), each component of α, β ∈ C k has a non-negative real part, and φ, ψ ∈ C m . A new work by White [109] departs from the works above to derive a general formula for the probability of an arbitrary weak ordering of threshold crossings, a question of practical interest in the numerous applications outlined above related to stochastic network defense, queueing theory, finance, and the actuarial sciences.
In particular, a weak ordering of the exit indices ν 1 , . . . , ν k is a member of the set where each relation is fixed to be either = or <. Without loss of generality, each W ∈ W may be represented as shown, keeping in mind that some permutation may be applied to the indices of the ν's. The proofs in Dshalalow [40] and Dshalalow and Liew [59,60], among others, partition the sample space into W and derive functionals of the form Φ W for each weak ordering W ∈ W separately for k ≤ 4 before adding them to find Φ ρ . Note that a special case of Φ W is Φ W (1, 0, 0, 0, 0, . . .), the probability of the weak ordering W occurring. This could in principle be done manually for k ≥ 5, but it is impractical. W contains all permutations of {ν 1 , . . . , ν k } with strict inequalities, i.e., where the threshold crossings occur at distinct times, which already gives k! elements. Further, W also contains many other weak orderings, because arbitrary subsets of the threshold crossings may occur upon the same jump of the process. It turns out that the cardinalities of W for dimensions k = 1, 2, 3, . . . are the Fubini numbers (or ordered Bell numbers): 1, 3, 13, 75, 541, 4683, 47,293, . . . Deriving 75 Φ W functionals for dimension four was feasible but took quite a lot of effort. Some experimentation has shown that moving up to seven dimensions on a similar problem is time-consuming but feasible with an automated procedure on a consumer-grade computer, and a few more dimensions should work on more substantial hardware, but it soon becomes infeasible regardless of computational resources.
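The cardinality count above is easy to reproduce: the ordered Bell numbers satisfy the recurrence a(n) = ∑ k C(n, k) a(n − k), obtained by choosing which k of the n indices share the top block of the weak ordering.

```python
from math import comb

def fubini(n):
    """Ordered Bell (Fubini) number: the number of weak orderings of n
    threshold crossings, via a(n) = sum_{k=1}^{n} C(n, k) * a(n - k)."""
    a = [1]                                   # a(0) = 1
    for m in range(1, n + 1):
        a.append(sum(comb(m, k) * a[m - k] for k in range(1, m + 1)))
    return a[n]

cards = [fubini(k) for k in range(1, 8)]
```

This reproduces the sequence quoted in the text and makes plain how quickly the number of cases to derive by hand explodes with the dimension k.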
The main result of White [109] takes an alternate path and, in its proof, generalizes the derivation of an arbitrary P(W) in any finite dimension k. Before we formulate the result, let us define the composition of k operators as shown. This allows for a unified notation for the operator while permitting components of the process to be discrete, continuous, or mixed. Given this, we formulate the result.

Theorem 6.
If each component of S n is continuous and each vector x T j contains at least one component with a positive real part, then for each W ∈ W, where r j = s j − s j−1 + 1, and S j = n ∈ N : s j−1 < n ≤ s j .
For convenience, the result above was formulated under the assumption that the components of S n are continuous-valued. However, if any component of the process is discrete, the definition of H χ implies the appropriate change of the corresponding individual components to D-operators. The only other necessary change is to replace x m with − ln(x m ) in the input to the appropriate γ terms, in order to convert the Laplace-Stieltjes transform Ee −x m X into a probability-generating function Ex m X . In all, this result gives the probability of each weak ordering of threshold crossings, whether the components are continuous, discrete, or mixed. It is, however, buried under k operators, which at first glance seems to do little but push the problem to another impasse; some examples below demonstrate that it is nevertheless a practical result that agrees with empirical experiments in special cases.

Example 5.
Recall the stochastic network problems in two dimensions addressed in Theorem 2 above. In the context of this section's models, suppose each X i represents the i.i.d. sizes of the batches of nodes incapacitated by attacks. The nodes lost in the ith attack have i.i.d. random weights Y i1 , . . . , Y iX i representing their value to the overall health of the network. An interesting question is the probability that the node loss crosses its critical threshold before, after, or simultaneously with the critical weight loss. If one is clearly more likely, it offers a path to decisions that improve the reliability of the network; for example, whether efforts should be made to shield nodes to reduce node losses or to decentralize the value within the network to reduce weight losses.
If we assume the node batches X i are geometrically distributed with parameter p and the node weights Y ij are exponential with parameter µ, the following probabilities were computed explicitly by simplifying Theorem 6 and applying the appropriate H χ operator. We see a sigmoid pattern for the probability of ν 1 < ν 2 when M 1 is fixed and M 2 grows. Recalling that ν 1 < ν 2 means M 1 is crossed before M 2 , and that the means of the node and weight jumps from an attack are equal in this special case, this is very intuitive. A small M 2 should be crossed first with high probability, resulting in a low probability of the converse, ν 1 < ν 2 . At M 1 = M 2 , the probability is 0.5. A large M 2 is rarely crossed first, giving a large probability that ν 1 < ν 2 .
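These crossing-order probabilities are also easy to estimate by direct simulation, which is how such explicit formulas can be checked empirically. A minimal sketch under the same distributional assumptions (geometric(p) batches, exponential(µ) weights; the specific parameter values are our illustration):

```python
import random

def crossing_order_probs(p, mu, M1, M2, trials, rng):
    """Estimate P(nu1 < nu2), P(nu1 = nu2), P(nu2 < nu1): in each attack a
    geometric(p) batch of nodes goes down, each node carrying an
    exponential(mu) weight; nu1 / nu2 are the attack indices at which the
    cumulative node count exceeds M1 / cumulative weight exceeds M2."""
    counts = [0, 0, 0]
    for _ in range(trials):
        nodes, weight = 0, 0.0
        nu1 = nu2 = None
        k = 0
        while nu1 is None or nu2 is None:
            k += 1
            n_k = 1
            while rng.random() > p:          # geometric batch on {1,2,...}
                n_k += 1
            nodes += n_k
            weight += sum(rng.expovariate(mu) for _ in range(n_k))
            if nu1 is None and nodes > M1:
                nu1 = k
            if nu2 is None and weight > M2:
                nu2 = k
        if nu1 < nu2:
            counts[0] += 1
        elif nu1 == nu2:
            counts[1] += 1
        else:
            counts[2] += 1
    return [c / trials for c in counts]

rng = random.Random(3)
probs = crossing_order_probs(p=0.5, mu=1.0, M1=20, M2=20.0, trials=4000, rng=rng)
```

With µ = 1 the mean node and weight jumps per attack coincide, so at M 1 = M 2 one expects a substantial probability of a simultaneous crossing and roughly balanced strict orders, matching the intuition described above.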
For P(ν 1 = ν 2 ), we see similarly intuitive results. In Figure 5, simulated and predicted results again strongly agree, as does intuition: the peak occurs when M 1 = M 2 , which should be when a simultaneous crossing is most common in this case, where the mean of the jumps in each dimension is equal. While we were able to simply compute the probabilities to arbitrary precision with numerical approximations in the example above, this is not always possible, especially in higher dimensions; the next example demonstrates an alternate path to practical results. Example 6. Suppose the jumps of the process X i are made up of three independent exponential random variables, (X i1 , X i2 , X i3 ), with parameters µ 1 , µ 2 , and µ 3 , respectively. In a three-dimensional problem, W is made up of 13 weak orders of four types. Since this example has jumps with independent components, it is enough to compute the four probabilities in the first column and simply apply a permutation to the results, adjusting the parameters accordingly, to get the others in the same line. We refer the reader to [109] for the full results, but we reproduce the first one for the sake of discussion. We find the expression below, where I 1 is the modified Bessel function of the first kind. The formulas derived for the other probabilities had similar expressions, all made up of a term involving an inverse Laplace transform of an integral involving a Bessel function, plus terms that are less difficult to compute. The expression above is not quite explicit, since some expressions remain under integrals and one portion remains under a Laplace transform. The Bessel functions can be computed numerically to high precision quickly, and the integrals turn out to be quite easy to approximate to high accuracy with standard numerical integration techniques.
The inverse Laplace transform poses some less widely understood challenges, but it turns out it can be reliably inverted numerically in this instance using the fixed Talbot algorithm [110], which employs trapezoidal numerical integration along a specific deformed contour, following the framework and best practices developed by Abate and Whitt [111]. These two examples demonstrate that the result of Theorem 6 is versatile and can be computed explicitly, or at least in a form that can be numerically approximated to high precision, in numerous interesting special cases.
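For readers unfamiliar with it, the fixed Talbot scheme is compact enough to sketch in full: the Bromwich contour is deformed to s(θ) = rθ(cot θ + i) with r = 2M/(5t), and the trapezoidal rule is applied in θ. This is a generic sketch (not code from [109]), checked against the known pair F(s) = 1/(s + 1), f(t) = e^{-t}:

```python
import cmath
import math

def talbot(F, t, M=32):
    """Fixed Talbot inversion of a Laplace transform F(s) at t > 0:
    trapezoidal integration along the deformed contour
    s(theta) = r*theta*(cot(theta) + i), with r = 2M/(5t)."""
    r = 2 * M / (5 * t)
    # theta = 0 endpoint contributes half a trapezoid at s = r
    total = 0.5 * F(complex(r, 0.0)).real * math.exp(r * t)
    for k in range(1, M):
        theta = k * math.pi / M
        cot = math.cos(theta) / math.sin(theta)
        s = r * theta * complex(cot, 1.0)
        sigma = theta + (theta * cot - 1.0) * cot   # contour derivative term
        total += (cmath.exp(t * s) * F(s) * complex(1.0, sigma)).real
    return total * r / M

# check against a known transform pair: L{e^{-t}}(s) = 1/(s + 1)
approx = talbot(lambda s: 1.0 / (s + 1.0), 1.0)
```

For smooth transforms such as those arising here, M around 32 already yields near machine-precision accuracy in double arithmetic.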
The probabilities of [109] are unique in this area of study, but what about the full functional Φ ρ in m + k dimensions? This is the focus of current work by White [112], which has recently confirmed the conjecture made by Dshalalow and Liew [59,60] that their result applies for arbitrarily many active components k, for a functional that may be considered in its simplest form as shown, where the continuous jumps have a common joint LST γ(u) = Ee −u·X 1 . In the spirit of Theorem 6, the following has been shown to hold.

Theorem 7.
If at least one component of u has a positive real part, then for each W ∈ W for which the permutation p is the identity function, the formula holds, where γ = γ(u + χ), γ B = γ(u + χ B ), and Γ B = γ(χ B ) for B ⊆ {1, ..., k}.
The result readily extends to the situation where p is any permutation, hence giving us an expression for Φ W for any weak ordering W ∈ W.
The proof is a fairly direct extension of the proof of Theorem 6; the much larger challenge beyond the work in [112] is to find Φ ρ by summing the Φ W terms over all weak orderings W ∈ W. Very recently, this problem has been solved in [112] by exploiting an interesting recursive pattern in the way the expressions Φ W simplify when added together. The result is formulated below.
This result has a remarkably simple formula, and an expression analogous to (γ(u) − γ)/(1 − γ) is common to many of the other results herein when the pre-exit terms, passive components, and ξ ρ terms are omitted from the functional. As such, this theorem largely unifies the insensitive functionals above, and confirms the conjecture of Dshalalow and Liew [59,60] about a model with k active components.
Of course, many embellishments are possible, such as adding the pre-exit terms, passive components, and ξ ρ terms to the functional to seek a fuller functional. This is a simple extension of Theorem 8, which will appear in [112].

Time Sensitive Analysis of Random Walks
Several models outlined above, particularly those studied by the authors in [65,66] and outlined in Section 3, considered a process running in real time with jumps at times t 1 , t 2 , . . . , which can only be observed upon an independent delayed renewal process τ 0 , τ 1 , . . . rather than in real time. In this case, the exit of the process from a k-dimensional rectangular region was studied via the pre-exit and post-exit observation times, but access to the real exit was unavailable. This approach introduces some insurmountable uncertainty, depending on the crudeness of the observation process {τ n } ∞ n=1 . A sequence of papers by Dshalalow and his collaborators [50,68,69,73-76,113] pursues methods referred to as time sensitive analysis, which offer a more precise look into the intermediate time period between the pre-exit and post-exit observation times, during which the real-time exit actually occurs, in order to glean further insights about the process upon its exit.
The simplest case of this approach is the study of a one-dimensional discrete random walk by the authors in 2016 [113], where we have a random measure S = ∑ ∞ n=0 a n ε t n , where a n : Ω → Z + are independent and identically distributed non-negative random variables with common probability-generating function g(z) = Ez a n , and we study the associated continuous-time Poisson process with parameter λ. S(t) is referred to as the real-time stochastic process. The time insensitive methods rely on studying the process S(t) through its observed values, where the point process τ 0 , τ 1 , ... is a delayed renewal process representing the observation times of S(t). As a delayed renewal process, the inter-observation times, ∆ 0 = τ 0 and ∆ n = τ n − τ n−1 , n ∈ N, are independent, and the times for n ≥ 1 are identically distributed. Denote the Laplace-Stieltjes transforms of each as L 0 (θ) = Ee −θτ 0 and L(θ) = Ee −θ∆ 1 , each with Re(θ) ≥ 0. Then the increments of the Poisson process between observations satisfy a relation that we assume to be known or readily obtainable. Given the interval A = [0, M], where M ∈ Z + , we are interested in the index of the first observed exit of the process from set A, ν = inf{n ≥ 0 : S n ∉ A}. With the one-dimensional time insensitive analysis outlined in Section 2, a functional of the usual form was derived. In contrast, one-dimensional time sensitive analysis focuses on targets involving the value of S upon the observations immediately before and after the real-time crossing, S(τ ν−1 ) and S(τ ν ), the real-time value of the process S(t), and the times of the observations immediately before and after the crossing, τ ν−1 and τ ν , themselves. Notice that each functional deals with t placed within a particular random time interval, either before τ ν−1 or between τ ν−1 and τ ν . In [113], the authors derived formulas for each of these two functionals under a Laplace transform, which are reproduced below. Theorem 9.
The joint functional Φ 1 ν (t, u, v, ϑ, θ, y) of the process S(t) on the interval [0, τ ν−1 ) satisfies the formula below. The results are each under a Laplace transform, so it is necessary to evaluate an additional inverse operator to extract probabilistic results from these expressions, but they provide a path to deeper insights than the time insensitive analysis, although they are a bit more challenging to derive, as we see in the following example.

Example 7.
To derive practical results, we merely need to specify some details about the real-time process and the delayed renewal process of observation times, and then apply the transforms. We make the following assumptions.
The marks of the real-time process are geometrically distributed with parameter a, so their PGF is g(z) = az/(1 − (1 − a)z). The initial functional γ 0 = 1 (i.e., zero initial state and time). It turns out that in such special cases, time sensitive analysis can be used to derive explicit formulas for joint distributions of random quantities associated with the exit. For example, the joint probability mass and distribution function of the exit position of the process and the pre-exit observation time, P{S ν = r, τ ν−1 > t}, can be found explicitly by applying the inverse Laplace transform and the D operator, and then using properties of probability-generating functions, as follows.
While this expression may seem rather large, it is simply a linear combination of terms of the form G j , H j , R jr , and some constants associated with the process, which is easy to compute efficiently, as the lower regularized gamma function can be quickly computed to arbitrary precision with common numerical computing tools. This was just one example of a result that time sensitive analysis permits. It clearly works for any pair of the time and position random variables upon the exit represented in Φ 1 ν and Φ 2 ν ; in particular, it allows one to use the post-exit observation τ ν instead of the pre-exit observation, if preferred.
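As a small illustration of the last point, the lower regularized gamma function P(s, x) mentioned above can be evaluated to near machine precision from its standard power series; the sketch below checks against the closed form P(1, x) = 1 − e^{-x}.

```python
import math

def lower_reg_gamma(s, x, tol=1e-15):
    """Lower regularized gamma P(s, x) = gamma(s, x) / Gamma(s) via the
    series gamma(s, x) = x^s e^{-x} sum_{n>=0} x^n / (s(s+1)...(s+n)),
    which converges quickly for moderate x."""
    if x == 0:
        return 0.0
    term = 1.0 / s
    total = term
    n = 0
    while abs(term) > tol * abs(total):
        n += 1
        term *= x / (s + n)
        total += term
    # multiply by x^s e^{-x} / Gamma(s), computed in log space for stability
    return total * math.exp(s * math.log(x) - x - math.lgamma(s))

# closed-form check: P(1, x) = 1 - e^{-x}
val = lower_reg_gamma(1.0, 1.0)
```

In production one would reach for a library routine, but the series makes clear why these terms are cheap to evaluate to high accuracy.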
Later work by the authors in 2019 [76] pursued time sensitive analysis for a problem extended in several directions: (1) results are derived for general processes with independent and stationary increments (ISI); (2) instead of one dimension, the process has a active components and b passive components and lives in R_+^a × R^b; and (3) instead of just two times of interest (previously, the pre-exit and post-exit observations), the position of the process and the time at any finite number of such random times are included in the functional.
The general results are worth mentioning, as they have some interesting implications beyond the scope of this work. Suppose {S(t) : t ≥ 0} is a continuous-time ISI stochastic process defined on a filtered probability space (Ω, F, (F_t), P). Let T = {T_0, T_1, ...} be a point process in R_+ with T_n = T_{n−1} + δ_n, where each δ_n is non-negative and independent of the prior time increments δ_0, δ_1, ..., δ_{n−1}. In this setting, the following result is established regarding the functional for each n = 1, ..., m, assuming v_j, y ∈ R^{a+b} and θ = (θ_0, ..., θ_m) ∈ C_+^{m+1}, where we denote C_+ = {z ∈ C : Re(z) ≥ 0}. This is a joint characteristic function of the ISI process S(T_j) at each time T_j with j ≤ m, of each corresponding random time increment δ_n, and of the real-time value of the process S(t), restricted to times in [T_{n−1}, T_n).
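The composition structure behind such functionals (the process transform evaluated inside the LST of the random observation time) can be checked numerically in the simplest instance. The sketch below assumes, purely for illustration, a single Poisson component with rate λ observed after one independent exponential increment δ; then E[z^{N(δ)}] = φ(λ(1 − z)) with φ the LST of δ. All parameter values are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(1)
lam, mu, z = 2.0, 3.0, 0.4          # arbitrary illustrative parameters

# Simulate the Poisson count accumulated over an independent Exp(mu)
# observation increment delta, and compare the empirical PGF value
# E[z^N(delta)] with the composition phi(lam*(1 - z)), phi(s) = mu/(mu + s).
n = 200_000
delta = rng.exponential(1 / mu, size=n)
counts = rng.poisson(lam * delta)
empirical = np.mean(z ** counts)
analytic = mu / (mu + lam * (1 - z))
print(empirical, analytic)
```

The two printed values agree to Monte Carlo accuracy, illustrating why the transforms of the increments δ_n enter the general formula composed with the transform of the underlying process.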
This result gives a formula for a joint functional at m independent random times of interest of a very general stochastic process S(t). If the process is assumed to be a collection of marked Poisson processes, the result simplifies nicely.

Corollary 1. If S(t) is made up of d = a + b parallel marked Poisson processes with rates λ_1, ..., λ_d and T is independent of F_t, then on the trace σ-algebra F ∩ {T_{n−1} ≤ t ≤ T_n}, the functional F_n satisfies
\[
F^*_n(x, v_0, v_1, \dots, v_m, y, \theta)
= \prod_{j=0}^{n-1} \varphi_j\big(x + \theta_j + \lambda \cdot G(b_j + y)\big)\,
\frac{\varphi_n\big(\theta_n + \lambda \cdot G(b_n)\big) - \varphi_n\big(x + \theta_n + \lambda \cdot G(b_n + y)\big)}{x + \lambda \cdot \big(G(b_n + y) - G(b_n)\big)},
\]
where φ_j(θ) = E e^{−θδ_j}, g_j(b) = E e^{−ibX_{mj}}, and G(b) = (1 − g_1(b_1), ..., 1 − g_d(b_d)).

Suppose next the process has two active components, a = 2. We will make some additional assumptions to turn the process S(t) into a random walk and review the related results from [76].
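The mark-free special case of Corollary 1 (all mark arguments zero, so G vanishes) can be verified by direct Monte Carlo. The sketch below assumes exponential increments δ_0, δ_1 with arbitrary illustrative rates and n = 1, in which case the formula reduces to φ_0(x + θ_0)(φ_1(θ_1) − φ_1(x + θ_1))/x, the Laplace transform in t of E[e^{−θ_0 δ_0 − θ_1 δ_1} 1_{T_0 ≤ t < T_1}].

```python
import numpy as np

rng = np.random.default_rng(7)
mu0, mu1 = 1.5, 2.5            # rates of delta_0, delta_1 (illustrative)
x, th0, th1 = 0.7, 0.3, 0.4    # Laplace variables (illustrative)

phi = lambda mu, s: mu / (mu + s)      # LST of an Exp(mu) increment

# Corollary 1 with n = 1 and all mark arguments zero (G vanishes):
analytic = phi(mu0, x + th0) * (phi(mu1, th1) - phi(mu1, x + th1)) / x

# Monte Carlo: the same quantity equals
#   E[ e^{-th0*d0 - th1*d1} * (e^{-x*d0} - e^{-x*(d0+d1)}) / x ],
# i.e. the Laplace transform in t integrated over [d0, d0 + d1).
n = 400_000
d0 = rng.exponential(1 / mu0, n)
d1 = rng.exponential(1 / mu1, n)
mc = np.mean(np.exp(-th0 * d0 - th1 * d1)
             * (np.exp(-x * d0) - np.exp(-x * (d0 + d1))) / x)
print(analytic, mc)
```

Agreement here checks only the increment structure of the corollary; the marked-Poisson terms λ·G(·) enter once marks are switched on.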
Consider the random measure
\[
\mathcal{S} = \sum_{n=0}^{\infty} \big(a^1_n, a^2_n, p_n\big)\,\varepsilon_{t_n},
\]
where the jumps (a^1_n, a^2_n, p_n) : Ω → R_+ × R_+ × R^b are independent and identically distributed non-negative random vectors, and we study the stochastic process
\[
S(t) = \sum_{n=0}^{\infty} \big(a^1_n, a^2_n, p_n\big)\,\varepsilon_{t_n}([0, t]),
\]
where each jump has a common joint transform G(z). The time insensitive methods rely upon study of the process S(t) through its observed values, where the point process τ_0, τ_1, ... is a delayed renewal process representing the observation times of S(t), with initial LST L_0(θ) = E e^{−θτ_0} and common LST L(θ) = E e^{−θΔ_1} for the inter-observation times. We assume the joint transforms of X_n, n ≥ 1, to be known or readily obtainable. Given the rectangular cylinder A = [0, M_1] × [0, M_2] × R^b, where M_1, M_2 ∈ R_+, we are interested in the index of the first observed exit of the process from the set A, ν = inf{n ≥ 0 : S_n ∉ A}, and we target the time sensitive functionals
\[
\Phi^1_\nu(t, u, v, \theta_0, \theta, y) = E\!\left[e^{-iu\cdot S_{\nu-1} - iv\cdot S_\nu - \theta_0\tau_{\nu-1} - \theta\Delta_\nu - iy\cdot S(t)}\,\mathbf{1}_{[0,\,\tau_{\nu-1})}(t)\right],
\]
\[
\Phi^2_\nu(t, u, v, \theta_0, \theta, y) = E\!\left[e^{-iu\cdot S_{\nu-1} - iv\cdot S_\nu - \theta_0\tau_{\nu-1} - \theta\Delta_\nu - iy\cdot S(t)}\,\mathbf{1}_{[\tau_{\nu-1},\,\tau_\nu)}(t)\right],
\]
of the positions of the process at the pre-exit and post-exit observations, the position at the real time t, and the pre-exit and post-exit times themselves, restricted to the random time intervals [0, τ_{ν−1}) before the pre-exit time and [τ_{ν−1}, τ_ν) between the pre-exit and post-exit times. Through a stochastic summation over a conveniently chosen partition of the sample space, Corollary 1 can be used to derive these functionals in the case where the components of the process are marked Poisson processes.

Theorem 12. Let S(t) be the constant interpolation of the process embedded in a process made up of 2 + b parallel marked Poisson processes of rates λ_1, ..., λ_{2+b}, where the two active components are discrete, continuous, or mixed.
For the process on the trace σ-algebra F ∩ {t < τ_{ν−1}}, the joint functional Φ¹_ν(t, u, v, θ_0, θ, y) satisfies a companion expression given in [76].

Theorem 13. Let S(t) be the constant interpolation of the process embedded in a process made up of 2 + b parallel marked Poisson processes of rates λ_1, ..., λ_{2+b}, where the two active components are discrete, continuous, or mixed. For the process on the trace σ-algebra F ∩ {τ_{ν−1} ≤ t < τ_ν}, the joint functional Φ²_ν(t, u, v, θ_0, θ, y) satisfies
\[
\mathcal{L}_t\!\left[\Phi^2_\nu(t, u, v, \theta_0, \theta, y)\right](x)
= \mathcal{G}^{-1}_s\!\left[
\frac{\gamma_0(v + y,\, x + \theta) - \gamma_0(v, \theta)}{x + \lambda \cdot \big(G(v) - G(v + y)\big)}
- \frac{\gamma_0(v + s + y,\, x + \theta) - \gamma_0(v + s, \theta)}{x + \lambda \cdot \big(G(v + s) - G(v + s + y)\big)}
+ \frac{\gamma_0(u + v + s + y,\, x + \theta_0)}{1 - \gamma(u + v + s + y,\, x + \theta_0)}
\right].
\]
The expression of Theorem 13 [76] is clearly very similar to the one-dimensional time sensitive result of Theorem 10 [113], but this one is for a continuous problem. Indeed, this expression is of a similar form to the time insensitive functionals of Theorem 1, Theorem 2 [65], and Theorem 4 [106], a common thread running throughout much of the work discussed in this article.

Funding: This research received no external funding.

Acknowledgments:
The authors give many thanks to the anonymous referees, whose insightful remarks and suggestions led to a much improved version of this paper.

Conflicts of Interest:
The authors declare no conflict of interest.