Article

Controlled Filtered Poisson Processes

Department of Mathematics and Industrial Engineering, Polytechnique Montréal, C.P. 6079, Succursale Centre-Ville, Montréal, QC H3C 3A7, Canada
Mathematics 2025, 13(2), 284; https://doi.org/10.3390/math13020284
Submission received: 29 December 2024 / Revised: 15 January 2025 / Accepted: 15 January 2025 / Published: 17 January 2025

Abstract
Filtered Poisson processes are used as models in various applications, in particular in statistical hydrology. In this paper, controlled filtered Poisson processes are considered. The aim is to minimize the expected time that the process will spend in the continuation region. The dynamic programming equation satisfied by the value function is derived. To obtain the value function, and hence the optimal control, a non-linear integro-differential equation must be solved, subject to the appropriate boundary conditions. Various cases for the size of the jumps are treated and explicit results are obtained.

1. Introduction

Let $\{X(t),\, t \ge 0\}$ be the stochastic process defined by
\[
X(t) = x + \sum_{k=1}^{N(t)} w(t, T_k, Y_k),
\]
where $x = X(0)$, $\{N(t),\, t \ge 0\}$ is a Poisson process with rate (or intensity) $\lambda$, and $w(\cdot,\cdot,\cdot)$ is called the response function. The random variables $T_1, T_2, \ldots$ are the arrival times of the events of the Poisson process, and $\{Y_1, Y_2, \ldots\}$ is a sequence of independent random variables that are identically distributed as a random variable $Y$.
The process $\{X(t),\, t \ge 0\}$ is known as a filtered Poisson process (FPP); see [1] (p. 145). It has applications in various fields. In [2], Lefebvre and Guilbault used these processes to model a river flow; they also estimated return periods in hydrology based on FPPs.
In hydrology, an appropriate response function is of the form
\[
w(t, T_k, Y_k) = Y_k\, e^{-(t - T_k)/c},
\]
where $c$ is a positive constant. This is the response function that will be used in this paper.
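For concreteness, a sample path of the process defined by Equations (1) and (2) can be simulated directly from the arrival times and jump sizes. The sketch below (Python; the function name, the grid size and the choice of exponentially distributed jumps are our own illustrative assumptions) uses the fact that, given $N(t_{\max}) = n$, the $n$ arrival times are distributed as ordered uniform variables on $[0, t_{\max}]$.

```python
import numpy as np

def simulate_fpp(x, lam, c, t_max, rng, n_grid=201):
    """Sample path of X(t) = x + sum_{k=1}^{N(t)} Y_k exp(-(t - T_k)/c).

    Jump sizes Y_k are taken Exp(1) here purely for illustration.
    """
    n = rng.poisson(lam * t_max)                     # N(t_max)
    arrivals = np.sort(rng.uniform(0.0, t_max, n))   # arrival times T_1 < ... < T_n
    jumps = rng.exponential(1.0, n)                  # jump sizes Y_1, ..., Y_n
    grid = np.linspace(0.0, t_max, n_grid)
    X = np.full(n_grid, float(x))
    for T_k, Y_k in zip(arrivals, jumps):
        active = grid >= T_k                         # the k-th response starts at T_k
        X[active] += Y_k * np.exp(-(grid[active] - T_k) / c)
    return grid, X

rng = np.random.default_rng(0)
grid, X = simulate_fpp(x=1.0, lam=2.0, c=0.5, t_max=5.0, rng=rng)
```

With positive jump sizes, every point of the path lies at or above the starting level $x$, since each response term is non-negative.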
Remark 1.
Notice that with the response function defined in Equation (2), we can express $X(t + \Delta t)$ in terms of $X(t)$:
\[
\begin{aligned}
X(t + \Delta t) &= x + \sum_{k=1}^{N(t + \Delta t)} Y_k\, e^{-(t + \Delta t - T_k)/c} \\
&= x + \sum_{k=1}^{N(t)} Y_k\, e^{-(t + \Delta t - T_k)/c} + \sum_{k=N(t)+1}^{N(t + \Delta t)} Y_k\, e^{-(t + \Delta t - T_k)/c} \\
&= x + e^{-\Delta t/c} \left[ X(t) - x \right] + \sum_{k=N(t)+1}^{N(t + \Delta t)} Y_k\, e^{-(t + \Delta t - T_k)/c}.
\end{aligned}
\]
Moreover, since the Poisson process $\{N(t),\, t \ge 0\}$ has independent increments, the last summation in the above equation is independent of $X(t)$.
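This decomposition is easy to check numerically for a fixed, deterministic set of arrival times and jump sizes; the following short computation (our own illustration, with arbitrary values) compares the direct sum with the recursive form.

```python
import numpy as np

# Verify X(t + dt) = x + exp(-dt/c) [X(t) - x] + (terms from jumps in (t, t + dt])
# for arbitrary fixed arrival times and jump sizes (illustrative values).
x, c, t, dt = 1.0, 0.5, 2.0, 0.3
T = np.array([0.4, 1.1, 2.2])      # arrival times; the third lies in (t, t + dt]
Y = np.array([0.7, 1.3, 0.5])      # jump sizes

def X_direct(s):
    past = T <= s
    return x + np.sum(Y[past] * np.exp(-(s - T[past]) / c))

new = T > t
lhs = X_direct(t + dt)
rhs = x + np.exp(-dt / c) * (X_direct(t) - x) \
        + np.sum(Y[new] * np.exp(-(t + dt - T[new]) / c))
```

The two expressions agree up to floating-point rounding, since each past response simply picks up the common factor $e^{-\Delta t/c}$.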
Filtered Poisson processes have applications in physics, operations research, finance and actuarial science, among other fields. Many theoretical and applied papers have been written on FPPs or generalizations of these processes. Racicot and Moses [3] used FPPs in vibration problems. Hero [4] estimated the time shift of a non-homogeneous FPP in the presence of additive Gaussian noise. Grigoriu [5] developed methods to obtain properties of the output of linear and non-linear dynamic systems subject to random actions represented by FPPs. Novikov et al. [6] considered first-passage problems for FPPs. Yin et al. [7] modelled snow loads as FPPs. Theodorsen et al. [8] obtained statistical properties of an FPP with additive random noise. Alazemi et al. [9] computed estimators of FPP intensities. In Grant and Szechtman [10], the process $\{N(t),\, t \ge 0\}$ is replaced by a non-homogeneous Poisson process.
In this paper, we consider a controlled version of $\{X(t),\, t \ge 0\}$:
\[
X(t) = x + b_0 \int_0^t u[X(s)]\, ds + \sum_{k=1}^{N(t)} Y_k\, e^{-(t - T_k)/c},
\]
where $b_0$ is a positive constant and the control $u(\cdot)$ is assumed to be a continuous function.
Let $\tau(x)$ be the first-passage time random variable defined by
\[
\tau(x) = \inf\{t \ge 0 : X(t) \ge d \mid X(0) = x\},
\]
where $x < d$. Our aim is to minimize the expected value of the cost function
\[
J(x) := \int_0^{\tau(x)} \left\{ \frac{1}{2}\, q_0\, u^2[X(t)] + \theta \right\} dt + K[X(\tau(x))],
\]
where $q_0$ and $\theta$ are positive constants and $K(\cdot)$ is the terminal cost function. Because the parameter $\theta$ is positive, the optimizer wants to minimize the expected time that the controlled process will spend in the interval $(-\infty, d)$, while taking into account the quadratic control costs and the terminal cost. We could assume instead that $\theta$ is negative, so that the objective would be to maximize the expected time spent in the continuation region $(-\infty, d)$.
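For intuition, $E[J(x)]$ under a given feedback control can be estimated by brute-force simulation. The sketch below (our own illustration; the Euler scheme, the function name and all default parameter values are assumptions, not part of the paper) allows at most one jump per time step:

```python
import numpy as np

def mc_cost(x, u, d, b0=1.0, q0=1.0, theta=1.0, lam=1.0, c=1.0,
            K=lambda z: 1.0, jump=lambda rng: 1.0,
            dt=1e-3, t_max=100.0, n_paths=400, seed=2):
    """Crude Monte Carlo estimate of E[J(x)] for a feedback control u(.).

    Euler time stepping with at most one jump per step; `jump` draws a jump
    size Y_k.  All names and default values here are illustrative assumptions.
    """
    rng = np.random.default_rng(seed)
    total = 0.0
    for _ in range(n_paths):
        drift, jumps_part, cost, t = 0.0, 0.0, 0.0, 0.0
        X = x
        while X < d and t < t_max:
            cost += (0.5 * q0 * u(X)**2 + theta) * dt
            drift += b0 * u(X) * dt
            jumps_part *= np.exp(-dt / c)      # exponential decay of past responses
            if rng.random() < lam * dt:        # one Poisson event in this step
                jumps_part += jump(rng)
            X = x + drift + jumps_part
            t += dt
        total += cost + K(X)                   # terminal cost at the exit point
    return total / n_paths
```

For instance, with all constants equal to 1, a constant jump size of 1 and the affine control derived in Case I of Section 3, the estimate approaches the explicit value function obtained there.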
This type of problem, in which the optimizer controls a stochastic process until a certain event occurs, is known as a homing problem. Whittle [11] has considered these problems for n-dimensional diffusion processes. In [12], he generalized the cost criterion by taking the risk-sensitivity of the optimizer into account.
The author has written numerous papers on homing problems for different types of stochastic processes. Recently, he and Yaghoubi have solved homing problems for queueing systems and for continuous-time Markov chains; see [13].
Next, we define the value function
\[
F(x) = \inf_{u[X(t)],\; 0 \le t < \tau(x)} E[J(x)].
\]
It gives the expected cost incurred if the optimizer chooses the optimal value of the control $u[X(t)]$ in the interval $[0, \tau(x))$.
In Section 2, the dynamic programming equation satisfied by the value function will be derived. Then, in Section 3, various cases for the distribution of the random variable Y, which denotes the size of the jumps, will be considered. Explicit results will be obtained for the value function and the optimal control. To do so, we will have to solve a non-linear integro-differential equation.

2. Dynamic Programming Equation

We have, for $x < d$,
\[
\begin{aligned}
F(x) &:= \inf_{u[X(t)],\; 0 \le t < \tau(x)} E[J(x)] \\
&= \inf_{u[X(t)],\; 0 \le t < \tau(x)} E\!\left[ \int_0^{\tau(x)} \left\{ \frac{1}{2}\, q_0\, u^2[X(t)] + \theta \right\} dt + K[X(\tau(x))] \right] \\
&= \inf_{u[X(t)],\; 0 \le t < \tau(x)} E\!\left[ \int_0^{\Delta t} \left\{ \frac{1}{2}\, q_0\, u^2[X(t)] + \theta \right\} dt + \int_{\Delta t}^{\tau(x)} \left\{ \frac{1}{2}\, q_0\, u^2[X(t)] + \theta \right\} dt + K[X(\tau(x))] \right].
\end{aligned}
\]
For a $\Delta t$ that is small,
\[
\int_0^{\Delta t} \left\{ \frac{1}{2}\, q_0\, u^2[X(t)] + \theta \right\} dt = \left\{ \frac{1}{2}\, q_0\, u^2(x) + \theta \right\} \Delta t + o(\Delta t).
\]
Moreover, using Bellman’s principle of optimality (see [14]), we can express the remaining part of the cost in Equation (8) in terms of the value function:
\[
\inf_{u[X(t)],\; \Delta t \le t < \tau(x)} E\!\left[ \int_{\Delta t}^{\tau(x)} \left\{ \frac{1}{2}\, q_0\, u^2[X(t)] + \theta \right\} dt + K[X(\tau(x))] \right] = E\!\left[ F(X(\Delta t)) \right] + o(\Delta t),
\]
where
\[
X(\Delta t) = x + b_0\, u(x)\, \Delta t + \sum_{k=1}^{N(\Delta t)} Y_k\, e^{-(\Delta t - T_k)/c} + o(\Delta t).
\]
It follows that it is sufficient to find the optimal value of the control $u[X(t)]$ at the initial instant:
\[
F(x) = \inf_{u(x)} \left\{ \left[ \frac{1}{2}\, q_0\, u^2(x) + \theta \right] \Delta t + E\!\left[ F(X(\Delta t)) \right] \right\} + o(\Delta t).
\]
Next,
\[
P[N(\Delta t) = k] =
\begin{cases}
1 - \lambda\, \Delta t + o(\Delta t) & \text{if } k = 0, \\
\lambda\, \Delta t + o(\Delta t) & \text{if } k = 1,
\end{cases}
\]
so that
\[
P[N(\Delta t) \ge 2] = o(\Delta t).
\]
Hence, we can write that
\[
X(\Delta t) =
\begin{cases}
x + b_0\, u(x)\, \Delta t + o(\Delta t) & \text{with probability } 1 - \lambda\, \Delta t + o(\Delta t), \\
x + b_0\, u(x)\, \Delta t + Y_1\, e^{-(\Delta t - T_1)/c} + o(\Delta t) & \text{with probability } \lambda\, \Delta t + o(\Delta t).
\end{cases}
\]
Now, we have
\[
P[N(\Delta t) = 1] = P[T_1 \le \Delta t] + o(\Delta t)
\]
and
\[
T_1 \mid \{T_1 \le \Delta t\} \sim \mathrm{U}(0, \Delta t],
\]
where $T_1$ is the arrival time of the first event of the Poisson process $\{N(t),\, t \ge 0\}$. That is, given that $T_1 \le \Delta t$, the random variable $T_1$ has a uniform distribution on the interval $(0, \Delta t]$. It follows, making use of the law of total probability for continuous random variables, that
\[
E\!\left[ F(X(\Delta t)) \right] = F\!\left( x + b_0\, u(x)\, \Delta t \right) \left[ 1 - \lambda\, \Delta t + o(\Delta t) \right] + \left[ \int \int_0^{\Delta t} F\!\left( x + b_0\, u(x)\, \Delta t + y\, e^{-(\Delta t - s)/c} \right) \frac{1}{\Delta t}\, ds\; f_Y(y)\, dy \right] \left[ \lambda\, \Delta t + o(\Delta t) \right] + o(\Delta t),
\]
where $f_Y(y)$ is the probability density function of the random variable $Y$ (the parent random variable of $Y_1, Y_2, \ldots$), and the outer integral is taken over the support of $Y$.
We deduce from Taylor’s theorem that
\[
F\!\left( x + b_0\, u(x)\, \Delta t \right) = F(x) + b_0\, u(x)\, \Delta t\, F'(x) + o(\Delta t).
\]
Furthermore,
\[
e^{-(\Delta t - s)/c} = e^{s/c} \left[ 1 - \frac{\Delta t}{c} + o(\Delta t) \right],
\]
which implies that
\[
\int_0^{\Delta t} e^{-(\Delta t - s)/c}\, ds = c \left( e^{\Delta t/c} - 1 \right) \left[ 1 - \frac{\Delta t}{c} + o(\Delta t) \right] = \Delta t + o(\Delta t).
\]
Similarly,
\[
\int_0^{\Delta t} e^{-k(\Delta t - s)/c}\, ds = \frac{c}{k} \left( e^{k \Delta t/c} - 1 \right) \left[ 1 - \frac{k \Delta t}{c} + o(\Delta t) \right] = \Delta t + o(\Delta t)
\]
for any $k \in \mathbb{N}$. Hence,
\[
\begin{aligned}
\int_0^{\Delta t} F\!\left( x + b_0\, u(x)\, \Delta t + y\, e^{-(\Delta t - s)/c} \right) ds
&= F(x)\, \Delta t + b_0\, u(x)\, (\Delta t)^2\, F'(x) + y\, \Delta t\, F'(x) + \frac{y^2}{2}\, \Delta t\, F''(x) + \cdots + o(\Delta t) \\
&= F(x)\, \Delta t + b_0\, u(x)\, (\Delta t)^2\, F'(x) + \left[ F(x+y) - F(x) \right] \Delta t + o(\Delta t).
\end{aligned}
\]
From what precedes, we may write that
\[
\begin{aligned}
F(x) = \inf_{u(x)} \Big\{ & \left[ \tfrac{1}{2}\, q_0\, u^2(x) + \theta \right] \Delta t + \left[ F(x) + b_0\, u(x)\, \Delta t\, F'(x) + o(\Delta t) \right] \left[ 1 - \lambda\, \Delta t + o(\Delta t) \right] \\
& + \left[ F(x) + b_0\, u(x)\, \Delta t\, F'(x) + \int \left[ F(x+y) - F(x) \right] f_Y(y)\, dy + o(\Delta t) \right] \left[ \lambda\, \Delta t + o(\Delta t) \right] \Big\} + o(\Delta t).
\end{aligned}
\]
That is,
\[
0 = \inf_{u(x)} \left\{ \left[ \tfrac{1}{2}\, q_0\, u^2(x) + \theta \right] \Delta t + b_0\, u(x)\, \Delta t\, F'(x) + \lambda\, \Delta t \int \left[ F(x+y) - F(x) \right] f_Y(y)\, dy + o(\Delta t) \right\} + o(\Delta t).
\]
Finally, dividing both sides of the above equation by Δ t and taking the limit as Δ t decreases to zero, we obtain the following proposition.
Proposition 1.
The value function $F(x)$ satisfies the dynamic programming equation (DPE)
\[
0 = \inf_{u(x)} \left\{ \frac{1}{2}\, q_0\, u^2(x) + \theta + b_0\, u(x)\, F'(x) + \lambda \int \left[ F(x+y) - F(x) \right] f_Y(y)\, dy \right\}
\]
for $x < d$. Moreover, we have the boundary condition
\[
F(x) = K(x) \quad \text{for } x \ge d.
\]
Remark 2.
Since the stochastic process $\{X(t),\, t \ge 0\}$ has jumps, the terminal cost function $K(\cdot)$ can be used to penalize a final value of the process that lies well above the boundary at $x = d$. For example, in the hydrological application of filtered Poisson processes, the manager of a dam may have the objective of keeping the water level below a given value $d$. If the water level exceeds this maximum desired value by a wide margin, the consequences can be very serious. The optimizer must therefore choose a control $u(\cdot)$ that keeps this risk as small as possible.
Now, we deduce from the DPE that the optimal control $u^*(x)$ is given by
\[
u^*(x) = -\frac{b_0}{q_0}\, F'(x).
\]
Hence, substituting this expression into Equation (26), we find that in order to obtain the value function, we must solve the non-linear integro-differential equation (IDE)
\[
0 = -\frac{1}{2}\, \frac{b_0^2}{q_0} \left[ F'(x) \right]^2 + \theta + \lambda \int \left[ F(x+y) - F(x) \right] f_Y(y)\, dy = -\frac{1}{2}\, \frac{b_0^2}{q_0} \left[ F'(x) \right]^2 + \theta + \lambda \int F(x+y)\, f_Y(y)\, dy - \lambda\, F(x),
\]
subject to the boundary condition in Equation (27).
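Since the expression inside the infimum of the DPE is quadratic in $u(x)$, the formula for $u^*(x)$ and the resulting first term of the IDE can be confirmed symbolically; a small sketch (symbol names are ours, with `Fp` standing for $F'(x)$):

```python
import sympy as sp

u, q0, b0 = sp.symbols('u q0 b0', positive=True)
Fp = sp.symbols('Fp')                                 # Fp stands for F'(x)
expr = sp.Rational(1, 2) * q0 * u**2 + b0 * u * Fp    # u-dependent part of the DPE
u_star = sp.solve(sp.diff(expr, u), u)[0]             # stationary (minimizing) point
min_val = sp.simplify(expr.subs(u, u_star))           # minimized value of that part
```

The stationary point is $u^* = -(b_0/q_0) F'(x)$, and the minimized value is $-\tfrac{1}{2}(b_0^2/q_0)[F'(x)]^2$, which is exactly the first term of the IDE.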
In the next section, various cases will be considered for the distribution of the size Y of the jumps.

3. Particular Cases

Case I. Suppose first that the random variable $Y$ is actually a constant $c \ge d$. Then,
\[
\int F(x+y)\, f_Y(y)\, dy = F(x+c) = K(x+c).
\]
Assume that $x \ge 0$ and $K(x) = K_0$ for any $x \ge d$. Equation (29) reduces to
\[
0 = -\frac{1}{2}\, \frac{b_0^2}{q_0} \left[ F'(x) \right]^2 + \theta + \lambda\, K_0 - \lambda\, F(x).
\]
Let us take $\theta = \lambda = b_0 = q_0 = K_0 = d = 1$. The solution to the above first-order non-linear ordinary differential equation (ODE) that satisfies the boundary condition $F(1) = K_0 = 1$ is (see Figure 1)
\[
F(x) = -\frac{x^2}{2} + (1 - \sqrt{2})\, x + \sqrt{2} + \frac{1}{2} \quad \text{for } 0 \le x \le 1.
\]
Remark 3.
There is a second solution to Equation (31) that is such that $F(1) = 1$, namely
\[
F(x) = -\frac{x^2}{2} + (1 + \sqrt{2})\, x - \sqrt{2} + \frac{1}{2} \quad \text{for } 0 \le x \le 1.
\]
However, because $\theta$ is positive, the value function $F(x)$ must be positive in the interval $[0, 1]$, as well as decreasing with increasing $x$. The function in the above equation is increasing, and negative for $x \in [0, \sqrt{2} - 1)$. Therefore, it must be discarded.
The optimal control is an affine function:
\[
u^*(x) = x + \sqrt{2} - 1 \quad \text{for } 0 \le x \le 1.
\]
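The explicit solution of Case I is easy to verify symbolically; the check below (ours) confirms the ODE residual, the boundary condition and the affine control.

```python
import sympy as sp

x = sp.symbols('x')
F = -x**2 / 2 + (1 - sp.sqrt(2)) * x + sp.sqrt(2) + sp.Rational(1, 2)
# ODE of Case I with theta = lam = b0 = q0 = K0 = 1:  0 = -F'^2/2 + 1 + 1 - F.
residual = -sp.diff(F, x)**2 / 2 + 2 - F
u_star = -sp.diff(F, x)            # u*(x) = -(b0/q0) F'(x), with b0 = q0 = 1
```

The residual simplifies to zero, $F(1) = 1$ holds, and $u^*(x) = x + \sqrt{2} - 1$ as stated.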
Case II. Suppose that, in the previous case, the terminal cost function $K(\cdot)$ is instead $K[X(\tau(x))] = X(\tau(x))$. It follows that
\[
\int F(x+y)\, f_Y(y)\, dy = F(x+c) = x + c.
\]
We must now solve
\[
0 = -\frac{1}{2}\, \frac{b_0^2}{q_0} \left[ F'(x) \right]^2 + \theta + \lambda\, (x + c) - \lambda\, F(x).
\]
Choosing the same constants as those in Case I, together with $c = 2$, the mathematical software program Maple finds that the solution to the above equation is
\[
F(x) = 3 - \frac{1}{2} \left[ \mathrm{LambertW}\!\left( c_1\, e^{-x-1} \right) + 1 \right]^2 + x,
\]
where $c_1$ is an arbitrary constant and $\mathrm{LambertW}$ is a special function defined in such a way that
\[
\mathrm{LambertW}(x)\, e^{\mathrm{LambertW}(x)} = x.
\]
Solving Equation (36) with the software program Mathematica yields an equivalent expression in terms of the ProductLog function, which gives the principal solution for $w$ in the equation $z = w\, e^{w}$. However, both software programs fail to produce the particular solution that satisfies the boundary condition $F(1) = 1$ together with the other requirements of our problem: $F(x)$ must be positive and decreasing for $x \in [0, 1]$.
Making use of the NDSolve function in Mathematica, which employs a numerical technique, we obtain the graph shown in Figure 2. The solution that we are looking for is represented by the solid line. From this function, we can then determine the optimal control $u^*(x)$.
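Alternatively, since the decreasing branch satisfies $F'(x) = -\sqrt{2\,(x + 3 - F(x))}$ with $F(1) = 1$, any standard ODE solver can integrate it backward from the boundary; a sketch using SciPy (our choice of solver and tolerances):

```python
import numpy as np
from scipy.integrate import solve_ivp

# Case II with theta = lam = b0 = q0 = 1 and c = 2: the IDE reduces to
# F'(x)^2 = 2 (x + 3 - F(x)); the decreasing branch is F' = -sqrt(2 (x + 3 - F)).
sol = solve_ivp(lambda t, y: [-np.sqrt(2.0 * (t + 3.0 - y[0]))],
                t_span=(1.0, 0.0), y0=[1.0],
                dense_output=True, rtol=1e-10, atol=1e-12)
F = lambda x: float(sol.sol(x)[0])
```

The resulting function is positive and decreasing on $[0, 1]$, as required.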
Remark 4.
Maple and Mathematica can solve (simple) integral equations, but not integro-differential equations directly. This is the reason why we must try to transform the equation into a (non-linear) ordinary differential equation. There are, however, many papers on numerical solutions of integro-differential equations; see, for example, [15].
Case III. Next, we assume that $Y \sim \mathrm{Exp}(\alpha)$, so that
\[
f_Y(y) = \alpha\, e^{-\alpha y} \quad \text{for } y \ge 0.
\]
We then have
\[
\int F(x+y)\, f_Y(y)\, dy = \int_0^{d-x} F(x+y)\, \alpha\, e^{-\alpha y}\, dy + \int_{d-x}^{\infty} K(x+y)\, \alpha\, e^{-\alpha y}\, dy.
\]
(a) First, we take $K(x) = K_0$ for $x \ge d$, as in Case I. It follows that
\[
\int_{d-x}^{\infty} K(x+y)\, \alpha\, e^{-\alpha y}\, dy = K_0\, e^{-\alpha (d-x)}.
\]
Moreover, we can write that
\[
I(x) := \int_0^{d-x} F(x+y)\, \alpha\, e^{-\alpha y}\, dy = \int_x^d F(z)\, \alpha\, e^{-\alpha (z-x)}\, dz.
\]
We must solve the IDE
\[
0 = -\frac{1}{2}\, \frac{b_0^2}{q_0} \left[ F'(x) \right]^2 + \theta - \lambda\, F(x) + \lambda \left[ I(x) + K_0\, e^{-\alpha (d-x)} \right].
\]
We calculate
\[
I'(x) = \alpha \left[ I(x) - F(x) \right].
\]
Hence, differentiating Equation (43) with respect to $x$, we find that $F(x)$ satisfies the second-order non-linear ODE
\[
0 = -\frac{b_0^2}{q_0}\, F'(x)\, F''(x) - \lambda\, F'(x) + \frac{1}{2}\, \alpha\, \frac{b_0^2}{q_0} \left[ F'(x) \right]^2 - \alpha\, \theta.
\]
This equation is subject to the boundary condition $F(d) = K_0$. Furthermore, we deduce from Equation (43) (and the fact that $F'(x)$ must be negative) that
\[
F'(d) = -\frac{\sqrt{2\, \theta\, q_0}}{b_0}.
\]
Choosing the same constants as those in Case I, together with $\alpha = 1$, we can obtain (using a numerical method) both the value function $F(x)$ and the optimal control $u^*(x)$ in the interval $[0, 1]$; see Figure 3 and Figure 4.
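For instance, the backward integration can be sketched as follows (our choice of solver; the two boundary values at $x = d = 1$ are those stated above):

```python
import numpy as np
from scipy.integrate import solve_ivp

# Case III (a) with theta = lam = alpha = b0 = q0 = K0 = d = 1; the ODE becomes
# 0 = -F' F'' - F' + F'^2 / 2 - 1, i.e.  F'' = (F'^2 / 2 - F' - 1) / F'.
def rhs(x, y):
    F, Fp = y
    return [Fp, (0.5 * Fp**2 - Fp - 1.0) / Fp]

# Boundary values at x = d = 1: F(1) = K0 = 1 and F'(1) = -sqrt(2).
sol = solve_ivp(rhs, (1.0, 0.0), [1.0, -np.sqrt(2.0)],
                dense_output=True, rtol=1e-10, atol=1e-12)
F0, Fp0 = sol.sol(0.0)
u_star_0 = -Fp0                      # optimal control at x = 0 (b0 = q0 = 1)
```

The computed $F$ is positive and decreasing on $[0, 1]$, and $u^*(x) = -F'(x)$ is positive, in agreement with Figures 3 and 4.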
(b) If $K(x) = x$ for $x \ge d$, as in Case II, we calculate
\[
\int_{d-x}^{\infty} K(x+y)\, \alpha\, e^{-\alpha y}\, dy = \frac{\alpha d + 1}{\alpha}\, e^{-\alpha (d-x)}.
\]
It follows that the function $F(x)$ satisfies the IDE
\[
0 = -\frac{1}{2}\, \frac{b_0^2}{q_0} \left[ F'(x) \right]^2 + \theta - \lambda\, F(x) + \lambda \left[ I(x) + \frac{\alpha d + 1}{\alpha}\, e^{-\alpha (d-x)} \right].
\]
Proceeding as in (a), we find that $F(x)$ satisfies Equation (45). However, the second boundary condition becomes
\[
F'(d) = -\frac{\sqrt{2\, q_0}}{b_0} \left[ \theta - \lambda\, d + \lambda\, \frac{\alpha d + 1}{\alpha} \right]^{1/2}.
\]
Hence, with the above constants, we now obtain that $F'(1) = -2$.
Case IV. Finally, we suppose that $Y \sim \mathrm{U}[0, \alpha]$, with $\alpha \le d$. We have
\[
\int F(x+y)\, f_Y(y)\, dy =
\begin{cases}
\dfrac{1}{\alpha} \displaystyle\int_0^{\alpha} F(x+y)\, dy & \text{if } x + \alpha < d, \\[2ex]
\dfrac{1}{\alpha} \displaystyle\int_0^{d-x} F(x+y)\, dy + \dfrac{1}{\alpha} \displaystyle\int_{d-x}^{\alpha} K(x+y)\, dy & \text{if } x + \alpha \ge d.
\end{cases}
\]
(i) When $x + \alpha < d$, we must solve the IDE
\[
0 = -\frac{1}{2}\, \frac{b_0^2}{q_0} \left[ F'(x) \right]^2 + \theta + \frac{\lambda}{\alpha} \int_0^{\alpha} F(x+y)\, dy - \lambda\, F(x) = -\frac{1}{2}\, \frac{b_0^2}{q_0} \left[ F'(x) \right]^2 + \theta + \frac{\lambda}{\alpha} \int_x^{x+\alpha} F(z)\, dz - \lambda\, F(x).
\]
Differentiating with respect to $x$, it follows that $F(x)$ satisfies the equation
\[
0 = -\frac{b_0^2}{q_0}\, F'(x)\, F''(x) + \frac{\lambda}{\alpha} \left[ F(x+\alpha) - F(x) \right] - \lambda\, F'(x).
\]
Now, we can write that
\[
F(x+\alpha) - F(x) = \alpha\, F'(x) + \frac{\alpha^2}{2}\, F''(x) + \cdots,
\]
so that, if $\alpha$ is small,
\[
0 \simeq -\frac{b_0^2}{q_0}\, F'(x)\, F''(x) + \frac{\lambda\, \alpha}{2}\, F''(x).
\]
Assuming that $F''(x) \ne 0$, we thus have
\[
F'(x) \simeq \frac{q_0}{b_0^2}\, \frac{\lambda\, \alpha}{2} \quad \Longrightarrow \quad F(x) \simeq \frac{q_0\, \lambda\, \alpha}{2\, b_0^2}\, x + c_0,
\]
where $c_0$ is an arbitrary constant.
(ii) In the case when $x + \alpha \ge d$, the IDE becomes
\[
\begin{aligned}
0 &= -\frac{1}{2}\, \frac{b_0^2}{q_0} \left[ F'(x) \right]^2 + \theta + \lambda \left[ \frac{1}{\alpha} \int_0^{d-x} F(x+y)\, dy + \frac{1}{\alpha} \int_{d-x}^{\alpha} K(x+y)\, dy \right] - \lambda\, F(x) \\
&= -\frac{1}{2}\, \frac{b_0^2}{q_0} \left[ F'(x) \right]^2 + \theta + \lambda \left[ \frac{1}{\alpha} \int_x^d F(z)\, dz + \frac{1}{\alpha} \int_d^{x+\alpha} K(z)\, dz \right] - \lambda\, F(x).
\end{aligned}
\]
Differentiating with respect to $x$, we have
\[
0 = -\frac{b_0^2}{q_0}\, F'(x)\, F''(x) - \frac{\lambda}{\alpha}\, F(x) + \frac{\lambda}{\alpha}\, K(x+\alpha) - \lambda\, F'(x).
\]
To obtain an approximate expression for the function $F(x)$, we can first try to solve the above ODE, subject to the condition $F(d) = K(d)$ and the value of $F'(d)$ deduced from Equation (56). Next, we can use the continuity of $F(x)$ at $x = d - \alpha$ to determine the value of the constant $c_0$ in Equation (55).
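As an illustration of this matching procedure, suppose (our own choices; the paper leaves $K(\cdot)$ general here) that $K(x) \equiv K_0 = 1$, with $\theta = \lambda = b_0 = q_0 = d = 1$ and $\alpha = 0.2$. Evaluating the IDE (56) at $x = d$ then gives $F'(d) = -\sqrt{2\theta} = -\sqrt{2}$, and the sketch below integrates the region-(ii) ODE backward over $[d - \alpha, d]$ before matching the constant $c_0$ by continuity:

```python
import numpy as np
from scipy.integrate import solve_ivp

alpha, d, K0 = 0.2, 1.0, 1.0         # illustrative choices (K(x) = K0 is ours)

# Region (ii) ODE with all remaining constants equal to 1:
# 0 = -F' F'' - F/alpha + K0/alpha - F'.
def rhs(x, y):
    F, Fp = y
    return [Fp, (K0 / alpha - F / alpha - Fp) / Fp]

# At x = d: F(d) = K(d) = K0 and, from the IDE at x = d, F'(d) = -sqrt(2).
sol = solve_ivp(rhs, (d, d - alpha), [K0, -np.sqrt(2.0)],
                dense_output=True, rtol=1e-9, atol=1e-12)
F_match = sol.sol(d - alpha)[0]

# Region (i) approximation: F(x) ~ (q0 lam alpha / (2 b0^2)) x + c0 = (alpha/2) x + c0;
# continuity at x = d - alpha fixes c0.
c0 = F_match - (alpha / 2.0) * (d - alpha)
```

The matched piecewise function is then a crude approximation of the value function on $[0, d]$, valid only to the extent that the small-$\alpha$ expansion of region (i) applies.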

The Inverse Problem

As we have seen above, finding the value function F ( x ) for a given density function f Y ( y ) is generally a difficult task. To end this paper, we will consider the inverse problem: if we assume that F ( x ) is of a certain form, is there at least one density function f Y ( y ) (and a terminal cost function K) for which the function F ( x ) can indeed be considered as the value function in a particular case of our homing problem?
Case I. The simplest case is of course the one where $F(x)$ is a positive constant $F_0$. Moreover, we assume that $K(x) \equiv F_0$ as well. Then, Equation (29) reduces to
\[
0 = \theta.
\]
Hence, we deduce that $F(x)$ cannot be a constant, unless $\theta = 0$. The corresponding optimal control would be identically $0$. This result is in fact obvious: if $\theta = 0$ and $K(x) \equiv F_0$, then $E[J(x)]$ cannot be smaller than $F_0$, and this value is obtained by choosing $u(x) \equiv 0$. Furthermore, as mentioned earlier, the value function $F(x)$ should be decreasing in $x$ when $\theta > 0$, and therefore cannot be a constant.
Suppose however that we replace the parameter $\theta$ in the cost function $J(x)$ by $\theta\, g[X(t)]$. The IDE (29) then becomes
\[
0 = -\frac{1}{2}\, \frac{b_0^2}{q_0} \left[ F'(x) \right]^2 + \theta\, g(x) + \lambda \int F(x+y)\, f_Y(y)\, dy - \lambda\, F(x).
\]
Assume that $x \ge 0$ and that $K(x) = F_0\, x / d$ (so that $K(d) = F(d) = F_0$, as required). Then, if $Y \sim \mathrm{U}[0, d]$, we obtain the equation
\[
\begin{aligned}
0 &= \theta\, g(x) + \lambda \left\{ F_0\, P[Y < d - x] + \int_{d-x}^{d} K(x+y)\, f_Y(y)\, dy \right\} - \lambda\, F_0 \\
&= \theta\, g(x) - \lambda\, F_0\, P[Y \ge d - x] + \frac{\lambda}{d} \int_{d-x}^{d} K(x+y)\, dy \\
&= \theta\, g(x) - \lambda\, F_0\, \frac{x}{d} + \frac{\lambda\, F_0}{d^2} \int_{d-x}^{d} (x+y)\, dy \\
&= \theta\, g(x) + \frac{\lambda\, F_0}{2\, d^2}\, x^2.
\end{aligned}
\]
Thus, the value function will indeed be equal to the constant $F_0$ (and $u^*(x) \equiv 0$) if and only if
\[
g(x) = -\frac{\lambda\, F_0}{2\, \theta\, d^2}\, x^2.
\]
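The computation leading to this expression for $g(x)$ can be confirmed symbolically; a short check (symbol names ours):

```python
import sympy as sp

x, y, d, lam, theta, F0 = sp.symbols('x y d lam theta F0', positive=True)
g = -lam * F0 * x**2 / (2 * theta * d**2)       # candidate function g(x)
K = F0 * (x + y) / d                            # terminal cost K(x + y) = F0 (x+y)/d
# Jump expectation for Y ~ U[0, d]: F(x + y) = F0 while x + y < d, K(x + y) otherwise.
jump_term = sp.integrate(F0 / d, (y, 0, d - x)) + sp.integrate(K / d, (y, d - x, d))
residual = theta * g + lam * jump_term - lam * F0
```

The residual simplifies to zero, confirming that the modified IDE is satisfied by the constant value function $F(x) \equiv F_0$.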
Note that the aim would then be to keep the controlled process in the interval $[0, d)$ for as long as possible (taking into account the quadratic control costs), rather than to minimize the expected time spent by the process in $[0, d)$.
Case II. Next, we look for value functions of the form $F(x) = F_1\, x + F_0$ for $x \in [0, d]$, where $F_1 < 0$ and $F_0 \ge -F_1\, d$, and we assume that $K(x) = F(x)$. Substituting the expressions for $F(x)$ and $K(x)$ into the IDE (29), we obtain, after simplification, that we must have
\[
0 = \theta - \frac{1}{2}\, \frac{b_0^2}{q_0}\, F_1^2 + \lambda\, F_1\, E[Y].
\]
It follows that this solution is valid for any (positive) random variable $Y$ having an expected value $E[Y]$ such that
\[
F_1 = \lambda\, \kappa\, E[Y] - \sqrt{ \left( \lambda\, \kappa\, E[Y] \right)^2 + 2\, \kappa\, \theta },
\]
where $\kappa := q_0 / b_0^2$. The optimal control would be $u^*(x) \equiv -b_0\, F_1 / q_0 > 0$.
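The expression for $F_1$ is the negative root of the quadratic equation above, which can be confirmed symbolically (symbol names ours):

```python
import sympy as sp

lam, kappa, theta, EY = sp.symbols('lam kappa theta EY', positive=True)
# Equation for F1, with kappa = q0 / b0^2:  0 = theta - F1^2 / (2 kappa) + lam F1 E[Y].
F1 = lam * kappa * EY - sp.sqrt((lam * kappa * EY)**2 + 2 * kappa * theta)
residual = theta - F1**2 / (2 * kappa) + lam * F1 * EY
```

The residual simplifies to zero, and the root is negative for all positive parameter values, as required for a decreasing value function.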
This case could be generalized by taking $K(x) = K_1\, x + K_0$, with $F_1\, d + F_0 = K_1\, d + K_0$.

4. Discussion

In this paper, a homing problem for FPPs was set up and solved explicitly in various particular cases. This is the first time that such problems have been considered for FPPs. These processes have applications in numerous fields, so the results obtained here may prove useful to practitioners in those fields.
To obtain exact solutions to our problem, we had to solve a non-linear integro-differential equation, which is the main limitation of the technique used in this paper. Sometimes, this equation can be transformed into a differential equation. However, solving this differential equation, subject to the appropriate boundary conditions, is in itself a difficult problem.
We can at least try to find approximate analytical expressions for the value function and the optimal control. When this is not possible, we can resort to numerical methods or simulation.
A possible extension of the work presented in this paper would be to treat the case when the parameter $\theta$ in the cost function $J(x)$ is negative, so that there is a reward as long as the controlled process remains in the continuation region. As mentioned above, in the hydrological application of FPPs, the problem could be to maintain the level of a river below a critical value, beyond which the risk of flooding is high, for as long as possible. We could also add noise in the form of a diffusion process to the model. Having time-varying parameters is another possibility; however, the equation satisfied by the value function would then include a partial derivative with respect to time, making the task of finding an explicit solution even more challenging.

Funding

This research was supported by the Natural Sciences and Engineering Research Council of Canada (RGPIN-2021-03795).

Data Availability Statement

Data are contained within the article.

Acknowledgments

The author wishes to thank the anonymous reviewers of this paper for their constructive comments.

Conflicts of Interest

The author declares no conflict of interest.

References

  1. Parzen, E. Stochastic Processes; Holden-Day: San Francisco, CA, USA, 1962.
  2. Lefebvre, M.; Guilbault, J.-L. Using filtered Poisson processes to model a river flow. Appl. Math. Model. 2008, 32, 2792–2805.
  3. Racicot, R.W.; Moses, F. Filtered Poisson process for random vibration problems. J. Eng. Mech. Div. 1972, 98, 159–176.
  4. Hero, A.O. Timing estimation for a filtered Poisson process in Gaussian noise. IEEE Trans. Inf. Theory 1991, 37, 92–106.
  5. Grigoriu, M. Dynamic systems with Poisson white noise. Nonlinear Dyn. 2004, 36, 255–266.
  6. Novikov, A.; Melchers, R.E.; Shinjikashvili, E.; Kordzakhia, N. First passage time of filtered Poisson process with exponential shape function. Probabilist. Eng. Mech. 2005, 20, 57–65.
  7. Yin, Y.-J.; Li, Y.; Bulleit, W.M. Stochastic modeling of snow loads using a filtered Poisson process. J. Cold Reg. Eng. 2011, 25, 16–36.
  8. Theodorsen, A.; Garcia, O.E.; Rypdal, M. Statistical properties of a filtered Poisson process with additive random noise: Distributions, correlations and moment estimation. Phys. Scr. 2017, 92, 054002.
  9. Alazemi, F.; Es-Sebaiy, K.; Ouknine, Y. Efficient and superefficient estimators of filtered Poisson process intensities. Commun. Stat. Theory Methods 2019, 48, 1682–1692.
  10. Grant, J.A.; Szechtman, R. Filtered Poisson process bandit on a continuum. Eur. J. Oper. Res. 2021, 295, 575–586.
  11. Whittle, P. Optimization over Time; Wiley: Chichester, UK, 1982; Volume I.
  12. Whittle, P. Risk-Sensitive Optimal Control; Wiley: Chichester, UK, 1990.
  13. Lefebvre, M.; Yaghoubi, R. Optimal service time distribution for an M/G/1 queue. Axioms 2024, 13, 594.
  14. Bellman, R. Dynamic Programming; Princeton University Press: Princeton, NJ, USA, 1957.
  15. Mohammed, J.K.; Khudair, A.R. Integro-differential equations: Numerical solution by a new operational matrix based on fourth-order hat functions. Partial Differ. Equ. Appl. Math. 2023, 8, 100529.
Figure 1. Value function $F(x)$ for $0 \le x \le 1$ in Case I with $\theta = \lambda = b_0 = q_0 = K_0 = d = 1$.
Figure 2. Value function $F(x)$ for $0 \le x \le 1$ in Case II (solid line) with $\theta = \lambda = b_0 = q_0 = K_0 = d = 1$ and $c = 2$.
Figure 3. Value function $F(x)$ for $0 \le x \le 1$ in Case III (a) with $\theta = \lambda = \alpha = b_0 = q_0 = K_0 = d = 1$.
Figure 4. Optimal control for $0 \le x \le 1$ in Case III (a) with $\theta = \lambda = \alpha = b_0 = q_0 = K_0 = d = 1$.
Share and Cite

Lefebvre, M. Controlled Filtered Poisson Processes. Mathematics 2025, 13, 284. https://doi.org/10.3390/math13020284