Article

Application of the Concept of Statistical Causality in Integrable Increasing Processes and Measures

by Dragana Valjarević 1,2,*, Vladica Stojanović 3 and Aleksandar Valjarević 4
1 Department of Mathematics, Faculty of Sciences and Mathematics, University of Pristina in Kosovska Mitrovica, 38220 Kosovska Mitrovica, Serbia
2 Faculty of Education, University of East Sarajevo, 76300 Bijeljina, Bosnia and Herzegovina
3 Department of Informatics & Computer Sciences, University of Criminal Investigation and Police Studies, 11080 Belgrade, Serbia
4 Faculty of Geography, University of Belgrade, 11000 Belgrade, Serbia
* Author to whom correspondence should be addressed.
Axioms 2024, 13(2), 124; https://doi.org/10.3390/axioms13020124
Submission received: 26 December 2023 / Revised: 11 February 2024 / Accepted: 15 February 2024 / Published: 17 February 2024
(This article belongs to the Special Issue Stochastic and Statistical Analysis in Natural Sciences)

Abstract: In this paper, we investigate an application of the statistical concept of causality, based on Granger's definition of causality, to raw increasing processes as well as to optional and predictable measures. A raw increasing process is optional (predictable) if the bounded (left-continuous) process $X$, associated with the measure $\mu_A(X)$, is self-caused. Also, the measure $\mu_A(X)$ is optional (predictable) if the associated process $X$ is self-caused, under some additional assumptions. Some of the obtained results, stated in terms of self-causality, can be applied directly to give conditions under which an optional stopping time becomes predictable.

1. Introduction

Over the past few decades, scientists have been interested in probabilistic concepts of causality, according to which causes should induce their effects or, more precisely, raise the probability of their effects. This problem has taken a central role in philosophy, and it has wide applications across many different sciences.
The processes that we investigate take place under different conditions, but within the changes there are connections that remain constant. For example, objects released in mid-air under various conditions will usually fall to the ground; but if the object is a piece of paper and a strong breeze is blowing, the piece of paper will almost certainly rise. Hence, we can understand the laws of nature as conditional, because they apply only when unusual or rarely occurring circumstances are neglected. This example is not unique; plenty of different yet similar phenomena can be observed. Based on this type of behaviour, we may suspect that during these processes some of the relationships are consistent and the resulting changes are no coincidence. Such a consistent relationship can lead us to interpret this constancy as necessary; in other words, it could not be otherwise, because it is an essential part of the observed stochastic process. The necessary relationships between objects, events, conditions, or other things at a given time and those at later times are referred to as causal laws.
The causal laws governing a specific problem are usually not known a priori; they are discovered in nature. The existence of a common relationship that holds between two objects regardless of the conditions is the first sign of a potential causal law. When we notice such a relationship, we entertain it as a causal one instead of rejecting it as an arbitrary connection. The next step in showing that the observed regularities are indeed a result of causality is to form a hypothesis about the observed connections, which can help us explain and understand their origin.
The first problem considered was the relationship between causality and the realisation of events through time. For example, before winter, the leaves fall off the trees. Of course, the loss of leaves is not the cause of winter; it is instead an effect of the lowering of the temperature, which first leads to the loss of leaves and later to the coming of winter. Clearly, the concept of a causal relation implies more than just the regular succession of events, in which the preceding event represents the cause and the later event is merely an effect. In this example, the trees lose their leaves before winter comes, yet this is not the cause of winter. Here, it is easy to see what is the cause and what is the effect, but there are phenomena for which this conclusion is not so simple. This is the reason why some causes may be considered spurious.
If we now consider the idea that each occurrence and event has many causes, new problems arise. On the one hand, all events and objects may have some connections, even if those connections are very weak. On the other hand, one could say that everything is connected and that any problem of interest has numerous causes. Usually, however, most of these events have insignificant effects. This is the reason why we define the "significant causes" as those occurrences which have a significant influence on the considered effects [1,2].
Sometimes, causes and effects can be identified by conducting an experiment, but this is not always possible. A well-known example of a science in which controlled experiments cannot be applied, and in which the conditions of a problem cannot be delineated very well, is geology, where one asks: "What could have caused these present structures to be what they are?" Consider, for example, a set of rock layers lying diagonally. Evidently, the layers were deposited horizontally at first, when the surface was at the bottom of a sea or a lake, and were probably later pushed up and deformed by earthquakes or other earth movements. Although this explanation seems very reasonable and credible, there is no way to verify it by controlled experiments or by observations carried out under prescribed conditions, because these processes happened long ago and the tectonic movements involved are not easy to verify.
In this paper, we consider a concept of causality established on the basis of Granger causality, in which a change in probability is understood as a comparison of conditional probabilities. Granger causality is concerned primarily with discrete-time stochastic processes (time series). But in many cases, for example in physics, it is very difficult to consider relations of causality in discrete-time models, and such relations may depend on the length of the sampling interval. The continuous-time framework is fruitful for the statistical approach to causality between rapidly evolving stochastic processes (see [3]), and continuous-time models are seeing increasing use in a range of applications (see, for example, [4,5,6]). This is the reason why we investigate the concept of statistical causality in continuous time. Here, the concept of causality is analysed through conditional independence among σ-fields, which is the foundation of a general probabilistic theory of causality.
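For readers who wish to connect this with the discrete-time origin of the concept, the following sketch runs a classical Granger causality test on two synthetic series. The series construction and the use of the statsmodels package are assumptions of this note, not part of the paper's continuous-time framework.

```python
# A minimal sketch of discrete-time Granger causality testing.
# The synthetic series below are an assumption of this note.
import numpy as np
from statsmodels.tsa.stattools import grangercausalitytests

rng = np.random.default_rng(0)
n = 500
x = rng.normal(size=n)
y = np.zeros(n)
for t in range(1, n):
    # y is driven by the lagged values of x, so x "Granger-causes" y
    y[t] = 0.8 * x[t - 1] + 0.1 * rng.normal()

# Convention: the test asks whether the SECOND column Granger-causes
# the FIRST; small p-values reject non-causality of x for y.
results = grangercausalitytests(np.column_stack([y, x]), maxlag=2)
```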
The paper is organised as follows. After the Introduction, in Section 2 we present a generalisation of the concept of causality, "$\mathbf{G}$ is a cause of $\mathbf{E}$ within $\mathbf{F}$", which involves prediction at any horizon in continuous time. The foundation of this concept is Granger's definition of causality (see [4,5]), and its generalisation, in terms of Hilbert spaces, is formulated in [7,8]. The concept of causality in continuous time associated with stopping times, with some basic properties, is introduced in [9]. The given concept of causality is related to separable processes in [10] and to extremal measures in [11].
Section 3 contains our main results. In it, we relate the given concept of causality in continuous time to the optionality and predictability of a raw increasing process associated with a measure. We also give conditions, in terms of causality, for a stopping time to be predictable, which has important applications in mathematical finance, especially in risk theory. The relation between the concept of causality and optional and predictable measures is considered, too. Section 4 applies the obtained results to risk theory. We end the article with some concluding remarks.

2. Preliminaries and Notation

Causality is clearly a prediction property, and the main question is the following: is it possible to reduce the available information when predicting a given filtration?
A probabilistic model for a time-dependent system is represented by $(\Omega, \mathcal{A}, \mathbf{F}, P)$, where $(\Omega, \mathcal{A}, P)$ is a probability space and $\mathbf{F} = \{\mathcal{F}_t,\ t \in I\}$, $I \subseteq \mathbb{R}_+$, is a "framework" filtration that satisfies the usual conditions of right continuity and completeness. Here, $\mathcal{F} = \bigvee_{t \in I} \mathcal{F}_t$ is the smallest σ-algebra containing all the $\mathcal{F}_t$. Analogous notation is used for the filtrations $\mathbf{H} = \{\mathcal{H}_t\}$, $\mathbf{G} = \{\mathcal{G}_t\}$ and $\mathbf{E} = \{\mathcal{E}_t\}$. The filtration $\mathbf{E}$ is a subfiltration of $\mathbf{G}$, written $\mathbf{E} \subseteq \mathbf{G}$, if $\mathcal{E}_t \subseteq \mathcal{G}_t$ for each $t$. For a stochastic process $X$, $\mathcal{F}_t^X$ denotes the smallest σ-algebra with respect to which all $X_s$, $s \le t$, are measurable, and $\mathbf{F}^X = \{\mathcal{F}_t^X,\ t \in I\}$ is the natural filtration of the process $X$, i.e., the smallest filtration to which $X$ is adapted.
The definition of causality uses the conditional independence of σ-algebras, and the initial step is to construct the "smallest" sub-σ-algebra of a σ-algebra $\mathcal{M}_2$, conditionally on which $\mathcal{M}_2$ becomes independent of another given σ-algebra $\mathcal{M}_1$.
Definition 1
(compare with [12,13]). Let $(\Omega, \mathcal{A}, P)$ be a probability space and $\mathcal{M}_1$, $\mathcal{M}_2$ and $\mathcal{M}$ arbitrary sub-σ-algebras of $\mathcal{A}$. It is said that $\mathcal{M}$ is splitting for $\mathcal{M}_1$ and $\mathcal{M}_2$, or that $\mathcal{M}_1$ and $\mathcal{M}_2$ are conditionally independent given $\mathcal{M}$ (written $\mathcal{M}_1 \perp \mathcal{M}_2 \mid \mathcal{M}$), if
$$E[X_1 X_2 \mid \mathcal{M}] = E[X_1 \mid \mathcal{M}]\, E[X_2 \mid \mathcal{M}],$$
where $X_1$, $X_2$ denote positive random variables measurable with respect to the σ-algebras $\mathcal{M}_1$ and $\mathcal{M}_2$, respectively.
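As a numerical illustration of Definition 1, the following Monte Carlo sketch takes $\mathcal{M}$ to be the σ-algebra generated by a Bernoulli variable Z; the construction is an assumption of this note, chosen so that the two variables are conditionally independent given Z.

```python
# Monte Carlo check of the splitting property of Definition 1:
# E[X1 X2 | M] = E[X1 | M] E[X2 | M], with M = sigma(Z).
# The Bernoulli construction below is an assumption of this note.
import numpy as np

rng = np.random.default_rng(1)
n = 200_000
z = rng.integers(0, 2, size=n)       # Z generates the splitting sigma-algebra M
x1 = z + rng.normal(size=n)          # given Z, the two noises are independent,
x2 = 2 * z + rng.normal(size=n)      # so X1 and X2 are cond. independent given Z

for level in (0, 1):
    m = z == level
    lhs = (x1[m] * x2[m]).mean()        # estimate of E[X1 X2 | Z = level]
    rhs = x1[m].mean() * x2[m].mean()   # estimate of E[X1|Z] E[X2|Z]
    print(f"Z={level}: E[X1 X2|Z] = {lhs:.3f}  vs  E[X1|Z]E[X2|Z] = {rhs:.3f}")
```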
Some elementary properties of this concept are introduced in [14].
Now, we give a definition of causality between σ -algebras using the concept of conditional independence.
Definition 2
(see [5]). Let $\mathcal{M}_1 \subseteq \mathcal{F}$ be a σ-algebra. A σ-algebra $\mathcal{M}_2$ is a sufficient cause of $\mathcal{M}_1$ at time $t$ (relative to $(\Omega, \mathcal{A}, \mathcal{F}_t, P)$), or within $\mathbf{F} = \{\mathcal{F}_t\}$ (written $\mathcal{M}_1 \,|\!<\, \mathcal{M}_2;\ \mathbf{F}$), if and only if
$$\mathcal{M}_2 \subseteq \mathcal{F}_t$$
and
$$\mathcal{M}_1 \perp \mathcal{F}_t \mid \mathcal{M}_2 .$$
The intuitive notion of causality in continuous time, formulated in terms of Hilbert spaces, is investigated in [8]. Now, we consider the corresponding notion of causality for filtrations, based on the conditional independence of sub-σ-algebras of $\mathcal{F}$ (see [12,14]).
Definition 3
(See [7,8]). It is said that $\mathbf{G}$ is a cause of $\mathbf{E}$ within $\mathbf{F}$ relative to $P$ (written $\mathbf{E} \,|\!<\, \mathbf{G};\ \mathbf{F};\ P$) if $\mathcal{E} \subseteq \mathcal{F}$, $\mathbf{G} \subseteq \mathbf{F}$ and if $\mathcal{E}$ is conditionally independent of $\mathcal{F}_t$ given $\mathcal{G}_t$ for each $t$, i.e., $\mathcal{E} \perp \mathcal{F}_t \mid \mathcal{G}_t$ (i.e., $\mathcal{E}_u \perp \mathcal{F}_t \mid \mathcal{G}_t$ holds for each $t$ and each $u$), or
$$(\forall A \in \mathcal{E})\qquad P(A \mid \mathcal{F}_t) = P(A \mid \mathcal{G}_t).$$
Intuitively, $\mathbf{E} \,|\!<\, \mathbf{G};\ \mathbf{F};\ P$ means that all the information about $\mathbf{E}$ provided by $\{\mathcal{F}_t\}$ is contained in $\{\mathcal{G}_t\}$ for arbitrary $t$; equivalently, $\{\mathcal{G}_t\}$ holds all the information from $\{\mathcal{F}_t\}$ needed for predicting $\mathbf{E}$. The subfiltration $\mathbf{G} \subseteq \mathbf{F}$ can be viewed as a reduction of information.
The definition of causality introduced in [5] has the following form: "It is said that $\mathbf{G}$ entirely causes $\mathbf{E}$ within $\mathbf{F}$ relative to $P$ (written $\mathbf{E} \,|\!<\, \mathbf{G};\ \mathbf{F};\ P$) if $\mathbf{E} \subseteq \mathbf{F}$, $\mathbf{G} \subseteq \mathbf{F}$ and if $\mathcal{E} \perp \mathcal{F}_t \mid \mathcal{G}_t$ for each $t$". This definition is close to Definition 3, except that instead of $\mathcal{E} \subseteq \mathcal{F}$ it contains the condition $\mathbf{E} \subseteq \mathbf{F}$, that is, $\mathcal{E}_t \subseteq \mathcal{F}_t$ for each $t$, which has no natural justification. Clearly, Definition 3 is more general than the definition given in [5], so all results related to causality in the sense of Definition 3 hold for the concept of causality introduced in the definition from [5] (p. 3) once the condition $\mathbf{E} \subseteq \mathbf{F}$ is added.
If $\mathbf{E}$ and $\mathbf{G}$ are equal filtrations, so that $\mathbf{E} \,|\!<\, \mathbf{E};\ \mathbf{F};\ P$, we say that $\mathbf{E}$ is self-caused (or its own cause) within $\mathbf{F}$ (compare with [5]). It should be noted that the statement "$\mathbf{E}$ is self-caused" can sometimes be applied in the theory of martingales and stochastic integration (see [15]). The hypothesis $(\mathcal{H})$ introduced in [15] is equivalent to the concept of being "self-caused". Also, let us mention that the notion of being "self-caused", as defined here, is equivalent to the notion of subordination (as introduced in [13]).
If $\mathbf{E}$ and $\mathbf{F}$ are such that $\mathbf{E} \,|\!<\, \mathbf{E};\ \mathbf{E} \vee \mathbf{F}$ (where $\mathbf{E} \vee \mathbf{F}$ is the filtration defined by $(\mathbf{E} \vee \mathbf{F})_t = \mathcal{E}_t \vee \mathcal{F}_t$), we say that $\mathbf{F}$ does not cause $\mathbf{E}$. Clearly, the interpretation of Granger causality is that $\mathbf{F}$ does not cause $\mathbf{E}$ if $\mathbf{E} \,|\!<\, \mathbf{E};\ \mathbf{E} \vee \mathbf{F}$ holds (see [5]). It can be shown that this notion is equivalent to the statement "$\mathbf{F}$ does not anticipate $\mathbf{E}$" (as introduced in [13]).
The given concept of causality can be applied to stochastic processes by considering the corresponding induced (natural) filtrations. For example, an $\{\mathcal{F}_t\}$-adapted stochastic process $X$ is self-caused if $\{\mathcal{F}_t^X\}$ is self-caused within $\{\mathcal{F}_t\}$, i.e., if
$$\mathbf{F}^X \,|\!<\, \mathbf{F}^X;\ \mathbf{F};\ P .$$
A self-caused process $X$ is entirely determined by its behaviour relative to its natural filtration $\mathbf{F}^X$ (see [6]). For example, a process $X = \{X_t,\ t \in I\}$ is a Markov process relative to the filtration $\mathbf{F} = \{\mathcal{F}_t,\ t \in I\}$ on a filtered probability space $(\Omega, \mathcal{A}, \mathbf{F}, P)$ if and only if $X$ is a Markov process with respect to $\mathbf{F}^X$ and it is self-caused within $\mathbf{F}$ relative to $P$. The same holds for Brownian motion $W = \{W_t,\ t \in I\}$: namely, $W$ is a Brownian motion with respect to the filtration $\mathbf{F}$ on a filtered probability space $(\Omega, \mathcal{A}, \mathbf{F}, P)$ if and only if it is self-caused within $\mathbf{F}$ relative to the probability $P$, i.e., $\mathbf{F}^W \,|\!<\, \mathbf{F}^W;\ \mathbf{F};\ P$ (see [6]).
In many situations, we observe certain systems up to some random time, for example, up to the time when something happens for the first time.
Definition 4
([16,17]). A random variable $T$ with values in $\overline{\mathbb{R}}_+$, defined on $(\Omega, \mathcal{A})$, is called an $\mathbf{F}$-stopping time if $\{T \le t\} \in \mathcal{F}_t$ for each $t \ge 0$.
Let us recall some elementary properties of stopping times and the associated σ-algebras. For a given $\{\mathcal{F}_t\}$-stopping time $T$, the associated σ-algebra is defined by $\mathcal{F}_T = \{A \in \mathcal{F} : A \cap \{T \le t\} \in \mathcal{F}_t \text{ for all } t\}$. Intuitively, $\mathcal{F}_T$ is the information available at time $T$. For a process $X$, we set $X_T(\omega) = X_{T(\omega)}(\omega)$ whenever $T(\omega) < +\infty$, and we define the stopped process $X^T = \{X_{t \wedge T},\ t \in I\}$ by
$$X_t^T(\omega) = X_{t \wedge T(\omega)}(\omega) = X_t \chi_{\{t < T\}} + X_T \chi_{\{t \ge T\}} .$$
Note that if $X$ is adapted and cadlag and $T$ is a stopping time, then the stopped process $X^T$ is adapted, too.
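To visualise the stopped process $X^T$, here is a small sketch in which $X$ is a symmetric random walk and $T$ the first hitting time of a level $b$; the walk and the level are assumptions of this note.

```python
# A sketch of a stopped process: X is a symmetric random walk and
# T the first time X reaches the level b (a stopping time, since
# {T <= t} depends only on the path up to time t). The stopped
# path X^T coincides with X before T and stays frozen at X_T after.
import numpy as np

rng = np.random.default_rng(2)
X = np.concatenate([[0], rng.choice([-1, 1], size=1000).cumsum()])

b = 10
hits = np.nonzero(X >= b)[0]
T = hits[0] if hits.size else len(X) - 1   # first hitting time of level b

X_stopped = X.copy()
X_stopped[T:] = X[T]                       # X_t^T = X_{t ∧ T}
print(f"T = {T}, X_T = {X[T]}, frozen after T: {bool(np.all(X_stopped[T:] == X[T]))}")
```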
It is therefore natural to consider causality in continuous time involving stopping times, a class of random variables that plays an essential role in the theory of martingales (for details, see [18,19]). The generalisation of Definition 3 from fixed times to stopping times is introduced in [9]; compared with Definition 3, the definition from [9] reduces the amount of information needed to predict another filtration.
Definition 5
([19,20]). A stopping time $T$ is predictable if there exists a sequence of stopping times $\{T_n\}_{n \ge 1}$ such that $\{T_n\}$ is increasing, $T_n < T$ on $\{T > 0\}$ for all $n$, and $\lim_n T_n = T$ a.s. Such a sequence $\{T_n\}$ is said to announce $T$.
Let us mention that, because of the right continuity of the filtration $\{\mathcal{F}_t\}$ and Theorem 3.27 in [16], we may equivalently use the definition of a predictable time in the sense of Definition 3.25 from [16].
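A standard illustration of Definition 5 (not taken from the paper): every deterministic time is predictable, while the first jump of a Poisson process is not.

```latex
% Every deterministic time T = c > 0 is predictable: the sequence
% T_n = c(1 - 1/n) consists of stopping times, is increasing,
% satisfies T_n < T, and converges to T a.s., so it announces T.
T_n \;=\; c\left(1 - \frac{1}{n}\right), \qquad T_n < T, \qquad T_n \uparrow T \ \ \text{a.s.}
% By contrast, the first jump time of a Poisson process admits no
% announcing sequence, and hence is not predictable.
```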

3. Causality, Increasing Processes, Optional and Predictable Measures

Increasing processes are very important in the general theory of stochastic processes, and an increasing process can sometimes be considered as a random measure on $\mathbb{R}_+$.
Definition 6
([12,21]). An increasing process is any non-negative, $\{\mathcal{F}_t\}$-adapted process $(A_t)$ whose paths are increasing and cadlag.
Therefore, every increasing process is optional.
Remark 1
([12,21]). A stochastic process which is non-negative and whose paths are increasing and cadlag, but which is not necessarily $\{\mathcal{F}_t\}$-adapted, is called a raw increasing process.
Raw increasing processes are not "more general" objects than increasing processes: they are simply the increasing processes for the family of σ-fields $\mathcal{F}_t = \mathcal{F}$, and hence no special theory is needed for them. In the sequel, we also consider such processes, which are not necessarily adapted but whose paths have the above properties.
The increasing process $A$ is called integrable if $E[A_\infty] < \infty$. A process which can be represented as the difference of two increasing processes (resp. integrable increasing processes) is called a process of finite variation (resp. a process of integrable variation). An increasing process $A$ is said to be natural if it is integrable and such that $E\left(\int_0^\infty \Delta M_t \, dA_t\right) = 0$ for any bounded martingale $(M_t)$.
In many situations, it is of interest to investigate processes which are not measurable through time but which have some continuity properties. Left-continuous adapted processes are predictable, while right-continuous adapted processes are optional (for example, a cadlag Feller process or Brownian motion). In mathematical finance, for instance, the consumption plan of an investor choosing a trading strategy is usually described by a non-negative optional process (see [22]). Let us mention that, in general, the integrands of stochastic integrals should be predictable processes; for example, in mathematical finance, the risk premiums associated with the risky Brownian motion are described by a predictable process, where the risky asset price evolution satisfies NFLVR (no free lunch with vanishing risk; see [22]).
The connection between the concept of causality and optional and predictable processes is considered in [23]. Now, we investigate optional and predictable measures in the sense of Definition 3.
By ${}^oX$ we denote the optional projection of the process $X$, and by ${}^pX$ its predictable projection (applications of the concept of causality to optional and predictable projections are given in [23]). According to Remark 45 in [12], if $H$ is a random variable and $(H_t)$ denotes a cadlag version of the martingale $E(H \mid \mathcal{F}_t)$, then the optional projection of the process constant in time and equal to $H$ is the process $(H_t)$, and its predictable projection is $(H_{t-})$. Let us mention that a self-caused process $X$ admits a right-continuous modification and, therefore, for the optional and predictable projections we have ${}^oX_t = X_t$, ${}^pX_t = X_{t-}$.
For a given raw process $A$ and a bounded, measurable process $X$, we set
$$\mu_A(X) = E\left(\int_{[0,\infty)} X_s \, dA_s\right) .$$
Every raw integrable increasing process defines a measure in this way and, according to Theorem 65 in [12], the converse statement holds, too.
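As a concrete instance of the formula above, the following Monte Carlo sketch evaluates $\mu_A(X)$ when $A$ is a unit-rate Poisson process on $[0, 5]$ (so $dA$ charges the jump times) and $X_s = e^{-s}$; the choice of $A$ and $X$ is an assumption of this note. By Campbell's formula, the exact value is $\int_0^5 e^{-s}\,ds = 1 - e^{-5}$.

```python
# Monte Carlo evaluation of mu_A(X) = E[ integral_{[0,inf)} X_s dA_s ]
# for A a unit-rate Poisson process on [0, 5] and X_s = exp(-s).
# The integral against dA reduces to a sum of X over the jump times.
import numpy as np

rng = np.random.default_rng(3)
horizon, n_paths = 5.0, 100_000
estimates = np.empty(n_paths)
for i in range(n_paths):
    n_jumps = rng.poisson(horizon)                       # number of jumps on [0, horizon]
    jump_times = rng.uniform(0, horizon, size=n_jumps)   # given N, jumps are iid uniform
    estimates[i] = np.exp(-jump_times).sum()             # path value of integral X dA

print(f"Monte Carlo mu_A(X) = {estimates.mean():.4f}, exact = {1 - np.exp(-horizon):.4f}")
```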
Theorem 1.
Let $(\Omega, \mathcal{A}, P)$ be a probability space. A raw integrable increasing process $(A_t)$ associated with the measure $\mu_A(X)$ is optional if the bounded process $(X_t)$ is self-caused, i.e., if $\mathbf{F}^X \,|\!<\, \mathbf{F}^X;\ \mathbf{F};\ P$ holds.
Proof. 
Suppose that $(A_t)$ is a raw integrable increasing process. Because of the causality $\mathbf{F}^X \,|\!<\, \mathbf{F}^X;\ \mathbf{F};\ P$ and Theorem 3 in [23], the bounded process $(X_t)$ is optional. Then, for the optional projection of $(X_t)$, we have $E(X_t \mid \mathcal{F}_t) = X_t = {}^oX_t$. Due to Theorem 59 in [12] (p. 124), for the integrable increasing process $(A_t)$ we have
$$E\left(\int_{[0,\infty)} X_s \, dA_s\right) = E\left(\int_{[0,\infty)} {}^oX_s \, dA_s\right) .$$
Therefore, $(A_t)$ is an optional process. □
Example 1.
We can take $X_t = a(t) H_t$, where $a$ is a positive Borel function on $\mathbb{R}_+$ and $(H_t)$ is a cadlag version of the martingale $E(H \mid \mathcal{F}_t)$. According to Theorem 3.1 in [6], $(H_t)$ is a self-caused process, and according to Theorem 1, the increasing process $(A_t)$ is optional. Indeed, we have
$$\mu_A(X) = E\left(\int_{[0,\infty)} X_s \, dA_s\right) = E\left(\int_{[0,\infty)} H_s\, a(s) \, dA_s\right) .$$
According to Theorem 3 in [23], $(H_t)$ is an optional process, and therefore $H_t = {}^oH_t$, so we have
$$\mu_A(X) = E\left(\int_{[0,\infty)} H_s\, a(s) \, dA_s\right) = E\left(\int_{[0,\infty)} {}^oH_s\, a(s) \, dA_s\right) .$$
According to Theorem 65 in [12], $(A_t)$ is an optional process.
Theorem 2.
Let $(\Omega, \mathcal{A}, P)$ be a probability space. A raw integrable increasing process $(A_t)$ associated with the measure $\mu_A(X)$ is predictable if the bounded, left-continuous process $(X_t)$ is self-caused, i.e., if $\mathbf{F}^X \,|\!<\, \mathbf{F}^X;\ \mathbf{F};\ P$ holds.
Proof. 
Suppose that $(A_t)$ is a raw integrable increasing process. Because of the causality $\mathbf{F}^X \,|\!<\, \mathbf{F}^X;\ \mathbf{F};\ P$ and Remark 2 in [23], the bounded, left-continuous process $(X_t)$ is predictable. Then, for the predictable projection of $(X_t)$, we have $E(X_t \mid \mathcal{F}_{t-}) = X_t = {}^pX_t$, since left-continuity gives $X_{t-} = X_t$. Due to Theorem 59 in [12] (p. 124), for the integrable increasing process $(A_t)$ we have
$$E\left(\int_{[0,\infty)} X_s \, dA_s\right) = E\left(\int_{[0,\infty)} {}^pX_s \, dA_s\right) .$$
Therefore, $(A_t)$ is a predictable process. □
Example 2.
We can take $(H_t)$ to be a left-continuous version of the martingale $E(H \mid \mathcal{F}_t)$. According to Theorem 3.1 in [6], $(H_t)$ is a self-caused process, and according to Theorem 2, the increasing process $(A_t)$ is predictable. Indeed, we have
$$\mu_A(H) = E\left(\int_{[0,\infty)} H_s \, dA_s\right) .$$
According to Remark 2 in [23], $(H_t)$ is a predictable process, and therefore $H_t = {}^pH_t$, so we have
$$\mu_A(H) = E\left(\int_{[0,\infty)} H_s \, dA_s\right) = E\left(\int_{[0,\infty)} {}^pH_s \, dA_s\right) .$$
According to Theorem 65 in [12], $(A_t)$ is a predictable process.
Suppose that $X$ is a measurable process. Its predictable projection ${}^pX$ is a predictable process which can be understood as a process "near" to $X$, in the sense that for every predictable stopping time $T$, the expected values of the stopped variables $X_T$ and ${}^pX_T$ are equal. If $X$ is the gain process of some game and $T$ is an exit strategy, then the stopped variable $X_T$ is the value of the game if one plays the exit strategy $T$. A "predictable" stopping time is one that can be foreseen, i.e., announced. Since $X_T$ and ${}^pX_T$ have the same expected value for predictable exit rules, it is irrelevant, on average, whether we play the game $X$ or the predictable game ${}^pX$.
The next lemma is a consequence of the previous theorem and gives conditions, in terms of causality, for a stopping time to be predictable.
Lemma 1.
Let $(\Omega, \mathcal{A}, P)$ be a probability space and $T$ a stopping time. $T$ is a predictable time if every bounded, left-continuous process $(X_t)$ is self-caused, i.e., if $\mathbf{F}^X \,|\!<\, \mathbf{F}^X;\ \mathbf{F};\ P$ holds.
Proof. 
The proof follows directly from Theorem 2, with the optional increasing process $(A_t)$ taken to be of the form $A_t = I_{\{T \le t\}}$. Then, because of Remark 2 in [23], the process $(X_t)$ is predictable, so we have
$$E\left(\int_{[0,\infty)} X_s \, dA_s\right) = E\left(\int_{[0,\infty)} X_s \, dI_{\{T \le s\}}\right) = E\left(\int_{[0,\infty)} {}^pX_s \, dI_{\{T \le s\}}\right) = E\left(\int_{[0,\infty)} {}^pX_s \, dA_s\right) .$$
According to Theorem 2, $(A_t)$ is a predictable process. Therefore, the set $\{(\omega, t) : T(\omega) \le t\}$ is predictable and, according to Definition 3.25 in [16], $T$ is a predictable stopping time. □
Corollary 1.
Let $T$ be a stopping time and $A \in \mathcal{F}_T$. The restriction $T_A$ is a predictable stopping time if every bounded, left-continuous process $(X_t)$ is self-caused.
Proof. 
Let $T$ be a stopping time and $(X_t)$ a left-continuous, bounded and self-caused process. Due to Lemma 1, $T$ is a predictable stopping time. According to Corollary 10.15 in [24], the restriction $T_A$ is predictable, too. □
Theorem 3.
Let $T$ be an optional time. The stopping time $T$ is predictable if and only if the bounded process $(X_t)$ with $E(\Delta X_T) = 0$ is self-caused, i.e., $\mathbf{F}^X \,|\!<\, \mathbf{F}^X;\ \mathbf{F};\ P$.
Proof. 
Let $T$ be a predictable stopping time. Due to Lemma 10.3 in [24], the process $K_t = 1_{\{T \le t\}}$ is predictable. According to Theorem 10.12 in [24], $X$ is a natural process and, by definition, $(X_t)$ is a martingale. Causality then follows from Theorem 4.1 in [6], by which every martingale is a self-caused process, $\mathbf{F}^X \,|\!<\, \mathbf{F}^X;\ \mathbf{F};\ P$.
Conversely, suppose that $(X_t)$ is a bounded, self-caused process with $E(\Delta X_T) = 0$. Because of causality, $(X_t)$ is an $\{\mathcal{F}_t\}$-martingale. Due to Theorem 10.14 in [24], $T$ is a predictable stopping time. □
Assume that $K$ is a local martingale. The jumps $\Delta K$ of $K$ clearly represent an "unpredictable" part of $K$. The basic examples of predictable projections are the identities ${}^pK = K_-$ and ${}^p(\Delta K) = 0$; the latter shows that we cannot "foresee" the size of the jumps of a local martingale.
It is natural to ask whether one can forecast the jumps in stock prices. In mathematical finance, stock prices are usually driven by a local martingale, and obviously no one can predict the jumps in the price processes. Thus, the modest identity ${}^p(\Delta K) = 0$ just mentioned has remarkably significant theoretical and applied implications.
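The following sketch makes the identity ${}^p(\Delta K) = 0$ tangible for the compensated Poisson martingale $K_t = N_t - t$ (a choice made for this note): whatever happened on $[0, 1]$, the increment of $K$ over the next short window has mean zero, so the jumps carry no predictable part.

```python
# K_t = N_t - t, a compensated unit-rate Poisson process, is a
# martingale whose jumps (of size 1) cannot be foreseen: the mean
# increment over a short window is ~0 regardless of the past,
# mirroring the predictable projection identity p(Delta K) = 0.
import numpy as np

rng = np.random.default_rng(4)
dt, n_paths = 0.01, 200_000
past_jump = rng.poisson(1.0, size=n_paths) > 0    # whether K jumped on [0, 1]
increment = rng.poisson(dt, size=n_paths) - dt    # increment of K over (1, 1 + dt]

print(f"E[dK | jump on [0,1]]    = {increment[past_jump].mean():+.5f}")
print(f"E[dK | no jump on [0,1]] = {increment[~past_jump].mean():+.5f}")
```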
We use $\mathcal{M}$ to denote the σ-field of all measurable sets augmented with all the P-evanescent sets.
Definition 7
([12,16]). Let $\mu$ be a measure on $\mathcal{A} \times \mathcal{B}(\mathbb{R}_+)$ which does not charge any evanescent set. The measure $\mu$ is called optional (predictable) if, for every non-negative, bounded, measurable process $X$,
$$\mu(X) = \mu({}^oX) \qquad (\mu(X) = \mu({}^pX)),$$
where $\mu(X) = \int X \, d\mu = E_\mu(X)$.
Obviously, for a bounded, measurable process $X$, we have ${}^oX = E_\mu(X \mid \mathcal{O})$ when $\mu$ is an optional measure and, respectively, ${}^pX = E_\mu(X \mid \mathcal{P})$ when $\mu$ is a predictable measure.
Theorem 4.
Let $(\Omega, \mathcal{A}, P)$ be a probability space. Then, for a self-caused, bounded process $X$ and a positive P-measure $\mu$ on $\mathcal{M}$, there is a raw increasing process $A$ such that the associated measure $\mu_A(X)$ is an optional measure.
Proof. 
Let $(\Omega, \mathcal{A}, P)$ be a probability space and $\mu$ a positive P-measure. According to Theorem 65 in [12], there exists a raw integrable increasing process $A$, unique up to an evanescent process. Then, for every bounded measurable process $X$, the measure $\mu_A(X)$ associated with the process $A$ is defined by
$$\mu_A(X) = E\left(\int_{[0,\infty)} X_s \, dA_s\right) .$$
Let $\mathbf{F}^X \,|\!<\, \mathbf{F}^X;\ \mathbf{F};\ P$ hold. Then, according to Theorem 3 in [23], the process $(X_t)$ is an $\{\mathcal{F}_t\}$-optional process. Therefore, ${}^oX_t = E(X_t \mid \mathcal{F}_t) = X_t$, so
$$\mu_A(X) = E\left(\int_{[0,\infty)} X_s \, dA_s\right) = E\left(\int_{[0,\infty)} {}^oX_s \, dA_s\right) = \mu_A({}^oX) .$$
Due to Definition 5.12 in [16] (p. 141), $\mu_A(X)$ is an optional measure. □
Example 3.
Let $\lambda$ be a P-measure on $\mathcal{O}$. We can define a P-measure $\mu$ on $\mathcal{M}$ by setting $\mu(X) = \lambda({}^oX)$ for every self-caused, bounded measurable process $X$. Then, according to Theorem 4, the measure $\mu_A(X)$ is optional.
Indeed, according to Theorem 3 in [23], a process is optional if it is self-caused; therefore, $X$ is an optional process, so $\mu_A(X) = \lambda({}^oX) = \lambda(X)$. Thus, $\lambda(X)$ is an optional measure, and $\mu_A(X)$ is optional, too.
Theorem 5.
Let $(\Omega, \mathcal{A}, P)$ be a probability space. Then, for a self-caused, left-continuous, bounded process $X$ and a positive P-measure $\mu$ on $\mathcal{M}$, there exists a raw process of integrable variation $A$ such that the associated measure $\mu_A(X)$ is a predictable measure.
Proof. 
Let $(\Omega, \mathcal{A}, P)$ be a probability space and $\mu$ a positive P-measure. According to Theorem 65 in [12], there exists a raw process of integrable variation $A$, unique up to an evanescent process. Then, for every bounded measurable process $X$, the measure $\mu_A(X)$ associated with the process $A$ is defined by
$$\mu_A(X) = E\left(\int_{[0,\infty)} X_s \, dA_s\right) .$$
Let $\mathbf{F}^X \,|\!<\, \mathbf{F}^X;\ \mathbf{F};\ P$ hold. Then, according to Remark 2 in [23], the self-caused, left-continuous process $(X_t)$ is an $\{\mathcal{F}_t\}$-predictable process. Therefore, ${}^pX_t = E(X_t \mid \mathcal{F}_{t-}) = X_{t-} = X_t$, so
$$\mu_A(X) = E\left(\int_{[0,\infty)} X_s \, dA_s\right) = E\left(\int_{[0,\infty)} {}^pX_s \, dA_s\right) = \mu_A({}^pX) .$$
Due to Definition 5.12 in [16], $\mu_A(X)$ is a predictable measure. □

4. Application

Let $(\Omega, \mathcal{A}, \mathcal{F}_t, P)$ be a filtered probability space. Mathematical risk theory investigates stochastic models of risk in finance and insurance. The basic risk model considers the value of a risk portfolio as the sum of premium payments, which increase the value of the portfolio, and claim payouts, which decrease it. Premium payments are received to cover liabilities: expected losses from claim payouts and other costs. Claims result from risk events that occur at random times, and claim payouts are very often modelled by point processes. An important problem in risk theory is the calculation of the probability of ruin, that is, the likelihood that the value of a risk portfolio will ever become negative, and of the time to ruin.
A risk process can be represented as $R_t = u + B_t + N_t + D_t + L_t$, with initial conditions $u > 0$, $B_0 = N_0 = D_0 = L_0 = 0$. The process $B$ is a continuous, predictable process of finite variation which represents the stable income payments together with premiums; $N$ is a continuous local martingale which models a random perturbation; $D$ is a right-continuous jump process which represents accumulated claims and may also contain jumps caused by unanticipated falls or rises in returns on investment; and $L$ is a left-continuous jump process which models gains or losses in returns on investment. All these processes are optional and adapted to the filtration $(\mathcal{F}_t)$.
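To make the risk process concrete, the following simulation sketch uses the classical special case $B_t = ct$, $N = L = 0$ and $D_t = -S_t$ with compound Poisson claims $S$ (a simplification assumed by this note, not the paper's full model), and estimates the ruin probability by Monte Carlo.

```python
# Cramér–Lundberg special case of the risk process: R_t = u + c t - S_t,
# with compound Poisson claims (rate lam, exponential sizes). We estimate
# P(R_t < 0 for some t <= horizon). Ruin can only occur at claim instants,
# since the reserve increases between claims. For these parameters, the
# classical infinite-horizon formula gives psi(10) = (1/1.2) e^{-10/6} ≈ 0.157.
import numpy as np

rng = np.random.default_rng(5)
u, c, lam, mean_claim = 10.0, 1.2, 1.0, 1.0   # capital, premium rate, claim rate/mean
horizon, n_paths = 100.0, 20_000

ruined = 0
for _ in range(n_paths):
    t, claims = 0.0, 0.0
    while True:
        t += rng.exponential(1.0 / lam)          # next claim arrival
        if t > horizon:
            break
        claims += rng.exponential(mean_claim)    # claim size
        if u + c * t - claims < 0:               # reserve below zero: ruin
            ruined += 1
            break

print(f"estimated ruin probability on [0, {horizon:.0f}]: {ruined / n_paths:.4f}")
```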
The measures $\mu^r$ and $\mu^g$ are random measures that describe the jumps of the processes $D_t$ and $L_t$, while the measures $\nu^r$ and $\nu^g$ are the unique predictable measures of the form $\nu(\omega, dt, dx) = dA_t(\omega)\, K(\omega, t, dx)$, where $A$ is an increasing, predictable, right-continuous process (see the Lemma in [25]). Based on the Doob–Meyer decomposition of optional semimartingales, $R$ is a special optional semimartingale adapted to $(\mathcal{F}_t)$.
For the risk process $R$, we introduce the optional cumulant function $G_t(x)$. Let $R_t$ be a self-caused optional process with $\Delta R_T = 0$. Then, $E(\Delta R_T) = 0$ and, based on Theorem 3, $T$ is a predictable stopping time. According to Lemma 5.2.4 in [25], for the optional cumulant process $G$ we have $\Delta G = \int (e^{zx} - 1)\, \nu^r(\{t\}, dx) = 0$ if $\Delta R_T = 0$, which in risk theory means that claim payouts cannot be predicted beforehand. In other words, if the optional semimartingale $R$ is self-caused, then claims cannot be predicted beforehand.
Thus, we have the following proposition.
Proposition 1.
Let $R_t$ be a self-caused process with $\Delta R_T = 0$. Then, claims cannot be predicted beforehand.

5. Conclusions

In this paper, we presented several new results concerning optional and predictable measures, increasing processes and predictable times. Predictable measures are particularly important for the statistical analysis of financial data.
An open question is to consider enlargements of filtrations in the context of their applications in mathematical finance, their connection to predictable and optional measures, and the preservation of these properties of the measures. It would also be of interest to include stopping times and to consider what happens over a finite horizon.

Author Contributions

Conceptualisation, D.V.; methodology, D.V.; investigation, D.V.; writing—original draft preparation, D.V.; formal analysis, D.V.; writing—review and editing, V.S.; visualisation, A.V.; resources, A.V. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

No new data were created or analyzed in this study. Data sharing is not applicable to this article.

Conflicts of Interest

The authors declare no conflicts of interest.

References

  1. Bohm, D. Causality and Chance in Modern Physics; Routledge: London, UK, 1984. [Google Scholar]
  2. Eells, E. Probabilistic Causality; Cambridge University Press: Cambridge, UK, 1996. [Google Scholar]
  3. Comte, F.; Renault, E. Noncausality in Continuous Time Models. Econom. Theory 1996, 12, 215–256. [Google Scholar] [CrossRef]
  4. Granger, C.W.J. Investigating Causal Relations by Econometric Models and Cross Spectral Methods. Econometrica 1969, 37, 424–438. [Google Scholar] [CrossRef]
  5. Mykland, P.A. Statistical Causality; Report No. 14; University of Bergen: Bergen, Norway, 1986. [Google Scholar]
  6. Petrović, L.; Stanojević, D. Statistical Causality, Extremal Measures and Weak Solutions of Stochastical Differential Equations with Driving Semimartingales. J. Math. Model. Algorithms 2010, 9, 113–128. [Google Scholar] [CrossRef]
  7. Gill, J.B.; Petrović, L. Causality and Stochastic Dynamic Systems. SIAM J. Appl. Math. 1987, 47, 1361–1366. [Google Scholar] [CrossRef]
  8. Petrović, L. Causality and Stochastic Realization Problem. Publ. Inst. Math. (Beograd) 1989, 45, 203–212. [Google Scholar]
  9. Petrović, L.; Dimitrijević, S.; Valjarević, D. Granger Causality and stopping times. Lith. Math. J. 2016, 56, 410–416. [Google Scholar] [CrossRef]
  10. Valjarević, D.; Petrović, L. Statistical causality and separable processes. Statist. Probab. Lett. 2020, 167, 108915. [Google Scholar] [CrossRef]
  11. Petrović, L.; Valjarević, D. Statistical causality and extremality of measures. Bull. Korean Math. Soc. 2018, 55, 561–572. [Google Scholar]
  12. Dellacherie, C.; Meyer, P.A. Probability and Potentials; Blaisdell Publishing Company: Waltham, MA, USA, 1966. [Google Scholar]
  13. Rozanov, Y.A. Innovation Processes; V. H. Winston and Sons: New York, NY, USA, 1977. [Google Scholar]
  14. Florens, J.P.; Mouchart, M.; Rolin, J.M. Elements of Bayesian Statistics. Pure and Applied Mathematics, A Series of Monographs and Textbooks; Marcel Dekker Inc.: New York, NY, USA; Basel, Switzerland, 1990. [Google Scholar]
  15. Bremaud, P.; Yor, M. Changes of Filtration and of Probability Measures. Z. Wahrscheinlichkeitstheorie Verwandte Geb. 1978, 45, 269–295. [Google Scholar] [CrossRef]
  16. He, S.W.; Wang, J.G.; Yan, J.A. Semimartingale Theory and Stochastic Calculus; CRC Press: Boca Raton, FL, USA, 1992. [Google Scholar]
  17. Revuz, D.; Yor, M. Continuous Martingales and Brownian Motion; Springer: New York, NY, USA, 2005. [Google Scholar]
  18. Jacod, J.; Shiryaev, A.N. Limit Theorems for Stochastic Processes; Springer: Berlin/Heidelberg, Germany, 2002. [Google Scholar]
  19. Protter, P. Stochastic Integration and Differential Equations; Springer: Berlin/Heidelberg, Germany, 2004. [Google Scholar]
  20. Medvegyev, P. Stochastic Integration Theory; Oxford University Press: Oxford, UK, 2007. [Google Scholar]
  21. Nikeghbali, A. An essay on the general theory of stochastic processes. Probab. Surv. 2006, 3, 345–412. [Google Scholar] [CrossRef]
  22. Jarrow, R. Continuous-Time Asset Pricing Theory: A Martingale-Based Approach; Springer: New York, NY, USA, 2021. [Google Scholar]
  23. Valjarević, D.; Dimitrijević, S.; Petrović, L. Statistical causality and optional and predictable projections. Lith. Math. J. 2023, 63, 104–116. [Google Scholar] [CrossRef]
  24. Kallenberg, O. Foundations of Modern Probability, 3rd ed.; Springer Nature: Cham, Switzerland, 2021. [Google Scholar]
  25. Pak, A. Optional Processes and their Applications in Mathematical Finance, Risk Theory and Statistics. Ph.D. Thesis, Department of Mathematical and Statistical Sciences, University of Alberta, Edmonton, AB, Canada, 2021. [Google Scholar]
