AppliedMath
  • Article
  • Open Access

11 February 2026

General Stochastic Vector Integration: A New Approach

Department of Mathematical Sciences, University of South Africa, Johannesburg 0003, South Africa
* Author to whom correspondence should be addressed.

Abstract

This paper presents a topology-based approach to the general vector-valued stochastic integral for predictable integrands and semimartingale integrators. The integral is defined as a unique mapping that achieves closure under the semimartingale topology. While the topology and the closedness of the integral operator are well known, the method of defining the integral via this mapping is new and offers a significantly more efficient path to understanding the general stochastic integral compared to existing techniques. Instead of defining a basic integral and then extending it through a sequence of case distinctions, our construction performs a single topological closure: we define the vector stochastic integral as the unique continuous extension of the simple-predictable integral under the Émery topology, within the predictable σ-algebra. This single step yields the general predictable, vector-valued integral without invoking semimartingale decompositions, Doob–Meyer, or detours through $H^2$/quasimartingale frameworks and without re-engineering from the componentwise to the vector case.

1. Introduction

In the past few decades, the importance of Stochastic Integration has increased immensely in multiple disciplines. This is partially due to the growth of Mathematical Finance, which relies heavily on Stochastic Calculus for the pricing and hedging of financial derivatives, for risk management, and for portfolio optimization. Additionally, Stochastic Integration is used in many other fields, such as Engineering, Physics, and Biology. For instance, it is used in filtering and control theory to study how random perturbations affect physical phenomena or to model the impact of stochastic variability on population dynamics. Although basic versions of the stochastic integral are usually sufficient for most of the mentioned applications, some cases need a more advanced version. In such scenarios, the stochastic integral should be defined for multidimensional càdlàg processes as integrators, the integral operator should be linear and continuous, and the set of integrands should include all càglàd processes and as many predictable processes as possible. Two exemplary applications of these more general requirements are the two celebrated Fundamental Theorems of Asset Pricing in continuous time for semimartingale models [1,2,3,4,5] and the development of general market models with dividends [6].
There are two alternative prominent approaches for defining the stochastic integral and deriving its properties.
The first, known as the classical method, traces the historical development of the stochastic integral. It began with its definition for Brownian Motion integrators [7,8,9,10,11,12]. Later, with the aid of the Doob–Meyer decomposition [13], it expanded to square integrable martingale integrators [14,15,16] and a broader set of integrands [17,18,19,20]. This definition was extended to include locally bounded predictable integrands and general semimartingale integrators [21,22,23]. Jacod [22,24] introduced the general stochastic integral for unbounded predictable integrands that meet specific integrability conditions. Jacod’s approach centered on the characterization of semimartingale jumps. An alternative method, which provided an equivalent definition of the stochastic integral, was proposed by Chou et al. [23]. One of the compelling attributes of the general semimartingale integral is that the space of integrands is closed under a specific topology, namely the semimartingale or Émery topology [25,26]. However, when the integral is extended to the multidimensional case via componentwise summation, the space of integrands loses this closedness. Therefore, the vector integral must be further expanded to regain closedness, which was performed by Jacod [27], Mémin [28], Dellacherie and Meyer [26], and Shiryaev and Cherny [5]. Our topological approach addresses this loss of closedness directly: by defining $L(X)$ as the closure of $\Lambda^d$ in the Émery topology from the outset, closedness is built into the definition rather than recovered post hoc. The space $L(X)$ is closed by construction, and it properly contains the componentwise integrable processes (see Example 10), thereby circumventing the classical obstruction entirely.
This vector extension is particularly important for applications in mathematical finance and stochastic control, where portfolios and state processes are inherently multidimensional; the classical reconstruction from componentwise integrals requires the bracket factorization $[M] = \pi \cdot C$ and $L^p(M)$ spaces, adding considerable technical overhead.
Over time, several simplifications and shortcuts have been introduced to the classical approach. For instance, employing the Fundamental Theorem of Local Martingales has simplified the extension from square integrable martingales to local martingales and, consequently, to semimartingales [29,30]. However, defining the general stochastic vector integral and its properties is still challenging and entails numerous technical proofs. At some point in the theory, any semimartingale needs to be decomposed into a continuous and a purely discontinuous part. This usually requires more advanced tools, so most authors restrict themselves to continuous integrators. Even in textbooks that treat general semimartingales as integrators, the authors usually only present the case of locally bounded predictable integrands. This simplifies the theory significantly. Not only because locally bounded predictable integrands allow for a straightforward application of the monotone class theorem but also because local martingales are stable under stochastic integration for locally bounded predictable integrands, which avoids the more complicated theory of sigma martingales (see [31,32] for a recent sigma martingale account).
In contrast to the classical approach, the functional analytic approach presents an alternative. This more recent approach is built on a result by Bichteler [33,34] and Dellacherie [35] (first published in Meyer [36] and inspired by results by Pellaumail [37] and Métivier and Pellaumail [38]), stating that semimartingales are the most general class of integrators for which one can define the stochastic integral for predictable integrands in a way that the integral operator is continuous with respect to specific metrics. This was then further developed by Lenglart [39] and Protter [40,41]. Protter [42] even wrote an entire comprehensive textbook based on the functional analytic approach, covering the results from all the above publications.
The functional analytic approach directly formulates the stochastic integral for one-dimensional semimartingale integrators and càglàd integrands by drawing upon techniques from functional analysis. However, for extending the stochastic integral to general predictable integrands, this approach again relies on the decomposition of semimartingales and utilizes similar, very technical methods as the classical approach. Also, the functional analytic method does not address the vector-valued scenario. To extend the integral in this context, one must revert to classical methods again.
Owing to these complexities in both approaches, standard textbooks and university lectures rarely cover the general stochastic integral. As a result, topics like the Fundamental Theorems of Asset Pricing and their associated proofs are typically reserved for experts in stochastic analysis.
In this paper, we introduce a novel approach to defining the stochastic vector integral through the topology of semimartingales. Traditional approaches to stochastic integration, such as the classical method and the extension from càglàd to predictable integrands in the functional analytic approach, rely heavily on the decomposition of semimartingales. However, by utilizing the semimartingale topology, we can bypass this decomposition, allowing for a direct and streamlined definition of the integral. The key novelty of our contribution lies not in proposing a new topology but in introducing a new way to define the stochastic integral that exploits the Émery topology in one continuous extension. This definition of the stochastic integral is essentially new but builds on ideas and results from the works of Chou et al. [23], Jacod [27], Mémin [28], Bichteler [34,43], Shiryaev and Cherny [5], Assefa and Harms [44], and Karandikar and Rao [45]. We emphasize that the resulting integral operator is isomorphic to the standard Jacod/Protter integral; the contribution lies in the efficiency and directness of the derivation, not in extending the class of integrable processes beyond what is already known in advanced theory (as shown in Remark 13). Our construction avoids several classical bottlenecks: it does not require separating the local-martingale and finite-variation parts nor splitting integrators into continuous and purely discontinuous components; it does not rely on the Doob–Meyer or special semimartingale structure; it dispenses with excursions through $H^2$ or quasimartingale machinery for unbounded predictable integrands; and, perhaps most significantly for applied mathematicians, it extends naturally from the scalar to the vector case without any additional componentwise reconstruction, bracket factorization, or measurable-selection apparatus. As a result, our definition shortens proofs while retaining all standard properties of the general stochastic integral.
Moreover, the classical results concerning semimartingale decompositions and their structural properties are quite technical and require long and intricate proofs, which makes them particularly unattractive for university lectures and introductory readings. The new topological approach therefore provides a clean and efficient alternative for deriving results essential to applications such as mathematical finance, where the full machinery of stochastic integration is indispensable.
This paper is organized as follows. In Section 2, we establish our notation and review the needed preliminaries from probability theory, including fundamental definitions related to martingales. The new, topology-based definition of semimartingales is introduced in Section 3, along with several classical characterizations and a discussion of the Bichteler–Dellacherie–Mokobodzki Theorem. Afterwards, Section 4 collects additional semimartingale properties, focusing on localization procedures and changes of measure. The construction of the general stochastic integral appears in Section 5, where topological arguments allow us to bypass the usual decomposition of semimartingales. In Section 6, we investigate the key properties of the stochastic integral. In Section 7, attention shifts to the stability of local martingales under this integral, with specific criteria for preserving their local martingale nature. The discussion in Section 8 covers the multidimensional quadratic variation and its interplay with the semimartingale topology. Section 9 demonstrates how the topological integral is applied in practice through concrete examples involving stochastic differential equations. Section 10 provides a systematic comparison with the classical and functional analytic approaches. Finally, detailed proofs of technical results, including a comprehensive version of the Dominated Convergence Theorem in this setting, are provided in Appendix A.
This topology-based approach simplifies several arguments in the general theory of stochastic integration. In particular, it provides a direct route to defining the multidimensional vector semimartingale integral. It therefore holds significant promise for further applications in financial mathematics, especially in studying the Fundamental Theorems of Asset Pricing and the robust stability of local martingales.
A key point for practitioners is that applications of stochastic integration—such as solving stochastic differential equations (SDEs), pricing financial derivatives, or analyzing filtering problems—rely on the properties of the stochastic integral rather than on the details of its construction. Properties such as linearity, the integration-by-parts formula, Itô’s formula, the dominated convergence theorem, and the stability of local martingales under integration are what matter in applications. Since our topological integral coincides with the classical Jacod/Protter integral (see Remark 13), it inherits all these properties, which we establish in Section 6, Section 7 and Section 8. Consequently, any application that can be carried out with the classical or functional analytic integral can equally be carried out with our topological construction. We illustrate this explicitly in Section 9 with standard examples from SDE theory.

2. Preliminaries

We always assume a complete filtered probability space $(\Omega, \mathcal{F}, P)$, equipped with a filtration $\mathbb{F} = (\mathcal{F}_t)_{t \ge 0}$ satisfying the usual conditions. This means:
  • $\mathcal{F}_0$ includes all P-null sets from $\mathcal{F}$.
  • The filtration $\mathbb{F} = (\mathcal{F}_t)_{t \ge 0}$ is right-continuous, i.e., $\mathcal{F}_t = \bigcap_{u > t} \mathcal{F}_u$ for all $t \ge 0$.
We use the following definitions and notations:
  • For $p \in [1, \infty)$, define the seminorm
    $\|X\|_{L^p} := E\big[ |X|^p \big]^{1/p}.$
    The space $L^p$ is then defined as
    $L^p := \big\{ X : \Omega \to \mathbb{R} \text{ measurable} ;\ \|X\|_{L^p} < \infty \big\}.$
  • A stochastic process X on $(\Omega, \mathcal{F}, P)$ is a collection of $\mathbb{R}$-valued or $\mathbb{R}^d$-valued random variables $(X_t)_{t \ge 0}$.
  • A process X is said to be adapted to F if X t is F t -measurable for each t.
  • Two stochastic processes X and Y are modifications of each other if X t = Y t a.s. for all t, and they are indistinguishable if for almost every ω Ω , X t ( ω ) = Y t ( ω ) for all t.
  • A stopping time T is a non-negative random variable such that $\{T \le t\} \in \mathcal{F}_t$ for each $t \ge 0$.
  • For a stopping time T, the σ-algebra $\mathcal{F}_T$ of events occurring up to time T is the σ-algebra consisting of those events $A \in \mathcal{F}$ with
    $A \cap \{T \le t\} \in \mathcal{F}_t \quad \text{for every } t \ge 0.$
  • A process X is said to be càdlàg (respectively càglàd ) if its paths are right (resp. left) continuous with left (resp. right) limits. We denote the set of càdlàg (resp. càglàd) processes with D (resp. L ).
  • The essential supremum of a random variable X is given by
    $\operatorname*{ess\,sup}_{\omega \in \Omega} X(\omega) = \inf\{ c \in \mathbb{R} ;\ P(X > c) = 0 \}.$
  • The function $1_A$ denotes the indicator function of a subset A:
    $1_A(x) = 1$ if $x \in A$, and $1_A(x) = 0$ otherwise.
  • Let S, T be stopping times. Then, the sets
    $[\![S, T]\!] := \{ (\omega, t) \in \Omega \times \mathbb{R}_+ ;\ S(\omega) \le t \le T(\omega) \}$
    $[\![S, T[\![\ := \{ (\omega, t) \in \Omega \times \mathbb{R}_+ ;\ S(\omega) \le t < T(\omega) \}$
    $]\!]S, T]\!] := \{ (\omega, t) \in \Omega \times \mathbb{R}_+ ;\ S(\omega) < t \le T(\omega) \}$
    $]\!]S, T[\![\ := \{ (\omega, t) \in \Omega \times \mathbb{R}_+ ;\ S(\omega) < t < T(\omega) \}$
    $[\![S]\!] := [\![S, S]\!]$
    are called stochastic intervals.
  • For a stopping time T and a process X, $X^T_t := X_{t \wedge T}$ denotes the stopped process.
  • A family of random variables $(U_\alpha)_{\alpha \in A}$ is uniformly integrable if
    $\lim_{n \to \infty} \sup_{\alpha \in A} \int_{\{ |U_\alpha| \ge n \}} |U_\alpha| \, dP = 0.$
  • A real-valued, adapted process $X = (X_t)_{0 \le t < \infty}$ is called a martingale (resp. supermartingale, submartingale) with respect to the filtration $\mathbb{F}$ if:
    (i)
    $X_t \in L^1(P)$;
    (ii)
    if $s \le t$, then $E[X_t \mid \mathcal{F}_s] = X_s$ a.s. (resp. $\le X_s$, $\ge X_s$).
  • A process X is a local martingale if there exists a sequence of stopping times $(T_n)_{n \in \mathbb{N}}$ increasing to $\infty$ a.s. such that $X^{T_n}$ is a martingale for each n.
  • For a càdlàg process Y, we write $Y_-$ for the process with $Y_{t-} = \lim_{s \uparrow t} Y_s$ (with the convention $Y_{0-} := Y_0$). For products, we set $(XY)_{0-} := X_0 Y_0$, so, in particular, $\Delta X_0 = \Delta Y_0 = 0$.
  • For a càdlàg process X, the jump process $\Delta X$ is defined as $\Delta X_t := X_t - X_{t-}$ with $X_{t-} = \lim_{s \uparrow t} X_s$. Hence, one gets $X = X_- + \Delta X$ and $\Delta X_0 = 0$.
  • A property is said to hold locally (resp. prelocally) for a stochastic process X if there exists a localizing sequence of stopping times $(T_n)_{n \in \mathbb{N}}$ such that the property holds for each $X^{T_n}$ (resp. $X^{T_n -}$) almost surely.
  • The predictable σ-algebra is the smallest σ-algebra making all left-continuous, adapted processes measurable. It is denoted by $\mathcal{P}$. A predictable process is a stochastic process $X = (X_t)_{t \ge 0}$ such that the mapping
    $(t, \omega) \mapsto X_t(\omega)$
    is measurable with respect to the predictable σ-algebra $\mathcal{P}$ on $\Omega \times [0, \infty)$ and the Borel σ-algebra on $\mathbb{R}$. The set of all predictable processes is also denoted by $\mathcal{P}$.
  • A càdlàg process A is an FV process if almost all of its paths have finite variation on each compact interval.
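The measure-theoretic notions above are easy to sanity-check on a finite sample space. The following minimal Python sketch computes the essential supremum as $\inf\{c ;\ P(X > c) = 0\}$; the outcome names, probabilities, and values are illustrative, not taken from the paper.

```python
from fractions import Fraction

# Illustrative finite probability space: three outcomes with given probabilities.
prob = {"w1": Fraction(1, 2), "w2": Fraction(1, 4), "w3": Fraction(1, 4)}
X = {"w1": 3.0, "w2": 7.0, "w3": 5.0}

def ess_sup(X, prob):
    # inf{c : P(X > c) = 0}; on a finite space it suffices to scan the
    # attained values of X from below until the tail probability vanishes.
    for c in sorted(set(X.values())):
        if sum(p for w, p in prob.items() if X[w] > c) == 0:
            return c
    return float("inf")

print(ess_sup(X, prob))  # 7.0
```

Since every outcome here has strictly positive probability, the essential supremum coincides with the ordinary maximum; modifying X on a P-null set would not change the result.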
Theorem 1 
([46], Section 2). The predictable σ-algebra $\mathcal{P}$ on $\Omega \times [0, \infty)$ can be characterized equivalently in the following ways:
(i) 
P is the smallest σ-algebra such that all left-continuous, adapted processes are measurable.
(ii) 
P is the σ-algebra generated by all continuous, adapted processes.
Definition 1. 
An adapted process $B = (B_t)_{0 \le t < \infty}$ taking values in $\mathbb{R}^n$ is called an n-dimensional Brownian motion if
(i)  
for $0 \le s < t < \infty$, $B_t - B_s$ is independent of $\mathcal{F}_s$ (increments are independent of the past);
(ii) 
for $0 \le s < t < \infty$, $B_t - B_s$ is a Gaussian random variable with mean zero and variance matrix $(t - s) C$ for a given, non-random matrix C.
The Brownian motion starts at x if P ( B 0 = x ) = 1 .
Theorem 2 
([42], Theorem I.26). Let B be a Brownian motion. Then, there exists a modification of B which has continuous paths almost surely.
We assume that all Brownian motions discussed have continuous paths and, unless stated otherwise, the covariance matrix C is the identity matrix, making it a standard Brownian motion.
Theorem 3 
([42], Theorem I.12). Let X be a right-continuous martingale which is uniformly integrable. Then, $Y = \lim_{t \to \infty} X_t$ exists a.s., $E[|Y|] < \infty$, and one says Y closes X as a martingale.
Theorem 4 
([42], Theorem I.13). Let X be a (right-continuous) martingale. Then, $(X_t)_{0 \le t < \infty}$ is uniformly integrable if and only if $Y = \lim_{t \to \infty} X_t$ exists a.s., $E[|Y|] < \infty$, and $(X_t)_{0 \le t \le \infty}$ is a martingale, where $X_\infty = Y$.
The following lemma is rarely explicitly mentioned in the literature. It is, however, crucial for our approach.
Lemma 1 
([47], Theorem 7.38). Let M be a local martingale, T a stopping time, and ξ a real $\mathcal{F}_T$-measurable random variable. Then,
$N = \xi \left( M - M^T \right)$
is a local martingale.
The following theorem goes back to Burkholder [48]. Updated versions with notations aligned to stochastic integration and simplified proofs can be found in [46] (Theorem 47), [49] (Corollaire VIII-3-14), and [50]; also see [51] for similar results.
Theorem 5 
([49], Corollaire VIII-3-14). Let $(M_t)_{t \in \mathbb{N}}$ be a time-discrete martingale and $(H_t)_{t \in \mathbb{N}}$ a predictable process with $\sup_t |H_t| < 1$ almost surely. For a sequence of time-discrete random times $(T_n)_{n \in \mathbb{N}}$ and $t \in \mathbb{N}$, define
$(H \cdot M)_t := \sum_{k=1}^{t} H_{k-1} \left( M_k - M_{k-1} \right),$
where the $H_k$ are $\mathcal{F}_k$-measurable random variables. Then, for any $T \in \mathbb{N}$, we have
$P\Big( \sup_{n \le T} |(H \cdot M)_n| > \alpha \Big) \le \frac{18}{\alpha}\, E|M_T|$
for all $\alpha > 0$. If M is positive, then 18 can be replaced by 9. If M can be closed by a random variable $M_\infty$, we can replace T by $\infty$.
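The discrete-time martingale transform in Theorem 5 can be explored numerically. The following Python sketch enumerates all paths of a simple symmetric random walk (the walk length and the predictable strategy are illustrative choices, not from the paper), confirms that the transform has mean zero, and checks the weak-type tail bound for one value of α.

```python
from itertools import product

T = 8  # illustrative walk length; M_k is a sum of k iid +-1 steps, M_0 = 0

def H(prev_steps):
    # hypothetical predictable strategy with |H| <= 1: bet against the last step
    return -prev_steps[-1] if prev_steps else 1

def transform_path(steps):
    m, hm, sup_hm = 0, 0, 0
    for k, s in enumerate(steps):
        hm += H(steps[:k]) * s   # H_{k}(M_{k+1} - M_k); H_k sees only the past
        m += s
        sup_hm = max(sup_hm, abs(hm))
    return m, hm, sup_hm

paths = list(product([-1, 1], repeat=T))  # each path carries probability 2^-T
e_hm = sum(transform_path(p)[1] for p in paths) / len(paths)

alpha = 2.0
lhs = sum(1 for p in paths if transform_path(p)[2] > alpha) / len(paths)
rhs = (18 / alpha) * sum(abs(transform_path(p)[0]) for p in paths) / len(paths)
print(e_hm, lhs <= rhs)  # prints 0.0 True
```

The exact enumeration shows that the transform of a martingale by a bounded predictable strategy is again a (mean-zero) martingale, and the tail probability of its running supremum stays below the Burkholder-type bound $\frac{18}{\alpha} E|M_T|$.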

3. Semimartingales

Definition 2. 
(a) 
For a d-dimensional stochastic process
$X = (X^1, \dots, X^d),$
we put
$X^*_t = \max_{i \in \{1, \dots, d\}} \sup_{s \in [0, t]} |X^i_s| \quad \text{and} \quad X^* = \sup_{t \ge 0} X^*_t.$
(b) 
A d-dimensional stochastic process $H : \Omega \times \mathbb{R}_+ \to \mathbb{R}^d$ is called simple predictable if it can be represented as
$H_t(\omega) = H_0(\omega)\, 1_{\{0\}}(t) + \sum_{i=1}^{n} H_i(\omega)\, 1_{]\!]T_i, T_{i+1}]\!]}(\omega, t), \qquad (1)$
where $n \in \mathbb{N}$, $(T_i)_{i = 1, \dots, n+1}$ are stopping times with $0 = T_1 \le T_2 \le \dots \le T_{n+1} < \infty$, and $H_i = (h^1_i, \dots, h^d_i)$ are $\mathcal{F}_{T_i}$-measurable $\mathbb{R}^d$-valued random variables that are almost surely bounded in all components for all $i = 1, \dots, n$.
The set of $\mathbb{R}^d$-valued simple predictable processes will be denoted by $\Lambda^d$.
(c) 
For $H \in \Lambda^d$, we define
$\|H\|_u := \operatorname*{ess\,sup}_{\omega \in \Omega} \max_{j \in \{1, \dots, d\}} \max_{i \in \{1, \dots, n\}} |h^j_i| = \operatorname*{ess\,sup}_{\omega \in \Omega} (H^*)$
and we let $\Lambda^d_u$ denote $\Lambda^d$ equipped with the topology induced by $\|\cdot\|_u$ (i.e., uniform convergence in the essential supremum norm): we write $H^n \to H$ in $\Lambda^d_u$ if $H, H^n \in \Lambda^d$ and $\|H^n - H\|_u \to 0$.
Remark 1. 
A stochastic process is a d-dimensional simple predictable process if and only if each component is a one-dimensional simple predictable process.
Remark 2. 
For each predictable process H, there exists a sequence of simple predictable processes $(H^n)_{n \in \mathbb{N}}$ that converges pointwise to a process that is indistinguishable from H. This is a simple consequence of the monotone class theorem (see, for example, [52] (Corollary 7.4.3)).
The definition of the stochastic integral for simple predictable integrands is quite intuitive.
Definition 3. 
Let X be an $\mathbb{R}^d$-valued stochastic process with càdlàg paths and let $H \in \Lambda^d$ be of the form (1). We define the stochastic integral $J_X : \Lambda^d \to D$ by
$J_X(H) := H \cdot X := H_0 X_0 + \sum_{i=1}^{n} H_i \big( X^{T_{i+1}} - X^{T_i} \big).$
The processes H and X are called the integrand and the integrator.
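For a path sampled at finitely many times and deterministic jump times, Definition 3 reduces to a finite sum, which the following minimal Python sketch evaluates at the terminal time (the grid, jump times, and values are illustrative).

```python
# A minimal sketch of Definition 3 at the terminal time: H is simple predictable,
# constant on each interval (Ti, Ti1] (all numbers below are illustrative).
def simple_integral(H0, pieces, X):
    """pieces = [(Ti, Ti1, Hi)] means H equals Hi on the interval (Ti, Ti1];
    X maps a time to the value of the path there. Returns
    H0 * X_0 + sum_i Hi * (X_{Ti1} - X_{Ti})."""
    total = H0 * X[0.0]
    for ti, ti1, hi in pieces:
        total += hi * (X[ti1] - X[ti])
    return total

X = {0.0: 1.0, 1.0: 3.0, 2.0: 2.0}            # a cadlag path sampled at 0, 1, 2
pieces = [(0.0, 1.0, 2.0), (1.0, 2.0, -1.0)]  # H = 2 on (0, 1], H = -1 on (1, 2]
print(simple_integral(0.0, pieces, X))  # 0*1 + 2*(3-1) + (-1)*(2-3) = 5.0
```

With random stopping times, the same formula is applied pathwise; only the bookkeeping of the $T_i$ changes.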
For simplicity, we will treat processes and their equivalence classes interchangeably. Consequently, the subsequent metric spaces should be viewed as quotient spaces.
Definition 4. 
(a) 
For any left- or right-continuous process X, we put
$\|X\|_{ucp} := \sum_{n=1}^{\infty} \frac{1}{2^n}\, E\big[ X^*_n \wedge 1 \big].$
Then, the topology induced by the metric $d_{ucp}(X, Y) := \|X - Y\|_{ucp}$ is the topology of ucp convergence.
(b) 
The Émery norm of a càdlàg process X is defined as
$\|X\|_S := \sup_{K \in \Lambda^d,\ \|K\|_u \le 1} \|K \cdot X\|_{ucp}.$
The topology induced by the metric $d_S(X, Y) := \|X - Y\|_S$ is called the semimartingale topology or Émery topology.
(c) 
For sequences $(H^n)_{n \in \mathbb{N}}$, we write $H^n \xrightarrow{ucp} H$ if the sequence converges to a process H in the topology of ucp convergence. We use $X^n \xrightarrow{S} X$ analogously.
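For a deterministic path, the expectation in $\|\cdot\|_{ucp}$ drops out and the defining series can be summed directly. A small Python sketch, assuming the illustrative path $X_t = t/2$ (so that $X^*_n = n/2$):

```python
def ucp_norm_deterministic(running_sup, n_terms=60):
    """||X||_ucp = sum_{n>=1} 2^-n E[X_n* ^ 1]; for a deterministic path the
    expectation drops out and X_n* = sup_{s <= n} |X_s| is supplied directly.
    Truncating the series at n_terms leaves an error below 2^-n_terms."""
    return sum(2.0 ** -n * min(running_sup(n), 1.0) for n in range(1, n_terms + 1))

# Illustrative path X_t = t / 2, so X_n* = n / 2 and
# ||X||_ucp = (1/2)*(1/2) + sum_{n>=2} 2^-n = 3/4 (up to series truncation).
val = ucp_norm_deterministic(lambda n: n / 2)
print(val)  # approximately 0.75
```

The truncation $\wedge 1$ is visible here: every term is capped at $2^{-n}$, so $\|X\|_{ucp} \le 1$ no matter how large the path gets, which is exactly the source of the non-homogeneity discussed in Remark 3 below.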
Remark 3. 
Even though the name suggests otherwise, the Émery norm is not a norm but only a metric (see, for example, [52] (page 278) for elaborations).
Remark 4. 
A sequence of processes $(X^n)_{n \in \mathbb{N}}$ converges in the ucp topology to a process X if and only if we have
$P\big( (X^n - X)^*_t > K \big) \to 0 \quad \text{as } n \to \infty \text{ for each } t, K > 0.$
This explains the name ucp (uniformly on compacts in probability) convergence.
Remark 5. 
In the definition of $X^*_t$, the supremum is taken over the uncountable set $[0, t]$. This means that it might not be measurable for arbitrary processes. However, by limiting ourselves to either left-continuous or right-continuous processes, the supremum can be taken over the countable set of rational times, ensuring measurability. Additionally, it has been demonstrated that the supremum is measurable for jointly measurable processes, provided the probability space is complete. For further details and an example of a process that “behaves badly”, refer to Pollard et al. [53] (page 214).
In most publications, a semimartingale is defined to be a process which can be written as the sum of a local martingale and an FV process. In some publications, a semimartingale is defined as a process under which the integral operator is continuous. We give a different definition and will see later that all of these definitions are equivalent. For clarity, we will use the term topological semimartingale to highlight the different definition.
Definition 5. 
An $\mathbb{R}^d$-valued stochastic process $X = (X_t)_{t \ge 0}$ is called a topological semimartingale if:
(a) 
X is càdlàg and adapted,
(b) 
$\|X\|_S < \infty$, and
(c) 
for every sequence $(\lambda_n) \subset \mathbb{R}$ with $\lambda_n \to 0$, we have $\|\lambda_n X\|_S \to 0$.
The set of all d-dimensional topological semimartingales is denoted by $S^d$, and we write $S^d_e$ for the space $S^d$ endowed with the semimartingale topology.
Remark 6. 
At first sight, (c) may look redundant: for a genuine norm, one would have $\|\lambda X\| = |\lambda|\, \|X\| \to 0$ as $\lambda \to 0$. The point is that the so-called Émery “norm” $\|\cdot\|_S$ is not a norm (indeed, not even a homogeneous seminorm); it is a metric generating the semimartingale (Émery) topology. (Recall $\|X\|_{ucp} = \sum_n 2^{-n} E[X^*_n \wedge 1]$ and $\|X\|_S := \sup_{\|K\|_u \le 1} \|K \cdot X\|_{ucp}$. The truncation $\wedge 1$ breaks homogeneity.) More precisely, the space $S^d_e$ is an F-space (a complete, metrizable topological vector space) but not a locally convex space; the truncation $x \mapsto x \wedge 1$ is bounded, so $\|c X\|_{ucp} \le 1$ regardless of c, which destroys the scaling property $\|c X\| = |c|\, \|X\|$ characteristic of norms (see, e.g., [54] for background on F-spaces). Consequently, continuity of scalar multiplication at 0 is not automatic and must be imposed; this is exactly what (c) does. Together with the (translation-invariant) metric structure, (c) ensures that $S^d_e$ is a topological vector space. In particular, without (c), the map $(\lambda, X) \mapsto \lambda X$ need not be (jointly) continuous in the product topology.
The necessity of (c) is not merely formal: there exist càdlàg processes X such that $\|\lambda_n X\|_S \not\to 0$ for some $\lambda_n \to 0$; see Example 3 for a stochastic example (fractional Brownian motion with $H < \frac{1}{2}$) and Example 2 for a deterministic pure-jump example. Moreover, in Theorem 6, we show that this seemingly minimal definition still recovers the functional analytic characterization of semimartingales as “good integrators”. The deeper reason why continuity at 0 is essential for our “topological semimartingale” definition is that it guarantees that the integral operator $J_X : H \mapsto H \cdot X$ inherits the continuity properties needed for the closure argument. Without (c), the map $(\lambda, H) \mapsto \lambda H \cdot X$ could fail joint continuity, which would prevent us from extending the integral from simple to general predictable integrands via a single topological closure. In essence, (c) ensures that the stochastic integral behaves like a genuine continuous linear operator on the integrand space, which is the cornerstone of our construction.
Remark 7. 
With our (standard) choice of the ucp metric using the truncation $y \mapsto y \wedge 1$, we always have $\|Y\|_{ucp} \le 1$ for any càdlàg process Y, and therefore, $\|X\|_S \le 1$ for any càdlàg X. Hence, item (b) is automatically satisfied; we keep it only to stress that $\|\cdot\|_S$ controls the topology rather than the magnitude. If one removes the truncation and works with the untruncated family
$\|X\|^{\mathrm{untr}}_{S,t} := \sup_{K \in \Lambda^1,\ \|K\|_u \le 1} E\Big[ \sup_{s \le t} |(K \cdot X)_s| \Big], \quad t > 0,$
then one can construct càdlàg processes with $\|X\|^{\mathrm{untr}}_{S,t} = +\infty$; see the following example.
Example 1. 
Let $t_n := 2^{-n}$ and define a (deterministic, hence adapted) càdlàg pure-jump process
$X_t := \sum_{n=1}^{\infty} \frac{(-1)^n}{n}\, 1_{\{t \ge t_n\}}, \quad t \ge 0.$
Then, $X_t$ is finite for every t since the alternating harmonic series $\sum_{n=1}^{\infty} (-1)^n / n$ converges. However, X has infinite total variation on $[0, 1]$: $\sum_{n \ge 1} |\Delta X_{t_n}| = \sum_{n \ge 1} \frac{1}{n} = +\infty$.
Now, set the simple predictable integrand
$K_t := \sum_{n=1}^{\infty} \operatorname{sgn}(\Delta X_{t_n})\, 1_{]\!]t_{n+1}, t_n]\!]}(t), \quad \text{so that } \|K\|_u = 1.$
For any $t \in (0, 1]$, we have
$(K \cdot X)_t = \sum_{t_n \le t} \operatorname{sgn}(\Delta X_{t_n})\, \Delta X_{t_n} = \sum_{t_n \le t} |\Delta X_{t_n}| = \sum_{\{n :\ 2^{-n} \le t\}} \frac{1}{n} = +\infty.$
Hence, $\sup_{s \le 1} |(K \cdot X)_s| = +\infty$, and, consequently, $\|X\|^{\mathrm{untr}}_{S,1} = +\infty$. By contrast, under our truncated definition of $\|\cdot\|_S$ used in this paper, one always has $\|X\|_S \le 1$, so the phenomenon $\|X\|_S = +\infty$ cannot occur. However, it turns out that X is still not a semimartingale, as (c) is not satisfied (see Example 2).
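The three series in Example 1 can be checked numerically. A short Python sketch (the truncation level N is an arbitrary illustrative choice):

```python
import math

# Numerical companion to Example 1 (jump (-1)^n / n at t_n = 2^-n): X_1 is a
# convergent alternating series, while the total variation on [0, 1], which
# also equals (K . X)_1 for the sign-flipping integrand K, diverges.
N = 100_000  # illustrative truncation level
X1 = sum((-1) ** n / n for n in range(1, N + 1))   # partial sum of X_1, -> -ln 2
var = sum(1 / n for n in range(1, N + 1))          # harmonic partial sum, unbounded
print(round(X1, 6), round(var, 2))
# X_1 tends to -ln 2 = -0.693147..., while the variation grows like ln N.
```

Raising N makes the first partial sum stabilize and the second grow without bound, which is exactly the dichotomy the example exploits.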
Example 2. 
Let X be the deterministic pure-jump process from Example 1, i.e.,
$X_t = \sum_{n=1}^{\infty} \frac{(-1)^n}{n}\, 1_{\{t \ge t_n\}}, \qquad t_n := 2^{-n}.$
For $m \in \mathbb{N}$, choose $N_m$ so large that the harmonic sum
$H_{N_m} := \sum_{n=1}^{N_m} \frac{1}{n} \ge m.$
Define the simple predictable integrand
$\hat{K}^{(m)}_t := \sum_{n=1}^{N_m} \operatorname{sgn}(\Delta X_{t_n})\, 1_{]\!]t_{n+1}, t_n]\!]}(t), \quad \text{so that } \|\hat{K}^{(m)}\|_u = 1,$
and set $\lambda_m := 1/m$. Then, at $t = 1$,
$\big( \hat{K}^{(m)} \cdot (\lambda_m X) \big)_1 = \lambda_m \sum_{n=1}^{N_m} \operatorname{sgn}(\Delta X_{t_n})\, \Delta X_{t_n} = \frac{1}{m} \sum_{n=1}^{N_m} |\Delta X_{t_n}| = \frac{1}{m} \sum_{n=1}^{N_m} \frac{1}{n} \ge 1.$
The key observation is that the integrand $\hat{K}^{(m)}$ is designed to flip all signs of the alternating jumps $\Delta X_{t_n} = (-1)^n / n$ to positive, so the stochastic integral accumulates the absolute values $|\Delta X_{t_n}| = 1/n$. Since $N_m$ is chosen so that $\sum_{n=1}^{N_m} 1/n \ge m$, we obtain a deterministic lower bound $\big( \hat{K}^{(m)} \cdot (\lambda_m X) \big)_1 \ge 1$ that holds for every ω. Hence, $\sup_{s \le 1} \big| \big( \hat{K}^{(m)} \cdot (\lambda_m X) \big)_s \big| \ge 1$ deterministically, and therefore
$\|\lambda_m X\|_S = \sup_{\|K\|_u \le 1} \|K \cdot (\lambda_m X)\|_{ucp} \ge \|\hat{K}^{(m)} \cdot (\lambda_m X)\|_{ucp} \ge \frac{1}{2}$
(for the ucp metric, the $n = 1$ term contributes $2^{-1} E[(\cdot)^*_1 \wedge 1] = \frac{1}{2}$). Thus, $\|\lambda_m X\|_S \not\to 0$ as $m \to \infty$, so item (c) is violated. In particular, X is not a topological semimartingale; by Theorem 8, it is not a classical semimartingale either.
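The construction in Example 2 is easy to reproduce numerically: taking $N_m$ as the first index at which the harmonic sum reaches m, the value $(\hat{K}^{(m)} \cdot (\lambda_m X))_1 = H_{N_m} / m$ stays at least 1 for every m. A Python sketch:

```python
# Numerical companion to Example 2: N_m is the first index with harmonic sum
# H_{N_m} >= m; then (K^(m) . (lambda_m X))_1 = H_{N_m} / m >= 1 for every m,
# so lambda_m X cannot converge to 0 in the semimartingale topology.
def first_N(m):
    h, n = 0.0, 0
    while h < m:
        n += 1
        h += 1.0 / n
    return n, h

lower_bounds = []
for m in range(1, 8):
    N_m, H_Nm = first_N(m)
    lower_bounds.append(H_Nm / m)   # the deterministic lower bound from the text
    print(m, N_m, round(H_Nm / m, 4))
```

Note how quickly $N_m$ grows (roughly like $e^m$): the scaling $\lambda_m = 1/m$ is defeated only because the integrand is allowed to pick up ever more jumps, which is precisely why homogeneity fails for $\|\cdot\|_S$.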
Remark 8. 
More generally, a deterministic càdlàg process is a semimartingale if and only if it has finite variation on compacts; the above X has infinite total variation on $[0, 1]$, hence fails this criterion.
Probably the most prominent example of a stochastic process that is not a semimartingale is the fractional Brownian motion $B^H$ with Hurst parameter $H \in (0, 1) \setminus \{\frac{1}{2}\}$ (see, for example, [55] for a comprehensive treatment). The fact that a fractional Brownian motion cannot be a semimartingale for $H \ne \frac{1}{2}$ has been proved by several authors (see, for example, Liptser and Shiryaev [56] (Example 2 of Section 4.9.13), Lin [57] (Corollary 2.2), or Rogers [58] (Section 2)). However, these authors use the classical definition of semimartingales. With our definition, it is possible to present an alternative and short proof for $H < \frac{1}{2}$.
Example 3. 
Let X be a fractional Brownian motion on the interval $[0, T]$ with $X_0 = 0$. Furthermore, let π be the set of partitions of the interval $[0, T]$.
For a partition $\pi_0 = \{ t_0 = 0, \dots, t_n = T \} \in \pi$ and $N \in \mathbb{N}$, we define
$H^{\pi_0, N} = -\sum_{j=0}^{n-1} 1_{\{ |X_{t_j}| \le N \}}\, \frac{X_{t_j}}{N}\, 1_{(t_j, t_{j+1}]}.$
Then, $H^{\pi_0, N} \in \Lambda^1$ with $\|H^{\pi_0, N}\|_u \le 1$ and for $\lambda > 0$ on the set
$A^{\pi_0, N, \lambda} := \big\{ X^*_T \le N \big\} \cap \Big\{ \sum_{j=0}^{n-1} (X_{t_{j+1}} - X_{t_j})^2 > \frac{2N}{\lambda} + N^2 \Big\}$
we have
$\big( H^{\pi_0, N} \cdot (\lambda X) \big)_T = -\frac{\lambda}{N} \sum_{j=0}^{n-1} X_{t_j} (X_{t_{j+1}} - X_{t_j}) = \frac{\lambda}{N} \sum_{j=0}^{n-1} \big( X_{t_j}^2 - X_{t_j} X_{t_{j+1}} \big) = \frac{\lambda}{2N} \Big( \sum_{j=0}^{n-1} (X_{t_{j+1}} - X_{t_j})^2 - X_T^2 \Big) > \frac{\lambda}{2N} \Big( \frac{2N}{\lambda} + N^2 - N^2 \Big) = 1.$
Therefore, $P\big( (H^{\pi_0, N} \cdot (\lambda X))_T > 1 \big) \ge P(A^{\pi_0, N, \lambda})$, and it remains to show that $P(A^{\pi_0, N, \lambda})$ is greater than some positive constant independent of λ for feasible $\pi_0$ and N.
It is well known that the quadratic variation of the fractional Brownian motion with $H < \frac{1}{2}$ is unbounded in probability (see, for example, [59] (Lemma 4.1.2)). Hence,
$c := \lim_{\lambda \downarrow 0} \sup_{\pi_0 \in \pi} P\Big( \sum_{j=0}^{n-1} \big( X_{t_{j+1}} - X_{t_j} \big)^2 > \frac{1}{\lambda} \Big)$
exists and is greater than 0.
In particular, that means for each $N \in \mathbb{N}$, we can select a partition $\pi_0 = (t_0, \dots, t_n) \in \pi$ such that
$P\Big( \sum_{j=0}^{n-1} \big( X_{t_{j+1}} - X_{t_j} \big)^2 > \frac{2N}{\lambda} + N^2 \Big) > \frac{c}{2}. \qquad (4)$
Since X is continuous, we can define $N \in \mathbb{N}$ in a way such that
$P\big( X^*_T > N \big) < \frac{c}{4}. \qquad (5)$
With Equations (4) and (5), we conclude
$P\big( A^{\pi_0, N, \lambda} \big) > \frac{c}{4} > 0.$
Example 4. 
Following a similar strategy as above, one can show that for a Brownian motion B, $|B|^\alpha$ with $0 < \alpha < 1$ is also not a semimartingale. A proof of this fact in the classical semimartingale framework can be found in Wang [60] or Yor [61].
In the functional analytic approach, as presented by Protter [42], a semimartingale is defined as a “good integrator”, which means a process for which the integral operator is continuous with respect to certain topologies. The next theorem shows that this definition is equivalent to ours.
Theorem 6. 
Let X be an adapted càdlàg process. Then, X is a topological semimartingale if and only if, for each sequence of simple predictable processes $(H^n)_{n \in \mathbb{N}}$ with $\|H^n\|_u \to 0$, the random variables $(H^n \cdot X)_t$ converge to 0 in probability for each $t \ge 0$.
For the proof, we are going to use a Lemma that will also be instrumental in the proof of the stochastic Dominated Convergence Theorem.
Lemma 2. 
Let X be a d-dimensional semimartingale and $t > 0$. For every $M > 0$,
$\sup_{K \in \Lambda^d,\ \|K\|_u \le 1} E\big[ (K \cdot X)^*_t \wedge M \big] = \sup_{K \in \Lambda^d,\ \|K\|_u \le 1} E\big[ |(K \cdot X)_t| \wedge M \big].$
In particular, for $M = 1$,
$\sup_{\|K\|_u \le 1} E\big[ (K \cdot X)^*_t \wedge 1 \big] = \sup_{\|K\|_u \le 1} E\big[ |(K \cdot X)_t| \wedge 1 \big].$
Proof. 
The inequality “≥” is immediate since $|(K \cdot X)_t| \le (K \cdot X)^*_t$.
For the converse, we fix $G \in \Lambda^d$ with $\|G\|_u \le 1$ and $\lambda > 0$ and define the stopping time
$T := \inf\{ s > 0 ;\ |(G \cdot X)_s| > \lambda \} \wedge t, \qquad H := 1_{[\![0, T]\!]}.$
Then, H is predictable and $G H \in \Lambda^d$ with $\|G H\|_u \le 1$. By the definition of the stochastic integral for simple predictable integrands,
$(G H \cdot X)_t = (G \cdot X)_T.$
Hence,
$P\big( (G \cdot X)^*_t > \lambda \big) \le P\big( |(G \cdot X)_T| \ge \lambda \big) = P\big( |(G H \cdot X)_t| \ge \lambda \big).$
Taking the supremum over all $G \in \Lambda^d$ with $\|G\|_u \le 1$, we obtain for every $\lambda > 0$,
$\sup_{\|G\|_u \le 1} P\big( (G \cdot X)^*_t > \lambda \big) \le \sup_{\|K\|_u \le 1} P\big( |(K \cdot X)_t| \ge \lambda \big).$
Integrating over $\lambda \in (0, M)$ and using the layer-cake representation $E[Y \wedge M] = \int_0^M P(Y > \lambda)\, d\lambda$ for any nonnegative Y, we get
$\sup_{\|G\|_u \le 1} E\big[ (G \cdot X)^*_t \wedge M \big] \le \sup_{\|K\|_u \le 1} E\big[ |(K \cdot X)_t| \wedge M \big].$
This proves the converse inequality and thus the equality. □
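The layer-cake identity used in the last step is easy to check numerically. The sketch below (illustrative only; the Exp(1) distribution, sample size, and grid are arbitrary choices of ours) compares the empirical mean of $Y \wedge M$ with a midpoint Riemann sum of the empirical tail probabilities over $(0, M)$.

```python
import bisect, math, random

rng = random.Random(3)
M, n = 2.0, 100000
samples = sorted(rng.expovariate(1.0) for _ in range(n))  # Y ~ Exp(1)
lhs = sum(min(y, M) for y in samples) / n  # empirical E[Y ∧ M]

# layer-cake: ∫_0^M P(Y > λ) dλ, midpoint rule on the empirical tail
m = 400
dlam = M / m
rhs = sum((n - bisect.bisect_right(samples, (k + 0.5) * dlam)) / n * dlam
          for k in range(m))
print(lhs, rhs, 1 - math.exp(-M))  # all three agree closely
```

For Exp(1) the exact value is $E[Y \wedge 2] = 1 - e^{-2} \approx 0.8647$, and both empirical quantities land within Monte Carlo error of it.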
Proof of Theorem 6. 
(i) ⇒ (ii): For any càdlàg X, we have
$$\|H\cdot X\|_S = \sup_{\|K\|_u \le 1} \|(KH)\cdot X\|_{ucp} \le \|H\|_u \sup_{\|L\|_u \le 1} \|L\cdot X\|_{ucp} = \|H\|_u\, \|X\|_S.$$
Hence, $\|H^n\|_u \to 0$ implies $\|H^n\cdot X\|_S \to 0$, and, in particular, $(H^n\cdot X)_t \to 0$ in probability.
(ii) ⇒ (i): Fix $t > 0$. We show continuity of $J_X$ at 0 in the semimartingale topology at horizon t. Let $H^n \in \Lambda^d$ with $\|H^n\|_u \to 0$. Set $Y^n := H^n\cdot X$. By definition,
$$\|Y^n\|_{S,t} = \sup_{\substack{K\in\Lambda^d \\ \|K\|_u\le1}} \|K\cdot Y^n\|_{ucp,t} = \sup_{\|K\|_u\le1} E\big[ (K\cdot Y^n)_t^* \wedge 1 \big].$$
By Lemma 2,
$$\sup_{\|K\|_u\le1} E\big[ (K\cdot Y^n)_t^* \wedge 1 \big] = \sup_{\|K\|_u\le1} E\big[ |(K\cdot Y^n)_t| \wedge 1 \big].$$
For each fixed $K \in \Lambda^d$ with $\|K\|_u \le 1$, we have $\|KH^n\|_u \le \|H^n\|_u \to 0$. Hence, by assumption (ii) applied to the sequence $KH^n$, we have
$$(K\cdot Y^n)_t = \big( (KH^n)\cdot X \big)_t \xrightarrow{\,P\,} 0.$$
Since $|(K\cdot Y^n)_t| \wedge 1 \le 1$, it follows that $E[|(K\cdot Y^n)_t| \wedge 1] \to 0$ for each fixed K.
We claim that necessarily
$$\sup_{\|K\|_u\le1} E\big[ |(K\cdot Y^n)_t| \wedge 1 \big] \to 0.$$
Indeed, if not, there exist $\varepsilon > 0$, a subsequence $n_j$, and $K^{(j)} \in \Lambda^d$ with $\|K^{(j)}\|_u \le 1$ such that $E[|(K^{(j)}\cdot Y^{n_j})_t| \wedge 1] \ge \varepsilon$ for all j. Define a new sequence $\tilde K^n$ by setting $\tilde K^{n_j} := K^{(j)}$ and choosing $\tilde K^n \in \Lambda^d$, $\|\tilde K^n\|_u \le 1$ arbitrarily for the other n. Then, $\|\tilde K^n H^n\|_u \le \|H^n\|_u \to 0$, so by (ii), we have $((\tilde K^n H^n)\cdot X)_t \to 0$ in probability, which implies $E[|((\tilde K^n H^n)\cdot X)_t| \wedge 1] \to 0$. This contradicts $E[|(K^{(j)}\cdot Y^{n_j})_t| \wedge 1] \ge \varepsilon$ along the subsequence.
Therefore, $\|Y^n\|_{S,t} \to 0$. Since $t > 0$ was arbitrary, we obtain $\|Y^n\|_S \to 0$, i.e., $H^n\cdot X \to 0$ in the semimartingale topology. By linearity, this yields continuity at every H; hence X is a topological semimartingale. □
Remark 9. 
Theorem 6 states that semimartingales are the càdlàg processes for which the integral operator is continuous. It can be shown that different types of convergence could be chosen in Theorem 6 that would yield an equivalent definition of semimartingales (see [62] (Theorem 5.89)).

4. Properties of Semimartingales

Theorem 7. 
(a) 
An R d -valued càdlàg process X is a d-dimensional topological semimartingale if and only if all components are one-dimensional topological semimartingales.
(b) 
Local topological semimartingales and processes that are prelocally topological semimartingales are topological semimartingales.
(c) 
Let Q be a probability measure that is absolutely continuous with respect to P . Then, every P -semimartingale is also a Q -semimartingale.
Proof. 
(a)
If all components are topological semimartingales, the result follows from the triangle inequality. For the converse, we note that for $K_i \in \Lambda^1$ with $\|K_i\|_u \le 1$, we have $\tilde K_i := (0, \ldots, 0, K_i, 0, \ldots, 0) \in \Lambda^d$ with $\|\tilde K_i\|_u \le 1$. Therefore, we get for $\lambda_n > 0$
$$\sup_{\substack{\tilde K \in \Lambda^1 \\ \|\tilde K\|_u \le 1}} \big\| \tilde K\cdot(\lambda_n X^i) \big\|_{ucp} \le \sup_{\substack{K \in \Lambda^d \\ \|K\|_u \le 1}} \big\| K\cdot(\lambda_n X) \big\|_{ucp}.$$
(b)
We use Theorem 6. Let $(H^n)_{n\in\mathbb{N}}$ be a sequence with $\|H^n\|_u \to 0$. We have to show $(H^n\cdot X)_t \to 0$ in probability. Let X be a local topological semimartingale with $(T_k)_{k\in\mathbb{N}}$ a localizing sequence. We define
$$R_n^t := \begin{cases} T_n & \text{on } \{T_n \le t\}, \\ \infty & \text{on } \{T_n > t\}. \end{cases}$$
Now, we have
$$P\big( |(H^n\cdot X)_t| \ge \varepsilon \big) \le \underbrace{P\big( |(H^n\cdot X)_{t\wedge T_n}| \ge \varepsilon \big)}_{\to\, 0 \text{ by assumption}} + \underbrace{P\big( R_n^t < \infty \big)}_{=\, P(T_n \le t)\, \to\, 0 \text{ since } T_n \to \infty}.$$
Therefore, X is a topological semimartingale. If X is prelocally a topological semimartingale, then the proof proceeds analogously.
(c)
Since convergence in probability in P implies convergence in probability in Q , this is obvious.
Historically, semimartingales have been defined as the set of processes that can be written as the sum of a local martingale and an FV process. In the functional analytical approach, semimartingales have been defined as the processes for which the integral operator is continuous (see Theorem 6). In fact, these definitions are equivalent, which was proved in the late 1970s independently of each other by Klaus Bichteler [33,34] and Claude Dellacherie, with contributions by Paul-André Meyer and Gabriel Mokobodzki [63]. In addition to the literature mentioned above, the theorem is also discussed and proven in modern textbooks such as in Cohen and Elliott [52] (Theorem 12.3.26), in Karandikar and Rao [62] (Theorem 5.59), and in Protter [42] (Theorem III.47). While the latter two employ classical methods, the former adopts an approach by Beiglböck, who has presented alternative proofs in [64,65,66].
As the topological stochastic integral does not rely on the decomposition of semimartingales, the theory does not require the Bichteler–Dellacherie–Mokobodzki Theorem. However, for the sake of completeness and because it provides many examples of semimartingales, we present it here.
Theorem 8 
(Bichteler–Dellacherie–Mokobodzki Theorem). Let X be a càdlàg process. The following are equivalent:
(i)  
X is a topological semimartingale;
(ii) 
X is a classical semimartingale in each component;
(iii) 
X is a good integrator in each component.
Before we proceed with the proof of Theorem 8, we will demonstrate two inequalities that are useful when dealing with the topological stochastic integral. The first one, a variant of the Itô inequality, shows that square-integrable martingales are semimartingales. The second, based on the Burkholder inequality, demonstrates the same for “ordinary” martingales. While the latter is sufficient for our purposes here, we will present both for the sake of completeness.
Theorem 9. 
Let M be a d-dimensional càdlàg process and $H \in \Lambda^d$.
(a) 
If M is a square-integrable martingale, then we have
$$E\big[ (H\cdot M_t)^2 \big] \le d\, \|H\|_u^2\, E\big[ \|M_t\|_{\mathbb{R}^d}^2 \big].$$
(b) 
If M is a martingale, then the following inequality holds:
$$\alpha\, P\big( (H\cdot M)_t^* \ge \alpha \big) \le 18\, \|H\|_u\, d \sum_{j=1}^{d} E\big[ |M_t^j| \big].$$
Proof. 
(a)
By Lemma 1, an application of the tower rule and the Cauchy–Schwarz inequality, we get
$$\begin{aligned}
E\big[ (H\cdot M_t)^2 \big] &= E\Big[ \Big( H_0 M_0 + \sum_{i=1}^n H_i \big( M_{t\wedge T_{i+1}} - M_{t\wedge T_i} \big) \Big)^2 \Big] \\
&= E\Big[ (H_0 M_0)^2 + \sum_{i=1}^n \big( H_i ( M_{t\wedge T_{i+1}} - M_{t\wedge T_i} ) \big)^2 \Big] \\
&\le E\Big[ \|H_0\|_{\mathbb{R}^d}^2 \|M_0\|_{\mathbb{R}^d}^2 + \sum_{i=1}^n \|H_i\|_{\mathbb{R}^d}^2 \big\| M_{t\wedge T_{i+1}} - M_{t\wedge T_i} \big\|_{\mathbb{R}^d}^2 \Big] \\
&\le d\, \|H\|_u^2\, E\Big[ \|M_0\|_{\mathbb{R}^d}^2 + \sum_{i=1}^n \big\| M_{t\wedge T_{i+1}} - M_{t\wedge T_i} \big\|_{\mathbb{R}^d}^2 \Big] \\
&= d\, \|H\|_u^2\, E\big[ \|M_t\|_{\mathbb{R}^d}^2 \big],
\end{aligned}$$
where the second equality holds because the cross terms vanish by the tower rule, and the last equality uses the orthogonality of martingale increments.
(b)
The idea is to transform the time-continuous statement into a time-discrete one and then apply Theorem 5. We first assume $d = 1$ and $\|H\|_u \le 1$. Since $H \in \Lambda$, there is a representation
$$H = H_0\, 1_{\{0\}}(t) + \sum_{i=1}^n H_i\, 1_{(T_i, T_{i+1}]},$$
with stopping times $T_i$. By the tower rule and Doob's optional stopping theorem, one easily sees that $(M_{T_i \wedge T})_{i\in\mathbb{N}}$ is a time-discrete martingale for any stopping time T.
By the definition of the stochastic integral for simple predictable integrands, we obtain $(H\cdot M)_{T_i} = (H * M)_i$, where $*$ denotes the time-discrete stochastic integral as in Theorem 5. An application of Theorem 5 yields the result for $d = 1$. For general d and H, we obtain
$$\alpha\, P\big( (H\cdot M)_t^* \ge \alpha \big) = \alpha\, P\Big( \Big( \tfrac{H}{\|H\|_u}\cdot M \Big)_t^* \ge \tfrac{\alpha}{\|H\|_u} \Big) \le \alpha \sum_{j=1}^d P\Big( \Big( \tfrac{H^j}{\|H\|_u}\cdot M^j \Big)_t^* \ge \tfrac{\alpha}{d\,\|H\|_u} \Big) \le 18\, \|H\|_u\, d \sum_{j=1}^d E\big[ |M_t^j| \big].$$
This completes the proof. □
We now outline the proof of the Bichteler–Dellacherie–Mokobodzki Theorem.
Proof of Theorem 8. 
We first assume X = M + A with a local martingale M and an FV process A. For the FV process A with A 0 = 0 , we have
$$\operatorname{Var}[H\cdot A] = \operatorname{Var}\Big[ \sum_{i=1}^n H_i \big( A_{T_{i+1}} - A_{T_i} \big) \Big] \le \sum_{i=1}^n \sum_{j=1}^d |h_i^j| \big( \operatorname{Var}[A^j]_{T_{i+1}} - \operatorname{Var}[A^j]_{T_i} \big) \le \|H\|_u \sum_{j=1}^d \operatorname{Var}[A^j].$$
For a sequence $H^n$ with $\|H^n\|_u \to 0$, we see that $(H^n\cdot A)_t$ converges in probability to 0. Thus, A is a semimartingale by Theorem 6. For any martingale M, we can proceed analogously by applying Theorem 9 and see that M is a semimartingale. Via localization, we get that any local martingale is locally a semimartingale; thus, by Theorem 7, any local martingale is a semimartingale. By Theorem 7, M + A is a semimartingale as well.
The converse is elegantly proven in [66]. The proof can be readily adapted to our context with minimal changes and does not require the definition or properties of the stochastic integral. Therefore, we will not repeat it here; instead, we will refer to the original article. □
Remark 10. 
By employing the Fundamental Theorem of Local Martingales, we can assume without loss of generality that any semimartingale can be decomposed into X = M + A , where A is a finite variation process and M is a locally bounded local martingale. Therefore, the first inequality in Theorem 9 would suffice to demonstrate one direction of the Bichteler–Dellacherie–Mokobodzki Theorem.
Example 5. 
  • Since a Brownian motion is a martingale, it is also a semimartingale.
  • A counting process (and hence a Poisson process) (see Cohen and Elliott [52] Definition 5.5.16) is an FV process and hence a semimartingale. The compensated Poisson Process (see Cohen and Elliott [52] Theorem 5.5.18) is a martingale and thus also a semimartingale.
  • Any Lévy process (see Cohen and Elliott [52] Definition 13.5.1) can be written as a sum of a local martingale and an FV process (this was initially shown in Lévy [67], see also Itô [68] for an English proof, Kunita and Watanabe [16] for a proof in the classical semimartingale setting, or Applebaum [69] Proposition 2.7.1 for a modern textbook treatment). Hence, each Lévy process is a semimartingale.
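The second bullet can be illustrated by simulation: the compensated Poisson process $N_t - \lambda t$ is centred, consistent with the martingale property. The helper below (ours; rate, horizon, and sample size are arbitrary) builds the count from i.i.d. exponential inter-arrival times.

```python
import random

def poisson_count(lam, t, rng):
    """Number of arrivals of a rate-lam Poisson process up to time t,
    built from i.i.d. exponential inter-arrival times."""
    n, s = 0, rng.expovariate(lam)
    while s <= t:
        n += 1
        s += rng.expovariate(lam)
    return n

rng = random.Random(7)
lam, t, m = 2.0, 1.0, 20000
mean = sum(poisson_count(lam, t, rng) - lam * t for _ in range(m)) / m
print(mean)  # close to 0: the compensated process is centred
```

With 20,000 samples the standard error of the mean is about $\sqrt{\lambda t / m} = 0.01$, so the printed value sits very close to 0.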

5. The General Stochastic Integral

To extend the integral, we are interested in the topological properties of the above-defined spaces. These were initially studied by Émery [25,70] and Memin [28]. We note that the Émery norm was initially defined to take the supremum over all bounded predictable processes and not only simple predictable processes. But it was shown in Memin [28] that these definitions are equivalent.
Definition 6. 
For $X \in \mathcal{S}^d$ and $H \in \Lambda^d$, we put $\|H\|_X := \|H\cdot X\|_S$. Then, $d_X(H, K) := \|H - K\|_X$ is a metric on $\Lambda^d$. The corresponding topological space is denoted by $\Lambda_X^d$, and for a sequence $(H^n)_{n\in\mathbb{N}}$, we will write $H^n \to_X H$ if $H^n$ converges to a process H with respect to $\|\cdot\|_X$.
Remark 11. 
We re-emphasize that we work with equivalence classes of processes. Specifically, the metric $d_X(H, K) := \|H - K\|_X$ is defined on the quotient space $\Lambda^d / \mathcal{N}_X$, where $\mathcal{N}_X = \{ K \in \Lambda^d \,;\, \|K\|_X = 0 \}$.
Theorem 10. 
Let H n Λ d and X n S d for all n.
(a) 
If $H^n$ converges to H in the ucp topology, then it also converges to H with respect to the topology induced by the metric $\|\cdot\|_X$. This implies that the ucp topology is finer than the topology induced by $\|\cdot\|_X$.
(b) 
If X n converges to X in the semimartingale topology, then it also converges to X in the ucp topology. This means that the semimartingale topology is finer than the ucp topology.
Proof. 
(a)
Let $t \in \mathbb{R}_+$ be fixed and $\varepsilon > 0$. Since X is a semimartingale, we can always find a $c > 0$ such that $\|cX\|_S \le \varepsilon$. Without loss of generality, we assume $H^n \to 0$ in ucp, and hence we can always find an $N \in \mathbb{N}$ such that $P\big( (H^n)_t^* > c \big) \le \varepsilon$ for all $n \ge N$.
For $K \in \Lambda^1$, it is easy to check that we have
$$\big( K\cdot(H^n\cdot X) \big)_t = \big( (KH^n)\cdot X \big)_t.$$
We obtain for all $n \ge N$ and $K \in \Lambda^1$ with $\|K\|_u \le 1$
$$\begin{aligned}
P\big( (K\cdot(H^n\cdot X))_t^* > \varepsilon \big) &= P\big( ((KH^n)\cdot X)_t^* > \varepsilon \big) \\
&= P\big( ((KH^n)\cdot X)_t^* > \varepsilon,\, (H^n)_t^* > c \big) + P\big( ((KH^n)\cdot X)_t^* > \varepsilon,\, (H^n)_t^* \le c \big) \\
&\le P\big( (H^n)_t^* > c \big) + P\big( c\,(K\cdot X)_t^* > \varepsilon \big) \le \varepsilon + \varepsilon.
\end{aligned}$$
Hence, we have $\|H^n\|_X \le 2\varepsilon$ for all $n \ge N$; that is, $K\cdot(H^n\cdot X)$ converges to 0 uniformly on compacts in probability, uniformly in K, and thus $H^n \to_X 0$.
(b)
This follows immediately with
$$\|X\|_S := \sup_{\substack{K\in\Lambda^d \\ \|K\|_u\le1}} \|K\cdot X\|_{ucp} \ge \|X\|_{ucp}.$$
This completes the proof. □
Remark 12. 
The converse of these statements does not hold (see Example 6). However, for martingales $M^n$, M, it can be shown that $M^n \to M$ in $\mathcal{H}^1$ implies $M^n \to_S M$ (see, for example, [23]).
Example 6. 
Let $X_t(\omega) = t$ and
$$H_t^n(\omega) = 1_{\{0\}}(t) + \sum_{k=1}^{n^2} \Big( \frac{k-1}{n^2} \Big)^n 1_{\left( \frac{k-1}{n^2},\, \frac{k}{n^2} \right]}(t).$$
Since X is increasing and $H^n$ is positive and zero for all $t > 1$, we have
$$(H^n\cdot X)_t^* \le (H^n\cdot X)_1 = \sum_{k=1}^{n^2} \Big( \frac{k-1}{n^2} \Big)^n \frac{1}{n^2} < \int_0^1 x^n\, dx = \frac{1}{n+1} \to 0.$$
Hence, we have $H^n\cdot X \to 0$ in ucp for $n \to \infty$, and one easily sees (again, because of the monotonicity of X and since $H^n \ge 0$) that we also have $\|H^n\cdot X\|_S \to 0$ and thus $H^n \to_X 0$.
But we also have
$$(H^n)_1^* \ge \Big( \frac{n^2 - 1}{n^2} \Big)^n \to 1.$$
Hence, for no $\varepsilon < 1$ does there exist an $N \in \mathbb{N}$ such that $(H^n)_1^* < \varepsilon$ for all $n \ge N$. Thus, we conclude that $H^n$ does not converge to 0 in ucp.
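Both displayed quantities in this example are elementary finite expressions and can be tabulated directly; the helper names below are ours. The integral $(H^n\cdot X)_1$ shrinks like $1/(n+1)$ while the supremum of the step part of $H^n$ on $(0,1]$ approaches 1, which is exactly the announced gap between the two modes of convergence.

```python
def integral_at_1(n):
    # (H^n · X)_1 for X_t = t: a lower Riemann sum of x^n over (0, 1]
    return sum(((k - 1) / n**2) ** n / n**2 for k in range(1, n**2 + 1))

def sup_at_1(n):
    # largest value the step part of H^n takes on (0, 1]
    return ((n**2 - 1) / n**2) ** n

for n in (2, 5, 10, 20):
    print(n, integral_at_1(n), sup_at_1(n))
```

The lower-Riemann-sum bound guarantees `integral_at_1(n) < 1/(n+1)` for every n, while `sup_at_1(n) = (1 - 1/n**2)**n` tends to 1 from below.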
The following results were initially shown by Memin [28]. In our setting, we can provide a simplified proof.
Theorem 11. 
The space of semimartingales is complete under the semimartingale topology.
Proof. 
Let $X^n$ be a Cauchy sequence in $\|\cdot\|_S$. By Theorem 10, $X^n$ converges in ucp to a càdlàg process X. For $H, H^m \in \Lambda^d$ with $\|H^m - H\|_u \to 0$, we get
$$\|H^m\cdot X - H\cdot X\|_{ucp} \le \big\| (H^m - H)\cdot X^n \big\|_{ucp} + \big\| (H^m - H)\cdot(X - X^n) \big\|_{ucp}.$$
For fixed n, the first term on the right-hand side tends to 0 as $m \to \infty$ because $X^n$ is a semimartingale. Since $X^n$ converges to X with respect to the semimartingale topology, the second term converges to 0 as $n \to \infty$, uniformly in m. Therefore, $H^m\cdot X$ converges in ucp, and thus X is a semimartingale. □
Semimartingales, as integrators, are the largest class of processes, such that the integral operator has some desirable properties. The integral, however, has so far only been defined for simple predictable integrands. Now, we are going to increase the class of possible integrands.
Let $X \in \mathcal{S}^d$. We examine the operator $J_X$ from $\Lambda_X^d$ to $\mathcal{S}_e^d$:
$$J_X : \Lambda_X^d \to \mathcal{S}_e^d, \qquad H \mapsto H\cdot X.$$
The function is well defined as we work with equivalence classes of functions on a quotient space.
By the definition of the corresponding metrics, it is clear that J X is a linear isometry between topological vector spaces and hence is uniformly continuous. Therefore, by the completion of maps theorem (see, for example, Jänich [71] (page 55)), we can extend the mapping continuously. From classical theory, we know that the predictability of the integrand is an essential property for ensuring desirable features, such as the stability of the local martingale property under integration with respect to locally bounded integrands. Consequently, we choose to close Λ X d as a subspace of the set of predictable processes.
This motivates the following definition.
Definition 7. 
Let X be a d-dimensional semimartingale, $L(X) := \overline{\Lambda_X^d}$ the closure of $\Lambda_X^d$ in $\mathcal{P}$, and $\bar J_X$ the unique continuous extension of $J_X$. Then, we define
$$\bar J_X : L(X) \to \mathcal{S}_e^d, \qquad H \mapsto H\cdot X := \bar J_X(H).$$
We will call H X the stochastic integral of H with respect to X.
Remark 13 
(Coincidence with classical integrals). Because of the uniqueness of the continuous extension, the stochastic integral H X defined via our topological closure coincides with the stochastic integral defined via the classical approach (Jacod [22], Cohen–Elliott [52]) and the functional analytic approach (Protter [42]). Indeed, all three constructions:
(i) 
agree on simple predictable integrands Λ d (by definition),
(ii) 
extend continuously to the closure under the Émery topology (the classical integral is known to be continuous in the Émery topology; see [25,28]).
By uniqueness of continuous extensions, the three integrals must coincide on L ( X ) .
Remark 14. 
A key property of this construction is that $L(X)$ is closed under dominated convergence: if $(H^n)_{n\in\mathbb{N}}$ are predictable processes converging pointwise a.s. to H, and if $|H^n| \le G$ componentwise for some $G \in L(X)$, then $H \in L(X)$ and $H^n\cdot X \to H\cdot X$ in the semimartingale topology. This Dominated Convergence Theorem (Theorem 14 below) is central to many subsequent proofs; its statement appears later to keep the flow of definitions, but the reader should note that it is a fundamental property of the space $L(X)$.
One might wonder whether we could obtain a broader range of possible integrands by defining the integral for all processes that are only (pre)locally in L ( X ) , similar to how it was conducted by P.-A. Meyer in the classical approach towards stochastic integration (see [18] (page 97 ff) for more details). However, this is not the case, as the following theorem demonstrates.
Theorem 12. 
Let $H \in \mathcal{P}$ be locally in $L(X)$. Then, $H \in L(X)$. In other words:
$$L(X)_{loc} = L(X).$$
We utilize the following lemma for the proof.
Lemma 3. 
Let $(X_n)_{n\in\mathbb{N}} \subseteq \mathcal{S}^d$, $X \in \mathcal{S}^d$, and $(T_m)_{m\in\mathbb{N}}$ an increasing sequence of stopping times with $T_m \to \infty$ a.s. Assume that for every m,
$$(X_n)^{T_m} \to_S X^{T_m} \quad \text{in } \mathcal{S}^d \text{ as } n \to \infty.$$
Then, $X_n \to_S X$ in $\mathcal{S}^d$.
Proof. 
Suppose not. Then, there exist $\varepsilon > 0$, a subsequence $n_k$, and $K^{(k)} \in \Lambda^d$ with $\|K^{(k)}\|_u \le 1$ such that
$$\big\| K^{(k)}\cdot(X_{n_k} - X) \big\|_{ucp} \ge \varepsilon \quad \text{for all } k.$$
Write $Z^{(k)} := K^{(k)}\cdot(X_{n_k} - X)$. By definition, $\|Z^{(k)}\|_{ucp} = \sum_{r=1}^\infty 2^{-r} E[(Z^{(k)})_r^* \wedge 1]$. Choose N so large that $\sum_{r > N} 2^{-r} < \varepsilon/4$. Then, for every k,
$$\sum_{r=1}^N 2^{-r} E\big[ (Z^{(k)})_r^* \wedge 1 \big] \ge \varepsilon - \frac{\varepsilon}{4} = \frac{3\varepsilon}{4}.$$
Hence, for each k there exists an index $r_k \in \{1, \ldots, N\}$ with
$$E\big[ (Z^{(k)})_{r_k}^* \wedge 1 \big] \ge \frac{3\varepsilon}{4N}.$$
Passing to a further subsequence (not relabeled), we may assume $r_k \equiv t$ is constant. Thus
$$E\big[ (Z^{(k)})_t^* \wedge 1 \big] \ge c_\varepsilon := \frac{3\varepsilon}{4N} \quad \text{for all } k. \qquad (6)$$
Fix this $t \in \{1, \ldots, N\}$. By Item (d), for any m,
$$(Z^{(k)})^{T_m} = \big( K^{(k)} 1_{[0,T_m]} \big)\cdot(X_{n_k} - X) = K^{(k)}\cdot(X_{n_k} - X)^{T_m}.$$
Therefore, for each fixed m,
$$\lim_k E\big[ (Z^{(k)})_{t\wedge T_m}^* \wedge 1 \big] \le \lim_k \sup_{\|K\|_u \le 1} \big\| K\cdot(X_{n_k} - X)^{T_m} \big\|_{ucp} = 0,$$
by the assumed convergence in $\mathcal{S}^d$ of $(X_n)^{T_m}$ to $X^{T_m}$.
Next, note that truncation at $T_m$ can only change the path on $\{T_m < t\}$; hence
$$E\big[ (Z^{(k)})_t^* \wedge 1 \big] \le E\big[ (Z^{(k)})_{t\wedge T_m}^* \wedge 1 \big] + P(T_m < t).$$
Since $T_m \to \infty$ a.s., choose m so large that $P(T_m < t) < c_\varepsilon/2$, and then k large so that $E[(Z^{(k)})_{t\wedge T_m}^* \wedge 1] < c_\varepsilon/2$. This gives $E[(Z^{(k)})_t^* \wedge 1] < c_\varepsilon$, contradicting (6). The contradiction proves $X_n \to_S X$ in $\mathcal{S}^d$. □
Proof of Theorem 12. 
Since $H \in L(X)_{loc}$, there exists a sequence of stopping times $T_n$ tending to infinity such that $H^{T_n} \in L(X)$.
As $H^{T_n} \in L(X)$, there exists a sequence $(H^{n,m})_{m\in\mathbb{N}}$ with $H^{n,m} \in \Lambda^d$ and $H^{n,m} \to_X H^{T_n}$ as $m \to \infty$, which is equivalent to $H^{n,m}\cdot X \to_S H^{T_n}\cdot X$.
By a diagonal argument, we can therefore choose a sequence $(H^{\tilde n})_{\tilde n\in\mathbb{N}}$ in $\Lambda^d$ such that for all m
$$(H^{\tilde n}\cdot X)^{T_m} \to_S (H\cdot X)^{T_m}.$$
By Lemma 3, we have $H^{\tilde n}\cdot X \to_S H\cdot X$, and thus $H \in L(X)$. □
Remark 15. 
For any semimartingale X, the space { H X } with H L ( X ) is complete with respect to the semimartingale topology. This result was first established by Memin [28]. An alternative proof is presented in [52] (Theorem A.6.17). For the topological stochastic integral, this follows directly from the definition of the stochastic integral.
So far, we have defined the stochastic integral in great generality, but we hardly have any information about the set L ( X ) . The following theorem provides more information.
Theorem 13. 
Let $X \in \mathcal{S}^d$; then
$$\mathbb{L} \cup b\mathcal{P} \subseteq b\mathcal{P}_{loc} \subseteq L(X) = L(X)_{loc} \subseteq \mathcal{P}.$$
Proof. 
By definition, $L(X)$ consists of predictable processes. A simple application of the monotone class theorem yields $b\mathcal{P} \subseteq L(X)$ (a fact that is also shown in the proof of Theorem A1). With Theorem 12, we get $b\mathcal{P}_{loc} \subseteq L(X)$. □
Remark 16. 
Usually, we even have $\mathbb{L} \subsetneq L(X) \subsetneq \mathcal{P}$. In Example 7, we provide examples of predictable processes that are not in $L(X)$ and demonstrate that there are processes in some $L(X)$ that are not càglàd processes.
Remark 17. 
Corollary 1 and Theorem A4 will provide a more detailed understanding of L ( X ) .
Remark 18. 
Even though, for a given semimartingale X, there are typically predictable processes that do not belong to L ( X ) , every predictable process can be approximated pointwise almost surely by simple predictable processes (see Remark 2), and therefore by elements of L ( X ) . However, such pointwise approximation does not, in general, imply that the limit process lies in L ( X ) , as integrability with respect to X is not preserved under pointwise convergence.
The following theorem will be instrumental in the proof of the properties of the stochastic integral. At this point, we deliberately avoid invoking any of the properties later proved in Section 6. Instead, Appendix A provides a self-contained proof based on two auxiliary seminorms on integrands, a bounded-convergence step via a monotone-class argument, and a bootstrap from bounded to dominated convergence. The overall structure mirrors the “define-close-extend” pattern of Carathéodory-type arguments but is carried out entirely within the predictable σ -algebra and the semimartingale topology.
Theorem 14 
(Dominated Convergence Theorem). Let $X \in \mathcal{S}^d$ be a semimartingale, and let $H^m = (\tilde H^{1,m}, \ldots, \tilde H^{d,m}) \in \mathcal{P}$ be processes converging a.s. to a limit H. If there exists a process $G = (\tilde G^1, \ldots, \tilde G^d)$ with $\tilde G^i \in L(X)$ for all $i \in \{1, \ldots, d\}$ such that $|\tilde H^{i,m}| \le \tilde G^i$ for all $i \in \{1, \ldots, d\}$ and all $m \in \mathbb{N}$, then $H^m$ and H are in $L(X)$, and $H^m\cdot X$ converges to $H\cdot X$ in the semimartingale topology (and hence also in ucp).
The key bounded/dominated convergence mechanisms are proved self-contained in Appendix A; see the brief roadmap in Remark 20.
Remark 19. 
The condition requiring G to be componentwise integrable is crucial, as demonstrated in Example 11. This is because one cannot simply reduce the values of a component without potentially compromising integrability: higher values in one component might be necessary to offset high values in other components, as will be illustrated in Example 10.
Remark 20. 
The proof of the Dominated Convergence Theorem for our topological integral is deferred to Appendix A. It is self-contained and does not use any of the properties proved later in Section 6. The argument proceeds by a bootstrap in four transparent steps:
(A) 
Auxiliary seminorms on integrands. We introduce two auxiliary seminorms on simple predictable, respectively bounded predictable, integrands (parts (a,b) of Definition A1):
$$\|H\|_{\Lambda,X} := \sup\big\{ \|G\cdot X\|_{ucp} \,;\, G \in \Lambda^1,\, |G| \le |H| \big\}, \qquad \|H\|_{b\mathcal{P},X} := \inf\Big\{ \sum_n \|G_n\|_{\Lambda,X} \,;\, |H| \le \sum_n |G_n| \Big\}.$$
They are designed so that convergence in these seminorms implies ucp convergence of the resulting stochastic integrals.
(B) 
Bounded Convergence. Using these seminorms plus a monotone-class argument, we prove a bounded convergence theorem for bounded predictable integrands (Theorem A1), without invoking any of the later properties in Section 6.
(C) 
Dominated Convergence on L 1 ( X ) . We define L 1 ( X ) (Definition A2), prove dominated convergence on L 1 ( X ) (Theorem A3), and use only (i) associativity for simple integrands (Theorem A2), and (ii) the continuity of J X in the X-seminorm (both established within Appendix A).
(D) 
Identification L 1 ( X ) = L ( X ) . Finally, we show L 1 ( X ) = L ( X ) (Theorem A4) by a telescoping/series argument and truncation; hence, dominated convergence holds for all integrands in L ( X ) .
This bootstrap structure is specifically designed to avoid circular reasoning. The potential circularity would arise if the DCT proof relied on properties of the integral (such as linearity, stopping, or the jump identity) that themselves require the DCT. We circumvent this by (i) proving bounded convergence (Theorem A1) using only the definition of the integral as a continuous extension and a monotone-class argument—no integral properties from Section 6 are invoked—(ii) establishing associativity for simple integrands (Theorem A2) directly from the definition, before the DCT, and (iii) building dominated convergence on L 1 ( X ) using only these foundational results. The properties in Section 6 (linearity, localization, Itô’s formula, etc.) are then proved after the DCT is established, often using it as a tool, but never as a prerequisite for its own proof.
The Dominated Convergence Theorem has a useful corollary that can help comprehend the space of integrands more thoroughly.
Corollary 1. 
Let X be a semimartingale and $(H^n)_{n\in\mathbb{N}}$ a sequence of uniformly locally bounded predictable processes; that is, for a given localizing sequence $(T_n)_{n\in\mathbb{N}}$, we have $\sup_m \|(H^m)^{T_n}\|_u \le K_n$ for some constants $K_n$. Suppose that $H_t^n \to H_t$ almost surely for every t. Then, $H \in L(X)$ and $H^n\cdot X \to H\cdot X$.
Proof. 
Without loss of generality, take the localizing sequence $(T_n)_{n\in\mathbb{N}}$ to be increasing with $T_n \to \infty$ a.s. Set $T_0 := 0$ and define the predictable (vector) process $G = (\tilde G^1, \ldots, \tilde G^d)$ by
$$\tilde G^i := \sum_{n=1}^\infty \tilde K_n\, 1_{(T_{n-1}, T_n]}, \qquad \tilde K_n := \max_{j \le n} K_j.$$
Then, on each slice $(T_{n-1}, T_n]$ we have $|H_t^m| \le K_n \le \tilde K_n$; hence $|H^m| \le G$ componentwise for all m, and $|H| \le G$ as well. Moreover, G is locally bounded (indeed, $\|G^{T_n}\|_u \le \tilde K_n$), so by Theorem 13, we have $\tilde G^i \in L(X)$ for every component i.
Since $H_t^n \to H_t$ a.s. for each t and $|H^n| \le G$ componentwise with $G \in L(X)$ componentwise, the Dominated Convergence Theorem for the topological stochastic integral (Theorem 14) yields
$$H \in L(X) \quad \text{and} \quad H^n\cdot X \to H\cdot X.$$
As there is no version of the Dominated Convergence Theorem which guarantees the integrability of a process that is not componentwise integrable, the following result is very useful. The proof can be found in Appendix A.
Theorem 15. 
Let $X \in \mathcal{S}^d$ and H a d-dimensional predictable process. Then, $H \in L(X)$ if and only if $H^n\cdot X$ with $H^n = H\, 1_{\{\|H\| \le n\}}$ converges in the semimartingale topology.

6. Properties of the Stochastic Integral

In this section, we are going to examine the most important properties of the stochastic integral.
The following theorem lists most of the important properties of the stochastic integral. All these properties are well known. However, because of our different approach, our proofs differ from the ones in the mentioned literature.
Theorem 16. 
Let X be a d-dimensional semimartingale and H L ( X ) .
(a) 
For α , β R and J L ( X ) , we have α H + β J L ( X ) and
( α H + β J ) X = α H X + β J X .
Thus, L ( X ) is a vector space.
(b) 
For Y S d and H L ( X ) L ( Y ) , we have H L ( X + Y ) and
H ( X + Y ) = H X + H Y .
(c) 
The process ( Δ ( H X ) s ) s 0 is indistinguishable from ( H s ( Δ X s ) ) s 0 .
(d) 
We have
$$(H\cdot X)^T = \big( H\, 1_{[0,T]} \big)\cdot X = H\cdot\big( X^T \big).$$
(e) 
Assume X = ( X 1 , , X d ) S d and H = ( H 1 , , H d ) with H i L ( X i ) for all i. Furthermore, let K = ( K 1 , , K d ) be a predictable d-dimensional process, and we set Y i : = H i X i , Y : = ( Y 1 , , Y d ) , G i : = K i H i , and G = ( G 1 , , G d ) . Then, K i L ( Y i ) for all i if and only if G i L ( X i ) for all i, and K L ( Y ) if and only if G L ( X ) . In both cases, we have
K Y = G X .
(f) 
If X is an FV process, then H X is indistinguishable from the Lebesgue–Stieltjes integral, computed path by path.
(g) 
For a probability measure Q with Q P , H L ( X ) holds under Q as well, and H Q X = H P X , Q -almost surely.
Proof. 
(a)
Let A be the set for which (a) holds. It is clear that Λ d A . With the help of the bounded convergence Theorem A1, we can show b P A and an application of Theorem 15 yields the result.
(b)
Analogous to (a).
(c)
Let A be the set of processes for which (c) holds. First, we show that Λ d A . We assume H Λ d and the stochastic integral with respect to X is given by
$$H\cdot X = H_0 X_0 + \sum_{i=1}^n H_i \big( X^{T_{i+1}} - X^{T_i} \big),$$
with H i as in Definition 2.
Hence, we have
$$\Delta(H\cdot X)_t = (H\cdot X)_t - (H\cdot X)_{t-} = \sum_{i=1}^n H_i \Big( \big( X_{t\wedge T_{i+1}} - X_{t-\wedge T_{i+1}} \big) - \big( X_{t\wedge T_i} - X_{t-\wedge T_i} \big) \Big) = \sum_{i=1}^n H_i \big( \Delta X_t^{T_{i+1}} - \Delta X_t^{T_i} \big).$$
Furthermore, for the stopped process, we have
$$\Delta X_t^{T_i} = X_t^{T_i} - X_{t-}^{T_i} = \begin{cases} X_{T_i} - X_{T_i} = 0 & \text{for } t > T_i, \\ \Delta X_t & \text{for } t \le T_i. \end{cases}$$
That implies
$$\Delta X_t^{T_{i+1}} - \Delta X_t^{T_i} = 1_{(T_i, T_{i+1}]}(t)\, \Delta X_t,$$
and thus
$$\Delta(H\cdot X)_t = \sum_{i=1}^n 1_{(T_i, T_{i+1}]}(t)\, H_i\, \Delta X_t = H_t\, \Delta X_t.$$
We conclude Λ d A .
Now, let ( H n ) n N be a uniformly bounded, increasing sequence converging pointwise to a process H. By the bounded convergence Theorem A1, we have
$$\Delta(H\cdot X)_t = \lim_n \Delta(H^n\cdot X)_t = \lim_n H_t^n\, \Delta X_t = H_t\, \Delta X_t.$$
Since Q [ 0 , ) is countable, (8) holds almost everywhere for all rational t 0 . Since H X is càdlàg, (8) holds almost surely for all t 0 . Thus, we have H A and can apply the monotone class theorem, which yields b P A .
For H L ( X ) , we define H n : = H 1 { H n } , and the result follows with Theorem 15.
(d)
The result is obvious for H Λ d and a bounded stopping time T taking only finitely many values.
For a bounded stopping time T, we can approximate T from above by stopping times taking finitely many values. For a potentially unbounded stopping time, we can define $T_n = T \wedge n$, and the result still holds.
For general integrands, we proceed as before with the monotone class theorem and then with Theorem 15.
(e)
See Theorem A5.
(f)
The proof is again analogous to the ones above, with the only difference that the Dominated Convergence Theorem for the Lebesgue–Stieltjes integral needs to be applied.
(g)
For a probability measure Q that is absolutely continuous with respect to P , convergence in probability under P implies convergence in probability under Q . Hence, the result is immediate.
Example 7. 
Let $X_t := t$ and define
$$H_t := \frac{1}{t}\, 1_{(0,\infty)}(t) \qquad (\text{so } H_0 := 0).$$
Then, X is a semimartingale of finite variation and H is predictable. If $H \in L(X)$, then by Item (f), the stochastic integral coincides pathwise with the Lebesgue–Stieltjes integral:
$$(H\cdot X)_t = \int_{(0,t]} \frac{1}{s}\, ds = +\infty \quad \text{for every } t > 0,$$
which is impossible for a càdlàg process. Hence $H \notin L(X)$.
Remark 21. 
For the same integrator $X_t = t$, the bounded predictable integrand $H \equiv 1$ is in $L(X)$, and
$$(1\cdot X)_t = \int_0^t 1\, ds = t$$
by Item (f). Thus, Example 7 does not say that "$\int 1\, ds$ does not exist"; it shows that not every predictable process is X-integrable. The obstruction for $H_t = 1/t$ is the divergence of the pathwise Lebesgue–Stieltjes integral at 0.
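The obstruction can also be seen through the truncation criterion of Theorem 15: for $X_t = t$ the truncated integrals are elementary Lebesgue integrals, computed in closed form below (the helper and its parameters are ours, for illustration only). For $H_s = s^{-1/2}$ the truncations converge to $2\sqrt{t}$, so that integrand is in $L(X)$; for $H_s = s^{-1}$ they diverge like $\log n$, recovering $H \notin L(X)$.

```python
import math

def truncated_integral(exponent, n, t=1.0):
    """(H^n · X)_t for X_t = t and H_s = s**(-exponent) on (0, t], where
    H^n = H·1{H <= n} cuts H off below s = n**(-1/exponent). For this X the
    integral is a plain Lebesgue integral, evaluated in closed form."""
    a = n ** (-1.0 / exponent)  # H_s <= n  iff  s >= a
    if exponent == 1.0:
        return math.log(t) - math.log(a)  # = log(n t)
    p = 1.0 - exponent
    return (t ** p - a ** p) / p

# H_s = s**(-1/2): truncations converge (to 2*sqrt(t)), so H ∈ L(X)
print([truncated_integral(0.5, n) for n in (10, 100, 1000)])
# H_s = s**(-1): truncations diverge like log n, so H ∉ L(X)
print([truncated_integral(1.0, n) for n in (10, 100, 1000)])
```

The first line tends to 2 as the truncation level grows; the second grows without bound, which is exactly the failure of convergence in the semimartingale topology demanded by Theorem 15.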
Example 8. 
Let $H_t := 1_{[1,\infty)}(t)$ and $X_t := t$. Then, it is easy to see that we have $H \in L(X)$, but $H \notin \mathbb{L}$.
In many publications on stochastic integration, the multidimensional integral is often defined as the sum of integrals for each component. However, Example 10 demonstrates that an integrand being integrable does not necessarily imply its componentwise integrability. Nevertheless, when an integrand is componentwise integrable, the subsequent theorem confirms that the stochastic integral is indeed the sum of the integrals of its components.
Theorem 17. 
Let X be a d-dimensional semimartingale and
$$H = (H^1, \ldots, H^d)$$
be a stochastic process with $H^i \in L(X^i)$ for all i. Then, we have $H \in L(X)$ and
$$H\cdot X = \sum_{i=1}^d H^i\cdot X^i.$$
Proof. 
This follows directly from the linearity. □
We provide an example illustrating that L ( X ) encompasses more than just the componentwise integrable processes with respect to X. Additionally, this example demonstrates that the space H X , where H represents componentwise integrable processes, is not closed under the semimartingale topology. Further details and a comprehensive proof are available in [52] (Example 12.5.1) or [5] (Example 6.4).
Example 9. 
Let $W^1, W^2$ be two independent Brownian motions. We put $H_t = t$ and define the two-dimensional process X by
$$X = \big( W^1,\; (1-H)\cdot W^1 + H\cdot W^2 \big).$$
Then, the space
$$L_C(X) = \Big\{ \sum_{i=1}^2 K^i\cdot X^i \,;\, K^i \in L(X^i) \Big\}$$
is not closed in the semimartingale topology.

7. Stability of Local Martingales Under Stochastic Integration

To the best of our knowledge, the subsequent criterion for the convergence of a sequence of local martingales to a local martingale has not been documented in the existing literature. Nevertheless, this criterion streamlines the proofs of several established theorems. For instance, it offers a more straightforward approach to the general stochastic integral as presented in [42] and facilitates our proof in Theorem 18. More concretely, it is used in the following cases:
  • closure of $\{H^n\cdot X\}$ under $H^n \to_X H$, to ensure that local martingale limits remain local martingales (via locally bounded $H^n$);
  • associativity proofs via approximants $K^n, H^n \in \Lambda$ with $K^n H^n \to_X K H$.
Lemma 4. 
Let ( X n ) n N be a sequence of local martingales, which converges to a process X in ucp. If ( sup n ( X n ) t * ) t R + is locally integrable, then X is a local martingale.
Proof. 
Without loss of generality, we assume all processes to be one-dimensional. Because of the ucp convergence, we can conclude that X is also càdlàg and adapted. By passing to a suitable subsequence, we can also assume that the convergence is almost sure, uniformly on compact subsets, and thus
$$M_t := \sup_n\, (X_n)_t^*$$
is also càdlàg and adapted. Furthermore, M is increasing, and we have
$$|\Delta M_t| \le 2 \sup_n\, (X_n)_t^*.$$
By assumption, the right-hand side is locally integrable and thus M is also locally integrable.
Now, we can find a sequence $(T_k)_{k\in\mathbb{N}}$ of stopping times such that $(X^n)^{T_k}$ is a martingale for all $n$ and $k$ and $M_{T_k}$ is integrable for all $k$. Furthermore, because of $|X^{T_k}| \le M$, we can apply the Dominated Convergence Theorem and obtain, for every bounded stopping time $\tau$,
$$E\big[X_{\tau\wedge T_k}\big] = E\big[\lim_n (X^n)_{\tau\wedge T_k}\big] = \lim_n E\big[(X^n)_{\tau\wedge T_k}\big] = \lim_n E\big[(X^n)_0\big] = E\big[X_0\big].$$
Hence, X T k is a martingale for all k, and thus, we conclude that X is a local martingale. □
Remark 22. 
We have found a criterion for the limit of a sequence of local martingales to be a local martingale again. In fact, there are numerous examples of criteria for that kind of problem in the literature. A detailed description with examples and counterexamples is given in the chapter 'Some Particular Problems of Martingale Theory' in Kabanov et al. [72].
Theorem 18. 
Let $X$ be a local martingale (locally square-integrable martingale) and let $H \in \mathcal{P}$ be locally bounded. Then, the stochastic integral $H\cdot X$ is also a local martingale (locally square-integrable martingale).
Proof. 
Let $X$ be a local martingale. Without loss of generality, we assume there exists a $K$ with $\|H_u\| \le K$ for all $u$. Now, let $(H^n)_{n\in\mathbb{N}}$ be a sequence with $H^n \in \Lambda^d$ and $H^n\cdot X \to H\cdot X$.
By putting $\tilde H^n := H^n \wedge K$ (where the minimum is taken in each component), we obtain a uniformly bounded sequence $\tilde H^n$ with $\tilde H^n\cdot X \to H\cdot X$.
By Lemma 1, $\tilde H^n\cdot X$ is a local martingale for all $n$. Furthermore, by Lemma 4 and since a process is locally uniformly bounded if and only if its jump process is locally uniformly bounded, it suffices to show that $M_t := \sup_n \big(\Delta(\tilde H^n\cdot X)\big)^*_t$ is locally integrable.
By Item (a) from Theorem 16, we obtain
$$\sup_n \big\|\Delta(\tilde H^n\cdot X)\big\| = \sup_n \big\|(\tilde H^n)^{\top}\Delta X\big\| \le K\,\|\Delta X\|,$$
which is, by assumption, locally integrable.
For X being a locally square-integrable martingale or L 2 -martingale, we again consider a sequence of simple predictable processes converging to H, then we apply Theorem 9 and proceed analogously. □
Remark 23. 
Since each $H \in \mathbb{L}$ is locally bounded, $H\cdot M$ is always a local martingale whenever $M$ is a local martingale and $H \in \mathbb{L}$.
Remark 24. 
For a semimartingale $X = M + A$ with $M \in \mathcal{M}_{loc}$ and an FV process $A$, and $H \in L(M)$, one cannot conclude that $H\cdot M \in \mathcal{M}_{loc}$. However, there is always a decomposition $X = \bar M + \bar A$ with $\bar M \in \mathcal{M}_{loc}$ and an FV process $\bar A$ such that $H\cdot\bar M \in \mathcal{M}_{loc}$. This is a direct consequence of the Fundamental Theorem of Local Martingales and Theorem 18. For details, we refer to Shiryaev and Cherny [5] (Remarks after Definition 3.9).

8. The Quadratic Variation of a Semimartingale

The quadratic variation plays a central role in the development of the general stochastic integral in both the classical approach and the extension from left-continuous integrands to general predictable integrands in the functional analytic approach.
As the quadratic variation of semimartingales can be defined without the more complicated concepts of integration with respect to general predictable integrands (as opposed to stochastic integration with respect to càglàd integrands) and integration of non-componentwise integrable multidimensional stochastic processes, the concept of the quadratic variation is usually not extended to multidimensional processes and is often omitted in the literature on multidimensional stochastic integration (see [5,27]). We also defined the stochastic integral without resorting to the quadratic variation. However, for the sake of completeness, we will define a multidimensional version of the quadratic variation that generalizes the concept of one-dimensional quadratic variation and mention some interesting facts and examples for further treatment and development of the theory. Our approach is consistent with the so-called tensor quadratic variation as it was presented by Métivier [73].
Definition 8. 
Let $X, Y \in \mathcal{S}^d$ and
$$X \otimes Y = \begin{pmatrix} X^1Y^1 & \cdots & X^1Y^d \\ \vdots & \ddots & \vdots \\ X^dY^1 & \cdots & X^dY^d \end{pmatrix}.$$
Then, we define the quadratic covariation of $X$ and $Y$ as
$$[X,Y] := \begin{pmatrix} X^1Y^1 - X^1_-\cdot Y^1 - Y^1_-\cdot X^1 & \cdots & X^1Y^d - X^1_-\cdot Y^d - Y^d_-\cdot X^1 \\ \vdots & \ddots & \vdots \\ X^dY^1 - X^d_-\cdot Y^1 - Y^1_-\cdot X^d & \cdots & X^dY^d - X^d_-\cdot Y^d - Y^d_-\cdot X^d \end{pmatrix}.$$
The quadratic variation of $X$ is defined as $[X,X]$ and denoted by $[X]$ or $[X,X]$.
In the literature, the quadratic variation is often defined to be the limit of particular processes (see, for example, [74] (Section 8.5)), which is an equivalent definition. There are even more ways to define the quadratic variation, especially for martingales (see, for example, [52] (Chapter 11), [42] (Chapter II Corollary 2 of Theorem 27), or [75] (Definition 1.15)).
One easily sees that the map $(X,Y) \mapsto [X,Y]$ is bilinear and symmetric. Therefore, we get the polarization identity
$$[X,Y] = \tfrac{1}{2}\big([X+Y, X+Y] - [X,X] - [Y,Y]\big).$$
In the following, we are going to present the main properties of the quadratic (co-)variation. In order to justify the name quadratic variation by the approximating property, we need the following definition.
Definition 9. 
A finite set $\sigma$ of stopping times satisfying $0 = T_1 \le T_2 \le \cdots \le T_k < \infty$ is called a random partition. If $(\sigma_n)_{n\in\mathbb{N}}$ is a sequence of random partitions $\sigma_n := (T^n_1, \dots, T^n_{k_n})$, $k_n \in \mathbb{N}$, then we say that $(\sigma_n)_{n\in\mathbb{N}}$ converges to the identity if $\lim_n \sup_{i=1,\dots,k_n} T^n_i = \infty$ almost surely and
$$\|\sigma_n\| := \sup_{i=1,\dots,k_n-1} \big|T^n_{i+1} - T^n_i\big| \xrightarrow{\;n\to\infty\;} 0 \quad P\text{-a.s.}$$
holds.
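As a purely numerical illustration of this definition (not part of the formal development; the helper names below are ours, and for simplicity the partition points are sampled at deterministic times rather than as genuine stopping times), the following Python sketch builds random partitions of $[0,T]$ and checks that their mesh shrinks as the number of points grows:

```python
import numpy as np

def random_partition(rng, T=1.0, n=10):
    """Sample n uniform points in [0, T] and sort them into a partition
    0 = T_1 <= T_2 <= ... <= T_k, mimicking Definition 9 with deterministic times."""
    times = np.sort(rng.uniform(0.0, T, size=n))
    return np.concatenate(([0.0], times))

def mesh(partition, T=1.0):
    """The mesh ||sigma|| = sup_i |T_{i+1} - T_i|, including the final gap up to T."""
    pts = np.concatenate((partition, [T]))
    return float(np.max(np.diff(pts)))

rng = np.random.default_rng(0)
meshes = [mesh(random_partition(rng, n=2 ** k)) for k in range(1, 12)]
# the meshes shrink towards 0, which is the convergence-to-the-identity condition
```

Doubling the number of partition points repeatedly drives the mesh to zero almost surely, which is exactly the second condition in Definition 9.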
Theorem 19. 
Let $X, Y \in \mathcal{S}^d$. The process $[X,Y]$ is, in each component, an FV process and a semimartingale and has the following properties.
(a) 
$[X,Y]_0 = X_0Y_0$ and $\Delta[X,Y]^{i,j} = \Delta X^i\,\Delta Y^j$.
(b) 
For a sequence $(\sigma_n)_{n\in\mathbb{N}}$ of random partitions that tends to the identity, we have
$$X_0Y_0 + \sum_{i=1}^{k_n-1} \big(X^{T^n_{i+1}} - X^{T^n_i}\big)\big(Y^{T^n_{i+1}} - Y^{T^n_i}\big) \longrightarrow [X,Y], \qquad n\to\infty,$$
in ucp, where $\sigma_n$ is given by $0 = T^n_1 \le T^n_2 \le \cdots \le T^n_{k_n} < \infty$.
(c) 
Let $T$ be a stopping time. Then, we have
$$[X^T, Y] = [X, Y^T] = [X^T, Y^T] = [X,Y]^T.$$
(d) 
The quadratic variation [ X , X ] is a positive, increasing process.
(e) 
If $X$ is an FV process, we have
$$[X,Y]_t = X_0Y_0 + \sum_{0<s\le t} \Delta X_s\,\Delta Y_s.$$
Proof. 
As the set of semimartingales forms an algebra, it is easy to see that each component of $[X,Y]$ is a semimartingale, and by Item (d) together with the polarization identity, each component is an FV process.
We can, for all properties, without loss of generality, assume d = 1 .
(a)
By definition, $[X,Y]_0 = X_0Y_0 - (X_-\cdot Y)_0 - (Y_-\cdot X)_0 = X_0Y_0$, and the first equation is clear. For the second equation, using Theorem 16,
$$\begin{aligned}
\Delta X\,\Delta Y &= (X - X_-)(Y - Y_-) = XY - XY_- - X_-Y + X_-Y_- \\
&= (XY - X_-Y_-) - XY_- - X_-Y + 2X_-Y_- \\
&= \Delta(XY) - X_-(Y - Y_-) - Y_-(X - X_-) \\
&= \Delta(XY) - X_-\,\Delta Y - Y_-\,\Delta X \\
&= \Delta(XY) - \Delta(X_-\cdot Y) - \Delta(Y_-\cdot X),
\end{aligned}$$
where the equality holds almost surely. This proves the result.
(b)
By the polarization identity, it suffices to consider the case where $X = Y$, and without loss of generality, we can assume $X_0 = 0$.
Let $(\sigma_n)_{n\in\mathbb{N}}$ be a sequence of partitions that converges to the identity, with $\sigma_n = \{T^n_1, T^n_2, \dots, T^n_{k_n}\}$. We define $X^{\sigma_n}$ by
$$X^{\sigma_n}_t = X_{T^n_i} \quad\text{for } t \in \big(T^n_i, T^n_{i+1}\big] \text{ for each } i.$$
For a fixed $N \in \mathbb{N}$, we set $R = \inf\{t \ge 0 : |X_t| \ge N\}$. Since $X$ is càdlàg, we have $\big|X^{\sigma_n} 1_{[0,R]}\big| \le N$ for all $n$, and $X^{\sigma_n}_t 1_{[0,R]} \to X_{t-} 1_{[0,R]}$ almost surely as $n \to \infty$.
Now, by expressing $(X^R_t)^2$ as a telescoping sum, we obtain:
$$\begin{aligned}
(X^R_t)^2 &= (X^R_0)^2 + 2\sum_i X_{T_i\wedge R}\big(X_{T_{i+1}\wedge t\wedge R} - X_{T_i\wedge t\wedge R}\big) + \sum_i \big(X_{T_{i+1}\wedge t\wedge R} - X_{T_i\wedge t\wedge R}\big)^2 \\
&= (X^R_0)^2 + 2\big((X^{\sigma_n} 1_{[0,R]})\cdot X\big)_t + \sum_i \big(X_{T_{i+1}\wedge t\wedge R} - X_{T_i\wedge t\wedge R}\big)^2. \qquad (11)
\end{aligned}$$
By the assumptions on $\sigma_n$, only finitely many terms of this sum are nonzero. Define $Z(\sigma_n, t) := \sum_i \big(X_{T^n_{i+1}\wedge t} - X_{T^n_i\wedge t}\big)^2$. According to Proposition 4.32 from [42], we have
$$(X^{\sigma_n} 1_{[0,R]})\cdot X \longrightarrow X_-\cdot X^R$$
as $n \to \infty$ in the semimartingale topology. Furthermore,
$$\{Z(\sigma_n, R\wedge t)\}_{t\ge 0} \longrightarrow \{Z(R)_t\}_{t\ge 0}$$
in ucp for some càdlàg process $Z(R)$.
Since $N$ was arbitrary, we now have a family of processes $\{Z(R_N)\}_{N\in\mathbb{N}}$, where $R_N = \inf\{t \ge 0 : |X_t| \ge N\}$. It is easy to verify that if $M > N$ (and thus $R_M \ge R_N$), then $Z(R_N) = Z(R_M)$ on the interval $[0, R_N]$. Hence, by pasting these processes together, we can define a single process $Z$ such that $Z = Z(R_N)$ on $[0, R_N]$ for all $N$, and $Z(\sigma_n) \to Z$ in ucp as $n \to \infty$.
Thus, on each interval $[0, R]$, Equation (11) implies $Z = [X,X]$, and therefore $Z = [X,X]$ in general.
(c)
We assume $d = 1$. By Theorem 16, we get
$$[X,Y]^T = (XY)^T - (X_-\cdot Y)^T - (Y_-\cdot X)^T = X^TY^T - X^T_-\cdot Y^T - Y^T_-\cdot X^T = [X^T, Y^T].$$
Furthermore,
$$\begin{aligned}
[X^T, Y] &= X^TY - (X^T)_-\cdot Y - Y_-\cdot(X^T) \\
&= X^T\big(Y^T + Y - Y^T\big) - \big(1_{(T,\infty)}X_T + 1_{[0,T]}X_-\big)\cdot Y - Y_-\cdot(X^T) \\
&= (XY)^T + \underbrace{X_T\big(Y - Y^T\big) - \big(1_{(T,\infty)}X_T\big)\cdot Y}_{=0} - (X_-\cdot Y)^T - (Y_-\cdot X)^T \\
&= [X,Y]^T.
\end{aligned}$$
Thus,
$$[X^T, Y] = [X,Y]^T = [Y,X]^T = [Y^T, X] = [X, Y^T],$$
and everything is shown.
(d)
This follows with X = Y directly from the summation representation in Item (b).
(e)
$X$ is a semimartingale, and a stochastic integral with respect to $X$ coincides pathwise with the Lebesgue–Stieltjes integral. The formula for partial integration in the Lebesgue–Stieltjes sense yields
$$X_t^2 = X_0^2 + \int_0^t X_{s-}\,dX_s + \int_0^t X_s\,dX_s.$$
The formula for partial integration for semimartingales, in turn, yields
$$X_t^2 = 2\int_0^t X_{s-}\,dX_s + [X,X]_t.$$
Equating the two expressions for $X_t^2$, we obtain
$$X_0^2 + \int_0^t X_s\,dX_s = \int_0^t X_{s-}\,dX_s + [X,X]_t,$$
thus
$$[X,X]_t = X_0^2 + \int_0^t (X_s - X_{s-})\,dX_s = X_0^2 + \int_0^t \Delta X_s\,dX_s = X_0^2 + \sum_{0<s\le t}(\Delta X_s)^2,$$
which is the claimed formula for $X = Y$; the general case follows by polarization. □
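To connect the approximation property in Theorem 19(b) with computation, the following Python sketch (our own numerical illustration; the helper names are hypothetical) sums squared increments of a simulated Brownian path over finer and finer partitions, approaching $[W,W]_1 = 1$:

```python
import numpy as np

def brownian_path(rng, T=1.0, n=2 ** 16):
    """A discretised Brownian path on a uniform grid with n steps."""
    dt = T / n
    increments = rng.normal(0.0, np.sqrt(dt), size=n)
    return np.concatenate(([0.0], np.cumsum(increments)))

def partition_qv(path, step):
    """Sum of squared increments along every `step`-th grid point,
    i.e. the approximating sum from Theorem 19(b) with X_0 = 0."""
    sub = path[::step]
    return float(np.sum(np.diff(sub) ** 2))

rng = np.random.default_rng(1)
W = brownian_path(rng)
approximations = [partition_qv(W, step) for step in (2 ** 10, 2 ** 6, 2 ** 2, 1)]
# as the mesh shrinks, the partition sums approach [W, W]_1 = 1
```

On the finest grid the sum is close to 1, in line with the ucp convergence asserted in Item (b).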
Theorem 20. 
For $X, Y \in \mathcal{S}^1$, $H \in L(X)$ and $K \in L(Y)$, we have
$$[H\cdot X, K\cdot Y]_t = \int_0^t H_sK_s\,d[X,Y]_s \qquad (t \ge 0).$$
Proof. 
Without loss of generality, assume $X_0 = Y_0 = 0$. We first consider $H$ of the form $H = 1_{[0,T]}$ for a stopping time $T$. Now, by Theorem 19, we have
$$[H\cdot X, Y] = [1_{[0,T]}\cdot X, Y] = [X^T, Y] = [X,Y]^T = 1_{[0,T]}\cdot[X,Y] = H\cdot[X,Y].$$
If $H$ is of the form $H = U 1_{(S,T]}$ with stopping times $S \le T$ and an $\mathcal{F}_S$-measurable random variable $U$, we get
$$[H\cdot X, Y] = [U(1_{(S,T]}\cdot X), Y] = [U(X^T - X^S), Y] = U\big([X^T,Y] - [X^S,Y]\big) = U\big([X,Y]^T - [X,Y]^S\big) = U\big(1_{(S,T]}\cdot[X,Y]\big) = H\cdot[X,Y].$$
With the usual monotone-class argument, this can be extended to any locally bounded $H$. Finally, Theorem 15 provides the result for $H \in L(X)$. □
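Theorem 20 lends itself to a quick numerical sanity check (our own illustration, not from the paper): for $X = Y = W$ and deterministic integrands $H, K$, the increment-based bracket $[H\cdot W, K\cdot W]_1$ should match $\int_0^1 H_sK_s\,ds$, since $d[W,W]_s = ds$.

```python
import numpy as np

rng = np.random.default_rng(2)
n, T = 2 ** 16, 1.0
t = np.linspace(0.0, T, n + 1)
dW = rng.normal(0.0, np.sqrt(T / n), size=n)

H = np.sin(t[:-1])  # left-endpoint (predictable) sampling of H_s = sin(s)
K = np.cos(t[:-1])  # left-endpoint sampling of K_s = cos(s)

HW = np.concatenate(([0.0], np.cumsum(H * dW)))  # Riemann sums for H . W
KW = np.concatenate(([0.0], np.cumsum(K * dW)))  # Riemann sums for K . W

bracket = float(np.sum(np.diff(HW) * np.diff(KW)))  # approximates [H.W, K.W]_1
integral = float(np.sum(H * K) * (T / n))           # approximates int_0^1 H_s K_s ds
# the two quantities agree up to discretisation error
```

The agreement of `bracket` and `integral` mirrors the identity $[H\cdot X, K\cdot Y] = \int HK\,d[X,Y]$ in the simplest Brownian case.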
Remark 25. 
Theorem 20 has a multi-dimensional version as well [5] (Theorem 4.19). However, the multidimensional variant requires some more technical definitions, and we, therefore, stick to the much simpler version in one dimension.
To apply standard arguments, for instance, those used in the proof of Itô’s formula, it is essential to decompose the quadratic variation into its continuous and purely discontinuous parts. For completeness, we state this result below. With this decomposition at hand, the proofs of the usual results follow directly.
Proposition 1. 
Let $X, Y \in \mathcal{S}^1$ be (one-dimensional) semimartingales and recall that $[X,Y]$ is càdlàg and of finite variation with $\Delta[X,Y]_t = \Delta X_t\,\Delta Y_t$ and $[X,Y]_0 = X_0Y_0$ by Theorem 19. Define
$$[X,Y]^d_t := [X,Y]_0 + \sum_{0<s\le t}\Delta X_s\,\Delta Y_s, \qquad [X,Y]^c := [X,Y] - [X,Y]^d.$$
Then:
(i)  
The sum in the definition of $[X,Y]^d$ converges absolutely on compacts; hence, $[X,Y]^d$ is a càdlàg finite-variation process which is purely discontinuous. Moreover,
$$\Delta[X,Y]^d_t = \Delta X_t\,\Delta Y_t \quad\text{for all } t > 0, \qquad [X,Y]^d_0 = X_0Y_0.$$
Equivalently, $[X,Y]^d_t = X_0Y_0 + \sum_{0<s\le t}\Delta X_s\,\Delta Y_s$.
(ii) 
The process $[X,Y]^c$ is continuous, of finite variation, and satisfies
$$[X,Y]^c_0 = 0.$$
(iii) 
The decomposition is unique and behaves well under stopping: for every stopping time $T$,
$$[X,Y]^{c,T} = [X^T, Y^T]^c, \qquad [X,Y]^{d,T} = [X^T, Y^T]^d.$$
(iv) 
For every $H \in L(X)$, $K \in L(Y)$,
$$[H\cdot X, K\cdot Y]^c = \int_0^{\cdot} H_sK_s\,d[X,Y]^c_s, \qquad [H\cdot X, K\cdot Y]^d_t = (H\cdot X)_0(K\cdot Y)_0 + \sum_{0<s\le t} H_sK_s\,\Delta X_s\,\Delta Y_s.$$
Equivalently,
$$[H\cdot X, K\cdot Y]_t = [H\cdot X, K\cdot Y]_0 + \int_0^t H_sK_s\,d[X,Y]^c_s + \sum_{0<s\le t} H_sK_s\,\Delta X_s\,\Delta Y_s.$$
All assertions hold componentwise for $X, Y \in \mathcal{S}^d$.
Proof. 
By Theorem 19, the bracket $[X,Y]$ is (componentwise) a finite-variation semimartingale with $\Delta[X,Y]_t = \Delta X_t\,\Delta Y_t$ and $[X,Y]_0 = X_0Y_0$. For any càdlàg finite-variation process $V$, the total variation on $[0,t]$ dominates the sum of absolute jumps; hence, $\sum_{0<s\le t}|\Delta[X,Y]_s| \le \mathrm{Var}([X,Y])_t < \infty$. Thus, the series in $[X,Y]^d$ converges absolutely (on compacts), so $[X,Y]^d$ is of finite variation and purely discontinuous, with the stated jumps and initial value. Setting $[X,Y]^c := [X,Y] - [X,Y]^d$ gives a càdlàg finite-variation process with no jumps; hence, it is continuous and $[X,Y]^c_0 = 0$. Uniqueness follows from equality of jumps and initial value.
The stopping identities follow from Theorem 19: since $[X^T, Y^T] = [X,Y]^T$, the same algebra gives $[X^T, Y^T]^d = [X,Y]^{d,T}$ (because both have the same initial value and jumps), and hence also $[X^T, Y^T]^c = [X,Y]^{c,T}$.
For the integration statement, Theorem 20 yields
$$[H\cdot X, K\cdot Y] = \int HK\,d[X,Y].$$
Decomposing $[X,Y] = [X,Y]^c + [X,Y]^d$ and using linearity of the (Lebesgue–)Stieltjes integral for finite-variation integrators (cf. Item (f)) gives $[H\cdot X, K\cdot Y] = \int HK\,d[X,Y]^c + \int HK\,d[X,Y]^d$.
Since $[X,Y]^d$ is purely discontinuous with jumps $\Delta[X,Y]^d_t = \Delta X_t\,\Delta Y_t$, the pathwise Stieltjes integral against $[X,Y]^d$ reduces to the sum of atoms, $\int_0^t H_sK_s\,d[X,Y]^d_s = \sum_{0<s\le t}H_sK_s\,\Delta X_s\,\Delta Y_s$, while the integral against $[X,Y]^c$ yields the claimed continuous part. The displayed formulas follow. □
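The continuous/discontinuous split of Proposition 1 can be seen numerically (our own sketch with an illustrative setup): for $X = W + J$, where $J$ carries two deterministic jumps, the increment-based quadratic variation separates into a part close to $[W,W]_1 = 1$ and the exact jump sum.

```python
import numpy as np

rng = np.random.default_rng(3)
n, T = 2 ** 15, 1.0
dt = T / n
dW = rng.normal(0.0, np.sqrt(dt), size=n)

# a pure-jump part with jumps of size 0.5 and -0.3 at two fixed grid times
dJ = np.zeros(n)
dJ[n // 4] = 0.5
dJ[3 * n // 4] = -0.3

dX = dW + dJ                    # increments of X = W + J, with X_0 = 0
qv = float(np.sum(dX ** 2))     # approximates [X, X]_1
qv_d = float(np.sum(dJ ** 2))   # the purely discontinuous part [X, X]^d_1 = 0.34
qv_c = qv - qv_d                # approximates the continuous part [W, W]_1 = 1
```

The jump part is computed exactly as the sum of squared jumps, while the remainder recovers the continuous Brownian bracket up to discretisation noise.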
Remark 26. 
Two immediate consequences of Proposition 1 align with Theorem 19:
  • If one of $X$ or $Y$ has finite variation, then $[X,Y]^c \equiv 0$ and $[X,Y]_t = X_0Y_0 + \sum_{0<s\le t}\Delta X_s\,\Delta Y_s$.
  • If $X$ and $Y$ are continuous, then $[X,Y]^d \equiv X_0Y_0$ and the evolving part $[X,Y] - X_0Y_0$ is continuous.
Remark 27. 
It would have been possible to define the quadratic covariation of two local martingales $M$ and $N$ as the unique FV process $[M,N]$ such that $M^iN^j - [M,N]^{i,j}$ is a local martingale for all $i, j$, and $\Delta[M,N]^{i,j} = \Delta M^i\,\Delta N^j$.
By the integration by parts formula and Theorem 18, $M^iN^j - [M,N]^{i,j}$ is indeed a local martingale in each component. Furthermore, according to Theorem 19, the relation $\Delta[M,N]^{i,j} = \Delta M^i\,\Delta N^j$ is clear. The uniqueness of $[M,N]$ is derived from the fact that any local martingale of finite variation must be a pure jump process; this condition on the jumps then secures the uniqueness.
Now, we have the tools at hand to give some more examples.
Example 10. 
Let $W$ be a standard Brownian motion. We define
$$H_t = \left(\frac{1}{\sqrt{t}},\; -\frac{1}{\sqrt{t}}\right) \quad\text{and}\quad X_t = \Big(W_{t\wedge 1},\; W_{t\wedge 1} + \big(1_{[1,\infty)}\cdot W\big)_t\Big).$$
Assume $H$ were componentwise integrable with respect to $X$. Then, by Theorem 19, $[H^1\cdot X^1, H^1\cdot X^1]$ would be a semimartingale. But since
$$[H^1\cdot X^1, H^1\cdot X^1]_t = \int_0^{t\wedge 1}\frac{1}{s}\,ds = \infty,$$
we obtain a contradiction, and hence $H$ is not componentwise integrable with respect to $X$.
However, we have $H \in L(X)$. To show this, we define
$$H^n_t = \left(\frac{1}{\sqrt{t + 1/n}},\; -\frac{1}{\sqrt{t + 1/n}}\right).$$
Then, one easily sees that $(H^n)$ is Cauchy in the $X$-topology and converges to $H$.
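The divergence behind Example 10 is elementary to check numerically (our own illustration; the quantities below are the closed-form integrals, not a path simulation): the componentwise bracket $\int_\varepsilon^1 s^{-1}\,ds = -\log\varepsilon$ blows up as $\varepsilon \downarrow 0$, while after the cancellation of the two legs the vector integral only sees the integrand on $[1,\infty)$, where it is harmless.

```python
import numpy as np

# componentwise bracket: int_eps^1 (1/s) ds = -log(eps) diverges as eps -> 0
eps_values = [1e-2, 1e-4, 1e-8]
componentwise_qv = [float(-np.log(eps)) for eps in eps_values]

# after cancellation, the surviving part of the vector integral only involves
# the integrand on [1, T], where int_1^T (1/s) ds = log(T) is finite for all T
T = 10.0
vector_qv = float(np.log(T))
```

The componentwise quantity grows without bound while the vector quantity stays finite, which is exactly the dichotomy the example exploits.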
Remark 28 
(Practical implications of Example 10). The fact that an integrand can be integrable without being componentwise integrable has significant practical implications, particularly in mathematical finance. This is not merely a theoretical curiosity—there are real-world scenarios where such strategies arise naturally:
(i) 
Portfolio theory: In multi-asset portfolio models, a trading strategy H = ( H 1 , , H d ) may be admissible (i.e., H L ( X ) , where X is the vector of asset prices) even when individual positions H i are not separately integrable with respect to X i . This occurs when cross-asset correlations provide “offsetting effects” that tame the singularities of individual components.
(ii) 
Market completeness: A market is complete if every contingent claim can be replicated by a self-financing trading strategy. In certain multi-asset models, the replicating strategy for a given claim may fail to be componentwise integrable yet still belong to L ( X ) . If one restricts attention to componentwise integrable strategies, such markets would appear incomplete, even though they are complete when the full space L ( X ) of admissible strategies is considered. Thus, the distinction between L ( X ) and componentwise integrable processes is essential for correctly characterizing market completeness.
(iii) 
Arbitrage and the Fundamental Theorems of Asset Pricing: The Fundamental Theorems of Asset Pricing (FTAP) establish the equivalence between no-arbitrage conditions and the existence of equivalent martingale measures. Crucially, arbitrage opportunities must be ruled out over the entire space of admissible strategies $L(X)$, not merely over componentwise integrable strategies. If arbitrage opportunities exist that are built on strategies in $L(X)$ that are not componentwise integrable, a theory based solely on componentwise integrability would fail to detect them. Conversely, a market that appears arbitrage-free under componentwise integration might harbor arbitrage when the full space $L(X)$ is considered. This is why the general stochastic vector integral, and not just the componentwise version, is indispensable for a rigorous treatment of the FTAP; see [1,2,5].
(iv) 
Hedging in correlated markets: The example demonstrates that hedging strategies exploiting correlations between assets can be integrable even when the individual “legs” of the hedge would blow up if considered in isolation. This is precisely the phenomenon underlying certain spread trades and relative-value strategies in practice.
(v) 
Model robustness: Classical componentwise constructions require checking integrability for each asset separately. Our topological approach shows that the true space of admissible strategies is strictly larger. Any financial model that restricts to componentwise integrable strategies is potentially misspecified: it may exclude legitimate trading strategies, mischaracterize completeness, or overlook arbitrage opportunities.
Hence, Example 10 illustrates that a stochastic integration theory based solely on componentwise integrability is insufficient for mathematical finance. The full general stochastic vector integral, as developed in this paper, is essential for the Fundamental Theorems of Asset Pricing to hold in their proper generality.
Remark 29. 
There are several criteria for determining when an integrand is componentwise integrable. One such criterion is discussed in Remark 30. Moreover, all locally bounded predictable integrands are necessarily integrable (see Theorem 13). Additionally, if $H \in L(X)$ and $[X^i, X^j] = 0$ for all $i, j$ with $i \ne j$, then $H^i \in L(X^i)$ for all $i$.
The next example shows that the condition of componentwise integrability in the Dominated Convergence Theorem cannot be dropped.
Example 11. 
Let $H, X$ be the processes from the previous example and $K = (0, H^2)$, with $H^2$ being the second component of $H$. Clearly, we have $|K^i| \le |H^i|$ for $i = 1, 2$, and by Example 10, we also have $H \in L(X)$, but $K \notin L(X)$. This demonstrates that the condition of componentwise integrability cannot be dropped in the Dominated Convergence Theorem.
Remark 30. 
One might criticize that the topological approach does not directly yield a description of the class of integrands via integrability conditions. However, such criteria are straightforward to derive, particularly for componentwise integrability.
Let $X \in \mathcal{S}^1$ and $H \in \mathcal{P}$, and consider a decomposition of $X$ as $X = N + A$. Assume that, locally,
$$E\left[\int_0^\infty H_s^2\,d[N,N]_s\right] + E\left[\left(\int_0^\infty |H_s|\,|dA_s|\right)^2\right] < \infty$$
holds. Then, $H \in L(X)$.
This can be demonstrated by setting $H^n = 1_{\{|H|\le n\}}H$. Then, $H^n \in L(X)$, and it remains to show that $(H^n)_{n\in\mathbb{N}}$ is a Cauchy sequence with respect to $d_X$. To this end, we first establish that $(H^n\cdot X)_{n\in\mathbb{N}}$ is Cauchy in $\mathcal{H}^2$ (see, for example, [42] (Theorem IV.14)). Next, we demonstrate that $H^n\cdot X$ converges in the semimartingale topology [40] (Theorem VI.4.19). The rest follows from the equivalence of the $\mathcal{H}^2$ and $d_X$ norms [42] (Corollary of Theorem IV.24 and Theorem V.14).

9. Applications to Stochastic Differential Equations

In this section, we demonstrate how the topological stochastic integral developed in this paper is applied in practice. The key observation is that applications of stochastic integration rely on the properties of the integral—linearity, Itô’s formula, the dominated convergence theorem, stability results—rather than on the specifics of its construction. Since our integral coincides with the classical one and we have established all the standard properties in the preceding sections, the topological integral can be used in exactly the same way as the classical or functional analytic integral.
Remark 31 
(Equivalence with classical integrals). As established in Remark 13, the stochastic integral H X defined via our topological closure coincides with the integral defined via the classical approach (Jacod, Cohen–Elliott) and the functional analytic approach (Protter). In particular:
(i)   
For locally bounded predictable integrands H and semimartingales X, all three definitions yield the same process H X .
(ii)  
The space of integrable processes L ( X ) coincides with the space identified in the classical theory.
(iii) 
In settings where results such as Itô’s formula or Girsanov’s theorem hold for the classical integral (under the usual hypotheses on the filtration and noise), they hold equally for our topological integral, since the integrals coincide.
This equivalence ensures that any result or application developed using the classical integral transfers immediately to our setting.
We now present several examples demonstrating how the topological integral is used to solve and analyze stochastic differential equations. These examples use only the properties established in this paper.
Example 12 
(Linear SDE). Consider the linear stochastic differential equation
$$dX_t = \mu X_t\,dt + \sigma X_t\,dW_t, \qquad X_0 = x_0 > 0,$$
where $W$ is a standard Brownian motion and $\mu, \sigma \in \mathbb{R}$ are constants. To solve this equation, we seek a process $X$ such that
$$X_t = x_0 + \int_0^t \mu X_s\,ds + \int_0^t \sigma X_s\,dW_s.$$
The solution is found using Itô's formula (see, e.g., [42] (Theorem II.32)). Let $f(x) = \log x$ for $x > 0$. Applying Itô's formula to $Y_t = f(X_t) = \log X_t$ yields
$$dY_t = \frac{1}{X_t}\,dX_t - \frac{1}{2X_t^2}\,d[X,X]_t = \left(\mu - \frac{\sigma^2}{2}\right)dt + \sigma\,dW_t.$$
Integrating and exponentiating gives the explicit solution
$$X_t = x_0\exp\left(\left(\mu - \frac{\sigma^2}{2}\right)t + \sigma W_t\right).$$
This is the geometric Brownian motion, fundamental to the Black–Scholes model in mathematical finance.
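The closed form is easy to verify against an Euler–Maruyama discretisation driven by the same noise; the following Python sketch (our own illustration, with hypothetical parameter choices) does so:

```python
import numpy as np

def simulate_gbm(rng, x0=1.0, mu=0.05, sigma=0.2, T=1.0, n=2 ** 14):
    """Euler-Maruyama for dX = mu X dt + sigma X dW, together with the
    closed-form solution evaluated on the same Brownian path."""
    dt = T / n
    dW = rng.normal(0.0, np.sqrt(dt), size=n)
    x = x0
    for dw in dW:
        x += mu * x * dt + sigma * x * dw
    W_T = float(np.sum(dW))
    exact = x0 * np.exp((mu - 0.5 * sigma ** 2) * T + sigma * W_T)
    return x, exact

rng = np.random.default_rng(4)
euler, exact = simulate_gbm(rng)
# the two terminal values agree up to the discretisation error of the scheme
```

The terminal values agree closely, and the Euler path stays positive, as geometric Brownian motion should.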
Example 13 
(Ornstein–Uhlenbeck process). The Ornstein–Uhlenbeck process satisfies
$$dX_t = -\theta X_t\,dt + \sigma\,dW_t, \qquad X_0 = x_0,$$
where $\theta > 0$ is the mean-reversion rate. To solve this, we apply Itô's formula to $Y_t = e^{\theta t}X_t$:
$$dY_t = \theta e^{\theta t}X_t\,dt + e^{\theta t}\,dX_t = \theta e^{\theta t}X_t\,dt + e^{\theta t}\big(-\theta X_t\,dt + \sigma\,dW_t\big) = \sigma e^{\theta t}\,dW_t.$$
Integrating yields
$$Y_t = x_0 + \sigma\int_0^t e^{\theta s}\,dW_s, \quad\text{and hence}\quad X_t = e^{-\theta t}Y_t = e^{-\theta t}x_0 + \sigma\int_0^t e^{-\theta(t-s)}\,dW_s.$$
The integrand $s \mapsto \sigma e^{\theta s}$ is deterministic and bounded on $[0,t]$, hence locally bounded and therefore in $L(W)$ by Theorem 13. The integral is well defined, and the solution is a Gaussian process.
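The explicit representation can likewise be compared with an Euler scheme on the same noise (our own sketch; parameter values are illustrative):

```python
import numpy as np

def simulate_ou(rng, x0=1.0, theta=2.0, sigma=0.5, T=1.0, n=2 ** 12):
    """Euler scheme for dX = -theta X dt + sigma dW, compared with the explicit
    solution X_T = e^{-theta T} x0 + sigma int_0^T e^{-theta(T-s)} dW_s on the same path."""
    dt = T / n
    dW = rng.normal(0.0, np.sqrt(dt), size=n)
    x = x0
    for dw in dW:
        x += -theta * x * dt + sigma * dw
    s = (np.arange(n) + 0.5) * dt  # midpoint evaluation of the deterministic kernel
    exact = np.exp(-theta * T) * x0 + sigma * float(np.sum(np.exp(-theta * (T - s)) * dW))
    return x, exact

rng = np.random.default_rng(5)
euler, exact = simulate_ou(rng)
# additive noise makes the Euler scheme strongly accurate, so the two values are close
```

Since the noise enters additively, the discretisation is very accurate and the two terminal values nearly coincide.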
Example 14 
(Multidimensional SDE system). Consider the two-dimensional system
$$\begin{pmatrix} dX^1_t \\ dX^2_t \end{pmatrix} = \begin{pmatrix} \mu_1 \\ \mu_2 \end{pmatrix} dt + \begin{pmatrix} \sigma_{11} & \sigma_{12} \\ \sigma_{21} & \sigma_{22} \end{pmatrix} \begin{pmatrix} dW^1_t \\ dW^2_t \end{pmatrix},$$
where $(W^1, W^2)$ is a two-dimensional Brownian motion. In integral form:
$$X_t = X_0 + \int_0^t \mu\,ds + \int_0^t \Sigma\,dW_s,$$
where $\Sigma = (\sigma_{ij})$ is the volatility matrix and the stochastic integral is the vector integral $\Sigma\cdot W$. Since $\Sigma$ is constant (hence locally bounded), we have $\Sigma \in L(W)$ by Theorem 13, and the vector integral is well defined via our topological construction. The solution is
$$X_t = X_0 + \mu t + \Sigma W_t,$$
a multivariate Gaussian process. This example illustrates the seamless handling of vector integration in our framework.
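Since the solution is $X_t = X_0 + \mu t + \Sigma W_t$, its law is Gaussian with mean $X_0 + \mu t$ and covariance $\Sigma\Sigma^\top t$. The following Monte Carlo sketch (our own, with illustrative parameter values and $X_0 = 0$) checks both moments:

```python
import numpy as np

rng = np.random.default_rng(6)
mu = np.array([0.1, -0.2])
Sigma = np.array([[0.3, 0.1],
                  [0.0, 0.2]])
T, paths = 1.0, 200_000

W_T = rng.normal(0.0, np.sqrt(T), size=(paths, 2))  # terminal values of (W^1, W^2)
X_T = mu * T + W_T @ Sigma.T                        # X_T with X_0 = 0

empirical_mean = X_T.mean(axis=0)                   # should be close to mu * T
empirical_cov = np.cov(X_T, rowvar=False)           # should be close to Sigma Sigma^T T
theoretical_cov = Sigma @ Sigma.T * T
```

Both empirical moments match the Gaussian law predicted by the vector-integral solution, up to Monte Carlo error.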
Example 15 
(SDE with jumps). Let N be a Poisson process with intensity λ > 0 , and consider
$$dX_t = \mu X_{t-}\,dt + \sigma X_{t-}\,dW_t + \gamma X_{t-}\,d(N_t - \lambda t), \qquad X_0 = x_0.$$
The compensated Poisson process $\tilde N_t = N_t - \lambda t$ is a martingale. Define the two-dimensional semimartingale $Z := (W, \tilde N)$. The integrand $H_t = (\sigma X_{t-}, \gamma X_{t-})$ is predictable (left-continuous in the solution process). By the stability result Theorem 18, the stochastic integral $H\cdot Z$ is defined and is a semimartingale once $H$ is locally bounded, which holds locally for any càdlàg candidate solution $X$.
The solution can be constructed via a fixed-point argument or the Doléans-Dade exponential. Using Itô's formula for semimartingales with jumps (see, e.g., [42] (Theorem II.32)), one obtains
$$X_t = x_0\exp\left(\left(\mu - \frac{\sigma^2}{2} - \lambda\gamma\right)t + \sigma W_t\right)\prod_{s\le t}(1+\gamma)^{\Delta N_s},$$
which is a jump-diffusion process used extensively in financial modeling.
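The jump-diffusion closed form can also be checked against an Euler scheme driven by the same Brownian and Poisson increments (our own sketch; parameter values are illustrative, and the product over jumps collapses to $(1+\gamma)^{N_t}$):

```python
import numpy as np

def simulate_jump_diffusion(rng, x0=1.0, mu=0.05, sigma=0.2, gam=0.1,
                            lam=3.0, T=1.0, n=2 ** 14):
    """Euler scheme for dX = X_-(mu dt + sigma dW + gam d(N - lam t)), compared with
    the closed form x0 exp((mu - sigma^2/2 - lam gam) T + sigma W_T) (1 + gam)^{N_T}."""
    dt = T / n
    dW = rng.normal(0.0, np.sqrt(dt), size=n)
    dN = rng.poisson(lam * dt, size=n)
    x = x0
    for dw, dn in zip(dW, dN):
        x += x * (mu * dt + sigma * dw + gam * (dn - lam * dt))
    W_T, N_T = float(np.sum(dW)), int(np.sum(dN))
    exact = (x0 * np.exp((mu - 0.5 * sigma ** 2 - lam * gam) * T + sigma * W_T)
             * (1.0 + gam) ** N_T)
    return x, exact

rng = np.random.default_rng(7)
euler, exact = simulate_jump_diffusion(rng)
# the scheme tracks the Doleans-Dade exponential up to discretisation error
```

Between jumps the scheme behaves like the geometric Brownian case, and at each jump it multiplies the state by approximately $(1+\gamma)$, matching the exponential formula.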
Remark 32 
(Existence and uniqueness of SDE solutions). The standard existence and uniqueness theorems for SDEs (e.g., under Lipschitz and linear growth conditions) rely on the following properties of the stochastic integral:
(i)   
Linearity of the integral map H H X (Theorem 16(a)).
(ii)  
The Burkholder–Davis–Gundy (BDG) inequalities relating $E\big[\sup_{s\le t}|(H\cdot X)_s|^p\big]$ to $E\big[[H\cdot X, H\cdot X]_t^{p/2}\big]$.
(iii) 
The dominated convergence theorem (Theorem 14) for passing limits inside integrals.
(iv)  
Stability: if H is locally bounded predictable and X is a semimartingale, then H X is a semimartingale (Theorem 18).
Properties (i), (iii), and (iv) have been established directly in this paper. For the BDG inequalities (ii), since our integral coincides with the classical semimartingale integral (Remark 13), these classical tools apply unchanged; see [42] (Theorem IV.48). Therefore, the classical Picard iteration argument for proving existence and uniqueness of SDE solutions applies verbatim in our setting. Specifically, for an SDE of the form
$$X_t = X_0 + \int_0^t b(s, X_s)\,ds + \int_0^t \sigma(s, X_s)\,dW_s,$$
if b and σ satisfy standard Lipschitz and linear growth conditions, there exists a unique strong solution, and this solution is a semimartingale.
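The Picard scheme mentioned above can be sketched numerically (our own illustration; `picard_solve` and its arguments are hypothetical names): on a fixed discretised Brownian path, successive iterates contract rapidly in the sup norm, mirroring the fixed-point argument.

```python
import numpy as np

def picard_solve(rng, b, sigma, x0=1.0, T=1.0, n=2 ** 10, iterations=10):
    """Picard iteration X^{k+1}_t = x0 + int_0^t b(s, X^k_s) ds + int_0^t sigma(s, X^k_s) dW_s,
    with both integrals approximated by left-endpoint sums on one fixed Brownian path."""
    dt = T / n
    t = np.linspace(0.0, T, n + 1)
    dW = rng.normal(0.0, np.sqrt(dt), size=n)
    X = np.full(n + 1, x0)
    gaps = []
    for _ in range(iterations):
        drift = np.concatenate(([0.0], np.cumsum(b(t[:-1], X[:-1]) * dt)))
        noise = np.concatenate(([0.0], np.cumsum(sigma(t[:-1], X[:-1]) * dW)))
        X_new = x0 + drift + noise
        gaps.append(float(np.max(np.abs(X_new - X))))
        X = X_new
    return X, gaps

rng = np.random.default_rng(8)
_, gaps = picard_solve(rng, b=lambda t, x: -x, sigma=lambda t, x: 0.3 * np.ones_like(x))
# the successive sup-norm gaps shrink rapidly, reflecting the contraction property
```

With Lipschitz coefficients the gap between consecutive iterates decays factorially on a fixed horizon, which is the quantitative heart of the existence and uniqueness proof.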
Remark 33 
(Stochastic partial differential equations). The present paper focuses on finite-dimensional semimartingales as integrators. Stochastic partial differential equations (SPDEs) typically involve infinite-dimensional processes, such as cylindrical Brownian motion in a Hilbert space, and require infinite-dimensional stochastic integration theory.
While our topological approach is developed for the finite-dimensional setting, the underlying philosophy—defining the integral as the unique continuous extension under an appropriate topology—has natural extensions to infinite dimensions. Indeed, the work of Assefa and Harms [44] develops cylindrical stochastic integration using similar topological ideas. A full treatment of SPDEs is beyond the scope of this paper, but we note that:
(i)   
SPDEs with finite-dimensional noise: Our theory applies directly to finite-dimensional approximations (e.g., Galerkin schemes), where the SPDE reduces to an R n -valued SDE driven by finitely many semimartingales. The resulting stochastic integrals are exactly of the form treated in this paper. Similarly, when evaluating the mild solution against test functions or taking coordinate projections, one obtains R -valued semimartingale integrals covered by our framework.
(ii)  
Extensions to infinite-dimensional noise: For truly infinite-dimensional integrators (cylindrical or Q-Wiener processes), the key changes would be:
  • The integrator becomes a cylindrical Brownian motion or Q-Wiener process in a Hilbert space H.
  • Integrands are Hilbert–Schmidt operator-valued predictable processes.
  • One would need a semimartingale topology on the space of H-valued processes.
The topological closure philosophy remains applicable, but the functional–analytic details differ; developing a full Banach- or Hilbert-space-valued theory is beyond the present scope.
(iii) 
Galerkin approximations: Many SPDE results are established via finite-dimensional Galerkin approximations. For each n-dimensional approximating system, our framework applies directly. The passage to the infinite-dimensional limit then uses compactness arguments specific to the SPDE setting.
We leave the systematic development of infinite-dimensional topological stochastic integration as an interesting direction for future research.

10. Conclusions and Comparison with the Existing Approaches

We have presented a novel approach to defining the general stochastic vector integral through a single topological closure in the Émery topology. This construction achieves in one step what traditionally required multiple layers of technical machinery. To conclude, we now systematically compare our topological route with the two established approaches: the classical path (à la Itô–Doob–Meyer and its modern presentations, e.g., Cohen–Elliott), and the functional–analytic path (à la Protter).
This comparison highlights where traditional approaches expend technical effort and indicates precisely which steps are bypassed when one defines the integral as the unique continuous extension in the Émery topology described in Section 5. The key insight is that a single closure in the Émery topology yields the full predictable, vector-valued integral H X without passing through H 2 , without bracket factorization, and without a canonical decomposition. A longer overview and further references are collected in the survey [76].
The standard classical development originated with Itô [7,8] for Brownian motion, was extended by Doob [77] and Meyer [13,78] via the Doob–Meyer decomposition, further developed by Kunita and Watanabe [16] for square-integrable martingales, generalized by Doléans-Dade and Meyer [79] to locally bounded predictable integrands, and completed by Jacod [22,24] for unbounded integrands. The presentation in [52] provides a modern synthesis of these classical results. The approach proceeds in the following layers:
(1)
Square-integrable martingale integrators. First, the integral is defined for simple predictable, left-continuous $H \in \Lambda$; then it is extended by an isometry (Itô's isometry) to all $H$ in the closure under the $L^2([M])$-type norm; see [16] and the modern treatment in [52], Defs. 12.1.1–12.1.3 and Lemma 12.1.4.
(2)
From $\mathcal{H}^2$ to semimartingales. The Doob–Meyer decomposition is used to write $X = M + A$ and define $H\cdot X := H\cdot M + \int H\,dA$, and it is verified that this is decomposition-independent and that the basic properties (linearity, stopping, jump identity, etc.) continue to hold for special and then general semimartingales; see [22,79], and the synthesis in [52], Chapter 11 and Section 12.3.
(3)
Vector integration. For $d$-dimensional local martingales $M$, the factorization $[M] = \pi\cdot C$ (matrix density $\pi$ against a scalar increasing process $C$) is introduced, the spaces $L^p(M)$ are built via $\|H\|_{L^p(M)} = \big\|\big((H^\top\pi H)\cdot C\big)^{1/2}\big\|_{L^p}$, and $H\cdot M$ is defined as the unique local martingale characterized by $[H\cdot M, N] = (H^\top K)\cdot C$ whenever $[M,N] = K\cdot C$; see [27,28], and the exposition in [52], Lem. 12.5.3, Defs. 12.5.5–12.5.6, and Thm. 12.5.7.
Two pain points in the vector case are (i) the need to drag along the bracket factorization $[M] = \pi\cdot C$ and the $L^p(M)$ spaces and (ii) the failure of the componentwise construction to be closed in the Émery topology; cf. [52] (Example 12.5.2).
The functional–analytic approach, building on the characterization of good integrators by Bichteler [33,34] and Dellacherie [35], emphasizes functional–analytic closability and completeness, proceeding in two distinct stages: first for càglàd integrands without decomposition, then for general predictable integrands using decomposition.
Stage 1: Càglàd integrands (no decomposition needed).
(1)
The integral on simple predictable Λ . One starts with simple predictable processes and defines H X directly for any semimartingale X, without decomposing X.
(2)
Extension via ucp topology. The topology of uniform convergence on compacts in probability (ucp) is introduced, and it is shown that the integral operator $J_X : \Lambda_{ucp} \to \mathbb{D}_{ucp}$ is continuous.
(3)
Completion to $\mathbb{L}$. Since $\Lambda$ is dense in the space $\mathbb{L}$ of càglàd processes under the ucp topology, the integral is extended by continuity to all $H \in \mathbb{L}$. This yields the stochastic integral for càglàd integrands without any decomposition of $X$ or use of $\mathcal{H}^2$ machinery.
Stage 2: Extension to predictable integrands.
(1)
Foundation on $\mathcal H^2$. For special semimartingales $X\in\mathcal H^2$ with canonical decomposition $X=N+A$, a norm is introduced that controls both $[N,N]$ and the total variation of $A$; stability estimates provide uniform control of $\sup_t|X_t|$ in terms of that norm.
(2)
Extension from $b\Lambda$ to $b\mathcal P$. The integral map $H\mapsto H\cdot X$ is closed from the bounded left-continuous simple integrands $b\Lambda$ to all bounded predictable processes $b\mathcal P$ by approximating simultaneously in appropriate seminorms (Theorems 6–8 of Ch. IV: closability and density results).
(3)
Localization to arbitrary semimartingales and general $H\in\mathcal P$. After the $b\mathcal P$ step in $\mathcal H^2$, localization yields the integral for general semimartingales and predictable integrands.
The background equivalence between good integrators and semimartingales is proved via a Bichteler–Dellacherie–Mokobodzki argument.
There are several key advantages of the topological approach over both the classical and functional analytic routes:
  • No canonical decomposition is needed for the definition. While the functional analytic approach achieves this for càglàd integrands, it requires decomposition and $\mathcal H^2$ machinery for the extension to predictable integrands; the classical approach uses decomposition throughout. Our definition never splits $X$; it is a closure on $\Lambda^d$ in the Émery topology that directly yields $L(X)$.
  • Single-stage construction. The functional analytic approach proceeds in two stages: first extending to càglàd integrands without decomposition (similar to our approach), then requiring $\mathcal H^2$ machinery and decomposition for predictable integrands. We accomplish both extensions in a single closure operation. This single-step topological closure improves upon the two-stage functional analytic extension in three concrete ways: (i) it eliminates the intermediate $\mathcal H^2$ space and its associated $d_x/d_y$ seminorms, which require the special-semimartingale decomposition; (ii) it avoids the separate closability arguments for bounded versus unbounded integrands; and (iii) it provides a unified treatment in which the vector case requires no machinery beyond the scalar case: the same closure operation yields both.
  • No passage through $\mathcal H^2$ nor $b\Lambda\to b\mathcal P$ closure under the $d_x/d_y$ seminorms. In [42], properties for bounded predictable $H$ are inherited from $b\Lambda$ by a two-metric approximation inside $\mathcal H^2$ (Theorems 5–8). We replace these steps by a single closure and a direct bounded/dominated convergence on $L(X)$ (Theorems 14 and A1). The crucial difference is that our closure is performed directly on the predictable $\sigma$-algebra $\mathcal P$ under the Émery topology, whereas Protter's functional analytic approach first closes in $L^2$-type spaces (specifically, in $\mathcal H^2$ using the $d_x$ and $d_y$ seminorms) before localizing to general semimartingales. This direct closure on $\mathcal P$ is what enables our single-stage construction.
  • Vector integration without $[M]=\pi\cdot C$ and $L^p(M)$. The classical vector theory requires the bracket factorization $[M]=\pi\cdot C$ and a separate $L^p(M)$-completion to define $H\cdot M$ before adding the finite-variation part (Cohen–Elliott, Section 12.5, based on [27]). In our setting, the vector case is handled identically to the scalar one: $L(X)$ is the closed subspace of $\mathcal P$ generated by $\Lambda^d$ under $\|\cdot\|_X$, so no auxiliary factorization/measurable-selection apparatus is needed.
  • No quasimartingale detour is needed for the construction. The Bichteler–Dellacherie–Mokobodzki theorem is part of the equivalence theory (and in Protter's Chapter III its proof runs through quasimartingales under an equivalent measure). Our construction and basic properties do not rely on Bichteler–Dellacherie; we invoke it only afterwards (Theorem 8), to connect our topological definition to the classical one.
  • No $\sigma$-martingale/BMO machinery is needed for defining the integral. In [42], Chapter IV, $\sigma$-martingales enter when one seeks sharp criteria for when $H\cdot M$ is a (local) martingale for possibly unbounded $H$ (stability questions), not to construct the integral itself. Our construction and the properties proved in Section 6 avoid these tools; we only use elementary stability under locally bounded $H$ (Theorem 18).
In short, the topological route replaces both a multi-stage historical build (Itô isometry → Doob–Meyer decomposition → Kunita–Watanabe $L^2$ structure → Doléans-Dade–Meyer localization → Jacod's jump classification → Jacod–Mémin vector factorization) and the two-stage functional analytic approach (càglàd integrands without decomposition → predictable integrands with $\mathcal H^2$ and decomposition) by a single, global closure on the predictable $\sigma$-algebra under the Émery topology. This yields the scalar and vector integrals in one stroke, while keeping the basic proofs short and uniform.
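In symbols, the single closure step advocated here reduces to the following; $\|\cdot\|_X$ denotes the distance on integrands induced by the Émery (semimartingale) topology, as used throughout the paper:

```latex
% The topological construction in one line: close the simple integrands
% under the Émery-induced distance, inside the predictable sigma-algebra,
L(X) \;:=\; \overline{\Lambda^{d}}^{\,\|\cdot\|_{X}} \;\subseteq\; \mathcal{P},
% and define the integral as the unique continuous extension:
H\cdot X \;:=\; \lim_{n\to\infty} H^{n}\cdot X
\quad (\text{limit in } \mathcal{S}),
\qquad H^{n}\in\Lambda^{d},\ \ \|H^{n}-H\|_{X}\to 0 .
```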

Author Contributions

Conceptualization, M.S.; methodology, M.S.; formal analysis, M.S.; investigation, M.S.; writing—original draft preparation, M.S.; writing—review and editing, A.Z.I.; supervision, A.Z.I. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Data Availability Statement

No new data were created or analyzed in this study. Data sharing is not applicable to this article.

Acknowledgments

The authors would like to thank the Department of Mathematical Sciences at the University of South Africa for their support.

Conflicts of Interest

The authors declare no conflicts of interest.

Appendix A. The Technical Proofs

This appendix contains the proofs advertised in Remark 20: the Dominated Convergence Theorem for our topological stochastic integral (Theorem 14), the associativity of the integral (Theorem A5), and the truncation criterion for multi-dimensional integrands (Theorem A6). The proofs are self-contained and do not use any results from Section 6. For convenience, we first introduce two auxiliary seminorms on integrands and establish a bounded convergence theorem (Theorem A1); the dominated convergence proof then follows the bootstrap outlined in Remark 20.
As the Dominated Convergence Theorem requires componentwise integrability, we can, without loss of generality, assume for its proof that d = 1 .
Definition A1. 
Let $X$ be a one-dimensional semimartingale.
(a) 
For $H\in\Lambda^1$, we define
$$\|H\|_{\Lambda,X}=\sup\big\{\,\|G\cdot X\|_{ucp}\;;\;G\in\Lambda^1,\ |G|\le|H|\,\big\}.$$
(b) 
For $H\in b\mathcal P$, we define
$$\|H\|_{b\mathcal P,X}:=\inf\Big\{\sum_{n=1}^\infty\|G_n\|_{\Lambda,X}\;;\;G_n\in\Lambda^1,\ |H|\le\sum_{n=1}^\infty|G_n|\Big\}.$$
(c) 
Furthermore, we define $\mathcal A\subseteq b\mathcal P$ to be the closure of $\Lambda^1$ with respect to $\|\cdot\|_{b\mathcal P,X}$.
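For $H\in\Lambda^1$ the two quantities of Definition A1 compare directly: choosing $G_1=H$ and $G_n=0$ for $n\ge2$ in part (b) is admissible, which gives the following elementary estimate (it is the inequality invoked in the proof of Lemma A5):

```latex
\|H\|_{b\mathcal{P},X} \;\le\; \|H\|_{\Lambda,X}
\qquad \text{for all } H \in \Lambda^{1},
% since G_1 = H, G_n = 0 (n >= 2) satisfies |H| <= \sum_n |G_n| = |H|,
% and the corresponding sum of seminorms equals \|H\|_{\Lambda,X}.
```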
Remark A1. 
We will see later, in the proof of the Bounded Convergence Theorem, that $\mathcal A$ coincides with $b\mathcal P$.
Lemma A1. 
Let X be a one-dimensional semimartingale.
(i)   
If $H\in\Lambda^1$ and $G\in\Lambda^1$ satisfy $|G|\le|H|$, then $\|G\|_{\Lambda,X}\le\|H\|_{\Lambda,X}$. In particular, $|H_1|\le|H_2|$ implies $\|H_1\|_{\Lambda,X}\le\|H_2\|_{\Lambda,X}$.
(ii)  
For all $H_1,H_2\in\Lambda^1$,
$$\|H_1+H_2\|_{\Lambda,X}\le\|H_1\|_{\Lambda,X}+\|H_2\|_{\Lambda,X}.$$
(iii) 
Consequently, after identifying indistinguishable processes, $\|\cdot\|_{\Lambda,X}$ is a seminorm on $\Lambda^1$, and $\|\cdot\|_{b\mathcal P,X}$ is a metric on $\mathcal P$.
Proof. 
 
(i) is immediate from the definition of the supremum in $\|\cdot\|_{\Lambda,X}$.
(ii) Fix $G\in\Lambda^1$ with $|G|\le|H_1+H_2|$. Set
$$\alpha:=\frac{|H_1|}{|H_1|+|H_2|}\,\mathbf 1_{\{|H_1|+|H_2|>0\}},\qquad G_1:=\alpha G,\qquad G_2:=(1-\alpha)G.$$
On the common refinement of the simple partitions, $\alpha$ is predictable and bounded by 1; hence $G_1,G_2\in\Lambda^1$. Moreover, $|G_1|\le|H_1|$, $|G_2|\le|H_2|$, and $G=G_1+G_2$. Therefore
$$\|G\cdot X\|_{ucp}\le\|G_1\cdot X\|_{ucp}+\|G_2\cdot X\|_{ucp}\le\|H_1\|_{\Lambda,X}+\|H_2\|_{\Lambda,X}.$$
Taking the supremum over all $G$ with $|G|\le|H_1+H_2|$ yields the claim.
(iii) follows from (i)–(ii) and the definitions. □
Lemma A2. 
Let $(H^n)_{n\in\mathbb N}\subseteq\Lambda^1$ be such that
$$H^n\le H^{n+1}\le K\quad\text{for all }n,\ \text{for some deterministic constant }K.$$
Then, for every $\varepsilon>0$, there exists $n_0$ with
$$\|H^m-H^n\|_{\Lambda,X}\le\varepsilon\quad\text{for all }m,n\ge n_0.$$
Proof. 
Fix $t>0$. We show that $(H^n)$ is Cauchy for the seminorm
$$\|H\|_{\Lambda,X,t}:=\sup_{K\in\Lambda^1,\ \|K\|_u\le1} E\Big[\big|\big(K\cdot(H\cdot X)\big)^*_t\big|\wedge1\Big],$$
which yields Cauchyness for $\|\cdot\|_{\Lambda,X}$ after summation over $t\in\mathbb N$ with the weights $2^{-t}$.
Let $m>n$ and set $D^{m,n}:=H^m-H^n\in\Lambda^1$. By linearity,
$$\|H^m-H^n\|_{\Lambda,X,t}=\sup_{\|K\|_u\le1} E\Big[\big|\big(K\cdot(D^{m,n}\cdot X)\big)^*_t\big|\wedge1\Big].$$
By Lemma 2,
$$\|H^m-H^n\|_{\Lambda,X,t}=\sup_{\|K\|_u\le1} E\Big[\big|\big((K D^{m,n})\cdot X\big)^*_t\big|\wedge1\Big].$$
We argue by contradiction. Suppose $(H^n)$ is not Cauchy in $\|\cdot\|_{\Lambda,X,t}$. Then there exist $\varepsilon>0$, increasing integer sequences $m_j>n_j$, and simple predictable $K^{(j)}$ with $\|K^{(j)}\|_u\le1$ such that
$$E\Big[\big|\big((K^{(j)} D^{m_j,n_j})\cdot X\big)^*_t\big|\wedge1\Big]\ge\varepsilon\quad\text{for all }j.\tag{A1}$$
Fix $j$. Because $K^{(j)}$ is simple predictable, it has a finite grid of stopping times $0=T^{(j)}_1\le\cdots\le T^{(j)}_{N(j)}$ on $[0,t]$ and
$$K^{(j)}=\sum_{i=1}^{N(j)-1} k^{(j)}_i\,\mathbf 1_{(T^{(j)}_i,\,T^{(j)}_{i+1}]},\qquad |k^{(j)}_i|\le1.$$
Since $H^n$ converges pointwise and is bounded by the deterministic constant $K$, we have for every fixed $i$ and almost every $\omega$,
$$\lim_{\substack{m,n\to\infty\\ m>n}}\Big(H^m_{T^{(j)}_i(\omega)}(\omega)-H^n_{T^{(j)}_i(\omega)}(\omega)\Big)=0.$$
Consequently,
$$\big(K^{(j)} D^{m,n}\big)_s(\omega)=\sum_{i=1}^{N(j)-1} k^{(j)}_i(\omega)\Big(H^m_{T^{(j)}_i(\omega)}(\omega)-H^n_{T^{(j)}_i(\omega)}(\omega)\Big)\mathbf 1_{(T^{(j)}_i,\,T^{(j)}_{i+1}]}(s,\omega)\xrightarrow[m,n\to\infty]{}0$$
pointwise for all $s\le t$, for almost every $\omega$. Since each $K^{(j)} D^{m,n}$ has a finite grid on $[0,t]$, the terminal variable can be written as a finite sum
$$\big((K^{(j)} D^{m,n})\cdot X\big)_t=\sum_{i=1}^{N(j)-1} k^{(j)}_i\Big(H^m_{T^{(j)}_i}-H^n_{T^{(j)}_i}\Big)\Big(X_{t\wedge T^{(j)}_{i+1}}-X_{t\wedge T^{(j)}_i}\Big),$$
and therefore
$$\big((K^{(j)} D^{m,n})\cdot X\big)^*_t\xrightarrow[m,n\to\infty]{\ \mathrm{a.s.}\ }0.$$
Because the truncation $x\mapsto|x|\wedge1$ is bounded and continuous, we obtain
$$E\Big[\big|\big((K^{(j)} D^{m,n})\cdot X\big)^*_t\big|\wedge1\Big]\xrightarrow[m,n\to\infty]{}0\quad\text{for each fixed }j.$$
In particular, for each $j$ there exists $n_0(j)$ such that for all $m,n\ge n_0(j)$ with $m>n$,
$$E\Big[\big|\big((K^{(j)} D^{m,n})\cdot X\big)^*_t\big|\wedge1\Big]\le\frac\varepsilon2.$$
Now choose $j$ large and take $m_j,n_j\ge n_0(j)$ with $m_j>n_j$. This contradicts Equation (A1). Hence $(H^n)$ is Cauchy for $\|\cdot\|_{\Lambda,X,t}$, and summing over $t\in\mathbb N$ with the weights $2^{-t}$ yields that $(H^n)$ is Cauchy in $\|\cdot\|_{\Lambda,X}$. □
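The summation step at the end of the proof is the following elementary estimate, assuming the weighted form of the ucp metric used throughout:

```latex
% A supremum of a weighted sum is dominated by the weighted sum of suprema:
\|H\|_{\Lambda,X} \;\le\; \sum_{t=1}^{\infty} 2^{-t}\, \|H\|_{\Lambda,X,t},
% so Cauchyness in every seminorm \|\cdot\|_{\Lambda,X,t}, with the tail of the
% series uniformly small, yields Cauchyness in \|\cdot\|_{\Lambda,X}.
```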
Lemma A3. 
Assume positive processes $H,(H^j)_{j\in\mathbb N}\in b\mathcal P$ such that
$$H=\sum_{j=1}^\infty H^j.$$
Then we have
$$\Big\|\sum_{j=1}^\infty H^j\Big\|_{b\mathcal P,X}\le\sum_{j=1}^\infty\|H^j\|_{b\mathcal P,X}.$$
Proof. 
We have
$$\Big\|\sum_{j=1}^n H^j\Big\|_{b\mathcal P,X}\le\sum_{j=1}^n\|H^j\|_{b\mathcal P,X}\le\sum_{j=1}^\infty\|H^j\|_{b\mathcal P,X}.$$
The result follows from the continuity of the metric. □
Lemma A4. 
Suppose $H\in b\mathcal P$ and let $(H^n)_{n\in\mathbb N}$ be a sequence in $\mathcal A$ such that
$$\|H^n-H\|_{b\mathcal P,X}\to0.$$
Then we have $H\in\mathcal A$.
Proof. 
For each $H^n$, there exists a sequence $(H^{n,m})_{m\in\mathbb N}\subseteq\Lambda^1$ such that
$$\|H^{n,m}-H^n\|_{b\mathcal P,X}\to0\quad\text{for }m\to\infty.$$
Choosing a diagonal sequence yields a sequence $(H^{n,m_k})_{k\in\mathbb N}$ with
$$\|H-H^{n,m_k}\|_{b\mathcal P,X}\to0\quad\text{for }k\to\infty.$$
Hence, $H\in\mathcal A$. □
Lemma A5. 
Let $H\in b\mathcal P$ and $(H^n)_{n\in\mathbb N}\subseteq\mathcal A$ with $|H^n|\le|H^{n+1}|\le|H|$ for all $n\in\mathbb N$, such that $H^n\to H$ pointwise almost surely. Then we have $H\in\mathcal A$ and
$$\|H-H^n\|_{b\mathcal P,X}\to0.$$
Proof. 
By the definition of $\mathcal A$, for each $H^n\in\mathcal A$ there exists a sequence
$$(G^{n,m})_{m\in\mathbb N}\subseteq\Lambda^1$$
such that $\|G^{n,m}-H^n\|_{b\mathcal P,X}\to0$ for $m\to\infty$. Without loss of generality, we assume all sequences $(G^{n,m})_{m\in\mathbb N}$ to be increasing. By assumption, for each $\varepsilon>0$ and each $n$, there exists an $m_n(\varepsilon)$ such that $\|G^{n,m}-H^n\|_{b\mathcal P,X}<\varepsilon$ for all $m\ge m_n(\varepsilon)$.
We put
$$\bar m_n(\varepsilon):=\max\Big\{\bar m_{n-1}\big(\tfrac\varepsilon3\big)+1,\ m_n\big(\tfrac\varepsilon3\big)\Big\}\qquad\text{and}\qquad\tilde G^n:=G^{n,\bar m_n(\varepsilon)}.$$
As $\tilde G^m-\tilde G^n\in\Lambda^1$, we have
$$\|\tilde G^m-\tilde G^n\|_{b\mathcal P,X}\le\|\tilde G^m-\tilde G^n\|_{\Lambda,X},$$
and by Lemma A2 there exists an $\tilde n(\varepsilon)\in\mathbb N$ such that $\|\tilde G^m-\tilde G^n\|_{\Lambda,X}<\frac\varepsilon3$ for all $m,n\ge\tilde n(\varepsilon)$.
For $m,n\ge\max\{\tilde n(\varepsilon),\bar m_n(\varepsilon)\}$ we obtain
$$\|H^m-H^n\|_{b\mathcal P,X}=\big\|(H^m-\tilde G^m)+(\tilde G^m-\tilde G^n)+(\tilde G^n-H^n)\big\|_{b\mathcal P,X}\le\underbrace{\|H^m-\tilde G^m\|_{b\mathcal P,X}}_{\le\varepsilon/3}+\underbrace{\|\tilde G^m-\tilde G^n\|_{b\mathcal P,X}}_{\le\varepsilon/3}+\underbrace{\|\tilde G^n-H^n\|_{b\mathcal P,X}}_{\le\varepsilon/3}.$$
Hence, for each $\varepsilon$ there exists an $n(\varepsilon)\in\mathbb N$ such that $\|H^m-H^n\|_{b\mathcal P,X}\le\varepsilon$ for all $m,n\ge n(\varepsilon)$.
Now we put $n_k:=n(2^{-k})$ and obtain a sequence $(H^{n_k})_{k\in\mathbb N}$ such that
$$\|H^{n_{k+1}}-H^{n_k}\|_{b\mathcal P,X}\le2^{-k}.$$
Because of
$$H-H^{n_k}=\lim_{n\to\infty}\big(H^n-H^{n_k}\big)=\sum_{m=k}^\infty\big(H^{n_{m+1}}-H^{n_m}\big),$$
we obtain with Lemma A3
$$\|H-H^{n_k}\|_{b\mathcal P,X}\le\sum_{m=k}^\infty\|H^{n_{m+1}}-H^{n_m}\|_{b\mathcal P,X}\le2^{-k+1}.$$
Hence $\|H-H^{n_k}\|_{b\mathcal P,X}\to0$, and thus every subsequence of $(H^n)_{n\in\mathbb N}$ has a further subsequence that converges to $H$ under $\|\cdot\|_{b\mathcal P,X}$. By Lemma A4, this proves the result and implies $H\in\mathcal A$. □
Theorem A1 
(Bounded Convergence Theorem). Let $H\in b\mathcal P$ and let $(H^n)_{n\in\mathbb N}$ be a sequence in $b\mathcal P$ such that $|H^n|\le K$ for a constant $K$ and $H^n\to H$ pointwise almost surely. Then we have
$$H^n\cdot X\xrightarrow{\ \mathcal S\ }H\cdot X.$$
Proof. 
Let $(G^n)_{n\in\mathbb N}$ be a sequence in $\mathcal A$ converging uniformly to a process $G\in b\mathcal P$; that is, there exists a sequence $(a_n)_{n\in\mathbb N}$ in $\mathbb R$ with $(G^n-G)^*\le a_n$ and $a_n\to0$. Then
$$\|G^n-G\|_{b\mathcal P,X}\le\|(G^n-G)^*\|_{b\mathcal P,X}\le\|a_n\|_{b\mathcal P,X}=\big\||a_n|\cdot X\big\|_{ucp}\to0.$$
By Lemma A4, we have $G\in\mathcal A$; hence $\mathcal A$ is closed under uniform convergence. Clearly, $\mathcal A$ also contains all constant functions. Together with Lemma A5, we can apply the monotone class theorem and obtain $\mathcal A=b\mathcal P$. Thus $H^n\in\mathcal A$ for all $n$. We put $\tilde G^n:=\sup_{m\ge n}|H^m-H|$ and $\hat G^n:=\tilde G^1-\tilde G^n$; then $|\hat G^n|\le|\tilde G^1|$ and $\hat G^n\to\tilde G^1$ pointwise almost surely. By Lemma A5, we obtain
$$\|H^n-H\|_{b\mathcal P,X}\le\|\tilde G^n\|_{b\mathcal P,X}=\|\tilde G^1-\hat G^n\|_{b\mathcal P,X}\to0.$$
Hence we deduce $\|H^n-H\|_{b\mathcal P,X}\to0$.
Furthermore, it is easy to check that we have
$$\|H^n-H\|_X\le\|H^n-H\|_{b\mathcal P,X}.$$
Thus $\|H^n-H\|_X\to0$, which implies $H^n\cdot X\to H\cdot X$ in the semimartingale topology. □
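The monotone class step in the proof above can be isolated as follows; the identity $\sigma(\Lambda^1)=\mathcal P$ holds because the predictable $\sigma$-algebra is generated by the simple predictable processes:

```latex
% Functional monotone class theorem, applied to the class A:
\Lambda^{1} \subseteq \mathcal{A}, \qquad
\mathcal{A}\ \text{closed under uniform convergence}, \qquad
\big( (H^{n})\subseteq\mathcal{A},\ 0\le H^{n}\uparrow H\ \text{bounded} \big)
\;\Rightarrow\; H\in\mathcal{A}
% (the last closure property is supplied by Lemma A5); together these give
\;\Longrightarrow\; \mathcal{A} \;\supseteq\; b\,\sigma(\Lambda^{1}) \;=\; b\mathcal{P} .
```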
Definition A2. 
Let $X\in\mathcal S^1$. We define $L^1(X)\subseteq L(X)$ to be the set of predictable processes $G$ such that, for each sequence of processes $H^n\in b\mathcal P$ with $|H^n|\le|G|$ and $H^n\to0$ pointwise a.s., we have $\|H^n\|_X\to0$.
For the proof of the Dominated Convergence Theorem for $L^1(X)$, we need a simple property of the stochastic integral. We will examine this property in greater generality in Theorem A5 and Theorem 16.
Theorem A2. 
Let $X$ be a one-dimensional semimartingale, $H\in\Lambda^1$ and $K\in L(X)$. Then
$$H\cdot(K\cdot X)=(HK)\cdot X.$$
Proof. 
This is a simple calculation for $K\in\Lambda^1$ (see Theorem 16). Taking limits yields the result:
$$H\cdot(K\cdot X)=\lim_{n\to\infty}H\cdot(K^n\cdot X)=\lim_{n\to\infty}(HK^n)\cdot X=(HK)\cdot X$$
with $K^n\in\Lambda^1$ such that $\|K^n-K\|_X\to0$. □
Theorem A3 
(Dominated Convergence Theorem for $L^1(X)$). Let $X\in\mathcal S^1$ and let $(H^n)_{n\in\mathbb N}$ be a sequence of predictable processes converging pointwise a.s. to $H$, with $|H^n|\le|G|$ for some $G\in L^1(X)$. Then $\|H^n-H\|_X\to0$.
Proof. 
Set $Y^n:=(H^n-H)\cdot X$. Recall that
$$\|Y^n\|_{\mathcal S}=\sup_{K\in\Lambda^1,\ \|K\|_u\le1}\|K\cdot Y^n\|_{ucp}=\sup_{K\in\Lambda^1,\ \|K\|_u\le1}\big\|\big(K(H^n-H)\big)\cdot X\big\|_{ucp},$$
where the second equality follows from Theorem A2, and $\|H^n-H\|_X=\|Y^n\|_{\mathcal S}$ by definition.
Let $(K^n)_{n\in\mathbb N}\subseteq\Lambda^1$ with $\|K^n\|_u\le1$. Then $K^n(H^n-H)\to0$ pointwise a.s. and $|K^n(H^n-H)|\le2|G|$. Since $G\in L^1(X)$ implies $2G\in L^1(X)$, Theorem A1 yields
$$\big\|K^n(H^n-H)\big\|_X\to0.$$
Because $\|Z\|_{ucp}\le\|Z\|_{\mathcal S}$ (take the outer integrand $J\equiv1$ in the definition of $\|\cdot\|_{\mathcal S}$), we get by Theorem A2
$$\|K^n\cdot Y^n\|_{ucp}=\big\|\big(K^n(H^n-H)\big)\cdot X\big\|_{ucp}\to0.\tag{A2}$$
Now suppose $\|Y^n\|_{\mathcal S}\not\to0$. Then there exist $\varepsilon>0$, a subsequence $(n_k)$, and $K^{(k)}\in\Lambda^1$ with $\|K^{(k)}\|_u\le1$ such that
$$\|K^{(k)}\cdot Y^{n_k}\|_{ucp}\ge\varepsilon\quad\text{for all }k.$$
Define the sequence $(\tilde K^n)$ by $\tilde K^{n_k}:=K^{(k)}$ and choose $\tilde K^n$ arbitrarily (with $\|\tilde K^n\|_u\le1$) for the other $n$. Then (A2), applied to the sequence $(\tilde K^n)_n$, gives $\|\tilde K^n\cdot Y^n\|_{ucp}\to0$, contradicting the choice of $(n_k,K^{(k)})$. Hence $\|Y^n\|_{\mathcal S}\to0$.
Since $\|H^n-H\|_X=\|Y^n\|_{\mathcal S}$, we conclude $\|H^n-H\|_X\to0$. □
Lemma A6. 
Let $X\in\mathcal S^1$ and let $(G^n)_{n\in\mathbb N}$ be a sequence in $b\mathcal P$ with
$$\sum_{n=1}^\infty\|G^n\|_X<\infty\qquad\text{and}\qquad G:=\sum_{n=1}^\infty|G^n|.$$
Furthermore, let $(H^n)_{n\in\mathbb N}$ be a sequence in $b\mathcal P$ with $|H^n|\le|G|$.
(a) 
Let $(\lambda_n)_{n\in\mathbb N}$ be a sequence of positive numbers such that $|\lambda_nH^n|\le|G|$. Then
$$\sup_{n\in\mathbb N}\|\lambda_nH^n\|_X<\infty.$$
(b) 
If $H^n\to0$ pointwise almost surely, then $\|H^n\|_X\to0$, and thus $G\in L^1(X)$ holds.
(c) 
The set $A:=\{G=\infty\}$ is predictable, and we have
$$\|\mathbf 1_AH\|_X=0$$
for all H b P .
Proof. 
We put $K^n:=\sum_{k=1}^n|G^k|$. For any $\lambda>0$ and $H\in b\mathcal P$ with $\lambda|H|\le|G|$, the truncation $\lambda\big((H\wedge K^n)\vee(-K^n)\big)$ converges pointwise almost surely to $\lambda H$, and we have $\big|\lambda\big((H\wedge K^n)\vee(-K^n)\big)\big|\le|\lambda H|$.
Hence we can apply bounded convergence and obtain
$$\|\lambda H\|_X=\lim_{n\to\infty}\big\|\lambda\big((H\wedge K^n)\vee(-K^n)\big)\big\|_X.$$
Furthermore, we have
$$\big|\lambda\big((H\wedge K^n)\vee(-K^n)\big)\big|\le\sum_{k=1}^n|\lambda H|\wedge|G^k|$$
and thus
$$\|\lambda H\|_X=\lim_{n\to\infty}\big\|\lambda\big((H\wedge K^n)\vee(-K^n)\big)\big\|_X\le\lim_{n\to\infty}\sum_{k=1}^n\big\|\,|\lambda H|\wedge|G^k|\,\big\|_X=\sum_{n=1}^\infty\big\|\,|\lambda H|\wedge|G^n|\,\big\|_X,$$
where we used the triangle inequality of the metric $\|\cdot\|_X$.
Hence we obtain
$$\sup_{m\in\mathbb N}\|\lambda_mH^m\|_X\le\sup_{m\in\mathbb N}\sum_{n=1}^\infty\big\|\,|\lambda_mH^m|\wedge|G^n|\,\big\|_X\le\sum_{n=1}^\infty\|G^n\|_X<\infty.$$
That proves (a). For (b), we proceed analogously and obtain, putting $\lambda_m=1$,
$$\lim_{m\to\infty}\|H^m\|_X\le\lim_{m\to\infty}\sum_{n=1}^\infty\big\|\,|H^m|\wedge|G^n|\,\big\|_X,$$
and as each summand on the right-hand side is bounded by $\|G^n\|_X$ and, furthermore, $\sum_{n=1}^\infty\|G^n\|_X<\infty$, we can interchange the sum and the limit and obtain
$$\lim_{m\to\infty}\|H^m\|_X\le\sum_{n=1}^\infty\lim_{m\to\infty}\big\|\,|H^m|\wedge|G^n|\,\big\|_X.$$
Since $|H^m|\wedge|G^n|\le|G^n|$ for all $m$ and $H^m\to0$ pointwise almost surely, we can apply bounded convergence and obtain
$$\big\|\,|H^m|\wedge|G^n|\,\big\|_X\to0\quad\text{for }m\to\infty.$$
Hence,
$$\lim_{m\to\infty}\|H^m\|_X\le\sum_{n=1}^\infty\lim_{m\to\infty}\big\|\,|H^m|\wedge|G^n|\,\big\|_X=0.$$
For (c), we use (a) with a sequence $(\lambda_n)_{n\in\mathbb N}$ such that $\lambda_n\to\infty$. As $|\lambda_n\mathbf 1_AH|\le|G|$, we obtain $\sup_{n\in\mathbb N}\|\lambda_n\mathbf 1_AH\|_X<\infty$ and thus $\|\mathbf 1_AH\|_X=0$. □
Theorem A4. 
We have $L^1(X)=L(X)$. That is, the stochastic Dominated Convergence Theorem holds for all $H\in L(X)$.
Proof. 
Let $H\in L(X)$. By definition, there exists a sequence $(H^n)_{n\in\mathbb N}\subseteq\Lambda^1$ with $\|H^n-H\|_X\to0$. In particular, for each $\varepsilon>0$ there exists an $n_0\in\mathbb N$ such that $\|H^m-H^n\|_X\le\varepsilon$ for all $m,n\ge n_0$. We put $H^1:=0$, and by passing to a subsequence we can assume that
$$\|H^{n+1}-H^n\|_X\le2^{-n}\quad\text{for all }n\in\mathbb N.$$
We put
$$G^n:=H^{n+1}-H^n\qquad\text{and}\qquad G:=\sum_{n=1}^\infty|G^n|.$$
We have $\sum_{n=1}^\infty\|G^n\|_X\le\sum_{n=1}^\infty2^{-n}=1$, and we put $A:=\{G=\infty\}$. By Lemma A6, $\mathbf 1_{A^c}G\in L^1(X)$. Since $\sum_{n=1}^\infty G^n$ is absolutely convergent outside of $A$, we can define
$$\tilde H:=\lim_{n\to\infty}\mathbf 1_{A^c}H^n=\sum_{n=1}^\infty\mathbf 1_{A^c}G^n.$$
Since $|\mathbf 1_{A^c}H^n|\le|\mathbf 1_{A^c}G|$, we have $\tilde H\in L^1(X)$ and, by dominated convergence, $\|\mathbf 1_{A^c}H^n-\tilde H\|_X\to0$. By Lemma A6, we have $\|\mathbf 1_AH^n\|_X=0$; hence
$$\lim_{n\to\infty}H^n\cdot X=\lim_{n\to\infty}\big(\mathbf 1_{A^c}H^n\big)\cdot X=\tilde H\cdot X$$
and thus $\|H-\tilde H\|_X=0$. We conclude $H\cdot X=\tilde H\cdot X$ and hence $H\in L^1(X)$. □
Lemma A7. 
Let $X\in\mathcal S^d$ and $H\in L(X)$. If $H^n:=H\,\mathbf 1_{\{\|H\|\le n\}}$, then $H^n\cdot X\to H\cdot X$ in the semimartingale topology.
Proof. 
The proof is analogous to Theorem 6. □
For the proof of the important Theorem A6, we need to show the associativity of the stochastic integral first.
Theorem A5. 
Let $X=(X^1,\dots,X^d)\in\mathcal S^d$ and $H=(H^1,\dots,H^d)$ with $H^i\in L(X^i)$ for all $i$. Furthermore, let $K=(K^1,\dots,K^d)$ be a predictable $d$-dimensional process, and set $Y^i:=H^i\cdot X^i$, $Y:=(Y^1,\dots,Y^d)$, $G^i:=K^iH^i$ and $G:=(G^1,\dots,G^d)$. Then $K^i\in L(Y^i)$ for all $i$ if and only if $G^i\in L(X^i)$ for all $i$, and $K\in L(Y)$ if and only if $G\in L(X)$. In both cases, we have
$$K\cdot Y=G\cdot X.$$
Proof. 
For $H,K\in\Lambda^d$, this is a basic calculation.
Now we suppose $G^i=K^iH^i\in L(X^i)$ for $i\in\{1,\dots,d\}$. By Remark 2, there exists $\tilde K^n=(\tilde K^{1,n},\dots,\tilde K^{d,n})\in\Lambda^d$ such that $|\tilde K^{i,n}|\le|K^i|$ and $\tilde K^{i,n}_t\to K^i_t$ for all $t$ almost surely. Now, by the first part and by the Dominated Convergence Theorem 14, we have
$$\tilde K^{i,n}\cdot Y^i=\big(\tilde K^{i,n}H^i\big)\cdot X^i\to\big(K^iH^i\big)\cdot X^i=G^i\cdot X^i.$$
Hence $\big(\tilde K^{i,n}\cdot Y^i\big)_n$ is Cauchy and we have $K^i\in L(Y^i)$. Since this holds for all $i\in\{1,\dots,d\}$, we obtain $K\cdot Y=G\cdot X$.
Conversely, assume $K^i\in L(Y^i)$ and let $(\tilde G^{i,n})_n\subseteq\Lambda^1$ be a sequence such that $\tilde G^{i,n}\to G^i=K^iH^i$ pointwise almost surely. We can write
$$\tilde G^{i,n}=\frac{\tilde G^{i,n}}{H^i}\,\mathbf 1_{\{H^i\neq0\}}\,H^i,$$
and by proceeding analogously we obtain $\frac{\tilde G^{i,n}}{H^i}\mathbf 1_{\{H^i\neq0\}}\in L(Y^i)$ and
$$\Big(\frac{\tilde G^{i,n}}{H^i}\,\mathbf 1_{\{H^i\neq0\}}\Big)\cdot Y^i=\tilde G^{i,n}\cdot X^i.$$
By dominated convergence, this tends to $K^i\cdot Y^i$, and we have $K^iH^i=G^i\in L(X^i)$.
For $G\in L(X)$, we define $\tilde K^n:=K\,\mathbf 1_{\{\|K\|\le n\}}$ and $\tilde G^n$ accordingly. As all $\tilde K^n$ and $\tilde G^n$ are bounded and hence componentwise integrable, we get
$$\|K\cdot Y-G\cdot X\|_{\mathcal S}\le\|K\cdot Y-\tilde K^n\cdot Y\|_{\mathcal S}+\|\tilde K^n\cdot Y-\tilde G^n\cdot X\|_{\mathcal S}+\|\tilde G^n\cdot X-G\cdot X\|_{\mathcal S}.$$
For the middle term, we can apply bounded convergence and see that it vanishes. The first and the last term tend to $0$ by Lemma A7. □
Theorem A6. 
Let $X\in\mathcal S^d$ and let $H$ be a $d$-dimensional predictable process. Then $H\in L(X)$ if and only if $H^n\cdot X$, with $H^n:=H\,\mathbf 1_{\{\|H\|\le n\}}$, converges in the semimartingale topology.
Proof. 
One direction has already been shown in Lemma A7. For the other direction, we follow [5]. Let $Z$ denote the limit of $H^n\cdot X$ in the semimartingale topology; Remark 15 ensures the existence of a $K\in L(X)$ with $Z=K\cdot X$. Let
$$G:=\sum_{n=1}^\infty\frac1n\,\mathbf 1_{\{n-1\le\|H\|<n\}},$$
so that $G$ is predictable with $0<G\le1$. By Theorem A5 and the definition of the semimartingale topology, it is easy to see that
$$G\cdot(H^n\cdot X)\xrightarrow{\ \mathcal S\ }G\cdot Z=(GK)\cdot X.$$
Since $\|GH\|\le1$, we can apply dominated convergence together with Theorem A5 and obtain
$$G\cdot(H^n\cdot X)=(GH^n)\cdot X\xrightarrow{\ \mathcal S\ }(GH)\cdot X.$$
Hence $(GH)\cdot X$ coincides with $(GK)\cdot X$.
Since $K=G^{-1}GK$ is an element of $L(X)$, Theorem A5 states that $G^{-1}$ also belongs to $L\big((GK)\cdot X\big)$, which is identical to $L\big((GH)\cdot X\big)$. A subsequent application of Theorem A5 shows that $H$ (represented as $G^{-1}GH$) lies in $L(X)$, leading to
$$H\cdot X=G^{-1}\cdot\big((GH)\cdot X\big)=G^{-1}\cdot\big((GK)\cdot X\big)=K\cdot X=Z,$$
which concludes the proof. □

References

1. Delbaen, F.; Schachermayer, W. A general version of the fundamental theorem of asset pricing. Math. Ann. 1994, 300, 463–520.
2. Delbaen, F.; Schachermayer, W. The Fundamental Theorem of Asset Pricing for Unbounded Stochastic Processes. Math. Ann. 1998, 312, 215–250.
3. Harrison, J.M.; Pliska, S.R. Martingales and stochastic integrals in the theory of continuous trading. Stoch. Processes Their Appl. 1981, 11, 215–260.
4. Harrison, J.M.; Pliska, S.R. A stochastic calculus model of continuous trading: Complete markets. Stoch. Processes Their Appl. 1983, 15, 313–316.
5. Shiryaev, A.N.; Cherny, A. Vector stochastic integrals and the fundamental theorems of asset pricing. Proc. Steklov Inst. Math. 2002, 237, 6–49.
6. Sohns, M. The General Semimartingale Market Model. AppliedMath 2025, 5, 97.
7. Itô, K. Stochastic integral. Proc. Imp. Acad. 1944, 20, 519–524.
8. Itô, K. On a stochastic integral equation. Proc. Jpn. Acad. 1946, 22, 32–35.
9. Itô, K. Stochastic differential equations in a differentiable manifold. Nagoya Math. J. 1950, 1, 35–47.
10. Itô, K. Multiple Wiener integral. J. Math. Soc. Jpn. 1951, 3, 157–169.
11. Itô, K. On a formula concerning stochastic differentials. Nagoya Math. J. 1951, 3, 55–65.
12. Itô, K. On Stochastic Differential Equations; Memoirs of the American Mathematical Society; American Mathematical Society: Providence, RI, USA, 1951; Volume 4.
13. Meyer, P.A. Decomposition of supermartingales: The uniqueness theorem. Ill. J. Math. 1963, 7, 1–17.
14. Itô, K. Transformation of Markov processes by multiplicative functionals. Ann. Inst. Fourier 1965, 15, 15–30.
15. Motoo, M.; Watanabe, S. On a class of additive functionals of Markov processes. J. Math. Kyoto Univ. 1965, 4, 429–469.
16. Kunita, H.; Watanabe, S. On square integrable martingales. Nagoya Math. J. 1967, 30, 209–245.
17. Meyer, P.A. Intégrales stochastiques I. Sémin. Probab. Strasbg. 1967, 39, 72–94.
18. Meyer, P.A. Intégrales stochastiques II. Sémin. Probab. Strasbg. 1967, 1, 95–117.
19. Meyer, P.A. Intégrales stochastiques III. Sémin. Probab. Strasbg. 1967, 1, 118–141.
20. Meyer, P.A. Intégrales stochastiques IV. Sémin. Probab. Strasbg. 1967, 1, 142–162.
21. Meyer, P.A. Un Cours sur les Intégrales Stochastiques. In Séminaire de Probabilités 1967–1980; Émery, M., Yor, M., Eds.; Lecture Notes in Mathematics; Springer: Berlin/Heidelberg, Germany, 2002; Volume 1771, pp. 174–329.
22. Jacod, J. Calcul Stochastique et Problèmes de Martingales; Lecture Notes in Mathematics; Springer: Berlin/Heidelberg, Germany, 1979; Volume 714, pp. x+280.
23. Chou, C.S.; Meyer, P.A.; Stricker, C. Sur les intégrales stochastiques de processus prévisibles non bornés. In Séminaire de Probabilités XIV, 1978/79; Lecture Notes in Mathematics; Springer: Berlin/Heidelberg, Germany, 1980; Volume 784, pp. 128–139.
24. Jacod, J. Sur la construction des intégrales stochastiques et les sous-espaces stables de martingales. In Séminaire de Probabilités XI; Lecture Notes in Mathematics; Springer: Berlin/Heidelberg, Germany, 1977; Volume 581, pp. 390–410.
25. Émery, M. Une topologie sur l'espace des semimartingales. In Séminaire de Probabilités XIII; Lecture Notes in Mathematics; Springer: Berlin/Heidelberg, Germany, 1979; Volume 721, pp. 260–280.
26. Dellacherie, C.; Meyer, P.A. Probabilities and Potential; North-Holland Mathematics Studies; Hermann: Amsterdam, The Netherlands, 1978.
27. Jacod, J. Intégrales stochastiques par rapport à une semimartingale vectorielle et changements de filtration. In Séminaire de Probabilités XIV, 1978/79; Lecture Notes in Mathematics; Springer: Berlin/Heidelberg, Germany, 1980; Volume 784, pp. 161–172.
28. Memin, J. Espaces de semi martingales et changement de probabilité. Probab. Theory Relat. Fields 1980, 52, 9–39.
29. Doléans-Dade, C. On the existence and unicity of solutions of stochastic integral equations. Z. Wahrscheinlichkeitstheor. Verwandte Geb. 1976, 36, 93–101.
30. Meyer, P.A. Le théorème fondamental sur les martingales locales. In Séminaire de Probabilités XI; Springer: Berlin/Heidelberg, Germany, 1977; Volume 581, pp. 463–464.
31. Sohns, M. σ-Martingales: Foundations, Properties, and a New Proof of the Ansel–Stricker Lemma. Mathematics 2025, 13, 682.
32. Sohns, M. The Jacod–Yor Theorem for Sigma Martingales and the Second Fundamental Theorem of Asset Pricing. J. Stoch. Anal. 2025, 6, 1.
33. Bichteler, K. Stochastic integrators. Bull. Am. Math. Soc. 1979, 1, 761–765.
34. Bichteler, K. Stochastic Integration and Lp-Theory of Semimartingales. Ann. Probab. 1981, 9, 49–89.
35. Dellacherie, C. Un survol de la théorie de l'intégrale stochastique. Stoch. Processes Their Appl. 1980, 10, 115–144.
36. Meyer, P.A. Caractérisation des semimartingales, d'après Dellacherie. In Séminaire de Probabilités XIII; Dellacherie, C., Meyer, P.A., Weil, M., Eds.; Lecture Notes in Mathematics; Springer: Berlin/Heidelberg, Germany, 1979; Volume 721, pp. 620–623.
37. Pellaumail, J. Sur l'intégrale stochastique et la décomposition de Doob–Meyer; Mémoires de la Société Mathématique de France; Société Mathématique de France: Paris, France, 1973; Volume 35, pp. 1–100.
38. Métivier, M.; Pellaumail, J. Mesures stochastiques à valeurs dans des espaces L0. Z. Wahrscheinlichkeitstheor. Verwandte Geb. 1977, 40, 101–114.
39. Lenglart, É. Semi-martingales et intégrales stochastiques en temps continu. Rev. CETHEDEC 1983, 20, 91–160.
40. Protter, P. Semimartingales and Stochastic Differential Equations; Lecture Notes, 3rd Chilean Winter School in Probability and Statistics; Technical Report #85-25; Purdue University: West Lafayette, IN, USA, 1985.
41. Protter, P. Stochastic integration without tears. Stoch. Int. J. Probab. Stoch. Processes 1986, 16, 295–325.
42. Protter, P.E. Stochastic Integration and Differential Equations: Version 2.1, 2nd ed.; Stochastic Modelling and Applied Probability; Springer: Berlin/Heidelberg, Germany, 2005; Volume 21, pp. xviii+415.
43. Bichteler, K. Stochastic Integration with Jumps; Encyclopedia of Mathematics and its Applications; Cambridge University Press: Cambridge, UK, 2002; Volume 89.
44. Assefa, J.; Harms, P. Cylindrical stochastic integration and applications to financial term structure modeling. arXiv 2023, arXiv:2208.03939.
45. Karandikar, R.L.; Rao, B.V. On the second fundamental theorem of asset pricing. Commun. Stoch. Anal. 2016, 10, 5.
46. Meyer, P.A. Probabilités et Potentiel; Publications de l'Institut de Mathématique de l'Université de Strasbourg; Hermann: Paris, France, 1966; Volume 14.
47. He, S.W.; Wang, J.G.; Yan, J.A. Semimartingale Theory and Stochastic Calculus; Kexue Chubanshe (Science Press): Beijing, China; CRC Press: Boca Raton, FL, USA, 1992; pp. xiv+546.
48. Burkholder, D.L. Martingale Transforms. Ann. Math. Stat. 1966, 37, 1494–1504.
49. Neveu, J. Martingales à temps discret; Masson et Cie: Paris, France, 1972; pp. vii+218.
50. Edwards, D.A. A note on stochastic integrators. Math. Proc. Camb. Philos. Soc. 1990, 107, 395–400.
51. Rao, M. Doob's Decomposition and Burkholder's Inequalities. In Séminaire de Probabilités VI (Univ. Strasbourg, Année Universitaire 1970–1971; Journées Probabilistes de Strasbourg, 1971); Lecture Notes in Mathematics; Springer: Berlin/Heidelberg, Germany; New York, NY, USA, 1972; Volume 258, pp. 198–201.
52. Cohen, S.N.; Elliott, R.J. Stochastic Calculus and Applications; Probability and Its Applications; Springer: New York, NY, USA, 2015.
53. Pollard, D.; Gill, R.D.; Ripley, B.D. A User's Guide to Measure Theoretic Probability; Cambridge Series in Statistical and Probabilistic Mathematics; Cambridge University Press: Cambridge, UK, 2002.
54. Kalton, N.J.; Peck, N.T.; Roberts, J.W. An F-space Sampler; London Mathematical Society Lecture Note Series; Cambridge University Press: Cambridge, UK, 2003; Volume 89.
55. Biagini, F.; Hu, Y.; Øksendal, B.; Zhang, T. Stochastic Calculus for Fractional Brownian Motion and Applications; Probability and Its Applications; Springer: London, UK, 2008.
56. Liptser, R.; Shiryaev, A. Theory of Martingales; Mathematics and its Applications; Kluwer Academic Publishers: Dordrecht, The Netherlands, 1989; Volume 49.
57. Lin, S. Stochastic analysis of fractional Brownian motions. Stoch. Int. J. Probab. Stoch. Processes 1995, 55, 121–140.
58. Rogers, L.C.G. Arbitrage with fractional Brownian motion. Math. Financ. 1997, 7, 95–105.
59. Embrechts, P.; Maejima, M. Selfsimilar Processes; Princeton Series in Applied Mathematics; Princeton University Press: Princeton, NJ, USA, 2002; pp. xii+111.
60. Wang, A.T. Quadratic variation of functionals of Brownian motion. Ann. Probab. 1977, 5, 756–769.
61. Yor, M. Un exemple de processus qui n'est pas une semi-martingale. Temps Locaux 1978, 52, 219–222.
62. Karandikar, R.L.; Rao, B.V. Introduction to Stochastic Calculus; Indian Statistical Institute Series; Springer: Singapore, 2018; pp. xiii+441.
63. Dellacherie, C. Mesurabilité des débuts et théorème de section: Le lot à la portée de toutes les bourses. In Séminaire de Probabilités XV, 1979/80; Springer: Berlin/Heidelberg, Germany, 1981; pp. 351–370.
64. Beiglböck, M.; Schachermayer, W.; Veliyev, B. A direct proof of the Bichteler–Dellacherie theorem and connections to arbitrage. Ann. Probab. 2011, 39, 2424–2440.
65. Beiglböck, M.; Schachermayer, W.; Veliyev, B. A short proof of the Doob–Meyer theorem. Stoch. Processes Their Appl. 2012, 122, 1204–1209.
66. Beiglböck, M.; Siorpaes, P. Riemann-integration and a new proof of the Bichteler–Dellacherie theorem. Stoch. Processes Their Appl. 2014, 124, 1226–1235.
67. Lévy, P. Théorie de l'Addition des Variables Aléatoires; Gauthier-Villars: Paris, France, 1937.
68. Itô, K. Lectures on Stochastic Processes; Tata Institute of Fundamental Research: Bombay, India, 1961; Volume 24.
69. Applebaum, D. Lévy Processes and Stochastic Calculus, 2nd ed.; Cambridge Studies in Advanced Mathematics; Cambridge University Press: Cambridge, UK, 2009; Volume 116, p. 490.
70. Émery, M. Métrisabilité de quelques espaces de processus aléatoires. Sémin. Probab. Strasbg. 1980, 14, 140–147.
71. Jänich, K. Topology; Undergraduate Texts in Mathematics; Springer: New York, NY, USA, 1984.
72. Kabanov, Y.M.; Shiryaev, A.N.; Stojanov, J.M.; Liptser, R.S. From Stochastic Calculus to Mathematical Finance: The Shiryaev Festschrift; Springer: Berlin/Heidelberg, Germany, 2006.
73. Métivier, M. Semimartingales: A Course on Stochastic Processes; De Gruyter Studies in Mathematics; De Gruyter: Berlin, Germany; New York, NY, USA, 1982; Volume 2.
74. Klebaner, F.C. Introduction to Stochastic Calculus with Applications, 2nd ed.; Imperial College Press: London, UK, 2005; pp. xvi+412.
75. Aksamit, A.; Jeanblanc, M. Enlargement of Filtration with Finance in View; SpringerBriefs in Quantitative Finance; Springer International Publishing: Cham, Switzerland, 2017.
76. Sohns, M. General Stochastic Vector Integration—Three Approaches. Preprints 2025, 2025010645.
77. Doob, J.L. Stochastic Processes; Wiley Classics Library; John Wiley & Sons: New York, NY, USA, 1953; Volume 7.
78. Meyer, P.A. A decomposition theorem for supermartingales. Ill. J. Math. 1962, 6, 193–205.
79. Doléans-Dade, C.; Meyer, P.A. Intégrales stochastiques par rapport aux martingales locales. In Séminaire de Probabilités IV, Université de Strasbourg; Lecture Notes in Mathematics; Springer: Berlin/Heidelberg, Germany, 1970; Volume 124, pp. 77–107.
