Article

Stubbornness as Control in Professional Soccer Games: A BPPSDE Approach

by
Paramahansa Pramanik
Department of Mathematics and Statistics, University of South Alabama, Mobile, AL 36688, USA
Mathematics 2025, 13(3), 475; https://doi.org/10.3390/math13030475
Submission received: 17 November 2024 / Revised: 24 January 2025 / Accepted: 29 January 2025 / Published: 31 January 2025

Abstract:
This paper defines stubbornness as an optimal feedback Nash equilibrium within a dynamic setting. Stubbornness is treated as a player-specific parameter, with the team’s coach initially selecting players based on their stubbornness and making substitutions during the game according to this trait. The payoff function of a soccer player is evaluated based on factors such as injury risk, assist rate, pass accuracy, and dribbling ability. Each player aims to maximize their payoff by selecting an optimal level of stubbornness that ensures their selection by the coach. The goal dynamics are modeled using a backward parabolic partial stochastic differential equation (BPPSDE), leveraging its theoretical connection to the Feynman–Kac formula, which links stochastic differential equations (SDEs) to partial differential equations (PDEs). A stochastic Lagrangian framework is developed, and a path integral control method is employed to derive the optimal measure of stubbornness. The paper further applies a variant of the Ornstein–Uhlenbeck BPPSDE to obtain an explicit solution for the player’s optimal stubbornness.

1. Introduction

In this paper, we determine the optimal degree of stubbornness for a soccer player to enhance their chances of being selected by the coach, considering goal dynamics governed by a backward parabolic partial stochastic differential equation (BPPSDE) and utilizing a Feynman-type path integral control method [1,2]. Since stubbornness is an inherent trait of a player that cannot be easily or flexibly modified, it is treated as a fixed parameter for each individual. The coach selects players based on this trait, considering the specific circumstances of the match. Stubbornness is a valuable quality in soccer, particularly for scoring goals, as it fosters persistence and resilience. Several factors highlight the benefits of stubbornness: first, a determined player will not give up easily, even when faced with strong defenses or multiple failed attempts, creating scoring opportunities through persistent efforts. Second, challenges such as defensive pressure, adverse weather, or fatigue can obstruct scoring, yet stubborn players are more likely to persist, maintaining their focus until they succeed. Third, a stubborn mindset keeps players mentally strong and confident, with the belief that they can eventually get the ball past the goalkeeper. Fourth, when combined with adaptability, stubbornness enables players to learn from missed opportunities, adjusting their tactics instead of getting discouraged, which typically leads to higher success rates. Lastly, a relentless attitude can inspire teammates to adopt the same persistence, boosting the team’s energy and increasing the chances of breakthroughs. Notable examples are Diego Maradona’s performances in the quarterfinal against England and the semifinal against Belgium at the 1986 Men’s Soccer World Cup. Soccer history is also filled with examples of skilled players whose careers faltered due to a lack of discipline and teamwork. In this analysis, the coach has the authority to assess and manage a player’s level of stubbornness. If a player is overly stubborn, the coach may view them as a liability to the team, where the costs outweigh the benefits. While the coach may naturally prefer the least stubborn players, relying solely on this strategy may not lead to consistent success for the team, as this approach could become predictable to the opposition. If the opposing team is aware of this preference, they can simply focus on countering the coach’s strategies. Conversely, if a player demonstrates some level of flexibility, it forces the opposition to spend additional time analyzing potential strategies. Moreover, spontaneous decisions made by a player in the heat of the moment could yield favorable outcomes for their team. Therefore, we adopt a balanced approach to determine an optimal level of stubbornness, striking a middle ground that benefits the team strategically. In this analysis, it is assumed that a player’s stubbornness arises from a mean field interaction [3,4]. We also provide an explicit solution for stubbornness as a feedback Nash equilibrium, assuming the player’s objective function incorporates stubbornness as one of its factors. Since the coach cannot frequently or continuously substitute players due to a limited number of allowed substitutions, and players’ behavioral traits (e.g., stubbornness levels) tend to remain stable during a match, these practical constraints reduce the general optimization problem to a discrete and finite framework appropriate for soccer.
To adapt the feedback control model to this scenario, we first determine the control values from the general model. The coach then partitions the continuous range of stubbornness levels into discrete sub-intervals, forming distinct categories. Based on the feedback control value provided by the general model, the coach assigns a player to the corresponding category. For instance, if the stubbornness level is defined over the interval $[0,1]$, the coach might divide it into subgroups such as $A=[0,0.25)$, $B=[0.25,0.5)$, $C=[0.5,0.75)$, and $D=[0.75,1]$. If a player's feedback control value is determined to be 0.32, they would be placed in category $B$. This approach allows us to translate continuous control values into actionable discrete decisions, as illustrated in the sketch below.
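As a minimal illustration of this categorization step, the sketch below maps a continuous feedback control value to one of the coach's discrete categories. The function name `categorize_stubbornness` and the cut points are illustrative assumptions matching the example above, not part of the model.

```python
# Minimal sketch (illustrative): map a continuous stubbornness level in [0, 1]
# to one of the coach's discrete categories A-D described in the text.
def categorize_stubbornness(u_star: float) -> str:
    if not 0.0 <= u_star <= 1.0:
        raise ValueError("feedback control value must lie in [0, 1]")
    if u_star < 0.25:
        return "A"   # [0, 0.25)
    if u_star < 0.50:
        return "B"   # [0.25, 0.5)
    if u_star < 0.75:
        return "C"   # [0.5, 0.75)
    return "D"       # [0.75, 1]

print(categorize_stubbornness(0.32))  # -> "B", as in the example above
```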
Soccer enjoys widespread popularity globally thanks to its straightforward rules. Among the most significant tournaments in the sport are the World Cup, the Euro Cup, and the Copa America, with the World Cup being the most renowned. Santos (2014) [5] emphasized that FIFA has been concerned about teams becoming overly defensive in their goal-scoring strategies since the early 1990s, which has led to a reduction in the total number of goals over time and a subsequent dip in the sport’s appeal. For example, during the 2006 men’s World Cup, both Italy and Spain conceded only two goals throughout their seven matches. A USA Today article from 17 March 1994, highlighted FIFA’s goal to promote more attacking and high-scoring games. This objective has prompted some teams to alter their approach, as seen in the 2014 men’s World Cup semifinal, where Germany scored five goals against Brazil in the first half. In the 2018 men’s World Cup, France adopted an offensive strategy despite lacking star players and having a younger team, allowing them to play freely and ultimately win the tournament. On the other hand, teams with well-known soccer stars sometimes underperformed because their predictable playstyles, honed over long careers, were easily countered by opponents. This highlights how mental stress can significantly impact game outcomes, influencing the financial stakes for both winners and losers.
Soccer is a game rich in strategic complexity, providing numerous avenues for analyzing optimal tactics [6]. In a match, two teams face off for ninety minutes, each composed of a goalkeeper and ten outfield players. Coaches have the flexibility to organize their players in any formation and can alter their team’s playing style at any point during the game [6]. The primary goal for each team is to score while simultaneously preventing the opposition from doing so. The strategies employed by both teams are highly interdependent, as the approach taken by one team directly affects the other’s chances of scoring or conceding goals [6,7].
The soccer literature contains numerous papers across various branches of physics [8,9]. Researchers in statistical physics have primarily examined sports through the lens of stochastic processes, such as analyzing the temporal evolution of scores [10]. Meanwhile, other investigations have introduced novel approaches, utilizing methods like ordinary differential equations [11], stochastic agent-based game simulations [12], and network science theory [13] to capture the intricate dynamic behavior of team players. Continuous-time models are valuable for sports like soccer, hockey, athletics, swimming, speed skating, and basketball. These models produce observation sequences representing events such as goals, shifts in possession, lead changes, and baskets scored, all occurring over time intervals that are not fixed or countable [14]. Although there have been recent advancements, soccer analytics appears to lag behind other major team sports, such as basketball or baseball. This has resulted in soccer team management and strategy being less recognized as analytics-driven. A key challenge in soccer lies in data collection. Typically, data in ball-based sports focus on events occurring near the ball (on-ball actions). However, in soccer, a significant portion of the game’s dynamics takes place away from the ball (off-ball dynamics), which is essential for evaluating team performance [15]. As a result, on-ball actions may offer limited insights compared to off-ball dynamics when it comes to strategy and player assessment [9]. In this context, a possible solution is to construct a stochastic control environment in which the coach can select and substitute players based on their stubbornness; this is a new approach to addressing the issue. Furthermore, the BPPSDE provides additional flexibility in modeling the goal dynamics.
Palomino (1998) made a notable impact on the development of dynamic game-theoretic models aimed at optimizing strategic decisions in sports [16]. This research examined how two competing soccer teams continuously chose between defensive and offensive formations. It was observed that a team leading in the score tends to adopt a defensive stance when the match is tied or they are behind, particularly in the second half. In a similar vein, Banerjee (2007) [17] explored the strategic shifts in the National Hockey League following a change in the point-scoring system for games tied at the end of regulation time [6]. The analysis in these studies, however, is constrained by certain assumptions that are either unrealistic or contentious, limiting the scope for a broader dynamic strategic evaluation. For example, one study assumes—without verification—that adopting an offensive strategy boosts the goal-scoring rate of the attacking team more significantly than that of the defending team. This assumption is tested by examining the 2014 men’s soccer World Cup semifinal between Germany and Brazil, where Germany scored five goals in just 29 min, including four within the first six minutes, ultimately leading 7-0 by the second half [4]. However, despite Germany’s aggressive approach early on, they only managed to score two additional goals in the remaining hour of the game. Another study challenges the previous assumption by introducing a parameter that accounts for a comparative advantage in either offense or defense. Both articles under review focus on the strategic decisions available to soccer teams, yet they primarily discuss the binary options of attack and defense, neglecting a vital secondary dimension: the choice between a violent and non-violent style of play [6]. Teams that play violently may commit fouls and risk red cards, while other aggressive acts may aim to disrupt or sabotage the opposing team [6]. The study by Dobson (2010) [6] assumes that these strategic choices are discrete, forcing teams to select between defensive and attacking formations, as well as violent and non-violent styles of play. These choices significantly affect the chances of scoring or conceding goals and the probability of receiving red cards. Through numerical simulations, Dobson shows that optimal strategic decisions depend on various factors, including the current score difference and the time remaining in the game [6]. In this paper, we treat stubbornness as a continuous control variable, with its maximum value representing an attacking approach.
The structure of the paper is as follows: Section 2 discusses the properties of the payoff function and the BPPSDE. Section 3 presents an explicit solution for optimal stubbornness using the Wick-rotated Schrödinger-type equation. Finally, Section 4 offers a brief conclusion.

2. Background Framework

In this section, we discuss the construction of forward-looking stochastic goal dynamics along with a conditional expected dynamic objective function.

2.1. General Notation

In this paper, we work with a sample space $\Omega$, a $\sigma$-field $\mathcal{F}$, and a probability measure $\mathbb{P}$. We denote by $\{\mathcal{F}_t\}_{t\in I}$ a family of sub-$\sigma$-fields of $\mathcal{F}$, where $I$ is an ordered index set and $\mathcal{F}_s\subseteq\mathcal{F}_t$ for every continuous time $s<t$ with $s,t\in I$. Moreover, we define the canonical filtration, augmented by the $\mathbb{P}$-null sets, as $\mathcal{F}_t^W:=\sigma\{W_s\,|\,0\le s\le t\}\vee\mathcal{N}$, $t\in[0,\infty)$, where $\mathcal{N}$ is the collection of $\mathbb{P}$-null sets. We denote the goal dynamics and the adjusted goal dynamics by $x(\cdot)$ and $x^*(\cdot)$, respectively. We denote by $u(\cdot)$ the stubbornness of a player, which is fixed throughout the match, so that the functional control space $\mathcal{U}$ takes values in $\mathbb{R}$; similarly, the functional state space $\mathcal{X}$ takes values in $\mathbb{R}^k$. Throughout our analysis, $\mu$ and $\sigma$ denote the drift and diffusion components of the SDEs, and $\sigma$ consists of two subparts, $\sigma_1$ and $\sigma_2$. Furthermore, $\phi$ is the value function that solves the BPPSDE, $V$ represents the rate of change of $\phi$, and $\Theta$ is a source term that influences the evolution of $\phi$ such that $|\Theta|\le C(1+|x|)$, where $C$ is a positive finite constant. We denote by $T[t,x(t)]$ the terminal condition of the BPPSDE, and by $\mathbb{E}\{\cdot\}$ and $\mathbb{E}_s:=\mathbb{E}\{\cdot\,|\,\mathcal{F}_s\}$ the expectation and the conditional expectation on the goal dynamics, respectively. For $t>0$, $\mathcal{P}$ denotes the $\sigma$-field of predictable sets on $\Omega\times(0,t)$ associated with the filtration $\{\mathcal{F}_s\}_{s\ge 0}$, $L^p$ is the functional space defined using a natural generalization of the $p$-norm for finite-dimensional vector spaces, and $\partial_x^{\tilde\alpha}$ is the vector of weak derivatives with respect to the vector $x$ of order $\tilde\alpha=(\alpha_1,\alpha_2,\dots,\alpha_\ell)$. We denote the Sobolev space by $S^{J,p}$, where $J\in\mathbb{N}$ and $p>1$, with $f\in S$ a function taking values in this Sobolev space. For an integer $j$ and $p>1$, we use the notation $G^j=G^j(\mathbb{R})$ for the Sobolev space $S^{j,p}(\mathbb{R})$ on $\mathbb{R}$ for all $j\ge 0$. We define
$$G^0 = L^2 = G^0(\mathbb{R}) = L^2(\mathbb{R}), \qquad \mathbb{G}^j = \mathbb{G}^j(\mathbb{R}) = L^2\big(\Omega\times[0,t],\mathcal{P},G^j\big).$$
Furthermore, we use the notation $\|\cdot\|_j = \|\cdot\|_{G^j}$. Additionally, for a function $f$ defined on $\Omega\times[0,t]\times\mathbb{R}$, the norm $|||f|||_j$ is given by
$$|||f|||_j^2 = \mathbb{E}\int_0^t \|f(s,\cdot)\|_j^2\,ds.$$
We also denote by $\mathcal{B}(\mathbb{R})$ the Borel $\sigma$-field on $\mathbb{R}$. We use $H_1$ and $H_2$ to denote two separable Hilbert spaces such that $H_1$ is densely embedded in $H_2$. We also assume that both $H_1$ and $H_2$ have the same dual space $H_2^*$, so that $H_1\subset H_2\subset H_2^*$, with the linear operators $\Xi_1(\omega,s):H_1\to H_2^*$ and $\Xi_2(\omega,s):H_2\to H_2^*$ and corresponding norms $\|\cdot\|_{H_1}$, $\|\cdot\|_{H_2}$, and $\|\cdot\|_{H_2^*}$, respectively. We define a stopping time as
$$\tau_k(\omega) = \inf\Big\{s;\ \sup_{s\le t}\|\phi(\omega,s,x(s),u(s))\|_{H_2}\ge k\Big\}\wedge t.$$
Finally, we denote by $J(u)$, $\pi(\cdot)$, $x_0$, $\hat{L}$, and $\lambda$ the expected payoff, the payoff function, the initial goal dynamics, the Lagrangian, and the Lagrange multiplier, respectively. Additional notation is introduced in Section 3; since it is not needed to state the main results, we do not introduce it here.

2.2. Probabilistic Construction

Let $t>0$ be a fixed finite time. Define $W_s$ as a $k$-dimensional Wiener process arising in a soccer game at time $s$, for all $s\in[0,t]$, defined on a complete probability space $(\Omega,\mathcal{F},\mathbb{P})$ with sample space $\Omega$, $\sigma$-field $\mathcal{F}$, and probability measure $\mathbb{P}$. Here, $k$ represents the number of independent sources of uncertainty (e.g., player actions, weather conditions, strategy randomness).
Definition 1. 
Consider $\{\mathcal{F}_t\}_{t\in I}$ as a family of sub-$\sigma$-fields of $\mathcal{F}$, where $I$ represents an ordered index set with the condition $\mathcal{F}_s\subseteq\mathcal{F}_t$ for every $s<t$ and $s,t\in I$. This family $\{\mathcal{F}_t\}_{t\in I}$ is called a filtration of the above process [18,19].
If we simply speak of goal dynamics $\{x_t\}_{t\in I}$, or simply $x(t)$, this implies that the filtration corresponding to a soccer match is
$$\mathcal{F}_t := \mathcal{F}_t^x := \sigma\{x(s)\,|\,s\le t,\ s,t\in I\},$$
which is called the canonical or natural filtration of $\{x_t\}_{t\in I}$. In our case, the Wiener process is associated with the canonical filtration, augmented by the $\mathbb{P}$-null sets,
$$\mathcal{F}_t^W := \sigma\{W_s\,|\,0\le s\le t\}\vee\mathcal{N},\qquad t\in[0,\infty),$$
where $\mathcal{N}$ is the collection of $\mathbb{P}$-null sets.
In this paper, $x(s)\in\mathcal{X}$ denotes the goal dynamics at time $s$ for an individual player, with values in $\mathbb{R}$. The goalkeepers are excluded from this analysis, as they typically remain at their goal posts and generally do not move to the opposite side to score. By goal dynamics, we refer to the probability of scoring a goal at a specific continuous time point $s$. A higher probability of scoring does not guarantee that a goal has actually been scored; for instance, a striker may miss the goal, a shot may be deflected by the opposing goalkeeper or players, or an offside call may be made, among other possibilities. Let the goal dynamics be represented by a stochastic differential equation (SDE)
$$dx(s) = \mu\big[s,x(s),u(s),\sigma_2(s,x(s),u(s))\big]\,ds + \sigma\big[s,x(s),u(s)\big]\,dW_s, \tag{1}$$
where $\mu:[0,t]\times\mathcal{X}\times\mathcal{U}\times\mathbb{R}\to\mathbb{R}^k$, $\sigma_2:[0,t]\times\mathcal{X}\times\mathcal{U}\to\mathbb{R}^{k\times k}$, and $\sigma:[0,t]\times\mathcal{X}\times\mathcal{U}\to\mathbb{R}^{k\times k}$ are the drift and diffusion coefficients, respectively, $x(0)=x_0$, and the control variable $u\in\mathcal{U}$ takes values in $\mathbb{R}$. Clearly, $x(s)$ in Equation (1) is not guaranteed to lie in $[0,1]$. To impose this restriction, we apply the logistic transformation $x^*(s) = [1+\exp\{-x(s)\}]^{-1}$.
In Equation (1), as σ 2 measures uncertainty or variability in the goal dynamics, it acts as a feedback mechanism, adjusting μ to account for this variability. In other words, this represents how a system compensates for expected fluctuations or adapts to anticipated noise. Furthermore, since σ 2 encodes variance or correlation structures in the noise, μ could use this information to stabilize the system. In this environment, σ 2 reflects how uncertainty evolves under different values of stubbornness u ( s ) . Including it in μ allows for anticipating performance due to stubbornness under uncertainty, or risk-aware stubbornness that adapts the deterministic trajectory based on noise levels. Moreover, inclusion of σ 2 introduces nonlinear relationships between μ and σ . As σ 2 depends on σ , the drift term μ could account for second-order effects or corrections arising from goal dynamics. In our context, σ 2 represents variability in scoring rates or possession changes due to environmental uncertainties. Including it in the drift implies the deterministic trend accounts for these uncertainties.
In this framework, the control variable $u$ represents the selection of players based on their fixed stubbornness parameter. Relying on prior knowledge of each player's stubbornness, the coach initially selects the team before the game begins and subsequently substitutes players during the match based on their level of stubbornness and the game's conditions at time $s$. A highly stubborn player makes decisions independently, disregarding strategies set in the dressing room or by the coach, leading to a high value of $u$. Conversely, a less stubborn player adheres to team strategies, resulting in a low value of $u$. We can assume, without loss of generality, that $u\in[0,1]$, where $0$ indicates no stubbornness. The coach always prefers a player with $u$ close to $0$. Although the coach might instinctively favor $u$ close to $0$, relying exclusively on this strategy could compromise the team's long-term success, as it risks becoming predictable to the opposition. If the opposing team recognizes this preference, they can concentrate their efforts on countering the coach's tactics. On the other hand, when a player exhibits a certain degree of flexibility, it compels the opposition to invest more time and effort in evaluating potential strategies. Additionally, quick, on-the-spot decisions by players can sometimes yield advantageous outcomes for the team. Therefore, we assume $u\ge\tilde a$ for some constant $\tilde a>0$. Furthermore, define
$$\sigma\big[s,x(s),u(s)\big] := \sigma_1\big[s,x(s),u(s)\big] - \sigma_2\big[s,x(s),u(s)\big], \tag{2}$$
where σ 1 represents a stochastic network of passing the ball from a single player to others. In this context, we account for the passing of 19 players, as a missed pass might result in the ball being intercepted by an opposing player, and σ 2 arises due to environmental and strategic uncertainties. The negative sign in Equation (2) indicates that σ 2 negatively impacts σ 1 , as environmental and strategic uncertainties contribute to reducing the size of the stochastic ball-passing network.
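To make the roles of Equations (1) and (2) concrete, the following sketch simulates one path of the goal dynamics with an Euler–Maruyama scheme and applies the logistic transformation so that the adjusted dynamics $x^*(s)$ stay in $[0,1]$. The drift and diffusion functions `mu`, `sigma1`, and `sigma2` below are placeholder forms chosen only for illustration; they are not the calibrated model of this paper.

```python
import numpy as np

# Euler-Maruyama sketch of the goal dynamics in Equation (1) with
# sigma = sigma1 - sigma2 as in Equation (2). Functional forms are assumptions.
def mu(s, x, u, sig2):           # drift, allowed to depend on sigma_2
    return u * (1.0 - x) - 0.5 * sig2

def sigma1(s, x, u):             # passing-network volatility (assumed form)
    return 0.3 * (1.0 + u)

def sigma2(s, x, u):             # environmental/strategic volatility (assumed form)
    return 0.1

def simulate_goal_dynamics(u=0.4, x0=0.0, T=1.0, n=1000, seed=0):
    rng = np.random.default_rng(seed)
    dt = T / n
    x = np.empty(n + 1)
    x[0] = x0
    for i in range(n):
        s = i * dt
        s2 = sigma2(s, x[i], u)
        drift = mu(s, x[i], u, s2)
        diffusion = sigma1(s, x[i], u) - s2          # Equation (2)
        x[i + 1] = x[i] + drift * dt + diffusion * rng.normal(0.0, np.sqrt(dt))
    return x, 1.0 / (1.0 + np.exp(-x))               # raw and logistic-adjusted paths

raw_path, adjusted_path = simulate_goal_dynamics()
print(adjusted_path[-1])   # adjusted goal dynamics at terminal time, in [0, 1]
```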

2.3. Existence and Uniqueness of a Solution of the BPPSDE

Consider the BPPSDE in divergence form as
$$\Big[\frac{\partial \phi}{\partial s}[s,x(s),u(s)] - V[s,x(s),u(s)]\,\phi[s,x(s),u(s)] + \Theta[s,x(s),u(s)] + \frac{\partial \phi}{\partial x}[s,x(s),u(s)]\,\mu\big(s,x(s),u(s),\sigma_2[s,x(s),u(s)]\big) + \frac{1}{2}\frac{\partial^2 \phi}{\partial x^2}[s,x(s),u(s)]\,\big(\sigma_1[s,x(s),u(s)]-\sigma_2[s,x(s),u(s)]\big)^2\Big]\,ds - \frac{\partial \phi}{\partial x}[s,x(s),u(s)]\,\sigma_2[s,x(s),u(s)]\,dW_s = 0,$$
with terminal condition $\phi(t,x,u) = T[t,x(t)]$, where $\phi(s,x,u)$ is the value function, representing the solution of the BPPSDE, $V(s,x,u)$ represents the discount factor or rate of change in $\phi$ due to external influences, which is assumed to be smooth and bounded, $|V(s,x,u)|\le C$, and $\Theta(s,x,u)$ represents a source term that influences the evolution of $\phi$ such that $|\Theta(s,x,u)|\le C(1+|x|)$.
Assumption 1. 
The drift coefficient μ is smooth (continuously differentiable) in all its arguments and satisfies linear growth and Lipschitz conditions:
$$|\mu(s,x,u,\sigma_2)|\le C(1+|x|), \qquad |\mu(s,x_1,u,\sigma_2)-\mu(s,x_2,u,\sigma_2)|\le L|x_1-x_2|,$$
and the diffusion coefficients $\sigma_1(s,x,u)$ and $\sigma_2(s,x,u)$ are smooth in all their arguments and satisfy similar growth and Lipschitz conditions as $\mu$:
$$|\sigma_i(s,x,u)|\le C(1+|x|), \qquad |\sigma_i(s,x_1,u)-\sigma_i(s,x_2,u)|\le L|x_1-x_2|,$$
for $i=1,2$.
In the following lemma, we adopt a similar approach to the Feynman–Kac formula. However, since we assume the existence of a unique solution to a stochastic parabolic partial differential equation, we cannot label it as the Feynman–Kac formula, as that specifically relates a deterministic linear parabolic partial differential equation to path integrals.
Lemma 1.
Let
$$\Big[-V[s,x(s),u(s)]\,\phi[s,x(s),u(s)] + \Theta[s,x(s),u(s)] + \frac{\partial \phi[s,x(s),u(s)]}{\partial s} + \frac{\partial \phi[s,x(s),u(s)]}{\partial x}\,\mu\big(s,x(s),u(s),\sigma_2[s,x(s),u(s)]\big) + \frac{1}{2}\frac{\partial^2 \phi[s,x(s),u(s)]}{\partial x^2}\,\big(\sigma_1[s,x(s),u(s)]-\sigma_2[s,x(s),u(s)]\big)^2\Big]\,ds - \frac{\partial \phi[s,x(s),u(s)]}{\partial x}\,\sigma_2[s,x(s),u(s)]\,dW_s = 0,$$
be a BPPSDE, where $[s,x(s),u(s)]\in[0,t]\times\mathbb{R}\times[0,1]$, subject to the terminal condition $\phi[t,x(t),u(t)] = T[t,x(t)]$; $\Theta$, $V$, $\mu$, $\sigma_1$, and $\sigma_2$ are known functions; and $\phi[s,x(s),u(s)]:[0,t]\times\mathbb{R}\times[0,1]\to\mathbb{R}$ is an unknown function. Then, assuming a unique solution exists, it can be written as the conditional expectation
$$\phi[s,x(s),u(s)] = \mathbb{E}\bigg\{ T[t,x(t)]\exp\Big(-\int_s^t V[\kappa,x(\kappa),u(\kappa)]\,d\kappa\Big) + \int_s^t \Theta[s_1,x(s_1),u(s_1)]\exp\Big(-\int_s^{s_1} V[\kappa,x(\kappa),u(\kappa)]\,d\kappa\Big)\,ds_1 \,\bigg|\, \mathcal{F}_s \bigg\}$$
for all $s_1\in[s,t]$, such that $X$ is an Itô process of the form represented by SDE (1).
Proof. 
Assume that ϕ [ s , x ( s ) , u ( s ) ] is a solution to the stochastic differential equation given in Equation (3). Define a stochastic process
$$Z(s_1) = \exp\Big(-\int_s^{s_1} V[\kappa,x(\kappa),u(\kappa)]\,d\kappa\Big)\,\phi[s_1,x(s_1),u(s_1)] + \int_s^{s_1}\exp\Big(-\int_s^{l} V[\kappa,x(\kappa),u(\kappa)]\,d\kappa\Big)\,\Theta[l,x(l),u(l)]\,dl. \tag{4}$$
Applying Itô’s Formula to Equation (4) yields,
$$\begin{aligned} dZ(s_1) &= -\exp\Big(-\int_s^{s_1} V[\kappa,x(\kappa),u(\kappa)]\,d\kappa\Big)\,V[s_1,x(s_1),u(s_1)]\,\phi[s_1,x(s_1),u(s_1)]\,ds_1 + \exp\Big(-\int_s^{s_1} V[\kappa,x(\kappa),u(\kappa)]\,d\kappa\Big)\,d\phi[s_1,x(s_1),u(s_1)] \\ &\quad + \exp\Big(-\int_s^{s_1} V[\kappa,x(\kappa),u(\kappa)]\,d\kappa\Big)\,\Theta[s_1,x(s_1),u(s_1)]\,ds_1 \\ &= \exp\Big(-\int_s^{s_1} V[\kappa,x(\kappa),u(\kappa)]\,d\kappa\Big)\Big\{ d\phi[s_1,x(s_1),u(s_1)] + \Theta[s_1,x(s_1),u(s_1)]\,ds_1 - V[s_1,x(s_1),u(s_1)]\,\phi[s_1,x(s_1),u(s_1)]\,ds_1 \Big\}. \end{aligned}$$
By Lemma (A1) of the appendix, it is shown that
$$\begin{aligned} d\phi[s_1,x(s_1),u(s_1)] &= \Big\{\frac{\partial\phi[s_1,x(s_1),u(s_1)]}{\partial s_1} + \frac{\partial\phi[s_1,x(s_1),u(s_1)]}{\partial x}\,\mu\big\{s_1,x(s_1),u(s_1),\sigma_2[s_1,x(s_1),u(s_1)]\big\} \\ &\quad + \frac{1}{2}\frac{\partial^2\phi[s_1,x(s_1),u(s_1)]}{\partial x^2}\,\big(\sigma_1[s_1,x(s_1),u(s_1)]-\sigma_2[s_1,x(s_1),u(s_1)]\big)^2\Big\}\,ds_1 \\ &\quad + \frac{\partial\phi[s_1,x(s_1),u(s_1)]}{\partial x}\,\sigma_1[s_1,x(s_1),u(s_1)]\,dW_{s_1} - \frac{\partial\phi[s_1,x(s_1),u(s_1)]}{\partial x}\,\sigma_2[s_1,x(s_1),u(s_1)]\,dW_{s_1}. \end{aligned}$$
Substituting Equation (6) into (5) implies
$$\begin{aligned} dZ(s_1) &= \exp\Big(-\int_s^{s_1} V[\kappa,x(\kappa),u(\kappa)]\,d\kappa\Big)\Big\{\Big[-V[s_1,x(s_1),u(s_1)]\,\phi[s_1,x(s_1),u(s_1)] + \Theta[s_1,x(s_1),u(s_1)] + \frac{\partial\phi[s_1,x(s_1),u(s_1)]}{\partial s_1} \\ &\quad + \frac{\partial\phi[s_1,x(s_1),u(s_1)]}{\partial x}\,\mu\big(s_1,x(s_1),u(s_1),\sigma_2[s_1,x(s_1),u(s_1)]\big) + \frac{1}{2}\frac{\partial^2\phi[s_1,x(s_1),u(s_1)]}{\partial x^2}\,\big(\sigma_1[s_1,x(s_1),u(s_1)]-\sigma_2[s_1,x(s_1),u(s_1)]\big)^2\Big]\,ds_1 \\ &\quad - \frac{\partial\phi[s_1,x(s_1),u(s_1)]}{\partial x}\,\sigma_2[s_1,x(s_1),u(s_1)]\,dW_{s_1}\Big\} + \exp\Big(-\int_s^{s_1} V[\kappa,x(\kappa),u(\kappa)]\,d\kappa\Big)\,\frac{\partial\phi[s_1,x(s_1),u(s_1)]}{\partial x}\,\sigma_1[s_1,x(s_1),u(s_1)]\,dW_{s_1}. \end{aligned}$$
Because $\phi[s,x(s),u(s)]$ is a solution of the stochastic differential equation given in Equation (3), Equation (7) becomes
$$dZ(s_1) = \exp\Big(-\int_s^{s_1} V[\kappa,x(\kappa),u(\kappa)]\,d\kappa\Big)\,\frac{\partial\phi[s_1,x(s_1),u(s_1)]}{\partial x}\,\sigma_1[s_1,x(s_1),u(s_1)]\,dW_{s_1}.$$
The integral form of Equation (8) is,
$$Z(t) - Z(s) = \int_s^t \exp\Big(-\int_s^{s_1} V[\kappa,x(\kappa),u(\kappa)]\,d\kappa\Big)\,\frac{\partial\phi[s_1,x(s_1),u(s_1)]}{\partial x}\,\sigma_1[s_1,x(s_1),u(s_1)]\,dW_{s_1}.$$
Since the right-hand side of the preceding display is a stochastic integral (and hence, under the stated assumptions, a martingale), taking the conditional expectation with respect to $\mathcal{F}_s$ gives $\mathbb{E}\{Z(t)\,|\,\mathcal{F}_s\} = Z(s) = \phi[s,x(s),u(s)]$, which is the representation stated in the lemma. From Lemma 1, we can therefore conclude that if the BPPSDE has a unique solution, then it can be written as a conditional expectation, and
$$d\phi[s,x(s),u(s)] = \Big\{V[s,x(s),u(s)]\,\phi[s,x(s),u(s)] - \Theta[s,x(s),u(s)] - \frac{\partial\phi[s,x(s),u(s)]}{\partial x}\,\mu\big(s,x(s),u(s),\sigma_2[s,x(s),u(s)]\big) - \frac{1}{2}\frac{\partial^2\phi[s,x(s),u(s)]}{\partial x^2}\,\big(\sigma_1[s,x(s),u(s)]-\sigma_2[s,x(s),u(s)]\big)^2\Big\}\,ds + \frac{\partial\phi[s,x(s),u(s)]}{\partial x}\,\sigma_2[s,x(s),u(s)]\,dW_s. \tag{10}$$
In Lemma (1), we demonstrate the relationship between a BPPSDE with a unique solution and the path integral. Since our main focus is on establishing the existence of a unique solution for a dynamic objective function governed by an SDE, in Proposition 1, we identify the conditions under which this SDE has a unique solution. To achieve this, we first define a complete filtration, which is a sequence of complete σ -algebras in the F s -measurable space. Then, utilizing Assumptions 2–5, and finally, with Definitions 2 and 3, we obtain the unique solution for the SDE.
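Assuming a unique solution exists, the representation in Lemma 1 can also be evaluated numerically: simulate paths of the state SDE forward from $(s,x)$ and average the discounted terminal and source terms. The sketch below does this for hypothetical choices of $\mu$, $\sigma$, $V$, $\Theta$, and $T$; it only illustrates the conditional-expectation formula and is not a solver for the BPPSDE.

```python
import numpy as np

# Monte Carlo sketch of the representation in Lemma 1:
#   phi(s, x) ~ E[ T(x_t) exp(-int_s^t V dk) + int_s^t Theta exp(-int_s^{s1} V dk) ds1 | x(s) = x ]
# All coefficient functions below are illustrative assumptions.
def mu(s, x, u):      return 0.2 * u - 0.1 * x
def sigma(s, x, u):   return 0.3
def V(s, x, u):       return 0.05
def Theta(s, x, u):   return 0.1 * x
def T_terminal(x):    return x

def phi_estimate(s, x, u=0.5, t=1.0, n_steps=200, n_paths=20000, seed=1):
    rng = np.random.default_rng(seed)
    dt = (t - s) / n_steps
    xs = np.full(n_paths, float(x))
    discount = np.ones(n_paths)      # exp(-int_s^{s1} V dkappa), updated step by step
    running = np.zeros(n_paths)      # accumulated discounted source term
    for i in range(n_steps):
        si = s + i * dt
        running += Theta(si, xs, u) * discount * dt
        discount *= np.exp(-V(si, xs, u) * dt)
        xs += mu(si, xs, u) * dt + sigma(si, xs, u) * np.sqrt(dt) * rng.normal(size=n_paths)
    return float(np.mean(T_terminal(xs) * discount + running))

print(phi_estimate(s=0.0, x=0.5))
```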
Suppose, for $t>0$, that $\mathcal{P}$ is the $\sigma$-field of predictable sets on $\Omega\times(0,t)$ associated with the filtration $\{\mathcal{F}_s\}_{s\ge 0}$, $L^p$ is the functional space defined using a natural generalization of the $p$-norm for finite-dimensional vector spaces, and $\partial_x^{\tilde\alpha}$ is the vector of weak derivatives with respect to the vector $x$ of order $\tilde\alpha = (\alpha_1,\alpha_2,\dots,\alpha_\ell)$.
Definition 2 
(Sobolev space [20,21]). For $\ell\ge 1$, suppose $\Omega$ is an open set in $\mathbb{R}^\ell$. Let $p\ge 1$ and $j\in\mathbb{N}$. The Sobolev space $S^{j,p}$ is defined as
$$S^{j,p} = \big\{ f\in L^p(\Omega\times(0,t)\times\mathbb{R});\ \text{for all } |\tilde\alpha|\le j,\ \partial_x^{\tilde\alpha} f\in L^p(\Omega) \big\},$$
where $\tilde\alpha=(\alpha_1,\alpha_2,\dots,\alpha_\ell)$, $|\tilde\alpha| = \sum_{k=1}^{\ell}\alpha_k$, and $\partial_x^{\tilde\alpha} = (\partial_{x_1}^{\alpha_1},\partial_{x_2}^{\alpha_2},\partial_{x_3}^{\alpha_3},\dots,\partial_{x_\ell}^{\alpha_\ell})$ is the vector of weak derivatives.
Suppose $j$ is an integer and $G^j = G^j(\mathbb{R})$ is the Sobolev space $S^{j,p}(\mathbb{R})$ on $\mathbb{R}$ for all $j\ge 0$. Denote
$$G^0 = L^2 = G^0(\mathbb{R}) = L^2(\mathbb{R}),\qquad \mathbb{G}^j = \mathbb{G}^j(\mathbb{R}) = L^2\big(\Omega\times[0,t],\mathcal{P},G^j\big).$$
Furthermore, consider the notation $\|\cdot\|_j = \|\cdot\|_{G^j}$. Additionally, for a function $f$ defined on $\Omega\times[0,t]\times\mathbb{R}$, the norm $|||f|||_j$ is given by
$$|||f|||_j^2 = \mathbb{E}\int_0^t \|f(s,\cdot)\|_j^2\,ds.$$
Remark 3.2 of Du et al. (2010) [20] provides the conditions for finding a solution to Equation (10).
Definition 3 
([20]). The function pair $\big(\phi[s,x(s),u(s)],\ \sigma_2[s,x(s),u(s)]\big)$ that maps $\Omega\times[0,t]\times\mathbb{R}$ to $\mathbb{R}^2$ is a weak solution to the parabolic partial stochastic differential Equation (10) if $\phi[s,x(s),u(s)]\in L^2[\Omega\times(0,t),\mathcal{P},G^1]$ and $\sigma_2[s,x(s),u(s)]\in L^2[\Omega\times(0,t),\mathcal{P},G^0]$, such that for each $\tau\in G^1$ and each $(\omega,s)\in\Omega\times[0,t]$,
$$\int_{\mathbb{R}} \phi[s,x(s),u(s)]\,\tau[x(s)]\,dx(s) = \int_{\mathbb{R}} T[t,x(t)]\,\tau[x(t)]\,dx(t) + \int_0^t\!\!\int_{\mathbb{R}}\Big\{ -V[s,x(s),u(s)]\,\phi[s,x(s),u(s)] + \Theta[s,x(s),u(s)] + \frac{\partial\phi[s,x(s),u(s)]}{\partial x}\,\mu\big(s,x(s),u(s),\sigma_2[s,x(s),u(s)]\big) + \frac{1}{2}\frac{\partial^2\phi[s,x(s),u(s)]}{\partial x^2}\,\big(\sigma_1[s,x(s),u(s)]-\sigma_2[s,x(s),u(s)]\big)^2\Big\}\,\tau[x(s)]\,dx(s)\,ds + \int_0^t\!\!\int_{\mathbb{R}} \frac{\partial\phi[s,x(s),u(s)]}{\partial x}\,\sigma_2[s,x(s),u(s)]\,dx(s)\,dW_s.$$
Assumption 2. 
The functions $\phi$, $V$, $\Theta$, $\mu$, $\sigma_1$, and $\sigma_2$ are $\mathcal{P}\times\mathcal{B}(\mathbb{R})$-measurable real-valued functions, and the terminal value $T[t,x(t)]$ is an $\mathcal{F}_t\times\mathcal{B}(\mathbb{R})$-measurable real-valued function, where $\mathcal{B}(\mathbb{R})$ is the Borel $\sigma$-field on $\mathbb{R}$.
Assumption 3. 
(Super parabolicity.) For two given constants $C\in(1,\infty)$ and $c\in(0,1)$, $c \le \sigma_1^2(\omega,s,x) + \sigma_2^2(\omega,s,x) - 2\,\sigma_1(\omega,s,x)\,\sigma_2(\omega,s,x) \le C$ for all $(\omega,s,x)\in\Omega\times[0,t]\times\mathbb{R}^1$.
Assumption 4. 
Suppose $H_1$ and $H_2$ are two separable Hilbert spaces such that $H_1$ is densely embedded in $H_2$. We also assume that both $H_1$ and $H_2$ have the same dual space $H_2^*$, so that $H_1\subset H_2\subset H_2^*$, with the linear operators $\Xi_1(\omega,s):H_1\to H_2^*$ and $\Xi_2(\omega,s):H_2\to H_2^*$ given by
$$\Xi_1(\omega,s)\,\phi[s,x(s),u(s)] = \frac{\partial\phi[s,x(s),u(s)]}{\partial x}\,\mu\big(s,x(s),u(s),\sigma_2[s,x(s),u(s)]\big),$$
and
$$\Xi_2(\omega,s)\,\sigma_i[s,x(s),u(s)] = \frac{1}{2}\frac{\partial^2\phi[s,x(s),u(s)]}{\partial x^2}\,\big(\sigma_1[s,x(s),u(s)]-\sigma_2[s,x(s),u(s)]\big)^2$$
for $i=1,2$. We further assume $T[t,x(t)]$ and $\Theta[s,x(s),u(s)]$ take values in $H_1$ and $H_2$, respectively.
Denote the norms of the spaces $H_1$, $H_2$, and $H_2^*$ by $\|\cdot\|_{H_1}$, $\|\cdot\|_{H_2}$, and $\|\cdot\|_{H_2^*}$, respectively. Furthermore, in order to define inner products and duality products, we use $(\cdot,\cdot)$ and $\langle\cdot,\cdot\rangle$, respectively. Finally, define
$$\hat\sigma[s,x(s),u(s)] := \frac{\partial}{\partial x}\phi[s,x(s),u(s)]\,\sigma_2[s,x(s),u(s)] \in H_2.$$
Let us assume three processes $h_1$, $h_2$, and $h_\epsilon$, which are defined on $\Omega\times[0,t]$ and take values in $H_1$, $H_2$, and $H_2^*$, respectively. Moreover, for all $\omega\in\Omega$, $h_1(\omega,s)$ is measurable with respect to $(\omega,s)$ and $\mathcal{F}_s$-measurable in $\omega$ for almost every $s$. For every $\eta\in H_1$, $\langle\eta,h_\epsilon(\omega,s)\rangle$ is $\mathcal{F}_s$-measurable in $\omega$ for almost every $s$ and is measurable with respect to $(\omega,s)$. Let $h_2(\omega,s)$ be strongly continuous in $s$ and $\mathcal{F}_s$-measurable with respect to $\omega$ for any $s$, and a local martingale. Assume $\langle h_2\rangle$ is the increasing process for $\|h_2\|_{H_2}^2$ in a Doob–Meyer decomposition (p. 1240 of [22]). Lemma 3.1 of [20] implies that if $\phi\in L^2(\Omega,\mathcal{F}_t,H_2)$, then for each $\eta\in H_1$ and $(\omega,s)\in\Omega\times[0,t]$,
$$\langle\eta, h_1(s)\rangle = (\eta,\phi) + \int_s^t \langle\eta, h_\epsilon(\nu)\rangle\,d\nu + \big\langle\eta,\ h_2(t) - h_2(s)\big\rangle.$$
Then there exists $\Omega'\subseteq\Omega$ with $\mathbb{P}(\Omega')=1$ and another function $f$, which takes values in $H_1$, so that the following conditions hold:
(i)
$f(s)$ is $\mathcal{F}_s$-measurable for all $s\in[0,t]$ and strongly continuous with respect to $s$ for any $\omega\in\Omega'$, $f(s) = h_1(s)\in H_2$ for almost every $(\omega,s)\in\Omega\times[0,t]$, and the terminal condition $f(t)=\phi$ holds.
(ii)
for every $\omega\in\Omega'$ and $s\in[0,t]$,
$$\|f(s)\|_{H_2}^2 = \|\phi\|_{H_2}^2 + 2\int_s^t \langle h_1(\nu), h_\epsilon(\nu)\rangle\,d\nu + 2\int_s^t \big(f(\nu), dh_2(\nu)\big) - \langle h_2\rangle_t + \langle h_2\rangle_s.$$
Define $\mathbb{H}_2 := \{h_1 = (h_{11},h_{12},\dots,h_{1\ell})\,|\,h_{1j}\in H_2,\ j=1,2,\dots,\ell\}$ with corresponding norm $\|h_1\|_{\mathbb{H}_2} := \big(\sum_{j=1}^{\ell}\|h_{1j}\|_{H_2}^2\big)^{1/2}$.
Assumption 5
(Coercivity condition.) For any two constants $\zeta_1>0$ and $\zeta_2>0$, there holds
$$2\langle x,\Xi_1 x\rangle + \|\Xi_2 x\|_{H_2}^2 \le -\zeta_1\|x\|_{H_1}^2 + \zeta_2\|x\|_{H_2}^2, \qquad \|\Xi_1 x\|_{H_2^*} \le \zeta_2\,\|x\|_{H_1},$$
where $(\omega,s)\in\Omega\times[0,t]$ and $\Xi_2^*:H_1\to H_2$ is the adjoint operator of $\Xi_2$.
Now we are going to give an example showing that Hilbert spaces and their dual spaces satisfy Assumptions 4 and 5. The assumptions imply
$$H_1 \subset H_2 \subset H_2^*,$$
where the embedding $H_1\hookrightarrow H_2$ is dense, and $H_2^*$ is the dual space of $H_2$. This yields
$$\|x\|_{H_2}\le C_1\|x\|_{H_1},\qquad \|x\|_{H_2^*}\le C_2\|x\|_{H_2},$$
for embedding constants $C_1,C_2>0$. Let $\Omega\subset\mathbb{R}^d$ be a bounded domain with a smooth boundary. Consider the following spaces:
(i)
$H_1 = H_0^1(\Omega)$, the Sobolev space of functions that are square-integrable, have square-integrable weak derivatives, and vanish on the boundary of $\Omega$.
(ii)
$H_2 = L^2(\Omega)$, the space of square-integrable functions.
(iii)
$H_2^* = H^{-1}(\Omega)$, the dual space of $H_0^1(\Omega)$, consisting of continuous linear functionals on $H_0^1(\Omega)$.
Furthermore, the spaces have the following properties: the embedding $H_1\hookrightarrow H_2$ is dense and continuous, the dual pairing $\langle x,y\rangle$ between $x\in H_2^*$ and $y\in H_1$ satisfies the required structure, and the norms are ordered as
$$\|x\|_{H_1}\ \ge\ \|x\|_{H_2}\ \ge\ \|x\|_{H_2^*},$$
consistent with the hierarchical embedding. Define $\Xi_1$ as a first-order differential operator, e.g.,
$$\Xi_1 x = \frac{\partial}{\partial x}\phi[s,x(s),u(s)]\,\mu\big[s,x(s),u(s),\sigma_2[s,x(s),u(s)]\big],$$
which maps $H_1$ to $H_2^*$. Define $\Xi_2$ as a second-order differential operator, e.g.,
$$\Xi_2 x = \frac{1}{2}\frac{\partial^2\phi[s,x(s),u(s)]}{\partial x^2}\,\big(\sigma_1[s,x(s),u(s)]-\sigma_2[s,x(s),u(s)]\big)^2,$$
which maps $H_2$ to $H_2^*$. The coercivity condition is
$$2\langle x,\Xi_1 x\rangle + \|\Xi_2 x\|_{H_2}^2 \le -\zeta_1\|x\|_{H_1}^2 + \zeta_2\|x\|_{H_2}^2,$$
and
$$\|\Xi_1 x\|_{H_2^*} \le \zeta_2\,\|x\|_{H_1}.$$
For $\Xi_1 x = \frac{\partial}{\partial x}\phi[s,x(s),u(s)]$ and $\Xi_2 x = \frac{\partial^2\phi[s,x(s),u(s)]}{\partial x^2}$, we use integration by parts and the Poincaré inequality:
$$2\langle x,\Xi_1 x\rangle_{H_2} + \|\Xi_2 x\|_{H_2}^2 \le -\zeta_1\|x\|_{H_1}^2 + \zeta_2\|x\|_{H_2}^2.$$
This holds for appropriate constants $\zeta_1>0$ and $\zeta_2>0$, ensuring coercivity. Finally, from the notation, $H_1\subset H_2$, indicating that $H_1$ is the more restrictive space (e.g., $H_0^1(\Omega)$), while $H_2$ is less restrictive (e.g., $L^2(\Omega)$). Their dual spaces satisfy $H_2\subset H_2^*$, which is consistent with the embeddings.
Proposition 1. 
Suppose all the previous assumptions and definitions hold. Furthermore, let us consider $\Theta\in L^2(\Omega\times(0,t),\mathcal{P},H_2)$ and $T\in L^2(\Omega,\mathcal{F}_t,H_2)$. Then, Equation (10) has a unique solution $(\phi,\hat\sigma)\in L^2(\Omega\times(0,t),\mathcal{P},H_1\times H_2)$ such that $\phi\in C([0,t],H_2)$ (almost surely), and moreover,
$$\mathbb{E}\sup_{s\le t}\|\phi(s,x(s),u(s))\|_{H_2}^2 + \mathbb{E}\int_0^t \Big(\|\phi[s,x(s),u(s)]\|_{H_1}^2 + \|\hat\sigma[s,x(s),u(s)]\|_{H_2}^2\Big)\,ds \le \zeta\Big(\mathbb{E}\int_0^t \|\Theta[s,x(s),u(s)]\|_{H_2}^2\,ds + \mathbb{E}\|T[t,x(t),u(t)]\|_{H_2}^2\Big),$$
where $\zeta = \zeta(\zeta_1,\zeta_2,t)$ is a constant.
Proof. 
To prove the above proposition, we proceed in two steps. In the first step, we assume that a solution to Equation (10) exists and demonstrate that this solution is unique. In the second step, we establish the existence of the solution.
Proposition 3.2 of Du et al. (2010) [20] and Hu et al. [23] imply that a process $(\phi,\hat\sigma)$ is an $\mathcal{F}_s$-adapted, $H_1\times H_2$-valued solution of Equation (10) if $\phi\in L^2(\Omega\times(0,t),\mathcal{P},H_1)$ and $\hat\sigma\in L^2(\Omega\times(0,t),\mathcal{P},H_2)$ are such that, for each $\eta\in H_1$ and $(\omega,s)\in\Omega\times[0,t]$ (almost everywhere),
$$\big(\eta,\ \phi(s,x(s),u(s))\big) = \int_0^t \big\langle\eta,\ \Xi_1\phi(s,x(s),u(s)) + \Xi_2\hat\sigma(s,x(s),u(s)) + \Theta(s,x(s),u(s))\big\rangle\,ds - \int_0^t \hat\sigma(s,x(s),u(s))\,dW_s.$$
Lemma 3.1 in [20] implies $\phi(s,x(s),u(s))\in C([0,t],H_2)$ (almost surely).
Suppose $\mathbb{E}\sup_{s\le t}\|\phi(s,x(s),u(s))\|_{H_2}^2$ is finite; in other words,
$$\mathbb{E}\sup_{s\le t}\|\phi(s,x(s),u(s))\|_{H_2}^2 < \infty,$$
and we define a stopping time
$$\tau_k(\omega) = \inf\Big\{s;\ \sup_{s\le t}\|\phi(\omega,s,x(s),u(s))\|_{H_2}\ge k\Big\}\wedge t.$$
Condition (16) implies that $\tau_k(\omega)\to t$ almost surely as $k\to\infty$. After implementing Itô's formula, Equation (10) yields
ϕ ( t k , x ( t ) , u ( t ) ) = 0 s k [ V ( s , x ( s ) , u ( s ) ) ϕ ( s , x ( s ) , u ( s ) ) + Θ ( s , x ( s ) , u ( s ) ) + ϕ ( s , x ( s ) , u ( s ) ) x μ [ s , x ( s ) , u ( s ) , σ 2 ( s , x ( s ) , u ( s ) ) ] + 2 ϕ ( s , x ( s ) , u ( s ) ) 2 x 2 [ σ 1 ( s , x ( s ) , u ( s ) ) σ 2 ( s , x ( s ) , u ( s ) ) ] 2 ] d s + o s k ϕ ( s , x ( s ) , u ( s ) ) x σ 2 ( s , x ( s ) , u ( s ) ) d W s .
Moreover, Assumption 4 and Lemma 3.1 in [20] imply,
ϕ ( t k , x ( t ) , u ( t ) ) = 0 s k [ 2 ϕ ( s , x ( s ) , u ( s ) ) , Ξ 1 ϕ ( s , x ( s ) , u ( s ) ) + 2 ( Ξ 2 ϕ ( s , x ( s ) , u ( s ) ) , σ ^ ( s , x ( s ) , u ( s ) ) ) + 2 ϕ ( s , x ( s ) , u ( s ) ) , Θ ( s , x ( s ) , u ( s ) ) | | σ ^ ( s , x ( s ) , u ( s ) ) | | H 2 2 ] d s + 0 s k 2 ( ϕ ( s , x ( s ) , u ( s ) ) , σ 2 ( s , x ( s ) , u ( s ) ) d W s ,
and Assumption 5 yields
ϕ ( t k , x ( t ) , u ( t ) ) = 0 s k [ 2 ϕ ( s , x ( s ) , u ( s ) ) , Ξ 1 ϕ ( s , x ( s ) , u ( s ) ) + 2 ( Ξ 2 ϕ ( s , x ( s ) , u ( s ) ) , σ ^ ( s , x ( s ) , u ( s ) ) ) + 2 ϕ ( s , x ( s ) , u ( s ) ) , Θ ( s , x ( s ) , u ( s ) ) | | σ ^ ( s , x ( s ) , u ( s ) ) | | H 2 2 ] d s + 0 s k 2 ( ϕ ( s , x ( s ) , u ( s ) ) , σ 2 ( s , x ( s ) , u ( s ) ) d W s ζ ( ζ 2 ) 0 t ( | | ϕ ( s , x ( s ) , u ( s ) ) | | H 1 2 + | | σ 2 ( s , x ( s ) , u ( s ) ) | | H 2 2 + | | Θ ( s , x ( s ) , u ( s ) ) | | H 1 2 ) d s + 0 s k 2 ( ϕ ( s , x ( s ) , u ( s ) ) , σ 2 ( s , x ( s ) , u ( s ) ) d W s .
Burkholder–Davis–Gundy (BDG) [see Proposition A3 in Appendix A] implies
E | sup s k 0 t ( ϕ ( s , x ( s ) , u ( s ) ) , σ 2 ( s , x ( s ) , u ( s ) ) ) d W s | ζ [ E 0 k | | ϕ ( s , x ( s ) , u ( s ) ) | | H 2 2 | | σ ^ ( s , x ( s ) , u ( s ) ) | | H 2 2 ] 1 2 1 4 E sup s k | | ϕ ( s , x ( s ) , u ( s ) ) | | H 2 2 + ζ E 0 t | | σ ^ ( s , x ( s ) , u ( s ) ) | | H 2 2 d s .
Hence,
E sup s k | | ϕ ( s , x ( s ) , u ( s ) ) | | H 2 2 ζ ( ζ 2 ) 0 t ( | | ϕ ( s , x ( s ) , u ( s ) ) | | H 1 2 + | | σ 2 ( s , x ( s ) , u ( s ) ) | | H 2 2 + | | Θ ( s , x ( s ) , u ( s ) ) | | H 1 2 ) d s .
Since the constant $\zeta$ does not depend on $k$, letting $k\to\infty$ we obtain
$$\mathbb{E}\sup_{s\le t}\|\phi(s,x(s),u(s))\|_{H_2}^2 < \infty.$$
After using Itô's formula once more, as in condition (19), and using Assumption 5, we obtain
| | ϕ ( s , x ( s ) , u ( s ) ) | | H 2 2 = T ( t , x ( t ) , u ( t ) ) + 0 t [ 2 ϕ ( s , x ( s ) , u ( s ) ) , Ξ 1 ϕ ( s , x ( s ) , u ( s ) ) + 2 ( Ξ 2 ϕ ( s , x ( s ) , u ( s ) ) , σ ^ ( s , x ( s ) , u ( s ) ) ) + 2 ϕ ( s , x ( s ) , u ( s ) ) , Θ ( s , x ( s ) , u ( s ) ) | | σ ^ ( s , x ( s ) , u ( s ) ) | | H 2 2 ] d s + 0 t 2 ( ϕ ( s , x ( s ) , u ( s ) ) , σ 2 ( s , x ( s ) , u ( s ) ) d W s T ( t , x ( t ) , u ( t ) ) + 0 t [ 2 ϕ ( s , x ( s ) , u ( s ) ) , Ξ 1 ϕ ( s , x ( s ) , u ( s ) ) + ( 1 + ϵ ) | | Ξ 2 ϕ ( s , x ( s ) , u ( s ) ) | | H 2 2 + 1 1 + ϵ | | σ 2 ( s , x ( s ) , u ( s ) ) | | H 2 2 | | σ 2 ( s , x ( s ) , u ( s ) ) | | H 2 2 + ϵ | | ϕ ( s , x ( s ) , u ( s ) ) | | H 1 2 + 1 ϵ | | Θ ( s , x ( s ) , u ( s ) ) | | H 2 2 ] d s + 0 t 2 ( ϕ ( s , x ( s ) , u ( s ) ) , σ 2 ( s , x ( s ) , u ( s ) ) d W s T ( t , x ( t ) , u ( t ) ) + 0 t [ 2 ϵ ϕ ( s , x ( s ) , u ( s ) ) , Ξ 1 ϕ ( s , x ( s ) , u ( s ) ) + ( 1 + ϵ ) ( ζ 1 | | ϕ ( s , x ( s ) , u ( s ) ) | | H 1 2 + ζ 2 | | ϕ ( s , x ( s ) , u ( s ) ) | | H 2 2 ) ϵ 1 + ϵ | | σ 2 ( s , x ( s ) , u ( s ) ) | | H 2 2 + ϵ | | ϕ ( s , x ( s ) , u ( s ) ) | | H 1 2 + 1 ϵ | | Θ ( s , x ( s ) , u ( s ) ) | | H 2 2 ] d s + 0 t 2 ( ϕ ( s , x ( s ) , u ( s ) ) , σ 2 ( s , x ( s ) , u ( s ) ) d W s T ( t , x ( t ) , u ( t ) ) + 0 t [ { 2 ϵ ζ 2 ζ 1 ( 1 + ϵ ) + ϵ } | | ϕ ( s , x ( s ) , u ( s ) ) | | H 1 2 + ( 1 + ϵ ) ζ 2 | | ϕ ( s , x ( s ) , u ( s ) ) | | H 2 2 ϵ 1 + ϵ | | σ 2 ( s , x ( s ) , u ( s ) ) | | H 2 + 1 ϵ | | Θ ( s , x ( s ) , u ( s ) ) | | H 2 2 ] d s + 0 t 2 ( ϕ ( s , x ( s ) , u ( s ) ) , σ 2 ( s , x ( s ) , u ( s ) ) d W s .
Since $\epsilon\downarrow 0$, we have $2\epsilon\zeta_2 - \zeta_1(1+\epsilon) + \epsilon \to -\zeta_1 < 0$. Thus
| | ϕ ( s , x ( s ) , u ( s ) ) | | H 2 2 + 0 t | | ϕ ( s , x ( s ) , u ( s ) ) | | H 1 2 + | | σ 2 ( s , x ( s ) , u ( s ) ) | | H 2 d s ζ ( ζ 1 , ζ 2 ) [ T ( t , x ( t ) , u ( t ) ) + 0 t | | ϕ ( s , x ( s ) , u ( s ) ) | | H 2 2 + | | Θ ( s , x ( s ) , u ( s ) ) | | H 2 d s ] + 0 t 2 ( ϕ ( s , x ( s ) , u ( s ) ) , σ 2 ( s , x ( s ) , u ( s ) ) d W s .
Since $\mathbb{E}\sup_{s\le t}\|\phi(s,x(s),u(s))\|_{H_2}^2 < \infty$, we can repeat (22) with
$$\int_0^t \big(\phi(s,x(s),u(s)),\ \sigma_2(s,x(s),u(s))\big)\,dW_s$$
being a uniform martingale. After taking expectations on both sides of (23) and using Gronwall's inequality [see Corollary A1 in Appendix A], we get
sup s t E | | ϕ ( s , x ( s ) , u ( s ) ) | | H 2 2 + E 0 t | | ϕ ( s , x ( s ) , u ( s ) ) | | H 1 2 + | | σ 2 ( s , x ( s ) , u ( s ) ) | | H 2 d s ζ e ζ t ( E T ( t , x ( t ) , u ( t ) ) + E 0 t | | Θ ( s , x ( s ) , u ( s ) ) | | H 2 d s ) .
Equation (23) and the BDG inequality yield
E sup s t | | ϕ ( s , x ( s ) , u ( s ) ) | | H 2 2 ζ ( ζ 1 , ζ 2 , t ) [ T ( t , x ( t ) , u ( t ) ) + 0 t | | ϕ ( s , x ( s ) , u ( s ) ) | | H 2 2 + | | Θ ( s , x ( s ) , u ( s ) ) | | H 2 d s ] + 1 2 E sup s t | | ϕ ( s , x ( s ) , u ( s ) ) | | H 2 2 + ζ E 0 t | | σ 2 ( s , x ( s ) , u ( s ) ) | | H 2 2 d s .
Combining inequality (25) with (24) gives the estimate (20).
In this second part of the proof, we demonstrate the existence of the solution. Assume that $\{b_i : i=1,2,3,\dots\}$ is a complete orthogonal basis in $H_2$, which also serves as an orthogonal basis in $H_1$. We employ the Galerkin approximation method here. Now, consider our system of BSDEs without the terminal condition in $\mathbb{R}^1$, with the basis functions, as follows:
ϕ n i ( s , x ( s ) , u ( s ) ) = 0 t [ b i , Ξ 1 ( s , x ( s ) , u ( s ) ) b j ϕ n j ( s , x ( s ) , u ( s ) ) + b i , Ξ 2 ( s , x ( s ) , u ( s ) ) σ ^ n j ( s , x ( s ) , u ( s ) ) + b i , Θ ( s , x ( s , u ( s ) ) ) ] d s 0 t σ 2 n i ( s , x ( s ) , u ( s ) ) d W s .
where $\phi_n^i(s,x(s),u(s))$ and $\hat\sigma_n^j(s,x(s),u(s))$ are two unknown processes in $\mathbb{R}^1\times\mathbb{R}^1$. From Lemma 2 of Mahmudov et al. (2007) [24], we know that $\mathbb{E}[\langle b_i, T(t,x(t),u(t))\rangle] < \infty$ and $\mathbb{E}\int_0^t \langle b_i, \Theta(s,x(s),u(s))\rangle^2 < \infty$. Then, by Lemma 4.2 of Du et al. (2010) [20], there exists a unique solution of Equation (26). Define this solution as
$$\phi_n(s,x(s),u(s)) := \sum_{i=1}^n \phi_n^i(s,x(s),u(s))\,b_i,$$
and $\hat\sigma_n(s,x(s),u(s)) = \sum_{i=1}^n \hat\sigma_n^i(s,x(s),u(s))\,b_i$. After applying Itô's lemma and the Burkholder–Davis–Gundy inequality to $\|\phi_n(s,x(s),u(s))\|_{H_2}^2$, as in the previous part, we obtain
E sup s t | | ϕ n ( s , x ( s ) , u ( s ) ) | | H 2 2 + E 0 t | | ϕ n ( s , x ( s ) , u ( s ) ) | | H 1 2 + | | σ 2 n ( s , x ( s ) , u ( s ) ) | | H 2 d s ζ ( ζ 1 , ζ 2 ) ( E [ T ( t , x ( t ) , u ( t ) ) ] + E 0 t | | Θ ( s , x ( s ) , u ( s ) ) | | H 2 2 d s ) .
From (27) and Proposition 3.2 of Du et al. (2010) [20], we know that there exists a subsequence $\{n_1\}\subset\{n\}$ and $(\phi,\hat\sigma)\in L^2(\Omega\times(0,t),\mathcal{P},H_1\times H_2)$ such that
$$\phi_{n_1} \xrightarrow{\ w\ } \phi \in L^2(\Omega\times(0,t),\mathcal{P},H_1), \qquad \hat\sigma_{n_1} \xrightarrow{\ w\ } \hat\sigma \in L^2(\Omega\times(0,t),\mathcal{P},H_2).$$
Suppose : ( Ω , F ) R 1 and : [ 0 , t ] R 1 , where both of them are bounded measurable functions. Now the BSDE (26) with n N and b i { b i } , where i n becomes,
E 0 t ( s ) ( b i , ϕ n 1 ( s , x ( s ) , u ( s ) ) ) d s = E 0 t ( s ) { 0 t [ b i , Ξ 1 ( s , x ( s ) , u ( s ) ) b j ϕ n j ( s , x ( s ) , u ( s ) ) + b i , Ξ 2 ( s , x ( s ) , u ( s ) ) σ ^ n j ( s , x ( s ) , u ( s ) ) + b i , Θ ( s , x ( s , u ( s ) ) ) ] d s 0 t σ 2 n i ( s , x ( s ) , u ( s ) ) d W s } d s .
Furthermore,
E 0 t ( s ) ( b i , ϕ n 1 ( s , x ( s ) , u ( s ) ) ) d s E 0 t ( s ) ( b i , ϕ ( s , x ( s ) , u ( s ) ) ) d s .
In view of the second condition of Assumption 5 and estimate (27), we obtain
E | b i , Ξ 1 ϕ n 1 ( s , x ( s ) , u ( s ) ) d s | < ζ < .
Since ζ is independent of n 1 , for every s [ 0 , t ]
E 0 t b i , Ξ 1 ϕ n 1 ( s , x ( s ) , u ( s ) ) d s E 0 t b i , Ξ 1 ϕ ( s , x ( s ) , u ( s ) ) d s .
Applying Fubini’s theorem and Lebesgue-dominated convergence theorem yields
E 0 t ( s ) s 1 t b i , Ξ 1 ϕ n 1 ( s , x ( s ) , u ( s ) ) d s 1 d s = 0 t ( s ) E s 1 t b i , Ξ 1 ϕ n 1 ( s , x ( s ) , u ( s ) ) d s 1 d s 0 t ( s ) E s 1 t b i , Ξ 1 ϕ ( s , x ( s ) , u ( s ) ) d s 1 d s .
Similarly,
E 0 t ( s ) s 1 t b i , Ξ 2 σ ^ n 1 ( s , x ( s ) , u ( s ) ) d s 1 d s E 0 t ( s ) s 1 t b i , Ξ 2 σ ^ ( s , x ( s ) , u ( s ) ) d s 1 d s .
Assumption 5 and Equation (27) imply
E | 0 t ( b i , σ ^ n 1 ( s , x ( s ) , u ( s ) ) ) d W s | < ζ < .
Since ( b i , σ ^ n 1 ( s , x ( s ) , u ( s ) ) ) w ( b i , σ ^ ( s , x ( s ) , u ( s ) ) ) in L 2 ( 0 , t ) , the previous result implies
0 t ( b i , σ ^ n 1 ( s , x ( s ) , u ( s ) ) ) d W s w 0 t ( b i , σ ^ ( s , x ( s ) , u ( s ) ) ) d W s L 2 ( Ω , F t , R 1 ) .
Fubini’s theorem and Lebesgue’s dominated convergence theorem imply,
E 0 t ( s ) s 1 t b i , σ ^ n 1 ( s , x ( s ) , u ( s ) ) d W s d s E 0 t ( s ) s 1 t b i , σ ^ ( s , x ( s ) , u ( s ) ) d W s d s .
Therefore, we find for ( ω , s ) Ω × [ 0 , t ] (a.e.),
( b i , ϕ ( s , x ( t ) , u ( t ) ) = ( b i , T ( t , x ( t ) , u ( t ) ) ) + 0 t [ b i , Ξ 1 ( s , x ( s ) , u ( s ) ) b j ϕ j ( s , x ( s ) , u ( s ) ) + b i , Ξ 2 ( s , x ( s ) , u ( s ) ) σ ^ j ( s , x ( s ) , u ( s ) ) + b i , Θ ( s , x ( s , u ( s ) ) ) ] d s 0 t σ 2 i ( s , x ( s ) , u ( s ) ) d W s
Hence, we conclude that the BPPSDE has a unique solution. This completes the proof. □
In Lemma 1, we demonstrate the connection between a unique solution to a BPPSDE and path integrals. Then, using Assumptions 2–5 and Proposition 1, we establish the conditions under which a unique solution to the BPPSDE exists. Our goal is now to maximize the payoff function of a soccer player π ( s , x ( s ) , u ( s ) ) , where the stochastic goal dynamics take the form
$$dx(s) = \mu\big[s,x(s),u(s),\sigma_2(s,x(s),u(s))\big]\,ds + \big[\sigma_1(s,x(s),u(s)) - \sigma_2(s,x(s),u(s))\big]\,dW_s.$$
Let $x^*$ be the optimal goal dynamics corresponding to the optimal strategy $u^*(s,x(s))$. Following Du et al. (2013) [25], define $\rho\in[0,1)$ such that a strategy function of spike variation becomes
$$\tilde u(s,x(s)) := \begin{cases} u(s,x(s)), & \text{if } s\in[\rho,\rho+\epsilon),\\ u^*(s,x(s)), & \text{otherwise}, \end{cases}$$
where $\epsilon>0$ and $\epsilon\downarrow 0$, with any given strategy function $u(s,x(s))$. Condition (39) implies that $\tilde x(s)$ is the probability that a goal might be scored when strategy $\tilde u(s)$ is used.
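A direct way to read the spike variation in (39) is as a function that agrees with a given strategy $u$ on the interval $[\rho,\rho+\epsilon)$ and with the optimal strategy $u^*$ elsewhere. The sketch below encodes this; the feedback rules passed in are arbitrary placeholders used only for illustration.

```python
# Minimal sketch of the spike-variation strategy in (39): use the given strategy u
# on [rho, rho + eps) and the optimal strategy u_star elsewhere.
def spike_variation(u_star, u, rho, eps):
    def u_tilde(s, x):
        return u(s, x) if rho <= s < rho + eps else u_star(s, x)
    return u_tilde

# Example with placeholder feedback rules (illustrative only).
u_star = lambda s, x: 0.3            # optimal stubbornness
u_alt  = lambda s, x: 0.8            # perturbing stubbornness
u_tilde = spike_variation(u_star, u_alt, rho=0.5, eps=0.01)
print(u_tilde(0.505, 0.2), u_tilde(0.7, 0.2))   # 0.8 on the spike, 0.3 elsewhere
```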
Assumption 6. 
The stubbornness set $\mathcal{U}$, with $\mathcal{U}\ni u(s,x(s)):[0,t]\times H_2\to\mathbb{R}^1$, is convex, and the Brownian motion $W_s$ is independent of this set such that
$$\mathbb{E}\big[u(s,x(s))\,\big|\,\sigma(W_s)\big] = u(s,x(s)),$$
where $\sigma(W_s)$ is the $\sigma$-algebra generated by the Brownian motion.
Assumption 7. 
For each $(x,u)\in H_2\times\mathcal{U}$, we have that $\mu(\cdot,x,u)$, $\sigma_1(\cdot,x,u)$, $\sigma_2(\cdot,x,u)$, and $\pi(\cdot,x,u)$ are all predictable processes, where $\mathcal{U}$ is a non-empty Borel measurable subset of a metric space. We also assume that, for each $(s,x,u)\in(0,t)\times H_2\times\mathcal{U}$, the functions $\pi$, $\mu$, $\sigma_1$, and $\sigma_2$ are globally twice Fréchet differentiable with respect to the goal dynamics $x$. The functions $\mu_x$, $\sigma_{1x}$, $\sigma_{2x}$, $\pi_{xx}$, $\mu_{xx}$, $\sigma_{1xx}$ are dominated by $M$; $\sigma_{2xx}$ is continuous in $x(s)$ and dominated by $M(1+\|x\|_{H_2}+\|u\|_{H_1})$; and $\pi$ is dominated by $M(1+\|x\|_{H_2}^2+\|u\|_{H_1}^2)$.
Suppose $F = \{\pi,\mu,\sigma_1,\sigma_2,\pi_x,\mu_x,\sigma_{1x},\sigma_{2x},\pi_{xx},\mu_{xx},\sigma_{1xx}\}$ such that
$$\tilde F(s) := F(s,\tilde x(s),\tilde u(s)),\quad \tilde F^{\Delta}(s) := F(s,\tilde x(s),u(s)) - \tilde F(s),\quad \tilde F^{\delta}(s) := \tilde F^{\Delta}(s)\,\mathbb{I}_{[\rho,\rho+\epsilon]}(s),\quad \tilde F_{xx}(s) := 2\int_0^1 \tau_1\,F_{xx}\big(s,\ \tau_1 x(s) + (1-\tau_1)\tilde x(s),\ \tilde u(s)\big)\,d\tau_1.$$
Theorem 1. 
Let Assumptions 2 through 7 hold. Define a penalized payoff function $P:[0,t]\times H_1\times[0,1]\to\mathbb{R}^1$ of the form
$$P := \pi(s,x(s),u(s)) + \big\langle p_1,\ \mu\big(s,x(s),u(s),\sigma_2(s,x(s),u(s))\big)\big\rangle + \big\langle p_2,\ \sigma_1(s,x(s),u(s)) - \sigma_2(s,x(s),u(s))\big\rangle.$$
Furthermore, assume $\tilde x(s)$ is the optimal goal dynamics corresponding to the stubbornness $\tilde u(s)$. Then
$$dp(s) = -P_x\big(s,\tilde x(s),\tilde u(s),p(s),q(s)\big)\,ds + q(s)\,dW_s$$
has a unique weak solution, where $q$ is another adjoint process.
Proof. 
Lemmas 5.2 and 5.3 in [25] and Proposition 1 directly imply this result. □
Theorem 1 establishes the existence of a weak solution once we construct a deterministic Hamiltonian, since, under stochastic control theory, the Hamiltonian and Lagrangian approaches are analogous.

2.4. Payoff Function

In this section, we formally define the expected payoff function of a single soccer player. Let $\mathbb{E}$ be the expectation with respect to the probability measure $\mathbb{P}$, let $\mathbb{E}_0\{\cdot\,|\,\mathcal{F}_t\} = \mathbb{E}\{\cdot\,|\,\mathcal{F}_t, x(0)=x_0\}$ be the conditional expectation based on the fixed random initial goal dynamics $x(0)$, and let the stubbornness $u(s)\in[0,1]$ be adapted to the filtration $\mathcal{F}_t$. Define an expected payoff function
$$J(u) := \mathbb{E}_0\bigg\{\int_0^t \exp(-rs)\,\pi\big(s,x(s),u(s)\big)\,ds + M\big(x(t)\big)\,\Big|\,\mathcal{F}_0\bigg\},$$
where $M(x(t))$ is the terminal condition, and an individual soccer player's payoff function can be defined as
$$\pi\big(s,x(s),u(s)\big) := \theta + \sum_{i=1}^{3}\alpha_i\,x(s) - c\,\big(u(s)\big)^2\,(r-\bar\mu)\,x(s),$$
where $\theta>0$ is the risk of injury, $\alpha_1$ is the constant assist rate at time $s$, $\alpha_2$ is the constant pass accuracy at time $s$, $\alpha_3$ is the dribbling skill of a player, $c>0$ represents a constant marginal cost of the player that comes with the sacrifices made by that player to be an eligible member of the team, and $\bar\mu$ is the average drift coefficient of the BPPSDE expressed in Equation (1). Moreover, $c\,(u(s))^2(r-\bar\mu)\,x(s)$ is the performance cost of a player, and $M(x(t)) = \omega\exp(-rt)\,x(t)$ is the terminal bonus offered to each player, where $\omega$ is some positive constant [26]. The soccer player's objective is to find $J(u^*) = \sup_u J(u)$. The payoff function $\pi: I\times\mathcal{X}\times\mathcal{U}\to\mathbb{R}$, for all $\mathcal{X}\subseteq\mathbb{R}$ and $\mathcal{U}\subseteq\mathbb{R}$, has a local maximum at $u^*\in\mathcal{U}$ if there exists a finite number $\epsilon>0$ such that, for all $u\in(u^*-\epsilon,u^*+\epsilon)\cap\mathcal{U}$, $J(u)\le J(u^*)$.
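As a sanity check on the structure of the objective in (43), the expected payoff $J(u)$ for a fixed stubbornness level can be approximated by simulating the goal dynamics and averaging the discounted running payoff plus the terminal bonus. The sketch below uses placeholder values for $\theta$, $\alpha_i$, $c$, $r$, $\bar\mu$, $\omega$ and a toy SDE for $x(s)$; it illustrates the form of $J(u)$ only and is not the paper's calibrated model.

```python
import numpy as np

# Monte Carlo sketch of J(u) = E[ int_0^t e^{-rs} pi(s, x(s), u) ds + omega e^{-rt} x(t) ],
# with pi(s, x, u) = theta + (a1 + a2 + a3) x - c u^2 (r - mu_bar) x.
# All parameter values and the state dynamics below are illustrative assumptions.
theta, alphas, c, r, mu_bar, omega = 0.1, (0.2, 0.3, 0.1), 1.0, 0.05, 0.02, 0.5

def pi_payoff(s, x, u):
    return theta + sum(alphas) * x - c * u**2 * (r - mu_bar) * x

def expected_payoff(u, t=1.0, n_steps=200, n_paths=10000, seed=2):
    rng = np.random.default_rng(seed)
    dt = t / n_steps
    x = np.zeros(n_paths)
    J = np.zeros(n_paths)
    for i in range(n_steps):
        s = i * dt
        J += np.exp(-r * s) * pi_payoff(s, x, u) * dt
        x += (0.5 * u - 0.1 * x) * dt + 0.2 * np.sqrt(dt) * rng.normal(size=n_paths)
    J += omega * np.exp(-r * t) * x          # terminal bonus M(x(t))
    return float(J.mean())

print(expected_payoff(0.4))
```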
Lemma 2.
If the payoff function $J(\cdot)$ is differentiable on the open functional space $\mathcal{U}$ and attains a local maximum at $u^*\in\mathcal{U}$, then $dJ/du = 0$ at $u^*$.
Proof. 
Let $J$ have a local maximum at $u^*\in\mathcal{U}$; that is, there exists a finite number $\epsilon>0$ such that, for all $u\in(u^*-\epsilon,u^*+\epsilon)\cap\mathcal{U}$, $J(u)\le J(u^*)$. The first-order total derivative of $J$ with respect to $u$ at $u^*$ is
$$\frac{d}{du}J(u^*) = \lim_{u\to u^*}\frac{J(u)-J(u^*)}{u-u^*}.$$
Since $J(u^*)$ is a local maximum, the numerator of the limit is never positive, while the denominator is positive when $u>u^*$ and negative when $u<u^*$. As we assume $J(\cdot)$ is differentiable at $u^*$, the left and right limits exist and are equal. This can only occur if $dJ/du = 0$. This completes the proof. □
We assume $J(\cdot)$ is continuously differentiable (i.e., smooth) with respect to $u$ in $(u^*-\epsilon,u^*+\epsilon)\cap\mathcal{U}$, $\epsilon>0$. Define $u - u^* = \epsilon\kappa$. For $\epsilon\downarrow 0$, a second-order Taylor series expansion implies
$$J(u) = J(u^*) + \epsilon\kappa\,\frac{d}{du}J(u^*) + \frac{\epsilon^2}{2!}\kappa^2\,\frac{d^2}{du^2}J(u^*) + O(\epsilon^3).$$
If $dJ/du\neq 0$ at $u^*$ and $\epsilon\downarrow 0$, the sign of $J(u)-J(u^*)$ is that of $\epsilon\kappa\,dJ/du$, which changes with the sign of $\kappa$; since $\kappa$ can be non-zero and of either sign, this contradicts $u^*$ being a local maximum. Hence, $\kappa\,dJ/du = 0$, that is, $dJ/du = 0$ at $u^*$. Therefore, the Taylor series expansion shows that the sign of the difference $J(u)-J(u^*)$ is that of the quadratic term. If $d^2J/du^2 < 0$, then $J(u)$ attains a local maximum at $u^*$.
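The first- and second-order conditions above can be checked numerically for any smooth approximation of $J(u)$: locate the maximizer on a grid and verify $dJ/du\approx 0$ and $d^2J/du^2<0$ there by central differences. In the sketch below, `J_toy` is a smooth stand-in for the expected payoff (an illustrative assumption); any smooth estimator of $J(u)$, such as the Monte Carlo one sketched earlier evaluated with common random numbers, can be substituted.

```python
import numpy as np

# Numerical check of Lemma 2: at an interior maximizer u*, dJ/du ~ 0 and d2J/du2 < 0.
def J_toy(u):
    return 0.1 + 0.6 * u - 0.9 * u**2        # concave toy payoff, interior maximum at u = 1/3

def first_and_second_derivative(J, u, h=1e-4):
    d1 = (J(u + h) - J(u - h)) / (2 * h)      # central differences
    d2 = (J(u + h) - 2 * J(u) + J(u - h)) / h**2
    return d1, d2

grid = np.linspace(0.0, 1.0, 201)
u_star = float(grid[int(np.argmax([J_toy(u) for u in grid]))])
d1, d2 = first_and_second_derivative(J_toy, u_star)
print(u_star, d1, d2)   # d1 near zero and d2 negative at the interior maximizer
```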

3. Computation of the Optimal Stubbornness

In this section, we construct a stochastic Lagrangian based on the system consisting of Equations (1) and (43). In soccer, designing a measure of a player's stubbornness that rewards past performance is, in many ways, a much simpler task than designing one that predicts future performance. This exercise boils down to assigning values to the stubbornness in the decision making of a player during a match or matches. Therefore, the objective of a player is to maximize (43) subject to the SDE (1). Following Equation (45) of Ewald et al. (2024) [27], for a player, the stochastic Lagrangian at time $s\in[0,t]$ is defined as
$$\hat{L}\big(s,x,\lambda,u\big) = \mathbb{E}\bigg\{ \int_0^t \exp(-rs)\,\pi\big(s,x(s),u(s)\big)\,ds + M\big(x(t)\big) + \int_0^t \Big[x(s) - x_0 - \int_0^s \mu\big[\nu,x(\nu),u(\nu),\sigma_2[\nu,x(\nu),u(\nu)]\big]\,d\nu - \int_0^s \sigma[\nu,x(\nu),u(\nu)]\,dW_\nu\Big]\,d\lambda(s)\bigg\},$$
where λ ( s ) is the Lagrangian multiplier.
Proposition 2. 
For a player, if $X_0 = \{x(s),\ s\in[0,t]\}$ is the goal dynamics, then the optimal stubbornness as a feedback Nash equilibrium, $u^*(s,x)\in\mathcal{U}$, is the solution of the following equation:
$$\frac{\partial f(s,x,u)}{\partial u}\,\frac{\partial^2 f(s,x,u)}{(\partial x)^2} = 2\,\frac{\partial f(s,x,u)}{\partial x}\,\frac{\partial^2 f(s,x,u)}{\partial x\,\partial u},$$
where, for an Itô process $h(s,x)$ on $[0,t]\times\mathbb{R}$,
$$f(s,x,u) = \exp(-rs)\,\pi\big(s,x(s),u(s)\big)\,ds + M\big(x(t)\big) + h(s,x)\,d\lambda(s) + \frac{\partial h(s,x)}{\partial s}\,d\lambda(s) + \frac{d\lambda(s)}{ds}\,h(s,x) + \frac{\partial h(s,x)}{\partial x}\,\mu\big(s,x,u,\sigma_2(s,x,u)\big)\,d\lambda(s) + \frac{1}{2}\,\big[\sigma(s,x,u)\big]^2\,\frac{\partial^2 h(s,x)}{(\partial x)^2}\,d\lambda(s).$$
Proof. 
The Euclidean action function of a player can be represented as
$$\mathcal{A}_{0,t}(x) = \int_0^t \mathbb{E}_s\Big\{ \exp(-rs)\,\pi\big(s,x(s),u(s)\big)\,ds + M\big(x(t)\big) + \big[x(s) - x_0 - \mu\big(s,x,u,\sigma_2(s,x,u)\big)\,ds - \sigma(s,x,u)\,dB(\nu)\big]\,d\lambda(s)\Big\},$$
where $\mathbb{E}_s$ is the conditional expectation on the goal dynamics $x(s)$ at the beginning of time $s$ [28,29]. For all $\varepsilon>0$, and the normalizing constant $L_\varepsilon>0$, define a transitional probability in a small time interval as
$$\Psi_{s,s+\varepsilon}(x) := \frac{1}{L_\varepsilon}\int_{\mathbb{R}} \exp\big\{-\varepsilon\,\mathcal{A}_{s,s+\varepsilon}(x)\big\}\,\Psi_s(x)\,dx(s),$$
for $\varepsilon\downarrow 0$, where $\Psi_s(x)$ is the value of the transition probability at $s$ and goal dynamics $x(s)$, with the initial condition $\Psi_0(x) = \Psi_0$.
For the continuous time interval $[s,\tau]$, where $\tau = s+\varepsilon$, the stochastic Lagrangian is
$$\mathcal{A}_{s,\tau}(x) = \int_s^\tau \mathbb{E}_s\Big\{\exp(-r\nu)\,\pi\big(\nu,x(\nu),u(\nu)\big)\,d\nu + \Big[x(\nu) - x_0 - \mu\big(\nu,x,u,\sigma_2(\nu,x,u)\big)\,d\nu - \sigma(\nu,x,u)\,dB(\nu)\Big]\,d\lambda(\nu)\Big\},$$
with the constant initial condition x ( 0 ) = x 0 . This conditional expectation is valid when the stubbornness u ( ν ) of a player’s goal dynamics is determined at time ν such that all other players’ goal dynamics are given [30]. The evolution takes place as the action function is stationary. Therefore, the conditional expectation with respect to time only depends on the expectation of the initial time point of interval [ s , τ ] .
Fubini’s Theorem implies,
A s , τ ( x ) = E s { s τ exp ( r ν ) π ( ν , x ( ν ) , u ( ν ) ) d ν + [ x ( ν ) x 0 μ ν , x , u , σ 2 ( ν , x , u ) d ν σ ν , x , u d B ( ν ) ] d λ ( ν ) } .
By Itô’s Theorem [31], there exists a function h [ ν , x ( ν ) ] C 2 ( [ 0 , ) × R ) such that Y ( ν ) = h [ ν , x ( ν ) ] , where Y ( ν ) is an Itô process.
Assuming
$$h\big[\nu+\Delta\nu,\ x(\nu)+\Delta x(\nu)\big] = x(\nu) - x_0 - \mu\big(\nu,x,u,\sigma_2(\nu,x,u)\big)\,d\nu - \sigma(\nu,x,u)\,dB(\nu),$$
Equation (50) implies,
$$\mathcal{A}_{s,\tau}(x) = \mathbb{E}_s\int_s^\tau \Big\{\exp(-r\nu)\,\pi\big(\nu,x(\nu),u(\nu)\big)\,d\nu + h\big[\nu+\Delta\nu,\ x(\nu)+\Delta x(\nu)\big]\,d\lambda(\nu)\Big\}.$$
Itô’s Lemma implies,
ε A s , τ ( x ) = E s { ε exp ( r s ) π ( s , x ( s ) , u ( s ) ) + ε h [ s , x ( s ) ] d λ ( s ) + ε h s [ s , x ( s ) ] d λ ( s ) + ε h x [ s , x ( s ) ] μ s , x ( s ) , u ( s ) , σ 2 ( s , x ( s ) , u ( s ) ) d λ ( s ) + ε h x [ s , x ( s ) ] σ s , x ( s ) , u ( s ) d λ ( s ) d B ( s ) + 1 2 ε σ s , x ( s ) , u ( s ) 2 h x x [ s , x ( s ) ] d λ ( s ) + o ( ε ) } ,
where h s = s h , h x = x h and h x x = 2 ( x ) 2 h , and we use the condition [ d x ( s ) ] 2 ε with d x ( s ) ε μ [ s , x ( s ) , u ( s ) , σ 2 ( s , x ( s ) , u ( s ) ) ] + σ [ s , x ( s ) , u ( s ) ] d B ( s ) .
We use Itô Lemma and a similar approximation to approximate the integral. With ε 0 , dividing throughout by ε and taking the conditional expectation yields,
ε A s , τ ( x ) = E s { ε exp ( r s ) π ( s , x ( s ) , u ( s ) ) + ε h [ s , x ( s ) ] d λ ( s ) + ε h s [ s , x ( s ) ] d λ ( s ) + ε h x [ s , x ( s ) ] μ s , x ( s ) , u ( s ) , σ 2 ( s , x ( s ) , u ( s ) ) d λ ( s ) + 1 2 ε σ 2 s , x ( s ) , u ( s ) h x x [ s , x ( s ) ] d λ ( s ) + o ( 1 ) } ,
since E s [ d B ( s ) ] = 0 and E s [ o ( ε ) ] / ε 0 for all ε 0 . For ε 0 , denote a transition probability at s as Ψ s ( x ) . Hence, using Equation (48), the transition function yields
Ψ s , τ ( x ) = 1 L ϵ i R exp { ε [ exp ( r s ) π ( s , x ( s ) , u ( s ) ) + h [ s , x ( s ) ] d λ ( s ) + h s [ s , x ( s ) ] d λ ( s ) + h x [ s , x ( s ) ] μ s , x ( s ) , u ( s ) , σ 2 ( s , x ( s ) , u ( s ) ) d λ ( s ) + 1 2 σ s , x ( s ) , u ( s ) 2 h x x [ s , x ( s ) ] d λ ( s ) ] } Ψ s ( x ) d x ( s ) + o ( ε 1 / 2 ) .
Since ε 0 , first-order Taylor series expansion on the left-hand side of Equation (54) yields
Ψ s ( x ) + ε Ψ s ( x ) s + o ( ε ) = 1 L ε R exp { ε [ exp ( r s ) π ( s , x ( s ) , u ( s ) ) + h [ s , x ( s ) ] d λ ( s ) + h s [ s , x ( s ) ] d λ ( s ) + h x [ s , x ( s ) ] μ s , x ( s ) , u ( s ) , σ 2 ( s , x ( s ) , u ( s ) ) d λ ( s ) + 1 2 σ s , x ( s ) , u ( s ) 2 h x x [ s , x ( s ) ] d λ ( s ) ] } Ψ s ( x ) d x ( s ) + o ( ε 1 / 2 ) .
For any given s and τ define x ( s ) x ( τ ) : = ξ such that x ( s ) = x ( τ ) + ξ [32]. For the instance where ξ is not around zero, for a positive number η < assume | ξ | η ε x ( s ) such that for ε 0 , ξ attains smaller values and the goal dynamics 0 < x ( s ) η ε / ( ξ ) 2 . Thus,
Ψ s ( x ) + ε Ψ s ( x ) s = 1 L ϵ R Ψ s ( x ) + ξ Ψ i s ( x ) x + o ( ϵ ) × exp { ε [ exp ( r s ) π ( s , x ( s ) , u ( s ) ) + h [ s , x ( s ) ] d λ ( s ) + h x [ s , x ( s ) ] μ s , x ( s ) , u ( s ) , σ 2 ( s , x ( s ) , u ( s ) ) d λ ( s ) + 1 2 σ s , x ( s ) , u ( s ) 2 h x x [ s , x ( s ) ] d λ ( s ) ] } d ξ + o ( ε 1 / 2 ) .
Before solving for a Gaussian integral of each term of the right-hand side of the above Equation, define a C 2 function
f [ s , ξ , λ ( s ) , u ( s ) ] = exp ( r s ) π ( s , x ( s ) + ξ , u ( s ) ) + h [ s , x ( s ) + ξ ] d λ ( s ) + h s [ s , x ( s ) + ξ ] d λ ( s ) + h x [ s , x ( s ) + ξ ] μ s , x ( s ) + ξ , u ( s ) , σ 2 ( s , x ( s ) + ξ , u ( s ) ) d λ ( s ) + 1 2 σ 2 s , x ( s ) + ξ , u ( s ) h x x [ s , x ( s ) + ξ ] d λ ( s ) + o ( 1 ) .
Hence,
Ψ s ( x ) + ε Ψ s ( x ) s = Ψ s ( x ) 1 L ϵ R exp ε f [ s , ξ , λ ( s ) , u ( s ) ] d ξ + Ψ s ( x ) x 1 L ϵ R ξ exp ε f [ s , ξ , λ ( s ) , u ( s ) ] d ξ + o ( ε 1 / 2 ) .
After taking ε 0 , Δ u 0 and a Taylor series expansion with respect to x of f [ s , ξ , λ ( s ) , u ( s ) ] yields,
f [ s , ξ , λ ( s ) , u ( s ) ] = f [ s , x ( τ ) , λ ( s ) , u ( s ) ] + f x [ s , x ( τ ) , λ ( s ) , γ , u ( s ) ] [ ξ x ( τ ) ] + 1 2 f x x [ s , x ( τ ) , λ ( s ) , u ( s ) ] [ ξ x ( τ ) ] 2 + o ( ε ) .
Define y : = ξ x ( τ ) so that d ξ = d y . The first integral on the right-hand side of Equation (56) yields
R exp { ε f [ s , ξ , λ ( s ) , u ( s ) ] } d ξ = exp ε f [ s , x ( τ ) , λ ( s ) , u ( s ) ] R exp { ε [ f x [ s , x ( τ ) , λ ( s ) , u ( s ) ] y + 1 2 f x x [ s , x ( τ ) , λ ( s ) , u ( s ) ] y 2 ] } d y .
Assuming a = 1 2 f x x [ s , x ( τ ) , λ ( s ) , u ( s ) ] and b = f x [ s , x ( τ ) , λ ( s ) , u ( s ) ] the argument of the exponential function in Equation (57) becomes,
a ( y ) 2 + b y = a ( y ) 2 + b a y a y + b 2 a 2 ( b ) 2 4 ( a ) 2 ,
as a > 0 and a 0 . We are using the fact that for a very small positive value of a, a 2 a .
Therefore,
exp ε f [ s , x ( τ ) , λ ( s ) , u ( s ) ] R exp ε [ a ( y ) 2 + b y ] d y = exp ε b 2 4 a 2 f [ s , x ( τ ) , λ ( s ) , u ( s ) ] R exp ε a y + b 2 a 2 d y = π ε a exp ε b 2 4 a 2 f [ s , x ( τ ) , λ ( s ) , u ( s ) ] ,
and
Ψ s ( x ) 1 L ε R exp { ε f [ s , ξ , λ ( s ) , u ( s ) ] } d ξ = Ψ s ( x ) 1 L ε π ε a exp ε b 2 4 a 2 f [ s , x ( τ ) , λ ( s ) , u ( s ) ] .
Substituting ξ = x ( τ ) + y into the second integrand of the right-hand side of Equation (56) yields
R ξ exp ε { f [ s , ξ , λ ( s ) , u ( s ) ] } d ξ = exp { ε f [ s , x ( τ ) , λ ( s ) , u ( s ) ] } R [ x ( τ ) + y ] exp ε a y 2 + b y d y = exp ε b 2 4 a 2 f [ s , x ( τ ) , λ ( s ) , u ( s ) ] [ x ( τ ) π ε a + R y exp ε a y + b 2 a 2 d y ] .
Substituting k = y + b / ( 2 a ) in Equation (61) yields,
exp ε b 2 4 a 2 f [ s , x ( τ ) , λ ( s ) , u ( s ) ] [ x ( τ ) π ε a + R k b 2 a exp [ a ε k 2 ] d k ] = exp ε b 2 4 a 2 f [ s , x ( τ ) , λ ( s ) , u ( s ) ] [ x ( τ ) b 2 a ] π ε a .
Hence,
1 L ε Ψ s ( x ) x R ξ exp ε f [ s , ξ , λ ( s ) , u ( s ) ] d ξ = 1 L ε Ψ s ( x ) x exp ε b 2 4 a 2 f [ s , x ( τ ) , λ ( s ) , u ( s ) ] [ x ( τ ) b 2 a ] π ε a .
Plugging in Equations (60) and (63) into Equation (56) implies,
Ψ s ( x ) + ε Ψ s ( x ) s = 1 L ε π ε a Ψ s ( x ) exp ε b 2 4 a 2 f [ s , x ( τ ) , λ ( s ) , u ( s ) ] + 1 L ε Ψ s ( x ) x π ε a exp ε b 2 4 a 2 f [ s , x ( τ ) , λ ( s ) , u ( s ) ] [ x ( τ ) b 2 a ] + o ( ε 1 / 2 ) .
Let f be in Schwartz space. This leads to derivatives that are rapidly falling and furthermore, assuming 0 < | b | η ε , 0 < | a | 1 2 [ 1 ( ξ ) 2 ] 1 and x ( s ) x ( τ ) = ξ yields,
x ( τ ) b 2 a = x ( s ) ξ b 2 a = x ( s ) b 2 a , ξ 0 ,
such that
| x ( s ) b 2 a | = | η ε ( ξ ) 2 η ε 1 1 ( ξ ) 2 | η ε .
Therefore, the Wick rotated Schrödinger-type Equation for the player is,
Ψ s ( x ) s = b 2 4 a 2 f [ s , x ( τ ) , λ ( s ) , u ( s ) ] Ψ s ( x ) .
Differentiating Equation (65) with respect to stubbornness yields
2 f x f x x f x x f x u f x f x x u ( f x x ) 2 f u Ψ s ( x ) = 0 ,
where f x = x f , f x x = 2 ( x ) 2 f , f x u = 2 x u f and f x x u = 3 ( x ) 2 u f = 0 . Thus, optimal feedback stubbornness of a player in stochastic goal dynamics is represented as u ( s , x ) and is found by setting Equation (66) equal to zero. Hence, u ( s , x ) is the solution of the following Equation
f u ( f x x ) 2 = 2 f x f x u .
This completes the proof. □
Remark 1.
The central idea of Proposition 2 is to choose h appropriately. Therefore, one natural candidate should be a function of the integrating factor of the stochastic goal dynamics represented in Equation (1).
To demonstrate the preceding proposition, we present a detailed example to identify an optimal stubbornness under this environment. Consider a player has to maximize the expected payoff expressed in Equation (44)
J ( u ) : = E 0 0 t exp ( r s ) θ + i = 1 3 α i x ( s ) c ( u ( s ) ) 2 ( r μ ¯ ) x ( s ) d s + ω exp ( r t ) x ( t ) | F 0 ,
subject to the goal dynamics represented by a BPPSDE
d x ( s ) = a x ( s ) σ 2 x ( s ) u ( s ) d s + σ 1 σ 2 x ( s ) d W ( s ) ,
where a is a constant, σ 1 and σ 2 are constant volatilities, and the diffusion component is σ 1 σ 2 x ( s ) . We are going to implement Proposition 2 to determine the optimal stubbornness. By this problem
f ( s , x , u ) = exp ( r s ) θ + i = 1 3 α i x ( s ) c ( u ( s ) ) 2 ( r μ ¯ ) x ( s ) + ω exp ( r t ) x ( t ) + h ( s , x ) d λ ( s ) + h ( s , x ) s d λ ( s ) + d λ ( s ) d s h ( s , x ) + h ( s , x ) x a x ( s ) σ 2 x ( s ) u ( s ) d λ ( s ) + 1 2 σ 1 σ 2 x ( s ) 2 2 h ( s , x ) ( x ) 2 d λ ( s ) .
In Equation (70), we treat the terminal condition as a constant; hence, define ω exp ( r t ) x ( t ) = M ¯ . The diffusion part σ 1 σ 2 x ( s ) of the SDE (69) suggests that we can simplify the equation by focusing on this part. One common approach is to consider an exponential integrating factor to counterbalance the σ 2 x ( s ) term in both the drift and diffusion terms. Therefore, the integrating factor h ( s , x ) = exp ( σ 2 x ( s ) ) . Therefore, Equation (70) yields
f ( s , x , u ) = exp ( r s ) θ + i = 1 3 α i x ( s ) c ( u ( s ) ) 2 ( r μ ¯ ) x ( s ) + M ¯ + exp ( σ 2 x ( s ) ) d λ ( s ) + d λ ( s ) d s exp ( σ 2 x ( s ) ) + σ 2 exp ( σ 2 x ( s ) ) a x ( s ) σ 2 x ( s ) u ( s ) d λ ( s ) + 1 2 σ 1 σ 2 x ( s ) 2 ( σ 2 ) 2 exp ( σ 2 x ( s ) ) .
Hence,
u f ( s , x , u ) = exp ( r s ) 2 c u ( s ) ( r μ ¯ ) x ( s ) σ 2 exp ( σ 2 x ( s ) ) d λ ( s ) ,
f ( s , x , u ) x ( s ) = exp ( r s ) θ + i = 1 3 α i c ( u ( s ) ) 2 2 ( r μ ¯ ) x ( s ) 3 / 2 + σ 2 exp ( σ 2 x ( s ) ) [ d λ ( s ) + d λ ( s ) d s + a 2 x ( s ) σ 2 d λ ( s ) + σ 2 a x ( s ) σ 2 x ( s ) u ( s ) d λ ( s ) ] σ 1 σ 2 x ( s ) ( σ 2 ) 3 exp ( σ 2 x ( s ) ) + 1 2 σ 1 σ 2 x ( s ) 2 ( σ 2 ) 3 exp ( σ 2 x ( s ) ) ,
2 f ( s , x , u ) x ( s ) 2 = exp ( r s ) 15 c ( u ( s ) ) 2 4 ( r μ ¯ ) x ( s ) 5 / 2 + ( σ 2 ) 2 exp ( σ 2 x ( s ) ) d λ ( s ) + ( σ 2 ) 2 d λ ( s ) d s exp ( σ 2 x ( s ) ) + ( σ 2 ) 2 exp ( σ 2 x ( s ) ) a 2 x ( s ) σ 2 d λ ( s ) σ 2 exp ( σ 2 x ( s ) ) 3 a 4 x ( s ) 5 / 2 d λ ( s ) + ( σ 2 ) 3 exp ( σ 2 x ( s ) ) a x ( s ) σ 2 x ( s ) u ( s ) d λ ( s ) + ( σ 2 ) 2 exp ( σ 2 x ( s ) ) a 2 x ( s ) d λ ( s ) ( σ 2 ) 2 exp ( σ 2 x ( s ) ) a 4 x ( s ) 3 / 2 d λ ( s ) + ( σ 2 ) 4 exp ( σ 2 x ( s ) ) σ 1 σ 2 x ( s ) ( σ 2 ) 5 exp ( σ 2 x ( s ) ) + 1 2 σ 1 σ 2 x ( s ) 2 ( σ 2 ) 4 exp ( σ 2 x ( s ) ) ,
and
2 f ( s , x , u ) x u = c exp ( r s ) u ( s ) ( r μ ¯ ) x ( s ) 3 / 2 .
Equation (66) yields,
exp ( r s ) 2 c u ( s ) ( r μ ¯ ) x ( s ) A 1 × exp ( r s ) 15 c ( u ( s ) ) 2 4 ( r μ ¯ ) x ( s ) 5 / 2 ( σ 2 ) 3 exp ( σ 2 x ( s ) ) u ( s ) d λ ( s ) + A 3 2 = 2 exp ( r s ) c ( u ( s ) ) 2 2 ( r μ ¯ ) x ( s ) 3 / 2 ( σ 2 ) 2 exp ( σ 2 x ( s ) ) u ( s ) d λ ( s ) + A 2 c exp ( r s ) u ( s ) ( r μ ¯ ) x ( s ) 3 / 2 ,
where
A 1 = σ 2 exp ( σ 2 x ( s ) ) d λ ( s ) , A 2 = exp ( r s ) θ + i = 1 3 α i + σ 2 exp ( σ 2 x ( s ) ) [ d λ ( s ) + d λ ( s ) d s + a 2 x ( s ) σ 2 d λ ( s ) + σ 2 a x ( s ) σ 2 x ( s ) d λ ( s ) ] σ 1 σ 2 x ( s ) ( σ 2 ) 3 exp ( σ 2 x ( s ) ) + 1 2 σ 1 σ 2 x ( s ) 2 ( σ 2 ) 3 exp ( σ 2 x ( s ) ) , A 3 = ( σ 2 ) 2 exp ( σ 2 x ( s ) ) d λ ( s ) + ( σ 2 ) 2 d λ ( s ) d s exp ( σ 2 x ( s ) ) + ( σ 2 ) 2 exp ( σ 2 x ( s ) ) a 2 x ( s ) σ 2 d λ ( s ) σ 2 exp ( σ 2 x ( s ) ) 3 a 4 x ( s ) 5 / 2 d λ ( s ) + ( σ 2 ) 3 exp ( σ 2 x ( s ) ) a x ( s ) σ 2 x ( s ) d λ ( s ) + ( σ 2 ) 2 exp ( σ 2 x ( s ) ) a 2 x ( s ) d λ ( s ) ( σ 2 ) 2 exp ( σ 2 x ( s ) ) a 4 x ( s ) 3 / 2 d λ ( s ) + ( σ 2 ) 4 exp ( σ 2 x ( s ) ) σ 1 σ 2 x ( s ) ( σ 2 ) 5 exp ( σ 2 x ( s ) ) + 1 2 σ 1 σ 2 x ( s ) 2 ( σ 2 ) 4 exp ( σ 2 x ( s ) ) .
Given the complexity of the terms (including exponents and products), the equation may result in a high-degree polynomial in u ( s ) , and an explicit solution of stubbornness requires further simplification or assumptions to reduce the degree of the polynomial. Assume the effect of d λ ( s ) is very small (i.e., d λ ( s ) 0 ). This would remove A 1 , ( σ 2 ) 3 exp ( σ 2 x ( s ) ) u ( s ) d λ ( s ) , and ( σ 2 ) 2 exp ( σ 2 x ( s ) ) u ( s ) d λ ( s ) from Equation (76). Equation (76) implies
exp ( r s ) 2 c u ( s ) ( r μ ¯ ) x ( s ) exp ( r s ) 15 c ( u ( s ) ) 2 4 ( r μ ¯ ) x ( s ) 5 / 2 + A 3 2 = 2 exp ( r s ) c ( u ( s ) ) 2 2 ( r μ ¯ ) x ( s ) 3 / 2 + A 2 c exp ( r s ) u ( s ) ( r μ ¯ ) x ( s ) 3 / 2 .
Simplifying the left and the right-hand sides of Equation (77), we obtain
exp ( 2 r s ) 2 c u ( s ) ( r μ ¯ ) x ( s ) 15 c ( u ( s ) ) 2 4 ( r μ ¯ ) x ( s ) 5 / 2 + A 3 2 ,
and
exp ( 2 r s ) c 2 u ( s ) 3 ( r μ ¯ ) 2 x ( s ) 3 2 A 2 c exp ( r s ) u ( s ) ( r μ ¯ ) x ( s ) 3 / 2 ,
respectively. Therefore, equating both sides and dividing by exp ( 2 r s ) yields,
2 c u ( s ) ( r μ ¯ ) x ( s ) 15 c ( u ( s ) ) 2 4 ( r μ ¯ ) x ( s ) 5 / 2 + A 3 2 = c 2 u ( s ) 3 ( r μ ¯ ) 2 x ( s ) 3 2 A 2 c u ( s ) exp ( 3 r s ) ( r μ ¯ ) x ( s ) 3 / 2 2 c u ( s ) ( r μ ¯ ) x ( s ) 15 c ( u ( s ) ) 2 4 ( r μ ¯ ) x ( s ) 5 / 2 + A 3 2 c 2 u ( s ) 3 ( r μ ¯ ) 2 x ( s ) 3 + 2 A 2 c u ( s ) exp ( 3 r s ) ( r μ ¯ ) x ( s ) 3 / 2 = 0 u ( s ) 2 c ( r μ ¯ ) x ( s ) 15 c ( u ( s ) ) 2 4 ( r μ ¯ ) x ( s ) 5 / 2 + A 3 2 c 2 u ( s ) 2 ( r μ ¯ ) 2 x ( s ) 3 + 2 A 2 c exp ( 3 r s ) ( r μ ¯ ) x ( s ) 3 / 2 = 0 .
This gives one solution u ( s ) = 0 , and we need to find the non-trivial solution by solving:
2 c ( r μ ¯ ) x ( s ) 15 c ( u ( s ) ) 2 4 ( r μ ¯ ) x ( s ) 5 / 2 + A 3 2 c 2 u ( s ) 2 ( r μ ¯ ) 2 x ( s ) 3 + 2 A 2 c exp ( 3 r s ) ( r μ ¯ ) x ( s ) 3 / 2 = 0 .
The above equation is quadratic in u ( s ) . Let k 1 = 2 c ( r μ ¯ ) x ( s ) , k 2 = 15 c 4 ( r μ ¯ ) x ( s ) 5 / 2 , k 3 = c 2 ( r μ ¯ ) 2 x ( s ) 3 , and k 4 = 2 A 2 c exp ( 3 r s ) ( r μ ¯ ) x ( s ) 3 / 2 . We express the equation as k 1 k 2 u ( s ) 4 + ( 2 k 1 k 2 A 3 k 3 ) u ( s ) 2 + ( k 1 A 3 2 + k 4 ) = 0 . Define z ( s ) : = u ( s ) 2 . The quadratic formula implies
z ( s ) = 1 2 1 k 1 k 2 k 3 2 k 1 k 2 ± 1 ( k 1 k 2 ) 2 k 3 2 k 1 k 2 2 4 k 1 k 2 k 1 A 3 2 k 4 1 / 2 .
Therefore, the optimal stubbornness is,
u ( s ) = ± 1 2 1 k 1 k 2 k 3 2 k 1 k 2 ± 1 ( k 1 k 2 ) 2 k 3 2 k 1 k 2 2 4 k 1 k 2 k 1 A 3 2 k 4 1 / 2 1 / 2 .
Since we assume the stubbornness is a non-negative function, we ignore the negative squared-root part.

4. Conclusions

Over the past decade, research on modeling scores in soccer games has increasingly focused on dynamics to explain changes in team strengths over time. A crucial aspect of this involves evaluating the performance of all team players. Consequently, a game-theoretic approach to determining optimal stubbornness has become essential. To compute this optimal stubbornness, we begin by constructing a stochastic Lagrangian based on the payoff function and the BPPSDE. We then apply a Euclidean path integral approach, derived from the Feynman action function [33], over small continuous time intervals. Through Taylor series expansion and Gaussian integral solutions, we derive a Wick-rotated Schrödinger-type equation. The analytical solution for optimal stubbornness is obtained by taking the first derivative with respect to stubbornness. This method simplifies challenges associated with the value function in the Hamiltonian–Jacobi–Bellman (HJB) equation. Moreover, under the BPPSDE framework, the path integral control approach performs more effectively than the HJB equation.
Considering the significant impact of stubbornness research on predicting match outcomes—not only for the betting industry but also for soccer clubs and analytics teams—there is a strong motivation for researchers to explore this field. We believe many game theorists who are also soccer enthusiasts would welcome seeing this Moneyball effect extend into soccer. Ultimately, these concepts could be applied beyond soccer to any sport where score predictions rely on strength dynamics [34].

Funding

This research received no external funding.

Data Availability Statement

The author declares that no data have been used in this paper.

Conflicts of Interest

The author declares that he has no conflicts of interest.

Appendix A

Proposition A1. 
A stochastic differential equation can be represented as a Feynman path integration.
Proof. 
For the simplistic case, consider the Feynman action function has the form
S ( x ) = 0 t π ( s , x ( s ) , u ( s ) ) + λ g ( s , x ( s ) , x ˙ ( s ) ) d s .
All the symbols have the same meaning as described in the main text. In this context, we assume that the penalization function λ g ( s , x ( s ) , x ˙ ( s ) ) is a proxy of an SDE, and the payoff function is stable and can be added to the drift part of the stochastic differential equation. Let the penalization function be of the form
d x d s = A ( s , x ( s ) , u ( s ) ) + σ ( s , x ( s ) , u ( s ) ) W s .
Including a payoff function yields
d x d s = π ( s , x ( s ) , u ( s ) ) + A ( s , x ( s ) , u ( s ) ) + σ ( s , x ( s ) , u ( s ) ) W s i . e . d x d s = μ ( s , x ( s ) , u ( s ) ) + σ ( s , x ( s ) , u ( s ) ) W s
where μ ( s , x ( s ) , u ( s ) ) = π ( s , x ( s ) , u ( s ) ) + A ( s , x ( s ) , u ( s ) ) . Therefore, after using condition (A1), our new general Langevin form becomes,
d x d s = μ ( s , x ( s ) , u ( s ) ) + σ ( s , x ( s ) , u ( s ) ) W s .
Equation (A2) can be written as,
d x = μ ( s , x ( s ) , u ( s ) ) d s + σ ( s , x ( s ) , u ( s ) ) d W s .
We further assume that the functions μ ( s , x ( s ) , u ( s ) ) and σ ( s , x ( s ) , u ( s ) ) obey all the properties of Ito’s stochastic differential equation. We want to derive a probability density function (PDF) for a soccer player’s goal dynamics at time s [i.e., x ( s ) ]. The decentralized form of Equation (A3) with the small Ito interpretation of small time step h is
x j + 1 x j = μ j h + σ j ω j h , j { 0 , 1 , , n } ,
where the initial time is zero, x j = x ( 0 + j h ) , T = n h , μ j = μ ( 0 + j h , x j ) , σ j = σ ( 0 + j h , x j ) , ω j is a normally distributed discrete random variable with w j = 0 and w i w j = Δ j , k . Chow (2015) [35] defines Δ j , k as the Kronecker delta function.
The conditional PDF of goal dynamics can be written as,
Ψ ( x | ω ) = j = 0 n δ ( x j + 1 x j μ j h σ j ω j h )
The probability in Equation (A5) is nothing but the delta Dirac function constrained on the stochastic differential equation. We know the Fourier transformation of delta Dirac function is,
δ ( b j ) = 1 2 π R exp { i L j b j } d b j
where i is the complex number. After putting the stochastic differential Equation (A4) into L j , we obtain,
Ψ ( x | ω ) = R j = 0 n 1 2 π e i j b j ( x j + 1 x j μ j h σ j ω j h ) d b j .
Chow (2015) [35] suggests that the zero mean unit-variance Gaussian white noise density function can be written as
Ψ ( ω j ) = 1 2 π e ( 1 / 2 ) ω j 2
Combining Equations (A7) and (A8), we obtain,
Ψ ( 0 , t , x 0 , x t ) = R Ψ ( x | ω ) j = 0 n Ψ ( ω j ) d ω j = R j = 0 n 1 2 π e i j b j ( x j + 1 x j μ j h ) d b j × R j = 0 n 1 2 π e i b j σ j ω j h e ( 1 / 2 ) ω j 2 d ω j = R j = 0 n 1 2 π e j ( i b j ) x j + 1 x j h μ j h + j ( 1 / 2 ) 2 σ j 2 ( i b j ) 2 h d b j
Since h 0 , n such that T = n h , Equation (A9) yields
Ψ ( 0 , t , x 0 , x t ) = R e 0 t [ x ˜ ( s ) ( x ˙ ( s ) μ ( s , x ( s ) , u ( s ) ) ) 1 2 x ˜ ( s ) 2 σ ( s , x ( s ) , u ( s ) ) ] d s D x
with a newly defined complex variable i b j x ˜ . Chow et al. (2015) [35] imply that although they use continuum notation for covariance, x ( s ) does not need to be differentiable and they interpret the action function by discreet definition. In the power of the exponential integrand in Equation (A10), there are two parts, one is real and another is imaginary. Furthermore, x ˜ ( s ) ( x ˙ ( s ) μ ( s , x ( s ) , u ( s , m ) ) ) is the only imaginary part as i b j x ˜ ( s ) . In our proposed integral, we do not have any imaginary part. Therefore, we take the absolute value of it to obtain the magnitude of this complex number. Therefore,
x ˜ ( s ) ( x ˙ ( s ) μ ( s , x ( s ) , u ( s ) ) ) = x ˜ ( s ) 2 ( x ˙ ( s ) μ ( s , x ( s ) , u ( s ) ) ) 2 = x ˜ ( s ) 2 ( x ˙ ( s ) μ ( s , x ( s ) , u ( s ) ) ) = | x ˜ ( s ) ( x ˙ ( s ) μ ( s , x ( s ) , u ( s ) ) ) |
Condition (A11) implies
S ( x ) = 0 t x ˜ ( s ) 2 ( x ˙ ( s ) μ ( s , x ( s ) , u ( s ) ) ) 1 2 x ˜ ( s ) 2 σ ( s , x ( s ) , u ( s ) ) d s
The action function defined in Equation (A12) is the integration of the Lagrangian, which includes the dynamics of the system. This function and the function defined as
S ( x ) = 0 t π ( s , x ( s ) , u ( s ) ) + λ g ( s , x ( s ) , x ˙ ( s ) , u ( s ) ) d s
are the same. Throughout this paper, one of the main objectives is to find a solution of a dynamic payoff function with an SDE as the penalization function. In this context, the stochastic part appears only from the penalization function and λ is a parameter. □
Proposition A2. 
The Gaussian integral value of R exp q ϵ ( 1 + β ) t ξ 2 + λ ϵ ( 1 + β ) t ξ d ξ is
exp λ 2 ϵ 3 4 q ( 1 + β ) t ϵ π ( 1 + β ) t q .
Proof. 
Define α 1 : = 2 q ϵ 1 ( 1 + β ) t and α 2 : = λ ϵ ( 1 + β ) t . Now we are interested in
R exp 1 2 α 1 ξ 2 + α 2 ξ d ξ .
After factoring out, we obtain,
1 2 α 1 ξ 2 + α 2 ξ = 1 2 α 1 ξ 2 2 α 2 α 1 ξ + α 2 2 α 1 2 α 2 2 α 1 2 = 1 2 α 1 ξ α 2 α 1 2 + α 2 2 2 α 1
After plugging in the result obtained in (A13) into the integral, we obtain
R exp 1 2 α 1 ξ 2 + α 2 ξ d ξ = R exp α 2 2 2 α 1 exp 1 2 α 1 ξ α 2 α 1 2 d ξ = exp α 2 2 2 α 1 R exp 1 2 α 1 ξ α 2 α 1 2 d ξ = exp α 2 2 2 α 1 R exp 1 2 α 1 ξ 2 d ξ = exp α 2 2 2 α 1 2 π α 1
After using the values of α 1 and α 2 we obtain,
R exp q ϵ ( 1 + β ) t ξ 2 + λ ϵ ( 1 + β ) t ξ d ξ = exp λ 2 ϵ 3 4 q ( 1 + β ) t ϵ π ( 1 + β ) t q .
Let us consider the SDE (1).
Lemma A1. 
Consider the stochastic differential equation,
d x = μ { s , x ( s ) , u ( s ) , σ 2 [ s , x ( s ) , u ( s ) ] } d s + { σ 1 ( s , x ( s ) , u ( s ) ) σ 2 [ s , x ( s ) , u ( s ) ] } d W s ,
where μ , u , σ 1 and σ 2 are real valued functions, and ϕ [ s , x ( s ) , u ( s ) ] : [ 0 , t ] × R × [ 0 , 1 ] R is a continuous and at least twice differentiable in x and u function that is at least one time differentiable with respect to s. Then
d ϕ [ s , x ( s ) , u ( s ) ] = { ϕ [ s , x ( s ) , u ( s ) ] s + ϕ [ s , x ( s ) , u ( s ) ] x μ s , x ( s ) , u ( s ) , σ 2 [ s , x ( s ) , u ( s ) ] + 1 2 2 ϕ [ s , x ( s ) , u ( s ) ] x 2 σ 1 [ s , x ( s ) , u ( s ) ] σ 2 [ s , x ( s ) , u ( s ) ] 2 } d s + ϕ [ s , x ( s ) , u ( s ) ] x σ 1 [ s , x ( s ) , u ( s ) ] d W s ϕ [ s , x ( s ) , u ( s ) ] x σ 2 [ s , x ( s ) , u ( s ) ] d W s .
Proof. 
The general Itô formula (∅ksendal 2003, Theorem 4.2.1) [31] implies
d ϕ [ s , x ( s ) , u ( s ) ] = ϕ [ s , x ( s ) , u ( s ) ] s d s + ϕ [ s , x ( s ) , u ( s ) ] x d x ( s ) + ϕ [ s , x ( s ) , u ( s ) ] u d u ( s ) + 1 2 2 ϕ [ s , x ( s ) , u ( s ) ] x 2 d x 2 ( s ) + 1 2 2 ϕ [ s , x ( s ) , u ( s ) ] x u d x ( s ) d u ( s ) + 1 2 2 ϕ [ s , x ( s ) , u ( s ) ] x u d x ( s ) d u ( s ) + 1 2 2 ϕ [ s , x ( s ) , u ( s ) ] u 2 d u 2 ( s )
Let us assume the stubbornness is somewhat neutral over time such that d u ( s ) = 0 . The stubbornness neutrality means a player is consistent with their stubbornness, if there is any inconsistency then, they would not be able to make the team. Therefore, Equation (A17) becomes,
d ϕ [ s , x ( s ) , u ( s ) ] = ϕ [ s , x ( s ) , u ( s ) ] s d s + ϕ [ s , x ( s ) , u ( s ) ] x d x ( s ) + 1 2 2 ϕ [ s , x ( s ) , u ( s ) ] x 2 d x 2 ( s )
Substituting the stochastic differential equation given in Equation (A16) into Equation (A17) yields,
d ϕ [ s , x ( s ) , u ( s ) ] = ϕ [ s , x ( s ) , u ( s ) ] s d s + ϕ [ s , x ( s ) , u ( s ) ] x { μ s , x ( s ) , u ( s ) , σ 2 [ s , x ( s ) , u ( s ) ] d s + σ 1 [ s , x ( s ) , u ( s ) ] σ 2 [ s , x ( s ) , u ( s ) ] d W s } + 1 2 2 ϕ [ s , x ( s ) , u ( s ) ] x 2 { μ s , x ( s ) , u ( s ) , σ 2 [ s , x ( s ) , u ( s ) ] d s + σ 1 [ s , x ( s ) , u ( s ) ] σ 2 [ s , x ( s ) , u ( s ) ] d W s } 2 .
Therefore,
d ϕ [ s , x ( s ) , u ( s ) ] = ϕ [ s , x ( s ) , u ( s ) ] s d s + ϕ [ s , x ( s ) , u ( s ) ] x μ s , x ( s ) , u ( s ) , σ 2 [ s , x ( s ) , u ( s ) ] d s + ϕ [ s , x ( s ) , u ( s ) ] x σ 1 [ s , x ( s ) , u ( s ) ] σ 2 [ s , x ( s ) , u ( s ) ] d W s + 1 2 2 ϕ [ s , x ( s ) , u ( s ) ] x 2 μ 2 s , x ( s ) , u ( s ) , σ 2 [ s , x ( s ) , u ( s ) ] d s 2 + 1 2 2 ϕ [ s , x ( s ) , u ( s ) ] x 2 σ 1 [ s , x ( s ) , u ( s ) ] σ 2 [ s , x ( s ) , u ( s ) ] 2 d W s 2 + 2 ϕ [ s , x ( s ) , u ( s ) ] x 2 μ s , x ( s ) , u ( s ) , σ 2 [ s , x ( s ) , u ( s ) ] σ 1 [ s , x ( s ) , u ( s ) ] σ 2 [ s , x ( s ) , u ( s ) ] d s d W s .
The differential rules from Itô’s formula imply d s 2 = d s d W s = 0 and d W s 2 = d s . Equation (A20) yields,
d ϕ [ s , x ( s ) , u ( s ) ] = ϕ [ s , x ( s ) , u ( s ) ] s d s + ϕ [ s , x ( s ) , u ( s ) ] x μ s , x ( s ) , u ( s ) , σ 2 [ s , x ( s ) , u ( s ) ] d s + 1 2 2 ϕ [ s , x ( s ) , u ( s ) ] x 2 σ 1 [ s , x ( s ) , u ( s ) ] σ 2 [ s , x ( s ) , u ( s ) ] 2 d s + ϕ [ s , x ( s ) , u ( s ) ] x σ 1 [ s , x ( s ) , u ( s ) ] σ 2 [ s , x ( s ) , u ( s ) ] d W s ,
giving the result. □
Proposition A3
(Burkholder–Davis–Gundy inequality). Let { M } s = { ϕ ( s , x ( s ) , u ( s ) ) , σ 2 ( s , x ( s ) , u ( s ) ) } be a continuous local martingale such that M 0 = 0 and s [ 0 , t ] . There exist two constants ζ ρ and ζ such that,
ζ ρ [ E 0 k | | ϕ ( s , x ( s ) , u ( s ) ) | | H 2 2 | | σ ^ ( s , x ( s ) , u ( s ) ) | | H 2 2 ] δ 2 E | sup s k 0 t ( ϕ ( s , x ( s ) , u ( s ) ) , σ 2 ( s , x ( s ) , u ( s ) ) ) d W s | δ ζ [ E 0 k | | ϕ ( s , x ( s ) , u ( s ) ) | | H 2 2 | | σ ^ ( s , x ( s ) , u ( s ) ) | | H 2 2 ] δ 2
where δ [ 0 , ) and ζ ρ and ζ are independent of t > 0 and { M } s [ 0 , t ] . Furthermore, when δ = 1 then we obtain,
ζ ρ [ E 0 k | | ϕ ( s , x ( s ) , u ( s ) ) | | H 2 2 | | σ ^ ( s , x ( s ) , u ( s ) ) | | H 2 2 ] 1 2 E | sup s k 0 t ( ϕ ( s , x ( s ) , u ( s ) ) , σ 2 ( s , x ( s ) , u ( s ) ) ) d W s | ζ [ E 0 k | | ϕ ( s , x ( s ) , u ( s ) ) | | H 2 2 | | σ ^ ( s , x ( s ) , u ( s ) ) | | H 2 2 ] 1 2
Proof. 
Suppose, { M } s [ 0 , t ] = { ϕ ( s , x ( s ) , u ( s ) ) , σ 2 ( s , x ( s ) , u ( s ) ) } is a continuous local martingale and M s [ 0 , t ] = | | ϕ ( s , x ( s ) , u ( s ) ) | | H 2 2 | | σ ^ ( s , x ( s ) , u ( s ) ) | | H 2 2 . Let us define δ 1 > 2 and κ ( 0 , 1 ) . Itô formula used in Krylov (1981) [22] on { M } s [ 0 , t ] implies
d | M s [ 0 , t ] | δ 1 = δ 1 | M s [ 0 , t ] | δ 1 1 sgn ( M s [ 0 , t ] ) d M s [ 0 , t ] + 1 2 δ 1 ( δ 1 1 ) | M s [ 0 , t ] | δ 1 2 d M s [ 0 , t ] = δ 1 sgn ( M s [ 0 , t ] ) | M s [ 0 , t ] | δ 1 1 d M s [ 0 , t ] + 1 2 δ 1 ( δ 1 1 ) | M s [ 0 , t ] | δ 1 2 d M s [ 0 , t ] .
For every bounded stopping time υ , Doob’s stopping theorem yields
E | M υ [ 0 , t ] | δ 1 | F 0 1 2 δ 1 ( δ 1 1 ) E { 0 υ | M s [ 0 , t ] | δ 1 2 d M s [ 0 , t ] | F 0 } .
Finally, Lenglart’s domination inequality [36] implies
E [ ( sup s ( 0 , t ) | M s [ 0 , t ] | δ 1 ) κ ] 2 κ 1 κ ( 1 2 δ 1 ( δ 1 1 ) ) κ E [ ( 0 t | M s [ 0 , t ] | δ 1 2 d M s [ 0 , t ] | F 0 ) κ ] .
Now in inequality (A25), if we find out the bound of the expectation part, then we can obtain the upper bound of this entire inequality. Therefore,
E [ ( 0 t | M s [ 0 , t ] | δ 1 2 d M s [ 0 , t ] | F 0 ) κ ] E [ ( sup s [ 0 , t ] | M s [ 0 , t ] | ) κ ( δ 1 2 ) ( 0 t d M s [ 0 , t ] | F 0 ) κ ] E [ ( sup s [ 0 , t ] | M s [ 0 , t ] | ) κ δ 1 ] 1 2 δ 1 E [ M t κ δ 1 2 ] 2 δ 1 .
After using the result of the inequality (A26) in (A25), we obtain the upper bound of the expectation as
E [ ( sup s [ 0 , t ] | M s [ 0 , t ] | δ 1 ) κ ] 2 κ 1 κ ( 1 2 δ 1 ( δ 1 1 ) ) κ E [ ( sup s [ 0 , t ] | M s [ 0 , t ] | ) κ δ 1 ] 1 2 δ 1 E [ M t κ δ 1 2 ] 2 δ 1 .
Now as δ = δ 1 κ , for s k [ 0 , t ] we obtain,
E [ sup s k | M s [ 0 , t ] | δ ] ζ E [ M t δ 2 ] E | sup s k 0 t ( ϕ ( s , x ( s ) , u ( s ) ) , σ 2 ( s , x ( s ) , u ( s ) ) ) d W s | δ ζ [ E 0 k | | ϕ ( s , x ( s ) , u ( s ) ) | | H 2 2 | | σ ^ ( s , x ( s ) , u ( s ) ) | | H 2 2 ] δ 2 .
Furthermore, when δ = 1 , we have
E | sup s k 0 t ( ϕ ( s , x ( s ) , u ( s ) ) , σ 2 ( s , x ( s ) , u ( s ) ) ) d W s | ζ [ E 0 k | | ϕ ( s , x ( s ) , u ( s ) ) | | H 2 2 | | σ ^ ( s , x ( s ) , u ( s ) ) | | H 2 2 ] 1 2 .
Therefore, inequality (A29) shows the right inequality of the proposition. Oksendal (2013) [31] implies
M s [ 0 , t ] 2 = M s [ 0 , t ] + 2 0 t M s [ 0 , t ] d M s [ 0 , t ] .
Thus,
E [ M t δ 2 ] ζ ρ ( E [ sup s [ 0 , t ] | M s [ 0 , t ] | δ ] + E [ sup s [ 0 , t ] | 0 t M s [ 0 , t ] d M s [ 0 , t ] | δ 2 ] ) .
Similarly,
E [ sup s k | 0 s M q [ 0 , t ] d M q [ 0 , t ] | δ 2 ] C δ E [ ( 0 k M q [ 0 , t ] 2 d M q [ 0 , t ] ) δ 4 ] C δ E [ ( sup s k | M q [ 0 , t ] | δ 2 M k δ 4 ) ] C δ E [ ( sup s k | M q [ 0 , t ] | ) δ ] 1 2 E [ M k ] 1 2 .
Therefore,
E [ M k 1 2 ] ζ ρ ( E [ ( sup s [ 0 , t ] M s [ 0 , t ] | M s [ 0 , t ] | ) δ ] + C δ E [ ( sup s k | M q [ 0 , t ] | ) δ ] 1 2 E [ M k ] 1 2 ) .
If we carefully look at inequality (A33), we find out that it is in the form of m 2 ζ ρ ( n 2 + C δ m n ) , which further implies ζ ρ m 2 n 2 [ as 2 m n 1 ϵ m 2 + ϵ n 2 ], for any chosen ϵ . Hence,
ζ ρ [ E 0 k | | ϕ ( s , x ( s ) , u ( s ) ) | | H 2 2 | | σ ^ ( s , x ( s ) , u ( s ) ) | | H 2 2 ] δ 2 E | sup s k 0 t ( ϕ ( s , x ( s ) , u ( s ) ) , σ 2 ( s , x ( s ) , u ( s ) ) ) d W s | δ ,
with δ = 1 we obtain,
ζ ρ [ E 0 k | | ϕ ( s , x ( s ) , u ( s ) ) | | H 2 2 | | σ ^ ( s , x ( s ) , u ( s ) ) | | H 2 2 ] 1 2 E | sup s k 0 t ( ϕ ( s , x ( s ) , u ( s ) ) , σ 2 ( s , x ( s ) , u ( s ) ) ) d W s | .
Inequalities (A35) and (A29) imply
ζ ρ [ E 0 k | | ϕ ( s , x ( s ) , u ( s ) ) | | H 2 2 | | σ ^ ( s , x ( s ) , u ( s ) ) | | H 2 2 ] 1 2 E | sup s k 0 t ( ϕ ( s , x ( s ) , u ( s ) ) , σ 2 ( s , x ( s ) , u ( s ) ) ) d W s | ζ [ E 0 k | | ϕ ( s , x ( s ) , u ( s ) ) | | H 2 2 | | σ ^ ( s , x ( s ) , u ( s ) ) | | H 2 2 ] 1 2 .
This completes the proof. □
Proposition A4
(Grownwall inequality). Let us assume H 1 is a Banach space such that there exists an open subset S b such that S b H 1 . Suppose, there exists two continuous functions such that f 1 , f 2 : [ α , β ] × S b H 1 and m 1 , m 2 : [ α , β ] S b satisfy the initial value problems
m 1 ( s ) = f 1 ( s , m 1 ( s ) ) , m 1 ( α ) = m 10 , m 2 ( s ) = f 2 ( s , m 2 ( s ) ) , m 2 ( α ) = m 20 .
There exists a constant ζ such that,
| | f 2 ( s , e 2 ) f 2 ( s , e 1 ) | | ζ | | e 2 e 1 | | ,
and a continuous function : [ α , β ] [ 0 , ) so that
| | f 1 ( s , m 1 ( s ) ) f 2 ( s , m 1 ( s ) ) | | ( s ) .
Then
| | m 1 ( t ) m 2 ( t ) | | e ζ | t α | | | m 10 m 20 | | + e ζ | t α | α t e ζ | s α | ( s ) d s ,
where s [ α , β ] .
Proof. 
For any C 1 function f : [ α , β ] H 1 , we know d d s | | f ( s ) | | | | f ( s ) | | . Consider m 1 ( . ) , m 2 ( . ) : [ α , β ] 2 H 1 2 . Then,
d d s | | m 1 ( s ) m 2 ( s ) | | | | m 1 ( s ) m 2 ( s ) | | = | | f 1 ( s , m 1 ( s ) ) f 2 ( s , m 2 ( s ) ) | | | | f 1 ( s , m 1 ( s ) ) f 2 ( s , m 2 ( s ) ) | | + | | f 2 ( s , m 1 ( s ) ) f 2 ( s , m 2 ( s ) ) | | ( s ) + ζ | | m 1 ( s ) m 2 ( s ) | | ,
where ( s ) : [ α , β ] [ 0 , ) , and it is assumed that | | f 1 ( s , m 1 ( s ) ) f 2 ( s , m 2 ( s ) ) | | ( s ) . After rearranging the inequality (A37), we obtain,
d d s | | m 1 ( s ) m 2 ( s ) | | ζ | | m 1 ( s ) m 2 ( s ) | | ( s ) .
After multiplying the integrating factor e ζ s in both sides of the inequality (A38), we obtain,
d d s ( e ζ s | | m 1 ( s ) m 2 ( s ) | | ) e ζ s ( s ) .
Therefore, the integral form becomes,
α t [ d d s ( e ζ s | | m 1 ( s ) m 2 ( s ) | | ) ] d s α t e ζ s ( s ) d s e ζ t | | m 1 ( t ) m 2 ( t ) | | e ζ α | | m 10 m 20 | | α t e ζ s ( s ) d s .
Inequality (A40) and the argument in the proposition are the same. This completes the proof. □
Corollary A1. 
Let us assume H 1 is a Banach space such that there exists an open subset S b such that S b H 1 . Suppose there exists a continuous function such that f 1 : [ α , β ] × S b H 1 and m 1 , m 2 : [ α , β ] S b satisfy the initial value problems
m 1 ( s ) = f 1 ( s , m 1 ( s ) ) , m 1 ( α ) = m 10 , m 2 ( s ) = f 2 ( s , m 2 ( s ) ) , m 2 ( α ) = m 20 .
There exists a constant ζ [ 0 , ) such that,
| | f 2 ( s , e 2 ) f 2 ( s , e 1 ) | | ζ | | e 2 e 1 | | .
Then
| | m 1 ( t ) m 2 ( t ) | | e ζ | t α | | | m 10 m 20 | | ,
for all t [ α , β ] .
Proof. 
In Proposition A4, assume that f 1 ( . ) = f 2 ( . ) . Since for any continuous function ( s ) : [ α , β ] [ 0 , ) , Proposition A4 implies that | | f 1 ( s , m 1 ( s ) ) f 2 ( s , m 1 ( s ) ) | | ( s ) . As f 1 and f 2 are the same, then ( s ) 0 for all s [ α , β ] . Therefore, the second right-hand term of the inequality
| | m 1 ( t ) m 2 ( t ) | | e ζ | t α | | | m 10 m 20 | | + e ζ | t α | α t e ζ | s α | ( s ) d s
vanishes and we remain with,
| | m 1 ( t ) m 2 ( t ) | | e ζ | t α | | | m 10 m 20 | | .
This completes the proof. □

References

  1. Pramanik, P. Optimization of market stochastic dynamics. In SN Operations Research Forum; Springer: Berlin/Heidelberg, Germany, 2020; Volume 1, pp. 1–17. [Google Scholar] [CrossRef]
  2. Pramanik, P.; Polansky, A.M. Optimization of a dynamic profit function using Euclidean path integral. SN Bus. Econ. 2023, 4, 8. [Google Scholar] [CrossRef]
  3. Lasry, J.M.; Lions, P.L. Mean field games. Jpn. J. Math. 2007, 2, 229–260. [Google Scholar] [CrossRef]
  4. Pramanik, P.; Polansky, A.M. Scoring a goal optimally in a soccer game under Liouville-like quantum gravity action. Oper. Res. Forum 2023, 4, 66. [Google Scholar] [CrossRef]
  5. Santos, R.M. Optimal Soccer Strategies. Econ. Inq. 2014, 52, 183–200. [Google Scholar] [CrossRef]
  6. Dobson, S.; Goddard, J. Optimizing strategic behaviour in a dynamic setting in professional team sports. Eur. J. Oper. Res. 2010, 205, 661–669. [Google Scholar] [CrossRef]
  7. Pramanik, P. Stochastic Control in Determining a Soccer Player’s Performance. J. Compr. Pure Appl. Math. 2024, 2, 111. [Google Scholar]
  8. Laksari, K.; Kurt, M.; Babaee, H.; Kleiven, S.; Camarillo, D. Mechanistic insights into human brain impact dynamics through modal analysis. Phys. Rev. Lett. 2018, 120, 138101. [Google Scholar] [CrossRef]
  9. Chacoma, A.; Almeira, N.; Perotti, J.I.; Billoni, O.V. Stochastic model for football’s collective dynamics. Phys. Rev. E 2021, 104, 024110. [Google Scholar] [CrossRef]
  10. Kiley, D.P.; Reagan, A.J.; Mitchell, L.; Danforth, C.M.; Dodds, P.S. Game story space of professional sports: Australian rules football. Phys. Rev. E 2016, 93, 052314. [Google Scholar] [CrossRef]
  11. Ruth, P.E.; Restrepo, J.G. Dodge and survive: Modeling the predatory nature of dodgeball. Phys. Rev. E 2020, 102, 062302. [Google Scholar] [CrossRef]
  12. Chacoma, A.; Almeira, N.; Perotti, J.I.; Billoni, O.V. Modeling ball possession dynamics in the game of football. Phys. Rev. E 2020, 102, 042120. [Google Scholar] [CrossRef] [PubMed]
  13. Yamamoto, K.; Narizuka, T. Preferential model for the evolution of pass networks in ball sports. Phys. Rev. E 2021, 103, 032302. [Google Scholar] [CrossRef] [PubMed]
  14. Percy, D.F. Strategy selection and outcome prediction in sport using dynamic learning for stochastic processes. J. Oper. Res. Soc. 2015, 66, 1840–1849. [Google Scholar] [CrossRef]
  15. Casal, C.A.; Maneiro, R.; Ardá, T.; Marí, F.J.; and Losada, J.L. Possession zone as a performance indicator in football. The game of the best teams. Front. Psychol. 2017, 8, 1176. [Google Scholar] [CrossRef]
  16. Palomino, F.; Rigotti, L.; Rustichini, A.; Rustichini, A. Skill, Strategy and Passion: An Empirical Analysis of Soccer; Tilburg University Tilburg: Tilburg, The Netherlands, 1998. [Google Scholar]
  17. Banerjee, A.N.; Swinnen, J.F.; Weersink, A. Skating on thin ice: Rule changes and team strategies in the NHL. Can. J. Econ. Can. D’économique 2007, 40, 493–514. [Google Scholar] [CrossRef]
  18. Pramanik, P. Optimal lock-down intensity: A stochastic pandemic control approach of path integral. Comput. Math. Biophys. 2023, 11, 20230110. [Google Scholar] [CrossRef]
  19. Pramanik, P. Estimation of optimal lock-down and vaccination rate of a stochastic sir model: A mathematical approach. Eur. J. Stat. 2024, 4, 3. [Google Scholar] [CrossRef]
  20. Du, K.; Meng, Q. A revisit to W2n-theory of super-parabolic backward stochastic partial differential equations in Rd. Stoch. Process. Their Appl. 2010, 120, 1996–2015. [Google Scholar] [CrossRef]
  21. Pramanik, P.; Polansky, A.M. Semicooperation under curved strategy spacetime. J. Math. Sociol. 2024, 48, 172–206. [Google Scholar] [CrossRef]
  22. Krylov, N.V.; Rozovskii, B.L. Stochastic evolution equations. J. Sov. Math. 1981, 16, 1233–1277. [Google Scholar] [CrossRef]
  23. Hu, Y.; Peng, S. Adapted solution of a backward semilinear stochastic evolution equation. Stoch. Anal. Appl. 1991, 9, 445–459. [Google Scholar] [CrossRef]
  24. Mahmudov, N.I.; McKibben, M.A. On backward stochastic evolution equations in Hilbert spaces and optimal control. Nonlinear Anal. Theory Methods Appl. 2007, 67, 1260–1274. [Google Scholar] [CrossRef]
  25. Du, K.; Meng, Q. A maximum principle for optimal control of stochastic evolution equations. SIAM J. Control Optim. 2013, 51, 4343–4362. [Google Scholar] [CrossRef]
  26. Yeung, D.W.; Petrosyan, L.A. Cooperative Stochastic Differential Games; Springer: Berlin/Heidelberg, Germany, 2006; Volume 42. [Google Scholar]
  27. Ewald, C.O.; Nolan, C. On the adaptation of the Lagrange formalism to continuous time stochastic optimal control: A Lagrange-Chow redux. J. Econ. Dyn. Control 2024, 162, 104855. [Google Scholar] [CrossRef]
  28. Pramanik, P.; Maity, A.K. Bayes factor of zero inflated models under jeffereys prior. arXiv 2024, arXiv:2401.03649. [Google Scholar]
  29. Pramanik, P.; Boone, E.L.; Ghanam, R.A. Parametric estimation in fractional stochastic differential equation. Stats 2024, 7, 745. [Google Scholar] [CrossRef]
  30. Pramanik, P. Dependence on Tail Copula. J 2024, 7, 127–152. [Google Scholar] [CrossRef]
  31. Oksendal, B. Stochastic Differential Equations: An Introduction with Applications; Springer Science & Business Media: Berlin/Heidelberg, Germany, 2013. [Google Scholar]
  32. Pramanik, P. Measuring Asymmetric Tails Under Copula Distributions. Eur. J. Stat. 2024, 4, 7. [Google Scholar] [CrossRef]
  33. Feynman, R.P. Space-time approach to non-relativistic quantum mechanics. Rev. Mod. Phys. 1948, 20, 367. [Google Scholar] [CrossRef]
  34. Pramanik, P.; Polansky, A.M. Motivation to Run in One-Day Cricket. Mathematics 2024, 12, 2739. [Google Scholar] [CrossRef]
  35. Chow, C.C.; Buice, M.A. Path integral methods for stochastic differential equations. J. Math. Neurosci. (JMN) 2015, 5, 1–35. [Google Scholar] [CrossRef]
  36. Lenglart, É. Relation de domination entre deux processus. In Annales de L’institut Henri Poincaré. Section B. Calcul des Probabilités et Statistiques; Gauthier-Villars: Paris, France, 1977; Volume 13, pp. 171–179. [Google Scholar]
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.

Share and Cite

MDPI and ACS Style

Pramanik, P. Stubbornness as Control in Professional Soccer Games: A BPPSDE Approach. Mathematics 2025, 13, 475. https://doi.org/10.3390/math13030475

AMA Style

Pramanik P. Stubbornness as Control in Professional Soccer Games: A BPPSDE Approach. Mathematics. 2025; 13(3):475. https://doi.org/10.3390/math13030475

Chicago/Turabian Style

Pramanik, Paramahansa. 2025. "Stubbornness as Control in Professional Soccer Games: A BPPSDE Approach" Mathematics 13, no. 3: 475. https://doi.org/10.3390/math13030475

APA Style

Pramanik, P. (2025). Stubbornness as Control in Professional Soccer Games: A BPPSDE Approach. Mathematics, 13(3), 475. https://doi.org/10.3390/math13030475

Note that from the first issue of 2016, this journal uses article numbers instead of page numbers. See further details here.

Article Metrics

Back to TopTop