Existence, Uniqueness, and Stability Analysis of the Probabilistic Functional Equation Emerging in Mathematical Biology and the Theory of Learning

Abstract: Probabilistic functional equations have been used to analyze various models in computational biology and learning theory. Notably, they are linked to the symmetry of the transformation underlying a system of functional equations. Our objective is to propose a generic probabilistic functional equation that covers most of the mathematical models addressed in the existing literature. Standard fixed-point tools are utilized to examine the existence, uniqueness, and stability of the suggested equation's solution. Two examples are also given to emphasize the significance of our findings.


Introduction
In animal or human learning, the learning phase may often be viewed as a series of choices among multiple possible reactions. Even in simple repetitive experiments under strictly regulated conditions, preference sequences are mostly volatile, suggesting that probability governs the choice of response. It is also helpful to identify structural adjustments in the series of alternatives that reflect trial-to-trial changes in outcomes. From this perspective, most analyses of learning describe the trial-to-trial evolution of response probabilities, that is, a stochastic mechanism.
Experiments in mathematical learning have shown that the behavior in a basic learning experiment follows a stochastic model; the idea itself is not novel (for details, see [1,2]). However, after 1950, two crucial characteristics emerged, mainly in the work of Bush, Estes, and Mosteller. First, one of the proposed models' most important features is the inclusive character of the learning process. Second, such models can be formulated in a way that does not hide their statistical features.
Symmetries have emerged in mathematical formulations many times, and they have been shown to be important for solving problems or furthering research. It is possible to see high-quality research that uses nontrivial mathematics and related geometries in the context of important issues from a wide range of fields.
In learning theory and mathematical biology, the solution of the equation

L(x) = x L((1 − u)x + u) + (1 − x) L((1 − v)x), for all x ∈ J := [0, 1], (1)

is of great importance, where L : J → R is unknown and 0 < u ≤ v < 1 are the learning-rate parameters that measure the effectiveness of the responses in a two-choice situation.
In 1976, Istrȃţescu [3] used the above functional equation to inspect the behavior of predatory animals that prey on two distinct types of prey. Markov transitions were used to describe this behavior by converting the state x to u + (1 − u)x and to (1 − v)x, respectively. Bush and Wilson [1] used such operators to examine the movement of a fish in two-choice circumstances. They claimed that under such behavior, there are four possible events: left-reward, right-nonreward, right-reward, and left-nonreward.
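As a quick numerical illustration, the classical two-choice equation can be solved by straightforward fixed-point (Picard) iteration on a grid. This is only a sketch: the equation form L(x) = x L((1 − u)x + u) + (1 − x) L((1 − v)x) with boundary data L(0) = 0 and L(1) = 1, the parameter values, and the grid size are all illustrative assumptions, not specifics taken from the paper.

```python
import numpy as np

# Illustrative learning-rate parameters, 0 < u <= v < 1 (assumed values).
u, v = 0.3, 0.6

xs = np.linspace(0.0, 1.0, 201)   # grid on J = [0, 1]
L = xs.copy()                      # initial guess L_0(x) = x

for _ in range(1000):
    # One Picard step: (ML)(x) = x L((1-u)x + u) + (1-x) L((1-v)x).
    # Off-grid values of L are obtained by linear interpolation.
    L_new = xs * np.interp((1 - u) * xs + u, xs, L) \
          + (1 - xs) * np.interp((1 - v) * xs, xs, L)
    if np.max(np.abs(L_new - L)) < 1e-13:
        L = L_new
        break
    L = L_new

print(L[0], L[-1])   # the boundary values 0 and 1 are preserved by the iteration
```

The iterates fix the endpoints and stay monotone, so the limit is the discrete solution of the equation on the grid.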
It is widely assumed that being rewarded on one side increases the probability of that side being selected in the following trial. However, the reasoning for non-rewarded trials is less apparent. According to an extinction or reinforcement theory (see Table 1), the probability of choosing an unrewarded side in the next trial would decrease. In contrast, a model that relies on habit formation or secondary reinforcement (see Table 2) suggests that simply choosing a side would increase the probability of selecting that side in upcoming trials.

Table 1. Operators for the reinforcement-extinction model.

Table 2. Operators for the habit-formation model.

[Table columns: Fish's Responses; Outcomes (Left Side); Outcomes (Right Side); Events.]
In 2015, Berinde and Khan [4] generalized the above idea by proposing the functional equation

L(x) = x L(z1(x)) + (1 − x) L(z2(x)), for all x ∈ J, (2)

where L : J → R and z1, z2 : J → J are given contraction mappings with z1(1) = 1 and z2(0) = 0. Recently, Turab and Sintunavarat [5] utilized the above ideas and suggested the functional Equation (3), where L : J → R is unknown, 0 < r ≤ s < 1, and Θ1, Θ2 ∈ J. That functional equation was used to study a specific kind of psychological resistance of dogs enclosed in a small box. Several other studies on human and animal actions in probability-learning scenarios have produced different results (see [6-12]).
Here, following the above work with the four possible events (right-reward, right-nonreward, left-reward, left-nonreward) discussed by Bush and Wilson [1], we propose the general functional Equation (4) for all x ∈ [k, ℓ], where 0 ≤ ζ ≤ 1, L : [k, ℓ] → R is unknown, and z1, z2, z3, z4 : [k, ℓ] → [k, ℓ] are given mappings. In addition, ν : J → J is a non-expansive mapping with ν(k) = k and |ν(x)| ≤ λ5 for some λ5 ≥ 0. Our objective is to prove existence, uniqueness, and Hyers-Ulam (HU)- and Hyers-Ulam-Rassias (HUR)-type stability results for Equation (4) by using an appropriate fixed-point method. Following that, we provide two examples to demonstrate the importance of our findings.
The following well-known result will be required in what follows.
Theorem 1 (Banach contraction principle). Let (J, d) be a complete metric space and let L : J → J be a mapping such that d(Lu, Lv) ≤ η d(u, v) for some η < 1 and for all u, v ∈ J. Then L has exactly one fixed point. Furthermore, the Picard iteration (PI) {u_n} in J, defined as u_n = L(u_{n−1}) for all n ∈ N, where u_0 ∈ J, converges to the unique fixed point of L.
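The Picard iteration is constructive and can be run directly. A minimal sketch in Python, where the contraction cos(u)/2 and the starting point are arbitrary illustrative choices (it has contraction constant η = 1/2, since the derivative |sin(u)|/2 never exceeds 1/2):

```python
import math

def picard(L, u0, tol=1e-12, max_iter=1000):
    """Iterate u_n = L(u_{n-1}) until successive iterates differ by < tol."""
    u = u0
    for _ in range(max_iter):
        u_next = L(u)
        if abs(u_next - u) < tol:
            return u_next
        u = u_next
    return u

# L(u) = cos(u)/2 is a contraction on R with constant eta = 1/2,
# since |L'(u)| = |sin(u)|/2 <= 1/2 < 1.
fixed = picard(lambda t: math.cos(t) / 2, u0=0.0)
print(fixed)  # the unique fixed point of u = cos(u)/2
```

By Theorem 1 the limit is independent of the starting point u_0, which is easy to confirm by rerunning with a different u_0.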
Theorem 2. Consider the probabilistic functional Equation (7) with (8). Suppose that z1(ℓ) = ℓ = z2(ℓ) and that η1 < 1, where η1 is given in (9). Assume that there is a nonempty subset C of S := {L ∈ T : L(ℓ) ≤ ℓ} such that (C, ‖·‖) is a Banach space (BS), where ‖·‖ is given in (6), and that the mapping M : C → C, defined for each L ∈ T by (10) for all x ∈ J, is a self-mapping. Then M is a Banach contraction mapping (BCM) with respect to the metric d induced by ‖·‖.
Proof. Let d : C × C → R be the metric induced by ‖·‖ on C. Then (C, d) is a complete metric space. We work with the operator M on C defined in (10). In addition, M is continuous and ‖ML‖ < ∞ for all L ∈ C; therefore, M is a self-operator on C. Furthermore, it is clear that a solution of (7) is equivalent to a fixed point of M. Since M is a linear mapping, for L1, L2 ∈ C we obtain ML1 − ML2 = M(∆L), where ∆L = L1 − L2. To estimate ‖ML1 − ML2‖, let L1, L2 ∈ C and consider, for each τ1, τ2 ∈ J with τ1 ≠ τ2, the corresponding difference quotient; our aim is to use the definition of the norm (6). By utilizing (8) together with the condition z1(ℓ) = ℓ = z2(ℓ), and since z1-z4 are contraction mappings with contractive coefficients λ1-λ4, respectively, we obtain ‖ML1 − ML2‖ ≤ η1 ‖∆L‖, where η1 is defined in (9). This gives d(ML1, ML2) = ‖ML1 − ML2‖ ≤ η1 ‖∆L‖ = η1 d(L1, L2).
As 0 < η1 < 1, this implies that M is a BCM with respect to the metric d induced by ‖·‖.
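The mechanism behind this estimate can be illustrated in a simplified setting. The sketch below uses the sup norm instead of the paper's norm (6), and an operator of the same shape, (ML)(x) = Σ a_i L(z_i(x)): when the weights satisfy Σ |a_i| = s < 1, M is a sup-norm contraction with constant s. All mappings and coefficients here are illustrative assumptions, not the paper's.

```python
import numpy as np

xs = np.linspace(0.0, 1.0, 401)

# Four contraction self-maps of [0, 1] (illustrative choices).
z = [lambda x: 0.5 * x,
     lambda x: 0.5 * x + 0.5,
     lambda x: 0.25 * x + 0.1,
     lambda x: 0.8 * x + 0.1]
a = [0.2, 0.3, 0.2, 0.2]          # sum = 0.9 = s < 1

def M(L):
    """(ML)(x) = sum_i a_i L(z_i(x)), evaluated on the grid by interpolation."""
    return sum(ai * np.interp(zi(xs), xs, L) for ai, zi in zip(a, z))

L1 = np.sin(3 * xs)
L2 = xs ** 2

lhs = np.max(np.abs(M(L1) - M(L2)))      # ||M L1 - M L2||_sup
rhs = 0.9 * np.max(np.abs(L1 - L2))      # s * ||L1 - L2||_sup
print(lhs <= rhs)                         # True: M contracts distances by s
```

The paper's η1 plays the role of s here; the extra factors λ1-λ4 and w1, w2 in (9) arise because the norm (6) also controls difference quotients, which the sup norm does not.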
Theorem 3. Under the hypotheses of Theorem 2, the probabilistic functional Equation (7) with (8) has a unique solution in C, and the Picard iteration L_n = ML_{n−1}, n ∈ N, with L_0 ∈ C, converges to this solution.

Proof. From Theorem 2, the mapping M : C → C defined for each L ∈ T by (10) is a BCM with respect to the metric d induced by ‖·‖. Thus, by the Banach fixed-point theorem, we obtain the conclusion of this theorem.
A similar estimation approach has been applied in a group control system (for details, see [14]).
We now look at a special case. If z1, z2, z3, z4 : J → J are contraction mappings with contractive coefficients λ1 ≤ λ2 ≤ λ3 ≤ λ4, respectively, then Theorems 2 and 3 yield the following.

Corollary 1. Consider the probabilistic Equation (7) associated with (8). Assume that z1(ℓ) = ℓ = z2(ℓ) with η1 := |3λ4(w1 + w2)| < 1, and that there is a nonempty subset C of S := {L ∈ T : L(ℓ) ≤ ℓ} such that (C, ‖·‖) is a BS, where ‖·‖ is given in (6), and the mapping M : C → C, defined for each L ∈ T by (12) for all x ∈ J, is a self-mapping. Then M is a BCM with respect to the metric d induced by ‖·‖.
Corollary 2. Consider the probabilistic Equation (7) associated with (8). Assume that z1(ℓ) = ℓ = z2(ℓ) with η1 := |3λ4(w1 + w2)| < 1, and that there is a nonempty subset C of S := {L ∈ T : L(ℓ) ≤ ℓ} such that (C, ‖·‖) is a BS, where ‖·‖ is given in (6), and the mapping M : C → C defined for each L ∈ T by (12) is a self-mapping. Then the probabilistic Equation (7) with (8) has a unique solution in C. Furthermore, the iteration {L_n} in C, given as L_n = ML_{n−1} for all n ∈ N, where L_0 ∈ C, converges to the unique solution of (7).

Theorem 4. Consider the probabilistic Equation (7), assume that condition (14) holds, and suppose that η2 < 1, where η2 is given in (15). Suppose that there is a nonempty subset C of S := {L ∈ T : L(ℓ) ≤ ℓ} such that (C, ‖·‖) is a BS, where ‖·‖ is given in (6), and the mapping M : C → C, defined for each L ∈ T by

(ML)(x) = w1 u1(y) L(z1(x)) + w2 u1(y) L(z2(x)) + w1 u2(y) L(z3(x)) + w2 u2(y) L(z4(x)), (16)

for all x ∈ J, is a self-mapping. Then M is a BCM with respect to the metric d induced by ‖·‖.
Proof. Let d : C × C → R be the metric induced by ‖·‖ on C. Then (C, d) is a complete metric space. We work with the operator M on C defined in (16). In addition, M is continuous and ‖ML‖ < ∞ for all L ∈ C; therefore, M is a self-operator on C. Furthermore, it is clear that a solution of (7) is equivalent to a fixed point of M. Since M is a linear mapping, for L1, L2 ∈ C we obtain ML1 − ML2 = M(∆L), where ∆L = L1 − L2 and ∆τ = τ1 − τ2 below. To estimate ‖ML1 − ML2‖, let L1, L2 ∈ C and consider, for each τ1, τ2 ∈ J with τ1 ≠ τ2, the corresponding difference quotient. Using the norm (6) and the condition (14), and since z1-z4 are contraction mappings with contractive coefficients λ1-λ4, respectively, we obtain ‖ML1 − ML2‖ ≤ η2 ‖∆L‖, where η2 is defined in (15). This gives d(ML1, ML2) = ‖ML1 − ML2‖ ≤ η2 d(L1, L2). As 0 < η2 < 1, this implies that M is a BCM with respect to the metric d induced by ‖·‖.
Theorem 5. Under the hypotheses of Theorem 4, the probabilistic Equation (7) with (14) has a unique solution in C, and the Picard iteration L_n = ML_{n−1}, n ∈ N, with L_0 ∈ C, converges to this solution.

Proof. From Theorem 4, the mapping M : C → C defined for each L ∈ T by (16) is a BCM with respect to the metric d induced by ‖·‖. Thus, by the Banach fixed-point theorem, we obtain the conclusion of this theorem.

Corollary 4. Consider the probabilistic Equation (7) associated with (8). Assume that (14) holds and η2 < 1, where η2 is defined in (18). Suppose that there is a nonempty subset C of S := {L ∈ T : L(ℓ) ≤ ℓ} such that (C, ‖·‖) is a BS, where ‖·‖ is given in (6), and the mapping M : C → C defined for each L ∈ T by (19) is a self-mapping. Then the functional Equation (7) with (8) has a unique solution in C. Furthermore, the iteration {L_n} in C, defined as L_n = ML_{n−1} for all n ∈ N, where L_0 ∈ C, converges to the unique solution of (7).

Remark 1. Our proposed probabilistic Equation (7) is a generalization of the functional equations discussed in [6,8].
We now offer the following examples to show the significance of our results.

Example 1. Consider the probabilistic functional equation given below
for all x ∈ J with k < a, b, c, d < ℓ and L ∈ T. If we set the mappings ν, z1, z2, z3, z4 : J → J as above for all x ∈ J, then our Equation (7) reduces to Equation (21). It is easy to see that z3, z4 satisfy our boundary conditions (8), and z1(ℓ) = ℓ = z2(ℓ). In addition, the required contraction inequalities hold for all µ, υ ∈ J. This implies that z1-z4 are contraction mappings with coefficients λ1-λ4, respectively, and ν : J → J is a non-expansive mapping with λ5 = 1. If η1 < 1 and there is a nonempty subset C of S := {L ∈ T : L(ℓ) ≤ ℓ} such that (C, ‖·‖) is a BS, and the mapping M : C → C induced by (21) is a self-mapping for all x ∈ J, then all constraints of Theorem 2 are fulfilled, and therefore we obtain the existence of a solution of the functional Equation (21). If we take L_0 = I ∈ C (where I is the identity function) as an initial approximation, then by Theorem 3 the Picard iteration converges to the unique solution of (21):

Example 2. Consider the probabilistic functional equation given below
Proof. Let L ∈ C be such that d(ML, L) ≤ ϕ(L). By Theorem 2, there is a unique L∗ ∈ C such that ML∗ = L∗. Thus we obtain d(L, L∗) ≤ d(L, ML) + d(ML, ML∗) ≤ ϕ(L) + η1 d(L, L∗), and hence d(L, L∗) ≤ ςϕ(L), where ς := 1/(1 − η1).
From the above analysis, we obtain the following result related to HU stability: the mapping M defined by

(ML)(x) = w1 u1(y) L(z1(x)) + w2 u1(y) L(z2(x)) + w1 u2(y) L(z3(x)) + w2 u2(y) L(z4(x)),

for all L ∈ C and x ∈ J, has HU stability; that is, for a fixed λ > 0 and every L ∈ C with d(ML, L) ≤ λ, there exists a unique L∗ ∈ C such that ML∗ = L∗ and d(L, L∗) ≤ ςλ, for some ς > 0.
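The quantitative content of HU stability, that an approximate fixed point lies within ς = 1/(1 − η) times its defect from the true fixed point, can be checked numerically. A sketch for a scalar contraction, where the mapping T and the test point are illustrative assumptions (the variable sigma plays the role of ς):

```python
import math

T = lambda t: 0.5 * math.atan(t) + 1.0   # a contraction: |T'(t)| = 0.5/(1+t^2) <= 1/2
eta = 0.5

# Locate the exact fixed point t* by Picard iteration.
t_star = 0.0
for _ in range(200):
    t_star = T(t_star)

t = 1.0                      # an "approximate solution"
lam = abs(T(t) - t)          # its defect: d(Tt, t) = lam
sigma = 1 / (1 - eta)        # the HU constant, sigma = 1/(1 - eta)

print(abs(t - t_star) <= sigma * lam)   # True: d(t, t*) <= sigma * lam
```

This mirrors the proof above: d(t, t*) ≤ d(t, Tt) + d(Tt, Tt*) ≤ λ + η d(t, t*), so d(t, t*) ≤ λ/(1 − η).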

Conclusions
The predator-prey analogy is among the most appealing paradigms in two-choice scenarios emerging in mathematical biology. In such models, a predator has two possible prey choices, and a solution describes the state in which the predator is attracted to a particular type of prey. In this paper, we proposed a general functional equation that covers numerous learning-theory models in the existing literature. We also established existence, uniqueness, and stability results for the suggested functional equation. The functional equations that appeared in [3,4,8] addressed just two cases, while our proposed functional Equation (4) covers all the possible cases discussed by Bush and Wilson in [1]. In addition, in [3,4,12], the authors used the boundary conditions z1(1) = 1 and z2(0) = 0 to prove their main results, whereas in Theorem 4 we did not employ such assumptions. Therefore, our method is novel and can be applied to many mathematical models in mathematical psychology and learning theory.
To conclude, we propose the following open problem for interested readers. Question: Can the conclusions of Theorems 2 and 3 be proved by another method?

Data Availability Statement: Not applicable.

Conflicts of Interest:
The authors declare no conflict of interest.