Article

Using Pseudo-Complemented Truth Values of Calculation Errors in Integral Transforms and Differential Equations Through Monte Carlo Algorithms

1
Department of Information System, Faculty of Computer Science, Universitas Gunadarma, Depok 16424, Indonesia
2
Department of Mathematics, Universitas Indonesia, Depok 16424, Indonesia
*
Authors to whom correspondence should be addressed.
Mathematics 2025, 13(15), 2534; https://doi.org/10.3390/math13152534
Submission received: 9 July 2025 / Revised: 29 July 2025 / Accepted: 4 August 2025 / Published: 6 August 2025

Abstract

This study aims to demonstrate how mathematics, especially calculus concepts, can be expanded to include semi-entities and how these can be applied to sampling activities. Here, the multivalued logic uses pseudo-complemented lattices instead of Boolean algebras. Truth values can express the intensity of a property: for example, the property of being heavy intensifies as weight increases. They can also express the current state of an individual's knowledge about a certain thing. To express that a number x approaches a is to say that the statement "x = b" is not fully true but approaches the fully true value as b − a approaches zero. This approach generalizes the concept of a limit and the concepts derived from it, such as differentiation and integration. A Monte Carlo algorithm replaces one function with another function with a finite domain, preferably a finite part of the original domain, by sampling the domain and calculating its map. The discussion extends to integration over an unbounded interval, integral transforms, and differential equations. This study then covers strategies for producing Monte Carlo estimates of the respective problems and determining their crucial truth values. In the discussion, a topic related to axiomatizing set theory is also suggested.

1. Introduction

The main themes of this paper are multivalued logic and Monte Carlo algorithms. Multivalued logic is a system of logic with more values than just true and false. As the values must mimic logical operations, they must form a Boolean algebra. However, in this paper, this requirement is relaxed to a lattice with a pseudo-complement. A Monte Carlo algorithm provides solutions to a problem by taking finite samples from the relevant domains and calculating or approximating their maps. The resulting solutions might deviate from the true ones according to some theoretical concepts. The measures of these deviations form the truth values of the given solutions. Thus, a truth value represents a criterion of how far an input departs from the ideal condition. In practice, the computed solutions are expected to converge towards the actual solutions as the sample size becomes larger. Thus, if the solutions need to be functions, they should be assumed to be piecewise continuous. However, theoretically, multivalued logic is capable of giving truth values for any given solution.
This section outlines the main ideas behind the processes themselves, aside from the technicalities of sampling calculations.
In this paper, two kinds of limits are employed. The first one is the limit of functions. To say that a number L is the limit of a function f(x) as x approaches a means that the nearer x is to a, the nearer f(x) is to L, even up to zero distance, denoted as lim_{x→a} f(x) = L. This is the kind of limit encountered in differentiation, where f(x) is of the form (g(x) − g(a))/(x − a) if the derivative calculated is g′(a). The other one is the limit of relations. To say that a number L is the limit of a relation x R y as x approaches a means that the nearer x is to a, the nearer the values of y satisfying x R y are to L, even up to zero distance, denoted as lim_{x→a} (x R y) = L. This is the kind of limit encountered in integration, where x R y if and only if y = Σ_{n=1}^{x} g(x_n)(x_{n+1} − x_n), when the integral calculated is ∫_a^b g(x) dx.
When using multivalued logic, the symbol lim_{x→a} can be deleted from sentences of the above forms, leaving them in the form "L = f(x)" or "x R L" for any x, where the limit is replaced by a system of truth values v(x) satisfying v(x) → 1 as x → a. These changes are transitions to the multivalued approach as an equivalent notion of limit and, in some cases, a generalization of it.
To deal with integral transforms, the following convention (to be formalized in the next section) is crucial. If f(a) = b is true and a′ = a has truth value r, then f(a′) = b has truth value r as well. In general, if f(a₁, a₂) = b is true while a₁′ = a₁ and a₂′ = a₂ have truth values r and s, respectively, then f(a₁′, a₂′) = b has truth value r ∧ s.
A differential equation provides a measurement criterion F such that an input f is processed into some modified forms f₁, f₂, …, f_k that can be evaluated by F to determine whether f meets the criterion or not. The strict "yes" and "no" outputs of F can be relaxed into a more pliable scheme through the use of multivalued logic. This scheme applies to more general concepts.
Suppose that a Monte Carlo method provides a finite sample of the form D = {(a₁, f(a₁)), (a₂, f(a₂)), …, (aₙ, f(aₙ))} for a function f. Then, from D, approximations to a problem can be calculated along with their truth values by equating the approximations to the genuine ones.
The Monte Carlo algorithm was invented by Stanislaw Ulam and others, such as Nicholas Constantine Metropolis. It is nicely described in the first paragraph of [1], while the original manuscript is [2]. As originally conceived, a Monte Carlo algorithm is an algorithm whose objective is to generate constructed data records from given probability distribution functions. Once this is made possible, data mimicking the real-life data to be analyzed can be generated and used to investigate relationships in various settings.
Most probability distribution functions employed in this paper are the uniform distributions. However, for some functions with known formulas, non-uniform distributions might be required, as will be encountered in Procedure 1 and Part B of Definition 4 below. The crucial tool is the random number generator: although it chooses members from finite, equally spaced points in an interval, the spacing is so small that the points can be assumed to be chosen arbitrarily and uniformly. In this paper, repeated outcomes are regarded as one point: in other words, it is assumed that the generator is non-repeating: once a point is chosen, it will not be chosen again.
Although differentiation does not theoretically require interval subdivision for its calculation, and integration only depends on the mesh produced by the chosen points, the need to estimate the required data size leaves, in practice, no alternative but to predetermine an equally spaced subdivision of a specific interval.
In general, the truth values employed in the process of differentiation or integration have three components. One of them is the subdivision component, which in integration involves the number N of subintervals in the predetermined subdivision. In differentiation, this component is replaced by the concept of the maximum acceptable gap d, to be introduced later. The second is the sample or data size component K, namely the number of points taken as samples by the Monte Carlo algorithm. The third is the contribution of the corresponding process, called the process component γ. While the first two are simple, the third one can be more complex depending on how many aspects contribute to it, usually due to equating things to other things: for example, equating an unbounded integration interval to a bounded one, equating a function to a relation, equating a possibly continuously fluctuating integrand to a finite step function, or even equating the integrand to another function.

2. Materials and Methods

A Boolean algebra U = ⟨B, ∨, ∧, ′, 0, 1⟩ is used for the truth values in a multivalued logic. For each Boolean algebra U, its opposite U^op = ⟨B, ∨ᵒ, ∧ᵒ, ′, 0ᵒ, 1ᵒ⟩, in which everything in U is dualized, is also a Boolean algebra. Here, x ∨ᵒ y = x ∧ y, x ∧ᵒ y = x ∨ y, 0ᵒ = 1, and 1ᵒ = 0, while x′ remains the same for any x.
Multivalued logic uses Boolean algebras for its truth values; however, in this paper, Boolean algebras are relaxed to interval lattices with pseudo-complements [3] (pp. 63–70) or [4] (pp. 63–70). The operation x ↦ x* is called a pseudo-complement on a lattice L if, for all x, y ∈ L, the following hold:
  • The map x ↦ x* is antitone; namely, x and x* move in opposite directions, such that if x is closer to 1, then x* is farther from it.
  • 0* = 1 and 1* = 0.
  • The map x ↦ x** is a closure operator.
  • (x ∨ y)* = x* ∧ y*.
  • (x ∧ y)** = x** ∧ y**.
  • x ∧ (x ∧ y)* = x ∧ y*.
To be more specific, the lattices we employ are interval lattices which, by abuse of notation, can be written in the form
[a, b] = ⟨[a, b], max(·,·), min(·,·), *, a, b⟩,    (1)
meaning a lattice with elements in the interval [a, b] where the numbers a and b act as the element 0 (namely, the absolute minimum) and 1 (namely, the absolute maximum), and for any x, y: x ∧ y = min(x, y), x ∨ y = max(x, y), and the pseudo-complement is x* = a + b − x (namely, the distance from x to b, added to a). On other occasions, its opposite [a, b]^op is used, which interchanges the greater and lesser terms. To indicate the direction of the value "true" (namely the element 1), we denote x in [a, b] as x_a^b, or just x^b if confusion is unlikely. Conversely, an element x of [a, b]^op is written as x_a^b, or just x_a.
The lattice [0, ∞] is just like the structure in Equation (1), namely ⟨[0, ∞], max(·,·), min(·,·), *, 0, ∞⟩, with the additional element ∞ regarded as larger than any real number. Here, the number 0 acts as the element 0, ∞ acts as the element 1, and for any x, its pseudo-complement is x* = 1/x.
If W_i = ⟨B_i, ∨_i, ∧_i, *_i, 0_i, 1_i⟩ are lattices for i = 1, 2, …, k, then their product is of the form ⟨B, ∨, ∧, *, 0, 1⟩, where B = B₁ × B₂ × … × B_k, in which, for any a = (a₁, a₂, …, a_k) and b = (b₁, b₂, …, b_k), every operation is performed component-wise: a ∨ b = (a₁ ∨₁ b₁, …, a_k ∨_k b_k), a ∧ b = (a₁ ∧₁ b₁, …, a_k ∧_k b_k), a* = (a₁*¹, …, a_k*ᵏ).
If required, each component may be uniformized to the lattice [0, 1]. The lattices [a, b] and [a, b]^op are converted to [0, 1] using the transforms φ(x) = (x − a)/(b − a) and φ^op(x) = 1 − φ(x), respectively. The lattices [0, ∞] and [0, ∞]^op are converted to [0, 1] using the transforms ψ(x) = x/(x + 1) and ψ^op(x) = 1 − ψ(x), respectively. Afterwards, the components may be multiplied to form single numbers as truth values in [0, 1]. This shows that the values can be standardized and unified. However, this might obscure their original meaning. The standardized components might help in evaluating their individual quality, and their product helps in evaluating their overall quality. It should be noted that these standards are still subject to interpretation, which depends on various considerations.
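To make the standardization above concrete, the following is a minimal sketch in Python under the stated transforms; the function names (phi, phi_op, psi, psi_op, overall_quality) are ours, not part of the paper, and the example values are illustrative only.

```python
# Uniformization of truth-value components to [0, 1].
# phi:    value in the lattice [a, b]      ("true" towards b)
# phi_op: value in the lattice [a, b]^op   ("true" towards a)
# psi:    value in the lattice [0, inf]    ("true" towards infinity)
# psi_op: value in the lattice [0, inf]^op ("true" towards 0)

def phi(x: float, a: float, b: float) -> float:
    return (x - a) / (b - a)

def phi_op(x: float, a: float, b: float) -> float:
    return 1.0 - phi(x, a, b)

def psi(x: float) -> float:
    return x / (x + 1.0)

def psi_op(x: float) -> float:
    return 1.0 - psi(x)

def overall_quality(components: list[float]) -> float:
    """Multiply already-standardized components into one number in [0, 1]."""
    q = 1.0
    for c in components:
        q *= c
    return q

# Illustrative usage with generic components (not taken from the paper):
r_std = [psi_op(0.03), psi(9), phi(7, 0, 10)]
print(r_std, overall_quality(r_std))
```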
The multivalued logic in this paper uses formulas written as ⊢_r φ, which means that the truth value of the sentence φ is r, where r is an element of a pseudo-complemented lattice B. Truth values follow logical operations via the following axioms:
(⊢_r φ ∧ ⊢_s θ) → ⊢_{r∨s} (φ ∨ θ),
meaning that the value of the sentence φ ∨ θ is the maximum of the values of φ and θ;
(⊢_r φ ∧ ⊢_s θ) → ⊢_{r∧s} (φ ∧ θ),
meaning that the value of the sentence φ ∧ θ is the minimum of the values of φ and θ;
⊢_{r*} ¬φ ↔ ⊢_r φ,
meaning that the value of the sentence ¬φ is the (pseudo-)complement of the value of φ;
(∀x ⊢_{r(x)} P(x)) → ⊢_{⋁_x r(x)} ∃x P(x),
meaning that the value of the sentence ∃x P(x) is the maximum of the values of P(x) over all x;
(∀x ⊢_{r(x)} P(x)) → ⊢_{⋀_x r(x)} ∀x P(x),
meaning that the value of the sentence ∀x P(x) is the minimum of the values of P(x) over all x.
To improve readability, the truth value can be attached to a connective or a relation symbol, written here as a superscript. For example, a =^r b means ⊢_r a = b, a ∈^r B means ⊢_r a ∈ B, φ →^r θ means ⊢_r (φ → θ), ∃^r x φ(x) means ⊢_r ∃x φ(x), and so on. When the truth value is 1, then ⊢_1 φ is just written as φ.
A set is written as S = {^r a, ^s b} to mean that a ∈^r S, b ∈^s S, and other objects are not members of S, i.e., they have truth value 0. For example, {^{1/2} a, ^{1/3} b, ^0 c, ^0 d} = {^{1/2} a, ^{1/3} b}.
The term ordinary set is employed to refer to any set S where, for each object a, if a ∈^r S then r = 0 or r = 1. In contrast, S is said to be a multivalued set if it is possible that there are objects a with a ∈^r S, where 0 < r < 1. When there is no risk of confusion, it is safe to just say "set" for "multivalued set."
When using multivalued logic, a problem may have no fully true solution, but it could have many near-solutions, namely solutions with non-zero truth values. On the other hand, it is equally possible, even if only indirectly, to say that there are many true solutions, as is the case for the function f with f(x) = sin(1/x), illustrated in the subsection on differentiation.
This is the right place to formalize the convention from the previous section so that it can be used in integral transforms:
Convention 1. 
The following is an axiom:
(f(a₁, …, aₙ) = b ∧ ⋀_{i=1}^{n} aᵢ′ =^{vᵢ} aᵢ) → f(a₁′, …, aₙ′) =^{⋀ᵢ vᵢ} b.    (7)
The following procedure is very useful when applying the Monte Carlo method in a real-life situation, since it can generate artificial data using a random number generator. It uses a cumulative distribution with a finite range to approximate a piecewise continuous cumulative distribution.
Procedure 1. 
Approximation of Continuous Cumulative Distributions by Finite Valued Ones.
Let F(x) be a cumulative distribution function; thus, F(x) = ∫_{−∞}^{x} f(t) dt, where f(x) is a probability density function. Its range is the interval [0, 1], which can be subdivided into N subintervals of equal length. Let x_k = k/N for k = 0, 1, 2, …, N. Then the subintervals are I₀ = [x₀, x₁] and I_k = (x_k, x_{k+1}] for k = 1, 2, …, N − 1. Thus, the sets J_k = F^{−1}(I_k) for each k have the same probability, namely 1/N. Hence, {J_k | k = 0, 1, …, N − 1} is a partition of the domain of F consisting of N elements, all with probability 1/N. Since F is monotone non-decreasing, for any k, J_k is an interval. Hence, F̃(x) = (k + 1)/N if x ∈ J_k approximates F with range {1/N, 2/N, …, N/N}.
This procedure creates events with a uniform distribution, namely J_k for k = 0, 1, …, N − 1, which are events under the cumulative distribution F. A random number generator can now determine which J_k happens in a trial.
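As an illustration of Procedure 1, the sketch below (Python, standard library only) approximates a continuous cumulative distribution by a finite-valued one and then lets a uniform random number generator decide which event J_k occurs in a trial. The helper names (finite_cdf, sample_J_index) and the logistic test distribution are our assumptions, not the paper's.

```python
import math
import random
from bisect import bisect_left

def finite_cdf(F, N, lo, hi, grid=10_000):
    """Procedure 1: return cut points t_k with F(t_k) ~= k/N, so that the
    events J_k = (t_k, t_{k+1}] all carry probability 1/N."""
    xs = [lo + (hi - lo) * i / grid for i in range(grid + 1)]
    Fs = [F(x) for x in xs]                 # assumed monotone non-decreasing
    cuts = []
    for k in range(N + 1):
        j = bisect_left(Fs, k / N)          # first grid point with F >= k/N
        cuts.append(xs[min(j, grid)])
    return cuts                             # cuts[k] approximates F^{-1}(k/N)

def sample_J_index(N):
    """A trial: the uniform generator picks which event J_k happens."""
    return random.randrange(N)

# Example with an assumed CDF (logistic), truncated to [-10, 10]:
F = lambda x: 1.0 / (1.0 + math.exp(-x))
cuts = finite_cdf(F, N=10, lo=-10.0, hi=10.0)
k = sample_J_index(10)
print(f"event J_{k} = ({cuts[k]:.3f}, {cuts[k + 1]:.3f}]")
```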

3. Results

This section is divided into subsections that discuss representations of functions by other functions, integration, differentiation, integral transforms, and differential equations. The first three subsections serve as a theoretical basis for the following two, while the last one is independent.

3.1. Equating a Function to Another One

Multivalued logic is so flexible that a function might be equated to another function or even, more generally, to a relation with a finite domain. However, only definitions that might be useful are mentioned here. In later subsections, it is assumed that the starting point is the functions obtained using Monte Carlo sampling, so other devices, such as meshes and the infinite-interval handling in Section 3.3.2, are used. This subsection considers the truth value of saying that a function is the same as another function. In so doing, a difficult function may be replaced by another function that is hoped to be easier to manage with the procedure that will be applied, such as the Monte Carlo algorithm. The cost of equating these functions is that the truth value is reduced in comparison to the original function.
Definition 1. 
Truth Values from Equating Functions.
A.
Equating Two Functions with Bounded Domains. Let f: I → ℝ and g: J → ℝ be two piecewise continuous functions such that I = [a, b] and J = [c, d]. Then I has a uniform distribution with mean μ_I = (a + b)/2 and variance σ_I² = (b − a)²/12, while J has a uniform distribution with mean μ_J = (c + d)/2 and variance σ_J² = (d − c)²/12. Then
f =^r g, where r = (|μ_I − μ_J|₀, |σ_I² − σ_J²|₀, m₀) and m = ∫_{I∩J} |f(x) − g(x)| dx.
B.
Equating a Function with an Unbounded Domain to One with a Bounded Domain. Let f: I → ℝ be a piecewise continuous function where I is an infinite interval, and let g: J → ℝ be another piecewise continuous function where J is a finite interval with mean μ_J and variance σ_J². Then
f =^r g, where r = (μ_J, σ_J², m₀, s₀) if I = (−∞, b) for some b; r = (|μ_J|₀, σ_J², m₀, s₀) if I = (−∞, ∞); r = (μ_J, σ_J², m₀, s₀) if I = (a, ∞) for some a; m = ∫_{I∩J} |f(x) − g(x)| dx, and s is the length of the interval J ∖ I.
C.
Equating a Function with an Interval Domain to One with a Finite Domain. Let f: I → ℝ be a piecewise continuous function where I is an interval, and let g: J → ℝ be a function where J is a finite set. Suppose J = {a₁, a₂, …, aₙ} with a₁ < a₂ < … < aₙ. We define ĝ(x) = g(aᵢ) if x ∈ [aᵢ, aᵢ₊₁) for i = 1, 2, …, n − 1. Then,
f =^r g, where f =^r ĝ with r determined as in Part A.
The truth value r in Definition 1 is called the representation component.
In the literature, authors are concerned with making approximations; however, this is just another way of finding truth values that are as close as possible to the full truth 1 when equating functions. One might consider the truth values in approximations with the generalized rational functions in [5] or with the Balázs–Szabados operators in [6].

3.2. Differentiation

Differentiation is defined similarly in any textbook, such as in [7] (Definition 27.1) or [8] (p. 41). Differentiation of a function f: (a, b) → ℝ at the point c ∈ (a, b) is defined as
f′(c) = lim_{x→c} (f(x) − f(c))/(x − c),
from which the following multivalued definition is obtained:
f′(c) =^r (f(x) − f(y))/(x − y), where r = |y − x|₀,    (12)
for any x < c < y. This formula is the basis of the Monte Carlo truth value calculation for differentiation, where more components are to be introduced. Hence, f′(c) is regarded as a member of the set {^r (f(x) − f(y))/(x − y) : r = |y − x|₀}.
This generalizes the ordinary definition of a derivative, since the function f with f(x) = sin(1/x) now has a derivative at x = 0, namely f′(0) =^r (f(x) − f(y))/(x − y), for any x < 0 < y with truth value r = |y − x|₀. The right-hand side approaches sin(1/d) for d = (y − x)/2 as y − x → 0, and thus approaches any value in the interval [−1, 1]. In other words, the tangent line of the function can have any slope in [−1, 1] near x = 0. One should be careful not to conclude that, for any two numbers a, b ∈ [−1, 1] with a ≠ b, a = f′(0) = b, since f′(0) itself is never evaluated with the full truth value 1.
Using identity (12), the definition of the Monte Carlo derivative is as follows:
Definition 2. 
Monte Carlo Estimate of the Derivative and Its Truth Value. Let f: (a, b) → ℝ be a function and c ∈ (a, b). Suppose that d > 0 is the maximum acceptable gap; namely, if |x − y| ≤ d, then (f(x) − f(y))/(x − y) is considered close enough to f′(c). Suppose that, using the Monte Carlo method, a sample of K points x₁, x₂, …, x_K is obtained; if s = max{xᵢ : xᵢ < c} and t = min{xᵢ : xᵢ > c}, then the estimate is
f′(c) =^r (f(s) − f(t))/(s − t), where r = (d₀, K, τ_d) for τ = t − s.    (13)
In practice, the choice of the interval (a, b) reflects the possible range from which the random number generator can take points, while d is taken as a component because, ideally, d approaches 0.
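The following is a minimal sketch of the estimate in Definition 2, assuming Python and a uniform sampler; the helper name mc_derivative and the returned component dictionary are our reading of (13), not code from the paper.

```python
import math
import random

def mc_derivative(f, a, b, c, d, K, rng=random):
    """Monte Carlo estimate of f'(c) on (a, b) as in Definition 2.
    Returns the estimate and the truth-value components (d, K, tau)."""
    xs = sorted(rng.uniform(a, b) for _ in range(K))
    left = [x for x in xs if x < c]
    right = [x for x in xs if x > c]
    if not left or not right:
        return None                      # no pair straddling c was sampled
    s, t = max(left), min(right)         # nearest sampled points around c
    estimate = (f(s) - f(t)) / (s - t)
    tau = t - s                          # gap actually achieved (ideally <= d)
    return estimate, {"d": d, "K": K, "tau": tau}

# Example with an assumed test function:
print(mc_derivative(math.sin, 2.0, 4.0, 3.0, d=0.2, K=100))
```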
The complexities behind the calculations for acquiring ( 13 ) are as follows (Table 1):
In numerical differentiation in general, no random number generation or sorting algorithm is required. However, subdividing the interval into N equal subintervals takes their place. Therefore, we obtain the following (Table 2).
For other methods, the complexities can be higher. For a more complete account of Monte Carlo complexity, cf. [10].
The following theorem predicts how many pairs (x, f(x)) and (y, f(y)) with x < c < y should be taken for a satisfactory Monte Carlo sample.
Theorem 1. 
Suppose that d is the maximum acceptable gap, a = c − ρ and b = c + ρ for ρ > d, and f: (a, b) → ℝ is differentiable at c. Then the Monte Carlo sample {(xᵢ, f(xᵢ)), (yᵢ, f(yᵢ)) | i = 1, 2, …, n} has probability p of containing a pair (x, f(x)) and (y, f(y)) with y − x ≤ d and x < c < y, where
p = (1/4)(d/ρ)².
Hence, a good Monte Carlo sample for differentiation is expected to have a size of at least 4(ρ/d)² elements.
Proof. 
Suppose that d > 0 is a number such that if c − d/2 < x < c < y < c + d/2, then (f(x) − f(y))/(x − y) is close enough to f′(c). Then the probability of obtaining x ∈ (c − d/2, c) uniformly in the interval (a, c) is (d/2)/(c − a), namely the proportion of the lengths of (c − d/2, c) and (a, c). By the same reasoning, the probability of obtaining y ∈ (c, c + d/2) uniformly in the interval (c, b) is (d/2)/(b − c). Hence, obtaining such x and y has probability p = (d/2)/(c − a) · (d/2)/(b − c) = (d/(b − a))² = (1/4)(d/ρ)². From the probability p, it is understood that only p · 100% of the k-trial experiments produce the tolerable gap d. Then, such a trial occurs every 1/p trials. Thus, a Monte Carlo sample is expected to have a size of at least 1/p = 4(ρ/d)². □
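A one-line check of the sample size predicted by Theorem 1 (ours, in Python):

```python
def diff_sample_size(rho: float, d: float) -> int:
    """Expected Monte Carlo sample size 4 * (rho / d) ** 2 from Theorem 1."""
    return round(4 * (rho / d) ** 2)

# For the setting of Example 1 (rho = 1, d = 0.2) this gives 100.
print(diff_sample_size(1.0, 0.2))
```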
An account of fractional differentiation using Monte Carlo without multivalued logic is presented in [11]. Another extension of the multifunction differentiation concept is discussed in [12] (p. 248).
It must be mentioned that the estimate ( 13 ) in Definition 2 might be altered in an attempt to make it less sensitive to outlier fluctuations, such as in time-series predictions. Thus, the following M -average version is defined as its alternative.
Definition 3. 
M -Average Estimate of the Derivative. The  M -average estimate of the derivative is the average of the last  M  estimates of the derivative:
f′(c) =^r (1/M) Σ_{i=1}^{M} (f(sᵢ) − f(tᵢ))/(sᵢ − tᵢ), where sᵢ and tᵢ are the i-th nearest points to c; thus, s_M < … < s₂ < s₁ < c < t₁ < t₂ < … < t_M, and r can be taken to be the same as in (13) of Definition 2.
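A short sketch of the M-average estimate of Definition 3 (Python; the helper name m_average_derivative is ours), averaging the difference quotients of the M nearest straddling pairs:

```python
def m_average_derivative(points, f_values, c, M):
    """M-average estimate of f'(c) from sampled (x, f(x)) pairs (Definition 3)."""
    data = sorted(zip(points, f_values))
    left = [p for p in data if p[0] < c][-M:]    # s_M < ... < s_1 < c
    right = [p for p in data if p[0] > c][:M]    # c < t_1 < ... < t_M
    left = list(reversed(left))                  # pair s_i with t_i, nearest first
    quotients = [(fs - ft) / (s - t) for (s, fs), (t, ft) in zip(left, right)]
    if not quotients:
        return None                              # no straddling pair available
    return sum(quotients) / len(quotients)
```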
Example 1. 
Let the interval be (2, 4) and let the derivative to be estimated be f′(3). Suppose that a subdivision into N = 10 subintervals is close enough to approximate f. Thus, ρ = 1 and d = 0.2. Suppose the data obtained are as in Table 3.
The closest neighboring points around x = 3 are s = 2.80 and t = 3.14, since 2.80 < 3 < 3.14. Thus, the estimate is
f′(3) =^r (8 − (−7))/(3.14 − 2.80) = 44.117647,
where r = (d, K, τ) = (0.20, 9, 0.34), with d directed toward 0, K toward ∞, and τ taking values in [0.20, 2] directed toward 0.20, since the maximum of τ = t − s is 4 − 2 = 2. To interpret this, suppose we take the transforms d′ = 1/(d + 1), K′ = K/(K + 1), and τ′ = 1.80/2 = 90%. Then d′ = 83%, K′ = 90%, and τ′ = 90%. Hence, r′ = (83%_d, 90%_K, 90%_τ). The overall quality is d′·K′·τ′ = 67%. The quality might not be satisfactory because the data size is 9 instead of 4(ρ/d)² = 100 according to Theorem 1.
If we want to avoid excessive sensitivity of the function f near x = 3, we can try the next closest neighboring points, which are x = 2.74 and x = 3.56. Hence, the 2-estimate is
f′(3) =^r (1/2)(44.117647 + (9 − (−5))/(3.56 − 2.74)) = (1/2)(44.117647 + 14/0.82) = 30.595410.    (16)
If (16) is not satisfactory, we can calculate the 3 -estimate, and so on.

3.3. Integration

This subsection deals with integrals and is split into two parts: the first is about integrals over finite intervals, while those over infinite intervals are covered in the subsequent part.

3.3.1. Integration over a Finite Interval

While integration has many distinct definitions, such as those in [7] (p. 121) or [13] (p. 425), here the Riemann integral is considered. The original definition of the integral is as follows. Let f be a function defined on the interval I = [a, b], where −∞ < a < b < ∞. Let P = {I₀, I₁, …, I_{n−1}} be a partition of I, with I_k = [a_k, a_{k+1}] for k = 0, 1, …, n − 1, a = a₀ < a₁ < a₂ < … < a_{n−1} < a_n = b, and let m(P) be the mesh of P, i.e., max_{0 ≤ k ≤ n−1} (a_{k+1} − a_k). Then,
∫_a^b f(x) dx = lim_{m(P)→0} Σ_{k=0}^{n−1} f(a_k)(a_{k+1} − a_k),
from which the following multivalued definition is obtained:
∫_a^b f(x) dx =^r Σ_{k=0}^{n−1} f(a_k)(a_{k+1} − a_k), with truth value r = m(P)₀.    (18)
This formula is the basis of the Monte Carlo truth value calculation for integration, where more components will be introduced. Hence, ∫_a^b f(x) dx is regarded as a member of the set {^r Σ_{k=0}^{n−1} f(a_k)(a_{k+1} − a_k) : r = m(P)₀}.
The right-hand side clearly approximates the integral ∫_a^b f(x) dx for the partition P, provided the latter exists. However, it generalizes the notion of the integral to an arbitrary function, since an integral such as ∫_{−1}^{1} χ(x) dx, where χ(x) = 1 if x is rational and 0 otherwise, has the value Σ_{k=0}^{n−1} χ(a_k)(a_{k+1} − a_k) with truth value r = (max_k (a_{k+1} − a_k))₀. Since the latter may take any value in the interval (0, 1) as r → 0, the result is more than one number.
Definition 4. 
The Monte Carlo Estimate for the Riemann Integral ∫_a^b f(x) dx with N Subdivisions.
A.
Suppose that d > 0 is a sufficiently ideal size for a mesh to approximate the function f. Let xᵢ = a + i(b − a)/N for i = 0, 1, …, N form a subdivision of [a, b] with (b − a)/N ≤ d. Let P be the partition a ≤ a₁ < … < a_K ≤ b obtained from the Monte Carlo data {(aᵢ, f(aᵢ)) | i = 1, …, K}. Let Dᵢ = {a₁, …, a_K} ∩ [xᵢ, xᵢ₊₁) for i = 0, 1, …, N − 1.
∫_a^b f(x) dx =^r Σ_{k=1}^{K−1} f(a_k)(a_{k+1} − a_k), where r = (h₀, m(P)₀, K, κ_N),    (19)
in which m(P) is the mesh of the corresponding partition, κ is the number of nonempty sets Dᵢ, and h = (a₁ − a)/(b − a), since f(a) might not be known.
B.
When the formula for the function f: [a, b] → ℝ is known in advance, but it is problematic to calculate ∫_a^b f(x) dx directly, Procedure 1 is called for. The modification is applied after the interval [a, b] is subdivided into N equal subintervals. Suppose that f is of bounded variation in each subinterval Iₙ = [xₙ, xₙ₊₁], with variation Vₙ = ∫_{xₙ}^{xₙ₊₁} |f′(x)| dx. Let V(k) = V₀ + V₁ + … + V_k for k = 0, 1, 2, …, N − 1. Then the function
F(x) = V(n)/V(N − 1) if x ∈ Iₙ; 1 if x > b; 0 otherwise,
is a finite-valued cumulative distribution, so the imitation part of Procedure 1 can be skipped. F(x) is applied to the output of the random number generator so that the variation in a subinterval corresponds to the frequency at which its points are chosen. The rest is the same as in (19) in Part A.
∫_a^b f(x) dx =^r Σ_{k=1}^{N−1} f(x_k)(x_{k+1} − x_k), where the probability of points being chosen in Iₙ is Vₙ/V(N − 1).
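The sketch below (Python, standard library; the function name mc_integral and the returned component dictionary are our illustration of (19) in Definition 4.A, not the paper's code) computes the estimate together with the truth-value components h, m(P), K, and κ.

```python
import math
import random

def mc_integral(f, a, b, N, K, rng=random):
    """Monte Carlo estimate of the integral of f over [a, b] as in (19)."""
    pts = sorted(rng.uniform(a, b) for _ in range(K))
    estimate = sum(f(pts[k]) * (pts[k + 1] - pts[k]) for k in range(K - 1))
    d = (b - a) / N
    mesh = max(pts[k + 1] - pts[k] for k in range(K - 1))       # m(P)
    kappa = len({int((x - a) // d) for x in pts})               # subintervals hit
    h = (pts[0] - a) / (b - a)                                  # f(a) may be unknown
    return estimate, {"h": h, "mesh": mesh, "K": K, "kappa": kappa, "N": N}

# Example with an assumed integrand:
print(mc_integral(math.exp, 2.0, 4.0, N=10, K=50))
```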
The complexities of the calculations to obtain ( 19 ) are as follows (Table 4).
For numerical integration in general, the complexities are as follows (Table 5):
Suppose that d > 0 is a number such that if the mesh of the subdivision of [a, b] by the points a = x₀ < x₁ < … < x_{N−1} < x_N = b does not exceed d, then the finite function {(xᵢ, f(xᵢ)) | i = 0, 1, …, N} is sufficiently close to f(x). Assume that the intervals are uniformly spaced by subdividing the interval [a, b] into N equal subintervals of the form [xᵢ, xᵢ₊₁], where xᵢ = a + i(b − a)/N for i = 0, 1, …, N − 1 and d = (b − a)/N. Then we have the following theorem.
Theorem 2. 
Let P_{N,k} denote the probability that in k trials all of the N subintervals are chosen. Then,
P_{N,k} = 0 if k < N; N!/N^N if k = N; N!·N^{k−N}/N^k if k > N.    (20)
Hence, a good Monte Carlo sample for the integral is expected to have a size of at least 1/P_{N,N} = N^N/N! elements.
Proof. 
If k < N, there can be at most k subintervals filled by k points. Hence P_{N,k} = 0. Suppose k = N. There are N! ways to fill the N subintervals with N chosen points, while N points can fall into the N subintervals in N^N ways. Thus, P_{N,N} = N!/N^N. Meanwhile, if k > N, the k points fill the N subintervals if there are N points filling all of them, which may take place in N! ways. The remaining k − N points may fall into the N subintervals in any manner; thus, there are N^{k−N} ways to place them. Hence, P_{N,k} = N!·N^{k−N}/N^k. □
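A quick numeric check of Theorem 2 (ours, in Python): the expected sample size N^N/N! for covering all N subintervals.

```python
from math import factorial

def integral_sample_size(N: int) -> float:
    """Expected sample size N**N / N! from Theorem 2."""
    return N ** N / factorial(N)

# N = 10 gives about 2755.7, i.e. the 2756 quoted in Example 2.
print(integral_sample_size(10))
```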
Example 2. 
Let the interval be (2, 4) and let the integral to be estimated be ∫₂⁴ f(x) dx. Suppose that a subdivision into N = 10 subintervals is close enough to approximate f. Suppose that the data obtained are those in Table 6.
Thus, K = 9 and κ = 7 . Therefore, the estimate is
∫₂⁴ f(x) dx =^r 5(2.26 − 2.06) + 7(2.58 − 2.26) + 4(2.59 − 2.58) − 5(2.80 − 2.59) − 7(2.99 − 2.80) + 8(3.56 − 2.99) + 9(3.70 − 3.56) + 13(3.84 − 3.70) + 10(4 − 3.84) = 10.32,
where r = (h, m(P), K, κ) = (0.03, 0.58, 9, 7), with h and m(P) directed toward 0, K toward ∞, and κ toward 10. Standardizing r, one has r′ = (97%_h, 63%_{m(P)}, 90%_K, 70%_κ), where the main weakness is the mesh. The overall quality is their product, 39%. The quality is unsatisfactory because the data size is 9 instead of 10¹⁰/10! ≈ 2756 according to Theorem 2.
The following subsection presents a strategy to tackle integrals over infinite intervals using the finite Monte Carlo method.

3.3.2. Integration over an Infinite Interval

This section deals with integrations of the form ∫_a^b f(x) dx, where a = −∞ or b = ∞ or both. In principle, the integration is carried out over a finite interval, but with a certain adjustment to the truth value.
Definition 5. 
A Multivalued Definition of an Integral over an Infinite Interval. Suppose that the integration interval is (a, b), where one of the following holds:
(1) a = −∞, b ∈ ℝ;  (2) a ∈ ℝ, b = ∞;  (3) a = −∞, b = ∞.
Then the integral of f(x) is the following:
∫_a^b f(x) dx =^u ∫_A^B f(x) dx, for u = (λ, q₀), where λ = B − A with B = b > A if (1) holds, A = a < B if (2) holds, A < B if (3) holds; and q = |f(A)| if (1) holds, |f(B)| if (2) holds, max(|f(A)|, |f(B)|) if (3) holds.    (21)
The purpose of q above is to incorporate, into u, the possibility that f diverges as x → −∞ or x → ∞, so that a bigger q means a smaller truth value.
Definition 6. 
Equating an Integral over an Infinite Interval to the Sum of a Finite Function in its Domain.
With the integration interval determined in Definition 5, the integral ∫_a^b f(x) dx is estimated as
∫_a^b f(x) dx =^{u′} Σ_{k=1}^{K−1} f(a_k)(a_{k+1} − a_k), with u′ = (u, r), in which u is the corresponding truth value as in Definition 5 and ∫_A^B f(t) dt =^r Σ_{k=1}^{K−1} f(a_k)(a_{k+1} − a_k),    (24)
where the sample points a_k ∈ [A, B] for k = 1, 2, …, K.
The truth value u is a part of the process component, called the domain infinity component. The Monte Carlo version is just an adaptation obtained by adding the u part to the truth value in (19).
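A short sketch (Python, our naming) of the extra components introduced in Definition 5 when an unbounded interval is replaced by [A, B]: λ = B − A and q, the magnitude of f at the truncation end(s).

```python
import math

def infinity_components(f, A, B, left_infinite, right_infinite):
    """Domain-infinity truth-value components (lambda, q) of Definition 5."""
    lam = B - A                          # larger lambda means closer to fully true
    if left_infinite and right_infinite:
        q = max(abs(f(A)), abs(f(B)))
    elif left_infinite:
        q = abs(f(A))                    # divergence risk as x -> -infinity
    else:
        q = abs(f(B))                    # divergence risk as x -> +infinity
    return lam, q

# Example: truncating an integral over (-infinity, 1] to [-4, 1]:
print(infinity_components(math.exp, -4.0, 1.0, True, False))
```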
In [15] (p. 3), Equation (3), a more theoretical probabilistic setting of Monte Carlo integration is given, together with the effectiveness of certain cases of integration in high dimensions.

3.4. Integral Transforms

Integral transforms differ from the usual integration processes in that they have other functions of distinct variables as outputs.
Definition 7. 
A function T is called an integral transform if it sends integrable functions f(x) on an interval I to functions g(v₁, v₂, …, vₙ) on a rectangle J₁ × J₂ × … × Jₙ in a one-to-one fashion, through a process of one or more integrations.
An integral transform might operate on functions of more than one variable, such as W(g, h)(s, t, u, v) = g(u) ∫_s^t h(v, x) dx, but they can all be treated similarly.
Focusing on transforms of the form T(f)(v) = ∫_a^v f(x) dx as an exemplary illustration, the equation would be T(f)(v) =^r Σ_{i=0}^{N−1} f(aᵢ)(aᵢ₊₁ − aᵢ), where r = m(P_v)₀ and m(P_v) is the mesh of the partition P_v of the interval [a, v] if a ≤ v, or [v, a] if v ≤ a, subdivided into N subintervals. However, forming a new subdivision every time v changes is impractical. Hence, we propose the following definition:
Definition 8. 
In an integration process in an integral transform, there are three cases with respect to the positions of the variables.
Case (i). 
None of the variables are at the ends of the integration intervals. This case is not a problem since the integration is as in Equation   ( 19 )  or  ( 21 )  in Definition 5 or Equation  ( 24 )  in Definition 6, except that there may be variables involved in the integrand or in a factor outside the integral.
Case (ii). 
A variable is at one end of the integration interval. Suppose the integral transform is T(f)(v) = ∫_c^v f(x) dx and v ranges over the interval [a, b], where a ≤ c ≤ b. Let xᵢ = a + i(b − a)/N form an acceptable subdivision of [a, b] and d = (b − a)/N. Then,
T(f)(v) = ∫_c^v f(x) dx =^{r_v} Σ_{k=1}^{M−1} f(t_{i_k})(t_{i_{k+1}} − t_{i_k}), where {(t₁, f(t₁)), …, (t_M, f(t_M))} is a function with a = t₁ < t₂ < … < t_M = b, {t_{i_1}, …, t_{i_M}} = {t₁, …, t_M} ∩ [c, v] if c ≤ v, or {t₁, …, t_M} ∩ [v, c] if v ≤ c, and r_v is the integral's truth value with subdivision component κ_{N_v}, for N_v = (v − a)/d and κ = (1/d) Σ_{Dᵢ ≠ ∅} (xᵢ₊₁ − xᵢ), in which Dᵢ = {t_{i_1}, …, t_{i_M}} ∩ [xᵢ, xᵢ₊₁).
According to Axiom (7) of Convention 1, one obtains
V =^r W, where for each v, V(f) = T(f)(v) and W(f) = Σ_{k=1}^{M−1} f(t_{i_k})(t_{i_{k+1}} − t_{i_k}), and r = ⋀_{v ∈ {t₁, …, t_M}} r_v.
Case (iii). 
There are variables at both ends of the integration interval. Suppose the integral transform is T(f)(u, v) = ∫_u^v f(x) dx for u, v ∈ [a, b]. Then,
T(f)(u, v) = ∫_u^v f(x) dx =^{r_{u,v}} Σ_{k=1}^{M−1} f(t_{i_k})(t_{i_{k+1}} − t_{i_k}), where {(t₁, f(t₁)), …, (t_M, f(t_M))} is a function with a = t₁ < t₂ < … < t_M = b, {t_{i_1}, …, t_{i_M}} = {t₁, …, t_M} ∩ [u, v] if u ≤ v, or {t₁, …, t_M} ∩ [v, u] if v ≤ u, and r_{u,v} is the integral's truth value with subdivision component κ_{N_v}, for N_v = (v − a)/d and κ = (1/d) Σ_{Dᵢ ≠ ∅} (xᵢ₊₁ − xᵢ), in which Dᵢ = {t_{i_1}, …, t_{i_M}} ∩ [xᵢ, xᵢ₊₁).    (28)
According to Axiom (7) of Convention 1, one obtains
V =^r W, where for each (u, v), V(f) = T(f)(u, v) and W(f) = Σ_{k=1}^{M−1} f(t_{i_k})(t_{i_{k+1}} − t_{i_k}), and r = ⋀_{u, v ∈ {t₁, …, t_M}} r_{u,v}.
The complexities of the calculation of the integral transform (28) are derived from those of the integration. Suppose an integration has time complexity O(N). Then, since there are C(N, 2) = N(N − 1)/2 subintervals over which integrations are performed, the time complexity of the transform is O(C(N, 2)) = O(N²). In contrast, suppose the integration over the interval [p, q] has space complexity O(q − p). Since the memory occupied by a previous integration is erased, the total required space does not exceed the maximum among the subintervals. Hence, the space complexity of the transform is still O(M).
Table 7 displays the time and space complexities of the computation in (28), while Table 8 displays those of computations by other numerical integration methods. Table 8 is like Table 7 except that V and W are replaced by other inputs.
The following theorem is just an adaptation of Equation (20) of Theorem 2 to a non-integer number, where Γ(N + 1) replaces N!.
Theorem 3. 
Suppose the subdivision b = v₀ < v₁ < … < v_N = c of the interval [b, c] is equally spaced and the mesh d = (c − b)/N satisfies the requirement of representing the function of interest. Then, a good Monte Carlo sample for the transform T(f)(v) = ∫_a^v f(x) dx has a size of at least
Q_v = N_v^{N_v}/Γ(N_v + 1) ≈ N_v^{N_v}/N_v!
elements, where N_v = (v − a)/d. The rightmost approximation is good when N_v is large enough. Hence, the whole transform is expected to have a good Monte Carlo sample of size Q_b.
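A small helper (ours, in Python) computing Q_v from Theorem 3 with the Gamma function:

```python
from math import gamma

def transform_sample_size(v: float, a: float, d: float) -> float:
    """Q_v = N_v**N_v / Gamma(N_v + 1) with N_v = (v - a) / d (Theorem 3)."""
    N_v = (v - a) / d
    return N_v ** N_v / gamma(N_v + 1)

# With a = -4 and d = 1 (the setting assumed in Example 3), v = -2.25 gives
# about 1.7 and v = -1.30 about 3.5, of the order of the values quoted there.
print(transform_sample_size(-2.25, -4.0, 1.0), transform_sample_size(-1.30, -4.0, 1.0))
```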
The case for transforms of the form ∫_u^v f(x) dx is similar.
The above discussion still deals with transforms in general. Some examples of useful integral transforms are presented in [16,17].
Example 3. 
Consider the transform T(f)(u, v, w) = f(u) + ∫_{−∞}^{v} f(x)(1 + w)^x dx, where u ∈ [0, 1], v ≤ 1, w ∈ ℝ, and f: ℝ → ℝ. For the integration process itself, suppose we take N = 5, with the integration interval replacing (−∞, 1] being [−4, 1]. Hence, the subdivision consists of the intervals [−4, −3], [−3, −2], [−2, −1], [−1, 0], and [0, 1]. Suppose that the data for f are those in Table 9.
The following are example calculations of
T(f)(u, v, w) =^{r_v} f(u) + Σ_{k=1}^{M_v} f(x_k)(1 + w)^{x_k}(x_{k+1} − x_k)
for some ( u , v , w ) values:
  • T(f)(0.25, −2.25, 5.00) =^{r_{−2.25}} 1.28 + 0.020·6^{−3.9}·0.15 + 0.023·6^{−3.75}·0.50 + 0.39·6^{−3.25}·1.00 + 0.11·6^{−2.25}·0.24 = 1.281640,
    where r_{−2.25} = (λ, q, h, m(P), K, κ) = (1.75, 0.11, 0.06, 1.00, 4, 1.75), with q, h, and m(P) directed toward 0, λ and K toward ∞, and κ toward N_{−2.25}, because
    κ = (−2.25 − (−4))/1 = 1.75 and N_{−2.25} = (−2.01 − (−4))/1 = 1.99. So,
    r′_{−2.25} = (64%_λ, 90%_q, 94%_h, 50%_{m(P)}, 80%_K, 88%_κ). The overall quality is λ′·q′·h′·m(P)′·K′·κ′ = 19%. Here, the data size is 2, which coincides with the ideal size 2²/2! according to Theorem 3.
  • T(f)(0.65, −1.30, 11.00) =^{r_{−1.3}} 1.92 + 0.020·10^{−3.9}·0.15 + 0.023·10^{−3.75}·0.50 + 0.39·10^{−3.25}·1 + 0.11·10^{−2.25}·0.24 + 0.14·10^{−2.01}·0.31 + 0.27·10^{−1.30}·0.55 = 1.928236,
    where r_{−1.3} = (λ, q, h, m(P), K, κ) = (2.70, 0.27, 0.04, 1.00, 6, 2.7), with κ toward N_{−1.3}, because κ = N_{−1.3} = (−1.3 − (−4))/1 = 2.7. Hence,
    r′_{−1.3} = (73%_λ, 44%_q, 96%_h, 50%_{m(P)}, 80%_K, 100%_κ). The overall quality is λ′·q′·h′·m(P)′·K′·κ′ = 15%. Here, the data size is 3, while the ideal size is 3³/3! ≈ 4.

3.5. Differential Equations

The differential equation is defined below in its most general form.
Definition 9. 
Definition of a Differential Equation.
A.
A differential equation of order  n   is an equation of the form:
F(x, D_U y(x)) = 0,    (30)
in which x = (x₁, x₂, …, x_p) and
D_U y(x) = (y, y_{x₁}, …, y_{x_p}, y_{x₁x₂}, …, y_{x_px_p}, …, y_{x₁…x₁} (n times), …, y_{x_p…x_p} (n times)),
where y_x = ∂y/∂x, y_{uv} = ∂²y/∂u∂v, and so on, and F is an n-times continuously differentiable function almost everywhere. The variable x₁ = t denotes time, while x₂, x₃, …, x_p represent space coordinates. If p = 1, so that x = x₁, the equation is called an ordinary differential equation; otherwise, it is called a partial differential equation.
B.
Let Ω be a region in ℝ^p. An initial condition for (30) is a function g(x) satisfying
F(x, D_U g(x)) = 0, for any x ∈ Ω with x₁ = t₀,    (31)
for some fixed time x₁ = t₀.
C.
Let Ω be a region in ℝ^p and Ω′ = {(x₂, …, x_p) | ∃t (t, x₂, …, x_p) ∈ Ω}. A boundary condition for (30) is a function h(x) such that
F(x, D_U h(x)) = 0, for x = (x₁, x₂, …, x_p) ∈ {t} × ∂Ω′,    (32)
for all t such that ({t} × ℝ^{p−1}) ∩ Ω ≠ ∅, where ∂Ω′ denotes the boundary of Ω′.
The original definition of a solution of a differential equation is as follows.
Definition 10. 
Definition of a Solution to a Differential Equation.
A.
A solution of the differential Equation  30   in a region  Ω   is a function  f x   that plays the role of  y   in  ( 30 ) , namely
F(x, D_U f(x)) = 0, for x ∈ Ω.
B.
The solution  f  is said to satisfy the initial condition ( 31 )  if  f  is identical to  g  of  ( 31 )  in its domain, namely
f(x) = g(x), for g as in (31).
C.
Also, the solution  f  is said to satisfy the boundary condition ( 32 )  if  f  is identical to  h  of  32  in its domain, namely
f(x) = h(x), for h as in (32).
The above definition translates to the following discrete version.
Definition 11. 
Definition of a Finite Function Solution to a Differential Equation.
A.
A finite function f = {(z_l, y_l) | l = 1, 2, …, N}, with z_l = (z_{l,1}, z_{l,2}, …, z_{l,p}) for each l, is said to be a solution of the differential Equation (30) if
F(z_l, ∇_U f(z_l)) = 0, for all points z_l ∈ Dom(f),
where ∇_U f(x) = (f, ∂_{x₁} f, …, ∂_{x_p} f, ∂_{x₁x₂} f, …, ∂_{x_px_p} f, …, ∂_{x₁…x₁} f (n times), …, ∂_{x_p…x_p} f (n times)) denotes the corresponding discrete version of D_U f(z_l); namely, ∂_v f is the discrete analogue of y_v, ∂_{uv} f that of y_{uv} = ∂²y/∂u∂v, and so on.
B.
Given a discrete version of the initial condition  g , the solution  f  is said to satisfy the initial condition g  if
f(z_l) = g(z_l), for any z_l = (z_{l,1}, z_{l,2}, …, z_{l,p}) with z_{l,1} = t₀.
C.
Given a discrete version of the boundary condition  h ,  f  is said to satisfy the boundary condition h  if
f(z_l) = h(z_l), where z_l = (z_{l,1}, z_{l,2}, …, z_{l,p}) with (z_{l,2}, …, z_{l,p}) ∈ ∂Ω′,
in which ∂Ω′ originally denotes the boundary of Ω′ but now becomes an arbitrary subset of Ω′ = {(z_{l,2}, …, z_{l,p}) | (z_{l,1}, z_{l,2}, …, z_{l,p}) ∈ Dom(f)}.
The calculation of U f z may use forward, central, or backward differences. The underlying theory is discussed in [18]. The following is the multivalued logic version of Definition 11.
Definition 12. 
Multivalued Definition of a Finite Function Solution to a Differential Equation.
A.
Given any finite function f = {((z_{l,1}, z_{l,2}, …, z_{l,p}), y_l) | l = 1, 2, …, N}, it is a solution of the differential Equation (30) with the truth value τ if:
⊢_τ ⋀_{l=1}^{N} F(z_l, ∇_U f(z_l)) = 0,
where τ = (m(P_{x₁})₀, m(P_{x₂})₀, …, m(P_{x_p})₀, μ₀), in which, for each j = 1, 2, …, p, m(P_{x_j}) is the mesh of {z_{1,j}, z_{2,j}, …, z_{N,j}} and μ = max{|F(z_l, ∇_U f(z_l))| : l = 1, 2, …, N}.
B.
Given a function g that is equated to an initial condition with the truth value μ, the function f equated to a solution of (30) is said to satisfy the initial condition g with the truth value σ if
⊢_σ ⋀_{z_{l,1} = t₀} f(z_l) = g(z_l),
where σ = μ ∧ ⋀_{z_{l,1} = t₀} r_l, in which f(z_l) =^{r_l} g(z_l) for each z_l with z_{l,1} = t₀.
C.
Given a function h that is equated to a boundary condition with the truth value ν, the function f equated to a solution of (30) is said to satisfy the boundary condition h with the truth value ρ if
⊢_ρ ⋀_{z_l ∈ Dom(h)} f(z_l) = h(z_l),
where ρ = ν ∧ ⋀_{z_l} r_l, in which f(z_l) =^{r_l} h(z_l) for any relevant z_l.
In the Monte Carlo method, a value T is given as a criterion: the function f is regarded as acceptable if τ ≥ T for some fixed value T = (T₁, …, T_p), where Tᵢ > 0 for each i. Should f be a solution for an initial or boundary value problem, then D should contain some points satisfying the corresponding initial or boundary conditions.
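To make Definition 12.A and the acceptance criterion concrete, here is a small sketch (Python); the residuals are assumed to have been evaluated already by the caller with its own finite differences, and the names pde_truth_value and acceptable are our illustration, not the paper's code.

```python
def pde_truth_value(points, residuals):
    """Truth-value components of Definition 12.A for a candidate finite solution.

    points    : list of grid points z_l = (z_{l,1}, ..., z_{l,p})
    residuals : values F(z_l, grad_U f(z_l)) already evaluated at those points
    Returns the per-coordinate meshes m(P_{x_j}) and mu, all directed toward 0."""
    p = len(points[0])
    meshes = []
    for j in range(p):
        coords = sorted({z[j] for z in points})
        gaps = [b - a for a, b in zip(coords, coords[1:])]
        meshes.append(max(gaps) if gaps else 0.0)
    mu = max(abs(r) for r in residuals)
    return meshes, mu

def acceptable(meshes, mu, T):
    """Monte Carlo acceptance: each component must stay below its threshold."""
    return all(comp <= t for comp, t in zip(meshes + [mu], T))
```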
Calculating ∇_U f in Definition 12.A requires about nN derivative estimations as in (13), since the differential equation is of order n. The time complexity of (13) is O(MN). Thus, nN·O(MN) = O(MN²). The space complexity remains the same, since old memories are erased in each calculation. Hence, we obtain the following (Table 10).
Using another method in which each differentiation has time complexity O(N) produces time complexity nN·O(N) for calculating ∇_U f, while the space complexity remains unchanged. Thus, Table 11 is obtained.
Other numerical differentiation methods may produce higher complexities [19].
The following theorem predicts how many trials should be run for a good Monte Carlo estimate.
Theorem 4. 
Let a differential equation of the form (30) be given. Let D = D₁ × … × D_p be a finite domain. Suppose that any differentiation with respect to the variable x_k has a tolerable gap d_k. Let d = min{d₁, …, d_p}. Suppose that I_k = [min_{x ∈ D_k} x, max_{x ∈ D_k} x] is subdivided into N_k equal subintervals, each with a length of not more than d/(2p). Then a good Monte Carlo sample is expected to have a size of
R = Π_{k=1}^{p} N_k^{N_k}/N_k!
elements.
Proof. 
Suppose that I_k = [min_{x ∈ D_k} x, max_{x ∈ D_k} x] is subdivided into N_k equal subintervals I_{k,l} for l = 0, 1, …, N_k − 1, each with length d/(2p). According to Theorem 2, after N_k^{N_k}/N_k! trials, all N_k subintervals are expected to be chosen. When this happens, the distance between two chosen neighboring points cannot exceed d_k. The reason is that if u and v are neighboring points, then u ∈ I_{k,l} and v ∈ I_{k,l+1}, or vice versa. Since I_{k,l} ∪ I_{k,l+1} is an interval of length not exceeding d/p ≤ d, while the latter cannot exceed d_k, we have |u − v| ≤ d_k. Since k is arbitrary, the subdivisions ensure the possibility of having differentiations in each dimension that can be approximated with tolerable gaps. If we run the trials for the k-th dimension sequentially for k = 1, …, p, then there are at least Π_{k=1}^{p} N_k^{N_k}/N_k! trials in total. Now, the second derivatives are obtained from the first derivatives. Thus, if the first derivatives can be determined for neighboring points whose gaps are no more than d_k in the k-th dimension for all k, then the second derivatives can also have gaps of no more than d_k for each k. By a similar argument, the k-th derivatives can also have gaps of no more than d_k, for k = 1, …, n. □
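A one-function check of the sample size R in Theorem 4 (ours, in Python):

```python
from math import factorial

def pde_sample_size(subinterval_counts):
    """R = product over k of N_k**N_k / N_k! (Theorem 4)."""
    R = 1.0
    for N_k in subinterval_counts:
        R *= N_k ** N_k / factorial(N_k)
    return R

# Two coordinates with N_1 = N_2 = 4 subintervals each gives about 113.8 trials.
print(pde_sample_size([4, 4]))
```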
Example 4. 
Suppose the differential equation
t u_x³ − x² u_t − t u_{xx}² = 0
is given with the initial condition u(0, x) = 0.
Suppose that the Monte Carlo procedure gives a function that happens to coincide with u(x, t) = t·x at the points (t, x) ∈ {0, 1/3, 2/3, 1} × {0, 1/3, 2/3, 1}. Then, using forward differences, we obtain the results in Table 12.
The meshes are m(P_t) = m(P_x) = 1/3, and the maximum deviation of F(t, x, u, u_t, u_x, u_{xx}) is μ = 6.461880.
Therefore, the solution is u(x, t) = t·x at (t, x) ∈ {0, 1/3, 2/3}² with truth value
r = (m(P_t), m(P_x), μ) = (1/3, 1/3, 6.461880), with all components directed toward 0,
implying that r′ = (75%_{m(P_t)}, 75%_{m(P_x)}, 13%_μ). The overall quality is 7.3%.

4. Discussion

Due to the random number generation and grid construction required for each input of Monte Carlo data, our approach has time and space complexities that are mostly higher than those of many other methods. However, the approach has the merit that it works with any given input: at least one point for integration, and two for differentiation. This is because it can be utilized not only to approximate, but also to determine the quality of equating one thing to something else, such as some data to the solution of a differential equation. Moreover, the approach does not proceed by stepping towards a better solution but by collecting as many solutions as possible, even including rather simple ones; afterwards, the best ones among them are chosen. There is no guarantee that the answers form a convergent pattern, since the result might be intrinsically multivalued in nature. This suggests that our method might not be suitable for pursuing common numerical answers but can still be useful for searching for answers in cases where analytic and other methods fail to work properly, such as for functions with chaotic behavior, or for differential equations, where there is a much wider range of equation types, virtually without limitation, to which our method can be applied.
The truth values are determined according to the goals and nature of the calculations.
Multivalued logic assists the Monte Carlo method in sorting the answers it collects by grading their quality, which is reflected in the truth values. Sometimes these grades conflict because they concern different aspects, but this is a task to be handled with decision theory. However, the input to be examined is not limited solely to the Monte Carlo method. For example, outputs of distinct strategies for differential equations, such as the Runge–Kutta method in [18] (Chapter 20), may be compared using the multivalued logic approach.
The Monte Carlo procedure is dimensionless in the sense that it deals with points in any space regardless of its dimension. Therefore, this approach can easily be adapted to higher dimensions, giving, for example, matrix derivatives, multiple integrals, or other kinds of differential equations. The concepts developed here suggest how further developments might handle, for example, an integro-differential equation.
A suggestion for further research is as follows. The method might be used to handle counterparts of some sets treated as non-existent in ZF set theory. For example, the non-existent Russell set R = {x | x ∉ x} has an existing counterpart R_{1/2} = {^{1/2} x | x ∉ x}. Here R_{1/2} ∈^{1/2} R_{1/2}, since if R_{1/2} ∈^s R_{1/2}, where s ≤ 1, then R_{1/2} ∉^{1−s} R_{1/2}, which, by definition, means 1 − s = 1/2. Hence, s = 1/2. Similarly, the set of all sets U has an existing counterpart U_{1/2} = {^{1/2} x | x = x}. Here, if U_{1/2} ∈^t U_{1/2}, where t ≤ 1, then U_{1/2} ∉^{1−t} U_{1/2}, which implies t = 1 − t. Therefore, t = 1/2. So, U_{1/2} ∈^{1/2} U_{1/2} as well. Facts such as these may contribute to the formulation of ZF in a multivalued logic setting.
As can be seen from the suggestion regarding set theory above, the Russell set and the set of all sets show that there are mathematical objects that are intrinsically half-existent.

Author Contributions

Conceptualization, R.A.S. and S.M.; methodology, R.A.S.; software, S.M.; validation, E. and S.M.; formal analysis, R.A.S.; investigation, S.M.; resources, R.A.S. and E.; data curation, E.S.; writing—original draft preparation, R.A.S. and E.; writing—review and editing, E. and E.S.; visualization, T.S.; supervision, S.M.; project administration, T.S. and E.S.; funding acquisition, T.S. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by Gunadarma University in the form of a research and publication grant, number: 055.2/SK/REK/UG/2025.

Data Availability Statement

No data were used for this article, as it is theoretical in nature.

Acknowledgments

The authors thank Gunadarma University for the opportunity and support provided through the research and publication grant TA 2025.

Conflicts of Interest

The authors declare no conflicts of interest.

References

  1. Gaytan-Cortes, J. The Monte Carlo method of random simulation samples. Merc. Y Neg. 2023, 24, 95–108. [Google Scholar] [CrossRef]
  2. Metropolis, N.; Ulam, S. The Monte Carlo Method. J. Am. Stat. Assoc. 1949, 44, 335–341. [Google Scholar] [CrossRef] [PubMed]
  3. Bergman, C.H. Universal Algebra: Fundamentals and Selected Topics; CRC Press: Boca Raton, FL, USA, 2011. [Google Scholar]
  4. Blyth, T.S. Lattices and Ordered Algebraic Structures; Springer: Berlin/Heidelberg, Germany, 2005. [Google Scholar]
  5. Alimov, A.; Tsar'kov, I.G. Classical Problems of Rational Approximation. Dokl. Math. 2022, 106, 305–307. [Google Scholar] [CrossRef]
  6. Holhos, A. On the Approximation by Balázs–Szabados Operators. Mathematics 2021, 9, 1588. [Google Scholar] [CrossRef]
  7. Bartle, R.G. The Elements of Real Analysis, 2nd ed.; John Wiley & Son: New York, NY, USA, 1976. [Google Scholar]
  8. Munkres, J.R. Analysis on Manifolds; Addison-Wesley Publishing Company: Redwood City, CA, USA, 1991. [Google Scholar]
  9. Ullah, I.; Azam, N.A.; Hayat, U. Efficient and secure substitution box and random number generators over Mordell elliptic curves. J. Inf. Secur. Appl. 2021, 56, 102619. [Google Scholar] [CrossRef]
  10. Anderson, D.F.; Higham, D.J.; Sun, Y. Computational complexity analysis for Monte Carlo approximations of classically scaled population processes. Multiscale Model. Simul. 2018, 16, 1206–1226. [Google Scholar] [CrossRef]
  11. Leonenko, N.; Podlubny, I. Monte Carlo Method for Fractional-Order Differentiation Extended to Higher Orders. Fract. Calc. Appl. Anal. 2022, 25, 841–857. [Google Scholar] [CrossRef]
  12. Banks, H.T.; Jacobs, M.Q. A differential calculus for multifunctions. J. Math. Anal. Appl. 1970, 29, 246–272. [Google Scholar] [CrossRef]
  13. Royden, H.L. Real Analysis; Macmillan Publishing Company: New York, NY, USA, 1988. [Google Scholar]
  14. Usmani, A.R. A Novel Time and Space Complexity Efficient Variant of Counting-Sort Algorithm. In Proceedings of the International Conference on Innovative Computing (ICIC), Lahore, Pakistan, 1 November 2019. [Google Scholar]
  15. Tang, Y. A Note on Monte Carlo Integration in High Dimensions. Am. Stat. 2023, 78, 290–296. [Google Scholar] [CrossRef]
  16. Kumar, R.; Chandel, J.; Aggarwal, S. A New Integral Transform “Rishi Transform” with Application. J. Sci. Res. 2022, 14, 521–532. [Google Scholar] [CrossRef]
  17. Saadeh, R.; Qazza, A.; Burqan, A. A New Integral Transform: ARA Transform and Its Properties and Applications. Symmetry 2020, 12, 925. [Google Scholar] [CrossRef]
  18. Kreyszig, E.O. Advanced Engineering Mathematics, 10th ed.; John Wiley & Sons: Hoboken, NJ, USA, 2011. [Google Scholar]
  19. Bhattacharya, N.; Silva, A.G. Stability and Complexity Analyses of Finite Difference Algorithms. arXiv 2020, arXiv:2007.08660. [Google Scholar]
Table 1. Complexities of calculations of (13).
Process | Pseudo RNG ¹ | f(x) | c, s, t positions | f′(x) | Truth value | Total
Time complexity | O(MN) ² | O(N) | O(N) | O(N) | O(1) | O(MN)
Space complexity | O(N) | O(1) | O(log N) | O(N) | O(1) | O(N)
¹ RNG is an abbreviation for random number generator. ² According to [9], for some M ∈ ℕ.
Table 2. Complexities of differentiation calculations by the Finite Difference Method.
Process | Pseudo RNG ¹ | f(x) | c, s, t positions in sorted array | f′(x), truth value | Total
Time complexity | O(N) | O(N) | O(log N) | O(1) | O(N)
Space complexity | O(1) | O(1) | O(log N) | O(1) | O(N)
¹ RNG is an abbreviation for random number generator.
Table 3. Data for f to calculate f′(3).
x | 1.56 | 2.26 | 2.58 | 2.74 | 2.80 | 3.14 | 3.56 | 3.70 | 3.84
f(x) | 5 | 7 | 4 | −5 | −7 | 8 | 9 | 13 | 10
Table 4. Complexities of the calculations of (19).
Process | Pseudo RNG | f(x) | Ordering (before 2022) | The part of the process as in (18) ¹ | Truth value | Total
Time complexity | O(MN) ² | O(N) | O(N + K) ³ | O(N) | O(1) | O(max{MN, N + K})
Space complexity | O(N) | O(1) | O(2N + 2K) | O(N) | O(1) | O(N + K)
¹ The simple quadrature method is used in (18). ² M is as in Table 1. ³ K is the number of digits of the largest number in the array of x; cf. [14].
Table 5. Numerical integration methods.
Process | Subdivision | f(x) | Integration | Truth value | Total
Time complexity | O(N) | O(N) | O(N) to O(N^d) | O(N) | O(N) to O(N^d)
Space complexity | O(N) | O(1) | O(1) | O(1) | O(N)
Table 6. Data for f to estimate the integral ∫₂⁴ f(x) dx.
i | D_i | x | f(x)
1 | [2, 2.2] | 2.06 | 5
2 | [2.2, 2.4] | 2.26 | 7
3 | [2.4, 2.6] | 2.58, 2.59 | 4, −5
4 | [2.6, 2.8] | — | —
5 | [2.8, 3] | 2.80, 2.99 | −7, 8
6 | [3, 3.2] | — | —
7 | [3.2, 3.4] | — | —
8 | [3.4, 3.6] | 3.56 | 9
9 | [3.6, 3.8] | 3.70 | 13
10 | [3.8, 4] | 3.84 | 10
Table 7. Complexities of computing (28).
Process | T(f)(u, v) | T(f)
Time complexity ¹ | O(V) | O(V²)
Space complexity ² | O(W) | O(W)
¹ V = max{MN, N + K}, as in Table 4. ² W = max{N, 2N + 2K}, as in Table 4.
Table 8. Complexities of computing T(f) as in (28) by other methods.
Process | T(f)(u, v) | T(f)
Time complexity ¹ | O(V) | O(V²)
Space complexity ² | O(W) | O(W)
¹ V = N to N^d, as in Table 5. ² W = 1 to N, as in Table 5.
Table 9. Data for f in the transform T(f)(u, v, w).
i | D_i | x | f(x)
1 | [−4, −3] | −3.90, −3.75, −3.25 | 0.020, 0.023, 0.39
2 | [−3, −2] | −2.25, −2.01 | 0.11, 0.14
3 | [−2, −1] | −1.30 | 0.27
4 | [−1, 0] | −0.85 | 0.43
5 | [0, 1] | 0.25, 0.65, 0.95 | 1.28, 1.92, 2.59
Table 10. Complexities of computing ∇f in Definition 12.A.
Process | ∇f ¹ | ∇_U f
Time complexity | O(MN) | O(MN²)
Space complexity | O(N) | O(N)
¹ ∇f denotes the method of Equation (13).
Table 11. Complexities of computing ∇f in Definition 12.A by the Finite Difference Method.
Process | ∇f ¹ | ∇_U f
Time complexity | O(N) | O(N²)
Space complexity | O(N) | O(N)
¹ ∇f denotes the Finite Difference Method.
Table 12. Calculation results of ∂_x u, ∂_{xx} u, ∂_t u, and F(t, x, u, u_t, u_x, u_{xx}).
(t, x) | (0, 0) | (1/3, 0) | (2/3, 0) | (0, 1/3) | (0, 2/3)
∂_x u | 0 | 0.815318 | 0.331373 | 0 | 0
∂_{xx} u | 0.806817 | 1.639137 | 0.615231 | 0.806817 | 0
∂_t u | 3 ¹ | 0 | 0 | 2.184682 | 1.915743
F | 0 | 1.862609 | 2.655076 | 0.242742 | 0.851441
(t, x) | (1/3, 1/3) | (1/3, 2/3) | (2/3, 1/3) | (2/3, 2/3)
∂_x u | 0.268939 | 0.183693 | 0.126296 | 0.092842
∂_{xx} u | 0.255738 | 1.100362 | 3.022560 | 2.728016
∂_t u | 0.483946 | 0.626589 | 0.331372 | 0.457668
F | 0.306692 | 0.505729 | 6.461880 | 5.466661
¹ This entry is calculated using the initial condition for u(0, 0).