Solution of Ruin Probability for Continuous Time Model Based on Block Trigonometric Exponential Neural Network

Yinghao Chen, Chun Yi, Xiaoliang Xie, Muzhou Hou and Yangjin Cheng
1 School of Mathematics and Statistics, Central South University, Changsha 410083, China
2 College of Finance and Statistics, Hunan University, Changsha 410006, China
3 School of Mathematics and Statistics, Hunan University of Technology and Business, Changsha 410205, China
4 School of Mathematics and Computational Science, Xiangtan University, Xiangtan 411105, China
* Author to whom correspondence should be addressed.
Symmetry 2020, 12(6), 876; https://doi.org/10.3390/sym12060876
Submission received: 28 April 2020 / Revised: 16 May 2020 / Accepted: 22 May 2020 / Published: 26 May 2020

Abstract: The ruin probability is used to determine the overall operating risk of an insurance company. Modeling risks through the characteristics of the historical data of an insurance business, such as premium income, dividends and reinvestments, usually produces an integro-differential equation that is satisfied by the ruin probability. However, the distribution function of the claim inter-arrival times is complicated, which makes it difficult to find an analytical solution for the ruin probability. Therefore, based on the principles of artificial intelligence and machine learning, we propose a novel numerical method for solving the ruin probability equation. The initial asset u is used as the input vector and the ruin probability as the only output. A trigonometric exponential function is proposed as the projection mapping in the hidden layer, and a block trigonometric exponential neural network (BTENN) model with a symmetrical structure is established. The trial solution is constructed to satisfy the initial value condition; simultaneously, the connection weights are optimized by solving a linear system using the extreme learning machine (ELM) algorithm. Three numerical experiments were carried out in Python. The results show that the BTENN model can obtain an approximate solution of the ruin probability under the classical risk model and the Erlang(2) risk model at any point. Compared with existing methods such as Legendre neural networks (LNN) and trigonometric neural networks (TNN), the proposed BTENN model has higher stability and lower deviation, which proves that it is feasible and superior to use a BTENN model to estimate the ruin probability.

1. Introduction

To help policyholders avoid risks, insurance companies bear the losses of risk buyers [1], which means that insurance companies accumulate considerable risk themselves [2]. Insurance companies face severe challenges when large claims, or a large number of small claims, occur in a short period of time [3]. Policyholders, in turn, focus on the solvency of insurance companies [4]. The ruin probability is the probability that an insurer's surplus becomes negative at some moment, and it is an important indicator of the operating status of an insurance company [5]. Therefore, the study of the ruin probability has always been among the most active subjects in risk theory, especially in ruin theory. Accurate calculation of the ruin probability can help insurance companies manage and strategically avoid business risks [6], which has important practical significance.
In the past few decades, researchers have established different types of insurance risk models based on premium income [7,8], payment amount [9], investment [10,11], dividends [12] and other characteristics [13]. Assuming that claims arrive according to a Poisson process, Lundberg proposed the classical ruin model in 1903 [14]. Sparre Andersen extended the distribution of the claim inter-arrival times from the exponential distribution to a general distribution, constructing the renewal risk model in 1957 [15]. Dickson and Hipp calculated the ruin probability under the Erlang(2) risk model [16]. Subsequently, Li et al. studied the Erlang(n) risk model [17,18]. Considering the uncertainty of the dividend strategy and investment, Gerber proposed a classical risk model with perturbations in 1970 [19]. A Markov process can be used to describe the uncertainty of the external environment, and several risk models with Markov parameters have been proposed [20,21,22]. Asmussen summarized different types of ruin risks [23]. By modeling with probability theory and stochastic processes, the ruin probability can generally be expressed in the form of an integro-differential equation [24]. Cai et al. studied the ruin probability under different interest rates and gave numerical solutions [25]. Extending the claim inter-arrival times to the generalized Erlang(n) distribution, where the generalized Lundberg equation may have multiple positive roots, Bergel matched the obtained roots one-by-one to the solutions of the differential equations, taking factors such as dividends into account [26]. Kasumo used a block-by-block method to calculate the ruin probability including the investment returns of the standard Black-Scholes model, illustrating the sensitivity of the ruin probability to stock price fluctuations [27].
Due to the complex structure of integro-differential equations, it is difficult to describe the ruin probability with elementary functions, and these equations often have no analytical solution at all. Traditional methods use inequalities to estimate the range of the ruin probability [28,29,30]. With the development of numerical techniques such as the Euler method [31,32] and the Runge–Kutta method [33], Cardoso and Waters discussed the numerical solution of the ruin probability over a finite time horizon [34]. Makroglou used the polynomial spline method to solve first-order Volterra integro-differential equations and obtained an approximate solution of the collective non-ruin equation [35]. Paulsen analyzed the probability of ruin with stochastic returns through the block Simpson rule [36]. Tsitsiashvili used an exponential distribution to approximate the loss, and then calculated the ruin probability under the classical risk model [37]. Zhang proposed a Fourier cosine expansion to estimate the ruin probability in the compound Poisson risk model, and estimated the error of the numerical solution [38].
As a powerful machine learning method, the neural network is widely used in computer vision [39,40], natural environment simulation [41,42,43], price prediction [44,45,46] and the numerical solution of differential equations [47,48,49,50,51]. Sabir proposed Morlet wavelet neural networks for solving second-order differential equations [52]. Huré et al. proposed a deep backward neural network for solving high-dimensional PDEs [53]. Samaniego used deep neural networks in the partial differential equations of computational mechanics to improve the computational efficiency of mechanical problems [54]. In addition, Huang proposed the extreme learning machine (ELM) method for quickly training a single hidden layer feedforward neural network [55]. In 2019, Zhou and Hou proposed the improved ELM (IELM) method with trigonometric basis function neural networks (TNN) to numerically solve the ruin probability [56]. Defining Legendre polynomials as the activation function in the hidden layer, a Legendre neural network (LNN) based on IELM was used by Lu to obtain approximate solutions of the ruin probability under the Erlang(2) risk model and the classical risk model [57].
In this study, we apply the block trigonometric exponential neural network (BTENN) to find approximate solutions of the ruin probability in the classical risk model and the Erlang(2) risk model. Numerical experiments were programmed in Python 3.6 and run on a MacBook Pro (Retina, 13-inch, Late 2013). Compared with the LNN and TNN methods, the proposed BTENN method offers higher precision and better stability.
The rest of this study is structured as follows: Section 2 briefly reviews the ELM algorithm. The block trigonometric exponential neural network (BTENN) is proposed in Section 3. Section 4 discusses the methods and details of using the BTENN model to obtain the numerical solution of the ruin probability equation for the classical risk model and the Erlang(2) risk model. Three numerical experiments are conducted in Section 5, which demonstrate the accuracy and reliability of the BTENN model. Finally, conclusions and prospects are given in Section 6.

2. Extreme Learning Machine Algorithm

As shown in Figure 1, the ELM is an algorithm used to optimize the parameters in a single hidden layer feedforward neural network (SLFN). Since it does not require repeated iterations, only the Moore–Penrose generalized inverse of the coefficient matrix is needed to obtain the global optimal solution. Thus, it has the advantage of fast computation and does not easily overfit. For a given data set with $M$ samples $\{(x_i, y_i)\}_{i=1}^{M}$, where each $x_i$ has $N$ features, the output of a single hidden layer network with $m$ hidden neurons can be mathematically represented as follows:
$$ f_m(x) = \sum_{i=1}^{m} \beta_i G(a_i, b_i, x) = h(x)\beta = \tilde{y}, $$
where $h(x) = [G(a_1, b_1, x), G(a_2, b_2, x), \ldots, G(a_m, b_m, x)]$ is the group of activation functions that projects the network input $x$ into the $m$-dimensional feature space, and $\beta = [\beta_1, \beta_2, \ldots, \beta_m]^T$ is the connection weight between the hidden layer and the output layer. When the network approximates the given data set without error by the ELM algorithm, Equation (1) is satisfied at every sample in the data set, which can be rewritten as:
$$ f_m(x_j) = \sum_{i=1}^{m} \beta_i G(a_i, b_i, x_j) = h(x_j)\beta = y_j, \quad j = 1, 2, \ldots, M. $$
Equation (2) can be expressed in matrix form, i.e.,
$$ H\beta = y, $$
where,
$$ H = \begin{bmatrix} h(x_1) \\ \vdots \\ h(x_M) \end{bmatrix} = \begin{bmatrix} h_1(x_1) & \cdots & h_m(x_1) \\ \vdots & \ddots & \vdots \\ h_1(x_M) & \cdots & h_m(x_M) \end{bmatrix}, \quad y = \begin{bmatrix} y_1 \\ \vdots \\ y_M \end{bmatrix}. $$
Then the problem of finding the optimal connection weight is transformed into the following optimization problem:
$$ \min: \ \sum_{i=1}^{M} \xi_i^2 \quad \text{s.t.} \quad h(x_i)\beta = y_i - \xi_i, \quad i = 1, 2, \ldots, M. $$
Generally, the number of training points $M$ is much larger than the number of hidden neurons $m$. Based on the Karush–Kuhn–Tucker (KKT) conditions, for any input weights $a_i$ and biases $b_i$, the ELM algorithm obtains the minimum square error of the output through the least squares method, which can be represented as
$$ \beta^* = \arg\min_{\beta} \| H\beta - y \|^2 = H^{\dagger} y, $$
where $H^{\dagger} = (H^T H)^{-1} H^T$ is the Moore–Penrose generalized inverse of the matrix $H$.
The ELM algorithm consists of three steps:
  • Step 1. Randomly initialize the parameters in the hidden layer;
  • Step 2. Calculate the output matrix of the hidden layer;
  • Step 3. Obtain the output weight β by the least squares method.
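As a minimal sketch of these three steps (with a generic sigmoid hidden layer; the helper names elm_fit and elm_predict are illustrative, not the paper's code):

```python
# Sketch of ELM training: random hidden parameters, then beta by least squares.
import numpy as np

def elm_fit(X, y, m=20, seed=0):
    rng = np.random.default_rng(seed)
    a = rng.normal(size=(X.shape[1], m))       # Step 1: random input weights
    b = rng.normal(size=m)                     # Step 1: random biases
    H = 1.0 / (1.0 + np.exp(-(X @ a + b)))     # Step 2: hidden-layer output matrix
    beta = np.linalg.pinv(H) @ y               # Step 3: beta = H^+ y
    return a, b, beta

def elm_predict(X, a, b, beta):
    H = 1.0 / (1.0 + np.exp(-(X @ a + b)))
    return H @ beta
```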

3. Block Trigonometric Exponential Neural Network

In this study, a single hidden layer feedforward neural network (SLFN) model is established. The block trigonometric exponential neural network (BTENN) is a special case of the SLFN, with an input node $x$ and one output layer. We propose a novel trigonometric exponential expansion to replace the usual activation function of the hidden layer (generally a Gaussian function or the Legendre polynomials). The unique hidden layer consists of two parts: the first part uses the cosine exponential function as the basis function, and the other part implements the superposition of the sine exponential function. The input vector $x$ is projected into a high-dimensional feature space using the proposed trigonometric exponential basis functions, where the cosine exponential basis functions and the sine exponential basis functions are denoted as $G_c(x)$ and $G_s(x)$, respectively. The two sets of basis functions are symmetrical to each other. These nonlinear basis functions ensure that the BTENN has the ability to fit complex ruin probabilities. The general form of the trigonometric exponential basis functions can be written as follows:
$$ G_{i,c}(x) = (1 - \cos x)^2 e^{-k_i x}, $$
$$ G_{i,s}(x) = (1 - \sin x)^2 e^{-k_i x}. $$
Consider an input vector $x = (x_1, x_2, \ldots, x_m)$, whose dimension is $m$. The coordinates projected by the trigonometric exponential functions in the hidden layer can be calculated as:
$$ \begin{bmatrix} G_{1,c}(x_1) & \cdots & G_{N/2,c}(x_1) & G_{1,s}(x_1) & \cdots & G_{N/2,s}(x_1) \\ \vdots & & \vdots & \vdots & & \vdots \\ G_{1,c}(x_m) & \cdots & G_{N/2,c}(x_m) & G_{1,s}(x_m) & \cdots & G_{N/2,s}(x_m) \end{bmatrix}, $$
where $N$ is the number of hidden neurons; in the BTENN, $N$ is always an even number. A schematic diagram of the block trigonometric exponential neural network is given in Figure 2. Linearly combining the basis functions with appropriate coefficients $\alpha_{i,c}$ and $\alpha_{i,s}$, the output of the BTENN can be calculated as:
$$ N(x, \alpha) = \sum_{i=1}^{N/2} \left[ \alpha_{i,c} G_{i,c}(x) + \alpha_{i,s} G_{i,s}(x) \right]. $$
Remark 1 (Convergence theorem).
We claim that the output of the block trigonometric exponential neural network can converge to any continuous function $\psi(x)$.
Proof. 
Let $L^2(\mathbb{R})$ be a Hilbert space and $G_{n,c}(x) = (1 - \cos x)^2 e^{-k_n x}$, $G_{n,s}(x) = (1 - \sin x)^2 e^{-k_n x}$, where $G_{n,c}(x)$ and $G_{n,s}(x)$ form a pair of basis functions in $L^2(\mathbb{R})$.
Let $U(x) = \sum_{i=1}^{N} \langle \alpha_i, G_i(x) \rangle$ be the output of the block trigonometric exponential neural network, where $\langle \cdot, \cdot \rangle$ represents the inner product, and
$$ \alpha_i = (\alpha_{i,c}, \alpha_{i,s}), $$
$$ G_i(x) = (G_{i,c}(x), G_{i,s}(x)). $$
That is,
$$ U(x) = \sum_{i=1}^{N} \langle (\alpha_{i,c}, \alpha_{i,s}), (G_{i,c}(x), G_{i,s}(x)) \rangle = \sum_{i=1}^{N} \left[ \alpha_{i,c} G_{i,c}(x) + \alpha_{i,s} G_{i,s}(x) \right]. $$
Here, we denote the sequence of partial sums of $\{\langle \alpha_i, G_i(x) \rangle\}$ as $\{S_i\}$.
Hence, letting $S_m$ and $S_n$ be arbitrary partial sums with $m \geq n > 0$, we need to prove that $\{S_i\}$ is a Cauchy sequence.
Considering that $S_n = \sum_{i \leq n} \langle \alpha_i, G_i(x) \rangle$ and $S_m = \sum_{i \leq m} \langle \alpha_i, G_i(x) \rangle$, in this case we can get:
$$ \| S_m - S_n \|^2 = \left\| \sum_{i=n+1}^{m} \langle \alpha_i, G_i(x) \rangle \right\|^2 = \left\| \sum_{i=n+1}^{m} \left[ \alpha_{i,c}(1 - \cos x)^2 e^{-k_i x} + \alpha_{i,s}(1 - \sin x)^2 e^{-k_i x} \right] \right\|^2 \leq \sum_{i=n+1}^{m} \left\| 2\alpha_{i,c} e^{-k_i x} + 2\alpha_{i,s} e^{-k_i x} \right\|^2 \leq \sum_{i=n+1}^{m} \left\| 4\alpha_{i,max} e^{-k_i x} \right\|^2 \leq \sum_{i=n+1}^{m} \| 4\alpha_{i,max} \|^2 = 16 \sum_{i=n+1}^{m} \alpha_{i,max}^2, $$
where $\alpha_{i,max} = \max\{ |\alpha_{i,c}|, |\alpha_{i,s}| \}$.
As $m, n \to \infty$ in Equation (13), we have $\| S_m - S_n \|^2 \to 0$. This means $\{S_i\}$ is a Cauchy sequence in the Hilbert space $L^2(\mathbb{R})$, and it converges to:
$$ S = \sum_{i} \langle \alpha_i, G_i(x) \rangle. $$
Thus, the BTENN can approximate an unknown continuous function with arbitrary precision as the number of hidden layer neurons $N$ approaches infinity. □
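As a concrete sketch of the hidden-layer projection above (the decay rates $k_i = i$ are an assumption made purely for illustration):

```python
# Map inputs u into the N-dimensional BTENN feature space: N/2 cosine-
# exponential columns followed by N/2 sine-exponential columns.
import numpy as np

def btenn_hidden(u, N):
    u = np.asarray(u, dtype=float).reshape(-1, 1)
    k = np.arange(1, N // 2 + 1)                   # assumed decay rates k_i = i
    Gc = (1 - np.cos(u)) ** 2 * np.exp(-k * u)     # G_{i,c}(u)
    Gs = (1 - np.sin(u)) ** 2 * np.exp(-k * u)     # G_{i,s}(u)
    return np.hstack([Gc, Gs])                     # shape (M, N)
```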

4. The BTENN Model for the Ruin Probability Equation

In this section, we will introduce the details of solving the ruin probability equation by the block trigonometric exponential neural network under the assumption of the classical risk model or the Erlang(2) risk model.

4.1. Classical Risk Model

The surplus process $U(t)$ represents the surplus assets of the insurance company at time $t$; in the compound Poisson risk model, this process can be expressed as:
$$ U(t) = u + ct - \sum_{i=1}^{N_t} X_i, \quad t \geq 0, $$
which has the following structure: the counting process $\{N_t, t \geq 0\}$ represents the number of claims during the interval $(0, t]$ and is assumed to form a Poisson process with parameter $\lambda$. Suppose that the number of claims $\{N_t\}$ and the claim amounts $\{X_i, i = 1, \ldots, N_t\}$ are independent of each other. In addition, the claim amounts $X_i$ form a non-negative sequence of independent and identically distributed random variables with distribution function $F(x)$ and mean $\mu$. The initial surplus of the insurance company is $u \geq 0$ and $c$ is the insurance premium charged per unit time. To ensure that the insurance company can operate safely, the premium income should not be less than the expected value of the compensation amount, i.e.,
$$ E\left( ct - \sum_{i=1}^{N_t} X_i \right) = (c - \lambda\mu)t \geq 0. $$
This means the security loading $\theta = \dfrac{c - \lambda\mu}{\lambda\mu} > 0$.
The time of ruin T is defined as:
$$ T = \inf\{ t \geq 0,\ U(t) < 0 \}, $$
where $T = +\infty$ means that the insurance company never goes bankrupt.
The ultimate ruin probability is expressed as:
$$ \psi(u) = P(T < \infty \mid U(0) = u), \quad u \geq 0. $$
In other words, the ultimate survival probability is $\phi(u) = 1 - \psi(u)$.
If the surplus process of an insurance company can be described by Formula (15), the ultimate ruin probability satisfies the following integro-differential equation and initial value condition:
$$ \psi'(u) = \frac{\lambda}{c}\psi(u) - \frac{\lambda}{c}\int_0^u \psi(u - x)\, dF(x) - \frac{\lambda}{c}\left[ 1 - F(u) \right], $$
$$ \psi(0) = \frac{\lambda\mu}{c} = \frac{1}{1 + \theta}, \quad \text{when } c > \lambda\mu. $$

4.2. BTENN for Classical Risk Model

According to the BTENN model proposed above, the trial solution can be constructed as follows:
$$ \tilde{\varphi}(u) = \frac{1}{1+\theta} + u \sum_{i=1}^{N/2} \left[ \alpha_{i,c} G_{i,c}(u) + \alpha_{i,s} G_{i,s}(u) \right]. $$
In fact, this trial solution satisfies the initial value condition (20). Replacing $\psi(u)$ with the trial solution $\tilde{\varphi}(u)$ in Equation (19) gives:
$$ \tilde{\varphi}'(u) = \frac{\lambda}{c}\tilde{\varphi}(u) - \frac{\lambda}{c}\int_0^u \tilde{\varphi}(u - x)\, dF(x) - \frac{\lambda}{c}\left[ 1 - F(u) \right]. $$
Expanding expression (22), we have:
$$ LHS = \tilde{\varphi}'(u) = \sum_{i=1}^{N/2} \left[ \alpha_{i,c}\left( G_{i,c}(u) + u G_{i,c}'(u) \right) + \alpha_{i,s}\left( G_{i,s}(u) + u G_{i,s}'(u) \right) \right], $$
$$ RHS = RHS_1 - RHS_2 - RHS_3, $$
where,
$$ RHS_1 = \frac{\lambda}{c}\tilde{\varphi}(u) = \frac{\lambda}{c}\left[ \frac{1}{1+\theta} + u \sum_{i=1}^{N/2} \left[ \alpha_{i,c} G_{i,c}(u) + \alpha_{i,s} G_{i,s}(u) \right] \right] = \frac{\lambda}{c(1+\theta)} + \sum_{i=1}^{N/2} \left[ \alpha_{i,c}\frac{\lambda}{c} u G_{i,c}(u) + \alpha_{i,s}\frac{\lambda}{c} u G_{i,s}(u) \right], $$
$$ RHS_2 = \frac{\lambda}{c}\int_0^u \tilde{\varphi}(u-x)\, dF(x) = \frac{\lambda}{c}\int_0^u \left\{ \frac{1}{1+\theta} + (u-x)\sum_{i=1}^{N/2} \left[ \alpha_{i,c} G_{i,c}(u-x) + \alpha_{i,s} G_{i,s}(u-x) \right] \right\} dF(x) = \frac{\lambda}{c(1+\theta)}\left[ F(u) - F(0) \right] + \sum_{i=1}^{N/2} \alpha_{i,c}\frac{\lambda}{c}\int_0^u (u-x) G_{i,c}(u-x)\, dF(x) + \sum_{i=1}^{N/2} \alpha_{i,s}\frac{\lambda}{c}\int_0^u (u-x) G_{i,s}(u-x)\, dF(x), $$
and
$$ RHS_3 = \frac{\lambda}{c}\left[ 1 - F(u) \right]. $$
Merging similar terms, Equation (22) can be rewritten as:
$$ \sum_{i=1}^{N/2} \alpha_{i,c}\left[ \left( 1 - \frac{\lambda}{c}u \right) G_{i,c}(u) + u G_{i,c}'(u) + \frac{\lambda}{c}\int_0^u (u-x) G_{i,c}(u-x)\, dF(x) \right] + \sum_{i=1}^{N/2} \alpha_{i,s}\left[ \left( 1 - \frac{\lambda}{c}u \right) G_{i,s}(u) + u G_{i,s}'(u) + \frac{\lambda}{c}\int_0^u (u-x) G_{i,s}(u-x)\, dF(x) \right] = \frac{\lambda}{c(1+\theta)}\left[ F(0) - \theta\left( 1 - F(u) \right) \right]. $$
Take a series of training points $\Omega = \{0 = u_1 < u_2 < \cdots < u_M = b\}$. Clearly, Equation (28) holds at these training points, that is,
$$ \sum_{i=1}^{N/2} \alpha_{i,c}\left[ \left( 1 - \frac{\lambda}{c}u_j \right) G_{i,c}(u_j) + u_j G_{i,c}'(u_j) + \frac{\lambda}{c}\int_0^{u_j} (u_j-x) G_{i,c}(u_j-x)\, dF(x) \right] + \sum_{i=1}^{N/2} \alpha_{i,s}\left[ \left( 1 - \frac{\lambda}{c}u_j \right) G_{i,s}(u_j) + u_j G_{i,s}'(u_j) + \frac{\lambda}{c}\int_0^{u_j} (u_j-x) G_{i,s}(u_j-x)\, dF(x) \right] = \frac{\lambda}{c(1+\theta)}\left[ F(0) - \theta\left( 1 - F(u_j) \right) \right], \quad j = 1, 2, \ldots, M. $$
Finally, the linear system in (29) has a matrix form,
$$ A\alpha = B, $$
where,
$$ A = (A_C, A_S) \in \mathbb{R}^{M \times N}, \quad A_C = \begin{pmatrix} A_{C1} \\ A_{C2} \\ \vdots \\ A_{CM} \end{pmatrix}, \quad A_S = \begin{pmatrix} A_{S1} \\ A_{S2} \\ \vdots \\ A_{SM} \end{pmatrix}, $$
$$ A_{Cij} = \left( 1 - \frac{\lambda}{c}u_j \right) G_{i,c}(u_j) + u_j G_{i,c}'(u_j) + \frac{\lambda}{c}\int_0^{u_j} (u_j - x) G_{i,c}(u_j - x)\, dF(x), $$
$$ A_{Sij} = \left( 1 - \frac{\lambda}{c}u_j \right) G_{i,s}(u_j) + u_j G_{i,s}'(u_j) + \frac{\lambda}{c}\int_0^{u_j} (u_j - x) G_{i,s}(u_j - x)\, dF(x), $$
and,
$$ \alpha^T = (\alpha_c, \alpha_s) \in \mathbb{R}^{1 \times N}, \quad \alpha_c = (\alpha_{1,c}, \alpha_{2,c}, \ldots, \alpha_{N/2,c}), \quad \alpha_s = (\alpha_{1,s}, \alpha_{2,s}, \ldots, \alpha_{N/2,s}), $$
$$ B = \frac{\lambda}{c(1+\theta)}\Big( F(0) - \theta(1 - F(u_1)),\ F(0) - \theta(1 - F(u_2)),\ \ldots,\ F(0) - \theta(1 - F(u_M)) \Big)^T. $$
Note that, in general, the definite integrals $\int_0^{u_j} (u_j - x) G_{i,c}(u_j - x)\, dF(x)$ and $\int_0^{u_j} (u_j - x) G_{i,s}(u_j - x)\, dF(x)$ in Formulas (32) and (33) cannot be represented by elementary functions. We therefore evaluate them numerically with the composite Simpson's rule [58]:
$$ \int_a^b f(x)\, dx \approx S_n = \frac{h}{6}\sum_{k=0}^{n-1}\left[ f(x_k) + 4 f(x_{k+1/2}) + f(x_{k+1}) \right] = \frac{h}{6}\left[ f(a) + 4\sum_{k=0}^{n-1} f(x_{k+1/2}) + 2\sum_{k=1}^{n-1} f(x_k) + f(b) \right], $$
where $h = (b - a)/n$, $x_k = a + kh$ and $x_{k+1/2} = (x_k + x_{k+1})/2$.
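A direct implementation of this composite rule is given below (a plain sketch; scipy.integrate offers equivalent routines):

```python
# Composite Simpson's rule on [a, b] with n subintervals, for a vectorized f.
import numpy as np

def simpson(f, a, b, n=64):
    x = np.linspace(a, b, n + 1)
    mid = 0.5 * (x[:-1] + x[1:])          # midpoints x_{k+1/2}
    h = (b - a) / n
    return h / 6.0 * (f(x[0]) + 4.0 * f(mid).sum()
                      + 2.0 * f(x[1:-1]).sum() + f(x[-1]))
```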
The optimal coefficient vector obtained by the ELM method is
$$ \alpha^* = A^{\dagger} B, $$
where $A^{\dagger}$ is the Moore–Penrose generalized inverse of the matrix $A$, so that $\alpha^*$ is the least squares solution to the linear system (30).
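To make the assembly concrete, the following sketch builds $A$ and $B$ for exponential claims $F(x) = 1 - e^{-\alpha x}$ and solves for $\alpha^*$; it reuses the simpson helper from the sketch above, approximates the derivatives $G'$ by central differences for brevity, and again assumes the decay rates $k_i = i$:

```python
# Assemble the linear system A alpha = B for the classical risk model with
# exponential claims, then solve alpha* = pinv(A) B as in the ELM step.
import numpy as np

def Gc(u, k): return (1 - np.cos(u)) ** 2 * np.exp(-k * u)
def Gs(u, k): return (1 - np.sin(u)) ** 2 * np.exp(-k * u)
def dGc(u, k, h=1e-6): return (Gc(u + h, k) - Gc(u - h, k)) / (2 * h)
def dGs(u, k, h=1e-6): return (Gs(u + h, k) - Gs(u - h, k)) / (2 * h)

def assemble_classical(us, N, lam, c, alpha, theta):
    F = lambda x: 1 - np.exp(-alpha * x)
    f = lambda x: alpha * np.exp(-alpha * x)        # dF(x) = f(x) dx
    ks = np.arange(1, N // 2 + 1)
    A = np.zeros((len(us), N)); B = np.zeros(len(us))
    for j, u in enumerate(us):
        for i, k in enumerate(ks):
            Ic = simpson(lambda x: (u - x) * Gc(u - x, k) * f(x), 0.0, u)
            Is = simpson(lambda x: (u - x) * Gs(u - x, k) * f(x), 0.0, u)
            A[j, i] = (1 - lam / c * u) * Gc(u, k) + u * dGc(u, k) + lam / c * Ic
            A[j, i + N // 2] = (1 - lam / c * u) * Gs(u, k) + u * dGs(u, k) + lam / c * Is
        B[j] = lam / (c * (1 + theta)) * (F(0.0) - theta * (1 - F(u)))
    return A, B, np.linalg.pinv(A) @ B              # A, B, alpha*
```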

4.3. Erlang(2) Risk Model

By generalizing the claim inter-arrival times of the classical risk model from the exponential distribution to an Erlang(n) distribution, we obtain the Erlang(n) renewal risk model, which may better describe the risks encountered by insurance companies in practical problems. In particular, this study focuses on the Erlang(2) risk model; its surplus process can be expressed as:
$$ U(n) = u + \sum_{i=1}^{n} (cT_i - X_i), \quad n \geq 1, $$
where $u$ represents the initial reserve of the insurance company, $n$ denotes the number of claims, $c$ is the insurance premium charged per unit time and $T_i$ is the time interval between the $(i-1)$-th and $i$-th claims. As in the classical risk model, we assume that $T_i$ is independent of the claim amount $X_i$. Moreover, suppose that the time intervals $T_i$ form a non-negative sequence of independent and identically distributed random variables. In the Erlang(2) model, the probability density function of the claim inter-arrival time is $f(t) = \eta^2 t e^{-\eta t}$, $t \geq 0$.
When the surplus becomes negative, the insurance company is considered ruined. The ruin probability is denoted by $\psi(u)$, which is defined as:
$$ \psi(u) = P\left\{ u + \sum_{i=1}^{n} (cT_i - X_i) < 0 \ \text{for some } n \geq 1 \right\}. $$
The ruin probability $\psi(u)$ of the insurance company satisfies the following equation [16]:
$$ c^2 \psi''(u) - 2\eta c\, \psi'(u) + \eta^2 \psi(u) = \eta^2 \int_0^u \psi(u - x)\, dF(x) + \eta^2 \left[ 1 - F(u) \right], $$
$$ \psi(0) = \frac{c^2 s_0 - 2\eta c + \eta^2/\alpha}{c^2 s_0}, $$
where $F(x)$ is the distribution function of the claim size $X_i$ and $s_0$ is the positive solution of the following equation:
$$ c^2 s^2 - 2\eta c s + \eta^2 = \eta^2 \int_0^{+\infty} e^{-sx}\, dF(x). $$
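For exponential claims with rate $\alpha$, the right-hand side integral equals $\eta^2 \alpha/(\alpha + s)$, and $s_0$ can then be found with a standard root finder. A sketch (the bracketing interval is an assumption that holds for the parameters used in Example 2):

```python
# Solve the Lundberg-type equation for the positive root s0 when claims
# are exponential with rate alpha, using Brent's method from SciPy.
from scipy.optimize import brentq

def lundberg_s0(c, eta, alpha):
    g = lambda s: c**2 * s**2 - 2*eta*c*s + eta**2 - eta**2 * alpha / (alpha + s)
    # g(eta/c) < 0 always, so the positive root lies to the right of eta/c
    return brentq(g, eta / c + 1e-9, 10 * eta / c)
```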

4.4. BTENN for Erlang(2) Risk Model

Similar to the classical risk model above, the trial solution that satisfies the given initial value condition is as follows:
$$ \tilde{\varphi}(u) = \psi(0) + u \sum_{i=1}^{N/2} \left[ \alpha_{i,c} G_{i,c}(u) + \alpha_{i,s} G_{i,s}(u) \right]. $$
Substituting this trial solution into the integro-differential equation above, we have
$$ c^2 \tilde{\varphi}''(u) - 2\eta c\, \tilde{\varphi}'(u) + \eta^2 \tilde{\varphi}(u) = \eta^2 \int_0^u \tilde{\varphi}(u - x)\, dF(x) + \eta^2 \left[ 1 - F(u) \right]. $$
That is,
$$ LHS = LHS_1 - LHS_2 + LHS_3, $$
$$ RHS = RHS_1 + RHS_2, $$
where,
$$ LHS_1 = c^2 \tilde{\varphi}''(u) = c^2 \sum_{i=1}^{N/2} \left[ \alpha_{i,c}\left( 2 G_{i,c}'(u) + u G_{i,c}''(u) \right) + \alpha_{i,s}\left( 2 G_{i,s}'(u) + u G_{i,s}''(u) \right) \right], $$
$$ LHS_2 = 2\eta c\, \tilde{\varphi}'(u) = 2\eta c \sum_{i=1}^{N/2} \left[ \alpha_{i,c}\left( G_{i,c}(u) + u G_{i,c}'(u) \right) + \alpha_{i,s}\left( G_{i,s}(u) + u G_{i,s}'(u) \right) \right], $$
$$ LHS_3 = \eta^2 \tilde{\varphi}(u) = \eta^2 \left[ \psi(0) + u \sum_{i=1}^{N/2} \left[ \alpha_{i,c} G_{i,c}(u) + \alpha_{i,s} G_{i,s}(u) \right] \right], $$
$$ RHS_1 = \eta^2 \int_0^u \tilde{\varphi}(u - x)\, dF(x) = \eta^2 \psi(0)\left[ F(u) - F(0) \right] + \eta^2 \int_0^u (u - x) \sum_{i=1}^{N/2} \left[ \alpha_{i,c} G_{i,c}(u-x) + \alpha_{i,s} G_{i,s}(u-x) \right] dF(x), $$
$$ RHS_2 = \eta^2 \left[ 1 - F(u) \right]. $$
Sorting out Formulas (44)–(48), we obtain:
$$ \sum_{i=1}^{N/2} \alpha_{i,c}\left[ c^2 u G_{i,c}''(u) + 2c(c - \eta u) G_{i,c}'(u) + (\eta^2 u - 2\eta c) G_{i,c}(u) - \eta^2 \int_0^u (u-x) G_{i,c}(u-x)\, dF(x) \right] + \sum_{i=1}^{N/2} \alpha_{i,s}\left[ c^2 u G_{i,s}''(u) + 2c(c - \eta u) G_{i,s}'(u) + (\eta^2 u - 2\eta c) G_{i,s}(u) - \eta^2 \int_0^u (u-x) G_{i,s}(u-x)\, dF(x) \right] = \eta^2 \psi(0)\left[ F(u) - F(0) - 1 \right] + \eta^2 \left[ 1 - F(u) \right]. $$
When Equation (49) holds on the selected training point set $\Omega = \{0 = u_1 < u_2 < \cdots < u_M = b\}$, it can be represented as follows:
$$ \sum_{i=1}^{N/2} \alpha_{i,c}\left[ c^2 u_j G_{i,c}''(u_j) + 2c(c - \eta u_j) G_{i,c}'(u_j) + (\eta^2 u_j - 2\eta c) G_{i,c}(u_j) - \eta^2 \int_0^{u_j} (u_j-x) G_{i,c}(u_j-x)\, dF(x) \right] + \sum_{i=1}^{N/2} \alpha_{i,s}\left[ c^2 u_j G_{i,s}''(u_j) + 2c(c - \eta u_j) G_{i,s}'(u_j) + (\eta^2 u_j - 2\eta c) G_{i,s}(u_j) - \eta^2 \int_0^{u_j} (u_j-x) G_{i,s}(u_j-x)\, dF(x) \right] = \eta^2 \psi(0)\left[ F(u_j) - F(0) - 1 \right] + \eta^2 \left[ 1 - F(u_j) \right], \quad j = 1, 2, \ldots, M. $$
In matrix form, Equation (50) becomes
$$ A\alpha = B, $$
where,
$$ A = (A_C, A_S) \in \mathbb{R}^{M \times N}, \quad A_C = (A_{C1}, A_{C2}, \ldots, A_{CM})^T, \quad A_S = (A_{S1}, A_{S2}, \ldots, A_{SM})^T, $$
$$ A_{Cij} = c^2 u_j G_{i,c}''(u_j) + 2c(c - \eta u_j) G_{i,c}'(u_j) + (\eta^2 u_j - 2\eta c) G_{i,c}(u_j) - \eta^2 \int_0^{u_j} (u_j - x) G_{i,c}(u_j - x)\, dF(x), $$
$$ A_{Sij} = c^2 u_j G_{i,s}''(u_j) + 2c(c - \eta u_j) G_{i,s}'(u_j) + (\eta^2 u_j - 2\eta c) G_{i,s}(u_j) - \eta^2 \int_0^{u_j} (u_j - x) G_{i,s}(u_j - x)\, dF(x), $$
and
$$ \alpha = \begin{pmatrix} \alpha_c \\ \alpha_s \end{pmatrix} \in \mathbb{R}^{N \times 1}, \quad \alpha_c = (\alpha_{1,c}, \alpha_{2,c}, \ldots, \alpha_{N/2,c})^T, \quad \alpha_s = (\alpha_{1,s}, \alpha_{2,s}, \ldots, \alpha_{N/2,s})^T, $$
$$ B_j = \eta^2 \psi(0)\left[ F(u_j) - F(0) - 1 \right] + \eta^2 \left[ 1 - F(u_j) \right], \quad j = 1, 2, \ldots, M. $$
Training such a block trigonometric exponential neural network is equivalent to finding the least squares solution to the linear system (50). The optimal connection weights of the BTENN model can be obtained from the Moore–Penrose generalized inverse of the matrix $A$ as
$$ \alpha^* = A^{\dagger} B. $$
Meanwhile, $\alpha^* = \arg\min_{\alpha} \| A\alpha - B \|$ exists and is unique.
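Putting the pieces together for exponential claims, $\psi(0)$ follows directly from $s_0$; a quick check with the parameters of Example 2 (reusing lundberg_s0 from the sketch above):

```python
# psi(0) for the Erlang(2) model with exponential claims, c = 3, eta = 5,
# alpha = 1; the result matches the first row of Table 4.
c, eta, alpha = 3.0, 5.0, 1.0
s0 = lundberg_s0(c, eta, alpha)                        # ~2.5511
psi0 = (c**2 * s0 - 2*eta*c + eta**2 / alpha) / (c**2 * s0)
print(psi0)                                            # ~0.7822293...
```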

5. Numerical Results and Analysis

In this section, three numerical experiments are conducted to verify the efficiency of the proposed BTENN method. In the first and third experiments, we assume that the number of claims follows a compound Poisson distribution, while the second experiment shows the numerical solution of the ruin probability equation when the inter-claim intervals follow an Erlang(2) renewal process. In the classical ELM model, the global optimal solution can be obtained by randomly assigning the weights $\omega$ and biases $b$ in the hidden layer; $\omega_i = 1$ and $b_i = 0$ are set in our experiments for convenience of calculation.
In order to measure the accuracy and stability of the different algorithms, we calculated the mean square error (MSE) of the approximate solutions obtained on the same test set. The MSE is calculated as follows:
$$ MSE = \frac{1}{N_{test}} \sum_{i=1}^{N_{test}} \left( y(x_i) - \tilde{y}(x_i) \right)^2. $$
In fact, we are concerned not only with the mean square error but also with the maximum absolute error (MAE). In real-world problems, the MAE determines the stability of the model and whether it can be used in practical applications. The MAE is defined as:
$$ MAE = \max_{i = 1, 2, \ldots, N_{test}} \left| y(x_i) - \tilde{y}(x_i) \right|, $$
where $N_{test}$ is the number of test points and $x_i$ is the $i$-th test input; $y(x)$ is the exact solution of the ruin probability equation and $\tilde{y}(x)$ is the obtained approximate solution.
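Both metrics are straightforward to compute; a minimal sketch:

```python
# MSE and MAE over a test set, as defined above.
import numpy as np

def mse(y_exact, y_approx):
    return np.mean((np.asarray(y_exact) - np.asarray(y_approx)) ** 2)

def mae(y_exact, y_approx):
    return np.max(np.abs(np.asarray(y_exact) - np.asarray(y_approx)))
```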

5.1. Numerical Example 1

In this experiment, we supposed that the ruin probability obeys the classical risk model and that the claim amount follows an exponential distribution with mean $\mu = 1/\alpha$, that is, for $x > 0$,
$$ F(x) = 1 - e^{-\alpha x}. $$
In this case, the analytical solution of the ruin probability equation is known and, for $u \geq 0$, can be expressed as:
$$ \psi(u) = \frac{1}{1+\theta}\, e^{-\frac{\theta\alpha u}{1+\theta}}. $$
Similar to Lu et al. [57] and Zhou et al. [56], 21 equidistant training points are selected in the domain $[0, 10]$. The parameters in the ruin probability Equation (19) are set as $c = 3$, $\lambda = 4$, $\alpha = 2$, $b = 10$. Twelve basis functions are used and 20 equidistant points in $[0.25, 9.75]$ are chosen as the test set.
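As a small illustration of this setup (plain Python; the helper name psi_exact is ours), the closed-form solution evaluated at these parameters reproduces the "Exact" column of Table 1:

```python
# Exact ruin probability for Example 1: c = 3, lambda = 4, alpha = 2,
# so mu = 0.5, theta = (c - lam*mu)/(lam*mu) = 0.5 and psi(0) = 2/3.
import numpy as np

c, lam, alpha = 3.0, 4.0, 2.0
mu = 1.0 / alpha
theta = (c - lam * mu) / (lam * mu)           # security loading

def psi_exact(u):
    return np.exp(-theta * alpha * u / (1 + theta)) / (1 + theta)

print(psi_exact(0.0))    # 0.66666666... (cf. Table 1)
print(psi_exact(0.25))   # 0.56432114... (cf. Table 1)
```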
Figure 3 shows the exact ruin probability and the approximate solution obtained by the BTENN with $N = 12$. It can be seen that the approximate solution and the exact solution are in good agreement. The absolute errors obtained by the BTENN and the Legendre neural network (LNN) models are plotted in Figure 4. Compared with the LNN method, the numerical solution obtained by the BTENN model has smaller error fluctuations. Table 1 lists the numerical solutions of the BTENN at the test points and compares the errors of the approximate solutions obtained by different methods. The MSE of the BTENN is $2.2044 \times 10^{-16}$, while the MAE is only $2.3783 \times 10^{-8}$, which is much smaller than that of the other methods. The accuracy of the BTENN approximate solution is significantly improved, proving the effectiveness of the proposed method.
Moreover, according to Table 2, when 30 points are used for training, the MAE is $1.0836 \times 10^{-8}$. Further increasing the number of training points to 100 equidistant points reduces the MAE to $8.1964 \times 10^{-9}$. It can be seen that the accuracy of the numerical solution increases as more training points are selected, but the improvement is not significant. Table 3 shows the mean square error and the maximum absolute error of the approximate solutions obtained with different numbers of hidden layer neurons. When more hidden layer neurons are involved, a better approximation can be obtained, which is in line with the convergence theorem in Remark 1.

5.2. Numerical Example 2

This experiment illustrates the case where the claim amount follows an exponential distribution with mean $\mu = 1/\alpha$ under the Erlang(2) risk model. As in the previous experiment, we assume exponential claims with:
$$ F(x) = 1 - e^{-\alpha x}, $$
for $x > 0$. An analytical solution for the ruin probability exists in this case and can be expressed as:
$$ \psi(u) = \frac{2\eta^2}{c^2\alpha^2 + 2\eta c\alpha + c\alpha\sqrt{(2\eta - c\alpha)^2 + 8\eta c\alpha - 4\eta^2}}\, \exp\!\left( \frac{(2\eta - c\alpha) - \sqrt{(2\eta - c\alpha)^2 + 8\eta c\alpha - 4\eta^2}}{2c}\, u \right). $$
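As a quick sanity check of this closed form (a sketch; the helper name psi_exact is illustrative), evaluating it at the parameters used below reproduces the "Exact" column of Table 4:

```python
# Evaluate the closed-form Erlang(2) ruin probability at the parameters of
# Example 2 (c = 3, alpha = 1, eta = 5); psi(0) should be 0.7822293562.
import numpy as np

c, alpha, eta = 3.0, 1.0, 5.0
root = np.sqrt((2*eta - c*alpha)**2 + 8*eta*c*alpha - 4*eta**2)
A = 2*eta**2 / (c**2*alpha**2 + 2*eta*c*alpha + c*alpha*root)
R = (root - (2*eta - c*alpha)) / (2*c)        # positive decay rate

def psi_exact(u):
    return A * np.exp(-R * u)

print(psi_exact(0.0))   # 0.78222935... (cf. Table 4, u = 0.0)
print(psi_exact(5.0))   # 0.26330018... (cf. Table 4, u = 5.0)
```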
Figure 5 shows the exact solution and the approximate solution obtained by the proposed block trigonometric exponential neural network. Eight hidden neurons were used with the same parameters as Zhou [56], where $c = 3$, $\alpha = 1$, $\eta = 5$, $b = 10$, and 21 equidistant training points were chosen in the interval $[0, 10]$. Table 4 presents the exact solution of Equation (39) for the Erlang(2) model and the numerical solution obtained by the BTENN method at the test points. The absolute errors of the approximate solutions obtained by the BTENN, the LNN and the TNN are also compared in Table 4. The MSE of the BTENN is $8.7029 \times 10^{-18}$, about one ten-billionth of the error of the TNN model, $O(10^{-8})$. The absolute error curve over the test points is plotted in Figure 6. The approximate solution error obtained by the proposed BTENN is lower, and the error at each test point is more stable; the absolute error does not become significantly larger as $u$ increases. This shows that the proposed algorithm has good computational stability. Moreover, among the selected test points, the minimum absolute error obtained by the LNN method is $O(10^{-8})$, which is even greater than the maximum absolute error of the proposed BTENN method, $O(10^{-9})$.
In addition, Figure 7 shows that even if the solution interval is extended to $[0, 100]$, the numerical solution obtained by the block trigonometric exponential neural network with only eight hidden neurons is still close to the true solution. Based on the above facts, the BTENN method proposed in this study can accurately estimate the numerical solution of the ruin probability equation using only a small number of training points in the interval. The numerical experiments also show that when an insurance company owns more assets, the ruin probability decreases; however, beyond a certain point, additional assets no longer significantly reduce the risk of ruin. Estimating the probability of ruin thus allows operators to hold fewer assets while keeping risk controllable.

5.3. Numerical Example 3

Assuming that claim amounts follow a Pareto distribution, some studies have considered the classical risk model with heavy-tailed risks, where $X \sim Pareto(\alpha, \gamma)$ and $F(x)$ can be written as:
$$ F(x) = 1 - \left( \frac{\gamma}{\gamma + x} \right)^{\alpha}, \quad F'(x) = \frac{\alpha}{\gamma}\left( \frac{\gamma}{\gamma + x} \right)^{\alpha + 1}, \quad x > 0. $$
No analytical solution of Equation (19) exists in this case. In order to compare the approximate solutions obtained by the algorithms, we choose the same parameters $c = 600$, $\alpha = 3$, $\lambda = 1$, $\theta = 0.2$, $b = 10000$ as in Lu [57], with 51 equidistant points used as the training set for the BTENN model. Figure 8 plots the numerical solutions obtained by the BTENN and the LNN in the interval $[0, 10000]$. The 50 test points depicted show that the approximate solutions obtained by the two algorithms almost overlap, which shows that the proposed BTENN model can accurately calculate the ruin probability. The curve trend in Figure 8 shows that the ruin probability decreases as $u$ rises, which is consistent with the actual situation. Table 5 contains the solutions at these test points obtained by the block trigonometric exponential neural network and the Legendre neural network. When the initial asset $u$ is larger, the absolute difference between the two algorithms becomes larger. Considering that in numerical experiments 1 and 2 the error of the LNN approximate solution increases for larger $u$, this implies that the numerical solution obtained by the BTENN model is closer to the exact value of the ruin probability. The numerical solutions generated by the two algorithms are similar; the mean square difference is $3.2015 \times 10^{-10}$, which suggests that the proposed BTENN model can estimate the probability of ruin for any initial surplus, regardless of whether the ruin probability equation has an exact solution.

6. Conclusion and Prospects

In this study, a block trigonometric exponential neural network is established to numerically solve ruin probability equations. Three numerical experiments show that, compared to existing methods, the numerical solution obtained by the BTENN model has a smaller mean square error and a smaller maximum error. This shows that our BTENN model can accurately and stably calculate the ruin probability using only a few training points. The experiments also show that increasing the number of hidden neurons gives approximate solutions with higher accuracy, which is consistent with the convergence theorem. In addition, the BTENN model can also be used in the future to solve other types of integro-differential equations. Future research can focus on simplifying the model's computations and broadening the application range of the BTENN model.

Author Contributions

All authors contributed to the study conception and design. Y.C. (Yinghao Chen): conceptualization, methodology, formal analysis, writing—original draft; C.Y.: conceptualization, methodology, visualization; X.X.: writing—review, project administration, funding acquisition; M.H.: conceptualization, writing—review & editing, supervision, funding acquisition; Y.C. (Yangjin Cheng): conceptualization, writing—review & editing. All authors have read and agreed to the published version of the manuscript.

Funding

This work was supported by the Projects of the National Social Science Foundation of China under Grant No.19BT011.

Conflicts of Interest

All authors declare that they have no conflict of interest.

References

  1. Drugdova, B. The issue of the commercial insurance, commercial insurance market and insurance of non-life risks. In Financial Management of Firms and Financial Institutions: 10th International Scientific Conference, Pts I-Iv; Culik, M., Ed.; Vsb-Tech Univ. Ostrava: Feecs, Czech Republic, 2015; pp. 202–208. [Google Scholar]
  2. Opeshko, N.S.; Ivashura, K.A. Improvement of stress testing of insurance companies in view of european requirements. Financ. Credit Act. Probl. Theory Pract. 2017, 1, 112–119. [Google Scholar] [CrossRef] [Green Version]
  3. Jia, F. Analysis of State-owned holding Insurance Companies’ Risk Management on the Basis of Equity Structure, 2nd China International Conference on Insurance and Risk Management (CICIRM); Tsinghua University Press: Beijing, China, 2011; pp. 60–63. [Google Scholar]
  4. Cejkova, V.; Fabus, M. Management and Criteria for Selecting Commercial Insurance Company for Small and Medium-Sized Enterprises; Masarykova Univerzita: Brno, Czech Republic, 2014; pp. 105–110. [Google Scholar]
  5. Belkina, T.A.; Konyukhova, N.B.; Slavko, B.V. Solvency of an Insurance Company in a Dual Risk Model with Investment: Analysis and Numerical Study of Singular Boundary Value Problems. Comput. Math. Math. Phys. 2019, 59, 1904–1927. [Google Scholar] [CrossRef]
  6. Jin, B.; Yan, Q. Diversification, Performance and Risk Taking of Insurance Company; Tsinghua University Press: Beijing, China, 2013; pp. 178–188. [Google Scholar]
  7. Wang, Y.; Yu, W.; Huang, Y.; Yu, X.; Fan, H. Estimating the Expected Discounted Penalty Function in a Compound Poisson Insurance Risk Model with Mixed Premium Income. Mathematics 2019, 7, 305. [Google Scholar] [CrossRef] [Green Version]
  8. Song, Y.; Li, X.Y.; Li, Y.; Hong, X. Risk investment decisions within the deterministic equivalent income model. Kybernetes 2020. [Google Scholar] [CrossRef]
  9. Stellian, R.; Danna-Buitrago, J.P. Financial distress, free cash flow, and interfirm payment network: Evidence from an agent-based model. Int. J. Financ. Econ. 2019. [Google Scholar] [CrossRef]
  10. Emms, P.; Haberman, S. Asymptotic and numerical analysis of the optimal investment strategy for an insurer. Insur. Math. Econ. 2007, 40, 113–134. [Google Scholar] [CrossRef] [Green Version]
  11. Zhu, S. A Becker-Tomes model with investment risk. Econ. Theory 2019, 67, 951–981. [Google Scholar] [CrossRef]
  12. Jiang, W. Two classes of risk model with diffusion and multiple thresholds: The discounted dividends. Hacet. J. Math. Stat. 2019, 48, 200–212. [Google Scholar] [CrossRef]
  13. Xie, J.-h.; Zou, W. On the expected discounted penalty function for the compound Poisson risk model with delayed claims. J. Comput. Appl. Math. 2011, 235, 2392–2404. [Google Scholar] [CrossRef]
  14. Lundberg, F. Approximerad Framställning Afsannollikhetsfunktionen: II. återförsäkring af Kollektivrisker; Almqvist & Wiksells Boktr: Uppsala, Sweden, 1903. [Google Scholar]
  15. Andersen, E.S. On the collective theory of risk in case of contagion between claims. Bull. Inst. Math. Appl. 1957, 12, 275–279. [Google Scholar]
  16. Dickson, D.C.M.; Hipp, C. On the time to ruin for Erlang(2) risk processes. Insur. Math. Econ. 2001, 29, 333–344. [Google Scholar] [CrossRef] [Green Version]
  17. Li, S.M.; Garrido, J. On a class of renewal risk models with a constant dividend barrier. Insur. Math. Econ. 2004, 35, 691–701. [Google Scholar] [CrossRef]
  18. Li, S.M.; Garrido, J. On ruin for the Erlang(n) risk process. Insur. Math. Econ. 2004, 34, 391–408. [Google Scholar] [CrossRef]
  19. Gerber, H.U.; Yang, H. Absolute Ruin Probabilities in a Jump Diffusion Risk Model with Investment. N. Am. Actuar. J. 2007, 11, 159–169. [Google Scholar] [CrossRef]
  20. Yazici, M.A.; Akar, N. The finite/infinite horizon ruin problem with multi-threshold premiums: A Markov fluid queue approach. Ann. Oper. Res. 2017, 252, 85–99. [Google Scholar] [CrossRef]
  21. Lu, Y.; Li, S. The Markovian regime-switching risk model with a threshold dividend strategy. Insur. Math. Econ. 2009, 44, 296–303. [Google Scholar] [CrossRef]
  22. Zhu, J.; Yang, H. Ruin theory for a Markov regime-switching model under a threshold dividend strategy. Insur. Math. Econ. 2008, 42, 311–318. [Google Scholar] [CrossRef]
  23. Asmussen, S.; Albrecher, H. Ruin Probabilities. Advanced Series on Statistical Science & Applied Probability, 2nd ed.; World Scientific: Singapore, 2010; Volume 14, p. 630. [Google Scholar]
  24. Wang, G.J.; Wu, R. Some distributions for classical risk process that is perturbed by diffusion. Insur. Math. Econ. 2000, 26, 15–24. [Google Scholar] [CrossRef]
  25. Cai, J.; Yang, H.L. Ruin in the perturbed compound Poisson risk process under interest force. Adv. Appl. Probab. 2005, 37, 819–835. [Google Scholar] [CrossRef] [Green Version]
  26. Bergel, A.I.; Egidio dos Reis, A.D. Ruin problems in the generalized Erlang(n) risk model. Eur. Actuar. J. 2016, 6, 257–275. [Google Scholar] [CrossRef]
  27. Kasumo, C. Minimizing an Insurer’s Ultimate Ruin Probability by Reinsurance and Investments. Math. Comput. Appl. 2019, 24, 21. [Google Scholar] [CrossRef] [Green Version]
  28. Xu, L.; Wang, M.; Zhang, B. Minimizing Lundberg inequality for ruin probability under correlated risk model by investment and reinsurance. J. Inequalities Appl. 2018. [Google Scholar] [CrossRef] [PubMed]
  29. Zou, W.; Xie, J.H. On the probability of ruin in a continuous risk model with delayed claims. J. Korean Math. Soc. 2013, 50, 111–125. [Google Scholar] [CrossRef]
  30. Andrulytė, I.M.; Bernackaitė, E.; Kievinaitė, D.; Šiaulys, J. A Lundberg-type inequality for an inhomogeneous renewal risk model. Mod. Stoch. Theory Appl. 2015, 2, 173–184. [Google Scholar] [CrossRef] [Green Version]
  31. Fei, W.; Hu, L.; Mao, X.; Xia, D. Advances in the truncated euler-maruyama method for stochastic differential delay equations. Commun. Pure Appl. Anal. 2020, 19, 2081–2100. [Google Scholar] [CrossRef] [Green Version]
  32. Li, F.; Cao, Y. Stochastic Differential Equations Numerical Simulation Algorithm for Financial Problems Based on Euler Method. In 2010 International Forum on Information Technology and Applications; IEEE Computer Society: Los Alamitos, CA, USA, 2010; pp. 190–193. [Google Scholar] [CrossRef]
  33. Zhang, C.; Qin, T. The mixed Runge-Kutta methods for a class of nonlinear functional-integro-differential equations. Appl. Math. Comput. 2014, 237, 396–404. [Google Scholar] [CrossRef]
  34. Cardoso, R.M.R.; Waters, H.R. Calculation of finite time ruin probabilities for some risk models. Insur. Math. Econ. 2005, 37, 197–215. [Google Scholar] [CrossRef]
  35. Makroglou, A. Computer treatment of the integro-differential equations of collective non-ruin; the finite time case. Math. Comput. Simul. 2000, 54, 99–112. [Google Scholar] [CrossRef]
  36. Paulsen, J.; Kasozi, J.; Steigen, A. A numerical method to find the probability of ultimate ruin in the classical risk model with stochastic return on investments. Insur. Math. Econ. 2005, 36, 399–420. [Google Scholar] [CrossRef]
  37. Tsitsiashvili, G.S. Computing ruin probability in the classical risk model. Autom. Remote Control 2009, 70, 2109–2115. [Google Scholar] [CrossRef]
  38. Zhang, Z. Approximating the density of the time to ruin via fourier-cosine series expansion. Astin Bull. 2017, 47, 169–198. [Google Scholar] [CrossRef]
  39. Muzhou Hou, Y.C. Industrial Part Image Segmentation Method Based on Improved Level Set Model. J. Xuzhou Inst. Technol. 2019, 40, 10. [Google Scholar]
  40. Wang, Z.; Meng, Y.; Weng, F.; Chen, Y.; Lu, F.; Liu, X.; Hou, M.; Zhang, J. An Effective CNN Method for Fully Automated Segmenting Subcutaneous and Visceral Adipose Tissue on CT Scans. Ann. Biomed. Eng. 2020, 48, 312–328. [Google Scholar] [CrossRef] [PubMed]
  41. Hou, M.; Zhang, T.; Weng, F.; Ali, M.; Al-Ansari, N.; Yaseen, Z.M. Global solar radiation prediction using hybrid online sequential extreme learning machine model. Energies 2018, 11, 3415. [Google Scholar] [CrossRef] [Green Version]
  42. Muzhou Hou, T.Z.; Yang, Y.; Luo, J. Application of Mec- based ELM algorithm in prediction of PM2.5 in Changsha City. J. Xuzhou Inst. Technol. 2019, 34, 1–6. [Google Scholar]
  43. Hahnel, P.; Marecek, J.; Monteil, J.; O’Donncha, F. Using deep learning to extend the range of air pollution monitoring and forecasting. J. Comput. Phys. 2020, 408. [Google Scholar] [CrossRef] [Green Version]
  44. Chen, Y.; Xie, X.; Zhang, T.; Bai, J.; Hou, M. A deep residual compensation extreme learning machine and applications. J. Forecast. 2020, 1–14. [Google Scholar] [CrossRef]
  45. Weng, F.; Chen, Y.; Wang, Z.; Hou, M.; Luo, J.; Tian, Z. Gold price forecasting research based on an improved online extreme learning machine algorithm. J. Ambient Intell. Humaniz. Comput. 2020. [Google Scholar] [CrossRef]
  46. Hou, M.; Liu, T.; Yang, Y.; Zhu, H.; Liu, H.; Yuan, X.; Liu, X. A new hybrid constructive neural network method for impacting and its application on tungsten price prediction. Appl. Intell. 2017, 47, 28–43. [Google Scholar] [CrossRef]
  47. Hou, M.; Han, X. Constructive Approximation to Multivariate Function by Decay RBF Neural Network. IEEE Trans. Neural Netw. 2010, 21, 1517–1523. [Google Scholar] [CrossRef]
  48. Sun, H.; Hou, M.; Yang, Y.; Zhang, T.; Weng, F.; Han, F. Solving Partial Differential Equation Based on Bernstein Neural Network and Extreme Learning Machine Algorithm. Neural Process. Lett. 2019, 50, 1153–1172. [Google Scholar] [CrossRef]
  49. Yang, Y.; Hou, M.; Luo, J. A novel improved extreme learning machine algorithm in solving ordinary differential equations by Legendre neural network methods. Adv. Differ. Equ. 2018, 2018, 469. [Google Scholar] [CrossRef]
  50. Yang, Y.; Hou, M.; Luo, J.; Liu, T. Neural Network method for lossless two-conductor transmission line equations based on the IELM algorithm. AIP Adv. 2018, 8. [Google Scholar] [CrossRef]
  51. Yang, Y.; Hou, M.; Sun, H.; Zhang, T.; Weng, F.; Luo, J. Neural network algorithm based on Legendre improved extreme learning machine for solving elliptic partial differential equations. Soft Comput. 2020, 24, 1083–1096. [Google Scholar] [CrossRef]
  52. Sabir, Z.; Wahab, H.A.; Umar, M.; Sakar, M.G.; Raja, M.A.Z. Novel design of Morlet wavelet neural network for solving second order Lane-Emden equation. Math. Comput. Simul. 2020, 172, 1–14. [Google Scholar] [CrossRef]
  53. Hure, C.; Pham, H.; Warin, X. Deep backward schemes for high-dimensional nonlinear pdes. Math. Comput. 2020, 89, 1547–1579. [Google Scholar] [CrossRef] [Green Version]
  54. Samaniego, E.; Anitescu, C.; Goswami, S.; Nguyen-Thanh, V.M.; Guo, H.; Hamdia, K.; Zhuang, X.; Rabczuk, T. An energy approach to the solution of partial differential equations in computational mechanics via machine learning: Concepts, implementation and applications. Comput. Methods Appl. Mech. Eng. 2020, 362. [Google Scholar] [CrossRef] [Green Version]
  55. Huang, G.; Huang, G.B.; Song, S.; You, K. Trends in extreme learning machines: A review. Neural Netw. 2015, 61, 32–48. [Google Scholar] [CrossRef]
  56. Zhou, T.; Liu, X.; Hou, M.; Liu, C. Numerical solution for ruin probability of continuous time model based on neural network algorithm. Neurocomputing 2019, 331, 67–76. [Google Scholar] [CrossRef]
  57. Lu, Y.; Chen, G.; Yin, Q.; Sun, H.; Hou, M. Solving the ruin probabilities of some risk models with Legendre neural network algorithm. Digit. Signal Process. 2020, 99. [Google Scholar] [CrossRef]
  58. Zhang, X.; Wu, J.; Yu, D. Superconvergence of the composite Simpson’s rule for a certain finite-part integral and its applications. J. Comput. Appl. Math. 2009, 223, 598–613. [Google Scholar] [CrossRef] [Green Version]
Availability of Data and Material: Not applicable.
Code Availability: All code was implemented in Python 3.6; mail to [email protected].
Figure 1. The topology structure of the extreme learning machine (ELM) algorithm.
Figure 2. Structure of a block trigonometric exponential neural network.
Figure 3. Analytical and numerical solutions obtained by the block trigonometric exponential neural network (BTENN).
Figure 4. Comparison of absolute errors of the approximate solutions obtained by the BTENN and the Legendre neural network (LNN).
Figure 5. Analytical and numerical solutions obtained by the BTENN in Example 2.
Figure 6. Absolute error of the numerical solution in Example 2.
Figure 7. Analytical and numerical solutions obtained on the interval [0, 100].
Figure 8. Numerical solutions obtained by the BTENN in Example 3.
Table 1. Comparison of absolute errors of the approximate solutions obtained by the BTENN, the LNN and trigonometric neural networks (TNN).

| u | Exact | Approximate | Absolute Error by BTENN | Absolute Error by LNN in [57] | Absolute Error by TNN in [56] |
|------|--------------|--------------|------------------|------------------|------------------|
| 0.00 | 0.6666666667 | 0.6666666667 | 0 | 0 | 0 |
| 0.25 | 0.5643211499 | 0.5643211737 | 2.3783 × 10^−8 | 3.2946 × 10^−8 | 4.2156 × 10^−5 |
| 0.75 | 0.4043537731 | 0.4043537518 | 2.1386 × 10^−8 | 6.6140 × 10^−8 | 7.9549 × 10^−5 |
| 1.25 | 0.2897321390 | 0.2897321605 | 2.1480 × 10^−8 | 6.5976 × 10^−8 | 9.5476 × 10^−5 |
| 1.75 | 0.2076021493 | 0.2076021261 | 2.3219 × 10^−8 | 8.7577 × 10^−8 | 1.0853 × 10^−4 |
| 2.25 | 0.1487534401 | 0.1487534167 | 2.3394 × 10^−8 | 9.8572 × 10^−8 | 1.1750 × 10^−4 |
| 2.75 | 0.1065864974 | 0.1065865074 | 1.0056 × 10^−8 | 9.1825 × 10^−8 | 1.2404 × 10^−4 |
| 3.25 | 0.0763725627 | 0.0763725672 | 4.5537 × 10^−9 | 8.9967 × 10^−8 | 1.2869 × 10^−4 |
| 3.75 | 0.0547233324 | 0.0547233102 | 2.2264 × 10^−8 | 1.0164 × 10^−7 | 1.3204 × 10^−4 |
| 4.25 | 0.0392109811 | 0.0392109620 | 1.9102 × 10^−8 | 1.1269 × 10^−7 | 1.3442 × 10^−4 |
| 4.75 | 0.0280958957 | 0.0280959020 | 6.3156 × 10^−9 | 1.1018 × 10^−7 | 1.3614 × 10^−4 |
| 5.25 | 0.0201315889 | 0.0201315994 | 1.0449 × 10^−8 | 9.9976 × 10^−8 | 1.3737 × 10^−4 |
| 5.75 | 0.0144249138 | 0.0144249036 | 1.0197 × 10^−8 | 9.7611 × 10^−8 | 1.3825 × 10^−4 |
| 6.25 | 0.0103359024 | 0.0103358859 | 1.6527 × 10^−8 | 1.0722 × 10^−7 | 1.3888 × 10^−4 |
| 6.75 | 0.0074059977 | 0.0074060017 | 4.0477 × 10^−9 | 1.1571 × 10^−7 | 1.3933 × 10^−4 |
| 7.25 | 0.0053066292 | 0.0053066427 | 1.3449 × 10^−8 | 1.1086 × 10^−7 | 1.3966 × 10^−4 |
| 7.75 | 0.0038023660 | 0.0038023595 | 6.5289 × 10^−9 | 1.0081 × 10^−7 | 1.3989 × 10^−4 |
| 8.25 | 0.0027245143 | 0.0027245042 | 1.0087 × 10^−8 | 1.0404 × 10^−7 | 1.4007 × 10^−4 |
| 8.75 | 0.0019521998 | 0.0019522135 | 1.3742 × 10^−8 | 1.4151 × 10^−7 | 1.4012 × 10^−4 |
| 9.25 | 0.0013988123 | 0.0013988045 | 7.7700 × 10^−9 | 1.0550 × 10^−7 | 1.4059 × 10^−4 |
| 9.75 | 0.0010022928 | 0.0010023058 | 1.3006 × 10^−8 | 1.0982 × 10^−7 | 1.3753 × 10^−4 |
| MAE | | | 2.3783 × 10^−8 | 1.4151 × 10^−7 | 1.4059 × 10^−4 |
| MSE | | | 2.2044 × 10^−16 | 1.0124 × 10^−14 | 1.6124 × 10^−8 |
Table 2. Error comparison of approximate solutions with different numbers of training points.

| Training Points | MSE | MAE |
|-----|------------------|-----------------|
| 21 | 2.2044 × 10^−16 | 2.3783 × 10^−8 |
| 30 | 9.8427 × 10^−17 | 1.0836 × 10^−8 |
| 50 | 8.2830 × 10^−17 | 9.2712 × 10^−9 |
| 100 | 4.6419 × 10^−17 | 8.1964 × 10^−9 |
Table 3. Error comparison of approximate solutions using different numbers of hidden neurons.

| Hidden Neurons | MSE | MAE |
|-----|------------------|------------------|
| 12 | 2.2044 × 10^−16 | 2.3783 × 10^−8 |
| 18 | 8.0732 × 10^−17 | 6.1296 × 10^−9 |
| 20 | 1.0714 × 10^−18 | 8.3127 × 10^−10 |
| 50 | 9.1445 × 10^−22 | 7.1498 × 10^−12 |
Table 4. Comparison of the numerical solutions obtained by the BTENN, the LNN and the TNN.

| u | Exact | BTENN Solution | Absolute Error by BTENN | Absolute Error by LNN in [57] | Absolute Error by TNN in [56] |
|------|--------------|--------------|------------------|------------------|------------------|
| 0.0 | 0.7822293562 | 0.7822293562 | 0 | 0 | 0 |
| 0.5 | 0.7015293026 | 0.7015292999 | 2.7167 × 10^−9 | 2.4647 × 10^−7 | 2.8123 × 10^−4 |
| 1.0 | 0.6291548105 | 0.6291548124 | 2.0193 × 10^−9 | 4.5465 × 10^−7 | 2.8636 × 10^−4 |
| 1.5 | 0.5642469590 | 0.5642469609 | 1.8784 × 10^−9 | 1.0367 × 10^−7 | 1.5895 × 10^−4 |
| 2.0 | 0.5060354389 | 0.5060354357 | 3.3062 × 10^−9 | 3.4282 × 10^−7 | 2.6196 × 10^−6 |
| 2.5 | 0.4538294117 | 0.4538294097 | 2.1078 × 10^−9 | 2.8821 × 10^−7 | 1.3678 × 10^−4 |
| 3.0 | 0.4070093102 | 0.4070093132 | 3.0732 × 10^−9 | 8.5184 × 10^−8 | 2.1145 × 10^−4 |
| 3.5 | 0.3650194860 | 0.3650194894 | 3.4673 × 10^−9 | 3.3444 × 10^−7 | 2.1746 × 10^−4 |
| 4.0 | 0.3273616151 | 0.3273616137 | 1.4722 × 10^−9 | 2.3922 × 10^−7 | 1.6260 × 10^−4 |
| 4.5 | 0.2935887841 | 0.2935887798 | 4.3992 × 10^−9 | 7.1659 × 10^−8 | 6.6239 × 10^−5 |
| 5.0 | 0.2633001860 | 0.2633001850 | 1.0998 × 10^−9 | 2.9814 × 10^−7 | 4.5630 × 10^−5 |
| 5.5 | 0.2361363638 | 0.2361363676 | 3.7978 × 10^−9 | 2.4169 × 10^−7 | 1.4506 × 10^−4 |
| 6.0 | 0.2117749447 | 0.2117749479 | 3.3257 × 10^−9 | 3.5002 × 10^−8 | 2.0657 × 10^−4 |
| 6.5 | 0.1899268137 | 0.1899268117 | 1.9615 × 10^−9 | 2.6648 × 10^−7 | 2.1149 × 10^−4 |
| 7.0 | 0.1703326833 | 0.1703326792 | 4.2003 × 10^−9 | 2.1959 × 10^−7 | 1.5223 × 10^−4 |
| 7.5 | 0.1527600154 | 0.1527600159 | 3.2072 × 10^−10 | 6.6730 × 10^−8 | 3.6624 × 10^−5 |
| 8.0 | 0.1370002624 | 0.1370002664 | 4.0026 × 10^−9 | 2.6165 × 10^−7 | 1.0754 × 10^−4 |
| 8.5 | 0.1228663918 | 0.1228663912 | 3.5653 × 10^−10 | 7.4952 × 10^−8 | 2.2779 × 10^−4 |
| 9.0 | 0.1101906665 | 0.1101906634 | 3.1964 × 10^−9 | 2.3406 × 10^−7 | 2.4184 × 10^−4 |
| 9.5 | 0.0988226546 | 0.0988226577 | 2.6166 × 10^−9 | 4.8428 × 10^−8 | 3.2086 × 10^−5 |
| 10.0 | 0.0886274433 | 0.0886274378 | 9.7183 × 10^−10 | 3.7546 × 10^−7 | 5.6048 × 10^−4 |
| MAE | | | 4.3992 × 10^−9 | 4.5465 × 10^−7 | 5.6048 × 10^−4 |
| MSE | | | 8.7029 × 10^−18 | 6.0264 × 10^−14 | 4.4036 × 10^−8 |
Table 5. Comparison of the numerical solutions of the BTENN and the LNN in Example 3.

| u | BTENN Solution | LNN Solution [57] | Absolute Difference |
|------|--------------|--------------|------------------|
| 230 | 0.7771674726 | 0.7771671970 | 2.7562 × 10^−7 |
| 1162 | 0.6262266038 | 0.6262251570 | 1.4468 × 10^−6 |
| 2094 | 0.5177610722 | 0.5177563420 | 4.7302 × 10^−6 |
| 3026 | 0.4367401507 | 0.4367394040 | 7.4668 × 10^−7 |
| 3958 | 0.3719139416 | 0.3719008950 | 1.3047 × 10^−5 |
| 4890 | 0.3188717066 | 0.3188620670 | 9.6396 × 10^−6 |
| 5822 | 0.2748805781 | 0.2748686660 | 1.1912 × 10^−5 |
| 6754 | 0.2380806874 | 0.2380170420 | 6.3645 × 10^−5 |
| 7686 | 0.2069896444 | 0.2069120930 | 7.7551 × 10^−5 |
| 8618 | 0.1805589985 | 0.1805008800 | 5.8118 × 10^−5 |
| 9550 | 0.1580081081 | 0.1579619740 | 4.6134 × 10^−5 |
