Tsallis Entropy for Loss Models and Survival Models Involving Truncated and Censored Random Variables

The aim of this paper is to develop an entropy-based approach to risk assessment for actuarial models involving truncated and censored random variables by using the Tsallis entropy measure. The effect of some partial insurance schemes, such as inflation, truncation and censoring from above and truncation and censoring from below, upon the entropy of losses is investigated in this framework. Analytic expressions for the per-payment and per-loss entropies are obtained, and the relationship between these entropies is studied. The Tsallis entropy of losses of the right-truncated loss random variable corresponding to the per-loss risk model with a deductible d and a policy limit u is computed for the exponential, Weibull, χ2 and Gamma distributions. In this context, the properties of the resulting entropies, such as the residual loss entropy and the past loss entropy, are studied as a result of using a deductible and a policy limit, respectively. Relationships between these entropy measures are derived, and the combined effect of a deductible and a policy limit is also analyzed. By investigating residual and past entropies for survival models, the entropies of losses corresponding to the proportional hazard and proportional reversed hazard models are derived. The Tsallis entropy approach for actuarial models involving truncated and censored random variables is new and more realistic, since it allows a greater degree of flexibility and improves the modeling accuracy.


Introduction
Risk assessment represents an important topic in various fields, since it allows designing the optimal strategy in many real-world problems. The fundamental concept of entropy can be used to evaluate the degree of uncertainty corresponding to the result of an experiment, phenomenon or random variable. Recent research results in statistics show an increased interest in using different entropy measures. Many authors have dealt with this matter, among them Koukoumis and Karagrigoriou [1], Iatan et al. [2], Li et al. [3], Miśkiewicz [4], Toma et al. [5], Moretto et al. [6], Remuzgo et al. [7], Sheraz et al. [8] and Toma and Leoni-Aubin [9]. One of the most important information measures, the Tsallis entropy, has attracted considerable interest in statistical physics and many other fields as well. We can mention here the contributions of Nayak et al. [10], Pavlos et al. [11] and Singh and Cui [12]. Recently, Balakrishnan et al. [13] proposed a general formulation of a class of entropy measures depending on two parameters, which includes the Shannon, Tsallis and fractional entropies as special cases.
As entropy can be regarded as a measure of variability for absolutely continuous random variables or as a measure of variation or diversity of the possible values of a discrete random variable, it can be used for risk assessment in various domains. In actuarial science, one of the main objectives defining the optimal strategy of an insurance company is minimizing the risk of the claims. Ebrahimi [14] and Ebrahimi and Pellerey [15] studied the problem of measuring uncertainty in life distributions. The uncertainty corresponding to loss random variables in actuarial models can also be evaluated by the entropy of the loss distribution. Frequently in actuarial practice, as a consequence of using deductibles and policy limits, practitioners have to deal with transformed data, generated by truncation and censoring. Baxter [16] and Zografos [17] developed information measure methods for mixed and censored random variables, respectively. The entropic approach enables the assessment of the uncertainty degree for loss models involving truncated and censored random variables. Sachlas and Papaioannou [18] investigated the effect of inflation, truncation or censoring from below or above on the Shannon entropy of losses of insurance policies. In this context of per-payment and per-loss models, they derived analytic formulas for the Shannon entropy of actuarial models involving several types of partial insurance coverage and studied the properties of the resulting entropies. Recent results in this field have also been obtained by Gupta and Gupta [19], Di Crescenzo and Longobardi [20] and Meselidis and Karagrigoriou [21].
This paper aims to develop several entropy-based risk models involving truncated and censored loss random variables. In this framework, the effect of some partial insurance schemes, such as truncation and censoring from above, truncation and censoring from below and inflation, is investigated using the Tsallis entropy. The paper is organized as follows. In Section 2, some preliminary results are presented. In Section 3, representation formulas for the Tsallis entropy corresponding to the truncated and censored loss random variables in the per-payment and per-loss approaches are derived, and the relationships between these entropies are obtained. Moreover, the combined effect of a deductible and a policy limit is investigated. In Section 4, closed formulas for the Tsallis entropy corresponding to some survival models are derived, including the proportional hazard and the proportional reversed hazard models. Some concluding remarks are provided in the last section.

The Exponential Distribution
An exponentially distributed random variable X ∼ Exp(λ) is defined by the probability density function f X (x) = λ e −λx , x ≥ 0, with λ ∈ R, λ > 0, and the cumulative distribution function F X (x) = 1 − e −λx , x ≥ 0.

The Weibull Distribution
A Weibull distributed random variable X ∼ W(α, λ, γ) is closely related to an exponentially distributed random variable and has the probability density function: with α, λ, γ ∈ R, λ, γ > 0.
If X ∼ Exp(1), then the Weibull distribution can be generated using the formula:
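The transformation can be sketched numerically. The snippet below is illustrative, not part of the paper's computations, and assumes the two-parameter case (location α = 0) with survival function S(y) = exp(−(λy)^γ): if X ∼ Exp(1), then Y = X^(1/γ)/λ follows this Weibull law. The function name and parameter values are hypothetical.

```python
import math
import random

# Assumed parameterization: Weibull survival S(y) = exp(-(lam*y)**gamma),
# location parameter alpha = 0. If X ~ Exp(1), then Y = X**(1/gamma)/lam
# has exactly this survival function.
def weibull_from_exp(lam, gamma, rng):
    x = rng.expovariate(1.0)          # draw X ~ Exp(1)
    return x ** (1.0 / gamma) / lam   # transform to the assumed Weibull

rng = random.Random(0)
n = 200_000
lam, gamma = 1.3, 0.7
samples = [weibull_from_exp(lam, gamma, rng) for _ in range(n)]

# Empirical check of the survival function at a test point y0
y0 = 1.0
emp = sum(1 for y in samples if y > y0) / n
theo = math.exp(-(lam * y0) ** gamma)
print(emp, theo)  # the two values should agree to roughly two decimals
```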

The χ 2 Distribution
Let Z i , 1 ≤ i ≤ γ, be independent standard Gaussian random variables, Z i ∼ N(0, 1). A χ 2 random variable with γ degrees of freedom can be represented as χ 2 = Z 1 2 + Z 2 2 + . . . + Z γ 2 . A χ 2 distributed random variable with γ degrees of freedom is represented by the probability density function f(x) = x γ/2−1 e −x/2 / (2 γ/2 Γ(γ/2)), x > 0, where Γ denotes the Euler Gamma function.
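The representation above can be checked empirically. The sketch below (illustrative values, not from the paper) draws sums of γ squared standard normals and compares the sample mean and variance with the χ2 moments γ and 2γ.

```python
import random

# Empirical check: the sum of gamma_dof squared independent N(0, 1)
# variables should have mean gamma_dof and variance 2 * gamma_dof,
# the first two moments of the chi-square distribution.
rng = random.Random(1)
gamma_dof = 4
n = 100_000

sums = []
for _ in range(n):
    s = sum(rng.gauss(0.0, 1.0) ** 2 for _ in range(gamma_dof))
    sums.append(s)

mean = sum(sums) / n
var = sum((s - mean) ** 2 for s in sums) / n
print(mean, var)  # close to 4 and 8 respectively
```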

The Gamma Distribution
A Gamma distributed random variable X ∼ G(α, λ, γ) is defined by the probability density function [22]: where α ∈ R, λ, γ > 0 are, respectively, the location parameter, the scale parameter and the shape parameter of the variable X.

The Tsallis Entropy
Entropy represents a fundamental concept which can be used to evaluate the uncertainty associated with a random variable or with the result of an experiment. It provides information regarding the predictability of the results of a random variable X. The Shannon entropy, along with other measures of information, such as the Rényi entropy, may be interpreted as a descriptive quantity of the corresponding probability density function.
Entropy can be regarded as a measure of variability for absolutely continuous random variables or as a measure of variation or diversity of the possible values of discrete random variables. Due to the widespread applicability and use of information measures, the derivation of explicit expressions for various entropy and divergence measures corresponding to univariate and multivariate distributions has been a subject of interest; see, for example, Pardo [23], Toma [24], Belzunce et al. [25], Vonta and Karagrigoriou [26]. Various measures of entropy and generalizations thereof have been proposed in the literature.
The Tsallis entropy was introduced by Constantino Tsallis in 1988 [27][28][29][30] with the aim of generalizing the standard Boltzmann-Gibbs entropy and, since then, it has attracted considerable interest in the physics community, as well as outside it. Recently, Furuichi [31,32] investigated information theoretical properties of the Tsallis entropy and obtained a uniqueness theorem for the Tsallis entropy. The use of the Tsallis entropy supports the analysis and solution of some important problems regarding financial data and phenomena modeling, such as the distribution of asset returns, derivative pricing or risk aversion. Recent research in statistics has increased interest in using the Tsallis entropy. Trivellato [33,34] used the minimization of the divergence corresponding to the Tsallis entropy as a criterion to select a pricing measure in the valuation problems of incomplete markets and gave conditions on the existence and on the equivalence to the basic measure of the minimal k−entropy martingale measure. Preda et al. [35,36] used Tsallis and Kaniadakis entropies to construct the minimal entropy martingale for semi-Markov regime switching interest rate models and to derive new Lorenz curves for modeling income distribution. Miranskyy et al. [37] investigated the application of some extended entropies, such as the Landsberg-Vedral, Rényi and Tsallis entropies, to the classification of traces related to various software defects.
Let X be a real-valued discrete random variable defined on the probability space (Ω, F , P), with the probability mass function p X . Let α ∈ R \{1}. We introduce the definition of the Tsallis entropy [27] for discrete and absolutely continuous random variables in terms of the expected value operator with respect to a probability measure. Definition 1. The Tsallis entropy corresponding to the discrete random variable X is defined by: H T α (X) = (1/(α − 1)) (1 − E p X [p X (X) α−1 ]) = (1/(α − 1)) (1 − Σ x p X (x) α ), where E p X [·] represents the expected value operator with respect to the probability mass function p X .
Let X be a real-valued continuous random variable defined on the probability space (Ω, F , P), with the probability density function f X . Let α ∈ R \{1}. Definition 2. The Tsallis entropy corresponding to the continuous random variable X is defined by: H T α (X) = (1/(α − 1)) (1 − E f X [f X (X) α−1 ]) = (1/(α − 1)) (1 − ∫ f X (x) α dx), provided that the integral exists, where E f X [·] represents the expected value operator with respect to the probability density function f X .
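As a quick numerical illustration of the continuous case (a sketch, not part of the paper's computations), the Tsallis entropy of an Exp(λ) variable can be obtained by integrating f^α with the trapezoidal rule and compared with the closed form (1 − λ^(α−1)/α)/(α − 1), which follows from ∫ f(x)^α dx = λ^(α−1)/α; parameter values are illustrative.

```python
import math

def tsallis_continuous(f, a, b, alpha, n=200_000):
    # (1/(alpha-1)) * (1 - integral of f(x)**alpha), trapezoidal rule on [a, b]
    h = (b - a) / n
    total = 0.5 * (f(a) ** alpha + f(b) ** alpha)
    total += sum(f(a + i * h) ** alpha for i in range(1, n))
    integral = total * h
    return (1.0 - integral) / (alpha - 1.0)

lam, alpha = 0.5, 2.0
f = lambda x: lam * math.exp(-lam * x)       # Exp(lam) density

numeric = tsallis_continuous(f, 0.0, 50.0 / lam, alpha)
closed = (1.0 - lam ** (alpha - 1.0) / alpha) / (alpha - 1.0)
print(numeric, closed)  # the two values should match closely
```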
In the sequel, we assume known the properties of the expected value operator, such as additivity and homogeneity.
Note that for α = 2, the Tsallis entropy reduces to the second-order entropy [38] and for α → 1, we obtain the Shannon entropy [39]. The real parameter α was introduced in the definition of the Tsallis entropy to evaluate the degree of uncertainty more accurately. In this regard, the Tsallis parameter tunes the importance assigned to rare events in the considered model.
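The limiting behavior for α → 1 can be illustrated on a small discrete distribution; the probability vector below is an arbitrary example, not data from the paper.

```python
import math

def tsallis_discrete(p, alpha):
    # H_alpha^T(X) = (1 - sum_i p_i**alpha) / (alpha - 1)
    return (1.0 - sum(pi ** alpha for pi in p)) / (alpha - 1.0)

def shannon_discrete(p):
    # Shannon entropy, the alpha -> 1 limit of the Tsallis entropy
    return -sum(pi * math.log(pi) for pi in p if pi > 0)

p = [0.5, 0.3, 0.2]
for alpha in (1.1, 1.01, 1.001):
    print(alpha, tsallis_discrete(p, alpha))
print("Shannon:", shannon_discrete(p))

# As alpha approaches 1, the Tsallis value converges to the Shannon value
gap = abs(tsallis_discrete(p, 1.0001) - shannon_discrete(p))
```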
Highly uncertain insurance policies are less reliable. The uncertainty of the loss associated with an insurance policy can be quantified by using the entropy of the corresponding loss distribution. In actuarial practice, transformed data frequently arise as a consequence of deductibles and liability limits. Recent research in statistics has increased interest in using different entropy measures for risk assessment.

Tsallis Entropy Approach for Loss Models
We denote by X the random variable which models the loss corresponding to an insurance policy. We suppose that X is non-negative and denote by f X and F X its probability density function and cumulative distribution function, respectively. Let S X be the survival function of the random variable X, defined by S X (x) = P(X > x).
We consider truncated and censored random variables obtained from X, which can be used to model situations which frequently appear in actuarial practice as a consequence of using deductibles and policy limits. In the next subsections, analytical expressions for the Tsallis entropy are derived, corresponding to the loss models based on truncated and censored random variables.

Loss Models Involving Truncation or Censoring from Below
Loss models with left-truncated or censored from below random variables are used when losses are not recorded or reported below a specified threshold, mainly as a result of applying deductible policies. We denote by d the value of the threshold, referred to as the deductible value. According to Klugman et al. [40], there are two approaches used to express the random variable which models the loss, corresponding to the per-payment and per-loss cases, respectively.
In the per-payment case, losses or claims below the value of the deductible may not be reported to the insurance company, generating truncated from below or left-truncated data.
We denote by X lt (d) the left-truncated random variable which models the loss corresponding to an insurance policy with a deductible d in the per-payment case. It can be expressed as X lt (d) = [X|X > d], or equivalently: In order to investigate the effect of truncation from below, we use the Tsallis entropy for evaluating the uncertainty corresponding to the loss covered by the insurance company. The following theorem establishes the relationship between the Tsallis entropy of the random variables X and X lt (d). We denote by H T α (X lt (d)) the per-payment Tsallis entropy with a deductible d.
We denote by I A the indicator function of the set A, defined by I A (x) = 1, if x ∈ A, and I A (x) = 0, otherwise. In the sequel, the integrals are always supposed to be correctly defined.
Theorem 1. Let X be a non-negative random variable which models the loss corresponding to an insurance policy. Let α ∈ R\{1} and d > 0. The Tsallis entropy H T α (X lt (d)) of the left-truncated loss random variable corresponding to the per-payment risk model with a deductible d can be expressed as follows: Proof. The probability density function of the random variable X lt (d) is given by f X lt (d) (x) = f X (x)/S X (d), for x > d, and 0 otherwise. Therefore, the Tsallis entropy of the random variable X lt (d) can be expressed as follows:
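Theorem 1 can be illustrated numerically for the exponential case. The sketch below (illustrative, not from the paper) integrates the truncated density f X (x)/S X (d) for X ∼ Exp(λ) and compares the result with the closed form (1 − λ^(α−1)/α)/(α − 1), which the memoryless property yields for this distribution.

```python
import math

# Tsallis entropy of X_lt(d) = [X | X > d] for X ~ Exp(lam), computed by
# numerically integrating the truncated density f(x)/S(d) over (d, b).
# For the exponential, the memoryless property makes this equal to the
# closed form (1 - lam**(alpha-1)/alpha) / (alpha - 1), independent of d.
def trunc_entropy_exp(lam, d, alpha, b=60.0, n=200_000):
    f = lambda x: lam * math.exp(-lam * x)
    S_d = math.exp(-lam * d)                    # survival function at d
    g = lambda x: (f(x) / S_d) ** alpha
    h = (b - d) / n
    total = 0.5 * (g(d) + g(b)) + sum(g(d + i * h) for i in range(1, n))
    return (1.0 - total * h) / (alpha - 1.0)

lam, d, alpha = 0.8, 2.0, 1.5
numeric = trunc_entropy_exp(lam, d, alpha)
closed = (1.0 - lam ** (alpha - 1.0) / alpha) / (alpha - 1.0)
print(numeric, closed)
```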

Remark 1.
For the limiting case α → 1, we obtain the corresponding results for the Shannon entropy from [18].
In the per-loss case corresponding to an insurance policy with a deductible d, all the claims are reported, but only the ones over the deductible value are paid. As only the real losses of the insurer are taken into consideration, this situation generates censored from below data.
We denote by X lc (d) the left-censored random variable which models the loss corresponding to an insurance policy with a deductible d in the per-loss case. As X is censored from below at the point d, it results that the random variable X lc (d) can be expressed as follows: We note that X lc (d) assigns a positive probability mass at zero, corresponding to the case X ≤ d. In this case, X lc (d) is not absolutely continuous, but a mixed random variable, consisting of a discrete and a continuous part. We can remark that the per-payment loss random variable can be expressed as the per-loss one given that the latter is positive.
In the next theorem, the relation between the Tsallis entropy of the random variables X and X lc (d) is established. Theorem 2. Let X be a non-negative random variable which models the loss corresponding to an insurance policy. Let α ∈ R\{1} and d > 0. The Tsallis entropy H T α (X lc (d)) of the left-censored loss random variable corresponding to the per-loss risk model with a deductible d can be expressed as follows: Proof. The Tsallis entropy of X lc (d), which is a mixed random variable consisting of a discrete part at zero and a continuous part over (d, +∞), is given by: and the conclusion follows.
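For X ∼ Exp(λ), the mixed-variable entropy in Theorem 2 can be evaluated explicitly, since the tail integral satisfies ∫ from d to ∞ of f(x)^α dx = λ^(α−1) e^(−αλd)/α. The sketch below (illustrative parameters, not from the paper) cross-checks this closed form numerically.

```python
import math

# Mixed-variable Tsallis entropy of X_lc(d) = X * 1{X > d}: an atom of mass
# F(d) at zero plus the continuous part of f over (d, inf). For X ~ Exp(lam)
# the tail integral is lam**(alpha-1) * exp(-alpha*lam*d) / alpha.
def censored_entropy_exp(lam, d, alpha):
    F_d = 1.0 - math.exp(-lam * d)
    tail = lam ** (alpha - 1.0) * math.exp(-alpha * lam * d) / alpha
    return (1.0 - F_d ** alpha - tail) / (alpha - 1.0)

# Numeric cross-check of the tail integral by the trapezoidal rule
lam, d, alpha = 0.5, 1.0, 2.0
f = lambda x: lam * math.exp(-lam * x)
b, n = 80.0, 200_000
h = (b - d) / n
num_tail = (0.5 * (f(d) ** alpha + f(b) ** alpha)
            + sum(f(d + i * h) ** alpha for i in range(1, n))) * h
closed_tail = lam ** (alpha - 1.0) * math.exp(-alpha * lam * d) / alpha
print(num_tail, closed_tail, censored_entropy_exp(lam, d, alpha))
```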

Remark 2.
Let α ∈ R\{1} and d > 0. Then, It results that the Tsallis entropy of the left-censored loss random variable corresponding to the per-loss risk model is greater than the Tsallis entropy of the loss random variable, and the difference can be quantified by the right-hand side of the formula above. Theorem 3. Let X be a non-negative random variable which models the loss corresponding to an insurance policy. Let α ∈ R\{1} and d > 0. The Tsallis entropy measures H T α (X lt (d)) and H T α (X lc (d)) are connected through the following relationship: where B(F X (d)) represents a Bernoulli distributed random variable with parameter F X (d).
Proof. By multiplying (13) with S X α (d), we obtain: From Theorem 2, we have: By subtracting the two relations above, we obtain the stated relationship. We denote by λ(x) = f X (x)/S X (x), for S X (x) > 0, the hazard rate function of the random variable X. In the next theorem, the per-payment simple or residual entropy with a deductible d is expressed in terms of the hazard or risk function of X.
Theorem 4. Let X be a non-negative random variable which models the loss corresponding to an insurance policy. Let α ∈ R\{1}. The Tsallis entropy of the left-truncated loss random variable corresponding to the per-payment risk model with a deductible d is given by: Proof. From Theorem 1, we have: We have: Integrating by parts the second term from the relation above, we obtain: Theorem 5. Let X be a non-negative random variable which models the loss corresponding to an insurance policy. Let α ∈ R\{1}. The Tsallis entropy H T α (X lt (d)) of the left-truncated loss random variable corresponding to the per-payment risk model with a deductible d is independent of d if, and only if, the hazard rate function is constant.
Proof. We assume that the hazard function is constant, therefore λ(x) = k ∈ R, for any x > 0. It results that f X (x) = kS X (x), for any x > 0 and, using (17), we obtain: which does not depend on d.
Conversely, assume that H T α (X lt (d)) does not depend on d. Using (17), we obtain Using (17) again, the last relation can be expressed as follows: Using again the hypothesis that H T α (X lt (d)) does not depend on d, it follows that λ does not depend on d, therefore λ is constant.
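Theorem 5 can be illustrated numerically (a sketch with illustrative parameters, not from the paper): for the exponential distribution, whose hazard is constant, the truncated entropy should be the same for every deductible d, while for a Weibull with shape 2, whose hazard is increasing, it should vary with d.

```python
import math

def trunc_tsallis(f, S, d, alpha, b, n=100_000):
    # Tsallis entropy of [X | X > d] via its density f(x)/S(d) on (d, b)
    g = lambda x: (f(x) / S(d)) ** alpha
    h = (b - d) / n
    total = 0.5 * (g(d) + g(b)) + sum(g(d + i * h) for i in range(1, n))
    return (1.0 - total * h) / (alpha - 1.0)

alpha = 1.5

# Exponential: constant hazard, so the entropy should not depend on d
lam = 1.0
f_e = lambda x: lam * math.exp(-lam * x)
S_e = lambda x: math.exp(-lam * x)
h1 = trunc_tsallis(f_e, S_e, 0.5, alpha, 60.0)
h2 = trunc_tsallis(f_e, S_e, 3.0, alpha, 60.0)

# Weibull with shape 2: increasing hazard, so the entropy moves with d
f_w = lambda x: 2.0 * x * math.exp(-x ** 2)
S_w = lambda x: math.exp(-x ** 2)
w1 = trunc_tsallis(f_w, S_w, 0.5, alpha, 10.0)
w2 = trunc_tsallis(f_w, S_w, 3.0, alpha, 10.0)
print(h1, h2, w1, w2)
```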

Loss Models Involving Truncation or Censoring from Above
Right-truncated or censored from above random variables are used in actuarial models with policy limits. In this case, losses are not recorded or reported at or above a specified threshold. We denote by u, u > 0, the value of the threshold, referred to as the policy limit or liability limit. According to Klugman et al. [40], there are two approaches used to express the random variable which models the loss, corresponding to the per-payment and per-loss cases, respectively.
In the per-payment case, losses or claims above the value of the liability limit may not be reported to the insurance company, generating truncated from above or right-truncated data.
We denote by X rt (u) the right-truncated random variable which models the loss corresponding to an insurance policy with a policy limit u in the per-payment case. It can be expressed as X rt (u) = [X|X < u], or equivalently: The relationship between the Tsallis entropy of the random variables X and X rt (u) is established in the following theorem. Theorem 6. Let X be a non-negative random variable which models the loss corresponding to an insurance policy. Let α ∈ R\{1} and u > 0. The Tsallis entropy H T α (X rt (u)) of the right-truncated loss random variable corresponding to the per-payment risk model with a policy limit u is given by: Proof. The probability density function of the random variable X rt (u) is given by f X rt (u) (x) = f X (x)/F X (u), for 0 < x < u, and 0 otherwise. Therefore, the Tsallis entropy of the random variable X rt (u) can be expressed as follows: In the following theorem, the Tsallis entropy of the right-truncated loss random variable corresponding to the per-payment risk model with a policy limit is expressed in terms of the reversed hazard function. Theorem 7. Let X be a non-negative random variable which models the loss corresponding to an insurance policy. Let α ∈ R\{1} and u > 0. The Tsallis entropy H T α (X rt (u)) of the right-truncated loss random variable corresponding to the per-payment risk model with a policy limit u can be expressed in terms of the reversed hazard function as follows: Proof. The probability density function of the random variable X rt (u) is given by Therefore, the Tsallis entropy of the random variable X rt (u) can be expressed as follows: Now, we consider the case of the per-loss right censoring. In this case, if the loss exceeds the value of the policy limit, the insurance company pays an amount u.
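For the exponential case, the right-truncated entropy computed from the definition can be compared with the closed form obtained from ∫ from 0 to u of f(x)^α dx = λ^(α−1)(1 − e^(−αλu))/α. The sketch below is illustrative and not part of the paper's computations.

```python
import math

# Tsallis entropy of the right-truncated variable X_rt(u) = [X | X < u],
# whose density is f(x)/F(u) on (0, u). For X ~ Exp(lam), the integral of
# f**alpha over (0, u) equals lam**(alpha-1) * (1 - exp(-alpha*lam*u)) / alpha.
def rt_entropy_exp_numeric(lam, u, alpha, n=200_000):
    f = lambda x: lam * math.exp(-lam * x)
    F_u = 1.0 - math.exp(-lam * u)
    g = lambda x: (f(x) / F_u) ** alpha
    h = u / n
    total = 0.5 * (g(0.0) + g(u)) + sum(g(i * h) for i in range(1, n))
    return (1.0 - total * h) / (alpha - 1.0)

def rt_entropy_exp_closed(lam, u, alpha):
    F_u = 1.0 - math.exp(-lam * u)
    integral = lam ** (alpha - 1.0) * (1.0 - math.exp(-alpha * lam * u)) / alpha
    return (1.0 - integral / F_u ** alpha) / (alpha - 1.0)

lam, u, alpha = 0.1, 10.0, 2.0
num_h = rt_entropy_exp_numeric(lam, u, alpha)
closed_h = rt_entropy_exp_closed(lam, u, alpha)
print(num_h, closed_h)
```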
For example, a car insurance policy covers losses up to a limit u, while larger losses are borne by the car owner. We note that the loss model with censoring from above is different from the loss model with truncation from above: censoring from above is defined by the random variable X rc (u) = min{X, u}, so that X rc (u) = X, if X < u, and X rc (u) = u, if X ≥ u. This model, corresponding to the per-loss case, assumes that when the loss satisfies X ≥ u, the insurance company pays the amount u. Therefore, the insurer pays a maximum amount of u on a claim. We note that the random variable X rc (u) is not absolutely continuous.
In the following theorem, an analytical formula for the entropy corresponding to the random variable X rc (u) is obtained. Theorem 8. Let X be a non-negative random variable which models the loss corresponding to an insurance policy. Let α ∈ R\{1} and u > 0. The Tsallis entropy of losses for the right-censored loss random variable corresponding to the per-loss risk model with a policy limit u can be expressed as follows: Proof. We have:
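For X ∼ Exp(λ), the mixed entropy of X rc (u), combining a continuous part on (0, u) with an atom of mass S(u) at u, has a closed form, and it should approach the entropy of X as u grows. A sketch under these assumptions, with illustrative parameters:

```python
import math

# Mixed-variable Tsallis entropy of X_rc(u) = min{X, u}: continuous part f on
# (0, u) plus an atom of mass S(u) at u. For X ~ Exp(lam) both pieces have
# closed forms, and the entropy tends to that of X as u -> infinity.
def rc_entropy_exp(lam, u, alpha):
    integral = lam ** (alpha - 1.0) * (1.0 - math.exp(-alpha * lam * u)) / alpha
    atom = math.exp(-lam * u) ** alpha        # S(u)**alpha
    return (1.0 - integral - atom) / (alpha - 1.0)

lam, alpha = 0.5, 2.0
full = (1.0 - lam ** (alpha - 1.0) / alpha) / (alpha - 1.0)  # entropy of X
for u in (1.0, 5.0, 20.0):
    print(u, rc_entropy_exp(lam, u, alpha))
print("untruncated:", full)

# For a large policy limit, the censored entropy is close to that of X
gap = abs(rc_entropy_exp(lam, 20.0, alpha) - full)
```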

Loss Models Involving Truncation from Above and from Below
We denote by d the deductible and by u the retention limit, with d < u. The deductible is applied after the implementation of the retention limit u; therefore, if the value of the loss is greater than u, then the value of the maximum payment is u − d. We denote by X lr (d, u) the loss random variable which models the payments to the policy holder under a combination of deductible and retention limit policies. X lr (d, u) is a mixed random variable, with an absolutely continuous part over the interval (0, u − d) and two discrete parts: at 0, with probability mass F X (d), and at u − d, with probability mass S X (u). Following [40], the loss random variable X lr (d, u) can be expressed by:
In the next theorem, the Tsallis entropy of losses for the right-truncated loss random variable corresponding to the per-loss risk model with a deductible d and a policy limit u is derived. Theorem 9. Let X be a non-negative random variable which models the loss corresponding to an insurance policy. Let α ∈ R\{1}, d > 0 and u > d. The Tsallis entropy of losses of the right-truncated loss random variable corresponding to the per-loss risk model with a deductible d and a policy limit u is given by: Proof. The probability density function of the random variable X lr (d, u) is given by where δ denotes the Dirac delta function. It results: The following theorem establishes the relationship between H T α (X lr (d, u)), the entropy under censoring from above H T α (X rc (u)) and the entropy under censoring from below H T α (X lc (d)).
Theorem 10. Let X be a non-negative random variable which models the loss corresponding to an insurance policy. Let α ∈ R\{1}. For any d > 0 and u > d, the Tsallis entropy H T α (X lr (d, u)) is related to the entropies H T α (X rc (u)) and H T α (X lc (d)) through the following relationship: Proof. We have: Moreover, It results that: Figure 2 illustrates the Tsallis entropy of the right-truncated loss random variable X lr (d, u), corresponding to the per-loss risk model with a deductible d and a policy limit u, for the exponential distribution with λ = 0.1 and for values of the α parameter around 1. We remark that, for all these values of α, the Tsallis entropy H T α (X lr (d, u)) is decreasing with respect to the deductible d and does not depend on the policy limit u. Figure 3 represents the Tsallis entropy of losses for the right-truncated loss random variable X lr (d, u) corresponding to the per-loss risk model with a deductible d and a policy limit u for the χ 2 distribution, with γ = 30 and for different values of the Tsallis parameter α, in the case d < u. Figure 3 reveals, for all the values of the parameter α considered, a similar decreasing behavior of the Tsallis entropy H T α (X lr (d, u)) with respect to the deductible d. Moreover, it indicates that the Tsallis entropy H T α (X lr (d, u)) does not depend on the values of the policy limit u. Figure 4 depicts the Tsallis entropy of losses of the right-truncated loss random variable corresponding to the per-loss risk model with a deductible d and a policy limit u for the Weibull distribution, with γ = 0.3, λ = 1.3 and a = 0, for different values of the Tsallis parameter α, in the case d < u. Figure 4 highlights that the Tsallis entropy of losses H T α (X lr (d, u)) is decreasing with respect to d for all the values of the parameter α considered.
Moreover, the Tsallis entropy H T α (X lr (d, u)) does not depend on the policy limit u for the values of the α parameter around 1, respectively, for α = 0.9 and α = 1.1. A different behavior is detected for α = 0.5. In this case, we remark that the Tsallis entropy is increasing with respect to the policy limit u, which is realistic from the actuarial point of view. Indeed, increasing the policy limit results in a higher risk for the insurance company.
The conclusions obtained indicate that Tsallis entropy measures with parameter values significantly different from 1 can provide a better loss model involving truncation from above and from below. The analysis of the results presented in Table 1 reveals that for parameter values α ≠ 1 the Tsallis entropy corresponding to the X rt (u) random variable is increasing with respect to the value of the policy limit u. On the other side, for α = 1, the Tsallis entropy, which reduces to the Shannon entropy measure, is decreasing with respect to u. From an actuarial perspective, when the policy limit increases, the risk of the insurance company also increases, therefore the entropy of losses increases, too. The detected behavior of the Tsallis entropy measure is reasonable in this case, and it means that the Tsallis entropy approach for evaluating the risk corresponding to the X rt (u) random variable is more realistic. Table 2 displays the values of the Tsallis entropy H T α (X lr (d, u)). Analyzing the results presented in Table 2, we remark a similar behavior: for parameter values α ≠ 1 the Tsallis entropy is increasing with respect to the value of the policy limit u, while for α = 1 the Shannon entropy measure is decreasing with respect to u. Table 3 illustrates the Tsallis entropy values in the case of the Weibull distribution with λ = 0.9585, γ = 0.3192 and deductible d = 1.2 for various values of the Tsallis parameter α and several values of the policy limit u.
The study of the results presented in Table 3 reveals that for parameter values α ≠ 1 the Tsallis entropy corresponding to the X rt (u) random variable is increasing with respect to the value of the policy limit u, while for α = 1 the Shannon entropy measure is decreasing with respect to u, with the same actuarial interpretation as above. Table 4 reveals the values of all the Tsallis entropy measures analyzed in the case of the Weibull distribution with λ = 0.9585, γ = 0.3192 and d = 1.3 for several values of the Tsallis parameter α and different values of the policy limit u.
The results displayed in Table 4 show that for α ≠ 1 the Tsallis entropy of the X rt (u) random variable increases with respect to the value of the policy limit u, whereas for α = 1 the entropy decreases with respect to u. This indicates that, when the policy limit increases, the risk of the insurance company increases, too, and thus the entropy of losses is increasing. We can also conclude that in this case the right-truncated loss random variable X rt (u) is better modeled using the Tsallis entropy measure. Analyzing the results provided in Table 5, we remark that for parameter values α ≠ 1 the Tsallis entropy corresponding to the right-truncated random variable is increasing with respect to the value of the policy limit u, while for α = 1 the Shannon entropy measure decreases with respect to u; the behavior of the Tsallis entropy is again the realistic one from the actuarial perspective. From Tables 1-5, we draw the following conclusions. Using the Tsallis entropy measure approach, when the deductible value d increases, the uncertainty of losses for the insurance company decreases, as the company has to pay smaller amounts. When the policy limit value u increases, the uncertainty of losses for the insurance company increases, as the company has to pay greater amounts. Therefore, the Tsallis entropy approach is more realistic and flexible, providing a relevant perspective and a useful instrument for loss models.

Loss Models under Inflation
Financial and actuarial models are estimated using observations made in past years. As inflation implies an increase in losses, the models must be adjusted to the current level of loss experience. Moreover, a projection of the anticipated losses in the future needs to be performed. We now study the effect of inflation on entropy. Let X be the random variable that models the loss corresponding to a certain year. We denote by F the cumulative distribution function of X and by f the probability density function of X. The random variable that models the loss after one year and under the inflation effect is X(r) = (1 + r)X, where r, r > 0, represents the annual inflation rate. We denote by F X(r) the cumulative distribution function of X(r) and by f X(r) the probability density function of the random variable X(r).
The probability density function corresponding to the random variable X(r) is given by: f X(r) (z) = (1/(1 + r)) f X (z/(1 + r)), z ≥ 0. The following theorem derives the relationship between the Tsallis entropies of the random variables X and X(r) = (1 + r)X.
Theorem 11. Let X be a non-negative random variable which models the loss corresponding to an insurance policy. Let α ∈ R\{1}. The Tsallis entropy of the random variable X(r), which models the loss after one year under inflation rate r, r > 0, is given by Proof. Using the definition of the Tsallis entropy, we have: Using the change of variable u = z/(1 + r), it follows Theorem 12. Let X be a non-negative random variable which models the loss corresponding to an insurance policy. Let α ∈ R\{1}. For r > 0, the Tsallis entropy of the random variable X(r), which models the loss after one year under inflation rate r, is always larger than that of X and is an increasing function of r.
Proof. Let r > 0. Using Theorem 11 and denoting I = ∫ f X (x) α dx > 0, we obtain d/dr H T α (X(r)) = I (1 + r) −α > 0, so that H T α (X(r)) is an increasing function of r. Therefore, it follows that H T α (X(r)) > H T α (X).
The results obtained show that inflation increases the entropy, which means that the uncertainty degree of losses increases compared with the case without inflation. Moreover, the uncertainty of losses increases with respect to the inflation rate.
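The scaling relation behind Theorems 11 and 12 can be written explicitly: for X(r) = (1 + r)X, a change of variables gives ∫ f X(r) (z) α dz = (1 + r) 1−α ∫ f X (x) α dx, hence H T α (X(r)) = (1 + r) 1−α H T α (X) + (1 − (1 + r) 1−α )/(α − 1). The sketch below (illustrative parameters, not from the paper) verifies this for an exponential loss, for which scaling by (1 + r) yields an Exp(λ/(1 + r)) variable.

```python
import math

# Closed-form Tsallis entropy of Exp(lam): (1 - lam**(alpha-1)/alpha)/(alpha-1)
def tsallis_exp(lam, alpha):
    return (1.0 - lam ** (alpha - 1.0) / alpha) / (alpha - 1.0)

# Scaling identity: H(X(r)) = c*H(X) + (1-c)/(alpha-1), c = (1+r)**(1-alpha)
def scaled_entropy(h_x, r, alpha):
    c = (1.0 + r) ** (1.0 - alpha)
    return c * h_x + (1.0 - c) / (alpha - 1.0)

lam, alpha = 2.0, 1.5
h_x = tsallis_exp(lam, alpha)
vals = [scaled_entropy(h_x, r, alpha) for r in (0.0, 0.02, 0.05, 0.10)]

# Direct check: scaling Exp(lam) by (1+r) gives Exp(lam/(1+r))
direct = tsallis_exp(lam / 1.10, alpha)
print(vals, direct)  # vals increase with r; vals[-1] matches direct
```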

Tsallis Entropy Approach for Survival Models
In this section, we derive residual and past entropy expressions for some survival models, including the proportional hazard and the proportional reversed hazard models. Relevant results in this field have been obtained by Sachlas and Papaioannou [18], Gupta and Gupta [19], Di Crescenzo [41] and Sankaran and Gleeja [42].
Let X and Y be random variables with cumulative distribution functions F and G, probability density functions f and g and survival functions F̄ = 1 − F and Ḡ = 1 − G, respectively. We denote by λ_X = f/F̄ and λ_Y = g/Ḡ the hazard rate functions of the random variables X and Y, respectively.

The Proportional Hazard Rate Model
Definition 3. The random variables X and Y satisfy the proportional hazard rate model if there exists θ > 0 such that

λ_Y(x) = θ λ_X(x) for all x ≥ 0, or, equivalently, Ḡ(x) = [F̄(x)]^θ

(see Cox [43]).
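Under this model, the survival functions satisfy Ḡ(x) = [F̄(x)]^θ, which is equivalent to the proportionality of the hazard rates, λ_Y(x) = θ λ_X(x). The sketch below checks this numerically; the exponential baseline and the value of θ are chosen purely for illustration.

```python
import math

lam, theta = 0.8, 2.5

def surv_X(x):
    """Survival function of the baseline loss X ~ Exp(lam)."""
    return math.exp(-lam * x)

def surv_Y(x):
    """Proportional hazard rate model: G_bar(x) = F_bar(x)**theta."""
    return surv_X(x) ** theta

def hazard(surv, x, h=1e-6):
    """lambda(x) = -d/dx log S(x), approximated by a central difference."""
    return -(math.log(surv(x + h)) - math.log(surv(x - h))) / (2 * h)

for x in (0.5, 1.0, 2.0):
    # the two hazard rates should differ by the constant factor theta
    assert abs(hazard(surv_Y, x) / hazard(surv_X, x) - theta) < 1e-4
```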

The Proportional Reversed Hazard Rate Model
Definition 4. The random variables X and Y satisfy the proportional reversed hazard rate model [43] if there exists θ > 0 such that

λ̃_Y(x) = θ λ̃_X(x) for all x ≥ 0, or, equivalently, G(x) = [F(x)]^θ,

where λ̃_X = f/F and λ̃_Y = g/G denote the reversed hazard rate functions. In the next theorem, the Tsallis entropy of the right-truncated random variable Y_rt(u) under the proportional reversed hazard rate model is derived.
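Analogously, G(x) = [F(x)]^θ is equivalent to the proportionality of the reversed hazard rates λ̃ = f/F. A short numerical check follows; as before, the exponential baseline and the value of θ are illustrative assumptions only.

```python
import math

lam, theta = 0.8, 3.0

def cdf_X(x):
    """Distribution function of the baseline loss X ~ Exp(lam)."""
    return 1 - math.exp(-lam * x)

def cdf_Y(x):
    """Proportional reversed hazard rate model: G(x) = F(x)**theta."""
    return cdf_X(x) ** theta

def reversed_hazard(cdf, x, h=1e-6):
    """tilde_lambda(x) = d/dx log F(x), approximated by a central difference."""
    return (math.log(cdf(x + h)) - math.log(cdf(x - h))) / (2 * h)

for x in (0.5, 1.0, 2.0):
    # the two reversed hazard rates should differ by the constant factor theta
    assert abs(reversed_hazard(cdf_Y, x) / reversed_hazard(cdf_X, x) - theta) < 1e-4
```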

Applications
We used a real database from [18], representing the Danish fire insurance losses recorded during the 1980-1990 period [44][45][46], where losses range from MDKK 1.0 to 263.250 (millions of Danish kroner). The average loss is MDKK 3.385, while 25% of the losses are smaller than MDKK 1.321 and 75% are smaller than MDKK 2.967.
The data from the database [18] were fitted using a Weibull distribution, and the maximum likelihood estimates of the shape and scale parameters, ĉ = 0.3192 and τ̂ = 0.9585, were obtained.
The results displayed in Tables 1-5 can be used to compare the values of the following entropy measures:

•	The Tsallis entropy H^T_α(X) corresponding to the random variable X which models the loss;
•	The Tsallis entropy of the left-truncated and, respectively, left-censored loss random variable corresponding to the per-payment risk model with a deductible d, namely H^T_α(X_lt(d)) and H^T_α(X_lc(d));
•	The Tsallis entropy of the right-truncated and, respectively, right-censored loss random variable corresponding to the per-payment risk model with a policy limit u, denoted by H^T_α(X_rt(u)) and H^T_α(X_rc(u));
•	The Tsallis entropy of losses of the right-truncated loss random variable corresponding to the per-loss risk model with a deductible d and a policy limit u, H^T_α(X_lr(d, u)).

In the case of the Weibull distribution, for the parameter values λ = 0.9585 (scale) and γ = 0.3192 (shape), for deductibles d ranging from 1.1 to 1.5, policy limits u = 10, 15, 20 and 25, and for values of the Tsallis entropy parameter α located in a neighborhood of the point 1, we draw the following conclusions. The values of the Tsallis entropy for α = 1 correspond to those obtained in [18]. For values of the parameter α lower than 1, the values of the corresponding entropy measures increase, while for values of α greater than 1 they decrease, as can also be seen in Figure 3. This behavior allows a higher degree of flexibility for modeling the truncated and censored loss random variables in actuarial models.
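The qualitative behavior just described can be reproduced with the fitted parameters. The sketch below evaluates the Tsallis entropy of the (untruncated) Weibull loss in closed form; the reduction of ∫ f^α dx to a Gamma integral via the substitution t = (x/τ)^c is our own derivation for this illustration, not a formula quoted from the paper.

```python
import math

def tsallis_entropy_weibull(c, tau, alpha):
    """Tsallis entropy of a Weibull loss with shape c and scale tau.

    With f(x) = (c/tau) * (x/tau)**(c-1) * exp(-(x/tau)**c), the
    substitution t = (x/tau)**c gives
        int f**alpha dx = (c/tau)**(alpha-1) * Gamma(s) / alpha**s,
    where s = 1 + (1-c)*(1-alpha)/c (the formula is valid while s > 0).
    """
    s = 1 + (1 - c) * (1 - alpha) / c
    if s <= 0:
        raise ValueError("integral of f**alpha diverges for this alpha")
    integral = (c / tau) ** (alpha - 1) * math.gamma(s) / alpha ** s
    return (1 - integral) / (alpha - 1)

# maximum likelihood estimates for the Danish fire losses quoted in the text
c_hat, tau_hat = 0.3192, 0.9585
vals = [tsallis_entropy_weibull(c_hat, tau_hat, a)
        for a in (0.90, 0.95, 1.05, 1.10)]
# entropy is larger for alpha < 1 and smaller for alpha > 1
assert vals[0] > vals[1] > vals[2] > vals[3]
```

With these estimates the entropy values are strictly decreasing in α, consistent with the pattern described above for Tables 1-5 and Figure 3. Note that for c = 0.3192 the closed form requires α < 1/(1 − c) ≈ 1.47.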

Conclusions
In this paper, an entropy-based approach for risk assessment in the framework of loss models and survival models involving truncated and censored random variables was developed.
By using the Tsallis entropy, the effect of some partial insurance schemes, such as inflation, truncation and censoring from above and truncation and censoring from below was investigated.
Analytical expressions for the per-payment and per-loss entropies of losses were derived. Moreover, closed formulas for the entropy of losses corresponding to the proportional hazard rate model and the proportional reversed hazard rate model were obtained.
The results obtained point out that entropy depends on the deductible and the policy limit, and that inflation increases entropy, meaning that the uncertainty degree of losses increases compared with the case without inflation. The use of entropy measures allows risk assessment for actuarial models involving truncated and censored random variables.
We used a real database representing the Danish fire insurance losses recorded between 1980 and 1990 [44][45][46], where losses range from MDKK 1.0 to 263.250 (millions of Danish Krone). The average loss is MDKK 3.385, while 25% of losses are smaller than MDKK 1.321 and 75% of losses are smaller than MDKK 2.967.
The data were fitted using the Weibull distribution in order to obtain the maximum likelihood estimates of the shape and scale parameters, ĉ = 0.3192 and τ̂ = 0.9585.
The values of the Tsallis entropies for α = 1 correspond to those from [18]; for α lower than 1 the values of the entropies increase, while for α greater than 1 they decrease, as can also be seen in Figure 3.
The paper extends several results obtained in this field; see, for example, Sachlas and Papaioannou [18].
The study of the results obtained reveals that, for parameter values α ≠ 1, the Tsallis entropy corresponding to the right-truncated loss random variable is increasing with respect to the value of the policy limit u. On the other hand, for α = 1, the Tsallis entropy, which reduces to the Shannon entropy measure, is decreasing with respect to u. From an actuarial perspective, when the policy limit increases, the risk of the insurance company also increases, and therefore the entropy of losses should increase, too. This behavior indicates that the Tsallis entropy approach for evaluating the risk corresponding to the right-truncated loss random variable is more realistic.
Therefore, we can conclude that the Tsallis entropy approach for actuarial models involving truncated and censored random variables provides a new and relevant perspective, since it allows a higher degree of flexibility for the assessment of risk models.
Author Contributions: All authors contributed equally to the paper. All authors have read and agreed to the published version of the manuscript.