Abstract
The main objective of this research is to obtain new estimates for Jensen’s gap in the integral sense, along with their applications. The convexity of the absolute value of a fifth-order derivative is used to establish the proposed estimates of Jensen’s gap. We performed numerical computations to compare our estimates with previous findings. With the use of the primary findings, we are able to obtain improvements of the Hölder inequality and the Hermite–Hadamard inequality. Furthermore, the primary results lead to some inequalities for power means and quasi-arithmetic means. We conclude by outlining the information theory applications of our primary inequalities.
MSC:
26A51; 26D15; 68P30
1. Introduction
The study of convex functions is a great way to experience the elegance and allure of sophisticated mathematics. Studying convex functions helps one to understand the efficiency and order that are inherent in mathematical structures, in addition to solving equations. A convex function [1] is defined as follows:
Definition 1.
A function $K:I\to\mathbb{R}$, where $I\subseteq\mathbb{R}$ is an interval, is said to be convex if the relation
$$K(\lambda x+(1-\lambda)y)\le \lambda K(x)+(1-\lambda)K(y)$$
holds for all $x,y\in I$ and $\lambda\in[0,1]$. $K$ is said to be concave over $I$ if the aforementioned inequality is true in the reverse sense.
Convex functions have several generalizations. Some recent generalizations include pseudo-convex functions [2], invex functions [3], quasi-convex functions [4], preinvex functions [5], and E-convex functions [6]. It is notable that convexity has inspired numerous ideas in the theory of mathematical inequalities and their applications. In the literature on inequalities, Jensen’s inequality extends the notion of convexity to expectations and weighted averages. Jensen’s inequality is one of the most famous and widely used inequalities in the field of mathematical inequalities. Jensen’s inequality [1] in the discrete form is stated as follows:
Theorem 1.
Let $x_1,\dots,x_n\in I$ and let $K$ be a convex function on $I$. If $p_i\ge 0$ for $i=1,\dots,n$ with $P_n=\sum_{i=1}^{n}p_i>0$, then
$$K\!\left(\frac{1}{P_n}\sum_{i=1}^{n}p_i x_i\right)\le\frac{1}{P_n}\sum_{i=1}^{n}p_i K(x_i).\qquad(2)$$
If the function $K$ is concave on $I$, then Inequality (2) will hold in the reverse direction.
Jensen’s inequality [7] in continuous form can be described as follows:
Theorem 2.
Assume that $\gamma$ and $\eta$ are any integrable functions defined on $[a_1,a_2]$ with $\eta(t)\ge 0$, $\int_{a_1}^{a_2}\eta(t)\,dt>0$, and $\gamma(t)\in I$ for all $t\in[a_1,a_2]$. Also, suppose that $K$ is a convex function on $I$. If $K\circ\gamma$ is an integrable function, then
$$K\!\left(\frac{\int_{a_1}^{a_2}\gamma(t)\eta(t)\,dt}{\int_{a_1}^{a_2}\eta(t)\,dt}\right)\le\frac{\int_{a_1}^{a_2}K(\gamma(t))\,\eta(t)\,dt}{\int_{a_1}^{a_2}\eta(t)\,dt}.$$
Jensen’s inequality is crucial because it may be used to derive other classical inequalities, such as the Hölder, arithmetic–geometric mean, Minkowski, Young, and Hermite–Hadamard inequalities. The Hermite–Hadamard inequality can be regarded as a refinement of the concept of convexity and is stated as follows:
Definition 2.
Let $K:[a_1,a_2]\to\mathbb{R}$ be an integrable convex function. Then,
$$K\!\left(\frac{a_1+a_2}{2}\right)\le\frac{1}{a_2-a_1}\int_{a_1}^{a_2}K(t)\,dt\le\frac{K(a_1)+K(a_2)}{2}.$$
Both inequalities hold in the reverse direction if K is concave.
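For readers who wish to check this numerically, the following short Python sketch (with a hypothetical convex function K(t) = t⁴ on a hypothetical interval [0, 2], not taken from this paper) evaluates all three members of the Hermite–Hadamard inequality by quadrature:

```python
# Minimal numerical check of the Hermite-Hadamard inequality.
# K and the interval [a, b] are hypothetical choices for illustration only.
from scipy.integrate import quad

K = lambda t: t ** 4          # hypothetical convex function
a, b = 0.0, 2.0               # hypothetical interval

lhs = K((a + b) / 2)                      # K at the midpoint
mean_value = quad(K, a, b)[0] / (b - a)   # average of K over [a, b]
rhs = (K(a) + K(b)) / 2                   # average of the endpoint values
print(lhs <= mean_value <= rhs)           # expected: True for a convex K
```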
Hölder’s inequality is a fundamental inequality in mathematics used to bound the integral or sum of a product of functions.
Definition 3.
Let $p>1$ and $\frac{1}{p}+\frac{1}{q}=1$. If $\gamma$ and $\eta$ are real functions defined on $[a_1,a_2]$ and if $|\gamma|^{p}$ and $|\eta|^{q}$ are integrable functions on $[a_1,a_2]$, then
$$\int_{a_1}^{a_2}|\gamma(t)\eta(t)|\,dt\le\left(\int_{a_1}^{a_2}|\gamma(t)|^{p}\,dt\right)^{\frac{1}{p}}\left(\int_{a_1}^{a_2}|\eta(t)|^{q}\,dt\right)^{\frac{1}{q}},$$
with equality holding if and only if $A|\gamma(t)|^{p}=B|\eta(t)|^{q}$ almost everywhere, where $A$ and $B$ are constants.
In addition, Taylor’s theorem with the integral remainder form has always been helpful in deriving new bounds for Jensen’s inequality; it is stated as follows:
Theorem 3.
Let $K$ be a function with $n+1$ continuous derivatives on the interval between $a$ and $x$. Then,
$$K(x)=\sum_{k=0}^{n}\frac{K^{(k)}(a)}{k!}(x-a)^{k}+R_{n}(x),\qquad(6)$$
where the remainder term is given by the integral form
$$R_{n}(x)=\frac{1}{n!}\int_{a}^{x}(x-t)^{n}K^{(n+1)}(t)\,dt.$$
Setting $n=4$, Equation (6) becomes
$$K(x)=\sum_{k=0}^{4}\frac{K^{(k)}(a)}{k!}(x-a)^{k}+\frac{1}{4!}\int_{a}^{x}(x-t)^{4}K^{(5)}(t)\,dt.\qquad(7)$$
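As a sanity check of this integral-remainder form, the following sketch (with a hypothetical smooth function K(t) = eᵗ and hypothetical expansion and evaluation points) verifies numerically that the degree-4 Taylor polynomial plus the remainder involving the fifth derivative reproduces K(x):

```python
# Verifies Taylor's theorem with the integral remainder for n = 4:
# the degree-4 Taylor polynomial about a0 plus the integral remainder
# involving the fifth derivative reproduces K(x0).  K, a0, x0 are hypothetical.
import math
import sympy as sp
from scipy.integrate import quad

t = sp.symbols('t')
K = sp.exp(t)                 # hypothetical smooth function
a0, x0 = 0.0, 1.3             # hypothetical expansion and evaluation points

taylor_poly = sum(float(sp.diff(K, t, k).subs(t, a0)) * (x0 - a0) ** k / math.factorial(k)
                  for k in range(5))
K5 = sp.lambdify(t, sp.diff(K, t, 5), 'numpy')          # fifth derivative as a callable
remainder = quad(lambda s: (x0 - s) ** 4 * K5(s), a0, x0)[0] / math.factorial(4)
print(taylor_poly + remainder, float(K.subs(t, x0)))    # the two printed values agree
```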
Also, Jensen’s inequality has been used to address several problems in different fields of science and technology, e.g., engineering [8], information science [9], mathematical statistics [10], and financial economics [11]. Regarding its generalizations, improvements, refinements, converses, etc., an extensive body of literature is available. Furthermore, Jensen’s inequality has also been demonstrated for several generalized classes of convex functions, including s-convex [12], coordinated convex [13], p-convex [14], h-convex [15], and 6-convex functions [16]. Dragomir [17] discussed a refinement of the prominent Jensen inequality by taking the convex function over a linear space. Khan et al. [7] obtained new estimates of Jensen’s gap by using the Green function. Deng et al. [18] used some majorization results to provide refinements of the discrete Jensen inequality. Moreover, they also demonstrated the applicability of the improved Jensen inequality to information theory, power means, and quasi-arithmetic means. Using four-convexity, Ullah et al. [19] improved Jensen’s inequality in both continuous and discrete forms. Through the notion of majorization, Saeed et al. [20] refined the well-known integral Jensen inequality, and these refinements were used to sharpen the Hermite–Hadamard and Hölder inequalities. Zhu and Yang [21] examined the stability of discrete-time delay systems in 2008 by applying Jensen’s inequality. Khan et al. [22] gave new estimates of the Jensen difference by using the convexity of the absolute value of the first derivative of a differentiable function, stated as follows:
Theorem 4.
Consider a differentiable function such that is a convex function and let be two integrable functions defined on such that for all and Also, assume that . Then,
Sohail et al. [23] presented new refinements of Jensen’s inequality using absolute convexity of thrice differentiable functions, supported by numerical comparisons and applications to classical inequalities, means, and divergences.
Theorem 5.
Assume that the function is thrice differentiable with being a convex function and being any integrable function. Further presume that and Then,
The primary goal of this study is to provide new Jensen gap estimates, to present applications of these estimates in the theory of means and in information theory, and to refine classical inequalities. For this purpose, we use a function whose absolute fifth-order derivative is convex. In many classical settings, second- or fourth-order bounds suffice; however, in several application domains, fifth-order derivative-based bounds offer substantial advantages. These bounds become particularly valuable in systems or models exhibiting strong nonlinear behavior, non-Gaussianity, or where tight bounds are essential (e.g., high-precision numerical methods).
The layout of this paper is as follows: the main findings are presented in Section 2. The importance of these findings is covered in Section 3, where we exhibit functions that satisfy the required convexity condition and compare the resulting bounds numerically with earlier ones. Applications of the main findings to the Hölder and Hermite–Hadamard inequalities, together with graphical verification of the results, are presented in Section 4, whereas Section 5 includes applications to the theory of means. We examine information theory applications in Section 6.
2. Main Results
Let us start this section by obtaining the following theorem, which provides an enhancement of Jensen’s inequality.
Theorem 6.
Suppose that the function is five times differentiable, with being a convex function and being any integrable function. Further, assume that and Then,
Proof.
Without loss of generality, assume that . Using the integral form of Taylor’s expansion (7) in we get
Now, by using the change of variable for we obtain
This implies that
Taking the absolute value of the identity (9), and subsequently using the triangle inequality, we get
Remark 1.
Convexity of the absolute value of the fifth derivative does not necessarily imply that the original function is convex. While it is true that a twice-differentiable convex function has a non-negative second derivative, and conversely that a non-negative second derivative implies convexity, convexity alone does not guarantee any specific behavior for higher-order derivatives, especially beyond the second derivative.
The convexity or concavity of the fifth-order absolute derivative is not guaranteed for all convex functions, but it may arise under certain additional regularity conditions, e.g., if the fifth derivative is monotonic, preserves its sign on the domain, and is non-negative. For specific function classes (e.g., exponentials, power functions, polynomials), the convexity of the fifth-order absolute derivative can be determined explicitly by direct computation.
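As an illustration of such a direct check, the sketch below (for a hypothetical test function, not the example of Section 3.1) tests midpoint convexity of the fifth-order absolute derivative on a grid:

```python
# Numerical check of whether |K^(5)| is convex on an interval, via a
# midpoint-convexity test on a grid.  K, the interval [a, b] and the grid
# size are hypothetical choices for illustration only.
import sympy as sp
import numpy as np

t = sp.symbols('t')
K = sp.exp(t) * sp.sin(t)                 # hypothetical test function
K5 = sp.diff(K, t, 5)                     # fifth derivative
f = sp.lambdify(t, sp.Abs(K5), 'numpy')   # |K^(5)| as a numerical function

a, b, n = 0.0, 2.0, 200
x = np.linspace(a, b, n)
X, Y = np.meshgrid(x, x)
# midpoint convexity: f((x+y)/2) <= (f(x)+f(y))/2 for all grid pairs
violation = np.max(f((X + Y) / 2) - (f(X) + f(Y)) / 2)
print("max midpoint-convexity violation of |K^(5)|:", violation)
# a non-positive value (up to rounding) is consistent with convexity on [a, b]
```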
The following theorem provides a further improvement of the Jensen inequality under a concavity assumption.
Theorem 7.
Let be any function such that exists. Additionally, suppose that and are integrable functions with and If is a concave function, then
3. Importance of Main Results
This section examines the impact of the improvements provided by the bounds obtained from Theorems 6 and 7 in relation to earlier research findings.
3.1. Functions That Fit the Criteria
In the literature, we may find functions for which the absolute values of the first- to fourth-order derivatives are not convex. Below is an illustration of one such function. Let us consider
Now, we compute the absolute values of the first- to fourth-order derivatives.
These functions are not convex, whereas the absolute value of the fifth-order derivative is convex.
Figure 1 provides an illustration of this, showing that the fifth-order absolute derivative function is convex while the first- to fourth-order absolute derivative functions are not. This highlights the particular contribution of our article.
Figure 1.
Two-dimensional plot of functions.
3.2. Numerical Estimates for the Jensen Difference
In this part, we perform numerical experiments to show the importance of our main results over other results in the literature. In the first example, a parameter a is assigned different values to check the bounds, which are compared in Table 1 and Table 2.
Table 1.
Comparative analysis of the inequality of Theorem 6 for different values of the parameter a for Example 1.
Table 2.
Comparative analysis of the inequality of Theorem 1 from [22] for different values of the parameter a for Example 1.
Example 1.
Let where Therefore, using the inequality (8) for and where
With the previously described data and the specific function in Inequality (8) obtained by Khan et al. [22], we arrive at
As shown, Jensen’s difference estimates provided in this study are better than the estimates achieved from [22].
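For readers reproducing such comparisons, the following sketch (with hypothetical choices of K, γ, η, and the interval, not the exact data of Example 1) shows how the integral Jensen gap can be evaluated by quadrature:

```python
# Evaluates the integral Jensen gap by quadrature.
# K, gamma, eta and the interval are hypothetical illustrative choices.
import numpy as np
from scipy.integrate import quad

K = np.exp                       # hypothetical convex function
gamma = lambda t: t              # hypothetical function gamma
eta = lambda t: 1.0              # hypothetical positive weight function
a1, a2 = 0.0, 1.0                # hypothetical interval

w = quad(eta, a1, a2)[0]                                       # total weight
mean_gamma = quad(lambda t: gamma(t) * eta(t), a1, a2)[0] / w  # weighted mean of gamma
mean_K = quad(lambda t: K(gamma(t)) * eta(t), a1, a2)[0] / w   # weighted mean of K(gamma)
jensen_gap = mean_K - K(mean_gamma)       # nonnegative whenever K is convex
print(f"Jensen gap = {jensen_gap:.6f}")
```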
Example 2.
Let for Then which is a convex function on the given interval. By applying Inequality (8) with the choices and we obtain the estimate
Furthermore, since is also a convex function on , using Inequality (2.1) from article [23], we derive the estimate as
Additionally, as is also a convex function, applying Inequality (2.24) from article [22], we obtain
4. Applications to Hölder and Hermite–Hadamard Inequalities
Hölder’s inequality is a fundamental result in mathematical analysis with widespread applications in various fields, e.g., functional analysis, probability theory, optimization, and physics. It facilitates the study of functions in various function spaces, is used to establish key results in probability and statistics, helps in the analysis of machine learning models, and supports practical problems in physics and engineering. On the other hand, the Hermite–Hadamard inequality refines the concept of convexity. In this section, we present applications of our obtained bounds to the Hölder and Hermite–Hadamard inequalities for specific choices of the underlying functions.
Proposition 1.
Let γ and η be two positive functions such that and are integrable. Also, assume that with . If , then
Proof.
Take the function , then by the successive differentiation of the given function, we get and . Clearly, both and are nonnegative on for , which substantiates the convexity of as well as the convexity of . Based on this, using (8) for and then taking the power we get
Since the inequality holds for and , using this inequality for and and we get
Proposition 2.
Let γ and η be two positive functions such that and are integrable functions and with . If , then
Let us consider an example for the estimates obtained from Proposition 1.
Example 3.
Take such that in (18). Using the above values in Inequality (18), we get
These estimates are visualized in Figure 2 below.
Figure 2.
Three-dimensional surface plot of the inequality, where blue represents the L.H.S. and red represents the R.H.S.
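The numerical verification behind such plots can be reproduced along the following lines; the sketch below (with hypothetical functions and exponents) compares both sides of the classical Hölder inequality by quadrature:

```python
# Compares both sides of the classical Hölder inequality numerically.
# gamma, eta, the interval and the conjugate exponents are hypothetical.
import numpy as np
from scipy.integrate import quad

p, q = 3.0, 1.5                  # conjugate exponents, 1/p + 1/q = 1
gamma = lambda t: np.sqrt(t)     # hypothetical positive function
eta = lambda t: t + 1.0          # hypothetical positive function
a1, a2 = 0.0, 1.0                # hypothetical interval

lhs = quad(lambda t: gamma(t) * eta(t), a1, a2)[0]
rhs = (quad(lambda t: gamma(t) ** p, a1, a2)[0] ** (1 / p)
       * quad(lambda t: eta(t) ** q, a1, a2)[0] ** (1 / q))
print(f"LHS = {lhs:.6f} <= RHS = {rhs:.6f}: {lhs <= rhs}")
```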
Proposition 3.
Let be any functions and with Additionally, assume that and are integrable over If then
Proof.
Let , then by the successive differentiation of the given function, we obtain and Clearly, is convex and is concave on for . Therefore, utilizing (12) for and then taking power , we get
Since the inequality holds for and , using , , and we get
Proposition 4.
Let γ and η be two positive functions such that and are integrable functions and with . If , then
The following corollary improves the Hermite–Hadamard inequality.
Corollary 1.
Under the assumptions of Theorem 6, we have
Corollary 2.
Under the assumptions of Theorem 7, we have
5. Applications to Power Means and Quasi-Arithmetic Means
Definition 4.
If both γ and η are positive integrable functions on the interval $[a_1,a_2]$ such that $\int_{a_1}^{a_2}\eta(t)\,dt>0$, then
$$M_r(\gamma;\eta)=\left(\frac{\int_{a_1}^{a_2}\eta(t)\,\gamma^{r}(t)\,dt}{\int_{a_1}^{a_2}\eta(t)\,dt}\right)^{\frac{1}{r}},\qquad r\neq 0,$$
is the power mean of order $r$ (for $r=0$, it is defined by the limiting, geometric-mean form).
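For completeness, a weighted integral power mean of this type can be computed numerically as in the following sketch (hypothetical γ, η, and interval):

```python
# Weighted integral power mean of order r, evaluated by quadrature.
# gamma, eta and the interval are hypothetical illustrative choices.
import numpy as np
from scipy.integrate import quad

def power_mean(r, gamma, eta, a1, a2):
    w = quad(eta, a1, a2)[0]
    if r == 0:
        # limiting (geometric-mean) case
        return np.exp(quad(lambda t: eta(t) * np.log(gamma(t)), a1, a2)[0] / w)
    return (quad(lambda t: eta(t) * gamma(t) ** r, a1, a2)[0] / w) ** (1 / r)

gamma = lambda t: 1.0 + t ** 2   # hypothetical positive function
eta = lambda t: np.exp(-t)       # hypothetical positive weight
for r in (-1, 0, 1, 2):
    print(r, power_mean(r, gamma, eta, 0.0, 1.0))
# power means are nondecreasing in r, so the printed values should not decrease
```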
Theorem 6 enables us to give an inequality for the power mean as a special case as follows:
Corollary 3.
Let be two positive functions with In addition, let
- (i)
- If or or such that then
- (ii)
- If or or such that then (33) holds.
Proof.
Let the function . Then and . Clearly, both and are positive with the given conditions, and consequently this confirms the convexity of the function on . Therefore, taking and in (8), we obtain (33).
(ii) For the stated circumstances of m and n, the functions and are convex on . Thus, by following the process of (i), we will get (33). □
Theorem 7 allows us to provide an additional inequality for the power mean as a special case as follows:
Corollary 4.
Let be two positive functions with In addition, let
- (i)
- If with then
- (ii)
- If such that then (34) holds.
Proof.
(i) Let for Then, for given values of m and the function K is convex and is concave. Then, using (12) for and we get (34).
(ii) Under the specified parameters for m and n, the function is convex and is concave on . Therefore, by following the procedure of (i), we receive (34). □
Theorem 6 can be used to derive a relation as shown below:
Corollary 5.
Let be two positive functions with Then,
The following corollary offers a relation for the power means as a result of Theorem 6.
Corollary 6.
Assume that the conditions of Corollary 5 are satisfied. Then,
The definition of the quasi-arithmetic mean is as follows:
Definition 5.
If $\gamma$ and $\eta$ are any integrable functions defined on $[a_1,a_2]$ with $\gamma(t)>0$ and $\eta(t)>0$, and $g$ is a continuous and strictly monotonic function on the range of $\gamma$, then the quasi-arithmetic mean is defined by
$$M_g(\gamma;\eta)=g^{-1}\!\left(\frac{\int_{a_1}^{a_2}\eta(t)\,g(\gamma(t))\,dt}{\int_{a_1}^{a_2}\eta(t)\,dt}\right).$$
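Analogously, the quasi-arithmetic mean can be evaluated numerically; in the sketch below (hypothetical g, γ, η, and interval), choosing g = log recovers the weighted geometric mean:

```python
# Quasi-arithmetic mean g^{-1}( integral of eta*g(gamma) / integral of eta ).
# g, gamma, eta and the interval are hypothetical illustrative choices.
import numpy as np
from scipy.integrate import quad

def quasi_arithmetic_mean(g, g_inv, gamma, eta, a1, a2):
    w = quad(eta, a1, a2)[0]
    return g_inv(quad(lambda t: eta(t) * g(gamma(t)), a1, a2)[0] / w)

gamma = lambda t: 1.0 + t        # hypothetical positive function
eta = lambda t: 1.0              # hypothetical weight
# with g = log, the quasi-arithmetic mean reduces to the weighted geometric mean
print(quasi_arithmetic_mean(np.log, np.exp, gamma, eta, 0.0, 1.0))
```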
We may express an inequality for the quasi-arithmetic mean as follows using Theorem 6.
Corollary 7.
Consider the two positive functions and with . Moreover, suppose that a function with is convex and g is a continuous and strictly monotone function. Then, the following inequality holds:
Theorem 7 provides an inequality for the quasi-arithmetic mean as shown below:
Corollary 8.
Let us assume that and are two positive functions with . Also assume that g is a strictly monotonic and continuous function and that the function is such that is concave. Then,
6. Application in Information Theory
Information theory has revolutionized the way we send, store, and process data through its numerous applications in a variety of fields. It supports the effective compression of data for transmission in telecommunications, guaranteeing low-bandwidth consumption while maintaining data integrity. Information theory serves as the basis for secure communication and encryption techniques in cryptography that protect private data. It directs the creation of algorithms for clustering, classification, and pattern recognition in machine learning. Additionally, genetics has been impacted by information theory, which has helped to clarify DNA sequences and genetic variety. Applications of information theory continue to influence the technical landscape of the modern world, from the fields of biology to engineering, improving our capacity to process and derive meaning from enormous amounts of data. Csiszár divergence is defined below.
Definition 6.
Let $f$ be a convex function, and also assume that the functions γ and η are integrable on $[a_1,a_2]$ such that $\gamma(t)>0$ and $\eta(t)>0$ for all $t\in[a_1,a_2]$. Then, the Csiszár divergence is defined by
$$C_f(\gamma,\eta)=\int_{a_1}^{a_2}\eta(t)\,f\!\left(\frac{\gamma(t)}{\eta(t)}\right)dt.$$
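Numerically, the Csiszár divergence can be approximated by quadrature as in the following sketch (hypothetical densities and a hypothetical convex f; choosing f(u) = u log u recovers the Kullback–Leibler divergence):

```python
# Csiszár divergence: integral of eta(t) * f(gamma(t)/eta(t)) over [a1, a2].
# The densities, interval and convex function f are hypothetical choices.
import numpy as np
from scipy.integrate import quad

def csiszar_divergence(f, gamma, eta, a1, a2):
    return quad(lambda t: eta(t) * f(gamma(t) / eta(t)), a1, a2)[0]

f_kl = lambda u: u * np.log(u)       # f(u) = u*log(u) gives the KL divergence
gamma = lambda t: 2.0 * t            # hypothetical density on [0, 1]
eta = lambda t: 1.0                  # hypothetical (uniform) density on [0, 1]
print(csiszar_divergence(f_kl, gamma, eta, 0.0, 1.0))
```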
We will now go over a few ideas that are connected to the Csiszár divergence.
Definition 7.
Let η and γ be probability density functions. Then, the Shannon entropy is given by
$$S(\gamma)=-\int_{a_1}^{a_2}\gamma(t)\log\gamma(t)\,dt.$$
Definition 8.
The Kullback–Leibler divergence is defined by
$$D(\gamma\,\|\,\eta)=\int_{a_1}^{a_2}\gamma(t)\log\frac{\gamma(t)}{\eta(t)}\,dt.$$
Definition 9.
The Bhattacharyya coefficient is defined as follows:
$$B(\gamma,\eta)=\int_{a_1}^{a_2}\sqrt{\gamma(t)\eta(t)}\,dt.$$
Definition 10.
Let η and γ be probability density functions and let $\alpha>0$ with $\alpha\neq 1$. The Rényi divergence is defined as
$$R_{\alpha}(\gamma,\eta)=\frac{1}{\alpha-1}\log\left(\int_{a_1}^{a_2}\gamma^{\alpha}(t)\,\eta^{1-\alpha}(t)\,dt\right).$$
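The divergence measures listed above can be evaluated numerically as in the compact sketch below (hypothetical densities on [0, 1]; the formulas follow the standard definitions, which may differ in notation from those used here):

```python
# Quadrature evaluation of Shannon entropy, KL divergence, Bhattacharyya
# coefficient and Rényi divergence for two hypothetical densities on [0, 1].
import numpy as np
from scipy.integrate import quad

gamma = lambda t: 2.0 * t                    # hypothetical density on [0, 1]
eta = lambda t: 3.0 * t ** 2                 # hypothetical density on [0, 1]

shannon = -quad(lambda t: gamma(t) * np.log(gamma(t)), 0, 1)[0]
kl = quad(lambda t: gamma(t) * np.log(gamma(t) / eta(t)), 0, 1)[0]
bhattacharyya = quad(lambda t: np.sqrt(gamma(t) * eta(t)), 0, 1)[0]
alpha = 0.5
renyi = np.log(quad(lambda t: gamma(t) ** alpha * eta(t) ** (1 - alpha), 0, 1)[0]) / (alpha - 1)
print(shannon, kl, bhattacharyya, renyi)
```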
The following corollary estimates the Csiszár divergence using Theorem 6.
Corollary 9.
Let the assumptions of Theorem 6 hold. Then,
Now, using Corollary 9, we derive applications for the Shannon entropy, the Kullback–Leibler divergence, and the Bhattacharyya coefficient as follows.
The following consequence gives Shannon entropy estimates as an application of Theorem 6.
Corollary 10.
Let γ be a positive probability density function. Then,
The effect of Theorem 6 on the Kullback–Leibler divergence is explained by the following conclusion.
Corollary 11.
Let γ and η be two positive probability density functions. Then,
Proof.
The following consequence estimates the Bhattacharyya coefficient by applying Theorem 6.
Corollary 12.
Let γ and η be two positive probability density functions. Then,
Proof.
Lastly, the following corollary states the bound for the Rényi divergence that is inferred from Theorem 6.
Corollary 13.
Let η and γ be probability density functions and for any . Then,
7. Conclusions
In this article, we applied the convexity property of the fifth-order absolute derivative of a function to obtain significant estimates of the Jensen gap. For this, we used the definition of convex functions and the Jensen inequality. We provided improvements of Hölder’s inequality and the Hermite–Hadamard inequality by using our estimates. Furthermore, we presented estimates for the Csiszár and Kullback–Leibler divergences, the Bhattacharyya coefficient, and Shannon entropy through additional applications of the main findings in information theory. In addition, we provided examples of functions whose fifth-order absolute derivative is convex and examined the sharpness of our results through numerical experiments.
Author Contributions
Conceptualization, S.N. and F.Z.; methodology, S.N. and F.Z.; software, S.N.; validation, F.Z. and H.A.; formal analysis, S.N.; investigation, S.N.; writing—original draft preparation, S.N.; writing—review and editing, S.N., F.Z. and H.A.; visualization, S.N.; supervision, F.Z.; project administration, H.A.; funding acquisition, H.A. All authors have read and agreed to the published version of the manuscript.
Funding
This research was funded by the Deanship of Scientific Research, Taif University, Taif, Saudi Arabia.
Data Availability Statement
No new data were created or analyzed in this study.
Conflicts of Interest
The authors declare no conflicts of interest.
References
- Pečarić, J.; Persson, L.E.; Tong, Y.L. Convex Functions, Partial Orderings and Statistical Applications; Academic Press: New York, NY, USA, 1992.
- Mangasarian, O.L. Pseudo-Convex Functions. SIAM J. Control 1965, 3, 281–290.
- Hanson, M.A. On sufficiency of the Kuhn-Tucker conditions. J. Math. Anal. Appl. 1981, 80, 545–550.
- Arrow, K.J.; Enthoven, A.D. Quasiconcave Programming. Econometrica 1961, 29, 779–800.
- Mohan, S.R.; Neogy, S.K. On invex sets and preinvex functions. J. Math. Anal. Appl. 1995, 189, 901–908.
- Youness, E.A. E-convex sets, E-convex functions and E-convex programming. J. Optim. Theory Appl. 1999, 102, 439–450.
- Khan, M.A.; Khan, S.; Chu, Y. A new bound for the Jensen gap with applications in information theory. IEEE Access 2016, 4, 98001–98008.
- Cloud, M.J.; Drachman, B.C.; Lebedev, L.P. Inequalities with Applications to Engineering; Springer: Heidelberg, Germany, 2014.
- Butt, S.I.; Mehmood, N.; Pečarić, Đ.; Pečarić, J. New bounds for Shannon, relative and Mandelbrot entropies via Abel-Gontscharoff interpolating polynomial. Math. Inequal. Appl. 2019, 22, 1283–1301.
- Leorato, S. A refined Jensen’s inequality in Hilbert spaces and empirical approximations. J. Multivar. Anal. 2009, 100, 1044–1060.
- Lin, Q. Jensen inequality for superlinear expectations. Stat. Probab. Lett. 2019, 151, 79–83.
- Hudzik, H.; Maligranda, L. Some remarks on s-convex functions. Aequationes Math. 1994, 48, 100–111.
- Khan, M.A.; Wu, S.; Ullah, H.; Chu, Y.M. Discrete majorization type inequalities for convex functions on rectangles. J. Math. Anal. Appl. 2019, 2019, 16.
- Sezer, S.; Eken, Z.; Tinaztepe, G.; Adilov, G. p-convex functions and some of their properties. Numer. Funct. Anal. Optim. 2021, 42, 443–459.
- Varošanec, S. On h-convexity. J. Math. Anal. Appl. 2007, 326, 303–311.
- Khan, M.A.; Sohail, A.; Ullah, H.; Saeed, T. Estimations of the Jensen gap and their applications based on 6-convexity. Mathematics 2023, 11, 1957.
- Dragomir, S.S. A refinement of Jensen’s inequality with applications to f-divergence measures. Taiwan. J. Math. 2010, 14, 153–164.
- Deng, Y.; Ullah, H.; Khan, M.A.; Iqbal, S.; Wu, S. Refinements of Jensen’s inequality via majorization results with applications in the information theory. J. Math. 2021, 2021, 1–12.
- Ullah, H.; Khan, M.A.; Saeed, T.; Sayed, Z.M.M. Some improvements of Jensen’s inequality via 4-convexity and applications. J. Funct. Spaces 2022, 2022, 1–9.
- Saeed, T.; Khan, M.A.; Ullah, H. Refinements of Jensen’s inequality and applications. AIMS Math. 2022, 7, 5328–5346.
- Zhu, X.L.; Yang, G.H. Jensen inequality approach to stability analysis of discrete-time systems with time-varying delay. IET Control Theory Appl. 2008, 2, 1644–1649.
- Khan, M.A.; Khan, S.; Erden, S.; Samraiz, M. A new approach for the derivations of bounds for the Jensen difference. Math. Methods Appl. Sci. 2022, 45, 36–48.
- Sohail, A.; Khan, M.A.; Ding, X.; Sharaf, M.; El-Meligy, M.A. Improvements of the integral Jensen inequality through the treatment of the concept of convexity of thrice differential functions. AIMS Math. 2024, 9, 33973–33994.