1. Introduction
A function $f: I \to \mathbb{R}$ is said to be convex on a nonempty interval $I$ if the inequality
$$f(\lambda x + (1-\lambda)y) \le \lambda f(x) + (1-\lambda) f(y) \qquad (1)$$
holds for all $x, y \in I$ and $\lambda \in [0, 1]$.
If the inequality (1) reverses, then f is said to be concave on I [1].
Let $f$ be a convex function on an interval $I$ and $a, b \in I$, $a < b$. Then,
$$f\!\left(\frac{a+b}{2}\right) \le \frac{1}{b-a}\int_a^b f(t)\,dt \le \frac{f(a)+f(b)}{2}. \qquad (2)$$
This double inequality is known in the literature as the Hermite–Hadamard (HH) integral inequality for convex functions.
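As a quick numerical illustration of the double inequality (2) (not part of the original argument; the convex test function $f(t) = e^t$ and the interval $[0, 2]$ are our choices), one can compare the three quantities directly:

```python
import math

def hh_chain(f, a, b, n=100_000):
    """Return (midpoint value, average integral, endpoint average) of f on [a, b].

    The average integral is approximated by a composite midpoint rule.
    """
    h = (b - a) / n
    integral = sum(f(a + (i + 0.5) * h) for i in range(n)) * h
    return f((a + b) / 2), integral / (b - a), (f(a) + f(b)) / 2

# f(t) = exp(t) is convex, so the HH chain must hold on any interval.
lo, mid, hi = hh_chain(math.exp, 0.0, 2.0)
assert lo <= mid <= hi  # e <= (e^2 - 1)/2 <= (1 + e^2)/2
```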
It has plenty of applications in different areas of mathematics (see [2,3] and the references therein).
If f is a concave function, then both inequalities in (2) hold in the reverse direction, i.e.,
$$f\!\left(\frac{a+b}{2}\right) \ge \frac{1}{b-a}\int_a^b f(t)\,dt \ge \frac{f(a)+f(b)}{2}. \qquad (3)$$
In this article we investigate the possibility of a form of the Hermite–Hadamard inequality for functions that are not necessarily convex/concave on
I. This has already been attempted in [4], where the convexity/concavity of the second derivative was shown to be crucial for obtaining improvements of the HH inequality as a linear combination of its endpoint values.
We derive a form of the Hermite–Hadamard inequality under the sole condition that a derivative of a certain order of the target function exists on a closed interval $E = [a, b]$. Thus, suppose that $g''$ exists and is continuous on $E$. Since a continuous function attains its extrema on a closed interval, the numbers $m := \min_{t \in E} g''(t)$ and $M := \max_{t \in E} g''(t)$ exist. These numbers play an important role in the sequel.
Closely connected to the HH inequality is the well-known Simpson's rule, which is of great importance in numerical integration. It states the following:
Lemma 1 ([5]). For a 4-times continuously differentiable f, we have:
$$\int_a^b f(t)\,dt = \frac{b-a}{6}\left[f(a) + 4f\!\left(\frac{a+b}{2}\right) + f(b)\right] - \frac{(b-a)^5}{2880}\, f^{(4)}(\xi),$$
where $\xi \in (a, b)$. By taking $a = x - h$ and $b = x + h$, we obtain another form of Simpson's rule:
$$\int_{x-h}^{x+h} f(t)\,dt = \frac{h}{3}\left[f(x-h) + 4f(x) + f(x+h)\right] - \frac{h^5}{90}\, f^{(4)}(\xi). \qquad (4)$$
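The classical error term above can be sanity-checked numerically; the following sketch (the test function $\sin$ on $[0, \pi]$ is our choice) compares the basic Simpson approximation with the exact integral and the bound $(b-a)^5 \max|f^{(4)}|/2880$:

```python
import math

def simpson(f, a, b):
    """Basic (non-composite) Simpson approximation of the integral of f over [a, b]."""
    return (b - a) / 6 * (f(a) + 4 * f((a + b) / 2) + f(b))

a, b = 0.0, math.pi
approx = simpson(math.sin, a, b)  # (pi/6) * (0 + 4*1 + 0) = 2*pi/3 ~ 2.094
exact = 2.0                       # integral of sin over [0, pi]
# |f''''| = |sin| <= 1 on [0, pi], so the classical bound applies with that constant.
bound = (b - a) ** 5 / 2880
assert abs(approx - exact) <= bound
```

Note that Simpson's rule is exact for cubics, in agreement with the $f^{(4)}$ error term.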
Although (4) assumes that $f^{(4)}$ exists, we shall prove here an estimation of the target expression for twice differentiable functions. This is particularly useful if $f^{(4)}$ does not exist on E.
Namely, if $g \in C^2(E)$, then:
$$\left|\frac{1}{b-a}\int_a^b g(t)\,dt - \frac{1}{6}\left[g(a) + 4g\!\left(\frac{a+b}{2}\right) + g(b)\right]\right| \le \frac{(M-m)(b-a)^2}{36},$$
as shown in Corollary 1 (below).
A challenging question is to determine the best possible constant in the above estimation.
Open problem. Find the best possible absolute constant A such that the inequality:
$$\left|\frac{1}{b-a}\int_a^b g(t)\,dt - \frac{1}{6}\left[g(a) + 4g\!\left(\frac{a+b}{2}\right) + g(b)\right]\right| \le A\,(M-m)(b-a)^2$$
holds for any $g \in C^2(E)$. By other means, the constant can be replaced with ; however, the problem is not yet solved. Conversely, the example gives .
An analogous result concerns functions that are only 3-times differentiable on E.
For any
we have:
In this case we know that the constant is the best possible.
In the sequel we shall sharply improve Simpson’s rule by assuming that
is convex on
E. Thus,
Applying the same method we give tight bounds for an extended form of (
4), as shown in Corollary 2.
2. Results and Proofs
Our first main result is a variant of the Hermite–Hadamard inequality for functions that are not necessarily convex/concave but have a second derivative on E.
Theorem 1. Let $g \in C^2(E)$ with $m = \min_{t \in E} g''(t)$ and $M = \max_{t \in E} g''(t)$. Then,
$$g\!\left(\frac{a+b}{2}\right) + \frac{m(b-a)^2}{24} \le \frac{1}{b-a}\int_a^b g(t)\,dt \le g\!\left(\frac{a+b}{2}\right) + \frac{M(b-a)^2}{24},$$
$$\frac{g(a)+g(b)}{2} - \frac{M(b-a)^2}{12} \le \frac{1}{b-a}\int_a^b g(t)\,dt \le \frac{g(a)+g(b)}{2} - \frac{m(b-a)^2}{12}.$$
Proof. For a given $g \in C^2(E)$, we define an auxiliary function f by $f(t) = g(t) - \frac{m}{2}t^2$. Since $f''(t) = g''(t) - m \ge 0$, it follows that f is a convex function on E. Therefore, applying the HH inequality (2), we obtain:
$$g\!\left(\frac{a+b}{2}\right) - \frac{m}{2}\left(\frac{a+b}{2}\right)^2 \le \frac{1}{b-a}\int_a^b g(t)\,dt - \frac{m}{2}\cdot\frac{a^2+ab+b^2}{3} \le \frac{g(a)+g(b)}{2} - \frac{m}{2}\cdot\frac{a^2+b^2}{2},$$
that is,
$$g\!\left(\frac{a+b}{2}\right) + \frac{m(b-a)^2}{24} \le \frac{1}{b-a}\int_a^b g(t)\,dt \le \frac{g(a)+g(b)}{2} - \frac{m(b-a)^2}{12}. \qquad (5)$$
However, taking the auxiliary function f as $f(t) = \frac{M}{2}t^2 - g(t)$, we see that it is also convex on E.
Applying the HH inequality once again, we get:
$$\frac{g(a)+g(b)}{2} - \frac{M(b-a)^2}{12} \le \frac{1}{b-a}\int_a^b g(t)\,dt \le g\!\left(\frac{a+b}{2}\right) + \frac{M(b-a)^2}{24}. \qquad (6)$$
Comparing the inequalities (5) and (6), the proof is done. □
Remark 1. If g is a convex function on E, then $m \ge 0$ and (5) evidently represents an improvement of the HH inequality. Analogously, if g is concave on E, then $M \le 0$ and (6) improves the inequality (3). Another consequence is the following estimation of Simpson's rule for twice differentiable functions.
Corollary 1. If $g \in C^2(E)$, then:
$$\left|\frac{1}{b-a}\int_a^b g(t)\,dt - \frac{1}{6}\left[g(a) + 4g\!\left(\frac{a+b}{2}\right) + g(b)\right]\right| \le \frac{(M-m)(b-a)^2}{36}.$$
Proof. Inequalities (5) and (6) give:
$$\frac{2}{3}\left[\frac{1}{b-a}\int_a^b g(t)\,dt - g\!\left(\frac{a+b}{2}\right)\right] \le \frac{M(b-a)^2}{36}, \qquad \frac{1}{3}\left[\frac{1}{b-a}\int_a^b g(t)\,dt - \frac{g(a)+g(b)}{2}\right] \le -\frac{m(b-a)^2}{36}.$$
Adding these inequalities, we obtain:
$$\frac{1}{b-a}\int_a^b g(t)\,dt - \frac{1}{6}\left[g(a) + 4g\!\left(\frac{a+b}{2}\right) + g(b)\right] \le \frac{(M-m)(b-a)^2}{36}.$$
Similarly, adjusting the left-hand sides of (5) and (6), we get:
$$\frac{1}{b-a}\int_a^b g(t)\,dt - \frac{1}{6}\left[g(a) + 4g\!\left(\frac{a+b}{2}\right) + g(b)\right] \ge \frac{(m-M)(b-a)^2}{36}.$$
Hence,
$$\left|\frac{1}{b-a}\int_a^b g(t)\,dt - \frac{1}{6}\left[g(a) + 4g\!\left(\frac{a+b}{2}\right) + g(b)\right]\right| \le \frac{(M-m)(b-a)^2}{36},$$
and the proof is complete. □
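A numerical spot check of the corollary's conclusion (the test function $g(t) = t^4$ on $[0, 1]$ is our choice; the two-sided bound of the form $(M-m)(b-a)^2/36$ used here is the one obtained by combining the midpoint and trapezoid estimates with weights $2/3$ and $1/3$):

```python
def simpson_mean(g, a, b):
    """Simpson's weighted mean (1/6)[g(a) + 4 g((a+b)/2) + g(b)] of g on [a, b]."""
    return (g(a) + 4 * g((a + b) / 2) + g(b)) / 6

# g(t) = t^4 on [0, 1]: the exact average integral is 1/5,
# and g''(t) = 12 t^2 gives m = 0, M = 12 on E = [0, 1].
a, b, m, M = 0.0, 1.0, 0.0, 12.0
avg_integral = 1 / 5
deviation = abs(avg_integral - simpson_mean(lambda t: t ** 4, a, b))
assert deviation <= (M - m) * (b - a) ** 2 / 36
```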
Another important result concerns functions that are only 3-times differentiable on E.
Theorem 2. For any we have: The constant is the best possible.
Proof. For this case we need an integral representation of
, where:
Lemma 2. If , then:where . This identity can easily be proven by repeated integration by parts on its right-hand side.
To prove that the constant
is best possible, we consider the function
, which is defined as:
It is easy to confirm that this function is 3-times continuously differentiable on the real line.
Applying the form of Simpson’s rule for
, we obtain:
Since,
we obtain that
.
Letting
, we get:
and the proof is done. □
We now prove a more precise form of Simpson’s rule by assuming that exists and is convex on E.
Theorem 3. Let and be convex on E. Then, Proof. The proof of this assertion is based on the following two lemmas.
Lemma 3 ([6]). If h is convex on $[a, b]$ and $x + y = a + b$ for some $x, y \in [a, b]$, then:
$$2h\!\left(\frac{a+b}{2}\right) \le h(x) + h(y) \le h(a) + h(b).$$
Remark 2. Note that this result represents a pre-HH inequality, i.e., the Hermite–Hadamard inequality is its direct consequence.
Indeed, let $x = ta + (1-t)b$ and $y = (1-t)a + tb$ for $t \in [0, 1]$.
Then $x, y \in [a, b]$ and $x + y = a + b$. Hence,
$$2h\!\left(\frac{a+b}{2}\right) \le h(ta + (1-t)b) + h((1-t)a + tb) \le h(a) + h(b).$$
Integrating this expression over $t \in [0, 1]$ and using $\int_0^1 h(ta + (1-t)b)\,dt = \frac{1}{b-a}\int_a^b h(u)\,du$, we obtain the HH inequality.
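Lemma 3 itself is easy to probe numerically; a minimal sketch (the convex test function $h(t) = t^2$ on $[0, 1]$ and the symmetric pairs $x$, $1 - x$ are our choices):

```python
# For a convex h on [a, b] and x + y = a + b, Lemma 3 asserts
#   2 h((a+b)/2) <= h(x) + h(y) <= h(a) + h(b).
h = lambda t: t ** 2          # convex on [0, 1]
a, b = 0.0, 1.0
for k in range(101):
    x = k / 100               # sample x in [a, b]
    y = (a + b) - x           # symmetric partner, so x + y = a + b
    assert 2 * h((a + b) / 2) <= h(x) + h(y) <= h(a) + h(b)
```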
The next result is another integral representation of Simpson’s rule.
Lemma 4. If , then the following identity holds:where . Proof. From Lemma 2, using integration by parts, we obtain:
□
Now, since
and
is convex on
E, applying Lemma 3 we get:
that is,
and, by Lemma 4, the proof of Theorem 3 readily follows. □
Remark 3. It is instructive to compare this result with the original form of Simpson's rule (4). The assertion of Theorem 3 is fundamental for our next main result, i.e., a sharp approximation of Simpson's rule of the sixth order.
Theorem 4. Let and with . Then,where . Proof. For a given function , we consider the auxiliary function .
Since
, we conclude that
is a convex function in
E. Therefore, applying Theorem 3 and using the following identities:
and
we obtain:
Analogously, for
we find that
is convex and, applying Theorem 3 again, we get:
Multiplying the left-hand sides of these inequalities by r and s, respectively, and the right-hand sides by p and q, and then adding, we obtain the desired result. □
Remark 4. Choosing parameters p, q, r, and s in particular cases, we can deduce different forms of the extended Simpson’s rule, which is of importance in numerical integration.
For example,
Corollary 2. For any , we have: Proof. Putting , the desired result follows from Theorem 4. □
Corollary 3. For any denote . Then, Proof. We have
and the result follows from Theorem 4. □
Remark 5. The double inequality (3) represents an improvement of the assertion from Theorem 3.
Indeed, in this case , therefore is convex on E.
Consequently, the expression:
is non-negative and the inequality (3) can be written in the form:
3. Applications
The theorems proved above are a source of plenty of interesting inequalities from classical analysis. As an illustration, we shall provide several examples concerning the function
, which are improvements of the famous Cusa–Huygens inequality and Jordan’s inequality (cf. [
7]).
Since the function
is concave for
, the Hermite–Hadamard inequality applied to
, gives:
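If, as the wording suggests, the concave function here is $\cos t$ on $[0, x]$ (a reading we assume; then $\frac{1}{x}\int_0^x \cos t\,dt = \frac{\sin x}{x}$), the reversed HH inequality gives $\frac{1+\cos x}{2} \le \frac{\sin x}{x} \le \cos\frac{x}{2}$ for $x \in (0, \pi/2]$, which can be checked numerically:

```python
import math

# Assumed reading: reversed HH applied to the concave function cos on [0, x],
# using (1/x) * integral_0^x cos t dt = sin(x) / x.
for k in range(1, 201):
    x = k * (math.pi / 2) / 200           # sample points in (0, pi/2]
    endpoint_avg = (1 + math.cos(x)) / 2  # (cos 0 + cos x) / 2
    mean_value = math.sin(x) / x          # average integral of cos over [0, x]
    midpoint = math.cos(x / 2)            # cos at the midpoint of [0, x]
    assert endpoint_avg <= mean_value <= midpoint
```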
Let us now consider the interval . Since is neither convex nor concave in this case, we should apply Theorem 1.
Because
, for
we easily get:
and Theorem 1 gives:
Since, for
,
and
we obtain the following:
Proposition 1. For , we have: This assertion sharply improves even the inequality (
8), since:
for
.
A more precise approximation of the target function can be deduced from Theorem 3.
Proposition 2. For , we have: Proof. For we have that which is convex for . Hence, Theorem 3 gives the result. □
Furthermore, using Corollary 2, we obtain:
Proposition 3. For , we have: Proof. This result is simply an application of Corollary 2 to the function . □
Corollary 4. For , we have:and Proof. By applying the inequality , the proof easily follows by integrating assertions from Propositions 2 and 3 over the range . □
4. Conclusions
A huge number of articles have been dedicated to the famous Hermite–Hadamard integral inequality for convex functions, where convexity is treated in a broad sense (cf. [
3], for example).
In this paper, we proposed a variant of the HH inequality for the class of twice differentiable functions on a closed interval. Its novelty lies in the fact that convexity/concavity of the target function is not assumed in advance. Moreover, this result represents an improvement of the Hermite–Hadamard inequality in the case of convex/concave functions. As a consequence, we obtained a modified variant of Simpson's rule along with an intriguing open problem. Further improvements and extensions of Simpson's rule were given, along with several applications which are of importance in numerical integration.