Article

Strongly F-Convex Functions with Structural Characterizations and Applications in Entropies

1 Department of Mathematics, Faculty of Science, University of Jiroft, Jiroft 7867155311, Iran
2 Faculty of Civil Engineering, Architecture and Geodesy, University of Split, Matice Hrvatske 15, 21000 Split, Croatia
3 Department of Mathematics, Sirjan University of Technology, Sirjan 7813733385, Iran
* Author to whom correspondence should be addressed.
Axioms 2025, 14(12), 926; https://doi.org/10.3390/axioms14120926
Submission received: 6 November 2025 / Revised: 8 December 2025 / Accepted: 10 December 2025 / Published: 16 December 2025
(This article belongs to the Special Issue Advances in Functional Analysis and Banach Space)

Abstract

Strongly convex functions form a central subclass of convex functions and have gained considerable attention due to their structural advantages and broad applicability, particularly in optimization and information theory. In this paper, we investigate the class of strongly F-convex functions, which generalizes the classical notion of strong convexity by introducing an auxiliary convex control function F. We establish several fundamental structural characterizations of this class and provide a variety of nontrivial examples such as power, logarithmic, and exponential functions. In addition, we derive refined Jensen-type and Hermite–Hadamard-type inequalities adapted to the strongly F-convex concept, thereby extending and sharpening their classical forms. As applications, we obtain new analytical inequalities and improved error bounds for entropy-related quantities, including Shannon, Tsallis, and Rényi entropies, demonstrating that the concept of strong F-convexity naturally yields strengthened divergence and uncertainty estimates.

1. Introduction

Convex functions constitute one of the most fundamental and influential classes in real analysis. Their mathematical importance arises from a combination of structural simplicity and rich properties, including the monotonicity of derivatives, existence of subgradients, stability under a wide variety of operations, powerful duality principles, and a remarkable capacity to generate inequalities and stability results that have no analogue in a non-convex setting (see, for example, [1,2,3]).
Since the pioneering works of Jensen [4], Hermite [5], and Hadamard [6], followed by the systematic treatments of Hardy–Littlewood–Pólya [7] and Mitrinović–Pečarić–Fink [8], convex functions have become central tools across numerous areas of mathematics, including classical analysis [9], geometry [10], probability theory [11], operator theory [12], and fuzzy-valued calculus [13], among others. Beyond pure mathematics, convexity plays a fundamental role in optimization [14], economics [15], statistical inference [16], information theory [17], machine learning [18], control theory [19], mechanics [20], thermodynamics [21], and many other scientific disciplines.
We recall the classical definition. Let $I$ be a real interval. A function $f : I \to \mathbb{R}$ is called convex if
$f(tx + (1-t)y) \le t f(x) + (1-t) f(y)$   (1)
for all $x, y \in I$ and $t \in [0,1]$. If $-f$ is convex, then $f$ is concave.
Two classical inequalities highlight the central role of convexity more distinctly than any others. The first is Jensen's inequality. For any convex function $f$ on $I$, any vector $\mathbf{x} = (x_1, \ldots, x_n) \in I^n$, and any probability weights $\mathbf{p} = (p_1, \ldots, p_n)$,
$f\left(\sum_{i=1}^{n} p_i x_i\right) \le \sum_{i=1}^{n} p_i f(x_i).$   (2)
This inequality encapsulates the essence of convexity and underlies a vast array of results in probability, information theory, and functional inequalities. Its operator, integral, and probabilistic forms are indispensable in modern analysis (see [3,22]).
Complementing Jensen’s inequality is the Hermite–Hadamard inequality, which provides a sharp two-sided bound for the integral mean of a convex function. If $f$ is convex on $[a,b]$, then
$f\left(\frac{a+b}{2}\right) \le \frac{1}{b-a}\int_a^b f(x)\,dx \le \frac{f(a)+f(b)}{2}.$   (3)
Extensive work by Dragomir et al. [23] demonstrates that this inequality serves as a prototype for numerous refinements and generalizations across analysis, information theory, approximation theory, numerical integration, interpolation, quadrature error bounds, and engineering applications.
Over the last three decades, numerous refinements of Jensen’s inequality have appeared. A fundamental refinement due to Dragomir [24] (see also [25]) states that for any convex function $f$ on $I$, any $\mathbf{x} = (x_1,\ldots,x_n) \in I^n$, and any probability weights $\mathbf{p} = (p_1,\ldots,p_n)$,
$0 \le \min_{1\le i\le n}\{p_i\}\left[\sum_{i=1}^{n} f(x_i) - n f\left(\frac{1}{n}\sum_{i=1}^{n} x_i\right)\right] \le \sum_{i=1}^{n} p_i f(x_i) - f\left(\sum_{i=1}^{n} p_i x_i\right) \le \max_{1\le i\le n}\{p_i\}\left[\sum_{i=1}^{n} f(x_i) - n f\left(\frac{1}{n}\sum_{i=1}^{n} x_i\right)\right].$   (4)
Recently, research in convex analysis has increasingly focused on refined estimates associated with entropy measures. Entropies quantify the uncertainty, disorder, or information content of a probability distribution and play a fundamental role in information theory [17], with important applications in probability [26], partial differential equations [27], statistical mechanics [28], thermodynamic stability theory [29], and related fields. The most notable examples include the Shannon, Rényi, and Tsallis entropies.
Let $P = \{p_i\}_{i=1}^{n}$ be a positive probability distribution.
  • The Tsallis entropy [30] is defined by
    $T_\alpha(P) := -\sum_{i=1}^{n} p_i \log_\alpha p_i, \quad (\alpha \in \mathbb{R},\ \alpha \neq 0),$
    where the $\alpha$-logarithm is given by
    $\log_\alpha x = \begin{cases} \frac{x^\alpha - 1}{\alpha}, & x > 0,\ \alpha \neq 0, \\ \ln x, & x > 0,\ \alpha = 0. \end{cases}$
    Equivalently,
    $T_\alpha(P) = \frac{1}{\alpha}\left[1 - \sum_{i=1}^{n} p_i^{1+\alpha}\right].$
  • For $\alpha \to 0$, the Tsallis entropy reduces to the Shannon entropy [31], defined by
    $H(P) := -\sum_{i=1}^{n} p_i \ln p_i.$
  • The Rényi entropy [32] is defined by
    $R_\alpha(P) = \frac{1}{1-\alpha}\ln\left(\sum_{i=1}^{n} p_i^\alpha\right), \quad \alpha > 0,\ \alpha \neq 1$
    (a brief numerical illustration of these three entropies is given in the sketch after this list).
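As a quick numerical illustration (an assumed sketch added here for the reader, not part of the original development), the following Python snippet evaluates the three entropies above, exactly as defined, for an arbitrarily chosen distribution P and illustrative values of α.

```python
# Illustrative sketch (assumed helper code): Shannon, Tsallis, and Renyi
# entropies as defined above, evaluated on an arbitrary distribution.
import math

def shannon(P):
    # H(P) = -sum p_i ln p_i
    return -sum(p * math.log(p) for p in P)

def tsallis(P, alpha):
    # T_alpha(P) = (1/alpha) * (1 - sum p_i^(1+alpha)), alpha != 0
    return (1.0 - sum(p ** (1.0 + alpha) for p in P)) / alpha

def renyi(P, alpha):
    # R_alpha(P) = (1/(1-alpha)) * ln(sum p_i^alpha), alpha > 0, alpha != 1
    return math.log(sum(p ** alpha for p in P)) / (1.0 - alpha)

P = [0.5, 0.25, 0.125, 0.125]
print(shannon(P))        # ~1.2130
print(tsallis(P, 0.5))   # Tsallis entropy with alpha = 0.5
print(renyi(P, 2.0))     # Renyi (collision) entropy with alpha = 2
```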
These and many other entropy measures admit a unified formulation through Csiszár’s f-divergences [33], defined for convex functions f (see also [34]). This viewpoint reveals that numerous information-theoretic quantities arise naturally from the geometric structure of convexity. To obtain sharper estimates, the notion of strong f-divergences was introduced in [35] (see also [36]), replacing the generating convex function in Csiszár’s divergences by a strongly convex function.
We say that a function $f$ on a real interval $I$ is strongly convex if its curvature is reinforced relative to ordinary convexity by an additional non-negative term $\Delta_c(x,y;t) = c\,t(1-t)(y-x)^2$, depending on a modulus $c > 0$; that is, if
$f(tx + (1-t)y) \le t f(x) + (1-t) f(y) - \Delta_c(x,y;t)$   (5)
for all $x, y \in I$, $t \in [0,1]$. If $-f$ satisfies (5), then $f$ is called strongly concave.
Strongly convex functions have attracted considerable attention and have found applications in quantum calculus [37], trapezoidal-type inequalities [38], and differential equation problems [39]. More advanced applications appear in [40,41].
Condition (5) is equivalent to the convexity of the function $g(x) = f(x) - h(x)$, where $h(x) = c x^2$ (see [42]). Dragomir [43] introduced the idea of using an arbitrary convex function $F$ in place of the square function $h(x) = c x^2$. This idea also appears in [44,45].
For a function $f$ on an interval $I$, we say that it is strongly F-convex if there exists a convex function $F$ on the same interval such that
$f(tx + (1-t)y) \le t f(x) + (1-t) f(y) - \Delta_F(x,y;t)$   (6)
for all $x, y \in I$ and $t \in [0,1]$, where
$\Delta_F(x,y;t) = t F(x) + (1-t) F(y) - F(tx + (1-t)y) \ge 0.$   (7)
The non-negativity of $\Delta_F(x,y;t)$ follows from the convexity of the control function $F$.
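The defining inequality (6) can also be probed numerically. The sketch below (an assumed brute-force grid test, not part of the paper) checks (6) for a candidate pair on a grid of points and weights; the pair $f(x)=x^3$, $F(x)=x^2$ on $[1/2, 2]$ is an illustrative choice made here (it satisfies the second-derivative criterion discussed later).

```python
# Assumed numerical sketch: brute-force check of the strong F-convexity
# inequality f(tx+(1-t)y) <= t f(x) + (1-t) f(y) - Delta_F(x,y;t) on a grid.
import numpy as np

def check_strong_F_convexity(f, F, a, b, n_pts=40, tol=1e-12):
    xs = np.linspace(a, b, n_pts)
    ts = np.linspace(0.0, 1.0, n_pts)
    for x in xs:
        for y in xs:
            for t in ts:
                z = t * x + (1.0 - t) * y
                delta_F = t * F(x) + (1.0 - t) * F(y) - F(z)  # Delta_F(x, y; t) >= 0
                if f(z) > t * f(x) + (1.0 - t) * f(y) - delta_F + tol:
                    return False
    return True

# Illustration: f(x) = x^3 with control function F(x) = x^2 on [1/2, 2]
print(check_strong_F_convexity(lambda x: x**3, lambda x: x**2, 0.5, 2.0))  # True
```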
A key structural lemma is the following:
Lemma 1.
Let $f, F : I \subseteq \mathbb{R} \to \mathbb{R}$ with $F$ convex. Then:
(a) 
$f$ is strongly F-convex if and only if $f - F$ is convex.
(b) 
$f$ is strongly F-concave if and only if $f + F$ is concave.
Proof. 
We prove only the convexity case. The concavity case can be proven analogously.
If $f$ is strongly F-convex, then (6) holds, and it is equivalent to
$(f - F)(tx + (1-t)y) \le t (f - F)(x) + (1-t)(f - F)(y),$
which yields the convexity of the function $f - F$. □
One immediate consequence is that strong F-convexity implies ordinary convexity:
$f(tx + (1-t)y) \le t f(x) + (1-t) f(y) - \Delta_F(x,y;t) \le t f(x) + (1-t) f(y).$
Choosing $t = \frac{1}{2}$ in (6) yields the midpoint estimate
$0 \le \frac{F(x)+F(y)}{2} - F\left(\frac{x+y}{2}\right) \le \frac{f(x)+f(y)}{2} - f\left(\frac{x+y}{2}\right).$   (8)
For $F(x) = c x^2$ with $c > 0$, strong F-convexity reduces to standard strong convexity (5). The class of strongly F-convex functions extends standard strong convexity by replacing the quadratic modulus with an arbitrary convex function $F$, enabling extensions and refinements of classical convexity results. A closely related concept to strong F-convexity is that of delta-convex (DC) functions, i.e., functions representable as a difference of two convex functions [46] (see also [47]). One more related notion is that of g-convex dominated functions, where both $g + f$ and $g - f$ are convex [48].
The main purpose of this paper is to establish structural characterizations of strong F-convexity and to derive Jensen-type and Hermite–Hadamard-type inequalities adapted to a general control function F. We present new families of strongly F-convex functions of practical importance, including power, logarithmic, and exponential forms. These structural forms are applied to obtain new analytical inequalities and new estimates for the Shannon, Tsallis, and Rényi entropies. Since the concept of strong F-convexity unifies several known convexity concepts, the results obtained here provide a flexible tool that can be further applied by choosing appropriate control functions F to derive extensions and refinements of many classical inequalities for convex functions.
The structure of the paper is as follows: in Section 2, we present structural characterizations and nontrivial examples of strong F-convexity. Section 3 develops the Jensen-type inequalities for strongly F-convex functions. In Section 4, we establish the Hermite–Hadamard-type inequalities. Finally, in Section 5, we provide analytical and entropic applications of the main results. At the end, we provide an illustrative numerical example demonstrating that our result yields a sharper bound for Rényi entropy compared to the classical result valid for standard convex functions.

2. Characterizations of Strong F-Convexity

In this section, we describe several constructions and criteria that guarantee strong F-convexity. Throughout the discussion, we write $f \in \mathrm{SC}_F(I)$ to indicate that the function $f : I \to \mathbb{R}$ is strongly F-convex relative to the convex control function $F$ on an interval $I \subseteq \mathbb{R}$. This notation will be used repeatedly for clarity and compactness.
Example 1.
Let $a, b \in \mathbb{R}$ with $0 < a < \frac{1}{2} < b$.
1.
For $f(x) = x^m$ with $m \ge 1$, the function belongs to $\mathrm{SC}_F\left[a, \frac{1}{2}\right]$ when $F(x) = x^2$.
2.
For $f(x) = x^m$ with $2 \le m \le 5$, the function belongs to $\mathrm{SC}_F\left[\frac{1}{2}, b\right]$ when $F(x) = x^2$.
Proof. 
We verify the defining inequality of strong F-convexity.
Fix $x, y$ in the respective interval and define
$h(t) = (tx + (1-t)y)^2 - (tx + (1-t)y)^m, \quad t \in [0,1].$
Differentiating twice yields
$h''(t) = 2(x-y)^2 - m(m-1)(x-y)^2 (tx + (1-t)y)^{m-2}.$
Case (1): If $m \ge 1$ and $x, y \in \left[a, \frac{1}{2}\right]$, then $(tx + (1-t)y)^{m-2} \ge 2^{2-m}$. Hence
$h''(t) \le (x-y)^2\left[2 - m(m-1)2^{2-m}\right] \le 0.$
Since $h$ is concave and satisfies $h(0) = y^2 - y^m$ and $h(1) = x^2 - x^m$, the standard Jensen-type argument gives
$h(t) \ge t\left(x^2 - x^m\right) + (1-t)\left(y^2 - y^m\right) \quad \text{for all } t \in [0,1].$
This is exactly the strong F-convexity inequality.
Case (2): For $m \in [2, 5]$ and $x, y \in \left[\frac{1}{2}, b\right]$, the same computations show that
$h''(t) \le 0.$
The argument proceeds as above. □
Example 2.
Let $1 \le a < b$ and $f(x) = x^m$ with $m \ge 2$. Then $f \in \mathrm{SC}_F[a, b]$ for the control function $F(x) = -\ln x$.
Proof. 
For $x, y \in [a, b]$ define
$h(t) = t x^m + (1-t) y^m + t \ln x + (1-t) \ln y - (tx + (1-t)y)^m - \ln(tx + (1-t)y), \quad t \in [0,1].$
Here, $h(0) = h(1) = 0$. A direct computation gives
$h''(t) = \frac{(x-y)^2}{(tx + (1-t)y)^2}\left[1 - m(m-1)(tx + (1-t)y)^m\right] \le \frac{(x-y)^2}{(tx + (1-t)y)^2}\left[1 - m(m-1)\right] \le 0,$
because $tx + (1-t)y \ge a \ge 1$ and $m \ge 2$.
Thus, $h$ is concave with $h(0) = h(1) = 0$, so $h(t) \ge 0$ for every $t \in [0,1]$, proving the claim. □
Example 3.
Let $a, b \in \mathbb{R}$, $a < b$, and $\alpha, m \in \mathbb{R}$ such that $0 < \alpha < m - 1$. Let $f(x) = x^m$ and $g(x) = \frac{x^{\alpha+1} - x}{\alpha}$.
1.
If $\left(\frac{\alpha+1}{m(m-1)}\right)^{\frac{1}{m-\alpha-1}} \le a < b$, then $f \in \mathrm{SC}_F[a, b]$ with $F(x) = \frac{x^{\alpha+1} - x}{\alpha}$.
2.
If $0 < a < b \le \left(\frac{\alpha+1}{m(m-1)}\right)^{\frac{1}{m-\alpha-1}}$, then $g \in \mathrm{SC}_F[a, b]$ with $F(x) = x^m$.
Proof. 
To prove the strong F-convexity of $f$, we prove that $f - F$ is convex.
Define the function $h_{\alpha,m} : [a, b] \to \mathbb{R}$ by
$h_{\alpha,m}(x) = x^m - \frac{x^{\alpha+1} - x}{\alpha}.$
Therefore, we have
$h_{\alpha,m}''(x) = m(m-1)x^{m-2} - (\alpha+1)x^{\alpha-1},$
and thus
$h_{\alpha,m}''(x) \ge 0 \ \text{ for } \left(\frac{\alpha+1}{m(m-1)}\right)^{\frac{1}{m-\alpha-1}} \le x < \infty, \qquad h_{\alpha,m}''(x) \le 0 \ \text{ for } 0 < x \le \left(\frac{\alpha+1}{m(m-1)}\right)^{\frac{1}{m-\alpha-1}}.$
This completes the proof. □
Example 4.
Let $a, b \in \mathbb{R}$, $a < b$, and $m \in \mathbb{R}$ with $m > 1$. Let $f(x) = x^m$ and $g(x) = x \ln x$.
1.
If $\left(\frac{1}{m(m-1)}\right)^{\frac{1}{m-1}} \le a < b$, then $f \in \mathrm{SC}_F[a, b]$ with $F(x) = x \ln x$.
2.
If $0 < a < b \le \left(\frac{1}{m(m-1)}\right)^{\frac{1}{m-1}}$, then $g \in \mathrm{SC}_F[a, b]$ with $F(x) = x^m$.
Proof. 
The result follows from Example 3 as $\alpha \to 0^{+}$. □
The following lemma provides a useful characterization that can generally assist in the construction of strongly F-convex functions.
Lemma 2.
Let $f, F : I \to \mathbb{R}$ be twice-differentiable and assume that $F$ is convex. Then
$f \in \mathrm{SC}_F(I) \iff f''(x) - F''(x) \ge 0 \quad \text{for all } x \in I.$   (9)
Proof. 
The statement of the lemma follows from the fact that the twice-differentiable functions $F$ and $g = f - F$ are convex if and only if
$F''(x) \ge 0 \quad \text{and} \quad g''(x) = f''(x) - F''(x) \ge 0$
hold for all $x \in I$. □
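In practice, the criterion of Lemma 2 is easy to test symbolically. The following sketch (an assumed illustration using SymPy, not part of the paper) verifies $f''(x) - F''(x) \ge 0$ for the pair of Example 2 with the illustrative exponent $m = 3$, i.e., $f(x) = x^3$ and $F(x) = -\ln x$ on $[1, \infty)$.

```python
# Assumed symbolic sketch of the criterion in Lemma 2: f''(x) - F''(x) >= 0.
import sympy as sp

x = sp.symbols('x', positive=True)
m = 3                       # illustrative exponent, m >= 2 as in Example 2
f = x**m
F = -sp.log(x)              # convex control function from Example 2

criterion = sp.diff(f, x, 2) - sp.diff(F, x, 2)    # f''(x) - F''(x)
print(sp.simplify(criterion))                      # 6*x - 1/x**2
# The expression 6*x - 1/x**2 is increasing, so its infimum on [1, oo) is at x = 1
print(criterion.subs(x, 1) >= 0)                   # True
```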
Using the characterization from the previous lemma, we construct the next examples.
Example 5.
Let $I \subseteq (0, \infty)$ and $f, F : I \to \mathbb{R}$ be given by $f(x) = x^\alpha$, $\alpha > 0$, and $F(x) = x^2$. Define the function $g(x) = f(x) - F(x) = x^\alpha - x^2$. Then $f \in \mathrm{SC}_F$ if $g$ is convex, which is equivalent to the requirement $g''(x) = f''(x) - F''(x) = \alpha(\alpha-1)x^{\alpha-2} - 2 \ge 0$, i.e., $x^{\alpha-2} \ge \frac{2}{\alpha(\alpha-1)}$.
(a) 
If $\alpha = 2$, the convexity region is the whole of $I = (0, \infty)$.
(b) 
If $\alpha > 2$, then $x \mapsto x^{\alpha-2}$ is strictly increasing on $(0, \infty)$. Taking the $(\alpha-2)$-th root, we get $x \ge \left(\frac{2}{\alpha(\alpha-1)}\right)^{\frac{1}{\alpha-2}}$. Therefore, the convexity region is $I_1 = \left[\left(\frac{2}{\alpha(\alpha-1)}\right)^{\frac{1}{\alpha-2}}, \infty\right)$.
(c) 
If $1 < \alpha < 2$, then $x \mapsto x^{\alpha-2}$ is strictly decreasing on $(0, \infty)$. Taking the $(\alpha-2)$-th root, we get $x \le \left(\frac{2}{\alpha(\alpha-1)}\right)^{\frac{1}{\alpha-2}}$. Therefore, the convexity region is $I_2 = \left(0, \left(\frac{2}{\alpha(\alpha-1)}\right)^{\frac{1}{\alpha-2}}\right]$.
(d) 
If $0 < \alpha < 1$, then $\alpha - 1 < 0$; hence, $\alpha(\alpha-1) < 0$, i.e., $\frac{2}{\alpha(\alpha-1)} < 0$. Then $x^{\alpha-2} \ge \frac{2}{\alpha(\alpha-1)}$ holds for every $x \in I = (0, \infty)$.
Example 6.
Let $0 < a < b$ and $f, F : [a, b] \to \mathbb{R}$ with $F(x) = \alpha e^x$, $\alpha \in \mathbb{R}$. Since $F''(x) = \alpha e^x$, using (9), we have that $f \in \mathrm{SC}_F$ if and only if
$0 \le \alpha e^x \le f''(x), \quad \text{i.e.,} \quad 0 \le \alpha \le f''(x) e^{-x}, \quad \text{for all } x \in [a, b].$
Therefore, we can set
$\alpha = \inf_{x \in [a,b]} f''(x) e^{-x}.$
If $f(x) = -\ln x$, then $f''(x) = \frac{1}{x^2}$ and we take
$\alpha = \inf_{x \in [a,b]} \frac{1}{x^2 e^x} = \frac{1}{b^2 e^b}.$
Hence, $f(x) = -\ln x$ is strongly F-convex with control function $F(x) = \frac{e^x}{b^2 e^b}$ on $[a, b]$.
Example 7.
Let $0 < a < b$ and $f, F : [a, b] \to \mathbb{R}$ with $F(x) = \alpha \ln x$, $\alpha \in \mathbb{R}$. Since $F''(x) = -\frac{\alpha}{x^2}$, using (9), we have that $f \in \mathrm{SC}_F$ if and only if
$0 \le -\frac{\alpha}{x^2} \le f''(x), \quad \text{i.e.,} \quad -f''(x)x^2 \le \alpha \le 0, \quad \text{for all } x \in [a, b].$
Therefore, we can set
$\alpha = \sup_{x \in [a,b]} \left(-f''(x)x^2\right).$
If $f(x) = x \ln x$, then $f''(x) = \frac{1}{x}$ and we take
$\alpha = \sup_{x \in [a,b]} (-x) = -a.$
Hence, $f(x) = x \ln x$ is strongly F-convex with control function $F(x) = -a \ln x$ on $[a, b]$.
The following characterizations are particularly useful in constructing new nontrivial examples.
Proposition 1.
If $\alpha \ge 0$ and $f \in \mathrm{SC}_F(I)$, then $\alpha f \in \mathrm{SC}_{\alpha F}(I)$.
Proof. 
Since $f \in \mathrm{SC}_F(I)$,
$0 \le t F(x) + (1-t) F(y) - F(tx + (1-t)y) \le t f(x) + (1-t) f(y) - f(tx + (1-t)y)$
holds for all $x, y \in I$ and $t \in [0,1]$. Multiplying the above inequalities by $\alpha \ge 0$, we get
$0 \le t \alpha F(x) + (1-t) \alpha F(y) - \alpha F(tx + (1-t)y) \le t \alpha f(x) + (1-t) \alpha f(y) - \alpha f(tx + (1-t)y).$
The proof is now complete. □
Proposition 2.
Let $f_1 \in \mathrm{SC}_{F_1}(I)$ and $f_2 \in \mathrm{SC}_{F_2}(I)$. Then, for any $\alpha, \beta \ge 0$, $\alpha f_1 + \beta f_2 \in \mathrm{SC}_{\alpha F_1 + \beta F_2}(I)$.
Proof. 
Since $f_1 \in \mathrm{SC}_{F_1}(I)$ and $f_2 \in \mathrm{SC}_{F_2}(I)$, the functions
$g_1 = f_1 - F_1 \quad \text{and} \quad g_2 = f_2 - F_2$
are convex by Lemma 1.
We define $f = \alpha f_1 + \beta f_2$ and $F = \alpha F_1 + \beta F_2$. Then
$g = f - F = \alpha(f_1 - F_1) + \beta(f_2 - F_2) = \alpha g_1 + \beta g_2$
is convex as a non-negative linear combination of two convex functions. Hence, $f \in \mathrm{SC}_F(I)$ by Lemma 1. □
Proposition 3.
If $f_1, f_2 \in \mathrm{SC}_F(I)$, then the function $f = \max\{f_1, f_2\}$ also belongs to $\mathrm{SC}_F(I)$.
Proof. 
Since $f_1, f_2 \in \mathrm{SC}_F(I)$,
$\Delta_F(x,y;t) + f_1(tx + (1-t)y) \le t f_1(x) + (1-t) f_1(y) \le t f(x) + (1-t) f(y)$   (10)
and
$\Delta_F(x,y;t) + f_2(tx + (1-t)y) \le t f_2(x) + (1-t) f_2(y) \le t f(x) + (1-t) f(y),$   (11)
where the last inequalities in (10) and (11) are consequences of the definition $f(x) = \max\{f_1(x), f_2(x)\}$, $x \in I$. Furthermore, we have
$\Delta_F(x,y;t) + f(tx + (1-t)y) = \Delta_F(x,y;t) + \max\left\{f_1(tx + (1-t)y),\, f_2(tx + (1-t)y)\right\} \le t f(x) + (1-t) f(y).$
The proof is now complete. □
Example 8.
Let $0 < a < b$. Consider the function $f_1(x) = -\ln x$, which is strongly $F_1$-convex with $F_1(x) = \frac{1}{b^2 e^b} e^x$ on $[a, b]$, and $f_2(x) = x \ln x$, which is strongly $F_2$-convex with $F_2(x) = -a \ln x$ on $[a, b]$ (see Examples 6 and 7). Then, by Proposition 2, for $\alpha, \beta \ge 0$, the function $h(x) = \beta x \ln x - \alpha \ln x$ is strongly $\left(\frac{\alpha}{b^2 e^b} e^x - \beta a \ln x\right)$-convex on $[a, b]$. For the specific selection $a = 1$, $b = 2$, $\alpha = b^2 = 4$, $\beta = 1$, the function $h(x) = x \ln x - 4 \ln x$ is strongly $\left(e^{x-2} - \ln x\right)$-convex on $[1, 2]$.
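As a sanity check (an assumed illustration, not from the paper), one can verify numerically that $h(x) = x\ln x - 4\ln x$ dominates the control function $e^{x-2} - \ln x$ in the sense of Lemma 2 on $[1,2]$, i.e., that the second derivative of $h$ exceeds that of the control throughout the interval.

```python
# Assumed numerical check for Example 8: on [1, 2], the second derivative of
# h(x) = x*ln(x) - 4*ln(x) should dominate that of the control e^(x-2) - ln(x).
import numpy as np

xs = np.linspace(1.0, 2.0, 1001)
h_dd = 1.0 / xs + 4.0 / xs**2                 # (x ln x - 4 ln x)'' = 1/x + 4/x^2
control_dd = np.exp(xs - 2.0) + 1.0 / xs**2   # (e^(x-2) - ln x)'' = e^(x-2) + 1/x^2
print(np.all(h_dd >= control_dd))             # True: Lemma 2 criterion holds on [1, 2]
```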
Example 9.
Using Lemma 2, we can prove that the functions $f_1(x) = e^x + \frac{1}{2}x^2$ and $f_2(x) = x^2 + \frac{1}{x}$ are strongly F-convex with the same control function $F(x) = x \ln x$ on $\left[\frac{1}{2}, \infty\right)$. Then, by Proposition 3, the function $f(x) = \max\left\{e^x + \frac{1}{2}x^2,\ x^2 + \frac{1}{x}\right\}$, $x \in \left[\frac{1}{2}, \infty\right)$, is strongly F-convex on $\left[\frac{1}{2}, \infty\right)$.
The next characterization will play a key role in establishing several results in the subsequent sections.
Lemma 3.
Let $f, F : I \to \mathbb{R}$ be differentiable and assume $F$ is convex. Let $\mathrm{int}\,I$ denote the interior of $I$. Then $f \in \mathrm{SC}_F(I)$ if and only if
$0 \le F(y) - F(z) - F'(z)(y - z) \le f(y) - f(z) - f'(z)(y - z)$   (12)
for all $y, z \in \mathrm{int}\,I$.
Proof. 
The statement of this lemma follows from the fact that the differentiable function $g = f - F$ is convex if and only if
$g(y) - g(z) - g'(z)(y - z) \ge 0$
holds for all $y, z \in \mathrm{int}\,I$ (see [3] (p. 5)). The non-negativity in the first inequality is a consequence of the convexity of $F$. □
Remark 1.
If $f$ and $F$ are not differentiable, any choice $c(z) \in \left[g_{-}'(z), g_{+}'(z)\right]$, $g = f - F$, yields the equivalent inequality
$g(y) - g(z) - c(z)(y - z) \ge 0, \quad y, z \in \mathrm{int}\,I.$   (13)
Taking $c(z) = g_{+}'(z) = f_{+}'(z) - F_{+}'(z)$ gives an equivalent two-sided formulation involving the right derivatives of $f$ and $F$:
$0 \le F(y) - F(z) - F_{+}'(z)(y - z) \le f(y) - f(z) - f_{+}'(z)(y - z)$   (14)
for all $y, z \in \mathrm{int}\,I$.
At the end of this section, we present two corollaries that are straightforward consequences of the definition of strong F-convexity.
Corollary 1.
If $f \in \mathrm{SC}_F(I)$, then for all $u, v, w \in I$ such that $u < v < w$, we have
1.
$0 \le \frac{w-v}{w-u}F(u) + \frac{v-u}{w-u}F(w) - F(v) \le \frac{w-v}{w-u}f(u) + \frac{v-u}{w-u}f(w) - f(v);$
2.
$0 \le \frac{F(u)}{(w-u)(v-u)} + \frac{F(w)}{(w-v)(w-u)} - \frac{F(v)}{(w-v)(v-u)} \le \frac{f(u)}{(w-u)(v-u)} + \frac{f(w)}{(w-v)(w-u)} - \frac{f(v)}{(w-v)(v-u)}.$
Proof. 
1. Substituting $tx + (1-t)y = v$, $x = u$, $y = w$, $1-t = \frac{v-u}{w-u}$, and $t = \frac{w-v}{w-u}$ in (6), for every $u, v, w \in I$ such that $u < v < w$, we get (1).
2. Inequality (2) follows directly from (1) after dividing by $(w-v)(v-u) > 0$. □
Corollary 2.
If $f \in \mathrm{SC}_F(I)$, then for all $u, v, w \in I$ such that $u < v < w$, we have
1.
$0 \le \frac{F(w)-F(u)}{w-u} - \frac{F(v)-F(u)}{v-u} \le \frac{f(w)-f(u)}{w-u} - \frac{f(v)-f(u)}{v-u};$
2.
$0 \le \frac{F(w)-F(v)}{w-v} - \frac{F(w)-F(u)}{w-u} \le \frac{f(w)-f(v)}{w-v} - \frac{f(w)-f(u)}{w-u};$
3.
$0 \le \frac{F(w)-F(v)}{w-v} - \frac{F(v)-F(u)}{v-u} \le \frac{f(w)-f(v)}{w-v} - \frac{f(v)-f(u)}{v-u}.$
Proof. 
Let $u, v, w \in I$ with $u < v < w$.
1.
Since $v = \frac{w-v}{w-u}u + \frac{v-u}{w-u}w$, then by Corollary 1(1), we get
$\frac{F(w)-F(u)}{w-u} - \frac{F(v)-F(u)}{v-u} = \frac{1}{v-u}\left[\frac{w-v}{w-u}F(u) + \frac{v-u}{w-u}F(w) - F\left(\frac{w-v}{w-u}u + \frac{v-u}{w-u}w\right)\right] \le \frac{1}{v-u}\left[\frac{w-v}{w-u}f(u) + \frac{v-u}{w-u}f(w) - f\left(\frac{w-v}{w-u}u + \frac{v-u}{w-u}w\right)\right] = \frac{f(w)-f(u)}{w-u} - \frac{f(v)-f(u)}{v-u}.$   (15)
2.
In an analogous way, since $v = \frac{w-v}{w-u}u + \frac{v-u}{w-u}w$, applying Corollary 1(1), we have
$\frac{F(w)-F(v)}{w-v} - \frac{F(w)-F(u)}{w-u} = \frac{1}{w-v}\left[\frac{w-v}{w-u}F(u) + \frac{v-u}{w-u}F(w) - F\left(\frac{w-v}{w-u}u + \frac{v-u}{w-u}w\right)\right] \le \frac{1}{w-v}\left[\frac{w-v}{w-u}f(u) + \frac{v-u}{w-u}f(w) - f\left(\frac{w-v}{w-u}u + \frac{v-u}{w-u}w\right)\right] = \frac{f(w)-f(v)}{w-v} - \frac{f(w)-f(u)}{w-u}.$   (16)
3.
Adding inequalities (15) and (16) yields (3). □

3. Jensen-Type Inequalities Based on Strong F-Convexity

In this section, we develop several Jensen-type inequalities that hold under the assumption of strong F-convexity. The results extend the classical Jensen inequality for convex functions by incorporating the additional control provided by the function F .
Theorem 1.
Let $\mathbf{x} = (x_1, \ldots, x_n) \in I^n$ and $\mathbf{p} = (p_1, \ldots, p_n)$ be non-negative weights with $P = \sum_{i=1}^{n} p_i = 1$. Denote $\bar{x}_P = \sum_{i=1}^{n} p_i x_i$. If $f \in \mathrm{SC}_F(I)$, then
$0 \le \sum_{i=1}^{n} p_i f(x_i) - f(\bar{x}_P) - \sum_{i=1}^{n} p_i F(x_i) + F(\bar{x}_P).$   (17)
Proof. 
Using characterization (14) and substituting $y = x_i$ and $z = \bar{x}_P$, we obtain
$0 \le F(x_i) - F(\bar{x}_P) - F_{+}'(\bar{x}_P)(x_i - \bar{x}_P) \le f(x_i) - f(\bar{x}_P) - f_{+}'(\bar{x}_P)(x_i - \bar{x}_P).$
Multiplying by $p_i$ and summing over $i$ yields
$0 \le \sum_{i=1}^{n} p_i F(x_i) - F(\bar{x}_P) - F_{+}'(\bar{x}_P)\left(\sum_{i=1}^{n} p_i x_i - \bar{x}_P\right) \le \sum_{i=1}^{n} p_i f(x_i) - f(\bar{x}_P) - f_{+}'(\bar{x}_P)\left(\sum_{i=1}^{n} p_i x_i - \bar{x}_P\right).$
Since $\sum_{i=1}^{n} p_i x_i = \bar{x}_P$, the derivative terms vanish on both sides, giving exactly (17). □
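A small numerical experiment (an assumed illustration, not part of the paper) shows the content of (17): the weighted Jensen gap of $f$ dominates the Jensen gap of the control $F$, which is itself non-negative. The pair $f(x)=x^3$, $F(x)=x^2$ on $[1/2, 2]$, the sample size, and the random weights are all illustrative choices.

```python
# Assumed numerical illustration of Theorem 1 / inequality (17):
# sum p_i f(x_i) - f(xbar_P) >= sum p_i F(x_i) - F(xbar_P) >= 0 when f - F is convex.
import numpy as np

rng = np.random.default_rng(0)
f = lambda u: u**3          # strongly F-convex on [1/2, 2] with F(u) = u^2
F = lambda u: u**2

x = rng.uniform(0.5, 2.0, size=8)
p = rng.random(8)
p /= p.sum()                # probability weights
xbar = p @ x

gap_f = p @ f(x) - f(xbar)  # Jensen gap of f
gap_F = p @ F(x) - F(xbar)  # Jensen gap of the control F
print(gap_F >= 0, gap_f >= gap_F)   # True True
```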
Remark 2.
Inequality (17) was proven in [45] by another technique.
The next theorem gives an extension of (17) for strongly F-convex functions.
Theorem 2.
Let $\mathbf{x} = (x_1, \ldots, x_n) \in I^n$ and $\mathbf{p} = (p_1, \ldots, p_n)$ be non-negative weights with $P = \sum_{i=1}^{n} p_i = 1$. Denote $\bar{x}_P = \sum_{i=1}^{n} p_i x_i$ and $\bar{x} = \frac{1}{n}\sum_{i=1}^{n} x_i$. If $f \in \mathrm{SC}_F(I)$, then
$0 \le \min_{1\le i\le n}\{p_i\}\left[\sum_{i=1}^{n} f(x_i) - n f(\bar{x}) - \sum_{i=1}^{n} F(x_i) + n F(\bar{x})\right] \le \sum_{i=1}^{n} p_i f(x_i) - f(\bar{x}_P) - \sum_{i=1}^{n} p_i F(x_i) + F(\bar{x}_P) \le \max_{1\le i\le n}\{p_i\}\left[\sum_{i=1}^{n} f(x_i) - n f(\bar{x}) - \sum_{i=1}^{n} F(x_i) + n F(\bar{x})\right].$   (19)
Proof. 
Since $f \in \mathrm{SC}_F(I)$, the difference $g = f - F$ is a convex function. Applying the corresponding refinement (4) for convex functions to $g$ yields exactly the chain of inequalities in (19). □
Theorem 3.
Let $f \in \mathrm{SC}_F(I)$.
1.
For $t \in [0,1]$ and $x_i \in I$, $i = 1, \ldots, n$,
$\frac{1}{n}\sum_{i=1}^{n} F\big((1-t)x_i + t x_{n+1-i}\big) - F\left(\frac{1}{n}\sum_{i=1}^{n} x_i\right) \le \frac{1}{n}\sum_{i=1}^{n} f\big((1-t)x_i + t x_{n+1-i}\big) - f\left(\frac{1}{n}\sum_{i=1}^{n} x_i\right).$   (20)
2.
For $x_i \in I$, $i = 1, \ldots, n$,
$\frac{1}{n}\sum_{i=1}^{n} \int_0^1 F\big((1-t)x_i + t x_{n+1-i}\big)\,dt - F\left(\frac{1}{n}\sum_{i=1}^{n} x_i\right) \le \frac{1}{n}\sum_{i=1}^{n} \int_0^1 f\big((1-t)x_i + t x_{n+1-i}\big)\,dt - f\left(\frac{1}{n}\sum_{i=1}^{n} x_i\right).$   (21)
Proof. 
1. Inserting $p_i = \frac{1}{n}$ and replacing $x_i$ by the combination $(1-t)x_i + t x_{n+1-i}$, $t \in [0,1]$, in the first inequality in Theorem 2, i.e., in
$0 \le \min_{1\le i\le n}\{p_i\}\left[\sum_{i=1}^{n} f(x_i) - n f(\bar{x}) - \sum_{i=1}^{n} F(x_i) + n F(\bar{x})\right],$
yields
$\frac{1}{n}\sum_{i=1}^{n} f\big((1-t)x_i + t x_{n+1-i}\big) - f\left(\frac{1}{n}\sum_{i=1}^{n} x_i\right) = \frac{1}{n}\sum_{i=1}^{n} f\big((1-t)x_i + t x_{n+1-i}\big) - f\left(\frac{1}{n}\sum_{i=1}^{n}\big[(1-t)x_i + t x_{n+1-i}\big]\right) \ge \frac{1}{n}\sum_{i=1}^{n} F\big((1-t)x_i + t x_{n+1-i}\big) - F\left(\frac{1}{n}\sum_{i=1}^{n}\big[(1-t)x_i + t x_{n+1-i}\big]\right) = \frac{1}{n}\sum_{i=1}^{n} F\big((1-t)x_i + t x_{n+1-i}\big) - F\left(\frac{1}{n}\sum_{i=1}^{n} x_i\right),$
since $\sum_{i=1}^{n}\big[(1-t)x_i + t x_{n+1-i}\big] = \sum_{i=1}^{n} x_i$, which implies inequality (20).
2. Integrating both sides of inequality (20) over $[0,1]$ with respect to $t$ yields (21). □
Theorem 4.
Let $f \in \mathrm{SC}_F(I)$.
1.
For $t \in [0,1]$ and $x_i \in I$, $i = 1, \ldots, n$,
$0 \le \sum_{i=1}^{n}\Big[F(x_i) - F\big((1-t)x_i + t x_{n+1-i}\big)\Big] \le \sum_{i=1}^{n}\Big[f(x_i) - f\big((1-t)x_i + t x_{n+1-i}\big)\Big].$   (22)
2.
For $x_i \in I$, $i = 1, \ldots, n$,
$0 \le \sum_{i=1}^{n} F(x_i) - \sum_{i=1}^{n}\int_0^1 F\big((1-t)x_i + t x_{n+1-i}\big)\,dt \le \sum_{i=1}^{n} f(x_i) - \sum_{i=1}^{n}\int_0^1 f\big((1-t)x_i + t x_{n+1-i}\big)\,dt.$   (23)
Proof. 
1. Strong F-convexity gives
$t f(x_{n+1-i}) + (1-t) f(x_i) - f\big((1-t)x_i + t x_{n+1-i}\big) \ge t F(x_{n+1-i}) + (1-t) F(x_i) - F\big((1-t)x_i + t x_{n+1-i}\big) \ge 0.$
Summing over $i$ and using $\sum_{i=1}^{n} f(x_{n+1-i}) = \sum_{i=1}^{n} f(x_i)$ and $\sum_{i=1}^{n} F(x_{n+1-i}) = \sum_{i=1}^{n} F(x_i)$ gives (22).
2. Integrating both sides of inequality (22) with respect to $t$ from 0 to 1 yields (23). □
Theorem 5.
Let $f \in \mathrm{SC}_F(I)$. For $t, x, y \in [0,1]$ and $x_i \in I$, $i = 1, \ldots, n$,
$t\sum_{i=1}^{n} f\big(x\,x_{n+1-i} + (1-x)x_i\big) + (1-t)\sum_{i=1}^{n} f\big(y\,x_{n+1-i} + (1-y)x_i\big) - \sum_{i=1}^{n} f\Big(\big(1 - tx - (1-t)y\big)x_i + \big(tx + (1-t)y\big)x_{n+1-i}\Big) \ge t\sum_{i=1}^{n} F\big(x\,x_{n+1-i} + (1-x)x_i\big) + (1-t)\sum_{i=1}^{n} F\big(y\,x_{n+1-i} + (1-y)x_i\big) - \sum_{i=1}^{n} F\Big(\big(1 - tx - (1-t)y\big)x_i + \big(tx + (1-t)y\big)x_{n+1-i}\Big) \ge 0.$   (24)
Proof. 
Applying strong F-convexity to the pair of points $x\,x_{n+1-i} + (1-x)x_i$ and $y\,x_{n+1-i} + (1-y)x_i$ with weight $t$, we get
$0 \le t F\big(x\,x_{n+1-i} + (1-x)x_i\big) + (1-t)F\big(y\,x_{n+1-i} + (1-y)x_i\big) - F\Big(t\big(x\,x_{n+1-i} + (1-x)x_i\big) + (1-t)\big(y\,x_{n+1-i} + (1-y)x_i\big)\Big) \le t f\big(x\,x_{n+1-i} + (1-x)x_i\big) + (1-t)f\big(y\,x_{n+1-i} + (1-y)x_i\big) - f\Big(t\big(x\,x_{n+1-i} + (1-x)x_i\big) + (1-t)\big(y\,x_{n+1-i} + (1-y)x_i\big)\Big).$
The convex combination
$t\big(x\,x_{n+1-i} + (1-x)x_i\big) + (1-t)\big(y\,x_{n+1-i} + (1-y)x_i\big)$
simplifies to
$\big(1 - tx - (1-t)y\big)x_i + \big(tx + (1-t)y\big)x_{n+1-i},$
and summing over $i$ gives (24). □
Corollary 3.
Let $f \in \mathrm{SC}_F(I)$. For $x, y \in [0,1]$ and $x_i \in I$, $i = 1, \ldots, n$,
$\sum_{i=1}^{n} f\big(x\,x_{n+1-i} + (1-x)x_i\big) + \sum_{i=1}^{n} f\big(y\,x_{n+1-i} + (1-y)x_i\big) - 2\sum_{i=1}^{n}\int_0^1 f\Big(\big(1 - tx - (1-t)y\big)x_i + \big(tx + (1-t)y\big)x_{n+1-i}\Big)\,dt \ge \sum_{i=1}^{n} F\big(x\,x_{n+1-i} + (1-x)x_i\big) + \sum_{i=1}^{n} F\big(y\,x_{n+1-i} + (1-y)x_i\big) - 2\sum_{i=1}^{n}\int_0^1 F\Big(\big(1 - tx - (1-t)y\big)x_i + \big(tx + (1-t)y\big)x_{n+1-i}\Big)\,dt \ge 0.$
Proof. 
Integrating inequality (24) with respect to $t \in [0,1]$ and multiplying the resulting chain by 2 completes the argument and gives the claimed result. □

4. Hermite–Hadamard-Type Inequalities for Strongly F-Convex Functions

The classical Hermite–Hadamard inequality (3) for a convex function $f$ on $[a,b]$ gives the well-known triple of estimates
$0 \le \frac{f(a)+f(b)}{2} - f\left(\frac{a+b}{2}\right),$
$0 \le \frac{1}{b-a}\int_a^b f(x)\,dx - f\left(\frac{a+b}{2}\right),$
$0 \le \frac{f(a)+f(b)}{2} - \frac{1}{b-a}\int_a^b f(x)\,dx.$
A large body of the literature is devoted to sharpening or extending these inequalities. In this section, we obtain versions of these estimates adapted to the setting of strongly F-convex functions.
Theorem 6.
Let $f \in \mathrm{SC}_F[a,b]$ and let $a \le u < v < w \le b$. Then
1.
$0 \le \frac{F(u)-F(v)}{2} + F\left(\frac{v+w}{2}\right) - F\left(\frac{u+w}{2}\right) \le \frac{f(u)-f(v)}{2} + f\left(\frac{v+w}{2}\right) - f\left(\frac{u+w}{2}\right);$
2.
$0 \le \frac{F(v)-F(w)}{2} + F\left(\frac{u+v}{2}\right) - F\left(\frac{u+w}{2}\right) \le \frac{f(v)-f(w)}{2} + f\left(\frac{u+v}{2}\right) - f\left(\frac{u+w}{2}\right).$
Proof. 
1. Using the fact that there exists $t \in [0,1]$ such that $v = t u + (1-t)\frac{v+w}{2}$, and noting that then $\frac{u+w}{2} = \frac{1-t}{2}u + \frac{1+t}{2}\cdot\frac{v+w}{2}$, by the strong F-convexity of $f$ we have
$f\left(\frac{u+w}{2}\right) = f\left(\frac{1-t}{2}u + \frac{1+t}{2}\cdot\frac{v+w}{2}\right) \le \frac{1-t}{2}f(u) - \frac{1-t}{2}F(u) + \frac{1+t}{2}f\left(\frac{v+w}{2}\right) - \frac{1+t}{2}F\left(\frac{v+w}{2}\right) + F\left(\frac{u+w}{2}\right) = \frac{1}{2}\left[f(u) - F(u) - t f(u) + t F(u) - (1-t)f\left(\frac{v+w}{2}\right) + (1-t)F\left(\frac{v+w}{2}\right)\right] + f\left(\frac{v+w}{2}\right) - F\left(\frac{v+w}{2}\right) + F\left(\frac{u+w}{2}\right) \le \frac{1}{2}\left[f(u) - F(u) - f\left(tu + (1-t)\frac{v+w}{2}\right) + F\left(tu + (1-t)\frac{v+w}{2}\right)\right] + f\left(\frac{v+w}{2}\right) - F\left(\frac{v+w}{2}\right) + F\left(\frac{u+w}{2}\right) = \frac{f(u)-f(v)}{2} - \frac{F(u)-F(v)}{2} + f\left(\frac{v+w}{2}\right) - F\left(\frac{v+w}{2}\right) + F\left(\frac{u+w}{2}\right),$
where the second inequality uses the convexity of $f - F$. Rearranging gives the right-hand inequality in (1); the left-hand inequality in (1) follows from the convexity of $F$ by the same argument applied to $F$ alone. This completes the proof of (1).
2. In a similar fashion, we can prove (2) by writing $v = t\,\frac{u+v}{2} + (1-t)w$ for a suitable $t \in [0,1]$. □
Theorem 7.
Let $I \subseteq \mathbb{R}$ be an open interval and let $a, b \in I$ with $a < b$. If $f \in \mathrm{SC}_F(I)$, then
$F\left(\frac{a+b}{2}\right) \le \frac{1}{b-a}\int_a^b F(t)\,dt \le \frac{1}{b-a}\int_a^b f(t)\,dt - f\left(\frac{a+b}{2}\right) + F\left(\frac{a+b}{2}\right).$   (25)
Proof. 
From the characterization (14), for $y = t$ and $z \in (a,b)$,
$0 \le F(t) - F(z) - F_{+}'(z)(t - z) \le f(t) - f(z) - f_{+}'(z)(t - z).$
Integrating the entire chain over $[a,b]$ with respect to $t$ and dividing by $b - a > 0$ gives
$0 \le \frac{1}{b-a}\int_a^b F(t)\,dt - F(z) - F_{+}'(z)\left(\frac{1}{b-a}\int_a^b t\,dt - z\right) \le \frac{1}{b-a}\int_a^b f(t)\,dt - f(z) - f_{+}'(z)\left(\frac{1}{b-a}\int_a^b t\,dt - z\right).$
Choosing $z = \frac{1}{b-a}\int_a^b t\,dt = \frac{a+b}{2}$ eliminates the derivative terms, giving
$0 \le \frac{1}{b-a}\int_a^b F(t)\,dt - F\left(\frac{a+b}{2}\right) \le \frac{1}{b-a}\int_a^b f(t)\,dt - f\left(\frac{a+b}{2}\right).$   (26)
Adding $F\left(\frac{a+b}{2}\right)$ throughout (26), we get (25). □
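The two-sided estimate (25) is easy to test numerically. The sketch below (an assumed illustration, not from the paper) uses the illustrative pair $f(x)=x^3$, $F(x)=x^2$ on $[a,b]=[1/2,2]$ and approximates the integral means on a fine uniform grid.

```python
# Assumed numerical illustration of Theorem 7 / inequality (25) with
# f(x) = x^3, F(x) = x^2 on [a, b] = [1/2, 2].
import numpy as np

a, b = 0.5, 2.0
f = lambda u: u**3
F = lambda u: u**2

xs = np.linspace(a, b, 200001)
avg_f = f(xs).mean()        # approximates (1/(b-a)) * integral of f over [a, b]
avg_F = F(xs).mean()        # approximates (1/(b-a)) * integral of F over [a, b]
mid = (a + b) / 2.0

lhs = F(mid)
middle = avg_F
rhs = avg_f - f(mid) + F(mid)
print(lhs <= middle <= rhs)  # True
```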
Theorem 8.
Let $I \subseteq \mathbb{R}$ be an open interval and let $a, b \in I$ with $a < b$. If $f \in \mathrm{SC}_F(I)$, then for every $x \in (a,b)$,
1.
$0 \le \frac{F(x)+F(a)}{2} - \frac{1}{x-a}\int_a^x F(v)\,dv \le \frac{f(x)+f(a)}{2} - \frac{1}{x-a}\int_a^x f(v)\,dv;$
2.
$0 \le \frac{1}{b-x}\int_x^b F(v)\,dv - F(x) - \frac{b-x}{2}F_{+}'(x) \le \frac{1}{b-x}\int_x^b f(v)\,dv - f(x) - \frac{b-x}{2}f_{+}'(x).$
Proof. 
Let $x \in (a,b)$.
  • For $v \in (a,b)$, inequality (14) gives
    $0 \le F(x) - F(v) - (x - v)F_{+}'(v) \le f(x) - f(v) - (x - v)f_{+}'(v).$
    Integrating over $v \in [a,x]$ yields (1).
  • Similarly,
    $0 \le F(v) - F(x) - (v - x)F_{+}'(x) \le f(v) - f(x) - (v - x)f_{+}'(x),$
    and integrating over $v \in [x,b]$ gives (2). □
Remark 3.
A useful refinement follows from Theorem 2. Setting $n = 2$, weights $p_1 = t$, $p_2 = 1 - t$, and substituting $x_1 = x$, $x_2 = y$ yields
$f(tx + (1-t)y) \le t f(x) + (1-t) f(y) - \big[t F(x) + (1-t) F(y) - F(tx + (1-t)y)\big] - \min\{t, 1-t\}\left[f(x) + f(y) - 2 f\left(\frac{x+y}{2}\right) - F(x) - F(y) + 2 F\left(\frac{x+y}{2}\right)\right] \le t f(x) + (1-t) f(y) - \big[t F(x) + (1-t) F(y) - F(tx + (1-t)y)\big] \le t f(x) + (1-t) f(y),$   (27)
which provides a sharper version of the defining inequality of strong F-convexity.
Theorem 9.
Let $I \subseteq \mathbb{R}$ be an open interval and let $f \in \mathrm{SC}_F(I)$. For all $x, y \in I$ with $x < y$,
$\frac{1}{y-x}\int_x^y f(u)\,du \le \frac{f(x)+f(y)}{2} - \left[\frac{F(x)+F(y)}{2} - \frac{1}{y-x}\int_x^y F(u)\,du\right] - \frac{1}{4}\left[f(x) + f(y) - 2 f\left(\frac{x+y}{2}\right) - F(x) - F(y) + 2 F\left(\frac{x+y}{2}\right)\right] \le \frac{f(x)+f(y)}{2} - \left[\frac{F(x)+F(y)}{2} - \frac{1}{y-x}\int_x^y F(u)\,du\right] \le \frac{f(x)+f(y)}{2}.$   (28)
Proof. 
Integrating the refined inequality (27) over t [ 0 , 1 ] gives the stated chain of inequalities. □

5. Analytical and Entropic Applications of Strong F-Convexity

This section demonstrates how strong F-convexity leads to new analytical inequalities and new error estimates for the Shannon, Tsallis, and Rényi entropies. The results are derived directly from the structural properties developed in earlier sections.
Proposition 4.
Let $\frac{1}{2} < v < w$ and $2 \le m \le 5$. Then
1.
$w - v \le \frac{2^m w^m - 1}{2^{m-1}(2w - 1)} - \frac{2^m v^m - 1}{2^{m-1}(2v - 1)};$
2.
$v - \frac{1}{2} \le \frac{w^m - v^m}{w - v} - \frac{2^m w^m - 1}{2^{m-1}(2w - 1)};$
3.
$w - \frac{1}{2} \le \frac{w^m - v^m}{w - v} - \frac{2^m v^m - 1}{2^{m-1}(2v - 1)}.$
Proof. 
By Example 1, $f(x) = x^m$ is strongly F-convex with $F(x) = x^2$ on the interval $\left[\frac{1}{2}, b\right]$. Applying Corollary 2 with $u = \frac{1}{2}$, and noting that $\frac{x^m - \frac{1}{2^m}}{x - \frac{1}{2}} = \frac{2^m x^m - 1}{2^{m-1}(2x - 1)}$, the inequalities follow directly. □
Proposition 5.
Let $2 \le m \le 5$ and $\frac{1}{2} \le x < y$. Then
$\frac{y^{m+1} - x^{m+1}}{(m+1)(y-x)} \le \frac{x^m + y^m}{2} - \frac{x^2 + y^2}{2} + \frac{x^2 + xy + y^2}{3} - \frac{1}{4}\left[x^m + y^m - \frac{(x+y)^m}{2^{m-1}} - x^2 - y^2 + \frac{(x+y)^2}{2}\right] \le \frac{x^m + y^m}{2} - \frac{x^2 + y^2}{2} + \frac{x^2 + xy + y^2}{3} \le \frac{x^m + y^m}{2}.$
Proof. 
Applying inequality (28) to $f(x) = x^m$, which is strongly F-convex with $F(x) = x^2$ on $\left[\frac{1}{2}, b\right]$ according to Example 1, yields the required estimate. □
Proposition 6.
Let $0 < a < \frac{1}{2} < b$ and $t \in [0,1]$.
  • If $m \ge 1$ and $a \le x_i \le \frac{1}{2}$, $i = 1, \ldots, n$, then
    $0 \le \sum_{i=1}^{n}\Big[x_i^2 - \big((1-t)x_i + t x_{n+1-i}\big)^2\Big] \le \sum_{i=1}^{n}\Big[x_i^m - \big((1-t)x_i + t x_{n+1-i}\big)^m\Big].$
  • If $2 \le m \le 5$ and $\frac{1}{2} \le x_i \le b$, $i = 1, \ldots, n$, then
    $0 \le \sum_{i=1}^{n}\Big[x_i^2 - \big((1-t)x_i + t x_{n+1-i}\big)^2\Big] \le \sum_{i=1}^{n}\Big[x_i^m - \big((1-t)x_i + t x_{n+1-i}\big)^m\Big].$
Proof. 
Using Example 1 and applying Theorem 4 to the function $f(x) = x^m$, which is strongly F-convex with control function $F(x) = x^2$ on the respective intervals, we get the required results. □
We obtain the following estimates for the Shannon entropy:
Proposition 7.
Let $m \ge 2$ and $P = \{p_i\}_{i=1}^{n}$ be a positive probability distribution. Then
$n^m + \ln n \le H(P) + \sum_{i=1}^{n} p_i^{1-m}.$
Proof. 
Lemma 1 combined with Example 2 implies that $g(x) = x^m + \ln x$ is convex on $[1, \infty)$. Applying Jensen's inequality (2) to $g$,
$\left(\sum_{i=1}^{n} p_i x_i\right)^m + \ln\left(\sum_{i=1}^{n} p_i x_i\right) \le \sum_{i=1}^{n} p_i\left(x_i^m + \ln x_i\right).$
Setting $x_i := \frac{1}{p_i}$ gives
$n^m + \ln n \le \sum_{i=1}^{n} p_i^{1-m} - \sum_{i=1}^{n} p_i \ln p_i = H(P) + \sum_{i=1}^{n} p_i^{1-m}. \ \square$
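A quick numerical check of Proposition 7 (an assumed illustration, not from the paper); the distribution and the exponent $m$ are arbitrary choices.

```python
# Assumed numerical check of Proposition 7: n^m + ln n <= H(P) + sum p_i^(1-m).
import math

P = [0.5, 0.3, 0.2]        # illustrative positive probability distribution
m = 2
n = len(P)

H = -sum(p * math.log(p) for p in P)            # Shannon entropy H(P)
lhs = n**m + math.log(n)
rhs = H + sum(p ** (1 - m) for p in P)
print(lhs <= rhs)                               # True
```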
Proposition 8.
Let $P = \{p_i\}_{i=1}^{n}$ be a positive probability distribution with $\mu := \min_{1\le i\le n}\{p_i\}$ and $\nu := \max_{1\le i\le n}\{p_i\}$. Then
$H(P) \le \ln n - \frac{\mu^2}{e^{\frac{1}{\mu}}}\left(e^{\frac{1}{\nu}} - e^{n}\right).$
Proof. 
Let $a = \frac{1}{\nu}$ and $b = \frac{1}{\mu}$. By Lemma 1 and Example 6, the function $g(x) = -\ln x - \mu^2 e^{x - \frac{1}{\mu}}$ is convex on $[a, b]$. Applying Jensen's inequality (2) to $g$,
$-\ln\left(\sum_{i=1}^{n} p_i x_i\right) - \mu^2 e^{\sum_{i=1}^{n} p_i x_i - \frac{1}{\mu}} \le \sum_{i=1}^{n} p_i\left(-\ln x_i - \mu^2 e^{x_i - \frac{1}{\mu}}\right).$
Taking $x_i = \frac{1}{p_i}$, we get
$-\ln n - \mu^2 e^{n - \frac{1}{\mu}} \le -\sum_{i=1}^{n} p_i \ln\frac{1}{p_i} - \sum_{i=1}^{n} p_i\,\mu^2 e^{\frac{1}{p_i} - \frac{1}{\mu}} \le -H(P) - \mu^2 e^{\frac{1}{\nu} - \frac{1}{\mu}},$
which, after rearranging, completes the proof. □
We derive the following inequalities for the Tsallis entropy:
Proposition 9.
Let $0 < \alpha < m - 1$ and $P = \{p_i\}_{i=1}^{n}$ be a probability distribution with $p_i \ge \left(\frac{\alpha+1}{m(m-1)}\right)^{\frac{1}{m-\alpha-1}}$, $i = 1, \ldots, n$. Then
$0 \le \min_{1\le i\le n}\{p_i\}\left[\sum_{i=1}^{n} p_i^m - n^{1-m} + T_\alpha(P) + \frac{n^{-\alpha} - 1}{\alpha}\right] \le \sum_{i=1}^{n} p_i^{m+1} - \left(\sum_{i=1}^{n} p_i^2\right)^m - \frac{1}{\alpha}\left[\sum_{i=1}^{n} p_i^{\alpha+2} - \left(\sum_{i=1}^{n} p_i^2\right)^{\alpha+1}\right] \le \max_{1\le i\le n}\{p_i\}\left[\sum_{i=1}^{n} p_i^m - n^{1-m} + T_\alpha(P) + \frac{n^{-\alpha} - 1}{\alpha}\right].$   (29)
Proof. 
We apply inequality (19) with $f(x) = x^m$, $F(x) = \frac{x^{\alpha+1} - x}{\alpha}$, and $x_i = p_i$, $i = 1, \ldots, n$, and use Example 3 to ensure strong F-convexity on the required interval. □
Corollary 4.
Let $m > 1$ and $P = \{p_i\}_{i=1}^{n}$ be a probability distribution with $p_i \ge \left(\frac{1}{m(m-1)}\right)^{\frac{1}{m-1}}$, $i = 1, \ldots, n$. Then
$0 \le \min_{1\le i\le n}\{p_i\}\left[\sum_{i=1}^{n} p_i^m - n^{1-m} + H(P) - \ln n\right] \le \sum_{i=1}^{n} p_i^{m+1} - \left(\sum_{i=1}^{n} p_i^2\right)^m - \sum_{i=1}^{n} p_i^2 \ln p_i + \left(\sum_{i=1}^{n} p_i^2\right)\ln\left(\sum_{i=1}^{n} p_i^2\right) \le \max_{1\le i\le n}\{p_i\}\left[\sum_{i=1}^{n} p_i^m - n^{1-m} + H(P) - \ln n\right].$   (30)
Proof. 
Inequality (30) follows from (29) as $\alpha \to 0^{+}$. □
We also establish the following estimate for Rényi entropy:
Proposition 10.
Let $\alpha > 0$, $\alpha \neq 1$, and let $P = \{p_i\}_{i=1}^{n}$ be a probability distribution. Then
$R_\alpha(P) \le \frac{1}{1-\alpha}\ln\left(n^{1-\alpha} + \sum_{i=1}^{n} p_i^2 - \frac{1}{n}\right).$   (31)
This holds under
(a) 
$p_i > 0$ for $\alpha = 2$;
(b) 
$p_i \ge \left(\frac{2}{\alpha(\alpha-1)}\right)^{\frac{1}{\alpha-2}}$ for $\alpha > 2$;
(c) 
$0 < p_i \le \left(\frac{2}{\alpha(\alpha-1)}\right)^{\frac{1}{\alpha-2}}$ for $1 < \alpha < 2$;
(d) 
$p_i > 0$ for $0 < \alpha < 1$.
Proof. 
Using Jensen's inequality (17) with weights $p_i = \frac{1}{n}$ and points $x_i = p_i$, we get
$0 \le \frac{1}{n}\sum_{i=1}^{n} f(p_i) - f\left(\frac{1}{n}\sum_{i=1}^{n} p_i\right) - \frac{1}{n}\sum_{i=1}^{n} F(p_i) + F\left(\frac{1}{n}\sum_{i=1}^{n} p_i\right).$
Since $\sum_{i=1}^{n} p_i = 1$, we have $\frac{1}{n}\sum_{i=1}^{n} p_i = \frac{1}{n}$. Substituting this into the previous inequality, with $f(x) = x^\alpha$, $\alpha > 1$, and $F(x) = x^2$, we get
$0 \le \frac{1}{n}\sum_{i=1}^{n} p_i^\alpha - \frac{1}{n^\alpha} - \frac{1}{n}\sum_{i=1}^{n} p_i^2 + \frac{1}{n^2}.$
Multiplying by $n$, we obtain
$0 \le \sum_{i=1}^{n} p_i^\alpha - n^{1-\alpha} - \sum_{i=1}^{n} p_i^2 + \frac{1}{n}.$
Rearranging, we have
$\sum_{i=1}^{n} p_i^\alpha \ge n^{1-\alpha} + \sum_{i=1}^{n} p_i^2 - \frac{1}{n},$
and taking logarithms (an increasing function) and multiplying by the negative factor $\frac{1}{1-\alpha}$ (since $\alpha > 1$), the inequality reverses and we have
$\frac{1}{1-\alpha}\ln\left(\sum_{i=1}^{n} p_i^\alpha\right) \le \frac{1}{1-\alpha}\ln\left(n^{1-\alpha} + \sum_{i=1}^{n} p_i^2 - \frac{1}{n}\right),$
i.e., inequality (31) holds.
Using Example 5, we conclude the admissible ranges of $p_i$ in parts (a)–(d) of the statement. □
Finally, we present a specific numerical example that illustrates how our approach yields sharper bounds for Rényi entropy compared to the classical Jensen-based inequality, thereby highlighting the practical relevance of our results.
Example 10.
Consider the following two probability distributions:
(a) 
Let $n = 10$, $\alpha = 1.5$, and
$P_1 = (0.1207, 0.1207, 0.1207, 0.1207, 0.1207, 0.1207, 0.1207, 0.1207, 0.0172, 0.0172).$
Applying Proposition 10, the upper bound for the Rényi entropy is
$R_{1.5}(P_1) \le \frac{1}{1-\alpha}\ln\left(n^{1-\alpha} + \sum_{i=1}^{n} p_i^2 - \frac{1}{n}\right) = -2\ln(0.3334) \approx 2.197.$
For comparison, the classical Jensen-based upper bound is
$R_{1.5}(P_1) \le \ln n = \ln 10 \approx 2.303,$
and the exact Rényi entropy is
$R_{1.5}(P_1) = \frac{1}{1-\alpha}\ln\left(\sum_{i=1}^{n} p_i^\alpha\right) = -2\ln\left(\sum_{i=1}^{10} p_i^{1.5}\right) \approx 2.156.$
(b) 
Let $n = 50$, $\alpha = 1.5$, and
$P_2 = (\underbrace{0.0178, \ldots, 0.0178}_{45\ \text{times}}, \underbrace{0.0398, \ldots, 0.0398}_{5\ \text{times}}).$
Applying Proposition 10, the upper bound is
$R_{1.5}(P_2) \le \frac{1}{1-\alpha}\ln\left(n^{1-\alpha} + \sum_{i=1}^{n} p_i^2 - \frac{1}{n}\right) = -2\ln(0.1436) \approx 3.881.$
The classical Jensen-based upper bound is
$R_{1.5}(P_2) \le \ln n = \ln 50 \approx 3.912,$
and the exact Rényi entropy is
$R_{1.5}(P_2) = \frac{1}{1-\alpha}\ln\left(\sum_{i=1}^{n} p_i^\alpha\right) = -2\ln(0.1466) \approx 3.840.$
When we summarize the calculations, we obtain the following table:
Distribution | Exact $R_{1.5}$ | Paper Bound (Proposition 10) | Classical Jensen Bound
$P_1$ | 2.156 | 2.197 | 2.303
$P_2$ | 3.840 | 3.881 | 3.912
This table clearly illustrates that the upper bound obtained via Proposition 10 provides a sharper estimate than the classical Jensen-based bound.
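The table above can be reproduced with the short script below (an assumed illustration, not from the paper); it evaluates the exact Rényi entropy, the bound of Proposition 10, and the classical bound $\ln n$ for both distributions.

```python
# Assumed script reproducing the comparison in Example 10.
import math

def renyi(P, alpha):
    return math.log(sum(p ** alpha for p in P)) / (1.0 - alpha)

def prop10_bound(P, alpha):
    n = len(P)
    inner = n ** (1.0 - alpha) + sum(p * p for p in P) - 1.0 / n
    return math.log(inner) / (1.0 - alpha)

alpha = 1.5
P1 = [0.1207] * 8 + [0.0172] * 2
P2 = [0.0178] * 45 + [0.0398] * 5

for name, P in (("P1", P1), ("P2", P2)):
    print(name, round(renyi(P, alpha), 3), round(prop10_bound(P, alpha), 3),
          round(math.log(len(P)), 3))
# The bound and classical columns reproduce the table exactly; the exact entropies
# agree with it up to rounding of the listed p_i (the script gives 2.158 and 3.841).
```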

6. Conclusions

In this paper, we investigated a class of strongly F-convex functions which extends the classical notion of strong convexity by incorporating a general convex control function F. We established several structural characterizations of this concept and derived Jensen-type and Hermite–Hadamard-type inequalities adapted to the presence of F. As applications, we obtained new analytical inequalities and new estimates for the Shannon, Tsallis, and Rényi entropies. The results obtained here provide a flexible foundation that can be further applied in combination with suitable choices of the control function F to derive extensions and refinements of many classical inequalities already present in the literature. In this sense, the construction principles for strongly F-convex functions developed in this paper offer a systematic approach that may serve as a basis for obtaining improved bounds and generalized results.

Author Contributions

Conceptualization, H.B., S.I.B., M.J. and Y.S.; Methodology, H.B., S.I.B., M.J. and Y.S.; Validation, H.B., S.I.B., M.J. and Y.S. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Institutional Review Board Statement

Not applicable.

Data Availability Statement

No new data were created or analyzed in this study.

Conflicts of Interest

The authors declare no conflicts of interest.

References

  1. Bot, R.I.; Kassay, G.; Wanka, G. Strong duality for generalized convex optimization problems. J. Optim. Theory Appl. 2005, 127, 45–70. [Google Scholar] [CrossRef]
  2. Niculescu, C.P.; Persson, L.E. Convex Functions and Their Applications. A Contemporary Approach, 2nd ed.; CMS Books in Mathematics; Springer: New York, NY, USA, 2018. [Google Scholar]
  3. Pečarić, J.E.; Proschan, F.; Tong, Y.L. Convex Functions, Partial Orderings and Statistical Applications; Academic Press: New York, NY, USA, 1992. [Google Scholar]
  4. Jensen, J.L.W.V. Sur les fonctions convexes et les inégalités entre les valeurs moyennes. Acta Math. 1906, 30, 175–193. [Google Scholar] [CrossRef]
  5. Hermite, C. Sur deux limites d’une intégrale définie. Bull. Sci. Math. Astronom. 1883, 7, 72–79. [Google Scholar]
  6. Hadamard, J. Étude sur les propriétés des fonctions entières et en particulier d’une fonction considérée par Riemann. J. Math. Pures Appl. 1893, 9, 171–215. [Google Scholar]
  7. Hardy, G.H.; Littlewood, J.E.; Pólya, G. Inequalities; Cambridge University Press: Cambridge, UK, 1934. [Google Scholar]
  8. Mitrinović, D.S.; Pečarić, J.E.; Fink, A.M. Inequalities Involving Functions and Their Integrals and Derivatives; Mathematics and Its Applications (East European Series); Kluwer Academic Publishers Group: Dordrecht, The Netherlands, 1991; Volume 53. [Google Scholar]
  9. Tyrrell Rockafellar, R. Convex Analysis; Princeton University Press: Princeton, NJ, USA, 1970. [Google Scholar]
  10. Kelly, J.P.; Weiss, M.L. Geometry and Convexity: A Study in Mathematical Methods; Wiley: Hoboken, NJ, USA, 1979. [Google Scholar]
  11. Carlen, E.; Madiman, M.; Werner, E.M. Convexity and Concentration; Springer: Berlin/Heidelberg, Germany, 2017. [Google Scholar]
  12. Bauschke, H.H.; Combettes, P.L. Convex Analysis and Monotone Operator Theory in Hilbert Spaces, 2nd ed.; Springer: Cham, Switzerland, 2017. [Google Scholar]
  13. Cortez, M.V.; Althobaiti, A.; Aljohani, A.F.; Althobaiti, S. Generalized fuzzy valued convexity with Ostrowski’s and Hermite-Hadamard type inequalities over inclusion relations and their applications. Axioms 2024, 13, 471. [Google Scholar] [CrossRef]
  14. Boyd, S.; Vandenberghe, L. Convex Optimization; Cambridge University Press: Cambridge, UK, 2004. [Google Scholar]
  15. Shreve, S.E. Stochastic Calculus for Finance I: The Binomial Asset Pricing Model; Springer: New York, NY, USA, 2004. [Google Scholar]
  16. Juditsky, A.; Nemirovski, A. Statistical Inference via Convex Optimization; Princeton University Press: Princeton, NJ, USA, 2020. [Google Scholar]
  17. Polyanskiy, Y.; Wu, Y. Information Theory: From Coding to Learning; Cambridge University Press: Cambridge, UK, 2022. [Google Scholar]
  18. Sra, S.; Nowozin, S.; Wright, S.J. Optimization for Machine Learning; MIT Press: Cambridge, MA, USA, 2012. [Google Scholar]
  19. Azhmyakov, V.; Raisch, J. Convex control systems and convex optimal control problems with constraints. IEEE Trans. Autom. Control 2008, 53, 993–998. [Google Scholar] [CrossRef]
  20. Ben-Haim, Y.; Elishakoff, I. Convex Models of Uncertainty in Applied Mechanics; Elsevier: Amsterdam, The Netherlands, 1990. [Google Scholar]
  21. Galgani, L.; Scotti, A. Further remarks on convexity of thermodynamic functions. Physica 1969, 42, 242–244. [Google Scholar] [CrossRef]
  22. Mitrinović, D.S. Analytic Inequalities; Springer: New York, NY, USA, 1970. [Google Scholar]
  23. Dragomir, S.S.; Pearce, C.E.M. Selected Topics on Hermite-Hadamard Inequalities and Applications; RGMIA Monographs; Victoria University: Melbourne, VIC, Australia, 2000. [Google Scholar]
  24. Dragomir, S.S. Bounds for the normalised Jensen functional. Bull. Austral. Math. Soc. 2006, 74, 471–478. [Google Scholar] [CrossRef]
  25. Krnić, M.; Lovričević, N.; Pečarić, J.; Perić, J. Superadditivity and Monotonicity of the Jensen-Type Functionals (New Methods for Improving the Jensen-Type Inequalities in Real and in Operator Cases); Element: Zagreb, Croatia, 2015. [Google Scholar]
  26. Morales, J.V.; Rincón, L. Probability distributions and the maximum entropy principle. Appl. Math. Comput. 2023, 444, 127806. [Google Scholar] [CrossRef]
  27. Jüngel, A. Entropy Methods for Diffusive Partial Differential Equations; Springer: Berlin/Heidelberg, Germany, 2016. [Google Scholar]
  28. Sethna, J.P. Statistical Mechanics: Entropy, Order Parameters and Complexity, 2nd ed.; Oxford University Press: Oxford, UK, 2021. [Google Scholar]
  29. Kondepudi, D.; Prigogine, I. Modern Thermodynamics: From Heat Engines to Dissipative Structures; Wiley: Hoboken, NJ, USA, 2014. [Google Scholar]
  30. Tsallis, C. Possible generalization of Boltzmann-Gibbs statistics. J. Stat. Phys. 1988, 52, 479–487. [Google Scholar] [CrossRef]
  31. Shannon, C.E. A mathematical theory of communication. Bell Syst. Tech. J. 1948, 27, 379–423. [Google Scholar] [CrossRef]
  32. Rényi, A. On measures of information and entropy. In Proceedings of the Fourth Berkeley Symposium on Mathematical Statistics and Probability, Berkeley, CA, USA, 20 June–30 July 1960; University of California Press: Oakland, CA, USA, 1961. [Google Scholar]
  33. Csiszár, I.; Körner, J. Information Theory: Coding Theorems for Discrete Memoryless Systems; Academic Press: New York, NY, USA, 1981. [Google Scholar]
  34. Dragomir, S.S. Other Inequalities for Csiszár Divergence and Applications; RGMIA Monographs; Victoria University: Melbourne, VIC, Australia, 2000. [Google Scholar]
  35. Ivelić Bradanović, S. Sherman’s inequality and its converse for strongly convex functions with applications to generalized f-divergence. Turk. J. Math. 2019, 43, 2680–2696. [Google Scholar] [CrossRef]
  36. Melbourne, J. Strongly Convex Divergences. Entropy 2020, 22, 1327. [Google Scholar] [CrossRef]
  37. Sahatsathatsana, C.; Yotkaew, P. New Estimates of the q-Hermite–Hadamard Inequalities via Strong Convexity. Axioms 2025, 14, 576. [Google Scholar] [CrossRef]
  38. Kalsoom, H.; Cortez, M.V.; Latif, M.A. Trapezoidal type inequalities for strongly convex and quasi-convex functions via post-quantum calculus. Entropy 2021, 23, 1238. [Google Scholar] [CrossRef]
  39. Fitzsimmons, M.; Liu, J. A note on the equivalence of a strongly convex function and its induced contractive differential equation. Automatica 2022, 142, 110349. [Google Scholar] [CrossRef]
  40. Zhao, T.H.; Shi, L.; Chu, Y.M. Convexity and concavity of the modified Bessel functions of the first kind with respect to Hölder means. Rev. R. Acad. Cienc. Exactas Fís. Nat. Ser. A Mat. RACSAM 2020, 114, 96. [Google Scholar] [CrossRef]
  41. Chen, S.B.; Rashid, S.; Noor, M.A.; Hammouch, Z.; Chu, Y.M. New fractional approaches for n-polynomial P-convexity with applications in special function theory. Adv. Differ. Equ. 2020, 2020, 543. [Google Scholar] [CrossRef]
  42. Ivelić Bradanović, S. Improvements of Jensen’s inequality and its converse for strongly convex functions with applications to strongly f-divergences. J. Math. Anal. Appl. 2024, 531, 1–16. [Google Scholar] [CrossRef]
  43. Dragomir, S.S. On the reverse of Jessen’s inequality for isotonic linear functionals. J. Inequal. Pure Appl. Math. 2001, 2, 1–13. [Google Scholar]
  44. Perić, J. Strong F-convexity and concavity and refinements of some classical inequalities. J. Inequal. Appl. 2024, 2024, 96. [Google Scholar] [CrossRef]
  45. Perić, J. The Clausing inequality and strong F-concavity. J. Math. Inequal. 2024, 18, 1201–1216. [Google Scholar] [CrossRef]
  46. Hartman, P. On functions representable as a difference of convex functions. Pac. J. Math. 1959, 9, 707–713. [Google Scholar] [CrossRef]
  47. Veselý, L.; Zajíček, L. Delta Convex Mappings Between Banach Spaces and Applications; Dissert. Math. 289; PWN: Warszawa, Poland, 1989. [Google Scholar]
  48. Dragomir, S.S.; Ionescu, N.M. On some inequalities for convex-dominated functions. L’Anal. Num. Théor. L’Approx. 1990, 19, 21–27. [Google Scholar]