On a Non-Newtonian Calculus of Variations

The calculus of variations is a field of mathematical analysis born in 1687 with Newton's problem of minimal resistance. It is concerned with the maxima and minima of integral functionals, and finding the solution of such problems leads to solving the associated Euler-Lagrange equations. The subject has found many applications over the centuries, e.g., in physics, economics, engineering and biology. Up to this moment, however, the theory of the calculus of variations has been confined to Newton's approach to calculus. As negative values of admissible functions are not physically plausible in many applications, we propose here to develop an alternative calculus of variations based on the non-Newtonian approach first introduced by Grossman and Katz in the period between 1967 and 1970. This approach provides a calculus defined, from the very beginning, for positive real numbers only, and is based on a (non-Newtonian) derivative that permits one to compare relative changes between a dependent positive variable and an independent variable that is also positive. In this way, the non-Newtonian calculus of variations we introduce here provides a natural framework for problems involving functions with positive images. Our main result is a first-order optimality condition of Euler-Lagrange type. The new calculus of variations complements the standard one in a nontrivial, multiplicative way, guaranteeing that the solution remains in the physically admissible positive range. An illustrative example is given.


Introduction
A popular method of creating a new mathematical system is to vary the axioms of a known one. Non-Newtonian calculi provide alternative approaches to the usual calculus of Newton (1643-1727) and Leibniz (1646-1716); they were first introduced by Grossman and Katz in the period between 1967 and 1970 [1]. The two most popular non-Newtonian calculi are the multiplicative and bigeometric calculi, which in fact are modifications of each other: in these calculi, addition and subtraction are changed to multiplication and division [2]. Since such multiplicative calculi are variations on the usual calculus, the traditional one is sometimes called the additive calculus [3].
Recently, it has been shown that non-Newtonian/multiplicative calculi are more suitable than the ordinary Newtonian/additive calculus for some problems, e.g., in actuarial science, finance, economics, biology, demography, pattern recognition in images, signal processing, thermostatistics and quantum information theory [3][4][5][6][7]. This is explained by the fact that while the basis for the standard/additive calculus is the representation of a function as locally linear, the basis of a multiplicative calculus is the representation of a function as locally exponential [1,3,7]. In fact, the usefulness of product integration goes back to Volterra (1860-1940), who introduced in 1887 the notion of a product integral and used it to study solutions of differential equations [8,9]. For readers not familiar with product integrals, we refer to the book [10], which contains short biographical sketches of Volterra, Schlesinger and other mathematicians involved in the development of product integrals, and an extensive list of references, offering a gentle opportunity to become acquainted with the subject of non-Newtonian integration. For our purposes, it is enough to understand that a non-Newtonian calculus is a methodology that allows one to take a different look at problems that can be investigated via calculus: it provides differentiation and integration tools based on multiplication instead of addition, and in some cases, mainly problems of price elasticity, multiplicative growth, etc., the use of such multiplicative calculi is preferable to the traditional Newtonian calculus [11][12][13][14]. Moreover, a non-Newtonian calculus is a self-contained system, independent of any other system of calculus [15].
The main aim of our work was to obtain, for the first time in the literature, a non-Newtonian calculus of variations that involves the minimization of a functional defined by a non-Newtonian integral, with an integrand/Lagrangian depending on the non-Newtonian derivative. The calculus of variations is a field of mathematical analysis that uses, as the name indicates, variations, which are small changes in functions, to find maxima and minima of the considered functionals: mappings from a set of functions to the real numbers. In the non-Newtonian framework, instead of the classical variations of the form y(⋅) + εh(⋅), proposed by Lagrange (1736-1813) and still used nowadays in all recent formulations of the calculus of variations [16][17][18], for example, in the fractional calculus of variations [19,20], quantum variational calculus [21,22] and the calculus of variations on time scales [23,24], we propose here to use "multiplicative variations". More precisely, in contrast with the calculi of variations found in the literature, we show here, for the first time, how to consider variations of the form y(⋅) ⋅ h(⋅)^ln(ε). The functionals of the calculus of variations are expressed as definite integrals, here in a non-Newtonian sense, involving functions and their derivatives, also understood in a non-Newtonian sense. The functions that maximize or minimize such functionals are found using the Euler-Lagrange equation, which we prove here in the non-Newtonian setting. Given the importance of the calculus of variations in applications, for example, in physics [25,26], economics [27,28] and biology [29,30], and the importance that non-Newtonian calculus already has in these areas, we trust that the calculus of variations initiated here will attract the attention of the research community. We recall the quotation found in the 1972 book of Grossman and Katz [1]: "for each successive class of phenomena, a new calculus or a new geometry".

Materials and Methods
From 1967 until 1970, Grossman and Katz gave definitions of new kinds of derivatives and integrals, converting the roles of subtraction and addition into division and multiplication, respectively, and established a new family of calculi, called non-Newtonian calculi [1,31,32], which are akin to the classical calculus developed by Newton and Leibniz three centuries ago. Non-Newtonian calculi use different types of arithmetic and their generators. Let α be a bijection between subsets X and Y of the set of real numbers R, and endow Y with the operations of sum and multiplication induced by α and the ordering given by the inverse map α^−1. Then the α-arithmetic is a field with the order topology [33]. Concretely, given a bijection α ∶ X → Y ⊆ R, called a generator [15], we say that α defines an arithmetic if the following four operations are defined: x ⊕ y = α(α^−1(x) + α^−1(y)), x ⊖ y = α(α^−1(x) − α^−1(y)), x ⊙ y = α(α^−1(x) ⋅ α^−1(y)), x ⊘ y = α(α^−1(x) / α^−1(y)). (1) If α is chosen to be the identity function and X = R, then (1) reduces to the four operations studied in school; i.e., one gets the standard arithmetic, from which the traditional (Newton-Leibniz) calculus is developed. For other choices of α and X, we get an infinitude of other arithmetics, from which Grossman and Katz produced a series of non-Newtonian calculi, compiled in the seminal book of 1972 [1]. Among all such non-Newtonian calculi, great interest has recently been focused on the Grossman-Katz calculus obtained when we fix α(x) = e^x, α^−1(x) = ln(x) and X = R +, the set of real numbers strictly greater than zero [7,12,14,15]. We shall concentrate here on one option, originally called by Grossman and Katz the geometric/exponential/bigeometric calculus [1,2,13,[34][35][36][37], but from which different terminology and small variations of the original calculus have grown up in the literature, in particular, the multiplicative calculus [3][4][5]8,12,15,36,[38][39][40] and, more recently, the proportional calculus [7,11,14,41], which is essentially the bigeometric calculus of [35].
Here we follow closely this last approach, in particular the exposition of the non-Newtonian calculus found in [7,14,35], because it is appealing to scientists who seek ways to express laws in a scale-free form.
Throughout the text, we fix α(x) = e^x, α^−1(x) = ln(x) and X = R +. Then we get from (1) the following operations: x ⊕ y = xy, x ⊖ y = x/y, x ⊙ y = x^ln(y), x ⊘ y = x^(1/ln(y)), y ≠ 1. (2) Let a, b, c ∈ R +. In the non-Newtonian arithmetic given by (2), the following properties of the ⊙ operation hold (cf. Proposition 2.1 of [14]): a ⊙ b = b ⊙ a; a ⊙ (b ⊙ c) = (a ⊙ b) ⊙ c; a ⊙ (b ⊕ c) = (a ⊙ b) ⊕ (a ⊙ c); a ⊙ e = a; a ⊙ 1 = 1. We see that in this non-Newtonian algebra, a = 1 is the traditional "zero" (in the current arithmetic, 0 represents −∞), while e plays the role of the traditional "one". In fact (see Proposition 2.2 of [14]), one has a ⊕ (1/a) = 1 and a ⊙ a^{−1} = e, where a^{−1} ∶= e^(1/ln(a)) for a ≠ 1, so that a ⊙ b^{−1} = a ⊘ b. Based on the mentioned properties, one easily proves that (R +, ⊕, ⊙) is a field (see Theorem 2.3 of [14]). In this field, the following calculus has been developed [7,14].
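The arithmetic (2) is easy to experiment with numerically. The following sketch (the helper names oplus, ominus, odot and oslash are ours, not from the cited literature) implements the four operations and checks some of the field properties listed above:

```python
import math

def oplus(x, y):   # x ⊕ y = e^(ln x + ln y) = x·y
    return x * y

def ominus(x, y):  # x ⊖ y = e^(ln x − ln y) = x/y
    return x / y

def odot(x, y):    # x ⊙ y = e^(ln x · ln y) = x^ln(y)
    return math.exp(math.log(x) * math.log(y))

def oslash(x, y):  # x ⊘ y = e^(ln x / ln y) = x^(1/ln y), y ≠ 1
    return math.exp(math.log(x) / math.log(y))

a, b, c = 2.0, 3.0, 5.0
assert math.isclose(odot(a, 1.0), 1.0)           # 1 is the "zero": a ⊙ 1 = 1
assert math.isclose(odot(a, math.e), a)          # e is the "one":  a ⊙ e = a
assert math.isclose(odot(a, b), odot(b, a))      # commutativity of ⊙
assert math.isclose(odot(a, oplus(b, c)),        # distributivity of ⊙ over ⊕
                    oplus(odot(a, b), odot(a, c)))
assert math.isclose(odot(a, oslash(math.e, a)),  # a ⊙ a^{−1} = e,
                    math.e)                      # with a^{−1} = e ⊘ a
```

Note how the ⊙-inverse of a is simply e ⊘ a = e^(1/ln(a)), in agreement with the identities above.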
Definition 1 (distance). Let x, y, z be positive real numbers and define d ∶ R + × R + → R + as d(x, y) = e^|ln(x)−ln(y)| (equivalently, d(x, y) = max{x/y, y/x}). The following properties are simple to prove: (i) d(x, y) ≥ 1; (ii) d(x, y) = 1 if and only if x = y; (iii) d(x, y) = d(y, x); (iv) d(x, z) ≤ d(x, y) ⊕ d(y, z). We can now introduce the notion of limit.

Definition 2 (limit). We say that lim x→x 0 f (x) = L if for every ε > 1 there exists δ > 1 such that d( f (x), L) < ε for all x with 1 < d(x, x 0 ) < δ.
According to Definition 2, it is possible to give a meaning to the equality lim x→x 0 f (x) = f (x 0 ) and, therefore, to the notion of continuity in the non-Newtonian calculus.

Definition 3 (continuity).
We say that f is continuous at x 0 if lim x→x 0 f (x) = f (x 0 ). We proceed by reviewing the essentials of non-Newtonian differentiation and integration.

Derivatives
The derivative of a function is introduced in the following terms.
Definition 4 (derivative [7,14,35,41]). We say that f is differentiable at x 0 if the limit lim x→x 0 ( f (x) ⊖ f (x 0 )) ⊘ (x ⊖ x 0 ) exists. In this case, the limit is denoted by f̃ (x 0 ) and receives the name of derivative of f at x 0 . Moreover, we say that f is differentiable if f is differentiable at x 0 for all x 0 in the domain of f .
For a function f that is also differentiable in the classical sense, one has f̃ (x) = e^(x f ′(x)/ f (x)). In particular, for the ⊙-powers x^{n} ∶= x ⊙ ⋯ ⊙ x (n factors), we have that the derivative of f (x) = x^{n} is f̃ (x) = e^n ⊙ x^{n−1}. (3) If n = 1 in (3), we get f̃ (x) = e: the derivative of the identity function is e, the "one" of the non-Newtonian arithmetic. More examples of derivatives of a function in the sense of Definition 4 follow: if cos e (x) ∶= e^cos(ln(x)) and sin e (x) ∶= e^sin(ln(x)), then c̃os e (x) = 1 ⊖ sin e (x) and s̃in e (x) = cos e (x).
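These differentiation formulas can be verified numerically. In the sketch below, nn_derivative is a helper of ours that approximates the limit in Definition 4 by taking x close to x 0 (it is a numerical approximation, not an exact implementation):

```python
import math

def nn_derivative(f, x0, h=1e-7):
    # approximate the limit (f(x) ⊖ f(x0)) ⊘ (x ⊖ x0) with x = x0·(1+h)
    x = x0 * (1.0 + h)
    return math.exp(math.log(f(x) / f(x0)) / math.log(x / x0))

def sin_e(x): return math.exp(math.sin(math.log(x)))
def cos_e(x): return math.exp(math.cos(math.log(x)))

x0 = 2.5
# sin_e~ = cos_e and cos_e~ = 1 ⊖ sin_e = 1/sin_e
assert math.isclose(nn_derivative(sin_e, x0), cos_e(x0), rel_tol=1e-4)
assert math.isclose(nn_derivative(cos_e, x0), 1.0 / sin_e(x0), rel_tol=1e-4)

# power rule (3): the ⊙-power x^{n} = e^((ln x)^n) has derivative e^n ⊙ x^{n−1}
n = 3
f = lambda x: math.exp(math.log(x) ** n)
assert math.isclose(nn_derivative(f, x0),
                    math.exp(n * math.log(x0) ** (n - 1)), rel_tol=1e-4)
```

The checks agree with the closed form f̃ (x) = e^(x f ′(x)/ f (x)) stated above.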
The basic rules of differentiation (keep recalling that 1 is the "zero" of the non-Newtonian calculus) follow: (a) if f (x) = c, with c a positive constant, then f̃ (x) = 1 (derivative of a constant); (b) ( f ⊕ g)̃ (x) = f̃ (x) ⊕ g̃ (x) (derivative of a sum); (c) ( f ⊙ g)̃ (x) = f̃ (x) ⊙ g(x) ⊕ f (x) ⊙ g̃ (x) (derivative of a product); (d) ( f ⊘ g)̃ (x) = ( f̃ (x) ⊙ g(x) ⊖ f (x) ⊙ g̃ (x)) ⊘ g(x)^{2} (derivative of a quotient). Higher-order derivatives are defined as usual: f̃ (n) (x) is the derivative of f̃ (n−1) (x), with f̃ (0) (x) ∶= f (x). In the sequel, we use the following notation: ⊕ n i=0 a i = a 0 ⊕ ⋯ ⊕ a n .
Theorem 1 (Taylor's theorem). Let f be a function such that f̃ (n+1) (x) exists for all x in a range that contains the number a. Then, for all x in that range, f (x) = P n (x) ⊕ R n (x), where P n (x) = ⊕ n i=0 ( f̃ (i) (a) ⊙ (x ⊖ a)^{i} ) ⊘ e^(i!) is the Taylor polynomial of degree n, with the convention (x ⊖ a)^{0} ∶= e, and R n (x) = ( f̃ (n+1) (c) ⊙ (x ⊖ a)^{n+1} ) ⊘ e^((n+1)!) is the remainder term in Lagrange form, for some number c between a and x.
Suppose f is a function that has derivatives of all orders over an interval centered at a. If lim n→+∞ R n (x) = 1 for all x in the interval, then the Taylor series is convergent and converges to f (x): f (x) = ⊕ ∞ i=0 ( f̃ (i) (a) ⊙ (x ⊖ a)^{i} ) ⊘ e^(i!). Examples of everywhere-convergent Taylor series are provided by the functions cos e and sin e introduced above. Taylor's theorem (Theorem 1) has a natural extension for functions of several variables [6,42]. Here we proceed by briefly reviewing integration. For more on the α-arithmetic, its topology and analysis, we refer the reader to the literature. For example, mean value theorems can be found in the original book of Grossman and Katz of 1972 [1]; for a recent reference with detailed proofs, see [43].
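As a concrete check of Theorem 1, the following sketch builds the non-Newtonian Taylor polynomial of sin e about a = 1. Since ln(sin e (e^t)) = sin(t), the tilde-derivatives of sin e at a correspond to the classical derivatives of sin at ln(a), which cycle through sin, cos, −sin, −cos; this correspondence is an observation of ours used only for the computation:

```python
import math

def nn_taylor_sin_e(x, n):
    # NN Taylor polynomial of sin_e about a = 1 (so ln a = 0):
    # the ⊕-sum of the Taylor terms is the exponential of the ordinary sum
    g_derivs = [math.sin, math.cos,
                lambda t: -math.sin(t), lambda t: -math.cos(t)]
    t = math.log(x)
    s = sum(g_derivs[i % 4](0.0) * t ** i / math.factorial(i)
            for i in range(n + 1))
    return math.exp(s)

x = 3.0
sin_e_x = math.exp(math.sin(math.log(x)))
assert math.isclose(nn_taylor_sin_e(x, 12), sin_e_x, rel_tol=1e-8)
```

With twelve terms, the non-Newtonian Taylor polynomial already matches sin e (3) to eight significant digits, illustrating the convergence claimed above.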

Integrals
The notion of integral for the non-Newtonian calculus under consideration is a type of product integration [10]: for f positive and continuous on [a, b], the non-Newtonian integral of f from a to b is given by ∫ a b f (x) dx = e^( ∫ ln(a) ln(b) ln f (e^t) dt ), where the integral on the right-hand side is the ordinary Riemann integral. As expected, two functions with the same non-Newtonian derivative differ by a multiplicative constant: if F̃ = G̃, then F(x) = c ⊕ G(x) = c G(x), where c is a constant. Examples of integrals computed in this setting can be found in [7,14,35]. If f is positive and continuous on [a, b], then f is integrable on [a, b]. The following properties hold: ∫ a b ( f (x) ⊕ g(x)) dx = ∫ a b f (x) dx ⊕ ∫ a b g(x) dx; ∫ a b (c ⊙ f (x)) dx = c ⊙ ∫ a b f (x) dx for any constant c; and ∫ a b f (x) dx = ∫ a c f (x) dx ⊕ ∫ c b f (x) dx for a ≤ c ≤ b. Moreover, the fundamental theorem of calculus holds: ∫ a b f̃ (x) dx = f (b) ⊖ f (a). For more on the α-arithmetic, its generalized real analysis, its fundamental topological properties related to non-Newtonian metric spaces and its calculus, including non-Newtonian differential equations and their applications, see [7,33,[44][45][46][47][48]. For a gentle, thorough and modern introduction to the subject of non-Newtonian calculi, we also refer the reader to the recent book [49]. Now we proceed with our original results.
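The product integral is straightforward to approximate numerically. The sketch below (nn_integral is our own helper, using a midpoint rule in the variable t = ln x, under the product-integral formula above) checks the fundamental theorem of calculus on a simple example:

```python
import math

def nn_integral(f, a, b, n=10000):
    # non-Newtonian integral: exp( ∫_{ln a}^{ln b} ln f(e^t) dt ), midpoint rule
    t0, t1 = math.log(a), math.log(b)
    dt = (t1 - t0) / n
    s = sum(math.log(f(math.exp(t0 + (k + 0.5) * dt))) for k in range(n))
    return math.exp(s * dt)

# F = x^{2} (the ⊙-square) has tilde-derivative F~ = e^2 ⊙ x = x^2
F  = lambda x: math.exp(math.log(x) ** 2)
Ft = lambda x: x ** 2

a, b = 1.5, 4.0
# fundamental theorem: ∫_a^b F~(x) dx = F(b) ⊖ F(a) = F(b)/F(a)
assert math.isclose(nn_integral(Ft, a, b), F(b) / F(a), rel_tol=1e-6)
```

The same helper is also a convenient way to evaluate the variational functionals considered in the next section.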

Results
In order to develop a non-Newtonian calculus of variations (dynamic optimization), we begin by proving some necessary results of static optimization.

Static Optimization
Given ε > 1, let B(x, ε) ∶= {y ∈ R + ∶ d(y, x) ≤ ε}. Note that for a, b ∈ R +, one has a ⊖ b = a/b, so that a ⊖ b ≤ 1 ⇔ a ≤ b. Similarly for strict inequalities, for example, a ⊖ b < 1 ⇔ a < b. This means that B(x, ε) = {y ∈ R + ∶ d(y, x) ⊖ ε ≤ 1}.
Definition 5 (local minimizer). Let f ∶ (a, b) → R + and consider the problem of minimizing f (x), x ∈ (a, b). We say that x ∈ (a, b) is a (local) minimizer of f in (a, b) if there exists ε > 1 such that f (x) ≤ f (y) (i.e., f (x) ⊖ f (y) ≤ 1) for all y ∈ B(x, ε) ∩ (a, b). In this case, we say that f (x) is a (local) minimum.
Another important concept in optimization is that of descent direction: we say that d ∈ R + is a descent direction of f at x if there exists ε̄ > 1 such that f (x ⊕ ε ⊙ d) < f (x) for all 1 < ε < ε̄.

Remark 1.
From the chain rule and other properties of Section 2, it follows that, for fixed x and d, the function ϕ(ε) ∶= f (x ⊕ ε ⊙ d) satisfies ϕ̃ (ε) = f̃ (x ⊕ ε ⊙ d) ⊙ d. (4) In particular, we get from (4) that ϕ̃ (1) = f̃ (x) ⊙ d. Our first result allows us to identify a descent direction of f at x based on the derivative of f at x.
Theorem 2. Let f be differentiable. If there exists d ∈ R + such that f̃ (x) ⊙ d < 1, then d is a descent direction of f at x.

Proof.
We know from Taylor's theorem (Theorem 1) that f (x ⊕ ε ⊙ d) = f (x) ⊕ f̃ (x) ⊙ ε ⊙ d ⊕ ( f̃ (2) (c) ⊙ (ε ⊙ d)^{2} ) ⊘ e^2, (5) with c being in the interval between x and x ⊕ ε ⊙ d. The equality (5) can be written in the following equivalent form: f (x ⊕ ε ⊙ d) ⊖ f (x) = ε ⊙ ( f̃ (x) ⊙ d ⊕ ε ⊙ f̃ (2) (c)^(1/2) ⊙ d^{2} ). (6) Recalling that a ⊙ b^{−1} = a ⊘ b, a ⊙ a^{−1} = e and that ⊙ is distributive over ⊕, we get from (6) that ( f (x ⊕ ε ⊙ d) ⊖ f (x)) ⊘ ε = f̃ (x) ⊙ d ⊕ ε ⊙ f̃ (2) (c)^(1/2) ⊙ d^{2}. (7) Now we note that, as ε → 1, one has ε ⊙ f̃ (2) (c)^(1/2) ⊙ d^{2} → 1, so that the right-hand side of (7) converges to f̃ (x) ⊙ d. From the hypothesis f̃ (x) ⊙ d < 1 of our theorem, this means that, for ε > 1 sufficiently close to 1, the right-hand side of (7) is strictly less than one. Thus, for ε > 1 sufficiently close to 1, ( f (x ⊕ ε ⊙ d) ⊖ f (x)) ⊘ ε < 1. (8) Recalling that a ⊘ ε < 1 ⇔ a^(1/ln(ε)) < 1 and that ln(ε) > 0 for ε > 1, we conclude from (8) that, for ε > 1 sufficiently close to 1, f (x ⊕ ε ⊙ d) < f (x); that is, d is a descent direction of f at x.

As a corollary of Theorem 2, we obtain Fermat's necessary optimality condition, which gives us a method to find local minimizers (or maximizers) of differentiable functions on open sets, by showing that every local extremizer of the function is a stationary point (a point where the non-Newtonian derivative of the function equals one).

Theorem 3 (Fermat's necessary optimality condition). Let f be differentiable. If x is a local minimizer of f , then f̃ (x) = 1.

Proof.
We want to prove that for a local minimizer x we must have f̃ (x) = 1, which is equivalent to 1 ⊖ f̃ (x) = 1. We do the proof by contradiction. Assume that f̃ (x) ≠ 1; that is, 1 ⊖ f̃ (x) ≠ 1. Let d = 1 ⊖ f̃ (x). Then, f̃ (x) ⊙ d = f̃ (x) ⊙ (1 ⊖ f̃ (x)) = g( f̃ (x)) with g(y) ∶= 1/y^ln(y), and since g is a function with 0 < g(y) < 1 for all y ≠ 1, we conclude that f̃ (x) ⊙ d < 1. It follows from Theorem 2 that d is a descent direction of f at x and, therefore, from the definition of descent direction, x is not a local minimizer.
In the next section, we make use of Theorem 3 to prove the non-Newtonian Euler-Lagrange equation.
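Before proceeding, Theorems 2 and 3 can be illustrated numerically. In the sketch below (all helper names are ours) we take f (x) = x^{2} = e^((ln x)^2), whose tilde-derivative is x^2 and whose global minimizer over R + is x = 1:

```python
import math

def oplus(x, y): return x * y
def odot(x, y):  return math.exp(math.log(x) * math.log(y))

f  = lambda x: math.exp(math.log(x) ** 2)  # f = x^{2}, minimized at x = 1
ft = lambda x: x ** 2                      # tilde-derivative: e^2 ⊙ x = x^2

# Theorem 2: at x = 2, d = 1/2 satisfies f~(x) ⊙ d < 1, so moving from x
# to x ⊕ ε ⊙ d with ε > 1 close to 1 strictly decreases f
x, d = 2.0, 0.5
assert odot(ft(x), d) < 1.0
for eps in (1.1, 1.01, 1.001):
    assert f(oplus(x, odot(eps, d))) < f(x)

# Theorem 3 (Fermat): at the minimizer x = 1, the derivative equals 1
x_min = 1.0
h = 1e-7
x1 = x_min * (1.0 + h)
deriv = math.exp(math.log(f(x1) / f(x_min)) / math.log(x1 / x_min))
assert math.isclose(deriv, 1.0, rel_tol=1e-5)
```

Note that a descent direction here is a number d < 1 when f̃ (x) > 1, and vice versa, exactly as predicted by the condition f̃ (x) ⊙ d < 1.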

Dynamic Optimization
A central tool in dynamic optimization, both in the calculus of variations and optimal control [16], is integration by parts. In what follows, we use the notation [ f (x)] x=b x=a ∶= f (b) ⊖ f (a).

Theorem 4 (integration by parts). Let f and g be differentiable functions on [a, b]. The following formula of integration by parts holds: ∫ a b f (x) ⊙ g̃ (x) dx = [ f (x) ⊙ g(x)] x=b x=a ⊖ ∫ a b f̃ (x) ⊙ g(x) dx. (9)

Proof. From the derivative of a product, we know that ( f ⊙ g)̃ (x) = f̃ (x) ⊙ g(x) ⊕ f (x) ⊙ g̃ (x). (10) On the other hand, the fundamental theorem of integral calculus tells us that ∫ a b ( f ⊙ g)̃ (x) dx = [ f (x) ⊙ g(x)] x=b x=a . Therefore, by integrating (10) from a to b, we conclude that [ f (x) ⊙ g(x)] x=b x=a = ∫ a b f̃ (x) ⊙ g(x) dx ⊕ ∫ a b f (x) ⊙ g̃ (x) dx, which is equivalent to (9).
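The integration-by-parts formula can also be verified numerically. The sketch below (helper names ours; nn_integral approximates the non-Newtonian integral by a midpoint rule in t = ln x, under the product-integral formula of Section 2) checks the formula for f = x^{2} and g = sin e:

```python
import math

def odot(x, y): return math.exp(math.log(x) * math.log(y))

def nn_integral(f, a, b, n=20000):
    t0, t1 = math.log(a), math.log(b)
    dt = (t1 - t0) / n
    s = sum(math.log(f(math.exp(t0 + (k + 0.5) * dt))) for k in range(n))
    return math.exp(s * dt)

# f, g and their tilde-derivatives in closed form
f  = lambda x: math.exp(math.log(x) ** 2)      # f = x^{2}
ft = lambda x: x ** 2                           # f~
g  = lambda x: math.exp(math.sin(math.log(x))) # g = sin_e
gt = lambda x: math.exp(math.cos(math.log(x))) # g~ = cos_e

a, b = 1.2, 3.7
lhs = nn_integral(lambda x: odot(f(x), gt(x)), a, b)
boundary = odot(f(b), g(b)) / odot(f(a), g(a))  # [f ⊙ g] from x=a to x=b
rhs = boundary / nn_integral(lambda x: odot(ft(x), g(x)), a, b)
assert math.isclose(lhs, rhs, rel_tol=1e-5)
```

In log coordinates this is nothing more than classical integration by parts applied to ln f and ln g, which explains why the identity holds exactly.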
We are now in a condition to formulate the fundamental problem of the calculus of variations: to minimize the integral functional L[y] = ∫ a b L (x, y(x), ỹ (x)) dx subject to the boundary conditions y(a) = y a and y(b) = y b , where y a , y b ∈ R + are given. We denote by Y(y a ; y b ) the class of admissible functions y ∶ [a, b] → R + satisfying these boundary conditions. We assume the Lagrangian L to be of class C 2 and y ∈ C 2 , so that one can look at the Euler-Lagrange equation as a second-order ordinary differential equation [17]. We adopt such assumptions here. We denote the problem by (P).
Before proving the Euler-Lagrange equation (the necessary optimality condition for problem (P)), we first need to prove a non-Newtonian analogue of the fundamental lemma of the calculus of variations.
Lemma 1 (fundamental lemma of the non-Newtonian calculus of variations). If f is a continuous function on [a, b] with ∫ a b f (x) ⊙ h(x) dx = 1 (11) for every continuous function h ∶ [a, b] → R + with h(a) = h(b) = 1, then f (x) = 1 for all x ∈ [a, b].

Proof. We do the proof by contradiction. Suppose there exists x 0 ∈ (a, b) with f (x 0 ) ≠ 1, say f (x 0 ) > 1 (the case f (x 0 ) < 1 is treated similarly). By continuity, there exists a subinterval [x 1 , x 2 ] ⊆ [a, b] containing x 0 such that f (x) > 1 for all x ∈ [x 1 , x 2 ]. Choose h(x) = e^((x−x 1 )(x 2 −x)) for x ∈ [x 1 , x 2 ] and h(x) = 1 otherwise; then h(x) satisfies the assumptions of the lemma; i.e., h(x) is continuous for all x ∈ [a, b] and h(a) = h(b) = 1. Let us analyze the integrand β(x) ∶= f (x) ⊙ h(x) of (12), where ∫ a b f (x) ⊙ h(x) dx = ∫ x 1 x 2 f (x) ⊙ h(x) dx. (12) Since f (x) > 1 and h(x) > 1 for x ∈ (x 1 , x 2 ), we have β(x) > 1 on (x 1 , x 2 ) and β(x) = 1 elsewhere, so that ∫ a b f (x) ⊙ h(x) dx > 1. This contradicts (11) and proves the lemma. Now we formulate and prove the analog of the Euler-Lagrange differential equation for our problem (P).

Theorem 5 (Euler-Lagrange equation). If y(x), x ∈ [a, b], is a solution to problem (P),
then y satisfies the Euler-Lagrange equation L̃ y (x, y(x), ỹ (x)) ⊖ [ L̃ ỹ ]̃ (x, y(x), ỹ (x)) = 1 (13) for all x ∈ [a, b], where L̃ y and L̃ ỹ denote the non-Newtonian partial derivatives of L with respect to y and ỹ, respectively, and [ L̃ ỹ ]̃ denotes the non-Newtonian derivative, with respect to x, of the function x ↦ L̃ ỹ (x, y(x), ỹ (x)).
Proof. Let y(x), x ∈ [a, b], be a minimizer of (P). Then, the function (y ⊕ ε ⊙ h)(x), x ∈ [a, b], belongs to Y(y a ; y b ) for any function h ∈ Y(1; 1) and any ε in an open neighborhood of 1. Note that (y ⊕ ε ⊙ h)(x) = y(x) for ε = 1. This means that for any smooth function h(x), x ∈ [a, b], satisfying h(a) = h(b) = 1, the function ϕ(ε) defined by ϕ(ε) = ∫ a b L (x, (y ⊕ ε ⊙ h)(x), ( ỹ ⊕ ε ⊙ h̃ )(x)) dx (14) has a minimum at ε = 1. It follows from Fermat's theorem (Theorem 3) that ϕ̃ (1) = 1. By differentiating (14) with respect to ε, and then putting ε = 1, we get from the chain rule and the relations obtained by differentiating y ⊕ ε ⊙ h and ỹ ⊕ ε ⊙ h̃ with respect to ε that ∫ a b ( L̃ y (x, y, ỹ ) ⊙ h ⊕ L̃ ỹ (x, y, ỹ ) ⊙ h̃ ) dx = 1. (15) From integration by parts (Theorem 4), and the fact that h(a) = 1 and h(b) = 1, one has ∫ a b L̃ ỹ (x, y, ỹ ) ⊙ h̃ dx = 1 ⊖ ∫ a b [ L̃ ỹ ]̃ (x, y, ỹ ) ⊙ h dx. (16) Using equality (16) in the necessary condition (15), we get that ∫ a b ( L̃ y (x, y, ỹ ) ⊖ [ L̃ ỹ ]̃ (x, y, ỹ ) ) ⊙ h dx = 1 (17) for every admissible variation h. The result follows from the fundamental lemma of the calculus of variations (Lemma 1) applied to (17).

To illustrate our main result, let us see an example. Consider the following problem of the calculus of variations: minimize ∫ 1 e^π ( √e ⊙ ( ỹ ^{2} (x) ⊖ y^{2} (x)) ) dx subject to y(1) = e, y(e^π) = e^(−1). (18) Theorem 5 tells us that the solution of (18) must satisfy the Euler-Lagrange Equation (13). In this example, the Lagrangian L is given by L (x, y, ỹ ) = √e ⊙ ( ỹ ^{2} ⊖ y^{2} ), so that L̃ y (x, y, ỹ ) = √e ⊙ (1 ⊖ e^2 ⊙ y), L̃ ỹ (x, y, ỹ ) = √e ⊙ e^2 ⊙ ỹ. (19) Noting that √e = e ⊘ e^2 = (e^2)^{−1}, the equalities (19) simplify to L̃ y (x, y, ỹ ) = 1 ⊖ y, L̃ ỹ (x, y, ỹ ) = ỹ, and the Euler-Lagrange Equation (13) takes the form ỹ (2) (x) ⊕ y(x) = 1. (20) The second-order differential Equation (20) has solutions of the form y(x) = c 1 ⊙ cos e (x) ⊕ c 2 ⊙ sin e (x), where c 1 and c 2 are constants. Given the boundary conditions y(1) = e and y(e^π) = e^(−1), we conclude that the Euler-Lagrange extremal for problem (18) is given by y(x) = e ⊙ cos e (x) ⊕ e^2 ⊙ sin e (x) ⇔ y(x) = e^(2 sin(ln(x))+cos(ln(x))).
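The extremal found in the example can be checked numerically: the closed form y(x) = e^(2 sin(ln(x))+cos(ln(x))) satisfies y(1) = e and y(e^π) = e^(−1), and the Euler-Lagrange equation ỹ (2) ⊕ y = 1 means ỹ (2) (x) y(x) = 1. The sketch below approximates the tilde-derivative by a symmetric difference quotient (helper names are ours):

```python
import math

y = lambda x: math.exp(2.0 * math.sin(math.log(x)) + math.cos(math.log(x)))

def nn_derivative(f, x0, h=1e-6):
    # symmetric approximation of the limit quotient of Definition 4
    x1, x2 = x0 * (1.0 - h), x0 * (1.0 + h)
    return math.exp(math.log(f(x2) / f(x1)) / math.log(x2 / x1))

# boundary values of the closed-form extremal
assert math.isclose(y(1.0), math.e)
assert math.isclose(y(math.exp(math.pi)), math.exp(-1.0))

# Euler-Lagrange equation (20): y~(2)(x) ⊕ y(x) = 1, i.e. y~(2)(x)·y(x) = 1
for x in (1.3, 2.0, 5.0, 10.0):
    y2 = nn_derivative(lambda u: nn_derivative(y, u), x)
    assert math.isclose(y2 * y(x), 1.0, rel_tol=1e-3)
```

In log coordinates, the check amounts to verifying that ln y(e^t) = 2 sin(t) + cos(t) solves the classical harmonic-oscillator equation u″ + u = 0.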

Discussion
One can say that the calculus of variations began in 1687 with Newton's minimal resistance problem [25,50,51]. It immediately occupied the attention of Bernoulli (1655-1705), but it was Euler (1707-1783) who first elaborated mathematically on the subject, beginning in 1733. Lagrange (1736-1813) was influenced by Euler's work and contributed significantly to the theory, introducing a purely analytic approach to the subject based on additive variations y + εh, whose essence we still follow today [17]. The calculus of variations is concerned with the minimization of integral functionals: ∫ a b L (x, y(x), y ′ (x)) dx → min. (21)
However, as observed in [52], there are other interesting problems arising in applications in which the functionals to be minimized are not of the form (21). For example, the planning of a firm trying to program its production and investment policies, to reach a given production rate and to maximize its future market competitiveness at a given time horizon, can be mathematically stated as the minimization of a multiplicative (product) functional (see [52]). Another example, also given in [52], appears when dealing with the so-called "slope stability problem", which is described mathematically as minimizing a quotient of two integral functionals. Such multiplicative integral minimization problems, which arise in different applications, are nonstandard problems of the calculus of variations, but they can be naturally modeled in the non-Newtonian calculus of variations and then solved in a rather standard way, using non-Newtonian Lagrange variations of the form y ⊕ ε ⊙ h, as we have proposed here. Therefore, we claim that the non-Newtonian calculus of variations just introduced may be useful for dealing with multiplicative functionals that arise in economics, physics and biology.
In this paper we have restricted ourselves to the ideas and central results of any calculus of variations: the celebrated Euler-Lagrange equation, which is a first-order necessary optimality condition. Of course, our results can be extended, for example, by relaxing the considered hypotheses and enlarging the space of admissible functions, which we have taken here to be C 2 , or by considering vector-valued functions instead of scalar ones. We leave such generalizations to the interested and curious reader. In fact, much remains to be done. As possible future research directions, we can mention: obtaining natural boundary conditions (sometimes also called transversality conditions) to be satisfied at a boundary point a and/or b, when y(a) and/or y(b) are free or restricted to take values on a given curve; obtaining second-order necessary conditions; obtaining sufficient conditions; and investigating nonadditive isoperimetric problems.

Conclusions
In this work, a new calculus of variations was proposed, based on the non-Newtonian approach introduced by Grossman and Katz, thereby avoiding issues with non-positive values. A new relation was proved, the multiplicative Euler-Lagrange differential Equation (13), which every solution of a non-Newtonian variational problem, with admissible functions taking positive values only, must satisfy. An example was provided for illustration purposes.
Grossman and Katz have shown that infinitely many calculi can be constructed independently [1]. Each of these calculi provides a different perspective for approaching many problems in science and engineering [53]. Additionally, a mathematical problem that is difficult or impossible to solve in one calculus can often be easily solved through another calculus [14,39].
Here we adopted the non-Newtonian calculus as originally introduced by Grossman and Katz [1,34,35] and recently developed by Córdova-Lepe [11,41] and collaborators [4]: see the recent reviews in [7,14]. Roughly speaking, the key to understanding such a calculus, valid for positive functions, is a formal substitution in which one replaces addition and subtraction with multiplication and division, respectively; multiplication in the standard calculus is replaced by exponentiation in the non-Newtonian case, and thus division by exponentiation with the reciprocal exponent. Our main contribution here was to develop, for the first time in the literature, a suitable non-Newtonian calculus of variations that minimizes a non-Newtonian integral functional with a Lagrangian that depends on the non-Newtonian derivative. The main result is a first-order necessary optimality condition of Euler-Lagrange type.
We trust that the present paper marks the beginning of a fruitful road for non-Newtonian (NN) mechanics, the NN calculus of variations and NN optimal control, and that it will serve as inspiration for a new generation of researchers. Currently, we are investigating the validity of Emmy Noether's principle in the NN/multiplicative calculus of variations introduced here.