Article

Evolutionary Search for Polynomial Lyapunov Functions: A Genetic Programming Method for Exponential Stability Certification

1 Faculty of Mechanics and Mathematics, Taras Shevchenko National University of Kyiv, 4E Academician Glushkov Avenue, 03127 Kyiv, Ukraine
2 Educational and Scientific Institute of Cybersecurity and Information Protection, State University of Information and Communication Technologies, 7, Solomyanska Str., 03110 Kyiv, Ukraine
3 Faculty of Information Technology, Taras Shevchenko National University of Kyiv, 24, Bohdana Havrylyshyna Str., 02000 Kyiv, Ukraine
* Author to whom correspondence should be addressed.
Axioms 2025, 14(5), 343; https://doi.org/10.3390/axioms14050343
Submission received: 29 March 2025 / Revised: 17 April 2025 / Accepted: 19 April 2025 / Published: 30 April 2025

Abstract

This paper presents a method for constructing polynomial Lyapunov functions to analyze the stability of nonlinear dynamical systems. The approach is based on genetic programming, a variant of genetic algorithms where the search space consists of hierarchical tree structures. In our formulation, these polynomial functions are represented as binary trees. The Lyapunov conditions for exponential stability are interpreted as a minimax optimization problem, using a carefully designed fitness metric to ensure positivity and dissipation within a chosen domain. The genetic algorithm then evolves candidate polynomial trees, minimizing constraint violations and continuously refining stability guarantees. Numerical examples illustrate that this methodology can effectively identify and optimize Lyapunov functions for a wide range of systems, indicating a promising direction for automated stability proofs in engineering applications.

1. Introduction

Lyapunov functions are fundamental tools for analyzing the stability of nonlinear systems [1,2,3], rooted in energy-based reasoning: if a system consistently dissipates energy everywhere except at the origin, it must eventually converge to the equilibrium point. Consequently, a Lyapunov function, serving as an energy-like measure, must be positive definite. This principle extends naturally to general nonlinear systems. Lyapunov's direct method involves constructing a scalar function that reflects the system's dynamics and analyzing its evolution over time. However, identifying such a function remains a significant challenge. Importantly, failing to find a Lyapunov function does not imply that the system is unstable, which adds complexity to the task.
Numerous classical methods for identifying or approximating Lyapunov functions have been proposed, including the variable gradient method, Krasovskii’s method, and basis-function linearization; see [1,2,3,4,5,6] for a comprehensive overview. However, no universal technique exists, and constructing a Lyapunov function is generally as difficult as solving the system itself. In recent decades, advances in computational power have spurred research into numerical approaches such as sum-of-squares polynomials [7,8], piecewise affine functions [9], formal synthesis via satisfiability modulo theories (SMT) [10], and deep neural networks [11]. An in-depth survey of these methods can be found in [12], and studies on the stability of evolutionary systems are presented in [13,14].
This work introduces a numerical method for synthesizing polynomial Lyapunov functions tailored to exponentially stable dynamical systems utilizing the genetic programming (GP) paradigm [15]. GP is an evolutionary computation technique, an extension of genetic algorithms [16], which operates directly on symbolic expressions, enabling the automatic discovery and optimization of mathematical formulas. A key advantage of this approach is its ability to generate interpretable, symbolic representations of Lyapunov functions, facilitating further analysis of system dynamics.
Several prior works have explored related methodologies. Grosman and Lewin [17] applied GP techniques to the analysis of asymptotic stability. McGough et al. [18] utilized Grammatical Evolution, a specialized variant of GP. Feng et al. [19] combined deep neural networks with symbolic regression to synthesize Lyapunov functions; additional insights are provided in [20].
While these methods have demonstrated significant progress, many lack rigorous mathematical guarantees or general-purpose construction procedures. Our contributions address this gap by establishing existence criteria for polynomial Lyapunov functions in exponentially stable systems and developing a robust algorithm for their automated synthesis. The proposed method was validated across multiple benchmark systems, demonstrating its strong performance and high accuracy.

2. Exponential Stability of Dynamical Systems

In this section, we recall several fundamental concepts of exponential stability in dynamical systems. For additional details, we refer the reader to [1,2,3].
We consider an autonomous system of ordinary differential equations of the form
$\dot{x}(t) = f(x(t)),$  (1)
where $f : D \to \mathbb{R}^n$ is a locally Lipschitz continuous map defined on a domain $D \subset \mathbb{R}^n$.
We analyze exponentially stable equilibria according to the following definition.
Definition 1. 
The equilibrium point $x = 0$ of (1) is exponentially stable if there exist positive constants $c$, $k$, and $\lambda$ such that
$\|x(t)\| \le k \|x(t_0)\| e^{-\lambda (t - t_0)}, \qquad \forall \|x(t_0)\| < c,$
and globally exponentially stable if this bound is satisfied for any initial state $x(t_0)$.
The classical Lyapunov theorem states that if one can find a continuously differentiable function $V : D \to \mathbb{R}$ that satisfies specified bounding conditions, then the equilibrium at the origin is exponentially stable.
Theorem 1 
(Khalil [2]). Let $x = 0$ be an equilibrium point for the nonlinear system
$\dot{x} = f(t, x),$
where $f : [0, \infty) \times D \to \mathbb{R}^n$ is piecewise continuous in $t$ and locally Lipschitz in $x$ on $[0, \infty) \times D$, and $D \subset \mathbb{R}^n$ is a domain that contains the origin $x = 0$. Let $V : [0, \infty) \times D \to \mathbb{R}$ be a continuously differentiable function such that
$k_1 \|x\|^{\alpha} \le V(t, x) \le k_2 \|x\|^{\alpha},$
$\dfrac{\partial V}{\partial t} + \dfrac{\partial V}{\partial x} f(t, x) \le -k_3 \|x\|^{\alpha},$
for all $t \ge 0$ and $x \in D$, where $k_1$, $k_2$, $k_3$, and $\alpha$ are positive constants. Then, $x = 0$ is exponentially stable. If the assumptions hold globally, then $x = 0$ is globally exponentially stable.
For nonlinear systems, stability can also be analyzed via linearization. The following theorem relates exponential stability of the nonlinear system to the stability of its linearization using the Jacobian matrix.
Theorem 2 
(Khalil [2]). Let $x = 0$ be an equilibrium point for the nonlinear system
$\dot{x} = f(t, x),$
where $f : [0, \infty) \times D \to \mathbb{R}^n$ is continuously differentiable, $D = \{x \in \mathbb{R}^n : \|x\|_2 < r\}$, and the Jacobian matrix $\partial f / \partial x$ is bounded and Lipschitz on $D$, uniformly in $t$.
Let
$A(t) = \left. \dfrac{\partial f}{\partial x}(t, x) \right|_{x = 0}.$
Then, the origin is an exponentially stable equilibrium point for the nonlinear system if and only if it is an exponentially stable equilibrium point for the linear system
$\dot{x} = A(t) x.$
For the autonomous system (1), the stability of x = 0 can be determined by the eigenvalues of the constant linearization matrix.
Definition 2. 
A square matrix $A \in \mathbb{R}^{n \times n}$ is called a Hurwitz matrix if all its eigenvalues $\lambda_i$ satisfy
$\mathrm{Re}(\lambda_i) < 0.$
Corollary 1. 
Let x = 0 be an equilibrium point for the autonomous nonlinear system (1), where f x is continuously differentiable in a neighborhood of x = 0 . Let A be a linearization matrix of (1). Then, x = 0 is an exponentially stable equilibrium point for the nonlinear system if and only if A is a Hurwitz matrix.
The following converse theorem presented by Peet [21] guarantees the existence of a polynomial Lyapunov function for an exponentially stable equilibrium under sufficient smoothness and boundedness conditions.
Theorem 3 
(Peet [21]). Consider the system defined by Equation (1), where $f \in C^{n+2}(\mathbb{R}^n)$. Suppose there exist constants $\mu, \delta, r > 0$ such that
$\|x(t)\|_2 \le \mu \|x(t_0)\|_2 e^{-\delta t}$
for all $t \ge 0$ and $\|x(t_0)\|_2 \le r$. Then, there exist a polynomial $v : \mathbb{R}^n \to \mathbb{R}$ and constants $k_1, k_2, k_3 > 0$ such that
$k_1 \|x\|_2^2 \le v(x) \le k_2 \|x\|_2^2,$
$\nabla v(x)^T f(x) \le -k_3 \|x\|_2^2,$
for all $\|x\|_2 \le r$.

3. Genetic Programming Method for Exponential Stability Certification

In this section, we outline our proposed method, which leverages genetic programming to compute a polynomial Lyapunov function for a given dynamical system. We provide a comprehensive explanation of the underlying theoretical principles and describe the construction process of our approach.

3.1. Existence Criterion for a Polynomial Lyapunov Function

We now present a converse Lyapunov theorem, which provides the main criterion for the applicability of the proposed method.
Theorem 4. 
Let $x = 0$ be an equilibrium for the nonlinear system (1), where $f \in C^{n+2}(D)$ and $D = \{x \in \mathbb{R}^n : \|x\|_2 < r\}$. Let the linearization matrix $A$ of the system (1) be a Hurwitz matrix. Then, there exist a polynomial $v : \mathbb{R}^n \to \mathbb{R}$ and constants $k_1, k_2, k_3 > 0$ such that
$k_1 \|x\|_2^2 \le v(x) \le k_2 \|x\|_2^2,$
$\nabla v(x)^T f(x) \le -k_3 \|x\|_2^2,$
for all $x \in D$.
Proof of Theorem 4. 
By Corollary 1, the Hurwitz condition on the linearization implies that x = 0 is an exponentially stable equilibrium for the nonlinear system. Then, by Theorem 3, under the assumed smoothness of f and the exponential stability of the origin, there exists a polynomial Lyapunov function v satisfying the required quadratic bounds and dissipation condition on domain D . □

3.2. The Lyapunov Fitness

For the remainder of our work, we focus on dynamical systems that satisfy the conditions of Theorem 4, and our objective is to develop methods for constructing a polynomial Lyapunov function $V$ that meets the following criteria for exponential stability on a domain $D \subset \mathbb{R}^n$:
  • Quadratic Bounds: There exist constants $k_1, k_2 > 0$ such that
    $k_1 \|x\|^2 \le V(x) \le k_2 \|x\|^2, \quad \forall x \in D.$
  • Dissipation Condition: There exists $k_3 > 0$ such that the orbital derivative of $V$ satisfies
    $\nabla V(x)^T f(x) \le -k_3 \|x\|^2, \quad \forall x \in D.$
Rather than strictly enforcing these conditions, we introduce a fitness measure that quantifies the degree of violation by formulating the search for a Lyapunov function as a minimax optimization problem:
$\inf_{V \in \mathcal{V}} \sup_{x \in D} F(V, x),$  (2)
where $\mathcal{V}$ is a space of candidate functions (in our case, polynomials on $D$) and $F(V, x)$ is a fitness functional that penalizes deviations from the desired Lyapunov conditions.
Definition 3. 
Let $\mathcal{V}$ be a chosen space of continuously differentiable functions $V : D \to \mathbb{R}$, $D \subset \mathbb{R}^n$. Define the Lyapunov fitness functional $F : \mathcal{V} \times D \to \mathbb{R}$ by
$F(V, x) = F_1(V, x) + F_2(V, x) + F_3(V, x),$
with the following penalty terms:
1. Lower Bound Penalty:
$F_1(V, x) = \tanh\left(\max\{0,\; k_1 \|x\|^2 - V(x)\}\right).$
2. Upper Bound Penalty:
$F_2(V, x) = \tanh\left(\max\{0,\; V(x) - k_2 \|x\|^2\}\right).$
3. Dissipation Penalty:
$F_3(V, x) = \tanh\left(\max\{0,\; \nabla V(x)^T f(x) + k_3 \|x\|^2\}\right).$
Here, $k_1, k_2, k_3$ are positive constants, $f(x)$ is the vector field of the dynamical system, and the hyperbolic tangent function
$\tanh(x) = \dfrac{e^x - e^{-x}}{e^x + e^{-x}}$
serves to smooth and bound the penalty values.
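The three penalty terms can be sketched directly in code. The following is a minimal illustration (ours, not the authors' implementation), assuming $V$, its gradient, and the vector field $f$ are supplied as callables, with illustrative default constants:

```python
import numpy as np

def lyapunov_fitness(V, grad_V, f, x, k1=0.5, k2=2.0, k3=0.1):
    """Pointwise Lyapunov fitness F(V, x) = F1 + F2 + F3 (Definition 3).

    Each term is a tanh-smoothed hinge penalty that vanishes exactly when
    the corresponding exponential-stability condition holds at x.
    """
    r2 = float(np.dot(x, x))                                 # ||x||^2
    F1 = np.tanh(max(0.0, k1 * r2 - V(x)))                   # lower-bound penalty
    F2 = np.tanh(max(0.0, V(x) - k2 * r2))                   # upper-bound penalty
    F3 = np.tanh(max(0.0, float(np.dot(grad_V(x), f(x))) + k3 * r2))  # dissipation
    return float(F1 + F2 + F3)
```

For $V(x) = \|x\|^2$ and $f(x) = -x$, all three conditions hold with $k_1 = 0.5$, $k_2 = 2$, $k_3 = 1$, so the fitness is identically zero.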
Definition 4. 
For a candidate Lyapunov function $V$ and the vector field $f$, the Lyapunov fitness $L_{f,D}(V)$ over the domain $D$ is defined as
$L_{f,D}(V) = \int_D F(V, x)\, dx.$  (3)
This integral measures the total violation of the Lyapunov conditions over the domain D .
Definition 5. 
Given a probability distribution $\rho$ on $D$ and a set of sample points $\hat{x}_1, \dots, \hat{x}_N \in D$ drawn according to $\rho$, the empirical Lyapunov fitness is defined as
$L_{f,\rho}(V) = \dfrac{1}{N} \sum_{i=1}^{N} F(V, \hat{x}_i).$  (4)
For uniform $\rho$, this discrete approximation converges to $L_{f,D}(V)$, up to the constant normalizing factor $|D|$, as $N \to \infty$.
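The empirical fitness is a plain Monte Carlo average. As a sketch (parameter values illustrative, not taken from the paper), $D$ can be taken as the $n$-ball of radius $r$ and sampled by rejection from the enclosing cube:

```python
import numpy as np

def empirical_fitness(F, n, N=2000, r=1.0, seed=0):
    """Empirical Lyapunov fitness (Definition 5): the mean of the pointwise
    fitness F over N points drawn uniformly from D = {x : ||x||_2 <= r}."""
    rng = np.random.default_rng(seed)
    total, count = 0.0, 0
    while count < N:
        x = rng.uniform(-r, r, size=n)       # propose from the cube
        if float(x @ x) <= r * r:            # accept only points inside the ball
            total += F(x)
            count += 1
    return total / N
```

As a sanity check, the mean of $\|x\|^2$ over the unit disk is $1/2$, so `empirical_fitness(lambda x: float(x @ x), n=2)` returns roughly 0.5.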
It is straightforward to verify that if $\hat{V}$ is a Lyapunov function satisfying the exponential stability conditions on $D$, then $\hat{V}$ is a global minimizer of both (3) and (4), as well as a solution to the minimax problem (2).

3.3. Lyapunov Function Representation

In our formulation, a candidate Lyapunov function V is represented as a binary tree with the following structure:
  • The leaves of a tree (also known as terminal nodes) are drawn from a predefined set
$T = \{x_1, \dots, x_n, a_1, \dots, a_n\},$  (5)
where $x_1, \dots, x_n$ denote the state variables and $a_1, \dots, a_n \in \mathbb{R}$ are constant coefficients.
  • The internal nodes (branches or nonterminal nodes) represent binary operations. In our implementation, the set of functions is
$H = \{h, g\},$  (6)
where $h(a, b) = a + b$ and $g(a, b) = a \times b$.
Although the sets T and H can be expanded, they suffice for our purposes, as our focus is on constructing polynomial functions.
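A minimal sketch of this representation (ours, not the paper's code): terminals are variable names or numeric constants, internal nodes apply $+$ or $\times$, and evaluation is a recursive descent:

```python
class Node:
    """Binary-tree node for a candidate polynomial (Section 3.3)."""

    def __init__(self, value, left=None, right=None):
        self.value = value      # "+", "*", a variable name "x<i>", or a number
        self.left = left
        self.right = right

    def eval(self, x):
        """Evaluate the tree at state vector x, where x[0] corresponds to x1."""
        if self.left is None:                        # terminal node
            if isinstance(self.value, str):          # state variable x_i
                return x[int(self.value[1:]) - 1]
            return self.value                        # constant a_i
        a, b = self.left.eval(x), self.right.eval(x)
        return a + b if self.value == "+" else a * b

def square(name):
    return Node("*", Node(name), Node(name))

# V(x) = x1^2 + x2^2 encoded as (x1 * x1) + (x2 * x2)
V = Node("+", square("x1"), square("x2"))
```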

3.4. Evolution Strategy

Our genetic algorithm employs the $(\mu + \lambda)$ evolution strategy ($(\mu + \lambda)$-ES). Here, $\mu$ denotes the number of individuals selected for the next generation, and $\lambda$ is the number of offspring generated at each iteration. In a $(\mu + \lambda)$-ES, both parents and offspring are retained in a combined selection pool of size $\gamma = \mu + \lambda$. Unlike the alternative $(\mu, \lambda)$-ES, this strategy, commonly known as elitism, guarantees that the best individual found so far is preserved across generations. Under suitable conditions, such elitist selection ensures the global convergence of the evolution strategy. For further theoretical details and an in-depth explanation of evolution strategies, we refer the reader to [22].

4. Step-by-Step Schema of the Algorithm

Our method implements the classic flow of a genetic algorithm. It begins with an Initialization phase to create the initial population of individuals, followed by an Initial Fitness Evaluation to prepare this population for further evolution. Next, the Evolutionary Cycle, the main body of the algorithm, allows candidates to evolve toward a solution. Then, a New Generation is formed by evaluating the fitness of newly generated individuals for the next evolution cycle. Finally, a Termination Check is performed to determine whether a solution has been found or if the algorithm is trapped in a local minimum.
Step 1.
Initialization.
Generate an initial population of individuals $P = \{V_i\}_{i=1}^{G}$, where $G$ is the size of the initial population and each individual $V_i$ is a candidate solution represented as a binary tree. Each tree is constructed using two sets: the set of terminal nodes $T$ (5) and the set of nonterminal nodes $H$ (6).
The tree is constructed recursively. At a given depth d , the node type (terminal or nonterminal) is selected according to a probability that decreases with depth. Specifically, the probability of choosing a nonterminal node is defined as
$P_{\mathrm{nonterminal}}(d) = e^{-\alpha d},$
where $\alpha > 0$ is a predetermined decay parameter. Consequently, the probability of selecting a terminal node at depth $d$ is
$P_{\mathrm{terminal}}(d) = 1 - e^{-\alpha d}.$
When a nonterminal node is selected, a node from the set $H$ is placed, and its children are recursively generated (for binary trees, two children), with the depth incremented by one at each recursive call. Conversely, if a terminal node is selected, a node from $T$ is placed, ending that branch of the recursion.
This strategy ensures that as the depth d increases, the likelihood of selecting a terminal node increases, thereby regulating the overall tree size and preventing uncontrolled growth. Each individual tree in the initial population is constructed independently according to this scheme, resulting in a diverse set of candidate solutions for the algorithm.
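The recursive initialization above can be sketched compactly; the terminal set and decay rate below are illustrative choices, not the paper's settings:

```python
import math
import random

TERMINALS = ["x1", "x2", 1, 2]          # state variables and integer constants
OPERATORS = ["+", "*"]                  # the function set H

def grow(depth=0, alpha=0.5, rng=random):
    """Grow a random tree as nested lists [op, left, right]: a nonterminal
    is placed with probability exp(-alpha * depth), so branches terminate
    with probability that increases with depth, bounding the tree size."""
    if rng.random() < math.exp(-alpha * depth):
        return [rng.choice(OPERATORS),
                grow(depth + 1, alpha, rng),
                grow(depth + 1, alpha, rng)]
    return rng.choice(TERMINALS)        # a leaf ends this branch
```

Note that at depth 0 the root is always a nonterminal, since $e^{0} = 1$.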
Step 2.
Initial Fitness Evaluation.
The numerical evaluation of $L_{f,\rho}(V_i)$ proceeds as follows:
  • Domain Sampling:
    Let $D \subset \mathbb{R}^n$ be a predefined domain of interest. To calculate the fitness function, we generate a set of $N$ test points $\{\hat{x}_j\}_{j=1}^{N}$ by sampling independently from the uniform distribution $U(D)$ over $D$. These points form a well-distributed mesh covering the domain $D$ and serve as the basis for numerical approximation of the fitness values.
  • Fitness Computation:
    For each individual $V_i$ in the population $P = \{V_i\}_{i=1}^{G}$, we evaluate its fitness value $l_i$ on the set of test points $\{\hat{x}_j\}_{j=1}^{N}$ using the empirical Lyapunov fitness $L_{f,\rho}$ (4), which quantifies the degree to which $V_i$ violates the exponential stability conditions on the domain $D$.
The computed fitness values $\{l_i\}_{i=1}^{G}$ determine the likelihood of each individual being selected for reproduction or mutation in subsequent generations. Individuals with lower fitness values (i.e., those that better satisfy the exponential stability conditions) typically have a greater chance of producing offspring, guiding the population toward better solutions over successive generations.
Step 3.
Evolutionary Cycle.
The evolutionary cycle simulates Darwinian natural selection by iteratively applying selection, crossover, and mutation to evolve the population toward improved solutions. These genetic operations balance exploration (discovering new regions of the solution space) and exploitation (refining high-quality candidates).
The selection operator is defined as
$\mathrm{select}\left(\{V_i\}_{i=1}^{N},\; \{l_i\}_{i=1}^{N}\right),$
where $\{V_i\}_{i=1}^{N}$ is the set of individuals in the current generation and $\{l_i\}_{i=1}^{N}$ are their corresponding fitness values. We employ tournament selection [23] in our implementation. In each tournament, a fixed number $n_t$ of individuals is randomly sampled from the population, and the individual with the lowest fitness is selected for reproduction. This approach ensures that high-quality candidates have a higher chance of producing offspring while maintaining sufficient diversity to prevent premature convergence.
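Tournament selection admits a very short sketch (function and parameter names are ours):

```python
import random

def tournament_select(population, fitnesses, n_t=3, rng=random):
    """Sample n_t individuals uniformly without replacement and return the
    one with the lowest (best) empirical Lyapunov fitness."""
    candidates = rng.sample(range(len(population)), n_t)
    winner = min(candidates, key=lambda i: fitnesses[i])
    return population[winner]
```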
The crossover operator is defined as
$\mathrm{cross}(V_p, V_q, p_c),$
where $V_p$ and $V_q$ are two parent individuals chosen via selection and $p_c$ is the predefined crossover probability. With probability $p_c$, a random node is selected in the tree representation of each parent, and the corresponding subtrees are swapped, generating two offspring that inherit genetic material from both parents and introducing new variations. If crossover is not performed (with probability $1 - p_c$), the parent individuals are passed forward unchanged. Figure 1 illustrates the crossover operation, where shaded nodes are selected to be swapped between parents.
The mutation operator is defined as
$\mathrm{mutate}(V, p_m),$
where $V$ is an individual (either an offspring of crossover or a direct parent copy) and $p_m$ is the mutation probability. With probability $p_m$, a random node in the tree representation of $V$ is replaced with a newly generated subtree constructed using the recursive initialization procedure. If no mutation occurs (with probability $1 - p_m$), then $V$ is retained in the offspring pool if it results from crossover; however, if $V$ is a direct copy of a parent, it is removed to avoid stagnation and promote population diversity. Figure 2 illustrates the mutation operation, where shaded nodes of the individual are replaced by a randomly generated subtree.
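The two variation operators can be sketched on nested-list trees (`[op, left, right]` with terminals as strings or numbers). This illustration is ours; the probability gates $p_c$, $p_m$ that wrap these calls in the full algorithm are omitted:

```python
import copy
import random

def paths(tree, path=()):
    """Yield the path (tuple of child indices) to every node in the tree."""
    yield path
    if isinstance(tree, list):
        yield from paths(tree[1], path + (1,))
        yield from paths(tree[2], path + (2,))

def get(tree, path):
    for i in path:
        tree = tree[i]
    return tree

def put(tree, path, sub):
    """Return tree with the node at `path` replaced by `sub`."""
    if not path:
        return sub
    get(tree, path[:-1])[path[-1]] = sub
    return tree

def crossover(p, q, rng=random):
    """Swap random subtrees between deep copies of the two parents."""
    p, q = copy.deepcopy(p), copy.deepcopy(q)
    pp, qp = rng.choice(list(paths(p))), rng.choice(list(paths(q)))
    sp, sq = copy.deepcopy(get(p, pp)), copy.deepcopy(get(q, qp))
    return put(p, pp, sq), put(q, qp, sp)

def mutate(v, grow_subtree, rng=random):
    """Replace a random node of a copy of v with a freshly grown subtree."""
    v = copy.deepcopy(v)
    return put(v, rng.choice(list(paths(v))), grow_subtree())
```

Because crossover only exchanges subtrees, the combined multiset of leaves across both offspring equals that of the parents, and the parents themselves are left untouched.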
These operations are repeated until the offspring pool reaches the desired size $\lambda$. Once the offspring pool is fully populated, the fitness of each new individual is evaluated using the empirical Lyapunov fitness (4).
Step 4.
Forming a New Generation.
Following the $(\mu + \lambda)$-ES, we merge the current generation with the offspring pool, forming a combined pool of size $\gamma = \mu + \lambda$.
All individuals in this pool are ranked by fitness, and the top μ (i.e., those with the lowest fitness values) are selected to form the next generation. This elitist strategy ensures the survival of the best-performing individuals from the previous generation while allowing new variations to improve diversity. This balance enhances convergence and maintains stability in the evolutionary process.
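Survivor selection in this scheme reduces to a sort over the merged pool (a sketch with our own function names):

```python
def next_generation(parents, offspring, fitness, mu):
    """(mu + lambda)-ES survivor selection: merge parents and offspring,
    rank by fitness (lower is better), and keep the best mu individuals.
    Keeping parents in the pool makes the best fitness non-increasing
    across generations (elitism)."""
    pool = list(parents) + list(offspring)
    return sorted(pool, key=fitness)[:mu]
```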
Step 5.
Termination Check.
The algorithm continues to evolve until one of the following stopping criteria is met:
  • Maximum Generations Reached: The algorithm terminates after running for a predefined number of generations $n_g$.
  • Early Stopping: An individual achieves the target fitness value of 0.
  • No Improvement: If the best fitness value remains unchanged for $n_b$ consecutive generations, the algorithm is considered converged and execution stops.
These termination conditions ensure computational efficiency, preventing unnecessary iterations while striving for optimal solutions.
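Steps 1 through 5 compose into the following loop skeleton. This is a simplified sketch: `vary` stands in for the whole selection/crossover/mutation pipeline of Step 3, and the parameter defaults are illustrative rather than the paper's settings:

```python
import random

def evolve(init, fitness, vary, mu=10, lam=10, n_g=100, n_b=20, rng=random):
    """Elitist (mu + lambda)-ES main loop with the three stopping criteria
    of Step 5: generation cap, target fitness 0, and stagnation."""
    pop = sorted((init(rng) for _ in range(mu)), key=fitness)         # Step 1
    best, stall = fitness(pop[0]), 0
    for _ in range(n_g):                      # maximum generations reached
        offspring = [vary(rng.choice(pop), rng) for _ in range(lam)]  # Step 3
        pop = sorted(pop + offspring, key=fitness)[:mu]               # Step 4
        f_best = fitness(pop[0])
        if f_best == 0:                       # early stopping at target 0
            break
        stall = stall + 1 if f_best >= best else 0
        best = min(best, f_best)
        if stall >= n_b:                      # no improvement
            break
    return pop[0]
```

A toy run (minimizing $|v - 7|$ over integers with a deterministic hill-climbing `vary`) converges to 7 via early stopping.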

5. Experiments

In this section, we validate the proposed algorithm by constructing polynomial Lyapunov functions for several nonlinear dynamical systems.
We implemented an application for the numerical experiments. The workstation environment and algorithm parameters are as follows:
  • Programming language: Python 3.13.
  • Libraries:
    • Symbolic computations: PySR 1.4.0 [24].
    • Evolutionary algorithms framework: DEAP 1.4.2 [25].
    • Visualization: Matplotlib 3.10.0 [26].
  • Algorithm settings:
    • Initial population size: G = 1000 .
    • Maximum number of generations: n g = 100 .
    • Number of individuals selected for the next generation: μ = 500 .
    • Number of offspring: λ = 250 .
    • Crossover probability: p c = 0.2 .
    • Mutation probability: p m = 0.1 .
    • Number of random test points: N = 5000 .
  • Constraints:
    • Domain of interest: $D = \{x \in \mathbb{R}^n : \|x\|_2 \le 1\}$.
    • Constants $a_1, \dots, a_n \in \mathbb{Z}$.
For visualization purposes, we define three error functions:
$E_1(x) = k_1 \|x\|^2 - V(x),$  (7)
$E_2(x) = V(x) - k_2 \|x\|^2,$  (8)
$E_3(x) = k_3 \|x\|^2 + \nabla V(x)^T f(x),$  (9)
which correspond to the lower-bound, upper-bound, and dissipation penalties, respectively. Each is expected to be non-positive on the chosen domain to satisfy (2) and to ensure the exponential stability conditions are met.
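These maxima can also be checked numerically. The following is our own verification helper (not part of the paper's implementation), shown with Example 1's system as the usage case:

```python
import numpy as np

def max_errors(V, grad_V, f, n, r=1.0, N=5000, k1=0.5, k2=2.0, k3=0.1, seed=0):
    """Approximate maxima of E1, E2, E3 over N uniform samples of the ball
    ||x||_2 <= r; all three should be <= 0 for a valid certificate."""
    rng = np.random.default_rng(seed)
    X = rng.uniform(-r, r, size=(8 * N, n))
    X = X[np.einsum('ij,ij->i', X, X) <= r * r][:N]   # keep points inside D
    e1 = max(k1 * float(x @ x) - V(x) for x in X)
    e2 = max(V(x) - k2 * float(x @ x) for x in X)
    e3 = max(float(grad_V(x) @ f(x)) + k3 * float(x @ x) for x in X)
    return e1, e2, e3
```

For $V(x) = \|x\|^2$ and $f(x) = -x$ with $k_1 = 0.5$, $k_2 = 2$, $k_3 = 1$, all three sampled maxima come out negative, as expected.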
Example 1: 
Simple Two-Dimensional Linear System
$\dot{x}_1 = -x_1, \qquad \dot{x}_2 = -x_2.$  (10)
This system describes two independent, exponentially stable modes, where each state variable decays exponentially to the origin.
After two iterations, the algorithm identified the following quadratic function, which is commonly used as an initial guess for a Lyapunov function:
$V(x) = x_1^2 + x_2^2 = \|x\|^2.$  (11)
It is straightforward to verify that $V$ is a valid Lyapunov function for this system. Its orbital derivative is
$\nabla V(x)^T f(x) = -2(x_1^2 + x_2^2) = -2\|x\|^2.$  (12)
Thus, the exponential stability conditions are satisfied for any neighborhood of the origin with the constants $k_1 = k_2 = 1$ and $k_3 = 2$.
The surface and contour plots of the Lyapunov function (11) in Figure 3 illustrate its quadratic form, where the level sets form concentric circles centered at the origin, while the orbital derivative (12) in Figure 4 confirms strict negativity across all directions. Although the error functions (7)–(9) are identically 0 when $k_1 = k_2 = 1$ and $k_3 = 2$, for illustration we used $k_1 = 0.5$, $k_2 = 2$, and $k_3 = 1$; Figure 5, Figure 6 and Figure 7 present surface and contour plots of the errors $E_1$, $E_2$, $E_3$ under these parameters.
Example 2: 
The Damped Pendulum
$\dot{x}_1 = x_2, \qquad \dot{x}_2 = -\sin x_1 - x_2.$  (13)
This classic nonlinear system satisfies the conditions of Theorem 4: the linearization matrix at the equilibrium point $x^* = 0$ is Hurwitz, which confirms the exponential stability of the origin.
The algorithm successfully identified the following quadratic Lyapunov function in the ninth generation:
$V(x) = 2x_1^2 + 2x_1 x_2 + 2x_2^2.$  (14)
The orbital derivative is
$\nabla V(x)^T f(x) = 2x_2(2x_1 + x_2) - 2(x_1 + 2x_2)(x_2 + \sin x_1).$  (15)
The Lyapunov function guarantees the exponential stability of the damped pendulum system (13) within the domain $\|x\| \le 1$ with the constants $k_1 = 1$, $k_2 = 3$, and $k_3 = 1$.
Figure 8 presents the surface and contour plots of the Lyapunov function (14), highlighting its quadratic form, where the level sets form concentric ellipses centered at the origin. The corresponding orbital derivative (15), shown in Figure 9, confirms strict negativity across all directions. Additionally, Figure 10, Figure 11 and Figure 12 present the surface and contour plots of the errors $E_1$, $E_2$, $E_3$, visually confirming that the exponential stability conditions of the origin are met.
Example 3: 
A Two-Dimensional Polynomial System
$\dot{x}_1 = -x_1 - 10x_2^2, \qquad \dot{x}_2 = -2x_2.$  (16)
This example, presented in [11], examines a two-dimensional system that employs a compositional Lyapunov function. The system meets the criteria of Theorem 4: the linearization matrix at the equilibrium point $x^* = 0$ is Hurwitz, confirming the exponential stability of the origin.
The algorithm successfully identified the Lyapunov function after 16 iterations:
$V(x) = x_1^2 + 6x_2^2.$  (17)
The orbital derivative is
$\nabla V(x)^T f(x) = -2x_1(x_1 + 10x_2^2) - 24x_2^2.$  (18)
The Lyapunov function (17) ensures that the polynomial system (16) is exponentially stable within the domain $\|x\| \le 1$ with the constants $k_1 = 1$, $k_2 = 6$, and $k_3 = 2$.
Figure 13 shows the surface and contour plots of function (17), visually confirming its properties. Figure 14 shows that the orbital derivative is negative over the considered domain. Figure 15, Figure 16 and Figure 17 display the surface and contour plots of the error functions $E_1$, $E_2$, and $E_3$, providing additional insight into the effectiveness of (17) in capturing the stability characteristics of the polynomial system (16).
Example 4: 
The Van der Pol Oscillator
$\dot{x}_1 = -x_2, \qquad \dot{x}_2 = x_1 + x_2(x_1^2 - 1).$  (19)
This system provides another classic example of a nonlinear dynamical system: the Van der Pol equation in reverse time, that is, with $t$ replaced by $-t$. It fulfills the conditions outlined in Theorem 4, with a Hurwitz linearization matrix at the equilibrium point $x^* = 0$, thus verifying the exponential stability of the origin.
The algorithm successfully identified the Lyapunov function in the twelfth generation:
$V(x) = x_1^2 - x_1 x_2 + x_2^2.$  (20)
The orbital derivative is
$\nabla V(x)^T f(x) = -x_2(2x_1 - x_2) - (x_1 - 2x_2)\left(x_1 + x_2(x_1^2 - 1)\right).$  (21)
The Lyapunov function guarantees the exponential stability of the Van der Pol system (19) within the domain $\|x\| \le 1$ with the constants $k_1 = 0.1$, $k_2 = 3$, and $k_3 = 0.1$.
Figure 18 shows the surface and contour plots of function (20), visually confirming its properties. Figure 19 depicts the orbital derivative, revealing the intricate nonlinear dynamics of the system. Figure 20, Figure 21 and Figure 22 display the surface and contour plots of the error functions $E_1$, $E_2$, $E_3$, offering further insight into the accuracy with which function (20) represents the stability properties of the Van der Pol system (19). The red region in Figure 22 indicates areas where $E_3 > 0$, meaning the dissipation condition is not met; however, this region falls outside the domain $D$.

6. Discussion

Our numerical experiments demonstrate that a GP-based search for polynomial Lyapunov functions reliably verifies stability of nonlinear systems. The fitness metric (4) employed in our GP-based method was specifically designed to enforce both positivity and dissipation conditions, ensuring that every Lyapunov criterion is satisfied. This metric effectively steered the evolutionary process toward valid polynomial solutions in our numerical tests. Additionally, the method’s inherent flexibility allows it to adapt to a variety of problem configurations and complexities. However, further work is needed to optimize the algorithm for higher-dimensional systems.
Future research can build on this work in several promising directions:
  • Enlarging the domain of attraction: Refinements are required to systematically expand the region over which the Lyapunov function remains valid. By prioritizing domain extensions in GP, the resulting Lyapunov function could cover a larger portion of the state space, yielding stronger stability guarantees.
  • Enhanced constant selection and optimization: This approach can be strengthened further by developing advanced heuristics and adaptive strategies for tuning the numerical constants, including the terminal node constants a 1 , , a n and the Lyapunov fitness coefficients k 1 ,   k 2 , and k 3 . These enhancements could mitigate convergence to suboptimal solutions, optimize computational load, and improve the method’s effectiveness in estimating viable solutions.
  • Compositional analytical functions for high-dimensional systems: For larger-scale systems, adapting the algorithm to construct compositional (separable or modular) Lyapunov functions is promising. Such an adaptation could mitigate the curse of dimensionality while preserving solution interpretability in complex, real-world applications.
Additionally, our GP-based method can be extended to the control domain by generating control Lyapunov functions to facilitate the design of robust feedback controllers. Future work could also explore theoretical convergence guarantees, performance bounds, and the integration of these evolutionary methods with classical control techniques to develop a unified framework that combines the strengths of both data-driven and analytical approaches in stability analysis.

7. Conclusions

In this paper, we proposed a genetic programming-based method to construct polynomial Lyapunov functions for exponentially stable nonlinear systems. In Theorem 4, we established the existence criterion for these functions, which serves as the primary indicator of the method’s applicability. We also introduced the Lyapunov fitness (3) to quantify deviations from exponential stability conditions.
We validated the method through numerical experiments on several dynamical systems, including a simple linear system (10), a damped pendulum (13), a polynomial system (16), and the Van der Pol oscillator (19). These tests demonstrate that the proposed method effectively constructs Lyapunov functions and can be used as an automated tool for stability analysis.
The future directions outlined highlight the potential to further enhance and extend the proposed approach. This work offers a flexible, automated method for Lyapunov-based stability certification, bridging evolutionary computation with classical control theory.

Author Contributions

Conceptualization, R.P., A.R., A.S. and Y.K.; formal analysis, R.P. and Y.K.; investigation, R.P.; methodology, R.P. and A.R.; software, R.P.; validation, R.P.; visualization, R.P. and A.S.; writing—original draft, R.P.; writing—review and editing, R.P. and A.R. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Data Availability Statement

Data are contained within the article.

Conflicts of Interest

The authors declare no conflicts of interest.

References

  1. Pichkur, V.; Kapustyan, O.; Sobchuk, V. Stability and Attractors of Evolutionary Equations; Taras Shevchenko National University of Kyiv: Kyiv, Ukraine, 2023. (In Ukrainian)
  2. Khalil, H.K. Nonlinear Systems, 3rd ed.; Prentice Hall: Upper Saddle River, NJ, USA, 2002.
  3. Yoshizawa, T. Stability Theory by Lyapunov’s Second Method; Mathematical Society of Japan: Tokyo, Japan, 1966.
  4. Nikravesh, S.K.Y. Nonlinear Systems Stability Analysis: Lyapunov-Based Approach, 1st ed.; CRC Press: Boca Raton, FL, USA, 2018.
  5. Sobchuk, V.; Barabash, O.; Musienko, A.; Tsyganivska, I.; Kurylko, O. Mathematical Model of Cyber Risks Management Based on the Expansion of Piecewise Continuous Analytical Approximation Functions of Cyber Attacks in the Fourier Series. Axioms 2023, 12, 924.
  6. Kapustian, O.A.; Kapustyan, O.V.; Ryzhov, A.; Sobchuk, V. Approximate Optimal Control for a Parabolic System with Perturbations in the Coefficients on the Half-Axis. Axioms 2022, 11, 175.
  7. Anderson, J.; Papachristodoulou, A. Advances in Computational Lyapunov Analysis Using Sum-of-Squares Programming. Discret. Contin. Dyn. Syst.-B 2015, 20, 2361–2381.
  8. Ahmadi, A.A.; Parrilo, P.A. Sum of Squares Certificates for Stability of Planar, Homogeneous, and Switched Systems. IEEE Trans. Automat. Contr. 2017, 62, 5269–5274.
  9. Hafstein, S.F.H. Algorithm for Constructing Lyapunov Functions. Electron. J. Differ. Equ. 2007, 1, 1–101.
  10. Munser, L.; Devadze, G.; Streif, S. Synthesis of Lyapunov Functions Using Formal Verification. arXiv 2021, arXiv:2112.01835.
  11. Grüne, L. Computing Lyapunov Functions Using Deep Neural Networks. J. Comput. Dyn. 2021, 8, 131–152.
  12. Hafstein, S.; Giesl, P. Review on Computational Methods for Lyapunov Functions. Discret. Contin. Dyn. Syst.-B 2015, 20, 2291–2331.
  13. Berkal, M.; Navarro, J.F. Qualitative Behavior of a Two-dimensional Discrete-time Prey–Predator Model. Comp. Math. Methods 2021, 3, e1193.
  14. Berkal, M.; Almatrafi, M.B. Bifurcation and Stability of Two-Dimensional Activator–Inhibitor Model with Fractional-Order Derivative. Fractal Fract. 2023, 7, 344.
  15. Koza, J.R. Genetic Programming as a Means for Programming Computers by Natural Selection. Stat. Comput. 1994, 4, 87–112.
  16. Holland, J.H. Adaptation in Natural and Artificial Systems: An Introductory Analysis with Applications to Biology, Control, and Artificial Intelligence; The MIT Press: Cambridge, MA, USA, 1992.
  17. Grosman, B.; Lewin, D.R. Automatic Generation of Lyapunov Functions Using Genetic Programming. IFAC Proc. Vol. 2005, 38, 75–80.
  18. McGough, J.S.; Christianson, A.W.; Hoover, R.C. Symbolic Computation of Lyapunov Functions Using Evolutionary Algorithms. In Proceedings of the IASTED Technology Conferences, Banff, AB, Canada, 15–17 July 2010; Acta Press: Calgary, AB, Canada, 2010.
  19. Feng, J.; Zou, H.; Shi, Y. Combining Neural Networks and Symbolic Regression for Analytical Lyapunov Function Discovery. arXiv 2024, arXiv:2406.15675.
  20. Angelis, D.; Sofos, F.; Karakasidis, T.E. Artificial Intelligence in Physical Sciences: Symbolic Regression Trends and Perspectives. Arch. Comput. Methods Eng. 2023, 30, 3845–3865.
  21. Peet, M.M. Exponentially Stable Nonlinear Systems Have Polynomial Lyapunov Functions on Bounded Regions. IEEE Trans. Automat. Contr. 2009, 54, 979–987.
  22. Beyer, H.-G.; Schwefel, H.-P. Evolution Strategies—A Comprehensive Introduction. Nat. Comput. 2002, 1, 3–52.
  23. Hancock, P.J.B. An Empirical Comparison of Selection Methods in Evolutionary Algorithms. In Evolutionary Computing; Fogarty, T.C., Ed.; Lecture Notes in Computer Science; Springer: Berlin/Heidelberg, Germany, 1994; Volume 865, pp. 80–94. ISBN 978-3-540-58483-4.
  24. Cranmer, M. Interpretable Machine Learning for Science with PySR and SymbolicRegression.jl. arXiv 2023, arXiv:2305.01582.
  25. Fortin, F.-A.; Rainville, F.-M.D.; Gardner, M.-A.; Parizeau, M.; Gagné, C. DEAP: Evolutionary Algorithms Made Easy. J. Mach. Learn. Res. 2012, 13, 2171–2175.
  26. Hunter, J.D. Matplotlib: A 2D Graphics Environment. Comput. Sci. Eng. 2007, 9, 90–95.
Figure 1. Crossover diagram.
Figure 2. Mutation diagram.
Figure 3. The Lyapunov function (11) for the linear system (10).
Figure 4. The orbital derivative (12) of the Lyapunov function (11) for the linear system (10).
Figure 5. Lower-bound penalty (7) of the Lyapunov function (11) for the linear system (10).
Figure 6. Upper-bound penalty (8) of the Lyapunov function (11) for the linear system (10).
Figure 7. Dissipation penalty (9) of the orbital derivative (12) for the linear system (10).
Figure 8. The Lyapunov function (14) for the damped pendulum (13).
Figure 9. The orbital derivative (15) of the Lyapunov function (14) for the damped pendulum (13).
Figure 10. Lower-bound penalty (7) of the Lyapunov function (14) for the damped pendulum (13).
Figure 11. Upper-bound penalty (8) of the Lyapunov function (14) for the damped pendulum (13).
Figure 12. Dissipation penalty (9) of the orbital derivative (15) for the damped pendulum (13).
Figure 13. The Lyapunov function (17) for a polynomial system (16).
Figure 14. The orbital derivative (18) of the Lyapunov function (17) for a polynomial system (16).
Figure 15. Lower-bound penalty (7) of the Lyapunov function (17) for a polynomial system (16).
Figure 16. Upper-bound penalty (8) of the Lyapunov function (17) for a polynomial system (16).
Figure 17. Dissipation penalty (9) of the orbital derivative (18) for a polynomial system (16).
Figure 18. The Lyapunov function (20) for the Van der Pol oscillator (19).
Figure 19. The orbital derivative (21) of the Lyapunov function (20) for the Van der Pol oscillator (19).
Figure 20. Lower-bound penalty (7) of the Lyapunov function (20) for the Van der Pol oscillator (19).
Figure 21. Upper-bound penalty (8) of the Lyapunov function (20) for the Van der Pol oscillator (19).
Figure 22. Dissipation penalty (9) of the orbital derivative (21) for the Van der Pol oscillator (19). The red regions indicate areas where E_3 > 0; these areas lie outside the domain D.
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.

Pykhnivskyi, R.; Ryzhov, A.; Sobchuk, A.; Kravchenko, Y. Evolutionary Search for Polynomial Lyapunov Functions: A Genetic Programming Method for Exponential Stability Certification. Axioms 2025, 14, 343. https://doi.org/10.3390/axioms14050343