1. Introduction
In this article, we consider an optimal control problem with a mixed cost function, composed of a term evaluated at the final state and the integral of a function involving the state and control variables. In addition to the standard constraints, the state and control variables are also subject to very general constraints. As mentioned in the abstract, we derive a maximum principle where the adjoint equation is an integral equation whose right-hand side is the sum of a Riemann integral and a Stieltjes integral with respect to a Borel measure.
Optimal control problems with mixed state-control constraints arise in various practical applications, such as economics [1], aerospace engineering [2], and biomedical science [3]. The inclusion of constraints on both the state and control variables significantly complicates the analysis and requires advanced mathematical techniques to establish the necessary optimality conditions [4]. One of the best-established methods for handling such constraints is the Dubovitskii–Milyutin theory [5], which provides a framework for deriving optimality conditions through conical approximations. This approach has been successfully applied to nonlinear systems with state constraints [6] and to problems involving control-affine dynamics [7].
Although the maximum principle obtained here provides necessary conditions for optimality, additional conditions are often required to ensure sufficiency. This work examines these conditions and their broader implications. A key contribution is the adaptation of Dubovitskii–Milyutin theory to handle mixed state-control constraints effectively; such constraints have previously been explored in applications such as energy management systems [8], spacecraft trajectory optimization [9], and financial engineering [10].
To demonstrate the practical application of our findings, we use two optimal control problems as illustrative examples: epidemic modeling (SIR model) and optimal control strategies at the onset of a new viral outbreak (see [11]). The control of infectious diseases has been extensively studied using optimal control methods [12], with constraints on state variables often representing limitations in medical resources or vaccine availability [13].
Furthermore, we believe that our results can be applied to problems such as the optimal guidance of endoatmospheric launch vehicle systems under mixed state-control constraints (see [14]) and the improvement of vehicle efficiency through the optimization of driving behavior (see [15]). This research also opens doors to solving complex problems in various fields, including controlled mechanical systems [16] and high-resolution image processing with neural networks [17]. By expanding on established applications of Dubovitskii–Milyutin theory, as in previous studies such as [18], this study further enhances the understanding and utility of optimal control solutions under mixed constraints, advancing both the theoretical and applied aspects of the field.
The following provides an outline of this paper. In Section 2, we introduce the problem under study and present the main results. Section 3 provides the necessary theoretical foundations of the Dubovitskii–Milyutin theory, which serves as the basis for proving our main results; we also give a detailed exposition of key concepts and relevant previous work to contextualize our contributions. In Section 4, we rigorously prove the main result of this work. Section 5 establishes sufficient conditions to guarantee the existence of an optimal pair for the problems considered. Section 6 presents two real-world models related to the dynamics of SIR-type infections; these examples demonstrate the practical applicability of our results in biological contexts, emphasizing the relevance of the problems addressed. In Section 7, we propose an open problem that emerges from our research, encouraging future investigations to explore new directions and address unresolved questions in this area. Finally, in Section 8, we summarize our findings, discuss their significance, and outline potential avenues for future research.
2. Setting of the Problem and the Main Results
In this section, we introduce the optimal control problem under consideration, which involves a system governed by a differential equation and subject to control and state constraints; also, the main theorems are presented. The goal is to minimize the objective function, which consists of a terminal cost term and an integral cost term over time. Formally, the problem is presented as follows:
Let and be fixed parameters, and consider the mappings and g. We introduce the function spaces and , equipped with the supremum norm. Additionally, we consider the classical Banach space , consisting of essentially bounded measurable functions, endowed with the essential supremum norm.
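For reference, these norms take the standard form (the displayed formulas did not survive extraction; this is the usual convention for such spaces, not a reproduction of the original display):

```latex
\|x\|_{\infty} \;=\; \sup_{t\in[0,T]} \|x(t)\|,
\qquad
\|u\|_{L^{\infty}} \;=\; \operatorname*{ess\,sup}_{t\in[0,T]} \|u(t)\|.
```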
Assumption 1. - (a)
The mappings Θ, Ψ, g, and f are continuous and possess partial derivatives , , , , , and that are sufficiently smooth on compact subsets of .
- (b)
The set is convex and closed, with a nonempty interior, i.e.,
- (c)
The function g is continuous and satisfies , where are fixed points. Furthermore, g is continuously differentiable with respect to its first variable, with derivative denoted by , such that whenever .
Theorem 1. Let us suppose that Conditions (a)–(c) of the previous assumptions are fulfilled. Let be a solution of Problem 1.
- (α)
Then, there exists a non-negative Borel measure μ on with support in
- (β)
There exists and a function such that and η are not simultaneously zero. Moreover, η is a solution of the integral equation; and also, for all and almost all , it follows that
Remark 1. If Condition (6) is replaced by the condition then we can prove the following result as well:
Theorem 2. Let us suppose that Conditions (a)–(b) are fulfilled. Let be a solution of the new problem, and suppose that g has a continuous derivative with respect to its first variable such that . Then, there exist , and a function such that and η are not both zero. Moreover, η is a solution of the differential equation; and also, for all and almost all , it follows that
3. Preliminary Results
In this section, we describe the key results of the Dubovitskii–Milyutin (DM) framework. We define the general constrained optimization problem and develop approximation cones for both the objective function and the constraints. The optimality condition, represented by the Euler–Lagrange (EL) equation, is derived using the duals of these cones. These fundamental results follow from well-established principles of DM theory, with comprehensive proofs provided in [18,19,20].
3.1. Cones, Dual Cones, and the DM Theorem
Let E be a topological vector space with a locally convex structure, and let be its dual space (i.e., the space of continuous linear functionals on E).
A subset is called a cone with vertex at zero if it satisfies the property for each .
The dual cone associated with is defined as follows:
If is a family of convex and w-closed cones, then the following equality holds: where the closure is taken in the -topology. (See [20].)
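In the standard notation of [20], with K a cone in a locally convex space E, these objects read as follows (supplied here as the usual formulation, since the displayed formulas are missing from the extracted text):

```latex
K^{*} \;=\; \bigl\{\, f \in E^{*} \;:\; f(x) \ge 0 \ \text{ for all } x \in K \,\bigr\},
\qquad
\Bigl(\,\bigcap_{i=1}^{n} K_i \Bigr)^{\!*} \;=\; \overline{\,K_1^{*} + \cdots + K_n^{*}\,},
```

where the closure on the right is taken in the weak-* topology.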
Lemma 1. Let be open convex cones such that
Lemma 2 (DM). Let be convex cones with the vertex at zero, with open. Then, if and only if there exist (), not all zero, such that
3.2. The Abstract Euler–Lagrange (EL) Equation
Consider a function and ( ) such that ( ). Define the following problem:
Remark 2. The sets , () are usually given by inequality-type constraints, while is defined by equality-type constraints, and in general,
Theorem 3 (DM).
Let be an optimal solution to Problem (12), and assume that the following conditions are met:- (a)
denotes the descent (or decay) cone of at
- (b)
represent the feasibility (or admissible) cones associated with at for each .
- (c)
corresponds to the tangent cone to at
If the cones are convex, then there exist functionals , for , not all identically zero, such that
Remark 3. The expression given in (13) is referred to as the Generalized Euler–Lagrange Equation. A detailed explanation of the concepts of recession, feasibility, and tangency cones can be found in [20]. The procedure for implementing the DM Theorem in concrete cases is as follows:
- 1. Determine the decrease directions.
- 2. Identify the feasible directions.
- 3. Establish the tangent directions.
- 4. Construct the associated dual cones.
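These four steps feed into the generalized Euler–Lagrange equation of Theorem 3, which in standard DM notation reads as follows (supplied here as the usual formulation, since the display is missing from the extracted text):

```latex
f_0 + f_1 + \cdots + f_n + f_{n+1} \;=\; 0,
\qquad f_i \in K_i^{*}, \quad i = 0, \dots, n+1,
```

where K_0 is the decay cone of the objective, K_1, …, K_n are the feasible cones of the inequality constraints, K_{n+1} is the tangent cone of the equality constraint, and the functionals f_i are not all zero.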
3.3. Important Results
Now, we explicitly determine the decay, admissible, and tangent cones in certain cases. Given , we define the decay cone as , representing the set of decay directions.
Theorem 4 (See [20], p. 48). If E is a Banach space and is Fréchet-differentiable at , then where denotes the Fréchet derivative of evaluated at .
Theorem 5 (See [20], p. 45). Let be a continuous and convex function defined on a topological vector space E, and let . Then, the function admits a directional derivative in every direction at , and the following properties hold:
- (a) The directional derivative of at along h is given by
- (b) The decay cone at is characterized as
The admissible cone to will be denoted by .
Theorem 6 (See [20], p. 59). If is any convex set with a nonempty interior, i.e., then
The set consisting of all tangent vectors to at forms a cone with its vertex at the origin, which will be denoted by and referred to as the tangent cone.
In this section, we highlight Lyusternik’s Theorem, a fundamental tool for computing the set of tangent vectors. This result plays a key role in our analysis, as it enables us to determine the vectors that are tangent at a given point.
Theorem 7 (Lyusternik [20]). Let and be Banach spaces, and assume that
- (a) and the mapping is Fréchet-differentiable at
- (b) The derivative is surjective.
Under these conditions, the tangent cone to the set at the point is given by
4. Proof of the Main Theorem 1
Proof. Let be a function defined as follows: and let , where the sets , and consist of pairs that satisfy conditions (3), (4), (5), and (6), respectively. Then, Problem 1 is equivalent to the following:
- (a) Study of the mapping .
Let denote the decrease cone of at the point . According to Theorem 4, we have If , then it follows that According to Example 9.2 in [20], p. 62, the derivative is given by Thus, for any , there exists such that
- (b) Analysis of the restriction
We aim to find the tangent cone to at the point Assume that the system is controllable (see [21,22,23]); then, in view of Theorem 7, we find that Now, let us calculate To do so, we shall consider the following linear spaces: Then, by Proposition 2.40 from [18], we have that if, and only if, there exists such that Moreover, by Lemma 2.5 from [18], it follows that is closed; then, by the cone properties, we obtain that Therefore, if, and only if, .
- (c) Analysis of restriction
Then . Given that V is convex, closed, and , the following hold: 1. and are closed and convex. 2. and .
Let be the admissible cone to at : where is the admissible cone to at . For any , there exists such that By Theorem 6, is a support of at .
- (d) Analysis of restriction
Let us define the following function: Then, by Example 7.5 in [20], p. 52, we have that where But, by Theorem 5, we obtain From (15), we obtain that Then, by Example 10.3 in [20], p. 73, we have that for all there is a non-negative Borel measure on such that and has support in
- (e) The Euler–Lagrange equation.
The convexity of the cones and is evident. Then, applying Theorem 3, we can find functionals , not simultaneously zero, satisfying Equation (16) can be expressed as follows: Now, for every , there exists a function that satisfies Equation (14) with . Therefore, , and consequently . As a result, the Euler–Lagrange equation can be expressed as Let be the solution of Equation (7), which means This equation is a Volterra integral equation of the second kind, which has a unique solution (see [24], p. 519). By multiplying both sides of the previous equation by and integrating from 0 to T, we obtain
This reformulation simplifies the subsequent analysis of the structure of the solution.
The third term on the right-hand side can be simplified by using integration by parts for the Stieltjes integral, along with the conditions and . Specifically, since and , it follows that .
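The integration-by-parts identity invoked here is the standard one for Riemann–Stieltjes integrals, stated in its usual form (the specific display is not reproduced in the extracted text): for f of bounded variation and g continuous on [0, T],

```latex
\int_0^T g(t)\,df(t) \;+\; \int_0^T f(t)\,dg(t) \;=\; f(T)\,g(T) \;-\; f(0)\,g(0).
```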
Then, by the EL Equation (15), we obtain that Since is a support of at the point , from Example 10.5 in [20], p. 76, it follows that for all and almost all
Now, we will see that the case is not possible. In fact, if then Thus, that is, So, from Equation (7) and the fact that , we obtain that which implies that Also, from (18), we have that then, from the EL equation, it follows that where which contradicts the statement of Theorem 3.
At this point, we have introduced two additional assumptions: firstly, we have assumed that Secondly, we have supposed that the variational linear system is controllable. We shall now establish that these assumptions are superfluous. Indeed, if then, by the definition of , we have that Let us put Then, from Equation (17), we have that for all such that z is a solution of Equation (14). Then, which leads to the conclusion that for all and almost all
Assuming System (14) is not controllable, according to an equivalent characterization of controllability outlined in [20,22,25], there is a non-trivial function that is a solution of such that, for all , it follows that By taking we obtain that is a solution of (7), and therefore for all and almost all
Thus, the proof of Theorem 1 is now complete. □
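The adjoint equation in the proof is a Volterra integral equation of the second kind; such equations can be solved numerically by simple quadrature. A minimal pure-Python sketch follows; the forcing term f and kernel K below are hypothetical stand-ins for illustration, not the paper's data:

```python
import math

def solve_volterra(f, K, T, n):
    """Solve eta(t) = f(t) + integral_0^t K(t, s) eta(s) ds on [0, T]
    with the composite trapezoidal rule on n subintervals."""
    h = T / n
    t = [i * h for i in range(n + 1)]
    eta = [f(t[0])]  # at t = 0 the integral vanishes
    for i in range(1, n + 1):
        # known part of the quadrature: weight h/2 at s = 0, h inside
        acc = 0.5 * h * K(t[i], t[0]) * eta[0]
        for j in range(1, i):
            acc += h * K(t[i], t[j]) * eta[j]
        # eta(t_i) appears on both sides with weight h/2; solve for it
        eta.append((f(t[i]) + acc) / (1.0 - 0.5 * h * K(t[i], t[i])))
    return t, eta

# Test problem: eta(t) = 1 + int_0^t eta(s) ds, exact solution e^t
t, eta = solve_volterra(lambda t: 1.0, lambda t, s: 1.0, 1.0, 200)
```

For the test problem above, eta[-1] approximates e to roughly four digits with n = 200, consistent with the O(h²) accuracy of the trapezoidal rule.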
5. Sufficient Condition for Optimality
The necessary optimality condition established in Theorem 1 (maximum principle) can also serve as a sufficient condition under certain additional assumptions. In particular, let us consider the case of Problem 1, where the governing differential equation is linear.
Problem 2. where are continuous matrix functions. Let be a pair satisfying Conditions (20)–(23).
Theorem 8. Assume that Conditions (a)–(c), (α), and from Theorem 1 hold.
Moreover, suppose the following conditions are satisfied:
- (A) System (20) is controllable.
- (B) There exists such that
- (C) The corresponding solution associated with in Equation (20) satisfies and
- (D) The functions and Θ are convex with respect to their first two arguments.
Then, the pair is a global solution to Problem 2.
Proof. Let us define the function as follows: Consider the set , where is given by (20) and (21), by (22), and by (23), as in Theorem 1. Then, Problem 2 is equivalent to the following:
It is clear that the sets are convex, and from Conditions , we have that is convex. Additionally, Thus, by Theorem 2.17 from [18], we have the following: is a minimum point of at if, and only if, there are , not all zero, such that Here, are cones defined as in Theorem 1.
Let be the admissible cone to at the point . Then, where and Then, by dual-cone properties, we have that So, each has the following form: Here, is a non-negative Borel measure with support on
Now, suppose the Maximum Principle from Theorem 1 holds. This guarantees the existence of , , and non-negative Borel measures supported on R. Furthermore, there exists a function that satisfies the following integral equation: where both and are non-zero. Furthermore, for every and almost every , the following holds:
To prove the theorem, it suffices to show that there exist elements , not all equal to zero, such that For this purpose, we define the set and the functionals Then, from (25), we obtain that So, is a support of at . Hence, Let us define the functional as follows:
Now, we will see that where is as in Theorem 1. In fact, suppose that Then, multiplying both sides of Equation (24) by and integrating from 0 to T, we obtain that Then, Therefore, Thus,
Next, we will introduce the following functionals: by Then, , and it also holds that with the condition that not all of these functionals are zero, since, by assumption, and are not simultaneously zero.
From the convexity conditions, the global minimality of follows. □
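Condition (A) of Theorem 8, the controllability of System (20), can be checked computationally. For a time-invariant pair (A, B) it reduces to the Kalman rank condition rank[B, AB, …, A^(n-1)B] = n; the pure-Python sketch below is illustrative only (the matrices are hypothetical, and for a genuinely time-varying system a controllability-Gramian test would be needed instead):

```python
def mat_mul(A, B):
    """Multiply two matrices given as lists of rows."""
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

def rank(M, eps=1e-9):
    """Matrix rank via Gaussian elimination."""
    M = [row[:] for row in M]
    rows, cols, r = len(M), len(M[0]), 0
    for c in range(cols):
        piv = next((i for i in range(r, rows) if abs(M[i][c]) > eps), None)
        if piv is None:
            continue
        M[r], M[piv] = M[piv], M[r]
        for i in range(rows):
            if i != r and abs(M[i][c]) > eps:
                factor = M[i][c] / M[r][c]
                M[i] = [M[i][j] - factor * M[r][j] for j in range(cols)]
        r += 1
        if r == rows:
            break
    return r

def is_controllable(A, B):
    """Kalman rank test: rank [B, AB, ..., A^(n-1)B] == n."""
    n = len(A)
    blocks = [B]
    for _ in range(n - 1):
        blocks.append(mat_mul(A, blocks[-1]))
    # horizontally stack the blocks into the controllability matrix
    C = [sum((blk[i] for blk in blocks), []) for i in range(n)]
    return rank(C) == n

# Double integrator: controllable from a single input
controllable = is_controllable([[0.0, 1.0], [0.0, 0.0]], [[0.0], [1.0]])
```

By contrast, a pair whose input cannot reach every mode, e.g. A = I with B = (1, 0)ᵀ, fails the test.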
6. A Mathematical Model
In this section, we will present two important real-life models where our results can be applied; then, in the next section, we will present an open problem.
6.1. Optimal Control in Epidemics: SIR Model
The study in [26] analyzed this model using the Hamiltonian framework. Consider a population affected by an epidemic, where the goal is to mitigate its spread through vaccination. The following variables are introduced:
, representing the number of infectious individuals capable of transmitting the disease.
, denoting the number of individuals who are not infected but are susceptible to the disease.
, indicating the number of recovered individuals who are no longer susceptible to infection.
Let denote the infection rate, the recovery rate, and the vaccination rate. The control function is constrained by . The optimal control problem for the SIR model system is formulated as follows: where . The goal is to determine an optimal vaccination strategy that minimizes the above cost function over a fixed time horizon T.
Using the following notation, we can write this problem in an abstract formulation using the foregoing theorem. In order to apply Theorem 2, we compute the adjoint equation and Pontryagin's Maximum Principle to find the optimal control. In fact,
Note that, since there is no condition on , we have . For simplicity in the calculations, we also set . Now, we proceed to compute and . Then, for all , we have This is equivalent to Then, the optimal control is given by At the final time T, we have . Hence, if , then near the final time T.
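To illustrate, the SIR dynamics with vaccination can be simulated directly. The sketch below assumes the standard controlled form S' = -βSI - uS, I' = βSI - γI, R' = γI + uS; this form and all parameter values are assumptions for illustration, since the paper's displayed system is not reproduced here:

```python
def simulate_sir(beta, gamma, u, S0, I0, R0, T, n):
    """Forward-Euler integration of the SIR model with a constant
    vaccination rate u; returns the final state and the epidemic peak."""
    h = T / n
    S, I, R = S0, I0, R0
    peak = I0
    for _ in range(n):
        dS = -beta * S * I - u * S      # susceptibles: infection + vaccination
        dI = beta * S * I - gamma * I   # infectious: new cases minus recoveries
        dR = gamma * I + u * S          # removed: recovered plus vaccinated
        S, I, R = S + h * dS, I + h * dI, R + h * dR
        peak = max(peak, I)
    return S, I, R, peak

# hypothetical rates: no vaccination vs. a constant vaccination effort
S_a, I_a, R_a, peak_no_vacc = simulate_sir(0.5, 0.1, 0.0, 0.99, 0.01, 0.0, 50.0, 5000)
_, _, _, peak_vacc = simulate_sir(0.5, 0.1, 0.3, 0.99, 0.01, 0.0, 50.0, 5000)
```

Since vaccination only removes susceptibles, the epidemic peak with u > 0 is never larger than without it, which is consistent with the qualitative conclusions drawn above.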
Remark 4. In these types of models, it is natural to assume that the number of infectious individuals capable of spreading the infection, , is less than or equal to the number of non-infectious but susceptible individuals, . To apply Theorem 1, we incorporate the following condition: Therefore, the adjoint equation becomes the following equation, while the maximum principle remains the same:
6.2. Optimal Control Problem at the Early Stage of a Viral Outbreak
This model was analyzed in [11] using the Karush–Kuhn–Tucker conditions. Consider a population affected by an epidemic, such as COVID-19, where the objective is to curb its spread through vaccination. The SIR model with controlled intervention is described as follows:
, representing the number of infectious individuals capable of transmitting the disease;
, denoting the number of individuals who are not infected but remain susceptible;
, indicating the number of recovered individuals who are no longer at risk of infection.
Here, represents the infection rate, the recovery rate, and the vaccination rate. The control function is subject to the constraint . The optimal control problem for the SIR model system is formulated as follows: The goal is to determine an optimal vaccination strategy that minimizes the above cost function over a fixed time horizon T.
Using the following notation, we can write this problem in an abstract formulation. In order to apply Theorem 2, we compute the adjoint equation and Pontryagin's Maximum Principle to find the optimal control. In fact,
Note that, since there is no condition on , we have . For simplicity in the calculations, we also set . Now, we proceed to compute and . Then, for all , we obtain This is equivalent to
From here, we can specify different functions that allow us to determine the form of the optimal control. For example,
- (i) If (a) , then the optimal control can be computed approximately from the maximum principle. Then, the optimal control is given by
- (ii) If , then the optimal control can be computed approximately from the maximum principle. Then, the optimal control is given by
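Numerically, maximum-principle systems like those above are commonly solved with the forward-backward sweep method: integrate the state forward, integrate the adjoint backward, update the control from the maximum condition, and iterate. The sketch below applies this to a scalar linear-quadratic test problem, not one of the paper's models: minimize (1/2)∫(x² + u²)dt subject to x' = x + u, x(0) = 1, for which the adjoint satisfies p' = -x - p, p(T) = 0, and the optimal control is u = -p.

```python
def forward_backward_sweep(T=0.5, n=500, iters=100, relax=0.5):
    """Forward-backward sweep for min (1/2) int (x^2 + u^2) dt,
    x' = x + u, x(0) = 1; adjoint p' = -x - p, p(T) = 0; u* = -p."""
    h = T / n
    u = [0.0] * (n + 1)
    x = [0.0] * (n + 1)
    p = [0.0] * (n + 1)
    for _ in range(iters):
        x[0] = 1.0                      # forward sweep for the state
        for i in range(n):
            x[i + 1] = x[i] + h * (x[i] + u[i])
        p[n] = 0.0                      # backward sweep for the adjoint
        for i in range(n, 0, -1):
            p[i - 1] = p[i] + h * (x[i] + p[i])
        # relaxed update from the optimality condition u = -p
        u = [relax * (-p[i]) + (1.0 - relax) * u[i] for i in range(n + 1)]
    return x, u, p

x, u, p = forward_backward_sweep()
```

After convergence, u ≈ -p pointwise and the control is negative, steering the unstable state back toward the origin; the relaxation factor guards the iteration against divergence on longer horizons.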
7. Open Problem
In this section, we present an open problem that offers a promising avenue for future research and could even serve as the basis for a doctoral thesis. This problem addresses an optimal control scenario that simultaneously incorporates impulses and constraints on a state variable. Our main objective is to analyze the following optimal control problem, which we plan to explore in future work:
8. Conclusions and Final Remarks
In this article, we analyze an optimal control problem characterized by a mixed cost function, composed of a terminal cost evaluated at the final state and an integral term involving the state and control variables. In addition to standard constraints, we incorporate general constraints on both the state and control variables, significantly expanding the applicability of our results. To address this problem, we derive a maximum principle in which the adjoint equation is an integral equation, the right-hand side of which consists of the sum of a Riemann integral and a Stieltjes integral with respect to a Borel measure. One of the key contributions of this work was the effective application of Dubovitskii–Milyutin theory to obtain the necessary conditions for optimality under mixed constraints. This method allowed us to handle equality and inequality constraints in a unified framework, confirming its robustness in addressing complex optimal control problems. Our theoretical findings were illustrated through practical examples, demonstrating the effectiveness of our approach in real-life scenarios.
The results presented in this study open several avenues for future research. First, while our approach provides a solid foundation for solving constrained optimal control problems, further exploration is required for systems governed by nonlinear dynamics, where additional difficulties may arise due to nonconvexity of the constraint set. A natural extension would involve relaxing some of the smoothness assumptions and incorporating state-dependent constraints, which are common in applications such as economics, engineering, and epidemiology. Furthermore, computational methods for solving constrained optimal control problems remain an active area of research. Our findings suggest that the integration of modern numerical optimization techniques, such as machine learning-based approaches and neural networks, could improve the efficiency of control strategies, especially in high-dimensional systems. The interaction between traditional control theory and data-driven methods is expected to generate new insights and improved solution techniques.
Beyond theoretical aspects, practical applications of our results can be envisioned in various fields. For example, in robotics, mixed-constraint optimal control is crucial for trajectory planning and motion optimization in autonomous systems. Similarly, in aerospace engineering, state-constrained optimal control is critical for mission planning and vehicle guidance under physical and operational constraints. Furthermore, epidemiological models, such as the SIR-based examples considered here, can greatly benefit from advanced optimal control strategies for designing effective intervention policies for disease mitigation. Finally, future work could also focus on sensitivity analysis of optimal solutions with respect to perturbations in system parameters. Since many real-world control problems involve uncertainty, incorporating robust and stochastic optimal control techniques would further enhance the practical applicability of our approach.
In conclusion, our study provides a rigorous theoretical framework for mixed-constraint optimal control problems and demonstrates the effectiveness of the Dubovitskii–Milyutin approach in obtaining optimality conditions. By extending these results to nonlinear systems, integrating modern computational methods, and exploring practical applications in engineering and the biomedical sciences, we anticipate significant advances in both the theory and application of constrained optimal control.
Author Contributions
H.L.: Writing—original draft, review, and editing, research, formal analysis, and conceptualization. G.T.-R.: Writing—review and editing, research, and formal analysis. J.P.R.-L.: Writing—review and editing, research, and formal analysis. C.D.: Writing—review and editing, research, and formal analysis. All authors have read and agreed to the published version of the manuscript.
Funding
This research received no external funding.
Data Availability Statement
The original contributions presented in this study are included in the article. Further inquiries can be directed to the corresponding author.
Conflicts of Interest
The authors declare no conflicts of interest.
References
- Cesari, L. Optimization—Theory and Applications. Problems with Ordinary Differential Equations; Springer-Verlag: Berlin/Heidelberg, Germany, 1983. [Google Scholar]
- Bryson, A.E.; Ho, Y.C. Applied Optimal Control. Optimization, Estimation and Control; Routledge: London, UK, 1975. [Google Scholar]
- Lenhart, S.; Workman, J.T. Optimal Control Applied to Biological Models; Chapman and Hall/CRC: New York, NY, USA, 2007. [Google Scholar]
- Clarke, F.H. Optimization and Nonsmooth Analysis. In Classics in Applied Mathematics; SIAM: Philadelphia, PA, USA, 1990. [Google Scholar]
- Dubovitskii, A.Y.; Milyutin, A.A. Extremum problems in the presence of restrictions. Comput. Math. Math. Phys. 1965, 5, 1–80. [Google Scholar] [CrossRef]
- Frankowska, H. Optimal Control Under State Constraints. In Proceedings of the International Congress of Mathematicians 2010 (ICM 2010), Hyderabad, India, 19–27 August 2010; pp. 2915–2942. [Google Scholar]
- Vinter, R. Convex duality and nonlinear optimal control. SIAM J. Control Optim. 1993, 31, 518–538. [Google Scholar] [CrossRef]
- Middelberg, A.; Zhang, J.; Xia, X. An optimal control model for load shifting—With application in the energy management of a colliery. Appl. Energy 2009, 86, 1266–1273. [Google Scholar] [CrossRef]
- Betts, J.T. Survey of Numerical Methods for Trajectory Optimization. J. Guid. Control. Dyn. 1998, 21, 193. [Google Scholar] [CrossRef]
- Yong, J.; Zhou, X. Stochastic Controls. Hamiltonian Systems and HJB Equations; Springer: Berlin/Heidelberg, Germany, 1999. [Google Scholar]
- Smirnova, A.; Ye, X. On optimal control at the onset of a new viral outbreak. Infect. Dis. Model. 2024, 9, 995–1006. [Google Scholar] [CrossRef] [PubMed]
- Behncke, H. Optimal control of deterministic epidemics. Optim. Control Appl. Meth. 2000, 21, 269–285. [Google Scholar] [CrossRef]
- Sharomi, O.; Malik, T. Optimal control in epidemiology. Ann. Oper. Res. 2017, 251, 55–71. [Google Scholar] [CrossRef]
- Bonalli, R.; Herisse, B.; Trela, E. Optimal Control of Endoatmospheric Launch Vehicle Systems: Geometric and Computational Issues. IEEE Trans. Automat. Control 2020, 65, 2418–2433. [Google Scholar] [CrossRef]
- Lee, H.; Kim, K.; Kim, N.; Cha, S.W. Energy efficient speed planning of electric vehicles for car-following scenario using model-based reinforcement learning. Appl. Energy 2022, 313, 118460. [Google Scholar] [CrossRef]
- Bonnans, J.F.; Shapiro, A. Perturbation Analysis of Optimization Problems; Springer: Berlin/Heidelberg, Germany, 2000. [Google Scholar]
- Poggio, T.; Girosi, F. Networks for Approximation and Learning. Proc. IEEE 1990, 78, 1481–1497. [Google Scholar] [CrossRef]
- Leiva, H. Pontryagin’s maximum principle for optimal control problems governed by nonlinear impulsive differential equations. J. Math. Appl. 2023, 46, 15–68. [Google Scholar]
- Leiva, H.; Cabada, D.; Gallo, R. Roughness of the controllability for time varying systems under the influence of impulses, delay, and non-local conditions. Nonauton. Dyn. Syst. 2020, 7, 126–139. [Google Scholar] [CrossRef]
- Girsanov, I.V. Lectures on Mathematical Theory of Extremum Problems. In Lecture Notes in Economics and Mathematical Systems; Beckmann, M., Goos, G., Künzi, H.P., Eds.; Springer-Verlag: Berlin/Heidelberg, Germany, 1972. [Google Scholar]
- Cabada, D.; Garcia, K.; Guevara, C.; Leiva, H. Controllability of time varying semilinear non-instantaneous impulsive systems with delay and nonlocal conditions. Arch. Control Sci. 2022, 32, 335–357. [Google Scholar]
- Lee, E.B.; Markus, L. Foundations of Optimal Control Theory; Wiley: New York, NY, USA, 1967. [Google Scholar]
- Leiva, H. Controllability of semilinear impulsive nonautonomous systems. Int. J. Control. 2014, 88, 585–592. [Google Scholar] [CrossRef]
- Kolmogorov, A.N.; Fomin, S.V. Elementos de la teoría de funciones y de análisis funcional; Editorial Mir: Moscow, Russia, 1975. [Google Scholar]
- Lalvay, S.; Padilla-Segarra, A.; Zouhair, W. On the existence and uniqueness of solutions for non-autonomous semi-linear systems with non-instantaneous impulses, delay, and non-local conditions. Miskolc Math. Notes 2022, 23, 295–310. [Google Scholar] [CrossRef]
- Trélat, E. Contrôle optimal: Théorie et applications, 2nd ed.; De Boeck Sup, Vuibert: Paris, France, 2008. [Google Scholar]
© 2025 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).