Article

Just-in-Time Optimal Routing in the Presence of Non-Uniform and Time-Evolving Uncertainty

by
Chady Ghnatios
1,* and
Francisco Chinesta
2,3
1
Mechanical Engineering Department, University of North Florida, 1 UNF Drive, Jacksonville, FL 32224, USA
2
PIMM Research Laboratory, UMR 8006 CNRS-ENSAM-CNAM, Arts et Metiers Institute of Technology, 151 Boulevard de l’Hôpital, 75013 Paris, France
3
CNRS@CREATE 1 Create Way, #08-01 CREATE Tower, Singapore 138602, Singapore
*
Author to whom correspondence should be addressed.
Appl. Sci. 2025, 15(18), 10137; https://doi.org/10.3390/app151810137
Submission received: 1 September 2025 / Revised: 15 September 2025 / Accepted: 15 September 2025 / Published: 17 September 2025

Abstract

Trajectory planning aims to compute an optimal path and velocity of an agent through the minimization of a cost function. This paper proposes a just-in-time routing method incorporating the stochastic minimization of a cost function that ingests the effect of the agent’s environment as it evolves in space and time. The environment is considered known at present, but its uncertainty increases when advancing in time. To compute the optimal routing in such an uncertain environment, the Euler–Lagrange equations are formulated in a stochastic setting, yielding a probabilistic optimal planning. With the cost function approximated by a surrogate model based on deep neural networks, a neural formulation of the stochastic Euler–Lagrange equations is proposed and employed.

1. Introduction

Optimal trajectory planning is a topic that arises in many domains of science, engineering, and technology. While reinforcement learning [1,2] is always a valuable route, widely employed in similar settings [3], its two main drawbacks are its high computational cost and the large amount of data required for training purposes.
Previous work [4] proposed an alternative route by employing neural network-based regressions informed by the Euler–Lagrange equation (ELE) for optimal velocity planning along a given path. This procedure was further improved in [5] to compute the optimal trajectory, i.e., both the optimal path and the optimal velocity along it, x ( t ) .
In the aforementioned works, the considered cost function includes a term reflecting the effect of the environment (typically wind), which can either assist or resist the agent’s movement. When addressing short paths, it is reasonable to assume that the environment is known and will remain nearly unchanged. Thus, trajectory planning can be elaborated within a deterministic framework, as considered in previous works [4,5].
However, when considering longer trajectories (such as ship routing), currents, sea conditions, and wind, all of which affect the cost function in the optimization problem, are known at the time of planning but are expected to evolve over time, with increasing uncertainty as the time horizon extends. Moreover, ship operations can be scheduled over many months without the possibility of refueling, especially when operating far from homeland.
Similarly, optimal trajectory planning is of utmost importance for airplanes and drones performing missions over extended distances and in foreign territories. In such missions, refueling or recharging is often impossible, while over-fueling can negatively impact the aircraft’s maneuverability. In these situations, especially those occurring far from home bases, environmental conditions such as wind velocity and orientation are only statistically known and are expected to evolve over time, rather than being deterministically available throughout the entire mission duration.
Recently, hierarchical safe reinforcement learning with prescribed performance control was proposed [2], addressing the challenge of unpredictable obstacles for leader–follower systems. However, these approaches fail to limit travel time for a given path. A solution is proposed in [6], where the authors derived a finite-time solution without limiting the path to a fixed interval when possible. Robust optimal tracking is also proposed in [7], using a fixed-time concurrent learning-based actor–critic–identifier.
Other works attempted optimal trajectory planning using Gaussian process regressions for short trajectories in robotic applications [8,9]. However, the globally optimal solution is often intractable using these techniques: they require trajectory priors, and only a plausible region is identified.
Reinforcement learning is well suited to handle such uncertainty [2,10,11], while traditional ELE formulations are much less so. Nevertheless, the simplicity of applying the ELE, combined with its potential for achieving global optimization, motivates the present research, which proposes extending ELE methods to probabilistic settings, thereby enabling the consideration of evolving uncertainty in optimal trajectory planning [12].
Moreover, many problems in mechanical engineering and manufacturing involve deterministic or stochastic trajectory optimization. For example, motion planning for robotic and mobile systems is investigated in [13]. Other applications involve optimal path identification, such as in 3D printing [14,15,16] and automated tape placement [17,18] using trajectory adjustments along parallel paths. While the authors in [13] use a non-parametric approach for path planning through torque minimization, the other aforementioned methods rely on parametric or spline-based optimization for trajectory planning.
The proposed method has the advantage of being a non-parametric approach guaranteeing a global minimum of the stochastic problem at hand [19,20]. Although it is showcased in this work on routing applications, it can be applied to any path optimization problem, by adapting the functional cost to the selected use case.
Many recent works analyze and propose possible technologies for routing optimization [21,22,23,24]. For example, a hybrid parallel computing scheme combined with an improved inverse distance weight interpolation method was proposed for a search-and-rescue route planning model based on the Lawnmower search method in [21]. The main drawback of that method remains the large span of the domain to be investigated by the optimization algorithm. This drawback is also shared by the authors in [22], who proposed a two-level optimization algorithm for urban routing applied to connected and automated vehicles. Recently, a novel algorithm optimizing routing for multi-modal transport methods in urban settings was proposed, employing a stochastic optimization approach using a generalized interval-valued trapezoidal, in-time, possibility distribution [23]. However, this method does not guarantee on-time arrival or a global optimum, though it can account for disturbances and uncertainties in multi-modal transportation. The method proposed in this work, by contrast, guarantees a global optimum at convergence by minimizing the functional that integrates the cost over the entire travel domain.
This paper begins by exploring the just-in-time routing process, where the objective is to reach the destination within a known time frame. Section 3 then introduces the deterministic neural Euler–Lagrange (ELE) formulation. In Section 4, this deterministic approach is generalized to a stochastic one by combining the neural ELE with a Monte-Carlo sampling strategy and is validated on a use case with a known analytical solution. Section 5 explores several applications and examples, comparing the deterministic and stochastic approaches. Finally, conclusions are presented in Section 6. A list of the used symbols and nomenclature is available in Table 1.

2. Just-in-Time Deterministic Routing

The path is assumed to start at $X_0$ at $t = 0$ and to finish at position $X_T$ at time $t = T$. In turn, the local cost function $C$ is assumed to depend on the difference between the agent’s velocity $\dot{x}$ and the environment velocity $v_w$ at the agent’s position $x(t)$.
The total cost $\mathcal{C}$ reads
$$\mathcal{C} = \int_0^T C\big(t, \dot{x}(t), v_w(x(t))\big)\, dt.$$
In what follows, for the sake of illustration, we consider three different scenarios.

2.1. Constant Environment Effect

We consider in this case the one-dimensional case $x(t)$ with $X_0 = 0$ at $t = 0$, and $X_T = 1$ with $T = 1$, and $C = (\dot{x} - v_w)^2$, that is, $C = \dot{x}^2 + v_w^2 - 2 v_w \dot{x}$.
The selected boundary conditions are chosen without any loss of generality. Now, the ELE,
$$\frac{\partial C}{\partial x} - \frac{d}{dt}\left(\frac{\partial C}{\partial \dot{x}}\right) = 0,$$
leads to $\ddot{x} = 0$, which, integrated twice, results in $x(t) = a + b t$; taking into account the initial and final conditions, this reduces to $x(t) = t$, with a constant velocity $\dot{x} = 1$, independent of the value of $v_w$. That is, for any value of $v_w$, the minimum cost enabling the fulfillment of both the initial and final conditions is achieved with constant unit velocity.

2.2. Environment Effect Evolving in Time

Here we consider the same case as before, with the only difference being the environment effect, which now grows in time: $v_w = t$. Thus, $C = (\dot{x} - t)^2 = \dot{x}^2 + t^2 - 2 t \dot{x}$, which, inserted into the ELE, leads to $\ddot{x} = 1$, whose solution $x(t) = \frac{t^2 + t}{2}$ implies the velocity $\dot{x}(t) = t + \frac{1}{2}$: the agent profits from the environment effect by increasing its velocity in time.

2.3. Environment Effect Evolving in Space

This section considers $v_w = x$, which implies $C = (\dot{x} - x)^2$; inserted into the ELE, this leads to $\ddot{x} = x$, whose solution reads $x(t) = a e^{t} + b e^{-t}$, with the constants $a$ and $b$ determined by the initial and final conditions $x(0) = 0$ and $x(1) = 1$.

2.4. Discussion

It is also important to mention that, in the case of the just-referred time-dependent environment, changing $v_w = t$ to $v_w = -t$ changes both the path $x(t)$ and the velocity along it. However, in the space-dependent case, changing $v_w = x$ to $v_w = -x$ changes nothing.
For more complex environments, the resulting ordinary differential equation (ODE) can be solved by discretizing it with standard techniques, for example, finite differences. However, sometimes the cost function is not known analytically and must be approximated using a surrogate model, such as a neural network,
$$C = \mathrm{NN}_c(\dot{x}, x, t),$$
where $x$ or $t$ could be absent in some particular cases. Such a cost function is determined during the agent calibration, in a wind tunnel, for example, in the case of drones, but it could also be obtained from high-fidelity solutions of aerodynamic and/or hydrodynamic software.
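As a minimal illustration of the surrogate idea, the sketch below fits a quadratic least-squares model (standing in for the neural-network surrogate, which it is not) to synthetic "calibration" samples of the cost $C = (\dot{x} - x)^2$; all sample sizes and ranges are illustrative choices:

```python
import numpy as np

# Synthetic calibration data for the cost C = (xdot - x)^2.
rng = np.random.default_rng(3)
xdot = rng.uniform(-2, 2, 5000)
x = rng.uniform(-2, 2, 5000)
C = (xdot - x) ** 2

# Quadratic feature basis: exact for this cost, so the fit is near-perfect.
Phi = np.column_stack([np.ones_like(x), xdot, x, xdot**2, x**2, xdot * x])
coef, *_ = np.linalg.lstsq(Phi, C, rcond=None)
C_hat = Phi @ coef
print(np.max(np.abs(C_hat - C)))  # tiny residual
```

A neural network plays the same role when the cost has no convenient closed-form basis.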

3. Neural Euler–Lagrange Formulation

With the cost expressed by the pre-trained neural network (3), we assume that the trajectory results from a trainable neural network
$$x = \mathrm{NN}_x(t),$$
whose time derivative provides the velocity $\dot{x}$, with $x$, $\dot{x}$, and the time $t$ being the inputs of the cost neural network (3).
To train the network (4), it suffices to use the ELE residual as the cost function, whose derivatives can be easily computed by automatic differentiation.
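To illustrate the principle, the following sketch evaluates the ELE residual for candidate paths under the cost $C = (\dot{x} - v_w)^2$ with constant $v_w$, using finite differences in place of automatic differentiation for brevity; the linear path from Section 2.1 has (numerically) zero residual, while a quadratic candidate does not:

```python
import numpy as np

# Residual of the Euler-Lagrange equation for C = (xdot - v_w)^2 with
# constant v_w: dC/dx = 0 and d/dt (dC/dxdot) = 2 xddot (v_w drops out
# of the time derivative because it is constant), so R = -2 xddot.
def ele_residual(x_fn, t, v_w=0.5, eps=1e-4):
    xdd = (x_fn(t + eps) - 2 * x_fn(t) + x_fn(t - eps)) / eps**2
    dC_dx = 0.0                  # C does not depend on x directly
    ddt_dC_dxdot = 2.0 * xdd     # d/dt [2 (xdot - v_w)]
    return dC_dx - ddt_dC_dxdot

t = np.linspace(0.1, 0.9, 9)
r_linear = ele_residual(lambda s: s, t)     # optimal path x(t) = t
r_quad = ele_residual(lambda s: s**2, t)    # suboptimal candidate
print(np.max(np.abs(r_linear)), np.max(np.abs(r_quad)))
```

In the actual procedure, automatic differentiation supplies these derivatives exactly, and the residual becomes the training loss of $\mathrm{NN}_x$.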

4. Addressing Uncertainty

This section considers the time evolution of the environment uncertainty. As soon as its statistical distribution is defined, a sampling consisting of $N$ realizations can be performed. Then, a cost $C_i$ is associated with each realization $i = 1, \ldots, N$.
At first view, the stochastic formulation reduces to the solution of $N$ deterministic Euler–Lagrange equations using the neural formulation introduced in the previous section, computing the trajectory statistics from all the solutions.
Another possibility, especially well adapted to the use of neural architectures, consists of training the network $\mathrm{NN}_x$ with a loss function that considers the $N$ ELE residuals.
By defining the residual $R_i$ for realization $i$ as
$$R_i \equiv \frac{\partial C_i}{\partial x} - \frac{d}{dt}\left(\frac{\partial C_i}{\partial \dot{x}}\right),$$
the global loss $\mathcal{L}$ considered in the training reads
$$\mathcal{L} = \sum_{i=1}^{N} R_i^2.$$
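A minimal sketch of this Monte-Carlo loss follows, with an illustrative one-parameter path family and a hypothetical wind model $v_w = w\,t$ (neither from the paper); for this choice the minimizer of the summed squared residuals has a closed form, which the grid search recovers:

```python
import numpy as np

# Cost C_i = (xdot - w_i t)^2 with w_i drawn per realization. For the
# path family x(t) = t + p t (1 - t), xddot = -2p and the residual is
# R_i = -2 (xddot - w_i) = 2 (2p + w_i), constant along the trajectory,
# so L(p) = sum_i R_i^2 is minimized at p = -mean(w) / 2.
rng = np.random.default_rng(0)
w = rng.normal(loc=1.0, scale=0.3, size=30)   # 30 realizations (illustrative)

def loss(p):
    return np.sum((2.0 * (2.0 * p + w)) ** 2)

p_grid = np.linspace(-2.0, 2.0, 4001)
p_best = p_grid[np.argmin([loss(p) for p in p_grid])]
print(p_best, -w.mean() / 2.0)  # agree up to the grid spacing
```

In the neural setting, gradient descent over the weights of $\mathrm{NN}_x$ replaces this one-dimensional grid search.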

Numerical Validation in the Deterministic Case

First we consider the deterministic 2D case with $\mathbf{x} = (x, y)^T$ and $\mathbf{v}_w = \mathbf{x}$. The initial and final conditions are given by $\mathbf{X}_0 = \mathbf{0}$ and $\mathbf{X}_T = (1, 1)^T$, at time $t = T = 1$. The cost is assumed to be the one introduced in Section 2.3, given by
$$C = \|\dot{\mathbf{x}} - \mathbf{x}\|^2.$$
This cost function $C$ is used to generate data and to train a neural network surrogate approximating the cost,
$$\mathrm{NN}_c(\dot{\mathbf{x}}, \mathbf{x}) \approx C,$$
which should replicate that cost function after appropriate training.
First, 50 million combinations ( x ˙ , x ) were considered for training the cost network C = NN c ( x ˙ , x ) , 80 % of which were used in the training. The remaining 20 % were used to test the trained model.
Figure 1 depicts the prediction (8) versus the reference value (7) of the cost function, from which an almost perfect training can be concluded (the $R^2$ metric is higher than 0.99 for both the training and the test datasets, with an absolute percentage error lower than $10^{-7}$ on both datasets).
Once the cost network $\mathrm{NN}_c$ is trained, the network $\mathrm{NN}_x$ providing the optimal trajectory can be trained with the loss associated with the ELE residual, as previously discussed. The network used for $\mathrm{NN}_c$ is a dense network with layer sizes (200, 60, 60, 60, 1) and activations (tanh, relu, relu, relu, linear), respectively. Two different networks of identical structure are used to predict the $x$ and $y$ coordinates of the path. The resulting optimal trajectory is depicted in Figure 2 for both the $x$ and the $y$ coordinates. Figure 2 also depicts the resulting optimal velocity.
In the present case, the problem admits an analytical solution that serves as a reference to validate the approach. Another possibility to validate the results in the present case, in which the cost function has an explicit expression, consists of solving the ELE by employing any discretization technique, such as finite differences.
The problem’s analytical solution is shown in Figure 3 along with a comparison to the one identified by the trained network. The relative and absolute errors between the analytical solutions and the one provided by the neural approach are shown in Figure 4. Figure 4 demonstrates the high accuracy of the proposed planning procedure.

5. Numerical Results in a Stochastic Setting

In the following examples, we consider the cost function defined by
$$C = \|\dot{\mathbf{x}} - \mathbf{v}_w\|^2.$$
However, now the wind velocity $\mathbf{v}_w$ becomes stochastic, with different expressions depending on the considered use case. In all the examples illustrated in this section, TensorFlow is used in the implementation [25]. The training of the neural network $\mathrm{NN}_x$ minimizing the stochastic ELE cost function is performed using the ADAM stochastic gradient descent algorithm [26], with a custom decreasing learning rate, from $10^{-3}$ to $10^{-6}$, over the course of 1000 epochs. The learning starts with batches of 50 points over the trajectory and increases over the epochs, ending with learning over the full trajectory. The trajectory’s boundary conditions are imposed by construction using a change of variables. The same neural network architecture as before is also used for $\mathrm{NN}_x$. In this section, we also denote the agent’s velocity $\dot{\mathbf{x}}$ by $\mathbf{v}$, with components $(v_x, v_y)$.
In the rest of this section, the potential wind trajectories are sampled a priori. Later on, each sampled potential wind history is treated as a deterministic sample.

5.1. A First Example with a Sampled Multi-Normal Wind Velocity Distribution

This section assumes a uniform environment whose uncertainty evolves in time. Thus, the environment statistics are assumed to follow a multi-normal distribution, with the dimensionality given by a partition of the time interval. If we consider a partition defined by $M$ nodes uniformly distributed on the time interval $[0, T]$, with the partition length $h = T/(M-1)$, the nodal times become
$$t_i = (i-1)\, h, \quad i = 1, \ldots, M.$$
The multi-normal statistics are fully determined by the $M \times M$ covariance matrix $\Sigma$ and the $M \times 1$ mean vector $\boldsymbol{\mu}$.
Here, without loss of generality, we consider a linear evolution of the components of vector $\boldsymbol{\mu}$,
$$\mu_i = a + b\, t_i,$$
with the diagonal components of $\Sigma$ (variances) also following a linear evolution in time, according to
$$\Sigma_{i,i} = c + d\, t_i,$$
and with the covariance decaying with the distance in time according to
$$\Sigma_{ij} = \exp\left(-\frac{(t_i - t_j)^2}{2\beta^2}\right).$$
In the cases addressed below, the values $c = 1$, $d = 0.5$, and $\beta = 10$ are considered. Now, by sampling the multi-normal distribution defined above, different trajectories of the stochastic process at the time nodes are obtained. From them, a spline can be used to determine the values in between the nodes.
In this example, the $x$- and $y$-components of vector $\boldsymbol{\mu}$ are given by
$$\mu_j^x = 20\, t_j, \quad \mu_j^y = 20\, t_j, \quad j = 1, \ldots, M.$$
We considered 30 realizations of the multi-normal distribution, illustrated (without spline-based smoothing) in Figure 5. The computed results, following the rationale described in Section 4 to address uncertainty quantification, are illustrated in Figure 6.
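The sampling just described can be sketched as follows, using the stated parameters $c = 1$, $d = 0.5$, $\beta = 10$ and the mean slope $b = 20$ with $a = 0$; the values of $M$, $T$, and the sample count are illustrative choices:

```python
import numpy as np

# Time-discretized multi-normal wind process (sketch).
M, T = 21, 1.0
t = np.linspace(0.0, T, M)
a, b, c, d, beta = 0.0, 20.0, 1.0, 0.5, 10.0

mu = a + b * t                                     # linear mean evolution
Sigma = np.exp(-(t[:, None] - t[None, :]) ** 2 / (2 * beta**2))
np.fill_diagonal(Sigma, c + d * t)                 # linear variance growth
# Note: Sigma = (Gaussian kernel) + nonnegative diagonal shift, hence PSD.

rng = np.random.default_rng(1)
samples = rng.multivariate_normal(mu, Sigma, size=30)   # 30 realizations
print(samples.shape)  # (30, 21)
```

Each row of `samples` is one wind-history realization at the nodal times; a spline interpolation then provides values between nodes.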

5.2. A Second Case-Study Considering a Markov Chain Monte-Carlo Sampling

In this second example, the change in time of the wind velocity is sampled by using the Metropolis–Hastings algorithm, a powerful Markov chain method to simulate multivariate distributions [27].
This section considers two cases: (i) a symmetrical probability distribution of the wind velocity in terms of amplitude and orientation, and (ii) another case where the probability distribution is slightly asymmetrical, while conserving the same mean and standard deviation.
In both sampling efforts, the sampled probability is normal, with mean $20\sqrt{2}\, t$ for the wind velocity magnitude $v_w$ and a standard deviation of 0.2. Regarding the orientation $\theta$, the sampled probability is normal with a mean of $45°$ and a standard deviation of 0.2. However, for both the velocity magnitude and the orientation, the acceptance probability is normal with mean $|q_{\mathrm{sampled}} - q_{\mathrm{current}}|$ and a standard deviation of 0.2 in the symmetric case, while in the asymmetrical one, the acceptance probability is a lognormal distribution with the same mean and standard deviation, the quantity $q$ being either the wind velocity amplitude $v_w$ or the wind orientation $\theta$. The sampling of the wind velocity magnitude and that of its orientation are performed independently, and the wind vector components are computed using
$$v_w^x = v_w \cos(\theta), \quad v_w^y = v_w \sin(\theta).$$
Using the symmetrical and asymmetrical probability density functions, we construct 30 wind velocity samples in each use case. The sampled wind velocities along x and y are illustrated in Figure 7 along with their mean distribution.
The agent’s trajectory (path and velocity) is optimized using the cost function defined in Equation (9). The solutions obtained using the Euler–Lagrange-based stochastic procedure are illustrated in Figure 8 for the symmetric and asymmetric cases.
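For reference, a textbook random-walk Metropolis–Hastings sampler targeting a normal distribution can be sketched as below. This is the standard symmetric-proposal variant, not the exact acceptance rule used above; the target matches the orientation statistics (mean $45°$, standard deviation 0.2), and the chain length and step size are illustrative:

```python
import numpy as np

# Standard random-walk Metropolis-Hastings targeting N(45, 0.2^2).
def mh_sample(n, mean=45.0, std=0.2, step=0.3, seed=2):
    rng = np.random.default_rng(seed)
    log_p = lambda q: -0.5 * ((q - mean) / std) ** 2  # unnormalized log-density
    q = mean                       # start the chain at the mean
    out = np.empty(n)
    for i in range(n):
        prop = q + rng.normal(0.0, step)               # symmetric proposal
        if np.log(rng.uniform()) < log_p(prop) - log_p(q):
            q = prop                                   # accept
        out[i] = q                                     # else keep current state
    return out

theta = mh_sample(20000)
print(theta.mean(), theta.std())  # close to 45 and 0.2
```

Running the chain long enough reproduces the target mean and standard deviation, which is the property the wind-sampling procedure relies on.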

5.3. Discussion on the Stochastic Approaches

The present section compares the stochastic approaches using the different sampling techniques, along with a deterministic approach. The selected deterministic solution is the one minimizing the cost for the mean value of the distribution using the deterministic ELE framework. For instance, in this example, the deterministic solution is the optimal path for a deterministic environment where the wind velocity is defined by
$$v_w^x = 20\, t, \quad v_w^y = 20\, t,$$
which corresponds to the theoretical mean distribution of the wind considered in Section 5.1 and Section 5.2.
Figure 9 compares the optimal path in terms of $x$ and $y$ coordinates, as well as $v_x$ and $v_y$, for the different distributions. Figure 10 shows a zoom on the identified paths. From it, we can conclude that the deterministic solution using the wind average is comparable to the probabilistic solutions, with the difference becoming more noticeable for the asymmetrical probability density functions, as the nonlinearities involved in the procedure become more significant. In general, minimizing the cost for the mean value of the wind velocity is, in the treated examples, not optimal with respect to the cost that each wind velocity sample entails.

5.4. Considering Available Wind Velocity Statistics

This section considers the wind probability distribution in the city of Paris [28] shown in Figure 11.
The wind distribution is sampled in two ways. The first set of 30 samples uses constant values along the selected time frame of one hour, drawn from the probability distribution illustrated in Figure 11. The second set of 30 samples is obtained using the Metropolis–Hastings sampling algorithm, with a sampling evolution every minute. The optimal trajectory results are illustrated in Figure 12 for both cases.
We can note a slight difference between the different sampling procedures. A deterministic solution considering the average values of the wind velocity remains very close to the sampling of constant velocities in time shown in Figure 12.
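Drawing constant-in-time samples from a discrete wind rose can be sketched as follows; the bin values and probabilities below are hypothetical placeholders, the actual Paris statistics being those of [28]:

```python
import numpy as np

# Sampling constant-in-time wind realizations from a discrete wind rose.
rng = np.random.default_rng(4)
speeds = np.array([1.0, 3.0, 5.0, 8.0])       # m/s bins (hypothetical)
p_speed = np.array([0.3, 0.4, 0.2, 0.1])      # bin probabilities (hypothetical)
directions = np.deg2rad(np.array([0.0, 90.0, 180.0, 270.0]))
p_dir = np.array([0.25, 0.25, 0.25, 0.25])

n = 30
v = rng.choice(speeds, size=n, p=p_speed)     # one speed per realization
th = rng.choice(directions, size=n, p=p_dir)  # one orientation per realization
vw = np.column_stack([v * np.cos(th), v * np.sin(th)])  # (n, 2) wind vectors
print(vw.shape)
```

Each row is one constant wind vector held over the whole time frame, matching the first sampling strategy; the Metropolis–Hastings variant instead re-samples the wind every minute.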

6. Conclusions

This work extends former proposals for computing optimal trajectories from a neural network approximation trained to minimize the Euler–Lagrange equation residual as a loss function. The extension to the stochastic setting, in which the environment varies in space and its uncertainty evolves in time, allows addressing stochastic environmental evolution during routing operations within the proposed ELE framework.
The proposed strategy consists of the statistical modeling of the uncertainty, which is then sampled using a Monte-Carlo procedure. Later, the network provides the optimal trajectory trained from a loss that considers the Euler–Lagrange equation residual associated with the different stochastic process realizations.
The numerical examples enabled validating the procedure and confirmed its simplicity. The results showcase a difference between deterministic approaches considering only the mean values of the stochastic realizations and the stochastic ELE approach using Monte-Carlo minimization of the stochastic ELE form, even if, for the simple cases addressed, the difference is not significant. When the cost function $C$ is linear, there is no difference between the deterministic solution considering the average of the environmental effect and the proposed stochastic minimization of the cost function. On the contrary, when $C$ is highly non-linear, the effect becomes more pronounced.
It is important to emphasize the zero-data framework used in this routing process. Neither experimental nor simulated data are needed for the network training, as the network learns the optimal solution from the Euler–Lagrange residual minimization, in contrast with the vast majority of the techniques available in the literature.
The proposed work can be extended to account for obstacles, including dynamically evolving ones. Moreover, the assessment of the Monte-Carlo sampling size and the convergence analysis of the proposed Euler–Lagrange framework will be addressed in future work.

Author Contributions

Conceptualization, C.G. and F.C.; methodology, C.G. and F.C.; software, C.G.; validation, C.G. and F.C.; resources, F.C.; data curation, C.G.; writing—original draft preparation, C.G.; writing—review and editing, C.G. and F.C.; visualization, C.G.; supervision, F.C.; project administration, F.C.; funding acquisition, F.C. All authors have read and agreed to the published version of the manuscript.

Funding

This research is part of the DesCartes program and is supported by the National Research Foundation, Prime Minister Office, Singapore, under its Campus for Research Excellence and Technological Enterprise (CREATE).

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

The original contributions presented in this study are included in the article. Further inquiries can be directed to the corresponding author.

Conflicts of Interest

The authors declare no conflicts of interest.

References

  1. Sutton, R.; Barto, A. Reinforcement Learning: An Introduction; The MIT Press: Cambridge, MA, USA, 2018. [Google Scholar]
  2. Tan, J.; Xue, S.; Li, H.; Guo, Z.; Cao, H.; Chen, B. Hierarchical Safe Reinforcement Learning Control for Leader-Follower Systems with Prescribed Performance. IEEE Trans. Autom. Sci. Eng. 2025, 22, 19568–19581. [Google Scholar] [CrossRef]
  3. Kim, H.; Pyeon, H.; Park, J.S.; Hwang, J.Y.; Lim, S. Autonomous Vehicle Fuel Economy Optimization with Deep Reinforcement Learning. Electronics 2020, 9, 1911. [Google Scholar] [CrossRef]
  4. Ghnatios, C.; di Lorenzo, D.; Champaney, V.; Cueto, E.; Chinesta, F. Optimal velocity planning based on the solution of the Euler-Lagrange equations with a neural network based velocity regression. Discret. Contin. Dyn. Syst. Ser. S 2023, 17, 2323–2333. [Google Scholar] [CrossRef]
  5. Ghnatios, C.; Di Lorenzo, D.; Champaney, V.; Ammar, A.; Cueto, E.; Chinesta, F. Optimal trajectory planning combining model-based and data-driven hybrid approaches. Adv. Model. Simul. Eng. Sci. 2024, 11, 10. [Google Scholar] [CrossRef]
  6. Tan, J.; Xue, S.; Guan, Q.; Qu, K.; Cao, H. Finite-time safe reinforcement learning control of multi-player nonzero-sum game for quadcopter systems. Inf. Sci. 2025, 712, 122117. [Google Scholar] [CrossRef]
  7. Tan, J.; Xue, S.; Niu, T.; Qu, K.; Cao, H.; Chen, B. Fixed-time concurrent learning-based robust approximate optimal control. Nonlinear Dyn. 2025, 113, 21455–21475. [Google Scholar] [CrossRef]
  8. Petrović, L.; Peršić, J.; Seder, M.; Marković, I. Stochastic Optimization for Trajectory Planning with Heteroscedastic Gaussian Processes. In Proceedings of the 2019 European Conference on Mobile Robots (ECMR), Prague, Czech Republic, 4–6 September 2019; pp. 1–6. [Google Scholar] [CrossRef]
  9. Petrović, L.; Peršić, J.; Seder, M.; Marković, I. Cross-entropy based stochastic optimization of robot trajectories using heteroscedastic continuous-time Gaussian processes. Robot. Auton. Syst. 2020, 133, 103618. [Google Scholar] [CrossRef]
  10. Ren, J.; Huang, X.; Huang, R.N. Efficient Deep Reinforcement Learning for Optimal Path Planning. Electronics 2022, 11, 3628. [Google Scholar] [CrossRef]
  11. Ke, C.H.; Tu, Y.H.; Ma, Y.W. A reinforcement learning approach for widest path routing in software-defined networks. ICT Express 2023, 9, 882–889. [Google Scholar] [CrossRef]
  12. Harrison, R.L. Introduction to Monte Carlo Simulation. AIP Conf. Proc. 2010, 1204, 17–21. [Google Scholar] [CrossRef]
  13. Kalakrishnan, M.; Chitta, S.; Theodorou, E.; Pastor, P.; Schaal, S. STOMP: Stochastic trajectory optimization for motion planning. In Proceedings of the 2011 IEEE International Conference on Robotics and Automation, Shanghai, China, 9–13 May 2011; pp. 4569–4574. [Google Scholar] [CrossRef]
  14. Li, Y.; Xu, K.; Liu, X.; Yang, M.; Gao, J.; Maropoulos, P. Stress-oriented 3D printing path optimization based on image processing algorithms for reinforced load-bearing parts. CIRP Ann. 2021, 70, 195–198. [Google Scholar] [CrossRef]
  15. Yamamoto, K.; Luces, J.V.S.; Shirasu, K.; Hoshikawa, Y.; Okabe, T.; Hirata, Y. A novel single-stroke path planning algorithm for 3D printers using continuous carbon fiber reinforced thermoplastics. Addit. Manuf. 2022, 55, 102816. [Google Scholar] [CrossRef]
  16. Ghnatios, C.; Rai, K.E.; Hascoet, N.; Pires, P.A.; Duval, J.L.; Lambarri, J.; Hascoet, J.Y.; Chinesta, F. Reduced order modeling of selective laser melting: From calibration to parametric part distortion. Int. J. Mater. Form. 2021, 14, 973–986. [Google Scholar] [CrossRef]
  17. Zhang, P.; Yin, L.; Zhou, Z.; Tong, Y.; Feng, W. Modeling and Analyzing Layup Gaps in the Trajectory Planning for Automated Tape Placement. Comput.-Aided Des. 2023, 164, 103595. [Google Scholar] [CrossRef]
  18. Zhang, P.; Li, Y.; Tang, K.; Yin, L.; Huang, L.; Wang, H. Trajectory optimization for automated tape placement on triangular mesh surfaces considering gap requirements. J. Comput. Des. Eng. 2024, 11, 230–252. [Google Scholar] [CrossRef]
  19. Reddy, J. Introduction to the Finite Element Method, 3rd ed.; McGraw-Hill: Singapore, 2006. [Google Scholar]
  20. Becker, M. The Principles and Applications of Variational Methods; MIT Press: Cambridge, MA, USA, 1964. [Google Scholar]
  21. Liao, S.; Xiao, W.; Wang, Y. Optimization of route planning based on active towed array sonar for underwater search and rescue. Ocean Eng. 2025, 330, 121249. [Google Scholar] [CrossRef]
  22. Xue, Q.; Zheng, S.T.; Han, X.; Jiang, R. A two-level framework for dynamic route planning and trajectory optimization of connected and automated vehicles in road networks. Phys. A Stat. Mech. Its Appl. 2025, 668, 130552. [Google Scholar] [CrossRef]
  23. Wang, X.; Lv, Y.; Sun, H.; Wang, X.; Zhu, C. Multi-modal travel route planning considering environmental preference under uncertainties: A distributionally robust optimization approach. Transp. Res. Part E Logist. Transp. Rev. 2025, 198, 104097. [Google Scholar] [CrossRef]
  24. Xu, W.; Xie, N. Scheduling and route planning for forests rescue: Applications with a novel ant colony optimization algorithm. Eng. Appl. Artif. Intell. 2025, 155, 111042. [Google Scholar] [CrossRef]
  25. Abadi, M.; Agarwal, A.; Barham, P.; Brevdo, E.; Chen, Z.; Citro, C.; Corrado, G.S.; Davis, A.; Dean, J.; Devin, M.; et al. TensorFlow: Large-Scale Machine Learning on Heterogeneous Distributed Systems. arXiv 2015, arXiv:1603.04467. [Google Scholar]
  26. Kingma, D.P.; Ba, J. Adam: A Method for Stochastic Optimization. arXiv 2017, arXiv:1412.6980. [Google Scholar] [CrossRef]
  27. Chib, S.; Greenberg, E. Understanding the Metropolis-Hastings Algorithm. Am. Stat. 1995, 49, 327–335. [Google Scholar] [CrossRef]
  28. Prefecture. Normales de Rose de Vent. Vent Horaire à 10 Mètres, Moyenné sur 10 Minutes. 2016. Available online: https://www.prefecturedepolice.interieur.gouv.fr/sites/default/files/media/ROSE_PARIS-MONTSOURIS.pdf (accessed on 19 June 2025).
Figure 1. Predicted (8) versus exact (7) cost function for both the training and the test datasets.
Figure 2. Identified optimal path provided by the trained network $\mathbf{x} = \mathrm{NN}_x(t)$, $0 \le t \le 1$. (a) Optimal path and velocity on the $x$ coordinate. (b) Optimal path and velocity on the $y$ coordinate.
Figure 3. Analytical solution of the ODE and a comparison of the analytical optimal path/velocity with the ones identified by the trained network $\mathbf{x} = \mathrm{NN}_x(t)$, $0 \le t \le 1$. (a) Analytical solution. (b) Comparison between the analytical solution and the neural-based solution.
Figure 4. Relative and absolute errors between the analytical optimal path/velocity and the ones identified by the trained network $\mathbf{x} = \mathrm{NN}_x(t)$, $0 \le t \le 1$. (a) Errors on the identified position. (b) Errors on the identified velocities.
Figure 5. Realizations of the multi-normal distribution modeling the environment’s time-evolution. (a) Sampled realizations of the x-component of the wind velocity v w x . (b) Mean of the sampled realizations of v w x .
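The realizations shown in Figure 5 can be reproduced in spirit with a short sketch. The following is an illustrative example, not the authors' code: it samples the x-component of the wind velocity from a multivariate normal distribution whose variance grows with time, reflecting the paper's premise that the environment is known at present while uncertainty increases when advancing in time. The mean value (2 m/s), the linear variance growth, and the exponential correlation length (0.2) are placeholder assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

n_steps = 50
t = np.linspace(0.0, 1.0, n_steps)   # normalized time, 0 <= t <= 1
mu = 2.0 * np.ones(n_steps)          # assumed mean wind speed (m/s)

# Covariance: the variance grows linearly in time, so uncertainty increases
# into the future; an exponential kernel correlates nearby time instants.
sigma2 = 0.1 + 0.5 * t
corr = np.exp(-np.abs(t[:, None] - t[None, :]) / 0.2)
Sigma = np.sqrt(sigma2[:, None] * sigma2[None, :]) * corr

# Draw 200 realizations of v_wx and estimate their empirical mean,
# analogous to panels (a) and (b) of Figure 5.
samples = rng.multivariate_normal(mu, Sigma, size=200)
mean_est = samples.mean(axis=0)
```

The spread of the sampled trajectories widens with t, which is the visual signature of the time-evolving uncertainty in Figure 5a, while the empirical mean stays close to the prescribed mean, as in Figure 5b.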
Figure 6. Neural stochastic solution of the Euler–Lagrange equations for optimal path generation considering the environment statistics described by a multi-normal distribution. (a) Solution on the x coordinate. (b) Solution on the y coordinate.
Figure 7. Sampled velocities v w x using the Metropolis–Hastings algorithm. (a) Sampled v w x using symmetrical Metropolis–Hastings algorithm acceptance probability. (b) Sampled v w x using asymmetrical Metropolis–Hastings algorithm acceptance probability. (c) Mean of sampled v w x using symmetrical Metropolis–Hastings algorithm acceptance probability. (d) Mean of sampled v w x using asymmetrical Metropolis–Hastings algorithm acceptance probability.
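For the symmetric case of Figure 7, the Metropolis–Hastings acceptance ratio reduces to the ratio of target densities, since the random-walk proposal cancels out [27]. The following minimal sketch, not the paper's implementation, samples wind speeds from an assumed unnormalized target; the Weibull-like density (shape k = 2, scale λ = 3) is a placeholder often used for wind-speed statistics, and the step size is an assumption.

```python
import numpy as np

rng = np.random.default_rng(1)

def target_pdf(v):
    # Assumed unnormalized Weibull-like target for wind speed (v > 0);
    # shape k and scale lam are illustrative placeholders.
    k, lam = 2.0, 3.0
    return np.where(v > 0, (v / lam) ** (k - 1) * np.exp(-(v / lam) ** k), 0.0)

def metropolis_hastings(n_samples, step=1.0, v0=3.0):
    samples = np.empty(n_samples)
    v = v0
    for i in range(n_samples):
        v_prop = v + step * rng.normal()            # symmetric random-walk proposal
        alpha = target_pdf(v_prop) / target_pdf(v)  # acceptance ratio p(v')/p(v)
        if rng.uniform() < alpha:
            v = v_prop                               # accept the move
        samples[i] = v                               # otherwise keep the current state
    return samples

chain = metropolis_hastings(20_000)
# For Weibull(k=2, lam=3) the exact mean is lam * Gamma(1.5), about 2.66.
```

With an asymmetric proposal, as in panels (b) and (d) of Figure 7, the acceptance ratio would additionally include the proposal-density correction q(v | v') / q(v' | v).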
Figure 8. Optimal trajectory and velocities for the examples sampled using the Metropolis–Hastings algorithm. (a) Optimal solution along x for the symmetrical Metropolis–Hastings sampled velocities. (b) Optimal solution along y for the symmetrical Metropolis–Hastings sampled velocities. (c) Optimal solution along x for the asymmetrical Metropolis–Hastings sampled velocities. (d) Optimal solution along y for the asymmetrical Metropolis–Hastings sampled velocities.
Figure 9. Optimal trajectory and velocities comparing different wind velocity distributions. (a) Optimal solution for the x coordinate. (b) Optimal solution for the y coordinate. (c) Optimal solution for v x . (d) Optimal solution for v y .
Figure 10. Zoom on optimal v y at the end of the selected time domain.
Figure 11. Wind distribution probability in Paris, France [28].
Figure 12. Optimal trajectory and velocities comparing different wind velocity distributions. (a) Optimal solution for the x coordinate. (b) Optimal solution for the y coordinate. (c) Optimal solution for v x . (d) Optimal solution for v y .
Table 1. Nomenclature and notations.

Symbol: Name
x: Position vector
x: Position x-coordinate
y: Position y-coordinate
t: Time
C: Instantaneous cost function
C: Total cost over a trajectory
ELE: Euler–Lagrange equation
v: Velocity of the object
v_x: x-coordinate of v
v_y: y-coordinate of v
v_w: Wind velocity
v_wx: x-coordinate of v_w
v_wy: y-coordinate of v_w
ȯ = ∂o/∂t: Time derivative of o
NN_C: Neural network approximating C
NN_x: Neural network identifying x
μ: Vector of mean values
μ_x: Mean value of the x-components
μ_y: Mean value of the y-components
Σ: Covariance matrix
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.

Ghnatios, C.; Chinesta, F. Just-in-Time Optimal Routing in the Presence of Non-Uniform and Time-Evolving Uncertainty. Appl. Sci. 2025, 15, 10137. https://doi.org/10.3390/app151810137