Article

FLOW-GLIDE: Global–Local Interleaved Dynamics Estimator for Flow Field Prediction

Jinghan Su, Li Xiao and Jingyu Wang
1 College of Computer Science, Sichuan University, Chengdu 610065, China
2 National Key Laboratory of Fundamental Algorithms and Models for Engineering Simulation, Sichuan University, Chengdu 610207, China
3 School of Aeronautics and Astronautics, Sichuan University, Chengdu 610200, China
* Author to whom correspondence should be addressed.
Appl. Sci. 2025, 15(19), 10834; https://doi.org/10.3390/app151910834
Submission received: 30 August 2025 / Revised: 20 September 2025 / Accepted: 23 September 2025 / Published: 9 October 2025
(This article belongs to the Section Fluid Science and Technology)

Abstract

Accurate prediction of the flow field is crucial for evaluating the aerodynamic performance of an aircraft. While traditional computational fluid dynamics (CFD) methods solve the governing equations to capture both global flow structures and localized gradients, they are computationally intensive. Deep learning-based surrogate models offer a promising alternative, yet they often struggle to model long-range dependencies and near-wall flow gradients simultaneously with sufficient fidelity. To address this challenge, this paper introduces the Message-passing And Global-attention block (MAG-BLOCK), a graph neural network module that combines local message passing with global self-attention to jointly learn fine-scale features and large-scale flow patterns. Building on MAG-BLOCK, we propose FLOW-GLIDE, a cross-architecture deep learning framework that learns a mapping from initial conditions to steady-state flow fields in a latent space. Evaluated on the AirfRANS dataset, FLOW-GLIDE outperforms existing models on key performance metrics, reducing the volumetric flow-field error by 62% and the surface-pressure error by 82% relative to the state of the art.

1. Introduction

In the field of aerospace engineering, accurate flow field prediction plays a pivotal role in the design, analysis, and control of high-performance systems such as aircraft, launch vehicles, and reentry capsules. Traditional computational fluid dynamics (CFD) methods, although highly accurate, often involve solving complex partial differential equations (PDEs) on fine-resolution meshes, leading to prohibitively expensive computations, particularly for high-fidelity simulations. These limitations pose significant challenges for rapid design iteration, real-time decision making, and uncertainty quantification. As a result, there is a growing demand for efficient surrogate models capable of predicting flow fields with reduced computational overhead while maintaining physical accuracy. In recent years, neural network-based approaches have emerged as a promising alternative, offering fast and scalable solutions for modeling nonlinear flow dynamics across diverse geometries and conditions.
Various deep learning models have been applied to flow field modeling, notably Convolutional Neural Networks (CNNs) [1,2,3,4,5,6,7] and Graph Neural Networks (GNNs) [8,9,10,11,12,13,14]. Yu et al. used a knowledge-enhanced end-to-end CNN as the core and replaced the high-cost numerical simulation of unsteady flow around bluff bodies with near-real-time pixel-to-pixel prediction, exploiting wind-engineering priors such as non-uniform mesh refinement and transfer learning [15]. Lee et al. used CNNs with four progressive resolutions and physical conservation constraints to achieve fast prediction of three-dimensional cylinder wakes [16]. Du et al. built an integrated CNN-based pipeline that unifies airfoil parameterization, flow prediction, and rapid inverse design [17]. Pfaff et al. proposed MeshGraphNet, which combines multi-space message passing with a learnable adaptive re-meshing mechanism [18]. Li et al. achieved fast, high-precision prediction of two-dimensional unsteady incompressible flows on unstructured grids by embedding finite-volume constraints into the loss and combining two-level message aggregation with spatial integration layers, significantly suppressing error accumulation and accelerating training and inference [19]. Zou et al. combined finite-difference residuals with graph networks to build the differentiable GC-FDM, which, together with block-level coordinate transformation and unit-enhanced message passing, achieves label-free, physically constrained flow-field solutions on multi-block conforming grids [20].
To enhance the global feature learning capabilities of CNNs and GNNs, researchers have made various adjustments to the network architecture. Zhang et al. fused the signed distance function matrix with multichannel features in a U-Net architecture and used an independent decoder for each physical quantity, which greatly improved the accuracy of multi-shape flow field prediction [21]. Chen et al. proposed a U-Net model based on coordinate-transformation encoding to quickly and accurately infer two-dimensional compressible flow fields over different wing sections [22]. In addition, Chen et al. combined the attention mechanism with the U-Net architecture to quickly and accurately predict transonic unsteady airfoil flows [23]. Fortunato et al. introduced Multiscale MeshGraphNets, which extends Pfaff et al. [18] for multiscale feature learning [24]. Gao et al. borrowed the U-Net architecture and proposed the graph U-Net, containing graph pooling and graph convolution layers, for global and local feature learning [25].
The Transformer architecture [26] has gained prominence owing to its exceptional natural-language-processing capabilities [27,28,29]. Building on this architecture and its core attention mechanism, Wu et al. used learnable physical slices and Physics-Attention to shift the multi-head attention of the Transformer from massive grid points to physical tokens, achieving high-precision, linear-time solution of PDEs on arbitrarily complex geometric domains [30]. Luo et al. extended Transolver, enabling the first accurate solutions of PDEs discretized on million-scale geometries [31]. Zhou et al. used complete PDE components and a decoupled conditional Transformer to explicitly inject equation knowledge into the network, achieving strong generalization and state-of-the-art (SOTA) performance across multiple datasets, equations, and boundary conditions [32]. Xiao et al. combined geometric attention with implicit neural fields to efficiently encode the geometric information of irregular point clouds [33]. Alkin et al. proposed a unified neural-operator framework that does not rely on a specific grid or particle structure [34].
CNNs and GNNs can effectively capture local flow structure, but they struggle with global interactions. CNNs require rasterization/pixelization of the computational mesh [35], which introduces aliasing and geometric-edge distortion. Moreover, both CNNs and standard GNNs are inherently local operators whose receptive fields expand only with depth, leading to limited effective context [36,37]. Deepening GNNs to extend their reach typically induces feature over-smoothing [38], whereas architectural workarounds that exchange information at distance can distort the original graph representation. While attention mechanisms excel at modeling long-range dependencies, vanilla attention lacks an explicit inductive bias for grid locality and non-uniform node density. The resulting global aggregation tends to dilute small-scale structure and makes it difficult to prioritize key node-dense regions such as the boundary layer and wake flow. Ultimately, native attention struggles to simultaneously preserve strong local variations near walls and capture global coupling. This limitation requires the incorporation of local priors or grid-aware mechanisms. Collectively, these limitations hinder a robust balance between local detail and global coupling when both must be learned simultaneously.
Motivated by these observations, we propose a novel Global–Local learning module that exploits the spatial information in the mesh-based representation. The module uses a GNN and an attention mechanism to extract local and global features from the mesh, and fuses features of different scales. Stacking this Global–Local learning module builds the Global–Local Interleaved Dynamics Estimator (FLOW-GLIDE), which balances the learning of features at different scales and significantly improves prediction accuracy in densely meshed areas and regions with large physical-space spans. The main contributions of this study can be summarized as follows:
  • This paper introduces a multiscale feature learning module, a novel Global–Local feature learning method that can effectively balance the learning process of different features of the neural network without the need for auxiliary learning methods.
  • Based on the novel feature learning method, we propose FLOW-GLIDE, which can learn from mesh-based geometric representations and significantly improve the prediction accuracy in obstacle boundary layers and wake regions.
  • A series of experiments on an aircraft-airfoil dataset verifies the effectiveness of the proposed model, together with quantitative comparisons against other baseline models. The experimental results show that our model significantly improves prediction accuracy compared with single-architecture network models and other multi-scale feature-fusion models.
The remainder of this paper is organized as follows: Section 2 defines the main research problem and describes MAG-BLOCK and FLOW-GLIDE in detail. Section 3 compares the results with the ground truth and provides a quantitative analysis against the baseline models. Finally, the conclusion is presented in Section 4.

2. Methodology

2.1. Problem Setup

This work addresses learning the mapping from initial conditions to incompressible, steady, high-Reynolds-number flow over a 2D airfoil, which can be written as follows:
$$F_\theta : (\text{geometry}, \text{operating conditions}) \mapsto (u_x, u_y, p, \nu_t)_{\text{steady}}$$
Under incompressible, high-Reynolds-number conditions, the flow is governed by the Reynolds-Averaged Navier–Stokes (RANS) equations:
$$\partial_i \bar{u}_i = 0, \qquad \partial_j(\bar{u}_i \bar{u}_j) = -\frac{\partial_i \bar{p}}{\rho} + (\nu + \nu_t)\,\partial^2_{jj} \bar{u}_i, \qquad i \in \{1, 2\}$$
where $\bar{(\cdot)}$ denotes an ensemble-averaged quantity and $\partial_i$ the partial derivative with respect to the $i$-th spatial component; $u$, $p$, and $\rho$ are the fluid velocity, pressure, and density, and $\nu$ and $\nu_t$ are the fluid kinematic viscosity and kinematic turbulent viscosity, respectively.
At high Reynolds number, the surface flow exhibits thin, high-shear boundary layers, separation and vortex shedding, together with long-range coupling from the boundary layer into the downstream wake flow. Our model focuses on learning local features of the airfoil surface as well as global features of the entire flow field, including the wake evolving from the surface.
In order to better describe the method proposed in this paper, the symbols in Table 1 will be used.

2.2. Global–Local Interleaved Dynamics Estimator

In this section, we introduce the overall architecture of our network model, which is called FLOW-GLIDE, and how it learns the flow field distribution from the features of the sampled points.
FLOW-GLIDE learns pairwise and higher-order relationships among mesh points to capture both local variations and domain-level trends in the flow field. The architecture interleaves Message Passing layers with attention layers, providing complementary inductive biases: the former resolves neighborhood-scale structures, such as near-wall and wake interactions, while the latter models long-range couplings across the domain. This alternation yields a balanced latent representation that does not privilege any single feature scale.
As shown in Figure 1, given a two-dimensional geometry and its discrete points on the original mesh in AirfRANS [39], let the continuous domain be $\Omega \subset \mathbb{R}^2$ and the original discrete domain be $V_{origin} = \{V_i\}_{i=1}^{N}$ with $E_{origin} = \{E_{ij} = (V_i, V_j) : i \neq j\}$. To reduce computational complexity and preserve the local structure, as well as to remain comparable to the baseline models, a structure-preserving subsampling strategy is applied under the premise of geometric consistency, yielding a node set $V = \{V_1, \dots, V_n\}$ ($n \leq N$) and an edge set $E = \{E_{ij}\}$. The sampling strategy consists of two steps: first, 2000 points are randomly sampled from the full discrete point set $V_{origin}$; then a K-hop expansion with $k = 5$ is performed, connecting the sampled points along the original grid lines to obtain $V$ and $E$, as sketched below.
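One plausible reading of this two-step procedure is sketched below in Python; the adjacency-list representation, the breadth-first realization of the K-hop expansion, and the edge-filtering rule are assumptions for illustration, not the authors' exact algorithm.

```python
import random

def subsample_khop(adj, num_seeds=2000, k=5, seed=0):
    """Two-step subsampling sketch: random seed nodes, then a k-hop
    breadth-first expansion along the original grid lines (k = 5).
    `adj` maps each node id to the set of its mesh neighbors."""
    rng = random.Random(seed)
    kept = set(rng.sample(sorted(adj), num_seeds))   # step 1: 2000 random points
    frontier = set(kept)
    for _ in range(k):                               # step 2: k-hop expansion
        frontier = {m for u in frontier for m in adj[u]} - kept
        kept |= frontier
    # retain edges whose endpoints both survive, preserving local structure
    edges = {(u, v) for u in kept for v in adj[u] if v in kept and u < v}
    return kept, edges
```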
Based on this, an undirected graph $G = (V, E)$ is constructed. For each node, the initial features $n_i \in \mathbb{R}^{d_{in}^v}$ comprise the global coordinates $pos_x$ and $pos_y$, the inlet velocity components $V_{inflow}^x$ and $V_{inflow}^y$, the distance $d$ from the point to the airfoil surface, and the components $normal_x$ and $normal_y$ of the surface normal vector. For points in the interior of the flow field, the normal components are set to zero. The initial feature of each edge, $e_{ij} \in \mathbb{R}^{d_{in}^e}$, is the difference between the features of its two endpoint nodes, excluding the coordinates.
After subsampling, the node and edge features are projected into the latent space $D \subset \mathbb{R}^{d_{hidden}}$ through MLPs, as shown in Equation (3):
$$n_i^0 = \psi_n(n_i), \qquad e_{ij}^0 = \psi_e(e_{ij})$$
where $n_i^0$ and $e_{ij}^0$ represent the node and edge features in the latent space, respectively, and $\psi(\cdot)$ denotes an MLP.
To emphasize the local structure, FLOW-GLIDE begins with a graph module. After the encoder projects node attributes into the latent space, the undirected graph $G$ is processed by three Message Passing layers, following FVGN [19]. This configuration enables aggregation of informative neighborhood interactions while remaining sufficiently shallow to mitigate over-smoothing of node representations. After the Message Passing layers, the latent features $n_i^0$ and $e_{ij}^0$ have aggregated neighborhood information into $n_i^L$ and $e_{ij}^L$. Local feature extraction can be described as follows:
$$n_i^L = g_L(g_{L-1}(\cdots g_l(\cdots g_1(n_i^0)))), \qquad e_{ij}^L = g_L(g_{L-1}(\cdots g_l(\cdots g_1(e_{ij}^0)))), \qquad 1 \leq l \leq L$$
where $g$ denotes the Message Passing layer; $L = 3$ is used in the subsequent experimental setup.
After local feature extraction, the undirected graph $G$ is passed to a global attention module, where Transolver [30] is employed as the attention layer. $G$ is decomposed into a set of learnable slices $s = \{s_k\}_{k=1}^{M}$. Each slice is then compressed into a physics-aware token by aggregating node embeddings with spatial weights. A multi-head attention operator subsequently evaluates the contribution of each token to each output, assigning token-specific attention weights and thereby modeling long-range couplings across the domain. Global feature extraction can be described as follows:
$$n_i^{2L} = A_L(A_{L-1}(\cdots A_1(n_i^L)))$$
where $A$ represents the attention layer. Only node features are passed to the attention layer, as shown in Equation (5). A single attention layer ($L = 1$) is employed in the subsequent experimental setup.
Building on the components above, the principal module, MAG-BLOCK, is constructed, coupling three Message Passing layers with an attention layer through residual projections. Stacking MAG-BLOCKs yields a deep architecture with alternating local–global stages, progressively enriching neighborhood-scale features and domain-wide dependencies without privileging any single scale. The constituent layers are described in detail in Section 2.3.
After stacking B instances of MAG-BLOCK, the model produces node-level latent embeddings that jointly encode local structure and long-range couplings. A decoder architecturally symmetric to the encoder then maps these embeddings to pointwise predictions of the target flow variables, yielding the full flow-field distribution, which can be shown as Equation (6).
$$n_i^{pred} = \psi_{dec}(n_i^{K})$$
where $n_i^{K}$ denotes the final latent node embedding after the $B$ stacked blocks.
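To make the overall data flow concrete, the following is a minimal PyTorch sketch of the encoder, stacked MAG-BLOCKs, and decoder of Equations (3)–(6). It is an illustrative reconstruction rather than the authors' released code: MessagePassingLayer and SliceAttention refer to the sketches accompanying Section 2.3, and the activation choice, two-layer MLPs, and feature counts (7 node inputs, 5 edge inputs) are assumptions consistent with the text.

```python
import torch
import torch.nn as nn

class MagBlock(nn.Module):
    """One MAG-BLOCK: three local Message Passing layers followed by one
    global attention layer (Equations (4) and (5))."""
    def __init__(self, hidden=128, mp_layers=3):
        super().__init__()
        self.local = nn.ModuleList(
            MessagePassingLayer(hidden) for _ in range(mp_layers))
        self.global_attn = SliceAttention(hidden)

    def forward(self, n, e, edge_index):
        for g in self.local:                      # neighborhood-scale structure
            n, e = g(n, e, edge_index)
        return self.global_attn(n), e             # long-range coupling (nodes only)

class FlowGlide(nn.Module):
    """Encoder -> B stacked MAG-BLOCKs -> decoder (Equations (3) and (6))."""
    def __init__(self, in_node=7, in_edge=5, hidden=128, out_dim=4, num_blocks=4):
        super().__init__()
        self.enc_n = nn.Sequential(nn.Linear(in_node, hidden), nn.GELU(),
                                   nn.Linear(hidden, hidden))
        self.enc_e = nn.Sequential(nn.Linear(in_edge, hidden), nn.GELU(),
                                   nn.Linear(hidden, hidden))
        self.blocks = nn.ModuleList(MagBlock(hidden) for _ in range(num_blocks))
        self.dec = nn.Sequential(nn.Linear(hidden, hidden), nn.GELU(),
                                 nn.Linear(hidden, out_dim))  # (u_x, u_y, p, nu_t)

    def forward(self, n, e, edge_index):
        n, e = self.enc_n(n), self.enc_e(e)       # Eq. (3)
        for block in self.blocks:                 # alternating local/global stages
            n, e = block(n, e, edge_index)
        return self.dec(n)                        # Eq. (6)
```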

2.3. Global–Local Learning Module

Figure 2 presents the architecture of the Global–Local learning module, MAG-BLOCK. Each Message Passing layer and attention layer is followed by a residual connection to maintain training stability. The right panel of Figure 2 details the computations in the Message Passing layer, whereas the left panel illustrates the attention mechanism. The learning procedure within MAG-BLOCK is described in detail next.
Let $G = (V, E)$ be the undirected graph, with node features $n_i \in \mathbb{R}^{d_{hidden}}$ and edge features $e_{ij} \in \mathbb{R}^{d_{hidden}}$. Denote the 1-hop neighborhood of node $i$ by $N(i)$. For the endpoints $i$ and $j$ of each edge $E_{ij}$, the sums of adjacent node features, $\tilde{n}_i$ and $\tilde{n}_j$, are computed respectively as
$$\tilde{n}_i = \sum_{m \in N(i)} n_m^l, \qquad \tilde{n}_j = \sum_{p \in N(j)} n_p^l.$$
These descriptors are then concatenated with the current edge attribute in the fixed order $[\tilde{n}_i \,\Vert\, \tilde{n}_j \,\Vert\, e_{ij}^l]$ and passed through an edge MLP $\psi_{edge}$ to obtain the updated edge feature:
$$e_{ij}^{l+1} = \psi_{edge}\big([\tilde{n}_i \,\Vert\, \tilde{n}_j \,\Vert\, e_{ij}^l]\big),$$
where $(\cdot) \,\Vert\, (\cdot)$ denotes concatenation.
With the updated edge features $e_{ij}^{l+1}$ obtained from the edge stage, information is aggregated to each point $V_i$ in two steps. The first step computes the incident-edge feature sum, which aggregates the messages incoming to point $V_i$:
$$\tilde{e}_i = \sum_{j \in N(i)} e_{ij}^{l+1},$$
where $\tilde{e}_i$ is the sum of the attributes of all edges connected to point $V_i$.
Equal weighting is then applied to the sums $\tilde{e}_j$ of the neighbors of point $V_i$, yielding a feature mean $\bar{n}_i$ at point $V_i$:
$$\bar{n}_i = \frac{1}{|N(i)|} \sum_{j \in N(i)} \tilde{e}_j.$$
$\bar{n}_i$ is concatenated with the current node feature $n_i^l$ and passed through a node MLP $\psi_{node}$ to produce the updated node attribute:
$$n_i^{l+1} = \psi_{node}\big([\bar{n}_i \,\Vert\, n_i^l]\big).$$
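A hedged PyTorch sketch of one such Message Passing layer, implementing Equations (7)–(11), is given below; the index_add_-based scatter accumulation, the residual connections, and the two-layer MLPs are assumptions based on Figure 2, not the authors' exact implementation.

```python
import torch
import torch.nn as nn

class MessagePassingLayer(nn.Module):
    """Sketch of one Message Passing layer (Equations (7)-(11)).
    `edge_index` (shape (2, E)) is assumed to store each undirected
    edge once as an (i, j) pair."""
    def __init__(self, hidden=128):
        super().__init__()
        self.psi_edge = nn.Sequential(nn.Linear(3 * hidden, hidden), nn.GELU(),
                                      nn.Linear(hidden, hidden))
        self.psi_node = nn.Sequential(nn.Linear(2 * hidden, hidden), nn.GELU(),
                                      nn.Linear(hidden, hidden))

    def forward(self, n, e, edge_index):
        src, dst = edge_index                      # (E,), (E,)
        # Eq. (7): neighbor-feature sums n~ for every node
        nbr = torch.zeros_like(n)
        nbr.index_add_(0, src, n[dst])
        nbr.index_add_(0, dst, n[src])
        # Eq. (8): edge update from [n~_i || n~_j || e_ij], with residual
        e = e + self.psi_edge(torch.cat([nbr[src], nbr[dst], e], dim=-1))
        # Eq. (9): incident-edge sums e~_i
        esum = torch.zeros_like(n)
        esum.index_add_(0, src, e)
        esum.index_add_(0, dst, e)
        # Eq. (10): equally weighted mean of the neighbors' e~_j
        ones = torch.ones(src.size(0), 1, device=n.device)
        deg = torch.zeros(n.size(0), 1, device=n.device)
        deg.index_add_(0, src, ones)
        deg.index_add_(0, dst, ones)
        nbar = torch.zeros_like(n)
        nbar.index_add_(0, src, esum[dst])
        nbar.index_add_(0, dst, esum[src])
        nbar = nbar / deg.clamp(min=1.0)
        # Eq. (11): node update from [n_bar || n], with residual
        n = n + self.psi_node(torch.cat([nbar, n], dim=-1))
        return n, e
```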
After three message-passing layers, Transolver [30] is employed as the global feature learning module. Prior to global aggregation, the node features $n_i^l$ are normalized with LayerNorm. With $H$ attention heads, a linear projection $f_{weight}$ followed by a Softmax yields $M$ slice weights $w = \{w_{i,k}^h\}_{i=1,k=1}^{N,M}$ for each head $h$:
$$\{w_{i,k}^h\}_{i=1,k=1}^{N,M} = \mathrm{Softmax}\big(f_{weight}(n_i^l)\big)_{i=1}^{N}.$$
For slice $k$, a physics-aware token $t_k^h$ is produced by a weighted reduction of node embeddings according to Equation (13):
$$t_k^h = \frac{\sum_{i=1}^{N} w_{i,k}^h \, s_i^h}{\sum_{i=1}^{N} w_{i,k}^h}, \qquad k = 1, \dots, M, \quad h = 1, \dots, H.$$
Each token $t_k^h$ is then fed into multi-head attention. Each head $h$, with head dimension $d_h$, generates its own query $Q_k^h$, key $K_k^h$, and value $V_k^h$ vectors for each token through a linear layer. The attention weights encode token-to-token correlations, and the outputs are
$$A^h = \mathrm{Softmax}\!\left(\frac{Q^h (K^h)^{\top}}{\sqrt{d_h}}\right), \qquad z_k^h = (A^h V^h)_k.$$
Afterward, the learned tokens $z_k^h$ are recomposed into node attributes using the slice weights of each head, and the concatenated per-head attributes pass through a linear layer to obtain the attention output:
$$n_i^h = \sum_{k=1}^{M} w_{i,k}^h \, z_k^h, \qquad \tilde{n}_i = f\Big(\big\Vert_{h=1}^{H} n_i^h\Big),$$
where $f$ is a linear projection. $\tilde{n}_i$ then passes through a LayerNorm and an MLP, and the final output is obtained after the residual connections. The entire learning process can be expressed as follows:
$$n_i' = n_i^l + \mathrm{Attention}\big(\mathrm{LN}(n_i^l)\big), \qquad n_i^{l+1} = n_i' + \psi_{attn}\big(\mathrm{LN}(n_i')\big),$$
where LN denotes LayerNorm and Attention comprises the routine of Equations (12)–(15).
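The following single-head PyTorch sketch illustrates the routine of Equations (12)–(16); a multi-head version would split channels across heads. The number of slices and all layer shapes are illustrative assumptions, not the Transolver reference implementation.

```python
import torch
import torch.nn as nn

class SliceAttention(nn.Module):
    """Single-head sketch of the slice-attention stage (Equations (12)-(16)):
    nodes are softly assigned to M slices, slices are compressed into tokens,
    tokens attend to one another, and the result is scattered back to nodes."""
    def __init__(self, hidden=128, num_slices=32):
        super().__init__()
        self.f_weight = nn.Linear(hidden, num_slices)   # slice-weight projection
        self.qkv = nn.Linear(hidden, 3 * hidden)        # per-token Q, K, V
        self.f_out = nn.Linear(hidden, hidden)          # output projection f
        self.norm1 = nn.LayerNorm(hidden)
        self.norm2 = nn.LayerNorm(hidden)
        self.mlp = nn.Sequential(nn.Linear(hidden, hidden), nn.GELU(),
                                 nn.Linear(hidden, hidden))

    def forward(self, n):                               # n: (N, hidden)
        x = self.norm1(n)
        w = torch.softmax(self.f_weight(x), dim=-1)     # Eq. (12): (N, M)
        t = (w.T @ x) / w.sum(dim=0).unsqueeze(-1)      # Eq. (13): tokens (M, hidden)
        q, k, v = self.qkv(t).chunk(3, dim=-1)          # Eq. (14): token attention
        a = torch.softmax(q @ k.T / q.size(-1) ** 0.5, dim=-1)
        z = a @ v                                       # (M, hidden)
        n = n + self.f_out(w @ z)                       # Eq. (15) + first residual
        return n + self.mlp(self.norm2(n))              # MLP branch of Eq. (16)
```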

2.4. Loss Function

Relative $L_2$ error is used as the loss function. For volume nodes, the loss is evaluated jointly over all predicted variables: the velocity components $v_x$ and $v_y$, the pressure $p$, and the turbulent viscosity $\nu_t$. For airfoil surface nodes, only the reduced pressure is available, so the loss is restricted to the pressure channel. The final training objective is to minimize the sum of the volumetric and surface relative $L_2$ terms:
$$L_{vol} = \frac{\sum_{i=1}^{N} (y_i^{pred} - y_i)^2}{\sum_{i=1}^{N} y_i^2}, \qquad L_{surf} = \frac{\sum_{i=1}^{N} (p_i^{pred} - p_i)^2}{\sum_{i=1}^{N} p_i^2}, \qquad L = L_{surf} + L_{vol}.$$
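A direct PyTorch transcription of this objective might look as follows; the eps guard against a zero denominator is an added assumption.

```python
import torch

def relative_l2(pred, target, eps=1e-8):
    """Squared relative L2 ratio used for both loss terms."""
    return ((pred - target) ** 2).sum() / (target ** 2).sum().clamp(min=eps)

def flow_glide_loss(vol_pred, vol_true, surf_p_pred, surf_p_true):
    """Total objective: volumetric term over (u_x, u_y, p, nu_t) jointly,
    plus a surface term restricted to the (reduced) pressure channel."""
    return relative_l2(vol_pred, vol_true) + relative_l2(surf_p_pred, surf_p_true)
```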

3. Results and Discussion

3.1. Benchmark and Configures

AirfRANS [39] is used as the benchmark dataset, comprising 1000 two-dimensional steady incompressible RANS simulations of NACA 4- and 5-digit airfoils, with angles of attack ranging from −5° to 15° and Reynolds numbers between $2 \times 10^6$ and $6 \times 10^6$. For each case, a C-grid multi-block hexahedral mesh is generated using blockMesh in OpenFOAM v2112. Solutions are computed with simpleFoam using the SIMPLEC algorithm and the SST turbulence model, and are advanced to convergence of the drag and lift coefficients. The dataset provides four evaluation tasks: the full data regime, the scarce data regime, the Reynolds extrapolation regime, and the Angle of Attack (AoA) extrapolation regime. Results are reported on the full-regime, Reynolds-extrapolation, and AoA-extrapolation tracks, with comparisons against baselines.
The baseline models used in AirfRANS include MLP, Graph U-Net [25], GraphSAGE [40], and PointNet [41]. For visual comparison with FLOW-GLIDE, we additionally re-implement FVGN [19], MeshGraphNet [18], and Transolver [30], together with MLP and GraphSAGE [40]. All models were trained for 600 epochs using the Adam optimizer with an initial learning rate of 0.001 and a 128-dimensional latent space, under the relative $L_2$ loss. Each MAG-BLOCK uses three Message Passing layers and one attention layer. Additional comparisons are conducted with newer models, including GNO [42], GNOT [43], GINO [44], and Galerkin [45]. All experiments were conducted on an RTX 4090 GPU, and the code was implemented in the PyTorch framework.
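As a hedged illustration of this configuration, reusing the FlowGlide and flow_glide_loss sketches from Section 2, the training loop could be wired as below; the dataloader, batch layout, and surface-mask convention are assumptions, not the authors' published script.

```python
import torch

# Illustrative wiring of the reported setup: 600 epochs, Adam, initial
# learning rate 1e-3, 128-dim latent space, relative-L2 objective.
model = FlowGlide(hidden=128).cuda()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)

for epoch in range(600):
    for n, e, edge_index, y_vol, y_surf_p, surf_mask in train_loader:
        n, e, edge_index = n.cuda(), e.cuda(), edge_index.cuda()
        y_vol, y_surf_p, surf_mask = y_vol.cuda(), y_surf_p.cuda(), surf_mask.cuda()
        pred = model(n, e, edge_index)                      # (N, 4)
        loss = flow_glide_loss(pred, y_vol,
                               pred[surf_mask, 2], y_surf_p)  # p is channel 2
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
```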

3.2. Comparison of Flow Fields

We compare the predictions with the CFD reference. Figure 3 and Figure 4 show the flow field variables and the streamline distribution at an angle of attack of 14.642° and an inflow velocity of 39.741 m/s.
Figure 3 shows that FLOW-GLIDE reproduces the spatial organization of the two-dimensional flow with high fidelity. Locally, the leading-edge stagnation region and the near-wall acceleration along the suction side agree closely with the CFD reference. Globally, the growth of the boundary layer and the downstream development of the wake are consistently reproduced.
Figure 4 compares the predicted streamlines of FLOW-GLIDE with the CFD reference for the same operating condition. The left panels show the full field. The incoming flow decelerates to a leading-edge stagnation point, accelerates along the suction side, and recovers toward the trailing edge. The wake centerline orientation and the spacing of farfield streamlines are consistent between prediction and CFD, indicating that the global flow organization is faithfully reproduced.
The right panels magnify the boxed region near the trailing edge and highlight the recirculation bubble. FLOW-GLIDE resolves the family of closed streamlines inside the bubble and closely matches the CFD in the curvature of the outer shear layer that bounds the bubble, and the reattachment location where the shear layer impinges on the surface. The extent and height of the bubble are comparable to the reference, and no spurious secondary eddies are observed.
In addition to qualitative comparisons with the CFD fields, error visualizations are provided for the baseline models described in Section 3.1. Since differences in raw predictions are often visually indistinguishable under standard color mappings, pointwise relative $L_2$ errors are reported against the CFD reference fields to provide a more sensitive evaluation. Figure 5 reports the error across all predicted variables. To enable fair side-by-side comparison, all panels for a given variable use an identical color scale over the range $[0, 10^{-1}]$. Lower values indicate better agreement.
FLOW-GLIDE attains the lowest errors for all components. Spatially, errors for all methods concentrate in the trailing-edge shear layer and wake, whereas the far field and most of the external flow remain below 0.01 for FLOW-GLIDE. The leading and trailing edge neighborhoods exhibit noticeably higher errors for the baselines, reflecting reduced accuracy in resolving local flow features. Although Transolver [30] is the SOTA model, its error maps still display residual bands of elevated wake error and localized hot spots near the leading edge, both attenuated in FLOW-GLIDE. Overall, the alternating local-then-global design reduces wake-core and near-edge errors without inflating far-field artifacts.
Beyond comparisons with CFD errors, visualizations of the attention distribution in the final Attention layer are used to probe the internal mechanisms of the model. These visualizations assess whether the node attributes produced by the preceding local message-passing layers effectively modulate global feature aggregation in the attention module.
Figure 6 visualizes the slice weights of the final global attention layer. All panels share an identical color scale, and the slice number appears in the upper right of each panel. For each slice, the weight map shows where the global module attends: higher intensities denote nodes that contribute more strongly to the token formed for that slice, as described in Equation (13). The resulting patterns closely follow the underlying flow physics, with salient regions aligning with the boundary layer, the wake, and far-field zones. The attention distribution can be roughly divided into the following three categories:
  • Far-field–oriented slices (green box): These maps allocate low weight to the immediate vicinity of the airfoil and to the wake core, while emphasizing the outer domain. This pattern indicates that the attention layer separates farfield flow from boundary layer and wake dynamics.
  • Leading-edge + wake slices (red box): We observe concentrated weight at the leading edge and within the wake, connected by a continuous band from the trailing edge into the wake. This suggests that specific tokens are dedicated to modeling the evolution of shear-layer structures and downstream transport.
  • Surface-focused slices (blue box): The weights form narrow rims along the airfoil surface, consistent with a focus on boundary layer processes. The emergence of these near-wall saliency patterns implies that the node embeddings produced by local feature learning effectively condition the global module to attend to the flow areas where the velocity gradients are strongest.
Taken together, these figures demonstrate that the proposed local-first-global-last design produces semantically explicit labeling of the far-field, boundary layer, and wake regions, thus facilitating accurate cross-scale coupling.

3.3. Airfoil Surface and Quantitative Comparison

Following the field-level analysis, the evaluation turns to quantitative statistics covering both the airfoil surface and the flow field. Figure 7 shows the correlation between the predicted and reference lift and drag coefficients for each test case, with a linear fit performed over the entire test set. The black curve is the CFD result, whose slope equals one.
Figure 7a presents the Spearman correlation of the drag coefficient. Among all methods, FLOW-GLIDE exhibits the tightest clustering around the identity line, with a regression slope closest to one and a near-zero intercept, indicating superior calibration and reduced bias across the full drag range. Baselines display either dynamic-range compression (a slope less than one) or systematic over-prediction at higher $C_D$. FLOW-GLIDE mitigates both, yielding smaller residuals and higher rank consistency with the CFD reference.
For the lift coefficient C L , although all models follow the ground truth closely, FLOW-GLIDE produces a fitted line nearly coincident with the identity and exhibits the smallest scatter across the entire C L range, including the high-lift regime, indicating excellent monotonic agreement and negligible scale bias.
Figure 8 compares surface pressure distributions predicted by the FLOW-GLIDE and baseline models under two operating conditions: (a) low speed and high angle of attack, (b) high speed and low angle of attack.
At high AoA, FLOW-GLIDE accurately reproduces the magnitude and streamwise location of the leading-edge suction peak and tracks the suction-side pressure-recovery gradient without spurious oscillations. By contrast, several baselines exhibit systematic biases, including under-predicted suction minima, downstream-shifted peaks, and high-frequency ripples over mid-chord, and some additionally show a negative offset on the pressure side.
As shown in Figure 8b, differences are smaller overall in low AoA and high speed condition, yet FLOW-GLIDE remains the closest to CFD across the entire chord on both surfaces. It captures the stagnation jump, the monotonic pressure recovery along the suction side, and trailing-edge closure. Baseline methods reproduce the overall surface-pressure trend but exhibit noticeable offsets relative to CFD, including mild slope and level deviations along portions of the chord.
In addition to visualization, FLOW-GLIDE is quantitatively compared with baseline models across different tasks.
Table 2 summarizes full-regime results on AirfRANS [39]. Downward arrows indicate that smaller is better; the upward arrow indicates that larger is better. Here, Volume denotes the domain-wise relative-$L_2$ error averaged over the four predicted variables, and Surface denotes the surface relative-$L_2$ error. The volume error is reduced by 62% relative to the strongest baseline, Transolver [30]. The reduction in the surface metric is particularly significant, nearly an order of magnitude. The lift-coefficient error also improves by 54% over the best-performing baseline, FVGN [19], and the Spearman rank correlation for $C_L$ is the highest among all methods.
Beyond the full-regime benchmark, out-of-distribution (OOD) generalization is evaluated on two extrapolation tasks: Reynolds number and angle of attack. For Reynolds extrapolation, the training set comprises 500 cases with $Re \in [3 \times 10^6, 5 \times 10^6]$, and the test set contains 500 cases drawn from the two held-out outer bands $Re \in [2 \times 10^6, 3 \times 10^6] \cup [5 \times 10^6, 6 \times 10^6]$. For AoA extrapolation, the training set includes 800 cases with $AoA \in [-2.5°, 12.5°]$ and the test set includes 200 cases sampled from the disjoint intervals $AoA \in [-5°, -2.5°] \cup [12.5°, 15°]$. Evaluation uses the same metrics as Section 3.2, and the results are summarized in Table 3.
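These splits are straightforward to reproduce; a minimal sketch, assuming each case record carries its Reynolds number and angle of attack:

```python
def split_airfrans_ood(cases):
    """Sketch of the two extrapolation splits; `cases` is an assumed list of
    dicts carrying each simulation's Reynolds number `re` and angle `aoa`."""
    re_train  = [c for c in cases if 3e6 <= c["re"] <= 5e6]
    re_test   = [c for c in cases if 2e6 <= c["re"] < 3e6 or 5e6 < c["re"] <= 6e6]
    aoa_train = [c for c in cases if -2.5 <= c["aoa"] <= 12.5]
    aoa_test  = [c for c in cases if -5 <= c["aoa"] < -2.5 or 12.5 < c["aoa"] <= 15]
    return (re_train, re_test), (aoa_train, aoa_test)
```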
Table 3 reports the OOD results. For the Reynolds-number extrapolation task, FLOW-GLIDE attains the highest rank consistency with $\rho_D = 0.9985$, surpassing the best baseline. In terms of absolute drag error, FLOW-GLIDE is competitive, ranking behind Transolver [30] and GNOT [43] but outperforming the remaining baselines. For the AoA extrapolation task, FLOW-GLIDE also achieves the best Spearman correlation with $\rho_L = 0.9958$. For absolute lift error, it is second best, close to Transolver [30] and lower than all other baselines. Overall, FLOW-GLIDE exhibits the strongest rank consistency and competitive absolute accuracy across both extrapolation tasks.

3.4. Ablation

Two ablation studies are conducted to evaluate the benefits of the interleaved Local–Global design and to determine the local depth required to avoid over-smoothing while preserving discriminative locality. The first ablation removes interleaving: message-passing and attention layers are stacked in contiguous blocks, varying only their order, which yields GN→Attention and Attention→GN. Consistent with Section 3.2, the reference FLOW-GLIDE stacks four MAG-BLOCKs, each comprising three Message Passing layers followed by one attention layer. To match the total depth and parameter count, GN→Attention uses twelve Message Passing layers followed by four attention layers, whereas Attention→GN applies the reverse ordering. All other hyperparameters are kept fixed to enable a fair comparison.
Table 4 compares FLOW-GLIDE with the non-interleaved architectures. With total depth and training setup held fixed, FLOW-GLIDE delivers the best overall performance. Compared to GN→Attention, FLOW-GLIDE reduces the $C_D$ error by 31%, increases $\rho_D$ by 50%, and further decreases the Volume/Surface errors by 6.7%/10.7%, respectively. Attention→GN underperforms across all metrics, indicating that a global-first ordering washes out near-wall gradients early, and the subsequent GN layers cannot recover the lost high-frequency information. In the volume and surface metrics, Attention→GN is one order of magnitude worse than FLOW-GLIDE; relative to Attention→GN, FLOW-GLIDE improves $\rho_D$ by 57% and reduces the $C_D$ error by 82%. A caveat is that GN→Attention achieves a slightly lower $C_L$ error than FLOW-GLIDE, and in the Volume and Surface metrics the GN→Attention configuration performs comparably, underscoring the importance of prioritizing local feature learning over global feature learning. Nevertheless, with the exception of $C_L$, GN→Attention performs worse than FLOW-GLIDE on all other metrics, further indicating that overly deep graph stacks lead to over-smoothing. The appropriate number of GN layers is investigated in the second ablation. Overall, the ablation supports interleaving local message passing with global attention, which better balances near-wall fidelity and long-range coupling.
The second ablation determines how many Message Passing layers are needed in one MAG-BLOCK to capture local features as fully as possible without causing over-smoothing. The number of Message Passing layers per block is varied over $k \in \{1, 2, 3, 4, 5\}$, while all other factors are kept fixed. Models are then evaluated on the full-regime task using the same metrics as Section 3.2, enabling an isolated assessment of how local depth within a block affects performance.
Table 5 summarizes the ablation on the number of Message Passing layers per MAG-BLOCK, with all other settings held fixed. The configuration with $k = 3$ delivers the best overall performance. Increasing the number of Message Passing layers from $k = 1$ to $k = 3$ markedly improves the representation of the flow field. However, pushing the depth to $k = 4$ and $k = 5$ introduces over-smoothing, which degrades Volume, Surface, and $C_D$ and substantially lowers $\rho_D$. Although $k = 5$ yields the lowest lift error ($C_L = 0.0288$), this comes at the expense of higher field errors and weaker drag rank consistency, and is therefore not preferable overall. In summary, three Message Passing layers strike the most effective trade-off between capturing local structure and avoiding over-smoothing.

4. Conclusions

To address the challenge of balancing strong near-wall gradients and long-range global coupling in high-Reynolds-number flows, this paper proposes MAG-BLOCK, which couples local message passing with physics-aware attention through residual mappings; stacking these blocks yields the alternating Local–Global architecture FLOW-GLIDE. This design balances discriminative local features with global dependencies in the latent space. Visualization of the final attention layer reveals that tokens form a semantic division of labor across the boundary layer, wake, and far field, providing a mechanistic explanation for the performance improvement.
Comprehensive experiments on AirfRANS validate the approach. In the full-regime track, FLOW-GLIDE reduces the volume relative- L 2 error by 62% over the strongest baseline and lowers surface-pressure error by nearly an order of magnitude. It also achieves the lowest lift-coefficient error and the highest Spearman rank correlation among all competitors. Qualitatively, the model reproduces the leading-edge stagnation pattern, suction-side acceleration, and the trailing-edge recirculation bubble with correct curvature and re-attachment location. In out-of-distribution evaluations, FLOW-GLIDE attains the highest rank consistency for both Reynolds-number and AoA extrapolations while maintaining competitive absolute errors, demonstrating robustness when operating outside the training ranges.
Ablation studies clarify architectural choices. Removing interleaving degrades performance, indicating that alternating local message passing with global attention is crucial for suppressing wake-core and near-edge errors without inflating farfield artifacts. Varying the number of message-passing layers per block shows that three layers strike the best balance. Deeper stacks begin to over-smooth and harm drag ranking, whereas shallower stacks underfit local structure.
These results suggest that FLOW-GLIDE is a practical surrogate for rapid design space exploration, inverse design, and control where both accurate ranking and low field-level error matter. The method integrates naturally with mesh-based pipelines and preserves original connectivity, enabling deployment alongside existing CFD workflows as a fast pre-screening or co-simulation module.
Future work will target the attention mechanism by explicitly decomposing global attention into tangential and normal directions, aiming to learn direction-dependent transport and diffusion. Moreover, physical constraints can be incorporated into the loss function to enhance physical consistency and interpretability. We also plan to evaluate scalability and generalization with respect to mesh resolution and problem classes, extending the framework from two-dimensional, steady, incompressible cases to three-dimensional, unsteady, and compressible flows. Together, these directions aim to strengthen the inductive bias of FLOW-GLIDE, improve its robustness across domains, and broaden its applicability to practical aerodynamic design and control.

Author Contributions

Conceptualization, J.S.; data curation, J.S.; formal analysis, J.S.; methodology, J.S.; project administration, J.S.; resources, J.W.; software, J.S.; supervision, J.W., L.X.; validation, J.S.; visualization, J.S.; writing—original draft preparation, J.S.; writing—review and editing, J.W. All authors have read and agreed to the published version of the manuscript.

Funding

This research is supported by the National Key Project of China (Grant No. GJXM9257), the Sichuan Science and Technology Program (Project No. 2021ZDZX0001), the Open Funding of the National Key Laboratory of Fundamental Algorithms and Models for Engineering Simulation, and the Sichuan University Interdisciplinary Innovation Fund.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

The original contributions presented in the study are included in the article, further inquiries can be directed to the corresponding author.

Acknowledgments

The authors are thankful to Tianyu Li, Lin Lu and Yiye Zou for their support to this study.

Conflicts of Interest

The authors declare no conflicts of interest.

References

  1. Wandel, N.; Weinmann, M.; Klein, R. Learning Incompressible Fluid Dynamics from Scratch–Towards Fast, Differentiable Fluid Models that Generalize. arXiv 2020, arXiv:2006.08762. [Google Scholar]
  2. Sofos, F.; Drikakis, D.; Kokkinakis, I.W. Deep learning architecture for sparse and noisy turbulent flow data. Phys. Fluids 2024, 36, 035155. [Google Scholar] [CrossRef]
  3. Hu, C.; Guo, X.; Dai, Y.; Zhu, J.; Cheng, W.; Xu, H.; Zeng, L. A deep-learning model for predicting spatiotemporal evolution in reactive fluidized bed reactor. Renew. Energy 2024, 225, 120245. [Google Scholar] [CrossRef]
  4. Illarramendi, E.A.; Bauerheim, M.; Cuenot, B. Performance and accuracy assessments of an incompressible fluid solver coupled with a deep convolutional neural network. Data-Centric Eng. 2022, 3, e2. [Google Scholar] [CrossRef]
  5. Bhatnagar, S.; Afshar, Y.; Pan, S.; Duraisamy, K.; Kaushik, S. Prediction of aerodynamic flow fields using convolutional neural networks. Comput. Mech. 2019, 64, 525–545. [Google Scholar] [CrossRef]
  6. Morimoto, M.; Fukami, K.; Zhang, K.; Nair, A.G.; Fukagata, K. Convolutional neural networks for fluid flow analysis: Toward effective metamodeling and low dimensionalization. Theor. Comput. Fluid Dyn. 2021, 35, 633–658. [Google Scholar] [CrossRef]
  7. Peng, J.Z.; Chen, S.; Aubry, N.; Chen, Z.H.; Wu, W.T. Time-variant prediction of flow over an airfoil using deep neural network. Phys. Fluids 2020, 32, 123602. [Google Scholar] [CrossRef]
  8. Iakovlev, V.; Heinonen, M.; Lähdesmäki, H. Learning continuous-time pdes from sparse data with graph neural networks. arXiv 2020, arXiv:2006.08956. [Google Scholar]
  9. Sanchez-Gonzalez, A.; Godwin, J.; Pfaff, T.; Ying, R.; Leskovec, J.; Battaglia, P. Learning to simulate complex physics with graph networks. In Proceedings of the International Conference on Machine Learning, Virtual, 13–18 July 2020; pp. 8459–8468. [Google Scholar]
  10. Li, Q.; Li, X.; Chen, X.; Yao, W. A novel graph modeling method for GNN-based hypersonic aircraft flow field reconstruction. Eng. Appl. Comput. Fluid Mech. 2024, 18, 2394177. [Google Scholar] [CrossRef]
  11. Nastorg, M.; Bucci, M.A.; Faney, T.; Gratien, J.M.; Charpiat, G.; Schoenauer, M. An implicit GNN solver for Poisson-like problems. Comput. Math. Appl. 2024, 176, 270–288. [Google Scholar] [CrossRef]
  12. Lu, L.; Zou, Y.; Wang, J.; Zou, S.; Zhang, L.; Deng, X. Unsupervised learning with physics informed graph networks for partial differential equations. Appl. Intell. 2025, 55, 617. [Google Scholar] [CrossRef]
  13. Han, X.; Gao, H.; Pfaff, T.; Wang, J.X.; Liu, L.P. Predicting physics in mesh-reduced space with temporal attention. arXiv 2022, arXiv:2201.09113. [Google Scholar] [CrossRef]
  14. Chen, J.; Hachem, E.; Viquerat, J. Graph neural networks for laminar flow prediction around random two-dimensional shapes. Phys. Fluids 2021, 33, 123607. [Google Scholar] [CrossRef]
  15. Yu, X.; Wu, T. Simulation of unsteady flow around bluff bodies using knowledge-enhanced convolutional neural network. J. Wind Eng. Ind. Aerodyn. 2023, 236, 105405. [Google Scholar] [CrossRef]
  16. Lee, S.; You, D. Mechanisms of a convolutional neural network for learning three-dimensional unsteady wake flow. arXiv 2019, arXiv:1909.06042. [Google Scholar] [CrossRef]
  17. Du, Q.; Liu, T.; Yang, L.; Li, L.; Zhang, D.; Xie, Y. Airfoil design and surrogate modeling for performance prediction based on deep learning method. Phys. Fluids 2022, 34, 015111. [Google Scholar] [CrossRef]
  18. Pfaff, T.; Fortunato, M.; Sanchez-Gonzalez, A.; Battaglia, P. Learning mesh-based simulation with graph networks. In Proceedings of the International Conference on Learning Representations, Addis Ababa, Ethiopia, 30 April 2020. [Google Scholar]
  19. Li, T.; Zou, S.; Chang, X.; Zhang, L.; Deng, X. Predicting unsteady incompressible fluid dynamics with finite volume informed neural network. Phys. Fluids 2024, 36, 043601. [Google Scholar] [CrossRef]
  20. Zou, Y.; Li, T.; Lu, L.; Wang, J.; Zou, S.; Zhang, L.; Deng, X. Finite-difference-informed graph network for solving steady-state incompressible flows on block-structured grids. Phys. Fluids 2024, 36, 103608. [Google Scholar] [CrossRef]
  21. Zhang, M.; Cao, J. Flow Field Prediction of Variable Geometry using a Novel Deep CNN Model with Dual-Feature of Flow Regions. In Proceedings of the 2024 43rd Chinese Control Conference (CCC), Kunming, China, 28–31 July 2024; IEEE: Piscataway, NJ, USA, 2024; pp. 7486–7491. [Google Scholar]
  22. Chen, L.W.; Thuerey, N. Towards high-accuracy deep learning inference of compressible flows over aerofoils. Comput. Fluids 2023, 250, 105707. [Google Scholar] [CrossRef]
  23. Chen, L.; Thuerey, N. Deep learning-based predictive modeling of transonic flow over an airfoil. Phys. Fluids 2024, 36, 127106. [Google Scholar] [CrossRef]
  24. Fortunato, M.; Pfaff, T.; Wirnsberger, P.; Pritzel, A.; Battaglia, P. Multiscale meshgraphnets. arXiv 2022, arXiv:2210.00612. [Google Scholar] [CrossRef]
  25. Gao, H.; Ji, S. Graph u-nets. In Proceedings of the International Conference on Machine Learning, Long Beach, CA, USA, 9–15 June 2019; pp. 2083–2092. [Google Scholar]
  26. Vaswani, A.; Shazeer, N.; Parmar, N.; Uszkoreit, J.; Jones, L.; Gomez, A.N.; Kaiser, L.; Polosukhin, I. Attention is all you need. Adv. Neural Inf. Process. Syst. 2017, 30, 5998–6008. [Google Scholar]
  27. Team, G.; Riviere, M.; Pathak, S.; Sessa, P.G.; Hardin, C.; Bhupatiraju, S.; Hussenot, L.; Mesnard, T.; Shahriari, B.; Ramé, A.; et al. Gemma 2: Improving open language models at a practical size. arXiv 2024, arXiv:2408.00118. [Google Scholar] [CrossRef]
  28. Team, G.; Anil, R.; Borgeaud, S.; Alayrac, J.B.; Yu, J.; Soricut, R.; Schalkwyk, J.; Dai, A.M.; Hauth, A.; Millican, K.; et al. Gemini: A family of highly capable multimodal models. arXiv 2023, arXiv:2312.11805. [Google Scholar] [CrossRef]
  29. Gu, A.; Dao, T. Mamba: Linear-time sequence modeling with selective state spaces. arXiv 2023, arXiv:2312.00752. [Google Scholar] [CrossRef]
  30. Wu, H.; Luo, H.; Wang, H.; Wang, J.; Long, M. Transolver: A fast transformer solver for pdes on general geometries. arXiv 2024, arXiv:2402.02366. [Google Scholar] [CrossRef]
  31. Luo, H.; Wu, H.; Zhou, H.; Xing, L.; Di, Y.; Wang, J.; Long, M. Transolver++: An Accurate Neural Solver for PDEs on Million-Scale Geometries. arXiv 2025, arXiv:2502.02414. [Google Scholar]
  32. Zhou, H.; Ma, Y.; Wu, H.; Wang, H.; Long, M. Unisolver: PDE-conditional transformers are universal PDE solvers. arXiv 2024, arXiv:2405.17527. [Google Scholar] [CrossRef]
  33. Xiao, L.; Zhang, M.; Chang, X. Learning Airfoil Flow Field Representation via Geometric Attention Neural Field. Appl. Sci. 2024, 14, 10685. [Google Scholar] [CrossRef]
  34. Alkin, B.; Fürst, A.; Schmid, S.; Gruber, L.; Holzleitner, M.; Brandstetter, J. Universal physics transformers: A framework for efficiently scaling neural operators. Adv. Neural Inf. Process. Syst. 2024, 37, 25152–25194. [Google Scholar]
  35. Wu, H.; Liu, X.; An, W.; Chen, S.; Lyu, H. A deep learning approach for efficiently and accurately evaluating the flow field of supercritical airfoils. Comput. Fluids 2020, 198, 104393. [Google Scholar] [CrossRef]
  36. Luo, W.; Li, Y.; Urtasun, R.; Zemel, R. Understanding the effective receptive field in deep convolutional neural networks. Adv. Neural Inf. Process. Syst. 2016, 29, 4905–4913. [Google Scholar]
  37. Alon, U.; Yahav, E. On the bottleneck of graph neural networks and its practical implications. arXiv 2020, arXiv:2006.05205. [Google Scholar]
  38. Chen, D.; Lin, Y.; Li, W.; Li, P.; Zhou, J.; Sun, X. Measuring and relieving the over-smoothing problem for graph neural networks from the topological view. In Proceedings of the AAAI Conference on Artificial Intelligence, New York, NY, USA, 7–12 February 2020; Volume 34, pp. 3438–3445. [Google Scholar]
  39. Bonnet, F.; Mazari, J.; Cinnella, P.; Gallinari, P. Airfrans: High fidelity computational fluid dynamics dataset for approximating reynolds-averaged navier–stokes solutions. Adv. Neural Inf. Process. Syst. 2022, 35, 23463–23478. [Google Scholar]
  40. Hamilton, W.; Ying, Z.; Leskovec, J. Inductive representation learning on large graphs. Adv. Neural Inf. Process. Syst. 2017, 30, 1025–1035. [Google Scholar]
  41. Qi, C.R.; Su, H.; Mo, K.; Guibas, L.J. Pointnet: Deep learning on point sets for 3d classification and segmentation. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Honolulu, HI, USA, 21–26 July 2017; pp. 652–660. [Google Scholar]
  42. Anandkumar, A.; Azizzadenesheli, K.; Bhattacharya, K.; Kovachki, N.; Li, Z.; Liu, B.; Stuart, A. Neural operator: Graph kernel network for partial differential equations. In Proceedings of the ICLR 2020 Workshop on Integration of Deep Neural Models and Differential Equations, Addis Ababa, Ethiopia, 26–30 April 2020. [Google Scholar]
  43. Hao, Z.; Wang, Z.; Su, H.; Ying, C.; Dong, Y.; Liu, S.; Cheng, Z.; Song, J.; Zhu, J. Gnot: A general neural operator transformer for operator learning. In Proceedings of the International Conference on Machine Learning, PMLR, Honolulu, HI, USA, 23–29 July 2023; pp. 12556–12569. [Google Scholar]
  44. Li, Z.; Kovachki, N.; Choy, C.; Li, B.; Kossaifi, J.; Otta, S.; Nabian, M.A.; Stadler, M.; Hundt, C.; Azizzadenesheli, K.; et al. Geometry-informed neural operator for large-scale 3d pdes. Adv. Neural Inf. Process. Syst. 2023, 36, 35836–35854. [Google Scholar]
  45. Cao, S. Choose a transformer: Fourier or galerkin. Adv. Neural Inf. Process. Syst. 2021, 34, 24924–24940. [Google Scholar]
Figure 1. FLOW-GLIDE workflow.
Figure 2. MAG-BLOCK architecture. Dark red, blue, and dark green dots indicate the nodes encompassed by the model’s local receptive field on the graph after the first, second, and third message-passing layers, respectively. Light green rounded boxes represent non-attention computations, while orange rounded boxes represent attention paths.
Figure 3. Flow field comparison between FLOW-GLIDE and CFD.
Figure 4. Streamline comparison between FLOW-GLIDE and CFD.
Figure 5. Relative L2 loss comparison with baselines (full regime).
Figure 6. Attention distribution among slices (full regime).
Figure 7. Spearman correlation of lift and drag coefficients (full regime).
Figure 8. Pressure coefficient distribution on the airfoil surface for two CFD cases (full regime).
Table 1. Definition of main symbols.

Symbol | Definition
$V$ | Point set
$V_a$ | Point $a$
$E$ | Edge set
$E_{ab}$ | Edge connecting points $a$ and $b$
$N$ | Total number of points
$\Omega$ | The entire flow field
$G$ | The input graph structure
$n_a$ | Initial feature of point $a$
$e_{ab}$ | Initial feature of edge $ab$
$d_a$ | The number of dimensions of domain $a$
$n_a^l$ | Latent feature of point $a$ after $l$ network layers
$e_{ab}^l$ | Latent feature of edge $ab$ after $l$ network layers
$\psi$ | Multilayer perceptron (MLP)
$g$ | Message Passing layer
$A$ | Attention layer
Table 2. Test result on AirfRANS (full regime; baseline results from AirfRANS and Transolver).

Model | Volume↓ | Surface↓ | $C_L$↓ | $\rho_L$↑
GraphSAGE | 0.0102 | 0.0660 | 0.517 | 0.965
Transolver | 0.0037 | 0.0140 | 0.103 | 0.997
PointNet | 0.0280 | 0.0930 | 0.742 | 0.938
FVGN | 0.0438 | 0.0230 | 0.079 | 0.997
MLP | 0.0114 | 0.1130 | 0.767 | 0.913
GNO | 0.0269 | 0.0405 | 0.201 | 0.993
GALERKIN | 0.0074 | 0.0159 | 0.233 | 0.995
GNOT | 0.0049 | 0.0152 | 0.199 | 0.994
GINO | 0.0297 | 0.0482 | 0.182 | 0.995
FLOW-GLIDE | 0.0014 | 0.0025 | 0.036 | 0.999
Table 3. Performance comparison on the Reynolds and AoA tasks (baseline results from Transolver).

Model | Reynolds: $C_D$↓ | Reynolds: $\rho_D$↑ | AoA: $C_L$↓ | AoA: $\rho_L$↑
MLP | 0.6205 | 0.9578 | 0.4128 | 0.9572
GraphSAGE | 0.4333 | 0.9707 | 0.2538 | 0.9894
Transolver | 0.2996 | 0.9896 | 0.1500 | 0.9950
PointNet | 0.3836 | 0.9806 | 0.4425 | 0.9784
GNO | 0.4408 | 0.9878 | 0.3038 | 0.9884
GALERKIN | 0.4615 | 0.9826 | 0.3814 | 0.9821
GNOT | 0.3268 | 0.9865 | 0.3497 | 0.9868
GINO | 0.4180 | 0.9645 | 0.2583 | 0.9923
FLOW-GLIDE | 0.3344 | 0.9985 | 0.1987 | 0.9958
Table 4. Comparison of model performance with different layer orderings (full regime).

Model | Volume↓ | Surface↓ | $\rho_D$↑ | $\rho_L$↑ | $C_D$↓ | $C_L$↓
GN→Attention | 0.0015 | 0.0028 | 0.5294 | 0.9994 | 0.1977 | 0.0288
Attention→GN | 0.0214 | 0.0784 | 0.5040 | 0.9957 | 0.7783 | 0.1211
FLOW-GLIDE | 0.0014 | 0.0025 | 0.7921 | 0.9996 | 0.1368 | 0.0368
Table 5. Comparison of the number of GN layers in one MAG-BLOCK (full regime).

Number of GN | Volume↓ | Surface↓ | $C_L$↓ | $C_D$↓ | $\rho_L$↑ | $\rho_D$↑
1 | 0.0015 | 0.0032 | 0.0406 | 0.2186 | 0.9995 | 0.3821
2 | 0.0021 | 0.0045 | 0.0544 | 0.2884 | 0.9991 | 0.2992
3 | 0.0014 | 0.0025 | 0.0366 | 0.1368 | 0.9996 | 0.7921
4 | 0.0023 | 0.0028 | 0.0477 | 0.1597 | 0.9993 | 0.6871
5 | 0.0019 | 0.0047 | 0.0288 | 0.1657 | 0.9995 | 0.5972
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
