Physics-Informed Fine-Tuned Neural Operator for Flow Field Modeling
Abstract
1. Introduction
1. We present a physics-informed fine-tuning method for high-dimensional flow field modeling that separates the optimization of the physics loss and the data loss into distinct phases. By fine-tuning on the inference inputs with the physics loss alone, it addresses the gradient imbalance problem that plagues joint optimization approaches in physics-informed neural operators.
2. We apply the physics-informed fine-tuning method to the CNO and propose PFT-CNO, which combines the CNO’s ability to handle high-dimensional spatio-temporal flow fields with the advantages of physics-informed fine-tuning: it requires no additional training data, yet significantly improves prediction accuracy through physics-informed fine-tuning during inference.
3. We demonstrate the effectiveness of our method on two distinct flow field systems (the shallow water equation and the advection–diffusion equation), achieving substantial improvements in both single-step and multi-step prediction and validating the practical utility of PFT-CNO.
2. Related Works
3. Materials and Methods
3.1. Problem Setup
3.2. Neural Operator
- Lifting operator $P$: lifts the lower-dimensional input function $a$ (with co-dimension $d_a$) into a higher-dimensional latent space (with co-dimension $d_v$).
- Iterative layers: The model stacks $L$ layers of $v_{l+1} = \sigma_l\left(W_l v_l + \mathcal{K}_l v_l\right)$, where $W_l$ are local linear operators, $\mathcal{K}_l$ are integral kernel operators that capture non-local dependencies in function spaces, and $\sigma_l$ denotes pointwise activation functions (a minimal code sketch of this structure follows the list).
- Projection operator $Q$: projects the higher-dimensional latent function back to the output space (with co-dimension $d_u$).
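To make the lift–iterate–project structure concrete, the following is a minimal PyTorch sketch under our own naming (`NeuralOperatorLayer`, `kernel_op`); it illustrates the generic formulation above, not the authors' implementation, and the integral kernel operator is left abstract.

```python
import torch.nn as nn

class NeuralOperatorLayer(nn.Module):
    """One iterative update: v_{l+1} = sigma(W_l v_l + K_l v_l)."""
    def __init__(self, channels, kernel_op):
        super().__init__()
        self.W = nn.Conv2d(channels, channels, kernel_size=1)  # local linear operator W_l
        self.K = kernel_op                                     # non-local integral kernel operator K_l
        self.act = nn.GELU()                                   # pointwise activation sigma_l

    def forward(self, v):
        return self.act(self.W(v) + self.K(v))

class NeuralOperator(nn.Module):
    """Lift -> L iterative layers -> project."""
    def __init__(self, in_ch, out_ch, latent_ch, kernel_ops):
        super().__init__()
        self.P = nn.Conv2d(in_ch, latent_ch, kernel_size=1)    # lifting operator P
        self.layers = nn.ModuleList(
            [NeuralOperatorLayer(latent_ch, k) for k in kernel_ops]
        )
        self.Q = nn.Conv2d(latent_ch, out_ch, kernel_size=1)   # projection operator Q

    def forward(self, a):
        v = self.P(a)             # lift the input function into the latent space
        for layer in self.layers:
            v = layer(v)          # L iterative updates
        return self.Q(v)          # project back to the output space
```

Here `kernel_op` is a placeholder for the non-local operator: a spectral convolution in an FNO, or a standard convolution in the CNO discussed next.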
3.3. Convolutional Neural Operator
- Lifting operator $P$: The lifting operator is instantiated as a convolutional layer that maps the input function $a$ to the latent function $v_0$ in the latent space, where $d$ is the number of latent channels.
- Iterative layers: Each layer $l$ implements the transformation $v_{l+1} = \sigma_l\left(K_l v_l + W_l v_l\right)$, where
  - $K_l$ is a convolutional operator with learnable kernels that captures spatial dependencies;
  - $W_l$ represents skip connections or residual mappings;
  - $\sigma_l$ applies pointwise nonlinear activations.

  Additionally, resolution adjustment operators (upsampling or downsampling) may be incorporated within layers to enable multi-scale feature extraction, inspired by architectures for image generation; a sketch of one such layer follows this list.
- Projection operator $Q$: The projection operator is realized as a convolutional layer that maps the final latent function $v_L$ to the output space.
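As an illustration, here is a minimal sketch of one CNO-style layer in PyTorch. The class name `CNOBlock` and the bilinear resampling are our assumptions; the kernel size, batch normalization, and LeakyReLU activation follow the hyperparameters in Appendix B, while the anti-aliasing filters of the full CNO [26] are omitted for brevity.

```python
import torch.nn as nn
import torch.nn.functional as F

class CNOBlock(nn.Module):
    """One CNO-style layer: v_{l+1} = sigma(K_l v_l + W_l v_l), with optional resampling."""
    def __init__(self, in_ch, out_ch, scale=1.0):
        super().__init__()
        self.K = nn.Conv2d(in_ch, out_ch, kernel_size=3, padding=1)  # learnable convolutional kernel K_l
        self.W = nn.Conv2d(in_ch, out_ch, kernel_size=1)             # skip/residual mapping W_l
        self.bn = nn.BatchNorm2d(out_ch)
        self.act = nn.LeakyReLU()                                    # pointwise activation sigma_l
        self.scale = scale  # >1 upsamples, <1 downsamples, 1 keeps the resolution

    def forward(self, v):
        h = self.act(self.bn(self.K(v) + self.W(v)))
        if self.scale != 1.0:
            # resolution adjustment for multi-scale feature extraction
            h = F.interpolate(h, scale_factor=self.scale,
                              mode="bilinear", align_corners=False)
        return h
```

Stacking such blocks with decreasing and then increasing `scale` yields the U-Net-like multi-scale structure described above.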
3.4. Physics-Informed Fine-Tuned CNO
Two-Phase Training Strategy
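The two phases decouple the competing gradients: the data loss is optimized during pretraining, and the physics loss is optimized alone during fine-tuning on the inference inputs. Below is a minimal sketch of this schedule, assuming a neural operator `G` and a user-supplied `physics_residual` function that evaluates the PDE residual of a prediction; the names and optimizer settings are illustrative, not the authors' exact configuration.

```python
import torch

def pretrain(G, train_loader, epochs, lr=1e-3):
    """Phase 1: supervised pretraining with the data loss only."""
    opt = torch.optim.Adam(G.parameters(), lr=lr)
    for _ in range(epochs):
        for a, u in train_loader:               # input function, ground-truth solution
            loss = torch.mean((G(a) - u) ** 2)  # data loss
            opt.zero_grad()
            loss.backward()
            opt.step()

def finetune(G, eval_inputs, epochs, physics_residual, lr=1e-4):
    """Phase 2: fine-tune on inference inputs with the physics loss only (no labels)."""
    opt = torch.optim.Adam(G.parameters(), lr=lr)
    for _ in range(epochs):
        for a in eval_inputs:
            u_hat = G(a)
            loss = torch.mean(physics_residual(a, u_hat) ** 2)  # PDE residual loss
            opt.zero_grad()
            loss.backward()
            opt.step()
```

Because the two losses never appear in the same objective, neither gradient can dominate the other, which is precisely the imbalance that joint optimization (as in PI-CNO) must contend with.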
4. Results
4.1. Baseline Methods
- DeepONet [33]: This is a data-driven neural operator that uses branch and trunk networks to approximate nonlinear operators. This method serves as a representative baseline for purely data-driven operator learning approaches.
- PFT-DeepONet [42]: The physics-informed fine-tuning method is applied to DeepONet and validated on simple 1D problems. It serves as a baseline to compare the advantages of our proposed PFT-CNO over existing models for more complex systems.
- Transolver [34]: This is a data-driven transformer-based neural operator that leverages attention mechanisms to achieve geometry-independent learning of flow fields. This method demonstrates the capability of transformer architectures in operator learning.
- CNO [26]: It serves as our base model architecture. This purely data-driven method provides the foundation for our physics-informed fine-tuning approach and allows us to isolate the contribution of the fine-tuning strategy.
- PD-CNO: The Physics-Driven CNO is a variant of the CNO trained exclusively using physics loss without any data loss. This baseline demonstrates the performance when relying solely on physical constraints, highlighting the importance of data-driven initialization.
- PI-CNO [40]: This is the Physics-Informed CNO, which jointly optimizes both data loss and physics loss during training using the combined objective $\mathcal{L} = \mathcal{L}_{\text{data}} + \lambda \mathcal{L}_{\text{physics}}$. This method represents the conventional approach to incorporating physical information into neural operators and allows us to compare against joint optimization strategies.
4.2. Evaluation Metrics
- Relative Error: This metric measures the normalized prediction error relative to the ground truth magnitude, defined as
  $$\text{Rel Error} = \frac{\sqrt{\sum_{i=1}^{N} \left(\hat{u}_i - u_i\right)^2}}{\sqrt{\sum_{i=1}^{N} u_i^2}},$$
  where $N$ is the total number of spatial points, $u$ represents the ground truth, and $\hat{u}$ denotes the predicted solution, which is the output of the neural operators defined in Section 3.2. As mentioned in Section 3.1, the neural operator maps input functions (e.g., initial conditions, boundary conditions, or coefficient functions) to output solution functions that satisfy the governing PDE. This metric provides a scale-invariant measure of prediction accuracy.
- Root Mean Square Error (RMSE): The RMSE can be expressed in terms of the standard deviations and the correlation coefficient:
  $$\text{RMSE} = \sqrt{\sigma_u^2 + \sigma_{\hat{u}}^2 - 2\,\sigma_u \sigma_{\hat{u}} R},$$
  where $\sigma_u$ and $\sigma_{\hat{u}}$ represent the ground truth and predicted standard deviations, respectively, and $R$ represents the Pearson correlation coefficient between them. This metric penalizes variance differences and insufficient correlation.
- Mean Absolute Error (MAE): The MAE quantifies the average absolute deviation between predictions and the ground truth:
  $$\text{MAE} = \frac{1}{N} \sum_{i=1}^{N} \left|\hat{u}_i - u_i\right|.$$
  This metric offers an intuitive interpretation of prediction errors in the original units.
- Coefficient of Determination ($R^2$): The $R^2$ score measures the proportion of variance in the ground truth explained by the predictions:
  $$R^2 = 1 - \frac{\sum_{i=1}^{N} \left(u_i - \hat{u}_i\right)^2}{\sum_{i=1}^{N} \left(u_i - \bar{u}\right)^2},$$
  where $\bar{u}$ is the mean of the ground truth. An $R^2$ value close to 1 indicates excellent predictive performance, while negative values suggest predictions worse than the mean.
- Inference Time: This metric measures computational efficiency by recording the average time required for a single forward prediction; lower inference times indicate better suitability for real-time applications. We measure the inference time of all methods on an NVIDIA H800 GPU with 64 CPU cores and a batch size of 64.
- ANOVA: This test assesses whether significant differences exist among multiple models using the F-statistic ($p < 0.05$ indicates significance):
  $$F = \frac{\sum_{i=1}^{k} n_i \left(\bar{x}_i - \bar{x}\right)^2 / (k-1)}{\sum_{i=1}^{k} \sum_{j=1}^{n_i} \left(x_{ij} - \bar{x}_i\right)^2 / (N-k)},$$
  where $k$ is the number of models, $n_i$ is the sample size of the $i$-th model, $N$ is the total sample size, $\bar{x}_i$ is the mean of the $i$-th model, and $\bar{x}$ is the overall mean.
- T-test: The paired t-test compares performance differences between two models on the same data:
  $$t = \frac{\bar{d}}{s_d / \sqrt{n}},$$
  where $\bar{d}$ is the mean of the paired differences ($\text{MSE}_1 - \text{MSE}_2$), $s_d$ is the standard deviation of the paired differences, and $n$ is the sample size.
- Cohen’s d: This quantifies the magnitude (effect size) of the performance difference between two models. For paired samples, it is calculated as
  $$d = \frac{\bar{d}}{s_d}.$$
  The sign of $d$ indicates which model performs better: $d > 0$ suggests that Model 2 has better performance (Model 1 has the larger error), while $d < 0$ indicates that Model 1 performs better. A code sketch of these computations follows the list.
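For concreteness, here is a minimal NumPy/SciPy sketch of the error metrics and paired statistics above; it assumes `u` and `u_hat` are flattened arrays and that per-sample MSEs of two models are compared pairwise, and the function names are ours, not the paper's.

```python
import numpy as np
from scipy import stats

def error_metrics(u, u_hat):
    """Relative error, RMSE, MAE, and R^2 between ground truth u and prediction u_hat."""
    rel = np.linalg.norm(u_hat - u) / np.linalg.norm(u)
    rmse = np.sqrt(np.mean((u_hat - u) ** 2))
    mae = np.mean(np.abs(u_hat - u))
    r2 = 1.0 - np.sum((u - u_hat) ** 2) / np.sum((u - u.mean()) ** 2)
    return rel, rmse, mae, r2

def paired_comparison(mse1, mse2):
    """Paired t-test and Cohen's d on per-sample MSEs of two models."""
    diff = mse1 - mse2
    t_stat, p_value = stats.ttest_rel(mse1, mse2)  # t = mean(diff) / (std(diff)/sqrt(n))
    cohens_d = diff.mean() / diff.std(ddof=1)      # d > 0 favors Model 2
    return t_stat, p_value, cohens_d
```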
4.3. Shallow Water Equation
4.3.1. Data Collection
4.3.2. Single-Step Prediction Performance
4.3.3. Multi-Step Prediction Performance
4.4. Advection–Diffusion Equation
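To illustrate how a physics loss for this system can be evaluated, below is a finite-difference sketch of the advection–diffusion residual on a periodic grid; the velocity `c`, diffusivity `nu`, and the discretization are placeholder assumptions, not the exact setup used in our experiments.

```python
import torch

def ade_residual(u_prev, u_next, dt, dx, c=(1.0, 1.0), nu=0.01):
    """Residual of u_t + c . grad(u) - nu * laplace(u) = 0 via central differences."""
    u_t = (u_next - u_prev) / dt
    # first derivatives with periodic wrap-around
    u_x = (torch.roll(u_next, -1, dims=-1) - torch.roll(u_next, 1, dims=-1)) / (2 * dx)
    u_y = (torch.roll(u_next, -1, dims=-2) - torch.roll(u_next, 1, dims=-2)) / (2 * dx)
    # five-point Laplacian
    lap = (torch.roll(u_next, -1, dims=-1) + torch.roll(u_next, 1, dims=-1)
           + torch.roll(u_next, -1, dims=-2) + torch.roll(u_next, 1, dims=-2)
           - 4 * u_next) / dx ** 2
    return u_t + c[0] * u_x + c[1] * u_y - nu * lap
```

The mean squared value of this residual over the grid is the physics loss minimized during fine-tuning.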
4.4.1. Data Collection
4.4.2. Single-Step Prediction Performance
4.4.3. Multi-Step Prediction Performance
4.5. ANOVA and T-Test
5. Conclusions
6. Limitations and Future Perspectives
Author Contributions
Funding
Data Availability Statement
Conflicts of Interest
Appendix A. Pseudocode
Algorithm A1: Physics-informed fine-tuned neural operator
Require: Training dataset $\mathcal{D}_{\text{train}}$
Require: Evaluation dataset $\mathcal{D}_{\text{eval}}$ (only inputs)
Require: Neural operator $\mathcal{G}_\theta$, learning rates $\eta_{\text{pre}}$, $\eta_{\text{ft}}$
Require: Optimizers $\mathcal{O}_{\text{pre}}$ (for pretraining), $\mathcal{O}_{\text{ft}}$ (for fine-tuning)
Require: Number of pretraining epochs $E_{\text{pre}}$, fine-tuning epochs $E_{\text{ft}}$
Appendix B. Model Parameters
| Training Hyperparameters | | Model Hyperparameters | |
|---|---|---|---|
| Parameter | Value | Parameter | Value |
| Optimizer | Adam | | 3 |
| Learning rate | 0.001 | | 3 |
| LR scheduler | Step | | 5 |
| Scheduler step size | 100 | Channel multiplier | 32 |
| Gamma | 0.5 | Activation | LeakyReLU |
| Train batch size | 16 | Kernel size | 3 |
| Eval batch size | 64 | Cutoff denominator | 2.0001 |
| Training updates | 1000 | Filter size | 6 |
| Physics loss weight ($\lambda$) | 1 | LReLU upsampling | 2 |
| | | Half-width mult. | 0.8 |
| | | Batch norm | True |
| | | Latent dim | 128 |
Appendix C. Visualizations of Results
Appendix D. Experiment on Cylinder Wake Flow
| Method | Rel Error ↓ | RMSE ↓ | MAE ↓ | $R^2$ ↑ | Eval Time (s) ↓ |
|---|---|---|---|---|---|
| DeepONet | 0.30349 | 0.05495 | 0.02918 | 0.62043 | 0.00192 |
| CNO | 0.08664 | 0.01586 | 0.00817 | 0.96837 | 0.00178 |
| PFT-CNO | 0.04348 | 0.00785 | 0.00387 | 0.99225 | 0.00171 |
References
- Uchida, T.; Ohya, Y. Numerical simulation of atmospheric flow over complex terrain. J. Wind. Eng. Ind. Aerodyn. 1999, 81, 283–293. [Google Scholar] [CrossRef]
- Klein, R. Scale-dependent models for atmospheric flows. Annu. Rev. Fluid Mech. 2010, 42, 249–274. [Google Scholar] [CrossRef]
- Liu, X.; Zhang, Y.; Song, D.; Yang, H.; Li, X. A multifractal cascade model for energy evolution and dissipation in ocean turbulence. J. Mar. Sci. Eng. 2023, 11, 1768. [Google Scholar] [CrossRef]
- Xu, H.; Wang, M.; Zhou, Z.; Sun, T.; Zhang, G. Unsteady Flow and Loading Characteristics of Rotating Spheres During Underwater Ejection. J. Mar. Sci. Eng. 2025, 13, 2331. [Google Scholar] [CrossRef]
- Maximillian, J.; Brusseau, M.L.; Glenn, E.P.; Matthias, A.D. Pollution and environmental perturbations in the global system. In Environmental and Pollution Science, 3rd ed.; Academic Press: Cambridge, MA, USA, 2019; pp. 457–476. [Google Scholar]
- Van Gennip, S.J.; Popova, E.E.; Yool, A.; Pecl, G.T.; Hobday, A.J.; Sorte, C.J.B. Going with the flow: The role of ocean circulation in global marine ecosystems under a changing climate. Glob. Change Biol. 2017, 23, 2602–2617. [Google Scholar] [CrossRef]
- Bodnar, C.; Bruinsma, W.P.; Lucic, A.; Stanley, M.; Allen, A.; Brandstetter, J.; Garvan, P.; Riechert, M.; Weyn, J.A.; Dong, H.; et al. A foundation model for the Earth system. Nature 2025, 641, 1180–1187. [Google Scholar] [CrossRef]
- Liu, P.; Georgiadis, M.C.; Pistikopoulos, E.N. Advances in energy systems engineering. Ind. Eng. Chem. Res. 2011, 50, 4915–4926. [Google Scholar] [CrossRef]
- Brunton, S.L.; Noack, B.R.; Koumoutsakos, P. Machine learning for fluid mechanics. Annu. Rev. Fluid Mech. 2020, 52, 477–508. [Google Scholar] [CrossRef]
- Cai, Z. On the finite volume element method. Numer. Math. 1990, 58, 713–735. [Google Scholar] [CrossRef]
- Teixeira, F.L.; Sarris, C.; Zhang, Y.; Na, D.-Y.; Berenger, J.-P.; Su, Y.; Okoniewski, M.; Chew, W.C.; Backman, V.; Simpson, J.J. Finite-difference time-domain methods. Nat. Rev. Methods Prim. 2023, 3, 75. [Google Scholar] [CrossRef]
- Pirozzoli, S. Numerical methods for high-speed flows. Annu. Rev. Fluid Mech. 2011, 43, 163–194. [Google Scholar] [CrossRef]
- Cheng, M.; Fang, F.; Pain, C.C.; Navon, I.M. Data-driven modelling of nonlinear spatio-temporal fluid flows using a deep convolutional generative adversarial network. Comput. Methods Appl. Mech. Eng. 2020, 365, 113000. [Google Scholar] [CrossRef]
- Vinuesa, R.; Brunton, S.L.; McKeon, B.J. The transformative potential of machine learning for experiments in fluid mechanics. Nat. Rev. Phys. 2023, 5, 536–545. [Google Scholar] [CrossRef]
- Feng, H.; Wang, Y.; Fan, D. How to re-enable PDE loss for physical systems modeling under partial observation. Proc. Aaai Conf. Artif. Intell. 2025, 39, 182–190. [Google Scholar] [CrossRef]
- Gilpin, W. Generative learning for nonlinear dynamics. Nat. Rev. Phys. 2024, 6, 194–206. [Google Scholar] [CrossRef]
- Solomatine, D.P.; Ostfeld, A. Data-driven modelling: Some past experiences and new approaches. J. Hydroinform. 2008, 10, 3–22. [Google Scholar] [CrossRef]
- Marcus, E.; Teuwen, J. Artificial intelligence and explanation: How, why, and when to explain black boxes. Eur. J. Radiol. 2024, 173, 111393. [Google Scholar] [CrossRef]
- Li, Z.; Zheng, H.; Kovachki, N.; Jin, D.; Chen, H.; Liu, B.; Azizzadenesheli, K.; Anandkumar, A. Physics-informed neural operator for learning partial differential equations. ACM/IMS J. Data Sci. 2024, 1, 1–27. [Google Scholar] [CrossRef]
- Goswami, S.; Bora, A.; Yu, Y.; Karniadakis, G.E. Physics-informed deep neural operator networks. In Machine Learning in Modeling and Simulation: Methods and Applications; Springer: Berlin/Heidelberg, Germany, 2023; pp. 219–254. [Google Scholar]
- Rosofsky, S.G.; Al Majed, H.; Huerta, E.A. Applications of physics informed neural operators. Mach. Learn. Sci. Technol. 2023, 4, 025022. [Google Scholar] [CrossRef]
- Wang, S.; Teng, Y.; Perdikaris, P. Understanding and mitigating gradient flow pathologies in physics-informed neural networks. SIAM J. Sci. Comput. 2021, 43, A3055–A3081. [Google Scholar] [CrossRef]
- Wang, S.; Yu, X.; Perdikaris, P. When and why PINNs fail to train: A neural tangent kernel perspective. J. Comput. Phys. 2022, 449, 110768. [Google Scholar] [CrossRef]
- Sahoo, M.; Chakraverty, S. Dynamics of tsunami wave propagation in uncertain environment. Comput. Appl. Math. 2024, 43, 266. [Google Scholar] [CrossRef]
- Feng, H.; Hu, P.; Wang, Y.; Fan, D.; Wu, T.; Zhang, Y. Physics-informed super-resolution and forecasting method based on inaccurate partial differential equations and partial observation. Phys. Fluids 2025, 37, 066625. [Google Scholar] [CrossRef]
- Raonic, B.; Molinaro, R.; De Ryck, T.; Rohner, T.; Bartolucci, F.; Alaifari, R.; Mishra, S.; de Bézenac, E. Convolutional neural operators for robust and accurate learning of PDEs. Adv. Neural Inf. Process. Syst. 2023, 36, 77187–77200. [Google Scholar]
- Garcia-Fernandez, R.; Portal-Porras, K.; Irigaray, O.; Ansa, Z.; Fernandez-Gamiz, U. CNN-based flow field prediction for bus aerodynamics analysis. Sci. Rep. 2023, 13, 21213. [Google Scholar] [CrossRef]
- Deng, Z.; Chen, Y.; Liu, Y.; Kim, K.C. Time-resolved turbulent velocity field reconstruction using a long short-term memory (LSTM)-based artificial intelligence framework. Phys. Fluids 2019, 31, 075108. [Google Scholar] [CrossRef]
- Zhuang, D.L.; Wu, Y.; Wang, Q. A recurrent neural network for particle trajectory prediction in complex flow fields. Phys. Fluids 2025, 37, 105101. [Google Scholar] [CrossRef]
- Li, J.; Li, Y.; Liu, T.; Zhang, D.; Xie, Y. Multi-fidelity graph neural network for flow field data fusion of turbomachinery. Energy 2023, 285, 129405. [Google Scholar] [CrossRef]
- Wu, T.; Wang, Q.; Zhang, Y.; Ying, R.; Cao, K.; Sosic, R.; Jalali, R.; Hamam, H.; Maucec, M.; Leskovec, J. Learning large-scale subsurface simulations with a hybrid graph network simulator. In Proceedings of the 28th ACM SIGKDD Conference on Knowledge Discovery and Data Mining, Washington, DC, USA, 14–18 August 2022; pp. 4184–4194. [Google Scholar]
- Li, Z.; Huang, D.Z.; Liu, B.; Anandkumar, A. Fourier neural operator with learned deformations for pdes on general geometries. J. Mach. Learn. Res. 2023, 24, 1–26. [Google Scholar]
- Lu, L.; Jin, P.; Pang, G.; Zhang, Z.; Karniadakis, G.E. Learning nonlinear operators via DeepONet based on the universal approximation theorem of operators. Nat. Mach. Intell. 2021, 3, 218–229. [Google Scholar] [CrossRef]
- Wu, H.; Luo, H.; Wang, H.; Wang, J.; Long, M. Transolver: A fast transformer solver for PDEs on general geometries. In Proceedings of the 41st International Conference on Machine Learning, Vienna, Austria, 21–27 July 2024; pp. 53681–53705. [Google Scholar]
- Hu, P.; Wang, R.; Zheng, X.; Zhang, T.; Feng, H.; Feng, R.; Wei, L.; Wang, Y.; Ma, Z.-M.; Wu, T. Wavelet Diffusion Neural Operator. In Proceedings of the Thirteenth International Conference on Learning Representations, Singapore, 24–28 April 2025. [Google Scholar]
- Raissi, M.; Perdikaris, P.; Karniadakis, G.E. Physics-informed neural networks: A deep learning framework for solving forward and inverse problems involving nonlinear partial differential equations. J. Comput. Phys. 2019, 378, 686–707. [Google Scholar] [CrossRef]
- Cai, S.; Wang, Z.; Wang, S.; Perdikaris, P.; Karniadakis, G.E. Physics-informed neural networks for heat transfer problems. J. Heat Transf. 2021, 143, 060801. [Google Scholar] [CrossRef]
- Rasht-Behesht, M.; Huber, C.; Shukla, K.; Karniadakis, G.E. Physics-informed neural networks (PINNs) for wave propagation and full waveform inversions. J. Geophys. Res. Solid Earth 2022, 127, e2021JB023120. [Google Scholar] [CrossRef]
- Wang, S.; Wang, H.; Perdikaris, P. Learning the solution operator of parametric partial differential equations with physics-informed DeepONets. Sci. Adv. 2021, 7, eabi8605. [Google Scholar] [CrossRef] [PubMed]
- Ma, X.; Alkhalifah, T. An effective physics-informed neural operator framework for predicting wavefields. arXiv 2025, arXiv:2507.16431. [Google Scholar] [CrossRef]
- Yamazaki, Y.; Harandi, A.; Muramatsu, M.; Viardin, A.; Apel, M.; Brepols, T.; Reese, S.; Rezaei, S. A finite element-based physics-informed operator learning framework for spatiotemporal partial differential equations on arbitrary domains. Eng. Comput. 2025, 41, 1–29. [Google Scholar] [CrossRef]
- Zhang, Z.; Moya, C.; Lu, L.; Lin, G.; Schaeffer, H. Deeponet as a multi-operator extrapolation model: Distributed pretraining with physics-informed fine-tuning. J. Comput. Phys. 2025, 547, 114537. [Google Scholar] [CrossRef]
- Kang, S.; Park, G.; Kum, D. A Transfer Learning Physics-Informed Neural Network Framework for Solving Multi-Coupled Partial Differential Equations with Complex Geometries: Fuel Cell Case Study. SSRN 2025. [Google Scholar] [CrossRef]
- Wang, Y.; Bai, J.; Eshaghi, M.S.; Anitescu, C.; Zhuang, X.; Rabczuk, T.; Liu, Y. Transfer Learning in Physics-Informed Neural Networks: Full Fine-Tuning, Lightweight Fine-Tuning, and Low-Rank Adaptation. Int. J. Mech. Syst. Dyn. 2025, 5, 212. [Google Scholar] [CrossRef]
- Ronneberger, O.; Fischer, P.; Brox, T. U-net: Convolutional networks for biomedical image segmentation. In Proceedings of the International Conference on Medical Image Computing and Computer-Assisted Intervention, Munich, Germany, 5–9 October 2015; pp. 234–241. [Google Scholar]
- Ren, P.; Rao, C.; Liu, Y.; Ma, Z.; Wang, Q.; Wang, J.X.; Sun, H. PhySR: Physics-informed deep super-resolution for spatiotemporal data. J. Comput. Phys. 2023, 492, 112438. [Google Scholar] [CrossRef]
- Akbar, Z.; Murtaza, N.; Pasha, G.A.; Iqbal, S.; Ghumman, A.R.; Abbas, F.M. Predicting scour depth in a meandering channel with spur dike: A comparative analysis of machine learning techniques. Phys. Fluids 2025, 37, 045158. [Google Scholar] [CrossRef]
- Azizzadenesheli, K.; Kovachki, N.; Li, Z.; Liu-Schiaffini, M.; Kossaifi, J.; Anandkumar, A. Neural operators for accelerating scientific simulations and design. Nat. Rev. Phys. 2024, 6, 320–328. [Google Scholar] [CrossRef]
- Wang, Y.; Li, Z.; Yuan, Z.; Peng, W.; Liu, T.; Wang, J. Prediction of turbulent channel flow using Fourier neural operator-based machine-learning strategy. Phys. Rev. Fluids 2024, 9, 084604. [Google Scholar] [CrossRef]
- Zhang, K.; Zuo, Y.; Zhao, H.; Ma, X.; Gu, J.; Wang, J.; Yang, Y.; Yao, C.; Yao, J. Fourier neural operator for solving subsurface oil/water two-phase flow partial differential equation. SPE J. 2022, 27, 1815–1830. [Google Scholar] [CrossRef]
- Weymouth, G.D.; Yue, D.K. Boundary data immersion method for Cartesian-grid simulations of fluid-body interaction problems. J. Comput. Phys. 2011, 230, 6233–6247. [Google Scholar] [CrossRef]
| Method | Rel Error ↓ | RMSE ↓ | MAE ↓ | $R^2$ ↑ | Eval Time (s) ↓ |
|---|---|---|---|---|---|
| DeepONet | 0.16331 | 0.09720 | 0.07124 | −0.05412 | 0.00727 |
| PFT-DeepONet | 0.15846 | 0.09475 | 0.06924 | −0.00190 | 0.00737 |
| Transolver | 0.16255 | 0.09682 | 0.07093 | −0.04465 | 0.00156 |
| CNO | 0.01284 | 0.00790 | 0.00389 | 0.99304 | 0.00326 |
| PD-CNO | 0.01693 | 0.01031 | 0.00538 | 0.98813 | 0.00334 |
| PI-CNO | 0.01256 | 0.00774 | 0.00384 | 0.99331 | 0.00352 |
| PFT-CNO | 0.01178 | 0.00737 | 0.00372 | 0.99394 | 0.00314 |
| Method | SWE (Rel Error) ↓ | ADE (Rel Error) ↓ |
|---|---|---|
| CNO | 0.09250 | 0.19034 |
| PFT-CNO | 0.06551 | 0.15664 |
| Method | Rel Error ↓ | RMSE ↓ | MAE ↓ | $R^2$ ↑ | Eval Time (s) ↓ |
|---|---|---|---|---|---|
| DeepONet | 0.25895 | 6.94667 | 1.51632 | 0.46684 | 0.00719 |
| PFT-DeepONet | 0.19702 | 5.35981 | 1.16720 | 0.68260 | 0.00736 |
| Transolver | 0.15784 | 3.34494 | 1.27491 | 0.87638 | 0.00224 |
| CNO | 0.06477 | 1.19996 | 0.41310 | 0.98409 | 0.00365 |
| PD-CNO | 0.06986 | 1.30461 | 0.44717 | 0.98120 | 0.00360 |
| PI-CNO | 0.06297 | 1.16599 | 0.39620 | 0.98498 | 0.00344 |
| PFT-CNO | 0.04767 | 0.90596 | 0.31588 | 0.99093 | 0.00358 |
| | Model Comparison | Mean Difference | ANOVA (F-Statistic) | T-Test (T-Statistic) | Cohen’s d |
|---|---|---|---|---|---|
| SWE | CNO vs. PFT-CNO | 0.0277 | 1234.20 | 106.84 | 5.04 |
| | CNO vs. PI-CNO | 0.0101 | 3.42 | 14.50 | 0.68 |
| | PFT-DeepONet vs. PFT-CNO | 0.0681 | 2147.95 | 54.93 | 2.59 |
| ADE | CNO vs. PFT-CNO | 1.6386 | 266.51 | 33.56 | 1.58 |
| | CNO vs. PI-CNO | 1.2240 | 81.40 | 19.69 | 0.93 |
| | PFT-DeepONet vs. PFT-CNO | 2.8370 | 2329.92 | 46.60 | 2.20 |