Article

Implicit Neural Representation for Dense Event-Based Imaging Velocimetry

Hubei Provincial Engineering Research Center of Robotics & Intelligent Manufacturing, School of Mechanical and Electronic Engineering, Wuhan University of Technology, Wuhan 430070, China
* Author to whom correspondence should be addressed.
These authors contributed equally to this work.
Mathematics 2026, 14(3), 572; https://doi.org/10.3390/math14030572
Submission received: 6 January 2026 / Revised: 31 January 2026 / Accepted: 4 February 2026 / Published: 5 February 2026
(This article belongs to the Special Issue Applied Mathematics in Fluid Mechanics and Flows)

Abstract

This paper presents an Implicit Neural Representation method for Event-Based Imaging Velocimetry (INR-VG) to reconstruct dense velocity fields from sparse event streams. The core idea is to learn a mapping (multilayer perceptron) from spatial coordinates to flow velocities, $v(\mathbf{x}) = f(\mathbf{x}; \theta)$, which enables dense velocity measurements at any desired spatial resolution. The neural network is optimized through test-time optimization by minimizing the alignment error between warped voxel grids of events. Extensive evaluations on synthetic datasets and real-world flows demonstrate that INR-VG achieves high accuracy (errors as low as 0.05 px/ms) and maintains robustness in challenging conditions where existing methods typically fail, including low event rates and large displacements, significantly outperforming optical-flow-based baselines. To the best of our knowledge, this work represents the first successful application of implicit neural representations to event-based imaging velocimetry (EBIV), establishing a new paradigm for dense and robust event-based flow measurement. The implementation and experimental details are publicly available to support reproducibility and future research.

1. Introduction

Event-Based Imaging Velocimetry [1,2,3,4] has recently gained increasing attention as a novel non-intrusive, whole-field flow velocity measurement technique. In EBIV, tracer particles are illuminated by a laser sheet and their motion, assumed to follow the underlying flow [5], is captured by an event camera. The flow velocity is then estimated from the recorded particle events. As the enabling sensor, event cameras [6,7,8,9] respond only to logarithmic brightness changes at individual pixels and output asynchronous events with microsecond temporal resolution—far surpassing conventional frame-based cameras. This acquisition paradigm allows particle motion to be captured clearly under extremely high-speed or challenging illumination conditions, highlighting the strong potential of EBIV for resolving complex flow dynamics [10,11]. In practice, fully unlocking this capability relies on the estimation algorithms that can effectively convert sparse, asynchronous events into accurate and dense velocity fields. The difficulty arises from the sparsity and velocity-dependence of the event data, as well as the inherent complexity and multiscale structure of realistic flow fields [12,13]. These factors lead to two essential challenges for EBIV—achieving high accuracy for each individual velocity estimation and obtaining sufficiently dense spatial measurements across the field of view. In other words, the main difficulty for EBIV algorithms is to simultaneously achieve a high dynamic velocity range (DVR) and a high dynamic spatial range (DSR) [14,15].
EBIV algorithms can be classified by their use of pseudo-frames. (1) The first strategy converts events into pseudo-frames—such as voxel grids [1,16], time surfaces [17], or binary particle maps [11]—and then applies conventional frame-based velocity estimators, including cross-correlation [1], optical flow [17,18], or deep neural networks [16,19], to compute displacements over a time interval. (2) The second strategy operates directly on raw events without reconstructing pseudo-frames. It can be implemented by tracking individual event trajectories—e.g., triplet-matching [20] or Kalman filter-based trajectory reconstruction [21]—or by aligning groups of events through methods such as local plane-fitting [22] or motion-compensation approaches (e.g., contrast maximization [23,24] and projection concentration maximization [4]). These frameless approaches generally yield more accurate flow estimates than pseudo-frame methods, albeit at higher computational cost. As a result, existing EBIV methods are generally capable of achieving a satisfactory DVR for practical flow measurements.
Across both algorithmic families, one can find window-based estimators as well as dense per-pixel methods, each exhibiting distinct dynamic spatial ranges. We argue that these differences primarily stem from how the high-dimensional flow field $v(\mathbf{x})$ is represented. Window-based estimators rely on a locally uniform-motion model, $v(\mathbf{x}) = \mathbf{c}$, which suppresses intra-window variations and restricts the achievable DSR [1,5]. Dense per-pixel methods instead assign a velocity vector to every pixel [25], offering high spatial resolution but suffering from ill-posedness—most notably the aperture problem—which makes them sensitive to noise and heavily dependent on regularization. These limitations suggest that a more expressive yet compact parameterization, $v_\theta(\mathbf{x})$, may be essential for next-generation velocimetry algorithms. Similar challenges related to high-precision localization and robustness under severe speckle noise have also been widely investigated in optical imaging and sensing, motivating the development of compact representations and learning-based mitigation strategies [26,27]. Indeed, models with richer degrees of freedom, such as rotational [28] or affine motion representations [29], outperform the uniform-motion assumption, while reduced-order models demonstrate clear advantages over dense per-pixel formulations [30,31]. Building on this trend, representing the velocity field as a continuous coordinate-based mapping is an idea with a long history [32], but it has recently been revitalized by the emergence of NeRFs [33] and more general implicit neural representations [34,35,36,37]. These implicit neural flow models (Figure 1) offer a highly flexible and compact way to represent complex velocity fields.
Recent advances in PIV have demonstrated that implicit neural representations (INR) can outperform traditional cross-correlation and optical-flow methods by enabling continuous velocity parameterization, consistent regularization, and effective incorporation of physical constraints [36,38]. These successes naturally motivate extending INR-based formulations to event-based imaging velocimetry, making them a promising—and largely unexplored—direction for EBIV.
Motivated by the success of INRs, we propose INR-VG to achieve a higher dynamic spatial range. Specifically, INR-VG integrates INR with voxel-grid-based event encoding for event-based imaging velocimetry. In INR-VG, the flow field is modeled as a continuous function in space, enabling expressive and resolution-independent velocity estimation. INR-VG conceptually advances beyond feed-forward optical flow models because it requires no large-scale pre-training, performs test-time optimization directly on each event sequence, and provides high-resolution continuous outputs from which gradient fields such as vorticity can be derived for downstream analysis. The main contributions of this work are described below.
  • The INR is introduced for event-based imaging velocimetry, enabling a continuous and expressive representation of the underlying flow fields.
  • We investigate how INR parameterization and event recording conditions affect measurement accuracy, providing insights for EBIV system design and configuration.
  • Extensive synthetic and real-world experiments demonstrate the superior accuracy, spatial resolution, and robustness of our INR-VG over existing EBIV approaches.
The remainder of this paper is organized as follows. Section 2 details the proposed INR-VG method. Section 3 presents experimental evaluations on synthetic and real datasets, demonstrating the method’s performance across varying measurement conditions. Finally, Section 4 concludes the paper with a discussion of limitations and future research directions.

2. Methodology

Our INR-VG aims to estimate a dense, continuous flow field directly from sparse, asynchronous event streams. We assume that the underlying flow can be well represented by a continuous function [36], and that sufficient tracer particles (with appropriate size) move through the flow to generate enough events for accurate velocity field reconstruction. As illustrated in Figure 2, the overall pipeline consists of three key components. First, the asynchronous events are converted into a time-sliced voxel grid, providing a structured representation of the event data. Second, the underlying flow field is modeled with an implicit neural representation, which encodes the velocity field as a continuous, differentiable function over spatial coordinates, enabling dense, high-resolution predictions at arbitrary locations. Third, a voxel consistency objective is formulated to enforce agreement between the predicted flow and the observed event distributions. This objective jointly constrains temporal alignment and spatial continuity, and the entire process is optimized end-to-end via gradient-based methods. In addition, implementation details are given.

2.1. Voxel Grid Construction for Events

Event cameras output asynchronous events in the form of $(\mathbf{x}, t, p)$, where $\mathbf{x} = (x, y)$ denotes the pixel location, $t$ is the timestamp, and $p \in \{+1, -1\}$ represents the polarity of the brightness change [1]. Due to their asynchronous and sparse nature, these events do not form conventional image frames, making them incompatible with standard synchronous processing techniques commonly used in frame-based vision. To address this, several approaches have been proposed to encode events into synchronous, structured representations by aggregating events over small spatial and temporal intervals. Examples include the time surface [39], which captures the most recent event timestamp at each pixel; the voxel grid [40], which discretizes the observation window into uniform temporal slices and accumulates events within each voxel; and the event spike tensor (EST) [41], which stacks events along both temporal and polarity dimensions to form a multi-channel tensor. Among these, we adopt the voxel grid representation due to its simplicity and effectiveness:
$G(\mathbf{x}, t_c) = \sum_{i \in \varepsilon_c} p_i \, k_x(\mathbf{x} - \mathbf{x}_i) \, k_t(t_c - t_i)$ (1)
where $\varepsilon_c = \{\, i \mid t_i \in [t_c, t_{c+1}) \,\}$ denotes the set of events within the $c$-th temporal slice, and $k_x(\cdot)$ and $k_t(\cdot)$ are interpolation kernels for distributing events onto the spatial–temporal grid. By partitioning the observation window into uniformly spaced time slices and accumulating events in each slice, each voxel encodes the local spatiotemporal distribution of events. Stacking these slices along the temporal axis produces a structured representation, while enabling direct warping via the predicted velocity field between different time slices.
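As a concrete illustration, the accumulation formula above can be sketched as follows. The kernel choices here are illustrative assumptions: a nearest-neighbour spatial kernel and a linear (triangular) temporal kernel, a common pairing for event voxelisation, which may differ from the exact implementation.

```python
import numpy as np

def build_voxel_grid(xs, ys, ts, ps, H, W, num_bins):
    """Accumulate polarity-weighted events into a (num_bins, H, W) voxel grid.

    Illustrative kernels: nearest-neighbour in space (k_x) and linear
    interpolation in time (k_t).
    """
    grid = np.zeros((num_bins, H, W), dtype=np.float64)
    t0, t1 = ts.min(), ts.max()
    # Normalise timestamps onto the slice axis [0, num_bins - 1]
    tn = (ts - t0) / max(t1 - t0, 1e-9) * (num_bins - 1)
    for x, y, t, p in zip(xs, ys, tn, ps):
        c = int(np.floor(t))
        w = t - c  # linear weight shared between slices c and c + 1
        grid[c, y, x] += p * (1.0 - w)
        if c + 1 < num_bins:
            grid[c + 1, y, x] += p * w
    return grid
```

Stacking the temporal slices along the first axis yields the structured tensor that the warping objective in Section 2.3 operates on.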

2.2. Implicit Neural Representation of the Flow Field

We represent the continuous flow field using an implicit neural representation, which maps 2D spatial coordinates $\mathbf{x}$ to flow velocities $v_\theta(\mathbf{x})$. To enhance expressiveness, each coordinate is first mapped into a higher-dimensional space using Fourier feature encoding [38]:
$\gamma(\mathbf{x}) = \big[\sin(B\mathbf{x}),\ \cos(B\mathbf{x})\big]^{T}$ (2)
where $B \in \mathbb{R}^{(N_e/2) \times 2}$ is a random Gaussian matrix with elements sampled from $\mathcal{N}(0, 1/\beta^2)$. As a result, the encoded vector $\gamma(\mathbf{x}) \in \mathbb{R}^{N_e}$ lifts the spatial coordinates into a high-dimensional space suitable for modeling high-frequency flow variations. The variance parameter $\beta$ controls the frequency spectrum of the encoding, with larger values emphasizing low-frequency patterns and smaller values capturing high-frequency variations.
The flow field $v_\theta(\mathbf{x})$ is parameterized by a fully connected multilayer perceptron with three hidden layers, which provides sufficient expressive capacity while maintaining stable optimization and computational efficiency for test-time optimization. The network takes an $N_e$-dimensional encoded coordinate as input; the hidden layers have widths $2N_e$, $2N_e$, and $N_e/2$, respectively; and the final layer outputs a two-dimensional velocity vector. All hidden layers use the Gaussian Error Linear Unit (GELU) activation, $\sigma(z) = \mathrm{GELU}(z)$, providing smooth nonlinear mappings that facilitate the representation of continuous velocity fields and their spatial derivatives [36,42]. We note that other commonly used activation functions (e.g., ReLU or SiLU) are also applicable in our framework and lead to comparable performance. Putting these together, the parameterized velocity field is implemented as $v_\theta(\mathbf{x}) = \mathrm{MLP}_\theta(\gamma(\mathbf{x}))$, where $\theta$ denotes the tunable parameters of the MLP.
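A minimal NumPy sketch of this encoder-plus-MLP parameterization is given below. The weight initialisation and the tanh approximation of GELU are our own assumptions for illustration; the actual model is trained with automatic differentiation in PyTorch.

```python
import numpy as np

def gelu(z):
    # tanh approximation of the Gaussian Error Linear Unit
    return 0.5 * z * (1.0 + np.tanh(np.sqrt(2.0 / np.pi) * (z + 0.044715 * z**3)))

class INRFlow:
    """Coordinate -> velocity MLP with Fourier feature encoding.

    Mirrors the architecture described in the text (hidden widths
    2*Ne, 2*Ne, Ne/2, two outputs); the Gaussian weight initialisation
    here is an illustrative sketch, not the paper's exact scheme.
    """
    def __init__(self, Ne=64, beta=100.0, rng=None):
        rng = rng or np.random.default_rng(0)
        # Encoding matrix B with entries from N(0, 1/beta^2)
        self.B = rng.normal(0.0, 1.0 / beta, size=(Ne // 2, 2))
        widths = [Ne, 2 * Ne, 2 * Ne, Ne // 2, 2]
        self.layers = [(rng.normal(0, np.sqrt(2.0 / a), (a, b)), np.zeros(b))
                       for a, b in zip(widths[:-1], widths[1:])]

    def encode(self, x):  # x: (N, 2) coordinates -> (N, Ne) features
        proj = x @ self.B.T
        return np.concatenate([np.sin(proj), np.cos(proj)], axis=-1)

    def __call__(self, x):
        h = self.encode(x)
        for i, (W, b) in enumerate(self.layers):
            h = h @ W + b
            if i < len(self.layers) - 1:  # GELU on hidden layers only
                h = gelu(h)
        return h  # (N, 2) velocity vectors
```

Because the network accepts arbitrary real-valued coordinates, velocities can be queried on any grid, independent of the voxel-grid resolution.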
The velocity field is represented as a continuous function $v_\theta(\mathbf{x}): \mathbb{R}^2 \to \mathbb{R}^2$. This design choice is particularly well suited for event-based imaging velocimetry. First, the continuous coordinate-to-velocity mapping enables flow estimation at arbitrary spatial resolutions. Velocities can be queried on any desired grid without explicit interpolation or super-resolution procedures, thereby decoupling the output resolution from the discretization of the event voxel grid. This property is essential for achieving high dynamic spatial range under varying measurement conditions. Second, the differentiability of $v_\theta(\mathbf{x})$ allows direct evaluation of spatial derivatives such as $\nabla v_\theta$ (via automatic differentiation), which are required for downstream physical analysis including vorticity and strain-rate estimation. Compared with finite-difference schemes applied to discrete velocity fields, this formulation avoids numerical artifacts and improves the accuracy of derivative-based quantities. Overall, the continuous representation provided by INR does not simply result in another event-based optical flow method, but instead reformulates EBIV as a continuous inverse problem rather than a discrete motion estimation task.
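For instance, the vorticity field can be queried at arbitrary points straight from a continuous velocity representation. In this sketch, central finite differences stand in for the automatic differentiation an INR framework would provide; the function name is ours.

```python
import numpy as np

def vorticity(v_fn, coords, eps=1e-3):
    """Out-of-plane vorticity w = dv/dx - du/dy from a continuous velocity
    field v_fn mapping (N, 2) coordinates to (N, 2) velocities (u, v).

    Central differences approximate the derivatives that automatic
    differentiation of the INR would yield exactly.
    """
    ex = np.array([eps, 0.0])
    ey = np.array([0.0, eps])
    dv_dx = (v_fn(coords + ex)[:, 1] - v_fn(coords - ex)[:, 1]) / (2 * eps)
    du_dy = (v_fn(coords + ey)[:, 0] - v_fn(coords - ey)[:, 0]) / (2 * eps)
    return dv_dx - du_dy
```

For a solid-body rotation $v(x, y) = (-y, x)$ this returns the expected constant vorticity of 2 at every query point.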

2.3. Voxel Consistency Objective and Training

The neural network for INR is trained by enforcing voxel consistency across consecutive event slices. Specifically, given two temporally adjacent voxel grids $G_1$ and $G_2$, the flow field $v_\theta(\mathbf{x})$ predicts a velocity vector at each spatial coordinate, and the data term measures the "brightness" misalignment after symmetric bidirectional warping [43,44],
$\mathcal{J}_{\mathrm{data}} = \int \big| W_1(\mathbf{x}) - W_2(\mathbf{x}) \big| \, d\mathbf{x}$ (3)
where $W_1(\mathbf{x}) = G_1\big(\mathbf{x} - \tfrac{\delta t}{2} v_\theta(\mathbf{x})\big)$ and $W_2(\mathbf{x}) = G_2\big(\mathbf{x} + \tfrac{\delta t}{2} v_\theta(\mathbf{x})\big)$ denote the spatially warped voxel grids, and $\delta t$ represents the temporal interval between the two voxel grids. The warping operation encourages the two voxel grids to be consistent under the estimated velocity field. In practice, since event data are only available on discrete voxel grids, the integral in Equation (3) is approximated by a finite summation over voxel coordinates:
$\mathcal{J}_{\mathrm{data}} \approx \sum_{\mathbf{x} \in \Omega_v} \big| W_1(\mathbf{x}) - W_2(\mathbf{x}) \big|$ (4)
where $\Omega_v$ denotes the set of spatial coordinates of the voxel grid. This discrete formulation corresponds to a Riemann-sum approximation of the integral.
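The discrete data term above can be sketched as follows. Here `bilinear_sample` is a hypothetical helper for fractional-coordinate lookup, standing in for the differentiable grid sampling a deep-learning framework would provide.

```python
import numpy as np

def bilinear_sample(img, coords):
    """Sample a 2-D array at fractional (x, y) coords with bilinear interpolation."""
    H, W = img.shape
    x = np.clip(coords[..., 0], 0, W - 1)
    y = np.clip(coords[..., 1], 0, H - 1)
    x0, y0 = np.floor(x).astype(int), np.floor(y).astype(int)
    x1, y1 = np.minimum(x0 + 1, W - 1), np.minimum(y0 + 1, H - 1)
    wx, wy = x - x0, y - y0
    return (img[y0, x0] * (1 - wx) * (1 - wy) + img[y0, x1] * wx * (1 - wy)
            + img[y1, x0] * (1 - wx) * wy + img[y1, x1] * wx * wy)

def data_loss(G1, G2, v, dt):
    """Symmetric bidirectional warping residual as a finite sum over voxel
    coordinates. v: (H, W, 2) predicted velocities at voxel coordinates."""
    H, W = G1.shape
    xs, ys = np.meshgrid(np.arange(W), np.arange(H))
    grid = np.stack([xs, ys], axis=-1).astype(np.float64)
    W1 = bilinear_sample(G1, grid - 0.5 * dt * v)  # warp G1 forward to midpoint
    W2 = bilinear_sample(G2, grid + 0.5 * dt * v)  # warp G2 backward to midpoint
    return np.abs(W1 - W2).sum()
```

In the actual pipeline the same residual is computed on differentiable tensors, so gradients flow back into the MLP parameters during test-time optimization.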
To regularize the flow, we penalize spatial variations using a smoothness term:
$\mathcal{J}_{\mathrm{smooth}} = \int \big\| \nabla v_\theta(\mathbf{x}) \big\|^2 \, d\mathbf{x}$ (5)
Here, the gradient $\nabla v_\theta(\mathbf{x})$ is obtained directly from the continuous representation $v_\theta(\mathbf{x})$ via automatic differentiation. Unlike the data term, the regularization is defined on the continuous velocity field and does not require voxelized measurements. Therefore, the integral in Equation (5) is approximated by uniformly sampling spatial coordinates $\{\mathbf{x}_i\}_{i=1}^{N}$ over the image domain,
$\mathcal{J}_{\mathrm{smooth}} \approx \frac{1}{N} \sum_{i=1}^{N} \big\| \nabla v_\theta(\mathbf{x}_i) \big\|^2$ (6)
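A sketch of this sampled smoothness estimate follows, with central finite differences standing in for the automatic differentiation used in practice; the function signature is our own illustrative choice.

```python
import numpy as np

def smoothness_loss(v_fn, H, W, n_samples=1024, eps=1e-3, rng=None):
    """Monte-Carlo estimate of the smoothness term: mean squared velocity
    gradient at uniformly sampled coordinates.

    v_fn maps (N, 2) coordinates to (N, 2) velocities. Finite differences
    approximate the gradients that backpropagation would supply exactly.
    """
    rng = rng or np.random.default_rng(0)
    x = rng.uniform([0, 0], [W - 1, H - 1], size=(n_samples, 2))
    ex = np.array([eps, 0.0])
    ey = np.array([0.0, eps])
    dv_dx = (v_fn(x + ex) - v_fn(x - ex)) / (2 * eps)  # (N, 2)
    dv_dy = (v_fn(x + ey) - v_fn(x - ey)) / (2 * eps)  # (N, 2)
    return float(np.mean(np.sum(dv_dx**2 + dv_dy**2, axis=-1)))
```

A constant field yields zero penalty, while the identity field $v(\mathbf{x}) = \mathbf{x}$ yields a squared-gradient norm of 2 at every sample, illustrating how the term penalizes spatial variation without touching the data fit.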
The total objective combines these two terms:
$\mathcal{J}_{\mathrm{total}}(\theta) = \mathcal{J}_{\mathrm{data}} + \lambda \, \mathcal{J}_{\mathrm{smooth}}$ (7)
where the smoothness weight $\lambda$ balances alignment accuracy and flow smoothness. The optimal parameters $\theta^*$ are then obtained by minimizing the total objective, i.e., $\theta^* = \arg\min_\theta \mathcal{J}_{\mathrm{total}}(\theta)$. This formulation enforces both voxel consistency and spatial continuity while remaining fully differentiable for gradient-based optimization. In practice, the MLP is trained using the Adam optimizer with an initial learning rate of 0.001 for 1000 epochs. The learning rate is scheduled with cosine annealing over the entire training, smoothly decaying from the initial value to zero [45]. This training strategy is empirically verified to provide stable and satisfactory convergence in our preliminary experiments.

2.4. Implementation Details

Based on the parameter study in Section 3, the default settings are adopted throughout this work. Specifically, the encoding dimension is set to $N_e = 64$, the variance parameter is initialized as $\beta = 100$, and the smoothness weight $\lambda$ is fixed to 10.0. Taking the two additional derivative networks and the gradient storage required by test-time optimization into consideration, the overall memory consumption is approximately 1017 MB for a 256 × 256 input and 15.12 GB for a 1280 × 720 input. Similarly, the total computational cost scales linearly with the number of optimization epochs: a 256 × 256 input requires approximately 3.8 GFLOPs per training epoch, while a 1280 × 720 input requires roughly 53.5 GFLOPs per epoch. All experiments are implemented in PyTorch (v1.13.1+cu117) and executed on an NVIDIA GeForce RTX 3090 GPU. For reproducibility, the complete implementation, along with all experimental configurations and scripts, is publicly available at https://github.com/yongleex/INR-VG (accessed on 3 February 2026). This enables straightforward testing of INR-VG on diverse real-world EBIV data under different experimental conditions. With these settings, the learning process takes approximately 5 s for synthetic event data at a spatial resolution of 256 × 256 px², and around 1 min for data at a resolution of 1280 × 720 px². These runtimes reflect a test-time optimization process; the method is intended not for real-time deployment but for accurate and robust flow measurement under challenging event conditions.

3. Experiments

To comprehensively evaluate the proposed INR-VG method, both synthetic and real-world particle event data are used. Due to the difficulty of obtaining reliable ground-truth velocity fields in real EBIV experiments, extensive synthetic datasets are employed to systematically analyze accuracy, robustness, and parameter sensitivity under controlled conditions. Specifically, the synthetic test event streams (Figure 3) are generated for arbitrary flow fields by first creating particle image sequences using a particle image generator [5,19], which are subsequently converted into event streams through an event simulator [46,47]. Meanwhile, the real events (Figure 4) are recorded using two independent EBIV setups, including a solid-body rotation flow and a water-tank flow [1]. Furthermore, the proposed INR-VG framework is benchmarked against five representative baselines formed by combining two event representations (EST and voxel grid) with three estimation modules, namely iterative Lucas–Kanade (ILK) [48], Farnebäck optical flow (OF) [49], and INR [36]; the sixth combination, INR with the voxel grid, is the proposed INR-VG itself. Finally, consistent with the evaluation criteria commonly adopted in PIV [5,19,50], the root mean square error (RMSE) and the average endpoint error (AEE) are used to quantify the performance [51,52].
Based on the experimental configuration described above, the influence of key hyperparameters in INR-VG—including the variance parameter β , the network width N e , and the smoothness weight λ —is first investigated. The effects of imaging conditions, such as particle density, particle size, and noise ratio, are also analyzed. Next, visualized comparisons against the baselines are performed using synthetic events, followed by statistical evaluations across 3000 synthetic event datasets with 3 different underlying flow fields [53]. Finally, our INR-VG is further validated on real-world EBIV scenarios, demonstrating its ability to generate dense and practically usable velocity fields.

3.1. On the Hyper-Parameters of INR-VG

Figure 5 presents the measurement errors under different combinations of the encoding dimension $N_e$ and the variance parameter $\beta$ for the three test cases (Figure 3). For a fixed $\beta$, the error generally decreases as $N_e$ increases, particularly when $N_e$ is small, indicating that insufficient encoding capacity leads to underfitting of complex and multi-scale flow structures. Once $N_e \ge 64$, further increasing $N_e$ yields marginal improvement, suggesting that the capacity of the MLP is already sufficient to represent the complexity of the underlying flow field. The influence of $\beta$ is different: the error first decreases and then increases with $\beta$, reaching its minimum when $\beta$ lies in the range $10^{1}$–$10^{3}$. A small $\beta$ results in an encoding dominated by high-frequency components, which limits the ability to capture the low-frequency content of the spatial velocity field, while an excessively large $\beta$ produces an over-smoothed encoding that fails to robustly capture complex high-frequency flow structures [36,38]. Based on these observations and prior reports, we set $N_e = 64$ and $\beta = 100$ in the subsequent experiments.
Figure 6a analyzes the influence of the smoothness weight $\lambda$ on measurement accuracy using the same three event streams (Figure 3). When $\lambda \le 10^{1.5}$, the measurement errors are nearly unchanged; as $\lambda$ increases further, the error tends to rise. As expected, INR-VG remains effective even when $\lambda$ is small or set to zero, since the INR representation implicitly enforces smoothness (predictions vary smoothly with spatial coordinates) [36]. However, an excessively large $\lambda$ leads to over-smoothing, resulting in degraded accuracy. Additionally, the rotational flow consistently yields lower errors than the cellular flow, which in turn outperforms the sinusoidal flow, indicating that measurement accuracy is also closely tied to the characteristics of the underlying flow. Overall, the method is relatively robust to variations of $\lambda$, and we adopt $\lambda = 10.0$ as the default setting given the unavoidable noise present in practical measurements. Figure 6b further investigates the convergence behavior of INR-VG by varying the number of training epochs. Both RMSE and AEE decrease consistently as the number of epochs increases, demonstrating stable convergence of the test-time optimization process. Notably, the error curves become nearly flat around 1000 epochs, suggesting that the optimization has essentially converged. While additional training beyond this point can still yield minor accuracy improvements, the reduction is marginal compared to the increased computational cost. Therefore, 1000 training epochs are adopted in this work, providing a practical speed–accuracy trade-off that delivers near-converged accuracy while avoiding excessive computational overhead.

3.2. Effect of Imaging Conditions

Event data serve as the direct inputs to the EBIV algorithm, and their quality—and thus the achievable accuracy—is strongly influenced by the underlying imaging conditions. To examine this effect, we synthesize particle event data under different recording settings, including particle density ρ p , particle diameter d p , and the noise ratio R. This analysis provides insights into how imaging conditions affect INR-VG performance and offers practical guidance for constructing high-accuracy measurement conditions.
Figure 7a shows the effect of particle seeding density. As the density increases from 0.005 to 0.1 particles per pixel (ppp), all three flow fields exhibit a consistent trend in which the error first decreases and then increases. Low densities provide too few events to reliably record motion cues, whereas excessively high densities introduce particle overlap and matching ambiguity, both of which degrade estimation accuracy. Among the flow fields, sinusoidal and cellular flows show stronger sensitivity to density variation, while the simple rotational flow remains relatively stable due to its spatially smooth velocity field. Overall, the minimum errors are consistently observed at a particle density of around 0.03 ppp across all tested flow cases. These optimal densities agree well with the conventional PIV recommendations [5].
Figure 7b presents the influence of particle diameter. Overall, measurement errors tend to increase as the particle diameter grows. A clear error jump occurs around a diameter of 1.0 px across all flow scenarios, indicating that errors are significantly lower for d p < 1.0 px than for d p 1.0 px. The minimum errors are achieved at a diameter of ∼0.5 px. This behavior differs significantly from conventional PIV, where the optimal particle diameter is typically around 2.2 px [5]. We argue that this difference arises from the characteristics of event cameras. For a particle of 0.5 px diameter, there is approximately a 50% chance that two adjacent pixels will fire events, enabling sufficient motion capture. These observations suggest that, in EBIV, smaller tracer particles than those commonly used in PIV are recommended to achieve higher measurement accuracy.
Figure 7c illustrates the effect of noise. As expected, errors increase gradually as the noise ratio rises from 0% to 20%. Nonetheless, across all three flow fields, the error remains below 0.05 px/ms even at the highest noise level, implying that INR-VG could be particularly advantageous for practical EBIV measurements in the presence of event noise.
These results establish practical EBIV operating guidelines, with optimal performance at particle densities of 0.02–0.06 ppp and particle diameters of 0.1–0.75 px. Outside these ranges, accuracy degrades smoothly rather than failing abruptly, indicating a broad and robust operating range.

3.3. Comparison on Synthetic Events

Figure 8 shows the measured velocity vector fields with endpoint error (EPE) background for the three synthetic event streams (Figure 3). The errors primarily occur in flow regions with high streamline curvature or low velocity magnitude. Specifically, the high-curvature regions of the sinusoidal flow conflict with the uniform assumption, while low-velocity regions—such as the center of the rotational flow—generate too few events to reliably capture motion. The results demonstrate that the INR representation can accommodate high-curvature flows while effectively “interpolating” in regions with sparse data. Across all three synthetic cases, the proposed INR-EST and INR-VG methods produce substantially smaller errors than the classical optical-flow baselines. Between the two event pseudo-frames (EST and VG), the EPE maps are very similar, with no significant differences. Table 1 summarizes the quantitative RMSE and AEE results corresponding to Figure 8. Consistent with visual observations, INR-VG achieves the lowest errors across all flow fields, slightly outperforming INR-EST, which demonstrates the clear advantage of our INR-VG in EBIV measurement.
To systematically evaluate the performance of INR-based methods on large-scale event data, we conduct extensive statistical analyses using synthetic event streams generated from a publicly available dense flow dataset [53]. For each flow category, 1000 event streams are synthesized, enabling statistically meaningful comparisons across different methods. Specifically, three representative flow fields with increasing complexity are considered: uniform flow, a backward-facing step (backstep) flow, and DNS turbulence. As a result, this diversity in flow complexity allows a clear evaluation of the effectiveness of INR-based methods under increasingly challenging conditions.
Figure 9 presents the box plots of RMSE and AEE from the statistical analysis. Across all three categories, the INR-based methods consistently achieve the lowest errors among all compared approaches. For the uniform flow, the average error remains below 0.05 px / ms , while for the backstep flow the error increases to approximately 0.10 px / ms . On the most challenging DNS turbulence, the error further rises to around 0.35 px / ms . This progressive error increase is directly related to the differences in flow characteristics. Specifically, the backstep flow contains large low-velocity regions where only a limited number of events are generated, leading to increased measurement difficulty, as also evidenced by the numerous outliers observed in the ILK-based methods. In contrast, the DNS turbulence features abundant small-scale structures, which cannot be fully captured by particles with finite seeding density, thereby limiting the achievable accuracy. Nevertheless, compared with all baseline methods, the INR-based approaches consistently exhibit statistically significant accuracy advantages and improved robustness across all tested conditions. Overall, these results also reflect the intrinsic challenges of EBIV measurements in flows characterized by large regions of extremely low-velocity motion, as well as by fine-scale flow structures, which inherently limit particle event generation and accurate velocity measurement. In such cases, the estimation problem becomes fundamentally under-constrained due to insufficient event support, rather than limitations of a specific reconstruction algorithm. Consequently, improving performance in these regimes primarily depends on increasing particle seeding density or event generation within practical experimental limits, which lies beyond the scope of algorithmic design.

3.4. Evaluation on Real Event Data

Figure 10 presents the measurement results on two real-world event recordings (Figure 4). For the real rotational flow, the ILK-based and INR-based methods show good agreement with the ground-truth solid-body rotation, whereas the optical-flow-based methods fail to recover the flow structure near the center and boundary regions. A closer inspection reveals that ILK-EST and ILK-VG exhibit a few outliers in the central region, as bright spots in the background magnitude map. Quantitative results in Table 2 further confirm that the INR-based variants achieve the lowest RMSE and AEE, indicating superior accuracy and robustness. For the turbulent water-tank flow, the limitations of the baseline methods become more apparent. The ILK-EST and ILK-VG produce more clustered outliers, OF-EST and OF-VG fail to recover meaningful flow structures, and INR-EST shows noticeable inconsistencies in the lower region of the flow field. This behavior implies an easily detectable failure mode of INR-based methods, in which voxel grids may be mismatched over a large area, rather than as single or clustered velocity outliers. In contrast, only INR-VG yields results that are consistent with the observed event patterns (Figure 4). Overall, these results show that INR-VG offers the most robust and reliable performance on real-world event data, benefiting from its continuous flow representation and effective noise suppression via voxel-grid encoding. Moreover, the continuous formulation enables direct access to spatial velocity gradients (e.g., vorticity and strain-rate tensors) without additional post-processing, as illustrated in [36].

4. Conclusions

This work presents INR-VG, a novel event-based imaging velocimetry algorithm that leverages implicit neural representations to model the latent flow field directly in continuous coordinate space, enabling dense velocity measurements from sparse event data. Specifically, spatial coordinates are encoded using Fourier feature embedding and mapped to flow velocities via a multilayer perceptron, with network parameters optimized through test-time optimization by minimizing a voxel-grid-based event alignment loss. Parameter experiments indicate that the configuration N e = 64 , β = 100 , and λ = 10.0 provides an effective setting for the proposed INR-VG. In addition, imaging condition experiments suggest that a particle concentration of approximately 0.03 ppp and a particle diameter around 0.5 px are recommended for achieving high EBIV measurement accuracy. Extensive tests on synthetic event cases and the real rotational flow show that INR-VG achieves competitive measurement accuracy, with errors as low as 0.05 px / ms . More importantly, INR-VG consistently delivers reliable and robust velocity fields on two challenging real-world measurements, whereas other competing methods fail to do so. Moving forward, extending INR-based models to the temporal dimension, such as time-resolved EBIV, may further enhance their applicability to dynamic fluid flows. From a computational perspective, exploring acceleration strategies, including multi-resolution or coarse-to-fine training schemes, could substantially improve efficiency while preserving reconstruction accuracy. In addition, incorporating uncertainty estimation mechanisms, for example through ensemble-based inference or test-time perturbation strategies, would enable quantitative confidence assessment of the reconstructed flow fields, thereby further supporting robust and reliable measurements in scientific and engineering applications.

Author Contributions

Conceptualization, J.A. and Y.L.; Methodology, J.A. and J.L.; Software, J.A. and J.L.; Validation, J.A., J.L. and Z.C.; Formal analysis, J.A. and J.L.; Investigation, J.A. and J.L.; Resources, Y.L.; Data curation, J.A. and J.L.; Writing—original draft preparation, J.A. and J.L.; Writing—review and editing, Z.C. and Y.L.; Visualization, J.A. and J.L.; Supervision, Y.L.; Project administration, Y.L.; Funding acquisition, Y.L. All authors have read and agreed to the published version of the manuscript.

Funding

This work was supported by the Natural Science Foundation of Hubei Province (Grant No. 2023AFB128).

Data Availability Statement

The data presented in this study are openly available in GitHub at https://github.com/yongleex/INR-VG (accessed on 3 February 2026).

Conflicts of Interest

The authors declare no conflicts of interest. The funders had no role in the design of the study; in the collection, analyses, or interpretation of data; in the writing of the manuscript; or in the decision to publish the results.

Abbreviations

The following abbreviations are used in this manuscript:
AEE: Average Endpoint Error
DNS: Direct Numerical Simulation
DSR: Dynamic Spatial Range
DVR: Dynamic Velocity Range
EBIV: Event-Based Imaging Velocimetry
EPE: Endpoint Error
EST: Event Spike Tensor
GELU: Gaussian Error Linear Unit
GPU: Graphics Processing Unit
ILK: Iterative Lucas–Kanade
INR: Implicit Neural Representation
INR-VG: Implicit Neural Representation with Voxel Grid
JHTDB: Johns Hopkins Turbulence Database
MLP: Multilayer Perceptron
NeRF: Neural Radiance Fields
OF: Optical Flow
PIV: Particle Image Velocimetry
RMSE: Root Mean Square Error
VG: Voxel Grid

References

1. Willert, C.E.; Klinner, J. Event-based imaging velocimetry: An assessment of event-based cameras for the measurement of fluid flows. Exp. Fluids 2022, 63, 101.
2. Willert, C.E. Event-based imaging velocimetry using pulsed illumination. Exp. Fluids 2023, 64, 98.
3. Franceschelli, L.; Amico, E.; Raiola, M.; Willert, C.E.; Serpieri, J.; Cafiero, G.; Discetti, S. Event-Based Imaging Velocimetry for Jet Flow Control. In Proceedings of the 16th International Symposium on Particle Image Velocimetry (ISPIV 2025), Tokyo, Japan, 26–28 June 2025.
4. Ai, J.; Chen, Z.; Ning, W.; Lee, Y. A Projection Concentration Maximization Method for Event-Based Imaging Velocimetry. In Proceedings of the 16th International Symposium on Particle Image Velocimetry (ISPIV 2025), Tokyo, Japan, 26–28 June 2025.
5. Raffel, M.; Willert, C.E.; Scarano, F.; Kähler, C.J.; Wereley, S.T.; Kompenhans, J. Particle Image Velocimetry: A Practical Guide; Springer: Cham, Switzerland, 2018.
6. Serrano-Gotarredona, T.; Linares-Barranco, B. A 128×128 1.5% Contrast Sensitivity 0.9% FPN 3 µs Latency 4 mW Asynchronous Frame-Free Dynamic Vision Sensor Using Transimpedance Preamplifiers. IEEE J. Solid-State Circuits 2013, 48, 827–838.
7. Son, B.; Suh, Y.; Kim, S.; Jung, H.; Kim, J.S.; Shin, C.; Park, K.; Lee, K.; Park, J.; Woo, J.; et al. 4.1 A 640×480 dynamic vision sensor with a 9 µm pixel and 300 Meps address-event representation. In Proceedings of the 2017 IEEE International Solid-State Circuits Conference (ISSCC), San Francisco, CA, USA, 5–9 February 2017; IEEE: New York, NY, USA, 2017; pp. 66–67.
8. Gallego, G.; Delbrück, T.; Orchard, G.; Bartolozzi, C.; Taba, B.; Censi, A.; Leutenegger, S.; Davison, A.J.; Conradt, J.; Daniilidis, K.; et al. Event-based vision: A survey. IEEE Trans. Pattern Anal. Mach. Intell. 2020, 44, 154–180.
9. Yang, W.; Jiang, J.; Zhang, X.; Ji, Y.; Zhu, L.; Xie, Y.; Ren, Z. Joint Event Density and Curvature Within Spatio-Temporal Neighborhoods-Based Event Camera Noise Reduction and Pose Estimation Method for Underground Coal Mine. Mathematics 2025, 13, 1198.
10. Willert, C. Event-based particle image velocimetry for high-speed flows. Meas. Sci. Technol. 2025, 36, 075302.
11. Cao, J.; Zeng, X.; Li, S.; He, C.; Wen, X.; Liu, Y. A novel event-based ensemble particle tracking velocimetry for single-pixel turbulence statistics. Exp. Therm. Fluid Sci. 2025, 169, 111554.
12. Shiba, S.; Aoki, Y.; Gallego, G. Secrets of Event-Based Optical Flow. In European Conference on Computer Vision; Springer: Cham, Switzerland, 2022; pp. 628–645.
13. Manovski, P.; Abu Rowin, W.; Ng, H.; Gulotta, P.; Giacobello, M.; De Silva, C.; Hutchins, N.; Marusic, I. The spectral response of time-resolved PIV in a turbulent boundary layer. Exp. Fluids 2025, 66, 134.
14. Adrian, R. Dynamic ranges of velocity and spatial resolution of particle image velocimetry. Meas. Sci. Technol. 1997, 8, 1393.
15. Kähler, C.J.; Scharnowski, S.; Cierpka, C. On the resolution limit of digital particle image velocimetry. Exp. Fluids 2012, 52, 1629–1639.
16. Gehrig, M.; Millhäusler, M.; Gehrig, D.; Scaramuzza, D. E-RAFT: Dense Optical Flow from Event Cameras. In Proceedings of the 2021 International Conference on 3D Vision (3DV), London, UK, 1–3 December 2021; IEEE: New York, NY, USA, 2021; pp. 197–206.
17. Nagata, J.; Sekikawa, Y.; Aoki, Y. Optical flow estimation by matching time surface with event-based cameras. Sensors 2021, 21, 1150.
18. Akhtar, N.; Hussain, M.; Habib, Z. Two-Stage Detection and Localization of Inter-Frame Tampering in Surveillance Videos Using Texture and Optical Flow. Mathematics 2024, 12, 3482.
19. Lee, Y.; Yang, H.; Yin, Z. PIV-DCNN: Cascaded deep convolutional neural networks for particle image velocimetry. Exp. Fluids 2017, 58, 171.
20. Shiba, S.; Aoki, Y.; Gallego, G. Fast event-based optical flow estimation by triplet matching. IEEE Signal Process. Lett. 2023, 29, 2712–2716.
21. AlSattam, O.; Mongin, M.; Grose, M.; Gunasekaran, S.; Hirakawa, K. KF-PEV: A causal Kalman filter-based particle event velocimetry. Exp. Fluids 2024, 65, 141.
22. Aung, M.T.; Teo, R.; Orchard, G. Event-Based Plane-Fitting Optical Flow for Dynamic Vision Sensors in FPGA. In Proceedings of the 2018 IEEE International Symposium on Circuits and Systems (ISCAS), Florence, Italy, 27–30 May 2018; IEEE: New York, NY, USA, 2018; pp. 1–5.
23. Gallego, G.; Rebecq, H.; Scaramuzza, D. A Unifying Contrast Maximization Framework for Event Cameras, with Applications to Motion, Depth, and Optical Flow Estimation. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Salt Lake City, UT, USA, 18–23 June 2018; IEEE: New York, NY, USA, 2018; pp. 3867–3876.
24. Gallego, G.; Gehrig, M.; Scaramuzza, D. Focus Is All You Need: Loss Functions for Event-Based Vision. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, Long Beach, CA, USA, 15–20 June 2019; IEEE: New York, NY, USA, 2019; pp. 12280–12289.
25. Corpetti, T.; Heitz, D.; Arroyo, G.; Mémin, E.; Santa-Cruz, A. Fluid experimental flow estimation based on an optical-flow scheme. Exp. Fluids 2006, 40, 80–97.
26. Li, K.; Zhao, Z.; Zhao, H.; Zhou, M.; Jin, L.; Danyun, W.; Zhiyu, W.; Zhang, L. Three-stage training strategy phase unwrapping method for high speckle noises. Opt. Express 2024, 32, 48895–48914.
27. Wang, L.; Fu, Q.; Zhu, R.; Liu, N.; Shi, H.; Liu, Z.; Li, Y.; Jiang, H. Research on high precision localization of space target with multi-sensor association. Opt. Lasers Eng. 2025, 184, 108553.
28. Giarra, M.N.; Charonko, J.J.; Vlachos, P.P. Measurement of fluid rotation, dilation, and displacement in particle image velocimetry using a Fourier–Mellin cross-correlation. Meas. Sci. Technol. 2015, 26, 035301.
29. Zhang, Z.; Li, Y.; Xiang, K.; Wang, J. Determining dense velocity fields for fluid images based on affine motion. PeerJ Comput. Sci. 2024, 10, e1810.
30. Stapf, J.; Garbe, C.S. A learning-based approach for highly accurate measurements of turbulent fluid flows. Exp. Fluids 2014, 55, 1799.
31. Wulff, J.; Black, M.J. Efficient Sparse-to-Dense Optical Flow Estimation Using a Learned Basis and Layers. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Boston, MA, USA, 7–12 June 2015; IEEE: New York, NY, USA, 2015; pp. 120–130.
32. Kimura, I.; Susaki, Y.; Kiyohara, R.; Kaga, A.; Kuroe, Y. Gradient-based PIV using neural networks. J. Vis. 2002, 5, 363–370.
33. Mildenhall, B.; Srinivasan, P.P.; Tancik, M.; Barron, J.T.; Ramamoorthi, R.; Ng, R. NeRF: Representing scenes as neural radiance fields for view synthesis. Commun. ACM 2021, 65, 99–106.
34. Jung, H.; Hui, Z.; Luo, L.; Yang, H.; Liu, F.; Yoo, S.; Ranjan, R.; Demandolx, D. AnyFlow: Arbitrary Scale Optical Flow with Implicit Neural Representation. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, Vancouver, BC, Canada, 17–24 June 2023; IEEE: New York, NY, USA, 2023; pp. 5455–5465.
35. Molaei, A.; Aminimehr, A.; Tavakoli, A.; Kazerouni, A.; Azad, B.; Azad, R.; Merhof, D. Implicit Neural Representation in Medical Imaging: A Comparative Survey. In Proceedings of the IEEE/CVF International Conference on Computer Vision, Paris, France, 2–6 October 2023; IEEE: New York, NY, USA, 2023; pp. 2381–2391.
36. Masker, A.I.; Zhou, K.; Molnar, J.P.; Grauer, S.J. Neural optical flow for planar and stereo PIV. Exp. Fluids 2025, 66, 129.
37. Ji, J.; Fu, S.; Man, J. I-NeRV: A Single-Network Implicit Neural Representation for Efficient Video Inpainting. Mathematics 2025, 13, 1188.
38. Magaña, E.; Costabal, F.S.; Brevis, W. Image Velocimetry using Direct Displacement Field estimation with Neural Networks for Fluids. arXiv 2025, arXiv:2501.18641.
39. Lagorce, X.; Orchard, G.; Galluppi, F.; Shi, B.E.; Benosman, R.B. HOTS: A hierarchy of event-based time-surfaces for pattern recognition. IEEE Trans. Pattern Anal. Mach. Intell. 2016, 39, 1346–1359.
40. Zihao Zhu, A.; Yuan, L.; Chaney, K.; Daniilidis, K. Unsupervised Event-Based Optical Flow Using Motion Compensation. In European Conference on Computer Vision (ECCV) Workshops; Springer: Cham, Switzerland, 2018.
41. Gehrig, D.; Loquercio, A.; Derpanis, K.G.; Scaramuzza, D. End-to-End Learning of Representations for Asynchronous Event-Based Data. In Proceedings of the IEEE/CVF International Conference on Computer Vision, Seoul, Republic of Korea, 27 October–2 November 2019; IEEE: New York, NY, USA, 2019; pp. 5633–5643.
42. Hendrycks, D.; Gimpel, K. Gaussian Error Linear Units (GELUs). arXiv 2016, arXiv:1606.08415.
43. Wereley, S.T.; Meinhart, C.D. Second-order accurate particle image velocimetry. Exp. Fluids 2001, 31, 258–268.
44. Ai, J.; Chen, Z.; Li, J.; Lee, Y. Rethinking asymmetric image deformation with post-correction for particle image velocimetry. Phys. Fluids 2025, 37, 017122.
45. Loshchilov, I.; Hutter, F. SGDR: Stochastic gradient descent with warm restarts. arXiv 2016, arXiv:1608.03983.
46. Rebecq, H.; Gehrig, D.; Scaramuzza, D. ESIM: An Open Event Camera Simulator. Conf. Robot. Learn. 2018, 87, 969–982.
47. Wu, F.; Feng, X.; Zhang, A.; Lee, Y. FED-PV: A Large-Scale Synthetic Frame/Event Dataset for Particle-Based Velocimetry. In Proceedings of the 16th International Symposium on Particle Image Velocimetry (ISPIV 2025), Tokyo, Japan, 26–28 June 2025.
48. Lucas, B.D.; Kanade, T. An Iterative Image Registration Technique with an Application to Stereo Vision. In IJCAI'81: 7th International Joint Conference on Artificial Intelligence; Morgan Kaufmann Publishers Inc.: San Francisco, CA, USA, 1981; Volume 2, pp. 674–679.
49. Farnebäck, G. Two-frame motion estimation based on polynomial expansion. In Scandinavian Conference on Image Analysis; Springer: Cham, Switzerland, 2003; pp. 363–370.
50. Lu, J.; Yang, H.; Zhang, Q.; Yin, Z. An accurate optical flow estimation of PIV using fluid velocity decomposition. Exp. Fluids 2021, 62, 78.
51. Lagemann, C.; Lagemann, K.; Mukherjee, S.; Schröder, W. Deep recurrent optical flow learning for particle image velocimetry data. Nat. Mach. Intell. 2021, 3, 641–651.
52. Zhu, Q.; Wang, J.; Hu, J.; Ai, J.; Lee, Y. PIV-FlowDiffuser: Transfer-Learning-Based Denoising Diffusion Models for Particle Image Velocimetry. Sensors 2025, 25, 6077.
53. Cai, S.; Zhou, S.; Xu, C.; Gao, Q. Dense motion estimation of particle images via a convolutional neural network. Exp. Fluids 2019, 60, 73.
Figure 1. Spatially continuous flow field representation with INR (MLP: multilayer perceptron).
Figure 2. The overall framework of the INR-VG method.
Figure 3. Different synthetic test particle events visualized as time surfaces: (a) sinusoidal case; (b) rotational case; (c) cellular case.
Figure 4. Experimental setup and two real event recordings: (a) Our in-house EBIV setup for particle imaging; (b) real particle events recorded with a solid-body rotation flow (top) using our setup, and a water-tank flow (bottom) obtained from Ref. [1].
Figure 5. Effect of the parameters N e and β on the measurement accuracy (RMSE and AEE) for the three synthetic event sets: (a) sinusoidal case; (b) rotational case; (c) cellular case.
Figure 6. Effect of the smoothness weight λ and the number of training epochs on the measurement accuracy: (a) the smoothness weight λ ; (b) the number of training epochs.
Figure 7. Effect of three imaging conditions on measurement errors for the synthetic events: (a) different particle densities; (b) different particle diameters; and (c) different noise ratios.
Figure 8. Visualized velocity fields measured from three synthetic event streams (Figure 3). The arrows indicate the estimated velocity vectors, and the background color denotes the EPE magnitude.
Figure 9. Statistical error distribution across three synthetic event datasets with distinct flow fields: (a) uniform, (b) backstep, and (c) DNS. Each box summarizes 1000 independent measurements.
Figure 10. Measurement results on two real-world event recordings (Figure 4). (Left): Velocity fields for the rotational flow. (Right): Velocity fields for the turbulent water-tank flow. The background color indicates the velocity magnitude.
Table 1. Measurement errors (RMSE and AEE) presented in Figure 8. Best results in bold.
| Methods | Sinusoidal Case | Rotational Case | Cellular Case |
|---|---|---|---|
| ILK-EST | 0.3026 (0.2541) | 0.1856 (0.1638) | 0.2291 (0.1989) |
| ILK-VG | 0.2993 (0.2517) | 0.1720 (0.1574) | 0.2297 (0.1985) |
| OF-EST | 0.3305 (0.2773) | 0.1699 (0.1513) | 0.2592 (0.2259) |
| OF-VG | 0.3260 (0.2735) | 0.1652 (0.1476) | 0.2534 (0.2202) |
| INR-EST | 0.0684 (0.0547) | 0.0369 (0.0247) | 0.0383 (0.0297) |
| INR-VG (Ours) | **0.0479 (0.0407)** | **0.0242 (0.0184)** | **0.0317 (0.0217)** |
Notes: RMSE and AEE values are given in px/ms, with AEE shown in parentheses.
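The two error metrics reported in Tables 1 and 2 can be computed from a pair of velocity fields in a few lines. This sketch assumes the common definitions: per-vector endpoint error is the Euclidean distance between estimated and ground-truth vectors, AEE is its mean, and RMSE is the root mean square of the same per-vector errors; the field sizes and the 0.05 px/ms offset in the toy check are illustrative.

```python
import numpy as np

def endpoint_errors(v_est, v_gt):
    """Per-vector Euclidean error between estimated and ground-truth fields."""
    return np.linalg.norm(v_est - v_gt, axis=-1)

def rmse(v_est, v_gt):
    """Root mean square of the per-vector endpoint errors [px/ms]."""
    return np.sqrt(np.mean(endpoint_errors(v_est, v_gt) ** 2))

def aee(v_est, v_gt):
    """Average endpoint error over the field [px/ms]."""
    return np.mean(endpoint_errors(v_est, v_gt))

# Toy check: a uniform field with a constant 0.05 px/ms offset in u.
gt = np.zeros((32, 32, 2))
est = gt.copy()
est[..., 0] += 0.05
print(rmse(est, gt), aee(est, gt))  # both equal 0.05 for a constant error
```

For a constant error field, RMSE and AEE coincide; for fields with outliers, RMSE penalizes them more strongly, which is why the tables report both.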
Table 2. RMSE and AEE of six methods on the real rotational flow field. Best results in bold.
| Methods | RMSE [px/ms] | AEE [px/ms] |
|---|---|---|
| ILK-EST | 0.1902 | 0.0716 |
| ILK-VG | 0.3335 | 0.0718 |
| OF-EST | 1.0416 | 0.7812 |
| OF-VG | 1.0129 | 0.6716 |
| INR-EST | **0.0267** | 0.0208 |
| INR-VG (Ours) | 0.0270 | **0.0205** |

Share and Cite

MDPI and ACS Style

Ai, J.; Li, J.; Chen, Z.; Lee, Y. Implicit Neural Representation for Dense Event-Based Imaging Velocimetry. Mathematics 2026, 14, 572. https://doi.org/10.3390/math14030572
