Real-Time Motion Blur Using Multi-Layer Motion Vectors

Abstract: Traditional motion blur methods, which often rely on a single layer, deviate from the correct colors. We propose a multilayer rendering method that closely approximates the correct motion blur effect. Our approach stores motion vectors for each pixel, divides these vectors into multiple sample points, and performs a backward search from the current pixel. The color at a sample point is sampled if it shares the same motion vector as its origin. This procedure repeats across layers, and only the nearest color values pass the depth test. The average of the colors sampled at each point becomes the motion-blurred color. Our experimental results indicate that our method significantly reduces the color deviation commonly found in traditional approaches, achieving structural similarity index measures (SSIM) of 0.8 and 0.92, a substantial improvement over existing methods relative to the accumulation reference.


Introduction
Motion blur is an effect created by the movement of objects between frames, blurring them in the direction of movement to produce realistic images. When this effect is applied to videos, the motion of objects is expressed in the image of each frame, enhancing realism. In general, sequences that contain a moderate amount of motion blur are perceived as natural. However, rendering motion-blurred images requires significant computation, unlike cameras, which generate motion blur automatically. As a result, much research has been done to reduce the resources required to generate motion blur. Although existing methods can achieve fast rendering speeds through post-processing, they generally rely on a single layer for motion blur generation [1][2][3][4][5][6][7]. If an object in the first layer moves, these methods cannot produce a color mixed with objects in the later layers. Another approach is the accumulation method, which renders the scene multiple times at different instants within a single frame and averages the resulting images [8,9]. Because this method uses an actual rendering process, the motion blur effect is correctly simulated. However, it introduces artifacts at low iteration counts and performs slowly at high iteration counts. A further approach outputs a single motion-blurred image from the previous and current frames using a trained AI model [10]. However, it requires training data and can cause artifacts in real-time rendering environments with many objects.
Our proposed method employs multilayer motion vectors alongside color and depth images to calculate object movement, achieving a high-speed correct motion blur effect, as shown in Figure 1.
The motion vector is the difference in the position of each pixel between the previous and current frames, as shown in Figure 2.
All pixels between PT−1 and PT in Figure 2 are averaged to obtain the motion-blurred color of pixel PT. Each pixel searches for colors that pass through it, and the motion-blurred color is calculated by averaging all these colors.
This paper is organized as follows. Section 2 describes how motion blur is generated with multilayered images. Section 3 explains the generation of multilayered images, and Section 4 analyzes the rendering speed of each method. Section 5 discusses the research results and examines potential future directions of research. Section 6 concludes the study.

Motion Blur with Multilayer Images
Figure 3 shows the three key steps of our method: multi-layer image rendering [11][12][13][14], motion blur using a multi-layer image, and denoising with pixel reliability.
In this section, we explain how to compute motion blur using a multi-layer image, which consists of three steps, as shown in Figure 3. The details of this process are described below.

Multi-Layer Image Rendering
We render scenes to generate multilayer images composed of color, depth, and motion vector maps, using either depth peeling rendering or stencil-routed rendering. Details are given in Section 3.


Motion Blur Generation Using Multi-Layer Image
The process starts by drawing a full-screen quad using a multilayer image to compute the blurred color of each pixel. This involves calculating the search area where motion vectors are likely to be found, finding the motion vector that passes through the current pixel, and then sampling pixels in the opposite direction of the motion [15]. The motion blur color is then calculated from the sampled points.

Calculating Search Area
To calculate the motion-blurred color of a pixel, we must find the pixels that pass through the current pixel between T − 1 and T. Rather than checking all pixels, we calculate a search area based on the average and standard deviation of the motion vectors. A value read from mipmap level n of a pixel is the average of the 2^n × 2^n pixels centered on that pixel [16,17]. The standard deviation is then calculated as the square root of the difference between the average of the squared motion vectors and the square of the average motion vector. Figure 4 illustrates the search area (blue box) generated according to the motion vector map.
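For illustration, this statistics-based search area can be sketched on the CPU as follows. This is a simplified sketch, not the actual shader: it assumes a scalar motion magnitude per pixel, emulates the mipmap read with an explicit window average, and the exact way the radius combines the mean and standard deviation is our assumption.

```python
import numpy as np

def search_radius(motion_mag, cx, cy, level):
    """Estimate the search extent at pixel (cx, cy) from the mean and
    standard deviation of motion magnitudes in a 2^level x 2^level window.

    A mipmap texel at level n holds the average of 2^n x 2^n pixels; the
    explicit window average below stands in for that mipmap read."""
    n = 2 ** level
    half = n // 2
    window = motion_mag[max(cy - half, 0):cy + half,
                        max(cx - half, 0):cx + half]
    mean = window.mean()               # E[v], from the motion-vector mipmap
    mean_sq = (window ** 2).mean()     # E[v^2], from a squared-vector mipmap
    std = np.sqrt(max(mean_sq - mean ** 2, 0.0))  # sqrt(E[v^2] - E[v]^2)
    return abs(mean) + std             # conservative extent (our choice)
```

For a uniform motion field the standard deviation vanishes and the radius reduces to the mean magnitude.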


Finding Motion Vector
The approximated search area is used to find the motion vectors that pass through the current pixel, following the pseudocode in Figure 5, which consists of three nested loops. The algorithm saves the vector at a pixel if it corresponds to the current search path; otherwise, it continues along the backward motion vector path, using a backward search to find the motion vector passing through the current pixel, as shown in Figure 6.
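The backward search can be illustrated with a small CPU sketch. This is our simplification, not the Figure 5 shader itself: it uses a 1-D scanline, a single fixed sample time, and integer rounding for the passes-through test, whereas the actual pseudocode additionally loops over layers and sample times.

```python
def find_source_pixels(motion, pixel, t, search_radius):
    """Backward search on a 1-D scanline: return every pixel q whose motion
    path passes through `pixel` at sample time t in [0, 1].

    motion[q] is how far q moved since frame T-1, so q's position at time t
    is q - (1 - t) * motion[q]; a hit means that position rounds to `pixel`."""
    hits = []
    for offset in range(-search_radius, search_radius + 1):  # search window
        q = pixel + offset
        if 0 <= q < len(motion) and round(q - (1 - t) * motion[q]) == pixel:
            hits.append(q)
    return hits
```

A static pixel trivially passes through itself, so a moving object yields a second hit; the depth test in the next step decides which color is actually used.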

Calculating Motion Blur Effect
The motion vector obtained in the previous step is used to calculate the color of the motion blur effect according to the pseudocode shown in Figure 7. For each sample-point time, the motion vector is used to obtain the positions of neighboring pixels. If more than one sample point is sampled simultaneously, the color of the sample point closest to the camera is used. The average of all sample-point colors is the color of the motion blur.
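A minimal CPU sketch of this step follows. It is our simplification of the Figure 7 pseudocode: a 1-D scanline, a brute-force neighborhood instead of the computed search area, and hypothetical sample counts.

```python
def motion_blur_color(colors, depths, motion, pixel,
                      search_radius=4, num_samples=8):
    """For each sample time, gather every pixel whose motion path crosses
    `pixel`, keep the one nearest the camera (smallest depth), then average
    the gathered colors over all sample times."""
    gathered = []
    for i in range(num_samples):
        t = i / (num_samples - 1)                  # sample times in [0, 1]
        best = None
        for q in range(max(pixel - search_radius, 0),
                       min(pixel + search_radius + 1, len(colors))):
            if round(q - (1 - t) * motion[q]) == pixel:   # path crosses pixel
                if best is None or depths[q] < depths[best]:
                    best = q                       # depth test: nearest wins
        if best is not None:
            gathered.append(colors[best])
    return sum(gathered) / len(gathered) if gathered else colors[pixel]
```

In the test below, a bright object (color 10, depth 0.5) sweeps over a dark background (color 0, depth 1.0), and the blurred pixel becomes a mixture of the two.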


Denoising with Pixel Reliability
Because the motion vector search is a stochastic method, a small number of iterations may incur noise. We therefore implemented a denoising phase by adding pixel reliability to the Edge-Avoiding À-Trous filter [18][19][20]. Pixel reliability, defined as the ratio of successful sample-point times in the previous step, can be used to minimize noise.


Edge-Avoiding À-Trous Filter
The Edge-Avoiding À-Trous filter uses the color and depth of neighboring pixels to obtain the denoised color of a pixel. If the color or depth difference of a neighboring pixel is large, its weight is low. As a result, noise is removed while edges are preserved. Additionally, we use pixel reliability to calculate the weight of neighbors.
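One pass of the filter can be sketched as follows. This is a 1-D sketch with Gaussian-style edge-stopping weights; the tap pattern and the sigma values are our assumptions, and the pixel-reliability weight is omitted here.

```python
import numpy as np

def atrous_pass(color, depth, step, sigma_c=0.5, sigma_d=0.5):
    """One Edge-Avoiding A-Trous pass on a 1-D signal: each pixel is averaged
    with neighbors at +-step and +-2*step, with weights that fall off as the
    neighbor's color or depth difference grows, so edges survive."""
    out = np.empty_like(color)
    n = len(color)
    for i in range(n):
        wsum, csum = 1.0, color[i]               # center tap, weight 1
        for off in (-2 * step, -step, step, 2 * step):
            j = i + off
            if 0 <= j < n:
                wc = np.exp(-(color[j] - color[i]) ** 2 / sigma_c)  # color weight
                wd = np.exp(-(depth[j] - depth[i]) ** 2 / sigma_d)  # depth weight
                w = wc * wd
                wsum += w
                csum += w * color[j]
        out[i] = csum / wsum
    return out
```

Successive passes double `step`, which is what makes the filter "à trous" (with holes).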

Pixel Reliability
In the previous step, we calculate the motion-blurred color by averaging the colors of pixels corresponding to different times. Because our algorithm is stochastic, we may fail to find pixels corresponding to some times. Pixel reliability is the ratio of found samples to the total number of times. By incorporating pixel reliability as a weight in the Edge-Avoiding À-Trous filter, low-reliability pixels can be classified as noise and removed, as shown in Figure 8.
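The reliability weight itself is simple. The sketch below is our standalone simplification, using a plain 3-tap weighted average rather than the full À-Trous filter, to show how low-reliability pixels stop contributing and are overwritten by reliable neighbors.

```python
import numpy as np

def pixel_reliability(found_times, total_times):
    """Ratio of sample times for which a source pixel was actually found."""
    return found_times / total_times

def reliability_denoise(color, reliability):
    """Average each pixel with its immediate neighbors, weighting every tap
    by that pixel's reliability: an unreliable (noisy) pixel neither spreads
    its color nor keeps it, so reliable neighbors replace it.
    Assumes at least one reliable pixel in every neighborhood."""
    out = np.empty_like(color)
    n = len(color)
    for i in range(n):
        js = [j for j in (i - 1, i, i + 1) if 0 <= j < n]
        w = np.array([reliability[j] for j in js])
        c = np.array([color[j] for j in js])
        out[i] = (w * c).sum() / w.sum()
    return out
```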


Multilayer Images
Rendering motion blur with a single layer often fails to apply the effect correctly, as shown in Figure 9a. The blue ball is moving from left to right. For pixel P at times T0, T1, and T2, the corresponding pixels are on the blue ball, but for T3, T4, and T5, the corresponding pixel is P on the second layer. Our multilayer approach, illustrated in Figure 9b, allows points that are missed with a single layer to be captured in a second layer.

We use depth peeling rendering or stencil-routed rendering to obtain multi-layer images. In this section, we explain both methods.

Depth Peeling by Comparison with Previous Layer Depth
Scenes are rendered multiple times across multiple layers through iterated depth comparison and rendering [11,12]. In each pass, polygons closer than the previous layer's depth are discarded, and the closest surviving polygon is stored. Figures 10a and 11 show multilayer rendering via depth peeling by comparison with the previous layer's depth.
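Per pixel, depth peeling reduces to a simple loop. The sketch below is a CPU model over a list of fragments; real depth peeling renders the whole scene once per layer and performs this comparison in the depth test.

```python
def depth_peel(fragments, num_layers):
    """Peel the fragments covering one pixel into depth layers: each pass
    discards fragments at or closer than the previously peeled depth and
    keeps the nearest survivor."""
    layers = []
    prev_depth = float("-inf")
    for _ in range(num_layers):
        survivors = [f for f in fragments if f[0] > prev_depth]
        if not survivors:
            break
        nearest = min(survivors)       # (depth, color): nearest survivor
        layers.append(nearest)
        prev_depth = nearest[0]
    return layers
```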


Depth Peeling by Stencil-Routed Rendering
Stencil-routed rendering preserves color only in pixels whose stencil value matches the comparison value and decreases the stencil value of the pixel where the color is stored by 1 [13,14]. Multilayer images are then generated by depth-wise sorting after rendering all objects. Figures 10b and 12 show multi-layer rendering via stencil-routed rendering.
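Conceptually, per pixel, the stencil routes each incoming fragment to a fresh slot in a single pass. This is a simplified model: the hardware stencil test and the exact routing scheme differ in detail.

```python
def stencil_route(fragments, num_layers):
    """Capture up to num_layers fragments for one pixel in submission order:
    the stencil counter selects a fresh slot for each write and is
    decremented afterwards; captured slots are then sorted by depth."""
    slots = []
    stencil = num_layers               # stencil acts as a remaining-slot counter
    for frag in fragments:             # fragments arrive in submission order
        if stencil > 0:                # stencil test passes while slots remain
            slots.append(frag)
            stencil -= 1               # decrement stencil after the write
    return sorted(slots)               # depth-wise sort after rendering
```

Unlike depth peeling, all layers are captured in one rendering pass, at the cost of the post-sort and the polygon loss problem described below.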


Polygon Loss Problem
Modern GPUs, which use multiple sampling points per pixel, can lose polygon information if a polygon covers only a few of these sampling points. This condition leaves stencil values unchanged across renders, resulting in fewer layers, as shown in Figure 13.


Comparing Average Depth of Sample Points
The first layer, which has the highest sample count, is replaced with the image generated by the general rendering process. However, some pixels use information from lost pixels to calculate the color of the motion blur. When a polygon covers too few sample points, the resulting 'lost pixels' can be identified by their higher average depth compared to the surrounding, non-lost pixels; we detect them by comparing the depth of the current pixel with the average depth of its neighbors. Figure 14 shows the reliability filter performance of the depth comparison method. Without the depth comparison (Figure 14a), clear artifacts appear; with it (Figure 14b), the artifacts disappear.
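The detection can be sketched as a small neighborhood test. The 3 × 3 neighborhood and the threshold value are our assumptions, not values from the paper.

```python
import numpy as np

def lost_pixel_mask(depth, threshold=0.1):
    """Flag pixels whose depth exceeds the average depth of their 3x3
    neighbors by more than `threshold`; such pixels indicate polygon loss
    (the polygon covered too few hardware sample points)."""
    h, w = depth.shape
    mask = np.zeros((h, w), dtype=bool)
    for y in range(h):
        for x in range(w):
            neigh = [depth[j, i]
                     for j in range(max(y - 1, 0), min(y + 2, h))
                     for i in range(max(x - 1, 0), min(x + 2, w))
                     if (j, i) != (y, x)]
            mask[y, x] = depth[y, x] - np.mean(neigh) > threshold
    return mask
```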


Experimental Analysis
The accumulation method and the two rendering methods were experimentally analyzed in a Direct3D 11 environment (Intel i9-10900K, Samsung 128 GB RAM, Nvidia RTX 3080). The accumulation method renders a single blurred image from multiple rendering iterations within a frame.

Samples per Frame
As shown in Table 1, we analyzed performance by varying the number of samples per frame in scenes with 860 K triangles, noting only a minor decrease in frames per second (FPS) with increasing samples.

We evaluated the perceptual quality of images using the structural similarity index measure (SSIM) [21]. The three methods were subjected to the same denoising process, and the SSIM calculations were averaged over 10 randomly selected images from the same scene with 860 K triangles. As shown in Table 3, both the depth peeling and stencil-routed methods achieved SSIM values above 0.93.

Comparing Motion Blur Image Quality
Figure 15 compares the image quality of the three rendering methods. We chose the accumulation method as our reference because it is closest to the correct motion blur. The checkerboard pattern on the front of the car is not clear in scenes with 16 samples per frame but is clear in scenes with 32 samples per frame.

Comparing Motion Blur by Number of Layers
Figure 16 shows the impact of varying layer counts on motion blur quality. Notably, a single layer does not adequately convey motion blur, while multiple layers significantly enhance the effect.


Discussion
In this section, we compare previously proposed motion blur generation algorithms with our method and discuss potential improvements and future work.
The previously proposed methods can be broadly categorized into accumulation buffers, AI-based methods, and post-processing. First, the accumulation method creates the motion blur effect by repeating the rendering within a single frame. Because it repeats the actual rendering, it produces the ideal motion blur effect. However, it takes a long time because many rendering iterations are needed to reduce artifacts, making it unsuitable for real-time rendering environments.
Second, using AI is a fast way to create motion blur effects. However, it requires a large amount of data to train the model, and creating that data consumes considerable resources. It can also produce artifacts in real-time rendering environments with many objects and rapid changes. Moreover, because the model's output process cannot be inspected accurately, correcting errors when they occur takes a long time.
Finally, post-processing methods that use information from the current and previous frames to create motion blur effects are the most popular. Single-layer post-processing is widely used, but if objects in the first layer move, it cannot create a motion blur effect that blends with objects in later layers. Many algorithms have been proposed to address this problem, but they remain limited because they do not use the information provided by the real layers.
Our method uses depth peeling and stencil routing techniques to accurately capture multiple objects moving in different directions and at different speeds. It also uses motion vectors from multiple layers to reproduce blur close to the reference. This allows us to approximate the colors and details of the original scene more accurately, which are often lost in traditional methods. We have shown that our approach achieves higher SSIM values, indicating superior image quality, and maintains high frame rates, confirming its practical applicability in real-time rendering.
In future work, we will investigate ways to improve the motion vector detection process and streamline the multilayer rendering process. If the efficiency of the motion vector detection process is increased, the denoising process can be eliminated. Also, because our method uses information from multiple layers, resources are inherently consumed in drawing those layers. Depth peeling, for example, must repeat rendering within the same frame to obtain as many layers as needed. Stencil routing renders only once, but requires sorting by depth, and additional denoising techniques must be used. By studying and applying these improvements, we can expect to increase the efficiency and speed of motion blur generation.

Figure 1 .
Figure 1. (a) Scene with moving billiard balls; (b) scene with moving cars.

Figure 2 .
Figure 2. Motion vector corresponding to the movement of pixel PT−1 to PT between frames T − 1 and T.


Figure 4 .
Figure 4. Search area generated according to the motion vector map.


Figure 5 .
Figure 5. Pseudocode to determine the motion vector passing through the current pixel (P).

Figure 6 .
Figure 6. Finding the motion vector passing through the current pixel (P) using the search area and backward search.


Figure 7 .
Figure 7. Pseudocode to calculate motion blur color and success rate using the motion vector.


Figure 9 .
Figure 9. Color sampling of points Tn opposite the motion vector found at the current pixel using (a) a single layer and (b) multiple layers.


Figure 10 .
Figure 10. Simple pseudocode for multi-layer rendering: (a) depth peeling by comparison with previous layer depth and (b) depth peeling by stencil-routed rendering.


Figure 11 .
Figure 11. Pipeline for depth peeling by comparison with the previous layer depth.


Figure 14 .
Figure 14. Reliability filter performance (a) without depth comparison and (b) with depth comparison.


Figure 15 .
Figure 15. Comparing image quality of the rendered scene by each rendering method and multi-layer rendering method.


Figure 16 .
Figure 16. Motion blur with different numbers of layers.


Table 1 .
FPS with respect to samples per frame.

Table 2 .
FPS with respect to the number of triangles.

Table 3 .
Comparison in terms of SSIM.