Abstract
Background subtraction is a core preprocessing step for video analytics, enabling downstream tasks such as detection, tracking, and scene understanding in applications ranging from surveillance to transportation. Real-time deployment remains challenging, however, when illumination changes, shadows, and dynamic backgrounds produce heavy-tailed pixel variations that simple Gaussian assumptions capture poorly. In this work, we propose a fully parallel GPU implementation of a per-pixel background model that represents temporal pixel deviations with lognormal distributions. During a short training phase, a circular buffer of n frames (as small as n = 3) is used to estimate, for every pixel, robust log-domain parameters (μ, σ). During testing, each incoming frame is compared against a robust reference (the per-pixel median), and a lognormal cumulative distribution function yields a probabilistic foreground score that is thresholded to produce a binary mask. We evaluate the method on multiple videos under varying illumination and motion conditions and compare qualitatively against the widely used mixture-of-Gaussians baselines MOG and MOG2. On an NVIDIA RTX 3080 Ti, our method averages 87 fps with a buffer size of 10 and reaches about 188 fps with a buffer size of 3. Finally, we discuss the accuracy–latency trade-off incurred by larger buffers.
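To make the pipeline concrete, the following is a minimal NumPy/SciPy sketch of the training and testing steps as described above. It is an illustrative reading of the abstract, not the paper's CUDA implementation: the function names (train, segment), the threshold tau, the stabilizer eps, and the specific robust estimators (median and MAD in the log domain) are assumptions introduced here.

```python
# Minimal per-pixel lognormal background-model sketch (assumed reading of
# the abstract; the paper's actual implementation runs per pixel in CUDA).
import numpy as np
from scipy.stats import lognorm

def train(buf, eps=1e-6):
    """Estimate per-pixel log-domain parameters from a short frame buffer.

    buf: (n, H, W) float grayscale frames, n as small as 3.
    Returns the per-pixel median reference and lognormal parameters.
    Median/MAD estimators and eps are illustrative assumptions.
    """
    ref = np.median(buf, axis=0)                  # robust per-pixel reference
    dev = np.abs(buf - ref) + eps                 # temporal deviations > 0
    logdev = np.log(dev)                          # work in the log domain
    mu = np.median(logdev, axis=0)                # robust log-location
    sigma = 1.4826 * np.median(np.abs(logdev - mu), axis=0) + eps  # MAD scale
    return ref, mu, sigma

def segment(frame, ref, mu, sigma, tau=0.95, eps=1e-6):
    """Score one frame against the model and threshold to a binary mask.

    The lognormal CDF of each pixel's deviation from the median reference
    gives a probabilistic foreground score; tau is a hypothetical threshold.
    """
    dev = np.abs(frame - ref) + eps
    score = lognorm.cdf(dev, s=sigma, scale=np.exp(mu))
    return score > tau
```

In this sketch the accuracy–latency trade-off is visible directly: a larger buffer n gives more samples for the median and MAD estimates at the cost of more memory traffic per pixel, which is consistent with the reported drop from about 188 fps (n = 3) to 87 fps (n = 10).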