Mathematics
  • Feature Paper
  • Article
  • Open Access

22 December 2025

A Parallel Algorithm for Background Subtraction: Modeling Lognormal Pixel Intensity Distributions on GPUs

1 Department of Computer Science and Electrical Engineering, Mayfield College of Engineering, Tarleton State University, The Texas A&M University System, Stephenville, TX 76402, USA
2 Department of Mathematics, College of Science and Mathematics, Tarleton State University, The Texas A&M University System, Stephenville, TX 76402, USA
* Author to whom correspondence should be addressed.
This article belongs to the Special Issue Application of Advanced Computing and Artificial Intelligence in Engineering and Science, 2nd Edition

Abstract

Background subtraction is a core preprocessing step for video analytics, enabling downstream tasks such as detection, tracking, and scene understanding in applications ranging from surveillance to transportation. Real-time deployment remains challenging, however, when illumination changes, shadows, and dynamic backgrounds produce heavy-tailed pixel variations that simple Gaussian assumptions fail to capture. In this work, we propose a fully parallel GPU implementation of a per-pixel background model that represents temporal pixel deviations with lognormal distributions. During a short training phase, a circular buffer of n frames (as small as n = 3) is used to estimate robust log-domain parameters (μ, σ) for every pixel. During testing, each incoming frame is compared against a robust per-pixel median reference, and a lognormal cumulative distribution function (CDF) yields a probabilistic foreground score that is thresholded to produce a binary mask. We evaluate the method on multiple videos under varying illumination and motion conditions and compare qualitatively with the widely used mixture-of-Gaussians baselines MOG and MOG2. On an NVIDIA GeForce RTX 3080 Ti, our method averages 87 fps with a buffer size of 10 and reaches about 188 fps with a buffer size of 3. Finally, we discuss the accuracy–latency trade-off introduced by larger buffers.
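The abstract only summarizes the test-phase pipeline, so the following is a minimal CUDA sketch of the per-pixel scoring step, assuming the training phase has already produced per-pixel log-domain parameters μ and σ and a per-pixel median reference, all stored as flat device arrays. The kernel name, the EPS guard, and the threshold parameter are illustrative assumptions, not the authors' code.

    // Hypothetical sketch of the test-phase kernel described in the abstract.
    // Assumes training already filled mu[], sigma[], and median_ref[] per pixel.
    #include <cuda_runtime.h>
    #include <math.h>

    #define EPS 1e-3f  // guards log(0) when a pixel matches its reference exactly

    __global__ void lognormal_mask_kernel(const unsigned char *frame,
                                          const unsigned char *median_ref,
                                          const float *mu, const float *sigma,
                                          unsigned char *mask,
                                          int width, int height, float threshold)
    {
        int x = blockIdx.x * blockDim.x + threadIdx.x;
        int y = blockIdx.y * blockDim.y + threadIdx.y;
        if (x >= width || y >= height) return;
        int i = y * width + x;

        // Absolute deviation of the incoming pixel from its robust reference.
        float d = fabsf((float)frame[i] - (float)median_ref[i]) + EPS;

        // Lognormal CDF: F(d) = 0.5 * erfc(-(ln d - mu) / (sigma * sqrt(2))).
        float z = (logf(d) - mu[i]) / (sigma[i] * 1.41421356f);
        float score = 0.5f * erfcf(-z);

        // Deviations improbable under the background model become foreground.
        mask[i] = (score > threshold) ? 255 : 0;
    }

A 2D launch with, e.g., 16x16 thread blocks maps one thread per pixel, which is what makes a per-pixel model of this kind embarrassingly parallel and consistent with the frame rates reported above.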
