Article

The Probabilistic Foundations of Surveillance Failure: From False Alerts to Structural Bias

Department of Mathematics, Trent University, Peterborough, ON K9L 0G2, Canada
Mathematics 2026, 14(1), 49; https://doi.org/10.3390/math14010049
Submission received: 18 November 2025 / Revised: 18 December 2025 / Accepted: 20 December 2025 / Published: 23 December 2025
(This article belongs to the Section D1: Probability and Statistics)

Abstract

Forensic statisticians have long debated whether searching large DNA databases undermines the evidential value of matches. Modern surveillance faces an exponentially harder problem: screening populations across thousands of attributes using threshold rules. Intuition suggests that requiring many coincidental matches should make false alerts astronomically unlikely. This intuition fails. Consider a system monitoring 1000 attributes, each with a 0.5 percent innocent match rate. Matching 15 pre-specified attributes has probability 10^{−35}, 1 in 30 decillion, effectively impossible. But operational systems may flag anyone matching any 15 of the 1000. In a city of one million innocents, this produces about 226 false alerts. A seemingly impossible event becomes guaranteed. This is a mathematical consequence of high-dimensional screening, not implementation failure. We identify fundamental probabilistic limits on screening reliability. Systems undergo sharp transitions from reliable to unreliable with small data scale increases, a fragility worsened by data growth and correlations. As data accumulate and correlation collapses effective dimensionality, systems enter regimes where alerts lose evidential value even when individual coincidences remain vanishingly rare. This framework reframes the DNA database controversy as a regime shift. Unequal surveillance exposures magnify failure, making “structural bias” mathematically inevitable. Beyond a critical scale, failure cannot be prevented through threshold adjustment or algorithmic refinement.

1. Introduction

1.1. From Database Search to Threshold Screening

For over two decades, forensic statisticians have debated a fundamental question: does searching a DNA database of a million profiles weaken the evidential value of a match? Stockmarr [1] argued that database searching increases coincidental match probability, while Balding [2] maintained that likelihood ratios preserve evidential weight regardless of search method. This controversy (concerning a single database with exact profile matching) remains unresolved [3,4].
Modern surveillance systems face an exponentially harder problem. Rather than searching one database for exact matches, they monitor populations across thousands of attributes (locations visited, transactions made, and communications sent) and flag individuals whose patterns match sufficiently many criteria using a threshold rule rather than exact matching. The intuition that “requiring many coincidences makes false matches astronomically unlikely” fails catastrophically in this regime.
Consider a concrete example that reveals the nature of this failure. A surveillance system monitors k = 1000 binary indicators, each with innocent match probability p = 0.005 (half a percent chance), and flags individuals matching at least m = 15 attributes. At first glance, this threshold seems extraordinarily conservative. Requiring an exact match across 15 specific predetermined attributes would have probability (0.005)^{15} ≈ 3 × 10^{−35}, 1 in 30 decillion. Such exact-match odds suggest that false alerts should be astronomically impossible, even across populations of billions.
But operational systems do not require exact matches across 15 predetermined attributes. Instead, they may flag individuals matching at least 15 of any 1000 attributes. This seemingly subtle distinction changes everything. With λ = k p = 1000 × 0.005 = 5 expected chance matches per innocent person and threshold m = 15 (three times the expected value), direct calculation gives a per-person false alert probability of
q := Pr(Poisson(5) ≥ 15) ≈ 2.26 × 10^{−4} = 0.0226%.
In a city of n = 10^6 innocents, this produces approximately nq ≈ 226 false alerts, and the probability of at least one false alert is
Pr(at least one false alert) ≈ 1 − e^{−226} ≈ 1 − 7 × 10^{−99}.
Thus the probability of no false alerts is on the order of 10^{−99}, making the system, for all practical and mathematical purposes, certain to flag at least one innocent person.
The “1 in 30 decillion” exact-match calculation is mathematically correct but operationally irrelevant. The threshold rule “at least m of k” transforms the per-person probability from vanishingly small (∼10^{−35}) to operationally significant (∼10^{−4}), and population size amplifies this individual-level vulnerability into system-level certainty of failure. This is not a failure of implementation or algorithm design; it is a mathematical inevitability arising from the combinatorics of threshold detection in high-dimensional attribute spaces.
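The arithmetic in this worked example can be reproduced in a few lines of standard-library Python. This is a minimal sketch, not part of the paper; the helper name poisson_tail is ours.

```python
import math

def poisson_tail(lam: float, m: int) -> float:
    """Pr(Poisson(lam) >= m) = 1 - Pr(Poisson(lam) <= m - 1)."""
    cdf = sum(math.exp(-lam) * lam**j / math.factorial(j) for j in range(m))
    return 1.0 - cdf

k, p, m, n = 1000, 0.005, 15, 10**6
lam = k * p                              # 5 expected chance matches per innocent
q = poisson_tail(lam, m)                 # per-person false alert probability, ~2.26e-4
expected_alerts = n * q                  # ~226 expected false alerts in a city of a million
p_any = 1 - math.exp(-expected_alerts)   # probability of at least one false alert: ~1
```

Running this recovers the figures above: a per-person probability near 2.26 × 10^{−4} and roughly 226 expected false alerts, so at least one false alert is essentially certain.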
This contrast between the DNA setting and modern multi-attribute surveillance also clarifies why the long-standing DNA database controversy has persisted. Classical DNA searches operate in an extremely sparse false-match regime, where random matches are so rare that enlarging the database does not meaningfully dilute the evidential value of a hit.
Multi-attribute threshold screening, however, operates in a dense false-match regime where accumulated coincidences quickly overwhelm the signal. These are mathematically distinct operating regimes. Recognizing this distinction resolves the apparent conflict in the DNA literature and motivates the broader probabilistic limits developed in this paper.

1.2. Motivation and Background

Modern surveillance and screening systems collect vast amounts of data about individuals and attempt to identify rare targets of interest, such as potential terrorists, criminals, or fraudsters. These systems compare observed attributes—biometric features, transaction patterns, travel histories, and communication metadata—against profiles or watchlists, flagging individuals whose patterns match suspiciously well [5,6,7].
The fundamental challenge is the curse of dimensionality in coincidence detection [8]: as monitored attributes grow, innocent individuals increasingly match multiple criteria by chance. When populations are large (millions to billions), even extremely rare coincidences become statistically certain, generating overwhelming false alerts. This parallels challenges in cybersecurity [9,10,11,12], where even highly accurate intrusion detectors are swamped by false alarms at scale, and in open-set recognition [13], where classifiers must reliably reject vast numbers of unfamiliar inputs.

1.3. Related Work and Connections

Our analysis builds on classical probability theory (Poisson approximation, Chernoff bounds, and concentration inequalities) applied to modern surveillance contexts, situating this work within the framework of large-scale inference [14]. The system-level bound Pr(false alert) = 1 − (1 − q)^n corresponds to family-wise error rate (FWER) control in multiple testing [15,16,17], though our large-deviation analysis shows when such control becomes infeasible.
The base-rate analysis resolves the Stockmarr–Balding debate [1,2] in forensic DNA statistics by identifying distinct asymptotic regimes. Connections to anomaly detection [9,12], open-set recognition [13], and algorithmic fairness [18,19,20,21,22] appear throughout: false-positive explosion is a fundamental challenge across all screening domains.

1.4. Main Contributions

This paper develops a rigorous probabilistic framework for analyzing such screening systems. Using classical tools (Poisson approximations, Chernoff bounds, concentration inequalities, and Bayesian inference), we characterize fundamental limits on system reliability. Our analysis reveals that the single-database DNA controversy and multi-attribute surveillance failure are manifestations of the same mathematical phenomenon, differing only in which asymptotic regime they occupy.
Our main contributions include the following:
  • Sharp critical population bounds: We derive two-sided bounds on both per-person and system-level false alert probabilities, showing that the critical population grows as n_crit ≍ √λ · exp(λD(c−1)) with explicit √λ corrections (Theorem 1). The rate function D(c−1) = c log c − c + 1 governs the exponential scaling.
  • Finite system lifetimes: Under exponential data growth k(t) = k_0 γ^t, systems fail at time T* ≈ (1/log γ) · log(m/(k_0 p)) when λ(t) reaches the threshold m, with population corrections secondary (Theorem 2).
  • Unified Bayesian–frequentist view: Both regimes exhibit the same exponential scaling exp(λD(c−1)), but Bayesian actionability requires n ≲ ((1 − α)/α) · rs · √λ · e^{λD} (for desired PPV α) while frequentist reliability allows n ≲ √λ · e^{λD} (Section 4.1). Posterior probabilities collapse when nq ≫ rs, resolving the Stockmarr–Balding controversy as a regime transition.
  • Structural bias through group dominance: Differential surveillance exposure creates exponential outcome disparities that cannot be eliminated through threshold adjustments (Section 3.3), formalizing “structural bias” arguments from the algorithmic fairness literature.
  • Effective dimensionality under correlation: Spatiotemporal dependencies reduce effective attribute counts to k_eff ≈ A/(2πξ²) (spatial) or k_eff ≈ k/(2τ) (temporal), connecting surveillance analysis to multiple-testing corrections in genomics and neuroimaging (Section 3.4; technical details in Appendix B).
All main results are proved using elementary probability, with full technical details collected in Appendix A. Appendix C presents illustrative examples based on public datasets (UCI Adult Census and Chicago Crime) that show how the phenomena described here arise with real data.
Notation. 
The symbols above are defined precisely in later sections: D ( c 1 ) is the Poisson rate function (Lemma 1); c > 1 is the threshold multiplier with m = c λ (Section 2.2); γ and k 0 parameterize exponential data growth (Section 3.2); r, s, and α are Bayesian parameters (Section 4.1); and k eff , ξ , and τ quantify correlation effects (Section 3.4). A complete notation summary appears in the symbol table.

2. Materials and Methods

2.1. The Basic Screening Model

Consider a population of n individuals, from each of whom we collect k types of data (e.g., locations visited, financial transactions, etc.). For each type of data, we also have lists of suspicious activity, for example, locations of interest where crimes have occurred. In general, we observe k binary attributes (indicators) for each individual, where each attribute has a small probability p ≪ 1 of matching some profile of interest purely by chance.
For example, a simple model for the match probability p on the j-th data type might be found by calculating the probability that a person’s list (size t) overlaps the suspicious list (size s) on a domain of size V possible distinct items (e.g., V geo-location cells). With uniform sampling without replacement from the domain, the probability of an overlap comes from the zero-intersection hypergeometric probability:
p = Pr(overlap ≥ 1) = 1 − C(V−t, s)/C(V, s) = 1 − ∏_{ℓ=0}^{s−1} (1 − t/(V−ℓ)),  for s ≤ V − t,
with p = 1 when s > V − t (guaranteed overlap).
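This hypergeometric calculation is easy to transcribe directly. The sketch below (illustrative helper names and example values, not from the paper) implements both the binomial-coefficient form and the equivalent product form:

```python
from math import comb, prod

def overlap_prob(V: int, t: int, s: int) -> float:
    """Pr(a size-s suspicious list intersects a fixed size-t personal list,
    sampling uniformly without replacement from a domain of V items)."""
    if s > V - t:
        return 1.0  # guaranteed overlap
    return 1.0 - comb(V - t, s) / comb(V, s)

def overlap_prob_product(V: int, t: int, s: int) -> float:
    """Same probability via the product form 1 - prod_{l=0}^{s-1} (1 - t/(V-l))."""
    if s > V - t:
        return 1.0
    return 1.0 - prod(1 - t / (V - l) for l in range(s))
```

For instance, with V = 1000 domain cells, a personal list of size t = 10, and a suspicious list of size s = 1, both forms give p = 1 − 990/1000 = 0.01, and the two forms agree for general parameters.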
More generally, the match probability p may be derived from any domain-appropriate model; the hypergeometric calculation above serves merely as one concrete example. Our analysis holds for any fixed value of p ≪ 1, regardless of how it is obtained. With this understanding, let X_i denote the number of matching attributes for individual i. Under the null hypothesis (individual i is innocent) and the modeling assumptions detailed below, we have
X_i ∼ Binomial(k, p).
For large k and small p, we write λ = k p for the expected number of chance matches; this parameter will govern all subsequent results.
The screening system flags individual i as suspicious if X_i ≥ m for an integer threshold m, that is, if the individual matches a profile of interest (e.g., an unsolved crime) on at least m attributes. Throughout, we consider thresholds of the form m = cλ for constant c > 1, representing detection criteria set above the expected number of chance matches; rounding to an integer affects only constant factors. The per-person false alert probability is
q = Pr(X_i ≥ m | innocent).
Since we screen n individuals, the system-level false alert probability, that is, the probability that at least one innocent person is flagged, is
Pr(at least one false alert) = 1 − (1 − q)^n.
For small q, this is commonly approximated by
Pr(at least one false alert) ≈ 1 − e^{−nq}.
The system becomes unreliable when this probability approaches 1 (that is, when nq ≳ 1), in which case the system is nearly certain to flag innocent individuals as suspects.
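The exact binomial model and the two system-level formulas above can be compared directly. The following sketch (our illustration, using the running parameters from Section 1) evaluates q exactly and checks that 1 − e^{−nq} tracks 1 − (1 − q)^n:

```python
from math import comb, exp

def binom_tail(k: int, p: float, m: int) -> float:
    """q = Pr(Binomial(k, p) >= m), computed exactly."""
    return sum(comb(k, j) * p**j * (1 - p)**(k - j) for j in range(m, k + 1))

k, p, m, n = 1000, 0.005, 15, 10**6
q = binom_tail(k, p, m)       # close to the Poisson value 2.26e-4
exact  = 1 - (1 - q)**n       # system-level false alert probability
approx = 1 - exp(-n * q)      # the small-q approximation
```

At this scale both expressions are indistinguishable from 1: with nq in the hundreds, the system is certain to produce false alerts, and the exponential approximation is accurate to many decimal places.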

2.2. Modeling Assumptions and Regime of Validity

Throughout this paper, we operate under the following assumptions:
  • Large-k, small-p regime: We assume k ≫ 1 (many attributes) and p ≪ 1 (rare individual matches), with mean λ = kp held moderate and fixed as k → ∞. This is the natural Poisson regime for rare event detection, where the binomial distribution converges to Poisson(λ) with total variation error bounded by 2 ∑_{i=1}^{k} p_i² (see Section 3.1).
  • Independence across individuals: The match indicators for different individuals are statistically independent (Remark 2). In practice, common-mode events (e.g., large gatherings or seasonal shifts) may introduce positive dependence, which would only increase false alert rates beyond our bounds.
  • Homogeneous match probability (baseline model): For analytical tractability, we initially assume that all attributes have the same match probability p. Section 3.3 extends to heterogeneous populations with different exposure rates p g across demographic groups. Remark 1 addresses heterogeneity across attributes within individuals.
  • Binary attributes: Each attribute either matches ( M = 1 ) or does not match ( M = 0 ) a profile of interest. This models presence/absence at locations, yes/no transaction patterns, or binarized continuous features.
  • Fixed threshold screening: The system flags individuals with m or more matches, where m is predetermined. We primarily analyze m = c λ for constant c > 1 , representing thresholds scaled to exceed the expected number of chance matches under the null hypothesis.
  • Null hypothesis analysis: We analyze false alerts under the assumption that all n individuals are innocent (the null hypothesis). Detection of actual targets requires separate treatment via signal detection theory and receiver operating characteristic (ROC) analysis [23], which is not addressed here. Our results characterize the false positive rate; in practice, system designers must balance this against detection power for true targets.
These assumptions define the scope of our analysis. The core results (Section 3.1 and Section 3.2) establish fundamental limits under these idealized conditions. Later sections extend the framework: Section 3.3 analyzes heterogeneous populations with differential exposure, and Section 3.4 summarizes how spatiotemporal correlation reduces effective dimensionality (with technical details in Appendix B).
The large-k, small-p regime is particularly natural for modern surveillance systems, where technological advances enable collection of thousands of attributes per person (GPS pings, transaction records, and communication events). In our model, each attribute contributes a binary match indicator (match or no match) of whether it aligns with a profile of interest, and each such match remains rare. For instance, with k = 1000 location cells and p = 0.005 (half-percent chance of presence at any flagged location), we have λ = 5 expected false matches, a moderate value that falls squarely within our analytical framework.

3. Results

3.1. Fundamental Probabilistic Limits

3.1.1. The Poisson Approximation

When k is large, p is small, and λ = kp is fixed, the Binomial(k, p) distribution can be approximated by Poisson(λ) [24]. Specifically, if X ∼ Binomial(k, p), then
Pr(X = j) ≈ e^{−λ} λ^j / j!,  where λ = kp.
Moreover, by Le Cam’s inequality [25],
d_TV(Binomial(k, p), Poisson(λ)) ≤ 2 ∑_{i=1}^{k} p_i².
For the homogeneous case with p_i = p for all i, this gives d_TV ≤ 2kp² = 2λp. When p is small and λ = kp is moderate, this bound is O(p), making the Poisson approximation numerically safe for practical applications. Throughout, we assume independence across individuals; common-mode shocks would require extensions via positively associated random variables (see Remark 2).
Under this approximation, the per-person false alert probability (tail) is
q = Pr(X ≥ m) ≈ ∑_{j=m}^{∞} e^{−λ} λ^j / j! = 1 − ∑_{j=0}^{m−1} e^{−λ} λ^j / j!.
Remark 1
(Heterogeneous Attribute Probabilities). For analytical tractability we assume a homogeneous match probability p across attributes, but this assumption can be relaxed. Let the attribute-level match probabilities be p 1 , , p k , and define
X = ∑_{i=1}^{k} Bernoulli(p_i),  λ = ∑_{i=1}^{k} p_i.
By Le Cam’s theorem [25], the total variation distance between the distribution of X and a Poisson distribution with mean λ satisfies
d_TV(X, Poisson(λ)) ≤ 2 ∑_{i=1}^{k} p_i².
Thus the Poisson approximation remains accurate whenever all p_i lie in the small-probability regime, even if the p_i differ substantially from one another. In this heterogeneous setting, the distribution of X, and therefore all results in this section, depends on the collection {p_i} only through the aggregate rate λ, up to an approximation error of order ∑_i p_i².
Group-level heterogeneity (e.g., differing p g across demographic groups) is treated separately in Section 3.3.
Remark 2
(Independence Across Individuals). Since we screen n individuals independently, the probability that at least one innocent person is flagged is
Pr(at least one false alert) = 1 − (1 − q)^n.
In real deployments, common-mode events may introduce positive dependence among individuals. Positive association means Pr(X_i ≥ x_i for all i) ≥ ∏_i Pr(X_i ≥ x_i): variables are more likely to be jointly large than under independence. This increases the system-level false alert probability beyond our bounds.
Our analysis assumes independence. Extensions to positively associated arrays via the Chen–Stein method, or Stein’s method more broadly [26], are possible but omitted here. The key implication is that positive dependence increases tail probabilities, so our independence-based bounds understate true false alert rates when correlation is present; in that sense, the reliability limits we derive are conservative.

3.1.2. Tail Bounds for the Poisson Distribution

To understand how q depends on m and λ , we use Chernoff bounds and related concentration inequalities [27,28] for Poisson random variables.
Lemma 1
(Poisson Upper Tail). Let Y ∼ Poisson(λ) and suppose m > λ. Then
Pr(Y ≥ m) ≤ (eλ/m)^m · e^{−λ} = exp(−λ · D(m/λ − 1)),
where D(α − 1) = α log α − α + 1 is the Poisson rate function.
Remark 3
(Rate Function Properties). The function D(α − 1) is also called the Cramér rate function or Cramér transform of the Poisson distribution with unit mean. This differs from the binary Kullback–Leibler divergence D_KL(p‖q) = p log(p/q) + (1 − p) log((1 − p)/(1 − q)) used for Bernoulli random variables.
The large-deviation rate is asymptotically exact for m/λ > 1: the tail probability satisfies q ≍ λ^{−1/2} · e^{−λD(m/λ−1)}, with factorial discreteness contributing only the λ^{−1/2} subexponential prefactor. We write D(c − 1) or simply D for brevity when the argument is clear.
Proof. 
For any t > 0,
Pr(Y ≥ m) = Pr(e^{tY} ≥ e^{tm}) ≤ E[e^{tY}] / e^{tm} = exp(λ(e^t − 1) − tm).
Optimizing by setting (d/dt)[λ(e^t − 1) − tm] = 0 gives e^{t*} = m/λ, which yields the stated bound. □
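As a quick numerical sanity check (our illustration, not part of the proof), the Chernoff bound of Lemma 1 can be compared against the exact Poisson tail at the paper’s running parameters λ = 5, m = 15:

```python
import math

def poisson_tail(lam: float, m: int) -> float:
    """Exact Pr(Poisson(lam) >= m)."""
    return 1.0 - sum(math.exp(-lam) * lam**j / math.factorial(j) for j in range(m))

def chernoff_bound(lam: float, m: int) -> float:
    """(e*lam/m)^m * e^(-lam) = exp(-lam * D(m/lam - 1)) with D(a-1) = a log a - a + 1."""
    a = m / lam
    return math.exp(-lam * (a * math.log(a) - a + 1))

lam, m = 5.0, 15
exact = poisson_tail(lam, m)    # ~2.26e-4
bound = chernoff_bound(lam, m)  # ~1.5e-3: a valid upper bound, though not tight
```

The bound overshoots the exact tail by roughly a factor of 7 here, which is consistent with the λ^{−1/2}-type subexponential prefactor discussed in Remark 3.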

3.1.3. Critical Population Size

We now establish the fundamental scale of population size for reliable screening when the decision threshold scales as m = c λ with c > 1 .
Theorem 1
(Critical Population Scale). Consider a screening system with k binary attributes, per-attribute match probability p, threshold m = cλ for some c > 1, population size n, and λ = kp. Let D(c−1) = c log c − c + 1 (the Poisson rate function from Lemma 1). Then the per-person false alert probability q satisfies the following asymptotic bounds:
(1/√(2πcλ)) · exp(−λD(c−1)) · (1 − 1/(12cλ)) ≤ q ≤ exp(−λD(c−1)),
which hold for λ ≥ 1 and c > 1 with the stated constants; for smaller λ the same exponential form holds with different absolute constants. The system-level false alert probability obeys
1 − exp(−(n/√(2πcλ)) · e^{−λD(c−1)} · (1 − 1/(12cλ))) ≤ Pr(false alert) ≤ 1 − exp(−n e^{−λD(c−1)} / (1 − e^{−λD(c−1)})).
(For e^{−λD} ≤ 1/2, the denominator in the upper bound changes the exponent by at most a factor of 2, providing tight control.) In particular, the system becomes unreliable once
n ≳ √λ · exp(λD(c−1))  (up to constant factors),
and the critical population scale is
n_crit ≍ exp(λD(c−1)) = exp(kp · D(c−1)),
ignoring subexponential √λ corrections. Here we use the notation f ≍ g to denote equality up to multiplicative constants independent of the main asymptotic parameters (λ, k, and n).
Proof sketch. 
The upper bound follows directly from Lemma 1 with m = cλ, giving q ≤ exp(−λD(c−1)). The lower bound uses Robbins’ refinement of Stirling’s formula, which shows that Pr(Y = m) has the same exponential rate D(c−1) up to a subexponential factor in λ. Because q = Pr(Y ≥ m) ≥ Pr(Y = m), these bounds yield matching exponential rates for q.
The system-level inequalities (3) follow from e^{−nx/(1−x)} ≤ (1 − x)^n ≤ e^{−nx} for x ∈ (0, 1). The critical population scale arises when nq ≍ 1, giving n_crit ≍ √λ · e^{λD(c−1)}, and hence, up to subexponential factors, n_crit ≍ e^{λD(c−1)}, as stated. Full derivations appear in Appendix A. □
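The two-sided bounds on q in Theorem 1 can be checked numerically. The sketch below (our illustration) evaluates both sides at λ = 5, c = 3 (so m = 15), matching the running example, and reports the resulting critical population scale:

```python
import math

def poisson_tail(lam: float, m: int) -> float:
    """Exact Pr(Poisson(lam) >= m)."""
    return 1.0 - sum(math.exp(-lam) * lam**j / math.factorial(j) for j in range(m))

lam, c = 5.0, 3.0
m = int(c * lam)                          # threshold m = 15
D = c * math.log(c) - c + 1               # rate function D(c-1)
upper = math.exp(-lam * D)
lower = (math.exp(-lam * D) / math.sqrt(2 * math.pi * c * lam)) * (1 - 1 / (12 * c * lam))
q = poisson_tail(lam, m)                  # exact tail sits between the bounds
n_crit = 1 / q                            # population scale where n*q ~ 1
```

With these parameters the exact tail (≈ 2.26 × 10^{−4}) indeed lies between the lower (≈ 1.6 × 10^{−4}) and upper (≈ 1.5 × 10^{−3}) bounds, and the critical population is only a few thousand people.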

3.1.4. Sharp Threshold Behavior

The relationship between population size and system reliability exhibits an increasingly sharp threshold as the number of attributes grows.
Proposition 1
(Threshold Sharpness). Consider a screening system with λ = kp, threshold m = cλ for fixed c > 1, and population scaled as n = √λ · e^{αλD(c−1)} for α > 0. Then the system-level false alert probability satisfies
lim_{λ→∞} Pr(false alert) = 0 if α < 1, and = 1 if α > 1.
The transition occurs in a window Δα ≍ 1/(λD(c−1)) → 0, becoming arbitrarily sharp as λ increases; this follows from standard large-deviation theory for sums of i.i.d. rare events [29]. (Note: α here denotes the population scaling exponent, distinct from the posterior probability threshold α in Section 4.1.)
Proof sketch. 
From Theorem 1, log q = −λD(c−1) + O(log λ). With n = √λ · e^{αλD(c−1)}, we obtain
log(nq) = (α − 1) · λD(c−1) + O(log λ).
For α < 1, nq → 0 exponentially; for α > 1, nq → ∞ exponentially. The transition width is Δα = O(1/(λD)) → 0. Full details appear in Appendix A.2. □
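The dichotomy can be illustrated numerically by evaluating log(nq) on both sides of α = 1 as λ grows. This sketch is ours (tail probabilities are summed in the log domain to avoid underflow at large λ), with the population scaling n = √λ · e^{αλD(c−1)} taken from the proposition:

```python
import math

def log_poisson_tail(lam: float, m: int, terms: int = 400) -> float:
    """log Pr(Poisson(lam) >= m), summing pmf terms in the log domain."""
    logs = [-lam + j * math.log(lam) - math.lgamma(j + 1) for j in range(m, m + terms)]
    mx = max(logs)
    return mx + math.log(sum(math.exp(v - mx) for v in logs))

c = 3.0
D = c * math.log(c) - c + 1
results = {}
for lam in (5.0, 20.0, 50.0):
    m = int(c * lam)
    logq = log_poisson_tail(lam, m)
    for alpha in (0.8, 1.2):
        log_n = 0.5 * math.log(lam) + alpha * lam * D   # n = sqrt(lam) * e^{alpha*lam*D}
        results[(lam, alpha)] = log_n + logq            # log(n*q)
```

At λ = 5 the two sides of the transition are barely separated, but by λ = 50 the subcritical system has log(nq) far below 0 and the supercritical one far above: the window narrows exactly as Δα ≍ 1/(λD) predicts.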
Remark 4.
This sharp threshold implies no gradual degradation: systems operating near the critical line log n = λD(c−1) transition from reliable to unreliable essentially instantaneously as λ increases (Figure 1). Combined with exponential data growth λ(t) = λ_0 γ^t with λ_0 = k_0 p, every system inevitably crosses this threshold at a finite time
T* ≈ log(m/λ_0) / log γ = (1/log γ) · log(m/(k_0 p))
(Theorem 2), making failure predictable rather than merely possible.

3.1.5. Numerical Validation

Monte Carlo simulations (5000 runs per data point) validate the Poisson approximation and critical population predictions as k increases. Figure 1a compares simulated false alert rates (orange points) to theoretical predictions, showing a mean absolute error of 0.0015 across the critical transition region. The sharp threshold behavior predicted by Proposition 1 is clearly evident in the simulation data.
The accuracy of the Poisson approximation across varying ( k , p ) combinations follows from Le Cam’s inequality (Section 3.1.1). In the homogeneous case p i = p used in this section, Le Cam’s bound simplifies to
d_TV ≤ 2kp² = 2λp,
where λ = k p . Table 1 compares this theoretical bound against the actual total variation distance between Binomial ( k , p ) and Poisson ( λ ) distributions across parameter combinations spanning the operating regime of interest.
The Le Cam bound is conservative by a factor of 17–40 in these regimes; the actual approximation error is substantially smaller than the bound suggests. For the primary parameters used throughout this paper ( k = 1000 , p = 0.005 , λ = 5 ), the actual total variation distance is approximately 0.001 , indicating that the Poisson approximation introduces negligible error relative to the phase transition effects we characterize. Even in less favorable regimes ( k = 50 , p = 0.1 ), the approximation error remains below 0.03 , confirming that the Poisson model is appropriate across the full range of parameters considered.
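The comparison reported in Table 1 can be reproduced with a short script. The sketch below (our illustration) computes the exact total variation distance for the paper’s primary parameters in the log domain and checks it against the Le Cam bound 2kp² = 2λp:

```python
import math

def tv_binom_poisson(k: int, p: float) -> float:
    """Total variation distance between Binomial(k, p) and Poisson(kp)."""
    lam = k * p
    tv = 0.0
    for j in range(k + 1):
        log_b = (math.lgamma(k + 1) - math.lgamma(j + 1) - math.lgamma(k - j + 1)
                 + j * math.log(p) + (k - j) * math.log1p(-p))
        log_q = -lam + j * math.log(lam) - math.lgamma(j + 1)
        tv += abs(math.exp(log_b) - math.exp(log_q))
    return tv / 2

k, p = 1000, 0.005
d_tv = tv_binom_poisson(k, p)   # roughly 1e-3, consistent with the text
lecam = 2 * k * p**2            # = 2*lambda*p = 0.05
```

The actual distance comes out on the order of 10^{−3}, well below the Le Cam bound of 0.05, matching the factor-of-tens conservatism noted above.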
Additional numerical illustrations across a broader range of parameters, as well as examples based on real-world datasets, are given in Appendix C.
Key Takeaway. 
The per-person false alert probability decays as q ≍ e^{−λD(c−1)}, and the system becomes unreliable once the population size exceeds n_crit ≍ √λ · e^{λD(c−1)}. This behavior produces a sharp phase transition: systems move abruptly from reliable to unreliable operation, and the sharpness of the transition increases monotonically with λ.

3.2. Temporal Dynamics: Finite System Lifetimes

Real surveillance systems accumulate data over time. As more attributes are collected, the false alert problem intensifies until the system becomes unreliable. We now quantify how long a screening system can operate before this inevitable failure occurs. As Figure 1c demonstrates, systems with exponential data growth exhibit sharp temporal transitions from reliable to unreliable operation at predictable times.
Remark 5
(Change of Threshold Regime). In the preceding analysis (Section 3.1), we studied thresholds of the form m = c λ with c > 1 constant, where both m and λ scale together. In this section, we adopt a different perspective: we fix the threshold m at deployment time while allowing λ ( t ) = k ( t ) · p to grow over time as data accumulates. This models realistic system operation, where detection thresholds are predetermined policy parameters that remain constant even as surveillance capabilities expand. The critical behavior occurs when the growing λ ( t ) crosses the fixed threshold m, at which point false alerts become inevitable.

3.2.1. Exponential Data Growth

Modern data collection exhibits exponential growth [30,31]. We model this as follows:
k(t) = k_0 · γ^t,
where k_0 is the initial number of monitored attributes at time t = 0 and γ > 1 is the growth factor per time unit (e.g., annual growth rate).
Since each attribute has probability p of matching by chance for an innocent individual, the expected number of false matches grows exponentially:
λ(t) = k(t) · p = k_0 p · γ^t.

3.2.2. Critical Time Derivation

The system flags an individual when they have m or more matching attributes. At time t, the per-person false alert probability is
q(t) = Pr(Poisson(λ(t)) ≥ m).
With population size n, the probability of at least one false alert is approximately
Pr(system false alert at time t) ≈ 1 − e^{−n·q(t)}.
The system becomes unreliable when n·q(t) ≳ 1.
Theorem 2
(System Lifetime Under Exponential Growth). Consider a screening system with exponential data growth k(t) = k_0 γ^t (where γ > 1), per-attribute match probability p, fixed threshold m, and population size n.
For populations satisfying n ≪ m · exp(m/2) (so that system failure occurs near λ(T*) ≈ m rather than deep in the Poisson tail where λ ≪ m), the system becomes unreliable at time
T* ≈ (1/log γ) · log(m/(k_0 p)),
where the critical time is characterized by λ(T*) ≈ m.
The population size n introduces a correction of order (1/log γ) · √(2 log n / m) to the time to failure (equivalently, a shift of order √(2m log n) in λ), up to logarithmic factors in m and n. This is typically weak relative to the effects of γ and m.
Proof sketch. 
The system fails when the expected number of false alerts reaches order one: n·q(T*) ≈ 1. For λ < m, the Poisson tail probability q(λ) = Pr(Poisson(λ) ≥ m) is exponentially suppressed by the factor exp(−λD(m/λ − 1)). As λ increases from well below m toward m, the rate function D(m/λ − 1) decreases to zero, causing q(λ) to increase sharply from nearly zero to order one. When λ > m, the tail probability increases rapidly toward 1. The transition occurs sharply near λ = m due to concentration of the Poisson distribution.
For λ near m, we use the normal approximation Poisson(λ) ≈ N(λ, λ), which yields q(λ) = Pr(X ≥ m) ≈ 1 − Φ((m − λ)/√λ). For λ < m, the Gaussian tail asymptotics (Mills’ ratio: 1 − Φ(x) ∼ ϕ(x)/x as x → ∞) give the more precise form
q(λ) ≈ (√λ / ((m − λ)√(2π))) · e^{−(m−λ)²/(2λ)}  (λ < m).
Setting n·q(T*) ≈ 1 gives λ(T*) ≈ m − √(2m log n), which is a small correction to λ(T*) = m for typical parameters; translating through λ(t) = k_0 p γ^t, this corresponds to a time shift of order (1/log γ) · √(2 log n / m). Substituting λ(T*) ≈ m into λ(t) = k_0 p γ^t yields (8) as the leading-order term. □

3.2.3. Key Insights

Equation (8) reveals several fundamental properties:
  • Finite lifetime is inevitable: For any fixed threshold m and exponential growth γ > 1, we have T* < ∞.
    Even if one attempts to preserve reliability by scaling the threshold with the data (m ∝ λ(t)), doing so would require the monitored population to grow as n_crit(t) ≳ exp(C·γ^t) for some C > 0, a doubly exponential rate that far exceeds any realistic population dynamics.
    Because real surveillance systems use fixed thresholds and monitor approximately fixed populations, temporal failure becomes inevitable rather than merely possible. This mathematical structure parallels the “double birthday paradox” in our earlier work [32], though it arises here in a distinct screening context.
  • Logarithmic dependence on threshold: Doubling the threshold m adds only (log 2)/log γ time units. For annual doubling (γ = 2), this is exactly 1 year. Even increasing m tenfold extends lifetime by only log_2(10) ≈ 3.3 years.
  • Inverse dependence on growth rate: The factor 1 / log γ means that faster data growth dramatically reduces system lifetime. Increasing γ from 1.5 to 2 (from 50% to 100% annual growth) roughly halves the operational lifetime.
  • Weak population dependence: While larger populations cause slightly earlier failure, this effect is logarithmic in n and secondary to the exponential effects of γ and m. The system lifetime is primarily determined by data growth dynamics, not population size.
The temporal phase transition in Figure 1c illustrates these dynamics: a system with 50% annual growth ( γ = 1.5 ) operates reliably for approximately 4 years before crossing the critical threshold, after which false alerts become statistically inevitable.

3.2.4. Quantitative Examples

Example 1
(Border Security System). Consider a border security system with initial attributes k_0 = 100, annual growth rate γ = 1.5 (50%), match probability p = 0.01, and threshold m = 5. The critical time is
T* = log(5/(100 × 0.01)) / log 1.5 = log 5 / log 1.5 ≈ 4.0 years.
The lifetime sensitivity reveals fundamental constraints. Increasing m from 3 to 10,
T*(m = 3) = log(3/(100 × 0.01)) / log 1.5 = log 3 / log 1.5 ≈ 2.7 years,  T*(m = 10) = log(10/(100 × 0.01)) / log 1.5 = log 10 / log 1.5 ≈ 5.7 years.
Increasing the threshold from 3 to 10 extends lifetime from 2.7 to only 5.7 years (logarithmic benefit). In contrast, doubling γ from 1.5 to 2.0,
T*(m = 5, γ = 2.0) = log 5/log 2 ≈ 2.3 years,
cuts lifetime nearly in half (from 4.0 to 2.3 years). Growth rate dominates system longevity; threshold adjustments provide minimal protection.
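The lifetime arithmetic of Example 1 can be reproduced directly. The following short sketch implements the paper's T* formula with the example's parameters; it is illustrative only:

```python
import math

def critical_time(m, k0, p, gamma):
    """Operational lifetime T* = log(m / (k0 * p)) / log(gamma):
    the time at which lambda(t) = p * k0 * gamma**t reaches the
    alert threshold m."""
    return math.log(m / (k0 * p)) / math.log(gamma)

# Border security example: k0 = 100, p = 0.01, 50% annual growth
print(critical_time(5, 100, 0.01, 1.5))    # ~4.0 years
print(critical_time(3, 100, 0.01, 1.5))    # ~2.7 years (threshold m = 3)
print(critical_time(10, 100, 0.01, 1.5))   # ~5.7 years (threshold m = 10)
print(critical_time(5, 100, 0.01, 2.0))    # ~2.3 years (doubling gamma)
```

The logarithmic threshold benefit and the 1/log γ growth-rate penalty are both visible in the four printed lifetimes.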

3.2.5. Practical Implications

  • Predictable obsolescence: Every system has a calculable expiration date T* ≈ (1/log γ) log(m/(k_0 p)). Monitoring λ(t)/m provides early warning.
  • Growth rate dominates: Lifetime scales as 1 / log γ ; doubling γ halves operational lifetime. Threshold adjustments provide only logarithmic benefit.
Remark 6
(Non-Exponential Data Growth). The analysis above focuses on exponential growth, but alternative data growth models lead to qualitatively different system lifetimes. For simplicity, we identify the onset of system unreliability with the heuristic condition λ(t) = p·k(t) = m, which corresponds to the point at which false alert probabilities increase sharply. We assume m > p·k_0 so that this regime is reached, if at all, at some T* > 0.
  • Polynomial growth k(t) = k_0 + γt^α with α ≥ 1: Solving p(k_0 + γT*^α) ≈ m gives
    T* ≈ ((m/p − k_0)/γ)^{1/α},
    so T* ∝ m^{1/α}, which grows linearly when α = 1 and sublinearly when α > 1. Larger values of α correspond to faster growth of k(t), leading to shorter system lifetimes. Moreover, because 1/α decreases with α, increases in m yield progressively smaller gains in T*, further reducing threshold protection. Nonetheless, polynomial growth still provides significantly more protection than exponential growth, where T* increases only logarithmically with m.
  • Logarithmic growth k(t) = k_0 + β log(1 + γt): Here
    T* ≈ (exp((m/p − k_0)/β) − 1)/γ,
    so the time required for λ(t) to reach m grows exponentially in the threshold m. Because k(t) itself grows only logarithmically in t, this time can be extremely large, and practical data collection may saturate before a critical regime is ever reached.
  • Bounded growth k(t) ≤ k_max: If the maximal attainable rate λ_max = p·k_max satisfies λ_max < m, the system never enters the high-false-alert regime and remains below the critical level for all time.
Exponential growth is the critical case because it eventually exceeds any subexponential threshold-adjustment strategy: if λ ( t ) grows exponentially while m ( t ) increases subexponentially, then λ ( t ) > m ( t ) for all sufficiently large t. Empirical studies suggest that historical data collection capacity has grown approximately exponentially [30], making this the scenario of greatest relevance for policy analysis.
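The qualitative gap between the growth regimes can be made concrete by numerically inverting p·k(T*) = m under each model. The growth parameters below (rate 50 for the polynomial case, β = 100 and γ = 1 for the logarithmic case) are illustrative choices introduced here, not values from the text:

```python
import math

k0, p, m = 100, 0.01, 5
target_k = m / p  # attribute count at which lambda(t) = p*k(t) reaches m

# Exponential: k(t) = k0 * 1.5**t
T_exp = math.log(target_k / k0) / math.log(1.5)

# Polynomial with alpha = 1: k(t) = k0 + 50*t (illustrative rate)
T_poly = (target_k - k0) / 50

# Logarithmic: k(t) = k0 + 100*log(1 + t) (illustrative beta = 100, gamma = 1)
T_log = math.exp((target_k - k0) / 100) - 1

print(T_exp, T_poly, T_log)  # ~4.0 vs 8.0 vs ~53.6 time units
```

Even with growth parameters chosen so that all three models start from the same k_0, the exponential system fails first and the logarithmic system survives an order of magnitude longer.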
Key Takeaway. 
Under exponential data growth, every screening system has a finite, calculable lifetime T* ≈ (1/log γ) log(m/(k_0 p)) determined by when λ(t) reaches the threshold m. Threshold adjustments provide only logarithmic protection against exponential growth; the growth rate γ dominates system longevity, making temporal failure inevitable rather than merely possible.

3.3. Heterogeneous Population Structure

Not all individuals have equal representation in surveillance databases. Socioeconomic, geographic, and demographic factors lead to differential exposure rates [5,33].

3.3.1. Multi-Group Model

Partition the population into G groups such that group g has n_g individuals, with ∑_{g=1}^{G} n_g = n, and each individual in group g has a per-attribute match probability p_g. Let λ_g = k·p_g and define the per-group false alert probability:
q_g = Pr(Poisson(λ_g) ≥ m).
Proposition 2
(Heterogeneous System Risk). The system-level false alert probability is
Pr(false alert) = 1 − ∏_{g=1}^{G} (1 − q_g)^{n_g} ≈ 1 − exp(−∑_{g=1}^{G} n_g q_g).
Proof. 
The event “individual i in group g generates a false alert” is independent across individuals. The probability that no one in group g alerts is (1 − q_g)^{n_g}. Since groups are disjoint, the probability that no one alerts is the product over groups. The approximation follows from (1 − x)^n ≈ e^{−nx} for small x. □

3.3.2. Dominance and Disparity

Theorem 3
(Group Dominance Effect: Algebraic Decomposition). Let g * = arg max g { n g q g } be the group contributing the most to system risk. Then,
Pr ( false alert ) 1 e n g * q g * + O g g * n g q g e n g * q g * .
If n_{g*} q_{g*} ≫ ∑_{g≠g*} n_g q_g, then group g* dominates system behavior. This is an algebraic decomposition showing how system-level risk concentrates in the highest-exposure group.
Proof. 
From Proposition 2,
1 − exp(−∑_{g=1}^{G} n_g q_g) = 1 − exp(−n_{g*} q_{g*}) · exp(−∑_{g≠g*} n_g q_g).
Using e^{−x} ≈ 1 − x for small x,
1 − exp(−n_{g*} q_{g*}) · exp(−∑_{g≠g*} n_g q_g) ≈ 1 − e^{−n_{g*} q_{g*}} (1 − ∑_{g≠g*} n_g q_g) = 1 − e^{−n_{g*} q_{g*}} + e^{−n_{g*} q_{g*}} ∑_{g≠g*} n_g q_g.
The main term is 1 − e^{−n_{g*} q_{g*}}, representing the contribution from group g*. The correction term is of order ∑_{g≠g*} n_g q_g · e^{−n_{g*} q_{g*}} and becomes negligible whenever n_{g*} q_{g*} ≫ ∑_{g≠g*} n_g q_g. □

3.3.3. Numerical Example

Example 2
(Geographic Disparities). Consider a city with two neighborhoods:
  • Neighborhood A (low surveillance): n A = 100,000 , p A = 0.005
  • Neighborhood B (high surveillance): n B = 100,000 , p B = 0.02
With k = 100 attributes and threshold m = 3 ,
λ_A = 100 · 0.005 = 0.5, q_A = ∑_{j≥3} e^{−0.5} 0.5^j / j! ≈ 0.014, λ_B = 100 · 0.02 = 2, q_B = ∑_{j≥3} e^{−2} 2^j / j! ≈ 0.323.
The expected number of false alerts is
n_A q_A ≈ 100,000 · 0.014 = 1400, n_B q_B ≈ 100,000 · 0.323 = 32,300.
Despite equal population sizes, Neighborhood B experiences approximately 23 times more false alerts. This exponential disparity, visualized in Figure 1d, demonstrates how differential exposure rates create fundamentally unequal outcomes that cannot be remedied through threshold adjustments alone. Moreover, by Theorem 3, since n_B q_B ≫ n_A q_A, the system-level false alert probability is dominated by Neighborhood B:
Pr(system false alert) ≈ 1 − e^{−32,300} ≈ 1.
The system is essentially guaranteed to produce false alerts, driven almost entirely by the heavily surveilled neighborhood. This illustrates how heterogeneous exposure not only creates disparate individual-level burdens but also determines aggregate system reliability.
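The figures in Example 2 follow directly from the Poisson tail and can be checked in a few lines:

```python
import math

def poisson_tail(lam, m):
    """Pr(Poisson(lam) >= m)."""
    return 1.0 - sum(math.exp(-lam) * lam**j / math.factorial(j)
                     for j in range(m))

k, m = 100, 3
q_A = poisson_tail(k * 0.005, m)     # lambda_A = 0.5 -> ~0.014
q_B = poisson_tail(k * 0.02, m)      # lambda_B = 2.0 -> ~0.323
alerts_A = 100_000 * q_A
alerts_B = 100_000 * q_B
print(round(alerts_A), round(alerts_B), round(alerts_B / alerts_A, 1))
# → 1439 32332 22.5
```

The unrounded tail probabilities give a disparity ratio of about 22.5, consistent with the roughly 23-fold figure quoted above.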
In the UCI Adult Census data, increasing the threshold produces the predicted divergence in group contributions: at m = 8 , a 2% subgroup generates 17% of false alerts, while the lowest-exposure group contributes none. See Appendix C.

3.3.4. Exposure Amplification Through Poisson Tails

Proposition 3
(Exposure Amplification Through Poisson Tails). Consider two groups with equal population sizes ( n 1 = n 2 = n / 2 ) but different per-attribute match probabilities p 1 < p 2 . Let λ i = k p i and suppose both groups are screened with a common threshold m. Then, the following hold:
  • If λ_1 < m ≤ λ_2, the disparity in expected false alerts is exponential:
    n_2 q_2 / (n_1 q_1) = q_2 / q_1 ≳ exp(c · m)
    for some constant c > 0 depending on λ_1, for all sufficiently large m. The disparity grows super-exponentially because q_1 decays exponentially while q_2 remains bounded away from zero.
  • The fraction of false alerts attributable to Group 2 approaches 1 as the exposure gap grows:
    n_2 q_2 / (n_1 q_1 + n_2 q_2) → 1 as λ_2 − m increases.
  • For fixed threshold m and exposure ratio α = p_2/p_1 > 1, as k increases, Group 2 enters the critical regime (λ_2 ≥ m) before Group 1, creating a temporal window of maximum disparity.
Proof sketch. 
(1) For λ_2 ≥ m, we have q_2 ≥ Pr(Poisson(m) ≥ m) ≥ 1/2 − O(m^{−1/2}) by normal approximation, while q_1 ≤ e^{−λ_1 D(m/λ_1 − 1)} is exponentially small (Lemma 1). The ratio grows super-exponentially. (2) Since q_2/q_1 → ∞, we have q_2/(q_1 + q_2) → 1. (3) Group 2 reaches λ_2 = m at k = m/p_2 < m/p_1, so disparity is maximal for k ∈ (m/p_2, m/p_1). Full details appear in Appendix A.3. □
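The amplification in part (1) is visible even at small scales. Using the exposure rates of Example 2 (λ_1 = 0.5, λ_2 = 2), the tail ratio grows rapidly as the common threshold m rises:

```python
import math

def poisson_tail(lam, m):
    """Pr(Poisson(lam) >= m)."""
    return 1.0 - sum(math.exp(-lam) * lam**j / math.factorial(j)
                     for j in range(m))

lam1, lam2 = 0.5, 2.0
ratios = {m: poisson_tail(lam2, m) / poisson_tail(lam1, m)
          for m in range(2, 8)}
print(ratios)  # monotone growth: ~7x at m = 2, exceeding 1000x by m = 6
```

A fourfold exposure gap thus produces a three-order-of-magnitude false alert disparity at thresholds only slightly above the low-exposure group's mean.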
Remark 7
(Within-Group Individual Heterogeneity). Even within a demographic group g, individuals may experience different exposure levels due to occupation, neighborhood, behavioral patterns, or daily routines, leading to person-specific match probabilities p_g(i). A convenient model treats p_g(i) as random with mean p̄_g, for example p_g(i) ~ Beta(α_g, β_g) with E[p_g(i)] = p̄_g. This induces a mixture of Poisson distributions for match counts whose variance exceeds the Poisson variance of the homogeneous case (classical overdispersion).
The qualitative phenomena we describe persist under within-group heterogeneity. As long as the group-level means p ¯ g differ, the higher-exposure group reaches false alert thresholds earlier on average. Moreover, within-group heterogeneity may actually increase disparity: overdispersion inflates upper-tail probabilities, making heterogeneous groups more likely to produce false alerts than homogeneous groups with the same mean exposure. A full mixture analysis is beyond our present scope, but the structural exposure–disparity mechanism is robust to this generalization.
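The tail-inflation claim can be illustrated with an even simpler mixture than the Beta model: a two-point mixture (half the group at p̄ − δ, half at p̄ + δ, an illustrative stand-in introduced here) has the same mean exposure as a homogeneous group but a strictly heavier upper tail:

```python
import math

def poisson_tail(lam, m):
    """Pr(Poisson(lam) >= m)."""
    return 1.0 - sum(math.exp(-lam) * lam**j / math.factorial(j)
                     for j in range(m))

k, m = 100, 6
p_bar, delta = 0.02, 0.01            # mean exposure 0.02, spread +/- 0.01

homogeneous = poisson_tail(k * p_bar, m)
mixed = 0.5 * (poisson_tail(k * (p_bar - delta), m)
               + poisson_tail(k * (p_bar + delta), m))
print(homogeneous, mixed)  # the mixture tail exceeds the homogeneous tail
```

Because the Poisson tail is convex in λ above the mean, spreading exposure while holding the mean fixed can only raise the group's false alert probability, which is the overdispersion effect described above.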
Key Takeaway. 
Differential surveillance exposure creates exponential disparities in false alert rates through Poisson tail behavior. Because this mechanism arises from data collection intensity rather than classifier design, neither threshold adjustment nor standard algorithmic fairness interventions can eliminate it. The disparity persists under within-group heterogeneity and may even intensify due to overdispersion. Achieving genuine outcome parity requires equalizing surveillance intensity itself.

3.4. Effective Dimensionality Under Correlation

Real surveillance data exhibit spatial and temporal dependencies that fundamentally alter tail probabilities. While correlation does not change the expected number of matches (E[∑_i X_i] = kp regardless of dependence), it induces overdispersion that inflates tail probabilities beyond the independent Poisson approximation.
The key insight is that positive correlation reduces the effective degrees of freedom. The standard design effect methodology parameterizes this through an effective sample size k eff satisfying Var ( Y ) = k eff · p ( 1 p ) . For spatial correlation with exponential decay over area A with correlation length ξ ,
k_eff ≈ A/(2πξ²).
For temporal correlation with correlation time τ ,
k_eff ≈ k/(2τ).
This modifies the critical population scale. The exponent λ D ( c 1 ) is reduced by the factor k eff / k , yielding
n_crit^corr ≍ √λ · exp((k_eff/k) · λD(c − 1)).
The practical impact is severe. City-scale location monitoring with k = 10,000 cells and correlation length ξ = 500 m yields k_eff ≈ 64, reducing critical populations by over two orders of magnitude. Year-long daily observations (k = 365) with τ = 30 days yield k_eff ≈ 6, making reliable surveillance of even small groups challenging.
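Both reductions can be sketched in a few lines. The 100 m cell side is an assumption introduced here to give the 10,000-cell grid a concrete area; the text specifies only the cell count:

```python
import math

# Spatial: k = 10,000 grid cells, assumed 100 m x 100 m each (hypothetical)
A = 10_000 * 100.0**2                  # monitored area, m^2
xi = 500.0                             # correlation length, m
k_eff_spatial = A / (2 * math.pi * xi**2)

# Temporal: k = 365 daily observations, correlation time tau = 30 days
k_eff_temporal = 365 / (2 * 30)

print(round(k_eff_spatial), round(k_eff_temporal, 1))  # → 64 6.1
```

Under these assumptions, 10,000 nominal cells carry the statistical weight of only about 64 independent ones, and a year of daily observations the weight of about six.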
Remark 8
(Scope: Heuristic Approximations). This analysis uses heuristic approximations rather than rigorous large-deviation theory. The effective degrees of freedom approach captures the dominant effect (variance inflation) but does not constitute a formal theorem. The qualitative conclusion that positive correlation accelerates false alert saturation is robust and well established in the literature on design effects, effective sample size, and correlation-adjusted multiple testing. Full technical details, derivations, and worked examples appear in Appendix B.

4. Discussion

4.1. Bayesian Posterior Reliability and the Base-Rate Trap

The preceding analysis focused on frequentist system reliability: the probability that at least one innocent individual is flagged. However, practitioners ultimately need the posterior probability that a flagged person is truly a target. This Bayesian perspective reveals an even more stringent constraint: in the sparse-target regime (where the expected number of true targets r is small relative to population size), posterior reliability degrades once nq becomes comparable to rs and collapses when nq ≫ rs. In this regime, flags become epistemically meaningless well before the frequentist transition at nq ≈ 1.

4.1.1. The Bayesian Framework and Positive Predictive Value

We introduce standard notation from diagnostic testing and forensic statistics [2]:
π ≡ Pr(target) (base rate), s ≡ Pr(flag | target) (sensitivity), q ≡ Pr(flag | innocent) (false positive rate).
By Bayes’ rule, the positive predictive value (PPV) is
Pr(target | flag) = sπ/(sπ + q(1 − π)).
In a screened population of size n with r true targets ( π = r / n ), the expected number of flagged individuals is
E[#flags] ≈ rs + nq.
Thus, the expected false discovery rate (FDR) is
FDR ≈ nq/(rs + nq), PPV = 1 − FDR ≈ rs/(rs + nq),
matching (15) exactly.
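The PPV identity is a one-line computation. The sketch below, with hypothetical n, r, and s, shows how PPV collapses as nq passes rs:

```python
def ppv(n, r, s, q):
    """Positive predictive value rs / (rs + nq)."""
    return (r * s) / (r * s + n * q)

n, r, s = 1_000_000, 10, 0.9          # hypothetical sparse-target scenario
for q in (1e-8, 1e-6, 1e-4):
    print(q, round(ppv(n, r, s, q), 4))
# PPV ~0.9989 at nq = 0.01, 0.9 at nq = 1, ~0.083 at nq = 100
```

A ten-thousand-fold change in the false positive rate moves the same system from near-certain flags to flags that are wrong more than nine times out of ten.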
Remark 9
(Sensitivity Dependence on Threshold). The analysis above treats sensitivity s as constant, but in practice s = Pr(flag | target) typically decreases as the threshold m increases: stricter criteria miss a larger fraction of true targets. A simple parametric model capturing this tradeoff is
s(m) = s_max · exp(−β(m − λ_signal)_+),
where s_max is sensitivity at low thresholds, λ_signal > λ = kp is the expected number of matching attributes for a true target, β > 0 controls the decay rate, and (x)_+ = max(x, 0). The exponential form reflects the common observation that match-score distributions exhibit approximately exponential tails, though any monotone decreasing form would yield the same qualitative conclusions.
Substituting (17) into the PPV formula (16) reveals competing effects as m increases:
  • False positives decrease: For innocent individuals, q(m) = Pr(Y ≥ m) with Y ~ Poisson(λ) and λ = kp, and Lemma 1 shows that q(m) decays very rapidly once m > λ (indeed, faster than any fixed-rate exponential in m).
  • True positives decrease: s(m) remains near s_max for m ≤ λ_signal but then decays exponentially for m > λ_signal.
These effects create an intermediate regime in which PPV may plateau or improve more slowly than the constant-s analysis suggests, particularly once m exceeds λ signal and losses in sensitivity offset some of the gains from reduced false positives. Under the model (17), however, Lemma 1 implies that q ( m ) eventually decays much faster than s ( m ) , so q ( m ) / s ( m ) 0 and therefore PPV ( m ) 1 as m . Any plateau or dip in PPV can therefore occur only over this intermediate range of thresholds, not asymptotically.
Because sensitivity is bounded above by s_max, the constant-s model used earlier provides an upper bound on achievable PPV for any given threshold: PPV(m) is increasing in s, and s(m) ≤ s_max for all m. Optimal threshold selection ultimately requires specifying both the innocent and target match distributions (equivalently, full ROC curve analysis [23]). Since the signal distribution is application-dependent, we do not pursue this direction here. Of course, driving m to extremely large values also drives the overall flag rate toward zero, so the PPV(m) → 1 limit is primarily of theoretical interest; in practice, the operationally relevant regime is the intermediate range of thresholds where nontrivial detection rates are maintained.

4.1.2. Posterior Reliability and Bayesian Critical Scales

In the large-deviation setting of Theorem 1, with threshold m = c λ and c > 1 , the false positive rate satisfies
q ≍ κ(λ) e^{−λD(c−1)}, κ(λ) = O(λ^{−1/2}),
up to subexponential prefactors. Combining this with (16) yields an explicit condition for maintaining actionable posterior probabilities.
Remark 10
(Sparse-Target Regime). The following proposition applies to the “needle in a haystack” setting, where r (the expected number of true targets) is fixed or grows slowly while population n increases. If instead r ∝ n (constant prevalence), then the ratio r/n is fixed, and the condition for actionable PPV reduces to a bound on q/s relative to α/(1 − α), independent of n. The sparse-target regime is the most challenging case for screening systems and is therefore the primary focus of this analysis.
Proposition 4
(Bayesian Critical Population for Actionable PPV). Fix a desired posterior level α ∈ (0, 1) (e.g., α = 0.9) and sensitivity s ∈ (0, 1]. Let r denote the expected number of true targets in the population (so the base rate is π = r/n). In the sparse-target regime where r is fixed or grows sublinearly with n, the condition
Pr(target | flag) ≥ α
is satisfied whenever
n ( 1 α ) r s α q .
Under the large-deviation scaling q ≍ κ(λ) e^{−λD(c−1)}, the Bayesian critical population size satisfies
n_crit^Bayes(α, s) ≍ ((1 − α)rs/α) · √λ · exp(λD(c−1)),
where ≍ hides subexponential factors absorbed into κ ( λ ) .
Proof. 
From (16), PPV ≥ α iff rs ≥ α(rs + nq). Rearranging gives rs(1 − α) ≥ αnq, which yields (18). Substituting the large-deviation scaling for q proves (19). □
When sensitivity varies with the threshold, as in Remark 9, the bound (19) should be interpreted as an upper bound on achievable Bayesian reliability, since s(m) ≤ s_max for all m.
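The bound in Proposition 4 is directly computable. A sketch with hypothetical parameters, checking that PPV equals α exactly at the critical population size:

```python
def n_crit_bayes(alpha, r, s, q):
    """Largest n for which PPV >= alpha: n <= (1 - alpha) r s / (alpha q)."""
    return (1 - alpha) * r * s / (alpha * q)

def ppv(n, r, s, q):
    """Positive predictive value rs / (rs + nq)."""
    return (r * s) / (r * s + n * q)

alpha, r, s, q = 0.9, 10, 0.8, 1e-6   # hypothetical parameters
n_star = n_crit_bayes(alpha, r, s, q)
print(n_star, ppv(n_star, r, s, q))   # ~888,889 people; PPV = alpha there
```

Beyond roughly 889,000 screened individuals, this hypothetical system can no longer deliver 90% posterior confidence in its flags, however well tuned its classifier.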
Remark 11
(The Bayesian Trap). Frequentist reliability deteriorates once nq ≳ 1, when the system is likely to produce at least one false alert. Bayesian actionability demands the stronger condition nq ≪ rs. When rs > 1, there is an intermediate regime where false alerts occur frequently but individual flags retain some evidential value. When rs < 1 (extremely sparse targets), posterior reliability collapses before the system becomes statistically unreliable. In all cases, if nq ≫ rs, posterior probabilities decay toward zero even when individual false positives remain rare.

4.1.3. Likelihood Ratios and Classical Fallacies

Analysts often cite tiny false positive rates q or large likelihood ratios
L = s/q,
and mistakenly infer that Pr ( target flag ) is therefore large. This is the classical prosecutor’s fallacy [2]. Bayes’ rule shows that
Pr(target | flag) / Pr(innocent | flag) = (s/q) · (π/(1 − π)) = L × prior odds.
When the base rate π = r/n is small, the prior odds can overwhelm any fixed likelihood ratio. Even extremely rare false positives (q ≪ 1) do not guarantee high PPV. When π ≪ q, most flagged individuals remain innocent despite individually low false positive rates.
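The fallacy is easy to demonstrate numerically: a likelihood ratio near 10^4 still yields a posterior below 1% at a one-in-a-million base rate. The sensitivity, false positive rate, and base rate below are hypothetical:

```python
s, q, pi = 0.99, 1e-4, 1e-6           # hypothetical sensitivity, FPR, base rate

L = s / q                             # likelihood ratio, ~9900
posterior_odds = L * pi / (1 - pi)    # Bayes' rule in odds form
posterior = posterior_odds / (1 + posterior_odds)
print(round(L), round(posterior, 4))  # → 9900 0.0098
```

An analyst quoting only L = 9900 would grossly overstate the evidence; the prior odds of 10^{−6} reduce the posterior to under one percent.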

4.1.4. Resolving the DNA Database Controversy

Forensic statisticians have long debated the evidential value of DNA “cold hits.” The authors of [1] argued that searching databases weakens evidence by inflating coincidental match probabilities; refs. [2,34] countered that likelihood ratios preserve evidential weight. Our framework resolves this apparent contradiction by identifying the relevant asymptotic regime.
Using (20), the posterior odds after a database match are
Pr(guilty | match) / Pr(innocent | match) = (s/q) · ((r/n)/(1 − r/n)).
The DNA regime. Standard STR genotype profiling yields match probabilities on the order of q ~ 10^{−12} or smaller. Even with databases of size n ~ 10^6, the product nq ~ 10^{−6} ≪ 1 remains far below the critical scale where coincidental matches become probable.
In this extreme regime, likelihood ratios dominate the posterior odds. The prior odds r/n may be small (perhaps 10^{−6} if we have one suspect among a million), but the likelihood ratio s/q ~ 10^{12} is so enormous that posterior probabilities remain overwhelmingly high. This validates Balding’s argument: database size does not meaningfully dilute evidential weight when nq ≪ 1.
The surveillance regime. Multi-attribute surveillance systems operate in a fundamentally different regime. With λ = 5 and threshold multiplier c = 3 (i.e., m = 15), we have q ~ 10^{−4} (Section 3.1). For populations of n ~ 10^6, the product nq ~ 100 ≫ 1 places the system far above the critical scale.
In this regime, prior odds collapse faster than likelihood ratios can compensate. Even if the likelihood ratio s/q is substantial, the prior odds r/n are so unfavorable that posterior probabilities remain low. Stockmarr’s caution applies: match evidence loses evidential weight as search populations grow.
Resolution. The critical scale n_crit ≍ exp(λD) separates these regimes. Stockmarr and Balding are both correct in their respective contexts: DNA forensics operates where nq ≪ 1 (likelihood-driven), while multi-attribute surveillance operates where nq ≫ 1 (base-rate-dominated). The apparent contradiction dissolves once we recognize this regime distinction.
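The regime split can be summarized in a few lines. The sensitivities and target counts below are hypothetical, with the q values at the orders of magnitude cited above:

```python
def ppv(n, r, s, q):
    """Positive predictive value rs / (rs + nq)."""
    return (r * s) / (r * s + n * q)

# DNA regime: q ~ 1e-12, million-profile database, one true source assumed
nq_dna = 1_000_000 * 1e-12
# Surveillance regime: q ~ 1e-4, million residents, 10 targets, s = 0.9
nq_srv = 1_000_000 * 1e-4

print(nq_dna, ppv(1_000_000, 1, 1.0, 1e-12))   # nq ~ 1e-6 -> PPV ~ 0.999999
print(nq_srv, ppv(1_000_000, 10, 0.9, 1e-4))   # nq ~ 100  -> PPV ~ 0.08
```

The same Bayes formula, applied at the two cited orders of magnitude for q, reproduces both Balding's confidence and Stockmarr's caution.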

4.1.5. Key Takeaways

  • Bayesian scaling mirrors frequentist scaling: Bayesian actionability inherits the exponential factor e^{λD(c−1)} but includes the additional multiplicative factor (1 − α)rs/α; see (19).
  • Posterior collapse can precede frequentist failure: In the sparse-target regime, posterior reliability collapses once nq approaches rs, often well before the frequentist transition at nq ≈ 1.
  • Exponential data growth overwhelms adaptation: Reducing q by a factor of β requires increasing k by only log(1/β)/(pD), while real-world data growth is exponential: k(t) = k_0 γ^t (Section 3.2). Thus posterior collapse is temporally inevitable.
  • Epistemic saturation: As n grows, base rates π = r/n shrink. Even rare false positives become dominated by prior odds, causing Pr(target | flag) to decay toward zero.
  • Resolution of the DNA debate: Stockmarr and Balding are correct in different regimes: Balding for nq ≪ 1 (DNA) and Stockmarr for nq ≫ 1 (large-scale attribute screening).
Remark 12
(Connections to Classical Statistical Fallacies). This section unifies the base-rate fallacy, the prosecutor’s fallacy, false discovery rate control [15], and the PPV problem in medical screening [35]. Conceptually, these phenomena are identical: all reflect Bayes’ rule under low prevalence and imperfect specificity. The widely cited argument that “most published research findings are false” [36] is the same FDR/PPV problem in another domain.

4.2. Fairness Implications

The Group Dominance Effect (Theorem 3) has important implications for fairness in surveillance systems. When different population groups experience differential surveillance exposure, small differences in exposure rates create exponential disparities in outcomes. Proposition 3 shows that exposure ratios of 2–4 times generate false alert disparities exceeding 20 times near critical thresholds. This exponential amplification, which arises from Poisson tail behavior, means that even modest differences in surveillance intensity produce severe outcome inequalities.
Crucially, these disparities cannot be eliminated through threshold adjustment. Group-specific thresholds merely encode the underlying exposure inequality in a different form. Equalizing outcomes requires equalizing data collection intensity at the source, not algorithmic tuning. Moreover, since the high-exposure group drives system-level false alerts, aggregate reliability metrics obscure concentrated burdens on specific subpopulations, making demographic disaggregation essential for understanding actual system performance.
Remark 13
(Structural vs. Algorithmic Bias). Proposition 3 demonstrates that disparate outcomes arise from the probabilistic structure of screening systems, independent of algorithmic design choices. When different groups experience differential surveillance exposure rates (p_1 ≠ p_2), this mathematically guarantees unequal false positive rates (q_1 ≠ q_2) under any common threshold m, creating disproportionate false alert burdens through Poisson tail behavior.
The exponential amplification in part (1) is particularly striking: when both groups are screened using the same attribute set and threshold, small differences in exposure translate to exponential differences in false alert rates (Figure 1d).
The effect manifests temporally as different groups reach critical false alert rates at different times: Group 2 with exposure rate p_2 = 0.020 fails at k_2* = 350 attributes, while Group 1 with p_1 = 0.005 remains reliable until k_1* = 1400. While this fourfold difference in system lifetime simply reflects the fourfold difference in exposure rates (a linear relationship), the amplification becomes exponential when comparing simultaneous false alert rates: at any intermediate k ∈ (350, 1400), Group 2 experiences exponentially more false alerts than Group 1.
Connection to algorithmic fairness. These findings relate directly to classical impossibility theorems in the algorithmic fairness literature. Kleinberg et al. [37] and Chouldechova [19] show that equalizing false positive rates, false negative rates, and calibration is impossible when base rates differ. Our analysis identifies a complementary, and more fundamental, mechanism: surveillance exposure itself creates different effective base rates ( λ g = k p g ), guaranteeing unequal false alert burdens even before any classifier is applied.
Standard fairness interventions operate at the classifier level: equalized odds [18], demographic parity, and calibration attempt to constrain predictions. None can correct the structural disparity we identify, because it arises from data collection intensity ( p g ), not from how a classifier processes the collected data. Group-specific thresholds m g might equalize q 1 and q 2 , but only by encoding exposure inequality directly. Achieving parity requires assigning a higher threshold to the more surveilled group.
This perspective connects to “fairness through awareness” [38] and formalizes “structural bias” arguments from the critical algorithm studies literature [22,33,39]. Disparities can be intrinsic to systems built on heterogeneous data collection, rather than artifacts of biased algorithms or training data.
Remark 14
(Policy Implications). These results suggest that surveillance system audits should perform the following:
1. 
Measure exposure rates ( p g ) across demographic and geographic groups, not just aggregate false alert rates.
2. 
Recognize that system reliability is bounded by the worst-performing group (Theorem 3), making demographic disaggregation essential.
3. 
Account for temporal dynamics: groups with higher exposure fail first as data accumulates, creating windows of maximum disparity.
4. 
Acknowledge that threshold adjustments cannot eliminate disparities arising from differential exposure; only equalizing p g across groups can achieve fairness.

4.3. Limitations and Future Directions

This analysis operates under several simplifying assumptions that define its scope and suggest natural directions for future work.
Independence across individuals. Our core results (Theorems 1 and 2) assume statistical independence of match counts across individuals (Remark 2). Common-mode events (mass gatherings, natural disasters, viral social media content, and coordinated activities) introduce positive dependence that would increase false alert rates beyond our bounds. Positive dependence inflates upper-tail probabilities and therefore worsens system reliability relative to the independent case. Extensions via positively associated random variables or Chen–Stein methods [26] could quantify these effects, but our independence-based analysis provides a lower bound on false alert rates.
Binary attributes. We model attributes as binary indicators (match/no-match). Continuous features, count data, or multi-level categorical attributes would require different distributional assumptions and tail bounds. The qualitative insights about combinatorial explosion in high-dimensional spaces should persist, but quantitative thresholds would differ. Extensions to Gaussian or sub-Gaussian attributes would preserve the essential exponential tail behavior underlying our critical-scale results.
Fixed thresholds. Our analysis assumes detection thresholds m are fixed at deployment time. Adaptive systems that adjust thresholds based on observed alert rates or estimated base rates could potentially extend operational lifetimes. However, Theorem 2 suggests fundamental limits: under exponential data growth, even optimally adaptive thresholds would need to grow exponentially to maintain reliability, eventually exceeding meaningful detection capabilities.
Constant sensitivity. The Bayesian analysis (Section 4.1) initially treats sensitivity s = Pr ( flag target ) as constant. Remark 9 introduces a simple threshold-dependent model, but a complete analysis would require specifying both the target match distribution and the full signal distribution of true targets and performing ROC optimization [23]. This is inherently application-dependent and beyond our current scope.
Heuristic correlation treatment. The effective degrees of freedom approach (Section 3.4; Appendix B) captures variance inflation but does not constitute rigorous large-deviation analysis. Formal treatment would require specifying mixing conditions or dependency graph structures [26,29]. Our heuristic provides qualitative guidance rather than formal guarantees.
Lack of empirical validation. We have not validated our predictions against operational surveillance data, which is typically proprietary, classified, or subject to confidentiality restrictions. Instead, we use proxy datasets for illustrative rather than validating analyses (Appendix C); these provide qualitative checks but do not constitute validation in the intended deployment environment.
Static population structure. We assume a fixed population composition with stable group sizes n g and exposure rates p g . Dynamic populations with entry, exit, demographic shifts, and changing surveillance intensity would require stochastic population models. The temporal analysis (Section 3.2) addresses data growth but not population dynamics.
Formal versus heuristic results. Theorems 1, 2, and 3, along with Propositions 1 and 3, are formal results with complete proofs. The effective dimensionality analysis (Appendix B) and sensitivity threshold model (Remark 9) are heuristic approximations.
Broader applicability. While surveillance systems motivated this analysis, the mathematical framework applies to any domain where threshold rules screen large collections across many low-probability binary indicators. The critical population bounds (Theorem 1), temporal saturation dynamics (Theorem 2), and group-level disparity amplification (Theorem 3) characterize generic properties of high-dimensional threshold detection, independent of the specific application. Natural extensions include network intrusion detection, manufacturing quality control, financial fraud screening, medical diagnostic panels, and environmental monitoring systems. The binary indicator assumption could be relaxed to accommodate hybrid frameworks combining discrete and continuous variables [40], though the essential combinatorial explosion in threshold-based screening would persist. We developed the theory through the surveillance lens because it offered the clearest exposition of the societal stakes, but the probabilistic limits derived here constrain any system that aggregates rare coincidences across high-dimensional attribute spaces.
Despite these limitations, the core mathematical structure (exponential scaling of critical populations, finite system lifetimes under data growth, and structural amplification of exposure disparities) should prove robust across modeling variations. Modeling refinements would shift numerical thresholds but not the qualitative scaling laws, which arise from intrinsic high-dimensional coincidence phenomena.

5. Conclusions

We have established sharp probabilistic limits, under standard rare event and independence (or effective independence) assumptions, for large-scale screening systems. The critical population size beyond which false alerts become inevitable scales as
n_crit ≍ √λ · exp(λD(c − 1)),
governed by large-deviation rate functions that cannot be circumvented through algorithmic refinement within these models. When data volume grows exponentially, k(t) = k_0 γ^t, any screening system has a calculable operational lifetime
T* ≈ (log m − log(k_0 p))/log γ,
typically measured in years rather than decades. Threshold adjustments provide only logarithmic protection against exponential growth, making temporal failure inevitable rather than merely possible.
The Bayesian analysis reveals an even more stringent constraint: posterior probabilities collapse when nq ≫ rs, rendering flags epistemically meaningless well before frequentist reliability fails. This asymptotic framing clarifies the long-standing DNA database controversy: Stockmarr’s caution and Balding’s confidence apply in distinct regimes, separated by the critical scale n ≍ exp(λD).
Differential surveillance exposure creates exponential outcome disparities whenever common thresholds are applied across groups with heterogeneous match probabilities p g . The mathematics guarantees disproportionate false alert burdens through Poisson tail behavior. This is not an artifact of algorithmic bias but a structural inevitability arising from heterogeneous data collection.
These results impose non-negotiable constraints on system design. Operational surveillance systems require (1) explicit expiration dates calculated from data growth rates; (2) demographic exposure audits measuring p g across population groups; (3) capacity constraints limiting investigable flags; and (4) recognition that system-wide reliability is dominated by the subpopulation with the highest effective match probability. The mathematics is unforgiving: more data does not guarantee better decisions, and exponential growth ensures finite operational windows.
The policy implication is direct: screening systems operating near or beyond their critical parameters will generate false alerts regardless of implementation quality. Designers must either accept these mathematical limits or fundamentally restructure surveillance architectures to avoid accumulating correlated data over time. Threshold refinement and algorithmic optimization cannot solve problems rooted in probabilistic inevitability.

Funding

I acknowledge the support of the Natural Sciences and Engineering Research Council of Canada (NSERC) [funding reference number RGPIN-2019-04085].

Data Availability Statement

The illustrative examples in Appendix C use two publicly available datasets: the UCI Adult Census dataset (https://archive.ics.uci.edu/dataset/2/adult (accessed on 2 December 2025)) and the City of Chicago Crime dataset (https://data.cityofchicago.org/Public-Safety/Crimes-One-year-prior-to-present/x2n5-8w5q (accessed on 2 December 2025)). No new data were generated.

Conflicts of Interest

The author declares no conflicts of interest.

Nomenclature

Symbol     Description
n          Population size (number of individuals screened)
k          Total number of binary attributes monitored (before dependence)
p          Marginal per-attribute match probability for an innocent individual
λ          Expected false matches under independence; λ = kp
m          Detection threshold (individual i is flagged if X_i ≥ m)
c          Threshold multiplier; typically m = cλ for c > 1
q          Per-person false alert probability; q = Pr(X ≥ m | innocent)
D(α − 1)   Poisson rate function; D(α − 1) = α log α − α + 1
n_crit     Population size where expected false alerts reach order one (e.g., n_crit · q = 1)
s          Sensitivity (true positive rate); s = Pr(flag | target)
r          Expected number of true targets in the population
π          Base rate (prevalence); π = r/n
p_g        Marginal match probability for group g
n_g        Size of group g
λ_g        Expected false matches for group g; λ_g = k p_g
q_g        Per-person false alert probability for group g
k_eff      Effective number of independent attributes accounting for correlation
γ          Data growth factor per unit time
T*         System lifetime; time at which λ(t) reaches the threshold m under exponential data growth
X_i        Number of matching attributes for individual i
α          Posterior probability threshold in Bayesian analysis
ξ          Spatial correlation length
τ          Temporal correlation time

Appendix A. Detailed Proofs

This appendix provides complete proofs with all intermediate steps for the main theoretical results.

Appendix A.1. Proof of Theorem 1 (Critical Population Scale)

Recall that $Y \sim \mathrm{Poisson}(\lambda)$, $\lambda = kp$, and $m = \lceil c\lambda \rceil$ with $c > 1$. We write $q = \Pr(Y \ge m)$ for the Poisson upper tail. Replacing $\lceil c\lambda \rceil$ by $c\lambda$ (ignoring integer rounding) affects only multiplicative constants in the bounds and does not change the exponential rate.
Upper bound on q. Lemma 1 (Chernoff bound for Poisson tails) applied with $m = c\lambda$ yields
$$q = \Pr(Y \ge m) \le \exp(-\lambda D(c-1)),$$
where the rate function is
$$D(c-1) = c \log c - c + 1.$$
Lower bound on q. We obtain a lower bound on $\Pr(Y = m)$ using Robbins' refinement of Stirling's formula [41]:
$$m! < \sqrt{2\pi m}\,\left(\frac{m}{e}\right)^{m} e^{1/(12m)}.$$
Hence
$$\Pr(Y = m) = \frac{e^{-\lambda}\lambda^{m}}{m!} \ge \frac{e^{-\lambda}\lambda^{m}}{\sqrt{2\pi m}\,(m/e)^{m}\, e^{1/(12m)}} = \frac{1}{\sqrt{2\pi m}} \exp\!\left(-\lambda + m\log\lambda - m\log m + m - \frac{1}{12m}\right).$$
Now set $m = c\lambda$ and simplify the exponent:
$$\begin{aligned}
-\lambda + m\log\lambda - m\log m + m
&= -\lambda + c\lambda\log\lambda - c\lambda\log(c\lambda) + c\lambda \\
&= -\lambda + c\lambda\log\lambda - c\lambda\log c - c\lambda\log\lambda + c\lambda \\
&= -\lambda - c\lambda\log c + c\lambda \\
&= -\lambda\,(1 + c\log c - c) = -\lambda\,(c\log c - c + 1) = -\lambda D(c-1).
\end{aligned}$$
Substituting this back, and using $m = c\lambda$, we obtain
$$\Pr(Y = m) \ge \frac{1}{\sqrt{2\pi c\lambda}} \exp\!\left(-\lambda D(c-1) - \frac{1}{12c\lambda}\right).$$
Since $q = \Pr(Y \ge m) \ge \Pr(Y = m)$, the same lower bound applies to $q$, establishing the two-sided tail estimate in (2):
$$\frac{1}{\sqrt{2\pi c\lambda}} \exp\!\left(-\lambda D(c-1) - \frac{1}{12c\lambda}\right) \le q \le \exp(-\lambda D(c-1)).$$
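These two-sided bounds can be checked numerically against the exact Poisson tail. The sketch below uses $\lambda = 50$ and $c = 1.5$ (the parameters of Example A1) purely as an illustration:

```python
import math

def poisson_tail(lam: float, m: int) -> float:
    """Exact upper tail P(Y >= m) for Y ~ Poisson(lam), summed in log space."""
    return sum(math.exp(-lam + j * math.log(lam) - math.lgamma(j + 1))
               for j in range(m, m + int(20 * math.sqrt(lam)) + 50))

def rate_D(c: float) -> float:
    """Poisson rate function D(c - 1) = c log c - c + 1."""
    return c * math.log(c) - c + 1.0

lam, c = 50.0, 1.5
m = int(c * lam)                              # threshold m = 75
q = poisson_tail(lam, m)
upper = math.exp(-lam * rate_D(c))
lower = math.exp(-lam * rate_D(c) - 1.0 / (12 * c * lam)) / math.sqrt(2 * math.pi * c * lam)
print(lower, q, upper)                        # lower <= q <= upper
```

The exact tail sits between the Stirling-type lower bound and the Chernoff upper bound, an order of magnitude from each, consistent with the $O(\log\lambda)$ slack in the exponent.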
System-level bounds. Independence across $n$ screened individuals implies
$$\Pr(\text{no false alert}) = (1-q)^{n}, \qquad \Pr(\text{false alert}) = 1 - (1-q)^{n}.$$
To relate $(1-q)^{n}$ to exponentials, we use standard inequalities for $x \in (0,1)$:
$$\ln(1-x) \le -x \quad\text{and}\quad \ln(1-x) \ge -\frac{x}{1-x}.$$
The first inequality follows from the concavity of $\ln$ and the tangent-line bound at $x = 0$; the second can be verified by comparing power-series expansions or by bounding the remainder term in the Taylor series. Exponentiating and raising to the $n$th power gives
$$e^{-nx/(1-x)} \le (1-x)^{n} \le e^{-nx},$$
and hence, with $x = q$,
$$e^{-nq/(1-q)} \le (1-q)^{n} \le e^{-nq}.$$
Therefore,
$$1 - e^{-nq} \le \Pr(\text{false alert}) \le 1 - e^{-nq/(1-q)},$$
which are precisely the system-level bounds (3) once the Poisson tail bounds for q are substituted.
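The elementary sandwich used above is easy to verify numerically; the grid of $(n, q)$ values below is arbitrary:

```python
import math

# Check 1 - exp(-n q) <= 1 - (1-q)^n <= 1 - exp(-n q / (1-q))
# on an arbitrary grid of population sizes and per-person probabilities.
for n in (10, 1_000, 100_000):
    for q in (1e-6, 1e-3, 0.1):
        p_alert = 1.0 - (1.0 - q) ** n
        lo = 1.0 - math.exp(-n * q)
        hi = 1.0 - math.exp(-n * q / (1.0 - q))
        assert lo - 1e-12 <= p_alert <= hi + 1e-12, (n, q)
print("sandwich holds on all grid points")
```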
Explicit form of system bounds. Substituting the Poisson tail bounds into the system-level inequalities yields the explicit form (3). For the lower bound on $\Pr(\text{false alert})$, we use $1 - e^{-nq}$ together with the lower bound on $q$:
$$\Pr(\text{false alert}) \ge 1 - \exp\!\left(-\frac{n}{\sqrt{2\pi c\lambda}}\, e^{-\lambda D(c-1) - 1/(12c\lambda)}\right).$$
For the upper bound, we use $1 - e^{-nq/(1-q)}$ with the upper bound $q \le e^{-\lambda D(c-1)}$. Since $q/(1-q)$ is increasing in $q$ for $q \in (0,1)$,
$$\frac{q}{1-q} \le \frac{e^{-\lambda D(c-1)}}{1 - e^{-\lambda D(c-1)}},$$
giving
$$\Pr(\text{false alert}) \le 1 - \exp\!\left(-\frac{n\, e^{-\lambda D(c-1)}}{1 - e^{-\lambda D(c-1)}}\right).$$
Critical population scale. Intuitively, the system becomes unreliable once we expect on the order of one false alert per batch. Using the lower bound on $\Pr(Y = m)$ as a proxy for $q$, this corresponds to
$$n \Pr(Y = m) \approx \frac{n}{\sqrt{2\pi c\lambda}} \exp\!\left(-\lambda D(c-1) - \frac{1}{12c\lambda}\right) \approx 1.$$
Solving for $n$ gives
$$n_{\mathrm{crit}} \approx \sqrt{2\pi c\lambda}\, \exp\!\left(\lambda D(c-1) + \frac{1}{12c\lambda}\right).$$
For large $\lambda$, the correction term $1/(12c\lambda)$ is negligible, and we obtain the asymptotic form
$$n_{\mathrm{crit}} \sim \sqrt{\lambda}\, e^{\lambda D(c-1)}.$$
Since the factor $\sqrt{\lambda}$ is subexponential in $\lambda$, the dominant scaling is
$$n_{\mathrm{crit}} \asymp e^{\lambda D(c-1)} = e^{kp\,D(c-1)},$$
which is the critical population size stated in Theorem 1.
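The theorem's quantities can be evaluated for the scenario described in the abstract (k = 1000 attributes, p = 0.005, threshold of 15 matches, one million innocents). A minimal sketch:

```python
import math

def poisson_tail(lam: float, m: int) -> float:
    """Exact P(Y >= m) for Y ~ Poisson(lam)."""
    return 1.0 - sum(math.exp(-lam) * lam ** j / math.factorial(j) for j in range(m))

k, p, m, n = 1000, 0.005, 15, 1_000_000
lam = k * p                              # 5 expected coincidental matches
q = poisson_tail(lam, m)                 # per-person false alert probability
c = m / lam                              # threshold multiplier c = 3
D = c * math.log(c) - c + 1              # rate function D(c - 1)
n_crit = math.sqrt(lam) * math.exp(lam * D)
print(f"expected false alerts in city: {n * q:.0f}")   # ~226, as in the abstract
print(f"critical population n_crit:    {n_crit:.0f}")
```

The expected count of about 226 false alerts matches the abstract, and the critical population of roughly 1500 shows the million-person city is operating far beyond the reliable regime.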

Appendix A.2. Proof of Proposition 1 (Threshold Sharpness)

From Theorem 1, we have the two-sided bounds
$$\frac{1}{\sqrt{2\pi c\lambda}}\, e^{-\lambda D(c-1) - O(1/\lambda)} \le q \le e^{-\lambda D(c-1)},$$
which together imply that $\log q = -\lambda D(c-1) + O(\log\lambda)$ as $\lambda \to \infty$.
With $n = \sqrt{\lambda}\, e^{\alpha\lambda D(c-1)}$, we compute
$$\log(nq) = \log n + \log q = \tfrac{1}{2}\log\lambda + \alpha\lambda D(c-1) - \lambda D(c-1) + O(\log\lambda) = (\alpha - 1)\lambda D(c-1) + O(\log\lambda).$$
For $\alpha < 1$: the leading term $(\alpha - 1)\lambda D < 0$ dominates, so $nq \to 0$ exponentially fast.
For $\alpha > 1$: the leading term $(\alpha - 1)\lambda D > 0$ dominates, so $nq \to \infty$ exponentially fast.
The transition width $\Delta\alpha = O(1/(\lambda D))$ vanishes as $\lambda \to \infty$. The limiting behavior (5) follows from $\Pr(\text{false alert}) \approx 1 - e^{-nq}$.
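The sharpness of the transition can be seen by evaluating $nq$ exactly on both sides of $\alpha = 1$; the values of $\lambda$ and $c$ below are arbitrary:

```python
import math

def poisson_tail(lam: float, m: int) -> float:
    """P(Y >= m) for Y ~ Poisson(lam), summed in log space to avoid overflow."""
    return sum(math.exp(-lam + j * math.log(lam) - math.lgamma(j + 1))
               for j in range(m, m + int(20 * math.sqrt(lam)) + 50))

def nq(alpha: float, lam: float, c: float = 1.5) -> float:
    """Expected false alerts when n = sqrt(lam) * exp(alpha * lam * D(c-1))."""
    D = c * math.log(c) - c + 1.0
    n = math.sqrt(lam) * math.exp(alpha * lam * D)
    return n * poisson_tail(lam, int(c * lam))

for alpha in (0.9, 1.1):
    print(alpha, [nq(alpha, lam) for lam in (50.0, 200.0, 800.0)])
```

For $\alpha = 0.9$ the expected false alert count shrinks toward zero as $\lambda$ grows, while for $\alpha = 1.1$ it explodes, exactly the dichotomy proved above.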

Appendix A.3. Proof of Proposition 3 (Exposure Amplification)

Part (1): When $\lambda_2 \ge m$, monotonicity gives
$$q_2 = \Pr(\mathrm{Poisson}(\lambda_2) \ge m) \ge \Pr(\mathrm{Poisson}(m) \ge m).$$
For large $m$, the normal approximation gives $\Pr(\mathrm{Poisson}(m) \ge m) \ge \tfrac{1}{2} - O(m^{-1/2})$.
Meanwhile, $q_1 \le \exp(-\lambda_1 D(m/\lambda_1 - 1))$ by Lemma 1, which is exponentially small when $m > \lambda_1$. The ratio $q_2/q_1$ therefore grows at least exponentially: for any $c > 0$, there exists $M(c)$ such that $q_2/q_1 \ge e^{cm}$ for $m \ge M(c)$.
Part (2): Since $q_2/q_1 \to \infty$, we have $q_2/(q_1 + q_2) \to 1$.
Part (3): Group 2 reaches $\lambda_2 = m$ at $k = m/p_2 < m/p_1$. The disparity is maximal for $k \in (m/p_2,\, m/p_1)$.
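A quick numerical illustration of Part (1), with hypothetical group match probabilities $p_1 = 0.005$ and $p_2 = 0.01$ (so $\lambda_2 = 2\lambda_1$ at every data scale):

```python
import math

def poisson_tail(lam: float, m: int) -> float:
    """P(Y >= m) for Y ~ Poisson(lam), summed in log space."""
    return sum(math.exp(-lam + j * math.log(lam) - math.lgamma(j + 1))
               for j in range(m, m + int(20 * math.sqrt(lam)) + 60))

p1, p2 = 0.005, 0.010              # hypothetical per-attribute match probabilities
for m in (20, 40, 80):
    k = m / p2                     # data scale at which group 2 reaches lambda_2 = m
    q1 = poisson_tail(p1 * k, m)   # lambda_1 = m/2: deep in the tail
    q2 = poisson_tail(p2 * k, m)   # lambda_2 = m: tail probability near 1/2
    print(f"m={m:3d}  q2/q1 = {q2 / q1:.3g}")
```

The ratio climbs from roughly $10^2$ to beyond $10^6$ as $m$ doubles twice: the exponential amplification asserted in the proposition.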

Appendix B. Effective Dimensionality Under Correlation: Technical Details

This appendix provides the full technical development of correlation effects summarized in Section 3.4.

Appendix B.1. Variance Inflation and Rate Function Reduction

For binary indicators $X_1, \dots, X_k$ with $E[X_i] = p$ and pairwise correlation $\mathrm{Corr}(X_i, X_j) = \rho_{ij}$, the sum $Y = \sum_{i=1}^{k} X_i$ has
$$E[Y] = kp \quad (\text{unchanged by correlation}), \qquad \mathrm{Var}(Y) = kp(1-p) + \sum_{i \ne j} p(1-p)\rho_{ij} = kp(1-p)\left(1 + \frac{\sum_{i \ne j} \rho_{ij}}{k}\right),$$
where the design effect $\mathrm{DEFF} = \sum_{i \ne j} \rho_{ij} / k$ quantifies variance inflation; for non-negative average correlation this satisfies $\mathrm{DEFF} \ge 0$ [42].
For the Poisson approximation, independence gives Var ( Y ) = λ = k p . Under positive correlation, Var ( Y ) > λ , creating heavier tails.
Positive correlation generally reduces the large-deviation rate compared to the independent case; see [29] for precise conditions. Intuitively, the rate reduction can be modeled by shrinking the exponent by the factor k eff / k .

Appendix B.2. Effective Degrees of Freedom

Rather than reducing $\lambda$ itself, correlation reduces the effective degrees of freedom governing concentration behavior. Standard design effect methodology [42,43] parameterizes this through an effective sample size $k_{\mathrm{eff}}$ defined such that the variance of the correlated sum equals that of $k_{\mathrm{eff}}$ independent observations:
$$\mathrm{Var}(Y) = k_{\mathrm{eff}} \cdot p(1-p).$$
For spatial correlation with exponential decay $\rho(r) = e^{-r/\xi}$ over a monitored area $A$, the spatial integral of the correlation function yields
$$k_{\mathrm{eff}} \approx \frac{A}{2\pi\xi^{2}},$$
where $\xi$ is the correlation length [43].
Similarly, for temporal correlation with correlation time $\tau$ and unit sampling interval, the Bartlett–Wilks formula [44] gives
$$k_{\mathrm{eff}} \approx \frac{k}{1 + 2\sum_{h=1}^{k-1} \rho_{\mathrm{time}}(h)\,(1 - h/k)}.$$
For exponential temporal correlation $\rho_{\mathrm{time}}(h) = e^{-h/\tau}$ with $\tau \gg 1$,
$$k_{\mathrm{eff}} \approx k\,\frac{1 - e^{-1/\tau}}{1 + e^{-1/\tau}} \approx \frac{k}{2\tau}.$$

Appendix B.3. Impact on Critical Populations

We model the effect of correlation by shrinking the effective number of independent comparisons from k to k eff , holding the marginal match probability p fixed. The true mean number of matches remains λ = k p , but the concentration (and hence the large-deviation exponent) behaves as if only k eff coordinates contributed independently.
For thresholds $m = c\lambda$ with $c > 1$, the tail probability under correlation satisfies
$$q_{\mathrm{corr}} \approx \exp\!\left(-k_{\mathrm{eff}}\, p \cdot D(c-1)\right) = \exp\!\left(-\lambda \cdot \frac{k_{\mathrm{eff}}}{k} \cdot D(c-1)\right).$$
This modifies the critical population scale. The exponent in the independent case, $\lambda D(c-1) = kp\,D(c-1)$, is reduced by the factor $k_{\mathrm{eff}}/k$, yielding
$$n_{\mathrm{crit}}^{\mathrm{corr}} \approx \sqrt{\lambda}\, \exp\!\left(\frac{k_{\mathrm{eff}}}{k} \cdot \lambda D(c-1)\right) = \sqrt{\lambda}\, \exp\!\left(k_{\mathrm{eff}}\, p \cdot D(c-1)\right).$$
The critical population is thus reduced compared to the independent case, making systems fail at smaller populations when positive correlations are present. The factor k eff / k represents the information loss due to redundancy in correlated observations.

Appendix B.4. Quantitative Examples

Example A1
(City-Scale Spatial Surveillance). Consider fine-grained location monitoring with $k = 10{,}000$ cells covering area $A = 100$ km² with correlation length $\xi = 500$ m. The effective number of independent spatial tests is
$$k_{\mathrm{eff}} \approx \frac{100 \times 10^{6}\ \text{m}^{2}}{2\pi (500\ \text{m})^{2}} \approx 64.$$
Despite nominally tracking 10,000 locations, correlation reduces effective information to approximately 64 independent observations.
With $p = 0.005$, the expected number of matches is $\lambda = kp = 10{,}000 \times 0.005 = 50$. Set the threshold at $m = 75$ matches (corresponding to $c = m/\lambda = 1.5$). The rate function is
$$D(1.5 - 1) = 1.5\log(1.5) - 1.5 + 1 \approx 0.108.$$
Independent case: The exponent governing the critical population is
$$\lambda D(c-1) = 50 \times 0.108 = 5.4,$$
giving $n_{\mathrm{crit}} \approx \sqrt{\lambda}\,\exp(5.4) \approx 1560$.
Correlated case: The exponent is reduced by the factor $k_{\mathrm{eff}}/k$:
$$\frac{k_{\mathrm{eff}}}{k} \cdot \lambda D(c-1) = \frac{64}{10{,}000} \times 5.4 \approx 0.035,$$
giving $n_{\mathrm{crit}}^{\mathrm{corr}} \approx \sqrt{\lambda}\,\exp(0.035) \approx 7$.
Spatial correlation reduces the critical population from approximately 1560 to approximately 7, a reduction of over two orders of magnitude.
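The two critical populations in Example A1 can be reproduced in a few lines (using the $\sqrt{\lambda}$ prefactor; exact values differ slightly from the rounded figures quoted above):

```python
import math

def rate_D(c: float) -> float:
    """Poisson rate function D(c - 1) = c log c - c + 1."""
    return c * math.log(c) - c + 1.0

def n_crit(lam: float, c: float, keff_over_k: float = 1.0) -> float:
    """Critical population ~ sqrt(lam) * exp((k_eff/k) * lam * D(c-1))."""
    return math.sqrt(lam) * math.exp(keff_over_k * lam * rate_D(c))

lam = 10_000 * 0.005                           # Example A1: lambda = 50, c = 1.5
print(round(n_crit(lam, 1.5)))                 # independent: ~1.6e3
print(round(n_crit(lam, 1.5, 64 / 10_000)))    # correlated: ~7
```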
Example A2
(Routine Behavior Monitoring). For daily surveillance data ($\Delta t = 1$ day) over one year ($k = 365$ observations) monitoring movement patterns with correlation time $\tau = 30$ days,
$$k_{\mathrm{eff}} \approx \frac{k}{2\tau} = \frac{365}{60} \approx 6.$$
Observing someone's location 365 times yields effective information equivalent to approximately six independent observations.
With per-observation match probability $p = 0.02$, the expected matches are $\lambda = 365 \times 0.02 = 7.3$. Set threshold $m = 12$ (giving $c \approx 1.64$). The rate function is $D(1.64 - 1) \approx 0.173$.
Independent case: $\lambda D(c-1) \approx 1.26$, giving $n_{\mathrm{crit}} \approx 10$.
Correlated case: $(k_{\mathrm{eff}}/k) \cdot \lambda D \approx 0.021$, giving $n_{\mathrm{crit}}^{\mathrm{corr}} \approx 3$.
Temporal correlation reduces the critical population from approximately 10 individuals to approximately 3, making reliable surveillance of even small groups challenging.

Appendix B.5. Connections to Multiple Testing

This effective dimension reduction connects surveillance analysis to multiple-testing corrections in genomics [45,46,47] and neuroimaging [48,49,50], where similar correlation structures require Bonferroni-type corrections scaled by k eff rather than the nominal number of tests k. The mathematics forces system designers to confront the limited information content of nominally “big” data: more observations do not guarantee more information when those observations are highly correlated.
Remark A1
(Overdispersion as System Weakness). Correlation-induced overdispersion represents a fundamental vulnerability in screening systems. Positive correlation makes extreme events (high match counts) more probable than the independent Poisson model predicts, accelerating the onset of false alert saturation. Critically, system designers cannot escape this by collecting more data: additional correlated observations provide diminishing marginal information while accumulating the same false positive burden.
Appendix C illustrates these correlation effects using the City of Chicago Crime dataset, where strong spatial clustering reduces the effective dimensionality by roughly two orders of magnitude.

Appendix C. Empirical Illustrations

The theoretical results in this paper are derived independently of any data. The empirical examples below illustrate how the predicted behaviors appear in realistic settings and help clarify the practical implications of the theory. These illustrations do not affect the validity of the theoretical results but provide concrete context for their interpretation.
Two public datasets are used throughout: one providing individual-level attribute profiles and the other providing population-level event patterns across space and time. Together, they illustrate the two components common to screening problems: a collection of individuals described by multiple attributes and a collection of events observed across space and time. The mathematical structure examined in this paper captures the behavior of comparisons between these domains in the rare event regime.
The datasets are analyzed independently; no cross-dataset linkage is performed. The UCI Adult Census dataset [51] illustrates individual-level phenomena (match distributions, group disparities, and false alert concentration), while the City of Chicago Crime dataset [52] illustrates system-level dynamics (temporal saturation, spatial clustering, and correlation effects).

Appendix C.1. A Screening System from Census Data

To illustrate individual-level screening behavior, we constructed a system from the UCI Adult dataset (30,162 individuals after removing incomplete records). The system monitors k = 15 binary indicators representing potential “flags” across financial, occupational, educational, and demographic dimensions (Table A1). The 15 indicators were selected to span a range of prevalences and correlations, mirroring the heterogeneous binary features often used in real screening systems.
An individual is flagged when the number of matching attributes equals or exceeds a threshold m. Because the UCI dataset contains no true targets, every individual is treated as innocent, so all flags are interpreted as false alerts in the sense of the theoretical model.
Table A1. Screening attributes constructed from UCI Adult Census data.

Category     Attribute                         Prevalence
Financial    High capital gains (>USD 5000)    5.2%
             Any capital loss                  4.7%
             High income (>USD 50K)            24.9%
Work         Self-employed                     11.8%
             Overtime hours (>50/week)         11.5%
             Part-time (<35 h/week)            15.5%
Occupation   Executive/Managerial              13.2%
             Professional specialty            13.4%
             Sales                             11.9%
             Tech support                      3.0%
Education    Graduate degree                   8.4%
             Bachelor's degree                 16.7%
Demographic  Age 35–55                         45.8%
             Married                           47.9%
             Foreign-born                      8.8%
For each individual, we computed the number of matching attributes. The resulting distribution has mean $\lambda = 2.43$ and variance $\sigma^{2} = 2.95$, yielding a variance-to-mean ratio of 1.22. Comparing to the Poisson($\lambda$) distribution gives total variation distance $d_{\mathrm{TV}} = 0.084$. This is larger than Monte Carlo simulations of independent attributes ($d_{\mathrm{TV}} \approx 0.005$) but still indicates reasonable model fit; the deviation reflects the correlation structure present in real demographic data, the phenomenon analyzed in Section 3.4.
Table A2 reports the fraction of the population flagged at each threshold.
Table A2. False positive rates from UCI Adult data at each threshold.

Threshold m    % Population Flagged    Per Million
3              41.7%                   417,000
4              25.2%                   252,000
5              13.8%                   138,000
6              5.9%                    59,000
7              1.7%                    17,000
8              0.31%                   3100

Appendix C.2. Poisson Model Fit

Table A3 compares empirical false alert rates from the UCI Adult data against predictions from a Poisson ( λ ) model with λ = 2.43 . The ratio of empirical to theoretical probabilities quantifies model fit across thresholds.
Table A3. Empirical versus Poisson-predicted false alert rates from UCI Adult census data ($\lambda = 2.43$, $n = 30{,}162$).

Threshold m    Empirical q    Poisson q    Ratio
3              41.7%          43.8%        0.95
4              25.2%          22.7%        1.11
5              13.8%          10.0%        1.38
6              5.9%           3.7%         1.56
7              1.7%           1.2%         1.39
8              0.3%           0.4%         0.87
At intermediate thresholds ( m = 4 –7), empirical rates exceed Poisson predictions by a factor of 1.1–1.6, reflecting the overdispersion induced by attribute correlation (variance-to-mean ratio = 1.22 versus 1.0 for Poisson). This is precisely the phenomenon analyzed in Section 3.4: positive correlation inflates tail probabilities, causing false alerts to occur more frequently than independence would predict. The effect is most pronounced at moderate thresholds where the tail is neither too heavy nor too light.
At extreme thresholds ( m = 3 and m = 8 ), the ratio approaches unity, indicating that the Poisson approximation remains reasonable for order-of-magnitude estimates even when correlation is present. The total variation distance d TV = 0.084 reported earlier reflects this moderate deviation.
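The Poisson column of Table A3 can be regenerated directly from the fitted mean $\lambda = 2.43$:

```python
import math

lam = 2.43   # mean match count estimated from the UCI-derived system

def poisson_tail(lam: float, m: int) -> float:
    """P(Y >= m) for Y ~ Poisson(lam)."""
    return 1.0 - sum(math.exp(-lam) * lam ** j / math.factorial(j) for j in range(m))

for m in range(3, 9):
    print(f"m={m}: q = {100 * poisson_tail(lam, m):.1f}%")
```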

Appendix C.3. Group Disparities

The UCI dataset permits analysis of screening outcomes across demographic groups. Table A4 reports the mean number of attribute matches ($\lambda_g$) for race-by-sex groups with sample size $n_g \ge 100$. Exposure varies by a factor of 2.5 across groups.
Table A4. Group-level exposure in census data.

Group                             n_g       λ_g
Asian-Pacific Islander, Male      601       3.68
White, Male                       18,038    2.74
Asian-Pacific Islander, Female    294       2.69
Black, Male                       1418      1.91
White, Female                     7895      1.90
Black, Female                     1399      1.49
As Proposition 3 predicts, the 2.5-fold exposure difference translates to exponentially larger disparities in false alert rates as the threshold increases (Table A5). Linear regression of $\log(q_{\mathrm{high}}/q_{\mathrm{low}})$ on $m$ yields slope $= 0.78$, $R^{2} = 0.976$, $p = 0.0016$, confirming the exponential amplification mechanism. The empirical slope is consistent with the theoretical rate function $D(c-1)$ governing exponential tail decay.
Table A5. Disparity amplification: comparing highest-exposure group (Asian–Pacific Islander Male, $\lambda = 3.68$) to lowest (Black Female, $\lambda = 1.49$).

m    q_high    q_low    Ratio    log(Ratio)
3    0.696     0.169    4.1      1.41
4    0.513     0.059    8.7      2.17
5    0.343     0.022    15.5     2.74
6    0.195     0.006    30.3     3.41
7    0.077     0.001    107.1    4.67
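Table A5 reports empirical tail ratios. As a model check (not the paper's fitted values), the corresponding pure-Poisson prediction for the same two exposure levels is easy to compute and tracks the empirical column closely:

```python
import math

def poisson_tail(lam: float, m: int) -> float:
    """P(Y >= m) for Y ~ Poisson(lam)."""
    return 1.0 - sum(math.exp(-lam) * lam ** j / math.factorial(j) for j in range(m))

lam_hi, lam_lo = 3.68, 1.49          # group exposures from Table A4
ratios = [poisson_tail(lam_hi, m) / poisson_tail(lam_lo, m) for m in range(3, 8)]
print([f"{r:.1f}" for r in ratios])

# The log-ratio grows roughly linearly in m, i.e. the ratio grows exponentially.
slope = (math.log(ratios[-1]) - math.log(ratios[0])) / 4
print(f"slope of log-ratio vs m: {slope:.2f}")
```

Under the independence model the slope comes out near 0.8, close to the empirical 0.78 reported above.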

Appendix C.4. Group Dominance

Theorem 3 predicts that system-level false alerts concentrate in whichever group maximizes $n_g q_g$ and that this concentration intensifies as thresholds increase. Table A6 illustrates this pattern clearly. At $m = 5$, alert shares roughly track population shares. By $m = 8$, small high-exposure groups are dramatically over-represented while low-exposure groups virtually disappear.
Table A6. Group dominance intensifies with threshold. Asian–Pacific Islander males (2% of population, highest $\lambda_g$) account for 17% of false alerts at $m = 8$; Black females (4.6% of population, lowest $\lambda_g$) account for none.

                                          Alert Share (%)
Group                   Pop. %    λ_g     m = 5    m = 7    m = 8
Asian-Pac–Isl., Male    2.0       3.68    5.0      8.9      17.0
White, Male             59.8      2.74    78.9     78.9     72.3
White, Female           26.2      1.90    11.1     8.5      5.3
Black, Male             4.7       1.91    2.5      1.2      1.1
Black, Female           4.6       1.49    0.7      0.2      0.0
Other groups            2.7       -       1.8      2.3      4.3
Bold values indicate the extreme outcomes illustrating group dominance.
The highest-exposure group (Asian–Pacific Islander males, λ g = 3.68 ) represents only 2% of the population but accounts for 17% of false alerts at m = 8 , an 8.5-fold amplification. Conversely, Black females ( λ g = 1.49 ) represent 4.6% of the population yet generate effectively zero false alerts at the same threshold.
This divergence reflects the exponential sensitivity of q g : groups with slightly larger λ g retain a non-negligible tail probability as m increases, while groups with smaller λ g collapse exponentially. At high thresholds, contributions to n g q g no longer resemble population shares; exposure, not size, determines system-wide false alerts, precisely as predicted by Theorem 3 and Proposition 3.

Appendix C.5. Temporal Saturation from Crime Data

The Chicago Crime dataset (236,967 incidents across 50 wards over one year) permits direct observation of temporal saturation. Table A7 tracks the fraction of wards exceeding fixed crime-count thresholds as weeks accumulate. Thresholds were chosen to span the range of cumulative ward-level crime counts (approximately 2000–8000 per ward annually).
Here the same symbol m is used for the alert threshold, but the underlying quantity is cumulative ward-level crime counts rather than attribute matches. Accordingly, the numerical scale of m is much larger, though its mathematical role in determining flags is identical.
Table A7. Temporal saturation: fraction of Chicago wards exceeding threshold.

Week    m = 1000    m = 2000    m = 3000    m = 4000
4       0%          0%          0%          0%
8       22%         0%          0%          0%
12      38%         4%          0%          0%
16      58%         28%         4%          0%
24      98%         38%         28%         6%
32      100%        62%         38%         30%
40      100%        80%         44%         34%
48      100%        98%         62%         38%
Every threshold eventually approaches saturation: raising thresholds delays but does not prevent the transition, consistent with Theorem 2. The time to 50% saturation ($T_{50}$) scales approximately linearly with threshold: $T_{50} \approx 7$ weeks for $m = 1000$, ≈14 weeks for $m = 1500$, and ≈29 weeks for $m = 2000$. Saturation here refers to cumulative counts exceeding fixed thresholds, not an increase in crime rates; even a stationary process saturates any fixed threshold given sufficient time.
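The saturation mechanism can be mimicked with a toy simulation: wards accrue Poisson counts each week, and every fixed threshold is eventually crossed. The rate and threshold below are hypothetical, not fitted to the Chicago data:

```python
import math
import random

def poisson_sample(lam: float, rng: random.Random) -> int:
    """Knuth's Poisson sampler (adequate for moderate lam)."""
    limit, k, prod = math.exp(-lam), 0, 1.0
    while True:
        prod *= rng.random()
        if prod <= limit:
            return k
        k += 1

rng = random.Random(0)
weekly_rate, threshold, weeks = 100.0, 2000, 48
crossing_weeks = []
for ward in range(50):
    total = 0
    for week in range(1, weeks + 1):
        total += poisson_sample(weekly_rate, rng)
        if total >= threshold:              # cumulative count crosses the threshold
            crossing_weeks.append(week)
            break

print(f"{len(crossing_weeks)}/50 wards crossed; "
      f"mean crossing week {sum(crossing_weeks) / len(crossing_weeks):.1f}")
```

Every simulated ward crosses near week threshold/rate = 20, and doubling the threshold merely doubles the expected crossing time, illustrating why raising $m$ only delays saturation.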

Appendix C.6. Correlation Structure

Crime counts across Chicago’s 50 wards exhibit strong spatial overdispersion: mean = 4739 crimes per ward and variance = 5,996,000 , yielding a variance-to-mean ratio = 1265. For Poisson-distributed data this ratio equals 1; the observed value indicates substantial positive spatial correlation (crime clustering), reducing effective dimensionality, as analyzed in Section 3.4.
Daily crime counts exhibit temporal autocorrelation with estimated correlation time $\tau \approx 15$ days. By the Bartlett–Wilks formula, this implies $k_{\mathrm{eff}} \approx 365/(2\tau) \approx 12$ effective independent observations per year, a reduction factor of approximately 30. This reduction in effective dimensionality directly illustrates the correlation correction derived in Section 3.4.
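The $k_{\mathrm{eff}} \approx 12$ figure follows from the Bartlett–Wilks correction with $\tau \approx 15$ days; evaluating the full formula gives nearly the same value as the $k/(2\tau)$ shortcut:

```python
import math

def k_eff_temporal(k: int, tau: float) -> float:
    """Effective sample size for exponential autocorrelation rho(h) = exp(-h/tau)."""
    s = sum(math.exp(-h / tau) * (1 - h / k) for h in range(1, k))
    return k / (1 + 2 * s)

print(k_eff_temporal(365, 15.0))   # full Bartlett-Wilks formula
print(365 / (2 * 15))              # k/(2*tau) shortcut: ~12
```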

Appendix C.7. Summary

Table A8 summarizes the correspondence between theoretical predictions and empirical observations.
Table A8. Summary of empirical illustrations.

Phenomenon                Prediction                   Observation
Match distribution        Poisson(λ)                   d_TV = 0.084
Disparity amplification   Exponential in m             slope = 0.78, p = 0.0016
Group dominance           Concentration intensifies    2% pop. → 17% alerts
Temporal saturation       All thresholds fail          100% saturation observed
Spatial correlation       Reduces k_eff                Var/Mean = 1265
Temporal correlation      Reduces k_eff                τ ≈ 15 days
These illustrations confirm that the theoretical framework describes behaviors that arise naturally when screening mechanisms are applied to real data. The phenomena are not artifacts of idealized assumptions.

References

  1. Stockmarr, A. Likelihood ratios for evaluating DNA evidence when the suspect is found through a database search. Biometrics 1999, 55, 671–677. [Google Scholar] [CrossRef]
  2. Balding, D.J. The DNA database search controversy. Biometrics 2002, 58, 241–247. [Google Scholar] [CrossRef]
  3. Storvik, G. The DNA database search controversy revisited: A hierarchical Bayesian approach. Biometrics 2006, 62, 652–661. [Google Scholar]
  4. Kaye, D.H. The genealogy detectives: A constitutional analysis of familial searching. Am. Crim. Law Rev. 2013, 51, 109–163. [Google Scholar]
  5. Lyon, D. Surveillance as Social Sorting: Privacy, Risk, and Digital Discrimination; Routledge: London, UK, 2003. [Google Scholar]
  6. O’Neil, C. Weapons of Math Destruction: How Big Data Increases Inequality and Threatens Democracy; Crown Publishers: New York, NY, USA, 2016. [Google Scholar]
  7. Schneier, B. Data and Goliath: The Hidden Battles to Collect Your Data and Control Your World; W. W. Norton & Company: New York, NY, USA, 2015. [Google Scholar]
  8. Zimek, A.; Schubert, E.; Kriegel, H.-P. A survey on unsupervised outlier detection in high-dimensional numerical data. Stat. Anal. Data Min. 2012, 5, 363–387. [Google Scholar] [CrossRef]
  9. Axelsson, S. The base-rate fallacy and the difficulty of intrusion detection. ACM Trans. Inf. Syst. Secur. 2000, 3, 186–205. [Google Scholar] [CrossRef]
  10. Lippmann, R.P.; Haines, J.W.; Fried, D.J.; Korba, J.; Das, K. The 1999 DARPA off-line intrusion detection evaluation. Comput. Netw. 2000, 34, 579–595. [Google Scholar] [CrossRef]
  11. Cretu, G.F.; Stavrou, A.; Locasto, M.E.; Stolfo, S.J.; Keromytis, A.D. Casting out demons: Sanitizing training data for anomaly sensors. In Proceedings of the 2008 IEEE Symposium on Security and Privacy, Oakland, CA, USA, 18–21 May 2008; pp. 81–95. [Google Scholar]
  12. Sommer, R.; Paxson, V. Outside the closed world: On using machine learning for network intrusion detection. In Proceedings of the 2010 IEEE Symposium on Security and Privacy, Oakland, CA, USA, 16–19 May 2010; pp. 305–316. [Google Scholar]
  13. Scheirer, W.J.; de Rezende Rocha, A.; Sapkota, A.; Boult, T.E. Toward open set recognition. IEEE Trans. Pattern Anal. Mach. Intell. 2013, 35, 1757–1772. [Google Scholar] [CrossRef] [PubMed]
  14. Efron, B. Large-Scale Inference: Empirical Bayes Methods for Estimation, Testing, and Prediction; Cambridge University Press: Cambridge, UK, 2010. [Google Scholar]
  15. Benjamini, Y.; Hochberg, Y. Controlling the false discovery rate: A practical and powerful approach to multiple testing. J. R. Stat. Soc. Ser. B 1995, 57, 289–300. [Google Scholar] [CrossRef]
  16. Storey, J.D. A direct approach to false discovery rates. J. R. Stat. Soc. Ser. B 2002, 64, 479–498. [Google Scholar]
  17. Benjamini, Y.; Yekutieli, D. The control of the false discovery rate in multiple testing under dependency. Ann. Stat. 2001, 29, 1165–1188. [Google Scholar] [CrossRef]
  18. Hardt, M.; Price, E.; Srebro, N. Equality of opportunity in supervised learning. In Proceedings of the Advances in Neural Information Processing Systems (NeurIPS 2016), Barcelona, Spain, 5–10 December 2016; Volume 29. [Google Scholar]
  19. Chouldechova, A. Fair prediction with disparate impact: A study of bias in recidivism prediction instruments. Big Data 2017, 5, 153–163. [Google Scholar] [CrossRef]
  20. Corbett-Davies, S.; Pierson, E.; Feller, A.; Goel, S.; Huq, A. Algorithmic decision making and the cost of fairness. In Proceedings of the 23rd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, Halifax, NS, Canada, 13–17 August 2017; pp. 797–806. [Google Scholar]
  21. Barocas, S.; Hardt, M.; Narayanan, A. Fairness and Machine Learning: Limitations and Opportunities. 2019. Available online: https://fairmlbook.org (accessed on 2 December 2025).
  22. Mehrabi, N.; Morstatter, F.; Saxena, N.; Lerman, K.; Galstyan, A. A survey on bias and fairness in machine learning. ACM Comput. Surv. 2022, 54, 1–35. [Google Scholar] [CrossRef]
  23. Fawcett, T. An introduction to ROC analysis. Pattern Recognit. Lett. 2006, 27, 861–874. [Google Scholar] [CrossRef]
  24. Feller, W. An Introduction to Probability Theory and Its Applications, 3rd ed.; Wiley: New York, NY, USA, 1968; Volume 1. [Google Scholar]
  25. Barbour, A.D.; Holst, L.; Janson, S. Poisson Approximation; Oxford University Press: Oxford, UK, 1992. [Google Scholar]
  26. Barbour, A.D.; Chen, L.H.Y. An Introduction to Stein’s Method; Singapore University Press: Singapore, 2005. [Google Scholar]
  27. Hoeffding, W. Probability inequalities for sums of bounded random variables. J. Am. Stat. Assoc. 1963, 58, 13–30. [Google Scholar] [CrossRef]
  28. Mitzenmacher, M.; Upfal, E. Probability and Computing: Randomized Algorithms and Probabilistic Analysis; Cambridge University Press: Cambridge, UK, 2005. [Google Scholar]
Figure 1. Phase transitions in surveillance system reliability. Systems exhibit sharp transitions from reliable to unreliable operation across four dimensions. (a) Attribute growth: Monte Carlo simulations (orange points, 5000 runs per data point) validate the theoretical predictions (blue curve) with mean absolute error below 0.002. Takeaway: The sharp S-curve illustrates a phase transition: systems remain reliable until a critical attribute count is reached, after which reliability collapses rapidly with almost no intermediate zone. (b) Population scaling: False alert probability as a function of population size (log scale) for fixed λ and threshold. Takeaway: The transition sharpens as λ increases, confirming Proposition 1. Populations below n_crit ∼ e^{λD} are reliable, while those above this scale almost certainly generate false alerts. (c) Temporal dynamics: Under exponential data growth with γ = 1.5 (50% annual growth), systems transition from reliable to unreliable at a predictable time T* ≈ 4 years (Theorem 2). Takeaway: Degradation is abrupt rather than gradual; systems remain functional until they cross a critical time threshold and then fail rapidly. (d) Group dominance: Two groups with different exposure rates (p₁ = 0.005 vs. p₂ = 0.02) exhibit markedly different false alert trajectories. Takeaway: The high-exposure group (solid) reaches the failure regime much sooner than the low-exposure group (dashed), illustrating the structural exposure-driven disparities described in Proposition 3.
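The population-scaling behavior in panel (b) can be checked with a direct computation using the running example from the abstract (k = 1000 attributes, innocent per-attribute match rate p = 0.005, alert threshold of 15 matches). The sketch below is illustrative only; the function and variable names are mine, not the paper's simulation code.

```python
import math

def binom_tail(k: int, p: float, tau: int) -> float:
    """P(X >= tau) for X ~ Binomial(k, p): one innocent person's probability
    of coincidentally matching at least tau of k monitored attributes."""
    return sum(math.comb(k, j) * p**j * (1 - p) ** (k - j)
               for j in range(tau, k + 1))

k, p, tau = 1000, 0.005, 15          # running example from the abstract
q = binom_tail(k, p, tau)            # per-person false-alert probability (~2.3e-4)
n_crit = 1 / q                       # population scale where a false alert becomes likely

print(f"per-person rate q = {q:.3e}, n_crit ~ {n_crit:,.0f}")
for n in (1_000, 10_000, 1_000_000):
    p_any = 1 - (1 - q) ** n         # P(at least one false alert among n innocents)
    print(f"n = {n:>9,}: expected false alerts = {n * q:7.1f}, P(>=1 alert) = {p_any:.4f}")
```

For a city of one million innocents this is consistent with the abstract's figure of roughly 226 expected false alerts, even though any single pre-specified 15-attribute profile remains astronomically unlikely; populations well below n_crit sit in the reliable regime, while populations above it almost surely generate alerts.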
Table 1. Poisson approximation error: theoretical bound versus actual total variation distance for representative (k, p) combinations.
k       p       λ = kp   Le Cam Bound   Actual d_TV
50      0.10    5.0      1.000          0.026
100     0.05    5.0      0.500          0.013
500     0.01    5.0      0.100          0.0025
1000    0.005   5.0      0.050          0.0012
5000    0.001   5.0      0.010          0.0002
100     0.10    10.0     2.000          0.026
1000    0.01    10.0     0.200          0.0025
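The table's entries can be reproduced directly: the bound column is Le Cam's inequality in the form 2kp² (as the tabulated values confirm), and the actual total variation distance follows from summing the two probability mass functions. A minimal sketch, with a function name of my own choosing:

```python
import math

def dtv_binomial_poisson(k: int, p: float) -> float:
    """Total variation distance between Binomial(k, p) and Poisson(kp),
    via direct summation of the two pmfs using stable recurrences."""
    lam = k * p
    b = (1 - p) ** k          # Binomial pmf at j = 0
    q = math.exp(-lam)        # Poisson pmf at j = 0
    diff = abs(b - q)
    for j in range(1, k + 1):
        b *= (k - j + 1) / j * p / (1 - p)   # Binomial pmf recurrence
        q *= lam / j                          # Poisson pmf recurrence
        diff += abs(b - q)
    return 0.5 * diff         # Poisson mass beyond j = k is negligible for these λ

for k, p in [(50, 0.10), (100, 0.05), (500, 0.01), (1000, 0.005)]:
    bound = 2 * k * p * p     # Le Cam bound, as used in Table 1
    print(f"k = {k:5d}, p = {p:.3f}: bound = {bound:.3f}, "
          f"actual d_TV = {dtv_binomial_poisson(k, p):.4f}")
```

The computation illustrates the table's point: the Le Cam bound is loose for small k but both the bound and the actual distance shrink as p decreases at fixed λ, so the Poisson approximation improves precisely in the high-dimensional, rare-match regime the paper studies.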
Pollanen, M. The Probabilistic Foundations of Surveillance Failure: From False Alerts to Structural Bias. Mathematics 2026, 14, 49. https://doi.org/10.3390/math14010049