Symmetry
  • Article
  • Open Access

13 December 2025

Symmetric Discrete Distributions on the Integer Line: A Versatile Family and Applications

1 Department of Mathematics, College of Sciences and Arts, Najran University, P.O. Box 1988, Najran 11001, Saudi Arabia
2 Departamento de Matemática, Facultad de Ingeniería, Universidad de Atacama, Copiapó 1531772, Chile
3 Department of Mathematics, College of Science, Qassim University, Buraydah 51452, Saudi Arabia
4 Department of Mathematics and Natural Sciences, Gulf University for Science and Technology, P.O. Box 7207, Hawally 32093, Kuwait
This article belongs to the Special Issue Skewed (Asymmetrical) Probability Distributions and Applications Across Disciplines, Fourth Edition

Abstract

We introduce the Symmetric-Z (Sy-Z) family, a unified class of symmetric discrete distributions on the integers obtained by multiplying a three-point symmetric sign variable by an independent non-negative integer-valued magnitude. This sign–magnitude construction yields interpretable, zero-centered models with tunable mass at zero and dispersion balanced across signs, making them suitable for outcomes such as differences of counts or discretized return increments. We derive general distributional properties, including closed-form expressions for the probability mass and cumulative distribution functions, bilateral generating functions, and even moments, and show that the tail behavior is inherited from the magnitude component. A characterization by symmetry and sign–magnitude independence is established, and a distinctive operational feature is proved: for independent members of the family, the sum and the difference have the same distribution. As a central example, we study the symmetric Poisson model, providing measures of skewness, kurtosis, and entropy, together with estimation via the method of moments and maximum likelihood. Simulation studies assess the finite-sample performance of the estimators, and applications to datasets from finance and education show improved goodness-of-fit relative to established integer-valued competitors. Overall, the Sy-Z framework offers a mathematically tractable and interpretable basis for modeling symmetric integer-valued outcomes across diverse domains.

1. Introduction

Many discrete datasets encountered in practice take values on the non-negative integers that are routinely modeled using standard families, such as the Poisson or geometric distributions. In contrast, there are important situations where the natural support is the whole set of integers Z , most notably when observations are signed differences of counts or other zero-centered measurements. Canonical examples include score differences in sports, day-to-day changes in transaction counts or sales, inter-rater differences in clinical tallies, and discretized (symmetric) return increments in finance. For such problems, modeling directly on Z with distributions that respect symmetry around zero is both natural and desirable.
Several integer-valued distributions on Z have been proposed. Prominent instances include the Skellam distribution [1], obtained as the difference of two independent Poisson variables; the discrete Laplace distribution [2], along with related skew/asymmetric variants [3,4]; the discrete normal distribution [5]; and, recently, perturbed Laplace–type models [6]. Applications of signed count differences appear in medical and reliability studies [7,8] and in sports analytics, such as goal differences [9]. For general background on count modeling and discrete distributions, see refs. [10,11]. In addition, substantial probability mass at zero frequently arises in practice, creating links with the zero-inflated literature [12].
Beyond these classical constructions, there has been a notable increase in recent work on flexible integer-valued distributions supported on ℤ, including several models explicitly designed to capture symmetry and tunable dispersion. For example, ref. [13] introduced the discrete skew logistic distribution, which can accommodate symmetric and asymmetric count data and provides a useful reference for tail-shape control. Two recent contributions, refs. [14,15], developed new symmetric and perturbation-based distributions on the integers, with applications to stock exchange and hydrological data. In parallel, ref. [6] proposed a general perturbation of the discrete Laplace distribution, demonstrating improvements in financial and health datasets. More broadly, ref. [16] reviewed Skellam-type models and related integer-supported families, while ref. [17] provided an up-to-date survey on models for integer-valued data, highlighting the importance of distributions supported on ℤ in modern applications. These recent developments underscore the need for a simple, interpretable, and analytically tractable symmetric model on ℤ with an explicit, identifiable sign–magnitude decomposition, a gap that the Sy-Z and Sy-P formulations proposed in this work aim to fill.
This paper introduces a unified and tractable framework for symmetric integer-valued data on Z , named the Sy- Z family. The construction separates a three-point symmetric sign component from a non-negative magnitude: a data-generating sign takes values in { 1 , 0 , 1 } with a tunable mass at zero and is multiplied by an independent, non-negative integer-valued variable. This sign–magnitude representation yields zero-centered, exactly symmetric models with interpretable control of the atom at zero while allowing the analyst to inherit tail behavior, dispersion, and computational convenience from the chosen baseline magnitude distribution.
We develop a coherent set of distributional results for the family: closed-form probability mass functions and cumulative distribution functions, bilateral probability generating functions, and moment identities. Symmetry implies vanishing odd moments, whereas even moments factor through the baseline magnitude. We also establish a characterization by symmetry and independence: an integer-valued distribution belongs to the proposed family if and only if it is symmetric and its sign is independent of the magnitude with a three-point symmetric distribution. Beyond these foundations, we study general consequences of the product structure, including tail transfer from the magnitude, conditions for unimodality or bimodality, and simple obstructions to infinite divisibility.
A distinctive feature of this framework is a strong symmetry property for operations on independent variables, where for two independent members of the family, the sum and the difference have the same distribution. This identity is a direct consequence of the bilateral generating function symmetry and does not generally hold for standard two-sided competitors, such as Skellam [1], discrete Laplace [2], perturbed Laplace [6], or extended Poisson models [18].
As a central example, we particularize the family with a Poisson magnitude, thereby obtaining the symmetric Poisson model. We derive distributional formulas (including entropy), discuss the induced zero mass in relation to zero-inflated counts [12], and develop estimation via the method of moments and maximum likelihood. Simulation studies assess finite-sample behavior, and applications to datasets from finance and education illustrate a competitive or improved fit relative to established alternatives supported on Z .
The remainder of the paper is organized as follows. Section 2 introduces the Sy-Z family, detailing the construction from a symmetric modified Bernoulli sign and an independent non-negative magnitude. Section 3 develops core distributional results for the family, including closed-form probability mass function (PMF) and cumulative distribution function (CDF) identities, bilateral generating functions, the characterization by symmetry and sign–magnitude independence, tail transfer from the magnitude, modality conditions, criteria precluding infinite divisibility, the quantile function, and the median, as well as a discussion of first-order stochastic dominance for |Z|. Section 4 specializes to the Sy-Poisson model, deriving the moment generating function (MGF) and probability generating function (PGF), closed-form even moments (via Touchard polynomials), skewness, kurtosis, and Shannon entropy. Section 5 presents inference for Sy-Poisson: method-of-moments estimation (with asymptotic variance via the delta method), likelihood-based estimation, and both observed and expected Fisher information. Section 6 reports a simulation study evaluating the finite-sample bias and mean squared error of the maximum likelihood estimators. Section 7 provides two empirical applications (finance and education) comparing Sy-Poisson with established competitors on ℤ. A concluding section summarizes implications and outlines directions for future research.

2. The Sy-Z Family: Construction and Basic Setup

A key building block of the Sy-Z family is a three-point symmetric distribution on {−1, 0, 1} that combines a random sign with a controllable mass at zero. This distribution will serve as the canonical sign mechanism in our sign–magnitude representation of ℤ. We refer to it as the symmetric modified Bernoulli distribution.
Definition 1.
Let θ ∈ (0, 1/2). A discrete random variable X is said to follow the symmetric modified Bernoulli distribution with parameter θ, denoted SMB(θ), if its PMF is
P(X = k) = (1 − 2θ)^{1−|k|} θ^{|k|}, k ∈ {−1, 0, 1}.
Proposition 1.
Let V₁ ~ Bernoulli(2θ) and V₂ ~ Bernoulli(1/2) be independent random variables. Define
X = V₁(2V₂ − 1).
Then X ~ SMB(θ) with support {−1, 0, 1} and θ ∈ (0, 1/2).
Proof. 
Write W = 2V₂ − 1, so P(W = 1) = P(W = −1) = 1/2 and W ⊥ V₁, where ⊥ denotes independence between random variables. Then, we obtain
P(X = 0) = P(V₁ = 0) = 1 − 2θ,  P(X = ±1) = P(V₁ = 1) P(W = ±1) = (2θ)(1/2) = θ,
which matches the symmetric modified Bernoulli distribution with parameter θ .    □
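The two-Bernoulli construction of Proposition 1 translates directly into a sampler. A minimal sketch in Python (the function name `rsmb` is ours, and the tolerance in the empirical check is an illustrative choice):

```python
import random

def rsmb(theta, rng=random):
    """Draw X ~ SMB(theta) as X = V1*(2*V2 - 1), with V1 ~ Bernoulli(2*theta), V2 ~ Bernoulli(1/2)."""
    if not 0 < theta < 0.5:
        raise ValueError("theta must lie in (0, 1/2)")
    v1 = 1 if rng.random() < 2 * theta else 0   # magnitude indicator: 0 kills the sign
    v2 = 1 if rng.random() < 0.5 else 0         # fair coin for the sign
    return v1 * (2 * v2 - 1)                    # values in {-1, 0, 1}

# Empirical check of P(X = 0) = 1 - 2*theta and P(X = ±1) = theta
rng = random.Random(1)
theta = 0.3
draws = [rsmb(theta, rng) for _ in range(200_000)]
p0 = draws.count(0) / len(draws)
```

With 200,000 draws, the empirical frequencies should match the SMB(θ) masses to within Monte Carlo error.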
To facilitate simulation and highlight the structural symmetry of SMB(θ), we present in Appendix A two equivalent stochastic constructions for generating X ~ SMB(θ). These representations are convenient for both algorithmic sampling and concise derivations of basic properties.

2.1. Stochastic Representation

Definition 2.
Let X ~ SMB(θ) be as defined in Definition 1, and let Y be a discrete random variable with support ℕ₀. Assume X and Y are independent. We say that a discrete random variable Z belongs to the Sy-Z family if it admits the stochastic representation
Z =ᵈ XY.
Proposition 2.
Let X ~ SMB(θ) with θ ∈ (0, 1/2), let Y be an independent ℕ₀-valued random variable, and set Z = XY. Then:
(i) 
Moments of X. For every odd integer r, E(X^r) = 0. For every even integer q ≥ 2, E(X^q) = 2θ. In particular, E(X) = 0, E(X²) = 2θ, and Var(X) = 2θ.
(ii) 
Moments of Z = XY. If E(Y^{2m}) < ∞, then for all m ≥ 0, E(Z^{2m+1}) = 0, and for all m ≥ 1,
E(Z^{2m}) = E(X^{2m}) E(Y^{2m}) = 2θ E(Y^{2m}).
In particular, E(Z) = 0 and Var(Z) = E(Z²) = 2θ E(Y²).
Proof. 
For X ~ SMB(θ), P(X = 0) = 1 − 2θ and P(X = ±1) = θ. Hence E(X^r) = θ(1^r + (−1)^r) for r ≥ 1, which is 0 for odd r and 2θ for even r; the variance follows since E(X) = 0 and E(X²) = 2θ. For Z = XY with X ⊥ Y, we have E(Z^k) = E(X^k) E(Y^k) whenever the moments exist; the claims for odd and even powers follow by substituting the moments of X obtained above.    □

2.2. Characterization by Symmetry and Independence

We show that the Sy- Z family is exactly the class of symmetric integer-valued distributions for which the sign is independent of the magnitude and is represented by the symmetric modified Bernoulli distribution.
Theorem 1.
Let Z be an integer-valued random variable with P ( Z = 0 ) < 1 . The following statements are equivalent:
(i) 
Z belongs to the Sy-Z family; that is, there exist X ~ SMB(θ) with θ ∈ (0, 1/2) and an independent ℕ₀-valued Y such that Z =ᵈ XY.
(ii) 
Z is symmetric about zero, i.e., P(Z = k) = P(Z = −k) for all k ∈ ℤ, and there exist random variables S and W such that S ~ SMB(θ) for some θ ∈ (0, 1/2), W is ℕ₀-valued, S and W are independent, and Z =ᵈ SW.
Proof. 
(i) ⇒ (ii). Suppose Z =ᵈ XY with X ~ SMB(θ), θ ∈ (0, 1/2), and Y ℕ₀-valued and independent of X. Then Z is symmetric because
P(Z = k) = P(XY = k) = P(X = 1, Y = k) = P(X = −1, Y = k) = P(Z = −k), k > 0.
Set S := X and W := Y. By construction, S ~ SMB(θ), W is ℕ₀-valued, S ⊥ W, and Z =ᵈ SW. Thus (ii) holds. (ii) ⇒ (i). Conversely, suppose (ii) holds. Take X := S and Y := W. Then X ~ SMB(θ), Y is ℕ₀-valued, and X ⊥ Y, with Z =ᵈ XY. Hence Z belongs to the Sy-Z family, so (i) holds.    □
Remark 1.
Theorem 1 establishes that Definition 2 is not merely a constructive scheme but a complete characterization of the class: symmetry, together with sign–magnitude independence and a three-point symmetric sign distribution, is equivalent to membership in Sy- Z . This validates the use of SMB ( θ ) (Definition 1) as the canonical mechanism for the sign component.
Further analytical properties of the Sy- Z distribution, including its moment generating function and characteristic function, are provided in Section 3.

3. Main Properties of the Sy-Z Family Distributions

Proposition 3.
Let X ~ SMB(θ) with θ ∈ (0, 1/2) and support {−1, 0, 1}, and let Y be ℕ₀-valued and independent of X. Define Z = XY. Then Z belongs to the Sy-Z family, and its PMF is
P(Z = k) = θ P(Y = |k|) if k ∈ ℤ∖{0}, and P(Z = 0) = (1 − 2θ) + 2θ P(Y = 0).  (3)
Proof. 
Since X and Y are independent and Y ≥ 0, consider three cases:
For k > 0:
P(Z = k) = P(XY = k) = P(X = 1, Y = k) = P(X = 1) P(Y = k) = θ P(Y = k).
For k < 0: writing k = −m with m > 0,
P(Z = k) = P(XY = −m) = P(X = −1, Y = m) = P(X = −1) P(Y = m) = θ P(Y = m) = θ P(Y = |k|).
For k = 0: the event {Z = 0} occurs if X = 0 (regardless of Y) or if Y = 0 with X ∈ {±1}. By independence,
P(Z = 0) = P(X = 0) + P(Y = 0, X ≠ 0) = (1 − 2θ) + P(X ≠ 0) P(Y = 0) = (1 − 2θ) + 2θ P(Y = 0).
Combining the three cases yields the result. Normalization follows since
Σ_{k∈ℤ} P(Z = k) = P(Z = 0) + Σ_{k≥1} [P(Z = k) + P(Z = −k)] = (1 − 2θ) + 2θ P(Y = 0) + 2θ Σ_{k≥1} P(Y = k) = (1 − 2θ) + 2θ Σ_{k≥0} P(Y = k) = (1 − 2θ) + 2θ = 1.
   □
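Proposition 3 is easy to operationalize: given any magnitude PMF on ℕ₀, the Sy-Z PMF follows by scaling the off-zero masses by θ and inflating the atom at zero. A minimal sketch (the geometric magnitude and the truncation range are illustrative choices of ours):

```python
import math

def syz_pmf(k, theta, p_y):
    """PMF of Z = X*Y from Proposition 3; p_y(j) is the PMF of the magnitude Y on {0, 1, 2, ...}."""
    if k == 0:
        return (1 - 2 * theta) + 2 * theta * p_y(0)
    return theta * p_y(abs(k))

# Geometric magnitude P(Y = j) = (1-q)^j * q, used purely to illustrate
q, theta = 0.4, 0.25
p_geom = lambda j: (1 - q) ** j * q

# Normalization and exact symmetry, checked numerically on a truncated support
total = sum(syz_pmf(k, theta, p_geom) for k in range(-200, 201))
symmetric = all(
    math.isclose(syz_pmf(k, theta, p_geom), syz_pmf(-k, theta, p_geom))
    for k in range(1, 50)
)
```

The truncation at ±200 leaves a geometrically negligible tail, so `total` should be 1 up to floating-point error.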
Corollary 1.
Let Z = XY be as in Proposition 3 and define W = |Z|. Then W has support ℕ₀, and its PMF is
P(W = k) = P(Z = k) + P(Z = −k) = 2θ P(Y = k) if k ∈ ℕ∖{0}, and P(W = 0) = P(Z = 0) = (1 − 2θ) + 2θ P(Y = 0).
Moreover, we have
E|Z| = 2θ E(Y)  and  Var(|Z|) = 2θ Var(Y) + 2θ(1 − 2θ)[E(Y)]².

3.1. Identifiability Within Sy- Z

Let Z = XY, where X ~ SMB(θ) with θ ∈ (0, 1/2) and Y is ℕ₀-valued and independent of X. Denote p_Z(k) = P(Z = k) and p_Y(k) = P(Y = k) for k ∈ ℕ₀. The PMF of Z is
p_Z(0) = (1 − 2θ) + 2θ p_Y(0),  p_Z(k) = θ p_Y(|k|) for k ≠ 0.  (5)
Proposition 4.
From the marginal distribution p_Z(·) alone, the pair (θ, p_Y(·)) is not identifiable in the model class
M = {(θ, p_Y(·)) : θ ∈ (0, 1/2), p_Y(·) a PMF on ℕ₀}.
More precisely, for any fixed p_Z(·), there exist infinitely many pairs (θ, p_Y(·)) ∈ M satisfying (5).
Proof. 
Fix p_Z(·). From (5) and the constraint 0 < θ < 1/2, we have for k ≥ 1
θ p_Y(k) = p_Z(k).
Thus, whenever θ is chosen, the off-zero masses of Y must satisfy p_Y(k) = p_Z(k)/θ for k ≥ 1. Using normalization and the symmetry relation Σ_{k≥1} p_Z(k) = (1 − p_Z(0))/2,
1 = Σ_{k≥0} p_Y(k) = p_Y(0) + Σ_{k≥1} p_Z(k)/θ = p_Y(0) + (1 − p_Z(0))/(2θ),
which yields
p_Y(0) = 1 − (1 − p_Z(0))/(2θ).  (6)
On the other hand, the zero-mass identity in (5) gives
p_Z(0) = (1 − 2θ) + 2θ p_Y(0) = (1 − 2θ) + 2θ[1 − (1 − p_Z(0))/(2θ)] = p_Z(0),
so (6) is consistent for any θ that makes p_Y(0) ∈ [0, 1] and p_Y(k) ≥ 0 for k ≥ 1. For all such θ, we obtain a valid PMF p_Y(·) producing the same p_Z(·). Hence, the mapping (θ, p_Y(·)) ↦ p_Z(·) is not one-to-one on M, proving non-identifiability.    □
Proposition 5.
Suppose Y belongs to a parametric family {p_Y(·; λ) : λ ∈ Λ} with p_Y(0; λ) known as a function of λ, and such that for every λ, the map k ↦ p_Y(k; λ) is injective in λ on a set of indices with nonzero p_Y(·; λ). Then the parameter pair (θ, λ) is identifiable from p_Z(·).
Proof. 
From (5), for any k 1 ,
p Z ( k ) = θ p Y ( k ; λ ) .
If ( θ 1 , λ 1 ) and ( θ 2 , λ 2 ) produce the same p Z , then p Z ( k ) agrees for all k 1 , and hence
θ 1 p Y ( k ; λ 1 ) = θ 2 p Y ( k ; λ 2 ) for all k 1 .
If λ 1 λ 2 , the injectivity of k p Y ( k ; λ ) in λ implies that the ratio p Y ( k ; λ 1 ) / p Y ( k ; λ 2 ) cannot be constant in k on any set of indices with positive mass; thus, the equality above cannot hold for all k 1 unless λ 1 = λ 2 . Therefore λ 1 = λ 2 , and then θ 1 = θ 2 follows from any single k 1 . Finally, p Z ( 0 ) = ( 1 2 θ ) + 2 θ p Y ( 0 ; λ ) is automatically satisfied, so ( θ , λ ) is identifiable.    □
Corollary 2.
(i) 
If Y Poisson ( λ ) , then ( θ , λ ) is identifiable.
(ii) 
If Y Geometric ( q ) , then ( θ , q ) is identifiable.
Proof. 
For Poisson, p_Y(k; λ) = e^{−λ} λ^k/k! and p_Y(0; λ) = e^{−λ}; distinct λ produce non-proportional sequences {p_Y(k; λ)}_{k≥1}, so the injectivity condition holds. For geometric distributions with success probability q, p_Y(k; q) = (1 − q)^k q and p_Y(0; q) = q; distinct q once again yield non-proportional sequences on k ≥ 1. Proposition 5 applies in both cases.    □
Corollary 3.
If θ is known (such as from external calibration), then p_Y(·) is recovered from p_Z(·) via
p_Y(0) = [p_Z(0) − (1 − 2θ)]/(2θ),  p_Y(k) = p_Z(k)/θ for k ≥ 1,  (7)
and this defines a valid PMF provided p_Z(·) arises from a Sy-Z model with the given θ.
Proof. 
Equations (7) are a direct rearrangement of (5). Nonnegativity and normalization follow from the fact that p Z ( · ) is generated by the Sy- Z structure with θ .    □
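When θ is known, Equations (7) invert the construction exactly. A sketch that round-trips a Poisson magnitude through the Sy-Z PMF and back (the Poisson choice and the truncation at 30 terms are ours, purely for illustration):

```python
import math

def syz_pmf(k, theta, p_y):
    # Sy-Z PMF from Proposition 3
    if k == 0:
        return (1 - 2 * theta) + 2 * theta * p_y(0)
    return theta * p_y(abs(k))

def recover_py(k, theta, p_z):
    # Corollary 3: invert p_Z back to p_Y when theta is known
    if k == 0:
        return (p_z(0) - (1 - 2 * theta)) / (2 * theta)
    return p_z(k) / theta

theta, lam = 0.3, 2.0
p_pois = lambda j: math.exp(-lam) * lam ** j / math.factorial(j)
p_z = lambda k: syz_pmf(k, theta, p_pois)

# The recovered magnitude PMF should match the original Poisson PMF
err = max(abs(recover_py(k, theta, p_z) - p_pois(k)) for k in range(0, 30))
```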
Proposition 4 clarifies that, without additional structure on the magnitude Y, the sign parameter θ and the zero mass p_Y(0) are confounded through p_Z(0) = (1 − 2θ) + 2θ p_Y(0), while only the products θ p_Y(k), k ≥ 1, are determined by {p_Z(k)}_{k≠0}. Imposing a parametric family for Y restores identifiability (Proposition 5), as illustrated by Sy-Poisson and Sy-Geometric (Corollary 2). When θ is externally known, p_Y is uniquely reconstructed from p_Z (Corollary 3).

3.2. Tail Behaviour

Proposition 6.
Under the assumptions of Proposition 3, for all k ≥ 0,
P(|Z| > k) = 2θ P(Y > k).
Consequently, |Z| is tail-equivalent to Y up to the factor 2θ: if P(Y > k) ~ c g(k) as k → ∞ for some reference function g, then P(|Z| > k) ~ (2θc) g(k).
Proof. 
By (3), for  k 0 ,
P ( | Z | > k ) = j > k P ( Z = j ) + P ( Z = j ) = j > k 2 θ P ( Y = j ) = 2 θ P ( Y > k ) .
The asymptotic equivalence follows immediately.    □
Remark 2.
If Y has exponential (light) tails (for example, Poisson, binomial, geometric), then so does |Z|; if Y has regularly varying (power-law) tails, then so does |Z| with the same index. The parameter θ scales tail probabilities but does not change their rate.

3.3. Unimodality and Number of Modes

Proposition 7.
Let Z = XY with X ~ SMB(θ), θ ∈ (0, 1/2), and Y ℕ₀-valued and independent of X. Define the off-zero mode of Y by
m₊ ∈ argmax_{k≥1} P(Y = k) (choose the smallest such index).
Then:
(i) 
For k ≥ 1, P(Z = k) = P(Z = −k) = θ P(Y = k). Hence, the positive (and negative) sides of Z are proportional copies of the PMF of Y restricted to {1, 2, …}, and m₊ is the (unique smallest) mode on the positive side of Z as well as the mode of |Z| on {1, 2, …}.
(ii) 
The global modes of Z are determined by comparing the mass at zero with the peak on the positive side:
a single mode at 0 if P(Z = 0) > θ P(Y = m₊); three modes at 0, ±m₊ if P(Z = 0) = θ P(Y = m₊); two symmetric modes at ±m₊ if P(Z = 0) < θ P(Y = m₊).
In particular, using P(Z = 0) = (1 − 2θ) + 2θ P(Y = 0),
(1 − 2θ) + 2θ P(Y = 0) ≥ θ max_{k≥1} P(Y = k)  ⟹  Z is unimodal at 0.
Proof. 
Part (i) is immediate from the Sy-Z PMF: for k ≥ 1, P(Z = ±k) = θ P(Y = k), so the positive and negative sides inherit the shape of Y on {1, 2, …}. For (ii), because both sides are scaled copies of {P(Y = k)}_{k≥1}, the only competitors for the global maximum are k = 0 and k = ±m₊. Comparing P(Z = 0) with θ P(Y = m₊) yields the three cases.    □
Corollary 4.
If Y is unimodal with a mode at m = 0 (such as geometric, or binomial with p ≤ 1/2), then Z is unimodal at 0 for all θ ∈ (0, 1/2). More generally, if Y is log-concave on ℕ₀, then Z is either unimodal at 0 or bimodal at ±m₊, depending on the inequality in Proposition 7 (ii); no additional modes can appear.
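Proposition 7(ii) reduces mode counting to a single comparison between the atom at zero and the scaled off-zero peak. A sketch that classifies the modes of Z for a Poisson magnitude (the function name, the truncation bound `kmax`, and the parameter values are our illustrative choices):

```python
import math

def classify_modes(theta, p_y, kmax=50):
    """Global modes of Z = X*Y per Proposition 7(ii); magnitude scanned up to kmax (our truncation)."""
    p_z0 = (1 - 2 * theta) + 2 * theta * p_y(0)   # P(Z = 0)
    probs = [p_y(k) for k in range(1, kmax + 1)]
    peak = max(probs)
    m_plus = 1 + probs.index(peak)                # smallest off-zero mode of Y
    side = theta * peak                           # P(Z = ±m_plus)
    if p_z0 > side:
        return {0}
    if p_z0 == side:
        return {0, m_plus, -m_plus}
    return {m_plus, -m_plus}

# Poisson(3.7) magnitude: the off-zero mode of Y is at k = 3
lam = 3.7
p_pois = lambda j: math.exp(-lam + j * math.log(lam) - math.lgamma(j + 1))
modes_low = classify_modes(0.30, p_pois)    # large atom at zero wins: unimodal at 0
modes_high = classify_modes(0.49, p_pois)   # side peaks win: bimodal at ±3
```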

3.4. Cumulative Distribution Function

Proposition 8.
Let X ~ SMB(θ) with θ ∈ (0, 1/2), let Y be an ℕ₀-valued random variable with CDF F_Y(·), independent of X, and let Z = XY. The CDF of Z is
F_Z(k) = θ[1 − F_Y(|k| − 1)] if k ≤ −1; (1 − θ) + θ P(Y = 0) if k = 0; (1 − θ) + θ F_Y(k) if k ≥ 1.
Proof. 
From the given PMF, P(Z = m) = θ P(Y = |m|) for m ∈ ℤ∖{0}, and P(Z = 0) = (1 − 2θ) + 2θ P(Y = 0).
For k ≤ −1,
F_Z(k) = Σ_{m≤k} P(Z = m) = Σ_{j≥|k|} θ P(Y = j) = θ P(Y ≥ |k|) = θ[1 − F_Y(|k| − 1)].
For k = 0,
F_Z(0) = Σ_{m≤−1} P(Z = m) + P(Z = 0) = θ P(Y ≥ 1) + (1 − 2θ) + 2θ P(Y = 0) = (1 − θ) + θ P(Y = 0).
For k ≥ 1,
F_Z(k) = Σ_{m≤−1} P(Z = m) + P(Z = 0) + Σ_{m=1}^{k} P(Z = m) = (1 − θ) + θ P(Y = 0) + θ Σ_{m=1}^{k} P(Y = m) = (1 − θ) + θ F_Y(k).
This completes the proof.    □
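Proposition 8 expresses F_Z piecewise through F_Y. A sketch, checked against brute-force summation of the Sy-Z PMF (the geometric magnitude and truncation bounds are illustrative choices of ours):

```python
def syz_cdf(k, theta, F_y, p_y0):
    """CDF of Z from Proposition 8; F_y is the CDF of Y on {0, 1, ...} and p_y0 = P(Y = 0)."""
    if k <= -1:
        return theta * (1 - F_y(abs(k) - 1))
    if k == 0:
        return (1 - theta) + theta * p_y0
    return (1 - theta) + theta * F_y(k)

# Geometric magnitude, used only to make the check concrete
q, theta = 0.35, 0.2
p_y = lambda j: (1 - q) ** j * q
F_y = lambda j: 1 - (1 - q) ** (j + 1)          # geometric CDF on {0, 1, ...}

def p_z(k):
    return (1 - 2 * theta) + 2 * theta * p_y(0) if k == 0 else theta * p_y(abs(k))

# Brute-force CDF: cumulative sum of the PMF from a (negligibly truncated) left tail
brute = lambda k: sum(p_z(m) for m in range(-400, k + 1))
max_err = max(abs(syz_cdf(k, theta, F_y, p_y(0)) - brute(k)) for k in range(-10, 11))
```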
Corollary 5.
Under the assumptions of Proposition 8, let S_Z(k) = P(Z > k) = 1 − F_Z(k) denote the survival function of Z. Then
S_Z(k) = 1 − θ S_Y(|k| − 1) if k ≤ −1, and S_Z(k) = θ S_Y(k) if k ≥ 0,
where S_Y(k) = 1 − F_Y(k) denotes the survival function of Y.

3.5. First-Order Stochastic Dominance

We say that A dominates B in the first-order sense if F_A(z) ≤ F_B(z) holds for all z ∈ ℤ. For Sy-Z distributions Z = XY with X ~ SMB(θ) and Y ℕ₀-valued and independent of X, write
T_Y(k) := P(Y ≥ k), k ∈ ℕ.
From Proposition 8, for any k ≥ 1,
F_Z(−k) = θ T_Y(k),  F_Z(k) = 1 − θ T_Y(k + 1).  (8)
Theorem 2.
Let Z_i = X_i Y_i with X_i ~ SMB(θ_i), Y_i ℕ₀-valued, and X_i ⊥ Y_i (i = 1, 2). If F_{Z₁}(z) ≤ F_{Z₂}(z) for all z ∈ ℤ, then F_{Z₁}(z) = F_{Z₂}(z) for all z ∈ ℤ. In particular, strict first-order dominance cannot occur between two distinct members of the Sy-Z family.
Proof. 
Assume F_{Z₁}(·) ≤ F_{Z₂}(·) pointwise. For every k ≥ 1,
F_{Z₁}(−k) ≤ F_{Z₂}(−k) ⟺ θ₁ T_{Y₁}(k) ≤ θ₂ T_{Y₂}(k),
and
F_{Z₁}(k − 1) ≤ F_{Z₂}(k − 1) ⟺ 1 − θ₁ T_{Y₁}(k) ≤ 1 − θ₂ T_{Y₂}(k) ⟺ θ₁ T_{Y₁}(k) ≥ θ₂ T_{Y₂}(k).
Hence θ₁ T_{Y₁}(k) = θ₂ T_{Y₂}(k) for all k ≥ 1. Using (8), this yields F_{Z₁}(·) = F_{Z₂}(·) on ℤ.    □
Remark 3.
Classical first-order dominance is too restrictive on symmetric two-sided supports. A more informative comparison works on magnitudes. Since P(|Z| > k) = 2θ P(Y > k),
|Z₁| dominates |Z₂| in the first-order sense ⟺ θ₁ P(Y₁ > k) ≥ θ₂ P(Y₂ > k) for all k ≥ 0.
This provides a practical dominance notion for dispersion comparisons and for tail–probability assessments based on | Z | .
Remark 4.
Other stochastic orders can still distinguish members of the family. In particular, the convex order and increasing–convex order separate distributions with the same mean (zero by symmetry) but different tail weights. For Sy- Z , many such comparisons reduce to corresponding orders on | Z | (or on Y) via the product representation; a systematic treatment is left for future work.

3.6. Generating Function

Proposition 9.
Under the assumptions of Proposition 8, let M_Y(t) = E(e^{tY}) denote the MGF of Y, and G_Y(s) = E(s^Y) its PGF. Then, for all t,
M_Z(t) = E(e^{tZ}) = (1 − 2θ) + θ[M_Y(t) + M_Y(−t)] = (1 − 2θ) + θ[G_Y(e^t) + G_Y(e^{−t})].
Proof. 
By independence and conditioning on X,
M_Z(t) = E[E(e^{tXY} | X)] = E[M_Y(tX)] = P(X = 0) M_Y(0) + P(X = 1) M_Y(t) + P(X = −1) M_Y(−t).
Since P(X = 0) = 1 − 2θ and P(X = ±1) = θ, we obtain
M_Z(t) = (1 − 2θ) + θ M_Y(t) + θ M_Y(−t),
which proves the first identity. The second follows from M_Y(t) = G_Y(e^t) for integer Y ≥ 0. The stated domain requires simultaneous finiteness of M_Y(t) and M_Y(−t).    □
Corollary 6.
If M_Y″(0) < ∞, then E(Z) = 0 and Var(Z) = 2θ E(Y²).
Proof. 
Differentiating at t = 0: M_Z′(0) = θ M_Y′(0) − θ M_Y′(0) = 0, and M_Z″(0) = θ M_Y″(0) + θ M_Y″(0) = 2θ E(Y²).    □

3.7. Quantile Function and Median

Let p_k = P(Y = k), p₀ = P(Y = 0), F_Y(k) = P(Y ≤ k), and T_Y(k) = P(Y ≥ k), where Y is ℕ₀-valued. From Proposition 8, for integers k ≥ 1,
F_Z(−k) = θ T_Y(k),  F_Z(0) = 1 − θ + θ p₀,  F_Z(k) = 1 − θ + θ F_Y(k).
Equivalently, for k ≥ 0,
P(|Z| > k) = 2θ P(Y > k),  P(|Z| ≤ k) = 1 − 2θ P(Y > k).
Proposition 10.
Let Q_Z(u) := inf{z ∈ ℤ : F_Z(z) ≥ u} denote the (left-continuous) quantile function. Set
a₋ := F_Z(−1) = θ(1 − p₀),  a₀ := F_Z(0) = 1 − θ + θ p₀.
Then, for u ∈ (0, 1),
Q_Z(u) = −k₋(u) if 0 < u ≤ a₋; 0 if a₋ < u ≤ a₀; k₊(u) if a₀ < u < 1,
where k₋(u), k₊(u) ∈ {1, 2, …} are obtained from Y via
k₋(u) = max{k ≥ 1 : θ T_Y(k) ≥ u},  k₊(u) = min{k ≥ 1 : 1 − θ + θ F_Y(k) ≥ u}.
Proof. 
The result follows by direct inversion of the three pieces above, using that T_Y(·) is non-increasing and F_Y(·) is non-decreasing on ℕ₀.    □
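Rather than hard-coding the three branches, the quantile can also be obtained by scanning the piecewise CDF of Proposition 8 for the smallest z with F_Z(z) ≥ u, which is exactly the left-continuous definition above. A sketch (the geometric magnitude and the scan bound are illustrative choices of ours):

```python
def syz_cdf(k, theta, F_y, p_y0):
    # Piecewise CDF from Proposition 8
    if k <= -1:
        return theta * (1 - F_y(abs(k) - 1))
    if k == 0:
        return (1 - theta) + theta * p_y0
    return (1 - theta) + theta * F_y(k)

def syz_quantile(u, theta, F_y, p_y0, bound=500):
    """Q_Z(u) = min{z : F_Z(z) >= u}, found by scanning (the bound is our truncation)."""
    for z in range(-bound, bound + 1):
        if syz_cdf(z, theta, F_y, p_y0) >= u:
            return z
    raise ValueError("u too close to 1 for this bound")

# Geometric magnitude, illustrative choice
q, theta = 0.4, 0.25
F_y = lambda j: 1 - (1 - q) ** (j + 1)
p_y0 = q
med = syz_quantile(0.5, theta, F_y, p_y0)                     # median (Corollary 7)
lo, hi = syz_quantile(0.05, theta, F_y, p_y0), syz_quantile(0.95, theta, F_y, p_y0)
```

Consistent with Corollary 7, the returned median is 0, and the 5% and 95% quantiles are symmetric about it.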

3.8. Median

Corollary 7.
Every Sy-Z distribution has 0 as a median. Moreover, if θ ∈ (0, 1/2), then 0 is the unique median.
Proof. 
Since F_Z(−1) = θ(1 − p₀) ≤ θ < 1/2 and F_Z(0) = 1 − θ + θ p₀ ≥ 1 − θ > 1/2, we have F_Z(−1) < 1/2 < F_Z(0), so 0 is the unique integer m with F_Z(m − 1) ≤ 1/2 ≤ F_Z(m).    □

3.9. Distribution of Sums and Differences

We now characterize the distribution of sums and differences of independent Sy- Z variables, showing that they share the same law and admit explicit convolution formulas for their PMFs.
Proposition 11.
Let Z_i = X_i Y_i, where X_i ~ SMB(θ_i), θ_i ∈ (0, 1/2), Y_i is ℕ₀-valued and independent of X_i, and Z₁ and Z₂ are independent. Write p_i(k) = P(Y_i = k) and G_{Y_i}(s) = Σ_{k≥0} p_i(k) s^k for the one-sided PGF of Y_i. Then, with, for k ≥ 1,
P(Z_i = ±k) = θ_i p_i(k),  P(Z_i = 0) = (1 − 2θ_i) + 2θ_i p_i(0),
the following holds:
(i) 
The bilateral PGF of Z_i is
G_{Z_i}(s) = (1 − 2θ_i) + θ_i [G_{Y_i}(s) + G_{Y_i}(1/s)], s > 0,
so G_{Z_i}(s) = G_{Z_i}(1/s) for all s > 0. Consequently, for the sum S := Z₁ + Z₂ and the difference D := Z₁ − Z₂,
G_S(s) = G_{Z₁}(s) G_{Z₂}(s),  G_D(s) = G_{Z₁}(s) G_{Z₂}(1/s) = G_{Z₁}(s) G_{Z₂}(s) = G_S(s),
and therefore D =ᵈ S.
(ii) 
Let S := Z₁ + Z₂. For k > 0,
P(S = k) = P(Z₁ = 0) θ₂ p₂(k)  [Z₁ = 0, Z₂ = k]
+ P(Z₂ = 0) θ₁ p₁(k)  [Z₂ = 0, Z₁ = k]
+ θ₁θ₂ Σ_{j=1}^{k−1} p₁(j) p₂(k − j)  [Z₁ > 0, Z₂ > 0]
+ θ₁θ₂ Σ_{u=1}^{∞} p₁(u) p₂(k + u)  [Z₁ = −u, Z₂ = k + u]
+ θ₁θ₂ Σ_{u=1}^{∞} p₂(u) p₁(k + u)  [Z₂ = −u, Z₁ = k + u].  (10)
For k = 0,
P(S = 0) = P(Z₁ = 0) P(Z₂ = 0) + 2θ₁θ₂ Σ_{u=1}^{∞} p₁(u) p₂(u),  (11)
and by symmetry P(S = −k) = P(S = k) for all k ≥ 1. By part (i), the same formulas hold for D.
Proof. 
For Z = XY with X ~ SMB(θ) and Y ℕ₀-valued and independent of X, we have
s^Z = 1 if X = 0; s^Y if X = 1; s^{−Y} if X = −1;
thus, using independence and P(X = 0) = 1 − 2θ, P(X = ±1) = θ,
G_Z(s) = (1 − 2θ) + θ G_Y(s) + θ G_Y(1/s),
which yields G_Z(s) = G_Z(1/s) for all s > 0. For independent Z₁, Z₂,
G_S(s) = G_{Z₁}(s) G_{Z₂}(s),  G_D(s) = G_{Z₁}(s) G_{Z₂}(1/s).
Since G_{Z₂}(1/s) = G_{Z₂}(s), we obtain G_D(s) = G_S(s), and hence D =ᵈ S.
On the other hand, let S = Z₁ + Z₂ and k > 0. By independence,
P(S = k) = Σ_{z∈ℤ} P(Z₁ = z) P(Z₂ = k − z).
Splitting the sum into the disjoint cases z = 0, z = k, z = j ∈ {1, …, k − 1}, z = −u, and z = k + u with u ≥ 1, and using P(Z_i = 0) and P(Z_i = ±m) = θ_i p_i(m) for m ≥ 1 yields
P(S = k) = P(Z₁ = 0) θ₂ p₂(k) + P(Z₂ = 0) θ₁ p₁(k) + θ₁θ₂ Σ_{j=1}^{k−1} p₁(j) p₂(k − j) + θ₁θ₂ Σ_{u≥1} p₁(u) p₂(k + u) + θ₁θ₂ Σ_{u≥1} p₂(u) p₁(k + u),
which is (10).
For k = 0,
P(S = 0) = P(Z₁ = 0, Z₂ = 0) + 2 Σ_{u≥1} P(Z₁ = u, Z₂ = −u),
and substituting the same probabilities gives (11). The symmetry of each Z_i implies P(S = −k) = P(S = k) for k ≥ 1, and by part (i) the same formulas hold for D.    □
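The identity D =ᵈ S from Proposition 11 can be verified numerically by convolving two Sy-Z PMFs directly, without invoking the generating-function argument. A sketch (Poisson magnitudes and the truncation bound B are illustrative choices of ours):

```python
import math

def syz_pmf(k, theta, p_y):
    # Sy-Z PMF from Proposition 3
    if k == 0:
        return (1 - 2 * theta) + 2 * theta * p_y(0)
    return theta * p_y(abs(k))

def convolve(pmf1, pmf2, B=40):
    """PMFs of Z1 + Z2 and Z1 - Z2 by direct summation over a truncated support."""
    s, d = {}, {}
    for a in range(-B, B + 1):
        for b in range(-B, B + 1):
            w = pmf1(a) * pmf2(b)
            s[a + b] = s.get(a + b, 0.0) + w
            d[a - b] = d.get(a - b, 0.0) + w
    return s, d

# Two Sy-Poisson members with different parameters (stable log-space Poisson PMF)
pois = lambda lam: (lambda j: math.exp(-lam + j * math.log(lam) - math.lgamma(j + 1)))
pz1 = lambda k: syz_pmf(k, 0.20, pois(1.5))
pz2 = lambda k: syz_pmf(k, 0.45, pois(3.0))

psum, pdiff = convolve(pz1, pz2)
max_gap = max(abs(psum.get(k, 0.0) - pdiff.get(k, 0.0)) for k in range(-30, 31))
```

The gap between the two convolved PMFs vanishes, as the theorem predicts, even though θ₁ ≠ θ₂ and λ₁ ≠ λ₂.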
Remark 5.
Equations (10) and (11) decompose the mass of the sum S into three types of contributions: first, cases where one addend is zero; second, positive–positive convolution over { 1 , , k 1 } ; and third, positive–negative cross terms with unbalanced magnitudes (tails). This partition is useful both for theoretical bounds (for example, tail comparisons driven by the Y i ) and for stable numerical evaluation. If some Y i has finite support, the infinite sums truncate automatically. By Proposition 11, the same decomposition applies to the difference D.

4. Special Case: The Sy-Poisson Distribution

In this section, we particularize the Sy-Z family by taking the mixing variable Y to be Poisson. Recall the set-up in Definition 2 (and Proposition 8): X ~ SMB(θ) with θ ∈ (0, 1/2), Y ⊥ X takes values in ℕ₀, and Z = XY.
Definition 3.
Let Y ~ P(λ) with λ > 0 and X ~ SMB(θ), θ ∈ (0, 1/2), be independent. The random variable Z = XY is said to follow a Sy-Poisson distribution, denoted Z ~ Sy-P(θ, λ). Its PMF, inherited from the Sy-Z construction, is
P(Z = k) = θ e^{−λ} λ^{|k|}/|k|! for k ∈ ℤ∖{0}, and P(Z = 0) = (1 − 2θ) + 2θ e^{−λ}.
Consequently, the CDF of Z is obtained from Proposition 8 by replacing F Y ( · ) with the Poisson CDF.
It is useful to contrast the proposed Sy–Poisson specification with classical symmetric models on  Z . The Skellam distribution, obtained as the difference of two independent Poisson variables, is symmetric but couples the zero mass and tail decay through a single intensity parameter. Similarly, symmetric negative binomial variants provide heavier tails but still link dispersion and central mass through a common shape parameter. Zero-inflated symmetric count models allow for additional mass at zero but do not preserve exact symmetry unless extra constraints are imposed. In contrast, the Sy–Poisson model separates the sign and magnitude mechanisms, offering exact bilateral symmetry, independent control of the zero mass via θ , and inherited Poisson-type tail behavior through  λ . This decomposition makes the model both flexible and analytically tractable, and it ensures identifiability under mild parametric assumptions, providing advantages over the alternatives above.
Proposition 12.
Let Z = XY with X ~ SMB(θ), θ ∈ (0, 1/2), and Y ~ P(λ), λ > 0, independent. Then the CDF of Z ~ Sy-P(θ, λ) is
F_Z(k) = P(Z ≤ k) = θ[1 − H(|k|, λ)] for k ≤ −1; (1 − θ) + θ e^{−λ} for k = 0; (1 − θ) + θ H(k + 1, λ) for k ≥ 1,
where H(m, λ) = P(Y ≤ m − 1) for m ≥ 1, H(a, x) = Γ(a, x)/Γ(a), and Γ(a, z) denotes the upper incomplete gamma function ∫_z^∞ t^{a−1} e^{−t} dt.
Proof. 
Apply Proposition 8 with F_Y(·) equal to the P(λ) CDF: F_Y(k) = e^{−λ} Σ_{j=0}^{k} λ^j/j! = H(k + 1, λ) for k ≥ 0, and 1 − F_Y(|k| − 1) = e^{−λ} Σ_{j=|k|}^{∞} λ^j/j! = 1 − H(|k|, λ) for k ≤ −1.    □
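Definition 3 and Proposition 12 are straightforward to implement. A sketch that evaluates the incomplete-gamma form of the CDF via the Poisson partial sums it equals, and checks it against brute-force accumulation of the PMF (function names and tolerances are ours; `math.lgamma` is used instead of a special-function library):

```python
import math

def sy_pois_pmf(k, theta, lam):
    """Sy-Poisson PMF (Definition 3), with the Poisson part computed in log space."""
    if k == 0:
        return (1 - 2 * theta) + 2 * theta * math.exp(-lam)
    j = abs(k)
    return theta * math.exp(-lam + j * math.log(lam) - math.lgamma(j + 1))

def pois_cdf(m, lam):
    # P(Y <= m) for Y ~ Poisson(lam); equals H(m + 1, lam) in the paper's notation
    if m < 0:
        return 0.0
    return sum(math.exp(-lam + j * math.log(lam) - math.lgamma(j + 1)) for j in range(m + 1))

def sy_pois_cdf(k, theta, lam):
    """Piecewise CDF from Proposition 12."""
    if k <= -1:
        return theta * (1 - pois_cdf(abs(k) - 1, lam))
    if k == 0:
        return (1 - theta) + theta * math.exp(-lam)
    return (1 - theta) + theta * pois_cdf(k, lam)

theta, lam = 0.3, 2.5
brute = lambda k: sum(sy_pois_pmf(m, theta, lam) for m in range(-200, k + 1))
max_err = max(abs(sy_pois_cdf(k, theta, lam) - brute(k)) for k in range(-8, 9))
```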
Remark 6.
The distribution $Z\sim$ Sy-$P(\theta,\lambda)$ is symmetric about 0 and zero-inflated, with $P(Z=0)=(1-2\theta)+2\theta e^{-\lambda}$. For small λ, most of the mass is concentrated at 0, so the PMF is sharply unimodal at the origin. As λ increases, the Poisson component spreads out, and two symmetric shoulders emerge on the positive and negative sides; for moderate and large λ, the central mode at 0 is flanked by two lighter side peaks, yielding an overall three-bump shape. In the limit $\theta\to 1/2$, the point mass at 0 tends to $e^{-\lambda}$, and the side peaks become more pronounced, whereas when $\theta\to 0$, the concentration at 0 dominates for any fixed λ. These behaviors are illustrated by the PMF in Figure 1 and Figure 2, and the corresponding CDF in Figure 3 and Figure 4.
Figure 1. Probability mass function of Z Sy- P ( θ , λ ) . Each panel fixes 0 < λ < 1 .
Figure 2. Probability mass function of Z Sy- P ( θ , λ ) . Each panel fixes λ 1 .
Figure 3. Cumulative distribution function of Z Sy- P ( θ , λ ) . Each panel fixes 0 < λ < 1 .
Figure 4. Cumulative distribution function of Z Sy- P ( θ , λ ) . Each panel fixes λ 1 .

4.1. Generating Functions

Proposition 13.
Let Z Sy- P ( θ , λ ) with θ ( 0 , 1 / 2 ) and λ > 0 , constructed as Z = X Y where X SMB ( θ ) and Y P ( λ ) are independent. Then:
$$M_Z(t)=(1-2\theta)+\theta\exp\{\lambda(e^{t}-1)\}+\theta\exp\{\lambda(e^{-t}-1)\},\qquad t\in\mathbb{R},\tag{14}$$
$$G_Z(s)=(1-2\theta)+\theta\exp\{\lambda(s-1)\}+\theta\exp\{\lambda(1/s-1)\},\qquad s>0.\tag{15}$$
Moreover, G Z ( s ) = G Z ( 1 / s ) for all s > 0 , reflecting the exact symmetry of Z.
Proof. 
Condition on $X$. Since $Y\sim P(\lambda)$, $E(e^{tY})=\exp\{\lambda(e^{t}-1)\}$ and $E(s^{Y})=\exp\{\lambda(s-1)\}$. Use $P(X=0)=1-2\theta$, $P(X=\pm 1)=\theta$, and the independence of $X$ and $Y$.    □
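The two generating functions are easy to verify numerically against a brute-force expectation over the support; the sketch below (our naming) also exhibits the symmetry property of the bilateral PGF.

```python
from math import exp, factorial

def sy_poisson_pgf(s, theta, lam):
    """Bilateral PGF G_Z(s) of Proposition 13."""
    return (1 - 2 * theta) + theta * exp(lam * (s - 1)) + theta * exp(lam * (1.0 / s - 1))

def pgf_by_summation(s, theta, lam, trunc=80):
    """E(s^Z) computed directly from the PMF, as a cross-check."""
    def pmf(k):
        if k == 0:
            return (1 - 2 * theta) + 2 * theta * exp(-lam)
        return theta * exp(-lam) * lam ** abs(k) / factorial(abs(k))
    return sum(s ** k * pmf(k) for k in range(-trunc, trunc + 1))
```

In particular, $G_Z(s)=G_Z(1/s)$ holds up to floating-point rounding, reflecting the exact symmetry of $Z$.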
Corollary 8.
All odd raw moments vanish, E ( Z 2 m + 1 ) = 0 for m 0 . For  m 1 ,
$$E(Z^{2m})=E(X^{2m}Y^{2m})=2\theta\,E(Y^{2m})=2\theta\,T_{2m}(\lambda),$$
where T r ( λ ) is the r-th Touchard polynomial [10] (raw moment of a Poisson ( λ ) ). In particular,
$$E(Z^{2})=2\theta\lambda(1+\lambda),\qquad E(Z^{4})=2\theta\lambda(1+7\lambda+6\lambda^{2}+\lambda^{3}).$$
Hence Var ( Z ) = 2 θ λ ( 1 + λ ) .
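These moment identities can be confirmed by truncated summation over the support; a quick sketch (function name is ours):

```python
from math import exp, factorial

def sy_poisson_moment(r, theta, lam, trunc=120):
    """r-th raw moment E(Z^r) of Sy-P(theta, lam) by truncated summation."""
    def pmf(k):
        if k == 0:
            return (1 - 2 * theta) + 2 * theta * exp(-lam)
        return theta * exp(-lam) * lam ** abs(k) / factorial(abs(k))
    return sum(k ** r * pmf(k) for k in range(-trunc, trunc + 1))
```

For $(\theta,\lambda)=(0.3,3)$ this reproduces $E(Z^{2})=7.2$ and $E(Z^{4})=185.4$, and the odd moments vanish numerically.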
Corollary 9.
Skewness is 0 due to symmetry, and the (non–excess) kurtosis is
$$\kappa=\frac{E(Z^{4})}{\{\mathrm{Var}(Z)\}^{2}}=\frac{1+7\lambda+6\lambda^{2}+\lambda^{3}}{2\theta\lambda(1+\lambda)^{2}},$$
and the excess kurtosis equals $\kappa-3$.
Remark 7.
(i) The identities (14) and (15) yield derivatives at t = 0 (or s = 1 ) that recover the even moments without resorting to series expansions. (ii) The symmetry G Z ( s ) = G Z ( 1 / s ) implies that the distributions of Z 1 + Z 2 and Z 1 Z 2 coincide for independent Sy–Poisson variables; see Section 3.9.

4.2. The Total Time on Test Transform

The total time on test (TTT) transform is a standard tool in reliability analysis and quality-control methodology for assessing distributional shape and aging properties. For a non-negative random variable X with distribution function F and survival function S ( x ) = 1 F ( x ) , the  TTT transform is defined by
$$T(u)=\frac{1}{E(X)}\int_{0}^{F^{-1}(u)}S(x)\,dx,\qquad 0\le u\le 1,$$
where F 1 denotes the (generalized) quantile function of F. The function T ( u ) is increasing in u, and its curvature reveals information about the underlying failure rate behavior: concave curves indicate a decreasing failure rate (DFR), convex curves indicate an increasing failure rate (IFR), and curves close to the diagonal T ( u ) = u correspond to approximately exponential or memoryless behavior.
In practice, the TTT transform is implemented through its discrete empirical version. For ordered non-negative observations X ( 1 ) X ( n ) , the empirical TTT values are computed as
$$T_i=\frac{\sum_{j=1}^{i}X_{(j)}+(n-i)\,X_{(i)}}{\sum_{j=1}^{n}X_{(j)}},\qquad i=1,\dots,n,$$
and the TTT plot is obtained by graphing T i against u i = i / n . The diagonal line T ( u ) = u serves as a natural reference: empirical curves lying above the diagonal suggest DFR behavior, while those below the diagonal suggest IFR behavior.
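The empirical TTT values are a one-pass computation over the sorted data; a minimal sketch (our naming):

```python
def empirical_ttt(x):
    """Empirical TTT values T_i (Section 4.2) for non-negative data x."""
    xs = sorted(x)
    n = len(xs)
    total = float(sum(xs))
    running = 0.0
    out = []
    for i in range(1, n + 1):
        running += xs[i - 1]                       # sum of the i smallest values
        out.append((running + (n - i) * xs[i - 1]) / total)
    return out  # plot out[i-1] against u_i = i / n
```

By construction the sequence is non-decreasing and ends at $T_n=1$.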
In our setting, we apply the TTT transform to the non-negative magnitudes | Z | (equivalently, to the Y component in the Sy- Z representation), and we compare the empirical TTT plot with the TTT curve implied by the fitted Sy- P ( θ , λ ) model as a diagnostic tool; see Section 6.

4.3. Shannon Entropy

Proposition 14.
Under the assumptions of Proposition 13, the Shannon entropy $H(Z)=-\sum_{k\in\mathbb{Z}}P(Z=k)\log P(Z=k)$ (natural logarithm) admits the exact representation
$$H(Z)=-p_0\log(p_0)-2\theta\big[(\log(\theta)-\lambda)(1-e^{-\lambda})+\lambda\log(\lambda)-E_Y\{\log(Y!)\}\big],\tag{16}$$
where $p_0=(1-2\theta)+2\theta e^{-\lambda}$ and $Y\sim P(\lambda)$. In base-2 units, replace $H(Z)$ with $H(Z)/\log(2)$.
Proof. 
By symmetry, write $p_0=P(Z=0)$ and, for $n\ge 1$, $p_{\pm n}=P(Z=\pm n)=\theta e^{-\lambda}\lambda^{n}/n!$. Then
$$H(Z)=-p_0\log(p_0)-2\sum_{n=1}^{\infty}p_n\log(p_n),\qquad p_n=\theta e^{-\lambda}\frac{\lambda^{n}}{n!}.$$
Using $\log(p_n)=\log(\theta)-\lambda+n\log(\lambda)-\log(n!)$ and factoring out $2\theta e^{-\lambda}$,
$$
\begin{aligned}
H(Z)&=-p_0\log(p_0)-2\theta e^{-\lambda}\sum_{n=1}^{\infty}\frac{\lambda^{n}}{n!}\Big[\log(\theta)-\lambda+n\log(\lambda)-\log(n!)\Big]\\
&=-p_0\log(p_0)-2\theta\Big[(\log(\theta)-\lambda)\underbrace{e^{-\lambda}\sum_{n\ge 1}\frac{\lambda^{n}}{n!}}_{1-e^{-\lambda}}
+\log(\lambda)\underbrace{e^{-\lambda}\sum_{n\ge 1}\frac{n\lambda^{n}}{n!}}_{\lambda}
-\underbrace{e^{-\lambda}\sum_{n\ge 1}\frac{\lambda^{n}}{n!}\log(n!)}_{E_Y\{\log(Y!)\}}\Big].
\end{aligned}
$$
This yields (16).    □
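The closed form can be cross-checked against a direct $-\sum p\log p$ computation; the sketch below (our naming) approximates $E\{\log(Y!)\}$ by truncated summation.

```python
from math import exp, factorial, log

def sy_poisson_entropy(theta, lam, trunc=150):
    """Shannon entropy (nats) via the closed form of Proposition 14."""
    p0 = (1 - 2 * theta) + 2 * theta * exp(-lam)
    # E[log Y!] for Y ~ Poisson(lam), by truncated summation
    e_logfact = sum(exp(-lam) * lam ** n / factorial(n) * log(factorial(n))
                    for n in range(trunc))
    return (-p0 * log(p0)
            - 2 * theta * ((log(theta) - lam) * (1 - exp(-lam))
                           + lam * log(lam) - e_logfact))

def entropy_by_summation(theta, lam, trunc=150):
    """Direct -sum p log p over the support, as a cross-check."""
    def pmf(k):
        if k == 0:
            return (1 - 2 * theta) + 2 * theta * exp(-lam)
        return theta * exp(-lam) * lam ** abs(k) / factorial(abs(k))
    probs = (pmf(k) for k in range(-trunc, trunc + 1))
    return -sum(p * log(p) for p in probs if p > 0)
```

The two routes agree to machine precision for moderate $\lambda$.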
Corollary 10.
Using Stirling’s expansion [19] $\log(n!)=n\log(n)-n+\tfrac12\log(2\pi n)+O(1/n)$ and taking expectations for $Y\sim P(\lambda)$,
$$E\{\log(Y!)\}=\lambda(\log(\lambda)-1)+\tfrac12\log(2\pi\lambda)+O(\lambda^{-1}),$$
so that
$$H(Z)=-p_0\log(p_0)-2\theta\Big[(\log(\theta)-\lambda)(1-e^{-\lambda})+\lambda-\tfrac12\log(2\pi\lambda)\Big]+O(\lambda^{-1}).$$

5. The Statistical Inference of the Model

5.1. Method of Moments Estimation (MoM)

Consider an i.i.d. sample z = ( z 1 , , z n ) from Z Sy- P ( θ , λ ) with θ ( 0 , 1 / 2 ) and λ > 0 . From Section 4.1 we have
$$E(|Z|)=2\theta\lambda,\qquad E(Z^{2})=2\theta\lambda(1+\lambda).$$
Let the empirical moments be
$$\bar m_1=\frac1n\sum_{i=1}^{n}|z_i|,\qquad \bar m_2=\frac1n\sum_{i=1}^{n}z_i^{2}.$$
Matching m ¯ 1 and m ¯ 2 to their population counterparts yields a closed form solution. If  m ¯ 1 > 0 and m ¯ 2 > m ¯ 1 , the MoM are
$$\hat\lambda_M=\frac{\bar m_2}{\bar m_1}-1,\qquad \hat\theta_M=\frac{\bar m_1}{2\hat\lambda_M}=\frac{\bar m_1^{2}}{2(\bar m_2-\bar m_1)}.\tag{18}$$
The feasibility conditions $\bar m_1>0$ and $\bar m_2>\bar m_1$ ensure $\hat\lambda_M>0$ and $\hat\theta_M>0$. If $\hat\theta_M\ge 1/2$, the estimate lies outside the parameter space; in practice, one may either declare the MoM fit infeasible or project to $1/2-\varepsilon$ for a small $\varepsilon>0$ and use the projected value as initialization for maximum likelihood.
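The closed-form estimators and their feasibility check translate directly into code; a minimal sketch (our naming):

```python
def mom_estimates(z):
    """Method-of-moments estimates (lam_hat, theta_hat) for Sy-P.

    Returns None when the feasibility conditions m1 > 0, m2 > m1 fail;
    a returned theta_hat >= 1/2 signals that projection (or MLE) is needed.
    """
    n = len(z)
    m1 = sum(abs(v) for v in z) / n
    m2 = sum(v * v for v in z) / n
    if m1 <= 0 or m2 <= m1:
        return None
    lam_hat = m2 / m1 - 1
    theta_hat = m1 ** 2 / (2 * (m2 - m1))
    return lam_hat, theta_hat
```

For instance, the (hypothetical) sample $\{0,0,3,-3,4,-4,2,-2,3,-3\}$ has $\bar m_1=2.4$ and $\bar m_2=7.6$, giving the estimates in closed form.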

5.2. Asymptotic Distribution and Standard Errors (Delta Method)

Let $\mu_1=E(|Z|)=2\theta\lambda$ and $\mu_2=E(Z^{2})=2\theta\lambda(1+\lambda)$. From the closed-form moments,
$$\mathrm{Var}(|Z|)=2\theta\lambda+2\theta(1-2\theta)\lambda^{2},\qquad E(Z^{4})=2\theta\lambda(1+7\lambda+6\lambda^{2}+\lambda^{3}),$$
so
$$\mathrm{Var}(Z^{2})=E(Z^{4})-\mu_2^{2}=2\theta\lambda(1+7\lambda+6\lambda^{2}+\lambda^{3})-\{2\theta\lambda(1+\lambda)\}^{2}.$$
Using $E(|Z|^{3})=2\theta\lambda(1+3\lambda+\lambda^{2})$,
$$\mathrm{Cov}(|Z|,Z^{2})=E(|Z|^{3})-\mu_1\mu_2=2\theta\lambda\big\{1+3\lambda+\lambda^{2}-2\theta\lambda(1+\lambda)\big\}.$$
A multivariate central limit theorem [20] yields
$$\sqrt{n}\,\big\{(\bar m_1,\bar m_2)-(\mu_1,\mu_2)\big\}\xrightarrow{d}N(0,\Sigma),\qquad
\Sigma=\begin{pmatrix}\mathrm{Var}(|Z|)&\mathrm{Cov}(|Z|,Z^{2})\\[2pt]\mathrm{Cov}(|Z|,Z^{2})&\mathrm{Var}(Z^{2})\end{pmatrix}.$$
Define the transformation $g(\bar m_1,\bar m_2)=(\lambda,\theta)$ by $\lambda=\bar m_2/\bar m_1-1$ and $\theta=\bar m_1^{2}/\{2(\bar m_2-\bar m_1)\}$. The Jacobian at $(\mu_1,\mu_2)$ is
$$K=\begin{pmatrix}-\dfrac{\mu_2}{\mu_1^{2}}&\dfrac{1}{\mu_1}\\[8pt]\dfrac{\mu_1(2\mu_2-\mu_1)}{2(\mu_2-\mu_1)^{2}}&-\dfrac{\mu_1^{2}}{2(\mu_2-\mu_1)^{2}}\end{pmatrix}
=\begin{pmatrix}-\dfrac{1+\lambda}{2\theta\lambda}&\dfrac{1}{2\theta\lambda}\\[8pt]\dfrac{1+2\lambda}{2\lambda^{2}}&-\dfrac{1}{2\lambda^{2}}\end{pmatrix}.$$
By the delta method,
$$\sqrt{n}\,\big\{(\hat\lambda_M,\hat\theta_M)-(\lambda,\theta)\big\}\xrightarrow{d}N(0,V),\qquad V=K\Sigma K^{\top},$$
with a practical plug-in estimator of V obtained by replacing ( θ , λ ) with ( θ ^ M , λ ^ M ) .
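The plug-in computation of $V=K\Sigma K^{\top}$ is mechanical for the 2x2 case and needs no linear-algebra library; a sketch (our naming):

```python
def mom_asymptotic_cov(theta, lam):
    """Plug-in delta-method covariance V = K Sigma K^T of Section 5.2."""
    mu1 = 2 * theta * lam
    mu2 = 2 * theta * lam * (1 + lam)
    var_abs = 2 * theta * lam + 2 * theta * (1 - 2 * theta) * lam ** 2
    ez4 = 2 * theta * lam * (1 + 7 * lam + 6 * lam ** 2 + lam ** 3)
    var_z2 = ez4 - mu2 ** 2
    cov = 2 * theta * lam * (1 + 3 * lam + lam ** 2) - mu1 * mu2
    Sigma = [[var_abs, cov], [cov, var_z2]]
    K = [[-(1 + lam) / (2 * theta * lam), 1 / (2 * theta * lam)],
         [(1 + 2 * lam) / (2 * lam ** 2), -1 / (2 * lam ** 2)]]
    # V = K Sigma K^T, done entry by entry
    KS = [[sum(K[i][k] * Sigma[k][j] for k in range(2)) for j in range(2)]
          for i in range(2)]
    return [[sum(KS[i][k] * K[j][k] for k in range(2)) for j in range(2)]
            for i in range(2)]
```

Replacing $(\theta,\lambda)$ with $(\hat\theta_M,\hat\lambda_M)$ gives the practical standard errors.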

5.3. Algorithm for MoM Estimation

To facilitate practical implementation, Algorithm 1 below summarizes the step-by-step computation of the moment-based estimates.
Remark 8.
The closed form pair ( m ¯ 1 , m ¯ 2 ) provides stable initialization for maximum likelihood, typically improving convergence and reducing sensitivity to local optima. In very small samples, the method of moments may yield θ ^ M close to 1 / 2 ; in that case, it is advisable to compare the implied modality (see Proposition 7) using the empirical shape as a diagnostic check.
Algorithm 1 Computation of MoM estimates
1. Compute $\bar m_1=\sum_{i}|z_i|/n$ and $\bar m_2=\sum_{i}z_i^{2}/n$.
2. If $\bar m_1\le 0$ or $\bar m_2\le\bar m_1$, declare the MoM fit infeasible and proceed with likelihood-based estimation.
3. Else, compute $(\hat\lambda_M,\hat\theta_M)$ via (18).
4. Optionally report delta-method standard errors using the plug-in estimate of $V$.

5.4. Likelihood-Based Inference

Given an i.i.d. sample $z=(z_1,\dots,z_n)$ from $Z\sim$ Sy-$P(\theta,\lambda)$ with $\theta\in(0,1/2)$ and $\lambda>0$, let $n_0=\#\{i: z_i=0\}$ and $n_1=n-n_0$ be the numbers of zeros and nonzeros, respectively. Writing $p_0=P(Z=0)=(1-2\theta)+2\theta e^{-\lambda}=1-2\theta(1-e^{-\lambda})$, the likelihood factorizes as
$$L(\theta,\lambda\mid z)=p_0^{\,n_0}\,\theta^{\,n_1}\,e^{-\lambda n_1}\,\frac{\lambda^{\sum_{i:z_i\neq 0}|z_i|}}{\prod_{i:z_i\neq 0}|z_i|!},$$
so that the log-likelihood is
$$\ell(\theta,\lambda)=n_0\log(p_0)+n_1\log(\theta)-\lambda n_1+\log(\lambda)\sum_{i:z_i\neq 0}|z_i|-\sum_{i:z_i\neq 0}\log(|z_i|!).$$
The score equations are obtained from
$$\frac{\partial\ell}{\partial\theta}=-\frac{2n_0(1-e^{-\lambda})}{p_0}+\frac{n_1}{\theta},\qquad
\frac{\partial\ell}{\partial\lambda}=-\frac{2\theta n_0 e^{-\lambda}}{p_0}-n_1+\frac{1}{\lambda}\sum_{i:z_i\neq 0}|z_i|.$$
The maximum likelihood estimators (MLEs) solve $\partial\ell/\partial\theta=0$ and $\partial\ell/\partial\lambda=0$ numerically (no closed form in general). For numerical maximization of the log-likelihood, we used the quasi-Newton BFGS algorithm [21], which provides stable performance for smooth two-parameter models such as Sy-$P$. The optimization was initialized at $(\theta^{(0)},\lambda^{(0)})=(\bar p_0/2,\max\{\bar Z_2,10^{-3}\})$, where $\bar p_0$ is the empirical proportion of zeros and $\bar Z_2$ is the empirical second moment. A convergence tolerance of $10^{-8}$ on both the absolute and relative change in the objective value was imposed. To guard against potential local maxima, the optimization was repeated from five random starting points generated uniformly over $(0,0.5)\times(0,10)$. The algorithm converged to identical estimates, indicating that the likelihood is well behaved for the Sy-$P$ model. These implementation details ensure reproducibility and support the observed numerical stability of the MLE.

5.5. Score Derivatives and Information Matrices

Hessian (second derivatives).
$$\frac{\partial^{2}\ell}{\partial\theta^{2}}=-\frac{4n_0(1-e^{-\lambda})^{2}}{p_0^{2}}-\frac{n_1}{\theta^{2}},\qquad
\frac{\partial^{2}\ell}{\partial\theta\,\partial\lambda}=\frac{\partial^{2}\ell}{\partial\lambda\,\partial\theta}=-\frac{2n_0 e^{-\lambda}}{p_0^{2}},\qquad
\frac{\partial^{2}\ell}{\partial\lambda^{2}}=\frac{2\theta n_0 e^{-\lambda}(1-2\theta)}{p_0^{2}}-\frac{1}{\lambda^{2}}\sum_{i:z_i\neq 0}|z_i|.$$
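Since the score (Section 5.4) and the Hessian above are available in closed form, a Newton-Raphson iteration is a natural alternative to the BFGS routine used in the paper. The following is a minimal sketch (our naming), with crude box constraints keeping the iterates in the interior of the parameter space; it is not the paper's implementation.

```python
from math import exp

def sy_poisson_mle(z, theta0=0.3, lam0=2.0, iters=200):
    """Newton-Raphson MLE for Sy-P using the closed-form score and Hessian."""
    n0 = sum(1 for v in z if v == 0)
    n1 = len(z) - n0
    s_abs = sum(abs(v) for v in z)
    th, la = theta0, lam0
    for _ in range(iters):
        p0 = 1 - 2 * th * (1 - exp(-la))
        g1 = -2 * n0 * (1 - exp(-la)) / p0 + n1 / th           # dl/dtheta
        g2 = -2 * th * n0 * exp(-la) / p0 - n1 + s_abs / la    # dl/dlambda
        h11 = -4 * n0 * (1 - exp(-la)) ** 2 / p0 ** 2 - n1 / th ** 2
        h12 = -2 * n0 * exp(-la) / p0 ** 2
        h22 = 2 * th * n0 * exp(-la) * (1 - 2 * th) / p0 ** 2 - s_abs / la ** 2
        det = h11 * h22 - h12 * h12
        # Newton step (th, la) <- (th, la) - H^{-1} g, clamped to the interior
        th = min(max(th - (h22 * g1 - h12 * g2) / det, 1e-6), 0.5 - 1e-6)
        la = max(la - (h11 * g2 - h12 * g1) / det, 1e-6)
    return th, la
```

At the interior MLE the stationarity conditions reduce to $2\hat\theta(1-e^{-\hat\lambda})=n_1/n$ and $\hat\lambda/(1-e^{-\hat\lambda})$ equal to the mean of the nonzero magnitudes, which gives a convenient convergence check.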

5.6. Observed Information

The observed information matrix is J ( θ , λ ) = 2 ( θ , λ ) :
$$J(\theta,\lambda)=\begin{pmatrix}
\dfrac{4n_0(1-e^{-\lambda})^{2}}{p_0^{2}}+\dfrac{n_1}{\theta^{2}} & \dfrac{2n_0 e^{-\lambda}}{p_0^{2}}\\[10pt]
\dfrac{2n_0 e^{-\lambda}}{p_0^{2}} & \dfrac{1}{\lambda^{2}}\displaystyle\sum_{i:z_i\neq 0}|z_i|-\dfrac{2\theta n_0 e^{-\lambda}(1-2\theta)}{p_0^{2}}
\end{pmatrix}.$$
An asymptotically consistent covariance estimator for $(\hat\theta,\hat\lambda)$ is $J(\hat\theta,\hat\lambda)^{-1}$.
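In code, inverting the 2x2 observed information yields Wald standard errors directly; a sketch (our naming):

```python
from math import exp

def observed_information(theta, lam, z):
    """Observed information J(theta, lam) from Section 5.6."""
    n0 = sum(1 for v in z if v == 0)
    n1 = len(z) - n0
    s_abs = sum(abs(v) for v in z)
    p0 = 1 - 2 * theta * (1 - exp(-lam))
    j11 = 4 * n0 * (1 - exp(-lam)) ** 2 / p0 ** 2 + n1 / theta ** 2
    j12 = 2 * n0 * exp(-lam) / p0 ** 2
    j22 = s_abs / lam ** 2 - 2 * theta * n0 * exp(-lam) * (1 - 2 * theta) / p0 ** 2
    return [[j11, j12], [j12, j22]]

def wald_standard_errors(theta, lam, z):
    """Standard errors from the inverse observed information."""
    J = observed_information(theta, lam, z)
    det = J[0][0] * J[1][1] - J[0][1] * J[1][0]
    return ((J[1][1] / det) ** 0.5, (J[0][0] / det) ** 0.5)
```

Evaluating at the MLE gives the plug-in covariance used for the Wald intervals of Section 6.2.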

5.7. Expected Fisher Information

Let $q=1-p_0=2\theta(1-e^{-\lambda})$. Using $E(N_0)=np_0$, $E(N_1)=nq$, and $E(\sum_{i:z_i\neq 0}|Z_i|)=E(\sum_{i=1}^{n}|Z_i|)=nE(|Z|)=2n\theta\lambda$, the expected (per-observation) Fisher information is
$$i(\theta,\lambda)=\frac1n\,E\{J(\theta,\lambda)\}=\begin{pmatrix}
\dfrac{4(1-e^{-\lambda})^{2}}{p_0}+\dfrac{q}{\theta^{2}} & \dfrac{2e^{-\lambda}}{p_0}\\[10pt]
\dfrac{2e^{-\lambda}}{p_0} & \dfrac{2\theta}{\lambda}-\dfrac{2\theta e^{-\lambda}(1-2\theta)}{p_0}
\end{pmatrix},\qquad I(\theta,\lambda)=n\,i(\theta,\lambda).$$
Remark 9.
The observed Fisher information matrix for the Sy- P model is positive definite whenever ( θ , λ ) belongs to the interior of the parameter space ( 0 < θ < 0.5 , λ > 0 ) . This follows from the strict concavity of the log-likelihood in both parameters: the sign component induces a strictly negative second derivative in θ, while the Poisson magnitude contributes a negative curvature in λ for all λ > 0 . Consequently, the Hessian matrix is negative definite, and the observed information and its negative remain positive definite away from the boundary. Loss of positive definiteness can only occur near the boundary limits θ 0 , θ 0.5 , or λ 0 , where the model collapses to a degenerate or nearly degenerate form, and classical asymptotic theory is no longer applicable. Thus, for all practical estimation scenarios in the interior region, the observed information matrix provides a valid and reliable approximation to the asymptotic covariance.
Hence, under standard regularity conditions,
$$\sqrt{n}\,\big\{(\hat\theta,\hat\lambda)-(\theta,\lambda)\big\}\xrightarrow{d}N\big(0,\,i(\theta,\lambda)^{-1}\big),$$
and a large-sample covariance estimator is $I(\hat\theta,\hat\lambda)^{-1}=\{n\,i(\hat\theta,\hat\lambda)\}^{-1}$.

5.8. Percentile Estimators

Percentile estimators provide a robust and intuitive alternative to likelihood- and moment-based procedures. Because the Sy-$P(\theta,\lambda)$ distribution is defined through a symmetric, zero-centered mixture structure, its shape is largely determined by its quantiles, particularly those away from the median. Estimation based on percentiles therefore captures the distributional form by direct CDF inversion, without relying on the smoothness of the likelihood or on long-tailed moments. The estimators are obtained by matching the empirical and model-based percentiles at two symmetric quantiles, namely the 25th and 75th percentiles. Let $F(z;\theta,\lambda)$ be the Sy-$P$ CDF, and let $Q(p;\theta,\lambda)=F^{-1}(p;\theta,\lambda)$ denote the population $p$-th percentile.
Given empirical percentiles $\tilde Q(p_1)$ and $\tilde Q(p_2)$, that is, the sample percentiles at levels $p_1$ and $p_2$, the goal of percentile estimation is to find parameter values $(\theta,\lambda)$ such that the model percentiles match the data percentiles. We thus define the percentile estimators $(\hat\theta,\hat\lambda)$ as the solution of
$$Q(p_1;\theta,\lambda)=\tilde Q(p_1)\qquad\text{and}\qquad Q(p_2;\theta,\lambda)=\tilde Q(p_2).$$
These two non-linear equations can be solved numerically to obtain the estimators. The practical utility of the proposed percentile estimators is illustrated in Section 6, where they are applied to both empirical datasets alongside the MLE estimators. The comparison highlights the robustness of percentile-based estimation, especially in settings with moderate tails or irregular zero concentrations.
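One derivative-free way to solve the matching equations is a coarse grid search over $(\theta,\lambda)$; the sketch below is purely illustrative (function names and grid ranges are our choices, not the paper's implementation).

```python
from math import exp, factorial

def sy_quantile(p, theta, lam, trunc=60):
    """Model p-th quantile: the smallest integer k with F_Z(k) >= p."""
    def pmf(k):
        if k == 0:
            return (1 - 2 * theta) + 2 * theta * exp(-lam)
        return theta * exp(-lam) * lam ** abs(k) / factorial(abs(k))
    cum = 0.0
    for k in range(-trunc, trunc + 1):
        cum += pmf(k)
        if cum >= p:
            return k
    return trunc

def percentile_estimates(z, p1=0.25, p2=0.75, grid=25):
    """Grid-search percentile estimators matching the p1- and p2-quantiles."""
    zs = sorted(z)
    q1 = zs[int(p1 * (len(zs) - 1))]
    q2 = zs[int(p2 * (len(zs) - 1))]
    best, best_val = None, float("inf")
    for i in range(1, grid):
        for j in range(1, grid):
            th, la = 0.5 * i / grid, 8.0 * j / grid
            val = (abs(sy_quantile(p1, th, la) - q1)
                   + abs(sy_quantile(p2, th, la) - q2))
            if val < best_val:
                best, best_val = (th, la), val
    return best
```

Because the quantiles are integer-valued, the matching equations typically admit a set of solutions; the grid search simply returns one minimizer of the discrepancy.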

6. Simulation Study

To evaluate the finite-sample properties of the proposed estimators for Z Sy- P ( θ , λ ) , we performed a comprehensive Monte Carlo experiment. This simulation mechanism follows directly from Proposition 1, which characterizes any Sy- Z random variable as the product of an independent symmetric sign and a non-negative magnitude. Specifically, the representation
$$Z=V_1\,(2V_2-1)\,Y$$
is associated with the probability mass function given in Equation (12): $V_1$ determines whether $Z=0$ or $Z\neq 0$ applies, $V_2$ selects the sign when $Z\neq 0$, and $Y$ supplies the magnitude. Thus, the generative steps reproduce exactly the Sy-$P$ distribution implied by the theoretical results. For each parameter configuration, 10,000 independent samples of size $n\in\{10,20,\dots,200\}$ were drawn under the true parameters $(\theta,\lambda)=(0.3,3)$. This parameter configuration is representative of many practical scenarios: $\theta=0.3$ yields a moderate zero mass in the Sy-$P$ model, while $\lambda=3$ produces a magnitude distribution with moderate dispersion. Together, these values generate datasets whose symmetry, central concentration, and tail behavior closely resemble those encountered in empirical applications, making them well suited for assessing estimation accuracy in realistic settings. The log-likelihood $\ell(\theta,\lambda)$ was maximized numerically to obtain MLEs, and MoM estimators were computed from the first two empirical moments. Algorithm 2 below provides a concise summary of the exact data-generation procedure used in all Monte Carlo experiments. This step-by-step formulation clarifies how independent draws from the Sy-$P$ model are produced and ensures full reproducibility of the simulation design.
Algorithm 2 Generation of an i.i.d. sample $\{Z_i\}_{i=1}^{n}$ from Sy-$P(\theta,\lambda)$
1. For $i=1,\dots,n$, repeat Steps 2-4.
2. Draw $V_{1i}\sim\mathrm{Bernoulli}(2\theta)$. If $V_{1i}=0$, set $Z_i\leftarrow 0$ and continue with the next $i$.
3. Draw $V_{2i}\sim\mathrm{Bernoulli}(1/2)$ and set $S_i\leftarrow 2V_{2i}-1\in\{-1,1\}$.
4. Draw $Y_i\sim P(\lambda)$ and set $Z_i\leftarrow S_i\,Y_i$.
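Algorithm 2 can be implemented in a few lines; the sketch below (our naming, seeded for reproducibility) uses a Knuth-style product-of-uniforms Poisson generator for the magnitude draw.

```python
import random
from math import exp

def sample_sy_poisson(n, theta, lam, seed=0):
    """Draw n i.i.d. values from Sy-P(theta, lam) via Algorithm 2."""
    rng = random.Random(seed)
    out = []
    for _ in range(n):
        if rng.random() >= 2 * theta:           # V1 = 0: atom at zero
            out.append(0)
            continue
        sign = 1 if rng.random() < 0.5 else -1  # V2 gives the sign
        # Poisson(lam) draw by Knuth's product-of-uniforms method
        limit, k, prod = exp(-lam), 0, 1.0
        while True:
            prod *= rng.random()
            if prod <= limit:
                break
            k += 1
        out.append(sign * k)
    return out
```

For large samples the empirical mean is near zero, the mean absolute value is near $2\theta\lambda$, and the zero fraction is near $(1-2\theta)+2\theta e^{-\lambda}$, as the theory predicts.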

6.1. Comparison of MLE and MoM Estimators

For each n, bias and mean-squared error (MSE) were calculated as
$$\mathrm{Bias}_n(\hat\eta)=\frac1R\sum_{r=1}^{R}(\hat\eta_r-\eta),\qquad \mathrm{MSE}_n(\hat\eta)=\frac1R\sum_{r=1}^{R}(\hat\eta_r-\eta)^{2},$$
with η { θ , λ } . Figure 5 plots Monte Carlo estimates of Bias n against n, whereas Figure 6 plots Monte Carlo estimates of MSE n against n.
Figure 5. Bias of λ ^ and θ ^ under Sy- P ( θ , λ ) . Curves compare MLE and MoM estimators across sample sizes.
Figure 6. MSE of λ ^ and θ ^ under Sy- P ( θ , λ ) . Curves compare MLE and MoM estimators across sample sizes.
Both MLEs and MoM estimators show rapidly vanishing bias and MSE as the sample size increases, consistent with standard asymptotic theory. The MoM estimator displays a slightly higher MSE for small n compared to the MLE, but its performance converges to that of the MLE as n 200 .
Across the grid of $n$, $\hat\theta$ shows a mild positive bias, while $\hat\lambda$ tends to underestimate the true value. Moreover, $\mathrm{MSE}_n(\hat\lambda)$ remains larger than $\mathrm{MSE}_n(\hat\theta)$, reflecting the higher sampling variability of the rate parameter. These bias patterns are consistent with the structure of the log-likelihood and the Fisher information described in Section 5.7. The information for $\theta$ is primarily driven by the zero and near-zero observations, where the curvature of the likelihood is pronounced; this yields relatively strong identifiability for $\theta$ and explains the small positive finite-sample bias. In contrast, the information for $\lambda$ depends on the dispersion of the magnitude component: when many observations fall at or near zero, the effective information about $\lambda$ is reduced, leading to a mild tendency toward underestimation. As the sample size increases, both information components scale linearly with $n$, causing the biases in $\hat\theta$ and $\hat\lambda$ to diminish, in agreement with the asymptotic theory.

6.2. Standard-Error Accuracy and Confidence-Interval Coverage

To assess standard-error accuracy and validate asymptotic normality, we formed observed Wald confidence intervals using the inverse Hessian at the MLE, as given by Var ^ ( θ ^ , λ ^ ) = J ( θ ^ , λ ^ ) 1 . For each replication and parameter, we constructed two-sided Wald intervals at the 90%, 95%, and 99% nominal levels and recorded empirical coverage along with the average interval length. Figure 7 reports the CI coverage, whereas Figure 8 shows the average CI length as functions of n.
Figure 7. Empirical coverage probabilities of the observed Wald confidence intervals for θ ^ and λ ^ as a function of n.
Figure 8. Average lengths of the observed Wald confidence intervals for θ ^ and λ ^ as a function of n.
The observed Wald intervals show coverage approaching nominal levels as n increases, with noticeable gains between small and moderate sample sizes. In cases of small λ , the intervals can be slightly conservative for very small n, but accuracy improves quickly with n, and the average lengths decrease at the expected n 1 / 2 rate, consistent with the asymptotic theory for the MLE.
A further point of reassurance concerns the use of Wald confidence intervals in a discrete setting. Although discrete models sometimes induce irregular likelihood shapes and poor Wald performance, the Sy-$P$ model benefits from a smooth and strictly concave log-likelihood in both parameters. The independence between the symmetric sign and the Poisson magnitude yields well-behaved score functions and a Fisher information matrix that remains finite and positive for all admissible $(\theta,\lambda)$. These properties ensure that the MLEs lie well within the interior of the parameter space and satisfy standard differentiability conditions, so Wald intervals inherit the usual asymptotic validity even in finite samples. This explains why the simulation results show accurate coverage levels despite the inherent discreteness of the data-generating process. The observed coverage behavior can be directly linked to the analytical form of the Fisher information derived in Section 5.7. When the zero mass $p_0=(1-2\theta)+2\theta e^{-\lambda}$ is large, the term $q/\theta^{2}$ in the information matrix (where $q=2\theta(1-e^{-\lambda})$) dominates, yielding high curvature of the log-likelihood with respect to $\theta$ and, therefore, tighter confidence intervals. Conversely, the information component associated with $\lambda$ depends on both $\theta$ and $\lambda$ through $2\theta/\lambda-2\theta e^{-\lambda}(1-2\theta)/p_0$, which can be relatively flat for small $\lambda$. This explains why the empirical coverage for $\lambda$ tends to be slightly conservative in small samples, whereas the coverage for $\theta$ rapidly approaches nominal levels. As the sample size increases, both components of the Fisher information scale linearly with $n$, leading to asymptotic normality and the near-exact coverage observed for $n\ge 100$.

7. Practical Data Analysis

We illustrate the applicability of the proposed Sy- P ( θ , λ ) model using two real-life datasets. After first-differencing (day-to-day or session-to-session changes), the outcomes lie on Z , matching the model’s support.

7.1. PTT Stock Price Increments (Thailand, 2014)

The first dataset, previously analyzed in ref. [4], consists of daily closing prices for the Petroleum Authority of Thailand (PTT), recorded from 1 April 2014 to 20 October 2014 (Stock Exchange of Thailand); the analysis below uses their day-to-day increments. For completeness, the increment data are as follows:
-12, -11, -9, -8, -7, -6, -6, -5, -5, -5, -5, -5, -5, -5, -5, -4, -4, -4,
-4, -4, -4, -4, -4, -4, -3, -3, -3, -3, -3, -3, -3, -3, -3, -3, -2, -2,
-2, -2, -2, -2, -2, -2, -2, -2, -2, -2, -1, -1, -1, -1, -1, -1, -1, -1,
-1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 1, 1, 1, 1, 1, 1, 1, 1,
1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 2, 2, 2, 2, 2, 2, 2, 3, 3, 3, 3, 3, 3, 3,
3, 3, 3, 4, 4, 4, 4, 4, 4, 4, 4, 4, 5, 5, 5, 5, 6, 6, 6, 6, 6, 6, 7, 7,
7, 7, 8, 8, 9, 9, 9, 10, 14
Following ref. [4], we model price increments, that is, the day-to-day change in the closing price measured in integer Baht. A Mann–Kendall test against a monotonic trend yields a p-value of 0.9467, providing no evidence of such a trend and supporting the i.i.d. working assumption for the increment series.
We fit Sy- P ( θ , λ ) by maximum likelihood, obtaining ( θ ^ , λ ^ ) = ( 0.45 , 3.71 ) . A Kolmogorov–Smirnov (K–S) test gives p = 0.47 , indicating an adequate fit. Furthermore, we also find the percentile estimators by fixing p 1 = 0.25 , p 2 = 0.75 of the parameters, which are given by ( θ ^ , λ ^ ) = ( 0.37 , 3.43 ) . Figure 9 overlays the fitted probability mass function on the empirical frequencies. Figure 10 contains the total time on test (TTT) plot, as described in Section 4.2. The empirical TTT curve was compared to the TTT curve implied by the fitted Sy- P model. A close alignment of the curves indicates an adequate fit.
Figure 9. Empirical PMF and fitted Sy- P ( θ ^ , λ ^ ) for PTT price increments.
Figure 10. TTT plot for PTT price increments comparing the empirical curve with the fitted Sy- P model. The diagonal line represents the exponential distribution reference.
For benchmarking, we also fit the perturbed discrete Laplace P D L ( p , α ) [6], the discrete Laplace D L ( p ) , the discrete normal D N ( μ , σ ) [5], and the discrete asymmetric Laplace D A L D ( μ , β , λ ) [4]. Table 1 reports log-likelihood, AIC, and BIC. The  Sy- P model attains the smallest AIC/BIC, and, following ref. [22], the BIC differences relative to competitors exceed 2, providing positive evidence in favor of Sy- P . This good empirical fit is consistent with the theoretical properties of the Sy- P model. The financial return increments are nearly symmetric and exhibit a pronounced central mass at zero; the Sy- P structure accommodates both features naturally through its symmetric sign component and the tunable zero probability governed by θ . Moreover, the inherited Poisson tail behavior aligns well with the moderate dispersion observed in the positive and negative increments, explaining the improved information-criteria performance over classical competitors.
Table 1. Model comparison for PTT price increments.
Remark 10.
The Sy- P fit closely tracks competitors while imposing exact symmetry around zero and supporting closed-form manipulations for sums and differences (Section 3.9).

7.2. Attendance Increments in a Marketing Course (Lyon, 2012–2013)

We revisit a dataset previously analyzed in ref. [18], where the extended Poisson model was introduced. The data record attendance counts for 60 consecutive marketing sessions in the Bachelor program at IDRAC International Management School (Lyon, France), between 1 September 2012 and 1 April 2013. For completeness, the exact dataset is as given below:
-5, -5, -5, -4, -4, -4, -3, -3, -3, -3, -3, -3, -2, -2, -2, -2, -2, -2,
-2, -2, -2, -2, -2, -2, -2, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 0,
0, 0, 0, 0, 0, 0, 1, 1, 1, 1, 1, 1, 1, 1, 1, 2, 2, 2, 2, 2, 3, 3, 3, 3,
3, 3, 3, 3, 3, 3, 4, 4, 5, 5, 5, 5, 5, 5, 6, 7, 8
As documented in ref. [18], we analyze first differences between consecutive sessions, yielding n = 59 integer-valued observations ranging from −5 to 7. This transformation centers the process and produces signed counts, providing a natural benchmark for two-sided discrete models.
As a preliminary diagnostic, the runs test reported in ref. [18] indicates that while the raw series is not random (p-value = 0 ), the differenced series behaves as an approximately random sample (p-value = 0.7077 ), supporting independent-sample modeling at the differenced scale. We fit the symmetric Poisson model Sy- P ( θ , λ ) by maximum likelihood and compare it with standard competitors: discrete Laplace D L ( p ) , perturbed discrete Laplace P D L ( p , α ) [6], discrete normal D N ( μ , σ ) [5], and the extended Poisson E- P ( p , λ ) introduced in ref. [18]. Model fit is assessed via log-likelihood and information criteria (AIC/BIC), using grouped frequencies consistent with ref. [18] for the goodness-of-fit evaluation.
Figure 11 displays the fitted probability mass functions, whereas Figure 12 contains the Total Time on Test (TTT) plot. The empirical TTT curve was compared to the TTT curve implied by the fitted Sy- P model. A close alignment of the curves indicates an adequate fit. Table 2 reports the numerical comparisons. The symmetric Poisson achieves the best (smallest) AIC and BIC, marginally improving upon the extended Poisson while preserving explicit symmetry around zero. The estimated parameters for Sy- P are θ ^ = 0.49 and λ ^ = 2.47 , consistent with a zero-centered, moderately dispersed pattern with a high concentration of probability near the origin. Further, we also find the percentile estimators by fixing p 1 = 0.25 ,   p 2 = 0.75 of the parameters, which are given by ( θ ^ , λ ^ ) = ( 0.41 , 2.87 ) . The favorable fit obtained for attendance differences directly reflects the strengths of the Sy- P specification. These data are balanced around zero and moderately dispersed; the Sy- P model captures these patterns through exact symmetry, flexible control of the zero mass via θ , and tail heredity from the Poisson magnitude. Such theoretical features translate into practical performance, as confirmed by the reduced AIC/BIC values relative to other symmetric alternatives.
Figure 11. Empirical PMF and fitted Sy- P ( θ ^ , λ ^ ) for attendance increments.
Figure 12. TTT plot for attendance increments comparing the empirical curve with the fitted Sy- P model. The diagonal line represents the exponential distribution reference.
Table 2. Model comparison for attendance increments (IDRAC, Lyon, 2012–2013).
Remark 11.
(i) The Sy- P fit closely tracks the extended Poisson while enforcing exact symmetry and offering closed-form tools for sums/differences (Section 3.9). (ii) The estimate θ ^ = 0.49 lies near the upper boundary 1 / 2 , aligning with pronounced concentration at zero and balanced tails; convergence diagnostics were stable under maximum likelihood with MoM initialization.

8. Concluding Remarks

In this paper, we introduce the Sy-$Z$ family, a unified and tractable framework for symmetric integer-valued modeling based on a simple sign-magnitude decomposition. Writing $Z=XY$ with $X\sim\mathrm{SMB}(\theta)$ and $Y\in\mathbb{N}_0$ as independent variables yields models that are exactly symmetric around zero, allowing for interpretable control of the atom at zero and inheriting tail behavior and dispersion from the chosen magnitude distribution. Within this framework, we derived closed-form expressions for the PMF and CDF, bilateral generating functions, and even-order moments; established a characterization by symmetry and sign-magnitude independence; and studied tail transfer, modality, and the equality in law between sums and differences of independent Sy-$Z$ variables.
Specializing to a Poisson magnitude leads to the Sy-Poisson model, for which we obtained explicit generating functions, moment identities, and entropy, and developed both method-of-moments and likelihood-based inference. Monte Carlo simulations showed that the maximum likelihood estimators exhibit small finite-sample bias and accurate Wald confidence-interval coverage, while the TTT plots and empirical applications in finance and education confirmed that Sy-Poisson can match or improve upon classical two-sided competitors on $\mathbb{Z}$. Beyond these case studies, the sign-magnitude structure suggests further applications in quality-control contexts, where signed deviations from target defect levels or specification limits arise naturally. Promising directions for future work include regression extensions, dependence modeling and time-series formulations, multivariate constructions, and the development of Sy-$Z$ based monitoring tools for quality-control problems.

Author Contributions

Conceptualization, H.S.B. and M.K.; methodology, M.K., H.S.B., H.S.S., S.R.B. and L.A.; software, M.K. and S.R.B.; validation, M.K., H.S.B., H.S.S. and S.R.B.; writing—original draft preparation, M.K., H.S.B., H.S.S. and S.R.B.; writing—review and editing, M.K., H.S.B., H.S.S., S.R.B., L.A. and A.F.D.; visualization, M.K., S.R.B., L.A. and A.F.D.; funding acquisition, L.A. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by the Deanship of Graduate Studies and Scientific Research at Najran University under the Easy Funding Program, grant code NU/EFP/SERC/13/239.

Data Availability Statement

This paper’s application section includes a list of the data that were used, along with their citations.

Acknowledgments

The authors are thankful to the Deanship of Graduate Studies and Scientific Research at Najran University for funding this work under the Easy Funding Program grant code (NU/EFP/SERC/13/239).

Conflicts of Interest

The authors declare no potential conflicts of interest.

Appendix A

Proposition A1.
Let $\theta\in(0,1/2)$. Consider independent random variables $S\sim\mathrm{Bernoulli}(1/2)$ and $T\sim\mathrm{Bernoulli}(2\theta)$. Define the random variable
$$X=\begin{cases}0, & \text{if } T=0,\\ 1, & \text{if } S=1 \text{ and } T=1,\\ -1, & \text{if } S=0 \text{ and } T=1.\end{cases}$$
Then X SMB ( θ ) , and its PMF is given by
$$P(X=0)=1-2\theta,\qquad P(X=1)=P(X=-1)=\theta.$$
Proof. 
By the definition of X, we have
$$\{X=0\}=\{T=0\},\qquad \{X=1\}=\{S=1,\,T=1\},\qquad \{X=-1\}=\{S=0,\,T=1\}.$$
Hence $P(X=0)=P(T=0)=1-2\theta$. Using the independence of $S$ and $T$,
$$P(X=1)=P(S=1)\,P(T=1)=\tfrac12\cdot 2\theta=\theta,$$
and similarly
$$P(X=-1)=P(S=0)\,P(T=1)=\tfrac12\cdot 2\theta=\theta.$$
Thus X has a PMF
$$P(X=0)=1-2\theta,\qquad P(X=1)=P(X=-1)=\theta,$$
which is exactly SMB ( θ ) . □
Proposition A2.
To generate a random variable X SMB ( θ ) :
1. 
Generate U Uniform ( 0 , 1 ) .
2. 
Set
$$X=\begin{cases}0, & \text{if } 0\le U<1-2\theta,\\ 1, & \text{if } 1-2\theta\le U<1-\theta,\\ -1, & \text{if } 1-\theta\le U<1.\end{cases}$$
Proof. 
Since $U\sim\mathrm{Uniform}(0,1)$, for any interval $[a,b)\subseteq[0,1)$ we have $P(a\le U<b)=b-a$. By the construction,
$$\{X=0\}=\{0\le U<1-2\theta\},\qquad \{X=1\}=\{1-2\theta\le U<1-\theta\},\qquad \{X=-1\}=\{1-\theta\le U<1\}.$$
Hence
$$P(X=0)=(1-2\theta)-0=1-2\theta,\qquad P(X=1)=(1-\theta)-(1-2\theta)=\theta,\qquad P(X=-1)=1-(1-\theta)=\theta.$$
Thus X has PMF
$$P(X=0)=1-2\theta,\qquad P(X=1)=P(X=-1)=\theta,$$
which is exactly SMB ( θ ) . □

References

  1. Skellam, J.G. The frequency distribution of the difference between two Poisson variates belonging to different populations. J. R. Stat. Soc. Ser. A 1946, 109, 296. [CrossRef]
  2. Inusah, S.; Kozubowski, T.J. A discrete analogue of the Laplace distribution. J. Stat. Plan. Inference 2006, 136, 1090–1102. [CrossRef]
  3. Barbiero, A. An alternative discrete skew Laplace distribution. Stat. Methodol. 2014, 16, 47–67. [CrossRef]
  4. Sangpoom, S.; Bodhisuwan, W. The discrete asymmetric Laplace distribution. J. Stat. Theory Pract. 2016, 10, 73–86. [CrossRef]
  5. Roy, D. The discrete normal distribution. Commun. Stat. Theory Methods 2003, 32, 1871–1883. [CrossRef]
  6. Bapat, S.R.; Bakouch, H.; Chesneau, C. A distribution on Z via perturbing the Laplace distribution with applications to finance and health data. Stat 2023, 12, e535. [CrossRef]
  7. Chakraborty, S.; Chakravarty, D. A new discrete probability distribution with integer support on (−∞, ∞). Commun. Stat. Theory Methods 2016, 45, 492–505. [CrossRef]
  8. Ong, S.H.; Shimizu, K.; Choung, M.N. A class of distribution arising from difference of two random variables. Comput. Stat. Data Anal. 2008, 52, 1490–1499. [CrossRef]
  9. Karlis, D.; Ntzoufras, I. Analysis of sports data using bivariate Poisson models. Statistician 2003, 52, 381–393. [CrossRef]
  10. Johnson, N.L.; Kemp, A.W.; Kotz, S. Univariate Discrete Distributions, 3rd ed.; John Wiley & Sons: Hoboken, NJ, USA, 2005.
  11. Cameron, A.C.; Trivedi, P.K. Regression Analysis of Count Data, 2nd ed.; Cambridge University Press: Cambridge, UK, 2013.
  12. Lambert, D. Zero-inflated Poisson regression, with an application to defects in manufacturing. Technometrics 1992, 34, 1–14. [CrossRef]
  13. Bhati, D.; Chakraborty, S.; Lateef, S.G. A discrete probability model suitable for both symmetric and asymmetric count data. Filomat 2020, 34, 2559–2572. [CrossRef]
  14. Chesneau, C.; Pakyari, R.; Kohansal, A.; Bakouch, H.S. Estimation and prediction under different schemes for a flexible symmetric distribution with applications. J. Math. 2024, 2024, 6517277. [CrossRef]
  15. Chesneau, C.; Bakouch, H.S.; Tomy, L.; Veena, G. A new discrete distribution on integers: Analytical and applied study on stock exchange and flood data. J. Stat. Manag. Syst. 2022, 25, 1899–1917. [CrossRef]
  16. Tomy, L.; Veena, G. A retrospective study on Skellam and related distributions. Austrian J. Stat. 2022, 51, 1102–1111. [CrossRef]
  17. Karlis, D.; Mamode Khan, N. Models for integer data. Annu. Rev. Stat. Its Appl. 2023, 10, 297–323. [CrossRef]
  18. Bakouch, H.S.; Kachour, M.; Nadarajah, S. An extended Poisson distribution. Commun. Stat. Theory Methods 2016, 45, 6746–6764. [CrossRef]
  19. Abramowitz, M.; Stegun, I.A. (Eds.) Handbook of Mathematical Functions with Formulas, Graphs, and Mathematical Tables; Dover Publications: New York, NY, USA, 1965.
  20. Rohatgi, V.K.; Saleh, A.K. An Introduction to Probability and Statistics, 2nd ed.; John Wiley and Sons: Hoboken, NJ, USA, 2000.
  21. R Core Team. R: A Language and Environment for Statistical Computing; R Foundation for Statistical Computing: Vienna, Austria, 2021; Available online: https://www.R-project.org/ (accessed on 1 November 2025).
  22. Raftery, A.E. Bayesian model selection in social research. Sociol. Methodol. 1995, 25, 111–163. [CrossRef]
