Brief Report

On the Vector Representation of Characteristic Functions

by
Wolf-Dieter Richter
Institute of Mathematics, University of Rostock, Ulmenstraße 69, 18057 Rostock, Germany
Stats 2023, 6(4), 1072-1081; https://doi.org/10.3390/stats6040067
Submission received: 1 September 2023 / Revised: 27 September 2023 / Accepted: 8 October 2023 / Published: 10 October 2023
(This article belongs to the Special Issue Advances in Probability Theory and Statistics)

Abstract

Based upon the vector representation of complex numbers and the vector exponential function, we introduce the vector representation of characteristic functions and consider some of its elementary properties, such as its polar representation and a vector power expansion.

1. Introduction

Basic mathematical techniques in probability theory and statistics are associated with characteristic functions and complex numbers. Innovations in the latter area are therefore also reflected in the former and are presented here.
In [1], p. 24, Cramér says about the earliest origins of characteristic functions that "the first use of an analytical instrument substantially equivalent to the characteristic function seems to be due to Lagrange [2]" and that "similar functions were then systematically employed by Laplace in his great work [3]". Expressions like $e^{k\sqrt{-1}}$ also occur in [4]. Further significant deepening and expansion, as well as the firm anchoring in modern probability theory, are due to Lévy [5], Cramér [1], Esseen [6], Gnedenko and Kolmogorov [7], Ibragimov and Linnik [8], Ramachandran [9], Feller [10], Lukacs [11], Petrov [12] and Bhattacharya and Ranga Rao [13].
The characteristic function of a random variable X is usually defined as
$$\varphi_X(t) = \mathbb{E}\, e^{itX}, \quad t \in \mathbb{R},$$
where $\mathbb{E}$ denotes mathematical expectation and $i$ denotes the so-called imaginary unit, which is formally treated like a constant in the series $e^{ix} = \sum_{k=0}^{\infty} \frac{(ix)^k}{k!}$, $x \in \mathbb{R}$. It is customary to define the quantity $i$ by saying that it is not a real number but a "formal quantity" or "number" that satisfies the equation $i^2 = -1$, while assuming at the same time that it allows an interpretation as an element of the two-dimensional Gaussian number plane, which makes the range of values of the function $x \mapsto e^{ix}$ appear rather unclear. This long-standing apparent lack of mathematical rigor, and some of its consequences for characteristic functions, will be addressed here.
To start right away, the vectors, or complex numbers, $z_l = \binom{x_l}{y_l}$, $l = 1, 2$, and $z = \binom{x}{y}$ are considered here as elements of the complex algebraic structure $\mathbb{C} = (V, \oplus, \odot, \cdot, \mathfrak{o}, \mathbb{1}, i)$, where $V$ is a two-dimensional vector space, chosen here as $V = \mathbb{R}^2$ for simplicity, $\oplus$ denotes usual vector addition, and the product of two complex numbers and the $k$-th power of such a number are accordingly defined as
$$z_1 \odot z_2 = \binom{x_1 x_2 - y_1 y_2}{x_1 y_2 + x_2 y_1} \quad \text{and} \quad z^{\odot k} = z^{\odot(k-1)} \odot z, \; k = 1, 2, \ldots, \qquad z^{\odot 0} = \mathbb{1} = \binom{1}{0}, \tag{1}$$
while multiplication of the vector $z$ by a scalar $\lambda \in \mathbb{R}$ is denoted $\lambda \cdot z$. The vector $i = \binom{0}{1}$ could be called an imaginary unit for historical reasons, but it has no imaginary character at all. The reader is encouraged, here and in the following, to distinguish carefully between the not really comprehensible symbol $i$ and the well-defined vector $i$. Obviously, $z \odot \mathbb{1} = z$, and $i$ solves the quadratic vector equation
$$i^{\odot 2} = -\mathbb{1}. \tag{2}$$
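The product (1) and Equation (2) are easy to check numerically. The following sketch (the helper name `cmul` and the variable names are ours, not the paper's) implements the vector product and verifies that the vector $i$ solves the quadratic vector equation:

```python
import numpy as np

def cmul(z1, z2):
    """Product (1): (x1, y1) ⊙ (x2, y2) = (x1*x2 - y1*y2, x1*y2 + x2*y1)."""
    x1, y1 = z1
    x2, y2 = z2
    return np.array([x1 * x2 - y1 * y2, x1 * y2 + x2 * y1])

one = np.array([1.0, 0.0])    # the multiplicative unit vector 1|
i_vec = np.array([0.0, 1.0])  # the vector i

# Equation (2): i ⊙ i = -1|
assert np.allclose(cmul(i_vec, i_vec), -one)

# 1| is multiplicatively neutral: z ⊙ 1| = z
z = np.array([2.0, -3.0])
assert np.allclose(cmul(z, one), z)
```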
For more details about the complex algebraic structure C and its non-classical generalizations, we refer to [14,15,16,17,18,19]. The rest of the paper is organized as follows. In Section 2, we consider vector powers and the vector exponential function. The vector representation of characteristic functions and further aspects concerning it as well as several examples are studied in Section 3. A final discussion in Section 4, which includes some historical remarks and an outlook on possible further work, closes this paper.

2. Vector-Valued Exponential Function

The vector-valued vector powers $z^{\odot k}: \mathbb{R}^2 \to \mathbb{R}^2$ can be derived directly from definition (1) or, alternatively, using binomial formulas, as the following example shows:
$$\binom{x}{y}^{\odot 6} = (x^6 - 15x^4y^2 + 15x^2y^4 - y^6) \cdot \mathbb{1} + (6x^5y - 20x^3y^3 + 6xy^5) \cdot i.$$
Lemma 1.
The k-th power of z allows for the representation
$$z^{\odot k} = \begin{pmatrix} x^k - \binom{k}{2} y^2 x^{k-2} + \binom{k}{4} y^4 x^{k-4} - \binom{k}{6} y^6 x^{k-6} \pm \cdots \\ \binom{k}{1} y x^{k-1} - \binom{k}{3} y^3 x^{k-3} + \binom{k}{5} y^5 x^{k-5} - \binom{k}{7} y^7 x^{k-7} \pm \cdots \end{pmatrix}.$$
Let us say that this representation starts with the term $x^k$, proceeds alternating between the upper and lower rows, and finally ends with the term $\mathrm{sign} \cdot y^k$, which is in the top row if $k$ is an even number and in the bottom row if $k$ is odd. The sign is plus if $k$ admits one of the representations $k = 4n$ or $k = 4n+1$, respectively, for an $n$ in $\mathbb{N} = \{0, 1, 2, \ldots\}$, and it is minus if $k = 4n+2$ or $k = 4n+3$ holds for an $n$ in $\mathbb{N}$.
Proof. 
By starting from
$$z^{\odot k} = (x \cdot \mathbb{1} + y \cdot i)^{\odot k} = \sum_{l=0}^{k} \binom{k}{l} y^l x^{k-l}\, i^{\odot l}$$
and making use of the equations
$$i^{\odot 4n} = \mathbb{1}, \quad i^{\odot(1+4n)} = i, \quad i^{\odot(2+4n)} = -\mathbb{1}, \quad i^{\odot(3+4n)} = -i, \quad n = 0, 1, 2, \ldots,$$
it follows that z k allows the claimed representation. □
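Lemma 1 can be cross-checked against repeated application of the product. A minimal sketch, with our own helper names, encoding the alternating sign pattern stated above:

```python
import math
import numpy as np

def cmul(z1, z2):
    x1, y1 = z1
    x2, y2 = z2
    return np.array([x1 * x2 - y1 * y2, x1 * y2 + x2 * y1])

def cpow(z, k):
    """k-th ⊙-power by repeated multiplication; z^{⊙0} = (1, 0)."""
    out = np.array([1.0, 0.0])
    for _ in range(k):
        out = cmul(out, z)
    return out

def cpow_lemma(z, k):
    """Lemma 1: alternating binomial sums for the two components."""
    x, y = z
    top = sum((-1) ** (l // 2) * math.comb(k, l) * y**l * x ** (k - l)
              for l in range(0, k + 1, 2))
    bot = sum((-1) ** ((l - 1) // 2) * math.comb(k, l) * y**l * x ** (k - l)
              for l in range(1, k + 1, 2))
    return np.array([top, bot])

z = np.array([2.0, 3.0])
for k in range(9):
    assert np.allclose(cpow(z, k), cpow_lemma(z, k))
```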
Example 1.
The following equation shows that the multiplication of elements from the unit circle again results in elements from just there:
$$\binom{\cos t}{\sin t}^{\odot k} = \binom{\cos kt}{\sin kt}.$$
In the next step, the vector exponential function exp : R 2 R 2 is defined as the convergent vector series
$$\exp(z) = \sum_{k=0}^{\infty} \frac{z^{\odot k}}{k!}. \tag{3}$$
In particular, we have the Euler-type formula
$$\exp(t \cdot i) = \binom{\cos t}{\sin t}, \quad t \in \mathbb{R}. \tag{4}$$
Remark 1.
(a) Many more general exponential functions $\exp_{\|\cdot\|}: \mathbb{R}^2 \to \mathbb{R}^2$ were introduced in [14,16,18,19] with reference to general functionals $\|\cdot\|: \mathbb{R}^2 \to [0, \infty)$ and with a view toward non-classically generalized complex algebraic structures (while $\exp$ refers to the Euclidean norm).
(b) Corresponding generalized trigonometric functions lead there to generalized Euler-type formulas.
(c) With the functional $\|\cdot\| = \|\cdot\|_p$,
$$\left\| \binom{x}{y} \right\|_p = (|x|^p + |y|^p)^{1/p}, \quad \binom{x}{y} \in \mathbb{R}^2,$$
defined for every $p > 0$, and the product
$$z_1 \odot_p z_2 = \frac{\|z_1\|_p \|z_2\|_p}{\|z_1 \odot z_2\|_p}\, z_1 \odot z_2$$
introduced in [14] instead of (1), for example, the $p$-generalized Euler formula
$$e_{\|\cdot\|_p}^{t \cdot i} = \binom{\cos_p t}{\sin_p t}, \quad t \in \mathbb{R},$$
with
$$\cos_p t = \frac{\cos t}{N_p(t)}, \quad \sin_p t = \frac{\sin t}{N_p(t)} \quad \text{and} \quad N_p(t) = \left\| \binom{\cos t}{\sin t} \right\|_p$$
results, where $e_{\|\cdot\|_p}^{z}$ denotes the central projection of the vector $\exp_{\|\cdot\|}(z) = \sum_{k=0}^{\infty} \frac{z^{\odot_p k}}{k!}$ onto the $l_p$-unit circle.
(d) Visualizations of the functions $f_y: \mathbb{R} \to \mathbb{R}^2$ with $f_y(x) = \exp_{\|\cdot\|}\binom{x}{y}$ for any fixed $y \in \mathbb{R}$, and of the functions $f_g: \mathbb{R} \to \mathbb{R}^2$ with $f_g(x) = \exp_{\|\cdot\|}\binom{x}{g(x)}$ for any function $g: \mathbb{R} \to \mathbb{R}$, can help gain a deeper understanding of the vector-valued function $z \mapsto \exp_{\|\cdot\|}(z)$.
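The generalized trigonometric functions of (c) are straightforward to evaluate. The sketch below (function names are ours) checks that the pair $(\cos_p t, \sin_p t)$ lies on the $l_p$ unit circle:

```python
import numpy as np

def N_p(t, p):
    """N_p(t) = ||(cos t, sin t)||_p."""
    return (abs(np.cos(t)) ** p + abs(np.sin(t)) ** p) ** (1.0 / p)

def cos_p(t, p):
    return np.cos(t) / N_p(t, p)

def sin_p(t, p):
    return np.sin(t) / N_p(t, p)

# (cos_p t, sin_p t) lies on the l_p unit circle for every t
t, p = 0.9, 3.0
assert np.isclose(abs(cos_p(t, p)) ** p + abs(sin_p(t, p)) ** p, 1.0)
```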

3. Characteristic Functions

3.1. An Update

The characteristic function of a random variable $X$, $\varphi_X: \mathbb{R} \to \mathbb{R}^2$, is defined in a formally completely correct way by $\varphi_X(t) = \mathbb{E} \exp(tX \cdot i)$, $t \in \mathbb{R}$. However, for the sake of convenience, we will write the multiplication of the scalar $tX$ and the vector $i$ differently, such that
$$\varphi_X(t) = \mathbb{E} \exp(itX) = \mathbb{E} \binom{\cos(tX)}{\sin(tX)} = \binom{u(t)}{v(t)}$$
looks more similar to the classical definition. The characteristic function of the random variable $-X$ is then
$$\varphi_{-X}(t) = \binom{u(t)}{-v(t)} = \begin{pmatrix} 1 & 0 \\ 0 & -1 \end{pmatrix} \binom{u(t)}{v(t)} = \overline{\varphi_X(t)},$$
that is, the complex conjugate of φ X or the x-axis mirrored version of φ X . If X has a finite k-th order moment, then its characteristic function has derivatives up to order k,
$$\varphi_X^{(k)}(t) = \int_{-\infty}^{\infty} x^k M_k \binom{\cos tx}{\sin tx}\, dF(x)$$
where, for n = 0 , 1 , 2 , ,
$$M_{4n} = \begin{pmatrix} 1 & 0 \\ 0 & 1 \end{pmatrix}, \quad M_{1+4n} = \begin{pmatrix} 0 & -1 \\ 1 & 0 \end{pmatrix}, \quad M_{2+4n} = -M_{4n} \quad \text{and} \quad M_{3+4n} = -M_{1+4n}.$$
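Before proceeding, note that the two components $u(t)$ and $v(t)$ of the vector-valued characteristic function can be estimated by sample means. A Monte Carlo sketch (sample size, seed and the standard normal test case, treated in Section 3.2, are our own choices):

```python
import numpy as np

rng = np.random.default_rng(0)

def cf_vector_mc(sample, t):
    """Monte Carlo estimate of φ_X(t) = (E cos(tX), E sin(tX))."""
    return np.array([np.mean(np.cos(t * sample)), np.mean(np.sin(t * sample))])

# standard normal test case: φ_X(t) = e^{-t²/2}·(1, 0)
x = rng.standard_normal(200_000)
est = cf_vector_mc(x, 1.0)
assert np.allclose(est, [np.exp(-0.5), 0.0], atol=0.01)
```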
Taylor’s theorem now allows, for $t \to 0$, an expansion of corresponding order for $\varphi_X$,
$$\varphi_X(t) = \mathbb{1} + \sum_{l=1}^{k-1} \frac{t^l\, \mathbb{E}X^l}{l!}\, M_l \mathbb{1} + O(t^k) \binom{1}{1}$$
where
$$M_{4n} \mathbb{1} = \mathbb{1}, \quad M_{1+4n} \mathbb{1} = i, \quad M_{2+4n} \mathbb{1} = -\mathbb{1} \quad \text{and} \quad M_{3+4n} \mathbb{1} = -i,$$
and $O(\cdot)$ denotes Landau’s symbol. Thus, $\varphi_X$ allows for the vector power expansion
$$\varphi_X(t) = \begin{pmatrix} 1 - \frac{t^2}{2!}\mathbb{E}X^2 + \frac{t^4}{4!}\mathbb{E}X^4 \mp \cdots \\ t\,\mathbb{E}X - \frac{t^3}{3!}\mathbb{E}X^3 + \frac{t^5}{5!}\mathbb{E}X^5 \mp \cdots \end{pmatrix},$$
starting in the term 1 and ending in the Landau-type term O ( t k ) , which is on the top row if k is an even number and on the bottom row if k is odd. The following theorem is thus proved.
Theorem 1.
If $X$ has a finite $k$-th order moment, then $\varphi_X$ satisfies the following vector power expansion for $t \to 0$:
$$\varphi_X(t) = \left(1 - \frac{t^2}{2!}\mathbb{E}X^2 + \frac{t^4}{4!}\mathbb{E}X^4 \mp \cdots \right) \cdot \mathbb{1} + \left(t\,\mathbb{E}X - \frac{t^3}{3!}\mathbb{E}X^3 + \frac{t^5}{5!}\mathbb{E}X^5 \mp \cdots \right) \cdot i,$$
starting with the term 1 and ending with the Landau-type term $O(t^k)$, which stands at the end of the expression in brackets in front of the vector $\mathbb{1}$ if $k$ is an even number and otherwise at the end of the expression in the other brackets.
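The expansion of Theorem 1 can be evaluated from a list of moments. A sketch with our own helper name, encoding the sign pattern of Lemma 1 and using the standard normal moments as a test case:

```python
import math
import numpy as np

def cf_expansion(t, moments):
    """Truncated expansion of Theorem 1; moments = [EX, EX², EX³, …]."""
    top, bot = 1.0, 0.0
    for l, m in enumerate(moments, start=1):
        c = t**l * m / math.factorial(l)
        if l % 2 == 0:
            top += (-1) ** (l // 2) * c        # even powers feed the 1|-component
        else:
            bot += (-1) ** ((l - 1) // 2) * c  # odd powers feed the i-component
    return np.array([top, bot])

# standard normal: EX, EX², EX³, EX⁴ = 0, 1, 0, 3 and φ_X(t) = e^{-t²/2}·(1, 0)
t = 0.3
approx = cf_expansion(t, [0.0, 1.0, 0.0, 3.0])
assert np.allclose(approx, [math.exp(-t**2 / 2), 0.0], atol=1e-4)
```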
The following corollary is an immediate conclusion from Theorem 1.
Corollary 1.
If $X$ has a finite variance $V(X)$, then the polar representation of its characteristic function can be written for $t \to 0$ as
$$\varphi_X(t) = r \binom{\cos\phi}{\sin\phi} \quad \text{with} \quad r = \sqrt{1 - V(X)t^2 + o(t^2)},$$
$$\cos\phi = \frac{1 - \frac{t^2}{2}\mathbb{E}X^2 + o(t^2)}{\sqrt{1 - V(X)t^2 + o(t^2)}} \quad \text{and} \quad \sin\phi = \frac{t\,\mathbb{E}X + o(t)}{\sqrt{1 - V(X)t^2 + o(t^2)}}.$$
If X has a finite fourth-order moment, then the following refinement holds:
$$r = \sqrt{1 - V(X)t^2 + \tfrac{t^4}{12}\left(\mathbb{E}X^4 + 3(\mathbb{E}X^2)^2 - 4\,\mathbb{E}X\,\mathbb{E}X^3\right) + o(t^4)},$$
$$\cos\phi = \frac{1 - \frac{t^2}{2}\mathbb{E}X^2 + \frac{t^4}{4!}\mathbb{E}X^4 + o(t^4)}{\sqrt{1 - V(X)t^2 + \tfrac{t^4}{12}\left(\mathbb{E}X^4 + 3(\mathbb{E}X^2)^2 - 4\,\mathbb{E}X\,\mathbb{E}X^3\right) + o(t^4)}}$$
and
$$\sin\phi = \frac{t\,\mathbb{E}X - \frac{t^3}{3!}\mathbb{E}X^3 + o(t^4)}{\sqrt{1 - V(X)t^2 + \tfrac{t^4}{12}\left(\mathbb{E}X^4 + 3(\mathbb{E}X^2)^2 - 4\,\mathbb{E}X\,\mathbb{E}X^3\right) + o(t^4)}}.$$
Now, let X 1 and X 2 be independent random variables. Then, as shown in [17], the following vector multiplication formula applies:
$$\varphi_{X_1 + X_2}(t) = \varphi_{X_1}(t) \odot \varphi_{X_2}(t), \quad t \in \mathbb{R}. \tag{5}$$
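The multiplication formula (5) can be illustrated by simulation: estimating both sides from the same samples of two independent variables should agree up to sampling noise. The distribution choices below are ours:

```python
import numpy as np

def cmul(z1, z2):
    x1, y1 = z1
    x2, y2 = z2
    return np.array([x1 * x2 - y1 * y2, x1 * y2 + x2 * y1])

def cf_mc(sample, t):
    """Monte Carlo estimate of the vector-valued characteristic function."""
    return np.array([np.mean(np.cos(t * sample)), np.mean(np.sin(t * sample))])

rng = np.random.default_rng(1)
x1 = rng.uniform(0.0, 1.0, 400_000)
x2 = rng.exponential(1.0, 400_000)

t = 0.8
lhs = cf_mc(x1 + x2, t)                 # φ_{X1+X2}(t)
rhs = cmul(cf_mc(x1, t), cf_mc(x2, t))  # φ_{X1}(t) ⊙ φ_{X2}(t)
assert np.allclose(lhs, rhs, atol=0.01)
```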
The following remark is intended to stimulate further investigations.
Remark 2.
(a) One might think about introducing and studying the general notion of a functional $\|\cdot\|$-related characteristic function, or Fourier transformation, $\widetilde{\varphi}_X(t) = \mathbb{E}\, e_{\|\cdot\|}^{itX}$, $t \in \mathbb{R}$. In the particular case $\|\cdot\| = \|\cdot\|_p$ discussed in Remark 1c, this would mean
$$\widetilde{\varphi}_X(t) = \mathbb{E} \binom{\cos_p(tX)}{\sin_p(tX)}, \quad t \in \mathbb{R}.$$
(b) Many statements of asymptotic probability theory are proved using characteristic functions. Strengthening of certain proofs and detection of some additional aspects could be stimulated by the present work.
(c) Visualizations of various challenging issues related to complex numbers and functions are given in [20,21]. Different figures of certain characteristic functions can be found in [22,23].
(d) As long as the representation of a characteristic function makes use of the classic imaginary unit i, it is problematic or even unsuitable for visualizing this function, since one would not know what to use for i. The vector representations of characteristic functions given here, however, can be the basis of further visualizations. In addition, the following examples show that in some cases the dependence on the imaginary unit only seems to exist.
We now continue with some examples.

3.2. Normal Distribution

The characteristic function of a standard Gaussian distributed random variable X is
$$\varphi_X(t) = \frac{1}{\sqrt{2\pi}} \cdot \begin{pmatrix} \int_{-\infty}^{\infty} \cos(tx)\, e^{-\frac{x^2}{2}}\, dx \\ \int_{-\infty}^{\infty} \sin(tx)\, e^{-\frac{x^2}{2}}\, dx \end{pmatrix} = e^{-\frac{t^2}{2}} \cdot \binom{1}{0}, \quad t \in \mathbb{R}.$$
There does not appear to be anything new in this example: the imaginary part is zero and the characteristic function is real, one might think at first glance. However, complex numbers, that is, vectors from R 2 , whose imaginary part is zero, are not real numbers but vectors having one zero component. For this reason, the set of real numbers is not a subset of the set of complex numbers, although the opposite is often claimed in the literature. The dimension distinguishes the real numbers from the complex numbers. Numerous physical facts are determined by the interaction of two or more quantities, even if one should have originally only been interested in a part of these variables. Complex numbers of dimensions two, three or higher, or their non-classical generalizations like those presented in [14,15,16,17,18,19], are then adequate description tools and could possibly even provide information about hidden variables.
The polar representation of this characteristic function is
$$\varphi_X^{pol}(t)[r, \phi] = r \binom{\cos\phi}{\sin\phi} \quad \text{where} \quad r = e^{-\frac{t^2}{2}} \quad \text{and} \quad \phi = 0.$$
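The two integrals above can be approximated by a simple Riemann sum on a truncated grid (the grid choices are ours), recovering $e^{-t^2/2} \cdot (1, 0)^T$:

```python
import numpy as np

# Riemann-sum approximation of the two integrals over a truncated grid
x = np.linspace(-10.0, 10.0, 200_001)
dx = x[1] - x[0]
w = np.exp(-x**2 / 2) / np.sqrt(2 * np.pi)  # standard normal density

def cf_normal(t):
    return np.array([(np.cos(t * x) * w).sum() * dx,
                     (np.sin(t * x) * w).sum() * dx])

t = 1.2
assert np.allclose(cf_normal(t), [np.exp(-t**2 / 2), 0.0], atol=1e-6)
```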

3.3. Binomial Distribution

We recall that the characteristic function of a random variable $X$ following the binomial distribution with parameters $(n, p)$, $n \in \mathbb{N}$, $p \in (0, 1)$, is usually written as
$$\varphi_X(t) = (1 - p + p\, e^{it})^n, \quad t \in \mathbb{R}.$$
Because it is not clearly stated to which set $i$ belongs, it is unknown where the function $\varphi_X$ takes its values. The following theorem, however, presents a well-interpretable vector-valued update of this formula.
Theorem 2.
The characteristic function of the random variable X satisfies the vector representation
$$\varphi_X(t) = \left((1-p) \cdot \mathbb{1} + p \cdot \exp(t \cdot i)\right)^{\odot n}, \quad t \in \mathbb{R}.$$
Proof. 
Let $p_k = \binom{n}{k} p^k (1-p)^{n-k}$ denote the probability that $X$ attains the value $k$, $k \in \{0, 1, \ldots, n\}$.
It follows from (4) and Example 1 that
$$\varphi_X(t) = \sum_{k=0}^{n} p_k \cdot \binom{\cos kt}{\sin kt} = \sum_{k=0}^{n} p_k \cdot \binom{\cos t}{\sin t}^{\odot k} = \sum_{k=0}^{n} \binom{n}{k} \left(p \cdot \exp(t \cdot i)\right)^{\odot k} \odot \left((1-p) \cdot \mathbb{1}\right)^{\odot (n-k)}. \; \square$$
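Theorem 2 can be verified numerically by comparing the defining sum with the $n$-th $\odot$-power. A sketch with our own helper names and arbitrary parameter choices:

```python
import math
import numpy as np

def cmul(z1, z2):
    x1, y1 = z1
    x2, y2 = z2
    return np.array([x1 * x2 - y1 * y2, x1 * y2 + x2 * y1])

def cpow(z, k):
    out = np.array([1.0, 0.0])
    for _ in range(k):
        out = cmul(out, z)
    return out

n, p, t = 7, 0.3, 0.9

# defining sum: Σ p_k (cos kt, sin kt)
direct = sum(math.comb(n, k) * p**k * (1 - p) ** (n - k)
             * np.array([np.cos(k * t), np.sin(k * t)]) for k in range(n + 1))

# Theorem 2: ((1-p)·1| + p·exp(t·i))^{⊙n}
base = (1 - p) * np.array([1.0, 0.0]) + p * np.array([np.cos(t), np.sin(t)])
assert np.allclose(direct, cpow(base, n))
```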
Remark 3.
(a) The polar representation of this characteristic function is
$$\varphi_X^{pol}(t)[r, \phi] = r^n \cdot \binom{\cos(n\phi)}{\sin(n\phi)} \pmod{2\pi}$$
where
$$r = \sqrt{(1-p)^2 + p^2 + 2p(1-p)\cos t}, \quad \cos\phi = \frac{1 - p + p\cos t}{r} \quad \text{and} \quad \sin\phi = \frac{p \sin t}{r}.$$
(b) Note that $r \le 1$ and that the inequality $r \ge 1 - 2p$ suggests defining an estimator $\hat{p}$ for $p$ by solving the equation $\min_t r = 1 - 2\hat{p}$.
(c) An alternative proof of this theorem makes use of Formula (5).

3.4. Poisson Distribution

We are looking now for the vector representation of the characteristic function of a random variable X following the Poisson distribution with parameter λ > 0 , which is usually written as
$$\varphi_X(t) = e^{\lambda(e^{it} - 1)}, \quad t \in \mathbb{R}.$$
The unknown quantity i occurring here is replaced by the known two-dimensional vector i in the following representation. This is made possible by the additional double use of the vector exponential function (3).
Theorem 3.
The characteristic function of X is
$$\varphi_X(t) = \mathbb{E} \exp(itX) = e^{-\lambda} \exp\left(\lambda \cdot \exp(t \cdot i)\right), \quad t \in \mathbb{R}.$$
Proof. 
This assertion follows with Example 1 from
$$\sum_{k=0}^{\infty} \frac{\lambda^k}{k!}\, e^{-\lambda} \cdot \binom{\cos kt}{\sin kt} = e^{-\lambda} \sum_{k=0}^{\infty} \frac{\left(\lambda \cdot \binom{\cos t}{\sin t}\right)^{\odot k}}{k!}. \; \square$$

3.5. Uniform Distribution

For real numbers $a < b$, let $X$ follow the uniform distribution on $(a, b)$. Then, its characteristic function is known to attain the value $\varphi_X(0) = 1$ and is usually written for non-zero $t$ as:
$$\varphi_X(t) = \frac{e^{itb} - e^{ita}}{it(b-a)}.$$
This representation can apparently easily be converted to a vector representation if one ’interprets’ or ’substitutes’ $x + iy \to \binom{x}{y}$:
$$\varphi_X(t) = \frac{-i\left[\cos tb + i \sin tb - (\cos ta + i \sin ta)\right]}{-i^2\, t(b-a)} = \frac{\sin tb - \sin ta + i(\cos ta - \cos tb)}{t(b-a)} = \frac{1}{t(b-a)} \cdot \binom{\sin tb - \sin ta}{\cos ta - \cos tb}.$$
Such a simple rewriting, however straightforward it appears today, and in particular its admissibility, was historically obscure, arguably because of the incompletely defined character of $i$. One might therefore prefer the completely correct derivation of this vector representation starting from
$$\mathbb{E} \exp(tX \cdot i) = \frac{1}{b-a} \cdot \begin{pmatrix} \int_a^b \cos tx\, dx \\ \int_a^b \sin tx\, dx \end{pmatrix}.$$
The usual complex representation of φ X using the unknown quantity i has thus been updated to a purely real representation as a two-dimensional vector.
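The closed vector form above can be confirmed by direct numerical integration of the defining vector integral (the grid and parameter choices are ours):

```python
import numpy as np

a, b, t = -1.0, 2.0, 1.3

# closed form: (sin tb - sin ta, cos ta - cos tb) / (t(b-a))
closed = np.array([np.sin(t * b) - np.sin(t * a),
                   np.cos(t * a) - np.cos(t * b)]) / (t * (b - a))

# direct Riemann-sum integration of (cos tx, sin tx)/(b-a) over (a, b)
x = np.linspace(a, b, 300_001)
dx = x[1] - x[0]
numeric = np.array([np.cos(t * x).sum(), np.sin(t * x).sum()]) * dx / (b - a)
assert np.allclose(closed, numeric, atol=1e-4)
```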

3.6. Exponential Distribution

Something similar to the case of a uniform distribution applies to an exponentially distributed random variable X where
$$\mathbb{E} \exp(tX \cdot i) = \frac{\lambda}{\lambda^2 + t^2} \left(\lambda \cdot \mathbb{1} + t \cdot i\right) \quad \text{applies instead of} \quad \varphi_X(t) = \frac{\lambda(\lambda + it)}{\lambda^2 + t^2}, \quad \lambda > 0.$$
The corresponding polar representation is
$$\varphi_X^{pol}(t)[r, \phi] = r \cdot \binom{\cos\phi}{\sin\phi} \quad \text{with} \quad r = \frac{\lambda}{\sqrt{\lambda^2 + t^2}}, \quad \cos\phi = \frac{\lambda}{\sqrt{\lambda^2 + t^2}} \quad \text{and} \quad \sin\phi = \frac{t}{\sqrt{\lambda^2 + t^2}}.$$

3.7. Gamma Distribution

We now assume that $X$ has a Gamma distribution with parameters $\lambda > 0$ and $n \in \mathbb{N}$, $X \sim \Gamma(\lambda, n)$. Then its characteristic function is
$$\varphi_X(t) = \left(\frac{\lambda}{\lambda^2 + t^2}\right)^n \cdot \binom{\lambda}{t}^{\odot n}$$
where, according to Lemma 1,
$$\binom{\lambda}{t}^{\odot 2} = \binom{\lambda^2 - t^2}{2\lambda t}, \quad \binom{\lambda}{t}^{\odot 3} = \binom{\lambda^3 - 3\lambda t^2}{3\lambda^2 t - t^3}, \quad \binom{\lambda}{t}^{\odot 4} = \binom{\lambda^4 - 6\lambda^2 t^2 + t^4}{4(\lambda^3 t - \lambda t^3)}, \quad \ldots$$
This purely real two-dimensional vector representation updates the usual complex one which makes use of the unknown quantity i,
$$\varphi_X(t) = \left(1 - \frac{it}{\lambda}\right)^{-n} = \left(\frac{\lambda}{\lambda^2 + t^2}\right)^n (\lambda + it)^n, \quad t \in \mathbb{R}.$$
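Both representations can be compared numerically, component by component. The sketch below evaluates the vector form with our own helper functions and the classical form with Python's built-in complex type:

```python
import numpy as np

def cmul(z1, z2):
    x1, y1 = z1
    x2, y2 = z2
    return np.array([x1 * x2 - y1 * y2, x1 * y2 + x2 * y1])

def cpow(z, k):
    out = np.array([1.0, 0.0])
    for _ in range(k):
        out = cmul(out, z)
    return out

lam, n, t = 1.5, 4, 0.6

# vector form: (λ/(λ²+t²))^n · (λ, t)^{⊙n}
vec = (lam / (lam**2 + t**2)) ** n * cpow(np.array([lam, t]), n)

# classical form (1 - it/λ)^{-n} via Python's complex type
c = (1 - 1j * t / lam) ** (-n)
assert np.allclose(vec, [c.real, c.imag])
```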
As in the examples above, Gauss’s [24] interpretation of complex numbers as points in the two-dimensional plane can be seen particularly well here,
$$\lambda + it \to \binom{\lambda}{t} = \lambda \cdot \mathbb{1} + t \cdot i$$
where “$\xi \to \eta$” means “interpret $\xi$ as $\eta$”. Beginning with the work in [14], this status of interpretation was transformed into the status of an axiom, and at the same time, the equation $i^2 = -1$, of which it is not said by which quantity $i$ and which operation of squaring it can be fulfilled, was replaced with Equation (2). The point of view of a purely formal handling of the unexplained (imaginary or mystical) quantity $i$ is thus not pursued further here.

4. Discussion

Fourier transformations are among the most commonly used mathematical methods. With respect to probability theory, a comprehensive theory of characteristic functions is based upon the further development of some of the examples considered here. We have just started this journey here and have not followed it far, but the reader is invited to follow this thought further. Our approach is based upon a new understanding of the role that the so-called imaginary unit plays as a vector from a space of dimension two. For the cases of two imaginary units in a three-dimensional space or k 1 such units in a space of dimension k, reference is made to the papers [15,17].
For a closely related discussion of the question of which rich variety of quadratic vector equations can be solved by the known formulas, we refer to [16,18].
In [25], p. 14, the role played by imaginary numbers is described as follows: “In algebra, when trying to find a formula for the solution of the equation of third degree, one came up with the initially meaningless expression $\sqrt{-1}$. But if you calculated with it as you are used to with the usual square roots, for example, $\sqrt{2}$, $\sqrt{3}$ or $\sqrt{\pi}$, something sensible always came out. This strengthens the belief in the right to exist of this structure, for which the designation $i$ has meanwhile become common. But it was almost 300 years before Gauss showed that what had been achieved so far can reasonably be interpreted as an extension of the range of real numbers in which there is a new number whose square is $-1$.” In this regard, it is said in [26] that “the role that Euler assigns to accountability shows his algorithmic-analytical thinking and structural understanding because Euler did not have a geometric illustration of imaginary quantities at his disposal…” However, there is an unsatisfactory inaccuracy which, from a historical perspective, was not eliminated immediately after vector calculus was established in [27,28]. Regarding a further discussion of early statements on this, see [19]. Even if, as in [25,29], a complex structure is initially introduced in a seemingly correct way, it later happens in these and several other publications that $i^{\odot 2} = \binom{0}{1} \odot \binom{0}{1}$ is visibly or indirectly equated in one way or another with the real number $-1$, although, obviously, this square equals the vector $-\binom{1}{0}$.
A question that has probably not yet been dealt with comprehensively in the history of science is why authors on probability-theoretic questions, such as in [1,5,6,7,8,9,10,11,12,13], do not refer to the mentioned gap in mathematical rigor and say nothing about the range of values of a characteristic function. Significantly, in [30], the characteristic function is given contradictorily, on page 94 as $\varphi_X: \mathbb{R} \to \mathbb{R}$ and on page 104 as $\varphi_X: \mathbb{R} \to \mathbb{C}$.
“How to choose a good research problem?” is the title of the article [31]. The answer to this question can be very diverse. Originally intended in part merely as a didactic self-study for a pensioner, the question of the completely exact treatment of complex numbers has grown into the development of a great variety of new, non-classically generalized complex numbers and has found expression in [14,15,16,17,18,19]. The article [31] also discusses, in particular, the aspects of feasibility and importance with regard to the choice of a problem. While the question of feasibility was clarified through the development of the common basic geometric idea behind the papers [14,15,16,17,18,19], the answer to the question of importance will hopefully be completed in the future with more practical applications such as those considered here.
We close this paper with an outlook on another possible research question. Let $\|\cdot\|: \mathbb{R}^2 \to [0, \infty)$ be a positively homogeneous and bounded functional such that the disc $B = \{z \in \mathbb{R}^2 : \|z\| \le 1\}$ is star-shaped with respect to the additive neutral element $\mathfrak{o}$, let $\odot$ be a vector-valued vector product generated by this functional according to Definition 1 in [19], let $S = \{z \in \mathbb{R}^2 : \|z\| = 1\}$ be the boundary of $B$, or $\|\cdot\|$ unit circle, let $e_{\|\cdot\|}^{z}: \mathbb{R}^2 \to \mathbb{R}^2$ be the central projection of the vector $\exp_{\|\cdot\|}(z) \in \mathbb{R}^2$ onto $S$, and
$$\mathbb{1} = \frac{\binom{1}{0}}{\left\|\binom{1}{0}\right\|} \quad \text{as well as} \quad i = \frac{\binom{0}{1}}{\left\|\binom{0}{1}\right\|}.$$
If further
$$\cos_S x = \frac{\cos x}{N(x)} \quad \text{and} \quad \sin_S x = \frac{\sin x}{N(x)} \quad \text{with} \quad N(x) = \left\|(\cos x)\,\mathbb{1} + (\sin x)\,i\right\|$$
denote generalized trigonometric functions with respect to the functional | | . | | then it is known from Corollary 1 in [19] that
$$e_{\|\cdot\|}^{t \cdot i} = (\cos_S t)\, \mathbb{1} + (\sin_S t)\, i.$$
How useful is this generalized Euler formula when signals resemble generalized trigonometric functions rather than classical ones, and the idea of a generalized Fourier transformation suggests itself? If, in particular, $a > 0$, $b > 0$, $\|z\| = \|z\|_{(a,b)} = \sqrt{(x/a)^2 + (y/b)^2}$ and
$$z_1 \odot_{(a,b)} z_2 = \frac{\|z_1\|_{(a,b)} \|z_2\|_{(a,b)}}{\|z_1 \odot z_2\|_{(a,b)}}\, z_1 \odot z_2$$
then $\mathbb{1} = \binom{a}{0}$ is the multiplicative neutral element, $i = \binom{0}{b}$ satisfies the equation $i \odot_{(a,b)} i = -\mathbb{1}$, and with
$$\cos_S x = \cos_{(a,b)} x = \frac{\cos x}{N_{(a,b)}(x)}, \quad \sin_S x = \sin_{(a,b)} x = \frac{\sin x}{N_{(a,b)}(x)}$$
and $N(x) = N_{(a,b)}(x) = \left\|(\cos x)\,\mathbb{1} + (\sin x)\,i\right\|_{(a,b)} = 1$, the elliptic Euler-type formula reads
$$e_{\|\cdot\|_{(a,b)}}^{t \cdot i} = (\cos_{(a,b)} t)\, \mathbb{1} + (\sin_{(a,b)} t)\, i = \binom{a \cos t}{b \sin t} \in S = \left\{ \binom{x}{y} : \frac{x^2}{a^2} + \frac{y^2}{b^2} = 1 \right\}.$$
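The elliptic case can be checked numerically: the point $(a\cos t, b\sin t)^T$ lies on $S$, and the vector $i = (0, b)^T$ squares to $-\mathbb{1} = -(a, 0)^T$ under $\odot_{(a,b)}$. Parameter values and helper names below are our own choices:

```python
import numpy as np

a, b = 2.0, 0.5

def cmul(z1, z2):
    x1, y1 = z1
    x2, y2 = z2
    return np.array([x1 * x2 - y1 * y2, x1 * y2 + x2 * y1])

def norm_ab(z):
    """The elliptic functional ||z||_(a,b) = sqrt((x/a)² + (y/b)²)."""
    return np.hypot(z[0] / a, z[1] / b)

def cmul_ab(z1, z2):
    """(a,b)-product: rescale the Euclidean product by the elliptic norms."""
    w = cmul(z1, z2)
    return norm_ab(z1) * norm_ab(z2) * w / norm_ab(w)

one_ab = np.array([a, 0.0])  # multiplicative neutral element 1|
i_ab = np.array([0.0, b])

# i ⊙_(a,b) i = -1|
assert np.allclose(cmul_ab(i_ab, i_ab), -one_ab)

# e^{t·i} = (a cos t, b sin t) lies on the ellipse S
t = 1.1
z = np.array([a * np.cos(t), b * np.sin(t)])
assert np.isclose((z[0] / a) ** 2 + (z[1] / b) ** 2, 1.0)
```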

Funding

This research received no external funding.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

Not applicable.

Conflicts of Interest

The author declares no conflict of interest.

References

  1. Cramér, H. Random Variables and Probability Distributions; University Printing House: Cambridge, UK, 1937. [Google Scholar]
  2. Lagrange, J.L. Mémoire sur l’utilité de la méthode de prendre le milieu entre les résultats de plusieurs observations. Misc. Taur. 1770–1773, 5, 167–232. [Google Scholar]
  3. Laplace, P.S. Théorie Analytique des Probabilités; Encyclopedia Universalis: Paris, France, 1812. [Google Scholar]
  4. Tchebychef, P.L. Note sur la convergence de la série de Taylor. Crelle J. Die Reine Angew. Math. B 1844, 28, 279–283. [Google Scholar]
  5. Lévy, P. Calcul des Probabilités; Gauthier-Villars: Paris, France, 1925. [Google Scholar]
  6. Esseen, K.G. Fourier analysis of distributions. Acta Math. 1945, 77, 1–125. [Google Scholar] [CrossRef]
  7. Gnedenko, B.V.; Kolmogorov, A.N. Limit Distributions for Sums of Independent Random Variables; Addison-Wesley Mathematics Series; Addison-Wesley: Cambridge, MA, USA, 1954. [Google Scholar]
  8. Ibragimov, I.A.; Linnik, Y.V. Independent and Stationary Connected Variables; Nauka: Moscow, Russia, 1965. (In Russian) [Google Scholar]
  9. Ramachandran, B. Advanced Theory of Characteristic Functions; Statistical Publishing Society: Calcutta, India, 1967. [Google Scholar]
  10. Feller, W. An Introduction to Probability Theory and Its Applications; Wiley: Hoboken, NJ, USA, 1970; Volume I–II. [Google Scholar]
  11. Lukacs, E. Characteristic Functions; Charles Griffin and Company: London, UK, 1970. [Google Scholar]
  12. Petrov, V.V. Sums of Independent Random Variables; Springer: Berlin/Heidelberg, Germany, 1972. (In Russian) [Google Scholar]
  13. Bhattacharya, R.N.; Ranga Rao, R. Normal Approximation and Asymptotic Expansions; John Wiley Sons: Hoboken, NJ, USA, 1975. [Google Scholar]
  14. Richter, W.-D. On lp-complex numbers. Symmetry 2020, 12, 877. [Google Scholar] [CrossRef]
  15. Richter, W.-D. Three-complex numbers and related algebraic structures. Symmetry 2021, 13, 342. [Google Scholar] [CrossRef]
  16. Richter, W.-D. Complex numbers related to semi-antinorms, ellipses or matrix homogeneous functionals. Axioms 2021, 10, 340. [Google Scholar] [CrossRef]
  17. Richter, W.-D. On complex numbers in higher dimensions. Axioms 2022, 11, 22. [Google Scholar] [CrossRef]
  18. Richter, W.-D. On hyperbolic complex numbers. Appl. Sci. 2022, 12, 5844. [Google Scholar] [CrossRef]
  19. Richter, W.-D. Deterministic and random generalized complex numbers related to a class of positively homogeneous functionals. Axioms 2023, 12, 60. [Google Scholar] [CrossRef]
  20. Needham, T. Visual Complex Analysis; Oxford University Press: New York, NY, USA, 1997. [Google Scholar]
  21. Wegert, E. Visual Complex Functions. An Introduction with Phase Portraits; Springer: Basel, Switzerland, 2012. [Google Scholar]
  22. Sasvári, Z. Multivariate Characteristic and Correlation Functions; De Gruyter: Berlin, Germany, 2013. [Google Scholar]
  23. Witkovský, V. Numerical inversion of a characteristic function. Acta Imeco 2016, 5, 32–44. [Google Scholar]
  24. Gauss, C.F. Theoria Residuorum Biquadraticorum: Commentatio Secunda; Werke 2; Chelsea Publishing Company: New York, NY, USA, 1965; pp. 93–148. [Google Scholar]
  25. Gellert, W.; Küstner, H.; Hellwich, M.; Kästner, H. Kleine Enzyklopädie Mathematik; VEB Bibliographisches Institut: Leipzig, Germany, 1967. [Google Scholar]
  26. Thiele, R. Leonhard Euler. 15.April 1707-18. September 1783. Zur Erinnerung an seinen 300. Geburtstag. Mitteilungen Dtsch. Math. Ver. 2007, 15, 93–103. [Google Scholar]
  27. Grassmann, H. Die Lineale Ausdehnungslehre ein neuer Zweig der Mathematik; Cambridge University Press: Cambridge, UK, 1844. [Google Scholar]
  28. Hamilton, W.R. On quaternions. Proc. R. Ir. Acad. 1847, 3, 89–92. [Google Scholar]
  29. Walz, G. Lexikon der Mathematik; Spektrum Akademischer Verlag: Heidelberg/Berlin, Germany, 2001. [Google Scholar]
  30. Bremaud, P. An Introduction to Probabilistic Modeling; Springer: New York, NY, USA, 1988. [Google Scholar]
  31. Alon, U. Wie wählt man ein gutes Forschungsproblem? Mitteilungen Dtsch. Math. Ver. 2010, 18, 160–163. [Google Scholar]