Phase Congruential White Noise Generator

White noise generators can use uniform random sequences as a basis. However, such an approach may lead to deficient results if the original sequences have insufficient uniformity or omissions of random variables. This article offers a new approach to creating a phase signal generator with an improved matrix of autocorrelation coefficients. As a result, the generated signals of the white noise process have absolutely uniform intensities at the eigen Fourier frequencies. The simulation results confirm that the obtained signals adequately approximate uniform white noise.


Introduction
The concept of white noise realization corresponds to randomness in the appearance and distribution of signals [1][2][3][4]. For audible signals, the conforming range is the band of frequencies from 20 to 20,000 Hz. The randomness of such signals in this range is usually perceived by the human ear as a hissing sound of varying volume. White noise also manifests itself in myriad natural phenomena, for example, in the noises of sea waves, waterfalls, rain, wind, etc., at various intensities. Outside nature, in technical systems, white noise appears in p-n junctions of semiconductors, in the roars of different engines (vehicles, aircraft, etc.), in the overlapping of many sounds associated with big cities and metropolitan life (traffic, construction, honking, life support systems), and so on.
On the other hand, the literature describes methods of white noise creation, as well as its use in the artificial modeling of various situations. A fundamental feature of this direction is the condition that a white noise generator is required initially. The first approach uses techniques of recording physical phenomena with subsequent digitizing and postprocessing filtration [39,40]. This method provides quasi "natural" or "realistic" white noise; however, the numerical accuracy of the registration of stochastic quantities is often insufficient. Moreover, the technical realization is quite expensive. The second approach to creating a white noise generator utilizes the techniques of computer-based algorithms. In this case, the generation accuracy is quite high. However, different algorithmic methods have varying degrees of generation quality, and they are usually associated with diverse disadvantages. To analyze the grade of generation, verification methods are commonly applied.
As an example, in [41] the following form of white noise generation is proposed; it should be noted that this is just one of many ways to implement a white noise generator. As a basis for forming n values s_j of white noise (provided that j ∈ [0, n − 1]), the function rand() from Microsoft Visual Studio can be utilized. The algorithmic expression used in [41] can be represented in the following mathematical form:

s_j = 4·rand()/randmax − 2. (1)

By using this expression, the resulting values are obtained in the range [−2, +2]. This simple formula allows creation of the elements in the required diapason. The next mandatory step is to check to what extent the obtained values really are elements of a process with white noise properties. For such verification it is necessary to consider how the white noise process and the properties of its elements are formed, and here everything depends on the quality of the function rand() and its maximum value randmax. Let us consider this in more detail.
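As an illustration only, a scheme of this kind can be sketched in Python (the article's programs are in C#); the specific randmax value and the exact linear mapping into [−2, +2] are assumptions made for this sketch, inferred from the stated output range:

```python
import random

RANDMAX = 2**15 - 1  # assumed analogue of the randmax constant in [41]

def noise_counting(rng: random.Random) -> float:
    # Map an integer from [0, RANDMAX] linearly into the range [-2, +2],
    # by analogy with formula (1).
    return 4.0 * rng.randint(0, RANDMAX) / RANDMAX - 2.0

rng = random.Random(1)
signal = [noise_counting(rng) for _ in range(33)]
assert all(-2.0 <= s <= 2.0 for s in signal)
```

Whether such countings actually satisfy the white noise properties is exactly the question examined below.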
From a general point of view, numerical sequences form a stationary random process ℙ if the characteristics of this process do not change during its realization. One such characteristic is the consistent observation of the presence of numerical values over the time period. Commonly, the term observation is replaced by the equivalent concept of counting. Therefore, the totality of all countings makes up the process with elements s ∈ ℙ. A number of n consecutive countings can be collected together as an information signal S. Formally, there can be many signals, and each of them is a subset of the process S ⊂ ℙ. Let us denote the initial counting in signal S as s_0 ∈ S; the next one has the index s_1 ∈ S. Carrying on with the numbering of countings, the last one has the designation s_{n−1} ∈ S. This means that only the first n countings are considered in signal S. Let us designate this initial signal as S_0 ⊂ ℙ.
Considering the random process further, it becomes obvious that s_n ∉ S_0, since there are only n countings in that signal, with indices from 0 to n − 1. This means that the next signal S_1 starts from the counting s_n. However, the initial counting in each signal (S_0, S_1, etc.) should have a zero index relative to the beginning of the signal. To overcome this issue, dual indexing, such as s_{i,j}, is used. The first index i indicates the signal number S_i, while the second one j specifies the assigned number of the counting in the corresponding signal. In this case the next counting s_n ∈ ℙ of the random process is denoted in the random signal S_1 ⊂ ℙ by the double indexing s_n = s_{1,0} ∈ S_1. This approach allows consideration of the countings of a random process from the point of view of their sorting by signals.
Usually, sequential signals are considered, and each of them contains an equal number of n countings with equal time intervals τ between them. In this case, the duration of each information signal S can be designated as T = (n − 1)·τ. If the time τ between countings admits a continual interpretation, then continuous stationary random processes are considered. If the interval time τ and the number of countings n are finite in the signal S, then such processes are discrete. Generally, the mathematical transition from continuous signals to discrete ones is carried out using the Dirac delta function.
In a random discrete process, the countings follow one another sequentially. Dividing a process into n sequential countings can be interpreted as observing a sequential discrete signal. Thus, the initial n countings form the signal S_0; the next n countings organize the subsequent signal S_1, and so on. For the next step in signal analysis, it is necessary to check whether the countings in each signal are independent. The consistent observation of countings over a period of time is not exhaustive evidence of their statistical independence. Therefore, it is necessary to analyze the interaction of each counting with the other countings within the signal. For this purpose, adjacent pairs of signals are evaluated using the mathematical correlation method. In this case, it is convenient to introduce formal autovectors for each pair of signals. The first pair consists of the initial signal S_0 and the adjacent one S_1; the next pair is formed from signals S_1 and S_2, and so on. Thus, the correlation approach uses adjacent signals S_i and S_{i+1}. To any pair of adjacent signals, for example, S_0 and S_1, there corresponds a set of n autovectors ⁰v_0, ⁰v_1, ⋯, ⁰v_{n−1}. The upper left index 0 underlines the origin of the countings, which are taken from signal S_0. Countings of the original signal s_{0,j} ∈ S_0 are initially located in the autovector ⁰v_0 ∈ S_0, which means that the countings of the signal S_0 are transferred to the initial autovector as follows:

⁰v_0 = (s_{0,0}, s_{0,1}, ⋯, s_{0,n−1}). (2)

Using the initial autovector (2), the next autovector ⁰v_1 can be obtained, which is shifted one counting to the right relative to the autovector ⁰v_0. Autovector ⁰v_1 contains the part of the countings of the signal S_0 starting from the counting s_{0,1} ∈ S_0. Herewith, only the last counting of the autovector ⁰v_1, namely s_{1,0} ∈ S_1, belongs to the signal S_1.
Formation of autovector ⁰v_i contains the following steps:

⁰v_i = (s_{0,i}, s_{0,i+1}, ⋯, s_{0,n−1}, s_{1,0}, ⋯, s_{1,i−1}). (3)

From Formulas (2) and (3) it follows that the last autovector relative to the pair of signals S_0 and S_1 contains only one counting from the signal S_0, and the remaining n − 1 countings are taken from the signal S_1 as follows:

⁰v_{n−1} = (s_{0,n−1}, s_{1,0}, ⋯, s_{1,n−2}). (4)

Thus, with respect to signal S_0, Formula (4) completes the formation of all autovectors ⁰v_0, ⋯, ⁰v_{n−1}. Now it is clearly seen that such a structure of autovectors composes the matrix Z with dimensions n × n. The countings of each autovector ⁰v_i occupy the corresponding row i of the matrix Z. The values in row 0 correspond to the signal S_0 and the autovector ⁰v_0 = S_0. Row 1 contains the countings of autovector ⁰v_1. The last row of the matrix keeps the autovector ⁰v_{n−1}. Below is the program P070101, in which, according to Formula (1), the autovectors are created in the matrix Z using the Random generator from the algorithmic language C#. The theoretical value of the amount of countings n has been replaced by a program variable named NS. As an example, in this particular case, the number of countings in the signal is taken as NS = 33, although the values for NS can be chosen arbitrarily within the integer range of the program. The values of countings s_{0,j} of the theoretical initial signal S_0 are stored in the elements of the program array s0. The theoretical autovectors are located in the program matrix z. According to Formulas (2)-(4), the matrix Z in this listing contains the autovectors ⁰v_0, ⁰v_1, ⋯, ⁰v_{n−1}, presented line by line. An analysis of the obtained results shows that these autovectors are not taken into account by the random function during the realization of the white noise generation. However, the white noise process has to keep the property of statistical independence of the autovectors in each of its signals. This is one of the fundamentals of the theory of linear vector transformations, but unfortunately some well-known algorithmic generators do not take this important feature into account.
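The shift structure of Formulas (2)-(4) can be mirrored in a short Python sketch (the article's program P070101 is written in C#); the helper name here is hypothetical:

```python
def autovector_matrix(s0, s1):
    """Row i holds autovector i: the last n-i countings of signal s0
    followed by the first i countings of the adjacent signal s1."""
    n = len(s0)
    return [list(s0[i:]) + list(s1[:i]) for i in range(n)]

# Tiny example with n = 4 countings per signal:
Z = autovector_matrix([0, 1, 2, 3], [10, 11, 12, 13])
assert Z[0] == [0, 1, 2, 3]        # row 0 is the signal S0 itself
assert Z[3] == [3, 10, 11, 12]     # last row: one counting left from S0
```

Each subsequent row is the previous one shifted left by one counting, with the freed position filled from the adjacent signal, exactly as in Formula (3).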
This case raises a debatable issue: should such a realization process be called white noise generation at all? Moreover, how can the statistical independence of the autovectors be checked by a generator during the realization of a white noise process?
Thus, the purpose of this article is to propose instrumental algorithmic tools for generating statistically independent white noise signals. This method allows a phase signal generator to be created with an improved matrix of autocorrelation coefficients, while maintaining the property of absolutely equivalent intensities at the eigen Fourier frequencies.

White Noise Autocorrelation Matrix
The white noise autocorrelation matrix plays a fundamental role in analyzing the statistical independence of white noise signals [1][2][3][4][42][43][44]. Let us demonstrate this using the example of the matrix Z of autovectors from the previous section. Using the matrix Z, it is possible to calculate the paired scalar multiplications of the autovectors ⁰v_0, ⁰v_1, ⋯. To simplify the notation of these autovectors, let us omit the upper left index 0, i.e., instead of ⁰v_i the indication v_i is considered further. For autovectors v_a and v_b the scalar multiplication (v_a, v_b) is determined by the sum of multiplications of countings as follows:

(v_a, v_b) = Σ_{j=0}^{n−1} v_{a,j}·v_{b,j}. (5)

In Formula (5), the sum of multiplications of the corresponding countings is calculated, provided that these autovectors are in an orthogonal coordinate system. Formally, the geometric length or norm ‖v_a‖ of the autovector v_a in an n-dimensional orthogonal coordinate system is calculated by the following scalar multiplication:

‖v_a‖ = √(v_a, v_a). (6)

Formula (6) formally coincides with the Pythagorean Theorem in a multidimensional linear geometric space. From analytical geometry it is known (or it can be derived directly) that the cosine of the angle between two linear vectors is determined by the scalar multiplication (5) between them and their norms (6). So, using Expressions (5) and (6), the following Formula (7) for the angle between the autovectors v_a and v_b is obtained:

cos φ_{ab} = (v_a, v_b) / (‖v_a‖·‖v_b‖). (7)

In applied signal analysis, statistical estimates are very important and can be easily obtained by means of computer technologies. In this sense, it is of significant interest to evaluate the linear connections of autovectors, which for cos φ_{ab} = 0 are usually called statistically independent. In this case such autovectors have a correlation coefficient equal to zero.
For autovectors v_a and v_b the correlation coefficient is calculated as follows:

r_{ab} = Σ_{j=0}^{n−1} (v_{a,j} − E(v_a))·(v_{b,j} − E(v_b)) / (‖v_a − E(v_a)‖·‖v_b − E(v_b)‖). (8)

In Expression (8) the correlation coefficient deals with centered autovectors relative to the first statistical moments E(v_a) and E(v_b), which are the following:

E(v_a) = (1/n)·Σ_{j=0}^{n−1} v_{a,j},  E(v_b) = (1/n)·Σ_{j=0}^{n−1} v_{b,j}. (9)

Further, using Expressions (9), let us determine the autovector u_a, which is statistically centered for the autovector v_a after corresponding adjustment of each of its countings v_{a,j}:

u_{a,j} = v_{a,j} − E(v_a), j ∈ [0, n − 1]. (10)

Finally, the angle between the autovectors u_a and u_b is defined by using Expressions (5)-(10) in the following form:

cos φ_{ab} = (u_a, u_b) / (‖u_a‖·‖u_b‖) = r_{ab}. (11)

From Expressions (8) and (11) it follows that the autovectors v_a, v_b and the centered autovectors u_a, u_b have equal angles to each other. Thus, the initial signal S_0 is statistically independent if all pairs of its autovectors are orthogonal.
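A minimal Python sketch of the centering (9)-(10) and of the correlation coefficient (8) might look as follows (the function names are illustrative, not taken from the article's programs):

```python
import math

def centered(v):
    m = sum(v) / len(v)            # first moment E(v), as in (9)
    return [x - m for x in v]      # centered countings, as in (10)

def corr_coeff(va, vb):
    ua, ub = centered(va), centered(vb)
    dot = sum(a * b for a, b in zip(ua, ub))
    na = math.sqrt(sum(a * a for a in ua))
    nb = math.sqrt(sum(b * b for b in ub))
    return dot / (na * nb)         # cosine of the angle, as in (8)/(11)

assert abs(corr_coeff([1, 2, 3], [1, 2, 3]) - 1.0) < 1e-12   # identical
assert abs(corr_coeff([1, 2, 3], [1, -2, 1])) < 1e-12        # orthogonal
```

The second assertion shows a pair whose centered vectors are orthogonal, i.e., statistically independent in the sense of (11).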
The centered autovectors u_i can be located in the matrix of autovectors V having the size n × n. Below is the program P070102, in which the matrix V contains the autovectors obtained from the autovectors of the matrix Z after applying the transformation (10).
After launching the program P070201, the following result appears. The omitted values are substituted by a dash. Next, by using the matrix of autovectors V it is possible to obtain the autocorrelation matrix A. For this, the multiplication of the matrix V and its transposed variant is used as follows:

A = V·Vᵀ. (12)

The calculation of the autocorrelation matrix A in Expression (12) is the main key tool for further determining the statistical independence of the signals S ⊂ ℙ in the stochastic process of white noise ℙ.
Using the following formula, it is possible to verify directly that each element a_{ij} ∈ A is a scalar multiplication of the centered autovectors u_i and u_j:

a_{ij} = (u_i, u_j). (13)

If the autovectors are orthogonal, then the off-diagonal elements of the autocorrelation matrix A in Expression (13) are zeros, and on the main diagonal there is a minimal dispersion σ²:

A = σ²·I. (14)

In this Expression (14) the matrix I is the identity matrix, which means that this n × n square matrix contains ones on the main diagonal and zeros elsewhere. Formula (14) is a fundamental tool for verifying whether a stochastic process ℙ can be considered as white noise.
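The construction (12)-(14) can be checked with a small Python example; for orthogonal centered rows of equal norm, the product V·Vᵀ indeed reduces to σ²·I:

```python
def autocorr_matrix(V):
    # Element a_ij is the scalar product of rows i and j, as in (12)/(13).
    return [[sum(x * y for x, y in zip(vi, vj)) for vj in V] for vi in V]

# Two orthogonal centered autovectors with equal norms:
V = [[1, 1, -1, -1],
     [1, -1, 1, -1]]
A = autocorr_matrix(V)
assert A == [[4, 0], [0, 4]]   # sigma^2 * I with sigma^2 = 4, as in (14)
```

For real generated signals the off-diagonal elements are not exactly zero; how close they come to zero is precisely the quality measure used below.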
If the signals are not orthogonal, then the elements r_{ij} ∈ R show the cosines of the angles between the autovectors u_i and u_j for the corresponding elements (according to Formulas (8)-(11)). The program P070202 below calculates the autocorrelation matrix A and the matrix of autocorrelation coefficients R from the matrix V. In the analysis of signals, a set of n consecutive countings makes it possible to form or detect discrete spectra of multiple internal Fourier frequencies. By the nature of the frequency distribution there are white noise signals with a uniform distribution of intensities, a normal Gaussian distribution, and many others. This article focuses on white noise with a uniform intensity distribution at Fourier frequencies, i.e., all frequencies are of the same intensity.
In a random process, the signals of uniform white noise are characterized by the following properties: (1) Uniform distribution of intensities at all internal frequencies of the signal; (2) Zero value of mathematical expectation (the first moment) of the values of countings in the signal; (3) Autocorrelation matrix has a diagonal form with equal meanings of dispersion (the second moment) along the main diagonal and zero values for all other elements.
From this designation it follows that white noise signals should be orthogonal in an environment of independent countings with equal amplitudes at all internal frequencies. Therefore, property (1) forces us to consider the set of exactly n countings in the signals S with the further use of Fourier frequency analysis.
The simulation results in the programs P070101, P070201 and P070202 presented above show that the Random generator creates countings of signals with low quality for a white noise process. Their autocorrelation matrix A and the corresponding matrix of autocorrelation coefficients R are far from the desirable zero values. In line with all of this, in this article we offer a new congruential generator which has better quality of generating the uniform white noise. The designation of the number of countings in the signal corresponds exactly to n in the previous sections above. Suppose that in all signals the countings are located with a constant time step τ.

Theory
In each signal S_0, S_1, ⋯ a certain spectrum of frequencies is fixed. If among these frequencies there is one with a period T, then it is denoted as the initial frequency ω_1. Typically, in trigonometric studies the interval [−π, +π] is used. An isomorphic transition of an observation point t ∈ [0, T] to an isomorphic point x ∈ [−π, +π] is performed as follows:

x = 2π·t/T − π. (15)

On the interval [−π, +π] the initial frequency is equal to one, i.e., ω_1 = 1. The frequencies of the discrete Fourier spectrum are integral multiples of the initial frequency:

ω_m = m·ω_1, m ∈ [1, nf], (16)

in which nf is the quantity of frequencies. These frequencies make it possible to organize an orthonormal system of sine-cosine coordinates in Euclidean space:

{1, cos(x), sin(x), cos(2x), sin(2x), ⋯, cos(nf·x), sin(nf·x)}. (17)

In the trigonometric space with the coordinate system (17) the countings are observed at the points x_j ∈ [−π, +π], j ∈ [0, n − 1]. By the property of this space (17), the values of the countings s(x_j) can be determined by the following Fourier polynom:

s(x_j) = a_0/2 + Σ_{m=1}^{nf} (a_m·cos(m·x_j) + b_m·sin(m·x_j)). (18)

It is assumed that if the information signal is closely approximated to the white noise signal, then it should have the same amplitudes at all frequencies ω_m. However, in Expression (18), each frequency is accompanied by two amplitudes a_m and b_m with possibly separate distributions. This approach leads to a significant complication of the algorithm for their calculation. To get around this obstacle, let us replace the calculation of each pair of amplitudes a_m, b_m with the generation of a uniform intensity at multiple frequencies with a phase shift. Let us analyze how this can be done in more detail.
It is known from trigonometric transformations that the sine of the sum of two angles α and β can be calculated using the following expression:

sin(α + β) = sin(α)·cos(β) + cos(α)·sin(β). (19)

Considering Expressions (18) and (19) together, the following estimates are obtained:

d_m·sin(m·x_j + ψ_m) = d_m·cos(ψ_m)·sin(m·x_j) + d_m·sin(ψ_m)·cos(m·x_j). (20)

Using Formulas (20), the relationship between the value d_m and the Fourier coefficients a_m, b_m can be established as follows:

a_m = d_m·sin(ψ_m), b_m = d_m·cos(ψ_m), d_m = √(a_m² + b_m²). (21)

The value of the angle ψ_m is calculated using Formulas (20) as well:

ψ_m = arcsin(a_m / d_m). (22)

Thus, Expression (18) is equivalent to the following sine ratio:

s(x_j) = a_0/2 + Σ_{m=1}^{nf} d_m·sin(m·x_j + ψ_m). (23)

A similar theoretical result can be obtained by using the cosine of the sum of two angles:

cos(α + β) = cos(α)·cos(β) − sin(α)·sin(β). (24)

Considering Expressions (18) and (24) together, the following estimates are derived:

d_m·cos(m·x_j + ϑ_m) = d_m·cos(ϑ_m)·cos(m·x_j) − d_m·sin(ϑ_m)·sin(m·x_j). (25)

Using Formulas (25), the relationship between the value d_m and the Fourier coefficients a_m, b_m can be established as follows:

a_m = d_m·cos(ϑ_m), b_m = −d_m·sin(ϑ_m), d_m = √(a_m² + b_m²). (26)

The value of the angle ϑ_m is calculated using Formulas (25) as well:

ϑ_m = arccos(a_m / d_m). (27)

Thus, Expression (18) is equivalent to the following cosine ratio:

s(x_j) = a_0/2 + Σ_{m=1}^{nf} d_m·cos(m·x_j + ϑ_m). (28)

The value a_0 is not a randomly organized constant. For a white noise generator it can be set to 0 or any other number. Since the values of the amplitudes have to be constant in white noise, they can be chosen based on natural tests, or set equal to a universal meaning such as one, for example:

d_m = 1, m ∈ [1, nf]. (29)

Frequency components are also not random variables in Fourier space. Therefore, only the stochastic phases of the sine (23) or cosine (28) signals can provide the stochastic capabilities of a white noise signal at Fourier frequencies. This article discusses the stochastic phases ψ_m and ϑ_m, which are uniformly distributed in the interval [−π/2, +π/2], since the functions arcsine and arccosine are used in Formulas (22) and (27). Now it is necessary to assign the corresponding generator of uniformly distributed values for the stochastic phases ψ_m and ϑ_m. In our previous articles [45][46][47][48][49][50], we thoroughly explored the capabilities of new congruential and twister generators, which ensure absolute completeness and uniformity of integer random variable distribution.
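The equivalence of the two-amplitude pair form (18) and the single-phase sine form (23) can be verified numerically with a small Python sketch; here atan2 is used in place of the article's arcsin as a quadrant-safe way to obtain the same phase, which is an implementation choice of this sketch:

```python
import math

a, b = 0.6, 0.8                 # an illustrative Fourier pair (a_m, b_m)
d = math.sqrt(a * a + b * b)    # intensity d_m = sqrt(a_m^2 + b_m^2)
# Phase satisfying a = d*sin(psi) and b = d*cos(psi):
psi = math.atan2(a, b)

for x in (0.0, 0.5, 1.3, 2.9):  # arbitrary observation points
    lhs = d * math.sin(x + psi)                 # single-phase sine form
    rhs = a * math.cos(x) + b * math.sin(x)     # two-amplitude pair form
    assert abs(lhs - rhs) < 1e-12
```

Since d·sin(x + ψ) = d·cos(ψ)·sin(x) + d·sin(ψ)·cos(x), both forms coincide term by term, which is exactly the content of Formulas (19)-(21).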
Based on the main principles outlined in [45][46][47][48][49][50], here we have established a new generator, which is specially designed to implement the white noise process with the above-mentioned properties. In accordance with this, below is the basic generator cDeonYuliCongBase62, and on its basis the derived congruential generator cDeonYuliCongSequence62. Together they provide the absolute completeness of sequences of uniform integer random variables of arbitrary size.
To design these generators, it is necessary, first of all, to choose the size or number of bits in an integer random variable. Below is presented the base class cDeonYuliCongBase62, in which the number of bits w of the random integer variables is specified. Their amount can be arbitrary in the range 2 ≤ w ≤ 62. The quantity of random variables N in one sequence is N = 2^w. Class cDeonYuliCongBase62 is made in the C# programming language in Microsoft Visual Studio. This class is located in a separate namespace file nsDeonYuliCongBase62. This base class is the basis for creating the uniform sequences in the derived class cDeonYuliCongSequence62 having congruential parameters a and c. In a congruential sequence of N random elements the adjacent random variables x_k and x_{k+1} are calculated using the following formula:

x_{k+1} = (a·x_k + c) mod N. (30)

The parameter a in (30) has the following property:

a mod 4 = 1. (31)

The parameter c in (30) is an odd number:

c mod 2 = 1. (32)

For congruential generation in accordance with Formula (30), compliance with properties (31) and (32) is mandatory. Below is the derived class cDeonYuliCongSequence62, in which these properties are checked and subsequent congruential random variables are generated. This class is located in a separate namespace file nsDeonYuliCongSequence62.
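A hedged Python sketch of the congruential recurrence (30) under the conditions (31)-(32) demonstrates the full-period behavior; the parameter values below are illustrative only and are not the ones used in the article's classes:

```python
def cong_sequence(w, a, c, x0=0):
    """Yield the congruential sequence x_{k+1} = (a*x_k + c) mod 2^w.
    With a mod 4 == 1 and c odd, every value in [0, 2^w - 1] appears
    exactly once per period (Hull-Dobell conditions for modulus 2^w)."""
    N = 1 << w
    assert a % 4 == 1 and c % 2 == 1   # properties (31) and (32)
    x = x0
    for _ in range(N):
        yield x
        x = (a * x + c) % N

seq = list(cong_sequence(w=4, a=5, c=3))
assert sorted(seq) == list(range(16))  # complete and uniform: each value once
```

This completeness is what the article means by "absolute completeness of sequences of uniform integer random variables": the period equals N and no value is omitted.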

Construction and Results
When converting a discrete signal into the sum of Fourier frequencies, the following fundamental relationship is realized between the minimum amount of countings n and the maximum amount of Fourier frequencies nf on a circle of unit radius 2π long:

n = 2·nf + 1. (33)

As an example, let us set arbitrarily the quantity of frequencies in white noise equal to nf = 16. Then, by condition (33), each signal can contain the following amount of countings: n = 2·nf + 1 = 2·16 + 1 = 33. According to Expressions (23) or (28), each spectral frequency ω_m has its own phase ψ_m or ϑ_m ∈ [−π/2, +π/2], respectively. To generate random phases, let us take stochastic sequences consisting of N = nf = 2^w integer congruential variables x ∈ [0, N − 1] = [0, 2^w − 1]. In binary form, each integer random variable has the following number w of bits:

w = log₂(nf). (34)

It should be emphasized that the congruential generator only works with an integer number of bits. From Expression (34) it follows that the quantity of Fourier frequencies should correspond to the following power function:

nf = 2^w. (35)

The phase interval [−π/2, +π/2] of length π is divided into N subintervals of length Δψ each:

Δψ = π / N. (36)

Using Expressions (35) and (36), the random phases ψ_m or ϑ_m are determined by the congruential technology [45][46][47][48][49][50] using the corresponding integer random variable x_m:

ψ_m = −π/2 + x_m·Δψ. (37)

Stochastic phases (37) set the random nature of the values of the countings in the white noise generator.
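The mapping (34)-(37) from congruential integers to phases can be sketched as follows; whether each phase lands on the left endpoint or the midpoint of its subinterval is an assumption of this sketch:

```python
import math

nf = 16              # quantity of Fourier frequencies, nf = 2^w with w = 4
N = nf               # one congruential integer per frequency
dpsi = math.pi / N   # subinterval length of the phase interval, as in (36)

def phase(x: int) -> float:
    # Map an integer x in [0, N-1] onto [-pi/2, +pi/2), as in (37).
    return -math.pi / 2 + x * dpsi

phases = [phase(x) for x in range(N)]
assert all(-math.pi / 2 <= p < math.pi / 2 for p in phases)
```

Because the underlying integer sequence is complete and uniform, the N phases form a uniform grid over the whole interval, with no phase value omitted or repeated.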
The stochastic values of countings together with phases for the white noise are computed in the derived class cDeonYuliCongPhase62A below, using the base classes cDeonYuliCongSequence62 and cDeonYuliCongBase62 from the previous section. Joint testing of these classes is carried out later using the sine-based technology (23) in the program P070401.

    for (int k = 1; k < ns; k++) // autovector shift
    {
        for (int j = 0; j < ns - 1; j++)
            z[k, j] = z[k - 1, j + 1];
        z[k, ns - 1] = PH.PhaseSinNext(t);
        t += PH.dxs;
    }

After starting the program P070401, a complete uniform congruential sequence cong of integers appears on the monitor (presented in the listing below). Next in the listing is a stochastic sequence of congruential phases psi, which is derived from the uniform integers cong. This sequence is used to compute the countings of signal S0 using the technique of sine harmonics (23). Further, on the basis of the initial signal S0 the matrix Z of autovectors is created. Each autovector is located on the corresponding row of the matrix Z. To shorten this listing, missing numbers and matrix rows have been replaced with a dash. The result of this listing allows the movement from the matrix of autovectors Z to centering them, with the same line-by-line arrangement, in the matrix V. This is fulfilled by analogy with Formulas (9) and (10) regarding the mathematical expectation of each autovector separately. Below is the program P070402, which shows how to implement this. The analysis of the results received above shows that even such a limited listing provides evidence that the resulting matrix R is closer to the statistical independence of white noise (A = σ²·I) than the same matrix obtained earlier in the program P070101 using the standard function Random.Next().
At the same time, the main advantage of this outcome is that the developed congruential phase generator in the program P070403 creates anew the Fourier frequency spectrum with equal intensities at the stochastic phases. The data presented in the last listing demonstrate that the uniform white noise was indeed achieved with an almost independent autocorrelation matrix for the autovectors of the original signal.

Discussion
After obtaining the correlation matrices, the first thing which should be analyzed is whether both experiments in the programs P070101 and P070403 ensure the realization of the first fundamental property of uniform white noise (considered here earlier in subsection White Noise Autocorrelation Matrix) by the equality of all intensities of the internal Fourier spectrum. Let us discuss this issue further in more detail.
When considering the frequency properties of discrete information signals, usually the Fourier polynom (18) is used, with the number of countings n and the quantity of internal frequencies nf in the original signal S_0. The amplitudes of the cosine and sine components are calculated from the meanings of the countings s(x_j) using the following Euler-Fourier formulas:

a_0 = (2/n)·Σ_{j=0}^{n−1} s(x_j), (38)

a_m = (2/n)·Σ_{j=0}^{n−1} s(x_j)·cos(m·x_j), (39)

b_m = (2/n)·Σ_{j=0}^{n−1} s(x_j)·sin(m·x_j). (40)

The process of white noise generation can be considered successful if the obtained countings s(x_j) of the signal admit the transformations (38)-(40) into the Euler-Fourier coefficients a_m and b_m. It should now be checked whether the intensities are the same at all frequencies ω_m. This is required by the first property in the designation of white noise, i.e., the demand of uniform distribution of intensities at all internal frequencies of the signal. This check should be carried out both for the white noise of the function Random.Next() in the program P070101 and for the congruential white noise in the program P070401.
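The Euler-Fourier recovery step can be sketched in Python (the article's Fourier() function is written in C#); here a single known harmonic is fed in and its coefficient pair is recovered, under the assumed uniform placement of the observation points on [−π, +π):

```python
import math

def euler_fourier(s, nf):
    """Recover the pairs (a_m, b_m), m = 1..nf, from n = 2*nf + 1 countings
    s(x_j) observed at uniformly spaced points x_j on [-pi, +pi)."""
    n = len(s)
    xs = [-math.pi + 2 * math.pi * j / n for j in range(n)]
    pairs = []
    for m in range(1, nf + 1):
        a = 2 / n * sum(sj * math.cos(m * x) for sj, x in zip(s, xs))
        b = 2 / n * sum(sj * math.sin(m * x) for sj, x in zip(s, xs))
        pairs.append((a, b))
    return pairs

# A pure second harmonic should come back as (a_2, b_2) = (0, 1):
n = 33
s = [math.sin(2 * (-math.pi + 2 * math.pi * j / n)) for j in range(n)]
a2, b2 = euler_fourier(s, 16)[1]
assert abs(a2) < 1e-9 and abs(b2 - 1.0) < 1e-9
```

For a white noise signal of the form (23) with d_m = 1, the same procedure should return intensities √(a_m² + b_m²) that are equal at every internal frequency.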
Below is the program P070501, which uses the random process generated earlier in the program P070101 for the white noise signal using the function Random.Next(). The derivable spectral amplitudes are calculated with the help of the sine technique using the function Fourier(), which is composed according to the Euler-Fourier Formulas (38)-(40). Their analysis suggests that the function Random.Next() does not satisfy the first property about the equality of the amplitudes of the internal frequency spectrum of white noise. Thus, taking into account the limited level of the corresponding matrix of autocorrelation coefficients R (presented in the subsection Introduction), and also the lack of equality of the amplitudes in the internal spectrum of the signal frequencies, it becomes apparent that the standard function Random.Next() generates sequences which are relatively far from a satisfactory quality of the white noise process.
Next, it is time to check the amplitudes of the internal frequency spectrum of the congruential white noise generator proposed in the current article. Below is the program P070502, which uses the same number of countings. They were created earlier in the program P070401 (in subsection Construction and Results) by using the congruential technology [45][46][47][48][49][50] in the generator cDeonYuliCongPhase62A. The listing of the results demonstrates that the signal contains n = 33 countings and nf = 16 Fourier frequencies. The values of the countings are in the array s0; they are taken from the signal obtained earlier (subsection Construction and Results) using the congruential phase generator cDeonYuliCongPhase62A in the program P070401. Further, the listing includes the lines AFourier with the derived intensities of the internal phase sine frequencies. These intensities were calculated using the Euler-Fourier Formulas (38)-(40) with the subsequent application of the elementary transformation d_m = √(a_m² + b_m²). All the received values close to 0.7 finely match the first property of the uniform white noise process. In the last part of this listing, the phases psiFourier show coincidence with the congruential phases in the program P070403. It should also be noted that the application of the Euler-Fourier transform (38)-(40) completely recovers the generation of the white noise process received in the class cDeonYuliCongPhase62A.
Careful analysis of all the results above confirms that the generator cDeonYuliCongPhase62A does indeed provide equal amplitudes of all internal phase frequencies, and that it ideally satisfies the first property of the uniform white noise process. Thus, taking into account the better approximation of the matrix R of autocorrelation coefficients obtained in the program P070403 to the same matrix of theoretical white noise, and also taking into consideration the ideal coincidence of the intensities of the internal phase frequencies, it should be recognized that the congruential phase generator proposed here does indeed ensure a sufficiently high quality of generation of white noise signals, which closely approximate true natural white noise.

Conclusions
Analysis of the source material shows that the algorithms of the commonly used generators of white noise signals have a low stochasticity of countings in the given observation intervals. Based on this, in this article instrumental algorithmic tools for generating statistically independent white noise signals have been proposed. The designed techniques allowed for the creation of a new phase signal generator with an improved matrix of autocorrelation coefficients. The mathematical expressions used confirm that at the Fourier frequencies a one-dimensional phase random variable can be obtained. As a result, the derived phase generator cDeonYuliCongPhase62A made it possible to create information signals with a better approximation to the uniform white noise process. The simulation outcomes verify that the information signals received have the properties of white noise signals with equal amplitudes at all internal frequencies and uniformly distributed random phases. These results can be applied in the many areas where white noise processes are used.
Author's Contributions: All the authors equally contributed to this work. All authors have read and agreed to the published version of the manuscript.

Funding: The authors have no support or funding to report.
Institutional Review Board Statement: Not applicable.

Informed Consent Statement: Not applicable.
Data Availability Statement: Not applicable.