A Stochastic Filling and Modeling Algorithm of Non-Equal Diameter Particles with Specified Probability Density for Porous Permeable Materials

In this paper, a model generation algorithm for non-equal diameter particles with a specified probability density distribution is proposed. Taking into account the randomness of particle size and distribution, a compactness algorithm first packs the particles tightly; a filtering algorithm then makes the spatial distribution of the tightly compacted particles follow the specified probability density at the specified volume fraction. The computational efficiency and effectiveness of the algorithm are verified, and the effects of particle size and volume fraction on the distribution are analyzed. Finally, the proposed model is used to study the permeability of a titanium porous filter cartridge. The results show that the sizes and locations of the particle samples generated by the proposed algorithm follow the specified probability distributions, and that the volume fraction can be adjusted as required. Compared with traditional algorithms, the computational effort and complexity are reduced. The resultant model can be used to study the permeability of porous materials and provides modeling support for structural optimization and further simulation of porous materials.


Introduction
Many new functional materials, such as titanium porous filter cartridges and metal-matrix composites, are bonded and compressed from granular materials whose pore structure follows a certain probability distribution. In order to obtain the mesoscopic mechanism of materials with a specific random distribution of particles, it is necessary to study the relationship between the properties of granular materials and the microstructural morphological characteristics of the formed materials, so as to carry out an effective structure-performance design of the material [1,2]. However, because of the complex mesoscopic pore structure, it is difficult to visualize mesoscopic mechanisms, such as seepage properties, through experiments [3]. Therefore, a meso-model with a specified probability density distribution is built to provide a reference for improving the accuracy and efficiency of porous media meso-modeling.
As for the generation of particle distributions, Primera et al. [4] proposed a triangulation method to describe pore size, but this method worked on two-dimensional slices and could not fully represent the three-dimensional morphology of the pores. Tory et al. [5] continuously filled a fixed space with the sequential addition method and completed the sequential filling of the powder layer while considering the stability of the powder particles and the interaction forces between particles. Jodrey et al. [6] used a non-sequential rearrangement algorithm that allocates positions to large-diameter particles first and then to small-diameter particles, after which the degree of overlap is reduced by moving particles. Cundall et al. [7] treated the stacking process of particles as analogous to the flow of a dynamic fluid based on the discrete element method and realized the stacking of powder particles. Bezrukov et al. [8] described two algorithms for generating random packings of spheres with an arbitrary diameter distribution: the force-biased algorithm, which produces isotropic packings of very high density, and the Jodrey-Tory sedimentation algorithm, which simulates the successive packing of a container with spheres under gravity and yields packings of lower density and weak anisotropy. Although the above algorithms can achieve the stacking of powder particles, they fall short in the randomness of the spatial distribution, and the filling efficiency is low. At the same time, these algorithms must constantly check for interference between particles, which entails tedious calculation.
In addition, Bailakanavar et al. [9] used the random sequential adsorption algorithm to determine whether the current particle interfered with the existing particles, retained the data of non-interfering particles, and then established the relevant model. Xin et al. [10] used a perturbation algorithm that assigns regularly distributed positions to all particles and applies a random-distance perturbation to each particle; the position information of the particles was retained after interference judgment. The above generation algorithms can ensure the randomness of the spatial distribution to a certain extent. However, because the position of each specified particle must be compared with those of all existing particles during the distribution process, the computational burden is high, which limits the efficiency of these methods.
The transport properties of sphere packings include electrical conductivity and permeability, among which permeability is one of the most studied [11][12][13]. Experimental and theoretical studies on permeability have achieved certain results, such as Darcy's law for fluid flow in porous media and the empirical Kozeny relation for permeability [14]. Because the pore structure of a sphere packing is as complex as that of other porous media, it is difficult to study fluid flow in porous media by traditional methods [15].
To solve the above problems, this paper proposes a method to generate a meso-model of non-equal diameter particles with a specified probability density distribution. The particle filling method avoids the tedious calculation caused by interference detection, and since the movement of each particle is only related to a small number of particles in its neighborhood, the modeling efficiency is significantly improved. The efficiency and effectiveness of the algorithm are verified through a comparison of the computational cost with traditional methods and a hypothesis test, illustrating that the proposed generation algorithm can effectively build a meso-model for porous media and titanium foam with a specified probability density distribution. The lattice Boltzmann method is used to calculate the permeability of porous materials with different probability density distributions. The results show that the porous material meso-model established by the proposed algorithm can provide a basis for further numerical simulation of fluid permeability.

Generation Algorithm for Meso-Distribution Particles
Considering the randomness of the particles in size and distribution, Python is employed to implement the algorithm, which generates a particle meso-model according to the specified probability density. The proposed algorithm consists of two parts. The first part is the compactness algorithm, which packs the particles, whose sizes follow a specified probability density distribution, closely together. The second part is the filtering algorithm, which distributes the particles in space according to the specified probability density and achieves the specified volume fraction as required.

Compactness Algorithm for Particles
Let i represent the direction of the x axis, j represent the direction of the y axis, and k represent the direction of the z axis, respectively, corresponding to columns, rows and layers. The particles of random sizes are generated in the cube area according to the direction of the column, row and layer successively, and the tight compactness is realized by adjusting the position between the particles.

Formation of Non-Equal Diameter Particles
According to the actual demand, the particle size is made to follow the specified probability density by defining an appropriate sphere diameter range.
Let X_r denote the particle radius, which the algorithm generates randomly distributed on the interval [a, b]. The expectation of X_r is E(X_r) = (a + b)/2; accordingly, the average radius is r_avg = (a + b)/2. At this step, the algorithm places the random-sized particles roughly in a 3D matrix pattern, with particle positions perturbed by random noise. To place the first particle with radius r_1 near the coordinates (0, 0, 0), the algorithm considers the particle radius, the cube area boundary restriction and the random noise. The resulting center coordinates of the particle are denoted (x_1, y_1, z_1), where x_1 = r_1 + d_noise, y_1 = r_1 + d_noise and z_1 = r_1 + d_noise. The second particle with radius r_2 is placed at coordinates (x_2, y_2, z_2), where x_2 = x_1 + r_1 + r_2 + d_noise, y_2 = r_2 + d_noise and z_2 = r_2 + d_noise; the third particle with radius r_3 is placed at (x_2 + r_2 + r_3 + d_noise, r_3 + d_noise, r_3 + d_noise), etc. The notation d_noise stands for the random noise that the algorithm introduces; it takes a different value each time it appears. The particle placement process continues until the first row on the xy-plane is filled. Next, the algorithm places the second row close to the first row on the xy-plane, and the row placement process repeats until the first xy-layer is filled. Afterwards, the second particle layer is placed closely above the first layer, and the layer placement process repeats until there are enough layers. Prior to placing any particles, the algorithm estimates the number of particles to generate by considering the scenario where the cube area is filled with particles of equal radius r_avg. Assume the edge lengths of the cube along the x, y and z axes are L_x, L_y and L_z, respectively; the number of particles in each row, the number of rows in each layer and the number of layers are estimated as N_x = L_x/(2r_avg + ρ), N_y = L_y/(2r_avg + ρ) and N_z = L_z/(2r_avg + ρ), respectively.
The notation ρ stands for the upper bound of the gap distance between adjacent particles and is enforced by the compactness step (Section 2.1.3). Since the density of the particles is increased by the compactness step and the goal is to distribute the particles over the entire cube area following a given probability density, the estimated values N_x, N_y and N_z are further scaled up by an overfill factor f_ov > 1. After the compactness step, the excess particles that remain outside the cube boundaries are easily pruned.
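Under these definitions, the particle-count estimate can be sketched as follows (the function name and the default overfill factor are illustrative assumptions, not values from the paper):

```python
import math

def estimate_counts(Lx, Ly, Lz, a, b, rho, f_ov=1.2):
    """Estimate the number of columns, rows and layers needed to overfill
    a cube of edge lengths Lx, Ly, Lz with particles of average radius
    r_avg = (a + b) / 2; f_ov > 1 is the overfill factor, and the excess
    particles are pruned after the compactness step."""
    r_avg = (a + b) / 2.0
    spacing = 2 * r_avg + rho          # worst-case centre-to-centre pitch
    nx = math.ceil(f_ov * Lx / spacing)
    ny = math.ceil(f_ov * Ly / spacing)
    nz = math.ceil(f_ov * Lz / spacing)
    return nx, ny, nz
```

For example, a 5 mm cube with radii on [0.15, 0.25] mm and a 0.02 mm gap bound yields about 15 columns, rows and layers at the default overfill factor.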
To ensure that each newly generated particle and the last generated particle are adjacent in the same row and layer, the adjacent spacing is perturbed only within a very small range. The advantage of this method is that the detection of particle interference can be omitted while the randomness of the positions is retained, which significantly reduces the computational effort of the algorithm. The logical steps are as follows: (1) By specifying a minimal disturbance init_noise, a segment of white noise is added; its power spectral density is constant over the whole frequency domain, as shown in Figure 1 (when i = 1, the x coordinate of the center is drawn uniformly from [x_min, x_min + init_noise], where x_min is simply the radius of the particle and init_noise is a preselected constant that controls the amplitude of the noise). Otherwise, the minimum coordinate x_min of the newly generated particle in row j of the kth layer is set equal to the previous maximum x coordinate prev_x_max, plus the radius of the new particle, before taking the corresponding disturbance distance.
(2) Similarly, when j = 1, the y coordinate of the center is drawn uniformly from [y_min, y_min + init_noise], where y_min is simply the particle radius. Otherwise, since the (j − 1)th row of particles in layer k has been determined, the minimum coordinate y_min of the newly generated particle in layer k is set equal to the maximum y coordinate prev_y_max of the (j − 1)th row, plus the radius of the new particle, before taking the corresponding disturbance distance.
(3) Similarly, when k = 1, the z coordinate of the center is drawn uniformly from [z_min, z_min + init_noise], where z_min is simply the radius of the particle. Otherwise, since the particles in the (k − 1)th layer have been determined, the minimum coordinate z_min of the newly generated particle is set equal to the maximum z coordinate prev_z_max of the (k − 1)th layer, plus the radius of the new particle, before taking the corresponding disturbance distance.
If the amplitude of the noise is 0.1 and the mean value is 0.05, the graph of the noise after 1000 new-particle generations can be obtained, as shown in Figure 2. The pseudocode of the placement process is as follows in Algorithm 1.
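The placement loop can be sketched in Python as follows; this is an illustrative approximation of Algorithm 1, where the function name, the (x, y, z, r) tuple layout and the uniform noise model are assumptions rather than the paper's exact implementation:

```python
import random

def place_particles(nx, ny, nz, a, b, init_noise):
    """Place nx * ny * nz spheres row by row and layer by layer, with radii
    uniform on [a, b] and small uniform positional noise on [0, init_noise]."""
    particles = []                  # list of (x, y, z, r)
    prev_z_max = 0.0                # top extent of the previous layer
    for k in range(nz):
        prev_y_max = 0.0            # far extent of the previous row
        layer_z_max = 0.0
        for j in range(ny):
            prev_x_max = 0.0        # right extent of the previous particle
            row_y_max = 0.0
            for i in range(nx):
                r = random.uniform(a, b)
                # minimum coordinate = previous extent + radius, then noise
                x = prev_x_max + r + random.uniform(0, init_noise)
                y = prev_y_max + r + random.uniform(0, init_noise)
                z = prev_z_max + r + random.uniform(0, init_noise)
                particles.append((x, y, z, r))
                prev_x_max = x + r
                row_y_max = max(row_y_max, y + r)
                layer_z_max = max(layer_z_max, z + r)
            prev_y_max = row_y_max
        prev_z_max = layer_z_max
    return particles
```

Because each new particle is offset by at least the sum of the two radii, no interference check is needed during placement.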

Compactness of Particles
Assume that the randomly generated particles are stored in a three-dimensional array M of size I_max × J_max × K_max, that n_neighbor is a small positive integer, and that ρ is a small positive value. The particles are repositioned by coordinate translation.
By calculating the distance dz between the particle and the n_neighbor neighboring particles in layer (k − 1), local neighborhood compression can be achieved, which speeds up the iterative process and compacts the particles more tightly. Similarly, dy is the distance between the particle and the particles within the n_neighbor range located in row (j − 1) of layer k, and dx is the distance between the particle and the particles within the n_neighbor range located in row j, column (i − 1) of layer k. Let dw = max(dx, dy, dz), where w ∈ {x, y, z} is the corresponding direction. Move the particle by dw/3 in the w direction, and repeat this step until dx, dy, dz < ρ.
The center coordinate of the particle P = (x, y, z) is translated by dw/3, where dw = max(dx, dy, dz) and w ∈ {x, y, z}, and a new position P′ = (x′, y′, z′) is obtained: the coordinate along the w direction is reduced by dw/3, that is, w′ = w − dw/3, while the other two coordinates remain unchanged.
According to the aforementioned methods, the particles at the initial position can be further compressed in distance to be tightly compacted.
The pseudocode of the compactness process is as follows in Algorithm 2, where j_n ∈ [j − n_neighbor, j + n_neighbor] and k_n ∈ [k − n_neighbor, k + n_neighbor].

Algorithm 2 Compactness algorithm
Assume that plane P_x, parallel to the yz-plane, intersects the x-axis at x_boundary; plane P_y, parallel to the xz-plane, intersects the y-axis at y_boundary; and plane P_z, parallel to the xy-plane, intersects the z-axis at z_boundary.
If the loop has executed more than 1000 times, report an exception and quit.

Figure 3 is the compactness model diagram that is formed after the implementation of the compactness algorithm. The particles whose size complies with the specified distribution law are packed tightly in the cube region.
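A simplified Python sketch of the compaction loop follows. For brevity it uses only the nearest predecessor along each axis rather than the full n_neighbor neighborhood of Algorithm 2, and the function name and data layout are illustrative assumptions:

```python
def compact(particles, n_i, n_j, n_k, rho=1e-3, max_iter=1000):
    """Iteratively shrink the surface gap between each particle and its
    nearest predecessor (or the wall at 0) along x, y and z, moving the
    particle by one third of the largest gap, until every gap < rho.
    `particles` maps (i, j, k) -> [x, y, z, r]."""
    for sweep in range(max_iter):
        moved = False
        for k in range(n_k):
            for j in range(n_j):
                for i in range(n_i):
                    p = particles[(i, j, k)]
                    x, y, z, r = p
                    # gap to predecessor sphere surface, or to the boundary plane
                    if i:
                        q = particles[(i - 1, j, k)]
                        dx = x - r - (q[0] + q[3])
                    else:
                        dx = x - r
                    if j:
                        q = particles[(i, j - 1, k)]
                        dy = y - r - (q[1] + q[3])
                    else:
                        dy = y - r
                    if k:
                        q = particles[(i, j, k - 1)]
                        dz = z - r - (q[2] + q[3])
                    else:
                        dz = z - r
                    dw = max(dx, dy, dz)
                    if dw >= rho:
                        moved = True        # move dw/3 along the widest gap
                        if dw == dx:
                            p[0] -= dx / 3
                        elif dw == dy:
                            p[1] -= dy / 3
                        else:
                            p[2] -= dz / 3
        if not moved:
            return sweep
    raise RuntimeError("compactness loop exceeded max_iter sweeps")
```

Since each move closes only a third of a non-negative gap, particles never overlap, and the sweeps terminate once all gaps fall below ρ.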


Filtering Algorithm
After the compactness algorithm, the particles with a random size distribution can be tightly compacted and a higher volume fraction of bulk density can be obtained. However, the spatial distribution cannot meet the random distribution of the assigned probability density, and the volume fraction cannot be adjusted according to the demand. The filtering algorithm addresses this. This paper makes the following assumptions: (1) the volume of the cube in which the particles are located is v; if the particles are uniformly distributed, the probability density is the same throughout the cube, u(x, y, z) = 1/v; (2) after the execution of the compactness algorithm and before the execution of the filtering algorithm, the total volume of the particles is v_g; (3) if the target volume fraction is α, the total volume of the particles corresponding to this volume fraction is αv; (4) the target probability density function is f(x, y, z).
Given that a small zone with volume dv is centered at (x, y, z) in the cube, the probability of retaining the particles near (x, y, z) by the filtering algorithm is p(x, y, z) = αv·f(x, y, z)/(v_g·u(x, y, z)). Substituting the hypothesis u(x, y, z) = 1/v and simplifying, we obtain p(x, y, z) = αv²·f(x, y, z)/v_g. Assuming that the tightly packed particles are stored in a three-dimensional array of size I_max × J_max × K_max, applying this retention probability filters out the particles that do not conform to the given probability density distribution in space.
The pseudocode is as follows in Algorithm 3. Figure 4 shows examples of common probability density functions (uniform, normal and exponential), as generated by the filtering algorithm.
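A minimal sketch of the filtering step, assuming the retention probability p(x, y, z) = αv²·f(x, y, z)/v_g that follows from the stated assumptions; the function and argument names are illustrative:

```python
import random

def filter_particles(particles, v, v_g, alpha, f):
    """Keep each packed particle with probability
    p = alpha * v**2 * f(x, y, z) / v_g, where f is the target probability
    density, v the cube volume, v_g the packed particle volume and alpha
    the target volume fraction."""
    kept = []
    for (x, y, z, r) in particles:
        p = alpha * v * v * f(x, y, z) / v_g
        if random.random() < min(p, 1.0):   # clamp in case p exceeds 1
            kept.append((x, y, z, r))
    return kept
```

For a uniform target density f = 1/v, the retention probability reduces to αv/v_g, so the expected retained particle volume is v_g·(αv/v_g) = αv, which matches the target volume fraction.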


Influence of Parameters on the Algorithm
The parameters of the algorithm directly affect its performance. In this section, the influence of the design parameters on the particle filling and distribution effect in the application of this algorithm is discussed. The design parameters mainly include the particle size and the volume fraction. This section presents the distribution and filling effect and summarizes the related influence.

Distribution of Particles with Different Size Ranges with Same Probability Density
Taking the uniform distribution in the x, y and z directions as an example, the distribution of the particles with different size ranges is simulated by using the proposed algorithm. The volume fraction is 5%, and the volume of the cube is 125 mm³. The total volume of the particles is equal in all cases. The distribution results are shown in Table 1. Table 1. Distribution effects of particles with different size ranges with the same probability density.


As can be seen from Table 1, when the volume fraction remains unchanged, the number of particles decreases significantly with the increase in particle size, and the distribution becomes sparser.

Particle Distribution with the Same Probability Density and Different Volume Fractions
In this section, the uniform distribution in the x, y and z directions is taken as an example to simulate the distribution of particles over different volume fraction ranges by using the proposed algorithm. The radii of the particles are all between 0.15 and 0.25 mm. The total volume of the cube is 125 mm³. The distribution results are shown in Table 2.
As can be seen from Table 2, when the radius size of the particles remains unchanged, the volume of the particles increases with the increase in volume fraction, but the maximum volume fraction of the particles cannot exceed the volume fraction of the tightest compactness that is formed by the compactness algorithm.


Algorithm Analysis
The effectiveness of the algorithm is analyzed in two regards, namely, the efficiency and the randomness. First, the efficiency of the algorithm is demonstrated from the perspective of computational complexity. Secondly, the sample that is generated by the algorithm is used to verify the randomness of the random algorithm in particle size and location by the statistical hypothesis testing method.

Computational Efficiency
Taking the uniform distribution as an example, it is assumed that n particles obey the random distribution, and the efficiency analysis of the take-and-place algorithm and the algorithm proposed in this paper is as follows: (1) The major steps of the take-and-place algorithm are as follows: randomly generate the position of the new particle and determine whether there is any interference between the current particle and the existing particles. If there is no interference, the relevant data of the current particle are saved, and the particle is accepted; otherwise, the algorithm needs to try new locations again and again. Therefore, the take-and-place algorithm not only reduces the randomness of sample generation, but also incurs a computational complexity of c_1(m_1 + 2m_2 + … + (k − 1)m_k + … + (n − 1)m_n), which is far greater than O(n²). To clarify, m_k is the number of repeatedly generated positions when filling the kth particle, and c_1 is a positive constant independent of n. (2) As mentioned in Section 2, the algorithm that is proposed in this paper consists of two parts. In the first part, the position of each particle is only related to a small number of particles in its neighborhood when the initial position is generated or the tightness of the particles is adjusted; therefore, the computational complexity is c_1′·n_1, where n_1 is the number of particles and c_1′ is a positive constant independent of n_1. In the second part, each particle is processed independently, and the complexity is c_2′·n_1, where c_2′ is a constant independent of n_1. Accordingly, the overall computational complexity of our algorithm is O(n).
The computational complexity of the particle random generation algorithm that is proposed in this paper is significantly lower than that of the traditional algorithm, and it is highly efficient.

Random Test of Particle Sizes
At present, most simulations focus on the case of a uniform random distribution, so the random characteristics of uniformly and randomly distributed samples are studied, and a χ² test [16] is used to evaluate the uniformity of the samples. The test statistic is χ² = Σᵢ (n_i − n/m)²/(n/m), summed over i = 1, …, m, where n is the total number of random numbers, m is the number of intervals, and n_i is the number of samples falling in the ith interval. In the uniformity test of the algorithm, we generated 3485 samples whose radii lie on the interval (0.05, 0.15). The percentages of each radius are shown in Figure 5.
The particle sizes are evenly distributed in each interval. With 9 degrees of freedom, the calculated χ² value of radius R is 11.72, and the asymptotic significance is 0.230, which is greater than 0.05. According to statistical theory and the χ² distribution table, the distribution of particle radius R obtained by the algorithm can be considered an ideal uniform random distribution. The test statistics are shown in Table 3.
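The χ² uniformity test above can be reproduced with a short stdlib-only helper; the function and its names are an illustrative sketch, with m = 10 equal-width bins matching the 9 degrees of freedom quoted in the text:

```python
def chi_square_uniformity(samples, lo, hi, m=10):
    """Pearson chi-square statistic for uniformity of `samples` over m
    equal-width bins on (lo, hi): sum of (n_i - n/m)^2 / (n/m)."""
    n = len(samples)
    expected = n / m
    counts = [0] * m
    for s in samples:
        # map the sample into one of the m bins, clamping the upper edge
        i = min(int((s - lo) / (hi - lo) * m), m - 1)
        counts[i] += 1
    return sum((c - expected) ** 2 / expected for c in counts)
```

The resulting statistic is compared with the 5% critical value of the χ² distribution with m − 1 = 9 degrees of freedom (16.92); values below it, such as the 11.72 reported above, are consistent with uniform randomness.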

Random Test of Particle Position
Taking the normal distribution as an example, the Shapiro-Wilk test [17] and the Kolmogorov-Smirnov test [18,19] are used to test the normality of the samples generated by the random algorithms; the compactness algorithm is used to make the sphere-centre coordinates of 301 particles normally distributed along the x-axis. Table 4 shows the results of the normality test. As can be seen from Table 4, the sample data did not show statistical significance (p > 0.05), which means that the null hypothesis (the data are normally distributed) is accepted and the sample has the characteristic of normality. The histogram of the sample distribution along the x-axis is shown in Figure 6.
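The Kolmogorov-Smirnov statistic used in this check can be computed without external libraries; the sketch below is an assumed one-sample implementation against a normal reference distribution, not the paper's test code:

```python
import math

def ks_statistic_normal(xs, mu, sigma):
    """One-sample Kolmogorov-Smirnov statistic D: the largest gap between
    the empirical CDF of xs and the N(mu, sigma^2) CDF, evaluated via erf."""
    cdf = lambda x: 0.5 * (1.0 + math.erf((x - mu) / (sigma * math.sqrt(2.0))))
    xs = sorted(xs)
    n = len(xs)
    d = 0.0
    for i, x in enumerate(xs):
        f = cdf(x)
        # the empirical CDF jumps from i/n to (i+1)/n at each sorted sample
        d = max(d, f - i / n, (i + 1) / n - f)
    return d
```

At the 5% level, normality is rejected only when D exceeds the critical value, which is approximately 1.36/√n for large samples; small D values correspond to the p > 0.05 outcome reported in Table 4.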

Permeability of Porous Media
The titanium porous filter cartridge is made of porous titanium metal filter material by the powder metallurgy method. Its internal pores are curved and crisscrossed, and the filtering mechanism is typical deep filtration. Permeability is the ability of a porous medium to allow fluid matter to pass through. In this section, the lattice Boltzmann method is used to study the flow of fluid under the condition of mass particle filling, and the influence of the initial particle distribution on fluid permeability, by using the meso-model of porous media constructed by the above algorithm.

Lattice Boltzmann Method
The motion of a fluid can be described by a set of partial differential equations, such as the Navier-Stokes equations, which are highly nonlinear in most cases, so analytical solutions are very difficult to obtain. The lattice Boltzmann method obtains numerical solutions of the fluid motion equations by means of discretization. It can be regarded as a special discrete scheme of the continuous Boltzmann equation, as shown in the following formula:
g_i(x + e_i δ_t, t + δ_t) − g_i(x, t) = Ω_i(x, t)
where g is the discrete distribution function, e is the discrete velocity, i is the index of the velocity direction, δ_t is the discrete time step, t is the current time, x is a point on the grid, and Ω is the change caused by collision.
According to the operator proposed by Bhatnagar, Gross, and Krook [20], Ω_i(x, t) can be replaced as shown in the following formula:
Ω_i(x, t) = −(1/τ)[g_i(x, t) − g_i^eq(x, t)]
where g_i^eq is an equilibrium distribution function to be determined and τ is the relaxation time. The DdQm model proposed by Qian et al. [21] is the basic lattice Boltzmann model, where d represents the d-dimensional space and m represents the number of discrete lattice velocities. In this paper, the velocity is discretized into 19 directions in three-dimensional space (the D3Q19 model), as shown in Figure 7. The equilibrium equation [21] of this model is
g_i^eq = ω_i ρ_m [1 + (e_i · σ)/C_s² + (e_i · σ)²/(2C_s⁴) − σ²/(2C_s²)]
where C_s is the lattice speed, ω_i is the weight coefficient, σ is the fluid velocity, and ρ_m is the fluid density. The lattice speed C_s is as follows:
C_s = c/√3
where c is the ratio of the grid step to the time step, and its value is 1. The weights ω_i of the D3Q19 model are given by the following formula:
ω_0 = 1/3; ω_i = 1/18 for i = 1, ..., 6 (axis directions); ω_i = 1/36 for i = 7, ..., 18 (diagonal directions).
Let g_i(x, t) be the distribution function at time t, lattice point x, and velocity e_i; then the evolution equation of the distribution function is
g_i(x + e_i δ_t, t + δ_t) = g_i(x, t) − (1/τ)[g_i(x, t) − g_i^eq(x, t)]
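The D3Q19 velocity set, weights, and BGK equilibrium above can be written out for a single lattice node as follows; this is a stdlib-only sketch using the text's symbols (ρ_m, σ, C_s² = 1/3), not the authors' solver:

```python
# D3Q19 weights: rest particle, 6 axis directions, 12 face-diagonal directions
W = [1/3] + [1/18] * 6 + [1/36] * 12

# D3Q19 discrete velocities e_i (rest, +-x, +-y, +-z, then the 12 diagonals)
E = [(0, 0, 0),
     (1, 0, 0), (-1, 0, 0), (0, 1, 0), (0, -1, 0), (0, 0, 1), (0, 0, -1),
     (1, 1, 0), (-1, -1, 0), (1, -1, 0), (-1, 1, 0),
     (1, 0, 1), (-1, 0, -1), (1, 0, -1), (-1, 0, 1),
     (0, 1, 1), (0, -1, -1), (0, 1, -1), (0, -1, 1)]

def equilibrium(rho_m, sigma, cs2=1/3):
    """BGK equilibrium g_i^eq = w_i*rho*[1 + e.u/cs2 + (e.u)^2/(2 cs2^2)
    - u^2/(2 cs2)] for one lattice node; sigma is the velocity vector."""
    u2 = sum(c * c for c in sigma)
    out = []
    for w, e in zip(W, E):
        eu = sum(a * b for a, b in zip(e, sigma))
        out.append(w * rho_m * (1 + eu / cs2 + eu * eu / (2 * cs2 * cs2) - u2 / (2 * cs2)))
    return out
```

By construction, summing g_i^eq over all 19 directions recovers the density ρ_m, and the first velocity moment recovers the momentum ρ_m σ, which is what makes the scheme consistent with mass and momentum conservation.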

Darcy's Law
When a single-phase flow moves through porous media at a low Reynolds number, it follows Darcy's law, also known as the Darcy model, which is one of the most basic and commonly used mathematical models for macroscopic seepage, as shown in the following equation:
σ = −(h/(ρ_l ν)) ∇p_l
where σ is the Darcy velocity, h is the permeability of the porous media, ν is the kinematic viscosity of the fluid, p_l is the fluid pressure, and ∇ is the Hamiltonian (nabla) operator. According to the distribution function, the macroscopic fluid density ρ_l and fluid velocity σ can be obtained from
ρ_l = Σ_i g_i,  σ = (1/ρ_l) Σ_i g_i e_i
The permeability of porous media can then be calculated according to Darcy's formula [22].
h = ρ_l ν σ̄ L / Δp_l
where h is the permeability of the porous media, ν is the kinematic viscosity, Δp_l is the pressure difference between the inlet and outlet, σ̄ is the average speed along the flow direction, and L is the length of the model along the flow direction.
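The permeability evaluation reduces to a one-line rearrangement of Darcy's law; the helper below is an illustrative sketch with assumed parameter names, and the values used in its check are arbitrary lattice-unit numbers, not material data:

```python
def darcy_permeability(mean_speed, rho, nu, dp, length):
    """Permeability h from the rearranged Darcy formula above:
    h = rho * nu * mean_speed * length / dp."""
    return rho * nu * mean_speed * length / dp
```

As expected from the formula, h grows linearly with the measured average speed and shrinks as the imposed pressure difference increases.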

Lattice Boltzmann Method Simulation Procedures
The lattice Boltzmann method simulation program structure is collision-migration. The specific process is as follows: (1) Setting of initial conditions; (2) Execute collision at time t; (3) Boundary processing; (4) Calculate macroscopic quantities; (5) Check whether convergence exists. If not, return to Step 2. Otherwise, go to the next step; (6) Output the result.
Suppose the fluid flows in a 4.5 × 4.5 × 4.5 mm cubic tube, as shown in Figure 8. The fluid is an incompressible single-phase flow at a low Reynolds number. The same number of lattice points is used in the x, y, and z directions in the grid division of the stacking space. The flow direction is the z direction; the top and bottom surfaces are the fluid outlet and inlet surfaces, respectively, and the other surfaces are boundary surfaces. The relaxation coefficient is fixed at 1, and the kinematic viscosity is 10⁻⁶. At the beginning, the velocity in the whole flow field is set to 0 and the density to 1. Convergence is judged as follows: if, over 50 time steps, the change in the sum of the absolute values of the velocity along the flow direction is less than 0.0001, the computation is considered converged.
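The convergence criterion in Step 5 can be expressed as a small predicate; this is a sketch under the assumption that one scalar (the field-wide sum of |v_z|) is recorded per time step, with names chosen for illustration:

```python
def converged(history, window=50, tol=1e-4):
    """Convergence test described above: the run is converged once the sum
    of |v_z| over the flow field changes by less than `tol` across the last
    `window` time steps. `history` holds that sum, one entry per step."""
    if len(history) <= window:
        return False  # not enough steps recorded yet to compare
    return abs(history[-1] - history[-1 - window]) < tol
```

In the main loop this predicate is checked after the macroscopic quantities are updated; when it returns False, control returns to the collision step.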

The Influence of the Distribution Law
The model is divided into 128³ grid points, and each grid point has 19 velocity directions. The algorithm proposed in this paper is used to generate a uniform distribution, a normal distribution (mean 3, standard deviation 0.5), and an exponential distribution (rate parameter λ = 2). The simulation results are shown in Table 5. The table shows that the average velocity and permeability of the exponential distribution are the largest, while there is little difference between the uniform and normal distributions. It should be noted that this example only calculates the permeability under different probability density distributions to verify the feasibility of the modeling algorithm. The parameters related to the probability density function, such as the direction of the distribution law and the parameters of the distribution function, have significant effects on the permeability, and their effects are often coupled with each other. Therefore, further research is needed to establish a general law before these questions can be resolved. The proposed modeling algorithm provides a model foundation and technical support for further study of these laws.

The Influence of the Numbers of Particles and Grids
In the lattice Boltzmann method, both the number of lattice points and the volume fraction of the particles are significant influencing factors. With the particle distribution law fixed as a uniform distribution, the permeability of the model is simulated for the following two cases: (1) the volume fraction of the particles is 0.55, 0.57, 0.61, and 0.63, respectively, with the number of grid points fixed at 128³; (2) the volume fraction is 0.60 and the number of grid points is 64³, 128³, 256³, and 512³, respectively. The simulation results are shown in Figures 9 and 10.

As shown in Figure 9, with the same distribution law, the permeability of the fluid decreases with the increase in the volume fraction. Figure 10 shows that, under the same distribution law, the permeability of the fluid through the same number of particles decreases as the number of lattice points increases; when the number of lattice points reaches 128³ or more, the permeability is almost unchanged.
Through analytical methods [23], the relationship between the permeability and the volume fraction of porous materials can be predicted. The permeability of a porous material is related to the minimum and maximum radii of the cross-section through the transport phase, the shortest transport path, and the volume fraction of the pores. The specific formula is as follows:
κ = (0.94 r_min + 0.06 r_max)² ε^2.14 τ^−2.44 / 8
where κ is the predicted permeability, r_min and r_max are the minimum and maximum radii of the cross-section through the transport phase, ε is the volume fraction of the pores, and τ is the shortest transport path.
It can be seen from the formula that the permeability of porous materials increases with the volume fraction of the pores. It is worth noting that the pore volume fraction ε and the particle volume fraction α are complementary, so their sum is 1. This is consistent with the trend shown by the simulation.

Conclusions
In this paper, an efficient algorithm is proposed to generate the particle meso-model by specifying the probability density distribution. The following conclusions and implications can be obtained:
1. The filling process avoids the huge calculation burden caused by continuous iterative interference detection;
2. In the process of generating the initial positions of the particles or realizing the compact stacking, the changes of each particle are only related to its small neighborhood, so the algorithm achieves a high compacting efficiency;
3. The computational complexity of the algorithm is first order, while that of the traditional algorithm is much larger than second order, which illustrates the computational efficiency of the proposed algorithm;
4. With this algorithm, the size and position of the particles can be distributed according to the specified probability density, and the volume fraction can be adjusted.
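For reference, the analytical permeability prediction of [23] quoted in this paper can be evaluated directly; the function below simply encodes that formula, and the numbers in its checks are illustrative, not measured material data:

```python
def predicted_permeability(r_min, r_max, eps, tau):
    """Analytical prediction from [23]:
    kappa = (0.94*r_min + 0.06*r_max)^2 * eps^2.14 * tau^-2.44 / 8,
    where eps is the pore volume fraction and tau the shortest transport path."""
    return (0.94 * r_min + 0.06 * r_max) ** 2 * eps ** 2.14 * tau ** -2.44 / 8
```

Because κ grows monotonically with ε, and ε = 1 − α for particle volume fraction α, the prediction reproduces the simulated trend of permeability falling as the particle volume fraction rises.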