Article
Peer-Review Record

Calculating the Binary Tortuosity in DEM-Generated Granular Beds

Processes 2020, 8(9), 1105; https://doi.org/10.3390/pr8091105
by Wojciech Sobieski
Reviewer 1: Anonymous
Reviewer 2: Anonymous
Submission received: 28 July 2020 / Revised: 28 August 2020 / Accepted: 3 September 2020 / Published: 4 September 2020
(This article belongs to the Special Issue DEM Simulations and Modelling of Granular Materials)

Round 1

Reviewer 1 Report

Review in attached file.

Comments for author File: Comments.pdf

Author Response

Response to Reviewer 1:

 

Dear Reviewer,

 

First, I would like to thank the Reviewer for the comments, some of which have helped me to improve the paper. I have addressed the comments as explained below.

Best Regards,

 

Wojciech Sobieski.

 

Comments and Suggestions for Authors

 

Description and publication suggestion

 

The manuscript describes a novel and innovative way of estimating digital (or digitized) porous media tortuosity by means of modified forms of path-finding algorithms, resulting in great reductions in computational time with respect to other methods such as the LBM.

 

The analysis and the Author's results would be of interest to the porous media community, and I think this manuscript is worthy of publication after a series of minor revisions, plus one major revision (Comment 5, below) which needs to be resolved before considering publication. My considerations follow.

 

Introduction and references

 

Comment 1) (line 16)

 

“Despite many years of investigations into the relationships between features of porous media and pressure losses, the problem is still not fully resolved.”

 

This is kind of a very strong statement. Unless we are thinking about very hard and very general problems, like the existence/smoothness of solutions to Navier-Stokes in 3D, we can at least say that there have been quite extensive studies leading to a strong theoretical foundation in the area of connecting porous structure to flow behaviour. I would enrich this statement, for example, by citing (at least superficially) the work done in the upscaling literature, which led to (among other results) an analytical derivation of the Darcy (and Darcy-Forchheimer) laws starting from the Stokes (and Navier-Stokes) equations. The two main methods are volume averaging and homogenization theory, whose main references could be, respectively, Whitaker and Hornung:

 

Whitaker, S., 2013. The method of volume averaging (Vol. 13). Springer Science & Business Media.

Hornung, U., 1996. Homogenization and porous media (Vol. 6). Springer Science & Business Media.

 

This work is still very much valid, and the works cited later by the Author (lines 34..37) are all relevant, but I'm afraid other readers, too, would be taken aback by the statement in line 16, which I would therefore complete in this way.

 

I suppose that we are thinking about different things. I agree that we have many laws which may be used to predict pressure losses in porous media. Eq. (1) has the same topology as many of them. We know that, in general, this topology is correct for quite a wide range of Reynolds numbers. We can assume or calculate the filtration velocity and state the fluid properties. However, the results also depend on the material properties (I mean here the A and B terms in Eq. (1)). An analogous case is the Young's modulus in Hooke's law. In the past I performed a few different experiments with porous columns. I tried to predict the pressure drop for a wide range of filtration velocities. To reach this aim I collected many formulas describing the A and B terms in Eq. (1). It turned out that they give very different results for the same data. In particular, the Forchheimer coefficient (the B term) differed by 7 orders of magnitude depending on the formula applied. No formula gave me an approximately correct value. After this test I did not know how to predict the pressure drop in a specific case by the use of Eq. (1) (exactly the same problem appears in the case of the formulas collected in Table 1: which of them should be used in a specific case?). Finally, I obtained the correct values of the A and B terms from an experiment. In this context I still believe that the problem mentioned in the introduction is not fully resolved. However, I removed the following sentence: "In consequence, to this day a general and universal theory related to this field has not been developed."
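For readers less familiar with this class of laws, a hedged note: assuming Eq. (1) in the manuscript follows the usual two-term Darcy-Forchheimer topology (the symbols below are illustrative and are not copied from the paper), the pressure drop separates into fluid properties and the two material coefficients A and B discussed here:

```latex
% Assumed two-term (Darcy-Forchheimer) topology of Eq. (1):
% A is the viscous (Darcy) coefficient and B the inertial (Forchheimer)
% coefficient of the porous material; mu and rho are fluid properties,
% v is the filtration velocity and L the sample length.
\[
  -\frac{\Delta p}{L} \;=\; A\,\mu\,v \;+\; B\,\rho\,v^{2}
\]
```

It is precisely these material coefficients that the empirical formulas in the literature estimate so differently.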

 

Comment 2)

 

The rest of the Introduction gives a good overview of the workplan of the manuscript. The different search algorithms (path search, path tracking, A*) are very well explained in the Methods chapter, but I would advise spending a few words on them here in the Introduction as well, to give a brief overview: which are already available, which are innovations from the Author, etc. It would help the reader to better contextualize the "why" and the "how" of these methodologies from the beginning of the paper. Maybe explain them in terms of similarities to depth-first/breadth-first searches? (if the Author deems it appropriate).

 

I agree that this part was quite poor. For this reason I extended the Introduction in a significant way.
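For context, a minimal sketch of an A*-type search on a binary voxel grid is given below. It is illustrative only: 6-connectivity, a Euclidean heuristic and unit step costs are assumptions, and this is not the Author's PathFinder implementation.

```python
import heapq
import math

def a_star_voxel(grid, start, goal):
    """Minimal A* on a 3D binary grid: 0 = pore (passable), 1 = solid.
    Returns the list of voxel indices forming a shortest 6-connected path,
    or None if no path exists. Illustrative sketch only."""
    nx, ny, nz = len(grid), len(grid[0]), len(grid[0][0])
    neighbours = [(1, 0, 0), (-1, 0, 0), (0, 1, 0),
                  (0, -1, 0), (0, 0, 1), (0, 0, -1)]

    def h(p):  # Euclidean heuristic, admissible for unit step costs
        return math.dist(p, goal)

    open_set = [(h(start), start)]
    g = {start: 0.0}          # cost of the best known path to each voxel
    came_from = {}
    while open_set:
        _, current = heapq.heappop(open_set)
        if current == goal:
            path = [current]
            while current in came_from:
                current = came_from[current]
                path.append(current)
            return path[::-1]
        for dx, dy, dz in neighbours:
            nb = (current[0] + dx, current[1] + dy, current[2] + dz)
            if not (0 <= nb[0] < nx and 0 <= nb[1] < ny and 0 <= nb[2] < nz):
                continue
            if grid[nb[0]][nb[1]][nb[2]] == 1:   # solid voxel, not passable
                continue
            tentative = g[current] + 1.0
            if tentative < g.get(nb, float("inf")):
                came_from[nb] = current
                g[nb] = tentative
                heapq.heappush(open_set, (tentative + h(nb), nb))
    return None
```

A path found this way can then be compared with the straight-line distance between its end points to estimate a geometric tortuosity.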

 

Comment 3)

 

Another general comment: the Author states (in line 97) that the need for developing new and efficient path searching algorithms is (partly) driven by the unfeasibility of applying Koponen’s method due to high computational cost.

 

I've read reference 36 by the Author et al. (a very interesting work which I didn't know), which justifies this point, but it only explores the LBM as a tool for solving the velocity flow field. Has the Author also explored the possibility of using more classical CFD (e.g. finite-volume methods) to obtain the same data? If so, it would be an interesting comparison and point to add, as CFD is another very widely used method for investigating flow and transport in porous media, especially multiphase phenomena.

 

Please note that this is to be seen as a comment of personal interest of the Reviewer in the role of a potential CFD-expert reader (as I am myself more used to CFD for porous media) – if the Author cannot, or does not want to add more information on this point, it is perfectly fine.

 

Indeed, the Koponen methodology (Eq. (4)) may be applied to any velocity field having an appropriate resolution. In turn, a velocity field may be obtained using different numerical methods, such as the Finite Volume Method, the Immersed Boundary Method, the Lattice Boltzmann Method, and even the Finite Element Method or the Finite Difference Method. Independently of the method used, the main problem is to generate a numerical grid of high quality. The problem is particularly important in throats, where the width of the pore channels approaches zero at the contact points. At the same time, the number of cells (nodes, elements) cannot drop below a certain minimum. To get around this problem, a minimum distance between the particles forming the solid part of a porous body may be defined. This makes meshing a little easier. I have some experience with using the Finite Volume Method to calculate the velocity field in a pore space. However, the Lattice Boltzmann Method is much easier and more convenient to use.
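As a point of reference, a minimal sketch of how a Koponen-type, velocity-weighted tortuosity could be evaluated on such a field is given below. It assumes that Eq. (4) is the usual ratio of the mean velocity magnitude to the mean flow-direction component over the pore space; the array layout is an illustrative assumption.

```python
import numpy as np

def koponen_tortuosity(ux, uy, uz, pore_mask):
    """Velocity-based tortuosity T = <|u|> / <u_x>, averaged over pore voxels.
    ux, uy, uz: velocity components on a regular grid (e.g. from LBM or FVM);
    pore_mask: boolean array, True where the voxel belongs to the pore space.
    Assumes the macroscopic flow is aligned with the x axis."""
    speed = np.sqrt(ux**2 + uy**2 + uz**2)
    return speed[pore_mask].mean() / ux[pore_mask].mean()
```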

 

Also, I've noticed you misspelled Amir Raoof's name as Roaff in the bibliography, so please take another look at the bibliography section.

 

This error has been corrected.

 

Materials and methods

 

Comment 4)

 

“ … but nowadays other simple shapes […] or complex shapes consisting of rigidly connected bodies with basic shapes (the so-called clumps) are increasingly used … ”

 

It's actually been a few years now that DEM or rigid-body dynamics methods have been used to create packings of complex non-convex shapes which are not actually clumps of spheres. You can take a look at these two works, but there are more.

 

You don't have to cite them, but I would just amend the statement, for correctness; maybe noting that complex shapes can be explored, but spherical packings are still a very good starting point for an investigation like the one proposed in this manuscript.

 

Partopour, B. and Dixon, A.G., 2017. An integrated workflow for resolved-particle packed bed models with complex particle shapes. Powder Technology, 322, pp.258-272.

 

Boccardo, G., Augier, F., Haroun, Y., Ferre, D. and Marchisio, D.L., 2015. Validation of a novel open-source work-flow for the simulation of packed-bed reactors. Chemical Engineering Journal, 279, pp.809-820.

 

This point will actually be very relevant in a possible extension of this work: the creation of a packing made of non-convex shapes (e.g. Raschig rings, and many others) will very easily lead to the emergence of quasi-stagnant zones very similar in behaviour to the "blind cavities" the Author mentions in line 87 as potentially problematic for the application of the proposed methodology.

 

Indeed, I should consider the latest trends. I corrected the content as follows: "Originally, the particles were always spherical, but nowadays other simple shapes (such as cylinders, cuboids, polyhedrons, etc.) or complex shapes described by independent grids are used. Another quite popular approach is based on defining complex shapes by a set of rigidly connected bodies with basic shapes (the so-called clumps)." Thank you very much for this remark.

 

Results and discussion

 

Comment 5) (line 270 and on, Chapter 3.1)

 

This is the most important comment I have on this manuscript: I have serious doubts regarding the generated packings as described in Figure 6 and shown in Figure 7.

 

First, when trying to enlarge the statistical sample under consideration in order to find the most representative one (i.e. finding the Representative Elementary Volume), one usually expects to find a plateau, which should then correspond to the experimental ground truth. Here, instead, there seems to be a clear trend, and if we were to create a packing of 1500 or 2000 spheres, the porosity would go down to values smaller than the experimental value of 0.41.

 

As it should: in my experience (and published data, see Boyer below, and Boccardo earlier), the porosity of a random packing of monodisperse spheres is well below 0.4, maybe ~0.36 or so; slightly polydisperse ones such as the ones considered here even more so.

 

Boyer, C., Volpi, C. and Ferschneider, G., 2007. Hydrodynamics of trickle bed reactors at high pressure: Two-phase flow model for pressure drop and liquid holdup, formulation and experimental validation. Chemical Engineering Science, 62(24), pp.7026-7032.

 

I do agree with all the Reviewer's comments; however, my aim was not to create a bed with the minimum porosity. My aim was to investigate the possibility of obtaining a virtual bed with a porosity equal to the experimental porosity described in Section 2.1 (0.413). Fig. 6 shows that it is impossible to obtain the target porosity, independently of the number of iterations, if the number of particles is too small. I found in the preliminary test that at least 1000 particles must be defined to obtain this target porosity. In such a case the target porosity is reached in fewer than one million time steps (although not every run of the program reaches the assumed target porosity). In this context it is impossible to obtain a porosity smaller than the target porosity: a DEM simulation ends when the target porosity is reached. Of course, larger systems may be used too, but this is not favourable in the context of the next stages of the investigation presented in the paper. I tried to reach a compromise between the size of the system and the computational costs.
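A hedged sketch of the stopping criterion described above is given below; the callback names are hypothetical and do not refer to the Author's YADE scripts.

```python
def porosity(solid_volume, domain_volume):
    """Porosity of the current packing state."""
    return 1.0 - solid_volume / domain_volume

def run_until_target(step, current_solid_volume, domain_volume,
                     target_porosity=0.413, max_steps=1_000_000):
    """Advance a DEM simulation (via the hypothetical callback `step`) until
    the packing porosity drops to the target value or max_steps is exceeded.
    `current_solid_volume` is a hypothetical callback returning the solid
    volume inside the measurement cuboid. Returns the reached porosity,
    which may still lie above the target if max_steps is hit first."""
    eps = porosity(current_solid_volume(), domain_volume)
    n = 0
    while eps > target_porosity and n < max_steps:
        step()                    # one DEM time step (settling under gravity)
        eps = porosity(current_solid_volume(), domain_volume)
        n += 1
    return eps
```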

 

The Author has cited (line 108) the works of Cooke and Ribeiro (et al.) as experimental data, but the first deals with ordered arrangements of spheres, which are not relevant here, and the second only shows a value of \epsilon=0.41 in Table III, related to a slender packed bed with D/dp = 3.17 (!). This is very different from the packing the Author shows in the photograph, and it is a situation in which wall effects on the structure strongly dominate... that Ribeiro paper is titled "Porosity [...] in packed beds: side wall effects", after all.

 

I am very much aware of this situation, but I could not find other reference data in the literature.

 

And this connects to my second point, which is the clear and visible wall effects in Figure 7 – by which I mean the very ordered sphere structures at the edges of the containing cube (the “columns” of spheres) and of course the planar arrangement of spheres close to the sides of the cube. A “hard wall” of this kind will influence local porosity in the vicinity of the wall for lengths equal to multiple sphere diameters (see Giese below),

 

Giese, M., Rottschafer, K. and Vortmeyer, D., 1998. Measured and modeled superficial flow profiles in packed beds with liquid flow. American Institute of Chemical Engineers. AIChE Journal, 44(2), p.484.

so it will not only influence the porosity calculation, but will also result in a "constrained" structure that is different from the much larger packing ("larger" = many more spheres randomly packed far away from walls) the Author shows in the photograph in Figure 2, leading to uncertainties in all the subsequent calculations.

 

I considered three options at this stage:
1) Direct creation of a virtual bed consisting of 1000 particles. Advantages: a shorter DEM simulation time, flat inlet and outlet surfaces, a high number of possible paths, and a final porosity equal to the target porosity. Disadvantages: a quasi-regular arrangement of the particles located close to the cuboid surface, particularly along the edges.
2) Creating a particle cloud consisting of many more than 1000 particles and then rejecting all particles lying in the boundary layer. Advantages: a reduced impact of the boundary effect. Disadvantages: a longer DEM simulation time, an unknown initial number of particles, an unknown thickness of the boundary zone, non-flat inlet and outlet surfaces (making it difficult to define the locations of the Starting Points, which may lead to underestimating the tortuosity), difficulties in obtaining the target porosity, and a more difficult porosity calculation.
3) Creating a particle cloud consisting of many more than 1000 particles and then cutting the resulting bed to obtain a smaller cuboid. Advantages: a reduced impact of the boundary effect and flat inlet and outlet surfaces. Disadvantages: a longer DEM simulation time, an unknown initial number of particles, an unknown thickness of the boundary zone, difficulties in obtaining the target porosity, a more difficult porosity calculation, and a smaller number of possible paths (large areas of the inlet surface belong to the solid part of the porous body).
I concluded that the first option is the most convenient in the context of the article and its aim.
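For completeness, a hedged sketch of how the particle rejection in options 2 and 3 could be realized is given below; the layer thickness and the data layout are illustrative assumptions, not part of the actual workflow.

```python
def strip_boundary_layer(particles, box_min, box_max, layer):
    """Keep only particles whose centres lie at least `layer` away from every
    wall of the cuboid. `particles` is a list of (x, y, z, r) tuples; box_min
    and box_max are the corner coordinates of the cuboid. Illustrative only."""
    kept = []
    for x, y, z, r in particles:
        inside = all(lo + layer <= c <= hi - layer
                     for c, lo, hi in zip((x, y, z), box_min, box_max))
        if inside:
            kept.append((x, y, z, r))
    return kept
```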

 

I think that the work proposed in this paper is innovative and would be of interest to the community, so I think it is worthy of publication.

 

Thank you for such a good opinion of my paper.

 

Nonetheless, these are serious issues that have to be corrected: both the technical issue (wall effects, absence of a REV) and the reference issue (mismatch in the geometries of Author’s experiments, Author’s model, and referenced literature).

 

I don't think the Author necessarily needs to recreate the in-silico packings and/or the experimental evaluation of porosity. I think it may be sufficient to show the application of these interesting methodologies in a model packing such as the one proposed, without trying to force a comparison to experimental data, which as proposed now is quite problematic (for the reasons just discussed). A quick way of modifying the in-silico model in order to get rid of wall effects would be to generate a bigger packing of spheres and extract a sub-volume far from the walls (Fig. 4 in Boccardo, referenced above, and many other works): in this way only a bit of YADE computational time is added, and no path-search time is added. But again, this could be the scope of another investigation.

 

The paper focuses on the algorithms and their features. I think that, in this context, the way the geometry is created is not very important. All algorithms were tested on the same examples, so their comparison should be meaningful.

 

Please note that the Starting Points are located at some distance from the side surfaces of the bed, which reduces the impact of the boundary zone. It was not my aim to investigate the boundary effects (in fact, there are a few different directions which may be explored in the paper). However, thank you for this comment. I see now that I should focus more on the REV problem.

 

I am afraid I do not quite understand the comment on the "mismatch in the geometries of the Author's experiments". I assumed that using parameters related to an existing granular bed would be logical. Thanks to this assumption, I am able to continue my investigations and compare the results obtained in the present paper with other data.

 

Typos)

 

A re-read of the manuscript would be helpful: some typos are found here and there, and some syntactic forms that make the reading a bit more difficult should be given some attention. An example of the latter is found right off the bat in the abstract, where there is this impersonal statement:

 

Line 7) “It was stated that the A-Star Algorithm gives values similar (but always slightly underestimated) to the values obtained via approaches based on the Lattice Boltzmann Method or the Path Tracking Method …”

 

Shouldn't this rather be "It is found in this work that the A* ..."? I am not trying to be fruitlessly nitpicky about grammar, but in this case it's not immediately clear whether this is a statement about something known in the literature or about the Author's own result. So improving this will help to convey the work's novelty and catch the reader's attention more!

 

Regarding typos, see for example:

 

Line 16) “than the more complicated” -> then the more complicated

Line 47) “exemplary results” -> “example of results illustrating the range of the tortuosity .. “. In general, avoid using “exemplary” to mean “an example of”, as the meaning is quite different (see also line 62)

Line 50) “than it is called” -> then it is called (also see line 87, 197, and others)

Line 69) “Such as choice” -> such a choice

Line 154 + 388) I think the A* algorithm is much older than 1986 (it should be the step-up from Dijkstra’s if I’m not mistaken). Checking online it seems to be 1968 so maybe it was a typo from the Author – please recheck.

and others.

 

The paper has been checked again with a focus on its style and language.

 

Yes, you are right. Indeed, the A-Star algorithm was developed in 1968. Thank you for pointing out this mistake.

Reviewer 2 Report

  1. The calculations of the tortuosity by the LBM and the path tracking technology will provide the most accurate results, to my knowledge. However, the computational cost is often very high. When you compare the results from the two simple and fast algorithms proposed, I think you can focus on the comparison with those results instead of the empirical equations. In this case, a different conclusion may be drawn, as indicated clearly in Fig. 18, which is that the A-star algorithm will generate more acceptable results.
  2. As pointed out in the paper, refinement can be further enhanced. However, the author chose the maximum of 200 by 200 by 200. The reason was given as computational burden. Is it possible to parallelize your code to reduce the computational time? Do you think the error can further be reduced when a more refined grid system is used?
  3. Some minor written errors: Line 21: by saved; Line 69: such as; Line 263: due the fact; and so on.

Author Response

Response to Reviewer 2:

 

Dear Reviewer,

 

First, I would like to thank the Reviewer for the comments, some of which have helped me to improve the paper. I have addressed the comments as explained below.

 

Best Regards,

 

Wojciech Sobieski.

 

The calculations of the tortuosity by the LBM and the path tracking technology will provide the most accurate results, to my knowledge. However, the computational cost is often very high. When you compare the results from the two simple and fast algorithms proposed, I think you can focus on the comparison with those results instead of the empirical equations. In this case, a different conclusion may be drawn, as indicated clearly in Fig. 18, which is that the A-star algorithm will generate more acceptable results.

 

I agree that the use of LBM simulation is one of the best techniques for calculating the tortuosity. However, different empirical formulas for calculating the tortuosity are also available in the literature. Because they are quite popular, I decided to comment on this issue in the paper. Such information may be interesting for people using the above-mentioned empirical formulas. I also expected that, if I did not comment on this issue, I would be asked for additions during the review phase.
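As an illustration of the kind of empirical relation meant here, one widely quoted Weissberg-type form links the tortuosity to the porosity alone; it is given only as an example and is not necessarily one of the formulas used in the paper (p is an empirical fitting constant, with p = 0.5 often quoted for beds of spheres):

```latex
% Weissberg-type empirical relation between tortuosity and porosity;
% p is an empirical fitting constant and \epsilon the porosity.
\[
  \tau \;=\; 1 \;-\; p\,\ln \epsilon
\]
```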

 

As pointed out in the paper, refinement can be further enhanced. However, the author chose the maximum of 200 by 200 by 200. The reason was given as computational burden. Is it possible to parallelize your code to reduce the computational time? Do you think the error can further be reduced when a more refined grid system is used?

 

Unfortunately, I have no experience in writing parallel codes. However, I think that it is possible in general. I suppose that the regular grid of Starting Points may be divided into zones, and the paths belonging to each zone may be calculated as an independent process.
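A minimal sketch of this idea is given below: the zones of Starting Points are distributed over independent worker processes. The function find_path is a placeholder for any of the path-searching routines discussed in the paper, not an existing routine.

```python
from multiprocessing import Pool

def find_path(grid, start_point):
    """Placeholder for one of the paper's path-searching routines
    (e.g. an A*-type search); to be replaced by an actual implementation."""
    raise NotImplementedError

def zone_worker(args):
    """Compute the paths of all Starting Points belonging to one zone."""
    grid, starting_points = args
    return [find_path(grid, sp) for sp in starting_points]

def parallel_paths(grid, zones, processes=4):
    """Distribute zones of Starting Points over independent worker processes
    and collect the resulting paths. Illustrative sketch only; note that the
    whole grid is copied to every worker, which is acceptable for a sketch."""
    with Pool(processes) as pool:
        results = pool.map(zone_worker, [(grid, zone) for zone in zones])
    return [path for zone_paths in results for path in zone_paths]
```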

 

Some minor written errors: Line 21: by saved; Line 69: such as; Line 263: due the fact; and so on.

 

The language was corrected in the new submission.

Author Response File: Author Response.docx

Round 2

Reviewer 1 Report

I think the Authors justified their research design choices and improved the manuscript enough to now warrant its publication in Processes.
