Critical Aspects in the Modeling of Sub-GeV Calorimetric Particle Detectors: The Case Study of the High-Energy Particle Detector (HEPD-02) on Board the CSES-02 Satellite
Round 1
Reviewer 1 Report
Comments and Suggestions for Authors
Reviewer Report on the manuscript "Critical aspects in the modelling of sub-GeV particle detectors: the case study of HEPD-02" by Bartocci et al.
Answer to the authors and editor:
The draft showcases the challenges of detailed Monte Carlo GEANT4 simulations for state-of-the-art particle instruments, based on the HEPD-02 detector.
Thus the paper is not only of interest for future users of HEPD-02 data but also for a more general audience – i.e. all instrumental scientists working on the numerous different particle instruments.
The level of detail in the draft is well appreciated, and the quality of the figures as well as of the mathematical expressions is already at production level.
However, there are some aspects that need refinement and some inconsistencies that need to be resolved. More importantly, certain conclusions cannot be drawn based on the presented analysis.
Thus, I recommend a minor revision of the paper to the editor. I'd be happy to serve again as referee for the revision.
In the following, I list the most important comments under “General Remarks” followed by a list of more specific comments.
General Remarks
- In Line 127ff and Fig. 4, energy depositions are compared for different physics lists for protons with an energy of 228 MeV, and no differences are observed. However, the different combinations of lists (i.e. FTF and QGSP) – as also stated by you in lines 114, 115 – only introduce differences above the GeV range. Thus the GEANT4 simulation is using the same processes (and probabilities) in those lists for the 228 MeV protons. This is the reason why only the _BIC, _BERT, _INCLXX choices alter your result. This should be clearly stated, since no conclusion can be drawn on differences between QGSP and FTF by analysing these "low" energies. For those, a comparison above the GeV range would have been of interest (see the sketch below for how such a scan could be set up).
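For illustration, such a scan over reference physics lists could be driven by a few lines around G4PhysListFactory; the snippet below is only a sketch, and MyDetectorConstruction / MyActionInitialization as well as the beam settings are placeholders, not the authors' actual code:

```cpp
// Sketch: select the reference physics list by name at run time so the same
// executable can be rerun for QGSP_BIC, QGSP_BERT, QGSP_INCLXX, FTFP_BERT, ...
// MyDetectorConstruction and MyActionInitialization are placeholders.
#include "G4RunManager.hh"
#include "G4PhysListFactory.hh"
#include "G4VModularPhysicsList.hh"

int main(int argc, char** argv) {
  G4String listName = (argc > 1) ? argv[1] : "QGSP_BERT";

  auto* runManager = new G4RunManager;
  runManager->SetUserInitialization(new MyDetectorConstruction);

  G4PhysListFactory factory;
  // Below ~1 GeV the QGSP / FTF(P) strings are not invoked, so only the
  // intranuclear-cascade choice (_BIC, _BERT, _INCLXX) can change the result.
  runManager->SetUserInitialization(factory.GetReferencePhysList(listName));

  runManager->SetUserInitialization(new MyActionInitialization);
  runManager->Initialize();
  runManager->BeamOn(100000);  // e.g. 228 MeV protons, then repeat above 1 GeV

  delete runManager;
  return 0;
}
```

Rerunning the same executable for the different lists, first at 228 MeV and then above 1 GeV, would make explicit where the lists actually differ.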
- Line 144: "HEPD-02 flight observations will help validate physics-list performance" – this is too strong a statement to make. The analysis of the HEPD-02 data is based on your simulations (and their physics lists), hence the HEPD-02 data cannot, by definition, be used as a validation of these lists. Further, it is not clear why flight data would be more useful for validation than any ground-based measurement (i.e. HEPD-02 calibration runs in well-defined radiation fields). I'd recommend removing this sentence. If the authors would like to keep it, this has to be discussed in much more detail and the issues with this approach need to be clearly stated.
- Fig. 8 and the corresponding analysis in lines 256ff.: The HEPD-02 is designed for 5-200 MeV. Yet, in Fig. 8 you discuss the differences between the quenching models for much higher energies, i.e. MIPs. The dE/dx is significantly larger at lower energies, and for a 200 MeV/nuc ion the quenching will have a larger overall effect than for these MIPs (see the illustration below). Hence this analysis needs to be redone for these energies. Otherwise, the low percentage differences discussed in this section are misleading, and conclusions for HEPD-02 cannot be drawn based on this.
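As a back-of-the-envelope illustration of why a MIP-only comparison understates the effect, a Birks-type parameterization (used here purely for illustration; the manuscript's actual quenching models may differ, and the kB and dE/dx values are placeholders) suppresses the light yield more strongly as the stopping power grows:

```cpp
#include <cstdio>

// Birks-type quenching: visible signal per unit path length.
// kB and the dE/dx values below are illustrative placeholders only.
double birksYield(double dEdx /* MeV/mm */, double kB /* mm/MeV */) {
  return dEdx / (1.0 + kB * dEdx);
}

int main() {
  const double kB       = 0.126;  // mm/MeV, typical plastic-scintillator value
  const double dEdxMIP  = 0.2;    // ~MIP in plastic, MeV/mm (illustrative)
  const double dEdxSlow = 5.0;    // low-energy ion near stopping, MeV/mm (illustrative)
  std::printf("quenching factor, MIP : %.3f\n", birksYield(dEdxMIP,  kB) / dEdxMIP);
  std::printf("quenching factor, ion : %.3f\n", birksYield(dEdxSlow, kB) / dEdxSlow);
  return 0;
}
```

With these illustrative numbers the MIP signal is quenched by only a few per cent, whereas the slow ion loses roughly 40% of its light, which is why the comparison should be repeated at the 5-200 MeV/nuc energies relevant for HEPD-02.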
- The discussion of the fit results given in Fig. 9 needs refinement (see the comments on Lines 350-358 below for more details). While I agree that the comparison shows that the simulation reflects the measurements, the conclusions ("excellent agreement", "strong validation") are too strong here given the differences.
More specific comments:
- Line 6: 5-200 MeV – is this supposed to be 5-200 MeV/nucleon for ions? Please clarify in the paper.
- Section 2.1: GEANT4 tends to have issues with overlapping volumes. Since you are explaining the detector simulation for a broader audience, it might be worthwhile to mention that extra care is necessary when using GDML files automatically produced from STEP files (e.g. an explicit overlap check after import, as sketched below).
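For instance, an explicit overlap scan after importing the GDML could be mentioned or shown; the snippet below is only a sketch (the file name and sampling resolution are placeholders):

```cpp
#include "G4GDMLParser.hh"
#include "G4PhysicalVolumeStore.hh"
#include "G4VPhysicalVolume.hh"

// Sketch: import a CAD-derived GDML file and scan every placed volume
// for overlaps, using 1000 random surface points per volume.
void CheckImportedGeometry() {
  G4GDMLParser parser;
  parser.Read("hepd02_from_step.gdml");   // placeholder file name

  const auto* store = G4PhysicalVolumeStore::GetInstance();
  for (G4VPhysicalVolume* pv : *store) {
    pv->CheckOverlaps(1000);              // prints a warning for each overlap found
  }
}
```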
- Line 104: clarify E-Box and sensor head in the figure, i.e. "[…] detector subsystem (right) and the electronics subsystem (left)".
- Figs. 4 and 5: are these protons simulated isotropically or as a beam? This should be clearly stated.
- Line 172: "vertically" might be difficult to understand for the broader audience; consider rephrasing to "vertically (i.e. parallel to the Z-axis in Fig. 1)". Furthermore, it is not clear whether this indicates a beam (a single point as source) or particle tracks parallel to the Z-axis from an extended source. This should be clarified. (Maybe a reference to a later section would help?)
- Line 187: You have correctly argued that a too small cut length would result in a larger computational time. A comment on how the run times compare for the different cuts would be helpful for the audience (a sketch of such a timing scan is given below).
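Such numbers could be obtained by simply timing a fixed-size run for each cut value; the loop below is only a sketch (the event count and cut values are placeholders):

```cpp
#include <chrono>
#include <cstdio>
#include <string>
#include "G4RunManager.hh"
#include "G4UImanager.hh"

// Sketch: time a fixed-size run for several production-cut values.
void TimeCutScan(G4RunManager* runManager) {
  auto* ui = G4UImanager::GetUIpointer();
  const char* cuts[] = {"1 mm", "0.1 mm", "0.01 mm", "0.001 mm"};
  for (const char* cut : cuts) {
    const std::string cmd = std::string("/run/setCut ") + cut;
    ui->ApplyCommand(cmd.c_str());
    auto t0 = std::chrono::steady_clock::now();
    runManager->BeamOn(10000);            // same statistics for every cut
    auto t1 = std::chrono::steady_clock::now();
    std::printf("cut %-8s : %.1f s\n", cut,
                std::chrono::duration<double>(t1 - t0).count());
  }
}
```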
- Line 188 and Fig. 6: the origin of the data should be mentioned here as well (it is first mentioned in Section 4).
- Line 276: "to be less than 10%" – either add a citation or explain/show this (i.e. in the appendix).
- Lines 296 and 303/304: The simulation was compared to cosmic muons, as stated in Line 296. In lines 303 and 304 the authors are discussing various ion simulations – it is difficult to comprehend which simulation was used at which part of this chain. Please clarify.
- Line 304: I suspect you mean 3 GeV/amu rather than 3 MeV/amu? Furthermore, in Fig. 8 a MIP is denoted as Beta*Gamma = 3. While the differences do not really matter here, a consistent "definition" might help the reader.
- Line 335: For the sake of completeness, please mention the accumulation time during which the data were taken at the beam line.
- Line 341 and Fig. 9: "experimental data are normalized to unit area" – it is not clear how this normalization is done. Are the data divided by a scalar? Why is this done? And why are the statistical uncertainties (error bars in Fig. 9) reasonable for the number of counts (i.e. Poisson statistics) after the normalization? The uncertainty should be derived from the total counts and not from the normalized values (see the sketch below). Furthermore, the y-label in Fig. 9 should make clear that a normalization was performed, i.e. "normalized counts" rather than "counts".
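For clarity, the treatment I have in mind is the standard one: assign the Poisson errors on the raw counts and only then scale contents and errors together. A minimal ROOT sketch (the histogram is a placeholder):

```cpp
#include "TH1D.h"

// Sketch: normalize a counts histogram to unit area while keeping the
// bin uncertainties tied to the raw Poisson counts.
void NormalizeWithPoissonErrors(TH1D* h) {
  h->Sumw2();                       // store sqrt(N) per bin before scaling
  const double integral = h->Integral();
  if (integral > 0.0) {
    h->Scale(1.0 / integral);       // errors are scaled consistently with contents
  }
  h->GetYaxis()->SetTitle("normalized counts");
}
```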
- Line 345: "the ADC ranges of primary interest" – explain how these are defined.
- Fig. 9: please rescale the y-axis range of the insets to 0.75-1.25 in order to use the entire range and allow more details to be seen.
- Line 350: How are the fit ranges in Fig. 9 chosen? E.g. for the tracker the tail is not included in the fit, while it is for the RAN. How sensitive is the MPV of the peak position to different fit ranges? This should be analyzed and discussed (especially with respect to the next comment; a sketch of such a range scan is given below).
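What I have in mind is a simple stability scan of the fitted MPV versus the fit window; the ROOT sketch below is illustrative only (the histogram, the Landau model and the window widths are placeholders, not necessarily the fit model used in the manuscript):

```cpp
#include <cstdio>
#include "TH1D.h"
#include "TF1.h"

// Sketch: refit the peak with progressively wider windows around the
// maximum and record how much the fitted MPV moves.
void ScanFitRange(TH1D* h) {
  const double peak = h->GetBinCenter(h->GetMaximumBin());
  for (double halfWidth : {0.2 * peak, 0.4 * peak, 0.8 * peak}) {
    TF1 f("f", "landau", peak - halfWidth, peak + halfWidth);
    h->Fit(&f, "RQ0");              // R: respect range, Q: quiet, 0: no draw
    std::printf("window +-%.0f ADC -> MPV = %.1f +- %.1f\n",
                halfWidth, f.GetParameter(1), f.GetParError(1));
  }
}
```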
- Line 353: "the fit reveal excellent agreement in peak position" – "excellent agreement" is a poorly defined statement. Furthermore, the MPVs resulting from the EXP and MC of TR2 and EN2 are not in agreement with each other (considering the uncertainties). This has to be discussed, and the differences need to be addressed in comparison to other uncertainties.
- Line 358: "are also consistent between data and simulation" – as stated above, the values are not consistent. It should be clearly stated that these differences are larger than the fit uncertainty.
- Line 358: "deviation lie within expected systematic uncertainties" – which uncertainties? And how large are they?
- Fig. 10: a figure showing the differences/ratios between MC and EXP over the 2D hit map would allow for a more detailed discussion (e.g. a simple bin-by-bin ratio, as sketched below).
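Such a panel can be produced directly from the two hit maps by a bin-by-bin division; a minimal ROOT sketch (histogram names are placeholders):

```cpp
#include "TH2D.h"

// Sketch: bin-by-bin ratio of the measured and simulated 2D hit maps.
TH2D* HitMapRatio(const TH2D* hExp, const TH2D* hMC) {
  auto* ratio = static_cast<TH2D*>(hExp->Clone("hitmap_ratio"));
  ratio->Divide(hMC);               // EXP / MC per (x, y) bin
  ratio->SetTitle("EXP / MC;Hit X;Hit Y");
  return ratio;
}
```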
- Fig. 10: the hit maps for EXP and MC should have the same axis ticks for better comparison.
- Line 371: What is the origin of the pixel pitch values (i.e. 29.24 µm) given here?
- Line 388: I do not understand the normalization, please clarify. The axis of the 1D normalized histogram should read "normalized counts" (rather than "counts").
- Line 393: "with discrepancies below 10%" – this cannot be seen in the figure. Please add a second axis with the ratio of the two 1D histograms (MC and EXP), e.g. with a ratio pad as sketched below. Furthermore, the deviation seems to be strongest for Hit X positions > 40 µm; here it definitely seems to be larger than 10%. What is the reason for this? And why is it only seen for Hit X and not Hit Y?
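A lower ratio pad would make the size of the residual discrepancy explicit; a minimal ROOT sketch (histogram and canvas names are placeholders):

```cpp
#include "TCanvas.h"
#include "TH1D.h"
#include "TRatioPlot.h"

// Sketch: 1D comparison with an EXP/MC ratio pad underneath.
void DrawWithRatio(TH1D* hExp, TH1D* hMC) {
  auto* canvas = new TCanvas("c_ratio", "EXP vs MC with ratio", 800, 600);
  auto* rp = new TRatioPlot(hExp, hMC);   // lower pad shows hExp / hMC
  rp->Draw();
  rp->GetLowerRefYaxis()->SetTitle("EXP / MC");
  canvas->Update();
}
```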
- Line 410: The "3 MeV" difference is in contrast to the "4 MeV" difference given in Line 140. This should be consistent. Furthermore, it is mentioned that this difference is "increasing towards higher energies". However, based on Fig. 5, bottom right panel, only the relative difference is increasing; the absolute difference seems to be rather constant. Please double-check this statement.
- Line 413: "validated against beam test data" – this statement is too strong given general comment #3 above on the quenching study being performed on MIP-energy particles only.
- Line 416: "calorimeter ADC distribution agree within 10%" – based on Lines 355, 356 this should be "within 1%", right?
- Line 423: Point (i), with the validation of Geant4 models based on flight data, has to be rewritten based on general comment #2 above. It also has to be stated why the quenching parameter (ii) can be improved with flight data – a controlled radiation environment at a test beam is much better suited for that. Please consider removing both statements.
- Table A3: the MPVs for the peak positions (and their uncertainties) differ from those given in lines 354-356. The same holds for the decay portion in line 357.
Author Response
Please see the attachment.
Author Response File: Author Response.pdf
Reviewer 2 Report
Comments and Suggestions for Authors
I read the article with great interest. Scientifically, the paper is sound and I recommend that it be published after a minor revision.
Personally, I feel that the title should be more specific. "Critical aspects in the modelling of sub-GeV particle detectors" suggests a review article with general recommendations on modelling. The paper defines a G4 simulation and then adds three modelling blocks: detailed geometry from CAD, physics list choice, and digitization (quenching, optical response, pixel response). Finally, it validates against a single 228 MeV proton beam test. However, it does not survey multiple detector types or technologies, and it does not show that the same "critical aspects" hold for, for example, gas trackers, Cherenkov detectors, etc. I recommend the authors consider narrowing down the title.
This manuscript details modelling issues that are very important and will be a very useful case study for modelling future telescope missions.
A few technical questions:
– I am not familiar with the utilized beam. What was the flux of the 228 MeV beam? Are the authors sure that there is no pile-up that would lead to multi-track events in DIR? Even if only a few % of events have pile-up, it should be added, or at least mentioned as a systematic (a simple estimate is sketched below).
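For orientation, the expected pile-up fraction follows from the instantaneous beam rate and the trigger/integration window; the numbers in the sketch below are placeholders and not taken from the manuscript:

```cpp
#include <cmath>
#include <cstdio>

// Sketch: Poisson estimate of the in-window pile-up probability.
// Rate and window are illustrative placeholders.
int main() {
  const double rate   = 1.0e4;     // beam particles per second (placeholder)
  const double window = 1.0e-6;    // trigger/integration window in seconds (placeholder)
  const double mu     = rate * window;
  // Probability that a triggered event contains at least one extra particle.
  const double pileUp = 1.0 - std::exp(-mu);
  std::printf("mu = %.2e -> pile-up fraction = %.2e\n", mu, pileUp);
  return 0;
}
```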
– Why did the authors pick GUIMesh over CADMesh to convert engineering geometries into G4 native ones?
– It would significantly strengthen the paper if the authors could add a table with the list of systematic errors. They mostly emphasize that the physics lists might add a 2-3% uncertainty, but later on they state that the instrumental systematics (gain, quenching, optical maps, channel non-uniformities) together can be five times larger.
– Additionally, what ultimately matters are the observables. I understand that it might be outside the scope of the paper, but, e.g. for the physics lists, it would be useful to see the effect of applying different physics lists in the simulation and then to quantify the change in the CSES-02 science observables (energy scale, resolution, particle ID, fluxes).
– Figure 10: why does the simulation fit the Y direction well, while for the X the tails do not fit? Is it because the beam is not Gaussian in that projection?
Also, please check the manuscript for typos, e.g. "calotrimeter" → "calorimeter" (Appendix B intro).
Author Response
Please see the attachment.
Author Response File: Author Response.pdf

