Article
Peer-Review Record

First Study of a HEXITEC Detector for Secondary Particle Characterisation during Proton Beam Therapy

Appl. Sci. 2023, 13(13), 7735; https://doi.org/10.3390/app13137735
by Maria L. Perez-Lara 1, Jia C. Khong 1, Matthew D. Wilson 2, Ben D. Cline 2 and Robert M. Moss 1,*
Reviewer 1: Anonymous
Reviewer 2:
Submission received: 18 May 2023 / Revised: 15 June 2023 / Accepted: 28 June 2023 / Published: 30 June 2023
(This article belongs to the Special Issue Medical Physics: Latest Advances and Prospects)

Round 1

Reviewer 1 Report

I think range verification in proton therapy is an important topic, and this type of detector appears to be well suited for a Compton telescope. I would like to see some clarification in the paper on its possible use in a Compton telescope. Here are some comments on sections that I found unclear. In general, I think some editing and added explanations will suffice, without any major changes to the paper.

Lines 50-51: I don't understand how the statements beginning with "Moreover" apply to PET. I don't know what biological washout is, so I'll pass over that. Certainly patient motion is a factor that I understand. But "Hounsfield unit based tissue classification": what does this have to do with the PET information on where the positrons are annihilating? And "actual tumor location"? The actual location is what it is. How can the true position produce an error in the range estimate? If you make an error in the range, you can't blame it on the true position.

Line 94: Can you clarify the ideas for the use of this detector in a Compton camera? Would it be used as the absorber, the scatterer, or both? Can you summarize the requirements that such a telescope would impose on the pixel detector?

Line 109 states that the detector gives no information about the incoming particle, yet line 112 states that information from the detector will be used to determine whether the particle is a gamma. There seems to be a logical contradiction there.

Figure 2 caption: what is meant by "following Eq. 1"? The nozzle current, which is the left-hand side of Eq. 1, is plotted as the ordinate, but the graph as a whole has little to do with Eq. 1, which relates the nozzle current to the cyclotron current and says nothing about the beam energy.

Figure 3: the most striking features of the two panels are the long, curved clusters, which I assume must be due to charged particles (electrons?). The single-pixel clusters of low energy are rather difficult even to see. Maybe the caption or the text could say something about the different clusters that are seen there. In the case of the G4 simulation you could use the MC truth information.

Line 207: regarding the statement that particles are naturally filtered by the HEXITEC. Neutrons are hit-or-miss, but I would think that electrons and isotopes of any reasonable energy would be detected by the hardware with very high efficiency. Is there really something in the hardware implementation that would filter them out? If so, then explain what it is. I would guess that the phantom material filters out a lot of the particles, by stopping them.

Line 218: the statement "can be easily filtered" seems to me to be too strong. Looking at Figure 5, at most 50% of the particle contamination would be removed by the cut on cluster size. Yes, it is "easy" to make such a cut, but "easily filtered" to me kind of implies that most of the contamination would be removed.

Line 220: I cannot follow the logic in this paragraph, especially in the two sentences starting here. The first sentence, beginning with "This is because...", seems to want to explain why "other particles such as neutrons and electrons can be easily filtered", since it follows that statement. But it only talks about neutrons, and why does the fact that the neutron tends to scatter multiple times help? It could thereby produce multiple small clusters, each of which could fake a gamma. Similarly, the neutron capture doesn't appear to help the filtering, as it can result in even more isolated clusters removed from where the neutron first hits. So I'm having trouble understanding what point is really being made about neutrons. To me they seem much more problematic than electrons. Since the authors have access to the MC truth, I think it would be useful to have the "other particles" in Figure 5 separated into neutrons, electrons, and whatever else (scattered protons?).

Line 234: "particle classification process" seems to me to be too grandiose a phrase for describing a simple cut on cluster size. It's not really classifying. Instead it just reduces the background to gammas by a factor of 1/2 or so. Regarding the phrase "not yet fully optimized", where was it optimized at all? It's kind of obvious where to place the cut in Figure 5 in order to have high efficiency while cutting into the background. I don't think there was an optimization process discussed. Putting that aside, however, how would the authors envision some real optimization taking place? Are there more variables besides cluster size that can be used, perhaps in some sort of machine-learning algorithm?
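To make the kind of cut under discussion concrete, here is a minimal sketch of a cluster-size filter in Python (not the authors' actual pipeline; the frame shape, energy threshold, and size cut below are hypothetical placeholders):

=====
import numpy as np
from scipy import ndimage

def gamma_candidates(frame, energy_threshold=1.0, max_cluster_size=2):
    """Keep only small clusters (candidate photon hits) in one frame.

    frame            : 2D array of per-pixel deposited energy (keV)
    energy_threshold : hypothetical noise floor (keV)
    max_cluster_size : larger clusters are treated as particle tracks
    """
    hit = frame > energy_threshold
    # Group adjacent above-threshold pixels (4-connected by default)
    labels, n_clusters = ndimage.label(hit)
    sizes = ndimage.sum(hit, labels, range(1, n_clusters + 1))
    # Sum each surviving cluster's energy as the candidate photon energy
    return [frame[labels == i].sum()
            for i, size in enumerate(sizes, start=1)
            if size <= max_cluster_size]

# Toy 80x80 frame (HEXITEC has an 80x80 pixel array)
frame = np.zeros((80, 80))
frame[10, 10] = 60.0        # single-pixel hit: kept
frame[30, 30:36] = 25.0     # six-pixel track: rejected
print(gamma_candidates(frame))   # -> [60.0]
=====

A real optimization, as hinted at here, could then feed several per-cluster features (size, summed energy, shape) into a trained classifier rather than applying a single hard cut on size.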

Line 249: again, what "further filtering" might be done? I'm sure the authors have thought about it.

Line 260: why is the improvement at shallower depths an important advantage in practice? I would think that the need for good imaging and verification of the Bragg peak location would be greater the greater is the depth of the tumor, so it seems to me to be unfortunate that the imaging will work less well at greater depths.

Line 270: here finally is something about possible further filtering. But realistically, how much could you gain by looking at cluster patterns when the signal is almost all in clusters of one or two pixels? Is the idea to look at patterns of unassociated hits to try to detect patterns left by neutrons? That could be very difficult in a frame with multiple photon hits. The following sentence about spectral resolution is also intriguing, but I cannot see how this would be used in filtering. More explanation is needed.

Line 272: I do not understand the last sentence in the paper at all (except for the obvious fact that the spectral resolution depends on the detector thickness). Please clarify.

The English is generally pretty good. Here are a few things that caught my attention:

Line 188: the period after "volume" should be, I assume, a comma.

Line 247: what is an "equivalence process"?

Line 262: "is be lower": I guess the "be" should be deleted.

Author Response

We deeply appreciate the reviewer's comments and suggestions, which have been taken into account. Please see the attachment for our detailed responses.

Author Response File: Author Response.pdf

Reviewer 2 Report

Referee report on the article "First Study of a HEXITEC Detector for Secondary Particle Characterisation during Proton Beam Therapy"


The article is clear and concise. I would like to clarify the issue of error calculation (related to Table 1); otherwise I have only minor comments and some suggestions regarding the reformulation of formulas.

Comments on the content:

a) It would be nice to have a comment on the expected precision of the Geant4 model (QGSP_BIC_HP_EMZ), just to show that this systematic uncertainty is small.
b) It seems rather strange, and a little unphysical, to talk about the "time taken to produce one single proton", as if protons arrived at strictly regular intervals. I'd suggest working with the proton flux I_out / q_p instead, i.e. the rate of primary protons dn_p/dt. This is just a formality, but it would simplify the narrative (see below).
c) I am somewhat surprised that there is no information about the background, i.e. what kind of signal HEXITEC registers in the absence of the primary proton beam. That seems to be a very simple measurement to make, even before starting the experiment. Is there really no limit available whatsoever?
d) It would be nice for the reader to have an idea of the typical dose of protons used in the therapy, from which one could deduce the relative uncertainty of the measurement.
e) I am also wondering about the uncertainty of the parameter alpha in Eq. 1: how is it measured, and what is the corresponding uncertainty? There are no error bars in Fig. 1, or are they invisible because they are too small?


Comments on the figures:
      
Can you somewhat enlarge the titles and labels in Figs. 2 and 4-6? There is plenty of space for that, I believe.


Comments on the text:

Lines 66-69: I suggest replacing the sentences "The production of these particles is due ... diffuses its energy among the nucleons." with "The production of these particles is due to the fact that when protons strike a nucleus in the body, the nucleus gets excited and returns to its ground state by emitting particles such as gammas or neutrons, or disintegrates into lighter nuclei [15]."

(I'd also consider replacing "gammas" with "photons", but that's just personal taste. If the literature in the field prefers "gammas", so be it.)

Equations 2-4: I'd replace l. 163-159 ("Let t_p be ...") with:

=====
The number of protons per frame is calculated with the help of the primary proton flux as

    ppf = (I_out / q_p) * t_f            (2)

where q_p is the proton charge and t_f the time per frame. The number of frames that should be used at UCLH to equalize the number of primaries n in the Geant4 sample is then

    N_frames = n / ppf = n * q_p / (t_f * I_out)       (3)
=====

(There really is no need to define t_p, and you save one equation.)
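For concreteness, a quick numerical sketch of the two proposed formulas in Python (a minimal illustration; the current, frame time, and sample size are hypothetical placeholders, not values from the paper):

=====
# Hypothetical values for illustration only; not taken from the paper.
Q_P   = 1.602e-19    # proton charge q_p (C)
I_OUT = 1.0e-12      # nozzle current I_out (A), placeholder
T_F   = 0.01         # time per frame t_f (s), placeholder
N_SIM = 1_000_000    # number of primaries n in the Geant4 sample, placeholder

# Eq. (2): protons per frame from the primary proton flux I_out / q_p
ppf = (I_OUT / Q_P) * T_F

# Eq. (3): frames needed to match the simulated number of primaries
n_frames = N_SIM / ppf    # equivalently n * q_p / (t_f * I_out)

print(f"protons per frame:      {ppf:.3e}")
print(f"frames to match the MC: {n_frames:.3e}")
=====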

Comments on the table:

Table 1 is a bit problematic, because the concept follows a logic of rescaling the experimental data to the simulation. You should quote the measurement with experimental errors that come from the total amount of data collected (you measured far more than the number of frames quoted in the table, didn't you?); these errors usually do not depend on the size of the simulated sample. I would rather expect to see the total number of frames measured, with a corresponding uncertainty (the error on the number of frames has to scale with the number of frames, since the frequency does not depend on the proton energy, or does it?). Even if you really measured only once and collected exactly the number of frames quoted here, the errors should still scale with the number of frames, at least as far as I can see.

The error on the number of frames is your systematic uncertainty, which should be added in quadrature to the statistical uncertainty (after multiplication by the number of protons per frame, as a function of proton energy). From Table 1 we do not really learn what the data errors are. I find Table 1 rather confusing and would drop it from the paper, leaving only the information about the uncertainty of the measured frequency in the text (that seems to be the leading uncertainty of the measurement?).
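Spelled out, the combination being suggested here (writing sigma_frames for the uncertainty on the number of frames and ppf for the protons per frame of Eq. 2; these labels are ours, for illustration only) would read

    sigma_tot = sqrt( sigma_stat^2 + (ppf * sigma_frames)^2 )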
        


Best wishes and congratulations on this interesting article; I recommend publication after clarification of these few points.

 

 

Comments for author File: Comments.pdf

Author Response

We deeply appreciate the reviewer's comments. Please see the attachment for detailed responses. Many thanks.

Author Response File: Author Response.pdf
