Evaluation of the Characteristics of Short Acquisition Times Using the Clear Adaptive Low-Noise Method and Advanced Intelligent Clear-IQ Engine
Round 1
Reviewer 1 Report
Comments and Suggestions for Authors
This study aims to evaluate two noise reduction methods implemented on the Cartesion Prime PET/CT scanner available from Canon Medical Systems Corporation. The first algorithm evaluated is the Clear adaptive Low-noise Method (CaLM), which is based on the non-local means approach and is applied to PET images obtained using standard OSEM/MLEM reconstruction methods. The other algorithm evaluated is the Advanced intelligent Clear-IQ Engine (AiCE), which is an AI-based reconstruction engine. The primary motivation for using noise reduction algorithms is to shorten the time spent on PET data acquisition.
Overall the work performed is scientifically sound and of interest to many readers within the medical imaging community. Unfortunately, the manuscript contains numerous significant shortcomings which must be addressed before it is suitable for publication.
General comments:
Some of the parameters used for image evaluation are not clearly defined and described. The parameter QH10mm seems not to be defined in the text at all. The parameters CV and N10mm have essentially identical definitions via eqs. (1) and (3), so the reason for the different designations should be justified and commented on. Is the only difference between the two parameters the sphere size (10 versus 37 mm)? In general, the chosen nomenclature seems to follow the “Japanese Guideline for Oncology FDG PET/CT Data Acquisition Protocol: synopsis of Version 2.0”, which on several points may differ from the better-known “NEMA Standards Publication NU 2-2018”. It is recommended to clarify the nomenclature and include the NEMA publication as a reference.
Some very unusual and peculiar features in the data are not assessed or commented on. This is evident, e.g., for the RC values displayed in figure 3:
- RC decreases as the number of iterations is increased. This contradicts the generally observed behavior of OSEM/MLEM reconstruction.
- The data for 30 and 90 s acquisition time seem to stand out, i.e., deviate from a monotonic and smooth variation with acquisition time.
Similar issues are seen in figures 4, 9, 10, and 11. This issue must be further analyzed, and for comparison it would also be very interesting to present data for pure OSEM reconstruction without application of CaLM.
All figures (1-14) are rather difficult to read and interpret for the reader:
1) the font size of the labels and numbers on the x/y-axes is far too small,
2) the CV values within each rectangle are difficult to read.
It is recommended to substantially revise the figures, e.g. having a dual color scale with a clear transition between colors at the threshold value (CV=10%, RC = 38%,…). Alternatively and maybe even better, show all data as 2D scatter plots with multiple curves for the various acquisition times.
The sentence in figure captions like “The red line represents the recommended…” should be rephrased for clarity.
Specific comments and recommendations (line number):
L32: Missing definition of QH10mm.
L104-110: The definition of regions of interest (ROIs) is somewhat confusing and contains a repeated statement. It should be clarified that one ROI is placed on a central slice through each sphere, and that the 12 x 5 ROIs are placed in the background.
L115: The term “maximum total value” is unclear. Is it the maximum pixel value within the sphere or is it the average pixel value? This is an important distinction, since maximum values are more prone to (positive) bias in noisy data.
L119-121: Repeated statement for SD10mm.
P6-7: The data shown in figure 5 and 6 seem to be completely identical? Please investigate.
L284: Please rephrase “similar similarity”
L291: Replace “thought” with “observed” or similar expression.
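The bias concern raised above for "maximum total value" (L115) can be illustrated numerically. The sketch below uses synthetic Gaussian noise with arbitrary parameters (a 37-voxel ROI, noise SD of 10%), not the study's data, to show why a maximum is positively biased while a mean is not:

```python
import numpy as np

# Synthetic illustration: in zero-mean noise, the ROI mean is an
# unbiased estimate of the true activity, while the ROI maximum
# is systematically biased upward.
rng = np.random.default_rng(0)
true_value = 100.0
# 10,000 repeated measurements of a 37-voxel ROI with Gaussian noise
samples = true_value + rng.normal(0.0, 10.0, size=(10_000, 37))

mean_estimate = samples.mean()             # close to the true value
max_estimate = samples.max(axis=1).mean()  # clearly above the true value
```

The gap between the two estimates grows with both noise level and ROI size, which is why the distinction between maximum and average pixel values matters for noisy short-acquisition data.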
Author Response
Dear Reviewer 1,
Thank you very much for agreeing to review our manuscript.
We greatly appreciate your valuable feedback.
Below are our point-by-point responses to your comments.
Since QH10mm was not defined in the text, we have now included its definition in both the Abstract and Materials and Methods sections.
The difference in the calculation of CV and N10mm is due to the different sizes of the ROIs used.
To clarify this distinction, we have added a sentence before the equations for both indices.
The term "guidelines" was unclear, so we have explicitly stated the reference:
"According to the Japanese guideline for the oncology FDG-PET/CT data acquisition protocol: Synopsis of Version 2.0 [1]" in Section 2.3 (Analysis of Phantom Images).
Regarding the phenomenon where RC decreases with an increase in the number of iterations:
This could potentially be attributed to the principles of CaLM (Non-Local method).
Since the 10 mm sphere is significantly smaller than the other hot spheres, the signal emitted from this sphere might have been judged as noise, similar to the background (BG) region.
This hypothesis would require further testing by modeling the Non-Local Mean (NLM) filter and examining its effect.
However, the NLM filter in CaLM is implemented as a "black box," unlike the original NLM filter, which has various parameters.
Therefore, we have not included this explanation in the current manuscript.
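The hypothesized mechanism can be illustrated with a generic non-local means filter. The sketch below is a minimal textbook NLM, not CaLM's proprietary implementation; the patch size, search window, and smoothing parameter h are illustrative assumptions. It shows how an isolated small hot spot (no similar patches in its neighborhood) is averaged toward the background, while a larger uniform region is preserved:

```python
import numpy as np

def nlm_filter(img, patch=1, search=3, h=0.5):
    """Minimal non-local means: each pixel becomes a weighted average of
    pixels in a search window, weighted by patch similarity."""
    pad = patch + search
    padded = np.pad(img, pad, mode="reflect")
    out = np.zeros_like(img, dtype=float)
    H, W = img.shape
    for i in range(H):
        for j in range(W):
            ci, cj = i + pad, j + pad
            ref = padded[ci - patch:ci + patch + 1, cj - patch:cj + patch + 1]
            weights, vals = [], []
            for di in range(-search, search + 1):
                for dj in range(-search, search + 1):
                    ni, nj = ci + di, cj + dj
                    cand = padded[ni - patch:ni + patch + 1,
                                  nj - patch:nj + patch + 1]
                    d2 = np.mean((ref - cand) ** 2)  # patch dissimilarity
                    weights.append(np.exp(-d2 / h ** 2))
                    vals.append(padded[ni, nj])
            weights = np.array(weights)
            out[i, j] = np.dot(weights, vals) / weights.sum()
    return out

# A single hot pixel (small-sphere analogue) vs. a 5x5 hot block
img_spot = np.zeros((9, 9)); img_spot[4, 4] = 1.0
img_block = np.zeros((11, 11)); img_block[3:8, 3:8] = 1.0

spot_out = nlm_filter(img_spot)[4, 4]    # strongly suppressed
block_out = nlm_filter(img_block)[5, 5]  # largely preserved
```

Because the lone hot pixel has no similar patches in its search window, its weighted average is dominated by background pixels, which would mimic the RC drop observed for the 10 mm sphere.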
We have made the following changes to the colormap used in the figures:
We replaced the gradient with just two colors and removed the values of each index from the colormap.
This now clearly distinguishes which image reconstruction conditions meet or do not meet the physical indices.
Additionally, we have adjusted the font size of the labels on both the x-axis and y-axis.
We have removed the inaccurate statement "The red line represents the recommended value" from the figure descriptions.
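The two-color threshold scheme described above can be expressed with matplotlib's `BoundaryNorm`; the CV values and the 10% threshold in this sketch are illustrative placeholders, not the actual figure data or the authors' plotting code:

```python
import numpy as np
import matplotlib
matplotlib.use("Agg")  # headless backend for scripted figure generation
import matplotlib.pyplot as plt
from matplotlib.colors import ListedColormap, BoundaryNorm

# Hypothetical CV values (%) on a grid of reconstruction conditions
cv = np.array([[15.0, 9.0],
               [12.0, 7.5]])

# Two-color scale: blue below the CV = 10% threshold, red at or above it
cmap = ListedColormap(["tab:blue", "tab:red"])
norm = BoundaryNorm([0, 10, 100], cmap.N)

fig, ax = plt.subplots()
im = ax.imshow(cv, cmap=cmap, norm=norm)
fig.colorbar(im, ax=ax, ticks=[10])
fig.savefig("cv_map.png")
```

Mapping each cell to one of exactly two colors makes the pass/fail decision readable even when the numeric values themselves are removed from the boxes.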
Other revisions based on your comments have been made as follows:
Text deletions are shown in blue, and added or revised text is shown in red.
Sincerely,
The Authors
Author Response File: Author Response.pdf
Reviewer 2 Report
Comments and Suggestions for Authors
The article reads well, with sufficient details for the methods and findings. I have no comments.
Author Response
Dear Reviewer 2,
Thank you very much for reviewing our manuscript.
We sincerely appreciate your time and effort.
We look forward to your continued guidance and support.
Sincerely,
The Authors
Reviewer 3 Report
Comments and Suggestions for Authors
This study aims to evaluate short acquisition times in a semiconductor-based PET/CT system using two reconstruction approaches, CaLM and AiCE, on a phantom. Although it is an interesting study, the results are not very well presented.
1. More background introduction about CaLM and AiCE should be added to the Introduction section. How exactly are these two approaches applied? What is “Mild, Standard, Strong”? More explanation is needed in Section 2.2 Image reconstruction, lines 98 to 102, on how exactly the two approaches are applied.
2. A figure showing the location of ROIs on the phantom image should be presented. An image of the phantom using CaLM should also be presented.
3. The author mentioned using “physical indicators” to evaluate images quantitatively (Introduction, lines 61-63) and later showed some physical index values in table 4. What exactly are those indicators? And how are the values in table 4 calculated? In lines 212-218, “achieve all physical indices”, what exactly are all the physical indices?
4. The figures are not well presented. The fonts are very small in the figure axis titles. The numbers in each box are unreadable. If the numbers in each box don't offer any useful information, then please remove the numbers and just use the color bar to show the difference. But if the numbers are useful, then use some tables to list all the numbers!
5. All the figure captions mention a “red line”, but I don't see any red lines; there are only red and blue boxes. Also, the color bar seems all wrong. In figures 1 and 2, on the color bar blue=0 and red=10, but the numbers in the boxes seem to range from 150 (red) to 3 (blue). In figure 3, the color bar shows blue=38, but the numbers in blue boxes go up to 50. Please check all the figures!
6. For the visual evaluation, I assume the images are scored by some raters. How many raters evaluated the images? If there is only one rater, the results in table 5 don't mean anything. If there are multiple raters, the average scores with standard deviation should be presented. Why is visual evaluation only conducted on AiCE and not CaLM?
Author Response
Dear Reviewer 3,
Thank you very much for taking the time to review our manuscript.
We sincerely appreciate your insightful comments and suggestions.
Below are our point-by-point responses to the issues you raised.
1.
We have added explanations of CaLM and AiCE in the Introduction section.
We believe this addition helps clarify the mechanisms of CaLM and AiCE, which are later described in Section 2.2 Image reconstruction.
2.
In Figure 1, we have presented images reconstructed using the conventional Gaussian filter as well as CaLM.
In Figure 2, we have shown the placement of the ROIs used in this study.
3.
We found the term "all physical indices" in Table 4 to be unclear, so we have revised it to
"all physical indices (CV, RC, N10mm, and QH10mm/N10mm)" for clarity.
4 & 5
We have revised all figures to improve clarity and visual presentation.
To make the figures more intuitive, we used only two colors:
blue to indicate values that met the guideline recommendations, and red for those that did not.
6.
The visual assessments were performed by two radiological technologists, each with over 5 years of experience in nuclear medicine.
As this information was missing in the previous version, we have now included the following sentence in the manuscript:
"These visual assessments were performed by a radiological technologist with more than 5 years of experience in nuclear medicine."
As for why the visual evaluation was conducted only for AiCE and not for CaLM:
In this study, more than 900 different images were generated using CaLM due to the following combinations:
iterations (20) × subsets (1) × CaLM (3 types) × PSF (ON/OFF) × acquisition time (8 types).
Therefore, conducting visual assessments for all of these images was not feasible in practice.
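The image count quoted above follows directly from the listed factors (treating PSF ON/OFF as a factor of 2):

```python
# Reconstruction-condition grid quoted in the response above
iterations = 20   # iteration settings
subsets = 1       # subset setting
calm_types = 3    # Mild / Standard / Strong
psf = 2           # PSF ON / OFF
acq_times = 8     # acquisition-time settings

total_images = iterations * subsets * calm_types * psf * acq_times
print(total_images)  # 960 -> "more than 900 different images"
```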
The revised manuscript is provided in two versions:
- In version v3-1, deleted text is shown in blue, and added or modified text is shown in red.
Sincerely,
The Authors
Author Response File: Author Response.pdf
Round 2
Reviewer 3 Report
Comments and Suggestions for Authors
The manuscript has improved a lot. A few minor issues:
- Figure 2 has 5 different phantom images; please specify what those are in the figure caption.
- In terms of the visual assessments, it seems only 1 technician did the evaluation, so what does the standard deviation represent in table 5? Such an evaluation should be performed by at least 3 different people to make the results meaningful.
Author Response
Dear Reviewer 3,
Thank you very much for your valuable review and constructive comments.
Please find our responses to your suggestions below:
Regarding the explanation of Figure 2:
We have revised the figure legend as follows:
“The slice on which the hot spheres were most clearly visualized was defined as the center slice. ROIs were placed on the center slice as well as on the slices located at ±1 cm and ±2 cm from it. (a) to (e) show the −2 cm, −1 cm, center, +1 cm, and +2 cm slices, respectively.”
Regarding the number of visual assessors:
The original manuscript lacked the phrase “two radiological technologists.” It previously stated:
“These visual assessments were performed by a radiological technologist with more than 5 years of experience in nuclear medicine.”
We have now revised the description to reflect the involvement of three assessors, as follows:
“These visual assessments were performed by three radiological technologists, each with more than 5 years of experience in nuclear medicine.”
Correspondingly, the contents of Table 5 have been updated to reflect this change.
Thank you again for your helpful feedback.
Sincerely,
Author
Author Response File: Author Response.pdf