Article
Peer-Review Record

HDNLS: Hybrid Deep-Learning and Non-Linear Least Squares-Based Method for Fast Multi-Component T1ρ Mapping in the Knee Joint

Bioengineering 2025, 12(1), 8; https://doi.org/10.3390/bioengineering12010008
by Dilbag Singh *, Ravinder R. Regatte and Marcelo V. W. Zibetti *
Reviewer 1: Anonymous
Reviewer 2: Anonymous
Reviewer 3: Anonymous
Reviewer 4: Anonymous
Submission received: 29 October 2024 / Revised: 10 December 2024 / Accepted: 20 December 2024 / Published: 25 December 2024
(This article belongs to the Section Biomechanics and Sports Medicine)

Round 1

Reviewer 1 Report

Comments and Suggestions for Authors

I have the following questions about this paper:

1.       The author may elaborate on how their suggested approach handles the computational requirements and reduces the possibility of convergent local minima, especially when initial parameter estimations are inadequate. The author should also investigate robust initialization techniques to improve the precision and dependability of their methodology.

2.       The author should discuss if the suggested AI models can produce entire knee joint relaxation maps with accuracy and dependability that are on par with NLS techniques.

3.       The author may discuss how supervised DL systems’ viability and scalability are affected by their reliance on reference values from NLS techniques, especially when working with big datasets.

4.       When applying the DL model to real-world MRI data, the author should critically evaluate how the use of synthetic data for training affects the model's robustness and generalizability.

5.       The author should explain how the customized loss function balances the weights between data consistency and parameter accuracy.

6.       When restricting the number of NLS iterations to less than 300, the author may provide more details about the trade-off between accuracy and processing efficiency.

7.       Whether the little error increase at 200 iterations for SE fitting is clinically or practically meaningful, and if so, under what circumstances is this speed-accuracy trade-off acceptable, should be covered by the author. 

Author Response

The authors sincerely thank the reviewers and editors for their valuable comments and suggestions. A detailed point-by-point response to each comment is provided in the attached document. The following responses specifically address the comments raised by you.

 

R1C1: The author may elaborate on how their suggested approach handles the computational requirements and reduces the possibility of convergent local minima, especially when initial parameter estimations are inadequate. The author should also investigate robust initialization techniques to improve the precision and dependability of their methodology.

Response: Thanks for this comment. In the following section, we elaborate on how our proposed HDNLS approach addresses computational requirements and mitigates the issue of slow convergence and suboptimal solutions, especially under inadequate initial parameter estimations.

HDNLS combines the speed of deep learning (DL) with the accuracy of non-linear least squares (NLS). The DL component provides a preliminary parameter estimate that serves as the initial guess for NLS, significantly narrowing its search space. This reduces the computational burden of the NLS iterations while minimizing the risk of converging to a suboptimal solution caused by poor initialization.

The NLS optimization in HDNLS uses the Trust Region Conjugate Gradient (TRCG) method, which is known for its robustness on NLS problems. By dynamically adjusting the trust region and using iterative Hessian inversions, TRCG ensures stable convergence. There are two important points to consider: 1) the problem is not strictly convex, so there is no guarantee of converging to a global minimum; good initialization and reliable optimization algorithms such as TRCG help mitigate this issue. 2) The problem is ill-posed, and due to noise there is no guarantee that the global minimum will be a meaningful practical result; good initialization and regularization help address this issue. Therefore, our goal is to find a robust way to reach meaningful minima.
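To make the initialization argument concrete, the sketch below fits a mono-exponential T1ρ decay with a damped Gauss-Newton loop, a simplified stand-in for the TRCG solver. The solver, spin-lock times, and parameter values here are illustrative assumptions, not taken from the paper:

```python
import numpy as np

def mono_exp(p, tsl):
    """Single-exponential T1rho decay: S(TSL) = A * exp(-TSL / T1rho)."""
    a, t1rho = p
    return a * np.exp(-tsl / t1rho)

def jacobian(p, tsl):
    """Analytic Jacobian of mono_exp with respect to (A, T1rho)."""
    a, t1rho = p
    e = np.exp(-tsl / t1rho)
    return np.stack([e, a * tsl * e / t1rho**2], axis=1)

def damped_gn_fit(p0, tsl, data, max_iter=200, damping=1e-4):
    """Damped Gauss-Newton NLS loop. The damping term plays a role loosely
    analogous to trust-region control in TRCG; this is a simplified
    stand-in, not the paper's solver."""
    p = np.asarray(p0, dtype=float)
    for _ in range(max_iter):
        r = mono_exp(p, tsl) - data          # residual vector
        j = jacobian(p, tsl)
        step = np.linalg.solve(j.T @ j + damping * np.eye(2), j.T @ r)
        p = p - step
        if np.linalg.norm(step) < 1e-10:     # converged
            break
    return p

# Illustrative noiseless example: A = 1.0, T1rho = 40 ms
tsl = np.array([2.0, 5.0, 10.0, 20.0, 35.0, 50.0])   # spin-lock times (ms)
true_p = np.array([1.0, 40.0])
signal = mono_exp(true_p, tsl)

# A DL-style initial guess close to the truth lets the loop converge quickly
p_fit = damped_gn_fit([0.9, 35.0], tsl, signal)
```

With a poor initial guess (e.g., T1ρ off by an order of magnitude), the same loop needs far more iterations or stalls in a flat region, which is exactly the failure mode the DL initializer is meant to avoid.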

Through ablation studies (Section 3), we determined that HDNLS achieves a performance comparable to that of NLS with significantly fewer iterations. For example, as highlighted in the paper, only 200 iterations for the ME and BE fitting (and 300 for SE) are sufficient, as opposed to 2000 iterations for the NLS. This optimization directly reduces computational overhead while maintaining high accuracy.

 

R1C2: The author should discuss if the suggested AI models can produce entire knee joint relaxation maps with accuracy and dependability that are on par with NLS techniques.

Response: Thanks for raising this point. The proposed HDNLS model effectively combines DL with NLS to address the challenges of producing accurate and reliable T1ρ relaxation maps for both the cartilage of the knee and the entire knee joint.

The HDNLS model demonstrates precision comparable to NLS when producing both knee-cartilage and whole-knee relaxation maps, with the added advantage of faster computation. Although challenges remain in handling the noise sensitivity of the BE short component, HDNLS represents a robust, efficient, and fast alternative to traditional NLS techniques, particularly for applications that require comprehensive knee joint analysis. In the near future, we will extend HDNLS to better handle the noise sensitivity of the BE short component as well.
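For readers unfamiliar with the fitting models, the mono- (ME), bi- (BE), and stretched-exponential (SE) signal models commonly used in multi-component T1ρ mapping can be sketched as follows (these are the usual parameterizations from the literature; the paper's exact conventions may differ):

```python
import numpy as np

def mono_exp(tsl, a, t1rho):
    """ME model: a single T1rho relaxation pool."""
    return a * np.exp(-tsl / t1rho)

def bi_exp(tsl, a, frac_short, t_short, t_long):
    """BE model: short and long T1rho components; frac_short is the
    short-component fraction (the term most sensitive to noise)."""
    return a * (frac_short * np.exp(-tsl / t_short)
                + (1.0 - frac_short) * np.exp(-tsl / t_long))

def stretched_exp(tsl, a, t1rho, beta):
    """SE model: stretched exponential with stretching exponent beta."""
    return a * np.exp(-(tsl / t1rho) ** beta)

tsl = np.linspace(2.0, 50.0, 10)             # spin-lock times (ms)
s_be = bi_exp(tsl, 1.0, 0.3, 5.0, 60.0)      # example BE decay
```

Note that the BE model reduces to the ME model when `frac_short` is 1, which is why the short-component parameters become poorly conditioned (and noise-sensitive) whenever one pool dominates.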

 

R1C3: The author may discuss how supervised DL systems’ viability and scalability are affected by their reliance on reference values from NLS techniques, especially when working with big datasets.

Response: We appreciate the reviewer’s suggestion. Supervised DL systems inherently depend on high-quality reference data for effective training. In the case of T1ρ mapping, these reference values are often derived from NLS techniques. However, generating such reference data is computationally expensive and time-intensive, especially for large datasets, which significantly limits scalability. Furthermore, the quality of the reference data influences the performance of the DL models: if the NLS-based reference data contain noise or other inaccuracies, they may encode less meaningful solutions due to the ill-posedness of the T1ρ mapping problem. Consequently, training a DL model on those NLS solutions will degrade its performance.

HDNLS addresses this limitation by eliminating the dependence on NLS-generated reference values for training. Instead, it uses synthetic data to train the DL model. The trained DL component then provides a preliminary parameter estimate that serves as the initial guess for NLS, significantly narrowing its search space. This reduces the computational burden of the NLS iterations while minimizing the risk of converging to suboptimal (less meaningful) solutions caused by poor initialization and ill-posedness.
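The synthetic-data training strategy described above could be sketched as follows; the parameter ranges, spin-lock times, and noise level are illustrative assumptions, not the paper's values:

```python
import numpy as np

rng = np.random.default_rng(0)
tsl = np.array([2.0, 4.0, 8.0, 16.0, 32.0, 50.0])   # example spin-lock times (ms)

def make_training_set(n, noise_sd=0.02):
    """Simulate noisy mono-exponential T1rho decays with parameters sampled
    from plausible ranges. The signals are the network inputs; the sampled
    parameters are the regression targets, so no NLS references are needed."""
    a = rng.uniform(0.5, 1.5, n)
    t1rho = rng.uniform(10.0, 120.0, n)              # ms
    clean = a[:, None] * np.exp(-tsl[None, :] / t1rho[:, None])
    signals = clean + rng.normal(0.0, noise_sd, clean.shape)
    params = np.stack([a, t1rho], axis=1)
    return signals, params

X, y = make_training_set(1000)
```

Because the ground-truth parameters are sampled rather than estimated, the training labels are exact by construction, sidestepping the noise and ill-posedness issues of NLS-derived references.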

Additionally, HDNLS provides the flexibility to adjust the number of NLS iterations. This enables users to balance precision and computational cost. Thus, HDNLS is a more practical and scalable solution for handling large datasets compared to supervised DL systems.

 

R1C4: When applying the DL model to real-world MRI data, the author should critically evaluate how the use of synthetic data for training affects the model's robustness and generalizability.

Response: Thank you for raising this point. While the standalone use of a DL model trained on synthetic data may lack robustness and generalizability, the HDNLS framework addresses these concerns through its hybrid design. By using NLS refinement to iteratively correct DL predictions against real-world MRI data, HDNLS ensures that the final parameter estimates have goodness-of-fit properties, as measured by NRMSR, similar to those of NLS solutions. This hybrid approach not only mitigates the limitations of standalone DL and NLS methods but also offers a practical solution for fast and reliable multi-component T1ρ mapping in the knee joint.

In Figures 15 to 18 and Tables 7 to 9 on pp. 23 to 26, we present a comprehensive comparison of NLS, RNLS, DL, and HDNLS, highlighting their respective performance metrics and trade-offs on both synthetic and real MRI data.

 

R1C5: The author should explain how the customized loss function balances the weights between data consistency and parameter accuracy.

Response: Thank you for this comment. The customized loss function in the HDNLS framework is designed to balance two critical objectives: maintaining data consistency and achieving parameter accuracy. Data consistency ensures that the predicted relaxation signals align closely with the observed signals, obtaining reliable goodness-of-fit properties, while parameter accuracy focuses on precisely estimating the underlying parameters.

To achieve this balance, we incorporated weighting parameters (γs and γθ) into the loss function. These weights were fine-tuned through sensitivity analysis. The analysis revealed the optimal values for γs and γθ, ensuring that neither objective was disproportionately prioritized at the expense of the other.
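A minimal sketch of such a weighted loss, assuming a simple sum of mean-squared terms (the paper's exact functional form and tuned γ values are not reproduced here):

```python
import numpy as np

def hybrid_loss(pred_params, true_params, pred_signal, meas_signal,
                gamma_theta=1.0, gamma_s=1.0):
    """Weighted sum of a parameter-accuracy term and a data-consistency
    term. gamma_theta and gamma_s correspond conceptually to the paper's
    weights; the mean-squared form and default values are illustrative."""
    param_term = np.mean((pred_params - true_params) ** 2)
    consistency_term = np.mean((pred_signal - meas_signal) ** 2)
    return gamma_theta * param_term + gamma_s * consistency_term
```

Raising `gamma_s` relative to `gamma_theta` pushes the network toward reproducing the measured decay curves, while raising `gamma_theta` prioritizes accurate parameter maps; the sensitivity analysis picks the balance point between the two.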

We have added a new section, Section 2.3, to provide this analysis. Additionally, Figure 6 on p. 11 illustrates the results of these sensitivity analyses, demonstrating how the customized loss function balances data consistency and parameter accuracy in the HDNLS framework.

 

R1C6: When restricting the number of NLS iterations to less than 300, the author may provide more details about the trade-off between accuracy and processing efficiency.

Response: The restriction to fewer than 300 iterations in HDNLS addresses the dual priorities of processing efficiency and precision in quantitative MRI applications. HDNLS demonstrates flexibility by adapting to various clinical and research requirements, providing a customizable approach to balancing speed and precision:

  • For applications where rapid results are essential, Ultrafast-HDNLS and Superfast-HDNLS (10 and 50 iterations, respectively) provide quick results with minimal computational demands. These configurations prioritize processing efficiency while maintaining reasonable precision.
  • For cases where precision is paramount, HDNLS with 200–500 iterations offers near-NLS performance. Despite the higher number of iterations, it remains substantially faster than traditional NLS methods. This ensures a balance between computational cost and precision.

This flexibility allows users to customize HDNLS to specific needs, making it a versatile, robust, and efficient tool for T1ρ mapping in quantitative MRI. By offering adaptable configurations, the HDNLS framework ensures technical soundness while retaining clinical relevance, making it suitable for a wide range of applications. Please refer to Figures 11 to 14 and Tables 5 and 6.

 

R1C7: Whether the little error increase at 200 iterations for SE fitting is clinically or practically meaningful, and if so, under what circumstances is this speed-accuracy trade-off acceptable, should be covered by the author. 

Response: The slight error increase at 200 iterations for SE fitting in the HDNLS framework is generally not clinically significant for routine diagnostic or exploratory research applications, where speed and computational efficiency are prioritized. However, in precision-critical contexts, such as advanced tissue characterization or longitudinal treatment monitoring, the enhanced accuracy provided by 300 and 500 iterations is essential. The flexibility of the HDNLS framework allows users to customize the trade-off between speed and precision to suit their specific needs. By incorporating configurable iteration settings and clear usage guidelines, HDNLS ensures adaptability across a broad spectrum of clinical and research scenarios, making it a robust and practical tool for diverse quantitative MRI applications.

Author Response File: Author Response.pdf

Reviewer 2 Report

Comments and Suggestions for Authors

The paper proposes a method to improve the performance, in terms of accuracy and speed, of non-linear least squares parameter fitting in magnetic resonance imaging of the knee joint.

The proposal lies in the algorithmic domain: a combination of deep learning with non-linear least squares. The deep learning part is partially trained on synthetic data and voxels.

Content issues:

- The prior art in deep learning is too short (lines 126-127). Taking into account that DL is part of the proposal, this prior work should be established more clearly.

- "The NN architecture comprises 7 repeated blocks of fully connected layers, each containing 512 intermediate elements followed by a nonlinear activation function and dropouts." It appears that this architecture is custom and thus the benefit of pre-training/transfer learning is lost.

- Figure 7 needs more explanation if the paper is not intended for knee-joint specialists. Why is it better? I notice differences, but there are too few to argue why one is better. Please detail. The same applies to Figure 8.

 

Some text/formatting issues:

- Section 1.4 is not aligned with the rest of the text

- Figure 4 is right aligned

 

Overall, the paper is well written with clear content and convincing proposal and evaluation. Some details are omitted.

Author Response

The authors sincerely thank the reviewers and editors for their valuable comments and suggestions. A detailed point-by-point response to each comment is provided in the attached document. The following responses specifically address the comments raised by you.

 

R2C1: The prior art in deep learning is too short l 126-127. Taking into account that DL is a part of the proposal, this prior work should be established more clearly.

Response: Thank you for your observation regarding the need to expand on prior art in machine learning (ML) and DL as it pertains to T1ρ fitting. In response to your suggestion, we have included a new Table 1 in the revised manuscript. This table comprehensively reviews AI-based T1ρ fitting models, highlighting their features, strengths, and associated challenges.

Table 1 now establishes a clear context for the proposed hybrid DL and NLS approach by outlining how current AI-based models focus predominantly on standalone ML or DL techniques. It also identifies gaps in these methods, such as limited exploration of complex models like SE and BE fitting, sensitivity to noise, and the absence of iterative regularization techniques like those in NLS.

R2C2: "The NN architecture comprises 7 repeated blocks of fully connected layers, each containing 512 intermediate elements followed by a nonlinear activation function and dropouts." It appears that this architecture is custom and thus the benefit of pre-training/transfer learning is lost.

Response: Thank you for this comment. The DL architecture used in this manuscript was designed for voxel-wise multi-component T1ρ fitting. While pre-trained models and transfer learning could offer advantages in some scenarios, their applicability to our work is limited. Pre-trained models typically excel at feature extraction from large-scale datasets, such as image classification or segmentation tasks. However, they may not provide significant benefits for voxel-wise MR relaxation estimation, where the input and output distributions differ substantially from standard image-based tasks. Furthermore, the use of synthetic data allowed us to fully control the range of input parameters, ensuring that the DL model was customized to the requirements of T1ρ fitting without relying on external datasets.
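A minimal NumPy sketch of the described architecture in inference mode (the weight initialization, input/output sizes, and ReLU choice are illustrative assumptions; the actual model is trained with dropout active):

```python
import numpy as np

rng = np.random.default_rng(0)

def make_mlp(n_in, n_out, width=512, n_blocks=7):
    """Randomly initialized weights for 7 fully connected hidden blocks of
    width 512, plus a linear output layer, mirroring the described design."""
    dims = [n_in] + [width] * n_blocks + [n_out]
    return [(rng.normal(0.0, np.sqrt(2.0 / d_in), (d_in, d_out)), np.zeros(d_out))
            for d_in, d_out in zip(dims[:-1], dims[1:])]

def forward(layers, x):
    """Inference-mode forward pass: ReLU after each hidden block. Dropout is
    active only during training, so it is omitted here."""
    for w, b in layers[:-1]:
        x = np.maximum(x @ w + b, 0.0)      # fully connected + ReLU
    w, b = layers[-1]
    return x @ w + b                        # linear head: parameter estimates

net = make_mlp(n_in=10, n_out=2)            # e.g. 10 TSL samples -> (A, T1rho)
out = forward(net, np.ones((4, 10)))
```

Because each voxel's decay curve is a short 1-D vector rather than an image, a plain fully connected stack like this is a natural fit, which is also why ImageNet-style pre-trained backbones transfer poorly here.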

However, we acknowledge the potential of incorporating transfer learning in future research by utilizing pre-trained models from similar domains, such as quantitative MRI or exponential model fitting tasks. This approach could serve as a basis for faster convergence and improved generalization. Consequently, we have updated Section 5.3: Future Directions to include this as a proposed avenue for future exploration.

 

R2C3: Figure 7 needs more explanation if the paper is not intended for knee-joint specialists. Why is it better? I notice differences, but there are too few to argue why one is better. Please detail. The same applies to Figure 8.

Response: Thank you for your important comment. We have included a more detailed description of Figures 7 and 8 (Figures 11 and 12 in the revised manuscript):

“Although it is challenging to visually distinguish differences between the obtained T1ρ maps, the evaluated squared error differences provide a clear distinction. Higher squared error values indicate poorer performance. From Figures 11 and 12, the squared error maps for HDNLS and Relaxed-HDNLS show significantly lower errors. This highlights their superior performance and closer alignment with the NLS-based T1ρ maps.”

 

R2C4: Some text/formatting issues:

- Section 1.4 is not aligned with the rest of the text

- Figure 4 is right aligned

Response: As suggested by the reviewer, we carefully aligned and crosschecked all the paragraphs, Tables, and Figures presented in the revised manuscript.

Author Response File: Author Response.pdf

Reviewer 3 Report

Comments and Suggestions for Authors

Even for T1ρ mapping regression problems, proper data splitting into train/validation/test sets remains essential. While the authors effectively used synthetic data generation for training their DL model, they don't explicitly mention using a validation set during the development process. Without a validation set, they cannot properly tune hyperparameters, monitor for overfitting during training, or properly assess their model's generalization ability during development.

The authors employed convergence validation using MNAD, which indicates some level of convergence; however, this metric alone does not equate to traditional learning curves that illustrate the relationship between training and validation loss over epochs, model learning dynamics, and potential overfitting or underfitting issues. The authors used synthetic data generated through sampling parameters from known distributions, but without proper three-way data splitting (training/validation/test), they cannot fully assess whether their model is overfitting, if it has converged optimally, or how well it balances bias and variance.

While they used 5 datasets for training and 5 different healthy subjects for testing, proper three-way data splitting of their data would have allowed them to use the validation set for hyperparameter tuning and early stopping, monitor training dynamics through proper learning curves, better assess model generalization, prevent potential data leakage between training and testing.

Given these limitations, the authors should at minimum discuss these methodological constraints in their paper. This would provide readers with a clearer understanding of potential limitations in their findings and acknowledge the possibility that their results could be overly optimistic without proper validation data separation during training.

Some key details about the deep learning architecture and training process are provided. However, more specifics about hyperparameters and optimization settings would improve reproducibility of their study.

Hence, the discussion section could be more comprehensive in addressing limitations. Expand on technical limitations and generalizability concerns. Explicitly acknowledge the potential for your results to be overly optimistic.

The paper also lacks a comprehensive discussion of AI ethics in healthcare. Add a section discussing ethical implications and public acceptance of AI in medical imaging, with recent citations.

Author Response

The authors sincerely thank the reviewers and editors for their valuable comments and suggestions. A detailed point-by-point response to each comment is provided in the attached document. The following responses specifically address the comments raised by you.

 

R3C1: Even for T1ρ mapping regression problems, proper data splitting into train/validation/test sets remains essential. While the authors effectively used synthetic data generation for training their DL model, they don't explicitly mention using a validation set during the development process. Without a validation set, they cannot properly tune hyperparameters, monitor for overfitting during training, or properly assess their model's generalization ability during development.

Response: Thank you for highlighting this critical point. We agree that proper data separation into train, validation, and test sets is essential for robust model development and evaluation. We would like to clarify the following regarding our methodology:

  1. Validation Set: During the development process, we split the synthetic dataset into three subsets: a training set (80%), a validation set (10%), and a test set (10%).
  2. Hyperparameter Tuning: We selected the hyperparameters of the DL model using model sensitivity analysis (Refer to Figures 4 and 5 on pp. 9 and 10). Similarly, we adjusted the weighting parameters (γs and γθ) in the custom loss function to ensure balanced optimization of residual and parameter errors (Refer to Figure 6 on pp. 11).

We have added a new section, named Section 2.3, to provide this analysis.
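The 80/10/10 split described above might look like this in practice (a generic index-based split; the seed and dataset size are arbitrary):

```python
import numpy as np

def split_80_10_10(n, seed=0):
    """Shuffled index split into train/validation/test = 80/10/10."""
    idx = np.random.default_rng(seed).permutation(n)
    n_train, n_val = int(0.8 * n), int(0.1 * n)
    return idx[:n_train], idx[n_train:n_train + n_val], idx[n_train + n_val:]

train_idx, val_idx, test_idx = split_80_10_10(10000)
```

Shuffling before slicing ensures each subset draws from the same parameter distribution, and the disjoint index sets prevent leakage between training and evaluation.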

 

R3C2: The authors used convergence validation using MNAD, which indicates some level of convergence; however, this metric alone does not equate to traditional learning curves that illustrate the relationship between training and validation loss over epochs, model learning dynamics, and potential overfitting or underfitting issues. The authors used synthetic data generated through sampling parameters from known distributions, but without proper three-way data splitting (training/validation/test), they cannot fully assess whether their model is overfitting, if it has converged optimally, or how well it balances bias and variance.

Response: We agree with the reviewer. As mentioned in the response to the previous comment, we did employ proper data splitting during training of the DL model. A validation set was used to monitor model performance and detect potential overfitting or underfitting. Specifically, the synthetic data were divided into training, validation, and test subsets in an 80:10:10 ratio. The validation set was used during training to calculate the customized loss function and adjust the hyperparameters, ensuring that the learning dynamics of the model were carefully monitored.

Although the median of normalized absolute difference (MNAD) and normalized root mean squared residual (NRMSR) analyses were highlighted in our manuscript, these analyses primarily serve to evaluate the trade-offs introduced by HDNLS. These metrics were not used to assess the convergence of the DL model during training; rather, they assess fitting quality, as is typical in T1ρ mapping studies.
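For reference, plausible formulations of the two fitting-quality metrics are sketched below; the paper's exact normalizations may differ:

```python
import numpy as np

def mnad(est, ref):
    """Median of normalized absolute difference between parameter maps
    (one common definition; the manuscript's normalization may differ)."""
    return np.median(np.abs(est - ref) / np.abs(ref))

def nrmsr(signal, fitted):
    """Normalized root mean squared residual: goodness of fit of the model
    to the measured signal, independent of any reference parameters."""
    return np.linalg.norm(signal - fitted) / np.linalg.norm(signal)
```

MNAD compares estimated parameters against a reference, whereas NRMSR needs no reference at all, which is why it is suitable for real data where ground-truth parameters are unavailable.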

We have updated Section 2.4 to include this explanation of the data-splitting strategy and the use of learning curves to verify convergence. Please refer to Figure 7 on p. 12.

 

R3C3: While they used 5 datasets for training and 5 different healthy subjects for testing, proper three-way data splitting of their data would have allowed them to use the validation set for hyperparameter tuning and early stopping, monitor training dynamics through proper learning curves, better assess model generalization, prevent potential data leakage between training and testing. Given these limitations, the authors should at minimum discuss these methodological constraints in their paper. This would provide readers with a clearer understanding of potential limitations in their findings and acknowledge the possibility that their results could be overly optimistic without proper validation data separation during training.

Some key details about the deep learning architecture and training process are provided. However, more specifics about hyperparameters and optimization settings would improve reproducibility of their study.

Response: Thanks for highlighting these important factors. To address your concerns, as mentioned in Responses R3C1 and R3C2, we have added new Sections 2.3 and 2.4 to the revised manuscript. These sections provide details about the sensitivity analysis of the DL model as well as the training and validation curves under the customized loss. Please refer to Figures 4 to 7 on pp. 9 to 12.

 

R3C4: Hence, the discussion section could be more comprehensive in addressing limitations. Expand on technical limitations and generalizability concerns. Explicitly acknowledge the potential for your results to be overly optimistic.

Response: Thank you for your valuable feedback. As suggested, we have included a suitable discussion of the limitations of the proposed model along with the future directions in Sections 5.2 and 5.3, respectively.

 

R3C5: The paper also lacks a comprehensive discussion of AI ethics in healthcare. Add a section discussing ethical implications and public acceptance of AI in medical imaging, with recent citations.

Response: We completely agree with the reviewer. As authors from a healthcare institution, our studies adhere to rigorous ethical principles, ensuring transparency, fairness, and respect for participant rights. Below, we outline how key aspects of AI ethics were incorporated throughout the study:

  1. Adherence to Ethical Standards

(i) Declaration of Helsinki Compliance: The study was conducted in alignment with the Declaration of Helsinki, emphasizing the protection of participant health, rights, and privacy.

(ii) Institutional Review Board (IRB) Approval: The protocol was approved by the Institutional Review Board (IRB) of NYU Langone Health (Protocol Code: i21-00710, Approval Date: March 13, 2022), ensuring adherence to ethical research guidelines.

  2. Privacy and Data Security

(i) HIPAA Compliance: The study was fully compliant with the Health Insurance Portability and Accountability Act (HIPAA), ensuring the confidentiality of participant data during both data collection and AI model training.

(ii) Use of Synthetic Training Data: To minimize privacy risks, the deep learning (DL) component of HDNLS was trained on synthetic data generated from relaxation models. This eliminates the dependency on real patient datasets.

  3. Informed Consent

(i) All participants provided written informed consent before undergoing MRI scans. They were informed about the purpose of the study, the potential risks, and the nature of the data being collected.

(ii) Participants retained the right to withdraw from the study at any time, ensuring autonomy in their participation.

  4. Transparency and Explainability

(i) Model Interpretability: The hybrid HDNLS framework combines deep learning and non-linear least squares (NLS) methods, providing clinicians with interpretable results by refining AI predictions through established optimization techniques.

(ii) Customized Loss Function: The DL model utilized a customized loss function to enforce data consistency, ensuring clinically meaningful predictions.

  5. Accountability: HDNLS was rigorously tested on real MRI data, with performance compared against standard methods (NLS, RNLS, and DL). This ensured accountability for HDNLS’s accuracy and reliability in clinical applications.

Author Response File: Author Response.pdf

Reviewer 4 Report

Comments and Suggestions for Authors

After reviewing the manuscript titled "HDNLS: Hybrid Deep-Learning and Non-linear Least Squares-based Method for Fast Multi-Component T1ρ Mapping in the Knee Joint," I regret to recommend its rejection.

Key Reasons:

  1. Lack of Innovation: The proposed hybrid method lacks clear novelty compared to existing approaches combining deep learning and optimization techniques.
  2. Limited Validation: Experiments are restricted to a small dataset of 12 healthy volunteers, with over-reliance on synthetic data, limiting clinical applicability.
  3. Inadequate Benchmarking: Comparisons with more advanced state-of-the-art methods are missing.
  4. Unclear Trade-Offs: The performance-speed trade-offs among the proposed variants are insufficiently justified for practical use.
  5. Shortcomings in BE Analysis: Challenges in bi-exponential fitting due to noise sensitivity are acknowledged but not effectively addressed.

The manuscript demonstrates potential but requires significant improvements in novelty, validation, and clarity for future consideration.

Comments on the Quality of English Language

The quality of the English language in the manuscript is generally acceptable, but improvements are needed for clarity and readability. Certain sections, particularly the methodology and results, are overly technical and include redundant information, which may confuse readers. Simplifying these parts and ensuring concise, clear explanations would enhance the manuscript's overall presentation. Additionally, attention to grammatical consistency and sentence structure would further improve the readability. Proofreading by a native English speaker or professional editing service is recommended.

Author Response

The authors sincerely thank the reviewers and editors for their valuable comments and suggestions. A detailed point-by-point response to each comment is provided in the attached document. The following responses specifically address the comments raised by you.

 

R4C1: Lack of Innovation: The proposed hybrid method lacks clear novelty compared to existing approaches combining deep learning and optimization techniques.

Response: We thank the referee for raising this concern about innovation. We would like to clarify the key novel aspects of our work, which, to the best of our knowledge, have not been reported in the literature, particularly for T1ρ mapping in the knee joint. The key contributions of the paper are as follows:

  a) Initially, a DL-based multi-component T1ρ mapping method is proposed. This approach utilizes synthetic data for training, thereby eliminating the need for reference MRI data.
  b) The HDNLS method is proposed for fast multi-component T1ρ mapping in the knee joint. This method integrates DL and NLS. It effectively addresses key limitations of NLS, including sensitivity to initial guesses, poor convergence speed, and high computational cost.

In this sense, we are not aware of any other published work that proposes similar innovations in the context considered here. We would appreciate it if the referee could point us to any such references.

 

R4C2: Limited Validation: Experiments are restricted to a small dataset of 12 healthy volunteers, with over-reliance on synthetic data, limiting clinical applicability.

Response: We appreciate your observation about the limited validation of our study, given the small dataset of 12 healthy volunteers and the reliance on synthetic data. However, this limitation is common to any relatively new application for which large datasets to train DL models do not yet exist. This study is intended as an initial proof-of-concept to demonstrate the feasibility and effectiveness of the HDNLS approach for multi-component T1ρ mapping.

Although synthetic data provided a controlled and diverse environment for training the DL model, we acknowledge its limitations in capturing the full complexity of real-world clinical scenarios. To address this, we have planned the following steps for future work:

(a) Future studies will include testing on larger and more diverse datasets, as data becomes available. This will allow us to evaluate HDNLS in clinically relevant scenarios and further validate its robustness.

(b) We will expand validation to include data acquired from different MRI vendors and protocols to ensure generalizability and compatibility in varied clinical settings.

(c) To reduce the dependence on synthetic data, we aim to extend HDNLS by utilizing self-supervised learning and transfer learning approaches.

(d) As part of future studies, we will evaluate the HDNLS framework in clinical workflows to assess its utility and impact on patient diagnosis and treatment planning.

Please refer to Sections 5.2 and 5.3.

 

R4C3: Inadequate Benchmarking: Comparisons with more advanced state-of-the-art methods are missing.

Response: Thank you for pointing out the importance of benchmarking against state-of-the-art methods. For multi-component T1ρ mapping, Non-Linear Least Squares (NLS) and Regularized NLS (RNLS) are widely recognized as the standard benchmarks in this domain because of their accuracy and reliability. Our comparative analysis therefore focuses primarily on these methods, as our objective is to develop a hybrid model that delivers results comparable to NLS while addressing its limitations, such as computational cost and sensitivity to initial guesses.

To illustrate the effectiveness of the HDNLS framework, we present detailed comparisons with NLS and RNLS in Figures 15 to 18 and Tables 7 to 9 of the manuscript. These comparisons demonstrate that HDNLS achieves similar accuracy while offering significant improvements in computational efficiency.
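For readers unfamiliar with the RNLS benchmark, one common formulation appends penalty terms to the residual vector of an ordinary NLS fit. The sketch below uses a Tikhonov-style penalty pulling the fit toward the initial guess, with a hypothetical weight `lam`; it is a generic illustration of the idea, not our exact RNLS formulation.

```python
import numpy as np
from scipy.optimize import least_squares

def rnls_residuals(p, tsl, signal, p_ref, lam=1e-3):
    """Bi-exponential data residuals augmented with a Tikhonov-style penalty
    toward a reference p_ref (here, the initial guess). lam and p_ref are
    illustrative choices."""
    a_s, t_s, a_l, t_l = p
    model = a_s * np.exp(-tsl / t_s) + a_l * np.exp(-tsl / t_l)
    # Squared penalty lam * ||p - p_ref||^2 expressed as extra residuals.
    return np.concatenate([model - signal, np.sqrt(lam) * (p - p_ref)])

tsl = np.array([2, 4, 8, 16, 32, 64], dtype=float)  # spin-lock times (ms)
signal = 0.3 * np.exp(-tsl / 10.0) + 0.7 * np.exp(-tsl / 60.0)
x0 = np.array([0.5, 5.0, 0.5, 40.0])
fit = least_squares(rnls_residuals, x0, args=(tsl, signal, x0),
                    bounds=([0, 1e-3, 0, 1e-3], np.inf)).x
```

The regularizer stabilizes ill-conditioned voxels at the cost of a small bias toward the reference, which is why RNLS is reported alongside plain NLS.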

 

R4C4: Unclear Trade-Offs: The performance-speed trade-offs among the proposed variants are insufficiently justified for practical use.

Response: Thanks for raising this issue. HDNLS addresses the dual priorities of processing efficiency and precision in quantitative MRI applications, and it demonstrates flexibility by adapting to various clinical and research requirements. Ultimately, the performance-speed trade-off is a user decision because it is application-dependent. We merely provide examples of how to balance speed and precision in the context of T1ρ mapping:

  • For applications where rapid results are essential, Ultrafast-HDNLS or Superfast-HDNLS (10 and 50 iterations, respectively) provide quick results with minimal computational demands. These configurations prioritize processing efficiency while maintaining reasonable precision.
  • For cases where precision is paramount, HDNLS with 200–500 iterations offers near-NLS performance. Despite the higher number of iterations, it remains substantially faster than traditional NLS methods. This ensures a balance between computational cost and precision.

This flexibility allows users to tailor HDNLS to their specific needs, making it a versatile, robust, and efficient tool for T1ρ mapping in quantitative MRI. Its adaptable configurations keep the framework technically sound while retaining clinical relevance across a wide range of applications. Please refer to Figures 11 to 14 and Tables 5 and 6.

R4C5: Shortcomings in BE Analysis: Challenges in bi-exponential fitting due to noise sensitivity are acknowledged but not effectively addressed.

Response: Thank you for highlighting the challenges of bi-exponential (BE) short-component fitting, particularly its sensitivity to noise. We acknowledge that accurately estimating the BE short component is inherently difficult: its rapid decay dynamics and low signal intensity make it especially susceptible to noise.

To address this issue, we propose several enhancements for future iterations of HDNLS:

  1. We plan to implement regularization techniques, such as spatial smoothness constraints and multi-voxel regularization, to stabilize parameter estimation across neighboring voxels.
  2. Advanced noise-aware DL architectures, such as those incorporating uncertainty quantification, will be integrated into the DL component of HDNLS. These models are designed to explicitly account for input data variability, enhancing robustness and reliability under noisy conditions.
  3. To assign appropriate emphasis to the short component, we will explore adaptive weighting schemes in the loss function during training for BE fitting. This approach can reduce the sensitivity of the short component to noise while preserving the overall performance of the model.
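As a rough sketch of such an adaptive weighting (with hypothetical weights `w_short` and `lam`, not the trained model's actual loss), the short-component parameters can simply be up-weighted in the parameter-error term alongside a data-consistency term:

```python
import numpy as np

def weighted_be_loss(pred, target, sig_pred, sig_true, w_short=4.0, lam=0.5):
    """Illustrative training loss: parameter error with extra weight on the
    short component, plus a data-consistency term on the reconstructed signal.
    w_short and lam are hypothetical tuning knobs."""
    # Parameter order assumed: [a_short, T1rho_short, a_long, T1rho_long].
    w = np.array([w_short, w_short, 1.0, 1.0])
    param_term = np.mean(w * (pred - target) ** 2)
    data_term = np.mean((sig_pred - sig_true) ** 2)
    return param_term + lam * data_term

# Example: the same-magnitude error costs more on the short component.
target = np.array([0.3, 10.0, 0.7, 60.0])
sig = np.zeros(8)  # placeholder signals; the data term vanishes here
err_short = weighted_be_loss(target + np.array([0.1, 0, 0, 0]), target, sig, sig)
err_long = weighted_be_loss(target + np.array([0, 0, 0.1, 0]), target, sig, sig)
```

The data-consistency term keeps the network honest with respect to the measured signal, while the weighted parameter term steers gradient updates toward the noise-sensitive short component.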

These strategies will be systematically evaluated and integrated into the HDNLS framework to enhance its effectiveness for BE short component fitting. We have included these planned enhancements in the revised manuscript as future directions to demonstrate our commitment to addressing this limitation and improving the robustness of HDNLS. Please refer to Sections 5.2 and 5.3.

 

R4C6: The quality of the English language in the manuscript is generally acceptable, but improvements are needed for clarity and readability. Certain sections, particularly the methodology and results, are overly technical and include redundant information, which may confuse readers. Simplifying these parts and ensuring concise, clear explanations would enhance the manuscript's overall presentation. Additionally, attention to grammatical consistency and sentence structure would further improve the readability. Proofreading by a native English speaker or professional editing service is recommended.

Response: We have revised the manuscript to simplify overly technical sections, removed redundant information, and improved grammatical consistency to enhance accessibility. Additionally, the manuscript has undergone professional proofreading to ensure polished language and improved presentation.

Author Response File: Author Response.pdf

Round 2

Reviewer 3 Report

Comments and Suggestions for Authors

Thank you for addressing my comments from the first round. I have reviewed the revised manuscript and the responses to my concerns. I find the revisions satisfactory, and the manuscript has been improved significantly. I have no additional comments and recommend the manuscript for acceptance and publication.
