Article

MyoNet: Deep Learning-Based Myocardial Strain Quantification from Cine Cardiac MRI

1 Department of Radiology, Northwestern University, Chicago, IL 60611, USA
2 Department of Radiology, Medical College of Wisconsin, Milwaukee, WI 53226, USA
3 CREATIS, 69621 Lyon, France
4 Department of Radiation Oncology, Washington University, St. Louis, MO 63110, USA
* Author to whom correspondence should be addressed.
Bioengineering 2026, 13(3), 310; https://doi.org/10.3390/bioengineering13030310
Submission received: 8 January 2026 / Revised: 2 March 2026 / Accepted: 4 March 2026 / Published: 7 March 2026

Abstract

To develop and assess MyoNet, a deep learning (DL)-based network for measuring myocardial regional function from cine cardiac magnetic resonance (CMR) images, and to compare its efficacy with ResMyoNet as an efficient alternative to the SinMod-derived reference. MyoNet was tested alongside ResMyoNet on datasets from Dahl salt-sensitive rat models undergoing radiation therapy (RT). Both networks were designed to extract displacement maps from cine images, were specifically optimized for detailed myocardial deformation, and employed advanced convolution operations with alternating kernel sizes for spatial and temporal analysis together with robust loss functions. MyoNet demonstrated superior performance in myocardial strain measurement, achieving high consistency with the SinMod-derived reference strains. It outperformed ResMyoNet, achieving higher performance metrics, including SSIM of 0.961 and 0.960, ICC of 0.973 and 0.975, and Pearson CC of 0.973 and 0.953 for circumferential (Ecc) and radial (Err) strains, respectively. Its accuracy and efficiency in generating strain measurements were validated through comprehensive statistical analyses. MyoNet offers a significant advancement in myocardial strain analysis from cine CMR images, potentially revolutionizing cardiac imaging in pre-clinical studies. Its ability to provide detailed and reliable measurements positions it as a valuable tool for clinical applications, particularly in monitoring the cardiac health of cancer patients.


1. Introduction

The significance of regional cardiac function analysis, especially using cardiac magnetic resonance (CMR) myocardial strain analysis, has gained increased attention. Myocardial strain allows for a more detailed understanding of myocardial contractility than conventional global function measures. Traditionally, the evaluation of cardiac function has mainly relied on measuring left ventricular ejection fraction (LVEF), a valuable yet limited tool that primarily provides global information about cardiac function without the ability to detect localized myocardial contractility abnormalities or dysfunctions [1]. Therefore, LVEF can mask underlying subclinical cardiac dysfunction, thus delaying the detection of heart diseases, as in thoracic radiation-induced cardiotoxicity [1,2]. Radiation therapy (RT), essential in cancer treatment, can inadvertently damage the heart, leading to long-term complications including ischemic heart disease, fibrosis, arrhythmias, cardiomyopathy, valvular abnormalities, and pericarditis, particularly in breast and lung cancer treatments [3,4,5,6,7]. These complications often emerge subclinically, making early detection crucial [6,8].
Animal models, such as Dahl salt-sensitive (SS) rats, provide valuable insights into cardiotoxic implications post-RT [9,10], helping understand disease progression and potential interventions. CMR has emerged as an integral tool for measuring regional cardiac function, offering non-invasive evaluation without ionizing radiation. CMR sequences, like tagging [11] and CMR feature tracking (CMR-FT) [12], have been developed for this purpose. However, CMR tagging is not widely adopted in clinical practice due to its longer acquisition times and the need for special post-processing techniques [13,14,15], while CMR-FT lacks intramyocardial markers, affecting accuracy in regional strain analysis [16,17]. These challenges are exacerbated in small animal models due to differences in image contrast and magnetic field strengths between pre-clinical and clinical MRI studies [18,19].
In this study, we propose the integration of deep learning (DL) with cine CMR to generate fast, accurate, and reproducible myocardial strain analysis without the need for tagging. Specifically, we use supervised learning to allow DL models to learn from tagged examples and predict strain from cine images. By leveraging the detailed displacement fields from the tagged images, our goal is to design, train, and compare two DL networks, MyoNet and ResMyoNet, to create an accurate, rapid algorithm for generating myocardial strain measurements from routinely acquired cine images. We hypothesize that these networks will allow for effective and efficient estimation of myocardial strain, therefore providing innovative solutions for early detection of subclinical cardiac function in cardiotoxicity.

2. Materials and Methods

2.1. Study Population and Data Preprocessing

This study (Figure 1) was approved by the institutional animal care committee of the Medical College of Wisconsin and used data from 22 Dahl SS rats undergoing RT, yielding 64 short-axis slices, each containing 20 cine and corresponding tagged timeframes of cardiac MRI images, as previously described [20,21]. The initial preprocessing involved a manual segmentation of the left ventricle (LV) in cine images to highlight the LV myocardium by identifying the endocardium and epicardium contours to generate binary images. All images were then cropped to uniform dimensions of 128 × 128.
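The cropping and masking steps above can be sketched as follows. These are illustrative helpers, not the authors' code, and they assume the endocardial and epicardial contours have already been filled into binary masks:

```python
import numpy as np

def center_crop(img: np.ndarray, size: int = 128) -> np.ndarray:
    """Center-crop a 2D image to size x size (the uniform-dimension step)."""
    h, w = img.shape[:2]
    top = (h - size) // 2
    left = (w - size) // 2
    return img[top:top + size, left:left + size]

def myocardium_mask(endo: np.ndarray, epi: np.ndarray) -> np.ndarray:
    """Binary LV myocardium mask: inside the epicardial contour but outside
    the endocardial one. `endo` and `epi` are assumed filled binary masks."""
    return (epi.astype(bool) & ~endo.astype(bool)).astype(np.uint8)
```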
For tagged images, the SinMod technique (InTag, Lyon, France) [14,22] was utilized to generate x- and y-displacement fields, which served as the reference standard (ground truth) for network training. These displacement fields were aligned with the segmented cine images and rescaled to an intensity range between 32 and 255 for clear differentiation of the image background (set to 0). This scaling was an invertible affine normalization used only to bound network outputs. Predicted values were transformed back to signed displacement in the original units using the inverse affine mapping, and all displacement error metrics and strain calculations were performed after this inverse transform. These displacement fields were also standardized to dimensions of 128 × 128 through cropping to match cine images.
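A minimal sketch of the invertible affine normalization described above; the displacement range `[d_min, d_max]` and the helper names are assumptions for illustration:

```python
import numpy as np

# Signed displacements d in [d_min, d_max] are mapped to intensities in
# [32, 255], with 0 reserved for background; the mapping is inverted before
# any error metric or strain is computed.
def to_intensity(d, d_min, d_max, lo=32.0, hi=255.0):
    return lo + (d - d_min) * (hi - lo) / (d_max - d_min)

def to_displacement(v, d_min, d_max, lo=32.0, hi=255.0):
    return d_min + (v - lo) * (d_max - d_min) / (hi - lo)
```

Because the mapping is affine and invertible, no displacement information is lost by the rescaling.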

2.2. MyoNet

MyoNet is a novel deep learning network tailored for myocardial strain analysis using spatiotemporal data in CMR. It utilizes the capabilities of DL for processing spatial (in-slice tissue motion) and temporal (tissue motion across consecutive timeframes) dimensions, aiming to improve the accuracy and efficiency of strain analysis.
MyoNet, similar to the conventional Unet [23,24] (Figure 2), includes five encoding and decoding layers, capturing both local features through the down-sampling path (encoder) and larger context through the up-sampling path (decoder).
In the encoding stage, MyoNet employs sequential 3D convolution operations followed by a max pooling operation with alternating kernel sizes (1 × 3 × 3 and 3 × 1 × 1) for distinct spatial and temporal feature capture. Dilation in deeper layers, combined with varied kernel sizes and alternating dilation rates (1 × 2 × 2 and 2 × 1 × 1), expands the receptive field, allowing global feature learning across a wider spatiotemporal extent without compromising resolution. The decoder, symmetrical to the encoder, uses up-sampling and convolutional blocks, enriched by skip connections to preserve high-frequency details. The network concludes with a 1 × 1 × 1 convolution and a sigmoid activation function, normalizing the output [25].
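One encoder stage with alternating spatial/temporal kernels and optional dilation might look like the following PyTorch sketch; channel counts, activation placement, and layer ordering are illustrative assumptions, not the published architecture:

```python
import torch
import torch.nn as nn

class SpatioTemporalBlock(nn.Module):
    """One encoder stage in the spirit of MyoNet: a spatial 1x3x3 convolution
    followed by a temporal 3x1x1 convolution; deeper stages add dilation
    (1x2x2 spatially, 2x1x1 temporally) to widen the receptive field."""
    def __init__(self, c_in, c_out, dilated=False):
        super().__init__()
        sd = (1, 2, 2) if dilated else (1, 1, 1)   # spatial dilation rates
        td = (2, 1, 1) if dilated else (1, 1, 1)   # temporal dilation rates
        self.spatial = nn.Conv3d(c_in, c_out, kernel_size=(1, 3, 3),
                                 padding=(0, sd[1], sd[2]), dilation=sd)
        self.temporal = nn.Conv3d(c_out, c_out, kernel_size=(3, 1, 1),
                                  padding=(td[0], 0, 0), dilation=td)
        self.act = nn.ReLU(inplace=True)

    def forward(self, x):  # x: (batch, channels, time, height, width)
        x = self.act(self.spatial(x))
        return self.act(self.temporal(x))
```

The padding is matched to the dilation so each stage preserves the spatiotemporal dimensions before max pooling.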
MyoNet is optimized for straightforward and efficient feature extraction, minimizing overfitting and enhancing interpretability. Its architecture focuses on context understanding within the data, crucial for myocardial strain analysis.

2.3. ResMyoNet

ResMyoNet exhibits significant similarities to MyoNet, especially when handling spatiotemporal data (Figure 3), and distinguishes itself with its modified ResUnet architecture [26]. It incorporates residual connections for efficient gradient flow during training. It employs a series of layers with custom residual blocks, each comprising a shortcut path and a main path with spatiotemporal convolutions, followed by batch normalization and ReLU activation. The tanh activation function is applied post-convolution, aiding in gradient propagation and training stability [27,28].
ResMyoNet’s residual connections not only facilitate faster convergence and mitigate the vanishing gradient problem but also contribute to its ability for deeper and more intricate feature extraction.
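A hedged PyTorch sketch of such a residual block, assuming one plausible layer ordering (the exact composition of the main path is not specified above):

```python
import torch
import torch.nn as nn

class ResidualBlock(nn.Module):
    """A custom residual block in the spirit of ResMyoNet: a main path of
    spatiotemporal convolutions with batch normalization and ReLU, a 1x1x1
    shortcut to match channels, and tanh applied after the addition."""
    def __init__(self, c_in, c_out):
        super().__init__()
        self.main = nn.Sequential(
            nn.Conv3d(c_in, c_out, kernel_size=(1, 3, 3), padding=(0, 1, 1)),
            nn.BatchNorm3d(c_out),
            nn.ReLU(inplace=True),
            nn.Conv3d(c_out, c_out, kernel_size=(3, 1, 1), padding=(1, 0, 0)),
            nn.BatchNorm3d(c_out),
        )
        self.shortcut = (nn.Conv3d(c_in, c_out, kernel_size=1)
                         if c_in != c_out else nn.Identity())

    def forward(self, x):
        # Shortcut addition keeps gradients flowing; tanh bounds the output.
        return torch.tanh(self.main(x) + self.shortcut(x))
```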

2.4. Experimental Setup

Our dataset included 64 short-axis slices from 22 scans, each comprising 20 cine images and corresponding SinMod-derived x- and y-displacement fields. The input data were normalized to ensure consistency for the network’s learning process. Additionally, data augmentation techniques were applied to enhance diversity and assist in model generalization. The dataset was divided at the scan level (to prevent data leakage between sets) into distinct training (80%), validation (10%), and testing (10%) sets. The training set underwent further data augmentation through random flip techniques to enhance the model’s robustness and adaptability. The validation set was crucial for hyperparameter tuning and performance evaluation, while the testing set served as the final model assessment on unseen data.
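The scan-level split can be sketched as follows; the helper is a standard-library illustration, not the authors' partitioning code:

```python
import random

def scan_level_split(scan_ids, train=0.8, val=0.1, seed=0):
    """Split scan identifiers (not individual slices) into train/val/test so
    that all slices from one scan land in the same set, preventing leakage."""
    ids = sorted(scan_ids)
    random.Random(seed).shuffle(ids)  # deterministic shuffle for a fixed seed
    n = len(ids)
    n_train, n_val = round(n * train), round(n * val)
    return ids[:n_train], ids[n_train:n_train + n_val], ids[n_train + n_val:]
```

Splitting by scan rather than by slice is what prevents two slices of the same heart from appearing in both training and testing sets.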
Model implementation was conducted using PyTorch version 2.0.1 and Python 3.11 on a high-performance computing system equipped with an NVIDIA Quadro GV100 GPU. Training utilized the RMSprop optimizer with an initial learning rate of 1 × 10⁻³. The batch size was set to 4. Both networks were trained for 100 epochs, with model selection based on the best validation loss.

2.5. Optimization and Loss Functions

During the training, a combination of loss functions was utilized to effectively guide the optimization process. First, the mean squared error (MSE) loss was used to penalize large errors and improve accuracy in estimating x- and y-displacement fields. The MSE is defined as follows:
$$L_{MSE} = \frac{1}{n} \sum_{i=1}^{n} \left[ \left( x_i - \hat{x}_i \right)^2 + \left( y_i - \hat{y}_i \right)^2 \right],$$
Here $x_i$ and $y_i$ signify the normalized pixel values representing the x- and y-displacement fields generated by tagging, while $\hat{x}_i$ and $\hat{y}_i$ are the predicted x- and y-displacement fields from our models at timeframe $i$, and $n$ is the number of frames. Simultaneously, the smooth L1 loss was introduced, functioning akin to the L2 loss for small differences and resembling the L1 loss for larger differences; it is formulated as follows:
$$L_{Smooth} = \begin{cases} 0.5\left[\left(x_{ij} - \hat{x}_{ij}\right)^2 + \left(y_{ij} - \hat{y}_{ij}\right)^2\right] & \text{if } \left|x_{ij} - \hat{x}_{ij}\right| < 1 \text{ and } \left|y_{ij} - \hat{y}_{ij}\right| < 1 \\ \left|x_{ij} - \hat{x}_{ij}\right| + \left|y_{ij} - \hat{y}_{ij}\right| - 0.5 & \text{otherwise,} \end{cases}$$
Here $j$ is the pixel location, while $i$ is the timeframe. This combination fosters a balance that enhances robustness to outliers while maintaining the capacity to underscore significant inconsistencies. Beyond these, a specialized loss was crafted exclusively for this application. This custom loss emphasizes errors within the segmented areas of the tagging-generated x- and y-displacement fields.
$$L_{custom} = \begin{cases} \dfrac{1}{m}\displaystyle\sum_{j=1}^{m} z_{ij} & \text{if } m > 0 \\ 0 & \text{otherwise,} \end{cases} \qquad z_{ij} = \begin{cases} \left(x_{ij} - \hat{x}_{ij}\right)^2 + \left(y_{ij} - \hat{y}_{ij}\right)^2 & \text{if } x_{ij} > 0.125 \text{ and } y_{ij} > 0.125 \\ 0 & \text{otherwise,} \end{cases}$$
where $m$ is the number of pixels within the segmented displacement region at timeframe $i$.
As mentioned, the x- and y-displacement fields were rescaled to pixel values ranging from 32 to 255. After normalization, this range is transformed to 0.125 to 1.
The total loss function was computed as a combination of the three components:
$$L_{Total} = L_{MSE} + L_{Smooth} + L_{custom}.$$
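Given the normalization described above (background maps to values below 0.125), the three loss terms can be sketched in PyTorch as follows; the exact reduction and masking details of the custom term are assumptions:

```python
import torch
import torch.nn.functional as F

def total_loss(pred, target, threshold=0.125):
    """Combined training loss: MSE + smooth L1 + a custom term restricted to
    the segmented displacement region. `pred` and `target` hold normalized
    x- and y-displacement fields of shape (batch, 2, time, height, width)."""
    l_mse = F.mse_loss(pred, target)
    l_smooth = F.smooth_l1_loss(pred, target)
    # Custom term: squared error only where both components exceed the
    # background threshold (i.e., inside the segmented myocardium).
    mask = (target > threshold).all(dim=1, keepdim=True).expand_as(target)
    if mask.any():
        l_custom = ((pred - target)[mask] ** 2).mean()
    else:
        l_custom = pred.new_zeros(())
    return l_mse + l_smooth + l_custom
```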

2.6. Statistical Analysis

The model performance of MyoNet and ResMyoNet was evaluated using several metrics. For displacement field assessment, we computed the structural similarity index (SSIM) [29] for image quality, and the root mean squared error (RMSE) and mean endpoint error (EPE) for vector displacement accuracy. Additionally, strains derived from MyoNet and ResMyoNet were compared with SinMod-derived reference values using the intraclass correlation coefficient (ICC) [30], the Pearson correlation coefficient [31], and the coefficient of variation (CV). To evaluate the agreement between strains acquired from MyoNet, ResMyoNet, and SinMod, a Bland–Altman plot analysis was also performed [32]. Training accuracy was defined as 1 minus the normalized mean squared error between predicted and target displacement fields.
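The displacement-field error metrics (RMSE per component and mean endpoint error) can be computed as in this NumPy sketch:

```python
import numpy as np

def displacement_errors(pred, ref):
    """RMSE per component and mean endpoint error (EPE) between predicted and
    reference displacement fields, each of shape (2, H, W) for (x, y)."""
    rmse_x = np.sqrt(np.mean((pred[0] - ref[0]) ** 2))
    rmse_y = np.sqrt(np.mean((pred[1] - ref[1]) ** 2))
    # EPE: mean length of the per-pixel error vector.
    epe = np.mean(np.sqrt(np.sum((pred - ref) ** 2, axis=0)))
    return rmse_x, rmse_y, epe
```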

2.7. Strain Computation

Circumferential (Ecc) and radial (Err) strains were computed from the predicted x- and y-displacement fields using a Lagrangian formulation. For each timeframe, the 2D displacement vector $\mathbf{u}(x, y) = \left( u_x(x, y),\, u_y(x, y) \right)$ was converted to a deformation gradient $F = I + \nabla \mathbf{u}$, where spatial gradients were calculated on the grid using central finite differences within the myocardium mask. The Green-Lagrange strain tensor was computed as $E = \frac{1}{2}\left( F^{T} F - I \right)$ [33]. For segment analysis, pixel-wise strains were averaged within each of the six AHA short-axis segments [34].
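A NumPy sketch of the pixel-wise Green-Lagrange computation; the projection of $E$ onto circumferential and radial directions to obtain Ecc and Err, and the restriction to the myocardium mask, are omitted for brevity:

```python
import numpy as np

def green_lagrange(ux, uy):
    """Pixel-wise Green-Lagrange strain tensor E = 0.5 * (F^T F - I) with
    F = I + grad(u), gradients taken by central finite differences."""
    duy_dy, duy_dx = np.gradient(uy)  # np.gradient returns (d/drow, d/dcol)
    dux_dy, dux_dx = np.gradient(ux)
    H, W = ux.shape
    E = np.empty((H, W, 2, 2))
    for i in range(H):
        for j in range(W):
            F = np.eye(2) + np.array([[dux_dx[i, j], dux_dy[i, j]],
                                      [duy_dx[i, j], duy_dy[i, j]]])
            E[i, j] = 0.5 * (F.T @ F - np.eye(2))
    return E
```

As a sanity check, a uniform 10% stretch along x gives $E_{xx} = 0.5(1.1^2 - 1) = 0.105$ everywhere.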

3. Results

3.1. Strain Analysis by MyoNet and ResMyoNet

Figure 4 displays the x- and y-displacement fields obtained from SinMod, MyoNet, and ResMyoNet. The accompanying error maps provide insight into the differences between these displacement fields, underscoring the potential implications for the fidelity and accuracy of each approach in evaluating myocardial displacements. Following this, bar plots in Figure 5 were generated to depict the circumferential (Ecc) and radial (Err) strains obtained through three distinct methods: SinMod-derived tagging analysis (reference standard), MyoNet, and ResMyoNet. These measurements were specifically performed for each of the six segments per slice as defined by the American Heart Association (AHA) segmentation model [34]. The bar plots for the three methods show remarkably analogous patterns. In addition, a global analysis was conducted to provide a comprehensive assessment of the strain metrics across all segments. Both Ecc and Err for each of the six segments per slice underwent statistical analysis using Student’s t-test with Bonferroni correction for multiple comparisons. No segments showed a statistically significant difference after correction (p < 0.05). Without correction, one segment (anterior) in Err from MyoNet and one segment (inferoseptal) in Err from ResMyoNet were statistically different when compared to SinMod-derived values (Table 1).
Bland–Altman plots were constructed (Figure 6) to compare the strain measures between SinMod vs. MyoNet and SinMod vs. ResMyoNet. Bland–Altman analysis demonstrated minimal systematic bias between methods. For Ecc, MyoNet showed a mean bias of −1.9%, while ResMyoNet showed a mean bias of −2.0%. For Err, MyoNet demonstrated a mean bias of 1.0%, while ResMyoNet showed a mean bias of 2.0%. These small biases suggest clinically acceptable agreement between deep learning methods and the SinMod reference. This consistency is especially noteworthy as the data points encompass all segments from the test dataset.
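The Bland–Altman quantities (mean bias and 95% limits of agreement) follow directly from the paired differences, as in this NumPy sketch:

```python
import numpy as np

def bland_altman(a, b):
    """Mean bias and 95% limits of agreement between two paired strain
    measurements (e.g., SinMod vs. MyoNet per segment)."""
    diff = np.asarray(a, dtype=float) - np.asarray(b, dtype=float)
    bias = diff.mean()
    sd = diff.std(ddof=1)  # sample standard deviation of the differences
    return bias, bias - 1.96 * sd, bias + 1.96 * sd
```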

3.2. Model Performance and Computational Efficiency

The performance consistency of both MyoNet and ResMyoNet networks was further assessed through an analysis of accuracy and loss over the span of the training epochs. As demonstrated in Figure 7, while both models underwent training for 100 epochs, ResMyoNet notably reached high and stable accuracy levels along with low and consistent loss values within just a few initial epochs. This rapid stabilization contrasts with MyoNet, which required more epochs to achieve similar performance metrics.
A comparative analysis was conducted to discern the training efficiency of MyoNet and ResMyoNet by evaluating their computation times across 10 and 100 epochs. MyoNet required a computation time of 158 s for 10 epochs, escalating to 1546 s for 100 epochs. In contrast, ResMyoNet required only 124 s for 10 epochs and 1177 s for 100 epochs. For all measurements, ResMyoNet consistently exhibited shorter computation durations than MyoNet, emphasizing its relative efficiency in the training phase.

3.3. Performance Metrics of MyoNet and ResMyoNet

To assess and compare the performance of the MyoNet and ResMyoNet networks for generating Ecc and Err, we utilized multiple metrics (Table 2). For displacement field accuracy, MyoNet achieved RMSE of 1.06 mm in the x- and 0.94 mm in the y-displacement fields, while ResMyoNet achieved RMSE of 1.24 mm in the x- and 1.00 mm in the y-displacement fields. Examining the SSIM values, MyoNet achieved higher values (0.961, 0.960) compared to ResMyoNet (0.937, 0.934). As for the ICC, MyoNet displayed slightly higher values than ResMyoNet. For the Pearson CC analysis, both networks showed strong correlations. Lastly, concerning the CV metrics, MyoNet’s values were higher than ResMyoNet’s.

4. Discussion

We demonstrate that the developed MyoNet and ResMyoNet each show unique characteristics and differ in their performance metrics, underscoring the distinct capabilities and strengths of each network.
In contrast to existing methodologies [35], which focused on mice with a small cohort size (n = 8), the proposed algorithm is validated on a larger and more diverse dataset, which enhances the generalizability of the results. Moreover, while the approach in [35] requires manual localization of the LV blood pool center point, the proposed algorithm automates this step, thereby reducing potential sources of human error and increasing efficiency. The Siamese architecture’s bidirectional motion learning [36] offers a dual pathway for processing input data, allowing the simultaneous analysis of different frames. This approach ensures that the temporal relationships between consecutive frames are well established, fostering a more precise capture of myocardial motion dynamics. However, bidirectional motion learning may overlook subtle myocardial motion patterns that do not manifest across both pathways, which might make the model more sensitive to irregularities or noise in input data sequences.
The use of the 3D convolutional approach (spatiotemporal) over the 2D (spatial) with subsequent 1D (temporal) strategy (2D + 1D) [37] is multifaceted. The 3D convolution inherently integrates spatial and temporal information in a unified manner, capturing the intricate relationships across three dimensions. In contrast, the 2D + 1D approach would first process spatial data and subsequently add the temporal dimension, potentially missing interactions between spatial and temporal features that are crucial for myocardial strain analysis. Importantly, the proposed networks alternate kernel sizes that focus on the spatial and temporal dimensions in turn. MyoNet and ResMyoNet, with their unique designs, emphasize the proficient extraction of both spatial and temporal features, as illustrated by these kernel size alternations. Traditional methodologies (with non-alternating kernel sizes) often grapple with the complexity of parsing data that spans both spatial and temporal dimensions. To address this, the proposed networks leverage deep learning’s inherent strengths in processing spatial and temporal information synergistically, aiming to augment the accuracy and efficiency of strain analysis in cardiac imaging. Moreover, the convolutional operations employ dilation, expanding the receptive field without compromising spatial resolution. This capability to capture extensive spatiotemporal patterns possibly elucidates the superior accuracy metrics associated with MyoNet and ResMyoNet.
ResMyoNet integrates the potency of residual connections, known for circumventing the vanishing gradient problem, ensuring efficient backpropagation even in deeper architectures [28]. This feature is likely pivotal in facilitating a rapid training convergence for ResMyoNet, a probable explanation for its reduced training computational time. Yet, it is paramount to acknowledge inherent constraints. While MyoNet’s focus on detail improves accuracy, it concurrently demands more computational resources. ResMyoNet, although optimized for efficiency, might present a subtle compromise on intricate detail extraction in certain cases. These tradeoffs underscore the balance between accuracy and efficiency in deep learning models. If optimized accuracy and attention to detail are the priority, then MyoNet is the preferred choice for the user. However, when rapid training and computational efficiency are crucial, ResMyoNet is the more appropriate choice.
The selection of loss functions is crucial to balance accuracy with robustness. The mean squared error (MSE) ensures a foundational penalty for discrepancies in predicted x- and y-displacement fields. However, to address potential outliers such as anomalous displacement fields, inaccurate predictions, and data acquisition inconsistencies, the smooth L1 loss was employed. This loss transitions between penalizing small and large discrepancies, enhancing model robustness. Furthermore, the introduction of a custom loss function focuses on regions of pronounced cardiac strain. By emphasizing errors in segmented displacement areas and setting thresholds for significant displacements, this custom loss function ensures that the networks prioritize myocardial contractility patterns, merging precision with specificity.
Our study has some limitations. First, although the variations in imaging parameters between the cine and tagging sequences are minimal, and both sequences capture the same spatial (slice) and temporal (timeframe) dimensions, these residual differences can potentially manifest as systematic biases in the strain evaluations determined by our developed models, thereby influencing the direct comparability of results derived from both sequences. Notably, the results illustrated that, in the context of Err strains, MyoNet’s interpretation of the anterior segments and ResMyoNet’s interpretation of the septal segments showed statistical differences when compared to tagging. This divergence might indicate that certain myocardial segments are more susceptible to variations in the imaging sequences for different networks, or that there are differences between these specific segments that the networks process differently. Addressing such specific inconsistencies is important for the practical implementation of deep learning techniques in myocardial strain analysis. Second, our dataset was limited to 64 cases, each composed of 20 timeframes. While this dataset has been instrumental for drawing preliminary conclusions, the robustness and generalizability of the findings would undoubtedly benefit from a larger dataset. Third, the proposed algorithms focus on analyzing short-axis images, providing circumferential and radial strains; future work includes generalizing the algorithms to include analysis of long-axis slices. The demonstrated efficacy and insights provided by both MyoNet and ResMyoNet signify a promising step forward for pre-clinical cardiac imaging and strain analysis.
Their capabilities, particularly in accurately interpreting spatiotemporal data, could offer researchers a more comprehensive and in-depth understanding of myocardial contractility patterns, potentially improving personalized diagnosis and treatment strategies. MyoNet provides superior quantitative metrics, while ResMyoNet offers a more efficient network. As these networks undergo further refinement and are validated on larger, more diverse datasets, translation to clinical practices becomes increasingly promising, marking a significant advancement in artificial intelligence (AI)-augmented cardiac care.

5. Conclusions

Our study demonstrates the significant advancements MyoNet brings to myocardial strain analysis in cine CMR, offering a notable improvement over the traditional tagging methods. Its innovative DL architecture enables precise and efficient analysis of myocardial strain directly from cine images, thereby reducing the need for tagged acquisitions and shortening acquisition times. This progress not only makes CMR more cost-effective in research and pre-clinical settings but also minimizes patient discomfort and scan times, expanding the utility of CMR in research applications. While ResMyoNet also contributes to the field in terms of training efficiency, MyoNet stands out for its comprehensive analysis capabilities. As MyoNet and ResMyoNet continue to evolve, validating them with comprehensive patient datasets will be crucial for their future clinical translation.

Author Contributions

Conceptualization, D.A. and E.-S.I.; methodology, D.A.; software, D.A.; validation, D.A.; formal analysis, D.A.; investigation, D.A.; resources, D.A., P.C. (Patrick Clarysse), P.C. (Pierre Croisille), C.B., and E.-S.I.; data curation, D.A., C.B., and E.-S.I.; writing—original draft preparation, D.A.; writing—review and editing, D.A., A.N., and E.-S.I.; visualization, D.A.; supervision, E.-S.I.; project administration, C.B. and E.-S.I. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Institutional Review Board Statement

The animal study protocol was approved by the Institutional Review Board of the Medical College of Wisconsin (Protocol code: AUA00004200 and date of approval: 18 August 2021). All procedures were performed in accordance with the ethical standards of the institution and with the Helsinki (1964) declaration and its later amendments.

Informed Consent Statement

Not applicable.

Data Availability Statement

Restrictions apply to the datasets. The datasets used in this study are available from the corresponding author upon reasonable request, subject to institutional approval. Code can be shared with editors and reviewers upon reasonable request.

Acknowledgments

The authors would like to acknowledge the Daniel M. Soref Charitable Trust at the Medical College of Wisconsin, Milwaukee, USA.

Conflicts of Interest

The authors declare no conflicts of interest.

Abbreviations

The following abbreviations are used in this manuscript:
DL: Deep learning
CMR: Cardiac magnetic resonance
RT: Radiation therapy
Ecc: Circumferential strain
Err: Radial strain
LVEF: Left ventricular ejection fraction
SS: Salt-sensitive
CMR-FT: CMR feature tracking
LV: Left ventricle
RMSE: Root mean squared error
MSE: Mean squared error
SSIM: Structural similarity index
ICC: Intraclass correlation coefficient
CV: Coefficient of variation
AHA: American Heart Association
SD: Standard deviation

References

  1. Kongbundansuk, S.; Hundley, W.G. Noninvasive imaging of cardiovascular injury related to the treatment of cancer. JACC Cardiovasc. Imaging 2014, 7, 824–838. [Google Scholar] [CrossRef]
  2. Anthony, F.Y.; Ky, B. Roadmap for biomarkers of cancer therapy cardiotoxicity. Heart 2016, 102, 425–430. [Google Scholar]
  3. Henson, K.; McGale, P.; Taylor, C.; Darby, S. Radiation-related mortality from heart disease and lung cancer more than 20 years after radiotherapy for breast cancer. Br. J. Cancer 2013, 108, 179–182. [Google Scholar] [CrossRef]
  4. Carlson, L.E.; Watt, G.P.; Tonorezos, E.S.; Chow, E.J.; Yu, A.F.; Woods, M.; Lynch, C.F.; John, E.M.; Mellemkjӕr, L.; Brooks, J.D. Coronary artery disease in young women after radiation therapy for breast cancer: The WECARE study. Cardio Oncol. 2021, 3, 381–392. [Google Scholar] [CrossRef]
  5. Darby, S.C.; Ewertz, M.; McGale, P.; Bennet, A.M.; Blom-Goldman, U.; Brønnum, D.; Correa, C.; Cutter, D.; Gagliardi, G.; Gigante, B. Risk of ischemic heart disease in women after radiotherapy for breast cancer. N. Engl. J. Med. 2013, 368, 987–998. [Google Scholar] [CrossRef]
  6. Erven, K.; Florian, A.; Slagmolen, P.; Sweldens, C.; Jurcut, R.; Wildiers, H.; Voigt, J.-U.; Weltens, C. Subclinical cardiotoxicity detected by strain rate imaging up to 14 months after breast radiation therapy. Int. J. Radiat. Oncol. Biol. Phys. 2013, 85, 1172–1178. [Google Scholar] [CrossRef] [PubMed]
  7. Ibrahim, E.-S.H.; Baruah, D.; Croisille, P.; Stojanovska, J.; Rubenstein, J.C.; Frei, A.; Schlaak, R.A.; Lin, C.-Y.; Pipke, J.L.; Lemke, A. Cardiac magnetic resonance for early detection of radiation therapy-induced cardiotoxicity in a small animal model. Cardio Oncol. 2021, 3, 113–130. [Google Scholar] [CrossRef]
  8. An, D.; Ibrahim, E.-S. Elucidating Early Radiation-Induced Cardiotoxicity Markers in Preclinical Genetic Models Through Advanced Machine Learning and Cardiac MRI. J. Imaging 2024, 10, 308. [Google Scholar] [CrossRef]
  9. Lenarczyk, M.; Kronenberg, A.; Mäder, M.; North, P.E.; Komorowski, R.; Cheng, Q.; Little, M.P.; Chiang, I.-H.; LaTessa, C.; Jardine, J. Age at exposure to radiation determines severity of renal and cardiac disease in rats. Radiat. Res. 2019, 192, 63–74. [Google Scholar] [CrossRef] [PubMed]
  10. Rapp, J. Dahl salt-susceptible and salt-resistant rats. A review. Hypertension 1982, 4, 753–763. [Google Scholar] [CrossRef]
  11. Axel, L.; Dougherty, L. MR imaging of motion with spatial modulation of magnetization. Radiology 1989, 171, 841–845. [Google Scholar] [CrossRef]
  12. Hor, K.N.; Baumann, R.; Pedrizzetti, G.; Tonti, G.; Gottliebson, W.M.; Taylor, M.; Benson, D.W.; Mazur, W. Magnetic resonance derived myocardial strain assessment using feature tracking. J. Vis. Exp. JoVE 2011, 48, 2356. [Google Scholar]
  13. Clarysse, P.; Croisille, P. Cardiac motion analysis in tagged MRI. In Multi-Modality Cardiac Imaging: Processing and Analysis; Wiley: Hoboken, NJ, USA, 2015; pp. 247–255. [Google Scholar]
  14. Ibrahim, E.-S.H.; Stojanovska, J.; Hassanein, A.; Duvernoy, C.; Croisille, P.; Pop-Busui, R.; Swanson, S.D. Regional cardiac function analysis from tagged MRI images. Comparison of techniques: Harmonic-Phase (HARP) versus Sinusoidal-Modeling (SinMod) analysis. Magn. Reson. Imaging 2018, 54, 271–282. [Google Scholar] [CrossRef]
  15. Attili, A.K.; Schuster, A.; Nagel, E.; Reiber, J.H.; Van der Geest, R.J. Quantification in cardiac MRI: Advances in image acquisition and processing. Int. J. Cardiovasc. Imaging 2010, 26, 27–40. [Google Scholar] [CrossRef]
  16. Morton, G.; Schuster, A.; Jogiya, R.; Kutty, S.; Beerbaum, P.; Nagel, E. Inter-study reproducibility of cardiovascular magnetic resonance myocardial feature tracking. J. Cardiovasc. Magn. Reson. 2012, 14, 34. [Google Scholar] [CrossRef]
  17. Schuster, A.; Morton, G.; Hussain, S.T.; Jogiya, R.; Kutty, S.; Asrress, K.N.; Makowski, M.R.; Bigalke, B.; Perera, D.; Beerbaum, P. The intra-observer reproducibility of cardiovascular magnetic resonance myocardial feature tracking strain assessment is independent of field strength. Eur. J. Radiol. 2013, 82, 296–301. [Google Scholar] [CrossRef]
  18. Kraff, O.; Quick, H.H. 7T: Physics, safety, and potential clinical applications. J. Magn. Reson. Imaging 2017, 46, 1573–1589. [Google Scholar] [CrossRef] [PubMed]
  19. Soher, B.J.; Dale, B.M.; Merkle, E.M. A review of MR physics: 3T versus 1.5T. Magn. Reson. Imaging Clin. N. Am. 2007, 15, 277–290. [Google Scholar] [CrossRef] [PubMed]
  20. Schlaak, R.A.; SenthilKumar, G.; Boerma, M.; Bergom, C. Advances in preclinical research models of radiation-induced cardiac toxicity. Cancers 2020, 12, 415. [Google Scholar] [CrossRef]
  21. Ibrahim, E.-S.H.; Baruah, D.; Budde, M.; Rubenstein, J.; Frei, A.; Schlaak, R.; Gore, E.; Bergom, C. Optimized cardiac functional MRI of small-animal models of cancer radiation therapy. Magn. Reson. Imaging 2020, 73, 130–137. [Google Scholar] [CrossRef] [PubMed]
  22. Arts, T.; Prinzen, F.W.; Delhaas, T.; Milles, J.R.; Rossi, A.C.; Clarysse, P. Mapping displacement and deformation of the heart with local sine-wave modeling. IEEE Trans. Med. Imaging 2010, 29, 1114–1123. [Google Scholar] [CrossRef] [PubMed]
  23. Çiçek, Ö.; Abdulkadir, A.; Lienkamp, S.S.; Brox, T.; Ronneberger, O. 3D U-Net: Learning dense volumetric segmentation from sparse annotation. In Medical Image Computing and Computer-Assisted Intervention—MICCAI 2016; Springer: Cham, Switzerland, 2016. [Google Scholar]
  24. Ronneberger, O.; Fischer, P.; Brox, T. U-net: Convolutional networks for biomedical image segmentation. In International Conference on Medical Image Computing and Computer-Assisted Intervention; Springer: Cham, Switzerland, 2015. [Google Scholar]
  25. Han, J.; Moraga, C. The influence of the sigmoid function parameters on the speed of backpropagation learning. In From Natural to Artificial Neural Computation—International Workshop on Artificial Neural Networks; Springer: Berlin/Heidelberg, Germany, 1995. [Google Scholar]
  26. Diakogiannis, F.I.; Waldner, F.; Caccetta, P.; Wu, C. ResUNet-a: A deep learning framework for semantic segmentation of remotely sensed data. ISPRS J. Photogramm. Remote Sens. 2020, 162, 94–114. [Google Scholar] [CrossRef]
  27. Nwankpa, C.; Ijomah, W.; Gachagan, A.; Marshall, S. Activation functions: Comparison of trends in practice and research for deep learning. arXiv 2018, arXiv:1811.03378. [Google Scholar] [CrossRef]
  28. Marquez, E.S.; Hare, J.S.; Niranjan, M. Deep cascade learning. IEEE Trans. Neural Netw. Learn. Syst. 2018, 29, 5475–5485. [Google Scholar] [CrossRef]
  29. Wang, Z.; Bovik, A.C.; Sheikh, H.R.; Simoncelli, E.P. Image quality assessment: From error visibility to structural similarity. IEEE Trans. Image Process. 2004, 13, 600–612. [Google Scholar] [CrossRef]
  30. Koch, G.G. Intraclass correlation coefficient. In Encyclopedia of Statistical Sciences; John Wiley & Sons: Hoboken, NJ, USA, 2004. [Google Scholar]
  31. Pearson, K. VII. Note on regression and inheritance in the case of two parents. Proc. R. Soc. Lond. 1895, 58, 240–242. [Google Scholar] [CrossRef]
  32. Bland, J.M.; Altman, D. Statistical methods for assessing agreement between two methods of clinical measurement. Lancet 1986, 327, 307–310. [Google Scholar] [CrossRef]
  33. Ibrahim, E.-S.H. Heart Mechanics: Magnetic Resonance Imaging—Mathematical Modeling, Pulse Sequences, and Image Analysis; CRC Press: Boca Raton, FL, USA, 2017. [Google Scholar]
  34. Cerqueira, M.D.; Weissman, N.J.; Dilsizian, V.; Jacobs, A.K.; Kaul, S.; Laskey, W.K.; Pennell, D.J.; Rumberger, J.A.; Ryan, T.; Verani, M.S.; et al. Standardized myocardial segmentation and nomenclature for tomographic imaging of the heart: A statement for healthcare professionals from the Cardiac Imaging Committee of the Council on Clinical Cardiology of the American Heart Association. Circulation 2002, 105, 539–542. [Google Scholar]
  35. Hammouda, K.; Khalifa, F.; Abdeltawab, H.; Elnakib, A.; Giridharan, G.; Zhu, M.; Ng, C.; Dassanayaka, S.; Kong, M.; Darwish, H. A new framework for performing cardiac strain analysis from cine MRI imaging in mice. Sci. Rep. 2020, 10, 7725. [Google Scholar] [CrossRef] [PubMed]
  36. Graves, C.V.; Rebelo, M.F.; Moreno, R.A.; Dantas, R.N., Jr.; Assunção, A.N., Jr.; Nomura, C.H.; Gutierrez, M.A. Siamese pyramidal deep learning network for strain estimation in 3D cardiac cine-MR. Comput. Med. Imaging Graph. 2023, 108, 102283. [Google Scholar] [CrossRef]
  37. Sandino, C.M.; Lai, P.; Vasanawala, S.S.; Cheng, J.Y. Accelerating cardiac cine MRI using a deep learning-based ESPIRiT reconstruction. Magn. Reson. Med. 2021, 85, 152–167. [Google Scholar] [CrossRef] [PubMed]
Figure 1. Workflow of the image processing and deep learning framework. Cine and tagged MRI images are shown at the top. The cine images are segmented into binary masks used as network input. Concurrently, the tagged images undergo SinMod analysis to generate x- and y-displacement fields, which serve as the target datasets for the MyoNet and ResMyoNet networks.
Figure 2. Schematic representation of MyoNet. The network is structured similarly to U-Net, consisting of five encoding and five decoding layers. The input is the segmented binary cine spatiotemporal images, and the output is the segmented x- and y-displacement fields.
Figure 3. Schematic representation of ResMyoNet. The network is structured similarly to ResUNet, consisting of five encoding and five decoding layers. The input and output specifications align with those of MyoNet; the primary distinction is the block composition, which integrates residual connections.
Figure 4. X- and y-displacement field maps derived from the SinMod reference, MyoNet, and ResMyoNet. Error maps highlight the discrepancies between each network's displacement fields and the SinMod reference, offering a detailed view of the spatial distribution of errors and emphasizing regions of higher and lower deviation between the methods.
Figure 5. Bar plots of circumferential (Ecc) and radial (Err) strains from the three methods (SinMod, MyoNet, and ResMyoNet) across the six AHA-defined segments, with global strain values added.
Figure 6. Bland–Altman plots comparing strain measures for (A) Ecc: SinMod vs. MyoNet, (B) Ecc: SinMod vs. ResMyoNet, (C) Err: SinMod vs. MyoNet, and (D) Err: SinMod vs. ResMyoNet. Across both Ecc and Err, the majority of data points fall within the 2 standard deviation limits, indicating a high degree of agreement between the methods. Each data point represents one segment, for a total of 42 segments (7 test datasets × 6 segments) included in these plots.
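The Bland–Altman construction shown in Figure 6 computes, for each pair of segmental measurements, the mean difference (bias) and agreement limits at bias ± 2 SD of the differences [32]. A minimal sketch of that computation, using hypothetical strain values rather than the study data:

```python
import numpy as np

def bland_altman(a, b, k=2.0):
    """Bias and limits of agreement (bias ± k·SD) for paired measurements."""
    a, b = np.asarray(a, dtype=float), np.asarray(b, dtype=float)
    diff = a - b                 # per-segment difference between methods
    bias = diff.mean()           # systematic offset between methods
    sd = diff.std(ddof=1)        # sample SD of the differences
    return bias, bias - k * sd, bias + k * sd

# Hypothetical Ecc values (%) for four segments -- not the study data
sinmod = [-18.2, -16.5, -17.9, -15.8]
myonet = [-17.9, -16.8, -17.5, -16.1]
bias, lower, upper = bland_altman(sinmod, myonet)
```

Points falling between `lower` and `upper` correspond to the within-limits markers in the plots; the figure uses 2 SD limits, while 1.96 SD is the other common convention.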
Figure 7. Evaluation of MyoNet and ResMyoNet performance over training epochs. (A) Accuracy of MyoNet, (B) Accuracy of ResMyoNet, (C) Loss of MyoNet, and (D) Loss of ResMyoNet. ResMyoNet quickly achieves high accuracy and maintains low loss values, whereas MyoNet takes more epochs to reach comparable levels.
Table 1. t-statistic and p-value results from Student’s t-test.
           MyoNet                        ResMyoNet
           Ecc          Err              Ecc          Err
AHA 1      t = 1.45     t = 3.16         t = 1.80     t = 1.42
           p = 0.20     p = 0.02 *       p = 0.12     p = 0.20
AHA 2      t = 1.65     t = 0.37         t = 0.96     t = 1.55
           p = 0.15     p = 0.72         p = 0.38     p = 0.17
AHA 3      t = 1.89     t = 0.30         t = 1.34     t = 2.88
           p = 0.11     p = 0.77         p = 0.46     p = 0.03 *
AHA 4      t = 1.14     t = 1.28         t = 0.79     t = 1.05
           p = 0.30     p = 0.25         p = 0.80     p = 0.49
AHA 5      t = 0.46     t = 1.58         t = 0.26     t = 0.74
           p = 0.66     p = 0.17         p = 0.80     p = 0.49
AHA 6      t = 0.75     t = 1.74         t = 0.95     t = −1.53
           p = 0.48     p = 0.13         p = 0.38     p = 0.18
*: significant difference. AHA, American Heart Association.
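The t and p values in Table 1 come from a Student's t-test comparing segmental strains between methods. A minimal sketch of one such comparison, here assuming a paired test across the seven test datasets and using hypothetical strain values (the exact test variant and the data are those of the paper, not this snippet):

```python
import numpy as np
from scipy import stats

# Hypothetical Ecc values (%) for one AHA segment across 7 test datasets
sinmod = np.array([-18.1, -16.4, -17.8, -15.9, -17.2, -16.8, -18.0])
myonet = np.array([-17.6, -16.9, -17.1, -16.3, -16.8, -17.0, -17.5])

# Paired Student's t-test on the segment-wise strain values
t_stat, p_val = stats.ttest_rel(sinmod, myonet)
# A p_val above 0.05 would be reported as no significant difference
# (no asterisk in Table 1)
```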
Table 2. Comparative performance metrics of MyoNet and ResMyoNet in generating Ecc and Err.
             MyoNet                ResMyoNet
             Ecc        Err        Ecc        Err
SSIM         0.961      0.960      0.937      0.934
ICC          0.973      0.975      0.955      0.955
Pearson CC   0.973      0.975      0.956      0.955
CV           32.447     34.445     21.749     22.116
SSIM, structural similarity index; ICC, intraclass correlation coefficient; Pearson CC, Pearson correlation coefficient; CV, coefficient of variation.
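Two of the Table 2 metrics can be sketched directly: the Pearson CC via scipy, and the CV under one common definition (SD of the paired differences relative to the pooled mean magnitude, in percent); the exact CV formula used in the paper may differ. The values below are hypothetical, not the study data:

```python
import numpy as np
from scipy import stats

# Hypothetical segmental strains (%) from the reference and a network
ref = np.array([-18.2, -16.5, -17.9, -15.8, -17.1, -16.9])
net = np.array([-17.9, -16.8, -17.5, -16.1, -16.7, -17.2])

# Pearson correlation coefficient between the two methods
r, _ = stats.pearsonr(ref, net)

# Coefficient of variation, assuming SD of differences / pooled mean magnitude
cv = 100.0 * np.std(ref - net, ddof=1) / np.mean(np.abs(np.r_[ref, net]))
```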

Share and Cite

An, D.; Nencka, A.; Clarysse, P.; Croisille, P.; Bergom, C.; Ibrahim, E.-S. MyoNet: Deep Learning-Based Myocardial Strain Quantification from Cine Cardiac MRI. Bioengineering 2026, 13, 310. https://doi.org/10.3390/bioengineering13030310
