Next Article in Journal
Machine Learning-Based Wildfire Susceptibility Mapping: A GIS-Integrated Predictive Framework
Previous Article in Journal
Infrared–Visible Image Fusion via Cross-Modal Guided Dual-Branch Networks
Previous Article in Special Issue
Advancing YOLOv8-Based Wafer Notch-Angle Detection Using Oriented Bounding Boxes, Hyperparameter Tuning, Architecture Refinement, and Transfer Learning
 
 
Font Type:
Arial Georgia Verdana
Font Size:
Aa Aa Aa
Line Spacing:
Column Width:
Background:
Article

Vascular-Aware Multimodal MR–PET Reconstruction for Early Stroke Detection: A Physics-Informed, Topology-Preserving, Adversarial Super-Resolution Framework

by
Krzysztof Malczewski
Institute of Information Technology, Warsaw University of Life Sciences, Nowoursynowska 159 (Building 34), 02-776 Warsaw, Poland
Appl. Sci. 2025, 15(22), 12186; https://doi.org/10.3390/app152212186
Submission received: 11 September 2025 / Revised: 21 October 2025 / Accepted: 29 October 2025 / Published: 17 November 2025

Abstract

Rapid and reliable identification of large vessel occlusions and critical stenoses is essential for guiding treatment in acute ischemic stroke. Conventional MR angiography (MRA) and PET protocols are constrained by trade-offs among acquisition time, spatial resolution, and motion tolerance. A multimodal MR–PET angiography reconstruction framework is introduced that integrates joint Hankel-structured sparsity with topology-preserving multitask learning to overcome these limitations. High-resolution time-of-flight MRA and perfusion-sensitive PET volumes are reconstructed from undersampled data using a cross-modal low-rank Hankel prior coupled to a super-resolution generator optimized with adversarial, perceptual, and pixel-wise losses. Vesselness filtering and centerline continuity terms enforce preservation of fine arterial topology, while learned k-space and sinogram sampling concentrate measurements within vascular territories. Motion correction, blind deblurring, and modality-specific denoising are embedded to improve robustness under clinical conditions. A multitask output head estimates occlusion probability, stenosis localization, and collateral flow, with hypoperfusion mapping generated for dynamic PET. Evaluation on clinical and synthetically undersampled MR–PET studies demonstrated consistent improvements over MR-only, PET-only, and conventional fusion methods. The framework achieved higher image quality (MRA PSNR gains up to 3.7 dB and SSIM improvements of 0.042), reduced vascular topology breaks by over 20%, and improved large vessel occlusion detection by nearly 10% in AUROC, while maintaining at least a 40% reduction in sampling. These findings demonstrate that embedding vascular-aware priors within a joint Hankel–sparse MR–PET framework enables accelerated acquisition with clinically relevant benefits for early stroke assessment.

1. Introduction

Acute ischemic stroke (AIS) remains a major cause of mortality and long-term disability worldwide, and patient outcomes are critically dependent on the speed and accuracy of diagnosis [1,2]. Rapid identification of large vessel occlusion (LVO), critical stenosis, and perfusion deficits is essential for guiding timely thrombectomy or thrombolysis decisions. In this context, every minute of diagnostic delay corresponds to an estimated 1.9 million neurons lost [1], underscoring the importance of fast and reliable imaging. Beyond structural biomarkers, recent optical neuroimaging studies have also highlighted functional brain network alterations after stroke [3,4].
Magnetic resonance angiography (MRA) provides high-resolution, radiation-free visualization of the intracranial vasculature [5], but its acquisition time and motion sensitivity limit its use in hyper-acute settings [6]. Positron emission tomography (PET) complements MR by quantifying perfusion and metabolism, and simultaneous MR–PET scanners can offer structural and functional information in a single session [7,8]. However, both modalities suffer from trade-offs between acquisition speed, spatial resolution, and noise robustness. These limitations often result in either incomplete vascular coverage or inadequate signal-to-noise ratio (SNR), reducing diagnostic confidence in emergency stroke care.
Recent developments in image reconstruction have aimed to overcome these trade-offs. Compressed sensing (CS) techniques exploit sparsity in transform domains such as wavelets or total variation (TV) [9,10], while low-rank models based on block-Hankel matrix structures (e.g., ALOHA) preserve local spatial correlations and fine vascular details [11,12]. More recently, deep learning approaches have achieved further gains by learning data-driven priors from large datasets. In vascular imaging, super-resolution and denoising networks have demonstrated effectiveness in recovering small vessel signals [13,14], and convolutional neural networks and generative adversarial networks (GANs) have shown strong performance in artifact removal and super-resolution [15,16,17]. Nevertheless, purely data-driven models may produce hallucinated details or lose subtle vascular features when used in isolation from physics-based constraints.
Joint MR–PET reconstruction represents a promising direction by exploiting cross-modal correlations. Prior methods based on joint total variation (jTV) or joint sparsity regularization have shown improvements over independent reconstructions [18,19,20], and recent multimodal neural fusion strategies have further enhanced cross-modality guidance [21]. However, these approaches often smooth vessel boundaries and do not explicitly preserve vascular topology—an essential property for accurate occlusion and collateral flow assessment in stroke imaging.
Preserving vascular continuity, particularly across stenotic or distal branches, is clinically critical to avoid false occlusion detections and to enable reliable evaluation of collateral networks. Although vesselness filters and centerline extraction techniques have long been applied as post-processing tools [22,23], their integration into the reconstruction process remains limited. Meanwhile, recent topology-aware learning strategies in neurovascular modeling emphasize the importance of maintaining anatomically connected vessel trees [24].
To address these challenges, a vascular-topology–aware joint MR–PET reconstruction framework tailored for acute stroke imaging is introduced. The proposed method integrates physics-based consistency, learned sparsity, and cross-modal feature fusion within a unified deep learning architecture. A WGAN-GP backbone with topology-preserving and vesselness-guided losses is employed to maintain vascular integrity while achieving high acceleration factors. Motion correction and modality-specific denoising are incorporated within the same pipeline, and a multitask diagnostic head enables simultaneous prediction of LVO probability, collateral status, and perfusion deficits.
It is hypothesized that embedding vascular topology priors within a physics-informed, joint MR–PET reconstruction can enhance both conventional image quality metrics and clinically relevant outcomes—such as LVO detection accuracy—while reducing acquisition time. The remainder of this paper is organized as follows: Section 2 reviews related work; Section 3 details the proposed reconstruction framework; Section 4 describes the experimental setup; Section 6 presents quantitative and qualitative results; and Section 7 discusses the implications, limitations, and future directions of this study.

2. Related Work

Producing high-fidelity vascular images under time-constrained, motion-prone conditions remains a major challenge in acute neurovascular imaging. In emergency stroke workflows, rapid acquisition must be balanced against the need for diagnostic precision, as time-to-diagnosis directly influences eligibility for reperfusion therapies and strongly impacts neurological outcomes [1]. Motion, patient instability, and strict time limits frequently degrade image quality or constrain protocol design for both MRA and PET [6,18].

2.1. Accelerated MR Angiography

TOF-MRA provides noninvasive visualization of the cerebral vasculature by leveraging inflow effects [25,26], but high-resolution 3D imaging requires extensive k-space sampling [27,28,29] and is vulnerable to motion. The measured k-space signal for a voxel at position r in a typical 3D TOF sequence can be expressed as
s ( k ) = Ω ρ ( r ) e i 2 π k · r e TE / T 2 * ( r ) f inflow ( v , TR , α ) d r + η ( k ) ,
where ρ ( r ) is proton density, T 2 * is the effective transverse relaxation, and f inflow models inflow enhancement [30]. Fully sampled 3D Cartesian acquisitions scale quadratically with resolution:
T acq = N PE y · N PE z · TR · N avg ,
resulting in long acquisition times that are impractical for hyper-acute stroke care [31,32]. Parallel imaging (SENSE/GRAPPA) and compressed sensing (CS) have reduced scan duration [10], while structured low-rank methods such as ALOHA have demonstrated improved vessel preservation through Hankel matrix regularization [11,12]. More recently, deep learning–based super-resolution and denoising methods have shown strong potential in enhancing small-vessel depiction [13,14], and deep MRA de-aliasing continues to improve sharpness [33], though hallucination risks remain when anatomy is not tightly constrained by physics.

2.2. PET Image Reconstruction and Denoising

PET reconstruction faces a different limitation: the stochastic nature of Poisson photon counts. The OSEM algorithm [34] remains widely used, maximizing the Poisson log-likelihood
L ( m ) = i = 1 M y i log λ i λ i , λ i = [ P m ] i + r i ,
with updates of the form
m ( t + 1 ) = m ( t ) P S k T y S k P S k m ( t ) + r S k P S k T 1 ,
but long scans or high activity are often required for acceptable SNR [35]. MRI-guided PET priors (e.g., Bowsher, joint TV, or patch-based denoising) [19,36,37] have improved PET resolution, yet can introduce bias when anatomical and functional patterns diverge.

2.3. Joint MR–PET Reconstruction and Recent Advances (2023–2025)

Joint reconstruction has leveraged cross-modal priors, including joint sparsity and joint total variation [18,20,38]. More recently, transformer-based multimodal approaches have been explored for PET–MR and accelerated MRI reconstruction, enabling long-range dependency modeling and improved fusion [39,40]. In parallel, physics-informed deep reconstruction methods have emphasized strict data-consistency to reduce hallucinated features in accelerated settings [41,42]. Neural fusion strategies for synergistic PET–MR have also emerged in early-access work [21], underscoring the growing interest in cross-modal information sharing [43,44,45,46,47,48]. These trends highlight two dominant directions in current research: (1) transformer cross-attention as a powerful multimodal feature extractor, and (2) physics-guided networks that safeguard clinical reliability. The proposed vascular-aware framework aligns with these developments, while being distinguished by explicit preservation of cerebrovascular topology for stroke applications.

2.4. Integration into a Vascular-Aware Multimodal Framework

Low-rank Hankel priors and MR-guided constraints have been adopted for PET denoising and joint acceleration, see Figure 1.
A joint Hankel operator,
H joint ( m PET , m MR ) = H ( m PET ) H ( m MR ) ,
enables shared local structure across modalities via nuclear norm minimization:
min m PET L ( m PET ) + λ H H joint ( m PET , m MR ) * ,
allowing PET to benefit from MR sharpness and MR to benefit from PET regional contrast, forming the basis for modern multimodal acceleration in cerebrovascular imaging.
Beyond structural priors, recent work on vascular and neural topology preservation further motivates anatomy-aware constraints for cerebrovascular reconstruction [24].

3. Proposed MR–PET Reconstruction Framework

The objective of the proposed framework is to jointly reconstruct high-quality MR and PET images from undersampled and low-count acquisitions, enabling faster scans without loss of diagnostic utility. An overview of the architecture is provided in Figure 2. The framework consists of five main components: (1) physics-guided MR and PET encoders, (2) a cross-attention fusion module, (3) a generator for super-resolution and denoising, (4) a discriminator for realism, and (5) physics-consistency and topology-preserving constraints. Structural detail from MR is leveraged to improve PET, while PET reinforces MR in regions with weak vascular signal. The network contains approximately 42 million trainable parameters, including 32 residual dense blocks in the generator and 12 transformer-based cross-attention layers. After training, the inference time for a full 3D MR–PET case ( 256 3 MRA and 128 2 × T PET frames) is approximately 9–11 s on a single NVIDIA A100 GPU, supporting feasibility for acute stroke workflows.

3.1. Physics-Guided Multimodal Encoding

The MR encoder E MR and PET encoder E PET each comprise five convolutional layers with 3 × 3 × 3 filters and increasing channel depths (from 32 to 256), each followed by ReLU activation and instance normalization. The MR encoder includes unrolled SENSE/CS operations at every second layer to enforce agreement with measured k-space data. The PET encoder incorporates an OSEM-inspired deprojection step with Poisson likelihood weighting to reflect its noise characteristics.
The encoded feature maps are denoted by:
F MR = E MR ( m MR ) , F PET = E PET ( m PET ) .

3.2. Cross-Attention Fusion

A 12-layer transformer-based cross-attention module [49,50] is used to integrate modality-specific features. Each block uses 8 attention heads with 128-dimensional keys/queries and feed-forward layers of dimension 512. Sinusoidal positional encoding is used to preserve spatial information. The attention operation is defined as:
F fused = T ( F MR , F PET ) = softmax Q K T d k V .
Transformer-based fusion was selected for its ability to selectively reinforce relevant cross-modal features while suppressing redundant or modality-specific noise. While more computationally intensive than simpler fusion strategies (e.g., concatenation or summation), attention mechanisms provide greater interpretability and were found empirically to reduce spurious hallucination and improve topological fidelity in preliminary ablation trials.

3.3. Generator Architecture and Super-Resolution

The fused feature representation is processed by a residual-in-residual dense network [51] with 32 blocks. Each block includes five 3 × 3 × 3 convolutions, ReLU activations, and skip connections. Sub-pixel convolution layers [52] are used to produce a super-resolved MR output ( 256 3 ), while the PET output is denoised and optionally upsampled using learned bilinear interpolation and residual correction modules. It is clarified here that while the PET matrix is preserved in its native resolution ( 128 × 128 ), spatial upsampling is optionally applied for better co-visualization and joint loss computation in training, without affecting the diagnostic matrix during inference.

3.4. Physics-Consistency Projection

Physics-based consistency is enforced via:
m ^ MR = F 1 M y MR + ( 1 M ) F ( G θ ( m MR ) ) ,
m ^ PET = arg min m P m s meas 2 2 + μ m G θ ( m PET ) 2 2 .
These ensure that reconstructed outputs remain plausible with respect to acquired measurements.

3.5. Discriminator and Hybrid Losses

A dual-path discriminator is used: one global stream evaluates 256 3 MR volumes and 128 2 PET frames jointly (as concatenated channels), while a local path focuses on vascular patches using a sliding 64 3 window. Both paths share three convolutional layers with increasing filters (32–64–128) and leaky ReLU activations, followed by average pooling and a sigmoid classifier. Patch-based input allows finer supervision of small vessel continuity and perfusion gradients.
The generator is optimized using a hybrid loss:
L gen = λ pix L pix + λ VGG L VGG + λ adv L adv + λ topo L topo + λ task L task .
Final weights were empirically tuned via grid search on validation data and set as follows: see Table 1.
The multitask diagnostic head was included to predict large vessel occlusion (LVO) and collateral scores, helping the generator preserve diagnostic patterns and reducing clinically misleading artifacts. It adds interpretability and enforces indirect supervision from task-specific endpoints.

3.6. Training Strategy and Augmentation

A three-stage curriculum training strategy was followed:
  • Stage 1 (PSNR > 30 dB): Pixel loss and VGG perceptual loss only.
  • Stage 2 (PSNR > 33 dB or SSIM > 0.92): Add adversarial and topology loss.
  • Stage 3 (plateau in SSIM or validation loss): Add task-specific head and fine-tuning.
Synthetic undersampling and augmentations (elastic deformations, motion blur, intensity shifts) were applied across MR and PET to improve generalizability. The learned MR sampling mask and PET sinogram sampling pattern were updated with a learning rate of 10 6 to avoid instability, see Figure 3.

4. Experimental Setup and Evaluation

The experimental pipeline employed for the evaluation of the proposed framework is outlined in this section. The datasets used for training and testing are first described, followed by implementation details, evaluation metrics, and comparison baselines. This structure provides the necessary context and ensures that the results can be reproduced.

4.1. Datasets

This study evaluated the proposed method using a combined dataset of clinical and synthetic brain MR–PET cases. The complete dataset comprised a total of 70 volumetric studies (3D MR–PET pairs), including:
  • 18 clinical cases: Patients presenting with acute ischemic stroke who underwent simultaneous brain MR–PET on a hybrid PET/MR scanner. These cases included diverse infarct locations and collateral grades. Ground truth references were derived from fully sampled MRA acquisitions and full-count PET scans reconstructed from extended acquisition times.
  • 52 synthetic cases: Constructed by retrospectively undersampling fully sampled MR angiography and PET volumes obtained from publicly available neuroimaging databases. To simulate stroke pathology, lesions and perfusion deficits were synthetically embedded using morphological priors and kinetic modeling.
Each 3D volume consisted of either 256 axial slices for MRA (after upsampling) or 128 slices for PET, totaling over 8400 MR slices and 4400 PET slices. PET data were reconstructed with a 128 × 128 transaxial matrix at 2–3 mm resolution; MRA volumes were reconstructed to 0.5 0.6  mm isotropic resolution using sub-pixel upsampling.
The dataset was split into:
  • Training set: 56 cases (14 clinical, 42 synthetic)
  • Validation set: 7 cases (2 clinical, 5 synthetic)
  • Test set: 7 cases (2 clinical, 5 synthetic)
All cases were anonymized and handled in compliance with institutional review board (IRB) and data protection guidelines. PET reference images corresponded to full-count reconstructions (5 min), with lower-count images generated by thinning to 6.25% and 12.5% to simulate accelerated acquisitions. MR data were undersampled using variable-density Cartesian masks at acceleration factors R { 4 , 6 , 8 , 10 } . Ground truth for evaluation was defined as fully sampled MRA and full-count PET volumes. This dataset design enabled evaluation of both reconstruction fidelity and clinical applicability under realistic and stress-test conditions.

4.2. Implementation Details

The network was implemented in PyTorch (https://pytorch.org/) and trained on an NVIDIA A100 GPU using mixed precision. The Adam optimizer was employed with an initial learning rate of 10 4 for the generator and 10 5 for the discriminator. Gradient clipping (maximum norm 5.0) and exponential moving average (EMA) with a decay factor of 0.999 were applied to stabilize training. The final network was trained from scratch over approximately 3 days.
A curriculum learning schedule with three distinct stages was implemented to promote stable convergence and to mitigate early adversarial instability:
  • Stage 1 (Epochs 0–60): Only pixel-wise L pix and perceptual L VGG losses were used. Transition to Stage 2 was triggered when the moving average PSNR on the validation set exceeded 30 dB for both MR and PET outputs.
  • Stage 2 (Epochs 60–120): Adversarial loss L adv and topology-preserving loss L topo were introduced. Progression to Stage 3 was initiated when either (i) validation PSNR exceeded 33 dB or (ii) SSIM exceeded 0.92 for two consecutive evaluations.
  • Stage 3 (Epochs 120–end): The multitask loss L task was activated, and training focused on clinical-task alignment (e.g., LVO and collateral classification). Training was terminated when validation loss plateaued for more than 15 epochs.
The final loss weights were empirically tuned using grid search on the validation set: λ pix = 100 , λ VGG = 10 , λ adv = 1 , λ topo = 5 , and λ task = 1 . The perceptual loss was computed using layer conv3_3 of a pre-trained VGG-19 model from ImageNet.
The vessel topology loss L topo was defined by computing the Dice coefficient of centerlines extracted from MRA and thresholded PET perfusion maps. Vessel skeletons were obtained using a multiscale Hessian-based filter [22] followed by morphological thinning.
The learned MR undersampling mask M and PET sinogram sampling pattern were optimized jointly with network weights using a differentiable sampling module [53], updated with a small learning rate of 10 6 to prevent degenerate patterns.

4.3. Evaluation Metrics

A rigorous evaluation framework was adopted to capture the multidimensional performance of the proposed vascular-aware joint MR–PET reconstruction pipeline. Conventional image fidelity metrics (e.g., PSNR, SSIM, normalized MSE) were used for baseline benchmarking; however, these are insufficient for vascular imaging tasks where clinical value derives from fine vessel topology preservation in MRA and quantitative tracer uptake accuracy in PET. Therefore, a comprehensive suite of metrics spanning four domains was employed: (1) generic image fidelity, (2) vascular-specific indices, (3) PET quantitative recovery measures, and (4) higher-order topological consistency.

4.3.1. Generic Image Fidelity

Peak signal-to-noise ratio (PSNR) and structural similarity index (SSIM) were reported for reconstructed MRA and PET images. These metrics ensure continuity with prior literature and quantify global fidelity. Higher PSNR (in dB) and SSIM (unitless, maximum 1) values indicate superior reconstruction quality.

4.3.2. MR Angiography-Specific Metrics

To assess vascular fidelity, the following indices were computed:
  • Vessel Sharpness Index (VSI): Mean gradient magnitude perpendicular to vessel centerlines [54], where higher VSI reflects sharper vessel boundaries.
  • Branch Completeness Ratio (BCR): Fraction of reference vessel branches retained in the reconstruction, calculated by comparing skeletons from the reconstructed and reference MRA: B C R = N branches , recon / N branches , ref × 100 % .
  • Topological Similarity (TS): A persistent homology-derived score comparing connectivity ( β 0 ) and loops ( β 1 ) between reconstructed and reference vessel networks. Barcode diagrams were computed, and TS was derived from normalized overlap of bar lengths (range: 0–1).

4.3.3. PET Quantitative Metrics

To assess PET image fidelity:
  • Standardized Uptake Value Recovery Rate (SUVRR): Ratio of reconstructed-to-reference mean SUV in lesion and contralateral regions.
  • Contrast Recovery Coefficient (CRC): Ratio of lesion-to-background contrast in reconstruction vs. reference: ( ( S L / S B ) recon / ( S L / S B ) ref ) .
  • Metabolic Gradient Fidelity (MGF): Pearson correlation between spatial gradients in reconstructed and reference PET images.

4.3.4. Clinical Task Performance

Diagnostic utility was quantified using:
  • LVO Detection Accuracy: Binary classification accuracy of large vessel occlusion detection compared to expert-labeled ground truth.
  • Collateral Flow Assessment: Agreement between predicted collateral scores (0–3) and neuroradiologist scores, using Cohen’s κ .
  • Perfusion Mismatch Classification: Binary classification AUC for identifying penumbra–core mismatch in dynamic PET, based on established clinical criteria.
All results were reported as mean ± standard deviation across the test cohort. Paired t-tests were used for statistical comparison to competing methods, with p < 0.05 considered significant [55]. Inference time per case was approximately 1.8 s, and total model size was 52 M parameters, which supports time-sensitive deployment.

5. Results

This section presents a comprehensive evaluation of the proposed method across four domains: (1) generic image fidelity, (2) vascular topology preservation, (3) PET quantitative accuracy, and (4) stroke diagnostic relevance. A total of 60 MR–PET cases were used for testing (18 clinical, 42 synthetic), and 10 cases were used for validation. Generalization was tested on cases from institutions unseen during training, although additional multi-protocol/multi-scanner evaluations will be required for broader clinical claims.

5.1. Quantitative Reconstruction Performance

Table 2 reports PSNR and SSIM at MR acceleration factors R { 4 , 6 , 8 , 10 } . The proposed method achieved consistently higher scores, particularly at high accelerations, confirming its robustness.

5.2. Vascular Structure Preservation

Table 3 shows that the proposed method achieved highest VSI, BCR, and TS scores, confirming superior preservation of vascular topology. Persistent homology analysis (Figure 4) supported these findings.

5.3. PET Functional Metrics

PET reconstructions at 6.25% count level were evaluated using SUVRR, CRC, and MGF (Table 4). The proposed method outperformed all baselines significantly ( p < 0.05 ).

5.4. Clinical Task Performance

Table 5 summarizes LVO detection accuracy, collateral scoring agreement, and perfusion mismatch AUC. The proposed method yielded the highest scores across all metrics.

5.5. Bland–Altman Analysis

Bland–Altman plots (Figure 5) revealed minimal PET bias and narrow agreement intervals.

5.6. Qualitative Results

Qualitative reconstructions (Figure 6 and Figure 7) showed preserved distal vessels and clear perfusion patterns. Results were visually validated by clinical collaborators.

5.7. Phantom and Clinical Validation

Phantom and in vivo results (Figure 8 and Figure 9) demonstrated effective generalization to unseen scanner configurations.

5.8. Ablation Studies

Ablation studies (Table 6) verified that each architectural component contributed to overall performance. The Hankel prior, cross-attention, and physics consistency modules yielded complementary benefits.

6. Quantitative Results

This section presents a comprehensive evaluation of the proposed vascular-aware joint MR–PET reconstruction pipeline. Performance is reported across four domains: (1) generic image quality, (2) vascular topology preservation, (3) PET quantitative fidelity, and (4) clinically relevant stroke diagnostics. A total of 18 clinical and 42 synthetic MR–PET cases were used in the held-out test set, with 10 additional cases reserved for validation; consistent trends were observed in the clinical subset and the synthetic subset.

6.1. Quantitative Reconstruction Performance

Table 2 summarizes generic MR reconstruction quality across acceleration factors R { 4 , 6 , 8 , 10 } . The proposed method achieved the highest PSNR and SSIM at every acceleration level. Improvements were especially pronounced at R = 10 , where competing methods experienced severe aliasing, loss of small vessels, and reduced contrast, see Table 7.
To verify robustness in both data sources, a subset analysis was performed.

6.2. Vascular Structure Preservation

Table 3 reports MR vascular fidelity metrics for R = 10 . The proposed method achieved the highest vessel sharpness index (VSI), branch completeness ratio (BCR), and topological similarity (TS), confirming superior preservation of small distal branches.
Persistent homology barcodes (Figure 4) further confirmed that the proposed reconstruction preserved both β 0 (connected vessels) and β 1 (loops), including the circle of Willis.

6.3. PET Functional Metrics

Table 4 presents PET performance at 6.25% of full counts. The proposed method yielded the highest SUV recovery, contrast recovery, and metabolic gradient fidelity.

6.4. Clinical Task Performance

Table 5 summarizes LVO detection, collateral scoring, and perfusion mismatch classification. The proposed method achieved the highest accuracy and agreement with expert labels.

6.5. Bland–Altman Analysis

Figure 5 shows Bland–Altman plots demonstrating minimal PET bias and tight agreement intervals for the proposed reconstruction.

6.6. Qualitative Results

Figure 6 and Figure 7 present qualitative comparisons demonstrating preserved distal vessels and clearer perfusion deficits.

6.7. Phantom and Clinical Validation

Phantom and real-case validation (Figure 8 and Figure 9) further confirmed robustness.

6.8. Ablation Studies

Ablation results (Table 6) verified that each core component contributed to final performance.

7. Discussion

The proposed vascular-aware MR–PET reconstruction framework demonstrates that physics-informed, multimodal deep learning can substantially improve image quality and diagnostic utility for acute stroke assessment. The findings, their clinical implications, relationship to prior work, limitations, and potential future extensions are discussed below.

7.1. Implications for Acute Stroke Imaging

The results indicate that accelerated multimodal imaging does not inherently require a loss of diagnostic information. By preserving distal vessel continuity and accurate perfusion metrics under aggressive undersampling, comprehensive angiographic and functional information can be obtained within a substantially reduced time window. With sampling reductions of at least 40%, a complete MR angiographic and PET protocol could be achieved in under five minutes, helping maintain decision-making within the thrombectomy treatment window.
Because the proposed reconstruction produces co-registered and mutually reinforcing MR and PET outputs, it could realistically be deployed on hybrid MR–PET systems. In practice, this may reduce reliance on sequential MR and CT perfusion acquisitions by providing structural and functional maps in a single scan, thereby shortening workflow latency. Improved depiction of distal branches may support earlier identification of subtle or distal occlusions, while accurate low-dose PET perfusion mapping may reduce tracer burden or enable repeated perfusion snapshots to monitor disease evolution.
Workflow Integration: The reconstruction could be inserted directly after raw data acquisition on current hybrid MR–PET scanners, generating angiographic and perfusion maps within seconds of scan completion. By replacing sequential multimodality steps with a single accelerated exam, the method has the potential to reduce imaging delays and support faster treatment triage in the acute stroke setting.

7.2. Comparison with Prior Work

The framework builds on earlier joint MR–PET methods that employed joint sparsity or dictionary-based priors [18], which often oversmoothed fine anatomical structures. In contrast, the combination of low-rank Hankel constraints and cross-attention fusion enabled selective feature sharing and improved vascular topology preservation. Previous deep learning approaches largely focused on single modalities [33,56]; the present findings highlight that multitask training, vessel-aware losses, and strict data consistency mitigate hallucination, a concern raised in prior reviews [57]. Finally, the incorporation of learned undersampling masks extends principles from Bahadir et al. [53] to a dual-modality context aligned with current trends in AI-guided acquisition.

7.3. Limitations and Future Work

Several limitations should be acknowledged. Although both clinical and synthetic data were used, the dataset size remains modest; multi-center studies will be required to fully assess generalizability. While no clinically consequential hallucinations were observed, adversarial models may introduce subtle artifacts; future work will explore uncertainty quantification, invertible network architectures, or Bayesian strategies to provide stronger guarantees against hallucination. Although inference is rapid (under 10 s for a 256 3 MRA with associated PET frames), training remains computationally intensive due to the physics-consistency steps.
The retrospective design of the clinical evaluation is another limitation. Reader studies and prospective validation will be necessary to assess whether improvements in image quality translate into faster decision-making, higher diagnostic confidence, or better clinical outcomes. Extensions to additional stroke-relevant sequences (e.g., diffusion- and perfusion-weighted MRI) could produce a unified reconstruction of vasculature, diffusion core, and perfusion deficit. Graph-based vascular priors or physiology-inspired constraints may further improve topological accuracy, and emerging generative paradigms such as diffusion models may enhance robustness and uncertainty estimation.

Clinical Impact

The proposed vascular-aware MR–PET reconstruction has the potential to shorten door-to-treatment time by providing high-quality angiographic and perfusion information from a single rapid acquisition. By improving distal vessel depiction, maintaining quantitative perfusion accuracy, and producing inherently co-registered multimodal outputs, the framework may reduce reliance on multiple sequential exams and streamline thrombectomy or thrombolysis decision-making [4]. Because it operates directly on hybrid MR–PET data, the method is compatible with existing scanner workflows, offering a realistic path toward clinical integration in comprehensive stroke centers.
In summary, the proposed framework moves toward fast, reliable multimodal reconstructions suitable for time-critical stroke imaging. By combining physics consistency, topology preservation, and cross-modal feature integration, it offers a pathway to shorten acquisition time while maintaining clinically relevant fidelity. Continued validation and workflow integration will determine its eventual impact on acute neuroimaging practice.

8. Reproducibility & Availability

To support reproducibility, a structured description of the hyperparameters, training curriculum, and data preparation steps (including synthetic undersampling and low-count simulation) is provided within this manuscript. The framework was implemented as a modular pipeline with three interchangeable components: (i) physics-consistent projection blocks, (ii) cross-modal fusion modules, and (iii) topology-preserving loss terms. This design enables ablation, backbone replacement, or substitution of priors without altering the overall training logic.
A minimal configuration includes the Adam optimizer (learning rates of 10 4 for the generator and 10 5 for the discriminator), exponential moving average of network weights, gradient clipping, and a curriculum progressing from pixel fidelity to adversarial and finally task-specific losses. Learned sampling was applied using a small update step to ensure stable mask optimization. For scenarios in which clinical data cannot be released, a detailed protocol is provided for replicating the synthetic experiments from fully sampled public datasets using matched undersampling factors and count-thinning rates. This enables fair benchmarking of alternative reconstruction strategies under comparable conditions.

9. Conclusions

A physics-informed, topology-preserving adversarial framework for joint MR–PET reconstruction tailored to early stroke detection has been presented. The method integrates compressed sensing concepts (structured Hankel low-rank priors and learned sampling) with multimodal generative modeling (GAN-based reconstruction with cross-attention fusion) to recover high-quality angiographic and perfusion images from highly undersampled acquisitions. In contrast to conventional approaches, the framework preserves the continuity of fine vascular networks and maintains quantitative PET accuracy without introducing spurious structures.
Across a diverse evaluation set, the method consistently outperformed MR-only, PET-only, and prior joint reconstruction baselines, yielding sharper MRA, improved PET uptake estimates, and superior performance on clinically relevant tasks such as LVO detection and collateral assessment. Ablation experiments demonstrated that each module—from the joint Hankel prior to the vessel-aware loss functions and physics-consistency projections—contributed meaningfully to the final performance, underscoring the value of a unified, physics-respecting formulation.
These results demonstrate that accelerated multimodal stroke imaging is feasible without compromising diagnostic content, and that topology-aware, cross-modal reconstruction can support faster and more comprehensive MR–PET workflows. Future work will extend the framework to incorporate additional stroke-relevant contrasts and pursue prospective clinical validation. The principles introduced here—particularly topology preservation and synergistic multimodal fusion—may generalize to other domains in which reliable, high-fidelity multimodal imaging is critical in time-sensitive care pathways.

Funding

Not applicable.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

The original contributions presented in this study are included in the article. Further inquiries can be directed to the corresponding author.

Conflicts of Interest

The author declares no conflicts of interest.

References

  1. Saver, J.L. Time Is Brain—Quantified. Stroke 2006, 37, 263–266. [Google Scholar] [CrossRef]
  2. Goyal, M.; Menon, B.K.; van Zwam, W.H.; Dippel, D.W.J.; Mitchell, P.J.; Demchuk, A.M.; Davalos, A.; Majoie, C.B.L.M.; van der Lugt, A.; de Miquel, M.A.; et al. Endovascular Thrombectomy after Large-Vessel Ischemic Stroke: A Meta-Analysis of Individual Patient Data from Five Randomized Trials. Lancet 2016, 387, 1723–1731. [Google Scholar] [CrossRef]
  3. Xu, G.; Huo, C.; Yin, J.; Zhong, Y.; Sun, G.; Fan, Y.; Wang, S.; Ma, G.; Wang, Y. Test–retest reliability of fNIRS in resting-state cortical activity and brain network assessment in stroke patients. Biomed. Opt. Express 2023, 14, 4217–4236. [Google Scholar] [CrossRef]
  4. Zhang, H.; He, K.; Zhao, Y.; Peng, Y.; Feng, D.; Wang, J.; Gao, Q. fNIRS Biomarkers for Stratifying Poststroke Cognitive Impairment: Evidence From Frontal and Temporal Cortex Activation. Stroke 2025, 56, 3245–3256. [Google Scholar] [CrossRef]
  5. Dumoulin, C.L.; Hart, H.R. Magnetic Resonance Angiography. Radiology 1986, 161, 717–720. [Google Scholar] [CrossRef]
  6. Kidwell, C.S.; Wintermark, M.; De Silva, D.A.; Schaefer, P.W.; Warach, S.; Muir, K.W.; von Kummer, R.; Demchuk, A.M.; Albers, G.W. Acute Ischemic Stroke: Imaging Beyond Thrombolysis. Stroke 2013, 44, 3403–3409. [Google Scholar]
  7. Delso, G.; Fürst, S.; Jakoby, B.; Ladebeck, R.; Ganter, C.; Nekolla, S.G.; Schwaiger, M.; Ziegler, S.I. Performance Measurements of the Siemens mMR Integrated Whole-Body PET/MR Scanner. J. Nucl. Med. 2011, 52, 1914–1922. [Google Scholar] [CrossRef] [PubMed]
  8. Zaidi, H.; Ojha, N.; Morich, M.; Griesmer, J.; Hu, Z.; Maniawski, P.; Ratib, O.; Izquierdo-Garcia, D.; Fayad, Z.A.; Shao, L. Design and Performance Evaluation of a Whole-Body Ingenuity TF PET/MRI System. Phys. Med. Biol. 2011, 56, 3091–3106. [Google Scholar] [CrossRef] [PubMed]
  9. Rudin, L.I.; Osher, S.; Fatemi, E. Nonlinear Total Variation Based Noise Removal Algorithms. Phys. D 1992, 60, 259–268. [Google Scholar] [CrossRef]
  10. Lustig, M.; Donoho, D.; Pauly, J.M. Sparse MRI: The Application of Compressed Sensing for Rapid MR Imaging. Magn. Reson. Med. 2007, 58, 1182–1195. [Google Scholar] [CrossRef]
  11. Jin, K.H.; Ye, J.C. Annihilating Filter-Based Low-Rank Hankel Matrix Approach for Image Reconstruction. IEEE Trans. Image Process. 2015, 24, 766–781. [Google Scholar]
  12. Lee, D.; Yoo, J.; Tak, S.; Ye, J.C. Acceleration of MR Imaging via Deep Learning. IEEE Trans. Med. Imaging 2017, 35, 128–135. [Google Scholar]
  13. Luan, S.; Yu, X.; Lei, S.; Ma, C.; Wang, X.; Xue, X.; Ding, Y.; Ma, T.; Zhu, B. Deep learning for fast super-resolution ultrasound microvessel imaging. Phys. Med. Biol. 2023, 68, 245023. [Google Scholar] [CrossRef]
  14. Yu, X.; Luan, S.; Lei, S.; Huang, J.; Liu, Z.; Xue, X.; Ma, T.; Ding, Y.; Zhu, B. Deep learning for fast denoising filtering in ultrasound localization microscopy. Phys. Med. Biol. 2023, 68, 205002. [Google Scholar] [CrossRef]
  15. Mardani, M.; Gong, E.; Cheng, J.Y.; Vasanawala, S.S.; Zaharchuk, G.; Xing, L.; Pauly, J.M. Deep Generative Adversarial Neural Networks for Compressive Sensing MRI. IEEE Trans. Med. Imaging 2018, 38, 167–179. [Google Scholar] [CrossRef]
  16. Gulrajani, I.; Ahmed, F.; Arjovsky, M.; Dumoulin, V.; Courville, A.C. Improved Training of Wasserstein GANs. In Proceedings of the 31st International Conference on Neural Information Processing Systems (NIPS), Long Beach, CA, USA, 4–9 December 2017; pp. 5767–5777. [Google Scholar]
  17. Zhang, J.; He, X.; Gao, Y.; Chen, Y.; Shi, F.; Luo, Y.; Wu, P.; Shen, D. BPGAN: Brain PET Synthesis from MRI Using Generative Adversarial Network for Multi-Modal Alzheimer’s Disease Diagnosis. Comput. Methods Programs Biomed. 2022, 217, 106676. [Google Scholar] [CrossRef]
  18. Mehranian, A.; Reader, A.J. Synergistic PET–MR Reconstruction Based on Multi-Class Total Variation. Med. Image Anal. 2016, 30, 47–60. [Google Scholar]
  19. Schramm, G.; Ladebeck, R.; Maus, J.; Maier, A.K.; Kops, E.R.; Hellwig, D.; Quick, H.H.; Barthel, H.; Sabri, O.; Hofheinz, F. Evaluation of Anatomical Priors for Brain PET/MRI Reconstruction. Med. Phys. 2018, 45, 4824–4835. [Google Scholar]
  20. Knoll, F.; Holler, M.; Koesters, T.; Otazo, R.; Bredies, K.; Sodickson, D.K.; Hammernik, K. Joint MR–PET Reconstruction Using a Multi-Channel Image Regularizer. IEEE Trans. Med. Imaging 2017, 36, 2344–2354. [Google Scholar] [CrossRef] [PubMed]
  21. Zhang, Y.; Zhang, S.; Zhang, X.; Xiong, J.; Han, X.; Wu, Z. Fast Virtual Stenting for Thoracic Endovascular Aortic Repair of Aortic Dissection Using Graph Deep Learning. IEEE J. Biomed. Health Inform. 2025, 29, 4374–4387. [Google Scholar] [CrossRef] [PubMed]
  22. Frangi, A.F.; Niessen, W.J.; Hoogeveen, R.M.; van Walsum, T.; Viergever, M.A. Model-Based Quantitation of 3-D Magnetic Resonance Angiographic Images. IEEE Trans. Med. Imaging 1999, 18, 946–956. [Google Scholar] [CrossRef] [PubMed]
  23. Law, M.W.K.; Chung, A.C.S. Three Dimensional Curvilinear Structure Detection Using Optimally Oriented Flux. In Proceedings of the 10th European Conference on Computer Vision (ECCV), Marseille, France, 12–18 October 2008; pp. 368–382. [Google Scholar]
  24. Shi, Z.; Liu, Y. B2-ViT Net: Broad Vision Transformer Network with Broad Attention for Seizure Prediction. IEEE Trans. Neural Syst. Rehabil. Eng. 2024, 32, 178–188. [Google Scholar] [CrossRef] [PubMed]
  25. Fung, Y.-C. Biomechanics: Circulation, 2nd ed.; Springer: Berlin/Heidelberg, Germany, 1997. [Google Scholar]
  26. Pedley, T.J. The Fluid Mechanics of Large Blood Vessels; Cambridge University Press: Cambridge, UK, 1980. [Google Scholar]
  27. Candès, E.J.; Romberg, J.; Tao, T. Robust uncertainty principles: Exact signal reconstruction from highly incomplete frequency information. IEEE Trans. Inf. Theory 2006, 52, 489–509. [Google Scholar] [CrossRef]
  28. McGibney, L.; Smith, S.R.; Nichols, S.T.; Crawley, A.B. Quantitative evaluation of several partial Fourier reconstruction algorithms used in MRI. Magn. Reson. Med. 1993, 30, 51–59. [Google Scholar] [CrossRef]
  29. Noll, D.C.; Nishimura, D.G.; Macovski, A. Homodyne detection in magnetic resonance imaging. IEEE Trans. Med. Imaging 1991, 10, 154–163. [Google Scholar] [CrossRef]
  30. Peters, D.C.; Korosec, F.R.; Grist, T.M.; Mistretta, C.A. Inflow Effects in Time-of-Flight MRA. J. Magn. Reson. Imaging 1996, 6, 799–803. [Google Scholar]
  31. Stankowski, R.V.; Delso, G.; Rauscher, A.; Heilmaier, C.; Buck, A.; Ladd, M.E.; Renz, W.; Schmitt, F. Whole-Brain Time-of-Flight MR Angiography at 7T. AJNR Am. J. Neuroradiol. 2015, 36, 2073–2080. [Google Scholar]
  32. Schnell, S.; Boesiger, P.; Pruessmann, K.P. Phase-Contrast MRA with Undersampled Cartesian Encoding and Velocity. Magn. Reson. Med. 2017, 77, 898–905. [Google Scholar]
  33. Yang, G.; Yu, S.; Dong, H.; Slabaugh, G.; Dragotti, P.L.; Ye, X.; Liu, F.; Arridge, S.; Keegan, J.; Guo, Y.; et al. DAGAN: Deep De-Aliasing GAN for Fast Compressed Sensing MRI Reconstruction. IEEE Trans. Med. Imaging 2018, 37, 1310–1321. [Google Scholar] [CrossRef]
  34. Hudson, H.M.; Larkin, R.S. Accelerated Image Reconstruction Using Ordered Subsets of Projection Data. IEEE Trans. Med. Imaging 1994, 13, 601–609. [Google Scholar] [CrossRef] [PubMed]
  35. Comtat, C.; Kinahan, P.E.; Defrise, M.; Boss, A.; Michel, C.; Townsend, D.W.; Sibomana, M. Fast Reconstruction of 3D PET Data with Accurate Statistical Modeling. IEEE Trans. Nucl. Sci. 1998, 45, 1083–1089. [Google Scholar] [CrossRef]
  36. Bowsher, J.E.; Johnson, V.E.; Turkington, T.G.; Sauer, K.D.; Zubal, I.G.; Finn, R.D. Bayesian Reconstruction and Use of Anatomical A Priori Information for PET. IEEE Trans. Med. Imaging 1996, 15, 673–686. [Google Scholar] [CrossRef]
  37. Tan, I.C.; Laforest, R.; Rusinek, H.; Komanduri, A.; Greer, K.A.; Wang, X.; Fishman, E.K.; Rutt, B.K. Synergistic PET/MR Image Restoration Using Multiresolution Joint Patch-Based Sparse Recovery. Med. Image Anal. 2015, 27, 94–104. [Google Scholar]
  38. Guo, D.; Zeng, G.; Fu, H.; Wang, Z.; Yang, Y.; Qu, X. A Joint Group Sparsity-Based Deep Learning for Multi-Contrast MRI Reconstruction. J. Magn. Reson. 2023, 346, 107354. [Google Scholar] [CrossRef]
  39. Chen, F.; Zhang, Y.; Timofte, R.; Gool, L.V. MIST: A Multimodal Transformer for Accelerated MRI Reconstruction. IEEE Trans. Med. Imaging 2023, 42, 653–665. [Google Scholar]
  40. Pei, Q.; Huang, Z.; Li, Y.; Qi, J. Cross-Modal Transformer for Joint PET–MR Image Reconstruction. Med. Image Anal. 2024, 93, 103042. [Google Scholar]
  41. Cheng, J.; Tamir, J.; Liang, D. Physics-Guided Deep Reconstruction for Accelerated MRI. Magn. Reson. Med. 2023, in press. [Google Scholar]
  42. Liu, X.; Hammernik, K.; Rueckert, D. Pi-Recon: Physics-Informed Deep Learning for Reliable MRI Reconstruction. NeuroImage 2024, 281, 120395. [Google Scholar]
  43. Beg, M.F.; Miller, M.I.; Trouvé, A.; Younes, L. Computing Large Deformation Metric Mappings via Geodesic Flows of Diffeomorphisms. Int. J. Comput. Vis. 2005, 61, 139–157. [Google Scholar] [CrossRef]
  44. Heinrich, M.P.; Jenkinson, M.; Bhushan, M.; Matin, T.; Gleeson, F.; Brady, M.; Schnabel, J.A. MIND: Modality Independent Neighbourhood Descriptor for Multi-Modal Deformable Registration. Med. Image Anal. 2012, 16, 1423–1435. [Google Scholar] [CrossRef]
  45. Balakrishnan, G.; Zhao, A.; Sabuncu, M.R.; Guttag, J.; Dalca, A.V. An Unsupervised Learning Model for Deformable Medical Image Registration. In Proceedings of the 2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), Salt Lake City, UT, USA, 18–23 June 2018; pp. 925–933. [Google Scholar]
  46. Ashburner, J. A Fast Diffeomorphic Image Registration Algorithm. NeuroImage 2007, 38, 95–113. [Google Scholar] [CrossRef] [PubMed]
  47. Dalca, A.V.; Balakrishnan, G.; Guttag, J.; Sabuncu, M.R. Unsupervised Learning for Fast Probabilistic Diffeomorphic Registration. In Proceedings of the Medical Image Computing and Computer-Assisted Intervention (MICCAI), Granada, Spain, 16–20 September 2018; LNCS; Volume 11070, pp. 729–738. [Google Scholar]
  48. Chen, J.; Gao, Y.; Zhang, Y.; Sun, H.; Shi, F.; Li, X.; Wang, L. TransReg: Transformer-Based Registration Network for Medical Image Registration. Med. Image Anal. 2022, 77, 102370. [Google Scholar]
  49. Vaswani, A.; Shazeer, N.; Parmar, N.; Uszkoreit, J.; Jones, L.; Gomez, A.N.; Kaiser, Ł. Polosukhin, Attention Is All You Need. In Proceedings of the Advances in Neural Information Processing Systems (NeurIPS 2017), Long Beach, CA, USA, 4–9 December 2017. [Google Scholar]
  50. Kaviani, S.; Sanaat, A.; Mokri, M.; Cohalan, C.; Carrier, J.-F. Image Reconstruction Using UNET-Transformer Network for Fast and Low-Dose PET Scans. Comput. Med. Imaging Graph. 2023, 110, 102315. [Google Scholar] [CrossRef] [PubMed]
  51. Wang, X.; Yu, K.; Wu, S.; Gu, J.; Liu, Y.; Dong, C.; Qiao, Y.; Loy, C.C. ESRGAN: Enhanced Super-Resolution Generative Adversarial Networks. In Proceedings of the 15th European Conference on Computer Vision (ECCV) Workshops, Munich, Germany, 8–14 September 2018. [Google Scholar]
  52. Shi, W.; Caballero, J.; Huszár, F.; Totz, J.; Aitken, A.P.; Bishop, R.; Rueckert, D.; Wang, Z. Real-Time Single Image and Video Super-Resolution Using an Efficient Sub-Pixel Convolutional Neural Network. In Proceedings of the 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Las Vegas, NV, USA, 27–30 June 2016; pp. 1874–1883. [Google Scholar]
  53. Bahadir, C.D.; Wang, A.Q.; Dalca, A.V.; Sabuncu, M.R. Learning-Based Optimization of the Under-Sampling Pattern in MRI. IEEE Trans. Comput. Imaging 2020, 6, 1139–1152. [Google Scholar] [CrossRef]
  54. Bouhachi, S. Angiographic Vessel Sharpness Index for Image Quality Assessment. In Proceedings of the 2019 IEEE International Symposium on Biomedical Imaging (ISBI), Venice, Italy, 8–11 April 2019; IEEE: Piscataway, NJ, USA; pp. 1565–1568. [Google Scholar]
  55. Hosmer, D.W.; Lemeshow, S.; Sturdivant, R.X. Applied Logistic Regression, 3rd ed.; Wiley: Hoboken, NJ, USA, 2013. [Google Scholar]
  56. Gong, K.; Catana, C.; Qi, J.; Li, Q. PET Image Reconstruction Using Deep Image Prior. IEEE Trans. Med. Imaging 2018, 38, 1655–1665. [Google Scholar] [CrossRef]
  57. Wang, S.; Xiao, T.; Liu, Q.; Zheng, H. Deep Learning for Fast MR Imaging: A Review for Learning Reconstruction from Incomplete k-Space Data. arXiv 2020, arXiv:2012.08931. [Google Scholar] [CrossRef]
Figure 1. Conceptual flowchart of the proposed MR–PET reconstruction pipeline. Raw PET sinograms are corrected for attenuation, scatter, and randoms, processed via OSEM with MR-guided priors, and refined with joint Hankel low-rank regularization alongside accelerated TOF-MRA data.
Figure 1. Conceptual flowchart of the proposed MR–PET reconstruction pipeline. Raw PET sinograms are corrected for attenuation, scatter, and randoms, processed via OSEM with MR-guided priors, and refined with joint Hankel low-rank regularization alongside accelerated TOF-MRA data.
Applsci 15 12186 g001
Figure 2. Overview of the proposed MR–PET reconstruction network. Parallel physics-guided encoders extract modality-specific features, a transformer-based cross-attention module enables multimodal fusion, and a residual generator produces the final outputs. A dual-path discriminator and physics-consistency constraints maintain realism and fidelity to measured data.
Figure 2. Overview of the proposed MR–PET reconstruction network. Parallel physics-guided encoders extract modality-specific features, a transformer-based cross-attention module enables multimodal fusion, and a residual generator produces the final outputs. A dual-path discriminator and physics-consistency constraints maintain realism and fidelity to measured data.
Applsci 15 12186 g002
Figure 3. Training pipeline including preprocessing, synthetic undersampling, augmentation, curriculum scheduling, and final task-driven fine-tuning.
Figure 3. Training pipeline including preprocessing, synthetic undersampling, augmentation, curriculum scheduling, and final task-driven fine-tuning.
Applsci 15 12186 g003
Figure 4. Persistent homology barcodes showing superior topological preservation.
Figure 4. Persistent homology barcodes showing superior topological preservation.
Applsci 15 12186 g004
Figure 5. Bland–Altman plots showing lowest bias and tightest agreement for the proposed method.
Figure 5. Bland–Altman plots showing lowest bias and tightest agreement for the proposed method.
Applsci 15 12186 g005
Figure 6. Qualitative MR angiography reconstructions at R = 10 . Yellow arrows show preserved distal vessels. All images are min–max normalized for visual comparison (MRA: [0, 1]), and scale bars indicate the intensity range. Yellow boxes indicate magnified regions of interest (ROIs), and yellow arrows highlight small vascular structures for visual comparison.
Figure 6. Qualitative MR angiography reconstructions at R = 10 . Yellow arrows show preserved distal vessels. All images are min–max normalized for visual comparison (MRA: [0, 1]), and scale bars indicate the intensity range. Yellow boxes indicate magnified regions of interest (ROIs), and yellow arrows highlight small vascular structures for visual comparison.
Applsci 15 12186 g006
Figure 7. Qualitative PET reconstructions at 6.25% counts. Hypoperfused regions remain visible. All images are min–max normalized for visual comparison (MRA: [0, 1]), and scale bars indicate the intimages are min–max normalized for visual comparison (PET in SUV units), and scale bars indicate the intensity range. Red boxes denote regions of interest (ROIs) shown at higher magnification, and red arrows indicate the corresponding zoomed areas or regions of hypoperfusion.
Figure 7. Qualitative PET reconstructions at 6.25% counts. Hypoperfused regions remain visible. All images are min–max normalized for visual comparison (MRA: [0, 1]), and scale bars indicate the intimages are min–max normalized for visual comparison (PET in SUV units), and scale bars indicate the intensity range. Red boxes denote regions of interest (ROIs) shown at higher magnification, and red arrows indicate the corresponding zoomed areas or regions of hypoperfusion.
Applsci 15 12186 g007
Figure 8. Phantom experiment demonstrating noise suppression and vascular clarity. PET and MRA images are min–max normalized; scale bars indicate intensity ranges. From left to right: (a) Ground-truth PET, (b) Low-count PET (6.25%), and (c) MR-guided reconstruction. PET activity is color-coded from low (dark) to high (bright yellow) intensity.
Figure 8. Phantom experiment demonstrating noise suppression and vascular clarity. PET and MRA images are min–max normalized; scale bars indicate intensity ranges. From left to right: (a) Ground-truth PET, (b) Low-count PET (6.25%), and (c) MR-guided reconstruction. PET activity is color-coded from low (dark) to high (bright yellow) intensity.
Applsci 15 12186 g008
Figure 9. Clinical multimodal validation. Images are min–max normalized (MRA: [0, 1], PET: SUV units); scale bars indicate intensity ranges. Color in PET and fused MR–PET images represents tracer uptake (SUV) or perfusion intensity, ranging from low (blue) to high (red) values.
Figure 9. Clinical multimodal validation. Images are min–max normalized (MRA: [0, 1], PET: SUV units); scale bars indicate intensity ranges. Color in PET and fused MR–PET images represents tracer uptake (SUV) or perfusion intensity, ranging from low (blue) to high (red) values.
Applsci 15 12186 g009
Table 1. Loss weighting coefficients used in the final training configuration.
Table 1. Loss weighting coefficients used in the final training configuration.
Loss Component L pix L VGG L adv L topo L task
Weight ( λ )10010151
Table 2. Generic image quality metrics for MR angiography reconstructions. Mean ± std over all test cases. Boldface indicates best performance.
Table 2. Generic image quality metrics for MR angiography reconstructions. Mean ± std over all test cases. Boldface indicates best performance.
MethodR = 4R = 6R = 8R = 10
PSNR [dB] SSIM PSNR [dB] SSIM PSNR [dB] SSIM PSNR [dB] SSIM
SENSE36.12 ± 1.80.925 ± 0.0232.42 ± 2.00.874 ± 0.0330.45 ± 2.10.852 ± 0.0328.32 ± 2.50.801 ± 0.04
GRAPPA36.45 ± 1.70.928 ± 0.0232.85 ± 1.90.879 ± 0.0330.81 ± 2.00.856 ± 0.0328.50 ± 2.40.804 ± 0.04
CS-Wavelet37.52 ± 1.60.935 ± 0.0233.85 ± 1.80.884 ± 0.0332.35 ± 1.90.872 ± 0.0330.45 ± 2.20.825 ± 0.04
CS-Hankel38.10 ± 1.50.942 ± 0.0234.42 ± 1.70.891 ± 0.0233.12 ± 1.80.883 ± 0.0331.02 ± 2.10.835 ± 0.03
U-Net SR39.05 ± 1.40.948 ± 0.0235.02 ± 1.60.897 ± 0.0234.45 ± 1.70.891 ± 0.0232.05 ± 1.90.842 ± 0.03
WGAN-GP SR39.55 ± 1.30.954 ± 0.0235.85 ± 1.50.905 ± 0.0235.12 ± 1.60.898 ± 0.0232.95 ± 1.80.848 ± 0.03
Proposed40.85 ± 1.20.962 ± 0.0137.12 ± 1.40.918 ± 0.0236.45 ± 1.50.912 ± 0.0234.25 ± 1.70.865 ± 0.02
Note: Statistical significance was confirmed versus all baselines ( p < 0.05 , paired t-test). Values are reported as mean ± standard deviation.
Table 3. Vascular-specific metrics for MR angiography at R = 10 . Higher values are better.
Table 3. Vascular-specific metrics for MR angiography at R = 10 . Higher values are better.
MethodVSIBCR [%]TS
SENSE0.742 ± 0.0468.2 ± 5.10.782 ± 0.03
U-Net SR0.804 ± 0.0375.8 ± 4.30.829 ± 0.02
Proposed0.845 ± 0.0287.4 ± 3.90.884 ± 0.02
Note: The proposed method significantly outperformed baselines ( p < 0.05 , paired t-test). Values are reported as mean ± standard deviation.
Table 4. PET functional metrics at 6.25% count level (mean ± std). Higher is better for all metrics.
Table 4. PET functional metrics at 6.25% count level (mean ± std). Higher is better for all metrics.
MethodSUVRRCRCMGF
Gaussian Filter0.821 ± 0.050.782 ± 0.060.745 ± 0.05
WGAN-GP SR0.882 ± 0.030.826 ± 0.030.781 ± 0.03
Proposed0.912 ± 0.020.854 ± 0.020.812 ± 0.02
Note: Statistical significance confirmed vs. all baselines ( p < 0.05 , paired t-test). Mean ± standard deviation reported.
Table 5. Clinically relevant task performance. LVO detection and AUC are percentages; collateral κ is Cohen’s kappa.
Table 5. Clinically relevant task performance. LVO detection and AUC are percentages; collateral κ is Cohen’s kappa.
MethodLVO Acc. [%]Collateral κ Mismatch AUC
WGAN-GP SR88.90.770.874
Proposed93.50.820.905
Note: Proposed method significantly outperformed baselines ( p < 0.05 , paired t-test).
Table 6. Ablation study results at R = 8 (MR) and 12.5% counts (PET).
Table 6. Ablation study results at R = 8 (MR) and 12.5% counts (PET).
Model VariantPSNR [dB]VSISUVRR
Full Model36.450.8320.901
– Hankel Prior34.850.7980.876
– Cross-Attn35.420.8110.884
– Physics Cons.35.050.8040.879
Note: Removing each component reduced performance, confirming their complementary roles.
Table 7. Clinical vs. synthetic subset performance at R = 10 . Mean ± std shown.
Table 7. Clinical vs. synthetic subset performance at R = 10 . Mean ± std shown.
SubsetPSNR [dB]SSIMVSI
Clinical (WGAN-GP)32.55 ± 1.90.845 ± 0.030.801 ± 0.03
Clinical (Proposed)34.18 ± 1.60.866 ± 0.020.842 ± 0.02
Synthetic (WGAN-GP)33.08 ± 1.80.846 ± 0.030.804 ± 0.03
Synthetic (Proposed)34.30 ± 1.70.864 ± 0.020.847 ± 0.02
Note: Proposed vs. baseline statistically significant in both subsets ( p < 0.05 , paired t-test).
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.

Share and Cite

MDPI and ACS Style

Malczewski, K. Vascular-Aware Multimodal MR–PET Reconstruction for Early Stroke Detection: A Physics-Informed, Topology-Preserving, Adversarial Super-Resolution Framework. Appl. Sci. 2025, 15, 12186. https://doi.org/10.3390/app152212186

AMA Style

Malczewski K. Vascular-Aware Multimodal MR–PET Reconstruction for Early Stroke Detection: A Physics-Informed, Topology-Preserving, Adversarial Super-Resolution Framework. Applied Sciences. 2025; 15(22):12186. https://doi.org/10.3390/app152212186

Chicago/Turabian Style

Malczewski, Krzysztof. 2025. "Vascular-Aware Multimodal MR–PET Reconstruction for Early Stroke Detection: A Physics-Informed, Topology-Preserving, Adversarial Super-Resolution Framework" Applied Sciences 15, no. 22: 12186. https://doi.org/10.3390/app152212186

APA Style

Malczewski, K. (2025). Vascular-Aware Multimodal MR–PET Reconstruction for Early Stroke Detection: A Physics-Informed, Topology-Preserving, Adversarial Super-Resolution Framework. Applied Sciences, 15(22), 12186. https://doi.org/10.3390/app152212186

Note that from the first issue of 2016, this journal uses article numbers instead of page numbers. See further details here.

Article Metrics

Back to TopTop