Hybrid-Frequency-Aware Mixture-of-Experts Method for CT Metal Artifact Reduction
Abstract
1. Introduction
- We propose HFMoE, a unified network that integrates wavelet and Fourier transforms to effectively exploit the complementary benefits of spatial-frequency and global spectral information for robust CT-MAR.
- A hybrid-frequency interaction encoder is developed to capture multi-scale features, which incorporates concurrent wavelet, Fourier, and cascaded wavelet-Fourier modulation branches to facilitate multi-resolution spectral refinement, ensuring the restoration of local textures, global structures, and the complex cross-domain dependencies between them.
- A Frequency-Aware MoE strategy is implemented in the decoder to dynamically allocate frequency-specific experts according to the degradation severity of the input, enabling adaptive artifact correction.
- Extensive evaluations on multiple benchmarks, including synthesized and clinical datasets, validate that our method outperforms existing state-of-the-art methods in terms of both quantitative metrics and visual quality.
2. Related Works
2.1. CT Metal Artifact Reduction
2.2. Frequency-Domain Modeling for Image Restoration
2.3. Mixture-of-Experts in Low-Level Vision
3. Materials and Methods
3.1. Frequency-Domain Foundations: DWT and FFT
3.1.1. Discrete Wavelet Transform for Localized Analysis
3.1.2. Fast Fourier Transform for Global Context
3.1.3. Motivation for a Hybrid Frequency-Aware Design
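The contrast motivating this hybrid design can be illustrated with a minimal NumPy sketch (a toy single-level Haar DWT, not the paper's implementation): a wavelet coefficient responds only to its local support, while a single-pixel change perturbs every Fourier coefficient.

```python
import numpy as np

def haar_dwt2(x):
    """Single-level 2D Haar DWT: returns (LL, LH, HL, HH) subbands."""
    a = (x[0::2, 0::2] + x[1::2, 0::2]) / 2   # row-pair averages, even cols
    b = (x[0::2, 1::2] + x[1::2, 1::2]) / 2   # row-pair averages, odd cols
    c = (x[0::2, 0::2] - x[1::2, 0::2]) / 2   # row-pair details, even cols
    d = (x[0::2, 1::2] - x[1::2, 1::2]) / 2   # row-pair details, odd cols
    return (a + b) / 2, (a - b) / 2, (c + d) / 2, (c - d) / 2

x = np.zeros((8, 8))
x[4, 4] = 1.0                                  # a single-pixel perturbation
LL, LH, HL, HH = haar_dwt2(x)
print(np.count_nonzero(LL))                    # → 1: strictly local response
F = np.fft.fft2(x)
print(np.count_nonzero(np.abs(F) > 1e-9))      # → 64: every bin is affected
```

This locality/globality split is exactly why streaks with long spatial extent are easier to isolate spectrally, while fine textures are easier to protect in the wavelet domain.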
3.2. Overview
3.3. Hybrid-Frequency Interaction Module (HFIM)
- Branch 1: Wavelet-Domain Modulation. This branch focuses on localized details. The input feature is decomposed via DWT into four subbands (LL, LH, HL, HH). The high-frequency (HF) subbands are concatenated with the low-frequency (LF) component and processed by depthwise-separable convolution blocks.
- Branch 2: Fourier-Domain Modulation. This branch targets globally distributed artifacts. The feature is transformed via the 2D FFT, a learnable frequency filter performs element-wise modulation of the spectrum, and the inverse FFT (iFFT) maps the result back to the spatial domain.
- Branch 3: Cascaded Wavelet–Fourier Modulation. This branch models interactions across scales and frequencies. The HF subbands from an initial DWT of the feature are first transformed to the Fourier domain, filtered, and then transformed back to the wavelet domain.
- The final output is obtained by applying the inverse wavelet transform (IWT) to the refined subbands.
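The three branches can be sketched schematically in NumPy. This is a toy illustration, not the paper's layers: the learned depthwise-separable convolutions and frequency filters are replaced by fixed scalar gains, and fusing the branches by averaging is an assumption; only the DWT/FFT plumbing follows the text.

```python
import numpy as np

def dwt2(x):
    """Single-level Haar DWT → (LL, LH, HL, HH)."""
    a = (x[0::2, 0::2] + x[1::2, 0::2]) / 2
    b = (x[0::2, 1::2] + x[1::2, 1::2]) / 2
    c = (x[0::2, 0::2] - x[1::2, 0::2]) / 2
    d = (x[0::2, 1::2] - x[1::2, 1::2]) / 2
    return (a + b) / 2, (a - b) / 2, (c + d) / 2, (c - d) / 2

def iwt2(LL, LH, HL, HH):
    """Exact inverse of dwt2."""
    a, b, c, d = LL + LH, LL - LH, HL + HH, HL - HH
    x = np.empty((2 * LL.shape[0], 2 * LL.shape[1]))
    x[0::2, 0::2], x[1::2, 0::2] = a + c, a - c
    x[0::2, 1::2], x[1::2, 1::2] = b + d, b - d
    return x

def fourier_modulate(x, gain):
    """Element-wise spectral modulation followed by the inverse FFT."""
    return np.real(np.fft.ifft2(gain * np.fft.fft2(x)))

def hfim(x, g_wav=0.5, g_fft=0.9, g_cas=0.8):
    LL, LH, HL, HH = dwt2(x)
    b1 = iwt2(LL, g_wav * LH, g_wav * HL, g_wav * HH)   # Branch 1: wavelet
    b2 = fourier_modulate(x, g_fft)                      # Branch 2: Fourier
    b3 = iwt2(LL, *(fourier_modulate(s, g_cas)           # Branch 3: cascaded
                    for s in (LH, HL, HH)))              # wavelet → Fourier
    return (b1 + b2 + b3) / 3.0                          # illustrative fusion

x = np.random.default_rng(0).standard_normal((8, 8))
assert np.allclose(iwt2(*dwt2(x)), x)                    # DWT/IWT round-trip
print(hfim(x).shape)                                     # → (8, 8)
```

The round-trip assertion checks that the toy DWT/IWT pair is lossless, so any change in the output is attributable to the modulation steps alone.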
3.4. Frequency-Aware Mixture-of-Experts Module (FAME)
- Expert Design. Our expert pool consists of three types of experts with heterogeneous receptive fields designed to handle different degradation patterns via multi-scale feature arbitration:
- Local Expert: Employs pointwise convolution (1 × 1 Conv) for pixel-wise adjustment, minimizing the risk of blurring fine textures when restoring artifact-free regions.
- Wavelet Expert: Applies a DWT–IWT sandwich with a depthwise convolution in the wavelet domain, providing mid-range context to suppress localized streaks while preserving subband edges.
- Fourier Expert: Approximates global self-attention via Fourier autocorrelation, offering a holistic receptive field to neutralize large-scale streaks that span the image.
- Each expert processes its routed feature and produces a corresponding output.
- Frequency-Aware Routing. The gating network facilitates input-conditional execution by mapping frequency signatures to the most appropriate expert. A lightweight router, implemented as a linear layer followed by softmax, generates a routing weight vector over the three experts. For each spatial position (or token), only the expert with the highest weight is activated (Top-1 routing). By dynamically selecting the optimal receptive field for each token, the model circumvents the limitations of static architectures, thereby ensuring robust MAR performance across varying levels of artifact severity. To encourage balanced utilization and align expert choice with frequency content, we employ an auxiliary load-balancing loss [41].
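The routing logic can be sketched as follows. The toy lambda experts and the Switch-Transformer-style load-balancing term are illustrative stand-ins for the paper's expert pool and the auxiliary loss of [41], not the actual implementation.

```python
import numpy as np

def softmax(z):
    e = np.exp(z - z.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

# Toy stand-ins for the local / wavelet / Fourier experts.
experts = [
    lambda t: t,              # local: pixel-wise (identity-like) adjustment
    lambda t: 0.5 * t,        # wavelet: mid-range attenuation stand-in
    lambda t: t - t.mean(),   # fourier: global, image-wide correction stand-in
]

def fame(tokens, W):
    """Top-1 routing: each token executes only its highest-weight expert."""
    probs = softmax(tokens @ W)              # (N, 3) routing weights
    choice = probs.argmax(axis=-1)           # Top-1 expert index per token
    out = np.empty_like(tokens)
    for i, expert in enumerate(experts):
        mask = choice == i
        if mask.any():
            out[mask] = expert(tokens[mask])
    # Load-balancing auxiliary loss (Switch-Transformer style):
    # n_experts * sum(fraction_of_tokens_routed * mean_routing_prob).
    frac = np.bincount(choice, minlength=len(experts)) / len(choice)
    aux = len(experts) * float(np.sum(frac * probs.mean(axis=0)))
    return out, aux

rng = np.random.default_rng(0)
tokens = rng.standard_normal((32, 4))        # 32 tokens, 4-dim features
out, aux = fame(tokens, rng.standard_normal((4, 3)))
print(out.shape, aux > 0)                    # → (32, 4) True
```

Because only the chosen expert runs per token, compute stays roughly constant regardless of how many experts exist; the auxiliary term discourages the router from collapsing onto a single expert.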
3.5. Loss Functions
- 1. Reconstruction Loss: A pixel-wise loss [43] ensures fidelity to the ground truth, weighted separately inside and outside the metal trace region.
- 2. Hybrid Frequency Loss: This loss enforces consistency in both the wavelet and Fourier domains by penalizing discrepancies between the high-frequency wavelet subbands, and between the Fourier magnitude spectra, of the prediction and the ground truth.
- 3. Routing Auxiliary Loss: Adopted from [41], this loss balances expert utilization and encourages alignment between routing decisions and task complexity.
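The first two terms can be sketched in NumPy as follows; the inside/outside weights and the lambda coefficients are illustrative placeholders rather than the paper's values, and the routing auxiliary term is omitted here.

```python
import numpy as np

def haar_hf(x):
    """High-frequency Haar subbands (LH, HL, HH), stacked."""
    a = (x[0::2, 0::2] + x[1::2, 0::2]) / 2
    b = (x[0::2, 1::2] + x[1::2, 1::2]) / 2
    c = (x[0::2, 0::2] - x[1::2, 0::2]) / 2
    d = (x[0::2, 1::2] - x[1::2, 1::2]) / 2
    return np.stack([(a - b) / 2, (c + d) / 2, (c - d) / 2])

def total_loss(pred, gt, metal_mask, w_in=2.0, w_out=1.0, lam_w=0.1, lam_f=0.1):
    # 1. Reconstruction loss, weighted inside vs. outside the metal trace.
    w = np.where(metal_mask, w_in, w_out)
    l_rec = float(np.mean(w * np.abs(pred - gt)))
    # 2. Hybrid frequency loss: wavelet HF subbands + Fourier magnitude spectra.
    l_wav = float(np.mean(np.abs(haar_hf(pred) - haar_hf(gt))))
    l_fft = float(np.mean(np.abs(np.abs(np.fft.fft2(pred))
                                 - np.abs(np.fft.fft2(gt)))))
    return l_rec + lam_w * l_wav + lam_f * l_fft

rng = np.random.default_rng(1)
gt = rng.standard_normal((16, 16))
pred = gt + 0.01 * rng.standard_normal((16, 16))   # near-perfect prediction
mask = np.zeros((16, 16), dtype=bool)
mask[6:10, 6:10] = True                            # toy metal trace
print(total_loss(pred, gt, mask) < 0.1)            # → True (small residual)
```

Comparing magnitude spectra rather than complex spectra makes the Fourier term insensitive to small phase shifts, which keeps it focused on streak energy rather than sub-pixel alignment.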
4. Experiments
4.1. Physically-Informed Artifact Simulation and Clinical Dataset
- Synthesized DeepLesion Dataset. Following the protocol established in [6,44], we constructed a synthesized dataset based on the DeepLesion collection. From 1200 randomly sampled metal-free abdominal/thoracic CT images, 1000 were combined with 90 distinct implant types [7] to generate the paired clean and metal-corrupted training volumes. To faithfully replicate complex clinical scenarios, the pipeline integrated a polychromatic X-ray model that couples beam-hardening and partial-volume effects with realistic Poisson noise in the detected photon counts, while extensively randomizing metal mask size, orientation, and position within diverse anatomical regions to reflect clinical variations in patient habitus and implant trajectories. For evaluation, 200 pairs featuring 10 unseen implant types were reserved; these test implants were chosen for their physical attenuation characteristics and geometric morphology rather than specific chemical compositions, and covered a wide range of spatial scales, from 35 pixels (tiny fragments) to 2061 pixels (large prostheses). Consistent with prior works [6,44], all volumes were resampled into 640 projection views under a standard fan-beam geometry and reconstructed at a resolution of 416 × 416 pixels.
- Clinical SpineWeb Dataset (available at https://csi-workshop.weebly.com/ or https://zenodo.org/records/7049844, accessed on 1 June 2014). To validate clinical generalization ability, we employed the SpineWeb dataset as an external testbed. This dataset consists of post-operative spinal CT scans with thoracolumbar instrumentation. We selected a set of spine volumes with implants that were not used in training. Metal implants were segmented using a standard threshold of 2500 HU. This dataset served to evaluate the model’s performance on real-world metallic hardware distinct from the training distribution.
- Clinical CLINIC-metal Dataset. We also utilized the CLINIC-metal pelvic CT dataset [45] to test generalization to different anatomies. This multi-center dataset contained pelvic CT scans with severe artifacts, which challenged the model across different anatomies and acquisition protocols.
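Two preprocessing steps described above can be sketched in NumPy: injecting transmission Poisson noise into a line-integral sinogram (the photon count `n0` below is an illustrative placeholder, not the paper's setting) and thresholding metal at 2500 HU.

```python
import numpy as np

def add_poisson_noise(sino, n0=2e7, seed=0):
    """Transmission Poisson noise on a sinogram of line integrals."""
    rng = np.random.default_rng(seed)
    counts = rng.poisson(n0 * np.exp(-sino))   # detected photon counts
    counts = np.maximum(counts, 1)             # guard against log(0)
    return -np.log(counts / n0)                # back to line integrals

def segment_metal(hu, thr=2500.0):
    """Binary metal mask from a CT image in Hounsfield units."""
    return hu >= thr

sino = np.full((4, 8), 2.0)                    # toy sinogram
noisy = add_poisson_noise(sino)
print(np.allclose(noisy, sino, atol=0.01))     # → True: mild noise at high flux
hu = np.array([[100.0, 3000.0], [2500.0, -1000.0]])
print(segment_metal(hu))                       # metal where HU >= 2500
```

Note that the log-domain noise grows as the line integral increases (fewer detected photons), which is exactly why thick metal traces produce the strongest streaks.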
4.2. Implementation Details
4.3. Experimental Results and Discussion
4.3.1. Evaluation Metrics
4.3.2. Quantitative Evaluation
4.3.3. Qualitative Evaluation
- Synthesized DeepLesion. Figure 3 presents the reconstruction results for cases with varying implant sizes. HFMoE demonstrates robust artifact suppression, particularly in regions surrounding large implants. While competing methods often leave noticeable residual streaks (indicated by circles), our approach effectively mitigates these artifacts and better preserves anatomical boundaries. For small implants, HFMoE also achieves higher texture fidelity compared to existing methods.
- Clinical Datasets. Qualitative results on the clinical SpineWeb and CLINIC-metal datasets are presented in Figure 4 and Figure 5, respectively. Unlike synthetic benchmarks, these clinical datasets involve real-world metal implants and noise patterns that are complex and difficult to model, presenting substantial challenges for robust artifact reduction. On SpineWeb, HFMoE effectively reduces beam-hardening streaks induced by spinal fixation hardware. It suppresses long-range artifacts while maintaining the continuity and visibility of vertebral structures, in contrast to baseline methods that either leave residual streaks or over-smooth anatomical details. On the CLINIC-metal dataset, which contains complex pelvic anatomy and diverse implant configurations, our method demonstrates robustness to various degradation patterns and generalizes well across different tissue textures, indicating its capacity to handle heterogeneous clinical scenarios. Moreover, HFMoE consistently achieves an effective trade-off between noise suppression and detail preservation. It preserves tissue boundaries and structural sharpness more effectively than other methods, which often introduce blurring, texture loss, or local distortions when confronted with severe noise and unknown implants.
4.4. Computational Efficiency
4.5. Ablation Study
4.5.1. Efficacy of HFIM
4.5.2. Role of FAME
4.5.3. Impact of Loss Function
5. Discussion
6. Conclusions
Author Contributions
Funding
Data Availability Statement
Acknowledgments
Conflicts of Interest
References
1. Wang, H.; Li, Y.; Zhang, H.; Chen, J.; Ma, K.; Meng, D.; Zheng, Y. InDuDoNet: An interpretable dual domain network for CT metal artifact reduction. In Proceedings of the Medical Image Computing and Computer Assisted Intervention—MICCAI 2021, Strasbourg, France, 27 September–1 October 2021; pp. 107–118.
2. Kalender, W.A.; Hebel, R.; Ebersberger, J. Reduction of CT artifacts caused by metallic implants. Radiology 1987, 164, 576–577.
3. Meyer, E.; Raupach, R.; Lell, M.; Schmidt, B.; Kachelrieß, M. Normalized metal artifact reduction (NMAR) in computed tomography. Med. Phys. 2010, 37, 5482–5493.
4. Tian, C.; Cheng, T.; Peng, Z.; Zuo, W.; Tian, Y.; Zhang, Q.; Wang, F.Y.; Zhang, D. A survey on deep learning fundamentals. Artif. Intell. Rev. 2025, 58, 381.
5. Wang, H.; Li, Y.; Meng, D.; Zheng, Y. Adaptive Convolutional Dictionary Network for CT Metal Artifact Reduction. In Proceedings of the 31st International Joint Conference on Artificial Intelligence—IJCAI, Vienna, Austria, 23–29 July 2022; pp. 1401–1407.
6. Wang, H.; Xie, Q.; Zeng, D.; Ma, J.; Meng, D.; Zheng, Y. OSCNet: Orientation-Shared Convolutional Network for CT Metal Artifact Learning. IEEE Trans. Med. Imaging 2023, 43, 489–502.
7. Zhang, Y.; Yu, H. Convolutional neural network based metal artifact reduction in X-ray computed tomography. IEEE Trans. Med. Imaging 2018, 37, 1370–1381.
8. Ghani, M.U.; Karl, W.C. Fast enhanced CT metal artifact reduction using data domain deep learning. IEEE Trans. Comput. Imaging 2019, 6, 181–193.
9. Lin, W.A.; Liao, H.; Peng, C.; Sun, X.; Zhang, J.; Luo, J.; Chellappa, R.; Zhou, S.K. DuDoNet: Dual domain network for CT metal artifact reduction. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), Long Beach, CA, USA, 15–20 June 2019; pp. 10512–10521.
10. Zhou, B.; Chen, X.; Zhou, S.K.; Duncan, J.S.; Liu, C. DuDoDR-Net: Dual-domain data consistent recurrent network for simultaneous sparse view and metal artifact reduction in computed tomography. Med. Image Anal. 2022, 75, 102289.
11. Wang, H.; Yang, S.; Bai, X.; Wang, Z.; Wu, J.; Lv, Y.; Cao, G. IRDNet: Iterative Relation-Based Dual-Domain Network via Metal Artifact Feature Guidance for CT Metal Artifact Reduction. IEEE Trans. Radiat. Plasma Med. Sci. 2024, 8, 959–972.
12. Zheng, S.; Zhang, D.; Yu, C.; Jia, L.; Zhu, L.; Huang, Z.; Zhu, D.; Yu, H. MAReraser: Metal Artifact Reduction with Image Prior Using CNN and Transformer Together. In Proceedings of the 2024 IEEE International Conference on Bioinformatics and Biomedicine (BIBM), Lisbon, Portugal, 3–6 December 2024; pp. 4060–4065.
13. Yao, X.; Tan, J.; Deng, Z.; Xiong, D.; Zhao, Q.; Wu, M. MUPO-Net: A Multilevel Dual-domain Progressive Enhancement Network with Embedded Attention for CT Metal Artifact Reduction. In Proceedings of the ICASSP 2025—2025 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), Hyderabad, India, 6–11 April 2025; pp. 1–5.
14. Tian, C.; Zheng, M.; Li, B.; Zhang, Y.; Zhang, S.; Zhang, D. Perceptive self-supervised learning network for noisy image watermark removal. IEEE Trans. Circuits Syst. Video Technol. 2024, 34, 7069–7079.
15. Tian, C.; Zheng, M.; Lin, C.W.; Li, Z.; Zhang, D. Heterogeneous window transformer for image denoising. IEEE Trans. Syst. Man Cybern. Syst. 2024, 54, 6621–6632.
16. Tian, C.; Song, M.; Zuo, W.; Du, B.; Zhang, Y.; Zhang, S. Application of convolutional neural networks in image super-resolution. CAAI Trans. Intell. Syst. 2025, 20, 719–749.
17. Tian, C.; Xie, J.; Li, L.; Zuo, W.; Zhang, Y.; Zhang, D. A Perception CNN for Facial Expression Recognition. IEEE Trans. Image Process. 2025, 34, 8101–8113.
18. Lyu, Y.; Lin, W.A.; Liao, H.; Lu, J.; Zhou, S.K. Encoding metal mask projection for metal artifact reduction in computed tomography. In Proceedings of the Medical Image Computing and Computer Assisted Intervention—MICCAI 2020, Lima, Peru, 4–8 October 2020; pp. 147–157.
19. Li, Z.; Gao, Q.; Wu, Y.; Niu, C.; Zhang, J.; Wang, M.; Wang, G.; Shan, H. Quad-Net: Quad-domain network for CT metal artifact reduction. IEEE Trans. Med. Imaging 2024, 43, 1866–1879.
20. Liu, P.; Zhang, H.; Tian, C.; Jiang, F.; Zhang, Y.; Zuo, W. Tri-Domain Filtering Transformer for CT Metal Artifact Reduction. IEEE Trans. Radiat. Plasma Med. Sci. 2025.
21. Peng, C.; Qiu, B.; Li, M.; Yang, Y.; Zhang, C.; Gong, L.; Zheng, J. GPU-accelerated dynamic wavelet thresholding algorithm for X-ray CT metal artifact reduction. IEEE Trans. Radiat. Plasma Med. Sci. 2017, 2, 17–26.
22. Liu, P.; Zhang, H.; Zhang, K.; Lin, L.; Zuo, W. Multi-level wavelet-CNN for image restoration. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Salt Lake City, UT, USA, 18–22 June 2018; pp. 773–782.
23. Cui, Y.; Ren, W.; Cao, X.; Knoll, A. Image restoration via frequency selection. IEEE Trans. Pattern Anal. Mach. Intell. 2023, 46, 1093–1108.
24. Dai, T.; Wang, J.; Guo, H.; Li, J.; Wang, J.; Zhu, Z. FreqFormer: Frequency-aware transformer for lightweight image super-resolution. In Proceedings of the Thirty-Third International Joint Conference on Artificial Intelligence (IJCAI-24), Jeju, Republic of Korea, 3–9 August 2024; pp. 731–739.
25. Tian, C.; Zhang, X.; Zhang, Q.; Yang, M.; Ju, Z. Image super-resolution via dynamic network. CAAI Trans. Intell. Technol. 2024, 9, 837–849.
26. Suvorov, R.; Logacheva, E.; Mashikhin, A.; Remizova, A.; Ashukha, A.; Silvestrov, A.; Kong, N.; Goka, H.; Park, K.; Lempitsky, V. Resolution-Robust Large Mask Inpainting With Fourier Convolutions. In Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision (WACV), Waikoloa, HI, USA, 3–8 January 2022; pp. 2149–2159.
27. Liu, T.; Li, B.; Du, X.; Jiang, B.; Geng, L.; Wang, F.; Zhao, Z. FAIR: Frequency-aware image restoration for industrial visual anomaly detection. arXiv 2023, arXiv:2309.07068.
28. Tian, C.; Liu, K.; Zhang, B.; Huang, Z.; Lin, C.W.; Zhang, D. A Dynamic Transformer Network for Vehicle Detection. IEEE Trans. Consum. Electron. 2025, 71, 2387–2394.
29. Deeba, F.; Kun, S.; Ali Dharejo, F.; Zhou, Y. Wavelet-Based Enhanced Medical Image Super Resolution. IEEE Access 2020, 8, 37035–37044.
30. Dharejo, F.A.; Zawish, M.; Deeba, F.; Zhou, Y.; Dev, K.; Khowaja, S.A.; Qureshi, N.M.F. Multimodal-boost: Multimodal medical image super-resolution using multi-attention network with wavelet transform. IEEE/ACM Trans. Comput. Biol. Bioinform. 2022, 20, 2420–2433.
31. Jiang, X.; Zhang, X.; Gao, N.; Deng, Y. When fast fourier transform meets transformer for image restoration. In Computer Vision—ECCV 2024, 18th European Conference, Milan, Italy, 29 September–4 October 2024; Springer: Cham, Switzerland, 2024; pp. 381–402.
32. Zhao, C.; Cai, W.; Dong, C.; Hu, C. Wavelet-based fourier information interaction with frequency diffusion adjustment for underwater image restoration. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), Seattle, WA, USA, 16–22 June 2024; pp. 8281–8291.
33. Ren, Y.; Li, X.; Li, B.; Wang, X.; Guo, M.; Zhao, S.; Zhang, L.; Chen, Z. MoE-DiffIR: Task-Customized Diffusion Priors for Universal Compressed Image Restoration. In Computer Vision—ECCV 2024, 18th European Conference, Milan, Italy, 29 September–4 October 2024; Leonardis, A., Ricci, E., Roth, S., Russakovsky, O., Sattler, T., Varol, G., Eds.; Springer: Cham, Switzerland, 2024; Volume 15067, pp. 116–134.
34. Mandal, D.; Chattopadhyay, S.; Tong, G.; Chakravarthula, P. UniCoRN: Latent Diffusion-based Unified Controllable Image Restoration Network across Multiple Degradations. arXiv 2025, arXiv:2503.15868.
35. Lin, J.; Zhang, Z.; Li, W.; Pei, R.; Xu, H.; Zhang, H.; Zuo, W. UniRestorer: Universal Image Restoration via Adaptively Estimating Image Degradation at Proper Granularity. arXiv 2024, arXiv:2412.20157.
36. An, T.; Gao, H.; Liu, R.; Dai, K.; Xie, T.; Li, R.; Wang, K.; Zhao, L. An MoE-Driven Unified Image Restoration Framework for Adverse Weather Conditions. IEEE Trans. Circuits Syst. Video Technol. 2025, in press.
37. Yang, Z.; Chen, H.; Qian, Z.; Yi, Y.; Zhang, H.; Zhao, D.; Wei, B.; Xu, Y. All-in-one medical image restoration via task-adaptive routing. In Medical Image Computing and Computer Assisted Intervention—MICCAI 2024, 27th International Conference, Marrakesh, Morocco, 6–10 October 2024; Springer: Cham, Switzerland, 2024; pp. 67–77.
38. Deng, Z.; Campbell, J. Sparse mixture-of-experts for non-uniform noise reduction in MRI images. In Proceedings of the Winter Conference on Applications of Computer Vision (WACV), Tucson, AZ, USA, 28 February–6 March 2025; pp. 297–305.
39. Wang, Z.; Ru, Y.; Chetouani, A.; Chen, F.; Bauer, F.; Zhang, L.; Hans, D.; Jennane, R.; Jarraya, M.; Chen, Y.H. MoEDiff-SR: Mixture of Experts-Guided Diffusion Model for Region-Adaptive MRI Super-Resolution. arXiv 2025, arXiv:2504.07308.
40. Wang, Y.; Li, Y.; Zheng, Z.; Zhang, X.P.; Wei, M. M2Restore: Mixture-of-Experts-based Mamba-CNN Fusion Framework for All-in-One Image Restoration. arXiv 2025, arXiv:2506.07814.
41. Zamfir, E.; Wu, Z.; Mehta, N.; Tan, Y.; Paudel, D.P.; Zhang, Y.; Timofte, R. Complexity experts are task-discriminative learners for any image restoration. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), Nashville, TN, USA, 11–15 June 2025; pp. 12753–12763.
42. Tiwari, N.; Hemrajamani, N.; Goyal, D. Improved digital image watermarking algorithm based on hybrid DWT-FFT and SVD techniques. Indian J. Sci. Technol. 2017, 10, 1–7.
43. Tian, C.; Song, M.; Fan, X.; Zheng, X.; Zhang, B.; Zhang, D. A Tree-guided CNN for image super-resolution. IEEE Trans. Consum. Electron. 2025, 71, 3631–3640.
44. Yu, L.; Zhang, Z.; Li, X.; Xing, L. Deep Sinogram Completion with Image Prior for Metal Artifact Reduction in CT Images. IEEE Trans. Med. Imaging 2020, 40, 228–238.
45. Liu, P.; Han, H.; Du, Y.; Zhu, H.; Li, Y.; Gu, F.; Xiao, H.; Li, J.; Zhao, C.; Xiao, L.; et al. Deep learning to segment pelvic bones: Large-scale CT datasets and baseline models. Int. J. Comput. Assist. Radiol. Surg. 2021, 16, 749–756.
46. Cheng, T.; Bi, T.; Ji, W.; Tian, C. Graph convolutional network for image restoration: A survey. Mathematics 2024, 12, 2020.
47. Wang, T.; Xia, W.; Huang, Y.; Sun, H.; Liu, Y.; Chen, H.; Zhou, J.; Zhang, Y. Dual-domain adaptive-scaling non-local network for CT metal artifact reduction. In Medical Image Computing and Computer Assisted Intervention—MICCAI 2021, 24th International Conference, Strasbourg, France, 27 September–1 October 2021; Springer: Cham, Switzerland, 2021; pp. 243–253.
48. Wang, H.; Li, Y.; He, N.; Ma, K.; Meng, D.; Zheng, Y. DICDNet: Deep Interpretable Convolutional Dictionary Network for Metal Artifact Reduction in CT Images. IEEE Trans. Med. Imaging 2021, 41, 869–880.
49. Wang, H.; Xie, Q.; Li, Y.; Huang, Y.; Meng, D.; Zheng, Y. Orientation-Shared Convolution Representation for CT Metal Artifact Learning. In Proceedings of the Medical Image Computing and Computer Assisted Intervention—MICCAI 2022, Singapore, 18–22 September 2022; pp. 665–675.
50. Wang, H.; Li, Y.; Zhang, H.; Meng, D.; Zheng, Y. InDuDoNet+: A deep unfolding dual domain network for metal artifact reduction in CT images. Med. Image Anal. 2023, 85, 102729.
51. Liu, X.; Xie, Y.; Diao, S.; Tan, S.; Liang, X. Unsupervised CT metal artifact reduction by plugging diffusion priors in dual domains. IEEE Trans. Med. Imaging 2024, 43, 3533–3545.
52. BaoShun, S.; ShaoLei, Z.; ZhaoRan, F. Artifact Region-Aware Transformer: Global Context Helps CT Metal Artifact Reduction. IEEE Signal Process. Lett. 2024, 31, 1249–1253.
| Task | Features | ||||||||
|---|---|---|---|---|---|---|---|---|---|
| CT-MAR | 16 | 1 | 2 | 4 | 6 | 6 | 4 | 2 | 1 |
| Methods (PSNR (dB)/SSIM) | Large Metal | ⟶ | Medium Metal | ⟶ | Small Metal | Average |
|---|---|---|---|---|---|---|
| Input | 24.12/0.6761 | 26.13/0.7471 | 27.75/0.7659 | 28.53/0.7964 | 28.78/0.8076 | 27.06/0.7586 |
| LI (1987) [2] | 27.21/0.8920 | 28.31/0.9185 | 29.86/0.9464 | 30.40/0.9555 | 30.57/0.9608 | 29.27/0.9347 |
| NMAR (2010) [3] | 27.66/0.9114 | 28.81/0.9373 | 29.69/0.9465 | 30.44/0.9591 | 30.79/0.9669 | 29.48/0.9442 |
| CNNMAR (2018) [7] | 28.92/0.9433 | 29.89/0.9588 | 30.84/0.9706 | 31.11/0.9743 | 31.14/0.9752 | 30.38/0.9644 |
| DuDoNet (2019) [9] | 29.87/0.9723 | 30.60/0.9786 | 31.46/0.9839 | 31.85/0.9858 | 31.91/0.9862 | 31.14/0.9814 |
| DuDoNet++ (2020) [18] | 36.17/0.9784 | 38.34/0.9891 | 40.32/0.9913 | 41.56/0.9919 | 42.08/0.9921 | 39.69/0.9886 |
| DSCMAR (2020) [44] | 34.04/0.9343 | 33.10/0.9362 | 33.37/0.9384 | 32.75/0.9393 | 32.77/0.9395 | 33.21/0.9375 |
| DAN-Net (2021) [47] | 30.82/0.9750 | 31.30/0.9796 | 33.39/0.9852 | 35.02/0.9883 | 43.61/0.9950 | 34.83/0.9846 |
| InDuDoNet (2021) [1] | 36.74/0.9742 | 39.32/0.9893 | 41.86/0.9944 | 44.47/0.9948 | 45.01/0.9958 | 41.48/0.9897 |
| DICDNet (2021) [48] | 37.19/0.9853 | 39.53/0.9908 | 42.25/0.9941 | 44.91/0.9953 | 45.27/0.9958 | 41.83/0.9923 |
| ACDNet (2022) [5] | 37.91/0.9872 | 39.30/0.9920 | 41.14/0.9949 | 42.43/0.9961 | 42.64/0.9965 | 40.68/0.9933 |
| OSCNet (2022) [49] | 37.70/0.9883 | 39.88/0.9902 | 42.92/0.9950 | 45.04/0.9958 | 45.45/0.9962 | 42.19/0.9931 |
| InDuDoNet+ (2023) [50] | 36.28/0.9736 | 39.23/0.9872 | 41.81/0.9937 | 45.03/0.9952 | 45.15/0.9959 | 41.50/0.9891 |
| OSCNet+ (2023) [6] | 38.98/0.9897 | 40.72/0.9930 | 43.46/0.9956 | 45.51/0.9965 | 45.99/0.9968 | 42.93/0.9943 |
| DuDoDp (2024) [51] | 29.11/0.9580 | 29.30/0.9631 | 29.07/0.9668 | 28.33/0.9673 | 28.33/0.9673 | 28.83/0.9645 |
| MARFormer (2024) [52] | 40.56/0.9903 | 42.32/0.9933 | 43.90/0.9947 | 45.78/0.9956 | 45.70/0.9957 | 43.65/0.9939 |
| MoCE-IR (2025) [41] | 41.23/0.9889 | 42.44/0.9912 | 44.02/0.9923 | 45.48/0.9932 | 45.98/0.9931 | 43.86/0.9917 |
| HFMoE (Ours) | 41.70/0.9916 | 43.09/0.9940 | 44.43/0.9951 | 46.20/0.9961 | 46.71/0.9963 | 44.43/0.9946 |
| Methods | Large Metal | ⟶ | Medium Metal | ⟶ | Small Metal | All |
|---|---|---|---|---|---|---|
| MARFormer (2024) [52] | 3.4027 | 2.9308 | 2.8868 | 3.0317 | 1.8288 | 1.7979 |
| MoCE-IR (2025) [41] | 3.2286 | 2.7271 | 2.6668 | 3.0740 | 2.0365 | 1.9791 |
| HFMoE (Ours) | 3.1001 | 2.5992 | 2.5328 | 3.2013 | 2.0181 | 1.7377 |
© 2026 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license.
Liu, P.; Zhang, H.; Zhang, C.; Jiang, F. Hybrid-Frequency-Aware Mixture-of-Experts Method for CT Metal Artifact Reduction. Mathematics 2026, 14, 494. https://doi.org/10.3390/math14030494

