A Polynomial and Fourier Basis Network for Vision-Based Translation Tasks
Abstract
1. Introduction
- Motivate and demonstrate the performance gains achieved by incorporating trainable polynomial and Fourier features into the selected deep learning architecture;
- Design a versatile, lightweight architecture, trained in an unsupervised manner, that effectively enhances images captured under diverse environmental conditions, with a focus on challenging driving-related outdoor scenarios.
- Merging universal function approximation into an image translation task to substantially reduce the parameter count;
- Introducing learnable polynomial and Fourier features to better represent features in high-dimensional reconstructions (a minimal illustrative sketch follows this list);
- Introducing the first lightweight architecture trained with an unsupervised technique that can effectively handle multiple adverse imaging conditions, including atmospheric haze, rain, low illumination, and night-to-day transformations, while maintaining strong performance in both image quality and object detection.
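As a concrete illustration of the trainable polynomial and Fourier features named above, the following is a minimal, hypothetical PyTorch sketch of a per-channel activation built from both bases. The module name, truncation orders, and initialisation are illustrative assumptions, not the authors' exact design.

```python
# Hypothetical sketch of a trainable polynomial-plus-Fourier activation:
# phi(x) = sum_p a_p x^p + sum_m (b_m sin(m w x) + c_m cos(m w x)),
# with per-channel coefficients a, b, c and base frequency w learned
# jointly with the rest of the network.
import torch
import torch.nn as nn


class PolyFourierActivation(nn.Module):
    def __init__(self, channels: int, poly_order: int = 3, n_harmonics: int = 2):
        super().__init__()
        self.poly_order = poly_order
        self.n_harmonics = n_harmonics
        # Polynomial coefficients for x^1 .. x^P, one set per channel.
        self.poly = nn.Parameter(torch.zeros(channels, poly_order))
        self.poly.data[:, 0] = 1.0  # start close to the identity map
        # Fourier coefficients for sin(m*w*x), cos(m*w*x), m = 1 .. M.
        self.sin_c = nn.Parameter(torch.zeros(channels, n_harmonics))
        self.cos_c = nn.Parameter(torch.zeros(channels, n_harmonics))
        self.omega = nn.Parameter(torch.ones(channels))  # learnable base frequency

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (B, C, H, W); coefficients broadcast over the spatial dims.
        out = torch.zeros_like(x)
        for p in range(1, self.poly_order + 1):
            out = out + self.poly[:, p - 1].view(1, -1, 1, 1) * x.pow(p)
        wx = self.omega.view(1, -1, 1, 1) * x
        for m in range(1, self.n_harmonics + 1):
            out = out + self.sin_c[:, m - 1].view(1, -1, 1, 1) * torch.sin(m * wx)
            out = out + self.cos_c[:, m - 1].view(1, -1, 1, 1) * torch.cos(m * wx)
        return out
```

Initialising the linear polynomial coefficient to one keeps the layer close to an identity mapping at the start of training; the remaining coefficients then learn the nonlinear correction terms.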
A Motivation for the Proposed Approach
“Let $\mathcal{A}$ be an algebra of real continuous functions on a compact set $K$. If $\mathcal{A}$ separates points on $K$ and if $\mathcal{A}$ vanishes at no point of $K$, then the uniform closure $\mathcal{B}$ of $\mathcal{A}$ consists of all real continuous functions on $K$”.
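Read in this context, the Stone–Weierstrass theorem guarantees that an algebra such as the polynomials is uniformly dense in the continuous functions on a compact set, while truncated Fourier series play the analogous role for periodic targets. Purely as an illustrative assumption (the network's exact expansion is defined in the methodology), a trainable activation drawing on both bases can be written as

$$
\phi(x) \;=\; \sum_{p=1}^{P} a_p\, x^{p} \;+\; \sum_{m=1}^{M} \bigl( b_m \sin(m\,\omega x) + c_m \cos(m\,\omega x) \bigr),
$$

where the coefficients $a_p$, $b_m$, $c_m$ and the base frequency $\omega$ are trained jointly with the network; the truncation orders $P$ and $M$ are hypothetical symbols introduced here, not taken from the paper.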
2. Related Work
2.1. Image Restoration Networks
2.2. Adaptive Activation Functions
3. Methodology
3.1. The Encoder/Decoder Modules
3.2. Training the Overall Structure
4. Experimental Results and Discussion
4.1. Dehazing
4.2. Deraining
4.3. Dark to Bright
4.4. Night to Day
5. Ablation and Other Experiments
Limitations
6. Conclusions
Author Contributions
Funding
Data Availability Statement
Acknowledgments
Conflicts of Interest
References
- Zheng, D.; Wu, X.M.; Yang, S.; Zhang, J.; Hu, J.F.; Zheng, W.S. Selective Hourglass Mapping for Universal Image Restoration Based on Diffusion Model. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, Seattle, WA, USA, 16–22 June 2024; pp. 25445–25455. [Google Scholar]
- Tu, Z.; Talebi, H.; Zhang, H.; Yang, F.; Milanfar, P.; Bovik, A.; Li, Y. Maxim: Multi-axis MLP for image processing. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, New Orleans, LA, USA, 18–24 June 2022; pp. 5769–5780. [Google Scholar]
- Hornik, K.; Stinchcombe, M.; White, H. Multilayer feedforward networks are universal approximators. Neural Netw. 1989, 2, 359–366. [Google Scholar]
- Pao, Y. Adaptive Pattern Recognition and Neural Networks; Addison-Wesley Publishing Co., Inc.: Reading, MA, USA, 1989. [Google Scholar]
- Mathews, V.J.; Sicuranza, G.L. Polynomial Signal Processing; Wiley: New York, NY, USA, 2000. [Google Scholar]
- Carini, A.; Sicuranza, G.L. Fourier nonlinear filters. Signal Process. 2014, 94, 183–194. [Google Scholar] [CrossRef]
- Lapedes, A.; Farber, R. Nonlinear Signal Processing Using Neural Networks: Prediction and System Modelling; Technical Report; Los Alamos National Laboratory: Los Alamos, NM, USA, 1987. [Google Scholar]
- Sopena, J.M.; Romero, E.; Alquezar, R. Neural networks with periodic and monotonic activation functions: A comparative study in classification problems. In Proceedings of the Ninth International Conference on Artificial Neural Networks (ICANN 99), Edinburgh, UK, 7–10 September 1999. [Google Scholar]
- Gashler, M.S.; Ashmore, S.C. Modeling time series data with deep Fourier neural networks. Neurocomputing 2016, 188, 3–11. [Google Scholar] [CrossRef]
- Jagtap, A.D.; Shin, Y.; Kawaguchi, K.; Karniadakis, G.E. Deep Kronecker neural networks: A general framework for neural networks with adaptive activation functions. Neurocomputing 2022, 468, 165–180. [Google Scholar] [CrossRef]
- Rafajłowicz, E.; Pawlak, M. On function recovery by neural networks based on orthogonal expansions. Nonlinear Anal. Theory Methods Appl. 1997, 30, 1343–1354. [Google Scholar] [CrossRef]
- Zamir, S.W.; Arora, A.; Khan, S.; Hayat, M.; Khan, F.S.; Yang, M.H. Restormer: Efficient transformer for high-resolution image restoration. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, New Orleans, LA, USA, 18–24 June 2022; pp. 5728–5739. [Google Scholar]
- Zamir, S.W.; Arora, A.; Khan, S.; Hayat, M.; Khan, F.S.; Yang, M.H.; Shao, L. Multi-stage progressive image restoration. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, Nashville, TN, USA, 20–25 June 2021; pp. 14821–14831. [Google Scholar]
- Wang, Z.; Cun, X.; Bao, J.; Zhou, W.; Liu, J.; Li, H. Uformer: A General U-Shaped Transformer for Image Restoration. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), New Orleans, LA, USA, 18–24 June 2022; pp. 17683–17693. [Google Scholar]
- Chen, D.; He, M.; Fan, Q.; Liao, J.; Zhang, L.; Hou, D.; Yuan, L.; Hua, G. Gated context aggregation network for image dehazing and deraining. In Proceedings of the 2019 IEEE Winter Conference on Applications of Computer Vision (WACV), Waikoloa Village, HI, USA, 7–11 January 2019; IEEE: New York, NY, USA, 2019; pp. 1375–1383. [Google Scholar]
- Qin, X.; Wang, Z.; Bai, Y.; Xie, X.; Jia, H. FFA-Net: Feature fusion attention network for single image dehazing. In Proceedings of the AAAI Conference on Artificial Intelligence, New York, NY, USA, 7–12 February 2020; Volume 34, pp. 11908–11915. [Google Scholar]
- Cui, Y.; Knoll, A. Exploring the potential of channel interactions for image restoration. Knowl.-Based Syst. 2023, 282, 111156. [Google Scholar] [CrossRef]
- Dwivedi, P.; Chakraborty, S. FrTrGAN: Single image dehazing using the frequency component of transmission maps in the generative adversarial network. Comput. Vis. Image Underst. 2025, 255, 104336. [Google Scholar] [CrossRef]
- Gao, H.; Ma, B.; Zhang, Y.; Yang, J.; Yang, J.; Dang, D. Frequency domain task-adaptive network for restoring images with combined degradations. Pattern Recognit. 2025, 158, 111057. [Google Scholar]
- Zhang, S.; Zhang, X.; Shen, L.; Wan, S.; Ren, W. Wavelet-based physically guided normalization network for real-time traffic dehazing. Pattern Recognit. 2026, 172, 112451. [Google Scholar] [CrossRef]
- Yang, Y.; Wang, C.; Liu, R.; Zhang, L.; Guo, X.; Tao, D. Self-Augmented Unpaired Image Dehazing via Density and Depth Decomposition. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), New Orleans, LA, USA, 18–24 June 2022; pp. 2037–2046. [Google Scholar]
- Yang, Y.; Wang, C.; Guo, X.; Tao, D. Robust Unpaired Image Dehazing via Density and Depth Decomposition. Int. J. Comput. Vis. 2024, 132, 1557–1577. [Google Scholar] [CrossRef]
- Sun, H.; Luo, Z.; Ren, D.; Du, B.; Chang, L.; Wan, J. Unsupervised Multi-Branch Network with High-Frequency Enhancement for Image Dehazing. Pattern Recognit. 2024, 156, 110763. [Google Scholar] [CrossRef]
- Xue, M.; Fan, S.; Palaiahnakote, S.; Zhou, M. UR2P-Dehaze: Learning a Simple Image Dehaze Enhancer via Unpaired Rich Physical Prior. Pattern Recognit. 2026, 170, 111997. [Google Scholar] [CrossRef]
- Lan, Y.; Cui, Z.; Luo, X.; Liu, C.; Wang, N.; Zhang, M.; Su, Y.; Liu, D. When Schrödinger Bridge Meets Real-World Image Dehazing with Unpaired Training. In Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV), Honolulu, HI, USA, 19–23 October 2025; pp. 8756–8765. [Google Scholar]
- Potlapalli, V.; Zamir, S.W.; Khan, S.; Khan, F.S. PromptIR: Prompting for All-in-One Blind Image Restoration. In Proceedings of the 37th International Conference on Neural Information Processing Systems, New Orleans, LA, USA, 10–16 December 2023; Volume 36. [Google Scholar]
- Gao, H.; Yang, J.; Zhang, Y.; Wang, N.; Yang, J.; Dang, D. Prompt-based Ingredient-Oriented All-in-One Image Restoration. IEEE Trans. Circuits Syst. Video Technol. 2024, 34, 9458–9471. [Google Scholar]
- Sitzmann, V.; Martel, J.N.P.; Bergman, A.W.; Lindell, D.B.; Wetzstein, G. Implicit Neural Representations with Periodic Activation Functions. In Proceedings of the 34th International Conference on Neural Information Processing Systems (NIPS ’20), Virtual, 6–12 December 2020; Curran Associates Inc.: Red Hook, NY, USA, 2020. [Google Scholar]
- Ranjan, P.; Khan, P.; Kumar, S.; Das, S.K. log-Sigmoid Activation-Based Long Short-Term Memory for Time-Series Data Classification. IEEE Trans. Artif. Intell. 2024, 5, 672–683. [Google Scholar] [CrossRef]
- Anoosheh, A.; Sattler, T.; Timofte, R.; Pollefeys, M.; Van Gool, L. Night-to-day image translation for retrieval-based localization. In Proceedings of the International Conference on Robotics and Automation (ICRA), Montreal, QC, Canada, 20–24 May 2019; IEEE: New York, NY, USA, 2019; pp. 5958–5964. [Google Scholar]
- Bhattacharya, J.; Modi, S.; Gregorat, L.; Ramponi, G. D2BGAN: A Dark to Bright Image Conversion Model for Quality Enhancement and Analysis Tasks Without Paired Supervision. IEEE Access 2022, 10, 57942–57961. [Google Scholar] [CrossRef]
- Venkatanath, N.; Praneeth, D.; Bh, M.C.; Channappayya, S.S.; Medasani, S.S. Blind image quality evaluation using perception based features. In Proceedings of the 2015 21st National Conference on Communications (NCC), Mumbai, India, 27 February–1 March 2015; IEEE: New York, NY, USA, 2015; pp. 1–6. [Google Scholar]
- Mittal, A.; Soundararajan, R.; Bovik, A.C. Making a “completely blind” image quality analyzer. IEEE Signal Process. Lett. 2012, 20, 209–212. [Google Scholar] [CrossRef]
- Mittal, A.; Moorthy, A.K.; Bovik, A.C. No-reference image quality assessment in the spatial domain. IEEE Trans. Image Process. 2012, 21, 4695–4708. [Google Scholar] [CrossRef]
- Cordts, M.; Omran, M.; Ramos, S.; Rehfeld, T.; Enzweiler, M.; Benenson, R.; Franke, U.; Roth, S.; Schiele, B. The cityscapes dataset for semantic urban scene understanding. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Las Vegas, NV, USA, 27–30 June 2016; pp. 3213–3223. [Google Scholar]
- Sakaridis, C.; Dai, D.; Hecker, S.; Van Gool, L. Model Adaptation with Synthetic and Real Data for Semantic Dense Foggy Scene Understanding. In Proceedings of the European Conference on Computer Vision (ECCV), Munich, Germany, 8–14 September 2018; pp. 707–724. [Google Scholar]
- Li, B.; Ren, W.; Fu, D.; Tao, D.; Feng, D.; Zeng, W.; Wang, Z. Reside: A benchmark for single image dehazing. arXiv 2017, arXiv:1712.04143. [Google Scholar]
- Li, B.; Peng, X.; Wang, Z.; Xu, J.; Feng, D. Aod-net: All-in-one dehazing network. In Proceedings of the IEEE International Conference on Computer Vision, Venice, Italy, 22–29 October 2017; pp. 4770–4778. [Google Scholar]
- Song, Y.; He, Z.; Qian, H.; Du, X. Vision transformers for single image dehazing. IEEE Trans. Image Process. 2023, 32, 1927–1941. [Google Scholar] [CrossRef]
- Dong, H.; Pan, J.; Xiang, L.; Hu, Z.; Zhang, X.; Wang, F.; Yang, M.H. Multi-scale boosted dehazing network with dense feature fusion. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, Seattle, WA, USA, 13–19 June 2020; pp. 2157–2167. [Google Scholar]
- Tsai, F.J.; Peng, Y.T.; Lin, Y.Y.; Lin, C.W. PHATNet: A Physics-guided Haze Transfer Network for Domain-adaptive Real-world Image Dehazing. In Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV), Honolulu, HI, USA, 19–23 October 2025; pp. 5591–5600. [Google Scholar]
- Wang, Z.; Zhao, H.; Peng, J.; Yao, L.; Zhao, K. ODCR: Orthogonal Decoupling Contrastive Regularization for Unpaired Image Dehazing. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), Seattle, WA, USA, 16–22 June 2024; pp. 25479–25489. [Google Scholar]
- Liu, Y.; Wang, X.; Hu, E.; Wang, A.; Shiri, B.; Lin, W. VNDHR: Variational Single Nighttime Image Dehazing for Enhancing Visibility in Intelligent Transportation Systems via Hybrid Regularization. IEEE Trans. Intell. Transp. Syst. 2025, 26, 10189–10203. [Google Scholar] [CrossRef]
- Cong, X.; Gui, J.; Zhang, J.; Hou, J.; Shen, H. A Semi-Supervised Nighttime Dehazing Baseline with Spatial-Frequency Aware and Realistic Brightness Constraint. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), Seattle, WA, USA, 16–22 June 2024; pp. 2631–2640. [Google Scholar]
- Jin, Y.; Lin, B.; Yan, W.; Yuan, Y.; Ye, W.; Tan, R.T. Enhancing Visibility in Nighttime Haze Images Using Guided APSF and Gradient Adaptive Convolution. In Proceedings of the 31st ACM International Conference on Multimedia, Ottawa, ON, Canada, 29 October–3 November 2023; pp. 2446–2457. [Google Scholar]
- Yang, C.H.; Lin, Y.H.; Lu, Y.C. A Variation-Based Nighttime Image Dehazing Flow with a Physically Valid Illumination Estimator and a Luminance-Guided Coloring Model. IEEE Access 2022, 10, 50153–50166. [Google Scholar] [CrossRef]
- Zhang, J.; Cao, Y.; Zha, Z.J.; Tao, D. Nighttime Dehazing with a Synthetic Benchmark. In Proceedings of the 28th ACM International Conference on Multimedia, Seattle, WA, USA, 12–16 October 2020; pp. 2355–2363. [Google Scholar]
- Qian, R.; Tan, R.T.; Yang, W.; Su, J.; Liu, J. Attentive generative adversarial network for raindrop removal from a single image. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Salt Lake City, UT, USA, 18–23 June 2018; pp. 2482–2491. [Google Scholar]
- Fu, X.; Huang, J.; Zeng, D.; Huang, Y.; Ding, X.; Paisley, J. Removing rain from single images via a deep detail network. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Honolulu, HI, USA, 21–26 July 2017; pp. 3855–3863. [Google Scholar]
- Fu, X.; Huang, J.; Ding, X.; Liao, Y.; Paisley, J. Clearing the skies: A deep network architecture for single-image rain removal. IEEE Trans. Image Process. 2017, 26, 2944–2956. [Google Scholar] [CrossRef]
- Li, S.; Araujo, I.B.; Ren, W.; Wang, Z.; Tokuda, E.K.; Junior, R.H.; Cesar-Junior, R.; Zhang, J.; Guo, X.; Cao, X. Single image deraining: A comprehensive benchmark analysis. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, Long Beach, CA, USA, 15–20 June 2019; pp. 3838–3847. [Google Scholar]
- Kenk, M.A.; Hassaballah, M. DAWN: Vehicle detection in adverse weather nature dataset. arXiv 2020, arXiv:2008.05402. [Google Scholar] [CrossRef]
- Zhang, H.; Patel, V.M. Density-aware single image de-raining using a multi-stream dense network. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Salt Lake City, UT, USA, 18–23 June 2018; pp. 695–704. [Google Scholar]
- Jiang, K.; Wang, Z.; Yi, P.; Chen, C.; Huang, B.; Luo, Y.; Ma, J.; Jiang, J. Multi-scale progressive fusion network for single image deraining. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, Seattle, WA, USA, 13–19 June 2020; pp. 8346–8355. [Google Scholar]
- Ren, D.; Zuo, W.; Hu, Q.; Zhu, P.; Meng, D. Progressive image deraining networks: A better and simpler baseline. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, Long Beach, CA, USA, 15–20 June 2019; pp. 3937–3946. [Google Scholar]
- Gao, H.; Yang, J.; Zhang, Y.; Wang, N.; Yang, J.; Dang, D. A novel single-stage network for accurate image restoration. Vis. Comput. 2024, 40, 7385–7398. [Google Scholar] [CrossRef]
- Yasarla, R.; Patel, V.M. Uncertainty guided multi-scale residual learning-using a cycle spinning cnn for single image de-raining. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, Long Beach, CA, USA, 15–20 June 2019; pp. 8405–8414. [Google Scholar]
- Pavao, A.; Guyon, I.; Letournel, A.C.; Tran, D.T.; Baro, X.; Escalante, H.J.; Escalera, S.; Thomas, T.; Xu, Z. CodaLab Competitions: An Open Source Platform to Organize Scientific Challenges. J. Mach. Learn. Res. 2023, 24, 9525–9530. [Google Scholar]
- Yu, F.; Chen, H.; Wang, X.; Xian, W.; Chen, Y.; Liu, F.; Madhavan, V.; Darrell, T. BDD100K: A Diverse Driving Dataset for Heterogeneous Multitask Learning. In Proceedings of the 2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), Seattle, WA, USA, 13–19 June 2020; pp. 2633–2642. [Google Scholar]
- Cai, Y.; Bian, H.; Lin, J.; Wang, H.; Timofte, R.; Zhang, Y. Retinexformer: One-stage Retinex-based Transformer for Low-light Image Enhancement. In Proceedings of the 2023 IEEE/CVF International Conference on Computer Vision (ICCV), Paris, France, 1–6 October 2023. [Google Scholar]
- Yan, Q.; Feng, Y.; Zhang, C.; Pang, G.; Shi, K.; Wu, P.; Dong, W.; Sun, J.; Zhang, Y. HVI: A New Color Space for Low-light Image Enhancement. In Proceedings of the 2025 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), Nashville, TN, USA, 10–17 June 2025; pp. 5678–5687. [Google Scholar]
- Fu, H.; Gong, M.; Wang, C.; Batmanghelich, K.; Zhang, K.; Tao, D. Geometry-consistent generative adversarial networks for one-sided unsupervised domain mapping. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Long Beach, CA, USA, 15–20 June 2019; pp. 2427–2436. [Google Scholar]
- Parmar, G.; Park, T.; Narasimhan, S.; Zhu, J.Y. One-Step Image Translation with Text-to-Image Models. arXiv 2024, arXiv:2403.12036. [Google Scholar]
[Figure: channel-wise illustration, with panels labeled Channel R, Channel G, and Channel B.]
No-reference quality scores on the Dawn-Haze, RESIDE, and Cityscapes hazy sets (P = PIQE, N = NIQE, B = BRISQUE; lower is better):

| Method | Dawn-Haze P | Dawn-Haze N | Dawn-Haze B | RESIDE P | RESIDE N | RESIDE B | Cityscapes P | Cityscapes N | Cityscapes B |
|---|---|---|---|---|---|---|---|---|---|
| Original hazy | 54.65 | 4.19 | 45.21 | 54.13 | 3.73 | 38.48 | 45.32 | 2.93 | 33.63 |
| AOD | 55.59 | 3.78 | 42.98 | 54.95 | 3.70 | 36.85 | 41.16 | 2.52 | 34.06 |
| DHF | 53.54 | 3.70 | 41.37 | 51.66 | 3.36 | 32.68 | 44.24 | 2.95 | 32.54 |
| GCANet | 38.36 | 3.82 | 32.32 | 43.25 | 3.54 | 28.70 | 36.68 | 2.99 | 31.02 |
| FFA | 50.91 | 3.83 | 43.05 | 51.09 | 3.46 | 35.86 | 42.05 | 2.74 | 34.10 |
| MSBDN | 61.47 | 4.77 | 43.12 | 51.72 | 3.48 | 34.18 | 39.19 | 2.90 | 30.10 |
| MAXIM | 57.85 | 4.27 | 44.57 | 54.08 | 3.71 | 37.59 | 51.31 | 2.83 | 25.38 |
| PROMPTIR | 54.82 | 3.93 | 41.26 | 55.60 | 3.54 | 35.84 | 42.80 | 2.70 | 25.28 |
| DIFFUIR | 54.67 | 3.85 | 44.51 | 53.17 | 3.61 | 37.06 | 43.31 | 2.58 | 25.43 |
| FDTANET | 39.92 | 4.03 | 33.29 | 43.20 | 3.44 | 30.44 | 34.61 | 2.63 | 22.92 |
| CAPTNET | 54.72 | 4.13 | 44.79 | 54.99 | 3.67 | 39.84 | 45.95 | 2.76 | 26.52 |
| D4 | 52.29 | 3.86 | 42.44 | 52.74 | 3.52 | 36.02 | 44.25 | 2.80 | 32.82 |
| D4+ | 51.58 | 3.94 | 43.89 | 58.74 | 4.13 | 43.03 | 54.87 | 3.27 | 41.96 |
| PHATNET | 53.22 | 3.79 | 36.40 | 52.21 | 3.41 | 29.45 | 34.24 | 2.40 | 23.41 |
| NLCNet | 32.23 | 3.66 | 26.60 | 35.85 | 3.14 | 23.69 | 32.63 | 2.65 | 26.67 |
| Method | PSNR (Cityscapes) | SSIM (Cityscapes) | PSNR (SOTS-O) | SSIM (SOTS-O) | Params. (M) |
|---|---|---|---|---|---|
| AOD | 28.05 | 0.756 | 24.14 | 0.92 | 0.02 |
| DHF | 27.75 | 0.792 | 33.79 | 0.97 | 25.44 |
| FFA | 28.12 | 0.761 | 32.05 | 0.97 | 4.456 |
| MSBDN | 28.51 | 0.849 | 23.36 | 0.87 | 31 |
| MAXIM | 28.25 | 0.783 | 34.19 | 0.98 | 14.1 |
| GCANet | 28.08 | 0.805 | 19.98 | 0.70 | 0.702 |
| PROMPTIR | 27.97 | 0.753 | 31.13 | 0.97 | 35.59 |
| DIFFUIR | 28.20 | 0.794 | 30.83 | 0.95 | 36.26 |
| FDTANET | 28.04 | 0.746 | 31.25 | 0.95 | 34.10 |
| CAPTNET | 27.73 | 0.718 | 29.28 | 0.96 | 24.36 |
| NLCNet | 30.48 | 0.878 | 32.57 | 0.97 | 5.83 |
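For reference, below is a minimal sketch of how the full-reference PSNR and SSIM scores in this table are typically computed with scikit-image; the file names are placeholders, and the paper may use a different implementation or data range.

```python
# Minimal PSNR/SSIM evaluation sketch (assumes scikit-image >= 0.19 for
# the channel_axis argument). File names below are placeholders.
import numpy as np
from skimage.io import imread
from skimage.metrics import peak_signal_noise_ratio, structural_similarity

restored = imread("restored.png").astype(np.float64)        # network output
reference = imread("ground_truth.png").astype(np.float64)   # clear image

psnr = peak_signal_noise_ratio(reference, restored, data_range=255)
ssim = structural_similarity(reference, restored, channel_axis=-1, data_range=255)
print(f"PSNR: {psnr:.2f} dB, SSIM: {ssim:.3f}")
```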
| Method | PSNR | SSIM |
|---|---|---|
| D4 | 25.85 | 0.957 |
| D4+ | 26.32 | 0.961 |
| ODCR | 26.16 | 0.960 |
| UME-NET | 27.26 | 0.931 |
| UR2P-Dehaze | 27.53 | 0.967 |
| NLCUNet | 32.42 | 0.97 |
| Methods | OSFD | VNID | GAPSF | SFSNiD | VNDHR | NLCNet |
|---|---|---|---|---|---|---|
| PSNR | 11.90 | 11.71 | 13.17 | 13.59 | 14.29 | 19.53 |
| SSIM | 0.70 | 0.69 | 0.69 | 0.66 | 0.75 | 0.89 |
No-reference quality scores on the RIS and DAWN-Rain rainy sets (P = PIQE, N = NIQE, B = BRISQUE; lower is better):

| Method | RIS P | RIS N | RIS B | DAWN-Rain P | DAWN-Rain N | DAWN-Rain B |
|---|---|---|---|---|---|---|
| Original rainy | 65.93 | 5.09 | 43.61 | 49.60 | 3.81 | 41.57 |
| MAXIM | 75.64 | 4.86 | 44.21 | 68.10 | 4.19 | 43.95 |
| Restormer | 66.87 | 4.84 | 43.91 | 51.33 | 4.04 | 42.12 |
| DIDMDN | 71.92 | 3.79 | 38.73 | 79.26 | 4.47 | 48.53 |
| MSPFN | 69.83 | 4.29 | 42.43 | 71.18 | 4.72 | 46.47 |
| MPRNet | 71.16 | 4.75 | 45.73 | 60.29 | 4.46 | 44.02 |
| UMRL | 67.47 | 3.88 | 34.52 | 77.36 | 4.49 | 48.27 |
| PRENet | 65.62 | 4.59 | 42.19 | 61.26 | 4.49 | 43.61 |
| M3SNet | 67.39 | 4.36 | 41.08 | 55.50 | 4.16 | 40.76 |
| PromptIR | 65.64 | 4.51 | 40.60 | 52.17 | 3.77 | 39.65 |
| DIFFUIR | 64.11 | 4.87 | 42.79 | 46.11 | 3.41 | 37.46 |
| FDTANET | 57.61 | 3.74 | 33.09 | 35.18 | 3.50 | 32.02 |
| CAPTNET | 65.54 | 4.77 | 43.69 | 50.51 | 3.78 | 40.96 |
| NLCNet | 47.01 | 3.58 | 28.06 | 27.19 | 3.20 | 26.78 |
| Method | Size (MB) | Params (M) | Hazy (mAP) | Rainy (mAP) |
|---|---|---|---|---|
| MAXIM | 164 | 14.1 | 59.66 | 41.66 |
| MPRNet | 14 | 20.1 | - | 44.05 |
| Restormer | 100 | 26.12 | - | 44.66 |
| PromptIR | 388.2 | 35.59 | 58.59 | 44.41 |
| DiffuIR | 553.9 | 36.26 | 58.73 | 44.37 |
| CAPTNET | 93.2 | 24.36 | 51.4 | 44 |
| FDTANET | 130.5 | 34.10 | 51.28 | 44.97 |
| NLCNet | 20.8 | 5.83 | 60.75 | 47.58 |
| Method | LOL PSNR | LOL SSIM | NTIRE PSNR | NTIRE SSIM |
|---|---|---|---|---|
| ZDCE | 14.86 | 0.559 | - | - |
| RUAS | 16.41 | 0.50 | - | - |
| EnlightenGAN | 17.48 | 0.651 | - | - |
| Restormer | 20.41 | 0.806 | - | - |
| Prompt-IR | 22.89 | 0.847 | - | - |
| MAXIM | 23.43 | 0.863 | 28.14 | 0.76 |
| DiffUIR-L | 25.12 | 0.907 | 27.93 | 0.32 |
| RTXF | 25.15 | 0.846 | 27.89 | 0.45 |
| Cidnet | 23.80 | 0.85 | 28.07 | 0.77 |
| NLCNet | 28.58 | 0.85 | 28.14 | 0.76 |
| Network | TURBO | EnDec | EnDec+ | NLCNet |
|---|---|---|---|---|
| FID | 34.61 | 34.49 | 38.03 | 32 |
| TURBO | 0.025 | 0.031 | 0.022 | 0.021 |
| Version | Original | EnDec | EnDec+ | NLCNet |
|---|---|---|---|---|
| PSNR | 20.31 | 22.02 | 21.50 | 22.66 |
| SSIM | 0.782 | 0.787 | 0.775 | 0.801 |
| ResBlocks | - | 9 | 4 | 4 |
| Version | V1 | V2 | V3 | V4 | V5 | V6 |
|---|---|---|---|---|---|---|
| PSNR | 17.90 | 18.79 | 17.67 | 18.07 | 17.22 | 17.71 |
| SSIM | 0.628 | 0.619 | 0.610 | 0.626 | 0.450 | 0.575 |
| Loss | SSIM | PSNR |
|---|---|---|
| cycle | 0.827 | 30.114 |
| cycle + contextual | 0.827 | 30.23 |
| cycle + contextual + latent | 0.878 | 30.480 |
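The ablation above adds one loss term at a time. Below is a minimal sketch of how the three terms could be combined; the contextual and latent terms are simple L1 stand-ins on deep features and latent codes, and the weights are hypothetical, not the paper's exact formulation.

```python
# Hypothetical combination of the three loss terms compared in the table:
# cycle-consistency, contextual, and latent. The stand-in definitions and
# the weights are illustrative assumptions, not the authors' formulation.
import torch
import torch.nn.functional as F


def total_loss(x, x_cycled, feats_real, feats_fake, z_real, z_fake,
               w_cycle=10.0, w_context=1.0, w_latent=1.0):
    # Cycle term: translating to the target domain and back should
    # reconstruct the input image.
    l_cycle = F.l1_loss(x_cycled, x)
    # Contextual stand-in: match deep features of real and translated images.
    l_context = F.l1_loss(feats_fake, feats_real)
    # Latent term: align the encoder's latent codes across the two domains.
    l_latent = F.l1_loss(z_fake, z_real)
    return w_cycle * l_cycle + w_context * l_context + w_latent * l_latent
```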
| Method | PSNR (SOTS-O) | SSIM (SOTS-O) | Params (M) |
|---|---|---|---|
| EnDec | 32.27 | 0.97 | 11.67 |
| EnDec_CA | 32.13 | 0.97 | 11.74 |
| EnDec_SPA | 30.57 | 0.79 | 11.67 |
| EnDec_CSPA | 32.08 | 0.97 | 11.74 |
| NLCNet | 32.42 | 0.97 | 5.83 |
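For context on the naming in this ablation, below is a minimal squeeze-and-excitation-style channel-attention block, assuming EnDec_CA denotes such a channel-attention variant (with EnDec_SPA and EnDec_CSPA as its spatial and combined counterparts); the authors' actual blocks may differ.

```python
# Hypothetical channel-attention block in the squeeze-and-excitation style;
# an illustrative assumption for what "CA" denotes in the ablation above.
import torch
import torch.nn as nn


class ChannelAttention(nn.Module):
    def __init__(self, channels: int, reduction: int = 8):
        super().__init__()
        self.pool = nn.AdaptiveAvgPool2d(1)      # squeeze: global spatial context
        self.mlp = nn.Sequential(                # excitation: per-channel gate
            nn.Conv2d(channels, channels // reduction, 1),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels // reduction, channels, 1),
            nn.Sigmoid(),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return x * self.mlp(self.pool(x))        # reweight feature channels
```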