Journal of Imaging

Journal of Imaging is an international, multi/interdisciplinary, peer-reviewed, open access journal of imaging techniques, published online monthly by MDPI.

Indexed in PubMed | Quartile Ranking JCR - Q2 (Imaging Science and Photographic Technology)

Multiscale RGB-Guided Fusion for Hyperspectral Image Super-Resolution

  • Matteo Kolyszko,
  • Marco Buzzelli and
  • Raimondo Schettini
  • + 1 author

Hyperspectral imaging (HSI) enables fine spectral analysis but is often limited by low spatial resolution due to sensor constraints. To address this, we propose CGNet, a color-guided hyperspectral super-resolution network that leverages complementary information from low-resolution hyperspectral inputs and high-resolution RGB images. CGNet adopts a dual-encoder design: the RGB encoder extracts hierarchical spatial features, while the HSI encoder progressively upsamples spectral features. A multi-scale fusion decoder then combines both modalities in a coarse-to-fine manner to reconstruct the high-resolution HSI. Training is driven by a hybrid loss that balances L1 and Spectral Angle Mapper (SAM), which ablation studies confirm as the most effective formulation. Experiments on two benchmarks, ARAD1K and StereoMSI, at ×4 and ×6 upscaling factors demonstrate that CGNet consistently outperforms state-of-the-art baselines. CGNet achieves higher PSNR and SSIM, lower SAM, and reduced ΔE00, confirming its ability to recover sharp spatial structures while preserving spectral fidelity.

28 January 2026

Overview of the proposed CGNet architecture. The network takes as input a low-resolution hyperspectral image and a high-resolution RGB image. It employs two parallel encoders to extract multiscale spectral and spatial features, which are progressively fused in a coarse-to-fine manner by the fusion decoder. The final output is a super-resolved hyperspectral image with full spatial and spectral fidelity.
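
As a concrete illustration of the training objective described in the abstract above, the following is a minimal PyTorch sketch of a hybrid L1 + Spectral Angle Mapper (SAM) loss. The weighting factor lambda_sam, the tensor shapes, and the band count are assumptions for illustration, not values taken from the paper.

```python
# Minimal sketch of a hybrid L1 + Spectral Angle Mapper (SAM) loss.
# The weight `lambda_sam` and the tensor shapes are illustrative assumptions.
import torch
import torch.nn.functional as F


def sam_loss(pred: torch.Tensor, target: torch.Tensor, eps: float = 1e-8) -> torch.Tensor:
    """Mean spectral angle (radians) between predicted and reference spectra.

    Both tensors are (B, C, H, W), where C is the number of spectral bands.
    """
    dot = (pred * target).sum(dim=1)                      # per-pixel dot product over bands
    norm = pred.norm(dim=1) * target.norm(dim=1)          # product of spectral norms
    cos_angle = (dot / (norm + eps)).clamp(-1.0 + eps, 1.0 - eps)
    return torch.acos(cos_angle).mean()


def hybrid_loss(pred: torch.Tensor, target: torch.Tensor, lambda_sam: float = 0.1) -> torch.Tensor:
    """L1 reconstruction term plus a SAM term that penalizes spectral distortion."""
    return F.l1_loss(pred, target) + lambda_sam * sam_loss(pred, target)


if __name__ == "__main__":
    hr_pred = torch.rand(2, 31, 64, 64)   # super-resolved HSI (31 bands, e.g. ARAD1K)
    hr_true = torch.rand(2, 31, 64, 64)
    print(hybrid_loss(hr_pred, hr_true))
```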

Radiographic imaging remains a cornerstone of diagnostic practice. However, accurate interpretation faces challenges from subtle visual signatures, anatomical variability, and inter-observer inconsistency. Conventional deep learning approaches, such as convolutional neural networks and vision transformers, deliver strong predictive performance but often lack anatomical grounding and interpretability, limiting their trustworthiness in imaging applications. To address these challenges, we present SpineNeuroSym, a neuro-geometric imaging framework that unifies geometry-aware learning and symbolic reasoning for explainable medical image analysis. The framework integrates weakly supervised keypoint and region-of-interest discovery, a dual-stream graph–transformer backbone, and a Differentiable Radiographic Geometry Module (dRGM) that computes clinically relevant indices (e.g., slip ratio, disc asymmetry, sacroiliac spacing, and curvature measures). A Neuro-Symbolic Constraint Layer (NSCL) enforces monotonic logic in image-derived predictions, while a Counterfactual Geometry Diffusion (CGD) module generates rare imaging phenotypes and provides diagnostic auditing through counterfactual validation. Evaluated on a comprehensive dataset of 1613 spinal radiographs from Sunpasitthiprasong Hospital encompassing six diagnostic categories—spondylolisthesis (n = 496), infection (n = 322), spondyloarthropathy (n = 275), normal cervical (n = 192), normal thoracic (n = 70), and normal lumbar spine (n = 258)—SpineNeuroSym achieved 89.4% classification accuracy, a macro-F1 of 0.872, and an AUROC of 0.941, outperforming eight state-of-the-art imaging baselines. These results highlight how integrating neuro-geometric modeling, symbolic constraints, and counterfactual validation advances explainable, trustworthy, and reproducible medical imaging AI, establishing a pathway toward transparent image analysis systems.

28 January 2026

Stepwise workflow of the proposed methodology.
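
To make the role of the Differentiable Radiographic Geometry Module (dRGM) more concrete, the sketch below computes one clinically motivated index, a slip ratio, from predicted keypoint coordinates in a fully differentiable way. The keypoint layout and the exact formula are illustrative assumptions and are not taken from the paper.

```python
# Illustrative sketch of a differentiable geometric index in the spirit of the
# dRGM. The keypoint layout (posterior/anterior corners of two adjacent
# vertebral bodies) and the slip-ratio formula are assumptions, not the
# authors' code.
import torch


def slip_ratio(upper_post: torch.Tensor, lower_post: torch.Tensor,
               lower_ant: torch.Tensor) -> torch.Tensor:
    """Forward displacement of the upper vertebra over the lower one,
    normalized by the antero-posterior width of the lower vertebral body.

    Each argument is a (B, 2) tensor of (x, y) keypoint coordinates predicted
    by the network; the whole computation stays differentiable.
    """
    ap_axis = lower_ant - lower_post                        # antero-posterior direction
    ap_len = ap_axis.norm(dim=1, keepdim=True) + 1e-8
    ap_dir = ap_axis / ap_len
    # Project the upper posterior corner onto the lower AP axis.
    displacement = ((upper_post - lower_post) * ap_dir).sum(dim=1, keepdim=True)
    return (displacement / ap_len).squeeze(1)


if __name__ == "__main__":
    b = 4
    ratio = slip_ratio(torch.rand(b, 2, requires_grad=True),
                       torch.rand(b, 2), torch.rand(b, 2))
    ratio.sum().backward()  # gradients flow back to the keypoint predictions
```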

Existing 3D point cloud augmentation methods typically rely on hand-crafted geometric transformations or local blending strategies, which are prone to introducing implausible deformations, struggle to preserve global structure, and adapt poorly to diverse degradation patterns. To address these limitations, this paper proposes SFD-ADNet, an adaptive deformation framework that operates jointly in the spatial and frequency domains. It augments 3D point clouds by explicitly learning deformation parameters rather than applying predefined perturbations. By jointly modeling spatial structural dependencies and spectral features, SFD-ADNet generates augmented samples that are both structurally aware and task-relevant. In the spatial domain, a hierarchical sequence encoder coupled with a bidirectional Mamba-based deformation predictor captures long-range geometric dependencies and local structural variations, enabling adaptive, position-aware deformation control. In the frequency domain, a multi-scale dual-channel mechanism based on adaptive Chebyshev polynomials separates low-frequency structural components from high-frequency details, allowing the model to suppress noise-sensitive distortions while preserving the global geometric skeleton. The two deformation predictions are dynamically fused to balance structural fidelity and sample diversity. Extensive experiments on ModelNet40-C and ScanObjectNN-C cover synthetic CAD models and real-world scanned point clouds under diverse perturbation conditions. Used as a universal augmentation module, SFD-ADNet reduces the mean corruption error (mCE) of PointNet++ and other backbone networks by over 20% and achieves state-of-the-art robustness while preserving critical geometric structures. Furthermore, models trained with SFD-ADNet show consistently improved robustness against diverse point cloud attacks, validating the efficacy of adaptive spatial–frequency deformation for robust point cloud learning.

26 January 2026

The framework of SFD-ADNet.
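
The abstract states that the spatial-domain and frequency-domain deformation predictions are dynamically fused; the sketch below shows one plausible realization using a learned per-point gate. The gating network, feature dimensions, and offset scales are assumptions, not the authors' implementation.

```python
# Minimal sketch of dynamically fusing a spatial-domain and a frequency-domain
# deformation prediction into one per-point offset field. The per-point gating
# network is an assumed design choice for illustration.
import torch
import torch.nn as nn


class DeformationFusion(nn.Module):
    """Learn a per-point gate alpha in [0, 1] and blend two offset fields."""

    def __init__(self, feat_dim: int = 64):
        super().__init__()
        self.gate = nn.Sequential(
            nn.Linear(feat_dim + 6, 32), nn.ReLU(),
            nn.Linear(32, 1), nn.Sigmoid(),
        )

    def forward(self, point_feats, spatial_offset, freq_offset):
        # point_feats: (B, N, feat_dim); offsets: (B, N, 3)
        alpha = self.gate(torch.cat([point_feats, spatial_offset, freq_offset], dim=-1))
        return alpha * spatial_offset + (1.0 - alpha) * freq_offset


if __name__ == "__main__":
    fusion = DeformationFusion()
    pts, feats = torch.rand(2, 1024, 3), torch.rand(2, 1024, 64)
    offsets = fusion(feats, torch.rand(2, 1024, 3) * 0.05, torch.rand(2, 1024, 3) * 0.05)
    augmented = pts + offsets  # deformed (augmented) point cloud
```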

Cross-scene hyperspectral image (HSI) classification under single-source domain generalization (DG) is a crucial yet challenging task in remote sensing. The core difficulty lies in generalizing from a limited source domain to unseen target scenes. We formalize this through causal theory, viewing different sensing scenes as distinct interventions on a shared physical system. This perspective reveals two fundamental obstacles: interventional distribution shifts arising from varying acquisition conditions, and confounding biases induced by spurious correlations driven by domain-specific factors. Guided by these considerations, we propose CauseHSI, a causality-inspired framework that offers new insights into cross-scene HSI classification. CauseHSI consists of two key components: a Counterfactual Generation Module (CGM) that perturbs domain-specific factors to generate diverse counterfactual variants, simulating cross-domain interventions while preserving semantic consistency, and a Causal Disentanglement Module (CDM) that separates invariant causal semantics from spurious correlations through structured constraints under a structural causal model, ultimately guiding the model to focus on domain-invariant, generalizable representations. By aligning model learning with causal principles, CauseHSI enhances robustness against domain shifts. Extensive experiments on the Pavia, Houston, and HyRANK datasets demonstrate that CauseHSI outperforms existing DG methods.

26 January 2026

(a) SCM of the image generation process. Z, S, X and Y denote the latent physical properties, sensing scene, observed image, and semantic label, respectively. (b) Illustration of representation disentanglement. X_c and X_n denote the causal features and non-causal features, respectively.
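
To illustrate the kind of intervention the Counterfactual Generation Module (CGM) is described as performing, the sketch below applies a random per-band affine perturbation to a hyperspectral patch, simulating a change in acquisition conditions while keeping the semantic label fixed. The specific gain/offset perturbation is an assumption for illustration, not the CGM itself.

```python
# Illustrative sketch of perturbing domain-specific factors of a hyperspectral
# patch while preserving its semantic label. The per-band gain/offset
# perturbation below is an assumption for illustration.
import torch


def counterfactual_variant(x: torch.Tensor, gain_std: float = 0.1,
                           offset_std: float = 0.02) -> torch.Tensor:
    """Apply a random per-band affine change to simulate a different sensing scene.

    x: (B, C, H, W) hyperspectral patch with C spectral bands.
    """
    b, c = x.shape[:2]
    gain = 1.0 + gain_std * torch.randn(b, c, 1, 1, device=x.device)
    offset = offset_std * torch.randn(b, c, 1, 1, device=x.device)
    return gain * x + offset


if __name__ == "__main__":
    patch = torch.rand(8, 103, 13, 13)       # e.g. a Pavia-like band count
    variant = counterfactual_variant(patch)  # same label, shifted "domain"
```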

Reprints of Collections

Advances in Retinal Image Processing (Reprint)

Editors: P. Jidesh, Vasudevan (Vengu) Lakshminarayanan

J. Imaging - ISSN 2313-433X