
Journal of Imaging

Journal of Imaging is an international, multi/interdisciplinary, peer-reviewed, open access journal of imaging techniques, published online monthly by MDPI.

Indexed in PubMed | Quartile Ranking JCR - Q2 (Imaging Science and Photographic Technology)

All Articles (2,211)

Hybrid Skeleton-Based Motion Templates for Cross-View and Appearance-Robust Gait Recognition

  • João Ferreira Nunes,
  • Pedro Miguel Moreira and
  • João Manuel R. S. Tavares

Gait recognition methods based on silhouette templates, such as the Gait Energy Image (GEI), achieve high accuracy under controlled conditions but often degrade when appearance varies due to viewpoint, clothing, or carried objects. In contrast, skeleton-based approaches provide interpretable motion cues but remain sensitive to pose-estimation noise. This work proposes two compact 2D skeletal descriptors—Gait Skeleton Images (GSIs)—that encode 3D joint trajectories into line-based and joint-based static templates compatible with standard 2D CNN architectures. A unified processing pipeline is introduced, including skeletal topology normalization, rigid view alignment, orthographic projection, and pixel-level rendering. Core design factors are analyzed on the GRIDDS dataset, where depth-based 3D coordinates provide stable ground truth for evaluating structural choices and rendering parameters. An extensive evaluation is then conducted on the widely used CASIA-B dataset, using 3D coordinates estimated via human pose estimation, to assess robustness under viewpoint, clothing, and carrying covariates. Results show that although GEIs achieve the highest same-view accuracy, GSI variants exhibit reduced degradation under appearance changes and demonstrate greater stability under severe cross-view conditions. These findings indicate that compact skeletal templates can complement appearance-based descriptors and may benefit further from continued advances in 3D human pose estimation.
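To make the pipeline concrete, below is a minimal sketch of a joint-based static gait template, loosely following the steps the abstract describes (normalization, orthographic projection, rendering, temporal averaging). The array shapes, parameter values, and the function name are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def render_gait_skeleton_image(joints_3d, size=64):
    """joints_3d: (T, J, 3) array of 3D joint positions over T frames."""
    # Normalize each frame: center on a root joint (index 0) and scale by skeleton extent.
    centered = joints_3d - joints_3d[:, :1, :]
    scale = np.abs(centered[..., 1]).max() + 1e-8
    normalized = centered / scale

    # Orthographic projection: keep the x-y (frontal) plane, discard depth.
    xy = normalized[..., :2]

    # Rasterize joints frame by frame and accumulate into one static template.
    template = np.zeros((size, size), dtype=np.float32)
    cols = np.clip(((xy[..., 0] + 1) * 0.5 * (size - 1)).astype(int), 0, size - 1)
    rows = np.clip(((1 - (xy[..., 1] + 1) * 0.5) * (size - 1)).astype(int), 0, size - 1)
    for t in range(xy.shape[0]):
        np.add.at(template, (rows[t], cols[t]), 1.0)

    # Average over time so the template is independent of sequence length,
    # yielding a 2D image usable as input to a standard CNN.
    return template / xy.shape[0]

# Example: a random 60-frame, 17-joint sequence produces a 64x64 template.
dummy_sequence = np.random.randn(60, 17, 3)
gsi = render_gait_skeleton_image(dummy_sequence)
print(gsi.shape)  # (64, 64)
```

A line-based variant would rasterize bone segments between connected joints instead of isolated joint positions, but the overall averaging idea is the same.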

7 January 2026

Figure: Overview of the proposed Gait Skeleton Image (GSI) processing pipeline.

A Unified Complex-Fresnel Model for Physically Based Long-Wave Infrared Imaging and Simulation

  • Peter ter Heerdt,
  • William Keustermans and
  • Ivan De Boi
  • + 1 author

Accurate modelling of reflection, transmission, absorption, and emission at material interfaces is essential for infrared imaging, rendering, and the simulation of optical and sensing systems. This need is particularly pronounced across the short-wave to long-wave infrared (SWIR–LWIR) spectrum, where many materials exhibit dispersion- and wavelength-dependent attenuation described by complex refractive indices. In this work, we introduce a unified formulation of the full Fresnel equations that directly incorporates wavelength-dependent complex refractive-index data and provides physically consistent interface behaviour for both dielectrics and conductors. The approach reformulates the classical Fresnel expressions to eliminate sign ambiguities and numerical instabilities, resulting in a stable evaluation across incidence angles and for strongly absorbing materials. We demonstrate the model through spectral-rendering simulations that illustrate realistic reflectance and transmittance behaviour for materials with different infrared optical properties. To assess its suitability for thermal-infrared applications, we also compare the simulated long-wave emission of a heated glass sphere with measurements from an LWIR camera. The agreement between measured and simulated radiometric trends indicates that the proposed formulation offers a practical and physically grounded tool for wavelength-parametric interface modelling in infrared imaging, supporting applications in spectral rendering, synthetic data generation, and infrared system analysis.
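For context, the sketch below evaluates the textbook complex-valued Fresnel reflectance for an air-to-material interface, showing how a complex refractive index (n + ik) yields well-defined reflectance for both dielectrics (k near zero) and conductors (large k). This is the standard formulation rather than the paper's specific reformulation, and the example optical constants are approximate assumptions; the branch chosen by the complex square root is exactly where the sign ambiguities mentioned in the abstract can arise.

```python
import numpy as np

def fresnel_reflectance(n_complex, theta_i_deg, n_incident=1.0):
    """Return (R_s, R_p) power reflectances at incidence angle theta_i_deg."""
    theta_i = np.deg2rad(theta_i_deg)
    cos_i = np.cos(theta_i)
    # Complex Snell's law: cos(theta_t) may be complex for absorbing media.
    sin_t = (n_incident / n_complex) * np.sin(theta_i)
    cos_t = np.sqrt(1.0 - sin_t**2 + 0j)  # principal branch of the complex sqrt

    r_s = (n_incident * cos_i - n_complex * cos_t) / (n_incident * cos_i + n_complex * cos_t)
    r_p = (n_complex * cos_i - n_incident * cos_t) / (n_complex * cos_i + n_incident * cos_t)
    return abs(r_s) ** 2, abs(r_p) ** 2

# Strongly absorbing metal (illustrative n + ik) vs. a lossless dielectric at 45 degrees.
print(fresnel_reflectance(12.0 + 55.0j, 45.0))  # near-unity reflectance, metal-like
print(fresnel_reflectance(1.5 + 0.0j, 45.0))    # a few percent, glass-like
```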

7 January 2026

Deep Learning-Assisted Autofocus for Aerial Cameras in Maritime Photography

  • Haiying Liu,
  • Yingchao Li and
  • Shilong Xu
  • + 3 authors

To address the unreliable autofocus problem of drone-mounted visible-light aerial cameras in low-contrast maritime environments, this paper proposes an autofocus system that combines deep-learning-based coarse focusing with traditional search-based fine adjustment. The system uses a built-in high-contrast resolution test chart as the signal source. Images captured by the imaging sensor are fed into a lightweight convolutional neural network to regress the defocus distance, enabling fast focus positioning. This avoids the weak signal and inaccurate focusing often encountered when adjusting focus directly on low-contrast sea surfaces. In the fine-focusing stage, a hybrid strategy integrating hill-climbing search and inverse correction is adopted. By evaluating an image sharpness function, the system accurately locks onto the optimal focal plane, forming an intelligent closed-loop control. Experiments show that this method, which combines imaging of the built-in calibration target with deep-learning-based coarse focusing, significantly improves focusing efficiency. Compared with traditional full-range search strategies, the focusing speed is increased by approximately 60%. While ensuring high accuracy and strong adaptability, the proposed approach effectively enhances the overall imaging performance of aerial cameras in low-contrast maritime conditions.
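The two-stage idea can be illustrated as below: a coarse focus position predicted by a learned regressor, refined by hill-climbing on an image-sharpness score. Here capture_at(), predict_defocus(), the gradient-energy sharpness measure, and the step sizes are hypothetical stand-ins for the camera interface, CNN, and focus metric described in the abstract.

```python
import numpy as np

def sharpness(image):
    """Gradient-energy focus measure: higher means sharper."""
    gy, gx = np.gradient(image.astype(np.float64))
    return float(np.mean(gx**2 + gy**2))

def fine_focus(capture_at, start_position, step=8, max_iters=50):
    """Hill-climb around the coarse position; reverse and shrink the step on decline."""
    pos = start_position
    best = sharpness(capture_at(pos))
    direction = 1
    for _ in range(max_iters):
        candidate = pos + direction * step
        score = sharpness(capture_at(candidate))
        if score > best:
            pos, best = candidate, score      # keep climbing in the same direction
        else:
            direction = -direction            # inverse correction: reverse the search direction
            if step == 1:
                break                         # smallest step no longer improves: converged
            step = max(1, step // 2)          # otherwise refine the step size

    return pos

# Usage sketch (hypothetical camera/CNN interfaces):
# coarse = predict_defocus(capture_at(current_position))   # deep-learning coarse stage
# focused_position = fine_focus(capture_at, coarse)        # search-based fine stage
```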

7 January 2026

Automated animal identification is a practical means of reuniting lost pets with their owners, yet current systems often struggle due to limited dataset scale and reliance on unimodal visual cues. This study introduces a multimodal verification framework that enhances visual features with semantic identity priors derived from synthetic textual descriptions. We constructed a large training corpus of 1.9 million photographs covering 695,091 unique animals to support this investigation. Through systematic ablation studies, we identified SigLIP2-Giant and E5-Small-v2 as the optimal vision and text backbones. We further evaluated fusion strategies ranging from simple concatenation to adaptive gating to determine the best method for integrating these modalities. Our proposed approach utilizes a gated fusion mechanism and achieved a Top-1 accuracy of 84.28% and an Equal Error Rate of 0.0422 on a comprehensive test protocol. These results represent an 11% improvement over leading unimodal baselines and demonstrate that integrating synthesized semantic descriptions significantly refines decision boundaries in large-scale pet re-identification.
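A minimal sketch of a gated fusion module in this spirit is shown below: a vision embedding and a text embedding are projected to a shared space and blended by a learned sigmoid gate. The layer sizes, the module design, and the choice of PyTorch are illustrative assumptions, not the paper's architecture; the named backbones (SigLIP2-Giant, E5-Small-v2) would supply the input embeddings.

```python
import torch
import torch.nn as nn

class GatedFusion(nn.Module):
    def __init__(self, vision_dim, text_dim, fused_dim):
        super().__init__()
        self.vision_proj = nn.Linear(vision_dim, fused_dim)
        self.text_proj = nn.Linear(text_dim, fused_dim)
        # The gate predicts, per dimension, how much of each modality to keep.
        self.gate = nn.Sequential(nn.Linear(2 * fused_dim, fused_dim), nn.Sigmoid())

    def forward(self, vision_emb, text_emb):
        v = self.vision_proj(vision_emb)
        t = self.text_proj(text_emb)
        g = self.gate(torch.cat([v, t], dim=-1))
        return g * v + (1.0 - g) * t  # convex per-dimension blend of the two modalities

# Example: fuse a 1536-d vision embedding with a 384-d text embedding into 512 dimensions
# (dimensions are placeholders, not the actual backbone output sizes).
fusion = GatedFusion(vision_dim=1536, text_dim=384, fused_dim=512)
fused = fusion(torch.randn(8, 1536), torch.randn(8, 384))
print(fused.shape)  # torch.Size([8, 512])
```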

7 January 2026


Reprints of Collections

Computational Intelligence in Remote Sensing (2nd Edition)
Editors: Yue Wu, Kai Qin, Maoguo Gong, Qiguang Miao


J. Imaging - ISSN 2313-433X