
Journal of Imaging

Journal of Imaging is an international, multi/interdisciplinary, peer-reviewed, open access journal of imaging techniques, published online monthly by MDPI.

Indexed in PubMed | Quartile Ranking JCR - Q2 (Imaging Science and Photographic Technology)

All Articles (2,181)

The use of multispectral (MS) imaging is growing rapidly across many research fields. However, one obstacle researchers face is the limited availability of multispectral image databases, which stems from two factors: multispectral cameras are a relatively recent technology, and they are not widely available. The development of an image database is therefore crucial for research on multispectral imaging. This study uses two high-end visible and near-infrared MS cameras based on filter-array technology, developed on the PImRob platform at the University of Burgundy, to provide a freely accessible database. The database includes high-resolution MS images of different plants and weeds, along with annotated images and masks. Both the original raw images and the demosaicked images are provided. The database supports research on demosaicking techniques, segmentation algorithms, and deep learning for crop/weed discrimination.
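As a sketch of what demosaicking such filter-array data involves, the snippet below interpolates a hypothetical 2×2 multispectral filter array (MSFA) mosaic into a full four-band cube using normalized bilinear filtering. The pattern layout and band count are illustrative assumptions, not the actual specification of the cameras in the study.

```python
import numpy as np
from scipy.signal import convolve2d

def demosaick_bilinear(raw, pattern_size=2):
    """Bilinear demosaicking of a multispectral filter array (MSFA) image.

    raw: 2-D array where each pixel samples one of pattern_size**2 bands,
         arranged in a repeating pattern_size x pattern_size mosaic
         (band index at (y, x) is (y % n) * n + (x % n), an assumed layout).
    Returns an (H, W, n_bands) cube with missing samples interpolated.
    """
    h, w = raw.shape
    n = pattern_size
    cube = np.zeros((h, w, n * n), dtype=float)
    kernel = np.ones((2 * n - 1, 2 * n - 1))  # covers one full mosaic period
    for b in range(n * n):
        dy, dx = divmod(b, n)
        mask = np.zeros((h, w), dtype=bool)
        mask[dy::n, dx::n] = True             # pixels where band b was sampled
        band = np.where(mask, raw, 0.0)
        # Normalized box filter: sum of known samples / count of known samples
        num = convolve2d(band, kernel, mode="same")
        den = convolve2d(mask.astype(float), kernel, mode="same")
        cube[..., b] = num / den
    return cube
```

For a constant-valued band the normalized filter reproduces the constant exactly, which makes the routine easy to sanity-check before trying it on real raw frames.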

20 December 2025

Set-up for MS image acquisition (this camera was supported by the EU PENTA/CAVIAR project).

Non-Destructive Mangosteen Volume Estimation via Multi-View Instance Segmentation and Hybrid Geometric Modeling

  • Wattanapong Kurdthongmee,
  • Arsanchai Sukkuea and
  • Md Eshrat E Alahi
  • + 1 author

In precision agriculture, accurate, non-destructive estimation of fruit volume is crucial for quality grading, yield prediction, and post-harvest management. While vision-based methods have shown promise, fruits with complex geometry, such as mangosteen (Garcinia mangostana L.), remain difficult because their large calyx defeats traditional shape-modeling methods. Conventional geometric solutions such as ellipsoid approximations, diameter–height estimation, and shape-from-silhouette reconstruction often fail because the irregular calyx generates asymmetric protrusions that violate their underlying shape assumptions. We present a framework that combines multi-view instance segmentation with hybrid geometric feature modeling to estimate mangosteen volume from conventional 2D images. A You Only Look Once (YOLO)-based segmentation model was employed to explicitly separate the fruit body from the calyx; including the calyx introduced dense geometric noise and reduced model performance (R² < 0.40). We trained eight regression models on a curated and augmented 900-image dataset (training N = 720, test N = 180). The models included single-view and multi-view geometric regressors (V ∝ A^1.5), polynomial hybrid configurations, ellipsoid-based approximations, and hybrid feature formulations. Multi-view models consistently outperformed single-view models, improving average predictive accuracy from R² = 0.6493 to R² = 0.7290. The best model was a hybrid linear regression combining side- and bottom-area features (A_s^1.5, A_b^1.5) with an ellipsoid-derived volume estimate (V_ellipsoid), yielding R² = 0.7290, a Mean Absolute Percentage Error (MAPE) of 16.04%, and a Root Mean Square Error (RMSE) of 31.9 cm³ on the test set.
These results establish the proposed approach as a low-cost, interpretable, and flexible method for real-time fruit volume estimation, ready for integration into automated sorting and grading systems in post-harvest processing pipelines.
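The hybrid feature idea can be illustrated with a minimal sketch: raise the projected-area features to the 1.5 power (so they scale like a volume), add an ellipsoid volume estimate, and fit a linear model by least squares. Function and variable names here are hypothetical; the paper's exact feature definitions and training pipeline are not reproduced.

```python
import numpy as np

def ellipsoid_volume(d_side, d_bottom, height):
    """Ellipsoid approximation V = (4/3)*pi*a*b*c from two diameters and a height."""
    return (4.0 / 3.0) * np.pi * (d_side / 2) * (d_bottom / 2) * (height / 2)

def fit_hybrid_volume_model(A_s, A_b, V_ell, V_true):
    """Least-squares fit of V ~ w0 + w1*A_s^1.5 + w2*A_b^1.5 + w3*V_ell."""
    X = np.column_stack([np.ones_like(A_s), A_s**1.5, A_b**1.5, V_ell])
    w, *_ = np.linalg.lstsq(X, V_true, rcond=None)
    return w

def predict_volume(w, A_s, A_b, V_ell):
    """Apply the fitted hybrid linear model to new area/volume features."""
    X = np.column_stack([np.ones_like(A_s), A_s**1.5, A_b**1.5, V_ell])
    return X @ w
```

The appeal of this formulation is that every regressor has a physical interpretation (areas and an ellipsoid volume), which is what makes the resulting model low-cost and easy to inspect.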

19 December 2025

Workflow diagram for Phase 1: image acquisition, instance segmentation, and geometric feature extraction. This phase includes multi-view image capture, YOLOv8-based segmentation, per-image calibration, as well as derivation of area and dimension-based features.

A Structured Review and Quantitative Profiling of Public Brain MRI Datasets for Foundation Model Development

  • Minh Sao Khue Luu,
  • Margaret V. Benedichuk and
  • Ekaterina I. Roppert
  • + 2 authors

The development of foundation models for brain MRI depends critically on the scale, diversity, and consistency of available data, yet systematic assessments of these factors remain scarce. In this study, we analyze 54 publicly accessible brain MRI datasets encompassing 538,031 scans to provide a structured, multi-level overview tailored to foundation model development. At the dataset level, we characterize modality composition, disease coverage, and dataset scale, revealing strong imbalances between large healthy cohorts and smaller clinical populations. At the image level, we quantify voxel spacing, orientation, and intensity distributions across 14 representative datasets, demonstrating substantial heterogeneity that can influence representation learning. We then perform a quantitative evaluation of preprocessing variability, examining how intensity normalization, bias field correction, skull stripping, spatial registration, and interpolation alter voxel statistics and geometry. While these steps improve within-dataset consistency, residual differences persist between datasets. Finally, a feature-space case study using a 3D DenseNet121 shows measurable residual covariate shift after standardized preprocessing, confirming that harmonization alone cannot eliminate inter-dataset bias. Together, these analyses provide a unified characterization of variability in public brain MRI resources and emphasize the need for preprocessing-aware and domain-adaptive strategies in the design of generalizable brain MRI foundation models.
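Two of the preprocessing steps the study examines, intensity normalization and resampling to a common voxel spacing, can be sketched as follows. This is a generic illustration using z-score normalization and trilinear resampling, not the authors' exact pipeline, and the function names are my own.

```python
import numpy as np
from scipy.ndimage import zoom

def zscore_normalize(volume, mask=None):
    """Z-score intensity normalization; statistics are computed over the
    masked voxels (e.g. brain only) if a boolean mask is given."""
    vox = volume[mask] if mask is not None else volume
    return (volume - vox.mean()) / (vox.std() + 1e-8)

def resample_to_spacing(volume, spacing, target=(1.0, 1.0, 1.0)):
    """Resample a 3-D volume from `spacing` (mm per voxel, per axis)
    to `target` spacing using trilinear interpolation."""
    factors = [s / t for s, t in zip(spacing, target)]
    return zoom(volume, factors, order=1)
```

As the paper's feature-space analysis suggests, steps like these reduce within-dataset variability but do not by themselves remove inter-dataset covariate shift.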

18 December 2025

Distribution of subjects by disease category after removing the undefined “Multiple Diseases” group. The x-axis uses a logarithmic scale to enable visualization across several orders of magnitude, from hundreds to tens of thousands of subjects.

Existing salient object detection methods for optical remote sensing images still face limitations due to complex background variations, significant scale discrepancies among targets, severe background interference, and diverse topological structures. On the one hand, the feature transmission process often neglects the constraints and complementary effects of high-level features on low-level features, leading to insufficient feature interaction and weakened representation. On the other hand, decoder architectures generally rely on simple cascaded structures, which fail to adequately exploit contextual information. To address these challenges, this study proposes a Hierarchical Semantic Interaction Module to enhance salient object detection in optical remote sensing scenarios. The module introduces foreground content modeling and a hierarchical semantic interaction mechanism within a multi-scale feature space, reinforcing the synergy and complementarity among features at different levels and effectively highlighting multi-scale, multi-type salient regions against complex backgrounds. Extensive experiments on multiple optical remote sensing datasets demonstrate the effectiveness of the proposed method. On the EORSSD dataset, the full model integrating both CA and PA modules improves the max F-measure from 0.8826 to 0.9100 (↑2.74%), the max E-measure from 0.9603 to 0.9727 (↑1.24%), and the S-measure from 0.9026 to 0.9295 (↑2.69%) over the baseline, verifying the robustness and strong generalization capability of the method in complex remote sensing scenarios.
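The core interaction idea, high-level semantics constraining and complementing low-level detail, can be sketched in a few lines. This is an illustrative NumPy mock-up, not the paper's architecture: the sigmoid gate, nearest-neighbour 2× upsampling, and residual fusion are all assumptions made purely for demonstration.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def upsample2x(feat):
    """Nearest-neighbour 2x spatial upsampling of a (C, H, W) feature map."""
    return feat.repeat(2, axis=1).repeat(2, axis=2)

def semantic_gate(low, high):
    """Gate low-level features with a semantic mask derived from upsampled
    high-level features, then fuse by residual addition so that fine detail
    is kept while background responses are relatively suppressed.

    low:  (C, 2H, 2W) low-level feature map
    high: (C,  H,  W) high-level feature map
    """
    high_up = upsample2x(high)                            # (C, 2H, 2W)
    gate = sigmoid(high_up.mean(axis=0, keepdims=True))   # (1, 2H, 2W) in (0, 1)
    return low * gate + low                               # residual fusion
```

Because the gate lies in (0, 1), the fused output stays between 1× and 2× the low-level response, so semantic guidance re-weights detail rather than overwriting it, which is the "constraint plus complement" behaviour the abstract describes.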

17 December 2025

Network structure diagram of the HSIMNet model.

Reprints of Collections

Computational Intelligence in Remote Sensing (Reprint, 2nd Edition)
Editors: Yue Wu, Kai Qin, Maoguo Gong, Qiguang Miao

J. Imaging - ISSN 2313-433X