
Journal of Imaging

Journal of Imaging is an international, multi/interdisciplinary, peer-reviewed, open access journal of imaging techniques, published online monthly by MDPI.

Indexed in PubMed | Quartile Ranking JCR - Q2 (Imaging Science and Photographic Technology)


Prostate cancer (PCa) is the most common malignancy in men worldwide. Multiparametric MRI (mpMRI) improves the detection of clinically significant PCa (csPCa); however, it remains limited by false-positive findings and inter-observer variability. Time-dependent diffusion (TDD) MRI provides microstructural information that may enhance csPCa characterization beyond standard mpMRI. This prospective observational diagnostic accuracy study protocol describes the evaluation of PROS-TD-AI, an in-house developed AI workflow integrating TDD-derived metrics for zone-aware csPCa risk prediction. PROS-TD-AI will be compared with PI-RADS v2.1 in routine clinical imaging using MRI-targeted prostate biopsy as the reference standard.
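The protocol's core comparison — a continuous AI risk score versus ordinal PI-RADS v2.1 categories, each judged against MRI-targeted biopsy as the reference standard — can be illustrated with a small diagnostic-accuracy sketch. Everything below is hypothetical (random scores, arbitrary thresholds); PROS-TD-AI's actual scoring model and operating points are not described in this listing.

```python
import numpy as np
from sklearn.metrics import roc_auc_score, confusion_matrix

rng = np.random.default_rng(0)
# Hypothetical cohort: biopsy outcome (1 = csPCa) as the reference standard.
y_biopsy = rng.integers(0, 2, size=200)
# Hypothetical continuous AI risk scores and ordinal PI-RADS categories (1-5).
ai_score = np.clip(y_biopsy * 0.4 + rng.normal(0.4, 0.25, size=200), 0, 1)
pirads = np.clip((y_biopsy * 1.5 + rng.normal(3, 1, size=200)).round(), 1, 5)

def diagnostic_summary(y_true, score, threshold):
    """Sensitivity/specificity at a working threshold, plus threshold-free AUC."""
    pred = (score >= threshold).astype(int)
    tn, fp, fn, tp = confusion_matrix(y_true, pred).ravel()
    return {
        "sensitivity": tp / (tp + fn),
        "specificity": tn / (tn + fp),
        "auc": roc_auc_score(y_true, score),
    }

print("AI workflow :", diagnostic_summary(y_biopsy, ai_score, 0.5))
print("PI-RADS >= 3:", diagnostic_summary(y_biopsy, pirads, 3))
```

The ROC-AUC is threshold-free, which is why diagnostic-accuracy studies often report it alongside sensitivity/specificity at a prespecified cut-off such as PI-RADS ≥ 3.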

22 January 2026

Study flow and imaging protocols. Key exclusion criteria and contraindications are listed on the left. TDD-derived microstructural parameters will be extracted from all patients undergoing mpMRI. mpMRI, multiparametric MRI; TDD, time-dependent diffusion; PI-RADS, Prostate Imaging–Reporting and Data System; csPCa, clinically significant prostate cancer; non-csPCa, non-clinically significant prostate cancer.

Single-frequency ground penetrating radar (GPR) systems are fundamentally constrained by a trade-off between penetration depth and resolution, alongside issues like narrow bandwidth and ringing interference. To overcome this limitation, we have developed a multi-frequency data fusion technique grounded in convolutional sparse representation (CSR). The proposed methodology involves spatially registering multi-frequency GPR signals and fusing them via a CSR framework, where the convolutional dictionaries are derived from simulated high-definition GPR data. Extensive evaluation using information entropy, average gradient, mutual information, and visual information fidelity demonstrates the superiority of our method over traditional fusion approaches (e.g., weighted average, PCA, 2D wavelets). Tests on simulated and real data confirm that our CSR-based fusion successfully synergizes the deep penetration of low frequencies with the fine resolution of high frequencies, leading to substantial gains in GPR image clarity and interpretability.
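Two of the fusion-quality metrics named in the abstract, information entropy and average gradient, are straightforward to compute. The sketch below is illustrative only — it is not the paper's CSR pipeline, and the "low-frequency" and "high-frequency" arrays are random placeholders (blocky vs. fine-grained noise) standing in for registered GPR profiles.

```python
import numpy as np

def information_entropy(img, bins=256):
    """Shannon entropy of the grey-level histogram (higher = more information)."""
    hist, _ = np.histogram(img, bins=bins, range=(img.min(), img.max() + 1e-12))
    p = hist / hist.sum()
    p = p[p > 0]
    return float(-(p * np.log2(p)).sum())

def average_gradient(img):
    """Mean magnitude of local intensity gradients (higher = sharper detail)."""
    gy, gx = np.gradient(img.astype(float))
    return float(np.mean(np.sqrt((gx ** 2 + gy ** 2) / 2.0)))

rng = np.random.default_rng(1)
# Placeholder profiles: blocky/smooth vs. fine-grained detail.
low_freq = np.kron(rng.normal(size=(8, 8)), np.ones((8, 8)))
high_freq = rng.normal(size=(64, 64))
fused = 0.5 * (low_freq + high_freq)  # naive averaging baseline for comparison
for name, im in [("low", low_freq), ("high", high_freq), ("fused", fused)]:
    print(name, round(information_entropy(im), 3), round(average_gradient(im), 3))
```

A fusion method is considered successful under these metrics when the fused image scores at least as well as the sharper input while retaining the deeper input's content; mutual information and visual information fidelity (also cited in the abstract) additionally measure how much of each source survives in the result.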

22 January 2026

GPR data fusion process.

Interpretable Diagnosis of Pulmonary Emphysema on Low-Dose CT Using ResNet Embeddings

  • Talshyn Sarsembayeva,
  • Madina Mansurova and
  • Stepan Serebryakov
  • + 1 author

Accurate and interpretable detection of pulmonary emphysema on low-dose computed tomography (LDCT) remains a critical challenge for large-scale screening and population health studies. This work proposes a quality-controlled and interpretable deep learning pipeline for emphysema assessment using ResNet-152 embeddings. The pipeline integrates automated lung segmentation, quality-control filtering, and extraction of 2048-dimensional embeddings from mid-lung patches, followed by analysis using logistic regression, LASSO, and recursive feature elimination (RFE). The embeddings are further fused with quantitative CT (QCT) markers, including %LAA, Perc15, and total lung volume (TLV), to enhance robustness and interpretability. Bootstrapped validation demonstrates strong diagnostic performance (ROC-AUC = 0.996, PR-AUC = 0.962, balanced accuracy = 0.931) with low computational cost. The proposed approach shows that ResNet embeddings pretrained on CT data can be effectively reused without retraining for emphysema characterization, providing a reproducible and explainable framework suitable for research and screening support in population-level LDCT analysis.
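The feature-selection stage described above — sparse (LASSO-style) logistic regression to prune a 2048-dimensional embedding, followed by RFE on the survivors — can be sketched with scikit-learn. The data here is synthetic (random features standing in for ResNet-152 embeddings plus three QCT-marker columns); the dimensions follow the abstract, but the regularization strength and feature counts are arbitrary assumptions.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.feature_selection import RFE
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

# Synthetic stand-in: 2048 embedding dims + 3 QCT markers (%LAA, Perc15, TLV).
X, y = make_classification(n_samples=300, n_features=2048 + 3,
                           n_informative=20, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0, stratify=y)

# L1-penalized logistic regression zeroes out uninformative embedding dims.
lasso = LogisticRegression(penalty="l1", solver="liblinear", C=0.5).fit(X_tr, y_tr)
kept = np.flatnonzero(lasso.coef_[0])

# RFE refines the surviving features; evaluate with ROC-AUC on held-out data.
rfe = RFE(LogisticRegression(max_iter=1000), n_features_to_select=10, step=0.2)
rfe.fit(X_tr[:, kept], y_tr)
auc = roc_auc_score(y_te, rfe.predict_proba(X_te[:, kept])[:, 1])
print(f"{len(kept)} features after LASSO, test ROC-AUC = {auc:.3f}")
```

The two-stage design is the usual compromise: the L1 penalty cheaply discards most of the 2048 dimensions, after which the more expensive recursive elimination becomes tractable.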

21 January 2026

Overview of the explainable emphysema analysis pipeline. The process includes lung segmentation and automatic labeling based on low attenuation area percentage (LAA%), deep feature extraction with ResNet152, visualization with t-SNE/PCA, and interpretation using logistic and ridge regression.

Meibomian gland dysfunction (MGD) is a leading cause of dry eye disease and can be assessed through the degree of gland atrophy. While deep learning (DL) has advanced meibomian gland (MG) segmentation and MGD classification, existing methods treat these tasks independently and suffer from domain shift across multi-center imaging devices. We propose ADAM-Net, an attention-guided unsupervised domain adaptation multi-task framework that jointly models MG segmentation and MGD classification. Our model introduces structure-aware multi-task learning and anatomy-guided attention to enhance feature sharing, suppress background noise, and improve glandular region perception. For the cross-domain tasks MGD-1K→{K5M, CR-2, LV II}, this study systematically evaluates the overall performance of ADAM-Net from multiple perspectives. The experimental results show that ADAM-Net achieves classification accuracies of 77.93%, 74.86%, and 81.77% on the target domains, significantly outperforming current mainstream unsupervised domain adaptation (UDA) methods. The F1-score and the Matthews correlation coefficient (MCC) indicate that the model maintains robust discriminative capability even under class-imbalanced scenarios. t-SNE visualizations further validate its cross-domain feature alignment capability. These results demonstrate that ADAM-Net exhibits strong robustness and interpretability in multi-center scenarios and provides an effective solution for automated MGD assessment.
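The abstract's choice of F1 and MCC alongside accuracy matters under class imbalance: a trivial majority-class predictor can score high accuracy while F1 and MCC collapse to zero. The toy example below (hypothetical labels, unrelated to the MGD-1K data) shows why these metrics were reported.

```python
import numpy as np
from sklearn.metrics import accuracy_score, f1_score, matthews_corrcoef

# Hypothetical imbalanced binary labels: 90 negatives, 10 positives.
y_true = np.array([0] * 90 + [1] * 10)
y_majority = np.zeros(100, dtype=int)  # always predicts the majority class
y_informed = y_true.copy()
y_informed[:5] = 1                     # finds all positives, with 5 false positives

for name, y_pred in [("majority", y_majority), ("informed", y_informed)]:
    print(name,
          "acc=", accuracy_score(y_true, y_pred),
          "F1=", round(f1_score(y_true, y_pred), 3),
          "MCC=", round(matthews_corrcoef(y_true, y_pred), 3))
```

The majority-class baseline reaches 90% accuracy yet F1 = 0 and MCC = 0, while the informed model's F1 (0.8) and MCC (~0.79) reflect its genuine discriminative ability — the behavior the abstract's imbalance-robustness claim relies on.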

21 January 2026

Overall architecture of the proposed ADAM-Net for attention-guided multi-task unsupervised domain adaptation.


Reprints of Collections

Advances in Retinal Image Processing (Reprint)

Editors: P. Jidesh, Vasudevan (Vengu) Lakshminarayanan


J. Imaging - ISSN 2313-433X