Celebrating the 10th Anniversary of the Journal of Imaging

A special issue of Journal of Imaging (ISSN 2313-433X).

Deadline for manuscript submissions: 31 December 2025

Special Issue Editors


Prof. Dr. Raimondo Schettini
Guest Editor
Department of Informatics, Systems and Communication, University of Milano-Bicocca, viale Sarca, 336, 20126 Milano, Italy
Interests: color imaging; image and video processing; analysis and classification; visual information systems; image quality

Dr. Guanghui (Richard) Wang
Guest Editor
Department of Computer Science, Toronto Metropolitan University, Toronto, ON M5B 2K3, Canada
Interests: computer vision; image processing; machine learning; intelligent systems

Special Issue Information

Dear Colleagues,

The MDPI Journal of Imaging is delighted to announce a Special Issue to commemorate its 10th anniversary. Over the past decade, the journal has been at the forefront of publishing high-quality research across all areas of imaging science and technology. This milestone reflects the invaluable contributions of our authors, reviewers, and readers worldwide.

To mark this occasion, we invite submissions for a Special Issue showcasing the most innovative, impactful, and visionary research in imaging science. We welcome contributions from both well-established experts and emerging researchers, aiming to provide a comprehensive view of the current state and future directions of imaging. Submissions are encouraged in the following areas:

- Image Processing and Analysis: New methodologies, algorithms, and applications.
- Computer Vision: Advances in object detection, recognition, and scene understanding.
- Multimodal Imaging: Integration and analysis of data from different imaging modalities.
- Medical Imaging: Novel techniques for diagnosis, treatment, and monitoring.
- Remote Sensing and Satellite Imaging: Applications in environmental monitoring and earth observation.
- Image Quality and Enhancement: Perception-driven quality metrics and enhancement techniques.
- Imaging Systems and Devices: Advances in hardware and software systems.
- Emerging Applications: Imaging in art, archeology, cultural heritage, and beyond.
- Artificial Intelligence in Imaging: Deep learning and other AI techniques applied to imaging tasks.
- Ethics and Sustainability in Imaging: Addressing challenges related to fairness, privacy, and energy efficiency.

We invite you to join us in celebrating this important milestone by contributing to a special publication that highlights the journal's legacy and achievements. This is a unique opportunity to share your groundbreaking research with a global audience, make an impact in the field of imaging science, and play a significant role in shaping the future of this ever-evolving discipline.

All submissions will undergo a rigorous peer-review process. Authors are encouraged to follow the Journal of Imaging author guidelines (https://www.mdpi.com/journal/jimaging/instructions) when preparing their manuscripts.

We look forward to receiving your contributions and celebrating this significant milestone together!

Prof. Dr. Raimondo Schettini
Dr. Guanghui (Richard) Wang
Guest Editors

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once you are registered, go to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the Special Issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Journal of Imaging is an international peer-reviewed open access monthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 1800 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • image processing
  • image analysis
  • computer vision
  • deep learning
  • machine learning
  • object detection
  • multimodal imaging
  • medical imaging
  • remote sensing
  • image quality
  • imaging systems and devices
  • artificial intelligence

Benefits of Publishing in a Special Issue

  • Ease of navigation: Grouping papers by topic helps scholars navigate broad scope journals more efficiently.
  • Greater discoverability: Special Issues support the reach and impact of scientific research. Articles in Special Issues are more discoverable and cited more frequently.
  • Expansion of research network: Special Issues facilitate connections among authors, fostering scientific collaborations.
  • External promotion: Articles in Special Issues are often promoted through the journal's social media, increasing their visibility.
  • e-Book format: Special Issues with more than 10 articles can be published as dedicated e-books, ensuring wide and rapid dissemination.

Further information on MDPI's Special Issue policies is available on the MDPI website.

Published Papers (4 papers)


Research

17 pages, 2046 KiB  
Article
Breast Lesion Detection Using Weakly Dependent Customized Features and Machine Learning Models with Explainable Artificial Intelligence
by Simona Moldovanu, Dan Munteanu, Keka C. Biswas and Luminita Moraru
J. Imaging 2025, 11(5), 135; https://doi.org/10.3390/jimaging11050135 - 28 Apr 2025
Abstract
This research proposes a novel strategy for accurate breast lesion classification that combines explainable artificial intelligence (XAI), machine learning (ML) classifiers, and customized weakly dependent features from ultrasound (BU) images. Two new weakly dependent feature classes are proposed to improve the diagnostic accuracy and diversify the training data. These are based on image intensity variations and the area of bounded partitions and provide complementary rather than overlapping information. ML classifiers such as Random Forest (RF), Extreme Gradient Boosting (XGB), Gradient Boosting Classifiers (GBC), and LASSO regression were trained with both customized feature classes. To validate the reliability of our study and the results obtained, we conducted a statistical analysis using the McNemar test. Later, an XAI model was combined with ML to tackle the influence of certain features, the constraints of feature selection, and the interpretability capabilities across various ML models. LIME (Local Interpretable Model-Agnostic Explanations) and SHAP (SHapley Additive exPlanations) models were used in the XAI process to enhance the transparency and interpretation in clinical decision-making. The results revealed common relevant features for the malignant class, consistently identified by all of the classifiers, and for the benign class. However, we observed variations in the feature importance rankings across the different classifiers. Furthermore, our study demonstrates that the correlation between dependent features does not impact explainability. Full article
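The abstract above reports a McNemar test to validate differences between classifiers evaluated on the same test set. As an illustrative, hedged sketch (not the authors' code; the discordant-pair counts below are invented), the test statistic with continuity correction can be computed in plain Python:

```python
# Illustrative McNemar's test for comparing two classifiers on one test set.
# b: cases classifier A got right and classifier B got wrong; c: the reverse.
from math import erf, sqrt

def mcnemar_statistic(b: int, c: int) -> float:
    """Chi-square statistic with Edwards' continuity correction."""
    if b + c == 0:
        return 0.0
    return (abs(b - c) - 1) ** 2 / (b + c)

def chi2_sf_1df(x: float) -> float:
    """Survival function of chi-square with 1 dof: P(X > x) = 2 * (1 - Phi(sqrt(x)))."""
    return 2.0 * (1.0 - 0.5 * (1.0 + erf(sqrt(x) / sqrt(2.0))))

stat = mcnemar_statistic(b=25, c=10)  # hypothetical disagreement counts
p = chi2_sf_1df(stat)                 # small p suggests the classifiers differ
```

A library implementation (e.g., statsmodels' `mcnemar`) would normally be used in practice; the point here is only the quantity being tested.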
(This article belongs to the Special Issue Celebrating the 10th Anniversary of the Journal of Imaging)

23 pages, 37586 KiB  
Article
Revisiting Wölfflin in the Age of AI: A Study of Classical and Baroque Composition in Generative Models
by Adrien Deliege, Maria Giulia Dondero and Enzo D’Armenio
J. Imaging 2025, 11(5), 128; https://doi.org/10.3390/jimaging11050128 - 22 Apr 2025
Abstract
This study explores how contemporary text-to-image models interpret and generate Classical and Baroque styles under Wölfflin’s framework—two categories that are atemporal and transversal across media. Our goal is to see whether generative AI can replicate the nuanced stylistic cues that art historians attribute to them. We prompted two popular models (DALL•E and Midjourney) using explicit style labels (e.g., “baroque” and “classical”) as well as more implicit cues (e.g., “dynamic”, “static”, or reworked Wölfflin descriptors). We then collected expert ratings and conducted broader qualitative reviews to assess how each output aligned with Wölfflin’s characteristics. Our findings suggest that the term “baroque” usually evokes features recognizable in typically historical Baroque artworks, while “classical” often yields less distinct results, particularly when a specified genre (portrait, still life) imposes a centered, closed-form composition. Removing explicit style labels may produce highly abstract images, revealing that Wölfflin’s descriptors alone may be insufficient to convey Classical or Baroque styles efficiently. Interestingly, the term “dynamic” gives rather chaotic images, yet this chaos is somehow ordered, centered, and has an almost Classical feel. Altogether, these observations highlight the complexity of bridging canonical stylistic frameworks and contemporary AI training biases, underscoring the need to update or refine Wölfflin’s atemporal categories to accommodate how generative models—and modern visual culture—reinterpret Classical and Baroque. Full article
(This article belongs to the Special Issue Celebrating the 10th Anniversary of the Journal of Imaging)

24 pages, 11715 KiB  
Article
Assessing Cancer Presence in Prostate MRI Using Multi-Encoder Cross-Attention Networks
by Avtantil Dimitriadis, Grigorios Kalliatakis, Richard Osuala, Dimitri Kessler, Simone Mazzetti, Daniele Regge, Oliver Diaz, Karim Lekadir, Dimitrios Fotiadis, Manolis Tsiknakis, Nikolaos Papanikolaou, ProCAncer-I Consortium and Kostas Marias
J. Imaging 2025, 11(4), 98; https://doi.org/10.3390/jimaging11040098 - 26 Mar 2025
Abstract
Prostate cancer (PCa) is currently the second most prevalent cancer among men. Accurate diagnosis of PCa can provide effective treatment for patients and reduce mortality. Previous works have merely focused on either lesion detection or lesion classification of PCa from magnetic resonance imaging (MRI). In this work we focus on a critical, yet underexplored task of the PCa clinical workflow: distinguishing cases with cancer presence (pathologically confirmed PCa patients) from conditions with no suspicious PCa findings (no cancer presence). To this end, we conduct large-scale experiments for this task for the first time by adopting and processing the multi-centric ProstateNET Imaging Archive which contains more than 6 million image representations of PCa from more than 11,000 PCa cases, representing the largest collection of PCa MR images. Bi-parametric MR (bpMRI) images of 4504 patients alongside their clinical variables are used for training, while the architectures are evaluated on two hold-out test sets of 975 retrospective and 435 prospective patients. Our proposed multi-encoder-cross-attention-fusion architecture achieved a promising area under the receiver operating characteristic curve (AUC) of 0.91. This demonstrates our method’s capability of fusing complex bi-parametric imaging modalities and enhancing model robustness, paving the way towards the clinical adoption of deep learning models for accurately determining the presence of PCa across patient populations. Full article
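Cross-attention is the fusion mechanism named in the abstract above. A minimal NumPy sketch of scaled dot-product cross-attention between two hypothetical modality encoders follows; this is not the authors' multi-encoder architecture, only the underlying operation, with made-up token counts and dimensions:

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def cross_attention(q_feats, kv_feats, Wq, Wk, Wv):
    """Tokens from one modality (queries) attend to tokens from another
    modality (keys/values); output has one row per query token."""
    Q = q_feats @ Wq
    K = kv_feats @ Wk
    V = kv_feats @ Wv
    scores = Q @ K.T / np.sqrt(Q.shape[-1])  # scaled dot-product
    return softmax(scores, axis=-1) @ V

rng = np.random.default_rng(0)
d = 16
t2w = rng.standard_normal((8, d))  # hypothetical T2w encoder tokens
adc = rng.standard_normal((8, d))  # hypothetical ADC encoder tokens
Wq, Wk, Wv = (rng.standard_normal((d, d)) * 0.1 for _ in range(3))
fused = cross_attention(t2w, adc, Wq, Wk, Wv)  # shape (8, 16)
```

In a trained network the projection matrices are learned and the fused tokens would feed a classification head; here they are random, purely to show the data flow.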
(This article belongs to the Special Issue Celebrating the 10th Anniversary of the Journal of Imaging)

21 pages, 4293 KiB  
Article
A Highly Robust Encoder–Decoder Network with Multi-Scale Feature Enhancement and Attention Gate for the Reduction of Mixed Gaussian and Salt-and-Pepper Noise in Digital Images
by Milan Tripathi, Waree Kongprawechnon and Toshiaki Kondo
J. Imaging 2025, 11(2), 51; https://doi.org/10.3390/jimaging11020051 - 10 Feb 2025
Abstract
Image denoising is crucial for correcting distortions caused by environmental factors and technical limitations. We propose a novel and highly robust encoder–decoder network (HREDN) for effectively removing mixed salt-and-pepper and Gaussian noise from digital images. HREDN integrates a multi-scale feature enhancement block in the encoder, allowing the network to capture features at various scales and handle complex noise patterns more effectively. To mitigate information loss during encoding, skip connections transfer essential feature maps from the encoder to the decoder, preserving structural details. However, skip connections can also propagate redundant information. To address this, we incorporate attention gates within the skip connections, ensuring that only relevant features are passed to the decoding layers. We evaluate the robustness of the proposed method across facial, medical, and remote sensing domains. The experimental results demonstrate that HREDN excels in preserving edge details and structural features in denoised images, outperforming state-of-the-art techniques in both qualitative and quantitative measures. Statistical analysis further highlights the model’s ability to effectively remove noise in diverse, complex scenarios with images of varying resolutions across multiple domains. Full article
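For readers who want to reproduce the mixed-noise setting described above, here is a minimal sketch that corrupts a clean image with Gaussian noise followed by salt-and-pepper noise. The noise levels and image are illustrative, not those used in the paper:

```python
import numpy as np

def add_mixed_noise(img, sigma=0.1, sp_amount=0.05, rng=None):
    """Add Gaussian noise (std sigma), then salt-and-pepper noise
    (fraction sp_amount of pixels), to a float image in [0, 1]."""
    rng = rng or np.random.default_rng(0)
    noisy = np.clip(img + rng.normal(0.0, sigma, img.shape), 0.0, 1.0)
    mask = rng.random(img.shape)
    noisy[mask < sp_amount / 2] = 0.0       # pepper: force to black
    noisy[mask > 1 - sp_amount / 2] = 1.0   # salt: force to white
    return noisy

clean = np.full((64, 64), 0.5)   # flat gray test image (illustrative)
noisy = add_mixed_noise(clean)
```

Applying the impulse noise after the Gaussian noise is one common convention; papers differ on the order and on whether the impulses overwrite or add.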
(This article belongs to the Special Issue Celebrating the 10th Anniversary of the Journal of Imaging)
