Self-Supervised Learning for Image Processing and Analysis

A special issue of Journal of Imaging (ISSN 2313-433X). This special issue belongs to the section "Image and Video Processing".

Deadline for manuscript submissions: 31 July 2025 | Viewed by 4445

Special Issue Editors

Center for Biotechnology and Interdisciplinary Studies, Biomedical Imaging Center, Department of Biomedical Engineering, Rensselaer Polytechnic Institute, Troy, NY 12180, USA
Interests: unsupervised/self-supervised learning; weakly supervised learning; representation learning; clustering; biomedical imaging and analysis
School of Telecommunication and Information Engineering, Xi'an University of Posts and Telecommunications, Xi'an 710121, China
Interests: weakly supervised learning; representation learning; image super-resolution

School of Information Science and Technology, Northwest University, Xi’an 710127, China
Interests: biomedical image analysis and understanding; self-supervised learning; representation learning

Engineering Research Center of Molecular and Neuro Imaging of Ministry of Education, School of Life Science and Technology, Xidian University, Xi’an 710126, China
Interests: image processing and analysis; remote sensing image processing; artificial intelligence applications on satellite images

Special Issue Information

Dear Colleagues,

In the rapidly evolving field of deep learning, self-supervised learning has emerged as a groundbreaking approach that leverages unlabeled data to learn meaningful representations. This Special Issue aims to showcase the latest advancements in self-supervised deep learning techniques and their applications in image processing and analysis. We invite researchers and practitioners to submit original research and review articles that contribute to the theoretical foundations, algorithmic advancements, and diverse applications of self-supervised deep learning in image processing and analysis.

Topics of Interest

We welcome submissions on a wide range of topics related to self-supervised deep learning for image processing and analysis, including, but not limited to, the following:

  • Novel self-supervised learning models and algorithms for image processing.
  • Advances in contrastive learning, clustering, and generative models in the context of image analysis.
  • Applications of self-supervised learning in medical imaging, remote sensing, and multimedia analysis.
  • Self-supervised learning for image segmentation, classification, and enhancement.
  • The integration of self-supervised learning with other unsupervised, semi-supervised, and supervised learning paradigms.
  • Evaluation metrics and benchmarks for self-supervised learning in image processing.
  • The interpretability and explainability of self-supervised deep learning models for image analysis.
  • Challenges and opportunities in deploying self-supervised learning models in real-world scenarios.

Dr. Chuang Niu
Dr. Qian Wang
Dr. Xin Cao
Dr. Shenghan Ren
Guest Editors

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once registered, proceed to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the Special Issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Journal of Imaging is an international peer-reviewed open access monthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 1800 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • self-supervised learning
  • unsupervised learning
  • image processing
  • image analysis
  • semi-supervised learning
  • weakly supervised learning
  • medical imaging
  • medical image analysis
  • remote sensing imagery

Benefits of Publishing in a Special Issue

  • Ease of navigation: Grouping papers by topic helps scholars navigate broad scope journals more efficiently.
  • Greater discoverability: Special Issues support the reach and impact of scientific research. Articles in Special Issues are more discoverable and cited more frequently.
  • Expansion of research network: Special Issues facilitate connections among authors, fostering scientific collaborations.
  • External promotion: Articles in Special Issues are often promoted through the journal's social media, increasing their visibility.
  • e-Book format: Special Issues with more than 10 articles can be published as dedicated e-books, ensuring wide and rapid dissemination.

Further information on MDPI's Special Issue policies can be found here.

Published Papers (2 papers)


Research

15 pages, 640 KiB  
Article
Enhancing U-Net Segmentation Accuracy Through Comprehensive Data Preprocessing
by Talshyn Sarsembayeva, Madina Mansurova, Assel Abdildayeva and Stepan Serebryakov
J. Imaging 2025, 11(2), 50; https://doi.org/10.3390/jimaging11020050 - 8 Feb 2025
Viewed by 1360
Abstract
The accurate segmentation of lung regions in computed tomography (CT) scans is critical for the automated analysis of lung diseases such as chronic obstructive pulmonary disease (COPD) and COVID-19. This paper focuses on enhancing the accuracy of U-Net segmentation models through a robust preprocessing pipeline. The pipeline includes CT image normalization, binarization to extract lung regions, and morphological operations to remove artifacts. Additionally, the proposed method applies region-of-interest (ROI) filtering to isolate lung areas effectively. The dataset preprocessing significantly improves segmentation quality by providing clean and consistent input data for the U-Net model. Experimental results demonstrate that the Intersection over Union (IoU) and Dice coefficient exceeded 0.95 on training datasets. This work highlights the importance of preprocessing as a standalone step for optimizing deep learning-based medical image analysis.
(This article belongs to the Special Issue Self-Supervised Learning for Image Processing and Analysis)
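The preprocessing pipeline described in the abstract (normalization, binarization, morphological cleanup, and ROI filtering) can be sketched roughly as follows. This is a minimal illustration using NumPy and SciPy, not the paper's implementation: the HU window, air threshold, and structuring elements are assumptions chosen for demonstration.

```python
import numpy as np
from scipy import ndimage

def preprocess_lung_ct(ct_slice, hu_window=(-1000, 400), air_threshold=0.25):
    """Sketch of a lung-CT preprocessing pipeline: normalize, binarize,
    clean with morphology, and keep the largest connected components as
    a lung ROI mask. All thresholds here are illustrative assumptions."""
    lo, hi = hu_window
    # 1. Normalize Hounsfield units into [0, 1].
    norm = np.clip((ct_slice - lo) / (hi - lo), 0.0, 1.0)
    # 2. Binarize: low-intensity (air-filled) pixels are lung candidates.
    mask = norm < air_threshold
    # 3. Morphological opening/closing removes small artifacts;
    #    hole filling recovers vessels inside the lung regions.
    mask = ndimage.binary_opening(mask, structure=np.ones((3, 3)))
    mask = ndimage.binary_closing(mask, structure=np.ones((3, 3)))
    mask = ndimage.binary_fill_holes(mask)
    # 4. ROI filtering: keep up to the two largest connected components
    #    (nominally the left and right lungs).
    labels, n = ndimage.label(mask)
    if n == 0:
        return norm, mask
    sizes = ndimage.sum(mask, labels, range(1, n + 1))
    keep = np.argsort(sizes)[-2:] + 1
    mask = np.isin(labels, keep)
    return norm, mask
```

A real pipeline would also need to separate lung air from air outside the body (e.g., by border clearing), which this sketch omits for brevity.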

11 pages, 2783 KiB  
Article
Implicit 3D Human Reconstruction Guided by Parametric Models and Normal Maps
by Yong Ren, Mingquan Zhou, Yifan Wang, Long Feng, Qiuquan Zhu, Kang Li and Guohua Geng
J. Imaging 2024, 10(6), 133; https://doi.org/10.3390/jimaging10060133 - 29 May 2024
Cited by 1 | Viewed by 1639
Abstract
Accurate and robust 3D human modeling from a single image presents significant challenges. Existing methods have shown potential, but they often fail to generate reconstructions that match the level of detail in the input image, and they particularly struggle with loose clothing. They typically employ parameterized human models to constrain the reconstruction process, ensuring the results do not deviate too far from the model and produce anomalies; however, this also limits the recovery of loose clothing. To address this issue, we propose an end-to-end method called IHRPN for reconstructing clothed humans from a single 2D human image. The method includes an image semantic feature extraction module aimed at achieving pixel–model space consistency and enhancing robustness to loose clothing. We extract features from the input image to infer and recover the SMPL-X mesh, and then combine it with a normal map to guide an implicit function that reconstructs the complete clothed human. Unlike traditional methods, we use local features for implicit surface regression. Experimental results show that IHRPN performs excellently on the CAPE and AGORA datasets, and its reconstruction of loose clothing is noticeably more accurate and robust.
(This article belongs to the Special Issue Self-Supervised Learning for Image Processing and Analysis)
