Self-Supervised Learning for Image Processing and Analysis

A special issue of Journal of Imaging (ISSN 2313-433X). This special issue belongs to the section "Image and Video Processing".

Deadline for manuscript submissions: 31 July 2025

Special Issue Editors

Guest Editor
Center for Biotechnology and Interdisciplinary Studies, Biomedical Imaging Center, Department of Biomedical Engineering, Rensselaer Polytechnic Institute, Troy, NY 12180, USA
Interests: unsupervised/self-supervised learning; weakly supervised learning; representation learning; clustering; biomedical imaging and analysis

Guest Editor
School of Telecommunication and Information Engineering, Xi'an University of Posts and Telecommunications, Xi'an 710121, China
Interests: weakly supervised learning; representation learning; image super-resolution

Guest Editor
School of Information Science and Technology, Northwest University, Xi'an 710127, China
Interests: biomedical image analysis and understanding; self-supervised learning; representation learning

Guest Editor
Engineering Research Center of Molecular and Neuro Imaging of Ministry of Education, School of Life Science and Technology, Xidian University, Xi'an 710126, China
Interests: image processing and analysis; remote sensing image processing; artificial intelligence applications on satellite images

Special Issue Information

Dear Colleagues,

In the rapidly evolving field of deep learning, self-supervised learning has emerged as a groundbreaking approach that leverages unlabeled data to learn meaningful representations. This Special Issue aims to showcase the latest advancements in self-supervised deep learning techniques and their applications in image processing and analysis. We invite researchers and practitioners to submit original research and review articles that contribute to the theoretical foundations, algorithmic advancements, and diverse applications of self-supervised deep learning in image processing and analysis.

Topics of Interest

We welcome submissions on a wide range of topics related to self-supervised deep learning for image processing and analysis, including, but not limited to, the following:

  • Novel self-supervised learning models and algorithms for image processing.
  • Advances in contrastive learning, clustering, and generative models in the context of image analysis.
  • Applications of self-supervised learning in medical imaging, remote sensing, and multimedia analysis.
  • Self-supervised learning for image segmentation, classification, and enhancement.
  • The integration of self-supervised learning with other unsupervised, semi-supervised, and supervised learning paradigms.
  • Evaluation metrics and benchmarks for self-supervised learning in image processing.
  • The interpretability and explainability of self-supervised deep learning models for image analysis.
  • Challenges and opportunities in deploying self-supervised learning models in real-world scenarios.
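To make the contrastive-learning topic above concrete, the core computation behind SimCLR-style self-supervised methods is the NT-Xent (normalized temperature-scaled cross-entropy) loss over two augmented views of the same batch. A minimal NumPy sketch follows; the embedding shapes and temperature are illustrative assumptions, not tied to any submission:

```python
import numpy as np

def nt_xent_loss(z1, z2, temperature=0.5):
    """NT-Xent contrastive loss over two augmented views.

    z1, z2: (N, D) arrays of embeddings for the two views of N images.
    Positive pair: (z1[i], z2[i]); every other embedding is a negative.
    """
    z = np.concatenate([z1, z2], axis=0)                # (2N, D)
    z = z / np.linalg.norm(z, axis=1, keepdims=True)    # L2-normalize
    sim = z @ z.T / temperature                         # scaled cosine similarities
    n = z1.shape[0]
    # Mask self-similarity so it is never treated as a positive or negative.
    np.fill_diagonal(sim, -np.inf)
    # Index of each sample's positive partner: i <-> i + n.
    pos = np.concatenate([np.arange(n, 2 * n), np.arange(n)])
    # Cross-entropy with the positive as the target class.
    logsumexp = np.log(np.exp(sim).sum(axis=1))
    loss = -(sim[np.arange(2 * n), pos] - logsumexp)
    return loss.mean()
```

Pulling matched views together and pushing all other samples apart is what lets such models learn representations from unlabeled images alone.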

Dr. Chuang Niu
Dr. Qian Wang
Dr. Xin Cao
Dr. Shenghan Ren
Guest Editors

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once registered, go to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the Special Issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Journal of Imaging is an international peer-reviewed open access monthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 1800 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • self-supervised learning
  • unsupervised learning
  • image processing
  • image analysis
  • semi-supervised learning
  • weakly supervised learning
  • medical imaging
  • medical image analysis
  • remote sensing imagery

Benefits of Publishing in a Special Issue

  • Ease of navigation: Grouping papers by topic helps scholars navigate broad scope journals more efficiently.
  • Greater discoverability: Special Issues support the reach and impact of scientific research. Articles in Special Issues are more discoverable and cited more frequently.
  • Expansion of research network: Special Issues facilitate connections among authors, fostering scientific collaborations.
  • External promotion: Articles in Special Issues are often promoted through the journal's social media, increasing their visibility.
  • Reprint: MDPI Books provides the opportunity to republish successful Special Issues in book format, both online and in print.

Further information on MDPI's Special Issue policies can be found here.

Published Papers (4 papers)


Research

26 pages, 11237 KiB  
Article
Reclassification Scheme for Image Analysis in GRASS GIS Using Gradient Boosting Algorithm: A Case of Djibouti, East Africa
by Polina Lemenkova
J. Imaging 2025, 11(8), 249; https://doi.org/10.3390/jimaging11080249 - 23 Jul 2025
Abstract
Image analysis is a valuable approach in a wide array of environmental applications. Mapping land cover categories derived from satellite images enables the monitoring of landscape dynamics, a technique that plays a key role in land management and predictive ecosystem modelling. Satellite-based mapping of environmental dynamics also helps define the factors that trigger these processes, which is crucial for our understanding of Earth system processes. In this study, a reclassification scheme for image analysis was developed to map an adjusted categorisation of land cover types using multispectral remote sensing datasets and the Geographic Resources Analysis Support System (GRASS) Geographic Information System (GIS) software. The data included four Landsat 8–9 satellite images from 2015, 2019, 2021 and 2023; this time series was used to determine land cover dynamics. A classification scheme consisting of 17 initial land cover classes was processed through a logical workflow to extract 10 key land cover types of the coastal areas of the Bab-el-Mandeb Strait, southern Red Sea. Special attention was paid to identifying changes in the land categories around the thermal saline lake, Lake Assal, with its fluctuating salinity and water levels. The methodology used the machine learning (ML) image analysis GRASS GIS module 'r.reclass' for the reclassification of a raster map based on category values. Other modules included 'r.random', 'r.learn.train' and 'r.learn.predict' for the gradient boosting ML classifier, and 'i.cluster' and 'i.maxlik' for clustering and maximum-likelihood discriminant analysis. Auxiliary modules included 'i.group', 'r.import' and other GRASS GIS scripting techniques applied to Landsat image processing and to the identification of land cover variables.
The results of image processing demonstrated annual fluctuations in the landscapes around the saline lake and changes in semi-arid and desert land cover types over Djibouti. The increase in the extent of semi-desert areas and the decrease in natural vegetation indicate desertification of the arid environment in Djibouti driven by climate effects. The developed land cover maps provide information for assessing spatial-temporal changes in Djibouti, and the proposed ML-based GRASS GIS methodology can be employed to integrate image analysis techniques for land management in other arid regions of Africa.
(This article belongs to the Special Issue Self-Supervised Learning for Image Processing and Analysis)
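The workflow this abstract describes — sample training pixels, fit a gradient boosting classifier, predict over the full raster, then reclassify categories — can be sketched outside GRASS GIS with scikit-learn. The synthetic image, class rule, and reclassification table below are illustrative assumptions, not the paper's data; 'r.random', 'r.learn.train', 'r.learn.predict' and 'r.reclass' are the GRASS modules each step loosely mirrors:

```python
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier

# Synthetic stand-in for a multispectral scene: 4 bands, 32x32 pixels.
rng = np.random.default_rng(42)
h, w, bands = 32, 32, 4
image = rng.normal(size=(h, w, bands))
# Synthetic "ground truth": 3 land cover classes separable by band 0.
labels = np.digitize(image[..., 0], bins=[-0.5, 0.5])   # values 0, 1, 2

# Sample random training pixels (analogous to r.random + r.learn.train).
X = image.reshape(-1, bands)
y = labels.reshape(-1)
idx = rng.choice(X.shape[0], size=400, replace=False)
clf = GradientBoostingClassifier(n_estimators=50).fit(X[idx], y[idx])

# Predict every pixel (analogous to r.learn.predict), then reclassify:
# collapse the 3 predicted classes into 2 broader categories, mirroring
# what r.reclass does with category rules.
pred = clf.predict(X).reshape(h, w)
reclass_rules = {0: 0, 1: 0, 2: 1}            # e.g. merge classes 0 and 1
reclassified = np.vectorize(reclass_rules.get)(pred)
```

Within GRASS itself, the same logic runs directly on registered raster maps, so no NumPy reshaping is needed.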

23 pages, 4896 KiB  
Article
Insulator Surface Defect Detection Method Based on Graph Feature Diffusion Distillation
by Shucai Li, Na Zhang, Gang Yang, Yannong Hou and Xingzhong Zhang
J. Imaging 2025, 11(6), 190; https://doi.org/10.3390/jimaging11060190 - 10 Jun 2025
Abstract
To address the scarcity of defect samples on power insulator surfaces, their irregular morphology, and insufficient pixel-level localization accuracy, this paper proposes a defect detection method based on graph feature diffusion distillation, named GFDD. The feature bias problem is alleviated by constructing a dual-division teacher architecture with graph feature consistency constraints, while a cross-layer feature fusion module dynamically aggregates multi-scale information to reduce redundancy. A diffusion distillation mechanism is designed to move beyond the traditional single-layer feature transfer, and global context modeling is strengthened by fusing deep semantics and shallow details through channel attention. On a self-built dataset, GFDD achieves 96.6% Pi.AUROC, 97.7% Im.AUROC and a 95.1% F1-score, 2.4–3.2% higher than the best existing methods, and it maintains excellent generalization and robustness across multiple public datasets. The method provides a high-precision solution for the automated inspection of insulator surface defects and offers practical engineering value.
(This article belongs to the Special Issue Self-Supervised Learning for Image Processing and Analysis)
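The distillation idea underlying methods like GFDD — a student network matching a teacher's intermediate feature maps, with channel attention reweighting the match, so that defects surface as large teacher-student discrepancies — can be illustrated in a much-simplified NumPy form. The shapes, pooling-based attention, and MSE loss below are generic stand-ins, not the paper's actual formulation:

```python
import numpy as np

def channel_attention(feat):
    """Squeeze-and-excitation style weights: one scalar per channel,
    from global average pooling followed by a softmax."""
    pooled = feat.mean(axis=(1, 2))            # (C,)
    e = np.exp(pooled - pooled.max())
    return e / e.sum()                          # (C,), sums to 1

def distillation_loss(teacher, student):
    """Channel-attention-weighted MSE between teacher and student
    feature maps of shape (C, H, W)."""
    w = channel_attention(teacher)              # emphasize informative channels
    per_channel = ((teacher - student) ** 2).mean(axis=(1, 2))  # (C,)
    return float((w * per_channel).sum())

rng = np.random.default_rng(0)
t = rng.normal(size=(8, 16, 16))                # teacher features
s_good = t + 0.01 * rng.normal(size=t.shape)    # student mimics teacher (normal)
s_bad = rng.normal(size=t.shape)                # student diverges (defect-like)
```

At inference time, a large loss on a test image flags it (or a region of it) as anomalous, which is how distillation-based detectors cope with scarce defect samples: only normal data is needed for training.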

15 pages, 640 KiB  
Article
Enhancing U-Net Segmentation Accuracy Through Comprehensive Data Preprocessing
by Talshyn Sarsembayeva, Madina Mansurova, Assel Abdildayeva and Stepan Serebryakov
J. Imaging 2025, 11(2), 50; https://doi.org/10.3390/jimaging11020050 - 8 Feb 2025
Cited by 1
Abstract
The accurate segmentation of lung regions in computed tomography (CT) scans is critical for the automated analysis of lung diseases such as chronic obstructive pulmonary disease (COPD) and COVID-19. This paper focuses on enhancing the accuracy of U-Net segmentation models through a robust preprocessing pipeline. The pipeline includes CT image normalization, binarization to extract lung regions, and morphological operations to remove artifacts. Additionally, the proposed method applies region-of-interest (ROI) filtering to isolate lung areas effectively. This preprocessing significantly improves segmentation quality by providing clean and consistent input data for the U-Net model. Experimental results demonstrate that the Intersection over Union (IoU) and Dice coefficient exceeded 0.95 on the training datasets. This work highlights the importance of preprocessing as a standalone step in optimizing deep learning-based medical image analysis.
(This article belongs to the Special Issue Self-Supervised Learning for Image Processing and Analysis)
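A pipeline of the kind this abstract describes — intensity normalization, binarization, morphological cleanup, and ROI filtering — together with the IoU and Dice metrics it reports, can be sketched with NumPy and SciPy. The Hounsfield window, threshold, and two-component ROI rule are illustrative assumptions, not the paper's exact settings:

```python
import numpy as np
from scipy import ndimage

def preprocess_ct(slice_hu, lung_window=(-1000.0, 400.0), threshold=0.3):
    """Normalize a CT slice (Hounsfield units) and extract a lung-like mask."""
    lo, hi = lung_window
    norm = np.clip((slice_hu - lo) / (hi - lo), 0.0, 1.0)   # normalize to [0, 1]
    binary = norm < threshold                               # air/lung is dark
    # Morphological opening removes small artifacts; closing fills holes.
    binary = ndimage.binary_opening(binary, iterations=1)
    binary = ndimage.binary_closing(binary, iterations=1)
    # ROI filtering: keep only the largest connected components (the lungs).
    labeled, n = ndimage.label(binary)
    if n == 0:
        return norm, binary
    sizes = ndimage.sum(binary, labeled, range(1, n + 1))
    keep = (np.argsort(sizes)[::-1] + 1)[:2]                # two largest blobs
    return norm, np.isin(labeled, keep)

def iou(a, b):
    return np.logical_and(a, b).sum() / max(np.logical_or(a, b).sum(), 1)

def dice(a, b):
    return 2 * np.logical_and(a, b).sum() / max(a.sum() + b.sum(), 1)
```

Running this before training gives the U-Net clean, consistent masks and inputs, which is the paper's central point: much of the final segmentation quality is decided before the network ever sees the data.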

11 pages, 2783 KiB  
Article
Implicit 3D Human Reconstruction Guided by Parametric Models and Normal Maps
by Yong Ren, Mingquan Zhou, Yifan Wang, Long Feng, Qiuquan Zhu, Kang Li and Guohua Geng
J. Imaging 2024, 10(6), 133; https://doi.org/10.3390/jimaging10060133 - 29 May 2024
Cited by 2
Abstract
Accurate and robust 3D human modeling from a single image presents significant challenges. Existing methods have shown potential, but they often fail to generate reconstructions that match the level of detail in the input image, and they particularly struggle with loose clothing. They typically employ parameterized human models to constrain the reconstruction, ensuring the results do not deviate too far from the model or produce anomalies; however, this also limits the recovery of loose clothing. To address this issue, we propose an end-to-end method, IHRPN, for reconstructing clothed humans from a single 2D image. The method includes an image semantic feature extraction module aimed at achieving pixel-to-model-space consistency and enhancing robustness to loose clothing. We extract features from the input image to infer and recover the SMPL-X mesh, and then combine it with a normal map to guide an implicit function in reconstructing the complete clothed human. Unlike traditional methods, we use local features for implicit surface regression. Experimental results show that IHRPN performs excellently on the CAPE and AGORA datasets, and the reconstruction of loose clothing is noticeably more accurate and robust.
(This article belongs to the Special Issue Self-Supervised Learning for Image Processing and Analysis)
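The "local features for implicit surface regression" idea mentioned above — projecting a 3D query point into the image and bilinearly sampling a pixel-aligned feature vector that conditions the implicit function, as in PIFu-style methods — can be sketched as follows. The feature map, pinhole projection, and query point are illustrative, not the paper's actual pipeline:

```python
import numpy as np

def bilinear_sample(feat, x, y):
    """Bilinearly sample an (H, W, C) feature map at continuous (x, y)."""
    h, w, _ = feat.shape
    x0, y0 = int(np.floor(x)), int(np.floor(y))
    x1, y1 = min(x0 + 1, w - 1), min(y0 + 1, h - 1)
    dx, dy = x - x0, y - y0
    top = (1 - dx) * feat[y0, x0] + dx * feat[y0, x1]
    bot = (1 - dx) * feat[y1, x0] + dx * feat[y1, x1]
    return (1 - dy) * top + dy * bot

def pixel_aligned_feature(feat, point, focal=64.0, center=(32.0, 32.0)):
    """Project a 3D point with a pinhole camera, then sample the 2D feature
    map at the projection. The result, concatenated with the point's depth,
    would condition an implicit occupancy/SDF decoder at that query point."""
    x, y, z = point
    u = focal * x / z + center[0]
    v = focal * y / z + center[1]
    return np.concatenate([bilinear_sample(feat, u, v), [z]])
```

Because each 3D query sees the feature extracted at exactly its own image projection, local detail (wrinkles, loose cloth boundaries) can survive into the reconstruction instead of being averaged away by a single global code.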
