Image Feature Extraction for Computer Vision Tasks in Sensor Systems and Applications

A special issue of Sensors (ISSN 1424-8220). This special issue belongs to the section "Sensing and Imaging".

Deadline for manuscript submissions: 15 September 2025 | Viewed by 2195

Special Issue Editors


Guest Editor
1. School of Electronic Information and Artificial Intelligence, Shaanxi University of Science and Technology, Xi'an 710021, China
2. Institute for Integrated and Intelligent Systems, Griffith University, Brisbane 4222, Australia
Interests: machine learning; computer vision; image analysis; pattern recognition

Guest Editor
School of Electronic and Information Engineering, Beijing Jiaotong University, Beijing 100044, China
Interests: signal processing; pattern recognition

Guest Editor
School of Electronic Information, Xi'an Polytechnic University, Xi'an 710048, China
Interests: deep learning; computer vision; pattern recognition

Guest Editor
School of Electronic Information and Artificial Intelligence, Shaanxi University of Science and Technology, Xi'an 710021, China
Interests: optical flow estimation; computer vision; target detection and tracking

Special Issue Information

Dear Colleagues,

With the rapid development of sensor technology, computer vision is playing an increasingly important role in a wide range of applications. In particular, the integration of sensor and computer vision technologies is driving innovation and transformation across fields such as smart cities, autonomous driving, medical image analysis, industrial automation, and environmental monitoring. This Special Issue focuses on the application of image feature extraction techniques to sensor systems and the computer vision tasks built on them.

Image feature extraction is one of the core technologies in computer vision. By extracting meaningful information from images, it enables tasks such as recognition, tracking, and classification. Sensor systems (cameras, infrared sensors, LiDAR, radar, depth sensors, and so on) collect large amounts of image data across diverse application scenarios, and efficient, accurate feature extraction can significantly enhance system performance and intelligence.
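
As a concrete illustration of such a pipeline (not a template for submissions), the minimal Python sketch below detects and matches local ORB keypoint features between two sensor frames using OpenCV; the file names and parameter values are placeholders.

    # Minimal sketch: classical keypoint feature extraction and matching with OpenCV ORB.
    # File names and parameters are illustrative placeholders.
    import cv2

    img1 = cv2.imread("frame_left.png", cv2.IMREAD_GRAYSCALE)
    img2 = cv2.imread("frame_right.png", cv2.IMREAD_GRAYSCALE)

    orb = cv2.ORB_create(nfeatures=500)           # detector + binary descriptor
    kp1, des1 = orb.detectAndCompute(img1, None)  # keypoints and descriptors for frame 1
    kp2, des2 = orb.detectAndCompute(img2, None)  # keypoints and descriptors for frame 2

    # Brute-force matching with Hamming distance, the standard choice for binary ORB descriptors.
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    matches = sorted(matcher.match(des1, des2), key=lambda m: m.distance)
    print(f"{len(kp1)} keypoints in frame 1, {len(matches)} cross-checked matches")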

This Special Issue covers, but is not limited to, the following topics:

  • Representation Learning;
  • Deep Learning Architecture Design;
  • Image Feature Extraction Methods;
  • Visual Inspection and Quality Control in IoT;
  • Medical Image Analysis;
  • Image Denoising and Enhancement;
  • Image Classification;
  • Feature Matching;
  • Image Defect Detection;
  • Three-Dimensional Reconstruction.

Dr. Weichuan Zhang
Dr. Yanbing Li
Dr. Jie Ren
Dr. Jin Lu
Guest Editors

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once you are registered, click here to go to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Sensors is an international peer-reviewed open access semimonthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 2600 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • representation learning
  • deep learning
  • image feature extraction
  • medical image analysis
  • image denoising and enhancement
  • image classification
  • feature matching
  • image defect detection
  • three-dimensional reconstruction

Benefits of Publishing in a Special Issue

  • Ease of navigation: Grouping papers by topic helps scholars navigate broad scope journals more efficiently.
  • Greater discoverability: Special Issues support the reach and impact of scientific research. Articles in Special Issues are more discoverable and cited more frequently.
  • Expansion of research network: Special Issues facilitate connections among authors, fostering scientific collaborations.
  • External promotion: Articles in Special Issues are often promoted through the journal's social media, increasing their visibility.
  • e-Book format: Special Issues with more than 10 articles can be published as dedicated e-books, ensuring wide and rapid dissemination.

Further information on MDPI's Special Issue policies can be found here.

Published Papers (2 papers)

Research

18 pages, 14637 KiB  
Article
Enhancing Bottleneck Concept Learning in Image Classification
by Xingfu Cheng, Zhaofeng Niu, Zhouqiang Jiang and Liangzhi Li
Sensors 2025, 25(8), 2398; https://doi.org/10.3390/s25082398 - 10 Apr 2025
Viewed by 298
Abstract
Deep neural networks (DNNs) have demonstrated exceptional performance in image classification. However, their “black-box” nature raises concerns about trust and transparency, particularly in high-stakes fields such as healthcare and autonomous systems. While explainable AI (XAI) methods attempt to address these concerns through feature- or concept-based explanations, existing approaches are often limited by the need for manually defined concepts, overly abstract granularity, or misalignment with human semantics. This paper introduces the Enhanced Bottleneck Concept Learner (E-BotCL), a self-supervised framework that autonomously discovers task-relevant, interpretable semantic concepts via a dual-path contrastive learning strategy and multi-task regularization. By combining contrastive learning to build robust concept prototypes, attention mechanisms for spatial localization, and feature aggregation to activate concepts, E-BotCL enables end-to-end concept learning and classification without requiring human supervision. Experiments conducted on the CUB200 and ImageNet datasets demonstrated that E-BotCL significantly enhanced interpretability while maintaining classification accuracy. Specifically, two interpretability metrics, the Concept Discovery Rate (CDR) and Concept Consistency (CC), improved by 0.6104 and 0.4486, respectively. This work advances the balance between model performance and transparency, offering a scalable solution for interpretable decision-making in complex vision tasks. Full article
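
The E-BotCL implementation itself is not reproduced here, but the hypothetical PyTorch sketch below shows the general shape of a concept-bottleneck classification head of the kind the abstract describes: learnable concept prototypes are attended over spatial backbone features, and the pooled concept activations drive a linear classifier. All module names, dimensions, and design details are illustrative assumptions, not the authors' code.

    # Hypothetical concept-bottleneck head (NOT the authors' E-BotCL code):
    # spatial features are scored against learnable concept prototypes via
    # attention, and pooled concept activations feed a linear classifier.
    import torch
    import torch.nn as nn

    class ConceptBottleneckHead(nn.Module):
        def __init__(self, feat_dim=512, num_concepts=20, num_classes=200):
            super().__init__()
            self.prototypes = nn.Parameter(torch.randn(num_concepts, feat_dim))  # one prototype per concept
            self.classifier = nn.Linear(num_concepts, num_classes)

        def forward(self, feats):  # feats: (B, HW, feat_dim) from a CNN/ViT backbone
            attn = torch.softmax(feats @ self.prototypes.t(), dim=1)  # (B, HW, num_concepts) spatial attention
            concept_act = attn.max(dim=1).values                      # (B, num_concepts) concept activations
            return self.classifier(concept_act), concept_act          # class logits + interpretable concepts

    # Usage: logits, concepts = ConceptBottleneckHead()(backbone_features)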

19 pages, 554 KiB  
Article
Unleashing the Potential of Pre-Trained Diffusion Models for Generalizable Person Re-Identification
by Jiachen Li and Xiaojin Gong
Sensors 2025, 25(2), 552; https://doi.org/10.3390/s25020552 - 18 Jan 2025
Viewed by 1534
Abstract
Domain-generalizable re-identification (DG Re-ID) aims to train a model on one or more source domains and evaluate its performance on unseen target domains, a task that has attracted growing attention due to its practical relevance. While numerous methods have been proposed, most rely on discriminative or contrastive learning frameworks to learn generalizable feature representations. However, these approaches often fail to mitigate shortcut learning, leading to suboptimal performance. In this work, we propose a novel method called diffusion model-assisted representation learning with a correlation-aware conditioning scheme (DCAC) to enhance DG Re-ID. Our method integrates a discriminative and contrastive Re-ID model with a pre-trained diffusion model through a correlation-aware conditioning scheme. By incorporating ID classification probabilities generated from the Re-ID model with a set of learnable ID-wise prompts, the conditioning scheme injects dark knowledge that captures ID correlations to guide the diffusion process. Simultaneously, feedback from the diffusion model is back-propagated through the conditioning scheme to the Re-ID model, effectively improving the generalization capability of Re-ID features. Extensive experiments on both single-source and multi-source DG Re-ID tasks demonstrate that our method achieves state-of-the-art performance. Comprehensive ablation studies further validate the effectiveness of the proposed approach, providing insights into its robustness. Full article
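
The DCAC implementation is likewise not reproduced here; as one hedged reading of the correlation-aware conditioning scheme described above, the sketch below mixes a bank of learnable ID-wise prompts with the Re-ID head's classification probabilities to obtain a conditioning embedding for the diffusion model. Names and dimensions are placeholders, not the authors' code.

    # Hypothetical conditioning module (NOT the authors' DCAC implementation):
    # ID classification probabilities weight learnable ID-wise prompts to form
    # a conditioning vector that could be fed to a diffusion model.
    import torch
    import torch.nn as nn

    class IDPromptConditioner(nn.Module):
        def __init__(self, num_ids=751, prompt_dim=768):  # placeholder sizes
            super().__init__()
            self.id_prompts = nn.Parameter(torch.randn(num_ids, prompt_dim))  # one prompt per identity

        def forward(self, id_probs):  # id_probs: (B, num_ids) softmax output of the Re-ID classifier
            # Soft mixture of prompts; the probability vector carries the "dark
            # knowledge" about inter-ID correlations mentioned in the abstract.
            return id_probs @ self.id_prompts  # (B, prompt_dim) conditioning embedding

    # The embedding would enter the diffusion model's conditioning pathway (e.g.,
    # cross-attention), with gradients flowing back to the Re-ID model.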
