
Sensors for Pattern Recognition and Computer Vision

A special issue of Sensors (ISSN 1424-8220). This special issue belongs to the section "Sensing and Imaging".

Deadline for manuscript submissions: 30 September 2025 | Viewed by 1507

Special Issue Editor


Prof. Dr. Jose F. Velez
Guest Editor
Higher Technical School of Computer Engineering, Universidad Rey Juan Carlos, c/Tulipan sn, Mostoles, 28922 Madrid, Spain
Interests: computer vision; software engineering; document recognition

Special Issue Information

Dear Colleagues,

This Special Issue, entitled “Sensors for Pattern Recognition and Computer Vision”, collates original peer-reviewed papers in the field of advanced sensors for pattern recognition and computer vision.

This Special Issue aims to explore various topics related to the use of sensors and the data they generate, both in pattern recognition and specifically in computer vision problems.

For instance, we welcome papers addressing innovations in image capture devices or image sequence capture in their different forms: 2D, 3D, visible light, infrared, ultraviolet, X-rays, MRI, and more.

Papers presenting new image datasets obtained from innovative types of sensors, accompanied by descriptions of these sensors and the applications that process them, are also welcome.

Papers that describe pattern recognition or computer vision applications incorporating sensors in novel ways are also of interest.

Additionally, papers discussing novel algorithms or models to enhance data captured by existing sensors, applied to pattern recognition problems in general or computer vision in particular, are encouraged.

Prof. Dr. Jose F. Velez
Guest Editor

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once you are registered, go to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Sensors is an international peer-reviewed open access semimonthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 2600 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • computer vision sensors
  • pattern recognition sensors
  • image data augmentation
  • image data enhancement
  • cameras (RGB, 3D, infrared, multispectral, X-ray, thermal)
  • lidar
  • scanners
  • MRI
  • ultrasonic sensors

Benefits of Publishing in a Special Issue

  • Ease of navigation: Grouping papers by topic helps scholars navigate broad-scope journals more efficiently.
  • Greater discoverability: Special Issues support the reach and impact of scientific research. Articles in Special Issues are more discoverable and cited more frequently.
  • Expansion of research network: Special Issues facilitate connections among authors, fostering scientific collaborations.
  • External promotion: Articles in Special Issues are often promoted through the journal's social media, increasing their visibility.
  • Reprint: MDPI Books provides the opportunity to republish successful Special Issues in book format, both online and in print.

Further information on MDPI's Special Issue policies is available on the MDPI website.

Published Papers (2 papers)


Research

35 pages, 8283 KiB  
Article
PIABC: Point Spread Function Interpolative Aberration Correction
by Chanhyeong Cho, Chanyoung Kim and Sanghoon Sull
Sensors 2025, 25(12), 3773; https://doi.org/10.3390/s25123773 - 17 Jun 2025
Viewed by 405
Abstract
Image quality in high-resolution digital single-lens reflex (DSLR) systems is degraded by Complementary Metal-Oxide-Semiconductor (CMOS) sensor noise and optical imperfections. Sensor noise becomes pronounced under high-ISO (International Organization for Standardization) settings, while optical aberrations such as blur and chromatic fringing distort the signal. Optical and sensor-level noise are distinct and hard to separate, but prior studies suggest that improving optical fidelity can suppress or mask sensor noise. Building on this understanding, we introduce a framework that utilizes densely interpolated Point Spread Functions (PSFs) to recover high-fidelity images. The process begins by simulating Gaussian-based PSFs as pixel-wise chromatic and spatial distortions derived from real degraded images. These PSFs are then encoded into a latent space to enhance their features and used to generate refined PSFs via similarity-weighted interpolation at each target position. The interpolated PSFs are applied through Wiener filtering, followed by residual correction, to restore images with improved structural fidelity and perceptual quality. We compare our method, which applies pixel-wise physical correction with densely interpolated PSFs at pre-processing, with post-processing networks, including deformable convolutional neural networks (CNNs) that enhance image quality without modeling degradation. Evaluations on DIV2K and RealSR-V3 confirm that our strategy not only enhances structural restoration but also more effectively suppresses sensor-induced artifacts, demonstrating the benefit of explicit physical priors for perceptual fidelity.
(This article belongs to the Special Issue Sensors for Pattern Recognition and Computer Vision)
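The core restoration steps the abstract describes, interpolating a per-position PSF from nearby anchor PSFs and applying a Wiener filter, can be sketched roughly as follows. This is a minimal illustration, not the authors' implementation: the inverse-distance similarity weighting, the function names, and the noise constant `k` are all assumptions.

```python
import numpy as np

def gaussian_psf(size, sigma):
    """Normalized 2D Gaussian point spread function of shape (size, size)."""
    ax = np.arange(size) - size // 2
    xx, yy = np.meshgrid(ax, ax)
    psf = np.exp(-(xx**2 + yy**2) / (2.0 * sigma**2))
    return psf / psf.sum()

def interpolate_psf(target_xy, anchor_xys, anchor_psfs, eps=1e-6):
    """Similarity-weighted PSF at target_xy: nearer anchors contribute more."""
    d = np.array([np.hypot(target_xy[0] - x, target_xy[1] - y)
                  for x, y in anchor_xys])
    w = 1.0 / (d + eps)          # simple inverse-distance similarity (assumed)
    w /= w.sum()
    return sum(wi * p for wi, p in zip(w, anchor_psfs))

def wiener_deconvolve(image, psf, k=0.01):
    """Frequency-domain Wiener filter: F = H* / (|H|^2 + k) * G."""
    H = np.fft.fft2(psf, s=image.shape)
    G = np.fft.fft2(image)
    F = np.conj(H) / (np.abs(H)**2 + k) * G
    return np.real(np.fft.ifft2(F))
```

In the paper's pipeline the anchor PSFs come from a learned latent encoding rather than raw Gaussians, and the Wiener step is followed by a residual-correction network.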

26 pages, 7868 KiB  
Article
A System for Real-Time Detection of Abandoned Luggage
by Ivan Vrsalovic, Jonatan Lerga and Marina Ivasic-Kos
Sensors 2025, 25(9), 2872; https://doi.org/10.3390/s25092872 - 2 May 2025
Viewed by 728
Abstract
In this paper, we propose a system for the real-time automatic detection of abandoned luggage in an airport recorded by surveillance cameras. To do this, we use an adapted YOLOv11-s model and a proposed algorithm for detecting unattended luggage. The system uses the OpenCV library for the video processing of the recorded footage, a detector, and an algorithm that analyzes the movement of a person and their luggage and evaluates their spatial and temporal relationships to determine whether the luggage is truly abandoned. We used several popular deep convolutional neural network architectures for object detection (e.g., YOLOv8, YOLOv11, and a DETR encoder-decoder transformer with a ResNet-50 deep convolutional backbone), fine-tuned them on our dataset, and compared their performance in detecting people and luggage in surveillance scenes recorded by an airport surveillance camera. Fine-tuning significantly improved the detection of people and luggage in our custom dataset. The fine-tuned YOLOv8 and YOLOv11 models achieved excellent real-time results on a challenging dataset consisting only of small and medium-sized objects: a mean average precision (mAP) of over 88% overall and over 96% for medium-sized objects. However, the YOLOv11-s model achieved the highest precision in detecting small objects (85.8%), which is why we selected it as a component of the abandoned luggage detection system. The abandoned luggage detection algorithm was tested in various scenarios in which luggage may be left behind and in potentially suspicious situations, and it showed promising results.
(This article belongs to the Special Issue Sensors for Pattern Recognition and Computer Vision)
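The spatial-temporal reasoning the abstract describes, flagging a bag once no person has been near it for some time, could be sketched downstream of the detector roughly as below. This is an illustrative assumption, not the paper's actual algorithm; the `near_px` and `abandon_s` thresholds and the `Track` structure are invented placeholders.

```python
import math
from dataclasses import dataclass

@dataclass
class Track:
    center: tuple                  # (x, y) detection center in pixels
    last_owner_near: float = 0.0   # timestamp when a person was last nearby

def update_abandonment(luggage, people, now, near_px=150.0, abandon_s=30.0):
    """Return luggage tracks with no person within near_px for abandon_s seconds."""
    abandoned = []
    for bag in luggage:
        if any(math.hypot(bag.center[0] - p.center[0],
                          bag.center[1] - p.center[1]) <= near_px
               for p in people):
            bag.last_owner_near = now   # someone is still attending the bag
        elif now - bag.last_owner_near > abandon_s:
            abandoned.append(bag)       # unattended beyond the time threshold
    return abandoned
```

In practice `luggage` and `people` would be fed each frame by the fine-tuned YOLOv11-s detections, with `now` taken from the video timestamp.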
