Computer Visions and Pattern Recognition

A special issue of Sensors (ISSN 1424-8220). This special issue belongs to the section "Intelligent Sensors".

Deadline for manuscript submissions: closed (31 December 2021) | Viewed by 16906

Special Issue Editor


Prof. Dr. Shengping Zhang
Guest Editor
School of Computer Science and Technology, Harbin Institute of Technology, Shandong 264209, China
Interests: computer vision; machine learning

Special Issue Information

Dear Colleagues,

As an advanced pattern recognition method, deep learning has been successfully applied to many computer vision tasks, such as image classification, object detection, image segmentation, action recognition, and 3D reconstruction. Although many sophisticated deep learning methods have been proposed in the literature, challenges remain, such as the need for huge numbers of labeled samples, expensive computational resources, and the lack of model explanations. To address these problems, new methods have been proposed, such as self-supervised learning, contrastive learning, knowledge distillation, and neural architecture search.
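As a concrete illustration of one of these directions, the sketch below shows a standard knowledge-distillation objective: a temperature-softened KL divergence between teacher and student logits combined with the usual cross-entropy. It is a generic, minimal example; the function name, temperature, and weighting are illustrative assumptions and are not taken from any paper in this issue.

import torch
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, labels, T=4.0, alpha=0.5):
    # Soft-target term: KL divergence between temperature-softened distributions.
    soft_student = F.log_softmax(student_logits / T, dim=1)
    soft_teacher = F.softmax(teacher_logits / T, dim=1)
    kd = F.kl_div(soft_student, soft_teacher, reduction="batchmean") * (T * T)
    # Hard-label term: ordinary cross-entropy on the ground-truth labels.
    ce = F.cross_entropy(student_logits, labels)
    return alpha * kd + (1.0 - alpha) * ce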

This Special Issue focuses on novel deep learning methods applied to computer vision tasks and aims to provide a forum for researchers in computer vision, pattern recognition, and machine learning to present recent progress. More specifically, it seeks high-quality original contributions and technical papers addressing the main research challenges related to deep learning methods in the field of computer vision.

Potential topics include but are not limited to:

  • Self-supervised learning
  • Contrastive learning
  • Knowledge distillation
  • Neural architecture search
  • Image enhancement
  • Image recognition
  • Medical image processing
  • Video behavior analysis
  • 3D reconstruction

Prof. Dr. Shengping Zhang
Guest Editor

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to the website. Once you are registered, go to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Sensors is an international peer-reviewed open access semimonthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 2600 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Published Papers (4 papers)

Research

22 pages, 5933 KiB  
Article
Moving Object Tracking Based on Sparse Optical Flow with Moving Window and Target Estimator
by Hosik Choi, Byungmun Kang and DaeEun Kim
Sensors 2022, 22(8), 2878; https://doi.org/10.3390/s22082878 - 8 Apr 2022
Cited by 8 | Viewed by 3754
Abstract
Moving object detection and tracking are technologies applied to a wide range of research fields, including traffic monitoring and recognition of workers around heavy equipment. However, conventional moving object detection methods face many problems, such as long computing times, image noise, and the disappearance of targets behind obstacles. In this paper, we introduce a new moving object detection and tracking algorithm based on sparse optical flow that reduces computing time, removes noise, and estimates the target efficiently. The algorithm maintains a diverse set of corner features by continually refreshing them, and a moving window detector is proposed to determine the feature points for tracking based on the location history of the points. The performance of detecting moving objects is greatly improved through the moving window detector and continuous target estimation. The memory-based estimator recalls the locations of corner features for a period of time, which helps track targets obscured by obstacles. The approach was applied to real environments with various illumination conditions (indoor and outdoor) and a number of moving objects and obstacles, and its performance was evaluated on an embedded board (Raspberry Pi 4). The experimental results show that the proposed method maintains a high FPS (frames per second) and improves accuracy compared with conventional optical flow methods and vision approaches such as Haar-like and HOG methods.
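As a rough illustration of the kind of pipeline the abstract describes, the sketch below tracks corner features with pyramidal Lucas-Kanade sparse optical flow in OpenCV and periodically refreshes them. The video filename, motion threshold, window size, and refresh interval are illustrative assumptions, not the authors' actual settings or code.

import cv2
import numpy as np

cap = cv2.VideoCapture("traffic.mp4")          # hypothetical input video
ok, frame = cap.read()
prev_gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
corners = cv2.goodFeaturesToTrack(prev_gray, maxCorners=200, qualityLevel=0.01, minDistance=7)

frame_idx = 0
while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    # Pyramidal Lucas-Kanade sparse optical flow from the previous frame to the current one.
    nxt, status, _ = cv2.calcOpticalFlowPyrLK(prev_gray, gray, corners, None,
                                              winSize=(21, 21), maxLevel=3)
    good_new = nxt[status.flatten() == 1]
    good_old = corners[status.flatten() == 1]
    # Points with a large displacement are treated as belonging to moving objects.
    motion = np.linalg.norm(good_new.reshape(-1, 2) - good_old.reshape(-1, 2), axis=1)
    for x, y in good_new.reshape(-1, 2)[motion > 1.5]:
        cv2.circle(frame, (int(x), int(y)), 3, (0, 0, 255), -1)
    # Refresh corner features periodically (or when too few survive) so tracking does not starve.
    frame_idx += 1
    if frame_idx % 30 == 0 or len(good_new) < 50:
        corners = cv2.goodFeaturesToTrack(gray, maxCorners=200, qualityLevel=0.01, minDistance=7)
    else:
        corners = good_new.reshape(-1, 1, 2)
    prev_gray = gray
    cv2.imshow("moving points", frame)
    if cv2.waitKey(1) == 27:
        break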
(This article belongs to the Special Issue Computer Visions and Pattern Recognition)

15 pages, 25044 KiB  
Article
DTS-Depth: Real-Time Single-Image Depth Estimation Using Depth-to-Space Image Construction
by Hatem Ibrahem, Ahmed Salem and Hyun-Soo Kang
Sensors 2022, 22(5), 1914; https://doi.org/10.3390/s22051914 - 1 Mar 2022
Cited by 6 | Viewed by 4917
Abstract
As most recent high-resolution depth-estimation algorithms are too computationally expensive to work in real time, the common solution is to use a low-resolution input image to reduce the computational complexity. We propose a different approach: an efficient, real-time convolutional neural network-based depth-estimation algorithm that uses a single high-resolution image as the input. The proposed method efficiently constructs a high-resolution depth map using a small encoding architecture and eliminates the need for a decoder, which is typically used in the encoder–decoder architectures employed for depth estimation. The proposed algorithm adopts a modified MobileNetV2 architecture, which is lightweight, to estimate depth through the depth-to-space image construction generally employed in image super-resolution. As a result, it achieves fast frame processing and can predict high-accuracy depth in real time. We train and test our method on the challenging KITTI, Cityscapes, and NYUV2 depth datasets. The proposed method achieves a low relative absolute error (0.028 for KITTI, 0.167 for Cityscapes, and 0.069 for NYUV2) while operating at up to 48 frames per second on a GPU and 20 frames per second on a CPU for high-resolution test images. We compare our method with state-of-the-art depth-estimation methods and show that it outperforms them while remaining less complex and running in real time.
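The central idea of producing a high-resolution depth map without a decoder can be illustrated with PyTorch's PixelShuffle (depth-to-space) operator on top of a MobileNetV2 encoder. This is only a hedged sketch under the assumption of a stride-32 backbone with 1280 output channels; the layer sizes and input resolution are illustrative and not taken from the paper.

import torch
import torch.nn as nn
from torchvision.models import mobilenet_v2

class DTSDepthSketch(nn.Module):
    # Encoder-only depth estimation: encoder channels are rearranged into a full-resolution depth map.
    def __init__(self, upscale=32):
        super().__init__()
        # Lightweight MobileNetV2 backbone (overall stride 32, 1280 output channels).
        self.encoder = mobilenet_v2(weights=None).features
        # 1x1 head producing upscale^2 channels for a single-channel depth map.
        self.head = nn.Conv2d(1280, upscale * upscale, kernel_size=1)
        # Depth-to-space: (N, upscale^2, H/upscale, W/upscale) -> (N, 1, H, W).
        self.depth_to_space = nn.PixelShuffle(upscale)

    def forward(self, x):
        return self.depth_to_space(self.head(self.encoder(x)))

# Usage: input height/width must be divisible by the upscale factor.
depth = DTSDepthSketch()(torch.randn(1, 3, 384, 1280))   # -> torch.Size([1, 1, 384, 1280])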
(This article belongs to the Special Issue Computer Visions and Pattern Recognition)

21 pages, 11989 KiB  
Article
A Social Distance Estimation and Crowd Monitoring System for Surveillance Cameras
by Mohammad Al-Sa’d, Serkan Kiranyaz, Iftikhar Ahmad, Christian Sundell, Matti Vakkuri and Moncef Gabbouj
Sensors 2022, 22(2), 418; https://doi.org/10.3390/s22020418 - 6 Jan 2022
Cited by 20 | Viewed by 4085
Abstract
Social distancing is crucial to restrain the spread of diseases such as COVID-19, but complete adherence to safety guidelines is not guaranteed. Monitoring social distancing through mass surveillance is paramount to developing appropriate mitigation plans and exit strategies. Nevertheless, it is a labor-intensive task that is prone to human error and tainted with plausible breaches of privacy. This paper presents a privacy-preserving adaptive social distance estimation and crowd monitoring solution for camera surveillance systems. We develop a novel person localization strategy through pose estimation, build a privacy-preserving adaptive smoothing and tracking model to mitigate occlusions and noisy/missing measurements, compute inter-personal distances in real-world coordinates, detect social distance infractions, and identify overcrowded regions in a scene. Performance evaluation is carried out by testing the system’s ability in person detection, localization, density estimation, anomaly recognition, and identification of high-risk areas. We compare the proposed system to the latest techniques and examine the performance gain delivered by the localization and smoothing/tracking algorithms. Experimental results indicate a considerable improvement across different metrics when utilizing the developed system. In addition, they show its potential and functionality for applications other than social distancing.
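One common way to compute inter-personal distances in real-world coordinates, as the abstract mentions, is to map detected foot points onto the ground plane with a calibrated homography. The sketch below uses OpenCV for this; the calibration points, distance threshold, and function name are illustrative assumptions, not the authors' implementation.

import cv2
import numpy as np

# Hypothetical calibration: four image points of a known ground rectangle (pixels)
# and their corresponding real-world coordinates in metres.
image_pts = np.float32([[420, 720], [880, 720], [760, 380], [520, 380]])
world_pts = np.float32([[0.0, 0.0], [3.0, 0.0], [3.0, 6.0], [0.0, 6.0]])
H = cv2.getPerspectiveTransform(image_pts, world_pts)

def ground_distances(feet_px, threshold_m=2.0):
    # Map detected foot points (pixels) onto the ground plane and flag pairs closer than the threshold.
    pts = np.float32(feet_px).reshape(-1, 1, 2)
    world = cv2.perspectiveTransform(pts, H).reshape(-1, 2)   # metres on the ground plane
    diff = world[:, None, :] - world[None, :, :]
    dist = np.linalg.norm(diff, axis=-1)                      # pairwise distance matrix
    i, j = np.triu_indices(len(world), k=1)
    violations = [(a, b, dist[a, b]) for a, b in zip(i, j) if dist[a, b] < threshold_m]
    return dist, violations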
(This article belongs to the Special Issue Computer Visions and Pattern Recognition)

18 pages, 1291 KiB  
Article
Comparison of Machine Learning and Sentiment Analysis in Detection of Suspicious Online Reviewers on Different Type of Data
by Kristina Machova, Marian Mach and Matej Vasilko
Sensors 2022, 22(1), 155; https://doi.org/10.3390/s22010155 - 27 Dec 2021
Cited by 15 | Viewed by 3427
Abstract
The article focuses on the important problem of detecting suspicious reviewers in online discussions on social networks. We concentrate on a special type of suspicious author: trolls. We use machine learning methods to generate detection models that discriminate a troll reviewer from a common reviewer, as well as sentiment analysis to recognize the sentiment typical of trolls’ comments. Sentiment analysis can be performed using either machine learning or a lexicon-based approach; we use lexicon-based sentiment analysis because of its better ability to detect the vocabulary typical of troll authors. We achieved Accuracy = 0.95 and F1 = 0.80 using sentiment analysis. The best results using machine learning methods were achieved by a support vector machine, with Accuracy = 0.986 and F1 = 0.988, using a dataset with the set of all selected attributes. We conclude that the machine learning detection model is more successful than lexicon-based sentiment analysis, although the difference in accuracy is not as large as the difference in the F1 measure.
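A minimal sketch of the machine learning side of such a study, a TF-IDF representation fed to a support vector machine, is shown below using scikit-learn. The data file, column names, and hyperparameters are illustrative assumptions and do not reproduce the authors' dataset or attribute set.

import pandas as pd
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.svm import SVC
from sklearn.metrics import accuracy_score, f1_score

# Hypothetical dataset: one comment per row with a binary troll/common label.
df = pd.read_csv("reviews.csv")            # assumed columns: "text", "is_troll"
X_train, X_test, y_train, y_test = train_test_split(
    df["text"], df["is_troll"], test_size=0.2, random_state=42, stratify=df["is_troll"])

# TF-IDF features of the comment text fed to a linear SVM classifier.
model = make_pipeline(TfidfVectorizer(min_df=2, ngram_range=(1, 2)),
                      SVC(kernel="linear", C=1.0))
model.fit(X_train, y_train)

pred = model.predict(X_test)
print("Accuracy:", accuracy_score(y_test, pred))
print("F1:", f1_score(y_test, pred))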
(This article belongs to the Special Issue Computer Visions and Pattern Recognition)
