
Special Issue "Image Processing and Analysis for Object Detection"

A special issue of Sensors (ISSN 1424-8220). This special issue belongs to the section "Sensing and Imaging".

Deadline for manuscript submissions: 1 October 2022

Special Issue Editors

Prof. Dr. Kaihua Zhang
Guest Editor
School of Computer Science, Nanjing University of Information Science and Technology, Nanjing, China
Interests: image segmentation; level sets; visual tracking
Prof. Dr. Wanli Xue
Guest Editor
School of Computer Science and Engineering, Tianjin University of Technology, Tianjin, China
Interests: visual tracking; sign language recognition
Dr. Bo Liu
Guest Editor
JD Finance America Corporation, Mountain View, CA 94089, USA
Interests: multimedia analysis; sign language recognition
Dr. Guangwei Gao
Guest Editor
Institute of Advanced Technology, Nanjing University of Posts and Telecommunications, Nanjing, China
Interests: face recognition; image super-resolution

Special Issue Information

Dear Colleagues,

Recent years have witnessed an explosion of interest in the research and development of deep learning techniques for computer vision. Although deep learning now reaches into almost every field of science and engineering, computer vision remains one of its primary application areas. In particular, deep learning has achieved unprecedented performance on computer vision tasks such as object detection, visual tracking, image segmentation, image/video super-resolution, satellite image processing, and salient object detection, results that conventional methods cannot match.

This Special Issue aims to cover recent advancements in computer vision that involve the use of deep learning methods, with a particular interest in low-level and high-level computer vision tasks. Both original research and review articles are welcome. Topics include, but are not limited to, the following:

  • Image/video super-resolution with deep learning approaches;
  • Object detection, visual tracking, and image/video segmentation with deep learning approaches;
  • Supervised and unsupervised learning for image/video processing;
  • Satellite image processing with deep learning techniques;
  • Low-light image enhancement using deep learning approaches.

Prof. Dr. Kaihua Zhang
Prof. Dr. Wanli Xue
Dr. Bo Liu
Dr. Guangwei Gao
Guest Editors

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once you are registered, you can proceed to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Sensors is an international peer-reviewed open access semimonthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 2400 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • deep learning
  • computer vision
  • object detection
  • visual tracking
  • image super-resolution
  • salient object detection

Published Papers (2 papers)


Research

Article
LEOD-Net: Learning Line-Encoded Bounding Boxes for Real-Time Object Detection
Sensors 2022, 22(10), 3699; https://doi.org/10.3390/s22103699 - 12 May 2022
Abstract
This paper proposes a learnable line encoding technique for the bounding boxes commonly used in object detection. A bounding box is encoded using two main points, the top-left corner and the bottom-right corner; a lightweight convolutional neural network (CNN) is then employed to learn the lines and propose high-resolution line masks for each class category using a pixel-shuffle operation. Post-processing is applied to the predicted line masks to filter them and estimate clean lines based on a progressive probabilistic Hough transform. The proposed method was trained and evaluated on two common object detection benchmarks: Pascal VOC2007 and MS-COCO2017. The proposed model attains high mean average precision (mAP) values (78.8% for VOC2007 and 48.1% for COCO2017) while processing each frame in a few milliseconds (37 ms for PASCAL VOC and 47 ms for COCO). The strength of the proposed method lies in its simplicity and ease of implementation, unlike recent state-of-the-art methods in object detection, which involve complex processing pipelines.
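The core idea of the encoding, reducing a box to the line between its top-left and bottom-right corners and recovering the box from that line's extent, can be illustrated with a minimal sketch. This is not the paper's CNN pipeline; the grid size and the simple linear rasterization are illustrative assumptions.

```python
def box_to_line_mask(box, height, width):
    """Encode a box as the rasterized diagonal from its top-left
    to its bottom-right corner (illustrative stand-in for the
    learned line masks described in the abstract)."""
    x1, y1, x2, y2 = box
    mask = [[0] * width for _ in range(height)]
    n = max(abs(x2 - x1), abs(y2 - y1))
    for i in range(n + 1):
        x = round(x1 + (x2 - x1) * i / n)
        y = round(y1 + (y2 - y1) * i / n)
        mask[y][x] = 1
    return mask

def line_mask_to_box(mask):
    """Decode: the box is the extent of the line's foreground pixels."""
    pts = [(x, y) for y, row in enumerate(mask)
                  for x, v in enumerate(row) if v]
    xs = [p[0] for p in pts]
    ys = [p[1] for p in pts]
    return (min(xs), min(ys), max(xs), max(ys))

box = (10, 20, 60, 50)
mask = box_to_line_mask(box, 100, 100)
print(line_mask_to_box(mask))  # round-trips to (10, 20, 60, 50)
```

In the paper the decoding step uses a progressive probabilistic Hough transform on the predicted masks; here the endpoints are simply read off the clean mask, which suffices to show why a single diagonal line carries the same information as the four box coordinates.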
(This article belongs to the Special Issue Image Processing and Analysis for Object Detection)

Article
SCD: A Stacked Carton Dataset for Detection and Segmentation
Sensors 2022, 22(10), 3617; https://doi.org/10.3390/s22103617 - 10 May 2022
Cited by 2
Abstract
Carton detection is an important technique in automatic logistics systems and can be applied to many tasks, such as the stacking and unstacking of cartons and the unloading of cartons from containers. To date, however, there has been no public large-scale carton dataset for the research community to train and evaluate carton detection models, which hinders the development of carton detection. In this article, we present a large-scale carton dataset named Stacked Carton Dataset (SCD) with the goal of advancing the state of the art in carton detection. Images were collected from the Internet and several warehouses, and objects were labeled for precise localization using instance mask annotation. There are a total of 250,000 instance masks from 16,136 images. Naturally, a suite of benchmarks was established with several popular detectors and instance segmentation models. In addition, we designed a carton detector based on RetinaNet by embedding our proposed Offset Prediction between Classification and Localization module (OPCL) and Boundary Guided Supervision module (BGS). OPCL alleviates the imbalance between classification and localization quality, which boosts AP by 3.1–4.7% on SCD at the model level, while BGS guides the detector to pay more attention to the boundary information of cartons and to decouple repeated carton textures at the task level. To demonstrate the generalization of OPCL to other datasets, we conducted extensive experiments on MS COCO and PASCAL VOC. The improvements in AP on MS COCO and PASCAL VOC were 1.8–2.2% and 3.4–4.3%, respectively.
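The AP figures reported above rest on matching detections to ground truth by box overlap. As a generic reference point (not the SCD evaluation code), the intersection-over-union (IoU) criterion used by both MS COCO and PASCAL VOC can be computed for two axis-aligned boxes as follows:

```python
def box_iou(a, b):
    """Intersection-over-union of two boxes given as (x1, y1, x2, y2),
    with x2 > x1 and y2 > y1."""
    # Intersection rectangle; empty overlap clamps to zero area.
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    # Union = sum of areas minus the double-counted intersection.
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    union = area_a + area_b - inter
    return inter / union if union else 0.0

print(box_iou((0, 0, 10, 10), (5, 0, 15, 10)))  # 1/3: half-overlapping boxes
```

A detection typically counts as a true positive when its IoU with a ground-truth box exceeds a threshold (0.5 for PASCAL VOC; averaged over 0.5 to 0.95 for COCO-style AP).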
(This article belongs to the Special Issue Image Processing and Analysis for Object Detection)
