Feature Extraction for Unconventional Visual Sensors or Specific Applications

A special issue of Sensors (ISSN 1424-8220). This special issue belongs to the section "Sensing and Imaging".

Deadline for manuscript submissions: closed (15 December 2021) | Viewed by 14664

Special Issue Editors


Dr. Omar Ait Aider
Guest Editor
Institut Pascal, ComSee Team, Université Clermont Auvergne, Clermont-Ferrand, France
Interests: computer vision; image sensors; image sequences; mobile robots

Dr. Michel Dhome
Guest Editor
Institut Pascal, Université Clermont Auvergne, Clermont-Ferrand, France
Interests: computer vision; robotics; artificial vision; 3D object localization, recognition, and modeling

Dr. Yizhen Lao
Guest Editor
College of Computer Science and Electronic Engineering, Hunan University, Changsha, China
Interests: computer vision; photogrammetry; computational photography

Special Issue Information

Dear Colleagues,

Detecting, tracking, and recognizing visual features are fundamental tasks in computer vision. Despite advances in computing power, pixel resolution, and frame rate on the one hand, and the deep learning revolution on the other, these tasks remain difficult, and existing methods are still not as robust and reliable as human vision.

This is particularly true when using unconventional sensors with different acquisition modalities, or in very specific applications for which off-the-shelf approaches are not suitable. In such cases, deep learning does not necessarily improve performance, due to the absence of large training databases.

This Special Issue will bring together original and innovative work on visual feature extraction, either in images from unconventional visual sensors or in classical images for very specific applications in which standard primitives and descriptors are no longer sufficiently efficient, discriminative, or precise.

Submitted papers may focus on (but are not limited to) recent advances in primitive extraction in images from multi- and hyperspectral cameras, thermal cameras, light-field or plenoptic cameras, event-based cameras, rolling-shutter cameras, time-of-flight cameras, or even scanners and MRI. Likewise, contributions may concern methods for extracting and matching primitives in classical images, with innovations justified by a particular application context (3D reconstruction, material characterization, inspection and quality control, specific tasks in robotics, or data fusion with range sensors such as LiDAR or MMW radar).

Dr. Omar Ait Aider
Dr. Michel Dhome
Dr. Yizhen Lao

Guest Editors

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once you are registered, click here to go to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Sensors is an international peer-reviewed open access semimonthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 2600 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • image feature extraction
  • feature descriptors
  • feature matching
  • unconventional cameras
  • object recognition
  • object tracking
  • 3D vision

Published Papers (4 papers)

Research

15 pages, 6284 KiB  
Article
Accuracy and Speed Improvement of Event Camera Motion Estimation Using a Bird’s-Eye View Transformation
by Takehiro Ozawa, Yusuke Sekikawa and Hideo Saito
Sensors 2022, 22(3), 773; https://doi.org/10.3390/s22030773 - 20 Jan 2022
Cited by 11 | Viewed by 2934
Abstract
Event cameras are bio-inspired sensors with high dynamic range and high temporal resolution. These properties enable motion estimation from textures with repeating patterns, which is difficult to achieve with RGB cameras, so event-camera motion estimation is a promising candidate for vehicle position estimation. Contrast maximization is an existing method for event-camera motion estimation that can operate on views of the road surface. However, contrast maximization tends to fall into a local optimum when estimating three-dimensional motion, which makes correct estimation difficult. To solve this problem, we propose a method that optimizes contrast in the bird's-eye-view space. Instead of performing three-dimensional motion estimation, we reduce the problem to two-dimensional motion estimation by transforming the event data to a bird's-eye view using a homography calculated from the event camera position. This transformation mitigates the non-convexity of the loss function that affects conventional methods. In a quantitative experiment, we created event data with a car simulator and evaluated our motion estimation method, showing improvements in accuracy and speed. We also ran the estimation on real event data and evaluated the results qualitatively, again showing improved accuracy.
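
To illustrate the core idea, the sketch below shows a minimal contrast-maximization loop in the bird's-eye-view space. It assumes the events are given as (x, y, t) triples and that the homography H from image plane to ground plane is already known; the function names and the simple Nelder-Mead optimizer are illustrative choices, not the authors' implementation.

```python
import numpy as np
from scipy.optimize import minimize

def warp_to_bev(xy, H):
    """Map pixel coordinates onto the bird's-eye-view (ground) plane via H."""
    pts = np.hstack([xy, np.ones((len(xy), 1))])   # homogeneous coordinates
    bev = (H @ pts.T).T
    return bev[:, :2] / bev[:, 2:3]                # dehomogenize

def neg_contrast(v, bev_xy, t, shape=(200, 200)):
    """Negative variance of the image of warped events (IWE) for a 2D motion v."""
    x = bev_xy[:, 0] - v[0] * t                    # motion-compensate each event
    y = bev_xy[:, 1] - v[1] * t                    # back to the reference time
    iwe, _, _ = np.histogram2d(x, y, bins=shape,
                               range=[[0, shape[0]], [0, shape[1]]])
    return -np.var(iwe)                            # sharper IWE = higher contrast

def estimate_motion(events, H):
    """events: (N, 3) array of (x, y, t); H: 3x3 image-to-ground homography."""
    bev_xy = warp_to_bev(events[:, :2], H)
    t = events[:, 2] - events[0, 2]
    res = minimize(neg_contrast, x0=np.zeros(2), args=(bev_xy, t),
                   method="Nelder-Mead")
    return res.x                                   # (vx, vy) on the ground plane
```

Because the search runs over two parameters instead of a full three-dimensional motion, a simple local optimizer is far less likely to get trapped, which matches the paper's motivation for the bird's-eye-view reduction.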

19 pages, 2898 KiB  
Article
Ensemble Method of Convolutional Neural Networks with Directed Acyclic Graph Using Dermoscopic Images: Melanoma Detection Application
by Arthur Cartel Foahom Gouabou, Jean-Luc Damoiseaux, Jilliana Monnier, Rabah Iguernaissi, Abdellatif Moudafi and Djamal Merad
Sensors 2021, 21(12), 3999; https://doi.org/10.3390/s21123999 - 10 Jun 2021
Cited by 20 | Viewed by 5803
Abstract
The early detection of melanoma is the most effective way to reduce its mortality rate. Dermatologists perform this task with the help of dermoscopy, a non-invasive tool that allows the visualization of the patterns of skin lesions. Computer-aided diagnosis (CAD) systems developed on dermoscopic images are needed to assist dermatologists. These systems rely mainly on multiclass classification approaches. However, the multiclass classification of skin lesions by an automated system remains a challenging task. Decomposing a multiclass problem into binary problems can reduce the complexity of the initial problem and increase overall performance. This paper proposes a CAD system that classifies dermoscopic images into three diagnostic classes: melanoma, nevi, and seborrheic keratosis. We introduce a novel ensemble scheme of convolutional neural networks (CNNs), inspired by decomposition and ensemble methods, to improve the performance of the CAD system. Unlike conventional ensemble methods, we use a directed acyclic graph to aggregate binary CNNs for the melanoma detection task. On the ISIC 2018 public dataset, our method achieves the best balanced accuracy (76.6%) compared with multiclass CNNs, ensembles of multiclass CNNs with classical aggregation methods, and other related work. Our results show that the directed acyclic graph is a meaningful approach for developing a reliable and robust automated diagnosis system for the multiclass classification of dermoscopic images.
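
The sketch below illustrates how binary classifiers can be aggregated along a directed acyclic graph: each node makes one binary decision, and only samples rejected at a node flow to its successor. The two-node topology and the stand-in classifiers are hypothetical, chosen only to show the routing principle, not the CNN architecture or graph used in the paper.

```python
import numpy as np

# Hypothetical binary classifiers; in the paper these are trained CNNs.
# Each returns a probability for one grouped or pairwise decision.
def clf_sk_vs_rest(x):       # seborrheic keratosis vs. {melanoma, nevus}
    return float(np.clip(x[0], 0, 1))   # stand-in for a CNN forward pass

def clf_mel_vs_nev(x):       # melanoma vs. nevus
    return float(np.clip(x[1], 0, 1))

def dag_predict(x, threshold=0.5):
    """Route a sample through a two-node directed acyclic graph of
    binary classifiers and return a final diagnosis label."""
    if clf_sk_vs_rest(x) >= threshold:
        return "seborrheic_keratosis"
    # Only samples rejected by the first node reach the second one.
    return "melanoma" if clf_mel_vs_nev(x) >= threshold else "nevus"

print(dag_predict(np.array([0.2, 0.8])))   # -> melanoma
```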

21 pages, 10675 KiB  
Article
Spatial Location in Integrated Circuits through Infrared Microscopy
by Raphaël Abelé, Jean-Luc Damoiseaux, Redouane El Moubtahij, Jean-Marc Boi, Daniele Fronte, Pierre-Yvan Liardet and Djamal Merad
Sensors 2021, 21(6), 2175; https://doi.org/10.3390/s21062175 - 20 Mar 2021
Cited by 1 | Viewed by 2075
Abstract
In this paper, we present an infrared-microscopy-based approach for locating structures in integrated circuits, in order to automate their secure characterization. The infrared sensor is the key device for inspecting the interior of integrated circuits. Two main issues are addressed. The first concerns scanning integrated circuits with a motorized optical system composed of an uncooled infrared camera combined with an optical microscope. An automated system is required to focus on the conductive tracks under the silicon layer. This is achieved by an autofocus system that analyzes the infrared images through a discrete polynomial image transform, which enables accurate feature detection and yields a focus metric robust to the image degradation inherent to this acquisition context. The second issue concerns locating the structures to be characterized on the conductive tracks. To deal with a large amount of redundancy and noise, a graph-matching method is presented: discriminating graph labels are developed to overcome the redundancy, while a flexible assignment optimizer solves the inexact matching caused by noise in the graphs. The resulting automated location system brings reproducibility to the secure characterization of integrated circuits, along with gains in accuracy and speed.
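
As an illustration of the autofocus step, the sketch below sweeps a (hypothetical) motorized stage and keeps the position that maximizes a sharpness score. A variance-of-Laplacian metric stands in for the paper's discrete-polynomial-transform metric, and `acquire` is an assumed callable that returns one infrared frame per stage position.

```python
import numpy as np

def focus_metric(img):
    """Sharpness score: variance of a discrete Laplacian response.
    (The paper builds its metric on a discrete polynomial image
    transform; the Laplacian here is a common stand-in.)"""
    lap = (-4 * img
           + np.roll(img, 1, 0) + np.roll(img, -1, 0)
           + np.roll(img, 1, 1) + np.roll(img, -1, 1))
    return lap.var()

def autofocus(acquire, z_positions):
    """Sweep the motorized stage over z_positions, score each infrared
    frame, and return the position with the sharpest response."""
    scores = [focus_metric(acquire(z).astype(np.float64))
              for z in z_positions]
    return z_positions[int(np.argmax(scores))]
```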

26 pages, 6274 KiB  
Article
FSD-BRIEF: A Distorted BRIEF Descriptor for Fisheye Image Based on Spherical Perspective Model
by Yutong Zhang, Jianmei Song, Yan Ding, Yating Yuan and Hua-Liang Wei
Sensors 2021, 21(5), 1839; https://doi.org/10.3390/s21051839 - 6 Mar 2021
Cited by 6 | Viewed by 2695
Abstract
Fisheye images, with their far larger field of view (FOV), suffer from severe radial distortion, so feature matching with traditional descriptors cannot achieve its best performance. To address this challenge, this paper presents a novel distorted Binary Robust Independent Elementary Features (BRIEF) descriptor for fisheye images based on a spherical perspective model. First, a 3D gray centroid of feature points is designed, and the position and direction of feature points on the spherical image are described by a constructed feature point attitude matrix. Then, based on the attitude matrix of feature points, the coordinate mapping between the BRIEF descriptor template and the fisheye image is established in order to compute the distorted BRIEF descriptor. Four experiments are provided to test and verify the invariance and matching performance of the proposed descriptor on fisheye images. The experimental results show that the proposed descriptor achieves distortion invariance and can significantly improve matching performance on fisheye images.
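
The sketch below conveys the spirit of the approach: BRIEF test pairs are laid out on the tangent plane of the keypoint's viewing ray on the unit sphere and projected back through a fisheye model before the intensity comparisons. An equidistant projection (r = f·θ) stands in for the paper's spherical perspective model, the orientation step derived from the 3D gray centroid is omitted, and bounds checks are skipped; all names here are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)
# 256 BRIEF test pairs, sampled once on a tangent-plane patch (in radians).
PAIRS = rng.normal(scale=0.05, size=(256, 2, 2))

def ray_from_pixel(u, v, f, cx, cy):
    """Back-project a fisheye pixel to a unit ray (equidistant model r = f*theta)."""
    dx, dy = u - cx, v - cy
    theta = np.hypot(dx, dy) / f
    phi = np.arctan2(dy, dx)
    s = np.sin(theta)
    return np.array([s * np.cos(phi), s * np.sin(phi), np.cos(theta)])

def pixel_from_ray(d, f, cx, cy):
    """Project a unit ray back to fisheye pixel coordinates."""
    theta = np.arccos(np.clip(d[2], -1.0, 1.0))
    phi = np.arctan2(d[1], d[0])
    return cx + f * theta * np.cos(phi), cy + f * theta * np.sin(phi)

def distorted_brief(img, kp, f, cx, cy):
    """Binary descriptor: each test pair is laid out on the tangent plane of
    the keypoint's viewing ray, then projected through the fisheye model."""
    d = ray_from_pixel(*kp, f, cx, cy)
    # Orthonormal basis of the tangent plane at d.
    a = np.array([0., 0., 1.]) if abs(d[2]) < 0.9 else np.array([1., 0., 0.])
    e1 = np.cross(d, a); e1 /= np.linalg.norm(e1)
    e2 = np.cross(d, e1)
    bits = []
    for (p, q) in PAIRS:
        dp = d + p[0] * e1 + p[1] * e2; dp /= np.linalg.norm(dp)
        dq = d + q[0] * e1 + q[1] * e2; dq /= np.linalg.norm(dq)
        up, vp = pixel_from_ray(dp, f, cx, cy)
        uq, vq = pixel_from_ray(dq, f, cx, cy)
        bits.append(img[int(vp), int(up)] < img[int(vq), int(uq)])
    return np.packbits(bits)       # 32-byte binary descriptor
```

Because the test pattern lives on the sphere rather than the image plane, two views of the same point produce sampling locations that follow the distortion consistently, which is what makes the descriptor comparison meaningful across fisheye views.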
