Reprint

Visual Sensors

Edited by
March 2020
738 pages
  • ISBN 978-3-03928-338-5 (Paperback)
  • ISBN 978-3-03928-339-2 (PDF)

This book is a reprint of the Special Issue Visual Sensors that was published in Sensors.

Subject areas: Chemistry & Materials Science; Engineering; Environmental & Earth Sciences
Summary
Visual sensors can capture a large quantity of information about the environment around them. A wide variety of visual systems exists, from classical monocular systems to omnidirectional, RGB-D, and more sophisticated 3D systems. Each configuration presents specific characteristics that make it suitable for solving different problems. The range of applications is wide and varied, including robotics, industry, agriculture, quality control, visual inspection, surveillance, autonomous driving, and navigation aid systems. This book presents several problems addressed with visual sensors; among them, we highlight visual SLAM, image retrieval, manipulation, calibration, object recognition, and navigation.
Format
  • Paperback
License
© 2020 by the authors; CC BY-NC-ND license
Keywords
3D reconstruction; RGB-D sensor; non-rigid reconstruction; pedestrian detection; boosted decision tree; scale invariance; receptive field correspondence; soft decision tree; single-shot 3D shape measurement; digital image correlation; warp function; inverse compositional Gauss-Newton algorithm; UAV image; dynamic programming; seam-line; optical flow; image mosaic; iris recognition; presentation attack detection; convolutional neural network (CNN); support vector machine (SVM); content-based image retrieval; textile retrieval; textile localization; texture retrieval; texture description; visual sensors; iris segmentation; semantic segmentation; visible light and near-infrared light camera sensors; laser sensor; line scan camera; lane marking detection; image binarization; lane marking reconstruction; automated design; vision system; FOV; illumination; recognition algorithm; action localization; action segmentation; 3D ConvNets; LSTM; image retrieval; hybrid histogram descriptor; perceptually uniform histogram; motif co-occurrence histogram; omnidirectional imaging; visual localization; catadioptric sensor; visual information fusion; image processing; underwater imaging; embedded systems; stereo vision; visual odometry; handshape recognition; sign language; finger alphabet; skeletal data; ego-motion estimation; stereo; RGB-D; mobile robots; around view monitor (AVM) system; automatic calibration; lane marking; parking assist system; advanced driver assistance system (ADAS); pose estimation; symmetry axis; point cloud; sweet pepper; semantic mapping; RGB-D SLAM; visual mapping; indoor visual SLAM; adaptive model; motion estimation; stereo camera; person re-identification; end-to-end architecture; appearance-temporal features; Siamese network; pivotal frames; visual tracking; correlation filters; motion-aware; adaptive update strategy; confidence response map; camera calibration; Gray code; checkerboard; human visual system; local parallel cross pattern; straight wing aircraft; structure extraction; consistent line clustering; parallel line; planes intersection; salient region detection; appearance-based model; regression-based model; human visual attention; background dictionary; quality control; fringe projection profilometry; depth image registration; speed measurement; large field of view; vibration; calibration; CLOSIB; statistical information of gray-level differences; local binary patterns; texture classification; SLAM; indoor environment; Manhattan frame estimation; orientation relevance; spatial transformation; robotic welding; seam tracking; visual detection; narrow butt joint; GTAW; LRF; extrinsic calibration; sensor combination; geometric moments; camera pose; rotation angle; measurement error; robotics; robot manipulation; depth vision; star image prediction; star sensor; Richardson-Lucy algorithm; neural network; tightly-coupled VIO; fused point and line feature matching; pose estimates; simplified initialization strategy; patrol robot; map representation; vision-guided robotic grasping; object recognition; global feature descriptor; iterative closest point