
Smart Vision Sensors

A special issue of Sensors (ISSN 1424-8220). This special issue belongs to the section "Intelligent Sensors".

Deadline for manuscript submissions: closed (15 March 2019) | Viewed by 16888

Special Issue Editor


Prof. Dr. Primo Zingaretti
Guest Editor
Dipartimento di Ingegneria dell'Informazione – DII, Università Politecnica delle Marche, 60131 Ancona, Italy
Interests: robotics vision (for aerial, ground, and underwater autonomous systems); artificial intelligence; intelligent mechatronic systems; remote sensing; precision farming

Special Issue Information

Dear Colleagues,

Perception is the acquisition of real-world representations through interaction with the environment. Strictly speaking, the necessary input stimuli (sensory information) are not part of the perceptual process itself, which consists of several sub-processes, mainly the selection, organization, and interpretation of stimuli. Therefore, in human beings, perception involves not only the visual system (from the eyes to the retina) but also the brain and its neural networks. Similarly, the current development trend in machine vision is to transform vision sensors into smart cameras by integrating into a single system the image sensor (CCD/CMOS and lens), processor and memory (a DSP together with FPGA, GPU, and CPU units), communication interface, and software (operating system and algorithms). Smart vision sensors can be designed to produce multiple sensory modalities, from classical 2D monocular or omnidirectional systems to recent RGB-D (depth) and 3D systems, and can adjust their parameters to suit different applications. In addition, a natural evolution of smart cameras is towards smart camera networks that use multiple vision sensors.
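To make this integration concrete, the following minimal sketch outlines the sense-process-communicate loop of a smart camera in Python. The component interfaces (grab_frame, extract_features, interpret, publish) are hypothetical placeholders, not any particular vendor's API.

```python
import time

class SmartCamera:
    """Toy sense-process-communicate loop of a smart camera.
    All component interfaces are hypothetical placeholders."""

    def __init__(self, sensor, model, link):
        self.sensor = sensor  # image sensor (CCD/CMOS and lens)
        self.model = model    # on-board processing (DSP/FPGA/GPU/CPU)
        self.link = link      # communication interface

    def run(self, period_s=0.1):
        while True:
            frame = self.sensor.grab_frame()                # selection
            features = self.model.extract_features(frame)   # organization
            decision = self.model.interpret(features)       # interpretation
            self.link.publish(decision)  # transmit decisions, not raw video
            time.sleep(period_s)
```

The defining design choice, mirroring the biological analogy above, is that interpretation happens on the device, so only compact, decision-level data leave the camera.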

The aim of this Special Issue is to present some of the possibilities that smart cameras offer by exploring, at a low level, new hardware and software solutions, and by incorporating into sensors, at a high level, advanced artificial intelligence techniques for image understanding and decision-making.

Smart visual sensor applications can now be found in robotics, industrial visual inspection, environmental monitoring and agriculture, security and surveillance, autonomous driving, and many other fields.

Prof. Dr. Primo Zingaretti
Guest Editor

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once you are registered, click here to go to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Sensors is an international peer-reviewed open access semimonthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 2600 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • 2D/3D feature extraction at sensor level
  • Intelligent onboard processing
  • Embedded computer vision algorithms
  • Embedded computer vision architectures (e.g., DSP, FPGA, GPU)
  • Real-time image and video processing (e.g., deblurring, super-resolution)
  • Image and video understanding
  • Smart industrial cameras
  • Smart camera networks

Published Papers (4 papers)


Research

13 pages, 7114 KiB  
Article
Advances in Nuclear Radiation Sensing: Enabling 3-D Gamma-Ray Vision
by Kai Vetter, Ross Barnowski, Joshua W. Cates, Andrew Haefner, Tenzing H.Y. Joshi, Ryan Pavlovsky and Brian J. Quiter
Sensors 2019, 19(11), 2541; https://doi.org/10.3390/s19112541 - 4 Jun 2019
Cited by 53 | Viewed by 6371
Abstract
The enormous advances in sensing and data-processing technologies, in combination with recent developments in nuclear radiation detection and imaging, enable unprecedented and “smarter” ways to detect, map, and visualize nuclear radiation. The recently developed concept of three-dimensional (3-D) scene-data fusion now allows us to “see” nuclear radiation in three dimensions, in real time, and specifically for individual radionuclides. It is based on a multi-sensor instrument that maps a local scene and fuses the scene data with nuclear radiation data in 3-D while the instrument moves freely through the scene. This concept is agnostic of the deployment platform and of the specific radiation detection or imaging modality. We have demonstrated 3-D scene-data fusion in a range of configurations and locations, such as the Fukushima Prefecture in Japan and Chernobyl in Ukraine, on unmanned and manned aerial and ground-based platforms. It provides new means for the detection, mapping, and visualization of radiological and nuclear materials, which is relevant to the safe and secure operation of nuclear and radiological facilities and to the response to accidental or intentional releases of radioactive materials, where a timely, accurate, and effective assessment is critical. In addition, the ability to visualize nuclear radiation in 3-D and in real time offers new means of communicating with the public and helps overcome one of the major public concerns: not being able to “see” nuclear radiation.
(This article belongs to the Special Issue Smart Vision Sensors)
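As a rough illustration of the scene-data fusion idea (registering radiation measurements to a 3-D map using the instrument's estimated pose), here is a minimal Python sketch. Binning counts at the sensor position, the pose source, and the grid parameters are all simplifying assumptions; the authors' actual reconstruction back-projects through detector response models rather than simple binning.

```python
import numpy as np

class VoxelRadiationMap:
    """Toy 3-D scene-data fusion: accumulate gamma-ray count rates
    into a voxel grid at the instrument's pose (e.g., from SLAM).
    Illustrative only; assumes positions lie within [0, size_m)."""

    def __init__(self, size_m=50.0, voxel_m=0.5):
        n = int(size_m / voxel_m)
        self.voxel_m = voxel_m
        self.counts = np.zeros((n, n, n))  # accumulated counts
        self.dwell = np.zeros((n, n, n))   # seconds spent per voxel

    def add_measurement(self, position_m, count_rate_cps, dt_s):
        # Bin the measurement at the voxel containing the instrument.
        idx = tuple((np.asarray(position_m) / self.voxel_m).astype(int))
        self.counts[idx] += count_rate_cps * dt_s
        self.dwell[idx] += dt_s

    def rate_map(self):
        # Mean count rate per voxel (cps); NaN where never visited.
        with np.errstate(invalid="ignore", divide="ignore"):
            return self.counts / self.dwell
```

In use, a SLAM or GNSS/IMU pose stream would drive add_measurement as the platform moves through the scene, and rate_map would then yield a coarse 3-D activity map for visualization.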

17 pages, 17669 KiB  
Article
Orientation-Constrained System for Lamp Detection in Buildings Based on Computer Vision
by Francisco Troncoso-Pastoriza, Pablo Eguía-Oller, Rebeca P. Díaz-Redondo, Enrique Granada-Álvarez and Aitor Erkoreka
Sensors 2019, 19(7), 1516; https://doi.org/10.3390/s19071516 - 28 Mar 2019
Cited by 2 | Viewed by 3025
Abstract
Computer vision is used in this work to detect lighting elements in buildings with the goal of improving the accuracy of previous methods to provide a precise inventory of the location and state of lamps. Using the framework developed in our previous works, we introduce two new modifications to enhance the system: first, a constraint on the orientation of the detected poses in the optimization methods for both the initial and the refined estimates based on the geometric information of the building information modelling (BIM) model; second, an additional reprojection error filtering step to discard the erroneous poses introduced with the orientation restrictions, keeping the identification and localization errors low while greatly increasing the number of detections. These enhancements are tested in five different case studies with more than 30,000 images, with results showing improvements in the number of detections, the percentage of correct model and state identifications, and the distance between detections and reference positions.
(This article belongs to the Special Issue Smart Vision Sensors)
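The reprojection-error filtering step can be illustrated with a short sketch: project the BIM model points of a candidate lamp pose into the image and discard the pose if the mean pixel error exceeds a threshold. This is a hypothetical helper assuming a pinhole intrinsics matrix K and pose (R, t); it is not the paper's code, and the 4-pixel threshold is an arbitrary example.

```python
import numpy as np

def reprojection_error(K, R, t, model_points, image_points):
    """Mean reprojection error (pixels) of 3-D model points (N x 3)
    under pose (R, t) with intrinsics K, against observed 2-D points."""
    proj = (K @ (R @ model_points.T + t.reshape(3, 1))).T
    proj = proj[:, :2] / proj[:, 2:3]  # perspective divide
    return np.linalg.norm(proj - image_points, axis=1).mean()

def filter_poses(candidates, K, model_points, max_err_px=4.0):
    """Keep only candidate poses whose reprojection error stays low,
    discarding erroneous poses admitted by the orientation constraint."""
    return [(R, t, pts) for (R, t, pts) in candidates
            if reprojection_error(K, R, t, model_points, pts) <= max_err_px]
```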

14 pages, 1890 KiB  
Article
Smart Camera Aware Crowd Counting via Multiple Task Fractional Stride Deep Learning
by Minglei Tong, Lyuyuan Fan, Hao Nan and Yan Zhao
Sensors 2019, 19(6), 1346; https://doi.org/10.3390/s19061346 - 18 Mar 2019
Cited by 8 | Viewed by 3586
Abstract
Estimating the number of people in highly clustered crowd scenes is an extremely challenging task on account of serious occlusion and the non-uniform distribution of people within a single crowd image. Traditional works on crowd counting take advantage of different CNN-like networks to regress a crowd density map and then predict the count from it. In contrast, we investigate a simple but effective deep learning model that concentrates on accurately predicting the density map while simultaneously training a density-level classifier that relaxes the parameters of the network, so that a smart camera can help prevent dangerous stampedes. First, a combination of atrous and fractional-stride convolutional neural network (CAFN) is proposed to deliver larger receptive fields and to reduce the loss of detail during down-sampling by using dilated kernels. Second, an expanded architecture (MTCAFN, a multiple-task CAFN for both regression and classification) is offered that not only precisely regresses the density map but also classifies the density level of the crowd. Third, experimental results on four datasets (ShanghaiTech Part A (MAE = 88.1) and Part B (MAE = 18.8), WorldExpo’10 (average MAE = 8.2), and UCF_CC_50 (MAE = 303.2)) show that the proposed method delivers effective performance.
(This article belongs to the Special Issue Smart Vision Sensors)
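The two architectural ingredients, atrous (dilated) convolutions for larger receptive fields and fractional-stride (transposed) convolutions for upsampling, can be sketched in a few lines of PyTorch. The depths, channel widths, and the three density levels below are illustrative assumptions, not the published MTCAFN configuration.

```python
import torch
import torch.nn as nn

class ToyCAFN(nn.Module):
    """Minimal sketch: dilated convolutions enlarge the receptive
    field without pooling, and a fractional-stride (transposed)
    convolution upsamples back toward input resolution. Two heads
    share features: density-map regression and density-level
    classification. Layer sizes are illustrative assumptions."""

    def __init__(self, num_levels=3):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(inplace=True),
            # Dilated kernels: receptive field grows, resolution kept.
            nn.Conv2d(64, 64, 3, padding=2, dilation=2), nn.ReLU(inplace=True),
            nn.Conv2d(64, 64, 3, padding=4, dilation=4), nn.ReLU(inplace=True),
        )
        # Fractional-stride convolution recovers spatial resolution.
        self.upsample = nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1)
        self.density_head = nn.Conv2d(32, 1, 1)      # regression task
        self.level_head = nn.Linear(64, num_levels)  # classification task

    def forward(self, x):
        f = self.features(x)
        density = self.density_head(torch.relu(self.upsample(f)))
        level = self.level_head(f.mean(dim=(2, 3)))  # global pooling
        return density, level
```

The crowd count is then the integral of the predicted density map, e.g., density.sum(dim=(2, 3)).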

17 pages, 12265 KiB  
Article
Automatic Rectification of the Hybrid Stereo Vision System
by Chengtao Cai, Bing Fan, Xin Liang and Qidan Zhu
Sensors 2018, 18(10), 3355; https://doi.org/10.3390/s18103355 - 8 Oct 2018
Cited by 2 | Viewed by 3049
Abstract
By combining the advantages of 360-degree field of view cameras and the high resolution of conventional cameras, the hybrid stereo vision system could be widely used in surveillance. As the relative position of the two cameras is not constant over time, its automatic rectification is highly desirable when adopting a hybrid stereo vision system for practical use. In this work, we provide a method for rectifying the dynamic hybrid stereo vision system automatically. A perspective projection model is proposed to reduce the computation complexity of the hybrid stereoscopic 3D reconstruction. The rectification transformation is calculated by solving a nonlinear constrained optimization problem for a given set of corresponding point pairs. The experimental results demonstrate the accuracy and effectiveness of the proposed method.
(This article belongs to the Special Issue Smart Vision Sensors)
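To illustrate the general approach of estimating a rectifying transformation by nonlinear optimization over corresponding point pairs, here is a minimal sketch using SciPy. The rotation-only parameterization and the simple row-alignment cost are assumptions for illustration, not the paper's constrained formulation for the omnidirectional-perspective camera pair.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.spatial.transform import Rotation

def alignment_cost(rotvec, rays_a, rays_b):
    """Sum of squared row-alignment errors after rotating normalized
    rays from camera A (toy cost for a rectified stereo pair)."""
    R = Rotation.from_rotvec(rotvec).as_matrix()
    rotated = (R @ rays_a.T).T
    # After ideal rectification, corresponding rays share the same
    # vertical image coordinate; penalize the mismatch.
    return np.sum((rotated[:, 1] / rotated[:, 2]
                   - rays_b[:, 1] / rays_b[:, 2]) ** 2)

def estimate_rectification(rays_a, rays_b):
    """Estimate a rectifying rotation from corresponding point pairs,
    given as N x 3 normalized homogeneous coordinates per camera."""
    res = minimize(alignment_cost, x0=np.zeros(3),
                   args=(rays_a, rays_b), method="Nelder-Mead")
    return Rotation.from_rotvec(res.x).as_matrix()
```

Because the relative pose drifts over time, such an estimator would be re-run periodically on freshly matched point pairs to keep the hybrid pair rectified.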
