Special Issue "Smart Vision Sensors"

A special issue of Sensors (ISSN 1424-8220). This special issue belongs to the section "Intelligent Sensors".

Deadline for manuscript submissions: closed (15 March 2019).

Special Issue Editor

Prof. Dr. Primo Zingaretti
Guest Editor
Department of Information Engineering - DII, Marche Polytechnic University, I-60131 Ancona, Italy
Interests: computer vision; localization and object tracking; 3D modeling; smart/intelligent sensors; machine learning; data fusion and deep learning in sensor systems; human-computer interaction; virtual and augmented reality

Special Issue Information

Dear Colleagues,

Perception is the acquisition of real-world representations through interaction with the environment. Strictly speaking, the input stimuli (sensory information) are not part of the perceptual process itself, which consists of several sub-processes: mainly the selection, organization, and interpretation of stimuli. In human beings, therefore, perception involves not only the visual system (from the eyes to the retina) but also the brain and its neural networks. Similarly, the current trend in machine vision is to transform vision sensors into smart cameras by integrating into a single system the image sensor (CCD/CMOS and lens), processor and memory (DSP together with FPGA, GPU, and CPU units), communication interface, and software (operating system and algorithms). Smart vision sensors can be designed to produce multiple sensorial modalities, from classical 2D monocular or omnidirectional systems to recent RGB-D (depth) and 3D systems, and can adjust their parameters to suit different applications. A natural evolution of smart cameras is towards smart camera networks that use multiple vision sensors.
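The integration described above (sensor, on-board processing, and software in one device) can be illustrated with a minimal sketch. All class and method names here are hypothetical, not from any real smart-camera API; the point is only the "capture, then process on board" pipeline structure.

```python
from dataclasses import dataclass, field
from typing import Callable, List

import numpy as np

@dataclass
class SmartCamera:
    """Toy model of a smart vision sensor: capture -> on-board pipeline -> output."""
    stages: List[Callable[[np.ndarray], np.ndarray]] = field(default_factory=list)

    def add_stage(self, fn: Callable[[np.ndarray], np.ndarray]) -> "SmartCamera":
        self.stages.append(fn)
        return self

    def capture_and_process(self, raw_frame: np.ndarray) -> np.ndarray:
        # Run the on-board pipeline (selection, organization, interpretation)
        out = raw_frame
        for stage in self.stages:
            out = stage(out)
        return out

# Example pipeline: intensity conversion followed by simple thresholding
cam = SmartCamera()
cam.add_stage(lambda img: img.mean(axis=2))               # organize: RGB -> intensity
cam.add_stage(lambda img: (img > 128).astype(np.uint8))   # interpret: binarize

frame = np.zeros((4, 4, 3))
frame[0, 0] = 255                     # one bright pixel
mask = cam.capture_and_process(frame)
```

The design choice mirrored here is that the device emits a processed result (a mask, a detection, a count) rather than raw pixels, which is what distinguishes a smart camera from a plain image sensor.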

The aim of this Special Issue is to present some of the possibilities that smart cameras offer by exploring, at low level, new hardware and software solutions, as well as by incorporating into sensors, at high level, advanced artificial intelligence techniques for image understanding and decision-making.

Smart visual sensor applications can now be found in robotics, visual inspection and industry, environmental monitoring and agriculture, security and surveillance, autonomous driving and in many other fields.

Prof. Dr. Primo Zingaretti
Guest Editor

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once you are registered, click here to go to the submission form. Manuscripts can be submitted until the deadline. All papers will be peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Sensors is an international peer-reviewed open access semimonthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 2000 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • 2D/3D feature extraction at sensor level
  • Intelligent onboard processing
  • Embedded computer vision algorithms
  • Embedded computer vision architectures (e.g., DSP, FPGA, GPU)
  • Real-time image and video processing (e.g., deblurring, super-resolution)
  • Image and video understanding
  • Smart industrial cameras
  • Smart camera networks

Published Papers (4 papers)


Research

Open Access Article
Advances in Nuclear Radiation Sensing: Enabling 3-D Gamma-Ray Vision
Sensors 2019, 19(11), 2541; https://doi.org/10.3390/s19112541 - 04 Jun 2019
Cited by 1
Abstract
The enormous advances in sensing and data processing technologies, in combination with recent developments in nuclear radiation detection and imaging, enable unprecedented and “smarter” ways to detect, map, and visualize nuclear radiation. The recently developed concept of three-dimensional (3-D) Scene-data fusion now allows us to “see” nuclear radiation in three dimensions, in real time, and in a radionuclide-specific way. It is based on a multi-sensor instrument that maps a local scene and fuses the scene data with nuclear radiation data in 3-D while the instrument moves freely through the scene. This concept is agnostic of the deployment platform and of the specific radiation detection or imaging modality. We have demonstrated 3-D Scene-data fusion in a range of configurations and locations, such as the Fukushima Prefecture in Japan and Chernobyl in Ukraine, on unmanned and manned aerial and ground-based platforms. It provides new means for the detection, mapping, and visualization of radiological and nuclear materials, relevant for the safe and secure operation of nuclear and radiological facilities and for the response to accidental or intentional releases of radioactive materials, where a timely, accurate, and effective assessment is critical. In addition, the ability to visualize nuclear radiation in 3-D and in real time provides new means of communicating with the public and helps to overcome one of the major public concerns: not being able to “see” nuclear radiation.
(This article belongs to the Special Issue Smart Vision Sensors)
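The core idea of scene-data fusion, accumulating pose-stamped radiation measurements into a 3-D representation of the scene, can be illustrated with a deliberately simplified toy: a voxel grid that sums counts measured along a moving trajectory. This is an assumption-laden sketch, not the authors' actual pipeline; the function name and parameters are hypothetical.

```python
import numpy as np

def fuse_radiation_into_grid(positions, counts, grid_shape, voxel_size):
    """Accumulate radiation measurements taken along a freely moving
    trajectory into a 3-D voxel grid (toy stand-in for scene-data fusion).

    positions : (N, 3) sensor positions in metres
    counts    : (N,)   radiation counts measured at each position
    """
    grid = np.zeros(grid_shape)
    idx = np.floor(positions / voxel_size).astype(int)
    for (i, j, k), c in zip(idx, counts):
        # Ignore measurements that fall outside the mapped volume
        if all(0 <= v < s for v, s in zip((i, j, k), grid_shape)):
            grid[i, j, k] += c
    return grid

# Two measurements fall in the same voxel, a third in a neighbouring one
pos = np.array([[0.2, 0.2, 0.2], [0.3, 0.1, 0.4], [1.5, 0.0, 0.0]])
cts = np.array([10.0, 5.0, 2.0])
grid = fuse_radiation_into_grid(pos, cts, grid_shape=(2, 2, 2), voxel_size=1.0)
```

In the real system the scene geometry itself is reconstructed on the fly (e.g. from SLAM) and radiation is localized by imaging, so the fusion is far richer than a simple count histogram; the sketch only shows why a shared 3-D frame between scene and radiation data is the enabling ingredient.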

Open Access Article
Orientation-Constrained System for Lamp Detection in Buildings Based on Computer Vision
Sensors 2019, 19(7), 1516; https://doi.org/10.3390/s19071516 - 28 Mar 2019
Cited by 1
Abstract
Computer vision is used in this work to detect lighting elements in buildings with the goal of improving the accuracy of previous methods to provide a precise inventory of the location and state of lamps. Using the framework developed in our previous works, we introduce two new modifications to enhance the system: first, a constraint on the orientation of the detected poses in the optimization methods for both the initial and the refined estimates based on the geometric information of the building information modelling (BIM) model; second, an additional reprojection error filtering step to discard the erroneous poses introduced with the orientation restrictions, keeping the identification and localization errors low while greatly increasing the number of detections. These enhancements are tested in five different case studies with more than 30,000 images, with results showing improvements in the number of detections, the percentage of correct model and state identifications, and the distance between detections and reference positions.
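The reprojection-error filtering step mentioned in the abstract can be sketched as follows: project the model's 3-D points through each candidate pose and keep only poses whose mean pixel error against the observed 2-D detections is small. This is a minimal, hypothetical version (the function name, threshold, and data are illustrative, not taken from the paper).

```python
import numpy as np

def filter_poses_by_reprojection(points_3d, points_2d, K, poses, thresh_px=5.0):
    """Keep candidate poses (R, t) whose mean reprojection error is below
    `thresh_px`. K is the 3x3 camera intrinsic matrix."""
    kept = []
    for R, t in poses:
        cam = (R @ points_3d.T).T + t      # world -> camera frame
        proj = (K @ cam.T).T
        proj = proj[:, :2] / proj[:, 2:3]  # perspective divide -> pixels
        err = np.linalg.norm(proj - points_2d, axis=1).mean()
        if err < thresh_px:
            kept.append((R, t))
    return kept

# Toy data: two model points, one correct pose and one displaced pose
K = np.array([[100.0, 0.0, 50.0], [0.0, 100.0, 50.0], [0.0, 0.0, 1.0]])
pts3d = np.array([[0.0, 0.0, 2.0], [0.5, 0.5, 2.0]])
pts2d = np.array([[50.0, 50.0], [75.0, 75.0]])       # exact projections
good = (np.eye(3), np.zeros(3))
bad = (np.eye(3), np.array([1.0, 0.0, 0.0]))          # 1 m sideways error
kept = filter_poses_by_reprojection(pts3d, pts2d, K, [good, bad])
```

The displaced pose reprojects the points tens of pixels away from the observations and is discarded, which is exactly the role this filter plays after the orientation-constrained optimization loosens the pose search.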

Open Access Article
Smart Camera Aware Crowd Counting via Multiple Task Fractional Stride Deep Learning
Sensors 2019, 19(6), 1346; https://doi.org/10.3390/s19061346 - 18 Mar 2019
Abstract
Estimating the number of people in highly clustered crowd scenes is an extremely challenging task on account of serious occlusion and non-uniform distribution within a single crowd image. Traditional works on crowd counting use various CNN-like networks to regress a crowd density map and then predict the count from it. In contrast, we investigate a simple but effective deep learning model that concentrates on accurately predicting the density map while simultaneously training a density-level classifier over the same network, so that a smart camera can help prevent dangerous stampedes. First, a combination of atrous and fractional stride convolutional neural network (CAFN) is proposed to deliver larger receptive fields and to reduce the loss of detail during down-sampling by using dilated kernels. Second, an expanded architecture is offered that not only precisely regresses the density map but also classifies the density level of the crowd (MTCAFN, a multiple-task CAFN for both regression and classification). Third, experimental results on four datasets (ShanghaiTech Part A (MAE = 88.1) and Part B (MAE = 18.8), WorldExpo'10 (average MAE = 8.2), and UCF_CC_50 (MAE = 303.2)) show that the proposed method delivers effective performance.
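The key property of the atrous (dilated) kernels mentioned above is that they enlarge the receptive field without adding parameters. A minimal 1-D illustration (the paper uses 2-D convolutions inside a deep network; this standalone sketch only demonstrates the receptive-field arithmetic):

```python
import numpy as np

def dilated_conv1d(x, w, dilation=1):
    """'Atrous' convolution: sample the input with gaps of size `dilation`,
    so a kernel of length k covers dilation*(k-1)+1 input samples while
    keeping only k weights."""
    k = len(w)
    span = dilation * (k - 1) + 1          # receptive field in samples
    out = np.empty(len(x) - span + 1)      # 'valid' output length
    for i in range(len(out)):
        out[i] = sum(w[j] * x[i + j * dilation] for j in range(k))
    return out

def receptive_field(k, dilation):
    return dilation * (k - 1) + 1

# A 3-tap kernel with dilation 2 sees 5 input samples per output
y = dilated_conv1d(np.arange(10.0), np.array([1.0, 1.0, 1.0]), dilation=2)
```

Stacking such layers grows the receptive field quickly, which is why dilated kernels help a density-map regressor see large crowd structures without the detail loss that aggressive down-sampling would cause.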

Open Access Article
Automatic Rectification of the Hybrid Stereo Vision System
Sensors 2018, 18(10), 3355; https://doi.org/10.3390/s18103355 - 08 Oct 2018
Abstract
By combining the advantages of cameras with a 360-degree field of view and the high resolution of conventional cameras, a hybrid stereo vision system can be widely used in surveillance. Because the relative position of the two cameras is not constant over time, automatic rectification is highly desirable when adopting a hybrid stereo vision system in practice. In this work, we provide a method for rectifying the dynamic hybrid stereo vision system automatically. A perspective projection model is proposed to reduce the computational complexity of hybrid stereoscopic 3-D reconstruction. The rectification transformation is calculated by solving a nonlinear constrained optimization problem for a given set of corresponding point pairs. The experimental results demonstrate the accuracy and effectiveness of the proposed method.
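The final step, recovering a rectifying transformation from corresponding point pairs, can be illustrated with a deliberately simplified, hypothetical version that searches a single in-plane rotation angle over a grid. The paper solves a full nonlinear constrained optimization over the hybrid geometry; this sketch only shows the "minimize misalignment over correspondences" idea.

```python
import numpy as np

def rectify_rotation(pts_a, pts_b, angles=np.linspace(-np.pi, np.pi, 3601)):
    """Find the in-plane rotation of camera B's points that best aligns
    them with camera A's points: argmin_theta sum ||R(theta) b_i - a_i||^2,
    searched over a 0.1-degree grid (a toy stand-in for the real solver)."""
    best_theta, best_err = 0.0, np.inf
    for th in angles:
        c, s = np.cos(th), np.sin(th)
        R = np.array([[c, -s], [s, c]])
        err = np.sum((pts_b @ R.T - pts_a) ** 2)
        if err < best_err:
            best_theta, best_err = th, err
    return best_theta

# Synthetic correspondences: camera B's points are camera A's points
# rotated by -30 degrees, so the rectifying rotation is +30 degrees
theta0 = np.pi / 6
R0 = np.array([[np.cos(theta0), -np.sin(theta0)],
               [np.sin(theta0),  np.cos(theta0)]])
pts_a = np.array([[1.0, 0.0], [0.0, 2.0], [3.0, 1.0]])
pts_b = pts_a @ R0                    # each row b_i = R(-theta0) @ a_i
theta_hat = rectify_rotation(pts_a, pts_b)
```

A grid search is used only because it is transparent; a real implementation would parameterize the full relative pose and minimize the same residual with a constrained nonlinear solver.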
