Special Issue "Camera as a Smart-Sensor (CaaSS)"

A special issue of Sensors (ISSN 1424-8220). This special issue belongs to the section "Intelligent Sensors".

Deadline for manuscript submissions: 31 May 2020.

Special Issue Editors

Prof. Dr. Peter Corcoran
Guest Editor
College of Engineering & Informatics at National University of Ireland Galway
Interests: smart-imaging/advanced digital imaging solutions; AI, deep learning, data augmentation and data generation (GANs); computer vision and computational imaging and emerging Edge-AI implementations
Prof. Dr. Saraju P. Mohanty
Guest Editor
Computer Science and Engineering, University of North Texas
Interests: smart electronic systems; security- and energy-aware cyber-physical systems (CPS); IoMT-based approaches for smart healthcare; IoT-enabled consumer electronics for smart cities

Special Issue Information

Dear Colleagues,

Digital camera technology has evolved over the past three decades to deliver images and video of remarkable quality, while mass-market adoption of digital imaging has driven down the costs of image sensors and their associated optical systems and image signal processors. In parallel, advances in computational imaging and deep learning have led to a new generation of advanced computer vision algorithms. Emerging edge-AI technologies will allow these sophisticated algorithms to be integrated directly with the sensing and optical systems, enabling a new generation of smart vision sensors. An important aspect of such sensors is that they can meet emerging requirements for managing and curating data privacy, and they can also reduce energy consumption by eliminating the need to send large volumes of raw image and video data to data centers for postprocessing and cloud-based storage.

This Special Issue welcomes new research contributions and applied research on new synergies across these fields that enable a new generation of ‘Camera as a Smart Sensor’ technologies. Review articles that are well-aligned with this Special Issue theme will also be considered.

Suitable topics include:

  • Novel combinations of commodity camera or image sensor technologies with edge-AI or embedded computational imaging algorithms;
  • Novel uses of camera or image sensors for new sensing applications;
  • Advanced deep learning techniques to enable new sensing with commodity cameras or sensors;
  • New nonvisible sensing technologies that leverage advanced computational or edge-AI algorithms;
  • Optical design aspects of CaaSS;
  • Electronic design aspects of CaaSS, including new ISP hardware architectures;
  • CaaSS research targeted at privacy management or energy optimization;
  • Large-scale deployments or novel products or commercial systems that leverage CaaSS in new use-cases or industry applications (e.g., smart city deployments, checkout-free shops, quality assurance and inspection lines, domestic service robotics).

Prof. Dr. Peter Corcoran
Prof. Dr. Saraju P. Mohanty
Guest Editors

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once you are registered, click here to go to the submission form. Manuscripts can be submitted until the deadline. All papers will be peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Sensors is an international peer-reviewed open access semimonthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 2000 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • Digital imaging
  • Digital camera
  • CMOS sensor
  • Embedded computer vision
  • Edge-AI
  • Camera as a Smart Sensor (CaaSS)

Published Papers (2 papers)

Order results
Result details
Select all
Export citation of selected articles as:

Research

Open Access Article
A Unified Deep Framework for Joint 3D Pose Estimation and Action Recognition from a Single RGB Camera
Sensors 2020, 20(7), 1825; https://doi.org/10.3390/s20071825 - 25 Mar 2020
Abstract
We present a deep learning-based multitask framework for joint 3D human pose estimation and action recognition from a single RGB camera. The approach proceeds in two stages. In the first, a real-time 2D pose detector is run to determine the precise pixel locations of important keypoints of the human body, and a two-stream deep neural network is then designed and trained to map the detected 2D keypoints to 3D poses. In the second stage, the Efficient Neural Architecture Search (ENAS) algorithm is deployed to find an optimal network architecture, which is used to model the spatio-temporal evolution of the estimated 3D poses via an image-based intermediate representation and to perform action recognition. Experiments on the Human3.6M, MSR Action3D, and SBU Kinect Interaction datasets verify the effectiveness of the proposed method on the targeted tasks. Moreover, we show that the method requires a low computational budget for training and inference. In particular, the experimental results show that, using a monocular RGB sensor, we can develop a 3D pose estimation and human action recognition approach that reaches the performance of RGB-depth sensors. This opens up many opportunities for leveraging RGB cameras (which are much cheaper than depth cameras and extensively deployed in private and public places) to build intelligent recognition systems.
(This article belongs to the Special Issue Camera as a Smart-Sensor (CaaSS))
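As a rough, hypothetical illustration of the two-stage pipeline this abstract describes, the PyTorch sketch below lifts detected 2D keypoints to 3D poses and then classifies an action from the pose sequence. The joint count, layer sizes, and the plain GRU head are illustrative assumptions only; the paper itself uses a two-stream lifting network and an ENAS-discovered recognition architecture, neither of which is reproduced here.

```python
# Minimal sketch (not the authors' code) of the two-stage idea:
# stage 1 lifts per-frame 2D keypoints to 3D, stage 2 classifies the sequence.
import torch
import torch.nn as nn

NUM_JOINTS = 17  # assumed COCO-style joint set; the paper's may differ

class Lifter2Dto3D(nn.Module):
    """Maps one frame of 2D keypoints (x, y) to 3D joint positions."""
    def __init__(self, hidden=1024):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(NUM_JOINTS * 2, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, NUM_JOINTS * 3),
        )

    def forward(self, kp2d):                    # kp2d: (frames, NUM_JOINTS*2)
        return self.net(kp2d).view(-1, NUM_JOINTS, 3)

class ActionHead(nn.Module):
    """Classifies an action from a 3D pose sequence (a plain GRU standing in
    for the ENAS-discovered architecture used in the paper)."""
    def __init__(self, num_actions=10):
        super().__init__()
        self.rnn = nn.GRU(NUM_JOINTS * 3, 256, batch_first=True)
        self.fc = nn.Linear(256, num_actions)

    def forward(self, poses):                   # poses: (batch, frames, NUM_JOINTS*3)
        _, h = self.rnn(poses)
        return self.fc(h[-1])

# Toy forward pass: 8 frames of detected 2D keypoints for one clip.
kp2d_seq = torch.randn(8, NUM_JOINTS * 2)
poses3d = Lifter2Dto3D()(kp2d_seq)                       # stage 1: lift
logits = ActionHead()(poses3d.flatten(1).unsqueeze(0))   # stage 2: recognize
print(logits.shape)                                      # torch.Size([1, 10])
```

In the paper, the recognition network is found automatically by ENAS rather than fixed by hand as it is in this sketch.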

Open Access Article
SLAM-Based Self-Calibration of a Binocular Stereo Vision Rig in Real-Time
Sensors 2020, 20(3), 621; https://doi.org/10.3390/s20030621 - 22 Jan 2020
Abstract
The calibration of a binocular stereo vision (BSV) rig is critical for its practical application. However, most existing calibration methods are based on manual, off-line algorithms that require specific reference targets or patterns. In this paper, we propose a novel simultaneous localization and mapping (SLAM)-based self-calibration method designed to achieve real-time, automatic, and accurate calibration of a BSV rig's extrinsic parameters in a short period, without auxiliary equipment or special calibration markers, assuming the intrinsic parameters of the left and right cameras are known in advance. The main contribution of this paper is the use of a SLAM algorithm as the core tool of the calibration method. The method consists of two parts: SLAM-based construction of a 3D scene point map and extrinsic parameter calibration. In the first part, the SLAM system constructs a 3D feature point map of the natural environment, which is used as the calibration area map; to improve the efficiency of calibration, a lightweight, real-time visual SLAM is built. In the second part, the extrinsic parameters are calibrated using the 3D scene point map created by the SLAM system. Finally, field experiments are performed to evaluate the feasibility, repeatability, and efficiency of our self-calibration method. The experimental data show that the average absolute errors of the Euler angles and translation vectors obtained by our method, relative to the reference values obtained by Zhang's calibration method, do not exceed 0.5° and 2 mm, respectively. The distribution range of the most widely spread parameter among the Euler angles is less than 0.2°, while that among the translation vectors does not exceed 2.15 mm. Under a general texture scene and the normal driving speed of the mobile robot, the calibration time can generally be kept within 10 s. These results demonstrate that the proposed method is reliable and of practical value.
(This article belongs to the Special Issue Camera as a Smart-Sensor (CaaSS))
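To make the core idea concrete, the sketch below is a minimal, fully synthetic setup (not the paper's implementation) showing how a stereo rig's extrinsics can be recovered from a shared 3D point map: solve the perspective-n-point (PnP) problem for each camera against the map, then compose the two poses. All numbers here (intrinsics, baseline, point cloud) are invented for illustration.

```python
# Minimal sketch of extrinsic self-calibration from a shared 3D map.
# Assumed setup: known identical pinhole intrinsics, zero distortion.
import numpy as np
import cv2

rng = np.random.default_rng(0)

# Known intrinsics (assumed values).
K = np.array([[700.0, 0.0, 320.0],
              [0.0, 700.0, 240.0],
              [0.0,   0.0,   1.0]])
dist = np.zeros(5)

# Synthetic stand-in for the SLAM map: 3D feature points in the world frame.
pts3d = rng.uniform([-2.0, -1.0, 4.0], [2.0, 1.0, 8.0], size=(60, 3))

# Ground-truth rig: left camera at the world origin; right camera offset by
# a 12 cm baseline with a slight yaw (these are the unknowns to recover).
rvec_l, tvec_l = np.zeros(3), np.zeros(3)
rvec_r, tvec_r = np.array([0.0, 0.02, 0.0]), np.array([-0.12, 0.0, 0.0])

def project(rvec, tvec):
    """Project the map points into a camera with pose (rvec, tvec)."""
    return cv2.projectPoints(pts3d, rvec, tvec, K, dist)[0].reshape(-1, 2)

obs_l, obs_r = project(rvec_l, tvec_l), project(rvec_r, tvec_r)

# Calibration step: PnP of each camera against the shared 3D map.
_, rl, tl = cv2.solvePnP(pts3d, obs_l, K, dist)
_, rr, tr = cv2.solvePnP(pts3d, obs_r, K, dist)

# Compose the left-to-right extrinsics: x_r = R_rel @ x_l + t_rel.
R_l, R_r = cv2.Rodrigues(rl)[0], cv2.Rodrigues(rr)[0]
R_rel = R_r @ R_l.T
t_rel = tr.ravel() - R_rel @ tl.ravel()
print("recovered baseline (m):", t_rel)   # approx. [-0.12, 0, 0]
```

In the paper, the 3D map points come from a lightweight real-time visual SLAM running in a natural environment rather than being generated synthetically as they are here.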