
Smart Image Sensors II

A special issue of Sensors (ISSN 1424-8220). This special issue belongs to the section "Physical Sensors".

Deadline for manuscript submissions: closed (30 June 2022) | Viewed by 2253

Special Issue Editors


Prof. Dr. Christophe Bobda
Guest Editor
Department of Electrical and Computer Engineering, University of Florida, Gainesville, FL 32611-6200, USA
Interests: computer architecture; embedded systems; embedded vision intelligence; cybersecurity; FPGA and high-performance computing

Prof. Dr. Marilyn Wolf
Guest Editor
Department of Computer Science and Engineering, University of Nebraska-Lincoln, 1400 R Street, Lincoln, NE 68588, USA
Interests: cyberphysical systems; Internet of Things; embedded computing; embedded computer vision; VLSI systems

Prof. Dr. Saibal Mukhopadhyay
Guest Editor
School of Electrical and Computer Engineering, Georgia Institute of Technology, North Avenue, Atlanta, GA 30332, USA
Interests: low-power, variation-tolerant, and reliable VLSI systems; device/circuit-level modeling and estimation of power, yield, and reliability; technology-circuit co-design methodologies; self-adaptive systems with on-chip sensing and repair techniques; memory design for VLSI applications; ultralow-power and fault-tolerant nanoelectronics: technology, circuits, and computing platforms

Special Issue Information

Dear Colleagues,

Cameras are pervasively used in a wide range of applications, including monitoring and surveillance, crowd analysis, traffic control, precision agriculture, remote sensing, and manufacturing. While increasing the resolution of image sensors allows capturing the tiny details of remote landscapes and events, the fast-growing amount of data generated by modern image sensors is outpacing our capability to transport, store, and extract relevant content. Comparable to the roughly 72 gigabytes per second that human vision is estimated to send to the brain, the amount of data produced by such systems in the future poses two major challenges: transportation and computation. First, current and future communication systems, even with the most advanced video compression architectures, will not be able to provide the bandwidth required to transport such huge data streams. Second, the extraction of relevant information from large amounts of noisy video data poses substantial challenges that can be overcome only with highly specialized computers. While a posteriori, non-real-time video analysis may be sufficient for certain groups of applications, it does not suffice for applications such as driving assistance, surveillance, or onboard remote sensing using cameras on drones, which require near-real-time video and image analysis. The goal of this Special Issue is to explore ongoing studies aimed at tackling the Big Data challenge in future imaging applications by pushing computation closer to image sensors and exploiting the massively parallel nature of sensor arrays to filter out noisy data early in the capture process, providing only structured data to high-level processing and knowledge inference stages.

We are interested in vertically integrated technologies, such as focal plane sensor processors (FPSPs) and vision sensors, that incorporate massively parallel and possibly hierarchical architectures in the sensor, along with artificial intelligence algorithms, in order to infer the scene directly at the data source. The following topics are of interest (a brief software sketch of the early-filtering idea appears after the list):

  • Advanced image sensor architectures;
  • In-sensor computation for image processing applications;
  • Integrated learning and knowledge inference in image sensors;
  • Technology and fabrication.
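
To make the early-filtering idea concrete, here is a minimal Python sketch, illustrative only and not any specific FPSP design: a pixel-parallel temporal-difference stage drops unchanged pixels so that only a small fraction of the data leaves the focal plane. The function name and threshold are hypothetical.

```python
import numpy as np

def in_sensor_filter(prev_frame, curr_frame, threshold=12):
    """Toy model of pixel-parallel early filtering: compute a temporal
    difference at every pixel and forward only the pixels that changed,
    mimicking how an in-sensor stage can discard redundant data before
    it ever leaves the focal plane."""
    diff = np.abs(curr_frame.astype(np.int16) - prev_frame.astype(np.int16))
    mask = diff > threshold                 # per-pixel activity map
    sparse = np.where(mask, curr_frame, 0)  # only "interesting" pixels survive
    return sparse, mask, mask.mean()        # fraction of data sent downstream

# Two synthetic 8-bit frames that differ only in a small moving patch.
rng = np.random.default_rng(0)
prev = rng.integers(0, 255, (240, 320), dtype=np.uint8)
curr = prev.copy()
curr[100:120, 150:180] = 255                # simulated moving object
_, _, kept = in_sensor_filter(prev, curr)
print(f"data forwarded downstream: {kept:.2%} of pixels")
```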

Prof. Dr. Christophe Bobda
Prof. Dr. Marilyn Wolf
Prof. Dr. Saibal Mukhopadhyay
Guest Editors

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once registered, you can access the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Sensors is an international peer-reviewed open access semimonthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 2600 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.


Published Papers (9 papers)


Research

24 pages, 1436 KiB  
Article
Motion-Based Object Location on a Smart Image Sensor Using On-Pixel Memory
by Wladimir Valenzuela, Antonio Saavedra, Payman Zarkesh-Ha and Miguel Figueroa
Sensors 2022, 22(17), 6538; https://doi.org/10.3390/s22176538 - 30 Aug 2022
Cited by 4 | Viewed by 1720
Abstract
Object location is a crucial computer vision task, often used as a stage prior to object classification. Object-location algorithms require high computational and memory resources, which poses a difficult challenge for portable and low-power devices, even when the algorithm is implemented in dedicated digital hardware. Moving part of the computation to the imager can reduce the memory requirements of the digital post-processor and exploit the parallelism available in the algorithm. This paper presents the architecture of a Smart Imaging Sensor (SIS) that performs object location using pixel-level parallelism. The SIS is based on a custom smart pixel, capable of computing frame differences in the analog domain, and a digital coprocessor that performs morphological operations and connected-component analysis to determine the bounding boxes of the detected objects. The smart-pixel array implements on-pixel temporal difference computation using analog memories to detect motion between consecutive frames. Our SIS can operate in two modes: (1) as a conventional image sensor or (2) as a smart sensor that delivers a binary image highlighting the pixels in which movement was detected between consecutive frames, together with the object bounding boxes. In this paper, we present the design of the smart pixel and evaluate its performance using post-layout parasitic extraction on a 0.35 µm mixed-signal CMOS process. With a pixel pitch of 32 µm × 32 µm, we achieved a fill factor of 28%. To evaluate the scalability of the design, we ported the layout to a 0.18 µm process, achieving a fill factor of 74%. On an array of 320 × 240 smart pixels, the circuit operates at a maximum frame rate of 3846 frames per second. The digital coprocessor was implemented and validated on a Xilinx Artix-7 XC7A35T field-programmable gate array; it runs at 125 MHz, locates objects in a video frame in 0.614 µs, and consumes 58 mW.
(This article belongs to the Special Issue Smart Image Sensors II)
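
The pipeline this abstract describes (in-pixel frame differencing, then digital morphology and connected components) can be approximated in software. The following NumPy/SciPy version is a hedged reconstruction for illustration, not the authors' implementation; the threshold and structuring element are assumptions.

```python
import numpy as np
from scipy import ndimage

def locate_moving_objects(prev, curr, threshold=20):
    """Software sketch of the SIS pipeline: temporal difference (computed
    in-pixel in the paper), thresholding into a binary motion image,
    morphological cleanup, and connected-component labeling to extract
    object bounding boxes (the digital coprocessor's job)."""
    diff = np.abs(curr.astype(np.int16) - prev.astype(np.int16))
    motion = diff > threshold                                 # binary motion image
    motion = ndimage.binary_opening(motion, np.ones((3, 3)))  # remove speckle noise
    labels, _ = ndimage.label(motion)                         # connected components
    return [(s[1].start, s[0].start, s[1].stop, s[0].stop)    # (x1, y1, x2, y2)
            for s in ndimage.find_objects(labels)]

# Synthetic 320 x 240 frames with one moving block.
prev = np.zeros((240, 320), dtype=np.uint8)
curr = prev.copy()
curr[60:90, 100:140] = 200
print(locate_moving_objects(prev, curr))   # -> [(100, 60, 140, 90)]
```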

11 pages, 5004 KiB  
Communication
On-CMOS Image Sensor Processing for Lane Detection
by Soyeon Lee, Bohyeok Jeong, Keunyeol Park, Minkyu Song and Soo Youn Kim
Sensors 2021, 21(11), 3713; https://doi.org/10.3390/s21113713 - 26 May 2021
Cited by 8 | Viewed by 2759
Abstract
This paper presents a CMOS image sensor (CIS) with built-in lane detection computing circuits for automotive applications. We propose on-CIS processing with an edge detection mask used in the readout circuit of the conventional CIS structure for high-speed lane detection. Furthermore, the edge detection mask can detect the edges of slanting lanes to improve accuracy. A prototype of the proposed CIS was fabricated using a 110 nm CIS process. It has an image resolution of 160 (H) × 120 (V) and a frame rate of 113 frames per second, and it occupies an area of 5900 μm × 5240 μm. A comparison of its lane detection accuracy with that of existing edge detection algorithms shows that it achieves an acceptable accuracy. Moreover, the total power consumption of the proposed CIS is 9.7 mW at pixel, analog, and digital supply voltages of 3.3, 3.3, and 1.5 V, respectively.
(This article belongs to the Special Issue Smart Image Sensors)
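
As a software analogue of the in-readout edge detection mask, the sketch below applies a small gradient kernel per pixel, roughly what a readout-stage circuit computes; the kernel choice and threshold are assumptions, not the paper's circuit.

```python
import numpy as np
from scipy.signal import convolve2d

def edge_mask_readout(frame, threshold=40):
    """Toy analogue of edge detection in the CIS readout path: a small
    gradient kernel responds to intensity transitions such as lane
    markings; combining both orientations also catches slanted lanes."""
    kernel = np.array([[-1, 0, 1],
                       [-2, 0, 2],
                       [-1, 0, 1]])                # Sobel-style gradient mask
    gx = convolve2d(frame.astype(np.int32), kernel, mode="same")
    gy = convolve2d(frame.astype(np.int32), kernel.T, mode="same")
    return (np.abs(gx) + np.abs(gy)) > threshold   # binary edge map

# A 160 x 120 frame (the sensor's resolution) with one bright lane stripe.
frame = np.zeros((120, 160), dtype=np.uint8)
frame[:, 70:74] = 255
print(edge_mask_readout(frame).sum(), "edge pixels detected")
```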

20 pages, 9702 KiB  
Article
A Task-Driven Feedback Imager with Uncertainty Driven Hybrid Control
by Burhan A. Mudassar, Priyabrata Saha, Marilyn Wolf and Saibal Mukhopadhyay
Sensors 2021, 21(8), 2610; https://doi.org/10.3390/s21082610 - 08 Apr 2021
Cited by 2 | Viewed by 1801
Abstract
Deep Neural Network (DNN) systems tend to produce overconfident or uncalibrated outputs. This poses problems for active sensor systems that have a DNN module as the main feedback controller. In this paper, we study a closed-loop feedback smart camera through the lens of uncertainty estimation. The uncertainty of the task output is used to characterize and facilitate the feedback operation. The DNN uncertainty in the feedback system is estimated and characterized using both sampling-based and non-sampling-based methods. In addition, we propose a closed-loop control that incorporates uncertainty information when providing feedback. We show two modes of control, one that prioritizes false positives and one that prioritizes false negatives, and a hybrid approach combining the two. We apply the uncertainty-driven control to the tasks of object detection, object tracking, and action detection. The hybrid system improves object detection and tracking accuracy on the CAMEL dataset by 1.1% each. For the action detection task, the hybrid approach improves accuracy by 1.4%.
(This article belongs to the Special Issue Smart Image Sensors)
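
As a minimal sketch of the uncertainty-driven control loop, assuming sampling-based uncertainty (e.g., several stochastic forward passes) and made-up thresholds: low-entropy outputs are gated strictly (fewer false positives) while high-entropy outputs fall back to a permissive gate (fewer false negatives), which is one plausible reading of the hybrid scheme, not the paper's exact controller.

```python
import numpy as np

def predictive_entropy(prob_samples):
    """Sampling-based uncertainty: average class probabilities over several
    stochastic forward passes and take the entropy of the mean prediction."""
    mean_p = prob_samples.mean(axis=0)
    return -np.sum(mean_p * np.log(mean_p + 1e-12))

def hybrid_accept(prob_samples, fp_thresh=0.9, fn_thresh=0.5, max_entropy=0.8):
    """Hybrid control sketch: a strict confidence gate when uncertainty is
    low (prioritizing false positives) and a permissive gate when it is
    high (prioritizing false negatives)."""
    confidence = prob_samples.mean(axis=0).max()
    gate = fp_thresh if predictive_entropy(prob_samples) < max_entropy else fn_thresh
    return confidence >= gate

# Five hypothetical stochastic passes over a 3-class detection score.
rng = np.random.default_rng(1)
samples = rng.dirichlet([8, 1, 1], size=5)   # fairly confident class-0 outputs
print(hybrid_accept(samples))
```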

20 pages, 3177 KiB  
Article
Towards an Efficient CNN Inference Architecture Enabling In-Sensor Processing
by Md Jubaer Hossain Pantho, Pankaj Bhowmik and Christophe Bobda
Sensors 2021, 21(6), 1955; https://doi.org/10.3390/s21061955 - 10 Mar 2021
Cited by 9 | Viewed by 2986
Abstract
The astounding development of optical sensing and imaging technology, coupled with impressive improvements in machine learning algorithms, has increased our ability to understand and extract information from scenic events. In most cases, convolutional neural networks (CNNs) are adopted to infer knowledge, owing to their success in automation, surveillance, and many other application domains. However, the overwhelming computation demand of convolution operations has somewhat limited their use in remote sensing edge devices. On these platforms, real-time processing remains challenging due to tight constraints on resources and power, and the transfer and processing of non-relevant image pixels acts as a bottleneck on the entire system. This bottleneck can be overcome by exploiting the high bandwidth available at the sensor interface and designing a CNN inference architecture near the sensor. This paper presents an attention-based pixel processing architecture to facilitate CNN inference near the image sensor. We propose an efficient computation method that reduces dynamic power by decreasing the overall computation of the convolution operations. The proposed method reduces redundancies using a hierarchical optimization approach: it minimizes power consumption for convolution operations by exploiting the spatio-temporal redundancies found in the incoming feature maps and performs computations only on selected regions based on their relevance score. The proposed design addresses problems related to mapping computations onto an array of processing elements (PEs) and introduces a suitable network structure for communication. The PEs are highly optimized to provide low latency and power for CNN applications. While designing the model, we exploit concepts from biological vision systems to reduce computation and energy. We prototype the model on a Virtex UltraScale+ FPGA and implement it as an application-specific integrated circuit (ASIC) using the TSMC 90 nm technology library. The results suggest that the proposed architecture significantly reduces dynamic power consumption and achieves a high speedup, surpassing the computational capabilities of existing embedded processors.
(This article belongs to the Special Issue Smart Image Sensors)
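
A hedged NumPy sketch of the relevance-driven computation skipping described above: each tile of a feature map gets a relevance score, and the convolution runs only on tiles above a threshold. Tile size, scoring rule, and threshold are invented for illustration, and cross-tile halos are ignored for brevity.

```python
import numpy as np
from scipy.signal import convolve2d

def selective_conv(fmap, kernel, tile=8, rel_thresh=4.0):
    """Attention-based computation skipping: score each tile of the input
    feature map (here by mean absolute activation), convolve only the
    relevant tiles, and leave the rest zero, saving the multiply-accumulate
    operations that dominate dynamic power."""
    out = np.zeros_like(fmap, dtype=np.float64)
    computed, total = 0, 0
    for i in range(0, fmap.shape[0], tile):
        for j in range(0, fmap.shape[1], tile):
            total += 1
            patch = fmap[i:i+tile, j:j+tile]
            if np.abs(patch).mean() > rel_thresh:   # relevance score
                out[i:i+tile, j:j+tile] = convolve2d(patch, kernel, mode="same")
                computed += 1
    print(f"computed {computed}/{total} tiles")
    return out

fmap = np.zeros((32, 32))
fmap[8:16, 8:16] = 10.0                             # one active region
_ = selective_conv(fmap, np.ones((3, 3)) / 9.0)     # -> computed 1/16 tiles
```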

20 pages, 4329 KiB  
Article
HARP: Hierarchical Attention Oriented Region-Based Processing for High-Performance Computation in Vision Sensor
by Pankaj Bhowmik, Md Jubaer Hossain Pantho and Christophe Bobda
Sensors 2021, 21(5), 1757; https://doi.org/10.3390/s21051757 - 04 Mar 2021
Cited by 7 | Viewed by 2206
Abstract
With the rapid advancement of complementary metal-oxide-semiconductor (CMOS) image sensors, cameras are widely adopted for their high image quality, while the computation of vision applications is offloaded to the cloud. This raises concerns for time-critical applications such as autonomous driving, surveillance, and defense systems, since moving pixels off the sensor's focal plane is expensive. This paper presents a hardware architecture for smart cameras that identifies the salient regions of an image frame and then performs high-level inference computation for sensor-level information creation, instead of transporting raw pixels. A visual-attention-oriented computational strategy filters out a significant amount of the redundant spatiotemporal data collected at the focal plane, and a computationally expensive learning model is then applied only to the interesting regions of the image. The hierarchical processing in the pixels' data path demonstrates a bottom-up architecture with massive parallelism and delivers high throughput by exploiting the large bandwidth available at the image source. We prototype the model on a field-programmable gate array (FPGA) and as an application-specific integrated circuit (ASIC) for integration with a pixel-parallel image sensor. The experimental results show that our approach achieves a significant speedup and, in certain conditions, up to 45% higher energy efficiency with attention-oriented processing. Although attention-oriented processing incurs an area overhead, the gains in energy consumption, latency, and memory utilization outweigh that limitation.
(This article belongs to the Special Issue Smart Image Sensors)
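
The attention-then-inference split can be sketched as a two-level filter: a cheap per-tile test at the "focal plane" followed by expensive processing only on surviving tiles. The tile size and both thresholds below are hypothetical, not HARP's parameters.

```python
import numpy as np

def salient_tiles(prev, curr, tile=16, motion_thresh=8, contrast_thresh=20):
    """Bottom-up attention sketch: a cheap per-tile test combining temporal
    change (motion) and spatial contrast keeps only the tiles worth passing
    to an expensive high-level inference stage."""
    keep = []
    for i in range(0, curr.shape[0], tile):
        for j in range(0, curr.shape[1], tile):
            p = curr[i:i+tile, j:j+tile].astype(np.int16)
            q = prev[i:i+tile, j:j+tile].astype(np.int16)
            moving = np.abs(p - q).mean() > motion_thresh   # temporal redundancy
            textured = p.std() > contrast_thresh            # spatial redundancy
            if moving and textured:
                keep.append((i, j))
    return keep   # only these tile origins reach the learning model

rng = np.random.default_rng(2)
prev = rng.integers(0, 50, (128, 128), dtype=np.uint8)
curr = prev.copy()
curr[32:64, 32:64] = rng.integers(0, 255, (32, 32), dtype=np.uint8)
print(salient_tiles(prev, curr))   # the four tiles covering the changed block
```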

18 pages, 7766 KiB  
Article
Resolution Improvement of Light Field Imaging via a Nematic Liquid Crystal Microlens with Added Multi-Walled Carbon Nanotubes
by Hui Li, Yi Yu, Jing Peng, Yuntao Wu and Yanduo Zhang
Sensors 2020, 20(19), 5557; https://doi.org/10.3390/s20195557 - 28 Sep 2020
Cited by 8 | Viewed by 1870
Abstract
A relatively simple method to improve the image resolution of light field imaging, based on a liquid crystal (LC) microlens doped with multi-walled carbon nanotubes (MWCNTs), was developed and evaluated. Doping the LC with nanoparticles enhanced its electro-optical characteristics, leading to a shorter response time than that of a pure LC microlens. To make maximum use of the proposed LC microlens, a method combining aperiodicity extraction with a weighted-average algorithm was adopted to realize high-resolution light field imaging. The proposed aperiodicity extraction method effectively improves the resolution of the view-angle image. By synthesizing the full-resolution image captured at 0 Vrms with the view-angle image extracted from the light field at 2.0 Vrms, the final high-resolution light field image is obtained in a short time using the weighted-average algorithm. In this way, the common problem of low resolution in light field imaging can be mitigated. The proposed method agreed well with our experimental results and is in line with the trend toward smart imaging sensors that combine algorithms with hardware.
(This article belongs to the Special Issue Smart Image Sensors)
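
The final fusion step described here, synthesizing the full-resolution image captured at 0 Vrms with the view-angle image extracted at 2.0 Vrms, reduces to a weighted average once the two images share a grid. The sketch below assumes nearest-neighbour upsampling and an invented weight; it is not the paper's algorithm.

```python
import numpy as np

def fuse_light_field(full_res_0v, view_angle_2v, w=0.6):
    """Weighted-average fusion sketch: upsample the low-resolution
    view-angle image to the full sensor grid, then blend it with the
    full-resolution image to synthesize the high-resolution result."""
    fy = full_res_0v.shape[0] // view_angle_2v.shape[0]
    fx = full_res_0v.shape[1] // view_angle_2v.shape[1]
    upsampled = np.kron(view_angle_2v, np.ones((fy, fx)))   # nearest-neighbour
    return w * full_res_0v + (1 - w) * upsampled

full = np.random.rand(240, 320)   # hypothetical full-resolution image at 0 Vrms
view = np.random.rand(60, 80)     # hypothetical 4x-downsampled view-angle image
print(fuse_light_field(full, view).shape)   # -> (240, 320)
```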

15 pages, 4861 KiB  
Article
Depth-of-Field-Extended Plenoptic Camera Based on Tunable Multi-Focus Liquid-Crystal Microlens Array
by Mingce Chen, Wenda He, Dong Wei, Chai Hu, Jiashuo Shi, Xinyu Zhang, Haiwei Wang and Changsheng Xie
Sensors 2020, 20(15), 4142; https://doi.org/10.3390/s20154142 - 25 Jul 2020
Cited by 20 | Viewed by 3875
Abstract
Plenoptic cameras have received wide research interest because they can record the 4D plenoptic function, or radiance, including the radiation power and ray direction. One of their important applications is digital refocusing, which can produce 2D images focused at different depths. Digital refocusing over a wide range requires a large depth of field (DOF), but there are fundamental optical limits to this. In this paper, we propose a plenoptic camera with an extended DOF that integrates a main lens, a tunable multi-focus liquid-crystal microlens array (TMF-LCMLA), and a complementary metal-oxide-semiconductor (CMOS) sensor. The TMF-LCMLA was fabricated by traditional photolithography and standard microelectronic techniques, and its optical characteristics, including interference patterns, focal lengths, and point spread functions (PSFs), were experimentally analyzed. Experiments demonstrated that the proposed plenoptic camera has a wider digital refocusing range than a plenoptic camera based on a conventional liquid-crystal microlens array (LCMLA), which has only one focal length at a given voltage; this is equivalent to an extension of the DOF. In addition, it offers a 2D/3D switchable function that is not available in conventional plenoptic cameras.
(This article belongs to the Special Issue Smart Image Sensors)
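
Digital refocusing, the application highlighted above, is classically done by shifting each sub-aperture (microlens) view in proportion to its offset and averaging. The sketch below is that generic shift-and-add method, not the authors' pipeline, with integer shifts for simplicity.

```python
import numpy as np

def refocus(views, alpha):
    """Shift-and-add refocusing: views[u, v] is the 2D image behind the
    microlens at offset (u, v); shifting each view by alpha * (u, v) and
    averaging synthesizes an image focused at the depth selected by alpha."""
    U, V, H, W = views.shape
    out = np.zeros((H, W))
    for u in range(U):
        for v in range(V):
            du = int(round(alpha * (u - U // 2)))
            dv = int(round(alpha * (v - V // 2)))
            out += np.roll(views[u, v], shift=(du, dv), axis=(0, 1))
    return out / (U * V)

# A hypothetical 5 x 5 grid of 64 x 64 sub-aperture views.
views = np.random.rand(5, 5, 64, 64)
focal_stack = [refocus(views, a) for a in (-1.0, 0.0, 1.0)]  # three depths
print(focal_stack[0].shape)   # -> (64, 64)
```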

16 pages, 64462 KiB  
Article
Development of Reliable, High Performance WLCSP for BSI CMOS Image Sensor for Automotive Application
by Tianshen Zhou, Shuying Ma, Daquan Yu, Ming Li and Tao Hang
Sensors 2020, 20(15), 4077; https://doi.org/10.3390/s20154077 - 22 Jul 2020
Cited by 8 | Viewed by 10933
Abstract
To meet the urgent market demand for small package size and high reliability in automotive CMOS image sensor (CIS) applications, wafer-level chip-scale packaging (WLCSP) technology using through-silicon vias (TSVs) needs to be developed to replace current chip-on-board (COB) packages. In this paper, a WLCSP with a size of 5.82 mm × 5.22 mm and a thickness of 850 μm was developed for a backside-illuminated (BSI) CIS chip fabricated in a 65 nm node with a size of 5.8 mm × 5.2 mm. The packaged product has 1392 × 976 pixels, a frame rate of up to 60 frames per second, and a dynamic range of more than 120 dB. The structure of the 3D package was designed, and the key fabrication processes on a 12-inch wafer were investigated. More than 98% yield and excellent optical performance of the CIS package were achieved after process optimization. The final packages were qualified to AEC-Q100 Grade 2.
(This article belongs to the Special Issue Smart Image Sensors)

14 pages, 5648 KiB  
Article
Design of an Always-On Image Sensor Using an Analog Lightweight Convolutional Neural Network
by Jaihyuk Choi, Sungjae Lee, Youngdoo Son and Soo Youn Kim
Sensors 2020, 20(11), 3101; https://doi.org/10.3390/s20113101 - 30 May 2020
Cited by 16 | Viewed by 5443
Abstract
This paper presents an always-on complementary metal-oxide-semiconductor (CMOS) image sensor (CIS) using an analog convolutional neural network for image classification in mobile applications. To reduce the power consumption as well as the overall processing time, we propose analog convolution circuits that compute convolution, max-pooling, and correlated double sampling operations without operational transconductance amplifiers. In addition, we used a voltage-mode MAX circuit for max pooling in the analog domain. After the analog convolution processing, the image data were reduced by 99.58% and converted to digital with a 4-bit single-slope analog-to-digital converter. After the conversion, images were classified by the fully connected processor, as traditionally performed in the digital domain. The measurement results show that we achieved an 89.33% image classification accuracy. The prototype CIS was fabricated in a 0.11 μm 1-poly 4-metal CIS process with a standard 4T active pixel sensor. The image resolution was 160 × 120, and the total power consumption of the proposed CIS was 1.12 mW with a 3.3 V supply voltage at a maximum frame rate of 120 frames per second.
(This article belongs to the Special Issue Smart Image Sensors)
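
The data-reduction path described in this abstract, convolution and max pooling in the analog domain followed by a coarse 4-bit conversion, can be modeled numerically. The sketch below is a hedged software stand-in with invented shapes and kernels; it reproduces the flavor of the reduction, not the paper's exact 99.58% figure.

```python
import numpy as np
from scipy.signal import convolve2d

def analog_frontend(img, kernel, pool=4, bits=4):
    """Model of the analog chain: convolution, max pooling, then a coarse
    single-slope-style ADC modeled as uniform 4-bit quantization. Returns
    the small digital tensor handed to the fully connected classifier."""
    conv = convolve2d(img.astype(np.float64), kernel, mode="valid")
    h = conv.shape[0] // pool * pool
    w = conv.shape[1] // pool * pool
    pooled = conv[:h, :w].reshape(h // pool, pool, w // pool, pool).max(axis=(1, 3))
    lo, hi = pooled.min(), pooled.max()
    levels = 2 ** bits - 1                        # 4-bit -> 15 quantization steps
    return np.round((pooled - lo) / (hi - lo + 1e-12) * levels).astype(np.uint8)

img = np.random.rand(120, 160)                    # sensor-resolution input
feat = analog_frontend(img, np.ones((3, 3)) / 9)
print(feat.shape, f"data reduced by {1 - feat.size / img.size:.2%}")
```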
