Communication

The Design of a Computer Vision Sensor Based on a Low-Power Edge Detection Circuit

Department of System Semiconductor, Dongguk University, Seoul 04620, Republic of Korea
* Author to whom correspondence should be addressed.
Sensors 2025, 25(10), 3219; https://doi.org/10.3390/s25103219
Submission received: 2 April 2025 / Revised: 11 May 2025 / Accepted: 15 May 2025 / Published: 20 May 2025
(This article belongs to the Special Issue Recent Advances in CMOS Image Sensor)

Abstract
We propose a complementary metal-oxide-semiconductor (CMOS) image sensor (CIS) that performs edge mask computation and detection during the analog-to-digital (A/D) conversion process to output 1-bit edge images. Because an edge image can be represented with 1 bit per pixel, the edge mask and thresholding operations are performed simultaneously during A/D conversion, reducing the required memory capacity while sustaining a high frame rate (frames per second, FPS). Additionally, by implementing a 1-bit analog-to-digital converter (ADC) instead of a high-resolution ADC and counter, enabled by the 1-bit edge data obtained from the edge mask operation, both static and dynamic power consumption are reduced. The proposed CIS, fabricated in a one-poly six-metal CIS process with a 4T-active pixel sensor, has a core area of 2.546 mm × 1.923 mm within a chip area of 2.558 mm × 4.3 mm. The total power consumption is 1.52 mW at 235 FPS, with power supplies of 2.8 V and 1.5 V for the analog domain and 1.5 V for the digital domain.

1. Introduction

In computer vision, edge detection is a fundamental technique that allows for the extraction of critical features from images, facilitating accurate image segmentation by separating objects from the background. Based on these advantages, several studies have been conducted to apply computer vision techniques, such as biometric recognition and lane detection, to various applications [1,2,3]. In applications such as mobile face/iris recognition and autonomous driving lane detection, low power consumption (≤1 mW) [4] and high speed (≥30 FPS) are crucial requirements due to limited battery capacity and real-time object inference.
Conventional methods for object recognition, as illustrated in Figure 1, involve the following steps. The light incident on the photodiodes of a CMOS image sensor is converted into voltage values corresponding to its intensity through a voltage buffer. These voltage values are then converted into digital codes using an analog-to-digital converter (ADC), representing raw data, and are stored in a separate memory. Subsequently, edge detection or image classification operations are performed using the arithmetic logic unit (ALU) within the central processing unit or graphics processing unit of a mobile device’s application processor (AP) [5]. The energy consumed in transferring data from memory to the ALU significantly affects the overall power efficiency of the computation.
Therefore, research efforts have been dedicated to implementing edge detection and neural network operations within the readout circuit of a CMOS image sensor (CIS), leveraging the CIS for tasks beyond obtaining image information [6,7,8,9,10,11]. By integrating computations previously performed on external APs into the CIS, advantages can be gained in terms of power consumption and speed. However, the use of high-resolution ADCs with more than 8 bits is crucial during the A/D conversion process. High-resolution ADCs and counters account for approximately 40% of the total power consumption of the CIS [12], making them unsuitable for low-power CIS designs and resulting in a decrease in speed. Furthermore, since edge images are binary (1-bit), they can store only the essential features of an image using minimal memory capacity, which is a significant advantage. However, this advantage cannot be fully utilized when employing high-resolution ADCs.
In this study, we propose a low-power, high-speed edge detection scheme that applies the edge mask and performs detection simultaneously using a switch multiplexer (MUX) and a comparator, as illustrated in Figure 1b. We output the edge image directly using a row buffer and a 1-bit comparator, eliminating the need for high-resolution ADCs and counters. Furthermore, the edge image output can serve as the input to a binarized neural network, enabling image classification with low power consumption and reduced memory usage. This approach enables the implementation of low-power computer vision sensors.
This paper is structured as follows. Section 2 discusses the proposed low-power edge detection circuit, including circuit design and implementation. Section 3 describes the fabricated chip and the measurement environment, followed by the analysis of measurement results using various evaluation metrics. The conclusion is presented in Section 4.

2. The Proposed CIS for Low-Power Edge Detection

2.1. Architecture Flow

The built-in edge mask [9] used in the presented CIS is illustrated in Figure 2. This mask achieves performance comparable to the widely used Sobel mask, with a Pratt’s figure of merit of 97.24%, while offering easier hardware implementation and efficient detection of diagonal edges. The overall block diagram of the proposed low-power edge detection CIS is shown in Figure 3. The system comprises a 480 × 480 pixel array of 4T-active pixel sensors (APS). To accommodate the capacitor and column amplifier layout in the subsequent block, four pixels are grouped as a unit, with only one pixel per group driving a voltage buffer; the result is an active 120 × 120 pixel array. It should be noted that the image resolution required in vision-based sensing is determined mainly by the object size, the distance between the sensor and the object, and the camera’s field of view; a 120 × 120 array can detect the presence of a 10 mm object at a distance of about 1 m [13]. The built-in mask is implemented with a row buffer block that stores two rows of data. During pixel readout, correlated double sampling (CDS) is performed to eliminate noise and preserve the sampled values. In addition, an edge detection (ED) layer is constructed, comprising a column select switch (CSS) block that strides the mask by one column and a comparator block that applies a threshold to the computed mask values, producing the final edge output. The detected edge data are stored in a 1-bit column memory and read out sequentially by a horizontal scanner.
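For reference, the edge data path can be modeled in software. The sketch below assumes the diagonal-difference form of the built-in mask given in Equation (2); the function name, threshold, and test pattern are illustrative, not part of the design:

```python
def edge_map(pixels, threshold):
    """1-bit edge image from diagonal gradients:
    Gx = p[n-1][m-1] - p[n+1][m+1], Gy = p[n-1][m+1] - p[n+1][m-1].
    A pixel is flagged as an edge if either |G| exceeds the threshold."""
    rows, cols = len(pixels), len(pixels[0])
    out = [[0] * cols for _ in range(rows)]
    for n in range(1, rows - 1):
        for m in range(1, cols - 1):
            gx = pixels[n - 1][m - 1] - pixels[n + 1][m + 1]
            gy = pixels[n - 1][m + 1] - pixels[n + 1][m - 1]
            out[n][m] = 1 if max(abs(gx), abs(gy)) > threshold else 0
    return out

# A vertical step edge: columns 0-2 dark, columns 3-5 bright.
patch = [[0, 0, 0, 9, 9, 9],
         [0, 0, 0, 9, 9, 9],
         [0, 0, 0, 9, 9, 9],
         [0, 0, 0, 9, 9, 9]]
print(edge_map(patch, threshold=4))  # edges flagged along the step boundary
```

The 1 in the interior columns adjacent to the step shows how the diagonal differences localize the boundary while uniform regions stay at 0.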
The system operates in two modes, CIS mode and edge detection mode, as shown in Figure 3. In CIS mode (Figure 4a), the CIS image is output by directly converting the CDS-processed voltage values into a multi-bit digital representation using a 6-bit single-slope analog-to-digital converter (SSADC), without applying the edge mask. In edge detection mode (Figure 4b), the edge image is generated by applying the built-in mask to the two rows of data stored in the row buffer, followed by mask computation and threshold application using the CSS and comparator, respectively. The digital code values are output according to the selected mode via the output select block. Figure 4c shows the comparator and the edge detection timing used in this study.
When the ramp input voltage falls within the threshold voltage range, the comparator output flips from High to Low, and this signal serves as the clock for a negative-edge-triggered D flip-flop (N-ETDFF). VDD is applied to the data input of the N-ETDFF; when the flip-flop is triggered by the clock, Qb goes Low, completing the thresholding step. In this way, unlike the conventional method, in which the ramp must be applied for a time equal to the clock period multiplied by 2^n for an n-bit code, edge detection and thresholding are applied simultaneously, so a high FPS can be achieved without being limited by the ramp application time.

2.2. Pixel Voltage Sampling and Edge Mask Operation

The voltage value converted from the voltage buffer of the pixel is stored in the holding capacitor through CDS in the row buffer [14]. Figure 5 illustrates the switched capacitor structure of the row buffer and the timing diagram and circuit operation according to the phases.
Vrst in the reset phase and Vsig in the signal phase are applied to PIXIN, and charge conservation across the sampling capacitor, holding capacitor, and operational transconductance amplifier (OTA) produces the PIXOUT of Equation (1). Since CS and CH have the same capacitance in this circuit, the output is Vref + ΔPIX, where Vref = 1 V is used in this paper.
$$PIXOUT = V_{ref} + \frac{C_S}{C_H}\left(V_{rst} - V_{sig}\right) = V_{ref} + \Delta PIX \tag{1}$$
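As a numerical check of Equation (1), a minimal behavioral model of the CDS stage (the voltage levels and function name are illustrative, not measured values):

```python
# Behavioral model of the row-buffer CDS stage, Equation (1):
# PIXOUT = Vref + (Cs/Ch) * (Vrst - Vsig)
def cds_output(v_rst, v_sig, v_ref=1.0, c_s=1.0, c_h=1.0):
    """Output the reference level plus the amplified reset-signal
    difference; Cs = Ch gives unity gain, as in the paper."""
    return v_ref + (c_s / c_h) * (v_rst - v_sig)

# A 300 mV pixel swing rides on the 1 V reference level.
print(round(cds_output(v_rst=2.0, v_sig=1.7), 3))  # -> 1.3
```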
The two holding capacitors store odd-row and even-row data, respectively, and the values required for the edge mask operation are transferred through S1 and S2 to the back-end circuitry shown in Figure 6a. The pixel values with the offset removed through CDS undergo the built-in mask operation using the column select switch and comparator. Figure 6b shows the timing diagram for computing the x-direction and y-direction gradients, Gx and Gy, where blue corresponds to the data operation in the first row and green to the data operation in the third row. For Gx, when CLK1 is activated, the data from the (N − 1)th row of the (M − 1)th column (=ΔPIX(N−1,M−1)) are transmitted to the comparator. At this moment, with RST turned on, a charge equal to the positive input capacitance of the comparator (=CIN) multiplied by VLT − ΔPIX(N−1,M−1) is stored. Here, VLT is the logic threshold voltage obtained when the input and output of the comparator’s first stage, a single-ended differential amplifier, are connected in a feedback loop. RST is then turned off, and CLK2 is activated to receive the data from the (N + 1)th row of the (M + 1)th column (=ΔPIX(N+1,M+1)). As a result, the charge on the capacitor changes to CIN{VLT − (ΔPIX(N−1,M−1) − ΔPIX(N+1,M+1))}, and the mask operation in the Gy direction is performed simultaneously in the same manner. This is expressed by Equation (2).
$$G_x = V_{LT} - \left\{\left(V_{ref} + \Delta PIX_{(N-1,M-1)}\right) - \left(V_{ref} + \Delta PIX_{(N+1,M+1)}\right)\right\} = V_{LT} - \left\{\Delta PIX_{(N-1,M-1)} - \Delta PIX_{(N+1,M+1)}\right\}$$
$$G_y = V_{LT} - \left\{\left(V_{ref} + \Delta PIX_{(N-1,M+1)}\right) - \left(V_{ref} + \Delta PIX_{(N+1,M-1)}\right)\right\} = V_{LT} - \left\{\Delta PIX_{(N-1,M+1)} - \Delta PIX_{(N+1,M-1)}\right\} \tag{2}$$
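The two-phase charge sampling behind Equation (2) can be verified behaviorally. The sketch below models the comparator input node; the VLT, Vref, and ΔPIX values are illustrative:

```python
# Phase 1 (RST on, CLK1): the comparator input capacitor stores the
# difference between VLT and the first operand (Vref + dpix_a).
# Phase 2 (RST off, CLK2): the input switches to (Vref + dpix_b),
# leaving VLT - (dpix_a - dpix_b) on the node -- Vref cancels.
V_LT, V_REF = 0.75, 1.0  # illustrative logic-threshold / reference levels

def comparator_node(dpix_a, dpix_b):
    stored = V_LT - (V_REF + dpix_a)   # charge per unit CIN after CLK1
    return stored + (V_REF + dpix_b)   # node voltage after CLK2

# Only the pixel difference shifts the node away from VLT.
gx = comparator_node(0.20, 0.05)
print(round(gx, 3))  # VLT - 0.15 -> 0.6
```

Note how the common Vref term drops out, so the gradient is referenced purely to VLT, exactly as in Equation (2).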

2.3. Thresholding Operation

After the mask operation is completed, the resulting voltage values may contain noise or false edges. It is therefore necessary to apply a thresholding process that treats only values above a certain threshold as edges; in prior work, this required separate software processing [9]. In this circuit, we propose an edge detection structure that performs the edge mask operation and threshold application simultaneously, achieving high processing speed and eliminating unnecessary counter toggling, which in turn allows the comparator supply voltage to be scaled down for low-power operation. The result of Equation (2) can be positive or negative depending on the relative magnitudes of ΔPIX(N−1,M−1) and ΔPIX(N+1,M+1). In a conventional design, converting this value into a digital code requires a ramp waveform spanning $V_{LT} \pm V_{PP,PIXEL}$ (i.e., the dynamic range of the pixels) and an additional counter generating a clock that increases progressively from the edges toward the center around VLT. For example, as shown in Figure 7a, when a threshold of 0.5 is applied to the converted full code to select the upper half, the 32 codes of the 5-bit data (excluding the most significant bit, which represents polarity) output 0 for the lowest 16 codes and 1 for the upper 16 codes. We instead exploit the SSADC operation to apply the threshold with the ramp signal during the A/D conversion itself, so that no separate processing after high-resolution image conversion is required. As an example with Th = 0.5, shown in Figure 7b, the ramp slope is applied only to the 1 LSB × 16 window that outputs 0 (i.e., non-edge); when the mask operation value falls within this range, the comparator’s flip signal clocks the flip-flop (TGFF), whose data input is tied to VDD, and the triggered Qb output indicates that the value is not edge data.
Conversely, if the operation value does not fall within the threshold window, the Qb signal remains unchanged and outputs High, indicating edge data. Because the ramp input range is no longer bounded by the maximum voltage range of the pixel, the comparator supply voltage can be scaled down from 3.3 V to 1.5 V. Moreover, since no counter is needed, both static and dynamic power consumption are reduced. Furthermore, unlike the conventional approach, in which the ramp time equals the clock period multiplied by 2^n (n being the ADC resolution), performing edge detection and threshold application simultaneously allows the frame rate to be improved without being limited by the ramp input time. The detected Gx and Gy mask results are combined by an OR gate and output as 1-bit data. See Figure 8.
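Behaviorally, the ramp-window decision reduces to a window comparison around VLT. The sketch below assumes a symmetric window whose width scales with the threshold (a simplified model; VLT and the window mapping are illustrative assumptions, not the exact silicon behavior):

```python
LSB = 0.0156  # V; 1 LSB for a 5-bit code over a 500 mV ramp Vpp

def is_edge(v_mask, v_lt=0.75, th=0.5):
    """Ramp-window thresholding: the ramp sweeps only a window around
    VLT whose width is Th * 32 LSB. A mask value inside the window flips
    the comparator (not an edge); outside, Qb stays High (edge)."""
    half_window = th * 16 * LSB  # Th = 0.5 -> roughly +/-125 mV about VLT
    return abs(v_mask - v_lt) > half_window

print(is_edge(0.95))  # strong gradient, outside the window -> True
print(is_edge(0.78))  # weak gradient, the ramp reaches it  -> False
```

Lowering Th narrows the swept window, so more mask values fall outside it and are reported as edges, matching the trend seen in the measured images of Figure 11.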
Table 1 summarizes the post-simulation results of offset voltage, gain, and unity gain frequency for different corners. In all process corners, the values of Voffset are lower than the 1LSB (=15.6 mV) threshold, which is based on a 5-bit resolution with a ramp Vpp of 500 mV. This indicates that the comparator achieves an accuracy comparable to a 5-bit ADC.

3. Experimental Results

3.1. Layout and Chip Photograph of the Proposed CIS

Figure 9a,b present the layout and a photograph, respectively, of the proposed CIS. The chip was fabricated using a 0.11 μm one-poly six-metal CIS process, with a total area of 2.558 mm (H) × 4.300 mm (V). To lay out the proposed structure in an in-column method within a 480 × 480 pixel array, four pixels are grouped as a unit, with 120 × 120 pixels actively utilized. The pitch of each pixel is 3.25 μm × 3.25 μm. In the analog domain, the pixel and row buffer operate at a supply voltage of 2.8 V, while the comparator operates at 1.5 V. In the digital domain, the memory and horizontal scanner operate with a power supply voltage of 1.5 V.

3.2. Measurement Results

Figure 10 shows the measurement environment for the fabricated chip. The signals required for the internal circuit operation are coded in Verilog and loaded from the host PC onto the FPGA on the motherboard. These signals are then applied to the chip on the daughterboard, and the measured test image is observed using an image-capture program.
Figure 11 shows images measured with three different thresholds. Panels (a) and (b) show results for human faces, while (c) and (d) show objects. The thresholds 0.5, 0.25, and 0.125 correspond to applied ramp spans of 16 LSB, 8 LSB, and 4 LSB, respectively, based on the ramp Vpp used for the 6-bit CIS image measurement. As the threshold decreases, the number of edge outputs increases, changing the images accordingly.
The measured images, shown in Figure 12, are compared with the Sobel, Prewitt, and Robert masks applied to the reference measurement image. The evaluation compares the mean squared error (MSE), peak signal-to-noise ratio (PSNR) [15], and accuracy, which quantify image similarity.
MSE quantifies the average squared difference between the original and reconstructed images. A lower value indicates a higher similarity between the images. PSNR is calculated by logarithmically transforming the MSE, as described in Equation (3), where R represents the maximum possible value of the input image, typically 255 for an 8-bit image. PSNR is indicated in terms of dB, and a higher value signifies greater similarity between the images. Generally, a PSNR value above 30 dB is considered indicative of high quality.
$$PSNR\,[\mathrm{dB}] = 10 \log_{10}\!\left(\frac{R^2}{MSE}\right) \tag{3}$$
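For reproducibility, the MSE and PSNR of Equation (3) can be computed directly. The sketch below uses R = 1 for 1-bit edge maps (an assumption; the text notes R = 255 for 8-bit images), and the toy images are illustrative:

```python
import math

def mse(img_a, img_b):
    """Mean squared error over two equal-sized images."""
    n = len(img_a) * len(img_a[0])
    return sum((a - b) ** 2
               for row_a, row_b in zip(img_a, img_b)
               for a, b in zip(row_a, row_b)) / n

def psnr_db(img_a, img_b, r=1):
    """Equation (3); r is the maximum pixel value (1 for 1-bit edge maps)."""
    m = mse(img_a, img_b)
    return float("inf") if m == 0 else 10 * math.log10(r ** 2 / m)

ref = [[0, 1, 1], [0, 1, 0]]
out = [[0, 1, 0], [0, 1, 0]]  # one mismatching pixel out of six
print(round(psnr_db(ref, out), 2))  # MSE = 1/6 -> 7.78 dB
```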
Table 2 and Table 3 summarize the MSE and PSNR values obtained by comparing the measured edge images with the images produced by the three masks. Both Image (1) and Image (2) have PSNR values above 55 dB, demonstrating performance comparable to the Prewitt mask. Table 4 reports accuracy, defined in Equation (4) as the ratio of pixels carrying the same code when the measured image is overlaid with the images obtained using the three masks. Excluding the Robert mask, which is sensitive to noise, both the Sobel and Prewitt masks exhibit an accuracy of over 90%.
$$Accuracy\,[\%] = \frac{\text{Number of pixels with matching code}}{\text{Total number of pixels}} \times 100 \tag{4}$$
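Equation (4) reduces to a pixel-matching count; a minimal sketch with illustrative toy images:

```python
def accuracy_pct(img_a, img_b):
    """Equation (4): percentage of pixels carrying the same code."""
    total = matches = 0
    for row_a, row_b in zip(img_a, img_b):
        for a, b in zip(row_a, row_b):
            total += 1
            matches += int(a == b)
    return 100 * matches / total

ref = [[0, 1, 1, 0], [0, 1, 0, 0]]
out = [[0, 1, 0, 0], [0, 1, 0, 1]]  # 6 of 8 pixels agree
print(accuracy_pct(ref, out))  # -> 75.0
```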
Table 5 summarizes the main performance metrics of the proposed CIS with edge detection, while Table 6 compares it with edge detection circuits integrated into CISs in previous studies [5,7,9]. All of these studies utilize ADCs with a resolution of 8 bits or higher. References [5,7] convert pixel data into high-resolution CIS images in the digital domain for edge detection, whereas [9] converts analog edge information into 8-bit digital codes and applies thresholding through an additional software step to obtain the final edge information. In this study, we use a low pixel resolution of only 120 × 120 and consume up to 83.8% less power than the other designs by detecting edges without counters and with a low-supply-voltage ADC. Because the design also supports a CIS mode for verifying pixel operation, circuits such as the memory array and ADC contribute additional power consumption; further power optimization is therefore possible when supporting both CIS and edge detection modes. The proposed circuit outputs a 1-bit edge image directly, without an n-bit image conversion step, by restricting the ramp voltage to the threshold window. As a result, the continuous clock toggling required for counter operation is eliminated, reducing the dynamic power consumed during A/D conversion by approximately 96.6%, from 2.91 μW in the method of [9] to 98.74 nW. Additionally, by performing the edge operation and thresholding simultaneously during A/D conversion, a high frame rate is achieved. The thresholding process can be applied to any mask operation, and by extracting only the important information of an image as edges, the output can serve as a preprocessing step for image classification.
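The figure of merit in Table 6 (FoM = Power/(NPIX × FPS)) can be reproduced from the reported power, resolution, and frame rate of each design:

```python
def fom_pj(power_mw, width, height, fps):
    """FoM = Power / (NPIX * FPS), in pJ per pixel per frame."""
    return power_mw * 1e-3 / (width * height * fps) * 1e12

# Operating points as reported in Table 6.
print(round(fom_pj(9.4, 1920, 1440, 60), 1))   # [5]       -> 56.7
print(round(fom_pj(4.3, 160, 120, 3200), 1))   # [7]       -> 70.0
print(round(fom_pj(9.4, 160, 120, 145), 1))    # [9]       -> 3376.4
print(round(fom_pj(1.52, 120, 120, 235), 1))   # this work -> 449.2
```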

4. Conclusions

In this paper, we demonstrate an edge mask calculation and detection CIS using a 1-bit ADC fabricated with a 1P6M 0.11 µm CIS process. Instead of using a high-resolution ADC, we implement an edge detection algorithm in the readout circuit of the CIS using an in-column approach, allowing us to output edge images without the need for additional processing. We employ an SSADC structure, which is suitable for column-parallel implementation. By applying a ramp signal only to the regions that do not require edge detection, we avoid converting the entire dynamic range of pixels into digital codes and directly output 1-bit images. This approach reduces unnecessary counter toggling and decreases power consumption. As a result, in edge detection mode, our proposed CIS consumes 1.52 mW of power while achieving a speed of 235 frames per second (FPS), satisfying the criteria of low power consumption and a fast inference speed of over 30 FPS in battery-constrained small-form factor devices. We believe that the proposed CIS for edge detection can be applied to low-power computer vision sensors for the rapid detection of objects or faces.

Author Contributions

S.L. and S.Y.K. conceived and designed the circuits. Y.C.Y., S.M.H., K.H.L., S.J.L., J.M., and K.L. performed the experiments, and H.L., T.J., and M.S. analyzed the data. All authors have read and agreed to the published version of the manuscript.

Funding

This work was partly supported by the Institute of Information & Communications Technology Planning & Evaluation (IITP)-ITRC (Information Technology Research Center) grant funded by the Korea government (MSIT) (IITP-2025-RS-2024-00438007) and the Technology Innovation Program (Public–private joint investment semiconductor R&D program (K-CHIPS) to foster high-quality human resources) (RS-2023-00235137, 50%) funded by the Ministry of Trade, Industry, and Energy (MOTIE, Republic of Korea) (1415187379).

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

The datasets generated in this study are available from the corresponding author upon reasonable request.

Acknowledgments

The EDA tool is supported by the IC Design Education Center (IDEC) of the Republic of Korea.

Conflicts of Interest

The authors declare no conflicts of interest.

References

  1. Singh, A.; Singh, M.; Singh, B. Face detection and eyes extraction using sobel edge detection and morphological operations. In Proceedings of the 2016 Conference on Advances in Signal Processing (CASP), Pune, India, 9–11 June 2016; pp. 295–300. [Google Scholar] [CrossRef]
  2. Vizoni, M.V.; Marana, A.N. Ocular Recognition Using Deep Features for Identity Authentication. In Proceedings of the 2020 International Conference on Systems, Signals and Image Processing (IWSSIP), Niteroi, Brazil, 1–3 July 2020; pp. 155–160. [Google Scholar]
  3. Andrei, M.-A.; Boiangiu, C.-A.; Tarbă, N.; Voncilă, M.-L. Robust Lane Detection and Tracking Algorithm for Steering Assist Systems. Machines 2022, 10, 10. [Google Scholar] [CrossRef]
  4. Kim, J.-H.; Kim, C.; Kim, K.; Yoo, H.-J. An Ultra-Low-Power Analog-Digital Hybrid CNN Face Recognition Processor Integrated with a CIS for Always-on Mobile Devices. In Proceedings of the 2019 IEEE International Symposium on Circuits and Systems (ISCAS), Sapporo, Japan, 26–29 May 2019; pp. 1–5. [Google Scholar] [CrossRef]
  5. Jin, M.; Noh, H.; Song, M.; Kim, S.Y. Design of an Edge-Detection CMOS Image Sensor with Built-in Mask Circuits. Sensors 2020, 20, 3649. [Google Scholar] [CrossRef] [PubMed]
  6. Lee, H.; Lee, K.; Kim, S.Y. Input-Signal-Based Power-Gated Single-Slope ADC for Low-Power CMOS Image Sensors. J. Integr. Circuits Syst. 2025, 11, 11–16. [Google Scholar] [CrossRef]
  7. Kim, H.-J.; Hwang, S.-I.; Chung, J.-H.; Park, J.-H.; Ryu, S.-T. A Dual-Imaging Speed-Enhanced CMOS Image Sensor for Real-Time Edge Image Extraction. IEEE J. Solid-State Circuits 2017, 52, 2488–2497. [Google Scholar] [CrossRef]
  8. Park, M.-J.; Kim, H.-J. A Real-Time Edge-Detection CMOS Image Sensor for Machine Vision Applications. IEEE Sens. J. 2023, 23, 9254–9261. [Google Scholar] [CrossRef]
  9. Lee, S.; Jeong, B.; Park, K.; Song, M.; Kim, S.Y. On-CMOS Image Sensor Processing for Lane Detection. Sensors 2021, 21, 3713. [Google Scholar] [CrossRef] [PubMed]
  10. Asghar, M.S.; Shah, S.A.A.; Kim, H. A Low Power Mixed Signal Convolutional Neural Network for Deep Learning SoC. J. Integr. Circuits Syst. 2023, 9, 3. Available online: https://jicas.idec.or.kr/index.php/JICAS/article/view/189 (accessed on 14 May 2025).
  11. Choi, J.; Lee, S.; Son, Y.; Kim, S.Y. Design of an Always-On Image Sensor Using an Analog Lightweight Convolutional Neural Network. Sensors 2020, 20, 3101. [Google Scholar] [CrossRef] [PubMed]
  12. Kobayashi, M.; Onuki, Y.; Kawabata, K.; Sekine, H.; Tsuboi, T.; Muto, T.; Akiyama, T.; Matsuno, Y.; Takahashi, H.; Koizumi, T.; et al. A 1.8e-rms Temporal Noise Over 110-dB-Dynamic Range 3.4 µm Pixel Pitch Global-Shutter CMOS Image Sensor with Dual-Gain Amplifiers SS-ADC, Light Guide Structure, and Multiple-Accumulation Shutter. IEEE J. Solid-State Circuits 2018, 53, 219–228. [Google Scholar] [CrossRef]
  13. Fernández Maimó, L.; Huertas Celdrán, A.; Perales Gómez, Á.L.; García Clemente, F.J.; Weimer, J.; Lee, I. Intelligent and Dynamic Ransomware Spread Detection and Mitigation in Integrated Clinical Environments. Sensors 2019, 19, 1114. [Google Scholar] [CrossRef] [PubMed]
  14. Young, C.; Omid-Zohoor, A.; Lajevardi, P.; Murmann, B. A Data-Compressive 1.5/2.75-bit Log-Gradient QVGA Image Sensor With Multi-Scale Readout for Always-On Object Detection. IEEE J. Solid-State Circuits 2019, 54, 2932–2946. [Google Scholar] [CrossRef]
  15. Poobathy, D.; Chezian, R. Edge Detection Operators: Peak Signal to Noise Ratio Based Comparison. Int. J. Image Graph. Signal Process. 2014, 6, 55–61. [Google Scholar] [CrossRef]
Figure 1. Image processing flow of (a) a conventional system and (b) the proposed system.
Figure 2. Built-in mask.
Figure 3. The proposed CVS architecture.
Figure 4. System flow with different modes: (a) CIS mode and (b) edge detection mode, and (c) comparator circuit and edge detection operations.
Figure 5. Row buffer operation according to the phase and timing diagram.
Figure 6. (a) Column select switch and comparator in the edge detection layer. (b) Timing diagram for mask operation.
Figure 7. (a) Conventional and (b) proposed thresholding method.
Figure 8. The offset voltage generated during the comparison operation.
Figure 9. (a) Layout of the proposed CIS and (b) microphotograph of the fabricated chip.
Figure 10. Measurement environment.
Figure 11. Measured edge images using 3 threshold values. (a) Woman’s face; (b) man’s face; (c) car; (d) shark doll.
Figure 12. Measured image using the built-in, Sobel, Prewitt and Robert masks.
Table 1. Post-simulation results of the comparator under process variation.
Process | Voffset [mV] | Open-Loop Gain [dB] | Unity-Gain Frequency [MHz]
ff | 6.39 | 71.3 | 59.3
nn | 8.50 | 72.2 | 46.1
ss | 11.4 | 72.1 | 35.9
Table 2. MSE results of 3 mask images based on the measured image.
MSE | (1) | (2)
Sobel | 0.14 | 0.12
Prewitt | 0.11 | 0.10
Robert | 0.15 | 0.12
Table 3. PSNR results of 3 mask images based on the measured image.
PSNR [dB] | (1) | (2)
Sobel | 56.66 | 57.17
Prewitt | 57.65 | 58.16
Robert | 56.38 | 57.35
Table 4. Accuracy results of 3 mask images based on the measured image.
Accuracy [%] | (1) | (2)
Sobel | 90 | 92
Prewitt | 90 | 93
Robert | 86 | 87
Table 5. Performance table of the proposed CIS with edge detection.
Process | 0.11 μm 1P6M CIS process
Pixel size | 3.25 μm × 3.25 μm
Pixel type | 4T-APS
Pixel resolution | 120 × 120
Chip area | 2.558 mm × 4.3 mm
Core area | 2.546 mm × 1.923 mm
Supply voltages | 2.8 V, 1.5 V (Analog) / 1.5 V (Digital)
ADC resolution | 1 bit (5-bit ADC-comparable accuracy)
Clock frequency | 10 MHz
Power consumption | 1.52 mW
FPS | 235 @ edge detection mode; 220 @ CIS mode
Table 6. Performance comparison table for CIS integrated edge detection.
  | [5] | [7] | [9] | This work
Edge image | (image) | (image) | (image) | (image)
Process | 0.09 μm 1P4M CIS | 0.18 μm 1P4M CIS | 0.11 μm 1P4M CIS | 0.11 μm 1P6M CIS
Resolution | 1920 × 1440 | 160 × 120 | 160 × 120 | 120 × 120
Pixel pitch | 1.4 μm × 1.4 μm | 4.9 μm × 4.9 μm | 3.2 μm × 3.2 μm | 3.25 μm × 3.25 μm
Supply voltage | 2.8 V (Pixel) / 3.3 V (Analog) / 1.2 V (Digital) | 2.8 V (Pixel) / 1.8 V (Circuit) | 3.3 V (Pixel, Analog) / 1.5 V (Digital) | 3.3 V (Pixel) / 2.8 V (Analog) / 1.5 V (Analog, Digital)
Power | 9.4 mW | 4.3 mW | 9.4 mW | 1.52 mW
FPS | 60 | 3200 | 145 | 235
* FoM [pJ/pixel/frame] | 56.7 | 70 | 3376.4 | ** 449.2
* FoM = Power/(NPIX × FPS); ** FoM = 28.1 [pJ/pixel/frame] for 480 × 480 array with 58.75 FPS.

Share and Cite

MDPI and ACS Style

Lee, S.; Yun, Y.C.; Heu, S.M.; Lee, K.H.; Lee, S.J.; Lee, K.; Moon, J.; Lim, H.; Jang, T.; Song, M.; et al. The Design of a Computer Vision Sensor Based on a Low-Power Edge Detection Circuit. Sensors 2025, 25, 3219. https://doi.org/10.3390/s25103219

