# A Hybrid Bionic Image Sensor Achieving FOV Extension and Foveated Imaging


## Abstract


## 1. Introduction

## 2. Methods

C_{ij} is the sequence of each camera, where i and j denote the row and column numbers, respectively. The red and yellow dot-dashed lines are the original optical axes of the cameras, and the blue dot-dashed line is the optical axis of the central ommatidium after deviation by the Risley prisms. Φ_{vmin} and Φ_{hmin} denote the minimum angles between the edge of the FOV of the central ommatidium (the edge closer to the red dot-dashed line) and the red dot-dashed line in the vertical and horizontal directions with the two prisms aligned. φ_{v} and φ_{h} denote the inclined angles between the original optical axis of the central ommatidium and the optical axes of the peripheral ommatidia C_{12}/C_{32} and C_{21}/C_{23}, respectively. θ_{v} and θ_{h} denote the half FOVs of an individual ommatidium in the vertical and horizontal directions. The Risley prisms are composed of two identical prisms which can be rotated independently.

#### 2.1. FOV Extension

φ_{v(h)} should be configured according to the values of Φ_{vmin} and Φ_{hmin}.

**I** is the vector of incident light from one pixel, and **R** is the vector of emergent light. The two subscripts of **I** and **R** indicate the sequence of the prism and the sequence of the surface within each prism, respectively. **I**_{11} is calculated from the pixel coordinates, where (m_{0}, n_{0}) represents the center of the pixel array. The normal vectors of the four surfaces are calculated accordingly, and the emergent light, **R**_{22}, through the Risley prisms is obtained. Then, the deviation angle, Φ, and the azimuth angle, Θ, as illustrated in Figure 2a, can be calculated from **R**_{22}.
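The non-paraxial ray tracing above can be sketched numerically with the vector form of Snell's law. The sketch below is a minimal model under assumed conventions: each prism has a flat face and an inclined face with its thick end at azimuth φ, and all function names are illustrative, not the paper's.

```python
import numpy as np

def refract(I, N, n1, n2):
    """Vector form of Snell's law: unit incident ray I, unit surface
    normal N oriented so that I·N < 0, media indices n1 -> n2."""
    r = n1 / n2
    cos_i = -np.dot(I, N)
    cos_t = np.sqrt(1.0 - r**2 * (1.0 - cos_i**2))  # assumes no total internal reflection
    T = r * I + (r * cos_i - cos_t) * N
    return T / np.linalg.norm(T)

def trace_risley(phi1, phi2, alpha, n, I11=np.array([0.0, 0.0, -1.0])):
    """Trace one ray through two identical wedge prisms (wedge angle
    alpha, index n, rotation angles phi1 and phi2, all in radians).
    Returns the deviation angle Phi and azimuth angle Theta in degrees."""
    def faces(phi):
        flat = np.array([0.0, 0.0, 1.0])
        inclined = np.array([np.sin(alpha) * np.cos(phi),
                             np.sin(alpha) * np.sin(phi),
                             np.cos(alpha)])
        return flat, inclined
    R = I11
    for phi in (phi1, phi2):
        entrance, exit_face = faces(phi)
        R = refract(R, entrance, 1.0, n)   # air -> glass
        R = refract(R, exit_face, n, 1.0)  # glass -> air; after prism 2 this is R22
    Phi = np.arccos(np.clip(np.dot(R, [0.0, 0.0, -1.0]), -1.0, 1.0))
    Theta = np.arctan2(R[1], R[0])
    return np.degrees(Phi), np.degrees(Theta)
```

With the prototype values (α = 4°, n = 1.5), two aligned prisms deviate the axis by roughly 2(n − 1)α ≈ 4°, and opposed prisms (φ2 = φ1 + 180°) nearly cancel, which matches the thin-prism intuition behind Figure 1b,c.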

This gives Φ_{hmin}. In the same way, the minimum inclined angle Φ_{vmin} between the lower edge of the FOV and the negative direction of the z-axis can be obtained with X = 1, Y = 1:N/2 and ${\varphi}_{1}={\varphi}_{2}=270^{\circ}$. Then, the conditions of Equation (2) can be fulfilled using the values of Φ_{vmin} and Φ_{hmin}.

The whole FOV in each direction is Φ_{v(h)} = 2(φ_{v(h)} + θ_{v(h)}), and the FOV is extended by 2φ_{v(h)}.
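The whole-FOV relation can be sketched as a small calculator. The helper names are illustrative; the half-FOV formula θ = atan(Np/2f′) follows from the sensor geometry, and defining the FOV extension ratio as the whole FOV over a single ommatidium's FOV (so FER = 2 when φ = θ) is an assumption consistent with the analysis in Section 3.1.

```python
import math

def half_fov_deg(n_pixels, pitch_m, focal_m):
    """Half FOV of a single ommatidium in one direction:
    theta = atan(n_pixels * pitch / (2 * f'))."""
    return math.degrees(math.atan(n_pixels * pitch_m / (2.0 * focal_m)))

def whole_fov_deg(phi_deg, theta_deg):
    """Whole FOV of the HBIS in one direction: Phi = 2 * (phi + theta)."""
    return 2.0 * (phi_deg + theta_deg)

def fer(phi_deg, theta_deg):
    """FOV extension ratio relative to one ommatidium (whole FOV / 2*theta)."""
    return whole_fov_deg(phi_deg, theta_deg) / (2.0 * theta_deg)
```

With the prototype parameters (1280 columns, p = 3.75 μm, f′ = 12 mm), θ_{h} ≈ 11.3°, and setting φ_{h} = θ_{h} gives FER_{h} = 2, matching the inflection-point condition discussed later.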

#### 2.2. Super-Resolution

The cutoff frequency of the diffraction-limited optics is v_{o} = D/(1.22λf′), where D is the entrance pupil [32] diameter and λ is the optical wavelength. The Nyquist frequency of the pixel array is v_{p} = 1/2p. Assuming H = v_{o}/v_{p}, the spatial resolution of the camera can theoretically be improved up to H times with super-resolution techniques. H is the ratio of v_{o} to v_{p}, and it is constant for a given imaging system with a fixed focal length. The parameter h (h ≤ H, h an integer) is the resolution-improvement factor. The step length of the sub-pixel shift in the image plane is sl = p/h. Given the object distance, v, the step length of sub-pixel shifts in the object plane is SL = sl∙v/f′. Figure 3a,b shows the scan pattern of the optical axis with odd and even values of h, respectively. The numbers on the circular dots denote the sequence of the sub-pixel points. The green dots represent the intersection of the object plane and the optical axis at the initial phase angles. The purple dots are arranged with reference to the green dots.
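The chain from the prototype parameters to the improvement factor can be sketched directly. The wavelength λ = 550 nm below is an assumed mid-visible value (the paper's table does not list one); everything else comes from the parameter table.

```python
import math

# Prototype parameters from the table; lambda is an assumed mid-visible value.
p = 3.75e-6    # pixel pitch [m]
f = 12e-3      # focal length f' [m]
F = 1.4        # F-number
lam = 550e-9   # wavelength [m] (assumption)
v_obj = 50e-3  # object distance v [m]

D = f / F                   # entrance pupil diameter
v_o = D / (1.22 * lam * f)  # optical cutoff frequency [cycles/m]
v_p = 1.0 / (2.0 * p)       # Nyquist frequency of the pixel array
H = v_o / v_p               # theoretical resolution-improvement limit

h = min(int(H), 7)          # integer improvement factor, h <= H
sl = p / h                  # sub-pixel step in the image plane
SL = sl * v_obj / f         # corresponding step in the object plane
```

With these values H ≈ 8, which is consistent with the prototype's stated range 1 ≤ h ≤ 7 for integer h < H.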

**DV**^{0} is the deviation vector of the initial optical axis before the sub-pixel shift. In the example shown in Figure 3a, **DV**^{0} = **DV**^{13}.

Substituting **I**_{11} = (0, 0, −1) into the non-paraxial ray tracing method, the deviation angle, Φ, and the azimuth angle, Θ, of the optical axis can be obtained, and **DV**^{0} can be computed as (v∙tan Φ∙cos Θ, v∙tan Φ∙sin Θ). Then, k_{1} and k_{2} can be calculated. With **DV**^{i} and the norms (k_{1} and k_{2}) of $\mathit{D}{\mathit{V}}_{1}^{i}$ and $\mathit{D}{\mathit{V}}_{2}^{i}$, the two phase angles ${\varphi}_{1}^{i}$ and ${\varphi}_{2}^{i}$ can be computed.
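The inverse solution above decomposes a target deviation vector into two prism contributions of known norms; for identical prisms (k_{1} = k_{2} = k) this is the familiar two-link construction. The sketch below assumes that first-order picture and equal norms; function names are illustrative, not the paper's.

```python
import math

def dv0(Phi, Theta, v):
    """Deviation vector of the optical axis in the object plane:
    DV = (v*tan(Phi)*cos(Theta), v*tan(Phi)*sin(Theta))."""
    return (v * math.tan(Phi) * math.cos(Theta),
            v * math.tan(Phi) * math.sin(Theta))

def inverse_phase_angles(DV, k):
    """Phase angles phi1, phi2 such that two vectors of norm k at those
    angles sum to DV (two-link construction; requires |DV| <= 2k)."""
    x, y = DV
    d = math.hypot(x, y)
    beta = math.atan2(y, x)            # direction of DV
    gamma = math.acos(d / (2.0 * k))   # half-angle between the two contributions
    return beta + gamma, beta - gamma

def forward(phi1, phi2, k):
    """Recompose the deviation vector from the two phase angles."""
    return (k * (math.cos(phi1) + math.cos(phi2)),
            k * (math.sin(phi1) + math.sin(phi2)))
```

Round-tripping through `inverse_phase_angles` and `forward` reproduces the target vector exactly, which is the property the sub-pixel scan relies on.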

#### 2.3. Foveated Imaging

## 3. Simulations and Analysis

#### 3.1. FOV Extension

The upper limits correspond to Φ_{v(h)min} + θ_{v(h)} = φ_{v(h)} from Equation (2), and the lower limits correspond to φ_{v(h)} = θ_{v(h)}.

The FOV extension ratios reach FER_{v} = 2.88 and FER_{h} = 2.9 at maximum. When α is 11°, an inflection point occurs, as shown in Figure 4b, which corresponds to the situation in which the right edge of the FOV of the fovea is parallel to the initial optical axis before the Risley prisms when ${\varphi}_{1}={\varphi}_{2}=180^{\circ}$. In this situation, the scan field of the fovea covers the whole FOV of the HBIS, and only the overlaps among the peripheral ommatidia need to be ensured, with φ_{h} ≤ θ_{h}; φ_{h} = θ_{h} corresponds to FER_{h} = 2. Regarding α = 9° in Figure 4a, n = 2.2 in Figure 4c and n = 2.4 in Figure 4d, the inflection points occur for the same reason, and they also yield a two-fold FOV extension in the respective directions.

φ_{h} = θ_{h} is the condition for inflection points in the horizontal direction, and φ_{v} = θ_{v} is the condition for inflection points in the vertical direction. As θ_{h} > θ_{v}, the inflection points in the horizontal direction occur at larger α and n than those in the vertical direction.

#### 3.2. Imaging with Sub-Pixel Shifts for Super-Resolution

#### 3.3. Foveated Imaging

## 4. Experiments and Results

#### 4.1. Prototype Parameters

φ_{h} ≤ 18.4°, φ_{v} ≤ 12.9° and 1 ≤ h ≤ 7. The prototype employed commercial off-the-shelf cameras, and the frame was made by mechanical assembly and 3D printing, so errors from size and assembly were inevitable. In order to avoid loss of scene, the parameters were set as φ_{h} = 15°, φ_{v} = 10° and φ_{c} = 17.8°. The prototype is shown in Figure 7. The two prisms were each driven by a stepping motor through a belt, and the stepping motors were controlled by a computer through serial ports.

#### 4.2. Experimental Results

The designed FER_{v} and FER_{h} of the prototype were 2.0 and 2.2, respectively, which is verified by Figure 8a. The experimental results for the FOV extension, FER_{v} = 2.2 and FER_{h} = 2.3, were consistent with the theoretical values. Figure 8b shows the super-resolution image in the fovea. The three pairs of local regions were located in the central and peripheral regions of the super-resolution image. More details of the scene are restored, and fewer artifacts remain after the super-resolution reconstruction.

## 5. Discussion

φ_{h} and φ_{v} were set according to the upper limits (the red line) of Figure 4b. In Figure 10a, the HBIS achieves FER_{v} = 2.5 and FER_{h} = 2.6, while the fovea can only scan part of the whole FOV. In Figure 10b, the HBIS achieves FER_{v} = FER_{h} = 2, which is smaller than in Figure 10a, but the scan field of the fovea covers the whole FOV of the HBIS. This verifies that the larger the wedge angles are, the smaller the whole FOV is; the same law holds for the refractive index. In addition, the fovea can move over the entire FOV of the HBIS in Figure 10b, which makes the HBIS more capable than the system presented in ref. [15], which uses a 5 × 5 camera array to achieve twice the FOV extension and a foveal ratio of 5.9, without the capability of fovea movement.

## 6. Conclusions and Future Work

## Acknowledgments

## Author Contributions

## Conflicts of Interest

## References

1. Song, Y.M.; Xie, Y.; Malyarchuk, V.; Xiao, J.; Jung, I.; Choi, K.J.; Liu, Z.; Park, H.; Lu, C.; Kim, R.H. Digital cameras with designs inspired by the arthropod eye. *Nature* **2013**, 497, 95–99.
2. Wu, S.; Jiang, T.; Zhang, G.; Schoenemann, B.; Neri, F.; Zhu, M.; Bu, C.; Han, J.; Kuhnert, K.-D. Artificial compound eye: A survey of the state-of-the-art. *Artif. Intell. Rev.* **2017**, 48, 573–603.
3. Shi, C.; Wang, Y.; Liu, C.; Wang, T.; Zhang, H.; Liao, W.; Xu, Z.; Yu, W. SCECam: A spherical compound eye camera for fast location and recognition of objects at a large field of view. *Opt. Express* **2017**, 25, 32333–32345.
4. Yi, Q.; Hong, H. Continuously zoom imaging probe for the multi-resolution foveated laparoscope. *Biomed. Opt. Express* **2016**, 7, 1175–1182.
5. Cao, J.; Hao, Q.; Xia, W.; Peng, Y.; Cheng, Y.; Mu, J.; Wang, P. Design and realization of retina-like three-dimensional imaging based on a MOEMS mirror. *Opt. Lasers Eng.* **2016**, 82, 1–13.
6. Borst, A.; Plett, J. Optical devices: Seeing the world through an insect's eyes. *Nature* **2013**, 497, 47–48.
7. Prabhakara, R.S.; Wright, C.H.G.; Barrett, S.F. Motion detection: A biomimetic vision sensor versus a CCD camera sensor. *IEEE Sens. J.* **2012**, 12, 298–307.
8. Srinivasan, M.V.; Bernard, G.D. Effect of motion on visual acuity of the compound eye: A theoretical analysis. *Vis. Res.* **1975**, 15, 515–525.
9. Zhang, S.W.; Lehrer, M.; Srinivasan, M.V. Eye-specific learning of routes and "signposts" by walking honeybees. *J. Comp. Physiol. A Sens. Neural Behav. Physiol.* **1998**, 182, 747–754.
10. Duparré, J.W.; Wippermann, F.C. Micro-optical artificial compound eyes. *Bioinspir. Biomim.* **2006**, 1, R1.
11. Druart, G.; Guérineau, N.; Haïdar, R.; Lambert, E.; Tauvy, M.; Thétas, S.; Rommeluère, S.; Primot, J.; Deschamps, J. MULTICAM: A miniature cryogenic camera for infrared detection. In Proceedings of SPIE Photonics Europe; SPIE: Bellingham, WA, USA, 2008.
12. Carles, G.; Downing, J.; Harvey, A.R. Super-resolution imaging using a camera array. *Opt. Lett.* **2014**, 39, 1889–1892.
13. Lee, W.B.; Jang, H.; Park, S.; Song, Y.M.; Lee, H.N. COMPU-EYE: A high resolution computational compound eye. *Opt. Express* **2016**, 24, 2013–2026.
14. Brady, D.J.; Gehm, M.E.; Stack, R.A.; Marks, D.L.; Kittle, D.S.; Golish, D.R.; Vera, E.M.; Feller, S.D. Multiscale gigapixel photography. *Nature* **2012**, 486, 386.
15. Carles, G.; Chen, S.; Bustin, N.; Downing, J.; McCall, D.; Wood, A.; Harvey, A.R. Multi-aperture foveated imaging. *Opt. Lett.* **2016**, 41, 1869–1872.
16. Floreano, D.; Pericet-Camara, R.; Viollet, S.; Ruffier, F.; Brückner, A.; Leitel, R.; Buss, W.; Menouni, M.; Expert, F.; Juston, R.; et al. Miniature curved artificial compound eyes. *Proc. Natl. Acad. Sci. USA* **2013**, 110, 9267–9272.
17. Viollet, S.; Godiot, S.; Leitel, R.; Buss, W.; Breugnon, P.; Menouni, M.; Juston, R.; Expert, F.; Colonnier, F.; L'Eplattenier, G.; et al. Hardware architecture and cutting-edge assembly process of a tiny curved compound eye. *Sensors* **2014**, 14, 21702–21721.
18. Jeong, K.H.; Kim, J.; Lee, L.P. Biologically inspired artificial compound eyes. *Science* **2006**, 312, 557.
19. Ko, H.C.; Stoykovich, M.P.; Song, J.; Malyarchuk, V.; Choi, W.M.; Yu, C.J.; Geddes, J.B., III; Xiao, J.; Wang, S.; Huang, Y. A hemispherical electronic eye camera based on compressible silicon optoelectronics. *Nature* **2008**, 454, 748–753.
20. Sargent, R.; Bartley, C.; Dille, P.; Keller, J.; Nourbakhsh, I.; LeGrand, R. Timelapse GigaPan: Capturing, sharing, and exploring timelapse gigapixel imagery. In Proceedings of the Fine International Conference on Gigapixel Imaging for Science, Pittsburgh, PA, USA, 11–13 November 2010.
21. Hua, H.; Liu, S. Dual-sensor foveated imaging system. *Appl. Opt.* **2008**, 47, 317–327.
22. Rasolzadeh, B.; Björkman, M.; Huebner, K.; Kragic, D. An active vision system for detecting, fixating and manipulating objects in the real world. *Int. J. Robot. Res.* **2010**, 29, 133–154.
23. Ude, A.; Gaskett, C.; Cheng, G. Foveated vision systems with two cameras per eye. In Proceedings of the 2006 IEEE International Conference on Robotics and Automation, 15–19 May 2006; pp. 3457–3462.
24. González, M.; Sánchez-Pedraza, A.; Marfil, R.; Rodríguez, J.; Bandera, A. Data-driven multiresolution camera using the foveal adaptive pyramid. *Sensors* **2016**, 16, 2003.
25. Belay, G.Y.; Ottevaere, H.; Meuret, Y.; Vervaeke, M.; Van Erps, J.; Thienpont, H. Demonstration of a multichannel, multiresolution imaging system. *Appl. Opt.* **2013**, 52, 6081–6089.
26. Wei, K.; Zeng, H.; Zhao, Y. Insect–human hybrid eye (IHHE): An adaptive optofluidic lens combining the structural characteristics of insect and human eyes. *Lab Chip* **2014**, 14, 3594–3602.
27. Wu, X.; Wang, X.; Zhang, J.; Yuan, Y.; Chen, X. Design of microcamera for field curvature and distortion correction in monocentric multiscale foveated imaging system. *Opt. Commun.* **2017**, 389, 189–196.
28. Cheng, Y.; Cao, J.; Hao, Q.; Zhang, F.H.; Wang, S.P.; Xia, W.Z.; Meng, L.T.; Zhang, Y.K.; Yu, H.Y. Compound eye and retina-like combination sensor with a large field of view based on a space-variant curved micro lens array. *Appl. Opt.* **2017**, 56, 3502–3509.
29. Li, Y.J. Closed form analytical inverse solutions for Risley-prism-based beam steering systems in different configurations. *Appl. Opt.* **2011**, 50, 4302–4309.
30. Yang, Y.G. Analytic solution of free space optical beam steering using Risley prisms. *J. Lightw. Technol.* **2008**, 26, 3576–3583.
31. Land, M.F.; Nilsson, D.E. *Animal Eyes*, 2nd ed.; Oxford University Press: Oxford, UK, 2012.
32. Entrance Pupil. Available online: https://en.wikipedia.org/wiki/Entrance_pupil (accessed on 20 March 2018).
33. Lavigne, V.; Ricard, B. Fast Risley prisms camera steering system: Calibration and image distortions correction through the use of a three-dimensional refraction model. *Opt. Eng.* **2007**, 46, 043201.
34. Li, A.; Liu, X.; Sun, W. Forward and inverse solutions for three-element Risley prism beam scanners. *Opt. Express* **2017**, 25, 7677–7688.
35. Cheng, Y.; Cao, J.; Meng, L.; Wang, Z.; Zhang, K.; Ning, Y.; Hao, Q. Reducing defocus aberration of a compound and human hybrid eye using liquid lens. *Appl. Opt.* **2018**, 57, 1679–1688.
36. Arena, P.; Bucolo, M.; Fortuna, L.; Occhipinti, L. Cellular neural networks for real-time DNA microarray analysis. *IEEE Eng. Med. Biol. Mag.* **2002**, 21, 17–25.
37. Arena, P.; Basile, A.; Bucolo, M.; Fortuna, L. An object oriented segmentation on analog CNN chip. *IEEE Trans. Circuits Syst. I Fundam. Theory Appl.* **2003**, 50, 837–846.

**Figure 1.** Schematic design of the hybrid bionic image sensor (HBIS): (**a**) the basic structure of the HBIS; (**b**) the situation in which the thin end of one prism is aligned with the thick end of the other prism; (**c**) the situation in which the two prisms are aligned, and the thick ends are oriented to the vertical or horizontal direction.

**Figure 2.** The central ommatidium imaging (**a**) and ray tracing (**b**) models with Risley prisms. The Risley prisms are located close to the entrance pupil of the optical system of the central ommatidium. The red lines of (**b**) represent light beams in the air, and the blue lines are light beams inside the prisms.

**Figure 3.** Sub-pixel scan patterns for (**a**) h = 5 and (**b**) h = 4, and (**c**) the model for the inverse solutions of the Risley prisms.

**Figure 4.** Vertical and horizontal whole field of view (FOV) and FOV extension ratio (FER) versus the wedge angle, α, and the refractive index, n. For (**a**,**b**), n = 1.5; for (**c**,**d**), α = 4°.

**Figure 5.** The maximum alignment error (AE) versus (**a**) the initial phase angles, (**b**) the wedge angle (α) and (**c**) the refractive index (n).

**Figure 6.** Bandwidth saving ratio (BSR) versus (**a**) the wedge angle, α, and (**b**) the refractive index, n.

**Figure 8.** The stitched image with extended field of view (FOV) (**a**) and the super-resolution image of the fovea (**b**). The colored rectangles with dashed lines in (**a**) denote the FOVs covered by ommatidia C_{12}, C_{21}, C_{23} and C_{32}, respectively. For the three pairs of local regions in (**b**), the left ones are pieces of the super-resolution image, and the right ones are pieces of a sub-image.

**Figure 9.** An indoor super-resolution image of the fovea: (**a**) shows the super-resolution image; the left columns of (**b**–**f**) correspond to the local regions of (**a**) marked as 1 to 5, and the right columns of (**b**–**f**) come from the matched regions of one sub-image.

**Figure 10.** Field of view (FOV) distribution of the peripheral ommatidia and the fovea scan field: (**a**) φ_{h} = 18.4°, φ_{v} = 12.9°, α = 4°; (**b**) φ_{h} = 11.3°, φ_{v} = 8.5°, α = 11°.

| Parameter Type | Abbreviation | Value |
| --- | --- | --- |
| Pixel pitch | p | 3.75 μm |
| Rows × columns of pixel array | M × N | 960 × 1280 |
| Focal length | f′ | 12 mm |
| F-number | F | 1.4 |
| Wedge angle | α | 4° |
| Object distance | v | 50 mm |
| Refractive index | n | 1.5 |

© 2018 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (http://creativecommons.org/licenses/by/4.0/).

## Share and Cite

**MDPI and ACS Style**

Hao, Q.; Wang, Z.; Cao, J.; Zhang, F.
A Hybrid Bionic Image Sensor Achieving FOV Extension and Foveated Imaging. *Sensors* **2018**, *18*, 1042.
https://doi.org/10.3390/s18041042
