Article

Real-time Imaging Orientation Determination System to Verify Imaging Polarization Navigation Algorithm

Hao Lu, Kaichun Zhao, Xiaochu Wang, Zheng You and Kaoli Huang
1 State Key Laboratory of Precision Measurement Technology and Instruments, Tsinghua University, Beijing 100084, China
2 Ordnance Engineering College, Shijiazhuang 050003, China
3 Qian Xuesen Laboratory of Space Technology, China Academy of Space Technology, Beijing 100094, China
* Authors to whom correspondence should be addressed.
Sensors 2016, 16(2), 144; https://doi.org/10.3390/s16020144
Submission received: 11 December 2015 / Revised: 15 January 2016 / Accepted: 18 January 2016 / Published: 23 January 2016
(This article belongs to the Section Physical Sensors)

Abstract

Bio-inspired imaging polarization navigation, which senses the polarization pattern of skylight to provide navigation information, offers higher precision and stronger anti-interference than polarization navigation sensors based on photodiodes. Although many types of imaging polarimeters exist, they are not well suited to research on imaging polarization navigation algorithms. To verify such algorithms, a real-time imaging orientation determination system was designed and implemented. The essential calibration procedures for this type of system, namely camera parameter calibration and calibration of the inconsistency of the complementary metal oxide semiconductor (CMOS) response, were discussed, designed, and implemented, and the calibration results were used to undistort and rectify the multi-camera system. An orientation determination experiment was conducted. The results indicated that, with the calibrations applied, the system could acquire and compute the polarized skylight images and resolve orientation with the algorithm under verification in real time. An orientation determination algorithm based on image processing was tested on the system, and its performance and properties were evaluated. The update rate of the algorithm was over 1 Hz, the error was less than 0.313°, and the population standard deviation was 0.148° without any data filtering.


1. Introduction

Bio-inspired polarization navigation is a novel and promising navigation method. When studying the foraging behavior of insects, researchers discovered that insects treat the polarization pattern of scattered skylight as an azimuth reference [1,2]. Several prototypes have been developed based on the polarization-sensitive structures of insects [3,4,5,6,7,8]. These prototypes use photodiodes as photoelectric converters, so their performance declines when the beams received by the sensors are shrouded by shelters such as clouds. Imaging polarization navigation sensors can solve these problems because they obtain abundant polarization information as well as spatial information, with high precision and strong anti-interference. Few imaging polarimeters appropriate for navigation have been tested, although recent algorithms have been proposed that extract navigation information from images of polarized skylight [9,10].
Developing an imaging polarimeter is therefore essential to test and verify the imaging polarization navigation algorithm, although many types of imaging polarimeters are already available for measuring the radiance distribution of the sky or ocean [11,12,13,14,15,16] or for remote sensing. These instruments and systems are unsuitable for verifying the navigation algorithm on account of their high complexity, cost, or insufficiency in real-time applications. An instrument with a rotating element is the easiest imaging polarimeter to build, but it does not meet real-time requirements because the images used for calculation are not acquired simultaneously [9,10,12,13,15]. The instruments for measuring the radiance distribution of the sky and ocean share a similar three-camera configuration. A large field-of-view fisheye lens can acquire considerable information, but it also increases the probability of direct sunlight entering the camera, and a fisheye camera system needs higher-precision assembly and more complex calibration procedures [16] than a general wide-angle lens. The prototype based on a liquid crystal variable retarder is costly and runs under strict thermal conditions [14]; by contrast, an imaging polarimeter for navigation often runs under uncertain thermal conditions.
In this study, an easy-to-build real-time imaging orientation determination system was designed and implemented to verify the imaging polarization navigation algorithm. The optical system contains three 8 mm lenses to avoid receiving direct sunlight and to ease calibration and rectification. The essential calibration procedures for this type of system, namely camera parameter calibration and calibration of the inconsistency of the complementary metal oxide semiconductor (CMOS) response, were analyzed, designed, and implemented, and the results were used to undistort and rectify the multi-camera system. The orientation determination experiment indicated that the system could acquire and compute the polarized skylight images while running the imaging polarization navigation algorithm described in [9], resolving orientation in real time. The update rate of the system was over 1 Hz, the error was less than 0.313°, and the population standard deviation was 0.148° without any data filtering.

2. Fundamentals of Instrument Description

2.1. Principle of the System

The polarization imaging algorithm of the system is based on the Stokes parameter method, which describes the polarization state of electromagnetic radiation through light intensity. The Stokes vector contains four parameters: I, Q, U, and V. V is omitted because circularly polarized light hardly exists in skylight [17,18,19], so the Stokes vector is defined as $S = [I\ Q\ U]^T$ in this study. Mueller calculus describes the effect of optical elements such as lenses or polarizers on a beam by operating on Stokes vectors:
$$ S_{out} = M S_{in} \tag{1} $$
The first row of the Mueller matrix is extracted because the only quantity that can be measured directly is $I_{out}$ in $S_{out}$:
$$ I_{out} = m_{11} I_{in} + m_{12} Q_{in} + m_{13} U_{in} = \frac{1}{2}\left( I_{in} + Q_{in} \cos 2\alpha + U_{in} \sin 2\alpha \right) \tag{2} $$
where $\alpha$ is the angle of the polarizer's transmission axis from the x-axis of the coordinates we define. The x-axis points to the right of the camera and the y-axis points to the bottom, as viewed from the lens. The system comprises three cameras, namely, A, B, and C, and three polarizers with axes at −90°, −45°, and 0° are mounted in front of them. Substituting these angles into Equation (2) yields:
$$ \begin{bmatrix} I_{-90}(i,j) \\ I_{-45}(i,j) \\ I_{0}(i,j) \end{bmatrix} = \frac{1}{2} \begin{bmatrix} 1 & -1 & 0 \\ 1 & 0 & -1 \\ 1 & 1 & 0 \end{bmatrix} \begin{bmatrix} I(i,j) \\ Q(i,j) \\ U(i,j) \end{bmatrix} \tag{3} $$
According to Equation (3), the I, Q, and U images can be recovered by a linear transformation of the intensity images $I_{-90}$, $I_{-45}$, and $I_0$ gathered by cameras A, B, and C, respectively, by inverting the matrix:
$$ \begin{bmatrix} I(i,j) \\ Q(i,j) \\ U(i,j) \end{bmatrix} = \begin{bmatrix} 1 & 0 & 1 \\ -1 & 0 & 1 \\ 1 & -2 & 1 \end{bmatrix} \begin{bmatrix} I_{-90}(i,j) \\ I_{-45}(i,j) \\ I_{0}(i,j) \end{bmatrix} \tag{4} $$
The angle of polarization (AoP) and the degree of linear polarization (DoLP) images are calculated from I, Q, and U images as follows [20]:
$$ AoP(i,j) = \begin{cases} \frac{1}{2}\arctan\left(\frac{U(i,j)}{Q(i,j)}\right), & Q(i,j) < 0 \\ \frac{1}{2}\arctan\left(\frac{U(i,j)}{Q(i,j)}\right) + 90°, & Q(i,j) > 0 \wedge U(i,j) < 0 \\ \frac{1}{2}\arctan\left(\frac{U(i,j)}{Q(i,j)}\right) - 90°, & Q(i,j) > 0 \wedge U(i,j) > 0 \end{cases} \tag{5} $$
$$ DoLP(i,j) = \frac{\sqrt{U^2(i,j) + Q^2(i,j)}}{I(i,j)} \tag{6} $$
We define the AoE image as the image of the angle of orientation of the E-vector, where the angle is measured from the local meridian:
$$ AoE(i,j) = AoP(i,j) - \varphi \tag{7} $$
where $\varphi$ is the angle of the local meridian, measured clockwise from the y-axis of the defined coordinates as viewed from the lens.
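As an illustration, the following NumPy sketch implements Equations (4)-(7); the function names are hypothetical, and the published system performs this step in a C DLL called from LabVIEW (Section 2.2):

```python
import numpy as np

def stokes_from_intensities(i_n90, i_n45, i_0):
    # Equation (4): invert the analyzer matrix of Equation (3).
    I = i_n90 + i_0
    Q = i_0 - i_n90
    U = I - 2.0 * i_n45
    return I, Q, U

def aop_dolp_aoe(I, Q, U, phi_deg):
    # Equation (5): piecewise AoP in degrees (the guard value handles
    # the edge case Q = 0, which the piecewise definition leaves open).
    safe_Q = np.where(Q == 0, 1e-12, Q)
    half = 0.5 * np.degrees(np.arctan(U / safe_Q))
    aop = np.where(Q < 0, half,
                   np.where(U < 0, half + 90.0, half - 90.0))
    # Equation (6): degree of linear polarization.
    dolp = np.sqrt(Q**2 + U**2) / np.maximum(I, 1e-9)
    # Equation (7): reference the angle to the local meridian.
    aoe = aop - phi_deg
    return aop, dolp, aoe
```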

2.2. Structure of the System

The multi-camera system has the advantage of simultaneous imaging. In this study, a fisheye lens is unnecessary because the system is not designed to acquire as much data as possible. The optical system contains three Computar M0814-MP 8 mm lenses (Computar, Tokyo, Japan) mounted on three Daheng HV1310-FM monochrome cameras (Daheng, Beijing, China). The three cameras are bolted onto an aluminum structure. The data acquired by the cameras are transferred over the IEEE-1394 bus and processed on a laptop. The three IEEE-1394 buses from the three cameras are concentrated by a hub into one bus for a single port on the computer, as shown in Figure 1.
Figure 1. Hardware structure of the system.
Several subprograms exist within the main program: the image acquisition program, the camera control program, the user interface program, and the process program, which contains the polarization navigation algorithm to be tested. The main program and most subprograms are written in LabVIEW, for which off-the-shelf interfaces exist to control and acquire data from IIDC (Instrumentation and Industrial Digital Camera) IEEE-1394 cameras. The process program, in which the AoP and DoLP images are calculated from the intensity images, the camera corrections are applied, and the pattern recognition runs, is developed in the C programming language and packaged in a dynamic link library (DLL) that can be called by LabVIEW, as presented in Figure 2.
Figure 2. Flowchart of the primary process program.

3. Calibration

3.1. Calibration of Camera Parameters

The general calibration procedures for camera systems, namely roll-off, linearity, dark offset, and immersion calibrations, were discussed in [21,22,23]. In this study, we concentrate on the critical calibration methods for the multi-camera system, because inevitable inconsistencies occur: inconsistency in the intrinsic and extrinsic parameters and inconsistency in the CMOS response. For the system to run correctly, the cameras must acquire identical images when no analyzer is mounted in front of them. In fact, inconsistency in the intrinsic parameters is caused by lens manufacturing errors and the assembly of the CMOS and lens, as shown in Figure 3, while inconsistency in the extrinsic parameters is caused by the assembly of the cameras, as shown in Figure 4. This inconsistency cannot be eliminated even by highly accurate assembly, which is difficult and expensive to achieve; nonetheless, it can be calibrated and compensated easily.
Figure 3. (a) Inconsistency caused by intrinsic parameters; and (b) inconsistency caused by the distortion of lens.
Figure 4. Inconsistency caused by extrinsic parameters.
The pin-hole camera model can be expressed as:
$$ s\,m = A\,[R\,|\,t]\,M \tag{8} $$
or:
$$ s \begin{bmatrix} u \\ v \\ 1 \end{bmatrix} = \begin{bmatrix} f_x & 0 & c_x \\ 0 & f_y & c_y \\ 0 & 0 & 1 \end{bmatrix} \begin{bmatrix} r_{11} & r_{12} & r_{13} & t_1 \\ r_{21} & r_{22} & r_{23} & t_2 \\ r_{31} & r_{32} & r_{33} & t_3 \end{bmatrix} \begin{bmatrix} X \\ Y \\ Z \\ 1 \end{bmatrix} \tag{9} $$
where $(X, Y, Z)$ are the coordinates of a 3D point in the world coordinate space and $(u, v)$ are the coordinates of the projection point in pixels. $A$ is the camera matrix, or intrinsic parameter matrix, in which $(c_x, c_y)$ is the principal point and $(f_x, f_y)$ are the focal lengths. $s$ is a scale factor that is often equal to $Z$.
$$ \begin{bmatrix} X \\ Y \\ Z \end{bmatrix} = \begin{bmatrix} r\sin\theta\cos\varphi \\ r\sin\theta\sin\varphi \\ r\cos\theta \end{bmatrix} \tag{10} $$
where $(r, \theta, \varphi)$ are the spherical coordinates of points in the world coordinate space. Spherical coordinates are appropriate for describing the skylight beams received by the system. Substituting Equation (10) into Equation (8) or (9), we obtain the following:
$$ s \begin{bmatrix} u \\ v \\ 1 \end{bmatrix} = \begin{bmatrix} f_x & 0 & c_x \\ 0 & f_y & c_y \\ 0 & 0 & 1 \end{bmatrix} \begin{bmatrix} r_{11} & r_{12} & r_{13} & t_1 \\ r_{21} & r_{22} & r_{23} & t_2 \\ r_{31} & r_{32} & r_{33} & t_3 \end{bmatrix} \begin{bmatrix} r\sin\theta\cos\varphi \\ r\sin\theta\sin\varphi \\ r\cos\theta \\ 1 \end{bmatrix} \tag{11} $$
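For illustration, a minimal NumPy sketch of this projection (the function name is hypothetical and not part of the published system):

```python
import numpy as np

def project_spherical(K, R, t, r, theta, phi):
    """Project a world point given in spherical coordinates
    (Equation (10)) through the pin-hole model of Equation (9)."""
    M = np.array([r * np.sin(theta) * np.cos(phi),
                  r * np.sin(theta) * np.sin(phi),
                  r * np.cos(theta)])
    uvw = K @ (R @ M + t)           # equals s * [u, v, 1]^T
    return uvw[:2] / uvw[2]         # divide out the scale factor s
```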
Many methods can be used to calibrate cameras, such as Zhang's method [24]. This calibration method only requires the cameras to observe a planar pattern shown at a few different orientations; either the cameras or the planar pattern can be moved freely. In the present study, the intrinsic parameter matrix A and the extrinsic rotation matrix R are essential, whereas the translation vector can be neglected: with the 8 mm lenses selected for the system, the offset of the same beam caused by the translation among the three cameras (approximately 50 mm) is less than 1 pixel, while the calibration precision is approximately several pixels.
In the past, Zhang’s calibration was always used for single-camera calibration or for stereo calibration. To calibrate a triple-camera system, we divide the calibration into three single-camera calibrations and two stereo calibrations. First, several groups (approximately 20 groups) of images are acquired for calibration. Photos of the same planar pattern for each group should be taken simultaneously, and all grids on the plane should be in the field of view of all cameras. Second, the intrinsic matrices of cameras A, B, and C should be calculated separately, and the calibration results should be saved. Finally, the extrinsic matrices should be calibrated. In this step, two stereo calibrations between cameras A and B and between cameras A and C are conducted because the coordinate space of camera A is selected as the world coordinate space. The result of this step is expressed as two Rodrigues vectors that indicate the relativity of the coordinates of cameras B and C with respect to the coordinates of camera A.

3.2. Calibration of Inconsistency of Cameras

After the calibration in Section 3.1, we can consider that the respective pixels (i, j) of the cameras measure geometrically identical beams. According to Equation (4), the responses of the respective pixels (i, j) of the cameras should then be identical. The response model of the cameras can be expressed as [23]:
$$ I_{-90}(i,j) = K_a(i,j)\, I_A(i,j), \quad I_{-45}(i,j) = K_b(i,j)\, I_B(i,j), \quad I_0(i,j) = K_c(i,j)\, I_C(i,j) \tag{12} $$
where $I_A$, $I_B$, and $I_C$ denote the geometrically calibrated images, without the dark offset, acquired by cameras A, B, and C, and $I_{-90}$, $I_{-45}$, and $I_0$ are the ideal images for the polarization calculation. The coefficient images $K_a$, $K_b$, and $K_c$ are required to compensate for the inconsistency of the cameras. A large, stable dome source would be the ideal calibration source for the system, but it is infeasible; nonetheless, skylight can be regarded as the "dome" under good weather conditions. When calibrating, the filters are removed and the cameras shoot the same scene, i.e., the "dome". Image sequences $I_A^{(k)}$, $I_B^{(k)}$, and $I_C^{(k)}$ are collected and rectified, and the averaged images $I_A^{mean}$, $I_B^{mean}$, and $I_C^{mean}$ are calculated as in Equation (13) to decrease the effects of the instability of skylight and the random noise of the cameras:
$$ I_A^{mean} = \frac{\sum_k I_A^{(k)}}{N}, \quad I_B^{mean} = \frac{\sum_k I_B^{(k)}}{N}, \quad I_C^{mean} = \frac{\sum_k I_C^{(k)}}{N} \tag{13} $$
The coefficient images $K_a$, $K_b$, and $K_c$ are calculated as in Equation (14):
$$ I^{mean} = \frac{I_A^{mean} + I_B^{mean} + I_C^{mean}}{3}, \quad K_a(i,j) = \frac{I^{mean}(i,j)}{I_A^{mean}(i,j)}, \quad K_b(i,j) = \frac{I^{mean}(i,j)}{I_B^{mean}(i,j)}, \quad K_c(i,j) = \frac{I^{mean}(i,j)}{I_C^{mean}(i,j)} \tag{14} $$
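A minimal NumPy sketch of Equations (13) and (14); the function name is hypothetical, and the frame stacks are assumed to be already rectified and free of the dark offset:

```python
import numpy as np

def flat_field_coefficients(seq_a, seq_b, seq_c):
    """seq_*: stacks of rectified sky frames with shape (N, H, W)."""
    # Equation (13): average each camera's sequence over time.
    mean_a = seq_a.mean(axis=0)
    mean_b = seq_b.mean(axis=0)
    mean_c = seq_c.mean(axis=0)
    # Equation (14): per-pixel ratios against the three-camera mean.
    mean_all = (mean_a + mean_b + mean_c) / 3.0
    return mean_all / mean_a, mean_all / mean_b, mean_all / mean_c
```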

3.3. Correction

When the system runs, the original images acquired by cameras A, B, and C are remapped using their intrinsic matrices to undistort them. The rectified images of cameras B and C are then remapped into the coordinates of camera A so that the three images share the same coordinates. Finally, the geometrically transformed images are multiplied by the coefficient images $K_a$, $K_b$, and $K_c$ as in Equation (12), which qualifies them for calculating the Stokes images.
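A sketch of this correction chain for one camera, assuming OpenCV is available and that R is the calibrated rotation from the camera's frame to camera A's frame; because the translation is neglected (Section 3.1), the remap into camera A's coordinates reduces to the homography $K_A R K^{-1}$ for distant scene points:

```python
import cv2
import numpy as np

def correct_image(raw, K, dist, R_to_a, K_a, out_size, k_coeff):
    # Undistort with the camera's own intrinsics and distortion.
    undist = cv2.undistort(raw, K, dist)
    # Warp into camera A's coordinates (translation neglected).
    H = K_a @ R_to_a @ np.linalg.inv(K)
    rect = cv2.warpPerspective(undist, H, out_size)
    # Apply the flat-field coefficient image as in Equation (12).
    return k_coeff * rect.astype(np.float32)
```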

4. Polarization Navigation Algorithm

The system is designed to verify the imaging polarization navigation algorithm. In this study, the orientation algorithm based on image processing is an angle algorithm whose azimuth reference is the solar meridian. The solar meridian has three features: an E-vector angle of 90°, a straight-line shape, and passage through the principal point. These three features are sufficient conditions to identify a line as the solar meridian [9]. Thus, the algorithm consists of threshold extraction and a specially simplified Hough transform (HT).
The threshold extraction is expressed as follows:
$$ BSM(i,j) = \begin{cases} 1, & \left| AoE(i,j) - \frac{\pi}{2} \right| < R \\ 0, & \text{otherwise} \end{cases} \tag{15} $$
where BSM is the binary image of the solar meridian. Owing to noise, no pixel has an AoE of exactly 90°, so a tolerance R is needed; the principle for selecting R is discussed in [9]. The binary points in BSM are transformed into a parameter space by the HT as in Equation (16). The peak in the parameter space stands for the solar meridian, and the coordinate of the peak is the parameter of the solar meridian.
$$ H(\Omega) = \sum_{i=1}^{n} k(X_i, \Omega) \tag{16} $$
where $X_i = [x_{1i}\ x_{2i}\ \cdots\ x_{Ni}]$ denotes a feature point defined in an N-dimensional feature space, with i the index of the feature points, and $\Omega = [\omega_1\ \omega_2\ \cdots\ \omega_M]$ is a point in an M-dimensional parameter space. The function k is the HT kernel. In the system, the line to be measured lies in a 2D feature space and is constrained to pass through the principal point, so its only free parameter is the angle $\theta$. Thus, the HT in the algorithm can be expressed as:
$$ H(\theta) = \sum_{i=1}^{n} k((x, y)_i, \theta) \tag{17} $$
The kernel function is as follows:
$$ k((x,y)_i, \theta) = \begin{cases} 1, & \tan\theta = y/x \\ 0, & \text{otherwise} \end{cases} \tag{18} $$
The peak in the parameter space is the angle of the solar meridian. The flow of the algorithm is shown in Figure 5.
Figure 5. The flowchart of the orientation algorithm.
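A compact NumPy sketch of Equations (15), (17), and (18), working in degrees and with an illustrative tolerance R (the selection principle for R is discussed in [9]); each BSM pixel votes for the direction of the line joining it to the principal point, and the histogram peak is the solar meridian angle:

```python
import numpy as np

def solar_meridian_angle(aoe_deg, principal_point, R_deg=3.0, n_bins=1800):
    cx, cy = principal_point
    # Equation (15): threshold extraction of the BSM points.
    ys, xs = np.nonzero(np.abs(aoe_deg - 90.0) < R_deg)
    # Equations (17) and (18): each point votes for the angle of the
    # line through the principal point (tan(theta) = y / x).
    theta = np.degrees(np.arctan2(ys - cy, xs - cx)) % 180.0
    hist, edges = np.histogram(theta, bins=n_bins, range=(0.0, 180.0))
    peak = int(hist.argmax())
    return 0.5 * (edges[peak] + edges[peak + 1])  # bin-center angle
```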

5. Experimental Results

5.1. Calibration Results

We use the Matlab camera calibration toolbox, which implements Zhang's calibration and stereo calibration, together with a planar pattern, to calibrate the camera parameters. The three cameras simultaneously acquire images for the extrinsic calibration, and the intrinsic parameters are calibrated separately. The intrinsic and extrinsic parameters are listed in Table 1. Figure 6 shows the AoE and BSM images of the solar meridian computed from the original and the calibrated intensity images.
Table 1. Measured intrinsic and extrinsic parameters of the cameras.
                          Camera A              Camera B                Camera C
Intrinsic parameters
  Principal point         (616.68, 510.45)      (651.25, 560.96)        (643.76, 525.58)
  Focal length            (1622.43, 1621.89)    (1619.19, 1619.91)      (1616.81, 1616.49)
  Distortion coefficient  [−0.104 0.125]        [−0.091 0.059]          [−0.092 0.032]
Extrinsic parameters
  Rodrigues vector        [0 0 0]               [−0.014 0.013 −0.002]   [−0.014 0.015 0.002]
Differences exist among the camera parameters; these cause serious errors and thus should be corrected. For example, the CMOS sensor of each camera is 1024 × 1280 pixels, so the ideal principal point is (639.5, 511.5). According to Table 1, the measured principal points of the three cameras deviate from the ideal point by several dozen pixels. According to Figure 6, the AoE and BSM images calculated from the original images have a certain degree of distortion, with the binary image of the solar meridian noticeably distorted, whereas the images calculated from the calibrated images are much closer to the ideal pattern of polarized skylight.
Figure 6. (a) AoE and BSM images calculated from the calibrated intensity images; and (b) AoE and BSM images calculated from the original intensity images.

5.2. Orientation Experiment

The experiment was conducted on 16 September 2015 on the roof of a building located at 40°0′22″ N latitude and 116°19′9″ E longitude. The supporting equipment comprises a motorized rotation stage (RSA100, Zolix), a controller (SC300-B, Zolix), a battery (12 V, 60 Ah), and a DC-to-AC inverter. The repeatability of the rotation stage is better than 0.005°. The image acquisition unit of the system in the experiment is shown in Figure 7. The rotation stage runs at speeds of 1° per second and 10° per second to verify the dynamic performance of the algorithm. The results are shown in Figure 8 and Figure 9.
Figure 7. Image acquisition unit of the system.
According to the results, the algorithm used in the system can recognize the dynamic polarized skylight pattern and solve the orientation from the pattern. The curves show that the system can track the motion of the rotation stage at rates of 1° per second and 10° per second.
Figure 8. Angle measured throughout the system, which is uniformly rotated at a rate of 1° per second.
Figure 9. Angle measured throughout the system, which is uniformly rotated at a rate of 10° per second.
Several groups of data are acquired to analyze the measurement error, as shown in Table 2 and Table 3. The error indicates the precision of the system and the algorithm. Twelve groups are acquired, one every 15° over the full range (180°). The error is less than 0.313°, and the population standard deviation is 0.148°, where $\sigma_{population}^2 = \sum_{group=1}^{12} N_{group} \sigma_{group}^2 \Big/ \sum_{group=1}^{12} N_{group}$.
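The pooled value can be checked directly from the group statistics reported in Tables 2 and 3, for instance with this short NumPy snippet:

```python
import numpy as np

# Group standard deviations from Tables 2 and 3; each group has 14 samples.
sigma = np.array([0.089, 0.143, 0.201, 0.104, 0.313, 0.146,
                  0.087, 0.112, 0.134, 0.067, 0.107, 0.093])
N = np.full(sigma.size, 14)
pooled = np.sqrt((N * sigma**2).sum() / N.sum())
print(f"{pooled:.3f}")  # prints 0.148, matching the reported value
```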
Table 2. Data and statistics of precision test Groups 1–6.
Group        1         2         3         4         5         6
Index        Data
1            10.480    25.150    39.738    53.545    68.600    84.390
2            10.520    25.325    39.394    53.535    68.707    84.198
3            10.413    25.350    39.621    53.442    68.779    84.080
4            10.692    25.325    39.629    53.532    68.750    83.950
5            10.660    25.467    39.358    53.250    68.850    83.839
6            10.717    25.375    39.439    53.324    69.040    84.105
7            10.600    24.950    39.369    53.448    68.792    83.867
8            10.600    25.517    39.621    53.313    68.667    84.242
9            10.717    25.167    39.425    53.455    68.808    84.023
10           10.530    25.183    39.550    53.332    68.575    84.077
11           10.660    25.400    39.421    53.362    67.653    84.050
12           10.575    25.183    39.330    53.593    68.418    84.183
13           10.640    25.325    38.983    53.513    68.428    84.196
14           10.520    25.350    39.075    53.543    68.535    84.215
Statistics
Mean         10.595    25.291    39.425    53.442    68.614    84.101
σ            0.089     0.143     0.201     0.104     0.313     0.146

6. Conclusions and Outlooks

In this study, an orientation determination system was designed and implemented to verify and test the imaging polarization navigation algorithm. The system comprised three cameras to detect the polarized skylight. The essential calibration procedures, namely camera parameter calibration and calibration of the inconsistency of the CMOS response, were discussed, designed, and implemented. The multi-camera system was easy to build owing to the three 8 mm lenses and the convenient calibration procedures. The calibration results were used to correct and rectify the images acquired by the cameras. The orientation determination experiment was conducted at different rotation rates. The results indicated that the system could acquire and compute the polarized skylight images and resolve the orientation with the algorithm under verification in real time. The update rate of the system was over 1 Hz, the error was less than 0.313°, and the population standard deviation was 0.148° without any data filtering.
The easy-to-build orientation determination system enabled us to verify the polarization orientation algorithm in real time. However, certain problems remain. Synchronization is critical for control and data fusion in navigation; thus, a synchronization signal should be introduced for acquisition. Meanwhile, acquiring and processing the images imposes a heavy computational load on the PC in the system. Introducing a graphics processing unit and a digital signal processor would relieve this load and improve the measurement rate. Although the rich polarization and image information brings the advantages of high precision and anti-interference, the redundant data also increase the computational load. Optimized data acquisition or an object tracking algorithm, such as a Kalman filter or mean shift, could focus on the key pattern in the images instead of searching the entire image, decreasing the computational load. Finally, a bio-inspired polarization sensor for navigation should be integrated into a navigation system that contains GNSS, INS, and magnetic sensors; thus, the data fusion algorithm for polarization sensors is a significant direction for future development.
Table 3. Data and statistics of precision test Groups 7–12.
Group        7         8         9         10        11        12
Index        Data
1            99.691    115.710   129.540   141.860   156.760   171.980
2            99.733    115.680   129.710   141.810   156.640   172.070
3            99.743    115.520   129.770   141.970   156.710   171.850
4            99.811    115.670   129.630   141.790   156.820   171.840
5            99.871    115.560   129.590   141.690   156.780   171.920
6            99.647    115.640   129.530   141.780   156.760   171.880
7            99.797    115.790   129.660   141.830   156.600   172.040
8            99.629    115.810   129.560   141.820   156.810   172.060
9            99.546    115.990   129.540   141.850   156.530   172.020
10           99.664    115.630   129.800   141.790   156.580   171.920
11           99.648    115.750   129.490   141.730   156.440   171.910
12           99.763    115.760   129.800   141.900   156.660   171.960
13           99.586    115.680   129.650   141.850   156.720   171.750
14           99.708    115.780   129.990   141.870   156.640   171.820
Statistics
Mean         99.703    115.712   129.661   141.824   156.675   171.930
σ            0.087     0.112     0.134     0.067     0.107     0.093

Acknowledgments

This work is partially supported by a grant from the National High Technology Research and Development Program of China (863 Program, No. 2013AA122601). The laboratory experiments were performed at the State Key Laboratory of Precision Measurement Technology and Instruments at Tsinghua University. We gratefully acknowledge the support of both institutions. We also thank Yingze Cao for her assistance in the experiments.

Author Contributions

Hao Lu and Kaichun Zhao performed the experiments and analyzed the data. Zheng You and Kaoli Huang conceived and supervised the experiments. Hao Lu wrote the paper.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Rössel, S.; Wehner, R. Polarization vision in bees. Nature 1986, 323, 128–131. [Google Scholar] [CrossRef]
  2. Labhart, T.; Keller, K. Fine structure and growth of the polarization-sensitive dorsal rim area in the compound eye of larval crickets. Naturwissenschaften 1992, 79, 527–529. [Google Scholar] [CrossRef]
  3. Chahl, J.; Mizutani, A. Biomimetic attitude and orientation sensors. IEEE Sens. J. 2012, 12, 289–297. [Google Scholar] [CrossRef]
  4. Chu, J.; Zhao, K.; Wang, T.; Zhang, Q. Research on a novel polarization sensor for navigation. In Proceedings of the IEEE Conference on Information Acquisition, Seogwipo-si, Korea, 8–11 July 2007; pp. 241–246.
  5. Chu, J.; Zhao, K.; Zhang, Q.; Wang, T. Construction and performance test of a novel polarization sensor for navigation. Sens. Actuators A Phys. 2008, 148, 75–82. [Google Scholar] [CrossRef]
  6. Lambrinos, D.; Möller, R.; Labhart, T.; Pfeifer, R.; Wehner, R. A mobile robot employing insect strategies for navigation. Robot. Auton. Syst. 2000, 30, 39–64. [Google Scholar] [CrossRef]
  7. Xian, Z.; Hu, X.; Lian, J.; Zhang, L.; Cao, J.; Wang, Y.; Ma, T. A Novel Angle Computation and Calibration Algorithm of Bio-Inspired Sky-Light Polarization Navigation Sensor. Sensors 2014, 14, 17068–17088. [Google Scholar] [CrossRef] [PubMed]
  8. Karman, S.B.; Diah, S.Z.M.; Gebeshuber, I.C. Bio-Inspired polarized skylight-based navigation sensors: A review. Sensors 2012, 12, 14232–14261. [Google Scholar] [CrossRef] [PubMed]
  9. Lu, H.; Zhao, K.; You, Z.; Huang, K. Angle algorithm based on Hough transform for imaging polarization navigation sensor. Opt. Express 2015, 23, 7248–7262. [Google Scholar] [CrossRef] [PubMed]
  10. Ma, T.; Hu, X.; Zhang, L.; Lian, J.; He, X.; Wang, Y.; Xian, Z. An Evaluation of Skylight Polarization Patterns for Navigation. Sensors 2015, 15, 5895–5913. [Google Scholar] [CrossRef] [PubMed]
  11. Tyo, J.S.; Goldstein, D.L.; Chenault, D.B.; Shaw, J.A. Review of passive imaging polarimetry for remote sensing applications. Appl. Opt. 2006, 45, 5453–5469. [Google Scholar] [CrossRef] [PubMed]
  12. Horváth, G.; Barta, A.; Gál, J.; Suhai, B.; Haiman, O. Ground-based full-sky imaging polarimetry of rapidly changing skies and its use for polarimetric cloud detection. Appl. Opt. 2002, 41, 543–559. [Google Scholar] [CrossRef] [PubMed]
  13. Voss, K.J.; Souaidia, N. POLRADS: Polarization radiance distribution measurement system. Opt. Express 2010, 18, 19672–19680. [Google Scholar] [CrossRef] [PubMed]
  14. Zhang, Y.; Zhao, H.; Song, P.; Shi, S.; Xu, W.; Liang, X. Ground-based full-sky imaging polarimeter based on liquid crystal variable retarders. Opt. Express 2014, 22, 8749–8764. [Google Scholar] [CrossRef] [PubMed]
  15. Wang, Y.; Hu, X.; Lian, J.; Zhang, L.; Xian, Z.; Ma, T. Design of a Device for Sky Light Polarization Measurements. Sensors 2014, 14, 14916–14931. [Google Scholar] [CrossRef] [PubMed]
  16. Wang, D.; Liang, H.; Zhu, H.; Zhang, S. A Bionic Camera-Based Polarization Navigation Sensor. Sensors 2014, 14, 13006–13023. [Google Scholar] [CrossRef] [PubMed]
  17. McMaster, W.H. Polarization and the Stokes Parameters. Am. J. Phys. 1954, 22. [Google Scholar] [CrossRef]
  18. Li, L.; Li, Z.Q.; Wendisch, M. Simulation of the influence of aerosol particles on Stokes parameters of polarized skylight. In Proceedings of the IOP Conference Series: Earth and Environmental Science, Beijing, China, 21–26 April 2014; pp. 12–26.
  19. Pust, N.J.; Shaw, J.A. All-sky polarization imaging. In Proceedings of the SPIE 6682, Polarization Science and Remote Sensing III, San Diego, CA, USA, 13 September 2007.
  20. Miyazaki, D.; Ammar, M.; Kawakami, R.; Ikeuchi, K. Estimating sunlight polarization using a fish-eye lens. IPSJ Trans. Comput. Vis. Appl. 2009, 1, 288–300. [Google Scholar] [CrossRef]
  21. Voss, K.J.; Chapin, A.L. Upwelling radiance distribution camera system, NURADS. Opt. Express 2005, 13, 4250–4262. [Google Scholar] [CrossRef] [PubMed]
  22. Voss, K.J.; Zibordi, G. Radiometric and geometric calibration of a spectral electro-optic “fisheye” camera radiance distribution system. J. Atmos. Ocean. Technol. 1989, 6, 652–662. [Google Scholar] [CrossRef]
  23. Powell, S.B.; Gruev, V. Calibration methods for division-of-focal-plane polarimeters. Opt. Express 2013, 21, 21039–21055. [Google Scholar] [CrossRef] [PubMed]
  24. Zhang, Z. A flexible new technique for camera calibration. IEEE Trans. Pattern Anal. Mach. Intell. 2000, 22, 1330–1334. [Google Scholar] [CrossRef]
