# Performance Characterization of the Smartphone Video Guidance Sensor as Vision-Based Positioning System


## Abstract


## 1. Introduction

**Target Pattern and Coordinate System**. The target pattern used for SVGS is a modified version of the AVGS pattern. A target example is shown on a 3U CubeSat mockup in Figure 1. Three illuminated targets are mounted coplanar (at the edges of the long face of the CubeSat), while a fourth is mounted on a boom. By placing the fourth illuminated target out of plane relative to the others, the accuracy of the relative attitude calculations is increased. The SVGS target spacecraft coordinate system is defined as follows: (i) the origin of the coordinate system is at the base of the boom, (ii) the y-axis points along the direction from target 2 to target 1, (iii) the z-axis points from the origin towards target 3, and (iv) the x-axis completes the right-handed triad. The 6-DOF position/attitude vector calculated by the SVGS algorithm is defined in a coordinate system with the same orientation as above but with the origin located at the center of the image plane formed by the smartphone camera.
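The frame construction described above can be sketched numerically. This is a minimal illustration, not SVGS code: the target positions are the R1 values from Table 1, and all variable names are ours.

```python
import numpy as np

# R1 target positions in the target frame (Table 1), used to illustrate
# how the frame axes follow from the four illuminated targets.
t1 = np.array([0.0, 0.143, 0.0])    # target 1
t2 = np.array([0.0, -0.143, 0.0])   # target 2
t3 = np.array([0.0, 0.0, 0.051])    # target 3
origin = np.array([0.0, 0.0, 0.0])  # (i) origin at the base of the boom

y_axis = (t1 - t2) / np.linalg.norm(t1 - t2)          # (ii) target 2 -> target 1
z_axis = (t3 - origin) / np.linalg.norm(t3 - origin)  # (iii) origin -> target 3
x_axis = np.cross(y_axis, z_axis)                     # (iv) right-handed triad

print(x_axis)  # -> [1. 0. 0.]
```

With the Table 1 geometry the recovered axes reduce to the standard basis, confirming the targets are laid out along the frame's own axes.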

**Collinearity Equations**. A vector from the camera's perspective center to target point "A" in object space is $\mathbf{V}_A$, while a vector to its image point "a" from the perspective center is $\mathbf{v}_a$:

$$\mathbf{v}_a = \frac{1}{\lambda}\, M\, \mathbf{V}_A$$

where $\lambda$ is a scale factor. Expanding this relation and eliminating $\lambda$ yields the collinearity condition functions $F_x$ and $F_y$, where the $m_{ij}$ values are elements of the direction cosine matrix $M$:

$$F_x = x_a + f\,\frac{m_{11}(X_A - X_L) + m_{12}(Y_A - Y_L) + m_{13}(Z_A - Z_L)}{m_{31}(X_A - X_L) + m_{32}(Y_A - Y_L) + m_{33}(Z_A - Z_L)} = 0 \quad (3)$$

$$F_y = y_a + f\,\frac{m_{21}(X_A - X_L) + m_{22}(Y_A - Y_L) + m_{23}(Z_A - Z_L)}{m_{31}(X_A - X_L) + m_{32}(Y_A - Y_L) + m_{33}(Z_A - Z_L)} = 0 \quad (4)$$

Here $(x_a, y_a)$ are the image coordinates of point "a", $f$ is the focal length, $(X_A, Y_A, Z_A)$ is the target position, and $(X_L, Y_L, Z_L)$ is the position of the perspective center. Linearizing $F_x$ and $F_y$ using a Taylor series expansion truncated after the second term yields:

$$F_x(\mathbf{V}) \approx F_x(\mathbf{V}_0) + \frac{\partial F_x}{\partial \mathbf{V}}\bigg|_{\mathbf{V}_0}\,\Delta\mathbf{V} + \varepsilon_x, \qquad F_y(\mathbf{V}) \approx F_y(\mathbf{V}_0) + \frac{\partial F_y}{\partial \mathbf{V}}\bigg|_{\mathbf{V}_0}\,\Delta\mathbf{V} + \varepsilon_y$$

where $\mathbf{V}_0$ is an initial guess for the state vector, and $\Delta\mathbf{V}$ is the difference between this guess and the actual state vector:

$$\Delta\mathbf{V} = \mathbf{V} - \mathbf{V}_0$$

and $\varepsilon_x$ and $\varepsilon_y$ are the x and y errors due to the Taylor series approximation. Each of the four targets in the SVGS target pattern has a corresponding set of these two equations; the resulting eight equations can be represented in matrix form using the following notation:

$$\boldsymbol{\varepsilon} = \mathbf{F}(\mathbf{V}_0) + A\,\Delta\mathbf{V}$$

where $A$ is the $8 \times 6$ Jacobian of the collinearity functions with respect to the state vector.
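The linearized system above is solved by iterating a least-squares update, $\Delta\mathbf{V} = -(A^{\mathsf T}A)^{-1}A^{\mathsf T}\mathbf{F}$. The same Gauss-Newton structure can be shown on a toy problem; this sketch estimates a 2-D position from range measurements to known beacons rather than from the collinearity equations, and every number in it is illustrative.

```python
import numpy as np

# Toy Gauss-Newton: linearize a nonlinear residual, solve the normal
# equations for the state update, and iterate -- the same structure used
# for the eight collinearity equations.
beacons = np.array([[0.0, 0.0], [4.0, 0.0], [0.0, 3.0]])
true_pos = np.array([1.0, 2.0])
ranges = np.linalg.norm(beacons - true_pos, axis=1)  # noise-free measurements

V = np.array([3.0, 0.5])  # initial state guess V0
for _ in range(20):
    diff = V - beacons
    pred = np.linalg.norm(diff, axis=1)
    F = pred - ranges                        # residual vector F(V)
    A = diff / pred[:, None]                 # Jacobian dF/dV at V
    dV = -np.linalg.solve(A.T @ A, A.T @ F)  # least-squares update
    V = V + dV
    if np.linalg.norm(dV) < 1e-12:           # converged
        break

print(np.round(V, 6))  # converges to the true position [1, 2]
```

With noise-free measurements and a well-conditioned geometry the iteration converges to the true state in a handful of updates; SVGS applies the same loop to the 8×6 collinearity Jacobian.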

**SVGS Collinearity Formulation**. In SVGS, the general form of the collinearity equations described above is narrowed to reflect the state vector formulation used by AVGS. AVGS sensor measurements used angle pairs, azimuth and elevation, measured in the image frame to define the location of each retro-reflective target in the image. Azimuth and elevation are measured with respect to a vector to the perspective center and the target locations in the captured image. Azimuth, $A_z$ (Equation (11)), and elevation, $E_l$ (Equation (12)), replace the image-plane coordinates in Equations (3) and (4):

$$A_z = \tan^{-1}\!\left(\frac{x_a}{f}\right) \quad (11) \qquad E_l = \tan^{-1}\!\left(\frac{y_a}{f}\right) \quad (12)$$

Substituting these angle pairs into Equations (3) and (4) yields the angle-based collinearity condition functions used in the least-squares solution.
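The angle-pair measurement can be sketched as a small helper. This is a sketch under an assumed pinhole arctangent convention; the paper's exact AVGS definitions are its Equations (11) and (12), and the function name and focal length below are ours.

```python
import math

def az_el(x_img, y_img, focal_len):
    """Azimuth and elevation of an image point, assuming a pinhole-camera
    arctangent convention (illustrative, not the SVGS source)."""
    az = math.atan2(x_img, focal_len)  # angle in the image x / boresight plane
    el = math.atan2(y_img, focal_len)  # angle in the image y / boresight plane
    return az, el

# A point on the boresight has zero azimuth and elevation.
print(az_el(0.0, 0.0, 0.028))  # -> (0.0, 0.0)
```

A point whose image offset equals the focal length subtends 45 degrees, which is a quick sanity check on the convention.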

**Implementation of the SVGS Algorithm.** The SVGS calculation begins with the capture, by the smartphone camera, of the illuminated pattern on the target spacecraft. The image is then processed: it is first converted to a binary image using a specified brightness threshold. Blob extraction is performed on the binary image to find all bright-spot locations, and the location, size and shape characteristics of each blob are recorded. If other objects in the field of view generate bright background-noise spots, the number of blobs may exceed the number of targets. To reject noise and correctly label each target, a subset of four blobs is selected from among all those identified, and basic geometric alignment checks derived from the known orientation of the targets are applied. This process is iterated until the four targets have been identified and properly labeled. The target centroids are then fed into the state determination algorithms, and the relative state is determined from the collinearity equation formulation using a least-squares procedure. The SVGS algorithm flow is shown in Figure 3 [7].
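The binarize, blob-extraction, and centroid steps can be sketched from scratch. This is a minimal illustration of the pipeline described above, not the SVGS implementation; the function name, connectivity choice, and synthetic frame are ours.

```python
def find_blobs(img, threshold):
    """Threshold a grayscale image (list of pixel rows) and return the
    centroid of each connected bright region (4-connectivity)."""
    rows, cols = len(img), len(img[0])
    visited = set()
    centroids = []
    for r in range(rows):
        for c in range(cols):
            if img[r][c] > threshold and (r, c) not in visited:
                stack, pixels = [(r, c)], []
                visited.add((r, c))
                while stack:  # flood-fill one blob
                    pr, pc = stack.pop()
                    pixels.append((pr, pc))
                    for nr, nc in ((pr + 1, pc), (pr - 1, pc),
                                   (pr, pc + 1), (pr, pc - 1)):
                        if (0 <= nr < rows and 0 <= nc < cols
                                and img[nr][nc] > threshold
                                and (nr, nc) not in visited):
                            visited.add((nr, nc))
                            stack.append((nr, nc))
                n = len(pixels)
                centroids.append((sum(p[0] for p in pixels) / n,
                                  sum(p[1] for p in pixels) / n))
    return centroids

# Synthetic 8x8 frame with two bright 2x2 "targets".
frame = [[0] * 8 for _ in range(8)]
for r in range(1, 3):
    for c in range(1, 3):
        frame[r][c] = 255
for r in range(5, 7):
    for c in range(4, 6):
        frame[r][c] = 255

print(find_blobs(frame, 128))  # -> [(1.5, 1.5), (5.5, 4.5)]
```

In SVGS the resulting centroid list would then be filtered down to the four blobs that pass the geometric alignment checks before the least-squares state solution.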

## 2. Materials and Methods

#### 2.1. SVGS Coordinate System and Targets

#### 2.2. Overview of SVGS Motion Tests

#### 2.3. Assessment of Linear Motion Measurements

#### 2.4. Assessment of Angular Motion Measurements

## 3. Results

#### 3.1. Linear and Rotational Motion Assessment Tests

#### 3.2. Effect of Sampling Rate on SVGS Performance

#### 3.3. SVGS Error Statistics

## 4. Discussion

## 5. Conclusions

## Author Contributions

## Funding

## Conflicts of Interest

## References

- Mizuchi, Y.; Choi, Y.; Kim, Y.-B.; Hagiwara, Y.; Ogura, T. Vision-based markerless measurement system for relative vessel positioning. IET Sci. Meas. Technol. **2016**, 10, 653–658.
- Ahmadinejad, F.; Farazi, H.; Ghidary, S.S. A low-cost vision-based tracking system for position control of quadrotor robots. In Proceedings of the 2013 First RSI/ISM International Conference on Robotics and Mechatronics, Tehran, Iran, 13–15 February 2013; pp. 356–361.
- Song, M.; Ou, Z.; Castellanos, E.; Ylipiha, T.; Kämäräinen, T.; Siekkinen, M.; Ylä-Jääski, A.; Hui, P. Exploring Vision-Based Techniques for Outdoor Positioning Systems: A Feasibility Study. IEEE Trans. Mob. Comput. **2017**, 16, 3361–3375.
- Cizek, P.; Faigl, J.; Masri, D. Low-latency image processing for vision-based navigation systems. In Proceedings of the 2016 IEEE International Conference on Robotics and Automation (ICRA), Stockholm, Sweden, 16–21 May 2016; pp. 781–786.
- Ahmad, J.; Warren, A. FPGA based Deterministic Latency Image Acquisition and Processing System for Automated Driving Systems. In Proceedings of the 2018 IEEE International Symposium on Circuits and Systems (ISCAS), Florence, Italy, 27–30 May 2018; pp. 1–5.
- Howard, R.T.; Book, M.L.; Bryan, T.C. Video-based sensor for tracking three-dimensional targets. In Proceedings of Atmospheric Propagation, Adaptive Systems, and Laser Radar Technology for Remote Sensing, Barcelona, Spain, 25–28 September 2001; Volume 4167, pp. 242–252.
- Becker, C.; Howard, R.; Rakoczy, J. Smartphone Video Guidance Sensor for Small Satellites. In Proceedings of the 27th Annual AIAA/USU Conference on Small Satellites, Logan, UT, USA, 8–13 August 2013.
- Rakoczy, J. Application of the Photogrammetric Collinearity Equations to the Orbital Express Advanced Video Guidance Sensor Six Degree-of-Freedom Solution; Technical Memorandum; Marshall Space Flight Center: Huntsville, AL, USA, 2003.
- Bryan, T.C.; Howard, R.; Johnson, J.E.; Lee, J.E.; Murphy, L.; Spencer, S.H. Next Generation Advanced Video Guidance Sensor. In Proceedings of the 2008 IEEE Aerospace Conference, Big Sky, MT, USA, 1–8 March 2008.
- Mullins, L.; Heaton, A.; Lomas, J. Advanced Video Guidance Sensor Inverse Perspective Algorithm; Marshall Space Flight Center: Huntsville, AL, USA, 2003.
- Howard, R.; Bryan, T.; Lee, J.; Robertson, B. Next Generation Advanced Video Guidance Sensor: Development and Test; NASA AAS 09-064; NASA: Washington, DC, USA, 2009.
- Howard, R.T.; Heaton, A.F.; Pinson, R.M.; Carrington, C.K. Orbital Express Advanced Video Guidance Sensor. In Proceedings of the 2008 IEEE Aerospace Conference, Big Sky, MT, USA, 1–8 March 2008; pp. 1–10.
- Howard, R.T.; Bryan, T.C. DART AVGS flight results. In Sensors and Systems for Space Applications; Howard, R.T., Ed.; SPIE: Bellingham, WA, USA, 2007; Volume 6555, p. 65550L.
- Howard, R.T.; Bryan, T.C.; Brewster, L.L.; Lee, J.E. Proximity operations and docking sensor development. In Proceedings of the 2009 IEEE Aerospace Conference, Big Sky, MT, USA, 7–14 March 2009; pp. 1–10.
- Moffitt, F.H.; Mikhail, E.M. Photogrammetry; Harper & Row: New York, NY, USA, 1980.
- The MathWorks, Inc. MATLAB Release R2019a; The MathWorks, Inc.: Natick, MA, USA, 2019.

**Figure 1.** The operational concept of Smartphone Video Guidance Sensor (SVGS) [7]. The target's six-degrees-of-freedom (DOF) state can be transmitted from the SVGS device to the spacecraft's guidance, navigation and control system (GN&C).

**Figure 3.** SVGS algorithm flow [7].

**Figure 5.** SVGS targets: (**a**) retroreflective target R1, (**b**) smaller retroreflective target R2 and (**c**) LED target.

**Figure 10.** Angular motion testbed: (**a**) roll axis, (**b**) rotational platform and (**c**) experimental setup.

**Figure 11.** (**a**) Actual motion and SVGS position estimates (z-axis); target moving at 1 m p–p sinusoidal at 0.008 Hz. (**b**) Histogram of position error (z-axis); target moving at 1 m p–p sinusoidal at 0.008 Hz.

**Figure 12.** (**a**) Actual velocity (measured by encoder) and SVGS velocity estimates (z-axis) for target moving at 1 m p–p sinusoidal at 0.008 Hz. (**b**) Histogram of velocity error (z-axis) for target moving at 1 m p–p sinusoidal at 0.008 Hz.

**Figure 13.** (**a**) Angular position measured by encoder and SVGS-based angular-position estimate (roll-axis). (**b**) Histogram of angular-position measurement error (roll-axis).

**Figure 14.** (**a**) Angular velocity measured by encoder and SVGS-based angular velocity estimate in roll-axis. (**b**) Histogram of angular-velocity estimate error in roll-axis.

**Figure 15.** Actual SVGS sampling time (**left**), and histogram of sampling time distribution (**right**).

**Figure 17.** (**a**) Mean error and standard deviation in linear position, z-axis. (**b**) Mean error and standard deviation in linear velocity estimation, z-axis.

**Figure 18.** (**a**) Mean error and standard deviation in linear position, x-axis. (**b**) Mean error and standard deviation in linear velocity estimation, x-axis.

**Figure 19.** (**a**) Mean error and standard deviation in angular position, roll-axis. (**b**) Mean error and standard deviation in angular velocity estimation, roll-axis.

**Table 1.** Dimensions of SVGS targets. The target coordinate system is defined in Figure 1, left.

| Target Type | Target Number | X (m) | Y (m) | Z (m) |
|---|---|---|---|---|
| R_{1} | 1 | 0 | 0.143 | 0 |
| | 2 | 0 | −0.143 | 0 |
| | 3 | 0 | 0 | 0.051 |
| | 4 | 0.094 | 0 | 0 |
| R_{2} | 1 | 0 | 0.041 | 0 |
| | 2 | 0 | −0.041 | 0 |
| | 3 | 0 | 0 | 0.022 |
| | 4 | 0.036 | 0 | 0 |
| LED | 1 | 0 | 0.055 | 0 |
| | 2 | 0 | −0.055 | 0 |
| | 3 | 0 | 0 | 0.037 |
| | 4 | 0.048 | 0 | 0 |

© 2020 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (http://creativecommons.org/licenses/by/4.0/).

Hariri, N.; Gutierrez, H.; Rakoczy, J.; Howard, R.; Bertaska, I. Performance Characterization of the Smartphone Video Guidance Sensor as Vision-Based Positioning System. *Sensors* **2020**, *20*, 5299. https://doi.org/10.3390/s20185299