Vision-Based Detection and Distance Estimation of Micro Unmanned Aerial Vehicles
Abstract
1. Introduction
2. Related Studies
2.1. Object Detection and Recognition Approaches with Computer Vision
2.1.1. Keypoint-Based Approaches
2.1.2. Hierarchical and Cascaded Approaches
2.2. Detection and Localization of mUAVs with Computer Vision
2.3. Detection and Localization of mUAVs with Other Modalities
2.4. The Current Study
Study | Vehicle | Detection Method | Detection Performance Reported | Motion Blur Analysis | Training Time Reported | Testing Time Reported | Background Complexity | Environment | Distance Estimation
---|---|---|---|---|---|---|---|---|---
Lin et al., 2014 | mUAV | Boosted cascaded classifiers with Haar-like features | No | No | No | No | Medium | Outdoor | Yes (low accuracy) |
Zhang et al., 2014 | mUAV | Boosted cascaded classifiers with Haar-like features | No | No | No | No | Medium | Outdoor | Yes (low accuracy) |
Petridis et al., 2008 | Aircraft | Boosted cascaded classifiers with Haar-like features | Yes | No | No | No | High | Outdoor | No |
Dey et al., 2009; 2011 | Aircraft | Morphological filtering | Yes | No | NA | No | Low | Outdoor | No |
Lai et al., 2011 | mUAV | Morphological filtering | Yes | Yes | NA | Yes | High | Outdoor | No |
Current study | mUAV | Boosted cascaded classifiers with Haar-like, LBP and HOG features | Yes | Yes | Yes | Yes | High | Indoor and Outdoor | Yes |
3. Methods
3.1. A Cascaded Approach to mUAV Detection
Algorithm 2: Learning a Cascade of Classifiers (adapted from [34]).
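As a reading aid, the following Python sketch illustrates the general stage-wise cascade-training idea that Algorithm 2 follows, in the spirit of Viola and Jones [34]. It is not the authors' exact procedure: feature extraction is abstracted away (each sample is a plain feature vector), scikit-learn's AdaBoost stands in for the boosted stage classifiers, and all parameter values are illustrative assumptions.

```python
# Illustrative sketch only: stage-wise cascade training on abstract feature
# vectors, with scikit-learn's AdaBoost standing in for the boosted stage
# classifiers. Parameter values are assumptions, not the paper's settings.
import numpy as np
from sklearn.ensemble import AdaBoostClassifier

def train_cascade(pos, neg, n_stages=5, min_hit_rate=0.995):
    """Train a cascade of boosted classifiers; each stage keeps at least
    min_hit_rate of the positives, and only the negatives it fails to reject
    (its false positives) are passed on to train the next stage."""
    stages = []
    for _ in range(n_stages):
        if len(neg) == 0:  # every negative already rejected by earlier stages
            break
        X = np.vstack([pos, neg])
        y = np.hstack([np.ones(len(pos)), np.zeros(len(neg))])
        clf = AdaBoostClassifier(n_estimators=50).fit(X, y)
        # Set the stage threshold so that >= min_hit_rate of positives pass.
        thr = np.quantile(clf.decision_function(pos), 1.0 - min_hit_rate)
        stages.append((clf, thr))
        neg = neg[clf.decision_function(neg) >= thr]  # keep only false positives
    return stages

def cascade_predict(stages, X):
    """A window is accepted only if it passes every stage of the cascade."""
    accepted = np.ones(len(X), dtype=bool)
    for clf, thr in stages:
        accepted &= clf.decision_function(X) >= thr
    return accepted

# Toy data: synthetic "object" and "background" feature vectors.
rng = np.random.default_rng(0)
pos = rng.normal(1.0, 1.0, size=(200, 16))
neg = rng.normal(0.0, 1.0, size=(2000, 16))
stages = train_cascade(pos, neg)
print(cascade_predict(stages, pos).mean(), cascade_predict(stages, neg).mean())
```

The defining property mirrored here is that later stages are trained on the hard negatives surviving earlier stages, which is what makes the final cascade both fast (most windows are rejected early) and selective.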
3.1.1. Integral Images
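Integral images (summed-area tables) are what make Haar-like features cheap to evaluate: after a single pass over the frame, the sum of any axis-aligned rectangle costs only four array lookups. A minimal NumPy/OpenCV illustration follows; the image and rectangle coordinates are arbitrary placeholders.

```python
# Minimal illustration of an integral image (summed-area table): one pass over
# the frame, then the sum of any rectangle costs four lookups, which is what
# makes Haar-like features cheap to evaluate at all scales.
import cv2
import numpy as np

img = np.random.randint(0, 256, (480, 640), dtype=np.uint8)  # stand-in frame
ii = cv2.integral(img)  # (481, 641) table with an extra zero row and column

def box_sum(ii, x, y, w, h):
    """Sum of pixels inside the w x h rectangle with top-left corner (x, y)."""
    return int(ii[y + h, x + w] - ii[y, x + w] - ii[y + h, x] + ii[y, x])

# Example two-rectangle Haar-like response: left half minus right half.
response = box_sum(ii, 100, 100, 12, 24) - box_sum(ii, 112, 100, 12, 24)
print(response)
```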
3.2. Cascaded Detection Using Haar-like Features (C-HAAR)
3.3. Cascaded Detection Using Local Binary Patterns (C-LBP)
3.4. Cascaded Detection Using Histogram of Oriented Gradients (C-HOG)
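Cascades of this kind can be trained and run with the OpenCV library [82]. As a purely illustrative sketch of how a trained cascade is applied at test time, the snippet below scans a gray-scale frame with cv2.CascadeClassifier; the cascade file, test frame and detectMultiScale parameters are hypothetical placeholders, not the settings used in this study.

```python
# Purely illustrative test-time usage of a trained cascade with OpenCV [82].
# "muav_cascade.xml" and "test_frame.png" are hypothetical placeholders for a
# cascade trained offline (opencv_traincascade produces such files for
# Haar-like and LBP features; older versions also supported HOG) and a frame.
import cv2

cascade = cv2.CascadeClassifier("muav_cascade.xml")
frame = cv2.imread("test_frame.png", cv2.IMREAD_GRAYSCALE)

if not cascade.empty() and frame is not None:
    boxes = cascade.detectMultiScale(frame, scaleFactor=1.1, minNeighbors=3,
                                     minSize=(24, 24))  # illustrative parameters
    for (x, y, w, h) in boxes:
        print("mUAV candidate at ({}, {}), size {}x{}".format(x, y, w, h))
```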
3.5. Distance Estimation
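For orientation, a generic pinhole-camera relation links the detected bounding box to distance: Z ≈ f·W/w, where f is the focal length in pixels, W the physical width of the target and w the bounding-box width in pixels. This is a textbook relation rather than necessarily the exact estimator used in this study; the 60-cm motor-to-motor span quoted in Section 4 is used only for illustration.

```python
# Generic pinhole-camera relation, not necessarily the estimator of this study:
# Z = f * W / w, with focal length f in pixels, physical target width W and
# bounding-box width w in pixels. The 0.60 m span comes from Section 4; the
# focal length and box width below are illustrative assumptions.
def distance_from_bbox(bbox_width_px, focal_px, real_width_m=0.60):
    """Estimated distance (m) to a target of known physical width."""
    return focal_px * real_width_m / bbox_width_px

print(distance_from_bbox(bbox_width_px=80, focal_px=800))  # -> 6.0 m
```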
4. Experimental Setup and Data Collection
- mUAV: We use a quadrotor platform, shown in Figure 6a. The open-source Arducopter [77] hardware and software are used as the flight controller. The distance between the motors on the same axis is 60 cm. Twelve markers are placed around the plastic cup of the quadrotor, for which we define a rigid body. The body coordinate frame of the quadrotor is illustrated in Figure 6a. The x-axis and y-axis point towards the forward and right directions of the quadrotor, respectively, and the z-axis points downwards with respect to the quadrotor.
- Camera: We use two different electro-optic cameras for indoors and outdoors due to the differing needs of the two environments. Indoors, the synchronization capability of the camera is vital, since we have to ensure that the 3D position data obtained from the motion capture system and the captured frames are synchronized in time. Complying with this requirement, we use a camera from Basler (capturing gray-scale videos at a resolution of … at 30 fps) mounted on top of the motion capture system. It weighs about 220 g, including its lens, whose maximum horizontal and vertical angles of view are … and …, respectively. The power consumption of the camera is about 3 W, and it outputs data through a Gigabit Ethernet port. The body coordinate frame of the camera is centered at the projection center. The x-axis points towards the right side of the camera; the y-axis points down from the camera; and the z-axis coincides with the optical axis of the camera lens, as depicted in Figure 6b. Due to difficulties in powering and recording with the indoor camera outdoors, we use another camera (a Canon PowerShot A2200 HD) to capture the outdoor videos. This camera records color videos at a resolution of … at 30 fps; however, we use gray-scale versions of the videos in our study. Although we needed a different camera outdoors due to logistical issues, we note that our indoor camera is suitable for mounting on mUAVs in terms of SWaP constraints. Moreover, alternative cameras with image quality similar to ours, and even lower SWaP requirements, are available on the market.
- Motion capture system (used for indoor analysis): We use the VZ4000 3D real-time motion capture system (MOCAP) from PhoeniX Technologies Incorporated, which can sense the 3D positions of active markers at a rate of up to 4348 real-time 3D data points per second with an accuracy of ∼… mm RMS in ∼190 cubic meters of space. In our setup, the MOCAP provides the ground-truth 3D positions of the markers mounted on the quadrotor. The system provides the 3D data labeled with the unique IDs of the markers. It has an operating angle of … in both pitch and yaw, and its maximum sensing distance is 7 m at minimum exposure. The body coordinate frame of the MOCAP is illustrated in Figure 6c.
- Linear rail platform (used for indoor analysis): We constructed a motorized linear rail platform to move the camera and the MOCAP together in a controlled manner, so that we are able to capture videos of the quadrotor with single motion types only, i.e., lateral, up-down, rotational and approach-leave motions. With this platform, we are able to move the camera and MOCAP assembly along a horizontal path of approximately 5 m at speeds of up to 1 m/s.
4.1. Ground Truth Extraction
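Ground-truth positions are measured by the MOCAP in its own coordinate frame, while detections live in the camera frame, so the two frames must be registered. The closed-form absolute-orientation solutions of Horn [80] and Umeyama [81] are the standard tools for this; the SVD-based sketch below illustrates the idea, without claiming to reproduce the authors' exact calibration pipeline.

```python
# SVD-based absolute orientation in the spirit of Horn [80] / Umeyama [81]:
# the least-squares rigid transform (R, t) mapping one set of 3D points onto
# another. A sketch of the idea, not the authors' exact pipeline.
import numpy as np

def absolute_orientation(P, Q):
    """Rigid transform (R, t) minimizing ||(R @ P.T).T + t - Q|| for Nx3 arrays."""
    cp, cq = P.mean(axis=0), Q.mean(axis=0)
    H = (P - cp).T @ (Q - cq)                       # cross-covariance
    U, _, Vt = np.linalg.svd(H)
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])  # no reflection
    R = Vt.T @ D @ U.T
    t = cq - R @ cp
    return R, t

# Toy check: recover a known rotation and translation from noiseless points.
rng = np.random.default_rng(1)
P = rng.normal(size=(12, 3))          # synthetic stand-ins for marker positions
theta = 0.3
R_true = np.array([[np.cos(theta), -np.sin(theta), 0.0],
                   [np.sin(theta),  np.cos(theta), 0.0],
                   [0.0, 0.0, 1.0]])
Q = P @ R_true.T + np.array([0.5, -0.2, 1.0])
R, t = absolute_orientation(P, Q)
print(np.allclose(R, R_true), np.allclose(t, [0.5, -0.2, 1.0]))
```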
4.2. Data Collection for Training
4.3. Data Collection for Testing
- Lateral: The camera performs left-to-right or right-to-left maneuvers while the quadrotor is fixed at different positions, as illustrated in Figure 10. As seen in the top view, the perpendicular distance of the quadrotor to the camera's motion course is changed in 1-m steps over 5 distances. For each distance, the height of the quadrotor is adjusted to 3 different levels (top, middle and bottom) 1 m apart, making a total of 15 different positions for the lateral videos. Left-to-right and right-to-left videos collected in this manner allow us to test the features' resilience against large background changes. In each video, the camera is moved along an approximately 5-m path. However, when the perpendicular distance is 1 m or 2 m, the quadrotor is not fully visible in the videos for the top and bottom levels. Therefore, these videos are excluded from the dataset, resulting in 22 videos with a total of 2543 frames.
- Up-down: The quadrotor performs a vertical motion from the floor to the ceiling for the up motion and vice versa for the down motion. The motion of the quadrotor is performed manually with the help of a hanging rope. The change in the height of the quadrotor is approximately 3 m in each video. During the motion of the quadrotor, the camera remains fixed. For each of the 5 different positions shown in Figure 10, one up and one down video are recorded, resulting in 10 videos with a total of 1710 frames. These videos are used for testing the features’ resilience against large appearance changes.
- Yaw: The quadrotor turns around itself in a clockwise or counterclockwise direction while both the camera and the quadrotor remain in fixed positions. The quadrotor is positioned at the same 15 points used in the lateral videos. Since the quadrotor is not fully visible in the videos recorded for the top and bottom levels when the perpendicular distance is 1 m or 2 m, these videos are omitted from the dataset. Hence, there are 22 videos with a total of 8107 frames in this group. These videos are used for testing the features' resilience against viewpoint changes causing large appearance changes.
- Approach-leave: In these videos, the camera approaches the quadrotor or leaves from it while the quadrotor is stationary. There are 9 different positions for the quadrotor, separated by 1 m, as illustrated in Figure 10. The motion path of the camera is approximately 5 m. Approach and leave videos are recorded separately, and we have 18 videos with a total of 3574 frames in this group. These videos are used for testing whether the features are affected by large scale and appearance changes.
 | Lateral | Up-Down | Yaw | Approach-Leave
---|---|---|---|---
Scale | Moderate | Moderate | Small | Large |
Appearance | Moderate | Large | Large | Large |
Background | Large | No Change | No Change | Moderate |
5. Results
5.1. Performance Metrics
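The evaluations below are reported as F-scores. As a reminder of how such scores are typically computed for detection, the sketch below matches detections to ground-truth boxes with a Jaccard (intersection-over-union) overlap criterion [84] and combines precision and recall into F = 2PR/(P + R); the exact matching rules and overlap threshold used in this study are not restated here, so the values in the sketch are placeholders.

```python
# Sketch of F-score computation for detection with a Jaccard/IoU overlap
# criterion [84]. The matching rules and the 0.5 threshold are common
# conventions, assumed here rather than taken from the paper.
def jaccard(a, b):
    """Intersection-over-union of two boxes given as (x, y, w, h)."""
    ax2, ay2 = a[0] + a[2], a[1] + a[3]
    bx2, by2 = b[0] + b[2], b[1] + b[3]
    iw = max(0, min(ax2, bx2) - max(a[0], b[0]))
    ih = max(0, min(ay2, by2) - max(a[1], b[1]))
    inter = iw * ih
    union = a[2] * a[3] + b[2] * b[3] - inter
    return inter / union if union else 0.0

def f_score(detections, truths, iou_thr=0.5):
    """F = 2PR / (P + R); a detection counts as correct if it overlaps some
    ground-truth box with IoU >= iou_thr (simplified matching)."""
    tp = sum(any(jaccard(d, t) >= iou_thr for t in truths) for d in detections)
    precision = tp / len(detections) if detections else 0.0
    recall = tp / len(truths) if truths else 0.0
    return 2 * precision * recall / (precision + recall) if precision + recall else 0.0

print(f_score([(10, 10, 50, 50)], [(12, 8, 48, 52)]))  # -> 1.0
```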
5.2. Indoor Evaluation
Feature Type | C-HAAR |  |  |  |  | C-LBP |  |  |  |  | C-HOG |  |  |  
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
Number of Stages | 11 | 13 | 15 | 17 | 19 | 11 | 13 | 15 | 17 | 19 | 11 | 13 | 15 | 17 | 19 |
Maximum F-Score | 0.903 | 0.920 | 0.836 | 0.958 | 0.976 | 0.904 | 0.936 | 0.940 | 0.962 | 0.964 | 0.818 | 0.848 | 0.842 | 0.839 | 0.862 |
F-Score at Default Threshold | 0.058 | 0.143 | 0.286 | 0.570 | 0.822 | 0.104 | 0.345 | 0.774 | 0.943 | 0.954 | 0.404 | 0.550 | 0.627 | 0.664 | 0.716 |
5.3. Outdoor Evaluation
Video Set | Feature Type | C-HAAR |  |  |  |  | C-LBP |  |  |  |  | C-HOG |  |  |  
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---
 | Number of Stages | 11 | 13 | 15 | 17 | 19 | 11 | 13 | 15 | 17 | 19 | 11 | 13 | 15 | 17 | 19
CALM | Maximum F-Score | 0.979 | 0.987 | 0.991 | 0.991 | 0.997 | 0.930 | 0.951 | 0.953 | 0.977 | 0.985 | 0.846 | 0.822 | 0.781 | 0.732 | 0.842
 | F-Score at Default Threshold | 0.036 | 0.112 | 0.248 | 0.536 | 0.734 | 0.040 | 0.095 | 0.266 | 0.670 | 0.930 | 0.118 | 0.144 | 0.168 | 0.189 | 0.216
AGILE | Maximum F-Score | 0.965 | 0.983 | 0.988 | 0.987 | 0.989 | 0.887 | 0.902 | 0.890 | 0.947 | 0.942 | 0.719 | 0.735 | 0.619 | 0.600 | 0.713
 | F-Score at Default Threshold | 0.034 | 0.108 | 0.282 | 0.727 | 0.906 | 0.041 | 0.094 | 0.260 | 0.704 | 0.920 | 0.121 | 0.146 | 0.168 | 0.188 | 0.211
MOVING BACKGROUND | Maximum F-Score | 0.955 | 0.965 | 0.969 | 0.963 | 0.967 | 0.935 | 0.870 | 0.940 | 0.954 | 0.964 | 0.797 | 0.840 | 0.785 | 0.777 | 0.832
 | F-Score at Default Threshold | 0.030 | 0.084 | 0.169 | 0.274 | 0.441 | 0.043 | 0.111 | 0.269 | 0.480 | 0.747 | 0.158 | 0.180 | 0.199 | 0.216 | 0.234
OVERALL | Maximum F-Score | 0.955 | 0.972 | 0.977 | 0.973 | 0.975 | 0.906 | 0.869 | 0.915 | 0.949 | 0.957 | 0.770 | 0.801 | 0.707 | 0.672 | 0.781
 | F-Score at Default Threshold | 0.033 | 0.099 | 0.221 | 0.429 | 0.627 | 0.042 | 0.100 | 0.265 | 0.594 | 0.850 | 0.132 | 0.157 | 0.178 | 0.198 | 0.221
5.4. Performance under Motion Blur
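Motion blur from camera or target motion can be modeled as convolution with a linear point-spread function (PSF) [86]. The sketch below shows how frames with a controlled amount of linear motion blur can be synthesized with OpenCV; the kernel length and angle are illustrative values, not the blur levels measured in this study.

```python
# Synthesizing linear motion blur by convolving a frame with a rotated line
# point-spread function (PSF), cf. [86]. Kernel length and angle below are
# illustrative, not the blur levels measured in the paper.
import cv2
import numpy as np

def linear_motion_psf(length=15, angle_deg=0.0):
    """Normalized PSF: a 1-pixel-wide line of the given length and angle."""
    psf = np.zeros((length, length), dtype=np.float32)
    psf[length // 2, :] = 1.0                        # horizontal line
    center = ((length - 1) / 2.0, (length - 1) / 2.0)
    M = cv2.getRotationMatrix2D(center, angle_deg, 1.0)
    psf = cv2.warpAffine(psf, M, (length, length))
    return psf / psf.sum()

frame = np.random.randint(0, 256, (480, 640), dtype=np.uint8)  # stand-in frame
blurred = cv2.filter2D(frame, -1, linear_motion_psf(21, 30.0))
```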
5.5. Distance Estimation
5.5.1. Time to Collision Estimation Analysis
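Once per-frame distances are available, a time-to-collision (TTC) estimate follows from the closing speed: under a constant-velocity assumption, TTC = Z / (−dZ/dt). The finite-difference sketch below is a generic formulation; the smoothing and exact estimator used in the paper may differ.

```python
# Generic finite-difference time-to-collision from consecutive distance
# estimates, assuming a constant closing speed: TTC = Z / (-dZ/dt). The
# smoothing and exact estimator used in the paper may differ.
def time_to_collision(distances_m, fps=30.0):
    """TTC in seconds from the last two distance estimates; None if not closing."""
    z_prev, z_curr = distances_m[-2], distances_m[-1]
    closing_speed = (z_prev - z_curr) * fps   # m/s, positive when approaching
    return z_curr / closing_speed if closing_speed > 0 else None

print(time_to_collision([6.0, 5.9]))  # 0.1 m closer per frame at 30 fps -> ~1.97 s
```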
5.6. Time Analysis
5.6.1. Training Time Analysis
Feature Type | C-HAAR | C-LBP | C-HOG |
---|---|---|---|
Indoor | 98.31 | 22.94 | 13.53 |
Outdoor | 177.59 | 0.87 | 0.52 |
5.6.2. Testing Time Analysis
5.7. Sample Visual Results
6. Conclusions
Supplementary Files
Supplementary File 1
Acknowledgments
Author Contributions
Conflicts of Interest
References
- Colomina, I.; Molina, P. Unmanned aerial systems for photogrammetry and remote sensing: A review. ISPRS J. Photogramm. Remote Sens. 2014, 92, 79–97. [Google Scholar] [CrossRef]
- Yuan, C.; Zhang, Y.; Liu, Z. A survey on technologies for automatic forest fire monitoring, detection, and fighting using unmanned aerial vehicles and remote sensing techniques. Can. J. For. Res. 2015, 45, 783–792. [Google Scholar] [CrossRef]
- Ackerman, E. When Drone Delivery Makes Sense. IEEE Spectrum. 25 September 2014. Available online: http://spectrum.ieee.org/automaton/robotics/aerial-robots/when-drone-delivery-makes-sense (accessed on 19 August 2015).
- Holmes, K. Man Detained Outside White House for Trying to Fly Drone. CNN. 15 May 2015. Available online: http://edition.cnn.com/2015/05/14/politics/white-house-drone-arrest/ (accessed on 19 August 2015).
- Martinez, M.; Vercammen, P.; Brumfield, B. Above spectacular wildfire on freeway rises new scourge: Drones. CNN. 19 July 2015. Available online: http://edition.cnn.com/2015/07/18/us/california-freeway-fire/ (accessed on 19 August 2015).
- Andreopoulos, A.; Tsotsos, J.K. 50 Years of object recognition: Directions forward. Comput. Vis. Image Underst. 2013, 117, 827–891. [Google Scholar] [CrossRef]
- Campbell, R.J.; Flynn, P.J. A survey of free-form object representation and recognition techniques. Comput. Vis. Image Underst. 2001, 81, 166–210. [Google Scholar] [CrossRef]
- Lowe, D.G. Object recognition from local scale-invariant features. Int. Conf. Comput. Vis. 1999, 2, 1150–1157. [Google Scholar]
- Belongie, S.; Malik, J.; Puzicha, J. Shape matching and object recognition using shape contexts. IEEE Trans. Pattern Anal. Mach. Intell. 2002, 24, 509–522. [Google Scholar] [CrossRef]
- Viola, P.; Jones, M. Rapid object detection using a boosted cascade of simple features. IEEE Conf. Comput. Vis. Pattern Recognit. 2001, 1, 511–518. [Google Scholar]
- Dalal, N.; Triggs, B. Histograms of oriented gradients for human detection. IEEE Conf. Comput. Vis. Pattern Recognit. 2005, 1, 886–893. [Google Scholar]
- Serre, T.; Wolf, L.; Bileschi, S.; Riesenhuber, M.; Poggio, T. Robust object recognition with cortex-like mechanisms. IEEE Trans. Pattern Anal. Mach. Intell. 2007, 29, 411–426. [Google Scholar] [CrossRef] [PubMed]
- Boutell, M.R.; Luo, J.; Shen, X.; Brown, C.M. Learning multi-label scene classification. Pattern Recog. 2004, 37, 1757–1771. [Google Scholar] [CrossRef]
- Rosten, E.; Drummond, T. Machine Learning for High-Speed Corner Detection. Eur. Conf. Comput. Vis. 2006, 3951, 430–443. [Google Scholar]
- Trajkovic, M.; Hedley, M. Fast corner detection. Image Vis. Comput. 1998, 16, 75–87. [Google Scholar] [CrossRef]
- Harris, C.; Stephens, M. A Combined Corner and Edge Detector. In Proceedings of the 4th Alvey Vision Conference, Manchester, UK, 31 August–2 September 1988; pp. 147–151.
- Matas, J.; Chum, O.; Urban, M.; Pajdla, T. Robust Wide Baseline Stereo from Maximally Stable Extremal Regions. In Proceedings of the British Machine Vision Conference, Cardiff, UK, 2–5 September 2002; pp. 36.1–36.10.
- Shi, J.; Tomasi, C. Good features to track. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Seattle, WA, USA, 21–23 June 1994; pp. 593–600.
- Tuytelaars, T.; Mikolajczyk, K. Local invariant feature detectors: A survey. Found. Trends Comput. Graph. Vis. 2008, 3, 177–280. [Google Scholar] [CrossRef] [Green Version]
- Bay, H.; Ess, A.; Tuytelaars, T.; Van Gool, L. Speeded-Up Robust Features (SURF). Comput. Vis. Image Underst. 2008, 110, 346–359. [Google Scholar] [CrossRef]
- Lowe, D.G. Distinctive Image Features from Scale-Invariant Keypoints. Int. J. Comput. Vis. 2004, 60, 91–110. [Google Scholar] [CrossRef]
- Calonder, M.; Lepetit, V.; Strecha, C.; Fua, P. BRIEF: Binary Robust Independent Elementary Features. Eur. Conf. Comput. Vis. 2010, 6314, 778–792. [Google Scholar]
- Rublee, E.; Rabaud, V.; Konolige, K.; Bradski, G.R. ORB: An efficient alternative to SIFT or SURF. Int. Conf. Comput. Vis. 2011, 2564–2571. [Google Scholar]
- Leutenegger, S.; Chli, M.; Siegwart, R.Y. BRISK: Binary Robust Invariant Scalable Keypoints. In Proceedings of the IEEE International Conference on Computer Vision, Barcelona, Spain, 6–13 November 2011; pp. 2548–2555.
- Vandergheynst, P.; Ortiz, R.; Alahi, A. FREAK: Fast Retina Keypoint. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Providence, RI, USA, 16–21 June 2012; pp. 510–517.
- Winn, J.; Criminisi, A.; Minka, T. Object categorization by learned universal visual dictionary. Int. Conf. Comput. Vis. 2005, 2, 1800–1807. [Google Scholar]
- Murphy, K.; Torralba, A.; Eaton, D.; Freeman, W. Object detection and localization using local and global features. In Toward Category-Level Object Recognition; Springer: Berlin/Heidelberg, Germany, 2006; pp. 382–400. [Google Scholar]
- Csurka, G.; Dance, C.R.; Fan, L.; Willamowski, J.; Bray, C. Visual categorization with bags of keypoints. In Proceedings of the Workshop on Statistical Learning in Computer Vision, ECCV, Prague, Czech Republic, 10–16 May 2001; pp. 1–22.
- Cortes, C.; Vapnik, V. Support-vector networks. Mach. Learn. 1995, 20, 273–297. [Google Scholar] [CrossRef]
- Krizhevsky, A.; Sutskever, I.; Hinton, G.E. ImageNet Classification with Deep Convolutional Neural Networks. In Advances in Neural Information Processing Systems (NIPS) 25; Pereira, F., Burges, C., Bottou, L., Weinberger, K., Eds.; Curran Associates, Inc.: New York, NY, USA, 2012; pp. 1097–1105. [Google Scholar]
- LeCun, Y.; Bengio, Y.; Hinton, G. Deep learning. Nature 2015, 521, 436–444. [Google Scholar] [CrossRef] [PubMed]
- Dietterich, T.G. Ensemble methods in machine learning. In Multiple Classifier Systems; Springer: Berlin/Heidelberg, Germany, 2000; pp. 1–15. [Google Scholar]
- Rowley, H.A.; Baluja, S.; Kanade, T. Neural network-based face detection. IEEE Trans. Pattern Anal. Mach. Intell. 1998, 20, 23–38. [Google Scholar] [CrossRef]
- Viola, P.; Jones, M.J. Robust real-time face detection. Int. J. Comput. Vis. 2004, 57, 137–154. [Google Scholar] [CrossRef]
- Freund, Y.; Schapire, R.E. A decision-theoretic generalization of on-line learning and an application to boosting. In Computational Learning Theory; Springer: Berlin/Heidelberg, Germany, 1995; pp. 23–37. [Google Scholar]
- Liao, S.; Zhu, X.; Lei, Z.; Zhang, L.; Li, S.Z. Learning multi-scale block local binary patterns for face recognition. In Advances in Biometrics; Springer: Berlin/Heidelberg, Germany, 2007; pp. 828–837. [Google Scholar]
- Zhu, Q.; Yeh, M.C.; Cheng, K.T.; Avidan, S. Fast human detection using a cascade of histograms of oriented gradients. IEEE Conf. Comput. Vis. Pattern Recog. 2006, 2, 1491–1498. [Google Scholar]
- Heredia, G.; Caballero, F.; Maza, I.; Merino, L.; Viguria, A.; Ollero, A. Multi-Unmanned Aerial Vehicle (UAV) Cooperative Fault Detection Employing Differential Global Positioning (DGPS), Inertial and Vision Sensors. Sensors 2009, 9, 7566–7579. [Google Scholar] [CrossRef] [PubMed]
- Hu, J.; Xie, L.; Xu, J.; Xu, Z. Multi-Agent Cooperative Target Search. Sensors 2014, 14, 9408–9428. [Google Scholar] [CrossRef] [PubMed]
- Rodriguez-Canosa, G.R.; Thomas, S.; del Cerro, J.; Barrientos, A.; MacDonald, B. A Real-Time Method to Detect and Track Moving Objects (DATMO) from Unmanned Aerial Vehicles (UAVs) Using a Single Camera. Remote Sens. 2012, 4, 1090–1111. [Google Scholar] [CrossRef]
- Doitsidis, L.; Weiss, S.; Renzaglia, A.; Achtelik, M.W.; Kosmatopoulos, E.; Siegwart, R.; Scaramuzza, D. Optimal Surveillance Coverage for Teams of Micro Aerial Vehicles in GPS-Denied Environments Using Onboard Vision. Auton. Robots 2012, 33, 173–188. [Google Scholar] [CrossRef] [Green Version]
- Saska, M.; Chudoba, J.; Precil, L.; Thomas, J.; Loianno, G.; Tresnak, A.; Vonasek, V.; Kumar, V. Autonomous deployment of swarms of micro-aerial vehicles in cooperative surveillance. In Proceedings of the 2014 International Conference on Unmanned Aircraft Systems (ICUAS), Orlando, FL, USA, 27–30 May 2014; pp. 584–595.
- Rosnell, T.; Honkavaara, E. Point Cloud Generation from Aerial Image Data Acquired by a Quadrocopter Type Micro Unmanned Aerial Vehicle and a Digital Still Camera. Sensors 2012, 12, 453–480. [Google Scholar] [CrossRef] [PubMed]
- Shen, S.; Mulgaonkar, Y.; Michael, N.; Kumar, V. Vision-based State Estimation for Autonomous Rotorcraft MAVs in Complex Environments. In Proceedings of the IEEE International Conference on Robotics and Automation (ICRA), Karlsruhe, Germany, 6–10 May 2013.
- Shen, S.; Mulgaonkar, Y.; Michael, N.; Kumar, V. Vision-Based State Estimation and Trajectory Control Towards Aggressive Flight with a Quadrotor. In Proceedings of the Robotics: Science and Systems (RSS), Berlin, Germany, 24–28 June 2013.
- Shen, S.; Mulgaonkar, Y.; Michael, N.; Kumar, V. Initialization-Free Monocular Visual-Inertial Estimation with Application to Autonomous MAVs. In Proceedings of the International Symposium on Experimental Robotics, Marrakech, Morocco, 15–18 June 2014.
- Scaramuzza, D.; Achtelik, M.C.; Doitsidis, L.; Fraundorfer, F.; Kosmatopoulos, E.B.; Martinelli, A.; Achtelik, M.W.; Chli, M.; Chatzichristofis, S.A.; Kneip, L.; et al. Vision-Controlled Micro Flying Robots: From System Design to Autonomous Navigation and Mapping in GPS-denied Environments. IEEE Robot. Autom. Mag. 2014, 21. [Google Scholar] [CrossRef]
- Achtelik, M.; Weiss, S.; Chli, M.; Dellaert, F.; Siegwart, R. Collaborative Stereo. In Proceedings of the IEEE/RSJ Conference on Intelligent Robots and Systems (IROS), San Francisco, CA, USA, 25–30 September 2011; pp. 2242–2248.
- Hesch, J.A.; Kottas, D.G.; Bowman, S.L.; Roumeliotis, S.I. Camera-IMU-based localization: Observability analysis and consistency improvement. Int. J. Robot. Res. 2013, 33, 182–201. [Google Scholar] [CrossRef]
- Krajnik, T.; Nitsche, M.; Faigl, J.; Vanek, P.; Saska, M.; Preucil, L.; Duckett, T.; Mejail, M. A Practical Multirobot Localization System. J. Intell. Robot. Syst. 2014, 76, 539–562. [Google Scholar] [CrossRef] [Green Version]
- Faigl, J.; Krajnik, T.; Chudoba, J.; Preucil, L.; Saska, M. Low-cost embedded system for relative localization in robotic swarms. In Proceedings of the IEEE International Conference on Robotics and Automation (ICRA), Karlsruhe, Germany, 6–10 May 2013; pp. 993–998.
- Lin, F.; Peng, K.; Dong, X.; Zhao, S.; Chen, B. Vision-based formation for UAVs. In Proceedings of the IEEE International Conference on Control Automation (ICCA), Taichung, Taiwan, 18–20 June 2014; pp. 1375–1380.
- Zhang, M.; Lin, F.; Chen, B. Vision-based detection and pose estimation for formation of micro aerial vehicles. In Proceedings of the International Conference on Automation Robotics Vision (ICARCV), Singapore, Singapore, 10–12 December 2014; pp. 1473–1478.
- Lai, J.; Mejias, L.; Ford, J.J. Airborne vision-based collision-detection system. J. Field Robot. 2011, 28, 137–157. [Google Scholar] [CrossRef] [Green Version]
- Petridis, S.; Geyer, C.; Singh, S. Learning to Detect Aircraft at Low Resolutions. In Computer Vision Systems; Gasteratos, A., Vincze, M., Tsotsos, J., Eds.; Springer: Berlin/Heidelberg, Germany, 2008; Volume 5008, pp. 474–483. [Google Scholar]
- Dey, D.; Geyer, C.; Singh, S.; Digioia, M. Passive, long-range detection of Aircraft: Towards a field deployable Sense and Avoid System. In Proceedings of the Field and Service Robotics, Cambridge, MA, USA, 14–16 July 2009.
- Dey, D.; Geyer, C.; Singh, S.; Digioia, M. A cascaded method to detect aircraft in video imagery. Int. J. Robot. Res. 2011, 30, 1527–1540. [Google Scholar] [CrossRef]
- Vásárhelyi, G.; Virágh, C.; Somorjai, G.; Tarcai, N.; Szörényi, T.; Nepusz, T.; Vicsek, T. Outdoor flocking and formation flight with autonomous aerial robots. In Proceedings of the IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), Chicago, IL, USA, 14–18 September 2014; pp. 3866–3873.
- Brewer, E.; Haentjens, G.; Gavrilets, V.; McGraw, G. A low SWaP implementation of high integrity relative navigation for small UAS. In Proceedings of the Position, Location and Navigation Symposium, Monterey, CA, USA, 5–8 May 2014; pp. 1183–1187.
- Roberts, J. Enabling Collective Operation of Indoor Flying Robots. Ph.D. Thesis, Ecole Polytechnique Federale de Lausanne (EPFL), Lausanne, Switzerland, April 2011. [Google Scholar]
- Roberts, J.; Stirling, T.; Zufferey, J.; Floreano, D. 3-D Relative Positioning Sensor for Indoor Flying Robots. Auton. Robots 2012, 33, 5–20. [Google Scholar] [CrossRef]
- Stirling, T.; Roberts, J.; Zufferey, J.; Floreano, D. Indoor Navigation with a Swarm of Flying Robots. In Proceedings of the IEEE International Conference on Robotics and Automation (ICRA), St. Paul, MN, USA, 14–18 May 2012.
- Welsby, J.; Melhuish, C.; Lane, C.; Qy, B. Autonomous minimalist following in three dimensions: A study with small-scale dirigibles. In Proceedings of the Towards Intelligent Mobile Robots, Coventry, UK, 6–9 August 2001.
- Raharijaona, T.; Mignon, P.; Juston, R.; Kerhuel, L.; Viollet, S. HyperCube: A Small Lensless Position Sensing Device for the Tracking of Flickering Infrared LEDs. Sensors 2015, 15, 16484–16502. [Google Scholar] [CrossRef] [PubMed]
- Etter, W.; Martin, P.; Mangharam, R. Cooperative Flight Guidance of Autonomous Unmanned Aerial Vehicles. In Proceedings of the CPS Week Workshop on Networks of Cooperating Objects, Chicago, IL, USA, 11–14 April 2011.
- Basiri, M.; Schill, F.; Floreano, D.; Lima, P. Audio-based Relative Positioning System for Multiple Micro Air Vehicle Systems. In Proceedings of the Robotics: Science and Systems (RSS), Berlin, Germany, 24–28 June 2013.
- Tijs, E.; de Croon, G.; Wind, J.; Remes, B.; de Wagter, C.; de Bree, H.E.; Ruijsink, R. Hear-and-Avoid for Micro Air Vehicles. In Proceedings of the International Micro Air Vehicle Conference and Competitions (IMAV), Braunschweig, Germany, 6–9 July 2010.
- Nishitani, A.; Nishida, Y.; Mizoguchi, H. Omnidirectional ultrasonic location sensor. In Proceedings of the IEEE Conference on Sensors, Irvine, CA, USA, 30 October–3 November 2005.
- Maxim, P.M.; Hettiarachchi, S.; Spears, W.M.; Spears, D.F.; Hamann, J.; Kunkel, T.; Speiser, C. Trilateration localization for multi-robot teams. In Proceedings of the Sixth International Conference on Informatics in Control, Automation and Robotics, Special Session on MultiAgent Robotic Systems (ICINCO), Funchal, Madeira, Portugal, 11–15 May 2008.
- Rivard, F.; Bisson, J.; Michaud, F.; Letourneau, D. Ultrasonic relative positioning for multi-robot systems. In Proceedings of the IEEE International Conference on Robotics and Automation (ICRA), Pasadena, CA, USA, 19–23 May 2008; pp. 323–328.
- Moses, A.; Rutherford, M.; Valavanis, K. Radar-based detection and identification for miniature air vehicles. In Proceedings of the IEEE International Conference on Control Applications (CCA), Denver, CO, USA, 28–30 September 2011; pp. 933–940.
- Moses, A.; Rutherford, M.J.; Kontitsis, M.; Valavanis, K.P. UAV-borne X-band radar for collision avoidance. Robotica 2014, 32, 97–114. [Google Scholar] [CrossRef]
- Lienhart, R.; Maydt, J. An extended set of Haar-like features for rapid object detection. In Proceedings of the International Conference on Image Processing, Rochester, NY, USA, 11–15 May 2002; Volume 1, pp. 900–903.
- Papageorgiou, C.P.; Oren, M.; Poggio, T. A general framework for object detection. In Proceedings of the International Conference on Computer Vision, Bombay, India, 4–7 January 1998; pp. 555–562.
- Ojala, T.; Pietikainen, M.; Harwood, D. Performance evaluation of texture measures with classification based on Kullback discrimination of distributions. In Proceedings of the 12th IAPR International Conference on Pattern Recognition, Jerusalem, Israel, 9–13 October 1994; Volume 1, pp. 582–585.
- Schölkopf, B.; Smola, A.J.; Williamson, R.C.; Bartlett, P.L. New support vector algorithms. Neural Comput. 2000, 12, 1207–1245. [Google Scholar] [CrossRef] [PubMed]
- 3DRobotics. Arducopter: Full-Featured, Open-Source Multicopter UAV Controller. Available online: http://copter.ardupilot.com/ (accessed on 19 August 2015).
- Gaschler, A. Real-Time Marker-Based Motion Tracking: Application to Kinematic Model Estimation of a Humanoid Robot. Master’s Thesis, Technische Universität München, München, Germany, February 2011. [Google Scholar]
- Gaschler, A.; Springer, M.; Rickert, M.; Knoll, A. Intuitive Robot Tasks with Augmented Reality and Virtual Obstacles. In Proceedings of the IEEE International Conference on Robotics and Automation (ICRA), Hong Kong, China, 31 May–7 June 2014.
- Horn, B.K.P.; Hilden, H.; Negahdaripour, S. Closed-Form Solution of Absolute Orientation using Orthonormal Matrices. J. Opt. Soc. Am. 1988, 5, 1127–1135. [Google Scholar] [CrossRef]
- Umeyama, S. Least-squares estimation of transformation parameters between two point patterns. IEEE Trans. Pattern Anal. Mach. Intell. 1991, 13, 376–380. [Google Scholar] [CrossRef]
- Bradski, G. The OpenCV Library. Dr. Dobb’s J. Softw. Tools. 2000. Available online: http://www.drdobbs.com/open-source/the-opencv-library/184404319?queryText=opencv.
- Kaewtrakulpong, P.; Bowden, R. An Improved Adaptive Background Mixture Model for Real-time Tracking with Shadow Detection. In Video-Based Surveillance Systems; Remagnino, P., Jones, G., Paragios, N., Regazzoni, C., Eds.; Springer: New York, NY, USA, 2002; pp. 135–144. [Google Scholar]
- Jaccard, P. The distribution of the flora in the Alpine zone. New Phytol. 1912, 11, 37–50. [Google Scholar] [CrossRef]
- Rekleitis, I.M. Visual Motion Estimation based on Motion Blur Interpretation. Master’s Thesis, School of Computer Science, McGill University, Montreal, QC, Canada, 1995. [Google Scholar]
- Soe, A.K.; Zhang, X. A simple PSF parameters estimation method for the de-blurring of linear motion blurred images using wiener filter in OpenCV. In Proceedings of the International Conference on Systems and Informatics (ICSAI), Yantai, China, 19–21 May 2012; pp. 1855–1860.
- Hulens, D.; Verbeke, J.; Goedeme, T. How to Choose the Best Embedded Processing Platform for on-Board UAV Image Processing? In Proceedings of the 10th International Conference on Computer Vision Theory and Applications, Berlin, Germany, 11–14 March 2015; pp. 377–386.
- AscendingTechnologies. AscTec Mastermind. Available online: http://www.asctec.de/en/asctec-mastermind/ (accessed on 19 August 2015).
- Leibe, B.; Schindler, K.; van Gool, L. Coupled detection and trajectory estimation for multi-object tracking. In Proceedings of the IEEE International Conference on Computer Vision (ICCV), Rio de Janeiro, Brazil, 14–21 October 2007; pp. 1–8.
- Huang, C.; Wu, B.; Nevatia, R. Robust object tracking by hierarchical association of detection responses. In European Conference on Computer Vision; Springer: Berlin/Heidelberg, Germany, 2008; pp. 788–801. [Google Scholar]
- Stalder, S.; Grabner, H.; van Gool, L. Cascaded confidence filtering for improved tracking-by-detection. In European Conference on Computer Vision; Springer: Berlin/Heidelberg, Germany, 2010; pp. 369–382. [Google Scholar]
- Dollar, P.; Wojek, C.; Schiele, B.; Perona, P. Pedestrian detection: An evaluation of the state of the art. IEEE Trans. Pattern Anal. Mach. Intell. 2012, 34, 743–761. [Google Scholar] [CrossRef] [PubMed]
© 2015 by the authors; licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution license (http://creativecommons.org/licenses/by/4.0/).