Dynamic Target Tracking and Ingressing of a Small UAV Using Monocular Sensor Based on the Geometric Constraints
Abstract
1. Introduction
- Aiming at the problem of tracking dynamic targets, a single-frame parallel-features positioning method (SPPM) is proposed. Compared with standard solutions to the perspective-three-point (P3P) problem for moving targets, our method exploits the coplanar parallel constraints between the target feature points to construct high-order nonlinear over-determined equations in the unknown depth values (a numerical sketch of this idea is given after this list). We then introduce an improved Newton numerical optimization based on the Runge–Kutta method, which greatly reduces the error caused by 2D detection in practical UAV engineering applications. In our experiments, after adding random 2D point errors of up to three pixels, SPPM still keeps the 3D positioning error within 1.10%, showing that SPPM is robust to 2D detection noise;
- Based on the SPPM, a 2D feature recognition algorithm for parallel-feature extraction is designed. A monocular SLAM algorithm based on PTAM is then introduced for navigation. Finally, an indoor UAV visual positioning and tracking framework that integrates target feature recognition, monocular SLAM positioning, and dynamic target tracking is constructed;
- To verify the effectiveness and robustness of the framework, several indoor flight tests were carried out with a small UAV, the AR.Drone 2.0 [31], used as the tracking platform and equipped with a lightweight monocular camera as the visual front end, as shown in Figure 1. Combining our SPPM with our intelligent monocular SLAM navigation platform [32], a complete indoor autonomous navigation and positioning system is proposed. The method is systematically evaluated in terms of computational cost, the convergence speed of the depth values, and the tracking accuracy. In the actual flight tests, the depth-estimation equations require an average of only 1.94 iterations on the tested visual data. The UAV successfully flies through the center of the door frame, and the root mean square error (RMSE) of the dynamic target position is smaller than 7.92 cm.
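The following is a minimal numerical sketch (Python/NumPy) of the idea behind SPPM, not the authors' exact formulation: the unknown depths of the four coplanar corners are collected into a vector, the known side lengths and the opposite-side parallelism are written as a nonlinear residual, and the resulting over-determined system is solved with a plain Gauss–Newton iteration using the Moore–Penrose pseudo-inverse of the Jacobian (the paper's improved Runge–Kutta-based Newton step is not reproduced here). The intrinsics, corner pixels, and target dimensions below are illustrative assumptions.

```python
import numpy as np

# --- illustrative assumptions (not values from the paper) ---
K = np.array([[500.0, 0.0, 320.0],
              [0.0, 500.0, 240.0],
              [0.0, 0.0, 1.0]])                      # pinhole intrinsics
W, H = 0.80, 2.00                                     # known door width/height (m)
pix = np.array([[270, 100], [370, 103],
                [368, 350], [268, 347]], float)       # detected corner pixels (u, v)

# unit bearing rays of the four corners in the camera frame
rays = (np.linalg.inv(K) @ np.hstack([pix, np.ones((4, 1))]).T).T
rays /= np.linalg.norm(rays, axis=1, keepdims=True)

def residual(d):
    """Side-length and opposite-side parallelism constraints on the depths d."""
    P = d[:, None] * rays                             # 3D corners at depths d
    e_len = [np.linalg.norm(P[0] - P[1]) - W,         # top side
             np.linalg.norm(P[3] - P[2]) - W,         # bottom side
             np.linalg.norm(P[0] - P[3]) - H,         # left side
             np.linalg.norm(P[1] - P[2]) - H]         # right side
    e_par = np.cross(P[0] - P[1], P[3] - P[2])        # parallel sides => zero cross product
    return np.hstack([e_len, e_par])                  # 7 equations, 4 unknowns

def solve_depths(d0, iters=20, tol=1e-8, eps=1e-6):
    """Plain Gauss-Newton on the over-determined system via the pseudo-inverse."""
    d = d0.copy()
    for _ in range(iters):
        F = residual(d)
        J = np.empty((F.size, 4))
        for j in range(4):                            # numerical Jacobian
            dp = d.copy(); dp[j] += eps
            J[:, j] = (residual(dp) - F) / eps
        step = np.linalg.pinv(J) @ F                  # Moore-Penrose generalized inverse
        d -= step
        if np.linalg.norm(step) < tol:
            break
    return d

print(solve_depths(np.full(4, 3.0)))                  # estimated corner depths (m)
```

The over-determination (seven equations in four unknowns) is what allows small 2D detection errors to be absorbed in a least-squares sense rather than propagated directly into the depths.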
2. Background and Preliminaries
2.1. Applicable Scene and Key Technologies
- Autonomous navigation: The fused information of the visual sensor and the inertial navigation device is used to estimate the pose of the UAV without GNSS signals and to provide basic navigation information for subsequent flight missions;
- Target recognition: The door-like targets are analyzed, the parallel feature points of the target are extracted, and target tracking in the two-dimensional image plane is realized;
- Target 3D positioning: The key issue considered in this article is recovering the 3D position of a target from its 2D image features. One mainstream solution is to use inertial navigation or a stereo (binocular) setup to solve the epipolar geometry, also known as triangulation (a generic sketch is given after this list); another popular solution is to use deep neural networks to estimate the depth information of the image. However, these methods often require auxiliary sensors such as inertial navigation devices or several frames of image data for a joint solution.
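For contrast with the single-frame approach pursued in this paper, the snippet below shows generic two-view linear (DLT) triangulation, which requires a known baseline between two camera poses; the intrinsics, baseline, and test point are made-up values, not data from the paper.

```python
import numpy as np

def triangulate(P1, P2, x1, x2):
    """Linear (DLT) triangulation of one point seen in two views.

    P1, P2: 3x4 projection matrices K[R|t] of the two views.
    x1, x2: pixel coordinates (u, v) of the same point in each view.
    """
    A = np.vstack([x1[0] * P1[2] - P1[0],
                   x1[1] * P1[2] - P1[1],
                   x2[0] * P2[2] - P2[0],
                   x2[1] * P2[2] - P2[1]])
    _, _, Vt = np.linalg.svd(A)            # least-squares solution: last right singular vector
    X = Vt[-1]
    return X[:3] / X[3]                    # dehomogenize

# illustrative example: two views separated by a 0.5 m baseline along the x axis
K = np.array([[500.0, 0.0, 320.0], [0.0, 500.0, 240.0], [0.0, 0.0, 1.0]])
P1 = K @ np.hstack([np.eye(3), np.zeros((3, 1))])
P2 = K @ np.hstack([np.eye(3), np.array([[-0.5], [0.0], [0.0]])])
X_true = np.array([0.2, -0.1, 4.0])
x1 = P1 @ np.append(X_true, 1.0); x1 = x1[:2] / x1[2]
x2 = P2 @ np.append(X_true, 1.0); x2 = x2[:2] / x2[2]
print(triangulate(P1, P2, x1, x2))         # recovers approximately [0.2, -0.1, 4.0]
```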
2.2. Background of 2D Target Tracking
2.3. Mathematical Preliminaries
2.3.1. Coordinate System
- The world coordinate system: the absolute coordinate system fixed to the objective world. It is used as the reference coordinate system for describing the spatial position of the tracked target;
- The pixel coordinate system: the center of the image plane is taken as the coordinate origin, and the two coordinate axes are parallel to the two perpendicular sides of the image plane. It is used to describe the 2D projection position of the target;
- The camera coordinate system: the optical center of the camera is taken as the coordinate origin, two of its axes are parallel to the corresponding axes of the image coordinate system, and the optical axis of the camera forms the third axis (a minimal frame-conversion sketch is given after this list).
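As a minimal illustration of how the frames are chained, the sketch below back-projects a pixel into the camera frame at a known depth and then transforms it into the world frame. It uses an undistorted pinhole model and an assumed identity camera pose, so it omits the ATAN distortion handled later in the paper; the intrinsics are placeholders.

```python
import numpy as np

K = np.array([[500.0, 0.0, 320.0],
              [0.0, 500.0, 240.0],
              [0.0, 0.0, 1.0]])       # assumed pinhole intrinsics
R_wc = np.eye(3)                      # assumed world-to-camera rotation
t_wc = np.zeros(3)                    # assumed world-to-camera translation

def pixel_to_camera(uv, depth):
    """Pixel frame -> camera frame: back-project (u, v) at a known depth."""
    ray = np.linalg.inv(K) @ np.array([uv[0], uv[1], 1.0])
    return depth * ray                # [x_c, y_c, z_c] with z_c = depth

def camera_to_world(p_c):
    """Camera frame -> world frame for the pose p_c = R_wc * p_w + t_wc."""
    return R_wc.T @ (p_c - t_wc)

p_c = pixel_to_camera((400, 260), depth=4.0)
print(camera_to_world(p_c))           # 3D position in the world frame
```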
2.3.2. Camera Projection Model
2.3.3. Numerical Newton Iteration Method
2.3.4. Generalized Inverse Matrix and Singular Value Decomposition
3. Framework of Tracking Algorithm
3.1. Overall Framework and Mathematical Models
- Monocular visual SLAM model: In this paper, UAV target tracking is based on monocular vision, so a monocular visual SLAM method is introduced as the positioning algorithm for the UAV, which saves cost and improves the compactness of the system design. In addition, because the same visual payload is shared, it is more convenient for researchers to process and synchronize data. The camera pose provided by this model is used to calculate the world coordinates of the target and serves as an input to our core algorithm (SPPM). Therefore, in-depth research on mono-SLAM itself is not carried out in this article;
- Two-dimensional target detection and camera projection model: A traditional target detection algorithm is used to ensure the real-time performance of the detection module. For generality, both the pinhole and the ATAN camera projection models can be used with our subsequent algorithm. This paper focuses on the ATAN projection model, and the pinhole model can be regarded as a simplified case (a minimal sketch of the ATAN/FOV distortion model is given after this list). The camera calibration method in this article follows the calibration procedure for the FOV model camera, also called the arctangent (ATAN) model, in the ethzasl_ptam project of the ETH Zurich Autonomous Systems Lab (ASL). This method applies to both global-shutter and rolling-shutter cameras. For details, please refer to the official GitHub repository of the Autonomous Systems Lab: https://github.com/ethz-asl/ethzasl_ptam (accessed on 12 July 2021);
- Three-dimensional monocular depth estimation and positioning model: The depth information solution, also called the scale calculation problem, is one of the key problems in target positioning and tracking based on monocular vision. In this paper, the camera projection model and geometric constraint equations are used to construct the depth equation model. The target is abstracted as a rectangle or a parallelogram, and an improved high-order Newton iterative algorithm is used to obtain an efficient, real-time numerical solution for the depth of the target feature points. Finally, a Kalman filter and linear regression are used to filter and estimate the target trajectory.
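Since the paper adopts the ATAN (FOV) projection model, the following is a small sketch of the Devernay–Faugeras FOV distortion and its inverse applied to normalized image coordinates; the distortion parameter w = 0.9 rad and the test point are assumptions (in practice the parameter comes from the ethzasl_ptam calibration mentioned above).

```python
import numpy as np

def fov_distort(xu, yu, w):
    """FOV/ATAN distortion (Devernay & Faugeras) applied to normalized
    image coordinates (x_u, y_u); w is the single distortion parameter."""
    ru = np.hypot(xu, yu)
    if ru < 1e-12:
        return xu, yu
    rd = np.arctan(2.0 * ru * np.tan(w / 2.0)) / w
    return xu * rd / ru, yu * rd / ru

def fov_undistort(xd, yd, w):
    """Inverse mapping: distorted -> undistorted normalized coordinates."""
    rd = np.hypot(xd, yd)
    if rd < 1e-12:
        return xd, yd
    ru = np.tan(rd * w) / (2.0 * np.tan(w / 2.0))
    return xd * ru / rd, yd * ru / rd

# round-trip check with an assumed distortion parameter w = 0.9 rad
xu, yu = 0.30, -0.20
xd, yd = fov_distort(xu, yu, 0.9)
print(fov_undistort(xd, yd, 0.9))   # ~ (0.30, -0.20)
```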
3.2. Extraction of Two-Dimensional Feature Points
- Pre-processing: The visual stream of the AR.Drone 2.0 is processed for noise removal and color enhancement, and the RGB color space is then converted into the HSV color space. The purpose of preprocessing is to improve the image quality of the monocular camera to a certain extent and to facilitate the subsequent filtering modules;
- Target extraction: A shape template or a color filter of the target can be selected for target recognition and extraction. The morphological opening operation is then used to denoise the target edges. Finally, the edge extraction function in OpenCV 2.0 is used to obtain the edge vector information of the target;
- Corner extraction: After obtaining the edge vectors, the Hough transform is used to obtain the sides of the quadrilateral target, or the random sample consensus (RANSAC) [48] algorithm is used to re-estimate the sides of the target. RANSAC accounts for the partial loss of target edge information caused by illumination or occlusion and can be used to reconstruct the target edges. Finally, the 2D coordinates of each vertex of the target are obtained by computing the intersections of the sides. Figure 5a–d, taking door frame detection as an example, illustrates the specific process of target identification and feature corner extraction (a simplified sketch of this pipeline is given after this list). The target detection algorithm used in this paper is fast and real-time, but it may introduce a slight pixel detection error, usually within 1 to 2 pixels.
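A simplified OpenCV sketch of the extraction pipeline described above (HSV color filtering, morphological opening, contour extraction, corner estimation). The HSV bounds, the input file name, and the use of cv2.approxPolyDP in place of the paper's Hough/RANSAC side fitting are assumptions made for illustration only.

```python
import cv2
import numpy as np

def extract_corners(bgr, hsv_lo=(0, 120, 80), hsv_hi=(10, 255, 255)):
    """Sketch of the 2D pipeline: color filter -> morphological opening ->
    contour (edge) extraction -> quadrilateral corner estimation."""
    hsv = cv2.cvtColor(bgr, cv2.COLOR_BGR2HSV)
    mask = cv2.inRange(hsv, np.array(hsv_lo, np.uint8), np.array(hsv_hi, np.uint8))
    kernel = np.ones((5, 5), np.uint8)
    mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, kernel)       # denoise the target edges
    # OpenCV >= 4 returns (contours, hierarchy)
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    if not contours:
        return None
    target = max(contours, key=cv2.contourArea)                 # keep the largest blob
    # the paper fits the four sides with the Hough transform or RANSAC and then
    # intersects them; cv2.approxPolyDP is used here as a simpler stand-in
    quad = cv2.approxPolyDP(target, 0.02 * cv2.arcLength(target, True), True)
    return quad.reshape(-1, 2) if len(quad) == 4 else None

frame = cv2.imread("frame.png")                                 # hypothetical input image
if frame is not None:
    print(extract_corners(frame))                               # four (u, v) corner pixels
```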
3.3. Depth Estimation and 3D Locating
3.4. Improved Newton Iteration Method
3.5. Kalman Filter and Linear Regression
- Under the condition of filter convergence, our method pays more attention to the convergence speed of the filter because of the UAV's rapid flight. The original position of the measured value has little effect on the estimation, and the initial value is chosen accordingly;
- For the process noise covariance matrix, since the random movement interference in the state equation is already reflected in the measured values, it can take a small value;
- The measurement noise covariance is set according to the actual measurement conditions;
- The filter parameters, namely the initial covariance matrix, the process noise covariance matrix, and the measurement noise covariance matrix, are set as scaled identity matrices of the appropriate order (a minimal constant-velocity filter sketch with assumed values is given after this list).
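A minimal constant-velocity Kalman filter sketch for smoothing the estimated 3D target position, reflecting the small process noise argued above; the frame rate, the state model, and the numerical values of the covariance matrices are assumptions, not the paper's settings.

```python
import numpy as np

dt = 1.0 / 30.0                               # assumed frame rate
n = 6                                         # state: [x, y, z, vx, vy, vz]
F = np.eye(n); F[:3, 3:] = dt * np.eye(3)     # constant-velocity motion model
Hm = np.hstack([np.eye(3), np.zeros((3, 3))]) # only the 3D position is measured

P = 1.0 * np.eye(n)                           # initial covariance (assumed)
Q = 1e-3 * np.eye(n)                          # small process noise, as argued above
R = 5e-2 * np.eye(3)                          # measurement noise (assumed)
x = np.zeros(n)                               # initial state

def kf_step(x, P, z):
    """One predict/update cycle on a 3D position measurement z."""
    x = F @ x                                 # predict
    P = F @ P @ F.T + Q
    S = Hm @ P @ Hm.T + R                     # innovation covariance
    K = P @ Hm.T @ np.linalg.inv(S)           # Kalman gain
    x = x + K @ (z - Hm @ x)                  # update
    P = (np.eye(n) - K @ Hm) @ P
    return x, P

for z in np.array([[4.0, 0.1, 1.2], [3.9, 0.1, 1.2], [3.8, 0.2, 1.1]]):
    x, P = kf_step(x, P, z)
print(x[:3])                                  # filtered target position
```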
3.6. Flight Controller in Target Tracking
4. Experimental Results and Analysis
4.1. Experimental Hardware Platform
4.2. Transplantation of PTAM Module
4.3. Experimental Test Environment
4.4. Laboratory Experiment and Result Analysis
4.4.1. Construction and Analysis of Geometric Constraint Equations
4.4.2. Static Ranging Experiment and Analysis
4.4.3. Experiment of Anti-Detection Error
4.4.4. Flight Experiment and Analysis of Dynamic Target Tracking
4.4.5. Flying through a Door Frame
5. Conclusions
Author Contributions
Funding
Acknowledgments
Conflicts of Interest
References
- Weiss, S.; Achtelik, M.W.; Lynen, S.; Chli, M.; Siegwart, R. Real-time onboard visual-inertial state estimation and self-calibration of MAVs in unknown environments. In Proceedings of the 2012 IEEE International Conference on Robotics and Automation, Saint Paul, MN, USA, 14–18 May 2012; pp. 957–964.
- Chowdhary, G.; Johnson, E.N.; Magree, D.; Wu, A.; Shein, A. GPS-denied Indoor and Outdoor Monocular Vision Aided Navigation and Control of Unmanned Aircraft. J. Field Robot. 2013, 30, 415–438.
- Ruan, W.-Y.; Duan, H.-B. Multi-UAV obstacle avoidance control via multi-objective social learning pigeon-inspired optimization. Front. Inf. Technol. Electron. Eng. 2020, 21, 740–748.
- Shao, Y.; Zhao, Z.-F.; Li, R.-P.; Zhou, Y.-G. Target detection for multi-UAVs via digital pheromones and navigation algorithm in unknown environments. Front. Inf. Technol. Electron. Eng. 2020, 21, 796–808.
- Yang, T.; Li, P.; Zhang, H.; Li, J.; Li, Z. Monocular Vision SLAM-Based UAV Autonomous Landing in Emergencies and Unknown Environments. Electronics 2018, 7, 73.
- Mahony, R.; Kumar, V.; Corke, P. Multirotor Aerial Vehicles: Modeling, Estimation, and Control of Quadrotor. IEEE Robot. Autom. Mag. 2012, 19, 20–32.
- Shen, S.; Michael, N.; Kumar, V. Autonomous indoor 3D exploration with a micro-aerial vehicle. In Proceedings of the 2012 IEEE International Conference on Robotics and Automation, Saint Paul, MN, USA, 14–18 May 2012; pp. 9–15.
- Orgeira-Crespo, P.; Ulloa, C.; Rey-Gonzalez, G.; García, J.P. Methodology for Indoor Positioning and Landing of an Unmanned Aerial Vehicle in a Smart Manufacturing Plant for Light Part Delivery. Electronics 2020, 9, 1680.
- Akhtar, N.; Mian, A. Threat of Adversarial Attacks on Deep Learning in Computer Vision: A Survey. IEEE Access 2018, 6, 14410–14430.
- Yuan, X.; Feng, Z.-Y.; Xu, W.-J.; Wei, Z.-Q.; Liu, R.-P. Secure connectivity analysis in unmanned aerial vehicle networks. Front. Inf. Technol. Electron. Eng. 2018, 19, 409–422.
- Lu, Y.; Xue, Z.; Xia, G.-S.; Zhang, L. A survey on vision-based UAV navigation. Geo-Spat. Inf. Sci. 2018, 21, 21–32.
- Zhu, X.; Zhang, X.; Qu, Y. Consensus-based three-dimensional multi-UAV formation control strategy with high precision. Front. Inf. Technol. Electron. Eng. 2017, 18, 968–977.
- Falanga, D.; Mueggler, E.; Faessler, M.; Scaramuzza, D. Aggressive quadrotor flight through narrow gaps with onboard sensing and computing using active vision. In Proceedings of the 2017 IEEE International Conference on Robotics and Automation (ICRA), Singapore, 29 May–3 June 2017; pp. 5774–5781.
- Pachauri, A.; More, V.; Gaidhani, P.; Gupta, N. Autonomous Ingress of a UAV through a window using Monocular Vision. arXiv 2016, arXiv:1607.07006.
- Zhao, F.; Zeng, Y.; Wang, G.; Bai, J.; Xu, B. A Brain-Inspired Decision Making Model Based on Top-Down Biasing of Prefrontal Cortex to Basal Ganglia and Its Application in Autonomous UAV Explorations. Cogn. Comput. 2017, 10, 296–306.
- Albrektsen, S.M.; Bryne, T.H.; Johansen, T.A. Robust and secure UAV navigation using GNSS, phased-array radio system and inertial sensor fusion. In Proceedings of the 2018 IEEE Conference on Control Technology and Applications (CCTA), Copenhagen, Denmark, 21–24 August 2018; pp. 1338–1345.
- Weiss, U.; Biber, P. Plant detection and mapping for agricultural robots using a 3D LIDAR sensor. Robot. Auton. Syst. 2011, 59, 265–273.
- Henry, P.; Krainin, M.; Herbst, E.; Ren, X.; Fox, D. RGB-D mapping: Using Kinect-style depth cameras for dense 3D modeling of indoor environments. Int. J. Robot. Res. 2012, 31, 647–663.
- Triggs, B.; McLauchlan, P.F.; Hartley, R.I.; Fitzgibbon, A.W. Bundle adjustment—A modern synthesis. In Vision Algorithms: Theory and Practice, Proceedings of the International Workshop on Vision Algorithms, Corfu, Greece, 21–22 September 1999; Springer: Berlin/Heidelberg, Germany, 1999; pp. 298–372.
- Wang, Y.; Wang, P.; Yang, Z.; Luo, C.; Yang, Y.; Xu, W. UnOS: Unified unsupervised optical-flow and stereo-depth estimation by watching videos. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), Long Beach, CA, USA, 9 January 2020; pp. 8063–8073.
- Cheng, H.; An, P.; Zhang, Z. Model of relationship among views number, stereo resolution and max stereo angle for multi-view acquisition/stereo display system. In Proceedings of the 9th International Forum on Digital TV and Wireless Multimedia Communication, IFTC 2012, Shanghai, China, 9–10 November 2012; pp. 508–514.
- Gomez-Ojeda, R.; Moreno, F.-A.; Zuniga-Noel, D.; Scaramuzza, D.; Gonzalez-Jimenez, J. PL-SLAM: A Stereo SLAM System Through the Combination of Points and Line Segments. IEEE Trans. Robot. 2019, 35, 734–746.
- Qin, T.; Li, P.; Shen, S. Relocalization, global optimization and map merging for monocular visual-inertial SLAM. In Proceedings of the 2018 IEEE International Conference on Robotics and Automation (ICRA), Brisbane, Australia, 21–25 May 2018; pp. 1197–1204.
- Weiss, S.; Achtelik, M.W.; Lynen, S.; Achtelik, M.C.; Kneip, L.; Chli, M.; Siegwart, R. Monocular Vision for Long-term Micro Aerial Vehicle State Estimation: A Compendium. J. Field Robot. 2013, 30, 803–831.
- Qin, T.; Li, P.; Shen, S. VINS-Mono: A Robust and Versatile Monocular Visual-Inertial State Estimator. IEEE Trans. Robot. 2018, 34, 1004–1020.
- Nützi, G.; Weiss, S.; Scaramuzza, D.; Siegwart, R. Fusion of IMU and vision for absolute scale estimation in monocular SLAM. J. Intell. Robot. Syst. 2010, 61, 287–299.
- Zhou, D.; Dai, Y.; Li, H. Ground-Plane-Based Absolute Scale Estimation for Monocular Visual Odometry. IEEE Trans. Intell. Transp. Syst. 2019, 21, 791–802.
- Qiu, K.; Liu, T.; Shen, S. Model-Based Global Localization for Aerial Robots Using Edge Alignment. IEEE Robot. Autom. Lett. 2017, 2, 1256–1263.
- Gan, Y.; Xu, X.; Sun, W.; Lin, L. Monocular depth estimation with affinity, vertical pooling, and label enhancement. In Proceedings of the European Conference on Computer Vision (ECCV), Munich, Germany, 8–14 September 2018; pp. 224–239.
- Zou, Y.; Luo, Z.; Huang, J.-B. DF-Net: Unsupervised joint learning of depth and flow using cross-task consistency. In Proceedings of the European Conference on Computer Vision (ECCV), Munich, Germany, 8–14 September 2018; pp. 38–55.
- Bendig, J.; Bolten, A.; Bareth, G. Introducing a low-cost mini-UAV for thermal- and multispectral-imaging. Int. Arch. Photogramm. Remote Sens. Spat. Inf. Sci. 2012, XXXIX-B1, 345–349.
- Wang, Z.H.; Zhang, T.; Qin, K.Y.; Zhu, B. A Vision-Aided Navigation System by Ground-Aerial Vehicle Cooperation for UAV in GNSS-Denied Environments. In Proceedings of the 2018 IEEE CSAA Guidance, Navigation and Control Conference (CGNCC), Xiamen, China, 10–12 August 2018; pp. 1–6.
- Wang, N.; Wang, G.Y. Shape Descriptor with Morphology Method for Color-based Tracking. Int. J. Autom. Comput. 2007, 4, 101–108.
- Tsai, D.M.; Molina, D.E.R. Morphology-based defect detection in machined surfaces with circular tool-mark patterns. Measurement 2019, 134, 209–217.
- Yin, J.; Fu, C.; Hu, J. Using incremental subspace and contour template for object tracking. J. Netw. Comput. Appl. 2012, 35, 1740–1748.
- Guo, J.; Zhu, C. Dynamic displacement measurement of large-scale structures based on the Lucas–Kanade template tracking algorithm. Mech. Syst. Signal. Process. 2016, 66–67, 425–436.
- Henriques, J.F.; Caseiro, R.; Martins, P.; Batista, J. High-Speed Tracking with Kernelized Correlation Filters. IEEE Trans. Pattern Anal. Mach. Intell. 2015, 37, 583–596.
- Li, H.; Shen, C.; Shi, Q. Robust real-time visual tracking with compressed sensing. In Proceedings of the CVPR 2011, Colorado Springs, CO, USA, 20–25 June 2011; pp. 1305–1312.
- Kumar, N.; Parate, P. Fragment-based real-time object tracking: A sparse representation approach. In Proceedings of the 2012 19th IEEE International Conference on Image Processing, Orlando, FL, USA, 30 September–3 October 2012; pp. 433–436.
- Qin, Y.; Shen, G.; Zhao, W.; Che, Y.; Yu, M.; Jin, X. A network security entity recognition method based on feature template and CNN-BiLSTM-CRF. Front. Inf. Technol. Electron. Eng. 2019, 20, 872–884.
- Pang, S.; del Coz, J.J.; Yu, Z.; Luaces-Rodriguez, O.; Diez-Pelaez, J. Deep learning to frame objects for visual target tracking. Eng. Appl. Artif. Intell. 2017, 65, 406–420.
- Devernay, F.; Faugeras, O. Straight lines have to be straight. Mach. Vis. Appl. 2001, 13, 14–24.
- Noor, M.A.; Waseem, M. Some iterative methods for solving a system of nonlinear equations. Comput. Math. Appl. 2009, 57, 101–106.
- Kou, J.; Li, Y.; Wang, X. Some modifications of Newton's method with fifth-order convergence. J. Comput. Appl. Math. 2007, 209, 146–152.
- Sharma, J.R.; Gupta, P. An efficient fifth order method for solving systems of nonlinear equations. Comput. Math. Appl. 2014, 67, 591–601.
- Zhou, W.; Hou, J. A New Adaptive Robust Unscented Kalman Filter for Improving the Accuracy of Target Tracking. IEEE Access 2019, 7, 77476–77489.
- Al-Kanan, H.; Li, F. A Simplified Accuracy Enhancement to the Saleh AM/AM Modeling and Linearization of Solid-State RF Power Amplifiers. Electronics 2020, 9, 1806.
- Chum, O. Two-View Geometry Estimation by Random Sample and Consensus. Ph.D. Thesis, CTU, Prague, Czech Republic, 2005.
- Gao, X.S.; Hou, X.R.; Tang, J.; Cheng, H.-F. Complete solution classification for the perspective-three-point problem. IEEE Trans. Pattern Anal. Mach. Intell. 2003, 25, 930–943.
- Engel, J.; Sturm, J.; Cremers, D. Scale-aware navigation of a low-cost quadrocopter with a monocular camera. Robot. Auton. Syst. 2014, 62, 1646–1656.
Method | Triangulation | Deep Learning | SPPM |
---|---|---|---|
Complexity | Medium | High | Low |
Number of frames required | 2 or more | Many (for training) | 1
Required Information | Baseline | Template | Geometric Constraints |
Required Sensor | Binocular or Mono-Inertial | Mono | Mono |
Parameters | Value |
---|---|
Size | 45.2 cm × 45.2 cm |
Weight | 380 g |
Maximum speed | 18 km/h |
Endurance | 18 min |
Operating radius | 50 m |
| Method | Depth of Corner 1/cm | Depth of Corner 2/cm | Depth of Corner 3/cm | Depth of Corner 4/cm | No. |
|---|---|---|---|---|---|
| Ground Truth | 400 | 400 | 400 | 400 | 1 |
| Newton Method | 408 | 410 | 405 | 339 | 2 |
| | 0 | 0 | 0 | 0 | 3 |
| | 0 | 0 | 172 | 192 | 4 |
| | *− | − | *+ | + | 5 |
| | 305 | 356 | 305 | 356 | 6 |
| | 352 | 389 | 405 | 393 | 7 |
| | 384 | 384 | 303 | 303 | 8 |
| | 399 | 393 | 399 | 396 | 9 |
| | 399 | 395 | 399 | 396 | 10 |
| | 402 | 404 | 403 | 397 | 11 |
No. | Selection Mode | NoE (Number of Equations) | Model Analysis
---|---|---|---|
3 | length of one long side + length of one short side + two equations for each opposite side parallel condition | 4 | Zero solution due to improper selection of parallel conditions |
4 | length of two diagonals + a set of two parallel conditions on opposite sides | 4 | Insufficient constraints for parallel conditions and diagonal lead to wrong solutions |
5 | length of two short sides + length of one diagonal + one parallel condition on opposite sides | 5 | Insufficient constraints lead to negative spurious solutions
6 | length of two short sides + a set of two parallel conditions on long opposite sides | 4 | Insufficient parallel conditions and diagonal constraints lead to wrong solutions |
7 | length of three sides + length of one diagonal | 4 | No use of parallel constraints leads to large solution error |
8 | length of two diagonals + a set of two parallel conditions on opposite sides | 4 | Insufficient constraints for diagonal and parallel condition leads to wrong solutions |
9 | length of four sides + a set of three parallel conditions on opposite sides | 7 | Redundant conditions, Overdetermined equations, Correct solutions |
10 | length of three sides + length of one diagonal + a set of three parallel conditions on opposite sides | 7 | Redundant conditions, Overdetermined equations, Correct solutions |
11 | length of two opposite sides + two parallel conditions of the opposite sides | 4 | Reasonable constraints, Full-rank equations, Correct solutions
No. | Target Detection | Improved Newton Iteration
---|---|---|
9 | 71.6 ms | 2.93 ms |
10 | 70.8 ms | 2.76 ms |
11 | 72.4 ms | 1.81 ms |
Distance | 1 m | 1.5 m | 2 m |
---|---|---|---|
Original Newton method | 8.01% | 14.6% | 1.62% |
SPPM | 1.03% | 0.23% | 0.31% |
Distance | 1 m | 1.5 m | 2 m |
---|---|---|---|
RMSPE | 1.04% | 0.23% | 0.34% |
Axis | X | Y | Z |
---|---|---|---|
RMSE | 8.24 cm | 3.43 cm | 5.12 cm |
Axis | X | Y | Z |
---|---|---|---|
RMSE | 7.92 cm | 3.12 cm | 5.48 cm |