Algorithm for Extracting the 3D Pose Information of Hyphantria cunea (Drury) with Monocular Vision
Abstract
1. Introduction
2. Materials and Methods
2.1. Sampling of H. cunea
2.2. Definition of 3D Posture Information of H. cunea
2.3. Image Data Acquisition
2.4. Extraction of 3D Posture of Wings
2.4.1. Overall Workflow
- The characteristics of the 3D posture changes of multi-pose H. cunea were investigated, and the key feature points and the methods for extracting and localizing torso and wing posture information were determined.
- The 3D posture information of H. cunea was extracted: the key points were located, their 3D coordinates were obtained, and the angle between the wings, the equation of the intersection line of the wing planes, and the bending information of the torso were calculated. A calculation method based on the torso and wing feature points was established to obtain the 3D posture characteristics of H. cunea (a minimal sketch of the wing-plane step follows this list).
- A method for verifying the accuracy of the extracted 3D posture of H. cunea was established: measurements from a metrology-grade 3D scanner (Artec Micro, Santa Clara, CA, USA) were used to verify the posture information obtained by monocular vision.
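To make the wing-plane step concrete, here is a minimal NumPy sketch, not the authors' implementation: it fits a least-squares plane to each wing's 3D key points via SVD, takes the angle between the two plane normals as the angle between the wings, and takes the cross product of the normals as the direction vector of the intersection line of the wing planes. The function names and the per-wing key-point arrays are illustrative assumptions, and because the normal-based angle lies in [0°, 180°], reproducing reflex wing angles (>180°) would require an additional orientation convention not shown here.

```python
import numpy as np

def fit_plane(points):
    # Least-squares plane through an (N, 3) array of 3D key points:
    # the right singular vector with the smallest singular value of
    # the centered points is the plane's unit normal.
    centroid = points.mean(axis=0)
    _, _, vt = np.linalg.svd(points - centroid)
    return vt[-1], centroid

def wing_pose(left_pts, right_pts):
    # Angle between the two wing planes and the direction of their
    # intersection line (cross product of the plane normals).
    n1, _ = fit_plane(left_pts)
    n2, _ = fit_plane(right_pts)
    cos_a = np.clip(np.dot(n1, n2), -1.0, 1.0)
    angle_deg = np.degrees(np.arccos(cos_a))
    direction = np.cross(n1, n2)
    return angle_deg, direction
```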
2.4.2. Image Preprocessing
2.4.3. Approximate Location of Key Points
2.4.4. Precise Positioning and Matching of Key Points
2.4.5. Extraction of 3D Posture Features of the Torso
2.4.6. Calculation of the 3D Posture Information
2.5. Accuracy Verification of the 3D Information Extraction
3. Results and Discussion
3.1. Results
3.2. Accuracy Verification
3.3. Discussion
4. Conclusions
Author Contributions
Funding
Institutional Review Board Statement
Informed Consent Statement
Data Availability Statement
Acknowledgments
Conflicts of Interest
References
Sample | Angle between the Wings (°) | Direction Vector of the Intersection Line of the Wing Planes
---|---|---
1 | 103.3151 | 1 × 10¹⁰ × (−0.3349, −0.0415, −1.6122)
2 | 286.5005 | 1 × 10⁹ × (1.0672, −0.1245, 0.8172)
3 | 127.7247 | 1 × 10¹¹ × (0.0041, 0.0049, −1.1289)
4 | 102.4134 | 1 × 10¹¹ × (0.2565, 0.0142, 1.2537)
5 | 72.4769 | 1 × 10¹⁰ × (−0.5009, 0.2267, 7.0706)
6 | 150.6711 | 1 × 10¹⁰ × (−2.7502, 0.1524, −5.3349)
7 | 342.7297 | 1 × 10¹⁰ × (−4.5790, −0.3979, −4.2846)
8 | 172.2616 | 1 × 10¹⁰ × (0.0630, 2.3785, −6.0611)
9 | 271.8351 | 1 × 10¹⁰ × (0.9570, 2.2869, 9.5959)
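The direction vectors above are reported with large scale factors (10⁹ to 10¹¹) inherited from the fitted plane coefficients; only their direction is meaningful. As an illustrative check using sample 1's vector from the table, normalizing removes the arbitrary magnitude so directions can be compared across samples:

```python
import numpy as np

# Sample 1 direction vector from the table, including its 1e10 scale factor.
v = 1e10 * np.array([-0.3349, -0.0415, -1.6122])
unit = v / np.linalg.norm(v)  # scale-free unit direction
print(np.round(unit, 4))      # [-0.2033 -0.0252 -0.9788]
```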
Sample | Head Coordinates | Tail Coordinates | Fitting Equations of the Torso Edges
---|---|---|---
1 | (−50, −20, 280) | (0, 0, 752) | z = −0.0013y³ + 0.867y² − 2.4018y + 764.8990; z = −0.0398y³ − 1.5512y² − 1.9434y + 855.0992; z = −0.0013x³ + 0.867x² − 2.4018x + 764.8990; z = −0.0398x³ − 1.5512x² − 1.9434x + 855.0992
2 | (0, 0, 322) | (0, 0, 632) | z = −0.0226x² − 1.1978x + 661.8303; z = −0.1110x² + 2.4081x + 645.0075; z = −0.0110y² − 3.5920y + 668.6825; z = −0.1332y² − 3.1020y + 652.3501
3 | (34, −23, 194) | (0, 0, 487) | z = 0.0007y³ − 0.1996y² − 6.9745y + 433.1011; z = 0.0024y³ − 0.2119y² − 0.5866y + 498.5497; z = 0.0007x³ − 0.1996x² − 6.9745x + 433.1011; z = 0.0024x³ − 0.2119x² − 0.5866x + 498.5497
4 | (24, −43, 180) | (0, −21, 479) | z = −0.0001y³ − 0.0185y² − 0.5754y + 516.1118; z = −0.0381y³ + 0.9691y² + 2.1505y + 372.4526; z = −0.0001x³ − 0.0185x² − 0.5754x + 516.1118; z = −0.0381x³ + 0.9691x² + 2.1505x + 372.4526
5 | (0, 0, 380) | (0, −10, 491) | z = −0.7636y³ + 65.3431y² − 1842.6824y + 17,521.4701; z = 0.0013y³ + 0.0429y² + 0.4754y + 482.8186; z = −0.7636x³ + 65.3431x² − 1842.6824x + 17,521.4701; z = 0.0013x³ + 0.0429x² + 0.4754x + 482.8186
6 | (−220, 0, 980) | (0, 35, 1730) | z = 0.0001y³ − 0.0176y² − 1.6880y + 1753.9989; z = 0.0730y³ + 12.3106y² + 675.1220y + 13,564.7679; z = 0.0001x³ − 0.0176x² − 1.6880x + 1753.9989; z = 0.0730x³ + 12.3106x² + 675.1220x + 13,564.7679
7 | (−130, 40, 1300) | (0, −30, 1990) | z = −0.3541y² + 38.6749y + 1014.3517; z = −0.0397y² + 1.6088y + 1984.9320; z = 0.1512x² − 33.5388x + 3533.1131; z = −0.2801y² − 32.5045y + 993.5104
8 | (0, 0, 1030) | (0, 0, 1852) | z = −0.0844y² + 2.2203y + 1814.2713; z = −0.0145y² + 0.6312y + 1893.2772; z = −0.0844x² + 2.2203x + 1814.2713; z = −0.0145x² + 0.6312x + 1893.2772
9 | (80, −80, 700) | (0, 41, 1616) | z = −0.0281y² − 0.5978y + 1605.8748; z = −0.0486y² − 0.9896y + 1660.7973; z = −0.0281x² − 0.5978x + 1605.8748; z = −0.0486x² − 0.9896x + 1660.7973
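The torso-edge equations above are low-order (quadratic or cubic) polynomial fits of edge coordinates. A minimal sketch, using hypothetical (y, z) edge samples rather than the paper's data, shows how such a cubic fit z = f(y) is obtained with NumPy:

```python
import numpy as np

# Hypothetical torso-edge samples (pixel coordinates); in practice the
# edge points would come from the segmented torso contour.
y = np.array([-40.0, -20.0, 0.0, 20.0, 40.0, 60.0])
z = np.array([700.0, 741.0, 752.0, 736.0, 695.0, 628.0])

coeffs = np.polyfit(y, z, deg=3)   # cubic fit: [a3, a2, a1, a0]
z_hat = np.polyval(coeffs, y)      # fitted edge curve z = f(y)
print(np.round(coeffs, 4))
```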
Sample | Measured Value (°) | Reference Value (°) | Absolute Error (°) | Relative Error (%)
---|---|---|---|---
1 | 103.3151 | 106.5433 | −3.2282 | 3.03 |
2 | 286.5005 | 287.4093 | −0.9088 | 0.32 |
3 | 127.7247 | 129.6543 | −1.9296 | 1.49 |
4 | 102.4134 | 104.6035 | −2.1901 | 2.09 |
5 | 72.4769 | 73.7544 | −1.2775 | 1.73 |
6 | 150.6711 | 152.9886 | −2.3175 | 1.51 |
7 | 342.7297 | 340.9005 | 1.8292 | 0.54 |
8 | 172.2616 | 173.5770 | −1.3154 | 0.76 |
9 | 271.8351 | 270.4576 | 1.3775 | 0.51 |
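The error columns follow directly from the two angle columns: the absolute error is the measured value minus the reference value, and the relative error is taken against the reference value. A short NumPy check reproduces the table (sample 1: −3.2282° and 3.03%):

```python
import numpy as np

measured  = np.array([103.3151, 286.5005, 127.7247, 102.4134, 72.4769,
                      150.6711, 342.7297, 172.2616, 271.8351])
reference = np.array([106.5433, 287.4093, 129.6543, 104.6035, 73.7544,
                      152.9886, 340.9005, 173.5770, 270.4576])

abs_err = measured - reference                 # sample 1: -3.2282 deg
rel_err = np.abs(abs_err) / reference * 100.0  # sample 1: 3.03 %
print(np.round(abs_err, 4), np.round(rel_err, 2))
```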
Item | Mean (°) | Standard Deviation (°) | t Value | df | Sig. (Two-Sided)
---|---|---|---|---|---
Paired Difference (Measured − Reference) | −1.1067 | 1.6852 | −1.970 | 8 | 0.084
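The paired t-test above can be reproduced from the absolute-error column of the previous table, since a one-sample t-test of the nine differences against zero is equivalent to the paired test on the measured/reference pairs; here is a SciPy sketch:

```python
import numpy as np
from scipy import stats

# Differences (measured - reference) from the accuracy table above.
diff = np.array([-3.2282, -0.9088, -1.9296, -2.1901, -1.2775,
                 -2.3175, 1.8292, -1.3154, 1.3775])

t, p = stats.ttest_1samp(diff, 0.0)  # equivalent to the paired t-test
print(round(t, 3), round(p, 3))      # ~ -1.970, 0.084: not significant at alpha = 0.05
```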
Marker Point | Measured Value of Coordinate Z (px) | Calculated Value of Coordinate Z (px) | Absolute Error (px) | Relative Error (%)
---|---|---|---|---
1 | 386 | 404 | 18 | 4.66 |
2 | 397 | 421 | 24 | 6.04 |
3 | 424 | 427 | 3 | 0.71 |
4 | 452 | 460 | 8 | 1.77 |
5 | 540 | 552 | 12 | 2.22 |
6 | 564 | 571 | 7 | 1.24 |