Algorithm for Extracting the 3D Pose Information of Hyphantria cunea (Drury) with Monocular Vision

Currently, the robustness of pest recognition algorithms based on sample augmentation with two-dimensional images is negatively affected by moth pests with different postures. Obtaining three-dimensional (3D) posture information of pests can provide a basis for 3D model deformation and generate training samples for deep learning models. In this study, a method for extracting the 3D posture information of Hyphantria cunea (Drury) based on monocular vision is proposed. Four images of every collected H. cunea sample were taken at 90° intervals. The 3D posture information of the wings was extracted using boundary tracking, edge fitting, precise positioning and matching, and calculation. The 3D posture information of the torso was obtained by edge extraction and curve fitting. Finally, the 3D posture information of the wings and abdomen obtained by this method was compared with that obtained by a metrology-grade 3D scanner. The results showed that the relative error of the wing angle was between 0.32% and 3.03%, the root mean square error was 1.9363, and the average relative error of the torso measurements was 2.77%. The 3D posture information of H. cunea can provide important data support for sample augmentation and species identification of moth pests.


Introduction
Hyphantria cunea (Drury) belongs to the family Arctiidae of the order Lepidoptera. It is a worldwide pest that feeds on a wide range of plants, reproduces prolifically, can cause devastating damage to agricultural and forestry crops, and is difficult to control and manage [1]. Acquisition of occurrence information of H. cunea is an important prerequisite for its early detection and accurate control. With the development of machine vision technology, its application in pest identification is increasingly common. However, it remains difficult to identify multi-pose pests [2]. Lv et al. [3] developed an identification method for multiobjective rice light-trap pests based on template matching. They obtained an accuracy of 83.1% for multi-template matching, higher than that of single-template matching (59.9%). Li et al. [4][5][6] developed three methods for multi-pose pest classification using two-dimensional (2D) image information: feature extraction and classification of multi-pose pests based on machine vision, automatic identification of orchard pests based on posture description, and fuzzy classification of orchard pest posture based on Zernike moments. These classification methods achieved good recognition results under laboratory conditions. In recent years, deep learning has been gradually applied to improve the identification accuracy of pests because of its outstanding advantages. Wen et al. [2] used the structural similarity index to estimate the posture of apple moth pests, built a deep neural network for moth identification based on an improved pyramidal stacked denoising autoencoder architecture, and achieved an identification accuracy of 96.9%. Ding and Taylor [7] proposed an automatic detection pipeline based on deep learning for identifying codling moths in field traps and achieved a recognition accuracy of 93.1%. Chen et al.
[8] proposed a method for segmentation and counting of aphid nymphs on pak choi leaves using convolutional neural networks and achieved high counting precision. Cheng et al. [9] proposed a pest identification method based on deep residual learning for complex farmland backgrounds; the classification accuracy for 10 crop pests was 98.67%. Xie et al. [10] designed a multilevel fusion classification framework for field crop pests aligned with a multilevel deep feature learning model, and the results on 40 common field crop pest species showed that the multilevel learning feature model outperformed state-of-the-art pest classification methods. Shen et al. [11] developed an improved Inception network to extract feature maps and used a Faster Region-Based Convolutional Neural Network (R-CNN) to classify stored-grain insects, achieving a mean average precision of 88%. Sun et al. [12] trained a Faster R-CNN model optimized with the K-means clustering algorithm to detect the red turpentine beetle in unconstrained postures; the area under the curve for object and trap on all test sets reached 0.9350 and 0.9722, respectively. A two-layer Faster R-CNN was proposed to detect the brown rice planthopper (Nilaparvata lugens Stål); the accuracy and recall of the detection model reached 94.5% and 88.0%, respectively, and the detection result was much better than that of the YOLO v3 algorithm [13]. Wang et al. [14] proposed a novel two-stage mobile-vision-based cascading pest detection approach (DeepPest), in which pest images were first classified into crop categories based on multi-scale contextual information, and a multi-projection pest detection model was then trained with crop-related pest images. Jiao et al. [15] proposed an anchor-free region convolutional neural network (AF-RCNN) to classify pests in an end-to-end manner; the AF-RCNN obtained 56.4% mean average precision and 85.1% mean recall on a dataset of 24 pests. Khanramaki et al. [16] used an ensemble of deep learning models to classify three citrus pests; diversity at the classifier, feature, and data levels was comprehensively considered, and the recognition accuracy was 99.04%.
The main problem of deep learning models for pest identification is the lack of a large number of training samples. Currently, 2D images are primarily used for geometric and light-intensity transformations in sample augmentation. The posture diversity of the pest training samples is therefore limited, which makes it difficult to meet the requirements of deep learning training. Deformation simulation based on three-dimensional (3D) data does not cause information loss when the object posture changes and has strong robustness in multi-pose object recognition [17,18].
The application of 3D technology to insects primarily includes 3D model reconstruction, insect flight posture, and 3D posture estimation. Machine vision, confocal laser scanning microscopy (CLSM), magnetic resonance imaging (MRI), and micro-computed tomography (micro-CT) are the primary methods for 3D model reconstruction of insects. The 3D reconstruction of insects based on machine vision primarily uses monocular or stereo vision with a rotating platform to acquire insect images and 3D reconstruction software to reconstruct adult insects [19][20][21][22][23]. CLSM, MRI, and micro-CT are primarily used to reconstruct 3D models of local insect organs or larvae [24][25][26][27][28][29][30]. For insect flight posture, the kinematic parameters of insect hovering and autonomous flight are obtained for the development of micro-bionic aircraft. Yu and Sun used numerical methods to solve the Navier–Stokes equations to study the aerodynamic interaction between the wings and the body of a model insect during flight; when hovering, the influence of the body–wing interaction and of the interaction between the wings on the two sides was less than 3% and less than 2%, respectively [31]. Chen and Sun [32,33] reconstructed and analyzed the kinematic parameters of the wings and body of a fly during rapid take-off and autonomous flight based on the contour information of images obtained by three orthogonally placed high-speed cameras. Huang [34] reconstructed the 3D shells of insects and used their projections to estimate insect posture; the flapping angle curve was obtained using a fly dataset to verify the posture estimation algorithm, and the flapping and swing angles showed similar regular variations. Lv et al. [35] proposed an effective method of obtaining target 3D posture based on lidar, which can simplify recognition and improve the recognition rate. There are few studies on 3D posture applied to insect recognition.
Zhang et al. [36] manually marked the key points of insect wings, extracted the spatial coordinates of the marked feature points of moth forewings based on the Harris corner detection method, and obtained the angle of the forewings through calculation. Moreover, Chen et al. [37] designed an insect recognition device and method based on 3D posture estimation.
Three-dimensional posture information extraction is an important basis for 3D model deformation of insects. However, so far, there are few studies on the extraction of 3D insect postures [36], and these rely primarily on manual marking. In this study, we propose a scheme to extract the 3D posture information of H. cunea based on machine vision and quantify the 3D posture characteristics of H. cunea, which provides an information source for the subsequent construction of a 3D deformation calculation method for feature-preserving augmentation of moth pest samples. Additionally, the method presented in this study offers an effective way to generate large 3D posture datasets for currently popular deep learning models.

Sampling of H. cunea
A self-developed automatic pest monitoring device was used to obtain the samples of H. cunea from Xiaotangshan Precision Agriculture Demonstration Base in Changping District, Beijing, China. We aimed to collect samples of H. cunea with different postures so that the images of the samples could facilitate subsequent data processing and improve accuracy.

Definition of 3D Posture Information of H. cunea
The posture change of lepidopteran pests is caused by the rotation of the wings around the humeral angle and the deformation of the torso. According to the characteristics of the shape and posture changes of H. cunea, its 3D posture information was divided into that of the wings and that of the torso. The 3D posture of the wings is defined by the angle between the wings on the dorsal side of the insect, namely by the intersection line of the planes in which the wings lie. Figure 1 shows images of the ventral and dorsal sides of H. cunea samples in various postures.

The wings of H. cunea include the fore and hind wings, which are nearly triangular and comprise three margins and three angles. The three angles are the humeral, apical, and anal angles (Figure 2). In a general posture, the fore wings of H. cunea cover the hind wings, and the fore and hind wings are almost in the same plane. Moreover, the shape of the hind wings exposed to the outside is similar to that of the fore wings and is almost triangular. In this experiment, the hind wings were ignored: the fore and hind wings were combined into a single plane for calculation, and three inflection points of the wing contour curve were selected as the three key points. The positions of these three points are not the strictly defined humeral, apical, and anal angles of the fore wing.

The torso of a moth pest includes the head, thorax, and abdomen. Figure 3 shows images of the abdomen of H. cunea in different postures. The torso posture change is caused by abdominal distortion or bending deformation. The posture of H. cunea is primarily of two types. The first is with the wings open: the angle between the two wings is more than 180°, and every part of the abdomen is exposed. The second is with the wings retracted: the angle between the two wings is less than 180°, and the abdomen is visible from the front, but the back is occluded. Therefore, the 3D posture information of the torso must be extracted by one of two methods according to whether the wing angle is greater than 180°.

Image Data Acquisition
The image acquisition platform is shown in Figure 4. The background plate was white squared paper with a 1 cm grid, and the insect specimens were fixed vertically to the axis of rotation. The background plate was used for converting pixel coordinates to the corresponding actual coordinates in the later stage. In the formal collection of H. cunea sample images, the squared paper plate with a white background was replaced with a black background plate to facilitate the differentiation between the background and the H. cunea and to extract the features of H. cunea. The insect needle and the axis of rotation were connected by a graduated rotating platform, and the insect needle rotated along with the platform. The camera used was a Panasonic DMC-GH4 (Panasonic, Suzhou, China) with a resolution of 2448 × 2448 pixels. The focal length of the camera was 60.0 mm, and the aperture value was f/2.8. The light source for insect image acquisition was an RL100-75, a white shadowless ring light source. The ring light source and the camera were fixed while the insect sample photos were taken, and the insect needle was rotated, starting from the dorsal side of the H. cunea sample. A photo was taken at every 90° of rotation; a total of 4 photos were taken for each H. cunea sample. To minimize distortion, the camera image plane was kept parallel to the base plane of the image acquisition platform.
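The pixel-to-real-coordinate conversion enabled by the 1 cm background grid can be sketched as follows. This is a minimal illustration in Python (the original pipeline was implemented in MATLAB); the function name and the pixels-per-centimetre value in the example are assumptions, not details from the original setup.

```python
def pixels_to_mm(pixel_length, pixels_per_cm):
    """Convert a length measured in pixels to millimetres, using the
    number of pixels spanned by one 1 cm square of the background grid."""
    return pixel_length * 10.0 / pixels_per_cm

# Example: if one grid square spans 50 pixels, a 100-pixel length is 20 mm.
length_mm = pixels_to_mm(100, 50)
```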

Overall Workflow
The image processing software used was MATLAB R2017a (MathWorks, Natick, MA, USA), the operating system was Windows 10 (Microsoft, Redmond, WA, USA), and the PC processor was an Intel Core i7-7700 CPU at 3.60 GHz. The extraction method of the 3D posture information of H. cunea was as follows:
1. The characteristics of the 3D posture change of multi-pose H. cunea were investigated, and the key feature points and the extraction and localization methods for torso and wing posture information were determined.
2. The 3D posture information of the H. cunea was extracted. The key points were located, the 3D coordinates of the key points were obtained, and the wing angle, the intersection equation of the wing planes, and the bending information of the torso were calculated. The calculation method for the torso and wing feature points was established to obtain the 3D posture characteristics of H. cunea.
3. A validation method for the accuracy of the 3D posture of H. cunea was established: a metrology-grade 3D scanner (Artec Micro, Santa Clara, CA, USA) was used to verify the posture information obtained by monocular vision.

The entire extraction process is shown in Figure 5.

Image Preprocessing
The purpose of image preprocessing is to simultaneously extract the features of the target object and suppress the interference of non-target objects.
The color image of H. cunea was transformed into a gray image using the rgb2gray() function. The speed of image processing was improved by removing the color information of the image while retaining the brightness information. In this study, median filtering was used to remove noise from the image and to obtain edge information on H. cunea. Median filtering is a type of order-statistics filtering suitable for extracting the edge of an image. The median filter is expressed in Equation (1) [38]:

B(x, y) = med{ A(x + m, y + n) }, (m, n) ∈ W    (1)

where A is the original image, B is the processed image, and W is the two-dimensional template [m, n], a 3 × 3 filter window by default.
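The median filtering step of Equation (1) can be sketched in Python as follows (the original study used MATLAB; this pure-Python 3 × 3 median filter is an illustrative stand-in, with borders handled by clamping the window to the image):

```python
def median_filter_3x3(img):
    """Apply a 3x3 median filter to a 2D grayscale image given as a
    list of lists. Each output pixel is the median of its neighbourhood;
    at the borders the window is clamped to the image extent."""
    h, w = len(img), len(img[0])
    out = [[0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            window = [
                img[yy][xx]
                for yy in range(max(0, y - 1), min(h, y + 2))
                for xx in range(max(0, x - 1), min(w, x + 2))
            ]
            window.sort()
            out[y][x] = window[len(window) // 2]
    return out
```

Isolated impulse noise (a single bright pixel) is removed because it never reaches the median of its 3 × 3 neighbourhood, while step edges are preserved better than with mean filtering.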
Based on median filtering, the image was converted to a binary image through the application of erosion and dilation, as shown in Figure 6. Then the bwtraceboundary() function was used for edge tracking, and all edge points were saved.
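As a simplified stand-in for MATLAB's bwtraceboundary(), the sketch below only collects the unordered set of edge pixels of a binary image (foreground pixels with at least one 4-connected background neighbour); bwtraceboundary() additionally returns the boundary points in traversal order.

```python
def edge_points(binary):
    """Collect the edge pixels of a binary image: foreground pixels (1)
    that have at least one 4-connected neighbour which is background (0)
    or lies outside the image. Returns (x, y) coordinate pairs."""
    h, w = len(binary), len(binary[0])
    pts = []
    for y in range(h):
        for x in range(w):
            if not binary[y][x]:
                continue
            for dy, dx in ((-1, 0), (1, 0), (0, -1), (0, 1)):
                yy, xx = y + dy, x + dx
                if yy < 0 or yy >= h or xx < 0 or xx >= w or not binary[yy][xx]:
                    pts.append((x, y))
                    break
    return pts
```

For a solid 4 × 4 foreground block, the 12 border pixels are returned and the 2 × 2 interior is skipped.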

Approximate Location of Key Points
The target key points were the inflection points of the contour. The method in this study was to extract edge features, collect edge points, and synthesize the edge points into curves or straight lines. Then, the intersections of the lines were used as the reference points for the key points. The key point was located near the reference point, but the key point must be located precisely.
The specific experimental scheme was as follows. According to the posture characteristics of H. cunea, the 3D posture characteristics of its wings were divided into two types. In the first case, the wings of the H. cunea were withdrawn, and the angle of the wings was less than 180°. First, the images of H. cunea were preprocessed to provide a basis for subsequent detection and localization. As shown in Figure 6, the image was converted to a binary image, and erosion and dilation were used to eliminate the influence of insect needles and other non-wing contours. Then the bwtraceboundary() function was used for edge tracking, given a search starting point and search direction. The function returned an array of boundary coordinates. Figure 7 shows the edge tracking result. The coordinate points after tracking and extraction were saved as an Excel table.

The second step was edge fitting. Too many edge points were stored from each picture, which would not improve the fitting accuracy and would increase the fitting time. Thus, a step size of 10 was set for the edge points, and the coordinates of one point per step were saved. In this way, n_i edge points could be screened out for edge fitting.

When the wing angle of the H. cunea was less than 180°, the anal angle points of the two wings overlapped. For the dorsal image, according to the locations of the five reference points of the humeral, apical, and anal angles, the n_i points had to be divided into five sections, and the five reference points had to be fitted. The edge points were segmented by traversing all of them. The maximum and minimum points of x and y in the coordinates (x, y) were selected; that is, the four most marginal edge points, among them A1, A2, and A4. The points with an ordinate larger than that of their left and right neighbours were selected as A3 and A9, respectively. Points A5 and A6 were selected according to (l_A1-A3)/(l_A1-A5) = 5 and (l_A1-A9)/(l_A1-A6) = 5. The point A8 with the lowest y-coordinate was selected after traversing l_A3-A9. Finally, A5, A6, A3, A8, and A9 were used as the segmentation points, and the images in Figure 8 were obtained after edge fitting. The intersection of the two fitted lines was denoted as the reference point of the key point. Similarly, the n_i points of the right-side view image, rotated 90° clockwise, were divided into 4 sections, and 3 key points could be obtained by fitting the curve. The minimum point of x and the maximum and minimum points of y in the coordinates (x, y) of the right view image were selected; that is, the three most marginal points B1, B2, and B3. Points B4 and B5 were selected according to (l_B1-B3)/(l_B1-B5) = 5 and (l_B1-B2)/(l_B1-B4) = 5. B2, B3, B4, and B5 were used as the segmentation points, and the image in Figure 8d was obtained after edge fitting. The intersection of the two fitted lines was denoted as the reference point of the key point.

In the second case, with the wings of H. cunea unfolded, the angle of the wings was more than 180°, and there was no triangular contour in the dorsal view image. The right and left view images were obtained as the H. cunea was rotated by 90°, and the approximately triangular wings were clearly visible. The outline of the humeral angle point was calculated based on the above methods, and the key points were approximately positioned after fitting the edge points of the right and left view images. The anal angle point was the lowest point of the wing profile, that is, the point with the lowest y value. The apical angle of the right wing was the leftmost contour point, that is, the point with the lowest x value, and the apical angle of the left wing was the rightmost contour point, that is, the point with the maximum x value. Figure 9 shows the schematic diagrams of the left-wing and right-wing humeral angle fitting.
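The edge-fitting step above (downsampling with a step size of 10, fitting a line to each edge segment, and taking the intersection of two fitted lines as a reference point) can be sketched as follows. This Python least-squares version is illustrative; the original fitting was done in MATLAB and may have used a different line model.

```python
def fit_line(pts):
    """Least-squares fit of y = a*x + b through 2D points (x, y)."""
    n = len(pts)
    sx = sum(p[0] for p in pts)
    sy = sum(p[1] for p in pts)
    sxx = sum(p[0] * p[0] for p in pts)
    sxy = sum(p[0] * p[1] for p in pts)
    a = (n * sxy - sx * sy) / (n * sxx - sx * sx)
    b = (sy - a * sx) / n
    return a, b

def reference_point(segment1, segment2):
    """Reference point of a key point: the intersection of the two
    lines fitted to adjacent edge segments."""
    a1, b1 = fit_line(segment1)
    a2, b2 = fit_line(segment2)
    x = (b2 - b1) / (a1 - a2)
    return x, a1 * x + b1

# Downsampling with a step size of 10 before fitting:
# sampled = edge_pts[::10]
```

Two segments lying on the lines y = x and y = 4 − x, for example, give the reference point (2, 2).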

Precise Positioning and Matching of Key Points
The precise method consisted of calculating the reference points of the humeral, apical, and anal angle points on each wing. Then, the 10 points closest to the reference point among all edge points were traversed, and the center point of these 10 points was calculated; that is, the precise coordinates of the key point were obtained according to Equation (2):

X = (x_1 + x_2 + … + x_10)/10, Y = (y_1 + y_2 + … + y_10)/10    (2)

where X and Y are the coordinates of the center point on the X and Y axes, respectively, and x_i and y_i are the coordinates of the edge points on the X and Y axes. Since the camera was fixed and a picture was taken each time the insect needle rotated 90° clockwise, the abscissa of a key point changed between views, but its ordinate remained the same (i.e., the y values of two matched key points were equal). Owing to the error in finding the intersection point of the fitted edges, the y coordinates of the corresponding key points in the two pictures may not be exactly equal; the two y values were therefore averaged to obtain the final y value of the key points in the dorsal and right-side views. Finally, the coordinates (x_l1, y_l1), (x_bl1, y_l1) and (x_br1, y_r1), (x_r1, y_r1) of the two pairs of humeral angle points, the coordinates (x_l2, y_l2), (x_bl2, y_l2) and (x_br2, y_r2), (x_r2, y_r2) of the two pairs of anal angle points, and the coordinates (x_l3, y_l3), (x_bl3, y_l3) and (x_br3, y_r3), (x_r3, y_r3) of the two pairs of apical angle points were obtained.
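Equation (2) amounts to taking the centroid of the 10 edge points nearest to the reference point. A Python sketch (the original implementation was in MATLAB):

```python
def refine_key_point(ref, edge_pts, k=10):
    """Refine a reference point: take the k edge points nearest to it
    (k = 10 in the paper) and return their centroid, as in Equation (2)."""
    nearest = sorted(
        edge_pts,
        key=lambda p: (p[0] - ref[0]) ** 2 + (p[1] - ref[1]) ** 2,
    )[:k]
    X = sum(p[0] for p in nearest) / len(nearest)
    Y = sum(p[1] for p in nearest) / len(nearest)
    return X, Y
```

Averaging over the nearest edge points damps the localisation error left by the line-intersection step.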

Extraction of 3D Posture Features of the Torso
If the angle of the wings of the H. cunea was greater than 180°, its abdomen was visible. The abdominal image was obtained when H. cunea was rotated 180° from the dorsal view, and another image was obtained when H. cunea was rotated a further 90°. Curve contour fitting was performed on the middle part, as shown in Figure 10. Finally, two points and four curve segments were combined to form a 3D frame.

If the angle of the wings was less than 180°, the abdomen was not completely visible, and half of it was covered by the wings. In this case, the abdomen image could not be collected unless the wings of the insect were removed. Twenty H. cunea samples with a wing angle smaller than 180° were selected, and their wings were removed to expose their abdomens. Two view pictures were obtained, and the ratio of the widths, at the same ordinate value, of the part covered by the wing and the part not covered by the wing was calculated. Because the partial error of the abdominal occlusion had little influence on the experimental results, the 3D posture information of the edge position and bending of the occluded part of the H. cunea could be calculated from the side that was not occluded.
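The width-ratio compensation described above can be sketched as follows; the function names and the per-ordinate list representation are assumptions about how the proportion was applied, since the original MATLAB implementation is not shown.

```python
def width_ratios(covered_widths, uncovered_widths):
    """Per-ordinate ratio between the width of the wing-covered part
    and the uncovered part, measured on the 20 de-winged reference
    samples (one value per ordinate row)."""
    return [c / u for c, u in zip(covered_widths, uncovered_widths)]

def estimate_occluded_width(uncovered_width, ratio):
    """Recover the occluded width at one ordinate from the visible
    (uncovered) width using the precomputed ratio."""
    return uncovered_width * ratio
```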

The position of H. cunea in space is shown in Figure 11, and the coordinate system was established as shown in Figure 12. The relationship between the 2D point coordinates and the 3D point coordinates was obtained for the right wing (Equation (3)) and the left wing (Equation (4)). In these formulas, X_r is the abscissa of the insect needle in the right-wing image, X_l is the abscissa of the insect needle in the left-wing image, Y is the abscissa of the insect needle in the dorsal view, x_r is the 2D abscissa of the center point of the right-side view image, x_l is the 2D abscissa of the center point of the left-side view image, x_br is the 2D abscissa of a point on the right wing in the dorsal view image, x_bl is the 2D abscissa of a point on the left wing in the dorsal view image, and y is the corresponding average ordinate of the two viewing angles.
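Equations (3) and (4) are not reproduced in this excerpt. Under the stated setup (fixed camera, 90° rotations about the insect needle), a common construction takes the lateral coordinate from the dorsal view as an offset from the needle abscissa and the depth from the side view as an offset from the needle abscissa in that view. The sketch below assumes exactly that; the signs, offsets, and function name are assumptions, not the paper's exact formula.

```python
def to_3d_right(x_br, x_r, y, Y, X_r):
    """Map a right-wing key point from two views to assumed 3D coordinates.

    Assumption (not the paper's verified Equation (3)): the dorsal view
    gives the lateral coordinate as x_br - Y (offset from the needle),
    the right-side view gives the depth as x_r - X_r (offset from the
    needle in that view), and the averaged ordinate y is shared.
    """
    return (x_br - Y, y, x_r - X_r)
```

The left-wing mapping (Equation (4)) would be the mirror-image expression using x_bl, x_l, and X_l.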
According to Equations (3) and (4), the 3D coordinates of each key point were obtained, giving two sets of coordinates (six coordinate points) for each H. cunea sample. Then, the plane of each wing was determined from its three points, the normal vector of each plane was computed, and the angle between the two planes, that is, the angle between the two wings, was calculated. Solving the two plane equations simultaneously yielded the equation of their intersection line and its direction vector.
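The plane-and-angle computation above can be sketched with numpy. This is an illustrative fragment (the function name is not from the paper): each wing plane is spanned by its three key points, the normals come from cross products, and the intersection-line direction is the cross product of the two normals.

```python
import numpy as np

def wing_angle_and_axis(right_pts, left_pts):
    """Angle between the two wing planes and their intersection direction.

    Each argument holds three 3D key points of one wing (humeral, anal
    and apical angle points).  The inter-plane angle comes from the dot
    product of the plane normals; the direction vector of the
    intersection line from their cross product.
    """
    r, l = np.asarray(right_pts, float), np.asarray(left_pts, float)
    n_r = np.cross(r[1] - r[0], r[2] - r[0])   # right-wing plane normal
    n_l = np.cross(l[1] - l[0], l[2] - l[0])   # left-wing plane normal
    cosang = np.dot(n_r, n_l) / (np.linalg.norm(n_r) * np.linalg.norm(n_l))
    angle = np.degrees(np.arccos(np.clip(cosang, -1.0, 1.0)))
    axis = np.cross(n_r, n_l)                  # intersection-line direction
    return angle, axis
```

Note that arccos only returns angles in [0°, 180°]; the reflex wing angles above 180° reported in Table 1 would need an additional orientation check (e.g., the sign of the normals relative to the dorsal direction).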
The establishment of the coordinate system and the conversion formula between the 2D and 3D coordinate systems for the torso were the same as those for the wings. Finally, four 3D curve equations and the 3D coordinates of the head and tail were obtained; these constituted the 3D posture information of the torso.

Accuracy Verification of the 3D Information Extraction
To verify the feasibility and reliability of the proposed method, the results of the study were compared with reference values measured by a Metrology-grade 3D scanner with a point accuracy of up to 10 microns. The reference values were the angles between the wings of the H. cunea samples. The comparison of the two methods verified the reliability of the proposed method for extracting the 3D posture information of H. cunea.
Paired samples refer to the same samples tested twice at different times, or to two samples with similar test records, whose differences are then compared. In this study, each H. cunea sample was measured by both methods; therefore, the data of the two methods could be paired, meeting the conditions of a paired t-test. To verify the accuracy of the proposed method and measure the calculation deviation, the root mean square error (RMSE) was also calculated: the smaller the error, the higher the accuracy.
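The two statistics used for the comparison can be computed as follows. This is a minimal stdlib sketch (function name illustrative; the values below are not the paper's data): the paired t statistic is the mean of the per-sample differences divided by its standard error, and the RMSE is the quadratic mean of the same differences.

```python
import math

def paired_t_and_rmse(measured, reference):
    """Paired t statistic and RMSE between the two measurement methods.

    `measured` and `reference` hold values for the same samples, so the
    per-sample differences d_i are what both statistics operate on.
    """
    d = [m - r for m, r in zip(measured, reference)]
    n = len(d)
    mean_d = sum(d) / n
    var_d = sum((x - mean_d) ** 2 for x in d) / (n - 1)  # sample variance
    t_stat = mean_d / math.sqrt(var_d / n) if var_d > 0 else float("inf")
    rmse = math.sqrt(sum(x * x for x in d) / n)          # quadratic mean
    return t_stat, rmse
```

The t statistic would then be compared against the t distribution with n − 1 degrees of freedom to decide whether the two methods differ significantly.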
During the extraction and verification of the 3D abdominal information of H. cunea, for each fitting equation, a number of points were sampled along its corresponding abdominal edge at a fixed interval. A Metrology-grade 3D scanner was used to measure these points and obtain their coordinates. When x = 0, y was substituted into the torso equation to obtain z_1; when y = 0, x was substituted into the equation to obtain z_1. Then z_1 was compared with the coordinate z measured by the Metrology-grade 3D scanner.
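The point-by-point comparison against the scanner can be sketched as a relative-error computation. Names and the lambda below are illustrative, not from the paper.

```python
def verify_torso_curve(curve, scanner_points):
    """Relative error of a fitted torso edge curve against scanner data.

    `curve` maps an abscissa to the fitted z_1; `scanner_points` are
    (coordinate, z) pairs measured with the 3D scanner along the same
    edge.  Returns the per-point and mean relative errors in percent.
    """
    errors = [abs(curve(u) - z) / abs(z) * 100.0 for u, z in scanner_points]
    return errors, sum(errors) / len(errors)
```

Applied to each of the four fitting equations, this yields per-point relative errors of the kind reported in Table 5.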

Results
In the study, four images were taken of each H. cunea sample; for a total of nine samples, 36 images were obtained. Figure 13 presents the key-point matching results for different wing angles. As shown in Figure 13, the key points of wings with different angles were well identified, and the corresponding fitted lines were reasonably accurate. The angle between the wings and the direction vector of the intersection line were calculated for each sample, as shown in Table 1. Table 1 shows that the wing angle varied considerably between samples; for example, it was 72.4769° for sample 5 but 342.7297° for sample 7. The direction vectors of the intersecting lines, also presented in Table 1, differed between samples even when the wing angles were similar; for example, the wing angles of sample 1 and sample 4 were 103.3151° and 102.4134°, yet their intersection-line direction vectors were quite different. This reflects the diversity and complexity of H. cunea postures in the real world. The contour features of H. cunea were extracted, as shown in Table 2; the point with the smallest pixel ordinate was the head, and the point with the largest pixel ordinate was the tail. The fitted curve contours of the H. cunea abdomen are also shown in Table 2.
To test the accuracy of the proposed calculation method, we determined the calculation deviation and computed the RMSE. The RMSE was 1.9363°, which shows that the error between the reference and measured values was small and the accuracy of this method was high.
For the verification of the 3D torso information, six points were selected from one fitting equation. The verification results are shown in Table 5: the maximum relative error was 6.04%, the minimum relative error was 0.71%, and the average relative error was 2.77%. The results obtained with the other fitting equations were consistent with Table 5. Therefore, although this method produced larger errors in individual areas, the overall abdominal contour could be approximated well by the fitting equations.

Discussion
Few studies have reported the use of monocular vision to calculate the 3D pose information of H. cunea. Currently, 3D scanner measurement is the most accurate method of obtaining such information; however, the high cost of the device limits its wide use. In this study, a low-cost device was constructed, and a corresponding algorithm was developed to calculate the 3D pose information of H. cunea. The accuracy validation in Section 3 indicates that the presented method achieved reasonable accuracy: the average relative error of the measured wing angles was 1.33% compared with the reference values from the 3D scanner. This error is similar to that of wing-angle measurements in other research, for example, 1.02% for bollworm [36]. Thus, the present method can obtain the 3D pose information of H. cunea cheaply and quickly. Consequently, the application of machine vision and deep learning to pest recognition may benefit from this study, as training samples can be obtained more easily than before.
The results in Table 3 indicate that the absolute errors vary between individuals, and the relative errors ranged from 0.32% to 3.03%. This means the performance of the method may be influenced by the input samples. For monocular vision, both the depth and the clarity of the images are important [40,41]. In some cases, we could not obtain an image with both good depth and good clarity, and such inaccurate input images degrade the performance of the method in three ways. First, an unclear image decreases the accuracy of the key points obtained by the feature extraction method. Second, low-quality key points affect the key-point matching. Finally, this reduces the accuracy of the fitted edges, yielding inaccurate wing angles or 3D torso information. Thus, more work is needed to improve the quality of the captured images and enhance the robustness of the method.

Conclusions
H. cunea was selected as the representative moth for this experiment, which aimed to create and verify an algorithm for extracting 3D posture information. A feature point extraction and location method for the posture information of the H. cunea wings and torso was developed, and the 3D information of H. cunea was obtained. The accuracy of the 3D posture information was verified, and the results showed that this method obtained the 3D information of H. cunea accurately. The 3D posture information can provide a data source for 3D model deformation simulation. To conclude this paper, the following key points summarize the extraction and location method of the 3D posture information of H. cunea.
The images of H. cunea were obtained by an insect image acquisition system, and the posture change of the moth included wing and torso deformation. The wing deformation is caused by the rotation of the wing around the humeral angle, and the torso deformation is caused by bending and twisting of the torso. According to the characteristics of the posture change of the moth, the 3D posture of H. cunea was defined, and the key parts that could represent the 3D posture information of H. cunea were determined.
The approximate locations of the insect wing key points were established based on boundary tracking and edge fitting. The coordinates of the key points of the wings were obtained through precise location and key-point matching, and the angle of the wings was calculated to obtain the 3D posture information of the wings. The wing angles obtained by this method were compared with Metrology-grade 3D scanner measurements; the results showed that the relative error of the wing angle was between 0.32% and 3.03%, the average relative error was 1.33%, and the RMSE was 1.9363°.
Through the edge extraction and curve fitting of the abdomen, the head and tail coordinates and the fitting equations of the torso edge of H. cunea were obtained to extract the 3D posture of the insect abdomen. The information obtained by this method was compared with the Metrology-grade 3D scanner measurements, and the results showed that the average relative error of the torso was 2.77%.
The above results confirmed that the proposed method for extraction of 3D posture information of H. cunea was accurate. The 3D posture information of H. cunea extracted using our method can provide important data support for sample augmentation and species identification of moth pests.