Study on Comprehensive Calibration and Image Sieving for Coal-Gangue Separation Parallel Robot

Abstract: Online sorting robots based on image recognition are key pieces of equipment for the intelligent washing of coal mines. In this paper, a Delta-type coal gangue sorting parallel robot is designed to automatically identify and sort scattered coal and gangue on conveyor belts by configuring an image recognition system. Robot calibration technology reduces the influence of installation error on system accuracy and provides the basis for the robot to accurately track and grab gangue. Because the angle deflection error between the conveyor belt coordinate system and the robot coordinate system is not considered in the traditional conveyor belt calibration method, an improved comprehensive calibration method is put forward in this paper. Firstly, the working principle and the image recognition and positioning process of the Delta coal gangue sorting robot are introduced. The scale factor parameter Factor c of the conveyor encoder is adopted to characterize the relationship between the moving distance of the conveyor and the encoder reading. The conveyor belt calibration experiment is described in detail. The transformation matrices between the camera, the conveyor belt, and the robot are obtained after establishing the three respective coordinate systems. The experimental results show that the maximum cumulative deviation of the traditional calibration method is 13.841 mm, whereas that of the comprehensive calibration method is 3.839 mm. The main innovation of the comprehensive calibration is that the accurate position of each coordinate in the robot coordinate system can be determined. The comprehensive calibration method is simple and feasible; it effectively improves system calibration accuracy and reduces the influence of robot installation error on grasping accuracy. Moreover, a calculation method to eliminate duplicate images is put forward, with the frame rate of the vision system set at seven frames per second to avoid repeated image acquisition and missing images.
The experimental results show that this calculation method effectively improves the processing efficiency of the recognition system, thereby meeting the grab precision demands of coal gangue separation engineering. The goal of “safety with few people and safety with none” can therefore be achieved in coal gangue sorting using robots.


Introduction
Coal mine intellectualization is the core technical support used to achieve high-quality development of the coal industry [1][2][3], with the intelligent coal washing system being one of its top ten construction systems.

Finally, the calibration experiment regarding the conveyor belt and the vision system is completed, and the feasibility of the calibration method is verified. Further, regarding the problem of repeated image pickup in the process of coal gangue image recognition, a calculation method to eliminate repeated images is proposed, thereby effectively solving the problems of repeated collection and missed shooting of gangue images and improving the processing efficiency of the recognition system.

Overall Scheme of the Coal Gangue Sorting Robot
In this paper, a Delta-type coal gangue sorting parallel robot is designed to automatically identify and sort the scattered coal and gangue on the conveyor belt by configuring an image recognition system. Recognizing coal and gangue on the conveyor belt is a two-dimensional planar object recognition task, so an eye-to-hand industrial camera is required, with the automatically identified gangue positions tracked via feedback from the conveyor belt encoder. The overall structure of the coal gangue online sorting parallel robot is shown in Figure 1. The industrial camera is installed at the input end of the coal gangue mixed conveyor belt; the camera collects coal gangue images outside the Delta parallel robot workspace and transmits the image information to the gangue image recognition software for recognition and analysis. If the recognition result is gangue, the gangue position information is transmitted through the Transmission Control Protocol (TCP) network to the information database of the Kemotion controller. The robot control system completes the tracking of the target gangue using the transmitted gangue position information and the encoder feedback, and grabs the gangue with the pneumatic gripper. When the gangue is captured successfully, the Kemotion controller system automatically deletes the relevant gangue position data in the information database and the system enters the next working cycle. The robot completes the recognition and positioning of coal gangue through the control system and transmits the gangue position information obtained by visual processing to the image position information database via the TCP protocol, converting it into a coordinate position in the robot coordinate system by means of the conveyor belt calibration. The robot control system then processes the data, carries out trajectory planning, and completes the tracking and grasping actions.
Figure 2 shows the design process of the coal gangue image recognition and positioning software.


Comprehensive Calibration of the Coal Gangue Sorting System
After the gangue position coordinates are determined, the sorting robot needs to carry out comprehensive calibration of the system in order to accurately complete the grasping work and convert the coordinate position information recognized by the vision system into position information in the robot coordinate system. The robot vision sorting system requires system calibration, including robot body calibration, camera calibration, and hand-eye calibration. The integrated calibration between the vision system, the robot, and the conveyor belt is the basis for the robot to achieve high-precision grasping control. Because the ideal structural parameters of Delta robots usually deviate from the actual structural parameters, the moving direction of the conveyor belt cannot be guaranteed to be parallel to the X-axis of the robot coordinate system, nor can the plane of the conveyor belt be guaranteed to be perpendicular to the Z-axis of the robot coordinate system. Small angle deviations therefore exist between them, which affect the accuracy of the sorting robot. Directly improving the machining and installation accuracy of parts greatly increases the manufacturing cost. Therefore, this paper improves the traditional conveyor belt and visual calibration method, thereby reducing the impact of robot installation error on the grasping accuracy.

Calibration between the Conveyor Belt System and the Robot System
The coordinate system relationship between the systems is shown in Figure 3. The robot coordinate system established regarding the robot is R − X R Y R Z R , and the camera coordinate system established for the industrial camera is V − X V Y V Z V . In the field of view of the camera, the conveyor belt plane is used as the XY plane to establish the conveyor belt initial coordinate system C − X C Y C Z C . The X C direction is consistent with the movement direction of the conveyor belt.

• Transformation matrix and scale factor

The calibration of the conveyor belt calculates the pose of the conveyor belt relative to the robot coordinate system. The matrix H R C is used to represent the transformation relationship between the two coordinate systems. If the initial position of a point on the conveyor belt measured by the camera is P C , then the position of this point in the robot coordinate system is P R = H R C · P C . Since the conveyor belt coordinate system changes dynamically along the direction of conveyor belt movement, the value of the change can be calculated from the change of the encoder reading. In this paper, this parameter of the conveyor belt is represented by the encoder scale factor Factor c (encoder factor). The scale factor is the proportional relationship between the change in the robot coordinate reading and the change in the encoder reading [31].
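As a concrete illustration, the mapping above can be sketched in code. The following Python snippet is a minimal sketch, not the authors' implementation; the function name, the explicit belt-direction vector, and the argument layout are assumptions made for illustration only.

```python
import numpy as np

def belt_to_robot(H_RC, p_c, factor_c, e_now, e_ref, belt_dir):
    """Map a point measured in the belt frame into the robot frame.

    H_RC     : 4x4 homogeneous transform, belt frame -> robot frame
    p_c      : (3,) point in the belt frame at encoder reading e_ref
    factor_c : mm of belt travel per encoder count (scale factor Factor c)
    e_now    : current encoder reading
    e_ref    : encoder reading at the moment p_c was measured
    belt_dir : (3,) unit vector of belt motion expressed in the robot frame
    """
    p_r = (H_RC @ np.append(p_c, 1.0))[:3]        # static transform P_R = H_RC * P_C
    p_r += factor_c * (e_now - e_ref) * belt_dir  # belt travel since the measurement
    return p_r
```

The second line of the function is where the dynamic nature of the belt frame enters: the encoder difference, scaled by Factor c, gives the distance the target has travelled since it was measured.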
• Calibration method of the conveyor belt

The calibration of the conveyor belt is completed on the robot Kemotion control system. The controller has a conveyor belt tracking function module; it completes the relevant configuration work, shifts the visual coordinate system to the conveyor belt, and establishes a conveyor belt coordinate system, Trackingbase. Throughout the calibration process, the teaching device displays the robot end coordinate position and the value fed back by the encoder in real time. The calibration steps are as follows:

1. Place the object within the visual range, then click the "workpiece grab" button. At this time, check the encoder reading V e0 using the teaching device, as shown in Figure 4a.

2. Start the conveyor belt, move the workpiece to the working area of the robot, pause the conveyor belt, manually jog the robot to the workpiece grasping position (i.e., point P 1 , as shown in Figure 4b) using the teaching device, and record the robot position P R 1 (x 1 , y 1 , z 1 ) and encoder reading V e1 .

3. Restart the conveyor belt, move the workpiece a further distance (keeping the workpiece within the grasp range), pause the conveyor belt again, and move the robot to the position above the workpiece grasping point (i.e., point P 2 , as shown in Figure 4c). Record the robot position P R 2 (x 2 , y 2 , z 2 ) and encoder reading V e2 .

4. Select another workpiece (workpiece 2) and place it at the diagonal point of the first workpiece in the visual range (in order to improve accuracy, there should be a maximum deviation from workpiece 1 in the Y direction). Then click "workpiece grab" and record the encoder reading V e3 , as shown in Figure 4d.

5. Start the conveyor belt, move workpiece 2 to the middle of the robot's workspace, pause the conveyor belt, jog the robot to point P 3 of workpiece 2, and record the position P R 3 (x 3 , y 3 , z 3 ) in the robot coordinate system and the encoder reading V e4 , as shown in Figure 4e.

In summary, the coordinates of P R 1 , P R 2 , P R 3 , and O R C within the scope of robot grasping are marked, and their relationship is shown in Figure 5, where O R C represents the dynamic origin position of the conveyor belt. According to the above data, the scale factor can be obtained by

Factor c = sqrt( (x 2 − x 1 )² + (y 2 − y 1 )² + (z 2 − z 1 )² ) / (V e2 − V e1 )

It can be seen that if the encoder readings V e1 and V e2 at the start and end positions of a certain distance of conveyor belt movement are known, the movement distance of the target object in the direction of the conveyor movement can be obtained according to the scale factor.
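The scale factor computation from steps 2 and 3 can be sketched in code. This is an illustrative reconstruction (the function and argument names are hypothetical), assuming the straight-line distance between the two recorded grasp positions represents the belt travel.

```python
import math

def encoder_scale_factor(p1, p2, e1, e2):
    """Factor_c: belt travel (mm) per encoder count.

    p1, p2 : robot-frame positions of the same workpiece recorded at
             encoder readings e1 and e2 (steps 2 and 3 above).
    """
    travel = math.dist(p1, p2)   # distance the workpiece moved with the belt
    return travel / (e2 - e1)    # mm per encoder count
```

For example, if the workpiece moved from (100, 50, 0) to (400, 50, 0) while the encoder advanced from 1000 to 2500 counts, the scale factor is 300 / 1500 = 0.2 mm per count.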
The origin of the conveyor belt coordinate system enters the range of the robot after a distance of ∆L 1 . At this time, the coordinate of the origin of the conveyor belt coordinate system relative to the robot coordinate system is set as O R C . The moving distance is determined by V e3 , the encoder reading that calibrates the starting point position of calibration block 2, and V e4 , the encoder reading when calibration block 2 runs to position P 3 .
According to the coordinate vector diagram of the conveyor belt, the following relationship can be established.
O R C : The coordinate of the origin of the moved conveyor belt coordinate system relative to the robot coordinate system. P R i (i = 1,2,3,4) represents the position of the workpiece in the robot coordinate system.
The coordinates of the origin of the moved conveyor belt coordinate system relative to the robot coordinate system can be obtained by the above Equation.
Substituting the coordinates of P R 1 , P R 2 , and P R 3 , the expression of the basic coordinate system of the conveyor belt is obtained. The O R C coordinate value is then substituted into Equation (9) to obtain the result. Figure 6 shows the original coordinate system of the conveyor belt C', the coordinate system C obtained after running ∆L 1 , and the robot coordinate system R. H C' C represents the relationship matrix between the original conveyor belt coordinate system C' and the coordinate system C obtained by running ∆L 1 . H R C represents the relationship matrix between the coordinate system C obtained by running ∆L 1 and the robot coordinate system.
The transformation matrix H R C between the dynamic conveyor coordinate system and the robot coordinate system is obtained via Equations (8) and (9). The relationship matrix between the original conveyor belt coordinate system and the robot coordinate system can then be obtained by composing the two, H R C' = H R C · H C' C . The position of the target point in the robot coordinate system is P R and the initial point of the camera measurement point in the initial coordinate system of the conveyor belt is P C ; therefore, the relationship between the two can be expressed by Equation (13).
Through this calibration method, the transformation matrix between the conveyor coordinate system and the robot coordinate system is obtained, and the influence of assembly error is reduced.
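One way to realize such a transform numerically, including the angular deflection between the belt and robot frames that the traditional method ignores, is to build the belt axes directly from the calibration points. This is a sketch under stated assumptions: the function name and the Gram-Schmidt construction are illustrative, not the paper's exact formulation.

```python
import numpy as np

def belt_frame_in_robot(p1, p2, p3, origin):
    """Construct the belt -> robot transform H_RC from calibration points.

    p1, p2 : grasp positions of workpiece 1 in the robot frame; their
             difference defines the belt X axis, so any yaw between the
             belt and robot frames is captured automatically
    p3     : grasp position of workpiece 2 (offset in the belt Y direction)
    origin : O_RC, origin of the moved belt frame in robot coordinates
    """
    p1, p2, p3 = map(np.asarray, (p1, p2, p3))
    x = (p2 - p1) / np.linalg.norm(p2 - p1)   # belt travel direction
    v = p3 - p1
    y = v - np.dot(v, x) * x                  # Gram-Schmidt: in-plane Y axis
    y /= np.linalg.norm(y)
    z = np.cross(x, y)                        # belt-plane normal
    H = np.eye(4)
    H[:3, 0], H[:3, 1], H[:3, 2], H[:3, 3] = x, y, z, origin
    return H
```

Because the axes come from measured points rather than the nominal installation geometry, small parallelism and perpendicularity errors of the belt are absorbed into the rotation part of H.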

Calibration between the Camera System and the Conveyor Belt System
The purpose of belt calibration is to solve the problem of camera operation outside the robot's operating range. Through the calibration experiment of the conveyor belt, the relationship expression between the conveyor coordinate system and the camera coordinate system can be obtained. In combination with H R C obtained from Equation (12), the relationship matrix between the camera coordinate system and the robot coordinate system is further obtained.
The camera of the system is outside the robot operation space, so the encoder value must be introduced into the calibration of the camera's external parameters. Firstly, four target points are placed in the visual operation range. After the camera is positioned, the positions of the calibration points relative to the camera coordinate system, P V 1 , P V 2 , P V 3 , P V 4 , are calculated. The conveyor belt is then started so that the points move a distance ∆L into the robot's operation range, where they correspond to P R 1 , P R 2 , P R 3 , and P R 4 . Using the scale factor Factor c and the conversion relationship H R C between the two coordinate systems, the conversion relationship between the camera coordinate system and the conveyor belt coordinate system is obtained from Equation (14). From Equation (14), the transformation matrix expression between the camera coordinate system and the conveyor coordinate system is obtained as Equation (15).
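The chain above can be sketched as a single function: a camera measurement is mapped into the belt frame, then into the robot frame, and finally advanced by the encoder travel. The function and argument names are hypothetical, and the belt-direction vector is an explicit assumption for illustration.

```python
import numpy as np

def camera_to_robot(H_RC, H_CV, p_v, factor_c, delta_e, belt_dir_r):
    """Chain the calibrated transforms: camera frame -> belt frame -> robot frame.

    H_CV       : 4x4 camera -> belt transform (from the belt calibration above)
    H_RC       : 4x4 belt -> robot transform
    p_v        : (3,) target position in camera coordinates
    factor_c   : encoder scale factor (mm per count)
    delta_e    : encoder counts accumulated since the image was captured
    belt_dir_r : unit vector of belt motion expressed in the robot frame
    """
    p_c = H_CV @ np.append(p_v, 1.0)   # camera -> belt (homogeneous coordinates)
    p_r = (H_RC @ p_c)[:3]             # belt -> robot
    return p_r + factor_c * delta_e * np.asarray(belt_dir_r)
```

This is the composition the system needs at run time: the camera sees the gangue outside the workspace, and the encoder term carries the point forward into the robot's reach.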

System Comprehensive Calibration Experiment
The traditional calibration method of conveyor belt does not consider the angle deflection error between the conveyor belt coordinate system and the robot coordinate system. In this paper, by improving the traditional conveyor belt calibration method, a more accurate transformation relationship matrix between the two coordinate systems is obtained and the influence of the two calibration methods on the system error is compared through experiments.
As shown in Figure 7, the camera is calibrated to obtain the internal parameters of the camera and the position of the target point read in the camera coordinate system, P V i . The end probe of the robot is moved to the initial point O to obtain the coordinate position O 0 of point O in the robot coordinate system. The conveyor belt is controlled to move an equal distance four times, with each movement comprising 150 mm, and the corresponding encoder variation is recorded.
The position in the robot coordinate system after each movement is measured. Using the above two calibration methods, two groups of position data in the robot coordinate system after movement are obtained, as shown in Table 1.
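The cumulative deviation reported in Table 1 can be computed along the following lines. This is a sketch under the assumption that the ideal positions lie on equal 150 mm steps along the belt direction from the initial point O 0 ; the function and argument names are hypothetical.

```python
import math

def cumulative_deviations(measured, start, step_mm=150.0, direction=(1.0, 0.0, 0.0)):
    """Deviation of each measured point from its ideal equal-step position.

    measured : robot-frame points recorded after each 150 mm belt move
    start    : robot-frame position O_0 of the initial point O
    """
    devs = []
    for k, p in enumerate(measured, 1):
        # ideal position after k equal steps along the belt direction
        ideal = tuple(s + k * step_mm * d for s, d in zip(start, direction))
        devs.append(math.dist(p, ideal))
    return devs
```

The maximum of the returned list corresponds to the maximum cumulative deviation compared between the two calibration methods.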

According to the data shown in Table 1, the maximum cumulative deviation of the calibration method in this paper is 3.839 mm in the specified operating space, which is far lower than the 13.841 mm obtained with the traditional calibration method. The deviation error caused by the calibration method in this paper is obviously smaller than that caused by the traditional calibration method. The system calibration accuracy is effectively improved, and the calibration method is simple and practical.

Principle of Image Screening and Recognition
The field of view of the camera as coal and gangue pass through the image acquisition area on the conveyor belt is shown in Figure 9. The gangue image in the green ellipse is incomplete, indicating distorted picture information. Therefore, it is necessary to design a calculation method to screen the coal gangue images so that the recognition system automatically skips the distortion area and only collects the complete image information in the red box.

In this experiment, coal and gangue within the size range of 60-200 mm are identified. In order to ensure that fractions larger or smaller than the capacity of the robot do not reach the conveyor, the coal gangue sorting robot is equipped with a coal gangue queuing arrangement device (label 2 in Figure 10) at the input end of the coal gangue conveying belt. As shown in Figure 10, the coal is first conveyed to the vibration classification screen (label 1) through the coal conveyor for screening and grading according to particle size. Then the coal gangue mixture is transported to the conveyor belt, and the alignment of the coal gangue pieces and the spacing between them are controlled by the queuing mechanism (label 2); the coal gangue is also separated at the same time. The materials on the coal gangue conveyor belt are scattered, the shape of the coal gangue is irregular, and pieces are not stacked on top of each other. As shown in Figure 11, the integrity of the selected image information can be ensured by requiring the selected regional centroid P ∈ [r, r + N].
In Figure 11 (distance-based screening method), r is the maximum radius of the coal or gangue sample in mm and L is the field-of-view distance in the direction of conveyor belt movement in mm.
Before calculating the regional eigenvalue, the centroid position is selected first, which can effectively avoid distortion of coal gangue image information and improve system recognition efficiency.
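The centroid screening rule P ∈ [r, r + N] can be sketched as a simple predicate. Here N is assumed to equal L − 2r, the band within which a whole sample of radius r fits inside the field of view; this interpretation, and the function name, are assumptions made for illustration.

```python
def centroid_accepted(p, r, fov_l):
    """Accept an image only if the region centroid lies in [r, r + N].

    p     : centroid coordinate along the belt direction within the frame (mm)
    r     : maximum radius of the coal or gangue sample (mm)
    fov_l : field-of-view length L along the belt direction (mm)
    """
    n = fov_l - 2 * r        # assumed N: interior band where a whole sample fits
    return r <= p <= r + n
```

A centroid closer than r to either edge of the field of view implies a sample that may be clipped, so its frame is skipped and the sample is picked up complete in a later frame.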
In practical engineering applications, when coal and gangue pass through the visual acquisition system, the frames per second (FPS) of the vision system depend on the field of view (FOV) and the velocity V C of the conveyor belt. In most cases, the photographing frequency should not be too high, in order to reduce the data processing burden of the image recognition system. Therefore, it is necessary to select an appropriate acquisition frequency, photographing every object on the conveyor belt one or two times per second as it passes through the visual acquisition area, to maintain the stability of the vision system. When each object is photographed twice per second, the frame rate of the vision system is calculated by Equation (17), where the maximum running speed of the conveyor belt is V C = 0.5 m/s and the visual field size of the vision system in the moving direction of the conveyor belt is FOV = 0.7 m. Therefore, when the frame rate of the vision system is set to seven frames per second, the missing image problem can be effectively avoided.

Different from a sensor trigger, the coal gangue recognition system takes pictures on a timed basis. Although this improves the recognition efficiency of the system, it introduces interference from repeated objects and increases the calculation load on the controller. As shown in Figure 8, four images are collected using the time-based image acquisition method. It can be seen from the figure that the same gangue, P1, appears several times in the four pictures. The image system processes the images collected each time; if the same gangue is identified in several acquisitions, the system repeatedly sends the position information of that gangue to the robot. Therefore, a mechanism is needed to distinguish the same object across the four images. If newly collected position data is judged to belong to an existing object, the data are discarded.
Assume the coordinate information of a newly collected detection is P = (x_0, y_0, z_0, E_0), where E_0 is the value of the conveyor belt encoder at acquisition time. Equation (18) is then used to determine whether the data are discarded:

|x_0 − [x_i + Factor_c (E_0 − E_i)]| ≤ Δ and |y_0 − y_i| ≤ Δ (18)

where E_i is the encoder value recorded when gangue i was last detected, (x_i, y_i, z_i) is the recorded coordinate position of that gangue, Factor_c is the scale factor converting encoder counts into belt travel, the x-axis is taken along the moving direction of the conveyor belt, and Δ is the allowable error range set in advance. If the condition holds for any previously recorded gangue, the new detection is treated as a duplicate and the data are discarded.
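The duplicate check of Equation (18) can be sketched as follows. This is an illustrative implementation, assuming the belt moves along the x-axis of the conveyor coordinate system and that Factor_c converts encoder counts to millimetres; the function names are not from the paper.

```python
def is_duplicate(new, seen, factor_c, delta):
    """Return True if the new detection matches a previously seen gangue.

    new:      (x0, y0, z0, e0) -- position plus encoder value at capture time
    seen:     list of (xi, yi, zi, ei) records from earlier frames
    factor_c: belt travel per encoder count (scale factor Factor_c)
    delta:    preset allowable error range
    """
    x0, y0, _, e0 = new
    for xi, yi, _, ei in seen:
        # Predict where the previously seen gangue should be now,
        # given how far the belt has moved since it was recorded.
        predicted_x = xi + factor_c * (e0 - ei)
        if abs(x0 - predicted_x) <= delta and abs(y0 - yi) <= delta:
            return True  # same object seen again -> discard
    return False

# Example: the same gangue is re-detected after the belt advanced 100 counts.
seen = [(120.0, 40.0, 0.0, 1000)]
new = (170.2, 40.1, 0.0, 1100)  # moved about 50 mm along the belt
print(is_duplicate(new, seen, factor_c=0.5, delta=2.0))  # True
```

Only detections that fail this check for every stored record are added to the target information database and forwarded to the robot.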
In conclusion, by selecting an appropriate acquisition frame rate and applying the duplicate-elimination calculation method, the problems of missed and repeatedly captured gangue are effectively solved, thereby improving the processing efficiency of the sorting system.

Experimental Verification
In this paper, the online identification and sorting experiment on coal gangue was carried out in the laboratory. The camera was a DALSA Genie Nano M2590 NIR, whose parameters are shown in Table 2, fitted with a Computar M1614-MP lens, whose parameters are shown in Table 3. Figure 12 shows the online recognition process of the coal gangue identification software. Coal and gangue pass through the visual acquisition area in turn; gangue is identified by the image recognition software, and its coordinate information is transmitted to the target information database via the TCP protocol. Figure 13 shows the coal gangue grabbing experiment, in which the parallel robot controls a pneumatic gripper to track and grasp the gangue. The experimental results show that the Delta coal gangue sorting parallel robot achieves good recognition performance, meeting the sorting speed and accuracy requirements of engineering sites.
Appl. Sci. 2020, 10, 7059 15 of 17

Conclusions
(1) A Delta parallel coal gangue online sorting robot based on image recognition is proposed; its working principle is explained, and the design process of the image recognition and positioning function is introduced. The robot as designed can be used for online automatic recognition and sorting of coal gangue, working toward the goal of "safety with few people and safety with none".
(2) An improved comprehensive calibration method is proposed. By solving the transformation matrices between the camera coordinate system and the conveyor belt coordinate system, and between the conveyor belt coordinate system and the robot coordinate system, the influence of robot installation error on grasping accuracy is effectively avoided. The experimental results show that the comprehensive calibration method is simple, significantly improves robot grasping accuracy, and meets the actual needs of coal gangue sorting.
(3) To address repeated image acquisition in the coal gangue recognition system, the problems of missed shots and repeated acquisitions are effectively solved by selecting an appropriate acquisition frame rate and applying the duplicate-elimination calculation method. The experimental results show that this method effectively improves the efficiency of the sorting system.
(4) The application of the parallel robot to coal gangue sorting is a preliminary exploration, and this experiment proves that it is feasible and effective. With the continuous advancement of intelligent construction in coal mines, many technical bottlenecks remain in key common technologies for coal mine robots, especially special-purpose coal mine robots that can operate independently. Challenges and possibilities for improvement remain in several technological aspects, such as fuzzy logic mechanisms or the use of 3D cameras. Strengthening basic theoretical research and the innovation of intelligent equipment is therefore an inevitable requirement for the coal industry to realize safe, efficient, intelligent, and green production.