Article

Autonomous, Digital-Twin Free Path Planning and Deployment for Robotic NDT: Introducing LPAS: Locate, Plan, Approach, Scan Using Low Cost Vision Sensors

1 Centre of Ultrasonic Engineering (CUE), University of Strathclyde, Glasgow G1 1XW, UK
2 The Welding Institute (TWI) Wales, Port Talbot SA13 1SB, UK
* Author to whom correspondence should be addressed.
Appl. Sci. 2022, 12(10), 5288; https://doi.org/10.3390/app12105288
Submission received: 3 May 2022 / Revised: 18 May 2022 / Accepted: 20 May 2022 / Published: 23 May 2022
(This article belongs to the Special Issue Recent Development and Applications of Remote Robot Systems)

Abstract
Robotised Non-Destructive Testing (NDT) presents multifaceted advantages, saving time and reducing repetitive manual workloads for highly skilled Ultrasonic Testing (UT) operators. Due to the requisite accuracy and reliability of the field, robotic NDT has traditionally relied on digital twins for complex path planning procedures enabling precise deployment of NDT equipment. This paper presents a multi-scale and collision-free path planning and implementation methodology enabling rapid deployment of robotised NDT with commercially available sensors. Novel algorithms are developed to plan paths over noisy and incomplete point clouds from low-cost sensors without the need for surface primitives. Further novelty is introduced in online path corrections utilising laser and force feedback while applying a Conformable-Wedge probe UT sensor. Finally, a novel source of data beneficial to automated NDT is introduced by collecting frictional forces of the surface, informing the operator of the surface preparation quality. The culmination of this work is a new path-planning-free, single-shot automated process, removing the need for complex operator-driven procedures requiring a known surface and visualising collected data for the operator as a three-dimensional C-scan model. The dynamic robotic control enables a move to the Industry 4.0 model of adaptive online path planning. Experimental results demonstrate the flexible and streamlined pipeline for robotic deployment, along with intuitive data visualisation to aid highly skilled operators across a wide range of industries.

1. Introduction

Robotic inspection and path planning procedures are well defined for parts with accurate digital twins, enabling automatic path extraction [1]. This condition is not necessarily met in a range of industries such as remanufacturing, moulded part manufacturing or autonomous site inspection in hazardous environments.
The wear of parts in remanufacturing, or non-critical warping of parts in moulded-part manufacturing such as spring-back [2] means that the part does not necessarily match the available CAD model. For legacy parts or the inspection of free-form surfaces autonomously, there may be no CAD model available for path planning.
The authors have sought to address the issue of unseen simultaneous autonomous surface profiling and scanning with the Complete-Surface-Finding-Algorithm (CSFA) within [3]. The CSFA succeeded in creating a strategy to search an entire surface and in defining a criterion that specifies completion. For surfaces of significant complexity, such as struts and other extrusions, additional environmental information may be necessary to detect and prevent collisions.
In this work, a low-cost RGB/Depth camera is utilised for a pre-scan in order to gain a rough model of the part. Both 2D and 3D imaging with RGB and RGB/D cameras are becoming prevalent within NDT. Despite this, many issues with map reconstruction still prove to be a critical block to more general usage within robotised NDT.
An initial study of 2D Visual Odometry and part mosaicing [4] highlighted several drawbacks with visual imaging and NDT. The key block to widespread usage is that highly specular surfaces such as polished metal produce significant visual artefacts. For sectors such as remanufacturing and aerospace that rely on polished metal or highly reflective Carbon-Fibre Reinforced Polymer (CFRP) samples, this is a critical issue. A related work found that a robotically applied Structure-from-Motion (SfM) schema could reproduce parts with sub-millimetre accuracy once the point clouds had been processed [5]. Key to this method is the presence of significant visual artefacts on the part, allowing dense point-cloud extraction from subsequent 2D images taken at known positions. These studies have presented significant improvements in reverse engineering of parts for later NDT scanning. However, the scans taken for these parts are completed by two complex robotic procedures that are operator-specified. The operator must input a path that is known to neither collide with the part nor encounter kinematic singularities, and that is entirely reachable by the robot. In order to construct a dense 3D map of the part, the data threshold is high, requiring hundreds of images with up to 85% image overlap to ensure proper part-reconstruction. This presents the significant disadvantage of requiring the part to be stationary within the cell while camera and UT probe tools are interchanged, since the derived CAD model is localised with respect to the robotic work-space.
Small one-shot surface profiling systems such as RGB/D or LIDAR cameras present solutions to this problem, able to profile the surface with the UT probe attached. Several autonomous robotic arm [6,7] and drone based [8] NDT scanning methods rely on this approach. The key drawback of these deployments is the reliance on primitives such as cylinders and planes, reducing flexibility when faced with complex, warped or unknown surfaces. The authors have used the raw point cloud data from off-the-shelf RGB/D cameras to construct paths over the surface, complemented by online path corrections during the UT scan.
Force control is used to ensure the UT probe maintains constant contact with the part through path corrections. Force data in the context of robotic UT-NDT have traditionally utilised roller probe technology, used to provide local corrections to the estimated position of a known part within the robot's frame of reference [9,10,11]. These examples have, however, used known digital-twin models with limited positioning corrections required.
The main drawback of roller probes is the uni-directional speed allowed along the tool's axis, along with shear forces causing degradation of the roller probe's polymer components. An alternative UT contact measurement device is the Conformable Wedge (CW) probe. Filled with water, the wedge's material has a near-identical UT wave-speed to water, preventing reflection at the boundary. The main drawback of conformable wedges used within a force control schema is the potential for bursting. This method is not suitable for sharp surfaces or high forces, with limits dependent on the probe material and thickness.
While CW probes are more sensitive to sharp corners than roller probes, the nearly flat aperture of the sensor presents the advantage of additional information in the form of surface friction. Corroded parts prevalent within sectors such as remanufacturing require surface preparation to clean and level rusted patches in accordance with ISO standard 16809:2017 [12]. In certain regions, operators may have insufficiently prepared the surface, or the surface thickness may not allow levelling. In these cases, surface roughness data are advantageous for mapping regions of the surface overlooked in the preparation stage. While laser data provide high-accuracy surface position feedback, the couplant fills pits on the surface, presenting it as smooth and preventing the lasers from providing such a metric. For conformable and flat rigid surfaces separated by a thin film of fluid, the frictional force is the sum of the hydrodynamic friction due to viscosity and the asperity or contact force [13]. The hydrodynamic friction term $\tau$, contact load $P$, area of the CW probe $A$, contact friction term $f_c$, and normal force $F$ define the dimensionless coefficient of friction $\mu$ in a mixed-lubrication regime:
$$\mu = \frac{\int_A \tau \,\mathrm{d}A + \int_A f_c P \,\mathrm{d}A}{F}.$$
Contributions of hydrodynamic and contact friction are proportional to the surface area exposed to each regime. Along the Stribeck curve [14], the hydrodynamic coefficient of friction is related in a highly non-linear way to the average load $P$, wedge/couplant relative speed $v$, and fluid viscosity $\nu$ through the Hersey number $H = \nu v / P$. The Stribeck curve is shown in Figure 1.
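The mixed-lubrication relation above can be evaluated numerically. The following is a minimal sketch, not the authors' implementation: the CW probe aperture is discretised into equal-area cells, and the per-cell hydrodynamic and contact terms are summed; all function names and numeric values are illustrative.

```python
import numpy as np

def mixed_friction_coefficient(tau, f_c, P, normal_force, cell_area):
    """Discrete form of the mixed-lubrication friction coefficient:
    mu = (sum(tau dA) + sum(f_c * P dA)) / F, with the aperture split
    into equal-area cells. tau, f_c and P are per-cell arrays."""
    hydrodynamic = np.sum(tau * cell_area)
    contact = np.sum(f_c * P * cell_area)
    return (hydrodynamic + contact) / normal_force

def hersey_number(viscosity, speed, mean_load):
    """Hersey number H = nu * v / P, the abscissa of the Stribeck curve."""
    return viscosity * speed / mean_load
```

As a usage sketch, a four-cell aperture with uniform shear stress `tau = 2.0`, contact friction `f_c = 0.1`, contact load `P = 100.0` and cell area `0.25` under a 50 N normal force yields a friction coefficient of 0.24; these inputs are invented for illustration only.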
This paper has sought to introduce an autonomous scanning procedure that addresses the issues within moulded-part manufacturing and remanufacturing industries where an accurate digital twin is not always available. The presented method autonomously plans paths and deploys a UT sensor without prior knowledge of the surface through:
  • location of and planning over a surface using a noisy and incomplete representation of the surface,
  • simultaneous UT probe alignment, scanning, and path corrections using local force and laser-sensor feedback.
The novel path planning method presented overcomes limitations of traditional visual systems such as variable-lighting artefacts [15] and RGB/D specific artefacts [16] without the need for complex and potentially inaccurate point cloud corrections. Further, the requirement for a priori knowledge of the surface, such as imposed primitives, is removed. This is completed while taking full advantage of the data available to minimise the risk of collisions in-process.
The presented method enables single-shot part localisation strategies that could not be used under previous vision-enabled path planning regimes that require greater levels of accuracy. The novel method of robotic force control with a conformable wedge probe has also been demonstrated to offer additional information unavailable to roller or UT probes in the form of surface friction measurements. The full path planning execution process proposed is presented in Figure 2.
The paper is laid out as follows. Section 2 covers the algorithms used to localise and then plan paths over a point cloud representation of the part. Section 3 then applies this path to the robot, describing how the best collision-free path over, and approach to, the part is calculated in robotic configuration coordinates. Section 4 then provides details of how the initial path plan is locally corrected online with sensor and internal robotic readings. The method of surface reconstruction and data visualisation is then covered in Section 5 to aid in operator interpretation of UT results. Finally, experimental results that display the efficacy of each section of the pipeline are presented in Section 6.

2. Surface Localisation and Profiling

Structured light systems emit low-powered Infra-Red (IR) light that is reflected by objects and received by at least two cameras. The combined observations of the projected pattern are used to triangulate and calculate a depth map in the camera's field of view. The key disadvantage of these systems is their sensitivity to ambient light and surface reflectivity. Ambient lighting with wide-bandwidth rays, such as direct sunlight, can hide and distort the projected pattern observed by the RGB/D camera system. For highly reflective surfaces, sections of the part may be missing or suffer significant noise as a result.
Applying traditional robotic-NDT path-planning methods to RGB/D point clouds requires surface position and normal estimation for each of the visited points, so as to align the tool correctly. However, even minor point cloud distortions can affect the numerical accuracy of the extracted vector. An advantage of RGB/D cameras is the ability to apply traditional computer-vision techniques to the point cloud map attained. In segmenting parts, two approaches were used for parts of varying scale. For parts smaller than the field of view of the camera, an initial image of the cell and a subsequent image were used in combination with the structural-similarity indices of the two colour images to locate the part. For larger parts, clustering of points to find the nearest point cloud was used to identify the part.
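The structural-similarity localisation step for small parts can be sketched as follows. This is a minimal illustration, not the authors' implementation: a windowed SSIM map is computed with `scipy.ndimage.uniform_filter`, and the low-similarity region between the empty-cell and part-present greyscale images is bounded. The helper names and the 0.5 threshold are assumptions.

```python
import numpy as np
from scipy.ndimage import uniform_filter

def ssim_map(img1, img2, win=7, c1=0.01 ** 2, c2=0.03 ** 2):
    """Per-pixel SSIM over a sliding window for images scaled to [0, 1]."""
    # Local means, variances and covariance over the window.
    mu1 = uniform_filter(img1, win)
    mu2 = uniform_filter(img2, win)
    s1 = uniform_filter(img1 * img1, win) - mu1 ** 2
    s2 = uniform_filter(img2 * img2, win) - mu2 ** 2
    s12 = uniform_filter(img1 * img2, win) - mu1 * mu2
    return ((2 * mu1 * mu2 + c1) * (2 * s12 + c2)) / (
        (mu1 ** 2 + mu2 ** 2 + c1) * (s1 + s2 + c2))

def locate_part(before, after, threshold=0.5):
    """Bounding box (rmin, rmax, cmin, cmax) of the low-similarity region,
    i.e. where the newly placed part changed the scene."""
    changed = np.argwhere(ssim_map(before, after) < threshold)
    (rmin, cmin), (rmax, cmax) = changed.min(0), changed.max(0)
    return rmin, rmax, cmin, cmax
```

The returned box is then used to crop the corresponding region of the depth map, from which the part's point cloud is extracted; the window smears the box outward by roughly half the window size.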
Once the part's point cloud position is attained and normals are approximated, a rough path for the robot to follow is extracted from the camera's inherent rasterisation, shown in Figure 3. Though the RGB/D camera is composed of two cameras, their co-planarity allows the fields of view to be merged and simplified to a single camera's. Excess information within the point cloud has then been utilised as environmental information to detect collisions.
An additional method implemented for guided scanning enables the operator to select a bounding polygon on the colour image, specifying a scanning region. A path is planned on the points within the polygon on the colour and depth maps.

3. Robotic Path Pre-Planning

Environmental data from the RGB/D cameras can inform the process of a potential collision due to a given robotic motion. Since the robot is controlled with speed commands on an uncertain surface, collision detection and prevention is completed before the scan takes place. A custom CUDA-kernel was written to check whether the rough path taken by the robot is collision free, and to remove way-points that may cause a collision given their estimated normals and position. The environmental data consists of the whole point cloud of the part and cell that is then separately refined for path planning, as well as a bounding box set to prevent collisions with the cell. The process of this function is documented in Algorithm 1.
Algorithm 1: Calculate possible collision-free paths, and the cost of each path from each potential initial configuration.
for p ∈ Path do
    Ω_p = { θ_p^i = IK(p, i) : ¬ Collision(θ_p^i) }
end for
for k ∈ {0 : MaxConfigs}, p ∈ Path do
    NextBestConfig(k, p) = argmin_{i ∈ Ω_{p+1}} |θ_p^k − θ_{p+1}^i|
    ConfigCost(k, p) = |θ_p^k − θ_{p+1}^i|
end for
for k ∈ {0 : MaxConfigs} do
    t = 0, GenCount = 0
    while t + 1 ≤ |Path| do
        if ConfigCost(k, t) < L then
            k = NextBestConfig(t, k)
            Cost(k) += ConfigCost(k, t)
            GenCount += 1
            t = t + 1
        else
            t = |Path| + 1
        end if
    end while
end for
Then either the closest initial configuration to the current one can be used, the one which leads to the longest collision-free path (given by 'GenCount' in Algorithm 1), or the one that leads to the shortest distance travelled in configuration space, depending on operator preference.
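The chaining logic of Algorithm 1 can be sketched in Python. This is a simplified illustration under assumed inputs: `candidates[p]` holds the collision-free IK joint solutions at way-point `p` (already filtered, as in the first loop of Algorithm 1), and each starting configuration is greedily linked to the nearest solution at each subsequent way-point, accumulating cost and a generation count.

```python
import numpy as np

def chain_costs(candidates, L=1.0):
    """For each collision-free configuration at the first way-point, greedily
    chain to the nearest configuration at each subsequent way-point.
    candidates[p] is an (n_p, dof) array of IK solutions that passed the
    collision check. Returns a (total_cost, gen_count) pair per starting
    configuration; a chain stops early once the best joint-space step
    exceeds the distance limit L."""
    results = []
    for theta in candidates[0]:
        cost, count, current = 0.0, 0, theta
        for nxt in candidates[1:]:
            # Joint-space distance to every candidate at the next way-point.
            step = np.linalg.norm(nxt - current, axis=1)
            i = int(np.argmin(step))
            if step[i] >= L:
                break                      # chain broken: step too large
            cost += step[i]
            current = nxt[i]
            count += 1
        results.append((cost, count))
    return results
```

The operator-preference rule above then reduces to picking the index with, for example, the largest `gen_count` or the smallest `total_cost`.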
After an initial configuration is decided, the approach is calculated using linear motion in the configuration space. If a collision is found, then a Rapidly-exploring Random Tree (RRT) framework calculates a collision free path from the current to the starting configurations.
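A minimal configuration-space RRT of the kind referenced above might look like the following sketch. The collision predicate, joint limits, step size and iteration budget are placeholders; this is not the implementation used in the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

def rrt(start, goal, collides, lo, hi, step=0.2, iters=5000, goal_tol=0.3):
    """Rapidly-exploring Random Tree in configuration space. `collides(q)`
    is a user-supplied collision predicate; lo/hi bound the joint limits."""
    goal = np.asarray(goal, float)
    nodes, parents = [np.asarray(start, float)], [-1]
    for _ in range(iters):
        sample = rng.uniform(lo, hi)
        # Extend the nearest tree node a fixed step towards the sample.
        i = int(np.argmin([np.linalg.norm(sample - n) for n in nodes]))
        d = sample - nodes[i]
        q = nodes[i] + step * d / (np.linalg.norm(d) + 1e-12)
        if collides(q):
            continue
        nodes.append(q)
        parents.append(i)
        if np.linalg.norm(q - goal) < goal_tol:
            # Walk back up the tree to recover the path.
            path, j = [], len(nodes) - 1
            while j != -1:
                path.append(nodes[j])
                j = parents[j]
            return path[::-1]
    return None  # no collision-free path found within the budget
```

In the deployment described here, `collides` would wrap the same point-cloud collision check used in the pre-planning stage.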
The limitation of this pre-scan method is that uncertainty in the point-cloud position and orientation may lead to positions being removed that would not cause a collision in process. However, the speed-command control method requires regular updates, rendering an in-process collision detection method for an unknown surface unfeasible due to latency.

4. Local Robotic Corrections

Once the rough surface has been profiled and localised, the robot needs to traverse the surface. Due to hardware and numerical limitations, real-time path corrections are provided from auxiliary sensors.
The point cloud's inaccuracies and the coupling method of the conformable wedge probe require the following conditions to be met:
  1. initial coupling to the surface from the erroneous point cloud;
  2. a traversal method that keeps the sensor probe in contact with the surface;
  3. direction commands for the robotic platform to visit each way-point in the path that also compensate for positioning errors;
  4. stop conditions when visiting each way-point.
The Universal Robots UR10e platform deployed comes equipped with force-torque control, enabling the robot to move in a given direction until a force is felt. By utilising the first point's normal estimation, the robot is able to couple to the surface, satisfying Condition 1.
The Universal Robots force control mode also allows users to set directed forces and admissible deviations while executing a path. By requiring the robot to apply a set force in the tool's z-direction and allowing deviations in the work-space dimension that composes the majority of the current tool's z-direction, the robot was able to maintain contact with the part. Allowing motion in all cardinal directions resulted in undesirable motion, requiring the restriction to the direction that gave the largest contribution.
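Selecting the single compliant work-space dimension can be sketched as picking the base-frame axis with the largest contribution to the tool's z-direction. `compliant_axis` is an illustrative helper, not part of the UR interface.

```python
import numpy as np

def compliant_axis(R_tool):
    """Index (0 = x, 1 = y, 2 = z) of the base-frame axis composing the
    majority of the tool z-direction, i.e. the single dimension left
    compliant in the force-control mode."""
    tool_z = R_tool[:, 2]          # third column: tool z expressed in base frame
    return int(np.argmax(np.abs(tool_z)))
```

For a tool pointing straight down, this returns the base z-axis; as the tool tilts past 45 degrees towards, say, the base y-axis, the compliant dimension switches accordingly.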
The robotic paths were executed by setting the Cartesian and rotational directions of the tool's target velocity, $\mathbf{v} = [\dot{p}_x, \dot{p}_y, \dot{p}_z, \hat{\omega}_x, \hat{\omega}_y, \hat{\omega}_z]$, within a control loop. While a directed point or planar force/torque sensor has the capability to correct for orientation deviations, the conformable wedge probe presented a non-planar surface that allowed the robot to slip into misalignment.
To aid in the high-accuracy reconstruction of the surface, three linear laser sensors were rigidly attached to the flange. In addition to providing high-accuracy surface positions, the live measurements provided online orientation corrections. The laser measurements provide the approximated surface orientation as a $3 \times 3$ rotation matrix $R_s$ within the world frame, while the robot has current rotation $R_r$. Universal Robots utilise the $\mathfrak{se}(3)$ convention, with the required orientation correction given by
$$\boldsymbol{\omega} = R^{-1}\!\left( R_s R_r^{T} \right),$$
where
$$R(\boldsymbol{\omega}) = \exp \begin{pmatrix} 0 & -\omega_z & \omega_y \\ \omega_z & 0 & -\omega_x \\ -\omega_y & \omega_x & 0 \end{pmatrix},$$
is the conversion between twist vectors and $\mathrm{SE}(3)$ group elements [17]. The angular difference is converted to an angular velocity $\hat{\boldsymbol{\omega}}$ using the average $\bar{t}$ and standard deviation $\sigma_t$ of previous loop durations: $\hat{\boldsymbol{\omega}} = 0.5\,\boldsymbol{\omega} / (\bar{t} + \sigma_t)$. The damping factor is used to prevent over-corrections and loss of contact with the surface under variable loop speeds and geometric factors.
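The orientation-correction step can be sketched with SciPy's rotation utilities, which expose the matrix logarithm as `as_rotvec()`. The loop-timing values in the usage example are illustrative, not measured figures.

```python
import numpy as np
from scipy.spatial.transform import Rotation

def orientation_correction(R_s, R_r, mean_dt, std_dt):
    """Damped angular-velocity command from surface (R_s) and robot (R_r)
    orientations: omega = log(R_s @ R_r.T), then scaled by the loop-timing
    damping factor 0.5 / (mean_dt + std_dt)."""
    omega = Rotation.from_matrix(R_s @ R_r.T).as_rotvec()
    return 0.5 * omega / (mean_dt + std_dt)
```

For example, a 0.2 rad misalignment about the z-axis with a 50 ms mean loop time and 50 ms standard deviation yields a 1.0 rad/s z-axis correction.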
While the force kept the tool coupled to the part, and the lasers kept the tool normal to the part, satisfying Condition 2, data obtained by the RGB/D camera set the directional speed values $\dot{\mathbf{p}}$.
Given a rough target position $\mathbf{P}_{t+1}$, current robotic position $\mathbf{P}_r$, current $3 \times 3$ flange orientation matrix $R_r = [\mathbf{d}_x, \mathbf{d}_y, \mathbf{d}_z]$, and target TCP speed $s$, the values $\dot{\mathbf{p}}$ were calculated.
Taking the projection to the surface's tangent plane:
$$\mathrm{d}\mathbf{P}_{\mathrm{Proj}} = \left( (\mathbf{P}_{t+1} - \mathbf{P}_r) \cdot \mathbf{d}_x \right) \mathbf{d}_x + \left( (\mathbf{P}_{t+1} - \mathbf{P}_r) \cdot \mathbf{d}_y \right) \mathbf{d}_y,$$
and scaling this to the desired velocity gives $\dot{\mathbf{p}} = s \cdot \mathrm{d}\mathbf{P}_{\mathrm{Proj}} / |\mathrm{d}\mathbf{P}_{\mathrm{Proj}}|$. When the remaining in-plane distance from the target is small enough that the robot would over-shoot in a control-loop cycle, the speed is reduced to the estimated value. This satisfies Condition 3. The projection is necessary since point cloud errors along the surface normal result in jerky motion, with the velocity vector set away from the target surface while the robot tries to maintain a constant force against the surface.
Finally, the stop condition for the robot at each way-point is summarised by taking the current tool position $\mathbf{P}_r$, velocity vector $\mathbf{v}$, and way-point $\mathbf{P}_{t+1}$. The distance $q$ of the robot from the way-point in the plane is
$$q = (\mathbf{P}_{t+1} - \mathbf{P}_r) \cdot \frac{\mathbf{v}}{|\mathbf{v}|}.$$
Once $q$ has reached a threshold value, the robot is considered to have reached the desired way-point, satisfying Condition 4.
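The in-plane velocity command and the stop condition can be sketched together. This is a minimal illustration with assumed helper names; the flange columns stand in for the tangent-plane basis $\mathbf{d}_x, \mathbf{d}_y$.

```python
import numpy as np

def velocity_command(P_target, P_robot, R_flange, speed):
    """In-plane velocity towards the next way-point: project the position
    error onto the tool's tangent plane (spanned by the flange x/y columns)
    and scale to the target TCP speed. Motion along the surface normal is
    left to the force controller."""
    d_x, d_y = R_flange[:, 0], R_flange[:, 1]
    err = P_target - P_robot
    dP = np.dot(err, d_x) * d_x + np.dot(err, d_y) * d_y
    return speed * dP / np.linalg.norm(dP)

def reached(P_target, P_robot, v, tol):
    """Stop condition: in-plane distance q along the direction of motion."""
    q = np.dot(P_target - P_robot, v / np.linalg.norm(v))
    return abs(q) < tol
```

Note that any error component along the surface normal is discarded by the projection, which is exactly what suppresses the jerky motion described above.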
This method is only suitable for point cloud representations of surfaces that fulfil the following conditions: (a) there is a curve along the surface $\gamma(t + \delta t)$ that intersects the current and next way-points $\mathbf{P}_t, \mathbf{P}_{t+1}$ such that $\dot{\gamma}(\delta t) \propto \mathrm{Proj}(\mathbf{P}_{t+1} - \gamma(t))$, and (b) the global minimum of the parameter $q$ is attained at only one point along this curve. Point cloud representations of surfaces that do not follow this allow only sub-optimal solutions to the path followed, or no solutions at all.
This process does assume that each point within the point cloud represents the closest position on the surface. This increases the accuracy threshold for suitable RGB/D sensors when applied to surfaces of high curvature. For a surface with maximum normal curvature $\kappa$, the maximal admissible inaccuracy $\delta$ must satisfy $\delta < 1/\kappa$.
The speed control for the robot is summarised in Algorithm 2.
Algorithm 2: Speed control algorithm applied to reach each control point.
Define acceptable stopping radius r and expected speed s.
for p ∈ Path do
    P_TCP = GetCurrentPose()
    t = |P_TCP − p|, s_C = s
    while t > r do
        P_TCP = GetCurrentPose()
        L = GetLaserData()
        if |L| == 3 then
            Extract surface orientation matrix: f(L, P_TCP) = R
        else
            Assert the current TCP orientation is the surface's: R = R_TCP
        end if
        Project vector difference to surface tangent: δP = Proj(p − P_TCP)
        Update distance value: t = |δP|
        Moderate speed with the average loop-time so far: s_C = min(t / d̄t, s)
    end while
end for
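Algorithm 2's control loop can be approximated in Python as below. This is a sketch, not the deployed code: the pose and speed interfaces are mocked with a toy first-order plant standing in for the UR RTDE interface, and the laser-based orientation step and tangent projection are omitted (a flat surface is assumed).

```python
import numpy as np

def follow_path(path, get_pose, send_speed, r=0.01, s=0.05, dt=0.02, max_iter=10000):
    """Skeleton of the speed-control loop: drive the TCP towards each
    way-point, moderating the commanded speed so the final step does not
    overshoot within one control cycle."""
    for p in path:
        for _ in range(max_iter):
            tcp = get_pose()
            delta = p - tcp                      # tangent projection omitted
            t = np.linalg.norm(delta)
            if t <= r:                           # acceptable stopping radius
                break
            s_c = min(t / dt, s)                 # moderate speed near the target
            send_speed(s_c * delta / t)

class FakeRobot:
    """Toy first-order plant standing in for the UR10e RTDE interface."""
    def __init__(self, pose, dt=0.02):
        self.pose, self.dt = np.asarray(pose, float), dt
    def get_pose(self):
        return self.pose.copy()
    def send_speed(self, v):
        # Integrate the commanded velocity over one control cycle.
        self.pose += v * self.dt
```

Running `follow_path` against `FakeRobot` converges each way-point to within the stopping radius `r`, mirroring the `t > r` loop guard in Algorithm 2.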
  • Force Control
Since the velocity control acts as a proportional controller (P-controller), the force control is only required to counter the frictional forces felt and to keep the coupling pressure moderately consistent, which assists automatic gating procedures by minimising the corrections required by a varying-offset conformable wedge aperture.
Force control is determined by the Proportional-Integral (PI) errors experienced. Because the frictional force varies with the given direction of motion, local roughness, and errors in the tool mass and centre-of-mass calibration, it is not consistent, so purely P-control is used for it. Under the assumption that the tool is always aligned to the surface by laser feedback, PI control is used for correcting the normal force. PI controllers are more reliable than full Proportional-Integral-Derivative (PID) controllers from force-velocity/position control perspectives when applied to cobots such as the UR10e [18].
Fundamentally, the tangential velocity and force controls act as P-controllers, set to follow the local values required to achieve the global target of attending each way-point. Meanwhile, the normal force, with a desired global consistency, is governed by a PI controller.
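A minimal PI loop on the normal force, in the spirit described above, might look like the following. The gains and the linear-spring plant in the usage example are illustrative assumptions, not tuned values for the UR10e.

```python
class NormalForcePI:
    """PI controller on the normal-force error, used to keep the coupling
    pressure consistent. Output is a corrective normal-velocity command."""
    def __init__(self, target, kp=0.001, ki=0.0005):
        self.target, self.kp, self.ki = target, kp, ki
        self.integral = 0.0

    def update(self, measured_force, dt):
        error = self.target - measured_force
        self.integral += error * dt            # accumulated force error
        return self.kp * error + self.ki * self.integral
```

Closing the loop around a hypothetical linear contact stiffness (force proportional to indentation depth) drives the measured force to the 50 N set-point without steady-state error, which a pure P-controller could not guarantee.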

5. Surface Reconstruction

The complete set of laser measurements is then used in post-processing to reconstruct a digital twin of the part. The UT measurements provide a heat map from A-scan data, as is traditional in robotised NDT.
The TCP data at each UT measurement were used to project UT data onto the reconstructed surface, allowing a full 3D digital-twin representation of conventional C-scans. Variations in couplant thickness of up to a millimetre require the reconstructed surface to undergo a smoothing process. The chosen reconstruction method is the ball-pivoting algorithm [19], requiring a provided surface normal. The normal direction of the surface or robot at each discovered laser-point is provided to aid in this surface reconstruction. The surface is then passed through a Laplace filter for smoothing. The advantage of using the laser data over the RGB/D point cloud is the inherent quality of the data, resulting in fewer smoothing errors.
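The Laplace-filter smoothing step can be sketched on a height-field stand-in for the reconstructed mesh. The paper applies the filter to the ball-pivoted triangle mesh; a regular grid is used here purely for illustration, with each interior sample relaxed towards its four-neighbour mean.

```python
import numpy as np

def laplace_smooth(z, iterations=10, lam=0.5):
    """Umbrella-operator Laplacian smoothing on a height field: each
    interior sample moves a fraction `lam` towards the mean of its four
    neighbours per iteration. Boundary samples are left fixed."""
    z = np.asarray(z, float).copy()
    for _ in range(iterations):
        nbr = 0.25 * (z[:-2, 1:-1] + z[2:, 1:-1] + z[1:-1, :-2] + z[1:-1, 2:])
        z[1:-1, 1:-1] += lam * (nbr - z[1:-1, 1:-1])
    return z
```

On mesh data the same relaxation is applied over one-ring vertex neighbourhoods, suppressing the millimetre-scale couplant-thickness variation while preserving the overall surface shape.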
Once this is complete, the A-scan and surface friction data are projected onto the surface, producing C-scan and friction-map digital-twin models.

6. Experimental Results

An Intel D415 RGB/D camera was used to collect depth measurements, handled by the realsense2 Python package. The point cloud, colour and depth measurements have then been processed using Open3D Python software [20]. A Universal Robots UR10e platform was deployed, controlled through the Real-Time-Data-Exchange (RTDE) package and supplemented by the On-Robot HEX E/H QC force-torque sensor, integrated through UR-Cap software.
The CUDA library was imported into the Python environment and handled the pre-processing of the paths. For online orientation corrections, three Panasonic HG-C 24 V class II laser distance sensors relayed data to the external controller via an Arduino Uno, at a power low enough to prevent surface ablation. Completed within a laboratory in direct proximity to a window, the experiments show that the process is robust to a mixture of low- and high-intensity ambient lighting levels, representative of site-inspection work. The robotic platform was statically mounted, with parts placed onto a table within the work-volume. A normal force of 50 N was selected to maintain a high-quality UT signal response, while also ensuring the CW probe is not damaged or split.
To test the collision detection pre-planning approach, a large non-planar surface was chosen, representing a wing-section from an aerospace component. A highly reflective CFRP section was also chosen to demonstrate resilience to holed and noisy point cloud data. A small calibration plate was chosen to demonstrate the use of the Structural Similarity Index for the location of small surfaces. Finally, smooth and rough (0.1 mm ridged) CFRP components were chosen to test the applicability of surface friction measurements, and a friction stir weld plate was used to validate friction-coefficient imaging.
The robot was capable of significant orientation and positional velocity control in the presence of a large curvature, as shown in Figure 4. In these experiments, the distance-metric used was not consistently monotonic in the case of inflection; however, the direction of motion was consistently correct. This agrees with the requirement made in Section 4, as the surface inflects between points but the path taken passes local minima in the planar-distance parameter.
Central to the path-planning process is the initial collision detection procedure, which removes points that could potentially damage the part or robot. The same wing as used in Figure 4 was placed at an angle relative to the robot that would cause a collision. The result is a successful removal of points that would potentially cause a collision, as shown in Figure 5.
The ability to plan and scan over holed and noisy point cloud data is highlighted in Figure 6. Figure 6a shows the initial point cloud corrupted by light-interference. The resultant path executed by the robot is shown in Figure 6b, shown to cover the part despite the missing and noisy data.
The images shown in Figure 7 demonstrate the structural-similarity method of point-cloud selection prior to clustering and refinement. This is additionally resilient to variable lighting conditions; the main drawback is the potential for error when reflective regions are close to the surface of interest, resulting in the artefacts seen in Figure 7c. The point clouds of the artefacts and the surface are then clustered, and the largest cluster, or that closest to the camera, is chosen, resulting in the surface alone being selected in Figure 7d. The operator should be aware of the placement of small parts so that the surface is not closer to reflective background regions than the clustering radius.
C-scan representations of the data have been validated by imaging the calibration block, seen in Figure 8. The three side-drilled holes of varying depth are clearly seen on the coloured digital-twin of the surface, highlighted within the image.
Both a smooth and a rough CFRP sample were used to test friction data at variable pressures and speeds. The results are presented as histogram values in Figure 9.
While the measured friction across the surface can differentiate between the differing surface roughness values, there is a significant change in the rough surface's measured friction value. Changes in the hydrodynamic friction of the UT couplant with TCP speed, combined with a mechanical locking of the spring mechanism, explain the shift in friction values across the observed data. The mechanical design of the tool allows for an angled locking of the UT's sprung support-struts due to the force acting along the strut's central axis. For rougher surfaces, the frictional force in this direction is great enough to lodge the probe, and then on the return, dislodge the tool. The angled face of the tool then also creates a 'bow-wave' of UT couplant in one direction, allowing the tool to pass more smoothly over the surface with increased lubrication. The result is a split in the frequencies of observed friction values for plates of unvarying roughness, with principal modes becoming visible at greater speed as hydrodynamic friction increases. This is particularly noticeable on the smooth sample, as an increase in speed furthers the split in friction values. Both samples see the peaks of friction inversely correlated with tool load. On the Stribeck curve seen in Figure 1, the majority of the tool's aperture is in the well-coupled region, ensuring high-quality UT data as there is a thick layer of couplant between the tool and part. This alternating bow-wave effect can also be seen in Figure 10.
At the raster end-points, the misalignment force is seen as spikes in the friction. The result is that along each raster-line, changes in the friction data can be accurately gathered. However, these data are not transferable across arbitrary raster line-paths. For the purpose of informing operators of changes in surface friction, this is sufficient to highlight potential issues such as significant surface corrosion within the region scanned.
Scanning of a friction-stir weld is shown in Figure 10. Figure 10a shows the friction-stir-welded plate, which has 0.5 mm ridges at the edge of the weld-line.
In Figure 10b, the ridges can be seen as peaks in the observed coefficient of friction. The normal force control maintains consistent contact between the surface and probe; surface-to-probe offset values for the curved sample scan are presented in Figure 11.
The standard deviation of 0.36 mm demonstrates the consistency of contact enabled by both the force-control implemented and the sprung tool deployed.

7. Discussion

The two-scale part localisation and path planning methods presented were shown to be robust to different parts with minimal requirements on the placement within the work-cell. However, the criticality of initial tool alignment to guarantee full surface traversal success may require either lasers with greater ranges or operator intervention to re-align the tool in cases of extreme curvature or point-cloud error.
The front-loading of collision avoidance in the path planning stages allows the positioning algorithm to react in real-time, correcting position and orientation deviations in a timely way. However, parallelised methods for simultaneous path correction and collision detection may require investigation for robotic platforms that have limited configuration spaces. The UR platform's joints, unlike many others, have the advantage of a ±2π rad operating range, allowing continuous motions over more complex shapes. Taking advantage of these data allows full collision-free coverage of surfaces whose point cloud representations cover all regions of extremal curvature. Future works will investigate integrating this process with the CSFA to capture positions outside of the initial point cloud but still within the robot's working range.
While both part-localisation stages provided surface positions accurate enough to use as way-points, for smaller surfaces more appropriate tools, such as roller probes or jet-phased-array tools, may be preferable to prevent wear and tear caused by sharp corners.
The novel friction metric has demonstrated that it is possible to differentiate regions of high and low friction within each raster-segment, providing useful information to the operator when a surface may be improperly prepared. However, a global friction-coefficient metric is not possible, because mechanical effects such as the misalignment forces at raster end-points make values non-transferable across raster lines.
Finally, the speed-control and correction mechanism has proved effective for path planning between inaccurate points using local data. The key drawback is its limitation to convex-shaped regions of parts. For the use-cases considered the method was sufficient; for parts with large holes, however, the method could be modified so that the robot retracts within holed regions of the surface.
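The suggested retract-within-holes modification could be driven directly by the laser stand-off reading, as in the following sketch. The mode names, expected stand-off, and threshold are illustrative assumptions, not part of the published method.

```python
def motion_mode(laser_mm, expected_mm=30.0, hole_threshold_mm=15.0):
    """Choose a motion mode from one laser stand-off reading (mm).

    If the measured distance exceeds the expected stand-off by more
    than `hole_threshold_mm`, the surface has dropped away (a hole),
    so the robot should retract rather than chase the reading; if it
    is much closer than expected, the tool should back off. All
    threshold values here are illustrative.
    """
    if laser_mm - expected_mm > hole_threshold_mm:
        return "retract"
    if expected_mm - laser_mm > hole_threshold_mm:
        return "back_off"
    return "track"

print(motion_mode(32.0), motion_mode(80.0), motion_mode(5.0))
# track retract back_off
```

Because the decision uses only the local reading, it fits the paper's online-correction philosophy: no global model of the hole is needed, only a per-sample classification.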

8. Conclusions

This paper has presented a novel method of autonomous robotised UT deployment that locates the part, plans a path over its surface, approaches the surface without collisions, and scans the surface. In doing so, the proposed methodology frees up operator time and enables flexible, collision-free UT sensor deployment to unseen surfaces without the need for a digital twin and without a lengthy pre-scanning and path planning step. The path planning methodology enables greater use of RGB/D cameras in robotised NDT, negating their relatively low accuracy without relying on limited-information primitives. Online sensor corrections of the robot’s pose and directions of motion provide a significant bridge from local corrections to global coverage of parts using limited information. The approach also paves the way for future part-localisation methodologies that use Artificial Intelligence to segment work-spaces, since it tolerates the drop-out of surface point-cloud positions that mislabelling may cause.
Local changes in friction can be mapped to an accuracy within that of the CW probe’s planar aperture, informing operators of potential issues with surface preparation and signal gating, which is of great interest for the automation of NDT processes.
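Since the probe's planar aperture bounds the spatial resolution of the friction map, a heat-map like Figure 10b amounts to binning samples into aperture-sized cells. The sketch below assumes a 25 × 25 mm aperture (as stated for the CW probe); the function name and data layout are illustrative.

```python
import numpy as np

APERTURE_MM = 25.0  # CW probe planar aperture, per the paper

def friction_heatmap(x_mm, y_mm, mu, cell=APERTURE_MM):
    """Average friction samples into aperture-sized grid cells.

    One plausible way to build a heat-map like Figure 10b; the
    implementation details here are assumptions. Cells with no
    samples remain NaN.
    """
    ix = (np.asarray(x_mm) // cell).astype(int)
    iy = (np.asarray(y_mm) // cell).astype(int)
    grid = np.full((ix.max() + 1, iy.max() + 1), np.nan)
    sums = np.zeros_like(grid)
    counts = np.zeros_like(grid)
    for i, j, m in zip(ix, iy, mu):
        sums[i, j] += m
        counts[i, j] += 1
    mask = counts > 0
    grid[mask] = sums[mask] / counts[mask]
    return grid

# Four samples along one raster band: two land in the first cell.
x = [5, 10, 30, 55]
y = [5, 20, 5, 5]
mu = [0.20, 0.30, 0.50, 0.40]
hm = friction_heatmap(x, y, mu)
print(hm[0, 0])  # 0.25 (mean of the two co-located samples)
```

Averaging within a cell also suppresses per-sample noise, at the cost of never resolving features smaller than the aperture footprint.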
The process’s limitations have been noted throughout the text. Surfaces with inflections between way-points may produce scans that fail to identify those way-points correctly; this can be solved by introducing multiple frames within a single path planning step. Small parts scanned in highly reflective work-cells may require operator awareness for successful path planning using the Structural Similarity method. Path planning over parts smaller than the work-frame may also leave the probe partially overhanging the surface’s lip. When processing UT data, the operator must watch for double reflections that can cause UT-signal gating issues. For online corrections, the laser-distance measurements may be too widely spaced to allow accurate surface-normal alignment; to avoid this, their spacing should be minimised. Finally, the success of the path planning method relies on the quality of the camera calibration: the operator is responsible for accurately calibrating the camera tool so that the produced paths are accurately placed within the robotic work-cell.
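The laser-spacing limitation can be made concrete: three laser-measured points define a plane whose normal approximates the local surface normal, and on curved surfaces a wide spacing makes that plane a poorer local approximation. A minimal sketch, with an invented point layout:

```python
import numpy as np

def normal_from_three_points(p0, p1, p2):
    """Unit normal of the plane through three laser-measured points.

    On curved surfaces, widely spaced points yield a plane whose
    normal can differ noticeably from the true local normal, which
    is why minimising the laser spacing is recommended. The point
    layout below is an illustrative assumption.
    """
    p0, p1, p2 = (np.asarray(p, dtype=float) for p in (p0, p1, p2))
    n = np.cross(p1 - p0, p2 - p0)
    return n / np.linalg.norm(n)

# Three points on the plane z = 0 give the +z normal (up to sign).
n = normal_from_three_points([0, 0, 0], [10, 0, 0], [0, 10, 0])
print(n)  # [0. 0. 1.]
```

The sign of the normal depends on the winding order of the three points, so a real alignment routine would also need to orient the normal consistently toward the tool.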

Author Contributions

Conceptualization, Methodology, Investigation, Software, Validation: A.P. Supervision: M.S., G.P. and A.G. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by the Engineering and Physical Sciences Research Council (EPSRC) as part of an ICASE PhD studentship, grant number S513908/1. This project is part of an initiative known as the Advanced Engineering Materials Research Institute (AEMRI), which is funded by the Welsh European Funding Office (WEFO) using European Regional Development Funds (ERDF).

Acknowledgments

Special thanks to Nathan Hartley for designing the tool mount used within this paper.

Conflicts of Interest

The authors declare no conflict of interest.

Figure 1. Stribeck curve; coefficient of friction against the Hersey number in the three lubrication regimes.
Figure 2. Part profiling, path planning, and path execution pipeline requiring minimal and noisy/incomplete visual information.
Figure 3. The viewing frustum of a camera intersects with the part along planes given by either each row or column of pixels. The planes of the chosen direction return points along the surface that form raster-segments. These ordered raster segments can be reduced (red) if too close together, or regionally padded (blue) if the smallest distance from one raster-segment to another is larger than the sensor aperture. Key points (dots) can then be extracted directly from the point cloud and used as way-points. The overall path combining original (green) and inferred raster lines (blue) is denoted.
Figure 4. Scanning of a mock-aerofoil sample. The original RGB/D data poorly represented the surface, with an offset of 5 mm. (a) Robotic-arm deployed to a mock aerofoil with large surface curvature. (b) Surface reconstructed using laser-data, waypoints from the RGB/D image embedded.
Figure 5. Scanning of a mock-aerofoil sample. The original RGB/D data poorly represented the surface, with an offset of 5 mm. Imaged within RoboDK. (a) Path, and in high-frequency segments the laser-discovered positions on the part. (b) Original Point cloud of the part and environment.
Figure 6. Point cloud and C-scan of the part. The average error of RGB/D point cloud poses from the best fit plane of the collected laser data is 14.7 mm. (a) Point cloud map of a reflective CFRP component. Some sections of the point-cloud are missing, the surface data is considerably irregular. Visual artefacts incorrectly placed within the part are circled. (b) Heat-mapped C-scan of the part. The 5 mm thickness of the CFRP component placed the back-wall signal within the dead-zone of the 5 MHz probe. The support strut of the part can be seen within the gated region as higher-heat.
Figure 7. Structural-similarity based part localisation and path generation process for a calibration block with side-drilled holes. (a) Calibration image showing just the workcell. (b) Subsequent image with the part present. (c) Deducted image—kept pixels are shown in white, discarded pixels in black. (d) Resulting point cloud after remaining point clustering and selection. The identified part is in blue, extracted way-points in red.
Figure 8. The three ascending side-drilled holes can be seen in the solid box. One disadvantage of this path planning method is the potential for the tool to go over the lip of the part, presenting a second reflection within the gated region as seen in the dashed box. This can be gated out by an operator, as shown. (a) Initial image of the part, with static gating applied. (b) After dynamic gating, the three side-drilled holes are highlighted, surface reflections caused by non-planar probe apertures have been removed.
Figure 9. Histogram plot for two different raster paths over two plates of different roughness values. Results for rough and smooth CFRP surfaces show the effects of speed and applied force on the frequency of the observed friction values as a percentage of the total number of observations.
Figure 10. Friction analysis of a friction-stir weld shows that measuring the surface-friction can indicate discontinuities in the surface. The upper section presented a sharpened lip, avoided for the safety of the wedge. (a) Colour image of the welding plate. The lower half of the plate was scanned as part of this analysis. (b) Friction coefficient encountered as a heat-map over the surface. The accuracy of the frictional values is determined by the size and shape of the wedge’s surface—in this case, 25 × 25 mm.
Figure 11. The maximum signal response for each A-scan used to calculate probe-surface offset.

Share and Cite

MDPI and ACS Style

Poole, A.; Sutcliffe, M.; Pierce, G.; Gachagan, A. Autonomous, Digital-Twin Free Path Planning and Deployment for Robotic NDT: Introducing LPAS: Locate, Plan, Approach, Scan Using Low Cost Vision Sensors. Appl. Sci. 2022, 12, 5288. https://doi.org/10.3390/app12105288

