A suite of robotic solutions for nuclear waste decommissioning

Dealing safely with nuclear waste is an imperative for the nuclear industry. Increasingly, robots are being developed to carry out complex tasks such as perceiving, grasping, cutting, and manipulating waste. Radioactive material can be sorted, and either stored safely or disposed of appropriately, entirely through the actions of remotely controlled robots. Radiological characterisation is also critical during the decommissioning of nuclear facilities. It involves the detection and labelling of radiation levels, waste materials, and contaminants, as well as determining other related parameters (e.g., thermal and chemical), with the data visualised as 3D scene models. This paper overviews work by researchers at the QMUL Centre for Advanced Robotics (ARQ), a partner in the UK EPSRC National Centre for Nuclear Robotics (NCNR), a consortium working on the development of radiation-hardened robots fit to handle nuclear waste. Three areas of nuclear-related research are covered here: human–robot interfaces for remote operations, sensor delivery, and intelligent robotic manipulation.


Introduction
As robotic technologies and artificial intelligence have advanced, roboticists have turned their attention to the challenge of extending autonomous operations to ever more complex environments. The big "success story" in terms of robot deployment in recent decades has undoubtedly been industrial automation. Reaching the same levels of reliability and consistency in performance remains a challenge in more extreme environments; the nuclear industry is a prime example.
The harsh environment and the precarious nature of the physical tasks involved pose several challenges for deploying robots in nuclear clean-up. Nuclear decommissioning use cases present requirements that a standard commercial system would not be able to meet. Factors such as radiation effects, remote maintenance, and deployment constraints are demands with which commercial systems do not generally need to reckon. This paper reviews recent research by the authors that addresses some of these challenges from the perspective of the nuclear sector. It also examines the adaptations that would facilitate crossover from mainstream robotics.
The robotics teams at QMUL, a partner in the UK EPSRC National Centre for Nuclear Robotics, have contributed to the three areas of research summarised in Figure 1: human–robot interfaces, radiation sensor delivery, and robotic manipulation. Figure 1 additionally outlines a possible route to the implementation of robotic tasks in nuclear decommissioning: a human operator programmes the robot through a user-friendly GUI, or directly teleoperates the robot to perform a required task. The task could be the delivery of appropriate sensors, e.g., radiation sensors, to a required location; or the physical exploration of an environment using visual, proximity, and tactile sensors; or the grasping and manipulation of objects in a remote, potentially radioactive environment. This paper is structured as follows. Section 2 motivates the discussion and introduces an illustrative use case for the integration of the proposed solutions. Section 3 focuses on robotic grasping and manipulation. Section 4 covers sensors and sensor deployment with the help of robotic manipulators. Section 5 overviews human–robot interfaces for performing robotic tasks (either automatic or teleoperated). Section 6 concludes the paper and outlines future challenges.

Robotics Research for the Nuclear Environment: Overview and Applications
The clean-up of nuclear power plants involves the dismantling and decontamination of industrial infrastructure, along with the decommissioning of nuclear waste. High-energy radiation sources hasten the degradation of industrial equipment through exposure to ionising radiation, with the equipment eventually becoming inoperative and having to be dismantled. Surfaces and machinery that have come into contact with radioactive material will, on the other hand, need to be decontaminated. The hazardous nature of this activity and the toxicity of all substances and contaminated infrastructure within the environment necessitate the wearing of cumbersome protective gear, such as hazmat suits, by human operatives in the field. The work is therefore not only dangerous but difficult. A label often attached to nuclear sites subject to decommissioning is that of "extreme environment", a descriptor also used to designate other environments that share some of the same operating characteristics: typically outer space, the deep sea, and deep mines [1]. These kinds of environments present a strong case for increased automation and remote operation, as well as for the replacement of human workers by robots [2]. As a consequence, there has been a surge of interest from roboticists and industry stakeholders in extreme environments. Despite this, there remains little provision for automation, most notably in older facilities, and manual operations still make up the bulk of the clean-up effort.
Robots have played a key role in the nuclear sector dating back to atmospheric nuclear weapons testing and the clean-up of Three Mile Island [3]. Typically, they have been used to access radiologically hazardous areas and to collect samples from which to ascertain the scope and magnitude of the risk. More recently, in the wake of robotic deployment related to the Fukushima nuclear disaster, public interest was piqued by the idea that robots could potentially stand in for human rescuers in certain situations. Despite some negativity and scepticism apparent in media coverage ("dying" robots "failing" the clean-up effort), robots have proved of considerable value in the US DOE's clean-up efforts, especially in decommissioning nuclear waste that had accumulated as a by-product of its weapons programme [4].
For industrial applications more generally, the introduction of robotic systems has increased steadily over the last half-century, and the trend shows no sign of abating. Incremental improvements in robotic technologies have led to a rise in productivity, improved safety, and a reduction in costs, among other frequently cited benefits, though these gains have largely been confined to robots in highly structured environments [5]. The hope is that the same benefits can transfer over to more complex, less structured working environments. As robotic technologies continue to advance, so too will the opportunities for technology transfer from one sector to another, be it from manufacturing to nuclear or, even, vice versa.
Several ground rules have been established in relation to the development of nuclear robotics [5]. These include utilising existing equipment as much as possible rather than creating entirely new robotic systems; sticking to tethered rather than wireless control, so that a robot can always be retrieved should problems arise; ensuring robots are small enough to fit through interior hatch openings; and making them sufficiently robust and waterproof to cope with underwater work and withstand high-pressure spraying during post-work decontamination.
In the past, developing bespoke equipment has generally been the default route for nuclear applications. Experience suggests, however, that this is not always optimal, as pre-existing commercial developments and applications can sometimes provide directly transferable technologies and solutions. An example of this is teleoperated robot hand control: a robust, ergonomically designed, multi-function handheld joystick controller can be used for many hours without causing operator discomfort [6]. While developing and testing a device from scratch would be both costly and time-consuming, the gaming industry has already made huge advances in this regard and it would, therefore, make sense to look into how best to transfer these kinds of commercial off-the-shelf (COTS) technologies to nuclear applications.

An Illustrative Use Case
Let us consider the integrated solution depicted in Figure 1 and the use case illustrated in Figure 2. Nuclear facilities are often housed in large buildings, where nuclear waste is stored. Such buildings are often completely sealed: no door, window, or other aperture is present. The stored waste can remain inside these buildings for a long time, without any monitoring, and it is therefore impossible to ascertain the precise state of the waste containers or the building itself. A way for our integrated solution to address this scenario would involve creating a small aperture in the outer wall and introducing sensors mounted on a robot arm; the arm would squeeze through to the other side and proceed to monitor the inside of the building. More specifically:
- A soft eversion robot (see Section 4.3) can enter the building through the small aperture;
- The robot can be equipped with sensors, and specific protocols can be employed to inspect the surfaces of the walls and the waste containers, either using touch (see Section 4.2) or a combination of vision and touch (see Section 4.1), to evaluate their structural integrity;
- The robot can be equipped with a gripper (see Section 3.1) to grasp and manipulate objects (see Section 3.2) inside the building;
- The movements of the robot can be controlled by human teleoperation with haptic feedback (see Section 5) or can be programmed to be autonomous (see Section 3.3).

Task-Oriented Design and System Development of a Dexterous Robotic Gripper
Deployable manipulators equipped with grippers are key to enabling robots to replace human operators within a radiation-exposed nuclear environment (Figure 2). Although rigid-bodied robots have been specially developed to perform essential tasks in extreme environments, they come equipped with electronic and electrical components that are not radiation-proof and are therefore susceptible to the ill-effects of radiation [7]. Despite the extremely limited choice of rad-hard components (a combined consequence of the extremely high material costs and the relative dearth of suppliers), electronic-component-free robotic devices have enormous potential for applications in radioactive environments. The key factors in the development of radiation-resistant robotic grippers for remote operation using mechanical and materials intelligence are classified as follows [8]:
Leader/follower operation: Though the traditional master/slave system for manipulation of hazardous material may fall short in terms of the required distance between operator and robot, this system has advantages in terms of its simplicity and affordability [9].
Underactuated design for higher affordance: In the development of robotic grippers, the concept of underactuation has been widely adopted as an effective approach to embedding a high number of degrees of freedom (DOFs) without increasing the number of actuators that need to be controlled. An underactuated robotic gripper not only reduces the complexity of its actuation and control systems but also lends itself to inherent compliance and adaptability in relation to how it interacts with its environment [10,11].
Materials intelligence: The use of deformable materials could also contribute to inherent compliance, thereby engendering greater self-adaptability and dexterity. Deformable soft-bodied robots are lightweight and flexible while offering high payload capacity and resistance, a combination ideally suited to extreme scenarios, among them certain nuclear applications requiring power and flexibility [12,13].
High power actuation and variable stiffness: Pneumatic actuators operate by channelling compressed air through tubes into soft materials, effectively acting as muscles. The principal benefits of this type of system are its simplicity and its ability to generate large forces. To increase dexterity, pneumatic pouch actuator-driven variable stiffness control can be achieved by using long tubes at a safe distance that allow the gripper to better adapt to objects of varying shape [14,15].
Waterproof and low cost: With new robotic technologies such as origami-inspired design/production and pneumatic actuation, soft robotic systems, made from deformable PVC materials, are capable of high payload/mass ratio, demonstrate high reliability and scalability, are waterproof, and can be easily integrated into existing equipment. They can also be built at a relatively low cost [16][17][18].
As with all rigid-bodied robotic devices, conventional grippers have limited capabilities when interacting with irregularly shaped objects, unless equipped with a considerable number of actuators and high-resolution sensors. In contrast, soft robotic grippers have proven abilities in grasping a wide variety of objects. Bearing in mind the dual objectives of minimising the use of electronic components while keeping controllers and power sources away from any radiation-contaminated environment, we proposed a novel design of flexure hinges and incorporated them into a robotic gripper. These hinges had adjustable stiffness (realised via shape morphing, as described below) and were incorporated into the gripper fingers, along with pneumatic pouch actuators. This enabled the deployment of the gripper within confined spaces, and indeed even in water ponds within the nuclear plant [19].
The stiffness variation described above is, as suggested, achieved through shape morphing. For a homogeneous beam of uniform cross-section, the flexural stiffness about any specific axis of bending is a function of the shape of that cross-section. The area of the cross-section and the distribution of mass about the centroid affect the flexural stiffness of the beam. The cross-section of the flexural hinges in the fingers shows two distinct elements: a thick central region and two thin, bendable adjoining flaps (shown in Figure 3). An embedded pneumatic actuator attached to the two flaps can be pressurised to vary the flap angle and thus control the overall flexural stiffness of the hinge. When a moment M is applied to the beam, the curvature k about the moment axis is given by k = M/(EI), where E is the Young's modulus of the material and I is the second moment of area about the neutral axis in the plane of the moment axis. Initially, the flap is horizontal and aligned with the longer axis of symmetry in the cross-section of the flexure hinge (as seen in Figure 3 for the case of flap angle = 0°). The second moment of area is at its lowest in this condition and the flexural hinge offers little resistance to bending. When the flap bends, it causes an overall increase in the second moment of area of the cross-section. As well as the change in orientation of the flap, the shift of the neutral axis away from its initial position also increases the second moment of area. For a flap of width 1 cm and height 2 mm, the second moment of area about one axis of symmetry is 25 times that about the other. Thus, changing the orientation of the flaps can effect a substantial change in the stiffness of the beam. Figure 3 shows a representational drawing of the change in the flexural stiffness with varying flap angles.
As the flap angle increases from 0° to 60°, the ratio of the flexural stiffness to the initial flexural stiffness, EI/(EI)₀, increases due to the increase in the second moment of area. As a consequence, resistance to bending increases.
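The 25× figure quoted above follows directly from the rectangle formula I = b·h³/12. A quick numerical check, using the flap dimensions given in the text (the axis naming below is ours):

```python
def rect_second_moment(b_mm: float, h_mm: float) -> float:
    """Second moment of area (mm^4) of a b x h rectangular cross-section
    about the centroidal axis parallel to the side of length b."""
    return b_mm * h_mm ** 3 / 12.0

# Flap from the text: width 10 mm (1 cm), height 2 mm.
I_flat = rect_second_moment(10.0, 2.0)   # flap lying flat (low stiffness)
I_edge = rect_second_moment(2.0, 10.0)   # flap turned on edge (high stiffness)

print(I_flat)           # ≈ 6.67 mm^4
print(I_edge)           # ≈ 166.67 mm^4
print(I_edge / I_flat)  # ≈ 25, the factor quoted in the text
```

For a b × h rectangle the ratio between the two centroidal second moments of area is (b/h)², hence (10/2)² = 25; this is why rotating the flap so dramatically stiffens the hinge.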
As Figure 3 shows, the actuation of different pouches, and the consequent tendon response, leads to different gripper configurations. A "1" represents an actuated pouch at the flexure hinge while a "0" represents an unactuated one. The ability of these pneumatic pouches to control the flexural stiffness of the flexible fingers shows great potential for producing a variety of grasp modes. In this section, various combinations are tested by actuating the pouches embedded in the two flexible fingers and identifying the different configurations achievable by the underactuated gripper.
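The binary pouch coding lends itself to straightforward enumeration. A minimal sketch, assuming two pouches per finger purely for illustration (the actual pouch count in the hardware may differ):

```python
from itertools import product

# Hypothetical layout: two fingers, each with two pneumatic pouches.
POUCHES_PER_FINGER = 2
FINGERS = 2

def all_configurations():
    """Enumerate every actuation pattern, encoded as in the text:
    '1' = pouch actuated (stiff hinge), '0' = unactuated (compliant)."""
    n = POUCHES_PER_FINGER * FINGERS
    return ["".join(bits) for bits in product("01", repeat=n)]

configs = all_configurations()
print(len(configs))   # 16 distinct gripper configurations
print(configs[:3])    # ['0000', '0001', '0010']
```

Even this tiny configuration space shows how a handful of pouches yields many grasp modes without extra actuators.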
Given the wide range of configurations that can be achieved by the gripper and the high conformability of the flexible fingers, as shown in Figure 3, we can demonstrate the gripper's versatility in grasping different objects encountered in daily life; indeed, the gripper proves capable of handling objects of different shape and size and of variable stiffness (rigid to soft). This suggests that the new gripper could be used in storage and other confined environments for picking and placing tasks.

Safe Object Grasping and Manipulation
Manipulating objects in nuclear environments brings with it two key challenges. Objects are often unknown (i.e., we do not have access to analytical models or experience from previous learning), and these objects need to be handled safely (i.e., without breaking or dropping them). Haptic intelligence, the use of tactile sensing in exploratory procedures, is therefore vitally important for a robot hand employed in these settings. Grasp safety can be maximised through appropriate haptic procedures both before and during the manipulation of an object.
Before manipulation, the object can be explored haptically to identify an ideal grasp metric. To minimise the number of exploratory actions required, unscented Bayesian optimisation has proven very effective [20,21]. We implemented a full perception-action pipeline [22] in which we employed unscented Bayesian optimisation to identify grasps that maximise a force closure metric, in order to determine a configuration that has a high probability of being robust, before using it to pick up and transport an object. This approach works in applications in which time is not critical but safety is; indeed, haptic exploration does require a certain amount of time (ultimately depending on the complexity of the explored object) but, as our experiments demonstrate, it dramatically increases the chances of keeping the object stable within the grasp once it has been picked up.
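The "unscented" ingredient in unscented Bayesian optimisation is that each candidate grasp is scored by its expected quality under execution uncertainty, estimated from sigma points rather than a single evaluation. A minimal numpy sketch; the quadratic `quality` function is a stand-in for the force closure metric of [22], and the noise covariance is illustrative:

```python
import numpy as np

def sigma_points(mean, cov, kappa=1.0):
    """Standard unscented sigma points for an n-dimensional Gaussian."""
    n = len(mean)
    L = np.linalg.cholesky((n + kappa) * cov)
    pts = [mean]
    for i in range(n):
        pts.append(mean + L[:, i])
        pts.append(mean - L[:, i])
    w0 = kappa / (n + kappa)
    wi = 1.0 / (2 * (n + kappa))
    return np.array(pts), np.array([w0] + [wi] * (2 * n))

def expected_quality(f, mean, cov):
    """Approximate E[f(x)] for x ~ N(mean, cov) via the unscented transform,
    i.e., a noise-aware grasp score instead of a point evaluation."""
    pts, w = sigma_points(mean, cov, kappa=1.0)
    return float(np.dot(w, [f(p) for p in pts]))

# Stand-in grasp-quality surface, peaked at grasp parameters (0.3, -0.1).
quality = lambda x: -np.sum((x - np.array([0.3, -0.1])) ** 2)

mean = np.array([0.3, -0.1])     # candidate grasp
cov = 0.01 * np.eye(2)           # assumed execution noise on the grasp pose
print(round(expected_quality(quality, mean, cov), 4))  # -0.02, i.e. -trace(cov)
```

Note how the noise-aware score of the nominally perfect grasp is negative: uncertainty penalises grasps sitting on narrow quality peaks, which is exactly the safety bias exploited in [20,21].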
During manipulation, real-time feedback from tactile/force sensors in the robot fingertips helps maintain a stable grasp of the object. We demonstrated this in a set of experiments in which we learned in-hand manipulation actions from human demonstrations and executed them on novel unknown objects [23]. What we learn from demonstration is the intended motion of the object rather than the profile of forces applied to it. However, with the aid of a compliant controller that leverages real-time force feedback from the fingertip sensors, these actions can be executed successfully without drops or breakages.
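At its simplest, the compliant execution described above can be approximated by a proportional grip-force regulator driven by the fingertip sensor. The gains, setpoint, and limits below are illustrative, not those of the controller in [23]:

```python
def regulate_grip(measured_normal_force, target=2.0, gain=0.5,
                  f_min=0.5, f_max=10.0, current_command=2.0):
    """One step of a proportional grip-force controller: tighten when the
    sensed normal force drops below target (incipient slip), relax when it
    overshoots (risk of crushing). Commands are clamped to a safe range."""
    error = target - measured_normal_force
    command = current_command + gain * error
    return max(f_min, min(f_max, command))

# The object starts to slip: sensed force falls, so the command rises.
print(regulate_grip(1.0))   # 2.5
# The grip is too tight: sensed force exceeds target, so the command relaxes.
print(regulate_grip(4.0))   # 1.0
```

A real implementation would run this at the sensor rate and feed the command to the finger actuators, but the drop/breakage trade-off is already visible in the two directions of correction.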
In a different set of experiments, we used a parallel robotic gripper equipped with a recently developed tactile skin that can measure 3D contact forces at multiple contact points [24]. We initially recorded tactile data during several pick-and-place actions using different objects, grasped in different configurations. In these instances, the gripper applied a constant force on the object, and on some occasions we observed object slips. We then used these labelled data to train a classifier that would detect slip events and were able to report a high degree of classification accuracy [25].
Notably, we were able to show that the trained classifier could also be applied to a set of novel unknown objects, detecting slip events when handling objects that had not been included in the training process, albeit with a lower degree of accuracy. Interestingly, a different version of this tactile skin has been used to sensorise the fingertips and fingers of a dexterous robotic hand [26], suggesting that this approach could equally be applied to dexterous hands.
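A slip classifier of the kind described above can be sketched with a single hand-crafted feature and synthetic force traces. The real system in [25] trains on recorded uSkin data; everything below, from the traces to the nearest-centroid rule, is a simplified stand-in:

```python
import numpy as np

rng = np.random.default_rng(0)

def slip_feature(tangential_force):
    """Slips show up as sudden changes in tangential force, so use the
    standard deviation of its first difference as a single feature."""
    return float(np.std(np.diff(tangential_force)))

def make_trace(slip: bool, n=200):
    """Synthetic stand-in for a recorded pick-and-place force trace."""
    trace = 2.0 + 0.02 * rng.standard_normal(n)       # stable contact noise
    if slip:
        trace[100:110] += np.linspace(0, -1.5, 10)    # abrupt force drop
    return trace

# "Training": one feature centroid per class from labelled traces.
stable_c = np.mean([slip_feature(make_trace(False)) for _ in range(20)])
slip_c = np.mean([slip_feature(make_trace(True)) for _ in range(20)])

def detect_slip(trace):
    f = slip_feature(trace)
    return abs(f - slip_c) < abs(f - stable_c)

print(detect_slip(make_trace(True)))    # True
print(detect_slip(make_trace(False)))   # False
```

The generalisation result reported above corresponds, in this toy picture, to the feature separating slips from stable grasps even for traces drawn from objects never used to fit the centroids.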

A Software Framework for Robot Manipulation
We have recently introduced the Grasping Robot Integration & Prototyping (GRIP) framework [27], a novel open-source software platform that facilitates robot programming and deployment. GRIP is hardware-agnostic and its main aim is to reduce the time and effort required to programme complex robotic manipulation tasks, by enabling the integration of disparate components. Its graphical user interface (GUI), for example, helps and guides the user through all the necessary steps, from hardware and software integration to the design and execution of complex tasks.
GRIP provides a systems framework in which robots can be operated with ROS-compatible software components from various sources, in addition to those that are custom-made. Both low-level and high-level components (kinematic libraries, controllers, motion planners, sensors, and learning methods) can be integrated in a relatively straightforward way (see Figure 4). The GRIP framework, we believe, moves us closer to the potential application of pre-existing software components within the nuclear setting.
Figure 4. Overview of the GRIP framework. The architecture allows users to operate robots with integrated components (hardware and/or software) for grasping and manipulation tasks. The task editor provides an intuitive interface for designing and tuning the robot's behaviour. Arrows indicate the different (mutually compatible) ways to interface external components (Reprinted with permission from ref. [27]. Copyright 2020 IEEE).
To ensure the framework is as accessible as possible, we needed to keep the requisite programming to a minimum. In doing so we focused, in particular, on the following issues:
• Robot interfacing, using components widely available online;
• Software and sensor integration, regardless of implementation details;
• Variables definition, to be used during robot execution;
• Task design and execution using integrated components, via a visual drag-and-drop interface.
GRIP enables the rapid integration and deployment of a broad range of components onto a robot, imposing minimal limitations on the execution of a specific task. The various integration options available within the framework enable components of different origins to be linked together. By way of example, a specially designed controller for a robot end-effector could be made to work alongside MoveIt! [28] in the operation of the arm.
The integration of different components (such as sensors, actuators, and controllers) in even moderately complex robotic scenarios requires clear communication pathways and agile management of information flow. GRIP can provide an interface that allows for the management and propagation of custom ROS messages. Per Figure 5, GRIP abstracts a given robot prototype as a set of configuration-dependent building blocks, which the user manipulates via an interactive GUI that effectively walks them through each stage of the workflow, from robot integration to task execution.
Figure 5. Robots can be configured through MoveIt! (blue) or using an existing launch file that gathers all components to be run (orange). External low-level components can also be integrated into our framework by wrapping them into ROS actions or services (red). Black arrows indicate consistent operations across the integration modalities (Reprinted with permission from ref. [27]. Copyright 2020 IEEE).
To further facilitate usability, especially when integration involves unfamiliar components (and therefore unknown syntax), we incorporated a real-time parser, which flags up any invalid input.
Having interfaced a robot in GRIP, we move our attention to behavioural design using a set of integrated components. As with other robot programming frameworks [29,30], this is carried out using state machines in a drag-and-drop programming environment, making the process more intuitive than text-based programming, with no specific prior knowledge required. As per Figure 6, task generation involves the following steps:
• Dragging and dropping states or state machines in the window;
• Configuring each state;
• Amalgamating the outcomes of the various elements to define the behaviour of the robot.
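The state machines produced by the task editor follow a standard pattern: states return outcome strings, and (state, outcome) pairs determine the next state. A minimal sketch of that general pattern (this is not GRIP's actual API, and the task states are hypothetical):

```python
class State:
    """A task step that executes and returns an outcome string."""
    def __init__(self, name, action):
        self.name, self.action = name, action
    def execute(self, ctx):
        return self.action(ctx)

class StateMachine:
    """Minimal state machine: (state, outcome) -> next-state transitions;
    terminal outcomes end execution."""
    def __init__(self, initial, transitions, terminal=("done", "failed")):
        self.initial, self.transitions, self.terminal = initial, transitions, terminal
    def run(self, states, ctx):
        current = self.initial
        while True:
            outcome = states[current].execute(ctx)
            if outcome in self.terminal:
                return outcome
            current = self.transitions[(current, outcome)]

# Hypothetical pick-and-place task assembled from three states.
states = {
    "grasp": State("grasp", lambda ctx: "grasped"),
    "move": State("move", lambda ctx: "at_target"),
    "release": State("release", lambda ctx: "done"),
}
sm = StateMachine("grasp", {("grasp", "grasped"): "move",
                            ("move", "at_target"): "release"})
print(sm.run(states, {}))   # done
```

The drag-and-drop editor essentially builds the `states` and `transitions` structures graphically, which is why outcomes must be amalgamated to define the robot's overall behaviour.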
This graphical programming approach simplifies the logic of the task, presenting it in a clear visual format. The ability to modify states within the task editor strips away much of the complexity of configuring challenging robotic tasks when programming with a text editor. GRIP's visual programming editor thus lowers the configuration overhead when setting up grasping and manipulation tasks. It has also been shown to be usable, and indeed beneficial, to both experienced and novice users, as revealed by a user study involving participants with different levels of expertise in robotics [27].
Figure 6. The appearance of the task editor when designing a bimanual operation. The user can navigate both between and within hierarchies, via the sub-windows that are created when new containers are added. Different levels of zoom will show or hide the configuration data of each state, to ensure an appropriate visualisation (Reprinted with permission from ref. [27]. Copyright 2020 IEEE).
As a system integration platform for solving complex robotic tasks, GRIP has, in our opinion, considerable potential value for developing engineering systems for hazardous environments. A typical task in nuclear decommissioning is to pick and sort lightly irradiated elements into various containers, a task that requires maintaining a stable object pose throughout the procedure. We therefore had GRIP integrate a varied set of software and hardware components (e.g., uSkin sensors [24]) into the robot arm and gripper and then used the task editor to implement an autonomous pick-and-place task with added slip detection. A tactile sensor detects any slippage during the grasping phase; where slip is identified, the robot replaces the object onto the surface and re-grasps it, maintaining its pose.
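The re-grasp behaviour described above reduces to a simple supervisory loop. In the sketch below the slip readings are simulated stand-ins for the tactile classifier's output, and the attempt limit is an assumption:

```python
def pick_with_regrasp(slip_readings, max_attempts=3):
    """Attempt a pick; if the (simulated) tactile sensor reports slip while
    lifting, put the object back on the surface and re-grasp, up to
    max_attempts. slip_readings[i] is True if attempt i slips."""
    for attempt in range(max_attempts):
        slipped = slip_readings[attempt] if attempt < len(slip_readings) else False
        if not slipped:
            return f"placed after {attempt + 1} attempt(s)"
        # Slip detected: replace the object and try a fresh grasp.
    return "failed: slip persisted"

print(pick_with_regrasp([True, False]))       # placed after 2 attempt(s)
print(pick_with_regrasp([True, True, True]))  # failed: slip persisted
```

In GRIP terms, each loop iteration corresponds to a transition back from the lift state to the grasp state on a "slip" outcome.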
Another typical decommissioning use case involves manoeuvring a soft robot into a building containing nuclear material, to detect and assess cracking in any vessels housing nuclear material. In this situation, GRIP takes care of the integration of the soft robot controller, with vision and force/tactile sensors to supply observational data that is entered into a crack detection algorithm. GRIP enables the rapid prototyping of different exploratory strategies, facilitating evaluation comparisons.

Using Visual-Tactile Sensing for Surface Inspection
Automatic inspection of pipework [31], vats, tanks, and other vessels for mechanical fractures could be expected to form an important part of an early warning system in relation to chemical/radioactive waste management in nuclear environments. Established industry-standard techniques for inspecting large structures include X-ray scanning [32], eddy-current techniques [33], those exploiting fluid motion dynamics [34], and vision-based crack detection [35]. Methods such as these bring a certain overhead in terms of needing specialist equipment (and its potentially high cost) along with in situ experienced personnel, militating against their use in nuclear environments. A further drawback in environments that have limited luminosity and are subject to strong radiation is that electronic instruments (e.g., cameras) can be rendered inoperable or prone to degradation and failure.
In light of this, we advocate a new technique that uses fibre-optic tactile sensors in the detection of surface cracks. It has been developed with a nuclear decommissioning use case in mind that involves remotely operated robots. Using fibre optics brings certain advantages such as its function being unaffected by gamma radiation [36][37][38]. It, therefore, presents a potential route to replacing electrical cables in nuclear power plants [39,40].
In [41,42], we propose a software framework for an integrated force and proximity sensor in the shape of a finger, as detailed in [43]. This sensor comprises 3D-printed rigid (VeroClear Glossy) and soft (Nylon-PA2200) components, enabling a certain amount of flexure when the sensor is pressed up against objects in the environment.
As per Figure 7a, the sensor uses three fibre optic light guides (D1, D2, and D3), each consisting of a pair of optical fibres, relying on light intensity modulation to determine the deformation of the flexible mid-section. A fourth pair of optical fibres (P) is responsible for proximity sensing, measuring the distance between the sensor's tip and objects in its vicinity. A Keyence FS-N11MN light-to-voltage transducer attached to each light guide registers the changes in light intensity. Further details regarding the functioning of the device can be found in [43,44].
Figure 7. (c) On the left, the frames captured by the webcam; in the centre, the object detection results on the previously acquired frames; on the right, the tactile results. (d) Raw measurements of multiple runs from the four sensing elements of the sensor (deformations D1, D2, D3, and proximity P) for the "crack" surface. (Adapted from ref. [41]).
In [41,42], we establish a process that combines tactile and optical proximity sensing as the basis for efficient automatic crack detection. We leverage the Keyence sensor coupled with learning algorithms for the detection of cracks and protuberances using deformation and proximity readings. When a particular crack is identified, the system automatically classifies it according to its width, a process that runs both on- and off-line. Data collection and testing of the proposed algorithm involved mounting the sensor onto the end-effector of a Touch desktop haptic interface (previously called Phantom Omni, latterly Geomagic), as depicted in Figure 7b. We had the Geomagic execute a periodic sliding movement tracing the tactile sensor over a sample surface. An Arduino Mega ADK, connected to the computer via a USB port, was used to acquire the data at 400 Hz via 4 analogue pins. The data were later matched against the absolute tip position of the tactile and proximity sensor, as Figure 7d shows. Being fibre-optic in design, the sensing instrument is resilient to gamma radiation and should therefore readily work in a nuclear environment [36]. Moreover, subject to certain constraints, the nylon parts of the sensor are also usable within certain radiation parameters, as explained in [45].
The mean detection rate for cracks was ∼94%, while the mean correct classification rate for crack widths was ∼80%. A technique for online classification has also been developed, allowing for surface exploration in real time. The technique presented in [46] for multi-modal visuo-tactile surface inspection that enables the detection and classification of fractures can reasonably be applied to a remote inspection use case in a nuclear environment. As an example, a teleoperated robot manipulator equipped with this sensor would be able to scan for areas of interest. Visual imagery captured by such a robot could then be processed by the proposed algorithm to pinpoint those areas that have a high probability of containing mechanical fractures [41]. Subsequent to this, the robot homes in on the identified area, utilising its on-board sensor-embedded manipulator to perform in situ surface exploration at close quarters to gather further information on potential damage.
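The detect-then-classify pipeline can be sketched as thresholding on the deformation trace, with crack width recovered from how long the tip stays deflected at a known scan speed. The threshold, scan speed, and width bins below are illustrative, not the values used in [41,42]:

```python
import numpy as np

def detect_crack(deformation, baseline=0.0, threshold=0.3):
    """Return (found, width_samples): a crack shows up as a contiguous run
    of deformation readings deviating from the no-contact baseline."""
    over = np.abs(np.asarray(deformation) - baseline) > threshold
    if not over.any():
        return False, 0
    # Width = longest contiguous run of above-threshold samples.
    runs, current = [], 0
    for flag in over:
        current = current + 1 if flag else 0
        runs.append(current)
    return True, max(runs)

def classify_width(width_samples, sample_rate_hz=400, speed_mm_s=10.0):
    """Convert run length to crack width in mm, then bin it."""
    width_mm = width_samples / sample_rate_hz * speed_mm_s
    return "narrow" if width_mm < 1.0 else "wide"

# Synthetic trace: flat surface with a 60-sample dip where the tip drops in.
trace = np.zeros(400)
trace[200:260] = -0.8
found, w = detect_crack(trace)
print(found, w, classify_width(w))   # True 60 wide
```

The 400 Hz figure matches the acquisition rate stated earlier; the 10 mm/s sliding speed is an assumed value for the conversion from samples to millimetres.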
The presented technique employs object detection to determine the location of surface fractures, which are then further inspected and verified via optical fibre proximity sensing. The sensor data are captured during physical interaction between a customised robotic finger and the environment. A complete description of the model and datasets can be found in [46]. Two experiments were conducted to evaluate the efficacy of the multi-modal solution: one to assess the online detection rate and another to gauge the time taken to perform surface exploration and identify cracks. Figure 7c presents a data sample processed by the aforementioned algorithm. Following visuo-tactile fusion, the model achieved a detection rate of 92.85% across all cracks. Surface exploration by tactile sensing alone took an average of 199 s, a figure that fell to 31 s when leveraging both modalities.
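The source of the speed-up is that vision prunes the search: only the regions vision flags as suspect are touched. A back-of-the-envelope cost model makes this concrete (the per-region costs and region counts are illustrative, not the measured figures from [46]):

```python
def exploration_time(n_regions, flagged, t_tactile=5.0, t_vision=0.2):
    """Compare exhaustive tactile exploration of every surface region
    against vision-guided exploration that only touches flagged regions.
    Costs are in seconds per region."""
    tactile_only = n_regions * t_tactile
    fused = n_regions * t_vision + len(flagged) * t_tactile
    return tactile_only, fused

tactile_only, fused = exploration_time(n_regions=40, flagged=[3, 17, 29])
print(tactile_only)  # 200.0
print(fused)         # 23.0
```

Whenever vision is much cheaper per region than touch and flags few candidates, the fused strategy dominates, which is the effect behind the 199 s versus 31 s comparison above.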

Surface Characterisation with Highly Sensitive Bio-Inspired Tactile Cilia
Finer characterisation of the texture of a surface can be obtained by employing a more sensitive tactile or force sensor. To this end, we developed a miniaturised sensor [47] inspired by the physical structure of biological cilia, which are found in many living organisms [48], e.g., the hair on our skin, the whiskers of a rat, and the trichomes of plants. Our sensor is composed of a flexible magnetic cilium whose magnetic field is measured by a Giant MagnetoResistive (GMR) sensor. When the cilium is deformed through external contact, the magnetic field changes, providing information on the nature of that contact. Both the physical structure of the flexible cilium and the high sensitivity of the GMR sensor allow for the measurement of very small deformations, making this physical scanning method capable of extracting highly detailed information about the texture of a surface. We applied this idea to two different scenarios. In one case, we scanned a thin metallic sheet into which we had intentionally introduced defects in the form of holes and cavities, and were able to precisely detect the position and size of those defects using a simple signal processing analysis of the sensor data [49]. In another set of experiments, we scanned two different types of fruit (apples and strawberries) that were labelled as either ripe or senescent, and trained classifiers based on the data collected with the sensor. We achieved high classification accuracy for both apples and strawberries, using different versions of the sensor, with either a single cilium or a matrix of cilia [50].
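As a simplified stand-in for the signal processing analysis used to localise defects [49], the sketch below flags samples of a synthetic cilium signal that deviate from the mean by more than k standard deviations; the threshold and helper name are hypothetical:

```python
def find_defects(signal, k=3.0):
    """Return indices whose deviation from the signal mean exceeds
    k standard deviations -- a crude stand-in for the defect
    localisation analysis applied to the GMR sensor data."""
    n = len(signal)
    mean = sum(signal) / n
    std = (sum((s - mean) ** 2 for s in signal) / n) ** 0.5 or 1e-9
    return [i for i, s in enumerate(signal) if abs(s - mean) > k * std]

# Synthetic scan of a metal sheet: flat response with two anomalies
# (a hole at index 20, a cavity at index 60).
scan = [0.0] * 100
scan[20], scan[60] = 5.0, -4.0
defects = find_defects(scan)
```

Mapping flagged indices back through the known scanning trajectory then yields the position (and, from the extent of the anomaly, the size) of each defect.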

Highly Manoeuvrable Eversion Robot for Sensor Delivery
Soft robots are characterised by flexible and compliant bodies and an associated high number of degrees of freedom. They also tend to be lighter in weight and lower in cost than their rigid-body counterparts, the former of these two properties making them relatively easy to transport and deploy. Being compliant, soft robots can re-shape and mould themselves to the environment, such as when colliding with an obstacle. A variety of means of actuation exist to steer (soft) continuum manipulators towards a target or region of interest, such as tendon mechanisms [51][52][53], McKibben muscles [54], pouch motors [55,56], inflatable pleats [57], inflatable bladders [58][59][60], and pneumatic artificial muscles [61,62]. Despite these advances in actuation, most soft robots currently still face limitations in relation to elongation or extension of their main structure and are therefore unable to traverse long distances.
Eversion robots, a new class of soft robot, can, however, grow longitudinally, exploiting the principle of eversion. This, combined with their ability to squeeze through narrow openings, opens up the possibility of robots being able to access locations that would otherwise have remained inaccessible. These inflatable robots overcome the elongation issue, achieving extension ratios of 1 (initial length) to 100 (fully extended) [63], enabling access to remote environments [64].
An eversion robot resembles a sleeve of circular cross-section which is folded in on itself [65]. Under pressure the folds near the tip start to unravel, turning inside out and forcing longitudinal displacement or growth. In free space, growth is linear; when butting up against obstacles, the expanding eversion robot conforms to its surroundings. This property renders it well suited to negotiating an unstructured environment or confined space. The principal downside is the limited capacity for bending inherent to most designs of this type.
We have proposed a novel way to enhance bending actuation of the robot's structure [66]. The augmented design features an eversion robot body that integrates a central chamber acting as the backbone with actuators that enable bending and help manoeuvre the manipulator. The proposed method results in greatly improved bending capacity (133% improvement in bending angle) in an eversion robot with externally attached actuators. The added manoeuvrability represents a step-change in the development of eversion robots for use in remote and hard-to-reach environments. These enhancements, as well as ongoing work to prime an eversion robot for sensor deployment, allow for delivery of sensor loads to cluttered scenes in nuclear environments inaccessible to humans.
Eversion robots can deploy sensors to remote locations in each of the following three ways:
• (For unknown environments or destinations) Because the tip extends continuously, attaching a gripper or indeed any end-effector to it would enable relevant tasks to be carried out once the target destination is reached.
• (For unknown environments or destinations) The eversion robot has a longitudinal hollow, so once it has extended and reached its target, the sensor can be passed from one end of the robot (the base) to the other (the tip), as shown in Figure 8.
• (For known environments or destinations) Attaching the sensor to a predetermined position within the body of the robot. Provided we know the precise location of the target, we can place the sensor within the robot at the exact point that will unfold upon reaching that target. In this way the sensor can be deployed to the correct position.

Human-Machine Interfaces for Efficient Robot Teleoperation in Extreme Environments
Reliable, easy-to-learn, and easy-to-operate human-machine interfaces are critical for the efficient and safe teleoperation of robotic systems performing tasks in extreme environments. At QMUL we have proposed several novel interaction methods [6,67-69] that utilise virtual reality and haptic technologies (as outlined in the following subsections) to efficiently teleoperate robots located in remote and hazardous environments. The proposed human-machine interfaces can be employed within the integrated telerobotic system shown in Figure 1.

Virtual Reality-Based Teleoperation
We present an overview of a virtual reality (VR)-based robot teleoperation interface that can facilitate multiple exploration and manipulation tasks in extreme environments. In comparison to conventional robot teleoperation interfaces (2D displays, keyboards, and joysticks), interfaces using VR headsets and handheld wireless controllers provide a human operator with improved depth perception [70], more intuitive control, and better remote-environment exploratory capacity [71,72].
In our recent work we compared a human operator's ability to perceive and navigate a remote environment with the help of VR-based teleoperation interfaces employing different visualisation and remote-camera operation modes [6]. We considered the following video camera configurations for the remote environment: using a single external static camera; a camera attached to and manoeuvred by a robotic manipulator (in-hand dynamic camera); a combination of in-hand dynamic and external static cameras; and an in-hand dynamic camera in conjunction with scene visualisation based on OctoMap occupancy mapping. These four remote-scene representation modes were compared in an experimental study with human participants. The experimental task was to explore the remote environment and to detect and identify objects placed randomly in the vicinity of the teleoperated robot manipulator. Performance on each task was assessed in terms of completion time, number of correctly identified objects, and NASA task load index.
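For reference, the NASA task load index in its unweighted ("raw TLX") form is simply the mean of six subscale ratings; the sketch below assumes 0-100 ratings and does not reflect whether the weighted or raw variant was used in [6]:

```python
TLX_SCALES = ("mental", "physical", "temporal",
              "performance", "effort", "frustration")

def raw_tlx(ratings):
    """Raw NASA-TLX: unweighted mean of the six subscale ratings,
    each given on a 0-100 scale."""
    missing = [s for s in TLX_SCALES if s not in ratings]
    if missing:
        raise ValueError(f"missing subscales: {missing}")
    return sum(ratings[s] for s in TLX_SCALES) / len(TLX_SCALES)

# Hypothetical ratings from one participant after one condition.
score = raw_tlx({"mental": 60, "physical": 20, "temporal": 45,
                 "performance": 30, "effort": 55, "frustration": 40})
```

Comparing such scores across the four remote-scene representation modes gives the workload dimension of the evaluation, alongside completion time and object-identification accuracy.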
Our study showed that the in-hand dynamic camera operated by a robot manipulator combined with OctoMap visualisation provided a human operator with a better understanding of the remote environment whilst requiring relatively small communication bandwidth. However, an in-hand camera hindered grasping tasks, as the RGB-D camera could not properly register objects that were very close to it (the minimal registration distance is approximately 10-15 cm). To improve the performance of grasping tasks when RGB-D cameras were used, we have proposed several VR-based manipulation techniques that utilise remote scene segmentation and cloning for VR representation and a set of VR gestures to control remote grasping (see Figure 9) [6]. A video demonstration of the system is available here (https://youtu.be/3vZaEykMS_E (accessed on 28 September 2021)). Additionally, we have demonstrated that it can be beneficial for VR-based teleoperation interfaces to introduce workspace scaling if rate mode control is used, such that the human operator's joystick displacement is mapped onto the desired speed of the remote robot's end-effector [73]. The commands for the teleoperated robotic manipulator were sent using the interoperability teleoperation protocol for switching between position and rate control modes [74][75][76].
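The distinction between position and rate control modes, and the role of workspace scaling, can be sketched as follows (the gains, deadband, and function names are illustrative; the actual interoperability protocol is described in [74][75][76]):

```python
def teleop_command(mode, joystick, scale=1.0, rate_gain=0.05,
                   deadband=0.02):
    """Map a normalised joystick displacement (-1..1 per axis) to a
    robot command. Position mode: the scaled displacement is a target
    offset (m). Rate mode: displacement outside a deadband commands an
    end-effector velocity (m/s). All gains are illustrative."""
    if mode == "position":
        return [scale * j for j in joystick]
    if mode == "rate":
        return [0.0 if abs(j) < deadband else rate_gain * j
                for j in joystick]
    raise ValueError(f"unknown mode: {mode}")

pos_cmd = teleop_command("position", [0.5, -0.2, 0.0], scale=0.3)
vel_cmd = teleop_command("rate", [0.5, -0.01, 1.0])
```

In rate mode a held displacement produces continuous motion, so a small physical joystick workspace can cover an arbitrarily large remote workspace; the scaling factor then tunes how aggressively displacement maps to speed.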

Haptic Feedback for Robot Teleoperation
Providing tactile feedback to a human operator is another key factor in achieving safe and successful task completion in remote robotic manipulation. Traditional haptic interfaces can generate force or vibrotactile feedback to characterise the haptic properties of the remote environment; accordingly, we have explored the use of simple vibration motors as a very affordable and portable solution [77][78][79][80][81]. However, very few such solutions can simultaneously render force, texture, and shape. To address this limitation, we have developed several novel haptic devices based on particle jamming, in which air pressure control drives the jamming transition and inflates the touch surface, while eccentric rotating motors or linear resonant actuators generate the vibrations. The prototype interface [67,82] and the proposed joystick interface [83] are shown in Figure 10. Our solution uses a vacuum pump to generate an area of low pressure within a rigid casing, forcing the soft cover into the device and causing the particles to jam. Two materials were considered for the soft haptic pad: Polyvinyl Chloride (PVC) and vinyl. During testing, the PVC sheet proved too stiff to effectively deform and jam the particles, whereas vinyl was moderately effective. The particle filling was also selected following initial experimentation, with plastic balls ultimately replaced by quinoa seeds.

We have tested how the vibrotactile waves propagate through the particle jamming interface at different air pressure levels. An accelerometer attached to the tactile surface of the prototype was used to measure the vibrations. Experimental results show that the amplitude of vibration drops from 50 m/s² to 20 m/s² over the range of pressures used in the experiment. Another interesting observation is that the vibration waveform has a noticeable double peak in the soft fluid state but rapidly becomes smoother as the fluid stiffens.
Testing showed that under low vacuum pressure, and thus with the particles in their soft state, increasing the vibration motor's power created a more pronounced periodic vibration and a slight increase in the measured amplitude. Frequency remained fairly consistent under this condition. Under higher vacuum pressure, with the particle body consequently rigid, increasing motor power raised the amplitude of vibration by about 50%, after which point the vibration amplitude remained consistent [67].
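The amplitude figures above can be extracted from an accelerometer trace as the peak deviation from the mean; the sketch below applies this to synthetic soft-state and jammed-state traces (the signal parameters are illustrative, not measured data):

```python
import math

def vibration_amplitude(samples):
    """Peak deviation from the mean of an accelerometer trace (m/s^2) --
    the amplitude figure reported for the tactile surface."""
    mean = sum(samples) / len(samples)
    return max(abs(s - mean) for s in samples)

# Synthetic 1 s traces sampled at 400 Hz: a 100 Hz vibration with the
# soft-state and jammed-state peak amplitudes quoted above.
t = [i / 400.0 for i in range(400)]
soft = [50.0 * math.sin(2 * math.pi * 100 * x) for x in t]
jammed = [20.0 * math.sin(2 * math.pi * 100 * x) for x in t]
```

The same trace can also be inspected in the frequency domain to quantify the waveform-shape effect (the double peak in the soft state smoothing out as the fluid stiffens).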
A joystick-shaped implementation (Figure 10b) of the existing particle jamming interface uses two soft pouches to contain the particle fluid and vibrotactile actuator (mounted directly at the back of the soft pouch to keep it in place and allow vibrations to propagate outwards to the user's hand). Stiffness and shape can be controlled by the existing control unit, enhanced by a second pressure regulator and driver for the vibrotactile actuator.

Teleoperation of Legged and Wheeled Mobile Robots
In addition to the teleoperation of robotic manipulators, it is often necessary to employ mobile robotic systems to safely deliver sensors and tools to remote environments with larger workspaces [84,85]. We have developed a novel lower-limb teleoperation interface that can be used for the teleoperation of legged and wheeled robotic systems [69,86,87]. Our interface uses input from a human operator's legs to control a remote mobile robot, which is crucial to creating a genuinely immersive teleoperation experience.
The designed ankle interface is shown in Figure 11. The device consists of a single actuated foot platform that rotates around the ankle's coronal axis. The platform is actively impedance-controlled around the horizontal state [68]. A seated human operator uses alternate left/right ankle plantar-/dorsi-flexion (foot tapping) as the input walking command for robot teleoperation. The platform's periodic angular displacements are captured by a shaft encoder and, via a dedicated gait extraction algorithm, mapped to a continuous reference that dictates the remotely controlled robot's gait. As a result, the remote robot can imitate the walking commands of its human operator. Significantly, since the ankle platform is actuated, it can render haptic feedback that can be programmed to reflect the properties of the remote robot's terrain.

Figure 11. Teleoperation system based on a seated ankle interface to control the locomotion of remote mobile robotic systems. A human operator uses foot-tapping movements to control the walking of a humanoid robot, i.e., its gait. Terrain feedback from the remote environment is rendered to the human operator.
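A minimal sketch of the gait extraction idea (the actual algorithm is described in [68,69]; the crossing-counting scheme, gain, and signal here are purely illustrative) converts a window of platform angles into a continuous speed reference by counting taps:

```python
import math

def gait_speed(angles, window_dt=2.0, gain=0.1):
    """Map a window of ankle-platform angles (rad) to a walking-speed
    reference (m/s): count downward zero crossings (taps) and scale
    the resulting tap frequency by an illustrative gain."""
    taps = sum(1 for a, b in zip(angles, angles[1:]) if a > 0 >= b)
    return gain * (taps / window_dt)

# Synthetic 2 s encoder window at 100 Hz: 1.5 Hz tapping, 0.3 rad swing.
angles = [0.3 * math.sin(2 * math.pi * 1.5 * i / 100) for i in range(200)]
speed = gait_speed(angles)
```

Faster tapping raises the tap frequency and hence the commanded speed, so the remote robot's gait tracks the operator's foot movements continuously rather than in discrete steps.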
We have evaluated the designed ankle-based teleoperation interface with a group of human participants, demonstrating that the ankle gestures required by the interface are readily learned [69] and can be used efficiently to control the speed of a remotely teleoperated legged robot [87]. It was also demonstrated that the platform can render different types of haptic terrain feedback [69]. A more recent version of the interface is presented in [88]. The proposed ankle-based input interfaces can also be used to teleoperate wheeled mobile robots.

Conclusions and Future Challenges
This paper has provided an overview of recent advances in robot technologies developed at the Centre for Advanced Robotics @ Queen Mary (ARQ) in relation to dealing with nuclear waste. Recognising the need for remedial solutions for the decommissioning of radioactive waste, ARQ set out to research and evaluate the role of soft robotics in this area of the nuclear industry, with a focus on robot design, sensors and their delivery, grasping and related manipulation tasks, and human-machine interfaces for teleoperated systems. Advances in these key areas show that practical approaches can be found to specific problems. With this in mind, initial steps have been taken to create a software framework that allows us to integrate these varied components in a straightforward, user-friendly way. We see this framework as a potential catalyst for bringing new technological solutions to what has historically been a somewhat conservative industry.
Despite the many and varied advances in the field, significant challenges remain. On the upside, soft robots can penetrate relatively inaccessible areas of nuclear facilities and, by keeping their electronics outside the radioactive environment, can operate close to sources of nuclear radiation. However, these robots bring their own challenges, and further progress in their control and navigation, possibly using data-driven, learning-based methods, is much needed. The proposed sensor solutions for characterising surfaces in the rough environment of nuclear plants make use of optical fibres, as these are more resistant to radiation and have a longer lifespan than their electronic counterparts. However, the accurate and reliable measurement of surface properties over long periods needs to be further improved. Grasping and manipulation techniques have advanced considerably, and the robust handling of a range of objects has been demonstrated in realistic environments. The next step, conducting intelligent manipulation in actual nuclear environments, needs to be taken to ultimately demonstrate the feasibility of the proposed approach. Retaining a human element in the loop of any such operation remains crucial, as we do not currently have machines operating at high enough automation levels to independently carry out complex tasks in complex environments. We therefore remain reliant on human-machine interfaces, and to this end ARQ has made notable progress on haptic and virtual/augmented reality interfaces. However, creating a completely immersive 'feel' and a maximally intuitive interactive system for the user remains a challenge.