Article

OMNIVIL—An Autonomous Mobile Manipulator for Flexible Production

1
IaAM Institute, Faculty of Mechanical Engineering and Mechatronics, University of Applied Sciences Aachen, 52074 Aachen, Germany
2
Faculty of Engineering and Built Environment, Tshwane University of Technology, Pretoria 0001, South Africa
*
Authors to whom correspondence should be addressed.
Sensors 2020, 20(24), 7249; https://doi.org/10.3390/s20247249
Submission received: 10 November 2020 / Revised: 12 December 2020 / Accepted: 15 December 2020 / Published: 17 December 2020
(This article belongs to the Special Issue Sensor Networks Applications in Robotics and Mobile Systems)

Abstract

Flexible production is a key element in modern industrial manufacturing. Autonomous mobile manipulators can be used to execute various tasks, from logistics to pick-and-place and handling, and can thereby increase the flexibility of existing production environments. However, the application of robotic systems is challenging due to their complexity and due to safety concerns. This paper addresses the design and implementation of the autonomous mobile manipulator OMNIVIL. A holonomic kinematic design provides high maneuverability, and the implemented sensor setup with the underlying localization strategies is robust against typical static and dynamic uncertainties in industrial environments. For safe and efficient human–robot collaboration (HRC), a novel workspace monitoring system (WMS) is developed to detect human co-workers and other objects in the workspace. The multilayer sensor setup and the parallel data analyzing capability provide superior accuracy and reliability. Based on the workspace monitoring system, an intuitive zone-based navigation concept is implemented, with predefined preventive behaviors for a conflict-free interaction with human co-workers. For adaptive manipulation, a workspace analyzing tool is implemented, which significantly simplifies the determination of suitable platform positions for a manipulation task.

1. Introduction

In the last decades, automated manufacturing processes were designed for large, constant production batches and well-defined product types. This kind of production is typically supported by static industrial robots, which offer high repeatability for periodic tasks. In recent years, industrial automation has changed. The strong interest in customized products generates the need for flexible automation [1]. This individual production of user-configurable goods results in numerous product variations and requires rapid responses to customer requirements through short-cycle product substitutions [2]. Typical advantages of automated assembly lines, such as high throughput rates and high repeatability, which result in product cost reduction, are diminished by the adjustments the production process then requires. Manual production would address these novel needs in terms of flexibility and agility [3]. However, a completely manual production is out of the question in most cases, due to the lack of repeatability, the low cost-efficiency and the complex design of today’s goods. The greatest challenge of today’s manufacturing industry is therefore to improve the flexibility and agility of automated manufacturing processes.
One approach is the integration of "mobile industrial manipulators" into the production process. The term mobile industrial manipulator describes an industrial robot arm (manipulator) mounted on an autonomous mobile robot (AMR). A mobile industrial manipulator offers more flexibility and agility regarding its workspace, which is extended by the AMR’s mobility. Common industrial navigation concepts are based on line-following procedures [4] or passive [5] and active [6] landmark detection. These methods are proven in industrial environments, but not as flexible as desired. In research, on the other hand, advanced concepts of mobile robotics, such as perception and navigation, are used to improve the mobility for autonomous applications. These "autonomous industrial mobile manipulators" (AIMM) [7] are capable of autonomous navigation, even in dynamic environments. Furthermore, the perceived data can be used to realize a shared human–robot workspace.
One of the first AIMMs was MORO [8], introduced in 1984. MORO is capable of navigating on the shop floor and executing pick-and-place tasks. Starting from this pioneering work, numerous further developments were carried out in the related research fields. Figure 1 shows an overview of related research projects conducted over the last decade. In addition, it includes the basic field of application and the mobile industrial manipulator used.
In the research project TAPAS [9], the AIMM Little Helper [10] was developed. Little Helper shows the potential of the technological concept for logistic and assistive tasks in industrial environments. Therefore, scenarios of industrial applications were reproduced in experimental environments for multiple part feeding [11] and multirobot assembly [12]. A comparable concept was followed by the project AMADEUS [13]. In an industrial example scenario, the developed AIMM Amadeus successfully refilled production parts at a manual workbench. The project ISABEL [14] addressed transport and handling tasks in semiconductor production and life science laboratories. The research focus was set on perception and motion planning using GPU-based voxel collision detection [15,16]. Kitting and bin-picking scenarios were studied in the research project STAMINA [17]. The developed skill-based system SkiROS [18] is integrated in a manufacturing execution system (MES). In contrast to classical approaches, the world model, which represents the environment of the AIMM, adapts itself to changes based on sensor data. As a result, the AIMM is able to operate in an unknown environment and to manipulate unknown objects without prior knowledge [19]. The research projects VALERI [20] and CARLoS [21] addressed large-space manufacturing applications. The developed AIMM VALERI is specifically designed for collaboration between the human co-worker and the AIMM [22,23]. Visual workspace monitoring and a tactile skin ensure a safe interaction [24]. In the project CARLoS, an AIMM was developed for automatic stud welding inside a ship hull. It features an intuitive human–robot interaction (HRI) by implementing a cursor-based interface [25]. A new concept of a dreaming robot was introduced in the project RobDream [26]. The basic idea is to improve the capabilities of an AIMM by analyzing data during inactive phases. In an industrial example scenario, the AIMM was used for production part delivery services [27]. The research project ColRobot [28] continues some aspects of the VALERI project. It includes two industrial case studies: trim part assembly in a van and the preparation and delivery of assembly kits for satellite assembly. The research contribution covers different fields such as object recognition [29], Cartesian positioning of manipulators [30] and precise hand-guiding of the end-effector [31]. In the research project THOMAS [32], an AIMM [33] is a core element in creating a dynamically re-configurable shop floor [34]. The AIMM should be able to perceive its environment and to cooperate with other robots and human co-workers [35]. A large-scale inspection task is carried out in the project FiberRadar [36]. The development addresses the monitoring of the wind turbine blade production process.
In this paper, we present OMNIVIL, an autonomous mobile manipulator. We provide a holistic overview from the design to the implementation of OMNIVIL in a model factory. The first step includes the novel design and construction of the mobile platform based on an omnidirectional kinematic design and standard industrial components. A collaborative manipulator is mounted on top of the platform. The control system is split into lower-level and higher-level tasks, whereby the implemented interface uses the standard Open Platform Communications Unified Architecture (OPC UA) [37]. Real-time-critical tasks, for example the execution of motion commands or the monitoring of the hard real-time safety components, are executed and managed on a programmable logic controller (PLC) with real-time capabilities. Tasks with a high computational load, such as autonomous navigation, workspace monitoring and 6D motion planning, are executed on the high-level system, which is an industrial PC.
The following strategies are implemented towards safe and efficient human–robot collaboration (HRC): (1) a novel workspace monitoring concept is presented to address the safety issue when implementing an AIMM, using RGB and thermal images as well as Lidar data; (2) the multilayer sensor setup is improved by the implementation of redundant algorithms for human co-worker detection based on neural networks. The resulting confidence intervals in 3D space are fused by applying the logarithmic opinion pool; (3) the presented navigation and manipulation concepts are developed to simplify the implementation of mobile manipulators in general. The developed zone-based navigation concept can easily be adapted to different production layouts and provides the functionality to switch between different motion behaviors; (4) dynamic zones, based on the 2D position of human co-workers provided by the workspace monitoring system, enable a controlled HRC; and (5) a visualization of the reachability and manipulability of the manipulator simplifies the process of identifying feasible handling spots for a workstation. The visual-servoing approach uses landmarks, detected by an RGB camera.
The rest of the paper is structured as follows: Section 2 describes the design of the mobile manipulator and the implemented sensor concept. Section 3 presents the control system, including the novel workspace monitoring system and the navigation concept. In Section 4, two experiments are demonstrated to evaluate and discuss the navigation capability of the mobile platform and the performance of the workspace monitoring system. Section 5 concludes the paper.

Safety and Complexity in the Domain of AIMMs

The discussed research projects demonstrate the general application potential of AIMMs. A detailed study [38] analyzed 566 manual manufacturing tasks in five different factory types; approximately 71% of the tasks were determined to be solvable using an AIMM. However, the technology has not yet found its way into industry [39]. The main reasons are safety concerns and the high complexity of an AIMM.
In an industrial context, the concept of HRC can lead to more flexibility and cost reduction, especially when the production layout changes often [40]. The level of collaboration can be classified into four categories: coexistence, interaction, cooperation and true collaboration [41]. Except for the basic level of coexistence, every level includes a shared human–robot workspace. Therefore, safety regulations are the main technical issue in enabling HRC [42]. The use of power- and force-limited lightweight manipulators, often called collaborative manipulators, has a high potential to widely enable the integration of AIMMs in the manufacturing industry. In addition, new approaches combine hard real-time safety components (hard-safety), like verified industrial-grade sensors [22], with soft real-time safety components (soft-safety) such as tactile sensors [23,24] or vision-based object detection [43]. However, preventing physical harm is only the first step towards a stress-free and intuitive collaboration [44]. The cognitive or mental workload characterizes how beneficial a collaboration between a human and a robot is [45]. The situational adaptation of the robot’s behavior reduces the mental workload of the human co-worker and simultaneously increases the productivity of the AIMM. Therefore, the perception of the environment in a human-like manner is a key requirement. Classical approaches, which rely on safety zones or pure obstacle detection, do not provide sufficiently meaningful information.
Following [46], an AIMM can be defined as a complex system, as it provides all of the following characteristics:
  • It contains a collection of many interacting objects (software-modules and hardware-components).
  • It makes use of memory or feedbacks.
  • It can adapt its strategies based on stored data or feedback.
  • It is influenced by its environment.
  • It can adapt to its environment.
  • The relations between the objects and between the system and the environment are non-linear and non-trivial.
Following this definition, the complexity of AIMMs has increased significantly over the last decade. The aim of this development is to enable AIMMs to fulfill their tasks without human supervision or instructions. As a result, AIMMs have proven that they are able to autonomously perform various tasks, from logistics to pick-and-place and complex handling. However, the implementation of AIMMs in production processes still requires human instructors or operators. Typical examples are the definition of navigation routes or park positions at a workstation as well as the teaching of manipulator motions. Reducing the complexity in this subarea would make the technology more feasible for widespread industrial use. In science, many approaches exist to measure the level of complexity. One approach is to measure the time and resources needed to execute a task [47]. This concept can also be applied to the implementation process of AIMMs. In the industrial context, the required time and the knowledge base needed by the human instructor are comparable quantities.

2. Robot Design

2.1. Mobile Platform

The positioning accuracy and the maneuverability are the major concerns of the mobile platform development. A holonomic kinematic design is a special case of an omnidirectional one, which enables the platform to move in all directions without changing its orientation prior to moving. Therefore, holonomic kinematics are not subject to any kinematic restrictions, which means all movement commands can be executed instantly.
The designed mobile platform is based on four Mecanum wheels [48], providing holonomic kinematic behavior. Mecanum wheels are susceptible to ground unevenness, which causes slip and other unexpected motion behaviors. A dedicated wheel suspension must therefore ensure continuous ground contact for each individual wheel. A rigid suspension of four wheels is a statically indeterminate system, in which continuous ground contact is not guaranteed. A vertical spring-loaded suspension for each individual wheel solves this problem [49]. However, such a suspension leads to inaccurate positioning of the manipulator during motion. Usually, the floor in industrial production environments is almost flat. Under this assumption, a pivoting axle for two of the four wheels results in a statically determinate system. The displacement of the wheel support points during a pendulum movement is minimized by centering the pivot point of the pendulum axle on the radial axis of the wheels.
Figure 2 shows the developed drive units and the chassis made from aluminum profiles. The drive units use a fixed/floating bearing for each wheel. Metal bellow couplings secure the servo drives against unexpected forces during motion. The pivoting axle is supported by rubber buffers to avoid direct contact between the pivoting drive unit and the chassis in extreme situations.
The general technical specifications of the developed mobile platform are listed in Table 1.
The collaborative manipulator UR5 is mounted on top of the mobile platform as shown in Figure 3. The manipulator is equipped with an adaptive gripper 2F-85 from ROBOTIQ. The adjustable gripping force and the detection of a successful grasp process makes this gripper suitable for collaborative robots.

2.2. Sensor Concept

The mobile manipulator is equipped with the following sensors to perceive its environment, as shown in Figure 3:
  • two 2D Lidar (Sick TIM-S),
  • one 3D Lidar (ouster OS0-32),
  • one RGB-D camera (Intel-RealSense 435),
  • three RGB cameras (ELP USBFHD04H-L170),
  • one monochrome camera (UI-3251LE),
  • six thermal cameras (Flir Lepton),
  • one inertial measurement unit (IMU) (Xsens MTi-10) and
  • four encoders.
The RGB and thermal cameras as well as the 3D Lidar are part of a multilayer workspace monitoring system that detects human co-workers and other potential collision objects. The cameras are mounted in a hexagonal arrangement. Each thermal camera provides a horizontal field of view (FOV) of 56°, and each RGB camera provides a 170° FOV. The 2D Lidars, the IMU, the encoders and the monochrome camera are used for the autonomous navigation of the platform. In addition, the 2D Lidars are part of the lower-level emergency stop circuit. The data provided by the attached RGB-D camera is used to calculate grasp positions and to avoid collisions during the motion of the manipulator.

3. Control System

3.1. Components and Connections

The mobile platform is equipped with a PLC. The B&R PLC-X20 is connected via Powerlink to two APOCOSmicro servo drives. Each servo drive controls two servo motors. The battery module consists of a lithium iron phosphate (LiFePo4) accumulator and an integrated battery management system (BMS). It provides a nominal capacity of 50 Ah at a nominal voltage level of 48 V.
Figure 4 shows a schematic overview of the components and the interior connections of the mobile manipulator. Both lower-level controllers and servo drives are connected to an emergency stop circuit to provide functional safety. An emergency stop can be triggered by an emergency button located at the mobile platform, through a Tyro radio emergency button or by the 2D safety Lidars.
The higher-level controller is an embedded industrial computer (Vecow GPC-1000). The embedded computer is connected to the PLC and the UR5 control unit via a gigabit ethernet router. The router can be integrated in any production environment specific Wi-Fi infrastructure as an access point.
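The higher-level software communicates with the PLC over the OPC UA interface mentioned in Section 1. The following minimal sketch, based on the open-source python-opcua client library, illustrates this kind of exchange; the endpoint URL and node identifiers are placeholders and do not reflect OMNIVIL's actual PLC address space.

```python
# Hypothetical sketch: a higher-level task writes a velocity setpoint to the PLC
# and reads back a status flag via OPC UA (python-opcua; node IDs are placeholders).
from opcua import Client, ua

PLC_ENDPOINT = "opc.tcp://192.168.0.10:4840"  # placeholder PLC address

client = Client(PLC_ENDPOINT)
client.connect()
try:
    # Placeholder node IDs; the real address space depends on the PLC project.
    vx_node = client.get_node("ns=2;s=Platform.Cmd.Vx")
    estop_node = client.get_node("ns=2;s=Safety.EStopActive")

    if not estop_node.get_value():
        # Command 0.2 m/s forward motion; the PLC handles the real-time execution.
        vx_node.set_value(ua.DataValue(ua.Variant(0.2, ua.VariantType.Double)))
finally:
    client.disconnect()
```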

3.2. Software Architecture

The architecture of the control software is also divided into lower- and higher-level tasks. The higher-level software modules are based on the Robot Operating System (ROS) [50]. The developed modules provide the following functionalities:
  • three-dimensional 360° workspace monitoring,
  • autonomous navigation in unstructured dynamic environments,
  • 6 Degree of Freedom (DoF) adaptive manipulation and
  • task management in form of a state machine.
The lower-level control executes real-time-critical tasks and commands provided by the higher-level control software. The platform controller makes use of a standard kinematic model as defined in [51]. Figure 5 shows a schematic overview of the main higher-level software modules and their interaction with the lower level.
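As an illustration of such a kinematic model, the sketch below implements the commonly used inverse kinematics of a four-Mecanum-wheel platform, mapping a commanded body twist to wheel angular velocities; the wheel radius, geometry parameters and sign conventions are illustrative assumptions, not OMNIVIL's actual values.

```python
import numpy as np

# Placeholder geometry (not OMNIVIL's actual dimensions).
R = 0.075   # wheel radius [m]
LX = 0.30   # half of the wheelbase  [m]
LY = 0.25   # half of the track width [m]

def mecanum_inverse_kinematics(vx, vy, wz):
    """Map a body twist (vx, vy [m/s], wz [rad/s]) to the four wheel
    angular velocities [rad/s], ordered front-left, front-right,
    rear-left, rear-right (standard Mecanum model; sign conventions vary)."""
    k = LX + LY
    J = (1.0 / R) * np.array([
        [1.0, -1.0, -k],   # front-left
        [1.0,  1.0,  k],   # front-right
        [1.0,  1.0, -k],   # rear-left
        [1.0, -1.0,  k],   # rear-right
    ])
    return J @ np.array([vx, vy, wz])

# Example: pure sideways motion at 0.2 m/s.
print(mecanum_inverse_kinematics(0.0, 0.2, 0.0))
```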
The integrated state machine manages the global task execution, including navigation and manipulation tasks. The multilayer workspace monitoring system determines the positions of human co-workers in the form of a two-dimensional heatmap H, which is used for autonomous navigation. In addition, it provides a point cloud C to the 6 DoF manipulation module for collision avoidance based on pure obstacle detection.

3.3. Workspace Monitoring System

The mobile manipulator shares its workspace with human co-workers. Therefore, the lower-level hard-safety concept is extended by an additional higher-level workspace monitoring system (WMS). The WMS detects human co-workers and estimates their 2D positions in the robot coordinate system R. This approach enables the robot to react preventively to humans in its workspace, e.g., to reduce its velocity, plan an alternative path or even stop before the hard-safety is triggered. The intended concept enables a more intuitive cooperation between the human co-worker and the mobile manipulator.
The WMS covers a 360° horizontal FOV. Its reliability is increased by a multilayer sensor configuration; the sensors are therefore mounted in a hexagonal arrangement (see Figure 6). The integrated 3D Lidar provides an ultra-wide 90° vertical FOV and a vertical resolution of 32 channels, which makes it suitable for 3D obstacle detection even at close distance to the mobile manipulator.
The process of determining the positions of human co-workers is divided into three steps:
  • Step 1: Parallel detection and segmentation of human co-workers in RGB and thermal images.
  • Step 2: Determining the corresponding 2D heatmaps in the robot coordinate system R .
  • Step 3: Fusing of the resulting position information based on the classification confidence levels.
Object detection in image data has recently made remarkable progress through techniques from the field of machine learning. Deep neural networks are robust against a high variance in the object appearance compared to classical image processing methods. Corresponding models can detect or even segment objects under different environmental conditions, which makes them feasible for a human co-worker detection.
The detection process is performed redundantly. The RGB images RGB_i (i = 1, 2, 3) and the thermal images T_j (j = 1, ..., 6) are analyzed by object detection and segmentation algorithms in parallel. The YOLOv3 [52] model is employed for the object detection task; it provides a bounding box and a corresponding confidence level α̂_B for each object. For analyzing the RGB image data, we used public weights trained on the COCO dataset [53]. For the thermal images, the model is trained with the Flir ADAS dataset [54]. Afterwards, we fine-tuned the model using 1000 additional thermal images captured with the Flir Lepton. The images were taken at three different locations: an exhibition, an office and a terrace. We annotated the images at pixel level.
The object segmentation is based on [55], which provides the confidence level α̂_xy at pixel level. For analyzing the RGB image data, we used public weights available in the literature [56]. For the thermal images, we trained the model with the dataset presented in [57] and performed a fine-tuning step with the self-annotated dataset. Figure 7 shows one exemplary detection set of the WMS at confidence levels of α̂_B > 0.2 and α̂_xy > 0.2. The detected objects are represented by bounding boxes and the semantic segmentation is represented by blue blend masks.
The two sensor data types and the two detection methods result in four detectors, labelled as experts E, as follows:
  • E_Y^RGB: Bounding-box-based detection of the images RGB_i.
  • E_Y^Thermal: Bounding-box-based detection of the images T_j.
  • E_B^RGB: Segmentation-based detection of the images RGB_i.
  • E_B^Thermal: Segmentation-based detection of the images T_j.
Each expert computes 2D image masks reflecting the detection and segmentation results. The image masks contain the classification results at pixel level, represented by the confidence levels α̂_B for the experts E_Y^RGB and E_Y^Thermal and by the confidence levels α̂_xy for the experts E_B^RGB and E_B^Thermal. Figure 8 shows the image masks of the images RGB_3 and T_5 of the detection set demonstrated in Figure 7. The confidence levels are normalized to a range of 0–255 in the gray-scale masks for visualization purposes.
In the next step, each expert projects the 3D points of the point cloud C into the image plane of its image masks. An a priori calibration provides the underlying projection matrix; the approach is presented in [58]. Each expert extracts a subset of 3D points S ⊂ C based on the pixel coordinates of its image mask and the related lidar–camera projection matrix. Each 3D point is assigned the confidence level α̂_B or α̂_xy, depending on the expert type. The resulting subset S is clustered using Euclidean point-to-point distances to remove outliers. Figure 9 shows the final subsets S_H ⊂ S for each individual expert of the detection set demonstrated in Figure 7. Each 3D point is colored according to the related confidence level: red indicates a confidence level of 1 and blue a confidence level of 0.
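The projection and confidence assignment step can be sketched as follows with NumPy; the function and variable names are illustrative, and the 3×4 projection matrix is assumed to come from the lidar–camera calibration described in [58].

```python
import numpy as np

def assign_confidences(points_xyz, conf_mask, P):
    """Project lidar points into an expert's image mask and attach the
    per-pixel confidence level to every point that falls inside the image.

    points_xyz : (N, 3) lidar points
    conf_mask  : (H, W) confidence image mask of one expert, values in [0, 1]
    P          : (3, 4) lidar-to-image projection matrix from calibration
    Returns an (M, 4) array [x, y, z, confidence] for the projected subset.
    """
    h, w = conf_mask.shape
    pts_h = np.hstack([points_xyz, np.ones((len(points_xyz), 1))])  # homogeneous
    uvw = (P @ pts_h.T).T                                           # (N, 3)
    in_front = uvw[:, 2] > 0.0                     # keep points in front of the camera
    uv = uvw[in_front, :2] / uvw[in_front, 2:3]    # perspective divide
    u, v = uv[:, 0].astype(int), uv[:, 1].astype(int)
    inside = (u >= 0) & (u < w) & (v >= 0) & (v < h)
    pts = points_xyz[in_front][inside]
    conf = conf_mask[v[inside], u[inside]]
    return np.hstack([pts, conf[:, None]])
```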
The 3D points of the final subset S_H ⊂ S are projected onto a 2D grid with a size of 12 m by 12 m and a cell size of 0.2 m by 0.2 m. The origin is located in the center of the grid, corresponding to the robot coordinate system R. Each expert applies an "argmax" function over all points associated with a cell to calculate the confidence level β̂ of that cell. An additional fifth expert, E_Fusion, performs the confidence fusion. The fused confidence level γ̂ is calculated for each cell by applying the logarithmic opinion pool [59], as shown in Equation (1).
\hat{\gamma} = \frac{\prod_{i=1}^{N} \hat{\beta}_i^{1/N}}{\prod_{i=1}^{N} \hat{\beta}_i^{1/N} + \prod_{i=1}^{N} \left(1 - \hat{\beta}_i\right)^{1/N}}
The different confidence levels β̂_i are weighted equally, with N = 4 representing the four experts. Equation (1) results in γ̂ = 1 if any of the confidence levels β̂_i is equal to 1; likewise, if any confidence level is equal to 0, then γ̂ = 0. This "veto" behavior [60] is avoided by replacing β̂ = 0 with β̂ = 0.01 and β̂ = 1.0 with β̂ = 0.99, respectively [61]. Figure 10 shows the different heatmaps provided by the individual experts and the related ground truth.
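A minimal sketch of this fusion step, assuming the four expert heatmaps are already aligned on the same 12 m × 12 m grid (array names and grid size handling are illustrative):

```python
import numpy as np

def fuse_heatmaps(expert_maps, eps=0.01):
    """Fuse N expert confidence heatmaps (each shaped (rows, cols), values in
    [0, 1]) with an equally weighted logarithmic opinion pool, Equation (1).
    Confidences are clipped to [eps, 1 - eps] to avoid the "veto" behavior."""
    beta = np.clip(np.stack(expert_maps), eps, 1.0 - eps)    # (N, rows, cols)
    n = beta.shape[0]
    pos = np.prod(beta ** (1.0 / n), axis=0)                 # product of beta_i^(1/N)
    neg = np.prod((1.0 - beta) ** (1.0 / n), axis=0)         # product of (1-beta_i)^(1/N)
    return pos / (pos + neg)

# Example with four random 60x60 expert heatmaps (12 m / 0.2 m = 60 cells).
experts = [np.random.rand(60, 60) for _ in range(4)]
gamma = fuse_heatmaps(experts)
```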

3.4. Autonomous Navigation

The developed navigation concept is inspired by the zone management within an industrial production environment. The approach is based on virtual navigation zones, which can be defined using 2D polygons. The intuitive concept reduces the complexity of implementing AIMMs in industrial production environments. The virtual zones can be easily adapted to changes in the production layout and do not require any infrastructural modifications. The approach is scalable to numerous zones, including static and dynamic zone types. The zones are used to switch between predefined behaviors of the mobile platform with different settings, such as maximum speed and acceleration, the underlying kinematic model, warning indicators (visual and acoustic), the minimum distance to obstacles and the goal tolerance. It is also possible to switch between different path planners.
Figure 11a, for instance, shows an example navigation zone setup in the model factory. The green zones are preferred transport zones. The mobile robot may enter the yellow zones if no path through the green zones is available; entering a yellow zone results in a reduced maximum velocity and a visual warning. The blue zones are goal zones related to individual workstations; in the blue zones, the maximum speed is reduced significantly and the goal tolerance is decreased to achieve a high positioning accuracy. The red zones are forbidden zones, which in the exemplary scenario lie in front of the manual workbenches. All four zone types are static. In contrast, the orange zone is a dynamic zone representing a human co-worker. The creation of the orange zones is based on the heatmap H, representing the 2D positions of human co-workers detected by the WMS. Entering an orange zone results in a decrease of the velocity, a warning and an increase of the minimum distance to obstacles. The path planning task belonging to the virtual zones is implemented by a layered costmap configuration [62]. The ordering of the layers allows modulating the costs by overwriting them only when and where required. Figure 11b shows the layers of the global costmap, with the data being evaluated from bottom to top.
The virtual zone concept relies on accurate localization, obtained by extended Kalman filtering [63,64]. The localization is split into local and global coordinate systems. The local localization fuses the linear velocities ẋ_O, ẏ_O and the angular velocity θ̇_O provided by the platform odometry with the angular velocity θ̇_I provided by the IMU. The global localization is based on the ROS module 2D Cartographer [65], which uses a "Ceres"-based [66] scan matcher. For this purpose, the laser scan data provided by the two 2D Lidars are merged to cover a 360° FOV around the robot origin. The global localization is based on a previously generated grid map of the production environment and provides a global pose P_W = (x_W, y_W, θ_W)^T with reference to the static world coordinate system W.
An intermediate path planner enables the adaptation of existing path planning algorithms, e.g., A* [67] or Dijkstra [68], to the proposed navigation zone concept. The Radish planner splits the global path from the start to the goal pose into smaller sub-paths, according to the individual zones. It aims to provide intermediate waypoints so that the robot footprint stays in preferred zones for as much of the trajectory as possible. The zones are ordered according to the fixed costs of the individual costmap. For the zone setup of the model factory in Figure 11, two intermediate waypoints are necessary, both located in the preferred green transport zone. The first intermediate waypoint is related to the robot start pose P_s = (x_s, y_s, θ_s)^T, the second to the desired goal pose P_g = (x_g, y_g, θ_g)^T. An intermediate waypoint is considered valid if at least 80% of the robot’s footprint is in the green zone. The Radish planner aims to find the closest valid waypoint based on an initial pose P_i = (x_i, y_i, θ_i)^T. Starting from the pose P_i, a circular search is performed with an angular sampling rate Δθ_1, according to Equations (2) and (3). The sampling rate Δθ_1 determines the number n of positions (x_{i+n}, y_{i+n})^T along a circle with radius r.
x_{i+n} = x_i + r \cos\left(\Delta\theta_1 \, n\right)
y_{i+n} = y_i + r \sin\left(\Delta\theta_1 \, n\right)
The robot footprint is non-circular; therefore, the initial orientation angle θ_i is varied at each position with a sampling rate of Δθ_2, according to Equation (4). The sampling rate Δθ_2 determines the number m of intermediate waypoints (x_{i+n}, y_{i+n}, θ_{i+m})^T at each position (x_{i+n}, y_{i+n})^T.
\theta_{i+m} = \theta_i + \Delta\theta_2 \, m
If none of the calculated intermediate waypoints is feasible, the search radius r is increased and the process is repeated. The procedure is cancelled if the radius r exceeds a defined threshold; in that case, no intermediate waypoint related to the pose P_i is used for the path planning task. This procedure is executed for both the start and the goal pose of the robot.
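The candidate search of Equations (2)–(4) can be sketched as follows; the 80% footprint test against the costmap is abstracted into a user-supplied callback, and all parameters are illustrative defaults rather than the values used on OMNIVIL.

```python
import math

def find_intermediate_waypoint(x_i, y_i, theta_i, is_valid,
                               r0=0.5, dr=0.25, r_max=3.0,
                               d_theta1=math.radians(15), d_theta2=math.radians(30)):
    """Circular search around an initial pose (Equations (2)-(4)).

    is_valid(x, y, theta) should return True if at least 80% of the robot
    footprint at that pose lies in the preferred zone (placeholder callback).
    Returns the first valid waypoint found at the smallest feasible radius,
    or None once r exceeds r_max.
    """
    r = r0
    while r <= r_max:
        for n in range(int(2 * math.pi / d_theta1)):
            x = x_i + r * math.cos(d_theta1 * n)          # Equation (2)
            y = y_i + r * math.sin(d_theta1 * n)          # Equation (3)
            for m in range(int(2 * math.pi / d_theta2)):
                theta = theta_i + d_theta2 * m            # Equation (4)
                if is_valid(x, y, theta):
                    return (x, y, theta)
        r += dr                                           # enlarge the search circle
    return None
```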

3.5. Adaptive Manipulation

The 6D motion planning for the industrial manipulator is performed using the ROS module MoveIt [69], which provides sensor data integration for workspace monitoring and monitors the state of the manipulator via its joint positions. In addition, it features different state-of-the-art motion planners including the Open Motion Planning Library (OMPL [70]).
Industrial manipulators provide a high repeatability, which is important for repetitive motion execution. However, repetitive motion execution requires a deterministic environment and a well-defined task description. In contrast to a static manipulator, a mobile manipulator is a dynamic system: the positioning of the mobile manipulator varies at each process step due to small navigation errors or ground unevenness. An adaptive manipulation must compensate for these small positioning errors of the mobile platform each time to provide a stable task execution.
Therefore, a visual servoing approach based on Augmented Reality (AR) markers is used to adapt the 6D motion planning. The AR markers are mounted at the different workstations, and the 6D grasp poses are known in the coordinate systems A of the AR markers. The 6D pose of an AR marker, P_ar = (x_ar, y_ar, z_ar, roll_ar, pitch_ar, yaw_ar)^T, in the robot coordinate system R is calculated based on image data provided by the RGB-D camera using the open-source AR-marker tracking library ALVAR [70]. Figure 12 shows the detection of the AR marker and the grasping task performed by the mobile manipulator. For the initial approach, OMPL’s implementation of RRT-Connect [71] has proven to be reliable. MoveIt’s Cartesian path planning capability is used to calculate the final linear approach and escape paths.
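Conceptually, the pose adaptation is a chain of homogeneous transforms: a grasp pose taught once in the marker frame A is re-expressed in the robot frame R using the currently detected marker pose P_ar. The following minimal sketch illustrates this with NumPy and SciPy; the numerical values and names are illustrative only, not OMNIVIL's actual interfaces.

```python
import numpy as np
from scipy.spatial.transform import Rotation as R

def pose_to_matrix(xyz, rpy):
    """Build a 4x4 homogeneous transform from a position and roll/pitch/yaw."""
    T = np.eye(4)
    T[:3, :3] = R.from_euler("xyz", rpy).as_matrix()
    T[:3, 3] = xyz
    return T

# Grasp pose taught once in the marker frame A (illustrative values).
T_A_grasp = pose_to_matrix([0.10, 0.00, 0.05], [0.0, np.pi, 0.0])

# Marker pose P_ar currently detected in the robot frame R (e.g., from ALVAR).
T_R_A = pose_to_matrix([0.60, -0.15, 0.40], [0.0, 0.0, 0.1])

# Grasp pose in the robot frame: this is what would be sent to the motion planner.
T_R_grasp = T_R_A @ T_A_grasp
print(T_R_grasp[:3, 3])   # adapted grasp position in R
```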
The workspace of a manipulator is inhomogeneous in terms of reachability and manipulability. The reachability describes the capability of the end-effector to reach a 6D pose P_ee = (x_ee, y_ee, z_ee, roll_ee, pitch_ee, yaw_ee)^T. The manipulability is the capability to move the end-effector in a specific direction given a 6D pose P_ee. Therefore, a limiting factor for the manipulation process is the positioning of the mobile platform in relation to the desired end-effector pose. The determination of an optimal position of the mobile platform can be time consuming and requires expertise in robotics. A simple adaptation to different tasks is a core functionality of a flexible mobile manipulator in an industrial environment. Therefore, the adaptive manipulation module of the mobile manipulator OMNIVIL includes a workspace analyzing tool to reduce this complexity. The tool is based on the core idea presented in [72]. It provides a visualization for reachability tasks, e.g., pick and place, as well as for manipulability tasks, such as polishing or inspection. For this purpose, the workspace of the manipulator is discretized into a 3D voxel grid, as shown in Figure 13a. We used the algorithm presented in [73] to create the spherically arranged set of 6D end-effector poses for each voxel. Figure 13b shows the equally distributed end-effector poses.
A 6D pose P_ee is reachable if the inverse kinematics of the manipulator is solvable. The reachability index d [72] of an individual voxel is defined in Equation (5),
d = \frac{a}{b} \cdot 100,
where a is the number of reachable end-effector poses and b is the number of all end-effector poses in the voxel.
At each reachable end-effector pose, the manipulability index w [74] is calculated following Equation (6),
w = \sqrt{\det\left(J_{ee} J_{ee}^{T}\right)},
where J_ee is the Jacobian matrix of the end-effector at that robot configuration and J_ee^T its transpose. The calculation of the Jacobian is given in Equation (7),
J_{ee}(q) = \frac{\partial P}{\partial q} = \begin{bmatrix} \frac{\partial P_1}{\partial q_1} & \frac{\partial P_1}{\partial q_2} & \cdots & \frac{\partial P_1}{\partial q_6} \\ \frac{\partial P_2}{\partial q_1} & \frac{\partial P_2}{\partial q_2} & \cdots & \frac{\partial P_2}{\partial q_6} \\ \vdots & \vdots & \ddots & \vdots \\ \frac{\partial P_6}{\partial q_1} & \frac{\partial P_6}{\partial q_2} & \cdots & \frac{\partial P_6}{\partial q_6} \end{bmatrix},
where P is the vector of the end-effector pose and q the vector representing the joint angle configuration. The manipulability index indicates how well a pose P_ee can be adjusted at a given robot configuration. Each voxel is assigned the average manipulability index w̄ of all reachable poses P_ee; voxels with no reachable poses P_ee are assigned a manipulability index of 0.
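A compact sketch of the per-voxel evaluation of Equations (5) and (6); the inverse kinematics and Jacobian routines are placeholders for the actual UR5 kinematics and are passed in as callbacks.

```python
import numpy as np

def evaluate_voxel(poses, solve_ik, jacobian):
    """Evaluate one voxel of the discretized workspace.

    poses    : list of 6D end-effector poses sampled in the voxel
    solve_ik : placeholder callback, returns a joint configuration q or None
    jacobian : placeholder callback, returns the 6x6 Jacobian J_ee(q)
    Returns (reachability index d, mean manipulability index w_bar)."""
    reachable, w_values = 0, []
    for pose in poses:
        q = solve_ik(pose)
        if q is None:
            continue                                        # pose not reachable
        reachable += 1
        J = jacobian(q)
        w_values.append(np.sqrt(np.linalg.det(J @ J.T)))    # Equation (6)
    d = 100.0 * reachable / len(poses)                      # Equation (5)
    w_bar = float(np.mean(w_values)) if w_values else 0.0
    return d, w_bar
```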
An analysis of the workspace based on spherically oriented poses P_ee gives a good overall indication of reachability and manipulability. However, in many industrial applications only subsets of these end-effector poses are of interest. Typical examples are pick-and-place applications, which require a downward-facing end-effector, or inspection applications, which require a forward-facing end-effector. Therefore, the spherical approach is extended by two additional hemispheres: one hemisphere includes end-effector poses oriented to the front and the other includes end-effector poses oriented towards the ground, with respect to the robot coordinate system R. Figure 14 shows a comparison of the three geometrical distributions. The voxel coordinate system V is equally oriented for each voxel.

3.6. Integration in a Model Factory

The developed mobile manipulator OMNIVIL is used in a model factory [75] for experimental purposes. One example product of the model factory is a Lego car (see Figure 12), which is produced according to the customers’ requirements (color and model). The production process includes several workstations: a fully automated robot cell equipped with an industrial delta picker, manual workbenches with AR support and a warehouse system. In an exemplary use case, OMNIVIL transports goods and preassembled parts between the workstations. Figure 15 shows the three workstations, which are approached by OMNIVIL.

4. Experiments and Discussion

4.1. Localization and Positioning Accuracy

We tested the mobile manipulator OMNIVIL in an exemplary logistic use case implemented in the model factory. The localization experiment includes four different scenarios S, reflecting typical variations in industrial environments. The scenario changes in the experiments are located between 0 and 2 m height, comparable to an industrial production scenario:
  • S_static: No changes compared to the prior generated map (see Figure 16a).
  • S_crowded: Minor static changes compared to the prior generated map (see Figure 16b).
  • S_crowded: Many static changes compared to the prior generated map (see Figure 16c).
  • S_dynamic: This scenario is based on S_crowded but includes dynamic changes. Three human co-workers continuously transport boxes and pallets with the help of a manual lift truck (see Figure 16d).
The sensing data was stored during 40 runs in each scenario and postprocessed offline, which creates a comparable database. As depicted in Figure 17a, each run includes a full movement of the mobile manipulator between the three workstations in the model factory, whereby the robot was controlled manually.
Figure 17b shows one performed movement process starting from pose P_1 and moving to poses P_2 and P_3 sequentially, before returning to P_1. The position P_1 is used as the reference position to estimate the localization errors and standard deviations. The final positioning of the platform at position P_1 is secured by mechanical end stops. One complete run has an absolute Euclidean path length of approximately 33 m and includes several rotations, which sum up to an absolute rotation of around 12.5 rad (about two revolutions).
The differences between the scenarios are quantified by the average change of the range values over one full 360° scan of the 2D laser. This factor of change A is calculated following Equation (8),
A = \frac{\sum_{i=0}^{n_l} \frac{\left| M_i - R_i \right|}{R_i}}{n_l},
where M_i is the range value of laser beam i in the current environmental scenario and R_i is the range value of laser beam i in the reference scenario S_static. The constant n_l is the number of valid beams per laser scan. The factor of change A is determined at position P_4 (see Figure 17). Table 2 shows the factor of change for the three static scenarios.
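A short sketch of Equation (8), assuming two equally indexed arrays of valid range values from the merged 360° scan and the absolute relative difference reading of the equation above:

```python
import numpy as np

def factor_of_change(current_ranges, reference_ranges):
    """Equation (8): average relative range change between the current scan M
    and the reference scan R (both as equally indexed 1D arrays of valid beams
    from one 360-degree 2D laser scan)."""
    M = np.asarray(current_ranges, dtype=float)
    R = np.asarray(reference_ranges, dtype=float)
    return float(np.mean(np.abs(M - R) / R))
```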
The localization experiment compares four different localization strategies feasible with the implemented sensor concept. All localization approaches use a prior created map. Such pure re-localization strategies minimize the necessary computing power compared to simultaneous localization and mapping (SLAM) approaches.
The 2D map is created in form of an occupancy grid map using OpenSlam’s gmapping [76], which is based on Rao-Blackwellized particle filters, described in [77,78]. The sparse 3D map is created using the 3D SLAM Algorithm LeGO-LOAM [79]. Figure 18 shows the resulting 3D point cloud and the 2D occupancy grid map.
The localization is divided into global and local localization. The local localization is based on the platform’s odometry and the IMU data. It provides a mean absolute error of 0.0734 m ± 0.023 m in x-direction, 0.253 m ± 0.067 m in y-direction and 0.008 rad ± 0.007 rad for θ. The calculation is given in Equation (9),
MAE_{local} = \frac{\sum_{i=1}^{n-1} \left\| P_i - P_{i+1} \right\|}{n-1},
where n is the total number of 160 runs and P_i (i = 1, ..., n) is the output of the local localization at position P_1 for run i. The relatively small systematic error, in combination with the low standard deviation σ, qualifies the implemented local localization approach to be used in combination with the global localization approaches.
The global localization uses fixed features according to a static global frame. Four different global localization approaches, L_i (i = 1, 2, 3, 4), are compared using 2D or 3D Lidar data:
  • L_1: 2D Adaptive Monte Carlo Localization (AMCL) [80].
  • L_2: 2D Cartographer [65], which uses a Ceres-based [66] scan matcher.
  • L_3: 3D Monte Carlo localization [81], which is based on the Lidar measurement models described in [82] and makes use of an expansion resetting method [83].
  • L_4: 3D HDL localization [84], which iteratively applies normal distributions transform (NDT) scan matching [85].
The conducted experiment compares the global localization approaches in the four different scenarios. The sample size is 40 runs for each scenario. In contrast to the local localization, an estimation of the systematic error is not possible, as it would require a highly precise 6D reference system. However, the standard deviation measured at the reference position P_1 gives a good indication of the performance. Figure 19 shows the experimentally determined results.
Except for L_1, all global localization approaches provide high robustness against static and dynamic changes in the environment, i.e., small standard deviation values. The 3D localization strategies L_3 and L_4 make use of features that are not affected by the static and dynamic changes, because they are located at a higher spatial level. The algorithm used in L_2 shows comparable reliability, proving that a 2D localization can be used in such scenarios as well. The best performance was provided by L_4.
Another important factor besides accuracy is the computational load of the different methods. We performed the evaluation offline using a 12 core Intel i7-8700 @3.2GHz with 32 GB RAM. Table 3 shows a comparison of the computational load and the memory usage for each global localization strategy.
We conducted an additional experiment to evaluate the positioning accuracy of the mobile platform during autonomous navigation. We chose the best-performing localization approach L_4 and compared it with the best 2D localization approach L_2. In addition, we evaluated a marker-based positioning approach: each workstation was equipped with an additional AR marker, which is detected by the forward-facing camera of the mobile platform. The pose of the marker is determined using the open-source AR-marker tracking library ALVAR, and the platform reactively approaches the desired goal pose relative to the marker using a PID controller.
The experimental procedure to evaluate the positioning error is based on a camera–marker setup; the setup of the reference system is similar to [86]. The starting pose for the autonomous navigation task was set to the initial pose P_4. The goal poses are the three workstation poses P_1, P_2 and P_3. Each goal pose was approached 40 times in the scenario S_static. The goal tolerance of the local planner was set to 5 mm in x- and y-direction and 0.005 rad for θ. Table 4 shows the standard deviation σ of the positioning accuracy at the different goal poses for the evaluated positioning strategies.
The marker-less localization strategies L_2 and L_4 proved to be comparable to a marker-based positioning of the mobile manipulator; the approach based on L_4 even provides a superior positioning accuracy compared to the marker-based approach. The conducted experiments show the positioning capabilities of the autonomous mobile manipulator OMNIVIL and prove that robust and precise localization and path execution are possible without any infrastructural modifications in the production environment.

4.2. Human Co-Worker Detection

We performed an experiment to validate the performance of the presented WMS. For this purpose, three different datasets were created in the three scenarios S_static, S_crowded and S_dynamic. In contrast to the localization experiments, up to four human co-workers are present in the scenarios. Each dataset consists of 190 subsegments, which sums up to 570 subsegments in total, including various edge cases. Figure 20 gives an exemplary overview of the datasets.
For each dataset, the ground truth subsegments for the human co-workers were manually annotated. To evaluate the performance of the five experts, the resulting confidence heatmaps are analyzed as follows: let N be a confidence level threshold, C_xy a cell of the expert heatmap and G_xy the corresponding cell of the ground truth heatmap. Then C_xy is interpreted as a positive detection C_xy^+ if its confidence level L > N; otherwise it is not considered. All positive detections are divided into true positive and false positive detections as follows:
C_{xy}^{+} = \begin{cases} \text{true positive}, & G_{x,y} + G_{x+1,y} + G_{x-1,y} + G_{x,y+1} + G_{x,y-1} \geq 1 \\ \text{false positive}, & \text{else} \end{cases}
A positive detection C_xy^+ is scored as a true positive if the corresponding ground truth cell G_xy or at least one of its close neighboring cells is marked as positive (equals one). A similar procedure is used to divide negative detections C_xy^- into false negative and true negative detections: a negative detection C_xy^- is scored as a false negative if the corresponding ground truth cell G_xy or at least one of its close neighboring cells is marked as positive. Figure 21 shows a comparison of the expert performances based on their characteristic precision/recall curves.
The average precision (AP) is calculated for each expert as shown in Equation (11),
AP = \sum_{i=1}^{n} \left(R_i - R_{i-1}\right) P_i,
where n is the number of measurements, R_i is the recall and P_i is the precision value at measurement i.
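The cell-wise scoring of Equation (10) and the AP computation of Equation (11) can be sketched as follows; the threshold sweep, the 4-neighborhood test and all names are illustrative assumptions rather than the exact evaluation code.

```python
import numpy as np

def score_cells(conf, gt, threshold):
    """Count TP/FP/FN on one heatmap pair (Equation (10)): a cell is matched
    if the ground-truth cell itself or one of its 4-neighbors is positive."""
    g = gt.astype(bool)
    near = g.copy()
    near[1:, :] |= g[:-1, :]; near[:-1, :] |= g[1:, :]
    near[:, 1:] |= g[:, :-1]; near[:, :-1] |= g[:, 1:]
    pos = conf > threshold
    tp = np.sum(pos & near)
    fp = np.sum(pos & ~near)
    fn = np.sum(~pos & near)
    return tp, fp, fn

def average_precision(conf_maps, gt_maps, thresholds=np.linspace(0.95, 0.05, 19)):
    """Equation (11): AP as the sum of (R_i - R_{i-1}) * P_i along the
    precision/recall curve obtained by sweeping the confidence threshold."""
    recalls, precisions = [0.0], [1.0]
    for t in thresholds:                     # decreasing threshold -> increasing recall
        tp = fp = fn = 0
        for conf, gt in zip(conf_maps, gt_maps):
            a, b, c = score_cells(conf, gt, t)
            tp, fp, fn = tp + a, fp + b, fn + c
        precisions.append(tp / max(tp + fp, 1))
        recalls.append(tp / max(tp + fn, 1))
    return sum((recalls[i] - recalls[i - 1]) * precisions[i]
               for i in range(1, len(recalls)))
```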
Table 5 shows the AP values in comparison.
Reliability is the basic requirement for a safety module like the presented WMS. Precision and recall are equally important: a low precision results in more false positive detections, obstructing the zone-based navigation, while a lower recall indicates more missed human co-workers, which defeats the desired preventive behavior. The experts based on semantic segmentation provide a higher AP than the experts based on object detection. This is caused by the related image masks: the image masks based on the object detection approach contain rectangular boxes around the detected objects, whereas the image masks based on semantic segmentation reflect the contour of the object.
Apart from the expert E_Fusion, the expert E_B^Thermal achieved the best individual score, with an AP of 0.83 in the dynamic scenario, and shows performance comparable to its RGB counterpart. Both results support the hypothesis that thermal data is well suited to detect human co-workers in cluttered environments.
The AP of the four basic experts fluctuates between 0.62 and 0.83 over the different scenarios, which is not suitable for a safety system. In contrast, the expert E_Fusion shows a fluctuation of only 0.02 over the scenarios, and its performance surpasses the best basic expert by over 10%. With an average precision of 0.94 over all three datasets, the presented WMS provides the reliability and precision to be used as a soft-safety system in a mobile manipulation application.

4.3. Workspace Analyzing

The workspace evaluation of the mobile manipulator OMNIVIL is based on a robot model to identify self-collisions. The reachability is determined by analyzing 100 evenly distributed end-effector poses for each of the three geometric distributions, namely sphere, hemisphere-front and hemisphere-down (see Section 3.5). Figure 22 shows the results of the workspace evaluation. For simplification, we neglected voxels located at negative x-positions in the base coordinate system B of the manipulator (see Figure 23). The scalar values of the manipulability indices were normalized to 0–100% for each distribution.
The reachability values depend on the geometric distribution of the end-effector poses. In case of a forward-facing hemisphere, the number of voxels providing a reachability of 50–75% is increased by 9%, compared to the spherical distribution. The downward-facing hemisphere results in more voxels with a reachability over 75%. Compared to the spherical distribution, the number is increased by 11%. Both results support the hypothesis that it is important to determine a geometrical distribution that matches the actual application.
In contrast to the reachability, the manipulability index does not provide absolute values. However, the manipulability indices give a good indication of which areas in the workspace of the manipulator should be preferred for manipulation tasks. Similar to the reachability, the values of the manipulability depend on the geometric distribution of the end-effector poses.
We developed a visualization tool to demonstrate the reachability and manipulability in the workspace of the manipulator. The tool includes functionalities such as adjustable 3D filters and different transparency levels. Therefore, the proposed visualization provides an intuitive way for the operator to position the platform in a spot, which offers high reachability or manipulability capabilities. Figure 23 shows a vertical visualization of the OMNIVIL robot model and the workspace visualization.

4.4. Comparison with Existing Mobile Manipulators

In the last decade, various mobile manipulators have been developed in research. Most of them are based on a commercially available mobile platform. Nowadays, even complete mobile manipulators are commercially available. Table 6 compares some of the most recent industrial mobile manipulators.
All of the mentioned mobile manipulators are equipped with 2D safety Lidars as a hard-safety component, which is common for autonomous vehicles. Advanced sensor technology and machine learning methods can achieve a high level of soft-safety to enable intuitive HRC. Therefore, the surveyed mobile manipulators are equipped with various sensors for obstacle detection, namely RGB-D cameras, ToF cameras, stereo cameras and light-field cameras. In addition, all mobile manipulators are equipped with a collaborative manipulator to enable true HRC. Another promising approach in this direction is the implementation of artificial skins, which can detect proximity or contact forces as well as the related impact location. In contrast to industrial-grade hard-safety components, these soft-safety concepts are error-prone and difficult to maintain. Redundancy and the combination of different technologies are crucial to overcome these problems [96]. Therefore, OMNIVIL makes use of a multilayer sensor concept and a redundant data analysis. OMNIVIL is the only mobile manipulator among the evaluated systems that classifies the detected obstacles.
The mobile manipulator Chimera is based on the commercial platform MiR100 [97], which provides a navigation based on static zones. OMNIVIL extends this concept by the implementation of dynamic zones representing the positions of human co-workers in the environment. As a result, OMNIVIL is the only system that provides preventive behavior adaptation in the context of HRC.

5. Conclusions

This study presented the development and implementation of the autonomous mobile manipulator OMNIVIL. It provides insights into the related research fields, including autonomous navigation, visual servoing, workspace monitoring and 6D motion planning.
The main goal of the development was to identify the technical issues that discourage the industrial application and acceptance of this technology. Safety concerns were addressed by combining hard real-time safety components with a high-level soft real-time workspace monitoring system. Experiments demonstrated the benefit of the redundant concept, proving the feasibility of human–robot collaboration in industrial use cases. The intuitive navigation zone concept, the adaptive visual-servoing approach and the workspace analyzing tool reduce the complexity of the mobile manipulator. An additional experiment evaluated the localization and positioning capabilities of the mobile platform without the need for infrastructural modifications. The results show a high robustness against static and dynamic changes in the environment and a suitable accuracy for the execution of manipulation tasks.
Further improvements will focus on the workspace monitoring system: the scalable redundant concept will be extended by analyzing the Lidar data as well. Furthermore, the adaptive manipulation will be improved for marker-free scenarios to enable the proposed mobile manipulator OMNIVIL to be deployed in a production environment without any infrastructural needs.

Author Contributions

Conceptualization and methodology, H.E.; supervision, S.D. and S.K.; software, H.E., P.C., and H.D.; validation, H.E., P.C., and H.D.; writing—original draft preparation, H.E.; writing—review and editing, H.E., S.D. and S.K.; project administration, H.E. and S.K. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by European Regional Development Fund. Research project FiberRadar (EFRE-0801493).

Acknowledgments

This project was supported by the Faculty of Engineering and Built Environment, Tshwane University of Technology, Pretoria, South Africa, and the Faculty of Mechanical Engineering and Mechatronics, University of Applied Sciences Aachen, Germany.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Mishra, R.; Pundir, A.K.; Ganapathy, L. Manufacturing flexibility research: A review of literature and agenda for future research. Glob. J. Flex. Syst. Manag. 2014, 15, 101–112. [Google Scholar] [CrossRef]
  2. Asadi, N.; Fundin, A.; Jackson, M. The essential constituents of flexible assembly systems: A case study in the heavy vehicle manufacturing industry. Glob. J. Flex. Syst. Manag. 2015, 16, 235–250. [Google Scholar] [CrossRef]
  3. Pedersen, M.R.; Nalpantidis, L.; Bobick, A.; Krüger, V. On the integration of hardware-abstracted robot skills for use in industrial scenarios. In Proceedings of the International Conference on Robots and Systems, Workshop on Cognitive Robotics and Systems: Replicating Human Actions and Activities, Tokyo, Japan, 3–7 November 2013; IEEE: Piscataway, NJ, USA, 2013. [Google Scholar]
  4. Hasan, K.M.; Al Mamun, A. Implementation of autonomous line follower robot. In Proceedings of the International Conference on Informatics, Electronics & Vision, Dhaka, Bangladesh, 18–19 May 2012; pp. 865–869. [Google Scholar]
  5. Herrero-Pérez, D.; Alcaraz-Jiménez, J.J.; Martínez-Barberá, H. An accurate and robust flexible guidance system for indoor industrial environments. Int. J. Adv. Robot. Syst. 2013, 10, 292–302. [Google Scholar] [CrossRef]
  6. Yoon, S.W.; Park, S.-B.; Kim, J.S. Kalman filter sensor fusion for Mecanum wheeled automated guided vehicle localization. J. Sens. 2015, 2015, 347379. [Google Scholar] [CrossRef] [Green Version]
  7. Hvilshøj, M.; Bøgh, S.; Nielsen, O.S.; Madsen, O. Autonomous industrial mobile manipulation (AIMM): Past, present and future. Ind. Robot Int. J. 2012, 39, 120–135. [Google Scholar] [CrossRef]
  8. Schuler, J. Integration von Förder-und Handhabungseinrichtungen; Springer: Berlin, Germany, 1987. [Google Scholar]
  9. TAPAS. Robotics-Enabled Logistics and Assistive Services for the Transformable Factory of the Future. Available online: https://cordis.europa.eu/project/id/260026 (accessed on 11 December 2020).
  10. Hvilshøj, M.; Bøgh, S. “Little Helper”—An Autonomous Industrial Mobile Manipulator Concept. Int. J. Adv. Robot. Syst. 2011, 8, 80–90. [Google Scholar] [CrossRef] [Green Version]
  11. Hvilshøj, M.; Bøgh, S.; Nielsen, O.S.; Madsen, O. Multiple part feeding–real-world application for mobile manipulators. Assem. Autom. 2012, 32, 62–71. [Google Scholar] [CrossRef] [Green Version]
  12. Bogh, S.; Schou, C.; Ruehr, T.; Kogan, Y.; Doemel, A.; Brucker, M.; Eberst, C.; Tornese, R.; Sprunk, C.; Tipaldi, G.D.; et al. Integration and Assessment of Multiple Mobile Manipulators in a Real-World Industrial Production Facility. In Proceedings of the 41st International Symposium on Robotics, Munich, Germany, 2–3 June 2014; VDE: Berlin, Germany, 2014; pp. 1–8. [Google Scholar]
  13. Halt, L.; Meßmer, F.; Hermann, M.; Wochinger, T.; Naumann, M.; Verl, A. AMADEUS-A robotic multipurpose solution for intralogistics. In Proceedings of the ROBOTIK 2012, 7th German Conference on Robotics, Munich, Germany, 21–22 May 2012; VDE: Berlin, Germany, 2012; pp. 1–6. [Google Scholar]
  14. ISABEL. Innovativer Serviceroboter mit Autonomie und Intuitiver Bedienung für Effiziente Handhabung und Logistik. Available online: http://www.projekt-isabel.de/ (accessed on 11 December 2020).
  15. Hermann, A.; Drews, F.; Bauer, J.; Klemm, S.; Roennau, A.; Dillmann, R. Unified GPU voxel collision detection for mobile manipulation planning. In Proceedings of the International Conference on Intelligent Robots and Systems, Chicago, IL, USA, 14–18 September 2014; pp. 4154–4160. [Google Scholar]
  16. Hermann, A.; Mauch, F.; Fischnaller, K.; Klemm, S.; Roennau, A.; Dillmann, R. Anticipate your surroundings: Predictive collision detection between dynamic obstacles and planned robot trajectories on the GPU. In Proceedings of the European Conference on Mobile Robots, Lincoln, UK, 2–4 September 2015; pp. 1–8. [Google Scholar]
  17. STAMINA. Sustainable and Reliable Robotics for Part Handling in Manufacturing Automation. Available online: https://cordis.europa.eu/project/id/610917 (accessed on 11 December 2020).
  18. Krueger, V.; Chazoule, A.; Crosby, M.; Lasnier, A.; Pedersen, M.R.; Rovida, F.; Nalpantidis, L.; Petrick, R.; Toscano, C.; Veiga, G. A Vertical and Cyber–Physical Integration of Cognitive Robots in Manufacturing. Proc. IEEE 2016, 104, 1114–1127. [Google Scholar] [CrossRef]
  19. Rofalis, N.; Nalpantidis, L.; Andersen, N.A.; Krüger, V. Vision-based robotic system for object agnostic placing operations. In Proceedings of the International Conference on Computer Vision Theory and Applications, Rome, Italy, 27–29 February 2016; pp. 465–473. [Google Scholar]
  20. VALERI: Validation of Advanced, Collaborative Robotics for Industrial Applications. Available online: https://cordis.europa.eu/project/id/314774 (accessed on 11 December 2020).
  21. CARLoS. CooperAtive Robot for Large Spaces Manufacturing. Available online: https://cordis.europa.eu/article/id/165133-a-robot-coworker-inside-shipyards (accessed on 11 December 2020).
  22. Saenz, J.; Vogel, C.; Penzlin, F.; Elkmann, N. Safeguarding Collaborative Mobile Manipulators-Evaluation of the VALERI Workspace Monitoring System. Procedia Manuf. 2017, 11, 47–54. [Google Scholar] [CrossRef]
  23. Fritzsche, M.; Saenz, J.; Penzlin, F. A large scale tactile sensor for safe mobile robot manipulation. In Proceedings of the 11th International Conference on Human-Robot Interaction (HRI), Christchurch, New Zealand, 7–10 March 2016; pp. 427–428. [Google Scholar]
  24. Saenz, J.; Fritzsche, M. Tactile sensors for safety and interaction with the mobile manipulator VALERI. In Proceedings of the ISR 2016: 47th International Symposium on Robotics, Munich, Germany, 21–22 June 2016; VDE: Berlin, Germany, 2016; pp. 1–7. [Google Scholar]
  25. Andersen, R.S.; Bøgh, S.; Moeslund, T.B.; Madsen, O. Intuitive task programming of stud welding robots for ship construction. In Proceedings of the International Conference on Industrial Technology, Seville, Spain, 17–19 March 2015; IEEE: Piscataway, NJ, USA, 2015; pp. 3302–3307. [Google Scholar]
  26. RobDream. Optimising Robot Performance while Dreaming. Available online: https://cordis.europa.eu/project/id/645403 (accessed on 11 December 2020).
  27. Dömel, A.; Kriegel, S.; Kaßecker, M.; Brucker, M.; Bodenmüller, T.; Suppa, M. Toward fully autonomous mobile manipulation for industrial environments. Int. J. Adv. Robot. Syst. 2017, 14, 1–19. [Google Scholar] [CrossRef] [Green Version]
  28. ColRobot. Collaborative Robotics for Assembly and Kitting in Smart Manufacturing. Available online: https://cordis.europa.eu/project/id/688807 (accessed on 11 December 2020).
  29. Costa, C.M.; Sousa, A.; Veiga, G. Pose Invariant Object Recognition Using a Bag of Words Approach. In Proceedings of the 3rd Iberian Robotics Conference, Seville, Spain, 22–24 November 2017; Springer: Berlin/Heidelberg, Germany, 2017; pp. 153–164. [Google Scholar]
  30. Guérin, J.; Gibaru, O.; Thiery, S.; Nyiri, E. Locally optimal control under unknown dynamics with learnt cost function: Application to industrial robot positioning. J. Phys. Conf. Ser. 2017, 783, 12036. [Google Scholar] [CrossRef]
  31. Safeea, M.; Bearee, R.; Neto, P. End-Effector Precise Hand-Guiding for Collaborative Robots. In Proceedings of the 3rd Iberian Robotics Conference, Seville, Spain, 22–24 November 2017; Springer: Berlin/Heidelberg, Germany, 2017; pp. 595–605. [Google Scholar]
  32. THOMAS. Mobile Dual Arm Robotic Workers with Embedded Cognition for Hybrid and Dynamically Reconfigurable Manufacturing Systems. Available online: https://cordis.europa.eu/project/id/723616 (accessed on 11 December 2020).
  33. Outón, J.L.; Villaverde, I.; Herrero, H.; Esnaola, U.; Sierra, B. Innovative Mobile Manipulator Solution for Modern Flexible Manufacturing Processes. Sensors 2019, 19, 5414. [Google Scholar] [CrossRef] [Green Version]
  34. Kousi, N.; Michalos, G.; Aivaliotis, S.; Makris, S. An outlook on future assembly systems introducing robotic mobile dual arm workers. Procedia CIRP 2018, 72, 33–38. [Google Scholar] [CrossRef]
  35. Kousi, N.; Stoubos, C.; Gkournelos, C.; Michalos, G.; Makris, S. Enabling Human Robot Interaction in flexible robotic assembly lines: An Augmented Reality based software suite. Procedia CIRP 2019, 81, 1429–1434. [Google Scholar] [CrossRef]
  36. FiberRadar. Available online: https://www.fh-aachen.de/iaam/autonome-mobile-systeme/fiberradar/#c168389 (accessed on 5 November 2020).
  37. OPC Foundation. Unified Architecture. Available online: https://opcfoundation.org/about/opc-technologies/opc-ua/ (accessed on 23 July 2020).
  38. Bøgh, S.; Hvilshøj, M.; Kristiansen, M.; Madsen, O. Identifying and evaluating suitable tasks for autonomous industrial mobile manipulators (AIMM). Int. J. Adv. Manuf. Technol. 2012, 61, 713–726. [Google Scholar] [CrossRef]
  39. Madsen, O.; Bøgh, S.; Schou, C.; Andersen, R.S.; Damgaard, J.S.; Pedersen, M.R.; Krüger, V. Integration of mobile manipulators in an industrial production. Ind. Robot Int. J. Robot. Res. Appl. 2015, 42, 11–18. [Google Scholar] [CrossRef]
  40. Fechter, M.; Foith-Förster, P.; Pfeiffer, M.S.; Bauernhansl, T. Axiomatic Design Approach for Human-robot Collaboration in Flexibly Linked Assembly Layouts. Procedia CIRP 2016, 50, 629–634. [Google Scholar] [CrossRef]
  41. Wang, L.; Gao, R.; Váncza, J.; Krüger, J.; Wang, X.V.; Makris, S.; Chryssolouris, G. Symbiotic human-robot collaborative assembly. CIRP Ann. 2019, 68, 701–726. [Google Scholar] [CrossRef] [Green Version]
  42. Saenz, J.; Elkmann, N.; Gibaru, O.; Neto, P. Survey of methods for design of collaborative robotics applications-why safety is a barrier to more widespread robotics uptake. In Proceedings of the 4th International Conference on Mechatronics and Robotics Engineering, Valenciennes, France, 7–11 February 2018; pp. 95–101. [Google Scholar]
  43. Bexten, S.; Scholle, J.; Saenz, J.; Walter, C.; Elkmann, N. Validation of workspace monitoring and human detection for soft safety with collaborative mobile manipulator using machine learning techniques in the ColRobot project. In Proceedings of the 50th International Symposium on Robotics, Munich, Germany, 20–21 June 2018; pp. 191–198. [Google Scholar]
  44. Lasota, P.A.; Fong, T.; Shah, J.A. A survey of methods for safe human-robot interaction. Found. Trends Robot. 2017, 5, 261–349. [Google Scholar] [CrossRef]
  45. Sheridan, T.B. Humans and Automation: System Design and Research Issues; John Wiley & Sons: New York, NY, USA, 2002. [Google Scholar]
  46. Johnson, N. Simply Complexity: A Clear Guide to Complexity Theory; Oneworld Publications: London, UK, 2009. [Google Scholar]
  47. Gell-Mann, M. What Is Complexity? Complexity and Industrial Clusters; Springer: Berlin/Heidelberg, Germany, 2002; pp. 13–24. [Google Scholar]
  48. Diegel, O.; Badve, A.; Bright, G.; Potgieter, J.; Tlale, S. Improved mecanum wheel design for omni-directional robots. In Proceedings of the Australasian Conference on Robotics and Automation, Auckland, New Zealand, 27–29 November 2002; pp. 117–121. [Google Scholar]
  49. Qian, J.; Zi, B.; Wang, D.; Ma, Y.; Zhang, D. The Design and Development of an Omni-Directional Mobile Robot Oriented to an Intelligent Manufacturing System. Sensors 2017, 17, 2073. [Google Scholar] [CrossRef] [Green Version]
  50. Quigley, M.; Conley, K.; Gerkey, B.; Faust, J.; Foote, T.; Leibs, J.; Wheeler, R.; Ng, A.Y. ROS: An open-source Robot Operating System. In Proceedings of the Workshops at the IEEE International Conference on Robotics and Automation, Kobe, Japan, 12–17 May 2009; p. 5. [Google Scholar]
  51. Indiveri, G. Swedish wheeled omnidirectional mobile robots: Kinematics analysis and control. IEEE Trans. Robot. 2009, 25, 164–171. [Google Scholar] [CrossRef]
  52. Redmon, J.; Farhadi, A. Yolov3: An incremental improvement. arXiv 2018, arXiv:1804.02767. [Google Scholar]
  53. Lin, T.-Y.; Maire, M.; Belongie, S.; Hays, J.; Perona, P.; Ramanan, D.; Dollár, P.; Zitnick, C.L. Microsoft coco: Common objects in context. In Proceedings of the 13th European Conference on Computer Vision, Zurich, Switzerland, 6–12 September 2014; Springer: Berlin/Heidelberg, Germany, 2014; pp. 740–755. [Google Scholar]
  54. FLIR. ADAS Dataset. Available online: https://www.flir.com/oem/adas/adas-dataset-form/ (accessed on 21 September 2020).
  55. Milioto, A.; Mandtler, L.; Stachniss, C. Fast Instance and Semantic Segmentation Exploiting Local Connectivity, Metric Learning, and One-Shot Detection for Robotics. In Proceedings of the International Conference on Robotics and Automation, Montreal, QC, Canada, 20–24 May 2019; IEEE: Piscataway, NJ, USA, 2019; pp. 5481–5487. [Google Scholar]
  56. Milioto, A.; Stachniss, C. Bonnet: An open-source training and deployment framework for semantic segmentation in robotics using cnns. In Proceedings of the International Conference on Robotics and Automation, Montreal, QC, Canada, 20–24 May 2019; IEEE: Piscataway, NJ, USA, 2019; pp. 7094–7100. [Google Scholar]
  57. St-Charles, P.-L.; Bilodeau, G.-A.; Bergevin, R. Online mutual foreground segmentation for multispectral stereo videos. Int. J. Comput. Vis. 2019, 127, 1044–1062. [Google Scholar] [CrossRef] [Green Version]
  58. Dhall, A.; Chelani, K.; Radhakrishnan, V.; Krishna, K.M. LiDAR-camera calibration using 3D-3D point correspondences. arXiv 2017, arXiv:1705.09785. [Google Scholar]
  59. Hinton, G.E. Products of experts. In Proceedings of the Ninth International Conference on Artificial Neural Networks (ICANN 99), Edinburgh, UK, 7–10 September 1999; pp. 1–6. [Google Scholar]
  60. Genest, C.; Zidek, J.V. Combining probability distributions: A critique and an annotated bibliography. Stat. Sci. 1986, 1, 114–135. [Google Scholar] [CrossRef]
  61. Satopää, V.A.; Baron, J.; Foster, D.P.; Mellers, B.A.; Tetlock, P.E.; Ungar, L.H. Combining multiple probability predictions using a simple logit model. Int. J. Forecast. 2014, 30, 344–356. [Google Scholar] [CrossRef]
  62. Lu, D.V.; Hershberger, D.; Smart, W.D. Layered costmaps for context-sensitive navigation. In Proceedings of the International Conference on Intelligent Robots and Systems, Chicago, IL, USA, 14–18 September 2014; IEEE: Piscataway, NJ, USA, 2014; pp. 709–715. [Google Scholar]
  63. Kalman, R.E. A New Approach to Linear Filtering and Prediction Problems. Trans. ASME J. Basic Eng. 1960, 82, 35–45. [Google Scholar] [CrossRef] [Green Version]
  64. Moore, T.; Stouch, D. A generalized extended kalman filter implementation for the robot operating system. In Proceedings of the 13th International Conference on Intelligent Autonomous Systems, Padua, Italy, 15–19 July 2016; Springer: Berlin/Heidelberg, Germany, 2016; pp. 335–348. [Google Scholar]
  65. Hess, W.; Kohler, D.; Rapp, H.; Andor, D. Real-time loop closure in 2D LIDAR SLAM. In Proceedings of the International Conference on Robotics and Automation, Stockholm, Sweden, 16–21 May 2016; IEEE: Piscataway, NJ, USA, 2016; pp. 1271–1278. [Google Scholar]
  66. Agarwal, S.; Mierle, K. Ceres Solver. Available online: http://ceres-solver.org/ (accessed on 2 September 2020).
  67. Hart, P.; Nilsson, N.; Raphael, B. A Formal Basis for the Heuristic Determination of Minimum Cost Paths. IEEE Trans. Syst. Sci. Cybern. 1968, 4, 100–107. [Google Scholar] [CrossRef]
  68. Dijkstra, E.W. A note on two problems in connexion with graphs. Numer. Math. 1959, 1, 269–271. [Google Scholar] [CrossRef] [Green Version]
  69. Sucan, I.A.; Chitta, S. MoveIt! Available online: http://moveit.ros.org (accessed on 30 July 2020).
  70. Otto, K.; Ossi, A.; Mika, H. ALVAR. Available online: http://virtual.vtt.fi/virtual/proj2/multimedia/alvar/index.html (accessed on 30 July 2020).
  71. Kuffner, J.J.; LaValle, S.M. RRT-connect: An efficient approach to single-query path planning. In Proceedings of the International Conference on Robotics and Automation, San Francisco, CA, USA, 24–28 April 2000; IEEE: Piscataway, NJ, USA, 2000; pp. 995–1001. [Google Scholar]
  72. Zacharias, F.; Borst, C.; Hirzinger, G. Capturing robot workspace structure: Representing robot capabilities. In Proceedings of the International Conference on Intelligent Robots and Systems, San Diego, CA, USA, 29 October–2 November 2007; IEEE: Piscataway, NJ, USA, 2007; pp. 3229–3236. [Google Scholar]
  73. Saff, E.B.; Kuijlaars, A.B.J. Distributing many points on a sphere. Math. Intell. 1997, 19, 5–11. [Google Scholar] [CrossRef]
  74. Yoshikawa, T. Manipulability of Robotic Mechanisms. Int. J. Robot. Res. 1985, 4, 3–9. [Google Scholar] [CrossRef]
  75. Ulmer, J.; Braun, S.; Cheng, C.-T.; Dowey, S.; Wollert, J. Human-Centered Gamification Framework for Manufacturing Systems. Procedia CIRP 2020, 93, 670–675. [Google Scholar] [CrossRef]
  76. Grisetti, G.; Stachniss, C.; Burgard, W. OpenSLAM: GMapping. Available online: http://openslam.org/gmapping.html (accessed on 14 October 2020).
  77. Grisettiyz, G.; Stachniss, C.; Burgard, W. Improving grid-based slam with rao-blackwellized particle filters by adaptive proposals and selective resampling. In Proceedings of the International Conference on Robotics and Automation, Barcelona, Spain, 18–22 April 2005; IEEE: Piscataway, NJ, USA, 2005; pp. 2432–2437. [Google Scholar]
  78. Grisetti, G.; Stachniss, C.; Burgard, W. Improved Techniques for Grid Mapping With Rao-Blackwellized Particle Filters. IEEE Trans. Robot. 2007, 23, 34–46. [Google Scholar] [CrossRef] [Green Version]
  79. Shan, T.; Englot, B. Lego-loam: Lightweight and ground-optimized lidar odometry and mapping on variable terrain. In Proceedings of the International Conference on Intelligent Robots and Systems, Madrid, Spain, 1–5 October 2018; IEEE: Piscataway, NJ, USA, 2018; pp. 4758–4765, ISBN 1538680947. [Google Scholar]
  80. Dellaert, F.; Fox, D.; Burgard, W.; Thrun, S. Monte carlo localization for mobile robots. In Proceedings of the International Conference on Robotics and Automation, Detroit, MI, USA, 10–15 May 1999; IEEE: Piscataway, NJ, USA, 1999; pp. 1322–1328. [Google Scholar]
  81. Watanabe, A.; Hatao, N.; Jomura, S.; Maekawa, D.; Koga, Y. mcl_3dl. Available online: https://github.com/at-wat/mcl_3dl (accessed on 2 September 2020).
  82. Thrun, S.; Burgard, W.; Fox, D. Probabilistic Robotics; MIT Press: Cambridge, MA, USA; London, UK, 2006; ISBN 9780262201629. [Google Scholar]
  83. Ueda, R.; Arai, T.; Sakamoto, K.; Kikuchi, T.; Kamiya, S. Expansion resetting for recovery from fatal error in Monte Carlo localization-comparison with sensor resetting methods. In Proceedings of the International Conference on Intelligent Robots and Systems, Sendai, Japan, 28 September–2 October 2004; IEEE: Piscataway, NJ, USA, 2004; pp. 2481–2486, ISBN 0780384636. [Google Scholar]
  84. Koide, K.; Miura, J.; Menegatti, E. A portable three-dimensional LIDAR-based system for long-term and wide-area people behavior measurement. Int. J. Adv. Robot. Syst. 2019, 16, 172988141984153. [Google Scholar] [CrossRef]
  85. Magnusson, M.; Lilienthal, A.; Duckett, T. Scan Registration for Autonomous Mining Vehicles Using 3D-NDT: Research Articles. J. Field Robot. 2007, 24, 803–827. [Google Scholar] [CrossRef] [Green Version]
  86. Engemann, H.; Badri, S.; Wenning, M.; Kallweit, S. Implementation of an Autonomous Tool Trolley in a Production Line. In Proceedings of the International Conference on Robotics in Alpe-Adria Danube Region, Kaiserslautern, Germany, 19–21 June 2019; Springer: Berlin/Heidelberg, Germany, 2019; pp. 117–125. [Google Scholar]
  87. Dieber, B.; Breiling, B. Security Considerations in Modular Mobile Manipulation. In Proceedings of the International Conference on Robotic Computing, Naples, Italy, 25–27 February 2019; IEEE: Piscataway, NJ, USA, 2019; pp. 70–77. [Google Scholar]
  88. Unhelkar, V.V.; Dörr, S.; Bubeck, A.; Lasota, P.A.; Perez, J.; Siu, H.C.; Boerkoel, J.C., Jr.; Tyroller, Q.; Bix, J.; Bartscher, S. Introducing Mobile Robots to Moving-Floor Assembly Lines. 2018. Available online: https://workofthefuturecongress.mit.edu/wp-content/uploads/2019/06/Unhelkar_Shah_etal_RA_Magazine_2018.pdf (accessed on 10 December 2020).
  89. Walter, C.; Schulenberg, E.; Saenz, J.; Penzlin, F.; Elkmann, N. Demonstration of Complex Task Execution using Basic Functionalities: Experiences with the Mobile Assistance Robot, “ANNIE”. In Proceedings of the International Conference on Automated Planning and Scheduling, London, UK, 12–17 June 2016; pp. 10–16. [Google Scholar]
  90. Saenz, J.; Penzlin, F.; Vogel, C.; Fritzsche, M. VALERI—A Collaborative Mobile Manipulator for Aerospace Production. In Advances in Cooperative Robotics; Tokhi, M.O., Virk, G.S., Eds.; World Scientific: Singapore, 2016; pp. 186–195. ISBN 978-981-314-912-0. [Google Scholar]
  91. Vogel, C.; Saenz, J. Optical Workspace Monitoring System for Safeguarding Tools on the Mobile Manipulator VALERI. In Proceedings of the 47th International Symposium on Robotics, Munich, Germany, 21–22 June 2016; VDE: Berlin, Germany, 2016; pp. 1–6. [Google Scholar]
  92. Dean-Leon, E.; Pierce, B.; Bergner, F.; Mittendorfer, P.; Ramirez-Amaro, K.; Burger, W.; Cheng, G. TOMM: Tactile omnidirectional mobile manipulator. In Proceedings of the International Conference on Robotics and Automation, Singapore, 29 May–3 June 2017; IEEE: Piscataway, NJ, USA, 2017; pp. 2441–2447. [Google Scholar]
  93. KUKA. Mobile Robots from KUKA. Available online: https://www.kuka.com/en-de/products/mobility/mobile-robots (accessed on 27 November 2020).
  94. YASKAWA. New Paths for Mobile Robotics. Available online: https://www.motoman.com/en-us/about/blog/new-paths-for-mobile-robotics (accessed on 10 December 2020).
  95. Neobotix. Mobile Manipulator MM-700. Available online: https://www.neobotix-robots.com/products/mobile-manipulators/mobile-manipulator-mm-700 (accessed on 10 December 2020).
  96. Schlotzhauer, A.; Kaiser, L.; Brandstötter, M. Safety of Industrial Applications with Sensitive Mobile Manipulators–Hazards and Related Safety Measures. In Proceedings of the Austrian Robotics Workshop, Innsbruck, Austria, 17–18 May 2018; pp. 43–47. [Google Scholar]
  97. Mobile Industrial Robots. MiR100. Available online: https://www.mobile-industrial-robots.com/en/solutions/robots/mir100/ (accessed on 10 December 2020).
Figure 1. Related research projects from 2009 to 2021.
Figure 2. Mechanical components of the mobile platform. (a) Drive units; (b) drive units positioned at chassis.
Figure 3. Sensor setup of the mobile manipulator OMNIVIL.
Figure 4. Components and connections of the control system: sensors (green), actuators (blue), controllers (gray), emergency (red), and additional peripheral devices (orange).
Figure 5. Main modules of the control software.
Figure 6. Workspace monitoring system in hexagon form.
Figure 7. Human co-worker detection. (a) Thermal images T_j; (b) RGB images RGB_i.
Figure 8. Gray scale image masks. (a) RGB_3 and expert E_B^RGB; (b) T_5 and expert E_B^Thermal; (c) RGB_3 and expert E_Y^RGB; (d) T_5 and expert E_Y^Thermal.
Figure 9. Point clouds representing the presence of a human co-worker. (a) E_Y^RGB; (b) E_Y^Thermal; (c) E_B^RGB; (d) E_B^Thermal.
Figure 10. Heatmaps generated by the five different experts of the workspace monitoring system (WMS) and the ground truth.
Figure 11. Navigation concept. (a) Zone setup without inflation layers; (b) costmap layers.
Figure 12. Grasping task based on visual servoing. (a) Augmented Reality (AR)-marker detection and 3D pose estimation; (b) grasping process of a Lego car.
Figure 13. Workspace analysis. (a) Schematic of the workspace divided into a voxel grid; (b) spherically arranged set of 6D poses P_ee for one voxel.
Figure 14. Sets of 6D end-effector poses to calculate reachability and maneuverability. (a) Full sphere; (b) hemisphere pointing in forward x-direction; (c) hemisphere pointing in downward z-direction.
Figure 15. Workstations in model factory to be approached by the mobile manipulator. (a) Robot cell with delta picker; (b) manual workbench with augmented reality support; (c) ware-rack.
Figure 16. Localization test scenarios. (a) S_static; (b) S_crowded; (c) S_crowded; (d) S_dynamic.
Figure 17. Sequential movement through the model factory. (a) Executed path and positions P_1, P_2, P_3 and P_4, with OMNIVIL parking at reference pose P_1; (b) executed path in the static world coordinate system W.
Figure 18. A priori maps used for localization experiments. (a) 3D point cloud map without roof; (b) 2D occupancy grid map.
Figure 19. Localization experiment results. (a) S_static; (b) S_crowded; (c) S_crowded; (d) S_dynamic.
Figure 20. Exemplary dataset subsegments. (a) S_static; (b) S_crowded; (c) S_dynamic.
Figure 21. Evaluation of the expert performances in different scenarios. (a) S_static; (b) S_crowded; (c) S_dynamic; (d) all scenarios (S_average).
Figure 22. Workspace evaluation of the mobile manipulator OMNIVIL. (a) Reachability (spherical); (b) reachability (hemispherical-front); (c) reachability (hemispherical-down); (d) manipulability (spherical); (e) manipulability (hemispherical-front); (f) manipulability (hemispherical-down).
Figure 23. Workspace analysis divided into reachability and manipulability.
Table 1. Technical data of the mobile platform.

Description | Value
Dimensions | 1256 × 780 × 522 mm³ (L × W × H)
Ground Clearance | 42 mm
Weight | 200 kg
Maximum Payload | 380 kg
Maximum Velocity | 1.3 m/s
Wheel Type | Mecanum
Kinematic | Holonomic
Table 2. Mean change per laser beam in the static scenarios relative to S_static.

Scenario | Valid Beams | Change Factor A
S_static | 416 | 0.0
S_crowded | 329 | 0.388
S_crowded | 388 | 0.512
Table 3. Computational load and memory usage of the global localization strategies.

Localization Strategy | CPU (% of Single Core) | RAM (GB)
L_1 (2D) | 20 | 0.51
L_2 (2D) | 46 | 3.14
L_3 (3D) | 103 | 2.56
L_4 (3D) | 240 | 1.92
Table 4. Positioning accuracy of autonomous navigation.

Localization Strategy | Goal Pose | σx (mm) | σy (mm) | σθ (rad)
L_2 | P_1 (workbench) | 7 | 6 | 0.004
L_2 | P_2 (robot cell) | 9 | 12 | 0.007
L_2 | P_3 (ware rack) | 11 | 23 | 0.012
L_4 | P_1 (workbench) | 3 | 10 | 0.003
L_4 | P_2 (robot cell) | 3 | 3 | 0.005
L_4 | P_3 (ware rack) | 3 | 3 | 0.003
AR Marker | P_1 (workbench) | 9 | 5 | 0.01
AR Marker | P_2 (robot cell) | 5 | 10 | 0.01
AR Marker | P_3 (ware rack) | 10 | 12 | 0.02
Table 5. Comparison of the average precision.

 | S_static | S_crowded | S_dynamic | S_average
AP E_Y^RGB | 0.75 | 0.74 | 0.64 | 0.71
AP E_Y^Thermal | 0.62 | 0.67 | 0.67 | 0.65
AP E_B^RGB | 0.81 | 0.78 | 0.77 | 0.78
AP E_B^Thermal | 0.80 | 0.79 | 0.83 | 0.81
AP E_Fusion | 0.95 | 0.93 | 0.93 | 0.94
Table 6. Overview of related mobile manipulators developed in research and industry.

Institution/Company | Mobile Manipulator | Kinematic and Manipulator | Safety Features | Navigation Features
Aalborg University | LittleHelper [10] | Differential; KUKA LWR | 2D safety Lidar, ultrasonic sensors | Landmark-based localization at workstations
Joanneum Research | Chimera [87] | Differential; UR10 | 2D safety Lidar, RGB-D camera | Static navigation zones
Fraunhofer IPA | Amadeus [13] | Omnidirectional; UR10 | 2D safety Lidar | Localization based on induction wires
Fraunhofer IPA | rob@work [88] | Omnidirectional; configurable | 2D safety Lidar | Trajectory tracking along dynamic surfaces
Fraunhofer IFF | ANNIE [89] | Omnidirectional; KUKA LBR 4+ | 2D safety Lidars, light-field camera, RGB camera | Static landmarks to increase the localization accuracy
Fraunhofer IFF | VALERI [24,90,91] | Omnidirectional; KUKA LWR | 2D safety Lidars, bumper, stereo camera, Time-of-Flight (ToF) camera, tactile artificial robot skin | Not focused
TUM | TOMM [92] | Omnidirectional; dual-arm UR5 | 2D safety Lidars, RGB cameras, tactile artificial robot skin | Not focused
Tecnalia | MRP [33] | Omnidirectional; dual-arm UR10 | 2D safety Lidars, RGB-D and stereo cameras | 2D and 3D perception-based navigation
KUKA | KMR iiwa [93] | Omnidirectional; KUKA LBR iiwa | 2D safety Lidars | +/− 5 mm positioning accuracy
KUKA | KMR QUANTEC [93] | Omnidirectional; KUKA KR Quantec | 2D safety Lidars | +/− 5 mm positioning accuracy
Yaskawa | YMR12 [94] | Differential; MH12F, HC10 | 2D safety Lidars, 3D cameras, ToF camera | No information
Neobotix | MM-700 [95] | Differential; configurable | 2D safety Lidars | No information
IaAM | OMNIVIL | Omnidirectional; UR5 | 2D safety Lidars, RGB-D camera, RGB cameras, thermal cameras, 3D Lidar | Static and dynamic navigation zones
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.
