Review

Advanced Applications of Industrial Robotics: New Trends and Possibilities

by Andrius Dzedzickis *, Jurga Subačiūtė-Žemaitienė, Ernestas Šutinys, Urtė Samukaitė-Bubnienė * and Vytautas Bučinskas
Department of Mechatronics, Robotics, and Digital Manufacturing, Vilnius Gediminas Technical University, J. Basanaviciaus Str. 28, LT-03224 Vilnius, Lithuania
* Authors to whom correspondence should be addressed.
Appl. Sci. 2022, 12(1), 135; https://doi.org/10.3390/app12010135
Submission received: 9 November 2021 / Revised: 17 December 2021 / Accepted: 19 December 2021 / Published: 23 December 2021
(This article belongs to the Special Issue Trends and Challenges in Robotic Applications)

Abstract
This review is dedicated to advanced applications of robotic technologies in industry. It presents robotic solutions in areas where robotisation has so far been non-intensive and analyses their implementations. We also provide an overview of survey publications and technical reports, classify them by application criteria, trace the development of the structure of existing solutions, and identify recent research gaps. The analysis reveals the background to the existing obstacles and problems. These issues relate to the areas of psychology, human nature, special artificial intelligence (AI) implementation, and the robot-oriented object design paradigm. The analysis of robot applications shows that emerging applications in robotics face both technical and psychological obstacles. The results of this review reveal four directions of required advancement in robotics: development of intelligent companions; improved implementation of AI-based solutions; robot-oriented design of objects; and psychological solutions for robot–human collaboration.

1. Introduction

The industrial robotics sector is one of the most rapidly growing industrial divisions, providing standardised technologies suitable for various automation processes. In the ISO 8373:2012 standard [1], an industrial robot is defined as an automatically controlled, reprogrammable, multipurpose manipulator, programmable in three or more axes, which can be either stationary or mobile, for use in industrial automation applications. However, the same standard allows for wider implementation: it states that a robot is classified as industrial, service, or another type according to its intended application.
According to the International Federation of Robotics (IFR) [2], 373,000 industrial robots were sold globally in 2019. In 2020, the total number of industrial robots operating in factories worldwide reached 2.7 million. The successful application of industrial robots, their reliability and availability, and the active implementation of the Industry 4.0 concept have stimulated growing interest in robot optimisation and research into new implementations in various areas, especially non-manufacturing and non-typical applications. According to one of the biggest scientific databases, ScienceDirect [3], more than 4500 scientific papers using the term “industrial robot” as a keyword were published in 2019 and, in 2020, the number of papers with a similar interest and research direction increased to 5300. Figure 1 shows the annual ratio of new robot installations vs. the number of scientific publications in the ScienceDirect database. Scientific interest in this field is evidenced by a steady increase in the number of publications, independent of the political, economic, and social factors affecting the market for new robots.
This review assesses recent development trends in robotics and identifies some of the most relevant ethical, technological, and scientific uncertainties limiting wider implementation possibilities. The literature review focuses mainly on 2018–2021 applications of industrial robots in fields in which endorsement of robotisation has traditionally been weak (i.e., medical applications, the food industry, agricultural applications, and the civil engineering industry). It also covers fundamental issues such as human–machine interaction, object recognition, path planning, and optimisation.
For this review, main keywords, such as industrial robots, collaborative robots, and robotics, were used to survey published papers over a four-year period. Because this is a widely researched and dynamic area, the review focused on a relatively short time period and encompassed the most recent sources to ensure the analysis conducted was novel.
According to the search request, Google Scholar returned 79,500 results, from which 115 publications were selected. The surveyed articles were selected according to the direction of the literature review and the indicated criteria (application area, novelty and significance of achievements, reliability, and feasibility of results).
Despite the ever-growing automation of daily life and society’s accustomed use of smart devices, non-typical applications of robotics are still often viewed with considerable scepticism. The most common myth about robots is that they will occupy human workplaces, leaving human workers without a source of livelihood. Nevertheless, the research provided in [4], which evaluated the public outcry about robots taking over jobs in the electronics and textiles industries in Japan, showed that such a point of view is incorrect. Evaluation of robot use, based on the number of robots and their real implementation price, determined that implementing robots positively affects productivity, which in turn benefits the most vulnerable workers in society, i.e., women, part-time workers, high-school graduates, and older persons.
Technological and scientific uncertainties also require a special approach. Each robotisation task is unique in its own way. These tasks often require the use of individual tools, the creation of a corresponding working environment, the use of additional sensors or measurement systems, and the implementation of complex control algorithms to expand the functionalities or improve the characteristics of standard robots. In most applications, industrial robots form bigger units as robotic cells or automated/autonomous manufacturing lines. As a result, the robotisation of even a relatively simple task becomes a complex solution requiring a systemic approach.
Moreover, the issue of implementing an industrial robot remains complicated by its interdisciplinary nature: proper organisation of the work cycle is the object of manufacturing management sciences; the design of grippers and related equipment lies within the field of mechanical engineering; and the integration of all devices into a united system, sensor data analysis and whole system control are the objects of mechatronics.
This review focuses on the hardware and software methods used to implement industrial robots in various applications. The aim was to systematically classify the newest achievements in industrial robotics according to application fields without strong robotisation traditions. The analysis of this study was also undertaken from a multidisciplinary perspective, and considers the implementation of computer vision and machine learning for robotic applications.

2. Main Robotisation Strategies

According to the human–robot cooperation type, a review of the most recent trends in industrial robotics applications indicates two main robotisation strategies: classical and modern. In industrial robotics, five typical levels of human–robot cooperation are defined (Figure 2): (i) no collaboration; (ii) coexistence; (iii) synchronisation; (iv) cooperation; (v) collaboration [5].
The classical strategy encompasses the first cooperation level (Figure 2a). It is based on the approach that robots must be separated from humans in the workplace by creating closed robot cells in which human activity is unacceptable; if a human must enter the robot’s workspace, the robot must be stopped. This approach uses various safety systems to detect and prevent human access to the robot’s workspace. The modern strategy includes the remaining four cooperation levels (Figure 2b–e). It is based on the opposing approach, stating that robots and humans can share one workplace and collaborate. Such an approach creates additional requirements for the robot’s design, control, and sensing systems. Robots adapted to operate alongside human workers are usually defined as collaborative robots, or cobots.

2.1. Classical Robotisation Strategy

Since George Devol filed the patent for the first industrial robot in 1954, the classical robotisation strategy has held that robots should replace human workers in routine tasks and unhealthy workplaces. This strategy suggests that humans should be removed from the robot’s workspace (Figure 3a). Direct cooperation between the robot and humans is forbidden due to the potential danger to human health and safety. This approach was later expanded to encompass accuracy, reliability, productivity, and economic factors.
Research provided in [8] analyses the possibilities of implementing service robots in hotels from social, economic, and technical perspectives. The authors indicated the need to evaluate hotel managers’ perceptions regarding the advantages and disadvantages of service robots, compared to human workers, as the primary goal of their research, whereas determining tasks suitable for robotisation was of secondary importance. This approach confirms the assumption that the implementation of robotics in non-traditional applications is often limited not by technological issues, but by the company managers’ attitudes. Analysing questionnaires completed by 79 hotel managers, it was concluded that robots have an advantage over human employees due to better data processing capabilities, work speed, protection of personal data, and fewer mistakes. The main disadvantages of robots were listed as: lack of capability to provide personalised service; inability to handle complaints; lack of friendliness and politeness; inability to implement a special request that goes beyond their programming; and the lack of understanding of emotions.
Despite common doubts, implementing automation and robotic solutions has a positive impact in many cases. The study provided in [9] analysed the general impact of robot implementation in workplaces for packing furniture parts. The analysis focused on the ergonomic perspective and found that implementing robotics eliminates the risk of work-related musculoskeletal disorders. A similar study [8] analysed the design, engineering, and testing of adaptive automation assembly systems intended to increase automation levels and to complement human workers’ skills and capabilities in assembling industrial refrigerators. This study showed that automated assembly process productivity could be increased by more than 79%. Implementing an industrial robot instead of partial automation would likely result in an even more significant increase in productivity. Research comparing human capabilities with automated systems is also described in [10]. The authors compared human and automated vision recognition capabilities in recognising and evaluating forest or mountain trails from a single monocular image acquired from the viewpoint of a robot travelling on the trail. The obtained results showed that a deep neural network-based system, trained on a large dataset, performs better than humans.
Neural network-based algorithms can also be used to control industrial robots to address imperfections in their mechanical systems, which typically behave as non-linear dynamic systems due to a large number of uncertainties. The research presented in [11,12] provides neural network-based methods for advanced control of robot movements. In [11], a promising non-linear model-based predictive control method for robotic manipulators, which minimises the settling time and position overshoot of each joint, is provided.
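To make the control objective concrete, a generic non-linear model predictive control (MPC) problem for a manipulator can be stated as follows; this is a textbook formulation given for illustration, not necessarily the exact cost used in [11]:

$$\min_{u_0,\ldots,u_{N-1}} \sum_{k=0}^{N-1} \left[ (q_k - q^{\mathrm{ref}})^{\top} Q \,(q_k - q^{\mathrm{ref}}) + u_k^{\top} R \, u_k \right] \quad \text{s.t.} \quad x_{k+1} = f(x_k, u_k),\; u_k \in \mathcal{U},$$

where $q_k$ denotes the joint positions extracted from the state $x_k$, $f$ is the (possibly learned) non-linear joint dynamics, and $\mathcal{U}$ is the set of admissible control inputs. Penalising the tracking error over the whole horizon shortens the settling time, and weighting $Q$ against $R$ trades overshoot suppression against control effort; a neural network can supply $f$ when an analytical model is too uncertain.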
The classic strategy is well suited to robotisation of mass production processes in various fields, and its main advantages are clear requirements for work process organisation, robotic cell design, and installation; the availability of a large variety of standard equipment and typical partial solutions; and higher productivity and reliability compared to the cases where human workers perform the same tasks. The main disadvantages are insufficient flexibility, unsuitability for unique production, and high economic costs when it is necessary to adapt the existing robotic cell to a new product or process. Applying a modern robotisation strategy can avoid some of these disadvantages (or at least minimise their impact).

2.2. Modern Robotisation Strategy

The modern robotisation strategy is based on implementing collaborative robots (cobots). According to [13], the definition of cobot was first used in a 1999 US patent [14] and was intended for “an apparatus and method for direct physical integration between a person and a general-purpose manipulator controlled by a computer.” It was the result of the efforts of General Motors to implement robotics in the automotive sector to help humans in assembly operations. The first lightweight cobot, LBR3, designed by a German robotics company, was introduced in 2004 [13]. This has led to the broader development of a modern robotics strategy and new manufacturers in the market. In 2008, the Danish manufacturer Universal Robots released the UR5, a cobot that could safely operate alongside the employees, eliminating the need for safety caging or fencing (Figure 3b). This launched a new era of flexible, user-friendly, and cost-efficient collaborative robots [13], and resulted in the current situation, in which all of the major robot manufacturers have at least a few cobot models in their product range.
The fourth industrial revolution—Industry 4.0—significantly fostered the development of cobot technologies, because the cobot concept fitted well with the Industry 4.0 agenda, allowing human–robot collaboration to be realised and being suitable for flexible manufacturing systems. Contrary to typical industrial robots, next-generation robotics uses artificial intelligence (AI) to perform tasks collaboratively and is suitable for uncontrolled/unpredictable environments [15]. Moreover, due to favourable conditions (advances in AI, sensing technologies, and computer vision), collaborative industrial robots have become significantly smarter, showing the potential for reliable and secure cooperation and increasing the productivity and efficiency of the involved processes [15]. However, it should be noted that Industry 4.0 not only fostered the spread of robotics but also posed new challenges. In highly automated systems, most of the equipment is connected through the Internet of Things (IoT) or other communication technologies. Therefore, cybersecurity and privacy protection of the processes used to monitor and control data [16,17] must be considered. The issue of data protection is also becoming more critical due to the latest communication technologies, such as 5G and 6G [18]. These technologies allow the development of standardised wireless communication networks for various control levels (single cell, production line, factory, network of factories) and, at the same time, make systems more sensitive to external influences. The main impact of Industry 4.0 and new communication technologies on industrial robots is that their controllers have an increasing number of connections, functions, and protocols for communicating with other “smart” devices.
The study presented in [19] analyses the possibilities of human–robot collaboration in aircraft assembly operations. The benefits of human–robot cooperation were examined in terms of the productivity increase and the levels of satisfaction of the human workers. The obtained results showed that humans and robots could simultaneously work safely in a common area without any physical separation, and significantly reduce time and costs compared with manual operations. Moreover, assessment of employee opinions showed that most employees positively evaluated the implementation of collaborative robots. Nevertheless, employee attitudes depend on their practical experience: it was noticed that experts felt more confident than beginners. This can be explained by the fact that experts better understand the overall manufacturing process and are more accustomed to operating with various equipment.
Compared to traditional industrial robots, cobots have more user-friendly control features and wider teaching options. A new assembly strategy was described in a previous study [20], in which a cobot learnt skills from manual teaching to perform peg-in-hole automatic assembly when the geometric profile and elastic material parameters of the parts were inaccurate. The results showed that the manual assembly process could be analysed mathematically, split into a few stages, and implemented as a model in robot control. Using an Elite EC75 manipulator (Elite Robot, Suzhou, China), an assembly time of less than 20 s was achieved, with a 100% success rate over 30 attempts when the relative error between the peg and hole was ±4.5 mm and the clearance between the peg and the hole was 0.18 mm.
As a result of the development of sensor and imaging technologies, new applications in robotics are emerging, especially in human–robot collaboration. In [21], detailed research focused on identifying the main strengths and weaknesses of augmented reality (AR) in industrial robot applications. The analysis shows that AR is mainly used to control and program robotic arms, visualise general tasks or robot information, and visualise the industrial robot workspace. The results indicate that AR systems are faster than traditional approaches; users appreciate AR systems more in terms of likeability and usability; and AR seems to reduce physical workload, whereas the impact on mental workload depends on the interaction interface [16]. Nevertheless, industrial implementation of AR is still limited by insufficient accuracy, occlusion problems, and the limited field of view of wearable AR devices.
A summary of the analysed robotisation strategies indicates that they both have their specific implementation fields. The classical strategy is well suited to strictly controlled environments. The modern strategy ensures more flexible operation and is suitable for non-predictable environments. Nevertheless, it is necessary to note that the strict line between these strategies has gradually disappeared due to advances in sensing technologies, artificial intelligence, and computer vision. A typical industrial robot equipped with modern sensing and control systems can operate similarly to a cobot. According to [22], collaborative regimes can be realised using industrial robots, laser sensors, and vision systems, or controller alteration if compliance with the ISO/TS 15066 standard—which specifies parameters and materials adapted to safe activities with and near humans—is ensured [23]. This standard defines four main classes of safety requirements for collaborative robots: safety-rated monitored stop; hand-guiding; speed and separation monitoring; and power and force limiting.
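As an illustration of the speed and separation monitoring class, the sketch below computes a simplified protective separation distance in the spirit of ISO/TS 15066 and triggers a protective stop when the measured human–robot distance falls below it. All numeric constants here are illustrative assumptions, not values from the standard, and a real deployment must use the standard’s exact formulation on certified safety hardware.

```python
# Simplified speed-and-separation-monitoring check (illustrative sketch only).

def protective_separation_distance(
    v_human: float,            # operator speed towards the robot, m/s
    v_robot: float,            # robot speed towards the operator, m/s
    t_react: float,            # robot reaction time, s
    t_stop: float,             # robot stopping time, s
    d_stop: float,             # robot stopping distance, m
    c_intrusion: float = 0.2,  # intrusion distance, m (assumed value)
    z_uncert: float = 0.1,     # combined position uncertainty, m (assumed value)
) -> float:
    """Minimum separation the system must maintain at this instant."""
    s_human = v_human * (t_react + t_stop)  # distance the operator can cover
    s_robot = v_robot * t_react             # robot travel before it reacts
    return s_human + s_robot + d_stop + c_intrusion + z_uncert

def must_stop(current_distance: float, **kwargs) -> bool:
    """True if the robot has to trigger a protective stop now."""
    return current_distance < protective_separation_distance(**kwargs)

# Example: operator walking at 1.6 m/s towards a robot moving at 0.5 m/s.
print(must_stop(1.2, v_human=1.6, v_robot=0.5,
                t_react=0.1, t_stop=0.3, d_stop=0.15))  # -> False (1.2 m > 1.14 m)
```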
In addition, it is essential to mention that all improvements and advances in robotics can be classified into two main types: universal and application dependent. The remaining part of this article reviews and classifies the latest advances in robotics according to the areas of their implementation.

3. Recent Achievements in Industrial Robotics Classified according to Implementation Area

3.1. Human–Machine Interaction

To date, manual human work has often been replaced by robotic systems in industry. However, within complex systems, interaction between humans and machines/robots (HMI) still needs to occur. HMI is a research area concerned with developing robotic systems based on understanding, evaluation, and analysis, combining various forms of cooperation or interaction with humans. Interaction requires communication between robots and humans, and human communication and collaboration with a robot system can take many forms. These forms are greatly influenced by whether the human is close to the robot and by the context of use: (i) the human–computer context—keyboard, buttons, etc.; (ii) the real-procedures context—haptics, sensors; and (iii) close and exact interaction. Therefore, human–robot communication or interaction can be divided into two main categories: remote interaction and close interaction. Remote interaction takes place through remote operation or supervised control; close interaction takes place through operation with an assistant or companion and may include physical interaction. Because close interactions are the most difficult, a number of aspects must be considered to ensure successful collaboration, i.e., real-time algorithms, “touch” detection and analysis, autonomy, semantic understanding capabilities, and AI-aided anticipation skills. A summary of the relevant research focused on improving and developing HMI methods is provided in Table 1.
The interaction between humans and robots or mechatronic systems encompasses many interdisciplinary fields, including physical sciences, social sciences, psychology, artificial intelligence, computer science, robotics, and engineering. This interaction examines all possible situations in which a human and a robot can systematically collaborate or complement each other. Thus, the main goal is to provide robots with various competencies to facilitate their interaction with humans. To implement such competencies, modelling of real-life situations and predictions is necessary, applying models in interaction with robots, and trying to make this interaction as efficient as possible, i.e., inherently intuitive, based on human experience and artificial intelligence algorithms.
The role of various interfering aspects (Table 2) in human–robot interaction may lead to different future perspectives.
We can summarise that the growing widespread use of robots and the lack of highly skilled professionals in the market form clear guidelines for future development in the HMI area. The main aspirations are an intuitive, human-friendly interface, faster and simpler programming methods, advanced communication features, and robot reactions to human movements, mood, and even psychological state. Methods to monitor human actions and emotions [33], fusion of sensors’ data, and machine learning are key technologies for further improvement in the HMI area.

3.2. Object Recognition

Object recognition is a typical issue in industrial robotics applications, such as sorting, packaging, grouping, pick and place, and assembling (Table 3). The appropriate recognition method and equipment selection mainly depends on the given task, object type, and the number of recognisable parameters. If there are a small number of parameters, simpler sensing technologies based on typical approaches (geometry measuring, weighing, material properties’ evaluation) can be implemented. Alternatively, if there are a significant number of recognisable parameters, photo or video analysis is preferred. Required information in two- or three-dimensional form from image or video can be extracted using computer vision techniques such as object localisation and recognition. Various techniques of vision-based object recognition have been developed, such as appearance-, model-, template-, and region-based approaches. Most vision recognition methods are based on deep learning [34] and other machine learning methods.
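For the “small number of parameters” end of this spectrum, a classical pipeline needs no learning at all. The following is a minimal sketch using OpenCV: it thresholds an image, extracts contours, and reports the position and size of each candidate part. The area threshold and the function `locate_parts` are illustrative assumptions, not taken from any of the cited studies.

```python
# Classical vision-based localisation sketch: threshold, find contours,
# and report the position and size of each candidate part.
import cv2

def locate_parts(image_path: str, min_area: float = 500.0):
    img = cv2.imread(image_path)
    gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
    # Otsu's method picks a global binarisation threshold automatically.
    _, mask = cv2.threshold(gray, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    parts = []
    for c in contours:
        area = cv2.contourArea(c)
        if area < min_area:
            continue  # reject image noise and small particles
        x, y, w, h = cv2.boundingRect(c)
        parts.append({"centre": (x + w / 2, y + h / 2), "area": area})
    return parts
```

Deep learning-based detectors replace the threshold-and-contour stage with a trained network, at the cost of the large datasets and training times discussed below.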
In a previous study [35], a lightweight Franka Emika Panda cobot with seven degrees of freedom, with a Realsense D435 RGB-D camera mounted on the end effector, was used to extend the robot’s default functions. Instead of using a machine learning technique based on a large dataset, the authors proposed a method to program the robot from a single demonstration. This robotic system can detect various objects, regardless of their position and orientation, achieving an average success rate of more than 90% in less than 5 min of training time, using an Ubuntu 16.04 server running on an Intel(R) Core(TM) i5-2400 CPU (3.10 GHz) and an NVIDIA Titan X GPU.
Another approach for grasping randomly placed objects was presented in [36]. The authors proposed a set of performance metrics and compared four robotic bin-picking systems, including the system that took first place in the Amazon Robotics Challenge 2017. The survey results show that the most promising solutions for such a task are RGB-D sensors with CNN-based algorithms for object recognition, and a combination of suction-based and typical two-finger grippers for grasping different objects (vacuum grippers for stiff objects with large, smooth surface areas, and two-finger grippers for air-permeable items).
Similar localisation and sorting tasks appear in the food and automotive industries, and in almost every production unit. In [37], an experimental method was proposed using a pneumatic robot arm for separation of objects from a set according to their colour. If the colour of the workpiece is recognisable, it is selected with the help of a robotic arm. If the workpiece colour does not meet the requirements, it is rejected. The described sorting system works according to an image processing algorithm in MATLAB software. More advanced object recognition methods based on simultaneous colour and height detection are presented in [38]. A robotic arm with six degrees of freedom (DoF) and a camera with computer vision software ensure a sorting efficiency of about 99%.
A five-DoF robot arm, the “OWI Robotic Arm Edge”, was used by Pengchang Chen et al. to validate the practicality and feasibility of a faster region-based convolutional neural network (faster R-CNN) model using a dataset containing images of symmetric objects [39]. Objects were divided into classes based on colour and on whether they were defective or non-defective.
Despite significant progress in existing technologies, randomly placed unpredictable objects remain a challenge in robotics. The success of a sorting task often depends on the accuracy with which recognisable parameters can be defined. Yan Yu et al. [40] proposed an RGB-D-based method for solid waste object detection. The waste sorting system consists of a server, vision sensors, industrial robots, and rotational speedometer. Experiments performed on solid waste image analysis resulted in a mean average precision value of 49.1%.
Furthermore, Wen Xiao et al. designed an automatic sorting robot that uses height maps and near-infrared (NIR) hyperspectral images to locate the region of interest (ROI) of objects, and to perform online statistic pixel-based classification in contours [41]. This automatic sorting robot can automatically sort construction and demolition waste ranging in size from 0.05 to 0.5 m. The online recognition accuracy of the developed sorting system reaches almost 100% and ensures operation speed up to 2028 picks/h.
Another challenging issue in object recognition and manipulation is objects that have an undefined shape and are contaminated by dust or smaller particles, such as minerals or coal. Quite often, such a task requires not only recognising the object but also determining the position of its centre of mass. Man Li et al. [42] proposed an image processing-based coal and gangue sorting method. Particle analysis of coal and gangue samples is performed using morphological corrosion and expansion methods to obtain a complete, clean target sample. The object’s mass centre is obtained using the centre-of-mass method, which consists of particle removal and filling, image binarisation, separation of overlapping samples, reconstruction, and particle analysis. The presented method achieved identification accuracies for coal and gangue samples of 88.3% and 90.0%, respectively, and the average mass centre coordinate errors in the x and y directions were 2.73% and 2.72% [42].
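A minimal sketch of the centre-of-mass step in such a pipeline is shown below, using OpenCV: the binary mask is cleaned with morphological opening and closing (the “corrosion and expansion” operations mentioned above), and the centroid is then computed from image moments. The kernel size and function name are illustrative assumptions, not parameters from [42].

```python
# Centre-of-mass extraction sketch: binarise, clean the mask morphologically,
# then compute the centroid from the zeroth and first spatial image moments.
import cv2
import numpy as np

def mass_centre(gray: np.ndarray):
    _, mask = cv2.threshold(gray, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    kernel = np.ones((5, 5), np.uint8)
    mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, kernel)   # remove particles
    mask = cv2.morphologyEx(mask, cv2.MORPH_CLOSE, kernel)  # fill small holes
    m = cv2.moments(mask, binaryImage=True)
    if m["m00"] == 0:
        raise ValueError("empty mask: no object found")
    # Centroid (x, y) = (M10/M00, M01/M00).
    return m["m10"] / m["m00"], m["m01"] / m["m00"]
```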
Intelligent autonomous robots for picking different kinds of objects were studied as a possible means to overcome the current limitations of existing robotic solutions for picking objects in cluttered environments [43]. This autonomous robot, which can also be used for commercial purposes, has an integrated two-finger gripper and a soft robot end effector to grab objects of various shapes. A special algorithm solves 3D perception problems caused by messy environments and selects the right grabbing point. When using lines, the time required depends significantly on the configuration of the objects, and ranges from 0.02 s when the objects have almost the same depth, to 0.06 s in the worst case when the depth of the tactile objects is greater than the lowest depth but not perceived [43].
In robotics, the task of object recognition often includes not only recognition and the determination of coordinates, but also plays an essential role in the creation of the robot control program. Based on an ABB IRB 140 robot and a digital camera, a low-cost shape identification system was developed and implemented, which is particularly important due to the high variability of welded products [44]. The authors developed an algorithm that recognises the required toolpath from a captured image. The algorithm defines a path as a complex polynomial and then approximates it by simpler shapes with a lower number of coordinates (line, arc, spline) to realise the tool movement using standard robot programming language features.
Moreover, object recognition can be used in robot machine learning to analyse human behaviour. Such an approach was presented by Hiroaki et al. [45], who studied the behaviour of a human crowd and formulated a new forecasting task, called crowd density forecasting, using a fixed surveillance camera. The main goal of this experiment was to predict how the density of the crowd would change in unseen future frames. To address this issue, patch-based density forecasting networks (PDFNs) were developed. PDFNs project the complex dynamics of crowd density throughout the scene onto a set of spatially or spatio-temporally overlapping patches, thus adapting the receptive fields of fully convolutional networks. Such a solution could be used to train robotic swarms, because swarms behave similarly to humans in crowded areas.
Table 3. Research focused on object recognition in robotics.
| Objective | Technology | Approach | Improvement | Ref. |
| --- | --- | --- | --- | --- |
| Extend the default “program from demonstration” feature of collaborative robots to adapt them to environments with moving objects. | Franka Emika Panda cobot with 7 degrees of freedom and a Realsense D435 RGB-D camera mounted on the end effector. | Grasping method fine-tuned using reinforcement learning techniques. | The system can grasp various objects from a demonstration, regardless of their position and orientation, in less than 5 min of training time. | [35,46] |
| Introduce a set of metrics for primary comparison of robotic systems’ detailed functionality and performance. | Robots with different grippers. | Recognition method and grasping method. | Original robot performance metrics were developed and tested on four robot systems used in the Amazon Robotics Challenge; the analysis showed the differences between the systems and promising solutions for further improvements. | [36,45,47] |
| Build a low-cost shape identification system to program industrial robots for the 2D welding process. | ABB IRB 140 robot with a digital camera that detects contours on a 2D surface. | Binarisation and contour recognition. | A low-cost system based on industrial vision was developed and implemented for simple programming of the movement path. | [48,49] |
| Patch-based density forecasting networks (PDFNs) directly forecast crowd density maps of future frames instead of trajectories of each moving person in the crowd. | Fixed surveillance camera. | Density forecasting in image space; density forecasting in latent space; PDFNs; spatio-temporal patch-based Gaussian filter. | The proposed patch-based models, PDFN-S and PDFN-ST, outperformed baselines on all datasets; PDFN-ST successfully forecast the dynamics of individuals, small groups, and crowds. The approach cannot always forecast sudden changes in walking direction, especially in later frames. | [45] |
| Separate objects from a set according to their colour. | Pneumatic robot arm. | Force in response to applied pressure. | The proposed robotic arm may be considered for sorting; servo motors and image-processing cameras can be used to achieve higher repeatability and accuracy. | [37,50] |
| Image processing-based method for coal and gangue sorting; development of a positioning and identification system. | Coal and gangue sorting robot. | Threshold segmentation; clustering; morphological corrosion and expansion; centre-of-mass method. | Efficiency was evaluated on images of coal and gangue randomly picked from the production environment. The average coordinate errors in the x and y directions are 2.73% and 2.72%; the identification accuracy of coal and gangue samples is 88.3% and 90.0%, respectively; the total time for identification, positioning, and opening the camera averaged 0.130 s per sample. | [41,51,52] |
| Computer vision-based robotic sorter capable of simultaneously detecting and sorting objects by colour and height; the vision-based process encompasses identification, manipulation, selection, and sorting of objects depending on colour and geometry. | A 5- or 6-DoF robotic arm and a camera with computer vision software detecting various colours, heights, and geometries. | Computer vision methods with the Haar Cascade algorithm; the Canny edge detection algorithm for shape identification. | A robotic arm picks and places objects based on colour and height, with colour and height sorting efficiency of around 99%; the effectiveness, high accuracy, and low cost of computer vision with a robotic arm for sorting by colour and shape are demonstrated. | [38,53,54] |
| A novel multimodal convolutional neural network for RGB-D object detection. | Solid waste sorting system consisting of a server, vision sensors, an industrial robot, and a rotational speedometer. | Comparison with single-modal methods; evaluated on the Washington RGB-D object recognition benchmark. | Meets real-time requirements with high precision: 49.1% mean average precision, processing images in real time at 35.3 FPS on a single Nvidia GTX1080 GPU; novel dataset. | [40,55] |
| Practicality and feasibility of a faster R-CNN model using a dataset containing images of symmetric objects. | Five-DoF robot arm “OWI Robotic Arm Edge”. | CNN learning algorithm that processes images with multiple layers (filters) and classifies objects in images; Region Proposal Network (RPN). | The accuracy and precision rates are steadily enhanced; detection of defective and non-defective objects was successfully improved by increasing the training dataset to 400 images of defective and non-defective objects. | [39,56,57] |
| Automatic sorting robot using height maps and near-infrared (NIR) hyperspectral images to locate objects’ ROIs and conduct online statistical pixel-based classification in contours; 24/7 monitoring. | Robotic system with four modules: (1) main conveyor, (2) detection module, (3) light source module, and (4) manipulator. | Mask-RCNN and YOLOv3 algorithms; identification includes pixel-, sub-pixel-, and object-based methods. | The prototype machine can automatically sort construction and demolition waste with a size range of 0.05–0.5 m; sorting efficiency can reach 2028 picks/h, and online recognition accuracy nearly reaches 100%; can be applied in land-monitoring technology. | [41,58,59] |
| Overcome current limitations of existing robotic solutions for picking objects in cluttered environments. | Intelligent autonomous robots for picking different kinds of objects; universal jamming gripper. | Comparative study of the algorithmic performance of the proposed method. | When a corner is detected, outputting the target point takes just 0.003 s; with lines, the required time depends on the objects’ configuration, ranging from 0.02 s, when objects have almost the same depth, to 0.06 s in the worst case. | [43,60,61,62] |
A few main trends can be highlighted from the research analysis related to object recognition in robotics. These can be defined as object recognition for localisation and further manipulation; object recognition for shape evaluation and automatic generation of the robot program code for the corresponding robot movement; and object recognition for behaviour analysis to use as initial data for machine learning algorithms. A large number of reliable solutions have been tested in the industrial environment for the first trend, in contrast to the second and third cases, which are currently being developed.

3.3. Medical Application

The da Vinci Surgical System is the best-known robotic manipulator used in surgical applications. Florian Richter et al. [63] presented Patient Side Manipulator (PSM) arm technology to implement reinforcement learning algorithms for surgical da Vinci robots. The authors presented the first open-source reinforcement learning environment for surgical robots, called dVRL [63]. This environment allows fast training of da Vinci robots for autonomous assistance, and collaborative or repetitive tasks, during surgery. During the experiments, the dVRL control policy was effectively learned, and it was found that it could be transferred to a real robot with minimal effort. Although the proposed environment resulted in the simple and primitive actions of reaching and picking, it proved useful for suction and debris removal in a real surgical setting.
Meanwhile, Yohannes Kassahun et al. reviewed the role of machine learning techniques in surgery, focusing on surgical robotics [64]. They found that the research community currently faces many challenges in applying machine learning in surgery and robotic surgery. The main issues are a lack of high-quality medical and surgical data, a lack of reliable metrics that adequately reflect learning characteristics, and a lack of a structured approach to the effective transfer of surgical skills for automated execution [64]. Nevertheless, the application of deep learning in robotics is a very widely studied field. A 2017 article by Harry A. Pierson et al. provides a review emphasising the benefits and challenges of deep learning vis-à-vis robotics [65]. Similarly to [64], they found that the main limitations preventing deep learning in medical robotics are the huge volume of training data required and the relatively long training time.
Surgery is not the only field in medicine in which robotic manipulators can be used. Another autonomous robotic grasping system, described by John E. Downey et al., introduces shared control of a robotic arm based on the interaction of a brain–machine interface (BMI) and a vision guiding system [66]. A BMI is used to define a user’s intent to grasp or transfer an object. Visual guidance is used for low-level control tasks, short-range movements, definition of the optimal grasping position, alignment of the robot end-effector, and grasping. Experiments proved that shared control movements were more accurate, efficient, and less complicated than transfer tasks using BMI alone.
Another case that requires fast robot programming methods and is implemented in medicine is the assessment of functional abilities in functional capacity evaluations (FCEs) [67]. Currently, there is no single rational solution that simulates all or many of the standard work tasks that can be used to improve the assessment and rehabilitation of injured workers. Therefore, the authors proposed that, with the use of the robotic system and machine learning algorithms, it is possible to simulate workplace tasks. Such a system can improve the assessment of functional abilities in FCEs and functional rehabilitation by performing reaching manoeuvres or more complex tasks learned from an experienced therapist. Although this type of research is still in its infancy, robotics with integrated machine learning algorithms can improve the assessment of functional abilities [67].
Although the main task of robotic manipulators is the direct manipulation of objects or tools in medicine, these manipulators can also be used for therapeutic purposes for people with mental or physical disorders. Such applications are often limited by the ability to automatically perceive and respond as needed to maintain an engaging interaction. Ognjen Rudovic et al. presented a personalised deep learning framework that can adapt robot perception [68]. The researchers in the experiment focused on robot perception, for which they developed an individualised deep learning system that could automatically assess a patient’s emotional states and level of engagement. This makes it easier to monitor treatment progress and optimise the interaction between the patient and the robot.
Robotic technologies can also be applied in dentistry. To date, there has been a lack of implementation of fundamental ideas. In a comprehensive review of robotics and the application of artificial intelligence, Jasmin Grischke et al. present numerous approaches to apply these technologies [69]. Robotic technologies in dentistry can be used for maxillofacial surgery [70], tooth preparation [71], testing of toothbrushes [72], root canal treatment and plaque removal [73], orthodontics and jaw movement [74], tooth arrangement for full dentures [75], X-ray imaging radiography [76], swab sampling [77], etc.
A summary of research focused on robotics in medical applications is provided in Table 4. It can be seen that robots are still not very popular in this area, which can be explained by technological and psychological/ethical factors. From the technical point of view, more active implementation is limited by the lack of fast and reliable robot program preparation methods. Regarding psychological and ethical factors, robots still seem unreliable to a large portion of society and are therefore accepted only with significant hesitation.

3.4. Path Planning, Path Optimisation

The process known as robotic navigation aims to achieve accurate positioning while avoiding obstacles in the pathway. It is essential to satisfy constraints such as limited operating space, distance, energy, and time [78]. The path trajectory formation process consists of four separate modules: perception, in which the robot receives the necessary information from its sensors; localisation, in which the robot determines its position in the environment; path planning; and motion control [79]. The development of autonomous robot path planning and path optimisation algorithms is one of the most challenging current research areas. Nevertheless, any kind of path planning requires information about the initial robot position. In the case of a stationary robot, such information is usually easily accessible, in contrast to industrial manipulators mounted on mobile platforms. For mobile robots and automatically guided vehicles (AGVs), accurate self-localisation in various environments [80,81] is the basis for further trajectory planning and optimisation.
According to the amount of available information, robot path planning can be divided into two categories: local and global path planning. Under a local path planning strategy, the robot has rather limited knowledge of the navigation environment. In global path planning, the robot has in-depth knowledge of the navigation environment and reaches its destination by following a predetermined path. Robotic path planning methods have been applied in many fields, such as reconstructive surgery, ocean and space exploration, and vehicle control. In the case of purely industrial robots, path planning refers to finding the best trajectory for transferring a tool or object to its destination in the robot workspace. It is essential to note that typical industrial robots are not capable of real-time path planning; trajectories are usually prepared in advance using online or offline programming methods. One possible technique is to use specialised commercial computer-aided manufacturing (CAM) software such as Mastercam/Robotmaster or Sprutcam. However, the functionality of such software is relatively constrained and does not go beyond the framework of classical tasks, such as welding or milling. The use of CAM software also requires highly qualified professionals. As a result, applying this software to individual installations is economically disadvantageous. As an alternative to CAM software, methods based on copying the movements of highly skilled specialists, using commercially available equipment such as MIMIC from Nordbo Robotics (Antvorskov, Denmark), may be used. This platform allows robots to be taught smooth, complex paths from demonstrations by recording the required movements, which are then smoothed and optimised. To overcome the limitations caused by the lack of real-time path planning features in robot controllers, additional external controllers and real-time communication with the manipulator are required. In the area of path planning and optimisation, experiments have been conducted on automatic object and 3D position detection [82], quasi-static path optimisation [83], image analysis [84], path smoothing [85], BIM [86], and accurate self-localisation in harsh industrial environments [80,81]. More information about the methods and approaches proposed by researchers is listed in Table 5.
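To ground the global path planning notion, the sketch below implements a plain grid-based A* search; this is a generic textbook algorithm, not a method from the cited works. It returns the shortest obstacle-free sequence of grid cells, which would then be smoothed and converted into robot motions.

```python
# Grid-based A* path planning sketch: 0 = free cell, 1 = obstacle.
import heapq
import itertools

def astar(grid, start, goal):
    """Return the shortest path from start to goal as a list of (row, col)."""
    rows, cols = len(grid), len(grid[0])
    h = lambda p: abs(p[0] - goal[0]) + abs(p[1] - goal[1])  # Manhattan distance
    tie = itertools.count()  # tie-breaker so the heap never compares nodes
    open_set = [(h(start), next(tie), 0, start, None)]
    came_from, g_cost = {}, {start: 0}
    while open_set:
        _, _, g, node, parent = heapq.heappop(open_set)
        if node in came_from:
            continue  # already expanded via a cheaper path
        came_from[node] = parent
        if node == goal:
            path = []
            while node is not None:  # walk parent links back to the start
                path.append(node)
                node = came_from[node]
            return path[::-1]
        r, c = node
        for nxt in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            nr, nc = nxt
            if 0 <= nr < rows and 0 <= nc < cols and grid[nr][nc] == 0:
                ng = g + 1
                if ng < g_cost.get(nxt, float("inf")):
                    g_cost[nxt] = ng
                    heapq.heappush(open_set, (ng + h(nxt), next(tie), ng, nxt, node))
    return None  # goal unreachable

# Example: skirt the obstacle row to reach the opposite corner.
print(astar([[0, 0, 0], [1, 1, 0], [0, 0, 0]], (0, 0), (2, 2)))
```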

3.5. Food Industry

As the world’s population grows, the demand for food continues to grow with it. Food suppliers are under pressure to work more efficiently, and consumers want more convenient and sustainable food. Robotics and automation are a key part of the solution. The food production sector has been robotised relatively slowly compared to other industries [97]. Robotics is applied in food manufacture, packaging, delivery, and cookery (cake decoration) [98]. Although the food industry is ranked fourth among the most-automated sectors, robotic devices capable of handling food items of different shapes and materials are in high demand. In addition, these devices help to avoid consequences such as food-borne illness caused directly by the contamination of food by human handlers [99]. For this purpose, a dual-mode soft gripper was developed that can grasp, or lift by suction, various objects weighing up to 1 kg. Soft grippers prevent damage to food [100].
Artificial intelligence-enabled robotic applications are entering the restaurant industry in food processing and guest service operations. In a review assessing the potential for process innovation in the restaurant sector, an information process for the use of new technologies in process innovation was developed [101]. The past year, particularly due to the circumstances of COVID-19, has been a breakthrough year for robotisation in the food industry. A more detailed overview of research focused on robotising the food industry is provided in Table 6.

3.6. Agricultural Applications

Agricultural robots are a specialised type of technology capable of assisting farmers with a wide range of operations. Their primary role is to tackle labour-intensive, repetitive, and physically demanding tasks. Robots are used in planting, seedling identification, and sorting. Autonomous tractors perform weeding and harvesting. Drones and autonomous ground vehicles are used for crop monitoring and condition assessment. In animal husbandry, robots are used for feeding cattle, milking, collecting and sorting eggs, and autonomous cleaning of pens. Cobots are also used in agriculture: these robots possess mechanical arms and make harvesting much easier for farmers. The agricultural robot market is expected to reach USD 16,640.4 million by 2026; however, specialised robots, rather than industrial robots, will occupy the majority of this market. A detailed overview of research focused on implementing industrial robots in agricultural applications is provided in Table 7.

3.7. Civil Engineering Industry

In general, the construction industry is relatively inefficient from the perspective of automation, and robotics is seldom applied [107]. The main challenges identified for higher adoption of robotics in the construction industry fall into four categories: contractor-side economic factors; client-side economic factors; technical and work-culture factors; and weak business case factors. Technical and work-culture factors include an untrained workforce, unproven effectiveness, immature technology, and the current work culture with its aversion to change [108].
The outlook for robotics in civil engineering is significantly better. Here, robotics provides considerable opportunities to increase productivity, efficiency, and flexibility, from automated modular house production to robotic welding, material handling on construction sites, and 3D printing of houses or certain structures. Robots make the industry safer and more economical, increase sustainability, and reduce its environmental impact, while improving quality and reducing waste. The total global value of the construction industry is forecast to grow by 85% to USD 15.5 trillion by 2030 [109]. Robots can make construction safer by handling large and heavy loads, working in hazardous locations, and enabling new, safer construction methods. Transferring repetitive and dangerous tasks that humans are increasingly reluctant to perform to robots means that automation can help address the labour and skills crisis and make the construction industry more attractive [110,111]. Few classic robots are used in the construction process due to the dynamic and inaccurately described environment; however, work on 3D models of buildings and their environments is reducing this limitation. A detailed overview of related references is provided in Table 8.

4. Discussion

Implementing an industrial robot in practice is a complex procedure that requires answering many questions about the possibilities of using the robot and the process itself. The situation varies slightly depending on the industry area. Robots have been used in some areas for 30 or more years, whereas, in other areas, the implementation of robots is only beginning. In industrial sectors with a long tradition of robotics, new solutions are relatively more straightforward. These solutions are typically limited to implementing new tools, control algorithms, and robotic action quality control systems. Therefore, our article focuses on areas where traditions of implementing robots do not exist yet, and such solutions are just beginning to be implemented.
Despite the different application areas, some achievements in robotics can be successfully transferred from one industry to another. Furthermore, bypassing limitations in one area often ensures advances in robotics in other sectors. For example, the implementation of computer vision to localise and manipulate randomly placed mechanical parts on a conveyor fostered the robotisation of sorting processes in all industry fields.
This article provided an overview of the main areas where robots are beginning to be implemented, and identified the main challenges and limitations they face (Figure 4).
The conclusion is that the tasks performed by robots and the actual limitations are closely related, regardless of the implementation field. In this paper, the tasks for which robots are preferred over humans were identified. Typically, these are repetitive and extremely precise operations that require evaluating a considerable amount of data. For example, the implementation of robots for object recognition has three main functions in which robots replace humans: (1) extraction of useful information from a massive data flow; (2) accurate movements to manipulate an object or tool; and (3) repetitive actions (sorting). In addition, the food, agriculture, and civil engineering industries aim to replace humans involved in repetitive actions. In contrast, medical applications are mainly related to accurate manipulation and hazardous environments.
Preparation of robots for an operation, particularly in dynamic, varying situations, is a time- and resource-consuming activity. Therefore, a large amount of research focuses on enhancing human–robot interaction and path planning/optimisation issues. The goal is to develop faster and more comfortable methods to operate robots in real time, and to create a possibility for the robot to react to the operator’s emotional state.
Many different factors limit the implementation of industrial robots in typical tasks. Seven main limitations in the reviewed application fields were identified. In summary, the main limitations are the lack of suitable methods; demanding recognition accuracy and performance requirements; varying environmental conditions; an excessive number of possible situations; and a lack of reliable equipment (tools). Notably, these limitations are unrelated to the robot’s mechanical systems (except for the tools). Therefore, most modern robotic solutions are fostered by the development of additional equipment or control algorithms. Computer vision, sensor fusion, and machine learning are becoming the major engines driving the wider application of industrial robots. They increase robots’ flexibility and enable them to make smart adaptive decisions, although robots were initially designed only to perform repetitive actions.
As a result of the development of robot control systems, robots’ internal structures have also been improved. These improvements typically include the implementation of new mathematical methods for robot control or optimisation of energy consumption [118]. For example, a previous study [119] provided a methodology that allows implementation of a non-typical Denavit–Hartenberg method for a delta robot.
Nonetheless, despite the recent improvements and smart solutions realised in industrial robots, their widespread use in non-typical areas remains limited. The main limitations and guidelines for further research are new intuitive control methods, user-friendly interfaces, specialised software, and real-time control methods.

5. Conclusions

Analysis of robot applications revealed a number of important issues and showed that the currently rare implementations of robots are not always limited by technical difficulties.
Some application fields, such as the civil engineering, food, and agriculture industries, have no tradition of such activities. Human–robot cooperation, whether with classical industrial robots or with specialised cobots, still demands an intensive introduction into these industries. However, this introduction involves non-technical aspects such as human psychology and personal acceptance of robots in the workplace. Another aspect of the subjective attitude to robots is their limited acceptance by managers and process designers, who often lack implementation experience and knowledge of cutting-edge achievements in robotic applications.
Many automation cases are still limited by artificial intelligence (AI) issues related to object recognition, object position recognition, and decision generation for object grabbing and manipulating. This issue arises from the process of widening robotic implementation in existing industries, and therefore many technologies should be redesigned. Nevertheless, pressure due to the absence of a skilled labour force has led to new solutions. Many general solutions using machine vision and sensor fusion (camera–lidar scanner, camera–distance sensors, etc.) have been spontaneously implemented in numerous industrial enterprises. These approaches are starting to appear in home appliances, but market penetration of these solutions remains low.
Robot implementations are often subject to systematic difficulties, such as the manipulation and orientation of objects with unstable geometrical shapes. Such objects, including textiles, clothes, and cables, are widely used in industry and in home appliances. At present, this area has few publications and technical solutions and remains at the research stage; most publicly available cases exist only as scientific publications. Although clamps and templates are currently used in specific industrial cases, general solutions have not yet been achieved. This situation calls for rethinking processes, and possibly preparing objects for robotic processing, rather than relying on ever greater computing power and additional hardware.
The results of this review point to four evident directions of advancement in the field of robotics:
  • development of intelligent companion equipment for robots (sensors, grippers, and servo-applications);
  • AI-based solutions for signal processing and decision making;
  • the redesign of general objects and the related features for robotic applications;
  • provision of psychological solutions for robot–human collaboration and acceptance of robots in the workplace.

Author Contributions

Conceptualisation, V.B. and A.D.; methodology, U.S.-B.; formal analysis, U.S.-B. and E.Š.; investigation, J.S.-Ž.; resources, V.B.; writing—original draft preparation, U.S.-B. and J.S.-Ž.; writing—review and editing, V.B. and A.D.; visualisation, A.D.; supervision, V.B.; project administration, J.S.-Ž.; funding acquisition, V.B. All authors have read and agreed to the published version of the manuscript.

Funding

This work is part of the AI4DI project, receiving funding from the Electronic Components and Systems for European Leadership Joint Undertaking in collaboration with the European Union’s H2020 Framework Programme (H2020/2014-2020) and National Authorities, under grant agreement No 826060.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

The data presented in this study are available on request from the corresponding author. The data are not publicly available due to the project consortium agreement.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. ISO—ISO 8373:2012—Robots and Robotic Devices—Vocabulary. Available online: https://www.iso.org/standard/55890.html (accessed on 7 April 2021).
  2. IFR Presents World Robotics Report 2020—International Federation of Robotics. Available online: https://ifr.org/ifr-press-releases/news/record-2.7-million-robots-work-in-factories-around-the-globe (accessed on 7 April 2021).
  3. ScienceDirect Search Results—Keywords (Industrial Robot). Available online: https://www.sciencedirect.com/search?qs=Industrial%20robot (accessed on 7 April 2021).
  4. Dekle, R. Robots and industrial labor: Evidence from Japan. J. Jpn. Int. Econ. 2020, 58, 101108. [Google Scholar] [CrossRef]
  5. Olivares-Alarcos, A.; Foix, S.; Alenyà, G. On inferring intentions in shared tasks for industrial collaborative robots. Electronics 2019, 8, 1306. [Google Scholar] [CrossRef] [Green Version]
  6. Smith, R.; Cucco, E.; Fairbairn, C. Robotic Development for the Nuclear Environment: Challenges and Strategy. Robotics 2020, 9, 94. [Google Scholar] [CrossRef]
  7. Rojas, R.A.; Wehrle, E.; Vidoni, R. A Multicriteria Motion Planning Approach for Combining Smoothness and Speed in Collaborative Assembly Systems. Appl. Sci. 2020, 10, 5086. [Google Scholar] [CrossRef]
  8. Ivanov, S.; Seyitoğlu, F.; Markova, M. Hotel managers’ perceptions towards the use of robots: A mixed-methods approach. Inf. Technol. Tour. 2020, 22, 505–535. [Google Scholar] [CrossRef]
  9. Colim, A.; Sousa, N.; Carneiro, P.; Costa, N.; Arezes, P.; Cardoso, A. Ergonomic intervention on a packing workstation with robotic aid-case study at a furniture manufacturing industry. Work 2020, 66, 229–237. [Google Scholar] [CrossRef] [PubMed]
  10. Giusti, A.; Guzzi, J.; Ciresan, D.C.; He, F.L.; Rodriguez, J.P.; Fontana, F.; Faessler, M.; Forster, C.; Schmidhuber, J.; Di Caro, G.; et al. A Machine Learning Approach to Visual Perception of Forest Trails for Mobile Robots. IEEE Robot. Autom. Lett. 2016, 1, 661–667. [Google Scholar] [CrossRef] [Green Version]
  11. Elsisi, M.; Mahmoud, K.; Lehtonen, M.; Darwish, M.M.F. Effective Nonlinear Model Predictive Control Scheme Tuned by Improved NN for Robotic Manipulators. IEEE Access 2021, 9, 64278–64290. [Google Scholar] [CrossRef]
  12. Elsisi, M.; Mahmoud, K.; Lehtonen, M.; Darwish, M.M.F. An improved neural network algorithm to efficiently track various trajectories of robot manipulator arms. IEEE Access 2021, 9, 11911–11920. [Google Scholar] [CrossRef]
  13. A Brief History of Collaborative Robots|Material Handling and Logistics. Available online: https://www.mhlnews.com/technology-automation/article/21124077/a-brief-history-of-collaborative-robots (accessed on 8 April 2021).
  14. Colgate, J.E.; Peshkin, M.A. Cobots. U.S. Patent 5,952,796, 14 September 1999. [Google Scholar]
  15. Galin, R.; Meshcheryakov, R. Automation and robotics in the context of Industry 4.0: The shift to collaborative robots. IOP Conf. Ser. Mater. Sci. Eng. 2019, 537, 032073. [Google Scholar] [CrossRef]
  16. Tran, M.Q.; Elsisi, M.; Mahmoud, K.; Liu, M.K.; Lehtonen, M.; Darwish, M.M.F. Experimental Setup for Online Fault Diagnosis of Induction Machines via Promising IoT and Machine Learning: Towards Industry 4.0 Empowerment. IEEE Access 2021, 9, 115429–115441. [Google Scholar] [CrossRef]
  17. Elsisi, M.; Mahmoud, K.; Lehtonen, M.; Darwish, M.M.F. Reliable Industry 4.0 Based on Machine Learning and IoT for Analyzing, Monitoring, and Securing Smart Meters. Sensors 2021, 21, 487. [Google Scholar] [CrossRef]
  18. Rao, S.K.; Prasad, R. Impact of 5G Technologies on Industry 4.0. Wirel. Pers. Commun. 2018, 100, 145–159. [Google Scholar] [CrossRef]
  19. Pérez, L.; Rodríguez-Jiménez, S.; Rodríguez, N.; Usamentiaga, R.; García, D.F.; Wang, L. Symbiotic human–robot collaborative approach for increased productivity and enhanced safety in the aerospace manufacturing industry. Int. J. Adv. Manuf. Technol. 2020, 106, 851–863. [Google Scholar] [CrossRef]
  20. Song, J.; Chen, Q.; Li, Z. A peg-in-hole robot assembly system based on Gauss mixture model. Robot. Comput. Integr. Manuf. 2021, 67, 101996. [Google Scholar] [CrossRef]
  21. De Pace, F.; Manuri, F.; Sanna, A.; Fornaro, C. A systematic review of Augmented Reality interfaces for collaborative industrial robots. Comput. Ind. Eng. 2020, 149, 106806. [Google Scholar] [CrossRef]
  22. Matheson, E.; Minto, R.; Zampieri, E.G.G.; Faccio, M.; Rosati, G. Human-robot collaboration in manufacturing applications: A review. Robotics 2019, 8, 100. [Google Scholar] [CrossRef] [Green Version]
  23. ISO—ISO/TS 15066:2016—Robots and Robotic Devices—Collaborative Robots. Available online: https://www.iso.org/standard/62996.html (accessed on 2 December 2021).
  24. Tannous, M.; Miraglia, M.; Inglese, F.; Giorgini, L.; Ricciardi, F.; Pelliccia, R.; Milazzo, M.; Stefanini, C. Haptic-based touch detection for collaborative robots in welding applications. Robot. Comput. Integr. Manuf. 2020, 64, 101952. [Google Scholar] [CrossRef]
  25. Tannous, M.; Bologna, F.; Stefanini, C. Load cell torques and force data collection during tele-operated robotic gas tungsten arc welding in presence of collisions. Data Br. 2020, 31, 105981. [Google Scholar] [CrossRef]
  26. Knudsen, M.; Kaivo-oja, J. Collaborative Robots: Frontiers of Current Literature. J. Intell. Syst. Theory Appl. 2020, 3, 13–20. [Google Scholar] [CrossRef]
  27. Ghosh, A.; Soto, D.A.P.; Veres, S.M.; Rossiter, A. Human robot interaction for future remote manipulations in industry 4.0. Proc. IFAC-Pap. 2020, 53, 10223–10228. [Google Scholar] [CrossRef]
  28. Ghosh, A.; Veres, S.M.; Paredes-Soto, D.; Clarke, J.E.; Rossiter, J.A. Intuitive programming with remotely instructed robots inside future gloveboxes. In Proceedings of the ACM/IEEE International Conference on Human-Robot Interaction, Cambridge, UK, 23–26 March 2020; pp. 209–211. [Google Scholar]
  29. Weidemann, A.; Rußwinkel, N. The Role of Frustration in Human–Robot Interaction—What Is Needed for a Successful Collaboration? Front. Psychol. 2021, 12, 707. [Google Scholar] [CrossRef]
  30. Spezialetti, M.; Placidi, G.; Rossi, S. Emotion Recognition for Human-Robot Interaction: Recent Advances and Future Perspectives. Front. Robot. AI 2020, 7, 532279. [Google Scholar] [CrossRef] [PubMed]
  31. Ge, S.; Wang, P.; Liu, H.; Lin, P.; Gao, J.; Wang, R.; Iramina, K.; Zhang, Q.; Zheng, W. Neural Activity and Decoding of Action Observation Using Combined EEG and fNIRS Measurement. Front. Hum. Neurosci. 2019, 13, 357. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  32. Mavridis, N. A review of verbal and non-verbal human–robot interactive communication. Robot. Auton. Syst. 2015, 63, 22–35. [Google Scholar] [CrossRef] [Green Version]
  33. Dzedzickis, A.; Kaklauskas, A.; Bucinskas, V. Human emotion recognition: Review of sensors and methods. Sensors 2020, 20, 592. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  34. Shubha, P. A review of multi-object recognition based on deep learning. Int. J. Eng. Technol. Res. Manag. 2020, 2, 27–33. [Google Scholar]
  35. De Coninck, E.; Verbelen, T.; Van Molle, P.; Simoens, P.; Dhoedt, B. Learning robots to grasp by demonstration. Robot. Auton. Syst. 2020, 127, 103474. [Google Scholar] [CrossRef]
  36. Fujita, M.; Domae, Y.; Noda, A.; Garcia Ricardez, G.A.; Nagatani, T.; Zeng, A.; Song, S.; Rodriguez, A.; Causo, A.; Chen, I.M.; et al. What are the important technologies for bin picking? Technology analysis of robots in competitions based on a set of performance metrics. Adv. Robot. 2020, 34, 560–574. [Google Scholar] [CrossRef]
  37. Sughashini, K.R.; Sunanthini, V.; Johnsi, J.; Nagalakshmi, R.; Sudha, R. A pneumatic robot arm for sorting of objects with chromatic sensor module. Mater. Today Proc. 2021, 45, 6364–6368. [Google Scholar] [CrossRef]
  38. Shaikat, A.S.; Akter, S.; Salma, U. Computer Vision Based Industrial Robotic Arm for Sorting Objects by Color and Height. J. Eng. Adv. 2020, 1, 116–122. [Google Scholar] [CrossRef]
  39. Chen, P.; Elangovan, V. Object Sorting using Faster R-CNN. Int. J. Artif. Intell. Appl. 2020, 11, 27–36. [Google Scholar] [CrossRef]
  40. Yu, Y.; Zou, S.; Yin, K. A novel detection fusion network for solid waste sorting. Int. J. Adv. Robot. Syst. 2020, 17, 172988142094177. [Google Scholar] [CrossRef]
  41. Xiao, W.; Yang, J.; Fang, H.; Zhuang, J.; Ku, Y.; Zhang, X. Development of an automatic sorting robot for construction and demolition waste. Clean Technol. Environ. Policy 2020, 22, 1829–1841. [Google Scholar] [CrossRef]
  42. Li, M.; Duan, Y.; He, X.; Yang, M. Image positioning and identification method and system for coal and gangue sorting robot. Int. J. Coal Prep. Util. 2020, 1–19. [Google Scholar] [CrossRef]
  43. D’Avella, S.; Tripicchio, P.; Avizzano, C.A. A study on picking objects in cluttered environments: Exploiting depth features for a custom low-cost universal jamming gripper. Robot. Comput. Integr. Manuf. 2020, 63, 101888. [Google Scholar] [CrossRef]
  44. Ciszak, O.; Juszkiewicz, J.; Suszyński, M. Programming of Industrial Robots Using the Recognition of Geometric Signs in Flexible Welding Process. Symmetry 2020, 12, 1429. [Google Scholar] [CrossRef]
  45. Minoura, H.; Yonetani, R.; Nishimura, M.; Ushiku, Y. Crowd Density Forecasting by Modeling Patch-Based Dynamics. IEEE Robot. Autom. Lett. 2021, 6, 287–294. [Google Scholar] [CrossRef]
  46. De Coninck, E.; Verbelen, T.; Van Molle, P.; Simoens, P.; Idlab, B.D. Learning to Grasp Arbitrary Household Objects from a Single Demonstration. IEEE Int. Conf. Intell. Robot. Syst. 2019, 2372–2377. [Google Scholar] [CrossRef]
  47. Kaya, O.; Tağlıoğlu, G.B.; Ertuğrul, Ş. The Series Elastic Gripper Design, Object Detection, and Recognition by Touch. J. Mech. Robot. 2022, 14, 014501. [Google Scholar] [CrossRef]
  48. Kulkarni, R.G. Robot Path Planning with Sensor Feedback for Industrial Applications; Wichita State University: Wichita, KS, USA, 2021. [Google Scholar]
  49. Abdalrahman, M.; Brice, A.; Hanson, L. New Era of Automation in Scania’ s Manufacturing Systems—A Method to Automate a Manual Assembly Process; Libraries at Lund University: Lund, Sweden, 2021. [Google Scholar]
  50. Thike, A.; Moe San, Z.Z.; Min Oo, D.Z. Design and Development of an Automatic Color Sorting Machine on Belt Conveyor. Int. J. Sci. Eng. Appl. 2019, 8, 176–179. [Google Scholar] [CrossRef]
  51. Wang, Z.; Xie, S.; Chen, G.; Chi, W.; Ding, Z.; Wang, P. An Online Flexible Sorting Model for Coal and Gangue Based on Multi-Information Fusion. IEEE Access 2021, 9, 90816–90827. [Google Scholar] [CrossRef]
  52. Sun, Z.; Huang, L.; Jia, R. Coal and gangue separating robot system based on computer vision. Sensors 2021, 21, 1349. [Google Scholar] [CrossRef] [PubMed]
  53. Fadhil, A.T.; Abbar, K.A.; Qusay, A.M. Computer Vision-Based System for Classification and Sorting Color Objects. IOP Conf. Ser. Mater. Sci. Eng. 2020, 745, 012030. [Google Scholar] [CrossRef]
  54. Peršak, T.; Viltužnik, B.; Hernavs, J.; Klancnik, S. Vision-Based Sorting Systems for Transparent Plastic Granulate. Appl. Sci. 2020, 10, 4269. [Google Scholar] [CrossRef]
  55. Sun, L.; Zhao, C.; Yan, Z.; Liu, P.; Duckett, T.; Stolkin, R. A novel weakly-supervised approach for RGB-D-based nuclear waste object detection. IEEE Sens. J. 2019, 19, 3487–3500. [Google Scholar] [CrossRef] [Green Version]
  56. Albinali, H.; Alzahrani, F.A. Faster R-CNN for detecting regions in human-annotated micrograph images. In Proceedings of the 2021 International Conference of Women in Data Science at Taif University (WiDSTaif), Taif, Saudi Arabia, 30–31 March 2021. [Google Scholar]
  57. Li, S.; Zhao, X.; Li, W. Analysis of Object Detection Performance Based on Faster R-CNN. J. Phys. Conf. Ser. 2021, 1827, 012085. [Google Scholar] [CrossRef]
  58. Cipta Ramadhan Kete, S.; Darma Tarigan, S.; Effendi, H. Land use classification based on object and pixel using Landsat 8 OLI in Kendari City, Southeast Sulawesi Province, Indonesia. IOP Conf. Ser. Earth Environ. Sci. 2019, 284, 012019. [Google Scholar] [CrossRef]
  59. Hespeler, S.C.; Nemati, H.; Dehghan-Niri, E. Non-destructive thermal imaging for object detection via advanced deep learning for robotic inspection and harvesting of chili peppers. Artif. Intell. Agric. 2021, 5, 102–117. [Google Scholar] [CrossRef]
  60. Birglen, L.; Schlicht, T. A statistical review of industrial robotic grippers. Robot. Comput. Integr. Manuf. 2018, 49, 88–97. [Google Scholar] [CrossRef]
  61. Shim, M.; Kim, J.H. Design and optimization of a robotic gripper for the FEM assembly process of vehicles. Mech. Mach. Theory 2018, 129, 1–16. [Google Scholar] [CrossRef]
  62. Linghu, C.; Zhang, S.; Wang, C.; Yu, K.; Li, C.; Zeng, Y.; Zhu, H.; Jin, X.; You, Z.; Song, J. Universal SMP gripper with massive and selective capabilities for multiscaled, arbitrarily shaped objects. Sci. Adv. 2020, 6, eaay5120. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  63. Richter, F.; Orosco, R.K.; Yip, M.C. Open-Sourced Reinforcement Learning Environments for Surgical Robotics. arXiv 2019, arXiv:1903.02090. [Google Scholar]
  64. Kassahun, Y.; Yu, B.; Tibebu, A.T.; Stoyanov, D.; Giannarou, S.; Metzen, J.H.; Vander Poorten, E. Surgical robotics beyond enhanced dexterity instrumentation: A survey of machine learning techniques and their role in intelligent and autonomous surgical actions. Int. J. Comput. Assist. Radiol. Surg. 2016, 11, 553–568. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  65. Pierson, H.A.; Gashler, M.S. Deep learning in robotics: A review of recent research. Adv. Robot. 2017, 31, 821–835. [Google Scholar] [CrossRef] [Green Version]
  66. Downey, J.E.; Weiss, J.M.; Muelling, K.; Venkatraman, A.; Valois, J.S.; Hebert, M.; Bagnell, J.A.; Schwartz, A.B.; Collinger, J.L. Blending of brain-machine interface and vision-guided autonomous robotics improves neuroprosthetic arm performance during grasping. J. Neuroeng. Rehabil. 2016, 13, 28. [Google Scholar] [CrossRef] [Green Version]
  67. Fong, J.; Ocampo, R.; Gross, D.P.; Tavakoli, M. Intelligent Robotics Incorporating Machine Learning Algorithms for Improving Functional Capacity Evaluation and Occupational Rehabilitation. J. Occup. Rehabil. 2020, 30, 362–370. [Google Scholar] [CrossRef] [PubMed]
  68. Rudovic, O.; Lee, J.; Dai, M.; Schuller, B.; Picard, R.W. Personalized machine learning for robot perception of affect and engagement in autism therapy. Sci. Robot. 2018, 3, eaao6760. [Google Scholar] [CrossRef] [Green Version]
  69. Grischke, J.; Johannsmeier, L.; Eich, L.; Griga, L.; Haddadin, S. Dentronics: Towards robotics and artificial intelligence in dentistry. Dent. Mater. 2020, 36, 765–778. [Google Scholar] [CrossRef]
  70. Ma, Q.; Kobayashi, E.; Wang, J.; Hara, K.; Suenaga, H.; Sakuma, I.; Masamune, K. Development and preliminary evaluation of an autonomous surgical system for oral and maxillofacial surgery. Int. J. Med. Robot. Comput. Assist. Surg. 2019, 15, e1997. [Google Scholar] [CrossRef]
  71. Otani, T.; Raigrodski, A.J.; Mancl, L.; Kanuma, I.; Rosen, J. In vitro evaluation of accuracy and precision of automated robotic tooth preparation system for porcelain laminate veneers. J. Prosthet. Dent. 2015, 114, 229–235. [Google Scholar] [CrossRef] [Green Version]
  72. Lang, T.; Staufer, S.; Jennes, B.; Gaengler, P. Clinical validation of robot simulation of toothbrushing—Comparative plaque removal efficacy. BMC Oral Health 2014, 14, 82. [Google Scholar] [CrossRef] [Green Version]
  73. Nelson, C.A.; Hossain, S.G.M.; Al-Okaily, A.; Ong, J. A novel vending machine for supplying root canal tools during surgery. J. Med. Eng. Technol. 2012, 36, 102–116. [Google Scholar] [CrossRef] [PubMed]
  74. Lepidi, L.; Chen, Z.; Ravida, A.; Lan, T.; Wang, H.L.; Li, J. A Full-Digital Technique to Mount a Maxillary Arch Scan on a Virtual Articulator. J. Prosthodont. 2019, 28, 335–338. [Google Scholar] [CrossRef]
  75. Zhang, Y.; De Jiang, J.G.; Liang, T.; Hu, W.P. Kinematics modeling and experimentation of the multi-manipulator tooth-arrangement robot for full denture manufacturing. J. Med. Syst. 2011, 35, 1421–1429. [Google Scholar] [CrossRef] [PubMed]
  76. Spin-Neto, R.; Mudrak, J.; Matzen, L.H.; Christensen, J.; Gotfredsen, E.; Wenzel, A. Cone beam CT image artefacts related to head motion simulated by a robot skull: Visual characteristics and impact on image quality. Dentomaxillofacial Radiol. 2013, 42, 32310645. [Google Scholar] [CrossRef] [Green Version]
  77. Li, C.; Gu, X.; Xiao, X.; Lim, C.M.; Duan, X.; Ren, H. A Flexible Transoral Robot Towards COVID-19 Swab Sampling. Front. Robot. AI 2021, 8, 51. [Google Scholar] [CrossRef]
  78. Jose, K.; Pratihar, D.K. Task allocation and collision-free path planning of centralized multi-robots system for industrial plant inspection using heuristic methods. Rob. Auton. Syst. 2016, 80, 34–42. [Google Scholar] [CrossRef]
  79. Das, P.K.; Jena, P.K. Multi-robot path planning using improved particle swarm optimization algorithm through novel evolutionary operators. Appl. Soft Comput. J. 2020, 92, 106312. [Google Scholar] [CrossRef]
  80. Fascista, A.; Coluccia, A.; Ricci, G. A Pseudo Maximum likelihood approach to position estimation in dynamic multipath environments. Signal Processing 2021, 181, 107907. [Google Scholar] [CrossRef]
  81. Karaagac, A.; Haxhibeqiri, J.; Ridolfi, M.; Joseph, W.; Moerman, I.; Hoebeke, J. Evaluation of accurate indoor localization systems in industrial environments. In Proceedings of the 2017 22nd IEEE International Conference on Emerging Technologies and Factory Automation (ETFA), Limassol, Cyprus, 12–15 September 2017; pp. 1–8. [Google Scholar]
  82. Makomo, T.J.; Erin, K.; Boru, B. Real Time Application for Automatic Object and 3D Position Detection and Sorting with Robotic Manipulator. Sak. Univ. J. Sci. 2020, 24, 703–711. [Google Scholar] [CrossRef]
  83. Hermansson, T.; Carlson, J.S.; Linn, J.; Kressin, J. Quasi-static path optimization for industrial robots with dress packs. Robot. Comput. Integr. Manuf. 2021, 68, 102055. [Google Scholar] [CrossRef]
  84. Nguyen, V.; Melkote, S. Hybrid statistical modelling of the frequency response function of industrial robots. Robot. Comput. Integr. Manuf. 2021, 70, 102134. [Google Scholar] [CrossRef]
  85. Jiao, J.; Tian, W.; Zhang, L.; Li, B.; Hu, J.; Li, Y.; Li, D.; Zhang, J. Variable stiffness identification and configuration optimization of industrial robots for machining tasks. Res. Sq. 2020. [Google Scholar] [CrossRef]
  86. Ding, L.; Jiang, W.; Zhou, Y.; Zhou, C.; Liu, S. BIM-based task-level planning for robotic brick assembly through image-based 3D modeling. Adv. Eng. Inform. 2020, 43, 100993. [Google Scholar] [CrossRef]
  87. Leroux, M.; Raison, M.; Adadja, T.; Achiche, S. Combination of eyetracking and computer vision for robotics control. In Proceedings of the IEEE Conference on Technologies for Practical Robot Applications, TePRA, Woburn, MA, USA, 11–12 May 2015; IEEE Computer Society: Washington, DC, USA, 2015. [Google Scholar]
  88. Xu, Y.; Fang, G.; Lv, N.; Chen, S.; Jia Zou, J. Computer vision technology for seam tracking in robotic GTAW and GMAW. Robot. Comput. Integr. Manuf. 2015, 32, 25–36. [Google Scholar] [CrossRef]
  89. Rojas, R.A.; Garcia, M.A.R.; Gualtieri, L.; Rauch, E. Combining safety and speed in collaborative assembly systems—An approach to time optimal trajectories for collaborative robots. Procedia CIRP 2021, 97, 308–312. [Google Scholar] [CrossRef]
  90. Roveda, L.; Magni, M.; Cantoni, M.; Piga, D.; Bucca, G. Human–robot collaboration in sensorless assembly task learning enhanced by uncertainties adaptation via Bayesian Optimization. Rob. Auton. Syst. 2021, 136, 103711. [Google Scholar] [CrossRef]
  91. Fu, G.; Gu, T.; Gao, H.; Lu, C. A postprocessing and path optimization based on nonlinear error for multijoint industrial robot-based 3D printing. Int. J. Adv. Robot. Syst. 2020, 17, 172988142095224. [Google Scholar] [CrossRef]
  92. Cvitanic, T.; Nguyen, V.; Melkote, S.N. Pose optimization in robotic machining using static and dynamic stiffness models. Robot. Comput. Integr. Manuf. 2020, 66, 101992. [Google Scholar] [CrossRef]
  93. Wang, Z.; Zhang, R.; Keogh, P. Real-Time Laser Tracker Compensation of Robotic Drilling and Machining. J. Manuf. Mater. Process. 2020, 4, 79. [Google Scholar] [CrossRef]
  94. Schultz, U.P. Reversible control of robots. In Reversible Computation: Extending Horizons of Computing (RC 2020); Lecture Notes in Computer Science; Ulidowski, I., Lanese, I., Schultz, U., Ferreira, C., Eds.; Springer: Berlin/Heidelberg, Germany, 2020; Volume 12070, pp. 177–186. [Google Scholar] [CrossRef]
  95. Jiang, J.; Huang, Z.; Bi, Z.; Ma, X.; Yu, G. State-of-the-Art control strategies for robotic PiH assembly. Robot. Comput. Integr. Manuf. 2020, 65, 101894. [Google Scholar] [CrossRef]
  96. Kumar, S.; Singhal, P.; Krovi, V.N. Computer-vision-based decision support in surgical robotics. IEEE Des. Test 2015, 32, 89–97. [Google Scholar] [CrossRef]
  97. Bader, F.; Rahimifard, S. Challenges for industrial robot applications in food manufacturing. In Proceedings of the 2nd International Symposium on Computer Science and Intelligent Control, Stockholm, Sweden, 21–23 September 2018. [Google Scholar]
  98. Grobbelaar, W.; Verma, A.; Shukla, V.K. Analyzing human robotic interaction in the food industry. J. Phys. Conf. Ser. 2021, 1714, 012032. [Google Scholar] [CrossRef]
  99. Sandey, K.K.; Qureshi, M.A.; Meshram, B.D.; Agrawal, A.; Uprit, S. Robotics—An Emerging Technology in Dairy Industry. Int. J. Eng. Trends Technol. 2017, 43, 58–62. [Google Scholar]
  100. Wang, Z.; Or, K.; Hirai, S. A dual-mode soft gripper for food packaging. Rob. Auton. Syst. 2020, 125, 103427. [Google Scholar] [CrossRef]
  101. Blöcher, K.; Alt, R. AI and robotics in the European restaurant sector: Assessing potentials for process innovation in a high-contact service industry. Electron. Mark. 2020, 31, 529–551. [Google Scholar] [CrossRef]
  102. Bader, F.; Rahimifard, S. A methodology for the selection of industrial robots in food handling. Innov. Food Sci. Emerg. Technol. 2020, 64, 102379. [Google Scholar] [CrossRef]
  103. Boschetti, G.; Carbone, G. Advances in Italian Mechanism Science; Springer: Cham, Switzerland, 2017; Volume 18, ISBN 9783030558062. [Google Scholar]
  104. Zhang, B.; Xie, Y.; Zhou, J.; Wang, K.; Zhang, Z. State-of-the-art robotic grippers, grasping and control strategies, as well as their applications in agricultural robots: A review. Comput. Electron. Agric. 2020, 177, 105694. [Google Scholar] [CrossRef]
  105. Chang, C.-L.; Lin, K.-M. Smart Agricultural Machine with a Computer Vision-Based Weeding and Variable-Rate Irrigation Scheme. Robotics 2018, 7, 38. [Google Scholar] [CrossRef] [Green Version]
  106. Patrício, D.I.; Rieder, R. Computer vision and artificial intelligence in precision agriculture for grain crops: A systematic review. Comput. Electron. Agric. 2018, 153, 69–81. [Google Scholar] [CrossRef] [Green Version]
  107. Tankova, T.; da Silva, L.S. Robotics and Additive Manufacturing in the Construction Industry. Curr. Robot. Rep. 2020, 1, 13–18. [Google Scholar] [CrossRef] [Green Version]
  108. Davila Delgado, J.M.; Oyedele, L.; Ajayi, A.; Akanbi, L.; Akinade, O.; Bilal, M.; Owolabi, H. Robotics and automated systems in construction: Understanding industry-specific challenges for adoption. J. Build. Eng. 2019, 26, 100868. [Google Scholar] [CrossRef]
  109. Robinson, G. Global Construction Market to Grow $8 Trillion by 2030: Driven by China, US and India; Global Construction Perspectives and Oxford Economics: London, UK, 2016; Volume 44, pp. 1–3. [Google Scholar]
  110. Aparicio, C.C.; Balzan, A.; Trabucco, D. Robotics in construction: Framework and future directions. Int. J. High-Rise Build. 2020, 9, 105–111. [Google Scholar]
  111. Follini, C.; Magnago, V.; Freitag, K.; Terzer, M.; Marcher, C.; Riedl, M.; Giusti, A.; Matt, D.T. Bim-integrated collaborative robotics for application in building construction and maintenance. Robotics 2021, 10, 2. [Google Scholar] [CrossRef]
  112. Parascho, S.; Han, I.X.; Walker, S.; Beghini, A.; Bruun, E.P.G.; Adriaenssens, S. Robotic vault: A cooperative robotic assembly method for brick vault construction. Constr. Robot. 2020, 4, 117–126. [Google Scholar] [CrossRef]
  113. Kazemian, A.; Yuan, X.; Davtalab, O.; Khoshnevis, B. Computer vision for real-time extrusion quality monitoring and control in robotic construction. Autom. Constr. 2019, 101, 92–98. [Google Scholar] [CrossRef]
  114. Gautam, M.; Fagerlund, H.; Greicevci, B.; Christophe, F.; Havula, J. Collaborative Robotics in Construction: A Test Case on Screwing Gypsum Boards on Ceiling. In Proceedings of the 2020 5th International Conference on Green Technology and Sustainable Development, Ho Chi Minh City, Vietnam, 27–28 November 2020; pp. 88–93. [Google Scholar]
  115. Balzan, A.; Aparicio, C.C.; Trabucco, D. Robotics in construction: State-of-art of on-site advanced devices. Int. J. High-Rise Build. 2020, 9, 95–104. [Google Scholar]
  116. Ghasempourabadi, M.; Taraz, M. Human-robot interaction in construction: A literature review. Malays. J. Sustain. Environ. 2021, 8, 49–74. [Google Scholar]
  117. Bodea, S.; Mindermann, P.; Gresser, G.T.; Menges, A. Additive Manufacturing of Large Coreless Filament Wound Composite Elements for Building Construction. 3D Print. Addit. Manuf. 2021. ahead of print. [Google Scholar] [CrossRef]
  118. Zhang, M.; Yan, J. A data-driven method for optimizing the energy consumption of industrial robots. J. Clean. Prod. 2021, 285, 124862. [Google Scholar] [CrossRef]
  119. Aksoy, S.; Ozan, E. Robots and Their Applications. Int. Res. J. Eng. Technol. 2020. [Google Scholar] [CrossRef] [Green Version]
Figure 1. The annual ratio of publications to newly installed industrial robots.
Figure 2. Human–robot cooperation levels [5]: (a) no collaboration, the robot remains inside a closed work cell; (b) coexistence, removed cells, but separate workspaces; (c) synchronisation, sharing of the workspace, but never at the same time; (d) cooperation, shared task and workspace, no physical interaction; (e) collaboration, operators and robots exchange forces.
Figure 3. Comparison of the operating environment: (a) industrial robots (adapted from [6]); (b) collaborative robots (adapted from [7]).
Figure 4. Relations between robot implementation areas, typical tasks and limitations.
Table 1. Research focused on human–machine interaction.

| Objective | Technology | Approach | Improvement | Ref. |
|---|---|---|---|---|
| To improve the flexibility, productivity and quality of a multi-pass gas tungsten arc welding (GTAW) process performed by a collaborative robot. | A haptic interface; a 6-axis robotic arm (Mitsubishi MELFA RV-13FM-D); an end effector with GTAW torch; a monitoring camera (Xiris XVC-1000); a load cell (ATI Industrial Automation Mini45-E) to evaluate tool force interactions with workpieces. | A haptic-based approach designed and tested in a manufacturing scenario, proposing light and low-cost real-time algorithms for "touch" detection. | Two main criteria were analysed to assess performance: the 3-Sigma rule and the Hampel identifier (see the sketch after this table). Experimental results showed better performance of the 3-Sigma rule in terms of precision percentage (mean value of 99.9%) and miss rate (mean value of 10%) compared with the Hampel identifier. Results confirmed the influence of the contamination level of the dataset. This algorithm adds significant advances enabling the use of light and simple machine learning approaches in real-time applications. | [24,25] |
| To produce more advanced or complex forms of interaction by enabling cobots with semantic understanding capabilities or AI-aided anticipation skills. | Collaborative robots. | Artificial intelligence. | The overview provides hints of future cobot developments and identifies research frontiers related to economic, social, and technological dimensions. | [26] |
| To strike a balance in order to find a suitable level of autonomy for human operators. | Model of Remotely Instructed Robots (RIRs). | Modelling method. | A model was developed in which the robot is autonomous in task execution but also aids the operator's ultimate decision-making about what to do next. Presenting the robot's own model of the work scene enables corrections to be made by the robot and can enhance the operator's confidence in the robot's work. | [27,28] |
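Both touch-detection criteria compared in [24,25], the 3-Sigma rule and the Hampel identifier, can be read as sliding-window outlier tests on the measured force signal. The sketch below illustrates that reading; the window size and the exact thresholding details are assumptions, not the authors' implementation:

```python
import numpy as np

def detect_touch(force, window=50):
    """Flag samples of a 1D force trace that look like tool-workpiece contact.

    Returns indices flagged by the 3-Sigma rule and by the Hampel identifier,
    each computed over the preceding `window` samples.
    """
    force = np.asarray(force, dtype=float)
    hits_sigma, hits_hampel = [], []
    for i in range(window, len(force)):
        w = force[i - window:i]
        x = force[i]
        if abs(x - w.mean()) > 3.0 * w.std():            # 3-Sigma rule
            hits_sigma.append(i)
        med = np.median(w)
        mad = np.median(np.abs(w - med))                 # median absolute deviation
        if abs(x - med) > 3.0 * 1.4826 * mad:            # Hampel identifier
            hits_hampel.append(i)
    return hits_sigma, hits_hampel
```

The Hampel test replaces mean and standard deviation with median and scaled MAD, which is why it behaves differently on contaminated windows, the effect reported in [24,25].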
Table 2. Interfering aspects in human–robot interaction.

| Objective | Interaction | Approach | Solution | Ref. |
|---|---|---|---|---|
| Frustration | Close cooperative work. | Controlled coordination. | Sense of control of frustration; affective computing. | [29] |
| Emotion recognition | Collecting different kinds of data. | Discrete models describing emotions, facial expression analysis, camera positioning. | Affective computing; empowering robots to observe, interpret and express emotions; endowing robots with emotional intelligence. | [30] |
| Decoding of action observation | Elucidating the neural mechanisms of action observation and intention understanding. | Decoding the underlying neural processes. | The dynamic involvement of the mirror neuron system (MNS) and the theory of mind (ToM)/mentalising network during action observation. | [31] |
| Verbal and non-verbal communication | Interactive communication. | Symbol grounding. | Composition of grounded semantics, online negotiation of meaning, affective interaction and closed-loop affective dialogue, mixed speech-motor planning, massive acquisition of data-driven models for human–robot communication through crowd-sourced online games, and real-time exploitation of online information and services for enhanced human–robot communication. | [32] |
Table 4. Robotic solutions in medical applications.

| Objective | Technology | Approach | Improvement | Ref. |
|---|---|---|---|---|
| Create a bridge between the reinforcement learning and surgical robotics communities by presenting the first open-sourced reinforcement learning environments for surgical da Vinci robots. | Patient Side Manipulator (PSM) arm; da Vinci® Surgical Robot; Large Needle Driver (LND) with a jaw gripper to grab objects such as a suturing needle. | Reinforcement learning; OpenAI Gym; DDPG (Deep Deterministic Policy Gradients) and HER (Hindsight Experience Replay); V-REP physics simulator. | Developed a new reinforcement learning environment for fast and effective training of surgical da Vinci robots for autonomous operations (a toy environment in this spirit is sketched after this table). | [63] |
| A method of shared control where the user controls a prosthetic arm using a brain–machine interface and receives assistance with positioning the hand when it approaches an object. | Brain–machine interface system; robotic arm; RGB-D camera mounted above the arm base. | Shared control system; an autonomous robotic grasping system. | A shared control system for a robotic manipulator, making control more accurate, more efficient, and less difficult than unassisted control. | [66] |
| A personalised deep learning framework that can adapt robot perception of children's affective states and engagement to different cultures and individuals. | Unobtrusive audiovisual sensors and wearable sensors providing the child's heart rate, skin conductance (EDA), body temperature, and accelerometer data. | Feed-forward multilayer neural networks; GPA-net. | Achieved an average agreement of ~60% with human experts when estimating affect and engagement. | [68] |
| An overview of existing applications and concepts of robotic systems and artificial intelligence in dentistry, of functional capacity evaluations, of the role of ML in surgery using surgical robotics, and of deep learning vis-à-vis physical robotic systems, focused on contemporary research. | An overview | An overview | An overview | [64,65,67,69] |
| Transoral robot towards COVID-19 swab sampling. | Flexible manipulator; an endoscope with a monitor; a master device. | Teleoperated configuration for swab sampling. | A flexible transoral robot with a teleoperated configuration is proposed to address surgeons' risks during face-to-face COVID-19 swab sampling. | [77] |
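The environments of [63] expose the simulated da Vinci manipulators through the OpenAI Gym interface so that agents such as DDPG + HER can be trained against them. As a purely illustrative sketch of what such an interface looks like, the toy goal-reaching environment below uses the classic Gym reset/step signature (pre-0.26); the dynamics, reward, and thresholds are invented for illustration and are not the API of [63]:

```python
import gym
import numpy as np
from gym import spaces

class ToyReachEnv(gym.Env):
    """Toy goal-reaching task with the classic Gym reset/step interface."""

    def __init__(self):
        self.action_space = spaces.Box(-1.0, 1.0, shape=(3,), dtype=np.float32)
        self.observation_space = spaces.Box(-np.inf, np.inf, shape=(6,), dtype=np.float32)
        self.tip = np.zeros(3, dtype=np.float32)    # instrument tip position
        self.goal = np.zeros(3, dtype=np.float32)   # target position

    def reset(self):
        self.tip = np.zeros(3, dtype=np.float32)
        self.goal = np.random.uniform(-0.5, 0.5, 3).astype(np.float32)
        return np.concatenate([self.tip, self.goal])

    def step(self, action):
        self.tip = self.tip + 0.05 * np.clip(action, -1.0, 1.0)
        dist = float(np.linalg.norm(self.tip - self.goal))
        reward = -dist                              # dense distance-based reward
        done = dist < 0.02                          # success threshold reached
        return np.concatenate([self.tip, self.goal]), reward, bool(done), {}
```

Keeping the goal in the observation is what makes goal-relabelling schemes such as HER applicable.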
Table 5. Research focused on path planning and optimisation.

| Objective | Technology | Approach | Improvement | Ref. |
|---|---|---|---|---|
| The position of objects: finding a possible trajectory to an object in real time. | A robotic system consisting of an ABB IRB120 robot equipped with a gripper and a 3D Kinect sensor. | Detection of the workpieces; object recognition techniques applied using available algorithms in MATLAB's Computer Vision and Image Acquisition Toolboxes. | An algorithm for finding the 3D object position from colour segmentation in real time, with the main focus on finding the depth of an object from the Kinect sensor. The Kinect could distinguish colour correctly, and the robot could accurately navigate to the detected object. | [82] |
| The combination of eye tracking and computer vision to automate the approach of a robot to its target point by acquiring its 3D location. | Eye-tracking device; webcam. | Image analysis and geometrical reconstruction. | The computed coordinates of the target's 3D localisation have an average error of 5.5 cm, which is 92% more accurate than eye tracking alone for point-of-gaze calculation, with an estimated error of 72 cm. | [87] |
| Computer vision technology for real-time seam tracking in robotic gas tungsten arc welding (GTAW). | GTAW welding robot: the robot arm, the robot controller, the vision system, the isolation unit, the weld power supply, and the host computer; passive vision system. | Passive vision system; image processing. | The developed method is feasible and sufficient to meet the specific precision requirements of some applications in robotic seam tracking. | [88] |
| A higher-fidelity model for predicting the entire pose-dependent FRF of an industrial robot by combining the advantages of Experimental Modal Analysis (EMA) with Operational Modal Analysis (OMA) for milling processes. | KUKA KR500-3 6-DOF industrial robot. | Hybrid statistical modelling: Frequency Response Function (FRF) modelling method. | A Bayesian inference and hyperparameter-updating approach for updating the EMA-calibrated GPR models of the robot FRF with OMA-based FRF data improved the model's compliance RMSE by 26% and 27% in the x- and y-direction tool paths, respectively, compared with EMA-based calibration alone. The methodology reduced the average number of iterations and the calibration times required to determine the optimal GPR model hyperparameters by 50.3% and 31.3%, respectively. | [84] |
| Safe trajectories without neglecting cognitive ergonomics and production efficiency aspects. | UR3 lightweight robot. | Experimental tasks. | The task's execution time was reduced by 13.1% with respect to the robot's default planner and by 19.6% with respect to the minimum-jerk smooth collaboration planner. This new approach is highly relevant for manufacturers of collaborative robots (e.g., for integration as a path option in the robot pendant software) and for users (e.g., an online service for calculating the optimal path and its subsequent transfer to the robot). | [89] |
| An industrial robot moving between stud welding operations in a stud welding station. | Industrial robot. | Quasi-static path optimisation for an industrial robot. | The method was successfully applied to a stud welding station with an industrial robot moving between two stud welding operations. Even in a difficult case, the optimised path reduced the internal force in the dress pack and kept the dressed robot clear of the surrounding geometry with a prescribed safety clearance during the entire robot motion. | [83] |
| An industrial assembly task for learning and optimisation, considering uncertainties. | A Franka EMIKA Panda manipulator. | Task trajectory learning approach; task optimisation approach. | The proposed approach made the robot learn the task execution and compensate for the task uncertainties. The HMM + BO methodology was compared with the HMM algorithm without optimisation, showing the capability of the optimisation stage to compensate for task uncertainties. In particular, the HMM + BO methodology achieved an assembly task success rate of 93%, while the HMM algorithm alone achieved only 19%. | [90] |
| Postprocessing and path optimisation based on non-linear errors to improve the accuracy of multi-joint industrial robot-based 3D printing. | Multi-joint industrial robot for 3D printing. | Path smoothing method (a generic path-smoothing sketch follows this table). | Multi-joint industrial robot-based 3D printing can be used for high-precision printing of complex freeform surfaces; with an industrial robot using only three joints, solutions of the joint angles for tool orientations, essential for printing freeform surfaces, are not proposed. | [91] |
| A comparative study of robot pose optimisation using static and dynamic stiffness models for different cutting scenarios. | KUKA KR 500-3 industrial robot; aluminium 6061 workpiece. | Complete pose (CP) and decoupled partial pose (DPP) methods; effect of the optimisation method on machining accuracy. | Dynamic model-based robot pose optimisation yields significant improvement over static model-based optimisation for cutting conditions where the time-varying cutting forces approach the robot's natural frequencies. A static model-based optimisation is sufficient when the frequency content of the cutting forces is not close to the robot's natural frequencies. | [92] |
| The feasibility and validity of the proposed stiffness identification and configuration optimisation methods. | KUKA KR500 industrial robot. | Robot stiffness characterisation and optimisation methods; point selection method. | The smooth processing strategy improves optimisation efficiency while ensuring minimal stiffness loss. Machining results for a vehicle-engine cylinder head show that milling quality improved markedly after configuration optimisation, verifying the validity of these methods. | [85] |
| Real-time compensation setups. | A standard KUKA KR120R2500 PRO industrial robot with a spindle end-effector. | Real-time closed-loop compensation method. | Real-time metrology feedback cannot fully compensate for the sudden error spikes caused by backlash. The mitigation strategy of automatically reducing the feed rate (ASC) was demonstrated to reduce backlash error significantly; however, ASC considerably increases cycle time for toolpaths involving many direction reversals and leads to uneven cutter chip load and variation in surface finish. Backlash therefore remains the largest source of residual error for a robot under real-time metrology compensation. | [93] |
| A Building Information Model (BIM)-based robotic assembly model that contains all the required information for planning. | ABB IRB6700-235 robot (6 DOF); a construction plane (approximately 1.5 m × 0.9 m); a scene-modelling camera (Sony a5100); a modelling computer (Dell Precision). | Image-based 3D modelling method; experimental method. | A general IFC model for robotic assembly contains all the information needed for task-level planning; BIM and image-based modelling are used to calibrate the robot pose, unifying the robot coordinate system, construction area, and assembly task; a simple conversion process converts the 3D placement coordinates of each brick into robot control instructions. In experimental verification, task-level planning maintained the same accuracy as the traditional method but saved time on more complex tasks. | [86] |
| A model of reversibly controlled industrial robots based on abstract semantics. | Robotic assembly. | Error recovery using reverse execution. | A programming model that enables robot assembly programs to be executed in reverse. Temporarily switching the direction of program execution can be an efficient error-recovery mechanism. Additional benefits arise from supporting reversibility in a robotic assembly language, namely increased code reuse and automatically derived disassembly sequences. | [94] |
| The control strategies for robotic PiH assemblies and the limitations of current robotic assembly technologies. | Robotic PiH assembly. | Typical peg-in-hole (PiH) assembly methods. | The system outperforms an operator performing the same task with magnified visual feedback in terms of both completion time and the number of successful insertions. The proposed strategies can correctly diagnose position errors in the assembly process and effectively realise error recovery. | [95] |
| An overview of computer vision for the preoperative, intraoperative, and postoperative surgical stages, assisting with planning, tool detection, identification, pose tracking, and augmented reality, and supporting surgical skill assessment and retrospective analysis of the procedure. | An overview | An overview | An overview | [96] |
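Several rows above ([83,90,91]) revolve around reshaping a discrete path or trajectory against competing objectives. As a generic, minimal illustration of the idea (not the method of any cited paper), the sketch below iteratively pulls interior waypoints toward both their original positions and the midpoint of their neighbours, with assumed weights:

```python
import numpy as np

def smooth_path(path, w_data=0.5, w_smooth=0.1, tol=1e-6):
    """Iteratively smooth a waypoint path (array of shape (K, D)).

    w_data pulls points toward the original waypoints; w_smooth pulls each
    point toward the midpoint of its neighbours. Endpoints stay fixed.
    """
    path = np.asarray(path, dtype=float)
    new = path.copy()
    change = tol + 1.0
    while change > tol:
        change = 0.0
        for i in range(1, len(path) - 1):
            old = new[i].copy()
            new[i] += w_data * (path[i] - new[i]) \
                    + w_smooth * (new[i - 1] + new[i + 1] - 2.0 * new[i])
            change += float(np.abs(new[i] - old).sum())
    return new

# Example: softening the kink in a 2D zig-zag path.
waypoints = np.array([[0, 0], [1, 0], [2, 1], [3, 0], [4, 0]], dtype=float)
print(smooth_path(waypoints))
```

Real planners replace the two weights with physically meaningful costs (jerk, clearance, internal forces), but the trade-off structure is the same.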
Table 6. Research focused on the food industry.

| Objective | Technology | Approach | Improvement | Ref. |
|---|---|---|---|---|
| The applications of industrial robots in the food industry and their automation prospects; a four-step Food Industrial Robot Methodology for selecting industrial robots for food processing operations. | Articulated robot; parallel robot; Cartesian robot. | The four steps within the Food Industrial Robot Methodology (FIRM). | The FIRM presented in this paper outlined the ability to classify industrial robot capabilities and match them to the specific characteristics of foodstuffs and the requirements for their processing, based on four steps that navigate eight tasks (a toy weighted-scoring illustration follows this table). This work also identified many factors that should lay the groundwork for future research on industrial robots in food manufacturing. | [102] |
| Identification, analysis, and understanding of robotics in one of the largest sectors, the food chain. | Robots in the food chain. | Case study of a delivery bot. | The emergence of robotics in business is seen widely across the world; however, trust in human–robot interaction appears to be underdeveloped. Reducing the number of repetitive jobs by replacing them with robots is not eliminating jobs but paving the way for more intelligent jobs. | [98] |
| Maximise performance by utilising fewer resources. | Dual-mode soft gripper for food packaging. | Grasp-and-suck process for various types of objects weighing up to 1 kg. | The proposed dual-mode gripper can perform grasping and suction functions for multiple types of food products. Possible further improvements are automatic switching of the gripper finger configuration and distance adjustment. | [100] |
| Challenges in the application of industrial robots in the food industry. | An overview | An overview | An overview | [97] |
| Path planning optimisation technique in the food industry. | EPSON T6 SCARA robot. | The proposed optimisation technique is based on the use of an off-axis tool. | This path optimisation technique shortens the cycle time and reduces energy consumption. | [103] |
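As a loose illustration of the decision-matrix idea underlying selection methodologies such as the FIRM of [102] (the criteria, weights, and scores below are invented, and the actual four-step FIRM is considerably richer), robot types can be ranked by a weighted sum of per-criterion scores:

```python
def rank_robots(candidates, weights):
    """Rank robots by a weighted sum of per-criterion scores (0..1)."""
    totals = {
        name: sum(weights[c] * score for c, score in scores.items())
        for name, scores in candidates.items()
    }
    return sorted(totals.items(), key=lambda kv: kv[1], reverse=True)

# Invented scores: payload capability, washdown/hygiene rating, cycle speed.
robots = {
    "articulated": {"payload": 0.9, "hygiene": 0.5, "speed": 0.7},
    "parallel":    {"payload": 0.4, "hygiene": 0.8, "speed": 0.9},
    "cartesian":   {"payload": 0.8, "hygiene": 0.6, "speed": 0.5},
}
weights = {"payload": 0.3, "hygiene": 0.4, "speed": 0.3}
print(rank_robots(robots, weights))   # best match for a hygiene-critical line first
```

With these invented numbers the parallel (delta-style) robot ranks first, which mirrors the prevalence of such robots in fast, hygiene-critical picking lines.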
Table 7. Research focused on agricultural applications.

| Objective | Technology | Approach | Improvement | Ref. |
|---|---|---|---|---|
| The potential applications in agriculture, presenting a variety of manipulators and various forms of sensors. | Parallel grippers, angular grippers, and biologically inspired grippers manufactured by Festo; various sensors. | Application methods. | State-of-the-art robotic grippers, grasping and control strategies, and their applications in agricultural robots; applications of robotic grippers in food, agricultural, and bio-system engineering were summarised in detail. | [104] |
| A scheme that combines computer vision and multi-tasking processes to develop a small-scale smart agricultural machine that can automatically weed and perform variable-rate irrigation within a cultivated field. | The machine frame, the weeding and watering mechanisms, the image and soil-moisture sensors, the actuator, and the graphical user interface (GUI). | Image processing methods such as HSV (hue (H), saturation (S), value (V)) colour conversion, threshold estimation during binary image segmentation, and morphology operators (illustrated in the sketch after this table); fuzzy logic; multi-tasking processes. | The system can classify plants and weeds in real time with an average classification rate of 90% or higher. This allows the machine to weed and water while maintaining the moisture content of the deep soil at 80 ± 10%, with an average weeding rate of 90%. | [105] |
| A systematic overview aiming to identify the applicability of computer vision in precision agriculture for the five most-produced grains in the world (maize, rice, wheat, soybean, and barley), covering different approaches to disease detection, grain quality, and phenotyping. | An overview | An overview | An overview | [106] |
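The weeding machine of [105] builds on HSV colour conversion, threshold-based binary segmentation, and morphology operators. The OpenCV sketch below illustrates that generic pipeline; the threshold band and kernel size are assumptions, not values taken from [105]:

```python
import cv2
import numpy as np

def green_mask(bgr_image):
    """Segment green vegetation via HSV thresholding plus morphology."""
    hsv = cv2.cvtColor(bgr_image, cv2.COLOR_BGR2HSV)
    lower = np.array([35, 60, 40], dtype=np.uint8)     # assumed lower HSV bound
    upper = np.array([85, 255, 255], dtype=np.uint8)   # assumed upper HSV bound
    mask = cv2.inRange(hsv, lower, upper)              # binary segmentation
    kernel = np.ones((5, 5), np.uint8)
    mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, kernel)   # remove speckle noise
    mask = cv2.morphologyEx(mask, cv2.MORPH_CLOSE, kernel)  # fill small holes
    return mask
```

Thresholding in HSV rather than RGB makes the segmentation far less sensitive to outdoor illumination changes, which is why the conversion step matters in field robotics.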
Table 8. Research focused on implementing robots in the construction and civil engineering industry.

| Objective | Technology | Approach | Improvement | Ref. |
|---|---|---|---|---|
| A novel fabrication process for the assembly of full-scale masonry vaults without falsework. | Two industrial robotic arms (ABB 4600 2.55); the prototype of a robotically assembled brick vault. | The fabrication method is based on a cooperative assembly approach in which two robots alternate between placement and support, first building a stable central arch. | Cooperative robotic assembly methods can be applied to constructing a spanning structure without temporary falsework. Where traditional manufacturing techniques require geometric guides, this project shows how the robots' precision can instead be leveraged to place bricks accurately in bespoke orientations. | [112] |
| Computer vision for real-time extrusion quality monitoring during robotic building construction. | Laboratory-scale concrete printer; Logitech 720p camera capturing extrusion videos, processed in real time by a Raspberry Pi 3B. | OpenCV library; adapted shape-based approach; Gaussian filter. | The developed system can print up to ten 120 cm long concrete layers. It uses an extrusion mechanism similar to the Contour Crafting machine to print layers with a height of 3.81 cm and a width of 2.54 cm from concrete and mortar at different linear speeds (up to 10 cm/s) and deposition rates. The vision system detected all designed variation levels (±5 to ±15 L/m³ change of water in the mixture). In terms of accuracy and responsiveness to material variations, the experimental results imply excellent potential for computer vision in automated quality monitoring of construction-scale 3D printing (a simplified width check is sketched after this table). | [113] |
| The possibilities of applying lightweight cobots to individual tasks in the construction sector. | Lightweight robotics combined with 3D printing technology, providing the rapid-prototyping advantage of testing ideas and applications. | The simplest visual system was used, following a simplified approach that can be controlled directly by a robot controller. | Future research on increasing the dynamics of torsional tasks using a mobile robot with a scissor lift could result in the cobot and mobile platform covering the entire construction area. | [114] |
| To determine whether improved robotic technologies have also been adopted in the building industry. | An overview | An overview | An overview | [115] |
| To determine how robotic automation can help the construction industry. | A common framework for current technological innovation in this field, together with an outlined development plan. | The projected impacts on traditional processes, construction sites, emerging technologies, and related professions are summarised to identify future implications and directions toward self-sufficiency. | Artificial intelligence must be a success factor in the involvement of robotic devices in the construction industry. | [110] |
| A systematic overview of human–robot interactions concerning various types of robots. | Human–robot interaction; human–robot cooperation (HRC). | An overview | Further investigation of multi-function robots, human–robot interaction in robotic fabrication, and multipurpose robots. | [116] |
| To fully describe feedback based on sensor-informed programs for process monitoring and for fabrication data collection and analysis. | Additive manufacturing. | An overview | Effective robotic production still requires the communication and management of progressively improving materials and building systems. | [117] |
| Application of a Building Information Modelling (BIM) method for efficient and simple deployment of robot systems for building construction and operation. | BIM-integrative, collaborative robotics. | The robot is provided with a priori geometric and semantic information about the environment with the help of the BIM system. | Future improvements consist of assessing the actual applicability of the system on construction sites and closing the gap between robotic systems and the construction site. | [111] |
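At its core, the vision-based quality monitoring of [113] amounts to measuring the extruded layer's width in each frame and flagging deviations. The sketch below is a simplified version of such a width check; the binary mask is assumed to come from an upstream segmentation step, the 25.4 mm nominal width matches the layer width reported above, and the tolerance is invented:

```python
import numpy as np

def check_extrusion(mask, mm_per_px, nominal_mm=25.4, tol_mm=2.0):
    """Check mean extrusion width in a binary layer mask against a nominal value.

    mask: 2D array, nonzero where the extruded layer is visible.
    Returns (within_tolerance, mean_width_mm).
    """
    widths_px = (np.asarray(mask) > 0).sum(axis=1)   # extrudate pixels per image row
    widths_px = widths_px[widths_px > 0]             # ignore rows without extrudate
    if widths_px.size == 0:
        return False, 0.0                            # nothing detected in this frame
    mean_mm = float(widths_px.mean() * mm_per_px)
    return abs(mean_mm - nominal_mm) <= tol_mm, mean_mm
```

A production system would smooth this per-frame estimate over time and couple it to the extrusion rate controller, but the measurement principle is the same.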
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.
