Article

A Connective Framework for Social Collaborative Robotic System

by Syed Osama Bin Islam * and Waqas Akbar Lughmani
Department of Mechanical Engineering, Capital University of Science and Technology (CUST), Islamabad Expressway, Zone 5, Islamabad 45750, Pakistan
* Author to whom correspondence should be addressed.
Machines 2022, 10(11), 1086; https://doi.org/10.3390/machines10111086
Submission received: 8 October 2022 / Revised: 3 November 2022 / Accepted: 4 November 2022 / Published: 17 November 2022

Abstract:
Social intelligence in robotics appeared quite recently in the fields of artificial intelligence (AI) and robotics. It is becoming increasingly evident that social and interaction skills are essential in any application where robots need to interact with humans. As workspaces have transformed into fully shared spaces for performing collaborative tasks, human–robot collaboration (HRC) poses many challenges to the nature of interactions and social behavior among the collaborators. The complex dynamic environment, coupled with uncertainty, anomalies, and threats, raises questions about the safety and security of the cyber-physical production system (CPPS) in which HRC is involved. Interactions in the social sphere include both physical and psychological safety issues. In this work, we propose a connective framework that can quickly respond to changes in the physical and psychological safety state of a CPPS. The first layer executes the production plan and monitors changes through sensors. The second layer evaluates the situations in terms of their severity, expressed as anxiety, by applying a quantification method supported by a knowledge base. The third layer responds to the situations through the optimal allocation of resources. The fourth layer decides on the actions to mitigate the anxiety through the resources suggested by the optimization layer. Experimental validation of the proposed method was performed on industrial case studies involving HRC. The results demonstrate that the proposed method improves the decision-making of a CPPS experiencing complex situations, ensures physical safety, and effectively enhances the productivity of the human–robot team by leveraging psychological comfort.

1. Introduction

Conventionally, industrial robots were equipped with only nominal sensing and intelligence, and the tasks assigned to them were typically repetitive in nature [1]. However, recent developments in the industry have spawned a wide range of robots. Upgraded capabilities are now common in services [2,3], exploration and rescue [4,5], and therapy operations [6,7]. The interaction of humans and robots in the above-mentioned fields has given rise to the specialized field of human–robot interaction (HRI) [8]. The field of HRI is now transforming into social HRI and is of particular importance. The interactions in this field include cognitive, social, and emotional activities with the robots [9]. Moreover, the new term human–machine collaboration (HMC) arises in this context, with human–robot collaboration (HRC) referring specifically to robots. Chandrasekaran et al. [10] described HRC as a measure for improving the task performance of the robot while reducing the tasks of humans. A shared space is used by humans and robots while performing collaborative tasks, and it is expected that they attain a common objective while conforming to the rules of social interaction. These robots perform joint actions, obey rules of social interaction (such as proxemics), and still act efficiently and legibly [11]. Since these interactions take place continuously during collaborative human and robot tasks, they place strong demands on the collaborators’ personal and environmental safety. In addition to handling physical contact, the control system of the human–robot collaboration system must cater to uncertainties and ensure stability. Mead et al. [12] identified three types of requirements for HRI commonly used by computer models of proxemics:
  • Physical requirements, based on collaborators’ distance and orientation;
  • Psychological requirements, based on the collaborators’ relationship;
  • Psychophysiological requirements, based on the sensory experience through social stimuli.
The broad categories of safety for the robotic collaborative system in the social domain are physical and psychological safety. The former provides safety from physical hazards, whereas the latter protects against psychological discomforts such as close interaction with machines, monotonous operations, or deviations from the task.
Different physical safety protocols for HRC-based CPSs have been presented [13,14,15,16], whose activation depends on the proximity between the cobot and the human operator. Collision avoidance is a common solution to provide physical safety, which includes avoiding unwanted contact with people or environmental obstacles [17]. These techniques depend on the measurement of the distance between the robot and the obstacle [16,18]. Motion planning techniques are one of the key strategies to estimate collision-free trajectories, such as the configuration × time–space method [19]; collision-free vertices [20,21]; representing objects as spheres and exploring a collision-free path [22]; the potential field method [23]; and virtual spring and damping methods [24]. Unfortunately, collision avoidance can fail because of sensor and robotic movement limitations, as human actions are sometimes quicker than robotic actions. It is nevertheless feasible to sense the bodily collision and counteract it [25,26,27], which allows the robot to be withdrawn from the contact area. In such circumstances, robots can use variable stiffness actuation [28,29] when effectively controlled [30,31], and lightweight robots with compliant joints [32] may be used to lessen the impact forces on contact. Lately, a multi-layered neural network method involving the dynamics of the manipulator joints (measurement of torque through sensors and intrinsic joint positions through kinematics) has been used [33] to identify the location of the collision on the robot (the collided link). While accidents due to sudden contact between humans and robots may be limited by designing lightweight/compliant mechanical manipulators [30] and collision detection/response strategies [27], collision avoidance in complex, unpredictable, and dynamic environments still depends mainly on the employment of exteroceptive sensors.
From the perspective of HRI, psychological safety means ensuring interactions that do not cause undue stress and discomfort in the long run. Therefore, a robot that can sense the fatigue of the worker with whom it collaborates is able to take essential safeguards to avoid accidents. An illustration was demonstrated in [34]: a human–computer integration in which biosensors were mounted on the operator, and physiological signals (such as anxiety) were measured and transmitted to the robot. Researchers aim to build robots that respond to emotional information received from the operator, an approach known as “affective computing” [35]. In [36,37,38], measurements of cardiac activity, skin conductivity, respiration, and eye muscle activity in conjunction with fuzzy inference tools were used to change the robot’s trajectory for safe HRC. Recently, Chadalavada et al. [39] demonstrated how a robot (an automated forklift) could show its intentions using spatial augmented reality (SAR) so that humans can visually understand the robot’s navigational intent and feel safe next to it.
It now seems essential that methods ensuring both physical and psychological safety be developed for HRC. Lasota et al. [40] introduced a distinct concept of combining both physical and psychological safety for safe HRC. By measuring the distance between the robot and the human in real time, they precisely controlled the speed of the robot at low separation distances for collision avoidance and stress relief. In another paper [14], the same authors established via quantitative metrics that human-aware motion planning leads to more effective HRC. Lately, Dragan et al. [41] described how psychological safety is ensured by legibility; the operator may feel more comfortable if they can judge the robot’s intention by its motion.
Humans, robots, and machines now work within the ambit of cyber-physical systems (CPSs) under the umbrella of Industry 4.0. These are smart systems that contain both physical and computational elements. The concept involves decision-making through the real-time evaluation of data collected from interconnected sensors [42]. Monostori [43] proposed the overall automation of these layers for production systems, interconnecting the necessary physical elements such as machines, robots, sensors, and conveyors through computer science, and termed it a cyber-physical production system (CPPS). Green et al. [44] highlighted two important components of an active system of collaboration between humans and robots. First, the adjustable autonomy of the robotic system is an essential part of an effective collaboration that enhances productivity. Second, awareness of situations, or knowing what is happening in the robot workspace, is also essential for collaboration. The effectiveness of HRC depends on the effective monitoring of human and environmental actions and the use of AI to predict the actions and states of mind of the humans contributing to the task. Lemaignan et al. [11] emphasized the architecture of the decision layer of social robots. Their paper attempts to characterize the challenges and presents a series of critical decision problems that must be solved for cognitive robots to successfully share space and tasks with humans. In this context, the authors identified logical reasoning, perception-based situational assessment, affordance analysis, representation and acquisition of knowledge-based models for the involved actors (humans, robots, etc.), multimodal dialogue, human-aware mission planning, and HRC task planning.
As industry transforms from automation to intelligence, a need is felt to extend the concept of psychological safety from humans to CPSs. Psychological safety is perceived to be as essential for an intelligent CPS as physical safety [45]. Although interactions among human operators and the physical and computational layers of a CPS can be quite demanding in terms of cognitive resources, the psychological aspects of safety are not extensively taken into account by existing systems. Moreover, there is no connective framework that assesses and counters both the physical and psychological issues of a CPPS in a social domain. Flexible CPPSs are therefore required that counter physical and psychological uncertainties by defining contingencies. This indicates the requirement for an efficient and reliable framework for next-generation CPPSs:
  • A framework that can quantify both the physical and psychological safety of the CPPS;
  • A framework that can assess the CPPS’s current state and provide a thinking base accordingly to make it flexible and safe;
  • A framework that can decide, optimize and control based on the situational assessment.
Our research focused on addressing both the physical and psychological issues faced by a CPPS. We propose a layered framework for knowledge-based decision-making in a CPS. The CPS, at the core, performs the desired operations through the interactions between its physical component (PC), computer component (CC), and human component (HC). A situational assessment layer is proposed above this central layer to assess the anxiety of the situations faced. A matching score is calculated to express the relevance of each situation’s anxiety to each resource; we named it the “anxiety factor”. The third layer, above the situational assessment layer, is the resource optimization layer. This layer optimizes the allocation of available resources through an optimization algorithm using the evaluated matching scores, the objective function, and the defined constraints. The last is the logic-based decision-making layer. This layer embeds predefined logic to decide on complex situations, thereby tasking different resources using the experts’ knowledge, the evaluated optimization, and the calculated anxieties. The logic remains specific to each individual case/situation embedded in a CPS scenario. The proposed framework is validated through experimental case studies facing several situations.

2. Methodology

The connective framework and an overview of its execution are shown in Figure 1. We propose a knowledge-based modular software system in which different modules represent different layers of the proposed framework. However, the number of modules is not fixed and may vary as per the requirements of a particular case. The data of all the layers are stored in a central database.
The basic modules are the main module for the sensing and process control layer of the CPS, the anxiety module for the situational assessment layer, the optimization module for the resource optimization layer, and the decision-making module for the decision-making layer. The general connectivity of the modules is shown in Figure 2.
If multiple changes are faced, the matching score is ascertained for each situation through the anxiety module. Accordingly, the situations with the higher matching scores (anxiety factors) are assigned the best possible resources through the optimization module. The contingencies to handle the confronted situations are looked after by the decision-making module. In this context, the outputs of the anxiety and optimization modules are fed to the decision-making module, which decides on the assignment of resources to the tasks based on the defined logic. The terms anxiety and anxiety factor are defined, and the method for their estimation is stated, in the remainder of this section.
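As an illustration only, the following minimal Python sketch mirrors the per-iteration module flow described above. The module names, scores, and behaviours are hypothetical stand-ins rather than the actual implementation, and the optimization layer is replaced here by a greedy placeholder; the MIP formulation is given in Section 2.3.

```python
# Self-contained sketch of the per-iteration flow: sense -> assess anxiety ->
# allocate resources -> decide. All values and behaviours are illustrative stubs.
from typing import Dict, List

def detect_situations() -> List[str]:                 # sensing/process-control layer stub
    return ["wrong item", "unidentified person"]

def anxiety_module(situations: List[str]) -> Dict[str, Dict[str, float]]:
    # stand-in matching scores a_rs per (resource, situation)
    return {"cobot": {"wrong item": 58.0, "unidentified person": 0.0},
            "human": {"wrong item": 57.0, "unidentified person": 99.0}}

def optimization_module(scores: Dict[str, Dict[str, float]]) -> Dict[str, str]:
    # greedy placeholder for the resource optimization layer:
    # each resource takes its best still-unhandled situation with a positive score
    assignment, taken = {}, set()
    for resource, row in scores.items():
        best = max((s for s in row if s not in taken), key=row.get, default=None)
        if best is not None and row[best] > 0:
            assignment[resource] = best
            taken.add(best)
    return assignment

def decision_making_module(assignment: Dict[str, str]) -> None:
    for resource, situation in assignment.items():    # apply predefined contingencies
        print(f"{resource} tasked to handle: {situation}")

situations = detect_situations()
scores = anxiety_module(situations)
decision_making_module(optimization_module(scores))
```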

2.1. Anxiety of Cyber-Physical Systems

Anxiety is the human body’s natural response to stress. It is a feeling of fear or apprehension about what is to come, also recognized as the unpleasant state arising when an expectation is not achieved due to a stressful, dangerous, or unfamiliar situation. The central task of the overall system in the brain is to compare, quite generally, actual with expected stimuli. We elaborate on this term because it should not be confused with risk. Risk is based on hazard, whereas “anxiety can be defined as an urge to perform any job either to avoid a hazard or to do a righteous job”. When multiple situations are confronted, all generate anxiety, but there are limited resources to handle each. Therefore, we defined anxiety so as to make it scalable for CPSs. There may be different levels of anxiety generated by situations. The module is initialized with the results of an Ishikawa analysis, which assigns an index to each anticipated situation. After the initial indexing is performed, a novel and intelligent technique based on medical knowledge categorizes situations into different anxiety types. Each type relates to a particular level of severity.

2.1.1. Categorization of Anxiety

The criterion for categorizing and indexing the anxiety of different situations faced by the CPS is defined in Table 1. The categories are named after the medical anxiety conditions whose characteristics they match. Individual severity depends on the category and the repeatability of the situation. It can be said that a “quantification” of psychological safety for the CPS is being performed. The anxiety for a situation is the sum of the category’s lowest severity limit and the index “I” estimated through Ishikawa analysis, scaled to the width of the category’s severity band (Table 1). The value of I ranges from 0 to 100. The detailed procedure for calculating index I is explained in [46].
Total severity is also calculated for a particular instance, and an alarm is raised if it crosses a certain limit. It is the sum of the severities of all situations emerging in a single iteration:
Total severity = G + P + O + T + K + N        (1)
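For illustration, the category formulas of Table 1 and the total severity of Equation (1) can be expressed as a short Python sketch. The function names are hypothetical, the Ishikawa index I is assumed to be pre-computed (0 to 100), and the obsession category (O) from Equation (1) is omitted because its mapping is given in Table 1 of the paper rather than reconstructed here.

```python
# Minimal sketch of the Table 1 anxiety mapping; not the authors' implementation.

def category_anxiety(category: str, I: float) -> float:
    """Map a situation's category and Ishikawa index I (0..100) to its anxiety A."""
    if category == "panic":            # P: severity > 80 to 100
        return 80 + I * 20 / 100
    if category == "post_traumatic":   # T: severity > 60 to 80
        return 60 + I * 20 / 100
    if category == "specific_phobia":  # K: severity > 20 to 60
        return 20 + I * 40 / 100
    if category == "general":          # G: intended situation, fixed at 20
        return 20
    if category == "social_norm":      # N: severity > 0 to 20
        return 0 + I * 20 / 100
    # obsession (O) intentionally omitted in this sketch; see Table 1
    raise ValueError(f"unknown category: {category}")

def total_severity(active):            # active: list of (category, I) pairs in one iteration
    return sum(category_anxiety(c, i) for c, i in active)

# e.g. one panic-level and one social-norm situation in the same iteration
print(total_severity([("panic", 75.0), ("social_norm", 30.0)]))   # 95.0 + 6.0 = 101.0
```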

2.1.2. Anxiety Factor Calculation

The anxiety factor, i.e., the matching score, is then calculated through a mathematical model dependent on the category and the variables related to tasks/resources. The procedure is repeated at each iteration and the situations faced are analyzed. The anxiety factor for a corresponding resource and a situation is calculated as:
a_rs = Q_rs [A_rs + (p_rs − t_rs)]        (2)
Anxiety “A” is the parameter defined by the category of the particular situation (G, P, O, etc.) and is calculated as explained in Table 1. Its value ranges from 0 to 100. “p” is the preference variable; it is the position of the situation when the anticipated situations are sorted in ascending order of anxiety level, and it can also be referred to as the priority index number. “t” is the task variable; it defines which task can better be performed by which resource. The preferred resource is assigned the lowest value, with ascending values for lower-priority resources. The value of t depends upon the number of resources, e.g., if there are two resources, then t = 0, 1. “Q” is the resource suitability variable; it has the value “1” if a resource is suitable and “0” if it is not. In order to simplify the mathematical notation of the model formulation, we defined the indices for resources “r” and situations “s”.
The expression for “a” can be written as:
a_rs ∈ [0, 100 + S] for all resources r ∈ R and situations s ∈ S;
r ∈ R: index and set of resources;
s ∈ S: index and set of situations.
Here, S also denotes the total number of situations. As the value of p ranges from 1 to S, the range of the anxiety factor is from 0 to 100 + S.
The anxiety factor “a” is the value of anxiety calculated through the above variables for a particular situation tackled by a specific resource.
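A minimal sketch of this computation is given below, assuming the reconstructed form of Equation (2), a_rs = Q_rs [A_rs + (p_rs − t_rs)]; the numerical values in the example are hypothetical.

```python
# Illustrative anxiety-factor (matching-score) computation, Equation (2).
def anxiety_factor(Q: int, A: float, p: int, t: int) -> float:
    """Q: resource suitability (0/1), A: category anxiety (0..100),
    p: preference/priority index (1..S), t: task variable (0..R-1)."""
    return Q * (A + (p - t))

# Hypothetical example: S = 3 anticipated situations; a panic-level situation
# (A = 95) has the highest priority index p = 3, and the human (t = 0) is the
# preferred resource while the cobot (t = 1) is unsuitable for it (Q = 0).
a_human = anxiety_factor(Q=1, A=95.0, p=3, t=0)   # -> 98.0
a_cobot = anxiety_factor(Q=0, A=95.0, p=3, t=1)   # -> 0.0
```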

2.2. Layers of the Decision-Making System

The framework is explained layer by layer. The CPS layer is the main layer that controls all physical and human elements. The physical component involves machines, robots, conveyors, sensors, display/output devices, input devices, etc. The main role of the layer is to execute the intended process for which the production plan is uploaded. It also looks for changes at every cycle of the operation. For this, various sensing techniques, such as proxemics and visual, physiological, or social cues, may be used. The different situations that are expected to affect the desired output are registered.
The situational assessment layer assesses the confronted situations for the priority index (anxiety) by making use of the HC’s knowledge base. First, the index of the expected situations is calculated with the aid of Ishikawa analysis, as proposed in Section 2.1. The category of each situation is then identified. Based on the category, the matching score (anxiety factor) for all expected situations is calculated, which defines the suitability of each resource to the particular situation. On the initialization of the process, the layer first becomes aware of the situations detected by the CPS layer. The layer then links the matching scores to the related situations and re-estimates them if the category of a situation is changed by the HC during the operation.
The resource optimization layer optimizes the allocation of resources through an optimization algorithm. The module checks the number of situations; if there is a single situation, the program moves directly to the decision-making layer, and if there are multiple situations, the program moves to the optimization algorithm. The algorithm employs a mixed-integer programming (MIP) technique making use of the matching scores, the objective function, and the defined constraints. The Gurobi Optimizer [47] is one of the state-of-the-art solvers for mathematical optimization problems, and the MIP model of the stated problem is implemented in it. The overall technique addresses all the situations sequentially in terms of priority.
The decision-making module encompasses the logic defined by experts to handle the confronted situations and decides on the assignment of resources to tackle them. The main consideration for the assignment of resources is the allocation recommended by the optimization module; however, the implementation is carried out by the decision-making module. Commands are then given to the physical resources, which could be a human operator, a cobot, a machine, etc., depending on the ascertained tasks. The human component is also given cautions through social signals on the observation of social norms and obsessions.
The implementation of the proposed method includes two types of actions: those taken before activating the system and those happening in real time during the process cycle. Both the pre-process and in-process steps concerning each layer are shown in Table 2.

2.3. Resource Optimization

We introduced a decision variable, “X”, for each possible assignment of resources to the situations. In general, we can say that any decision variable X_rs equals “1” if resource r ∈ R is assigned to the situation s ∈ S, or “0” otherwise.

2.3.1. Situation Constraints

We first discuss the constraints associated with situations. These constraints ensure that each situation is handled by at most one resource. This corresponds to the following:
∑_{r ∈ R} X_rs ≤ 1    for all s ∈ S        (3)
The inequality (≤ 1) is used to incorporate the null case, in which no resource is assigned to the situation in an iteration.

2.3.2. Resource Constraints

The resource constraint ensures that at most one situation is assigned to each resource. However, it is sometimes possible that not all the resources are assigned. For example, if the CPS encounters two situations and only one resource is suitable to handle both, then the situations are handled sequentially by that resource in order of anxiety. We can write this constraint as follows:
∑_{s ∈ S} X_rs ≤ 1    for all r ∈ R        (4)
This constraint also uses ≤ 1 to allow the possibility that a resource is not assigned to any situation.

2.3.3. Objective Function

The objective is to maximize the total matching score (anxiety factor) of the assignments that satisfy both the situation and resource constraints. The objective function can be concisely written as follows:
Maximize ∑_{s ∈ S} ∑_{r ∈ R} a_rs X_rs        (5)
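For illustration, a small sketch of this assignment MIP in gurobipy (the Gurobi Python interface [47]) is given below. The resource names, situation names, and matching-score values are hypothetical placeholders rather than the values of Table 3; Equations (3)–(5) appear as the two "at most one" constraint families and the maximization objective.

```python
# Sketch of the assignment MIP under assumed, illustrative data.
import gurobipy as gp
from gurobipy import GRB

resources = ["human", "cobot"]
situations = ["wrong item", "displaced crate", "unidentified person"]
a = {("human", "wrong item"): 57.0, ("human", "displaced crate"): 59.0,
     ("human", "unidentified person"): 99.0,
     ("cobot", "wrong item"): 58.0, ("cobot", "displaced crate"): 60.0,
     ("cobot", "unidentified person"): 0.0}

m = gp.Model("anxiety_assignment")
x = m.addVars(resources, situations, vtype=GRB.BINARY, name="X")

# Situation constraints: each situation handled by at most one resource (Eq. 3).
m.addConstrs((x.sum("*", s) <= 1 for s in situations), name="situation")
# Resource constraints: each resource assigned at most one situation (Eq. 4).
m.addConstrs((x.sum(r, "*") <= 1 for r in resources), name="resource")

# Objective: maximize the total anxiety factor of the assignment (Eq. 5).
m.setObjective(gp.quicksum(a[r, s] * x[r, s] for r in resources for s in situations),
               GRB.MAXIMIZE)
m.optimize()

for r in resources:
    for s in situations:
        if x[r, s].X > 0.5:
            print(f"{r} -> {s}")
```

In this toy instance the solver assigns the human to the unidentified person and the cobot to the displaced crate; the remaining situation is deferred to the next iteration, as described above.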

3. Experimental Validation

Examples of automotive parts assembly and beverage packaging were considered, as shown in Figure 3. The scenario involves a cobot and a human operator performing tasks dependent on information received from multiple sensors integrated with a collaborative CPPS. Different items arrive at the workstations. They are then assembled or packaged in a sequence by a cobot. A human supervisor assists in the completion of the process by performing several tasks. In addition to the dedicated tasks, the supervisor also monitors the operations for anomalies and erroneous activities, i.e., they are in both a collaborative and a supervisory role. As each assembly/packaging process completes, the supervisor gives a command for the next one, and the cobot moves accordingly. The cobot performs specific operations in collaboration with the operator to complete the task; e.g., in the case of packaging, the robot picks bottles and cans in a sequence from specific locations and drops them in specific slots in the crate, whereas in the case of assembly, the robot picks two types of gears from specific locations in a sequence and places them on a case for assembly. The operator, in the case of packaging, replaces the crate on completion of packaging, and in the case of assembly, tightens the top plate with screws and presses a button for the conveyor to deliver the next case. The human supervisor is also responsible for corrective actions on the arrival of a wrong item, a wrong sequence, or the absence of an item from its location. We call this whole process the standard procedure.
There may be situations in which the process does not proceed as intended, and it may face various unwanted and unforeseen situations. In order to cater to this, different situations that can emerge during a cycle were anticipated for both cases; these include the intended situation as well as unwanted situations. The considered situations are as follows:
  • Right item: the main task/intended situation, i.e., the right items (beverages/gears) are in place for the pick-and-place/assembly operations;
  • Wrong item: the item at the work location is either not in the list or wrong in sequence;
  • No item: no item appears at the work location;
  • Human interference: the operator interferes in any task at any location because they find that the robot may not be able to perform the task or they find an anomaly;
  • Displaced case/crate: the packaging/assembly operation cannot be completed because the crate/case is displaced from the designated location;
  • Unidentified person: any unknown person in the workspace is a hazard to the system and to themselves;
  • Foreign object: any object not required in the workspace is also a hazard to the system;
  • Obsession: a situation that does not actually affect the outcome of the system but disturbs it; it is generally established after a few iterations, when the operator realizes that the situation is a false alarm, e.g., in both case studies the object detection algorithm detects the table on which the items are placed as a foreign object;
  • Time delay: the completion-time variance of the intended operation;
  • Threshold distance: a breach of the minimum distance established to be safe for collaboration between the human and the robot;
  • Cobot power failure: the cobot stops work due to a power failure;
  • Cobot collision: the cobot collides with the human operator.
Different situational awareness techniques were used to detect the listed situations. An object detection technique is used to detect a right item, a wrong item, no item, an unidentified person, and foreign objects; YOLOv3 [48] is trained to detect gears, bottles, and cans along with day-to-day general objects. An RFID sensor in the helmet is used to identify the authorized operator, an IR proximity sensor to detect the displaced case/crate, impact sensors in the cobot to detect cobot collision, a power sensor to detect cobot power failure, and the clock to gauge the time delay. Human interventions are detected through a pose estimation algorithm, which takes its feed from a camera installed above the workspace; OpenPose [49] is used to detect the different human poses. The separation distance between the cobot and the operator is used to evaluate the threshold distance; this was performed by calculating the distance between the center of the cobot and the center of the operator detection bounding box (a minimal sketch of this check is given after this paragraph). The question is how to map these situations so as to calculate the anxiety index and anxiety factor and then analyze and decide on the requisite actions. As soon as any of the listed situations is detected through visual sensing or the other detection techniques, an input of the detection is given to the main program. The situation’s severity, anxiety index, variables relative to the resources, and anxiety factor, which were evaluated before the commencement of the operation, are then taken into account.
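The threshold-distance check mentioned above can be illustrated with the following hedged sketch: the pixel distance between an assumed cobot center and the center of the operator’s detection bounding box is compared with a safety threshold. The coordinates and threshold value are hypothetical.

```python
# Illustrative threshold-distance check between the cobot center and the
# operator's detection bounding box; all numbers are assumed example values.
import math

def bbox_center(box):
    """box = (x_min, y_min, x_max, y_max) as returned by the object detector."""
    return ((box[0] + box[2]) / 2.0, (box[1] + box[3]) / 2.0)

def threshold_breached(operator_box, cobot_center, threshold_px: float) -> bool:
    ox, oy = bbox_center(operator_box)
    cx, cy = cobot_center
    return math.hypot(ox - cx, oy - cy) < threshold_px

# Example: operator detected at (420, 180)-(560, 470), cobot centered at (300, 330).
print(threshold_breached((420, 180, 560, 470), (300, 330), threshold_px=250.0))  # True
```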
The severity index I for each situation was assessed by the experts by assigning weights to each situation against all other situations in the Ishikawa diagram, as shown in Figure 4. A weight of “1” was assigned against another situation judged to have lower priority than the main situation, and “0” if it had higher priority. The weights were assigned based on the voting of experts. The ranking of a situation (index I) was the total of the weights assigned under it. The anxiety A was then calculated for each situation by inserting the values of the categories and severities as stated in Table 1. There were two resources, a human and a cobot; the matching score (a_rs) was then estimated, as explained in Section 2, for both resources versus the situations and fed into the central database. The severities, categories, values of anxiety evaluated for the cases, and the anxiety factors are shown in Table 3.
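A hedged sketch of this pairwise weighting is shown below. The vote matrix is hypothetical, and the scaling of the raw totals to the 0–100 range of index I is an assumption of this sketch; the full procedure is given in [46].

```python
# Pairwise Ishikawa-style weighting: votes[(A, B)] = 1 means situation A is
# judged to outrank situation B. Raw totals are scaled to 0..100 (assumption).
from typing import Dict, Tuple

def ishikawa_index(pairwise_votes: Dict[Tuple[str, str], int]) -> Dict[str, float]:
    situations = sorted({s for pair in pairwise_votes for s in pair})
    raw = {s: sum(v for (main, _), v in pairwise_votes.items() if main == s)
           for s in situations}
    max_raw = max(raw.values()) or 1
    return {s: 100.0 * raw[s] / max_raw for s in situations}

# Hypothetical example with three situations.
votes = {("cobot collision", "wrong item"): 1, ("cobot collision", "time delay"): 1,
         ("wrong item", "cobot collision"): 0, ("wrong item", "time delay"): 1,
         ("time delay", "cobot collision"): 0, ("time delay", "wrong item"): 0}
print(ishikawa_index(votes))   # {'cobot collision': 100.0, 'time delay': 0.0, 'wrong item': 50.0}
```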
The main program was divided into five modules to implement the approach on the case studies: the main module, the item-in-place module, the anxiety module, the optimization module, and the decision-making module. The main module holds the other modules. This module keeps count of the operations in the cycle and performs the intended task. It looks for situations during each iteration and ascertains the matching scores through the anxiety module in case multiple situations emerge. The item-in-place module is specific to each particular case and checks whether the right or wrong item is in place through object detection and the item count. A machine vision camera was placed to detect the objects. The module adopts the contingency plan through the decision-making module if the right item is not in place. The optimization module assigns the resources to the situations by identifying the highest matching score through MIP using Equations (3)–(5). By analyzing Table 3, we can see that the first basis for the resource assignment is the resource suitability variable Q; if it is zero, the resource cannot be assigned to the situation. The second basis is the task variable t; in our case, 0 was assigned to the preferred resource. Hence, the resource with t = 0 was assigned if it was not already committed and if the resource suitability variable was not “0”. The third basis is the preference variable p; the situation with the higher value was addressed first by the two resources, and the remaining situations were addressed subsequently after the disposal of the initial ones. The cumulative effect of all these variables is the anxiety factor. The table gives a clear depiction of, when any situation appears, which of the two resources can be assigned, which resource is preferred, and which situation is addressed first. The decision-making module then decides on the contingencies for the identified situations through actions performed by the resources. The actions are performed either by the cobot or by the human supervisor in our scenario, depending on the specific role assigned to the resource for a particular case.
The roles assigned to the situations are as follows (a simplified sketch of this mapping is given after the list):
  • Right item: the robot picks up and places the item at the designated location; the human can do so, but preference is given to the robot.
  • Wrong item: the robot picks up the item and places it at the spot dedicated to such items; the count, however, is not increased, and the human has to place the right item at the location.
  • No item in location: the robot moves to the next location, and the human places the item at the drop point.
  • Human interference: if the human operator interferes due to an anomaly in the process or a defective item, they may drop the item at the drop point or at the wrong-item spot; the count is then incorporated by checking the human pose while the action is performed.
  • Displaced case/crate: the robot adjusts it by pushing it to the fixed enclosure; if the robot is not available, the human performs the adjustment.
  • Unidentified person: the human has to remove them from the area, and a caution is displayed on the screen for the human to perform this action.
  • Foreign object: a caution is raised for the human to remove the object from the workspace. In both this and the preceding case, the robot stops its action until the human presses the button to resume the operation.
  • Obsession: none of the resources performs any action, and the task is performed as intended.
  • Time delay and threshold distance: if either is breached, a caution is given to the human operator to analyze and adjust accordingly.
  • Cobot power failure: the human checks the reason, rectifies it, and resumes the operation; the whole system remains at a standstill in the meantime.
  • Cobot collision: the whole system comes to a stop, and the human checks, resolves the issue, and then resumes the operation.
When analyzing these roles, we see that all the actions are interlinked with the variables assigned in Table 3. The experts decide the values of the variables based on experience and consultation.
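In simplified form, this expert-defined logic can be represented as a mapping from situations to a preferred resource and contingency action. The sketch below is illustrative only; the situation names follow the list above, while the action strings and the fallback rule are abbreviated stand-ins for the full decision-making module.

```python
# Illustrative contingency table for the decision-making module (packaging case).
CONTINGENCIES = {
    "right item":          ("cobot", "pick and place the item at the designated slot"),
    "wrong item":          ("cobot", "move item to the reject spot; human supplies the right item"),
    "no item":             ("cobot", "skip to next location; human places item at drop point"),
    "displaced crate":     ("cobot", "push crate back to the fixed enclosure"),
    "unidentified person": ("human", "escort person out; cobot pauses until resume button"),
    "foreign object":      ("human", "remove object; cobot pauses until resume button"),
    "obsession":           (None,    "no action; continue the intended task"),
    "cobot power failure": ("human", "rectify and resume; whole system stands still"),
    "cobot collision":     ("human", "stop system, resolve the issue, resume operation"),
}

def decide(situation: str, cobot_busy: bool = False):
    resource, action = CONTINGENCIES[situation]
    if resource == "cobot" and cobot_busy:      # fall back to the human when the
        resource = "human"                      # preferred resource is committed
    return resource, action

print(decide("displaced crate", cobot_busy=True))   # ('human', 'push crate back ...')
```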

3.1. Survey

We carried out a survey whose participants were pooled from university students and faculty. The ages of the participants ranged from 19 to 40 years (M = 28.7, SD = 7.73).
The participants were instructed before the experiment to continuously monitor the screen for instructions during the operation. We devised our own metrics similar to the work in [43]. The subjects had nominal knowledge of the specifics of the system and were briefed on the assigned task only. A total of 20 subjects participated; a repeated-measures design was used for the experiment, and the subjects were divided into two groups randomly. Two conditions were assigned: the first group included those who first worked with the decision-making design (n = 11), and the second group included those who first worked with the standard design without the decision-making system (n = 9). The subjects were not informed before the experiment which condition was assigned or what metrics were being measured. We measured both quantitative and subjective measures. The quantitative measures include the decision time taken to decide on each situation and the accuracy of the process. The subjective measures include perceived safety, comfort, and legibility based on the questionnaire responses.
First, both groups executed a training round, during which the participants performed the complete experiment by themselves, without an assistant or the robot, to familiarize themselves with the task. Next, all participants were provided with a human assistant to perform the task collaboratively, as would be required with a robot in the subsequent (third) phase. Finally, two task executions were conducted, first in one condition and later in the switched mode; the sequence was different for the two groups. A questionnaire was given to the participants after each task execution. Each participant performed two training sessions with the robot before each task execution to build mental compatibility. In order to prevent any involuntary bias from the participants, the first task execution was conducted in such a way that the participant was unaware of which of the two conditions they had been assigned to. Before the conduct of the alternate mode, participants were informed that the system would behave differently during the second phase: the robot could take some automated measures, and they were required to monitor the instructions on the screen.
The questions, shown in Table 4, were intended to determine each participant’s satisfaction with the robot as a teammate as well as their perceived safety, comfort, and legibility. A 5-point Likert scale was used for the two questionnaires on which the participants had to respond, strongly disagree to strongly agree for the first questionnaire and much less to much more for the second questionnaire. Based on the dependent measures, the two main hypotheses in this experiment were as follows:
Hypothesis 1.
Using a decision-making framework for anxiety will lead to more fluent human–robot collaboration based on timely automated decisions and the accuracy of the approach.
Hypothesis 2.
Participants will be more satisfied with the decision-making framework performance while collaborating with the robot and will perceive greater comfort, safety, and legibility compared with a CPPS that uses standard task planning.
The automated decision time cannot be calculated for both conditions as the standard approach does not cater to unprecedented situations; the operator has to stop the system and apply countermeasures. The accuracy of the approach is defined as the number of errors (situations that could not be handled automatically) observed during one cycle divided by the total number of iterations. At least 2 situations were intentionally generated in each trial to check the system response.

3.2. Results

The detection of the objects through the object detection technique is shown in Figure 5.
Similarly, human detection was also carried out through the object detection technique. The results for the detection of an authorized operator and an unidentified person are shown in Figure 6. The identification of the authorized operator within the workspace through object detection was complemented by the RFID sensor in the helmet.
The detection of specific poses of the operator is shown in Figure 7; the first is the detection of the pose of the operator when interfering in the assembly, and the latter two are the poses of the operator while placing objects himself. It is pertinent to mention here that the contingencies were adopted only when a detected pose was complemented by the position of the robot at the same location. This is because, while performing parallel operations, the same pose could be detected while the cobot is operating at some other place.
Our approach is capable of detecting multiple situations and disposing of them at once. As an example, two individual situations were considered at a time: a displaced case/crate and an unidentified person. An inspector entered the workspace and displaced the outer case in the first scenario and the crate in the second scenario during inspection (see Figure 8).
The time taken to decide on the handling of situations for two of the cycles in one test run is shown in Figure 9. The anxieties, total anxiety, and maximum-anxiety situation for each iteration are also shown. The maximum time taken to decide on a single situation was noted as 0.03 s.
The quantitative analysis from the survey was carried out; the overall mean automated decision time (MADT) for the proposed method and the accuracy of both approaches were calculated with a minimum of two situations in each cycle. The mean automated decision time for a contingency was found to be 0.021 s, as can be seen in Table 5. The decision-making system (accuracy of 89.98%) was found to be 16.85% more accurate than the standard system (accuracy of 73.125%). The t-test for the significance of the results was conducted with the confidence level set to p < 0.05 (95% confidence level), and the p-value for the test was found to be p < 0.01. The standard error of the mean (SEM) for the MADT was found to be 9.36 × 10−5, the upper limit was 0.0211 s, and the lower limit was 0.0208 s.
Significant differences (at p < 0.05) were found for the questions of fluency with the collaborative robot; the participants exposed to the decision-making system agreed more strongly with “I trusted the robot to do the right thing at right time” (p < 0.01) and “The robot and I worked together for better task performance” (p < 0.01), and disagreed more strongly with “The robot did not understand how I desired the task to be executed” (p < 0.01) and “The robot kept disturbing during the task” (p < 0.01).
Similarly (at confidence level p < 0.05), significant differences were found for the questions of perceived safety, comfort, and legibility; the participants exposed to the decision-making system agreed much more with “I felt safe while working with the robot” (p < 0.01), “I trusted the robot would not harm me” (p < 0.01), and “I understand what robot will be doing ahead” (p < 0.01), and agreed much less with “The robot moved too drastic for my comfort” (p < 0.01) and “The robot endangered safety of unknown persons in workspace” (p < 0.01).
The results support both hypotheses in favor of the decision-making system; they indicate that the proposed approach leads to more fluent HRC (Hypothesis 1) and also highlight that the method extends safety, comfort, and legibility (Hypothesis 2).

3.3. Discussion

The research shows that previous works address a limited set of situations and focus solely on a single aspect, e.g., collision avoidance, motion planning, or psychological safety. The proposed method collectively addresses all the issues earmarked by the experts that a CPPS may face. Here, we also compare the current method with our previous work [46]. Only the single most prior situation with maximum anxiety could be handled at a time using the previous method, whereas the current approach is optimized to employ all available resources to relieve the current state of anxiety. The method can accommodate any number of resources by modifying the equations in the optimization algorithm. When limited resources are available, situations with higher anxiety are addressed first, followed by lower ones. The current method combines human knowledge and intelligence with AI techniques to minimize the decision time; the maximum time recorded to decide on a scenario was 0.03 s, which shows that the method is not time-intensive. The indexing of anxiety was carried out through a more generic approach in the previous method, which lacked a rationale for scaling the different levels. The current method partially uses the previous technique; however, a logical approach with a biomimetic connection is presented here, which makes it easy to differentiate and prioritize situations.
The contribution of this work is that the CPPS was made more intelligent, safe, and resilient. The decision-making of the production system was improved, which provides it with the flexibility to tackle multiple situations at once in an optimal manner and allows it to fit any industrial scenario: manufacturing, assembly, packaging, etc. The amalgamation of the four layers highlights the contribution of AI in industry, which increases the overall productivity of the system. The work is a real manifestation of the integration of computers in Industry 4.0, which is implemented in every layer of the proposed work, i.e., the functioning of the CPS, situational assessment, optimization, and decision-making through software, algorithms, AI, and interfaces.
In the future, the technique can be reinforced by machine learning to make it more intelligent. The CPPS may learn in real time from the environment and continuously improve the knowledge base. As the existing system depends on predefined solutions, it is proposed that the system may learn from repetitive patterns and the solutions provided to previously confronted situations and may automatically resort to the same solutions. Cloud-based and semantic systems are emerging trends that may be incorporated to further enhance the capability and customization of these systems. In this way, multiple situations can be updated and omitted automatically in the system, though supervision by the human component is still recommended. We used machine-learning-based image processing techniques as a tool, and these are known for uncertainties due to the statistical methods and probability distributions used in them. The probabilistic nature of these models makes them prone to errors that cannot be ignored. In the future, the confidence level of detection may be included in the calculation of the anxiety factor. The accuracy of the method may be increased either by selecting a threshold level of detection or by adding a variable incorporating the detection level that affects the anxiety factor of the confronted situation.

4. Conclusions

A framework for social safety was proposed that caters to the physical and cognitive issues of a collaborative robotic system. The framework intelligently safeguards the interests of both the human and machine teams with the introduction of the unique concept of the “anxiety factor of a CPS”. A connective framework was laid out that incorporates the anxiety generated by dynamic situations and decides while keeping in consideration previous knowledge, human intelligence, and AI. The method also employs optimization supported by visual and IR sensing techniques and accommodates any number of resources. A flexible collaborative system was thereby produced in which the cooperating elements can respond well to untoward situations. Experimental validation of the method on real-world problems proved its efficacy and applicability. Thus, the decision-making of the collaborative robotic CPPS is improved; the system is flexible enough to cater to multiple situations optimally and can fit any industrial scenario, such as manufacturing, assembly, or packaging, ultimately enhancing task efficiency, productivity, and safety.

Author Contributions

Conceptualization, S.O.B.I.; methodology, S.O.B.I.; software, S.O.B.I.; validation, S.O.B.I.; formal analysis, S.O.B.I.; investigation, S.O.B.I.; resources, S.O.B.I.; data curation, S.O.B.I.; writing—S.O.B.I.; writing—review and editing, W.A.L.; visualization, S.O.B.I.; supervision, W.A.L.; project administration, S.O.B.I. and W.A.L. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Thrun, S. Toward a framework for human-robot interaction. Hum. Comput. Interact. 2004, 19, 9–24. [Google Scholar]
  2. Bien, Z.Z.; Lee, H. Effective learning system techniques for human–robot interaction in service environment. Knowl. Based Syst. 2007, 20, 439–456. Available online: https://www.sciencedirect.com/science/article/pii/S0950705107000135 (accessed on 11 September 2022). [CrossRef]
  3. Severinson-Eklundh, K.; Green, A.; Hüttenrauch, H. Social and collaborative aspects of interaction with a service robot. Robot. Auton. Syst. 2003, 42, 223–234. Available online: https://www.sciencedirect.com/science/article/pii/S0921889002003779 (accessed on 11 September 2022). [CrossRef]
  4. Murphy, R.R. Human-Robot Interaction in Rescue Robotics. IEEE Trans. Syst. Man Cybern. Part C (Appl. Rev.) 2004, 34, 138–153. [Google Scholar] [CrossRef]
  5. Nourbakhsh, I.R.; Sycara, K.; Koes, M.; Yong, M.; Lewis, M.; Burion, S. Human-Robot Teaming for Search and Rescue. Available online: https://ieeexplore.ieee.org/abstract/document/1401846/ (accessed on 11 September 2022).
  6. Kwakkel, G.; Kollen, B. Effects of robot-assisted therapy on upper limb recovery after stroke: A systematic review. Neurorehabil Neural Repair 2008, 22, 111–121. [Google Scholar] [CrossRef] [Green Version]
  7. Scassellati, B.; Admoni, H.; Matarić, M. Robots for use in autism research. Annu. Rev. Biomed. Eng. 2012, 14, 275–294. [Google Scholar] [CrossRef] [Green Version]
  8. Goodrich, M. Human–robot interaction: A survey. Found. Trends® Hum.–Comput. Interaction. 2008, 1, 203–275. [Google Scholar] [CrossRef]
  9. Fong, T.; Nourbakhsh, I.; Dautenhahn, K. A Survey of Socially Interactive Robots: Concepts, Design and Applications. 2002. Available online: http://uhra.herts.ac.uk/handle/2299/3821 (accessed on 11 September 2022).
  10. Chandrasekaran, B.; Conrad, J.M. Human-robot collaboration: A survey. Int. J. Hum. Robot. 2008, 5, 47–66. [Google Scholar]
  11. Lemaignan, S.; Warnier, M.; Sisbot, E.A.; Clodic, A.; Alami, R. Artificial cognition for social human–robot interaction: An implementation. Artif. Intell. 2017, 247, 45–69. [Google Scholar] [CrossRef] [Green Version]
  12. Mead, R. Autonomous human–robot proxemics: Socially aware navigation based on interaction potential. Auton. Robot. 2017, 41, 1189–1201. [Google Scholar] [CrossRef]
  13. Krüger, J.; Lien, T.K.; Verl, A. Cooperation of human and machines in assembly lines. CIRP Ann. Manuf. Technol. 2009, 58, 628–646. [Google Scholar] [CrossRef]
  14. Lasota, P. Analyzing the effects of human-aware motion planning on close-proximity human–robot collaboration. SAGE J. 2015, 57, 21–33. [Google Scholar] [CrossRef] [PubMed]
  15. Morato, C.; Kaipa, K.; Zhao, B.; Gupta, S.K. Toward Safe Human Robot Collaboration by Using Multiple Kinects Based Real-Time Human Tracking. 2014. Available online: https://asmedigitalcollection.asme.org/computingengineering/article-abstract/14/1/011006/370115 (accessed on 2 October 2021).
  16. Flacco, F.; Kröger, T.; de Luca, A.; Khatib, O. A depth space approach to human-robot collision avoidance. In Proceedings of the 2012 IEEE International Conference on Robotics and Automation, Saint Paul, MN, USA, 14–18 May 2012; pp. 338–345. [Google Scholar] [CrossRef] [Green Version]
  17. Schiavi, R.; Bicchi, A.; Flacco, F. Integration of Active and Passive Compliance Control for Safe Human-Robot Coexistence. 2009. Available online: https://ieeexplore.ieee.org/abstract/document/5152571/ (accessed on 19 June 2022).
  18. Safeea, M. Minimum Distance Calculation Using Laser Scanner and IMUs for Safe Human-Robot Interaction. 2019. Available online: https://www.sciencedirect.com/science/article/pii/S0736584518301200 (accessed on 19 June 2022).
  19. Vannoy, J. Real-time adaptive motion planning (RAMP) of mobile manipulators in dynamic environments with unforeseen changes. IEEE Trans. Robot. 2008, 24, 1199. [Google Scholar] [CrossRef]
  20. Yang, Y.; Brock, O. Elastic Roadmaps: Globally Task-Consistent Motion for Autonomous Mobile Manipulation in Dynamic Environments. 2002. Available online: http://citeseerx.ist.psu.edu/viewdoc/summary?doi=10.1.1.102.3248 (accessed on 24 June 2022).
  21. Yang, Y.; Brock, O.; Yang, Y.; Brock, O. Elastic roadmaps—Motion generation for autonomous mobile manipulation. Auton. Robot. 2010, 28, 113–130. [Google Scholar] [CrossRef]
  22. Balan, L. Real-time 3D collision avoidance method for safe human and robot coexistence. In Proceedings of the 2006 IEEE/RSJ International Conference on Intelligent Robots and Systems, Beijing, China, 9–15 October 2006; pp. 276–282. [Google Scholar] [CrossRef]
  23. Haddadin, S.; Belder, R. Dynamic Motion Planning for Robots in Partially Unknown Environments. 2011. Available online: https://www.sciencedirect.com/science/article/pii/S1474667016447051 (accessed on 19 June 2022).
  24. Haddadin, S. Real-Time Reactive Motion Generation Based on Variable Attractor Dynamics and Shaped Velocities. 2010. Available online: https://ieeexplore.ieee.org/abstract/document/5650246/ (accessed on 19 June 2022).
  25. de Luca, A. Sensorless Robot Collision Detection and Hybrid Force/Motion Control. 2005. Available online: https://ieeexplore.ieee.org/abstract/document/1570247/ (accessed on 19 June 2022).
  26. de Luca, A.; Albu-Schäffer, A.; Haddadin, S.; Hirzinger, G. Collision Detection and Safe Reaction with the DLR-III Lightweight Manipulator Arm. 2006. Available online: https://ieeexplore.ieee.org/abstract/document/4058607/ (accessed on 19 June 2022).
  27. Haddadin, S.; Albu-Schäffer, A.; de Luca, A.; Hirzinger, G. Collision detection and reaction: A contribution to safe physical human-robot interaction. In Proceedings of the 2008 IEEE/RSJ International Conference on Intelligent Robots and Systems, Nice, France, 22–26 September 2008; pp. 3356–3363. [Google Scholar] [CrossRef] [Green Version]
  28. Grioli, G.; Bicchi, A.; Schiavi, R.; Grioli, G.; Sen, S.; Bicchi, A. VSA-II: A novel prototype of variable stiffness actuator for safe and performing robots interacting with humans. In Proceedings of the 2008 IEEE International Conference on Robotics and Automation, Pasadena, CA, USA, 19–23 May 2008. [Google Scholar] [CrossRef]
  29. Eiberger, O.; Haddadin, S.; Weis, M.; Albu-Schäffer, A.; Hirzinger, G. On Joint Design with Intrinsic Variable Compliance: Derivation of the DLR QA-Joint. Available online: https://ieeexplore.ieee.org/abstract/document/5509662/ (accessed on 19 June 2022).
  30. Bicchi, A.; Tonietti, G.; Magazine, A. Fast and “Soft-Arm” Tactics. 2004. Available online: https://ieeexplore.ieee.org/abstract/document/1310939/ (accessed on 19 June 2022).
  31. de Luca, A.; Flacco, F.; Bicchi, A.; Centro, R.S. Nonlinear Decoupled Motion-Stiffness Control and Collision Detection/Reaction for the VSA-II Variable Stiffness Device. 2009. Available online: https://ieeexplore.ieee.org/abstract/document/5354809/ (accessed on 19 June 2022).
  32. Hirzinger, G.; Albu-Schäffer, A.; Hähnle, M.; Schaefer, I.; Sporer, N. On a new generation of torque controlled light-weight robots. In Proceedings of the 2001 ICRA IEEE International Conference on Robotics and Automation (Cat. No.01CH37164), Seoul, Korea, 21–26 May 2001. [Google Scholar] [CrossRef]
  33. Sharkawy, A.-N.; Koustoumpardis, P.; Aspragathos, N.A.; Koustoumpardis, P.N.; Aspragathos, N. Human–robot collisions detection for safe human–robot interaction using one multi-input–output neural network. Soft Comput. 2020, 24, 6687–6719. [Google Scholar] [CrossRef]
  34. Rani, P.; Sarkar, N.; Smith, C.A.; Kirby, L.D. Anxiety detecting robotic system—Towards implicit human-robot collaboration. Robotica 2004, 22, 85–95. [Google Scholar] [CrossRef]
  35. Picard, R.W. Affective computing: Challenges. Int. J. Hum. Comput.-Stud. 2003, 59, 55–64. [Google Scholar] [CrossRef]
  36. Kulić, D.; Croft, E. Pre-collision safety strategies for human-robot interaction. Auton. Robot. 2007, 22, 149–164. [Google Scholar] [CrossRef]
  37. Bethel, C.; Burke, J. Psychophysiological Experimental Design for Use in Human-Robot Interaction Studies. 2007. Available online: https://ieeexplore.ieee.org/abstract/document/4621744/ (accessed on 19 June 2022).
  38. Bethel, C.; Salomon, K. Preliminary results: Humans find emotive non-anthropomorphic robots more calming. In Proceedings of the 4th ACM/IEEE International Conference on Human Robot Interaction, La Jolla, CA, USA, 11–13 March 2009; pp. 291–292. [Google Scholar] [CrossRef]
  39. Chadalavada, R.; Andreasson, H. Bi-Directional Navigation Intent Communication Using Spatial Augmented Reality and Eye-tracking Glasses for Improved Safety in Human–Robot Interaction. 2020. Available online: https://www.sciencedirect.com/science/article/pii/S0736584518303351 (accessed on 19 June 2022).
  40. Lasota, P.; Rossano, G. Toward safe close-proximity human-robot interaction with standard industrial robots. In Proceedings of the 2014 IEEE International Conference on Automation Science and Engineering (CASE), New Taipei, Taiwan, 18–22 August 2014; pp. 339–344. [Google Scholar] [CrossRef] [Green Version]
  41. Dragan, A.D.; Bauman, S.; Forlizzi, J.; Srinivasa, S.S. Effects of Robot Motion on Human-Robot Collaboration. In Proceedings of the 2015 10th ACM/IEEE International Conference on Human-Robot Interaction (HRI), Portland, OR, USA, 2–5 March 2015; pp. 51–58. [Google Scholar] [CrossRef] [Green Version]
  42. Rad, C.; Hancu, O.; Takacs, I. Smart Monitoring of Potato Crop: A Cyber-Physical System Architecture Model in the Field of Precision Agriculture. 2021. Available online: https://www.sciencedirect.com/science/article/pii/S2210784315001746 (accessed on 1 October 2021).
  43. Monostori, L. Cyber-Physical Production Systems: Roots, Expectations and R&D Challenges. 2014. Available online: https://www.sciencedirect.com/science/article/pii/S2212827114003497 (accessed on 1 October 2021).
  44. Green, S.A.; Billinghurst, M.; Chen, X.; Chase, J.G. Human-Robot Collaboration: A Literature Review and Augmented Reality Approach in Design. Int. J. Adv. Robot. Syst. 2008, 5, 1–18. [Google Scholar] [CrossRef]
  45. D’Auria, D.; Persia, F. A collaborative robotic cyber physical system for surgery applications. In Proceedings of the 2017 IEEE International Conference on Information Reuse and Integration (IRI), San Diego, CA, USA, 4–6 August 2017; pp. 79–83. [Google Scholar] [CrossRef]
  46. Islam, S.O.B.; Lughmani, W.A.; Qureshi, W.S.; Khalid, A.; Mariscal, M.A.; Garcia-Herrero, S. Exploiting visual cues for safe and flexible cyber-physical production systems. Adv. Mech. Eng. 2019, 11, 12. [Google Scholar] [CrossRef]
  47. Gurobi Optimization Inc. 2021. Available online: https://www.gurobi.com/ (accessed on 1 October 2021).
  48. Redmon, J.; Farhadi, A. YOLOv3: An Incremental Improvement. 2018. Available online: http://arxiv.org/abs/1804.02767 (accessed on 1 October 2021).
  49. Cao, Z.; Hidalgo, G.; Simon, T. OpenPose: Realtime Multi-Person 2D Pose Estimation Using Part Affinity Fields. 2019. Available online: https://ieeexplore.ieee.org/abstract/document/8765346/ (accessed on 2 October 2021).
Figure 1. Decision-making connective framework for CPPS.
Figure 2. Module-based software implementation for decision-making.
Figure 3. (a) Experimental Case 1; (b) Experimental Case 2; (c) Interface of CPPS elements.
Figure 4. Calculation of index I for situations through the Ishikawa method.
Figure 5. Detection of objects through the object detection algorithm: (a) gear 1; (b) gear 2; (c) bottles; (d) cans.
Figure 6. Detection of persons through the object detection algorithm: (a) operator in blue and unidentified person in red; (b) authorized operator; (c) operator and unidentified person.
Figure 7. Detection of specific poses through the pose detection algorithm: (a) pose during interference in assembly; (b) pose during interference in picking of gears; (c) pose during interference in picking of beverages.
Figure 8. (a) Detection of unidentified person and displaced case; (b) Case being aligned by the cobot; (c) Detection of unidentified person and displaced crate; (d) Crate being aligned by the cobot.
Figure 9. Anxiety evaluation and counter strategy decision time.
Table 1. Anxiety Categories.

Level 1, Panic (Emergency). Severity: >80 to 100. Equation: P = 80 + I × 20/100.
  Medical: a situation to which the affected person is not accustomed and which has a serious impact.
  CPS: a severe unknown incident identified by its impact; action is to be taken by all stakeholders, but it must be handled by the HC, the most intelligent resource.

Level 2, Post-traumatic (Trauma/Fear). Severity: >60 to 80. Equation: T = 60 + I × 20/100.
  Medical: a serious incident from the past; the affected person remains under constant stress that it may occur again.
  CPS: the assessment criterion is an emergency declaration by the HC on occurrence. The complete system must stop, and the observations must be rectified by the operator. After a certain number of repetitions without any damage, the situation may be transferred to the specific phobia category.

Level 3, Agoraphobia/Specific phobia (Known specific situation). Severity: >20 to 60. Equation: K = 20 + I × 40/100.
  Medical: a specific situation that is critical but known; it may be relieved based on available knowledge.
  CPS: known situations previously defined and not covered by panic, obsession, or social norms. They may be transferred to the obsession or post-traumatic category on confirmation of a false alarm or an emergency.

Level 4, General anxiety (Intended situation). Severity: 20. Equation: G = 20.
  Medical: a day-to-day routine situation for which refined solutions exist.
  CPS: the ideal/intended situation; it acts as a reference for categorizing other situations.

Level 5, Social norms (Etiquette). Severity: >0 to 20. Equation: N = 0 + I × 20/100.
  Medical: a communal problem not liked by humans.
  CPS: warnings that may not affect the current scenario but may affect performance at a later stage.

Level 6, Obsession (False alarm). Severity: 0. Equation: O = 0.
  Medical: a repetitive thought that leads to a ritual or a false alarm.
  CPS: false alarms previously declared through the Ishikawa analysis; a null value is assigned. Repetition and continuity are indicators; however, confirmation from the HC is required.
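Read together, the category equations of Table 1 map the Ishikawa severity index I (0 to 100) to an anxiety parameter whose value falls inside the severity band of its category. The snippet below is a minimal sketch of that mapping, assuming the category label itself is supplied by the knowledge base or the operator; the function name and category keys are illustrative, not taken from the paper.

```python
# Minimal sketch: map a severity index I (0-100) and an assigned anxiety
# category to the anxiety parameter A, using the equations of Table 1.
# Category assignment itself comes from the knowledge base / operator;
# the function name and category keys below are illustrative assumptions.

def anxiety_parameter(category: str, severity: float) -> float:
    """Return the anxiety parameter A for a given category and severity index I."""
    if not 0 <= severity <= 100:
        raise ValueError("severity index I must lie in [0, 100]")
    equations = {
        "panic":          lambda i: 80 + i * 20 / 100,  # Level 1, severity > 80 to 100
        "post_traumatic": lambda i: 60 + i * 20 / 100,  # Level 2, severity > 60 to 80
        "phobia":         lambda i: 20 + i * 40 / 100,  # Level 3, severity > 20 to 60
        "general":        lambda i: 20,                 # Level 4, intended situation
        "social_norms":   lambda i: 0 + i * 20 / 100,   # Level 5, severity > 0 to 20
        "obsession":      lambda i: 0,                  # Level 6, false alarm
    }
    return equations[category](severity)

# Example: "wrong item" with I = 45 assigned to the known-specific-situation
# (phobia) category yields A = 20 + 45 * 0.4 = 38, matching its row in Table 3.
print(anxiety_parameter("phobia", 45))  # 38.0
```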
Table 2. Process for decision-making CPPS.

CPS layer (Ser 1–13)
  Pre-process formalities:
    - Prepare the manufacturing/production plan
    - Register the expected situations
  Process pseudo-code:
    Start
    Upload execution plan
    Initialize
    Check state of human component
    Check state of physical components
    While no uncertainty:
        Execute the process
    If uncertainty:
        Go to Situational Assessment layer
    Else:
        Go to Initialize
    End

Situational Assessment layer (Ser 14–25)
  Pre-process formalities:
    - Index the anxieties of the defined situations through the Ishikawa method
    - Categorize the situations into anxiety types
    - Assign weights to the key variables for estimation of the matching score
    - Calculate the matching score (anxiety factor) with respect to the resources
  Process pseudo-code:
    Initialize
    Upload the matching scores
    Check the emerged situations
    Check the matching scores of the emerged situations
    If the category is changed by the operator:
        Re-designate the anxiety category of the situation
        Re-estimate the matching score
    Move to Resource Optimization layer

Resource Optimization layer (Ser 26–33)
  Pre-process formalities:
    - Define the optimization criteria for resource allocation: the objective function and the constraints (MIP in our case)
  Process pseudo-code:
    Initialize
    Case 1 (single situation): move to Decision-Making layer
    Case 2 (multiple situations): allocate resources to situations through MIP, then move to Decision-Making layer

Decision-Making layer (Ser 34–42)
  Pre-process formalities:
    - Define and design the logic for each situation, with inputs from experts, for the anxiety mitigation strategy
  Process pseudo-code:
    Initialize
    Upload resource allocation data
    Check situations
    Check resource allocation to situations
    Assign tasks to the available resources based on the allocation and the logic
    Execute the task through the PC
    Exhibit social signals to the HC
    Move to CPS layer
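The four layers of Table 2 form one closed loop: the CPS layer executes and monitors, and any detected uncertainty passes through situational assessment, resource optimization, and decision-making before control returns to execution. Below is a minimal, self-contained sketch of that loop; the data structures, the toy knowledge base, and the greedy stand-in for the MIP allocation are illustrative assumptions, not the authors' implementation.

```python
# Illustrative sketch of the four-layer loop of Table 2 (assumed names and data).
from dataclasses import dataclass, field

@dataclass
class Situation:
    name: str
    severity: float                               # Ishikawa severity index I
    category: str = ""                            # anxiety category (Table 1)
    anxiety: dict = field(default_factory=dict)   # matching score per resource

# Layer 2: situational assessment -- attach category and matching scores.
def assess(situations, knowledge_base):
    for s in situations:
        s.category, s.anxiety = knowledge_base[s.name]
    return situations

# Layer 3: resource optimization -- greedy stand-in for the MIP allocation
# (assumption: the resource with the higher matching score is the suitable one).
def allocate(situations):
    return {s.name: max(s.anxiety, key=s.anxiety.get) for s in situations}

# Layer 4: decision making -- trigger the expert-defined mitigation logic.
def decide(allocation):
    for situation, resource in allocation.items():
        print(f"Mitigate '{situation}' through the {resource}")

# Layer 1: one CPS monitoring cycle with two emerged situations.
knowledge_base = {
    "wrong item":          ("K", {"human": 43.0, "cobot": 44.0}),  # values from Table 3
    "unidentified person": ("K", {"human": 57.8, "cobot": 0.0}),
}
detected = [Situation("wrong item", 45), Situation("unidentified person", 72)]
decide(allocate(assess(detected, knowledge_base)))
```

In the framework itself, several simultaneous situations are allocated through the MIP model of the resource optimization layer rather than this greedy rule; the sketch only illustrates how the layers hand data to one another.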
Table 3. Expected situations, their anxiety, variables, and anxiety factors (H = human, C = cobot).

Situation | Severity (I) | Category | Parameter (A) | Task variable t (H/C) | Preference variable p (H/C) | Suitability Q (H/C) | Anxiety factor a (H/C)
Right item | 27 | G | 20 | 1/0 | 4/4 | 1/1 | 23/24
Wrong item | 45 | K | 38 | 1/0 | 6/6 | 1/1 | 43/44
No item | 36 | K | 34.4 | 0/1 | 5/5 | 1/0 | 39.4/0
Human interference | 63 | K | 45.2 | 1/0 | 8/8 | 0/1 | 0/53.2
Displaced case/crate | 54 | K | 41.6 | 1/0 | 7/7 | 1/1 | 47.6/48.6
Unidentified person | 72 | K | 48.8 | 0/1 | 9/9 | 1/0 | 57.8/0
Foreign object | 81 | K | 52.4 | 0/1 | 10/10 | 1/0 | 62.4/0
Obsession | 0 | O | 0 | 1/1 | 1/1 | 0/0 | 0/0
Time delay | 9 | N | 1.8 | 0/1 | 2/2 | 1/0 | 3.8/0
Threshold distance | 18 | N | 3.6 | 0/1 | 3/3 | 1/0 | 6.6/0
Cobot power failure | 100 | P | 100 | 0/1 | 12/12 | 1/0 | 112/0
Cobot collision | 91 | P | 98.2 | 0/1 | 11/11 | 1/0 | 109.2/0
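Across all rows of Table 3, the tabulated anxiety factors agree with a = Q × (A − t + p) evaluated separately for the human and the cobot, with the unsuitable resource (Q = 0) receiving a factor of zero. The sketch below reproduces the rows on that assumption; the closed-form expression is inferred from the numbers and should be read as a reconstruction, not as the authors' stated formula.

```python
# Reconstruction sketch: the anxiety factors in Table 3 are consistent with
# a = Q * (A - t + p) evaluated per resource (human, cobot). This expression
# is inferred from the tabulated numbers, not quoted from the paper.

rows = [
    # situation,              A,     (t_h, t_c), (p_h, p_c), (Q_h, Q_c)
    ("Right item",            20.0,  (1, 0), (4, 4),   (1, 1)),
    ("Wrong item",            38.0,  (1, 0), (6, 6),   (1, 1)),
    ("No item",               34.4,  (0, 1), (5, 5),   (1, 0)),
    ("Human interference",    45.2,  (1, 0), (8, 8),   (0, 1)),
    ("Displaced case/crate",  41.6,  (1, 0), (7, 7),   (1, 1)),
    ("Unidentified person",   48.8,  (0, 1), (9, 9),   (1, 0)),
    ("Foreign object",        52.4,  (0, 1), (10, 10), (1, 0)),
    ("Obsession",              0.0,  (1, 1), (1, 1),   (0, 0)),
    ("Time delay",             1.8,  (0, 1), (2, 2),   (1, 0)),
    ("Threshold distance",     3.6,  (0, 1), (3, 3),   (1, 0)),
    ("Cobot power failure",  100.0,  (0, 1), (12, 12), (1, 0)),
    ("Cobot collision",       98.2,  (0, 1), (11, 11), (1, 0)),
]

for name, A, t, p, Q in rows:
    a_human = Q[0] * (A - t[0] + p[0])
    a_cobot = Q[1] * (A - t[1] + p[1])
    print(f"{name:22s}  a(human) = {a_human:6.1f}   a(cobot) = {a_cobot:6.1f}")
```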
Table 4. Questionnaires.

Fluency with the collaborative robot
1. I trusted the robot to do the right thing at the right time.
2. The robot did not understand how I desired the task to be executed.
3. The robot kept disturbing me during the task.
4. The robot and I worked together for better task performance.
Perceived safety, comfort, and legibility
5. I felt safe while working with the robot.
6. I trusted the robot would not harm me.
7. The robot moved too drastically for my comfort.
8. The robot endangered the safety of unknown persons in the workspace.
9. I understood what the robot was going to do next.
Table 5. Accuracy of the two methods, mean automated decision time (MADT), and the SEM.

Subject | Standard method (iterations / errors / accuracy %) | Decision-making method (iterations / errors / accuracy %) | MADT (s)
1 | 8 / 2 / 75.00 | 16 / 1 / 93.75 | 0.0206
2 | 8 / 2 / 75.00 | 15 / 1 / 93.33 | 0.0206
3 | 8 / 2 / 75.00 | 14 / 1 / 92.85 | 0.0207
4 | 8 / 2 / 75.00 | 17 / 2 / 88.23 | 0.0211
5 | 8 / 2 / 75.00 | 16 / 1 / 93.75 | 0.0206
6 | 8 / 2 / 75.00 | 13 / 2 / 84.61 | 0.0215
7 | 8 / 2 / 75.00 | 12 / 1 / 91.66 | 0.0208
8 | 8 / 3 / 62.50 | 14 / 2 / 85.71 | 0.0214
9 | 8 / 2 / 75.00 | 10 / 2 / 80.00 | 0.0220
10 | 8 / 3 / 62.50 | 11 / 1 / 90.90 | 0.0209
11 | 8 / 2 / 75.00 | 12 / 1 / 91.66 | 0.0208
12 | 8 / 2 / 75.00 | 10 / 1 / 90.00 | 0.0210
13 | 8 / 2 / 75.00 | 15 / 3 / 80.00 | 0.0220
14 | 8 / 3 / 62.50 | 14 / 1 / 92.85 | 0.0207
15 | 8 / 2 / 75.00 | 11 / 1 / 90.90 | 0.0209
16 | 8 / 2 / 75.00 | 12 / 1 / 91.66 | 0.0208
17 | 8 / 2 / 75.00 | 12 / 1 / 91.66 | 0.0208
18 | 8 / 2 / 75.00 | 11 / 1 / 90.90 | 0.0209
19 | 8 / 2 / 75.00 | 13 / 1 / 92.30 | 0.0207
20 | 8 / 2 / 75.00 | 14 / 1 / 92.85 | 0.0207

Remarks: average accuracy (standard method) = 73.12%, SEM = 1.02; average accuracy (decision-making method) = 89.98%, SEM = 0.94; MADT = 0.021 s, SEM = 9.36 × 10⁻⁵; p = 1.83 × 10⁻¹⁰.
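The remarks in Table 5 summarize the per-subject accuracies as a mean and a standard error for each method. A short sketch of how those summary statistics follow from the tabulated values is given below; the paired t-test at the end is only an assumption about how a p-value of the reported order could have been obtained, since the exact test is not stated in the table.

```python
# Sketch: recompute the summary statistics in the remarks of Table 5 from the
# per-subject accuracies. The paired t-test is an assumption about how the
# reported p-value was obtained.
import statistics as st
from scipy import stats  # only needed for the t-test at the end

standard = [75, 75, 75, 75, 75, 75, 75, 62.5, 75, 62.5,
            75, 75, 75, 62.5, 75, 75, 75, 75, 75, 75]
decision = [93.75, 93.33, 92.85, 88.23, 93.75, 84.61, 91.66, 85.71, 80.00, 90.90,
            91.66, 90.00, 80.00, 92.85, 90.90, 91.66, 91.66, 90.90, 92.30, 92.85]

def sem(xs):
    """Standard error of the mean: sample standard deviation / sqrt(n)."""
    return st.stdev(xs) / len(xs) ** 0.5

print(f"Standard method:        mean = {st.mean(standard):.2f}%, SEM = {sem(standard):.2f}")
print(f"Decision-making method: mean = {st.mean(decision):.2f}%, SEM = {sem(decision):.2f}")

# A paired t-test across the 20 subjects yields a p-value of the same order
# of magnitude as the one reported in the table.
t, p = stats.ttest_rel(decision, standard)
print(f"Paired t-test: t = {t:.2f}, p = {p:.2e}")
```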
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.
