Article

Smart Interactive Technologies in the Human-Centric Factory 5.0: A Survey

Computer Science Department, University of Turin, 10149 Turin, Italy
* Author to whom correspondence should be addressed.
Appl. Sci. 2022, 12(16), 7965; https://doi.org/10.3390/app12167965
Submission received: 29 April 2022 / Revised: 29 July 2022 / Accepted: 30 July 2022 / Published: 9 August 2022
(This article belongs to the Special Issue Human and Artificial Intelligence)

Abstract

In this survey paper, we focus on smart interactive technologies and provide a picture of the current state of the art, exploring the way new discoveries and recent technologies have changed workers’ operations and activities on the factory floor. We focus in particular on the Industry 4.0 and 5.0 visions, wherein smart interactive technologies can enhance the intelligent behavior that machines expose, in a human-centric AI perspective. We consider smart technologies wherein the intelligence may be in and/or behind the user interfaces, and for both groups we try to highlight the importance of designing them with a human-centric approach, framed in the smart factory context. We review relevant work in the field with the aim of highlighting the pros and cons of each technology and its adoption in the industry. Furthermore, we try to collect guidelines for the human-centric integration of smart interactive technologies in the smart factory. In this way, we hope to provide the future designers and adopters of such technologies with concrete help in choosing among different options and implementing them in a user-centric manner. To this aim, surveyed works have also been classified based on the supported task(s) and production process phases/activities: access to knowledge, logistics, maintenance, planning, production, security, workers’ wellbeing, and warehousing.

1. Introduction

In 2015, the introduction of emerging technologies such as wireless sensor networks, big data, cloud computing, embedded systems, and mobile Internet into the manufacturing environment determined the conditions for factories to enter the era of the fourth industrial revolution [1]. As a reaction to the massive installation of such technologies on modern factory floors, the strategic initiative called “Industry 4.0” was proposed and adopted as part of the “High-Tech Strategy 2020 Action Plan” of the German government [1]. Other main industrial countries followed, proposing strategies that were compliant with the idea of the just-born Industry 4.0. The Industry 4.0 concept describes production-oriented cyber-physical systems (CPS) which integrate production facilities, warehousing systems, logistics, and even social requirements to establish global value creation networks [2].
Various changes have been introduced by the smart factory vision in the ways production is conceived and workers operate on factory floors. Such changes involve different levels of the production process, introducing a number of disruptive technologies which enable the digitalization of the manufacturing sector, enclosed in different areas of expertise:
  • Data, computational power, and connectivity;
  • Analytic and artificial intelligence;
  • Human–machine interaction;
  • Digital-to-physical conversion.
In January 2021, the European Commission published a report called “Industry 5.0. Towards a sustainable, human-centric and resilient European industry” [3], presenting the need to speed up the transformation already underway, which uses digital and green technologies to heal the environment and the economy. Industry appears to be the pivot of this important transition, representing the only means to achieve wellbeing from a human- and a production-related point of view but also respecting the environment. Industry 5.0, which is intended to complement and extend the features of Industry 4.0, aims to make the industry sector more sustainable and resilient.
The report describes the main building blocks of the Industry 5.0 approach [3]:
  • The industry must now become the accelerator and enabler of change and innovation;
  • Digital technologies such as artificial intelligence (AI) and robotics can optimize human–machine interactions, underlining the added value human workers bring to the factory floor;
  • By developing innovative technologies in a human-centric way, Industry 5.0 can support and empower, rather than replace, workers;
  • Greening the economy will be successful with European industry taking a strong leadership role;
  • Industry 5.0 will also have a transformative impact on society, especially for industry workers, who may see their role changed, requiring new skills.
In the Industry 5.0 vision, technologies can optimize both workplaces and worker’s performances, favoring the interaction between human and machine, rather than replacing one with the other. The main difference between the fourth and fifth industry visions is that humans are envisioned to collaborate with robots and innovative technologies, which have to be designed in a human-centric way.
Industry 5.0 identifies the following six enabling technologies [4]:
  • Individualized human–machine interaction technologies that interconnect and combine the strengths of humans and machines.
  • Bio-inspired technologies and smart materials that allow materials to have embedded sensors and enhanced features while being recyclable.
  • Digital twins and simulations to model entire systems.
  • Data transmission, storage, and analysis technologies that are able to handle data and foster system interoperability.
  • Artificial intelligence to detect, for example, causalities in complex, dynamic systems, leading to actionable intelligence.
  • Technologies for energy efficiency, renewables, storage, and autonomy.
According to Xu et al. [5], Industry 5.0 is not a technology-driven revolution but a value-driven initiative that promotes technological transformation with a particular purpose, generating value by putting together economy (profitability, scalability, and business models), ecology (CO2 reduction and circular economy), and society (societal challenges and human-centricity).
In this paper, keeping the context of the smart industry, we focus in particular on the dimension of smart interactive technologies, a term that can be traced back to that of intelligent user interfaces (IUIs), which aim at improving the symbiosis between humans and computers by merging artificial intelligence (AI) and human–computer interaction (HCI), including intelligent capabilities in the interface in order to improve performance, usability, and experience in critical ways (for more details, see [6]). This may also involve designing an interface that effectively leverages human skills and capabilities so that human performance with the application improves. In addition, human-centered artificial intelligence (HCAI) has recently become a very popular term with a similar purpose; it focuses on the need to bring the human into the center of AI design, thus creating systems that provide smart computations that are beneficial to humans, supporting them in achieving their objectives, “focusing on enhancing human performance, making systems reliable, safe, and trustworthy” [7]. Previously, researchers and developers directed their attention towards developing AI algorithms and systems with an emphasis on the machines’ autonomy. Conversely, HCAI focuses on user experience design, putting human users at the center of design thinking. Researchers and developers of HCAI systems measure their success with human performance and satisfaction metrics [8]. Indeed, compared with traditional technologies, AI-based technologies elicit different expectations from a user’s perspective. Besides the established principles of human-centered design, there are further important aspects that are peculiar to this type of system and that need to be considered during design, such as transparency, explainability, interpretability, user control vs. system autonomy, fairness, etc. [9].
In the following, we discuss some of the most relevant smart interactive technologies which will characterize the Industry 5.0 scenario. According to Sonntag [6], what defines an intelligent user interface, which is a term embracing smart interactive technologies as explained above, is the fact that the intelligence can be found:
  • In the user interface(s) of the system, with the purpose of enabling an effective, natural, or otherwise appropriate interaction of users with the system. Examples are human-like communication methods such as speech or gesture [10], multimodal interfaces and smart environments (including IoT- and smart-object-based environments) [11,12], systems that personalize the modality of interaction to individual users taking into account her/his previous choices and preferences [13], etc.
  • Behind the user interface, as, for instance, in personalized and not personalized recommender systems [14,15], i.e., systems that employ intelligent technology to support information retrieval; intelligent learning environments [16]; interface agents/robots that perform complex or repetitive tasks with some guidance from the user [17]; and situated assistance systems that monitor and support a user’s daily activities [18], as, for instance, in IoT-based industrial environments [19], etc.
In Section 2, we propose examples that can be classified either in the first group, as large displays proposing touch and touchless interaction, virtual and augmented reality, and wearable, tangible user interfaces, or in the second group, as collaborative robots, or in both, as is the case for IoT and smart environments. In particular, we focus on the interactive part of such technologies, thus excluding from our discussion those cases where intelligence has a relatively limited impact on the type and modality of interaction.

2. Smart Interactive Technologies in the Human-Centric Smart Factory

The model of the smart industry, both in the 4.0 and 5.0 visions, is enabled by advanced digitalization and the spread of the Internet of Things (IoT), cyber-physical systems, and smart technologies on the factory floor, and it is destined to radically change the approach to work in today’s industry. A significant increase in the demand for workforce is expected at all levels, in order to manage complexity, abstraction, and problem-solving processes. From the point of view of industrial workers, the introduction of new opportunities and improvements in the quality of work is expected: a more stimulating work environment, greater autonomy, and opportunities for personal development. As a consequence, workers will be led to act on their own individual initiatives, acquiring excellent interaction and communication skills and organizing their workflows [20]. In this new vision, the term human-centred factory aims to define new socially sustainable workplaces where the human dimension is a key cornerstone, highlighting the requirements for shifting from a traditional task-centric production to a worker-centric production [21]. Therefore, it becomes of fundamental importance to improve the interaction and collaboration between humans and intelligent machines, as with, for instance, cobots.
In the following subsections, we present a set of relevant smart interactive technologies for the present and future of Industry 5.0. For each of the presented technologies, we propose a brief definition, summarize the main related work, and then give examples in the smart factory. The reader should notice that the term smart factory includes in its definition what is understood as Industry 4.0 and 5.0, and we use the terms interchangeably. We conclude each subsection with a brief summary of challenges and a set of guidelines.

2.1. Research Methodology

This article surveyed almost 100 papers and technical reports describing projects and visions related to a human-centered perspective in the smart factory, starting from 2018. At that time (2018–2021), we were involved in two national smart factory projects, HUMANS (human-centered manufacturing systems) (https://dmd.it/humans/en/humans/, accessed on 29 July 2022) and HOME (hierarchical open manufacturing Europe) (https://www.home-opensystem.org/index.php/en/home-3/, accessed on 29 July 2022) with the goal of leading research tasks focused on the interface and the interaction between man and machine through intelligent applications, as in wearable technologies, touchless interaction, interactive (production line) data visualization, etc. With regard to these aspects, we provided our industrial partners with requirements, guidelines, and specifications for the interaction between man and machine in the smart industry, as well as simulation demos on data visualization [22], gestural interaction [23,24], and wearable technologies.
Thus, we started collecting material on the basis of the requirements we had for the aforementioned projects. We reviewed the main survey papers ([25,26,27,28,29,30,31,32,33]) and company technical reports ([31,34,35]) in the field. Starting from the work referred to in the above-mentioned surveys, we extended our research using ResearchGate as a platform and search keywords based on the main technologies we explored (collaborative robots, tangibles, wearables, large displays, IoT, gestural interaction, etc.), contextualized to the smart factory.
A fundamental inspiration for the technologies to consider and review, under a smart interaction perspective, came from the work by Romero, Stahre et al. [36], who developed the concept of Operator 4.0, which aims at expanding the capabilities of the industry worker with innovative technological means, rather than replacing the worker with robots. Operator 4.0 includes eight future projections of extended operators: the super-strength operator (operator + exoskeleton), the augmented operator (operator + augmented reality), the virtual operator (operator + virtual reality), the healthy operator (operator + wearable tracker), the smarter operator (operator + intelligent personal assistant), the collaborative operator (operator + collaborative robot), the social operator (operator + social networks), and the analytical operator (operator + big data analytics).
We decided to present the results of our analysis following a systematic frame, structured in such a way as to provide a set of guidelines for each section so that, in addition to reviewing the available technologies, we could offer a useful tool for understanding how to best implement them. The guidelines were derived from the works found for each section.
We believe that the analysis presented in this survey could provide hints for the design and development of new human-centered solutions in the factory of the future. According to our knowledge, no other comprehensive surveys have yet been published on smart technologies for Industry 5.0 that focus on their design in a human-centric way.

2.2. Large Displays

Large displays first appeared in the early 1960s, but they only began to be used in real installations at the beginning of the new millennium. One of their advantages is their ability to encourage users to collaborate and socialize, both in business and in entertainment contexts. While only touch interaction was supported at first, interaction through touchless gestures has nowadays become possible thanks to technological evolution. According to [37], large displays are characterized by five attributes or dimensions:
  • Orientation (vertical, horizontal, diagonal, or at ground level);
  • Display technology (monitor, front projection, or rear projection);
  • Purpose (gaming, entertainment, productivity, social interactions, or advertising);
  • Interaction methods (touch, touchless, tangible interfaces, or via an external device);
  • Location (office/workplace, museums, universities, shops, or on the street).
In their study, the authors of [37] examined a number of scientific papers relating to large displays and observed that most displays are positioned vertically and are actually monitors, not simple screens with front- or rear-projected images. Such displays can be used at the workplace, where they can foster cooperation between staff members.

2.2.1. Touch and Touchless Interaction

Touch and touchless interaction modes can affect the type of applications to be used on large displays and have different pros and cons [37].
Touch interaction. Sambrooks and Wilkinson [38] carried out a study where they asked participants to perform a series of tasks where they had to select one or more elements through different interaction modes. Touch interaction was very precise and had a low margin of error. This positive result is probably partly due to the familiarity users have with touch interaction, albeit to a lesser extent than with mouse-based interaction, which proved to guarantee the highest level of precision. This primacy may depend on various factors, such as the size of the screen with which one interacts, the size of the icons or objects to be selected, or the problem of “fat fingers”, whereby users with large fingers may encounter difficulty in using small screens.
It should be emphasized that touch interaction has evolved over time, passing from the simple touch of a single hand to multitouch, which allows the use of multiple fingers for actions such as zooming in and out and moving elements on the screen, and gives the possibility of using human touch in combination with other interactive tools.
However, touch interaction can be uncomfortable on some occasions, especially in the business and industrial sectors, in cases where users need to wear gloves [23], or when there are other disturbing elements. Touchless interaction, where users do not have to touch any interface, can be used as an alternative on such occasions.
Touchless interaction. Different interaction styles can be classified under the “touchless” umbrella term, ranging from hand-based gestures recognized by a tracking device to the detection of the posture, position, or presence of the user’s body, to gaze tracking and facial expression recognition. The best known examples of touchless interaction are often associated with the gaming and entertainment domain, where both the tracking of body movements (e.g., using Kinect (https://developer.microsoft.com/en-us/windows/kinect/, accessed on 29 July 2022)) and the detection of gestures executed via an external, dedicated device (e.g., PlayStation Move) are used. Other dedicated devices, such as the Myo bracelet, can be worn by users. Touchless interaction exploits the natural language of the body, which is why it could significantly reduce the distance between the user and the interface, especially if used in the business context. However, it is essential to design the interface and user experience (UX) so that they are as fluid and engaging as possible, in order to accompany the user in the interactive process and not cause frustration. In this regard, the UX design will also have to take into account the effort that repeated actions with gestural interfaces can determine, which is much higher than with a touch- or mouse-based interface. In addition, designers will have to make sure that the interface layout allows a precise execution of actions, also considering the fact that many potential users are not very familiar with this type of interaction. For this reason, it is also necessary to ensure that the interface returns clear feedback and makes any error reversible, including those caused by involuntary body movements and incorrectly detected gestures (immersion syndrome; see Guideline 4 in Section 2.2.3).

2.2.2. The Smart Factory Context

Smart factories are proving increasingly capable of absorbing and proposing solutions from a wide range of disciplines. The introduction of gestural interfaces and the widespread use of large screens is a case in point. Gestural interfaces, similarly to other technologies and approaches in the HCI field, are part of the physical layer in [39]’s classification of technologies for the smart factory.
Applications: 3D object manipulation and task browsing. Ref. [40] investigated the interaction between people and 3D objects shown on public displays in an urban planning scenario. Participants were asked to perform tasks through spontaneously produced hand-gestures and phone-gestures. The process led to the definition of two sets of user-defined gestures. Although the proposed study limited its investigations to the area of urban planning, similar studies which identify sets of user-defined gestures can be conducted in various fields, including the industry sector.
For example, Ref. [24] proposed the use of large displays in combination with touchless gestural interaction on the shop floor (Figure 1). In the context of a smart industry project, the project consortium developed a smart armband which allows the detection of gestures from movement and muscle biosignals, while a machine learning library allows the calibration and recognition of task-specific gestures (Figure 2). The definition of an appropriate set of gestures underwent several steps, including a guessability study [23]. The proposed application was tested in a small company specializing in sheet metal fabrication, where welders frequently switch between their workbench and a nearby desktop computer to browse the tasks they have been assigned and visualize 3D models of the final product. Results were generally positive: the participants were favorable to our solution and willing to use it in their everyday work activities.
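To make this kind of pipeline concrete, the following is a minimal, hypothetical sketch of how gestures could be calibrated and recognized from an armband streaming movement (IMU) and muscle (EMG) signals. The feature choices, window handling, confidence threshold, and classifier are illustrative assumptions, not the actual implementation described in [23,24].

```python
# Minimal sketch of a gesture-recognition pipeline for an armband that
# streams IMU (movement) and EMG (muscle) signals. Feature choices and
# thresholds are illustrative assumptions, not the surveyed project's design.
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

def window_features(imu, emg):
    """Summarize one fixed-length window (n_samples x channels) of signals."""
    return np.concatenate([
        imu.mean(axis=0), imu.std(axis=0),      # posture / motion energy
        np.abs(emg).mean(axis=0),               # mean absolute EMG value
        np.sqrt((emg ** 2).mean(axis=0)),       # EMG root mean square
    ])

def calibrate(windows, labels):
    """Calibration: each worker records labeled repetitions per gesture."""
    X = np.array([window_features(imu, emg) for imu, emg in windows])
    clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", probability=True))
    clf.fit(X, labels)
    return clf

def recognize(clf, imu, emg, min_confidence=0.8):
    """Return the predicted gesture, or None when confidence is too low
    (a simple way to reject unintentional movements)."""
    x = window_features(imu, emg).reshape(1, -1)
    proba = clf.predict_proba(x)[0]
    best = int(np.argmax(proba))
    return clf.classes_[best] if proba[best] >= min_confidence else None
```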
Tackling usability issues. An interesting solution, specifically focusing on the context of smart factories and combining gesture recognition with augmented reality (see Section 2.4) to address usability issues, was given by [41]. To overcome the steep and time-consuming learning curves involved in switching from one industrial device to another, Ref. [41] investigated the development of a universal interaction device, capable of communicating with various field devices and plant modules of an industrial facility via common wireless communication standards. Their aim is the creation of a platform that has one user interface for all purposes, is nonproprietary, and can be designed individually depending on its owner’s requirements. Merging gesture recognition with augmented reality, they offer intuitive interactions, freeing the operator from the constraints of manipulating hand-held objects.

2.2.3. Challenges and Guidelines for Touch and Touchless Interaction with Large Displays in the Smart Factory

Based on our analysis of the relevant literature, it is apparent that free-form gestural interaction is useful in contexts where touch-based interaction is not possible (e.g., because users wear gloves) or not advisable (e.g., because of hygiene policies). On the other hand, performing gestures might be physically demanding and, especially in the case of touchless interaction, some specific issues may emerge, such as:
  • Unintentional gestures might be misinterpreted by the system as intentional gestures.
  • Carrying out tasks that require precision through free-form gestures might be problematic.
More specifically, Garzotto and Valoriani [42] proposed the following guidelines for the design of gestural interaction, based on previous work by [10,43,44].
  • Guideline 1: Semantic intuitiveness. Gestures should have a clear cognitive association with the semantic functions they perform and the effects they achieve.
  • Guideline 2: Minimize fatigue. Gestural communication involves more muscles than keyboard interaction or speech. Gestural commands must therefore be concise and quick and minimize the user’s effort and physical stress.
  • Guideline 3: Learnability. It must be easy for the user to learn how to perform and remember gestures, minimizing the mental load of recalling movement trajectories and associated actions. The gestures that are most natural and easy to learn and are immediately assimilated by the user are those that belong to everyday life or involve the least physical effort. These gestures should be associated with the most frequent interactions.
  • Guideline 4: Intentionality (immersion syndrome). Users can perform unintended gestures, i.e., movements that are not meant to communicate with the system they are interacting with. These are usually evoked when the user is communicating simultaneously with other devices or people, or just resting his or her body. Immersion syndrome occurs if every movement is interpreted by the system, whether or not it was intended, which may determine interaction effects against the user’s will. The designer must identify well-defined means to detect the intention of the gestures, as distinguishing useful movements from unintentional ones is not easy (a minimal filtering sketch follows this list).
  • Guideline 5: Precision. Tasks that require precise interaction, e.g., the fine selection of a specific value in a large set of alternatives presented on the screen, may be problematic: when operating at a distance, we cannot obtain a good resolution because of the intrinsic instability of movements in free space. Touchless gestural input or control should be carefully designed with a special attention to precision.
  • Guideline 6: Feedback. Appropriate feedback indicating the effects and correctness of the gesture performed is necessary for successful interaction, to improve users’ confidence in the system, to allow users to learn the appropriate manner of performance, and to help users understand what was wrong with their actions.
  • Guideline 7: Provide reversible actions. Commands must be easy to undo to easily cancel any unintended action, and “backward navigation” must be supported, to allow the user to return to previously seen objects or revise previous choices.
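As referenced in Guideline 4, the following is a minimal, hypothetical sketch of one way to detect gesture intentionality: commands are accepted only after an explicit activation gesture and within a short interaction window. The gesture name, thresholds, and timeout are assumptions for illustration, not a prescribed design.

```python
# Illustrative sketch for Guideline 4: treat gestures as intentional only
# after an explicit "engage" gesture, and disengage after inactivity.
# Gesture names and timings are invented for illustration.
import time

ENGAGE_GESTURE = "open_palm_hold"   # hypothetical deliberate activation pose
IDLE_TIMEOUT_S = 5.0                # disengage after 5 s without commands

class IntentFilter:
    def __init__(self):
        self.engaged = False
        self.last_command_t = 0.0

    def accept(self, gesture, confidence, now=None):
        """Return the gesture if it should be executed, else None."""
        now = time.monotonic() if now is None else now
        if self.engaged and now - self.last_command_t > IDLE_TIMEOUT_S:
            self.engaged = False            # user likely stopped interacting
        if gesture == ENGAGE_GESTURE and confidence > 0.9:
            self.engaged = True
            self.last_command_t = now
            return None                     # activation itself is not a command
        if self.engaged and confidence > 0.8:
            self.last_command_t = now
            return gesture                  # deliberate command
        return None                         # ignored: likely unintentional
```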

2.3. Virtual Reality

Virtual reality (VR) implies a complete immersion in a digitally built world. VR first appeared in the late 1980s, but it took another thirty years before it became actually available [45]. Only in the last few years have affordable devices such as virtual reality cases for smartphones appeared on the market (Figure 3), making potential enhancements in the field of manufacturing accessible also to smaller businesses. Similarly to augmented reality (see Section 2.4), VR is often combined with gestural interfaces. Since VR creates immersive experiences, meaningful interactions with the virtual environment enabling users to touch, move, and interact with virtual objects via standardized gestures must be supported. Thus, studies and enhancements in the field of gestural interfaces also have an impact in this area, determining significant advancements in the perceived naturalness of the virtual environment.
On the other hand, one cannot ignore the effects that technologies such as VR have on users in terms of the modification of their consciousness and perception. Ref. [46] pointed out how web or mobile interfaces can potentially, in specific cases, disconnect users from the physical world, increasing the risk of user alienation and lowering the user experience. It is thus easy to see how such risks grow exponentially with the adoption of systems that provide a full immersion in a digitally built world. Ref. [47] explored the effects of virtual reality on the modification of the consciousness of users and the pathological implications that arise in such systems. The authors highlighted the risks and pointed out the need for serious scientific study in the field in order to gain the best from the adoption of such technologies in environments such as the smart factories’ shop floors, limiting potential side effects.

2.3.1. The Smart Factory Context

VR can be adopted at many stages of the production process.
Applications: (remote) factory layout planning. Factory layout planning, for example, is a long-standing area in production engineering that could potentially benefit from VR integration to allow workers and equipment to be more productive. Facility layout techniques and, particularly, factory layout planning apply to the case where several physical means have to be located in a certain area, aiming at developing an efficient and effective plant layout for all the available resources [48]. Ref. [49] proposed a modeling approach for VR-supported layout planning (VLP) tasks. The authors identified three methods for modeling the virtual environment:
  • Using cameras or scanners along with algorithms to automatically convert image and video data into spatial data.
  • Modeling facilities entirely using computer-aided-design software (CAD) or virtual reality modeling languages (VRML).
  • Combining the previous two as a hybrid approach.
Collaboration between users situated in different locations could also be supported by immersive virtual reality user interfaces (VRUIs). Ref. [50] described a VR-based approach to factory planning, aimed to allow the simultaneous visualization, investigation, and analysis of data by multiple connected users. The authors classified interactions into human–human interactions and human–machine interactions, to analyze and assign them, taking into account the needs of factory planning in a virtual environment. They structured the whole factory planning process into three fields: target planning, conceptual planning, and realization planning. Moreover, they explained how to speed up actions within the planning process by implementing collaborative factory planning tools realized by interconnected but spatially distributed VR systems.
Applications: virtual commissioning and digital twins. Related to VR factory planning is virtual commissioning (VC), namely, evaluating a production line in a virtual environment before the physical production line is constructed [51]. According to Lee and Park [52], a virtual manufacturing system (namely, virtual commissioning) is a computer-based environment that simulates individual manufacturing processes in an efficient way. Indeed, virtual commissioning enables the full verification of a manufacturing system by performing a simulation involving a virtual plant and a real controller. This requires the virtual plant model to be fully described at the level of sensors and actuators. Although virtual commissioning can significantly reduce the time and effort required at the real commissioning stage, there are obstacles to its implementation. Since a virtual plant needs to communicate with a real controller, the virtual devices should be modeled at the level of sensors and actuators, which is not easy for control engineers who do not have in-depth knowledge on modeling and simulation [52]. Closely related to VC is the emerging digital twin (DT) technology, commonly referred to as one of the key enabling technologies of Industry 5.0. A DT can be defined as an evolving digital profile of the historical and current behavior of a physical object or process that helps optimize business performance (https://www2.deloitte.com/content/dam/Deloitte/kr/Documents/insights/deloitte-newsletter/2017/26_201706/kr_insights_deloitte-newsletter-26_report_02_en.pdf, accessed on 29 July 2022). While VC can be defined as the validation of automated industrial production systems before any physical commissioning takes place, a DT can be understood as a virtual model of a physical industrial production system that is constantly updated and, in turn, updates the physical object through a real-time and bidirectional data exchange (a DT usually includes data streams in both directions between the physical and virtual objects). According to Lidell et al. [52], the increased use of VC and DTs is important for many reasons, such as “increasing safety for operators through minimising harmful situations and correctly validating safety systems, as well as allowing improved working conditions, as described in the second interview. Furthermore, increased use of VC and DTs should also make the process of designing and developing production systems more cost-effective. More use of VC and DTs could also minimize wastes, such as ordering wrong components and machines, and to optimize the resource and energy usage through simulations and using DTs”.
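As a hedged illustration of the bidirectional data exchange just described, the following sketch mirrors telemetry from a hypothetical physical asset into a virtual model and derives a corrective setpoint to send back. The asset, field names, thresholds, and control rule are invented for illustration, and real deployments would typically rely on an industrial transport such as OPC UA or MQTT.

```python
# Minimal sketch of the bidirectional exchange behind a digital twin: the
# virtual model mirrors the physical asset's state in (near) real time and
# pushes setpoints back. All names and the thermal rule are assumptions.
import json, time

class SpindleTwin:                  # hypothetical machine-tool spindle
    def __init__(self):
        self.state = {"rpm": 0.0, "temp_c": 20.0}
        self.history = []           # evolving profile of past behavior

    def ingest(self, telemetry_json):
        """Physical -> virtual: update the twin from a telemetry message."""
        self.state.update(json.loads(telemetry_json))
        self.history.append((time.time(), dict(self.state)))

    def advise(self):
        """Virtual -> physical: derive a setpoint from the mirrored state."""
        if self.state["temp_c"] > 80.0:         # simplistic thermal rule
            return json.dumps({"rpm_setpoint": self.state["rpm"] * 0.8})
        return None                             # no correction needed

twin = SpindleTwin()
twin.ingest('{"rpm": 12000, "temp_c": 85.5}')   # message from the plant
print(twin.advise())                            # -> {"rpm_setpoint": 9600.0}
```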
Tools for the creation of VR environments. Finally, Ref. [45] investigated the problem of teaching mechanical engineers how to create industry-themed virtual reality environments and faced the absence of tools that would fulfill the purpose without requiring complicated coding. The authors created their own framework, using a game engine (the Source engine) and enriching it with a library of textures, models, and scripts called DigiTov, later adapted also for Unity3D.

2.3.2. Challenges and Guidelines for Virtual Reality Interfaces in the Smart Factory

Our review shows that VR technologies can prove useful in a smart factory context, especially when it comes to planning and enhancing collaboration. On the other hand, there may be negative physical side effects such as nausea, seizures, or eye soreness; in addition, users’ consciousness may be affected, bringing a loss of spatial awareness, dizziness, and disorientation (https://www.classvr.com/health-and-safety/, accessed on 29 July 2022).
Ref. [53] reviewed the relevant literature and identified the following guidelines to support the development of VR applications:
  • Guideline 1: The degree of freedom should be minimal;
  • Guideline 2: Avoid sickness related to brightness, acceleration, and the unnecessary use of images;
  • Guideline 3: Create the sense of a 3D environment by using depth cues;
  • Guideline 4: Use user interface (UI) elements correctly;
  • Guideline 5: Provide a user guide that helps users get started in the 3D environment;
  • Guideline 6: Use a minimum number of controls, which helps the user to remember them;
  • Guideline 7: Model virtual objects on real-world objects;
  • Guideline 8: Try to use Gestalt principles such as similarity, proximity, and hierarchy;
  • Guideline 9: Try to give feedback to the user when they interact with any virtual object;
  • Guideline 10: Use audio to help the user experience the real world in a virtual environment.

2.4. Augmented and Mixed Reality

Augmented reality (AR) allows the user to interact with a real-world environment where objects are enhanced by computer-generated virtual projections of data and information, sometimes making use of multiple sensory modalities such as visual, auditory, haptic, somatosensory, and olfactory.
Widely adopted in combination with gestural interfaces to obtain immersive experiences, augmented reality requires a device to display the aforementioned projections and therefore allow users to interact with them. Apart from large displays (see Section 2.2), many augmented reality applications take advantage of mobile phones and tablets. A more specific solution is represented by smart glasses and head-mounted displays. The latter in particular are already used in the military and engineering fields and consist of devices that position the display in front of the user’s eyes using a helmet or headbands, allowing total freedom of movement. Such a display can be monocular (i.e., for one eye only), biocular (i.e., two displays showing the same image), or binocular (i.e., for stereoscopic images). Designing an augmented reality experience requires taking several factors into account, such as the device chosen for the projections, which can influence their effectiveness, and the surrounding environment, which may have surfaces unsuitable for displaying augmented data (too bright, reflective, or transparent) and may not be suitable for performing certain gestures for safety reasons.
Largely discussed in the literature, a standard definition of mixed reality is yet to be established. Rather than presenting a radically different paradigm, the concept of mixed reality refers to the different levels of the distortion of the real environment, from pure augmented reality to fully virtual reality. Ref. [54] defined a model with two extrema: a fully real environment, the real world, and a fully virtual environment. Each level in between represents the different levels of what the author calls mixed reality.
Ref. [55] reported on a literature survey of 68 papers as well as interviews with AR/VR experts, aimed at understanding the state of the art of mixed reality technologies and related theories. The feature which distinguishes mixed reality from augmented reality is the creation of fully explorable, 3D images in a real-world environment. As described above, in fact, augmented reality only enriches the real environment with 2D elements such as markers, information panels, etc.

2.4.1. The Smart Factory Context

Applications: assembly and maintenance. Augmented reality is applied to guide and help workers in processes such as the assembly and maintenance of complex objects and quality checking, thus decreasing users’ cognitive load and improving efficiency [56,57]. More specifically, workers can interact with three-dimensional projections of the objects they are going to assemble and have the possibility of making any necessary checks before starting to build objects in the physical world.
Applications: access to technical data. Another possibility regards the provisioning of technical data, such as manuals, component availability, and maintenance history [57]: augmented reality can improve efficiency in providing relevant information in time as well as geo-located at the appropriate place. In this respect, Ref. [58] identified a series of principles to improve user experience:
  • Interoperability, i.e., the use of the same standards for texts and visual elements, so as to facilitate human–machine interaction through documentation;
  • Virtualization, i.e., paper documents are virtually copied on the machine, which can thus monitor user actions;
  • Decentralization, i.e., documents are divided into sections, so that the machine can show users only the part they need at a certain time;
  • Real-time functionality, i.e., the system must be able to analyze the collected data in real time in order to detect any errors. Similarly, technical documentation must be updated in real time, if necessary;
  • Service orientation, i.e., carrying out each procedure as if it were a service (e.g., remote maintenance operations);
  • Modularity, i.e., the adoption of a modular structure which allows greater flexibility, for example, when new technical procedures must be included within the existing documentation.
An example of real-time technical data delivery is given by [59], who developed an app aimed at providing the operators on the shop floor with technical manuals, operating diagrams, maintenance history, and component availability in the warehouse, connected to the smart manufacturing software.
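The decentralization and real-time principles listed above can be illustrated with a small, hypothetical sketch in which manuals are stored as sections keyed by machine, procedure, and step, so that the operator’s device receives only the fragment needed at that moment. The identifiers and contents are invented and do not reproduce the app described in [59].

```python
# Sketch of the "decentralization" principle: manuals are stored as sections
# keyed by (machine, procedure, step), so a worker's device only receives the
# fragment relevant right now. Machine IDs and texts are invented.
MANUAL_SECTIONS = {
    ("press-07", "maintenance", 1): "Lock out power at panel B before opening.",
    ("press-07", "maintenance", 2): "Replace the hydraulic filter (part HF-210).",
}

def next_instruction(machine_id, procedure, step):
    """Return only the section the operator needs at this point in the task."""
    section = MANUAL_SECTIONS.get((machine_id, procedure, step))
    if section is None:
        return "End of procedure or unknown step - consult the full manual."
    return section

# e.g., after the worker scans the machine's QR code and completes step 1:
print(next_instruction("press-07", "maintenance", 2))
```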
Available devices and technologies. As far as specific devices and technologies are concerned, augmented reality smart glasses (ARSG) are widely used in the context of smart factories [60]. However, they can give rise to privacy-related issues, in that the use of cameras and other sensors could affect users’ behavior and decision making. According to [61], in fact, privacy concerns are one of the factors influencing consumers’ decisions to adopt ARSGs.
Regarding mixed reality, the first commercial solution was Microsoft HoloLens, whose early showcases included a collaboration between NASA and Microsoft. Initially devised for gaming purposes, Microsoft HoloLens was then widely applied on the shop floors of smart factories. A real-world application scenario was provided by Fifthingenium (https://fifthingenium.com/, accessed on 29 July 2022), a company specialized in hybrid reality solutions for the industry sector: the Holo Prototype Viewer allows workers to interact with 3D models in their physical environments, creating a mixed reality experience.

2.4.2. Challenges and Guidelines for Augmented and Mixed Reality Interfaces in the Industry 4.0 Context

Our review shows that augmented and mixed reality applications can be helpful when it comes to dynamically providing information to operators on production lines, as well as interactive manuals to be used in the assembly and maintenance areas. However, in addition to privacy-related issues, implementation can be difficult:
  • The smooth motion of augmented content can be hard to obtain with the gyroscopes of ordinary mobile devices or tablets.
  • Depending on the projection surface texture and location, limitations may arise which may hinder an accurate understanding of the surface itself.
To sum up these observations and inspire augmented and mixed reality implementation in the smart factory, we report the following guidelines, based on the work of [60]:
  • Guideline 1: Selection. An accurate choice of the device that will support the implementation of AR on the shop floor can affect its effectiveness on the production process and must be carried out through a step-by-step evaluation of the market.
  • Guideline 2: Compliance. Privacy policies must be examined and choices on the technologies to be adopted must follow such requirements in order to avoid inapplicable decisions.
Further attention points, which mainly take an implementation-oriented perspective, are included in Google Augmented Reality Design Guidelines (https://designguidelines.withgoogle.com/ar-design/augmented-reality-design-guidelines/introduction.html, accessed on 29 July 2022):
  • Guideline 3: Environment. Surfaces where augmented reality contents will appear must have correct light exposure and adequate textures. Dim lighting, extremely bright environments, and transparent or reflective surfaces can compromise an accurate understanding of surfaces.
  • Guideline 4: Movements. When designing the AR experience, exploit the interaction possibilities offered by the 360-degree virtual world and encourage users to use movements to dynamically explore the environment.
  • Guideline 5: Safety. The immersive experience provided by AR must not divert the operator’s attention from the real world around them. Movements must be designed accordingly, to prevent operators from unconsciously performing dangerous actions.

2.5. Internet of Things

While the concept of a network of smart devices was discussed as early as 1982, it is only in the last two decades that the increasing possibility of embedding sensors and Internet connectivity into physical devices has led to the definition of an entirely new interaction paradigm, the Internet of Things, which has rapidly brought radical enhancements to fields as diverse as home automation and industry.
A full Internet of Things definition dates back to what Ashton wrote in 2009 [62]: if we had computers that knew everything there was to know about things, using the data they collected without our help, we could track and count everything and significantly reduce waste, losses, and costs. We would know when products needed to be replaced, repaired, or recalled from store warehouses and what the percentage of their wear and tear was. We must enable computers to use their own means of collecting information so that they can see, hear, and feel the world’s trends in all their beauty. RFID and sensor technologies enable computers to observe, detect, and understand the world without the constraints of human input.
A further complete definition was given by Rand Europe (https://www.rand.org/randeurope/research/projects/internet-of-things.html, accessed on 29 July 2022), a nonprofit research institute: The “Internet of Things comes from today’s Internet, by creating a pervasive and self-organizing network of interconnected, identifiable, and addressable physical entities to enable application development across key vertical industries through embedded chips, sensors, actuators, and inexpensive miniaturization”. Finally, Ref. [63] defined the Internet of Things as follows: “The Internet of Things (IoT) is an emerging concept quickly gaining ground in the modern wireless telecommunications landscape. The underlying idea of this paradigm lies in the pervasive presence around us of a multitude of things or objects, for example, radio frequency identification tags (RFID), sensors, actuators, smartphones, etc., capable of mutually interacting and cooperating with one another to pursue shared objectives through common addressing patterns.”
According to Skobelev and Borovik [64], the implementation of IoT technologies assumes a transfer of computation to the virtual world (cloud) where each virtual twin of objects in the real world acts according to the selected algorithm and rules. For communication in the real and virtual worlds, intelligent agents may be used. They can perceive information from the real world, make decisions, and coordinate them with other objects or users in real time. At the same time, real objects can work independently or be parts of more complex objects (household things, flexible production lines, groups of drones, etc.).
Moving to a real-world example, modern automobiles, where sensor-gathered data are used to enhance the driving experience, are a case in point: for example, sensors can monitor tire pressure, prevent the wheels from locking up under braking, or collect information on specific parts of the engine. While in this example data are kept within the system itself, when technologies such as GPS come into play, a whole new set of possibilities is presented, where information can travel from the vehicle to other external systems. The vehicle therefore becomes smart, exploiting internal data to communicate with other entities and enhance its potential.
Similarly to what has happened in the world of augmented reality with the introduction of ARSGs (Section 2.4), the Internet of Things has also caused the occurrence of privacy issues. In fact, as the Internet of Things is evolving into a decentralized system of cooperating smart objects, such decentralization has a great impact on the way personal information generated and consumed by smart objects should be protected [65]. To address this issue, Ref. [65] proposed a framework which allows users to specify privacy preferences based on a three-level taxonomy of object “smartness”, i.e., the object capability of sensing and processing individual data. Starting from the idea that, due to the complexity of data flows among different devices, it is easy for users to lose control of the way their data are distributed and processed, the model implements privacy preferences which allow users not only to pose conditions on which portion of their data can be collected, for what purpose, for how long, and by whom but also to limit the way data can be elaborated to derive new information.
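As a hedged sketch in the spirit of the framework of [65] (not a reproduction of its actual model), the following code expresses a user preference that conditions data collection on the object’s smartness level, the purpose, the retention period, and the collector. The schema and values are assumptions for illustration.

```python
# Hedged sketch of preference checking in the spirit of [65]: a user policy
# constrains what an object of a given "smartness" level may collect, for
# which purpose, for how long, and by whom. The schema is an assumption.
from dataclasses import dataclass

@dataclass
class Preference:
    max_smartness: int        # 1=sensing, 2=processing, 3=autonomous sharing
    allowed_purposes: set
    max_retention_days: int
    allowed_collectors: set

def collection_allowed(pref, smartness, purpose, retention_days, collector):
    """True only if every dimension of the request satisfies the policy."""
    return (smartness <= pref.max_smartness
            and purpose in pref.allowed_purposes
            and retention_days <= pref.max_retention_days
            and collector in pref.allowed_collectors)

pref = Preference(2, {"safety_monitoring"}, 30, {"plant_operator"})
print(collection_allowed(pref, 2, "safety_monitoring", 7, "plant_operator"))  # True
print(collection_allowed(pref, 3, "marketing", 7, "plant_operator"))          # False
```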

2.5.1. The Smart Factory Context

The Internet of Things is expected to have a huge impact on smart and connected factories.
Applications: access to sensor-gathered data. Almost any existing object or device can be linked to back-end services and become capable of gathering and analyzing data, elaborating them, and displaying additional information obtained through physical analytics, possibly leveraging augmented reality techniques [66]. For example, in a production plant, IoT devices can be used to monitor parameters such as temperature and pressure and to consequently switch different production processes on and off. They can also be employed to monitor hazards such as harmful gas leaks and activate countermeasures such as ringing alarms meant to alert human operators [67] (a minimal sketch of this monitoring pattern is given below). Beyond its application on the shop floor, the Internet of Things has already brought changes to the whole product lifecycle: in fact, sensor-gathered data can not only be used to show additional information to the user but also to foster research and therefore enhance the services provided. Likewise, IoT technologies can be exploited, in combination with machine learning models which run on sensor-collected data, to test product quality, thus reducing the time and cost of testing [68]. Furthermore, the Internet of Things can help enhance the supply chain infrastructure so as to improve internal and external connectivity with suppliers and customers. Among other things, IoT devices can be used to track storage conditions throughout the supply chain and to facilitate product traceability [69].
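The monitoring pattern referenced above is sketched here in a minimal, hypothetical form: parameters are compared against thresholds to pause or resume processes, and a gas reading above a limit triggers an alarm. The sensor and actuator interfaces are stand-ins for whatever fieldbus or IoT platform a real plant would use.

```python
# Minimal sketch of the threshold-based monitoring pattern described above.
# Thresholds, parameter names, and device interfaces are invented stand-ins.
THRESHOLDS = {"temp_c": (10.0, 95.0), "pressure_bar": (1.0, 6.0)}
GAS_PPM_ALARM = 50.0

def check_cycle(read_sensor, set_process, ring_alarm):
    """One monitoring pass: pause out-of-range processes, alarm on gas leaks."""
    for name, (lo, hi) in THRESHOLDS.items():
        value = read_sensor(name)
        set_process(name, running=lo <= value <= hi)  # pause when out of range
    if read_sensor("gas_ppm") > GAS_PPM_ALARM:
        ring_alarm("Gas leak detected - evacuate and ventilate")

# Example wiring with fake in-memory devices:
readings = {"temp_c": 102.0, "pressure_bar": 3.2, "gas_ppm": 12.0}
check_cycle(readings.__getitem__,
            lambda n, running: print(f"{n}: {'run' if running else 'pause'}"),
            print)
```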
Applications: energy efficiency. In the context of a smart factory project, Ref. [22] proposed the pervasive installation of sensors on production lines to solve consumption management issues. To perform efficient data monitoring with the aim of managing consumption within the context of the smart factory and thus promote a more sustainable approach, all of the ever-changing fields that bring innovation to the fourth industrial revolution were involved in the project. Technological advancements in information visualization techniques allowed fluent interactions between end-users and large amounts of data; enhancements in machine learning (see [70] for more details) and artificial intelligence engineering made the extraction of valuable information from retrieved data easier to perform; the distance between the digital and physical worlds has been shortened by the pervasive installation of sensors on the production lines and by a participatory approach to the design of the overall cyber-physical system. In a similar vein, Ref. [71] suggested exploiting an IoT layer to make industrial systems more energy-efficient. Loads whose variations can compromise power quality and increase energy usage are monitored in real time using a sensor-area network, and a central processing server is in charge of deciding which actions to take in order to optimize power consumption.
Enabling technologies. Indoor positioning systems use wireless communication networks (short- to long-range) [72] and can be easily adapted to address challenges in asset management, people tracking, security, or warehousing [73], thus having direct implications for the development of the smart factory. Many technologies are adopted to implement localization systems, from optical sensors to sound-wave and electromagnetic-field sensors. Among them, radio-frequency-based systems represent a key enabling technology. To this purpose, Ref. [74] provided a state-of-the-art review of one particular type of radio frequency system, the radio frequency identification system (commonly known as RFID), which represents one of the most suitable choices due to its cost-effectiveness and energy efficiency.
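As an illustration of radio-frequency-based indoor positioning, one common approach (not necessarily the one reviewed in [74]) converts received signal strength into distance with a log-distance path-loss model and then estimates the tag position from three or more fixed readers via linearized least squares. The propagation parameters below are typical indoor assumptions, not values from the cited work.

```python
# Illustrative RSSI-based positioning: distances from a log-distance path-loss
# model, position by linearized least squares over 3+ fixed readers.
# Parameters p0 (RSSI at 1 m) and n (path-loss exponent) are assumptions.
import numpy as np

def rssi_to_distance(rssi_dbm, p0=-40.0, n=2.2):
    """Log-distance path loss: rssi = p0 - 10*n*log10(d), d in meters."""
    return 10 ** ((p0 - rssi_dbm) / (10 * n))

def locate(readers, distances):
    """Least-squares tag position from reader coordinates and ranges."""
    readers = np.asarray(readers, dtype=float)
    d = np.asarray(distances, dtype=float)
    x0, y0 = readers[0]
    # Subtract the first circle equation from the others to linearize.
    A = 2 * (readers[1:] - readers[0])
    b = (d[0] ** 2 - d[1:] ** 2
         + (readers[1:] ** 2).sum(axis=1) - x0 ** 2 - y0 ** 2)
    pos, *_ = np.linalg.lstsq(A, b, rcond=None)
    return pos

print(round(rssi_to_distance(-57.0), 1))        # ~5.9 m under these parameters
readers = [(0, 0), (10, 0), (0, 10)]
true_d = [np.hypot(4, 3), np.hypot(6, 3), np.hypot(4, 7)]
print(locate(readers, true_d))                  # approximately [4. 3.]
```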

2.5.2. Challenges and Guidelines for Internet of Things in the Industry 4.0 Context

Internet of Things solutions can bring several benefits in the smart factory, such as:
  • Large amounts of data can be gathered through connected objects.
  • The automation of the network can be enhanced.
  • Machine-to-machine communication, as well as human–machine interaction, can be improved.
Such enhancements, however, come at a price. As the amount of data gathered grows, so does the risk of cyber-attacks, making security issues of foremost importance [68]. In addition, the wide use of Internet of Things technologies may negatively impact energy consumption costs.
Cicibaş and Demir [75] proposed a series of guidelines which address both technical and social issues for manufacturing companies. We report here an extract of the guidelines tackling social issues which specifically focus on IoT acceptance and stakeholder involvement and were formulated based on previous work [76,77]:
  • Guideline 1: User acceptance. Seek ways to achieve user acceptance. Pay special attention to conferences, trainings, and other types of information-sharing activities.
  • Guideline 2: Privacy and ethics. Inform users and let them adjust privacy settings for private data collections using IoT devices.
  • Guideline 3: Education and training. Develop and conduct an effective training program for the users.
  • Guideline 4: Stakeholder management. Identify all stakeholders and pay attention to stakeholder management.
Other guidelines which also clearly embrace a human perspective are more commercial in their nature, such as those which can be drawn from https://www.uxmatters.com/mt/archives/2022/05/designing-for-the-internet-of-things-iot.php, accessed on 29 July 2022:
  • Guideline 5: The importance of UX research. During the initial phases of design, it is always a good idea to think about what value an IoT device would offer to the users and must deliver to the business.
  • Guideline 6: Taking a holistic view. Ideally, IoT solutions consist of multiple devices that have various capabilities—both digitally and physically. One must take a holistic approach to designing an IoT device, looking across the whole system, which needs to work seamlessly together to create a meaningful experience for users.
  • Guideline 7: Safety and security. IoT solutions are not purely digital. Once the IoT is placed into a real-world context, the consequences can be severe when something goes wrong.
Other guidelines are proposed by https://www.iotforall.com/designing-user-experience-iot-products, accessed on 29 July 2022:
  • Guideline 8: Simplified onboarding. The first step of introducing a new system to users can also be the hardest. In the case of multidevice interaction, it often implies repeated authentications, gateway processes that differ from device to device, and switching to additional services, such as email, for verification. Simplified onboarding—secure but effortless authentication with code verification instead of passwords—is a promising beginning.
  • Guideline 9: Smooth cross-device design and interaction. The key to a consistent user experience across multiple IoT products is in the cloud. Cloud-based apps and connected devices allow us to keep all the parts of the system constantly up-to-date. As a result, it provides users with seamless transitions between system elements with minimum effort, adaptation, and wasted time.
  • Guideline 10: One-space experience. One of the most problematic tasks in UX design for IoT is minimizing the gaps between the physical world of connected devices and creating a smooth experience across all system elements. […] The challenge of a seamless experience is to integrate diverse independent components into a one-stop solution while saving its functionality and reliability.
  • Guideline 11: New interfaces. […] Today, the designers of consumer-oriented IoT products already focus on voice and audio, with more and more digital assistants seen in the home. However, voice is not the only new interface. The future of smooth user experience becomes more contextual and natural […].
Finally, we report some further guidelines from andrei-klubnikin.medium.com/, https://andrei-klubnikin.medium.com/5-steps-to-great-iot-user-experience-5913955587f1, accessed on 29 July 2022:
  • Guideline 12: Provide the ultimate user experience. As general as it sounds, the Internet of Things’ user experience design principles still revolve around usability, accessibility, utility, and desirability. […] There are several factors that affect the Internet of Things’ user experience, including high power consumption, the lack of a display, the accuracy of sensor data, and device interoperability, and these issues should be addressed during the proof of concept stage.
  • Guideline 13: Do not take Internet connectivity for granted. Although the key idea behind every IoT project is to connect either consumer electronics or initially dumb objects to the Internet and enable “things” to exchange data over a network, a smart device should perform basic functions even in an offline mode.
  • Guideline 14: Keep interoperability in mind. Without open-source APIs, reliable device management platforms and unified communication protocols (ZigBee, for instance), IoT is just a bunch of objects connected to the cloud and mobile apps. What we need is a global interconnected environment where products created by different vendors interact with each other.
  • Guideline 15: Embrace accessibility. The Internet of Things can potentially remove the barriers people with special needs face on a daily basis. […] That is why forward-thinking vendors enhance their connected solutions with voice recognition and even eye-tracking technologies, thus raising the quality of life for special consumers.

2.6. Wearable Devices

Among smart objects (see Section 2.5) are all those devices which can be woven or otherwise incorporated into clothing or worn as accessories. Many examples of wearable devices have been developed at an experimental level, while some of them have actually made it to the market and eventually become accepted as everyday objects. Popular examples of wearable devices range from fashion items (also known as fashion electronics or fashion technology) to activity trackers or healthcare solutions, able to keep track of body parameters via specific sensors.
Smart glasses (see Section 2.4), which can potentially be coupled with graduated lenses, also fall into this category. Notice that, while augmented reality mainly involves superimposing interactive computer graphics onto physical objects in the real world, smart glasses have mainly been designed for microinteractions [78]. Since these devices are designed for mobility, hands-free interactions such as gestures, voice recognition, and eye tracking are all good candidates for interacting with them.
Definitely more common than smart glasses are smartwatches (Figure 4). New guidelines, such as the watchOS Human Interface Guidelines by Apple (https://developer.apple.com/design/human-interface-guidelines/watchos/overview/themes/, accessed on 29 July 2022), have been defined to support the design of appropriate interfaces for such small devices. On the other hand, Ref. [79] investigated around-device interaction modalities using electric field sensing: more specifically, the authors explored gestural interactions going beyond the boundaries of screens, introducing a new concept of tangible user interfaces and enabling a spontaneous binding between physical objects and digital functions.

2.6.1. The Smart Factory Context

Due to the interaction modalities they allow, enabling hand-held, touch, and touchless inputs [78], smart glasses can play an important role in smart factories.
Applications: access to knowledge. Ref. [80] studied an application used to document knowledge about assembly and maintenance processes using video recording with smart glasses (namely, Google Glass). In particular, the application profiles and identifies not only users but also the working context, taking advantage of QR codes or barcodes placed on the machines, thus allowing users to retrieve or post videos from and to a repository. This application was evaluated by administering a survey to a few experienced workers, who were first instructed about the usage and interaction modalities of the technology and then proceeded with an on-site test, thus allowing the authors to assess the application on real shop floors. The same evaluation was performed in two different companies: in the first one, the system was used to document standardized tasks within assembly processes in the automotive sector, while in the second case the challenge was to document maintenance tasks. Results showed significant improvements in efficiency and reliability in comparison with the documentation modalities already in use.
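The context-identification pattern described above can be illustrated with a minimal Python sketch, in which a scanned QR code keys into a video repository. The repository structure and all identifiers are hypothetical and are not taken from the system of [80].

```python
# Hypothetical video repository keyed by the machine identifiers encoded in QR codes.
VIDEO_REPOSITORY: dict[str, list[str]] = {
    "press-07": ["press-07/assembly-step-1.mp4"],
    "welder-03": ["welder-03/gasket-replacement.mp4"],
}


def retrieve_videos(qr_payload: str) -> list[str]:
    """Resolve a scanned QR code to its working context and return matching videos."""
    machine_id = qr_payload.strip().lower()
    return VIDEO_REPOSITORY.get(machine_id, [])


def post_video(qr_payload: str, video_path: str) -> None:
    """Attach a newly recorded video to the machine identified by the QR code."""
    VIDEO_REPOSITORY.setdefault(qr_payload.strip().lower(), []).append(video_path)


post_video("welder-03", "welder-03/bearing-check.mp4")
print(retrieve_videos("welder-03"))  # both videos attached to the identified machine
```

A real system would additionally filter the results on the user profile, since the application described in [80] identifies not only the working context but also the worker.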
Applications: workers’ wellbeing. Focusing on the wellbeing of workers, the HuManS (human-centered manufacturing system) project (https://dmd.it/humans/en/humans/, accessed on 29 July 2022) experimented with the creation of a hardware/software Internet of Things architecture for monitoring, analyzing, and controlling the posture of users. Sensorized shirts were designed to be worn by workers on the shop floor, along with an app capable of receiving and processing data from the wearable devices. Workers were supposed to log into the app and provide personal data such as their weight and height. They could then monitor their movements in real time during their daily work and use the app to examine various figures showing the history of their movements during the work shifts.
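As an illustration of the kind of processing such an app might perform, the following sketch flags sustained trunk inclinations in a stream of wearable readings. The thresholds and sampling rate are hypothetical and are not those of the HuManS project.

```python
def posture_flags(angles_deg: list[float], angle_limit: float = 60.0,
                  min_run: int = 40) -> list[int]:
    """Return the start indices of sustained over-threshold posture episodes.

    `angles_deg` holds trunk inclinations sampled at roughly 10 Hz, so the
    default `min_run` of 40 samples corresponds to about 4 s in a bad posture.
    """
    flags, run = [], 0
    for i, angle in enumerate(angles_deg):
        run = run + 1 if angle > angle_limit else 0
        if run == min_run:                 # flag each episode exactly once
            flags.append(i - min_run + 1)  # index where the episode began
    return flags


print(posture_flags([70.0] * 50))  # [0]: one sustained episode from the first sample
```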

2.6.2. Challenges and Guidelines for Wearable Devices in the Industry 4.0 Context

Since wearable devices are closely connected to Internet of Things smart objects, most of the pros, cons, and guidelines discussed in Section 2.5.2 also apply to them. In addition, as we have seen, wearable devices can improve workers’ mobility and support the integration of other technologies into everyday objects (think of smart glasses with augmented reality). As discussed in Section 2.4.1, however, privacy issues may arise when wearable devices are adopted on the shop floor.
In addition to the guidelines proposed for Internet of Things solutions (see Section 2.5.2), a few more points are worth mentioning (https://developer.apple.com/design/human-interface-guidelines/watchos/overview/apps/, accessed on 29 July 2022):
  • Guideline 1: Glanceability. Make sure the user interface is organized so that people can quickly and easily find the information they need and perform actions.
  • Guideline 2: Privacy. Obscure personal information that users would not want casual observers to see, such as health data. In connection with Guideline 1, make sure other types of information remain glanceable, to ease task completion.

2.7. Tangible User Interfaces

In the Internet of Things era, most devices still provide only web or mobile interfaces. Ref. [46] argued that constant interaction with such interfaces could degrade the user experience and possibly lead to user alienation from the physical world, since such interactions are disconnected from tangible reality. On the contrary, by allowing physical objects to be bound to digital functions, augmented reality and other interactive media open up the world of tangible user interfaces (TUIs). Tangibles are a particular type of user interface where a person interacts with digital information through the physical environment by touching, displacing, rotating, sliding, or generally interacting in different ways with physical objects that provide inputs to a system and feedback to the user [81].
The analysis carried out by [46] summarizes the current trends in tangible interaction and extrapolates eight properties that could be exploited for designing tangible user interfaces for IoT objects. Such properties range from the ability to leverage natural human skills such as haptic and peripheral interactions to the possibility of integrating tangible interactions with IoT objects in users’ daily routines.
In a similar vein, Ref. [82] studied how taxonomies and design principles for tangible interaction should be mapped into the new landscape of IoT systems, investigating parallels between the properties of IoT systems and tangible interactions and therefore envisaging a shift from the world of IoT (Internet of Things) to that of IoTT (Internet of Tangible Things).

2.7.1. The Smart Factory Context

Applications: assembly. Focusing on the smart factory context, Ref. [83] explored the concept of user-defined tangibles: users can turn any physical object at their workplace into a tangible control, thus spontaneously binding it to digital functions. As far as supporting technologies are concerned, the authors found that, in manual assembly workplaces, projection is more suitable than surface computers, since it is not affected by accidentally dropped materials, a common event in such a scenario. Consequently, Ref. [83] designed a system which combines a top-mounted Kinect and a top-mounted projector to enable touch interaction, the highlighting of objects, and the display of controls, along with a bottom-mounted Leap Motion sensor aimed at capturing the user’s gestures.
Enabling technologies. Many techniques can be adopted to track objects and enable interactions, among which are RFID tags, capacitive systems, cameras, and magnets. Envisaging the assembly line in factories as a possible application scenario, Ref. [84] explored radar sensing as a way to support tangible interaction with six sensing mechanisms: counting, ordering, and identifying objects, and tracking their orientations, movements, and distances. The authors showed that miniature radar sensing is accurate even with minimal training and that it can support new forms of tangible interaction.
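Three of these mechanisms (counting, ordering, and identifying) can be sketched at a purely logical level, leaving aside the radar signal processing that [84] actually performs. The object signatures below are invented for illustration.

```python
from dataclasses import dataclass


@dataclass
class RadarReturn:
    signature: str      # learned per-object signature (identification)
    distance_mm: float  # estimated range from the sensor


def count_objects(scene: list[RadarReturn]) -> int:
    """Counting: how many recognized objects are currently in front of the sensor."""
    return len(scene)


def order_objects(scene: list[RadarReturn]) -> list[str]:
    """Ordering: identified objects sorted by their distance from the sensor."""
    return [r.signature for r in sorted(scene, key=lambda r: r.distance_mm)]


scene = [RadarReturn("wrench", 120.0), RadarReturn("housing", 85.5)]
print(count_objects(scene))  # 2
print(order_objects(scene))  # ['housing', 'wrench']
```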

2.7.2. Challenges and Guidelines for Tangible User Interfaces in the Industry 4.0 Context

As discussed in our review, tangible interfaces can help to make interaction more natural and engaging. In particular, tangibles can:
  • Stimulate users to interact with the concrete world around them, thus contrasting the sense of alienation which may arise from continuous exposure to screen-based devices.
  • Provide immediate feedback in the real world, instead of exploiting a graphical interface which provides a representation of reality.
On the other hand, acceptability issues may arise when digital objects incorporating tangibles replace the everyday objects operators are used to [85]. In addition, tangibles can hardly be standardized, which implies that operators might be required to make a substantial effort to learn how to use each of them [85]. Similarly, the use of some tangibles might be restricted to specific environments [85].
In addition to the guidelines proposed for Internet of Things solutions (see Section 2.5.2), and building on the work of [83], we report a brief list of guidelines for the implementation of tangible user interfaces in the smart factory which specifically focus on the above-mentioned issues:
  • Guideline 1: Codesigning. Whenever possible, try to involve users in the design process of tangibles, in order to avoid unexpected acceptability issues.
  • Guideline 2: Learning. Consider the possibility of undertaking training sessions to allow operators to build the mental models required to operate TUIs.

2.8. Collaborative Robots

Another paradigm that changed with the development of Industry 4.0 and 5.0 is surely that of human–robot interaction (HRI), which has modified and enhanced the level of acceptance of collaborative robots on the shop floor.
The first collaborative robot was devised in 1996 by James E. Colgate and Michael A. Peshkin, who defined it as “an apparatus and a method for direct physical interaction between a person and a generic manipulator controlled by a computer” [86]. The term “cobot” was later listed among the new terms by the Wall Street Journal, meaning a collaborative robot designed to help workers in their businesses rather than replace them [87]. Today, more than twenty years after its invention, the concept of collaborative robotics has commonly taken on the meaning of work sharing. Collaboration is manifested through human access to the robotic system and the workspace to perform functionally related actions [88]. The collaboration between humans and robots aims to combine human skills and flexibility with the benefits associated with robotic systems. This allows an increase in productivity and product quality while reducing ergonomics-related risks for operators [89].
Different levels of collaboration between humans and robots are possible [90]. Conventionally, in the factory, the robot is located inside protected areas that are not accessible to humans; access to the workspace is only allowed when the robot is stationary, to carry out maintenance or programming operations. This is the first level of collaboration and is characterized by a strict separation between workspaces. The second level can be called coexistence: in this case there is no sharing of the workspace, but the physical barrier is missing. At this level, humans can access the robot’s work area, but their presence is detected by a safety system that causes the robot to regulate the power and speed of its movements. The third level is that of synchronized operations, in which worker and robot share the same workspace, but at different times; there is therefore a condition of temporal separation. A fourth level is that of cooperation, in which spatial and temporal separation are reduced and human and robot are allowed to occupy the same work area at the same time, while remaining separate because of the lack of joint activities. Finally, at the highest level of collaboration, human and robot work on a common task without any temporal or spatial separation of the work area, and voluntary contact between human and machine can even be envisaged.
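These five levels can be summarized in a small sketch that maps each level to an illustrative reaction of the safety system when a person is detected; the reactions are only indicative and do not replace the normative requirements discussed in Section 2.8.2.

```python
from enum import Enum


class CollaborationLevel(Enum):
    CELL = 1           # fenced robot, strictly separated workspaces
    COEXISTENCE = 2    # no fence, but no shared workspace
    SYNCHRONIZED = 3   # shared workspace, alternating in time
    COOPERATION = 4    # shared workspace and time, separate tasks
    COLLABORATION = 5  # common task, voluntary contact possible


def on_human_detected(level: CollaborationLevel) -> str:
    """Illustrative reaction when a person enters the robot's work area."""
    if level is CollaborationLevel.CELL:
        return "emergency stop"          # nobody should be inside a fenced cell
    if level is CollaborationLevel.COEXISTENCE:
        return "reduce power and speed"  # as described for the second level
    return "monitor separation and continue"


print(on_human_detected(CollaborationLevel.COEXISTENCE))
```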
The collaboration between humans and the robots in charge of assisting their work deserves to be investigated and improved. Since HRI is a rapidly emerging sub-branch of HCI that is creating its own standards, this category of interactions, together with the relevant studies carried out on it, deserves to be considered as a stand-alone research sector.

2.8.1. The Smart Factory Context

Having given a clear definition of what a collaborative robot is and what its components are, we move one step forward in our examination of the current state of the art, providing real-world examples of the implementation of cobots on shop floors.
Applications: e-waste management. Ref. [91] put forward the adoption of collaborative robots to solve e-waste management problems, optimizing the recycling process of electronic equipment. Companies are increasingly subject to public and government pressure to reduce their environmental impact. When dealing with e-waste, manual operations can be financially prohibitive, and full automation is not easy to implement due to the lack of uniformity of devices. This is clearly a scenario where collaboration between humans and robots may enhance the process. Alvarez-de-los Mozos and Renteria [91] examined e-waste management techniques and the limitations of fully automated techniques for waste electrical and electronic equipment (WEEE) and then proposed a solution for WEEE recycling that involves the use of collaborative robots. The authors provided a real-world example, discussing the use of Liam, a collaborative robot developed by Apple to effectively disassemble the iPhone (Figure 5). The authors also pointed out that one of the main problems that can arise when dealing with bigger electronic appliances is that of identifying cables, flexible parts, or components which are usually difficult to recognize. This represents a point in the process where collaboration can happen: a skilled operator could carry out this job, leaving the tedious and potentially dangerous task of handling the materials to the robots.
Applications: assembly. Ref. [92] investigated the combination of sensors embedded in wearable devices with gesture recognition to propose an HRI framework applicable in assembly operations, where collaborative robots can assist workers by delivering tools and parts and holding objects. The aim of this and many other investigations in the field is to exploit the best abilities of robots, such as accuracy and repetitive work, and the best abilities of humans, for example, cognition and management, in order to reach a collaborative scenario where the most is made out of every available resource. Moreover, we should consider that mobile robots and exoskeletons have the potential to make certain tasks less physically demanding; see Spada et al. [93] for more details. This may allow women to take on tasks that were previously reserved for men due to the required physical strength. A vast range of further opportunities will arise from the further digitalization of the workforce [36].
Focusing on the general context of performing dull tasks on production lines, a study aimed at enhancing the effectiveness of already existing robots was carried out by [94]. The authors started their work from the assumption that collaborative robots are more useful when they can be relocated easily enough to be considered “mobile” [94]. They investigated a system enabling robots such as Baxter and Sawyer by Rethink Robotics to move smartly around the shop floor, sensing people or obstacles and navigating safely through the space. A downward-facing QR code camera was used for the precise placement of the robot at a workstation and, when not assigned to a specific cobot, the platform could be used as a general-purpose automated guided vehicle.
Worker–robot interaction and collaboration. A key feature of collaborative robots is their ability to partner with human operators in mixed teams. They need to coordinate their actions to engage in joint activities and to adapt their behavior to human behaviors at different levels: semantic, contextual, temporal, and more (see [95], which investigated the cognitive systems that build the awareness needed to obtain such interactions). The authors provided a tool for addressing this problem by using the notion of deep hybrid representations and the facilities that this common state representation offers for the tight coupling of planners on different layers of abstraction. According to Villani et al. [17], the main challenges related to cobots are: safety issues; intuitive user interfaces, so that human operators can easily interact with the robot; and design methods (i.e., control laws, sensors, task allocation, and planning approaches) which allow the human operator to safely stand close to the robot, actively sharing the working area and tasks, and which provide the interaction system with the required flexibility. In particular, regarding worker–robot interaction, the use of collaborative robots in industrial processes proves beneficial also because they can be managed and taught through intuitive systems based on augmented reality [96], walk-through programming [97], or programming by demonstration [98].
As far as intuitive user interfaces are concerned, unlike instructing a (skilled) human worker on how to carry out a task, programming a robot requires providing it with explicit motion-oriented instructions, detailing the points and trajectories that the robot has to follow. Nonetheless, the goal is that of instructing the robot in a human-friendly manner and without negatively affecting the productivity of the system. To this purpose, Villani et al. [17] proposed the use of novel approaches such as walk-through programming, programming by demonstration, and the use of multimodal interfaces and augmented/virtual reality, which are characterized by high intuitiveness since they constitute instances of natural and tangible user interfaces (NUIs and TUIs, respectively). For instance, NUIs allow users to directly manipulate and interact with robots rather than instruct them to do so by typing commands. Techniques used include, for instance, speech, gestures, eye tracking, facial expression, and haptics, in addition to the traditional ones, namely, keyboard, mouse, monitor, touchpad, and touchscreen.
It is also worth mentioning control techniques and approaches aimed at improving the safety and ergonomics of operators interacting with robots. Typical control problems related to safety include collision avoidance, collision detection, motion planning, and safety-oriented control system design. Similarly, for ergonomics, they include scheduling and ergonomics-oriented control system design, as well as the common area of motion planning. Several approaches are available to tackle each problem in the ergonomics area: biological and nonbiological trajectory optimization and minimum-jerk trajectory planning (motion planning); mixed-integer linear programs, stochastic Petri nets, cognitive load optimization, feedforward/feedback optimization, and decision-making models (scheduling); and haptic assistance, optimal control, whole-body control, game theory, gesture-based control, admittance control, learning-based control, and reinforcement learning (ergonomics-oriented control system design). For more details, see the comprehensive survey by Proia et al. [99].
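As a concrete example from the motion-planning family, minimum-jerk planning has a compact closed form. The sketch below implements the classic Flash–Hogan position profile for a single joint; it is our own illustration, not code from [99].

```python
def minimum_jerk(x0: float, xf: float, T: float, t: float) -> float:
    """Minimum-jerk position profile: smooth, human-like point-to-point motion.

    The polynomial 10*tau**3 - 15*tau**4 + 6*tau**5 guarantees zero velocity
    and zero acceleration at both endpoints of the motion.
    """
    tau = min(max(t / T, 0.0), 1.0)  # normalized time, clamped to [0, 1]
    s = 10 * tau**3 - 15 * tau**4 + 6 * tau**5
    return x0 + (xf - x0) * s


# Sample a 2 s joint motion from 0.0 rad to 1.2 rad at 10 Hz.
waypoints = [minimum_jerk(0.0, 1.2, 2.0, k * 0.1) for k in range(21)]
print(round(waypoints[10], 4))  # 0.6: the midpoint of the motion, by symmetry
```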
Cobots as autonomous systems. Ref. [100] carried out a useful study on the paradigm of the smart factory, focusing on the role of cobots in this context. The authors explored how cobots are defined and highlighted how learning processes can be carried out by such robots, through the adoption of artificial intelligence techniques, in order to enhance productivity and the quality of manufactured goods and thus create a smart factory. Examining the nine pillars of Industry 4.0, Ref. [100] discussed the role of collaborating robots in the scope of the first pillar: autonomous systems. They defined cobots as automated systems, including sensors, actuators, and controllers, capable of performing tasks continuously and designed to be applied in the industrial field [100]. Two types of autonomous systems were discussed in their study: multiagent systems and intelligent industrial robots.
This second category of autonomous systems is particularly interesting for this section. Ref. [100] provided a standard definition of what such robots are, in terms of their characteristic components: an “intelligent industrial robot is a useful combination of a manipulator arm, sensors, and intelligent controllers, which replaces a human worker and can complete tasks and resolve the problems. Eventually, it will be able to learn from humans at first. The use of these machines in industrial automation can improve productivity and product quality, creating smart industry”.

2.8.2. Challenges and Guidelines for Collaborative Robots in the Industry 4.0 Context

As highlighted by [101], industrial collaborative robotics is one of the most promising technologies of the smart industry. Human–robot collaboration in assembly, in particular, will be of great interest to manufacturing companies. In this context, the interaction between humans and robots opens new possibilities:
  • The elimination of repetitive or dangerous tasks from human operators’ workloads, allowing human resources to focus on those tasks that better suit human minds.
  • A reduction in risks on the shop floor.
However, there are also challenges:
  • A long learning curve before smooth interaction between operators and robots is achieved.
  • Possible difficulties in smoothly including robots in mixed teams, due to the difficulty of creating coordinated behaviors in such robots.
  • Possible slowdowns of the production process resulting from faults, especially if many tasks are assigned to collaborative robots.
Ref. [95] proposed a set of guidelines for the adoption of collaborative robots on factory floors:
  • Guideline 1: Selection. A correct distinction between the tasks that should be carried out by human operators and those that better belong to robots must be made before planning the work, and tasks must be assigned accordingly.
  • Guideline 2: Behavior. When designing collaborative robots or when making decisions on which solution to adopt, their ability to coordinate their behavior with that of humans must be taken into consideration as a priority.
  • Guideline 3: Safety. ISO/TS 15066:2016 specifies safety requirements for collaborative industrial robot systems and the work environment and must be taken into consideration when adopting such solutions within a smart factory (https://www.iso.org/standard/62996.html, accessed on 29 July 2022).
Ref. [102] proposed and then validated [101] new design guidelines for systems integrator designers to develop safe and ergonomic collaborative assembly workstations. We report the most general ones:
  • Guideline 4. Minimize specific mechanical hazards related to the entrapment of human body parts.
  • Guideline 5. Minimize specific mechanical hazards related to collisions with human body parts.
  • Guideline 6. Minimize specific mechanical hazards related to robot system parts falling.
  • Guideline 7. Minimize the biomechanical overload of upper limbs related to repetitive tasks.
  • Guideline 8. Minimize the biomechanical overload of the whole body related to the manual lifting/lowering of objects.
  • Guideline 9. Minimize the biomechanical overload of head/neck/trunk/upper or lower limbs related to static or awkward working postures.
  • Guideline 10. Maximize operator psychological wellbeing and satisfaction.
  • Guideline 11. Maximize the efficiency of manual and robot assembly activities.

3. Discussion and Conclusions

Smart interactive technologies are revolutionizing workers’ activities on the factory floor. While throughout our survey we have adopted a technology-driven perspective, illustrating the changes and possibilities enabled by the emerging technologies in the Industry 4.0 and 5.0 visions, Table 1 summarizes the contributions of the surveyed works which specifically fall into the smart factory context, highlighting the problems and phases they address in the production process. As we can see, most transformations concern the production phase and access to knowledge.
Communication, learning, and knowledge-sharing. When examining the modifications brought or suggested by the fourth and later fifth industrial revolutions (respectively: smart manufacturing, smart mass production, smart products, smart working, smart supply chain, and system(s) optimization; sustainability, environmental stewardship, human-centricity, and social benefit; see [103] for more details) to modern factory floors, one cannot ignore how such changes are influencing the way communication happens between operators and coworkers and between operators and machines. For example, Ref. [30] investigated mutual human–machine learning in smart factories, with the ultimate goal of identifying new learning patterns in such environments. They defined mutual learning as a bidirectional process involving reciprocal exchange, dependence, action, or influence within human and machine collaboration, which results in creating new meaning or concepts, enriching the existing ones, or improving skills and abilities in association with each group of learners. They distinguished three groups of tasks that can be carried out within the smart factory: those assigned specifically to humans, those assigned to machines, and the shared ones, where exchange and thus mutual learning occurs. Ref. [30] then illustrated a conceptual model for mutual learning, based on the model of hybrid learning proposed by Zitter and Hoeve [104]. All their results have been applied and tested in a real-world context, at the TU Wien Pilot Factory.
All in all, we can state that learning processes within the smart factory are and will increasingly be affected by the process of digitalization. In this vein, Ref. [105] reviewed virtual training systems with a focus on their teaching styles and identified new research directions in the field of adaptive training systems.
The benefits derived from the changes introduced with Industry 4.0 and 5.0 extend to activities carried out outside the factory walls. A case in point is the work of [20], which examined knowledge-sharing solutions based on Industry 4.0 to improve mobile service technicians’ daily work performance and work satisfaction. The authors started a human-centered design process that led to the creation of the Mobile Service Technician 4.0 concept: it utilizes the industrial internet, virtual and augmented reality, as well as wearable technologies to improve both the user experience of workers within the examined field and the quality of their work.
Human-centricity. The Industry 5.0 paradigm proactively reinserts humans into the automation chain [106], which means that technology used in manufacturing should be “adapted to the needs, and diversity of industry workers, instead of having the worker continuously adapt to ever-evolving technology. The worker is more empowered and the working environment is more inclusive. To achieve this, workers are to be closely involved in the design and deployment of new industrial technologies, including robotics and AI” [3]. Hence, approaches such as codesign and prototyping should be adopted in this new vision, also helped by new technologies, such as virtual and augmented reality, that allow prototype simulations before the actual realization.
In addition to that, Industry 5.0 also emphasizes human-centricity through the use of AI-based technologies to empower workers’ performance and capacity. In this regard, wearable devices that boost cognitive and operational capacities are increasingly being utilized and improved in manufacturing industries [107]. Exoskeletons, i.e., augmenting equipment that gives workers extra strength and physical capabilities, protecting them from the adverse effects of heavy workloads [93], are a case in point. According to Jafari et al. [108], virtual technologies such as smart AR glasses, spatial AR projectors, etc., are viable and novel gadgets that facilitate flexible operations and technical guidance through information transmission and virtualization. Moreover, wearables could open new channels for alerting workers and their general practitioners about critical health conditions, both physical and mental, as well as supporting workers in adopting healthy behaviors in the workplace [3].
However, these improvements in working conditions cannot come at the expense of workers’ fundamental rights to privacy, security, autonomy, and human dignity. According to our vision, it is essential that future HCI and HCAI specialists become aware of the potential ethical and practical issues of smart interactive technologies, also considering the smart factory context and the new, central role of workers; see Longo et al. [107] for more details.
Sustainability. Another relevant Industry 5.0 concept, also emphasized in the 2021 European Commission’s report [3] and highlighted in Section 2.5.2, is that of environmental sustainability. According to Akundi et al. [103], Industry 5.0 “recognizes the capacity of industry to fulfill social objectives beyond employment and development, to become a sustainable source of development, by making production regard the limitations of our planet and prioritizing employee health first”. Sustainability is also closely related to the promotion of a circular economy, i.e., the idea of developing circular processes that reuse, repurpose, and recycle natural resources, reducing waste and environmental impact [3]. One of the enabling technologies for reaching sustainability goals is certainly IoT. Drawing from the IoT Guidelines for Sustainability produced by the World Economic Forum (https://www.weforum.org/, accessed on 29 July 2022) [109], we recall a set of points which specifically refer to the sustainability and impact measurement area. Firstly, along with all the valuable data they may collect with IoT systems, smart factories should make sure to measure and process energy usage data, so as to minimize costs, increase savings, and reduce waste (consumption). Then, smart factories should embrace a sustainability awareness culture to respond to new generational demand, enhancing brand reputation and attracting top talent (culture). Furthermore, potential impact should be evaluated and results measured based on some ad hoc framework, such as the United Nations Sustainable Development Goals (https://sdgs.un.org/goals, accessed on 29 July 2022) (impact). When planning an Internet of Things project, potentially addressable sustainable development goals and targets should be identified and incorporated into the commercial design (goals). Finally, RFID or GPS sensor monitoring should be implemented both to track products in the delivery process and to track inventory items within the warehouse and the production lines (monitoring).
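As a minimal illustration of the consumption point, the sketch below aggregates per-device energy readings so that the heaviest consumers can be identified; device identifiers and values are invented.

```python
from collections import defaultdict


def energy_ranking(readings: list[tuple[str, float]]) -> list[tuple[str, float]]:
    """Aggregate per-device energy readings (kWh) and rank devices by consumption.

    Ranked totals like these could feed shutdown policies or waste-reduction
    dashboards; here we only compute and sort the aggregates.
    """
    totals: dict[str, float] = defaultdict(float)
    for device_id, kwh in readings:
        totals[device_id] += kwh
    return sorted(totals.items(), key=lambda item: item[1], reverse=True)


readings = [("press-07", 3.5), ("welder-03", 0.4), ("press-07", 2.5)]
print(energy_ranking(readings))  # [('press-07', 6.0), ('welder-03', 0.4)]
```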
Further challenges. Along with all the enhancements and improvements brought by Industry 4.0 and 5.0 to production processes and to workers’ performances, there unavoidably come new risks for both individuals and organizations, which can directly affect productivity and translate into financial risks. Herrmann [110] gave an overview of the technical components of a smart factory, raising awareness of this manufacturing trend in terms of risk evaluation. The author focused on the topics of standardization, information security, the availability of IT structures, the availability of fast Internet, complex systems, as well as organizational and financial risks in the scope of the fourth industrial revolution. He pointed out how investigation in the field must proceed in parallel with development progress and highlighted the need for further research in order to provide a complete overview of the smart factory and its status. Last but not least, in the definition of Industry 5.0 we find the concept of resilience, referring to the need to develop a higher degree of robustness in industrial production, arming it better against disruptions and ensuring it can provide and support critical infrastructure in times of crisis. The future industry needs to be resilient enough to swiftly navigate (geo-)political shifts and natural emergencies [3], as sadly witnessed by the recent events of COVID-19 and the war between Russia and Ukraine.
In this paper, we have provided a picture of the current state of the art of smart interactive technologies on the factory floor, and we have also explored the way new technologies are changing the relations between workers and operations. On the one hand, we wanted to emphasize the fact that smart factories provide a challenging and stimulating environment, where workers are required to be resourceful and possess excellent communication, organization, and collaboration skills, in order to manage complexity and abstraction in problem solving processes, as also highlighted by [111]. On the other hand, we wanted to provide some practical examples of the use of intelligent technologies in the smart factory, also proposing guidelines to design interactions that should be human-centered.
Intelligent system components may exhibit unexpected and biased behavior: the successful and widely used probabilistic approaches (machine learning, neural networks, deep learning, etc.) rely on large data sets which may contain latent biases (see, for instance, [112]), and the resulting behavior can confuse users, erode their confidence, and lead to the abandonment of AI technology. High-profile reports of failures (see, for example, https://spectrum.ieee.org/ai-failures, accessed on 29 July 2022, and https://www.ftc.gov/news-events/news/press-releases/2022/06/ftc-report-warns-about-using-artificial-intelligence-combat-online-problems, accessed on 29 July 2022) range from humorous and embarrassing mistakes (e.g., autocompletion errors, misunderstandings in conversational agents, etc.) to more serious circumstances in which users cannot effectively handle an AI system (e.g., driving a semiautonomous car). These factors show that designers and developers need knowledge of proper methodologies to create effective human-centered intelligent systems. User in control is one of the pillars of human-centered design: this can be achieved by granting transparency in system behavior, e.g., in the form of the explainability of the AI decision-making process, and by empowering end-users to configure and adapt such behavior (for more details, see [9]). These examples show how important it is to consider human factors and human perspectives in intelligent systems, which need to be designed and implemented in a user-centered/human-centric way. We hope that, with the discussions, examples, and guidelines reported in this survey paper, we have made a small but relevant advance with respect to this goal.

Author Contributions

Conceptualization, D.B., C.G. and F.V.; Investigation, D.B., C.G. and F.V.; Methodology, D.B., C.G. and F.V.; Writing—original draft, D.B., C.G. and F.V. All authors have read and agreed to the published version of the manuscript.

Funding

This research was partially funded by Regione Piemonte, grant number 319-50 (Programma Operativo Regionale POR-FESR 2014/2020, HOME project).

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

Not applicable.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Wang, S.; Wan, J.; Li, D.; Zhang, C. Implementing Smart Factory of Industrie 4.0: An Outlook. Int. J. Distrib. Sens. Netw. 2016, 2016, 1–10.
  2. Frazzon, E.; Ehm, J.; Makuschewitz, T.; Scholz-Reiter, B. Towards Socio-Cyber-Physical Systems in Production Networks. Procedia CIRP 2013, 7, 49–54.
  3. Directorate-General for Research and Innovation European Commission; Breque, M.; De Nul, L.; Petridis, A. Industry 5.0: Towards a Sustainable, Human-Centric and Resilient European Industry; Publications Office: Luxembourg, 2021.
  4. Directorate-General for Research and Innovation European Commission; Müller, J. Enabling Technologies for Industry 5.0—Results of a Workshop with Europe’s Technology Leaders; Publications Office: Luxembourg, 2020.
  5. Xu, X.; Lu, Y.; Vogel-Heuser, B.; Wang, L. Industry 4.0 and Industry 5.0—Inception, conception and perception. J. Manuf. Syst. 2021, 61, 530–535.
  6. Sonntag, D. Intelligent User Interfaces—A Tutorial. arXiv 2017, arXiv:1702.05250.
  7. Shneiderman, B. Human-Centered Artificial Intelligence: Reliable, Safe & Trustworthy. Int. J. Hum. Comput. Interact. 2020, 36, 495–504.
  8. Shneiderman, B. Bridging the Gap Between Ethics and Practice: Guidelines for Reliable, Safe, and Trustworthy Human-Centered AI Systems. ACM Trans. Interact. Intell. Syst. 2020, 10, 26.
  9. Costabile, M.F.; Gena, C.; Matera, M.; Paternò, F.; Tortora, G.; Zancanaro, M. Teaching HCI for AI: Co-Design of a Syllabus. Final Report by the Workshop Organizers. Available online: http://sigchitaly.eu/en/hci4ai-syllabus-it/workshop-results/ (accessed on 29 July 2022).
  10. Baudel, T.; Beaudouin-Lafon, M. Charade: Remote Control of Objects Using Free-hand Gestures. Commun. ACM 1993, 36, 28–35.
  11. Blumendorf, M.; Feuerstack, S.; Albayrak, S. Multimodal User Interfaces for Smart Environments: The Multi-Access Service Platform. In Proceedings of the Working Conference on Advanced Visual Interfaces, AVI’08, Napoli, Italy, 28–30 May 2008; Association for Computing Machinery: New York, NY, USA, 2008; pp. 478–479.
  12. Gianotti, M.; Riccardi, F.; Cosentino, G.; Garzotto, F.; Matera, M. Modeling Interactive Smart Spaces. In Proceedings of the Conceptual Modeling, Vienna, Austria, 3 November 2020; Dobbie, G., Frank, U., Kappel, G., Liddle, S.W., Mayr, H.C., Eds.; Springer International Publishing: Cham, Switzerland, 2020; pp. 403–417.
  13. Jameson, A.; Gabrielli, S.; Kristensson, P.O.; Reinecke, K.; Cena, F.; Gena, C.; Vernero, F. How can we support users’ preferential choice? In Proceedings of the International Conference on Human Factors in Computing Systems, CHI 2011, Extended Abstracts Volume, Vancouver, BC, Canada, 7–12 May 2011; Tan, D.S., Amershi, S., Begole, B., Kellogg, W.A., Tungare, M., Eds.; ACM: New York, NY, USA, 2011; pp. 409–418.
  14. Jannach, D.; Zanker, M.; Felfernig, A.; Friedrich, G. Recommender Systems—An Introduction; Cambridge University Press: Cambridge, UK, 2010.
  15. Gena, C.; Grillo, P.; Lieto, A.; Mattutino, C.; Vernero, F. When Personalization Is Not an Option: An In-The-Wild Study on Persuasive News Recommendation. Information 2019, 10, 300.
  16. Desmarais, M.C.; Baker, R.S. A review of recent advances in learner and skill modeling in intelligent learning environments. User Model. User-Adapt. Interact. 2012, 22, 9–38.
  17. Villani, V.; Pini, F.; Leali, F.; Secchi, C. Survey on human–robot collaboration in industrial settings: Safety, intuitive interfaces and applications. Mechatronics 2018, 55, 248–266.
  18. Corcella, L.; Manca, M.; Nordvik, J.E.; Paternò, F.; Sanders, A.; Santoro, C. Enabling personalisation of remote elderly assistance. Multim. Tools Appl. 2019, 78, 21557–21583.
  19. Hajjaji, Y.; Boulila, W.; Farah, I.R.; Romdhani, I.; Hussain, A. Big data and IoT-based applications in smart environments: A systematic review. Comput. Sci. Rev. 2021, 39, 100318.
  20. Kaasinen, E.; Aromaa, S.; Väätänen, A.; Mäkelä, V.; Hakulinen, J.; Keskinen, T.; Elo, J.; Siltanen, S.; Rauhala, V.; Aaltonen, I.; et al. Mobile Service Technician 4.0: Knowledge-Sharing Solutions for Industrial Field Maintenance. IxD&A 2018, 38, 6–27.
  21. May, G.; Taisch, M.; Bettoni, A.; Maghazei, O.; Matarazzo, A.; Stahl, B. A New Human-centric Factory Model. Procedia CIRP 2015, 26, 103–108.
  22. Benedetto, F.; Brunetti, D.; Gena, C.; Lai, M.; Meo, R.; Vernero, F. Intelligent monitoring applications for Industry 4.0. In Proceedings of the IUI ’20: 25th International Conference on Intelligent User Interfaces, Cagliari, Italy, 17–20 March 2020; ACM: New York, NY, USA, 2020; pp. 67–68.
  23. Andolina, S.; Ariano, P.; Brunetti, D.; Celadon, N.; Coppo, G.; Favetto, A.; Gena, C.; Giordano, S.; Vernero, F. Experimenting with Large Displays and Gestural Interaction in the Smart Factory. In Proceedings of the 2019 IEEE International Conference on Systems, Man and Cybernetics, SMC 2019, Bari, Italy, 6–9 October 2019; pp. 2864–2869.
  24. Andolina, S.; Ariano, P.; Brunetti, D.; Celadon, N.; Coppo, G.; Favetto, A.; Gena, C.; Giordano, S.; Vernero, F. Introducing Gestural Interaction on the Shop Floor: Empirical Evaluations. In Proceedings of the Human-Computer Interaction-INTERACT 2021-18th IFIP TC 13 International Conference, Bari, Italy, 30 August–3 September 2021; Ardito, C., Lanzilotti, R., Malizia, A., Petrie, H., Piccinno, A., Desolda, G., Inkpen, K., Eds.; Springer: Berlin/Heidelberg, Germany, 2021; Volume 12936, pp. 451–455.
  25. Krupitzer, C.; Müller, S.; Lesch, V.; Züfle, M.; Edinger, J.; Lemken, A.; Schäfer, D.; Kounev, S.; Becker, C. A Survey on Human Machine Interaction in Industry 4.0. arXiv 2020.
  26. Golightly, D.; Sharples, S.; Patel, H.; Ratchev, S. Manufacturing in the cloud: A human factors perspective. Int. J. Ind. Ergon. 2016, 55, 12–22.
  27. Spasojević-Brkić, V.; Putnik, G.; Shah, V.; Castro, H.; Veljkovic, Z. Human-Computer Interactions and User Interfaces for Remote Control of Manufacturing Systems. FME Trans. 2013, 41, 250–255.
  28. Gorecky, D.; Schmitt, M.; Loskyll, M.; Zühlke, D. Human-machine-interaction in the industry 4.0 era. In Proceedings of the 2014 12th IEEE International Conference on Industrial Informatics (INDIN), Porto Alegre, Brazil, 27–30 July 2014; pp. 289–294.
  29. Aehnelt, M.; Klamma, R.; Pammer, V. Human Computer Interaction Perspectives on Industry 4.0. Interact. Des. Archit. 2019, 38. Available online: https://www.researchgate.net/publication/332466625_Human_Computer_Interaction_Perspectives_on_Industry_40_Interaction_Design_Architectures_Vol_38 (accessed on 29 July 2022).
  30. Ansari, F.; Erol, S.; Sihn, W. Rethinking Human-Machine Learning in Industry 4.0: How Does the Paradigm Shift Treat the Role of Human Learning? Procedia Manuf. 2018, 23, 117–122.
  31. Deneen, K. Human-Machine Interface Technologies: What Impact on Industry 4.0? Available online: https://medium.com/astercapital/human-machine-interface-technologies-what-impact-on-industry-4-0-6a105f97529d/ (accessed on 29 July 2022).
  32. Klumpp, M.; Hesenius, M.; Meyer, O.; Ruiner, C.; Gruhn, V. Production logistics and human-computer interaction—State-of-the-art, challenges and requirements for the future. Int. J. Adv. Manuf. Technol. 2019, 105, 3691–3709.
  33. Ras, E.; Wild, F.; Stahl, C.; Baudet, A. Bridging the Skills Gap of Workers in Industry 4.0 by Human Performance Augmentation Tools: Challenges and Roadmap. In Proceedings of the 10th International Conference on PErvasive Technologies Related to Assistive Environments, PETRA’17, Island of Rhodes, Greece, 21–23 June 2017; Association for Computing Machinery: New York, NY, USA, 2017; pp. 428–432.
  34. Merz, E. Industrie 4.0: Usability Design for the Industry of Tomorrow. Available online: https://www.amnytt.no/getfile.php/2875229.2265.rtbaurxvwt/Usability_Industrie_4.0_EN.pdf (accessed on 29 July 2022).
  35. Deloitte. The Industry 4.0 Paradox. Available online: https://www2.deloitte.com/global/en/pages/energy-and-resources/articles/the-industry-4-0-paradox.html (accessed on 29 July 2022).
  36. Romero, D.; Stahre, J.; Wuest, T.; Noran, O.; Bernus, P.; Fast-Berglund, Å.; Gorecky, D. Towards an Operator 4.0 Typology: A Human-Centric Perspective on the Fourth Industrial Revolution Technologies. In Proceedings of the International Conference on Computers & Industrial Engineering (CIE46), Tianjin, China, 29–31 October 2016; pp. 1–11.
  37. Ardito, C.; Buono, P.; Costabile, M.F.; Desolda, G. Interaction with Large Displays: A Survey. ACM Comput. Surv. 2015, 47, 46:1–46:38.
  38. Sambrooks, L.; Wilkinson, B. Comparison of Gestural, Touch, and Mouse Interaction with Fitts’ Law. In Proceedings of the 25th Australian Computer-Human Interaction Conference: Augmentation, Application, Innovation, Collaboration, OzCHI’13, Adelaide, Australia, 25–29 November 2013; Association for Computing Machinery: New York, NY, USA, 2013; pp. 119–122.
  39. Lacueva-Pérez, F.J.; Khakurel, J.; Brandl, P.; Hannola, L.; Gracia-Bandrés, M.A.; Schafler, M. Assessing TRL of HCI Technologies Supporting Shop Floor Workers. In Proceedings of the 11th PErvasive Technologies Related to Assistive Environments Conference, PETRA ’18, Corfu, Greece, 26–29 June 2018; ACM: New York, NY, USA, 2018; pp. 311–318.
  40. Du, G.; Degbelo, A.; Kray, C.; Painho, M. Gestural interaction with 3D objects shown on public displays: An elicitation study. Interact. Des. Archit. 2018, 2018, 184–202.
  41. Meixner, G.; Petersen, N.; Koessling, H. User Interaction Evolution in the SmartFactoryKL. In Proceedings of the 24th BCS Interaction Specialist Group Conference, BCS ’10, Dundee, UK, 6–10 September 2010; British Computer Society: Swinton, UK, 2010; pp. 211–220.
  42. Garzotto, F.; Valoriani, M. Touchless Gestural Interaction with Small Displays: A Case Study. In Proceedings of the Biannual Conference of the Italian Chapter of SIGCHI, Trento, Italy, 16–20 September 2013; ACM: New York, NY, USA, 2013; pp. 26:1–26:10.
  43. Kuikkaniemi, K.; Jacucci, G.; Turpeinen, M.; Hoggan, E.; Müller, J. From Space to Stage: How Interactive Screens Will Change Urban Life. Computer 2011, 44, 40–47.
  44. Norman, D.A. Natural User Interfaces Are Not Natural. Interactions 2010, 17, 6–10.
  45. Horejsi, P.; Polcar, J.; Rohlíková, L. Digital Factory and Virtual Reality: Teaching Virtual Reality Principles with Game Engines; IntechOpen Limited: London, UK, 2016.
  46. Angelini, L.; Mugellini, E.; Abou Khaled, O.; Couture, N. Internet of Tangible Things (IoTT): Challenges and Opportunities for Tangible Interaction with IoT. Informatics 2018, 5, 7.
  47. Lutsenko, Y.V. Shine and Poverty of Virtual Reality. Polythematic Online Sci. J. Kuban State Agrar. Univ. 2016, 124.
  48. Khurana, R.; Monga, V. Facility Layout Planning: A Review. Int. J. Innov. Res. Sci. Eng. Technol. 2015, 4, 976–980.
  49. Gong, L.; Berglund, J.; Fast-Berglund, Å.; Johansson, B.; Wang, Z.; Börjesson, T. Development of virtual reality support to factory layout planning. Int. J. Interact. Des. Manuf. 2019, 13, 935–945.
  50. Menck, N.; Yang, X.; Weidig, C.; Winkes, P.; Lauer, C.; Hagen, H.; Hamann, B.; Aurich, J. Collaborative Factory Planning in Virtual Reality. Procedia CIRP 2012, 3, 317–322.
  51. Lidell, A.; Ericson, S.; Ng, A. The Current and Future Challenges for Virtual Commissioning and Digital Twins of Production Lines. Adv. Transdiscipl. Eng. 2022, 21, 508–519.
  52. Lee, C.G.; Park, S.C. Survey on the virtual commissioning of manufacturing systems. J. Comput. Des. Eng. 2014, 1, 213–222.
  53. Rasheed, G.; Khan, M.; Malik, N.; Akhunzada, A. Measuring Learnability through Virtual Reality Laboratory Application: A User Study. Sustainability 2021, 13, 10812.
  54. Milgram, P.; Takemura, H.; Utsumi, A.; Kishino, F. Augmented reality: A class of displays on the reality-virtuality continuum. Telemanipulator Telepresence Technol. 1994, 2351, 282–292.
  55. Speicher, M.; Hall, B.; Nebeling, M. What is Mixed Reality? In Proceedings of the 2019 CHI Conference on Human Factors in Computing Systems (CHI ’19), Glasgow, Scotland, UK, 4–9 May 2019; Association for Computing Machinery: New York, NY, USA, 2019; pp. 1–15.
  56. Perez, F.J.L.; Brandl, P.; Bandres, M.A.G. Technology Monitoring: Report on Information Needed for Workers in the Smart Factory; Technical Report; FACTS4WORKERS (Worker Centric Workplace in Smart Factories): Graz, Austria, 2006.
  57. Perey, C.; Wild, F.; Helin, K.; Janak, M.; Davies, P.; Ryan, P. Advanced manufacturing with augmented reality. In Proceedings of the 2014 IEEE International Symposium on Mixed and Augmented Reality (ISMAR), Munich, Germany, 10–12 September 2014; p. 1.
  58. Gattullo, M.; Scurati, G.W.; Fiorentino, M.; Uva, A.E.; Ferrise, F.; Bordegoni, M. Towards augmented reality manuals for industry 4.0: A methodology. Robot. Comput. Integr. Manuf. 2019, 56, 276–286.
  59. Deac, C.; Popa, C.L.; Ghinea, M.; Cotet, C. Using Augmented Reality in Smart Manufacturing. In Proceedings of the 28th DAAAM International Symposium, Zadar, Croatia, 8–11 November 2017; DAAAM International: Vienna, Austria, 2017; pp. 727–732.
  60. Syberfeldt, A.; Danielsson, O.; Gustavsson, P. Augmented Reality Smart Glasses in the Smart Factory: Product Evaluation Guidelines and Review of Available Products. IEEE Access 2017, 5, 9118–9130.
  61. Rauschnabel, P.; He, J.; Ro, Y.K. Antecedents to the adoption of augmented reality smart glasses: A closer look at privacy risks. J. Bus. Res. 2018, 92, 374–384.
  62. Ashton, K. That Internet of Things Thing: In the Real World Things Matter More than Ideas. Available online: https://www.rfidjournal.com/that-internet-of-things-thing (accessed on 29 July 2022).
  63. Atzori, L.; Iera, A.; Morabito, G. The Internet of Things: A survey. Comput. Netw. 2010, 54, 2787–2805.
  64. Skobelev, P.; Borovik, S. On the way from Industry 4.0 to Industry 5.0: From digital manufacturing to digital society. Sci. Tech. Union Mech. Eng. 2021, 2, 307–311.
  65. Sagirlar, G.; Carminati, B.; Ferrari, E. Decentralizing Privacy Enforcement for Internet of Things Smart Objects. arXiv 2018, arXiv:1804.02161.
  66. Wehle, H.D. Augmented Reality and the Internet of Things (IoT)/Industry 4.0. Available online: https://www.researchgate.net/profile/Hans-Dieter-Wehle/publication/288642701_Augmented_Reality_and_the_Internet_of_Things_IoT_Industry_40_en/links/5682703308aebccc4e0df03f/Augmented-Reality-and-the-Internet-of-Things-IoT-Industry-40-en.pdf (accessed on 29 July 2022).
  67. Zhang, C.; Chen, Y. A Review of Research Relevant to the Emerging Industry Trends: Industry 4.0, IoT, Block Chain, and Business Analytics. J. Ind. Integr. Manag. 2019, 5, 165–180.
  68. Fatima, Z.; Tanveer, M.H.; Waseemullah; Zardari, S.; Naz, L.F.; Khadim, H.; Ahmed, N.; Tahir, M. Production Plant and Warehouse Automation with IoT and Industry 5.0. Appl. Sci. 2022, 12, 2053.
  69. Ben-Daya, M.; Hassini, E.; Bahroun, Z. Internet of things and supply chain management: A literature review. Int. J. Prod. Res. 2019, 57, 4719–4742.
  70. Vaccaro, L.; Sansonetti, G.; Micarelli, A. An Empirical Review of Automated Machine Learning. Computers 2021, 10, 11.
  71. Mahmud, B. Internet of Things (IOT) for Manufacturing Logistics on SAP ERP Applications. J. Telecommun. Electron. Comput. Eng. (JTEC) 2017, 9, 43–47.
  72. Xiao, J.; Zhou, Z.; Yi, Y.; Ni, L.M. A Survey on Wireless Indoor Localization from the Device Perspective. ACM Comput. Surv. 2016, 49, 25:1–25:31.
  73. Fang, Y.; Cho, Y.; Zhang, S.; Perez, E. Case Study of BIM and Cloud-Enabled Real-Time RFID Indoor Localization for Construction Management Applications. J. Constr. Eng. Manag. 2016, 142, 05016003.
  74. Alsinglawi, B.; Elkhodr, M.; Nguyen, Q.V.; Gunawardana, U.; Maeder, A.J.; Simoff, S.J. RFID Localisation For Internet Of Things Smart Homes: A Survey. Int. J. Comput. Netw. Commun. 2017, 9, 81–99.
  75. Cicibaş, H.; Demir, K. Integrating Internet of Things (IoT) into Enterprise Organizations: Socio-Technical Issues and Guidelines. J. Manag. Inf. Syst. 2016, 1, 106–117.
  76. Demir, K. A Survey on Challenges of Software Project Management. In Proceedings of the 2009 International Conference on Software Engineering Research & Practice, Las Vegas, NV, USA, 13–16 July 2009; CSREA Press: Sterling, VA, USA, 2009; Volume 2, pp. 579–585.
  77. European Commission; EPoSS. Internet of Things in 2020: Roadmap for the Future. Available online: https://docbox.etsi.org/erm/Open/CERP%2020080609-10/Internet-of-Things_in_2020_EC-EPoSS_Workshop_Report_2008_v1-1.pdf (accessed on 29 July 2022).
  78. Lee, L.; Hui, P. Interaction Methods for Smart Glasses: A Survey. IEEE Access 2018, 6, 28712–28732.
  79. Zhou, J.; Zhang, Y.; Laput, G.; Harrison, C. AuraSense: Enabling Expressive Around-Smartwatch Interactions with Electric Field Sensing. In Proceedings of the 29th Annual Symposium on User Interface Software and Technology, Tokyo, Japan, 16–19 October 2016; Association for Computing Machinery: New York, NY, USA, 2016; pp. 81–86.
  80. Quint, F.; Loch, F.; Weber, H.; Venitz, J.; Gröber, M.; Liedel, J. Evaluation of Smart Glasses for Documentation in Manufacturing. In Proceedings of the Mensch und Computer 2016—Workshopband, Aachen, Germany, 4–7 September 2016.
  81. Ishii, H. Tangible Bits: Designing the Seamless Interface between People, Bits, and Atoms. In Proceedings of the 8th International Conference on Intelligent User Interfaces, IUI’03, Miami, FL, USA, 12–15 January 2003; Association for Computing Machinery: New York, NY, USA, 2003; p. 3.
  82. Angelini, L.; Mugellini, E.; Lechelt, Z.; Hornecker, E.; Marshall, P.; Liu, C.; Brereton, M.; Soro, A.; Couture, N.; Abou Khaled, O. Internet of Tangible Things: Workshop on Tangible Interaction with the Internet of Things. In Proceedings of the Extended Abstracts of the 2018 CHI Conference on Human Factors in Computing Systems, Montreal, QC, Canada, 21–26 April 2018; Association for Computing Machinery: New York, NY, USA, 2018; pp. 1–8.
  83. Funk, M.; Korn, O.; Schmidt, A. An Augmented Workplace for Enabling User-Defined Tangibles. In Proceedings of the CHI’14: CHI Conference on Human Factors in Computing Systems, Toronto, ON, Canada, 26 April–1 May 2014.
  84. Yeo, H.S.; Minami, R.; Rodriguez, K.; Shaker, G.; Quigley, A. Exploring Tangible Interactions with Radar Sensing. Proc. ACM Interact. Mob. Wearable Ubiquitous Technol. 2018, 2, 1–25.
  85. Wallbaum, T.; Matviienko, A.; Heuten, W.; Boll, S. Challenges For Designing Tangible Systems. In Proceedings of the 3rd European Tangible Interaction Studio (ETIS 2017), Luxembourg, 19–23 June 2017; pp. 21–23.
  86. Colgate, J.; Wannasuphoprasit, W.; Peshkin, M. Cobots: Robots for collaboration with human operators. In Proceedings of the 1996 ASME International Mechanical Engineering Congress and Exposition, Atlanta, GA, USA, 17–22 November 1996; pp. 433–439.
  87. Silverman, R.E. The Words of Tomorrow. Available online: https://www.wsj.com/articles/SB944517141695981261/ (accessed on 29 July 2022).
  88. Vicentini, F. La Robotica Collaborativa. Sicurezza e Flessibilità Delle Nuove Forme di Collaborazione Uomo-Robot; Tecniche Nuove: Milano, Italy, 2017. (In Italian)
  89. Linsinger, M.; Sudhoff, M.; Lemmerz, K.; Glogowski, P.; Kuhlenkötter, B. Task-based Potential Analysis for Human-Robot Collaboration within Assembly Systems. In Tagungsband des 3. Kongresses Montage Handhabung Industrieroboter; Springer: Berlin/Heidelberg, Germany, 2018.
  90. Oberc, H.; Prinz, C.; Glogowski, P.; Lemmerz, K.; Kuhlenkötter, B. Human Robot Interaction—Learning how to integrate collaborative robots into manual assembly lines. Procedia Manuf. 2019, 31, 26–31.
  91. Alvarez-de-los Mozos, E.; Renteria, A. Collaborative Robots in e-waste Management. Procedia Manuf. 2017, 11, 55–62.
  92. Neto, P.; Simão, M.; Mendes, N.; Safeea, M. Gesture-based human-robot interaction for human assistance in manufacturing. Int. J. Adv. Manuf. Technol. 2018, 101, 119–135.
  93. Spada, S.; Ghibaudo, L.; Gilotta, S.; Gastaldi, L.; Cavatorta, M.P. Analysis of exoskeleton introduction in industrial reality: Main issues and EAWS risk assessment. In International Conference on Applied Human Factors and Ergonomics; Springer: Berlin/Heidelberg, Germany, 2017; pp. 236–244.
  94. Fleck, P. Collaborative Robot. Available online: https://www.researchgate.net/publication/313665204_Collaborative_Robots?channel=doi&linkId=58a21ad545851598babae8ff&showFulltext=true (accessed on 29 July 2022).
  95. Manso, L.; Bustos, P.; Bandera, J.; Romero-Garcés, A.; Calderita, L.; Marfil, R.; Bandera, A. Deep Representations for Collaborative Robotics; Springer: Berlin/Heidelberg, Germany, 2016; Volume 10087, pp. 179–193.
  96. Lambrecht, J.; Kruger, J. Spatial programming for industrial robots based on gestures and Augmented Reality. In Proceedings of the 2012 IEEE/RSJ International Conference on Intelligent Robots and Systems, Vilamoura-Algarve, Portugal, 7–12 October 2012; pp. 466–472.
  97. Landi, C.T.; Ferraguti, F.; Secchi, C.; Fantuzzi, C. Tool compensation in walk-through programming for admittance-controlled robots. In Proceedings of the IECON 2016-42nd Annual Conference of the IEEE Industrial Electronics Society, Florence, Italy, 23–26 October 2016; pp. 5335–5340.
  98. Billard, A.G.; Calinon, S.; Dillmann, R. Learning from Humans. In Springer Handbook of Robotics; Siciliano, B., Khatib, O., Eds.; Springer International Publishing: Cham, Switzerland, 2016; pp. 1995–2014.
  99. Proia, S.; Carli, R.; Cavone, G.; Dotoli, M. Control Techniques for Safe, Ergonomic, and Efficient Human-Robot Collaboration in the Digital Industry: A Survey. IEEE Trans. Autom. Sci. Eng. 2022, 19, 1798–1819.
  100. Benotsmane, R.; Dudás, L.; Kovács, G. Collaborating robots in Industry 4.0 conception. IOP Conf. Ser. Mater. Sci. Eng. 2018, 448, 012023.
  101. Gualtieri, L.; Rauch, E.; Vidoni, R. Development and validation of guidelines for safety in human-robot collaborative assembly systems. Comput. Ind. Eng. 2022, 163, 107801.
  102. Gualtieri, L.; Rauch, E.; Vidoni, R.; Matt, D.T. Safety, Ergonomics and Efficiency in Human-Robot Collaborative Assembly: Design Guidelines and Requirements. Procedia CIRP 2020, 91, 367–372. [Google Scholar] [CrossRef]
  103. Akundi, A.; Euresti, D.; Luna, S.; Ankobiah, W.; Lopes, A.; Edinbarough, I. State of Industry 5.0—Analysis and Identification of Current Research Trends. Appl. Syst. Innov. 2022, 5, 27. [Google Scholar] [CrossRef]
  104. Zitter, I.; Hoeve, A. Hybrid Learning Environments: Merging Learning and Work Processes to Facilitate Knowledge Integration and Transitions; OECD Education Working Papers; OECD Publishing: Paris, France, 2012. [Google Scholar] [CrossRef]
105. Loch, F.; Böck, S.; Vogel-Heuser, B. Teaching Styles of Virtual Training Systems for Industrial Applications—A Review of the Literature. IxD&A 2018, 38, 46–63.
106. Demir, K.A.; Döven, G.; Sezen, B. Industry 5.0 and Human-Robot Co-working. Procedia Comput. Sci. 2019, 158, 688–695.
107. Longo, F.; Padovano, A.; Umbrello, S. Value-oriented and ethical technology engineering in industry 5.0: A human-centric perspective for the design of the factory of the future. Appl. Sci. 2020, 10, 4182.
108. Jafari, N.; Azarian, M.; Yu, H. Moving from Industry 4.0 to Industry 5.0: What Are the Implications for Smart Logistics? Logistics 2022, 6, 26.
109. World Economic Forum. Internet of Things Guidelines for Sustainability. Available online: https://www3.weforum.org/docs/IoTGuidelinesforSustainability.pdf (accessed on 29 July 2022).
110. Herrmann, F. The Smart Factory and Its Risks. Systems 2018, 6, 38.
111. Kaasinen, E.; Schmalfuß, F.; Özturk, C.; Aromaa, S.; Boubekeur, M.; Heilala, J.; Heikkilä, P.; Kuula, T.; Liinasuo, M.; Mach, S.; et al. Empowering and engaging industrial workers with Operator 4.0 solutions. Comput. Ind. Eng. 2020, 139, 105678.
112. Peng, A.; Nushi, B.; Kiciman, E.; Inkpen, K.; Kamar, E. Investigations of Performance and Bias in Human-AI Teamwork in Hiring. In Proceedings of the AAAI Conference on Artificial Intelligence, Palo Alto, CA, USA, 22 February–1 March 2022.
Figure 1. User interacting with a large display via a touchless gestural interface [23,24].
Figure 2. User testing a smart armband for gestural interaction on the shop floor [23,24].
Figure 3. Virtual reality visor with smartphone case.
Figure 4. The smartwatch, one of the most widely used wearable devices.
Figure 5. Liam, a collaborative robot by Apple which disassembles iPhones.
Table 1. Surveyed works classified based on the supported task(s) and production process phases/activities: access to knowledge (K), logistics (L), maintenance (M), planning (Pl), production (Pr), security (S), workers' wellbeing (We), and warehousing (Wa).
| Technology | Reference | Supported Tasks | Phase(s) |
|---|---|---|---|
| Large displays | [24] | Task browsing; interaction with 3D objects | Pr |
| Large displays | [39] | Interaction with information systems | K |
| Large displays | [40] | Interaction with 3D objects | Pr |
| Large displays | [41] | Access to different specialized devices (monitoring, diagnosis, and maintenance) | M |
| Virtual reality | [45] | Create virtual factory environments | Pl |
| Virtual reality | [49] | Factory layout planning | Pl |
| Virtual reality | [50] | Factory layout planning | Pl |
| Virtual reality | [51] | Virtual commissioning | Pr |
| Virtual reality | [52] | Virtual commissioning and performance optimization | Pr |
| Augmented reality | [56] | Product assembly and maintenance; quality checking | M, Pr |
| Augmented reality | [57] | Product assembly and maintenance; quality checking; access to technical data | K, M, Pr |
| Augmented reality | [58] | Access to technical data | K |
| Augmented reality | [59] | Access to technical data | K |
| Augmented reality | [60] | Assembly, maintenance, quality control, and material handling | M, Pr |
| Augmented reality | [61] | Product engineering, employee coaching, warehousing, and logistics | L, Pr, Wa |
| Internet of Things | [22] | Energy consumption data monitoring | K |
| Internet of Things | [66] | Access and analyze technical data | K |
| Internet of Things | [67] | Process monitoring, hazard reduction | K, S, We |
| Internet of Things | [68] | Product testing | Pr |
| Internet of Things | [69] | Product traceability; monitoring of storage conditions | L, Wa |
| Internet of Things | [71] | Energy consumption optimization | K, Pr |
| Internet of Things | [73] | Asset management, people tracking, security, or warehouse | L, S, Wa |
| Internet of Things | [74] | Asset management, people tracking, security, or warehouse | L, S, Wa |
| Wearable | [80] | Document knowledge about assembly and maintenance processes | K |
| Wearable | HuManS (https://dmd.it/humans/en/humans/, accessed on 29 July 2022) | Monitoring, analyzing, and controlling the posture of the user | We |
| Tangible user interfaces | [83] | Assembly | Pr |
| Tangible user interfaces | [84] | Assembly | Pr |
| Collaborative robots | [17] | Collaboration in manufacturing tasks | Pr |
| Collaborative robots | [36] | Strength-demanding tasks | Pr |
| Collaborative robots | [91] | e-waste management | Pr, We |
| Collaborative robots | [92] | Assembly | Pr |
| Collaborative robots | [94] | Performing dull tasks on production lines | Pr |
| Collaborative robots | [95] | Supporting human operators in mixed teams | Pr |
| Collaborative robots | [99] | Collaboration in manufacturing tasks; posture improvement | Pr, We |
| Collaborative robots | [100] | Assembly | Pr |
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.
