Abstract
Nowadays, mobile robots play an important role in different areas of science, industry, academia and even everyday life. Accordingly, their abilities and behaviours become increasingly complex. In particular, in indoor environments such as hospitals, schools, banks and museums, where the robot coincides with people and other robots, its movement and navigation must be programmed and adapted to robot–robot and human–robot interactions. However, existing approaches focus either on multi-robot navigation (robot–robot interaction) or on social navigation with human presence (human–robot interaction), neglecting the integration of both. Proxemic interaction has recently been applied in this research domain to improve Human–Robot Interaction (HRI). In this context, we propose an autonomous navigation approach for mobile robots in indoor environments, based on the principles of proxemic theory and integrated with classical navigation algorithms, such as ORCA, Social Momentum, and A*. With this novel approach, the mobile robot adapts its behaviour by analysing the proximity of people to each other, to itself and to other robots, in order to decide and plan its navigation while showing acceptable social behaviours in the presence of humans. We describe our proposed approach and show how proxemics and the classical navigation algorithms are combined to provide effective navigation while respecting social human distances. To show the suitability of our approach, we simulate several situations of coexistence between robots and humans, demonstrating effective social navigation.
1. Introduction
The continuous evolution of social robotics has fostered the presence of mobile robots in many contexts of people’s daily lives, driving the evolution of service robots. Nowadays, service robots provide support for specific tasks, such as helping the elderly [1,2], giving classes to children [3,4], serving as guides [5,6,7], among many other tasks [8,9]. In particular, in such indoor environments (e.g., hospitals, schools, banks and museums), where the robot coincides with people and other robots, its movement and navigation must be programmed and adapted to robot–robot and human–robot interactions.
Autonomous navigation is often limited to avoiding obstacles while the robot reaches its goal. Navigation of service robots should also consider factors such as human comfort, naturalness and sociability [10]. To ensure human comfort, the way the robot navigates must give humans a feeling of security [11], in the sense that its trajectory does not impede the natural trajectory of humans. That security is achieved simply by having the robot evade humans; however, the robot trajectory may still be rough, causing humans a feeling of insecurity and discomfort. Naturalness refers to the robot executing movements similar to those of humans along a trajectory. Most methods that try to match such trajectories adjust the robot’s speed by smoothing its movements between successive points on the path [12,13]. Finally, sociability dictates the social behaviours of robots, linked to regional or ethical notions, such as keeping social distances from humans and avoiding interrupting a conversation by passing between the people involved.
This social interaction between humans and robots demands special attention in environments where both cooperate or work independently, in order to make the behaviour of robots efficient and socially acceptable [14,15,16]. In this sense, social navigation is a crucial aspect for robots to become part of human habitats and workspaces. Thus, the development of social robots, and their safe and natural incorporation into human environments, is an essential and complicated task, which must inevitably consider distance [17,18]: misuse of distancing can generate a disruptive attitude of humans towards robots. The way a robot moves reflects its intelligence and delineates its social acceptance, in terms of perceived safety, comfort and legibility.
Researchers are therefore seeking to develop new flexible and adaptable interactions, in order to make robot behaviour, and navigation in particular, socially acceptable. In this sense, proxemic interaction is becoming an influential approach to implement Human–Robot Interactions (HRI) [19,20,21,22,23]. The original concept of proxemics was proposed by Edward T. Hall in 1966 [24]; his proxemic theory describes how individuals perceive, interpret and use their personal space relative to the distance among themselves [25]. Social relationships are essential in the life of human beings and can be expressed as how people allow contact and interact with each other in a physical space. Thus, people’s interactions are based on physical distances and the face orientation of others. Both factors describe the level of engagement among people to establish communication. This social science theory has inspired researchers to create seamless interactions between users and digital objects in a ubiquitous computing (ubicomp) environment, in what is called proxemic interaction. Thus, proxemic interactions describe relationships among people and digital objects in terms of five physical proxemic dimensions: Distance, Identity, Location, Movement and Orientation (DILMO), and determine proxemic behaviours, i.e., the responses of such digital objects [26,27,28].
As acceptable social navigation is based on several factors, among which the distance from the robot to people stands out, proxemic interaction is an appropriate approach to establish the social behaviours of robots in terms of DILMO dimensions. Therefore, not only the distance, but also the identity, location, movement and orientation of humans can be considered.
In scenarios where mobile robots interact with humans and other robots, their social behaviour must be adapted to robot–robot or human–robot interaction accordingly. For robot–robot interaction, navigation algorithms for multi-robot systems are traditionally considered [12,29,30,31]. However, there is still a lack of attention to integrating human–robot and robot–robot interactions and adapting the robot’s behaviour accordingly.
In this context, we propose a novel approach to adapt the navigation of social robots to different scenarios, by integrating proxemic interactions with traditional navigation algorithms. In scenarios where no humans are present, the robot runs the ORCA algorithm to avoid obstacles.
In the presence of humans, proxemics is combined with A* and Social Momentum to achieve social behaviours, taking into account the social restrictions that the robot’s actions entail. With this novel approach, the mobile robot adapts its behaviour by analysing the proximity of people to each other, to itself and to other robots, in order to decide and plan its navigation while showing acceptable social behaviours in the presence of humans. We describe our proposed approach and show how proxemics and the classical navigation algorithms are combined to provide effective navigation. We simulate different scenarios of coexistence between robots and people to validate our proposal, demonstrating the feasibility of an adaptable and suitable social navigation system.
In summary, the main differences of our proposed approach with respect to existing works are (i) the integration of both human–robot and robot–robot interactions in the same approach, (ii) the consideration of proxemic zones for robots in scenarios in which the robots are interacting with humans, (iii) the detection of individuals and groups of humans to decide the navigation accordingly, and (iv) the integration of traditional navigation approaches with proxemics, by considering all DILMO dimensions to behave accordingly during navigation.
The remainder of this work is organised as follows. Section 2 presents some preliminary concepts of proxemic theory, in addition to the social constraints that can occur in an environment surrounded by people. Section 3 details studies related to autonomous robot navigation, as well as work on HRI based on proxemics. Section 4 describes our proposal, which focuses on the development of a navigation system for social robots operating in environments populated by both humans and robots. Section 5 shows our navigation system implemented and tested in different situations. Section 6 discusses improvements that can be carried out. Finally, we draw conclusions in Section 7.
2. Proxemic Theory and Proxemic Interactions: Preliminaries
Social distances are present in our daily existence. The theory of proxemics is a concept used mainly to describe the human use of space. Edward T. Hall proposed the first definition, describing proxemics as “the interrelated observations and theories of humans use of space as a specialised elaboration of culture” [24]. He presented how people perceive, interpret and use space, especially in relation to the distance among people [25]. The theory of proxemics describes how people from different cultures not only speak diverse languages, but also inhabit different sensory worlds. In this respect, distance plays an indispensable role in proxemics, establishing a region around the person that serves to maintain proper spacing among individuals.
According to Hall’s theory of proxemics, the interaction zones have been classified into four proxemic zones, as shown in Figure 1 (a minimal distance-to-zone sketch follows the list):
Figure 1.
Interpersonal distances of people according to Edward Hall’s proxemic theory (radii in meters) [24].
- Intimate zone, defined by a distance of 0–50 cm (0–1.5 feet). This space is reserved for close relationships, and physical contact is possible in this zone. Usually, people can enter this zone only if the other person allows it (Figure 2a). An unjustified invasion of this zone can be interpreted as an attack, generating discomfort. However, there are exceptions depending on the environment, such as public transportation and lifts, where the person’s intimate zone can be compromised.
Figure 2.
Interaction according to proxemic zones: (a) intimate, (b) personal, (c) social and (d) public.
- Personal zone, delimited by distances from 0.5 m to 1 m (1.5–4 feet). In this zone, people can interact naturally with others; contact with the arms is barely possible, so physical domination is limited (Figure 2b). Beyond it, a person cannot freely “get their hands on” someone else.
- Social zone, determined by distances between 1 m and 4 m (4–12 feet). This area corresponds to spaces where people maintain communication without touching each other (e.g., a meeting table). In this zone, people keep a physical distance from each other. For example, in a business meeting, people have to speak louder to address others in order to catch their attention (Figure 2c).
- Public zone, with distances greater than 4 m (greater than 12 feet). It describes the distribution of people in urban spaces, such as a concert hall or public meeting, where people’s attention is focused on a moderator (Figure 2d), or in the street, parks or museums, where the person is unaware of others. Other people’s identities are unknown among the individuals who share the same space. This space sometimes varies depending on the situation. For example, at a concert or on public transportation, people stand next to each other; in these exceptional cases, social, personal and even intimate zones are invaded. However, it is not considered an invasion, as everyone is aware that the situation does not allow keeping the corresponding distance. In other words, public distance sometimes temporarily becomes personal or even intimate space.
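As a concrete illustration of these thresholds, the following sketch maps a measured distance to one of Hall’s four zones. It is a minimal, hypothetical helper (the function name and the use of the metric boundaries as hard cut-offs are our assumptions), not part of the original theory.

```python
# Hypothetical helper mapping a measured distance (in metres) to one of
# Hall's proxemic zones, using the metric thresholds quoted above.
def proxemic_zone(distance_m: float) -> str:
    """Classify a distance into Hall's proxemic zones."""
    if distance_m < 0.5:
        return "intimate"   # 0-0.5 m: reserved for close relationships
    if distance_m < 1.0:
        return "personal"   # 0.5-1 m: natural interaction, arm's reach
    if distance_m < 4.0:
        return "social"     # 1-4 m: communication without physical contact
    return "public"         # > 4 m: shared public space

print(proxemic_zone(0.8))  # -> "personal"
```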
Although Hall’s proxemic zones are concentric circles, other studies have proposed different configurations that respond to different aspects, such as culture, age, gender, personal relationship and context [22]. Figure 3 shows different configurations of proxemic zones [22]. Figure 3a shows the classical four proxemic zones of Hall’s theory, defined as concentric circles. According to the study presented in [32], people are more demanding with respect to their frontal space, considering frontal invasions more uncomfortable; thus, egg-shaped proxemic areas, as shown in Figure 3b, are more appropriate. In public environments, the personal space corresponds to the “private sphere” of the Social Force model, in which the movement of pedestrians is influenced by other pedestrians through repulsive forces [33]; therefore, defining proxemic zones as concentric ellipses (see Figure 3c) seems more suitable. In [34], the authors performed a study showing that personal space is asymmetrical, as shown in Figure 3d: it is smaller on the dominant side of the pedestrian (right-handed or left-handed). The study demonstrates that when people want to pass through a narrow space, they first evaluate the relationship between the size of the passage and the width of their body.
Figure 3.
Different shapes of proxemic zones: (a) the classical four proxemic zones of Hall’s theory; (b) the personal space refers to the “private sphere” in the Social Force model; (c) proxemic zones as concentric ellipses; (d) proxemic zone is smaller on the dominant side of the pedestrian [22].
In crowded environments, people’s interactions are not limited to person-to-person. Groups of people also change the way an individual interacts with groups in which he/she is not participating (e.g., by not passing through the group). Thus, it is also important to define proxemic zones for groups of people. Results presented in [35] demonstrate that people leave more space around a group than the sum of the individual personal spaces. The concept of the “O” space, proposed in [36], allows conversations to be detected. The “O” space is the area that delimits the main activity established by a group of people (e.g., having a conversation or focusing on a common object or situation). Only participants can enter it, they protect it and others tend to respect it; its geometric characteristics depend on the size of the bodies, posture, position and orientation of the participants during the activity. The orientation of people can help decide which groups are conversing and how the “O” space is defined (a minimal estimation sketch is given after Figure 4). The “O” space is extended with the “P” space in [36], which surrounds the “O” space and locates the participants and their personal belongings. Figure 4 shows the “O” and “P” spaces as white and red circles, respectively.
Figure 4.
“O” and “P” spaces for groups (white and red, respectively).
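The following sketch illustrates one simple way the “O” space of a conversing group could be estimated from the positions and orientations of its participants, by projecting each participant forward and averaging the projected points. The projection stride and the circular shape are illustrative assumptions; [36] defines the “O” space only in terms of the participants’ bodies, postures, positions and orientations.

```python
import math

# Rough, hypothetical estimate of the "O" space centre of a conversational
# group: project each participant a fixed stride along their facing direction
# and average the projected points.
def o_space_center(people, stride=0.6):
    """people: list of (x, y, heading_rad); returns (cx, cy) of the O space."""
    xs, ys = [], []
    for x, y, heading in people:
        xs.append(x + stride * math.cos(heading))
        ys.append(y + stride * math.sin(heading))
    return sum(xs) / len(xs), sum(ys) / len(ys)

# Two people facing each other, 1.2 m apart: the O space lies between them.
print(o_space_center([(0.0, 0.0, 0.0), (1.2, 0.0, math.pi)]))  # ~ (0.6, 0.0)
```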
Besides individual and group proxemic zones, human activity can define other virtual spaces that others recognise and respect, such as activity and affordance spaces. The activity space is a social space related to the action carried out by a person. The affordance space is a social space related to a potential activity provided by the environment. In both cases, the notion implies a geometric space, but it does not give an explicit definition of its form, as it can take many shapes depending on specific (potential) actions. Figure 5a shows an example of an activity space, in which a human is taking a picture; the surrounding people should avoid this space in order not to interrupt the activity. In Figure 5b, the space in front of the picture can potentially be used to read the information; thus, people should avoid occupying this space.
Figure 5.
(a) Activity space and (b) affordance space.
Researchers have been inspired by this social science theory to create seamless interactions among users and digital objects in ubicomp environments. The concept of proxemic interaction was first proposed by Ballendat et al. in [26] and was designed for implementing applications in ubicomp environments (see Figure 6). Ubicomp encompasses different services and smart technologies (Internet, operating systems, sensors, microprocessors, interfaces, networks, robotics and mobile protocols), which allow people to interact with the environment in a more natural and more personalised way [37]. Therefore, ubicomp provides new opportunities to explore new approaches for Human–Computer Interaction (HCI) in environments where users have many computing devices that can be employed according to the context they require.
Figure 6.
Proxemic interactions associate people to digital devices, digital devices to digital devices, and non-digital physical objects to both people and digital devices.
Greenberg et al. [27] identified five dimensions: Distance, Identity, Location, Movement and Orientation (abbreviated here as DILMO), which are associated with people, digital devices and non-digital things in ubicomp environments. Thus, proxemic interaction has been implemented to improve HCI in such ubiquitous environments, by determining proxemic behaviours, i.e., the responses of such digital objects according to the DILMO dimensions with respect to people or other objects (digital or not) [38]. In the context of HRI, DILMO dimensions can be used to model human–robot, robot–robot and robot–device interactions.
Proxemic DILMO dimensions can be analysed in a variety of ways, according to measures that vary in accuracy and in the values they return (i.e., discrete or continuous), which in turn depend on the technology used to gather and process them.
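As an illustration, a robot could keep one record of DILMO readings per tracked entity. The following sketch is a hypothetical container; the field names, units and types are our assumptions, and, as noted above, each dimension may in practice be discrete or continuous depending on the sensors.

```python
from dataclasses import dataclass
from typing import Optional, Tuple

# Hypothetical per-entity DILMO record a robot might maintain.
@dataclass
class DilmoReading:
    identity: str                  # e.g. "person", "robot-2", "visitor"
    location: Tuple[float, float]  # position (x, y) in the map frame, metres
    distance: float                # distance to the observing robot, metres
    movement: Tuple[float, float]  # velocity (vx, vy), metres/second
    orientation: Optional[float]   # heading in radians, None if no "front face"

reading = DilmoReading("person", (2.0, 1.5), 2.5, (0.1, 0.0), 0.0)
```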
Distance is a physical measure used as a parameter to determine the proxemic zone of entities (users, devices and robots), based on Hall’s theory. The zone allows the users to interact with the display, device or robot according to different proximities [27,39]. Typically, short distances allow high-level interactions between devices, between a user and a device, between a robot and a person, between robots, etc.
For social navigation in service robots, determining the human–robot, human–human and robot–robot distances is important. These can be obtained from several combinations of hardware and software capabilities of robots, such as Kinect sensor technology, which provides body-tracking and object-tracking capabilities; cameras, to apply computer vision or thermal imaging analysis; and ultrasonic and infrared sensors, to detect entities at certain distances.
Identity mainly describes the individuality or role of a person or a particular object, distinguishing one entity from another in a space [27,40]. Identity can be used, for example, to control spatial interactions between a person’s handheld device and all contiguous appliances in order to generate an effective appliance control interface [38], or to display specific content for a specific person (e.g., for access control, TV content for children is different from the content for parents).
For service robots, Identity can be used to determine, for example, specific people with specific roles or a specific device. Kinect sensors and the robot’s cameras can be used to determine identities.
Location defines the physical context in which the entities reside. Location relates entities with objects that are categorised as fixed (e.g., room layout, doors and windows) or semi-fixed, i.e., changeable objects such as chairs, desks and lamps [27,38]. It is an important factor because other measures may depend on the contextual location. Location provides the entities’ positions in space, which can be assessed at any time.
Almost all robot technologies used to obtain Distance are also useful to obtain Location, as these two measures are correlated.
Movement is defined as an entity’s change of position over time [27,38]. Movement includes directionality, allowing interaction between the user and the application. For example, when a user walks towards a screen, its content is adjusted according to the user’s movement speed; or when a human approaches a robot, the robot decides an action according to the context.
Movement can be detected by Kinect sensors, Leap Motion sensors, and ultrasonic and infrared sensors. Furthermore, Kinect and thermal cameras provide capabilities for detecting movements with computer vision or thermal imaging applications.
Orientation provides information about the direction in which entities are facing relative to each other. It is only meaningful if an entity has a “front face” and can be detected in the visual field of another entity. Orientation can be continuous (e.g., the pitch/roll/yaw angle of one object relative to another) or discrete (e.g., facing toward or away from the other object) [27].
Orientation is a key dimension in HRI and can be determined with Kinect depth cameras, thermal cameras and marker-based motion sensors, combined with computer vision, face recognition and machine learning techniques. According to the orientation of people, a robot can detect groups of people, activity and affordance spaces, and people talking to it.
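A minimal sketch of a discrete Orientation test (is entity A facing entity B?) can be written as follows, comparing A’s heading with the bearing from A to B; the field-of-view threshold is an illustrative assumption.

```python
import math

# Discrete orientation test: does entity A face entity B?
# The 60-degree field-of-view threshold is an illustrative assumption.
def is_facing(pos_a, heading_a, pos_b, fov_deg=60.0):
    bearing = math.atan2(pos_b[1] - pos_a[1], pos_b[0] - pos_a[0])
    diff = (bearing - heading_a + math.pi) % (2 * math.pi) - math.pi
    return abs(diff) <= math.radians(fov_deg) / 2

print(is_facing((0, 0), 0.0, (2, 0)))      # True: B lies straight ahead of A
print(is_facing((0, 0), math.pi, (2, 0)))  # False: A faces away from B
```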
3. Related Work
The autonomous navigation of mobile robots is a great challenge in academia, especially in dynamic environments with the presence of humans and objects that move or change position at any time. To solve this navigation problem, there are works that try to predict the movement of obstacles, imitate the movements of animals, predict the routes of people or objects and map the operating environment [41,42,43]. For the robot to predict the routes and movements of the humans or objects involved in the environment, it is necessary to incorporate sensors that help in this process, such as scanners [41], tracking sensors, motion capture cameras, etc. Using the sensor data, robots trace the routes they can follow while mapping the environment. This planning can be divided into two forms: global and local.
When performing global planning, the robot usually has a static map, which implies that it already knows the environment while moving. Local planning deals with dynamic situations and generally does not assume a known initial map. These two types of planning commonly work together in a navigation process. Some works are detailed below. In [44], a dynamic local navigation plan is proposed through the modulation of artificial emotions present in two robots. In [45], a navigation system for static and dynamic environments is described, imitating the behaviour of ants when they move from one point to another: a robot marks the environment with artificial pheromones along the way, to be used as a reference by itself and other robots. Among navigation algorithms, it is common to find those that build virtual maps for correct navigation, such as Simultaneous Localisation and Mapping (SLAM) [41] and the Map Server Agent (MSA) [46], which keep a graphic model of the environment, together with the positions of each object, while searching for the shortest path to the goal.
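As an illustration of this kind of shortest-path search (A* is also one of the classical algorithms integrated in our proposal), the following is a compact grid-based A* sketch. The 4-connected grid, unit step costs and Manhattan heuristic are assumptions made for the example and do not reproduce the planners of [41,46].

```python
import heapq

# Compact grid-based A* sketch: 0 = free cell, 1 = occupied cell.
def a_star(grid, start, goal):
    rows, cols = len(grid), len(grid[0])
    h = lambda p: abs(p[0] - goal[0]) + abs(p[1] - goal[1])  # Manhattan heuristic
    open_set = [(h(start), 0, start)]
    came_from, g = {}, {start: 0}
    while open_set:
        _, cost, current = heapq.heappop(open_set)
        if current == goal:
            path = [current]
            while current in came_from:
                current = came_from[current]
                path.append(current)
            return path[::-1]
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):  # 4-connected moves
            nxt = (current[0] + dr, current[1] + dc)
            if 0 <= nxt[0] < rows and 0 <= nxt[1] < cols and grid[nxt[0]][nxt[1]] == 0:
                new_g = cost + 1
                if new_g < g.get(nxt, float("inf")):
                    g[nxt] = new_g
                    came_from[nxt] = current
                    heapq.heappush(open_set, (new_g + h(nxt), new_g, nxt))
    return None  # no path found

grid = [[0, 0, 0],
        [1, 1, 0],
        [0, 0, 0]]
print(a_star(grid, (0, 0), (2, 0)))  # one shortest path around the obstacle row
```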
Once the mapping and the perception of moving objects are performed, the next step is to avoid collisions and respect the human space. In [47], a mathematical algorithm, Optimal Reciprocal Collision Avoidance (ORCA), is proposed which, with sensor assistance and information about obstacle velocities, produces fast and realistic simulations by generating a field around each obstacle to prevent paths with collisions. However, there is still a lack of discussion on improving HRI since, with ORCA, robots perform very well in environments without humans. Nowadays, robots are being developed to act in services with human contact [48], such as autism therapy [49] or museum guides [50]. Therefore, recent studies focus on improving such HRI by considering proxemic theory to respect humans’ proxemic zones [31,51].
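The core idea behind ORCA, i.e., reasoning in velocity space about future collisions, can be illustrated with the following simplified check of whether two circular agents collide within a time horizon if they keep their current velocities. This is only the feasibility test; the full ORCA algorithm additionally derives, for each pair, a half-plane of permitted velocities and selects the velocity closest to the preferred one, which is omitted here.

```python
import math

# Simplified velocity-space check inspired by ORCA [47]: do two circular
# agents collide within a time horizon if they keep their current velocities?
def collides_within(p_a, v_a, r_a, p_b, v_b, r_b, horizon=3.0):
    px, py = p_b[0] - p_a[0], p_b[1] - p_a[1]   # relative position of B w.r.t. A
    vx, vy = v_a[0] - v_b[0], v_a[1] - v_b[1]   # relative velocity of A w.r.t. B
    r = r_a + r_b                               # combined radius
    v2 = vx * vx + vy * vy
    # Time of closest approach, clamped to [0, horizon].
    t = 0.0 if v2 == 0 else max(0.0, min(horizon, (px * vx + py * vy) / v2))
    dx, dy = px - vx * t, py - vy * t
    return math.hypot(dx, dy) < r

# Two 0.4 m robots moving head-on at 0.5 m/s, 3 m apart: collision predicted.
print(collides_within((0, 0), (0.5, 0), 0.4, (3, 0), (-0.5, 0), 0.4))  # True
```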
The study presented in [19] conducts experiments to observe interactions between people and robots and extracts some observations: the greater the mutual visual contact, the greater the distance between the person and the robot; the more likeable the robot, the shorter this distance; men keep a greater distance than women, even more so when the robot makes visual contact; and the human–robot distance was shorter when the human was facing the robot’s back. Another experiment is described in [52], with three different algorithms to model interactions between 105 people and a robot. The best result was obtained with the Social Momentum algorithm, compared with ORCA and a tele-operation approach.
Social Momentum is a cost-based planner that detects and signals agents’ intentions about collision avoidance. Its cost function is the weighted sum of the magnitudes of the angular momenta of the planning agent (robot) and of the other entities in its environment (humans). This type of indirect communication of the avoidance strategy results in easy-to-interpret movements, allowing the robot to avoid entanglements in its path. In an area crowded by many agents, whether people or robots, each action the planning agent takes transmits signs of its intentions or preferences about evasion strategies (move right or left). Social Momentum makes the robot read the movement preferences of others and associate them with its own, allowing it to act pertinently and simplify everyone’s decision-making. In short, Social Momentum is a frequent-replanning algorithm that determines an action in each planning cycle [53]. If the robot identifies a future conflict early and decreases its velocity, its idle time will be shorter; if it does not smooth its path, it may remain stopped for a considerable time, causing human discomfort in the robot’s presence. Ideally, the robot does not invade the intimate human space and keeps a constant, smooth velocity, but not so slow as to create discomfort or delay its task. Social interaction spaces are widely modelled using Gaussian functions [54].
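Based on the description above, a simplified Social Momentum step could be sketched as follows: for each nearby human, compute the angular momentum of the robot–human pair about its midpoint, discard candidate robot velocities that flip an already-established passing side, and pick the candidate that maximises the weighted sum of momentum magnitudes. The candidate set, weights and the omission of collision constraints are simplifying assumptions; see [53] for the full formulation.

```python
# Simplified sketch of a Social Momentum planning cycle (see [53]).
def cross(ax, ay, bx, by):
    return ax * by - ay * bx

def pair_momentum(p_r, v_r, p_h, v_h):
    """Scalar 2-D angular momentum of the robot-human pair about its midpoint."""
    cx, cy = (p_r[0] + p_h[0]) / 2, (p_r[1] + p_h[1]) / 2
    return (cross(p_r[0] - cx, p_r[1] - cy, v_r[0], v_r[1])
            + cross(p_h[0] - cx, p_h[1] - cy, v_h[0], v_h[1]))

def social_momentum_action(p_r, v_r, humans, candidates, weights=None):
    """humans: list of (pos, vel); candidates: candidate robot velocities."""
    weights = weights or [1.0] * len(humans)
    current = [pair_momentum(p_r, v_r, p, v) for p, v in humans]
    best, best_score = v_r, float("-inf")
    for cand in candidates:
        new = [pair_momentum(p_r, cand, p, v) for p, v in humans]
        # Reject candidates that flip an already-established passing side.
        if any(c * n < 0 for c, n in zip(current, new)):
            continue
        score = sum(w * abs(n) for w, n in zip(weights, new))
        if score > best_score:
            best, best_score = cand, score
    return best

# One human approaching head-on; the robot prefers a velocity with a lateral
# component, which makes the joint passing strategy legible.
best = social_momentum_action((0, 0), (0.5, 0), [((3, 0), (-0.5, 0))],
                              [(0.5, 0.2), (0.5, -0.2), (0.5, 0.0)])
print(best)  # -> (0.5, 0.2)
```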
Just as global and local planning may be merged into a more complete planner, it is also possible to combine Social Momentum and ORCA. In [12], velocity is an important aspect of Social Momentum: in the reported experiences, Social Momentum generates a constant, cushioned speed along the human route. Some works have analysed the maximum robot speed that compromises neither human comfort nor task efficiency. In [22], the authors review models of human behaviour that could be implemented in social navigation, including models based on proxemics, and conclude that in several cases it is not enough to analyse the human trajectory. In [55], particular cases are presented in which the robot is forced to interact with humans through a chatbot API; this work analyses and selects phrases that the robot can use to maintain verbal communication during its navigation. In [56], the use of social robots in the workplace is addressed; surveys investigate the effects of the presence of robots in an area normally inhabited by humans. In [57], a planner is presented that is capable of predicting the trajectories followed by humans and, in turn, planning the future trajectory of the robot. Considering that the position of objects is very significant, [58] analyses, based on proxemic principles, how a robot should position itself when facing a certain person. In [59], equilibrium theory is used to analyse the impact generated by the coexistence of humans and robots. In [60], an analysis is made of the proxemics that drones must respect towards people; the researchers consider new distances for human–drone interaction. In [61], the authors propose a trajectory planning system that takes into account the time of day and whether some spaces may be used in specific periods of time. They propose a method that restricts or penalises the route planned by the robot according to time-dependent variables. The authors also present an example in which a patient undergoes a physiotherapy session, using a feature table for this purpose. In this case, considering the scheduling of the physiotherapy session, the robot must plan a route keeping a distance greater than usual; thus, the usability and proxemic limits are respected. This social information is added to a graph and is used later for trajectory planning.
Another approach to modelling the social space is proposed in [62]. The authors present a new definition of social space, named Dynamic Social Force. This definition is based on a fuzzy inference system, and the parameters of its functions are adjusted using reinforcement learning; in particular, reinforcement learning is used to determine the parameters of the Gaussian function. However, the generalisation of the proposed method to groups of people is not investigated.
Table 1 summarises a comparison of the reviewed studies, some of which have considered proxemics to offer social navigation. We compare them in terms of the DILMO dimensions, whether they are able to detect individuals or groups of people, and the considered proxemic zones. Most studies consider distance and orientation, as well as individual personal zones. Few works consider the other DILMO dimensions, such as location and movement, or proxemic zones for groups of people. Our proposed social navigation system is aimed at considering all DILMO dimensions and proxemic zones for both individuals and groups of people.
Table 1.
Comparative table of studies on social navigation based on proxemics.
The main differences of our proposed approach with respect to existing works are (i) the integration of both human–robot and robot–robot interactions in the same approach, (ii) the consideration of proxemic zones for robots in scenarios in which the robots are interacting with humans, (iii) the detection of individuals and groups of humans to decide the navigation accordingly, and (iv) the integration of traditional navigation approaches with proxemics, by considering all DILMO dimensions to behave accordingly during navigation.
6. Discussion
The first version and simulation of our proposal demonstrate the feasibility and suitability of a robot navigation system that can adapt to different situations involving humans and robots and provide robots with social behaviour. This experience also allows us to identify its current limitations and some lessons learned.
6.1. Social Rules to Robots
Robot navigation in human-populated environments is a subject of great interest in the international robotics community. To be accepted in these scenarios, it is important for robots to navigate respecting social rules. Avoiding getting too close to a person, not interrupting conversations, or asking for permission or collaboration when required by social conventions are some of the behaviours that robots must exhibit. This paper presents a social navigation system that integrates different software agents within a cognitive architecture for robots and describes, as the main contribution, the corpus that allows robot behaviours in the presence of humans to be established in real situations, improving the human-aware navigation system.
The corpus has been experimentally evaluated by simulating different situations in a museum, where robots need to plan interactions with people and other robots. The results are analysed qualitatively, according to the behaviour expected from the robot in each interaction. The results show how the corpus presented in this paper improves robot navigation, making it more socially accepted.
In the current version of the proposed social navigation system, proxemic zones are modelled with the same symmetric Gaussian function for people and robots, representing them as concentric circles. However, the parametrisation of proxemic spaces, represented by asymmetric Gaussians, can be defined by the social conditions of the environment, for example, according to culture, gender and customs, allowing flexibility and improvement of the autonomous navigation system according to social restrictions. Thus, our approach can be improved and extended with the ability of robots to recognise different characteristics of people and define proxemic zones accordingly.
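For example, an asymmetric Gaussian personal-space cost of the kind suggested above could be sketched as follows, with a larger spread in front of the person than behind; the specific sigma values are illustrative assumptions, not parameters from our current implementation.

```python
import math

# Sketch of an asymmetric Gaussian personal-space cost: larger spread in front
# of the person than behind, so frontal approaches are penalised more.
# Sigma values are illustrative assumptions.
def personal_space_cost(px, py, person_x, person_y, theta,
                        sigma_front=1.2, sigma_back=0.6, sigma_side=0.8):
    dx, dy = px - person_x, py - person_y
    # Rotate the offset into the person's frame (x' points forward).
    fx = dx * math.cos(theta) + dy * math.sin(theta)
    fy = -dx * math.sin(theta) + dy * math.cos(theta)
    sx = sigma_front if fx >= 0 else sigma_back
    return math.exp(-(fx ** 2 / (2 * sx ** 2) + fy ** 2 / (2 * sigma_side ** 2)))

# The same 1 m offset costs more in front of the person than behind them.
print(personal_space_cost(1.0, 0.0, 0.0, 0.0, 0.0))   # front, ~0.71
print(personal_space_cost(-1.0, 0.0, 0.0, 0.0, 0.0))  # back,  ~0.25
```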
6.2. Detection of Groups of People
In the context of social navigation in environments populated by humans and robots, it is also relevant to consider proxemic zones for groups of people. The spatial patterns adopted by people in conversations act as social cues that inform robots about their activity. The current proposal considers Distance, to define proxemic zones; Orientation of people, to decide which groups are conversing and where the “O” space is located; and Identity, to distinguish robots from people. The proposal can be improved by considering all the other DILMO dimensions: Location and Movement can also be perceived by the sensorial capabilities of social robots to detect groups being formed, to define the “P” space, etc., thus benefiting from that knowledge to identify social interactions in indoor environments.
6.3. Activity and Affordance Spaces
In general, an affordance space can be crossed without causing any disturbance, unlike an activity space, but blocking an affordance space might not be socially accepted. With the current proposal, activity spaces can be detected based on the proxemic zones and the orientation of people. For the recognition of affordance spaces, the perceived geometric characteristics of the environment must be linked with the semantic information of the objects in the environment to achieve semantic navigation of the robot. However, the task is complicated because the perception of the environment made by the sensors is objective, while the human abstraction of space is very subjective. It is then a matter of inferring, according to the context and previous knowledge about the environment, which spots of the empty space are restricted for the robot’s navigation. In any case, it is necessary to take into account the semantics of the space when planning socially acceptable navigation solutions.
6.4. Inclusion of Other Features in Robots Navigation
In the context of social navigation in environments populated by humans and robots, we cannot take into account only the proxemic zones, as there are other features of human–robot interaction that need to be analysed. In this context, considering restrictions related to common spaces, limitations, customs, habits, age, culture, a person’s emotion and the emotions of a group of people would make the navigation system safer and more efficient. In this sense, the proposal can be improved by considering all DILMO dimensions, combined with perception and recognition capabilities of robots based on traditional Machine Learning techniques and other planning strategies.
7. Conclusions
The need for interaction between machines and humans is becoming more common in people’s daily lives, and efforts to improve these relationships through the interpretation of social behaviours are increasingly frequent among researchers in social robotics. The theory of proxemics is a concept used mainly to describe the human use of space. Thus, carefully designed proxemic behaviours in robots might foster closer human–robot relationships and enable widespread acceptance of robots, contributing to their seamless integration into society. However, when the robot shares the environment only with other robots, it is not necessary to consider social restrictions. Thus, an effective social navigation system should adapt to both situations.
In this context, we propose an adaptable and efficient social navigation approach based on the ORCA algorithm, to prevent collisions in real time in situations with only robots, and on the Social Momentum algorithm combined with A*, to detect and respect the proxemic zones of humans and robots. Proxemic zones for a robot are defined only if it is located inside a human’s personal zone. We use symmetric Gaussians to represent distances and proxemic zones, which can be parametrised depending on personal or cultural characteristics.
In this paper, we show the implementation of our proposed robotic navigation system and illustrate its functionality, by simulating several situations of multi-robot navigation (robot–robot interaction) and social navigation with human presence (human–robot interaction).
For future work, we plan to improve our proposal as explained in Section 6, for example, by using asymmetric Gaussian functions, which will allow modelling proxemic distances according to different social characteristics, and by considering all DILMO dimensions. We also look forward to implementing it in a real robot system.
Author Contributions
Conceptualisation: D.B.-A., Y.C. and J.D.-A.; data curation: M.D.; methodology: D.B.-A., J.D.-A., Y.C. and M.D.; software: M.D., D.B.-A., J.D.-A. and J.V.; validation: M.D.; investigation: M.D.; writing—original draft preparation: Y.C., J.D.-A., D.B.-A., M.D. and J.V.; writing—review and editing: J.D.-A., Y.C., D.B.-A., M.D. and J.V. All authors have read and agreed to the published version of the manuscript.
Funding
This research was funded by FONDO NACIONAL DE DESARROLLO CIENTÍFICO, TECNOLÓGICO Y DE INNOVACIÓN TECNOLÓGICA - FONDECYT as executing entity of CONCYTEC under grant agreement no. 01-2019-FONDECYT-BM-INC.INV in the project RUTAS: Robots for Urban Tourism Centers, Autonomous and Semantic-based.
Conflicts of Interest
The authors declare no conflict of interest.
References
- Vercelli, A.; Rainero, I.; Ciferri, L.; Boido, M.; Pirri, F. Robots in elderly care. Digit.-Sci. J. Digit. Cult. 2018, 2, 37–50. [Google Scholar]
- Martinez-Martin, E.; del Pobil, A.P. Personal robot assistants for elderly care: An overview. In Personal Assistants: Emerging Computational Technologies; Springer: Berlin, Germany, 2018; pp. 77–91. [Google Scholar]
- Lee, S.; Noh, H.; Lee, J.; Lee, K.; Lee, G.G.; Sagong, S.; Kim, M. On the effectiveness of robot-assisted language learning. ReCALL 2011, 23, 25–58. [Google Scholar] [CrossRef]
- Toh, L.P.E.; Causo, A.; Tzuo, P.W.; Chen, I.M.; Yeo, S.H. A review on the use of robots in education and young children. J. Educ. Technol. Soc. 2016, 19, 148–163. [Google Scholar]
- Shiomi, M.; Kanda, T.; Ishiguro, H.; Hagita, N. Interactive humanoid robots for a science museum. In Proceedings of the 1st ACM SIGCHI/SIGART Conference on Human-Robot Interaction, Salt Lake City, UT, USA, 2–3 March 2006; pp. 305–312. [Google Scholar]
- Al-Wazzan, A.; Al-Farhan, R.; Al-Ali, F.; El-Abd, M. Tour-guide robot. In Proceedings of the IEEE 2016 International Conference on Industrial Informatics and Computer Systems (CIICS), Sharjah-Dubai, UAE, 13–15 March 2016; pp. 1–5. [Google Scholar]
- Sasaki, Y.; Nitta, J. Long-term demonstration experiment of autonomous mobile robot in a science museum. In Proceedings of the 2017 IEEE International Symposium on Robotics and Intelligent Sensors (IRIS), Ottawa, ON, Canada, 5–7 October 2017; pp. 304–310. [Google Scholar]
- Pieska, S.; Luimula, M.; Jauhiainen, J.; Spiz, V. Social service robots in wellness and restaurant applications. J. Commun. Comput. 2013, 10, 116–123. [Google Scholar]
- Khan, A.; Anwar, Y. Robots in healthcare: A survey. In Science and Information Conference; Springer: Berlin, Germany, 2019; pp. 280–292. [Google Scholar]
- Kruse, T.; Pandey, A.K.; Alami, R.; Kirsch, A. Human-aware robot navigation: A survey. Robot. Auton. Syst. 2013, 61, 1726–1743. [Google Scholar] [CrossRef]
- Mitka, E.; Gasteratos, A.; Kyriakoulis, N.; Mouroutsos, S.G. Safety certification requirements for domestic robots. Saf. Sci. 2012, 50, 1888–1897. [Google Scholar]
- Zheng, K.; Glas, D.F.; Kanda, T.; Ishiguro, H.; Hagita, N. Supervisory control of multiple social robots for navigation. In Proceedings of the IEEE 2013 8th ACM/IEEE International Conference on Human-Robot Interaction (HRI), Tokyo, Japan, 3–6 March 2013; pp. 17–24. [Google Scholar]
- Ravankar, A.; Ravankar, A.A.; Kobayashi, Y.; Hoshino, Y.; Peng, C.C. Path smoothing techniques in robot navigation: State-of-the-art, current and future challenges. Sensors 2018, 18, 3170. [Google Scholar] [CrossRef]
- Breazeal, C.; Dautenhahn, K.; Kanda, T. Social robotics. In Springer Handbook of Robotics; Springer: Berlin, Germany, 2016; pp. 1935–1972. [Google Scholar]
- Nurmaini, S.; Tutuko, B. Intelligent Robotics Navigation System: Problems, Methods, and Algorithm. Int. J. Electr. Comput. Eng. (2088-8708) 2017, 7, 3711–3726. [Google Scholar] [CrossRef]
- Čaić, M.; Mahr, D.; Oderkerken-Schröder, G. Value of social robots in services: Social cognition perspective. J. Serv. Mark. 2019, 33, 463–478. [Google Scholar]
- Mead, R.; Matarić, M.J. Perceptual models of human-robot proxemics. In Experimental Robotics; Springer: Berlin, Germany, 2016; pp. 261–276. [Google Scholar]
- Redondo, M.E.L. Comfortability Detection for Adaptive Human-Robot Interactions. In Proceedings of the 2019 8th International Conference on Affective Computing and Intelligent Interaction Workshops and Demos (ACIIW), Cambridge, UK, 3–6 September 2019; pp. 35–39. [Google Scholar]
- Mumm, J.; Mutlu, B. Human-robot proxemics: Physical and psychological distancing in human-robot interaction. In Proceedings of the 6th International Conference on Human-Robot Interaction, Lausanne, Switzerland, 8–11 March 2011; pp. 331–338. [CrossRef]
- Henkel, Z.; Bethel, C.L.; Murphy, R.R.; Srinivasan, V. Evaluation of proxemic scaling functions for social robotics. IEEE Trans. Hum. Mach. Syst. 2014, 44, 374–385. [Google Scholar] [CrossRef]
- Lasota, P.A.; Fong, T.; Shah, J.A. A Survey of Methods for Safe Human-Robot Interaction; Now Publishers: Delft, The Netherlands, 2017. [Google Scholar]
- Rios-Martinez, J.; Spalanzani, A.; Laugier, C. From proxemics theory to socially-aware navigation: A survey. Int. J. Soc. Robot. 2015, 7, 137–153. [Google Scholar] [CrossRef]
- Saunderson, S.; Nejat, G. How robots influence humans: A survey of nonverbal communication in social human–robot interaction. Int. J. Soc. Robot. 2019, 11, 575–608. [Google Scholar] [CrossRef]
- Hall, E.T. The Hidden Dimension: An Anthropologist Examines Man’s Use of Space in Private and Public; Anchor Books; Doubleday & Company Inc.: New York, NY, USA, 1966. [Google Scholar]
- Evans, G.W.; Lepore, S.J.; Allen, K.M. Cross-cultural differences in tolerance for crowding: Fact or fiction? J. Personal. Soc. Psychol. 2000, 79, 204. [Google Scholar] [CrossRef]
- Ballendat, T.; Marquardt, N.; Saul, G. Proxemic interaction: Designing for a proximity and orientation-aware environment. In Proceedings of the ACM International Conference on Interactive Tabletops and Surfaces, Saarbrücken, Germany, 7–10 November 2010; pp. 121–130. [Google Scholar]
- Greenberg, S.; Marquardt, N.; Ballendat, T.; Diaz-Marino, R.; Wang, M. Proxemic interactions: The new ubicomp? Interactions 2011, 18, 42–50. [Google Scholar] [CrossRef]
- Wolf, K.; Abdelrahman, Y.; Kubitza, T.; Schmidt, A. Proxemic zones of exhibits and their manipulation using floor projection. In Proceedings of the ACM International Symposium on Pervasive Displays, Oulu, Finland, 20–22 June 2016; pp. 33–37. [Google Scholar]
- Avrunin, E.; Simmons, R. Using human approach paths to improve social navigation. In Proceedings of the 2013 8th ACM/IEEE International Conference on Human-Robot Interaction (HRI), Tokyo, Japan, 3–6 March 2013; pp. 73–74. [Google Scholar]
- Feil-Seifer, D.; Matarić, M. Using proxemics to evaluate human-robot interaction. In Proceedings of the 2010 5th ACM/IEEE International Conference on Human-Robot Interaction (HRI), Osaka, Japan, 2–5 March 2010; pp. 143–144. [Google Scholar]
- Tokmurzina, D.; Sagitzhan, N.; Nurgaliyev, A.; Sandygulova, A. Exploring Child-Robot Proxemics. In Proceedings of the Companion of the 2018 ACM/IEEE International Conference on Human-Robot Interaction, Chicago, IL, USA, 5–8 March 2018; pp. 257–258. [Google Scholar]
- Hayduk, L.A. The shape of personal space: An experimental investigation. Can. J. Behav. Sci. Can. Des. Sci. Du Comport. 1981, 13, 87. [Google Scholar] [CrossRef]
- Helbing, D.; Molnar, P. Social force model for pedestrian dynamics. Phys. Rev. E 1995, 51, 4282. [Google Scholar] [CrossRef]
- Gérin-Lajoie, M.; Richards, C.L.; Fung, J.; McFadyen, B.J. Characteristics of personal space during obstacle circumvention in physical and virtual environments. Gait Posture 2008, 27, 239–247. [Google Scholar] [CrossRef] [PubMed]
- Krueger, J. Extended cognition and the space of social interaction. Conscious. Cogn. 2011, 20, 643–657. [Google Scholar] [CrossRef]
- Kendon, A. Spacing and orientation in co-present interaction. In Development of Multimodal Interfaces: Active Listening and Synchrony; Springer: Berlin, Germany, 2010; pp. 1–15. [Google Scholar]
- Nilsson, T.; Fischer, J.E.; Crabtree, A.; Goulden, M.; Spence, J.; Costanza, E. Visions, Values, and Videos: Revisiting Envisionings in Service of UbiComp Design for the Home. arXiv 2020, arXiv:2005.08952. [Google Scholar]
- Ledo, D.; Greenberg, S.; Marquardt, N.; Boring, S. Proxemic-aware controls: Designing remote controls for ubiquitous computing ecologies. In Proceedings of the International Conference on Human-Computer Interaction with Mobile Devices and Services, Copenhagen, Denmark, 24–27 August 2015; pp. 187–198. [Google Scholar]
- Marquardt, N.; Hinckley, K.; Greenberg, S. Cross-device interaction via micro-mobility and f-formations. In Proceedings of the Symposium on User Interface Software and Technology, Cambridge, MA, USA, 7–10 October 2012; pp. 13–22. [Google Scholar]
- Marquardt, N.; Diaz-Marino, R.; Boring, S.; Greenberg, S. The proximity toolkit: Prototyping proxemic interactions in ubiquitous computing ecologies. In Proceedings of the Symposium on User Interface Software and Technology, Santa Barbara, CA, USA, 16–19 October 2011; pp. 315–326. [Google Scholar]
- Abbenseth, J.; Lopez, F.G.; Henkel, C.; Dörr, S. Cloud-Based Cooperative Navigation for Mobile Service Robots in Dynamic Industrial Environments. In SAC’17, Proceedings of the Symposium on Applied Computing, Marrakech, Morocco, 3–7 April 2017; Association for Computing Machinery: New York, NY, USA, 2017; pp. 283–288. [Google Scholar] [CrossRef]
- Foka, A.F.; Trahanias, P.E. Predictive autonomous robot navigation. In Proceedings of the IEEE/RSJ International Conference on Intelligent Robots and Systems, Lausanne, Switzerland, 30 September–4 October 2002; Volume 1, pp. 490–495. [Google Scholar]
- Lambrinos, D.; Möller, R.; Labhart, T.; Pfeifer, R.; Wehner, R. A mobile robot employing insect strategies for navigation. Robot. Auton. Syst. 2000, 30, 39–64. [Google Scholar] [CrossRef]
- Guzzi, J.; Giusti, A.; Gambardella, L.M.; Di Caro, G.A. A model of artificial emotions for behavior-modulation and implicit coordination in multi-robot systems. In Proceedings of the Genetic and Evolutionary Computation Conference, Kyoto, Japan, 15–19 July 2018; pp. 21–28. [Google Scholar]
- Cazangi, R.R.; Von Zuben, F.J.; Figueiredo, M.F. Autonomous navigation system applied to collective robotics with ant-inspired communication. In Proceedings of the 7th Annual Conference on Genetic and Evolutionary Computation, Washington, DC, USA, 25–29 June 2005; pp. 121–128. [Google Scholar]
- Turek, W. Scalable navigation system for mobile robots based on the agent dual-space control paradigm. In Proceedings of the International Conference and Workshop on Emerging Trends in Technology, Mumbai, India, 26–27 February 2010; pp. 606–612. [Google Scholar]
- Van Den Berg, J.; Guy, S.J.; Lin, M.; Manocha, D. Reciprocal n-body collision avoidance. In Robotics research; Springer: Berlin, Germany, 2011; pp. 3–19. [Google Scholar]
- Wilkes, D.M.; Alford, A.; Pack, R.T.; Rogers, T.; Peters, R.; Kawamura, K. Toward socially intelligent service robots. Appl. Artif. Intell. 1998, 12, 729–766. [Google Scholar] [CrossRef]
- Scassellati, B.; Admoni, H.; Matarić, M. Robots for use in autism research. Annu. Rev. Biomed. Eng. 2012, 14, 275–294. [Google Scholar] [CrossRef] [PubMed]
- Burgard, W.; Cremers, A.B.; Fox, D.; Hähnel, D.; Lakemeyer, G.; Schulz, D.; Steiner, W.; Thrun, S. The interactive museum tour-guide robot. In Proceedings of the AAAI/IAAI, Madison, WI, USA, 27–29 July 1998; pp. 11–18. [Google Scholar]
- Pantic, M.; Evers, V.; Deisenroth, M.; Merino, L.; Schuller, B. Social and affective robotics tutorial. In Proceedings of the 24th ACM international conference on Multimedia, Amsterdam, The Netherlands, 15–19 October 2016; pp. 1477–1478. [Google Scholar]
- Mavrogiannis, C.; Hutchinson, A.M.; Macdonald, J.; Alves-Oliveira, P.; Knepper, R.A. Effects of distinct robot navigation strategies on human behavior in a crowded environment. In Proceedings of the 2019 14th ACM/IEEE International Conference on Human-Robot Interaction (HRI), Daegu, Korea, 11–14 March 2019; pp. 421–430. [Google Scholar]
- Mavrogiannis, C.I.; Thomason, W.B.; Knepper, R.A. Social momentum: A framework for legible navigation in dynamic multi-agent environments. In Proceedings of the 2018 ACM/IEEE International Conference on Human-Robot Interaction, Chicago, IL, USA, 5–8 March 2018; pp. 361–369. [Google Scholar]
- Vega, A.; Cintas, R.; Manso, L.J.; Bustos, P.; Núñez, P. Socially-Accepted Path Planning for Robot Navigation Based on Social Interaction Spaces. In Iberian Robotics Conference; Springer: Berlin, Germany, 2019; pp. 644–655. [Google Scholar]
- Lobato, C.; Vega-Magro, A.; Núñez, P.; Manso, L. Human-robot dialogue and Collaboration for social navigation in crowded environments. In Proceedings of the 2019 IEEE International Conference on Autonomous Robot Systems and Competitions (ICARSC), Porto, Portugal, 24–26 April 2019; pp. 1–6. [Google Scholar]
- Riether, N.; Hegel, F.; Wrede, B.; Horstmann, G. Social facilitation with social robots? In Proceedings of the 2012 7th ACM/IEEE International Conference on Human-Robot Interaction (HRI), Boston, MA, USA, 5–8 March 2012; pp. 41–47. [Google Scholar]
- Khambhaita, H.; Alami, R. A human-robot cooperative navigation planner. In Proceedings of the Companion of the 2017 ACM/IEEE International Conference on Human-Robot Interaction, Vienna, Austria, 6–9 March 2017; pp. 161–162. [Google Scholar]
- Mead, R.; Matarić, M.J. Autonomous human-robot proxemics: A robot-centered approach. In Proceedings of the 2016 11th ACM/IEEE International Conference on Human-Robot Interaction (HRI), Christchurch, New Zealand, 7–10 March 2016; p. 573. [Google Scholar]
- Sakamoto, D.; Ono, T. Sociality of robots: Do robots construct or collapse human relations? In Proceedings of the 1st ACM SIGCHI/SIGART conference on Human-robot interaction, Salt Lake City, UT, USA, 2–3 March 2006; pp. 355–356. [Google Scholar]
- Han, J.; Bae, I. Social Proxemics of Human-Drone Interaction: Flying Altitude and Size. In Proceedings of the Companion of the 2018 ACM/IEEE International Conference on Human-Robot Interaction, Chicago, IL, USA, 5–8 March 2018; p. 376. [Google Scholar]
- Vega-Magro, A.; Calderita, L.V.; Bustos, P.; Núñez, P. Human-aware Robot Navigation based on Time-dependent Social Interaction Spaces: A use case for assistive robotics. In Proceedings of the 2020 IEEE International Conference on Autonomous Robot Systems and Competitions (ICARSC), Azores, Portugal, 15–17 April 2020; pp. 140–145. [Google Scholar]
- Patompak, P.; Jeong, S.; Nilkhamhang, I.; Chong, N.Y. Learning Proxemics for Personalized Human–Robot Social Interaction. Int. J. Soc. Robot. 2019, 12, 267–280. [Google Scholar] [CrossRef]
- Durand, N. Constant speed optimal reciprocal collision avoidance. Transp. Res. Part C Emerg. Technol. 2018, 96, 366–379. [Google Scholar] [CrossRef]
- Nascimento, L.B.; Morais, D.S.; Barrios-Aranibar, D.; Santos, V.G.; Pereira, D.S.; Alsina, P.J.; Medeiros, A.A. A Multi-Robot Path Planning Approach Based on Probabilistic Foam. In Proceedings of the 2019 Latin American Robotics Symposium (LARS), 2019 Brazilian Symposium on Robotics (SBR) and 2019 Workshop on Robotics in Education (WRE), Rio Grande do Sul, Brazil, 22–26 October 2019; pp. 329–334. [Google Scholar] [CrossRef]
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.
© 2021 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (http://creativecommons.org/licenses/by/4.0/).